Tag: ai-assisted-programming
-
Simon Willison’s Weblog: Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac
Source URL: https://simonwillison.net/2024/Nov/12/qwen25-coder/
Feedly Summary: There’s a whole lot of buzz around the new Qwen2.5-Coder Series of open source (Apache 2.0 licensed) LLM releases from Alibaba’s Qwen research team. On first impression it looks like the buzz…
-
Simon Willison’s Weblog: Bringing developer choice to Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview
Source URL: https://simonwillison.net/2024/Oct/30/copilot-models/#atom-everything
Feedly Summary: Bringing developer choice to Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview The big announcement from GitHub Universe: Copilot is growing support for…
-
Simon Willison’s Weblog: Run a prompt to generate and execute jq programs using llm-jq
Source URL: https://simonwillison.net/2024/Oct/27/llm-jq/#atom-everything
Feedly Summary: llm-jq is a brand new plugin for LLM which lets you pipe JSON directly into the llm jq command along with a human-language description of how you’d like to manipulate that JSON and have…
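To make that workflow concrete, here is a minimal Python sketch of piping JSON into the llm jq command. It assumes the llm CLI, the llm-jq plugin, and jq itself are installed and a default model is configured; the sample data and the English description are invented for illustration.

```python
import json
import subprocess

# Sample JSON to transform; in the blog post the input comes from a curl
# call to a real API, but any JSON piped in on stdin works the same way.
issues = [
    {"user": {"login": "alice"}, "state": "open"},
    {"user": {"login": "bob"}, "state": "closed"},
    {"user": {"login": "alice"}, "state": "open"},
]

# Pipe the JSON into `llm jq` together with a plain-English description of
# the transformation; the plugin asks the model to write a jq program and
# then executes it against the piped-in JSON.
result = subprocess.run(
    ["llm", "jq", "count open issues by user login"],
    input=json.dumps(issues),
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```

The appeal is exactly this shape: describe the transformation in English, let the model generate the jq program, and run it against whatever JSON is already in the pipe.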
-
Simon Willison’s Weblog: Running prompts against images and PDFs with Google Gemini
Source URL: https://simonwillison.net/2024/Oct/23/prompt-gemini/#atom-everything
Feedly Summary: Running prompts against images and PDFs with Google Gemini New TIL. I’ve been experimenting with the Google Gemini APIs for running prompts against images and PDFs (in preparation for finally adding multi-modal support to LLM) –…
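As a hedged illustration of the same idea (not necessarily the exact route the TIL takes), here is a minimal sketch using Google's google-generativeai Python package; the file name, model choice, and prompt are placeholder assumptions.

```python
import os
import google.generativeai as genai

# Authenticate with an API key (e.g. created in Google AI Studio).
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Upload a local PDF (or image) via the Files API, then prompt against it.
# "example.pdf" is a placeholder; any supported PDF or image file works.
uploaded = genai.upload_file("example.pdf")

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    [uploaded, "Summarize this document in three bullet points."]
)
print(response.text)
```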
-
Simon Willison’s Weblog: Everything I built with Claude Artifacts this week
Source URL: https://simonwillison.net/2024/Oct/21/claude-artifacts/#atom-everything
Feedly Summary: I’m a huge fan of Claude’s Artifacts feature, which lets you prompt Claude to create an interactive Single Page App (using HTML, CSS and JavaScript) and then view the result directly in the Claude interface, iterating on…
-
Simon Willison’s Weblog: The 3 AI Use Cases: Gods, Interns, and Cogs
Source URL: https://simonwillison.net/2024/Oct/20/gods-interns-and-cogs/#atom-everything
Feedly Summary: The 3 AI Use Cases: Gods, Interns, and Cogs Drew Breunig introduces an interesting new framework for categorizing use cases of modern AI: Gods refers to the autonomous, AGI stuff that’s still effectively science fiction. Interns…
-
Simon Willison’s Weblog: An LLM TDD loop
Source URL: https://simonwillison.net/2024/Oct/13/an-llm-tdd-loop/#atom-everything
Feedly Summary: An LLM TDD loop Super neat demo by David Winterbottom, who wrapped my LLM and files-to-prompt tools in a short Bash script that can be fed a file full of Python unit tests and an empty implementation file and will then…
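The original demo is a short Bash script by David Winterbottom; the sketch below re-creates the same loop idea in Python, assuming the llm, files-to-prompt, and pytest commands are on the PATH. The file names, system prompt, and retry limit are invented for illustration.

```python
import subprocess
from pathlib import Path

TESTS = Path("test_example.py")  # existing file full of unit tests (placeholder name)
IMPL = Path("example.py")        # implementation file, starts out empty
MAX_ATTEMPTS = 5

SYSTEM = (
    "Write a Python module that makes these tests pass. "
    "Reply with only the code, no fences and no commentary."
)

IMPL.touch()  # make sure the (initially empty) implementation file exists

for attempt in range(1, MAX_ATTEMPTS + 1):
    # Bundle the tests and the current implementation into one prompt context.
    context = subprocess.run(
        ["files-to-prompt", str(TESTS), str(IMPL)],
        capture_output=True, text=True, check=True,
    ).stdout

    # Ask the model for a new implementation and write it to disk.
    code = subprocess.run(
        ["llm", "--system", SYSTEM],
        input=context, capture_output=True, text=True, check=True,
    ).stdout
    IMPL.write_text(code)

    # Run the tests; stop as soon as they pass, otherwise loop and retry.
    if subprocess.run(["pytest", str(TESTS), "-q"]).returncode == 0:
        print(f"Tests passed after {attempt} attempt(s)")
        break
else:
    print("Giving up: tests still failing after", MAX_ATTEMPTS, "attempts")
```

Each iteration re-sends the tests plus the latest implementation, and the loop stops as soon as pytest reports success.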
-
Simon Willison’s Weblog: lm.rs: run inference on Language Models locally on the CPU with Rust
Source URL: https://simonwillison.net/2024/Oct/11/lmrs/
Feedly Summary: lm.rs: run inference on Language Models locally on the CPU with Rust Impressive new LLM inference implementation in Rust by Samuel Vitorino. I tried it just now on an M2 Mac with 64GB…
-
Simon Willison’s Weblog: Quoting Jason Gorman
Source URL: https://simonwillison.net/2024/Sep/29/jason-gorman/#atom-everything
Feedly Summary: In the future, we won’t need programmers; just people who can describe to a computer precisely what they want it to do. — Jason Gorman
Tags: ai-assisted-programming, llms, ai, generative-ai
AI Summary and Description: Yes
Summary: The text anticipates a future where programming…