Simon Willison’s Weblog: Notes on using LLMs for code

Source URL: https://simonwillison.net/2024/Sep/20/using-llms-for-code/
Source: Simon Willison’s Weblog
Title: Notes on using LLMs for code

Feedly Summary: I was recently the guest on TWIML – the This Week in Machine Learning & AI podcast. Our episode is titled Supercharging Developer Productivity with ChatGPT and Claude with Simon Willison, and the focus of the conversation was the ways in which I use LLM tools in my day-to-day work as a software developer and product engineer.
Here’s the YouTube video version of the episode:

I ran the transcript through MacWhisper and extracted some edited highlights below.
Two different modes of LLM use
At 19:53:

There are two different modes that I use LLMs for with programming.
The first is exploratory mode, which is mainly quick prototyping – sometimes in programming languages I don’t even know.
I love asking these things to give me options. I will often start a prompting session by saying, "I want to draw a visualization of an audio wave. What are my options for this?"
And have it just spit out five different things. Then I’ll say "Do me a quick prototype of option three that illustrates how that would work."
The other side is when I’m writing production code, code that I intend to ship. Then it’s much more like I’m treating it basically as an intern who’s faster at typing than I am.
That’s when I’ll say things like, "Write me a function that takes this and this and returns exactly that."
I’ll often iterate on these a lot. I’ll say, "I don’t like the variable names you used there. Change those." Or "Refactor that to remove the duplication."
I call it my weird intern, because it really does feel like you’ve got this intern who is screamingly fast, and they’ve read all of the documentation for everything, and they’re massively overconfident, and they make mistakes and they don’t realize them.
But crucially, they never get tired, and they never get upset. So you can basically just keep on pushing them and say, "No, do it again. Do it differently. Change that. Change that."
At three in the morning, I can be like, "Hey, write me 100 lines of code that does X, Y, and Z," and it’ll do it. It won’t complain about it.
It’s weird having this small army of super talented interns that never complain about anything, but that’s kind of how this stuff ends up working.
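
To ground the exploratory-mode example above, here is a minimal sketch of the kind of throwaway prototype that audio-waveform prompt might produce. The library choice (numpy plus matplotlib) and the synthetic signal are assumptions for illustration, not output from the conversation.

```python
# Hypothetical "quick prototype" of an audio waveform visualization.
# The 440 Hz synthetic signal and numpy/matplotlib choices are
# illustrative assumptions, not something from the transcript.
import numpy as np
import matplotlib.pyplot as plt

sample_rate = 44_100  # samples per second
t = np.linspace(0, 1.0, sample_rate, endpoint=False)
samples = np.sin(2 * np.pi * 440 * t) + 0.05 * np.random.randn(t.size)

plt.figure(figsize=(10, 3))
plt.plot(t, samples, linewidth=0.5)
plt.xlabel("Time (seconds)")
plt.ylabel("Amplitude")
plt.title("Audio waveform prototype")
plt.tight_layout()
plt.show()
```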

Prototyping
At 25:22:

My entire career has always been about prototyping.
Django itself, the web framework, we built that at a local newspaper so that we could ship features that supported news stories faster. How can we make it so we can turn around a production-grade web application in a few days?
Ever since then, I’ve always been interested in finding new technologies that let me build things quicker, and my development process has always been to start with a prototype.
You have an idea, you build a prototype that illustrates the idea, you can then have a better conversation about it. If you go to a meeting with five people, and you’ve got a working prototype, the conversation will be so much more informed than if you go in with an idea and a whiteboard sketch.
I’ve always been a prototyper, but I feel like the speed at which I can prototype things in the past 12 months has gone up by an order of magnitude.
I was already a very productive prototype producer. Now, I can tap a thing into my phone, and 30 seconds later, I’ve got a user interface in Claude Artifacts that illustrates the idea that I’m trying to explore.
Honestly, if I didn’t use these models for anything else, if I just used them for prototyping, they would still have an enormous impact on the work that I do.

The full conversation covers a bunch of other topics. I ran the transcript through Claude, told it "Give me a bullet point list of the most interesting topics covered in this transcript" and then deleted the ones that I didn’t think were particularly interesting – here’s what was left:

Using AI-powered voice interfaces like ChatGPT’s Voice Mode to code while walking a dog
Leveraging AI tools like Claude and ChatGPT for rapid prototyping and development
Using AI to analyze and extract data from images, including complex documents like campaign finance reports
The challenges of using AI for tasks that may trigger safety filters, particularly for journalism
The evolution of local AI models like Llama and their improving capabilities
The potential of AI for data extraction from complex sources like scanned tables in PDFs
Strategies for staying up-to-date with rapidly evolving AI technologies
The development of vision-language models and their applications
The balance between hosted AI services and running models locally
The importance of examples in prompting for better AI performance
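
The "run the transcript through Claude" step above boils down to a single API call. Here is a minimal sketch using the Anthropic Python SDK; the model name, file path, and exact prompt wording are illustrative assumptions rather than the precise workflow described.

```python
# Minimal sketch: summarize a transcript into bullet points with Claude.
# Model name, file path, and prompt wording are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

with open("twiml-transcript.txt") as f:
    transcript = f.read()

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Give me a bullet point list of the most interesting topics "
            "covered in this transcript:\n\n" + transcript
        ),
    }],
)

# The reply comes back as a list of content blocks; print the text block.
print(response.content[0].text)
```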

Tags: podcasts, ai, generative-ai, llms, ai-assisted-programming

AI Summary and Description: Yes

Summary: The text discusses the integration of LLM tools like ChatGPT and Claude into the software development process, focusing on their use for rapid prototyping and coding efficiency. It emphasizes the transformative impact of these tools on developer productivity, contrasting exploratory coding with production coding. The insights are particularly relevant for AI professionals and developers looking to leverage AI capabilities for enhanced development workflows.

Detailed Description:
The text provides a firsthand account of how LLMs are utilized in software development, particularly highlighting two distinct modes of usage: exploratory and production coding.

– **Exploratory Mode**:
  – Engaged for quick prototyping, even in unfamiliar programming languages.
  – The developer starts a session by prompting for options, facilitating rapid idea generation.
  – Examples include asking for different approaches to visualizing data and quickly prototyping one of the suggested methods.

– **Production Coding Mode**:
  – The LLM functions as a virtual intern to streamline the coding process.
  – The developer iterates on and refines the code produced by the LLM, correcting variable names and optimizing code.
  – The key benefit is that the LLM remains tireless, accommodating endless iterations without complaint, which enhances productivity, especially during unconventional hours.

– **Prototyping Emphasis**:
  – The discussion underlines the long-standing importance of prototyping in development.
  – The rapid prototyping capabilities provided by LLMs have significantly increased the speed of turning ideas into functional models, substantially improving collaborative discussions.

– **Other Highlights**:
  – Coding while multitasking, such as using voice commands during dog walks.
  – The use of AI for analyzing complex documents and data extraction.
  – Evolution in local AI models like Llama and their capabilities over time.
  – Exploration of the trade-offs between using hosted AI services and local model deployment.
  – The critical role of including examples in prompts to improve AI performance.

These insights reflect substantial advancements in the application of AI to software engineering, offering security and compliance professionals a lens into the evolving relationship between AI tools and development methodologies. AI's ability to break complex tasks down into manageable workflows supports the overarching aims of innovation, agility, and productivity in tech, underscoring the importance of integrating AI responsibly and under the right governance frameworks.