Source URL: https://simonwillison.net/2024/Sep/25/o1-preview-llm/
Source: Simon Willison’s Weblog
Title: Solving a bug with o1-preview, files-to-prompt and LLM
Feedly Summary: Solving a bug with o1-preview, files-to-prompt and LLM
I added a new feature to DJP this morning: you can now have plugins specify their middleware in terms of how it should be positioned relative to other middleware – inserted directly before or directly after django.middleware.common.CommonMiddleware for example.
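DJP's actual plugin API isn't reproduced here, but the core of the feature – inserting an entry directly before or after a named middleware – can be sketched in plain Python. The `Before`/`After` wrapper names and the `position_middleware` helper below are illustrative assumptions, not DJP's confirmed interface:

```python
# Sketch of relative middleware positioning. Before/After and
# position_middleware are illustrative names, not DJP's real API.
from dataclasses import dataclass


@dataclass
class Before:
    target: str


@dataclass
class After:
    target: str


def position_middleware(existing, name, position):
    """Insert `name` directly before or after `position.target`."""
    result = list(existing)
    idx = result.index(position.target)  # ValueError if target is absent
    if isinstance(position, Before):
        result.insert(idx, name)
    else:
        result.insert(idx + 1, name)
    return result


middleware = [
    "django.middleware.security.SecurityMiddleware",
    "django.middleware.common.CommonMiddleware",
]
print(position_middleware(
    middleware,
    "myplugin.Middleware",
    After("django.middleware.common.CommonMiddleware"),
))
# → ['django.middleware.security.SecurityMiddleware',
#    'django.middleware.common.CommonMiddleware',
#    'myplugin.Middleware']
```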
At one point I got stuck with a weird test failure, and after ten minutes of head scratching I decided to pipe the entire thing into OpenAI’s o1-preview to see if it could spot the problem. I used files-to-prompt to gather the code and LLM to run the prompt:
files-to-prompt **/*.py -c | llm -m o1-preview "
The middleware test is failing showing all of these – why is MiddlewareAfter repeated so many times?
['MiddlewareAfter', 'Middleware3', 'MiddlewareAfter', 'Middleware5', 'MiddlewareAfter', 'Middleware3', 'MiddlewareAfter', 'Middleware2', 'MiddlewareAfter', 'Middleware3', 'MiddlewareAfter', 'Middleware5', 'MiddlewareAfter', 'Middleware3', 'MiddlewareAfter', 'Middleware4', 'MiddlewareAfter', 'Middleware3', 'MiddlewareAfter', 'Middleware5', 'MiddlewareAfter', 'Middleware3', 'MiddlewareAfter', 'Middleware2', 'MiddlewareAfter', 'Middleware3', 'MiddlewareAfter', 'Middleware5', 'MiddlewareAfter', 'Middleware3', 'MiddlewareAfter', 'Middleware', 'MiddlewareBefore']"
The model whirled away for a few seconds and spat out an explanation of the problem – one of my middleware classes was accidentally calling self.get_response(request) in two different places.
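A Django middleware class is a callable that is expected to invoke self.get_response(request) exactly once; calling it twice runs everything below it in the stack a second time, which is exactly the duplicated-entries symptom in the test output above. A minimal, dependency-free reproduction of that bug class (the middleware names here are made up for illustration):

```python
# Reproduces the symptom: a middleware that calls self.get_response(request)
# twice runs the rest of the chain twice. No Django needed; this mimics the
# same get_response call pattern Django middleware uses.
calls = []


class BuggyMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        calls.append("BuggyMiddleware")
        self.get_response(request)         # first call
        return self.get_response(request)  # accidental second call


class InnerMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        calls.append("InnerMiddleware")
        return self.get_response(request)


def view(request):
    return "response"


# Build the stack the way Django does: outer middleware wraps inner.
stack = BuggyMiddleware(InnerMiddleware(view))
stack("request")
print(calls)
# → ['BuggyMiddleware', 'InnerMiddleware', 'InnerMiddleware']
```

Removing the first self.get_response(request) call restores the expected single pass through the chain.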
Tags: o1, llm, djp, openai, ai, llms, ai-assisted-programming, generative-ai
AI Summary and Description: Yes
Summary: The text describes the author’s experience with debugging in a programming environment using an AI language model (LLM). This reflects the growing role of AI in enhancing software development processes and addresses potential challenges faced in middleware testing.
Detailed Description: The content provides insights into the practical application of AI technology in software debugging, specifically utilizing OpenAI’s LLM for identifying issues in code. Key points include:
* **New Feature Development**: The introduction of a feature that allows plugins to specify metadata positioning within Django middleware, showcasing an enhancement in the coding framework.
* **Debugging Process**: The author faced a test failure and utilized OpenAI’s o1-preview model to help investigate the issue, exemplifying AI’s utility in real-time problem-solving within software development.
* **Use of LLM for Problem Solving**: By employing files-to-prompt and LLM, the author was able to generate insights that led to the identification of a double call in the middleware, demonstrating the effective use of AI as a debugging aid.
* **Challenges Encountered**: The initial perplexity experienced by the author before consulting the LLM reflects common struggles in debugging and emphasizes the importance of AI tools in mitigating such complexities.
Overall, this instance illustrates a shift towards integrating AI into everyday software development workflows, underlining the growing importance of AI-assisted programming. By leveraging AI tools, developers can more quickly detect and resolve coding errors, which ultimately supports more reliable software development practices.