Simon Willison’s Weblog: The 3 AI Use Cases: Gods, Interns, and Cogs

Source URL: https://simonwillison.net/2024/Oct/20/gods-interns-and-cogs/#atom-everything
Source: Simon Willison’s Weblog
Title: The 3 AI Use Cases: Gods, Interns, and Cogs

Feedly Summary: The 3 AI Use Cases: Gods, Interns, and Cogs
Drew Breunig introduces an interesting new framework for categorizing use cases of modern AI:

Gods refers to the autonomous, AGI stuff that’s still effectively science fiction.
Interns are supervised copilots. This is how I get most of the value out of LLMs at the moment, delegating tasks to them that I can then review, such as AI-assisted programming.
Cogs are the smaller, more reliable components that you can build pipelines and automations on top of without needing to review everything they do – think Whisper for transcriptions or maybe some limited LLM subtasks such as structured data extraction.

Drew also considers Toys as a subcategory of Interns: things like image generators, “defined by their usage by non-experts. Toys have a high tolerance for errors because they’re not being relied on for much beyond entertainment.”
Tags: drew-breunig, ai-assisted-programming, generative-ai, ai, llms

AI Summary and Description: Yes

Summary: The text presents a new framework for categorizing modern AI use cases, delineating three primary categories (Gods, Interns, and Cogs) along with a subcategory (Toys). This classification is relevant for understanding the different operational roles AI plays, particularly in software development and automation.

Detailed Description: The framework introduced by Drew Breunig categorizes AI applications into distinct groups, which provides insights into how AI can be leveraged in various operational contexts.

– **Gods**: Represents the concept of Artificial General Intelligence (AGI), which is still a theoretical construct. These AI systems would hypothetically operate at a level comparable to human intelligence across diverse tasks but remain in the realm of science fiction.

– **Interns**: This category includes AI systems that assist human users in a supervised capacity, often referred to as copilots. An example is the use of Large Language Models (LLMs) for tasks such as AI-assisted programming. Here, users delegate tasks to the AI, then focus on reviewing and refining its outputs rather than generating everything from scratch.

– **Cogs**: These are reliable, smaller components used in automation and workflows without the need for constant supervision or review. Examples include tools like Whisper for transcription or LLMs performing specific subtasks like structured data extraction, which streamline processes in various applications.

– **Toys**: A subcategory of Interns highlighted by Breunig, Toys refers to AI applications that are generally simple and intended for entertainment or other non-critical uses. These applications can tolerate errors because they aren’t relied on for significant decision-making.
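
The defining property of a "Cog" is that its reliability is established up front (by tests and constrained scope) rather than by a human reviewing every output. As a minimal sketch of that idea, the hypothetical component below uses deterministic regex extraction as a stand-in for a narrow LLM subtask like structured data extraction; the function name and schema are illustrative assumptions, not from the source:

```python
import re


def extract_contact_info(text: str) -> dict:
    """A hypothetical 'Cog': a narrow, deterministic extraction step.

    Because its behavior is predictable and testable, it can sit inside
    an automated pipeline without per-output human review.
    """
    # Simple patterns standing in for a structured-extraction subtask
    emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    dates = re.findall(r"\b\d{4}-\d{2}-\d{2}\b", text)
    return {"emails": emails, "dates": dates}


record = extract_contact_info("Meet alice@example.com on 2024-10-20.")
print(record)
```

An "Intern"-style step, by contrast, would hand its output to a person for review before it flows onward; the design choice here is to keep the component's scope small enough that a test suite can certify it instead.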

This classification system has implications for practitioners in AI, software development, and infrastructure security:
– Understanding these categories can help teams decide which AI tools are appropriate for their purposes based on reliability and oversight needs.
– The distinction can influence compliance considerations, especially as organizations integrate more ‘Interns’ and ‘Cogs’ into their workflows, necessitating new privacy and security protocols.
– Recognizing the different roles of AI can guide investments in infrastructure security, ensuring that critical systems utilize the most reliable solutions while leveraging less critical applications appropriately.