The Register: Have we stopped to think about what LLMs actually model?

Source URL: https://www.theregister.com/2024/08/30/ai_language_cognition_research/
Source: The Register
Title: Have we stopped to think about what LLMs actually model?

Feedly Summary: Claims about much-hyped tech show flawed understanding of language and cognition, research argues
In May, Sam Altman, CEO of $80-billion-or-so OpenAI, seemed unconcerned about how much it would cost to achieve the company’s stated goal. “Whether we burn $500 million a year or $5 billion – or $50 billion a year – I don’t care," he told students at Stanford University. "As long as we can figure out a way to pay the bills, we’re making artificial general intelligence. It’s going to be expensive."…

AI Summary and Description: Yes

Summary: The text examines the gap between claims made about large language models (LLMs) and what they actually do, emphasizing the difference between human language understanding and machine output. It highlights cognitive scientists' concerns about conflating advanced computational output with genuine linguistic comprehension, and advocates a more prudent approach to deploying LLMs in critical sectors such as education and healthcare.

Detailed Description:
The provided text delves into the evolving landscape of large language models (LLMs), presenting the following key points:

– **Financial Commitment by Tech Giants**: Sam Altman, CEO of OpenAI, emphasizes the high costs associated with developing artificial general intelligence (AGI), indicating a prioritization of investment over cost concerns among leading tech companies.
– **Peer-Reviewed Concerns**: A recent peer-reviewed paper questions the fundamental assumptions underlying LLM capabilities, distinguishing between engineering feats and genuine linguistic comprehension. Key points include:
  – **Misleading Claims**: The language used by tech leaders and corporations often misrepresents what LLMs can achieve, creating unjustified expectations about their capabilities.
  – **Limits of Data Representation**: LLMs are incomplete representations of human language, lacking the nuanced understanding that comes from embodied, experiential learning.
  – **Social and Ethical Implications**: Conflating LLM output with human language capability poses risks to social interaction, policy-making, and regulation.
– **Critical Evaluation of LLM Functionality**:
  – The authors argue that LLMs do not participate in social interaction and lack shared experience or emotional engagement, which undermines claims of their mastery of language.
  – The notion that language can be distilled into tokens or datasets ignores its dynamic, experiential nature, which the authors liken to a flowing river rather than a static collection of text.
– **Call for Regulation and Caution**: The discussion underlines the absence of rigorous testing and regulatory frameworks comparable to those in more traditional industries such as the automotive or pharmaceutical sectors. This absence poses significant risks when deploying LLMs in sensitive areas such as education and healthcare.
– **Future of AI**: Despite these criticisms, predictions regarding the widespread adoption of AI and its economic impact remain optimistic, with estimates suggesting large-scale deployment by 2030.

Key Insights for Professionals:
– Security and Compliance Risks: The text stresses the necessity of implementing governance frameworks that evaluate the social and ethical ramifications of LLMs before widespread use.
– Importance of Clarity in Communication: Clear distinctions between LLM capabilities and human understanding should be maintained to prevent misleading narratives from shaping policy and public perception.
– Need for Robust Testing and Evaluation: Professionals in AI and compliance must advocate for and establish rigorous evaluation methods akin to those in other high-stakes industries, ensuring the safe rollout of these technologies.

In conclusion, the text raises essential questions about the development, deployment, and societal impacts of LLMs, urging stakeholders to adopt a more analytical and cautious perspective as they navigate the complexities of AI technology in various domains.