Source URL: https://www.theregister.com/2024/09/10/brute_force_ai_era_gartner/
Source: Hacker News
Title: We’re in the brute force phase of AI – once it ends, demand for GPUs will too
Feedly Summary: Comments
AI Summary and Description: Yes
**Summary:** The text summarizes Gartner analysts’ view that AI’s current dependence on specialist hardware such as GPUs is a temporary “brute force” phase, that generative AI has real limitations, and that composite AI approaches offer practical benefits. It underscores the case for businesses to return to established AI techniques, used in conjunction with generative methods, and to take a cautious, pragmatic approach to AI deployment.
**Detailed Description:**
– **Gartner’s Viewpoint on Specialist Hardware:**
– Analyst Erick Brethenoux argues that AI techniques dependent on specialist hardware such as GPUs are “doomed,” because historically, once the programming techniques for a problem mature, the need for such hardware largely disappears.
– The reference to a “brute force” phase implies that reliance on powerful hardware reflects a lack of sophistication in AI algorithm design.
– Brethenoux anticipates that generative AI will follow the same pattern, with its dependence on specialist hardware fading as the techniques mature.
– **Generative AI Usage and Observations:**
– Brethenoux claims that generative AI occupies a disproportionate share of public and media attention (“90 percent of the airwaves”) compared to its actual practical applications (“five percent of the use cases”).
– He notes that, after a period of distraction during the generative AI hype, organizations revisited their existing AI solutions and found that established approaches often suffice for their operational needs.
– An example of composite AI is provided, where generative AI enhances existing AI applications, such as generating text for predictive maintenance analyses or improving security by suggesting actions based on firewall logs (a sketch of this pattern follows this list).
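The composite pattern described above can be illustrated with a short sketch. This is an assumption-laden illustration, not code from the article: `predict_failure` stands in for an existing, validated predictive-maintenance model, and `generate_text` is a hypothetical placeholder for whatever LLM client an organization might use; the generative step only rephrases the structured result into text.

```python
# Minimal sketch of a "composite AI" pipeline: an established predictive model
# produces a structured result, and a generative model is used only to turn
# that result into readable text. All names here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class MaintenancePrediction:
    machine_id: str
    failure_probability: float   # output of an existing, validated model
    top_signal: str              # e.g. the sensor driving the prediction


def predict_failure(machine_id: str) -> MaintenancePrediction:
    # Stand-in for an existing predictive-maintenance model
    # (e.g. a classifier trained on vibration and temperature data).
    return MaintenancePrediction(machine_id, 0.83, "bearing vibration")


def generate_text(prompt: str) -> str:
    # Hypothetical placeholder for the organization's LLM client;
    # in practice this call would go to a generative model.
    return f"[draft advisory]\n{prompt}"


def maintenance_report(machine_id: str) -> str:
    pred = predict_failure(machine_id)      # established AI makes the prediction
    prompt = (
        f"Write a short maintenance advisory for machine {pred.machine_id}. "
        f"Predicted failure probability: {pred.failure_probability:.0%}. "
        f"Main contributing signal: {pred.top_signal}. "
        "Do not add facts beyond those listed above."
    )
    return generate_text(prompt)            # generative AI only writes it up


print(maintenance_report("pump-17"))
```

The design point is that the generative model never makes the prediction itself; it only narrates a result produced by a trusted, established model.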
– **Cautions about Generative AI:**
– Analyst Bern Elliot warns against overestimating generative AI’s capabilities, highlighting its lack of reasoning and the potential unreliability of its outputs, which he likens to “Swiss cheese”: the gaps in accuracy are there, but their location is unknown.
– Elliot emphasizes that generative AI should be used only in well-defined contexts and calls for guardrails: using trusted AI techniques to validate generative AI outputs before they are acted upon (see the sketch after this list).
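A minimal sketch of such a guardrail, assuming (hypothetically) that a generative model summarizing firewall logs proposes actions as simple "action target-IP" strings; the action allowlist, protected ranges, and suggestion format are illustrative assumptions, not anything from the article.

```python
# Minimal sketch of a guardrail: generative output is treated as an untrusted
# suggestion and must pass deterministic, rule-based checks before it is used.
import ipaddress
import re

ALLOWED_ACTIONS = {"block_ip", "rate_limit_ip"}          # actions automation may take
PROTECTED_NETS = [ipaddress.ip_network("10.0.0.0/8")]    # never auto-block internal ranges


def validate_suggestion(suggestion: str) -> bool:
    """Accept only suggestions of the form '<action> <ip>' where the action is
    allowlisted and the IP lies outside protected ranges."""
    match = re.fullmatch(r"(\w+) (\S+)", suggestion.strip())
    if not match:
        return False
    action, target = match.groups()
    if action not in ALLOWED_ACTIONS:
        return False
    try:
        ip = ipaddress.ip_address(target)
    except ValueError:
        return False
    return not any(ip in net for net in PROTECTED_NETS)


# Usage: only validated suggestions are queued for human or automated follow-up.
for proposed in ["block_ip 203.0.113.7", "block_ip 10.1.2.3", "rm -rf /"]:
    print(proposed, "->", "accepted" if validate_suggestion(proposed) else "rejected")
```

The validator here is deliberately boring and deterministic; that is the point of a guardrail around a generative component.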
– **Implications for Professionals:**
– For security and compliance professionals, understanding the blend of established and generative AI techniques can enhance operational effectiveness and mitigate risks associated with AI deployment.
– Knowledge about the limitations of generative AI can inform better decision-making when adopting new technologies and developing robust security policies around the usage of AI systems.
– The recommendation for composite AI reinforces the idea of employing a layered security approach, integrating generative capabilities without sacrificing reliability or security posture.
Overall, the discussions presented by Gartner underscore the need for a measured approach to AI investments, steering clear of over-reliance on new technologies without substantial business cases.