Tag: Generative AI
-
Cloud Blog: Google named a leader in the Forrester Wave: AI/ML Platforms, Q3 2024
Source URL: https://cloud.google.com/blog/products/ai-machine-learning/google-cloud-named-a-leader-in-forrester-wave-for-ai-platforms/ Source: Cloud Blog Title: Google named a leader in the Forrester Wave: AI/ML Platforms, Q3 2024 Feedly Summary: Today, we are excited to announce that Google is a Leader in The Forrester Wave™: AI/ML Platforms, Q3 2024, tying for the highest score of all vendors evaluated in the Strategy category. At Google…
-
Wired: A New Group Is Trying to Make AI Data Licensing Ethical
Source URL: https://www.wired.com/story/dataset-providers-alliance-ethical-generative-ai-licensing/ Source: Wired Title: A New Group Is Trying to Make AI Data Licensing Ethical Feedly Summary: The Dataset Providers Alliance calls for creators and rights holders to be able to opt in to having their material used for training purposes. AI Summary and Description: Yes Summary: The text discusses the evolving landscape…
-
Simon Willison’s Weblog: Quoting anjor
Source URL: https://simonwillison.net/2024/Sep/3/anjor/#atom-everything Source: Simon Willison’s Weblog Title: Quoting anjor Feedly Summary: history | tail -n 2000 | llm -s "Write aliases for my zshrc based on my terminal history. Only do this for most common features. Don't use any specific files or directories." — anjor Tags: llm, llms, ai, generative-ai AI Summary and Description: Yes…
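The same idea can also be expressed through llm's Python API rather than the shell pipeline. The sketch below is a rough equivalent, not part of the original quote; the history file path and the model name are illustrative assumptions.

```python
import llm
from pathlib import Path

# Rough Python equivalent of: history | tail -n 2000 | llm -s "..."
# Assumptions: zsh history lives at ~/.zsh_history and a key for the
# chosen model is already configured for the llm tool.
history_lines = (
    (Path.home() / ".zsh_history")
    .read_text(errors="ignore")
    .splitlines()[-2000:]
)

model = llm.get_model("gpt-4o-mini")  # assumed model choice
response = model.prompt(
    "\n".join(history_lines),
    system=(
        "Write aliases for my zshrc based on my terminal history. "
        "Only do this for most common features. "
        "Don't use any specific files or directories."
    ),
)
print(response.text())
```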
-
Wired: AI-Fakes Detection Is Failing Voters in the Global South
Source URL: https://www.wired.com/story/generative-ai-detection-gap/ Source: Wired Title: AI-Fakes Detection Is Failing Voters in the Global South Feedly Summary: With generative AI affecting politics worldwide, researchers face a “detection gap,” as the biases built into systems mean tools for identifying fake content often work poorly or not at all in the Global South. AI Summary and Description:…
-
Hacker News: Procreate defies AI trend, pledges "no generative AI" in its illustration app
Source URL: https://arstechnica.com/information-technology/2024/08/procreate-defies-ai-trend-pledges-no-generative-ai-in-its-illustration-app/ Source: Hacker News Title: Procreate defies AI trend, pledges "no generative AI" in its illustration app Feedly Summary: Comments AI Summary and Description: Yes Summary: Procreate’s announcement to exclude generative AI from its iPad illustration app has stirred significant conversation in the creative community. CEO James Cuda articulated strong opposition to generative…
-
Simon Willison’s Weblog: OpenAI says ChatGPT usage has doubled since last year
Source URL: https://simonwillison.net/2024/Aug/31/openai-says-chatgpt-usage-has-doubled-since-last-year/#atom-everything Source: Simon Willison’s Weblog Title: OpenAI says ChatGPT usage has doubled since last year Feedly Summary: OpenAI says ChatGPT usage has doubled since last year Official ChatGPT usage numbers don’t come along very often: OpenAI said on Thursday that ChatGPT now has more than 200 million weekly active users — twice as…
-
Simon Willison’s Weblog: Quoting Forrest Brazeal
Source URL: https://simonwillison.net/2024/Aug/31/forrest-brazeal/#atom-everything Source: Simon Willison’s Weblog Title: Quoting Forrest Brazeal Feedly Summary: I think that AI has killed, or is about to kill, pretty much every single modifier we want to put in front of the word “developer.” “.NET developer”? Meaningless. Copilot, Cursor, etc can get anyone conversant enough with .NET to be productive…
-
Simon Willison’s Weblog: llm-claude-3 0.4.1
Source URL: https://simonwillison.net/2024/Aug/30/llm-claude-3/#atom-everything Source: Simon Willison’s Weblog Title: llm-claude-3 0.4.1 Feedly Summary: llm-claude-3 0.4.1 New minor release of my LLM plugin that provides access to the Claude 3 family of models. Claude 3.5 Sonnet was recently upgraded to an 8,192 token output limit (up from 4,096 for the Claude 3 family of models). LLM can now…
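As a rough usage sketch (not taken from the post): with the plugin installed via `llm install llm-claude-3` and an Anthropic API key configured, the larger output limit can be requested through the Python API. The model ID and the max_tokens option name follow the plugin's conventions but are assumptions here.

```python
import llm

# Sketch only: assumes `llm install llm-claude-3` has been run and an
# Anthropic API key is configured for the llm tool. The model ID
# "claude-3.5-sonnet" and the max_tokens option name are assumed.
model = llm.get_model("claude-3.5-sonnet")
response = model.prompt(
    "Draft release notes for a minor plugin update.",
    max_tokens=8192,  # the raised output limit mentioned above
)
print(response.text())
```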
-
Simon Willison’s Weblog: Quoting Magic AI
Source URL: https://simonwillison.net/2024/Aug/30/magic-ai/#atom-everything Source: Simon Willison’s Weblog Title: Quoting Magic AI Feedly Summary: We have recently trained our first 100M token context model: LTM-2-mini. 100M tokens equals ~10 million lines of code or ~750 novels. For each decoded token, LTM-2-mini’s sequence-dimension algorithm is roughly 1000x cheaper than the attention mechanism in Llama 3.1 405B for…
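A quick back-of-envelope check of the quoted equivalences (my arithmetic, not from the source): 100M tokens implies roughly 10 tokens per line of code and about 133,000 tokens per novel, both plausible rules of thumb.

```python
# Back-of-envelope check of the quoted equivalences (assumed figures).
context_tokens = 100_000_000
lines_of_code = 10_000_000
novels = 750

print(context_tokens / lines_of_code)  # ~10 tokens per line of code
print(context_tokens / novels)         # ~133,333 tokens per novel
```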