Tag: lm
-
CSA: How ISO 42001 Enhances AI Risk Management
Source URL: https://www.schellman.com/blog/iso-certifications/how-to-assess-and-treat-ai-risks-and-impacts-with-iso42001 Source: CSA Title: How ISO 42001 Enhances AI Risk Management Feedly Summary: AI Summary and Description: Yes Summary: The text discusses the adoption of ISO/IEC 42001:2023 as a global standard for AI governance, emphasizing a holistic approach to AI risk management that goes beyond traditional cybersecurity measures. StackAware’s implementation of this standard…
-
Simon Willison’s Weblog: W̶e̶e̶k̶n̶o̶t̶e̶s̶ Monthnotes for October
Source URL: https://simonwillison.net/2024/Oct/30/monthnotes/#atom-everything Source: Simon Willison’s Weblog Title: W̶e̶e̶k̶n̶o̶t̶e̶s̶ Monthnotes for October Feedly Summary: I try to publish weeknotes at least once every two weeks. It’s been four since the last entry, so I guess this one counts as monthnotes instead. In my defense, the reason I’ve fallen behind on weeknotes is that I’ve been…
-
Simon Willison’s Weblog: Bringing developer choice to Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview
Source URL: https://simonwillison.net/2024/Oct/30/copilot-models/#atom-everything Source: Simon Willison’s Weblog Title: Bringing developer choice to Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview Feedly Summary: Bringing developer choice to Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview The big announcement from GitHub Universe: Copilot is growing support for…
-
The Register: How to jailbreak ChatGPT and trick the AI into writing exploit code using hex encoding
Source URL: https://www.theregister.com/2024/10/29/chatgpt_hex_encoded_jailbreak/ Source: The Register Title: How to jailbreak ChatGPT and trick the AI into writing exploit code using hex encoding Feedly Summary: ‘It was like watching a robot going rogue’ says researcher OpenAI’s language model GPT-4o can be tricked into writing exploit code by encoding the malicious instructions in hexadecimal, which allows an…
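
A minimal sketch of the encoding step the article describes, assuming a harmless placeholder prompt: an instruction is converted to hexadecimal before being sent to the model, so filters that scan for natural-language keywords may not match it. This only illustrates the hex round-trip, not any exploit content or the researcher's actual payload.

    # Hex-encode a placeholder instruction and decode it back.
    # The prompt text is a hypothetical stand-in, not the article's payload.
    instruction = "describe the weather in plain English"
    hex_payload = instruction.encode("utf-8").hex()

    print(hex_payload)                                  # e.g. '6465736372696265...'
    print(bytes.fromhex(hex_payload).decode("utf-8"))   # round-trips to the original text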
-
The Register: Linus Torvalds: 90% of AI marketing is hype
Source URL: https://www.theregister.com/2024/10/29/linus_torvalds_ai_hype/ Source: The Register Title: Linus Torvalds: 90% of AI marketing is hype Feedly Summary: Linux kernel creator says let’s see which workloads use GenAI in five years Linus Torvalds, creator of the Linux kernel, thinks the majority of marketing circulated by the industry on Generative AI is simply fluff with no real…
-
The Register: The troublesome economics of CPU-only AI
Source URL: https://www.theregister.com/2024/10/29/cpu_gen_ai_gpu/ Source: The Register Title: The troublesome economics of CPU-only AI Feedly Summary: At the end of the day, it all boils down to tokens per dollar Analysis Today, most GenAI models are trained and run on GPUs or some other specialized accelerator, but that doesn’t mean they have to be. In fact,…
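
A back-of-the-envelope illustration of the tokens-per-dollar metric the analysis refers to, using entirely made-up throughput and pricing figures; the numbers are placeholders, not data from the article.

    # Tokens generated per dollar of instance time (all figures hypothetical).
    def tokens_per_dollar(tokens_per_second: float, hourly_cost_usd: float) -> float:
        return tokens_per_second * 3600 / hourly_cost_usd

    gpu_tpd = tokens_per_dollar(tokens_per_second=1500, hourly_cost_usd=4.00)
    cpu_tpd = tokens_per_dollar(tokens_per_second=25, hourly_cost_usd=0.40)

    print(f"GPU: {gpu_tpd:,.0f} tokens/$   CPU: {cpu_tpd:,.0f} tokens/$")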
-
Simon Willison’s Weblog: You can now run prompts against images, audio and video in your terminal using LLM
Source URL: https://simonwillison.net/2024/Oct/29/llm-multi-modal/#atom-everything Source: Simon Willison’s Weblog Title: You can now run prompts against images, audio and video in your terminal using LLM Feedly Summary: I released LLM 0.17 last night, the latest version of my combined CLI tool and Python library for interacting with hundreds of different Large Language Models such as GPT-4o, Llama,…
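
A short sketch of the multi-modal support via LLM's Python API (the library behind the CLI). The attachment interface shown here reflects my reading of the 0.17 release notes; the model name and file path are hypothetical, and the names should be checked against the LLM documentation.

    # Prompt an attachment-capable model with a local image (path is hypothetical).
    import llm

    model = llm.get_model("gpt-4o-mini")
    response = model.prompt(
        "Describe this image in one sentence.",
        attachments=[llm.Attachment(path="photo.jpg")],
    )
    print(response.text())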
-
CSA: Integrating CSA CCM Controls into ISO/IEC 27001
Source URL: https://cloudsecurityalliance.org/blog/2024/10/29/streamlining-cloud-security-integrating-csa-ccm-controls-into-your-iso-iec-27001-framework Source: CSA Title: Integrating CSA CCM Controls into ISO/IEC 27001 Feedly Summary: AI Summary and Description: Yes Summary: The text provides valuable insights on how organizations can integrate the Cloud Security Alliance’s Cloud Controls Matrix (CCM) with their existing ISO/IEC 27001 Information Security Management System (ISMS). It emphasizes that compliance does not…