Tag: vision-llms

  • Simon Willison’s Weblog: Say hello to gemini-exp-1121

    Source URL: https://simonwillison.net/2024/Nov/22/gemini-exp-1121/#atom-everything
    Summary: Google Gemini’s Logan Kilpatrick on Twitter: Say hello to gemini-exp-1121! Our latest experimental Gemini model, with significant gains on coding performance, stronger reasoning capabilities, and improved visual understanding. Available on Google AI Studio and the Gemini API right…

  • Simon Willison’s Weblog: Pixtral Large

    Source URL: https://simonwillison.net/2024/Nov/18/pixtral-large/
    Summary: New today from Mistral: Today we announce Pixtral Large, a 124B open-weights multimodal model built on top of Mistral Large 2. Pixtral Large is the second model in our multimodal family and demonstrates frontier-level image understanding. The weights are out on…

  • Simon Willison’s Weblog: Ollama: Llama 3.2 Vision

    Source URL: https://simonwillison.net/2024/Nov/13/ollama-llama-vision/#atom-everything
    Summary: Ollama released version 0.4 last week with support for Meta’s first Llama vision model, Llama 3.2. If you have Ollama installed you can fetch the 11B model (7.9 GB) like this:

        ollama pull llama3.2-vision

    Or the larger…
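
    Once the model is pulled, a minimal sketch (not from the post) of prompting it with an image through the ollama Python package looks like this; the image path and prompt are placeholders:

        # Minimal sketch: prompt the local llama3.2-vision model through the
        # ollama Python package (pip install ollama). "photo.jpg" is a placeholder path.
        import ollama

        response = ollama.chat(
            model="llama3.2-vision",
            messages=[{
                "role": "user",
                "content": "Describe this image in one short paragraph.",
                "images": ["photo.jpg"],  # local image file to attach
            }],
        )
        print(response["message"]["content"])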

  • Simon Willison’s Weblog: Claude API: PDF support (beta)

    Source URL: https://simonwillison.net/2024/Nov/1/claude-api-pdf-support-beta/#atom-everything
    Summary: Claude 3.5 Sonnet now accepts PDFs as attachments: The new Claude 3.5 Sonnet (claude-3-5-sonnet-20241022) model now supports PDF input and understands both text and visual content within documents. I just released llm-claude-3 0.7 with support…
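
    As a rough sketch of what that looks like from Python via LLM and the llm-claude-3 plugin (the model alias and PDF path are assumptions, and an Anthropic API key must already be configured):

        # Rough sketch: send a PDF to Claude 3.5 Sonnet through LLM's Python API.
        # Assumes `pip install llm llm-claude-3` and `llm keys set claude` have been run.
        import llm

        model = llm.get_model("claude-3.5-sonnet")  # alias registered by llm-claude-3 (assumed)
        response = model.prompt(
            "Summarize this document, including any figures.",
            attachments=[llm.Attachment(path="report.pdf")],  # placeholder PDF path
        )
        print(response.text())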

  • Simon Willison’s Weblog: You can now run prompts against images, audio and video in your terminal using LLM

    Source URL: https://simonwillison.net/2024/Oct/29/llm-multi-modal/#atom-everything
    Summary: I released LLM 0.17 last night, the latest version of my combined CLI tool and Python library for interacting with hundreds of different Large Language Models such as GPT-4o, Llama,…
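
    A minimal sketch of LLM 0.17's new attachment support from the Python side (the image URL is a placeholder and an OpenAI API key must already be configured):

        # Minimal sketch: pass an image by URL to GPT-4o using LLM 0.17 attachments.
        # Assumes `pip install llm` and `llm keys set openai` have been run.
        import llm

        model = llm.get_model("gpt-4o")
        response = model.prompt(
            "What is happening in this image?",
            attachments=[llm.Attachment(url="https://example.com/photo.jpg")],
        )
        print(response.text())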

  • Simon Willison’s Weblog: LLM Pictionary

    Source URL: https://simonwillison.net/2024/Oct/26/llm-pictionary/
    Summary: Inspired by my SVG pelicans on a bicycle, Paul Calcraft built this brilliant system where different vision LLMs can play Pictionary with each other, taking it in turns to progressively draw SVGs while the other models see if they can guess…

  • Simon Willison’s Weblog: Running prompts against images and PDFs with Google Gemini

    Source URL: https://simonwillison.net/2024/Oct/23/prompt-gemini/#atom-everything
    Summary: New TIL. I’ve been experimenting with the Google Gemini APIs for running prompts against images and PDFs (in preparation for finally adding multi-modal support to LLM) –…
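
    A minimal sketch of that pattern with the google-generativeai Python package (the model choice and file path are placeholders; GOOGLE_API_KEY must be set):

        # Minimal sketch: upload a PDF via the Gemini Files API, then prompt against it.
        # Assumes `pip install google-generativeai` and a GOOGLE_API_KEY environment variable.
        import os
        import google.generativeai as genai

        genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
        model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model choice

        document = genai.upload_file("paper.pdf")  # also works for images
        response = model.generate_content(
            [document, "Summarize this document in three bullet points."]
        )
        print(response.text)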

  • Simon Willison’s Weblog: mistral.rs

    Source URL: https://simonwillison.net/2024/Oct/19/mistralrs/#atom-everything
    Summary: Here’s an LLM inference library written in Rust. It’s not just for that one family of models – like how llama.cpp has grown beyond Llama, mistral.rs has grown beyond Mistral. This is the first time I’ve been able to run the Llama 3.2…
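
    mistral.rs also ships an OpenAI-compatible HTTP server, so a hedged sketch of querying a locally served vision model might look like this; the port, model name, image URL and the assumption that the server accepts OpenAI-style image_url content are all placeholders, not details from the post:

        # Hedged sketch: query a locally running mistralrs-server through its
        # OpenAI-compatible API. Port, model name and image URL are placeholders.
        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed-locally")

        response = client.chat.completions.create(
            model="llama-3.2-vision",  # placeholder; match whatever model the server loaded
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this photo."},
                    {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
                ],
            }],
        )
        print(response.choices[0].message.content)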

  • Simon Willison’s Weblog: mlx-vlm

    Source URL: https://simonwillison.net/2024/Sep/29/mlx-vlm/#atom-everything
    Summary: The MLX ecosystem of libraries for running machine learning models on Apple Silicon continues to expand. Prince Canuma is actively developing this library for running vision models such as Qwen-2 VL, Pixtral and LLaVA in Python on a Mac. I used…

  • Simon Willison’s Weblog: Gemini Bounding Box Visualization

    Source URL: https://simonwillison.net/2024/Aug/26/gemini-bounding-box-visualization/#atom-everything
    Summary: Here’s another fun tool I built with the help of Claude 3.5 Sonnet. I was browsing through Google’s Gemini documentation while researching how different multi-modal LLM APIs work when I stumbled across this note in the vision…
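
    Gemini’s documentation describes bounding boxes as [ymin, xmin, ymax, xmax] with each value normalized to a 0–1000 range, so a small sketch of converting one to pixel coordinates and drawing it could look like this (the file names and example box are made up):

        # Sketch: convert a Gemini-style bounding box to pixel coordinates and draw it
        # with Pillow. Gemini documents boxes as [ymin, xmin, ymax, xmax], each value
        # normalized to 0-1000. File names and the example box below are made up.
        from PIL import Image, ImageDraw

        def draw_gemini_box(image_path, box, out_path="annotated.png"):
            ymin, xmin, ymax, xmax = box
            img = Image.open(image_path)
            width, height = img.size
            # Scale the 0-1000 normalized values to this image's pixel dimensions.
            rect = [
                xmin / 1000 * width,   # left
                ymin / 1000 * height,  # top
                xmax / 1000 * width,   # right
                ymax / 1000 * height,  # bottom
            ]
            draw = ImageDraw.Draw(img)
            draw.rectangle(rect, outline="red", width=3)
            img.save(out_path)

        # Example: a box covering roughly the middle of the image.
        draw_gemini_box("photo.jpg", [250, 250, 750, 750])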