Tag: Huggingface

  • Simon Willison’s Weblog: llm-gguf 0.2, now with embeddings

    Source URL: https://simonwillison.net/2024/Nov/21/llm-gguf-embeddings/#atom-everything
    Source: Simon Willison’s Weblog
    Title: llm-gguf 0.2, now with embeddings
    Feedly Summary: This new release of my llm-gguf plugin – which adds support for locally hosted GGUF LLMs – adds a new feature: it now supports embedding models distributed as GGUFs as well. This means you can…
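    As a rough illustration of what GGUF embedding support involves one layer down, here is a minimal sketch using llama-cpp-python, the library the llm-gguf plugin builds on; the model file name is a placeholder, and the plugin's own CLI commands are not shown here.

      from llama_cpp import Llama

      # embedding=True loads the GGUF file in embedding mode rather than text-generation mode
      model = Llama(model_path="./mxbai-embed-large-v1-f16.gguf", embedding=True)  # placeholder file name

      # create_embedding() returns an OpenAI-style response dict with one vector per input string
      result = model.create_embedding("An example sentence to embed")
      vector = result["data"][0]["embedding"]
      print(len(vector), vector[:5])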

  • Cloud Blog: How to deploy Llama 3.2-1B-Instruct model with Google Cloud Run GPU

    Source URL: https://cloud.google.com/blog/products/ai-machine-learning/how-to-deploy-llama-3-2-1b-instruct-model-with-google-cloud-run/
    Source: Cloud Blog
    Title: How to deploy Llama 3.2-1B-Instruct model with Google Cloud Run GPU
    Feedly Summary: As open-source large language models (LLMs) become increasingly popular, developers are looking for better ways to access new models and deploy them on Cloud Run GPU. That’s why Cloud Run now offers fully managed NVIDIA…
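    Assuming the deployed container exposes an OpenAI-compatible endpoint (as vLLM and similar serving stacks do), querying the Cloud Run service could look roughly like the sketch below; the service URL is a placeholder and authentication is omitted.

      import requests

      SERVICE_URL = "https://llama-service-xxxxx-uc.a.run.app"  # placeholder Cloud Run URL

      resp = requests.post(
          f"{SERVICE_URL}/v1/chat/completions",
          json={
              "model": "meta-llama/Llama-3.2-1B-Instruct",
              "messages": [{"role": "user", "content": "Say hello in one sentence."}],
              "max_tokens": 64,
          },
          timeout=60,
      )
      resp.raise_for_status()
      print(resp.json()["choices"][0]["message"]["content"])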

  • Simon Willison’s Weblog: Nous Hermes 3

    Source URL: https://simonwillison.net/2024/Nov/4/nous-hermes-3/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Nous Hermes 3
    Feedly Summary: The Nous Hermes family of fine-tuned models has a solid reputation. Their most recent release came out in August, based on Meta’s Llama 3.1: “Our training data aggressively encourages the model to follow the system and instruction prompts exactly…”
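    To try the system-prompt adherence the summary highlights, a hedged sketch using the transformers chat pipeline follows; the repo id NousResearch/Hermes-3-Llama-3.1-8B is an assumption, so substitute whichever Hermes 3 checkpoint you actually use.

      from transformers import pipeline

      # Assumed repo id; Hermes 3 ships in several sizes
      pipe = pipeline("text-generation", model="NousResearch/Hermes-3-Llama-3.1-8B")

      messages = [
          {"role": "system", "content": "You only answer in rhyming couplets."},
          {"role": "user", "content": "Explain what a GGUF file is."},
      ]
      out = pipe(messages, max_new_tokens=128)
      # The pipeline returns the full conversation; the last message is the model's reply
      print(out[0]["generated_text"][-1]["content"])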

  • Simon Willison’s Weblog: SmolLM2

    Source URL: https://simonwillison.net/2024/Nov/2/smollm2/#atom-everything
    Source: Simon Willison’s Weblog
    Title: SmolLM2
    Feedly Summary: New from Loubna Ben Allal and her research team at Hugging Face: SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough…
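    The 135M checkpoint is small enough to try on a laptop CPU; a minimal sketch with transformers follows, assuming the repo id HuggingFaceTB/SmolLM2-135M-Instruct for the smallest instruct variant.

      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      repo = "HuggingFaceTB/SmolLM2-135M-Instruct"  # assumed repo id for the smallest size
      tokenizer = AutoTokenizer.from_pretrained(repo)
      model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float32)

      messages = [{"role": "user", "content": "Summarise what a GGUF file is in one sentence."}]
      inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
      outputs = model.generate(inputs, max_new_tokens=80)
      # Decode only the newly generated tokens, not the prompt
      print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))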

  • The Register: Hugging Face puts the squeeze on Nvidia’s software ambitions

    Source URL: https://www.theregister.com/2024/10/24/huggingface_hugs_nvidia/
    Source: The Register
    Title: Hugging Face puts the squeeze on Nvidia’s software ambitions
    Feedly Summary: AI model repo promises lower costs, broader compatibility for NIMs competitor. Hugging Face this week announced HUGS, its answer to Nvidia’s Inference Microservices (NIMs), which the AI repo claims will let customers deploy and run LLMs and…

  • Hacker News: 1-Click Models Powered by Hugging Face

    Source URL: https://www.digitalocean.com/blog/one-click-models-on-do-powered-by-huggingface
    Source: Hacker News
    Title: 1-Click Models Powered by Hugging Face
    Feedly Summary: DigitalOcean has launched a new 1-Click Model deployment service powered by Hugging Face, termed HUGS on DO. This feature allows users to quickly deploy popular generative AI models on DigitalOcean GPU Droplets, aiming…

  • Simon Willison’s Weblog: mistral.rs

    Source URL: https://simonwillison.net/2024/Oct/19/mistralrs/#atom-everything
    Source: Simon Willison’s Weblog
    Title: mistral.rs
    Feedly Summary: Here’s an LLM inference library written in Rust. It’s not just for that one family of models – like how llama.cpp has grown beyond Llama, mistral.rs has grown beyond Mistral. This is the first time I’ve been able to run the Llama 3.2…
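    mistral.rs can also run as a local server with an OpenAI-compatible HTTP API, so an existing OpenAI client can talk to it; the port and model id in the sketch below are assumptions that depend on how the server was launched.

      from openai import OpenAI

      # Point the standard OpenAI client at the local mistral.rs server (port is a placeholder)
      client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

      response = client.chat.completions.create(
          model="default",  # the server answers for whichever model it was launched with
          messages=[{"role": "user", "content": "Give me one fun fact about Rust."}],
          max_tokens=64,
      )
      print(response.choices[0].message.content)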

  • Simon Willison’s Weblog: lm.rs: run inference on Language Models locally on the CPU with Rust

    Source URL: https://simonwillison.net/2024/Oct/11/lmrs/
    Source: Simon Willison’s Weblog
    Title: lm.rs: run inference on Language Models locally on the CPU with Rust
    Feedly Summary: Impressive new LLM inference implementation in Rust by Samuel Vitorino. I tried it just now on an M2 Mac with 64GB…

  • Hacker News: Nvidia releases NVLM 1.0 72B open weight model

    Source URL: https://huggingface.co/nvidia/NVLM-D-72B
    Source: Hacker News
    Title: Nvidia releases NVLM 1.0 72B open weight model
    Feedly Summary: The text introduces NVLM 1.0, a new family of advanced multimodal large language models (LLMs) developed with a focus on vision-language tasks. It demonstrates state-of-the-art performance comparable to leading proprietary and…