Tag: Hugging Face

  • Simon Willison’s Weblog: Pixtral Large

    Source URL: https://simonwillison.net/2024/Nov/18/pixtral-large/
    Feedly Summary: New today from Mistral: Today we announce Pixtral Large, a 124B open-weights multimodal model built on top of Mistral Large 2. Pixtral Large is the second model in our multimodal family and demonstrates frontier-level image understanding. The weights are out on…
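
    To make the announcement concrete, here is a hedged sketch of querying Pixtral Large with an image through Mistral's Python SDK. The model identifier "pixtral-large-latest" and the example image URL are assumptions rather than details from the post, which is about the open-weights release itself.

    ```python
    # Hedged sketch: asking Pixtral Large about an image via Mistral's API.
    # "pixtral-large-latest" is an assumed API identifier, not confirmed by the post.
    import os
    from mistralai import Mistral

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

    response = client.chat.complete(
        model="pixtral-large-latest",  # assumption: hosted model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": "https://example.com/photo.jpg"},  # placeholder
            ],
        }],
    )
    print(response.choices[0].message.content)
    ```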

  • Cloud Blog: How to deploy Llama 3.2-1B-Instruct model with Google Cloud Run GPU

    Source URL: https://cloud.google.com/blog/products/ai-machine-learning/how-to-deploy-llama-3-2-1b-instruct-model-with-google-cloud-run/
    Feedly Summary: As open-source large language models (LLMs) become increasingly popular, developers are looking for better ways to access new models and deploy them on Cloud Run GPU. That’s why Cloud Run now offers fully managed NVIDIA…
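
    As a rough illustration of what such a deployment wraps, here is a minimal sketch of a GPU-backed inference entrypoint one might containerize for Cloud Run. It is not the blog post's exact setup: the FastAPI server, the gated meta-llama/Llama-3.2-1B-Instruct checkpoint, and the generation parameters are all assumptions.

    ```python
    # Sketch of an inference entrypoint for a Cloud Run GPU container
    # (an assumption-driven example, not the blog post's exact setup).
    import os
    import torch
    from fastapi import FastAPI
    from transformers import pipeline

    app = FastAPI()

    # meta-llama/Llama-3.2-1B-Instruct is gated on Hugging Face; a valid
    # HF_TOKEN must be present in the container environment.
    generator = pipeline(
        "text-generation",
        model="meta-llama/Llama-3.2-1B-Instruct",
        torch_dtype=torch.bfloat16,
        device_map="auto",  # lands on the GPU when Cloud Run attaches one
    )

    @app.post("/generate")
    def generate(body: dict):
        out = generator(body["text"], max_new_tokens=128)
        return {"completion": out[0]["generated_text"]}

    if __name__ == "__main__":
        import uvicorn
        # Cloud Run injects the serving port via the PORT environment variable.
        uvicorn.run(app, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
    ```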

  • Cloud Blog: How to deploy and serve multi-host gen AI large open models over GKE

    Source URL: https://cloud.google.com/blog/products/ai-machine-learning/deploy-and-serve-open-models-over-google-kubernetes-engine/
    Feedly Summary: As generative AI experiences explosive growth fueled by advancements in LLMs (Large Language Models), access to open models is more critical than ever for developers. Open models are publicly available pre-trained foundational…
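
    The serving layer such a multi-host GKE deployment typically wraps looks roughly like the vLLM sketch below. The model name and parallelism degrees are placeholders, and pipeline parallelism across hosts presupposes a Ray cluster spanning the pods.

    ```python
    # Sketch of multi-host serving with vLLM: one large open model split
    # across accelerators and hosts. Model and degrees are placeholders.
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="meta-llama/Llama-3.1-405B-Instruct",  # placeholder large open model
        tensor_parallel_size=8,    # shard each layer across 8 accelerators per host
        pipeline_parallel_size=2,  # split layers across 2 hosts (requires Ray)
    )

    params = SamplingParams(temperature=0.7, max_tokens=128)
    outputs = llm.generate(["What are open models?"], params)
    print(outputs[0].outputs[0].text)
    ```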

  • Hacker News: Tencent drops a 389B MoE model (open-source and free for commercial use)

    Source URL: https://github.com/Tencent/Tencent-Hunyuan-Large
    AI Summary: The text introduces the Hunyuan-Large model, the largest open-source Transformer-based Mixture of Experts (MoE) model, developed by Tencent, which boasts 389 billion parameters, optimizing performance while managing resource…
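
    For readers unfamiliar with the MoE design, the toy PyTorch layer below illustrates the core idea: a router sends each token to only a few expert MLPs, so total parameters grow far faster than per-token compute. It is a didactic sketch with arbitrary sizes, not Tencent's implementation.

    ```python
    # Toy Mixture-of-Experts layer: top-k routing over expert MLPs.
    # Didactic sketch only; not Hunyuan-Large's actual code.
    import torch
    import torch.nn as nn

    class ToyMoE(nn.Module):
        def __init__(self, dim=64, num_experts=8, top_k=2):
            super().__init__()
            self.router = nn.Linear(dim, num_experts)
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
                for _ in range(num_experts)
            )
            self.top_k = top_k

        def forward(self, x):  # x: (tokens, dim)
            # Pick the top-k experts per token, weighted by router probabilities.
            weights, idx = self.router(x).softmax(dim=-1).topk(self.top_k, dim=-1)
            out = torch.zeros_like(x)
            for k in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = idx[:, k] == e  # tokens whose k-th choice is expert e
                    if mask.any():
                        out[mask] += weights[mask, k, None] * expert(x[mask])
            return out

    print(ToyMoE()(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
    ```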

  • Simon Willison’s Weblog: Nous Hermes 3

    Source URL: https://simonwillison.net/2024/Nov/4/nous-hermes-3/#atom-everything
    Feedly Summary: The Nous Hermes family of fine-tuned models has a solid reputation. Their most recent release came out in August, based on Meta’s Llama 3.1: Our training data aggressively encourages the model to follow the system and instruction prompts exactly…

  • Hacker News: SmolLM2

    Source URL: https://simonwillison.net/2024/Nov/2/smollm2/
    AI Summary: The text introduces SmolLM2, a new family of compact language models from Hugging Face, designed for lightweight on-device operations. The models, which range from 135M to 1.7B parameters, were trained on 11 trillion tokens across diverse datasets, showcasing…

  • Simon Willison’s Weblog: SmolLM2

    Source URL: https://simonwillison.net/2024/Nov/2/smollm2/#atom-everything
    Feedly Summary: New from Loubna Ben Allal and her research team at Hugging Face: SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough…
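
    A quick way to try the instruct-tuned variant locally is via transformers. The sketch below assumes the checkpoint naming from Hugging Face's SmolLM2 collection (HuggingFaceTB/SmolLM2-1.7B-Instruct); the 135M and 360M variants are drop-in substitutes for smaller on-device footprints.

    ```python
    # Running instruct-tuned SmolLM2 locally with transformers.
    # Checkpoint name assumed from Hugging Face's SmolLM2 collection.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(checkpoint)

    messages = [{"role": "user", "content": "Explain gravity in one paragraph."}]
    inputs = tokenizer.apply_chat_template(
        messages, return_tensors="pt", add_generation_prompt=True
    )
    outputs = model.generate(inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
    ```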

  • Cloud Blog: PyTorch/XLA 2.5: vLLM support and an improved developer experience

    Source URL: https://cloud.google.com/blog/products/ai-machine-learning/whats-new-with-pytorchxla-2-5/
    Feedly Summary: Machine learning engineers are bullish on PyTorch/XLA, a Python package that uses the XLA deep learning compiler to connect the PyTorch deep learning framework and Cloud TPUs. And now, PyTorch/XLA 2.5 is here, along with a set…
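
    For context, the basic PyTorch/XLA flow looks like the sketch below: tensors live on an XLA device, and xm.mark_step() flushes the lazily traced graph through the XLA compiler. This is a generic minimal example for a TPU VM, not code from the release notes.

    ```python
    # Minimal PyTorch/XLA flow on a TPU host. Requires the torch_xla
    # package; a generic example, not the 2.5 release's own code.
    import torch
    import torch_xla.core.xla_model as xm

    device = xm.xla_device()            # the attached TPU core
    model = torch.nn.Linear(128, 10).to(device)
    x = torch.randn(32, 128, device=device)

    loss = model(x).sum()
    loss.backward()
    xm.mark_step()                      # compile + execute the traced graph
    print(loss.item())
    ```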

  • Hacker News: Meta’s Open Source NotebookLM

    Source URL: https://github.com/meta-llama/llama-recipes/tree/main/recipes/quickstart/NotebookLlama
    AI Summary: The text presents a comprehensive guide to using an open-source project called NotebookLlama, aimed at creating a workflow that converts PDF documents into podcasts using various LLMs (Large Language Models). This process is likely to…
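
    In outline, a NotebookLlama-style pipeline chains PDF extraction, an LLM rewriting pass, and text-to-speech. The sketch below covers the first two stages only; the pypdf extraction, the model choice, and the prompt are placeholders rather than the recipe's actual code.

    ```python
    # Outline of a NotebookLlama-style pipeline: PDF -> text -> podcast
    # script (the recipe adds a TTS stage after this). Placeholder
    # model/prompt; not the repo's actual code.
    from pypdf import PdfReader
    from transformers import pipeline

    def pdf_to_text(path: str) -> str:
        # Concatenate extracted text from every page of the PDF.
        return "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)

    writer = pipeline("text-generation", model="meta-llama/Llama-3.2-1B-Instruct")

    text = pdf_to_text("paper.pdf")  # hypothetical input file
    prompt = (
        "Rewrite the following document as a lively podcast dialogue "
        f"between two hosts:\n\n{text[:4000]}"
    )
    script = writer(prompt, max_new_tokens=512)[0]["generated_text"]
    print(script)
    ```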

  • The Register: Hugging Face puts the squeeze on Nvidia’s software ambitions

    Source URL: https://www.theregister.com/2024/10/24/huggingface_hugs_nvidia/
    Feedly Summary: AI model repo promises lower costs, broader compatibility for NIMs competitor. Hugging Face this week announced HUGS, its answer to Nvidia’s Inference Microservices (NIMs), which the AI repo claims will let customers deploy and run LLMs and…