Tag: webgpu
-
Simon Willison’s Weblog: llama-3.2-webgpu
Source URL: https://simonwillison.net/2024/Sep/30/llama-32-webgpu/#atom-everything
Source: Simon Willison’s Weblog
Title: llama-3.2-webgpu

Feedly Summary: llama-3.2-webgpu
Llama 3.2 1B is a really interesting model, given its 128,000 token input and its tiny size (barely more than a GB). This page loads a 1.24GB q4f16 ONNX build of the Llama-3.2-1B-Instruct model and runs it with a React-powered chat interface directly…
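
For context, running a quantized ONNX build of a small Llama model directly in the browser on the WebGPU backend can be done with very little code. Below is a minimal sketch assuming Transformers.js v3 (@huggingface/transformers); the model id (onnx-community/Llama-3.2-1B-Instruct-q4f16), prompt, and generation options are illustrative assumptions, not taken from the demo's actual source.

```typescript
// Sketch: load a q4f16 ONNX build of Llama-3.2-1B-Instruct in the browser
// and generate a chat reply on the WebGPU backend.
// Assumes Transformers.js v3; model id and options are illustrative.
import { pipeline } from "@huggingface/transformers";

async function main() {
  // Downloads the quantized ONNX weights (~1.2GB) and runs them on WebGPU.
  const generator = await pipeline(
    "text-generation",
    "onnx-community/Llama-3.2-1B-Instruct-q4f16", // assumed model id
    { device: "webgpu" },
  );

  // Chat-style input; the pipeline applies the model's chat template.
  const messages = [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain WebGPU in one sentence." },
  ];

  const output = await generator(messages, { max_new_tokens: 128 });
  console.log(output);
}

main();
```

A chat UI like the demo's React interface would wrap this in component state and stream tokens as they are produced, but the core load-and-generate path is the part sketched above.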