Instill AI changelog

Groq: Fast, Affordable, and Energy Efficient AI


Groq solutions are built on the Language Processing Unit (LPU), a new category of processor. LPUs run Large Language Models (LLMs) at substantially faster speeds and, at the architectural level, with up to 10x better energy efficiency than GPUs.

We have added the Groq Component to šŸ’§ Instill VDP to help you achieve fast AI inference at scale in your AI applications.

Features

  • Support for the Text Generation Chat task with models such as llama3.1, llama3, gemma2, gemma, and more (see the sketch below)
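
For a sense of what the Text Generation Chat task does, here is a minimal sketch that calls Groq's chat completions API directly with the official `groq` Python SDK; the model name, prompt, and environment variable are illustrative, and within šŸ’§ Instill VDP the Groq Component issues the equivalent request for you as part of your pipeline.

```python
# Minimal sketch: Text Generation Chat against Groq's API using the official
# `groq` Python SDK (pip install groq). Model name and prompt are illustrative.
import os

from groq import Groq

# Assumes a GROQ_API_KEY environment variable; the Groq Component on
# Instill VDP is configured with the same kind of API key.
client = Groq(api_key=os.environ["GROQ_API_KEY"])

completion = client.chat.completions.create(
    model="llama3-8b-8192",  # e.g. a llama3 model served by Groq
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Why is LPU-based inference fast?"},
    ],
    temperature=0.7,
    max_tokens=256,
)

print(completion.choices[0].message.content)
```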

ā˜ļø Instill Cloud demo

To request a new component, please go here and click "Request A Component".