Liquid AI introduces Liquid Foundation Models (LFMs), built from first principles to deliver stronger GenAI performance with a smaller memory footprint.
Liquid AI, a Massachusetts-based AI startup, has launched Liquid Foundation Models (LFMs), a new architecture designed to outperform traditional Generative Pre-trained Transformers (GPTs). Unlike GPT-based models such as OpenAI’s GPT-4, Liquid AI’s LFMs are built from scratch on first principles, enabling more efficient use of memory while delivering superior performance. The models come in three sizes (1.3B, 3.1B, and 40.3B parameters) and are optimized for complex tasks, especially multimedia and time-series data processing.
One major challenge for businesses using GenAI is the high memory cost of running large models, especially during inference. LFMs address this by redesigning the traditional token system into a Liquid system that condenses information and maximizes knowledge capacity. This reduces memory costs without sacrificing performance, allowing enterprises to deploy AI solutions more efficiently across platforms including NVIDIA, AMD, and Apple hardware. The largest model, a 40.3B Mixture of Experts (MoE), is designed to handle intricate tasks while remaining lightweight.
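To make the memory argument concrete, the rough sketch below contrasts how inference-time state grows in a standard transformer (whose KV cache scales linearly with context length) versus an architecture that keeps a fixed-size internal state, the general approach Liquid AI describes. All dimensions in the snippet are hypothetical placeholders for illustration, not published LFM specifications.

```python
# Back-of-the-envelope comparison of inference memory:
# a transformer-style KV cache (grows with context length) vs. a
# fixed-size recurrent/liquid-style state (constant per layer).
# All sizes below are illustrative assumptions, not LFM specs.

def kv_cache_bytes(seq_len, n_layers=32, n_heads=32, head_dim=128, dtype_bytes=2):
    """KV cache: 2 tensors (K and V) per layer, each seq_len x n_heads x head_dim."""
    return 2 * n_layers * seq_len * n_heads * head_dim * dtype_bytes

def fixed_state_bytes(state_dim=8192, n_layers=32, dtype_bytes=2):
    """Fixed-size state per layer, independent of how many tokens were processed."""
    return n_layers * state_dim * dtype_bytes

for seq_len in (1_000, 32_000, 1_000_000):
    kv_gb = kv_cache_bytes(seq_len) / 1e9
    state_mb = fixed_state_bytes() / 1e6
    print(f"{seq_len:>9} tokens: KV cache ~{kv_gb:6.1f} GB | fixed state ~{state_mb:.1f} MB")
```

Under these assumed dimensions, the KV cache climbs from gigabytes into the hundreds of gigabytes as the context grows, while the fixed state stays constant, which is the kind of saving that makes long-context inference cheaper to serve.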
Another benefit of the Liquid system is its flexibility. The architecture is designed to automatically optimize for specific hardware configurations, ensuring seamless deployment across diverse systems. This adaptability makes LFMs highly attractive for enterprises that need AI models tailored to their infrastructure, offering performance improvements without locking them into a specific vendor ecosystem.
Despite these promises, Liquid AI has yet to disclose key details like dataset sources and safety measures, leaving some uncertainty around real-world applications. However, with models now available on platforms like Liquid Playground and Perplexity Labs, developers will soon have the chance to evaluate their performance firsthand.