Liquid Foundation Models (LFMs) are a family of generative AI models that achieve state-of-the-art performance at every scale while maintaining a smaller memory footprint and more efficient inference than transformer-based models of comparable size. LFMs are built from computational units grounded in dynamical systems theory, signal processing, and numerical linear algebra, which lets them handle any kind of sequential data, including video, audio, text, time series, and other signals. They are designed as general-purpose models for large-scale, multimodal sequential data, supporting advanced reasoning and reliable decision-making.
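To give a rough sense of what a dynamical-systems-inspired computational unit can look like, the sketch below implements a liquid time-constant (LTC) style recurrent update in NumPy, in which the effective time constant of each hidden unit depends on the current input and state. This is an illustrative sketch only: the function names, parameter shapes, and fused solver step are assumptions drawn from the published LTC line of work, not a description of the actual LFM architecture.

```python
import numpy as np

def ltc_step(x, u, W_in, W_rec, b, tau, A, dt=0.1):
    """One fused-solver step of a liquid time-constant (LTC) style cell.

    Illustrative assumption, not the LFM architecture.
    x: hidden state (H,), u: input at this step (D,).
    """
    # Nonlinear gate driven by the current input and hidden state.
    f = np.tanh(u @ W_in + x @ W_rec + b)
    # Semi-implicit Euler update of dx/dt = -(1/tau + f) * x + f * A,
    # i.e. the decay rate ("time constant") changes with the input.
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

def run_sequence(inputs, hidden_size, seed=0, dt=0.1):
    """Roll the cell over a (T, D) sequence; return all hidden states."""
    rng = np.random.default_rng(seed)
    T, D = inputs.shape
    W_in = rng.normal(0.0, 0.1, (D, hidden_size))
    W_rec = rng.normal(0.0, 0.1, (hidden_size, hidden_size))
    b = np.zeros(hidden_size)
    tau = np.ones(hidden_size)            # base time constants
    A = rng.normal(0.0, 1.0, hidden_size) # per-unit equilibrium targets
    x = np.zeros(hidden_size)
    states = []
    for u in inputs:
        x = ltc_step(x, u, W_in, W_rec, b, tau, A, dt)
        states.append(x)
    return np.stack(states)

if __name__ == "__main__":
    # Toy 1-D signal: the same update rule applies to any sequential input.
    seq = np.sin(np.linspace(0, 8 * np.pi, 200))[:, None]
    states = run_sequence(seq, hidden_size=16)
    print(states.shape)  # (200, 16)
```

The state update stays bounded because the input-dependent term appears in the denominator, which is one reason continuous-time units of this kind are attractive for long, irregularly sampled sequences; how LFMs combine such units with signal-processing and linear-algebra primitives at scale is not detailed here.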