Apple has released its own large language model, MM1, a family of multimodal LLMs with up to 30B parameters. Through large-scale pre-training and supervised fine-tuning (SFT), MM1 achieves SOTA results on multiple multimodal benchmarks and exhibits appealing capabilities such as in-context learning, multi-image reasoning, and few-shot learning.