During the turbulent period at OpenAI, Microsoft introduced Orca 2, a pair of smaller language models with 7 billion and 13 billion parameters that rival the performance of Llama-2-Chat-70B. Trained on synthetic datasets designed to teach the most effective solution strategy for each task, the models excel at zero-shot reasoning: across 15 diverse benchmarks, Orca 2 matched or outperformed models five to ten times its size, offering a cost-effective option for businesses with limited resources.
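For readers who want to try the models, below is a minimal sketch of querying Orca 2 zero-shot with the Hugging Face transformers library. It assumes the public checkpoints (microsoft/Orca-2-7b and microsoft/Orca-2-13b) and the ChatML-style prompt format described on the model card; the system message and question are illustrative placeholders, not part of the release.

```python
# Minimal sketch: zero-shot prompting of Orca 2 via Hugging Face transformers.
# Assumes the public checkpoint "microsoft/Orca-2-13b" (a 7B variant also exists).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/Orca-2-13b"
# The model card recommends the slow tokenizer (use_fast=False) for Orca 2.
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # place layers on available GPU(s)/CPU
)

# Orca 2 uses a ChatML-style format with system/user/assistant turns.
system = "You are Orca, an AI assistant that answers with careful step-by-step reasoning."  # hypothetical system message
user = "A train travels 120 km in 2 hours. What is its average speed?"  # hypothetical test question
prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    f"<|im_start|>assistant"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Note that no few-shot examples are supplied in the prompt: the zero-shot setup above mirrors the evaluation setting in which Microsoft reports Orca 2's strongest results.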