In a recent paper, Microsoft researchers present Orca 2, a 13-billion-parameter model that explores how to enhance the reasoning capabilities of smaller language models. With improved training signals, Orca 2 outperforms models of similar size and can match, or even exceed, models five to ten times larger. The goal of Orca 2 is to teach small language models a repertoire of reasoning techniques and to help them determine the most effective reasoning strategy for each specific task. As with its predecessor, Orca 1, the research team drew on the reasoning strategies of more powerful large language models and carefully adapted them to the capabilities of smaller models. Orca 2 is trained as a "cautious reasoner" using a technique called Prompt Erasure, which enables the model not only to execute specific reasoning steps but also to formulate a higher-level strategy for handling a task. In empirical studies, the researchers evaluated Orca 2 comprehensively on 15 benchmarks, showing that it significantly outperforms similarly sized models and matches or exceeds models five to ten times its size on tasks requiring advanced reasoning. Enhancing the capabilities of smaller models opens up new possibilities for a variety of deployment scenarios and strikes a balance between efficiency and capability.
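To make the Prompt Erasure idea more concrete, here is a minimal Python sketch of how it can be applied when constructing training data. Everything in it is an assumption for illustration: the generic prompt, the strategy prompts, and the teacher_generate helper are invented stand-ins, not the authors' actual pipeline, which is described in the Orca 2 paper.

```python
# Minimal sketch of the "Prompt Erasure" idea, under assumed names/prompts.
# teacher_generate() is a hypothetical stand-in for a strong teacher model.

GENERIC_PROMPT = "You are a helpful assistant. Answer the user's question."

# Detailed, strategy-specific system prompts shown only to the teacher model.
STRATEGY_PROMPTS = {
    "step_by_step": "Solve the problem step by step, then state the final answer.",
    "recall_then_generate": "First recall the relevant facts, then compose the answer.",
    "direct": "Answer directly and concisely.",
}

def teacher_generate(system_prompt: str, question: str) -> str:
    """Stand-in for a call to a strong teacher model (e.g., GPT-4).
    It just echoes its inputs so the sketch runs end to end."""
    return f"[demonstration elicited with system prompt: {system_prompt!r}]"

def build_training_example(question: str, strategy: str) -> dict:
    """Build one student training example with the strategy prompt erased."""
    # 1. Elicit a demonstration using the detailed, strategy-specific prompt.
    answer = teacher_generate(STRATEGY_PROMPTS[strategy], question)
    # 2. Erase the detailed prompt: the student only ever sees the generic
    #    prompt, so it must learn *when* to apply each strategy rather than
    #    merely imitating it on cue.
    return {"system": GENERIC_PROMPT, "user": question, "assistant": answer}

if __name__ == "__main__":
    example = build_training_example("What is 17 * 24?", "step_by_step")
    print(example["system"])  # generic prompt: the strategy instruction is gone
```

The key design choice this illustrates is that the strategy-specific instruction exists only at data-generation time; at training and inference time the student sees only the generic prompt, which is what pushes it to internalize strategy selection rather than follow explicit cues.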