Microsoft has released Phi-2, a 2.7-billion-parameter language model that outperforms much larger models on several benchmarks. Built on the work behind Phi-1.5, Phi-2 is a base model: it has undergone neither RLHF (Reinforcement Learning from Human Feedback) nor instruction fine-tuning. It uses a 24-layer Transformer architecture and was pre-trained for 14 days on 1.4 trillion tokens of high-quality data. Microsoft has also open-sourced Phi-1.5 and Phi-1, encouraging developers to explore models with fewer parameters. In multi-task evaluations, Phi-2 has outperformed models with 25 times its parameter count, highlighting the potential of small models as an alternative to traditional large ones.
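
Because Phi-2 is a base model with no RLHF or instruction tuning, the expected usage pattern is plain text completion rather than chat. Below is a minimal sketch of loading it for inference with the Hugging Face transformers library, assuming the checkpoint is published under the Hub ID "microsoft/phi-2" and that the accelerate package is installed (required for device_map="auto"):

```python
# A minimal sketch, assuming the released checkpoint lives at the
# Hugging Face Hub ID "microsoft/phi-2" and a recent transformers
# release is installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"  # assumed Hub ID for the released checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 2.7B parameters fit on a single consumer GPU in fp16
    device_map="auto",          # requires the accelerate package
)

# Phi-2 ships without RLHF or instruction tuning, so we prompt it
# with plain text rather than a chat template.
prompt = "Explain why smaller language models can be useful:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The fp16 dtype is a practical choice here: at 2.7 billion parameters, the half-precision weights occupy roughly 5.5 GB, small enough for a single consumer GPU, which is part of the appeal of models in this size class.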