Microsoft researchers have introduced LongRoPE, a method that extends the context window of LLMs to 2048k tokens, achieving an 8-fold expansion while maintaining performance. By efficiently searching for non-uniformities in the positional embedding, the method avoids the need for complex fine-tuning. Experiments show that perplexity stays at baseline levels across the 2048k context window, opening new avenues for future improvements in language model performance.
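To make the idea of "non-uniformity" concrete, here is a minimal sketch of non-uniform rotary position embedding (RoPE) rescaling: instead of dividing every frequency by the same factor, each dimension gets its own rescale factor, and the earliest positions are left unscaled. The function name `nonuniform_rope_angles`, the per-dimension factors `lam`, and the `start_tokens` cutoff are illustrative assumptions, not LongRoPE's actual search procedure, which tunes such factors automatically.

```python
import numpy as np

def rope_freqs(dim: int, base: float = 10000.0) -> np.ndarray:
    """Per-dimension rotary frequencies theta_i = base^(-2i/dim)."""
    return base ** (-np.arange(0, dim, 2) / dim)

def nonuniform_rope_angles(positions, dim, scale, start_tokens=0, base=10000.0):
    """Rotation angles with hypothetical per-dimension rescale factors.

    `scale` holds one factor per frequency dimension (lambda_i >= 1) that a
    search procedure would tune; positions below `start_tokens` keep the
    original angles, mimicking the idea of leaving initial tokens intact.
    """
    theta = rope_freqs(dim, base)                  # (dim/2,) base frequencies
    pos = np.asarray(positions, dtype=np.float64)  # (seq,) token positions
    angles = np.outer(pos, theta)                  # uniform RoPE angles
    scaled = np.outer(pos, theta / scale)          # non-uniformly rescaled angles
    keep = (pos < start_tokens)[:, None]           # early positions stay unscaled
    return np.where(keep, angles, scaled)

# Example: rescale a 128-dim head non-uniformly toward an 8x longer context.
dim = 128
lam = np.linspace(1.0, 8.0, dim // 2)  # hypothetical searched factors
angles = nonuniform_rope_angles(range(16), dim, lam, start_tokens=4)
print(angles.shape)  # (16, 64)
```

The design choice this illustrates: because each frequency dimension contributes differently to long-range attention, per-dimension factors can stretch the effective position range with less distortion than one global interpolation factor, which is what makes searching over them attractive.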