YaRN is a compute-efficient method for extending the context window of transformer-based large language models. It builds on rotary position embedding (RoPE): rather than modifying the transformer itself, it rescales the RoPE frequencies so that longer sequences are compressed into the positional range the model was trained on, while a temperature factor corrects the attention logits. Experiments show that YaRN extends the context window with far fewer training tokens and steps than previous interpolation methods while preserving model quality, offering an efficient route to long-context language models.
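To make the mechanism concrete, the following is a minimal sketch (not the reference implementation) of the per-dimension RoPE frequency interpolation at YaRN's core: a linear ramp decides how strongly each frequency band is interpolated, and a logarithmic temperature term rescales attention. The function name and example parameters are illustrative assumptions; the ramp bounds follow values reported for LLaMA-family models.

```python
import math

import torch


def yarn_rope_frequencies(
    dim: int,
    base: float = 10000.0,
    orig_max_pos: int = 4096,
    scale: float = 8.0,   # extension factor s = new_length / orig_max_pos (assumed)
    alpha: float = 1.0,   # below this many rotations: interpolate fully
    beta: float = 32.0,   # above this many rotations: leave untouched
):
    # Standard RoPE inverse frequencies: theta_i = base^(-2i/dim)
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))

    # Full rotations each dimension completes over the original context window
    rotations = orig_max_pos * inv_freq / (2 * math.pi)

    # Linear ramp gamma in [0, 1]: 0 -> pure position interpolation, 1 -> unchanged
    gamma = torch.clamp((rotations - alpha) / (beta - alpha), 0.0, 1.0)

    # Blend interpolated frequencies (theta / s) with the originals, per dimension
    inv_freq_yarn = inv_freq / scale * (1.0 - gamma) + inv_freq * gamma

    # Attention temperature correction applied when building the cos/sin tables
    mscale = 0.1 * math.log(scale) + 1.0
    return inv_freq_yarn, mscale


# Example: extend a model with 128-dim heads by 8x
inv_freq, mscale = yarn_rope_frequencies(dim=128, scale=8.0)
```

The blended inverse frequencies replace the standard RoPE table, so positions beyond the original training length map into a range the model has already seen, which is why only a small amount of fine-tuning is needed.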