E^2-LLM is an efficient method for extreme length extension of large language models that supports long-context tasks with only one training procedure and significantly reduced computational cost. The method builds on RoPE position embeddings and introduces two distinct augmentation methods, applied to the scale and position index parameters, to improve the model's robustness during inference. Comprehensive experimental results on multiple benchmark datasets demonstrate the effectiveness of E^2-LLM on challenging long-context tasks.
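Below is a minimal sketch of the training-time augmentation idea, assuming a position-interpolation-style scale factor applied to the RoPE angles: for each training sample, a scale factor and a position-index offset are drawn at random, so a short training window can cover many effective context lengths. The function names, sampling ranges, and tensor shapes here are illustrative assumptions, not the paper's implementation.

```python
import torch

def rope_angles(positions: torch.Tensor, dim: int, base: float = 10000.0,
                scale: float = 1.0) -> torch.Tensor:
    """Rotary angles for (possibly scaled) integer position indices.

    positions: (seq_len,) position indices.
    scale:     values > 1 compress positions (position-interpolation style);
               this is one of the parameters such training can randomize.
    """
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    # (seq_len, dim/2) table of rotation angles
    return torch.outer(positions.float() / scale, inv_freq)

def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
    """Rotate consecutive channel pairs of x (seq_len, dim) by the angles."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Hypothetical per-sample augmentation: draw a scale factor and a position
# offset so one short-window training run sees many effective windows.
seq_len, dim, train_window = 4096, 128, 4096
scale = float(torch.randint(1, 17, (1,)))  # illustrative range, up to 16x
max_offset = int(scale * train_window) - seq_len
offset = int(torch.randint(0, max_offset + 1, (1,)))
positions = torch.arange(offset, offset + seq_len)

q = torch.randn(seq_len, dim)
q_rot = apply_rope(q, rope_angles(positions, dim, scale=scale))
```

Because the scale and offset vary across samples, the model is exposed to position indices spanning a much longer range than any single training sequence, which is what lets a single short-context training run generalize to extended windows at inference time.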