DynVideo-E is a method for editing human-centric videos with large-scale motion and viewpoint changes, built on dynamic NeRFs. It represents the video as a 3D foreground canonical human space, combined with a deformation field and a 3D static background space. By leveraging a reconstruction loss, 2D personalized diffusion priors, 3D diffusion priors, and local-part super-resolution, it edits the animatable canonical human space consistently across multiple viewpoints and poses. In parallel, it transfers the reference style to the 3D background model via a style transfer loss in feature space. Users can then render the edited video-NeRF model along the source video's camera poses. Unlike prior methods that are largely limited to short videos, DynVideo-E handles human videos with large-scale motion and viewpoint changes, giving users more direct and controllable editing. Experiments on two challenging datasets show that DynVideo-E outperforms existing approaches by a large margin of 50% ~ 95% in terms of human preference. The code and data of DynVideo-E will be released to the community.
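The abstract does not specify the exact form of the feature-space style transfer loss. A common choice for this kind of loss is a Gram-matrix distance between deep features of the rendered background and the reference style image; the sketch below illustrates that formulation only as an assumption, using random NumPy arrays in place of real network features.

```python
import numpy as np

def gram_matrix(feat):
    # feat: feature map of shape (C, H, W); returns the (C, C) Gram matrix,
    # normalized by the number of entries so the scale is resolution-independent.
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)

def style_loss(feat_render, feat_ref):
    # Squared Frobenius distance between Gram matrices of the rendered
    # background features and the reference style features.
    g_render = gram_matrix(feat_render)
    g_ref = gram_matrix(feat_ref)
    return float(np.mean((g_render - g_ref) ** 2))

# Stand-in features; in practice these would come from a pretrained encoder.
rng = np.random.default_rng(0)
rendered = rng.standard_normal((8, 16, 16))
reference = rng.standard_normal((8, 16, 16))

print(style_loss(rendered, rendered))  # identical features give zero loss
print(style_loss(rendered, reference))  # mismatched styles give a positive loss
```

Because the Gram matrix discards spatial layout and keeps only channel correlations, minimizing this loss pushes the background's texture statistics toward the reference style without constraining its geometry.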