China's first full-process AIGC animated film, 'The Family Reunion Edict,' is scheduled for release on February 28 and centers on the theme of reunification across the Taiwan Strait. The film is guided and produced by the Chinese Democratic League Central Committee, CCTV, and others, with part of the box-office proceeds donated to public-welfare causes supporting reunification.
Meta introduces AI features to attract younger users, including dynamic avatars that animate static photos and enhanced visuals for text posts.
China's first AIGC animated film, 'The Reunion Decree,' is set to premiere on February 28th, guided by the Central Committee of the Chinese Democratic League and CCTV. The film fully utilizes AI technology to showcase technological innovation while exploring new pathways for the dissemination of Chinese culture, achieving a fusion of technology and emotion.
Adobe announced that its 2D animation software Animate will end service on March 1, 2026, marking a shift in the company's strategic focus toward artificial intelligence. Support for enterprise customers will be extended until 2029, while individual users will be supported until 2027. Adobe acknowledged Animate's important place in the animation ecosystem but, citing new technologies and AI trends, has decided not to release a 2025 version.
QuiverAI helps teams generate, edit, and animate editable SVG assets.
Award-winning 3D animation studio, providing services such as product videos for multiple industries, trusted by over 350 brands.
Elser AI can instantly generate anime, comics, animations, music videos, short films, and more.
The world's first AI tool that can generate a complete 60-second animated short film with a single prompt, suitable for social platforms.
| Provider  | Input tokens/M | Output tokens/M | Context Length |
| --------- | -------------- | --------------- | -------------- |
| OpenAI    | $2.8           | $11.2           | 1k             |
| Anthropic | $105           | $525            | 200            |
| Google    | $0.7           | $7              | $35            |
| Google    | $2.1           | $17.5           | $21            |
| Alibaba   | $3.9           | $15.2           | 64             |
| ByteDance | $0.8           | $2              | 128            |
| Tencent   | $1             | $4              | 32             |
| DeepSeek  | $12 / $1.75    | $14             | 400            |
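Given per-million-token prices like those listed above, the cost of a single request is simple arithmetic; a minimal sketch (the function name and token counts are illustrative, and the rates used in the example are the OpenAI figures from the list):

```python
# Hypothetical helper: estimate the cost of one request from
# per-million-token prices. All names here are illustrative.
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Return the cost of one request in the table's currency units."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Example: 50k input + 5k output tokens at $2.8 / $11.2 per million.
cost = request_cost(50_000, 5_000, 2.8, 11.2)
print(round(cost, 4))  # 0.196
```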
NewBie-AI
NewBie image Exp0.1 is an efficient base image-generation model built on the Next-DiT architecture and designed for high-quality anime-style images. It integrates advanced text encoders and visual components, supports both natural-language and structured-tag input, and is a powerful tool for multi-character anime image generation.
MCG-NJU
SteadyDancer is a powerful animation framework built on the image-to-video paradigm, designed to generate high-fidelity, temporally coherent human animations. A robust first-frame preservation mechanism effectively resolves the identity-drift problem of earlier methods; the framework excels in visual quality and controllability while significantly reducing training resource requirements.
drbaph
This is a LoRA model based on Qwen Image Edit 2509, specifically designed to convert input images into sketch comic artworks with exaggerated features. The model can create humorous and artistic comic images for human and animal subjects, highlighting facial features and characteristics.
yaleiyaleichiling
The first truly open-source and unrestricted anime video generation model, based on the Wan2.2-5B architecture, can run with only 6GB of video memory and can generate amazing anime animation content.
lrzjason
Anything2Real is a dedicated LoRA model built on Qwen Edit 2509, capable of transforming any art style (illustrations, anime, cartoons, paintings, etc.) into realistic images while preserving the original composition and content.
uriel353
Anime2Realism is a text-to-image conversion model based on the Qwen/Qwen-Image base model, specifically designed to achieve image conversion from anime style to realistic style. This model utilizes LoRA and Diffusers technologies to generate corresponding realistic-style images based on text descriptions.
anikifoss
This project is a high-quality HQ4_K quantization of the MiniMax-M2 model, optimized for text-generation tasks and particularly well suited to dialogue scenarios. The quantized version is produced without an imatrix and preserves the original model's performance.
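The HQ4_K recipe itself is not documented here, but the general idea behind 4-bit block quantization, the family of techniques such quantized releases rely on, can be sketched as follows (a toy illustration with made-up values, not the actual HQ4_K algorithm):

```python
# Toy 4-bit block quantization sketch (NOT the actual HQ4_K recipe):
# each block of weights is stored as one float scale plus 4-bit codes.
def quantize_block(weights):
    """Map floats to 4-bit codes (0..15) with a per-block scale."""
    scale = max(abs(w) for w in weights) / 7 or 1.0
    codes = [max(-8, min(7, round(w / scale))) + 8 for w in weights]
    return scale, codes

def dequantize_block(scale, codes):
    """Recover approximate float weights from the 4-bit codes."""
    return [(c - 8) * scale for c in codes]

block = [0.12, -0.5, 0.33, 0.07]
scale, codes = quantize_block(block)
restored = dequantize_block(scale, codes)
# Reconstruction error is bounded by half the quantization step.
assert all(abs(a - b) <= scale / 2 for a, b in zip(block, restored))
```

Real k-quant formats add refinements (nested super-block scales, importance weighting) on top of this basic scale-and-round idea.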
unsloth
JanusCoder-8B is an open-source code intelligence foundation model built on Qwen3-8B, aiming to establish a unified visual programming interface. This model is trained on JANUSCODE-800K (the largest multimodal code corpus to date) and can handle various visual programming tasks, including data visualization, interactive web UI, and code-driven animation.
valiantcat
Qwen-Image-Edit-Cosplay LoRA is an anime-character outfit-transfer model fine-tuned from the Qwen-Image-Edit-2509 image-editing model. It focuses on accurately transferring an anime character's hairstyle, clothing, and decorative props onto real human images while preserving the person's facial features.
Mitchins
This is a deep learning model based on the EfficientNet-B0 architecture, specifically designed for classifying the art styles of anime and visual novel images. The model can accurately identify 6 different anime art styles, including dark, flat, modern, cute, painting, and retro styles.
A high-quality quantized version of GLM-4.6, implemented using advanced quantization techniques without using imatrix, maintaining good model performance and compatibility, suitable for various application scenarios such as dialogue.
John6666
An anime-style text-to-image model that generates 2D illustrations, character portraits, and character design sheets featuring cute girl characters. Its outputs are marked by dynamic poses, clean structure, and strong lighting.
bunnycore
chroma_art-lora is a specially designed LoRA model aimed at endowing the generated images with unique artistic aesthetics, especially focusing on vivid colors and artistic styles. It is suitable for creating stylized digital artworks, such as anime, movie-like images, and desktop wallpapers.
Ashmotv
animat3d_style_wan-lora is a LoRA model trained based on AI Toolkit by Ostris, specifically designed for text-to-video generation, which can bring unique 3D animation style effects to image generation. This model is fine-tuned based on the Wan2.2-T2V-A14B base model and supports use on multiple mainstream AI platforms.
Nova Anime XL is a text-to-image generation model based on the Stable Diffusion XL architecture, specifically designed to generate high-quality images in various styles such as anime, fantasy, and landscapes. It combines the advantages of multiple excellent base models and performs outstandingly in terms of details, characters, backgrounds, and color representation.
Noobai-XL-1.0 is a text-to-image generation model based on the diffusers library, specifically designed for generating anime-style girl images. This model was created by HetaKoneko, based on Laxhar/noobai-XL-1.0, and can generate anime images with a unique style.
This is a text-to-image generation model focused on generating anime and cartoon-style images. It can generate anime-style images containing various elements such as fantasy and beautiful women. The model is built on the OnomaAIResearch/Illustrious-xl-early-release-v0 base model and implemented using the diffusers library.
Illustrious-xl-early-release-v0 is a text-to-image generation model based on the Stable Diffusion XL architecture, specifically optimized for anime and 2D illustration styles, capable of generating high-quality image works based on text descriptions.
This is a high-quality quantized version of Moonshot AI's Kimi-K2-Instruct-0905 model, using the HQ4_K quantization method. It is optimized for inference performance, supports a 75,000-token context length, and is suited to text-generation tasks.
sagata007
RUSKANIME2025 is a text-to-image generation model based on LoRA and Diffusers technologies, specifically designed to generate relevant anime-style images through specific trigger words. This model is built on the black-forest-labs/FLUX.1-dev base model and uses the diffusion LoRA template technology.
Blender MCP VXAI is a powerful integration tool that allows users to control Blender through natural language to create and modify 3D models, animations, and scenes. It simplifies complex operations and supports real-time export to projects.
The MCP service of Bangumi TV provides access to the BangumiTV API, supporting queries for information on entries such as anime, manga, music, games, and related character and personnel data.
The AniList MCP Server is a service that interacts with the AniList API through the Model Context Protocol (MCP), allowing LLM clients to access and manipulate anime, manga, character, staff, and user data on AniList.
This project includes two MCP servers: one for querying One Piece anime character information and one for IP-address geolocation lookups. It is developed in TypeScript and integrates with AI clients over standard input/output.
The MCP service of Bangumi TV provides access to the BangumiTV API, supporting queries for information on entries such as anime, manga, music, and games, including entry details, characters, personnel, and related data retrieval functions.
This is a server based on the Model Context Protocol (MCP) that allows AI assistants to control the Unreal Engine game engine through the remote control API, enabling game development automation. It supports various functions such as asset management, actor control, editor operations, level management, animation physics, visual effects, and Sequencer control.
A project for producing 2D Spine animations.
This is an 8th Wall MCP server project that lets users build WebAR experiences in Claude Desktop through natural-language instructions. It provides over 66 tools covering scene building, 3D model management, animation, physics effects, asset search, and project-file management, and integrates with 8th Wall Desktop and cloud APIs.
This is a cognitive tool project that integrates Indigenous traditional knowledge and cultural expressions. It is developed and maintained by members of the Anishinaabe tribe, following a strict cognitive process (OOReDAct cycle) and tribal sovereignty protection agreements.
The MCP server implementation of the anitabi pilgrimage map, providing quick installation via NPX and local development configuration
A GIF generation tool based on the MCP protocol that converts video files into high-quality GIF animations, with support for custom frame rates, sizes, and segment extraction.
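The tool's internals aren't shown, but video-to-GIF conversion of this kind is typically driven by ffmpeg; a sketch of how such a command might be assembled (flag choices, defaults, and the helper name are assumptions, not the tool's actual implementation):

```python
# Build an ffmpeg command for video -> GIF conversion (illustrative sketch,
# not the MCP tool's actual implementation).
def gif_command(src, dst, fps=12, width=480, start=None, duration=None):
    cmd = ["ffmpeg", "-y"]
    if start is not None:
        cmd += ["-ss", str(start)]      # seek to the segment start (seconds)
    if duration is not None:
        cmd += ["-t", str(duration)]    # limit the segment length (seconds)
    # The fps and scale filters control frame rate and output size;
    # -1 keeps the aspect ratio, lanczos gives sharper downscaling.
    cmd += ["-i", src, "-vf", f"fps={fps},scale={width}:-1:flags=lanczos", dst]
    return cmd

print(" ".join(gif_command("clip.mp4", "out.gif",
                           fps=10, width=320, start=5, duration=3)))
```

Higher-quality output usually adds a two-pass `palettegen`/`paletteuse` filter chain so the GIF's 256-color palette is tuned to the clip.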
A modern, responsive website project for the XACHE cryptocurrency trading platform, including adaptive design, glassmorphism UI effects, and smooth animations.
The SPINE2D Animation MCP Server provides tools for creating character animations from PSD files, supporting animation generation from natural-language descriptions, automatic bone rigging, preview, and export.
The LottieFiles MCP Server is a Model Context Protocol service for searching and retrieving Lottie animations, providing functions for animation search, detail retrieval, and a list of popular animations.
Pokime is an MCP server that provides services for real-time querying of Pokémon data and anime information, supporting Pokémon attribute comparison and retrieval of multi-season anime metadata.
A Manim animation generation server based on the MCP protocol, which can receive scripts and return rendered videos, supporting dynamic generation of mathematical animations and integration with Claude.
The ReactBits MCP Server is a Model Context Protocol service that enables AI assistants to access the ReactBits.dev component library. It contains over 135 animated React components and supports functions such as component discovery, smart search, and style selection.
This project provides an MCP server for interacting with the Wolfram Mathematica kernel, supporting the execution of Wolfram Language code in a secure session environment, and features an easy-to-remember, animal-themed session ID generation tool designed for LLMs.
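The project's actual ID scheme isn't specified here, but an easy-to-remember, animal-themed session ID generator of the kind described might look like this (word lists and format are illustrative assumptions):

```python
import random

# Hypothetical sketch of an animal-themed session-ID generator in the
# spirit of the project description; word lists and the ID format are
# assumptions, not the project's actual implementation.
ADJECTIVES = ["brave", "calm", "eager", "fuzzy", "merry", "swift"]
ANIMALS = ["otter", "lynx", "heron", "badger", "falcon", "marmot"]

def session_id(rng=random):
    """Return an easy-to-remember ID such as 'swift-otter-42'."""
    return f"{rng.choice(ADJECTIVES)}-{rng.choice(ANIMALS)}-{rng.randrange(100)}"

print(session_id())
```

IDs like these are easier for an LLM to repeat back verbatim across turns than opaque hex tokens, which is presumably the motivation behind the design.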
A Manim animation generation server based on the MCP protocol that executes Manim scripts and returns the rendered videos, supporting dynamic animation generation and integration with Claude.
The LottieFiles MCP server is a Model Context Protocol service for searching and retrieving Lottie animations.