ParaAttention
Context parallel attention that accelerates DiT model inference with dynamic caching
Created: 2024-10-28T19:01:12
Updated: 2025-03-26T20:30:13
Stars: 347
Stars increase: 1
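The tagline names two techniques: context-parallel attention (sharding the token sequence across devices so each GPU computes attention only for its slice) and dynamic caching of intermediate results during DiT inference. The snippet below is a minimal conceptual sketch of the context-parallel part only, written against plain PyTorch and torch.distributed rather than ParaAttention's own API (which is not shown here); all function and variable names are hypothetical illustrations, not the library's interface.

```python
# Conceptual sketch of context-parallel attention (NOT ParaAttention's actual API):
# each rank holds a shard of the sequence, all-gathers keys/values, and computes
# attention for its local queries.
import torch
import torch.distributed as dist
import torch.nn.functional as F

def context_parallel_attention(q_local, k_local, v_local):
    """q/k/v_local: [batch, heads, local_seq, head_dim] shards of the full sequence."""
    world_size = dist.get_world_size()
    # Gather the key/value shards from every rank to rebuild the full context.
    k_shards = [torch.empty_like(k_local) for _ in range(world_size)]
    v_shards = [torch.empty_like(v_local) for _ in range(world_size)]
    dist.all_gather(k_shards, k_local)
    dist.all_gather(v_shards, v_local)
    k_full = torch.cat(k_shards, dim=2)
    v_full = torch.cat(v_shards, dim=2)
    # Each rank attends its local queries over the full key/value context.
    return F.scaled_dot_product_attention(q_local, k_full, v_full)

def main():
    dist.init_process_group(backend="gloo")  # gloo so the sketch also runs on CPU
    torch.manual_seed(0)
    batch, heads, seq, dim = 1, 8, 1024, 64
    rank, world = dist.get_rank(), dist.get_world_size()
    local = seq // world
    # Pretend each rank was handed its slice of the (identical, seeded) full tensors.
    q = torch.randn(batch, heads, seq, dim)
    k = torch.randn(batch, heads, seq, dim)
    v = torch.randn(batch, heads, seq, dim)
    sl = slice(rank * local, (rank + 1) * local)
    out_local = context_parallel_attention(q[:, :, sl], k[:, :, sl], v[:, :, sl])
    print(f"rank {rank}: local attention output shape {tuple(out_local.shape)}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Run under torchrun, e.g. `torchrun --nproc_per_node=2 sketch.py`; each rank keeps 1/N of the query tokens and exchanges only the key/value shards. The dynamic-caching side of the tagline (reusing intermediate activations across diffusion steps) is a separate mechanism and is not illustrated here.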