
flash-linear-attention

Public

Efficient implementations of state-of-the-art linear attention models in Torch and Triton

Created: 2023-12-20T14:50:18
Updated: 2025-03-27T10:48:59
https://github.com/fla-org/flash-linear-attention
Stars: 2.8K
Stars increase: 23
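The repository's description refers to linear attention, which replaces the quadratic-cost softmax attention with a recurrence over a fixed-size state. As a rough sketch (not the library's actual API, which ships fused Triton kernels), the causal, unnormalized form can be written as a NumPy reference: maintain a running state S_t = S_{t-1} + k_t v_t^T and emit o_t = q_t S_t.

```python
import numpy as np

def linear_attention(q, k, v):
    """Reference sketch of causal linear attention in recurrent form.

    q: (T, d_k), k: (T, d_k), v: (T, d_v). Maintains a running state
    S of shape (d_k, d_v) so each step is O(d_k * d_v), independent of
    sequence length. This is only an illustrative NumPy loop; the
    flash-linear-attention library implements such recurrences as
    hardware-efficient Torch/Triton kernels.
    """
    T, d_k = q.shape
    d_v = v.shape[1]
    S = np.zeros((d_k, d_v))
    out = np.empty((T, d_v))
    for t in range(T):
        S += np.outer(k[t], v[t])   # accumulate key-value outer products
        out[t] = q[t] @ S           # query the running state
    return out
```

Because o_t = q_t * sum_{s<=t} k_s v_s^T = sum_{s<=t} (q_t . k_s) v_s, this loop matches the causally masked quadratic form `np.tril(q @ k.T) @ v`, just computed in O(T) steps over a constant-size state.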