Don't miss any moment of global AI innovation
Daily three-minute AI industry trends
AI industry milestones
Listing of all AI hardware products
AI monetization case studies
Monetization cases for AI image creation
Monetization cases for AI video creation
Monetization cases for AI audio creation
Monetization cases for AI content writing
The latest AI tutorials, shared for free
Total visits ranking of AI websites
Fastest-growing AI websites by traffic
AI websites with significant traffic drops
Weekly visits ranking of AI websites
AI websites most popular with US users
AI websites most popular with Chinese users
AI websites most popular with Indian users
AI websites most popular with Brazilian users
Total visits ranking of AI image generation websites
Total visits ranking of AI personal assistant websites
Total visits ranking of AI character generation websites
Total visits ranking of AI video generation websites
Popular AI projects on GitHub by total stars
Popular AI projects on GitHub by growth rate
Ranking of popular AI developers on GitHub
Ranking of popular AI organizations on GitHub
Popular DeepSeek open-source projects on GitHub
Popular TTS open-source projects on GitHub
Popular LLM open-source projects on GitHub
Popular ChatGPT open-source projects on GitHub
Overview of popular AI open-source projects on GitHub
[NeurIPS'24] Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
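As a rough illustration of the kind of setup DeepSpeed enables, here is a minimal training-loop sketch; the model, batch shapes, and config values are illustrative assumptions, not the library's recommended settings, and the script would normally be launched through the `deepspeed` CLI:

```python
# Minimal DeepSpeed sketch (assumed config values); typically run as:
#   deepspeed train.py
import torch
import deepspeed

model = torch.nn.Linear(512, 10)  # stand-in model for illustration

ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},          # mixed precision (requires a GPU)
    "zero_optimization": {"stage": 2},  # ZeRO stage-2 optimizer-state sharding
}

# initialize() wraps the model in an engine that handles data parallelism,
# optimizer-state partitioning, and loss scaling.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

x = torch.randn(8, 512, device=engine.device)
y = torch.randint(0, 10, (8,), device=engine.device)
loss = torch.nn.functional.cross_entropy(engine(x), y)
engine.backward(loss)  # scaled backward pass + gradient sync
engine.step()          # optimizer step and gradient zeroing
```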
SGLang is a fast serving framework for large language models and vision language models.
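For context, SGLang exposes an OpenAI-compatible HTTP API once a server is running. A minimal client sketch, assuming a server started separately with `python -m sglang.launch_server --model-path <model> --port 30000` (the port and model name below are placeholder assumptions):

```python
# Query an SGLang server via its OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # assumed model identifier
    messages=[{"role": "user", "content": "Give a one-line summary of MoE."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```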
An unofficial UI-first app client for https://bgm.tv on Android and iOS, built with React Native. An ad-free, hobby-driven, non-profit ACG tracker in the style of Douban, serving as a third-party bgm.tv client. Redesigned for mobile, it bundles many enhanced features that are hard to implement on the web version and offers extensive customization options. Currently supports iOS / Android / WSA, phone and basic tablet layouts, light / dark themes, and the mobile web.
Decentralized deep learning in PyTorch. Built to train models across thousands of volunteer machines around the world.
Mixture-of-Experts for Large Vision-Language Models
MoBA: Mixture of Block Attention for Long-Context LLMs
LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024)
Tutel MoE: An Optimized Mixture-of-Experts Implementation
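Several of the projects above center on Mixture-of-Experts. As a generic illustration of the routing idea (not code from any of these repositories), here is a toy top-1 gated MoE layer in PyTorch:

```python
# Toy top-1 gated mixture-of-experts feed-forward layer (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, dim, num_experts=4, hidden=256):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (tokens, dim)
        scores = F.softmax(self.gate(x), dim=-1)  # router probabilities
        top_p, top_idx = scores.max(dim=-1)       # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i                   # tokens routed to expert i
            if mask.any():
                out[mask] = top_p[mask, None] * expert(x[mask])
        return out

moe = TinyMoE(dim=64)
print(moe(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```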
Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models
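A minimal usage sketch for dropping Adan in as the optimizer; the import path and hyperparameters follow the project's README but should be treated as assumptions here:

```python
# Swap Adan in for a stock PyTorch optimizer (assumed import and defaults).
import torch
from adan import Adan  # provided by the Adan repository

model = torch.nn.Linear(512, 10)
optimizer = Adan(
    model.parameters(),
    lr=1e-3,
    betas=(0.98, 0.92, 0.99),  # three momentum coefficients (assumed defaults)
    weight_decay=0.02,
)

x, y = torch.randn(8, 512), torch.randint(0, 10, (8,))
loss = torch.nn.functional.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()  # Adan's adaptive Nesterov-momentum update
```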