Vision transformers with JAX & Flax (ViT, DeiT, LeViT, MAE, ConvPass)
Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
21 Lessons, Get Started Building with Generative AI: https://microsoft.github.io/generative-ai-for-beginners/
Deep Learning for humans
60+ implementations/tutorials of deep learning papers with side-by-side notes; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), GANs (cyclegan, stylegan2, ...), reinforcement learning (ppo, dqn), capsnet, distillation, ...
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
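The JAX tagline above names its three core composable transformations: `jax.grad` (differentiate), `jax.vmap` (vectorize), and `jax.jit` (compile for GPU/TPU). A minimal sketch of composing all three on an illustrative toy loss function (the function and shapes here are made up for the example):

```python
import jax
import jax.numpy as jnp

# A plain Python+NumPy-style function: a toy quadratic "loss".
def loss(w, x):
    return jnp.sum((x @ w) ** 2)

# Differentiate with respect to the first argument (w).
grad_loss = jax.grad(loss)

# Vectorize the gradient over a batch of inputs x (w is shared).
batched_grad = jax.vmap(grad_loss, in_axes=(None, 0))

# JIT-compile the whole composed function.
fast_batched_grad = jax.jit(batched_grad)

w = jnp.ones(3)
xs = jnp.ones((4, 3))
grads = fast_batched_grad(w, xs)
print(grads.shape)  # (4, 3): one gradient per batch element
```

The transformations compose freely because each one takes a function and returns a function, which is what "composable transformations of Python+NumPy programs" refers to.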
Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch and FLAX.
Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities from 70 countries including Stanford, MIT, Harvard, and Cambridge.
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
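The "single transformer encoder" pipeline named above (patchify the image, linearly embed the patches, run them through self-attention, pool, classify) can be sketched in a few lines of JAX. This is an illustrative toy sketch with invented shapes and a single single-head attention block, not the repository's implementation:

```python
import jax
import jax.numpy as jnp

def patchify(img, patch):
    # (H, W, C) -> (num_patches, patch*patch*C) flattened patch vectors
    h, w, c = img.shape
    img = img.reshape(h // patch, patch, w // patch, patch, c)
    img = img.transpose(0, 2, 1, 3, 4)
    return img.reshape(-1, patch * patch * c)

def attention(x, wq, wk, wv):
    # Single-head scaled dot-product self-attention over patch tokens.
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]), axis=-1)
    return scores @ v

def vit_forward(params, img, patch=4):
    tokens = patchify(img, patch) @ params["embed"]       # linear patch embedding
    tokens = tokens + attention(tokens, *params["attn"])  # one encoder block (residual)
    pooled = tokens.mean(axis=0)                          # mean-pool the tokens
    return pooled @ params["head"]                        # class logits

# Hypothetical sizes: 32x32 RGB image, 4x4 patches, width 16, 10 classes.
d, patch, c, classes = 16, 4, 3, 10
ks = jax.random.split(jax.random.PRNGKey(0), 5)
params = {
    "embed": jax.random.normal(ks[0], (patch * patch * c, d)) * 0.02,
    "attn": tuple(jax.random.normal(k, (d, d)) * 0.02 for k in ks[1:4]),
    "head": jax.random.normal(ks[4], (d, classes)) * 0.02,
}
logits = vit_forward(params, jnp.ones((32, 32, c)))
print(logits.shape)  # (10,)
```

A full ViT additionally uses positional embeddings, a class token, multi-head attention, MLP sublayers, and layer norm; the sketch only shows the overall data flow.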
AI orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.