The officially released code for FRF-GCN, accepted at AAAI-24.
60+ implementations/tutorials of deep learning papers with side-by-side notes; including transformers (original, XL, Switch, Feedback, ViT, ...), optimizers (Adam, AdaBelief, Sophia, ...), GANs (CycleGAN, StyleGAN2, ...), reinforcement learning (PPO, DQN), CapsNet, distillation, ...
Implementation of the Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch.
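As a rough illustration of the idea behind that repo (not its actual code or API), a patch-embedding plus single transformer-encoder classifier can be sketched in a few lines of PyTorch; the image size, patch size, and module names below are illustrative assumptions.

```python
# Minimal Vision Transformer sketch (illustrative only; not the repo's implementation).
# Assumed setup: 32x32 RGB images split into 8x8 patches, 10 output classes.
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, image_size=32, patch_size=8, num_classes=10, dim=128, depth=4, heads=4):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        patch_dim = 3 * patch_size * patch_size
        self.patch_size = patch_size
        self.to_patch_embedding = nn.Linear(patch_dim, dim)      # flatten each patch, project to dim
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))    # learnable [CLS] token
        self.pos_embedding = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)  # single transformer encoder
        self.head = nn.Linear(dim, num_classes)                  # classify from the [CLS] token

    def forward(self, img):                                      # img: (B, 3, H, W)
        p = self.patch_size
        B, C, H, W = img.shape
        # split into non-overlapping p x p patches and flatten each one
        x = img.unfold(2, p, p).unfold(3, p, p)                  # (B, C, H/p, W/p, p, p)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)
        x = self.to_patch_embedding(x)                           # (B, N, dim)
        cls = self.cls_token.expand(B, -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embedding
        x = self.encoder(x)
        return self.head(x[:, 0])                                # logits: (B, num_classes)

logits = TinyViT()(torch.randn(2, 3, 32, 32))                    # -> torch.Size([2, 10])
```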
Introductory deep learning tutorials and selected articles (Deep Learning Tutorial).
Machine learning, in numpy
"Hung-yi Lee Deep Learning Tutorial" (LeeDL Tutorial, recommended by Prof. Hung-yi Lee, also known as the "Apple Book"). PDF download: https://github.com/datawhalechina/leedl-tutorial/releases
Natural Language Processing Tutorial for Deep Learning Researchers
RWKV (pronounced RwaKuv) is an RNN with great LLM performance that can also be trained directly like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". It combines the best of RNN and transformer: great performance, linear time, constant space (no KV cache), fast training, infinite ctx_len, and free sentence embedding.
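To make the "parallel training, constant-space inference" idea concrete, here is a toy decayed linear recurrence in PyTorch. This is not RWKV's actual formulation; it only shows how one family of linear-recurrence layers admits both a step-by-step form with an O(1) state (no KV cache) and an equivalent parallel form over the whole sequence.

```python
# Toy linear-recurrence layer (NOT RWKV's equations) illustrating the dual form:
# the same output can be computed step by step with a constant-size state (inference)
# or in parallel over all timesteps at once (training).
import torch

T, D = 6, 4
torch.manual_seed(0)
k, v = torch.randn(T, D), torch.randn(T, D)
decay = torch.sigmoid(torch.randn(D))                      # per-channel decay in (0, 1), assumed learnable

# Recurrent form: s_t = decay * s_{t-1} + k_t * v_t, constant memory per step.
s = torch.zeros(D)
recurrent_out = []
for t in range(T):
    s = decay * s + k[t] * v[t]
    recurrent_out.append(s.clone())
recurrent_out = torch.stack(recurrent_out)

# Parallel form: out_t = sum_{i<=t} decay^(t-i) * k_i * v_i, computed for all t at once.
idx = torch.arange(T)
powers = (idx[:, None] - idx[None, :]).clamp(min=0)        # (T, T) exponents t - i
mask = (idx[:, None] >= idx[None, :]).float()              # causal mask
weights = mask[..., None] * decay[None, None, :] ** powers[..., None]
parallel_out = torch.einsum("tid,id->td", weights, k * v)

assert torch.allclose(recurrent_out, parallel_out, atol=1e-5)
```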
Cross-device & high-performance normal form / dynamic (JSON Schema) form / form builder -- supports React, React Native, Vue 2, and Vue 3.
PyTorch implementations of various attention mechanisms, MLPs, re-parameterization modules, and convolutions, which are helpful for further understanding the papers.
A PyTorch implementation of the Transformer model in "Attention is All You Need".
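For reference, the scaled dot-product attention at the core of the two repos above, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V from "Attention is All You Need", can be sketched in a few lines of PyTorch; the tensor shapes below are illustrative.

```python
# Scaled dot-product attention as defined in "Attention is All You Need".
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, d_k)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))   # (batch, heads, q_len, k_len)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))  # block disallowed positions
    attn = scores.softmax(dim=-1)                              # attention weights
    return attn @ v                                            # weighted sum of values

q = k = v = torch.randn(2, 4, 10, 16)                          # batch=2, heads=4, len=10, d_k=16
out = scaled_dot_product_attention(q, k, v)                    # -> torch.Size([2, 4, 10, 16])
```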