Custom-Transformer-Pytorch
A clean, ground-up implementation of the Transformer architecture in PyTorch, including positional encoding, multi-head attention, encoder-decoder layers, and masking. Great for learning or building upon the core model.
Topics: attention-mechanism, deep-learning, encoder-decoder, machine-learning, nlp, positional-encoding, pytorch, self-attention, sequence-to-sequence, transformers
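As an illustration of two of the components named above, here is a minimal sketch in PyTorch of sinusoidal positional encoding and a causal (look-ahead) mask. This is not the repository's actual code; the class name, function name, and defaults below are assumptions for illustration only.

```python
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Sinusoidal positional encoding, as in "Attention Is All You Need".
    Illustrative sketch only; not taken from this repository."""

    def __init__(self, d_model: int, max_len: int = 5000):
        super().__init__()
        # Precompute a (max_len, d_model) table of sin/cos values.
        position = torch.arange(max_len).unsqueeze(1)  # (max_len, 1)
        div_term = torch.exp(
            torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model)
        )
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)   # even dimensions
        pe[:, 1::2] = torch.cos(position * div_term)   # odd dimensions
        self.register_buffer("pe", pe)                 # fixed, not learned

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); add the encoding for each position.
        return x + self.pe[: x.size(1)]

def causal_mask(seq_len: int) -> torch.Tensor:
    # Upper-triangular mask: True where a position must not attend
    # to a future position (used in the decoder's self-attention).
    return torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()

# Usage:
x = torch.randn(2, 10, 512)        # (batch, seq_len, d_model)
out = PositionalEncoding(512)(x)   # same shape, positions encoded
mask = causal_mask(10)             # True where attention is blocked
```

The encoding table is registered as a buffer rather than a parameter so it moves with the module across devices but receives no gradient updates.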
Created: 2025-05-22T13:29:43
Updated: 2025-06-02T19:47:02
Stars: 14