LLM-distributed-finetune
Efficiently fine-tune any LLM from Hugging Face using distributed training (multiple GPUs) and DeepSpeed. Uses Ray AIR to orchestrate training across multiple AWS GPU instances.
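The description implies a Ray AIR (Ray Train) job that runs a Hugging Face fine-tuning loop with DeepSpeed on each GPU worker. Below is a minimal sketch of how such a job might be wired up; it is not the repository's actual code, and the model name, dataset, and DeepSpeed settings are illustrative assumptions.

```python
# A minimal sketch, assuming Ray >= 2.x plus the transformers, datasets,
# and deepspeed packages. Not the repository's actual code; model name,
# dataset, and DeepSpeed settings are placeholders.
from datasets import load_dataset
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)


def train_loop_per_worker(config):
    # Each Ray worker runs this loop on its own GPU; Ray Train sets up the
    # torch.distributed environment that DeepSpeed then picks up.
    model = AutoModelForCausalLM.from_pretrained(config["model_name"])
    tokenizer = AutoTokenizer.from_pretrained(config["model_name"])
    tokenizer.pad_token = tokenizer.eos_token

    dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

    args = TrainingArguments(
        output_dir="/tmp/finetune",
        per_device_train_batch_size=1,
        num_train_epochs=1,
        # ZeRO stage 3 shards optimizer state, gradients, and parameters
        # across GPUs, so a large model fits in aggregate cluster memory.
        deepspeed={
            "train_micro_batch_size_per_gpu": 1,
            "zero_optimization": {"stage": 3},
        },
    )
    Trainer(
        model=model,
        args=args,
        train_dataset=tokenized,
        # mlm=False copies input_ids into labels for causal-LM fine-tuning.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()


trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"model_name": "tiiuae/falcon-7b"},
    # e.g. 4 GPU workers spread across the AWS instances in the Ray cluster
    scaling_config=ScalingConfig(num_workers=4, use_gpu=True),
)
result = trainer.fit()
```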
Topics: aws, deep-learning, distributed-training, falcon, fine-tuning, huggingface, large-language-models, natural-language-processing, transformers
Created: 2023-06-18T18:54:23
Updated: 2025-03-25T00:09:37
Stars: 59
Stars increase: 0