MachineLearningLM-7B-v1 is a large language model continually pre-trained from Qwen/Qwen2.5-7B-Instruct on millions of synthetic tabular machine-learning tasks. It is optimized for tabular classification and supports few-shot in-context learning with 8 to 1,024 examples.
Task: Natural Language Processing · Library: Transformers
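
The snippet below is a minimal sketch of few-shot in-context tabular classification with the Transformers library. The repository id, the feature/label serialization, and the chat-style prompt layout are assumptions for illustration, not the model's documented prompt format.

```python
# Minimal sketch: load the model with Transformers and run a few-shot
# tabular-classification prompt. Repo id and prompt layout are assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MachineLearningLM/MachineLearningLM-7B-v1"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Hypothetical tabular task: each in-context example is a serialized row
# of features followed by its label; the query row's label is left blank
# for the model to predict.
shots = [
    ("age=25, income=48000, owns_home=no", "declined"),
    ("age=41, income=92000, owns_home=yes", "approved"),
    ("age=33, income=67000, owns_home=yes", "approved"),
]
query = "age=29, income=51000, owns_home=no"

prompt_lines = [f"{features} -> {label}" for features, label in shots]
prompt_lines.append(f"{query} ->")
messages = [{"role": "user", "content": "\n".join(prompt_lines)}]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=8)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

In practice the same pattern scales to many more in-context rows (up to the advertised 1,024 examples), limited mainly by the context window and GPU memory.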