Hey everyone! I'm Andrew, the dev of AITraining.
I built this because I kept losing time to trainer boilerplate instead of actually iterating on models. The other frustration was hardware: code that worked on NVIDIA GPUs would break on my Mac's MPS backend, and tools like Hugging Face's AutoTrain didn't handle those edge cases well.
AITraining wraps all of that into a CLI wizard that walks you through model selection, dataset conversion (auto-detects 6 formats), and training config. It supports SFT, DPO, ORPO, PPO, reward modeling, and knowledge distillation. After training, aitraining chat lets you test and compare iterations locally.
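To give a feel for what "auto-detects 6 formats" means in practice, here's a minimal sketch of key-based format detection for a single dataset record. This is purely illustrative, not AITraining's actual detector; the format names and key sets are my assumptions based on common conventions (OpenAI-style chat, DPO preference pairs, Alpaca-style instructions).

```python
def guess_record_format(record: dict) -> str:
    """Guess a training record's format from its keys.

    Hypothetical heuristic for illustration only; AITraining's
    real detection logic may differ.
    """
    keys = record.keys()
    if "messages" in keys:
        return "chat"              # OpenAI-style list of role/content messages
    if {"prompt", "chosen", "rejected"} <= keys:
        return "preference"        # DPO/ORPO-style preference pairs
    if {"prompt", "completion"} <= keys:
        return "prompt-completion"
    if {"instruction", "output"} <= keys:
        return "instruction"       # Alpaca-style instruction tuning
    if "text" in keys:
        return "plain-text"        # raw text for continued pretraining
    return "unknown"


# Example: a DPO-style record is classified as preference data.
sample = {"prompt": "Hi", "chosen": "Hello!", "rejected": "Go away."}
print(guess_record_format(sample))
```

The nice thing about key-based detection is that it works on the first record alone, so the wizard can label a dataset before reading the whole file.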
It works on consumer hardware: it auto-detects Apple Silicon vs. CUDA and optimizes the training config accordingly.
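The device-selection idea is simple to sketch. The preference order below (CUDA > MPS > CPU) is my assumption, not necessarily AITraining's; in a real PyTorch setup the two flags would come from torch.cuda.is_available() and torch.backends.mps.is_available(), which I've factored out as parameters here to keep the sketch dependency-free.

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Pick a torch device string given availability flags.

    Assumed preference order: CUDA, then Apple Silicon's MPS,
    then CPU as the fallback.
    """
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"


# e.g. on an M-series Mac: cuda=False, mps=True -> "mps"
print(pick_device(False, True))
```

Centralizing this in one function is what lets the same training code run unchanged on an NVIDIA box and an M-series Mac.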
Built on the Hugging Face ecosystem and open source (Apache 2.0). Docs are available in English, Spanish, Chinese, and Portuguese.
Would love to hear what training workflows or features you'd find useful. PRs welcome!