
Fine-tune and train LLMs up to 30x faster with 90% less memory
Unsloth is an open-source platform for fine-tuning and reinforcement learning (RL) on large language models. Through custom GPU kernels and hand-derived math, it cuts the time and memory needed to fine-tune LLMs, achieving up to 30x faster training with 90% less VRAM than traditional methods. Unsloth supports 500+ models, including Llama 4, DeepSeek-R1, Gemma, Qwen3, Mistral, and OpenAI gpt-oss, with training modes for full fine-tuning, LoRA, QLoRA, pre-training, and reinforcement learning. It runs on NVIDIA GPUs from the Tesla T4 to the H100, with portability to AMD and Intel GPUs. Unsloth offers free notebooks for Google Colab and Kaggle, making LLM fine-tuning accessible to individual developers, while Pro and Enterprise tiers unlock multi-GPU and multi-node training for teams at scale.