Low-code framework for building custom LLMs and AI models
Ludwig is a declarative machine learning framework that lets you build, train, and fine-tune custom LLMs, neural networks, and other AI models using a simple YAML configuration. Originally developed at Uber, it supports multi-modal learning and distributed training, and integrates with models from the Hugging Face ecosystem.
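As a sketch of what that declarative workflow looks like, a minimal config for a text classifier might be the following. The dataset columns (`review`, `sentiment`) and the encoder choice are illustrative assumptions; field names follow Ludwig's config schema:

```yaml
# Hypothetical dataset with a "review" text column and a "sentiment" label column
input_features:
  - name: review
    type: text
    encoder:
      type: parallel_cnn
output_features:
  - name: sentiment
    type: category
trainer:
  epochs: 10
  batch_size: auto   # let Ludwig pick the batch size automatically
```

Training then reduces to a single CLI call along the lines of `ludwig train --config config.yaml --dataset reviews.csv`.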
Define model architecture, training, and evaluation with simple YAML instead of writing code
Fine-tune state-of-the-art large language models on your own data with parameter-efficient methods like LoRA and QLoRA
Mix and match tabular, text, image, and audio features within a single model configuration
Scale training with DDP, DeepSpeed, and Ray on Kubernetes for large datasets
Natively use any pre-trained model from the Hugging Face Transformers library
Automatic batch size selection, hyperparameter optimization, and data preprocessing
Export models to TorchScript and Triton, or upload to Hugging Face with one command
Fine-tune large language models on domain-specific data for better task performance
Build models that combine text, images, and structured data for complex classification tasks
Quickly experiment with different model architectures and hyperparameters using YAML config changes
Deploy trained models to production with TorchScript export and Docker containers
Swap encoders, decoders, and other components; extend with custom PyTorch modules
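The LLM fine-tuning path described above (LoRA/QLoRA against a Hugging Face base model) can be sketched with a config along these lines. The base model, prompt template, and feature names are placeholders, and the field names follow Ludwig's LLM config schema in recent releases:

```yaml
model_type: llm
base_model: meta-llama/Llama-2-7b-hf   # any Hugging Face causal LM you have access to
prompt:
  template: "Answer the question: {question}"
adapter:
  type: lora          # parameter-efficient fine-tuning
quantization:
  bits: 4             # 4-bit loading makes this a QLoRA-style setup
input_features:
  - name: question
    type: text
output_features:
  - name: answer
    type: text
trainer:
  type: finetune
  epochs: 3
```

The same `ludwig train` command applies; only the config changes, which is the point of the declarative approach.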

AI-powered SQL client that turns natural language into database queries