
Framework for programming—not prompting—language models
DSPy is an open-source framework from Stanford NLP that replaces manual prompt engineering with programmatic optimization. Define input/output signatures and let DSPy's optimizers automatically find the best prompts, few-shot examples, and fine-tuning strategies for your LLM pipeline.
Define what your LLM should do with typed input/output specifications instead of hand-crafted prompts
Optimizers like MIPROv2 automatically generate and test instructions and few-shot examples
Reflective optimizers analyze program trajectories to identify failures and propose prompt improvements
Other optimizers use stochastic mini-batch sampling to surface challenging examples and generate self-improvement rules
Compile high-level code into optimized prompts or weight updates aligned with your metrics
Combine retrieval, generation, classification, and reasoning modules into complex pipelines
Works with OpenAI, Anthropic, Google, and local models through a unified interface
Build and optimize retrieval-augmented generation systems with automatic prompt tuning for better accuracy
Create LLM-based classifiers that automatically optimize prompts to maximize classification accuracy
Build complex reasoning chains where DSPy optimizes each step's prompts for the overall pipeline metric
Systematically explore prompt strategies and model configurations with reproducible optimization
Best LangChain alternative for prompt optimization: unlike chain-orchestration libraries, DSPy replaces manual prompt engineering with systematic, reproducible optimization
Best for AI engineers building production LLM pipelines who want algorithmic prompt optimization — delivers measurable quality gains over manual engineering, but requires technical investment to learn.
Add runtime constraints and assertions to guide LLM behavior programmatically
