
OpenTelemetry-native observability for GenAI and LLM applications
OpenLIT is an open-source AI engineering platform that provides OpenTelemetry-native observability for LLM applications, vector databases, and GPU workloads. With a single line of code, it auto-instruments 50+ LLM providers and AI frameworks to deliver unified traces, metrics, cost tracking, and evaluations.
- Automatically instruments 50+ LLM providers, vector databases, and AI frameworks with a single line of code
- Tracks token usage and calculates API costs across all integrated providers in real time
- Monitors NVIDIA GPU utilization, power draw, memory usage, and temperature for self-hosted LLM workloads
- Centralized prompt management system for versioning, organizing, and retrieving prompts across AI applications
- Side-by-side LLM playground for comparing cost, latency, and response quality across multiple providers
- Online LLM-as-a-judge evaluation pipeline for zero-setup AI quality monitoring
- Secure credential store for LLM API keys that applications can retrieve remotely at runtime
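The one-line setup the features above describe looks roughly like this (a minimal sketch based on OpenLIT's Python SDK; the endpoint and application name values are illustrative, not defaults):

```python
# Minimal sketch of instrumenting an app with the OpenLIT Python SDK.
# Assumes `pip install openlit`; argument values here are illustrative.
import openlit

# One call auto-instruments supported LLM providers, vector databases, and
# (where available) GPU metrics, emitting OpenTelemetry traces and metrics.
openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",  # OTLP endpoint (e.g. an OpenLIT deployment)
    application_name="my-llm-app",          # shows up as the service name on telemetry
)

# From here on, calls made through instrumented clients (OpenAI, LangChain,
# LlamaIndex, etc.) are traced automatically; no per-call changes are needed.
```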
- Track latency, token usage, cost, and error rates across production LLM calls from 50+ providers
- Identify expensive prompts and compare cheaper model alternatives side by side
- Monitor NVIDIA GPU health and utilization alongside LLM inference metrics
- Trace complex multi-step AI agent pipelines built with LangChain, LlamaIndex, or CrewAI
- Real-time interactive dashboards backed by ClickHouse for visualizing metrics, traces, and cost analytics
- Standard OTLP output enabling seamless export to Grafana, Datadog, New Relic, Jaeger, and more
- Centrally manage, version, and deploy prompts while securely storing API credentials
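Because the output is standard OTLP, retargeting another backend is just an endpoint change. A sketch using the standard OpenTelemetry environment variables (endpoint and header values are placeholders):

```shell
# Point any OTLP-compatible backend at the telemetry via standard OTel env vars.
# Replace the endpoint and token with values from your observability vendor.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otlp.example.com:4318"
export OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer%20<token>"
```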