
The open-source AI coding assistant for VS Code and JetBrains
Continue is an open-source AI coding assistant that integrates directly into VS Code and JetBrains IDEs, offering chat, inline editing, autocomplete, and autonomous agent capabilities. It supports any LLM — including OpenAI, Anthropic, Gemini, and local models via Ollama — giving teams full control over their AI stack without vendor lock-in. Teams can also use Continue's CI-integrated PR review checks to enforce custom code standards across every pull request.
Context-aware chat assistant embedded in your editor. Select code, ask questions, get explanations, and apply changes without leaving VS Code or JetBrains.
Trigger inline edits with a keyboard shortcut, describe a change in natural language, and have the AI rewrite the selected code block in place.
Real-time, AI-powered inline code suggestions as you type, functioning like an enhanced IntelliSense backed by any supported LLM, including local models.
Autonomous agent that can plan multi-step tasks, search the codebase, edit files, run terminal commands, and iterate — reducing manual context gathering.
Connect any AI model provider: OpenAI, Anthropic, Google Gemini, Mistral, Cohere, or local models via Ollama. No forced vendor dependency.
Integrates with MCP-compatible tools such as GitHub, Sentry, Snyk, and Linear to pull live context from your development workflow into AI responses.
Teams handling sensitive or proprietary code can connect Continue to local Ollama models, keeping all code and prompts entirely on-premises.
Developers who want to compare different AI models (Claude vs GPT-4 vs Gemini) for coding tasks without switching tools or paying for multiple subscriptions.
Engineering teams enforce custom code standards by defining checks as markdown files. Every PR gets AI-powered pass/fail analysis as a GitHub status check.
Organizations that want all developers to use the same AI rules and context sources can commit a .continue/ directory to the repository so everyone inherits those settings.
Define custom code standards as markdown files. Continue runs AI checks on every pull request and reports pass/fail as GitHub status checks.
Store shared AI rules, prompts, and standards in .continue/rules/ inside your repository so all team members use consistent configuration.
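For illustration, a shared rule file such as .continue/rules/error-handling.md could look like the sketch below; the frontmatter field shown (globs) is an assumption, so check Continue's rules documentation for the exact schema your version supports.

    ---
    # Assumed frontmatter: scope this rule to TypeScript source files
    globs: "src/**/*.ts"
    ---

    Wrap every external API call in try/catch and report failures
    through the shared logger instead of console.log.

The rule body is plain natural-language guidance; Continue includes applicable rules in the prompt so that chat, edit, and review output follows them consistently across the team.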
Run entirely on local models (Ollama, llama.cpp) to keep code and prompts 100% private and never send data to third-party cloud providers.
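As a rough sketch, a fully local setup might point chat, edit, and autocomplete at Ollama models in Continue's configuration; the model names and field layout below are assumptions based on the YAML config format, so consult the configuration reference for your Continue version.

    models:
      # General-purpose local model for chat and inline edits
      - name: Llama 3.1 8B
        provider: ollama
        model: llama3.1:8b
        roles:
          - chat
          - edit
      # Smaller, faster local model dedicated to autocomplete
      - name: Qwen2.5 Coder 1.5B
        provider: ollama
        model: qwen2.5-coder:1.5b
        roles:
          - autocomplete

With only local providers configured, requests stay on the developer's machine and no source code or prompts are sent to a third-party cloud API.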
Apache 2.0 licensed. Fully customizable through configuration files, custom context providers, slash commands, and community-built extensions.
Senior developers use Agent mode to delegate large, multi-step refactoring tasks while retaining review control.

Modular open-source ERP for manufacturing & beyond