
Editors and LLM Providers: Choosing Your AI Coding Stack

A practical overview of the editors, CLI tools, and LLM providers you can combine for AI-assisted development — from proprietary to fully open-source.


The landscape in 2025-2026

AI-assisted coding is no longer one tool. It's a stack: an editor (or CLI) that provides the UX, connected to one or more LLM providers that supply the intelligence. You can mix and match — and you should understand the tradeoffs before locking in.

This guide maps out the main options, what each is good at, and how to combine them.


Editors and IDE integrations

Cursor

  • Based on: VS Code fork
  • Best for: Deep AI integration (Composer, Tab, Chat), multi-file edits, checkpoints
  • LLM providers: Built-in (Claude, GPT-4, etc.) via subscription; supports API keys for custom models
  • Strengths: Composer for multi-step workflows, inline diffs, checkpoint/revert, project-level rules (.cursorrules)
  • Watch out for: Proprietary; subscription cost; context window limits on large codebases

Windsurf (formerly Codeium)

  • Based on: VS Code fork
  • Best for: AI-first editing with Cascade (agentic workflow), fast completions
  • LLM providers: Built-in models + API key support
  • Strengths: Cascade for multi-step autonomous tasks, good free tier
  • Watch out for: Newer ecosystem, smaller community than Cursor

VS Code + Extensions

  • Best for: Keeping your existing setup and adding AI on top
  • Options:
    • GitHub Copilot — Tab completions + Chat panel. Solid, widely adopted, works well for inline suggestions.
    • Continue (open source) — Chat, autocomplete, and edit with any LLM provider (OpenAI, Anthropic, Ollama, OpenRouter, etc.). Highly configurable.
    • Cline (open source) — Autonomous coding agent in VS Code. Can edit files, run terminal commands, create files. Supports Claude, GPT, local models.
  • Strengths: Flexibility; keep your keybindings, extensions, and settings
  • Watch out for: Less integrated than purpose-built AI editors; more setup required

Codium (open source)

  • Based on: VS Code, built from its open-source codebase without Microsoft telemetry or branding (this is the VSCodium project)
  • Best for: Privacy-conscious teams who want AI without telemetry
  • LLM providers: Pair with Continue, Cline, or any extension that supports API keys
  • Strengths: No proprietary telemetry; full control over what data leaves your machine
  • Watch out for: You assemble the AI stack yourself; no built-in Composer equivalent

Neovim / Vim

  • Best for: Terminal-native developers who want AI without leaving the keyboard
  • Options:
    • avante.nvim — Cursor-like AI experience in Neovim (chat, inline edits, diffs)
    • codecompanion.nvim — Chat and inline assistance with multiple providers
    • copilot.vim — GitHub Copilot for Vim/Neovim
  • Strengths: Speed, customizability, no Electron overhead
  • Watch out for: More setup; less visual diff/checkpoint UX

JetBrains IDEs (IntelliJ, WebStorm, PyCharm, etc.)

  • Best for: Teams already invested in JetBrains tooling
  • Options: JetBrains AI Assistant (built-in), GitHub Copilot plugin, Continue plugin
  • Strengths: Deep language support, refactoring tools, database integration
  • Watch out for: AI integration less mature than Cursor/Windsurf; heavier IDE

CLI tools

Claude Code (Anthropic)

  • What: Terminal-based AI agent that reads your codebase, edits files, runs commands
  • Best for: Multi-file refactors, repo-wide changes, complex tasks that need full project context
  • LLM: Claude (Sonnet, Opus) via Anthropic API or Max subscription
  • Strengths: Reads entire repos, runs tests, creates commits, understands project structure via CLAUDE.md
  • Pairs with: Any editor — use Claude Code for heavy lifting, your editor for fine-tuning

OpenCode

  • What: Open-source terminal AI coding tool
  • Best for: Privacy-first teams who want a CLI agent without proprietary lock-in
  • LLM providers: OpenRouter, Ollama, any OpenAI-compatible API
  • Strengths: Open source, provider-agnostic, local model support
  • Watch out for: Younger project; community-driven

Aider

  • What: Terminal pair-programming tool
  • Best for: Git-aware AI editing — Aider commits changes with meaningful messages
  • LLM providers: OpenAI, Anthropic, local models, OpenRouter
  • Strengths: Git integration (auto-commits), supports voice input, repo-map for large codebases
  • Watch out for: Terminal-only; learning curve for the chat workflow

LLM providers

Anthropic (Claude)

  • Models: Claude Opus 4, Sonnet 4, Haiku
  • Best for: Long context (200k tokens), careful reasoning, code quality, following complex instructions
  • Access: API, Claude.ai, Claude Code, Max subscription
  • Pricing: Per-token API or flat-rate subscription (Max)

OpenAI (GPT)

  • Models: GPT-4o, GPT-4 Turbo, o1, o3
  • Best for: General-purpose coding, broad language support, fast responses
  • Access: API, ChatGPT Plus/Pro, GitHub Copilot
  • Pricing: Per-token API or subscription

OpenRouter

  • What: Unified API gateway to 100+ models (Claude, GPT, Llama, Mistral, Gemini, etc.)
  • Best for: Trying different models without managing multiple API keys; cost optimization by routing to cheaper models for simple tasks
  • Access: Single API key, OpenAI-compatible endpoint
  • Pairs with: Continue, Cline, OpenCode, Aider — anything that accepts an OpenAI-compatible API

Ollama (local models)

  • What: Run open-source LLMs locally (Llama, CodeLlama, DeepSeek Coder, Mistral, etc.)
  • Best for: Privacy (no data leaves your machine), offline work, experimentation
  • Access: Local server, OpenAI-compatible API
  • Strengths: Free, private, no rate limits
  • Watch out for: Quality gap with frontier models for complex tasks; needs good GPU for large models
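The reason provider-agnostic tools (Continue, Cline, OpenCode, Aider) can target OpenRouter and a local Ollama server interchangeably is that both expose the same OpenAI-compatible chat-completions endpoint; only the base URL, key, and model name change. A minimal sketch of the request shape (the model names below are illustrative; Ollama's default port is 11434):

```python
import json

def chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build an OpenAI-compatible chat-completions request (constructed, not sent)."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

# Same shape, two targets -- swap base_url and model, nothing else:
cloud = chat_request("https://openrouter.ai/api/v1", "sk-or-...",
                     "anthropic/claude-sonnet-4", "Explain this diff")
local = chat_request("http://localhost:11434/v1", "ollama",
                     "deepseek-coder", "Explain this diff")
```

This interchangeability is what makes the "flexible provider switching" stacks below practical: the tool's configuration changes, not the tool.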

Google (Gemini)

  • Models: Gemini 2.5 Pro, Flash
  • Best for: Very long context (1M+ tokens), multimodal tasks
  • Access: API, AI Studio, some editor integrations

Recommended combinations

For teams / professional use

  • Cursor + Claude (built-in): maximum AI integration, multi-file workflows, team with budget
  • VS Code + Continue + Claude API: keep VS Code, use Claude's quality, control costs via API
  • VS Code + Copilot + Claude Code: Copilot for inline completions, Claude Code CLI for heavy refactors
  • Codium + Continue + OpenRouter: fully open-source editor, flexible provider switching

For solo / indie developers

  • Cursor (Pro): best all-in-one experience if budget allows
  • VS Code + Cline + OpenRouter: autonomous agent plus provider flexibility, pay per use
  • Neovim + avante.nvim + Claude API: terminal-native, fast, minimal overhead
  • Any editor + Aider + local Ollama: fully offline, zero cost, privacy-first

For privacy-sensitive / air-gapped

  • Codium + Continue + Ollama: fully open-source, fully local
  • Neovim + local models: minimal attack surface, no telemetry

How to evaluate

When choosing your stack, consider:

  1. Quality ceiling: Frontier models (Claude Opus, GPT-4) still outperform local models for complex reasoning and multi-file edits. Use them for hard tasks.
  2. Cost model: Subscription (Cursor, Copilot, Max) vs. per-token (API). Per-token is cheaper if usage is moderate; subscription is predictable.
  3. Privacy: Does code leave your machine? Ollama = local only. API calls = code goes to the provider. Check your org's policy.
  4. Lock-in: Cursor rules and Composer are Cursor-specific. Continue and Cline work with any provider. Claude Code works with any editor.
  5. Context window: Large repos need large context. Claude (200k), Gemini (1M+), or repo-map tools (Aider) help.
  6. Team adoption: The best tool is the one your team will actually use. Don't force terminal tools on GUI people or vice versa.
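The cost-model point (item 2) can be made concrete with back-of-envelope arithmetic. The prices and usage figures below are illustrative assumptions, not current rates; check your provider's pricing page before deciding:

```python
# Illustrative assumptions -- NOT current prices; check provider pricing pages.
INPUT_PER_MTOK = 3.00    # $ per million input tokens (Sonnet-class, assumed)
OUTPUT_PER_MTOK = 15.00  # $ per million output tokens (assumed)
SUBSCRIPTION = 100.00    # $ per month, flat rate (assumed)

def monthly_api_cost(input_mtok: float, output_mtok: float) -> float:
    """Per-token spend for a month of usage, in dollars."""
    return input_mtok * INPUT_PER_MTOK + output_mtok * OUTPUT_PER_MTOK

# Moderate use (~5M input + 1M output tokens/month): API is cheaper.
moderate = monthly_api_cost(5, 1)   # 5*3 + 1*15 = $30
# Heavy agentic use (~40M input + 8M output): the flat rate wins.
heavy = monthly_api_cost(40, 8)     # 40*3 + 8*15 = $240
```

The crossover depends entirely on your usage profile: agentic tools that repeatedly re-read large repos burn input tokens fast, which is why flat-rate plans appeal for sustained daily work.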

Author's pick: Codium + Claude Code subscription

If I had to recommend one stack, it would be Codium (open-source VS Code) paired with a Claude Code subscription (Max or Teams).

Why this combo works:

  • Codium gives you VS Code without the telemetry. Same extensions, same keybindings, same ecosystem — but you control what leaves your machine. No proprietary editor lock-in.
  • Claude Code is the best agentic coding tool available. It reads your entire repo, runs your tests, creates commits, and understands project structure via CLAUDE.md. It works from the terminal alongside any editor.
  • The subscription (Max) removes API cost anxiety. You don't ration prompts or worry about token bills. You just use it. This matters more than people think — cost friction changes behavior.
  • Rate limits pace you instead of a quota. Unlike Cursor's token-based plans where you can burn your monthly allowance in a week and then be stuck, Claude Max uses rate limits that spread your usage evenly. You're never locked out for the rest of the month — you just wait a bit and keep going. This is a better model for sustained daily work.
  • No duplication. You don't need Cursor's built-in AI when Claude Code handles the heavy lifting from the terminal. Codium handles the editing UX, Claude Code handles the intelligence.
  • Flexibility remains. You can still add Continue or Copilot extensions in Codium for inline completions if you want. And Claude Code works with any editor, so you're never locked in.

This is not the cheapest option and not the most integrated (Cursor's Composer is slicker for inline workflows). But it's the best balance of quality, privacy, flexibility, and cost predictability I've found.


The meta-insight

The editor is the UX. The LLM is the brain. Don't conflate them. You can use a great editor with a mediocre model and get mediocre results. You can use a great model with a bad editor and get frustrated. The best setups pair a comfortable editor with the right model for the task — and make it easy to switch models when the task changes.