Guides

AI-Assisted Development: Workflow with Cursor & Claude

How to operate in the IDE day to day — recovery, planning, verification, sharper prompts, and Claude-specific habits. Drawn from the Cursor forum, Anthropic's documentation, and real-world practice.


What is "vibecoding"?

Vibecoding is development that leans on AI coding tools (Cursor, Claude Code, Copilot, etc.) while staying in flow: you describe intent, the model suggests code, and you iterate. To do that safely and sustainably you need a workflow that keeps the AI on track and easy to undo — Composer, planning, short sessions, tests as criteria.

Who this is for: Anyone using Cursor or Claude for real development who wants fewer wrong applies, clearer context, and better results.

See also: Regression-Proof Practices for hooks, tests, and repo safety. Editors & LLM Providers for tool choices beyond Cursor.


1. Set Yourself Up So You Can Recover

Before "smarter" prompts, set up two safety nets. When the AI goes off the rails, you'll be glad you did.

Use Composer (and its checkpoints), not only Chat

Chat is great for quick questions. For real multi-step work, use Composer. Unlike Chat, Composer checkpoints the conversation. If a run of edits goes wrong or you waste time on bugfixes, you can revert to an earlier checkpoint instead of arguing with the model in a single long thread.

Try this: Next time you start a non-trivial task, open Composer (e.g. Cmd+Shift+I / Ctrl+Shift+I). When things go sideways, use the checkpoint list and jump back to a good state before continuing.

Commit little and often

Commits are not milestones. They're checkpoints. If you wait until "everything is perfect," you'll eventually hit a bad apply or a mangled file with no easy way back.

Try this: Commit after every small, coherent change. Get comfortable with reverting or resetting so that "undo" is a normal part of your workflow, not a last resort.
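The habit above is just a tight git loop. A minimal sketch in a throwaway repo (file names and messages invented for illustration):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email "you@example.com"   # local config so commits work anywhere
git config user.name "You"

echo "v1" > app.txt
git add app.txt
git commit -qm "small, coherent change"   # a checkpoint, not a milestone

echo "mangled by a bad apply" > app.txt   # the AI goes off the rails
git restore app.txt                       # "undo" as a normal move, not a last resort
cat app.txt
```

`git restore` undoes uncommitted damage; `git reset --hard <sha>` jumps back to an earlier checkpoint when a whole run of edits went wrong.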


2. Keep the AI on Track (Planning and Context)

The biggest issue on larger projects is context drift: the model forgets the big picture, repeats itself, or answers an old question instead of the current one. The forum's most-liked guide boils down to: plan first, keep the plan visible, and keep sessions short.

Write your reference docs before touching code

Before prompting the AI — before even opening Composer — have a written source of truth: a manifesto, a spec, requirements, acceptance criteria, or even just a structured brief. This is the single most impactful thing you can do for AI-assisted development, and most people skip it.

Why it matters:

  • You can't judge output without a target. If you don't know exactly what "done" looks like, you'll accept whatever the AI gives you — and only realize it's wrong later.
  • The AI has no intent of its own. It will confidently build the wrong thing if the prompt is vague. A spec removes ambiguity before it becomes wasted code.
  • It's your pinnable context. The notepad/spec you pin in Composer (see below) should come from this document, not be invented on the fly in a chat session.
  • It survives sessions. Unlike chat history, a written spec doesn't degrade or get lost. Every new Composer session can start from the same source of truth.

What counts as "reference docs":

  • A product spec or brief (what to build, for whom, why, constraints).
  • Acceptance criteria (given X, when Y, then Z).
  • An architecture decision or manifesto (tech choices, patterns, boundaries).
  • Even a bullet list in a markdown file — the bar is "written down and specific," not "formal."
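Half a page really is enough. A hypothetical spec (feature, names, and numbers all invented) might look like:

```markdown
# Spec: CSV export for reports   (hypothetical example)

## Goal
Let users download any report as a CSV file.

## Constraints
- No new dependencies; use the standard library CSV writer.
- Must handle reports up to 50k rows without timing out.

## Acceptance criteria
- Given a report with rows, when the user clicks "Export",
  then a CSV downloads with one line per row plus a header.
- Given an empty report, when the user clicks "Export",
  then the CSV contains only the header row.
```

This is the document you pin in Composer and judge the AI's output against.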

Try this: Before your next feature, spend 10–15 minutes writing a short spec (half a page is enough). Pin it or paste it into every Composer session. Notice how much less you need to correct the AI — and how much easier it is to say "no, that's wrong" when you have a reference to point to.

Start each Composer session with a short "intro" prompt

Tell the model how you want it to behave. For example: "You prefer decoupling and encapsulation," or "You're an expert in [React / Python / our stack]." It sounds silly, but it nudges the model toward more consistent, expert-like output. If the thread gets long and behavior drifts, start a new Composer session and paste the same intro again.

Pin a notepad or spec in context

Keep a notepad or short spec (goals, constraints, "what we're building") and pin it in Composer. This is one of the most effective ways to keep the model aligned. Without it, the "big picture" fades as the conversation grows.

Demand a plan before any code

Don't let the model code immediately. Ask it to:

  1. Summarise what you asked for.
  2. Propose a plan (steps, order, key decisions).

Save that plan (e.g. copy into your notepad). When the conversation gets noisy, paste the plan back and ask: "Recap the plan and say what we're doing next."

In each turn (or every few), ask the model to:

  • Recap the plan and what's next.
  • Make the changes.
  • Explain what it did and why (logic, tradeoffs).

That cycle keeps the "big picture" in recent context and reduces tangents.

Keep Composer sessions short

Long sessions lead to a sluggish UI and worse quality: more hallucinations, A→B→A loops, and answers to old questions. When you feel drift or confusion, start a new Composer (+), re-paste your intro and notepad, briefly say where you left off and what's broken, then continue. Fresh context almost always beats "one more message."


3. Let the AI Verify Its Own Work (Tests and Logs)

Vibecoding goes wrong when you accept code on faith. The most reliable fix is to make the AI prove its work.

TDD-style: you write the spec, AI writes the tests

Instead of hoping the code is correct, tell the model what the expected behavior is and ask it to write failing tests first. Then have it implement until the tests pass. This is the most reliable pattern for getting correct code from an AI.
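As a sketch of that loop (function and behaviour invented for illustration): you state the spec, the tests encode it, and the implementation only counts as done when they pass.

```python
# Hypothetical spec: a slugify() helper that lowercases, joins words with
# hyphens, and drops characters that aren't alphanumeric.

# Step 1: tests come first — you write them (or review the AI's version).
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Already--Slugged  ") == "already--slugged"
    assert slugify("Rock & Roll!") == "rock-roll"

# Step 2: the AI implements until the tests pass.
def slugify(title: str) -> str:
    cleaned = "".join(c for c in title.lower() if c.isalnum() or c in " -")
    return "-".join(cleaned.split())

test_slugify()
```

The tests are the acceptance criteria in executable form: if they fail, the task isn't done, no matter how plausible the code looks.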

When debugging, demand logs

If the AI's fix doesn't work, don't argue. Ask it to add heavy debug logging to the area, then run the code, paste the logs back, and ask: "Given these logs, what's actually happening?" This grounds the model in real output instead of letting it speculate.
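The same idea in code: instrument the suspect path so the transcript shows what actually ran, then paste that back. A minimal Python sketch (function and values invented):

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger(__name__)

def apply_discount(price: float, percent: float) -> float:
    # Log inputs and intermediates so there's nothing left to speculate about.
    log.debug("apply_discount: price=%r percent=%r", price, percent)
    factor = 1 - percent / 100
    log.debug("apply_discount: factor=%r", factor)
    result = round(price * factor, 2)
    log.debug("apply_discount: result=%r", result)
    return result

# Run it, then paste the DEBUG lines into the chat:
# "Given these logs, what's actually happening?"
apply_discount(19.99, 15)
```

Real logs constrain the model the way tests do: it has to explain the observed values, not a hypothetical execution.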


4. Sharper Prompts and Fewer Tokens

The "rewrite this prompt" rule

If you're not getting what you want, before rephrasing yourself, try: "Rewrite this prompt so that it gives you the best possible context to complete the task." The model often rewrites your prompt better than you would, including structure and constraints you missed.

Use conversation history (long chats)

When a conversation has grown long, you can select "use conversation history" (if available) or manually paste a summary of key decisions and where you left off. This helps more than repeating yourself in different words.


5. Saving Credits and Getting Help

  • Use "Auto" mode for low-stakes exploration, summaries, and boilerplate.
  • Reserve premium (Claude Opus, o1-pro, etc.) for hard bugs, design decisions, and architecture.
  • If you're stuck, start a new session with a short summary of the problem instead of adding messages to a broken thread. Fresh context is cheaper and better.

6. Claude-Specific Habits (Anthropic's Angle)

  • Claude Code (CLI): Excellent for multi-file refactors, repo-wide searches, and running tests. Use it when the IDE agent feels limited by context or file scope.
  • Extended thinking: For complex tasks, ask Claude to "think step by step" or use extended thinking mode. More reasoning tokens = better plans.
  • Artifacts: When Claude produces a plan or spec, ask it to save as an artifact so you can reference it later without re-generating.
  • CLAUDE.md: Add a CLAUDE.md at the repo root with project conventions, stack, and constraints. Claude Code reads it automatically and stays aligned.
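A CLAUDE.md can be very short. A hypothetical example (stack and conventions invented; adapt to your repo):

```markdown
# CLAUDE.md   (hypothetical example)

## Stack
- TypeScript + React, Node 20, pnpm.

## Conventions
- Prefer small, pure functions; avoid classes for UI state.
- New code needs unit tests before merge.

## Constraints
- Do not add dependencies without asking.
- Never edit files under generated/.
```

Because Claude Code reads this automatically, it serves the same role as the pinned notepad in Composer: a source of truth that survives sessions.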

7. Cursor Rules and Composer in Practice

  • Project-level rules (.cursor/rules or .cursorrules): Set coding style, framework conventions, and constraints. Keep rules short and specific.
  • Composer vs. Chat: Composer for multi-step implementation; Chat for quick questions, explanations, and one-off edits.
  • Tab completion: For small edits, Tab is often faster than opening Composer. Use it for repetitive patterns and boilerplate.
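A project rules file follows the same "short and specific" bar as a spec. Hypothetical contents for a `.cursorrules` file (conventions invented for illustration):

```markdown
# .cursorrules — hypothetical example
- Use TypeScript strict mode; no `any`.
- React function components only; hooks for state.
- Tests live next to the file they cover: Foo.tsx -> Foo.test.tsx.
- 2-space indent, single quotes; match the existing style of the file being edited.
```

Vague rules ("write clean code") do nothing; concrete, checkable rules change the output.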

8. PowerShell and the Agent (Windows)

If you use PowerShell and the agent seems "blind" or flaky, see How Powershell becomes more pleasant for the agent. VS Code's shell integration can introduce escape sequences and race conditions. Replacing or cleaning shellIntegration.ps1 (see the thread for a concrete script), using PowerShell 7.5, tuning PSReadLine, and setting a sensible buffer size (e.g. 120x1000) can help. If the agent runs commands in your environment, consider Constrained Language Mode or a restricted profile.
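As a starting point, a couple of the tweaks mentioned above can go in your PowerShell profile. This is a hedged sketch, not the thread's full script; it assumes PowerShell 7.x with the PSReadLine module:

```powershell
# Hypothetical $PROFILE additions — see the forum thread for the complete script.
Set-PSReadLineOption -PredictionSource None        # fewer escape sequences in agent-captured output
$Host.UI.RawUI.BufferSize = [System.Management.Automation.Host.Size]::new(120, 1000)
```

Note that setting the buffer size can fail if it is smaller than the current window; adjust the numbers to your terminal.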


Quick reference

I want to... → Try this

  • Handle bigger projects → Composer + short sessions + plan first + notepad/spec in context
  • Stay on track → intro boilerplate each session; new session when context drifts
  • Have clear success criteria → TDD: you specify behaviour, AI writes tests; paste failing tests back
  • Get better responses → "Rewrite this prompt" rule; use conversation history for long chats
  • Save credits → Auto for exploration/summaries; premium only for hard tasks
  • Avoid wrong applies → don't just say "yes"; say "implement the changes you just recommended"
  • Get out of debugging loops → ask for heavy debug logging, run, paste logs back

References