
Managing an LLM Is Not So Different from Managing an Intern

The skills that make you a good mentor to a junior developer are the same ones that make you effective with AI coding tools. Here's why — and what to do about it.


The parallel nobody talks about

When a new intern joins your team, you don't hand them the entire codebase and say "build the feature." You break the work down. You give context. You review their output. You correct course early.

When you work with an LLM, the pattern is exactly the same — and the developers who struggle with AI tools are often the same ones who struggle with delegation.

This isn't a coincidence. An LLM is a very fast, very eager, somewhat unreliable team member with no institutional memory. The management skills are transferable.


What they have in common

| Trait | Junior / Intern | LLM |
| --- | --- | --- |
| Eager to please | Will say "yes" and produce something even when confused | Will generate code even when the prompt is ambiguous |
| No institutional knowledge | Doesn't know your codebase, conventions, or history | Doesn't know your codebase unless you provide it as context |
| Inconsistent quality | Great on simple tasks, risky on complex ones | Same — excellent for boilerplate, risky for architecture |
| Needs clear scope | "Add the button" works. "Improve the UX" doesn't. | Specific prompts work. Vague ones produce generic output. |
| Benefits from examples | "Do it like this file" is the best instruction | "Follow the pattern in X.ts" dramatically improves output |
| Doesn't push back | Won't say "this is a bad idea" unless very confident | Will implement a bad design without questioning it |
| Forgets between sessions | Needs reminders of past decisions after a break | Has no memory between conversations (unless you provide it) |
| Improves with feedback | Gets better over time with mentoring | Gets better within a session with corrections and context |

The management playbook (works for both)

1. Break the work into small, clear tasks

With an intern: "First, add the API endpoint. Then wire the UI. Then add tests." Not: "Build the feature."

With an LLM: One logical change per prompt. "Add a getUser endpoint that returns the same shape as getProduct." Not: "Add user management."

Why it works: Small scope = less room for misinterpretation. Both interns and LLMs drift when the task is too big.

2. Provide context, not just instructions

With an intern: "Here's how our API clients work (see api/client.ts). Follow the same pattern."

With an LLM: @-mention the relevant files and say "follow the pattern in UserService.fetchById".

Why it works: Neither the intern nor the LLM can read your mind. Explicit context beats implicit expectations every time.

3. Review early, review often

With an intern: You don't wait until the PR is 500 lines. You check in after each step.

With an LLM: Review each AI-generated change before moving to the next. Run lint, type-check, and tests between steps.

Why it works: Errors compound. Catching a wrong assumption early saves hours of debugging later — with both humans and machines.

4. Give examples, not just rules

With an intern: "Here's a component that does something similar. Use the same structure."

With an LLM: "Same structure as src/components/ProductCard.vue." Or paste a short snippet as a reference.

Why it works: Rules are abstract. Examples are concrete. Both interns and LLMs perform dramatically better with a reference implementation.
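Concretely, here is what "same structure as the reference" can look like. This is a minimal sketch with illustrative names (the in-memory `db`, `getProduct`, and `getUser` are hypothetical, not from any real codebase): `getProduct` plays the role of the reference implementation you point at, and `getUser` is the kind of output a good example-driven prompt should yield — identical control flow and error convention, with only the entity changed.

```typescript
// Hypothetical in-memory "API" so the sketch runs standalone.
const db = {
  products: new Map([["p1", { id: "p1", name: "Widget" }]]),
  users: new Map([["u1", { id: "u1", email: "dev@example.com" }]]),
};

// The reference implementation you point the intern (or LLM) at.
// Convention worth copying: return null on a miss instead of throwing.
function getProduct(id: string): { id: string; name: string } | null {
  return db.products.get(id) ?? null;
}

// What "add getUser, same structure as getProduct" should produce:
// same shape, same null-on-miss convention, nothing extra.
function getUser(id: string): { id: string; email: string } | null {
  return db.users.get(id) ?? null;
}
```

The payoff is that the reviewer (you) can check the new function against the reference in seconds, because any divergence from the pattern is immediately visible.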

5. State what NOT to do

With an intern: "Don't add new dependencies without asking. Don't change the public API."

With an LLM: "Don't add a new dependency. Don't refactor unrelated code. Don't change the function signature."

Why it works: Eager workers (human or AI) often over-deliver. Explicit boundaries prevent scope creep and unwanted changes.

6. When they get it wrong, be specific

With an intern: "The error handling here doesn't match our pattern. Look at how OrderService does it and use the same approach."

With an LLM: Paste the error. Say "this test fails with: [error]. Fix it by [specific approach]." Don't just say "it's wrong, try again."

Why it works: Vague feedback leads to random changes. Specific feedback leads to targeted fixes.

7. Build up institutional knowledge

With an intern: Onboarding docs, architecture diagrams, a buddy system, and progressively more complex tasks.

With an LLM: A CLAUDE.md or .cursorrules file with project conventions, docs and audit notes kept at stable paths, and pinned notepads with project context.

Why it works: The more context they have upfront, the less correcting you do later.
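As a sketch, such a context file might look like the following. The contents are illustrative, not prescriptive — the file paths and rules are assumptions you'd replace with your own project's conventions:

```markdown
# Project conventions (CLAUDE.md)

## Architecture
- API clients live in api/; follow the pattern in api/client.ts.
- Components go in src/components/; see ProductCard.vue for structure.

## Rules
- Don't add new dependencies without asking.
- Don't change public API signatures.
- Run lint, type-check, and tests before considering a change done.

## Context
- Past decisions are recorded in docs/decisions/ (illustrative path).
```

Notice that this is exactly the content of a good onboarding doc: where things live, what the house rules are, and where to find history.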


Where the analogy breaks

There are differences, and they matter:

| Dimension | Intern | LLM |
| --- | --- | --- |
| Speed | Hours to days per task | Seconds to minutes |
| Learning persistence | Remembers across days, weeks, months | Forgets between sessions (unless you persist context) |
| Creativity | Can surprise you with genuinely novel ideas | Recombines patterns; rarely truly novel |
| Emotional awareness | Reads team dynamics, asks for help | No awareness of team context or social cues |
| Growth trajectory | Becomes a senior engineer over years | Same capability every session (until the next model release) |
| Trust building | You delegate more as trust grows | Trust should be calibrated per task, not per session |
| Cost | Salary + mentoring time | API tokens or subscription |

The key insight

An intern eventually doesn't need you. They grow. They become independent. They push back on bad ideas. They understand the why behind decisions.

An LLM doesn't grow. It's equally capable (and equally unreliable) every single session. You cannot coast on accumulated trust. Every new conversation starts from zero unless you've built the scaffolding (docs, rules, context files) to bootstrap it.

This means the investment in documentation, conventions, and project context isn't just "nice to have" — it's the mechanism by which you make AI effective at all.


Practical takeaways

  1. If you're a good mentor, you'll be good at AI-assisted development. The skills transfer directly: clear communication, scoped tasks, early review, specific feedback.

  2. If you struggle to delegate to humans, you'll struggle with AI. The impulse to "just do it myself" because explaining takes too long is the #1 blocker for both intern management and LLM effectiveness.

  3. Invest in context infrastructure. CLAUDE.md, .cursorrules, architecture docs, example files — these are the onboarding docs for your AI. Treat them with the same care you'd give to a new hire's first-week guide.

  4. Calibrate trust per task, not per tool. You wouldn't let an intern design your auth system. Don't let an LLM do it either. Use AI for what it's good at (boilerplate, tests, refactoring, exploration) and humans for what requires judgment (architecture, security, tradeoffs).

  5. The best developers with AI are the ones who already know how to lead. Technical skill gets you correct prompts. Leadership skill gets you correct outcomes.


One last thought

The phrase "vibecoding" can sound dismissive — as if working with AI means you don't need to think. The reality is the opposite. Working effectively with AI requires more clarity of thought, not less. You need to know what you want, break it down, communicate it precisely, and verify the result.

That's not vibing. That's managing. And the developers who are best at it are the ones who've already learned to do it with humans.