Confession: I was supposed to write this post in October. Then GitHub shipped a new Copilot agent mode, Cursor changed its pricing twice, and Anthropic released Claude Code 2.0. Every time I sat down to publish, something invalidated half the draft. So I gave up trying to time it and just wrote what I’m using right now, on the actual repos I’m being paid to ship.
I’ve been bouncing between all three on the same projects for about six months. Same Next.js codebase, same Laravel API, same dumb little Rust CLI I keep tinkering with. None of them is the clear winner. None of them is dead. Anyone telling you otherwise is selling something.
Here’s the honest breakdown.
The short answer
If you read nothing else: I keep all three installed and use them for different things. Copilot for ambient autocomplete, Cursor for “rewrite this whole file” edits, Claude Code for anything that touches more than three files at once. I tried to consolidate down to one tool twice. Both times I came back inside two weeks.
The real question isn’t “which one is best”. It’s “which one fits the shape of the work I’m doing right now”. That shape changes throughout the day, which is why I have all three.
Where Cursor still wins for me
Cursor’s killer feature is still its inline edit mode. Highlight a function, press Cmd+K, type “make this idempotent”, and it edits in place with a clean diff view. Copilot’s been catching up here, but Cursor’s diff UX is just better. The keystroke flow doesn’t break my thinking.
Where it really shines is what I’d call shape-changing edits. Taking a function that returns a Promise and converting the whole thing to use a generator. Yanking inline state out into a custom hook. Stuff that’s mechanical but tedious.
// Before — I highlighted this and asked Cursor to extract it into a hook
// (assumes a User type defined elsewhere in the project)
import { useState, useEffect } from 'react'

function UserProfile({ userId }: { userId: string }) {
  const [user, setUser] = useState<User | null>(null)
  const [loading, setLoading] = useState(true)
  const [error, setError] = useState<Error | null>(null)

  useEffect(() => {
    let cancelled = false
    fetch(`/api/users/${userId}`)
      .then(r => r.json())
      .then(u => { if (!cancelled) setUser(u) })
      .catch(e => { if (!cancelled) setError(e) })
      .finally(() => { if (!cancelled) setLoading(false) })
    return () => { cancelled = true }
  }, [userId])

  // ...render code
}

// After — clean extraction in one shot
function useUser(userId: string) {
  const [user, setUser] = useState<User | null>(null)
  const [loading, setLoading] = useState(true)
  const [error, setError] = useState<Error | null>(null)

  useEffect(() => {
    let cancelled = false
    fetch(`/api/users/${userId}`)
      .then(r => r.json())
      .then(u => { if (!cancelled) setUser(u) })
      .catch(e => { if (!cancelled) setError(e) })
      .finally(() => { if (!cancelled) setLoading(false) })
    return () => { cancelled = true }
  }, [userId])

  return { user, loading, error }
}

function UserProfile({ userId }: { userId: string }) {
  const { user, loading, error } = useUser(userId)
  // ...render code
}
Cursor also has a workspace view that actually understands my file tree. When I tag @docs and @code, it pulls in things I haven’t manually attached. That sounds small. It saves me ten clicks per refactor. If you’ve got the budget, the current Pro tier on Cursor’s pricing page is a no-brainer for solo developers.
Where GitHub Copilot earns its keep
For everyday autocomplete, Copilot is still the one I leave running quietly in the background. It’s not flashy. It just gets out of my way.
The thing nobody talks about: Copilot’s context window for inline suggestions is small on purpose. That’s a feature. It means when I’m writing a function, it suggests something based on what’s right here. The imports at the top of this file. The function I just wrote two lines up. Not some ambient hallucination about what my codebase might want.
// Typing this controller method in Laravel...
public function store(StoreUserRequest $request)
{
    $validated = $request->validated();
    // ↓ Copilot suggests this whole block with the right relations
}

// ...and what it actually fills in, correctly first try:
public function store(StoreUserRequest $request)
{
    $validated = $request->validated();
    $user = User::create($validated);

    if ($request->has('roles')) {
        $user->roles()->sync($validated['roles']);
    }

    return new UserResource($user);
}
It got the Eloquent relationship right because it read the User model in this same project. That’s genuinely useful. It’s the kind of small win that adds up across a workday.
The new agent mode in Copilot Chat is fine. I’ve used it. I don’t reach for it. If I want an agent, I go to Claude Code, which I’ll get to next. But if you want a single subscription that gives you both inline autocomplete and a chat panel, Copilot is the cleanest answer. The GitHub Copilot docs are also genuinely good now, which wasn’t true two years ago.
When I reach for Claude Code instead
Here’s where I’ll probably annoy people: when the task is “look at six files, plan a refactor across all of them, and execute it”, I close the GUI tools and open Claude Code in the terminal.
This used to feel weird. Now it doesn’t. The tradeoff is that I lose visual diff review for individual edits, but I gain the ability to say things like “find every place we use userId as a string instead of a branded type, fix the call sites, and update the tests”. It can run my test suite. It can look at the failures. It can fix them. The other tools can do versions of this. Claude Code is the only one I trust to come back with a sensible summary of what it actually changed.
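To make the branded-type example concrete, here’s a minimal TypeScript sketch of the pattern. The names (`UserId`, `asUserId`, `fetchUser`) are illustrative, not from the actual codebase:

```typescript
// A branded type: structurally still a string at runtime,
// but the compiler refuses a plain string where a UserId is expected.
type UserId = string & { readonly __brand: 'UserId' }

// The single sanctioned way to turn a raw string into a UserId.
const asUserId = (raw: string): UserId => raw as UserId

function fetchUser(id: UserId): string {
  return `/api/users/${id}`
}

fetchUser(asUserId('u_123'))  // ok
// fetchUser('u_123')         // compile error: plain string is not a UserId
```

The refactor itself is mechanical: wrap every call site in `asUserId` and fix the signatures. That’s exactly the kind of many-file, low-judgment change I hand to the agent.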
I wrote about how to keep it on task in my piece on Claude rambling. The short version is that giving the tool good guardrails matters more than which tool you pick. With a tight CLAUDE.md and clear permissions, Claude Code is shockingly good at large refactors.
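For a sense of what “tight” means here, this is the shape of file I’m talking about. Every line below is an illustrative example I made up for this post, not a template from Anthropic’s docs:

```markdown
# CLAUDE.md (example)

## Commands
- Run tests: `npm test`
- Typecheck: `npm run typecheck`

## Rules
- Never edit anything under `generated/`.
- Run the typecheck after every multi-file refactor.
- End each task with a list of the files you changed and why.
```

Short and directive beats long and aspirational. The agent actually follows rules like these; it ignores essays.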
It’s also the one I use for code review on my own PRs before I open them. I run a review against main and let it tell me what I missed. About 30% of the time, it catches something real. The other 70% is noise, but the 30% pays for itself.
Big caveat: it’s expensive if you’re not careful. Token costs add up fast on a long session. Anthropic’s Claude Code documentation covers the cost controls. Use them. I learned this the hard way after a bored Saturday afternoon turned into a $40 session.
The boring stuff nobody mentions
The pricing trap is real. Cursor and Copilot are both flat monthly fees you can budget for. Claude Code bills by token. If you’re running multi-file agent sessions, your monthly bill on Claude Code can blow past $200 in a busy week if you’re sloppy with context.
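To put rough numbers on why agent sessions get expensive, here’s a back-of-the-envelope helper. The default rates are illustrative placeholders, not Anthropic’s actual pricing; plug in whatever your provider currently charges:

```typescript
// Estimate a session's cost from token counts.
// Rates are in dollars per million tokens and are ASSUMED values for illustration.
function estimateSessionCost(
  inputTokens: number,
  outputTokens: number,
  inputPerM: number = 3,   // illustrative $/1M input tokens
  outputPerM: number = 15, // illustrative $/1M output tokens
): number {
  return (inputTokens / 1_000_000) * inputPerM
       + (outputTokens / 1_000_000) * outputPerM
}

// A long agent session that re-reads several large files on every turn
// can easily accumulate tens of millions of input tokens:
estimateSessionCost(10_000_000, 500_000) // 30 + 7.5 = 37.5 dollars
```

The asymmetry is the trap: input tokens feel free because you didn’t type them, but every turn re-sends the accumulated context, so input volume dominates the bill on long sessions.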
Latency matters more than I thought. Cursor’s edit mode feels snappy. Copilot’s autocomplete is near-instant. Claude Code, when it’s doing big jobs, can take 30+ seconds to plan. That’s fine for a refactor. Not fine for “what’s the syntax for this lodash method”.
Lock-in is invisible until you switch. When I tried going Cursor-only for a month, I found myself opening plain VS Code with Copilot for tiny one-line completions because Cursor’s autocomplete model felt heavier. When I tried Copilot-only, I missed Cursor’s Cmd+K. The friction wasn’t huge. It added up.
How I actually combine them
Here’s the rough split, not a rule.
I keep VS Code with Copilot open as my default editor. Most of the time I’m typing, Copilot is the only assistant active.
When I want to do a focused refactor on a single file or function, I switch to Cursor and use Cmd+K. It’s fast and the diff review is good.
When the task spans multiple files or involves running tests, I open Claude Code in a terminal next to my editor. I describe the goal, point at the files, and let it work. I review the diff in git, not in the tool.
For everything else (writing this blog post, drafting an email, explaining a concept) I use Claude in a regular browser tab. Different muscle.
I cover this kind of workflow stuff in more detail on my about page. It’s the boring half of the job nobody writes about, and it’s the half that actually saves hours a week.
What to try this week
Pick one workflow you find tedious. A specific refactor, a config change, a test you keep meaning to write. Try doing it three times: once with Cursor’s Cmd+K, once with Copilot Chat, once with Claude Code. Time each one. Note what felt smooth and what felt like fighting the tool.
You’ll learn more from one afternoon of that than from any vendor benchmark. Promise.