I didn’t plan to become a vibe coder.

A senior US programmer on how AI quietly rewired his work

When we spoke with Ethan Calder (a fictional name for a very real kind of engineer), he sounded slightly amused by how much his work had changed. Ethan is a senior backend programmer in the United States with more than fifteen years of experience. He has built distributed systems, survived rewrites, worked through 3 a.m. outages, and mentored junior engineers who eventually surpassed him. He never thought of himself as an early adopter of developer tools, and he certainly did not expect to become what people on the internet now call a “vibe coder.”

“I still don’t love the term,” he says. “But I recognize myself in it.”

Ethan’s transition into AI-assisted development did not start with excitement or ideology. It started with frustration. He was tired of repeating the same kinds of work: wiring endpoints, translating schemas, writing boilerplate, scaffolding tests, and moving between languages and frameworks depending on the project. None of that was intellectually interesting, but it consumed enormous amounts of time. When large language models started to become usable inside editors, he tried them the way most senior engineers do: skeptically, cautiously, and with low expectations.

“I assumed it would be another autocomplete gimmick,” he says. “Something flashy, but not really helpful once you get past the demo.”

What surprised him was not that the models were brilliant. It was that they were good enough to remove friction. Not perfect, not autonomous, but capable of absorbing a lot of mechanical effort. That was the moment his relationship with coding began to shift.

At first, Ethan used AI only to avoid what he calls “yak-shaving.” He would ask it to generate a migration, outline a data transformation, or sketch a test file. He treated the output as disposable. He read everything carefully, rewrote parts of it, and often threw it away entirely. But over time, something changed. The AI stopped feeling like a novelty and started feeling like a collaborator that could be directed by intent rather than instructions.

“That’s when I realized I was no longer coding line by line,” he says. “I was coding by describing what I wanted and then shaping the result.”

That is what many developers now mean by vibe coding, even if the term itself is often misunderstood. Ethan is careful to distinguish between what he does and what critics fear. “Vibe coding doesn’t mean blind trust,” he says. “It means driving by high-level intent and letting the model handle the keystrokes. You still own the outcome.”

As his usage deepened, Ethan began experimenting with different models. He quickly learned that there is no single “best” AI for programming. Instead, there are different personalities, strengths, and failure modes. Some models are fast and energetic, good for brainstorming or quick scaffolding. Others are slower but more disciplined, better at following multi-step constraints or maintaining consistency across a refactor.

“I stopped thinking in terms of ‘the AI’ and started thinking in terms of routing,” he explains. “This task needs careful reasoning. That task needs speed. That one needs to stay inside very tight constraints.”
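Ethan's routing idea can be sketched as a simple lookup from task type to model. This is purely illustrative: the task categories, model names, and table below are invented for the example, not anything Ethan described using.

```python
# Hypothetical sketch of "routing" tasks to models by what they need.
# All model names and task categories here are invented placeholders.

TASK_PROFILES = {
    "scaffold": {"needs": "speed", "model": "fast-drafter"},
    "refactor": {"needs": "tight constraints", "model": "strict-editor"},
    "design": {"needs": "careful reasoning", "model": "careful-planner"},
}

def route(task_type: str) -> str:
    """Pick a model for a task, falling back to the most careful option."""
    profile = TASK_PROFILES.get(task_type)
    return profile["model"] if profile else "careful-planner"

print(route("scaffold"))  # fast-drafter
print(route("unknown"))   # careful-planner (safe default)
```

The design choice worth noting is the fallback: when a task does not fit a known category, defaulting to the slowest, most disciplined option mirrors Ethan's bias toward caution over speed.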

This mirrors what many experienced developers have discovered independently. AI is not a replacement for judgment. It is a multiplier for clarity. When Ethan gives vague instructions, he gets vague or misleading results. When he gives precise intent, constraints, and boundaries, the output improves dramatically.

What AI is genuinely good at today, in Ethan’s experience, is accelerating the early and middle phases of development. He can spin up a new service, API, or internal tool in a fraction of the time it used to take. He can cross language boundaries more easily, moving between backend logic, infrastructure code, and frontend glue without the usual mental overhead. He can ask the model to explain unfamiliar libraries, suggest patterns, or translate between paradigms.

“For prototypes and internal tools, the speed difference is enormous,” he says. “Things that used to take days now take hours.”

AI is also good at generating first drafts. Ethan emphasizes that phrase repeatedly. First drafts of code, documentation, tests, migration plans, and even architectural outlines. He no longer starts from a blank file. He starts from something imperfect that he can react to.

“That’s psychologically huge,” he says. “Editing is easier than inventing.”

But Ethan is equally clear about what AI is not good at. The most dangerous failures are not obvious errors. They are confident, plausible solutions that are subtly wrong. He describes hallucinated APIs that look exactly like something a real library might expose. He describes edge cases that are quietly ignored. He describes logic that works for the happy path but collapses under real-world constraints.

“The model is really good at producing code that looks like it belongs,” he says. “That’s the trap.”

Security is another area where Ethan refuses to trust AI output without heavy scrutiny. He has seen models generate authentication logic that feels reasonable but violates basic security principles. He has seen permission checks applied inconsistently. He has seen secrets handled incorrectly. These are not things you notice at a glance, especially when the code is clean and well formatted.

“If you don’t already know what secure code looks like, AI won’t save you,” he says. “It will make you faster at being wrong.”

There is also the problem of context decay. Over longer sessions, especially in agent-style workflows where the model modifies files repeatedly, assumptions drift. Earlier decisions are forgotten. Invariants are broken. The codebase becomes internally inconsistent.

“I call it entropy,” Ethan says. “Every iteration increases the risk unless you actively reset and reassert constraints.”

This is where the romantic idea of fully autonomous coding breaks down. Ethan does not see AI replacing senior engineers anytime soon. Instead, he sees it changing what senior engineers do. Less typing. More reviewing. More designing constraints. More thinking about failure modes.

“I spend more time asking ‘what could go wrong?’ than ‘how do I write this loop?’” he says.

When asked whether he now feels like an “AI babysitter,” Ethan laughs. “Sometimes, yes. But babysitting a very fast intern is still faster than doing everything yourself.”

That analogy comes up often in developer discussions. AI behaves like a junior engineer with infinite energy and zero accountability. It will happily produce output forever. It will never push back unless prompted. It will never say, “I’m not sure.” The responsibility remains entirely with the human.

Ethan has developed personal rules to manage this dynamic. He never merges AI-generated code without review. He demands tests or at least reproducible scenarios for anything non-trivial. He treats security-sensitive areas as off-limits for autonomous changes. And most importantly, he insists on understanding what the code does before it ships.

“If I can’t explain it, it doesn’t go in,” he says.
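Ethan's rules amount to a pre-merge gate, which could be sketched as follows. The `Change` structure and its fields are invented for illustration; this is one possible encoding of the rules above, not his actual tooling.

```python
# A sketch of Ethan's personal merge rules as a boolean gate.
# The Change dataclass and its fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Change:
    reviewed_by_human: bool
    has_tests: bool            # tests or a reproducible scenario
    touches_security: bool
    autonomous: bool           # produced by an agent without supervision
    author_can_explain: bool

def may_merge(c: Change) -> bool:
    if not c.reviewed_by_human:              # never merge AI code unreviewed
        return False
    if not c.has_tests:                      # non-trivial changes need tests
        return False
    if c.touches_security and c.autonomous:  # security areas are off-limits
        return False                         # for autonomous changes
    return c.author_can_explain              # "if I can't explain it, no"

ok = Change(reviewed_by_human=True, has_tests=True,
            touches_security=False, autonomous=False,
            author_can_explain=True)
print(may_merge(ok))  # True
```

The point of writing it this way is that every rule is an independent veto: a change must clear all of them, and no amount of speed or convenience overrides any single check.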

This is where the gap between hype and reality becomes clear. AI can dramatically increase throughput, but it does not eliminate the need for expertise. In fact, it raises the cost of ignorance. When code is produced faster, mistakes propagate faster too.

Ethan also points out a social dimension that often gets overlooked. Junior developers who rely too heavily on AI risk skipping the painful but necessary process of learning fundamentals. Senior developers who delegate too much risk losing touch with the systems they are responsible for.

“AI doesn’t absolve you of responsibility,” he says. “If anything, it increases it.”

Despite these concerns, Ethan is not pessimistic. He believes the current phase of AI-assisted programming is similar to the early days of high-level languages or managed runtimes. There was fear then too. Fear that abstraction would weaken understanding. In reality, it shifted where understanding mattered.

“I still need to know how things work,” he says. “I just don’t need to manually do everything to prove that I know.”

The biggest change in Ethan’s day-to-day work is not speed, but focus. He spends more time thinking about system boundaries, invariants, and user impact. He spends less time fighting tools. He feels less drained by repetitive work and more engaged with design.

“I didn’t become less of a programmer,” he says. “I became a different kind of one.”

So does he consider himself a vibe coder?

He pauses. “If vibe coding means trusting intuition without verification, then no. If it means expressing intent and letting the machine handle syntax, then yes. Absolutely.”

Ethan does not believe AI will replace programmers. He believes it will expose the difference between people who understand systems and people who only write code. In that sense, AI is not the end of software engineering. It is a stress test.

“The models are powerful,” he says. “But power just amplifies whatever you already are. If you’re careless, you get faster chaos. If you’re thoughtful, you get leverage.”

For now, Ethan is comfortable living in that tension. He continues to experiment with new models. He continues to refine his workflow. And he continues to read the same forums where developers argue about whether vibe coding is the future or a fad.

“I don’t think it’s either,” he says. “It’s just the next tool. A dangerous one if you don’t respect it. An incredible one if you do.”

He pauses again, then adds, “And no, I still don’t let it merge to main.”
