Yesterday, I had my first successful AI coding experience.

I’ve used AI coding tools before—and come away disappointed. The results were underwhelming: low-quality code, inconsistent abstraction levels, and subtle bugs that took longer to fix than writing the whole thing from scratch would have.

Those problems haven’t vanished. The code quality this time was still disappointing. As I asked the AI to refine its work, it would randomly drop important constraints or refactor things in unhelpful ways. And yet, this experience was different—and genuinely valuable—for two reasons.

The first benefit was the obvious one: the AI helped me get over the blank-page problem. It produced a workable skeleton for the project—imperfect, but enough to start building on.

The second benefit was more surprising. I was working on a problem in odds-ratio preference optimization—specifically, finding a way to combine similar examples in datasets for AI training. I wanted an ideal algorithm, one that extracted every ounce of value from the data.

The AI misunderstood my description. Its first attempt was laughably simple—it just concatenated two text strings. Thanks, but I can call strcat or the Python equivalent without help.

However, the second attempt was different. It was still not what I had asked for—but as I thought about it, I realized it was good enough. The AI had created a simpler algorithm that would probably solve my problem in practice.

In trying too hard to make the algorithm perfect, I’d overlooked that the simpler approach might be the right one. The AI, by misunderstanding, helped me see that.
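To give a flavor of what "good enough" can look like here—this is purely an illustrative sketch, not the actual algorithm from my project, and the function name and similarity threshold are my own inventions—a single greedy pass that merges near-duplicate training examples by surface similarity often captures most of the value of a far fancier approach:

```python
from difflib import SequenceMatcher

def merge_similar(examples, threshold=0.9):
    """Greedily drop near-duplicate examples, keeping one representative
    per group. A "good enough" pass, not an optimal clustering."""
    representatives = []
    for ex in examples:
        # Skip this example if it is close enough to one we already kept.
        if any(SequenceMatcher(None, ex, rep).ratio() >= threshold
               for rep in representatives):
            continue
        representatives.append(ex)
    return representatives

data = [
    "The cat sat on the mat.",
    "The cat sat on the mat!",
    "A completely different sentence.",
]
print(merge_similar(data))
```

The greedy pass is quadratic in the worst case and order-dependent, which is exactly the kind of imperfection the "ideal" version of me would have rejected—and exactly the kind that usually doesn't matter in practice.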

This experience reminded me of something that happened years ago when I was mentoring a new developer. They came to me asking how to solve a difficult problem. Rather than telling them it was impossible, I explained what would be required: a complex authorization framework, intricate system interactions, and a series of political and organizational hurdles that would make deployment nearly impossible.

A few months later, they returned and said they’d found a solution. I was astonished—until I looked more closely. What they’d built wasn’t the full, organization-wide system I had envisioned. Instead, they’d reframed the problem. By narrowing the scope—reducing the need for global trust and deep integration—they’d built a local solution that worked well enough within their project.

They succeeded precisely because they didn’t see all the constraints I did. Their inexperience freed them from assumptions that had trapped me.

That’s exactly what happened with the AI. It didn’t know which boundaries not to cross. In its simplicity, it found a path forward that I had overlooked.

My conclusion isn’t that AI coding is suddenly great. It’s that working with someone—or something—that thinks differently can open new paths forward. Whether it’s an AI, a peer, or a less experienced engineer, that collaboration can bring fresh perspectives that challenge your assumptions and reveal simpler, more practical ways to solve problems.

Sam Hartman

October 2025
