For a long time, I assumed friction was just part of building software with large language models. You prompt, you refine, you correct, you repeat. If something felt hard, the answer was usually more structure, clearer instructions, or better prompts. I evaluated models mostly on correctness, speed, and whether they could eventually get to the right answer.
I treated that loop as normal.
Two weeks working almost exclusively with Opus changed that assumption.
Not because Opus was magically better at everything, but because it exposed a mismatch between how I think and how I had been choosing tools. When I switched back, the contrast was immediate. What I had normalized as “how AI works” turned out to be a specific interaction pattern, not an inherent limitation.
The change was not subtle. It reshaped how I choose models, how I prompt them, and how I design AI-native systems.
I Now Choose Models Based on How They Handle Intent, Not Output Quality
Before Opus, I evaluated models primarily on visible output. Was the code correct? Was it fast? Could it eventually get there if I nudged it enough? I assumed friction was inevitable and that better prompting was always the solution.
After working with Opus, I realized that the real differentiator for me is intent fidelity.
How well a model preserves the goal across context, files, and time matters more than how clever or fast it is. If a model drifts, improvises, or optimizes for adjacent problems, it creates hidden cost even when the output looks impressive. That cost shows up as rework, frustration, and loss of trust.
With Opus, I noticed that intent stayed intact. I could describe what I was trying to achieve, sometimes messily or emotionally or without full clarity, and the model would hold onto the core of that goal as the work progressed.
That changed my model selection criteria entirely. I now optimize first for how a model listens and preserves direction, not how flashy the first response looks.
I Changed How I Prompt When Exploring Versus Executing
Opus also made something else obvious: not all models are built for the same mode of work.
Some models tolerate ambiguity and help shape it. Others expect precision and specifications upfront. Neither is wrong, but they support different phases of building.
As a result, I now separate my work into two modes.
When I am exploring ideas, architecture, or how something should feel, I use models that can work with messy context, long transcripts, and incomplete thoughts. I allow myself to describe intent rather than implementation.
When I am executing well-defined tasks, I switch to models that perform best with strict instructions and clear constraints.
Before Opus, I tried to force one model to do both. That created unnecessary friction and made me think the problem was my clarity, when it was really a tooling mismatch.
Where This Became Very Concrete
This difference became especially clear while working on a Swift project.
With Opus, coordinating changes across multiple Xcode files felt coherent. Structure stayed intact without constant reminders. I did not need to aggressively constrain scope to prevent unintended changes. I spent more time thinking about the system and less time guarding edges.
When I switched back, I noticed a consistent pattern. Requests for data structure alignment often triggered adjacent design changes. Instructions were interpreted opportunistically rather than literally. Nothing was technically broken, but the intent drift mattered.
That distinction becomes critical in production systems, where correctness includes respecting boundaries, not just producing valid output.
Sentio Made the Shift Impossible to Ignore
Sentio began with Opus, and that starting point shaped the entire project.
I could describe what the app was meant to support, how voice notes should transform into different outputs, and how those transformations should relate to one another. The model helped connect those ideas into something cohesive.
When I later continued development after downgrading, the same descriptions required more guardrails. The same clarity produced more divergence. That contrast helped me understand why some projects had felt harder than they needed to be in the past.
The bottleneck was not the language or the framework. It was interpretation.
I Now Design Systems Assuming the Model Is a Collaborator, Not a Compiler
Working with Opus changed how I think about AI inside applications like Sentio.
Instead of treating the model as something that simply transforms inputs into outputs, I started designing around how it reasons over time. That led to concrete architectural changes. Fewer repeated queries. More scoped retrieval. Clearer separation between context that needs persistence and context that does not.
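To make that separation concrete, here is a minimal sketch of what "scoped retrieval plus a persistence boundary" can look like in Swift. All names here (`ContextStore`, `ScopedQuery`, and the field names) are hypothetical illustrations, not types from Sentio's actual codebase.

```swift
import Foundation

/// Context that should survive across sessions: user goals, project
/// constraints, and anything the model must keep honoring over time.
struct PersistentContext: Codable {
    var goals: [String]
    var constraints: [String]
}

/// A retrieval request scoped to a single concern, instead of resending
/// the entire conversation or project context on every call.
struct ScopedQuery {
    let topic: String
    let maxSnippets: Int
}

final class ContextStore {
    private var persistent = PersistentContext(goals: [], constraints: [])
    private var cache: [String: [String]] = [:]  // topic -> retrieved snippets

    /// Durable intent goes into the persistent layer.
    func remember(goal: String) {
        persistent.goals.append(goal)
    }

    /// Scoped retrieval: fetch only the snippets relevant to one topic,
    /// and cache them so repeated queries do not hit the corpus again.
    func retrieve(_ query: ScopedQuery, from corpus: [String: [String]]) -> [String] {
        if let hit = cache[query.topic] { return hit }
        let snippets = Array((corpus[query.topic] ?? []).prefix(query.maxSnippets))
        cache[query.topic] = snippets
        return snippets
    }
}
```

The design choice this sketches is the one described above: durable intent lives in a small persistent structure, while per-request context is retrieved narrowly and cached, which is how "fewer repeated queries" falls out of the architecture rather than from prompt discipline.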
Cost and architecture decisions improved because the model surfaced constraints I had not explicitly optimized for. It noticed priorities I had mentioned earlier and incorporated them into suggestions later.
This shifted how I build AI-native systems, not just how I write prompts.
The Actual Change
Two weeks with Opus recalibrated my baseline.
It made me realize that a lot of what I thought was “just how AI tools work” was actually a mismatch between my thinking style and the models I was using. Once that became clear, a lot of past frustration made sense.
Now I build differently because I choose differently.
