A weekly dispatch on what I'm building, learning, and thinking about in AI.
Hey friends,
I remember when ChatGPT was first released. I was excited in a way that felt almost embarrassing. I'd spent years working alongside machine learning engineers, hearing them talk about what was coming. So when it finally landed for everyone, it felt like something I'd somehow been expecting. I went and told all my friends. They told me it was stupid. That it wasn't going anywhere.
That got in my head more than I'd like to admit. I kept using it anyway - quietly, mostly for some freelance writing I was doing at the time, just asking it to help me think through ideas. Back then it was simple - no web search, no tools. But it worked for what I needed, and looking back, that should have been enough signal. Then the models seemed to plateau, and I thought maybe everyone was right. I stepped back.
Then in the summer of 2023 I was with a friend who is also AuDHD. We were talking about how technology had shaped both of our lives - not metaphorically, but in the most practical sense, for people like us. And somewhere in that conversation he said: "Naya, you of all people need to get ahead of this." I've heard words like that a few times in my life. When I first got into security. When I was doing web dev. Someone sees something in you before you do and says it plainly. Every time it's happened, something has shifted.
I went home that day and bought a book on AI. Read it. Started using AI every single day from that point on. And then, like I always do with anything I care about, I needed to understand how it actually worked. I'm the kind of person who pulls old TVs off the sidewalk just to open them, look up every part, and trace every connection until the whole thing makes sense from the inside. It's also how I started my cellphone repair business in high school. I can't just use something - I have to understand it. So I went from user, to daily user, to asking: how does this thing actually work? That question led me here.
Welcome to Contextual Intelligence. Each week I'll share what I'm building, what I've learned, and what's caught my attention. I'll only show up when I have something worth saying.
🔨 Built
✍️ Imori
The biggest week for Imori so far. The draft editor is finally stable - write, dictate, revise, and chat with the AI without it breaking mid-thought. But the work I'm most proud of is under the hood: I rebuilt how the AI understands your data. Each agent now knows exactly what it's allowed to pull and when. The full story is in the Learned section. Also shipped a Style Guide feature that interviews you about your writing voice, then uses that to shape how the AI responds to you inside the app.
🔗 Sentio
The platform layer that ties my other apps together. Shipped a new event-routing screen on iOS, started the same on Android, and fixed a bug on the marketing site that was greeting visitors with the wrong time of day.
📰 Hudson Life Dispatch
A local newsletter I run. This week was mostly invisible work - silent backend errors, an auth loop sending logged-in users back to the login screen, wrong error messages in the login flow. Got all of that cleared. Also built two terminal commands that let me generate and send editions without doing it manually. And published Edition 5.
📋 Enso
Feedback and changelog tool. Cleared nearly 550 code quality errors that had been quietly piling up, fixed changelogs not auto-stamping their publish date, and cleaned up the UI across the boards and dashboard.
🎯 Prepspace
Interview prep app, getting production-ready. Fixed a live auth bug (renaming one file was all it took), wired up dark mode across the full UI, and tab navigation now saves your place when you switch sections.
🛠️ Everything else
Timestamps on blog posts so readers know when something's been updated, a variable-doubling bug in Outreachos, two rounds of UX improvements for Namos Courses, and a persistent log error in Adbooker that's finally gone.
🧠 Learned
I've been thinking about the difference between information and knowledge, and what it means to try to teach that distinction to a machine.
I'm building a writing app called Imori, structured around my own creative process: research, bookmark, artifact, synthesize, create. The app has multiple AI agents - one for open conversation, one attached to drafts, one for generating ideas. The problem I kept running into was that these agents didn't know what they didn't know. They'd pull from bookmarks when they should have pulled from artifacts. They'd treat a link I'd saved in five seconds the same way they'd treat a document I'd spent an hour processing.
This week I finally understood why that distinction matters so much.
A bookmark is a signal of potential. "This might be useful." An artifact is a signal of understanding. "I've processed this. I know what it means." Teaching an agent the difference isn't just an organizational problem. It's an epistemological one. You're asking the system: what does it actually mean to know something, versus to have merely encountered it?
I rebuilt the context layer for each agent with that in mind. The main chat agent now reads artifacts only. The ideation tab draws from bookmarks, artifacts, and existing drafts. Each agent has a defined scope of awareness and operates within it.
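To make the idea concrete, here's a minimal sketch in Python of what a scoped context layer could look like. This is illustrative only - the agent names, source types, and function names are my shorthand for the pattern, not Imori's actual code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentScope:
    """Declares which data sources an agent is allowed to read."""
    name: str
    allowed: frozenset  # e.g. {"bookmarks", "artifacts", "drafts"}

    def can_read(self, source: str) -> bool:
        return source in self.allowed

# Each agent gets an explicit scope of awareness (hypothetical examples).
CHAT = AgentScope("chat", frozenset({"artifacts"}))
IDEATION = AgentScope("ideation", frozenset({"bookmarks", "artifacts", "drafts"}))

def gather_context(scope: AgentScope, store: dict) -> dict:
    """Return only the data this agent's scope permits it to see."""
    return {src: items for src, items in store.items() if scope.can_read(src)}
```

The point of the pattern: an agent never decides at runtime whether a bookmark counts as knowledge. Sources outside its scope are simply invisible, so a quick save and a processed artifact can never be conflated.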
I've used the Anthropic Agent SDK before in other projects - Intero (glucose monitoring), Koyami (to-do), Takumi. But those were simpler: one agent, one data source, a clear purpose. What I'm building now requires an agent that understands the shape of information, not just its content. I'm still in the middle of it. But being able to articulate the problem clearly was the first real unlock.
🔖 Bookmarked
Anthropic's guide to building effective AI agents - I've been building agentic systems for a while, but this gave me enough structure that I turned it into a reusable skill in my workflow. I now reference it every time I start a new agent project. Worth reading if you build with AI.
Anthropic's open letter on not working with the Department of Defense - I don't think every AI company would have the spine to say this publicly. Whether you agree with the position or not, the willingness to draw a line matters.
Many companies are blaming layoffs on AI - CNN - Worth reading with some skepticism. A lot of these companies overhired aggressively in 2019-2020 and are now using AI as a cleaner narrative for cuts that were probably coming anyway.
That's Edition 001. Thanks for being here from the start.
“Artificial intelligence is not a substitute for human intelligence; it is a tool to amplify human creativity and ingenuity.”
— Fei-Fei Li
Be Kind.
Best, Naya
