For the past few weeks, I've been testing Google Antigravity for coding. It's been good, sometimes great, at understanding what I'm trying to build and actually building it. But yesterday I gave it a different kind of task, this time using browser use: one that normally takes me hours of clicking through menus, creating channels one by one, setting up roles, writing descriptions, and configuring permissions. I told it to set up a Discord server for Namos Labs.
Here's what I did
I didn't write out a spec. I didn't create wireframes or map out the information architecture. I told Antigravity: Do deep research on Gemini first. Wait until that research finishes. Then take that research, paste it locally, and use it to build the Discord server. I pointed it to our website. I told it to read our ethics page to understand what kind of community we're trying to build. I explained that Namos Labs isn't a SaaS company, we build experiences. Then I just watched.
What it actually did
Antigravity opened the browser. It navigated to our site. It read through pages, understanding our voice, our values, what we're building. Then it started building the server.
It created seven product-specific roles, one for each app we've shipped. Each one with the right color scheme. Each one configured with appropriate permissions. It built out the entire channel structure. Public categories for welcome, rules, announcements. Community spaces for general conversation, showcasing work, giving feedback. A private lounge for founding members. Seven product-specific categories with dedicated chat channels. Voice channels for coworking and support calls.
It wrote channel descriptions that actually captured what each space is for. It set up the welcome message. It configured community rules based on our ethics statement, not generic Discord rules, but rules that reflected how we actually want people to interact.
The only thing I had to do manually? Sign into my Discord account so it could access the server. (I also had to stop it when it looked like it was about to start inviting people. And I uploaded our logo myself because that felt like something I should handle.) But everything else? Antigravity did it.
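To give a sense of the scale, the structure it built can be sketched as plain data. Every name below is my illustrative placeholder, not the actual server config, and the counting helper is just for the sketch:

```python
# Hypothetical sketch of the server layout described above.
# All role, category, and channel names are placeholders, not the real ones.
SERVER_LAYOUT = {
    # Seven product-specific roles, one per shipped app
    "roles": [f"app-{i}" for i in range(1, 8)],
    "categories": {
        "Welcome": ["welcome", "rules", "announcements"],
        "Community": ["general", "showcase", "feedback"],
        "Founders": ["lounge"],  # private founding-member space
        # Seven product categories, each with a dedicated chat channel
        **{f"app-{i}": [f"app-{i}-chat"] for i in range(1, 8)},
    },
    "voice": ["coworking", "support-calls"],
}

def channel_count(layout: dict) -> int:
    """Total text channels across all categories, plus voice channels."""
    text = sum(len(channels) for channels in layout["categories"].values())
    return text + len(layout["voice"])

print(channel_count(SERVER_LAYOUT))  # 16 channels in this sketch
```

Even in this simplified form, that's ten categories, sixteen channels, and seven roles, each needing a name, a description, and permissions. Those are the "hundreds of small decisions" Antigravity made on its own.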
Why this feels different
I've been using AI to code for months. I shipped 43 applications in two weeks with Claude Opus 4.5. I thought I had a pretty good sense of what these tools could do. But watching Antigravity navigate an interface, understand brand context from multiple sources, make design decisions, and execute a complex setup end-to-end? That's different. It didn't just follow instructions. It understood what I was trying to create.
Look at the channel descriptions it wrote. They aren't generic templates. They capture what each product does. They match our tone. They make sense for the people who'll actually be using these spaces. When I told it Namos Labs builds experiences, not products, it got it. The server structure reflects that. The language reflects that. The whole vibe reflects that.
The distance collapsed
This is what got me. Usually, setting up a Discord server means spending an hour planning the structure, another hour creating channels and categories, another hour writing descriptions and welcome messages, another hour configuring roles and permissions, and then testing everything to make sure it works. That's 4-5 hours minimum. Probably more if you're thoughtful about it.
With Antigravity, the time from "I need a Discord server" to "here's a fully configured Discord server" was maybe 20 minutes. Most of that was me watching it work. The work that's left? Setting up the onboarding flow. Fine-tuning some permissions. Testing everything. Maybe 15-20 minutes total. We went from 4-5 hours to 35-40 minutes. And most of the 35-40 minutes is the AI working, not me.
What this means for how we build
I keep thinking about this. We talk a lot about AI helping us code faster. Helping us write better. Helping us think through problems. But this is different. This is AI understanding context across multiple domains (brand identity, community guidelines, product positioning, user experience) and using that understanding to make hundreds of small decisions that add up to a coherent whole.
That's not automation. That's not even assistance. That's AI as a collaborator that actually understands what you're trying to build and can execute on that vision independently.
What's next
The Namos Labs Discord is almost ready. Just need to finish the onboarding flow and test everything. If you're a founding member or you've been following what we're building, you'll get access first.
But more than that, I'm thinking about what else becomes possible when AI can understand context like this. When it can navigate interfaces. When it can make judgment calls about tone and structure and user experience. I'm excited to explore more of what browser use makes possible.
