The Hagakure #88: Your Org Chart Isn’t Ready for AI
Rethinking team dynamics in the midst of a paradigm shift
A few people reached out after my recent “One Year In” post, asking what I meant when I said that being truly AI-native requires rethinking team structure.
Here’s what I meant.
Traditional product teams often follow a linear flow: PM defines, designer mocks, engineers build. But that model breaks down when you’re building AI-native products. We’re not designing screens for users to click—we’re building systems that interpret intent and act on it. That changes everything.
To get it right, everyone needs to be involved early: product, design, frontend, backend, data. Discovery and delivery collapse into the same loop. Feedback is faster. Outcomes are less predictable. The system learns from use, so you need tight, collaborative cycles to learn with it.
But here’s the catch: just having a cross-functional team on paper isn’t enough. You can still fall into the old pattern—PM and design think, engineering executes.
That won’t cut it anymore. Especially in the midst of a technological paradigm shift like the one we’re in. AI is evolving rapidly—new models, new patterns, new constraints and possibilities. And it’s the engineers who are closest to that edge. They’re the ones experimenting with tools as they emerge, seeing what breaks, and intuiting what’s now possible. Their input isn’t just helpful—it’s foundational to shaping the right thing in the first place.
So while the team’s composition may look the same, how it operates has to shift. More shared ownership. Tighter loops. A bias toward building to learn.
Starting the Shift Internally
At Resquared, we’re just beginning to make this shift—helping engineers think in terms of outcomes, not just implementation. We’ve started having more open conversations about what success looks like: defining a north star metric and its supporting metrics, exploring the difference between leading and lagging indicators, and looking at simple Mixpanel dashboards together. We’re introducing lightweight feedback loops into our routines—cross-functional check-ins where we try to make sense of what’s moving and why.
It’s early days, and we’re figuring it out as we go. But the real goal is to build a habit of critical thinking across the team—getting everyone to engage not just with what we’re building, but why it matters.
AI-native products demand AI-native teams.
What This Can Sound Like in the Room
I haven’t heard these at Resquared—but I can easily imagine them coming up elsewhere, especially in fast-moving startups. They’re common instincts when things feel urgent.
“This slows us down.”
Only at first. Involving people early avoids costly rework and misalignment later. It’s a short-term tradeoff for long-term velocity.
“Engineers should just build. That’s what PMs are for.”
In AI products, the what and the how are deeply intertwined. Engineers often surface opportunities or constraints that shape the product itself. Excluding them limits our thinking.
“We don’t have time to teach everyone how to think.”
If we want a team that moves fast and adapts well, critical thinking isn’t optional—it’s leverage. We’re not pausing to teach; we’re building thinking into the work itself.
These are fair concerns. But holding onto old mental models in the midst of a paradigm shift is the bigger risk.
AI-native teams don’t just build differently—they think differently.
Dear Paulo,
Thank you for sharing your thoughts on the implications of AI for technology professionals and the way we organize our work. I appreciated your reflections and decided to unpack some of your ideas — and gently challenge a few of your assumptions and suggested recipes for success.
To start, I’ll assume your team will be using one or more AI engines, and will take the time to scrutinize them — understanding what’s “under the hood” and the implications of their calls and outputs.
I agree with your point that simply assembling a multidisciplinary team doesn’t, by itself, change how people think or work. Since discovering Agile in 2007, I’ve introduced it (mostly via Scrum) to six teams. Some of them achieved strong results; others required multiple attempts. As you imply in your article, the real revolution in technology is about people. Tools and technologies are just the starting point.
The most successful transformations I’ve seen were within teams of experienced developers who trusted each other, had worked together for a long time, were strong collaborators and critical thinkers, and were committed to making change happen. They didn’t just accept the need for change — they embraced it.
You describe a team where collective responsibility drives delivery, and where everyone does what it takes to make it happen. In practice, however, I’ve seen “everyone is responsible” devolve into “no one is accountable.” It’s something we must keep a close eye on.
Given that AI encompasses a broad spectrum of technologies, disciplines, and methodologies, we’ll need to determine which of these are most essential for any given application.
Personally, I gravitate toward the machine learning aspect — mainly because it’s the part I can best understand. There’s something fascinating about application code that evolves in real time in response to observed behaviour.
That said, I don’t believe engineers — in the traditional sense — are the core of tomorrow’s teams. What we need is a new kind of “full-stack engineer”: someone who is a product builder, capable of experimenting, defining initial behaviours and workflows, and testing to a level that delivers acceptable value and quality.
The truth is, we’re still figuring out how to harness these emerging technologies — and how to reorganize our work accordingly. Owning the underlying tech is just the starting point. I look forward to watching this transition unfold and hope to contribute in some small way.