Every AI governance policy mentions it: “human in the loop.” It’s framed as a safeguard. A check on the machine. Someone to catch errors before they become decisions.
That framing is too small. This isn’t a technology question. It’s a design question. What role humans play in AI workflows determines whether you get compliance or competitive advantage. And how you think about that role — whether humans are checking the machine’s work, speeding up their own, or reimagining what the work should be in the first place — shapes everything that follows.
For months now, I’ve watched organizations implement oversight the same way. They deploy an AI system, add a review step, and call it human oversight. A human reads what the machine did. The human approves or rejects. Then the machine moves forward. It feels safe. It looks compliant. And it leaves exponential value on the table.
The problem isn’t that humans are in the loop. The problem is what role they’re playing in it.
Three Modes of Human-AI Collaboration
When I look at how teams work with AI in practice, I see three distinct modes. They’re not a hierarchy where one is good and the others are bad — they’re different capabilities, and you need all three. But most organizations have only developed the first one, which means they’re missing the value of the other two.
Mode one: Approver. The AI executes a task. A human reviews the output and decides whether to approve it. The human is a quality checkpoint — catching errors, ensuring accuracy, protecting the organization from bad outputs. This mode is essential for high-stakes decisions, regulated environments, and anywhere the cost of an AI mistake is significant. Where it becomes a problem is when it’s the only mode an organization operates in. If every AI interaction ends with a human rubber-stamping the output, you’ve created a bottleneck without building capability.
Mode two: Augmenter. The AI helps a human work faster and more thoughtfully at the same thing they’ve always done. A marketer uses AI to draft emails. An analyst uses it to generate reports. A strategist uses it to pressure-test frameworks. The work hasn’t changed shape, but it’s quicker and often better. This is where most of the immediate productivity value lives, and it’s significant. The risk is staying here. When augmentation becomes the ceiling, you’re using AI to do more of the same rather than asking whether “the same” is still the right work to be doing.
Mode three: Director. The human steps back from the task entirely and asks a different question: What outcome are we trying to achieve, and what are all the ways we could get there? This is where the human isn’t optimizing an existing process — they’re reimagining what’s possible. What does this work look like if we’re not constrained by how we’ve always done it? What would we build from scratch, knowing what AI can now do? Where does human judgment, creativity, and relationship-building create the most value — and how do we redesign everything else around that?
Mode three is where the exponential gains live. It’s also the hardest to get to, because it requires questioning assumptions that have been baked into workflows for years.
All three modes are necessary. You need approvers for quality and governance. You need augmentation for speed and depth. But you also need people operating in director mode — people whose job it is to step back, question the outcome, and reimagine the path. The organizations that are pulling ahead are the ones developing all three capabilities simultaneously, and knowing when each one is appropriate.
What Humans Uniquely Bring
There’s a 2024 study from Harvard, Wharton, and Procter & Gamble that I keep coming back to. Across 776 professionals, teams working with AI were 9.2% more likely to deliver top-10% solutions, and individuals augmented by AI performed at the level of two-person teams without AI. The intentionally designed combination of human collaboration and AI augmentation outperformed every single-mode approach.
The reason: humans bring things that AI doesn’t have access to.
Creativity that makes conceptual leaps across domains. Emotional understanding of what will matter to a customer or a team. Collaborative energy that shifts how people feel about the work. These are the competitive edge in a world where AI handles the algorithmic stuff.
I’ve also noticed something about embodied intelligence. AI doesn’t currently have a body. It can’t stand in front of a team during a crisis and feel the room shift. It can’t read the thousand micro-expressions that happen in a conversation and adjust in real time, whether that’s in a conference room or on a video call. It doesn’t carry the shared history that makes feedback land with a team or get dismissed. Those are human advantages, not limitations.
AI can score well on traditional empathy metrics — reading emotion from text, matching tone, providing personalized responses. But humans own the intangible relationship value. Trust. Shared experience. Embodied presence. Those can’t be automated, and they’re essential to how work gets done.
When you recognize that, you stop trying to use AI to replace human judgment. You start using it to amplify the judgment that matters most.
What Directing Looks Like in Practice
A team that reviews AI output but doesn’t question the workflow is approving. A team that has redesigned which decisions need human judgment — which ones require human values, which ones need creative thinking, which ones are best left to the machine — that team is directing.
I worked with a leadership team that was using AI to generate performance reviews. They had a human review step. But they were still working inside a model where the machine did the thinking and humans cleaned up the output. I asked them a different question: What do you want your human conversations with people to be about?
They realized they wanted those conversations focused on growth, potential, and development — the things that matter for building capability. The machine could handle consistency checks, pattern spotting, and baseline feedback generation. The human conversations should be about what’s next. They redesigned the whole thing. Now AI does the structural work. Human leaders spend time on the conversations that shape career trajectories.
That’s directing. It’s not about adding humans to AI systems. It’s about designing systems where humans and AI both play to their strengths.
But there’s an even deeper version of directing that goes beyond workflow redesign. It’s the moment a team stops asking “how do we do this work better with AI?” and starts asking “is this the right work to be doing at all?” That’s the shift — when AI doesn’t just change how you execute, it changes what you see as possible. A product team that used to spend months on market research can now test ten hypotheses in a week. That doesn’t just speed up the old process. It fundamentally changes the questions you can afford to ask and the bets you can afford to take.
The difference between individual tools and enterprise systems matters here too. An individual tool scales one person’s productivity. An enterprise system creates collective uplift — it changes how teams think, collaborate, and make decisions. Directing is system-level work. It means converting strategic plans into living tools that teams interact with daily, question, refine, and evolve.
The Three Stages of AI Maturity
I’ve started thinking about AI adoption in three stages. They build on each other — you can’t skip to stage three without the foundations of one and two. But where an organization invests its attention tells me how its business is going to shift.
Stage one: Automating tasks. Use AI to do existing work faster. Summarize emails. Generate code. Answer routine questions. This is the foundation — it frees up time and reduces friction. Every organization needs this.
Stage two: Augmenting work. Teams reshape how they work with AI in the mix. A designer uses it for iteration speed. A strategist uses it to test frameworks. A manager uses it to prepare for difficult conversations. The work is faster and more thoughtful, but still recognizably the same work. This is where most of the near-term value lives.
Stage three: Redesigning what work is. This is where the conversation shifts. Instead of “how do we use AI to do this faster,” teams ask “what would we build from scratch if we weren’t constrained by how we’ve always done it?” They question the outcome itself, not just the path to get there. They rebuild workflows. They eliminate steps that exist only because they used to be necessary. They reallocate human energy to the work that creates the most value — the strategic thinking, the relationship building, the creative problem-solving that only humans can do.
Stage three requires leaders willing to question assumptions about what work should look like when humans and AI are genuine partners. It requires redesign, not just tool adoption. And it requires the willingness to let go of processes that have always worked well enough — because “well enough” is no longer the bar when the possibilities have expanded.
Most organizations are still in stage one. Transformation happens when AI becomes a genuine thinking and strategic tool — something that fundamentally reshapes what’s possible. And it doesn’t happen by accident.
The Real Question
When I talk to leadership teams about this, the conversation eventually comes down to one thing: this isn’t about which AI tools you buy. It’s about what kind of organization you want to build, and what role humans play in that future.
The answer reshapes roles, workflows, organizational structure, and hiring. If you’re designing for director-level collaboration, you’re not hiring the same people in the same roles. You’re not building teams the same way. You’re not evaluating performance on the same metrics.
Most organizations are still framing human-in-the-loop as governance: “How do we make sure AI doesn’t break things?” The organizations building competitive advantage are asking a different question: “What is the uniquely human contribution we want to preserve and amplify — and what becomes possible when we design everything else around it?”
That’s the conversation that matters. That’s where the exponential gains live.