Everyone is betting on AI to make better decisions. But what if we’re about to automate and scale exactly the wrong things?
Three days into my solo Atlantic crossing—somewhere between the Canaries and Cape Verde, 300 nautical miles from the nearest land—my autopilot began to fail. The wind had shifted and increased while I slept. The autopilot was struggling to compensate, overworking itself, overheating.
Here’s what’s remarkable: I was asleep below deck when I sensed something was wrong. Not heard—sensed. The movement of the boat changed. My body registered the shift before my conscious mind processed it. I woke up and immediately knew I needed to adjust the autopilot settings before it burned out completely.
This is human-machine collaboration at its best. The autopilot was doing something I couldn’t—holding course with precision while I slept. But I was doing something it couldn’t—sensing the broader context, feeling when the conditions had changed beyond its parameters, knowing when to intervene. We were partners, each contributing what the other lacked.
For those unfamiliar with solo sailing: without an autopilot, you cannot sleep. Someone must be at the helm 24 hours a day. In a crewed boat, you rotate. Solo, if you stop to sleep, you’re not racing—you’re drifting. And if you don’t sleep, within days you’re hallucinating, your judgment deteriorates, you become a danger to yourself. Sleep deprivation at sea kills. So this partnership isn’t philosophical—it’s survival.
When the autopilot eventually did fail completely, I was able to repair it. Not because I’m a mechanical genius, but because I had prepared. Before leaving, I’d thought through the scenarios: What could break? What would I need? I’d brought spare parts specifically for this failure. The preparation didn’t prevent the problem, but it meant I could solve it when it occurred.
This connects to something I learned across all my expeditions: preparation doesn’t eliminate fear or prevent problems. But it transforms your relationship with uncertainty. When you’ve thought through the scenarios and equipped yourself to handle them, fear becomes useful information rather than paralyzing emotion. You gain the space between stimulus and response where wisdom lives.
We’re at a similar moment with AI in business. And the mistake most leaders are making is thinking AI will fix their decision-making problems. It won’t. It will amplify them.
The Seductive Promise of Artificial Objectivity
The sales pitch for AI in business is compelling: finally, decisions based on data rather than emotions. Finally, pattern recognition at scale. Finally, bias-free analysis that’s not colored by human psychology.
It sounds perfect. And it’s dangerously incomplete.
Here’s what’s actually happening: AI is pattern-matching at superhuman speeds. But the patterns it finds are only as good as the data it’s trained on and the questions you ask it. If your training data contains historical biases—and it does—AI will learn and scale those biases. If you’re asking the wrong questions because you’re in a fear state—and most leaders are—AI will give you very efficient answers to the wrong questions.
Garbage in, garbage out. But now the garbage is being generated at the speed of light and with the veneer of algorithmic authority.
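To make the bias-amplification point concrete, here is a minimal sketch with entirely hypothetical, synthetic data: the simplest possible "model" trained on biased historical hiring decisions does nothing but reproduce that bias, now at machine speed and with machine confidence.

```python
# Toy illustration (hypothetical, synthetic data): a trivial pattern-matcher
# trained on biased historical decisions simply learns and automates the bias.
from collections import Counter

# Synthetic history: group A was hired 80% of the time, group B only 20%,
# despite identical qualifications in this made-up dataset.
history = [("A", "hire")] * 80 + [("A", "reject")] * 20 \
        + [("B", "hire")] * 20 + [("B", "reject")] * 80

def train(records):
    """Learn the majority outcome per group -- the crudest form of
    pattern recognition, standing in for a real model."""
    outcomes = {}
    for group, decision in records:
        outcomes.setdefault(group, Counter())[decision] += 1
    return {group: counts.most_common(1)[0][0]
            for group, counts in outcomes.items()}

model = train(history)
print(model)  # {'A': 'hire', 'B': 'reject'} -- the historical bias, automated
```

Real models are vastly more sophisticated, but the underlying dynamic is the same: if the training data encodes a skewed pattern, the system optimizes for that pattern unless someone questions the data and the objective.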
What AI Cannot Do
Let me be clear: I’m not anti-AI. I’m using AI right now in various aspects of my coaching practice and business operations. It’s an extraordinary tool. But like any powerful tool, it’s only as good as the person wielding it. And there are things AI fundamentally cannot do:
AI Cannot Sense Context Beyond Data
When you’re in Antarctica and you feel the quality of the ice change—not just see it on instruments, but feel it through the hull—that’s human sensing. When you walk into a boardroom and immediately know the real conversation isn’t the one on the agenda, that’s human sensing. When you read a contract and something feels off before you can articulate why, that’s human sensing.
AI can process millions of data points. But it cannot sense the things that aren’t yet data. It cannot pick up on the subtle shifts in energy, tone, and unspoken dynamics that often contain the most important information. The best leaders I know—in sailing and in business—have highly developed sensing capacity. AI can support this, but never replace it.
AI Cannot Hold Paradox
The most important business decisions exist in paradox: expand or consolidate, hire fast or hire slow, maintain culture or disrupt it, optimize the present or invest in the future. AI can analyze the trade-offs. It can model scenarios. But it cannot hold the creative tension that allows a third option to emerge—the one that honors both sides of the paradox in a new way.
This capacity to hold paradox, to stay present with competing truths until wisdom emerges, is distinctly human. And it requires a nervous system that can tolerate discomfort without collapsing into either/or thinking.
AI Cannot Navigate the Unmapped
AI is brilliant at optimization—finding the best path within known parameters. But true innovation happens in uncharted territory. It requires intuitive leaps that can’t be justified by existing data because the data doesn’t exist yet.
When I decided to do the Atlantic crossing in 2001, no AI would have recommended it. The data said: recession, post-9/11 economy, sponsors pulling out, high risk, uncertain return. But my intuition—informed by deep self-knowledge, pattern recognition from years of sailing, and a clear sense of my values and purpose—said this was exactly what I needed to do. I was right. That journey transformed me in ways that created value for decades.
AI Cannot Be Authentic
The leadership that builds trust, attracts talent, and creates psychological safety is authentic leadership. People follow humans they trust, who are genuine, who admit uncertainty, who show vulnerability alongside competence. AI can help you craft messages and analyze sentiment. But it cannot be you. And in an increasingly AI-mediated world, authentic human presence becomes more valuable, not less.
The Real Risk: Outsourcing Wisdom
Here’s my concern: we’re rushing to outsource decision-making to AI precisely because decision-making from a clear, wise place is so difficult. It requires:
- Doing your inner work to distinguish fear from intuition
- Developing the capacity to tolerate uncertainty without premature closure
- Building genuine self-knowledge about your patterns, biases, and triggers
- Staying present with complexity rather than reducing it to false simplicity
- Taking responsibility for outcomes even when the path wasn’t clear
That’s hard. Really hard. So the temptation is to let AI do it. “The algorithm says…” becomes the new “I was just following orders.” It’s an abdication of responsibility disguised as technological sophistication.
But here’s what happens when you outsource your wisdom: your capacity for wisdom atrophies. Like any human capability, if you don’t use it, you lose it. A generation of leaders who defer to AI for every significant decision will be a generation that never develops the judgment, intuition, and wisdom that make great leadership possible.
And we’ll create organizations that are optimized for everything except what matters most: human flourishing, genuine innovation, and creating value that serves rather than extracts.
The Path of Integration: AI as Collaborative Tool
So what’s the alternative? Not rejecting AI—that’s foolish and impossible. But approaching it as a collaborative tool that amplifies human wisdom rather than replaces it.
This requires a very specific type of leadership development. You need to become the kind of leader who can:
Use AI for Pattern Recognition, Not Decision-Making
Let AI show you patterns in your data, your market, your organization. But make decisions from a place of integrated wisdom—cognitive understanding PLUS somatic awareness PLUS intuitive knowing PLUS values alignment. AI informs the decision; it doesn’t make it.
Question AI’s Assumptions
Every AI model has assumptions baked into its training data and algorithms. Leaders need the clarity to ask: What is this optimizing for? Whose perspective is centered? What’s being excluded? What would shift if we changed the question? This requires you to be so clear about your own values and biases that you can spot them in the AI’s outputs.
Create Psychological Safety Alongside Technological Advancement
As AI handles more tactical decisions, the strategic and human elements become more crucial. But these only flourish in psychologically safe environments where people can think independently, challenge assumptions, share half-formed ideas, and take intelligent risks.
Fear-based cultures cannot create this safety. Leaders who are themselves driven by unexamined fears cannot model the courage required. This is why the inner work isn’t separate from the AI strategy—it’s the foundation that makes AI useful rather than dangerous.
Stay Connected to Your Somatic Wisdom
When you’re looking at AI-generated insights, can you feel the resonance or dissonance in your body? That subtle sense of “yes, this aligns” or “something’s off here”? Most leaders have learned to ignore these signals in favor of purely rational analysis. But in an AI-augmented world, this somatic wisdom becomes your most valuable differentiator.
Developing this capacity requires practice: learning to read your body’s signals, to distinguish fear from intuition, to stay present with uncertainty long enough for wisdom to emerge. This is learnable—but it requires commitment and usually guidance.
The Leadership Imperative
We’re at a fork in the road. One path leads to AI-optimized organizations that are efficient, data-driven, and fundamentally soulless—places where humans are increasingly peripheral to decisions that affect their lives. Where innovation is incremental because AI can only work with what already exists. Where psychological safety is impossible because fear is encoded into the algorithms.
The other path leads to organizations where AI amplifies human wisdom—where technology handles the routine and humans focus on the creative, the strategic, the compassionate, the innovative. Where leaders have done enough inner work to distinguish fear from intuition, to stay present in uncertainty, to make decisions from wholeness rather than wounds. Where psychological safety allows both humans and AI to contribute their unique strengths.
Which path we take is not determined by the technology. It’s determined by the consciousness of the leaders deploying it.
This is why I believe the most important work for leaders right now isn’t learning to use AI better—though that matters. It’s becoming the kind of human who can use AI wisely. Who can distinguish between optimization and wisdom. Who can hold space for the unmeasurable and unquantifiable aspects of value creation. Who can stay connected to authentic intuition even when algorithms are whispering different suggestions.
This isn’t soft skill development. This is the hardest work there is—confronting your fears, examining your biases, developing your capacity to stay present in uncertainty, learning to distinguish the different voices in your system. But it’s also the highest-leverage work you can do. Because as you become clearer, everything else becomes clearer. Your strategic thinking sharpens. Your people decisions improve. Your innovation accelerates. Your use of AI becomes wise rather than just efficient.
The Questions That Matter
As you think about integrating AI into your organization, ask yourself these questions:
- Am I clear enough about my own fears and biases to recognize when they’re influencing how I use AI?
- Can I distinguish between what feels efficient and what feels wise?
- Do I have the somatic literacy to sense when something’s off in AI’s recommendations?
- Am I creating the psychological safety that allows humans to challenge AI and think independently?
- What values am I optimizing for, and are they reflected in how I’m deploying AI?
- Am I using AI to avoid difficult decisions or uncomfortable self-examination?
If you can’t answer these questions clearly, that’s not a failure—it’s information. It’s showing you where the work needs to happen.
The Journey Ahead
In my years of sailing, I learned that the most dangerous sailor isn’t the one who lacks technology—it’s the one who has technology but lacks the wisdom to use it well. Who trusts the GPS without questioning whether the chart data is current. Who follows the routing algorithm into dangerous waters because they’ve stopped using their own judgment.
The same is true in business. The most dangerous leader in the AI era won’t be the technophobe—it will be the one who deploys AI without doing the inner work to use it wisely. Who optimizes for efficiency without questioning what’s being optimized. Who scales their biases at algorithmic speed because they never examined them in the first place.
The opportunity—and it’s a massive one—is to become the kind of leader who can harness AI’s power while remaining grounded in human wisdom. Who can let algorithms handle the routine while you focus on the genuinely strategic. Who can use data and pattern recognition to inform decisions that remain fundamentally wise, values-aligned, and courageously human.
This is the leadership work of our time: developing the inner clarity, somatic awareness, and authentic courage that allows you to use powerful tools without being used by them. To create organizations where technology and humanity both flourish. Where innovation happens not despite fear but because leaders have learned to work with it constructively.
You cannot do this work alone. The patterns you most need to see are precisely the ones you’re living inside of. The fears you need to distinguish from intuition are the ones that feel most like truth. The authentic self you need to access has been covered by decades of protection and adaptation.
This is the work I do with leaders: helping you develop the internal clarity and capacity that makes everything else possible. Not telling you what decisions to make, but helping you become the kind of leader who makes wise decisions naturally—with or without AI. The kind of leader who creates organizations where courage, authenticity, and genuine innovation thrive. Where AI is a powerful ally in service of human wisdom, not a replacement for it.
—
If these ideas resonate with you, if you’re recognizing yourself or your organization in these patterns, let’s talk. The journey from fear-based to wisdom-based leadership is one of the most important you’ll take—for yourself, for your organization, and for everyone your decisions touch.