Wheelies, wipeouts and the best leadership lesson I've had all year!

Simon Collinson
AI Change Consultant

I was sitting in the sun this weekend, watching my son Austin flying around on his bike while my daughter Ryan grappled with her new love of skateboarding. Cue super proud dad! Somewhere between cheering them on, I caught myself wondering: what will their world look like in the next decade? What careers will they have? What problems will they be solving that we can't even name yet?

The truth is, no one knows. And that's not so different from the era we're all navigating right now. No one truly knows what the impacts of AI (the good and the bad) will ultimately look like. Which is exactly why how we lead through the uncertainty matters so much.

Most NZ AI rollouts are failing before they start, not because of the technology, but because of the culture around it.

Leaders are talking about AI as a way to work smarter and move faster. But scratch the surface of most transformation programmes and you'll find the real agenda is efficiency, cost reduction, and margin. Employees aren't naive; they clock this quickly. And when they do, the psychological contract takes a hit before a single tool has even been rolled out.

The focus is almost entirely on the technology. What's being systematically underinvested in is the human infrastructure around it: the conversations, the clarity, and most critically, the psychological safety to actually try something new without fearing the consequences.

When psychological safety is low, people remain silent about uncertainty, avoid experimentation, and quietly disengage from the change process. Adoption slows, not because the technology is ineffective, but because the workforce is psychologically cautious.

A study published in February this year, drawing on data from over 2,200 employees across a global consulting firm, found that psychological safety is the single strongest predictor of whether people actually adopt AI tools: stronger than their role, seniority, experience, or geography. We're not talking about a soft nice-to-have. We're talking about the mechanism that determines whether your investment pays off.

And right now in Aotearoa, there's a meaningful gap between what leaders believe and what their people are experiencing. Randstad's 2026 data shows that 60% of NZ employers think AI will significantly reshape work tasks, but only 48% of their employees see it that way. That's not resistance. That's a signal that people haven't been brought along on the journey.

The fear is out there — and we're not talking about it enough.

There has arguably never been more genuine uncertainty about the future of work in our lifetimes. I regularly wonder about what my kids (Austin and Ryan) will be doing in their careers. It's impossible to tell. What I do know is that their world of work will look vastly different to the roles we know today, and that sits with me.

The media narrative has been relentless, and the divide between those who feel positioned to benefit from AI and those who don't is widening. I hear it in conversations with leaders and their teams: a low-level hum of anxiety that isn't being named or addressed. People are asking: does the organisation still need me, and for how long? What happens when my tasks get automated? Am I being upskilled, or managed out?

When leaders fail to make space for these fears, when they keep the conversation at the level of productivity metrics and tool demos, they're avoiding and deferring the discomfort. And deferred fear can become disengagement, then resistance, then failed adoption, then real business risk.

So what does good actually look like?

It starts with something that doesn't come naturally to most senior leaders: vulnerability. Not weakness, let's be clear about that. Brené Brown's research is unambiguous on this point: vulnerability is not the absence of courage, it is the birthplace of trust. A leader who says "I don't have all the answers about where AI might take us, but I'm committed to figuring it out alongside you" is not showing weakness. They're creating the psychological safety that makes it possible for their team to take the risks that actually drive innovation.

What erodes trust is performative confidence: the meetings where everything is fine, the strategy deck that makes it sound like it's all in hand, the language of "opportunity" when people are genuinely worried about their mortgages. People don't need certainty. They need honesty, and they need to feel heard.

Vulnerability is not oversharing. It's not a lack of direction. It's the decision to show up honestly in ambiguous conditions — and in 2026, that is exactly the condition we're all operating in.

Beyond that, good looks like an adaptive approach to transformation, one built with teams, not done to them. Instead of announcing AI strategy from the top, what if you brought your people into the question? What would faster access to information make possible for this team? What could we do with the hours we currently spend on admin? What experiments are worth running, even if some of them fail?

This kind of co-creation does two things simultaneously. It builds psychological ownership: people who helped design the change are far more likely to champion it. And it surfaces insight that leadership simply doesn't have access to from the top floor. Your frontline teams know exactly where the friction is, where the time is wasted, where the opportunity lies. You just have to create the conditions where they feel safe enough to tell you.

Then test, learn, and iterate - visibly. Not every experiment will work, and that's not just okay, it's the point. When leaders model intellectual curiosity and resilience in the face of failure, they give their teams permission to do the same. That's how you build a culture of innovation that outlasts any single tool or technology cycle.

As I sat watching my kids in the sun, I noticed something. They weren't paralysed by the uncertainty of trying. Austin tried to wheelie time and time again. Each time it didn't come off, he just reset and went again. Ryan fell off that skateboard more times than I could count (thank goodness for knee and elbow pads). And then, finally, she didn't fall. She stayed up. The look on her face in that moment was one of pure accomplishment and pure joy, and it will stay with me for a long time.

They were showing me exactly the mindset this moment calls for: an understanding that failure will happen, and an openness to trying anyway and learning from it.

That's not a children's story. That's the leadership challenge sitting in front of every C-suite and People team in the country right now. Build the conditions where your people feel safe enough to do the same, and the rest follows.

At Five NZ, we built the AI Compass to help leaders understand exactly where their organisation stands across the five dimensions that drive successful AI transformations: Leadership & Strategy, Performance, Culture, People, and Tech.

Keen to know more? Start with our free AI Readiness Check. It takes less than five minutes and gives you a clear picture of where to focus first.

Take the free readiness check → aicompass.io


Who to talk to at Five

James Kitney
Chief Strategy and Transformation Catalyst
james@fivenz.com
Paula Riano
AI Strategy Advisor
paula@fivenz.com
Nick Mackeson-Smith
Chief Curiosity Officer, Founder and Director
nick@fivenz.com
