Go Flux Yourself: Navigating the Future of Work (No. 21)


TL;DR: September’s Go Flux Yourself examines the fundamentals of AI success: invest £10 in people for every £1 on technology, build learning velocity into your culture, and show up as a learner yourself. England’s women’s rugby team went from amateurs juggling jobs to world champions through one thing: investing in people.

Image created on Midjourney

The future

“Some people are on [ChatGPT] too much. There are young people who just say ‘I can’t make any decision in my life without telling chat everything that’s going on. It knows me, it knows my friends, I’m going to do whatever it says.’ That feels really bad to me … Even if ChatGPT gives way better advice than any human therapist, there is something about collectively deciding we’re going to live our lives the way that the AI tells us feels bad and dangerous.”

The (unusually long) opening quotation for this month’s Go Flux Yourself comes – not for the first time – from the CEO of OpenAI, Sam Altman, arguably the most influential technology leader right now. How will future history books – if there is anyone with a pulse around to write them – judge the man who allegedly has “no one knows what happens next” as a sign in his office?

The above words come from an interview a few weeks ago, and smack of someone who is deeply alarmed by the power he has unleashed. When Altman starts worrying aloud about his own creation, you’d think more people would pay attention. But here we are, companies pouring millions into AI while barely investing in the people who’ll actually use it.

We’ve got this completely backwards. Organisations are treating AI as a technology problem when it’s fundamentally a people problem. Companies are spending £1 on AI technology when they should spend an additional £10 on people, as Kian Katanforoosh, CEO and Founder of Workera, told me over coffee in Soho a couple of weeks ago.

We discussed the much-quoted MIT research, published a few weeks ago (read the main points without signing up to download the paper in this Forbes piece), which shows that 95% of organisations are failing to achieve a return on investment from their generative AI pilots. Granted, the sample size was only 300 organisations, but that’s a pattern you can’t ignore.

Last month’s newsletter considered the plight of school leavers and university students in a world where graduate jobs have dropped by almost two-thirds in the UK since 2022, and entry-level hiring is down 43% in the US and 67% in the UK since Altman launched ChatGPT in November 2022.

It was easily the most read of all 20 editions of Go Flux Yourself. Why? I think it captured many people’s concerns about how damaging blindly following the AI path could be for human flourishing. If young people are unable to gain employment, what happens to the talent pipeline, and where will tomorrow’s leaders come from? The maths doesn’t work. The logic doesn’t hold. And the consequences are starting to show.

To continue this critically important conversation, I met (keen Arsenal fan) Kian in central London, as he was over from his Silicon Valley HQ. Alongside running Workera – an AI-powered skills intelligence platform that helps Fortune 500 and Global 2000 organisations assess, develop, and manage innovation skills in areas such as AI, data science, software engineering, cloud computing, and cybersecurity – he is an adjunct lecturer in computer science at Stanford University.

“Companies have bought a huge load of technology,” he said. “And now they’re starting to realise that it can’t work without people.”

That’s the pattern repeated everywhere. Buy the tools. Deploy the systems. Wonder why nothing changes.

This is wrongheaded. We’ve treated AI like it’s just another software rollout when it’s closer to teaching an entire workforce a new language. And business leaders have to invest significantly more in their current and future human workforce to maximise the (good) potential of AI and adjacent technologies, or everyone fails. Updated leadership thinking is paramount to success.

McKinsey used to advocate spending $1 (or £1) on technology for every $1 / £1 on people. Then, last year, the company revised it: £1 on technology, £3 on people. “Our experience has shown that a good rule of thumb for managing gen AI costs is that for every $1 spent on developing a model, you need to spend about $3 for change management. (By way of comparison, for digital solutions, the ratio has tended to be closer to $1 for development to $1 for change management.)”

Kian thinks this is still miles off what should be spent on people. “I think it’s probably £1 in technology, £10 in people,” he told me. “Because when you look at AI’s potential productivity enhancements on people, even £10 in people is nothing.”

That’s not hyperbole. That’s arithmetic based on what he sees daily at Workera. Companies contact him, saying they’ve purchased 25 different AI agents and software packages, but employee usage starts strong for a week and then collapses. What’s going on? The answer is depressingly predictable.

“Your people don’t even know how to use that technology. They don’t even have the 101 skills to understand how to use it. And even when they try, they’re putting you (the organisation) at risk because they don’t even know what they’re uploading to these tools.”

One of the main things Workera offers is an “AI-readiness test”, and Kian’s team’s findings uncover a worrying truth: right now, outside tech companies, only 28 out of 100 people are AI-ready. That’s Workera’s number, based on assessing thousands of employees in the US and elsewhere. In tech companies, the readiness rate is over 90%, which is perhaps unsurprising. The gap between tech-industry businesses and everyone else is already a chasm, and it is growing.

But here’s where it gets really interesting. Being AI-ready today means nothing if your learning velocity is too slow. The technology changes every month. New capabilities arrive. Old approaches become obsolete. Google just released Veo, which means anyone can become a videographer. Next month, there’ll be something else.

“You can be ahead today,” Kian said. “If your learning velocity is low, you’ll be behind in five years. That’s what matters at the end of the day.”

Learning velocity. I liked that phrase. It captures something essential about this moment: that standing still is the same as moving backwards, that capability without adaptability is a temporary advantage at best.

However, according to Kian, the UK and Europe are already starting from behind, as his data shows a stark geographic divide in AI readiness. American companies – even outside pure tech firms – are moving faster on training and adoption. European organisations are more cautious, more bound by regulatory complexity, and more focused on risk mitigation than experimentation.

“The US has a culture of moving fast and breaking things,” Kian said. “Europe wants to get it right the first time. That might sound sensible, but in AI, you learn by doing. You can’t wait for perfect conditions.”

He pointed to the EU AI Act as emblematic of the different approaches. Comprehensive regulation arrived before widespread adoption. In the US, it’s the reverse: adoption at scale, regulation playing catch-up. Neither approach is perfect, but one creates momentum while the other creates hesitation.

The danger isn’t just that European companies fall behind American competitors. It’s that European workers become less AI literate, less adaptable, and less valuable in a global labour market increasingly defined by technological fluency. The skills gap becomes a prosperity gap.

“If you’re a European company and you’re waiting for clarity before you invest in your people’s AI skills, you’ve already lost,” Kian said. “Because by the time you have clarity, the game has moved on.”

Fresh research backs this up. (And a note on the need for the latest data – as a client told me a few days ago, data is like milk: it has a short use-by date. I love that metaphor.) A new RAND Corporation study examining AI adoption across healthcare, financial services, climate and energy, and transportation found something crucial: identical AI technologies achieve wildly different results depending on the sector. A chatbot in banking operates at a different capability level than the same technology in healthcare, not because the tech differs but because the context, regulatory environment, and implementation constraints differ.

RAND proposes five levels of AI capability.

Level 1 covers basic language understanding and task completion: chatbots, simple diagnostic tools, and fraud detection. Humanity has achieved this.

Level 2 involves enhanced reasoning and problem-solving across diverse domains: systems that analyse complex scenarios and draw inferences. We’re emerging into this now.

Level 3 is sustained autonomous operation in complex environments, where systems make sequential decisions over time without human intervention. That’s mainly in the future, although Waymo’s robotaxis and some grid management pilots are testing it.

Levels 4 and 5 – creative innovation and full organisational replication – remain theoretical.

Here’s what matters: most industries currently operate at Levels 1 and 2. Healthcare lags behind despite having sophisticated imaging AI, as regulatory approval processes and evidence requirements slow down adoption. Finance advances faster because decades of algorithmic trading have created infrastructure and acceptance. Climate and energy sit in the middle, promising huge optimisation gains but constrained by infrastructure build times and regulatory uncertainty. Transportation is inching toward Level 3 autonomy while grappling with ethical dilemmas about life-or-death decisions.

The framework reveals why throwing technology at problems doesn’t work. You can’t skip levels. You can’t buy Level 3 capability and expect it to function in an organisation operating at Level 1 readiness. The gap between what the technology can do and what your people can do with it determines the outcome.

RAND identified six challenges that cut across every sector: workforce transformation, privacy protection, algorithmic bias, transparency and oversight, disproportionate impacts on smaller organisations, and energy consumption. Small institutions serving rural and low-income areas face particular difficulties. They lack resources and technical expertise. The benefits of AI concentrate among major players, while vulnerabilities accumulate at the edges.

For instance, the algorithmic bias problem is insidious. Even without explicitly considering demographic characteristics, AI systems exhibit biases. Financial algorithms can devalue real estate in vulnerable areas. Climate models might overlook impacts on marginalised communities. The bias creeps in through training data, through proxy variables, through optimisation functions that encode existing inequalities.

Additionally, and as I’ve written about previously, the energy demands are staggering. AI’s relationship with climate change cuts both ways. Yes, it optimises grids and accelerates the development of green technology. However, if AI scales productivity across the economy, it also scales emissions, unless we intentionally direct applications toward efficiency gains and invest heavily in clean energy infrastructure. The transition from search-based AI to generative AI has intensified computational requirements. Some experts argue potential efficiency gains could outweigh AI’s carbon footprint, but only if we pursue those gains deliberately through measured policy and investment rather than leaving it to market forces.

RAND’s conclusion aligns with everything Kian told me: coordination is essential, both domestically and internationally. Preserve optionality through pilot projects and modular systems. Employ systematic risk management frameworks. Provide targeted support to smaller institutions. Most importantly, invest in people at a ratio that reflects the actual returns.

The arithmetic remains clear across every analysis: returns on investing in people dwarf the costs. But we’re not doing it.

How, though, do you build learning velocity into an organisation? Kian had clear thoughts on this. Yes, you need to dedicate time to learning. Ten per cent of work time isn’t unreasonable. But the single most powerful thing a leader can do is simpler than that: lead by example.

“Show up as a learner,” he said. “If your manager, or your manager’s manager, or your manager’s manager’s manager is literally showing you how they learn and how much time they spend learning and how they create time for learning, that is already enough to create a mindset shift in the employee base.”

Normalising learning, then, is vital. That shift in culture matters more than any training programme you can buy off the shelf.

We talked about Kian’s own learning habits. Every morning starts with readings. He’s curated an X feed of people he trusts who aren’t talking nonsense, scans it quickly, and bookmarks what he wants to read in depth at night. He tracks top AI conferences and skims the papers they accept – thousands of them – looking at figures and titles to get the gist. Then he picks 10% to read more carefully, and maybe 3% to spend an entire day on. “You need to have that structure or else it just becomes overwhelming,” he said.

The alternative is already playing out, and it’s grim. Some people – particularly young people – are on ChatGPT too much, as Altman admitted. They can’t make any decision without consulting the chatbot. It knows them, knows their friends, knows everything. They’ll do whatever it says.

Last month, Mustafa Suleyman, Co-Founder of DeepMind and now in charge of AI at Microsoft, published an extended essay about what he calls “seemingly conscious AI”: systems that exhibit all the external markers of consciousness without possessing it. He thinks we’re two to three years away from having the capability to build such systems using technology that already exists.

“My central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship,” he wrote.

Researchers working on consciousness tell him they’re being inundated with queries from people asking whether their AI is conscious, whether it’s acceptable to love it, and what it means if it is. The trickle has become a flood.

Tens of thousands of users already believe their AI is God. Others have fallen in love with their chatbots. Indeed, a Harvard Business Review survey of 6,000 regular AI users – the results of which were published in April (so how stale is the milk?) – found that companionship and therapy were the most common use cases.

This isn’t speculation about a distant future. This is happening now. And we’re building the infrastructure – the long memories, the empathetic personalities, the claims of subjective experience – that will make these illusions even more convincing.

Geoffrey Hinton, the so-called godfather of AI, who won the Nobel Prize last year, told the Financial Times in a fascinating lunch profile published in early September, that “rich people are going to use AI to replace workers. It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer. That’s not AI’s fault, that is the capitalist system.”

Dark, but there’s something clarifying about his honesty. The decisions we make now about how to implement AI, whether to invest in people or just technology, whether to prioritise adoption or understanding – these will shape what comes next.

The Adaptavist Group’s latest report, published last week, surveyed 900 professionals responsible for introducing AI across the UK, US, Canada and Germany. They found a divide: 42% believe their company’s AI claims are over-inflated. These “AI sceptics” work in environments where 65% believe their company’s AI stance puts customers at risk, 67% worry that AI adoption poses a threat to jobs, and 59% report having no formal AI training.

By contrast, AI leaders in companies that communicated AI’s value honestly reported far greater benefits. Some 58% say AI has improved work quality, 61% report time savings, and 48% note increased output. Only 37% worry about ethics issues, compared with 74% in over-hyped environments.

The difference? Training. Support. Honest communication. Investing in people rather than just technology.

Companies are spending between £1 million and £10 million implementing AI. Some are spending over £10 million. But 59% aren’t providing basic training. It’s like buying everyone in your company a Formula One car and being shocked when most people crash it.

“The next year is all going to be about adoption, skills, and doing right by employees,” Kian said. “Companies that do it well are going to see better adoption and more productivity. Those who don’t? They’re going to get hate from their employees. Like literally. Employees will be really mad at companies for not being human at all.”

That word – human – kept coming up in our conversation. In a world increasingly mediated by AI, being human becomes both more difficult and more essential. The companies that remember this, that invest in their people’s ability to learn, adapt, and think critically, will thrive. The ones that don’t will wonder why their expensive AI implementations gather digital dust.

The present

Image created on Midjourney

On Thursday (October 2), I’ll be at DTX London moderating a main-stage session asking: is your workforce ready for what’s next? The questions we’ll tackle include how organisations can create inclusive, agile workplaces that foster belonging and productivity, how AI will change entry-level positions, and crucially, how we safeguard critical thinking in an AI-driven world. These are urgent, practical challenges that every organisation faces right now. (I’ll also be recording episode three of DTX Unplugged, the new podcast series I co-host, looking at business evolution – listen to the series so far here.)

Later in October, on the first day of the inaugural Data Decoded in Manchester (October 21-22), I’ll moderate another session on a related topic to the above: what leadership looks like in a world of AI, because leadership must evolve. The ethical responsibilities are staggering. The pace of change is relentless. And the old playbooks simply don’t work.

I’ve also started writing the Go Flux Yourself book (any advice on self-publishing welcome). More on that soon. The conversations I’m having, the research I’m doing, the patterns I’m seeing all point towards something bigger than monthly newsletters can capture. We’re living through a genuine transformation, and I’m in a unique and privileged position to document what it feels like from the inside rather than just analysing it from the outside.

The responses to last month’s newsletter on graduate jobs and universities showed me how hungry people are for honest conversations about what’s really happening, on the ground and behind the numbers. Expect more clear-eyed analysis of where we are and what we might do about it. And please do reach out if you think you can contribute to this ongoing discussion, as I’m open to featuring interviewees in the newsletter (and, in time, the book).

The past

Almost exactly two years ago, I took my car for its annual service at a garage at Elmers End, South East London. While I waited, I wandered down the modest high street and discovered a Turkish café. I ordered a coffee, a lovely breakfast (featuring hot, gooey halloumi cheese topped with dripping honey and sesame seeds) and, on a whim, had my tarot cards read by a female reader at the table opposite. We talked for 20 minutes, and it changed my life (see more on this here, in Go Flux Yourself No.2).

A couple of weeks ago, I returned for this year’s car service. The café is boarded up now, alas. A blackboard dumped outside showed the old WiFi password: kate4cakes. Another casualty of our changing times, a small loss in the great reshuffling of how we live, work, and connect with each other. With autumn upon us, the natural state of change and renewal is fresh in the mind. However, it still saddened me as I pondered what the genial Turkish owner and his family were doing instead of running the café.

Autumn has indeed arrived. Leaves are twisting from branches and falling to create a multicoloured carpet. But what season are we in, really? What cycle of change?

I thought about that question as I watched England’s women’s rugby team absolutely demolish Canada 33-13 in the World Cup final at Twickenham last Saturday, with almost 82,000 people in attendance, a world record. The Red Roses had won all 33 games since their last World Cup defeat, the final against New Zealand’s Black Ferns.

Being put through my paces with Katy Mclean (© Tina Hillier)

In July 2014, I trained with the England women’s squad for pieces I wrote for the Daily Telegraph (“The England women’s rugby team are tougher than you’ll ever be”) and the Financial Times (“FT Masterclass: Rugby training with Katy Mclean” (now Katy Daley-McLean)). They weren’t professional then. They juggled jobs with their international commitments. Captain Katy Daley-McLean was a primary school teacher in Sunderland. The squad included policewomen, teachers, and a vet. They spent every spare moment either training or playing rugby.

I arrived at Surrey Sports Park in Guildford with what I now recognise was an embarrassing air of superiority. I’m bigger, stronger, faster, I thought. I’d played rugby at university. Surely I could keep up with these amateur athletes.

The England women’s team knocked such idiotic thoughts out of my head within minutes.

We started with touch rugby, which was gentle enough. Then came sprints. I kept pace with the wingers and fullbacks for the first four bursts, then tailed off. “Tactically preserving my energy,” I told myself.

Then strength and conditioning coach Stuart Pickering barked: “Malcolms next.”

Katy winked at me. “Just make sure you keep your head up and your hands on your hips. If you show signs of tiredness, we will all have to do it again … so don’t.”

Malcolms – a rugby league drill invented by the evidently sadistic Malcolm Reilly – involve lying face down with your chin on the halfway line, pushing up, running backwards to the 10-metre line, going down flat again, pushing up, sprinting to the far 10-metre line. Six times.

By the fourth repetition, I was blowing hard. By the final one, I was last by some distance, legs burning, expelling deeply unattractive noises of effort. The women, heads turned to watch me complete the set, cheered encouragement rather than jeered. “Suck it up Ollie, imagine it’s the last five minutes of the World Cup final,” full-back Danielle Waterman shouted.

Then came the circuit training. Farmers’ lifts. Weights on ropes. The plough. Downing stand-up tackle bags. Hit and roll. On and on we moved, and as my energy levels dipped uncomfortably low, it became a delirious blur.

The coup de grâce was wrestling the ball off 5ft 6in fly-half Daley-McLean. I gripped as hard as I could. She stole it from me within five seconds. Completely zapped, I couldn’t wrest it back. Not to save my life.

Emasculated and humiliated, I feigned willingness to take part in the 40-minute game that followed. One of the coaches tugged me back. “I don’t think you should do this mate … you might actually get hurt.”

I’d learned my lesson. These women were tougher, fitter, and more disciplined than I’d ever be.

That was 2014. The England women, who went on to win the World Cup in France that year, didn’t have professional contracts. They squeezed their training around their jobs. Yet they were world-class athletes who’d previously reached three consecutive World Cup finals, losing each time to New Zealand.

Then something changed. The Rugby Football Union invested heavily. The women’s team went professional. They now have the same resources, support systems, and infrastructure as the men’s team.

The results speak for themselves. Thirty-three consecutive victories. A World Cup trophy, after two more final defeats to New Zealand. Record crowds. A team that doesn’t just compete but dominates.

This is what happens when you invest in people, providing them with the training, resources, time, and support they need to develop their skills. You treat them not as amateur enthusiasts fitting excellence around the edges of their lives, but as professionals whose craft deserves proper investment.

The parallels to AI adoption are striking. Right now, most organisations are treating their workers like those 2014 England rugby players and expecting them to master AI in their spare time. To become proficient without proper training. To deliver world-class results with amateur-level support.

It’s not going to work.

The England women didn’t win that World Cup through superior technology. They won it through superior preparation. Through investment in people, in training, and in creating conditions for excellence to flourish.

That’s the lesson for every organisation grappling with AI. Technology is cheap. Talent is everything. Training matters more than tools. And if you want your people to keep pace with change, you need to create a culture where learning isn’t a luxury but the whole point.

As Kian put it: “We need to move from prototyping to production AI. And you need 10 times more skills to put AI in production reliably than you need to put a demo out.”

Ten times the skills, and £10 spent on people for every £1 on technology. The arithmetic isn’t complicated. The will to act on it is what’s missing.

Statistics of the month

📈 Sick days surge
Employees took an average of 9.4 days off sick in 2024, compared with 5.8 days before the pandemic in 2019 and 7.8 days just two years ago. (CIPD)

📱 Daily exposure
Children are exposed to around 2,000 social media posts per day. Over three-quarters (77%) say it harms their physical or emotional health. (Sway.ly via The Guardian)

📉 UK leadership crisis
UK workers’ confidence in their company leaders has plummeted from 77% to 67% between 2022 and 2025 – well below the global average of 73% – while motivation fell from 66% to just 60%. (Culture Amp)

🎯 L&D budget reality
Despite fears that AI could replace their roles entirely (43% of L&D leaders believe this), learning and development budgets are growing: 70% of UK organisations and 84% in Australia/New Zealand increased L&D spending in 2025. (LearnUpon)

🔒 Email remains the weakest link
83% of UK IT leaders have faced an email-related security incident, with government bodies hit hardest at 92%. Yet email still carries over half (52%) of all organisational communication. (Exclaimer UK Business Email Report)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.