Go Flux Yourself: Navigating the Future of Work (No. 22)


TL;DR: October’s Go Flux Yourself explores the epidemic of disconnection in our AI age. As 35% of UK homeowners plan to use smart doorbells to avoid human contact on Hallowe’en, and children face around 2,000 social media posts daily, we’re systematically destroying the one skill that matters most: genuine human connection.

Image created on Midjourney

The future

“The most important single ingredient in the formula of success is knowing how to get along with people.”

Have we lost the knowledge of how to get along with people? And to what extent is an increasing dependence on large language models degrading this skill for adults, and not allowing it to bloom for younger folk?

When Theodore Roosevelt, the 26th president of the United States, spoke the above words in the early 20th century, he couldn’t have imagined a world where “getting along with people” would require navigating screens, algorithms, and artificial intelligence. Yet here we are, more than a century after his death in 1919, rediscovering that wisdom in the most unsettling way possible.

Indeed, this Hallowe’en, 35% of UK homeowners plan to use smart doorbells to screen trick-or-treaters, according to estate agents eXp UK. Two-thirds will ignore the knocking. We’re literally using technology to avoid human contact on the one night of the year when strangers are supposed to knock on our doors.

It’s the perfect metaphor for where we’ve ended up. The scariest thing isn’t what’s at your door. It’s what’s already inside your house.

Princess Catherine put it perfectly earlier in October in her essay, The Power of Human Connection in a Distracted World, for the Centre for Early Childhood. “While digital devices promise to keep us connected, they frequently do the opposite,” she wrote, in collaboration with Robert Waldinger, part-time professor of psychiatry at Harvard Medical School. “We’re physically present but mentally absent, unable to fully engage with the people right in front of us.”

I was a contemporary of Kate’s at the University of St Andrews in the wilds of East Fife, Scotland. We both graduated in 2005, a year before Twitter launched and a year after “TheFacebook” appeared. We lived in a world where difficult conversations happened face-to-face, where boredom forced creativity, and where friendship required actual presence. That world is vanishing with terrifying speed.

The Princess of Wales warns that an overload of smartphones and computer screens is creating an “epidemic of disconnection” that disrupts family life. Notably, her three kids are not allowed smartphones (and I’m pleased to report my eldest, aged 11, has a simple call-and-text mobile). “When we check our phones during conversations, scroll through social media during family dinners, or respond to emails while playing with our children, we’re not just being distracted, we are withdrawing the basic form of love that human connection requires.”

She’s describing something I explored in January’s newsletter about the “anti-social century”. The phrase, coined by Derek Thompson of The Atlantic, describes a period marked by convenient communication and vanishing intimacy. We’re raising what Catherine calls “a generation that may be more ‘connected’ than any in history while simultaneously being more isolated, more lonely, and less equipped to form the warm, meaningful relationships that research tells us are the foundation of a healthy life”.

The data is genuinely frightening. Recent research from online safety app Sway.ly found that children in the UK and the US are exposed to around 2,000 social media posts per day. Some 77% say it harms their physical or emotional health. And, scarier still, 72% of UK children have seen content in the past month that made them feel uncomfortable, upset, sad or angry.

Adults fare little better. A recent study on college students found that AI chatbot use is hollowing out human interaction. Students who used to help each other via class Discord channels now ask ChatGPT. Eleven out of 17 students in the study reported feeling more isolated after AI adoption.

One student put it plainly: “There’s a lot you have to take into account: you have to read their tone, do they look like they’re in a rush … versus with ChatGPT, you don’t have to be polite.”

Who needs niceties in the AI age?! We’re creating technology to connect us, to help us, to make us more productive. And it’s making us lonelier, more isolated, less capable of basic human interactions.

Marvin Minsky, who won the Turing Award back in 1969, said something that feels eerily relevant now: “Once the computers get control, we might never get it back. We would survive at their sufferance. If we’re lucky, they might decide to keep us as pets.”

He said that 56 years ago. We’re not there yet. But we’re building towards something, and whether that something serves humanity or diminishes it depends entirely on the choices we make now.

Anthony Cosgrove, who started his career at the Ministry of Defence as an intelligence analyst in 2003 and has earned an MBE, has seen this play out from the inside. Having led global teams at HSBC and now running data marketplace platform Harbr, he’s witnessed first-hand how organisations stumble into AI adoption without understanding the foundations.

“Most organisations don’t even know what data they already hold,” he told me over a video call a few weeks ago. “I’ve seen millions of pounds wasted on duplicate purchases across departments. That messy data reality means companies are nowhere near ready for this type of massive AI deployment.”

After spending years building intelligence functions and technology platforms at HSBC – first for wholesale banking fraud, then expanding to all financial crime across the bank’s entire customer base – he left to solve what he calls “the gap between having aggregated data and turning it into things that are actually meaningful”.

What jumped out from our conversation was his emphasis on product management. “For a really long time, there was a lack of product management around data. What I mean by that is an obsession about value, starting with the value proposition and working backwards, not the other way round.”

This echoes the findings I discussed in August’s newsletter about graduate jobs. As I wrote then, graduate jobs in the UK have dropped by almost two-thirds since 2022 – roughly double the decline for all entry-level roles. That’s the year ChatGPT launched. The connection isn’t coincidental.

Anthony’s perspective on this is particularly valuable. “AI can only automate fragments of a job, not replace whole roles – even if leaders desperately want it to.” He shared a conversation with a recent graduate who recognised that his data science degree would, ultimately, be useless. “The thing he was doing is probably going to be commoditised fairly quickly. So he pivoted into product management.”

This smart graduate’s instinct was spot-on. He’s now, in Anthony’s words, “actively using AI to prototype data products, applications, digital products, and AI itself. And because he’s a data scientist by background, he has a really good set of frameworks and set of skills”.

Yet the broader picture remains haunting. Microsoft’s 2025 Work Trend Index reveals that 71% of UK employees use unapproved consumer AI tools at work. Fifty-one per cent use these tools weekly, often for drafting reports and presentations, or even managing financial data, all without formal IT approval.

This “Shadow AI” phenomenon is simultaneously encouraging and terrifying. “It shows that people are agreeable to adopting these types of tools, assuming that they work and actually help and aren’t hard to use,” Anthony observed. “But the second piece that I think is really interesting impacts directly the shareholder value of an organisation.”

He painted a troubling picture: “If a big percentage of your employees are becoming more productive and finishing their existing work faster or in different ways, but they’re doing so essentially untracked and off-books, you now have your employees that are becoming essentially more productive, and some of that may register, but in many cases it probably won’t.”

Assuming that many employees are using AI for work without being open about it with their employers, how concerned about security and data privacy are they likely to be?

Earlier in the month, Cybernews discovered that two AI companion apps, Chattee Chat and GiMe Chat, exposed millions of intimate conversations from over 400,000 users. The exposed data contained over 43 million messages and over 600,000 images and videos.

At the time of writing, one of the apps, Chattee, was the 121st Entertainment app on the Apple App Store, downloaded over 300,000 times. This is a symptom of what people, including Microsoft’s AI chief Mustafa Suleyman (as per August’s Go Flux Yourself), are calling AI psychosis: the willingness to confide our deepest thoughts to algorithms while losing the ability to confide in actual humans.

As I explored in June 2024’s newsletter about AI companions, this trend has been accelerating. Back in March 2024, there had been 225 million lifetime downloads on the Google Play Store for AI companions alone. The problem isn’t scale. It’s the hollowing out of human connection.

Then there’s the AI bubble itself, which everyone in the space has been talking about in the last few weeks. The Guardian recently warned that AI valuations are “now getting silly”. The Cape ratio – the cyclically adjusted price-to-earnings ratio – has reached dotcom bubble levels. The “Magnificent 7” tech companies now represent slightly more than a third of the whole S&P 500 index.

OpenAI’s recent deals exemplify the circular logic propping up valuations. The arrangement under which OpenAI will pay Nvidia for chips and Nvidia will invest $100bn in OpenAI has been criticised as exactly what it is: circular. The latest move sees OpenAI pledging to buy lots of AMD chips and take a stake in AMD over time.

And yet amid this chaos, there are plenty of people going back to human basics: rediscovering real, in-person connection through physical activity and genuine community.

Consider walking football in the UK. What began in Chesterfield in 2011 as a gentle way to coax older men back into exercise has become one of Britain’s fastest-growing sports. More than 100,000 people now play regularly across the UK, many managing chronic illnesses or disabilities. The sport has become “a masterclass in human communication” that no AI could replicate. Tony Jones, 70, captain of the over-70s, described it simply: “It’s the camaraderie, the dressing room banter.”

Research from Nottingham Trent University found that walking footballers’ emotional well-being exceeded the national average, and loneliness was less common. “The national average is about 5% for feeling ‘often lonely’,” said professor Ian Varley. “In walking football, it was 1%.”

This matters because authentic human interaction – the kind that requires you to read body language, manage tone, and show up physically – can’t be automated. Princess Catherine emphasises this in her essay, citing Harvard Medical School’s research showing that “the people who were more connected to others stayed healthier and were happier throughout their lives. And it wasn’t simply about seeing more people each week. It was about having warmer, more meaningful connections. Quality trumped quantity in every measure that mattered.”

The digital world offers neither warmth nor meaning. It offers convenience. And as Catherine warns, convenience is precisely what’s killing us: “We live increasingly lonelier lives, which research shows is toxic to human health, and it’s our young people (aged 16 to 24) that report being the loneliest of all – the very generation that should be forming the relationships that will sustain them throughout life.”

Roosevelt understood this instinctively over a century ago: success isn’t about what you know or what you can do. It’s about how you relate to other people. That skill – the ability to truly connect, to read a room, to build trust, to navigate conflict, to offer genuine empathy – remains stubbornly, beautifully human.

And it’s precisely what we’re systematically destroying. If we don’t take action to arrest this dark and deepening trend of digitally supercharged disconnection, the dream of AI and other technologies being used for enlightenment and human flourishing will quickly prove to be a living nightmare.

The present

Image runner’s own

As the walking footballers demonstrate, the physical health benefits of group exercise are sometimes secondary to camaraderie – but winning and hitting goals are also fun and life-affirming. In October, I ran my first half-marathon in under 1 hour and 30 minutes. I crossed the line at Walton-on-Thames to complete the River Thames half at 1:29:55. A whole five seconds to spare! I would have been nowhere near that time without Mike.

Mike is a member of the Crisis of Dads, the running group I founded in November 2021. What started as a clutch of portly, middle-aged plodders meeting at 7am every Sunday in Ladywell Fields, in south-east London, has grown to 26 members. Men in their 40s and 50s exercising to limit the dad bod and creating space to chat through things on our minds.

The male suicide rate in the UK in 2024 was 17.1 per 100,000, compared to 5.6 per 100,000 for women, according to the charity Samaritans. Males aged 50-54 had the highest rate: 26.8 per 100,000. Connection matters. Friendship matters. Physical presence matters.

Mike paced me during the River Thames half-marathon. With two miles to go, we were on track to go under 90 minutes, but the pain was horrible. His encouragement became more vocal – and more profane – as I closed in on something I thought beyond my ability.

Sometimes you need someone who believes in your ability more than you do, swearing lovingly at you, to get you across that line quicker.

Work in the last month has been equally high octane, and (excuse the not-so-humble brag) record-breaking – plus full of in-person connection. My fledgling thought leadership consultancy, Pickup_andWebb (combining brand strategy and journalistic expertise to deliver guaranteed ROI – or your money back), is taking flight.

And I’ve been busy moderating sessions at leading technology events across the country, around the hot topic of how to lead and prepare the workforce in the AI age.

Moderating at DTX London (image taken by organisers)

On the main stage at DTX London, I opened the session, themed around AI readiness, by asking the audience whose workforce was suitably prepared. One person, out of hundreds, stuck their hand up: Andrew Melville, who leads customer strategy for Mission Control AI in Europe. Sportingly, he took the microphone and explained the key to his success.

I caught him afterwards. His confidence wasn’t bravado. Mission Control recently completed a data reconciliation project for a major logistics company. The task involved 60,000 SKUs of inventory data. A consulting firm had quoted two to three months and a few million pounds. Mission Control’s AI configuration completed it in eight hours. Orders of magnitude faster, and 80% cheaper.

“You’re talking orders of magnitude,” Andrew said. “We’re used to implementing an Oracle database, and things get 5 or 10% more efficient. Now you’re seeing a thousand times more efficiency in just a matter of days and hours.”

He drew a parallel to the Ford Motor Company’s assembly line. Before that innovation, it took 12 hours to build a car. After? Ninety minutes. Eight times faster. “Imagine being a competitor of Ford,” Andrew said, “and they suddenly roll out the assembly line. And your response to that is: we’re going to give our employees power tools so they can build a few more cars every day.”

That’s what most companies are doing with AI. Giving workers ChatGPT subscriptions and hoping for magic, while missing the fundamental transformation required. As I said on stage at DTX London, it’s like handing workers the keys to a Formula 1 car without instructions, then wondering why there are so many immediate and expensive crashes.

“I think very quickly what you’re going to start seeing,” Andrew said, “is executives that can’t visualise what an AI transformation looks like are going to start getting replaced by executives that do.”

At Mission Control, he’s building synthetic worker architectures – AI agents that can converse with each other, collaborate across functions, and complete higher-order tasks. Not just analysing inventory data, but coordinating with procurement systems and finance teams simultaneously.

“It’s the equivalent of having three human experts in different fields,” Andrew explained, “and you put them together and you say, we need you to connect some dots and solve a problem across your three areas of expertise.”

The challenge is conceptual. How do you lead a firm where human workers and digital workers operate side by side, where the tasks best suited for machines are done by machines and the tasks best suited for humans are done by humans?

This creates tricky questions throughout organisations. Right now, most people are rewarded for being at their desks for 40 hours a week. But what happens when half that time involves clicking around in software tools, downloading data sets, reformatting, and loading back? What happens when AI can do all of that in minutes?

“We have to start abstracting the concept of work,” Andrew said, “and separating all of the tasks that go into creating a result from the result itself.”

Digging into that is for another edition of the newsletter, coming soon. 

Elsewhere, at the first Data Decoded in Manchester, I moderated a 30‑minute discussion on leadership in the age of AI. We were just getting going when time was up, which feels very much like 2025. The appetite for genuine insight was palpable. People are desperate for answers beyond the hype. Leaders sense the scale of the shift. However, their calendars still favour show-and-tell over do-and‑learn. That will change, but not without bruises.

Also in October, my essay on teenage hackers was finally published in the New Statesman. The main message is that we’re criminalising the young people whose skills we desperately need, rather than offering them a path into cybersecurity or related industries, and away from the darker criminal world.

Looking slightly ahead, on 11 November, I’ll be expanding on these AI-related themes, debating at The Portfolio Collective’s Portfolio Career Festival at Battersea Arts Centre. The subject, Unlocking Potential or Chasing Efficiency: AI’s Impact on Portfolio Work, prompts the question: should professionals embrace AI as a tool to amplify skills, creativity and flow, or hand over entire workflows to autonomous agents?

I know which side I’m on. 

(If you fancy listening in and rolling your sleeves up alongside over 200 ambitious professionals – for a day of inspiration, connection and, most importantly, growth – I can help with a discounted ticket. Use OLIVERPCFEST for £50 off the cost here.)

The past

In 2013, I was lucky enough to edit the Six Nations Guide with Lewis Moody, the former England rugby captain, a blood-and-thunder flanker who clocked up 71 caps. At the time, Lewis was a year into retirement, grappling with the physical aftermath of a brutal professional career.

When the tragic news broke earlier in October that Lewis, 47, had been diagnosed with the cruelly life-sapping motor neurone disease (MND), it prompted an outpouring of sorrow from the rugby community and far beyond. I simply sent him a heart emoji. He texted the same back a few hours later.

Lewis’s hellish diagnosis and the impact it has had on so many feels especially poignant given Princess Catherine’s reflections on childhood development. She writes about a Harvard study showing that “people who developed strong social and emotional skills in childhood maintained warmer connections with their spouses six decades later, even into their eighties and nineties”.

She continued: “Teaching children to better understand both their inner and outer worlds sets them up for a lifetime of healthier, more fulfilling relationships. But if connection is the key to human thriving, we face a concerning reality: every social trend is moving in the opposite direction.”

AI has already changed work. The deeper question is whether we’ll preserve the skills that make us irreplaceably human.

This Hallowe’en, the real horror isn’t monsters at the door. It’s the quiet disappearance of human connection, one algorithmically optimised interaction at a time.

Roosevelt was right. Success depends on getting along with people. Not algorithms. Not synthetic companions. Not virtual influencers.

People.

Real, messy, complicated, irreplaceable people. 

Statistics of the month

💰 AI wage premium grows
Workers with AI skills now earn a 56% wage premium compared to colleagues in the same roles without AI capabilities – showing that upskilling pays off in cold, hard cash. (PwC)

🔄 A quarter of jobs face radical transformation
Roughly 26% of all jobs on Indeed appear poised to transform radically in the near future as GenAI rewrites the DNA of work across industries. (Indeed)

📈 AI investment surge continues
Over the next three years, 92% of companies plan to increase their AI investments – yet only 1% of leaders call their companies “mature” on the deployment spectrum, revealing a massive gap between spending and implementation. (McKinsey)

📉 Workforce reduction looms
Some 40% of employers expect to reduce their workforce where AI can automate tasks, according to the World Economic Forum’s Future of Jobs Report 2025 – a stark reminder that transformation has human consequences. (WEF)

🎯 Net job creation ahead
A reminder that despite fears, AI will displace 92 million jobs but create 170 million new ones by 2030, resulting in a net gain of 78 million jobs globally – proof that every industrial revolution destroys and creates in equal (or greater) measure. (WEF)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 21)


TL;DR: September’s Go Flux Yourself examines the fundamentals of AI success: invest £10 in people for every £1 on technology, build learning velocity into your culture, and show up as a learner yourself. England’s women’s rugby team went from amateurs juggling jobs to world champions through one thing: investing in people.

Image created on Midjourney

The future

“Some people are on [ChatGPT] too much. There are young people who just say ‘I can’t make any decision in my life without telling chat everything that’s going on. It knows me, it knows my friends, I’m going to do whatever it says.’ That feels really bad to me … Even if ChatGPT gives way better advice than any human therapist, there is something about collectively deciding we’re going to live our lives the way that the AI tells us feels bad and dangerous.”

The (unusually long) opening quotation for this month’s Go Flux Yourself comes – not for the first time – from the CEO of OpenAI, Sam Altman, arguably the most influential technology leader right now. How will future history books – if there is anyone with a pulse around to write them – judge the man who allegedly has “no one knows what happens next” as a sign in his office?

The above words come from an interview a few weeks ago, and smack of someone who is deeply alarmed by the power he has unleashed. When Altman starts worrying aloud about his own creation, you’d think more people would pay attention. But here we are, companies pouring millions into AI while barely investing in the people who’ll actually use it.

We’ve got this completely backwards. Organisations are treating AI as a technology problem when it’s fundamentally a people problem. Companies are spending £1 on AI technology when they should spend an additional £10 on people, as Kian Katanforoosh, CEO and Founder of Workera, told me over coffee in Soho a couple of weeks ago.

We discussed the much-quoted MIT research, published a few weeks ago (read the main points without signing up to download the paper in this Forbes piece), which shows that 95% of organisations are failing to achieve a return on investment from their generative AI pilots. Granted, the sample size was only 300 organisations, but that’s a pattern you can’t ignore.

Last month’s newsletter considered the plight of school leavers and university students in a world where graduate jobs have dropped by almost two-thirds in the UK since 2022, and entry-level hiring is down 43% in the US and 67% in the UK since Altman launched ChatGPT in November 2022.

It was easily the most read of all 20 editions of Go Flux Yourself. Why? I think it captured many people’s concerns about what blindly following the AI path could mean for human flourishing. If young people are unable to gain employment, what happens to the talent pipeline, and where will tomorrow’s leaders come from? The maths doesn’t work. The logic doesn’t hold. And the consequences are starting to show.

To continue this critically important conversation, I met (keen Arsenal fan) Kian in central London, as he was over from his Silicon Valley HQ. Alongside running Workera – an AI-powered skills intelligence platform that helps Fortune 500 and Global 2000 organisations assess, develop, and manage innovation skills in areas such as AI, data science, software engineering, cloud computing, and cybersecurity – he is an adjunct lecturer in computer science at Stanford University.

“Companies have bought a huge load of technology,” he said. “And now they’re starting to realise that it can’t work without people.”

That’s the pattern repeated everywhere. Buy the tools. Deploy the systems. Wonder why nothing changes. The answer is depressingly simple: your people don’t know how to use what you’ve bought. They don’t have the foundational skills. And when they try, they’re putting you at risk because they don’t know what they’re uploading to these tools.

This is wrongheaded. We’ve treated AI like it’s just another software rollout when it’s closer to teaching an entire workforce a new language. And business leaders have to invest significantly more in their current and future human workforce to maximise the (good) potential of AI and adjacent technologies, or everyone fails. Updated leadership thinking is paramount to success.

McKinsey used to advocate spending $1 (or £1) on technology for every $1 / £1 on people. Then, last year, the company revised it: £1 on technology, £3 on people. “Our experience has shown that a good rule of thumb for managing gen AI costs is that for every $1 spent on developing a model, you need to spend about $3 for change management. (By way of comparison, for digital solutions, the ratio has tended to be closer to $1 for development to $1 for change management.)”

Kian thinks this is still miles off what should be spent on people. “I think it’s probably £1 in technology, £10 in people,” he told me. “Because when you look at AI’s potential productivity enhancements on people, even £10 in people is nothing.”
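
To make those rules of thumb concrete, here is a minimal back-of-the-envelope sketch in Python – the £2m technology budget is a hypothetical figure of my own, not one cited by Kian or McKinsey:

  # Illustrative only: what each spend ratio implies for investment in people
  tech_budget = 2_000_000  # hypothetical £ spent on AI technology

  ratios = {
      "Old parity rule (1:1)": 1,
      "Revised McKinsey rule (1:3)": 3,
      "Kian's rule of thumb (1:10)": 10,
  }

  for label, multiplier in ratios.items():
      people_budget = tech_budget * multiplier
      print(f"{label}: £{people_budget:,} on people for £{tech_budget:,} on technology")

On those assumed figures, Kian’s ratio implies £20m of people investment against £2m of technology spend – more than three times McKinsey’s revised guidance.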

Kian’s ratio isn’t hyperbole. It’s arithmetic, based on what he sees daily at Workera. Companies contact him, saying they’ve purchased 25 different AI agents and software packages, but employee usage starts strong for a week and then collapses. What’s going on? The answer is depressingly predictable.

“Your people don’t even know how to use that technology. They don’t even have the 101 skills to understand how to use it. And even when they try, they’re putting you (the organisation) at risk because they don’t even know what they’re uploading to these tools.”

One of the main things Workera offers is an “AI-readiness test”, and Kian’s team’s findings uncover a worrying truth: right now, outside tech companies, only 28 out of 100 people are AI-ready. That’s Workera’s number, based on assessing thousands of employees in the US and elsewhere. In tech companies, the readiness rate is over 90%, which is perhaps unsurprising. The gap between tech-industry businesses and everyone else is already a chasm, and it is growing.

But here’s where it gets really interesting. Being AI-ready today means nothing if your learning velocity is too slow. The technology changes every month. New capabilities arrive. Old approaches become obsolete. Google just released Veo, which means anyone can become a videographer. Next month, there’ll be something else.

“You can be ahead today,” Kian said. “If your learning velocity is low, you’ll be behind in five years. That’s what matters at the end of the day.”

Learning velocity. I liked that phrase. It captures something essential about this moment: that standing still is the same as moving backwards, that capability without adaptability is a temporary advantage at best.

However, according to Kian, the UK and Europe are already starting from behind, as his data shows a stark geographic divide in AI readiness. American companies – even outside pure tech firms – are moving faster on training and adoption. European organisations are more cautious, more bound by regulatory complexity, and more focused on risk mitigation than experimentation.

“The US has a culture of moving fast and breaking things,” Kian said. “Europe wants to get it right the first time. That might sound sensible, but in AI, you learn by doing. You can’t wait for perfect conditions.”

He pointed to the EU AI Act as emblematic of the different approaches. Comprehensive regulation arrived before widespread adoption. In the US, it’s the reverse: adoption at scale, regulation playing catch-up. Neither approach is perfect, but one creates momentum while the other creates hesitation.

The danger isn’t just that European companies fall behind American competitors. It’s that European workers become less AI literate, less adaptable, and less valuable in a global labour market increasingly defined by technological fluency. The skills gap becomes a prosperity gap.

“If you’re a European company and you’re waiting for clarity before you invest in your people’s AI skills, you’ve already lost,” Kian said. “Because by the time you have clarity, the game has moved on.”

Fresh research backs this up. (And a note on the need for the latest data – as a client told me a few days ago, data is like milk, and it has a short use-by date. I love that metaphor.) A new RAND Corporation study examining AI adoption across healthcare, financial services, climate and energy, and transportation found something crucial: identical AI technologies achieve wildly different results depending on the sector. A chatbot in banking operates at a different capability level than the same technology in healthcare, not because the tech differs but because the context, regulatory environment, and implementation constraints differ.

RAND proposes five levels of AI capability.

Level 1 covers basic language understanding and task completion: chatbots, simple diagnostic tools, and fraud detection. Humanity has achieved this.

Level 2 involves enhanced reasoning and problem-solving across diverse domains: systems that analyse complex scenarios and draw inferences. We’re emerging into this now.

Level 3 is sustained autonomous operation in complex environments, where systems make sequential decisions over time without human intervention. That’s mainly in the future, although Waymo’s robotaxis and some grid management pilots are testing it.

Levels 4 and 5 – creative innovation and full organisational replication – remain theoretical.

Here’s what matters: most industries currently operate at Levels 1 and 2. Healthcare lags behind despite having sophisticated imaging AI, as regulatory approval processes and evidence requirements slow down adoption. Finance advances faster because decades of algorithmic trading have created infrastructure and acceptance. Climate and energy sit in the middle, promising huge optimisation gains but constrained by infrastructure build times and regulatory uncertainty. Transportation is inching toward Level 3 autonomy while grappling with ethical dilemmas about life-or-death decisions.

The framework reveals why throwing technology at problems doesn’t work. You can’t skip levels. You can’t buy Level 3 capability and expect it to function in an organisation operating at Level 1 readiness. The gap between what the technology can do and what your people can do with it determines the outcome.

RAND identified six challenges that cut across every sector: workforce transformation, privacy protection, algorithmic bias, transparency and oversight, disproportionate impacts on smaller organisations, and energy consumption. Small institutions serving rural and low-income areas face particular difficulties. They lack resources and technical expertise. The benefits of AI concentrate among major players, while vulnerabilities accumulate at the edges.

For instance, the algorithmic bias problem is insidious. Even without explicitly considering demographic characteristics, AI systems exhibit biases. Financial algorithms can devalue real estate in vulnerable areas. Climate models might overlook impacts on marginalised communities. The bias creeps in through training data, through proxy variables, through optimisation functions that encode existing inequalities.

Additionally, and as I’ve written about previously, the energy demands are staggering. AI’s relationship with climate change cuts both ways. Yes, it optimises grids and accelerates the development of green technology. However, if AI scales productivity across the economy, it also scales emissions, unless we intentionally direct applications toward efficiency gains and invest heavily in clean energy infrastructure. The transition from search-based AI to generative AI has intensified computational requirements. Some experts argue potential efficiency gains could outweigh AI’s carbon footprint, but only if we pursue those gains deliberately through measured policy and investment rather than leaving it to market forces.

RAND’s conclusion aligns with everything Kian told me: coordination is essential, both domestically and internationally. Preserve optionality through pilot projects and modular systems. Employ systematic risk management frameworks. Provide targeted support to smaller institutions. Most importantly, invest in people at a ratio that reflects the actual returns.

The arithmetic remains clear across every analysis: returns on investing in people dwarf the costs. But we’re not doing it.

How, though, do you build learning velocity into an organisation? Kian had clear thoughts on this. Yes, you need to dedicate time to learning. Ten per cent of work time isn’t unreasonable. But the single most powerful thing a leader can do is simpler than that: lead by example.

“Show up as a learner,” he said. “If your manager, or your manager’s manager, or your manager’s manager’s manager is literally showing you how they learn and how much time they spend learning and how they create time for learning, that is already enough to create a mindset shift in the employee base.”

Normalising learning, then, is vital. That shift in culture matters more than any training programme you can buy off the shelf.

We talked about Kian’s own learning habits. Every morning starts with readings. He’s curated an X feed of people he trusts who aren’t talking nonsense, scans it quickly, and bookmarks what he wants to read deeper at night. He tracks top AI conferences, skims the papers they accept – thousands of them – scanning figures and titles to get the gist. Then he picks 10% to read more carefully, and maybe 3% to spend an entire day on. “You need to have that structure or else it just becomes overwhelming,” he said.

The alternative is already playing out, and it’s grim. Some people – particularly young people – are on ChatGPT too much, as Altman admitted. They can’t make any decision without consulting the chatbot. It knows them, knows their friends, knows everything. They’ll do whatever it says.

Last month, Mustafa Suleyman, Co-Founder of DeepMind and now in charge of AI at Microsoft, published an extended essay about what he calls “seemingly conscious AI”: systems that exhibit all the external markers of consciousness without possessing it. He thinks we’re two to three years away from having the capability to build such systems using technology that already exists.

“My central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship,” he wrote.

Researchers working on consciousness tell him they’re being inundated with queries from people asking whether their AI is conscious, whether it’s acceptable to love it, and what it means if it is. The trickle has become a flood.

Tens of thousands of users already believe their AI is God. Others have fallen in love with their chatbots. Indeed, a Harvard Business Review survey of 6,000 regular AI users – the results of which were published in April (so how stale is the milk?) – found that companionship and therapy were the most common use cases.

This isn’t speculation about a distant future. This is happening now. And we’re building the infrastructure – the long memories, the empathetic personalities, the claims of subjective experience – that will make these illusions even more convincing.

Geoffrey Hinton, the so-called godfather of AI, who won the Nobel Prize last year, told the Financial Times in a fascinating lunch profile published in early September, that “rich people are going to use AI to replace workers. It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer. That’s not AI’s fault, that is the capitalist system.”

Dark, but there’s something clarifying about his honesty. The decisions we make now about how to implement AI, whether to invest in people or just technology, whether to prioritise adoption or understanding – these will shape what comes next.

The Adaptavist Group’s latest report, published last week, surveyed 900 professionals responsible for introducing AI across the UK, US, Canada and Germany. They found a divide: 42% believe their company’s AI claims are over-inflated. These “AI sceptics” work in environments where 65% believe their company’s AI stance puts customers at risk, 67% worry that AI adoption poses a threat to jobs, and 59% report having no formal AI training.

By contrast, AI leaders in companies that communicated AI’s value honestly reported far greater benefits. Some 58% say AI has improved work quality, 61% report time savings, and 48% note increased output. Only 37% worry about ethics issues, compared with 74% in over-hyped environments.

The difference? Training. Support. Honest communication. Investing in people rather than just technology.

Companies are spending between £1 million and £10 million implementing AI. Some are spending over £10 million. But 59% aren’t providing basic training. It’s like buying everyone in your company a Formula One car and being shocked when most people crash it.

“The next year is all going to be about adoption, skills, and doing right by employees,” Kian said. “Companies that do it well are going to see better adoption and more productivity. Those who don’t? They’re going to get hate from their employees. Like literally. Employees will be really mad at companies for not being human at all.”

That word – human – kept coming up in our conversation. In a world increasingly mediated by AI, being human becomes both more difficult and more essential. The companies that remember this, that invest in their people’s ability to learn, adapt, and think critically, will thrive. The ones that don’t will wonder why their expensive AI implementations gather digital dust.

The present

Image created on Midjourney

On Thursday (October 2), I’ll be at DTX London moderating a main-stage session asking: is your workforce ready for what’s next? The questions we’ll tackle include how organisations can create inclusive, agile workplaces that foster belonging and productivity, how AI will change entry-level positions, and crucially, how we safeguard critical thinking in an AI-driven world. These are urgent, practical challenges that every organisation faces right now. (I’ll also be recording episode three of DTX Unplugged, the new podcast series I co-host, looking at business evolution – listen to the series so far here.)

Later in October, on the first day of the inaugural Data Decoded in Manchester (October 21-22), I’ll moderate another session on a related topic to the above: what leadership looks like in a world of AI, because leadership must evolve. The ethical responsibilities are staggering. The pace of change is relentless. And the old playbooks simply don’t work.

I’ve also started writing the Go Flux Yourself book (any advice on self-publishing welcome). More on that soon. The conversations I’m having, the research I’m doing, the patterns I’m seeing all point towards something bigger than monthly newsletters can capture. We’re living through a genuine transformation, and I’m in a unique and privileged position to document what it feels like from the inside rather than just analysing it from the outside.

The responses to last month’s newsletter on graduate jobs and universities showed me how hungry people are for honest conversations about what’s really happening, on the ground and behind the numbers. Expect more clear-eyed analysis of where we are and what we might do about it. And please do reach out if you think you can contribute to this ongoing discussion, as I’m open to featuring interviewees in the newsletter (and, in time, the book).

The past

Almost exactly two years ago, I took my car for its annual service at a garage at Elmers End, South East London. While I waited, I wandered down the modest high street and discovered a Turkish café. I ordered a coffee, a lovely breakfast (featuring hot, gooey halloumi cheese topped with dripping honey and sesame seeds) and, on a whim, had my tarot cards read by a female reader at the table opposite. We talked for 20 minutes, and it changed my life (see more on this here, in Go Flux Yourself No.2).

A couple of weeks ago, I returned for this year’s car service. The café is boarded up now, alas. A blackboard dumped outside showed the old WiFi password: kate4cakes. Another casualty of our changing times, a small loss in the great reshuffling of how we live, work, and connect with each other. With autumn upon us, the natural state of change and renewal is fresh in the mind. However, it still saddened me as I pondered what the genial Turkish owner and his family were doing instead of running the café.

Autumn has indeed arrived. Leaves are twisting from branches and falling to create a multicoloured carpet. But what season are we in, really? What cycle of change?

I thought about that question as I watched England’s women’s rugby team absolutely demolish Canada 33-13 in the World Cup final at Twickenham last Saturday, with almost 82,000 people in attendance, a world record. The Red Roses had won all 33 games since their last World Cup defeat, the final against New Zealand’s Black Ferns.

Being put through my paces with Katy Mclean (© Tina Hillier)

In July 2014, I trained with the England women’s squad for pieces I wrote for the Daily Telegraph (“The England women’s rugby team are tougher than you’ll ever be”) and the Financial Times (“FT Masterclass: Rugby training with Katy Mclean” (now Katy Daley-McLean)). They weren’t professional then. They juggled jobs with their international commitments. Captain Katy Daley-McLean was a primary school teacher in Sunderland. The squad included policewomen, teachers, and a vet. They spent every spare moment either training or playing rugby.

I arrived at Surrey Sports Park in Guildford with what I now recognise was an embarrassing air of superiority. I’m bigger, stronger, faster, I thought. I’d played rugby at university. Surely I could keep up with these amateur athletes.

The England women’s team knocked such idiotic thoughts out of my head within minutes.

We started with touch rugby, which was gentle enough. Then came sprints. I kept pace with the wingers and fullbacks for the first four bursts, then tailed off. “Tactically preserving my energy,” I told myself.

Then strength and conditioning coach Stuart Pickering barked: “Malcolms next.”

Katy winked at me. “Just make sure you keep your head up and your hands on your hips. If you show signs of tiredness, we will all have to do it again … so don’t.”

Malcolms – a rugby league drill invented by the evidently sadistic Malcolm Reilly – involve lying face down with your chin on the halfway line, pushing up, running backwards to the 10-metre line, going down flat again, pushing up, sprinting to the far 10-metre line. Six times.

By the fourth repetition, I was blowing hard. By the final one, I was last by some distance, legs burning, expelling deeply unattractive noises of effort. The women, heads turned to watch me complete the set, cheered encouragement rather than jeered. “Suck it up, Ollie, imagine it’s the last five minutes of the World Cup final,” full-back Danielle Waterman shouted.

Then came the circuit training. Farmers’ lifts. Weights on ropes. The plough. Downing stand-up tackle bags. Hit and roll. On and on we moved, and as my energy levels dipped uncomfortably low, it became a delirious blur.

The coup de grâce was wrestling the ball off 5ft 6in fly-half Daley-Mclean. I gripped as hard as I could. She stole it from me within five seconds. Completely zapped, I couldn’t wrest it back. Not to save my life.

Emasculated and humiliated, I feigned willingness to take part in the 40-minute game that followed. One of the coaches tugged me back. “I don’t think you should do this mate … you might actually get hurt.”

I’d learned my lesson. These women were tougher, fitter, and more disciplined than I’d ever be.

That was 2014. The England women, who went on to win the World Cup in France that year, didn’t have professional contracts. They squeezed their training around their jobs. Yet they were world-class athletes who’d previously reached three consecutive World Cup finals, losing each time to New Zealand.

Then something changed. The Rugby Football Union invested heavily. The women’s team went professional. They now have the same resources, support systems, and infrastructure as the men’s team.

The results speak for themselves. Thirty-three consecutive victories. A World Cup trophy, after two more final defeats to New Zealand. Record crowds. A team that doesn’t just compete but dominates.

This is what happens when you invest in people, providing them with the training, resources, time, and support they need to develop their skills. You treat them not as amateur enthusiasts fitting excellence around the edges of their lives, but as professionals whose craft deserves proper investment.

The parallels to AI adoption are striking. Right now, most organisations are treating their workers like those 2014 England rugby players and expecting them to master AI in their spare time. To become proficient without proper training. To deliver world-class results with amateur-level support.

It’s not going to work.

The England women didn’t win that World Cup through superior technology. They won it through superior preparation. Through investment in people, in training, and in creating conditions for excellence to flourish.

That’s the lesson for every organisation grappling with AI. Technology is cheap. Talent is everything. Training matters more than tools. And if you want your people to keep pace with change, you need to create a culture where learning isn’t a luxury but the whole point.

As Kian put it: “We need to move from prototyping to production AI. And you need 10 times more skills to put AI in production reliably than you need to put a demo out.”

Ten times the skills, and £10 spent on people for every £1 on technology. The arithmetic isn’t complicated. The will to act on it is what’s missing.

Statistics of the month

📈 Sick days surge
Employees took an average of 9.4 days off sick in 2024, compared with 5.8 days before the pandemic in 2019 and 7.8 days just two years ago. (CIPD)

📱 Daily exposure
Children are exposed to around 2,000 social media posts per day. Over three-quarters (77%) say it harms their physical or emotional health. (Sway.ly via The Guardian)

📉 UK leadership crisis
UK workers’ confidence in their company leaders has plummeted from 77% to 67% between 2022 and 2025 – well below the global average of 73% – while motivation fell from 66% to just 60%. (Culture Amp)

🎯 L&D budget reality
Despite fears that AI could replace their roles entirely (43% of L&D leaders believe this), learning and development budgets are growing: 70% of UK organisations and 84% in Australia/New Zealand increased L&D spending in 2025. (LearnUpon)

🔒 Email remains the weakest link
83% of UK IT leaders have faced an email-related security incident, with government bodies hit hardest at 92%. Yet email still carries over half (52%) of all organisational communication. (Exclaimer UK Business Email Report)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 18)

TL;DR: June’s edition explores truth-telling in an age of AI-generated misinformation, the flood of low-quality content threatening authentic expertise, and why human storytelling becomes our most valuable asset when everything else can be faked – plus some highlights from South by Southwest London.

Image created on Midjourney

The future

“When something is moving a million times every 10 years, there’s only one way that you can survive it. You’ve got to get on that spaceship. Once you get on that spaceship, you’re travelling at the same speed. When you’re on the rocket ship, all of a sudden, everything else slows down.”

Nvidia CEO Jensen Huang’s words, delivered at London Tech Week earlier this month alongside Prime Minister Keir Starmer, capture the current state of artificial intelligence. We are being propelled by technological change at an unprecedented speed, orders of magnitude quicker than Moore’s law, and it feels alien and frightening.

Before setting foot on the rocket ship, though, the first barrier to overcome is trust in AI. Indeed, for many, it’s advancing so rapidly that the potential for missed or hidden consequences is alarming enough to prompt a hard brake, or a refusal to climb aboard at all.

Others understand the threats but focus on the opportunities promised by AI and are jostling for position, bracing for warp speed. Nothing will stop them, but at what cost to society?

For example, we’re currently witnessing two distinct trajectories for the future of online content and, to some extent, services. One leads towards an internet flooded with synthetic mediocrity and, worse, untrustworthy information; the other towards authentic human expertise becoming our most valuable currency.

Because the truth crisis has already landed, and AI is taking over, attacking the veracity of, well, everything we read and much of what we see on a screen. 

In May, NewsGuard, which provides data to help identify reliable information online, identified 1,271 AI-generated news and information sites across 16 languages, operating with little to no human oversight, up from 750 last year.

It’s easy not to see this as you pull on your astronaut helmet and space gloves, but this is an insidious, industrial-scale production of mediocrity. Generative AI, fed on historical data, produces content that reflects the average of what has been published before, offering no new insights, lived experiences, or authentic perspectives. The result is an online world increasingly polluted with bland, sourceless, soulless and often inaccurate information. The slop is only going to get sloppier, too. What does that mean for truth and, yes, trust?

The 2025 State of AI in Marketing Report, published by HubSpot last week, reveals that 84% of UK marketers now use AI tools daily in their roles, compared to a global average of 66%.

Media companies are at risk of hosting, citing, and copying the marketing content. Some are actively creating it while swinging the axe liberally, culling journalists, and hacking away at integrity. 

The latest Private Eye reported how Piers North, CEO of Reach – struggling publisher of the Mirror, Express, Liverpool Echo, Manchester Evening News, and countless other titles – has a “cunning plan: to hand it all over to the robots to sort out”. 

According to the magazine, North told staff: “It feels like we’re on the cusp of another digital revolution, and obviously that can be pretty daunting, but here I think we’ve got such an opportunity to do more of the stuff we love and are brilliant at. So with that in mind, you won’t be surprised to hear that embracing AI is going to feature heavily in my strategic priorities.”

The incentive structure is clear: publish as much as possible and as quickly as possible to attract traffic. Quality, alas, becomes secondary to volume.

But this crisis creates opportunity. Real expertise becomes more valuable precisely because it’s becoming rarer. The brands and leaders who properly emphasise authentic human knowledge will enjoy a competitive advantage over competitors drowning in algorithmic sameness, now and in the future.

What does this mean for our children? They’re growing up in a world where they’ll need to become master detectives of truth. The skills we took for granted – being able to distinguish reliable sources from unreliable ones and recognising authentic expertise from synthetic mimicry – are becoming essential survival tools. 

They’ll need to develop what we might call “truth literacy”: the ability to trace sources, verify claims, and distinguish between content created by humans with lived experience and content generated by algorithms with training data.

This detective work extends beyond text to every form of media. Deepfakes are becoming indistinguishable from reality. Voice cloning requires just seconds of audio. Even video evidence can no longer be trusted without verification.

The implications for work – and, well, life – are profound. For instance, with AI agents being the latest business buzzword, Khozema Shipchandler, CEO of global cloud communications company Twilio, shared with me how their technology is enabling what he calls “hyper-personalisation at scale”. But the key discovery isn’t the technology itself; it’s how human expertise guides its application.

“We’re not trying to replace human agents,” Khozema told me. “We’re creating experiences where virtual agents handle lower complexity interactions but can escalate seamlessly to humans when genuine expertise is needed.”

He shared a healthcare example. Cedar Health, based in the United States, found that 97% of patient inquiries were related to a lack of understanding of bills. However, patients initially preferred engaging with AI agents because they felt less embarrassed about gaps in their medical terminology. The AI could process complex insurance data instantly, but when nuanced problem-solving was required, human experts stepped in with full context.

In this case, man and machine are working together brilliantly. As Shipchandler put it: “The consumer gets an experience where they’re being listened to all the way through, they’re getting accuracy because everything gets recapped, and they’re getting promotional offers that aren’t annoying because they reference things they’ve actually done before.”

The crucial point, though, is that none of this works without human oversight, empathy, and strategic thinking. The AI handles the data processing; humans provide the wisdom.

Jesper With-Fogstrup, Group CEO of Moneypenny, a telephone answering service, echoed this theme from a different angle. His global company has been testing AI voice agents for a few months, handling live calls across various industries. The early feedback has been mixed, but revealing.

“Some people expect it’s going to be exactly like talking to a human,” With-Fogstrup told me in a cafe down the road from Olympia, the venue for London Tech Week. “It just isn’t. But we’re shipping updates to these agents every day, several times a day. They’re becoming better incredibly quickly.”

What’s fascinating is how customers reveal more of themselves to AI agents compared to human agents. “There’s something about being able to have a conversation for a long time,” Jesper observed. “The models are very patient. Sometimes that’s what’s required.”

But again, the sweet spot isn’t AI replacing humans. It’s AI handling routine complexity so humans can focus on what they do uniquely well. As Jesper explained: “If it escalates into one of our Moneypenny personal assistants, they get a summary, they can pick up the conversation, they understand where it got stuck, and they can resolve the issue.”
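For the technically curious, here is a minimal sketch of the triage-and-escalate pattern both Shipchandler and With-Fogstrup describe: an AI agent handles routine queries and hands off to a human, with a context summary, once complexity rises. Every name, threshold, and helper below is hypothetical – an illustration of the pattern, not any vendor’s actual API.

```python
from dataclasses import dataclass, field


@dataclass
class Conversation:
    """Running record of one customer interaction (hypothetical structure)."""
    customer_id: str
    transcript: list[str] = field(default_factory=list)


def complexity_score(message: str) -> float:
    """Toy heuristic: long, question-dense messages suggest a harder query.
    A real system would use an intent classifier or an LLM judge here."""
    return min(1.0, len(message) / 400 + message.count("?") * 0.2)


def summarise(convo: Conversation) -> str:
    """Placeholder summary so the human picks up with full context."""
    return " | ".join(convo.transcript[-5:])


def handle_message(convo: Conversation, message: str, threshold: float = 0.6) -> str:
    """Route one message: the AI agent answers routine queries; anything above
    the complexity threshold escalates to a human with a context summary."""
    convo.transcript.append(f"customer: {message}")
    if complexity_score(message) >= threshold:
        return f"ESCALATE to human agent with context: {summarise(convo)}"
    reply = "AI agent: here is a plain-English breakdown of that charge."
    convo.transcript.append(reply)
    return reply


# Usage: a simple billing question stays with the AI; a tangled dispute escalates.
convo = Conversation(customer_id="c-001")
print(handle_message(convo, "Why is my bill higher this month?"))
print(handle_message(convo, "I was charged twice? The insurer says the claim was denied? "
                            "Can someone explain what my deductible covers, why the code "
                            "changed, and whether the payment plan still applies?"))
```

The design choice that matters is the handoff: the human never starts cold, because the summary travels with the escalation.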

The future of work, then, isn’t about choosing between human and artificial intelligence. It’s about designing systems where each amplifies the other’s strengths while maintaining the ability to distinguish between them.

Hilary Cottam’s research for her new book, The Work We Need, arrives at the same conclusion from a different direction. After interviewing thousands of workers, from gravediggers to the Microsoft CEO, she identified six principles for revolutionising work: 

  • Securing the basics
  • Working with meaning
  • Tending to what sustains us
  • Rethinking our use of time
  • Enabling play
  • Organising in place

Work, Cottam argues, is “a sort of chrysalis in which we figure out who we are and what we’re doing here, and what we should be doing to be useful”. That existential purpose can’t be automated away.

The young female welder Cottam profiled, working on nuclear submarines for BAE in Barrow-in-Furness, exemplifies this. She and her colleagues are “very, very convinced that their work is meaningful, partly because they’re highly skilled. And what’s very unusual in the modern workplace is that a submarine takes seven years to build, and most of the teamwork on that submarine is end-to-end.”

This is the future we should be building towards: AI handling the routine complexity, humans focusing on meaning and purpose, and the irreplaceable work of creating something that lasts. But we must teach our children how to distinguish between authentic human expertise and sophisticated synthetic imitation. Not easy.

Meanwhile, the companies already embracing this approach are seeing remarkable results. They’re not asking whether AI will replace humans, but how human expertise can be amplified by AI to create better outcomes for everyone while maintaining transparency about when and how AI is being used.

As Huang noted in his conversation with the Prime Minister: “AI is the great equaliser. The new programming language is called ‘human’. Anybody can learn how to program in AI.”

But that democratisation only works if we maintain the distinctly human capabilities that give that programming direction, purpose, and wisdom. The rocket ship is accelerating. Will we use that speed to amplify human potential or replace it entirely?

The present

At the inaugural South by Southwest London, held in Shoreditch, East London, at the beginning of June, I witnessed fascinating tensions around truth-telling that illuminate our current moment. The festival brought together storytellers, technologists, and pioneers, each grappling with how authentic voices survive in an increasingly synthetic world. Here are some of my highlights.

Image created on my iPhone

Tina Brown, former editor-in-chief of Tatler, Vanity Fair, The New Yorker, and The Daily Beast, reflecting on journalism’s current challenges, offered a deceptively simple observation: “To be a good writer, you have to notice things.” In our AI-saturated world, this human ability to notice becomes invaluable. While algorithms identify patterns in data, humans notice what’s missing, what doesn’t fit, and what feels wrong.

Brown’s observation carries particular weight, given her experience navigating media transformation over the past five decades. She has watched industries collapse and rebuild, seen power structures shift, and observed how authentic voices either adapt or fade away.

“Legacy media itself is reinventing itself all over the place,” she said. “They’re all trying to do things differently. But what you really miss in these smaller platforms is institutional backing. You need good lawyers, institutional backing for serious journalism.”

This tension between democratised content creation and institutional accountability sits at the heart of our current crisis. Anyone can publish anything, anywhere, anytime. But who ensures accuracy? Who takes responsibility when misinformation spreads? Who has the resources to fact-check, verify sources, and maintain standards?

This is a cultural challenge, as well as a technical one. When US President Donald Trump can shout down critics with “fake news”, and seemingly run a corrupt government – the memecoin $TRUMP and his involvement with World Liberty Financial have reportedly raised over half a billion dollars, and there was the $400m (£303m) gift of a new official private jet from Qatar, among countless other questionable dealings – what does that mean for the rest of us?

Brown said: “The incredible thing is that the US President … doesn’t care how bad it looks. The first term was like, well, the president shouldn’t be making money out of himself. All that stuff is out of the window.”

When truth-telling itself becomes politically suspect, when transparency is viewed as a weakness rather than a strength, the work of authentic communication becomes both more difficult and more essential.

This dynamic played out dramatically in the spy world, as Gordon Corera, the BBC’s Security Correspondent, and former CIA analyst David McCloskey revealed during a live recording of their podcast, The Rest is Classified, about intelligence operations. The most chilling story they shared wasn’t about sophisticated surveillance or cutting-edge technology. It was about children discovering their parents’ true identities only when stepping off a plane in Moscow, greeted by Vladimir Putin himself.

Imagine learning that everything you believed about your family, your identity, and your entire childhood was constructed fiction. These children of deep-cover Russian operatives lived authentic lives built on complete deception. The psychological impact, as McCloskey noted, requires “all kinds of exotic therapies”.

Just imagine. Those children will have gone past the anger about being lied to and crashed into devastation, having had their sense of reality torpedoed. When the foundation of truth crumbles, it’s not simply the facts that disappear: it’s the ability to trust anything, anywhere, ever again.

This feeling of groundlessness is what our children risk experiencing if we don’t teach them how to navigate an increasingly synthetic information environment. 

The difference is that while those Russian operatives’ children experienced one devastating revelation, our children face thousands of micro-deceptions daily: each AI-generated article, each deepfake video, each synthetic voice clip eroding their ability to distinguish real from artificial.

Zelda Perkins, speaking about whistleblowing at SXSW London, captured something essential about the courage required to tell brutal truths. When she broke her NDA in 2017 to expose Harvey Weinstein’s behaviour and help detonate the #MeToo movement, she was trying to dismantle a system that enables silence, not merely to bring down one powerful man. “The problem wasn’t really Weinstein,” she emphasised. “The problem is the system. The problem is these mechanisms that protect those in power.”

Her most powerful reflection was that she has no regrets about speaking out and telling the truth despite the unimaginable impact on her career and beyond. “My life has been completely ruined by speaking out,” she said. “But I’m honestly not sure I’ve ever been more fulfilled. I’ve never grown more, I’ve never learned more, I’ve never met more people with integrity.”

I’m reminded of a quote from Jesus in the Bible (John 8:32 – and, yes, I had to look that up, of course): “And ye shall know the truth, and the truth shall make you free.”

Truth can set you free, but it may come at a cost. This paradox captures something essential about truth-telling in our current moment. Individual courage matters, but systemic change requires mass action. As Perkins noted: “Collective voice is the most important thing for us right now.”

Elsewhere at SXSW London, the brilliantly named Mo Brings Plenty – an Oglala Lakota television, film, and stage actor (Mo in Yellowstone) – spoke with passion about Indigenous perspectives. “In our culture, we talk about the next seven generations,” he said. “What are we going to pass on to them? What do we leave behind?”

This long-term thinking feels revolutionary in our culture of instant gratification. Social media rewards immediate engagement. AI systems optimise for next-click prediction. Political cycles focus on next-election victories.

But authentic leaders think in generations, not quarters. They build systems that outlast their own tenure. They tell truths that may be uncomfortable now but are necessary for future flourishing.

The creative community at SXSW London embodied this thinking. Whether discussing children’s environmental education or music’s power to preserve cultural memory, artists consistently framed their work in terms of legacy and impact beyond immediate success.

As Dr Deepak Chopra noted in the “Love the Earth” session featuring Mo Brings Plenty: “Protecting our planet is something we can all do joyfully with imagination and compassion.”

This joyful approach to brutal truths offers a template for navigating our current information crisis. We don’t need to choose between honesty and hope. We can tell hard truths while building better systems and expose problems while creating solutions.

The key is understanding that truth-telling isn’t about punishment or blame. It’s about clearing space for authentic progress that will precipitate the flourishing of humanity, not its dulling.

The (recent) past

Three weeks ago, I took a 12-minute Lime bike (don’t worry, I have a clever folding helmet and never run red lights) from my office in South East London to Goldsmiths, University of London. I spoke to a room full of current students, recent graduates, and business leaders, delivering a keynote titled: “AI for Business Success: Fostering Human Connection in the Digital Age.” The irony wasn’t lost on me: here I was, using my human capabilities to argue for the irreplaceable value of human connection in an age of AI.

Image taken by my talented friend Samer Moukarzel

The presentation followed a pattern that I have been perfecting over the past year. I begin with a simple human interaction: asking audience members to turn to each other and share their favourite day of the week and their favourite time of that day. (Tuesday at 8.25pm, just before five-a-side footie, for me.) It triggers a minute or two of genuine curiosity, slight awkwardness, perhaps a shared laugh or an unexpected discovery.

That moment captures everything I’m trying to communicate. While everyone obsesses over AI’s technical capabilities, we’re forgetting that humans crave connection, meaning, and the beautiful unpredictability of authentic interaction.

A week or so later, for Business and IP Centre (BIPC) Lewisham, I gave another presentation: “The Power of Human-Led Storytelling in an AI World.” This one was delivered over Zoom; the theme remained consistent, but the context shifted. These were local business leaders, many of whom were struggling with the same questions. How do we stay relevant? How do we compete with automated content? How do we maintain authenticity in an increasingly synthetic world?

Both presentations built on themes I’ve been developing throughout this year of Go Flux Yourself: the CHUI framework, the concept of being “kind explorers”, and the recognition that we’re living through “the anti-social century”, where technology promises connection but often delivers isolation.

But there’s something I’ve learned from stepping onto stages and speaking directly to people that no amount of writing can teach: the power of presence. When you’re standing in front of an audience, there’s no algorithm mediating the exchange. No filter softening hard-to-hear truths, and no AI assistant smoothing rough edges.

You succeed or fail based on your ability to read the room, adapt in real time, and create a genuine connection. These are irreplaceable human skills that become more valuable as everything else becomes automated.

The historical parallel keeps returning to me. On June 23, I delivered the BIPC presentation on what would have been Alan Turing’s 113th birthday. The brilliant mathematician whose work gave rise to modern computing and AI would probably be fascinated – and perhaps concerned – by what we’ve done with his legacy.

I shared the myth that Apple’s bitten logo was Steve Jobs’ tribute to Turing, who tragically died after taking a bite from a cyanide-laced apple. It’s compelling and poetic, connecting our digital age to its origins. There’s just one problem: it’s entirely false.

Rob Janoff, who designed the logo, has repeatedly denied any homage to Turing. Apple itself has stated there’s no link. The bite was added so people wouldn’t mistake the apple for a cherry. Sometimes, the mundane truth is just mundane.

But here’s why I started with this myth: compelling narratives seem more important than accurate ones, and everything is starting to sound exactly the same because algorithms are optimised for engagement over truth.

As I’ve refined these talks over the past months, I’ve discovered that as our environment becomes increasingly artificial, the desire for authentic interaction grows stronger. The more content gets automated, the more valuable genuine expertise becomes. The more relationships are mediated by algorithms, the more precious unfiltered, messy human connections feel.

That’s the insight I’ll carry forward into the second half of 2025. Not that we should resist technological change, but that we should use it to amplify our most human capabilities while teaching our children how to be master detectives of truth in an age of synthetic everything, and encouraging them to experiment, explore, and love.

Statistics of the month

💼 Executive AI race
Almost two-thirds (65%) of UK and Irish CEOs are actively adopting AI agents, with 58% pushing their organisations to adopt Generative AI faster than people are comfortable with. Two-thirds confirm they’ll take more risks than the competition to stay competitive. 🔗

📧 The infinite workday
Microsoft’s 2025 Annual Work Trend Index Report reveals employees are caught in constant churn, with 40% triaging emails by 6am, receiving 117 emails and 153 chats daily. Evening meetings after 8pm are up 16% year-over-year, and weekend work continues rising. 🔗

🤖 AI trust paradox
While IBM replaced 94% of HR tasks with AI, many executives have serious reservations. Half (51%) don’t trust AI fully with financial decision-making, and 22% worry about data quality feeding AI models. 🔗

📉 Gender gap persists
The World Economic Forum’s 2025 Global Gender Gap Report shows 68.8% of the gap closed, yet full parity remains 123 years away. Despite gains in health and education, economic and political gaps persist. 🔗

Unemployment warning
Anthropic CEO Dario Amodei predicts AI could eliminate half of all entry-level white-collar jobs and send unemployment rocketing to 20% within five years. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 15)

TL;DR: March’s Go Flux Yourself explores what good leadership looks like in an AI-driven world. Spoiler: it’s not Donald Trump. From psychological safety and the “Lost Einsteins” to lessons from the inventor of plastic, it examines why innovation without inclusion is reckless – and why collaboration, kindness, and asking better questions might be our best defence against digital delusion and existential drift.

Image created on Midjourney

The future

“Leadership is the art of harnessing the efforts of others to achieve greatness.”

Donald Trump’s America-first agenda may appeal to the base instincts of populism, a nationalist fever dream dressed up as economic strategy. However, it is hopelessly outdated as a leadership model for a globally connected, AI-enabled future. 

In fact, it’s worse than that. It’s actively regressive. Trumpism, and the rise of Trumpian imitators across the globe, isn’t just shutting borders. It’s shutting minds, too, and that’s more debilitating for society. It trades in fear, not foresight. It rewards silence over dissent. And in doing so, it stifles precisely the kind of leadership the future demands.

Because let’s be clear: the coming decades will not be defined by those who shout the loudest or build the tallest walls. They will be defined by those who keep channels open – not just for trade, but for ideas. For difference. For disagreement. For discovery.

That starts with listening. And not just listening politely, but listening generatively – creating the psychological space where people feel safe enough to share the thought that might change everything.

At the recent Workhuman Live Forum in London, Harvard’s Amy Edmondson – a global authority on leadership and psychological safety – warned of the “almost immeasurable” consequences of holding back. In her research, 93% of senior leaders admitted that their silence had tangible costs. Not theoretical. Not abstract. Tangible. Safety failures. Wasted resources. Poor decisions. Quiet disengagement. And perhaps worst of all, missed opportunities to learn.

Why do we hold back, and not speak up? Because we’re human. And humans are wired to avoid looking stupid. We’d rather be safe than smart. Edmondson calls it “impression management”, and we’re all fluent in it. From the start of primary school, we learn not to raise our hand unless we’re sure of the answer. By the time we enter the workforce, that instinct is second nature.

But in today’s volatile, uncertain, complex, and ambiguous (VUCA) world – a term that enjoyed a chorus of use in the early pandemic days, five years ago, and one I’m hearing from business leaders a lot more again now – that instinct is no longer helpful. It’s dangerous. Because real innovation doesn’t happen in safe, silent rooms. It happens in teams willing to fail fast, speak up, and challenge the status quo. In rooms where “I think we might be wrong” is not a career-ending statement, but a spark.

So how should leaders lead? The quotation that begins this month’s Go Flux Yourself is from Ken Frazier, former CEO of Merck, and was cited by Edmondson, who heard it in one of her sessions. It’s worth repeating: “Leadership is the art of harnessing the efforts of others to achieve greatness.”

This brings us to Aneesh Raman, LinkedIn’s Chief Economic Opportunity Officer, and his powerful message at Talent Connect and Sancroft Convene, in the shadow of St Paul’s Cathedral in London. Raman argues that we are moving out of the “knowledge economy” – where technical proficiency was king – and into the “innovation economy”, where our most human skills become our greatest assets.

He lists them as the five Cs: communication, creativity, compassion, courage, and curiosity. Let’s make it six: collaboration. These are no longer “soft skills” but the defining skills of the age. They allow us to build trust, forge connections, and work across differences. They are, as Raman says, why we are the apex species on the planet.

But here’s the catch: while these skills are distributed broadly across the population, the opportunity to develop and express them is not. Enter the “Lost Einsteins” – those with the potential to innovate but without the credentials, connections, or capital to turn ideas into impact. Economist Raj Chetty’s landmark study found that children from wealthy families are 10 times more likely to become inventors than equally talented peers from lower-income backgrounds.

This is a global failure. We are squandering talent on an industrial scale – not because of a lack of ability, but because of a lack of inclusion. And that’s a leadership failure.

We need leaders who can spot and elevate the quiet genius in the room, who don’t confuse volume with value, and who can look beyond the CV and see the potential in a person’s questions, not just their answers.

And we need to stop romanticising “hero” innovation – the lone genius in a garage – and embrace the truth: innovation is a team sport. For instance, Leonardo da Vinci, as biographer Walter Isaacson points out, was a great collaborator. He succeeded because he listened as much as he led.

Which brings us back to psychological safety – the necessary precondition for team-based innovation. Without it, diversity becomes dysfunction. With it, it becomes dynamite.

Edmondson’s research shows that diverse teams outperform homogenous ones only when psychological safety is high. Without that safety, diversity leads to miscommunication, mistrust, and missed potential. But with it? You get the full benefit of varied perspectives, lived experiences, and cognitive styles. You get the kind of high-quality conversations that lead to breakthroughs.

But these conversations don’t happen by accident. They require framing, invitation, and modelling. They require leaders to say – out loud – things like: “I’ve never flown a perfect flight” (as one airline captain Edmondson studied told his new crew). Or “I need to hear from you”. Or even: “I don’t know the answer. Let’s figure it out together.”

KeyAnna Schmiedl, Workhuman’s Chief Human Experience Officer, put it beautifully in a conversation we had at the Live Forum event: leadership today is less about having the answer and more about creating the conditions for answers to emerge. It’s about making work more human – not through performative gestures, but through daily, deliberate acts of kindness. Not niceness. Kindness.

Niceness avoids conflict. Kindness leans into it, constructively. Niceness says, “That’s fine”. Kindness says, “I hear you – but here’s what we need.” Niceness smooths things over. Kindness builds things up.

And kindness is deeply pragmatic. It’s not about making everyone happy. It’s about making sure everyone is heard. Because the next big idea could come from the intern. From the quiet one. From the woman in trainers, not the man in a suit.

This reframing of leadership is already underway. Schmiedl herself never thought of herself as a leader – until others started reflecting it back to her. Not because she had all the answers, but because she had a way of asking the right questions, of creating rooms where people showed up fully, where difference wasn’t just tolerated but treasured.

So what does all this mean for the rest of us?

It means asking better questions. Not “Does anyone disagree?” (cue crickets). But “Who has a different perspective?” It means listening more than speaking. It means noticing who hasn’t spoken yet – and inviting them in. It means, as Edmondson says, getting curious about the dogs that don’t bark. Other good questions include “What are we missing?” and “Can you explain that further, please?”

And it means remembering that the goal is not psychological safety itself. The goal is excellence. Innovation. Learning. Fairness. Safety is just the soil in which those things can grow.

The future belongs to the leaders who know how to listen, invite dissent, ask good questions, and, ultimately, understand that the art of leadership is not dominance, but dialogue.

Because the next Einstein is out there. She, he, or they just haven’t been heard yet.

The present

“We’re gearing up for this year to be a year where you’ll have some ‘oh shit’ moments,” said Jack Clark, policy chief at Anthropic, the $40 billion AI start-up behind the Claude chatbot, earlier this year. He wasn’t exaggerating. From melting servers at OpenAI (more on this below) to the dizzying pace of model upgrades, 2025 already feels like we’re living through the future on fast-forward.

And yet, amid all the noise, hype, and existential hand-wringing, something quieter – but arguably more profound – is happening: people are remembering the value of connection.

This March, I had the pleasure of speaking at a Federation of Small Businesses (FSB) virtual event for members in South East London. The session, held on Shrove Tuesday, was fittingly titled “Standing Out: The Power of Human Leadership in an AI World”. Between pancake references and puns (some better than others), I explored what it means to lead with humanity in an age when digital tools dominate every dashboard, inbox, and conversation.

The talk was personal, anchored in my own experience as a business owner, a journalist, and a human surfing the digital tide. I shared my CHUI framework – Community, Health, Understanding, and Interconnectedness – as a compass for turbulent times. Because let’s face it: the world is messy right now. Geopolitical uncertainty is high. Domestic pressures are mounting. AI is changing faster than our ability to regulate or even comprehend it. And loneliness – real, bone-deep isolation – is quietly eroding the foundations of workplaces and communities.

And yet, there are bright spots. And they’re often found in the places we least expect – like virtual networking events, Slack channels, and local business groups.

Since that FSB session, I’ve connected with a flurry of new people, each conversation sparking unexpected insight or opportunity. One such connection was Bryan Altimas, founder of Riverside Court Consulting. Bryan’s story perfectly exemplifies how leadership and collaboration can scale, even in a solo consultancy.

After the pandemic drove a surge in cybercrime, Altimas responded not by hiring a traditional team but by building a nimble, global network of 15 cybersecurity specialists – from policy experts to ethical hackers based as far afield as Mauritius. “Most FSB members don’t worry about cybersecurity until it’s too late,” he told me in our follow-up chat. But instead of fear-mongering, Altimas and his team educate. They equip small businesses to be just secure enough that criminals look elsewhere – the digital equivalent of fitting a burglar alarm on your front door while your neighbour leaves theirs ajar.

What struck me most about Altimas wasn’t just his technical acumen, but his collaborative philosophy. Through FSB’s Business Crimes Forum, he’s sat on roundtables with the London Mayor’s Office and contributed to parliamentary discussions. These conversations – forged through community, not competition – have directly generated new client relationships and policy influence. “It’s about raising the floor,” he said. “We’re stronger when we work together.”

That sentiment feels increasingly urgent. In an age where cybercriminals operate within sophisticated, decentralised networks, small businesses can’t afford to work in silos. Our defence must be networked, too – built on shared knowledge, mutual accountability, and trust.

And yet, many governments seem to be doing the opposite. The recent technical capability notice issued to Apple – which led to the withdrawal of its Advanced Data Protection service from UK devices – is a case in point. Altimas called it “the action of a digitally illiterate administration”, one that weakens security for all citizens while failing to deter the real bad actors. The irony? In trying to increase control, we’ve actually made ourselves more vulnerable.

This brings us back to the role of small business leaders and, more broadly, to the power of community. As I told the audience at the FSB event, the future of work isn’t just about AI. It’s about who can thrive in an AI world. And the answer, increasingly, is those who can collaborate, communicate, and connect across differences.

In a world where 90% of online content is projected to be AI-generated this year, authentic human interaction becomes not just a nice-to-have, but a business differentiator. Relationship capital is now as valuable as financial capital. And unlike content, it can’t be automated.

That’s why I encourage business leaders to show up. Join the webinars. Say yes to the follow-up call. Ask the awkward questions. Be curious. Some of the most valuable conversations I’ve had recently – including with Altimas – started with nothing more than a LinkedIn connection or a quick post-event “thanks for your talk”.

This isn’t about nostalgia or rejecting technology. As I said in my FSB talk, tech is not the enemy of human connection – it’s how we use it that matters. The question is whether our tools bring us closer to others or push us further into isolation.

The paradox of the AI age is that the more powerful our technologies become, the more essential our humanity is. AI can optimise, analyse, and synthesise, but it can’t empathise, mentor, or build trust in a room. It certainly can’t make someone feel seen, valued, or safe enough to speak up.

That’s where leadership comes in. As Edmondson noted, psychological safety doesn’t happen by accident. It must be modelled, invited, and reinforced. In many cases, work must be reframed to make clear that anyone and everyone can make a difference, alongside an acknowledgement by leaders that things will inevitably go wrong. And as Raman said, the next phase of work will be defined not by who codes the best, but by who collaborates the most.

Our best bet for surviving the “oh shit” moments of 2025 is not to go it alone, but to lean in together. As FSB members, for instance, we are not just business owners. We are nodes in a network. And that network – messy, human, imperfect – might just be our greatest asset.

The past

In 1907, Leo Baekeland changed the world. A Belgian-born chemist working in New York, he created Bakelite – the world’s first fully synthetic plastic. It was, by every measure, a breakthrough. Hard, durable, and capable of being moulded into almost any shape (the clue is in the name – plastikos, from the Greek, meaning “capable of being shaped”), Bakelite marked the dawn of the modern plastics industry. 

For the first time, humankind wasn’t limited to what nature could provide. We could manufacture our own materials. These materials would soon find their way into everything from telephones to televisions, jewellery to jet engines.

Baekeland had no idea what he was unleashing. And perhaps that’s the point.

More than a century later, we’re drowning in the aftershocks of that innovation. At Economist Impact’s 10th Sustainability Week earlier this month – once again in the quietly majestic surroundings of Sancroft Convene – I had the pleasure of moderating a panel titled “Preventing plastics pollution through novel approaches”. I even dressed for the occasion, sporting a nautical bow tie (always good to keep the theme on-brand), and kicked things off with a bit of self-aware humour about my surname.

One of the panellists, Kris Renwick of Reckitt, represented the makers of Harpic – the toilet cleaner created by none other than Harry Pickup, surely the most illustrious bearer of my surname. (Although the late actor Ronald Pickup has a case.) There’s a certain poetry in that Harry made his name scrubbing away society’s waste.

Especially when set against another panellist, Alexandra Cousteau – granddaughter of Jacques-Yves, the pioneering oceanographer who co-invented the Aqua-Lung and brought the mysteries of the sea to the world. Cousteau, who first set sail on an expedition at just four months old, told the audience that there is 50% less sea life today than in her grandfather’s time.

Let that sink in. Half of all marine life gone – in just three generations.

And plastics are a big part of the problem. We now produce around 460 million tonnes of plastic every year. Of that, 350 million tonnes becomes waste – a staggering 91% is never recycled. Contrary to popular belief, very little of it ends up in the oceans directly, though. 

According to Gapminder, just under 6% of all plastic waste makes it to the sea. Far more – around 80 million tonnes a year – is mismanaged: dumped, burned, or buried in ways that still wreak havoc on ecosystems and human health. As Cousteau pointed out, the average person, astonishingly, is believed to carry around the equivalent of a plastic spoon’s worth of microplastics in their body. Including in their brain.

Image created on Midjourney

It’s a bleak picture – and one with eerie echoes in the current hype cycle around AI.

Bakelite was hailed as a wonder material. It made things cheaper, lighter, more efficient. So too does AI. We marvel at what generative tools can do – composing music, designing logos, writing code, diagnosing diseases. Already there are brilliant use cases – and undoubtedly more to come. But are we, once again, rushing headlong into a future we don’t fully understand? Are we about to repeat the same mistake: embracing innovation, while mismanaging its consequences?

Take energy consumption. This past week, OpenAI’s servers were reportedly “melting” under the strain of demand after the launch of their new image-generation model. Melting. It’s not just a metaphor. The environmental cost of training and running large AI models is immense – with a 2019 estimate (i.e. before the explosion of ChatGPT) suggesting that training a single large model can emit as much carbon as five cars do over their entire lifetimes. That’s not a sustainable trajectory.

And yet, much like Bakelite before it, AI is being pushed into every corner of our lives. Often with the best of intentions. But intentions, as the old saying goes, are not enough. What matters is management.

On our plastics panel, Cousteau made the case for upstream thinking. Rather than just reacting to waste, we must design it out of the system from the start. That means rethinking materials, packaging, infrastructure. In other words, it requires foresight. A willingness to zoom out, to consider long-term impacts rather than just short-term gains.

AI demands the same. We need to build governance, ethics, and accountability into its architecture now – before it becomes too entrenched, too ubiquitous, too powerful to regulate meaningfully. Otherwise, we risk creating a different kind of pollution: not plastic, but algorithmic. Invisible yet insidious. Microbiases instead of microplastics. Systemic discrimination baked into decision-making processes. A digital world that serves the few at the expense of the many.

All of this brings us back to leadership. Because the real challenge isn’t innovation. It’s stewardship. As Cousteau reminded us, humans are phenomenally good at solving problems when we decide to care. The tragedy is that we so often wait until it’s too late – until the oceans are full, until the servers melt, until the damage is done.

Moderating that session reminded me just how interconnected these conversations are. Climate. Technology. Health. Equity. We can’t afford to silo them anymore. The story of Bakelite is not just the story of plastics. It’s the story of unintended consequences. The story of how something miraculous became monstrous – not because it was inherently evil, but because we weren’t paying attention.

And that, in the end, is what AI forces us to confront. Are we paying attention? Are we asking the right questions, at the right time, with the right people in the room?

Or are we simply marvelling at the magic – and leaving someone else to clean up the mess?

Statistics of the month

📊 AI in a bubble? Asana’s latest research reveals that AI adoption is stuck in a ‘leadership bubble’ – while executives embrace the tech, most employees remain on the sidelines. Two years in, 67% of companies still haven’t scaled AI across their organisations. 🔗

🤝 Collaboration drives adoption. According to the same study, workers are 46% more likely to adopt AI when a cross-functional partner is already using it. Yet most current implementations are built for solo use – missing the chance to unlock AI’s full, collective potential. 🔗

📉 Productivity gap alert. Gartner predicts that by 2028, over 20% of workplace apps will use AI personalisation to adapt to individual workers. Yet today, only 23% of digital workers are fully satisfied with their tools – and satisfied users are nearly 3x more productive. The workplace tech revolution can’t come soon enough.

📱 Emoji wars at work. New research from The Adaptavist Group exposes a generational rift in office comms: 45% of UK over-50s say emojis are inappropriate, while two-thirds of Gen Z use them daily. Meanwhile, full-stops are deemed ‘professional’ by older workers, but 23% of Gen Z perceive them as ‘rude’. Bring on the AI translators! 🔗

😓 Motivation is fading. Culture Amp finds that UK and EMEA employee motivation has declined for three straight years. Recognition is at a five-year low, and fewer workers feel performance reviews reflect their impact. Hard work, unnoticed. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 11)

TL;DR: November’s Go Flux Yourself channels the wisdom of Marcus Aurelius to navigate the AI revolution, examines Nvidia’s bold vision for an AI-dominated workforce, unpacks Australia’s landmark social media ban for under-16s, and finds timeless lessons in a school friend’s recovery story about the importance of thoughtful, measured progress …

Image created on Midjourney with the prompt “a dismayed looking Roman emperor Marcus Aurelius looking over a world in which AI drone and scary warfare dominates in the style of a Renaissance painting”

The future

“The happiness of your life depends upon the quality of your thoughts.” 

These sage – and neatly optimistic – words from Marcus Aurelius, the great Roman emperor and Stoic philosopher, feel especially pertinent as we scan 2025’s technological horizon. 

Aurelius, who died in 180 and became known as the last of the Five Good Emperors, exemplified a philosophy that teaches us to focus solely on what we can control and accept what we cannot. He offers valuable wisdom for an AI-driven future, and for communities still suffering a psychological form of long COVID – the lingering collective trauma of the pandemic – alongside deep uncertainty and general mistrust as geopolitical tensions and global temperatures rise.

The final emperor in the relatively peaceful Pax Romana era, Aurelius seemed a fitting person to quote this month for another reason: I’m flying to the Italian capital this coming week, to cover CSO 360, a security conference that allows attendees to take a peek behind the curtain – although I’m worried about what I may see. 

One of the most eye-popping lines from last year’s conference in Berlin was that there was a 50-50 chance that World War III would be ignited in 2024. One could argue that while there has not been a Franz Ferdinand moment, the key players are manoeuvring their pieces on the board. Expect more on this cheery subject – ho, ho, ho! – in the last newsletter of the year, on December 31.

Meanwhile, as technological change accelerates and AI agents increasingly populate our workplaces (“agentic AI” is the latest buzzword, in case you haven’t heard), the quality of our thinking about their integration – something we can control – becomes paramount.

In mid-October, Jensen Huang, Co-Founder and CEO of tech giant Nvidia – which specialises in graphics processing units (GPUs) and AI computing – revealed on the BG2 podcast that he plans to shape his workforce so that it is one-third human and two-thirds AI agents.

“Nvidia has 32,000 employees today,” Huang stated, but he hopes the organisation will have 50,000 employees and “100 million AI assistants in every single group”. Given my focus on human-work evolution, I initially found this concept shocking, and appalling. But perhaps I was too hasty to reach a conclusion.

When, a couple of weeks ago, I interviewed Daniel Vassilev, Co-Founder and CEO of Relevance AI, which builds virtual workforces of AI agents that act as a seamless extension of human teams, his perspective on Huang’s vision was refreshingly nuanced. He provided an enlightening analogy about throwing pebbles into the sea.

“Most of us limit our thinking,” the San Francisco-based Australian entrepreneur said. “It’s like having ten pebbles to throw into the sea. We focus on making those pebbles bigger or flatter, so they’ll go further. But we often forget to consider whether our efforts might actually give us 20, 30, or even 50 pebbles to throw.”

His point cuts to the heart of the AI workforce debate: rather than simply replacing human workers, AI might expand our collective capabilities and create new opportunities. “I’ve always found it’s a safe bet that if you give people the ability to do more, they will do more,” Vassilev observed. “They won’t do less just because they can.”

This positive yet grounded perspective was echoed in my conversation with Five9’s Steve Blood, who shared fascinating insights about the evolution of workplace dynamics, specifically in the customer experience space, when I was in Barcelona in the middle of the month reporting on his company’s CX Summit. 

Blood, VP of Market Intelligence at Five9, predicts a “unified employee” future where AI enables workers to handle increasingly diverse responsibilities across traditional departmental boundaries. Rather than wholesale replacement, he envisions a workforce augmented by AI, where employees become more valuable by leveraging technology to handle multiple functions.

(As an aside, Blood predicts the customer experience landscape of 2030 will be radically different, with machine customers evolving through three distinct phases: starting with today’s ‘bound’ customers (like printers ordering their own ink cartridges exclusively from manufacturers), progressing to ‘adaptable’ customers (AI systems making purchases based on user preferences from multiple suppliers), and ultimately reaching ‘autonomous’ customers, where digital twins make entirely independent decisions based on their understanding of our preferences and history.)

The quality of our thinking about AI integration becomes especially crucial when considering what SailPoint’s CEO Mark McClain described to me this month as the “three V’s”: volume, variety, and velocity. These parameters no longer apply to data alone; they’re increasingly relevant to the AI agents themselves. As McClain explained: “We’ve got a higher volume of identities all the time. We’ve got more variety of identities, because of AI. And then you’ve certainly got a velocity problem here where it’s just exploding.” 

This explosion of AI capabilities brings us to a critical juncture. While Nvidia’s Huang envisions AI employees as being managed much like their human counterparts, assigned tasks, and engaged in dialogues, the reality might be more nuanced – and handling security permissions will need much work, which is perhaps something business leaders have not thought about enough.

Indeed, AI optimism must be tempered with practical considerations. The cybersecurity experts I’ve met recently have all emphasised the need for robust governance frameworks and clear accountability structures. 

Looking ahead to next year, organisations must develop flexible frameworks that can evolve as rapidly as AI capabilities. The “second mouse gets the cheese” approach – waiting for others to make mistakes first, as panellist Sue Turner, Founding Director of AI Governance, explained during a Kolekti roundtable on the progress of generative AI held on ChatGPT’s second birthday, November 28 – may no longer be viable in an environment where change is constant and competition fierce.

Successful organisations will emphasise complementary relationships between human and AI workers, requiring a fundamental rethink of traditional organisational structures and job descriptions.

The management of AI agent identities and access rights will become as crucial as managing human employees’ credentials, presenting both technical and philosophical challenges. Workplace culture must embrace what Blood calls “unified employees” – workers who can leverage AI to operate across traditional departmental boundaries. Perhaps most importantly, organisations must cultivate what Marcus Aurelius would recognise as quality of thought: the ability to think clearly and strategically about AI integration while maintaining human values and ethical considerations.
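To make the identity point concrete, here is a toy sketch of treating an AI agent like an employee: its own identity, a least-privilege set of permissions, and an audit trail, with anything out of scope refused. It is entirely hypothetical – not SailPoint’s product, nor anyone else’s – but it shows the shape of the problem.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentIdentity:
    """An AI agent credentialled like a human employee (illustrative only)."""
    agent_id: str
    owner: str                      # the human accountable for this agent
    permissions: set[str]           # least-privilege scopes, e.g. "cms:read"
    audit_log: list[str] = field(default_factory=list)

    def request(self, action: str) -> bool:
        """Allow the action only if it is within scope, and record the attempt."""
        allowed = action in self.permissions
        self.audit_log.append(
            f"{datetime.now(timezone.utc).isoformat()} {self.agent_id} "
            f"{'ALLOWED' if allowed else 'DENIED'} {action}"
        )
        return allowed


# Usage: a press round-up agent can read and draft in the CMS, but cannot touch payroll.
agent = AgentIdentity(
    agent_id="press-roundup-01",
    owner="oliver",
    permissions={"cms:read", "cms:draft"},
)
print(agent.request("cms:read"))      # True
print(agent.request("payroll:read"))  # False - out of scope, logged for review
```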

As we move toward 2025, the question isn’t simply whether AI agents will become standard members of the workforce – they already are. The real question is how we can ensure this integration enhances rather than diminishes human potential. The answer lies not in the technology itself, but in the quality of our thoughts about using it.

Organisations that strike and maintain this balance – embracing AI’s potential while preserving human agency and ethical considerations – will likely emerge as leaders in the new landscape. Ultimately, the quality of our thoughts about AI integration today will determine the happiness of our professional lives tomorrow.

The present

November’s news perfectly illustrates why we need to maintain quality of thought when adopting new technologies. Australia’s world-first decision to ban social media for under-16s, a bill passed a couple of days ago, marks a watershed moment in how we think about digital technology’s impact on society – and offers valuable lessons as we rush headlong into the AI revolution.

The Australian bill reflects a growing awareness of social media’s harmful effects on young minds. It’s a stance increasingly supported by data: new Financial Times polling reveals that almost half of British adults favour a total ban on smartphones in schools, while 71% support collecting phones in classroom baskets.

The timing couldn’t be more critical. Ofcom’s disturbing April study found nearly a quarter of British children aged between five and seven owned a smartphone, with many using social media apps despite being well below the minimum age requirement of 13. I pointed out in August’s Go Flux Yourself that EE recommended that children under 11 shouldn’t have smartphones. Meanwhile, University of Oxford researchers have identified a “linear relationship” between social media use and deteriorating mental health among teenagers.

Social psychologist Jonathan Haidt’s assertion in The Anxious Generation that smart devices have “rewired childhood” feels particularly apposite as we consider AI’s potential impact. If we’ve learned anything from social media’s unfettered growth, it’s that we must think carefully about technological integration before, not after, widespread adoption.

Interestingly, we’re seeing signs of a cultural awakening to technology’s double-edged nature. Collins Dictionary’s word of the year shortlist included “brainrot” – defined as an inability to think clearly due to excessive consumption of low-quality online content. While “brat” claimed the top spot – a word redefined by singer Charli XCX as someone who “has a breakdown, but kind of like parties through it” – the inclusion of “brainrot” speaks volumes about our growing awareness of digital overconsumption’s cognitive costs.

This awareness is manifesting in unexpected ways. A heartening trend has emerged on social media platforms, with users pushing back against online negativity by expressing gratitude for life’s mundane aspects. Posts celebrating “the privilege of doing household chores” or “the privilege of feeling bloated from overeating” represent a collective yearning for authentic, unfiltered experiences in an increasingly synthetic world.

In the workplace, we’re witnessing a similar recalibration regarding AI adoption. The latest Slack Workforce Index reveals a fascinating shift: for the first time since ChatGPT’s arrival, almost exactly two years ago, adoption rates have plateaued in France and the United States, while global excitement about AI has dropped six percentage points.

This hesitation isn’t necessarily negative – it might indicate a more thoughtful approach to AI integration. Nearly half of workers report discomfort admitting to managers that they use AI for common workplace tasks, citing concerns about appearing less competent or lazy. More tellingly, while employees and executives alike want AI to free up time for meaningful work, many fear it will actually increase their workload with “busy work”.

This gap between AI urgency and adoption reflects a deeper tension in the workplace. While organisations push for AI integration, employees express fundamental concerns about using these tools.

This more measured approach echoes broader societal concerns about technological integration. Just as we’re reconsidering social media’s role in young people’s lives, organisations are showing due caution about AI’s workplace implementation. The difference this time? We might actually be thinking before we leap.

Some companies are already demonstrating this more thoughtful approach. Global bank HSBC recently announced a comprehensive AI governance framework that includes regular “ethical audits” of their AI systems. Meanwhile, pharmaceutical giant AstraZeneca has implemented what they call “AI pause points” – mandatory reflection periods before deploying new AI tools.

The quality of our thoughts about these changes today will indeed shape the quality of our lives tomorrow. That’s the most important lesson from this month’s developments: in an age of AI, natural wisdom matters more than ever.

These concerns aren’t merely theoretical. Microsoft’s Copilot AI spectacularly demonstrated the pitfalls of rushing to deploy AI solutions this month. The product, designed to enhance workplace productivity by accessing internal company data, became embroiled in privacy breaches, with users reportedly accessing colleagues’ salary details and sensitive HR files. 

When fewer than 4% of IT leaders surveyed by Gartner said Copilot offered significant value, and Salesforce’s CEO Marc Benioff compared it to Clippy – Office 97’s notoriously unhelpful cartoon assistant – it highlighted a crucial truth: the gap between AI’s promise and its current capabilities remains vast.

As organisations barrel towards agentic AI next year, with semi-autonomous bots handling everything from press round-ups to customer service, Copilot’s stumbles serve as a timely reminder about the importance of thoughtful implementation.

Related to this point is the looming threat to authentic thought leadership. Nina Schick, a global authority on AI, predicts that by 2025, a staggering 90% of online content will be synthetically generated by AI. It’s a sobering forecast that should give pause to anyone concerned about the quality of discourse in our digital age.

If nine out of ten pieces of content next year will be churned out by machines learning from machines learning from machines, we risk creating an echo chamber of mediocrity, as I wrote in a recent Pickup_andWebb insights piece. As David McCullough, the late American historian and Pulitzer Prize winner, noted: “Writing is thinking. To write well is to think clearly. That’s why it’s so hard.”

This observation hits the bullseye of genuine thought leadership. Real insight demands more than information processing; it requires boots on the ground and minds that truly understand the territory. While AI excels at processing vast amounts of information and identifying patterns, it cannot fundamentally understand the human condition, feel empathy, or craft emotionally resonant narratives.

Leaders who rely on AI for their thought leadership are essentially outsourcing their thinking, trading their unique perspective for a synthetic amalgamation of existing views. In an era where differentiation is the most prized currency, that’s more than just lazy – it’s potentially catastrophic for meaningful discourse.

The past

In April 2014, Gary Mairs – a gregarious character in the year above me at school – drank his last alcoholic drink. Broke, broken and bedraggled, he entered a church in Seville and attended his first Alcoholics Anonymous meeting. 

His life had become unbearably – and unbelievably – chaotic. After moving to Spain with his then-girlfriend, he began to enjoy the cheap cervezas a little too much. Eight months before he quit booze, Gary’s partner left him, unable to cope with his endless revelry. This opened the beer tap further.

By the time Gary gave up drinking, he had maxed out 17 credit cards, his flatmates had turned on him, and he was hundreds of miles away from anyone who cared – hence why he signed up for AA. But what was it like?

I interviewed Gary for a recent episode of Upper Bottom, the sobriety podcast (for people who have not reached rock bottom) I co-host, and he was reassuringly straight-talking. He didn’t make it past step three of the 12 steps: he couldn’t supplicate to a higher power.

However, when asked about the key changes on his road to recovery, Gary talks about the importance of good habits, healthy practices, and meditation. Marcus Aurelius would approve.

In his Meditations, written as private notes to himself nearly two millennia ago, Aurelius emphasised the power of routine and self-reflection. “When you wake up in the morning, tell yourself: The people I deal with today will be meddling, ungrateful, arrogant, dishonest, jealous, and surly. They are like this because they can’t tell good from evil,” he wrote. This wasn’t cynicism but rather a reminder to accept things as they are and focus on what we can control – our responses, habits, and thoughts.

Gary’s journey from chaos to clarity mirrors this ancient wisdom. Just as Aurelius advised to “waste no more time arguing what a good man should be – be one”, Gary stopped theorising about recovery and simply began the daily practice of better living. No higher power was required – just the steady discipline of showing up for oneself.

This resonates as we grapple with AI’s integration into our lives and workplaces. Like Gary discovering that the answer lay not in grand gestures but in small, daily choices, perhaps our path forward with AI requires similar wisdom: accepting what we cannot change while focusing intently on what we can – the quality of our thoughts, the authenticity of our voices, the integrity of our choices.

As Aurelius noted: “Very little is needed to make a happy life; it is all within yourself, in your way of thinking.” 

Whether facing personal demons or technological revolution, the principle remains the same: quality of thought, coupled with consistent practice, lights the way forward.

Statistics of the month

  • Exactly two-thirds of LinkedIn users believe AI should be taught in high schools. Additionally, 72% observed an increase in AI-related mentions in job postings, while 48% stated that AI proficiency is a key requirement for the companies they applied to.
  • Only 51% of respondents of Searce’s Global State of AI Study 2024 – which polled 300 C-Suite and senior technology executives across organisations with at least $500 million in revenue in the US and UK – said their AI initiatives have been very successful. Meanwhile, 42% admitted success was only somewhat achieved.
  • International Workplace Group findings indicate just 7% of hybrid workers describe their 2024 hybrid work experience as “trusted”, hinting at an opportunity for employers to double down on trust in the year ahead.

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 8)

TL;DR: August’s Go Flux Yourself introduces the CHUI Framework to navigate human-work evolution, stresses the need to be kind explorers for human flourishing, welcomes EE’s helpful advice not to allow under-11s access to smartphones, and recounts me facing my fear of heights by jumping out of a plane at 12,000ft …

Image created on Midjourney with the prompt “a smiling, young, blonde-haired explorer, navigating the future world – one filled with wonder and opportunity, and futuristic-looking things – in the style of a Caravaggio painting”

The future

“When I grow up, I want to be an explorer.”

My four-year-old daughter, Darcey, “graduated” from her nursery in South East London a few weeks ago. On a warm July morning, proud parents perched awkwardly on children’s chairs to watch their little ones say farewell to Cheeky Monkeys.

The preschool kids, dressed in their smartest clothes for the big occasion, stood in a line before us mums and dads and, with star-covered plastic mortarboards on their heads, put on a show.

After a collective opening song, they spelt their names individually – just about. Then, they took turns to reveal their career ambitions. The children listed many professions, some more realistic (and legal) than others. We had a dancer, an actor, a ballerina, a train driver, a ninja, a pirate, and four teachers – copycats! – before Darcey broke the chain.

Darcey at her Cheeky Monkeys graduation

When she declared her desire to be an “explorer”, I initially chortled. How cute, I thought. However, on reflection, I genuinely loved her answer – so much so that I’ve worked it into a keynote speech I’m giving in mid-September. 

The event, titled Navigating the Changing World of Work, is being held at the Institution of Engineering and Technology (where I am a member), a stunning venue steeped in history. My remit is to set the scene, drawing on my expertise in human-work evolution. (If you are interested in claiming a seat, please let me know, and I’ll ask – I think I have an allocation.)

Thinking back to the graduation at Cheeky Monkeys, I realised that Darcey, in her four-year-old wisdom, had stumbled upon the essence of what we all must be in this rapidly evolving digital world: explorers. Kind explorers, to be precise. (Darcey, who adores babies – we have an army of plastic ones – is caring, but she is still working on the kindness-to-other-humans bit. Hopefully, the discipline of school, which she begins in early September, will help with that.)

Indeed, we’re all explorers in this brave new world of artificial intelligence, automation, and digital transformation. We’re charting unknown territories, facing unfamiliar challenges, and trying to make sense of a landscape that’s shifting beneath our feet faster than we can blink.

But what will Darcey be when she grows up? As in: what will her actual job be? Of course, it’s impossible to predict at this point. Partly because she has 14 years before she leaves school (hopefully), and mostly because professions are being warped – if not wiped out – by technological advancements. Plus, plenty of new jobs will be spawned. 

At the start of the year, LinkedIn produced a list of 25 UK roles that were on the rise, according to the platform’s data. The piece points out that job skill sets have changed by 25% since 2015 and are projected to shift by 65% by 2030 globally.

Right now – well, at the beginning of 2024, at least – the 25 fastest-growing jobs include Sustainability Manager (ranked first), AI Engineer (seventh), Security Operations Centre Analyst (tenth), Energy Engineer (fourteenth), and Data Governance Manager (sixteenth). Most of these roles did not exist a decade ago.

So, how best can we prepare our children for the world of work? The World Economic Forum talks about the four Cs being the essential skills for the 21st century. Namely: critical thinking, collaboration, communication, and creativity. I would lob another C in there: compassion.

My AI-induced FOBO (fear of becoming obsolete) has triggered my own – necessary – journey of exploration, which has led to much more enjoyable speaking work, podcasting and moderating – and, essentially, more human interaction. This is why I talk about “human-work evolution” rather than just the “future of work”, a framing that often leaves people out of the conversation.

Through all of this, I’ve discovered that the key to navigating the future of work lies not in mastering any particular technology, but in cultivating our uniquely human qualities. As such, I’ve chiselled the values I discussed in April’s Go Flux Yourself and created the CHUI Framework. 

CHUI is an acronym for the following:

  • Community
  • Health
  • Understanding
  • Interconnectedness

These values are crucial as we navigate the complexities of the digital age. They remind us that no matter how advanced our technology becomes, we are, at our core, social beings who thrive on connection, wellbeing, empathy, and the intricate web of relationships that bind us together.

Here’s a breakdown of each element of CHUI.

Community

In an increasingly digital world, the importance of community cannot be overstated. We need to foster a sense of belonging, both in physical and virtual spaces. This means creating inclusive workplace cultures, facilitating meaningful connections between team members, and recognising the value of diverse perspectives.

Health

This encompasses not just physical health but mental and emotional wellbeing. As the lines between work and personal life blur, especially in remote and hybrid work environments, we must prioritise holistic health. This includes promoting work-life balance, providing mental health resources, and creating a workplace culture that values wellbeing.

Understanding

Deep understanding is more valuable than ever in a world of information overload and rapid change. This means cultivating curiosity, promoting continuous learning, and developing the ability to see things from multiple perspectives. It’s about going beyond surface-level knowledge to truly grasp the complexities of our work and our world.

Interconnectedness

Everything is connected in our globalised, digitalised world. Actions in one part of the world can have far-reaching consequences. In the workplace, this means recognising how different roles and departments interrelate, understanding the broader impact of our work, and fostering a sense of shared purpose and responsibility.

By embracing these CHUI values, we can create more resilient, adaptable, and human-centric workplaces and better control human-work evolution.

Ultimately, we must be explorers, venturing into unknown territories, mapping out new ways of working, and discovering new possibilities at the intersection of human and machine intelligence.

But more than that, we need to be kind explorers. Kind to ourselves as we navigate the complexities of the digital age. Kind to colleagues and clients as they grapple with change and uncertainty. Kind to our intelligent assistants as we learn to work alongside them. And kind to the wider world that our decisions will impact.

The map we create today will shape the landscape for generations to come. So, let’s ensure it’s a landscape defined not just by technological advancement but by human flourishing.

Let’s create workplaces where people can bring their whole selves to work, where diversity is celebrated, wellbeing is prioritised, learning is constant, and technology serves humanity – not the other way around.

Let’s be the kind of leaders who don’t just manage change but inspire it, who don’t just adapt to the future but shape it, who don’t just talk about values but live them daily.

We don’t know precisely where our digital explorations will lead us. But if we approach them with curiosity, kindness, and a commitment to our shared humanity, I believe we can evolve human work so that it’s not just more efficient but more fulfilling, not just more profitable but more purposeful.

So, let’s go forth and explore! Let’s be curious. Let’s be kind. Let’s be human.

The present

After typing this edition of Go Flux Yourself, I’ll grab my iPhone 13 and post teasers on LinkedIn and Instagram. I’m acutely aware of the irony. Here I am, a supposedly responsible adult, tapping away on a device I’m hesitant to put in my daughter’s hands. It’s a conundrum that plagues modern parenting: how do we navigate the digital landscape with our children when we’re still finding our own bearings?

Darcey, my four-year-old explorer-in-training, is growing up in a world where technology is as ubiquitous as oxygen. She’s already adept at swiping through photos on my phone and giggling at videos of herself, which is both impressive and terrifying.

I understand that Darcey’s interaction with technology – and smartphones in particular – will be crucial for her development. In a world where digital literacy is as essential as reading and writing, denying her access to these tools feels akin to sending her into battle armed with nothing but a wooden spoon.

But then I read about EE’s recommendation, published earlier this week, that children under 11 shouldn’t have smartphones, and I breathed a sigh of relief. It’s as though someone’s permitted me to pump the brakes on this runaway tech train.

The stance taken by EE – one of the UK’s largest mobile network providers – isn’t just some arbitrary line in the sand. It’s backed by growing concerns about the effects of smartphone and internet usage on children’s mental health and behaviour. The US Surgeon General’s warning that social media use presents “a profound risk of harm” for children only adds weight to these concerns.

As a parent, I’m caught in a tug-of-war between embracing technology’s potential and shielding my child from its perils. On one hand, I want Darcey and her older brother, Freddie, to be digitally savvy and navigate the online world confidently. On the other, I’m terrified of exposing my children to the darker corners of the internet, where trolls lurk, and misinformation spreads like wildfire.

It’s not just about protecting her from external threats, either. I worry about the internal changes that constant connectivity might bring. Will she develop the patience to read a book when TikTok offers instant gratification? Will she learn to navigate real-world social situations when she can hide behind a screen? Will she ever know the joy of being bored and having to use her imagination to entertain herself? 

In June’s newsletter, I discussed the loneliness epidemic, and the rise of AI girlfriends and boyfriends – what will this situation look like in a decade if left unchecked? 

Dr Jonathan Haidt’s observation about the potent combination of social media and smartphones causing a decline in mental health rings true (this is worth a watch). It’s not just about access to information; it’s about the constant, addictive pull of likes, shares, and notifications. It’s about the pressure to present a perfect online persona, even before you’ve figured out who you really are.

As I ponder this digital dilemma, I can’t help but wonder if we’re in the midst of a grand social experiment with our children as the unwitting subjects. Will future generations look back on our era of unregulated social media use with the same bewilderment we feel when we see old adverts promoting the health benefits of smoking?

EE’s advice may be a step towards a more balanced approach. Maybe we need to redefine what it means to be “connected” in the digital age. Could we embrace technology without being enslaved by it?

For now, I’m focusing on nurturing Darcey’s explorer spirit in the physical world. When the time comes for her to venture into the digital realm, I hope she’ll do so with the curiosity of an explorer and the caution of a well-prepared adventurer.

Meanwhile, I’m trying to model healthier tech habits. It’s a work in progress, but I’m learning to put my phone down more often, to be present in the moment, and to remember that the most important connections are the ones we make face-to-face.

In this brave new world of pixels and algorithms, the most revolutionary act is to be human. To laugh, play, and explore – without a screen in sight. After all, isn’t that what childhood should be about?

The past

In the spirit of being a fearless explorer, I recently took a leap of faith. Quite literally.

In mid-August, my wife, Clare, and I found ourselves hurtling towards the earth at 120 mph after jumping out of a perfectly good aeroplane at 12,000 feet, near Ashford in Kent. This wasn’t just a random act of madness; we were skydiving to raise money for Lewisham and Greenwich NHS Trust Charity (here is the fundraising link, if interested).

Now, I have a fear of heights. A proper, palm-sweating, stomach-churning fear. But as I’ve been banging on about the importance of exploration and facing our fears in this digital age, I figured it was time to practise what I preach.

The experience was, well, “exhilarating” doesn’t entirely describe it. It was a cocktail of terror, awe, and pure, unadulterated, life-affirming joy. As we plummeted through the air, the fear melted away, replaced by an overwhelming sense of freedom. It was a vivid reminder that our greatest adventures often lie just beyond our comfort zones.

But this wasn’t just about personal growth or fundraising. It was also about sharing an important experience with Clare, strengthening our bond through a joint adventure. And with Darcey and Freddie watching from the ground, I hope we’ve inspired them to be brave, to push their boundaries, and to embrace life’s challenges head-on.

As we touched down, wobbly-legged but elated, I couldn’t help but draw parallels to our journey through this rapidly evolving digital landscape. Sometimes, we need to take that leap into the unknown. We need to trust our training, face our fears, and embrace the exhilarating free fall of progress.

So here’s to being fearless explorers – whether we’re charting the digital unknown or plummeting through the physical sky. May we always have the courage to jump, the wisdom to learn from the fall, and the joy to relish the ride.

Clare and I take to the skies

Statistics of the month

  • Following Labour’s pledge to introduce the “right to switch off”, new research reveals that most workers are “workaholics”, with 20% struggling to decline after-hours work requests and 88% experiencing ongoing stress (Owl Labs).
  • In 2023, funding for generative AI skyrocketed to $22.4 billion, nearly nine times higher than in 2022 and 25 times the 2019 amount, despite a decline in overall AI investment since 2021 (Our World In Data).
  • Alphabet-backed Waymo, a self-driving technology company, revealed that it now provides over 100,000 paid robotaxi rides each week across its key markets in Los Angeles, San Francisco, and Phoenix (TechCrunch).

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Why the need for leaders to address poor workplace communication is so urgent now

Ineffective communication costs U.S. businesses $1.2 trillion annually, or $12,506 per employee, according to Grammarly Business’s latest State of Business Communication report, published in early March. But are leaders receiving the message that urgent improvement is required?

The Grammarly Business survey of 251 business leaders and 1,001 knowledge workers suggested connection problems are growing. Time spent on written communication grew 18% compared to 2022, while worker stress levels were 7% higher due to poor communication – which, in turn, drove a 15% decline in productivity.

Further, the research, conducted in partnership with The Harris Poll, showed that workers spent over 70% of their working weeks communicating on various channels. Yet 58% wished they had better tools to streamline communication. “Leaders who shrug off the massive impact of poor communication on their bottom line will lose,” argued Matt Rosenberg, Grammarly’s chief revenue officer and head of Grammarly Business. 

Rosenberg said that the results of the second annual report indicated the challenge was growing, causing a “greater impact on everything from operational efficiency to employee and customer satisfaction.” As a result, he urged a rethink of communications strategies. “At a time when the stakes are critically high, leaders who invest in empowering efficient, consistent communication across their organizations will see results and profits climb.”

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in March 2023 – to read the complete piece, please click HERE.

How hybrid working is failing due to poor execution

Few leaders can say they have perfected hybrid working at their organization. The evolution from pre-pandemic working methods was always going to be messy and stressful, and a steep learning curve. With no one-size-fits-all, off-the-shelf solution, the scale and logistics of making a hybrid strategy work have been headache-inducing for many.

In a desperate attempt to ease the pain, and with dark clouds of a financial crisis looming, many employers, twitching from productivity paranoia, have retreated to old ways and imposed return-to-work mandates rather than persevering, opting to treat the symptom rather than the root cause of the problem.

But hybrid working is failing due to poor execution rather than as a concept — and that lack of success is primarily down to leadership — according to a global pulse survey by business consultancy Gartner, which questioned 330 HR leaders across a range of industries. So in that sense, will such RTO diktats not be regressive and more damaging in the longer term?

Gartner’s data shows 69% of business leaders have expressed concerns about collaboration, culture, creativity, and engagement. Little wonder more office-centric workforce strategies have been written up frantically. Further, 54% of human resources leaders reckoned their employees are less connected to their organizations than before the coronavirus crisis.

However, those who believe returning to the office will boost staff productivity, visibility, and loyalty are failing to realize and address the underlying issue.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in February 2023 – to read the complete piece, please click HERE.

Lack of in-office experience shows many Gen Zers don’t know how to behave at work

London-based banking consultant Amy – an alias WorkLife agreed to use to protect her identity in the highly regulated financial services industry – has managed dozens of Gen Zers in the last couple of years. She has often been frustrated – and at times flabbergasted – by their attitude towards traditional workplace norms. She offered plenty of examples, but overall it’s the sense of entitlement despite a lack of experience that most sticks in the craw.

“Many started their careers in lockdown, but they are super-ambitious and expect to be calling the shots. Yet when restrictions lifted, it became clear that they were unsure of the right workplace etiquette,” said Amy, who has worked for numerous banks in a 20-year career.

She explained how there was an agreed expectation for everyone to return to the office for one specific day a week to improve face-to-face connection and collaboration. However, Gen Zers would frequently not show up for the brainstorming sessions. “I’d have to call them to see if they were coming in, and they’d say: ‘No, I’m working from home.’”

Amy’s insights tally with recent Gartner data that suggests the rise in remote and hybrid working has meant that many career starters have committed faux pas due to having few in-person experiences. Specifically, this lack of face-to-face time in the office has limited the chances to observe workplace norms or determine what is appropriate and effective within their organizations.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in January 2023 – to read the complete piece, please click HERE.

Will 2023 be the year we stop idolizing tech entrepreneurs?

In Ancient Rome, where the public was enthralled by celebrity culture and helped elevate and sink reputations, 2022 would have been labeled an annus horribilis for cultish business leaders. 

Given the recent fall from eminence of several headline-generating bosses, could 2023 be the year people – including investors – finally become more careful not to be hoodwinked by technology entrepreneurs and even snub them altogether? Moreover, have we reached “peak idolatry of innovators,” as suggested by Scott Galloway, clinical professor of marketing at New York University Stern School of Business?

Galloway calculated that the wealthiest leader in the tech space had a 33% chance of being named Time magazine’s person of the year. However, the “gross, nonsensical adoration” of celebrity innovators may have reached the pinnacle after “a tough couple of months for the ‘Church of Technology,’” he added. 

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in January 2023 – to read the complete piece, please click HERE.

How to lead and manage stressed-out workforces

No organization can say it has nailed hybrid working.

To help navigate the journey ahead, WorkLife selected nine recent statistics to show the direction of travel, identify the most prominent likely obstacles, and offer advice from experts on how employers can overcome them.

Four were featured in this piece and the remaining five are here. These include:
– 70% of C-suite executives in the U.K. feel burnt out
– 79% of global employees are not engaged at work
– 85% of global business leaders with hybrid workforces are not confident employees are being productive
– 43% of hybrid workers don’t feel included in meetings
– U.S. workers have, on average, 18 hours of meetings a week – but almost one-third are deemed unnecessary

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in December 2022 – to read the complete piece, please click HERE.

Under pressure: Why bosses are struggling more than ever

With the dark clouds of a global recession gathering and workers enveloped by a sense of dread and job insecurity, it’s easy to overlook the plight of those in the eye of the storm: the big bosses. And new data indicates that leaders around the globe are struggling like never before.

The latest Future Forum Pulse report — a survey of almost 11,000 workers across the U.S., Australia, France, Germany, Japan, and the U.K. published in October — found that executives’ sentiment and experience scores had sunk to record lows. Compared to a year ago, execs reported a 15% decline in work environment scores, a 20% drop in work-life balance, and a 40% increase in work-related stress and anxiety.

The results shared by Future Forum, Slack’s research consortium on the future of work, were mirrored in workplace culture and recognition firm O.C. Tanner’s 2023 Global Culture Report, which involved 36,000 workers from 20 countries. “We found that leaders are 43% more likely to say that work is interfering with their ability to be happy in other areas of their lives,” said Robert Ordever, the organization’s European managing director. 

The saying that “a happy worker is a productive worker” is particularly relevant to those in a position of power. “When leaders don’t thrive, their employees, teams, and organizations won’t either,” added Ordever.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in October 2022 – to read the complete piece, please click HERE.

Glass half-full or half-empty: How to balance a partying culture at work

What was your honest reaction when Sanna Marin, Finland’s prime minister, was caught up in a partying scandal recently? In August, the 36-year-old sparked controversy after leaked videos showed her dancing and drinking with friends.

Whichever side of the bar you sit on, Marin’s partying raised important questions about how business leaders in all walks of life should conduct themselves when with and without colleagues in a social environment. 

How do employees feel about a boozy boss? And do enforced work events, where people are encouraged to imbibe at a free bar, help or hinder the health of a workplace in a post-pandemic world?

Indeed, in most industries, for decades – if not centuries – socializing with colleagues and attending work drinks has been central to company culture. Away from the workplace, over a glass or two, people can relax, make meaningful memories, share challenges and opportunities – at work and home – and, ultimately, strengthen bonds with coworkers. But is the glass half-full, half-empty, or completely empty in 2022?

This article was first published on Digiday’s future-of-work platform, WorkLife, in September 2022 – to read the complete piece, please click HERE.

Workers share their worst toxic boss experiences

All the chatter about quiet quitting – namely, doing what a job requires and no more – has provoked deeper discussions about toxic workplace culture and poor management as organizations firm up their hybrid-working strategies.

Some execs have aired concerns that the bring-your-whole-selves-to-work trend has backfired, and in many cases has caused fragmented workforces, while some leaders have taken advantage of the concept to justify their own questionable behavior.

WorkLife spoke to a range of employees from those who consider themselves quiet quitters, to those who have resigned outright, plus those still considering resigning, to find out what prompted them to take their current course of action. Under the condition of anonymity – for fear of career-damaging repercussions – they shared their recent experiences, which highlight the alarming management they have endured. We’ve selected three of the worst accounts.

This article was first published on Digiday’s future-of-work platform, WorkLife, in September 2022 – to read the complete piece, please click HERE.

The seven biggest hybrid-working challenges, and how to fix them

The phrase “new normal” is a misnomer, given the state of flux in the business world. Few organizations have been able to normalize operations; who can say they’ve nailed their hybrid working strategy with a straight face?

As Kate Thrumble, executive director of talent at marketing company R/GA London, said: “We are all on a – to use an overused word – ‘journey’ with the post-pandemic way of working. No one has cracked it yet. Even those with the best intentions will have to wait a year or two to understand the impact of today’s decisions.”

However, by matching the right technology solutions with the most pressing hybrid-working challenges, organizations will reach their end destination quicker: a happy, productive, engaged and empowered workforce.

So what exactly are the seven most significant business challenges and the best tech, tools and processes to solve them and speed up progress?

This article was first published on Digiday’s future-of-work platform, WorkLife, in May 2022 – to continue reading please click HERE.

‘Vulnerability can be your fortress’: How to move away from a command-and-control management approach

The events of the last two years have necessitated the acceleration of a multitude of work-related trends, but the erosion of command-and-control leadership is arguably the most significant.

A decade ago, “The How Report,” published by LRN, surveyed 16,000 employees in 17 countries and concluded that 97% adhered to a command-and-control model. Four years later, in 2016, the figure in a follow-up survey was 92%. So what is the percentage likely to be in 2022, post-pandemic?

While there is not currently a definitive answer to that puzzler, plenty of other studies indicate a significant shift away from command-and-control leadership.

This article was first published on Digiday’s WorkLife platform in March 2022 – to continue reading please click here.

‘It’s going to get messy’: How rising generational divides could kill workplace culture

Intergenerational divides are more expansive than ever, and if left unchecked could quickly lead to toxic workplace cultures, experts warn.

Opinions on post-pandemic work values vary wildly across generations, according to a report from London-based global recruitment firm Robert Walters published in early March.

Some 60% of the 4,000 U.K. office workers surveyed reported a rise in “new challenges” when working with teammates from different generations. And 40% of respondents are “annoyed” at the post-pandemic working values and global-minded outlooks of colleagues in other age ranges.

This article was first published on Digiday’s WorkLife platform in March 2022 – to continue reading please click here.

‘What’s in it for me?’: The employee question that needs answering in any return-to-office playbook

It’s crunch time for hybrid return-to-office plans, again.

After numerous false starts (thanks Delta and Omicron) it looks like a full-scale return to the office, in whatever shape or form that takes, has arrived. As such, a growing number of major organizations have started to show what hybrid model they’re going for.

Last week, Google told staff in the San Francisco Bay Area and several other U.S. locations that it will end its voluntary work-from-home phase in April, in favor of a plan where most employees will spend three days in the office and two working remotely.

Microsoft has also said it will reopen its Washington state and Bay Area offices, and that employees can configure what days they come to the office with their managers. Likewise, with all coronavirus restrictions officially lifted in England, organizations there are being pressured to articulate and activate their return-to-the-office plans.

Trite as it may be, it’s vital to acknowledge that an incredible amount has changed in the world of work since the pandemic struck almost precisely two years ago. And the most significant transformation has been where most of us work.

Models will naturally vary depending on the company, but there are a few essential guidelines that are worthwhile for all employers to take note of. Here’s a breakdown of five key areas employers need to have in their playbook.

This article was first published on Digiday’s WorkLife platform in March 2022 – to continue reading please click here.

Unpacking which harmful work practices the pandemic exposed, and which are — hopefully — banished for good

It’s crass to argue “the pandemic has been good for humanity.” It has, though, effectively taken an X-ray of society and highlighted where sickness lies. And, most agree, much remedial work is required to restore total health.

Whether acute areas are treated — or, indeed, treatable — is a matter for incumbent politicians and business leaders. In this article, we turned to the latter cohort to reflect on what harmful work practices were exposed by the coronavirus crisis and how they’ve evolved as a result, for the better.

This article was first published on Digiday’s WorkLife platform in February 2022 – to continue reading please click here.

Leading for the future: how has the pandemic changed those in charge?

In a world where change is the only constant, leaders must be authentic, tech-savvy and human. They have to prepare for the next crisis by empowering employees so their businesses are more agile and resilient.

Be honest, how has the coronavirus pandemic changed you?

For most of us, it is only now – more than 18 months after the pandemic hit and as some semblance of normality returns – that we finally have the headspace to reflect properly on this question, answer it truthfully and inspect the mental scars, having been in survival mode for so very long.

Spare a thought, then, for business leaders who, alongside any personal struggles, have been forced to steer their organisations out of choppy waters while faced with cascades of disruption.

The list includes supply chain problems, geopolitical issues, increased pressure to recruit and retain top talent in the so-called ‘great resignation’ age, and the need to engage with a range of stakeholders to facilitate an accelerated digital transformation. They don’t teach this stuff at business school and many will have felt out of their depth, understandably.

The torrents of chaos have eroded everyone to a degree. And businesses and laggard leaders who have not kept up with the waves of change have, alas, been swept away. The response to Covid-19 necessitated the locking down of people, but paradoxically it opened minds. As a result, in the post-pandemic wash-up, the world looks and feels different. 

For instance, the rapid advancement and adoption of videoconferencing technology has enabled businesses to communicate with colleagues and customers, and somehow brought people closer together. Moreover, there is something thrillingly democratising about everyone having the same-size square box on Zoom, Teams or Google Meet, whether a chief executive or a 21-year-old fresh out of university.

New normal: mindset change required

Cybersecurity and global warming have leapfrogged other concerns for boards and consumers alike. In the afterglow of COP26, ‘ESG strategy’ has become a business buzzword, while actions and transparency speak louder than words. And as many are focused on the environment and governance, is the social element the squeezed middle?

As we tiptoe hopefully out of the worst of the coronavirus crisis, leaders have many important questions to answer. How will hybrid working actually work? What business models need evolving or binning? And, most fundamentally, in a world of constant change, how can greater agility and resilience be achieved?

“The US military phrase VUCA – an acronym for volatility, uncertainty, complexity and ambiguity – captures the world in which we now operate,” says Alan Patefield-Smith, chief information officer of insurers Admiral Group. “Everyone has their favourite worry.”

Paul Szumilewicz, programme director for retail in continental Europe at HSBC, bristles at the concept of ‘resilience.’ He says: “What I’ve seen in the last few years, especially during the pandemic, is that ‘resilience’ is overrated. Too often, we have unrealistic expectations of people and particularly leaders.”

Szumilewicz argues that admitting “we don’t know the answer, but we are working on it” shows strength. “There is a positive shift in leaders to accept that being vulnerable makes us more real, more relatable,” he says.

Citing a 2017 Harvard Business Review paper, he continues: “The single biggest factor that triggers oxytocin [a hormone that plays a role in social bonding] in the brain at work is when a leader, manager or colleague shows vulnerability. Resilience is sometimes not as powerful as we think. Being honest about that can have an even more powerful impact.”

Invest in technology but don’t forget people

Simon Finch, supply chain director at Harrods, concedes that “there was a lot of scrambling around to make things work” when the coronavirus crisis and, more recently, Brexit fallout exposed operational weaknesses. He posits that businesses were “obsessed with making supply chains as lean as possible” before Covid, moving items around quickly, with minimal stock and expense. 

“Coronavirus completely screwed up that approach,” says Finch. “From now on, the supply chain must be more about agility, to cope with volatility and uncertainty, and less about being lean. However, that agility has to be fully supported by technology and data insights.” 

Technology alone, though, is not enough. If leaders fail to invest in their people, and that includes themselves, then the much-maligned skills gap will gape even wider. Consider the World Economic Forum’s estimate that technology will displace 85m human jobs while 97m new roles are created in just the next four years. As man, woman and machine work together, leaders should become less robotic and more human.

Indeed, according to Wayne Clarke, founding partner of the Global Growth Institute: “The most essential leadership trait of the 21st century, without a doubt, is empathy. The leaders with the most emotional intelligence will stand out. To better engage staff and improve the employee experience, the most critical question to ask is ‘How do you feel?’”

So go on, be honest.

This article was first published by Raconteur as part of a long-scroll project sponsored by Oliver Wight in November 2021