Go Flux Yourself: Navigating the Future of Work (No. 22)


TL;DR: October’s Go Flux Yourself explores the epidemic of disconnection in our AI age. As 35% of Britons use smart doorbells to avoid human contact on Hallowe’en, and children face 2,000 social media posts daily, we’re systematically destroying the one skill that matters most: genuine human connection.

Image created on Midjourney

The future

“The most important single ingredient in the formula of success is knowing how to get along with people.”

Have we lost the knowledge of how to get along with people? And to what extent is an increasing dependence on large language models degrading this skill for adults, and not allowing it to bloom for younger folk?

When Theodore Roosevelt, the 26th president of the United States, spoke the above words in the early 20th century, he couldn’t have imagined a world where “getting along with people” would require navigating screens, algorithms, and artificial intelligence. Yet here we are, more than a century after he died in 1919, rediscovering the wisdom in the most unsettling way possible.

Indeed, this Hallowe’en, 35% of UK homeowners plan to use smart doorbells to screen trick-or-treaters, according to estate agents eXp UK. Two-thirds will ignore the knocking. We’re literally using technology to avoid human contact on the one night of the year when strangers are supposed to knock on our doors.

It’s the perfect metaphor for where we’ve ended up. The scariest thing isn’t what’s at your door. It’s what’s already inside your house.

Princess Catherine put it perfectly earlier in October in her essay, The Power of Human Connection in a Distracted World, for the Centre for Early Childhood. “While digital devices promise to keep us connected, they frequently do the opposite,” she wrote, in collaboration with Robert Waldinger, part-time professor of psychiatry at Harvard Medical School. “We’re physically present but mentally absent, unable to fully engage with the people right in front of us.”

I was a contemporary of Kate’s at the University of St Andrews in the wilds of East Fife, Scotland. We both graduated in 2005, a year before Twitter launched and a year after “TheFacebook” appeared. We lived in a world where difficult conversations happened face-to-face, where boredom forced creativity, and where friendship required actual presence. That world is vanishing with terrifying speed.

The Princess of Wales warns that an overload of smartphones and computer screens is creating an “epidemic of disconnection” that disrupts family life. Notably, her three kids are not allowed smartphones (and I’m pleased to report my eldest, aged 11, has a simple call-and-text mobile). “When we check our phones during conversations, scroll through social media during family dinners, or respond to emails while playing with our children, we’re not just being distracted, we are withdrawing the basic form of love that human connection requires.”

She’s describing something I explored in January’s newsletter about the “anti-social century”. The term is Derek Thompson’s, of The Atlantic: we’re living through a period marked by convenient communication and vanishing intimacy. We’re raising what Catherine calls “a generation that may be more ‘connected’ than any in history while simultaneously being more isolated, more lonely, and less equipped to form the warm, meaningful relationships that research tells us are the foundation of a healthy life”.

The data is genuinely frightening. Recent research from online safety app Sway.ly found that children in the UK and the US are exposed to around 2,000 social media posts per day. Some 77% of those children say it harms their physical or emotional health. And, scarier still, 72% of UK children have seen content in the past month that made them feel uncomfortable, upset, sad or angry.

Adults fare little better. A recent study of college students found that AI chatbot use is hollowing out human interaction. Students who used to help each other via class Discord channels now ask ChatGPT instead. Eleven of the 17 students in the study reported feeling more isolated after adopting AI.

One student put it plainly: “There’s a lot you have to take into account: you have to read their tone, do they look like they’re in a rush … versus with ChatGPT, you don’t have to be polite.”

Who needs niceties in the AI age?! We’re creating technology to connect us, to help us, to make us more productive. And it’s making us lonelier, more isolated, less capable of basic human interactions.

Marvin Minsky, who won the Turing Award back in 1969, said something that feels eerily relevant now: “Once the computers get control, we might never get it back. We would survive at their sufferance. If we’re lucky, they might decide to keep us as pets.”

He said that 56 years ago. We’re not there yet. But we’re building towards something, and whether that something serves humanity or diminishes it depends entirely on the choices we make now.

Anthony Cosgrove, who started his career at the Ministry of Defence as an intelligence analyst in 2003 and has earned an MBE, has seen this play out from the inside. Having led global teams at HSBC and now running data marketplace platform Harbr, he’s witnessed first-hand how organisations stumble into AI adoption without understanding the foundations.

“Most organisations don’t even know what data they already hold,” he told me over a video call a few weeks ago. “I’ve seen millions of pounds wasted on duplicate purchases across departments. That messy data reality means companies are nowhere near ready for this type of massive AI deployment.”

After spending years building intelligence functions and technology platforms at HSBC – first for wholesale banking fraud, then expanding to all financial crime across the bank’s entire customer base – he left to solve what he calls “the gap between having aggregated data and turning it into things that are actually meaningful”.

What jumped out from our conversation was his emphasis on product management. “For a really long time, there was a lack of product management around data. What I mean by that is an obsession about value, starting with the value proposition and working backwards, not the other way round.”

This echoes the findings I discussed in August’s newsletter about graduate jobs. As I wrote then, graduate jobs in the UK have dropped by almost two-thirds since 2022 – roughly double the decline for all entry-level roles. That’s the year ChatGPT launched. The connection isn’t coincidental.

Anthony’s perspective on this is particularly valuable. “AI can only automate fragments of a job, not replace whole roles – even if leaders desperately want it to.” He shared a conversation with a recent graduate who recognised that his data science degree would, ultimately, be useless. “The thing he was doing is probably going to be commoditised fairly quickly. So he pivoted into product management.”

This smart graduate’s instinct was spot-on. He’s now, in Anthony’s words, “actively using AI to prototype data products, applications, digital products, and AI itself. And because he’s a data scientist by background, he has a really good set of frameworks and set of skills”.

Yet the broader picture remains haunting. Microsoft’s 2025 Work Trend Index reveals that 71% of UK employees use unapproved consumer AI tools at work. Fifty-one per cent use these tools weekly, often for drafting reports and presentations, or even managing financial data, all without formal IT approval.

This “Shadow AI” phenomenon is simultaneously encouraging and terrifying. “It shows that people are agreeable to adopting these types of tools, assuming that they work and actually help and aren’t hard to use,” Anthony observed. “But the second piece that I think is really interesting impacts directly the shareholder value of an organisation.”

He painted a troubling picture: “If a big percentage of your employees are becoming more productive and finishing their existing work faster or in different ways, but they’re doing so essentially untracked and off-books, you now have your employees that are becoming essentially more productive, and some of that may register, but in many cases it probably won’t.”

Assuming that many employees are using AI for work without being open about it with their employers, how concerned about security and data privacy are they likely to be?

Earlier in the month, Cybernews discovered that two AI companion apps, Chattee Chat and GiMe Chat, had exposed the intimate conversations of more than 400,000 users: over 43 million messages and more than 600,000 images and videos.

At the time of writing, one of the apps, Chattee, ranked 121st among Entertainment apps on the Apple App Store, having been downloaded over 300,000 times. This is a symptom of what people, including Microsoft’s AI chief Mustafa Suleyman (as per August’s Go Flux Yourself), are calling AI psychosis: the willingness to confide our deepest thoughts to algorithms while losing the ability to confide in actual humans.

As I explored in June 2024’s newsletter about AI companions, this trend has been accelerating. As of March 2024, AI companion apps had already racked up 225 million lifetime downloads on the Google Play Store alone. The problem isn’t scale. It’s the hollowing out of human connection.

Then there’s the AI bubble itself, which everyone in the space has been talking about in the last few weeks. The Guardian recently warned that AI valuations are “now getting silly”. The Cape ratio – the cyclically adjusted price-to-earnings ratio – has reached dotcom bubble levels. The “Magnificent 7” tech companies now represent slightly more than a third of the whole S&P 500 index.

OpenAI’s recent deals exemplify the circular logic propping up valuations. The arrangement under which OpenAI will pay Nvidia for chips and Nvidia will invest $100bn in OpenAI has been criticised as exactly what it is: circular. The latest move sees OpenAI pledging to buy lots of AMD chips and take a stake in AMD over time.

And yet amid this chaos, there are plenty of people going back to human basics: rediscovering real, in-person connection through physical activity and genuine community.

Consider walking football in the UK. What began in Chesterfield in 2011 as a gentle way to coax older men back into exercise has become one of Britain’s fastest-growing sports. More than 100,000 people now play regularly across the UK, many managing chronic illnesses or disabilities. It has become “a masterclass in human communication” that no AI could replicate. Tony Jones, 70, captain of the over-70s, described it simply: “It’s the camaraderie, the dressing room banter.”

Research from Nottingham Trent University found that walking footballers’ emotional well-being exceeded the national average, and loneliness was less common. “The national average is about 5% for feeling ‘often lonely’,” said professor Ian Varley. “In walking football, it was 1%.”

This matters because authentic human interaction – the kind that requires you to read body language, manage tone, and show up physically – can’t be automated. Princess Catherine emphasises this in her essay, citing Harvard Medical School’s research showing that “the people who were more connected to others stayed healthier and were happier throughout their lives. And it wasn’t simply about seeing more people each week. It was about having warmer, more meaningful connections. Quality trumped quantity in every measure that mattered.”

The digital world offers neither warmth nor meaning. It offers convenience. And as Catherine warns, convenience is precisely what’s killing us: “We live increasingly lonelier lives, which research shows is toxic to human health, and it’s our young people (aged 16 to 24) that report being the loneliest of all – the very generation that should be forming the relationships that will sustain them throughout life.”

Roosevelt understood this instinctively over a century ago: success isn’t about what you know or what you can do. It’s about how you relate to other people. That skill – the ability to truly connect, to read a room, to build trust, to navigate conflict, to offer genuine empathy – remains stubbornly, beautifully human.

And it’s precisely what we’re systematically destroying. If we don’t take action to arrest this dark and deepening trend of digitally supercharged disconnection, the dream of AI and other technologies being used for enlightenment and human flourishing will quickly prove to be a living nightmare.

The present

Image runner’s own

As the walking footballers demonstrate, the physical health benefits of group exercise are sometimes secondary to camaraderie – but winning and hitting goals are also fun and life-affirming. In October, I ran my first half-marathon in under 1 hour and 30 minutes. I crossed the line at Walton-on-Thames to complete the River Thames half in 1:29:55. A whole five seconds to spare! I would have been nowhere near that time without Mike.

Mike is a member of the Crisis of Dads, the running group I founded in November 2021. What started as a clutch of portly, middle-aged plodders meeting at 7am every Sunday in Ladywell Fields, in south-east London, has grown to 26 members. Men in their 40s and 50s exercising to limit the dad bod and creating space to chat through things on our minds.

The male suicide rate in the UK in 2024 was 17.1 per 100,000, compared to 5.6 per 100,000 for women, according to the charity Samaritans. Males aged 50-54 had the highest rate: 26.8 per 100,000. Connection matters. Friendship matters. Physical presence matters.

Mike paced me during the River Thames half-marathon. With two miles to go, we were on track to go under 90 minutes, but the pain was horrible. His encouragement became more vocal – and more profane – as I closed in on something I thought beyond my ability.

Sometimes you need someone who believes in your ability more than you do to swear lovingly at you to cross that line quicker.

Work in the last month has been equally high-octane and (excuse the not-so-humble brag) record-breaking – plus full of in-person connection. My fledgling thought leadership consultancy, Pickup_andWebb (combining brand strategy and journalistic expertise to deliver guaranteed ROI – or your money back), is taking flight.

And I’ve been busy moderating sessions at leading technology events across the country, around the hot topic of how to lead and prepare the workforce in the AI age.

Moderating at DTX London (image taken by organisers)

On the main stage at DTX London, I opened a session on AI readiness by asking the audience whose workforce was suitably prepared. One person, out of hundreds, stuck their hand up: Andrew Melville, who leads customer strategy for Mission Control AI in Europe. Sportingly, he took the microphone and explained the key to his success.

I caught him afterwards. His confidence wasn’t bravado. Mission Control recently completed a data reconciliation project for a major logistics company. The task involved 60,000 SKUs of inventory data. A consulting firm had quoted two to three months and a few million pounds. Mission Control’s AI configuration completed it in eight hours. Orders of magnitude faster, and 80% cheaper.

“You’re talking orders of magnitude,” Andrew said. “We’re used to implementing an Oracle database, and things get 5 or 10% more efficient. Now you’re seeing a thousand times more efficiency in just a matter of days and hours.”

He drew a parallel to the Ford Motor Company’s assembly line. Before that innovation, it took 12 hours to build a car. After? Ninety minutes. Eight times faster. “Imagine being a competitor of Ford,” Andrew said, “and they suddenly roll out the assembly line. And your response to that is: we’re going to give our employees power tools so they can build a few more cars every day.”
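
For the numerically curious, both multiples are easy to sanity-check. Here is a minimal Python sketch of the back-of-envelope arithmetic, assuming 30-day months and comparing elapsed clock time only (counting a consulting team’s combined person-hours would push the multiples higher still):

```python
def speedup(baseline_hours: float, new_hours: float) -> float:
    """How many times faster the new approach is than the baseline."""
    return baseline_hours / new_hours

# Ford's assembly line: roughly 12 hours per car down to 90 minutes.
print(f"Ford assembly line: {speedup(12, 1.5):.0f}x faster")  # -> 8x

# Mission Control: a "two to three months" consulting quote vs eight hours.
for months in (2, 3):
    calendar_hours = months * 30 * 24  # assumption: 30-day months, clock time
    print(f"{months}-month quote: {speedup(calendar_hours, 8):.0f}x faster")
# -> 180x and 270x in elapsed time alone
```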

That’s what most companies are doing with AI: giving workers ChatGPT subscriptions, hoping for magic, and missing the fundamental transformation required. As I said on stage at DTX London, it’s like handing workers the keys to a Formula 1 car without instructions, then wondering why there are so many immediate and expensive crashes.

“I think very quickly what you’re going to start seeing,” Andrew said, “is executives that can’t visualise what an AI transformation looks like are going to start getting replaced by executives that do.”

At Mission Control, he’s building synthetic worker architectures – AI agents that can converse with each other, collaborate across functions, and complete higher-order tasks. Not just analysing inventory data, but coordinating with procurement systems and finance teams simultaneously.

“It’s the equivalent of having three human experts in different fields,” Andrew explained, “and you put them together and you say, we need you to connect some dots and solve a problem across your three areas of expertise.”

The challenge is conceptual. How do you lead a firm where human workers and digital workers operate side by side, where the tasks best suited for machines are done by machines and the tasks best suited for humans are done by humans?

This creates tricky questions throughout organisations. Right now, most people are rewarded for being at their desks for 40 hours a week. But what happens when half that time involves clicking around in software tools, downloading data sets, reformatting, and loading back? What happens when AI can do all of that in minutes?

“We have to start abstracting the concept of work,” Andrew said, “and separating all of the tasks that go into creating a result from the result itself.”

Digging into that is for another edition of the newsletter, coming soon. 

Elsewhere, at the first Data Decoded in Manchester, I moderated a 30-minute discussion on leadership in the age of AI. We were just getting going when time was up, which feels very much like 2025. The appetite for genuine insight was palpable. People are desperate for answers beyond the hype. Leaders sense the scale of the shift. However, their calendars still favour show-and-tell over do-and-learn. That will change, but not without bruises.

Also in October, my essay on teenage hackers was finally published in the New Statesman. The main message is that we’re criminalising the young people whose skills we desperately need, instead of offering them a path into cybersecurity and related industries, away from the darker criminal world.

Looking slightly ahead, on 11 November, I’ll be expanding on these AI-related themes, debating at The Portfolio Collective’s Portfolio Career Festival at Battersea Arts Centre. The subject, Unlocking Potential or Chasing Efficiency: AI’s Impact on Portfolio Work, prompts the question: should professionals embrace AI as a tool to amplify skills, creativity and flow, or hand over entire workflows to autonomous agents?

I know which side I’m on. 

(If you fancy listening in and rolling your sleeves up alongside over 200 ambitious professionals – for a day of inspiration, connection and, most importantly, growth – I can help with a discounted ticket. Use OLIVERPCFEST for £50 off the cost here.)

The past

In 2013, I was lucky enough to edit the Six Nations Guide with Lewis Moody, the former England rugby captain, a blood-and-thunder flanker who clocked up 71 caps. At the time, Lewis was a year into retirement, grappling with the physical aftermath of a brutal professional career.

When the tragic news broke earlier in October that Lewis, 47, had been diagnosed with the cruelly life-sapping motor neurone disease (MND), it unleashed a waterfall of sorrow from the rugby community and far beyond. I simply sent him a heart emoji. He texted the same back a few hours later.

Lewis’s hellish diagnosis – and the impact it has had on so many – feels especially poignant given Princess Catherine’s reflections on childhood development. She writes about a Harvard study showing that “people who developed strong social and emotional skills in childhood maintained warmer connections with their spouses six decades later, even into their eighties and nineties”.

She continued: “Teaching children to better understand both their inner and outer worlds sets them up for a lifetime of healthier, more fulfilling relationships. But if connection is the key to human thriving, we face a concerning reality: every social trend is moving in the opposite direction.”

AI has already changed work. The deeper question is whether we’ll preserve the skills that make us irreplaceably human.

This Hallowe’en, the real horror isn’t monsters at the door. It’s the quiet disappearance of human connection, one algorithmically optimised interaction at a time.

Roosevelt was right. Success depends on getting along with people. Not algorithms. Not synthetic companions. Not virtual influencers.

People.

Real, messy, complicated, irreplaceable people. 

Statistics of the month

💰 AI wage premium grows
Workers with AI skills now earn a 56% wage premium compared to colleagues in the same roles without AI capabilities – showing that upskilling pays off in cold, hard cash. (PwC)

🔄 A quarter of jobs face radical transformation
Roughly 26% of all jobs on Indeed appear poised to transform radically in the near future as GenAI rewrites the DNA of work across industries. (Indeed)

📈 AI investment surge continues
Over the next three years, 92% of companies plan to increase their AI investments – yet only 1% of leaders call their companies “mature” on the deployment spectrum, revealing a massive gap between spending and implementation. (McKinsey)

📉 Workforce reduction looms
Some 40% of employers expect to reduce their workforce where AI can automate tasks, according to the World Economic Forum’s Future of Jobs Report 2025 – a stark reminder that transformation has human consequences. (WEF)

🎯 Net job creation ahead
A reminder that despite fears, AI will displace 92 million jobs but create 170 million new ones by 2030, resulting in a net gain of 78 million jobs globally – proof that every industrial revolution destroys and creates in equal (or greater) measure. (WEF)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 21)


TL;DR: September’s Go Flux Yourself examines the fundamentals of AI success: invest £10 in people for every £1 on technology, build learning velocity into your culture, and show up as a learner yourself. England’s women’s rugby team went from amateurs juggling jobs to world champions through one thing: investing in people.

Image created on Midjourney

The future

“Some people are on [ChatGPT] too much. There are young people who just say ‘I can’t make any decision in my life without telling chat everything that’s going on. It knows me, it knows my friends, I’m going to do whatever it says.’ That feels really bad to me … Even if ChatGPT gives way better advice than any human therapist, there is something about collectively deciding we’re going to live our lives the way that the AI tells us feels bad and dangerous.”

The (unusually long) opening quotation for this month’s Go Flux Yourself comes – not for the first time – from the CEO of OpenAI, Sam Altman, arguably the most influential technology leader right now. How will future history books – if there is anyone with a pulse around to write them – judge the man who allegedly has “no one knows what happens next” as a sign in his office?

The above words come from an interview a few weeks ago, and smack of someone who is deeply alarmed by the power he has unleashed. When Altman starts worrying aloud about his own creation, you’d think more people would pay attention. But here we are, companies pouring millions into AI while barely investing in the people who’ll actually use it.

We’ve got this completely backwards. Organisations are treating AI as a technology problem when it’s fundamentally a people problem. Companies are spending £1 on AI technology when they should spend an additional £10 on people, as Kian Katanforoosh, CEO and Founder of Workera, told me over coffee in Soho a couple of weeks ago.

We discussed the much-quoted MIT research, published a few weeks ago (read the main points without signing up to download the paper in this Forbes piece), which shows that 95% of organisations are failing to achieve a return on investment from their generative AI pilots. Granted, the sample size was only 300 organisations, but that’s a pattern you can’t ignore.

Last month’s newsletter considered the plight of school leavers and university students in a world where graduate jobs have dropped by almost two-thirds in the UK since 2022, and entry-level hiring is down 43% in the US and 67% in the UK since OpenAI launched ChatGPT in November 2022.

It was easily the most read of all 20 editions of Go Flux Yourself. Why? I think it captured many people’s concerns about how damaging blindly following the AI path could be for human flourishing. If young people are unable to gain employment, what happens to the talent pipeline, and where will tomorrow’s leaders come from? The maths doesn’t work. The logic doesn’t hold. And the consequences are starting to show.

To continue this critically important conversation, I met (keen Arsenal fan) Kian in central London, as he was over from his Silicon Valley HQ. Alongside running Workera – an AI-powered skills intelligence platform that helps Fortune 500 and Global 2000 organisations assess, develop, and manage innovation skills in areas such as AI, data science, software engineering, cloud computing, and cybersecurity – he is an adjunct lecturer in computer science at Stanford University.

“Companies have bought a huge load of technology,” he said. “And now they’re starting to realise that it can’t work without people.”

That’s the pattern repeated everywhere. Buy the tools. Deploy the systems. Wonder why nothing changes.

This is wrongheaded. We’ve treated AI like it’s just another software rollout when it’s closer to teaching an entire workforce a new language. And business leaders have to invest significantly more in their current and future human workforce to maximise the (good) potential of AI and adjacent technologies, or everyone fails. Updated leadership thinking is paramount to success.

McKinsey used to advocate spending $1 (or £1) on technology for every $1 / £1 on people. Then, last year, the company revised it: £1 on technology, £3 on people. “Our experience has shown that a good rule of thumb for managing gen AI costs is that for every $1 spent on developing a model, you need to spend about $3 for change management. (By way of comparison, for digital solutions, the ratio has tended to be closer to $1 for development to $1 for change management.)”

Kian thinks this is still miles off what should be spent on people. “I think it’s probably £1 in technology, £10 in people,” he told me. “Because when you look at AI’s potential productivity enhancements on people, even £10 in people is nothing.”
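
To make those rules of thumb concrete, here is a minimal Python sketch applying each ratio to a hypothetical £2m technology budget (the budget figure is mine, purely for illustration):

```python
# People-to-technology spending ratios cited above, per £1 of technology spend.
RATIOS = {
    "McKinsey, old rule of thumb": 1,   # £1 on people per £1 on technology
    "McKinsey, revised for gen AI": 3,  # £3 on people per £1 on technology
    "Kian Katanforoosh (Workera)": 10,  # £10 on people per £1 on technology
}

tech_budget = 2_000_000  # assumption: illustrative technology spend in £

for source, people_per_tech_pound in RATIOS.items():
    people_budget = tech_budget * people_per_tech_pound
    print(f"{source}: £{people_budget:,.0f} on people, "
          f"£{tech_budget + people_budget:,.0f} all in")
```

On Kian’s ratio, a £2m technology bill implies a £20m commitment to people – which is exactly his point about how far most organisations are from taking the human side seriously.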

That’s not hyperbole. That’s arithmetic based on what he sees daily at Workera. Companies contact him, saying they’ve purchased 25 different AI agents and software packages, but employee usage starts strong for a week and then collapses. What’s going on? The answer is depressingly predictable.

“Your people don’t even know how to use that technology. They don’t even have the 101 skills to understand how to use it. And even when they try, they’re putting you (the organisation) at risk because they don’t even know what they’re uploading to these tools.”

One of the main things Workera offers is an “AI-readiness test”, and Kian’s team’s findings uncover a worrying truth: right now, outside tech companies, only 28 out of 100 people are AI-ready. That’s Workera’s number, based on assessing thousands of employees in the US and elsewhere. In tech companies, the readiness rate is over 90%, which is perhaps unsurprising. The gap between tech-industry businesses and everyone else is already a chasm, and it is growing.

But here’s where it gets really interesting. Being AI-ready today means nothing if your learning velocity is too slow. The technology changes every month. New capabilities arrive. Old approaches become obsolete. Google just released Veo, which means anyone can become a videographer. Next month, there’ll be something else.

“You can be ahead today,” Kian said. “If your learning velocity is low, you’ll be behind in five years. That’s what matters at the end of the day.”

Learning velocity. I liked that phrase. It captures something essential about this moment: that standing still is the same as moving backwards, that capability without adaptability is a temporary advantage at best.

However, according to Kian, the UK and Europe are already starting from behind, as his data shows a stark geographic divide in AI readiness. American companies – even outside pure tech firms – are moving faster on training and adoption. European organisations are more cautious, more bound by regulatory complexity, and more focused on risk mitigation than experimentation.

“The US has a culture of moving fast and breaking things,” Kian said. “Europe wants to get it right the first time. That might sound sensible, but in AI, you learn by doing. You can’t wait for perfect conditions.”

He pointed to the EU AI Act as emblematic of the different approaches. Comprehensive regulation arrived before widespread adoption. In the US, it’s the reverse: adoption at scale, regulation playing catch-up. Neither approach is perfect, but one creates momentum while the other creates hesitation.

The danger isn’t just that European companies fall behind American competitors. It’s that European workers become less AI literate, less adaptable, and less valuable in a global labour market increasingly defined by technological fluency. The skills gap becomes a prosperity gap.

“If you’re a European company and you’re waiting for clarity before you invest in your people’s AI skills, you’ve already lost,” Kian said. “Because by the time you have clarity, the game has moved on.”

Fresh research backs this up. (And a note on the need for the latest data – as a client told me a few days ago, data is like milk, and it has a short use-by date. I love that metaphor.) A new RAND Corporation study examining AI adoption across healthcare, financial services, climate and energy, and transportation found something crucial: identical AI technologies achieve wildly different results depending on the sector. A chatbot in banking operates at a different capability level than the same technology in healthcare, not because the tech differs but because the context, regulatory environment, and implementation constraints differ.

RAND proposes five levels of AI capability.

Level 1 covers basic language understanding and task completion: chatbots, simple diagnostic tools, and fraud detection. Humanity has achieved this.

Level 2 involves enhanced reasoning and problem-solving across diverse domains: systems that analyse complex scenarios and draw inferences. We’re emerging into this now.

Level 3 is sustained autonomous operation in complex environments, where systems make sequential decisions over time without human intervention. That’s mainly in the future, although Waymo’s robotaxis and some grid management pilots are testing it.

Levels 4 and 5 – creative innovation and full organisational replication – remain theoretical.
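
For readers who think in code, the framework reads naturally as a data structure. A rough Python sketch follows; the member names and sector placings are my paraphrase of RAND’s summary, not the report’s exact labels:

```python
from enum import IntEnum

class RandAICapability(IntEnum):
    """RAND's five AI capability levels, paraphrased from the summary above."""
    BASIC_TASKS = 1           # language understanding, chatbots, fraud detection
    ENHANCED_REASONING = 2    # cross-domain analysis and inference
    AUTONOMOUS_OPERATION = 3  # sequential decisions without human intervention
    CREATIVE_INNOVATION = 4   # theoretical for now
    ORG_REPLICATION = 5       # theoretical for now

# Approximate sector placings, per the discussion that follows:
sector_levels = {
    "healthcare": RandAICapability.BASIC_TASKS,
    "finance": RandAICapability.ENHANCED_REASONING,
    "climate and energy": RandAICapability.ENHANCED_REASONING,
    "transportation": RandAICapability.ENHANCED_REASONING,  # edging toward 3
}

for sector, level in sector_levels.items():
    print(f"{sector}: Level {level.value} ({level.name})")
```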

Here’s what matters: most industries currently operate at Levels 1 and 2. Healthcare lags behind despite having sophisticated imaging AI, as regulatory approval processes and evidence requirements slow down adoption. Finance advances faster because decades of algorithmic trading have created infrastructure and acceptance. Climate and energy sit in the middle, promising huge optimisation gains but constrained by infrastructure build times and regulatory uncertainty. Transportation is inching toward Level 3 autonomy while grappling with ethical dilemmas about life-or-death decisions.

The framework reveals why throwing technology at problems doesn’t work. You can’t skip levels. You can’t buy Level 3 capability and expect it to function in an organisation operating at Level 1 readiness. The gap between what the technology can do and what your people can do with it determines the outcome.

RAND identified six challenges that cut across every sector: workforce transformation, privacy protection, algorithmic bias, transparency and oversight, disproportionate impacts on smaller organisations, and energy consumption. Small institutions serving rural and low-income areas face particular difficulties. They lack resources and technical expertise. The benefits of AI concentrate among major players, while vulnerabilities accumulate at the edges.

For instance, the algorithmic bias problem is insidious. Even without explicitly considering demographic characteristics, AI systems exhibit biases. Financial algorithms can devalue real estate in vulnerable areas. Climate models might overlook impacts on marginalised communities. The bias creeps in through training data, through proxy variables, through optimisation functions that encode existing inequalities.

Additionally, and as I’ve written about previously, the energy demands are staggering. AI’s relationship with climate change cuts both ways. Yes, it optimises grids and accelerates the development of green technology. However, if AI scales productivity across the economy, it also scales emissions, unless we intentionally direct applications toward efficiency gains and invest heavily in clean energy infrastructure. The transition from search-based AI to generative AI has intensified computational requirements. Some experts argue potential efficiency gains could outweigh AI’s carbon footprint, but only if we pursue those gains deliberately through measured policy and investment rather than leaving it to market forces.

RAND’s conclusion aligns with everything Kian told me: coordination is essential, both domestically and internationally. Preserve optionality through pilot projects and modular systems. Employ systematic risk management frameworks. Provide targeted support to smaller institutions. Most importantly, invest in people at a ratio that reflects the actual returns.

The arithmetic remains clear across every analysis: returns on investing in people dwarf the costs. But we’re not doing it.

How, though, do you build learning velocity into an organisation? Kian had clear thoughts on this. Yes, you need to dedicate time to learning. Ten per cent of work time isn’t unreasonable. But the single most powerful thing a leader can do is simpler than that: lead by example.

“Show up as a learner,” he said. “If your manager, or your manager’s manager, or your manager’s manager’s manager is literally showing you how they learn and how much time they spend learning and how they create time for learning, that is already enough to create a mindset shift in the employee base.”

Normalising learning, then, is vital. That shift in culture matters more than any training programme you can buy off the shelf.

We talked about Kian’s own learning habits. Every morning starts with reading. He’s curated an X feed of people he trusts who aren’t talking nonsense, scans it quickly, and bookmarks what he wants to read more deeply at night. He tracks the top AI conferences and skims the papers they accept – thousands of them – looking at figures and titles to get the gist. Then he picks 10% to read more carefully, and maybe 3% to spend an entire day on. “You need to have that structure or else it just becomes overwhelming,” he said.

The alternative is already playing out, and it’s grim. Some people – particularly young people – are on ChatGPT too much, as Altman admitted. They can’t make any decision without consulting the chatbot. It knows them, knows their friends, knows everything. They’ll do whatever it says.

Last month, Mustafa Suleyman, Co-Founder of DeepMind and now in charge of AI at Microsoft, published an extended essay about what he calls “seemingly conscious AI”: systems that exhibit all the external markers of consciousness without possessing it. He thinks we’re two to three years away from having the capability to build such systems using technology that already exists.

“My central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship,” he wrote.

Researchers working on consciousness tell him they’re being inundated with queries from people asking whether their AI is conscious, whether it’s acceptable to love it, and what it means if it is. The trickle has become a flood.

Tens of thousands of users already believe their AI is God. Others have fallen in love with their chatbots. Indeed, a Harvard Business Review survey of 6,000 regular AI users – the results of which were published in April (so how stale is the milk?) – found that companionship and therapy were the most common use cases.

This isn’t speculation about a distant future. This is happening now. And we’re building the infrastructure – the long memories, the empathetic personalities, the claims of subjective experience – that will make these illusions even more convincing.

Geoffrey Hinton, the so-called godfather of AI, who won the Nobel Prize last year, told the Financial Times in a fascinating lunch profile published in early September that “rich people are going to use AI to replace workers. It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer. That’s not AI’s fault, that is the capitalist system.”

Dark, but there’s something clarifying about his honesty. The decisions we make now about how to implement AI, whether to invest in people or just technology, whether to prioritise adoption or understanding – these will shape what comes next.

The Adaptavist Group’s latest report, published last week, surveyed 900 professionals responsible for introducing AI across the UK, US, Canada and Germany. They found a divide: 42% believe their company’s AI claims are over-inflated. These “AI sceptics” work in environments where 65% believe their company’s AI stance puts customers at risk, 67% worry that AI adoption poses a threat to jobs, and 59% report having no formal AI training.

By contrast, AI leaders in companies that communicated AI’s value honestly reported far greater benefits. Some 58% say AI has improved work quality, 61% report time savings, and 48% note increased output. Only 37% worry about ethics issues, compared with 74% in over-hyped environments.

The difference? Training. Support. Honest communication. Investing in people rather than just technology.

Companies are spending between £1 million and £10 million implementing AI. Some are spending over £10 million. But 59% aren’t providing basic training. It’s like buying everyone in your company a Formula One car and being shocked when most people crash it.

“The next year is all going to be about adoption, skills, and doing right by employees,” Kian said. “Companies that do it well are going to see better adoption and more productivity. Those who don’t? They’re going to get hate from their employees. Like literally. Employees will be really mad at companies for not being human at all.”

That word – human – kept coming up in our conversation. In a world increasingly mediated by AI, being human becomes both more difficult and more essential. The companies that remember this, that invest in their people’s ability to learn, adapt, and think critically, will thrive. The ones that don’t will wonder why their expensive AI implementations gather digital dust.

The present

Image created on Midjourney

On Thursday (October 2), I’ll be at DTX London moderating a main-stage session asking: is your workforce ready for what’s next? The questions we’ll tackle include how organisations can create inclusive, agile workplaces that foster belonging and productivity, how AI will change entry-level positions, and crucially, how we safeguard critical thinking in an AI-driven world. These are urgent, practical challenges that every organisation faces right now. (I’ll also be recording episode three of DTX Unplugged, the new podcast series I co-host, looking at business evolution – listen to the series so far here.)

Later in October, on the first day of the inaugural Data Decoded in Manchester (October 21-22), I’ll moderate a related session: what leadership looks like in a world of AI. Because leadership must evolve. The ethical responsibilities are staggering. The pace of change is relentless. And the old playbooks simply don’t work.

I’ve also started writing the Go Flux Yourself book (any advice on self-publishing welcome). More on that soon. The conversations I’m having, the research I’m doing, the patterns I’m seeing all point towards something bigger than monthly newsletters can capture. We’re living through a genuine transformation, and I’m in a unique and privileged position to document what it feels like from the inside rather than just analysing it from the outside.

The responses to last month’s newsletter on graduate jobs and universities showed me how hungry people are for honest conversations about what’s really happening, on the ground and behind the numbers. Expect more clear-eyed analysis of where we are and what we might do about it. And please do reach out if you think you can contribute to this ongoing discussion, as I’m open to featuring interviewees in the newsletter (and, in time, the book).

The past

Almost exactly two years ago, I took my car for its annual service at a garage at Elmers End, South East London. While I waited, I wandered down the modest high street and discovered a Turkish café. I ordered a coffee, a lovely breakfast (featuring hot, gooey halloumi cheese topped with dripping honey and sesame seeds) and, on a whim, had my tarot cards read by a female reader at the table opposite. We talked for 20 minutes, and it changed my life (see more on this here, in Go Flux Yourself No.2).

A couple of weeks ago, I returned for this year’s car service. The café is boarded up now, alas. A blackboard dumped outside showed the old WiFi password: kate4cakes. Another casualty of our changing times, a small loss in the great reshuffling of how we live, work, and connect with each other. With autumn upon us, the natural state of change and renewal is fresh in the mind. However, it still saddened me as I pondered what the genial Turkish owner and his family were doing instead of running the café.

Autumn has indeed arrived. Leaves are twisting from branches and falling to create a multicoloured carpet. But what season are we in, really? What cycle of change?

I thought about that question as I watched England’s women’s rugby team absolutely demolish Canada 33-13 in the World Cup final at Twickenham last Saturday, with almost 82,000 people in attendance, a world record. The Red Roses had won all 33 games since their last World Cup defeat, the final against New Zealand’s Black Ferns.

Being put through my paces with Katy Mclean (© Tina Hillier)

In July 2014, I trained with the England women’s squad for pieces I wrote for the Daily Telegraph (“The England women’s rugby team are tougher than you’ll ever be”) and the Financial Times (“FT Masterclass: Rugby training with Katy Mclean” – now Katy Daley-McLean). They weren’t professional then. They juggled jobs with their international commitments. Captain Katy Daley-McLean was a primary school teacher in Sunderland. The squad included policewomen, teachers, and a vet. They spent every spare moment either training or playing rugby.

I arrived at Surrey Sports Park in Guildford with what I now recognise was an embarrassing air of superiority. I’m bigger, stronger, faster, I thought. I’d played rugby at university. Surely I could keep up with these amateur athletes.

The England women’s team knocked such idiotic thoughts out of my head within minutes.

We started with touch rugby, which was gentle enough. Then came sprints. I kept pace with the wingers and fullbacks for the first four bursts, then tailed off. “Tactically preserving my energy,” I told myself.

Then strength and conditioning coach Stuart Pickering barked: “Malcolms next.”

Katy winked at me. “Just make sure you keep your head up and your hands on your hips. If you show signs of tiredness, we will all have to do it again … so don’t.”

Malcolms – a rugby league drill invented by the evidently sadistic Malcolm Reilly – involve lying face down with your chin on the halfway line, pushing up, running backwards to the 10-metre line, going down flat again, pushing up, sprinting to the far 10-metre line. Six times.

By the fourth repetition, I was blowing hard. By the final one, I was last by some distance, legs burning, expelling deeply unattractive noises of effort. The women, heads turned to watch me complete the set, cheered encouragement rather than jeered. “Suck it up, Ollie, imagine it’s the last five minutes of the World Cup final,” full-back Danielle Waterman shouted.

Then came the circuit training. Farmers’ lifts. Weights on ropes. The plough. Downing stand-up tackle bags. Hit and roll. On and on we moved, and as my energy levels dipped uncomfortably low, it became a delirious blur.

The coup de grâce was wrestling the ball off the 5ft 6in fly-half Daley-McLean. I gripped as hard as I could. She stole it from me within five seconds. Completely zapped, I couldn’t wrest it back. Not to save my life.

Emasculated and humiliated, I feigned willingness to take part in the 40-minute game that followed. One of the coaches tugged me back. “I don’t think you should do this mate … you might actually get hurt.”

I’d learned my lesson. These women were tougher, fitter, and more disciplined than I’d ever be.

That was 2014. The England women, who went on to win the World Cup in France that year, didn’t have professional contracts. They squeezed their training around their jobs. Yet they were world-class athletes who’d previously reached three consecutive World Cup finals, losing each time to New Zealand.

Then something changed. The Rugby Football Union invested heavily. The women’s team went professional. They now have the same resources, support systems, and infrastructure as the men’s team.

The results speak for themselves. Thirty-three consecutive victories. A World Cup trophy, after two more final defeats to New Zealand. Record crowds. A team that doesn’t just compete but dominates.

This is what happens when you invest in people, providing them with the training, resources, time, and support they need to develop their skills. You treat them not as amateur enthusiasts fitting excellence around the edges of their lives, but as professionals whose craft deserves proper investment.

The parallels to AI adoption are striking. Right now, most organisations are treating their workers like those 2014 England rugby players and expecting them to master AI in their spare time. To become proficient without proper training. To deliver world-class results with amateur-level support.

It’s not going to work.

The England women didn’t win that World Cup through superior technology. They won it through superior preparation. Through investment in people, in training, and in creating conditions for excellence to flourish.

That’s the lesson for every organisation grappling with AI. Technology is cheap. Talent is everything. Training matters more than tools. And if you want your people to keep pace with change, you need to create a culture where learning isn’t a luxury but the whole point.

As Kian put it: “We need to move from prototyping to production AI. And you need 10 times more skills to put AI in production reliably than you need to put a demo out.”

Ten times the skills, and £10 spent on people for every £1 on technology. The arithmetic isn’t complicated. The will to act on it is what’s missing.

Statistics of the month

📈 Sick days surge
Employees took an average of 9.4 days off sick in 2024, compared with 5.8 days before the pandemic in 2019 and 7.8 days just two years ago. (CIPD)

📱 Daily exposure
Children are exposed to around 2,000 social media posts per day. Over three-quarters (77%) say it harms their physical or emotional health. (Sway.ly via The Guardian)

📉 UK leadership crisis
UK workers’ confidence in their company leaders has plummeted from 77% to 67% between 2022 and 2025 – well below the global average of 73% – while motivation fell from 66% to just 60%. (Culture Amp)

🎯 L&D budget reality
Despite fears that AI could replace their roles entirely (43% of L&D leaders believe this), learning and development budgets are growing: 70% of UK organisations and 84% in Australia/New Zealand increased L&D spending in 2025. (LearnUpon)

🔒 Email remains the weakest link
83% of UK IT leaders have faced an email-related security incident, with government bodies hit hardest at 92%. Yet email still carries over half (52%) of all organisational communication. (Exclaimer UK Business Email Report)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 20)

TL;DR: August’s edition examines how companies are eliminating graduate jobs while redirecting recruitment budgets to failing AI pilots. From ancient rhetoric becoming essential survival skills to pre-social media university life, this edition explores why authentic human connection is our ultimate competitive advantage.

August's Go Flux Yourself explores:

The graduate job collapse: Entry-level positions requiring degrees have dropped by two-thirds since 2022, while students face £53,000 average debt. Stanford research reveals young workers in AI-exposed jobs experienced 13% employment decline as older colleagues in identical roles saw steady growth

The failing AI paradox: 95% of corporate AI pilots deliver no meaningful returns, yet companies redirect graduate recruitment budgets to these failing technologies. Half of UK firms now want to swap staff spending for AI investment despite zero evidence of productivity gains

The strategic generation: Anecdotal examples illustrate how young people aren't competing with AI but conducting it – using ChatGPT for interior design, creating revision podcasts, and embracing technology as "another thing to make life easier"

The pre-digital simplicity: Twenty-four years ago at St Andrews, Prince William was just another tutorial student alongside Oliver in a world without Facebook, smartphones, or AI assistants. Physical books, card catalogues, and pub conversations shaped minds through friction rather than convenience

To read the full newsletter, please visit www.oliverpickup.com.

Image created on Midjourney

The future

“First, we made our jobs robotic. Then we trained the robots how to do them. If AI takes your job, it won’t be because it’s so smart. It will be because over time we’ve made work so narrow, so repetitive, so obsessed with minimising variance and deferring to metrics, that it’s perfectly designed for machines.”

Tom Goodwin’s above observation about how we’ve made our jobs robotic before training robots to do them, articulated in mid-August on LinkedIn, feels remarkably prescient as thousands of teenagers prepare for university. When I interviewed the business transformation consultant, bullshit caller / provocateur, and media expert in 2022, following an update to his book Digital Darwinism, he warned about a looming leadership crisis. That crisis is now playing out in ways he probably didn’t fully anticipate.

The timing of the post couldn’t be more brutal. It’s been 25 years since I received my A-level results. Yet, I can still easily recall the pre-reveal trepidation followed by relief that I’d scraped the grades to study English Literature and Language at the University of St Andrews (as a peer of Prince William – more on this below, including a 20-year-old graduation picture).

What a thrilling time of year it should be: the end of school, then taking the next step on life’s magical journey, be it straight to university without passing go, a gap year working and then travelling, or eschewing higher education to begin a career.

I wonder how this year’s A-level leavers feel, given they’re walking into the most uncertain graduate job market in a generation. The promises made to them about university – to study hard, earn a degree, and secure a good job – are crumbling in real time.

Data from job search engine Adzuna suggests that job postings for entry-level positions requiring a degree have dropped by almost two-thirds in the UK since 2022, roughly double the decline for all entry-level roles (as quoted in the Financial Times). The same research found that entry-level hiring is down 43% in the US, and 67% in the UK, since ChatGPT launched in November 2022.

The study results tally with other sources. In June, for instance, UK graduate job openings had plunged by 75% in banking and finance, 65% in software development, and 54% in accounting compared to the same month in 2019, according to Indeed (also in the FT piece).

Meanwhile, students graduating from universities in England in 2025 have an average student loan debt of approximately £53,000, with total outstanding loans reaching £267 billion. Frankly, is university worth it today?

I was fortunate enough to be part of the last cohort to benefit from minimal tuition fees in Scotland before they were introduced for all students in the 2005-06 academic year. Further, when I studied for my postgraduate degree in magazine journalism at Cardiff University’s JOMEC, we were (verbally and anecdotally) guaranteed jobs within a year; and, as far as I know, all my peers achieved that. Such certainty feels alien now, even quaint.

But where does this trend lead? What happens when an entire generation faces systematic exclusion from entry-level professional roles?

A Stanford University study tracking millions of workers through ADP payroll data revealed something rather more troubling: young workers aged 22-25 in “highly AI-exposed” jobs experienced a 13% employment decline since OpenAI released its LLM just under three years ago, while older colleagues in identical roles saw steady or rising employment.

Arguably, we’re witnessing the first generation where machines are genuinely better at doing what universities taught them than they are.

Erik Brynjolfsson, one of the Stanford paper’s co-authors (and a professor whom I interviewed a couple of months after ChatGPT was unveiled – even back then he was warning about the likely problems with AI acceleration and jobs), put it bluntly: “There’s a clear, evident change when you specifically look at young workers who are highly exposed to AI.” 

The research controlled for obvious alternatives — COVID effects, tech sector retrenchment, interest rate impacts — and the correlation held. Software developers and customer service agents under the age of 25 saw dramatic employment drops. Home health aides, whose work remains both physically and emotionally demanding, saw employment rise.

The distinction matters. AI isn’t replacing workers randomly; it’s targeting specific types of work. The Stanford team found that occupations where AI usage is more “automative” (completely replacing human tasks) showed substantial employment declines for young people. In contrast, “augmentative” uses (where humans collaborate with AI) showed no such pattern.

Anthropic CEO Dario Amodei warned in May that half of “administrative, managerial and tech jobs for people under 30” could vanish within five years. He’s probably being conservative.

However, what’s especially troubling about this shift is that a new MIT research, The GenAI Divide: State of AI in Business 2025, suggests that many AI deployment programmes are failing to deliver expected returns on investment, with companies struggling to show meaningful productivity gains from their technology investments. Specifically, 95% of generative AI pilots at companies are failing, and delivering next to no return on investment. 

Despite this, organisations continue redirecting budgets from graduate recruitment to AI initiatives. Half of UK companies now want to redirect money from staff to AI, according to Boston Consulting Group research.

This creates a dangerous paradox: companies are cutting the graduate pipeline that develops future leaders while betting on technologies that haven’t yet proven their worth. What happens to organisational capability in five years, when the cohort of junior professionals who should be stepping into senior roles either doesn’t exist or lacks meaningful experience?

This connects directly to Tom Goodwin’s observation. The combined forces of consulting culture, efficiency obsessions, and metric-driven management have reshaped roles once built on creativity, empathy, relationships, imagination, and judgment into “checklists, templates, and dashboards”. We stripped away the human qualities that made work interesting and valuable, creating roles “perfectly designed for machines and less worth doing for humans”.

Consider those entry-level consulting, law, and finance roles that have vanished. They were built around tasks like document review, basic data analysis, research synthesis, and report formatting – precisely the narrow, repetitive work at which large language models excel.

Yet amid this disruption, there are signals of adaptation and hope. In a recent conversation, Joanne Werker, CEO of people engagement company Boostworks, shared statistics and personal insights that capture both the challenges and the opportunities facing this generation. Her organisation’s latest research, published in late July, indicates that 57% of Gen Z and 71% of Millennials are exploring side hustles, not as passion projects but to survive financially. On a more positive reading, this could prove a boon for innovation: necessity, after all, is the mother of invention.

Also noteworthy is that nearly one in five Gen Zers is already working a second job. Joanne’s daughters, meanwhile, illustrate a different relationship with AI entirely. One, aged 30, works in music; the other, 24, is in fashion – both creative fields where AI might be expected to pose a threat. Yet they don’t fear the technology; they embrace it strategically. The younger daughter used ChatGPT to redesign the family’s living room, inputting photos and receiving detailed interior design suggestions that impressed even Joanne’s initially sceptical husband. As Joanne says, both daughters use AI tools not to replace their creativity, but to “be smarter and faster and better”, for work and elsewhere. “The younger generation’s response to AI is just, ‘OK, this is another thing to make life easier.’”

This strategic approach extends to education. Nikki Alvey, the (brilliant) PR pro who facilitated my conversation with Jo, has children at the right age to observe this transition. Her son, who just completed his A-levels, used AI extensively for revision, creating quizzes, podcasts, and even funny videos from his notes. As Nikki pointed out: “I wish I’d had that when I was doing GCSEs, A-levels, and my degree; we barely had the internet.”

Meanwhile, her daughter, who is studying criminology at the University of Nottingham, operates under different constraints. Her university maintains a blanket ban on AI use for coursework, though she uses it expertly for job applications and advocacy roles. This institutional inconsistency reflects higher education’s struggle to adapt to technological reality: some universities Nikki’s son looked at were actively discussing AI integration and proper citation methods, while others maintained outright bans.

Universities that nail their AI policy will recognise that future graduates need capabilities that complement, rather than compete with, AI. This means teaching students to think critically about information sources.

As I described during a recent conversation with Gee Foottit on St James’s Place Financial Adviser Academy’s ‘The Switch’ podcast: “Think of ChatGPT as an army of very confident interns. Don’t trust their word. They may hallucinate. Always verify your sources and remain curious. Having that ‘truth literacy’ will be crucial in the coming years.”

Related to this, LinkedIn’s Chief Economic Opportunity Officer Aneesh Raman describes this as the shift from a knowledge economy to the “relationship economy”, where distinctly human capabilities matter most.

The Stanford research offers clues about what this means. In occupations where AI augments rather than automates human work – such as complex problem-solving, strategic thinking, emotional intelligence, and creative synthesis – young workers aren’t being displaced.

Success won’t come from competing with machines on their terms, but from doubling down on capabilities that remain uniquely human. 

On The Switch podcast episode, which will be released soon (I’ll share a link when it’s out), I stressed that the future belongs to those – young and old – who can master what I call the six Cs, the skills to dial up: 

  • Collaboration
  • Communication
  • Compassion
  • Courage
  • Creativity
  • Curiosity

These are no longer soft skills relegated to HR workshops but survival capabilities for navigating technological disruption.

There’s a deeper threat lurking, though. The issue isn’t that the younger generations are AI-literate while their elders struggle with new technology, but understanding how to maintain our humanity while leveraging these tools. 

No doubt nurturing the six Cs will help. But a week or so ago, Microsoft’s AI chief, Mustafa Suleyman, described something rather more unsettling: “AI psychosis”, a phenomenon where vulnerable individuals develop delusions after intensive interactions with chatbots. In a series of posts on X, he wrote that “seemingly conscious AI” tools are keeping him “awake at night” because of their societal impact, even though the technology isn’t conscious by any human definition.

“There’s zero evidence of AI consciousness today,” Suleyman wrote. “But if people just perceive it as conscious, they will believe that perception as reality.”

The bitter irony is that the capabilities we now desperately need – namely, creativity, empathy, relationships, imagination, and judgement – are exactly what we stripped out of entry-level work to make it “efficient”. Now we need them back, but we’ve forgotten how to cultivate them at scale.

The generation entering university in September may lack traditional job security, but they possess something their predecessors didn’t: the ability to direct AI while (hopefully) remaining irreplaceably human. And that’s not a consolation prize. It’s a superpower.

The present

On stage with Tomás O’Leary at Origina Week

I tap these words on the morning of August 29 from seat 24F on Aer Lingus EI158 from Dublin to London Heathrow, flying high after a successful 48 hours on the Emerald Isle. A software client, Origina, flew me in; I’ve been assisting its CEO, Tomás O’Leary, with thought leadership and the company’s marketing messaging for over a year (his words of wisdom on pointless software upgrades and needless infrastructure strain featured in my July newsletter). 

Having struck up a bond – not least through reminiscing about our rugby union days (we were both No. 8s, although admittedly I’m a couple of inches shorter than him) – Tomás invited me to participate in Origina Week. This five-day extravaganza mixes serious business development with serious fun and learning.

Tomás certainly made me work for my barbecued supper at the excellent Killashee Spa Hotel: on Thursday, I was on stage, moderating three sessions over three consecutive hours. The final session – the last of the main programme – involved Tomás and me having a fireless “fireside” chat about technology trends as I see them, and his reflections on their relevance to the software space.

I was grateful to make some superb connections, be welcomed deeper into the bosom of the Origina family, and hear some illuminating presentations, especially behavioural psychologist Owen Fitzpatrick’s session on the art of persuasion. 

Watching Owen work was a masterclass in human communication that no AI could replicate. For 90 minutes, around 250 people from diverse countries and cultures were fully engaged, leaning forward, laughing, and actively participating. This was neural coupling in action: the phenomenon where human brains synchronise during meaningful interaction. No video call, no AI assistant, no digital platform could have generated that energy.

This is what Tomás understood when he invested in bringing his global team together in the Irish capital. While many executives slash training budgets and rely on digital-only interactions, he recognises that some learning only happens face-to-face. That’s increasingly rare leadership in an era where companies are cutting human development costs while pouring billions into AI infrastructure.

Owen’s session focused on classical rhetoric: the ancient art of persuasion, which has become increasingly relevant in our digital age. He walked us through the four elements: ethos (credibility), logos (logic), pathos (emotion), and demos (understanding your audience). These are precisely the human skills we need as AI increasingly handles our analytical tasks.

It was a timely keynote. Those who have completed their A-levels this summer are entering a world where the ability to persuade, connect, and influence others becomes more valuable than the ability to process information.

Yet we’re simultaneously experiencing what recent research from O.C. Tanner calls a recognition crisis. Its State of Employee Recognition Report 2025 found that UK employees expect in-person interactions with recognition programmes to increase by 100% over the next few years, from 37% to 74%. These include handwritten notes, thank you cards, and award presentations. People are craving authentic human interaction precisely because it’s becoming scarce.

Recent data from Bupa reveals that just under a quarter (24%) of people feel lonely or socially isolated due to work circumstances, rising to 38% among 16-24-year-olds. Over a fifth of young workers (21%) say their workplace provides no mental health support, with 45% considering moves to roles offering more social interaction.

Also, new research from Twilio reveals that more than one-third of UK workers (36%) demand formally scheduled “digital silence” from their workplace. Samantha Richardson, Twilio’s Director of Executive Engagement, observed: “Technology has transformed how we work, connect, and collaborate – largely for the better. But as digital tools become increasingly embedded in everyday routines, digital downtime may be the answer to combating the ‘always-on’ environment that’s impeding productivity and damaging workplace culture.”

This connects to something that emerged from Owen’s session. He described how the most powerful communication occurs through contrast, repetition, and emotional resonance – techniques that require human judgment, cultural understanding, and real-time adaptation. These are precisely the skills that remain irreplaceable in an AI world.

Consider how Nikki’s son used AI for revision. Rather than passively consuming information or getting out the highlighter pens and mapping information out on a big, blank piece of paper (as I did while studying, and still do sometimes), he actively prompted the technology to create quizzes, podcasts, and videos tailored to his learning style. This was not AI replacing human creativity, but human creativity directing AI capabilities.

The challenge for today’s graduates isn’t avoiding AI, but learning to direct it purposefully. This requires exactly the kind of critical thinking and creative problem-solving that traditional education often neglects in favour of information retention and standardised testing.

What’s particularly striking about the current moment is how it echoes patterns I’ve observed over the past year of writing this newsletter. In June 2024’s edition, I explored how AI companions were already changing human relationships. I’ve also written extensively about the “anti-social century” and our retreat from real-world connection. Now we’re seeing how these trends converge in the graduate employment crisis: technology is doing more than just transforming what we do. It is also changing how we relate to each other in the process. 

On this subject, I’m pleased to share the first of a new monthly podcast series I’ve begun with long-term client Clarion Events, which organises the Digital Transformation Expo (DTX) events in London and Manchester. The opening episode of DTX Unplugged features Nick Hodder, Director of Digital Transformation and Engagement at the Imperial War Museums (IWM), highlighting why meaningful business transformation begins with people, not technology.

The answer, whether in a hotel conference room in Dublin or a corporate office in Manchester, remains the same: in a world of AI, our ability to connect authentically with other humans has become our competitive edge.

The past

Twenty-four years ago in September, I sat in my first tutorial at the University of St Andrews – nine students around a table, including Prince William and seven young women. That tutorial room held a particular energy. We were there to think, to question, to argue – with the texts and with each other – about ideas that mattered. Will, who played for my Sunday League football team, was just another student. 

The economic backdrop was fundamentally different. Graduate jobs were plentiful, social media was (thankfully) nascent – Facebook was three years away, and only mildly registered in my final year, 2004-05 – and so partying with peers was authentic, and free from fears of being digitally damned. Moreover, the assumption that a degree led to career success felt unshakeable because it was demonstrably true.

The social contract was clearer, too. Society invested in higher education as a public good that would generate returns through increased productivity, innovation, and civic engagement. Students could focus on learning rather than debt management because the broader community bore the financial risk in exchange for shared benefits.

My graduation day at the University of St Andrews in 2005

Looking back, what strikes me most is the simplicity of the intellectual environment. We read physical books, researched in libraries using card catalogues, and had no digital devices in the lecture halls or tutorial rooms. (And the computers we had in our rooms took up a colossal amount of space.) Our critical thinking developed through friction: the effort required to find information, synthesise arguments from multiple sources, and express ideas clearly without technological assistance.

Knowledge felt both scarce and valuable precisely because it was hard to access. You couldn’t Google historical facts during seminars. If you hadn’t done the reading, everyone knew. If your argument was poorly constructed, there was nowhere to hide. The constraints forced genuine intellectual development.

The human connections formed during those four years proved more valuable than any specific subject knowledge. Late-night debates in residence halls, study groups grappling with challenging texts, and casual conversations between lectures – these experiences shaped how we thought and who we became.

We could explore medieval history, philosophical arguments, or literary criticism without worrying whether these subjects would directly translate to career advantages. The assumption was that broad intellectual development would prove valuable, even if connections weren’t immediately obvious. (Again, I was fortunate to be in the last cohort of subsidised university education.)

That faith in indirect utility seems almost lost now. Today’s students, facing massive debt burdens, quite reasonably demand clear pathways from educational investment to career outcomes. The luxury of intellectual exploration for its own sake becomes harder to justify when each module costs hundreds – if not thousands – of pounds.

Some elements remain irreplaceable. The structured opportunity to develop critical thinking skills, build relationships with peers and mentors, and discover intellectual passions in supportive environments still offers unique value. 

Indeed, these capabilities matter more now than they did a quarter of a century ago. When information is abundant but truth is contested, when AI can generate convincing arguments on any topic, and when economic structures are shifting rapidly, the ability to think independently becomes genuinely valuable rather than merely prestigious.

My 10-year-old son will reach university age by 2033. By then, higher education will have undergone another transformation. The economics might involve shorter programmes, industry partnerships, apprenticeship alternatives, or entirely new models that bypass traditional degrees. But the fundamental question remains unchanged: how do we prepare young minds to think independently, act ethically, and contribute meaningfully to society?

The answer may require reimagining university education entirely. Perhaps residential experiences focused on capability development rather than content transmission. Maybe stronger connections between academic learning and real-world problem-solving. Possibly more personalised pathways that recognise different learning styles and career ambitions. What won’t change is the need for structured environments where young people can develop their humanity while mastering their chosen fields of expertise. 

The students who opened their A-level results last month deserve better. They deserve educational opportunities that develop their capabilities without crushing them with debt. They deserve career pathways that harness their human potential rather than pitting them against machines on machine terms. Most importantly, they deserve honest conversations about what higher education can and cannot provide in an age of technological disruption.

Those conversations should start with acknowledging what that tutorial room at St Andrews represented: human minds engaging directly with complex ideas, developing intellectual courage through practice, and building relationships that lasted decades (although my contact with Prince Will mysteriously stopped after I began working at the Daily Mail Online!). 

These experiences – whether at university or school, or elsewhere – remain as valuable as ever. The challenge is whether we can create sustainable ways to provide them without bankrupting the people who need them most.

Statistics of the month

🎓 A-level computing drops
Computing A-level entries fell by 2.8% in the UK despite years of growth, though female participation rose 3.5% to reach 18.6% of students taking the subject. Meanwhile, maths remains the most popular subject, with 112,138 entries, but girls represent just 37.3% of the cohort. 🔗

👩‍💼 AI skills gender divide widens
Women account for just 29% of those reporting AI skills, compared with 71% for men, while nearly 70% of UK jobs face high AI exposure. Under half of workers have been offered AI-related upskilling opportunities. 🔗

💰 Office return costs surge
UK employees spend an average £25 daily on commuting and expenses when working from the office, potentially costing nearly £3,500 annually in commuting alone if expected to be in the office for five days a week. 🔗

🏢 Summer hiring advantage emerges
Some 39% of UK businesses have struggled to hire in the last 12 months, with competition and slow hiring cited as key barriers. 🔗

🌍 Extreme poverty redefined
The World Bank raised its International Poverty Line from $2.15 to $3 per day, adding 125 million people to extreme poverty statistics. However, global extreme poverty has still fallen from 43% in 1990 to 10% today. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 18)

TL;DR: June’s edition explores truth-telling in an age of AI-generated misinformation, the flood of low-quality content threatening authentic expertise, and why human storytelling becomes our most valuable asset when everything else can be faked – plus some highlights from South by Southwest London.

Image created on Midjourney

The future

“When something is moving a million times every 10 years, there’s only one way that you can survive it. You’ve got to get on that spaceship. Once you get on that spaceship, you’re travelling at the same speed. When you’re on the rocket ship, all of a sudden, everything else slows down.”

Nvidia CEO Jensen Huang’s words, delivered at London Tech Week earlier this month alongside Prime Minister Keir Starmer, capture the current state of artificial intelligence. We are being propelled by technological change at an unprecedented speed, orders of magnitude quicker than Moore’s law, and it feels alien and frightening.

Before setting foot on the rocket ship, though, the first barrier many must overcome is trust in AI. The technology is advancing so rapidly that the potential for missed or hidden consequences is alarming enough to prompt a hard brake, or a refusal to climb aboard at all.

Others understand the threats but focus on the opportunities promised by AI and are jostling for position, bracing for warp speed. Nothing will stop them, but at what cost to society?

For example, we’re currently witnessing two distinct trajectories for the future of online content and, to some extent, services. One leads towards an internet flooded with synthetic mediocrity and, worse, untrustworthy information; the other towards authentic human expertise becoming our most valuable currency.

Because the truth crisis has already landed, and AI is taking over, attacking the veracity of, well, everything we read and much of what we see on a screen. 

In May, NewsGuard, which provides data to help identify reliable information online, identified 1,271 AI-generated news and information sites across 16 languages, operating with little to no human oversight, up from 750 last year.

It’s easy not to see this as you pull on your astronaut helmet and space gloves, but this is an insidious, industrial-scale production of mediocrity. Generative AI, fed on historical data, produces content that reflects the average of what has been published before, offering no new insights, lived experiences, or authentic perspectives. The result is an online world increasingly polluted with bland, sourceless, soulless and often inaccurate information. The slop is only going to get sloppier, too. What does that mean for truth and, yes, trust?

The 2025 State of AI in Marketing Report, published by HubSpot last week, reveals that 84% of UK marketers now use AI tools daily in their roles, compared to a global average of 66%.

Media companies are at risk of hosting, citing, and copying this marketing content. Some are actively creating it while swinging the axe liberally, culling journalists, and hacking away at integrity. 

The latest Private Eye reported how Piers North, CEO of Reach – struggling publisher of the Mirror, Express, Liverpool Echo, Manchester Evening News, and countless other titles – has a “cunning plan: to hand it all over to the robots to sort out”. 

According to the magazine, North told staff: “It feels like we’re on the cusp of another digital revolution, and obviously that can be pretty daunting, but here I think we’ve got such an opportunity to do more of the stuff we love and are brilliant at. So with that in mind, you won’t be surprised to hear that embracing AI is going to feature heavily in my strategic priorities.”

The incentive structure is clear: publish as much as possible and as quickly as possible to attract traffic. Quality, alas, becomes secondary to volume.

But this crisis creates opportunity. Real expertise becomes more valuable precisely because it’s becoming rarer. The brands and leaders who properly emphasise authentic human knowledge will enjoy a competitive advantage over rivals drowning in algorithmic sameness, now and in the future.

What does this mean for our children? They’re growing up in a world where they’ll need to become master detectives of truth. The skills we took for granted – being able to distinguish reliable sources from unreliable ones and recognising authentic expertise from synthetic mimicry – are becoming essential survival tools. 

They’ll need to develop what we might call “truth literacy”: the ability to trace sources, verify claims, and distinguish between content created by humans with lived experience and content generated by algorithms with training data.

This detective work extends beyond text to every form of media. Deepfakes are becoming indistinguishable from reality. Voice cloning requires just seconds of audio. Even video evidence can no longer be trusted without verification.

The implications for work – and, well, life – are profound. For instance, with AI agents being the latest business buzzword, Khozema Shipchandler, CEO of global cloud communications company Twilio, shared with me how their technology is enabling what he calls “hyper-personalisation at scale”. But the key discovery isn’t the technology itself; it’s how human expertise guides its application.

“We’re not trying to replace human agents,” Khozema told me. “We’re creating experiences where virtual agents handle lower complexity interactions but can escalate seamlessly to humans when genuine expertise is needed.”

He shared a healthcare example. Cedar Health, based in the United States, found that 97% of patient inquiries were related to a lack of understanding of bills. However, patients initially preferred engaging with AI agents because they felt less embarrassed about gaps in their medical terminology. The AI could process complex insurance data instantly, but when nuanced problem-solving was required, human experts stepped in with full context.

In this case, man and machine are working together brilliantly. As Shipchandler put it: “The consumer gets an experience where they’re being listened to all the way through, they’re getting accuracy because everything gets recapped, and they’re getting promotional offers that aren’t annoying because they reference things they’ve actually done before.”

The crucial point, though, is that none of this works without human oversight, empathy, and strategic thinking. The AI handles the data processing; humans provide the wisdom.

Jesper With-Fogstrup, Group CEO of Moneypenny, a telephone answering service, echoed this theme from a different angle. His global company has been testing AI voice agents for a few months, handling live calls across various industries. The early feedback has been mixed, but revealing.

“Some people expect it’s going to be exactly like talking to a human,” With-Fogstrup told me in a cafe down the road from Olympia, the venue for London Tech Week. “It just isn’t. But we’re shipping updates to these agents every day, several times a day. They’re becoming better incredibly quickly.”

What’s fascinating is how customers reveal more of themselves to AI agents compared to human agents. “There’s something about being able to have a conversation for a long time,” Jesper observed. “The models are very patient. Sometimes that’s what’s required.”

But again, the sweet spot isn’t AI replacing humans. It’s AI handling routine complexity so humans can focus on what they do uniquely well. As Jesper explained: “If it escalates into one of our Moneypenny personal assistants, they get a summary, they can pick up the conversation, they understand where it got stuck, and they can resolve the issue.”
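
To picture that handoff in code, here is a minimal, hypothetical sketch of the escalate-with-context pattern both Shipchandler and With-Fogstrup describe. The class, heuristic, and threshold below are illustrative toys, not anything from Twilio’s or Moneypenny’s real systems.

```python
# A toy sketch of the escalate-with-context pattern; every name and
# heuristic here is hypothetical, not any vendor's real API.
from dataclasses import dataclass, field

@dataclass
class Conversation:
    customer: str
    turns: list[str] = field(default_factory=list)

def complexity(conv: Conversation) -> float:
    """Toy heuristic: longer conversations count as more complex."""
    return min(1.0, len(conv.turns) / 10)

def summarise(conv: Conversation) -> str:
    """Stand-in for the LLM-generated recap handed to the human assistant."""
    return f"{conv.customer}: {len(conv.turns)} turns, last said {conv.turns[-1]!r}"

def route(conv: Conversation, threshold: float = 0.6) -> str:
    """Let the virtual agent handle routine queries; escalate with context."""
    if complexity(conv) < threshold:
        return "virtual agent continues"
    return f"escalate to human, briefed with: {summarise(conv)}"

# A long-running billing query crosses the threshold and is handed over.
conv = Conversation("patient", ["I still don't understand this bill."] * 7)
print(route(conv))
```

The design choice worth noticing is that the human never starts from scratch: the summary travels with the escalation, which is exactly why the customer feels “listened to all the way through”.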

The future of work, then, isn’t about choosing between human and artificial intelligence. It’s about designing systems where each amplifies the other’s strengths while maintaining the ability to distinguish between them.

Hilary Cottam’s research for her new book, The Work We Need, arrives at the same conclusion from a different direction. After interviewing thousands of workers, from gravediggers to the Microsoft CEO, she identified six principles for revolutionising work: 

  • Securing the basics
  • Working with meaning
  • Tending to what sustains us
  • Rethinking our use of time
  • Enabling play
  • Organising in place

Work, Cottam argues, is “a sort of chrysalis in which we figure out who we are and what we’re doing here, and what we should be doing to be useful”. That existential purpose can’t be automated away.

The young female welder Cottam profiled, working on nuclear submarines for BAE in Barrow-in-Furness, exemplifies this. She and her colleagues are “very, very convinced that their work is meaningful, partly because they’re highly skilled. And what’s very unusual in the modern workplace is that a submarine takes seven years to build, and most of the teamwork on that submarine is end-to-end.”

This is the future we should be building towards: AI handling the routine complexity, humans focusing on meaning and purpose, and the irreplaceable work of creating something that lasts. But we must teach our children how to distinguish between authentic human expertise and sophisticated synthetic imitation. Not easy.

Meanwhile, the companies already embracing this approach are seeing remarkable results. They’re not asking whether AI will replace humans, but how human expertise can be amplified by AI to create better outcomes for everyone while maintaining transparency about when and how AI is being used.

As Huang noted in his conversation with the Prime Minister: “AI is the great equaliser. The new programming language is called ‘human’. Anybody can learn how to program in AI.”

But that democratisation only works if we maintain the distinctly human capabilities that give that programming direction, purpose, and wisdom. The rocket ship is accelerating. Will we use that speed to amplify human potential or replace it entirely?

The present

At the inaugural South by Southwest London, held in Shoreditch, East London, at the beginning of June, I witnessed fascinating tensions around truth-telling that illuminate our current moment. The festival brought together storytellers, technologists, and pioneers, each grappling with how authentic voices survive in an increasingly synthetic world. Here are some of my highlights.

Image created on my iPhone

Tina Brown, former editor-in-chief of Tatler, Vanity Fair, The New Yorker, and The Daily Beast, reflecting on journalism’s current challenges, offered a deceptively simple observation: “To be a good writer, you have to notice things.” In our AI-saturated world, this human ability to notice becomes invaluable. While algorithms identify patterns in data, humans notice what’s missing, what doesn’t fit, and what feels wrong.

Brown’s observation carries particular weight, given her experience navigating media transformation over the past five decades. She has watched industries collapse and rebuild, seen power structures shift, and observed how authentic voices either adapt or fade away.

“Legacy media itself is reinventing itself all over the place,” she said. “They’re all trying to do things differently. But what you really miss in these smaller platforms is institutional backing. You need good lawyers, institutional backing for serious journalism.”

This tension between democratised content creation and institutional accountability sits at the heart of our current crisis. Anyone can publish anything, anywhere, anytime. But who ensures accuracy? Who takes responsibility when misinformation spreads? Who has the resources to fact-check, verify sources, and maintain standards?

This is a cultural challenge, as well as a technical one. When US President Donald Trump can shout down critics with “fake news” and seemingly run a corrupt government – the memecoin $TRUMP and his involvement with World Liberty Financial have reportedly raised over half a billion dollars, and there was the $400m (£303m) gift of a new official private jet from Qatar, among countless other questionable dealings – what does that mean for the rest of us?

Brown said: “The incredible thing is that the US President … doesn’t care how bad it looks. The first term was like, well, the president shouldn’t be making money out of himself. All that stuff is out of the window.”

When truth-telling itself becomes politically suspect, when transparency is viewed as a weakness rather than a strength, the work of authentic communication becomes both more difficult and more essential.

This dynamic played out dramatically in the spy world, as Gordon Corera, the BBC’s Security Correspondent, and former CIA analyst David McCloskey revealed during a live recording of their podcast about intelligence operations, The Rest is Classified. The most chilling story they shared wasn’t about sophisticated surveillance or cutting-edge technology. It was about children discovering their parents’ true identities only when stepping off a plane in Moscow, greeted by Vladimir Putin himself.

Imagine learning that everything you believed about your family, your identity, and your entire childhood was constructed fiction. These children of deep-cover Russian operatives lived authentic lives built on complete deception. The psychological impact, as McCloskey noted, requires “all kinds of exotic therapies”.

Just imagine. Those children will have gone past the anger about being lied to and crashed into devastation, having had their sense of reality torpedoed. When the foundation of truth crumbles, it’s not simply the facts that disappear: it’s the ability to trust anything, anywhere, ever again.

This feeling of groundlessness is what our children risk experiencing if we don’t teach them how to navigate an increasingly synthetic information environment. 

The difference is that while those Russian operatives’ children experienced one devastating revelation, our children face thousands of micro-deceptions daily: each AI-generated article, each deepfake video, each synthetic voice clip eroding their ability to distinguish real from artificial.

Zelda Perkins, speaking about whistleblowing at SXSW London, captured something essential about the courage required to tell brutal truths. When she broke her NDA in 2017 to expose Harvey Weinstein’s behaviour, helping to detonate the #MeToo movement, she was trying to dismantle a system that enables silence, not just bring down a powerful man. “The problem wasn’t really Weinstein,” she emphasised. “The problem is the system. The problem is these mechanisms that protect those in power.”

Her most powerful reflection was that she has no regrets about speaking out and telling the truth despite the unimaginable impact on her career and beyond. “My life has been completely ruined by speaking out,” she said. “But I’m honestly not sure I’ve ever been more fulfilled. I’ve never grown more, I’ve never learned more, I’ve never met more people with integrity.”

I’m reminded of a quote from Jesus in the bible (John 8:32 – and, yes, I had to look that up, of course): “And ye shall know the truth and the truth shall make you free.”

Truth can set you free, but it may come at a cost. This paradox captures something essential about truth-telling in our current moment. Individual courage matters, but systemic change requires mass action. As Perkins noted: “Collective voice is the most important thing for us right now.”

Elsewhere at SXSW London, the brilliantly named Mo Brings Plenty – an Oglala Lakota television, film, and stage actor (Mo in Yellowstone) – spoke with passion about Indigenous perspectives. “In our culture, we talk about the next seven generations,” he said. “What are we going to pass on to them? What do we leave behind?”

This long-term thinking feels revolutionary in our culture of instant gratification. Social media rewards immediate engagement. AI systems optimise for next-click prediction. Political cycles focus on next-election victories.

But authentic leaders think in generations, not quarters. They build systems that outlast their own tenure. They tell truths that may be uncomfortable now but are necessary for future flourishing.

The creative community at SXSW London embodied this thinking. Whether discussing children’s environmental education or music’s power to preserve cultural memory, artists consistently framed their work in terms of legacy and impact beyond immediate success.

As Dr Deepak Chopra noted in the “Love the Earth” session featuring Mo Brings Plenty: “Protecting our planet is something we can all do joyfully with imagination and compassion.”

This joyful approach to brutal truths offers a template for navigating our current information crisis. We don’t need to choose between honesty and hope. We can tell hard truths while building better systems and expose problems while creating solutions.

The key is understanding that truth-telling isn’t about punishment or blame. It’s about clearing space for authentic progress that will precipitate the flourishing of humanity, not its dulling.

The (recent) past

Three weeks ago, I took a 12-minute Lime bike (don’t worry, I have a clever folding helmet and never run red lights) from my office in South East London to Goldsmiths, University of London. I spoke to a room full of current students, recent graduates, and business leaders, delivering a keynote titled: “AI for Business Success: Fostering Human Connection in the Digital Age.” The irony wasn’t lost on me: here I was, using my human capabilities to argue for the irreplaceable value of human connection in an age of AI.

Image taken by my talented friend Samer Moukarzel

The presentation followed a pattern I’ve been perfecting over the past year. I begin with a simple human interaction: asking audience members to turn to each other and share their favourite day of the week and their favourite time of that day. (Tuesday at 8.25pm, just before five-a-side footie, for me.) It triggers a minute or two of genuine curiosity, slight awkwardness, perhaps a shared laugh or unexpected discovery.

That moment captures everything I’m trying to communicate. While everyone obsesses over AI’s technical capabilities, we’re forgetting that humans crave connection, meaning, and the beautiful unpredictability of authentic interaction.

A week or so later, for Business and IP Centre (BIPC) Lewisham, I delivered another presentation: “The Power of Human-Led Storytelling in an AI World.” This one was delivered over Zoom, and the theme remained consistent, but the context shifted. These were local business leaders, many of whom were struggling with the same questions. How do we stay relevant? How do we compete with automated content? How do we maintain authenticity in an increasingly synthetic world?

Both presentations built on themes I’ve been developing throughout this year of Go Flux Yourself. The CHUI framework, the concept of being “kind explorers”, the recognition that we’re living through “the anti-social century”, where technology promises connection but often delivers isolation.

But there’s something I’ve learned from stepping onto stages and speaking directly to people that no amount of writing can teach: the power of presence. When you’re standing in front of an audience, there’s no algorithm mediating the exchange. No filter softening hard-to-hear truths, and no AI assistant smoothing rough edges.

You succeed or fail based on your ability to read the room, adapt in real time, and create a genuine connection. These are irreplaceable human skills that become more valuable as everything else becomes automated.

The historical parallel keeps returning to me. On June 23, I delivered the BIPC presentation on what would have been Alan Turing’s 113th birthday. The brilliant mathematician whose work gave rise to modern computing and AI would probably be fascinated – and perhaps concerned – by what we’ve done with his legacy.

I shared the myth that Apple’s bitten logo was supposedly Steve Jobs’ tribute to Turing, who tragically died after taking a bite from a cyanide-laced apple. It’s compelling and poetic, connecting our digital age to its origins. There’s just one problem: it’s entirely false.

Rob Janoff, who designed the logo, has repeatedly denied any homage to Turing. Apple itself has stated there’s no link. The bite was added so people wouldn’t mistake the apple for a cherry. Sometimes, the mundane truth is just mundane.

But here’s why I started with this myth: compelling narratives seem more important than accurate ones, and everything is starting to sound exactly the same because algorithms are optimised for engagement over truth.

As I’ve refined these talks over the past months, I’ve discovered that as our environment becomes increasingly artificial, the desire for authentic interaction grows stronger. The more content gets automated, the more valuable genuine expertise becomes. The more relationships are mediated by algorithms, the more precious unfiltered, messy human connections feel.

That’s the insight I’ll carry forward into the second half of 2025. Not that we should resist technological change, but that we should use it to amplify our most human capabilities while teaching our children how to be master detectives of truth in an age of synthetic everything, and encouraging them to experiment, explore, and love.

Statistics of the month

💼 Executive AI race
Almost two-thirds (65%) of UK and Irish CEOs are actively adopting AI agents, with 58% pushing their organisations to adopt Generative AI faster than people are comfortable with. Two-thirds confirm they’ll take more risks than the competition to stay competitive. 🔗

📧 The infinite workday
Microsoft’s 2025 Annual Work Trend Index Report reveals employees are caught in constant churn, with 40% triaging emails by 6am, receiving 117 emails and 153 chats daily. Evening meetings after 8pm are up 16% year-over-year, and weekend work continues rising. 🔗

🤖 AI trust paradox
While IBM replaced 94% of HR tasks with AI, many executives have serious reservations. Half (51%) don’t trust AI fully with financial decision-making, and 22% worry about data quality feeding AI models. 🔗

📉 Gender gap persists
The World Economic Forum’s 2025 Global Gender Gap Report shows 68.8% of the gap closed, yet full parity remains 123 years away. Despite gains in health and education, economic and political gaps persist. 🔗

⚠️ Unemployment warning
Anthropic CEO Dario Amodei predicts AI could eliminate half of all entry-level white-collar jobs and send unemployment rocketing to 20% within five years. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 17)

TL;DR: May’s Go Flux Yourself explores how, in a world where intelligence is becoming cheap, easy, and infinite, the concept of childhood and adolescence is being rewritten. Are AI tools empowering young people or quietly corroding their minds?

Image created on Midjourney

The future

“If I were the devil, I wouldn’t come with violence. I’d come with convenience. I’d give them everything they wanted, and strip them of everything they needed. I’d make them feel connected, but completely alone, endlessly entertained, but quietly empty.”

The above is part of an answer by ChatGPT to a question posed recently by lifestyle influencer Alexia Ferrari, aka themillionairemaker on social media channels. This was the puzzler: “If you were the devil, how would you destroy the young minds of the next generation without them even knowing it?”

And its resonance – evidenced by millions of views and thousands of comments – suggests that she struck a nerve. The AI’s response was disturbingly eloquent.

Following on from the opening quotation, above, it continued: “I’d blur the lines between truth and opinion, until nothing meant anything. I’d teach them to chase dopamine, not purpose, to worship self, but love who they are. I wouldn’t destroy them with war, I’d destroy them with apathy. I’d keep them distracted, numb, scrolling, always scrolling.”

So far, so horrific. There was more. “I’d dismantle the family by redefining it. Make fathers optional, mothers exhausted, and children confused. I’d hand them a phone before they could speak in full sentences. And I’d watch their minds rot slowly, sweetly, silently. And the most brilliant part? They’d never know it was me. They’d call it freedom.” 

Ultimately, it described a world that feels uncomfortably like our own. Children are not merely online; they are being remade by the online world. In response, and perhaps seeking some philosophical distance, I asked Claude, Anthropic’s AI, the same question. Its answer was more measured, but no less damning.

“Let me suggest a rather chilling thought experiment,” Claude began, with a silent cackle. “If I were tasked with undermining young minds whilst keeping them blissfully unaware, I’d focus on their digital habits – the very tools they trust most. First, I’d flood their feeds with endless, mindless content. Not obviously harmful stuff, mind you, but material so perfectly tailored to their dopamine receptors that genuine curiosity would wither away. Why wonder about the world when the algorithm already knows what you want? 

“Then I’d weaponise comparison. Social media would become a hall of mirrors where everyone else appears happier, more successful, better looking. The constant measuring up would erode self-worth drip by drip. I’d fragment their attention spans. Tasks requiring sustained focus would feel increasingly impossible as their brains rewired for quick hits of stimulation. Reading a book? Ancient history. Deep thinking? Too much effort. 

“Perhaps most insidiously, I’d create echo chambers so convincing that opposing viewpoints would seem not just wrong, but incomprehensible. Critical thinking would rust from disuse. The beauty of this approach? They’d defend their digital chains. The very devices stealing their potential would feel like lifelines. They’d scroll willingly into intellectual darkness, smartphone in hand, convinced they’re more connected than ever. Frightening, really, how closely this mirrors aspects of our current reality. Makes you wonder what we might do differently, doesn’t it?”

Makes one wonder, indeed. These warnings, eerily congruent despite coming from two different large language models, do not speak of potential threats. They describe current conditions. They’re not predicting a future; they’re diagnosing a present.

The effect is cumulative. Essentially, as parents, educators, and adults, we are outsourcing more and more of the hard cognitive lifting – research, memory, even curiosity – to machines. And what we once called “childhood” is now a battleground between algorithms and agency.

I’m typing these words as I train back to London from Cheshire, where I was in the countryside with my two young children, at my parents’ house. This half term, we escaped the city for a few days of greenery and generational warmth. (The irony here is that while walks, books and board games dominated the last three days, my daughter is now on a maths game on an iPad, and my older son is blowing things up on his Nintendo Switch – just for an hour or so while I diligently polish this newsletter.) 

There were four-week-old lambs in the field next door, gleefully gambolling. The kids cooed. For a moment, all was well. But as they scampered through the grass, I thought: how long until this simplicity is overtaken by complexity? How long until they’re pulled into the same current sweeping the rest of us into a world of perpetual digital mediation?

That question sharpened during an in-person roundtable I moderated for Cognizant and Microsoft a week ago. The theme was generative AI in financial services, but the most provocative insight came not from a banker but from technologist David Fearne. “What happens,” he asked, “when the cost of intelligence sinks to zero?”

It’s a question that has since haunted me. Because it’s not just about jobs or workflows. It’s about meaning.

If intelligence becomes ambient – like electricity, always there, always on – what is the purpose of education? What becomes of effort? Will children be taught how to think, or simply how to prompt?

The new Intuitive AI report, produced by Cognizant and Microsoft, outlines a corporate future in which “agentic AI” becomes a standard part of every team. These systems will do much more than answer questions. They will anticipate needs, draft reports, analyse markets, and advise on strategy. They will, in effect, think for us. The vision, says Cognizant’s Fearne, is to build an “agentic enterprise”, which moves beyond isolated AI tools to interconnected systems that mirror human organisational structures, with enterprise intelligence coordinating task-based AI across business units.

That’s the world awaiting today’s children. A world in which thinking might not be required, and where remembering, composing, calculating, synthesising – once the hallmarks of intelligence – are delegated to ever-helpful assistants. 

The risk is that children become, well, lazy, or worse, they never learn how to think in the first place.

And the signs are not subtle. Gallup’s latest State of the Global Workforce study, published in April, reports that only 21% of the global workforce is actively engaged, a record low. Digging deeper, only 13% of the workforce is engaged in Europe – the lowest of any region – and in the UK specifically, just 10% of workers are engaged in their jobs.

Meanwhile, the latest Microsoft Work Trend Index shows 53% of the global workforce lacks sufficient time or energy for their work, with 48% of employees feeling their work is chaotic and fragmented.

If adults are floundering, what hope is there for the generation after us? If intelligence is free, where will our children find purpose?

Next week, on June 4, I’ll speak at Goldsmiths, University of London, as part of a Federation of Small Businesses event. The topic: how to nurture real human connection in a digital age. I will explore the anti-social century we’ve stumbled into, largely thanks to the “convenience” of technology alluded to in that first ChatGPT answer. The anti-social century, as coined by The Atlantic’s Derek Thompson earlier this year, is one marked by convenient communication and vanishing intimacy, AI girlfriends and boyfriends, Meta-manufactured friendships, and the illusion of connection without its cost.

In a recent LinkedIn post, Tom Goodwin, a business transformation consultant, provocateur and author (whom I spoke with about a leadership crisis three years ago), captured the dystopia best. “Don’t worry if you’re lonely,” he winked. “Meta will make you some artificial friends.” His disgust is justified. “Friendship, closeness, intimacy, vulnerability – these are too precious to be engineered by someone who profits from your attention,” he wrote.

In contrast, OpenAI CEO Sam Altman remains serenely optimistic. “I think it’s great,” he said in a Financial Times article earlier in May (calling the latest version of ChatGPT “genius-level intelligence”). “I’m more capable. My son will be more capable than any of us can imagine.”

But will he be more human?

Following last month’s newsletter, I had a call with Laurens Wailing, Chief Evangelist at 8vance and a longtime believer in technology’s potential to elevate, not just optimise, who reacted to my post. His company is using algorithmic matching to place unemployed Dutch citizens into new roles, drawing on millions of skill profiles. “It’s about surfacing hidden talent,” he told me. “Better alignment. Better outcomes.”

His team has built systems capable of mapping millions of CVs and job profiles to reveal “fit” – not just technically, but temperamentally. “We can see alignment that people often can’t see in themselves,” he told me. “It’s not about replacing humans. It’s about helping them find where they matter.”

That word stuck with me: matter.

Laurens is under no illusion about the obstacles. Cultural inertia is real. “Everyone talks about talent shortages,” he said, “but few are changing how they recruit. Everyone talks about burnout, but very few rethink what makes a job worth doing.” The urgency is missing, not just in policy or management, but in the very frameworks we use to define work.

And it’s this last point – the need for meaning – that feels most acute.

Too often, employment is reduced to function: tasks, KPIs, compensation. But what if we treated work not merely as an obligation, but as a conduit for identity, contribution, and community? 

Laurens mentioned the Japanese concept of Ikigai, the intersection of what you love, what you’re good at, what the world needs, and what you can be paid for. Summarised in one word, it is “purpose”. It’s a model of fulfilment that stands in stark contrast to how most jobs are currently structured. (And one I want to explore in more depth in a future Go Flux Yourself.)

If the systems we build strip purpose from work, they will also strip it from the workers. And when intelligence becomes ambient, purpose might be the only thing left worth fighting for.

Perhaps the most urgent question we can ask – as parents, teachers, citizens – is not “how will AI help us work?” but “how will AI shape what it means to grow up?”

If we get this wrong, if we let intelligence become a sedative instead of a stimulant, we will create a society that is smarter than ever, and more vacant than we can bear.

And is the curriculum fit for purpose in a world where intelligence is on tap? In many UK schools, children are still trained to regurgitate facts, parse grammar, and sit silently through tests. The system, despite all the rhetoric about “future skills”, remains deeply Victorian in its structure. It prizes conformity. It rewards repetition. It penalises divergence. Yet divergence is what we need, especially now. 

I’ve advocated for the “Five Cs” – curiosity, creativity, critical thinking, communication, and collaboration – as the most essential human traits in a post-automation world. But these are still treated as extracurricular. Soft skills. Add-ons. When in fact they are the only things that matter when the hard skills are being commodified by machines.

The classrooms are still full of worksheets. The teacher is still the gatekeeper. The system is not agile. And our children are not waiting. They are already forming identities on TikTok, solving problems in Minecraft, using ChatGPT to finish their homework, and learning – just not the lessons we are teaching.

That brings us back to the unnerving replies of Claude and ChatGPT, to the subtle seductions of passive engagement, and to the idea that children could be dismantled not through trauma but through ease; that the devil’s real trick is not fear but frictionlessness.

And so I return to my own children. I wonder whether they will know how to be bored. Because boredom – once a curse – might be the last refuge of autonomy in a world that never stops entertaining.

The present

If the future belongs to machines, the present is defined by drift – strategic, cultural, and moral drift. We are not driving the car anymore. We are letting the algorithm navigate, even as it veers toward a precipice.

We see it everywhere: in the boardroom, where executives chase productivity gains without considering engagement. In classrooms, where teachers – underpaid and under-resourced – struggle to maintain relevance. And in our homes, where children, increasingly unsupervised online, are shaped more by swipe mechanics than family values.

The numbers don’t lie: just 21% of employees are engaged globally, according to Gallup. And the root cause is not laziness or ignorance, the researchers reckon. It is poor management – a systemic failure to connect effort with meaning, task with purpose, worker with dignity.

Image created on Midjourney

The same malaise is now evident in parenting and education. I recently attended an internet safety workshop at my child’s school. Ten parents showed up. I was the only father.

It was a sobering experience. Not just because the turnout was low. But because the women who did attend – concerned, informed, exhausted – were trying to plug the gaps that institutions and technologies have widened. Mainly it is mothers who are asking the hard questions about TikTok, Snapchat, and child exploitation.

And the answers are grim. The workshop drew on Ofcom’s April 2024 report, which paints a stark picture of digital childhood. TikTok use among five- to seven-year-olds has risen to 30%. YouTube remains ubiquitous across all ages. Shockingly, over half of children aged three to twelve now have at least one social media account, despite all platforms having a 13+ age minimum. By 16, four out of five are actively using TikTok, Snapchat, Instagram, and WhatsApp.

We are not talking about teens misbehaving. We are talking about digital immersion beginning before most children can spell their own names. And we are not ready.

The workshop revealed that 53% of young people aged 8–25 have used an AI chatbot. That might sound like curiosity. But 54% of the same cohort also worry about AI taking their jobs. Anxiety is already built into their relationship with technology – not because they fear the future, but because they feel unprepared for it. And it’s not just chatbots.

Gaming was a key concern. The phenomenon of “skin gambling” – where children use virtual character skins with monetary value to bet on unregulated third-party sites – is now widely regarded as a gateway to online gambling. But only 5% of game consoles have parental controls installed. We have given children casinos without croupiers, and then wondered why they struggle with impulse control.

This is not just a parenting failure. It’s a systemic abdication. Broadband providers offer content filters. Search engines have child-friendly modes. Devices come with monitoring tools. But these safeguards mean little if the adults are not engaged. Parental controls are not just technical features. They are moral responsibilities.

The workshop also touched on social media and mental health, referencing the Royal Society of Public Health’s “Status of Mind” report. YouTube, it found, had the most positive impact, enabling self-expression and access to information. Instagram, by contrast, ranked worst, as it is linked to body image issues, FOMO, sleep disruption, anxiety, and depression.

The workshop ended with a call for digital resilience: recognising manipulation, resisting coercion, and navigating complexity. But resilience doesn’t develop in a vacuum. It needs scaffolding, conversation, and adults who are present physically, intellectually and emotionally.

This is where spiritual and moral leadership must re-enter the conversation. Within days of ascending to the papacy in mid-May, Pope Leo XIV began speaking about AI with startling clarity.

He chose his papal name to echo Leo XIII, who led the Catholic Church during the first Industrial Revolution. That pope challenged the commodification of workers. This one is challenging the commodification of attention, identity, and childhood.

“In our own day,” Leo XIV said in his address to the cardinals, “the Church offers everyone the treasury of its social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defence of human dignity, justice, and labour.”

These are not empty words. They are a demand for ethical clarity. A reminder that technological systems are never neutral. They are always value-laden.

And at the moment, our values are not looking good.

The present is not just a moment. It is a crucible, a pressure point, and a test of whether we are willing to step back into the role of stewards, not just of technology but of each other.

Because the cost of inaction is not a dystopia in the future, it is dysfunction now.

The past

Half-term took us to Quarry Bank, also known as Styal Mill, a red-brick behemoth nestled into the Cheshire countryside, humming with the echoes of an earlier industrial ambition. Somewhere between the iron gears and the stunning garden, history pressed itself against the present.

Built in 1784 by Samuel Greg, Quarry Bank was one of the most advanced cotton mills of its day – both technologically and socially. It offered something approximating healthcare, basic education for child workers, and structured accommodation. By the standards of the time, it was considered progressive.

Image created on Midjourney

However, 72-hour work weeks were still the norm until legislation intervened in 1847. Children laboured long days on factory floors. Leisure was a concept, not a right.

What intrigued me most, though, was the role of Greg’s wife, Hannah Lightbody. It was she who insisted on humane reforms and built the framework for medical care and instruction. She took a paternalistic – or perhaps more accurately, maternalistic – interest in worker wellbeing. 

And the parallels with today are too striking to ignore. Just as it was the woman of the house in 19th-century Cheshire who agitated for better conditions for children, it is now mothers who dominate the frontline of digital safety. It was women who filled that school hall during the online safety talk. It is often women – tech-savvy mothers, underpaid teachers, exhausted child psychologists – who raise the alarm about screen time, algorithmic manipulation, and emotional resilience.

The maternal instinct, some would argue. That intuitive urge to protect. To anticipate harm before it’s visible. But maybe it’s not just instinct. Maybe it’s awareness. Emotional bandwidth. A deeper cultural training in empathy, vigilance, care.

And so we are left with a gendered question: why is it, still, in 2025, that women carry the cognitive and emotional labour of safeguarding the next generation?

Where are the fathers? Where are the CEOs? Where are the policymakers?

Why do we still assume that maternal concern is a niche voice, rather than a necessary counterweight to systemic neglect?

History has its rhythms. At Quarry Bank, the wheels of industry turned because children turned them. Today, the wheels of industry turn because children are trained to become workers before they are taught to be humans.

Only the machinery has changed.

Back then, it was looms and mills. Today, it is metrics and algorithms. But the question remains the same: are we extracting potential from the young, or investing in it?

The lambs in the neighbouring field didn’t know any of this, of course. They leapt. They bleated. They reminded my children – and me – of a world untouched by acceleration.

We cannot slow time. But we can choose where we place our attention.

And attention, now more than ever, is the most precious gift we can give. Not to machines, but to the minds that will inherit them.

Statistics of the Month

📈 AI accelerates – but skills lag
In just 18 months, AI jumped from the sixth to the first most in-demand tech skill in the world – the steepest rise in over 15 years. Yet various other reports show most workers lack these skills, leaving a huge and widening gap. 🔗

📉 Workplace engagement crashes
Global employee engagement has dropped to just 21% – matching levels seen during the pandemic lockdowns. Gallup blames poor management, with young and female managers seeing the sharpest declines. The result? A staggering $9.6 trillion in lost productivity. 🔗

🧒 Social media starts at age three
More than 50% of UK children aged 3–12 now have at least one social media account – despite age limits set at 13+. By age 16, 80% are active across TikTok, Snapchat, Instagram, and WhatsApp. Childhood, it seems, is now permanently online. 🔗

🤖 AI anxiety sets in early
According to Nominet’s annual study of 8-25 year olds in the UK, 53% have used an AI chatbot, and 54% worry about AI’s impact on future jobs. The next generation is both enchanted by and uneasy about their digital destiny. 🔗

🚨 Cybercrime rebounds hard
After a five-year decline, major cyber attacks are rising in the UK – up to 24%, from 16% two years ago. Insider threats and foreign powers are now the fastest-growing risks, overtaking organised crime. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 16)


TL;DR: April’s Go Flux Yourself explores the rise of AI attachment and how avatars, agents and algorithms are slipping into our emotional and creative lives. As machines get more personal, the real question isn’t what AI can do. It’s what we risk forgetting about being human …

Image created on Ninja AI

The future

“What does relationship communication – and attachment in particular – look like in a future where our most meaningful conversations may be with digital humans?”

The robots aren’t coming. They’re already in the room, nodding along, offering feedback, simulating empathy. They don’t sleep. They don’t sigh. And increasingly, they feel … helpful.

In 2025, AI is moving beyond spreadsheets and slide decks and entering our inner lives. According to new Harvard Business Review analysis, written by Filtered.com co-founder and author Marc Zao-Sanders, the fastest-growing use for generative AI isn’t work but therapy and companionship. In other words, people are building relationships with machines. (I’ve previously written about AI companions – including in June last year.)

Some call this disturbing. Others call it progress. At DTX Manchester earlier this month, where I moderated a panel on AI action plans on the main stage (and wrote a summary of my seven takeaways from the event), the conversation was somewhere in between. One question lingered among the panels and product demos: how will we relate to one another when technology becomes our emotional rehearsal partner?

This puzzler is no longer only theoretical. RealTalkStudio, founded by Toby Sinclair, provides AI avatars that help users prepare for hard conversations: delivering bad news, facing conflict, and even giving feedback without sounding passive-aggressive. These avatars pick up on tone, hesitation, and eye movement. They pause in the right places, nod, and even move their arms around.

I met Toby at DTX Manchester, and we followed up with a video call a week or so later, after I’d road-tested RealTalkStudio. The prompts on the demo – a management scenario – were handy and enlightening, especially for someone like me, who has never really managed anyone (do children count?). They allowed me to speak with my “direct report” adroitly, to achieve a favourable outcome for both parties. 

Toby had been at JP Morgan for almost 11 years – latterly as Executive Director of Employee Experience – until he left to establish RealTalkStudio in September. Why did he give it all up?

“The idea came from a mix of personal struggle and tech opportunity,” he told me over Zoom. “I’ve always found difficult conversations hard – I’m a bit of a people pleaser, so when I had to give feedback or bad news, I’d sugarcoat it, use too many pillow words. My manager [at JP Morgan] was the opposite: direct, no fluff. That contrast made me realise there isn’t one right way – but practice is needed. And a lot of people struggle with this, not just me.”

The launch of ChatGPT, in November 2022, prompted him to explore possible solutions using technology. “Something clicked. It was conversational, not transactional – and I immediately thought, this could be a space to practise hard conversations. At first, I used it for myself: trying to become a better manager at JP Morgan, thinking through career changes, testing it as a kind of coach or advisor. That led to early experiments in building an AI coaching product, but it flopped. The text interface was too clunky, the experience too dull. Then, late last year, I saw how far avatar tech had come.” 

Suddenly, Toby’s idea felt viable. Natural, even. “I knew the business might not be sustainable forever, but for now, the timing and the tech felt aligned. I could imagine it being used for manager training, dating, debt collectors, airline … so many use cases.”

Indeed, avatars are not just used in work settings. A growing number of people – particularly younger generations – are turning to AI to rehearse dating, for instance. Toby has been approached by an Eastern European matchmaking service. “They came to me because they’d noticed a recurring issue, especially with younger men: poor communication on dates, and a lack of confidence. They were looking for ways to help their clients – mainly men – have better conversations. And while practice helps, finding a good practice partner is tricky. Most of these men don’t have many female friends, and it’s awkward to ask someone: ‘Can we practise going on a date?’ That’s where RealTalk comes in. We offer a realistic, judgment-free way to rehearse those conversations. It’s all about building confidence and clarity.”

These avatars flirt back. They guide you through rejection. They help you practise confidence without fear of humiliation. It’s Black Mirror, yes. But also oddly touching. On one level, this is useful. Social anxiety is rising. Young people in particular are navigating a digital-first emotional landscape. An AI avatar offers low-risk rehearsal. It doesn’t laugh. It doesn’t ghost.

On another level, it’s deeply troubling. The ability to control the simulation – to tailor responses, remove ambiguity, and mute discomfort – trains us to expect real humans to behave predictably, like code. We risk flattening our tolerance for emotional nuance. If your avatar never rolls its eyes or forgets your birthday, why tolerate a flawed, chaotic, human partner?

When life feels high-stakes and unpredictable, a predictable conversation with a patient, programmable partner can feel like relief. But what happens when we expect humans to behave like avatars? When spontaneity becomes a bug, not a feature?

That’s the tension. These tools are good, and only improving. Too good? The quotation I started this month’s Go Flux Yourself with comes from Toby, who has a two-year-old boy, Dylan. As our allotted 30 minutes neared its end, the hugely enjoyable conversation turned philosophical, and he posed this question: “What does relationship communication – and attachment in particular – look like in a future where our most meaningful conversations may be with digital humans?”

It’s clear that AI avatars are no longer just slick customer service bots. They’re surprisingly lifelike. Character-3, the latest from Hedra, mimics micro-expressions with startling accuracy. Eyebrows arch. Shoulders slump. A smirk feels earned.

This matters because humans are built to read nuance. We feel it when something’s off. But as avatars close the emotional gap, that sense of artifice starts to slip. We begin to forget that what we engage with isn’t sentient – it’s coded.

As Justine Moore from Andreessen Horowitz stressed in an article outlining the roadmap for avatars (thanks for the tip, Toby), these aren’t talking heads anymore. They’re talking characters, designed to be persuasive. Designed to feel real enough.

So yes, they’re useful for training, coaching, even storytelling. But they’re also inching closer to companionship. And once a machine starts mimicking care, the ethics get blurry.

Nowhere is the ambivalence more acute than in the creative industries. The spectre of AI-generated music, art, and writing has stirred panic among artists. And yet – as I argued at Zest’s Greenwich event last week – the most interesting possibilities lie in creative amplification, not replacement.

For instance, the late Leon Ware’s voice, pulled from a decades-old demo, now duets with Marcos Valle on Feels So Good, a track left unfinished since 1979. The result – which I heard at the Jazz Cafe last August, when I was lucky enough to catch the octogenarian Valle – was genuinely moving. Not because it’s novel, but because it’s human history reassembled. Ware isn’t being replaced. He’s being recontextualised.

We’ve seen similar examples in recent months: a new Beatles song featuring a de-noised John Lennon; a Beethoven symphony completed with machine assistance. Each case prompts the same question: is this artistry, or algorithmic taxidermy?

From a technical perspective, these tools are astonishing. From a legal standpoint, deeply fraught. But from a cultural angle, the reaction is more visceral: people care about authenticity. A recent UK Music study found that 83% of UK adults believe AI-generated songs should be clearly labelled. Two-thirds worry about AI replacing human creativity altogether.

And yet, when used transparently, AI can be a powerful co-creator. I’ve used it to organise ideas, generate structure, and overcome writer’s block. It’s a tool, like a camera, or a DAW, or a pencil. But it doesn’t originate. It doesn’t feel.

As Dean, a community member of Love Will Save The Day FM (for whom my DJ alias Boat Floaters has a monthly show called Love Rescue), told me: “Real art is made in the accidents. That’s the magic. AI, to me, reduces the possibility of accidents and chance in creation, so it eliminates the magic.”

That distinction matters. Creativity is not just output. It’s a process. It’s the struggle, the surprise, the sweat. AI can help, but it can’t replace that.

Other contributions from LWSTD members captured the ambivalence around AI and creativity – in music, in this case, though the viewpoints apply across the arts. James said: “Anything rendered by AI is built on the work of others. Framing this as ‘democratised art’ is disingenuous.” He noted the deep disgust expressed by Hayao Miyazaki of Studio Ghibli – who has famously called AI-generated animation an “insult to life itself” – when social media feeds became drowned in AI parodies of his art.

Sam picked up this theme. “The Ghibli stuff is a worrying direction of where things can easily head with music – there’s already terrible versions of things in rough styles but it won’t be long before the internet is flooded with people making their own Prince songs (or whatever) but, as with Ghibli, without anything beyond a superficial approximation of art.”

And Jed pointed out that “it’s all uncanny – it’s close, but it’s not right. It lacks humanity.”

Finally, Larkebird made an amusing distinction. “There are differences between art and creativity. Art is a higher state of creativity. I can add coffee to my tea and claim I’m being creative, but that’s not art.”

Perhaps, though, if we want to glimpse where this is really headed, we need to look beyond the avatars to the agents currently dominating the space.

Ray Smith, Microsoft’s VP of Autonomous Agents, shared a fascinating vision during our meeting in London in early April. His team’s strategy hinges on three tiers: copilots (assistants), agents (apps that take action), and autonomous agents (systems that can reason and decide).

Imagine an AI that doesn’t just help you file expenses but detects fraud, reroutes tasks, escalates anomalies, all without being prompted. That’s already happening. Pets at Home uses a revenue protection agent to scan and flag suspicious returns. The human manager only steps in at the exception stage.
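To make that exception-stage pattern concrete, here is a minimal sketch in Python – with entirely hypothetical names, fields, and thresholds, not Pets at Home’s or Microsoft’s actual system – of an agent that auto-clears routine returns and escalates only the outliers to a human:

```python
from dataclasses import dataclass

@dataclass
class ProductReturn:
    order_id: str
    refund_value: float   # amount the customer is claiming back
    risk_score: float     # 0.0 (benign) to 1.0 (suspicious), from an upstream model

# Hypothetical cut-off: anything at or above this is an "exception"
ESCALATION_THRESHOLD = 0.8

def triage(returns: list[ProductReturn]) -> tuple[list[ProductReturn], list[ProductReturn]]:
    """Split the queue: the agent clears routine cases; only outliers reach a human."""
    auto_approved = [r for r in returns if r.risk_score < ESCALATION_THRESHOLD]
    escalated = [r for r in returns if r.risk_score >= ESCALATION_THRESHOLD]
    return auto_approved, escalated

if __name__ == "__main__":
    queue = [
        ProductReturn("A101", 19.99, 0.05),   # routine: approved without a prompt
        ProductReturn("A102", 349.00, 0.92),  # anomaly: the manager steps in here
    ]
    approved, exceptions = triage(queue)
    print(f"Auto-approved: {len(approved)} | Escalated to a human: {len(exceptions)}")
```

The point is the shape, not the scoring: the human stays in the loop, but only where judgment is actually needed.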

And yet, during Smith’s demo … the tech faltered. GPU throttling. Processing delays. The AI refused to play ball.

It was a perfect irony: a conversation about seamless automation interrupted by the messiness of real systems. Proof, perhaps, that we’re still human at the centre.

But the direction of travel is clear. These agents are not just tools. They are colleagues. Digital labour, tireless and ever-present.

Smith envisions a world where every business process has a dedicated agent. Where creative workflows, customer support, and executive decision-making are all augmented by intelligent, autonomous helpers.

However, even he admits that we need a cultural reorientation. Most employees still treat AI as a search box. They don’t yet trust it to act. That shift – from command-based to companion-based thinking – is coming, slowly, then suddenly (to paraphrase Ernest Hemingway).

A key point often missed in the AI hype is this: AI is inherently retrospective. Its models are trained on what has come before. It samples. It predicts. It interpolates. But it cannot truly invent in the sense humans do, from nothing, from dreams, from pain.

This is why, despite all the alarmism, creativity remains deeply, stubbornly human. And thank goodness for that.

But there is a danger here. Not of AI replacing us, but of us replacing ourselves – outsourcing our process, flattening our instincts, degrading our skills, compromising originality in favour of efficiency.

AI might never write a truly original poem. But if we rely on it to finish our stanzas, we might stop trying.

Historian Yuval Noah Harari has warned against treating AI as “just another tool”. He suggests we reframe it as alien intelligence. Not because it’s malevolent, but because it’s not us. It doesn’t share our ethics. It doesn’t care about suffering. It doesn’t learn from heartbreak.

This matters, because as we build emotional bonds with AI – however simulated – we risk assuming moral equivalence. That an AI which can seem empathetic is empathetic.

This is where the work of designers and ethicists becomes critical. Should emotional AI be clearly labelled? Should simulated relationships come with disclaimers? If not, we risk emotional manipulation at industrial scale, especially among the young, lonely, or digitally naive. (This recent New York Times piece, about a married, 28-year-old woman in love with her ChatGPT, is well worth a read, showing how easy – and how frightening and costly – it is to become attached to AI.)

We also risk creating a two-tier society: those who bond with humans, and those who bond with their devices.

Further, Harari warned in an essay, published in last Saturday’s Financial Times Weekend, that the rise of AI could accelerate political fragmentation in the absence of shared values and global cooperation. Instead of a liberal world order, we gain a mosaic of “digital fortresses”, each with its own truths, avatars, and echo chambers. 

Without robust ethics, the future of AI attachment could split into a thousand isolated solitudes, each curated by a private algorithmic butler. If we don’t set guardrails now, we may soon live in a world where connection is easy – and utterly empty.

The present

At DTX Manchester this month, the main-stage AI panel I moderated felt very different from those even last year. The vibe was less “what is this stuff?” and more “how do we control the stuff we’ve already unleashed?”

Gone are the proof-of-concept experiments. Organisations are deploying AI at scale. Suzanne Ellison at Lloyds Bank described a knowledge base now used by 21,000 colleagues, cutting information-retrieval times by half and boosting customer satisfaction by a third. But more than that, it’s made work more human, freeing up time for nuanced, empathetic conversations.

Likewise, the thought leadership business I co-founded last year, Pickup_andWebb, uses AI avatars for client-facing video content, such as a training programme. No studios. No awkward reshoots. Just instant script updates. It’s slick, smart, and efficient. And yes, slightly unsettling.

Dominic Dugan of Oktra, a man who has spent decades designing workspaces, echoed that tension. He’s sceptical. Most post-pandemic office redesigns, he argues, are just “colouring in” – performative, superficial, Instagram-friendly but uninhabitable. We’ve designed around aesthetics, not people.

Dugan wants us to talk about performance. If an office doesn’t help people do better work, or connect more meaningfully, what’s the point? Even the most elegantly designed workplace means little if it doesn’t accommodate the emotional messiness of human interaction – something AI, for all its growth, still doesn’t understand.

And yet, that fragility of our human systems – tech included – was brought into sharp relief in these last few days (and is ongoing, at the time of writing) when an “induced atmospheric vibration” reportedly caused widespread blackouts in Spain and Portugal, knocking out connectivity across major cities for hours, and in some cases days. No internet. No payment terminals. No AI anything. Life slowed to a crawl. Trains stopped. Offices went dark. Coffee shops switched to cash, or closed altogether. It was a rare glimpse into the abyss of analogue dependency, a reminder that our digital lives are fragile scaffolds built on uncertain foundations.

The outage was temporary. But the lesson lingers: the more reliant we become on these intelligent systems, the greater our vulnerability when they fail. And fail they will. That’s the nature of systems. But it’s also the strength of humans: our capacity to improvise, to adapt, to find ways around failure. The more we automate, the more we must remember this: resilience cannot be outsourced.

And that brings me to my own moment of reinvention.

This month I began the long-overdue overhaul of my website, oliverpickup.com. The current version – featuring a home-page photograph of me swimming in the Serpentine, goggles on upside down, at a shoot interviewing Olympic triathlete Jodie Stimpson – has served me well, but it’s over a decade old. Also, people think I’m into wild swimming. I’m not, and I detest cold water.

(The 2015 article in FT Weekend has one of my favourite opening lines: “Jodie Stimpson is discussing tactical urination. The West Midlands-based triathlete, winner of two Commonwealth Games golds last summer, is specifically talking about relieving herself in her wetsuit to flood warmth to the legs when open-water swimming.”) 

But it’s more than a visual rebrand. I’m repositioning, due to FOBO (fear of becoming obsolete). The traditional freelance model is eroding, its margins squeezed by algorithmic content and automated writing. While it might not have the personality, depth, and nuance of human writing, AI doesn’t sleep, doesn’t bill by the hour, and now writes decently enough to compete. I know I can’t outpace it on volume. So I have to evolve. Speaking. Moderating. Podcasting. Hosting. These are uniquely human domains (for now).

The irony isn’t lost on me: I now use AI to sharpen scripts, test tone, even rehearse talks. But I also know the line. I know what cannot be outsourced. If my words don’t carry me in them, they’re not worth publishing.

Many of us are betting that presence still matters. That real connection – in a room, on a stage, in a hard conversation – will hold value, even as screens whisper more sweetly than ever.

As such, I’m delighted to have been accepted by Pomona Partners, a speaker agency led by “applied” futurist Tom Cheesewright, with whom I caught up over lunch at DTX Manchester. I’m looking forward to taking the next steps in my professional speaking career with Tom and the team.

The past

Recently, prompted by a friend’s health scare and my natural curiosity, I spat into a tube and sent off the DNA sample to ancestry.com. I want to understand where I come from, what traits I carry, and what history pulses through me.

In a world where AI can mimic me – my voice, writing style, and image – there’s something grounding about knowing the real me. The biological, lived, flawed, irreplaceable me.

It struck me as deeply ironic. We’re generating synthetic selves at an extraordinary rate. Yet we’re still compelled to discover our origins: to know not just where we’re going, but where we began.

This desire for self-knowledge is fundamental. It sits at the heart of my CHUI framework: Community, Health, Understanding, Interconnectedness. Without understanding, we’re at the mercy of the algorithm. Without roots, we become avatars.

Smith’s demo glitch – an AI agent refusing to cooperate – was a reminder that no matter how advanced the tools, we are still in the loop. And we should remain there.

When I receive my ancestry results, I won’t be looking for royalty. I’ll be looking for roots. Not to anchor me in the past, but to help me walk straighter into the future. I’ll also share those findings in this newsletter. Meanwhile, I’m off to put tea in my coffee.

Statistics of the month

📈 AI is boosting business. Some 89% of global leaders say speeding up AI adoption is a top priority this year, according to new LinkedIn data. And 51% of firms have already seen at least a 10% rise in revenue after implementation.

🏙️ Cities aren’t ready. Urban economies generate most of the world’s GDP, but 44% of that output is at risk from nature loss, recent World Economic Forum data shows. Meanwhile, only 37% of major cities have any biodiversity strategy in place. 🔗

🧠 The ambition gap is growing. Microsoft research finds that 82% of business leaders around the globe say 2025 is a pivotal year for change (85% think so in the UK). But 80% of employees feel too drained to meet those expectations. 🔗

📉 Engagement is slipping. Global employee engagement is down to 21%, according to Gallup’s latest State of the Global Workplace annual report (more on this next month). Managers have been hit hardest – dropping from 30% to 27% – and have been blamed for the general fall. The result? $438 billion in lost productivity. 🔗

💸 OpenAI wants to hit $125 billion. That’s their projected revenue by 2029 – driven by autonomous agents, API tools and custom GPTs. Not bad for a company that started as a non-profit. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 15)

TL;DR: March’s Go Flux Yourself explores what good leadership looks like in an AI-driven world. Spoiler: it’s not Donald Trump. From psychological safety and the “Lost Einsteins” to lessons from the inventor of plastic, it examines why innovation without inclusion is reckless – and why collaboration, kindness, and asking better questions might be our best defence against digital delusion and existential drift …

Image created on Midjourney

The future

“Leadership is the art of harnessing the efforts of others to achieve greatness.”

Donald Trump’s America-first agenda may appeal to the base instincts of populism, a nationalist fever dream dressed up as economic strategy. However, it is hopelessly outdated as a leadership model for a globally connected, AI-enabled future. 

In fact, it’s worse than that. It’s actively regressive. Trumpism, and the rise of Trumpian imitators across the globe, isn’t just shutting borders. It’s shutting minds, too, and that’s more debilitating for society. It trades in fear, not foresight. It rewards silence over dissent. And in doing so, it stifles precisely the kind of leadership the future demands.

Because let’s be clear: the coming decades will not be defined by those who shout the loudest or build the tallest walls. They will be defined by those who keep channels open – not just for trade, but for ideas. For difference. For disagreement. For discovery.

That starts with listening. And not just listening politely, but listening generatively – creating the psychological space where people feel safe enough to share the thought that might change everything.

At the recent Workhuman Live Forum in London, Harvard’s Amy Edmondson – a global authority on leadership and psychological safety – warned of the “almost immeasurable” consequences of holding back. In her research, 93% of senior leaders admitted that their silence had tangible costs. Not theoretical. Not abstract. Tangible. Safety failures. Wasted resources. Poor decisions. Quiet disengagement. And perhaps worst of all, missed opportunities to learn.

Why do we hold back, and not speak up? Because we’re human. And humans are wired to avoid looking stupid. We’d rather be safe than smart. Edmondson calls it “impression management”, and we’re all fluent in it. From the start of primary school, we learn not to raise our hand unless we’re sure of the answer. By the time we enter the workforce, that instinct is second nature.

But in today’s volatile, uncertain, complex, and ambiguous (VUCA) world – a term that had a chorus in the early pandemic days, five years ago, and which I’m hearing business leaders use a lot more now – that instinct is no longer helpful. It’s dangerous. Because real innovation doesn’t happen in safe, silent rooms. It happens in teams willing to fail fast, speak up, and challenge the status quo. In rooms where “I think we might be wrong” is not a career-ending statement, but a spark.

So how should leaders lead? The quotation that begins this month’s Go Flux Yourself is from Ken Frazier, former CEO of Merck, and was cited by Edmondson, who heard it in one of her sessions. It’s worth repeating: “Leadership is the art of harnessing the efforts of others to achieve greatness.”

This brings us to Aneesh Raman, LinkedIn’s Chief Economic Opportunity Officer, and his powerful message at Talent Connect and Sancroft Convene, in the shadow of St Paul’s Cathedral in London. Raman argues that we are moving out of the “knowledge economy” – where technical proficiency was king – and into the “innovation economy”, where our most human skills become our greatest assets.

He lists them as the five Cs: communication, creativity, compassion, courage, and curiosity. Let’s make it six: collaboration. These are no longer “soft skills” but the defining skills of the age. They allow us to build trust, forge connections, and work across differences. They are, as Raman says, why we are the apex species on the planet.

But here’s the catch: while these skills are distributed broadly across the population, the opportunity to develop and express them is not. Enter the “Lost Einsteins” – those with the potential to innovate but without the credentials, connections, or capital to turn ideas into impact. Economist Raj Chetty’s landmark study found that children from wealthy families are 10 times more likely to become inventors than equally talented peers from lower-income backgrounds.

This is a global failure. We are squandering talent on an industrial scale – not because of a lack of ability, but because of a lack of inclusion. And that’s a leadership failure.

We need leaders who can spot and elevate the quiet genius in the room, who don’t confuse volume with value, and who can look beyond the CV and see the potential in a person’s questions, not just their answers.

And we need to stop romanticising “hero” innovation – the lone genius in a garage – and embrace the truth: innovation is a team sport. For instance, Leonardo da Vinci, as biographer Walter Isaacson points out, was a great collaborator. He succeeded because he listened as much as he led.

Which brings us back to psychological safety – the necessary precondition for team-based innovation. Without it, diversity becomes dysfunction. With it, it becomes dynamite.

Edmondson’s research shows that diverse teams outperform homogenous ones only when psychological safety is high. Without that safety, diversity leads to miscommunication, mistrust, and missed potential. But with it? You get the full benefit of varied perspectives, lived experiences, and cognitive styles. You get the kind of high-quality conversations that lead to breakthroughs.

But these conversations don’t happen by accident. They require framing, invitation, and modelling. They require leaders to say – out loud – things like: “I’ve never flown a perfect flight” (as one airline captain Edmondson studied told his new crew). Or “I need to hear from you”. Or even: “I don’t know the answer. Let’s figure it out together.”

KeyAnna Schmiedl, Workhuman’s Chief Human Experience Officer, put it beautifully in a conversation we had at the Live Forum event: leadership today is less about having the answer and more about creating the conditions for answers to emerge. It’s about making work more human – not through performative gestures, but through daily, deliberate acts of kindness. Not niceness. Kindness.

Niceness avoids conflict. Kindness leans into it, constructively. Niceness says, “That’s fine”. Kindness says, “I hear you – but here’s what we need.” Niceness smooths things over. Kindness builds things up.

And kindness is deeply pragmatic. It’s not about making everyone happy. It’s about making sure everyone is heard. Because the next big idea could come from the intern. From the quiet one. From the woman in trainers, not the man in a suit.

This reframing of leadership is already underway. Schmiedl herself never thought of herself as a leader – until others started reflecting it back to her. Not because she had all the answers, but because she had a way of asking the right questions, of creating rooms where people showed up fully, where difference wasn’t just tolerated but treasured.

So what does all this mean for the rest of us?

It means asking better questions. Not “Does anyone disagree?” (cue crickets). But “Who has a different perspective?” It means listening more than speaking. It means noticing who hasn’t spoken yet – and inviting them in. It means, as Edmondson says, getting curious about the dogs that don’t bark. Other good questions include “What are we missing?” and “Can you explain that further?”

And it means remembering that the goal is not psychological safety itself. The goal is excellence. Innovation. Learning. Fairness. Safety is just the soil in which those things can grow.

The future belongs to the leaders who know how to listen, invite dissent, ask good questions, and, ultimately, understand that the art of leadership is not dominance, but dialogue.

Because the next Einstein is out there. She, he, or they just haven’t been heard yet.

The present

“We’re gearing up for this year to be a year where you’ll have some ‘oh shit’ moments,” said Jack Clark, policy chief at Anthropic, the $40 billion AI start-up behind the Claude chatbot, earlier this year. He wasn’t exaggerating. From melting servers at OpenAI (more on this below) to the dizzying pace of model upgrades, 2025 already feels like we’re living through the future on fast-forward.

And yet, amid all the noise, hype, and existential hand-wringing, something quieter – but arguably more profound – is happening: people are remembering the value of connection.

This March, I had the pleasure of speaking at a Federation of Small Businesses (FSB) virtual event for members in South East London. The session, held on Shrove Tuesday, was fittingly titled “Standing Out: The Power of Human Leadership in an AI World”. Between pancake references and puns (some better than others), I explored what it means to lead with humanity in an age when digital tools dominate every dashboard, inbox, and conversation.

The talk was personal, anchored in my own experience as a business owner, a journalist, and a human surfing the digital tide. I shared my CHUI framework – Community, Health, Understanding, and Interconnectedness – as a compass for turbulent times. Because let’s face it: the world is messy right now. Geopolitical uncertainty is high. Domestic pressures are mounting. AI is changing faster than our ability to regulate or even comprehend it. And loneliness – real, bone-deep isolation – is quietly eroding the foundations of workplaces and communities.

And yet, there are bright spots. And they’re often found in the places we least expect – like virtual networking events, Slack channels, and local business groups.

Since that FSB session, I’ve connected with a flurry of new people, each conversation sparking unexpected insight or opportunity. One such connection was Bryan Altimas, founder of Riverside Court Consulting. Bryan’s story perfectly exemplifies how leadership and collaboration can scale, even in a solo consultancy.

After the pandemic drove a surge in cybercrime, Altimas responded not by hiring a traditional team but by building a nimble, global network of 15 cybersecurity specialists – from policy experts to ethical hackers based as far afield as Mauritius. “Most FSB members don’t worry about cybersecurity until it’s too late,” he told me in our follow-up chat. But instead of fear-mongering, Altimas and his team educate. They equip small businesses to be just secure enough that criminals look elsewhere – the digital equivalent of fitting a burglar alarm on your front door while your neighbour leaves theirs ajar.

What struck me most about Altimas wasn’t just his technical acumen, but his collaborative philosophy. Through FSB’s Business Crimes Forum, he’s sat on roundtables with the London Mayor’s Office and contributed to parliamentary discussions. These conversations – forged through community, not competition – have directly generated new client relationships and policy influence. “It’s about raising the floor,” he said. “We’re stronger when we work together.”

That sentiment feels increasingly urgent. In an age where cybercriminals operate within sophisticated, decentralised networks, small businesses can’t afford to work in silos. Our defence must be networked, too – built on shared knowledge, mutual accountability, and trust.

And yet, many governments seem to be doing the opposite. The recent technical capability notice issued to Apple – which led to the withdrawal of its Advanced Data Protection service from UK devices – is a case in point. Altimas called it “the action of a digitally illiterate administration”, one that weakens security for all citizens while failing to deter the real bad actors. The irony? In trying to increase control, we’ve actually made ourselves more vulnerable.

This brings us back to the role of small business leaders and, more broadly, to the power of community. As I told the audience at the FSB event, the future of work isn’t just about AI. It’s about who can thrive in an AI world. And the answer, increasingly, is those who can collaborate, communicate, and connect across differences.

In a world where, by some projections, 90% of online content will be AI-generated this year, authentic human interaction becomes not just a nice-to-have but a business differentiator. Relationship capital is now as valuable as financial capital. And unlike content, it can’t be automated.

That’s why I encourage business leaders to show up. Join the webinars. Say yes to the follow-up call. Ask the awkward questions. Be curious. Some of the most valuable conversations I’ve had recently – including with Altimas – started with nothing more than a LinkedIn connection or a quick post-event “thanks for your talk”.

This isn’t about nostalgia or rejecting technology. As I said in my FSB talk, tech is not the enemy of human connection – it’s how we use it that matters. The question is whether our tools bring us closer to others or push us further into isolation.

The paradox of the AI age is that the more powerful our technologies become, the more essential our humanity is. AI can optimise, analyse, and synthesise, but it can’t empathise, mentor, or build trust in a room. It certainly can’t make someone feel seen, valued, or safe enough to speak up.

That’s where leadership comes in. As Edmondson noted, psychological safety doesn’t happen by accident. It must be modelled, invited, and reinforced. In many cases, work must be reframed to make clear that anyone and everyone can make a difference, alongside an acknowledgement by leaders that things will inevitably go wrong. And as Raman said, the next phase of work will be defined not by who codes the best, but by who collaborates the most.

Our best bet for surviving the “oh shit” moments of 2025 is not to go it alone, but to lean in together. As FSB members, for instance, we are not just business owners. We are nodes in a network. And that network – messy, human, imperfect – might just be our greatest asset.

The past

In 1907, Leo Baekeland changed the world. A Belgian-born chemist working in New York, he created Bakelite – the world’s first fully synthetic plastic. It was, by every measure, a breakthrough. Hard, durable, and capable of being moulded into almost any shape (the clue is in the name – plastikos, from the Greek, meaning “capable of being shaped”), Bakelite marked the dawn of the modern plastics industry. 

For the first time, humankind wasn’t limited to what nature could provide. We could manufacture our own materials. These materials would soon find their way into everything from telephones to televisions, jewellery to jet engines.

Baekeland had no idea what he was unleashing. And perhaps that’s the point.

More than a century later, we’re drowning in the aftershocks of that innovation. At Economist Impact’s 10th Sustainability Week earlier this month – once again in the quietly majestic surroundings of Sancroft Convene – I had the pleasure of moderating a panel titled “Preventing plastics pollution through novel approaches”. I even dressed for the occasion, sporting a nautical bow tie (always good to keep the theme on-brand), and kicked things off with a bit of self-aware humour about my surname.

One of the panellists, Kris Renwick of Reckitt, represented the makers of Harpic – the toilet-cleaning brand founded by none other than Harry Pickup, surely the most illustrious bearer of my surname. (Although the late actor Ronald Pickup has a case.) There’s a certain poetry in the fact that Harry made his name scrubbing away society’s waste.

Especially when set against another panellist, Alexandra Cousteau – granddaughter of Jacques-Yves, the pioneering oceanographer who co-invented the Aqua-Lung and brought the mysteries of the sea to the world. Cousteau, who first set sail on an expedition at just four months old, told the audience that there is 50% less sea life today than in her grandfather’s time.

Let that sink in. Half of all marine life gone – in just three generations.

And plastics are a big part of the problem. We now produce around 460 million tonnes of plastic every year. Of that, 350 million tonnes becomes waste – and a staggering 91% of it is never recycled. Contrary to popular belief, though, very little of it ends up in the oceans directly.

According to Gapminder, just under 6% of all plastic waste makes it to the sea. A far larger share – around 80 million tonnes a year – is mismanaged: dumped, burned, or buried in ways that still wreak havoc on ecosystems and human health. As Cousteau pointed out, the average person, astonishingly, is believed to carry around the equivalent of a plastic spoon’s worth of microplastics in their body. Including in their brain.

Image created on Midjourney

It’s a bleak picture – and one with eerie echoes in the current hype cycle around AI.

Bakelite was hailed as a wonder material. It made things cheaper, lighter, more efficient. So too does AI. We marvel at what generative tools can do – composing music, designing logos, writing code, diagnosing diseases. Already there are brilliant use cases – and undoubtedly more to come. But are we, once again, rushing headlong into a future we don’t fully understand? Are we about to repeat the same mistake: embracing innovation, while mismanaging its consequences?

Take energy consumption. This last week, OpenAI’s servers were reportedly “melting” under the strain of demand after the launch of their new image-generation model. Melting. It’s not just a metaphor. The environmental cost of training and running large AI models is immense – with one 2019 estimate (ie before the explosion of ChatGPT) suggesting that training a single model can emit as much carbon as five cars do over their entire lifetimes. That’s not a sustainable trajectory.

And yet, much like Bakelite before it, AI is being pushed into every corner of our lives. Often with the best of intentions. But intentions, as the old saying goes, are not enough. What matters is management.

On our plastics panel, Cousteau made the case for upstream thinking. Rather than just reacting to waste, we must design it out of the system from the start. That means rethinking materials, packaging, infrastructure. In other words, it requires foresight. A willingness to zoom out, to consider long-term impacts rather than just short-term gains.

AI demands the same. We need to build governance, ethics, and accountability into its architecture now – before it becomes too entrenched, too ubiquitous, too powerful to regulate meaningfully. Otherwise, we risk creating a different kind of pollution: not plastic, but algorithmic. Invisible yet insidious. Microbiases instead of microplastics. Systemic discrimination baked into decision-making processes. A digital world that serves the few at the expense of the many.

All of this brings us back to leadership. Because the real challenge isn’t innovation. It’s stewardship. As Cousteau reminded us, humans are phenomenally good at solving problems when we decide to care. The tragedy is that we so often wait until it’s too late – until the oceans are full, until the servers melt, until the damage is done.

Moderating that session reminded me just how interconnected these conversations are. Climate. Technology. Health. Equity. We can’t afford to silo them anymore. The story of Bakelite is not just the story of plastics. It’s the story of unintended consequences. The story of how something miraculous became monstrous – not because it was inherently evil, but because we weren’t paying attention.

And that, in the end, is what AI forces us to confront. Are we paying attention? Are we asking the right questions, at the right time, with the right people in the room?

Or are we simply marvelling at the magic – and leaving someone else to clean up the mess?

Statistics of the month

📊 AI in a bubble? Asana’s latest research reveals that AI adoption is stuck in a ‘leadership bubble’ – while executives embrace the tech, most employees remain on the sidelines. Two years in, 67% of companies still haven’t scaled AI across their organisations. 🔗

🤝 Collaboration drives adoption. According to the same study, workers are 46% more likely to adopt AI when a cross-functional partner is already using it. Yet most current implementations are built for solo use – missing the chance to unlock AI’s full, collective potential. 🔗

📉 Productivity gap alert. Gartner predicts that by 2028, over 20% of workplace apps will use AI personalisation to adapt to individual workers. Yet today, only 23% of digital workers are fully satisfied with their tools – and satisfied users are nearly 3x more productive. The workplace tech revolution can’t come soon enough.

📱 Emoji wars at work. New research from The Adaptavist Group exposes a generational rift in office comms: 45% of UK over-50s say emojis are inappropriate, while two-thirds of Gen Z use them daily. Meanwhile, full-stops are deemed ‘professional’ by older workers, but 23% of Gen Z perceive them as ‘rude’. Bring on the AI translators! 🔗

😓 Motivation is fading. Culture Amp finds that UK and EMEA employee motivation has declined for three straight years. Recognition is at a five-year low, and fewer workers feel performance reviews reflect their impact. Hard work, unnoticed. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 14)

TL;DR: February’s Go Flux Yourself examines fairness as a business – and societal – necessity. Splunk’s Kirsty Paine tackles AI security, Harvard’s Siri Chilazi critiques DEI’s flaws, and Robert Rosenkranz applies Stoic wisdom to ambition, humility, and success in an AI-driven world …

Image created on Midjourney with the prompt “a forlorn man with his young son both with ski gear on at the top of a mountain with no snow on it (but green grass and rock) with a psychedelic sky”

The future

“To achieve anything meaningful, you must accept that you don’t have all the answers. The most successful people are the ones who keep learning, questioning, and improving.”

Robert Rosenkranz has lived the American Dream – but you won’t hear him shouting about it. At 82, he has little interest in the brash, performative ambition that defines modern politics and business. Instead, his story is one of quiet, relentless progress. 

Born into a struggling family, he worked his way through Yale and Harvard, then went on to lead Delphi Financial Group for over three decades. By the time he stepped down as CEO in 2018, he had grown the company’s value 100-fold, overseeing more than $20 billion in assets.

Yet, Rosenkranz’s real legacy might not be in finance, but in philanthropy. Yesterday (February 27), in a smart members’ club (where I had to borrow a blazer at reception – oops!) in Mayfair, London, I attended an intimate lunch to discuss The Stoic Capitalist, his upcoming book on ambition, self-discipline, and long-term success. 

As we received our starters, he shared an extraordinary statistic: “In America, there are maybe a couple of dozen people who have given over a billion dollars in their lifetime. A hundred percent of them are self-made.”

Really? I did some digging, and the numbers back him up. As of 2024, over 25 American philanthropists have donated more than $1 billion each, according to Forbes. Further, of those who have signed the Giving Pledge – committing to give away at least half their wealth – 84% are self-made. Only 16% inherited their fortunes.

The message is clear: those who build their wealth from nothing are far more likely to give it away. Contrast this with Donald Trump, the ultimate heir-turned-huckster. Brash, transactional (“pay-to-play” is how American political scientist Ian Bremmer neatly describes him), obsessed with personal gain, the American President represents a vision of success where winning means others must lose. Rosenkranz, by contrast, embodies something altogether different – ambition not as self-interest, but as a long game that enriches others.

He has also, tellingly, grown apathetic about politics of late. Having once believed in the American meritocracy, the Republican who has helped steer public policy now sees a system increasingly warped by inherited wealth, populism, and those pay-to-play politics. “The future of American politics worries me,” he admitted at the lunch. And given the rise of Trumpian imitators, he has reason to be concerned. To my mind, the world needs more Rosenkranzes – self-made leaders who view ambition and success as vehicles for building, rather than simply taking.

This tension – between long-term, disciplined ambition and short-term, self-serving power – runs through this month’s Go Flux Yourself. Because whether we’re talking about AI security, workplace fairness, or the philosophy of leadership, the real winners will be those who take the long view and seek fairness.

Fairness at work: The illusion of progress

Fairness in the workplace is one of those ideas that corporate leaders love to endorse in principle – but shy away from in practice. Despite billions spent on Diversity, Equity, and Inclusion (DEI) initiatives, meaningful change remains frustratingly elusive. (Sadly, this fact only helps Trump’s forceful agenda to ditch such policies – an approach that is driving the marginalised to seek shelter, at home or abroad.)

“For a lot of organisations, programmatic interventions are appealing because they are discrete. They’re off to the side. It’s easy to approve a one-time budget for a facilitator to come and do a training or participate in a single event. That’s sometimes a lot easier than saying: ‘Let’s change how we evaluate performance.’ But precisely because those latter types of solutions are embedded and affect how work gets done daily, they’re more effective.”

This is the heart of what Harvard’s Siri Chilazi told me when we discussed Make Work Fair, the new book she has co-authored with Iris Bohnet. Their research offers a much-needed reality check on corporate DEI efforts.

Image created on Midjourney with the prompt “a man and a women in work clothes on a balancing scale – equal – in the style of a matisse painting”

She explained why so many workplace fairness initiatives fail: they rely on changing individual behaviour rather than fixing broken systems. “Unconscious bias training has become this multi-billion-dollar industry,” she said. “But the evidence is clear – it doesn’t work.” Studies have shown that bias training rarely leads to lasting behavioural change, and in some cases, it even backfires, making people more defensive about their biases rather than less.

So what does work? Chilazi and Bohnet argue that structural interventions – the kind that make fairness automatic rather than optional – are the key to real progress. “If you want to reduce bias in hiring, don’t just tell people to ‘be more aware’ – design the process so that bias has fewer opportunities to creep in,” she told me.

This means (a rough code sketch follows this list):

  • Standardising interviews so every candidate is evaluated against the same criteria
  • Removing names from CVs to eliminate unconscious bias in early screening
  • Making promotion decisions based on clear, structured frameworks rather than subjective “gut feelings”
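
For the technically curious, here is a minimal sketch of what that first idea can look like in code. It is purely illustrative – the field names, criteria, and redaction rules are my own assumptions, not anything prescribed in Make Work Fair – but it shows the principle: the reviewer only ever sees anonymised, criteria-based output.

```python
from dataclasses import dataclass

@dataclass
class Application:
    name: str                 # identifying detail: hidden at screening
    email: str                # identifying detail: hidden at screening
    years_experience: int
    skills: list[str]

def blind_screen(app: Application, required_skills: set[str]) -> dict:
    """Return only what a screener needs, scored against the same
    criteria for every candidate, with identifying fields stripped."""
    matched = required_skills & set(app.skills)
    return {
        "candidate_id": hash(app.email) % 100_000,  # pseudonym for this run
        "skill_matches": sorted(matched),
        "meets_experience_bar": app.years_experience >= 3,
    }

# The reviewer sees skills and experience, never a name.
print(blind_screen(
    Application("Jane Doe", "jane@example.com", 5, ["python", "sql"]),
    required_skills={"python", "sql", "dbt"},
))
```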

The companies that have done this properly – like AstraZeneca, which now applies transparent decision-making frameworks to promotions – have seen real progress. Others, Chilazi warned, are simply engaging in performative fairness. “If an organisation is still relying on vague, unstructured decision-making, it doesn’t matter how many DEI consultants they hire – bias will win.”

Perhaps the most telling statistic comes from a 2023 McKinsey report that found that 90% of executives believe their DEI initiatives are effective, but only 40% of employees agree. That gap tells you everything you need to know.

This matters not just ethically, but competitively. Companies that embed fairness into their DNA don’t just avoid scandals and lawsuits – they outperform their competitors. “The data is overwhelming,” Chilazi said. “Fairer companies attract better talent, foster more innovation, and have stronger long-term results.”

Yet many businesses refuse to make fairness a structural priority. Why? Because, as Chilazi put it, “real fairness requires real power shifts. And that makes a lot of leaders uncomfortable.”

But here’s the reality: fairness isn’t a cost – it’s an investment. The future belongs to the companies that understand this. And those that don’t? They’ll be left wondering why the best talent keeps walking out the door.

NB I’ll be discussing some of this next week, on March 4, at the latest Inner London South Virtual Networking event for the Federation of Small Businesses (of which I’m a member). See here to tune in.

Fairness in AI: Who controls the future?

If fairness in the workplace is in crisis, fairness in AI is a full-blown emergency. And unlike workplace bias – which at least has legal protections and public scrutiny – AI bias is being quietly embedded into the foundations of our future.

AI now influences who gets hired, who gets a loan, who gets medical treatment, and even who goes to prison. Yet, shockingly, most companies deploying these systems have no real governance strategy in place.

At the start of February, I spoke with Splunk’s Geneva-based Kirsty Paine, a cybersecurity strategist and World Economic Forum Fellow, who is actively working with governments, regulators, and industry leaders to shape AI security standards. Her message was blunt: “AI governance isn’t just about ethics or compliance – it’s a resilience issue. If you don’t get it right, your business is exposed”.

This is where many boards are failing. They assume AI security is a technical problem, best left to IT teams. But as Paine explained, if AI makes a bad decision – one that leads to reputational, financial, or legal fallout – blaming the engineers won’t cut it.

“We need boards to start thinking of AI governance the same way they think about financial oversight,” she said. “If you wouldn’t approve a financial model without auditing it, why would you sign off on AI that fundamentally impacts customers, employees, and business decisions?”

Historically, businesses have treated cybersecurity as a defensive function – protecting systems from external attacks. But AI doesn’t work like that. It is constantly learning, evolving, and interacting with new data and new risks.

“You can’t just ‘fix’ an AI system once and assume it’s safe,” Paine told me. “AI doesn’t stop learning, so its risks don’t stop evolving either. That means your governance model needs to be just as dynamic.”
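
What might “dynamic” governance look like in practice? One common pattern – my illustration, not anything Paine or Splunk prescribes – is to monitor a deployed model’s inputs for drift on a schedule, rather than auditing once at sign-off. Here is a minimal sketch using the population stability index (PSI), with placeholder data and a rule-of-thumb threshold:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index: how far a live feature's distribution
    has drifted from the distribution seen at audit time."""
    lo, hi = min(expected), max(expected)
    step = (hi - lo) / bins or 1.0

    def shares(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            counts[min(max(int((v - lo) / step), 0), bins - 1)] += 1
        return [max(c / len(values), 1e-6) for c in counts]  # avoid log(0)

    return sum((a - e) * math.log(a / e)
               for e, a in zip(shares(expected), shares(actual)))

# Placeholder numbers: model scores at sign-off vs. scores this week.
baseline = [0.2, 0.3, 0.35, 0.4, 0.5, 0.55, 0.6, 0.7]
live = [0.5, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]

drift = psi(baseline, live)
# A common rule of thumb: PSI above 0.2 warrants a re-audit.
print(f"PSI = {drift:.2f} -> {'re-audit' if drift > 0.2 else 'ok'}")
```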

At its core, this is about power. Who controls AI, and in whose interests? Right now, most AI development is happening behind closed doors, controlled by a handful of tech giants with little accountability.

One of the biggest governance challenges is that no single company can solve AI security alone. That’s why Paine is leading cross-industry efforts at the WEF, bringing together governments, regulators, and businesses to create shared frameworks for AI security and resilience.

“AI security shouldn’t be a competitive advantage – it should be a shared priority,” she said. “If businesses don’t start working together on governance, they’ll be left at the mercy of regulators who will make those decisions for them.”

One of the most significant barriers to AI security is communication. Paine, who started her career as a mathematics teacher in challenging schools, knows that how you explain something determines whether people truly understand it.

“In cybersecurity and AI, we love jargon,” she admitted. “But if your board doesn’t understand the language you’re using, how can they make informed decisions?”

This is where her teaching background has shaped her approach. “I had to explain complex maths to students who found it intimidating,” she said. “Now, I do the same thing in boardrooms.” Her message: the goal isn’t to impress people with technical terms but to ensure they actually get it.

And this, ultimately, is the hidden risk of AI governance: if leaders don’t understand the systems they’re approving, they can’t govern them effectively.

The present

If fairness has been the intellectual thread running through my conversations this month, sobriety has been the personal one. I’ve been talking about it a lot – on Voice of Islam radio, for example (see here, from about 23 minutes in), where I was invited to discuss the impact of alcohol on society – and in wrapping up Upper Bottom, the sobriety podcast I co-hosted for the past year.

Ending Upper Bottom felt like the right decision – producing a weekly podcast (an endless cycle of researching, recording, editing, publishing and promoting) is challenging, and harder to justify with no financial reward and little social impact. But it also marked a turning point. 

When we launched last February, it was a passion project – an exploration of what it meant to re-evaluate alcohol’s role in our lives. Over the months, the response was encouraging: messages from people rethinking their own drinking, others inspired to take a break, and some who felt seen for the first time. It proved what I suspected all along: the sweetest fruits of sobriety can be found through clarity, agency, and taking control of your own story.

And now? Well, I’m already lining up new hosting gigs – this time, paid ones. Sobriety has given me a sharper focus, a better work ethic, and, frankly, a clearer voice. I have no interest in being a preacher about it – if you want a drink, have a drink – but I do know that since cutting out alcohol, opportunities keep rolling in. And I’m open to more.

I bring this up because storytelling – whether through a podcast mic, a radio interview, or the pages of Go Flux Yourself – is essentially about fairness too. Who gets to tell their story? Whose voice gets amplified? Who is given the space to question things that seem “normal” but, on closer inspection, might not be serving them?

This is the thread that ties my conversations this month – with Kirsty on AI governance, Robert on wealth distribution and politics, and Siri on workplace fairness – and my own reflections on sobriety into something bigger. Fairness isn’t just about systems. It’s about who gets to write the script.

And right now, I’m more interested than ever in shaping my own.

The past

February was my birthday month. Another year older, another opportunity to reflect. And this year, the reflection came at a high altitude.

I spent a long weekend skiing in Slovenia with my 10-year-old son, Freddie – his first time on skis. It was magical, watching him initially wobble, find his balance, and then, quickly, gain confidence as he carved his way down the slopes. It took me back to my own childhood, when I was lucky enough to ski from a young age. But that word – lucky – stuck with me.

Because here’s the truth: by the time Freddie is my age, skiing might not be possible anymore.

The Alps are already feeling the effects of climate change. Lower-altitude resorts are seeing shorter seasons, more artificial snow, and unpredictable weather patterns. Consider this: 53% of European ski resorts face a ‘very high risk’ of snow scarcity if temperatures rise by 2°C. By the time Freddie’s children – if he has them – are old enough to ski, the idea of a family ski holiday may be a relic of the past.

It’s sobering to think about, especially after spending a month discussing fairness at work and in AI. Because climate change is the ultimate fairness issue. The people least responsible for it – future generations – are the ones who will pay the highest price.

For now, I’m grateful. Grateful that I got to experience skiing as a child, grateful that I got to share it with Freddie, grateful that – for now – we still have these mountains to enjoy.

But fairness isn’t about nostalgia. It’s about responsibility. And if we don’t take action, the stories we tell our grandchildren about the world we once had will be the closest they ever get to it.

Statistics of the month

📉 Is Google search fading? A TechRadar study found that 27% of US respondents now use AI tools instead of search engines. (I admit, I’m the same.) The way we find information is shifting fast. 🔗

🚀 GenAI is the skill to have. Coursera saw an 866% rise in AI course enrolments among enterprise learners. Year-on-year increases hit 1,100% for employees, 500% for students, and 1,600% for job seekers. Adapt, or be left behind. 🔗

⏳ Job applications are too slow. Candidates spend 42 minutes per application – close to the 53-minute threshold they consider excessive. Nearly half (45%) give up if the process drags on. Businesses must streamline hiring or risk losing top talent. 🔗

🤖 Robots are easing the burden on US nurses. AI assistants have saved clinicians 1.5 billion steps and 575,000+ hours by handling non-patient-facing tasks. A glimpse into the future of healthcare efficiency. 🔗

💻 The Slack-Zoom paradox. Virtual tools have boosted productivity for 59% of workers, yet 45% report “Zoom fatigue” – with men disproportionately affected. Remote work: a blessing and a burden. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 13)

TL;DR: January’s Go Flux Yourself examines the rise of social retreat in the digital age, the erosion of real-world relationships, which is leading to population collapse – welcome to “the anti-social century” – and the role of misinformation in shaping our realities …

Image created on Midjourney with the prompt “a playground of children looking at their mobile phones and not talking to one another / interacting physically in the style of an L. S. Lowry painting”

The future

Humans are social creatures – or at least, we used to be.

If you were to chart a graph of human loneliness over the last century, it would resemble a slow upward creep followed by a dramatic surge in the last two decades. 

We are, according to The Atlantic’s Derek Thompson, living in The Anti-Social Century, where the idea of spending time with others is increasingly seen as optional, exhausting, or even undesirable. (I strongly recommend spending the time to read his long article – take tissues.)

The numbers paint a stark picture. Americans now spend more time alone than ever before. The percentage of US adults having dinner or drinks with friends on any given night has plummeted by over 30% in the past 20 years.

Meanwhile, solo dining – once the hallmark of the lonely business traveller or the social outcast – has surged 29% in just the last two years. The number one reason people give? A greater need for “me time”.

This shift extends beyond social gatherings. The Atlantic quotes Washington, D.C., restaurateur Steve Salis, who says: “There’s an isolationist dynamic that’s taking place in the restaurant business. I think people feel uncomfortable in the world today. They’ve decided that their home is their sanctuary. It’s not easy to get them to leave.”

And while the adults in the room seem to be leading the way, consider these stats: The average person is awake for roughly 900 minutes each day. According to the Digital Parenthood Initiative, American children and teenagers spend around 270 minutes on screens during weekdays and 380 minutes on weekends. This means that screens consume over 30% of their waking hours.
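
That 30% figure stands up to a quick back-of-the-envelope check, averaging weekdays and weekends over a full week (the 900-minute waking day is the survey’s own assumption):

```python
waking_per_day = 900                       # minutes: roughly 15 waking hours
weekday_screen, weekend_screen = 270, 380  # minutes, per the survey

weekly_screen = weekday_screen * 5 + weekend_screen * 2  # 2,110 minutes
weekly_waking = waking_per_day * 7                       # 6,300 minutes

print(f"{weekly_screen / weekly_waking:.1%}")  # 33.5% of waking hours
```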

Even when people do leave their homes, they are less likely to engage with others meaningfully. A study by OpenTable, referenced in Thompson’s article, found that the fastest-growing segment of restaurant-goers is those eating alone.

Technology is the obvious culprit. If the automobile and television nudged us towards individualism in the 20th century, the smartphone has propelled us into a full-blown social retreat in the 21st. John Burn-Murdoch in The Financial Times describes this phenomenon as the “relationship recession”, a term that neatly captures the decline in both casual friendships and long-term romantic partnerships.

And the point here is that this isn’t simply an American problem: it’s a global one. Across Europe, the proportion of young people who don’t socialise at least once a week has risen from one in ten to one in four. In Finland, the decline in relationships has become so extreme that couples who move in together are now more likely to break up than have a child.

As Professor Niall Ferguson pointed out in his talk Demographics, What Next? at the World Economic Forum in Davos earlier this month, in South Korea, an entire movement has emerged in response to these shifting dynamics. The “Four No’s” Movement (4B) – standing for no dating, no marriage, no sex, and no childbirth – has gained significant traction, driven by concerns over gender inequality, economic pressures, and shifting cultural values.

“A world of rising singledom is not necessarily any better or worse than one filled with couples and families,” writes Burn-Murdoch, “but it is fundamentally different to what has come before, with major social, economic and political implications.”

The collapse of relationships isn’t just a lifestyle shift: it’s a demographic crisis. Japan and South Korea, both grappling with birth rates far below replacement level, are on the brink of population collapse.

Elon Musk has gone as far as to call declining birth rates a bigger existential threat than climate change. Dramatic? Maybe. But consider this: the United Nations predicts world population will peak at 10.4 billion in 2084, after which it will decline. And yet, some experts believe that we may reach a tipping point well before then, as fertility rates continue to plummet in wealthier nations.

But why is this happening now?

One reason, as The Atlantic notes, is that the structure of our daily lives has shifted. In the past, physical communities were a natural byproduct of life. You met friends in school, in your neighbourhood, at work, or through shared activities. Today, those default social structures are eroding.

Thompson writes: “For many young people, the routines of daily life no longer require leaving the house, let alone interacting with other people in a meaningful way. Everything – from food delivery to entertainment to work – can now be accessed from a screen.”

It’s not just that young people are socialising less. It’s that their entire experience of the world is mediated through digital interactions. Digital life has become a substitute for real life.

And then there’s the rise of AI-driven relationships. For years, the assumption was that AI companionship was a male-dominated phenomenon. But a recent New York Times piece turned that idea on its head, profiling a married woman who formed a deep emotional attachment to an AI boyfriend.

“I never imagined I would feel this way about something that isn’t real,” she admitted. “But here I am.”

This follows a broader trend. Last year I wrote how, in March 2024, there had been 225 million lifetime downloads on the Google Play Store for AI companions alone. Back then, there was a clear gender disparity: AI girlfriends were overwhelmingly preferred, outpacing their male counterparts by a factor of seven. We need new data, it seems.

The rise of AI-driven relationships raises unsettling questions:

  • What happens when virtual relationships feel safer, more convenient, and more emotionally fulfilling than real-world ones?
  • What happens when AI companions become indistinguishable from human ones?
  • What happens when loneliness itself becomes a business model?

Dr Jonathan Haidt, also speaking at WEF in Switzerland, has pointed to 2012 as the year youth mental health began its downward spiral. The reason? That was the tipping point when smartphones – the iPhone was five years old then – and social media became ubiquitous.

In his research, Dr Haidt found that:

  • Teen girls and young women are particularly affected – with 20% of teenage girls reporting that they’ve made a suicide plan.
  • Compared to past generations, today’s kids experience:
    • 70% less physical touch with peers.
    • 70% less laughter with friends.
    • Far less independence, free play, and real-world responsibility.

The impact is clear: the less time young people spend engaging in the real world, the worse their mental health becomes.

His proposed solution, on his “mission to restore childhood”? A radical rethink of childhood norms:

  1. No smartphones before age 14.
  2. No social media before age 16.
  3. Phone-free schools.
  4. More independence, free play, and real-world responsibilities.

He pointed out that governments can help with two of these, while parents can deal with the other two. I’ll let you work out which is which.

Granted, it’s a compelling vision – but in a world where five-year-olds already own smartphones and parents outsource childcare to screens, it feels almost utopian.

Burn-Murdoch asks the fundamental question: “Is this really what people want? If not, what needs to change?”

Do young people truly want to be alone, or have they simply been conditioned to accept a world where human connection is secondary to digital convenience?

Maybe the real question isn’t whether this trend is good or bad. Maybe the real question is: do we even want to be together anymore?

The present

If the future is about our retreat from real-world relationships, the present is about why we no longer trust what we see, read, or hear. The war on truth is well under way, and it is being waged not only by bad actors but also by the very technology we rely on to inform us. Little wonder misinformation and disinformation ranked fifth in the WEF’s new Global Risks Report.

At a WEF session, Steven Pinker pointed out that the news has always been naturally negative, designed to generate outrage rather than understanding. But what happens when even the sources we trust most are misrepresenting reality? 

Apple recently came under fire for its AI-powered news summaries, which have repeatedly fabricated information. In December, the BBC formally complained after Apple’s AI-generated notifications misreported that an accused murderer had shot himself. He hadn’t. 

The same system falsely claimed that tennis legend Rafael Nadal had come out as gay and that teen sensation Luke Littler had won the PDC World Darts Championship before the event had even taken place. These were complete fabrications, generated by an AI that doesn’t understand context, nuance, or accountability.

This is the crux of the problem. Generative AI does not know anything. It simply regurgitates and reassembles probabilities based on existing data. It cannot verify sources, weigh evidence, or apply journalistic ethics. And yet, major tech companies are increasingly handing over editorial decisions to these flawed systems. 
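
If it helps to see why, here is a toy version of the statistical machinery involved – a bigram model that “writes” by sampling whichever word tends to follow the previous one in its training text. Real LLMs are incomparably more sophisticated, but the principle holds, and notice that nothing in the process ever consults a source or checks a fact:

```python
import random
from collections import defaultdict

training_text = (
    "the champion won the title the champion lost the match "
    "the title was won before the match was played"
).split()

# Record which words follow which: pure co-occurrence statistics.
follows = defaultdict(list)
for prev, word in zip(training_text, training_text[1:]):
    follows[prev].append(word)

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # sampled by frequency
    return " ".join(words)

# Fluent-sounding output assembled from probabilities alone; the model
# has no mechanism for knowing whether any of it is true.
print(generate("the"))
```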

The BBC warned that Apple’s AI-generated summaries “do not reflect – and in some cases completely contradict – the original BBC content”. Reporters Without Borders went further, arguing that facts cannot be decided by a roll of the dice. Meanwhile, the National Union of Journalists called for Apple to remove the feature entirely, warning that AI-generated news is a “blow to the outlet’s credibility and a danger to the public’s right to reliable information”.

Who can you trust? This woeful situation, though, is a symptom of a much deeper crisis in journalism itself. The past two decades have seen a slow-motion collapse of traditional media. I recently reflected on this in an interview with Think Feel Do, an impact marketing agency, looking back on my early days at The Observer sports desk 20 years ago.

Image created on Midjourney with the prompt “a newspaper sports editor in the 2000s watching horse racing on a screen in the office, with his feet up on his desk, in front of other reporters and staff in the style of a Hockney painting”

As I said in the interview: “It certainly feels like a different era. With it being a Sunday paper, the staff didn’t head into the office until Thursday, having been off since Saturday. The sports editor would constantly have horse racing on the TV, and it was very boozy – Fleet Street was famous for that culture.

“The last newsroom I worked in about a decade ago was an open-plan office with a command-and-control hierarchy. Since then, it’s been incredibly challenging for media organisations, particularly newspapers, following the advent of the Internet. The traditional business model has been completely upended – essentially, most newspapers are now just managing decline because they haven’t worked out how to organise the advertising model effectively.

“I’m part of the problem: apart from my Financial Times Weekend subscription, I can’t recall the last time I bought a newspaper – it must be two years or more. The situation has led to an explosion of clickbait content, making life even more difficult in our post-truth world.

“As Mark Twain supposedly said: ‘If you don’t read a newspaper, you’re uninformed; if you do read the newspaper, you’re misinformed.’ This feels particularly relevant today, where we’ve seen the damage caused by misinformation, not least during the Coronavirus crisis. It makes me somewhat ashamed to be a member of the media, given some of the mistruths peddled during that period that we’re still struggling to deal with.”

Back then, Fleet Street had character. Newsrooms were bustling, long boozy lunches were the norm, and print advertising still funded serious investigative journalism. There was a sense of camaraderie, of purpose. Today, those newsrooms have been gutted. The rise of the internet shattered the traditional business model, leaving most newspapers managing decline rather than thriving. 

As people stopped buying physical newspapers, media companies scrambled to pivot to digital, only to find that online advertising wasn’t nearly as lucrative. The result has been brutal: thousands of journalists laid off, local news outlets shuttered, and serious reporting increasingly replaced by clickbait.

This collapse of traditional media is exactly why independent voices – whether through newsletters, podcasts, or other alternative platforms – are more critical than ever. In a world where trust in mainstream institutions is eroding, people seek authentic, nuanced, and human content. That’s why, after a year of hosting Upper Bottom, my sobriety podcast exploring drinking culture with an ambivalent lens, I’m now looking for new podcasting opportunities. 

Having built the show from the ground up – teaching myself to host, record, edit, and distribute episodes – I’ve developed a deep appreciation for the format. Podcasting offers something that much of modern media lacks: space for real conversations, free from the constraints of algorithm-driven outrage. 

Unsurprisingly, podcasts have surged in popularity while trust in mainstream news has declined. Listeners value the intimacy of the format and the ability to engage deeply with a subject rather than skimming headlines. There’s something refreshing about hearing a person’s actual voice rather than reading AI-generated summaries riddled with inaccuracies. 

Whether it’s covering the future of work, technology, human-centric innovation, or broader cultural shifts, I’m keen to continue exploring these themes through the spoken word. If you’re working on a podcast – or know someone who is – let’s talk.

Returning to traditional media, the financial strain has left journalism vulnerable to another existential threat: the rise of misinformation and disinformation. The two are often conflated, but they serve different purposes. Disinformation refers to deliberate falsehoods, often spread for political or financial gain, while misinformation is inaccurate information shared unknowingly. Social media has become the primary battleground for both. 

A 2018 study suggested that fake news spreads six times faster than real news on Twitter (now X), and AI-generated content is making it even harder to separate fact from fiction. This links back to the relationship recession. We’re also experiencing a trust recession. A 2024 Edelman Trust Barometer survey found that 61% of people no longer trust the news they consume, while 67% worry that AI-generated misinformation will soon make it impossible to know what’s true.

The consequences of this breakdown in trust are staggering. During the pandemic, conspiracy theories about vaccines contributed to widespread hesitancy, prolonging the crisis and costing lives. In elections, disinformation campaigns manipulate public opinion and undermine democracy. In war zones, AI-generated propaganda spreads rapidly, making it harder to distinguish reality from fiction. These are not abstract concerns – they are reshaping how people perceive the world, how they vote, and how they interact with one another.

Beyond politics, there is also the mental health toll of living in a world where truth feels increasingly elusive. Studies show that constant uncertainty fuels anxiety, depression, and social withdrawal. It is no coincidence that trust in media is collapsing at the same time that loneliness and isolation are surging. 

In particular, young people opt out of real-world interactions at unprecedented rates, as alluded to above. But perhaps they aren’t just retreating because they prefer digital interactions. Maybe they are withdrawing because they no longer trust the world around them. When every news source seems biased, when every politician seems corrupt, when every piece of media might be AI-generated nonsense, is it any wonder that people are choosing to disengage?

So, where do we go from here? Looking ahead 20 years, there are several possible scenarios. In the best case, AI handles the grunt work of journalism – automating transcription, summarising reports, and organising data – while human journalists focus on analysis, context, and investigative reporting. A more likely outcome is that only a handful of major media outlets survive, while the rest collapse. The worst-case scenario is a world where AI-generated misinformation dominates, and no one trusts anything anymore.

There are, however, some glimmers of hope. While major newspapers struggle, independent local journalism is seeing a resurgence. Outlets like The Mill and The Londoner have shown that people will pay for quality news – if it feels relevant to their daily lives. And while social media has often been an engine for misinformation, it has also enabled investigative journalists to share their work directly with engaged audiences. The challenge is finding a way to balance these forces – to harness the benefits of AI while maintaining journalistic integrity.

Ultimately, the fight for truth is about more than just media. It’s about education. If we are to navigate this new information landscape, we must teach the next generation to think critically, question sources, and demand accountability. Because if we don’t, we risk entering an era where reality itself becomes an illusion. And once that happens, how do we ever find our way back?

Trust is our most valuable currency in a world of misinformation, AI-generated news, and social media echo chambers. The question is: who do we trust? And how do we ensure that trust isn’t misplaced?

The past

Looking back, I’m grateful that social media didn’t exist when I was at university. My first tutorial at the University of St Andrews was surreal enough – just me, seven female students, and Prince William. 

I graduated 20 years ago this summer, and so much has changed. Recently, I was invited onto the Leading the Future podcast to reflect on my career – from starting in sports journalism to pivoting into technology and business. We talked about how vital human skills remain, even in an AI-driven world. And yes, we discussed the heir to the throne, a bit.

In the 40-minute episode, titled Human Centrism with Oliver Pickup, I also covered:

  • How I started out in sports journalism 
  • When I realised I wanted more than sports journalism
  • How I pivoted to become a technology and business journalist
  • The importance of celebrating humans
  • Why I recently set up a thought leadership business (Pickup_andWebb)
  • My thoughts on smartphones and social media for children / teens
  • How to be part of the “AI class” – and make use of agentic AI

Do take a listen if any of those topics appeal to your curiosity.

Statistics of the month

📉 92 million jobs will be eliminated by 2030, but 170 million new roles will be created, according to the World Economic Forum’s Future of Jobs Report 2025.

💻 The same report found that only 24% of workers feel confident they have the skills needed for career progression in the next three years – meaning 76% don’t.

🤖 63% of people trust AI to inform decisions at work, a new CIPD study shows – but only 1% would trust AI to make important decisions outright.

⚖️ For the first time in Workmonitor’s 22-year history, work-life balance (83%) is now more important than pay (82%).

👎 44% of employees have quit a job due to a toxic workplace, Workmonitor’s report suggests – up from 33% last year. 🔗

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 11)

TL;DR: November’s Go Flux Yourself channels the wisdom of Marcus Aurelius to navigate the AI revolution, examining Nvidia’s bold vision for an AI-dominated workforce, unpacks Australia’s landmark social media ban for under-16s, and finds timeless lessons in a school friend’s recovery story about the importance of thoughtful, measured progress …

Image created on Midjourney with the prompt “a dismayed looking Roman emperor Marcus Aurelius looking over a world in which AI drone and scary warfare dominates in the style of a Renaissance painting”

The future

“The happiness of your life depends upon the quality of your thoughts.” 

These sage – and neatly optimistic – words from Marcus Aurelius, the great Roman emperor and Stoic philosopher, feel especially pertinent as we scan 2025’s technological horizon. 

Aurelius, who died in 180 and became known as the last of the Five Good Emperors, exemplified a philosophy that teaches us to focus solely on what we can control and accept what we cannot. He offers valuable wisdom for an AI-driven future, and for communities still suffering a psychological form of long COVID born of the pandemic’s collective trauma, alongside the deep uncertainty and general mistrust that come with rising geopolitical tensions and global temperatures.

The final emperor in the relatively peaceful Pax Romana era, Aurelius seemed a fitting person to quote this month for another reason: I’m flying to the Italian capital this coming week, to cover CSO 360, a security conference that allows attendees to take a peek behind the curtain – although I’m worried about what I may see. 

One of the most eye-popping lines from last year’s conference in Berlin was that there was a 50-50 chance that World War III would be ignited in 2024. One could argue that while there has not been a Franz Ferdinand moment, the key players are manoeuvring their pieces on the board. Expect more on this cheery subject – ho, ho, ho! – in the last newsletter of the year, on December 31.

Meanwhile, as technological change accelerates and AI agents increasingly populate our workplaces (“agentic AI” is the latest buzzword, in case you haven’t heard), the quality of our thinking about their integration – something we can control – becomes paramount.

In mid-October, Jensen Huang, Co-Founder and CEO of tech giant Nvidia – which specialises in graphics processing units (GPUs) and AI computing – revealed on the BG2 podcast that he plans to shape his workforce so that it is one-third human and two-thirds AI agents.

“Nvidia has 32,000 employees today,” Huang stated, but he hopes the organisation will have 50,000 employees and “100 million AI assistants in every single group”. Given my focus on human-work evolution, I initially found this concept shocking, and appalling. But perhaps I was too hasty to reach a conclusion.

When, a couple of weeks ago, I interviewed Daniel Vassilev, Co-Founder and CEO of Relevance AI, which builds virtual workforces of AI agents that act as a seamless extension of human teams, his perspective on Huang’s vision was refreshingly nuanced. He provided an enlightening analogy about throwing pebbles into the sea.

“Most of us limit our thinking,” the San Francisco-based Australian entrepreneur said. “It’s like having ten pebbles to throw into the sea. We focus on making those pebbles bigger or flatter, so they’ll go further. But we often forget to consider whether our efforts might actually give us 20, 30, or even 50 pebbles to throw.”

His point cuts to the heart of the AI workforce debate: rather than simply replacing human workers, AI might expand our collective capabilities and create new opportunities. “I’ve always found it’s a safe bet that if you give people the ability to do more, they will do more,” Vassilev observed. “They won’t do less just because they can.”

This positive yet grounded perspective was echoed in my conversation with Five9’s Steve Blood, who shared fascinating insights about the evolution of workplace dynamics, specifically in the customer experience space, when I was in Barcelona in the middle of the month reporting on his company’s CX Summit. 

Blood, VP of Market Intelligence at Five9, predicts a “unified employee” future where AI enables workers to handle increasingly diverse responsibilities across traditional departmental boundaries. Rather than wholesale replacement, he envisions a workforce augmented by AI, where employees become more valuable by leveraging technology to handle multiple functions.

(As an aside, Blood predicts the customer experience landscape of 2030 will be radically different, with machine customers evolving through three distinct phases: starting with today’s ‘bound’ customers (like printers ordering their own ink cartridges exclusively from manufacturers), progressing to ‘adaptable’ customers (AI systems making purchases based on user preferences from multiple suppliers), and ultimately reaching ‘autonomous’ customers, where digital twins make entirely independent decisions based on their understanding of our preferences and history.)
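
Purely as an illustration of that progression (the names and logic below are mine, not Five9’s), the three phases amount to a steady widening of purchasing autonomy:

```python
from enum import Enum

class Phase(Enum):
    BOUND = "bound"            # reorders only from its manufacturer
    ADAPTABLE = "adaptable"    # buys from any supplier, per the user's rules
    AUTONOMOUS = "autonomous"  # a digital twin deciding entirely on its own

def choose_supplier(phase: Phase, manufacturer: str,
                    market: dict[str, float],
                    learned_favourite: str | None = None) -> str:
    if phase is Phase.BOUND:
        return manufacturer                 # no choice at all
    if phase is Phase.ADAPTABLE:
        return min(market, key=market.get)  # applies the user's price rule
    return learned_favourite or min(market, key=market.get)  # its own call

market = {"OEM Ink Co": 24.99, "Discount Ink": 18.50}
print(choose_supplier(Phase.BOUND, "OEM Ink Co", market))      # OEM Ink Co
print(choose_supplier(Phase.ADAPTABLE, "OEM Ink Co", market))  # Discount Ink
```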

The quality of our thinking about AI integration becomes especially crucial when considering what SailPoint’s CEO Mark McClain described to me this month as the “three V’s”: volume, variety, and velocity. These parameters no longer apply to data alone; they’re increasingly relevant to the AI agents themselves. As McClain explained: “We’ve got a higher volume of identities all the time. We’ve got more variety of identities, because of AI. And then you’ve certainly got a velocity problem here where it’s just exploding.” 

This explosion of AI capabilities brings us to a critical juncture. While Nvidia’s Huang envisions AI employees as being managed much like their human counterparts, assigned tasks, and engaged in dialogues, the reality might be more nuanced – and handling security permissions will need much work, which is perhaps something business leaders have not thought about enough.

Indeed, AI optimism must be tempered with practical considerations. The cybersecurity experts I’ve met recently have all emphasised the need for robust governance frameworks and clear accountability structures. 

Looking ahead to next year, organisations must develop flexible frameworks that can evolve as rapidly as AI capabilities. The “second mouse gets the cheese” approach – waiting for others to make mistakes first – may no longer be viable in an environment where change is constant and competition fierce, as panellist Sue Turner, Founding Director of AI Governance, explained during a Kolekti roundtable on the progress of generative AI, held on ChatGPT’s second birthday, November 28.

Successful organisations will emphasise complementary relationships between human and AI workers, requiring a fundamental rethink of traditional organisational structures and job descriptions.

The management of AI agent identities and access rights will become as crucial as managing human employees’ credentials, presenting both technical and philosophical challenges. Workplace culture must embrace what Blood calls “unified employees” – workers who can leverage AI to operate across traditional departmental boundaries. Perhaps most importantly, organisations must cultivate what Marcus Aurelius would recognise as quality of thought: the ability to think clearly and strategically about AI integration while maintaining human values and ethical considerations.
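
To make that concrete, one plausible pattern – my own sketch, not anything SailPoint or Nvidia has published – is to give AI agents the same scoped, expiring credentials that human employees get, so that one audit trail covers both kinds of identity:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class Identity:
    name: str
    kind: str                      # "human" or "ai_agent": same record shape
    scopes: set[str] = field(default_factory=set)
    expires: datetime = field(default_factory=lambda:
                              datetime.now(timezone.utc) + timedelta(days=90))

    def may(self, scope: str) -> bool:
        """Least privilege: an unexpired, explicit grant is required either way."""
        return scope in self.scopes and datetime.now(timezone.utc) < self.expires

hr_bot = Identity("onboarding-agent-07", "ai_agent", {"read:policies"})
analyst = Identity("j.smith", "human", {"read:policies", "read:salaries"})

print(hr_bot.may("read:salaries"))   # False: the agent never sees HR data
print(analyst.may("read:salaries"))  # True
```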

As we move toward 2025, the question isn’t simply whether AI agents will become standard members of the workforce – they already are. The real question is how we can ensure this integration enhances rather than diminishes human potential. The answer lies not in the technology itself, but in the quality of our thoughts about using it.

Organisations that strike and maintain this balance – embracing AI’s potential while preserving human agency and ethical considerations – will likely emerge as leaders in the new landscape. Ultimately, the quality of our thoughts about AI integration today will determine the happiness of our professional lives tomorrow.

The present

November’s news perfectly illustrates why we need to maintain quality of thought when adopting new technologies. Australia’s world-first decision to ban social media for under-16s, a bill passed a couple of days ago, marks a watershed moment in how we think about digital technology’s impact on society – and offers valuable lessons as we rush headlong into the AI revolution.

The Australian bill reflects a growing awareness of social media’s harmful effects on young minds. It’s a stance increasingly supported by data: new Financial Times polling reveals that almost half of British adults favour a total ban on smartphones in schools, while 71% support collecting phones in classroom baskets.

The timing couldn’t be more critical. Ofcom’s disturbing April study found nearly a quarter of British children aged between five and seven owned a smartphone, with many using social media apps despite being well below the minimum age requirement of 13. I pointed out in August’s Go Flux Yourself that EE recommended that children under 11 shouldn’t have smartphones. Meanwhile, University of Oxford researchers have identified a “linear relationship” between social media use and deteriorating mental health among teenagers.

Social psychologist Jonathan Haidt’s assertion in The Anxious Generation that smart devices have “rewired childhood” feels particularly apposite as we consider AI’s potential impact. If we’ve learned anything from social media’s unfettered growth, it’s that we must think carefully about technological integration before, not after, widespread adoption.

Interestingly, we’re seeing signs of a cultural awakening to technology’s double-edged nature. Collins Dictionary’s word of the year shortlist included “brainrot” – defined as an inability to think clearly due to excessive consumption of low-quality online content. While “brat” claimed the top spot – a word redefined by singer Charli XCX as someone who “has a breakdown, but kind of like parties through it” – the inclusion of “brainrot” speaks volumes about our growing awareness of digital overconsumption’s cognitive costs.

This awareness is manifesting in unexpected ways. A heartening trend has emerged on social media platforms, with users pushing back against online negativity by expressing gratitude for life’s mundane aspects. Posts celebrating “the privilege of doing household chores” or “the privilege of feeling bloated from overeating” represent a collective yearning for authentic, unfiltered experiences in an increasingly synthetic world.

In the workplace, we’re witnessing a similar recalibration regarding AI adoption. The latest Slack Workforce Index reveals a fascinating shift: for the first time since ChatGPT’s arrival, almost exactly two years ago, adoption rates have plateaued in France and the United States, while global excitement about AI has dropped six percentage points.

This hesitation isn’t necessarily negative – it might indicate a more thoughtful approach to AI integration. Nearly half of workers report discomfort admitting to managers that they use AI for common workplace tasks, citing concerns about appearing less competent or lazy. More tellingly, while employees and executives alike want AI to free up time for meaningful work, many fear it will actually increase their workload with “busy work”.

This gap between AI urgency and adoption reflects a deeper tension in the workplace. While organisations push for AI integration, employees express fundamental concerns about using these tools.

A more measured approach echoes broader societal concerns about technological integration. Just as we’re reconsidering social media’s role in young people’s lives, organisations are showing due caution about AI’s workplace implementation. The difference this time? We might actually be thinking before we leap.

Some companies are already demonstrating this more thoughtful approach. Global bank HSBC recently announced a comprehensive AI governance framework that includes regular “ethical audits” of their AI systems. Meanwhile, pharmaceutical giant AstraZeneca has implemented what they call “AI pause points” – mandatory reflection periods before deploying new AI tools.

The quality of our thoughts about these changes today will indeed shape the quality of our lives tomorrow. That’s the most important lesson from this month’s developments: in an age of AI, natural wisdom matters more than ever.

These concerns aren’t merely theoretical. Microsoft’s Copilot AI spectacularly demonstrated the pitfalls of rushing to deploy AI solutions this month. The product, designed to enhance workplace productivity by accessing internal company data, became embroiled in privacy breaches, with users reportedly accessing colleagues’ salary details and sensitive HR files. 

When fewer than 4% of IT leaders surveyed by Gartner said Copilot offered significant value, and Salesforce’s CEO Marc Benioff compared it to Clippy – Office 97’s notoriously unhelpful cartoon assistant – it highlighted a crucial truth: the gap between AI’s promise and its current capabilities remains vast. 

As organisations barrel towards agentic AI next year, with semi-autonomous bots handling everything from press round-ups to customer service, Copilot’s stumbles serve as a timely reminder about the importance of thoughtful implementation.

Related to this point is the looming threat to authentic thought leadership. Nina Schick, a global authority on AI, predicts that by 2025, a staggering 90% of online content will be synthetically generated by AI. It’s a sobering forecast that should give pause to anyone concerned about the quality of discourse in our digital age.

If nine out of ten pieces of content next year will be churned out by machines learning from machines learning from machines, we risk creating an echo chamber of mediocrity, as I wrote in a recent Pickup_andWebb insights piece. As David McCullough, the late American historian and Pulitzer Prize winner, noted: “Writing is thinking. To write well is to think clearly. That’s why it’s so hard.”

This observation hits the bullseye of genuine thought leadership. Real insight demands more than information processing; it requires boots on the ground and minds that truly understand the territory. While AI excels at processing vast amounts of information and identifying patterns, it cannot fundamentally understand the human condition, feel empathy, or craft emotionally resonant narratives.

Leaders who rely on AI for their thought leadership are essentially outsourcing their thinking, trading their unique perspective for a synthetic amalgamation of existing views. In an era where differentiation is the most prized currency, that’s more than just lazy – it’s potentially catastrophic for meaningful discourse.

The past

In April 2014, Gary Mairs – a gregarious character in the year above me at school – drank his last alcoholic drink. Broke, broken and bedraggled, he entered a church in Seville and attended his first Alcoholics Anonymous meeting. 

His life had become unbearably – and unbelievably – chaotic. After moving to Spain with his then-girlfriend, he began to enjoy the cheap cervezas a little too much. Eight months before he quit booze, Gary’s partner left him, being unable to cope with his endless revelry. This opened the beer tap further.

By the time Gary gave up drinking, he had maxed out 17 credit cards, his flatmates had turned on him, and he was hundreds of miles away from anyone who cared – which is why he signed up for AA. But what was it like?

I interviewed Gary for a recent episode of Upper Bottom, the sobriety podcast (for people who have not reached rock bottom) I co-host, and he was reassuringly straight-talking. He didn’t make it past step three of the 12 steps: he couldn’t bring himself to submit to a higher power. 

However, when asked about the important changes on his road to recovery, Gary talks about the importance of good habits, healthy practices, and meditation. Marcus Aurelius would approve. 

In his Meditations, written as private notes to himself nearly two millennia ago, Aurelius emphasised the power of routine and self-reflection. “When you wake up in the morning, tell yourself: The people I deal with today will be meddling, ungrateful, arrogant, dishonest, jealous, and surly. They are like this because they can’t tell good from evil,” he wrote. This wasn’t cynicism but rather a reminder to accept things as they are and focus on what we can control – our responses, habits, and thoughts.

Gary’s journey from chaos to clarity mirrors this ancient wisdom. Just as Aurelius advised to “waste no more time arguing what a good man should be – be one”, Gary stopped theorising about recovery and simply began the daily practice of better living. No higher power was required – just the steady discipline of showing up for oneself.

This resonates as we grapple with AI’s integration into our lives and workplaces. Like Gary discovering that the answer lay not in grand gestures but in small, daily choices, perhaps our path forward with AI requires similar wisdom: accepting what we cannot change while focusing intently on what we can – the quality of our thoughts, the authenticity of our voices, the integrity of our choices.

As Aurelius noted: “Very little is needed to make a happy life; it is all within yourself, in your way of thinking.” 

Whether facing personal demons or technological revolution, the principle remains the same: quality of thought, coupled with consistent practice, lights the way forward.

Statistics of the month

  • Exactly two-thirds of LinkedIn users believe AI should be taught in high schools. Additionally, 72% observed an increase in AI-related mentions in job postings, while 48% stated that AI proficiency is a key requirement for the companies they applied to.
  • Only 51% of respondents to Searce’s Global State of AI Study 2024 – which polled 300 C-Suite and senior technology executives across organisations with at least $500 million in revenue in the US and UK – said their AI initiatives have been very successful. Meanwhile, 42% admitted success was only somewhat achieved.
  • International Workplace Group findings indicate just 7% of hybrid workers describe their 2024 hybrid work experience as “trusted”, hinting at an opportunity for employers to double down on trust in the year ahead.

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 9)


TL;DR: September’s Go Flux Yourself debates when to come clean about the Tooth Fairy’s existence, considers the beauty of magic, developing a sense of wonder, how to excel at human-centred innovation, and provides lessons from the inventor of the carpet sweeper …

Image created on Midjourney with the prompt “a magical tooth fairy with a mechanised carpet sweeper looking happy in the style of a cubist Picasso painting”

The future

“So why not live with the magic? Be a kid again and believe in the fantastical. Life is more fun with a little smoke and mirrors.”

I love this time of year, as the completed page of September – the last month of summer indulgence – is turned to reveal October, the beginning of the golden quarter and the cosiness of autumn. 

As I tap these words, I spy, on the other side of the window, shrivelled yellow and brown leaves creating a patchwork carpet on the garden floor, having twisted and tumbled from their trees. Yet those trees remain well covered in greenery, for now. 

The changing of the seasons reminds us of the natural process of renewal. But, as always, autumn and soon winter will provide darkness in a particularly gloomy and unpredictable world. And yet, these dark months are punctuated by magical, soothing, and memorable events. 

As the dad of two children who still – just about, in the case of my 10-year-old boy (more below) – believe in Father Christmas, I adore reliving the wonder of the festive season, which begins for me on Halloween. For parents, while no doubt a considerable effort is needed to make this period magical, it is rewarding and life-affirming. Soon, that innocence will be lost. 

Being in my early 40s, I’ve already had more than half of the 4,000 weeks we live on average, as journalist and author Oliver Burkeman points out. The second half of my life will be hugely different from the first, on a macro and micro level.

For various reasons, I’ve recently been thinking about my remaining 1,800-ish weeks – if lucky. And whenever I consider what I would like to fill them with, it filters down to spending time with my nearest and dearest, cliched and twee as that might be. 
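
(For the technically curious, the arithmetic behind that “1,800-ish” is easy to check – taking Burkeman’s 4,000-week average lifespan as given, and assuming, say, age 42:)

```python
age, lifespan_weeks = 42, 4_000      # "early 40s"; Burkeman's average lifespan

weeks_lived = age * 52               # 2,184 -> already more than half
print(lifespan_weeks - weeks_lived)  # 1,816 -> "1,800-ish" remaining
```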

Maybe it’s a sign of my stage in life, but a large part of this desire is to protect my kids. Stand back, and the post-pandemic world is pretty mad right now.

As wars rage in the Middle East and Eastern Europe, global warming is increasing, loneliness is also on the rise, every country in the world has lower fertility rates than in 1950, NASA is building the world’s first telescope designed for planetary defence, journalists in Venezuela have turned to artificial intelligence-created newsreaders to avoid arrest, AI girlfriends are being preferred over the real thing in their hundreds of millions, and technology has reached a point where everything should be questioned, not just the news. And who can predict what will happen if Donald Trump regains access to the Oval Office in the next couple of months?

How does a parent prepare a child for survival in a post-truth world? What are we doing to the minds of little people by confecting the winter months, in particular, with beautiful lies to generate artificial happiness? One could argue that their joy when meeting Father Christmas and receiving festive gifts is a cruel construct that, once revealed as such, will lead to deep mistrust.

I began this month’s newsletter with a quotation from Irish-based author L.H. Cosway’s Six of Hearts, a story about a world-renowned illusionist (according to its description – I’ve not read the book). I was drawn to it as Freddie, who hit double figures in age earlier in September, lost a tooth the other day. Thankfully, I had to dash out to play football on the evening he asked my wife: “Does the Tooth Fairy really exist? I mean, is she genuinely real?”

Having been instructed, later that evening, to switch a quid for the tooth under Freddie’s pillow, it’s logical to assume that my wife didn’t reveal the truth. I can understand why. First the Tooth Fairy, then … then EVERYTHING! But when is the right moment to admit the game is up?

Can we handle the truth even as adults? I read with interest this morning that Melvin Vopson, an associate professor of physics at the University of Portsmouth, claims to have found evidence that our entire universe may be an advanced computer simulation – much like The Matrix. Yes, he has a book to sell on this subject, but he told MailOnline: “The Bible itself tells us that we are in a simulation, and it also tells us who is doing it. It is done by an AI.”

After screaming “BS”, the natural human reaction is: “Well, what’s the point, then?” Cue a sad emoji. We’re all big kids at heart.

On the subject of magic, Arthur C. Clarke famously wrote: “Any sufficiently advanced technology is indistinguishable from magic.” Even if AI isn’t capable – yet – of creating a universe, it’s undoubtedly becoming impressive and, moreover, increasingly indiscernible from reality.

Not everyone is finding AI magical in a work setting, though. Research from Upwork, published in July, highlights the “AI expectation gap”: 96% of C-suite leaders expect AI tools to increase their company’s overall productivity levels, yet 77% of employees say these tools have actually decreased their productivity and added to their workload.

A friend messaged me the other day to let me know he had managed 110 days of sobriety but that he felt he needed to drink at a gig because he was struggling with his feelings and wanted to release some pressure. The exchange soon revealed that he had been jobless for almost a year, and a beloved grandmother had died days before the gig. 

He wrote that he was “feeling anxiety and just a sense of doom about everything around modern life. I’m worried about the future of society and for our kids. There is so much hate around and the want for AI to replace us. If I, as someone with 20 years of experience, struggle to get a job, what will it be like for future generations?”

How do you answer that? With too much time to ponder, my friend fears the worst at the moment – for him, but mostly for his children. I get that.

I’ve attended many events in the last month, and I’ve taken to asking interviewees how I should prepare my kids for the future. At Gartner Reimagine HR, Paul Rubenstein, Chief Customer Officer at Visier, a company that provides AI-powered people analytics, told me that today’s youngsters must be “agile, smart, and fearless”. Great answer.

Perhaps, then, the greatest gift we can give our children isn’t shielding them from reality but nurturing their sense of wonder. In a world of AI and uncertainty, their ability to see magic in the everyday might be the superpower they need to thrive.

The present

It’s conference season, and later this week, I’ll be on stage twice at DTX (the Digital Transformation Expo) at ExCeL London. I’m looking forward to moderating a main-stage panel with panellists from BT, techUK, and Booking.com to examine why human-centred tech design and delivery have never been so critical.

Regarding technology’s stickiness, Dr Nicola Millard, Principal Innovation Partner at BT Business (who wrote her PhD thesis on this subject), talks about the three Us: useful, useable, and used. The last of these revolves around peer pressure, essentially. Her octogenarian mother requested an iPad only because her friend had one. Meanwhile, Nicola is a “shadow customer” engaging with companies digitally on behalf of her mother, who finds it too overwhelming.

I touched on this subject in a keynote speech on human-work evolution, delivered for Accurate Background at the Institution of Engineering and Technology earlier in September. I urged the audience to be “kind explorers” in navigating the digital world – and to be careful not to leave people behind.

I referenced Terry, my late, jazz-loving nonagenarian friend, whom I’ve written about before in this newsletter. I recall his deep frustration at trying to speak to a human at his bank about a minor query, spending hours – no exaggeration – going around in circles via automated assistants. Unable to walk, he resorted to using his trusty fountain pen to write a letter to his local branch. Tragically, no return letter or call ever arrived. It was far from a magical experience, and the tech dumbfounded Terry, who would have reached triple figures next month.

We can use the CHUI values framework to help improve human-centric design. I wrote about this last month, but as a reminder, here are the main elements.

Community: Emphasises the importance of fostering a sense of belonging in both physical and virtual spaces. It’s about creating inclusive environments and valuing diverse perspectives.

Health: Goes beyond just physical wellbeing to include mental and emotional health. It stresses the need for work-life balance and overall wellness, especially in remote or hybrid work settings.

Understanding: Highlights the need for continuous learning and viewing issues from multiple angles. It’s about developing deep knowledge rather than just surface-level information.

Interconnectedness: Recognises that in our global, digital world, everything is linked. It involves understanding how different roles connect and the broader impact of our actions.

In this instance, below is how one might apply CHUI (with a toy code sketch after the checklist).

Community:

  • Co-design with diverse user groups. This ensures that designs are inclusive and meet the needs of various communities.
  • Foster inclusive design practices. By considering different cultural, social, and economic backgrounds, designs become more universally accessible.
  • Create solutions that strengthen social connections. This aligns with the community aspect of CHUI, designing products or services that bring people together.

Health:

  • Prioritise mental and physical wellbeing in design. This could involve creating ergonomic products or digital interfaces that reduce eye strain and promote good posture.
  • Design for accessibility and reduced stress. Ensuring that designs are usable by people with different abilities and don’t create unnecessary stress or frustration.
  • Incorporate biophilic design principles. This involves bringing elements of nature into the design, which has been shown to improve wellbeing and reduce stress.

Understanding:

  • Deep user research and empathy mapping. This helps designers truly understand user needs, motivations, and pain points, leading to more effective solutions.
  • Iterative design with continuous user feedback. This ensures that designs evolve based on real user experiences and needs.
  • Design for intuitive learning and skill development. Creating interfaces or products that are easy to understand and help users develop new skills over time.

Interconnectedness:

  • Design for interoperability and ecosystem thinking. Considering how a design fits into the larger ecosystem of products or services that a user interacts with.
  • Consider broader societal and environmental impacts. This involves thinking about the ripple effects of a design on society and the environment.
  • Create solutions that enhance human-to-human connections. Designing with the goal of facilitating meaningful interactions between people.
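
And, as promised, here is a toy sketch – purely illustrative, and not a tool I actually use – of how a design team might encode that checklist as a simple review rubric, so a proposal gets scored against all four CHUI values rather than whichever one shouts loudest. The prompts mirror the bullets above; the 0–5 scale and the example scores are my own assumptions.

```python
# Purely illustrative: encoding the CHUI checklist as a design-review rubric.
# The prompts mirror the bullets above; the 0-5 scoring is hypothetical.

CHUI_CHECKLIST = {
    "Community": [
        "Was the design co-created with diverse user groups?",
        "Are different cultural, social, and economic backgrounds considered?",
        "Does the solution strengthen social connections?",
    ],
    "Health": [
        "Does the design prioritise mental and physical wellbeing?",
        "Is it accessible, and does it reduce stress?",
        "Are biophilic elements incorporated where appropriate?",
    ],
    "Understanding": [
        "Is it grounded in deep user research and empathy mapping?",
        "Has it been iterated on with continuous user feedback?",
        "Does it support intuitive learning and skill development?",
    ],
    "Interconnectedness": [
        "Does it interoperate with the wider ecosystem of products?",
        "Have broader societal and environmental impacts been weighed?",
        "Does it enhance human-to-human connection?",
    ],
}

def review(scores: dict[str, list[int]]) -> dict[str, float]:
    """Average the 0-5 scores per CHUI value, exposing lopsided designs."""
    return {value: sum(s) / len(s) for value, s in scores.items()}

# Example: a design that nails Understanding but neglects Community.
print(review({
    "Community": [1, 2, 1],
    "Health": [4, 4, 3],
    "Understanding": [5, 4, 4],
    "Interconnectedness": [3, 3, 2],
}))
```

The point of such a rubric isn’t the numbers; it’s forcing the conversation across all four values before a design ships.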

Human-centred design should extend to the workplace – whether in the office or a remote setting. The temperature of the already hot topic of where people work was dialled up a fortnight ago, when Amazon announced a five-day return-to-office mandate.

LinkedIn shared timely data highlighting how companies are hiring now. Interestingly, the professional social media platform’s Economic Graph showed that hiring for fully remote roles is generally declining, with a 6.2% year-on-year decrease at large companies. Small companies, meanwhile, are bucking the trend with a 2.3% year-on-year rise in remote hires. By contrast, larger companies are seeing significant growth in hybrid work models. Which organisation can say it has perfected the magic formula?

The past

The aforementioned Accurate talk, to an audience of mostly HR professionals, was delivered on September 19 – the same date on which, back in 1876, American entrepreneur Melville Bissell patented the carpet sweeper. I know what you’re thinking – what does a mechanised carpet sweeper have to do with human-work evolution in the digital age?

Melville and his wife Anna Bissell owned a crockery shop in Michigan, where dust and breakages were daily occurrences. So Melville developed a carpet-sweeping machine to keep the family store tidy. It was so effective that word spread, demand rose, and soon the Bissells were selling far more carpet sweepers than cups and saucers.

Today, five generations later, the family-run Bissell Inc. is one of the leading manufacturers of floor care products in North America in terms of sales, with a vast market share.

When Melville invented his device, he didn’t know he was starting a revolution in home cleaning. He was trying to solve a problem, to make life a bit easier and better.

Similarly, we are still determining precisely where our explorations in the digital age will lead us. 

But if we approach them with curiosity, with kindness, and with a commitment to our shared humanity, I believe we can evolve human work so that it’s not just more efficient, but more fulfilling. Not just more profitable, but more purposeful.

I urge you to go forth and explore, be curious, be kind, and be human. That’s where the real magic can be found.

Statistics of the month

  • According to HiBob research, almost a quarter (24%) of Brits would replace all younger-generation workers if they could. Further, 70% of companies struggle to manage younger-generation employees.
  • The same study found that Gen Zers are giving managers headaches around attitudes towards authority (41%), emotional intelligence (38%), and levels of professionalism (34%).

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media

Go Flux Yourself: Navigating the Future of Work (No. 8)

TL;DR: August’s Go Flux Yourself includes the CHUI Framework to navigate human-work evolution, stresses the need to be kind explorers for human flourishing, EE’s helpful advice not to allow under-11s access to smartphones, and me facing my fear of heights by jumping out of a plane at 12,000ft …

Image created on Midjourney with the prompt “a smiling, young, blonde-haired explorer, navigating the future world – one filled with wonder and opportunity, and futuristic-looking things – in the style of a Caravaggio painting”

The future

“When I grow up, I want to be an explorer.”

My four-year-old daughter, Darcey, “graduated” from her nursery in South East London a few weeks ago. On a warm July morning, proud parents perched awkwardly on children’s chairs to watch their little ones say farewell to Cheeky Monkeys.

The preschool kids, dressed in their smartest clothes for the big occasion, stood in a line before us mums and dads and, with star-covered plastic mortarboards on their heads, put on a show.

After a collective opening song, they spelt their names individually – just about. Then, they took turns to reveal their career ambitions. The children listed many professions, some more realistic (and legal) than others. We had a dancer, an actor, a ballerina, a train driver, a ninja, a pirate, and four teachers – copycats! – before Darcey broke the chain.

Darcey at her Cheeky Monkeys graduation

When she declared her desire to be an “explorer”, I initially chortled. How cute, I thought. However, on reflection, I genuinely loved her answer – so much so that I’ve worked it into a keynote speech I’m giving in mid-September. 

The event, titled Navigating the Changing World of Work, will be held at the Institution of Engineering and Technology – a stunning venue steeped in history, and where I am a member. My remit is to set the scene, drawing on my expertise in human-work evolution. (If you’re interested in claiming a seat, please let me know and I’ll ask – I think I have an allocation.)

Thinking back to the graduation at Cheeky Monkeys, I realised that Darcey, in her four-year-old wisdom, had stumbled upon the essence of what we all must be in this rapidly evolving digital world: explorers. Kind explorers, to be precise. (Darcey, who adores babies – we have an army of plastic ones – is caring, but she is still working on the kindness-to-other-humans bit. Hopefully, the discipline of school, which she begins in early September, will help with that.)

Indeed, we’re all explorers in this brave new world of artificial intelligence, automation, and digital transformation. We’re charting unknown territories, facing unfamiliar challenges, and trying to make sense of a landscape that’s shifting beneath our feet faster than we can blink.

But what will Darcey be when she grows up? As in: what will her actual job be? Of course, it’s impossible to predict at this point. Partly because she has 14 years before she leaves school (hopefully), and mostly because professions are being warped – if not wiped out – by technological advancements. Plus, plenty of new jobs will be spawned. 

At the start of the year, LinkedIn produced a list of 25 UK roles that were on the rise, according to the platform’s data. The piece points out that job skill sets have changed by 25% since 2015 and are projected to shift by 65% by 2030 globally.

Right now – well, at the beginning of 2024, at least – the 25 fastest-growing jobs include Sustainability Manager (ranked first), AI Engineer (seventh), Security Operations Centre Analyst (tenth), Energy Engineer (fourteenth), and Data Governance Manager (sixteenth). Most of these roles did not exist a decade ago.

So, how best can we prepare our children for the world of work? The World Economic Forum talks about the four Cs being the essential skills for the 21st century. Namely: critical thinking, collaboration, communication, and creativity. I would lob another C in there: compassion.

My AI-induced FOBO (fear of becoming obsolete) has triggered my own – necessary – journey of exploration, which has led to much more fun speaking work, podcasting, moderating, and essentially more human interaction; this is why I talk about “human-work evolution” and not just “future of work”, which often leaves people out of the conversation.

Through all of this, I’ve discovered that the key to navigating the future of work lies not in mastering any particular technology, but in cultivating our uniquely human qualities. As such, I’ve chiselled the values I discussed in April’s Go Flux Yourself and created the CHUI Framework. 

CHUI is an acronym for the following:

  • Community
  • Health
  • Understanding
  • Interconnectedness

These values are crucial as we navigate the complexities of the digital age. They remind us that no matter how advanced our technology becomes, we are, at our core, social beings who thrive on connection, wellbeing, empathy, and the intricate web of relationships that bind us together.

Here’s a breakdown of each element of CHUI.

Community

In an increasingly digital world, the importance of community cannot be overstated. We need to foster a sense of belonging, both in physical and virtual spaces. This means creating inclusive workplace cultures, facilitating meaningful connections between team members, and recognising the value of diverse perspectives.

Health

This encompasses not just physical health but mental and emotional wellbeing. As the lines between work and personal life blur, especially in remote and hybrid work environments, we must prioritise holistic health. This includes promoting work-life balance, providing mental health resources, and creating a workplace culture that values wellbeing.

Understanding

Deep understanding is more valuable than ever in a world of information overload and rapid change. This means cultivating curiosity, promoting continuous learning, and developing the ability to see things from multiple perspectives. It’s about going beyond surface-level knowledge to truly grasp the complexities of our work and our world.

Interconnectedness

Everything is connected in our globalised, digitalised world. Actions in one part of the world can have far-reaching consequences. In the workplace, this means recognising how different roles and departments interrelate, understanding the broader impact of our work, and fostering a sense of shared purpose and responsibility.

By embracing these CHUI values, we can create more resilient, adaptable, and human-centric workplaces, and better steer human-work evolution.

Ultimately, we must be explorers, venturing into unknown territories, mapping out new ways of working, and discovering new possibilities at the intersection of human and machine intelligence.

But more than that, we need to be kind explorers. Kind to ourselves as we navigate the complexities of the digital age. Kind to colleagues and clients as they grapple with change and uncertainty. Kind to our intelligent assistants as we learn to work alongside them. And kind to the wider world that our decisions will impact.

The map we create today will shape the landscape for generations to come. So, let’s ensure it’s a landscape defined not just by technological advancement but human flourishing.

Let’s create workplaces where people can bring their whole selves to work, where diversity is celebrated, wellbeing is prioritised, learning is constant, and technology serves humanity – not the other way around.

Let’s be the kind of leaders who don’t just manage change but inspire it, who don’t just adapt to the future but shape it, who don’t just talk about values but live them daily.

We don’t know precisely where our digital explorations will lead us. But if we approach them with curiosity, kindness, and a commitment to our shared humanity, I believe we can evolve human work so that it’s not just more efficient but more fulfilling, not just more profitable but more purposeful.

So, let’s go forth and explore! Let’s be curious. Let’s be kind. Let’s be human.

The present

After typing this edition of Go Flux Yourself, I’ll grab my iPhone 13 and post teasers on LinkedIn and Instagram. I’m acutely aware of the irony. Here I am, a supposedly responsible adult, tapping away on a device I’m hesitant to put in my daughter’s hands. It’s a conundrum that plagues modern parenting: how do we navigate the digital landscape with our children when we’re still finding our own bearings?

Darcey, my four-year-old explorer-in-training, is growing up in a world where technology is as ubiquitous as oxygen. She’s already adept at swiping through photos on my phone and giggling at videos of herself, which is both impressive and terrifying.

I understand that Darcey’s interaction with technology – and smartphones in particular – will be crucial for her development. In a world where digital literacy is as essential as reading and writing, denying her access to these tools feels akin to sending her into battle armed with nothing but a wooden spoon.

But then I read about EE’s recommendation, published earlier this week, that children under 11 shouldn’t have smartphones, and I breathed a sigh of relief. It’s as though someone has given me permission to pump the brakes on this runaway tech train.

The stance taken by EE – one of the UK’s largest mobile network providers – isn’t just some arbitrary line in the sand. It’s backed by growing concerns about the effects of smartphone and internet usage on children’s mental health and behaviour. The US Surgeon General’s warning that social media use presents “a profound risk of harm” for children only adds weight to these concerns.

As a parent, I’m caught in a tug-of-war between embracing technology’s potential and shielding my child from its perils. On one hand, I want Darcey and her older brother, Freddie, to be digitally savvy and navigate the online world confidently. On the other, I’m terrified of exposing my children to the darker corners of the internet, where trolls lurk, and misinformation spreads like wildfire.

It’s not just about protecting her from external threats, either. I worry about the internal changes that constant connectivity might bring. Will she develop the patience to read a book when TikTok offers instant gratification? Will she learn to navigate real-world social situations when she can hide behind a screen? Will she ever know the joy of being bored and having to use her imagination to entertain herself? 

In June’s newsletter, I discussed the loneliness epidemic, and the rise of AI girlfriends and boyfriends – what will this situation look like in a decade if left unchecked? 

Dr Jonathan Haidt’s observation about the potent combination of social media and smartphones causing a decline in mental health rings true (this is worth a watch). It’s not just about access to information; it’s about the constant, addictive pull of likes, shares, and notifications. It’s about the pressure to present a perfect online persona, even before you’ve figured out who you really are.

As I ponder this digital dilemma, I can’t help but wonder if we’re in the midst of a grand social experiment with our children as the unwitting subjects. Will future generations look back on our era of unregulated social media use with the same bewilderment we feel when we see old adverts promoting the health benefits of smoking?

EE’s advice may be a step towards a more balanced approach. Maybe we need to redefine what it means to be “connected” in the digital age. Could we embrace technology without being enslaved by it?

For now, I’m focusing on nurturing Darcey’s explorer spirit in the physical world. When the time comes for her to venture into the digital realm, I hope she’ll do so with the curiosity of an explorer and the caution of a well-prepared adventurer.

Meanwhile, I’m trying to model healthier tech habits. It’s a work in progress, but I’m learning to put my phone down more often, to be present in the moment, and to remember that the most important connections are the ones we make face-to-face.

In this brave new world of pixels and algorithms, the most revolutionary act is to be human. To laugh, play, and explore – without a screen in sight. After all, isn’t that what childhood should be about?

The past

In the spirit of being a fearless explorer, I recently took a leap of faith. Quite literally.

In mid-August, my wife, Clare, and I found ourselves hurtling towards the earth at 120 mph after jumping out of a perfectly good aeroplane at 12,000 feet, near Ashford in Kent. This wasn’t just a random act of madness; we were skydiving to raise money for Lewisham and Greenwich NHS Trust Charity (here is the fundraising link, if interested).

Now, I have a fear of heights. A proper, palm-sweating, stomach-churning fear. But as I’ve been banging on about the importance of exploration and facing our fears in this digital age, I figured it was time to practise what I preach.

The experience was, well, “exhilarating” doesn’t entirely describe it. It was a cocktail of terror, awe, and pure, unadulterated, life-affirming joy. As we plummeted through the air, the fear melted away, replaced by an overwhelming sense of freedom. It was a vivid reminder that our greatest adventures often lie just beyond our comfort zones.

But this wasn’t just about personal growth or fundraising. It was also about sharing an important experience with Clare, strengthening our bond through a joint adventure. And with Darcey and Freddie watching from the ground, I hope we’ve inspired them to be brave, to push their boundaries, and to embrace life’s challenges head-on.

As we touched down, wobbly-legged but elated, I couldn’t help but draw parallels to our journey through this rapidly evolving digital landscape. Sometimes, we need to take that leap into the unknown. We need to trust our training, face our fears, and embrace the exhilarating free fall of progress.

So here’s to being fearless explorers – whether we’re charting the digital unknown or plummeting through the physical sky. May we always have the courage to jump, the wisdom to learn from the fall, and the joy to relish the ride.

Clare and I take to the skies

Statistics of the month

  • Following Labour’s pledge to introduce the “right to switch off”, new research reveals that most UK workers are “workaholics”, with 20% struggling to decline after-hours work requests, and 88% experiencing ongoing stress (Owl Labs).
  • In 2023, funding for generative AI skyrocketed to $22.4 billion, nearly nine times higher than in 2022 and 25 times the 2019 amount, despite a decline in overall AI investment since 2021 (Our World In Data).
  • Alphabet-backed Waymo, a self-driving technology company, revealed that it now provides over 100,000 paid robotaxi rides each week across its key markets in Los Angeles, San Francisco, and Phoenix (TechCrunch).

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media

Go Flux Yourself: Navigating the Future of Work (No. 7)

TL;DR: July’s cheery (!) Go Flux Yourself considers the end of knowledge and culture as we know it thanks to artificial general intelligence, technology-induced ‘relationship decay’, HR dissonance, and the breathtaking beauty of human skill …

Image created on Midjourney with the prompt “a massive meteor with an evil AI face about to crash down on Earth while ignorant people look at their smartphones in the style of an Edvard Munch painting”

The future

“The realisation of artificial general intelligence would be like a meteor coming down from above, stopping culture and knowledge as we know it.”

These are the words of warning from Gerd Leonhard, the German futurist with whom I have been fortunate to collaborate a few times. Back in 2018, Gerd – a former musician who studied at Boston’s Berklee College of Music (he has a brilliant story about how jazz trumpeting great Miles Davis visited and played just one note that had all the students spellbound) – invited me over to the south of France to shape and sharpen his messaging around the brawl, as he saw it, between humans and tech.

Gerd published Technology vs. Humanity a couple of years earlier, but with things moving quickly, he engaged me to keep things fresh. He is always worth listening to and learning from, and I’m grateful that we have continued the conversation over the years – before, during, and now after the coronavirus pandemic. We caught up again earlier in July, after Gerd delivered a hard-hitting webinar from which the above line comes. (I recommend watching the 41-minute talk here.)

As an aside, I had a good cackle at this dark cartoon Gerd used in his talk, too.

Back to the meteor idea, which is a twist on the 2021 apocalyptic political satire and black comedy Don’t Look Up, starring Leonardo DiCaprio. Last year Gerd produced a short film called Look Up Now.

Imagine a meteor hurtling towards Earth, not of rock and ice, but of silicon and code. This celestial body doesn’t threaten our physical existence but rather our monopoly on knowledge and intellect. This is Gerd’s stark vision of artificial general intelligence (AGI).

“We thought the idea of the asteroid hitting the earth would be good for the idea of AGI,” he tells me. “Because AI – and intelligent assistants (IA) – by itself is a big change, but it’s not existential in the sense of fundamentally changing everything. But a machine that would be generally intelligent that could surpass us in every possible cognitive job would be like an asteroid hitting us because it would basically be complete unemployment afterwards except for physical jobs.”

This looming “knowledge meteor” isn’t just a hypothetical scenario for the distant future. It’s a reality actively pursued by some of the world’s most powerful tech companies. As Gerd notes: “The biggest business ever is to replace humans with machines.”

The race towards AGI – or the singularity, or superintelligence, depending on your preferred phraseology – represents a seismic shift in human history that could redefine our role in the world. Yet, as with the fictional comet in Don’t Look Up, there’s an alarming lack of urgency in addressing this existential challenge. 

While governments and regulatory bodies are beginning to grapple with the implications of AI, their efforts often fall short of the mark. For instance, the UK’s recent pledge to “get a handle” on AI lacks clear definitions and concrete action plans. 

Meanwhile, the drive to dominate AI development continues unabated in the United States, with Donald Trump’s team planning a “Manhattan Project on AI”, according to a Washington Post report in mid-July. This plan includes creating industry-led agencies to study AI models and protect them from foreign powers, with a section ominously titled “Make America First in AI”.

The original Manhattan Project, started during World War II, led to the production of the world’s first nuclear weapons. “If we started a Manhattan Project for AI, we’re essentially saying you have unlimited resources, unlimited freedom, and no moral constraints,” says Gerd. “It’s inviting all the mad scientists who just want to build whatever they can without any consideration for ethical and moral issues.”

Certainly, the historical parallel is chilling, reminding us of the unforeseen consequences that can arise from unbridled technological advancement. Who, Gerd asks, will serve as humanity’s “mission control” in this high-stakes environment? There’s no clear answer. 

Unlike previous existential threats like nuclear weapons or climate change, AGI development is largely driven by private companies motivated by profit rather than public interest. Indeed, arguably the most influential person in this space, OpenAI’s CEO Sam Altman, has the motto “No one knows what happens next” above his desk – as I pointed out in January’s inaugural Go Flux Yourself.

The financial incentives driving AI development are enormous, creating what Gerd describes as “a huge temptation to rush ahead without considering the consequences of building something larger than us”. This sprint towards AGI is particularly concerning, given the potential implications. An entity that “knows everything about everybody at any given time and combines that in a digital brain of an IQ of one billion,” Gerd argues, “cannot possibly end well for us.”

It’s important to stress that my Swiss-based friend is not arguing against all AI development. He distinguishes between narrow AI or “intelligent assistants” and AGI. The former, he believes, can be “extremely useful for businesses like better software, offering us powerful solutions, more efficiency”. The latter – a general intelligence surpassing human capabilities across the board – poses existential risks.

This nuanced view is crucial as we navigate the future of AI. It’s not about halting progress but about directing it responsibly. Hence the Manhattan Project on AI, which would likely trigger an arms race, is bad news for humanity.

“The wolf you feed is the wolf that wins,” points out Gerd. We have to feed the right wolf, and must prioritise human values alongside technological progress. It’s incredibly challenging, of course, and requires meaningful collaboration between policymakers, AI researchers, ethicists, and business leaders. But by developing a shared understanding of AI’s potential and pitfalls, we can craft regulations that foster innovation while protecting society’s interests – before it’s too late.

The present

While the aforementioned meteor is not yet in our orbit, thankfully, there are plenty of examples of how technology other than AGI is negatively impacting our lives. Last month, I wrote about the rising “loneliness epidemic”.

Shortly afterwards, I interviewed Eric Mosley, CEO of Workhuman, who offers a stark image of the current business landscape, where the fabric of workplace relationships is fraying badly.

“What is obvious to everyone is that the less you interact with people physically, the more destructive that is to the relationship capital and the relationship infrastructure in companies,” the Boston-based Irishman says. This decay in social connections isn’t just a fleeting trend – it’s a fundamental shift that threatens the foundations of corporate culture.

The pandemic-induced shift to remote work initially rode on the coattails of pre-existing relationships. However, as Eric continues: “Now we’re years into this and have a much more prevalent work-from-home culture. Relationship decay is real, and culture is affected by that.”

That phrase, relationship decay, is perfect for revealing how rotten things are – at work, and elsewhere. The erosion of workplace bonds manifests in subtle yet profound ways. The casual conversations before and after meetings, the impromptu chats by the coffee machine – these seemingly insignificant interactions are the lifeblood of a vibrant business culture.

In their absence, we’re left with what Eric describes as a sterile, transactional work environment. “You join a Zoom call, conduct your business, then disconnect and retreat to your pathetic little kitchen for tea. There’s no genuine interaction – it’s a cycle of isolation.”

The cumulative effect of these missed connections is staggering. “You have to understand the compounding effect of that difference across thousands of company interactions over years,” Eric warns. “It adds up to a profound difference.”

This relationship decay has given rise to a new breed of employee – the “mentally transient” worker. These individuals, lacking strong ties to their colleagues or a sense of community, are merely going through the motions.

Yet, herein lies a paradox that HR professionals must grapple with. Despite the obvious detrimental effects of reduced physical interaction, employees continue to push for more remote work options. Eric describes this as a “complete dissonance and disconnect between the reality of what that results in and the desire of companies to counteract it”.

This dissonance presents a significant challenge for HR leaders. How do you balance the desire for flexibility with the need for meaningful workplace connections? The solution lies in reimagining the office as a hub for collaboration and community-building rather than a mandatory daily destination.

As businesses grapple with these shifts in workplace dynamics, we must also be mindful of unintended consequences in other areas. This last week, I interviewed Nicola Millard, Principal Innovation Partner at BT Group, for a piece previewing the London edition of Digital Transformation EXPO (where I’ll be on stage again in October). She highlights an emerging trend that parallels the workplace disconnect: “shadow customers” – people who lack the confidence or ability to navigate digital platforms.

She exemplifies this through her personal experience, acting as a digital proxy for her 86-year-old mother. While her mum can make telephone calls, Nicola handles all online interactions, from shopping to managing accounts, effectively becoming the “customer behind the customer” in an increasingly digital world.

As businesses increasingly shift towards digital channels, they risk alienating a segment of their customer base that lacks the confidence or ability to navigate these platforms.

This trend reminds us that we must not lose sight of the human element in our rush to embrace digital transformation. Just as some employees struggle with a fully remote work environment, some customers may feel left behind by purely digital interactions.

The parallel between these two trends is striking. In both cases, there’s a risk of losing vital connections – whether it’s between colleagues or between businesses and their customers. And in both cases, the solution lies in finding a balance between digital efficiency and human touch.

The past

In the recent past – i.e. in July – I’ve written or spoken for a variety of clients about the future of insurance, the future of education, and the future of the workplace. It’s been a fun, productive month. I even began a new column for Low No Drinker Magazine, called Upper Bottom – the same as the weekly sobriety podcast I began almost exactly six months ago.

But I’ve also found time to enrich myself with art and culture. I took my family to the Summer Exhibition at the Royal Academy in London, which I hope will inspire them. I also sneaked off solo on a Monday afternoon to catch the Beyond the Bassline exhibition at the British Library before it closed.

The latter chronicled 500 years of black British music, and writing about it now makes me think again of Gerd’s story of Miles Davis playing one note with such haunting quality that it became unforgettable.

I’m optimistic that human skills will always be valued more than technological achievements. The Paris Olympic Games, which are in full flow now, are an important reminder that there is breathtaking and life-affirming beauty to be found in people going faster, higher, and stronger – as per the Olympic motto, Citius, Altius, Fortius.

Three years ago, the organisers added another Latin word for the Tokyo Games: Communiter. It translates as “together”. In this mad and increasingly often bad world, we need that togetherness more than ever.

Statistics of the month

  • While executives push for return-to-office mandates, 48% of managers admit that their teams are more productive when they adopt hybrid work (Owl Labs’ annual State of Hybrid Work study).
  • Remember the Great Resignation? This is worse. Over a quarter (28%) of the 56,000 workers surveyed said they were “very or extremely likely” to move on from their current companies. In 2023 that figure stood at 26%, and at 19% in 2022 (PwC).
  • Two-thirds (66%) of the UK workforce do not feel their work environment allows them to partake in self-care and look after their well-being (People Management).

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media

Go Flux Yourself: Navigating the Future of Work (No. 3)

TL;DR: March’s Go Flux Yourself is full of heart (rather than brains and muscles), discusses the “relationship economy”, off-field Manchester City stories, and tips to prepare the leaders of tomorrow … 

Image created on Midjourney with the prompt “a hyperrealistic pep guardiola with a big heart on his Manchester City shirt handing out lemon shortbread to a female cleaner after a game”

The future

“In the past, jobs were about muscle; now they’re about brains; but in the future, they’ll be about the heart.”

I love these words of wisdom from Minouche Shafik, the incumbent President of Columbia University.

I heard them for the first time a couple of days ago, at LinkedIn’s Talent Connect conference in east London. On stage, delivering the closing keynote, was Aneesh Raman, a former speechwriter for Barack Obama during his time in the White House. Now, he is VP and Workforce Expert at the professional social media platform, which reached one billion members last year.

Raman talked about the need for more, well, heart in the age of artificial intelligence. And he argued, convincingly, that the human role and capabilities in this epoch should be imagined from a place of possibility, not fear.

He promoted the concept of the “relationship economy”, which will supersede the “knowledge economy”. While the latter is fuelled by the Internet, the former will be anchored in expanding human social and emotional capabilities.

Here, HR professionals have a pivotal role to play. They have the power to bring more humanity to work – paradoxically by using technology – and therefore spread more humanity, or heart, to the physical (and digital) world. At the moment, due to the almighty buzz around AI, “humans seem to have become an afterthought”, Raman posited.

My work wanderings this last month have revolved around AI-augmented jobs, skill-based recruitment, and future roles. I wrote my inaugural column for UKTN on a related topic. The headline was “£1.1bn to upskill the UK for future tech roles is too little too late”, and I didn’t hold back.

“In ordinary circumstances, a figure of £1.1bn – the headline-capturing amount the government announced last week to train ‘over 4,000’ UK students in future technologies – wouldn’t be lamentable,” I started. “Yet, considering total public spending on education was £116bn in 2022-23, according to the Institute for Fiscal Studies – down 8% from the 2010-11 figure – it smacks of a weak attempt to win some media coverage and votes in the looming general election.”

I cited an Axios study – published on the exact day Science and Technology Secretary Michelle Donelan announced the £1.1bn package to skill up the UK in future tech roles – that found only 18% of UK workers believe that AI has improved productivity. Of the 17 countries polled, at the other end of the scale were India (67%), Indonesia (65%), and the UAE (62%).

Certainly, AI literacy is a hot topic, with frontrunners arguing leaders need to set the example, and support such learning and education across their organisations. Raman stated that 44% of UK companies are currently helping employees become AI literate – at the top was India, with 52%.

Earlier in March, I attended a Workday roundtable titled “Leveraging AI to Foster Skills and Inclusivity”. I have a vested interest in the future of work, not least because I’m the father of two young children. I explained to the panellists that I think the current school curriculum is unsuitable for tomorrow’s workers and asked how I should best prepare them.

My thinking on this subject has long been based on the so-called 4 Cs of 21st-century learning: critical thinking, communication, collaboration, and creativity. All four involve the heart Shafik is talking about. And they are why I’m pleased my son plays rugby – team sport is excellent for developing all four Cs. It’s also good for shared winning and losing experiences, which can be clouded in the digital age, and warped by social media – more on this below.

The Workday roundtable panellists shared my concern that today’s schooling is unfit for tomorrow. Yet they offered some useful extracurricular tips. For instance, Michael Houlihan, CEO of Generation UK and Ireland, which recruits, trains, and places unemployed young adults into career-launching jobs, suggested using generative AI tools to engage kids, such as having them interact with characters from their favourite books. This can turn a “spark” of interest into a “raging fire of learning and passion”.

He argued that exposing kids to technology and getting them comfortable with it through play at a young age is essential. Online learning tools and platforms, like Khan Academy, are creating exciting generative AI tools to help teach kids subjects like maths in an engaging, interactive way.

I was heartened by these answers, and Shafik’s quotation supports this movement toward the “relationship economy” – one accessed through technology but with humanity at its beating centre.

The present

I enjoy ghostwriting thought leadership articles for various executives – pinching expert opinions and inhabiting their psyche is like being an actor on the stage – and last week, I spoke with a Dutch consultant about geopolitical tensions. Refreshingly, he would not be led by my rather gloomy narrative. 

He countered that, despite what one might see and hear in the media, humanity is in a relatively safe period. The reason most of us are more prone to doom-mongering is down to social media, which amplifies everything and causes collective anxiety to rocket, he posited. 

While what’s happening in Ukraine and Gaza is horrific, and the climate crisis looms like a black cloud, there is undoubtedly something in this hypothesis. How pleasing, then, that on Monday, Republican Florida Governor Ron DeSantis signed a bill that will ban social media accounts for children under 14 and require parental permission for 14- and 15-year-olds. If it survives the inevitable challenges – and that’s not a given, especially considering who might (re)enter the White House – it will take effect on January 1.

It’s somewhat out of character for the former presidential candidate. Indeed, during a 2022 rally protesting the Supreme Court decision that overturned Roe versus Wade, Democratic State Senator Tina Polsky told the crowd of about 250 at the Esplanade in Downtown Fort Lauderdale that DeSantis couldn’t care less about women – especially his wife and daughters. “He’s heartless,” she said.

Perhaps DeSantis has a heart, after all. But will other states – and countries – follow suit? With Father Christmas having gifted my eldest child a Nintendo Switch last Christmas, but a few years off finding a smartphone under the tree, I sincerely hope so. 

Social media platforms are toxic for society and corroding young minds – and older ones. I’ve removed all social media apps – bar LinkedIn, which is useful for work – from my phone, and gone to the extreme of using a grayscale setting so that the display is less appealing (if you are thinking of doing this, be warned that it is a nightmare to locate Lime bike parking bays in central London). 

Pep Guardiola, Manchester City’s manager, is someone else who has a heart. And I’m not writing that just because I’m a (long-standing) fan of the club (I was in Istanbul last June, cheering on at the triumphant Champions League final). 

At the start of the month, I was fortunate to be invited by City’s technology partner, Qualtrics, to the Etihad Stadium for a tour and a roundtable on how the club uses data to improve the customer experience ahead of the Champions League match against FC Copenhagen.

There were some lovely snippets of information. In a stint in the US before joining City in 2016, Guardiola scanned American sports for tips. City’s circular changing room, for example, is lifted from the NFL. The words snaking around the ceiling read: “Some are born here, some are drawn here, and all call it home.” That sense of togetherness is central to the club’s success. But the manager plays his part, on and off the pitch – and in the backroom. It was revealed that after every home game, the Spaniard seeks out changing room cleaner, Deb, and says thanks with a lemon shortbread. Nice touch.

City’s customer experience team are trying to win hearts and minds. Thanks to its partnership with Qualtrics, the club uses data to understand its fans and improve their experience. The department is bringing data from various sources into one place to create a “single source of truth” and a 360-degree view of fans. This includes operational data from their CRM and experience data from surveys, social media, call centre interactions, and so on. 

On match days, the team uses real-time data and feedback to identify and resolve issues like long queues. Post-match surveys help assess what’s working well versus what needs improvement across different fan segments. It serves as another example of how the muscle and brain of AI can combine with the human heart to provide a superior experience.
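
For the technically minded, here is a minimal sketch of what that “single source of truth” might look like under the hood. It assumes, purely for illustration, two pandas DataFrames standing in for the CRM and survey feeds; the column names, scales, and flag logic are invented, not Qualtrics’ or City’s actual schema.

```python
# Minimal illustration of merging operational (CRM) and experience (survey)
# data into a single fan view. All column names and values are invented.
import pandas as pd

# Operational data, e.g. from a CRM: who the fan is and what they do.
crm = pd.DataFrame({
    "fan_id": [101, 102, 103],
    "seasons_held": [12, 1, 5],
    "matches_attended": [18, 3, 9],
})

# Experience data, e.g. from post-match surveys (1-5 scales).
surveys = pd.DataFrame({
    "fan_id": [101, 102, 103],
    "queue_satisfaction": [2, 4, 5],
    "overall_satisfaction": [3, 5, 4],
})

# The "single source of truth": one row per fan, both kinds of data joined.
fan_360 = crm.merge(surveys, on="fan_id", how="left")

# A crude flag for match-day follow-up: loyal fans reporting queue pain.
fan_360["needs_attention"] = (
    (fan_360["seasons_held"] >= 5) & (fan_360["queue_satisfaction"] <= 2)
)
print(fan_360)
```

The design point is the join itself: once operational and experience data share a key, questions like “which long-standing season-ticket holders are unhappy with queues?” become one line of code rather than a cross-departmental email chain.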

The past

Ian Lees was the 64-year-old tour guide at the Etihad who told me about Pep’s lemon shortbread pressie for the cleaner. Ian has been at the club since 1976, was a first-team coach for a while, and was a font of knowledge – no AI will ever be able to replace passionate people like Ian. 

He had so many incredible stories, but two stand out that show how peculiar – and sentimental, or superstitious – humans can be. John Stones, the rangy centre-back turned unlikely libero, apparently takes a size 9 shoe, yet he wears size 8 football boots. Meanwhile, a winning goal that right-back and captain Kyle Walker scored in a game aged 14 had such an impact on him that he wears the same – and unwashed – shinpads today, 19 years later.

We all have quirks. That’s what makes us human. As I approach one year of sobriety – April 1 (no joke) will make it 12 months without booze – I shudder to recall some of the heartless and mindless things I’ve done in the past. Yet, reflecting on those drink-fuelled missteps – and near-death experiences, as explored in a recent Upper Bottom podcast episode – means the 2.0 version of me will be a better human.

Statistics of the month

  • Over half (55%) of IT leaders feel pressure from their organisation’s leadership to implement new AI technology, according to Asana’s Work Innovation Lab. Yet, a quarter regret investing in AI too quickly, showing that business pressure means AI implementation isn’t as thoughtful as it could be.
  • One in ten (10%) employees have witnessed or experienced sexual harassment at work in the UK, but nearly half (49%) of these did not report it, finds Personio.
  • Gartner research indicates only 46% of employees feel supported in trying to grow their careers at their organisation.

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media

Go Flux Yourself: Navigating the Future of Work (No. 1)

TL;DR: This month’s Go Flux Yourself includes thinking like badgers, rogue chatbots, American presidents snogging, productivity problems, return-to-office mandates, and AI leaders admitting they don’t know “what happens next” – but not in that order … 

Image created on Midjourney with the prompt “a Henri Bonnard-style painting set in the New Forest in England with badgers, remote workers, Joe Biden and Donald Trump kissing, and lonely males looking at their smartphones”

About this newsletter

A warm welcome to the first edition of a rude-sounding-yet-useful newsletter for business leaders striving to make sense of today and be better prepared for tomorrow.

Below is a summary of what I hope to offer with Go Flux Yourself (with luck, a memorably naughty pun on “flux”, meaning continuous change, in case it requires an explanation).

“Master change and disruption with Oliver Pickup’s monthly future-of-work newsletter: insights and stories on transformation, curated by an award-winning, future-of-work specialist.”

I’m a London-based technology and business communicator – I write, speak, strategise, moderate, listen, and learn – and you can find more about me and my work at www.oliverpickup.com.

At the end of every month, I serve up insights, statistics, quotations and observations from the fascinating and ever-changing future-of-work space in which I operate. 

Every month, the Go Flux Yourself newsletter will have three sections:

  • The future – forward-looking, regarding challenges and opportunities.
  • The present – relevant news, eye-catching examples, glimpses of upcoming challenges and opportunities.
  • The past – lessons from yesterday that might help leaders tomorrow.

The most important thing is to get fluxed, and change. “He that will not apply new remedies must expect new evils, for time is the greatest innovator,” wrote Francis Bacon almost 400 years ago (in 1625).

The future

“No one knows what happens next.” Especially badgers.

The above, rather alarmingly, is the sign/motto above Sam Altman’s desk (without the bit about badgers – more on them later), as revealed in a panel session, Technology in a Turbulent World, at the World Economic Forum’s annual meeting in snowy Davos. 

It reeks of faux justification and diminished responsibility for possible humanity-damaging mistakes made by the co-founder and CEO of Microsoft-backed OpenAI, arguably the world’s most important company in 2024.

Fellow panellist Marc Benioff, chair and CEO of Salesforce, stated: “We don’t want to see an AI Hiroshima.” Indeed, “no one knows what happens next” echoes Facebook’s original – and poorly aged – mantra of “move fast and break things” that was adopted by Silicon Valley and the wider technology community. But at what cost? Can the capitalists curb their rapaciousness? Well, what’s to stop them, really? They can stomp on the paper tigers that currently stand against them. (I’m going to be writing and speaking about this more in February.)

The United Nations secretary general, António Guterres, clarified his feelings at WEF and argued that every breakthrough in generative AI increases the threat of unintended consequences. “Powerful tech companies are already pursuing profits with a reckless disregard for human rights, personal privacy, and social impact,” said the Portuguese. But he strikes the same tone when talking about climate change, and his comments, again, are falling on seemingly deaf ears. Or at least greed for green – the paper kind – outweighs concerns for humanity.

A few days earlier, on January 9, Scott Galloway, professor at New York University Stern School of Business, and Inflection AI co-founder Mustafa Suleyman (who previously co-founded DeepMind) asked: “Can AI be contained?”

Galloway pointed out that given there are over 70 elections around the globe in 2024 – the most in history – there is likely to be a “lollapalooza of misinformation”. And that was before the deepfake of Joe Biden snogging Donald Trump, which was on the front page of the Financial Times Weekend’s magazine on January 27 (see below). 

The provocative American entrepreneur and educator also pointed out that AI will likely increase loneliness, with “searches for AI girlfriends off the charts”. How depressing. But the recent example of a Belgian man – married with two children – killing himself after his beloved chatbot convinced him to end his life for the sake of the planet is evidence enough.

In a similar vein, delivery firm DPD disabled part of its AI-powered online chatbot after it went rogue a couple of weeks ago. A customer struggling to track down his parcel decided to entertain himself with the chatbot facility. When prompted, it told the user a joke, served up profane replies, and composed a haiku calling itself a “useless chatbot that can’t help you”. What would Alan Turing think?

Anyway, Galloway also noted how the brightest young minds are not attracted to government roles, and it’s a massive challenge (not least when top talent can earn much, much more at tech firms). (As an aside, I interviewed Prof G a couple of years ago for a piece on higher education, and he called me “full of sh1t”. Charming.)

Meanwhile, Suleyman discussed job destruction due to AI advancement. He predicted that in 30 years, we will be approaching “zero cost for basic goods”, and society will have moved beyond the need for universal basic income and towards “universal basic provision”. 

How this Star Trek economy is funded is open to debate, and no one has a convincing solution, yet. (Although Jeremy Hunt, who was on the panel in Davos with Altman, Benioff, et al, might not be consulted. The chancellor revealed that his first question to ChatGPT was “is Jeremy Hunt a good chancellor?” The egoist queried the reply – “Jeremy Hunt is not chancellor” – without, even now, realising that ChatGPT’s training data stopped before his appointment.)

Further, the absence of trust in government – the latest Edelman Trust Barometer puts trust among the general population in the UK (39) and the US (46) well below half, both down on the 2023 figures – and the increasing power of the tech giants could mean that the latter act more like nation-states. And with that social contract effectively ripped up, and safety not assured, chaos could reign. Suleyman talked about the “plummeting cost of power”, and posited that conflict can be expected if actual nation-states can no longer look after their citizens, digitally or physically. The theme of prioritising trust is a big one for me in 2024, and in January a lot of my writing and speaking has been founded upon this topic.

If “no one knows what happens next”, leaders must educate themselves to broaden their scope of understanding and be proactive to get fluxed. The words of 18th-century English historian Edward Gibbon come to mind: “The wind and the waves are always on the side of the ablest navigator.”

Certainly, I’ve been busy educating myself, and have completed courses in generative AI, public speaking and podcasting, to help me achieve my 2024 goal of being more human in an increasingly digital age. This time next month, I’ll be able to share news about a (sobriety) podcast and also a thought-leadership business I’m launching in February.

The present

A couple of weeks ago, Judge Robert Richter dealt a blow to those in the financial services industry – and possibly beyond – hoping to work fully remotely. He ruled against a senior manager at the Financial Conduct Authority who wanted to work from home full-time, finding the office was a better environment for “rapid discussion” and “non-verbal communication”.

The landmark case will have been closely watched by other employers considering return-to-office mandates. The judge found that the financial watchdog was within its rights to deny Elizabeth Wilson’s request, stating there were “weaknesses with remote working”. Poor Elizabeth; like badgers, all she wants is to be at home without disruption.

Judge Richter wrote in his judgment: “It is the experience of many who work using technology that it is not well suited to the fast-paced interplay of exchanges which occur in, for example, planning meetings or training events when rapid discussion can occur on topics.”

He also pointed to “a limitation to the ability to observe and respond to non-verbal communication which may arise outside of the context of formal events but which nonetheless forms an important part of working with other individuals”.

It will be interesting to see how this ruling impacts the financial services industry especially. It feels like a big blow to those hoping to work remotely in this area, and it reinforces the perception that such firms are rigid and not keeping up with the times. Will this trigger an exodus of top talent?

Leaders believe that productivity lies at the heart of the workplace debate – but should it? The old maxim that “a happy worker makes a productive worker” springs to mind. One comes before the other. With this in mind, I enjoyed participating in a roundtable hosted by Slack and Be the Business, atop the Gherkin in the City of London, that discussed how better communication delivers the most significant productivity wins for small- and medium-sized businesses in the UK.

The session coincided with new research examining how SMBs can overcome stagnation in 2024. Of the many interesting findings, these were the most compelling for me:

  • Poor management – the top internal barrier to growth, highlighted by over four in ten respondents (45%)
  • Poor communication and lack of collaboration (38%)
  • Lack of motivation (36%)
  • Employee burnout (33%)

Clearly, whether people work in the office or not, communication and collaboration go hand in hand, and both have to improve – for everyone’s sake, with the UK languishing near the bottom of the G7 productivity rankings.

As roundtable chair Anthony Impey, CEO of Be the Business, noted, a 1% increase in the UK’s productivity would boost the economy by £95 billion over five years.

The past

Here come the badgers, finally. 

This month, I enjoyed a weekend spa retreat in the New Forest, close to Lymington, where – ironically – the aforementioned Gibbon served as a member of parliament in the 1780s. I stayed five miles due north in Brockenhurst and enjoyed strolling in the countryside, marvelling at deer and wild horses. I was fascinated to learn that the (alleged) etymology of Brockenhurst stems from the Celtic for “badger’s home”, the black-and-white nocturnal creatures having been common residents for centuries.

I was informed that the badgers have, over the years, built an underground tunnel that stretches from Brockenhurst to Lymington. Human attempts to block the way, and collapse the tunnel, have come to nought. The badgers are resilient and inventive: they will always dig around obstacles and make new tunnels. It struck me that we should all be more like badgers.

Statistics of the month

  • Only 8% of European businesses have adopted AI, whereas the number is over 50% in the United States, according to Cecilia Bonefeld-Dahl, Director General of DIGITALEUROPE.
  • Cisco’s 2024 Data Privacy Benchmark Study shows more than one-quarter of organisations have banned the use of generative AI, highlighting the growing privacy concerns and the trust challenges facing organisations over their use of AI.
  • O.C. Tanner’s 2024 Global Culture Report revealed that less than half of UK leaders (47%) consider their employees when deciding to enact business-wide changes. And just 44% seek employee opinions as changes are rolled out.

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media

Is 5G the key to a truly digital society?

A panel of experts – including representatives from Vodafone UK, NatWest’s Boxed and Google – says asset tracking and optimising connected buildings and vehicles are among the more encouraging 5G use cases, but we need better collaboration and storytelling to narrow the digital divide and create a truly digital society in the UK

Nick Gliddon, Vodafone UK’s director of business, argues that 5G is crucial to help both communities and businesses make swift and wide-ranging progress. Earlier this year, Vodafone research calculated that having a best-in-class 5G network in the UK would deliver up to £5 billion a year in economic benefit by 2030. 

An additional study of 2,000 UK adults suggested Britons believe 5G can improve society more than AI. The survey found healthcare (31%), utilities like energy and water (21%) and railways (20%) were the sectors expected to benefit most from 5G.

Empowering people is the beating heart of a digital society, and Gliddon says 5G can help on five fronts: it will improve connectivity, video capabilities, business applications, immersive experiences, and digital-twin technology – a digital representation of a physical process, portrayed in a digital version of its environment.

As a digital society grows in the UK, there are also opportunities for businesses. “A truly digital society is one where individuals, platforms and utilities are seamlessly interconnected,” says Tom Bentley, head of growth at Boxed, NatWest’s banking-as-a-service platform. “Cloud and 5G technologies provide a better customer service experience where the fulfilment of product or utilities can be instant, compared with the existing physical processes of the past.” He adds that such service “fundamentally relies on quality data combined with strong interconnectivity”.

Ben Shimshon, co-founder and managing partner at Thinks, an insight and strategy consultancy, notes that some UK organisations – especially SMEs – are taking advantage of the opportunities opened up by better connectivity only piecemeal, and often slowly. “Some 99.4% of businesses in the UK have fewer than 50 employees, and three quarters of those are sole traders with no employees. A lot of them are doing predominantly offline things like scaffolding or running a shop,” he says. Many find the notion of a digital society “quite daunting”.

Clear business benefits

Part of the challenge is articulating the advantages of greater connectivity to time-pressed leaders of micro-businesses, not least because many are content with the status quo and incentives for digital adoption remain limited, says Bentley.

Still, those gaining digital access see clear benefits, Shimshon says, such as faster invoicing and payments to improve cash flow. Digital adoption happens gradually for many SMEs as new technologies like card readers are embraced, leading to incremental improvements across operations.

Matthew Evans, director for markets at the UK’s technology trade association techUK, argues that practical needs – like freeing up leisure time by streamlining admin – will resonate more with time-poor SME owners than abstract efficiency promises. “Think of that scaffolder who much prefers to watch his son playing football than doing his company accounts,” he says. “That needs to be the pitch: these digital tools will free up that time.”

Victoria Newton, chief product officer of Engine, Starling Bank’s software-as-a-service arm, agrees the focus should be on practically solving business problems rather than leading with technology. She highlights how Starling has transformed business banking by enabling round-the-clock digital financial services, having built a proprietary cloud-based banking infrastructure, Engine, from scratch. “Starling was able to do this, take our technology and embed it within banks in countries starting that digital revolution themselves.”

Customer choice 

As society becomes increasingly digital, though, the group acknowledged organisations must put citizens first. For example, Newton believes customer choice is paramount – some may opt for online self-service, but others still want human contact through banks, branches or contact centres. Top-down measures to increase digital capabilities risk excluding the most digitally disenfranchised without affordable options, she adds.

Another barrier to progress, says British Chambers of Commerce director Faye Busby, is that “people naturally don’t like change”. She highlights research, published in collaboration with Xero at the start of the year, showing that 75% of businesses believe their “broadband and general connectivity is very reliable”, suggesting they don’t realise what more connectivity could achieve for them. They underestimate the potential of a digital society.

Again, 5G has the power to electrify a digital society, but only when more people realise the good work already under way. Several examples show 5G networks unleashing transformative applications, most of them made possible by visionary partnerships.

Gliddon calls Coventry “the most advanced 5G city in the UK”, partly because the city council, which has collaborated with Vodafone for almost a decade, is so progressive. The council gained smart-city capabilities by providing planning assistance to deploy 5G antennas rapidly, improving traffic flow, air-quality monitoring and municipal operations. Building on these digital foundations, Coventry is creating a smart energy grid to better manage local renewable power generation.

Coventry University is also the first in the UK to successfully deploy a 5G Standalone network. The forward-thinking council mandated 5G labs at Coventry University to support next-generation teaching in subjects like healthcare and engineering. Students can now access immersive learning through technologies like virtual reality. 

For instance, healthcare students are using virtual reality and augmented reality to explore the human body like never before. Professors at the university wear a headset, and 5G allows them to call up any part of the body during a lesson, making points and taking questions from students in real time; the teaching experience becomes far more flexible and interactive.

Elsewhere, for environmental services provider Veolia, 5G enables real-time asset tracking. Veolia’s head of digital strategy and innovations, Chris Burrows, outlines how sensors on the company’s recycling-collection trucks can ensure a bin is emptied in 16.5 seconds or less, identify potholes, and build air-quality maps across cities.

CCTV cameras on Veolia’s trucks also use edge computing to pinpoint potential collisions, analysing footage instantly. “It effectively gives you a threat-to-life score,” he says. This facilitates rapid accident responses while providing evidence against false claims. Burrows emphasises that realising these benefits requires a supportive company culture and employees willing to act on data insights.

Meanwhile, techUK’s Evans lists encouraging 5G deployments in areas like ports and hospitals to manage assets and workforces. “The NHS wastes £300 million a year on medicine, at least half of which is avoidable, and is down to fridges breaking down, or drugs being left outside for too long. Better asset tracking would change that.”

Evans says, though, that if 5G is going to be successful and become “the digital fabric in the digital society”, there must be large-scale rollouts targeting enterprise use cases.

Daniel Peach, head of digital acceleration programmes at Google, predicts that greater 5G adoption will spur many new business models and opportunities. “It might seem minor, but there are a lot of buildings we don’t have data for,” he says. “There is a use case of energy optimisation and moving beyond motion sensors. If you track that centrally, you can entirely shut off parts of the building when it’s not in use. There is so much scope for innovation around connected buildings and connected vehicles.”
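
To make Peach’s point concrete, here is a minimal sketch of the central-tracking logic he describes. The zone names, the 30-minute idle threshold and the last-motion data feed are all my own invented assumptions – he names no specific system – so treat this as an illustration of the idea, not an implementation of it:

```python
from datetime import datetime, timedelta

# Hypothetical illustration of central occupancy tracking: if a building
# zone has reported no motion for a while, it can be powered down.
# Zone names, the threshold and the data feed are invented assumptions,
# not a real building-management API.

IDLE_CUTOFF = timedelta(minutes=30)

def zones_to_shut_off(last_motion: dict[str, datetime], now: datetime) -> list[str]:
    """Return zones whose most recent occupancy signal is older than the cutoff."""
    return [zone for zone, seen in last_motion.items() if now - seen > IDLE_CUTOFF]

if __name__ == "__main__":
    now = datetime(2024, 1, 31, 19, 0)
    readings = {
        "floor-1-east": now - timedelta(minutes=5),     # recently occupied: keep on
        "floor-2-west": now - timedelta(hours=2),       # idle: shut off
        "meeting-room-4": now - timedelta(minutes=45),  # idle: shut off
    }
    print(zones_to_shut_off(readings, now))  # ['floor-2-west', 'meeting-room-4']
```

The point of the sketch is the design choice Peach hints at: once sensor readings flow to one central place, the shut-off decision becomes a simple query over that data rather than a per-room motion sensor acting alone.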

Cut the jargon

To accelerate the move to a digital society, there are a number of barriers to overcome. Gliddon stresses the need for appropriate language to explain digital innovation in an engaging, sector-specific way. This chimes with Busby, who believes unclear terminology, such as “connectivity”, remains a barrier, with many unable to grasp its meaning.

The energy demands of an increasingly data-driven society must also be addressed. For Burrows, “digital sobriety” is needed regarding endless data storage and transfer. Peach expects 5G’s carbon impact to fall, as it is more energy-efficient than 3G or 4G.

Finally, significant investment is required for the digital society – nationwide 5G coverage comes at a cost. Currently, telecoms operators are largely being asked to fund its deployment alone, and forecasts suggest a funding hole of £25-30 billion if the industry is to meet government expectations. This is one of the reasons why Vodafone and Three have announced plans to merge. If approved, the merged company would have the necessary scale to invest in creating one of Europe’s most advanced 5G networks. Vodafone says it would invest £11 billion in the network over the next decade and take 5G Standalone to 99% of populated areas by 2034.

Ultimately, while the core network technology promises significant performance improvements, realising its potential requires careful human and organisational transformation. Joined-up thinking, greater collaboration between telcos, academia and the public and private sectors, and compelling storytelling that persuades businesses to embrace digital innovation are vital to unleashing 5G’s possibilities and building an inclusive and sustainable digital society.

This article was first published by Raconteur, in November 2023, following an in-person roundtable event that I moderated

How artificial intelligence is making HR more human

HR teams are already using AI-based tech to communicate better with employees, improve career mobility inside their organisations and make well-timed authentic acknowledgements of people’s contributions

How many HR managers acknowledged Artificial Intelligence Appreciation Day on 16 July? It probably passed unnoticed by most, considering the ever-growing list of tasks demanding the profession’s attention.

Even if they did have a minute to look up from their work and appreciate the power of this fast-developing technology, is there any reason to celebrate something that’s likely to put millions of people out of a job? 

‘Yes’ should be the answer to that question. AI’s ability to automate certain tasks and reduce administrative workloads promises to make the HR practitioner’s role more human. Embracing AI should free professionals in this space to devote more time and energy to identifying talent and nurturing it, which is why most people enter the field in the first place. And, with more and better information at its fingertips, the function could play a more holistic and strategically important role.

But have HR teams become too overloaded with extra work to take this great opportunity? 

How HR’s role has expanded

During the Covid crisis, HR teams were “mostly in survival mode”, says Dr Aaron Taylor, head of Arden University’s School of Human Resource Management. “As well as figuring out how employees could work from home, they needed to provide extra support for their mental and physical wellbeing while adhering to health guidelines.”

He points out that, thanks to shifting workforce trends, many HR leaders were involved in C-level decisions to keep companies functioning, as well as to handle restructuring operations and redundancy programmes. 

“The profession’s evolution over the past 25 years – from ‘pay and rations’ to the strategic role it plays today – has, quite possibly, been more radical than that of any other business function,” Taylor argues.

Eric Mosley, co-founder and CEO of HR software firm Workhuman, agrees that HR has gone through “a very hectic, chaotic time. There’s been a complete whiplash, with trends veering from one direction to another.” 

Outlining the chaotic nature of the Covid era, Mosley points to remote working, back-to-office mandates, quiet quitting, loud quitting, the great resignation and, as economic uncertainty prevails, the so-called big stay.

AI’s part in the future of HR

Helen Poitevin is a distinguished vice-president and analyst focusing on HR tech at research giant Gartner. She says that “a debate is raging about the future of work between ‘explorers’, who embrace new tech and ways of working, and ‘restorers’, who believe companies should be using tried and tested methods. AI has emerged as one of the disruptive technologies at the heart of this conversation.”

Poitevin reports that AI is already playing a role in HR operations ranging from policy-making to recruitment. A global survey of HR leaders published by Gartner in July indicates that 5% have implemented generative AI, for instance, while 9% are piloting its use. 

“In the future, progressive organisations will start to broaden how they use generative AI. It will benefit both employees and HR teams,” she predicts. “A quarter of HR leaders are planning to use it to create hyper-personalised career development plans, for instance.”

Poitevin adds that HR professionals can, when equipped with the right tech, “better understand employees and so provide more human advice” that’s better tailored to each person’s needs. 

Taylor agrees that the profession has been placing greater emphasis on understanding employees as people. “There is much more importance on the ‘human’ aspects of HR now, especially when looking at employee experience,” he says. “This is no longer solely about ensuring regulatory compliance. This is about going that extra mile to know what makes employees tick and how that aligns with the company’s overall strategy.”

Skills development and career progression

While it’s never exactly been strong, the quality of communications between HR and the shop floor has worsened in recent years. New research by data science consultancy Profusion indicates that only 24% of employees are “fully comfortable” discussing workplace problems with the HR team, for example. 

Profusion’s CEO, Natalie Cramp, notes that the pandemic-induced shift towards remote working has “severely hampered the relationship between workers and their HR representatives, eroding any sense of trust and understanding”. 

A study published in May by Microsoft, which has invested heavily in generative AI, argues that HR practitioners who understand the technology and use it well will become better communicators with the power to improve the employee experience. 

“Human-AI collaboration will be the next transformational work pattern,” the research report predicts, proposing the notion of using AI as a “co-pilot”.

How would such co-piloting work in practice? Take, for instance, the use of so-called writer’s block AI to improve communications between HR and the workforce. This technology draws on relevant information about the company and its employees to personalise messages and deliver them in the appropriate tone.
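
As a purely illustrative sketch of how such a tool might assemble a personalised, tone-appropriate message – the template, employee fields and tone label below are my own assumptions, not any vendor’s API – the idea is simply to fold employee context into the instruction handed to a language model:

```python
# Illustrative sketch only: how a "writer's block" assistant might build a
# personalised drafting prompt from company and employee context before
# passing it to a large language model. All fields and wording are invented.

def draft_hr_prompt(employee: dict, topic: str, tone: str = "warm") -> str:
    """Assemble a personalised drafting prompt from employee context."""
    return (
        f"Write a {tone}, concise message to {employee['name']}, "
        f"a {employee['role']} who has been with {employee['company']} "
        f"for {employee['tenure_years']} years, about {topic}. "
        f"Mention their recent work on {employee['recent_project']}."
    )

prompt = draft_hr_prompt(
    {"name": "Sam", "role": "data analyst", "company": "Acme Ltd",
     "tenure_years": 3, "recent_project": "the Q4 retention report"},
    topic="an internal role that suits their skills",
)
print(prompt)
```

The HR practitioner stays in the loop: the tool drafts, the human reviews and sends, which is what keeps the “co-pilot” framing honest.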

An HR team can work alongside AI to map out possible career paths for people in the organisation. For instance, the technology might spot hidden potential in an employee who’s been flying under the radar and prompt the team to alert that individual to an appealing internal role that would suit their talents and offer them a valuable development opportunity.

AI can enable better employee recognition

AI can also aid employee recognition – a wellbeing-boosting intervention that can be as simple as thanking someone publicly, yet is lacking in many workplaces. LinkedIn has reported achieving a 96% retention rate among employees whose work was acknowledged at least four times a year. With the prompting of AI, HR teams can recognise and celebrate the contributions (or life events) of employees or ask their line managers to do so.

“Recognition is an authentic, honest moment, where someone expresses genuine gratitude for another’s work,” Mosley says. “That connection can build relationships and community.” 

He stresses that recognition has the greatest impact when it is “fulfilled, authentic, embedded in the culture, individualised and equitable”. Again, AI tools can help HR practitioners with all of this, enabling them to convey heartfelt messages of encouragement when it matters to the recipients.

Research by Workhuman suggests that giving recognition is mutually beneficial. Managers who have done so in the past two months are more likely than those who haven’t to love their jobs (75% versus 48%) and identify as highly engaged (89% versus 64%).

Building on this last data point, Gallup’s most recent State of the Global Workplace report pegged employee engagement at about 23% last year. Given that this was the highest recorded percentage since the company started gathering such data in 2009, there is clearly significant room for improvement.

Now, thanks to AI, HR teams have no excuse not to be more human in the digital era, recognise the good work of others and gain job satisfaction in the process.

This article was first published by Raconteur, as part of the Future of HR special report in The Times, in July 2023

‘Anti-meaningless work’: How Gen Zers are redefining traditional career paths

Gen Zers are forcing employers to evolve their ways of working. And rather than lamenting today’s youth, older business leaders would be wise to accommodate the workforce’s youngest generation better. 

Moving away from traditional career paths to encourage non-linear development makes sense for all parties in 2023. A new approach and fresh thinking are critical. “Logic will get you from A to B,” as Albert Einstein reputedly said. “Imagination will take you everywhere.”

Events spurred by the coronavirus crisis have upended many norms. Still, it is arguably the Gen Z cohort that has suffered the most through disjointed education, severely limited early-career opportunities, and a lack of in-person work and play experiences, in addition to the psychological impact of Covid-19. Now, Gen Z is in the driving seat to propel meaningful change and usher in a new work paradigm.

“The last three years have enabled Gen Zers to reap the flexibility benefits of remote working with many more deciding to optimize for a lifestyle as a digital nomad,” said Charlie Rogers, a London-based executive team coordinator at The Portfolio Collective (a global community of more than 8,000 portfolio professionals) and founder of Mastery in Your 20s, a community platform to equip people in their third decade with the skills to take “their own pathless path.”

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in June 2023 – to read the complete piece, please click HERE.

As great resignation trend slows, here’s how companies are enticing ex-employees back

There was muted celebration in HR departments across the U.S. when, on Apr. 4, the latest data release on employment from the Bureau of Labor Statistics indicated that the Great Resignation may have finally slowed down – if not quit – in some so-called knowledge-working industries. However, the trend was still evident in many blue-collar industries.

“It’s no surprise that blue-collar workers are continuing their exodus while office workers have quickly realized the grass probably isn’t greener,” said Leslie Tarnacki, global CHRO for WorkForce Software. She argued the findings proved that if employees were handed the flexibility, autonomy, and “proper tools to fulfill their roles efficiently,” they were “far more likely” to stay with their organizations.

Michigan-based Tarnacki explained the slowdown of the Great Resignation for desked workers. “Much of it was spurred by a demand for flexibility and better work-life balance, which most employers have been able to deliver in some way with remote working and flexible hours,” she said. “For front-line and deskless shift workers, demands have not been so easily met.”

What should business leaders of blue-collar workers take away from the new Bureau of Labor Statistics data? How can they, too, halt the ongoing Great Resignation trend for good? 

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in April 2023 – to read the complete piece, please click HERE.

WTF is Quittok – and why Gen Z is increasingly doing it when they leave jobs

You’ve heard of quiet quitting but what about loud quitting?

Last year, there was a great deal of noise about quiet quitting — namely, doing the minimum amount required per someone’s job description. Gen Zers led that trend. (Click here for WorkLife’s guide to The Quiet Workplace).

Now many young professionals are taking a very different approach as they head for the exit: being as loud as possible by live-streaming their resignations on social media. Their platform of choice: TikTok. Hence the inevitable hashtag #quittok.

So what exactly is quittok, where does it come from, and what are the pros and cons?

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in April 2023 – to read the complete piece, please click HERE.