Go Flux Yourself: Navigating the Future of Work (No. 23)

TL;DR: November’s Go Flux Yourself marks three years since ChatGPT’s launch by examining the “survival of the shameless” – Rutger Bregman’s diagnosis of Western elite failure. With responsible innovation falling out of fashion and moral ambition in short supply, it asks what purpose-driven technology actually looks like when being bad has become culturally acceptable.

Image created on Nano Banana

The future

“We’ve taught our best and brightest how to climb, but not what ladder is worth climbing. We’ve built a meritocracy of ambition without morality, of intelligence without integrity, and now we are reaping the consequences.”

The above quotation comes from Rutger Bregman, the Dutch historian and thinker who shot to prominence at the World Economic Forum in Davos in 2019. You may recall the viral clip. Standing before an audience of billionaires, he did something thrillingly bold: he told them to pay their taxes.

“It feels like I’m at a firefighters’ conference and no one’s allowed to speak about water,” he said almost seven years ago. “Taxes, taxes, taxes. The rest is bullshit in my opinion.”

Presumably because of his truth-telling, he has not been invited back to the Swiss Alps for the WEF’s annual meeting.

Bregman is this year’s BBC Reith Lecturer, and, again, he is holding a mirror up to society to reveal its ugly, venal self. His opening lecture, A Time of Monsters – a title borrowed from Antonio Gramsci’s 1929 prison notebooks – delivered at the end of November, builds on that Davos provocation with something more troubling: a diagnosis of elite failure across the Western world. This time, his target isn’t just tax avoidance. It’s what he calls the “survival of the shameless”: the systematic elevation of the unscrupulous over the capable, and the brazen over the virtuous.

Even Bregman isn’t immune to the censorship he critiques. The BBC reportedly removed a line from his lecture describing Donald Trump as “the most openly corrupt president in American history”. The irony, as Bregman put it, is that the lecture was precisely about “the paralysing cowardice of today’s elites”. When even the BBC flinches from stating the obvious – and presumably fears how Trump might react (he has threatened to sue the broadcaster for $5 billion over doctored footage, a scandal that earlier in November prompted the resignations of the director-general and the CEO of BBC News) – you know something is deeply rotten.

Bregman’s opening lecture is well worth a listen, as is the Q&A afterwards. His strong opinions chimed with the beliefs of Gemma Milne, a Scottish science writer and lecturer at the University of Glasgow, whom I caught up with a couple of weeks ago, having first interviewed her almost a decade ago.

The author of Smoke & Mirrors: How Hype Obscures the Future and How to See Past It has recently submitted her PhD thesis at the University of Edinburgh (Putting the future to work – The promises, product, and practices of corporate futurism), and has been tracking this shift for years. Her research focuses on “corporate futurism” and the political economy of deep tech – essentially, who benefits from the stories we tell about innovation.

Her analysis is blunt: we’re living through what she calls “the age of badness”.

“Culturally, we have peaks and troughs in terms of how much ‘badness’ is tolerated,” she told me. “Right now, being the bad guy is not just accepted, it’s actually quite cool. Look at Elon Musk, Trump, and Peter Thiel. There’s a pragmatist bent that says: the world is what it is, you just have to operate in it.”

When Smoke & Mirrors came out in 2020, conversations around responsible innovation were easier. Entrepreneurs genuinely wanted to get it right. The mood has since curdled. “If hype is how you get things done and people get misled along the way, so be it,” Gemma said of the shift in attitude among those in power. “‘The ends justify the means’ has become the prevailing logic.”

On a not-unrelated note, November 30 marked exactly three years since OpenAI launched ChatGPT. (This end-of-the-month newsletter arrives a day later than usual – the weekend, plus an embargo on the Adaptavist Group research below.) We’ve endured three years of breathless proclamations about productivity gains, creative disruption, and the democratisation of intelligence. And three years of pilot programmes, failed implementations, and so much hype. 

Meanwhile, the graduate job market has collapsed by two-thirds in the UK alone, and unemployment has risen to 5% – the highest since September 2021, during the pandemic fallout – according to Office for National Statistics data published in mid-November.

New research from The Adaptavist Group, gleaned from almost 5,000 knowledge workers split evenly across the UK, US, Canada and Germany, underscores the insidious social cost: a third (32%) of workers report speaking to colleagues less since using GenAI, and 26% would rather engage in small talk with an AI chatbot than with a human.

So here’s the question that Bregman forces us to confront: if we now have access to more intelligence than ever before – both human and artificial – what exactly are we doing with it? And are we using technology for good, for human enrichment and flourishing? On the whole, with artificial intelligence, I don’t think so.

Bregman describes consultancy, finance, and corporate law as a “gaping black hole” that sucks up brilliant minds: a Bermuda Triangle of talent that has tripled in size since the 1980s. Every year, he notes, thousands of teenagers write beautiful university application essays about solving climate change, curing disease, or ending poverty. A few years later, most have been funnelled towards the likes of McKinsey, Goldman Sachs, and Magic Circle law firms.

The numbers bear this out. Around 40% of Harvard graduates now end up in that Bermuda Triangle of talent, according to Bregman. Include big tech, and the share rises above 60%. One Facebook employee, a former maths prodigy, quoted by the Dutchman in his first Reith lecture, said: “The best minds of my generation are thinking about how to make people click ads. That sucks.”

If we’ve spent decades optimising our brightest minds towards rent-seeking and attention-harvesting, AI accelerates that trajectory. The same tools that could solve genuine problems are instead deployed to make advertising more addictive, to automate entry-level jobs without creating pathways to replace them, and to generate endless content that says nothing new.

Gemma sees this in how technology and politics have fused. “The entanglement has never been stronger or more explicit.” A year ago, Trump won his second term. At his inauguration in Washington in January, the front-row seats were taken by several technology leaders, happy to genuflect in return for deregulation. But what is the ultimate cost to humanity of such cosy relationships?

“These connections aren’t just more visible, they’re culturally embedded,” Gemma told me. “People know Musk’s name and face without understanding Tesla’s technology. Sam Altman is AI’s hype guru, but he’s also a political leader now. The two roles have merged.”

Against this backdrop, I spent two days at London’s Guildhall in early November for the Thinkers50 conference and gala. The theme was “regeneration”, exploring whether businesses can restore rather than extract.

Erinch Sahan from Doughnut Economics Action Lab offered concrete examples of businesses demonstrating that purpose and profit needn’t be mutually exclusive: Patagonia’s steward-ownership model, Fairphone’s “most ethical smartphone in the world” with modular repairability, and LUSH’s commitment to fair taxes and employee ownership.

Erinch’s – frankly heartwarming – list, of which this trio is a small fraction, contrasted sharply with Gemma’s observation about corporate futurism: “The critical question is whether it actually transforms organisations or simply attends to the fear of perma-crisis. You bring in consultants, do the exercises, and everyone feels better about uncertainty. But does anything actually change?”

Some forms of the practice can be transformative. Others primarily manage emotion without producing radical change. The difference lies in whether accountability mechanisms exist, whether outcomes are measured, tracked, and tied to consequences.

This brings me to Delhi-based Ruchi Gupta, whom I met over a video call a few weeks ago. She runs the not-for-profit Future of India Foundation and has built something that embodies precisely the kind of “moral ambition” Bregman describes, although she’d probably never use that phrase. 

India is home to the world’s largest youth population, with one in every five young people globally being Indian. Not many – and not enough – are afforded the skills and opportunities to thrive. Ruchi’s assessment of the current situation is unflinching. “It’s dire,” she said. “We have the world’s largest youth population, but insufficient jobs. The education system isn’t skilling them properly; even among the 27% who attend college, many graduate without marketable skills or professional socialisation. Young people will approach you and simply blurt things out without introducing themselves. They don’t have the sophistication or the networks.”

Notably, cities occupy just 3% of India’s land area but account for 60% of its GDP. That concentration tells you everything about how poorly opportunities are distributed.

Gupta’s flagship initiative, YouthPOWER, responds to this demographic reality by creating India’s first and only district-level youth opportunity and accountability platform, covering all 800 districts. The platform synthesises data from 21 government sources to generate the Y-POWER Score, a composite metric designed to make youth opportunity visible, comparable, and politically actionable.
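To make the mechanics of such a composite score concrete, here is a minimal, purely illustrative Python sketch. The indicator names, weights, and example districts are hypothetical inventions for this newsletter – not YouthPOWER’s actual methodology, which draws on 21 government data sources.

```python
# Hypothetical illustration only: indicators, weights, and districts are invented,
# not the real Y-POWER Score methodology.
from dataclasses import dataclass

@dataclass
class DistrictIndicators:
    district: str
    school_enrolment: float    # % of young people enrolled in education (0-100)
    skilling_coverage: float   # % with access to a skilling programme (0-100)
    youth_employment: float    # % of young people in work (0-100)

def composite_score(d: DistrictIndicators, weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted average of indicators already normalised to a 0-100 scale."""
    values = (d.school_enrolment, d.skilling_coverage, d.youth_employment)
    return round(sum(w * v for w, v in zip(weights, values)), 1)

districts = [
    DistrictIndicators("District A", 78.0, 40.0, 55.0),
    DistrictIndicators("District B", 62.0, 25.0, 48.0),
]

# Rank districts so the score becomes visible, comparable, and easy to act on.
for d in sorted(districts, key=composite_score, reverse=True):
    print(f"{d.district}: {composite_score(d)}")
```

The point of such a design is less the arithmetic than the ranking: once every district has a comparable number, underperformance becomes visible and attributable.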

“Approximately 85% of Indians continue to live in the district of their birth,” Ruchi explained. “That’s where they situate their identity; when young people introduce themselves to me, they say their name and their district. If you want to reach all young people and create genuine opportunities, it has to happen at the district level. Yet nothing existed to map opportunity at that granularity.”

What makes YouthPOWER remarkable, aside from the smart data aggregation, is the accountability mechanism. Each district is mapped to its local elected representative, the Member of Parliament who chairs the district oversight committee. The platform creates a feedback loop between outcomes and political responsibility.

“Data alone is insufficient; you need forward motion,” Ruchi said. “We mapped each district to its MP. The idea is to work directly with them, run pilots that demonstrate tangible improvement, then scale a proven playbook across all 543 constituencies. When outcomes are linked to specific politicians, accountability becomes real rather than rhetorical.”

Her background illuminates why this matters personally. Despite attending good schools in Delhi, her family’s circumstances meant she didn’t know about premier networking institutions. She went to an American university because it let her work while studying, not because it was the best fit. She applied only to Harvard Business School, having learnt about it from Erich Segal’s Love Story, without any work experience.

“Your background determines which opportunities you even know exist,” she told me. “It was only at McKinsey that I finally understood what a network does – the things that happen when you can simply pick up the phone and reach someone.” Thankfully, for India’s sake, Ruchi has found her purpose after time spent lost in the Bermuda Triangle of talent.

But the lack of opportunities and woeful political accountability are global challenges. Ruchi continued: “The right-wing surge you’re seeing in the UK and the US stems from the same problem: opportunity isn’t reaching people where they live. The normative framework is universal: education, skilling, and jobs on one side; empirical baselines and accountability mechanisms on the other. Link outcomes to elected representatives, and you create a feedback loop that drives improvement.”

So what distinguishes genuine technology for good from its performative alternative?

Gemma’s advice is to be explicit about your relationship with hype. “Treat it like your relationship with money. Some people find money distasteful but necessary; others strategise around it obsessively. Hype works the same way. It’s fundamentally about persuasion and attention, getting people to stop and listen. In an attention economy, recognising how you use hype is essential for making ethical and pragmatic decisions.”

She doesn’t believe we’ll stay in the age of badness forever. These things are cyclical. Responsible innovation will become fashionable again. But right now, critiquing hype lands very differently because the response is simply: “Well, we have to hype. How else do you get things done?”

Ruchi offers a different lens. The economist Joel Mokyr has demonstrated that innovation is fundamentally about culture, not just human capital or resources. “Our greatness in India will depend on whether we can build that culture of innovation,” Ruchi said. “We can’t simply skill people as coders and rely on labour arbitrage. That’s the current model, and it’s insufficient. If we want to be a genuinely great country, we need to pivot towards something more ambitious.”

Three years into the ChatGPT era, we have a choice. We can continue funnelling talent into the Bermuda Triangle, using AI to amplify artificial importance. Or we can build something different. For instance, pioneering accountability systems like YouthPOWER that make opportunity visible, governance structures that demand transparency, and cultures that invite people to contribute to something larger than themselves.

Bregman ends his opening Reith Lecture with a simple observation: moral revolutions happen when people are asked to participate.

Perhaps that’s the most important thing leaders can do in 2026. Not buy more AI subscriptions or launch more pilots, but ask: what ladder are we climbing, and who benefits when we reach the top?

The present

Image created on Midjourney

The other Tuesday, on the 8.20am train from Waterloo to Clapham Junction, heading to The Portfolio Collective’s Portfolio Career Festival at Battersea Arts Centre, I witnessed a small moment that captured everything wrong with how we’re approaching AI.

The guard announced himself over the tannoy. But it wasn’t his voice. It was a robotic, AI-generated monotone informing passengers he was in coach six, should anyone need him.

I sat there, genuinely unnerved. This was the Turing trap in action, using technology to imitate humans rather than augment them. The guard had every opportunity to show his character, his personality, perhaps a bit of warmth on a grey November morning. Instead, he’d outsourced the one thing that made him irreplaceable: his humanity.

Image created on Nano Banana (using the same prompt as the Midjourney one above)

Erik Brynjolfsson, the Stanford economist who coined the term “Turing trap” in 2022, argues we consistently fall into this snare. We design AI to mimic human capabilities rather than complement them. We play to our weaknesses – the things machines do better – instead of our strengths. The train guard’s voice was his strength. His ability to set a tone, to make passengers feel welcome, to be a human presence in a metal tube hurtling through South London. That’s precisely what got automated away.

It’s a pattern I’m seeing everywhere. By blindly grabbing AI and outsourcing tasks that reveal what makes us unique, we risk degrading human skills, eroding trust and connection, and – I say this without hyperbole – automating ourselves to extinction.

The timing of that train journey felt significant. I was heading to a festival entirely about human connection – networking, building personal brand, the importance of relationships for business and greater enrichment. And here was a live demonstration of everything working against that.

It was also Remembrance Day. As we remembered those who fought for our freedoms, not least during a two-minute silence (that felt beautifully calming – a collective, brief moment without looking at a screen), I was about to argue on stage that we’re sleepwalking into a different kind of surrender: the quiet handover of our professional autonomy to machines.

The debate – Unlocking Potential or Chasing Efficiency: AI’s Impact on Portfolio Work – was held before around 200 ambitious portfolio professionals. The question was straightforward: should we embrace AI as a tool to amplify our skills, creativity, and flow – or hand over entire workflows to autonomous agents and focus our attention elsewhere?

Pic credit: Afonso Pereira

You can guess which side I argued. The battle for humanity isn’t against machines, per se. It’s about knowing when to direct them and when to trust ourselves. It’s about recognising that the guard’s voice – warm, human, imperfect – was never a problem to be solved. It was a feature to be celebrated.

The audience wanted an honest conversation about navigating this transition thoughtfully. I hope we delivered. But stepping off stage, I couldn’t shake the irony: a festival dedicated to human connection, held on the day we honour those who preserved our freedoms, while outside these walls the evidence mounts that we’re trading professional agency for the illusion of efficiency.

To watch the full video session, please see here: 

A day later, I attended an IBM panel at the tech firm’s London headquarters. Their Race for ROI research contained some encouraging news: two-thirds of UK enterprises are experiencing significant AI-driven productivity improvements. But dig beneath the headline, and the picture darkens. Only 38% of UK organisations are prioritising inclusive AI upskilling opportunities. The productivity gains are flowing to those already advantaged. Everyone else is figuring it out on their own – 77% of those using AI at work are entirely self-taught.

Leon Butler, General Manager for IBM UK & Ireland, offered a metaphor that’s stayed with me. He compared using an opaque AI model to drinking from a test tube you cannot see into.

“There’s liquid in it – that’s the training data – but you can’t see it. You pour your own data in, mix it, and you’re drinking something you don’t fully understand. By the time you make decisions, you need to know it’s clean and true.”

That demand for transparency connects directly to Ruchi’s work in India and Gemma’s critique of corporate futurism. Data for good requires good data. Accountability requires visibility. You can’t build systems that serve human flourishing if the foundations are murky, biased, or simply unknown.

As Sue Daley OBE, who leads techUK’s technology and innovation work, pointed out at the IBM event: “This will be the last generation of leaders who manage only humans. Going forward, we’ll be managing humans and machines together.”

That’s true. But the more important point is this: the leaders who manage that transition well will be the ones who understand that technology is a means, not an end. Efficiency without purpose is just faster emptiness.

The question of what we’re building, and for whom, surfaced differently at the Thinkers50 conference. Lynda Gratton, whom I’ve interviewed a couple of times about living and working well, opened with her weaving metaphor. We’re all creating the cloth of our lives, she argued, from productivity threads (mastering, knowing, cooperating) and nurturing threads (friendship, intimacy, calm, adventure).

Not only is this an elegant idea, but I love the warm embrace of messiness and complexity. Life doesn’t follow a clean pattern. Threads tangle. Designs shift. The point isn’t to optimise for a single outcome but to create something textured, resilient, human.

That messiness matters more now. My recent newsletters have explored the “anti-social century” – how advances in technology correlate with increased isolation. Being in that Guildhall room – surrounded by management thinkers from around the world, having conversations over coffee, making new connections – reminded me why physical presence still matters. You can’t weave your cloth alone. You need other people’s threads intersecting with yours.

Earlier in the month, an episode of The Switch, St James’s Place Financial Adviser Academy’s career change podcast, was released. Host Gee Foottit wanted to explore how professionals can navigate AI’s impact on their working lives – the same territory I cover in this newsletter, but focused specifically on career pivots.

We talked about the six Cs – communication, creativity, compassion, courage, collaboration, and curiosity – and why these human capabilities become more valuable, not less, as routine cognitive work gets automated. We discussed how to think about AI as a tool rather than a replacement, and why the people who thrive will be those who understand when to direct machines and when to trust themselves.

The conversations I’m having – with Gemma, Ruchi, the panellists at IBM, the debaters at Battersea – reinforce the central argument. Technology for good isn’t a slogan. It’s a practice. It requires intention, accountability, and a willingness to ask uncomfortable questions about who benefits and who gets left behind.

If you’re working on something that embodies that practice – whether it’s an accountability platform, a regenerative business model, or simply a team that’s figured out how to use AI without losing its humanity – I’d love to hear from you. These conversations are what fuel the newsletter.

The past

A month ago, I fired my one and only work colleague. It was the best decision for both of us. But the office still feels lonely and quiet without him.

Frank is a Jack Russell I’ve had since he was a puppy, almost five years ago. My daughter, only six months old when he came into our lives, grew up with him. Many people with whom I’ve had video calls will know Frank – especially if the doorbell went off during our meeting. He was the most loyal and loving dog, and for weeks after he left, I felt bereft. Suddenly, no one was nudging me in the middle of the afternoon to go for a much-needed, head-clearing stroll around the park.

Pic credit: Samer Moukarzel

So why did I rehome him?

As a Jack Russell, he is fiercely territorial. And where I live and work in south-east London, it’s busy. He was always on guard, trying to protect and serve me. The postman, Pieter, various delivery folk, and other people who came into the house have felt his presence, let’s say. Countless letters were torn to shreds by his vicious teeth – so many that I had to install an external letterbox.

A couple of months ago, while I was trying to retrieve a sock that Frank had stolen and was guarding on the sofa, he snapped and drew blood. After multiple sessions with two different behaviourists, following previous incidents, he was already on a yellow card. If he bit me, who wouldn’t he bite? Red card.

The decision was made to find a new owner. I made a three-hour round trip to meet Frank’s new family, whose home is in the Norfolk countryside – much better suited to a Jack Russell’s temperament. After a walk together in a neutral venue, he travelled back to their house and apparently took 45 minutes to leave their car, snarling, unsure, and confused. It was heartbreaking to think he would never see me again.

But I knew Frank would be happy there. Later that day, I received videos of him dashing around fields. His new owners said they already loved him. A day later, they found the cartoon picture my daughter had drawn of Frank, saying she loved him, in the bag of stuff I’d handed them.

Now, almost a month on, the house is calmer. My daughter has stopped drawing pictures of Frank with tearful captions. And Frank? He’s made friends with Ralph, the black Labrador who shares his new home. The latest photo shows them sleeping side by side, exhausted from whatever countryside adventures Jack Russells and Labradors get up to together.

The proverb “if you love someone, set them free” helped ease the hurt. But there’s something else in this small domestic drama that connects to everything I’ve been writing about this month.

Bregman asks what ladder we’re climbing. Gemma describes an age where doing the wrong thing has become culturally acceptable. Ruchi builds systems that create accountability where none existed. And here I was, facing a much smaller question: what do I owe this dog?

The easy path was to keep him. To manage the risk, install more barriers, and hope for the best. The more challenging path was to acknowledge that the situation wasn’t working – not for him, not for us – and to make a change that felt like failure but was actually responsibility.

Moral ambition doesn’t only show up in accountability platforms and regenerative business models. Sometimes it’s in the quiet decisions: the ones that cost you something, that nobody else sees, that you make because it’s right rather than because it’s easy.

Frank needed space to run, another dog to play with, and owners who could give him the environment his breed demands. I couldn’t provide that. Pretending otherwise would have been a disservice to him and a risk to my family.

The age of badness that Gemma describes isn’t just about billionaires and politicians. It’s also about the small surrenders we make every day: the moments we choose convenience over responsibility, comfort over honesty, the path of least resistance over the path that’s actually right.

I don’t want to overstate this. Rehoming a dog is not the same as building YouthPOWER or challenging tax-avoiding elites at Davos. But the muscle is the same. The willingness to ask uncomfortable questions. The courage to act on the answers.

My daughter’s drawings have stopped. The house is quieter. And somewhere in Norfolk, Frank is sleeping on a Labrador, finally at peace.

Sometimes the most important thing you can do is recognise when you’re climbing the wrong ladder – and have the grace to climb down.

Statistics of the month

🛒 Cyber Monday breaks records
Today marks the 20th annual Cyber Monday, projected to hit $14.2 billion in US sales – surpassing last year’s record. Peak spending occurs between 8pm and 10pm, when consumers spend roughly $15.8 million per minute. A reminder that convenience still trumps almost everything. (National Retail Federation)

🎯 Judgment holds, execution collapses
US marketing job postings dropped 8% overall in 2025, but the divide is stark: writer roles fell 28%, computer graphic artists dropped 33%, while creative directors held steady. The pattern likely mirrors the UK – the market pays for strategic judgment; it’s automating production. (Bloomberry)

🛡️ Cybersecurity complacency exposed
More than two in five (43%) UK organisations believe their cybersecurity strategy requires little to no improvement – yet 71% have paid a ransom in the past 12 months, averaging £1.05 million per payment. (Cohesity)

💸 Cyber insurance claims triple
UK cyber insurance claims hit at least £197 million in 2024, up from £60 million the previous year – a stark reminder that threats are evolving faster than our defences. (Association of British Insurers)

🤖 UK leads Europe in AI optimism
Some 88% of UK IT professionals want more automation in their day-to-day work, and only 10% feel AI threatens their role – the lowest of any European country surveyed. Yet 26% say they need better AI training to keep pace. (TOPdesk)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 22)


TL;DR: October’s Go Flux Yourself explores the epidemic of disconnection in our AI age. As 35% of Britons use smart doorbells to avoid human contact on Hallowe’en, and children face 2,000 social media posts daily, we’re systematically destroying the one skill that matters most: genuine human connection.

Image created on Midjourney

The future

“The most important single ingredient in the formula of success is knowing how to get along with people.”

Have we lost the knowledge of how to get along with people? And to what extent is an increasing dependence on large language models degrading this skill for adults, and not allowing it to bloom for younger folk?

When Theodore Roosevelt, the 26th president of the United States, spoke the above words in the early 20th century, he couldn’t have imagined a world where “getting along with people” would require navigating screens, algorithms, and artificial intelligence. Yet here we are, more than a century after he died in 1919, rediscovering the wisdom in the most unsettling way possible.

Indeed, this Hallowe’en, 35% of UK homeowners plan to use smart doorbells to screen trick-or-treaters, according to estate agents eXp UK. Two-thirds will ignore the knocking. We’re literally using technology to avoid human contact on the one night of the year when strangers are supposed to knock on our doors.

It’s the perfect metaphor for where we’ve ended up. The scariest thing isn’t what’s at your door. It’s what’s already inside your house.

Princess Catherine put it perfectly earlier in October in her essay, The Power of Human Connection in a Distracted World, for the Centre for Early Childhood. “While digital devices promise to keep us connected, they frequently do the opposite,” she wrote, in collaboration with Robert Waldinger, part-time professor of psychiatry at Harvard Medical School. “We’re physically present but mentally absent, unable to fully engage with the people right in front of us.”

I was a contemporary of Kate’s at the University of St Andrews in the wilds of East Fife, Scotland. We both graduated in 2005, a year before Twitter launched and a year after “TheFacebook” appeared. We lived in a world where difficult conversations happened face-to-face, where boredom forced creativity, and where friendship required actual presence. That world is vanishing with terrifying speed.

The Princess of Wales warns that an overload of smartphones and computer screens is creating an “epidemic of disconnection” that disrupts family life. Notably, her three kids are not allowed smartphones (and I’m pleased to report my eldest, aged 11, has a simple call-and-text mobile). “When we check our phones during conversations, scroll through social media during family dinners, or respond to emails while playing with our children, we’re not just being distracted, we are withdrawing the basic form of love that human connection requires.”

She’s describing something I explored in January’s newsletter about the “anti-social century”. As Derek Thompson of The Atlantic coined it, we’re living through a period marked by convenient communication and vanishing intimacy. We’re raising what Catherine calls “a generation that may be more ‘connected’ than any in history while simultaneously being more isolated, more lonely, and less equipped to form the warm, meaningful relationships that research tells us are the foundation of a healthy life”.

The data is genuinely frightening. Recent research from online safety app Sway.ly found that children in the UK and the US are exposed to around 2,000 social media posts per day. Some 77% say it harms their physical or emotional health. And, scarier still, 72% of UK children have seen content in the past month that made them feel uncomfortable, upset, sad or angry.

Adults fare little better. A recent study on college students found that AI chatbot use is hollowing out human interaction. Students who used to help each other via class Discord channels now ask ChatGPT. Eleven out of 17 students in the study reported feeling more isolated after AI adoption.

One student put it plainly: “There’s a lot you have to take into account: you have to read their tone, do they look like they’re in a rush … versus with ChatGPT, you don’t have to be polite.”

Who needs niceties in the AI age?! We’re creating technology to connect us, to help us, to make us more productive. And it’s making us lonelier, more isolated, less capable of basic human interactions.

Marvin Minsky, who won the Turing Award back in 1969, said something that feels eerily relevant now: “Once the computers get control, we might never get it back. We would survive at their sufferance. If we’re lucky, they might decide to keep us as pets.”

He said that 56 years ago. We’re not there yet. But we’re building towards something, and whether that something serves humanity or diminishes it depends entirely on the choices we make now.

Anthony Cosgrove, who started his career at the Ministry of Defence as an intelligence analyst in 2003 and has earned an MBE, has seen this play out from the inside. Having led global teams at HSBC and now running data marketplace platform Harbr, he’s witnessed first-hand how organisations stumble into AI adoption without understanding the foundations.

“Most organisations don’t even know what data they already hold,” he told me over a video call a few weeks ago. “I’ve seen millions of pounds wasted on duplicate purchases across departments. That messy data reality means companies are nowhere near ready for this type of massive AI deployment.”

After spending years building intelligence functions and technology platforms at HSBC – first for wholesale banking fraud, then expanding to all financial crime across the bank’s entire customer base – he left to solve what he calls “the gap between having aggregated data and turning it into things that are actually meaningful”.

What jumped out from our conversation was his emphasis on product management. “For a really long time, there was a lack of product management around data. What I mean by that is an obsession about value, starting with the value proposition and working backwards, not the other way round.”

This echoes the findings I discussed in August’s newsletter about graduate jobs. As I wrote then, graduate jobs in the UK have dropped by almost two-thirds since 2022 – roughly double the decline for all entry-level roles. That’s the year ChatGPT launched. The connection isn’t coincidental.

Anthony’s perspective on this is particularly valuable. “AI can only automate fragments of a job, not replace whole roles – even if leaders desperately want it to.” He shared a conversation with a recent graduate who recognised that his data science degree would, ultimately, be useless. “The thing he was doing is probably going to be commoditised fairly quickly. So he pivoted into product management.”

This smart graduate’s instinct was spot-on. He’s now, in Anthony’s words, “actively using AI to prototype data products, applications, digital products, and AI itself. And because he’s a data scientist by background, he has a really good set of frameworks and set of skills”.

Yet the broader picture remains haunting. Microsoft’s 2025 Work Trend Index reveals that 71% of UK employees use unapproved consumer AI tools at work. Fifty-one per cent use these tools weekly, often for drafting reports and presentations, or even managing financial data, all without formal IT approval.

This “Shadow AI” phenomenon is simultaneously encouraging and terrifying. “It shows that people are agreeable to adopting these types of tools, assuming that they work and actually help and aren’t hard to use,” Anthony observed. “But the second piece that I think is really interesting impacts directly the shareholder value of an organisation.”

He painted a troubling picture: “If a big percentage of your employees are becoming more productive and finishing their existing work faster or in different ways, but they’re doing so essentially untracked and off-books, you now have your employees that are becoming essentially more productive, and some of that may register, but in many cases it probably won’t.”

Assuming that many employees are using AI for work without being open about it with their employers, how concerned about security and data privacy are they likely to be?

Earlier in the month, Cybernews discovered that two AI companion apps, Chattee Chat and GiMe Chat, exposed millions of intimate conversations from over 400,000 users. The exposed data contained over 43 million messages and over 600,000 images and videos.

At the time of writing, one of the apps, Chattee, was the 121st Entertainment app on the Apple App Store, downloaded over 300,000 times. This is a symptom of what people, including Microsoft’s AI chief Mustafa Suleyman (as per August’s Go Flux Yourself), are calling AI psychosis: the willingness to confide our deepest thoughts to algorithms while losing the ability to confide in actual humans.

As I explored in June 2024’s newsletter about AI companions, this trend has been accelerating. Back in March 2024, there had been 225 million lifetime downloads on the Google Play Store for AI companions alone. The problem isn’t scale. It’s the hollowing out of human connection.

Then there’s the AI bubble itself, which everyone in the space has been talking about in the last few weeks. The Guardian recently warned that AI valuations are “now getting silly”. The CAPE ratio – measuring cyclically adjusted price-to-earnings ratios – has reached dotcom bubble levels. The “Magnificent 7” tech companies now represent slightly more than a third of the whole S&P 500 index.

OpenAI’s recent deals exemplify the circular logic propping up valuations. The arrangement under which OpenAI will pay Nvidia for chips and Nvidia will invest $100bn in OpenAI has been criticised as exactly what it is: circular. The latest move sees OpenAI pledging to buy lots of AMD chips and take a stake in AMD over time.

And yet amid this chaos, there are plenty of people going back to human basics: rediscovering real, in-person connection through physical activity and genuine community.

Consider walking football in the UK. What began in Chesterfield in 2011 as a gentle way to coax older men back into exercise has become one of Britain’s fastest-growing sports. More than 100,000 people now play regularly across the UK, many managing chronic illnesses or disabilities. It has become “a masterclass in human communication” that no AI could replicate. Tony Jones, 70, captain of the over-70s, described it simply: “It’s the camaraderie, the dressing room banter.”

Research from Nottingham Trent University found that walking footballers’ emotional well-being exceeded the national average, and loneliness was less common. “The national average is about 5% for feeling ‘often lonely’,” said Professor Ian Varley. “In walking football, it was 1%.”

This matters because authentic human interaction – the kind that requires you to read body language, manage tone, and show up physically – can’t be automated. Princess Catherine emphasises this in her essay, citing Harvard Medical School’s research showing that “the people who were more connected to others stayed healthier and were happier throughout their lives. And it wasn’t simply about seeing more people each week. It was about having warmer, more meaningful connections. Quality trumped quantity in every measure that mattered.”

The digital world offers neither warmth nor meaning. It offers convenience. And as Catherine warns, convenience is precisely what’s killing us: “We live increasingly lonelier lives, which research shows is toxic to human health, and it’s our young people (aged 16 to 24) that report being the loneliest of all – the very generation that should be forming the relationships that will sustain them throughout life.”

Roosevelt understood this instinctively over a century ago: success isn’t about what you know or what you can do. It’s about how you relate to other people. That skill – the ability to truly connect, to read a room, to build trust, to navigate conflict, to offer genuine empathy – remains stubbornly, beautifully human.

And it’s precisely what we’re systematically destroying. If we don’t take action to arrest this dark and deepening trend of digitally supercharged disconnection, the dream of AI and other technologies being used for enlightenment and human flourishing will quickly prove to be a living nightmare.

The present

Image runner’s own

As the walking footballers demonstrate, the physical health benefits of group exercise are sometimes secondary to camaraderie – but winning and hitting goals are also fun and life-affirming. In October, I ran my first half-marathon in under 1 hour and 30 minutes. I crossed the line at Walton-on-Thames to complete the River Thames half at 1:29:55. A whole five seconds to spare! I would have been nowhere near that time without Mike.

Mike is a member of the Crisis of Dads, the running group I founded in November 2021. What started as a clutch of portly, middle-aged plodders meeting at 7am every Sunday in Ladywell Fields, in south-east London, has grown to 26 members. Men in their 40s and 50s exercising to limit the dad bod and creating space to chat through things on our minds.

The male suicide rate in the UK in 2024 was 17.1 per 100,000, compared to 5.6 per 100,000 for women, according to the charity Samaritans. Males aged 50-54 had the highest rate: 26.8 per 100,000. Connection matters. Friendship matters. Physical presence matters.

Mike paced me during the River Thames half-marathon. With two miles to go, we were on track to go under 90 minutes, but the pain was horrible. His encouragement became more vocal – and more profane – as I closed in on something I thought beyond my ability.

Sometimes you need someone who believes in your ability more than you do to swear lovingly at you to cross that line quicker.

Work in the last month has been equally high octane, and (excuse the not-so-humble brag) record-breaking – plus full of in-person connection. My fledgling thought leadership consultancy, Pickup_andWebb (combining brand strategy and journalistic expertise to deliver guaranteed ROI – or your money back), is taking flight.

And I’ve been busy moderating sessions at leading technology events across the country, around the hot topic of how to lead and prepare the workforce in the AI age.

Moderating at DTX London (image taken by organisers)

On the main stage at DTX London, I opened by using the theme of the session about AI readiness to ask the audience whose workforce was suitably prepared. One person, out of hundreds, stuck their hand up: Andrew Melville, who leads customer strategy for Mission Control AI in Europe. Sportingly, he took the microphone and explained the key to his success.

I caught him afterwards. His confidence wasn’t bravado. Mission Control recently completed a data reconciliation project for a major logistics company. The task involved 60,000 SKUs of inventory data. A consulting firm had quoted two to three months and a few million pounds. Mission Control’s AI configuration completed it in eight hours. A thousand times faster, and 80% cheaper.

“You’re talking orders of magnitude,” Andrew said. “We’re used to implementing an Oracle database, and things get 5 or 10% more efficient. Now you’re seeing a thousand times more efficiency in just a matter of days and hours.”

He drew a parallel to the Ford Motor Company’s assembly line. Before that innovation, it took 12 hours to build a car. After? Ninety minutes. Eight times faster. “Imagine being a competitor of Ford,” Andrew said, “and they suddenly roll out the assembly line. And your response to that is: we’re going to give our employees power tools so they can build a few more cars every day.”

That’s what most companies are doing with AI. Giving workers ChatGPT subscriptions and hoping for magic, and missing the fundamental transformation required. As I said on stage at DTX London, it’s like handing workers the keys to a Formula 1 car without instructions, then wondering why there are so many immediate and expensive crashes.

“I think very quickly what you’re going to start seeing,” Andrew said, “is executives that can’t visualise what an AI transformation looks like are going to start getting replaced by executives that do.”

At Mission Control, he’s building synthetic worker architectures – AI agents that can converse with each other, collaborate across functions, and complete higher-order tasks. Not just analysing inventory data, but coordinating with procurement systems and finance teams simultaneously.

“It’s the equivalent of having three human experts in different fields,” Andrew explained, “and you put them together and you say, we need you to connect some dots and solve a problem across your three areas of expertise.”

The challenge is conceptual. How do you lead a firm where human workers and digital workers operate side by side, where the tasks best suited for machines are done by machines and the tasks best suited for humans are done by humans?

This creates tricky questions throughout organisations. Right now, most people are rewarded for being at their desks for 40 hours a week. But what happens when half that time involves clicking around in software tools, downloading data sets, reformatting, and loading back? What happens when AI can do all of that in minutes?

“We have to start abstracting the concept of work,” Andrew said, “and separating all of the tasks that go into creating a result from the result itself.”

Digging into that is for another edition of the newsletter, coming soon. 

Elsewhere, at the first Data Decoded in Manchester, I moderated a 30‑minute discussion on leadership in the age of AI. We were just getting going when time was up, which feels very much like 2025. The appetite for genuine insight was palpable. People are desperate for answers beyond the hype. Leaders sense the scale of the shift. However, their calendars still favour show-and-tell over do-and‑learn. That will change, but not without bruises.

Also in October, my essay on teenage hackers was finally published in the New Statesman. The main message is that we’re criminalising the young people whose skills we desperately need, instead of offering them a path into cybersecurity and related industries rather than the darker criminal world.

Looking slightly ahead, on 11 November, I’ll be expanding on these AI-related themes, debating at The Portfolio Collective’s Portfolio Career Festival at Battersea Arts Centre. The subject, Unlocking Potential or Chasing Efficiency: AI’s Impact on Portfolio Work, prompts the question: should professionals embrace AI as a tool to amplify skills, creativity and flow, or hand over entire workflows to autonomous agents?

I know which side I’m on. 

(If you fancy listening in and rolling your sleeves up alongside over 200 ambitious professionals – for a day of inspiration, connection and, most importantly, growth – I can help with a discounted ticket. Use OLIVERPCFEST for £50 off the cost here.)

The past

In 2013, I was lucky enough to edit the Six Nations Guide with Lewis Moody, the former England rugby captain, a blood-and-thunder flanker who clocked up 71 caps. At the time, Lewis was a year into retirement, grappling with the physical aftermath of a brutal professional career.

When the tragic news broke earlier in October that Lewis, 47, had been diagnosed with the cruelly life-sapping motor neurone disease (MND), it prompted an outpouring of sorrow from the rugby community and far beyond. I simply sent him a heart emoji. He texted the same back a few hours later.

Lewis’s hellish diagnosis and the impact it has had on so many feels especially poignant given Princess Catherine’s reflections on childhood development. She writes about a Harvard study showing that “people who developed strong social and emotional skills in childhood maintained warmer connections with their spouses six decades later, even into their eighties and nineties”.

She continued: “Teaching children to better understand both their inner and outer worlds sets them up for a lifetime of healthier, more fulfilling relationships. But if connection is the key to human thriving, we face a concerning reality: every social trend is moving in the opposite direction.”

AI has already changed work. The deeper question is whether we’ll preserve the skills that make us irreplaceably human.

This Halloween, the real horror isn’t monsters at the door. It’s the quiet disappearance of human connection, one algorithmically optimised interaction at a time.

Roosevelt was right. Success depends on getting along with people. Not algorithms. Not synthetic companions. Not virtual influencers.

People.

Real, messy, complicated, irreplaceable people. 

Statistics of the month

💰 AI wage premium grows
Workers with AI skills now earn a 56% wage premium compared to colleagues in the same roles without AI capabilities – showing that upskilling pays off in cold, hard cash. (PwC)

🔄 A quarter of jobs face radical transformation
Roughly 26% of all jobs on Indeed appear poised to transform radically in the near future as GenAI rewrites the DNA of work across industries. (Indeed)

📈 AI investment surge continues
Over the next three years, 92% of companies plan to increase their AI investments – yet only 1% of leaders call their companies “mature” on the deployment spectrum, revealing a massive gap between spending and implementation. (McKinsey)

📉 Workforce reduction looms
Some 40% of employers expect to reduce their workforce where AI can automate tasks, according to the World Economic Forum’s Future of Jobs Report 2025 – a stark reminder that transformation has human consequences. (WEF)

🎯 Net job creation ahead
A reminder that despite fears, AI will displace 92 million jobs but create 170 million new ones by 2030, resulting in a net gain of 78 million jobs globally – proof that every industrial revolution destroys and creates in equal (or greater) measure. (WEF)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 21)


TL;DR: September’s Go Flux Yourself examines the fundamentals of AI success: invest £10 in people for every £1 on technology, build learning velocity into your culture, and show up as a learner yourself. England’s women’s rugby team went from amateurs juggling jobs to world champions through one thing: investing in people.

Image created on Midjourney

The future

“Some people are on [ChatGPT] too much. There are young people who just say ‘I can’t make any decision in my life without telling chat everything that’s going on. It knows me, it knows my friends, I’m going to do whatever it says.’ That feels really bad to me … Even if ChatGPT gives way better advice than any human therapist, there is something about collectively deciding we’re going to live our lives the way that the AI tells us feels bad and dangerous.”

The (unusually long) opening quotation for this month’s Go Flux Yourself comes – not for the first time – from the CEO of OpenAI, Sam Altman, arguably the most influential technology leader right now. How will future history books – if there is anyone with a pulse around to write them – judge the man who allegedly has “no one knows what happens next” as a sign in his office?

The above words come from an interview a few weeks ago, and smack of someone who is deeply alarmed by the power he has unleashed. When Altman starts worrying aloud about his own creation, you’d think more people would pay attention. But here we are, companies pouring millions into AI while barely investing in the people who’ll actually use it.

We’ve got this completely backwards. Organisations are treating AI as a technology problem when it’s fundamentally a people problem. Companies are spending £1 on AI technology when they should spend an additional £10 on people, as Kian Katanforoosh, CEO and Founder of Workera, told me over coffee in Soho a couple of weeks ago.

We discussed the much-quoted MIT research, published a few weeks ago (read the main points without signing up to download the paper in this Forbes piece), which shows that 95% of organisations are failing to achieve a return on investment from their generative AI pilots. Granted, the sample size was only 300 organisations, but that’s a pattern you can’t ignore.

Last month’s newsletter considered the plight of school leavers and university students in a world where graduate jobs have dropped by almost two-thirds in the UK since 2022, and entry-level hiring is down 43% in the US and 67% in the UK since Altman launched ChatGPT in November 2022.

It was easily the most read of all 20 editions of Go Flux Yourself. Why? I think it captured many people’s concerns about how damaging blindly following the AI path could be for human flourishing. If young people are unable to gain employment, what happens to the talent pipeline, and where will tomorrow’s leaders come from? The maths doesn’t work. The logic doesn’t hold. And the consequences are starting to show.

To continue this critically important conversation, I met (keen Arsenal fan) Kian in central London, as he was over from his Silicon Valley HQ. Alongside running Workera – an AI-powered skills intelligence platform that helps Fortune 500 and Global 2000 organisations assess, develop, and manage innovation skills in areas such as AI, data science, software engineering, cloud computing, and cybersecurity – he is an adjunct lecturer in computer science at Stanford University.

“Companies have bought a huge load of technology,” he said. “And now they’re starting to realise that it can’t work without people.”

That’s the pattern repeated everywhere. Buy the tools. Deploy the systems. Wonder why nothing changes. The answer, as Kian went on to explain, is depressingly simple: people don’t have the foundational skills to use what’s been bought.

This is wrongheaded. We’ve treated AI like it’s just another software rollout when it’s closer to teaching an entire workforce a new language. And business leaders have to invest significantly more in their current and future human workforce to maximise the (good) potential of AI and adjacent technologies, or everyone fails. Updated leadership thinking is paramount to success.

McKinsey used to advocate spending $1 (or £1) on technology for every $1 / £1 on people. Then, last year, the company revised it: £1 on technology, £3 on people. “Our experience has shown that a good rule of thumb for managing gen AI costs is that for every $1 spent on developing a model, you need to spend about $3 for change management. (By way of comparison, for digital solutions, the ratio has tended to be closer to $1 for development to $1 for change management.)”

Kian thinks this is still miles off what should be spent on people. “I think it’s probably £1 in technology, £10 in people,” he told me. “Because when you look at AI’s potential productivity enhancements on people, even £10 in people is nothing.”

That’s not hyperbole. That’s arithmetic based on what he sees daily at Workera. Companies contact him, saying they’ve purchased 25 different AI agents and software packages, but employee usage starts strong for a week and then collapses. What’s going on? The answer is depressingly predictable.

“Your people don’t even know how to use that technology. They don’t even have the 101 skills to understand how to use it. And even when they try, they’re putting you (the organisation) at risk because they don’t even know what they’re uploading to these tools.”

One of the main things Workera offers is an “AI-readiness test”, and Kian’s team’s findings uncover a worrying truth: right now, outside tech companies, only 28 out of 100 people are AI-ready. That’s Workera’s number, based on assessing thousands of employees in the US and elsewhere. In tech companies, the readiness rate is over 90%, which is perhaps unsurprising. The gap between tech-industry businesses and everyone else is already a chasm, and it is still growing.

But here’s where it gets really interesting. Being AI-ready today means nothing if your learning velocity is too slow. The technology changes every month. New capabilities arrive. Old approaches become obsolete. Google just released Veo, its video-generation model, which means anyone can become a videographer. Next month, there’ll be something else.

“You can be ahead today,” Kian said. “If your learning velocity is low, you’ll be behind in five years. That’s what matters at the end of the day.”

Learning velocity. I liked that phrase. It captures something essential about this moment: that standing still is the same as moving backwards, that capability without adaptability is a temporary advantage at best.
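To make the point concrete, here is a back-of-the-envelope sketch in Python. The numbers are entirely invented for illustration; this is my toy model, not Workera’s methodology.

```python
# Toy model: a capability score compounding at different learning rates.
# All numbers are invented purely for illustration.

def project(readiness: float, annual_growth: float, years: int) -> float:
    """Compound a readiness score by a fixed annual learning rate."""
    return readiness * (1 + annual_growth) ** years

# Worker A starts ahead but learns slowly; worker B starts behind but learns fast.
for year in range(6):
    a = project(readiness=0.8, annual_growth=0.05, years=year)
    b = project(readiness=0.4, annual_growth=0.40, years=year)
    print(f"Year {year}: ahead-but-slow {a:.2f} | behind-but-fast {b:.2f}")
```

On these made-up figures, the fast learner overtakes in year three and is roughly twice as capable by year five, which is exactly Kian’s warning about being ahead today and behind in five years.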

However, according to Kian, the UK and Europe are already starting from behind, as his data shows a stark geographic divide in AI readiness. American companies – even outside pure tech firms – are moving faster on training and adoption. European organisations are more cautious, more bound by regulatory complexity, and more focused on risk mitigation than experimentation.

“The US has a culture of moving fast and breaking things,” Kian said. “Europe wants to get it right the first time. That might sound sensible, but in AI, you learn by doing. You can’t wait for perfect conditions.”

He pointed to the EU AI Act as emblematic of the different approaches. Comprehensive regulation arrived before widespread adoption. In the US, it’s the reverse: adoption at scale, regulation playing catch-up. Neither approach is perfect, but one creates momentum while the other creates hesitation.

The danger isn’t just that European companies fall behind American competitors. It’s that European workers become less AI literate, less adaptable, and less valuable in a global labour market increasingly defined by technological fluency. The skills gap becomes a prosperity gap.

“If you’re a European company and you’re waiting for clarity before you invest in your people’s AI skills, you’ve already lost,” Kian said. “Because by the time you have clarity, the game has moved on.”

Fresh research backs this up. (And a note on the need for the latest data – as a client told me a few days ago, data is like milk: it has a short use-by date. I love that metaphor.) A new RAND Corporation study examining AI adoption across healthcare, financial services, climate and energy, and transportation found something crucial: identical AI technologies achieve wildly different results depending on the sector. A chatbot in banking operates at a different capability level than the same technology in healthcare, not because the tech differs but because the context, regulatory environment, and implementation constraints differ.

RAND proposes five levels of AI capability.

Level 1 covers basic language understanding and task completion: chatbots, simple diagnostic tools, and fraud detection. Humanity has achieved this.

Level 2 involves enhanced reasoning and problem-solving across diverse domains: systems that analyse complex scenarios and draw inferences. We’re emerging into this now.

Level 3 is sustained autonomous operation in complex environments, where systems make sequential decisions over time without human intervention. That’s mainly in the future, although Waymo’s robotaxis and some grid management pilots are testing it.

Levels 4 and 5 – creative innovation and full organisational replication – remain theoretical.

Here’s what matters: most industries currently operate at Levels 1 and 2. Healthcare lags behind despite having sophisticated imaging AI, as regulatory approval processes and evidence requirements slow down adoption. Finance advances faster because decades of algorithmic trading have created infrastructure and acceptance. Climate and energy sit in the middle, promising huge optimisation gains but constrained by infrastructure build times and regulatory uncertainty. Transportation is inching toward Level 3 autonomy while grappling with ethical dilemmas about life-or-death decisions.

The framework reveals why throwing technology at problems doesn’t work. You can’t skip levels. You can’t buy Level 3 capability and expect it to function in an organisation operating at Level 1 readiness. The gap between what the technology can do and what your people can do with it determines the outcome.
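One way to picture that last point is that effective capability is capped by whichever is lower: the technology’s level or the organisation’s readiness. Here is a minimal sketch, using my own framing rather than anything taken from the RAND report itself.

```python
# Toy illustration: effective AI capability is limited by the lower of the
# technology's level and the organisation's readiness level.
# The gating logic is my framing, not RAND's methodology.

RAND_LEVELS = {
    1: "Basic language understanding and task completion",
    2: "Enhanced reasoning and problem-solving across domains",
    3: "Sustained autonomous operation in complex environments",
    4: "Creative innovation",
    5: "Full organisational replication",
}

def effective_level(tech_level: int, org_readiness: int) -> int:
    """Buying Level 3 technology doesn't help an organisation at Level 1 readiness."""
    return min(tech_level, org_readiness)

level = effective_level(tech_level=3, org_readiness=1)
print(f"Effective capability: Level {level}: {RAND_LEVELS[level]}")
```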

RAND identified six challenges that cut across every sector: workforce transformation, privacy protection, algorithmic bias, transparency and oversight, disproportionate impacts on smaller organisations, and energy consumption. Small institutions serving rural and low-income areas face particular difficulties. They lack resources and technical expertise. The benefits of AI concentrate among major players, while vulnerabilities accumulate at the edges.

For instance, the algorithmic bias problem is insidious. Even without explicitly considering demographic characteristics, AI systems exhibit biases. Financial algorithms can devalue real estate in vulnerable areas. Climate models might overlook impacts on marginalised communities. The bias creeps in through training data, through proxy variables, through optimisation functions that encode existing inequalities.

Additionally, and as I’ve written about previously, the energy demands are staggering. AI’s relationship with climate change cuts both ways. Yes, it optimises grids and accelerates the development of green technology. However, if AI scales productivity across the economy, it also scales emissions, unless we intentionally direct applications toward efficiency gains and invest heavily in clean energy infrastructure. The transition from search-based AI to generative AI has intensified computational requirements. Some experts argue potential efficiency gains could outweigh AI’s carbon footprint, but only if we pursue those gains deliberately through measured policy and investment rather than leaving it to market forces.

RAND’s conclusion aligns with everything Kian told me: coordination is essential, both domestically and internationally. Preserve optionality through pilot projects and modular systems. Employ systematic risk management frameworks. Provide targeted support to smaller institutions. Most importantly, invest in people at a ratio that reflects the actual returns.

The arithmetic remains clear across every analysis: returns on investing in people dwarf the costs. But we’re not doing it.

How, though, do you build learning velocity into an organisation? Kian had clear thoughts on this. Yes, you need to dedicate time to learning. Ten per cent of work time isn’t unreasonable. But the single most powerful thing a leader can do is simpler than that: lead by example.

“Show up as a learner,” he said. “If your manager, or your manager’s manager, or your manager’s manager’s manager is literally showing you how they learn and how much time they spend learning and how they create time for learning, that is already enough to create a mindset shift in the employee base.”

Normalising learning, then, is vital. That shift in culture matters more than any training programme you can buy off the shelf.

We talked about Kian’s own learning habits. Every morning starts with reading. He’s curated an X feed of people he trusts who aren’t talking nonsense, scans it quickly, and bookmarks what he wants to read more deeply at night. He tracks the top AI conferences and skims the papers they accept – thousands of them – looking at figures and titles to get the gist. Then he picks 10% to read more carefully, and maybe 3% to spend an entire day on. “You need to have that structure or else it just becomes overwhelming,” he said.

The alternative is already playing out, and it’s grim. Some people – particularly young people – are on ChatGPT too much, as Altman admitted. They can’t make any decision without consulting the chatbot. It knows them, knows their friends, knows everything. They’ll do whatever it says.

Last month, Mustafa Suleyman, Co-Founder of DeepMind and now in charge of AI at Microsoft, published an extended essay about what he calls “seemingly conscious AI”: systems that exhibit all the external markers of consciousness without possessing it. He thinks we’re two to three years away from having the capability to build such systems using technology that already exists.

“My central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship,” he wrote.

Researchers working on consciousness tell him they’re being inundated with queries from people asking whether their AI is conscious, whether it’s acceptable to love it, and what it means if it is. The trickle has become a flood.

Tens of thousands of users already believe their AI is God. Others have fallen in love with their chatbots. Indeed, a Harvard Business Review survey of 6,000 regular AI users – the results of which were published in April (so how stale is the milk?) – found that companionship and therapy were the most common use cases.

This isn’t speculation about a distant future. This is happening now. And we’re building the infrastructure – the long memories, the empathetic personalities, the claims of subjective experience – that will make these illusions even more convincing.

Geoffrey Hinton, the so-called godfather of AI, who won the Nobel Prize last year, told the Financial Times, in a fascinating lunch profile published in early September, that “rich people are going to use AI to replace workers. It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer. That’s not AI’s fault, that is the capitalist system.”

Dark, but there’s something clarifying about his honesty. The decisions we make now about how to implement AI – whether to invest in people or just technology, whether to prioritise adoption or understanding – will shape what comes next.

The Adaptavist Group’s latest report, published last week, surveyed 900 professionals responsible for introducing AI across the UK, US, Canada and Germany. They found a divide: 42% believe their company’s AI claims are over-inflated. These “AI sceptics” work in environments where 65% believe their company’s AI stance puts customers at risk, 67% worry that AI adoption poses a threat to jobs, and 59% report having no formal AI training.

By contrast, AI leaders in companies that communicated AI’s value honestly reported far greater benefits. Some 58% say AI has improved work quality, 61% report time savings, and 48% note increased output. Only 37% worry about ethics issues, compared with 74% in over-hyped environments.

The difference? Training. Support. Honest communication. Investing in people rather than just technology.

Companies are spending between £1 million and £10 million implementing AI. Some are spending over £10 million. But 59% aren’t providing basic training. It’s like buying everyone in your company a Formula One car and being shocked when most people crash it.

“The next year is all going to be about adoption, skills, and doing right by employees,” Kian said. “Companies that do it well are going to see better adoption and more productivity. Those who don’t? They’re going to get hate from their employees. Like literally. Employees will be really mad at companies for not being human at all.”

That word – human – kept coming up in our conversation. In a world increasingly mediated by AI, being human becomes both more difficult and more essential. The companies that remember this, that invest in their people’s ability to learn, adapt, and think critically, will thrive. The ones that don’t will wonder why their expensive AI implementations gather digital dust.

The present

Image created on Midjourney

On Thursday (October 2), I’ll be at DTX London moderating a main-stage session asking: is your workforce ready for what’s next? The questions we’ll tackle include how organisations can create inclusive, agile workplaces that foster belonging and productivity, how AI will change entry-level positions, and crucially, how we safeguard critical thinking in an AI-driven world. These are urgent, practical challenges that every organisation faces right now. (I’ll also be recording episode three of DTX Unplugged, the new podcast series I co-host, looking at business evolution – listen to the series so far here.)

Later in October, on the first day of the inaugural Data Decoded in Manchester (October 21-22), I’ll moderate another session on a related topic: what leadership looks like in a world of AI. Leadership must evolve. The ethical responsibilities are staggering. The pace of change is relentless. And the old playbooks simply don’t work.

I’ve also started writing the Go Flux Yourself book (any advice on self-publishing welcome). More on that soon. The conversations I’m having, the research I’m doing, the patterns I’m seeing all point towards something bigger than monthly newsletters can capture. We’re living through a genuine transformation, and I’m in a unique and privileged position to document what it feels like from the inside rather than just analysing it from the outside.

The responses to last month’s newsletter on graduate jobs and universities showed me how hungry people are for honest conversations about what’s really happening, on the ground and behind the numbers. Expect more clear-eyed analysis of where we are and what we might do about it. And please do reach out if you think you can contribute to this ongoing discussion, as I’m open to featuring interviewees in the newsletter (and, in time, the book).

The past

Almost exactly two years ago, I took my car for its annual service at a garage at Elmers End, South East London. While I waited, I wandered down the modest high street and discovered a Turkish café. I ordered a coffee, a lovely breakfast (featuring hot, gooey halloumi cheese topped with dripping honey and sesame seeds) and, on a whim, had my tarot cards read by a female reader at the table opposite. We talked for 20 minutes, and it changed my life (see more on this here, in Go Flux Yourself No.2).

A couple of weeks ago, I returned for this year’s car service. The café is boarded up now, alas. A blackboard dumped outside showed the old WiFi password: kate4cakes. Another casualty of our changing times, a small loss in the great reshuffling of how we live, work, and connect with each other. With autumn upon us, the natural state of change and renewal is fresh in the mind. However, it still saddened me as I pondered what the genial Turkish owner and his family were doing instead of running the café.

Autumn has indeed arrived. Leaves are twisting from branches and falling to create a multicoloured carpet. But what season are we in, really? What cycle of change?

I thought about that question as I watched England’s women’s rugby team absolutely demolish Canada 33-13 in the World Cup final at Twickenham last Saturday, with almost 82,000 people in attendance, a world record. The Red Roses had won all 33 games since their last World Cup defeat, the final against New Zealand’s Black Ferns.

Being put through my paces with Katy Mclean (© Tina Hillier)

In July 2014, I trained with the England women’s squad for pieces I wrote for the Daily Telegraph (“The England women’s rugby team are tougher than you’ll ever be”) and the Financial Times (“FT Masterclass: Rugby training with Katy Mclean” – she is now Katy Daley-McLean). They weren’t professional then. They juggled jobs with their international commitments. Captain Katy Daley-McLean was a primary school teacher in Sunderland. The squad included policewomen, teachers, and a vet. They spent every spare moment either training or playing rugby.

I arrived at Surrey Sports Park in Guildford with what I now recognise was an embarrassing air of superiority. I’m bigger, stronger, faster, I thought. I’d played rugby at university. Surely I could keep up with these amateur athletes.

The England women’s team knocked such idiotic thoughts out of my head within minutes.

We started with touch rugby, which was gentle enough. Then came sprints. I kept pace with the wingers and fullbacks for the first four bursts, then tailed off. “Tactically preserving my energy,” I told myself.

Then strength and conditioning coach Stuart Pickering barked: “Malcolms next.”

Katy winked at me. “Just make sure you keep your head up and your hands on your hips. If you show signs of tiredness, we will all have to do it again … so don’t.”

Malcolms – a rugby league drill invented by the evidently sadistic Malcolm Reilly – involve lying face down with your chin on the halfway line, pushing up, running backwards to the 10-metre line, going down flat again, pushing up, sprinting to the far 10-metre line. Six times.

By the fourth repetition, I was blowing hard. By the final one, I was last by some distance, legs burning, expelling deeply unattractive noises of effort. The women, heads turned to watch me complete the set, cheered encouragement rather than jeering. “Suck it up Ollie, imagine it’s the last five minutes of the World Cup final,” full-back Danielle Waterman shouted.

Then came the circuit training. Farmers’ lifts. Weights on ropes. The plough. Downing stand-up tackle bags. Hit and roll. On and on we moved, and as my energy levels dipped uncomfortably low, it became a delirious blur.

The coup de grâce was wrestling the ball off 5ft 6in fly-half Daley-McLean. I gripped as hard as I could. She stole it from me within five seconds. Completely zapped, I couldn’t wrest it back. Not to save my life.

Emasculated and humiliated, I feigned willingness to take part in the 40-minute game that followed. One of the coaches tugged me back. “I don’t think you should do this mate … you might actually get hurt.”

I’d learned my lesson. These women were tougher, fitter, and more disciplined than I’d ever be.

That was 2014. The England women, who went on to win the World Cup in France that year, didn’t have professional contracts. They squeezed their training around their jobs. Yet they were world-class athletes who’d previously reached three consecutive World Cup finals, losing each time to New Zealand.

Then something changed. The Rugby Football Union invested heavily. The women’s team went professional. They now have the same resources, support systems, and infrastructure as the men’s team.

The results speak for themselves. Thirty-three consecutive victories. A World Cup trophy, after two more final defeats to New Zealand. Record crowds. A team that doesn’t just compete but dominates.

This is what happens when you invest in people, providing them with the training, resources, time, and support they need to develop their skills. You treat them not as amateur enthusiasts fitting excellence around the edges of their lives, but as professionals whose craft deserves proper investment.

The parallels to AI adoption are striking. Right now, most organisations are treating their workers like those 2014 England rugby players and expecting them to master AI in their spare time. To become proficient without proper training. To deliver world-class results with amateur-level support.

It’s not going to work.

The England women didn’t win that World Cup through superior technology. They won it through superior preparation. Through investment in people, in training, and in creating conditions for excellence to flourish.

That’s the lesson for every organisation grappling with AI. Technology is cheap. Talent is everything. Training matters more than tools. And if you want your people to keep pace with change, you need to create a culture where learning isn’t a luxury but the whole point.

As Kian put it: “We need to move from prototyping to production AI. And you need 10 times more skills to put AI in production reliably than you need to put a demo out.”

Ten times the skills, and £10 spent on people for every £1 on technology. The arithmetic isn’t complicated. The will to act on it is what’s missing.
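To put rough numbers on those ratios, here is a trivial worked example. Only the ratios come from the piece above; the £2 million technology budget is a hypothetical figure of my own.

```python
# Hypothetical worked example of the technology-to-people spend ratios quoted above.
# The £2m technology budget is invented purely for illustration.

tech_spend = 2_000_000  # £ on technology

ratios = {
    "Old rule of thumb (1:1)": 1,
    "McKinsey's gen AI guide (1:3)": 3,
    "Kian's suggestion (1:10)": 10,
}

for label, people_per_tech_pound in ratios.items():
    print(f"{label}: £{tech_spend * people_per_tech_pound:,} on people")
```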

Statistics of the month

📈 Sick days surge
Employees took an average of 9.4 days off sick in 2024, compared with 5.8 days before the pandemic in 2019 and 7.8 days just two years ago. (CIPD)

📱 Daily exposure
Children are exposed to around 2,000 social media posts per day. Over three-quarters (77%) say it harms their physical or emotional health. (Sway.ly via The Guardian)

📉 UK leadership crisis
UK workers’ confidence in their company leaders has plummeted from 77% to 67% between 2022 and 2025 – well below the global average of 73% – while motivation fell from 66% to just 60%. (Culture Amp)

🎯 L&D budget reality
Despite fears that AI could replace their roles entirely (43% of L&D leaders believe this), learning and development budgets are growing: 70% of UK organisations and 84% in Australia/New Zealand increased L&D spending in 2025. (LearnUpon)

🔒 Email remains the weakest link
83% of UK IT leaders have faced an email-related security incident, with government bodies hit hardest at 92%. Yet email still carries over half (52%) of all organisational communication. (Exclaimer UK Business Email Report)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 20)

TL;DR: August’s edition examines how companies are eliminating graduate jobs while redirecting recruitment budgets to failing AI pilots. From ancient rhetoric becoming essential survival skills to pre-social media university life, this edition explores why authentic human connection is our ultimate competitive advantage.

August's Go Flux Yourself explores:

The graduate job collapse: Entry-level positions requiring degrees have dropped by two-thirds since 2022, while students face £53,000 average debt. Stanford research reveals young workers in AI-exposed jobs experienced 13% employment decline as older colleagues in identical roles saw steady growth

The failing AI paradox: 95% of corporate AI pilots deliver no meaningful returns, yet companies redirect graduate recruitment budgets to these failing technologies. Half of UK firms now want to swap staff spending for AI investment despite zero evidence of productivity gains

The strategic generation: Anecdotal examples illustrate how young people aren't competing with AI but conducting it – using ChatGPT for interior design, creating revision podcasts, and embracing technology as "another thing to make life easier"

The pre-digital simplicity: Twenty-four years ago at St Andrews, Prince William was just another tutorial student alongside Oliver in a world without Facebook, smartphones, or AI assistants. Physical books, card catalogues, and pub conversations shaped minds through friction rather than convenience

To read the full newsletter, please visit www.oliverpickup.com.

Image created on Midjourney

The future

“First, we made our jobs robotic. Then we trained the robots how to do them. If AI takes your job, it won’t be because it’s so smart. It will be because over time we’ve made work so narrow, so repetitive, so obsessed with minimising variance and deferring to metrics, that it’s perfectly designed for machines.”

Tom Goodwin’s above observation about how we’ve made our jobs robotic before training robots to do them, articulated in mid-August on LinkedIn, feels remarkably prescient as thousands of teenagers prepare for university. When I interviewed the business transformation consultant, bullshit caller / provocateur, and media expert in 2022, following an update to his book Digital Darwinism, he warned about a looming leadership crisis. That crisis is now playing out in ways he probably didn’t fully anticipate.

The timing of the post couldn’t be more brutal. It’s been 25 years since I received my A-level results. Yet, I can still easily recall the pre-reveal trepidation followed by relief that I’d scraped the grades to study English Literature and Language at the University of St Andrews (as a peer of Prince William – more on this below, including a 20-year-old graduation picture).

What a thrilling time of year it should be: the end of school, then taking the next step on life’s magical journey, be it straight to university without passing go, a gap year working and then travelling, or eschewing higher education to begin a career.

I wonder how this year’s A-level leavers feel, given they’re walking into the most uncertain graduate job market in a generation. The promises made to them about university – to study hard, earn a degree, and secure a good job – are crumbling in real time.

Data from job search engine Adzuna suggests that job postings for entry-level positions requiring a degree have dropped by almost two-thirds in the UK since 2022, roughly double the decline for all entry-level roles (as quoted in the Financial Times). The same research found that entry-level hiring is down 43% in the US, and 67% in the UK, since ChatGPT launched in November 2022.

The study results tally with other sources. In June, for instance, UK graduate job openings had plunged by 75% in banking and finance, 65% in software development, and 54% in accounting compared to the same month in 2019, according to Indeed (also in the FT piece).

Meanwhile, students graduating from universities in England in 2025 have an average student loan debt of approximately £53,000, with total outstanding loans reaching £267 billion. Frankly, is university worth it today?

I was fortunate enough to be part of the last cohort to benefit from minimal tuition fees in Scotland, before fees were introduced for all students in the 2005-06 academic year. Further, when I studied for my postgraduate degree in magazine journalism at Cardiff University’s JOMEC, we were (verbally and anecdotally) guaranteed jobs within a year; and, as far as I know, all my peers achieved that. Such certainty feels alien now, even quaint.

But where does this trend lead? What happens when an entire generation faces systematic exclusion from entry-level professional roles?

A Stanford University study tracking millions of workers through ADP payroll data revealed something rather more troubling: young workers aged 22-25 in “highly AI-exposed” jobs experienced a 13% employment decline since OpenAI released ChatGPT just under three years ago, while older colleagues in identical roles saw steady or rising employment.

Arguably, we’re witnessing the first generation where machines are genuinely better at doing what universities taught them than they are.

Erik Brynjolfsson, one of the Stanford paper’s co-authors (and a professor whom I interviewed a couple of months after ChatGPT was unveiled – even back then he was warning about the likely problems with AI acceleration and jobs), put it bluntly: “There’s a clear, evident change when you specifically look at young workers who are highly exposed to AI.” 

The research controlled for obvious alternatives — COVID effects, tech sector retrenchment, interest rate impacts — and the correlation held. Software developers and customer service agents under the age of 25 saw dramatic employment drops. Home health aides, whose work remains both physically and emotionally demanding, saw employment rise.

The distinction matters. AI isn’t replacing workers at random; it’s targeting specific types of work. The Stanford team found that occupations where AI usage is more “automative” (completely replacing human tasks) showed substantial employment declines for young people. In contrast, “augmentative” uses (where humans collaborate with AI) showed no such pattern.

Anthropic CEO Dario Amodei warned in May that half of “administrative, managerial and tech jobs for people under 30” could vanish within five years. He’s probably being conservative.

However, what’s especially troubling about this shift is that new MIT research, The GenAI Divide: State of AI in Business 2025, suggests that many AI deployment programmes are failing to deliver expected returns on investment, with companies struggling to show meaningful productivity gains from their technology investments. Specifically, 95% of generative AI pilots at companies are failing, delivering next to no return on investment.

Despite this, organisations continue redirecting budgets from graduate recruitment to AI initiatives. Half of UK companies now want to redirect money from staff to AI, according to Boston Consulting Group research.

This creates a dangerous paradox: companies are cutting the graduate pipeline that develops future leaders while betting on technologies that haven’t yet proven their worth. What happens to organisational capability in five years when the cohort of junior professionals who should be stepping into senior roles doesn’t exist, or when those who are in the job market lack meaningful experience?

This connects directly to Tom Goodwin’s observation. The combined forces of consulting culture, efficiency obsessions, and metric-driven management have reshaped roles once built on creativity, empathy, relationships, imagination, and judgment into “checklists, templates, and dashboards”. We stripped away the human qualities that made work interesting and valuable, creating roles “perfectly designed for machines and less worth doing for humans”.

Consider those entry-level consulting, law, and finance roles that have vanished. They were built around tasks like document review, basic data analysis, research synthesis, and report formatting – precisely the narrow, repetitive work at which large language models excel.

Yet amid this disruption, there are signals of adaptation and hope. During a recent conversation I had with Joanne Werker, CEO of the people engagement company Boostworks, she shared statistics and personal insights that capture both the challenges and the opportunities facing this generation. Her organisation’s latest research, published in late July, indicates that 57% of Gen Z and 71% of Millennials are exploring side hustles, not as passion projects, but to survive financially. Taking a positive view of this situation, one could argue that this will be a boon for innovation, given that necessity is the mother of invention.

Also noteworthy is that nearly one in five Gen Zers is already working a second job. Joanne’s daughters illustrate a different relationship with AI entirely. One, aged 30, works in music, while the other, 24, is in fashion, both creative fields where AI might be expected to pose a threat. Instead, they don’t fear the technology but embrace it strategically. The younger daughter used ChatGPT to redesign their family’s living room, inputting photos and receiving detailed interior design suggestions that impressed even Jo’s initially sceptical husband. As Joanne says, both daughters use AI tools not to replace their creativity, but to “be smarter and faster and better”, for work and elsewhere. “The younger generation’s response to AI is just, ‘OK, this is another thing to make life easier.’”

This strategic approach extends to education. Nikki Alvey, the (brilliant) PR pro who facilitated my conversation with Jo, has children at the right age to observe this transition. Her son, who just completed his A-levels, used AI extensively for revision, creating quizzes, podcasts, and even funny videos from his notes. As Nikki pointed out: “I wish I’d had that when I was doing GCSEs, A-levels, and my degree; we barely had the internet.”

Elsewhere, her daughter, who is studying criminology at the University of Nottingham, operates under different constraints. Her university maintains a blanket ban on AI use for coursework, though she uses it expertly for job applications and advocacy roles. This institutional inconsistency reflects higher education’s struggle to adapt to technological reality: some universities Nikki’s son looked at were actively discussing AI integration and proper citation methods, while others maintain outright bans.

Universities that nail their AI policy will recognise that future graduates need capabilities that complement, rather than compete with, AI. This means teaching students to think critically about information sources.

As I described during a recent conversation with Gee Foottit on St James’s Place Financial Adviser Academy’s ‘The Switch’ podcast: “Think of ChatGPT as an army of very confident interns. Don’t trust their word. They may hallucinate. Always verify your sources and remain curious. Having that ‘truth literacy’ will be crucial in the coming years.”

Relatedly, LinkedIn’s Chief Economic Opportunity Officer Aneesh Raman describes this as the shift from a knowledge economy to the “relationship economy”, where distinctly human capabilities matter most.

The Stanford research offers clues about what this means. In occupations where AI augments rather than automates human work – such as complex problem-solving, strategic thinking, emotional intelligence, and creative synthesis – young workers aren’t being displaced.

Success won’t come from competing with machines on their terms, but from doubling down on capabilities that remain uniquely human. 

On The Switch podcast episode, which will be released soon (I’ll share a link when it’s out), I stressed that the future belongs to those – young and old – who can master what I call the six Cs, the skills to dial up:

  • Collaboration
  • Communication
  • Compassion
  • Courage
  • Creativity
  • and Curiosity

These are no longer soft skills relegated to HR workshops but survival capabilities for navigating technological disruption.

There’s a deeper threat lurking, though. The real issue isn’t that the younger generations are AI-literate while their elders struggle with new technology; it’s whether we can maintain our humanity while leveraging these tools.

No doubt nurturing the six Cs will help, but a week or so ago, Microsoft’s AI chief, Mustafa Suleyman, described something rather more unsettling: “AI psychosis”, a phenomenon where vulnerable individuals develop delusions after intensive interactions with chatbots. In a series of posts on X, he wrote that “seemingly conscious AI” tools are keeping him “awake at night” because of their societal impact, even though the technology isn’t conscious by any human definition.

“There’s zero evidence of AI consciousness today,” Suleyman wrote. “But if people just perceive it as conscious, they will believe that perception as reality.”

The bitter irony is that the capabilities we now desperately need – namely, creativity, empathy, relationships, imagination, and judgement – are exactly what we stripped out of entry-level work to make it “efficient”. Now we need them back, but we’ve forgotten how to cultivate them at scale.

The generation entering university in September may lack traditional job security, but they possess something their predecessors didn’t: the ability to direct AI while (hopefully) remaining irreplaceably human. And that’s not a consolation prize. It’s a superpower.

The present

On stage with Tomás O’Leary at Origina Week

I tap these words on the morning of August 29 from seat 24F on Aer Lingus EI158 from Dublin to London Heathrow, flying high after a successful 48 hours on the Emerald Isle. A software client, Origina, flew me in. I’ve been assisting CEO Tomás O’Leary with his thought leadership and the company’s marketing messaging for over a year (his words of wisdom about pointless software upgrades and needless infrastructure strain featured in my July newsletter).

Having struck up a bond – not least through reminiscing about our rugby union playing days (we were both No. 8s, although admittedly I’m a couple of inches shorter than him) – Tomás invited me to participate in Origina Week. This five-day extravaganza mixes serious business development skills with serious fun and learning.

Tomás certainly made me work for my barbecued supper at the excellent Killashee Spa Hotel: on Thursday, I was on stage, moderating three sessions for three consecutive hours. The final session – the last of the main programme – involved Tomás and me having a fireless “fireside” chat about technology trends as I see them, and his reflections on their relevance to the software space.

I was grateful to make some superb connections, be welcomed deeper into the bosom of the Origina family, and hear some illuminating presentations, especially behavioural psychologist Owen Fitzpatrick’s session on the art of persuasion. 

Watching Owen work was a masterclass in human communication, which no AI could replicate. For 90 minutes, around 250 people from diverse countries and cultures were fully engaged, leaning forward, laughing, and actively participating. This was neural coupling in action: the phenomenon where human brains synchronise during meaningful interaction. No video call, no AI assistant, no digital platform could have generated that energy.

This is what Tomás understood when he invested in bringing his global team together in the Irish capital. While many executives slash training budgets and rely on digital-only interactions, he recognises that some learning only happens face-to-face. That’s increasingly rare leadership in an era where companies are cutting human development costs while pouring billions into AI infrastructure.

Owen’s session focused on classical rhetoric: the ancient art of persuasion, which has become increasingly relevant in our digital age. He walked us through the four elements: ethos (credibility), logos (logic), pathos (emotion), and demos (understanding your audience). These are precisely the human skills we need as AI increasingly handles our analytical tasks.

It was a timely keynote. Those who have completed their A-levels this summer are entering a world where the ability to persuade, connect, and influence others becomes more valuable than the ability to process information.

Yet we’re simultaneously experiencing what recent research from O.C. Tanner calls a recognition crisis. Its State of Employee Recognition Report 2025 found that UK employees expect in-person interactions with recognition programmes to increase by 100% over the next few years, from 37% to 74%. These include handwritten notes, thank you cards, and award presentations. People are craving authentic human interaction precisely because it’s becoming scarce.

Recent data from Bupa reveals that just under a quarter (24%) of people feel lonely or socially isolated due to work circumstances, rising to 38% among 16-24-year-olds. Over a fifth of young workers (21%) say their workplace provides no mental health support, with 45% considering moves to roles offering more social interaction.

Also, new research from Twilio reveals that more than one-third of UK workers (36%) demand formally scheduled “digital silence” from their workplace. Samantha Richardson, Twilio’s Director of Executive Engagement, observed: “Technology has transformed how we work, connect, and collaborate – largely for the better. But as digital tools become increasingly embedded in everyday routines, digital downtime may be the answer to combating the ‘always-on’ environment that’s impeding productivity and damaging workplace culture.”

This connects to something that emerged from Owen’s session. He described how the most powerful communication occurs through contrast, repetition, and emotional resonance – techniques that require human judgment, cultural understanding, and real-time adaptation. These are precisely the skills that remain irreplaceable in an AI world.

Consider how Nikki’s son used AI for revision. Rather than passively consuming information or getting out the highlighter pens and mapping information out on a big, blank piece of paper (as I did while studying, and still do sometimes), he actively prompted the technology to create quizzes, podcasts, and videos tailored to his learning style. This was not AI replacing human creativity, but human creativity directing AI capabilities.

The challenge for today’s graduates isn’t avoiding AI, but learning to direct it purposefully. This requires exactly the kind of critical thinking and creative problem-solving that traditional education often neglects in favour of information retention and standardised testing.

What’s particularly striking about the current moment is how it echoes patterns I’ve observed over the past year of writing this newsletter. In June 2024’s edition, I explored how AI companions were already changing human relationships. I’ve also written extensively about the “anti-social century” and our retreat from real-world connection. Now we’re seeing how these trends converge in the graduate employment crisis: technology is doing more than just transforming what we do. It is also changing how we relate to each other in the process. 

On this subject, I’m pleased to share the first of a new monthly podcast series I’ve begun with long-term client Clarion Events, which organises the Digital Transformation Expo (DTX) events in London and Manchester. The opening episode of DTX Unplugged features Nick Hodder, Director of Digital Transformation and Engagement at the Imperial War Museums (IWM), highlighting why meaningful business transformation begins with people, not technology.

The answer, whether in a hotel conference room in Dublin or a corporate office in Manchester, remains the same: in a world of AI, our ability to connect authentically with other humans has become our competitive edge.

The past

Twenty-four years ago in September, I sat in my first tutorial at the University of St Andrews — nine students around a table, including Prince William and seven young women. That tutorial room held particular energy. We were there to think, question, argue with texts and each other about ideas that mattered. Will, who played for my Sunday League football team, was just another student. 

The economic backdrop was fundamentally different. Graduate jobs were plentiful, social media was (thankfully) nascent – Facebook was three years away and only mildly registered even in my final year, 2004-05 – and so partying with peers was authentic, free from fears of being digitally damned. Moreover, the assumption that a degree led to career success felt unshakeable because it was demonstrably true.

The social contract was clearer, too. Society invested in higher education as a public good that would generate returns through increased productivity, innovation, and civic engagement. Students could focus on learning rather than debt management because the broader community bore the financial risk in exchange for shared benefits.

My graduation day at the University of St Andrews in 2005

Looking back, what strikes me most is the simplicity of the intellectual environment. We read physical books, researched in libraries using card catalogues, and didn’t have any digital devices in the lecture halls or tutor rooms. (And the computers we had in our rooms took up a colossal amount of space.) Our critical thinking developed through friction: the effort required to find information, synthesise arguments from multiple sources, and express ideas clearly without technological assistance.

Knowledge felt both scarce and valuable precisely because it was hard to access. You couldn’t Google historical facts during seminars. If you hadn’t done the reading, everyone knew. If your argument was poorly constructed, there was nowhere to hide. The constraints forced genuine intellectual development.

The human connections formed during those four years proved more valuable than any specific subject knowledge. Late-night debates in residence halls, study groups grappling with challenging texts, and casual conversations between lectures – these experiences shaped how we thought and who we became.

We could explore medieval history, philosophical arguments, or literary criticism without worrying whether these subjects would directly translate to career advantages. The assumption was that broad intellectual development would prove valuable, even if connections weren’t immediately obvious. (Again, I was fortunate to be in the last cohort of subsidised university education.)

That faith in indirect utility seems almost lost now. Today’s students, facing massive debt burdens, quite reasonably demand clear pathways from educational investment to career outcomes. The luxury of intellectual exploration for its own sake becomes harder to justify when each module costs hundreds – if not thousands – of pounds.

Some elements remain irreplaceable. The structured opportunity to develop critical thinking skills, build relationships with peers and mentors, and discover intellectual passions in supportive environments still offers unique value. 

Indeed, these capabilities matter more now than they did a quarter of a century ago. When information is abundant but truth is contested, when AI can generate convincing arguments on any topic, and when economic structures are shifting rapidly, the ability to think independently becomes genuinely valuable rather than merely prestigious.

My 10-year-old son will reach university age by 2033. By then, higher education will have undergone another transformation. The economics might involve shorter programmes, industry partnerships, apprenticeship alternatives, or entirely new models that bypass traditional degrees. But the fundamental question remains unchanged: how do we prepare young minds to think independently, act ethically, and contribute meaningfully to society?

The answer may require reimagining university education entirely. Perhaps residential experiences focused on capability development rather than content transmission. Maybe stronger connections between academic learning and real-world problem-solving. Possibly more personalised pathways that recognise different learning styles and career ambitions. What won’t change is the need for structured environments where young people can develop their humanity while mastering their chosen fields of expertise. 

The students who opened their A-level results this past month deserve better. They deserve educational opportunities that develop their capabilities without crushing them with debt. They deserve career pathways that use their human potential rather than competing with machines on machine terms. Most importantly, they deserve honest conversations about what higher education can and cannot provide in an age of technological disruption.

Those conversations should start with acknowledging what that tutorial room at St Andrews represented: human minds engaging directly with complex ideas, developing intellectual courage through practice, and building relationships that lasted decades (although my contact with Prince Will mysteriously stopped after I began working at the Daily Mail Online!). 

These experiences – whether at university or school, or elsewhere – remain as valuable as ever. The challenge is whether we can create sustainable ways to provide them without bankrupting the people who need them most.

Statistics of the month

🎓 A-level computing drops
Computing A-level entries fell by 2.8% in the UK despite years of growth, though female participation rose 3.5% to reach 18.6% of students taking the subject. Meanwhile, maths remains most popular with 112,138 students, but girls represent just 37.3% of the cohort. 🔗

👩‍💼 AI skills gender divide widens
Only 29% of women report having AI skills compared to 71% of men, while nearly 70% of UK jobs face high AI exposure. Under half of workers have been offered AI-related upskilling opportunities. 🔗

💰 Office return costs surge
UK employees spend an average £25 daily on commuting and expenses when working from the office, potentially costing nearly £3,500 annually in commuting alone if expected to be in the office for five days a week. 🔗

🏢 Summer hiring advantage emerges
Some 39% of UK businesses have struggled to hire in the last 12 months, with competition and slow hiring cited as key barriers. 🔗

🌍 Extreme poverty redefined
The World Bank raised its International Poverty Line from $2.15 to $3 per day, adding 125 million people to extreme poverty statistics. However, global extreme poverty has still fallen from 43% in 1990 to 10% today. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 18)

TL;DR: June’s edition explores truth-telling in an age of AI-generated misinformation, the flood of low-quality content threatening authentic expertise, and why human storytelling becomes our most valuable asset when everything else can be faked – plus some highlights from South by Southwest London.

Image created on Midjourney

The future

“When something is moving a million times every 10 years, there’s only one way that you can survive it. You’ve got to get on that spaceship. Once you get on that spaceship, you’re travelling at the same speed. When you’re on the rocket ship, all of a sudden, everything else slows down.”

Nvidia CEO Jensen Huang’s words, delivered at London Tech Week earlier this month alongside Prime Minister Keir Starmer, capture the current state of artificial intelligence. We are being propelled by technological change at an unprecedented speed, orders of magnitude quicker than Moore’s law, and it feels alien and frightening.

Before setting foot on the rocket ship, though, the first barrier to overcome is trust in AI. For many, it’s advancing so rapidly that the potential for missed or hidden consequences is alarming enough to prompt a hard brake, or a refusal to climb aboard at all.

Others understand the threats but focus on the opportunities promised by AI and are jostling for position, bracing for warp speed. Nothing will stop them, but at what cost to society?

For example, we’re currently witnessing two distinct trajectories for the future of online content and, to some extent, services. One leads towards an internet flooded with synthetic mediocrity and, worse, untrustworthy information; the other towards authentic human expertise becoming our most valuable currency.

Because the truth crisis has already landed, and AI is taking over, attacking the veracity of, well, everything we read and much of what we see on a screen. 

In May, NewsGuard, which provides data to help identify reliable information online, identified 1,271 AI-generated news and information sites across 16 languages, operating with little to no human oversight, up from 750 last year.

It’s easy not to see this as you pull on your astronaut helmet and space gloves, but this is an insidious, industrial-scale production of mediocrity. Generative AI, fed on historical data, produces content that reflects the average of what has been published before, offering no new insights, lived experiences, or authentic perspectives. The result is an online world increasingly polluted with bland, sourceless, soulless and often inaccurate information. The slop is only going to get sloppier, too. What does that mean for truth and, yes, trust?

The 2025 State of AI in Marketing Report, published by HubSpot last week, reveals that 84% of UK marketers now use AI tools daily in their roles, compared to a global average of 66%.

Media companies are at risk of hosting, citing, and copying the marketing content. Some are actively creating it while swinging the axe liberally, culling journalists, and hacking away at integrity. 

The latest Private Eye reported how Piers North, CEO of Reach – struggling publisher of the Mirror, Express, Liverpool Echo, Manchester Evening News, and countless other titles – has a “cunning plan: to hand it all over to the robots to sort out”. 

According to the magazine, North told staff: “It feels like we’re on the cusp of another digital revolution, and obviously that can be pretty daunting, but here I think we’ve got such an opportunity to do more of the stuff we love and are brilliant at. So with that in mind, you won’t be surprised to hear that embracing AI is going to feature heavily in my strategic priorities.”

The incentive structure is clear: publish as much as possible and as quickly as possible to attract traffic. Quality, alas, becomes secondary to volume.

But this crisis creates opportunity. Real expertise becomes more valuable precisely because it’s becoming rarer. The brands and leaders who properly emphasise authentic human knowledge will enjoy a competitive advantage over rivals drowning in algorithmic sameness, now and in the future.

What does this mean for our children? They’re growing up in a world where they’ll need to become master detectives of truth. The skills we took for granted – being able to distinguish reliable sources from unreliable ones and recognising authentic expertise from synthetic mimicry – are becoming essential survival tools. 

They’ll need to develop what we might call “truth literacy”: the ability to trace sources, verify claims, and distinguish between content created by humans with lived experience and content generated by algorithms with training data.

This detective work extends beyond text to every form of media. Deepfakes are becoming indistinguishable from reality. Voice cloning requires just seconds of audio. Even video evidence can no longer be trusted without verification.

The implications for work – and, well, life – are profound. For instance, with AI agents being the latest business buzzword, Khozema Shipchandler, CEO of global cloud communications company Twilio, shared with me how their technology is enabling what he calls “hyper-personalisation at scale”. But the key discovery isn’t the technology itself; it’s how human expertise guides its application.

“We’re not trying to replace human agents,” Khozema told me. “We’re creating experiences where virtual agents handle lower complexity interactions but can escalate seamlessly to humans when genuine expertise is needed.”

He shared a healthcare example. Cedar Health, based in the United States, found that 97% of patient inquiries were related to a lack of understanding of bills. However, patients initially preferred engaging with AI agents because they felt less embarrassed about gaps in their medical terminology. The AI could process complex insurance data instantly, but when nuanced problem-solving was required, human experts stepped in with full context.

In this case, man and machine are working together brilliantly. As Shipchandler put it: “The consumer gets an experience where they’re being listened to all the way through, they’re getting accuracy because everything gets recapped, and they’re getting promotional offers that aren’t annoying because they reference things they’ve actually done before.”

The crucial point, though, is that none of this works without human oversight, empathy, and strategic thinking. The AI handles the data processing; humans provide the wisdom.

Jesper With-Fogstrup, Group CEO of Moneypenny, a telephone answering service, echoed this theme from a different angle. His global company has been testing AI voice agents for a few months, handling live calls across various industries. The early feedback has been mixed, but revealing.

“Some people expect it’s going to be exactly like talking to a human,” With-Fogstrup told me in a cafe down the road from Olympia, the venue for London Tech Week. “It just isn’t. But we’re shipping updates to these agents every day, several times a day. They’re becoming better incredibly quickly.”

What’s fascinating is how customers reveal more of themselves to AI agents than they do to human agents. “There’s something about being able to have a conversation for a long time,” Jesper observed. “The models are very patient. Sometimes that’s what’s required.”

But again, the sweet spot isn’t AI replacing humans. It’s AI handling routine complexity so humans can focus on what they do uniquely well. As Jesper explained: “If it escalates into one of our Moneypenny personal assistants, they get a summary, they can pick up the conversation, they understand where it got stuck, and they can resolve the issue.”

The future of work, then, isn’t about choosing between human and artificial intelligence. It’s about designing systems where each amplifies the other’s strengths while maintaining the ability to distinguish between them.
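The escalation pattern Khozema and Jesper describe, in which an AI agent handles the routine exchange and then hands a human colleague a summary plus the full context when it gets stuck, can be sketched roughly like this. It is a simplified illustration built on my own assumptions (the confidence threshold and the stubbed replies are invented), not Twilio’s or Moneypenny’s actual implementation.

```python
# Simplified sketch of the "AI agent first, escalate to a human with a summary" pattern.
# The stubbed replies and confidence threshold are illustrative assumptions,
# not any vendor's real system or API.

from dataclasses import dataclass, field

@dataclass
class Conversation:
    customer: str
    turns: list[str] = field(default_factory=list)

def ai_agent_reply(question: str) -> tuple[str, float]:
    """Pretend AI agent: returns an answer and a confidence score."""
    if "bill" in question.lower():
        return "Your latest bill includes a one-off setup fee.", 0.92
    return "I'm not sure I can resolve that.", 0.30

def handle(convo: Conversation, question: str, threshold: float = 0.75) -> str:
    convo.turns.append(f"Customer: {question}")
    answer, confidence = ai_agent_reply(question)
    if confidence >= threshold:
        convo.turns.append(f"AI: {answer}")
        return answer
    # Low confidence: hand over with a recap so the human can pick up the thread.
    summary = " | ".join(convo.turns)
    convo.turns.append("AI: Passing you to a colleague.")
    return f"[Escalated to human with summary: {summary}]"

convo = Conversation(customer="Pat")
print(handle(convo, "Why is my bill higher this month?"))  # handled by the AI agent
print(handle(convo, "Can you renegotiate my contract?"))   # escalated with a recap
```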

Hilary Cottam’s research for her new book, The Work We Need, arrives at the same conclusion from a different direction. After interviewing thousands of workers, from gravediggers to the Microsoft CEO, she identified six principles for revolutionising work: 

  • Securing the basics
  • Working with meaning
  • Tending to what sustains us
  • Rethinking our use of time
  • Enabling play
  • Organising in place

Work, Cottam argues, is “a sort of chrysalis in which we figure out who we are and what we’re doing here, and what we should be doing to be useful”. That existential purpose can’t be automated away.

The young female welder Cottam profiled, working on nuclear submarines for BAE in Barrow-in-Furness, exemplifies this. She and her colleagues are “very, very convinced that their work is meaningful, partly because they’re highly skilled. And what’s very unusual in the modern workplace is that a submarine takes seven years to build, and most of the teamwork on that submarine is end-to-end.”

This is the future we should be building towards: AI handling the routine complexity, and humans focusing on meaning, purpose, and the irreplaceable work of creating something that lasts. But we must teach our children how to distinguish between authentic human expertise and sophisticated synthetic imitation. Not easy.

Meanwhile, the companies already embracing this approach are seeing remarkable results. They’re not asking whether AI will replace humans, but how human expertise can be amplified by AI to create better outcomes for everyone while maintaining transparency about when and how AI is being used.

As Huang noted in his conversation with the Prime Minister: “AI is the great equaliser. The new programming language is called ‘human’. Anybody can learn how to program in AI.”

But that democratisation only works if we maintain the distinctly human capabilities that give that programming direction, purpose, and wisdom. The rocket ship is accelerating. Will we use that speed to amplify human potential or replace it entirely?

The present

At the inaugural South by Southwest London, held in Shoreditch, East London, at the beginning of June, I witnessed fascinating tensions around truth-telling that illuminate our current moment. The festival brought together storytellers, technologists, and pioneers, each grappling with how authentic voices survive in an increasingly synthetic world. Here are some of my highlights.

Image created on my iPhone

Tina Brown, former editor-in-chief of Tatler, Vanity Fair, The New Yorker, and The Daily Beast, reflecting on journalism’s current challenges, offered a deceptively simple observation: “To be a good writer, you have to notice things.” In our AI-saturated world, this human ability to notice becomes invaluable. While algorithms identify patterns in data, humans notice what’s missing, what doesn’t fit, and what feels wrong.

Brown’s observation carries particular weight, given her experience navigating media transformation over the past five decades. She has watched industries collapse and rebuild, seen power structures shift, and observed how authentic voices either adapt or fade away.

“Legacy media itself is reinventing itself all over the place,” she said. “They’re all trying to do things differently. But what you really miss in these smaller platforms is institutional backing. You need good lawyers, institutional backing for serious journalism.”

This tension between democratised content creation and institutional accountability sits at the heart of our current crisis. Anyone can publish anything, anywhere, anytime. But who ensures accuracy? Who takes responsibility when misinformation spreads? Who has the resources to fact-check, verify sources, and maintain standards?

This is a cultural challenge, as well as a technical one. When US President Donald Trump can shout down critics with “fake news”, and seemingly run a corrupt government – the memecoin $TRUMP and his involvement with World Liberty Financial have reportedly raised over half a billion dollars, and then there was the $400m (£303m) gift of a new official private jet from Qatar, among countless other questionable dealings – what does that mean for the rest of us?

Brown said: “The incredible thing is that the US President … doesn’t care how bad it looks. The first term was like, well, the president shouldn’t be making money out of himself. All that stuff is out of the window.”

When truth-telling itself becomes politically suspect, when transparency is viewed as a weakness rather than a strength, the work of authentic communication becomes both more difficult and more essential.

This dynamic played out dramatically in the spy world, as Gordon Corera, the BBC’s Security Correspondent, and former CIA analyst David McCloskey revealed during a live recording of The Rest is Classified, their podcast about intelligence operations. The most chilling story they shared wasn’t about sophisticated surveillance or cutting-edge technology. It was about children discovering their parents’ true identities only when stepping off a plane in Moscow, greeted by Vladimir Putin himself.

Imagine learning that everything you believed about your family, your identity, and your entire childhood was constructed fiction. These children of deep-cover Russian operatives lived authentic lives built on complete deception. The psychological impact, as McCloskey noted, requires “all kinds of exotic therapies”.

Just imagine. Those children will have gone past the anger about being lied to and crashed into devastation, having had their sense of reality torpedoed. When the foundation of truth crumbles, it’s not simply the facts that disappear: it’s the ability to trust anything, anywhere, ever again.

This feeling of groundlessness is what our children risk experiencing if we don’t teach them how to navigate an increasingly synthetic information environment. 

The difference is that while those Russian operatives’ children experienced one devastating revelation, our children face thousands of micro-deceptions daily: each AI-generated article, each deepfake video, each synthetic voice clip eroding their ability to distinguish real from artificial.

Zelda Perkins, speaking about whistleblowing at SXSW London, captured something essential about the courage required to tell brutal truths. When she broke her NDA in 2017 to expose Harvey Weinstein’s behaviour and help detonate the #MeToo movement, she was trying to dismantle a system that enables silence, not simply to bring down a powerful man. “The problem wasn’t really Weinstein,” she emphasised. “The problem is the system. The problem is these mechanisms that protect those in power.”

Her most powerful reflection was that she has no regrets about speaking out and telling the truth despite the unimaginable impact on her career and beyond. “My life has been completely ruined by speaking out,” she said. “But I’m honestly not sure I’ve ever been more fulfilled. I’ve never grown more, I’ve never learned more, I’ve never met more people with integrity.”

I’m reminded of a quote from Jesus in the Bible (John 8:32 – and, yes, I had to look that up, of course): “And ye shall know the truth, and the truth shall make you free.”

Truth can set you free, but it may come at a cost. This paradox captures something essential about truth-telling in our current moment. Individual courage matters, but systemic change requires mass action. As Perkins noted: “Collective voice is the most important thing for us right now.”

Elsewhere at SXSW London, the brilliantly named Mo Brings Plenty – an Oglala Lakota television, film, and stage actor (Mo in Yellowstone) – spoke with passion about Indigenous perspectives. “In our culture, we talk about the next seven generations,” he said. “What are we going to pass on to them? What do we leave behind?”

This long-term thinking feels revolutionary in our culture of instant gratification. Social media rewards immediate engagement. AI systems optimise for next-click prediction. Political cycles focus on next-election victories.

But authentic leaders think in generations, not quarters. They build systems that outlast their own tenure. They tell truths that may be uncomfortable now but are necessary for future flourishing.

The creative community at SXSW London embodied this thinking. Whether discussing children’s environmental education or music’s power to preserve cultural memory, artists consistently framed their work in terms of legacy and impact beyond immediate success.

As Dr Deepak Chopra noted in the “Love the Earth” session featuring Mo Brings Plenty: “Protecting our planet is something we can all do joyfully with imagination and compassion.”

This joyful approach to brutal truths offers a template for navigating our current information crisis. We don’t need to choose between honesty and hope. We can tell hard truths while building better systems and expose problems while creating solutions.

The key is understanding that truth-telling isn’t about punishment or blame. It’s about clearing space for authentic progress that will precipitate the flourishing of humanity, not its dulling.

The (recent) past

Three weeks ago, I took a 12-minute Lime bike (don’t worry, I have a clever folding helmet and never run red lights) from my office in South East London to Goldsmiths, University of London. I spoke to a room full of current students, recent graduates, and business leaders, delivering a keynote titled: “AI for Business Success: Fostering Human Connection in the Digital Age.” The irony wasn’t lost on me: here I was, using my human capabilities to argue for the irreplaceable value of human connection in an age of AI.

Image taken by my talented friend Samer Moukarzel

The presentation followed a pattern I had been perfecting over the past year. I began with a simple human interaction: asking audience members to turn to each other and share their favourite day of the week and favourite time of that day. (Tuesday at 8.25pm, before starting five-a-side footie, for me.) It triggered a minute or two of genuine curiosity, slight awkwardness, perhaps a shared laugh or unexpected discovery.

That moment captures everything I’m trying to communicate. While everyone obsesses over AI’s technical capabilities, we’re forgetting that humans crave connection, meaning, and the beautiful unpredictability of authentic interaction.

A week or so later, for Business and IP Centre (BIPC) Lewisham, I delivered another presentation: “The Power of Human-Led Storytelling in an AI World.” This one was delivered over Zoom, and the theme remained consistent, but the context shifted. These were local business leaders, many of whom were struggling with the same questions. How do we stay relevant? How do we compete with automated content? How do we maintain authenticity in an increasingly synthetic world?

Both presentations built on themes I’ve been developing throughout this year of Go Flux Yourself: the CHUI framework, the concept of being “kind explorers”, and the recognition that we’re living through “the anti-social century”, where technology promises connection but often delivers isolation.

But there’s something I’ve learned from stepping onto stages and speaking directly to people that no amount of writing can teach: the power of presence. When you’re standing in front of an audience, there’s no algorithm mediating the exchange. No filter softening hard-to-hear truths, and no AI assistant smoothing rough edges.

You succeed or fail based on your ability to read the room, adapt in real time, and create a genuine connection. These are irreplaceable human skills that become more valuable as everything else becomes automated.

The historical parallel keeps returning to me. On June 23, I delivered the BIPC presentation on what would have been Alan Turing’s 113th birthday. The brilliant mathematician whose work gave rise to modern computing and AI would probably be fascinated – and perhaps concerned – by what we’ve done with his legacy.

I shared the myth that Apple’s bitten logo was supposedly Steve Jobs’ tribute to Turing, who tragically died after taking a bite from a cyanide-laced apple. It’s compelling and poetic, connecting our digital age to its origins. There’s just one problem: it’s entirely false.

Rob Janoff, who designed the logo, has repeatedly denied any homage to Turing. Apple itself has stated there’s no link. The bite was added so people wouldn’t mistake the apple for a cherry. Sometimes, the mundane truth is just mundane.

But here’s why I started with this myth: compelling narratives now seem to matter more than accurate ones, and everything is starting to sound exactly the same because algorithms are optimised for engagement over truth.

As I’ve refined these talks over the past months, I’ve discovered that as our environment becomes increasingly artificial, the desire for authentic interaction grows stronger. The more content gets automated, the more valuable genuine expertise becomes. The more relationships are mediated by algorithms, the more precious unfiltered, messy human connections feel.

That’s the insight I’ll carry forward into the second half of 2025. Not that we should resist technological change, but that we should use it to amplify our most human capabilities while teaching our children how to be master detectives of truth in an age of synthetic everything, and encouraging them to experiment, explore, and love.

Statistics of the month

💼 Executive AI race
Almost two-thirds (65%) of UK and Irish CEOs are actively adopting AI agents, with 58% pushing their organisations to adopt Generative AI faster than people are comfortable with. Two-thirds confirm they’ll take more risks than the competition to stay competitive. 🔗

📧 The infinite workday
Microsoft’s 2025 Annual Work Trend Index Report reveals employees are caught in constant churn, with 40% triaging emails by 6am, receiving 117 emails and 153 chats daily. Evening meetings after 8pm are up 16% year-over-year, and weekend work continues rising. 🔗

🤖 AI trust paradox
While IBM replaced 94% of HR tasks with AI, many executives have serious reservations. Half (51%) don’t trust AI fully with financial decision-making, and 22% worry about data quality feeding AI models. 🔗

📉 Gender gap persists
The World Economic Forum’s 2025 Global Gender Gap Report shows 68.8% of the gap closed, yet full parity remains 123 years away. Despite gains in health and education, economic and political gaps persist. 🔗

⚠️ Unemployment warning
Anthropic CEO Dario Amodei predicts AI could eliminate half of all entry-level white-collar jobs and send unemployment rocketing to 20% within five years. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 17)

TL;DR: May’s Go Flux Yourself explores how, in a world where intelligence is becoming cheap, easy, and infinite, the concept of childhood and adolescence is being rewritten. Are AI tools empowering young people or quietly corroding their minds?

Image created on Midjourney

The future

“If I were the devil, I wouldn’t come with violence. I’d come with convenience. I’d give them everything they wanted, and strip them of everything they needed. I’d make them feel connected, but completely alone, endlessly entertained, but quietly empty.”

The above is part of an answer by ChatGPT to a question posed recently by lifestyle influencer Alexia Ferrari, aka themillionairemaker on social media channels. This was the puzzler: “If you were the devil, how would you destroy the young minds of the next generation without them even knowing it?”

And its resonance – evidenced by millions of views and thousands of comments – suggests that she struck a nerve. The AI’s response was disturbingly eloquent.

Following on from the opening quotation, above, it continued: “I’d blur the lines between truth and opinion, until nothing meant anything. I’d teach them to chase dopamine, not purpose, to worship self, but love who they are. I wouldn’t destroy them with war, I’d destroy them with apathy. I’d keep them distracted, numb, scrolling, always scrolling.”

So far, so horrific. There was more. “I’d dismantle the family by redefining it. Make fathers optional, mothers exhausted, and children confused. I’d hand them a phone before they could speak in full sentences. And I’d watch their minds rot slowly, sweetly, silently. And the most brilliant part? They’d never know it was me. They’d call it freedom.” 

Ultimately, it described a world that feels uncomfortably like our own: one in which children are not only online but are being remade by it. In response, and perhaps seeking some philosophical distance, I asked Claude, Anthropic’s AI, the same question. Its answer was more measured, but no less damning.

“Let me suggest a rather chilling thought experiment,” Claude began, with a silent cackle. “If I were tasked with undermining young minds whilst keeping them blissfully unaware, I’d focus on their digital habits – the very tools they trust most. First, I’d flood their feeds with endless, mindless content. Not obviously harmful stuff, mind you, but material so perfectly tailored to their dopamine receptors that genuine curiosity would wither away. Why wonder about the world when the algorithm already knows what you want? 

“Then I’d weaponise comparison. Social media would become a hall of mirrors where everyone else appears happier, more successful, better looking. The constant measuring up would erode self-worth drip by drip. I’d fragment their attention spans. Tasks requiring sustained focus would feel increasingly impossible as their brains rewired for quick hits of stimulation. Reading a book? Ancient history. Deep thinking? Too much effort. 

“Perhaps most insidiously, I’d create echo chambers so convincing that opposing viewpoints would seem not just wrong, but incomprehensible. Critical thinking would rust from disuse. The beauty of this approach? They’d defend their digital chains. The very devices stealing their potential would feel like lifelines. They’d scroll willingly into intellectual darkness, smartphone in hand, convinced they’re more connected than ever. Frightening, really, how closely this mirrors aspects of our current reality. Makes you wonder what we might do differently, doesn’t it?”

Makes one wonder, indeed. These warnings, eerily congruent despite coming from different large language models, do not speak of potential threats. They describe current conditions. They’re not predicting a future. They’re diagnosing a present.

The effect is cumulative. Essentially, as parents, educators, and adults, we are outsourcing more and more of the hard cognitive lifting – research, memory, even curiosity – to machines. And what we once called “childhood” is now a battleground between algorithms and agency.

I’m typing these words as I train back to London from Cheshire, where I was in the countryside with my two young children, at my parents’ house. This half term, we escaped the city for a few days of greenery and generational warmth. (The irony here is that while walks, books and board games dominated the last three days, my daughter is now on a maths game on an iPad, and my older son is blowing things up on his Nintendo Switch – just for an hour or so while I diligently polish this newsletter.) 

There were four-week-old lambs in the field next door, gleefully gambolling. The kids cooed. For a moment, all was well. But as they scampered through the grass, I thought: how long until this simplicity is overtaken by complexity? How long until they’re pulled into the same current sweeping the rest of us into a world of perpetual digital mediation?

That question sharpened during an in-person roundtable I moderated for Cognizant and Microsoft a week ago. The theme was generative AI in financial services, but the most provocative insight came not from a banker but from technologist David Fearne. “What happens,” he asked, “when the cost of intelligence sinks to zero?”

It’s a question that has since haunted me. Because it’s not just about jobs or workflows. It’s about meaning.

If intelligence becomes ambient – like electricity, always there, always on – what is the purpose of education? What becomes of effort? Will children be taught how to think, or simply how to prompt?

The new Intuitive AI report, produced by Cognizant and Microsoft, outlines a corporate future in which “agentic AI” becomes a standard part of every team. These systems will do much more than answer questions. They will anticipate needs, draft reports, analyse markets, and advise on strategy. They will, in effect, think for us. The vision, says Cognizant’s Fearne, is to build an “agentic enterprise”, which moves beyond isolated AI tools to interconnected systems that mirror human organisational structures, with enterprise intelligence coordinating task-based AI across business units.

That’s the world awaiting today’s children. A world in which thinking might not be required, and where remembering, composing, calculating, synthesising – once the hallmarks of intelligence – are delegated to ever-helpful assistants. 

The risk is that children become, well, lazy, or worse, they never learn how to think in the first place.

And the signs are not subtle. Gallup’s latest State of the Global Workplace study, published in April, reports that only 21% of the global workforce is actively engaged, a record low. Digging deeper, only 13% of the workforce is engaged in Europe – the lowest of any region – and in the UK specifically, just 10% of workers are engaged in their jobs.

Meanwhile, the latest Microsoft Work Trend Index shows 53% of the global workforce lacks sufficient time or energy for their work, with 48% of employees feeling their work is chaotic and fragmented.

If adults are floundering, what hope is there for the generation after us? If intelligence is free, where will our children find purpose?

Next week, on June 4, I’ll speak at Goldsmiths, University of London, as part of a Federation of Small Businesses event. The topic: how to nurture real human connection in a digital age. I will explore the anti-social century we’ve stumbled into, largely thanks to the “convenience” of technology alluded to in that first ChatGPT answer. The anti-social century, as coined by The Atlantic’s Derek Thompson earlier this year, is one marked by convenient communication and vanishing intimacy, AI girlfriends and boyfriends, Meta-manufactured friendships, and the illusion of connection without its cost.

In a recent LinkedIn post, Tom Goodwin, a business transformation consultant, provocateur and author (whom I spoke with about a leadership crisis three years ago), captured the dystopia best. “Don’t worry if you’re lonely,” he winked. “Meta will make you some artificial friends.” His disgust is justified. “Friendship, closeness, intimacy, vulnerability – these are too precious to be engineered by someone who profits from your attention,” he wrote.

In contrast, OpenAI CEO Sam Altman remains serenely optimistic. “I think it’s great,” he said in a Financial Times article earlier in May (calling the latest version of ChatGPT “genius-level intelligence”). “I’m more capable. My son will be more capable than any of us can imagine.”

But will he be more human?

Following last month’s newsletter, I had a call with Laurens Wailing, Chief Evangelist at 8vance, who had reacted to my post and is a longtime believer in technology’s potential to elevate, not just optimise. His company is using algorithmic matching to place unemployed Dutch citizens into new roles, drawing on millions of skill profiles. “It’s about surfacing hidden talent,” he told me. “Better alignment. Better outcomes.”

His team has built systems capable of mapping millions of CVs and job profiles to reveal “fit” – not just technically, but temperamentally. “We can see alignment that people often can’t see in themselves,” he told me. “It’s not about replacing humans. It’s about helping them find where they matter.”

That word stuck with me: matter.

Laurens is under no illusion about the obstacles. Cultural inertia is real. “Everyone talks about talent shortages,” he said, “but few are changing how they recruit. Everyone talks about burnout, but very few rethink what makes a job worth doing.” The urgency is missing, not just in policy or management, but in the very frameworks we use to define work.

And it’s this last point – the need for meaning – that feels most acute.

Too often, employment is reduced to function: tasks, KPIs, compensation. But what if we treated work not merely as an obligation, but as a conduit for identity, contribution, and community? 

Laurens mentioned the Japanese concept of Ikigai, the intersection of what you love, what you’re good at, what the world needs, and what you can be paid for. Summarised in one word, it is “purpose”. It’s a model of fulfilment that stands in stark contrast to how most jobs are currently structured. (And one I want to explore in more depth in a future Go Flux Yourself.)

If the systems we build strip purpose from work, they will also strip it from the workers. And when intelligence becomes ambient, purpose might be the only thing left worth fighting for.

Perhaps the most urgent question we can ask – as parents, teachers, citizens – is not “how will AI help us work?” but “how will AI shape what it means to grow up?”

If we get this wrong, if we let intelligence become a sedative instead of a stimulant, we will create a society that is smarter than ever, and more vacant than we can bear.

Also, is the curriculum fit for purpose in a world where intelligence is on tap? In many UK schools, children are still trained to regurgitate facts, parse grammar, and sit silent in tests. The system, despite all the rhetoric about “future skills”, remains deeply Victorian in its structure. It prizes conformity. It rewards repetition. It penalises divergence. Yet divergence is what we need, especially now. 

I’ve advocated for the “Five Cs” – curiosity, creativity, critical thinking, communication, and collaboration – as the most essential human traits in a post-automation world. But these are still treated as extracurricular. Soft skills. Add-ons. When in fact they are the only things that matter when the hard skills are being commodified by machines.

The classrooms are still full of worksheets. The teacher is still the gatekeeper. The system is not agile. And our children are not waiting. They are already forming identities on TikTok, solving problems in Minecraft, using ChatGPT to finish their homework, and learning – just not the lessons we are teaching.

That brings us back to the unnerving replies of Claude and ChatGPT, to the subtle seductions of passive engagement, and to the idea that children could be undone not through trauma but through ease. That the devil’s real trick is not fear but frictionlessness.

And so I return to my own children. I wonder whether they will know how to be bored. Because boredom – once a curse – might be the last refuge of autonomy in a world that never stops entertaining.

The present

If the future belongs to machines, the present is defined by drift – strategic, cultural, and moral drift. We are not driving the car anymore. We are letting the algorithm navigate, even as it veers toward a precipice.

We see it everywhere: in the boardroom, where executives chase productivity gains without considering engagement. In classrooms, where teachers – underpaid and under-resourced – struggle to maintain relevance. And in our homes, where children, increasingly unsupervised online, are shaped more by swipe mechanics than family values.

The numbers don’t lie: just 21% of employees are engaged globally, according to Gallup. And the root cause is not laziness or ignorance, the researchers reckon. It is poor management: a systemic failure to connect effort with meaning, task with purpose, and worker with dignity.

Image created on Midjourney

The same malaise is now evident in parenting and education. I recently attended an internet safety workshop at my child’s school. Ten parents showed up. I was the only father.

It was a sobering experience. Not just because the turnout was low. But because the women who did attend – concerned, informed, exhausted – were trying to plug the gaps that institutions and technologies have widened. Mainly it is mothers who are asking the hard questions about TikTok, Snapchat, and child exploitation.

And the answers are grim. The workshop drew on Ofcom’s April 2024 report, which paints a stark picture of digital childhood. TikTok use among five- to seven-year-olds has risen to 30%. YouTube remains ubiquitous across all ages. Shockingly, over half of children aged three to twelve now have at least one social media account, despite all platforms having a 13+ age minimum. By 16, four out of five are actively using TikTok, Snapchat, Instagram, and WhatsApp.

We are not talking about teens misbehaving. We are talking about digital immersion beginning before most children can spell their own names. And we are not ready.

The workshop revealed that 53% of children and young people aged 8 to 25 have used an AI chatbot. That might sound like curiosity. But 54% of the same cohort also worry about AI taking their jobs. Anxiety is already built into their relationship with technology – not because they fear the future, but because they feel unprepared for it. And it’s not just chatbots.

Gaming was a key concern. The phenomenon of “skin gambling” – where children use virtual character skins with monetary value to bet on unregulated third-party sites – is now widely regarded as a gateway to online gambling. But only 5% of game consoles have parental controls installed. We have given children casinos without croupiers, and then wondered why they struggle with impulse control.

This is not just a parenting failure. It’s a systemic abdication. Broadband providers offer content filters. Search engines have child-friendly modes. Devices come with monitoring tools. But these safeguards mean little if the adults are not engaged. Parental controls are not just technical features. They are moral responsibilities.

The workshop also touched on social media and mental health, referencing the Royal Society of Public Health’s “Status of Mind” report. YouTube, it found, had the most positive impact, enabling self-expression and access to information. Instagram, by contrast, ranked worst, as it is linked to body image issues, FOMO, sleep disruption, anxiety, and depression.

The workshop ended with a call for digital resilience: recognising manipulation, resisting coercion, and navigating complexity. But resilience doesn’t develop in a vacuum. It needs scaffolding, conversation, and adults who are present physically, intellectually and emotionally.

This is where spiritual and moral leadership must re-enter the conversation. Within days of ascending to the papacy in mid-May, Pope Leo XIV began speaking about AI with startling clarity.

He chose his papal name to echo Leo XIII, who led the Catholic Church during the first Industrial Revolution. That pope challenged the commodification of workers. This one is challenging the commodification of attention, identity, and childhood.

“In our own day,” Leo XIV said in his address to the cardinals, “the Church offers everyone the treasury of its social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defence of human dignity, justice, and labour.”

These are not empty words. They are a demand for ethical clarity. A reminder that technological systems are never neutral. They are always value-laden.

And at the moment, our values are not looking good.

The present is not just a moment. It is a crucible, a pressure point, and a test of whether we are willing to step back into the role of stewards, not just of technology but of each other.

Because the cost of inaction is not a dystopia in the future, it is dysfunction now.

The past

Half-term took us to Quarry Bank, also known as Styal Mill, a red-brick behemoth nestled into the Cheshire countryside, humming with the echoes of an earlier industrial ambition. Somewhere between the iron gears and the stunning garden, history pressed itself against the present.

Built in 1784 by Samuel Greg, Quarry Bank was one of the most advanced cotton mills of its day – both technologically and socially. It offered something approximating healthcare, basic education for child workers, and structured accommodation. By the standards of the time, it was considered progressive.

Image created on Midjourney

However, 72-hour work weeks were still the norm until legislation intervened in 1847. Children laboured long days on factory floors. Leisure was a concept, not a right.

What intrigued me most, though, was the role of Greg’s wife, Hannah Lightbody. It was she who insisted on humane reforms and built the framework for medical care and instruction. She took a paternalistic – or perhaps more accurately, maternalistic – interest in worker wellbeing. 

And the parallels with today are too striking to ignore. Just as it was the woman of the house in 19th-century Cheshire who agitated for better conditions for children, it is now mothers who dominate the frontline of digital safety. It was women who filled that school hall during the online safety talk. It is often women – tech-savvy mothers, underpaid teachers, exhausted child psychologists – who raise the alarm about screen time, algorithmic manipulation, and emotional resilience.

The maternal instinct, some would argue. That intuitive urge to protect. To anticipate harm before it’s visible. But maybe it’s not just instinct. Maybe it’s awareness. Emotional bandwidth. A deeper cultural training in empathy, vigilance, care.

And so we are left with a gendered question: why is it, still, in 2025, that women carry the cognitive and emotional labour of safeguarding the next generation?

Where are the fathers? Where are the CEOs? Where are the policymakers?

Why do we still assume that maternal concern is a niche voice, rather than a necessary counterweight to systemic neglect?

History has its rhythms. At Quarry Bank, the wheels of industry turned because children turned them. Today, the wheels of industry turn because children are trained to become workers before they are taught to be humans.

Only the machinery has changed.

Back then, it was looms and mills. Today, it is metrics and algorithms. But the question remains the same: are we extracting potential from the young, or investing in it?

The lambs in the neighbouring field didn’t know any of this, of course. They leapt. They bleated. They reminded my children – and me – of a world untouched by acceleration.

We cannot slow time. But we can choose where we place our attention.

And attention, now more than ever, is the most precious gift we can give. Not to machines, but to the minds that will inherit them.

Statistics of the month

📈 AI accelerates – but skills lag
In just 18 months, AI jumped from sixth place to become the most in-demand tech skill in the world – the steepest rise in over 15 years. Yet various other reports show that people lack these skills, leaving a huge gap. 🔗

📉 Workplace engagement crashes
Global employee engagement has dropped to just 21% – matching levels seen during the pandemic lockdowns. Gallup blames poor management, with young and female managers seeing the sharpest declines. The result? A staggering $9.6 trillion in lost productivity. 🔗

🧒 Social media starts at age three
More than 50% of UK children aged 3–12 now have at least one social media account – despite age limits set at 13+. By age 16, 80% are active across TikTok, Snapchat, Instagram, and WhatsApp. Childhood, it seems, is now permanently online. 🔗

🤖 AI anxiety sets in early
According to Nominet’s annual study of 8-25 year olds in the UK, 53% have used an AI chatbot, and 54% worry about AI’s impact on future jobs. The next generation is both enchanted by and uneasy about their digital destiny. 🔗

🚨 Cybercrime rebounds hard
After a five-year decline, major cyber attacks are rising in the UK – up to 24% from 16% two years ago. Insider threats and foreign powers are now the fastest-growing risks, overtaking organised crime. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 16)


TL;DR: April’s Go Flux Yourself explores the rise of AI attachment and how avatars, agents and algorithms are slipping into our emotional and creative lives. As machines get more personal, the real question isn’t what AI can do. It’s what we risk forgetting about being human …

Image created on Ninja AI

The future

“What does relationship communication – and attachment in particular – look like in a future where our most meaningful conversations may be with digital humans?”

The robots aren’t coming. They’re already in the room, nodding along, offering feedback, simulating empathy. They don’t sleep. They don’t sigh. And increasingly, they feel … helpful.

In 2025, AI is moving beyond spreadsheets and slide decks and entering our inner lives. According to new Harvard Business Review analysis, written by Filtered.com co-founder and author Marc Zao-Sanders, the fastest-growing use for generative AI isn’t work but therapy and companionship. In other words, people are building relationships with machines. (I’ve previously written about AI companions – including in June last year.)

Some call this disturbing. Others call it progress. At DTX Manchester earlier this month, where I moderated a panel on AI action plans on the main stage (and wrote a summary of my seven takeaways from the event), the conversation was somewhere in between. One question lingered among the panels and product demos: how will we relate to one another when technology becomes our emotional rehearsal partner?

This puzzler is no longer only theoretical. RealTalkStudio, founded by Toby Sinclair, provides AI avatars that help users prepare for hard conversations: delivering bad news, facing conflict, and even giving feedback without sounding passive-aggressive. These avatars pick up on tone, hesitation, and eye movement. They pause in the right places, nod, and even move their arms around.

I met Toby at DTX Manchester, and we followed up with a video call a week or so later, after I’d road-tested RealTalkStudio. The prompts on the demo – a management scenario – were handy and enlightening, especially for someone like me, who has never really managed anyone (do children count?). They allowed me to speak with my “direct report” adroitly, to achieve a favourable outcome for both parties. 

Toby had been at JP Morgan for almost 11 years until he left to establish RealTalkStudio in September, and his last role was Executive Director of Employee Experience. Why did he give it all up?

“The idea came from a mix of personal struggle and tech opportunity,” he told me over Zoom. “I’ve always found difficult conversations hard – I’m a bit of a people pleaser, so when I had to give feedback or bad news, I’d sugarcoat it, use too many pillow words. My manager [at JP Morgan] was the opposite: direct, no fluff. That contrast made me realise there isn’t one right way – but practice is needed. And a lot of people struggle with this, not just me.”

The launch of ChatGPT, in November 2022, prompted him to explore possible solutions using technology. “Something clicked. It was conversational, not transactional – and I immediately thought, this could be a space to practise hard conversations. At first, I used it for myself: trying to become a better manager at JP Morgan, thinking through career changes, testing it as a kind of coach or advisor. That led to early experiments in building an AI coaching product, but it flopped. The text interface was too clunky, the experience too dull. Then, late last year, I saw how far avatar tech had come.” 

Suddenly, Toby’s idea felt viable. Natural, even. “I knew the business might not be sustainable forever, but for now, the timing and the tech felt aligned. I could imagine it being used for manager training, dating, debt collectors, airline … so many use cases.”

Indeed, avatars are not just used in work settings. A growing number of people – particularly younger generations – are turning to AI to rehearse dating, for instance. Toby has been approached by an Eastern European matchmaking service. “They came to me because they’d noticed a recurring issue, especially with younger men: poor communication on dates, and a lack of confidence. They were looking for ways to help their clients – mainly men – have better conversations. And while practice helps, finding a good practice partner is tricky. Most of these men don’t have many female friends, and it’s awkward to ask someone: ‘Can we practise going on a date?’ That’s where RealTalk comes in. We offer a realistic, judgment-free way to rehearse those conversations. It’s all about building confidence and clarity.”

These avatars flirt back. They guide you through rejection. They help you practise confidence without fear of humiliation. It’s Black Mirror, yes. But also oddly touching. On one level, this is useful. Social anxiety is rising. Young people in particular are navigating a digital-first emotional landscape. An AI avatar offers low-risk rehearsal. It doesn’t laugh. It doesn’t ghost.

On another level, it’s deeply troubling. The ability to control the simulation – to tailor responses, remove ambiguity, and mute discomfort – trains us to expect real humans to behave predictably, like code. We risk flattening our tolerance for emotional nuance. If your avatar never rolls its eyes or forgets your birthday, why tolerate a flawed, chaotic, human partner?

When life feels high-stakes and unpredictable, a predictable conversation with a patient, programmable partner can feel like relief. But what happens when we expect humans to behave like avatars? When spontaneity becomes a bug, not a feature?

That’s the tension. These tools are good, and only improving. Too good? The quotation I started this month’s Go Flux Yourself with comes from Toby, who has a two-year-old boy, Dylan. As our allotted 30 minutes neared its end, the hugely enjoyable conversation turned philosophical, and he posed this question: “What does relationship communication – and attachment in particular – look like in a future where our most meaningful conversations may be with digital humans?”

It’s clear that AI avatars are no longer just slick customer service bots. They’re surprisingly lifelike. Character-3, the latest from Hedra, mimics micro-expressions with startling accuracy. Eyebrows arch. Shoulders slump. A smirk feels earned.

This matters because humans are built to read nuance. We feel it when something’s off. But as avatars close the emotional gap, that sense of artifice starts to slip. We begin to forget that what we engage with isn’t sentient – it’s coded.

As Justine Moore from Andreessen Horowitz stressed in an article outlining the roadmap for avatars (thanks for the tip, Toby), these aren’t talking heads anymore. They’re talking characters, designed to be persuasive. Designed to feel real enough.

So yes, they’re useful for training, coaching, even storytelling. But they’re also inching closer to companionship. And once a machine starts mimicking care, the ethics get blurry.

Nowhere is the ambivalence more acute than in the creative industries. The spectre of AI-generated music, art, and writing has stirred panic among artists. And yet – as I argued at Zest’s Greenwich event last week – the most interesting possibilities lie in creative amplification, not replacement.

For instance, the late Leon Ware’s voice, pulled from a decades-old demo, now duets with Marcos Valle on Feels So Good, a track left unfinished since 1979. The result – which I heard at the Jazz Cafe last August, when I was lucky enough to catch the octogenarian Valle – was genuinely moving. Not because it’s novel, but because it’s human history reassembled. Ware isn’t being replaced. He’s being recontextualised.

We’ve seen similar examples in recent months: a new Beatles song featuring a de-noised John Lennon; a Beethoven symphony completed with machine assistance. Each case prompts the same question: is this artistry, or algorithmic taxidermy?

From a technical perspective, these tools are astonishing. From a legal standpoint, deeply fraught. But from a cultural angle, the reaction is more visceral: people care about authenticity. A recent UK Music study found that 83% of UK adults believe AI-generated songs should be clearly labelled. Two-thirds worry about AI replacing human creativity altogether.

And yet, when used transparently, AI can be a powerful co-creator. I’ve used it to organise ideas, generate structure, and overcome writer’s block. It’s a tool, like a camera, or a DAW, or a pencil. But it doesn’t originate. It doesn’t feel.

As Dean, a community member of Love Will Save The Day FM (for whom my DJ alias Boat Floaters has a monthly show called Love Rescue), told me: “Real art is made in the accidents. That’s the magic. AI, to me, reduces the possibility of accidents and chance in creation, so it eliminates the magic.”

That distinction matters. Creativity is not just output. It’s a process. It’s the struggle, the surprise, the sweat. AI can help, but it can’t replace that.

Other contributions from LWSTD members captured the ambivalence of AI and creativity – in music, in this case, but these viewpoints can be broadened out to the other arts. James said: “Anything rendered by AI is built on the work of others. Framing this as ‘democratised art’ is disingenuous.” He noted how Hayao Miyazaki of Studio Ghibli expressed deep disgust when social media feeds became drowned in AI parodies of his art. He criticised it as an “insult to life itself”.

Sam picked up this theme. “The Ghibli stuff is a worrying direction of where things can easily head with music – there’s already terrible versions of things in rough styles but it won’t be long before the internet is flooded with people making their own Prince songs (or whatever) but, as with Ghibli, without anything beyond a superficial approximation of art.”

And Jed pointed out that “it’s all uncanny – it’s close, but it’s not right. It lacks humanity.”

Finally, Larkebird made an amusing distinction. “There are differences between art and creativity. Art is a higher state of creativity. I can add coffee to my tea and claim I’m being creative, but that’s not art.”

Perhaps, though, if we want to glimpse where this is really headed, we need to look beyond the avatars and look to the agents, which are currently dominating the space.

Ray Smith, Microsoft’s VP of Autonomous Agents, shared a fascinating vision during our meeting in London in early April. His team’s strategy hinges on three tiers: copilots (assistants), agents (apps that take action), and autonomous agents (systems that can reason and decide).

Imagine an AI that doesn’t just help you file expenses but detects fraud, reroutes tasks, escalates anomalies, all without being prompted. That’s already happening. Pets at Home uses a revenue protection agent to scan and flag suspicious returns. The human manager only steps in at the exception stage.

And yet, during Smith’s demo … the tech faltered. GPU throttling. Processing delays. The AI refused to play ball.

It was a perfect irony: a conversation about seamless automation interrupted by the messiness of real systems. Proof, perhaps, that we’re still human at the centre.

But the direction of travel is clear. These agents are not just tools. They are colleagues. Digital labour, tireless and ever-present.

Smith envisions a world where every business process has a dedicated agent. Where creative workflows, customer support, and executive decision-making are all augmented by intelligent, autonomous helpers.

However, even he admits that we need a cultural reorientation. Most employees still treat AI as a search box. They don’t yet trust it to act. That shift – from command-based to companion-based thinking – is coming, slowly, then suddenly (to paraphrase Ernest Hemingway).

A key point often missed in the AI hype is this: AI is inherently retrospective. Its models are trained on what has come before. It samples. It predicts. It interpolates. But it cannot truly invent in the sense humans do, from nothing, from dreams, from pain.

This is why, despite all the alarmism, creativity remains deeply, stubbornly human. And thank goodness for that.

But there is a danger here. Not of AI replacing us, but of us replacing ourselves – outsourcing our process, flattening our instincts, degrading our skills, compromising originality in favour of efficiency.

AI might never write a truly original poem. But if we rely on it to finish our stanzas, we might stop trying.

Historian Yuval Noah Harari has warned against treating AI as “just another tool”. He suggests we reframe it as alien intelligence. Not because it’s malevolent, but because it’s not us. It doesn’t share our ethics. It doesn’t care about suffering. It doesn’t learn from heartbreak.

This matters, because as we build emotional bonds with AI – however simulated – we risk assuming moral equivalence. That an AI which can seem empathetic is empathetic.

This is where the work of designers and ethicists becomes critical. Should emotional AI be clearly labelled? Should simulated relationships come with disclaimers? If not, we risk emotional manipulation at industrial scale, especially among the young, lonely, or digitally naive. (This recent New York Times piece, about a married, 28-year-old woman in love with her ChatGPT, is well worth a read, showing how easy – and how frightening and costly – it is to become attached to AI.)

We also risk creating a two-tier society: those who bond with humans, and those who bond with their devices.

Further, Harari warned in an essay, published in last Saturday’s Financial Times Weekend, that the rise of AI could accelerate political fragmentation in the absence of shared values and global cooperation. Instead of a liberal world order, we gain a mosaic of “digital fortresses”, each with its own truths, avatars, and echo chambers. 

Without robust ethics, the future of AI attachment could split into a thousand isolated solitudes, each curated by a private algorithmic butler. If we don’t set guardrails now, we may soon live in a world where connection is easy – and utterly empty.

The present

At DTX Manchester this month, the main-stage AI panel I moderated felt very different from those even last year. The vibe was less “what is this stuff?” and more “how do we control the stuff we’ve already unleashed?”

Gone are the proof-of-concept experiments. Organisations are deploying AI at scale. Suzanne Ellison at Lloyds Bank described a knowledge base now used by 21,000 colleagues, cutting information retrieval times in half and boosting customer satisfaction by a third. But more than that, it’s made work more human, freeing up time for nuanced, empathetic conversations.

Likewise, the thought leadership business I co-founded last year, Pickup_andWebb, uses AI avatars for client-facing video content, such as a training programme. No studios. No awkward reshoots. Just instant script updates. It’s slick, smart, and efficient. And yes, slightly unsettling.

Dominic Dugan of Oktra, a man who has spent decades designing workspaces, echoed that tension. He’s sceptical. Most post-pandemic office redesigns, he argues, are just “colouring in” – performative, superficial, Instagram-friendly but uninhabitable. We’ve designed around aesthetics, not people.

Dugan wants us to talk about performance. If an office doesn’t help people do better work, or connect more meaningfully, what’s the point? Even the most elegantly designed workplace means little if it doesn’t accommodate the emotional messiness of human interaction – something AI, for all its growth, still doesn’t understand.

And yet, that fragility of our human systems – tech included – was brought into sharp relief in these last few days (and is ongoing, at the time of writing) when an “induced atmospheric vibration” reportedly caused widespread blackouts in Spain and Portugal, knocking out connectivity across major cities for hours, and in some cases days. No internet. No payment terminals. No AI anything. Life slowed to a crawl. Trains stopped. Offices went dark. Coffee shops switched to cash, or closed altogether. It was a rare glimpse into the abyss of analogue dependency, a reminder that our digital lives are fragile scaffolds built on uncertain foundations.

The outage was temporary. But the lesson lingers: the more reliant we become on these intelligent systems, the greater our vulnerability when they fail. And fail they will. That’s the nature of systems. But it’s also the strength of humans: our capacity to improvise, to adapt, to find ways around failure. The more we automate, the more we must remember this: resilience cannot be outsourced.

And that brings me to my own moment of reinvention.

This month I began the long-overdue overhaul of my website, oliverpickup.com. The current version – featuring a photograph on the home page of me swimming in the Regent’s Park Serpentine at a shoot interviewing Olympic triathlete Jodie Stimpson, goggles on upside down – has served me well, but it’s over a decade old. Also, people think I’m into wild swimming. I’m not, and I detest cold water.

(The 2015 article in FT Weekend has one of my favourite opening lines: “Jodie Stimpson is discussing tactical urination. The West Midlands-based triathlete, winner of two Commonwealth Games golds last summer, is specifically talking about relieving herself in her wetsuit to flood warmth to the legs when open-water swimming.”) 

But it’s more than a visual rebrand. I’m repositioning, due to FOBO (fear of becoming obsolete). The traditional freelance model is eroding, its margins squeezed by algorithmic content and automated writing. While it might not have the personality, depth, and nuance of human writing, AI doesn’t sleep, doesn’t bill by the hour, and now writes decently enough to compete. I know I can’t outpace it on volume. So I have to evolve. Speaking. Moderating. Podcasting. Hosting. These are uniquely human domains (for now).

The irony isn’t lost on me: I now use AI to sharpen scripts, test tone, even rehearse talks. But I also know the line. I know what cannot be outsourced. If my words don’t carry me in them, they’re not worth publishing.

Many of us are betting that presence still matters. That real connection – in a room, on a stage, in a hard conversation – will hold value, even as screens whisper more sweetly than ever.

As such, I’m delighted to have been accepted by Pomona Partners, a speaker agency led by “applied” futurist Tom Cheesewright, whom I caught up with over lunch when at DTX Manchester. I’m looking forward to taking the next steps in my professional speaking career with Tom and the team.

The past

Recently, prompted by a friend’s health scare and my natural curiosity, I spat into a tube and sent off the DNA sample to ancestry.com. I want to understand where I come from, what traits I carry, and what history pulses through me.

In a world where AI can mimic me – my voice, writing style, and image – there’s something grounding about knowing the real me. The biological, lived, flawed, irreplaceable me.

It struck me as deeply ironic. We’re generating synthetic selves at an extraordinary rate. Yet we’re still compelled to discover our origins: to know not just where we’re going, but where we began.

This desire for self-knowledge is fundamental. It sits at the heart of my CHUI framework: Community, Health, Understanding, Interconnectedness. Without understanding, we’re at the mercy of the algorithm. Without roots, we become avatars.

Smith’s demo glitch – an AI agent refusing to cooperate – was a reminder that no matter how advanced the tools, we are still in the loop. And we should remain there.

When I receive my ancestry results, I won’t be looking for royalty. I’ll be looking for roots. Not to anchor me in the past, but to help me walk straighter into the future. I’ll also share those findings in this newsletter. Meanwhile, I’m off to put tea in my coffee.

Statistics of the month

📈 AI is boosting business. Some 89% of global leaders say speeding up AI adoption is a top priority this year, according to new LinkedIn data. And 51% of firms have already seen at least a 10% rise in revenue after implementation.

🏙️ Cities aren’t ready. Urban economies generate most of the world’s GDP, but 44% of that output is at risk from nature loss, recent World Economic Forum data shows. Meanwhile, only 37% of major cities have any biodiversity strategy in place. 🔗

🧠 The ambition gap is growing. Microsoft research finds that 82% of business leaders around the globe say 2025 is a pivotal year for change (85% think so in the UK). But 80% of employees feel too drained to meet those expectations. 🔗

📉 Engagement is slipping. Global employee engagement is down to 21%, according to Gallup’s latest State of the Global Workplace annual report (more on this next month). Managers have been hit hardest – dropping from 30% to 27% – and have been blamed for the general fall. The result? $438 billion in lost productivity. 🔗

💸 OpenAI wants to hit $125 billion. That’s their projected revenue by 2029 – driven by autonomous agents, API tools and custom GPTs. Not bad for a company that started as a non-profit. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 14)

TL;DR: February’s Go Flux Yourself examines fairness as a business – and societal – necessity. Splunk’s Kirsty Paine tackles AI security, Harvard’s Siri Chilazi critiques DEI’s flaws, and Robert Rosenkranz applies Stoic wisdom to ambition, humility, and success in an AI-driven world …

Image created on Midjourney with the prompt “a forlorn man with his young son both with ski gear on at the top of a mountain with no snow on it (but green grass and rock) with a psychedelic sky”

The future

“To achieve anything meaningful, you must accept that you don’t have all the answers. The most successful people are the ones who keep learning, questioning, and improving.”

Robert Rosenkranz has lived the American Dream – but you won’t hear him shouting about it. At 82, he has little interest in the brash, performative ambition that defines modern politics and business. Instead, his story is one of quiet, relentless progress. 

Born into a struggling family, he worked his way through Yale and Harvard, then went on to lead Delphi Financial Group for over three decades. By the time he stepped down as CEO in 2018, he had grown the company’s value 100-fold, overseeing more than $20 billion in assets.

Yet, Rosenkranz’s real legacy might not be in finance, but in philanthropy. Yesterday (February 27), in a smart members’ club (where I had to borrow a blazer at reception – oops!) in Mayfair, London, I attended an intimate lunch to discuss The Stoic Capitalist, his upcoming book on ambition, self-discipline, and long-term success. 

As we received our starters, he shared an extraordinary statistic: “In America, there are maybe a couple of dozen people who have given over a billion dollars in their lifetime. A hundred percent of them are self-made.”

Really? I did some digging, and the numbers back him up. As of 2024, over 25 American philanthropists have donated more than $1 billion each, according to Forbes. Further, of those who have signed the Giving Pledge – committing to give away at least half their wealth – 84% are self-made. Only 16% inherited their fortunes.

The message is clear: those who build their wealth from nothing are far more likely to give it away. Contrast this with Donald Trump, the ultimate heir-turned-huckster. Brash, transactional (“pay-to-play” is how American political scientist Ian Bremmer neatly describes him), obsessed with personal gain, the American president represents a vision of success where winning means others must lose. Rosenkranz embodies something altogether different – ambition not as self-interest, but as a long game that enriches others.

He has also, tellingly, grown apathetic about politics of late. Having once believed in the American meritocracy, the Republican who has helped steer public policy now sees a system increasingly warped by inherited wealth, populism, and those pay-to-play politics. “The future of American politics worries me,” he admitted at the lunch. And given the rise of Trumpian imitators, he has reason to be concerned. To my mind, the world needs more Rosenkranzes – self-made leaders who view ambition and success as vehicles for building, rather than simply taking.

This tension – between long-term, disciplined ambition and short-term, self-serving power – runs through this month’s Go Flux Yourself. Because whether we’re talking about AI security, workplace fairness, or the philosophy of leadership, the real winners will be those who take the long view and seek fairness.

Fairness at work: The illusion of progress

Fairness in the workplace is one of those ideas that corporate leaders love to endorse in principle – but shy away from in practice. Despite billions spent on Diversity, Equity, and Inclusion (DEI) initiatives, meaningful change remains frustratingly elusive. (Sadly, this fact only helps Trump’s forceful agenda to ditch such policies – an approach that is driving the marginalised to seek shelter, at home or abroad.)

“For a lot of organisations, programmatic interventions are appealing because they are discrete. They’re off to the side. It’s easy to approve a one-time budget for a facilitator to come and do a training or participate in a single event. That’s sometimes a lot easier than saying: ‘Let’s change how we evaluate performance.’ But precisely because those latter types of solutions are embedded and affect how work gets done daily, they’re more effective.”

This is the heart of what Harvard’s Siri Chilazi told me when we discussed Make Work Fair, the new book she has co-authored with Iris Bohnet. Their research offers a much-needed reality check on corporate DEI efforts.

Image created on Midjourney with the prompt “a man and a women in work clothes on a balancing scale – equal – in the style of a matisse painting”

She explained why so many workplace fairness initiatives fail: they rely on changing individual behaviour rather than fixing broken systems. “Unconscious bias training has become this multi-billion-dollar industry,” she said. “But the evidence is clear – it doesn’t work.” Studies have shown that bias training rarely leads to lasting behavioural change, and in some cases it even backfires, making people more defensive about their biases rather than less.

So what does work? Chilazi and Bohnet argue that structural interventions – the kind that make fairness automatic rather than optional – are the key to real progress. “If you want to reduce bias in hiring, don’t just tell people to ‘be more aware’ – design the process so that bias has fewer opportunities to creep in,” she told me.

This means:

  • Standardising interviews so every candidate is evaluated against the same criteria
  • Removing names from CVs to eliminate unconscious bias in early screening
  • Making promotion decisions based on clear, structured frameworks rather than subjective “gut feelings”

The companies that have done this properly – like AstraZeneca, which now applies transparent decision-making frameworks to promotions – have seen real progress. Others, Chilazi warned, are simply engaging in performative fairness. “If an organisation is still relying on vague, unstructured decision-making, it doesn’t matter how many DEI consultants they hire – bias will win.”

Perhaps the most telling statistic comes from a 2023 McKinsey report that found that 90% of executives believe their DEI initiatives are effective, but only 40% of employees agree. That gap tells you everything you need to know.

This matters not just ethically, but competitively. Companies that embed fairness into their DNA don’t just avoid scandals and lawsuits – they outperform their competitors. “The data is overwhelming,” Chilazi said. “Fairer companies attract better talent, foster more innovation, and have stronger long-term results.”

Yet many businesses refuse to make fairness a structural priority. Why? Because, as Chilazi put it, “real fairness requires real power shifts. And that makes a lot of leaders uncomfortable.”

But here’s the reality: fairness isn’t a cost – it’s an investment. The future belongs to the companies that understand this. And those that don’t? They’ll be left wondering why the best talent keeps walking out the door.

NB I’ll be discussing some of this next week, on March 4, at the latest Inner London South Virtual Networking event for the Federation of Small Businesses (of which I’m a member). See here to tune in.

Fairness in AI: Who controls the future?

If fairness in the workplace is in crisis, fairness in AI is a full-blown emergency. And unlike workplace bias – which at least has legal protections and public scrutiny – AI bias is being quietly embedded into the foundations of our future.

AI now influences who gets hired, who gets a loan, who gets medical treatment, and even who goes to prison. Yet, shockingly, most companies deploying these systems have no real governance strategy in place.

At the start of February, I spoke with Splunk’s Geneva-based Kirsty Paine, a cybersecurity strategist and World Economic Forum Fellow, who is actively working with governments, regulators, and industry leaders to shape AI security standards. Her message was blunt: “AI governance isn’t just about ethics or compliance – it’s a resilience issue. If you don’t get it right, your business is exposed”.

This is where many boards are failing. They assume AI security is a technical problem, best left to IT teams. But as Paine explained, if AI makes a bad decision – one that leads to reputational, financial, or legal fallout – blaming the engineers won’t cut it.

“We need boards to start thinking of AI governance the same way they think about financial oversight,” she said. “If you wouldn’t approve a financial model without auditing it, why would you sign off on AI that fundamentally impacts customers, employees, and business decisions?”

Historically, businesses have treated cybersecurity as a defensive function – protecting systems from external attacks. But AI doesn’t work like that. It is constantly learning, evolving, and interacting with new data and new risks.

“You can’t just ‘fix’ an AI system once and assume it’s safe,” Paine told me. “AI doesn’t stop learning, so its risks don’t stop evolving either. That means your governance model needs to be just as dynamic.”

At its core, this is about power. Who controls AI, and in whose interests? Right now, most AI development is happening behind closed doors, controlled by a handful of tech giants with little accountability.

One of the biggest governance challenges is that no single company can solve AI security alone. That’s why Paine is leading cross-industry efforts at the WEF, bringing together governments, regulators, and businesses to create shared frameworks for AI security and resilience.

“AI security shouldn’t be a competitive advantage – it should be a shared priority,” she said. “If businesses don’t start working together on governance, they’ll be left at the mercy of regulators who will make those decisions for them.”

One of the most significant barriers to AI security is communication. Paine, who started her career as a mathematics teacher in challenging schools, knows that how you explain something determines whether people truly understand it.

“In cybersecurity and AI, we love jargon,” she admitted. “But if your board doesn’t understand the language you’re using, how can they make informed decisions?”

This is where her teaching background has shaped her approach. “I had to explain complex maths to students who found it intimidating,” she said. “Now, I do the same thing in boardrooms.” Her message: the goal isn’t to impress people with technical terms but to ensure they actually get it.

And this, ultimately, is the hidden risk of AI governance: if leaders don’t understand the systems they’re approving, they can’t govern them effectively.

The present

If fairness has been the intellectual thread running through my conversations this month, sobriety has been the personal one. I’ve been talking about it a lot – on Voice of Islam radio, for example (see here, from about 23 minutes in), where I was invited to discuss the impact of alcohol on society – and in wrapping up Upper Bottom, the sobriety podcast I co-hosted for the past year.

Ending Upper Bottom felt like the right decision – producing a weekly podcast (an endless cycle of researching, recording, editing, publishing and promoting) is challenging, and harder to justify with no financial reward and little social impact. But it also marked a turning point. 

When we launched last February, it was a passion project – an exploration of what it meant to re-evaluate alcohol’s role in our lives. Over the months, the response was encouraging: messages from people rethinking their own drinking, others inspired to take a break, and some who felt seen for the first time. It proved what I suspected all along: the sweetest fruits of sobriety can be found through clarity, agency, and taking control of your own story.

And now? Well, I’m already lining up new hosting gigs – this time, paid ones. Sobriety has given me a sharper focus, a better work ethic, and, frankly, a clearer voice. I have no interest in being a preacher about it – if you want a drink, have a drink – but I do know that since cutting out alcohol, opportunities keep rolling in. And I’m open to more.

I bring this up because storytelling – whether through a podcast mic, a radio interview, or the pages of Go Flux Yourself – is essentially about fairness too. Who gets to tell their story? Whose voice gets amplified? Who is given the space to question things that seem “normal” but, on closer inspection, might not be serving them?

This is the thread that ties my conversations this month – with Kirsty on AI governance, Robert on wealth distribution and politics, and Siri on workplace fairness, alongside my own reflections on sobriety – into something bigger. Fairness isn’t just about systems. It’s about who gets to write the script.

And right now, I’m more interested than ever in shaping my own.

The past

February was my birthday month. Another year older, another opportunity to reflect. And this year, the reflection came at a high altitude.

I spent a long weekend skiing in Slovenia with my 10-year-old son, Freddie – his first time on skis. It was magical, watching him initially wobble, find his balance, and then, quickly, gain confidence as he carved his way down the slopes. It took me back to my own childhood, when I was lucky enough to ski from a young age. But that word – lucky – stuck with me.

Because here’s the truth: by the time Freddie is my age, skiing might not be possible anymore.

The Alps are already feeling the effects of climate change. Lower-altitude resorts are seeing shorter seasons, more artificial snow, and unpredictable weather patterns. Consider that 53% of European ski resorts face a ‘very high risk’ of snow scarcity if temperatures rise by 2°C. By the time Freddie’s children – if he has them – are old enough to ski, the idea of a family ski holiday may be a relic of the past.

It’s sobering to think about, especially after spending a month discussing fairness at work and in AI. Because climate change is the ultimate fairness issue. The people least responsible for it – future generations – are the ones who will pay the highest price.

For now, I’m grateful. Grateful that I got to experience skiing as a child, grateful that I got to share it with Freddie, grateful that – for now – we still have these mountains to enjoy.

But fairness isn’t about nostalgia. It’s about responsibility. And if we don’t take action, the stories we tell our grandchildren about the world we once had will be the closest they ever get to it.

Statistics of the month

📉 Is Google search fading? A TechRadar study found that 27% of US respondents now use AI tools instead of search engines. (I admit, I’m the same.) The way we find information is shifting fast. 🔗

🚀 GenAI is the skill to have. Coursera saw an 866% rise in AI course enrolments among enterprise learners. Year-on-year increases hit 1,100% for employees, 500% for students, and 1,600% for job seekers. Adapt, or be left behind. 🔗

Job applications are too slow. Candidates spend 42 minutes per application – close to the 53-minute threshold they consider excessive. Nearly half (45%) give up if the process drags on. Businesses must streamline hiring or risk losing top talent. 🔗

🤖 Robots are easing the burden on US nurses. AI assistants have saved clinicians 1.5 billion steps and 575,000+ hours by handling non-patient-facing tasks. A glimpse into the future of healthcare efficiency. 🔗

💻 The Slack-Zoom paradox. Virtual tools have boosted productivity for 59% of workers, yet 45% report “Zoom fatigue” – with men disproportionately affected. Remote work: a blessing and a burden. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 12)

TL;DR: December’s Go Flux Yourself explores how AI agents are reshaping the workforce, draws philosophical parallels between death and AI uncertainty, examines predictions for 2025, considers the growing AI-induced loneliness epidemic, and reflects on key themes from the newsletter’s inaugural year … 

Image created on Midjourney with the prompt “a thoughtful AI agent working alongside humans in an office setting, with both harmony and tension visible, in the style of a Rembrandt painting”

The future

“Stop thinking of AI as doing the work of 50% of the people. Start thinking of AI as doing 50% of the work for 100% of the people.”

These words from Jensen Huang, the CEO of Nvidia (whose vision of a two-thirds non-human workforce featured in November’s Go Flux Yourself), crystallise a profound shift in how we should conceptualise the relationship between artificial intelligence and human work – a subject that has been top of mind for me this past year. On the eve of 2025, the conversation is evolving from whether AI will replace humans to how AI agents will augment and transform human capabilities.

At the start of December, I spoke with Sultan Saidov, Co-Founder and President at Beamery, an HR management software company headquartered in London. He offers valuable insights into this transformation and suggests we’re witnessing a fundamental restructuring of organisations, shifting from traditional pyramid structures to diamond shapes.

“In a world where AI might not be super smart or reliable, but it can do lots of low-cost tasks, you may not need as wide a pyramid at the bottom of the organisation,” Saidov explained. This structural evolution reflects Huang’s declaration.

The implications of this shift are far-reaching. “If you start with empowerment,” Saidov argued, “you don’t just have to think about what tasks you are doing today that an agent could do. You have to think about what work would be really valuable in my time in a world where agents are available to me.”

This transformation brings profound questions about human identity and purpose in an AI-augmented world. Chris Langan – the so-called “smartest man in the world”, according to the Daily Mail, with an IQ reportedly between 190 and 210 – offers an intriguing perspective through his Cognitive-Theoretic Model of the Universe.

He suggests that when we die, we transition entirely to another plane of existence – one we cannot access while alive. In this new state, we might not even remember who we were before, existing in what Langan describes as a “meta-simultaneous” state where all possible incarnations exist at once.

The parallels between Langan’s metaphysical musings and our current AI moment are striking. Just as no one knows what happens next when we die – to borrow the motto above OpenAI CEO Sam Altman’s desk – we face similar uncertainty about how AI will transform our existence. Both scenarios involve a fundamental transformation of consciousness and being, with outcomes that remain tantalisingly beyond our current understanding.

Saidov recognises this uncertainty in the workplace context. “You have certain roles that are gradually becoming agents,” he noted, “like, let’s say, scheduling interviews and coordinating. That’s increasingly becoming this primarily non-human task. The humans that did that before are gradually moving into other things; otherwise, you won’t need as many of them.”

The emergence of “agentic AI” – autonomous systems that can perform complex tasks with minimal human supervision – represents perhaps the most significant shift in how work gets done since the Industrial Revolution. No doubt there will be opportunities and challenges (not least with security permissions and data sharing, as raised by SailPoint’s CEO Mark McClain in last month’s newsletter). Sultan emphasises that this isn’t just about efficiency: “The purpose of the HR function is increasingly to help navigate this massive human transformation.”

Just as Langan suggests our consciousness might persist in a different form after death, Saidov sees human work evolving rather than disappearing. “The nuance of how do you do this right, especially for people topics, has to come with a bit of governance,” he stressed. This governance becomes crucial as we navigate what he calls “task taxonomies” – the complex mapping of which tasks AI agents should handle and which should remain with humans.

In 2025, the challenge isn’t just technical implementation but preparing the next generation for this uncertain future. As I have come to ask many interviewees, I quizzed Saidov about how we should prepare our children. (I’m interested in these answers as the father of two children, aged 10 and four.) “It’s hard to predict what tools you will use [in the coming years], so probably the best thing you could do is encourage a curiosity for finding a passion, which isn’t so much a skill as a mindset that lets you explore what you care about more proactively,” he advised. 

I love this reply, and this emphasis on curiosity and adaptability echoes through Langan’s philosophical framework and Sultan’s practical insights. In a world where AI agents are transforming traditional roles, the ability to navigate uncertainty becomes paramount. “There’s probably going to be toolkits for kids who are trying to zoom in on what’s real and what’s not real, to be more sophisticated in thinking about what’s fake and not fake than we are,” Saidov added.

The transformation of human interaction extends beyond the workplace, however. A Financial Times article about Meta’s plans to create bots on its social media platforms confirmed fears that – driven by misguided, rapacious capitalist goals – the biggest players are missing the point of technology, accelerating tech-induced loneliness and producing content that is eating itself.

Indeed, Meta revealed its alarming vision for AI-generated characters to populate its social media platforms, with Connor Hayes, Vice-President of Product for Generative AI, suggesting these AI personas will exist “in the same way that accounts do”, complete with biographies, profile pictures, and the ability to generate and share content. 

While this might make platforms “more entertaining and engaging” – Meta’s stated priority for the next two years, according to Hayes – it raises profound questions about authenticity and connection in our digital future. As Becky Owen, former head of Meta’s creator innovations team, warned: “Unlike human creators, these AI personas don’t have lived experiences, emotions, or the same capacity for relatability.” 

This observation feels particularly pertinent when considered alongside Langan’s theories about consciousness and Saidov’s emphasis on human value in an AI-augmented workplace.

Like consciousness in Langan’s model, human work may come to exist simultaneously in multiple states – part human, part machine, with boundaries increasingly fluid. As organisations evolve toward diamond structures, the key to success is not resisting this transformation but embracing its uncertainty while maintaining our essential human qualities.

As we stand on this threshold of unprecedented change, perhaps the most valuable insight comes from combining Langan’s metaphysical perspective with Saidov’s practical wisdom: the future, whether of consciousness or work, may be unknowable, but our ability to adapt and maintain our humanity within it remains firmly within our control.

In early January, I’ll be talking about my latest thinking as the first guest of 2025 on the Leading the Future podcast – look out for that here.

The present

As we peer into the uncertainty of 2025, the predictions from industry leaders paint a picture of unprecedented transformation. Yet beneath the surface enthusiasm for AI adoption lies a growing concern about its impact on human connection and wellbeing.

“The biggest threat AI offers is loneliness,” warned Scott Galloway, professor at NYU Stern School of Business, in his pre-Christmas annual predictions. His concern is far from theoretical – in March, SplitMetrics found that AI companion apps reached 225 million lifetime downloads on the Google Play store alone. We’re witnessing a fundamental shift in how humans seek connection. The rise of AI girlfriends, outpacing their male counterparts by a factor of seven, signals a troubling retreat from human-to-human relationships.

This shift towards digital relationships parallels broader workplace transformations. Roshan Kindred, Chief Diversity Officer at PagerDuty, predicts that “companies will face increasing demands for transparency in their DEI efforts, including publishing data on workforce demographics, pay equity, and diversity initiatives”. The human element becomes more crucial even as automation increases.

The numbers tell a sobering story. According to Culture Amp’s latest data, nearly one in four workers (23%) plan to quit their jobs in 2025. This predicted attrition rate is particularly pronounced in the UK, exceeding both US (19%) and Australian (18%) figures. Only Germany shows a higher potential turnover at 24%.

Leadership quality emerges as the critical factor in this equation. With a great manager and leader, employees’ commitment to stay reaches 94%; with poor leadership, it plummets to just 19%. The financial implications are staggering – replacing an employee can cost between 30% and 200% of their salary, with the average UK salary in 2024 being £37,400.

Jessie Scheepers, Belonging & Impact Lead at Pleo, envisions 2025 as a year when “human-centric leadership will come to the fore, with authentic and empathetic leaders placing a greater focus on their team’s mental health”. This prediction gains urgency when considered alongside Galloway’s warnings about technology-induced isolation.

Meanwhile, the education sector faces its own reckoning. Nikolaz Foucaud, Managing Director EMEA at Coursera, notes that 54% of employers now prioritise skills over traditional credentials. This shift comes as higher education faces financial challenges, with university fee caps rising to £9,535 a year. The solution, Foucaud suggests, lies in industry “micro-credentials” that bridge the gap between academic learning and workplace demands.

Perhaps most telling is the quantum-computing horizon outlined by Dominic Allon, CEO of Pipedrive. While quantum technologies promise revolutionary advances in optimisation and data security, their immediate impact may be less dramatic than feared. “Small businesses may benefit from breakthroughs in areas like optimisation, but those that adopt or integrate quantum solutions early on could gain a competitive edge in innovation, cost efficiency, and scalability,” he notes.

The financial landscape presents particular challenges. Hila Harel from Fiverr predicts UK businesses will face significant pressures, with average losses expected to reach £138,000 and a quarter of companies anticipating losses over £100,000. Yet within this disruption lies opportunity – particularly for freelancers and flexible workers who can navigate the evolving landscape.

These predictions collectively suggest that 2025 will be less about technological revolution and more about human evolution. The successful integration of AI and other advanced technologies will depend not on the tools’ sophistication but on our ability to maintain and strengthen human connections in an increasingly digital world.

The past

As it’s the end of 2024, I’d like to look back and reflect on the inaugural year of Go Flux Yourself. The themes that emerged across my monthly explorations feel eerily prescient. I began in January examining Sam Altman’s aforementioned desk motto – “no one knows what happens next” – a humble acknowledgement that set the tone for a year of thoughtful examination rather than confident predictions.

That uncertainty proved to be one of my most reliable companions through 2024. In February, I explored the concept of FOBO – fear of becoming obsolete – through an unexpected lens: a chance encounter with a tarot card reader in a cafe. The Page of Swords card suggested the need to embrace new forms of communication and continuous learning, while The Two of Wands warned about the dangers of remaining in our comfort zones while watching the world transform around us.

March’s Go Flux Yourself featured Minouche Shafik’s prescient observation that while “jobs were about muscles in the past, now they’re about brains, but in the future, they’ll be about the heart”. This insight gained particular resonance as the year progressed and AI capabilities expanded, emphasising the enduring value of human empathy and emotional intelligence.

In April, I shared my thoughts about values. In August, I revealed the CHUI Framework – Community, Health, Understanding, and Interconnectedness – providing structured guidance for navigating human-work evolution. These values proved essential as organisations grappled with technological change and the persistent challenge of maintaining human connection in increasingly digital workplaces.

During the summer months, I examined what Scott Galloway calls “the biggest threat we’re not discussing enough”: loneliness. June’s newsletter warned about the potential social costs of increasing reliance on digital relationships.

July’s Go Flux Yourself provided one of the most sobering insights of the year. Futurist Gerd Leonhard compared the arrival of artificial general intelligence (AGI) to “a meteor coming down from above, stopping culture and knowledge as we know it”. This metaphor gained particular potency when considered alongside Chris Langan’s theories about consciousness and existence, which we explored earlier in this edition.

August introduced the concept of being “kind explorers” in the digital age, inspired by my daughter’s career ambition shared at her nursery graduation. September reflected on the importance of wonder and magic in an increasingly automated world, while October examined the dark side of AI through conversations with cybersecurity luminaries Dr Joye Purser and Shlomo Kramer.

November channelled Marcus Aurelius’s wisdom about the quality of our thoughts determining the quality of our lives – a theme that resonates powerfully as we conclude our year’s journey. Throughout these explorations, certain constants emerged: the importance of human connection in an increasingly digital world, the need for thoughtful implementation of technology, and the enduring value of authentic leadership.

I’ve witnessed the workplace transform from a location to a concept, and documented the rise of what we now call the “relationship economy”. Further, research suggests that by 2025, up to 90% of online content could be AI-generated, making human authenticity more valuable than ever. (This is one of the reasons I have this year established Pickup_andWebb, a content company providing human-first thought leadership for businesses and C-suite executives. Read this blog on our thinking about the near future of thought leadership here.)

The year brought tangible changes too. Australia made history by banning social media for under-16s, while EE recommended against smartphones for under-11s. Microsoft’s Copilot AI demonstrated both the promise and perils of workplace AI integration, with privacy breaches highlighting the gap between technological capability and practical implementation.

My explorations of – and writing and speaking about – human-work evolution have taken me from London’s Silicon Roundabout to Barcelona’s tech hubs, from Manchester’s Digital Transformation Expo to a security conference in Rome, and elsewhere.

Looking back, perhaps Go Flux Yourself’s most significant achievement in 2024 has been maintaining a balanced perspective – neither succumbing to techno-optimism nor falling into dystopian pessimism. I’ve documented the challenges while highlighting opportunities, always emphasising the importance of human agency in shaping our technological future.

The questions I asked in January remain relevant. How do we maintain our humanity in an increasingly automated world? How do we ensure technology serves human flourishing rather than diminishing it? But I’ve gained valuable insights into answering them, understanding that the key lies not in resisting change but in thoughtfully shaping it.

As I close the final Go Flux Yourself chapter of 2024, we can appreciate that while uncertainty remains our constant companion in the coming year, our capacity for adaptation, innovation, and human connection provides a reliable compass for navigating whatever comes next.

Ultimately, whether facing AGI, armies of AI agents, or augmented workplaces, the quality of our thoughts and the strength of our human bonds will determine the success of our journey.

Statistics of the month

  • AI is the fastest-growing skill among employees, job seekers and students in the UK and globally, with Coursera course enrolments in this domain having increased 866% year-on-year, according to newly released data
  • Speaking to BBC Radio 4’s Today programme, Nobel laureate Geoffrey Hinton – the “Godfather of AI” – has doubled his doomsday prediction, now warning of a 1-in-5 chance that AI could wipe out humanity by 2054. He cautions that we’ll be like toddlers attempting to control super-intelligent machines, adding: “We’ve never had to deal with things more intelligent than ourselves before.”

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 10)

TL;DR: October’s Go Flux Yourself explores the dark and light sides of AI through Nobel Prize winners and cybersecurity experts, weighs the impact of disinformation ahead of the US election, confronts haunting cases of AI misuse, and finds hope in a Holocaust survivor’s legacy of ethical innovation …

Image created on Midjourney with the prompt “a scary megalomaniac dressed as halloween monster with loads of computers showing code behind him in the style of an Edward Hopper painting”

The future

“Large language models are like young children – they grow and develop based on how you nurture and treat them.”

I’m curious by nature – it’s central to my profession as a truth-seeking human-work evolution journalist. But sometimes, it’s perhaps best not to peek behind the curtain, as what lies behind might be terror-inducing. Fittingly, this newsletter is published on Halloween, so you might expect some horror. Consider yourself warned!

I was fortunate enough to interview two genuine cybersecurity luminaries in as many days towards the end of October. First, Dr Joye Purser, Field CISO at Veritas Technologies and a former White House director who was a senior US Government official during the Colonial Pipeline attack in 2021, was over from Atlanta. 

And, the following day, the “godfather of Israeli cybersecurity”, Shlomo Kramer, Co-Founder and CEO of Cato Networks, treated me to lunch at Claridge’s – lucky me! – after flying in from Tel Aviv.

The above quotation is from my conversation with Joye, who warned that if a nation isn’t democratic, it will train its AI systems very differently, with state-controlled information.

Both she and Shlomo painted a sobering picture of our technological future, particularly as we approach what could be the most digitally manipulated vote in history: the United States presidential election. Remember, remember the fifth of November, indeed.

“The risk is high for disinformation campaigns,” Joye stressed, urging voters to “carefully scrutinise the information they receive for what is the source, how recent or not recent is the information, and just develop an increasing public awareness of the warning signs or red flags that something’s not right with the communication”.

Shlomo, who co-founded Check Point Software Technologies in 1993, offered a stark analysis of how social media has fractured our society. “People don’t care if it’s right or wrong, whether a tweet is from a bot or a Russian campaign,” he said. “They just consume it, they believe it – it becomes their religion.” 

Shlomo drew a fascinating parallel between modern social media echo chambers and medieval church communities, suggesting we’ve come full circle from faith-based societies through the age of reason and back to tribal belief systems.

And, of course, most disagreements that develop into wars are primarily down to religious beliefs, at least on the surface. Is it a coincidence that two of the largest wars we have known in four decades are raging? (And if Donald Trump is voted back into the White House in a week, then what will that mean for Europe if – as heavily hinted – funding for Ukraine’s military is strangled?) 

After the collective trauma of the coronavirus pandemic, the combination of social media echo chambers and manipulated AIs is fanning the flames of a smouldering society. Have you noticed how, generally, people are snappier with one another?

The cybersecurity challenges are equally worrying. Both experts highlighted how AI is supercharging traditional threats. Shlomo’s team recently uncovered an AI tool that can generate entire fake identities – complete with convincing video, passport photos, and multiple corroborating accounts – capable of fooling sophisticated know-your-customer systems at financial institutions.

Maybe most concerning was their shared view that cybersecurity isn’t a problem we can solve but must constantly manage. As Shlomo said: “You have to run as fast as possible to stay in the same place.” It’s a perpetual arms race between defenders and increasingly sophisticated threats.

Still, there’s hope. The very technologies that create these challenges might also help us overcome them. Both experts emphasised that while bad actors can use AI for deception, it’s also essential for defence. The key is ensuring we develop these tools with democratic values and human welfare in mind.

When I asked about preparing our children for this uncertain future – as I often do when interviewing experts who also have kids – their responses were enlightening. Joye emphasised the importance of teaching children to be “informed consumers of information” who understand the significance of trusted sources and proper journalism. 

Shlomo’s advice was more philosophical: children must learn to “listen to themselves and believe what they hear is true” – to trust their inner voice amid the cacophony of digital noise.

In the post-truth era, who can we trust if not ourselves?

A couple of years ago, John Elkington, a world authority on corporate responsibility and sustainable development who coined the term “triple bottom line”, told me: “In the vacuum of effective politicians, people are turning to businesses for leadership, so business leaders must accept that responsibility.” (Coincidentally, this year marks three decades since the British environmental thinker coined the “3 Ps” of people, planet, and profit.)

For this reason, CEOs, especially, have to speak up with authority, authenticity and original thought. Staying curious, thinking critically, and calling out bad practices are increasingly important, particularly for industry leaders.

With an eye on the near future and the need for truth, I’m pleased to announce the soft launch of Pickup_andWebb, a collaboration with brand strategist and client-turned-friend Cameron Webb. Pickup_andWebb develops incisive, issue-led thought leadership for ambitious clients looking to provoke stakeholder and industry debate and enhance their expert reputation.

“In an era of unprecedented volatility, CEOs navigate treacherous waters,” I wrote recently in our opening Insights article titled Speak up or sink. “The growing list of headwinds is formidable – from geopolitical tensions and wars reshaping global alliances to the relentless march of technological advancements disrupting entire industries. 

“Add to this the perfect storm of rising energy and material costs, traumatised supply chains, and the ever-present spectre of climate change, and it’s clear that the modern CEO’s role has never been more challenging – or more crucial. Yet, despite this incredible turbulence, the truly successful CEO of 2024 must remain a beacon of stability and vision. They are the captains who keep their eyes fixed on the distant horizon, refusing to be distracted by the immediate squalls. 

“More than ever, they must embody the role of progressive visionaries, their gaze penetrating years into the future to seize nascent opportunities or deftly avoid looming catastrophes. But vision alone is not enough.

“Today’s exemplary leaders are expected to steer with a unique blend of authenticity, humility, and vulnerability. They understand that true strength lies not in infallibility but in the courage to acknowledge uncertainties and learn from missteps. 

“These leaders aren’t afraid to swim against the tide, challenging conventional wisdom when necessary and inspiring their crews to navigate uncharted waters.”

If you are – or you know – a leader who might need help swimming against the tide and spreading their word, let’s start a conversation and co-create in early 2025.

The present

This month’s news perfectly illustrated AI’s Jekyll-and-Hyde nature on the subject of truth and technology. We saw the good, the bad, and the downright ugly.

While I’ve shared the darker future possibilities outlined by cybersecurity experts Joye and Shlomo, the 2024 Nobel Prizes highlighted AI’s extraordinary potential for good.

Sir Demis Hassabis, chief executive of Google DeepMind, shared the chemistry prize for using AI to crack a 50-year-old puzzle in biology: predicting the structure of every protein known to humanity. His team’s creation, AlphaFold, has already been used by over two million scientists worldwide, helping develop vaccines, improve plant resistance to climate change, and advance our understanding of the human body.

The day before, Geoffrey Hinton – dubbed the “godfather of AI” – won the physics prize for his pioneering work on neural networks, the very technology that powers today’s AI systems. Yet Hinton, who left Google in May 2023 to “freely speak out about the risk of AI”, now spends his time advocating for greater AI safety measures.

It’s a fitting metaphor for our times: the same week that celebrated AI’s potential to revolutionise scientific discovery also saw warnings about its capacity for deception and manipulation. As Hassabis himself noted, AI remains “just an analytical tool”; how we choose to use it matters, echoing Joye’s comment about how we feed LLMs.

Related to this topic, I was on stage twice at the Digital Transformation EXPO (DTX) London 2024 at the start of the month. Having been asked to produce a write-up of the two-day conference – the theme was “reinvention” – I noted how “the tech industry is caught in a dizzying dance of progress and prudence”.

I continued: “As industry titans and innovators converged at ExCeL London in early October, a central question emerged: how do we harness the transformative power of AI while safeguarding the essence of our humanity?

“As we stand on the brink of unprecedented change, one thing becomes clear: the path forward demands technological prowess, deep ethical reflection, and a renewed focus on the human element in our digital age.”

In the opening keynote, Derren Brown, Britain’s leading psychological illusionist, called for a pause in AI development to ensure technological products serve humans, not vice versa.

“We need to keep humanity in the driving seat,” Brown urged, challenging the audience to rethink the breakneck pace of innovation. This call for caution contrasted sharply with the rest of the conference’s urgency.

Piers Linney, Founder of ImplementAI and former Dragons’ Den investor, provided the most vivid analogy of the event. He likened competing in today’s market without embracing AI to “cage fighting – to the death – against the world champion, yet having Ironman in one’s corner and not calling him for help”.

Meanwhile, Michael Wignall, Customer Success Leader UK at Microsoft, warned: “Most businesses are not moving fast enough. You need to ask yourself: ‘Am I ready to embrace this wave of transformation?’ Your competitors may be ready.” His advice was unequivocal: “Do stuff quickly. If you are not disrupting, you will be disrupted.”

I was honoured to moderate a main-stage panel exploring human-centred tech design, offering a crucial counterpoint to the “move-fast-and-break-things” mantra. Gavin Barton, VP of Engineering at Booking.com, Sue Daley, Director of Tech and Innovation at techUK, and Dr Nicola Millard, Principal Innovation Partner at BT Group, joined me.

“Focus on the outcome you’re looking for,” advised Gavin. “Look at the problem rather than the metric; ask what the real problem is to solve.” Sue cautioned against unquestioningly jumping on the AI bandwagon, stressing: “Think about what you’re trying to achieve. Are you involving your employees, workforce, and potentially customers in what you’re trying to do?” Nicola introduced her “3 Us” framework – Useful, Useable, and Used – for evaluating tech innovation.

Regarding tech’s darker side, Jake Moore, Global Cybersecurity Advisor at ESET, delivered a hair-raising presentation titled The Rise of the Clones on DTX’s Cyber Hacker stage. His practical demonstration of deep fake technology’s potential for harm validated the warnings from both Joye and Shlomo about AI-enabled deception.

Moore revealed how he had used deep fake video and voice technology to penetrate a business’s defences and commit small-scale fraud. It was particularly unnerving given Shlomo’s earlier warning about AI tools generating entire fake identities that can fool sophisticated verification systems.

Moore quoted the late Stephen Hawking’s prescient warning that “AI will be either the best or the worst thing for humanity”, and his demonstration felt like a stark counterpoint to the Nobel Prize celebrations. Here, in one conference hall, we witnessed both the promise and peril of our AI future – rather like watching Dr Jekyll transform into Mr Hyde.

Later in the month, there were yet darker instances of AI’s misuse and abuse. In a story that reads like a Black Mirror episode, American Drew Crecente discovered that his late teenage daughter, Jennifer, who was murdered in 2006, had been resurrected as an AI chatbot on Character.AI. The company claimed the bot was “user-created” and quickly removed it, but the incident raises profound questions about data privacy and respect for the deceased in our digital age.

Arguably even more distressing, and also in the United States, was the case of 14-year-old Sewell Setzer III, who took his own life after developing a relationship with an AI character based on Game of Thrones’ Daenerys Targaryen. His mother’s lawsuit against Character.AI highlights the dangers of AI companions that can form deep emotional bonds with vulnerable users – particularly children and teenagers.

Finally, in what police called a “landmark” prosecution, Bolton-based graphic design student Hugh Nelson was jailed for 18 years after using AI to create and sell child abuse images. The case exemplifies how rapidly improving AI technology can be weaponised for the darkest purposes, with prosecutors noting that “the imagery is becoming more realistic”.

While difficult to stomach, these stories validate warnings about AI’s destructive potential when developed without proper safeguards and ethical considerations. As Joye emphasised, how we nurture these technologies matters profoundly. The challenge ahead is clear: we must harness AI’s extraordinary potential for good while protecting the most vulnerable members of our society.

The past

During lunch at Claridge’s, Shlomo shared a remarkable story about his grandfather – also Shlomo, after whom he is named – that feels particularly pertinent given the topic of human resilience in the face of technological change.

The elder Shlomo was an entrepreneur in Poland who survived Stalin’s Gulag through his business acumen. After enduring that horror, he navigated the treacherous post-war period in Austria – a time and place immortalised in Orson Welles’ The Third Man – before finally finding sanctuary in Israel in the early 1960s.

When the younger Shlomo co-founded Check Point Software Technologies over 30 years ago, the company’s first office was in his late grandfather’s vacant apartment. It feels fitting that a business focused on protecting people from digital threats began in a space owned by someone who had spent his life helping others survive very real ones.

The heart-warming story reminds us that while the challenges we face may evolve – from physical threats to digital deception – human ingenuity, ethical leadership, and the drive to protect others remain constant. 

As we grapple with AI’s implications for society, we would do well to remember this Halloween that technology is merely a tool; it’s the hands that wield it – and the values that guide those hands – that truly matter.

Statistics of the month

  • According to McKinsey and Company’s report The role of power in unlocking the European AI revolution, published last week, “in Europe, demand for data centers is expected to grow to approximately 35 gigawatts (GW) by 2030, up from 10 GW today. To meet this new IT load demand, more than $250 to $300 billion of investment will be needed in data center infrastructure, excluding power generation capacity.”
  • LinkedIn’s research reveals that more than half (56%) of UK professionals feel overwhelmed by how quickly their jobs are changing, which is particularly true of the younger generation (70% of 25-34 year olds), while 47% say expectations are higher than ever.
  • Data from Asana’s Work Innovation Lab reveals that AI use is still predominantly a “solo” activity for UK workers, with the majority feeling most comfortable using it alone compared to within a team or their wider organisation. The press release hypothesises: “This may be because UK individual workers think they have a better handle on technology than their managers or the business. Workers rank themselves as having the highest level of comfort with technology (86%) – compared to their team (78%), manager (74%) and organisation (76%). This trend is mirrored across industries and sectors.”

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 7)

TL;DR: July’s cheery (!) Go Flux Yourself considers the end of knowledge and culture as we know it thanks to artificial general intelligence, technology-induced ‘relationship decay’, HR dissonance, and the breathtaking beauty of human skill …

Image created on Midjourney with the prompt “a massive meteor with an evil AI face about to crash down on Earth while ignorant people look at their smartphones in the style of an Edvard Munch painting”

The future

“The realisation of artificial general intelligence would be like a meteor coming down from above, stopping culture and knowledge as we know it.”

These are the words of warning from Gerd Leonhard, the German futurist with whom I have been fortunate to collaborate a few times. Back in 2018, Gerd – a former musician who studied at Boston’s Berklee College of Music (he has a brilliant story about how jazz trumpeting great Miles Davis visited and played just one note that had all the students spellbound) – invited me over to the south of France to shape and sharpen his messaging around the brawl, as he saw it, between humans and tech.

Gerd published Technology vs. Humanity a couple of years earlier, but with things moving quickly, he engaged me to keep things fresh. He is always worth listening to and learning from, and I’m grateful that we have continued the conversation over the years – before, during, and now after the coronavirus pandemic. We caught up again earlier in July, after Gerd delivered a hard-hitting webinar from which the above line comes. (I recommend watching the 41-minute talk here.)

As an aside, I had a good cackle at this dark cartoon Gerd used in his talk, too.

Back to the meteor idea, which is a twist on the 2021 apocalyptic political satire and black comedy Don’t Look Up, starring Leonardo DiCaprio. Last year Gerd produced a short film called Look Up Now.

Imagine a meteor hurtling towards Earth, not of rock and ice, but of silicon and code. This celestial body doesn’t threaten our physical existence but rather our monopoly on knowledge and intellect. This is Gerd’s stark vision of artificial general intelligence (AGI).

“We thought the idea of the asteroid hitting the earth would be good for the idea of AGI,” he tells me. “Because AI – and intelligent assistants (IA) – by itself is a big change, but it’s not existential in the sense of fundamentally changing everything. But a machine that would be generally intelligent that could surpass us in every possible cognitive job would be like an asteroid hitting us because it would basically be complete unemployment afterwards except for physical jobs.”

This looming “knowledge meteor” isn’t just a hypothetical scenario for the distant future. It’s a reality actively pursued by some of the world’s most powerful tech companies. As Gerd notes: “The biggest business ever is to replace humans with machines.”

The race towards AGI – or the singularity, or superintelligence, depending on your preferred phraseology – represents a seismic shift in human history that could redefine our role in the world. Yet, as with the fictional comet in Don’t Look Up, there’s an alarming lack of urgency in addressing this existential challenge. 

While governments and regulatory bodies are beginning to grapple with the implications of AI, their efforts often fall short of the mark. For instance, the UK’s recent pledge to “get a handle” on AI lacks clear definitions and concrete action plans. 

Meanwhile, the drive to dominate AI development continues unabated in the United States, with Donald Trump’s team planning a “Manhattan Project on AI”, according to a Washington Post report in mid-July. This plan includes creating industry-led agencies to study AI models and protect them from foreign powers, with a section ominously titled “Make America First in AI”.

The original Manhattan Project, started during World War II, led to the production of the world’s first nuclear weapons. “If we started a Manhattan Project for AI, we’re essentially saying you have unlimited resources, unlimited freedom, and no moral constraints,” says Gerd. “It’s inviting all the mad scientists who just want to build whatever they can without any consideration for ethical and moral issues.”

Certainly, the historical parallel is chilling, reminding us of the unforeseen consequences that can arise from unbridled technological advancement. Who, Gerd asks, will serve as humanity’s “mission control” in this high-stakes environment? There’s no clear answer. 

Unlike previous existential threats like nuclear weapons or climate change, AGI development is largely driven by private companies motivated by profit rather than public interest. Indeed, arguably the most influential person in this space, OpenAI’s CEO Sam Altman, has the motto “No one knows what happens next” above his desk – as I pointed out in January’s inaugural Go Flux Yourself.

The financial incentives driving AI development are enormous, creating what Gerd describes as “a huge temptation to rush ahead without considering the consequences of building something larger than us”. This sprint towards AGI is particularly concerning, given the potential implications. An entity that “knows everything about everybody at any given time and combines that in a digital brain of an IQ of one billion,” Gerd argues, “cannot possibly end well for us.”

It’s important to stress that my Swiss-based friend is not arguing against all AI development. He distinguishes between narrow AI or “intelligent assistants” and AGI. The former, he believes, can be “extremely useful for businesses like better software, offering us powerful solutions, more efficiency”. The latter – a general intelligence surpassing human capabilities across the board – poses existential risks.

This nuanced view is crucial as we navigate the future of AI. It’s not about halting progress but about directing it responsibly. Hence a Manhattan Project on AI, which would likely trigger an arms race, is bad news for humanity.

“The wolf you feed is the wolf that wins,” points out Gerd. We have to feed the right wolf, and must prioritise human values alongside technological progress. It’s incredibly challenging, of course, and requires meaningful collaboration between policymakers, AI researchers, ethicists, and business leaders. But by developing a shared understanding of AI’s potential and pitfalls, we can craft regulations that foster innovation while protecting society’s interests – before it’s too late.

The present

While the aforementioned meteor is not yet in our orbit, thankfully, there are plenty of examples of how technology other than AGI is negatively impacting our lives. Last month, I wrote about the rising “loneliness epidemic”.

Shortly afterwards, I interviewed Eric Mosley, CEO of Workhuman, who offers a stark image of the current business landscape, where the fabric of workplace relationships is fraying badly.

“What is obvious to everyone is that the less you interact with people physically, the more destructive that is to the relationship capital and the relationship infrastructure in companies,” the Boston-based Irishman says. This decay in social connections isn’t just a fleeting trend – it’s a fundamental shift that threatens the foundations of corporate culture.

The pandemic-induced shift to remote work initially rode on the coattails of pre-existing relationships. However, as Eric continues: “Now we’re years into this and have a much more prevalent work-from-home culture. Relationship decay is real, and culture is affected by that.”

That phrase, relationship decay, is perfect for revealing how rotten things are – at work, and elsewhere. The erosion of workplace bonds manifests in subtle yet profound ways. The casual conversations before and after meetings, the impromptu chats by the coffee machine – these seemingly insignificant interactions are the lifeblood of a vibrant business culture.

In their absence, we’re left with what Eric describes as a sterile, transactional work environment. “You join a Zoom call, conduct your business, then disconnect and retreat to your pathetic little kitchen for tea. There’s no genuine interaction – it’s a cycle of isolation.”

The cumulative effect of these missed connections is staggering. “You have to understand the compounding effect of that difference across thousands of company interactions over years,” Eric warns. “It adds up to a profound difference.”

This relationship decay has given rise to a new breed of employee – the “mentally transient” worker. These individuals, lacking strong ties to their colleagues or a sense of community, are merely going through the motions.

Yet, herein lies a paradox that HR professionals must grapple with. Despite the obvious detrimental effects of reduced physical interaction, employees continue to push for more remote work options. Eric describes this as a “complete dissonance and disconnect between the reality of what that results in and the desire of companies to counteract it”.

This dissonance presents a significant challenge for HR leaders. How do you balance the desire for flexibility with the need for meaningful workplace connections? The solution lies in reimagining the office as a hub for collaboration and community-building rather than a mandatory daily destination.

As businesses grapple with these shifts in workplace dynamics, we must also be mindful of unintended consequences in other areas. Last week, I interviewed Nicola Millard, Principal Innovation Partner at BT Group, for a piece previewing the London version of Digital Transformation EXPO (where I’ll be on stage again in October). She highlights an emerging trend that parallels the workplace disconnect: “shadow customers” – people who lack the confidence or ability to navigate digital platforms.

She illustrates this through her personal experience, acting as a digital proxy for her 86-year-old mother. While her mum can make telephone calls, Nicola handles all online interactions, from shopping to managing accounts, effectively becoming the “customer behind the customer” in an increasingly digital world.

As businesses increasingly shift towards digital channels, they risk alienating a segment of their customer base that lacks the confidence or skills to navigate these platforms.

This trend reminds us that we must not lose sight of the human element in our rush to embrace digital transformation. Just as some employees struggle with a fully remote work environment, some customers may feel left behind by purely digital interactions.

The parallel between these two trends is striking. In both cases, there’s a risk of losing vital connections – whether it’s between colleagues or between businesses and their customers. And in both cases, the solution lies in finding a balance between digital efficiency and human touch.

The past

In the recent past – i.e. in July – I’ve written or spoken for a variety of clients about the future of insurance, the future of education, and the future of the workplace. It’s been a fun, productive month. I even began a new column for Low No Drinker Magazine, called Upper Bottom – the same name as the weekly sobriety podcast I launched almost exactly six months ago.

But I’ve also found time to enrich myself with art and culture. I took my family to the Summer Exhibition at the Royal Academy in London, which I hope will inspire them. I also snuck off solo on a Monday afternoon to catch the Beyond the Bassline exhibition at the British Library before it closed.

The latter chronicled 500 years of Black British music, and writing about it now makes me think again of Gerd’s story about Miles Davis playing a single note of such haunting quality that it left the room spellbound.

I’m optimistic that human skills will always be valued more than technological achievements. The Paris Olympic Games, which are in full flow now, are an important reminder that there is breathtaking and life-affirming beauty to be found in people going faster, higher, and stronger – as per the Olympic motto, Citius, Altius, Fortius.

Three years ago, the organisers added another Latin word for the Tokyo Games: Communiter. It translates as “together”. In this mad and increasingly often bad world, we need that togetherness more than ever.

Statistics of the month

  • While executives push for return-to-office mandates, 48% of managers admit that their teams are more productive when they adopt hybrid work (Owl Labs’ annual State of Hybrid Work study).
  • Remember the Great Resignation? This is worse. Over a quarter (28%) of the 56,000 workers surveyed said they were “very or extremely likely” to move on from their current companies. In 2023 that figure stood at 26%, and at 19% in 2022 (PwC).
  • Two-thirds (66%) of the UK workforce do not feel their work environment allows them to partake in self-care and look after their well-being (People Management).

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media

Go Flux Yourself: Navigating the Future of Work (No. 2)

TL;DR: February’s Go Flux Yourself discusses FOBO – the fear of becoming obsolete – tarot readings, why communication (and not relying on AI) matters more than ever, and leaping out of one’s comfort zone …

Image created on Midjourney with the prompt “the page of swords taking a leap outside of his comfort zone in the style of an Edgar Degas painting”

The future

“It always feels too soon to leap. But you have to. Because that’s the moment between you and remarkable.”

So wrote American author, entrepreneur and former dot-com business executive Seth Godin in his prescient 2015 book, Leap First: Creating work that matters. It’s a fitting quotation. Not least because today is February 29 – a remarkable date only possible in a leap year. 

It’s appropriate also because most of us must jump out of our comfort zones – whether voluntarily or shoved – and try new things for work and pleasure in this digital age. We want to be heard, to be valued. Moreover, there is a collective FOBO – fear of becoming obsolete (as discussed with suitable levity on the Resistance is Futile podcast this week).

As someone primarily known as a writer, I have felt FOBO in the last 15 months, since the advent of generative artificial intelligence. So much so that when I was sitting in a cafe, waiting for my car to be serviced in November – a year after OpenAI unleashed ChatGPT – I couldn’t help but approach, with nervous excitement, the tarot card reader at the next table, whose 10.30am booking hadn’t appeared.

I asked: “What’s coming next in my career?”

She pulled six cards from the deck in her hands, although two fell out of the pack, which was significant, I was informed. One of the fumbled cards was The Two of Wands. “This is about staying within our comfort zone and looking out to see what’s coming next,” the reader said. “It suggests you must start planning, discovering who you are and what you really want, and focusing on that.”

The other slipped card was The Page of Swords. “This one – intrinsically linked to the Two of Wands – says that you need to work in something that requires many different communication skills. But this is also something about trying something new, particularly regarding communication, learning new skills, and getting more in touch with the times.”

Energised by 20 minutes with the tarot reader, I’ve leapt outside my comfort zone and re-focused on expressing my true(r) self, having established this newsletter, and a (sobriety) podcast. (I’ve also set up a new thought-leadership company, but more of that next month!) I’m loving the journey. Taking the leap has forced me to confront what makes me tick, what I enjoy, and how to be more authentic professionally and personally. Already, the change has been, to quote Godin, “remarkable”.

And yet, I fear an increasing reliance on AI tools is blunting our communication skills and, worse, our sense of curiosity and adventure. Are we becoming dumbed down and lazy? And, by extension, are the threads that make up the fabric of society – language, communication, community – fraying to the point of being irreparable?

At the end of last month, in the first Go Flux Yourself, I wrote about how Mustafa Suleyman, co-founder of DeepMind, discussed job destruction triggered by AI advancement. He predicted that in 30 years, we will be approaching “zero cost for basic goods”, and society will have moved beyond the need for universal basic income and towards “universal basic provision”. How will we stay relevant and curious if we want for nothing?

Before we reach that point, LinkedIn data published earlier this month found that soft skills comprise four of UK employers’ top five most in-demand skills, with communication ranked number one. Further, the skills needed for jobs will change by “at least 65%” by the decade’s end. 

Wow. Ready to take your leap?

The present

Grammarly’s 2024 State of Business Communication Report, published last week, exposed the problem of communication – or rather miscommunication – for businesses. Getting this wrong affects the organisation’s culture and its chances of success today and tomorrow. 

Indeed, the report showed that effective communication increases productivity by 64%, boosts customer satisfaction by 51%, and raises employee confidence by 49%. That last one is especially interesting, and it’s worth noting that March 1 is Employee Appreciation Day, which was started in 1995. While I’m sure hardly any companies will appreciate their employees any more than usual, building confidence through better communication is business critical.

There is much work to do here. The Grammarly study found that in the past 12 months, workers have seen a rise in communication frequency (78%) and variety of communication channels (73%). Additionally: “Over half of professionals (55%) spend excessive time crafting messages or deciphering others’ communications, while 54% find managing numerous work communications challenging, and for 53%, this is all compounded by anxiety over misinterpreting written messages.”

Is AI helping or hindering communication?

I love this cartoon by Tom Fishburne, the California-based “Marketoonist”, who neatly summarises the dilemma.

Also this month, we marvelled at OpenAI’s early demonstrations of Sora (Japanese for “sky”, apparently), which converts text to video. FOBO was ratcheted up another notch.

Thankfully, I was reminded that most AI is far from perfect – like the automatic camera operator used for a football match at Inverness Caledonian Thistle during the pandemic-induced lockdown. The “in-built, AI, ball-tracking technology” seemed a good idea, but was repeatedly confused by the linesman’s bald head. It offered an amusing twist on spot the ball.

Granted, that was over three years ago, and the use cases of genuinely helpful AI are growing, if still narrow in scope. For example, this fascinating new article by James O’Malley highlights how Transport for London has been experimenting with integrating AI into Willesden Green tube station. The system was trained to identify 77 different use cases, broken down into these categories: hazards, unattended items, person(s) on the track, unauthorised access, stranded customers and safeguarding.
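For the technically curious, here is a minimal, purely illustrative sketch of how detections might be grouped under those six categories and looked up. To be clear, this is not TfL’s actual system: every label, category key and function name below is an assumption made for demonstration only.

# Purely illustrative sketch (not TfL's actual system): grouping hypothetical
# detection labels under the six categories the article names, then looking one up.
CATEGORIES = {
    "hazards": {"spillage", "smoke_detected"},
    "unattended items": {"abandoned_bag"},
    "person(s) on the track": {"person_on_track"},
    "unauthorised access": {"restricted_door_opened", "fare_evasion"},
    "stranded customers": {"wheelchair_user_waiting"},
    "safeguarding": {"person_in_distress"},
}

def categorise(detection_label):
    """Return the category a hypothetical detection label falls under, if any."""
    for category, labels in CATEGORIES.items():
        if detection_label in labels:
            return category
    return None

print(categorise("abandoned_bag"))  # -> "unattended items"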

Clearly, better communication between man and machine is essential as we journey ahead.

The past

“My heart is too full to tell you just how I feel … I sincerely hope I shall always be a credit to my race and to the motion picture industry.”

On this day 84 years ago, Hattie McDaniel spoke these words after being named best supporting actress at the 12th Academy Awards in 1940. She was the first Black actor to win – or even be nominated for – an Oscar.

The 44-year-old won for her portrayal of Mammy, a house servant, in Gone With the Wind. She accepted her gold statuette on stage at the Cocoanut Grove nightclub in Los Angeles’ Ambassador Hotel – a “no-Blacks” hotel (she was afforded a special pass). However, McDaniel, whose acting career included 74 maid roles, according to The Hollywood Reporter, was denied entry to the after-party at another “no-Blacks” club. A bittersweet experience in the extreme.

We might look back and be appalled by old social norms. Certainly, the pace of progress in certain areas is lamentably slow – after McDaniel, no other Black woman won an Oscar for more than 50 years, until Whoopi Goldberg was named best supporting actress for her role in Ghost. Still, it is important to track progress by considering history and context.

How long will it be before we have “no-AI” videoconferencing calls? And would that be classed as progress?

I’ve been thinking about the darker corners of my past recently. Earlier this month, I started a podcast, Upper Bottom, that takes a balanced (not worthy, and hopefully lighthearted) look at sobriety. Almost exactly a year ago, I called Alcoholics Anonymous and explained that while nothing tragically wrong had happened, I wanted to reset my relationship with booze. “Ah, you are what we call an ‘upper bottom’,” said the call handler. “You haven’t reached rock bottom but want to change your ways with alcohol.”

Spurred by the tarot reading, and fortified by the ongoing sobriety – April 1 (no joke) will make it a year without a drop – I’m grateful for the opportunity to polish my communication skills, learn new ones (if you want me to produce and host a podcast, I would be delighted to collaborate), and build a community via Upper Bottom.

My voice is being heard, literally, and I’m speaking the truth on a human level. In 2024, that matters.

Statistics of the month

  • On the subject of slow progress, only 18% of high-growth companies in the UK have a woman founder, according to a report just published by a UK government taskforce.
  • Nearly seven in 10 UK Gen Zers are rejecting full-time employment – many as a result of AI and layoff fears, finds Fiverr.
  • And new research by Uniquely Health shows that less than half of the nation (49%) is confident of being classed as “healthy” by a doctor. Time to make the most of the extra day this year and leap into some exercise?

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media