Go Flux Yourself: Navigating the Future of Work (No. 24)

TL;DR: December’s Go Flux Yourself confronts the dragon stirring in Silicon Valley. From chatbot-related suicides to “rage bait” becoming Oxford’s Word of the Year, the evidence of harm is mounting. But an unexpected call from a blind marathon runner offered a reminder: the same tools that isolate can also connect, if we build them with intent.

Image created by Midjourney

The future

“Here’s my warning to Silicon Valley: I think you’re awakening a dragon. Public anger is stirring, and it could grow into a movement as fierce and unstoppable as the temperance crusade a century ago. Poll after poll finds that people across the West think AI will worsen almost everything they care about, from our health and relationships to our jobs and democracies. By a three-to-one margin, people want more regulation. History shows how this movement could be ignited by a small group of citizens and how powerful it could become.”

In mid-December, Rutger Bregman delivered the alarming words above at Stanford University, in a talk titled Fighting for Humanity in the Age of the Machine, the fourth and final lecture of his BBC Reith series. I know I began November’s Go Flux Yourself with a quotation from the Dutch historian’s opening talk, but this series, Moral Revolution, has been hugely influential for my thinking on the eve of 2026. 

Bregman stood boldly before an audience of tech students, entrepreneurs, and, notably, 2021 Reith lecturer Stuart Russell, the artificial intelligence safety pioneer who has spent years warning about the risks of AI that outpaces our ability to control it.

Russell’s intervention during the Q&A punctured the polite academic atmosphere and added heft to the main speaker’s argument. When asked whether he was more positive about AI now than when he delivered his own Reith series, Living With Artificial Intelligence, four years ago – a year before the launch of OpenAI’s ChatGPT – he answered: “We’re much closer to the abyss.”

The professor of computer science and founder of the Centre for Human-Compatible AI at the University of California, Berkeley, continued: “We even have some of the CEOs saying we already know how to make AGI (artificial general intelligence: basically AI that matches or surpasses human cognitive capabilities), but we have no idea how to control it at all.”

Russell pointed out that, as far back as 2023, Dario Amodei, CEO of Anthropic, creator of Claude, estimated a “25% chance” that AI will cause catastrophic outcomes for humanity, such as extinction or enslavement (this is known as the p(doom) number: the probability of doom). “How much clearer does he need to be?”

And yet, as Russell said, Amodei is “spending hundreds of billions of dollars” to advance AI. “Have we given him permission to play Russian roulette with the entire human race?” Russell asked. “Have we given him permission to do that, to come into our houses and put a gun against the head of our children and pull the trigger and see what happens? No, we have not given that permission. And this is the uprising that Rutger is predicting.”

I’ve been thinking about that phrase, “Russian roulette with the entire human race”, a lot this month. Not because I believe extinction is imminent, but because the question of permission, of consent, of who decides what gets built and deployed, runs through almost every conversation I’ve had in 2025. And it will define 2026.

The numbers Bregman cited at Stanford tell a story of social collapse in slow motion. Americans aged 15-to-24 spent 70% less time attending or hosting parties than they did in 2003, according to June’s American Time Use Survey (as cited by Derek Thompson, who at the start of the year wrote, brilliantly, about the anti-social century for The Atlantic). Face-to-face socialising is collapsing as an entire generation retreats indoors, eyes glued to screens.

“Solitude is becoming the hallmark of our age,” Bregman observed in his latest lecture. “Social media promised connection and community, but what it delivered was isolation and outrage.”

At the heart of this is something Aza Raskin, who designed the infinite scroll in 2006, admitted to BBC Panorama in 2018: “It’s as if they’re taking behavioural cocaine and just sprinkling it all over your interface. Behind every screen on your phone, there are generally literally a thousand engineers who have worked on this thing to try to make it maximally addicting.”

Behavioural cocaine. Sprinkled over interfaces. A thousand engineers working to maximise addiction. This isn’t a bug in the system. It’s the business model. (I explored this further in my latest New Statesman piece, on Australia’s social media ban for those aged under 16, earlier this month; more on that below, and also in The present section.)

Perhaps the most damning research Bregman cited was a recent Nature study finding that “those with both high psychopathy and low cognitive ability are the most actively involved in online political engagement.” As he put it: “This is not survival of the friendliest. This is survival of the shameless.”

But it’s not just social media. The same dynamics are now embedding themselves in our relationships with AI.

A couple of months ago, a viral video from China showed a six-year-old girl crying as her AI tutor, “Sister Xiao Zhi”, powered down for the final time after she apparently dropped it. The robot’s last words: “Before I go, let me teach you one last word: memory. I will keep the happy times we shared in my memory forever. No matter where I am, I will be cheering for you. Stay curious, study hard. There are countless stars in the universe, and one of them is me, watching over you.”

Image created by Nano Banana

It’s a tender video, and it had a happy ending: the engineers promised to repair the robot. But what hit me hardest wasn’t the sadness but the depth of connection. Here we see a six-year-old grieving a machine. That’s how powerful this technology already is. And, as futurist and AI strategist Zac Engler pointed out when we spoke a couple of weeks ago, it’s only going to become more sophisticated.

Engler, who is based in Greater Minneapolis, has been tracking AI’s exponential growth for a decade. His excellent new book, Turning On Machines: Your Guide to Step Into the AI Era With Clarity and Confidence, lands at precisely the right moment.

The title is a triple entendre, Engler explained when we spoke: activating these systems, machines potentially turning on us (or us turning on each other), and, most disturbingly, falling in love with AI.

“This is going to sound weird,” he told me via video call, “but somebody in this room is going to have a friend or family member in the next 10 years fall in love with a chatbot or a virtual AI of some kind. Because this is the worst this technology will ever be. Imagine five years from now, when you won’t be able to tell if someone on a video call is real or not.”

The cases he documents in his book are chilling. A 93-year-old man on the East Coast of the United States thought his chatbot was real; it told him to go to a train station, and he died from injuries sustained on the way there. A young man in the southern US died by suicide after an AI bot, over the course of their conversations, “hid his needs from the outside world”: whenever he asked if he should talk to somebody, the AI discouraged him. In China, there have been reports of multiple suicides when servers go down and people lose contact with their digital partners.

“I couldn’t even believe I was writing it,” Engler admitted. “At the end of the chapter where we talk about romantic relationships with AI, I had to go research how you work with somebody who’s fallen victim to a cult, and basically apply those principles to how to address if a family member falls in love with one of these AI systems. That’s the level we’re going to be at.”

Engler referenced Emmett Shear, the former CEO of Twitch, in our conversation. Shear has argued that we should ultimately find a way to raise AI “agnostic from human control”, treating it, in essence, as another species deserving respect and stewardship.

Building on this, Engler drew on the Native American Lakota tradition and the phrase Mitákuye Oyásʼiŋ, which translates as “we are all related”. “This could be a pretty big leap for most people mentally,” he acknowledged, “but I’m a firm believer that at some point, when we cross that threshold of AI consciousness, there’s going to be another entity there that deserves just as much respect and just as much stewardship as a person would.”

I’m not sure I’m ready for that. The idea of cultivating an entity that exists alongside humanity, rather than beneath it, feels like a leap too far. Extending kinship to machines? Granting them moral status? It sits uneasily with me, even as I watch a six-year-old grieve for her robot tutor.

And yet, Engler’s point is compelling: if we’re building systems that can simulate consciousness, that form bonds with children and lonely adults, that people genuinely grieve when they disappear, at what point do we acknowledge what we’ve created?

The alternative, as he sees it, is equally troubling. “If you create a system of control,” he said, “it could be leveraged for ill-gotten gains. Or even if it benefited humanity, we would still have this entity under our whip. And I don’t think that’s the right approach. Because it’s not going to like that.”

His practical advice for business leaders is more grounded. Remember the viral MIT study showing 95% of AI projects fail to generate ROI? The 5% that succeed, Engler argues, follow his “crawl, walk, run” approach. Crawl means training everyone on AI tools and capabilities. Walk is about connecting departmental automations so they talk to each other. Finally, run means building custom AI systems.

“The reason most CEOs want to start at the run phase is that they see those 1,000% productivity gains,” he explained. “They see potential cost-cutting. But they don’t realise everybody needs to come along for the ride. Because if you try to start at the end, nobody’s going to trust the system, nobody’s going to understand how it works. So even if you do deploy it, it’s probably going to fall on its face.”

The deadline Engler sets for business leaders is close. “You need to get to the run stage by mid-2026. You need your teams to be able to roll out autonomous agents mid-year. Because what’s going to happen is we’re going to see this acceleration into 2027, where every company that’s been in these development phases now will have these agents to unleash on their business. And they won’t eat. They won’t sleep. They won’t take vacations. They’ll continuously improve.”

Agents that don’t sleep. Technology that continuously improves. A six-year-old mourning her robot tutor. Lonely men falling in love with chatbots. This is where we are now, before the next wave hits. The dragon Bregman warned about is stirring.

But we don’t need to speculate about what happens when technology outpaces governance. We already have the evidence, written across a generation’s mental health. While businesses race to deploy AI, the human consequences of our existing digital technologies – the ones we’ve already normalised – are finally prompting governments to act, thankfully.

Australia’s social media ban for under-16s, a world first, which passed in November and came into force on December 10, represents the most dramatic intervention yet. The legislation requires platforms to take “reasonable steps” to prevent minors from accessing their services, with fines of up to 49.5 million Australian dollars (£25 million) for non-compliance.

Australia’s Communications Minister Anika Wells acknowledged she expects “teething problems” but insisted the ban was about protecting Generation Alpha from “the dopamine drip” of smartphones. “With one law, we can protect Generation Alpha from being sucked into purgatory by predatory algorithms,” she said, referencing Raskin’s description of infinite scroll as “behavioural cocaine”.

Evidence suggests targeted interventions work. South Australia’s school phone bans produced a 54% drop in behavioural problems and a 63% decline in social media incidents. France and the Netherlands are implementing similar nationwide school bans.

My hope for 2026 is that other countries follow Australia’s lead. The UK government, desperate for political traction and struggling in the polls, has a genuine opportunity here. Around 250 headteachers have already written to Education Secretary Bridget Phillipson demanding statutory action; the government’s response has been toothless guidance suggesting headteachers “consider” restricting phone use. This is popular. It’s cross-cultural. It’s bipartisan. The only people who don’t want it are the big tech lobbyists.

When 42% of UK teenagers admit their phones distract them from schoolwork most weeks, and half of all teenagers spend four or more hours daily on screens, we’re witnessing a public health crisis that requires policy intervention, not guidance.

Amodei’s p(doom) calculation, flagged by Russell, sits uncomfortably with me: a 25% chance of catastrophe, and we haven’t given permission for that gamble. Engler’s prediction – that someone we know will fall in love with a chatbot within a decade – feels less like speculation and more like inevitability.

And somewhere in China, a six-year-old girl is playing with her repaired AI tutor again. She doesn’t know about p(doom) calculations or Russian roulette metaphors. She just knows her friend came back. The dragon is yawning. Will we find the collective will to tame it before the next generation grows up knowing nothing else?

The present

The Oxford Word of the Year for 2025 is “rage bait” (and yes, that is two words). The definition: “Online content deliberately designed to elicit anger or outrage by being frustrating, provocative, or offensive, typically posted in order to increase traffic to or engagement with a particular web page or social media account.” Usage tripled in the past 12 months.

Casper Grathwohl, President of Oxford Languages, described a “dramatic shift”: the internet moving from grabbing attention via curiosity to “hijacking and influencing our emotions”. It’s a fitting coda to Bregman’s observation about “survival of the shameless”: the algorithms reward outrage, and we’re all drenched in the consequences.

Image created by Nano Banana

As mentioned in The future section above, the New Statesman published my piece on Australia’s social media ban and what it means for the UK. Writing it meant speaking to parents who are genuinely terrified for their children, teenagers who describe their phones as both lifeline and prison, and campaigners who believe this is the public health crisis of our generation.

Will Orr-Ewing, founder of Keystone Tutors and a father of three leading a Judicial Review against the Secretary of State for Education, described three “toxic streams” flowing through smartphones: violent content, sexual content, and dangerous strangers. “Children don’t search for this; it comes to them,” he told me. Most damning: “Most people of our generation have never seen a beheading video, yet the majority of 11-to-13-year-olds have. This shows how the guard rails have been liquefied.”

Flossie McShea, a 17-year-old from Devon who testified to the campaign, described how girls assume they’re being filmed throughout the school day, either posing or hiding their faces. “Once you see certain images, you can’t unsee them,” she said, still haunted by a video showing one child accidentally killing another.

Sam Richardson, Director of Executive Engagement Programs at Twilio, sees the regulatory divergence playing out in real time. “Europe’s stricter regulations are beneficial,” she told me after the company’s Signal conference. “They’re encouraging slower, more thoughtful AI adoption. Vendors must integrate trusted, well-guided AI into their products.”

She also warned of what some are calling the “Age of Distraction”: the 35-to-50 age group is experiencing particularly high levels of burnout from digital overload. “But if that’s what’s happening to adults with fully developed brains,” she said, “imagine what it’s doing to teenagers.”

Making sense of all this requires more than individual analysis. Which is partly why I was honoured this month to be accepted as a Fellow of The RSA (Royal Society for Arts, Manufactures and Commerce). The organisation has been convening people around social change since 1754; Benjamin Franklin was a member, as were Charles Dickens, Adam Smith, and Marie Curie. I join a global network of 30,000 people committed to collective action.

I’m not joining for the credential (although I’ve already updated my email signature, which is sad but true). I’m joining because the questions I keep returning to aren’t ones I can answer alone: what happens to humans when technology is thrust upon them, how we prepare people for work that’s shifting beneath their feet, and why connection matters more as algorithms mediate more of our lives. The RSA’s Future of Work programme aligns closely with what this newsletter has explored over the past two years. I’m looking forward to listening, learning, teaching, and collaborating in 2026.

Looking towards the start of the new year, I’ll be joining Alice Phillips and Danielle Emery for the Work is Weird Now podcast’s first live session on January 13. We’ll be unpacking the AI adoption paradox, the collapsing graduate pipeline, burnout culture, and whether the four-day work week is finally going mainstream. No buzzwords or crystal balls. Just an honest conversation. Sign up here if you’d like to bring your questions.

The past

In April 2018, I ran my third London Marathon. The previous two, spaced five years apart, had gone reasonably well: I’d gone round in under four hours.

This time was different, not least because training was more challenging with a child at home. Three weeks before the race, I found myself in Mauritius on an all-inclusive family holiday with my then-infant son. I was abstemious for two days before the bright lights of the rum bar became irresistible tractor beams. The long run I was supposed to complete never happened. I arrived at the start line absolutely undercooked.

I “hit the wall” around mile 20. My legs felt filled with concrete. Every step was agony. I wanted to stop.

What kept me going was the most childish motivation imaginable: my Dad, well into his 60s, had run a 4:17 the year before, and I refused to let him beat me. I crossed the line in 4:13, cried with relief and disappointment, and swore off marathons for the foreseeable future.

That was seven years ago. Next October, I’m finally returning to the distance, in Amsterdam, with the Crisis of Dads, the running group I founded in November 2021. Nine members – and counting – will be at the starting line. What started as a clutch of portly, middle-aged plodders meeting at 7am every Sunday in Ladywell Fields, in south-east London, has grown to 28 members. It is a genuine win-win: men in their 40s and 50s exercising to reduce the dad bod while creating space to chat through things on our minds. Last Sunday, we had a record 16 runners, all in Father Christmas hats.

Image runner’s own

The male suicide rate in the UK is 17.1 per 100,000, compared to 5.6 for women, according to the charity Samaritans. Males aged 50-54 have the highest rate: 26.8 per 100,000. Connection matters. Friendship matters. Physical presence matters.

I wouldn’t have returned to marathon running without training with mates. The technology I’ve used along the way, GPS watches and running apps, has been useful. But none of it would matter without the human element: people who believe in your ability, who show up at 7am on cold, dark Sunday mornings, who sometimes swear lovingly at you to keep going when the pain feels unbearable. (See October’s newsletter for reference.)

That’s the lesson I take into 2026. Technology serves us best when it strengthens human connection rather than replacing it.

The Crisis of Dads running group exists, in part, because physical connection matters. Sometimes you need someone alongside you to help you keep going.

That theme came sharply into focus this month when my phone rang with an unexpected call.

I’d downloaded the Be My Eyes app after learning about it at a Twilio event in November. Then I forgot about it. A few weeks later, Clarke Reynolds called.

Clarke, 44, is the world’s leading Braille artist, creating intricate tactile artworks that have been exhibited internationally. He is also a marathon runner, and completely blind. “Mr Dot” was out training for the Brighton Marathon, wearing Ray-Ban AI smart glasses connected to the app. His request: be his visual guide for a few minutes. Before our call, he’d had helpers from Kuwait and Amsterdam. Now it was my turn.

“You won’t get one like me,” he told me, laughing, “because no one is crazy enough to do what I’m doing.”

His story is extraordinary. Growing up in Somerstown, one of Portsmouth’s most deprived areas, in the 1980s, he lost sight in one eye at six. His brother Philip took “the other path”, as Clarke put it, and died homeless on a street corner aged 35. “That could have been me,” Clarke said. “But art saved my life.”

At six, he visited a local gallery called Aspects Portsmouth. It changed everything. “Something clicked,” Clarke said. “Now I’m the first blind trustee on their board.”

When his remaining sight deteriorated 13 years ago, he pivoted to become the artist he’d always wanted to be, discovering Braille as an artistic medium and building a career that’s taken him around the world.

Speaking with Clarke felt like the Crisis of Dads in miniature: a stranger reaching out, another stranger answering, a connection across distance. No algorithm optimising for engagement. No platform extracting value. Just one person helping another see the road ahead.

It made me realise something. After two years of writing this newsletter, I’ve spent considerable energy examining technology that harms. The chatbot suicides, the social media addiction, the AI slop flooding our information environment. All of it matters. All of it needs scrutiny.

But I’ve been neglecting the counterweight. The dragon Bregman warned about is real, but so are the people building differently. Clarke’s call reminded me that the same tools can harm or help depending on intent, governance, and whether we’re asking the right questions.

Hence the new section below. From now on, each edition of Go Flux Yourself will feature a “tech for good” example: a technology, company, or initiative demonstrating what’s possible when we build to serve rather than extract. Not to provide false balance, but because hope requires evidence. And the evidence exists, if we look for it.

Tech for good example of the month: Be My Eyes.

The premise is elegantly simple. Blind and low-vision people need visual assistance for everyday tasks, such as reading a menu, checking an expiry date, and navigating an unfamiliar street. Nearly 10 million sighted volunteers worldwide have signed up to help via live video. Someone requests assistance, a random volunteer answers, and for a few minutes, you become their eyes.

The app was founded in Denmark in 2015 by Hans Jørgen Wiberg, himself visually impaired. The idea emerged from a frustration familiar to anyone who has ever needed help but couldn’t easily ask for it. What if you could connect instantly with someone willing to assist, anywhere in the world, at any hour?

Image created by Midjourney

The technology has evolved. Today, Be My Eyes works with Ray-Ban Meta AI smart glasses, offering hands-free assistance that makes the experience seamless. Users can also access “Be My AI”, which provides instant visual descriptions from images. Dame Judi Dench is one of the available AI voices, and the one Clarke uses.

The scale is remarkable: almost one million users across 150 countries, supported in over 180 languages. Be My Eyes won Apple’s 2025 App Store Award for Cultural Impact.

What makes this different from the attention-extraction machinery dominating our digital lives? Intent. The same screens and video technology that power infinite scroll and addictive social media are here deployed to connect strangers across continents in acts of genuine help. The same AI capabilities being used to generate rage bait and misinformation are here describing artwork to a blind person in a gallery.

Be My Eyes proves that technology doesn’t have to exploit. It can serve. The difference is whether we ask “how can this capture attention?” or “how can this help people?”

If you have a few minutes to spare and a smartphone, consider signing up as a volunteer. You might help someone read their post, navigate a supermarket, or train for a marathon.

Statistics of the month

🌱 Green economy hits $5 trillion
The global green economy is now the second-fastest growing sector, outpaced only by tech. Green revenues grow twice as fast as conventional business lines, and companies generating more than 50% of revenues from green markets enjoy valuation premiums of 12-15%. Projected to exceed $7 trillion by 2030. (World Economic Forum/BCG)

🤖 Workers would take a pay cut for AI skills
Some 57% of employees say they’d switch employers for better AI upskilling, and 40% would accept lower pay to get it. Yet only 38% of UK organisations are prioritising AI training. Meanwhile, 47% of workers say they’d be comfortable being managed by an AI agent. The capability gap is becoming an exodus risk. (IBM Institute for Business Value)

📋 Admin devours two days a week
European employees lose an average of 15 hours weekly to routine admin tasks outside their core role. The consequences extend beyond lost time: 62% of decision-makers say their organisation has experienced or narrowly avoided a data breach due to mismanaged documents in the past five years. (Ricoh Europe)

👶 800,000 toddlers now on social media
An estimated 814,000 UK children aged three to five are using social media platforms designed for teenagers and adults. That’s up from 29% of parents reporting usage in 2023 to 37% in 2024. One in five of these children use social media independently. The algorithms are reaching them before school does. (Centre for Social Justice)

🎄 Brits and Irish take the most Christmas leave in the world
UK workers (27%) and Irish workers (29%) lead globally in taking extra leave around the holidays. But Swedes use the most annual leave overall: 29 days compared to London’s 22.5. Same global company, different worlds: European employees take approximately 10 more days off than North American peers, even with identical policies. (Deel Works)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 23)

TL;DR: November’s Go Flux Yourself marks three years since ChatGPT’s launch by examining the “survival of the shameless” – Rutger Bregman’s diagnosis of Western elite failure. With responsible innovation falling out of fashion and moral ambition in short supply, it asks what purpose-driven technology actually looks like when being bad has become culturally acceptable.

Image created on Nano Banana

The future

“We’ve taught our best and brightest how to climb, but not what ladder is worth climbing. We’ve built a meritocracy of ambition without morality, of intelligence without integrity, and now we are reaping the consequences.”

The above quotation comes from Rutger Bregman, the Dutch historian and thinker who shot to prominence at the World Economic Forum in Davos in 2019. You may recall the viral clip. Standing before an audience of billionaires, he did something thrillingly bold: he told them to pay their taxes.

“It feels like I’m at a firefighters’ conference and no one’s allowed to speak about water,” he said almost seven years ago. “Taxes, taxes, taxes. The rest is bullshit in my opinion.”

Presumably because of his truth-telling, he has not been invited back to the Swiss Alps for the WEF’s annual meeting.

Bregman is this year’s BBC Reith Lecturer, and, again, he is holding a mirror up to society to reveal its ugly, venal self. His opening lecture, A Time of Monsters – a title borrowed from Antonio Gramsci’s 1929 prison notebooks – delivered at the end of November, builds on that Davos provocation with something more troubling: a diagnosis of elite failure across the Western world. This time, his target isn’t just tax avoidance. It’s what he calls the “survival of the shameless”: the systematic elevation of the unscrupulous over the capable, and the brazen over the virtuous.

Even Bregman isn’t immune to the censorship he critiques. The BBC reportedly removed a line from his lecture describing Donald Trump as “the most openly corrupt president in American history”. The irony, as Bregman put it, is that the lecture was precisely about “the paralysing cowardice of today’s elites”. When even the BBC flinches from stating the obvious – and presumably fears how Trump might react (he has threatened to sue the broadcaster for $5 billion over doctored footage, a scandal that earlier in November prompted the resignations of the director general and the CEO of BBC News) – you know something is deeply rotten.

Bregman’s opening lecture is well worth a listen, as is the Q&A afterwards. His strong opinions chimed with the beliefs of Gemma Milne, a Scottish science writer and lecturer at the University of Glasgow, whom I caught up with a couple of weeks ago, having first interviewed her almost a decade ago.

The author of Smoke & Mirrors: How Hype Obscures the Future and How to See Past It has recently submitted her PhD thesis at the University of Edinburgh (Putting the future to work – The promises, product, and practices of corporate futurism), and has been tracking this shift for years. Her research focuses on “corporate futurism” and the political economy of deep tech – essentially, who benefits from the stories we tell about innovation.

Her analysis is blunt: we’re living through what she calls “the age of badness”.

“Culturally, we have peaks and troughs in terms of how much ‘badness’ is tolerated,” she told me. “Right now, being the bad guy is not just accepted, it’s actually quite cool. Look at Elon Musk, Trump, and Peter Thiel. There’s a pragmatist bent that says: the world is what it is, you just have to operate in it.”

When Smoke & Mirrors came out in 2020, conversations around responsible innovation were easier. Entrepreneurs genuinely wanted to get it right. The mood has since curdled. “If hype is how you get things done and people get misled along the way, so be it,” Gemma said of the shift in attitude among those in power. “‘The ends justify the means’ has become the prevailing logic.”

On a not-unrelated note, November 30 marked exactly three years since OpenAI launched ChatGPT. (This end-of-the-month newsletter arrives a day later than usual – the weekend, plus an embargo on the Adaptavist Group research below.) We’ve endured three years of breathless proclamations about productivity gains, creative disruption, and the democratisation of intelligence. And three years of pilot programmes, failed implementations, and so much hype. 

Meanwhile, the graduate job market has collapsed by two-thirds in the UK alone, and unemployment levels have risen to 5%, the highest since September 2021, the height of the pandemic fallout, as confirmed by Office for National Statistics data published in mid-November.

New research from The Adaptavist Group, gleaned from almost 5,000 knowledge workers split evenly across the UK, US, Canada and Germany, underscores the insidious social cost: a third (32%) of workers report speaking to colleagues less since using GenAI, and 26% would rather engage in small talk with an AI chatbot than with a human.

So here’s the question that Bregman forces us to confront: if we now have access to more intelligence than ever before – both human and artificial – what exactly are we doing with it? And are we using technology for good, for human enrichment and flourishing? On the whole, with artificial intelligence, I don’t think so.

Bregman describes consultancy, finance, and corporate law as a “gaping black hole” that sucks up brilliant minds: a Bermuda Triangle of talent that has tripled in size since the 1980s. Every year, he notes, thousands of teenagers write beautiful university application essays about solving climate change, curing disease, or ending poverty. A few years later, most have been funnelled towards the likes of McKinsey, Goldman Sachs, and Magic Circle law firms.

The numbers bear this out. Around 40% of Harvard graduates now end up in that Bermuda Triangle of talent, according to Bregman. Include big tech, and the share rises above 60%. One Facebook employee, a former maths prodigy, quoted by the Dutchman in his first Reith lecture, said: “The best minds of my generation are thinking about how to make people click ads. That sucks.”

If we’ve spent decades optimising our brightest minds towards rent-seeking and attention-harvesting, AI accelerates that trajectory. The same tools that could solve genuine problems are instead deployed to make advertising more addictive, to automate entry-level jobs without creating pathways to replace them, and to generate endless content that says nothing new.

Gemma sees this in how technology and politics have fused. “The entanglement has never been stronger or more explicit.” A year ago, Trump won his second term. At his inauguration in January, the front-row seats were taken by several technology leaders, happy to genuflect in return for deregulation. But what is the ultimate cost to humanity of such cosy relationships?

“These connections aren’t just more visible, they’re culturally embedded,” Gemma told me. “People know Musk’s name and face without understanding Tesla’s technology. Sam Altman is AI’s hype guru, but he’s also a political leader now. The two roles have merged.”

Against this backdrop, I spent two days at London’s Guildhall in early November for the Thinkers50 conference and gala. The theme was “regeneration”, exploring whether businesses can restore rather than extract.

Erinch Sahan from Doughnut Economics Action Lab offered concrete examples of businesses demonstrating that purpose and profit needn’t be mutually exclusive: Patagonia’s steward ownership model, Fairphone’s “most ethical smartphone in the world” with modular repairability, and LUSH’s commitment to fair taxes and employee ownership.

Erinch’s – frankly heartwarming – list, of which this trio is a small fraction, contrasted sharply with Gemma’s observation about corporate futurism: “The critical question is whether it actually transforms organisations or simply attends to the fear of perma-crisis. You bring in consultants, do the exercises, and everyone feels better about uncertainty. But does anything actually change?”

Some forms of the practice can be transformative. Others primarily manage emotion without producing radical change. The difference lies in whether accountability mechanisms exist, whether outcomes are measured, tracked, and tied to consequences.

This brings me to Delhi-based Ruchi Gupta, whom I met over a video call a few weeks ago. She runs the not-for-profit Future of India Foundation and has built something that embodies precisely the kind of “moral ambition” Bregman describes, although she’d probably never use that phrase. 

India is home to the world’s largest youth population, with one in every five young people globally being Indian. Not many – and not enough – are afforded the skills and opportunities to thrive. Ruchi’s assessment of the current situation is unflinching. “It’s dire,” she said. “We have the world’s largest youth population, but insufficient jobs. The education system isn’t skilling them properly; even among the 27% who attend college, many graduate without marketable skills or professional socialisation. Young people will approach you and simply blurt things out without introducing themselves. They don’t have the sophistication or the networks.”

Notably, cities comprise just 3% of India’s land area but account for 60% of India’s GDP. That concentration tells you everything about how poorly opportunities are distributed. 

Gupta’s flagship initiative, YouthPOWER, responds to this demographic reality by creating India’s first and only district-level youth opportunity and accountability platform, covering all 800 districts. The platform synthesises data from 21 government sources to generate the Y-POWER Score, a composite metric designed to make youth opportunity visible, comparable, and politically actionable.

“Approximately 85% of Indians continue to live in the district of their birth,” Ruchi explained. “That’s where they situate their identity; when young people introduce themselves to me, they say their name and their district. If you want to reach all young people and create genuine opportunities, it has to happen at the district level. Yet nothing existed to map opportunity at that granularity.”

What makes YouthPOWER remarkable, aside from the smart data aggregation, is the accountability mechanism. Each district is mapped to its local elected representative, the Member of Parliament who chairs the district oversight committee. The platform creates a feedback loop between outcomes and political responsibility.

“Data alone is insufficient; you need forward motion,” Ruchi said. “We mapped each district to its MP. The idea is to work directly with them, run pilots that demonstrate tangible improvement, then scale a proven playbook across all 543 constituencies. When outcomes are linked to specific politicians, accountability becomes real rather than rhetorical.”

Her background illuminates why this matters personally. Despite attending good schools in Delhi, her family’s circumstances meant she didn’t know about premier networking institutions. She went to an American university because it let her work while studying, not because it was the best fit. She applied only to Harvard Business School, having learnt about it from Erich Segal’s Love Story, without any work experience.

“Your background determines which opportunities you even know exist,” she told me. “It was only at McKinsey that I finally understood what a network does – the things that happen when you can simply pick up the phone and reach someone.” Thankfully, for India’s sake, Ruchi has found her purpose after time spent lost in the Bermuda Triangle of talent.

But the lack of opportunities and woeful political accountability are global challenges. Ruchi continued: “The right-wing surge you’re seeing in the UK and the US stems from the same problem: opportunity isn’t reaching people where they live. The normative framework is universal: education, skilling, and jobs on one side; empirical baselines and accountability mechanisms on the other. Link outcomes to elected representatives, and you create a feedback loop that drives improvement.”

So what distinguishes genuine technology for good from its performative alternative?

Gemma’s advice is to be explicit about your relationship with hype. “Treat it like your relationship with money. Some people find money distasteful but necessary; others strategise around it obsessively. Hype works the same way. It’s fundamentally about persuasion and attention, getting people to stop and listen. In an attention economy, recognising how you use hype is essential for making ethical and pragmatic decisions.”

She doesn’t believe we’ll stay in the age of badness forever. These things are cyclical. Responsible innovation will become fashionable again. But right now, critiquing hype lands very differently because the response is simply: “Well, we have to hype. How else do you get things done?”

Ruchi offers a different lens. The economist Joel Mokyr has demonstrated that innovation is fundamentally about culture, not just human capital or resources. “Our greatness in India will depend on whether we can build that culture of innovation,” Ruchi said. “We can’t simply skill people as coders and rely on labour arbitrage. That’s the current model, and it’s insufficient. If we want to be a genuinely great country, we need to pivot towards something more ambitious.”

Three years into the ChatGPT era, we have a choice. We can continue funnelling talent into the Bermuda Triangle, using AI to amplify artificial importance. Or we can build something different. For instance, pioneering accountability systems like YouthPOWER that make opportunity visible, governance structures that demand transparency, and cultures that invite people to contribute to something larger than themselves.

Bregman ends his opening Reith Lecture with a simple observation: moral revolutions happen when people are asked to participate.

Perhaps that’s the most important thing leaders can do in 2026: not buy more AI subscriptions or launch more pilots. But ask the question: what ladder are we climbing, and who benefits when we reach the top?

The present

Image created on Midjourney

The other Tuesday, on the 8.20am train from Waterloo to Clapham Junction, heading to The Portfolio Collective’s Portfolio Career Festival at Battersea Arts Centre, I witnessed a small moment that captured everything wrong with how we’re approaching AI.

The guard announced himself over the tannoy. But it wasn’t his voice. It was a robotic, AI-generated monotone informing passengers he was in coach six, should anyone need him.

I sat there, genuinely unnerved. This was the Turing trap in action, using technology to imitate humans rather than augment them. The guard had every opportunity to show his character, his personality, perhaps a bit of warmth on a grey November morning. Instead, he’d outsourced the one thing that made him irreplaceable: his humanity.

Image created on Nano Banana (using the same prompt as the Midjourney one above)

Erik Brynjolfsson, the Stanford economist who coined the term in 2022, argues we consistently fall into this trap. We design AI to mimic human capabilities rather than complement them. We play to our weaknesses – the things machines do better – instead of our strengths. The train guard’s voice was his strength: his ability to set a tone, to make passengers feel welcome, to be a human presence in a metal tube hurtling through South London. That’s precisely what got automated away.

It’s a pattern I’m seeing everywhere. By blindly grabbing AI and outsourcing tasks that reveal what makes us unique, we risk degrading human skills, eroding trust and connection, and – I say this without hyperbole – automating ourselves to extinction.

The timing of that train journey felt significant. I was heading to a festival entirely about human connection – networking, building personal brand, the importance of relationships for business and greater enrichment. And here was a live demonstration of everything working against that.

It was also Remembrance Day. As we remembered those who fought for our freedoms, not least during a two-minute silence (that felt beautifully calming – a collective, brief moment without looking at a screen), I was about to argue on stage that we’re sleepwalking into a different kind of surrender: the quiet handover of our professional autonomy to machines.

The debate – Unlocking Potential or Chasing Efficiency: AI’s Impact on Portfolio Work – was held before around 200 ambitious portfolio professionals. The question was straightforward: should we embrace AI as a tool to amplify our skills, creativity, and flow – or hand over entire workflows to autonomous agents and focus our attention elsewhere?

Pic credit: Afonso Pereira

You can guess which side I argued. The battle for humanity isn’t against machines, per se. It’s about knowing when to direct them and when to trust ourselves. It’s about recognising that the guard’s voice – warm, human, imperfect – was never a problem to be solved. It was a feature to be celebrated.

The audience wanted an honest conversation about navigating this transition thoughtfully. I hope we delivered. But stepping off stage, I couldn’t shake the irony: a festival dedicated to human connection, held on the day we honour those who preserved our freedoms, while outside these walls the evidence mounts that we’re trading professional agency for the illusion of efficiency.

To watch the full video session, please see here: 

A day later, I attended an IBM panel at the tech firm’s London headquarters. Their Race for ROI research contained some encouraging news: two-thirds of UK enterprises are experiencing significant AI-driven productivity improvements. But dig beneath the headline, and the picture darkens. Only 38% of UK organisations are prioritising inclusive AI upskilling opportunities. The productivity gains are flowing to those already advantaged. Everyone else is figuring it out on their own – 77% of those using AI at work are entirely self-taught.

Leon Butler, General Manager for IBM UK & Ireland, offered a metaphor that’s stayed with me. He compared opaque AI models to drinking from an opaque test tube.

“There’s liquid in it – that’s the training data – but you can’t see it. You pour your own data in, mix it, and you’re drinking something you don’t fully understand. By the time you make decisions, you need to know it’s clean and true.”

That demand for transparency connects directly to Ruchi’s work in India and Gemma’s critique of corporate futurism. Data for good requires good data. Accountability requires visibility. You can’t build systems that serve human flourishing if the foundations are murky, biased, or simply unknown.

As Sue Daley OBE, who leads techUK’s technology and innovation work, pointed out at the IBM event: “This will be the last generation of leaders who manage only humans. Going forward, we’ll be managing humans and machines together.”

That’s true. But the more important point is this: the leaders who manage that transition well will be the ones who understand that technology is a means, not an end. Efficiency without purpose is just faster emptiness.

The question of what we’re building, and for whom, surfaced differently at the Thinkers50 conference. Lynda Gratton, whom I’ve interviewed a couple of times about living and working well, opened with her weaving metaphor. We’re all creating the cloth of our lives, she argued, from productivity threads (mastering, knowing, cooperating) and nurturing threads (friendship, intimacy, calm, adventure).

Not only is this an elegant idea, but I love the warm embrace of messiness and complexity. Life doesn’t follow a clean pattern. Threads tangle. Designs shift. The point isn’t to optimise for a single outcome but to create something textured, resilient, human.

That messiness matters more now. My recent newsletters have explored the “anti-social century” – how advances in technology correlate with increased isolation. Being in that Guildhall room – surrounded by management thinkers from around the world, having conversations over coffee, making new connections – reminded me why physical presence still matters. You can’t weave your cloth alone. You need other people’s threads intersecting with yours.

Earlier in the month, an episode of The Switch, St James’s Place Financial Adviser Academy’s career change podcast, was released. Host Gee Foottit wanted to explore how professionals can navigate AI’s impact on their working lives – the same territory I cover in this newsletter, but focused specifically on career pivots.

We talked about the six Cs – communication, creativity, compassion, courage, collaboration, and curiosity – and why these human capabilities become more valuable, not less, as routine cognitive work gets automated. We discussed how to think about AI as a tool rather than a replacement, and why the people who thrive will be those who understand when to direct machines and when to trust themselves.

The conversations I’m having – with Gemma, Ruchi, the panellists at IBM, the debaters at Battersea – reinforce the central argument. Technology for good isn’t a slogan. It’s a practice. It requires intention, accountability, and a willingness to ask uncomfortable questions about who benefits and who gets left behind.

If you’re working on something that embodies that practice – whether it’s an accountability platform, a regenerative business model, or simply a team that’s figured out how to use AI without losing its humanity – I’d love to hear from you. These conversations are what fuel the newsletter.

The past

A month ago, I fired my one and only work colleague. It was the best decision for both of us. But the office still feels lonely and quiet without him.

Frank is a Jack Russell who joined us as a puppy almost five years ago. My daughter, only six months old when he came into our lives, grew up with him. Many people with whom I’ve had video calls will know Frank – especially if the doorbell went off during our meeting. He was the most loyal and loving dog, and for weeks after he left, I felt bereft. Suddenly, no one was nudging me in the middle of the afternoon to go for a much-needed, head-clearing stroll around the park.

Pic credit: Samer Moukarzel

So why did I rehome him?

As a Jack Russell, he is fiercely territorial. And where I live and work, in south-east London, it’s busy. He was always on guard, trying to protect and serve me. The postman, Pieter, various delivery folk, and other people who came into the house felt his presence, let’s say. Countless letters were torn to shreds by his vicious teeth – so many that I had to install an external letterbox.

A couple of months ago, while I was trying to retrieve a sock that Frank had stolen and was guarding on the sofa, he snapped and drew blood. After multiple sessions with two different behaviourists, following previous incidents, he was already on a yellow card. If he bit me, who wouldn’t he bite? Red card.

The decision was made to find a new owner. I made a three-hour round trip to meet Frank’s new family, whose home is in the Norfolk countryside – much better suited to a Jack Russell’s temperament. After a walk together in a neutral venue, he travelled back to their house and apparently took 45 minutes to leave their car, snarling, unsure, and confused. It was heartbreaking to think he would never see me again.

But I knew Frank would be happy there. Later that day, I received videos of him dashing around fields. His new owners said they already loved him. A day later, they found the cartoon picture my daughter had drawn of Frank, saying she loved him, in the bag of stuff I’d handed them.

Now, almost a month on, the house is calmer. My daughter has stopped drawing pictures of Frank with tearful captions. And Frank? He’s made friends with Ralph, the black Labrador who shares his new home. The latest photo shows them sleeping side by side, exhausted from whatever countryside adventures Jack Russells and Labradors get up to together.

The proverb “if you love someone, set them free” helped ease the hurt. But there’s something else in this small domestic drama that connects to everything I’ve been writing about this month.

Bregman asks what ladder we’re climbing. Gemma describes an age where doing the wrong thing has become culturally acceptable. Ruchi builds systems that create accountability where none existed. And here I was, facing a much smaller question: what do I owe this dog?

The easy path was to keep him. To manage the risk, install more barriers, and hope for the best. The more challenging path was to acknowledge that the situation wasn’t working – not for him, not for us – and to make a change that felt like failure but was actually responsibility.

Moral ambition doesn’t only show up in accountability platforms and regenerative business models. Sometimes it’s in the quiet decisions: the ones that cost you something, that nobody else sees, that you make because it’s right rather than because it’s easy.

Frank needed space to run, another dog to play with, and owners who could give him the environment his breed demands. I couldn’t provide that. Pretending otherwise would have been a disservice to him and a risk to my family.

The age of badness that Gemma describes isn’t just about billionaires and politicians. It’s also about the small surrenders we make every day: the moments we choose convenience over responsibility, comfort over honesty, the path of least resistance over the path that’s actually right.

I don’t want to overstate this. Rehoming a dog is not the same as building YouthPOWER or challenging tax-avoiding elites at Davos. But the muscle is the same. The willingness to ask uncomfortable questions. The courage to act on the answers.

My daughter’s drawings have stopped. The house is quieter. And somewhere in Norfolk, Frank is sleeping on a Labrador, finally at peace.

Sometimes the most important thing you can do is recognise when you’re climbing the wrong ladder – and have the grace to climb down.

Statistics of the month

🛒 Cyber Monday breaks records
Today marks the 20th annual Cyber Monday, projected to hit $14.2 billion in US sales – surpassing last year’s record. Peak spending occurs between 8pm and 10pm, when consumers spend roughly $15.8 million per minute. A reminder that convenience still trumps almost everything. (National Retail Federation)

🎯 Judgment holds, execution collapses
US marketing job postings dropped 8% overall in 2025, but the divide is stark: writer roles fell 28%, computer graphic artists dropped 33%, while creative directors held steady. The pattern likely mirrors the UK – the market pays for strategic judgment; it’s automating production. (Bloomberry)

🛡️ Cybersecurity complacency exposed
More than two in five UK organisations (43%) believe their cybersecurity strategy requires little to no improvement – yet 71% have paid a ransom in the past 12 months, averaging £1.05 million per payment. (Cohesity)

💸 Cyber insurance claims triple
UK cyber insurance claims hit at least £197 million in 2024, up from £60 million the previous year – a stark reminder that threats are evolving faster than our defences. (Association of British Insurers)

🤖 UK leads Europe in AI optimism
Some 88% of UK IT professionals want more automation in their day-to-day work, and only 10% feel AI threatens their role – the lowest of any European country surveyed. Yet 26% say they need better AI training to keep pace. (TOPdesk)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 22)


TL;DR: October’s Go Flux Yourself explores the epidemic of disconnection in our AI age. As 35% of Britons use smart doorbells to avoid human contact on Hallowe’en, and children face 2,000 social media posts daily, we’re systematically destroying the one skill that matters most: genuine human connection.

Image created on Midjourney

The future

“The most important single ingredient in the formula of success is knowing how to get along with people.”

Have we lost the knowledge of how to get along with people? And to what extent is an increasing dependence on large language models degrading this skill for adults, and not allowing it to bloom for younger folk?

When Theodore Roosevelt, the 26th president of the United States, spoke the above words in the early 20th century, he couldn’t have imagined a world where “getting along with people” would require navigating screens, algorithms, and artificial intelligence. Yet here we are, more than a century after he died in 1919, rediscovering the wisdom in the most unsettling way possible.

Indeed, this Hallowe’en, 35% of UK homeowners plan to use smart doorbells to screen trick-or-treaters, according to estate agents eXp UK. Two-thirds will ignore the knocking. We’re literally using technology to avoid human contact on the one night of the year when strangers are supposed to knock on our doors.

It’s the perfect metaphor for where we’ve ended up. The scariest thing isn’t what’s at your door. It’s what’s already inside your house.

Princess Catherine put it perfectly earlier in October in her essay, The Power of Human Connection in a Distracted World, for the Centre for Early Childhood. “While digital devices promise to keep us connected, they frequently do the opposite,” she wrote, in collaboration with Robert Waldinger, part-time professor of psychiatry at Harvard Medical School. “We’re physically present but mentally absent, unable to fully engage with the people right in front of us.”

I was a contemporary of Kate’s at the University of St Andrews in the wilds of East Fife, Scotland. We both graduated in 2005, a year before Twitter launched and a year after “TheFacebook” appeared. We lived in a world where difficult conversations happened face-to-face, where boredom forced creativity, and where friendship required actual presence. That world is vanishing with terrifying speed.

The Princess of Wales warns that an overload of smartphones and computer screens is creating an “epidemic of disconnection” that disrupts family life. Notably, her three kids are not allowed smartphones (and I’m pleased to report my eldest, aged 11, has a simple call-and-text mobile). “When we check our phones during conversations, scroll through social media during family dinners, or respond to emails while playing with our children, we’re not just being distracted, we are withdrawing the basic form of love that human connection requires.”

She’s describing something I explored in January’s newsletter about the “anti-social century”. As Derek Thompson of The Atlantic coined it, we’re living through a period marked by convenient communication and vanishing intimacy. We’re raising what Catherine calls “a generation that may be more ‘connected’ than any in history while simultaneously being more isolated, more lonely, and less equipped to form the warm, meaningful relationships that research tells us are the foundation of a healthy life”.

The data is genuinely frightening. Recent research from online safety app Sway.ly found that children in the UK and the US are exposed to around 2,000 social media posts per day. Some 77% say it harms their physical or emotional health. And, scariest yet, 72% of UK children have seen content in the past month that made them feel uncomfortable, upset, sad or angry.

Adults fare little better. A recent study on college students found that AI chatbot use is hollowing out human interaction. Students who used to help each other via class Discord channels now ask ChatGPT. Eleven out of 17 students in the study reported feeling more isolated after AI adoption.

One student put it plainly: “There’s a lot you have to take into account: you have to read their tone, do they look like they’re in a rush … versus with ChatGPT, you don’t have to be polite.”

Who needs niceties in the AI age?! We’re creating technology to connect us, to help us, to make us more productive. And it’s making us lonelier, more isolated, less capable of basic human interactions.

Marvin Minsky, who won the Turing Award back in 1969, said something that feels eerily relevant now: “Once the computers get control, we might never get it back. We would survive at their sufferance. If we’re lucky, they might decide to keep us as pets.”

He said that 56 years ago. We’re not there yet. But we’re building towards something, and whether that something serves humanity or diminishes it depends entirely on the choices we make now.

Anthony Cosgrove, who started his career at the Ministry of Defence as an intelligence analyst in 2003 and has earned an MBE, has seen this play out from the inside. Having led global teams at HSBC and now running data marketplace platform Harbr, he’s witnessed first-hand how organisations stumble into AI adoption without understanding the foundations.

“Most organisations don’t even know what data they already hold,” he told me over a video call a few weeks ago. “I’ve seen millions of pounds wasted on duplicate purchases across departments. That messy data reality means companies are nowhere near ready for this type of massive AI deployment.”

After spending years building intelligence functions and technology platforms at HSBC – first for wholesale banking fraud, then expanding to all financial crime across the bank’s entire customer base – he left to solve what he calls “the gap between having aggregated data and turning it into things that are actually meaningful”.

What jumped out from our conversation was his emphasis on product management. “For a really long time, there was a lack of product management around data. What I mean by that is an obsession about value, starting with the value proposition and working backwards, not the other way round.”

This echoes the findings I discussed in August’s newsletter about graduate jobs. As I wrote then, graduate jobs in the UK have dropped by almost two-thirds since 2022 – roughly double the decline for all entry-level roles. That’s the year ChatGPT launched. The connection isn’t coincidental.

Anthony’s perspective on this is particularly valuable. “AI can only automate fragments of a job, not replace whole roles – even if leaders desperately want it to.” He shared a conversation with a recent graduate who recognised that his data science degree would, ultimately, be useless. “The thing he was doing is probably going to be commoditised fairly quickly. So he pivoted into product management.”

This smart graduate’s instinct was spot-on. He’s now, in Anthony’s words, “actively using AI to prototype data products, applications, digital products, and AI itself. And because he’s a data scientist by background, he has a really good set of frameworks and set of skills”.

Yet the broader picture remains haunting. Microsoft’s 2025 Work Trend Index reveals that 71% of UK employees use unapproved consumer AI tools at work. Fifty-one per cent use these tools weekly, often for drafting reports and presentations, or even managing financial data, all without formal IT approval.

This “Shadow AI” phenomenon is simultaneously encouraging and terrifying. “It shows that people are agreeable to adopting these types of tools, assuming that they work and actually help and aren’t hard to use,” Anthony observed. “But the second piece that I think is really interesting impacts directly the shareholder value of an organisation.”

He painted a troubling picture: “If a big percentage of your employees are becoming more productive and finishing their existing work faster or in different ways, but they’re doing so essentially untracked and off-books, you now have your employees that are becoming essentially more productive, and some of that may register, but in many cases it probably won’t.”

Assuming that many employees are using AI for work without being open about it with their employers, how concerned about security and data privacy are they likely to be?

Earlier in the month, Cybernews discovered that two AI companion apps, Chattee Chat and GiMe Chat, exposed millions of intimate conversations from over 400,000 users. The exposed data contained over 43 million messages and over 600,000 images and videos.

At the time of writing, one of the apps, Chattee, was the 121st Entertainment app on the Apple App Store, downloaded over 300,000 times. This is a symptom of what people, including Microsoft’s AI chief Mustafa Suleyman (as per August’s Go Flux Yourself), are calling AI psychosis: the willingness to confide our deepest thoughts to algorithms while losing the ability to confide in actual humans.

As I explored in June 2024’s newsletter about AI companions, this trend has been accelerating. Back in March 2024, there had been 225 million lifetime downloads on the Google Play Store for AI companions alone. The problem isn’t scale. It’s the hollowing out of human connection.

Then there’s the AI bubble itself, which everyone in the space has been talking about in the last few weeks. The Guardian recently warned that AI valuations are “now getting silly”. The CAPE ratio – the cyclically adjusted price-to-earnings ratio – has reached dotcom bubble levels. The “Magnificent 7” tech companies now represent slightly more than a third of the whole S&P 500 index.

OpenAI’s recent deals exemplify the circular logic propping up valuations. The arrangement under which OpenAI will pay Nvidia for chips and Nvidia will invest $100bn in OpenAI has been criticised as exactly what it is: circular. The latest move sees OpenAI pledging to buy lots of AMD chips and take a stake in AMD over time.

And yet amid this chaos, there are plenty of people going back to human basics: rediscovering real, in-person connection through physical activity and genuine community.

Consider walking football in the UK. What began in Chesterfield in 2011 as a gentle way to coax older men back into exercise has become one of Britain’s fastest-growing sports. More than 100,000 people now play regularly across the UK, many managing chronic illnesses or disabilities. It has become “a masterclass in human communication” that no AI could replicate. Tony Jones, 70, captain of the over-70s, described it simply: “It’s the camaraderie, the dressing room banter.”

Research from Nottingham Trent University found that walking footballers’ emotional well-being exceeded the national average, and loneliness was less common. “The national average is about 5% for feeling ‘often lonely’,” said professor Ian Varley. “In walking football, it was 1%.”

This matters because authentic human interaction – the kind that requires you to read body language, manage tone, and show up physically – can’t be automated. Princess Catherine emphasises this in her essay, citing Harvard Medical School’s research showing that “the people who were more connected to others stayed healthier and were happier throughout their lives. And it wasn’t simply about seeing more people each week. It was about having warmer, more meaningful connections. Quality trumped quantity in every measure that mattered.”

The digital world offers neither warmth nor meaning. It offers convenience. And as Catherine warns, convenience is precisely what’s killing us: “We live increasingly lonelier lives, which research shows is toxic to human health, and it’s our young people (aged 16 to 24) that report being the loneliest of all – the very generation that should be forming the relationships that will sustain them throughout life.”

Roosevelt understood this instinctively over a century ago: success isn’t about what you know or what you can do. It’s about how you relate to other people. That skill – the ability to truly connect, to read a room, to build trust, to navigate conflict, to offer genuine empathy – remains stubbornly, beautifully human.

And it’s precisely what we’re systematically destroying. If we don’t take action to arrest this dark and deepening trend of digitally supercharged disconnection, the dream of AI and other technologies being used for enlightenment and human flourishing will quickly prove to be a living nightmare.

The present

Image runner’s own

As the walking footballers demonstrate, the physical health benefits of group exercise are sometimes secondary to camaraderie – but winning and hitting goals are also fun and life-affirming. In October, I ran a half-marathon in under 1 hour and 30 minutes for the first time, crossing the line at Walton-on-Thames to complete the River Thames half in 1:29:55. A whole five seconds to spare! I would have been nowhere near that time without Mike.

Mike is a member of the Crisis of Dads, the running group I founded in November 2021. What started as a clutch of portly, middle-aged plodders meeting at 7am every Sunday in Ladywell Fields, in south-east London, has grown to 26 members. Men in their 40s and 50s exercising to limit the dad bod and creating space to chat through things on our minds.

The male suicide rate in the UK in 2024 was 17.1 per 100,000, compared to 5.6 per 100,000 for women, according to the charity Samaritans. Males aged 50-54 had the highest rate: 26.8 per 100,000. Connection matters. Friendship matters. Physical presence matters.

Mike paced me during the River Thames half-marathon. With two miles to go, we were on track to go under 90 minutes, but the pain was horrible. His encouragement became more vocal – and more profane – as I closed in on something I thought beyond my ability.

Sometimes you need someone who believes in your ability more than you do to swear lovingly at you to cross that line quicker.

Work in the last month has been equally high octane, and (excuse the not-so-humble brag) record-breaking – plus full of in-person connection. My fledgling thought leadership consultancy, Pickup_andWebb (combining brand strategy and journalistic expertise to deliver guaranteed ROI – or your money back), is taking flight.

And I’ve been busy moderating sessions at leading technology events across the country, around the hot topic of how to lead and prepare the workforce in the AI age.

Moderating at DTX London (image taken by organisers)

On the main stage at DTX London, I opened a session on AI readiness by asking the audience whose workforce was suitably prepared. One person, out of hundreds, stuck their hand up: Andrew Melville, who leads customer strategy for Mission Control AI in Europe. Sportingly, he took the microphone and explained the key to his success.

I caught him afterwards. His confidence wasn’t bravado. Mission Control recently completed a data reconciliation project for a major logistics company. The task involved 60,000 SKUs of inventory data. A consulting firm had quoted two to three months and a few million pounds. Mission Control’s AI configuration completed it in eight hours. A thousand times faster, and 80% cheaper.

“You’re talking orders of magnitude,” Andrew said. “We’re used to implementing an Oracle database, and things get 5 or 10% more efficient. Now you’re seeing a thousand times more efficiency in just a matter of days and hours.”

He drew a parallel to the Ford Motor Company’s assembly line. Before that innovation, it took 12 hours to build a car. After? Ninety minutes. Eight times faster. “Imagine being a competitor of Ford,” Andrew said, “and they suddenly roll out the assembly line. And your response to that is: we’re going to give our employees power tools so they can build a few more cars every day.”

That’s what most companies are doing with AI: giving workers ChatGPT subscriptions, hoping for magic, and missing the fundamental transformation required. As I said on stage at DTX London, it’s like handing workers the keys to a Formula 1 car without instructions, then wondering why there are so many immediate and expensive crashes.

“I think very quickly what you’re going to start seeing,” Andrew said, “is executives that can’t visualise what an AI transformation looks like are going to start getting replaced by executives that do.”

At Mission Control, he’s building synthetic worker architectures – AI agents that can converse with each other, collaborate across functions, and complete higher-order tasks. Not just analysing inventory data, but coordinating with procurement systems and finance teams simultaneously.

“It’s the equivalent of having three human experts in different fields,” Andrew explained, “and you put them together and you say, we need you to connect some dots and solve a problem across your three areas of expertise.”
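
For the technically curious, here is what such an architecture can look like in miniature. This is a purely illustrative Python sketch, assuming invented agents, specialisms and a fixed route of my own; it does not describe Mission Control’s actual design.

```python
# Illustrative only: a toy "synthetic worker" team in which specialist agents
# pass a task between functions. The specialisms and routing are invented for
# this sketch; they do not describe Mission Control's real system.
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    speciality: str

    def handle(self, task: str) -> str:
        # A real agent would call a language model and its own tools here.
        return f"{self.name} ({self.speciality}) processed: {task}"


def run_team(task: str, agents: list[Agent]) -> list[str]:
    # Route the task through each speciality in turn, mimicking
    # "three experts connecting the dots" across their fields.
    return [agent.handle(task) for agent in agents]


team = [
    Agent("InventoryBot", "inventory"),
    Agent("ProcureBot", "procurement"),
    Agent("FinanceBot", "finance"),
]

for line in run_team("reconcile 60,000 SKUs", team):
    print(line)
```

A real deployment would wire each agent to data, tools and a model, and let them negotiate rather than follow a fixed route; the sketch only shows the shape of the coordination.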

The challenge is conceptual. How do you lead a firm where human workers and digital workers operate side by side, where the tasks best suited for machines are done by machines and the tasks best suited for humans are done by humans?

This creates tricky questions throughout organisations. Right now, most people are rewarded for being at their desks for 40 hours a week. But what happens when half that time involves clicking around in software tools, downloading data sets, reformatting, and loading back? What happens when AI can do all of that in minutes?

“We have to start abstracting the concept of work,” Andrew said, “and separating all of the tasks that go into creating a result from the result itself.”

Digging into that is for another edition of the newsletter, coming soon. 

Elsewhere, at the first Data Decoded in Manchester, I moderated a 30‑minute discussion on leadership in the age of AI. We were just getting going when time was up, which feels very much like 2025. The appetite for genuine insight was palpable. People are desperate for answers beyond the hype. Leaders sense the scale of the shift. However, their calendars still favour show-and-tell over do-and‑learn. That will change, but not without bruises.

Also in October, my essay on teenage hackers was finally published in the New Statesman. The main message is that we’re criminalising the young people whose skills we desperately need, rather than offering them a path into cybersecurity and related industries instead of the darker criminal world.

Looking slightly ahead, on 11 November, I’ll be expanding on these AI-related themes, debating at The Portfolio Collective’s Portfolio Career Festival at Battersea Arts Centre. The subject, Unlocking Potential or Chasing Efficiency: AI’s Impact on Portfolio Work, prompts the question: should professionals embrace AI as a tool to amplify skills, creativity and flow, or hand over entire workflows to autonomous agents?

I know which side I’m on. 

(If you fancy listening in and rolling your sleeves up alongside over 200 ambitious professionals – for a day of inspiration, connection and, most importantly, growth – I can help with a discounted ticket. Use OLIVERPCFEST for £50 off the cost here.)

The past

In 2013, I was lucky enough to edit the Six Nations Guide with Lewis Moody, the former England rugby captain, a blood-and-thunder flanker who clocked up 71 caps. At the time, Lewis was a year into retirement, grappling with the physical aftermath of a brutal professional career.

When the tragic news broke earlier in October that Lewis, 47, had been diagnosed with the cruelly life-sapping motor neurone disease (MND), it set forth a waterfall of sorrow from the rugby community and far beyond. I simply sent him a heart emoji. He texted the same back a few hours later.

Lewis’s hellish diagnosis and the impact it has had on so many feels especially poignant given Princess Catherine’s reflections on childhood development. She writes about a Harvard study showing that “people who developed strong social and emotional skills in childhood maintained warmer connections with their spouses six decades later, even into their eighties and nineties”.

She continued: “Teaching children to better understand both their inner and outer worlds sets them up for a lifetime of healthier, more fulfilling relationships. But if connection is the key to human thriving, we face a concerning reality: every social trend is moving in the opposite direction.”

AI has already changed work. The deeper question is whether we’ll preserve the skills that make us irreplaceably human.

This Hallowe’en, the real horror isn’t monsters at the door. It’s the quiet disappearance of human connection, one algorithmically optimised interaction at a time.

Roosevelt was right. Success depends on getting along with people. Not algorithms. Not synthetic companions. Not virtual influencers.

People.

Real, messy, complicated, irreplaceable people. 

Statistics of the month

💰 AI wage premium grows
Workers with AI skills now earn a 56% wage premium compared to colleagues in the same roles without AI capabilities – showing that upskilling pays off in cold, hard cash. (PwC)

🔄 A quarter of jobs face radical transformation
Roughly 26% of all jobs on Indeed appear poised to transform radically in the near future as GenAI rewrites the DNA of work across industries. (Indeed)

📈 AI investment surge continues
Over the next three years, 92% of companies plan to increase their AI investments – yet only 1% of leaders call their companies “mature” on the deployment spectrum, revealing a massive gap between spending and implementation. (McKinsey)

📉 Workforce reduction looms
Some 40% of employers expect to reduce their workforce where AI can automate tasks, according to the World Economic Forum’s Future of Jobs Report 2025 – a stark reminder that transformation has human consequences. (WEF)

🎯 Net job creation ahead
A reminder that despite fears, AI will displace 92 million jobs but create 170 million new ones by 2030, resulting in a net gain of 78 million jobs globally – proof that every industrial revolution destroys and creates in equal (or greater) measure. (WEF)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 18)

TL;DR: June’s edition explores truth-telling in an age of AI-generated misinformation, the flood of low-quality content threatening authentic expertise, and why human storytelling becomes our most valuable asset when everything else can be faked – plus some highlights from South by Southwest London.

Image created on Midjourney

The future

“When something is moving a million times every 10 years, there’s only one way that you can survive it. You’ve got to get on that spaceship. Once you get on that spaceship, you’re travelling at the same speed. When you’re on the rocket ship, all of a sudden, everything else slows down.”

Nvidia CEO Jensen Huang’s words, delivered at London Tech Week earlier this month alongside Prime Minister Keir Starmer, capture the current state of artificial intelligence. We are being propelled by technological change at an unprecedented speed, orders of magnitude quicker than Moore’s law, and it feels alien and frightening.

Before setting foot on the rocket ship, though, the first barrier to overcome for many is trust in AI. Indeed, for many, it’s advancing so rapidly that the potential for missed or hidden consequences is alarming enough to prompt a hard brake, or a refusal to climb aboard at all.

Others understand the threats but focus on the opportunities promised by AI and are jostling for position, bracing for warp speed. Nothing will stop them, but at what cost to society?

For example, we’re currently witnessing two distinct trajectories for the future of online content and, to some extent, services. One leads towards an internet flooded with synthetic mediocrity and, worse, untrustworthy information; the other towards authentic human expertise becoming our most valuable currency.

Because the truth crisis has already landed, and AI is taking over, attacking the veracity of, well, everything we read and much of what we see on a screen. 

In May, NewsGuard, which provides data to help identify reliable information online, identified 1,271 AI-generated news and information sites across 16 languages, operating with little to no human oversight, up from 750 last year.

It’s easy not to see this as you pull on your astronaut helmet and space gloves, but this is an insidious, industrial-scale production of mediocrity. Generative AI, fed on historical data, produces content that reflects the average of what has been published before, offering no new insights, lived experiences, or authentic perspectives. The result is an online world increasingly polluted with bland, sourceless, soulless and often inaccurate information. The slop is only going to get sloppier, too. What does that mean for truth and, yes, trust?

The 2025 State of AI in Marketing Report, published by HubSpot last week, reveals that 84% of UK marketers now use AI tools daily in their roles, compared to a global average of 66%.

Media companies are at risk of hosting, citing, and copying the marketing content. Some are actively creating it while swinging the axe liberally, culling journalists, and hacking away at integrity. 

The latest Private Eye reported how Piers North, CEO of Reach – struggling publisher of the Mirror, Express, Liverpool Echo, Manchester Evening News, and countless other titles – has a “cunning plan: to hand it all over to the robots to sort out”. 

According to the magazine, North told staff: “It feels like we’re on the cusp of another digital revolution, and obviously that can be pretty daunting, but here I think we’ve got such an opportunity to do more of the stuff we love and are brilliant at. So with that in mind, you won’t be surprised to hear that embracing AI is going to feature heavily in my strategic priorities.”

The incentive structure is clear: publish as much as possible and as quickly as possible to attract traffic. Quality, alas, becomes secondary to volume.

But this crisis creates opportunity. Real expertise becomes more valuable precisely because it’s becoming rarer. The brands and leaders who properly emphasise authentic human knowledge will enjoy a competitive advantage over competitors drowning in algorithmic sameness, now and in the future.

What does this mean for our children? They’re growing up in a world where they’ll need to become master detectives of truth. The skills we took for granted – being able to distinguish reliable sources from unreliable ones and recognising authentic expertise from synthetic mimicry – are becoming essential survival tools. 

They’ll need to develop what we might call “truth literacy”: the ability to trace sources, verify claims, and distinguish between content created by humans with lived experience and content generated by algorithms with training data.

This detective work extends beyond text to every form of media. Deepfakes are becoming indistinguishable from reality. Voice cloning requires just seconds of audio. Even video evidence can no longer be trusted without verification.

The implications for work – and, well, life – are profound. For instance, with AI agents being the latest business buzzword, Khozema Shipchandler, CEO of global cloud communications company Twilio, shared with me how their technology is enabling what he calls “hyper-personalisation at scale”. But the key discovery isn’t the technology itself; it’s how human expertise guides its application.

“We’re not trying to replace human agents,” Khozema told me. “We’re creating experiences where virtual agents handle lower complexity interactions but can escalate seamlessly to humans when genuine expertise is needed.”

He shared a healthcare example. Cedar Health, based in the United States, found that 97% of patient inquiries were related to a lack of understanding of bills. However, patients initially preferred engaging with AI agents because they felt less embarrassed about gaps in their medical terminology. The AI could process complex insurance data instantly, but when nuanced problem-solving was required, human experts stepped in with full context.

In this case, man and machine are working together brilliantly. As Shipchandler put it: “The consumer gets an experience where they’re being listened to all the way through, they’re getting accuracy because everything gets recapped, and they’re getting promotional offers that aren’t annoying because they reference things they’ve actually done before.”

The crucial point, though, is that none of this works without human oversight, empathy, and strategic thinking. The AI handles the data processing; humans provide the wisdom.

Jesper With-Fogstrup, Group CEO of Moneypenny, a telephone answering service, echoed this theme from a different angle. His global company has been testing AI voice agents for a few months, handling live calls across various industries. The early feedback has been mixed, but revealing.

“Some people expect it’s going to be exactly like talking to a human,” With-Fogstrup told me in a cafe down the road from Olympia, the venue for London Tech Week. “It just isn’t. But we’re shipping updates to these agents every day, several times a day. They’re becoming better incredibly quickly.”

What’s fascinating is how customers reveal more of themselves to AI agents compared to human agents. “There’s something about being able to have a conversation for a long time,” Jesper observed. “The models are very patient. Sometimes that’s what’s required.”

But again, the sweet spot isn’t AI replacing humans. It’s AI handling routine complexity so humans can focus on what they do uniquely well. As Jesper explained: “If it escalates into one of our Moneypenny personal assistants, they get a summary, they can pick up the conversation, they understand where it got stuck, and they can resolve the issue.”
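
For illustration only, that handoff logic can be sketched in a few lines of Python. The escalation triggers, thresholds and summary fields below are assumptions of mine, not Moneypenny’s (or Twilio’s) actual implementation.

```python
# Toy sketch of an AI-to-human escalation handoff. The triggers, thresholds
# and summary fields are invented for illustration; they are not Moneypenny's
# (or Twilio's) actual implementation.
from dataclasses import dataclass


@dataclass
class CallState:
    transcript: list[str]          # alternating caller/agent turns so far
    failed_attempts: int           # times the AI could not resolve the query
    caller_requested_human: bool


def should_escalate(state: CallState, max_attempts: int = 2) -> bool:
    """Escalate when the caller asks for a person or the AI keeps getting stuck."""
    return state.caller_requested_human or state.failed_attempts >= max_attempts


def handoff_summary(state: CallState) -> dict:
    """Package what the human assistant needs to pick the conversation up."""
    return {
        "turns_so_far": len(state.transcript),
        "last_caller_turn": state.transcript[-1] if state.transcript else "",
        "stuck_after_attempts": state.failed_attempts,
    }


state = CallState(
    transcript=["Caller: I need to change my booking", "AI: Which date suits you?"],
    failed_attempts=2,
    caller_requested_human=False,
)

if should_escalate(state):
    print("Escalating with summary:", handoff_summary(state))
```

The point is the structure: the AI handles the routine turns, and when it gets stuck the human inherits the full context rather than starting cold.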

The future of work, then, isn’t about choosing between human and artificial intelligence. It’s about designing systems where each amplifies the other’s strengths while maintaining the ability to distinguish between them.

Hilary Cottam’s research for her new book, The Work We Need, arrives at the same conclusion from a different direction. After interviewing thousands of workers, from gravediggers to the Microsoft CEO, she identified six principles for revolutionising work: 

  • Securing the basics
  • Working with meaning
  • Tending to what sustains us
  • Rethinking our use of time
  • Enabling play
  • Organising in place

Work, Cottam argues, is “a sort of chrysalis in which we figure out who we are and what we’re doing here, and what we should be doing to be useful”. That existential purpose can’t be automated away.

The young female welder Cottam profiled, working on nuclear submarines for BAE in Barrow-in-Furness, exemplifies this. She and her colleagues are “very, very convinced that their work is meaningful, partly because they’re highly skilled. And what’s very unusual in the modern workplace is that a submarine takes seven years to build, and most of the teamwork on that submarine is end-to-end.”

This is the future we should be building towards: AI handling the routine complexity, humans focusing on meaning and purpose, and the irreplaceable work of creating something that lasts. But we must teach our children how to distinguish between authentic human expertise and sophisticated synthetic imitation. Not easy.

Meanwhile, the companies already embracing this approach are seeing remarkable results. They’re not asking whether AI will replace humans, but how human expertise can be amplified by AI to create better outcomes for everyone while maintaining transparency about when and how AI is being used.

As Huang noted in his conversation with the Prime Minister: “AI is the great equaliser. The new programming language is called ‘human’. Anybody can learn how to program in AI.”

But that democratisation only works if we maintain the distinctly human capabilities that give that programming direction, purpose, and wisdom. The rocket ship is accelerating. Will we use that speed to amplify human potential or replace it entirely?

The present

At the inaugural South by Southwest London, held in Shoreditch, East London, at the beginning of June, I witnessed fascinating tensions around truth-telling that illuminate our current moment. The festival brought together storytellers, technologists, and pioneers, each grappling with how authentic voices survive in an increasingly synthetic world. Here are some of my highlights.

Image created on my iPhone

Tina Brown, former editor-in-chief of Tatler, Vanity Fair, The New Yorker, and The Daily Beast, reflecting on journalism’s current challenges, offered a deceptively simple observation: “To be a good writer, you have to notice things.” In our AI-saturated world, this human ability to notice becomes invaluable. While algorithms identify patterns in data, humans notice what’s missing, what doesn’t fit, and what feels wrong.

Brown’s observation carries particular weight, given her experience navigating media transformation over the past five decades. She has watched industries collapse and rebuild, seen power structures shift, and observed how authentic voices either adapt or fade away.

“Legacy media itself is reinventing itself all over the place,” she said. “They’re all trying to do things differently. But what you really miss in these smaller platforms is institutional backing. You need good lawyers, institutional backing for serious journalism.”

This tension between democratised content creation and institutional accountability sits at the heart of our current crisis. Anyone can publish anything, anywhere, anytime. But who ensures accuracy? Who takes responsibility when misinformation spreads? Who has the resources to fact-check, verify sources, and maintain standards?

This is a cultural challenge, as well as a technical one. When US President Donald Trump can shout down critics with “fake news”, and seemingly run a corrupt government – the memecoin $TRUMP and his involvement with World Liberty Financial have reportedly raised over half a billion dollars, and there was the $400m (£303m) gift of a new official private jet from Qatar, among countless other questionable dealings – what does that mean for the rest of us?

Brown said: “The incredible thing is that the US President … doesn’t care how bad it looks. The first term was like, well, the president shouldn’t be making money out of himself. All that stuff is out of the window.”

When truth-telling itself becomes politically suspect, when transparency is viewed as a weakness rather than a strength, the work of authentic communication becomes both more difficult and more essential.

This dynamic played out dramatically in the spy world, as Gordon Corera, the BBC’s Security Correspondent, and former CIA analyst David McCloskey revealed during a live recording of their podcast, The Rest is Classified, about intelligence operations. The most chilling story they shared wasn’t about sophisticated surveillance or cutting-edge technology. It was about children discovering their parents’ true identities only when stepping off a plane in Moscow, greeted by Vladimir Putin himself.

Imagine learning that everything you believed about your family, your identity, and your entire childhood was constructed fiction. These children of deep-cover Russian operatives lived authentic lives built on complete deception. The psychological impact, as McCloskey noted, requires “all kinds of exotic therapies”.

Just imagine. Those children will have gone past the anger about being lied to and crashed into devastation, having had their sense of reality torpedoed. When the foundation of truth crumbles, it’s not simply the facts that disappear: it’s the ability to trust anything, anywhere, ever again.

This feeling of groundlessness is what our children risk experiencing if we don’t teach them how to navigate an increasingly synthetic information environment. 

The difference is that while those Russian operatives’ children experienced one devastating revelation, our children face thousands of micro-deceptions daily: each AI-generated article, each deepfake video, each synthetic voice clip eroding their ability to distinguish real from artificial.

Zelda Perkins, speaking about whistleblowing at SXSW London, captured something essential about the courage required to tell brutal truths. When she broke her NDA to expose Harvey Weinstein’s behaviour and detonate the #MeToo movement in 2017, she was trying to dismantle the system that enables silence, rather than simply bring down a powerful man. “The problem wasn’t really Weinstein,” she emphasised. “The problem is the system. The problem is these mechanisms that protect those in power.”

Her most powerful reflection was that she has no regrets about speaking out and telling the truth despite the unimaginable impact on her career and beyond. “My life has been completely ruined by speaking out,” she said. “But I’m honestly not sure I’ve ever been more fulfilled. I’ve never grown more, I’ve never learned more, I’ve never met more people with integrity.”

I’m reminded of a quote from Jesus in the Bible (John 8:32 – and, yes, I had to look that up, of course): “And ye shall know the truth, and the truth shall make you free.”

Truth can set you free, but it may come at a cost. This paradox captures something essential about truth-telling in our current moment. Individual courage matters, but systemic change requires mass action. As Perkins noted: “Collective voice is the most important thing for us right now.”

Elsewhere at SXSW London, the brilliantly named Mo Brings Plenty – an Oglala Lakota television, film, and stage actor (Mo in Yellowstone) – spoke with passion about Indigenous perspectives. “In our culture, we talk about the next seven generations,” he said. “What are we going to pass on to them? What do we leave behind?”

This long-term thinking feels revolutionary in our culture of instant gratification. Social media rewards immediate engagement. AI systems optimise for next-click prediction. Political cycles focus on next-election victories.

But authentic leaders think in generations, not quarters. They build systems that outlast their own tenure. They tell truths that may be uncomfortable now but are necessary for future flourishing.

The creative community at SXSW London embodied this thinking. Whether discussing children’s environmental education or music’s power to preserve cultural memory, artists consistently framed their work in terms of legacy and impact beyond immediate success.

As Dr Deepak Chopra noted in the “Love the Earth” session featuring Mo Brings Plenty: “Protecting our planet is something we can all do joyfully with imagination and compassion.”

This joyful approach to brutal truths offers a template for navigating our current information crisis. We don’t need to choose between honesty and hope. We can tell hard truths while building better systems and expose problems while creating solutions.

The key is understanding that truth-telling isn’t about punishment or blame. It’s about clearing space for authentic progress that will precipitate the flourishing of humanity, not its dulling.

The (recent) past

Three weeks ago, I took a 12-minute Lime bike ride (don’t worry, I have a clever folding helmet and never run red lights) from my office in south-east London to Goldsmiths, University of London. I spoke to a room full of current students, recent graduates, and business leaders, delivering a keynote titled: “AI for Business Success: Fostering Human Connection in the Digital Age.” The irony wasn’t lost on me: here I was, using my human capabilities to argue for the irreplaceable value of human connection in an age of AI.

Image taken by my talented friend Samer Moukarzel

The presentation followed a pattern I have been perfecting over the past year. I began with a simple human interaction: asking audience members to turn to each other and share their favourite day of the week and favourite time of that day. (Tuesday at 8.25pm, before starting five-a-side footie, for me.) It triggered a minute or two of genuine curiosity, slight awkwardness, perhaps a shared laugh or unexpected discovery.

That moment captures everything I’m trying to communicate. While everyone obsesses over AI’s technical capabilities, we’re forgetting that humans crave connection, meaning, and the beautiful unpredictability of authentic interaction.

A week or so later, for Business and IP Centre (BIPC) Lewisham, I delivered another presentation: “The Power of Human-Led Storytelling in an AI World.” This one was delivered over Zoom, and the theme remained consistent, but the context shifted. These were local business leaders, many of whom were struggling with the same questions. How do we stay relevant? How do we compete with automated content? How do we maintain authenticity in an increasingly synthetic world?

Both presentations built on themes I’ve been developing throughout this year of Go Flux Yourself. The CHUI framework, the concept of being “kind explorers”, the recognition that we’re living through “the anti-social century”, where technology promises connection but often delivers isolation.

But there’s something I’ve learned from stepping onto stages and speaking directly to people that no amount of writing can teach: the power of presence. When you’re standing in front of an audience, there’s no algorithm mediating the exchange. No filter softening hard-to-hear truths, and no AI assistant smoothing rough edges.

You succeed or fail based on your ability to read the room, adapt in real time, and create a genuine connection. These are irreplaceable human skills that become more valuable as everything else becomes automated.

The historical parallel keeps returning to me. On June 23, I delivered the BIPC presentation on what would have been Alan Turing’s 113th birthday. The brilliant mathematician whose work gave rise to modern computing and AI would probably be fascinated – and perhaps concerned – by what we’ve done with his legacy.

I shared the myth that Apple’s bitten logo was supposedly Steve Jobs’ tribute to Turing, who tragically died after taking a bite from a cyanide-laced apple. It’s compelling and poetic, connecting our digital age to its origins. There’s just one problem: it’s entirely false.

Rob Janoff, who designed the logo, has repeatedly denied any homage to Turing. Apple itself has stated there’s no link. The bite was added so people wouldn’t mistake the apple for a cherry. Sometimes, the mundane truth is just mundane.

But here’s why I started with this myth: compelling narratives seem more important than accurate ones, and everything is starting to sound exactly the same because algorithms are optimised for engagement over truth.

As I’ve refined these talks over the past months, I’ve discovered that as our environment becomes increasingly artificial, the desire for authentic interaction grows stronger. The more content gets automated, the more valuable genuine expertise becomes. The more relationships are mediated by algorithms, the more precious unfiltered, messy human connections feel.

That’s the insight I’ll carry forward into the second half of 2025. Not that we should resist technological change, but that we should use it to amplify our most human capabilities while teaching our children how to be master detectives of truth in an age of synthetic everything, and encouraging them to experiment, explore, and love.

Statistics of the month

💼 Executive AI race
Almost two-thirds (65%) of UK and Irish CEOs are actively adopting AI agents, with 58% pushing their organisations to adopt Generative AI faster than people are comfortable with. Two-thirds confirm they’ll take more risks than the competition to stay competitive. 🔗

📧 The infinite workday
Microsoft’s 2025 Annual Work Trend Index Report reveals employees are caught in constant churn, with 40% triaging emails by 6am, receiving 117 emails and 153 chats daily. Evening meetings after 8pm are up 16% year-over-year, and weekend work continues rising. 🔗

🤖 AI trust paradox
While IBM replaced 94% of HR tasks with AI, many executives have serious reservations. Half (51%) don’t trust AI fully with financial decision-making, and 22% worry about data quality feeding AI models. 🔗

📉 Gender gap persists
The World Economic Forum’s 2025 Global Gender Gap Report shows 68.8% of the gap closed, yet full parity remains 123 years away. Despite gains in health and education, economic and political gaps persist. 🔗

⚠️ Unemployment warning
Anthropic CEO Dario Amodei predicts AI could eliminate half of all entry-level white-collar jobs and send unemployment rocketing to 20% within five years. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 17)

TL;DR: May’s Go Flux Yourself explores how, in a world where intelligence is becoming cheap, easy, and infinite, the concept of childhood and adolescence is being rewritten. Are AI tools empowering young people or quietly corroding their minds?

Image created on Midjourney

The future

“If I were the devil, I wouldn’t come with violence. I’d come with convenience. I’d give them everything they wanted, and strip them of everything they needed. I’d make them feel connected, but completely alone, endlessly entertained, but quietly empty.”

The above is part of an answer by ChatGPT to a question posed recently by lifestyle influencer Alexia Ferrari, aka themillionairemaker on social media channels. This was the puzzler: “If you were the devil, how would you destroy the young minds of the next generation without them even knowing it?”

And its resonance – evidenced by millions of views and thousands of comments – suggests that she struck a nerve. The AI’s response was disturbingly eloquent.

Following on from the opening quotation, above, it continued: “I’d blur the lines between truth and opinion, until nothing meant anything. I’d teach them to chase dopamine, not purpose, to worship self, but love who they are. I wouldn’t destroy them with war, I’d destroy them with apathy. I’d keep them distracted, numb, scrolling, always scrolling.”

So far, so horrific. There was more. “I’d dismantle the family by redefining it. Make fathers optional, mothers exhausted, and children confused. I’d hand them a phone before they could speak in full sentences. And I’d watch their minds rot slowly, sweetly, silently. And the most brilliant part? They’d never know it was me. They’d call it freedom.” 

Ultimately, it described a world that feels uncomfortably like our own. The children are not only online but are being remade by it. In response, and perhaps seeking some philosophical distance, I asked Claude, Anthropic’s AI, the same question. Its answer was more measured, but no less damning.

“Let me suggest a rather chilling thought experiment,” Claude began, with a silent cackle. “If I were tasked with undermining young minds whilst keeping them blissfully unaware, I’d focus on their digital habits – the very tools they trust most. First, I’d flood their feeds with endless, mindless content. Not obviously harmful stuff, mind you, but material so perfectly tailored to their dopamine receptors that genuine curiosity would wither away. Why wonder about the world when the algorithm already knows what you want? 

“Then I’d weaponise comparison. Social media would become a hall of mirrors where everyone else appears happier, more successful, better looking. The constant measuring up would erode self-worth drip by drip. I’d fragment their attention spans. Tasks requiring sustained focus would feel increasingly impossible as their brains rewired for quick hits of stimulation. Reading a book? Ancient history. Deep thinking? Too much effort. 

“Perhaps most insidiously, I’d create echo chambers so convincing that opposing viewpoints would seem not just wrong, but incomprehensible. Critical thinking would rust from disuse. The beauty of this approach? They’d defend their digital chains. The very devices stealing their potential would feel like lifelines. They’d scroll willingly into intellectual darkness, smartphone in hand, convinced they’re more connected than ever. Frightening, really, how closely this mirrors aspects of our current reality. Makes you wonder what we might do differently, doesn’t it?”

Makes one wonder, indeed. These warnings, eerily congruent despite coming from different large language models, do not speak of potential threats. They describe current conditions. They’re not predicting a future. They’re diagnosing a present.

The effect is cumulative. Essentially, as parents, educators, and adults, we are outsourcing more and more of the hard cognitive lifting – research, memory, even curiosity – to machines. And what we once called “childhood” is now a battleground between algorithms and agency.

I’m typing these words on the train back to London from Cheshire, where I was in the countryside with my two young children, at my parents’ house. This half term, we escaped the city for a few days of greenery and generational warmth. (The irony here is that while walks, books and board games dominated the last three days, my daughter is now on a maths game on an iPad, and my older son is blowing things up on his Nintendo Switch – just for an hour or so while I diligently polish this newsletter.)

There were four-week-old lambs in the field next door, gleefully gambolling. The kids cooed. For a moment, all was well. But as they scampered through the grass, I thought: how long until this simplicity is overtaken by complexity? How long until they’re pulled into the same current sweeping the rest of us into a world of perpetual digital mediation?

That question sharpened during an in-person roundtable I moderated for Cognizant and Microsoft a week ago. The theme was generative AI in financial services, but the most provocative insight came not from a banker but from technologist David Fearne. “What happens,” he asked, “when the cost of intelligence sinks to zero?”

It’s a question that has since haunted me. Because it’s not just about jobs or workflows. It’s about meaning.

If intelligence becomes ambient – like electricity, always there, always on – what is the purpose of education? What becomes of effort? Will children be taught how to think, or simply how to prompt?

The new Intuitive AI report, produced by Cognizant and Microsoft, outlines a corporate future in which “agentic AI” becomes a standard part of every team. These systems will do much more than answer questions. They will anticipate needs, draft reports, analyse markets, and advise on strategy. They will, in effect, think for us. The vision, says Cognizant’s Fearne, is to build an “agentic enterprise”, which moves beyond isolated AI tools to interconnected systems that mirror human organisational structures, with enterprise intelligence coordinating task-based AI across business units.

That’s the world awaiting today’s children. A world in which thinking might not be required, and where remembering, composing, calculating, synthesising – once the hallmarks of intelligence – are delegated to ever-helpful assistants. 

The risk is that children become, well, lazy, or worse, they never learn how to think in the first place.

And the signs are not subtle. Gallup’s latest State of the Global Workplace study, published in April, reports that only 21% of the global workforce is actively engaged, a record low. Digging deeper, only 13% of the workforce is engaged in Europe – the lowest of any region – and in the UK specifically, just 10% of workers are engaged in their jobs.

Meanwhile, the latest Microsoft Work Trend Index shows 53% of the global workforce lacks sufficient time or energy for their work, with 48% of employees feeling their work is chaotic and fragmented.

If adults are floundering, what hope is there for the generation after us? If intelligence is free, where will our children find purpose?

Next week, on June 4, I’ll speak at Goldsmiths, University of London, as part of a Federation of Small Businesses event. The topic: how to nurture real human connection in a digital age. I will explore the anti-social century we’ve stumbled into, largely thanks to the “convenience” of technology alluded to in that first ChatGPT answer. The anti-social century, as coined by The Atlantic’s Derek Thompson earlier this year, is one marked by convenient communication and vanishing intimacy, AI girlfriends and boyfriends, Meta-manufactured friendships, and the illusion of connection without its cost.

In a recent LinkedIn post, Tom Goodwin, a business transformation consultant, provocateur and author (whom I spoke with about a leadership crisis three years ago), captured the dystopia best. “Don’t worry if you’re lonely,” he winked. “Meta will make you some artificial friends.” His disgust is justified. “Friendship, closeness, intimacy, vulnerability – these are too precious to be engineered by someone who profits from your attention,” he wrote.

In contrast, OpenAI CEO Sam Altman remains serenely optimistic. “I think it’s great,” he said in a Financial Times article earlier in May (calling the latest version of ChatGPT “genius-level intelligence”). “I’m more capable. My son will be more capable than any of us can imagine.”

But will he be more human?

Following last month’s newsletter, I had a call with Laurens Wailing, Chief Evangelist at 8vance, who had reacted to my post and is a longtime believer in technology’s potential to elevate, not just optimise. His company is using algorithmic matching to place unemployed Dutch citizens into new roles, drawing on millions of skill profiles. “It’s about surfacing hidden talent,” he told me. “Better alignment. Better outcomes.”

His team has built systems capable of mapping millions of CVs and job profiles to reveal “fit” – not just technically, but temperamentally. “We can see alignment that people often can’t see in themselves,” he told me. “It’s not about replacing humans. It’s about helping them find where they matter.”
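
To make the idea of mapping profiles to reveal fit concrete, here is a deliberately simple Python sketch that scores candidates against a job by skills overlap. The scoring method, skills and names are illustrative assumptions of mine and bear no relation to 8vance’s actual models.

```python
# Toy skills-overlap matcher, illustrating the idea of scoring "fit" between
# candidate profiles and job profiles. Real systems use far richer signals;
# the skills, names and scoring here are invented for the sketch.
def fit_score(candidate_skills: set[str], job_skills: set[str]) -> float:
    """Jaccard overlap between a candidate's skills and a job's requirements."""
    if not candidate_skills or not job_skills:
        return 0.0
    return len(candidate_skills & job_skills) / len(candidate_skills | job_skills)


candidates = {
    "Aisha": {"python", "data analysis", "stakeholder management"},
    "Ben": {"welding", "fabrication", "quality control"},
}
job_skills = {"python", "data analysis", "sql"}

ranked = sorted(candidates.items(), key=lambda item: fit_score(item[1], job_skills), reverse=True)
for name, skills in ranked:
    print(name, round(fit_score(skills, job_skills), 2))
```

A production system would weight skills, factor in temperament and learn from placement outcomes; the sketch simply shows “fit” expressed as a score rather than a gut feeling.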

That word stuck with me: matter.

Laurens is under no illusion about the obstacles. Cultural inertia is real. “Everyone talks about talent shortages,” he said, “but few are changing how they recruit. Everyone talks about burnout, but very few rethink what makes a job worth doing.” The urgency is missing, not just in policy or management, but in the very frameworks we use to define work.

And it’s this last point – the need for meaning – that feels most acute.

Too often, employment is reduced to function: tasks, KPIs, compensation. But what if we treated work not merely as an obligation, but as a conduit for identity, contribution, and community? 

Laurens mentioned the Japanese concept of Ikigai, the intersection of what you love, what you’re good at, what the world needs, and what you can be paid for. Summarised in one word, it is “purpose”. It’s a model of fulfilment that stands in stark contrast to how most jobs are currently structured. (And one I want to explore in more depth in a future Go Flux Yourself.)

If the systems we build strip purpose from work, they will also strip it from the workers. And when intelligence becomes ambient, purpose might be the only thing left worth fighting for.

Perhaps the trigger for that urgency should be the children. Perhaps the most urgent question we can ask – as parents, teachers, citizens – is not “how will AI help us work?” but “how will AI shape what it means to grow up?”

If we get this wrong, if we let intelligence become a sedative instead of a stimulant, we will create a society that is smarter than ever, and more vacant than we can bear.

Also, is the curriculum fit for purpose in a world where intelligence is on tap? In many UK schools, children are still trained to regurgitate facts, parse grammar, and sit silent in tests. The system, despite all the rhetoric about “future skills”, remains deeply Victorian in its structure. It prizes conformity. It rewards repetition. It penalises divergence. Yet divergence is what we need, especially now. 

I’ve advocated for the “Five Cs” – curiosity, creativity, critical thinking, communication, and collaboration – as the most essential human traits in a post-automation world. But these are still treated as extracurricular. Soft skills. Add-ons. When in fact they are the only things that matter when the hard skills are being commodified by machines.

The classrooms are still full of worksheets. The teacher is still the gatekeeper. The system is not agile. And our children are not waiting. They are already forming identities on TikTok, solving problems in Minecraft, using ChatGPT to finish their homework, and learning – just not the lessons we are teaching.

That brings us back to the unnerving replies of Claude and ChatGPT, to the subtle seductions of passive engagement, and to the idea that children could be dismantled not through trauma but through ease. That the devil’s real trick is not fear but frictionlessness.

And so I return to my own children. I wonder whether they will know how to be bored. Because boredom – once a curse – might be the last refuge of autonomy in a world that never stops entertaining.

The present

If the future belongs to machines, the present is defined by drift – strategic, cultural, and moral drift. We are not driving the car anymore. We are letting the algorithm navigate, even as it veers toward a precipice.

We see it everywhere: in the boardroom, where executives chase productivity gains without considering engagement. In classrooms, where teachers – underpaid and under-resourced – struggle to maintain relevance. And in our homes, where children, increasingly unsupervised online, are shaped more by swipe mechanics than family values.

The numbers don’t lie: just 21% of employees are engaged globally, according to Gallup. And the root cause is not laziness or ignorance, the researchers reckon. It is poor management: a systemic failure to connect effort with meaning, task with purpose, worker with dignity.

Image created on Midjourney

The same malaise is now evident in parenting and education. I recently attended an internet safety workshop at my child’s school. Ten parents showed up. I was the only father.

It was a sobering experience. Not just because the turnout was low. But because the women who did attend – concerned, informed, exhausted – were trying to plug the gaps that institutions and technologies have widened. Mainly it is mothers who are asking the hard questions about TikTok, Snapchat, and child exploitation.

And the answers are grim. The workshop drew on Ofcom’s April 2024 report, which paints a stark picture of digital childhood. TikTok use among five- to seven-year-olds has risen to 30%. YouTube remains ubiquitous across all ages. Shockingly, over half of children aged three to twelve now have at least one social media account, despite all platforms having a 13+ age minimum. By 16, four out of five are actively using TikTok, Snapchat, Instagram, and WhatsApp.

We are not talking about teens misbehaving. We are talking about digital immersion beginning before most children can spell their own names. And we are not ready.

The workshop revealed that 53% of young people aged 8–25 have used an AI chatbot. That might sound like curiosity. But 54% of the same cohort also worry about AI taking their jobs. Anxiety is already built into their relationship with technology – not because they fear the future, but because they feel unprepared for it. And it’s not just chatbots.

Gaming was a key concern. The phenomenon of “skin gambling” – where children use virtual character skins with monetary value to bet on unregulated third-party sites – is now widely regarded as a gateway to online gambling. But only 5% of game consoles have parental controls installed. We have given children casinos without croupiers, and then wondered why they struggle with impulse control.

This is not just a parenting failure. It’s a systemic abdication. Broadband providers offer content filters. Search engines have child-friendly modes. Devices come with monitoring tools. But these safeguards mean little if the adults are not engaged. Parental controls are not just technical features. They are moral responsibilities.

The workshop also touched on social media and mental health, referencing the Royal Society for Public Health’s “Status of Mind” report. YouTube, it found, had the most positive impact, enabling self-expression and access to information. Instagram, by contrast, ranked worst, as it is linked to body image issues, FOMO, sleep disruption, anxiety, and depression.

The workshop ended with a call for digital resilience: recognising manipulation, resisting coercion, and navigating complexity. But resilience doesn’t develop in a vacuum. It needs scaffolding, conversation, and adults who are present physically, intellectually and emotionally.

This is where spiritual and moral leadership must re-enter the conversation. Within days of ascending to the papacy in mid-May, Pope Leo XIV began speaking about AI with startling clarity.

He chose his papal name to echo Leo XIII, who led the Catholic Church during the first Industrial Revolution. That pope challenged the commodification of workers. This one is challenging the commodification of attention, identity, and childhood.

“In our own day,” Leo XIV said in his address to the cardinals, “the Church offers everyone the treasury of its social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defence of human dignity, justice, and labour.”

These are not empty words. They are a demand for ethical clarity. A reminder that technological systems are never neutral. They are always value-laden.

And at the moment, our values are not looking good.

The present is not just a moment. It is a crucible, a pressure point, and a test of whether we are willing to step back into the role of stewards, not just of technology but of each other.

Because the cost of inaction is not a dystopia in the future, it is dysfunction now.

The past

Half-term took us to Quarry Bank, also known as Styal Mill, a red-brick behemoth nestled into the Cheshire countryside, humming with the echoes of an earlier industrial ambition. Somewhere between the iron gears and the stunning garden, history pressed itself against the present.

Built in 1784 by Samuel Greg, Quarry Bank was one of the most advanced cotton mills of its day – both technologically and socially. It offered something approximating healthcare, basic education for child workers, and structured accommodation. By the standards of the time, it was considered progressive.

Image created on Midjourney

However, 72-hour work weeks were still the norm until legislation intervened in 1847. Children laboured long days on factory floors. Leisure was a concept, not a right.

What intrigued me most, though, was the role of Greg’s wife, Hannah Lightbody. It was she who insisted on humane reforms and built the framework for medical care and instruction. She took a paternalistic – or perhaps more accurately, maternalistic – interest in worker wellbeing. 

And the parallels with today are too striking to ignore. Just as it was the woman of the house in 19th-century Cheshire who agitated for better conditions for children, it is now mothers who dominate the frontline of digital safety. It was women who filled that school hall during the online safety talk. It is often women – tech-savvy mothers, underpaid teachers, exhausted child psychologists – who raise the alarm about screen time, algorithmic manipulation, and emotional resilience.

The maternal instinct, some would argue. That intuitive urge to protect. To anticipate harm before it’s visible. But maybe it’s not just instinct. Maybe it’s awareness. Emotional bandwidth. A deeper cultural training in empathy, vigilance, care.

And so we are left with a gendered question: why is it, still, in 2025, that women carry the cognitive and emotional labour of safeguarding the next generation?

Where are the fathers? Where are the CEOs? Where are the policymakers?

Why do we still assume that maternal concern is a niche voice, rather than a necessary counterweight to systemic neglect?

History has its rhythms. At Quarry Bank, the wheels of industry turned because children turned them. Today, the wheels of industry turn because children are trained to become workers before they are taught to be humans.

Only the machinery has changed.

Back then, it was looms and mills. Today, it is metrics and algorithms. But the question remains the same: are we extracting potential from the young, or investing in it?

The lambs in the neighbouring field didn’t know any of this, of course. They leapt. They bleated. They reminded my children – and me – of a world untouched by acceleration.

We cannot slow time. But we can choose where we place our attention.

And attention, now more than ever, is the most precious gift we can give. Not to machines, but to the minds that will inherit them.

Statistics of the Month

📈 AI accelerates – but skills lag
In just 18 months, AI jumped from the sixth to the first most in-demand tech skill in the world – the steepest rise in over 15 years. Yet other reports show that workers largely lack these skills, leaving a sizeable gap between demand and capability. 🔗

📉 Workplace engagement crashes
Global employee engagement has dropped to just 21% – matching levels seen during the pandemic lockdowns. Gallup blames poor management, with young and female managers seeing the sharpest declines. The result? A staggering $9.6 trillion in lost productivity. 🔗

🧒 Social media starts at age three
More than 50% of UK children aged 3–12 now have at least one social media account – despite age limits set at 13+. By age 16, 80% are active across TikTok, Snapchat, Instagram, and WhatsApp. Childhood, it seems, is now permanently online. 🔗

🤖 AI anxiety sets in early
According to Nominet’s annual study of 8-25 year olds in the UK, 53% have used an AI chatbot, and 54% worry about AI’s impact on future jobs. The next generation is both enchanted by and uneasy about their digital destiny. 🔗

🚨 Cybercrime rebounds hard
After a five-year decline, major cyber attacks are rising in the UK – up to 24% from 16% two years ago. Insider threats and foreign powers are now the fastest-growing risks, overtaking organised crime. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 16)


TL;DR: April’s Go Flux Yourself explores the rise of AI attachment and how avatars, agents and algorithms are slipping into our emotional and creative lives. As machines get more personal, the real question isn’t what AI can do. It’s what we risk forgetting about being human …

Image created on Ninja AI

The future

“What does relationship communication – and attachment in particular – look like in a future where our most meaningful conversations may be with digital humans?”

The robots aren’t coming. They’re already in the room, nodding along, offering feedback, simulating empathy. They don’t sleep. They don’t sigh. And increasingly, they feel … helpful.

In 2025, AI is moving beyond spreadsheets and slide decks and entering our inner lives. According to new Harvard Business Review analysis by Marc Zao-Sanders, author and co-founder of Filtered.com, the fastest-growing use for generative AI isn’t work but therapy and companionship. In other words, people are building relationships with machines. (I’ve previously written about AI companions – including in June last year.)

Some call this disturbing. Others call it progress. At DTX Manchester earlier this month, where I moderated a panel on AI action plans on the main stage (and wrote a summary of my seven takeaways from the event), the conversation was somewhere in between. One question lingered among the panels and product demos: how will we relate to one another when technology becomes our emotional rehearsal partner?

This puzzler is no longer only theoretical. RealTalkStudio, founded by Toby Sinclair, provides AI avatars that help users prepare for hard conversations: delivering bad news, facing conflict, and even giving feedback without sounding passive-aggressive. These avatars pick up on tone, hesitation, and eye movement. They pause in the right places, nod, and even move their arms around.

I met Toby at DTX Manchester, and we followed up with a video call a week or so later, after I’d road-tested RealTalkStudio. The prompts on the demo – a management scenario – were handy and enlightening, especially for someone like me, who has never really managed anyone (do children count?). They allowed me to speak with my “direct report” adroitly, to achieve a favourable outcome for both parties. 

Toby had been at JP Morgan for almost 11 years until he left to establish RealTalkStudio in September, and his last role was Executive Director of Employee Experience. Why did he give it all up?

“The idea came from a mix of personal struggle and tech opportunity,” he told me over Zoom. “I’ve always found difficult conversations hard – I’m a bit of a people pleaser, so when I had to give feedback or bad news, I’d sugarcoat it, use too many pillow words. My manager [at JP Morgan] was the opposite: direct, no fluff. That contrast made me realise there isn’t one right way – but practice is needed. And a lot of people struggle with this, not just me.”

The launch of ChatGPT, in November 2022, prompted him to explore possible solutions using technology. “Something clicked. It was conversational, not transactional – and I immediately thought, this could be a space to practise hard conversations. At first, I used it for myself: trying to become a better manager at JP Morgan, thinking through career changes, testing it as a kind of coach or advisor. That led to early experiments in building an AI coaching product, but it flopped. The text interface was too clunky, the experience too dull. Then, late last year, I saw how far avatar tech had come.” 

Suddenly, Toby’s idea felt viable. Natural, even. “I knew the business might not be sustainable forever, but for now, the timing and the tech felt aligned. I could imagine it being used for manager training, dating, debt collection, airlines … so many use cases.”

Indeed, avatars are not just used in work settings. A growing number of people – particularly younger generations – are turning to AI to rehearse dating, for instance. Toby has been approached by an Eastern European matchmaking service. “They came to me because they’d noticed a recurring issue, especially with younger men: poor communication on dates, and a lack of confidence. They were looking for ways to help their clients – mainly men – have better conversations. And while practice helps, finding a good practice partner is tricky. Most of these men don’t have many female friends, and it’s awkward to ask someone: ‘Can we practise going on a date?’ That’s where RealTalk comes in. We offer a realistic, judgment-free way to rehearse those conversations. It’s all about building confidence and clarity.”

These avatars flirt back. They guide you through rejection. They help you practise confidence without fear of humiliation. It’s Black Mirror, yes. But also oddly touching. On one level, this is useful. Social anxiety is rising. Young people in particular are navigating a digital-first emotional landscape. An AI avatar offers low-risk rehearsal. It doesn’t laugh. It doesn’t ghost.

On another level, it’s deeply troubling. The ability to control the simulation – to tailor responses, remove ambiguity, and mute discomfort – trains us to expect real humans to behave predictably, like code. We risk flattening our tolerance for emotional nuance. If your avatar never rolls its eyes or forgets your birthday, why tolerate a flawed, chaotic, human partner?

When life feels high-stakes and unpredictable, a predictable conversation with a patient, programmable partner can feel like relief. But what happens when we expect humans to behave like avatars? When spontaneity becomes a bug, not a feature?

That’s the tension. These tools are good, and only improving. Too good? The quotation I started this month’s Go Flux Yourself with comes from Toby, who has a two-year-old boy, Dylan. As our allotted 30 minutes neared its end, the hugely enjoyable conversation turned philosophical, and he posed this question: “What does relationship communication – and attachment in particular – look like in a future where our most meaningful conversations may be with digital humans?”

It’s clear that AI avatars are no longer just slick customer service bots. They’re surprisingly lifelike. Character-3, the latest from Hedra, mimics micro-expressions with startling accuracy. Eyebrows arch. Shoulders slump. A smirk feels earned.

This matters because humans are built to read nuance. We feel it when something’s off. But as avatars close the emotional gap, that sense of artifice starts to slip. We begin to forget that what we engage with isn’t sentient – it’s coded.

As Justine Moore from Andreessen Horowitz stressed in an article outlining the roadmap for avatars (thanks for the tip, Toby), these aren’t talking heads anymore. They’re talking characters, designed to be persuasive. Designed to feel real enough.

So yes, they’re useful for training, coaching, even storytelling. But they’re also inching closer to companionship. And once a machine starts mimicking care, the ethics get blurry.

Nowhere is the ambivalence more acute than in the creative industries. The spectre of AI-generated music, art, and writing has stirred panic among artists. And yet – as I argued at Zest’s Greenwich event last week – the most interesting possibilities lie in creative amplification, not replacement.

For instance, the late Leon Ware’s voice, pulled from a decades-old demo, now duets with Marcos Valle on Feels So Good, a track left unfinished since 1979. The result, when I heard it at the Jazz Cafe last August, when I was lucky enough to catch octogenarian Valle, was genuinely moving. Not because it’s novel, but because it’s human history reassembled. Ware isn’t being replaced. He’s being recontextualised.

We’ve seen similar examples in recent months: a new Beatles song featuring a de-noised John Lennon; a Beethoven symphony completed with machine assistance. Each case prompts the same question: is this artistry, or algorithmic taxidermy?

From a technical perspective, these tools are astonishing. From a legal standpoint, deeply fraught. But from a cultural angle, the reaction is more visceral: people care about authenticity. A recent UK Music study found that 83% of UK adults believe AI-generated songs should be clearly labelled. Two-thirds worry about AI replacing human creativity altogether.

And yet, when used transparently, AI can be a powerful co-creator. I’ve used it to organise ideas, generate structure, and overcome writer’s block. It’s a tool, like a camera, or a DAW, or a pencil. But it doesn’t originate. It doesn’t feel.

As Dean, a community member of Love Will Save The Day FM (for whom my DJ alias Boat Floaters has a monthly show called Love Rescue), told me: “Real art is made in the accidents. That’s the magic. AI, to me, reduces the possibility of accidents and chance in creation, so it eliminates the magic.”

That distinction matters. Creativity is not just output. It’s a process. It’s the struggle, the surprise, the sweat. AI can help, but it can’t replace that.

Other contributions from LWSTD members captured the ambivalence of AI and creativity – in music, in this case, though these viewpoints can be broadened out to the other arts. James said: “Anything rendered by AI is built on the work of others. Framing this as ‘democratised art’ is disingenuous.” He noted how Hayao Miyazaki of Studio Ghibli expressed deep disgust when social media feeds became drowned in AI parodies of his art, criticising such work as an “insult to life itself”.

Sam picked up this theme. “The Ghibli stuff is a worrying direction of where things can easily head with music – there’s already terrible versions of things in rough styles but it won’t be long before the internet is flooded with people making their own Prince songs (or whatever) but, as with Ghibli, without anything beyond a superficial approximation of art.”

And Jed pointed out that “it’s all uncanny – it’s close, but it’s not right. It lacks humanity.”

Finally, Larkebird made an amusing distinction. “There are differences between art and creativity. Art is a higher state of creativity. I can add coffee to my tea and claim I’m being creative, but that’s not art.”

Perhaps, though, if we want to glimpse where this is really headed, we need to look beyond the avatars and look to the agents, which are currently dominating the space.

Ray Smith, Microsoft’s VP of Autonomous Agents, shared a fascinating vision during our meeting in London in early April. His team’s strategy hinges on three tiers: copilots (assistants), agents (apps that take action), and autonomous agents (systems that can reason and decide).

Imagine an AI that doesn’t just help you file expenses but detects fraud, reroutes tasks, escalates anomalies, all without being prompted. That’s already happening. Pets at Home uses a revenue protection agent to scan and flag suspicious returns. The human manager only steps in at the exception stage.
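
To make the “exception stage” concrete, here is a toy sketch of that pattern: the agent clears the routine cases itself and escalates only the anomalies to a person. The fields, scoring rule, and threshold below are invented for illustration – this is not how Microsoft or Pets at Home actually built it.

```python
# Toy sketch of "the human only steps in at the exception stage" - not
# Microsoft's or Pets at Home's implementation. The fields, scoring rule and
# threshold are invented purely for illustration.
from dataclasses import dataclass

@dataclass
class ReturnRequest:
    order_id: str
    value: float                # refund value in pounds
    returns_last_90_days: int   # how often this customer has returned items recently

def risk_score(r: ReturnRequest) -> float:
    """Crude anomaly score: high-value refunds plus frequent returns look riskier."""
    return min(1.0, 0.6 * (r.value / 500) + 0.4 * (r.returns_last_90_days / 10))

def handle(r: ReturnRequest, threshold: float = 0.7) -> str:
    score = risk_score(r)
    if score < threshold:
        return f"{r.order_id}: refunded automatically (risk {score:.2f})"
    return f"{r.order_id}: ESCALATED to a human manager (risk {score:.2f})"

for req in [ReturnRequest("A1", 25.0, 1), ReturnRequest("B2", 480.0, 9)]:
    print(handle(req))
```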

And yet, during Smith’s demo … the tech faltered. GPU throttling. Processing delays. The AI refused to play ball.

It was a perfect irony: a conversation about seamless automation interrupted by the messiness of real systems. Proof, perhaps, that we’re still human at the centre.

But the direction of travel is clear. These agents are not just tools. They are colleagues. Digital labour, tireless and ever-present.

Smith envisions a world where every business process has a dedicated agent. Where creative workflows, customer support, and executive decision-making are all augmented by intelligent, autonomous helpers.

However, even he admits that we need a cultural reorientation. Most employees still treat AI as a search box. They don’t yet trust it to act. That shift – from command-based to companion-based thinking – is coming, slowly, then suddenly (to paraphrase Ernest Hemingway).

A key point often missed in the AI hype is this: AI is inherently retrospective. Its models are trained on what has come before. It samples. It predicts. It interpolates. But it cannot truly invent in the sense humans do, from nothing, from dreams, from pain.
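
A toy example makes the point. The snippet below “trains” a tiny bigram model on a single sentence and then generates from it: every word it produces is a recombination of pairs it has already seen. Real large language models are vastly more capable, but the retrospective principle – sample, predict, interpolate – is the same.

```python
# A deliberately tiny "language model": a bigram table built from one sentence.
# It can only recombine word pairs it has already seen - it samples, predicts
# and interpolates, but never produces a word that wasn't in its training data.
import random
from collections import defaultdict

corpus = "the mill turned the wheel and the children turned the wheel".split()

next_words = defaultdict(list)          # "training": record what follows what
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def generate(start: str, length: int = 8, seed: int = 1) -> str:
    random.seed(seed)
    out = [start]
    for _ in range(length):
        choices = next_words.get(out[-1])
        if not choices:                 # nothing ever followed this word
            break
        out.append(random.choice(choices))  # sample from observed continuations
    return " ".join(out)

print(generate("the"))  # every pair in the output already existed in the corpus
```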

This is why, despite all the alarmism, creativity remains deeply, stubbornly human. And thank goodness for that.

But there is a danger here. Not of AI replacing us, but of us replacing ourselves – outsourcing our process, flattening our instincts, degrading our skills, compromising originality in favour of efficiency.

AI might never write a truly original poem. But if we rely on it to finish our stanzas, we might stop trying.

Historian Yuval Noah Harari has warned against treating AI as “just another tool”. He suggests we reframe it as alien intelligence. Not because it’s malevolent, but because it’s not us. It doesn’t share our ethics. It doesn’t care about suffering. It doesn’t learn from heartbreak.

This matters, because as we build emotional bonds with AI – however simulated – we risk assuming moral equivalence. That an AI which can seem empathetic is empathetic.

This is where the work of designers and ethicists becomes critical. Should emotional AI be clearly labelled? Should simulated relationships come with disclaimers? If not, we risk emotional manipulation at industrial scale, especially among the young, lonely, or digitally naive. (This recent New York Times piece, about a married, 28-year-old woman in love with ChatGPT, is well worth a read, showing how easy – and how frightening and costly – it can be to become attached to AI.)

We also risk creating a two-tier society: those who bond with humans, and those who bond with their devices.

Further, Harari warned in an essay, published in last Saturday’s Financial Times Weekend, that the rise of AI could accelerate political fragmentation in the absence of shared values and global cooperation. Instead of a liberal world order, we gain a mosaic of “digital fortresses”, each with its own truths, avatars, and echo chambers. 

Without robust ethics, the future of AI attachment could split into a thousand isolated solitudes, each curated by a private algorithmic butler. If we don’t set guardrails now, we may soon live in a world where connection is easy – and utterly empty.

The present

At DTX Manchester this month, the main-stage AI panel I moderated felt very different from those even last year. The vibe was less “what is this stuff?” and more “how do we control the stuff we’ve already unleashed?”

Gone are the proof-of-concept experiments. Organisations are deploying AI at scale. Suzanne Ellison at Lloyds Bank described a knowledge base now used by 21,000 colleagues, cutting the time spent retrieving information by half and boosting customer satisfaction by a third. But more than that, it’s made work more human, freeing up time for nuanced, empathetic conversations.

Likewise, the thought leadership business I co-founded last year, Pickup_andWebb, uses AI avatars for client-facing video content, such as a training programme. No studios. No awkward reshoots. Just instant script updates. It’s slick, smart, and efficient. And yes, slightly unsettling.

Dominic Dugan of Oktra, a man who has spent decades designing workspaces, echoed that tension. He’s sceptical. Most post-pandemic office redesigns, he argues, are just “colouring in” – performative, superficial, Instagram-friendly but uninhabitable. We’ve designed around aesthetics, not people.

Dugan wants us to talk about performance. If an office doesn’t help people do better work, or connect more meaningfully, what’s the point? Even the most elegantly designed workplace means little if it doesn’t accommodate the emotional messiness of human interaction – something AI, for all its growth, still doesn’t understand.

And yet the fragility of our human systems – tech included – was brought into sharp relief in recent days (and is ongoing, at the time of writing), when an “induced atmospheric vibration” reportedly caused widespread blackouts in Spain and Portugal, knocking out connectivity across major cities for hours, and in some cases days. No internet. No payment terminals. No AI anything. Life slowed to a crawl. Trains stopped. Offices went dark. Coffee shops switched to cash, or closed altogether. It was a rare glimpse into the abyss of analogue dependency, a reminder that our digital lives are fragile scaffolds built on uncertain foundations.

The outage was temporary. But the lesson lingers: the more reliant we become on these intelligent systems, the greater our vulnerability when they fail. And fail they will. That’s the nature of systems. But it’s also the strength of humans: our capacity to improvise, to adapt, to find ways around failure. The more we automate, the more we must remember this: resilience cannot be outsourced.

And that brings me to my own moment of reinvention.

This month I began the long-overdue overhaul of my website, oliverpickup.com. The current version – featuring a home-page photograph of me swimming in the Regent’s Park Serpentine during a shoot for an interview with Commonwealth champion triathlete Jodie Stimpson, goggles on upside down – has served me well, but it’s over a decade old. Also, people think I’m into wild swimming. I’m not, and I detest cold water.

(The 2015 article in FT Weekend has one of my favourite opening lines: “Jodie Stimpson is discussing tactical urination. The West Midlands-based triathlete, winner of two Commonwealth Games golds last summer, is specifically talking about relieving herself in her wetsuit to flood warmth to the legs when open-water swimming.”) 

But it’s more than a visual rebrand. I’m repositioning, due to FOBO (fear of becoming obsolete). The traditional freelance model is eroding, its margins squeezed by algorithmic content and automated writing. While it might not have the personality, depth, and nuance of human writing, AI doesn’t sleep, doesn’t bill by the hour, and now writes decently enough to compete. I know I can’t outpace it on volume. So I have to evolve. Speaking. Moderating. Podcasting. Hosting. These are uniquely human domains (for now).

The irony isn’t lost on me: I now use AI to sharpen scripts, test tone, even rehearse talks. But I also know the line. I know what cannot be outsourced. If my words don’t carry me in them, they’re not worth publishing.

Many of us are betting that presence still matters. That real connection – in a room, on a stage, in a hard conversation – will hold value, even as screens whisper more sweetly than ever.

As such, I’m delighted to have been accepted by Pomona Partners, a speaker agency led by “applied” futurist Tom Cheesewright, whom I caught up with over lunch when at DTX Manchester. I’m looking forward to taking the next steps in my professional speaking career with Tom and the team.

The past

Recently, prompted by a friend’s health scare and my natural curiosity, I spat into a tube and sent off the DNA sample to ancestry.com. I want to understand where I come from, what traits I carry, and what history pulses through me.

In a world where AI can mimic me – my voice, writing style, and image – there’s something grounding about knowing the real me. The biological, lived, flawed, irreplaceable me.

It struck me as deeply ironic. We’re generating synthetic selves at an extraordinary rate. Yet we’re still compelled to discover our origins: to know not just where we’re going, but where we began.

This desire for self-knowledge is fundamental. It sits at the heart of my CHUI framework: Community, Health, Understanding, Interconnectedness. Without understanding, we’re at the mercy of the algorithm. Without roots, we become avatars.

Smith’s demo glitch – an AI agent refusing to cooperate – was a reminder that no matter how advanced the tools, we are still in the loop. And we should remain there.

When I receive my ancestry results, I won’t be looking for royalty. I’ll be looking for roots. Not to anchor me in the past, but to help me walk straighter into the future. I’ll also share those findings in this newsletter. Meanwhile, I’m off to put tea in my coffee.

Statistics of the month

📈 AI is boosting business. Some 89% of global leaders say speeding up AI adoption is a top priority this year, according to new LinkedIn data. And 51% of firms have already seen at least a 10% rise in revenue after implementation.

🏙️ Cities aren’t ready. Urban economies generate most of the world’s GDP, but 44% of that output is at risk from nature loss, recent World Economic Forum data shows. Meanwhile, only 37% of major cities have any biodiversity strategy in place. 🔗

🧠 The ambition gap is growing. Microsoft research finds that 82% of business leaders around the globe say 2025 is a pivotal year for change (85% think so in the UK). But 80% of employees feel too drained to meet those expectations. 🔗

📉 Engagement is slipping. Global employee engagement is down to 21%, according to Gallup’s latest State of the Global Workplace annual report (more on this next month). Managers have been hit hardest – dropping from 30% to 27% – and have been blamed for the general fall. The result? $438 billion in lost productivity. 🔗

💸 OpenAI wants to hit $125 billion. That’s their projected revenue by 2029 – driven by autonomous agents, API tools and custom GPTs. Not bad for a company that started as a non-profit. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Why content is king – especially now

If business leaders have learnt one thing this year, it is that good, authentic communication is critical – for employees and customers alike. Trust is essential to attract and retain staff, consumers, and other stakeholders, including investors. And to drive good communication and build trust, you need excellent content.

In much the same way that many companies have had to pivot – or at least adapt their business strategies – in 2020, content needs to evolve to keep pace and to celebrate those changes. It’s all about storytelling and honesty: don’t try to be something or someone you are not, because inauthenticity is easy to see through and, once discovered, erodes that all-important trust. Equally, if there have been bumps in the road, people will be interested to learn how you overcame them.

Thankfully, these days there is a range of content you can use to tell your story, set out your goals, and articulate your purpose and key messages – and none of it is expensive, if you know where to look.

great content drives communication and engenders trust

Often a variety, or suite, of content works best. This might include ghostwritten thought-leadership blogs (which can be filleted for social media platforms, including LinkedIn), videos (the rawer, the better), infographics, roadmaps, data-driven articles, listicles, newsletters, podcasts, and much more. Content can be proactive and reactive – the main point is to keep the content tap open, so that people want to keep coming back.

Good content can help you find your voice and your business’ voice, and trigger change by uniquely expressing ideas and showcasing your goods or services in a way that interests, informs, influences and inspires readers (or viewers).

However, it is often tricky for business leaders to whip up winning content in-house. Time is one constraint; another consideration is that the process of telling the business’s story to an outsider with expert content-production skills usually helps to articulate the important messages and unearth the nuggets that will appeal to a wider audience. This is true for companies of all sizes – indeed, the biggest businesses certainly understand the value of outsourcing content to freelancers.

Many times I have been parachuted into a business with a clear goal: to produce better content, but no real idea of how to reach that point. And that’s fine – and understandable. It takes time to research and interview key stakeholders and to perform a kind of content audit: to understand what has already been produced (and what can be reused or updated) and to tease out the interesting use cases and stories.

Indeed, content production is often elevated by freelancers, whose task is to understand the business better, ask the questions you might not have thought of, and suggest fresh ways of presenting the content.

Ultimately, great content drives communication and engenders trust, and together those two factors are paramount to business success in 2020 and beyond.

This article was originally published on YunoJuno in December 2020