Go Flux Yourself: Navigating the Future of Work (No. 22)


TL;DR: October’s Go Flux Yourself explores the epidemic of disconnection in our AI age. As 35% of Britons use smart doorbells to avoid human contact on Hallowe’en, and children face 2,000 social media posts daily, we’re systematically destroying the one skill that matters most: genuine human connection.

Image created on Midjourney

The future

“The most important single ingredient in the formula of success is knowing how to get along with people.”

Have we lost the knowledge of how to get along with people? And to what extent is an increasing dependence on large language models degrading this skill for adults, and not allowing it to bloom for younger folk?

When Theodore Roosevelt, the 26th president of the United States, spoke the above words in the early 20th century, he couldn’t have imagined a world where “getting along with people” would require navigating screens, algorithms, and artificial intelligence. Yet here we are, more than a century after his death in 1919, rediscovering that wisdom in the most unsettling way possible.

Indeed, this Hallowe’en, 35% of UK homeowners plan to use smart doorbells to screen trick-or-treaters, according to estate agents eXp UK. Two-thirds will ignore the knocking. We’re literally using technology to avoid human contact on the one night of the year when strangers are supposed to knock on our doors.

It’s the perfect metaphor for where we’ve ended up. The scariest thing isn’t what’s at your door. It’s what’s already inside your house.

Princess Catherine put it perfectly earlier in October in her essay, The Power of Human Connection in a Distracted World, for the Centre for Early Childhood. “While digital devices promise to keep us connected, they frequently do the opposite,” she wrote, in collaboration with Robert Waldinger, part-time professor of psychiatry at Harvard Medical School. “We’re physically present but mentally absent, unable to fully engage with the people right in front of us.”

I was a contemporary of Kate’s at the University of St Andrews in the wilds of East Fife, Scotland. We both graduated in 2005, a year before Twitter launched and a year after “TheFacebook” appeared. We lived in a world where difficult conversations happened face-to-face, where boredom forced creativity, and where friendship required actual presence. That world is vanishing with terrifying speed.

The Princess of Wales warns that an overload of smartphones and computer screens is creating an “epidemic of disconnection” that disrupts family life. Notably, her three kids are not allowed smartphones (and I’m pleased to report my eldest, aged 11, has a simple call-and-text mobile). “When we check our phones during conversations, scroll through social media during family dinners, or respond to emails while playing with our children, we’re not just being distracted, we are withdrawing the basic form of love that human connection requires.”

She’s describing something I explored in January’s newsletter about the “anti-social century”, a term coined by Derek Thompson of The Atlantic: a period marked by convenient communication and vanishing intimacy. We’re raising what Catherine calls “a generation that may be more ‘connected’ than any in history while simultaneously being more isolated, more lonely, and less equipped to form the warm, meaningful relationships that research tells us are the foundation of a healthy life”.

The data is genuinely frightening. Recent research from online safety app Sway.ly found that children in the UK and the US are exposed to around 2,000 social media posts per day. Some 77% say it harms their physical or emotional health. And, scariest of all, 72% of UK children have seen content in the past month that made them feel uncomfortable, upset, sad or angry.

Adults fare little better. A recent study on college students found that AI chatbot use is hollowing out human interaction. Students who used to help each other via class Discord channels now ask ChatGPT. Eleven out of 17 students in the study reported feeling more isolated after AI adoption.

One student put it plainly: “There’s a lot you have to take into account: you have to read their tone, do they look like they’re in a rush … versus with ChatGPT, you don’t have to be polite.”

Who needs niceties in the AI age?! We’re creating technology to connect us, to help us, to make us more productive. And it’s making us lonelier, more isolated, less capable of basic human interactions.

Marvin Minsky, who won the Turing Award back in 1969, said something that feels eerily relevant now: “Once the computers get control, we might never get it back. We would survive at their sufferance. If we’re lucky, they might decide to keep us as pets.”

He said that 56 years ago. We’re not there yet. But we’re building towards something, and whether that something serves humanity or diminishes it depends entirely on the choices we make now.

Anthony Cosgrove, who started his career at the Ministry of Defence as an intelligence analyst in 2003 and has earned an MBE, has seen this play out from the inside. Having led global teams at HSBC and now running data marketplace platform Harbr, he’s witnessed first-hand how organisations stumble into AI adoption without understanding the foundations.

“Most organisations don’t even know what data they already hold,” he told me over a video call a few weeks ago. “I’ve seen millions of pounds wasted on duplicate purchases across departments. That messy data reality means companies are nowhere near ready for this type of massive AI deployment.”

After spending years building intelligence functions and technology platforms at HSBC – first for wholesale banking fraud, then expanding to all financial crime across the bank’s entire customer base – he left to solve what he calls “the gap between having aggregated data and turning it into things that are actually meaningful”.

What jumped out from our conversation was his emphasis on product management. “For a really long time, there was a lack of product management around data. What I mean by that is an obsession about value, starting with the value proposition and working backwards, not the other way round.”

This echoes the findings I discussed in August’s newsletter about graduate jobs. As I wrote then, graduate jobs in the UK have dropped by almost two-thirds since 2022 – roughly double the decline for all entry-level roles. That’s the year ChatGPT launched. The connection isn’t coincidental.

Anthony’s perspective on this is particularly valuable. “AI can only automate fragments of a job, not replace whole roles – even if leaders desperately want it to.” He shared a conversation with a recent graduate who recognised that his data science degree would, ultimately, be useless. “The thing he was doing is probably going to be commoditised fairly quickly. So he pivoted into product management.”

This smart graduate’s instinct was spot-on. He’s now, in Anthony’s words, “actively using AI to prototype data products, applications, digital products, and AI itself. And because he’s a data scientist by background, he has a really good set of frameworks and set of skills”.

Yet the broader picture remains haunting. Microsoft’s 2025 Work Trend Index reveals that 71% of UK employees use unapproved consumer AI tools at work. Fifty-one per cent use these tools weekly, often for drafting reports and presentations, or even managing financial data, all without formal IT approval.

This “Shadow AI” phenomenon is simultaneously encouraging and terrifying. “It shows that people are agreeable to adopting these types of tools, assuming that they work and actually help and aren’t hard to use,” Anthony observed. “But the second piece that I think is really interesting impacts directly the shareholder value of an organisation.”

He painted a troubling picture: “If a big percentage of your employees are becoming more productive and finishing their existing work faster or in different ways, but they’re doing so essentially untracked and off-books, you now have your employees that are becoming essentially more productive, and some of that may register, but in many cases it probably won’t.”

Assuming that many employees are using AI for work without being open about it with their employers, how concerned about security and data privacy are they likely to be?

Earlier in the month, Cybernews discovered that two AI companion apps, Chattee Chat and GiMe Chat, exposed millions of intimate conversations from over 400,000 users. The exposed data contained over 43 million messages and over 600,000 images and videos.

At the time of writing, one of the apps, Chattee, was the 121st Entertainment app on the Apple App Store, downloaded over 300,000 times. This is a symptom of what people, including Microsoft’s AI chief Mustafa Suleyman (as per August’s Go Flux Yourself), are calling AI psychosis: the willingness to confide our deepest thoughts to algorithms while losing the ability to confide in actual humans.

As I explored in June 2024’s newsletter about AI companions, this trend has been accelerating. By March 2024, AI companion apps had already racked up 225 million lifetime downloads on the Google Play Store alone. The problem isn’t scale. It’s the hollowing out of human connection.

Then there’s the AI bubble itself, which everyone in the space has been talking about in the last few weeks. The Guardian recently warned that AI valuations are “now getting silly”. The Cape ratio – the share price divided by the average of the past decade’s inflation-adjusted earnings – has reached dotcom-bubble levels. The “Magnificent 7” tech companies now represent slightly more than a third of the whole S&P 500 index.
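For readers who like to see the maths, here’s a back-of-envelope sketch of the Cape calculation in Python. The formula itself (Robert Shiller’s measure) is standard; every number in the snippet is invented purely for illustration.

```python
# Cape (cyclically adjusted price-to-earnings) ratio, per Robert Shiller:
# current price divided by the 10-year average of inflation-adjusted earnings.
# All figures below are hypothetical, for illustration only.

def cape_ratio(price: float, real_earnings_10y: list[float]) -> float:
    """Price over the mean of a decade of inflation-adjusted earnings."""
    return price / (sum(real_earnings_10y) / len(real_earnings_10y))

# A made-up index level and ten years of made-up real earnings:
earnings = [120, 125, 118, 130, 140, 135, 150, 155, 160, 167]
print(round(cape_ratio(5_000, earnings), 1))  # 35.7 – elevated territory
```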

OpenAI’s recent deals exemplify the circular logic propping up valuations. The arrangement under which OpenAI will pay Nvidia for chips and Nvidia will invest $100bn in OpenAI has been criticised as exactly what it is: circular. The latest move sees OpenAI pledging to buy lots of AMD chips and take a stake in AMD over time.

And yet amid this chaos, there are plenty of people going back to human basics: rediscovering real, in-person connection through physical activity and genuine community.

Consider walking football in the UK. What began in Chesterfield in 2011 as a gentle way to coax older men back into exercise has become one of Britain’s fastest-growing sports. More than 100,000 people now play regularly across the UK, many managing chronic illnesses or disabilities. It’s a sport that has become “a masterclass in human communication”, one that no AI could replicate. Tony Jones, 70, captain of the over-70s, described it simply. “It’s the camaraderie, the dressing room banter.”

Research from Nottingham Trent University found that walking footballers’ emotional well-being exceeded the national average, and loneliness was less common. “The national average is about 5% for feeling ‘often lonely’,” said Professor Ian Varley. “In walking football, it was 1%.”

This matters because authentic human interaction – the kind that requires you to read body language, manage tone, and show up physically – can’t be automated. Princess Catherine emphasises this in her essay, citing Harvard Medical School’s research showing that “the people who were more connected to others stayed healthier and were happier throughout their lives. And it wasn’t simply about seeing more people each week. It was about having warmer, more meaningful connections. Quality trumped quantity in every measure that mattered.”

The digital world offers neither warmth nor meaning. It offers convenience. And as Catherine warns, convenience is precisely what’s killing us: “We live increasingly lonelier lives, which research shows is toxic to human health, and it’s our young people (aged 16 to 24) that report being the loneliest of all – the very generation that should be forming the relationships that will sustain them throughout life.”

Roosevelt understood this instinctively over a century ago: success isn’t about what you know or what you can do. It’s about how you relate to other people. That skill – the ability to truly connect, to read a room, to build trust, to navigate conflict, to offer genuine empathy – remains stubbornly, beautifully human.

And it’s precisely what we’re systematically destroying. If we don’t take action to arrest this dark and deepening trend of digitally supercharged disconnection, the dream of AI and other technologies being used for enlightenment and human flourishing will quickly prove to be a living nightmare.

The present

Image runner’s own

As the walking footballers demonstrate, the physical health benefits of group exercise are sometimes secondary to camaraderie – but winning and hitting goals are also fun and life-affirming. In October, I ran my first sub-90-minute half-marathon, crossing the line at Walton-on-Thames to complete the River Thames half in 1:29:55. A whole five seconds to spare! I would have been nowhere near that time without Mike.

Mike is a member of the Crisis of Dads, the running group I founded in November 2021. What started as a clutch of portly, middle-aged plodders meeting at 7am every Sunday in Ladywell Fields, in south-east London, has grown to 26 members. Men in their 40s and 50s exercising to limit the dad bod and creating space to chat through things on our minds.

The male suicide rate in the UK in 2024 was 17.1 per 100,000, compared to 5.6 per 100,000 for women, according to the charity Samaritans. Males aged 50-54 had the highest rate: 26.8 per 100,000. Connection matters. Friendship matters. Physical presence matters.

Mike paced me during the River Thames half-marathon. With two miles to go, we were on track to go under 90 minutes, but the pain was horrible. His encouragement became more vocal – and more profane – as I closed in on something I thought beyond my ability.

Sometimes you need someone who believes in your ability more than you do to swear lovingly at you to cross that line quicker.

Work in the last month has been equally high-octane and (excuse the not-so-humble brag) record-breaking – plus full of in-person connection. My fledgling thought leadership consultancy, Pickup_andWebb (combining brand strategy and journalistic expertise to deliver guaranteed ROI – or your money back), is taking flight.

And I’ve been busy moderating sessions at leading technology events across the country, around the hot topic of how to lead and prepare the workforce in the AI age.

Moderating at DTX London (image taken by organisers)

On the main stage at DTX London, I opened a session on AI readiness by asking the audience whose workforce was suitably prepared. One person, out of hundreds, stuck their hand up: Andrew Melville, who leads customer strategy for Mission Control AI in Europe. Sportingly, he took the microphone and explained the key to his success.

I caught him afterwards. His confidence wasn’t bravado. Mission Control recently completed a data reconciliation project for a major logistics company. The task involved 60,000 SKUs of inventory data. A consulting firm had quoted two to three months and a few million pounds. Mission Control’s AI configuration completed it in eight hours. A thousand times faster, and 80% cheaper.

“You’re talking orders of magnitude,” Andrew said. “We’re used to implementing an Oracle database, and things get 5 or 10% more efficient. Now you’re seeing a thousand times more efficiency in just a matter of days and hours.”

He drew a parallel to the Ford Motor Company’s assembly line. Before that innovation, it took 12 hours to build a car. After? Ninety minutes. Eight times faster. “Imagine being a competitor of Ford,” Andrew said, “and they suddenly roll out the assembly line. And your response to that is: we’re going to give our employees power tools so they can build a few more cars every day.”

That’s what most companies are doing with AI: giving workers ChatGPT subscriptions, hoping for magic, and missing the fundamental transformation required. As I said on stage at DTX London, it’s like handing workers the keys to a Formula 1 car without instructions, then wondering why there are so many almost immediate and expensive crashes.

“I think very quickly what you’re going to start seeing,” Andrew said, “is executives that can’t visualise what an AI transformation looks like are going to start getting replaced by executives that do.”

At Mission Control, he’s building synthetic worker architectures – AI agents that can converse with each other, collaborate across functions, and complete higher-order tasks. Not just analysing inventory data, but coordinating with procurement systems and finance teams simultaneously.

“It’s the equivalent of having three human experts in different fields,” Andrew explained, “and you put them together and you say, we need you to connect some dots and solve a problem across your three areas of expertise.”
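To make the idea less abstract, here’s a minimal sketch of that pattern. To be clear, every name and mechanism below is my own toy assumption – in a real system each agent would wrap an LLM and share infrastructure – but the shape is the same: one problem, several functional experts, connected dots.

```python
# Toy sketch of "synthetic workers": several functional AI agents
# examining one problem and pooling their perspectives. All names and
# logic are hypothetical illustrations, not Mission Control's architecture.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    expertise: str

    def handle(self, task: str) -> str:
        # A real agent would call an LLM scoped to its function
        # (inventory, procurement, finance...); here we just label the work.
        return f"{self.name} ({self.expertise}) assessed: {task}"

def collaborate(agents: list[Agent], problem: str) -> list[str]:
    """Route one problem across several experts and collect each view."""
    return [agent.handle(problem) for agent in agents]

team = [
    Agent("inv-bot", "inventory analysis"),
    Agent("proc-bot", "procurement"),
    Agent("fin-bot", "finance"),
]
for finding in collaborate(team, "reconcile 60,000 SKUs against purchase orders"):
    print(finding)
```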

The challenge is conceptual. How do you lead a firm where human workers and digital workers operate side by side, where the tasks best suited for machines are done by machines and the tasks best suited for humans are done by humans?

This creates tricky questions throughout organisations. Right now, most people are rewarded for being at their desks for 40 hours a week. But what happens when half that time involves clicking around in software tools, downloading data sets, reformatting, and loading back? What happens when AI can do all of that in minutes?

“We have to start abstracting the concept of work,” Andrew said, “and separating all of the tasks that go into creating a result from the result itself.”

Digging into that is for another edition of the newsletter, coming soon. 

Elsewhere, at the first Data Decoded in Manchester, I moderated a 30-minute discussion on leadership in the age of AI. We were just getting going when time was up, which feels very much like 2025. The appetite for genuine insight was palpable. People are desperate for answers beyond the hype. Leaders sense the scale of the shift. However, their calendars still favour show-and-tell over do-and-learn. That will change, but not without bruises.

Also in October, my essay on teenage hackers was finally published in the New Statesman. The main message: we’re criminalising the young people whose skills we desperately need, instead of offering them a path into cybersecurity and related industries, away from the darker criminal world.

Looking slightly ahead, on 11 November, I’ll be expanding on these AI-related themes, debating at The Portfolio Collective’s Portfolio Career Festival at Battersea Arts Centre. The subject, Unlocking Potential or Chasing Efficiency: AI’s Impact on Portfolio Work, prompts the question: should professionals embrace AI as a tool to amplify skills, creativity and flow, or hand over entire workflows to autonomous agents?

I know which side I’m on. 

(If you fancy listening in and rolling your sleeves up alongside over 200 ambitious professionals – for a day of inspiration, connection and, most importantly, growth – I can help with a discounted ticket. Use OLIVERPCFEST for £50 off the cost here.)

The past

In 2013, I was lucky enough to edit the Six Nations Guide with Lewis Moody, the former England rugby captain, a blood-and-thunder flanker who clocked up 71 caps. At the time, Lewis was a year into retirement, grappling with the physical aftermath of a brutal professional career.

When the tragic news broke earlier in October that Lewis, 47, had been diagnosed with the cruelly life-sapping motor neurone disease (MND), it unleashed a wave of sorrow across the rugby community and far beyond. I simply sent him a heart emoji. He texted the same back a few hours later.

Lewis’s hellish diagnosis, and the impact it has had on so many, feels especially poignant given Princess Catherine’s reflections on childhood development. She writes about a Harvard study showing that “people who developed strong social and emotional skills in childhood maintained warmer connections with their spouses six decades later, even into their eighties and nineties”.

She continued: “Teaching children to better understand both their inner and outer worlds sets them up for a lifetime of healthier, more fulfilling relationships. But if connection is the key to human thriving, we face a concerning reality: every social trend is moving in the opposite direction.”

AI has already changed work. The deeper question is whether we’ll preserve the skills that make us irreplaceably human.

This Hallowe’en, the real horror isn’t monsters at the door. It’s the quiet disappearance of human connection, one algorithmically optimised interaction at a time.

Roosevelt was right. Success depends on getting along with people. Not algorithms. Not synthetic companions. Not virtual influencers.

People.

Real, messy, complicated, irreplaceable people. 

Statistics of the month

💰 AI wage premium grows
Workers with AI skills now earn a 56% wage premium compared to colleagues in the same roles without AI capabilities – showing that upskilling pays off in cold, hard cash. (PwC)

🔄 A quarter of jobs face radical transformation
Roughly 26% of all jobs on Indeed appear poised to transform radically in the near future as GenAI rewrites the DNA of work across industries. (Indeed)

📈 AI investment surge continues
Over the next three years, 92% of companies plan to increase their AI investments – yet only 1% of leaders call their companies “mature” on the deployment spectrum, revealing a massive gap between spending and implementation. (McKinsey)

📉 Workforce reduction looms
Some 40% of employers expect to reduce their workforce where AI can automate tasks, according to the World Economic Forum’s Future of Jobs Report 2025 – a stark reminder that transformation has human consequences. (WEF)

🎯 Net job creation ahead
A reminder that despite fears, AI will displace 92 million jobs but create 170 million new ones by 2030, resulting in a net gain of 78 million jobs globally – proof that every industrial revolution destroys and creates in equal (or greater) measure. (WEF)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 18)

TL;DR: June’s edition explores truth-telling in an age of AI-generated misinformation, the flood of low-quality content threatening authentic expertise, and why human storytelling becomes our most valuable asset when everything else can be faked – plus some highlights from South by Southwest London.

Image created on Midjourney

The future

“When something is moving a million times every 10 years, there’s only one way that you can survive it. You’ve got to get on that spaceship. Once you get on that spaceship, you’re travelling at the same speed. When you’re on the rocket ship, all of a sudden, everything else slows down.”

Nvidia CEO Jensen Huang’s words, delivered at London Tech Week earlier this month alongside Prime Minister Keir Starmer, capture the current state of artificial intelligence. We are being propelled by technological change at an unprecedented speed, orders of magnitude quicker than Moore’s law, and it feels alien and frightening.

Before setting foot on the rocket ship, though, many people must first overcome a barrier: trust in AI. For some, the technology is advancing so rapidly that the potential for missed or hidden consequences is alarming enough to prompt a hard brake, or a refusal to climb aboard at all.

Others understand the threats but focus on the opportunities promised by AI and are jostling for position, bracing for warp speed. Nothing will stop them, but at what cost to society?

For example, we’re currently witnessing two distinct trajectories for the future of online content and, to some extent, services. One leads towards an internet flooded with synthetic mediocrity and, worse, untrustworthy information; the other towards authentic human expertise becoming our most valuable currency.

Because the truth crisis has already landed: AI is attacking the veracity of, well, everything we read and much of what we see on a screen.

In May, NewsGuard, which provides data to help identify reliable information online, identified 1,271 AI-generated news and information sites across 16 languages, operating with little to no human oversight, up from 750 last year.

It’s easy not to see this as you pull on your astronaut helmet and space gloves, but this is an insidious, industrial-scale production of mediocrity. Generative AI, fed on historical data, produces content that reflects the average of what has been published before, offering no new insights, lived experiences, or authentic perspectives. The result is an online world increasingly polluted with bland, sourceless, soulless and often inaccurate information. The slop is only going to get sloppier, too. What does that mean for truth and, yes, trust?

The 2025 State of AI in Marketing Report, published by HubSpot last week, reveals that 84% of UK marketers now use AI tools daily in their roles, compared to a global average of 66%.

Media companies are at risk of hosting, citing, and copying this AI-generated marketing content. Some are actively creating it while swinging the axe liberally, culling journalists, and hacking away at integrity.

The latest Private Eye reported how Piers North, CEO of Reach – struggling publisher of the Mirror, Express, Liverpool Echo, Manchester Evening News, and countless other titles – has a “cunning plan: to hand it all over to the robots to sort out”. 

According to the magazine, North told staff: “It feels like we’re on the cusp of another digital revolution, and obviously that can be pretty daunting, but here I think we’ve got such an opportunity to do more of the stuff we love and are brilliant at. So with that in mind, you won’t be surprised to hear that embracing AI is going to feature heavily in my strategic priorities.”

The incentive structure is clear: publish as much as possible and as quickly as possible to attract traffic. Quality, alas, becomes secondary to volume.

But this crisis creates opportunity. Real expertise becomes more valuable precisely because it’s becoming rarer. The brands and leaders who properly emphasise authentic human knowledge will enjoy an advantage over rivals drowning in algorithmic sameness, now and in the future.

What does this mean for our children? They’re growing up in a world where they’ll need to become master detectives of truth. The skills we took for granted – being able to distinguish reliable sources from unreliable ones and recognising authentic expertise from synthetic mimicry – are becoming essential survival tools. 

They’ll need to develop what we might call “truth literacy”: the ability to trace sources, verify claims, and distinguish between content created by humans with lived experience and content generated by algorithms with training data.

This detective work extends beyond text to every form of media. Deepfakes are becoming indistinguishable from reality. Voice cloning requires just seconds of audio. Even video evidence can no longer be trusted without verification.

The implications for work – and, well, life – are profound. For instance, with AI agents being the latest business buzzword, Khozema Shipchandler, CEO of global cloud communications company Twilio, shared with me how their technology is enabling what he calls “hyper-personalisation at scale”. But the key discovery isn’t the technology itself; it’s how human expertise guides its application.

“We’re not trying to replace human agents,” Khozema told me. “We’re creating experiences where virtual agents handle lower complexity interactions but can escalate seamlessly to humans when genuine expertise is needed.”

He shared a healthcare example. Cedar Health, based in the United States, found that 97% of patient inquiries were related to a lack of understanding of bills. However, patients initially preferred engaging with AI agents because they felt less embarrassed about gaps in their medical terminology. The AI could process complex insurance data instantly, but when nuanced problem-solving was required, human experts stepped in with full context.

In this case, man and machine are working together brilliantly. As Shipchandler put it: “The consumer gets an experience where they’re being listened to all the way through, they’re getting accuracy because everything gets recapped, and they’re getting promotional offers that aren’t annoying because they reference things they’ve actually done before.”

The crucial point, though, is that none of this works without human oversight, empathy, and strategic thinking. The AI handles the data processing; humans provide the wisdom.
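Here’s a minimal sketch of that escalation pattern, assuming a crude complexity score and a handoff summary. The keywords, threshold, and function names are my own illustrative inventions, not Twilio’s API; a production system would score complexity with a classifier or an LLM.

```python
# Toy sketch of the "AI first, human when it matters" escalation pattern.
# Scoring, threshold, and summary logic are illustrative assumptions only.

def complexity_score(inquiry: str) -> float:
    """Crude keyword proxy; a real system would use a trained classifier."""
    hard_signals = ("dispute", "appeal", "complaint", "exception")
    hits = sum(word in inquiry.lower() for word in hard_signals)
    return hits / len(hard_signals)

def handle_inquiry(inquiry: str, threshold: float = 0.25) -> str:
    if complexity_score(inquiry) < threshold:
        # Low-complexity path: the virtual agent answers directly.
        return f"AI agent resolves: {inquiry!r}"
    # Escalation path: hand off with context so the human expert can
    # pick up the conversation knowing exactly where it got stuck.
    return f"Escalated with summary for human expert: {inquiry!r}"

print(handle_inquiry("Why is this charge on my bill?"))
print(handle_inquiry("I want to dispute this charge and file a complaint"))
```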

Jesper With-Fogstrup, Group CEO of Moneypenny, a telephone answering service, echoed this theme from a different angle. His global company has been testing AI voice agents for a few months, handling live calls across various industries. The early feedback has been mixed, but revealing.

“Some people expect it’s going to be exactly like talking to a human,” With-Fogstrup told me in a cafe down the road from Olympia, the venue for London Tech Week. “It just isn’t. But we’re shipping updates to these agents every day, several times a day. They’re becoming better incredibly quickly.”

What’s fascinating is how customers reveal more of themselves to AI agents compared to human agents. “There’s something about being able to have a conversation for a long time,” Jesper observed. “The models are very patient. Sometimes that’s what’s required.”

But again, the sweet spot isn’t AI replacing humans. It’s AI handling routine complexity so humans can focus on what they do uniquely well. As Jesper explained: “If it escalates into one of our Moneypenny personal assistants, they get a summary, they can pick up the conversation, they understand where it got stuck, and they can resolve the issue.”

The future of work, then, isn’t about choosing between human and artificial intelligence. It’s about designing systems where each amplifies the other’s strengths while maintaining the ability to distinguish between them.

Hilary Cottam’s research for her new book, The Work We Need, arrives at the same conclusion from a different direction. After interviewing thousands of workers, from gravediggers to the Microsoft CEO, she identified six principles for revolutionising work: 

  • Securing the basics
  • Working with meaning
  • Tending to what sustains us
  • Rethinking our use of time
  • Enabling play
  • Organising in place

Work, Cottam argues, is “a sort of chrysalis in which we figure out who we are and what we’re doing here, and what we should be doing to be useful”. That existential purpose can’t be automated away.

The young female welder Cottam profiled, working on nuclear submarines for BAE in Barrow-in-Furness, exemplifies this. She and her colleagues are “very, very convinced that their work is meaningful, partly because they’re highly skilled. And what’s very unusual in the modern workplace is that a submarine takes seven years to build, and most of the teamwork on that submarine is end-to-end.”

This is the future we should be building towards: AI handling the routine complexity, humans focusing on meaning and purpose, and the irreplaceable work of creating something that lasts. But we must teach our children how to distinguish between authentic human expertise and sophisticated synthetic imitation. Not easy.

Meanwhile, the companies already embracing this approach are seeing remarkable results. They’re not asking whether AI will replace humans, but how human expertise can be amplified by AI to create better outcomes for everyone while maintaining transparency about when and how AI is being used.

As Huang noted in his conversation with the Prime Minister: “AI is the great equaliser. The new programming language is called ‘human’. Anybody can learn how to program in AI.”

But that democratisation only works if we maintain the distinctly human capabilities that give that programming direction, purpose, and wisdom. The rocket ship is accelerating. Will we use that speed to amplify human potential or replace it entirely?

The present

At the inaugural South by Southwest London, held in Shoreditch, East London, at the beginning of June, I witnessed fascinating tensions around truth-telling that illuminate our current moment. The festival brought together storytellers, technologists, and pioneers, each grappling with how authentic voices survive in an increasingly synthetic world. Here are some of my highlights.

Image created on my iPhone

Tina Brown, former editor-in-chief of Tatler, Vanity Fair, The New Yorker, and The Daily Beast, reflecting on journalism’s current challenges, offered a deceptively simple observation: “To be a good writer, you have to notice things.” In our AI-saturated world, this human ability to notice becomes invaluable. While algorithms identify patterns in data, humans notice what’s missing, what doesn’t fit, and what feels wrong.

Brown’s observation carries particular weight, given her experience navigating media transformation over the past five decades. She has watched industries collapse and rebuild, seen power structures shift, and observed how authentic voices either adapt or fade away.

“Legacy media itself is reinventing itself all over the place,” she said. “They’re all trying to do things differently. But what you really miss in these smaller platforms is institutional backing. You need good lawyers, institutional backing for serious journalism.”

This tension between democratised content creation and institutional accountability sits at the heart of our current crisis. Anyone can publish anything, anywhere, anytime. But who ensures accuracy? Who takes responsibility when misinformation spreads? Who has the resources to fact-check, verify sources, and maintain standards?

This is a cultural challenge, as well as a technical one. When US President Donald Trump can shout down critics with “fake news” and seemingly run a corrupt government – the memecoin $TRUMP and his involvement with World Liberty Financial have reportedly raised over half a billion dollars, and then there was the $400m (£303m) gift of a new official private jet from Qatar, among countless other questionable dealings – what does that mean for the rest of us?

Brown said: “The incredible thing is that the US President … doesn’t care how bad it looks. The first term was like, well, the president shouldn’t be making money out of himself. All that stuff is out of the window.”

When truth-telling itself becomes politically suspect, when transparency is viewed as a weakness rather than a strength, the work of authentic communication becomes both more difficult and more essential.

This dynamic played out dramatically in the spy world, as Gordon Corera, the BBC’s security correspondent, and former CIA analyst David McCloskey revealed during a live recording of their podcast about intelligence operations, The Rest is Classified. The most chilling story they shared wasn’t about sophisticated surveillance or cutting-edge technology. It was about children discovering their parents’ true identities only when stepping off a plane in Moscow, greeted by Vladimir Putin himself.

Imagine learning that everything you believed about your family, your identity, and your entire childhood was constructed fiction. These children of deep-cover Russian operatives lived authentic lives built on complete deception. The psychological impact, as McCloskey noted, requires “all kinds of exotic therapies”.

Just imagine. Those children will have gone past the anger about being lied to and crashed into devastation, having had their sense of reality torpedoed. When the foundation of truth crumbles, it’s not simply the facts that disappear: it’s the ability to trust anything, anywhere, ever again.

This feeling of groundlessness is what our children risk experiencing if we don’t teach them how to navigate an increasingly synthetic information environment. 

The difference is that while those Russian operatives’ children experienced one devastating revelation, our children face thousands of micro-deceptions daily: each AI-generated article, each deepfake video, each synthetic voice clip eroding their ability to distinguish real from artificial.

Zelda Perkins, speaking about whistleblowing at SXSW London, captured something essential about the courage required to tell brutal truths. When she broke her NDA to expose Harvey Weinstein’s behaviour and help detonate the #MeToo movement in 2017, she was trying to dismantle an institution that enables silence, rather than simply bring down a powerful man. “The problem wasn’t really Weinstein,” she emphasised. “The problem is the system. The problem is these mechanisms that protect those in power.”

Her most powerful reflection was that she has no regrets about speaking out and telling the truth despite the unimaginable impact on her career and beyond. “My life has been completely ruined by speaking out,” she said. “But I’m honestly not sure I’ve ever been more fulfilled. I’ve never grown more, I’ve never learned more, I’ve never met more people with integrity.”

I’m reminded of a quote from Jesus in the Bible (John 8:32 – and, yes, I had to look that up, of course): “And ye shall know the truth, and the truth shall make you free.”

Truth can set you free, but it may come at a cost. This paradox captures something essential about truth-telling in our current moment. Individual courage matters, but systemic change requires mass action. As Perkins noted: “Collective voice is the most important thing for us right now.”

Elsewhere at SXSW London, the brilliantly named Mo Brings Plenty – an Oglala Lakota television, film, and stage actor (Mo in Yellowstone) – spoke with passion about Indigenous perspectives. “In our culture, we talk about the next seven generations,” he said. “What are we going to pass on to them? What do we leave behind?”

This long-term thinking feels revolutionary in our culture of instant gratification. Social media rewards immediate engagement. AI systems optimise for next-click prediction. Political cycles focus on next-election victories.

But authentic leaders think in generations, not quarters. They build systems that outlast their own tenure. They tell truths that may be uncomfortable now but are necessary for future flourishing.

The creative community at SXSW London embodied this thinking. Whether discussing children’s environmental education or music’s power to preserve cultural memory, artists consistently framed their work in terms of legacy and impact beyond immediate success.

As Dr Deepak Chopra noted in the “Love the Earth” session featuring Mo Brings Plenty: “Protecting our planet is something we can all do joyfully with imagination and compassion.”

This joyful approach to brutal truths offers a template for navigating our current information crisis. We don’t need to choose between honesty and hope. We can tell hard truths while building better systems and expose problems while creating solutions.

The key is understanding that truth-telling isn’t about punishment or blame. It’s about clearing space for authentic progress that will precipitate the flourishing of humanity, not its dulling.

The (recent) past

Three weeks ago, I took a 12-minute Lime bike (don’t worry, I have a clever folding helmet and never run red lights) from my office in South East London to Goldsmiths, University of London. I spoke to a room full of current students, recent graduates, and business leaders, delivering a keynote titled: “AI for Business Success: Fostering Human Connection in the Digital Age.” The irony wasn’t lost on me: here I was, using my human capabilities to argue for the irreplaceable value of human connection in an age of AI.

Image taken by my talented friend Samer Moukarzel

The presentation followed a pattern I’ve been perfecting over the past year. I begin with a simple human interaction: asking audience members to turn to each other and share their favourite day of the week and favourite time of that day. (Tuesday at 8.25pm, just before five-a-side footie, for me.) It triggers a minute or two of genuine curiosity, slight awkwardness, perhaps a shared laugh or unexpected discovery.

That moment captures everything I’m trying to communicate. While everyone obsesses over AI’s technical capabilities, we’re forgetting that humans crave connection, meaning, and the beautiful unpredictability of authentic interaction.

A week or so later, for Business and IP Centre (BIPC) Lewisham, I delivered another presentation: “The Power of Human-Led Storytelling in an AI World.” This one was delivered over Zoom, and the theme remained consistent, but the context shifted. These were local business leaders, many of whom were struggling with the same questions. How do we stay relevant? How do we compete with automated content? How do we maintain authenticity in an increasingly synthetic world?

Both presentations built on themes I’ve been developing throughout this year of Go Flux Yourself. The CHUI framework, the concept of being “kind explorers”, the recognition that we’re living through “the anti-social century”, where technology promises connection but often delivers isolation.

But there’s something I’ve learned from stepping onto stages and speaking directly to people that no amount of writing can teach: the power of presence. When you’re standing in front of an audience, there’s no algorithm mediating the exchange. No filter softening hard-to-hear truths, and no AI assistant smoothing rough edges.

You succeed or fail based on your ability to read the room, adapt in real time, and create a genuine connection. These are irreplaceable human skills that become more valuable as everything else becomes automated.

The historical parallel keeps returning to me. On June 23, I delivered the BIPC presentation on what would have been Alan Turing’s 113th birthday. The brilliant mathematician whose work gave rise to modern computing and AI would probably be fascinated – and perhaps concerned – by what we’ve done with his legacy.

I shared the myth that Apple’s bitten logo was Steve Jobs’ tribute to Turing, who tragically died after taking a bite from a cyanide-laced apple. It’s compelling and poetic, connecting our digital age to its origins. There’s just one problem: it’s entirely false.

Rob Janoff, who designed the logo, has repeatedly denied any homage to Turing. Apple itself has stated there’s no link. The bite was added so people wouldn’t mistake the apple for a cherry. Sometimes, the mundane truth is just mundane.

But here’s why I started with this myth: compelling narratives seem more important than accurate ones, and everything is starting to sound exactly the same because algorithms are optimised for engagement over truth.

As I’ve refined these talks over the past months, I’ve discovered that as our environment becomes increasingly artificial, the desire for authentic interaction grows stronger. The more content gets automated, the more valuable genuine expertise becomes. The more relationships are mediated by algorithms, the more precious unfiltered, messy human connections feel.

That’s the insight I’ll carry forward into the second half of 2025. Not that we should resist technological change, but that we should use it to amplify our most human capabilities while teaching our children how to be master detectives of truth in an age of synthetic everything, and encouraging them to experiment, explore, and love.

Statistics of the month

💼 Executive AI race
Almost two-thirds (65%) of UK and Irish CEOs are actively adopting AI agents, with 58% pushing their organisations to adopt Generative AI faster than people are comfortable with. Two-thirds confirm they’ll take more risks than the competition to stay competitive. 🔗

📧 The infinite workday
Microsoft’s 2025 Annual Work Trend Index Report reveals employees are caught in constant churn, with 40% triaging emails by 6am, receiving 117 emails and 153 chats daily. Evening meetings after 8pm are up 16% year-over-year, and weekend work continues rising. 🔗

🤖 AI trust paradox
While IBM replaced 94% of HR tasks with AI, many executives have serious reservations. Half (51%) don’t trust AI fully with financial decision-making, and 22% worry about data quality feeding AI models. 🔗

📉 Gender gap persists
The World Economic Forum’s 2025 Global Gender Gap Report shows 68.8% of the gap closed, yet full parity remains 123 years away. Despite gains in health and education, economic and political gaps persist. 🔗

⚠️ Unemployment warning
Anthropic CEO Dario Amodei predicts AI could eliminate half of all entry-level white-collar jobs and send unemployment rocketing to 20% within five years. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.