Go Flux Yourself: Navigating the Future of Work (No. 23)

TL;DR: November’s Go Flux Yourself marks three years since ChatGPT’s launch by examining the “survival of the shameless” – Rutger Bregman’s diagnosis of Western elite failure. With responsible innovation falling out of fashion and moral ambition in short supply, it asks what purpose-driven technology actually looks like when being bad has become culturally acceptable.

Image created on Nano Banana

The future

“We’ve taught our best and brightest how to climb, but not what ladder is worth climbing. We’ve built a meritocracy of ambition without morality, of intelligence without integrity, and now we are reaping the consequences.”

The above quotation comes from Rutger Bregman, the Dutch historian and thinker who shot to prominence at the World Economic Forum in Davos in 2019. You may recall the viral clip. Standing before an audience of billionaires, he did something thrillingly bold: he told them to pay their taxes.

“It feels like I’m at a firefighters’ conference and no one’s allowed to speak about water,” he said almost seven years ago. “Taxes, taxes, taxes. The rest is bullshit in my opinion.”

Presumably due to his truth-telling, he has not been invited back to the Swiss Alps for the WEF’s annual meeting.

Bregman is this year’s BBC Reith Lecturer, and, again, he is holding a mirror up to society to reveal its ugly, venal self. His opening lecture, A Time of Monsters – a title borrowed from Antonio Gramsci’s 1929 prison notebooks – delivered at the end of November, builds on that Davos provocation with something more troubling: a diagnosis of elite failure across the Western world. This time, his target isn’t just tax avoidance. It’s what he calls the “survival of the shameless”: the systematic elevation of the unscrupulous over the capable, and the brazen over the virtuous.

Even Bregman isn’t immune to the censorship he critiques. The BBC reportedly removed a line from his lecture describing Donald Trump as “the most openly corrupt president in American history”. The irony, as Bregman put it, is that the lecture was precisely about “the paralysing cowardice of today’s elites”. When even the BBC flinches from stating the obvious – and presumably fears how Trump might react (he has threatened to sue the broadcaster for $5 billion over doctored footage, a scandal that earlier in November prompted the resignations of the director general and the CEO of BBC News) – you know something is deeply rotten.

Bregman’s opening lecture is well worth a listen, as is the Q&A afterwards. His strong opinions chimed with the beliefs of Gemma Milne, a Scottish science writer and lecturer at the University of Glasgow, whom I caught up with a couple of weeks ago, having first interviewed her almost a decade ago.

The author of Smoke & Mirrors: How Hype Obscures the Future and How to See Past It has recently submitted her PhD thesis at the University of Edinburgh (Putting the future to work – The promises, product, and practices of corporate futurism), and has been tracking this shift for years. Her research focuses on “corporate futurism” and the political economy of deep tech – essentially, who benefits from the stories we tell about innovation.

Her analysis is blunt: we’re living through what she calls “the age of badness”.

“Culturally, we have peaks and troughs in terms of how much ‘badness’ is tolerated,” she told me. “Right now, being the bad guy is not just accepted, it’s actually quite cool. Look at Elon Musk, Trump, and Peter Thiel. There’s a pragmatist bent that says: the world is what it is, you just have to operate in it.”

When Smoke & Mirrors came out in 2020, conversations around responsible innovation were easier. Entrepreneurs genuinely wanted to get it right. The mood has since curdled. “If hype is how you get things done and people get misled along the way, so be it,” Gemma said of the shift in attitude among those in power. “‘The ends justify the means’ has become the prevailing logic.”

On a not-unrelated note, November 30 marked exactly three years since OpenAI launched ChatGPT. (This end-of-the-month newsletter arrives a day later than usual – the weekend, plus an embargo on the Adaptavist Group research below.) We’ve endured three years of breathless proclamations about productivity gains, creative disruption, and the democratisation of intelligence. And three years of pilot programmes, failed implementations, and so much hype. 

Meanwhile, the graduate job market has collapsed by two-thirds in the UK alone, and unemployment has risen to 5% – the highest since September 2021, amid the pandemic fallout – according to Office for National Statistics data published in mid-November.

New research from The Adaptavist Group, gleaned from almost 5,000 knowledge workers split evenly across the UK, US, Canada and Germany, underscores the insidious social cost: a third (32%) of workers report speaking to colleagues less since using GenAI, and 26% would rather engage in small talk with an AI chatbot than with a human.

So here’s the question that Bregman forces us to confront: if we now have access to more intelligence than ever before – both human and artificial – what exactly are we doing with it? And are we using technology for good, for human enrichment and flourishing? On the whole, with artificial intelligence, I don’t think so.

Bregman describes consultancy, finance, and corporate law as a “gaping black hole” that sucks up brilliant minds: a Bermuda Triangle of talent that has tripled in size since the 1980s. Every year, he notes, thousands of teenagers write beautiful university application essays about solving climate change, curing disease, or ending poverty. A few years later, most have been funnelled towards the likes of McKinsey, Goldman Sachs, and Magic Circle law firms.

The numbers bear this out. Around 40% of Harvard graduates now end up in that Bermuda Triangle of talent, according to Bregman. Include big tech, and the share rises above 60%. One Facebook employee, a former maths prodigy, quoted by the Dutchman in his first Reith lecture, said: “The best minds of my generation are thinking about how to make people click ads. That sucks.”

If we’ve spent decades optimising our brightest minds towards rent-seeking and attention-harvesting, AI accelerates that trajectory. The same tools that could solve genuine problems are instead deployed to make advertising more addictive, to automate entry-level jobs without creating pathways to replace them, and to generate endless content that says nothing new.

Gemma sees this in how technology and politics have fused. “The entanglement has never been stronger or more explicit.” Twelve months ago, Trump won his second term. At his inauguration in Washington in January, the front-row seats were taken by several technology leaders, happy to genuflect in return for deregulation. But what is the ultimate cost to humanity of such cosy relationships?

“These connections aren’t just more visible, they’re culturally embedded,” Gemma told me. “People know Musk’s name and face without understanding Tesla’s technology. Sam Altman is AI’s hype guru, but he’s also a political leader now. The two roles have merged.”

Against this backdrop, I spent two days at London’s Guildhall in early November for the Thinkers50 conference and gala. The theme was “regeneration”, exploring whether businesses can restore rather than extract.

Erinch Sahan from Doughnut Economics Action Lab offered concrete examples of businesses demonstrating that purpose and profit needn’t be mutually exclusive: Patagonia’s steward-ownership model, Fairphone’s “most ethical smartphone in the world” with modular repairability, and LUSH’s commitment to fair taxes and employee ownership.

Erinch’s – frankly heartwarming – list, of which this trio is a small fraction, contrasted sharply with Gemma’s observation about corporate futurism: “The critical question is whether it actually transforms organisations or simply attends to the fear of perma-crisis. You bring in consultants, do the exercises, and everyone feels better about uncertainty. But does anything actually change?”

Some forms of the practice can be transformative. Others primarily manage emotion without producing radical change. The difference lies in whether accountability mechanisms exist, whether outcomes are measured, tracked, and tied to consequences.

This brings me to Delhi-based Ruchi Gupta, whom I met over a video call a few weeks ago. She runs the not-for-profit Future of India Foundation and has built something that embodies precisely the kind of “moral ambition” Bregman describes, although she’d probably never use that phrase. 

India is home to the world’s largest youth population, with one in every five young people globally being Indian. Not many – and not enough – are afforded the skills and opportunities to thrive. Ruchi’s assessment of the current situation is unflinching. “It’s dire,” she said. “We have the world’s largest youth population, but insufficient jobs. The education system isn’t skilling them properly; even among the 27% who attend college, many graduate without marketable skills or professional socialisation. Young people will approach you and simply blurt things out without introducing themselves. They don’t have the sophistication or the networks.”

Notably, cities comprise just 3% of India’s land area but account for 60% of its GDP. That concentration tells you everything about how poorly opportunities are distributed.

Ruchi’s flagship initiative, YouthPOWER, responds to this demographic reality by creating India’s first and only district-level youth opportunity and accountability platform, covering all 800 districts. The platform synthesises data from 21 government sources to generate the Y-POWER Score, a composite metric designed to make youth opportunity visible, comparable, and politically actionable.

“Approximately 85% of Indians continue to live in the district of their birth,” Ruchi explained. “That’s where they situate their identity; when young people introduce themselves to me, they say their name and their district. If you want to reach all young people and create genuine opportunities, it has to happen at the district level. Yet nothing existed to map opportunity at that granularity.”

What makes YouthPOWER remarkable, aside from the smart data aggregation, is the accountability mechanism. Each district is mapped to its local elected representative, the Member of Parliament who chairs the district oversight committee. The platform creates a feedback loop between outcomes and political responsibility.

“Data alone is insufficient; you need forward motion,” Ruchi said. “We mapped each district to its MP. The idea is to work directly with them, run pilots that demonstrate tangible improvement, then scale a proven playbook across all 543 constituencies. When outcomes are linked to specific politicians, accountability becomes real rather than rhetorical.”
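For illustration only, here is how a composite district score in the spirit of the Y-POWER Score might be assembled: normalise each indicator across districts, then take a weighted average. Everything below – the indicator names, the weights, and the min-max normalisation – is a hypothetical sketch, not YouthPOWER’s actual methodology.

```python
# Hypothetical sketch of a composite district score. The indicators,
# weights, and normalisation method are invented for illustration.

def min_max_normalise(values):
    """Scale raw indicator values to 0-100 across all districts."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [50.0 for _ in values]  # no variation: neutral score
    return [100.0 * (v - lo) / (hi - lo) for v in values]

def composite_scores(districts, weights):
    """Map {district: {indicator: raw_value}} to {district: 0-100 score}."""
    names = list(districts)
    normalised = {  # normalise each indicator column across districts
        ind: min_max_normalise([districts[d][ind] for d in names])
        for ind in weights
    }
    total_w = sum(weights.values())
    return {
        d: sum(w * normalised[ind][i] for ind, w in weights.items()) / total_w
        for i, d in enumerate(names)
    }

# Toy data: three districts, three invented indicators
districts = {
    "District A": {"enrolment": 62.0, "jobs": 41.0, "skilling": 30.0},
    "District B": {"enrolment": 80.0, "jobs": 55.0, "skilling": 45.0},
    "District C": {"enrolment": 50.0, "jobs": 70.0, "skilling": 25.0},
}
weights = {"enrolment": 0.4, "jobs": 0.4, "skilling": 0.2}
scores = composite_scores(districts, weights)
```

A score built this way makes districts directly comparable on a single 0–100 scale; in the real platform, the choice of indicators and weights would of course be a matter of policy design, not arithmetic.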

Her background illuminates why this matters personally. Despite attending good schools in Delhi, her family’s circumstances meant she didn’t know about premier networking institutions. She went to an American university because it let her work while studying, not because it was the best fit. She applied only to Harvard Business School – having learnt about it from Erich Segal’s Love Story – without any work experience.

“Your background determines which opportunities you even know exist,” she told me. “It was only at McKinsey that I finally understood what a network does – the things that happen when you can simply pick up the phone and reach someone.” Thankfully, for India’s sake, Ruchi has found her purpose after time spent lost in the Bermuda Triangle of talent.

But the lack of opportunities and woeful political accountability are global challenges. Ruchi continued: “The right-wing surge you’re seeing in the UK and the US stems from the same problem: opportunity isn’t reaching people where they live. The normative framework is universal: education, skilling, and jobs on one side; empirical baselines and accountability mechanisms on the other. Link outcomes to elected representatives, and you create a feedback loop that drives improvement.”

So what distinguishes genuine technology for good from its performative alternative?

Gemma’s advice is to be explicit about your relationship with hype. “Treat it like your relationship with money. Some people find money distasteful but necessary; others strategise around it obsessively. Hype works the same way. It’s fundamentally about persuasion and attention, getting people to stop and listen. In an attention economy, recognising how you use hype is essential for making ethical and pragmatic decisions.”

She doesn’t believe we’ll stay in the age of badness forever. These things are cyclical. Responsible innovation will become fashionable again. But right now, critiquing hype lands very differently because the response is simply: “Well, we have to hype. How else do you get things done?”

Ruchi offers a different lens. The economist Joel Mokyr has argued that innovation is fundamentally about culture, not just human capital or resources. “Our greatness in India will depend on whether we can build that culture of innovation,” Ruchi said. “We can’t simply skill people as coders and rely on labour arbitrage. That’s the current model, and it’s insufficient. If we want to be a genuinely great country, we need to pivot towards something more ambitious.”

Three years into the ChatGPT era, we have a choice. We can continue funnelling talent into the Bermuda Triangle, using AI to amplify artificial importance. Or we can build something different. For instance, pioneering accountability systems like YouthPOWER that make opportunity visible, governance structures that demand transparency, and cultures that invite people to contribute to something larger than themselves.

Bregman ends his opening Reith Lecture with a simple observation: moral revolutions happen when people are asked to participate.

Perhaps the most important thing leaders can do in 2026 is not buy more AI subscriptions or launch more pilots, but ask: what ladder are we climbing, and who benefits when we reach the top?

The present

Image created on Midjourney

The other Tuesday, on the 8.20am train from Waterloo to Clapham Junction, heading to The Portfolio Collective’s Portfolio Career Festival at Battersea Arts Centre, I witnessed a small moment that captured everything wrong with how we’re approaching AI.

The guard announced himself over the tannoy. But it wasn’t his voice. It was a robotic, AI-generated monotone informing passengers that he was in coach six, should anyone need him.

I sat there, genuinely unnerved. This was the Turing trap in action, using technology to imitate humans rather than augment them. The guard had every opportunity to show his character, his personality, perhaps a bit of warmth on a grey November morning. Instead, he’d outsourced the one thing that made him irreplaceable: his humanity.

Image created on Nano Banana (using the same prompt as the Midjourney one above)

Erik Brynjolfsson, the Stanford economist who coined the term in 2022, argues we consistently fall into this trap: we design AI to mimic human capabilities rather than complement them. We play to our weaknesses – the things machines do better – instead of our strengths. The train guard’s voice was his strength: his ability to set a tone, to make passengers feel welcome, to be a human presence in a metal tube hurtling through South London. That’s precisely what got automated away.

It’s a pattern I’m seeing everywhere. By blindly grabbing AI and outsourcing tasks that reveal what makes us unique, we risk degrading human skills, eroding trust and connection, and – I say this without hyperbole – automating ourselves to extinction.

The timing of that train journey felt significant. I was heading to a festival entirely about human connection – networking, building personal brand, the importance of relationships for business and greater enrichment. And here was a live demonstration of everything working against that.

It was also Remembrance Day. As we remembered those who fought for our freedoms, not least during a two-minute silence (that felt beautifully calming – a collective, brief moment without looking at a screen), I was about to argue on stage that we’re sleepwalking into a different kind of surrender: the quiet handover of our professional autonomy to machines.

The debate – Unlocking Potential or Chasing Efficiency: AI’s Impact on Portfolio Work – was held before around 200 ambitious portfolio professionals. The question was straightforward: should we embrace AI as a tool to amplify our skills, creativity, and flow – or hand over entire workflows to autonomous agents and focus our attention elsewhere?

Pic credit: Afonso Pereira

You can guess which side I argued. The battle for humanity isn’t against machines, per se. It’s about knowing when to direct them and when to trust ourselves. It’s about recognising that the guard’s voice – warm, human, imperfect – was never a problem to be solved. It was a feature to be celebrated.

The audience wanted an honest conversation about navigating this transition thoughtfully. I hope we delivered. But stepping off stage, I couldn’t shake the irony: a festival dedicated to human connection, held on the day we honour those who preserved our freedoms, while outside these walls the evidence mounts that we’re trading professional agency for the illusion of efficiency.

To watch the full video session, please see here: 

A day later, I attended an IBM panel at the tech firm’s London headquarters. Their Race for ROI research contained some encouraging news: two-thirds of UK enterprises are experiencing significant AI-driven productivity improvements. But dig beneath the headline, and the picture darkens. Only 38% of UK organisations are prioritising inclusive AI upskilling opportunities. The productivity gains are flowing to those already advantaged. Everyone else is figuring it out on their own – 77% of those using AI at work are entirely self-taught.

Leon Butler, General Manager for IBM UK & Ireland, offered a metaphor that’s stayed with me. He compared opaque AI models to drinking from an opaque test tube.

“There’s liquid in it – that’s the training data – but you can’t see it. You pour your own data in, mix it, and you’re drinking something you don’t fully understand. By the time you make decisions, you need to know it’s clean and true.”

That demand for transparency connects directly to Ruchi’s work in India and Gemma’s critique of corporate futurism. Data for good requires good data. Accountability requires visibility. You can’t build systems that serve human flourishing if the foundations are murky, biased, or simply unknown.

As Sue Daley OBE, who leads techUK’s technology and innovation work, pointed out at the IBM event: “This will be the last generation of leaders who manage only humans. Going forward, we’ll be managing humans and machines together.”

That’s true. But the more important point is this: the leaders who manage that transition well will be the ones who understand that technology is a means, not an end. Efficiency without purpose is just faster emptiness.

The question of what we’re building, and for whom, surfaced differently at the Thinkers50 conference. Lynda Gratton, whom I’ve interviewed a couple of times about living and working well, opened with her weaving metaphor. We’re all creating the cloth of our lives, she argued, from productivity threads (mastering, knowing, cooperating) and nurturing threads (friendship, intimacy, calm, adventure).

Not only is this an elegant idea, but I love the warm embrace of messiness and complexity. Life doesn’t follow a clean pattern. Threads tangle. Designs shift. The point isn’t to optimise for a single outcome but to create something textured, resilient, human.

That messiness matters more now. My recent newsletters have explored the “anti-social century” – how advances in technology correlate with increased isolation. Being in that Guildhall room – surrounded by management thinkers from around the world, having conversations over coffee, making new connections – reminded me why physical presence still matters. You can’t weave your cloth alone. You need other people’s threads intersecting with yours.

Earlier in the month, an episode of The Switch, St James’s Place Financial Adviser Academy’s career change podcast, was released. Host Gee Foottit wanted to explore how professionals can navigate AI’s impact on their working lives – the same territory I cover in this newsletter, but focused specifically on career pivots.

We talked about the six Cs – communication, creativity, compassion, courage, collaboration, and curiosity – and why these human capabilities become more valuable, not less, as routine cognitive work gets automated. We discussed how to think about AI as a tool rather than a replacement, and why the people who thrive will be those who understand when to direct machines and when to trust themselves.

The conversations I’m having – with Gemma, Ruchi, the panellists at IBM, the debaters at Battersea – reinforce the central argument. Technology for good isn’t a slogan. It’s a practice. It requires intention, accountability, and a willingness to ask uncomfortable questions about who benefits and who gets left behind.

If you’re working on something that embodies that practice – whether it’s an accountability platform, a regenerative business model, or simply a team that’s figured out how to use AI without losing its humanity – I’d love to hear from you. These conversations are what fuel the newsletter.

The past

A month ago, I fired my one and only work colleague. It was the best decision for both of us. But the office still feels lonely and quiet without him.

Frank is a Jack Russell I’ve had since he was a puppy, almost five years ago. My daughter, only six months old when he came into our lives, grew up with him. Many people with whom I’ve had video calls will know Frank – especially if the doorbell went off during our meeting. He was the most loyal and loving dog, and for weeks after he left, I felt bereft. Suddenly, no one was nudging me in the middle of the afternoon to go for a much-needed, head-clearing stroll around the park.

Pic credit: Samer Moukarzel

So why did I rehome him?

As a Jack Russell, he is fiercely territorial. And where I live and work in south-east London, it’s busy. He was always on guard, trying to protect and serve me. The postman, Pieter, various delivery folk, and other people who came into the house have felt his presence, let’s say. Countless letters were torn to shreds by his vicious teeth – so many that I had to install an external letterbox.

A couple of months ago, as I tried to retrieve a sock that Frank had stolen and was guarding on the sofa, he snapped and drew blood. After multiple sessions with two different behaviourists, following previous incidents, he was already on a yellow card. If he bit me, who wouldn’t he bite? Red card.

The decision was made to find a new owner. I made a three-hour round trip to meet Frank’s new family, whose home is in the Norfolk countryside – much better suited to a Jack Russell’s temperament. After a walk together at a neutral venue, he travelled back to their house and apparently took 45 minutes to leave their car, snarling, unsure, and confused. It was heartbreaking to think he would never see me again.

But I knew Frank would be happy there. Later that day, I received videos of him dashing around fields. His new owners said they already loved him. A day later, they found the cartoon picture my daughter had drawn of Frank, saying she loved him, in the bag of stuff I’d handed them.

Now, almost a month on, the house is calmer. My daughter has stopped drawing pictures of Frank with tearful captions. And Frank? He’s made friends with Ralph, the black Labrador who shares his new home. The latest photo shows them sleeping side by side, exhausted from whatever countryside adventures Jack Russells and Labradors get up to together.

The proverb “if you love someone, set them free” helped ease the hurt. But there’s something else in this small domestic drama that connects to everything I’ve been writing about this month.

Bregman asks what ladder we’re climbing. Gemma describes an age where doing the wrong thing has become culturally acceptable. Ruchi builds systems that create accountability where none existed. And here I was, facing a much smaller question: what do I owe this dog?

The easy path was to keep him. To manage the risk, install more barriers, and hope for the best. The more challenging path was to acknowledge that the situation wasn’t working – not for him, not for us – and to make a change that felt like failure but was actually responsibility.

Moral ambition doesn’t only show up in accountability platforms and regenerative business models. Sometimes it’s in the quiet decisions: the ones that cost you something, that nobody else sees, that you make because it’s right rather than because it’s easy.

Frank needed space to run, another dog to play with, and owners who could give him the environment his breed demands. I couldn’t provide that. Pretending otherwise would have been a disservice to him and a risk to my family.

The age of badness that Gemma describes isn’t just about billionaires and politicians. It’s also about the small surrenders we make every day: the moments we choose convenience over responsibility, comfort over honesty, the path of least resistance over the path that’s actually right.

I don’t want to overstate this. Rehoming a dog is not the same as building YouthPOWER or challenging tax-avoiding elites at Davos. But the muscle is the same. The willingness to ask uncomfortable questions. The courage to act on the answers.

My daughter’s drawings have stopped. The house is quieter. And somewhere in Norfolk, Frank is sleeping on a Labrador, finally at peace.

Sometimes the most important thing you can do is recognise when you’re climbing the wrong ladder – and have the grace to climb down.

Statistics of the month

🛒 Cyber Monday breaks records
Today marks the 20th annual Cyber Monday, projected to hit $14.2 billion in US sales – surpassing last year’s record. Peak spending occurs between 8pm and 10pm, when consumers spend roughly $15.8 million per minute. A reminder that convenience still trumps almost everything. (National Retail Federation)

🎯 Judgment holds, execution collapses
US marketing job postings dropped 8% overall in 2025, but the divide is stark: writer roles fell 28%, computer graphic artists dropped 33%, while creative directors held steady. The pattern likely mirrors the UK – the market pays for strategic judgment; it’s automating production. (Bloomberry)

🛡️ Cybersecurity complacency exposed
More than two-fifths (43%) of UK organisations believe their cybersecurity strategy requires little to no improvement – yet 71% have paid a ransom in the past 12 months, averaging £1.05 million per payment. (Cohesity)

💸 Cyber insurance claims triple
UK cyber insurance claims hit at least £197 million in 2024, up from £60 million the previous year – a stark reminder that threats are evolving faster than our defences. (Association of British Insurers)

🤖 UK leads Europe in AI optimism
Some 88% of UK IT professionals want more automation in their day-to-day work, and only 10% feel AI threatens their role – the lowest of any European country surveyed. Yet 26% say they need better AI training to keep pace. (TOPdesk)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 22)


TL;DR: October’s Go Flux Yourself explores the epidemic of disconnection in our AI age. As 35% of Britons use smart doorbells to avoid human contact on Hallowe’en, and children face 2,000 social media posts daily, we’re systematically destroying the one skill that matters most: genuine human connection.

Image created on Midjourney

The future

“The most important single ingredient in the formula of success is knowing how to get along with people.”

Have we lost the knowledge of how to get along with people? And to what extent is an increasing dependence on large language models degrading this skill for adults, and not allowing it to bloom for younger folk?

When Theodore Roosevelt, the 26th president of the United States, spoke the above words in the early 20th century, he couldn’t have imagined a world where “getting along with people” would require navigating screens, algorithms, and artificial intelligence. Yet here we are, more than a century after he died in 1919, rediscovering the wisdom in the most unsettling way possible.

Indeed, this Hallowe’en, 35% of UK homeowners plan to use smart doorbells to screen trick-or-treaters, according to estate agents eXp UK. Two-thirds will ignore the knocking. We’re literally using technology to avoid human contact on the one night of the year when strangers are supposed to knock on our doors.

It’s the perfect metaphor for where we’ve ended up. The scariest thing isn’t what’s at your door. It’s what’s already inside your house.

Princess Catherine put it perfectly earlier in October in her essay, The Power of Human Connection in a Distracted World, for the Centre for Early Childhood. “While digital devices promise to keep us connected, they frequently do the opposite,” she wrote, in collaboration with Robert Waldinger, part-time professor of psychiatry at Harvard Medical School. “We’re physically present but mentally absent, unable to fully engage with the people right in front of us.”

I was a contemporary of Kate’s at the University of St Andrews in the wilds of East Fife, Scotland. We both graduated in 2005, a year before Twitter launched and a year after “TheFacebook” appeared. We lived in a world where difficult conversations happened face-to-face, where boredom forced creativity, and where friendship required actual presence. That world is vanishing with terrifying speed.

The Princess of Wales warns that an overload of smartphones and computer screens is creating an “epidemic of disconnection” that disrupts family life. Notably, her three kids are not allowed smartphones (and I’m pleased to report my eldest, aged 11, has a simple call-and-text mobile). “When we check our phones during conversations, scroll through social media during family dinners, or respond to emails while playing with our children, we’re not just being distracted, we are withdrawing the basic form of love that human connection requires.”

She’s describing something I explored in January’s newsletter about the “anti-social century”. As Derek Thompson of The Atlantic coined it, we’re living through a period marked by convenient communication and vanishing intimacy. We’re raising what Catherine calls “a generation that may be more ‘connected’ than any in history while simultaneously being more isolated, more lonely, and less equipped to form the warm, meaningful relationships that research tells us are the foundation of a healthy life”.

The data is genuinely frightening. Recent research from online safety app Sway.ly found that children in the UK and the US are exposed to around 2,000 social media posts per day. Some 77% say it harms their physical or emotional health. And, scariest yet, 72% of UK children have seen content in the past month that made them feel uncomfortable, upset, sad or angry.

Adults fare little better. A recent study on college students found that AI chatbot use is hollowing out human interaction. Students who used to help each other via class Discord channels now ask ChatGPT. Eleven out of 17 students in the study reported feeling more isolated after AI adoption.

One student put it plainly: “There’s a lot you have to take into account: you have to read their tone, do they look like they’re in a rush … versus with ChatGPT, you don’t have to be polite.”

Who needs niceties in the AI age?! We’re creating technology to connect us, to help us, to make us more productive. And it’s making us lonelier, more isolated, less capable of basic human interactions.

Marvin Minsky, who won the Turing Award back in 1969, said something that feels eerily relevant now: “Once the computers get control, we might never get it back. We would survive at their sufferance. If we’re lucky, they might decide to keep us as pets.”

He said that 56 years ago. We’re not there yet. But we’re building towards something, and whether that something serves humanity or diminishes it depends entirely on the choices we make now.

Anthony Cosgrove, who started his career at the Ministry of Defence as an intelligence analyst in 2003 and has earned an MBE, has seen this play out from the inside. Having led global teams at HSBC and now running data marketplace platform Harbr, he’s witnessed first-hand how organisations stumble into AI adoption without understanding the foundations.

“Most organisations don’t even know what data they already hold,” he told me over a video call a few weeks ago. “I’ve seen millions of pounds wasted on duplicate purchases across departments. That messy data reality means companies are nowhere near ready for this type of massive AI deployment.”

After spending years building intelligence functions and technology platforms at HSBC – first for wholesale banking fraud, then expanding to all financial crime across the bank’s entire customer base – he left to solve what he calls “the gap between having aggregated data and turning it into things that are actually meaningful”.

What jumped out from our conversation was his emphasis on product management. “For a really long time, there was a lack of product management around data. What I mean by that is an obsession about value, starting with the value proposition and working backwards, not the other way round.”

This echoes the findings I discussed in August’s newsletter about graduate jobs. As I wrote then, graduate jobs in the UK have dropped by almost two-thirds since 2022 – roughly double the decline for all entry-level roles. That’s the year ChatGPT launched. The connection isn’t coincidental.

Anthony’s perspective on this is particularly valuable. “AI can only automate fragments of a job, not replace whole roles – even if leaders desperately want it to.” He shared a conversation with a recent graduate who recognised that his data science degree would, ultimately, be useless. “The thing he was doing is probably going to be commoditised fairly quickly. So he pivoted into product management.”

This smart graduate’s instinct was spot-on. He’s now, in Anthony’s words, “actively using AI to prototype data products, applications, digital products, and AI itself. And because he’s a data scientist by background, he has a really good set of frameworks and set of skills”.

Yet the broader picture remains haunting. Microsoft’s 2025 Work Trend Index reveals that 71% of UK employees use unapproved consumer AI tools at work. Fifty-one per cent use these tools weekly, often for drafting reports and presentations, or even managing financial data, all without formal IT approval.

This “Shadow AI” phenomenon is simultaneously encouraging and terrifying. “It shows that people are agreeable to adopting these types of tools, assuming that they work and actually help and aren’t hard to use,” Anthony observed. “But the second piece that I think is really interesting impacts directly the shareholder value of an organisation.”

He painted a troubling picture: “If a big percentage of your employees are becoming more productive and finishing their existing work faster or in different ways, but they’re doing so essentially untracked and off-books, you now have your employees that are becoming essentially more productive, and some of that may register, but in many cases it probably won’t.”

Assuming that many employees are using AI for work without being open about it with their employers, how concerned about security and data privacy are they likely to be?

Earlier in the month, Cybernews discovered that two AI companion apps, Chattee Chat and GiMe Chat, exposed millions of intimate conversations from over 400,000 users. The exposed data contained over 43 million messages and over 600,000 images and videos.

At the time of writing, one of the apps, Chattee, was the 121st Entertainment app on the Apple App Store, downloaded over 300,000 times. This is a symptom of what people, including Microsoft’s AI chief Mustafa Suleyman (as per August’s Go Flux Yourself), are calling AI psychosis: the willingness to confide our deepest thoughts to algorithms while losing the ability to confide in actual humans.

As I explored in June 2024’s newsletter about AI companions, this trend has been accelerating. Back in March 2024, there had been 225 million lifetime downloads on the Google Play Store for AI companions alone. The problem isn’t scale. It’s the hollowing out of human connection.

Then there’s the AI bubble itself, which everyone in the space has been talking about in the last few weeks. The Guardian recently warned that AI valuations are “now getting silly”. The Cape ratio – measuring cyclically adjusted price-to-earnings ratios – has reached dotcom bubble levels. The “Magnificent 7” tech companies now represent slightly more than a third of the whole S&P 500 index.

OpenAI’s recent deals exemplify the circular logic propping up valuations. The arrangement under which OpenAI will pay Nvidia for chips and Nvidia will invest $100bn in OpenAI has been criticised as exactly what it is: circular. The latest move sees OpenAI pledging to buy lots of AMD chips and take a stake in AMD over time.

And yet amid this chaos, there are plenty of people going back to human basics: rediscovering real, in-person connection through physical activity and genuine community.

Consider walking football in the UK. What began in Chesterfield in 2011 as a gentle way to coax older men back into exercise has become one of Britain’s fastest-growing sports. More than 100,000 people now play regularly across the UK, many managing chronic illnesses or disabilities. It’s a sport that has become “a masterclass in human communication” that no AI could replicate. Tony Jones, 70, captain of the over-70s, described it simply: “It’s the camaraderie, the dressing room banter.”

Research from Nottingham Trent University found that walking footballers’ emotional well-being exceeded the national average, and loneliness was less common. “The national average is about 5% for feeling ‘often lonely’,” said Professor Ian Varley. “In walking football, it was 1%.”

This matters because authentic human interaction – the kind that requires you to read body language, manage tone, and show up physically – can’t be automated. Princess Catherine emphasises this in her essay, citing Harvard Medical School’s research showing that “the people who were more connected to others stayed healthier and were happier throughout their lives. And it wasn’t simply about seeing more people each week. It was about having warmer, more meaningful connections. Quality trumped quantity in every measure that mattered.”

The digital world offers neither warmth nor meaning. It offers convenience. And as Catherine warns, convenience is precisely what’s killing us: “We live increasingly lonelier lives, which research shows is toxic to human health, and it’s our young people (aged 16 to 24) that report being the loneliest of all – the very generation that should be forming the relationships that will sustain them throughout life.”

Roosevelt understood this instinctively over a century ago: success isn’t about what you know or what you can do. It’s about how you relate to other people. That skill – the ability to truly connect, to read a room, to build trust, to navigate conflict, to offer genuine empathy – remains stubbornly, beautifully human.

And it’s precisely what we’re systematically destroying. If we don’t take action to arrest this dark and deepening trend of digitally supercharged disconnection, the dream of AI and other technologies being used for enlightenment and human flourishing will quickly prove to be a living nightmare.

The present

Image runner’s own

As the walking footballers demonstrate, the physical health benefits of group exercise are sometimes secondary to camaraderie – but winning and hitting goals are also fun and life-affirming. In October, I ran my first half-marathon in under 1 hour and 30 minutes. I crossed the line at Walton-on-Thames to complete the River Thames half at 1:29:55. A whole five seconds to spare! I would have been nowhere near that time without Mike.

Mike is a member of the Crisis of Dads, the running group I founded in November 2021. What started as a clutch of portly, middle-aged plodders meeting at 7am every Sunday in Ladywell Fields, in south-east London, has grown to 26 members. Men in their 40s and 50s exercising to limit the dad bod and creating space to chat through things on our minds.

The male suicide rate in the UK in 2024 was 17.1 per 100,000, compared to 5.6 per 100,000 for women, according to the charity Samaritans. Males aged 50-54 had the highest rate: 26.8 per 100,000. Connection matters. Friendship matters. Physical presence matters.

Mike paced me during the River Thames half-marathon. With two miles to go, we were on track to go under 90 minutes, but the pain was horrible. His encouragement became more vocal – and more profane – as I closed in on something I thought beyond my ability.

Sometimes you need someone who believes in your ability more than you do to swear lovingly at you to cross that line quicker.

Work in the last month has been equally high octane, and (excuse the not-so-humble brag) record-breaking – plus full of in-person connection. My fledgling thought leadership consultancy, Pickup_andWebb (combining brand strategy and journalistic expertise to deliver guaranteed ROI – or your money back), is taking flight.

And I’ve been busy moderating sessions at leading technology events across the country, around the hot topic of how to lead and prepare the workforce in the AI age.

Moderating at DTX London (image taken by organisers)

On the main stage at DTX London, I opened by using the theme of the session about AI readiness to ask the audience whose workforce was suitably prepared. One person, out of hundreds, stuck their hand up: Andrew Melville, who leads customer strategy for Mission Control AI in Europe. Sportingly, he took the microphone and explained the key to his success.

I caught him afterwards. His confidence wasn’t bravado. Mission Control recently completed a data reconciliation project for a major logistics company. The task involved 60,000 SKUs of inventory data. A consulting firm had quoted two to three months and a few million pounds. Mission Control’s AI configuration completed it in eight hours. Orders of magnitude faster, and 80% cheaper.

“You’re talking orders of magnitude,” Andrew said. “We’re used to implementing an Oracle database, and things get 5 or 10% more efficient. Now you’re seeing a thousand times more efficiency in just a matter of days and hours.”

He drew a parallel to the Ford Motor Company’s assembly line. Before that innovation, it took 12 hours to build a car. After? Ninety minutes. Eight times faster. “Imagine being a competitor of Ford,” Andrew said, “and they suddenly roll out the assembly line. And your response to that is: we’re going to give our employees power tools so they can build a few more cars every day.”

That’s what most companies are doing with AI: giving workers ChatGPT subscriptions, hoping for magic, and missing the fundamental transformation required. As I said on stage at DTX London, it’s like handing workers the keys to a Formula 1 car without instruction, then wondering why there are so many immediate and expensive crashes.

“I think very quickly what you’re going to start seeing,” Andrew said, “is executives that can’t visualise what an AI transformation looks like are going to start getting replaced by executives that do.”

At Mission Control, he’s building synthetic worker architectures – AI agents that can converse with each other, collaborate across functions, and complete higher-order tasks. Not just analysing inventory data, but coordinating with procurement systems and finance teams simultaneously.

“It’s the equivalent of having three human experts in different fields,” Andrew explained, “and you put them together and you say, we need you to connect some dots and solve a problem across your three areas of expertise.”

The challenge is conceptual. How do you lead a firm where human workers and digital workers operate side by side, where the tasks best suited for machines are done by machines and the tasks best suited for humans are done by humans?

This creates tricky questions throughout organisations. Right now, most people are rewarded for being at their desks for 40 hours a week. But what happens when half that time involves clicking around in software tools, downloading data sets, reformatting, and loading back? What happens when AI can do all of that in minutes?

“We have to start abstracting the concept of work,” Andrew said, “and separating all of the tasks that go into creating a result from the result itself.”

Digging into that is for another edition of the newsletter, coming soon. 

Elsewhere, at the first Data Decoded in Manchester, I moderated a 30-minute discussion on leadership in the age of AI. We were just getting going when time was up, which feels very much like 2025. The appetite for genuine insight was palpable. People are desperate for answers beyond the hype. Leaders sense the scale of the shift. However, their calendars still favour show-and-tell over do-and-learn. That will change, but not without bruises.

Also in October, my essay on teenage hackers was finally published in the New Statesman. The main message is that we’re criminalising the young people whose skills we desperately need, instead of offering them a path into cybersecurity and related industries, away from the darker criminal world.

Looking slightly ahead, on 11 November, I’ll be expanding on these AI-related themes, debating at The Portfolio Collective’s Portfolio Career Festival at Battersea Arts Centre. The subject, Unlocking Potential or Chasing Efficiency: AI’s Impact on Portfolio Work, prompts the question: should professionals embrace AI as a tool to amplify skills, creativity and flow, or hand over entire workflows to autonomous agents?

I know which side I’m on. 

(If you fancy listening in and rolling your sleeves up alongside over 200 ambitious professionals – for a day of inspiration, connection and, most importantly, growth – I can help with a discounted ticket. Use OLIVERPCFEST for £50 off the cost here.)

The past

In 2013, I was lucky enough to edit the Six Nations Guide with Lewis Moody, the former England rugby captain, a blood-and-thunder flanker who clocked up 71 caps. At the time, Lewis was a year into retirement, grappling with the physical aftermath of a brutal professional career.

When the tragic news broke earlier in October that Lewis, 47, had been diagnosed with the cruelly life-sapping motor neurone disease (MND), it unleashed a waterfall of sorrow from the rugby community and far beyond. I simply sent him a heart emoji. He texted the same back a few hours later.

Lewis’s hellish diagnosis and the impact it has had on so many feels especially poignant given Princess Catherine’s reflections on childhood development. She writes about a Harvard study showing that “people who developed strong social and emotional skills in childhood maintained warmer connections with their spouses six decades later, even into their eighties and nineties”.

She continued: “Teaching children to better understand both their inner and outer worlds sets them up for a lifetime of healthier, more fulfilling relationships. But if connection is the key to human thriving, we face a concerning reality: every social trend is moving in the opposite direction.”

AI has already changed work. The deeper question is whether we’ll preserve the skills that make us irreplaceably human.

This Halloween, the real horror isn’t monsters at the door. It’s the quiet disappearance of human connection, one algorithmically optimised interaction at a time.

Roosevelt was right. Success depends on getting along with people. Not algorithms. Not synthetic companions. Not virtual influencers.

People.

Real, messy, complicated, irreplaceable people. 

Statistics of the month

💰 AI wage premium grows
Workers with AI skills now earn a 56% wage premium compared to colleagues in the same roles without AI capabilities – showing that upskilling pays off in cold, hard cash. (PwC)

🔄 A quarter of jobs face radical transformation
Roughly 26% of all jobs on Indeed appear poised to transform radically in the near future as GenAI rewrites the DNA of work across industries. (Indeed)

📈 AI investment surge continues
Over the next three years, 92% of companies plan to increase their AI investments – yet only 1% of leaders call their companies “mature” on the deployment spectrum, revealing a massive gap between spending and implementation. (McKinsey)

📉 Workforce reduction looms
Some 40% of employers expect to reduce their workforce where AI can automate tasks, according to the World Economic Forum’s Future of Jobs Report 2025 – a stark reminder that transformation has human consequences. (WEF)

🎯 Net job creation ahead
A reminder that despite fears, AI will displace 92 million jobs but create 170 million new ones by 2030, resulting in a net gain of 78 million jobs globally – proof that every industrial revolution destroys and creates in equal (or greater) measure. (WEF)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 20)

TL;DR: August’s edition examines how companies are eliminating graduate jobs while redirecting recruitment budgets to failing AI pilots. From ancient rhetoric becoming essential survival skills to pre-social media university life, this edition explores why authentic human connection is our ultimate competitive advantage.

August's Go Flux Yourself explores:

The graduate job collapse: Entry-level positions requiring degrees have dropped by two-thirds since 2022, while students face £53,000 average debt. Stanford research reveals young workers in AI-exposed jobs experienced 13% employment decline as older colleagues in identical roles saw steady growth

The failing AI paradox: 95% of corporate AI pilots deliver no meaningful returns, yet companies redirect graduate recruitment budgets to these failing technologies. Half of UK firms now want to swap staff spending for AI investment despite zero evidence of productivity gains

The strategic generation: Anecdotal examples illustrate how young people aren't competing with AI but conducting it – using ChatGPT for interior design, creating revision podcasts, and embracing technology as "another thing to make life easier"

The pre-digital simplicity: Twenty-four years ago at St Andrews, Prince William was just another tutorial student alongside Oliver in a world without Facebook, smartphones, or AI assistants. Physical books, card catalogues, and pub conversations shaped minds through friction rather than convenience

To read the full newsletter, please visit www.oliverpickup.com.

Image created on Midjourney

The future

“First, we made our jobs robotic. Then we trained the robots how to do them. If AI takes your job, it won’t be because it’s so smart. It will be because over time we’ve made work so narrow, so repetitive, so obsessed with minimising variance and deferring to metrics, that it’s perfectly designed for machines.”

Tom Goodwin’s above observation about how we’ve made our jobs robotic before training robots to do them, articulated in mid-August on LinkedIn, feels remarkably prescient as thousands of teenagers prepare for university. When I interviewed the business transformation consultant, bullshit caller / provocateur, and media expert in 2022, following an update to his book Digital Darwinism, he warned about a looming leadership crisis. That crisis is now playing out in ways he probably didn’t fully anticipate.

The timing of the post couldn’t be more brutal. It’s been 25 years since I received my A-level results. Yet, I can still easily recall the pre-reveal trepidation followed by relief that I’d scraped the grades to study English Literature and Language at the University of St Andrews (as a peer of Prince William – more on this below, including a 20-year-old graduation picture).

What a thrilling time of year it should be: the end of school, then taking the next step on life’s magical journey, be it straight to university without passing go, a gap year working and then travelling, or eschewing higher education to begin a career.

I wonder how this year’s A-level leavers feel, given they’re walking into the most uncertain graduate job market in a generation. The promises made to them about university – to study hard, earn a degree, and secure a good job – are crumbling in real time.

Data from job search engine Adzuna suggests that job postings for entry-level positions requiring a degree have dropped by almost two-thirds in the UK since 2022, roughly double the decline for all entry-level roles (as quoted in the Financial Times). The same research found that entry-level hiring is down 43% in the US, and 67% in the UK, since ChatGPT launched in November 2022.

The study results tally with other sources. In June, for instance, UK graduate job openings had plunged by 75% in banking and finance, 65% in software development, and 54% in accounting compared to the same month in 2019, according to Indeed (also in the FT piece).

Meanwhile, students graduating from universities in England in 2025 have an average student loan debt of approximately £53,000, with total outstanding loans reaching £267 billion. Frankly, is university worth it today?

I was fortunate enough to be part of the last cohort to benefit from minimal tuition fees in Scotland before they were introduced for all students in the 2005-6 academic year. Further, when I studied for my postgraduate degree in magazine journalism at Cardiff University’s JOMEC, we were (verbally and anecdotally) guaranteed jobs within a year; and, as far as I know, all my peers achieved that. Such certainty feels alien now, even quaint.

But where does this trend lead? What happens when an entire generation faces systematic exclusion from entry-level professional roles?

A Stanford University study tracking millions of workers through ADP payroll data revealed something rather more troubling: young workers aged 22-25 in “highly AI-exposed” jobs experienced a 13% employment decline since OpenAI released its LLM just under three years ago, while older colleagues in identical roles saw steady or rising employment.

Arguably, we’re witnessing the first generation where machines are genuinely better at doing what universities taught them than they are.

Erik Brynjolfsson, one of the Stanford paper’s co-authors (and a professor whom I interviewed a couple of months after ChatGPT was unveiled – even back then he was warning about the likely problems with AI acceleration and jobs), put it bluntly: “There’s a clear, evident change when you specifically look at young workers who are highly exposed to AI.” 

The research controlled for obvious alternatives — COVID effects, tech sector retrenchment, interest rate impacts — and the correlation held. Software developers and customer service agents under the age of 25 saw dramatic employment drops. Home health aides, whose work remains both physically and emotionally demanding, saw employment rise.

The distinction matters. AI isn’t just replacing workers randomly, but it’s targeting specific types of work. The Stanford team found that occupations where AI usage is more “automative” (completely replacing human tasks) showed substantial employment declines for young people. In contrast, “augmentative” uses (where humans collaborate with AI) showed no such pattern.

Anthropic CEO Dario Amodei warned in May that half of “administrative, managerial and tech jobs for people under 30” could vanish within five years. He’s probably being conservative.

However, what’s especially troubling about this shift is that new MIT research, The GenAI Divide: State of AI in Business 2025, suggests that many AI deployment programmes are failing to deliver the expected returns, with companies struggling to show meaningful productivity gains from their technology investments. Specifically, 95% of generative AI pilots at companies are failing, delivering next to no return on investment.

Despite this, organisations continue redirecting budgets from graduate recruitment to AI initiatives. Half of UK companies now want to redirect money from staff to AI, according to Boston Consulting Group research.

This creates a dangerous paradox: companies are cutting the graduate pipeline that develops future leaders while betting on technologies that haven’t yet proven their worth. What happens to organisational capability in five years when the cohort of junior professionals who should be stepping into senior roles doesn’t exist, or those who are in the job market lack any meaningful experience?

This connects directly to Tom Goodwin’s observation. The combined forces of consulting culture, efficiency obsessions, and metric-driven management have reshaped roles once built on creativity, empathy, relationships, imagination, and judgment into “checklists, templates, and dashboards”. We stripped away the human qualities that made work interesting and valuable, creating roles “perfectly designed for machines and less worth doing for humans”.

Consider those entry-level consulting, law, and finance roles that have vanished. They were built around tasks like document review, basic data analysis, research synthesis, and report formatting – precisely the narrow, repetitive work at which large language models excel.

Yet amid this disruption, there are signals of adaptation and hope. During a recent conversation I had with Joanne Werker, CEO of the people engagement company Boostworks, she shared statistics and personal insights that capture both the challenges and the opportunities facing this generation. Her organisation’s latest research, published in late July, indicates that 57% of Gen Z and 71% of Millennials are exploring side hustles, not as passion projects, but to survive financially. Taking a positive view of this situation, one could argue that this will be a boon for innovation, given that necessity is the mother of invention.

Also noteworthy is that nearly one in five Gen Zers is already working a second job.

Joanne’s daughters illustrate a different relationship with AI entirely. One, aged 30, works in music, while the other, 24, is in fashion – both creative fields where AI might be expected to pose a threat. Instead, they don’t fear the technology but embrace it strategically. The younger daughter used ChatGPT to redesign the family’s living room, inputting photos and receiving detailed interior design suggestions that impressed even Jo’s initially sceptical husband. As Joanne says, both daughters use AI tools not to replace their creativity, but to “be smarter and faster and better”, for work and elsewhere. “The younger generation’s response to AI is just, ‘OK, this is another thing to make life easier.’”

This strategic approach extends to education. Nikki Alvey, the (brilliant) PR pro who facilitated my conversation with Jo, has children at the right age to observe this transition. Her son, who just completed his A-levels, used AI extensively for revision, creating quizzes, podcasts, and even funny videos from his notes. As Nikki pointed out: “I wish I’d had that when I was doing GCSEs, A-levels, and my degree; we barely had the internet.”

Elsewhere, her daughter, who is studying criminology at the University of Nottingham, operates under different constraints. Her university maintains a blanket ban on AI use for coursework, though she uses it expertly for job applications and advocacy roles. This institutional inconsistency reflects higher education’s struggle to adapt to technological reality: some universities Nikki’s son looked at were actively discussing AI integration and proper citation methods, while others maintain outright bans.

Universities that nail their AI policy will recognise that future graduates need capabilities that complement, rather than compete with, AI. This means teaching students to think critically about information sources.

As I described during a recent conversation with Gee Foottit on St James’s Place Financial Adviser Academy’s ‘The Switch’ podcast: “Think of ChatGPT as an army of very confident interns. Don’t trust their word. They may hallucinate. Always verify your sources and remain curious. Having that ‘truth literacy’ will be crucial in the coming years.”

Related to this, LinkedIn’s Chief Economic Opportunity Officer Aneesh Raman describes this as the shift from a knowledge economy to the “relationship economy”, where distinctly human capabilities matter most.

The Stanford research offers clues about what this means. In occupations where AI augments rather than automates human work – such as complex problem-solving, strategic thinking, emotional intelligence, and creative synthesis – young workers aren’t being displaced.

Success won’t come from competing with machines on their terms, but from doubling down on capabilities that remain uniquely human. 

On The Switch podcast episode, which will be released soon (I’ll share a link when it does), I stressed that the future belongs to those – young and old – who can master what I call the six Cs of skills to dial up: 

  • Collaboration
  • Communication
  • Compassion
  • Courage
  • Creativity
  • and Curiosity

These are no longer soft skills relegated to HR workshops but survival capabilities for navigating technological disruption.

There’s a deeper threat lurking, though. The issue isn’t that the younger generations are AI-literate while their elders struggle with new technology; it’s whether we understand how to maintain our humanity while leveraging these tools.

No doubt nurturing the six Cs will help, but a week or so ago Microsoft’s AI chief, Mustafa Suleyman, described something rather more unsettling – “AI psychosis”: a phenomenon where vulnerable individuals develop delusions after intensive interactions with chatbots. In a series of posts on X, he wrote that “seemingly conscious AI” tools are keeping him “awake at night” because of their societal impact, even though the technology isn’t conscious by any human definition.

“There’s zero evidence of AI consciousness today,” Suleyman wrote. “But if people just perceive it as conscious, they will believe that perception as reality.”

The bitter irony is that the capabilities we now desperately need – namely, creativity, empathy, relationships, imagination, and judgement – are exactly what we stripped out of entry-level work to make it “efficient”. Now we need them back, but we’ve forgotten how to cultivate them at scale.

The generation entering university in September may lack traditional job security, but they possess something their predecessors didn’t: the ability to direct AI while (hopefully) remaining irreplaceably human. And that’s not a consolation prize. It’s a superpower.

The present

On stage with Tomás O’Leary at Origina Week

I tap these words on the morning of August 29 from seat 24F on Aer Lingus EI158 from Dublin to London Heathrow, flying high after a successful 48 hours on the Emerald Isle. A software client, Origina, flew me in. I’ve been assisting CEO Tomás O’Leary with his thought leadership and the company’s marketing messaging for over a year (his words of wisdom on pointless software upgrades and needless infrastructure strain featured in my July newsletter).

Having struck up a bond – not least thanks to reminiscing about our days playing rugby union (we were both No8s, although admittedly I’m a couple of inches shorter than him) – Tomás invited me to participate in Origina Week. This five-day extravaganza mixes serious business development with serious fun and learning.

Tomás certainly made me work for my barbecued supper at the excellent Killashee Spa Hotel: on Thursday, I was on stage, moderating three sessions for three consecutive hours. The final session – the last of the main programme – was a fireless “fireside” chat between me and Tomás about technology trends as I see them, and his reflections on their relevance to the software space.

I was grateful to make some superb connections, be welcomed deeper into the bosom of the Origina family, and hear some illuminating presentations, especially behavioural psychologist Owen Fitzpatrick’s session on the art of persuasion. 

Watching Owen work was a masterclass in human communication, which no AI could replicate. For 90 minutes, around 250 people from diverse countries and cultures were fully engaged, leaning forward, laughing, and actively participating. This was neural coupling in action: the phenomenon where human brains synchronise during meaningful interaction. No video call, no AI assistant, no digital platform could have generated that energy.

This is what Tomás understood when he invested in bringing his global team together in the Irish capital. While many executives slash training budgets and rely on digital-only interactions, he recognises that some learning only happens face-to-face. That’s increasingly rare leadership in an era where companies are cutting human development costs while pouring billions into AI infrastructure.

Owen’s session focused on classical rhetoric: the ancient art of persuasion, which has become increasingly relevant in our digital age. He walked us through the four elements: ethos (credibility), logos (logic), pathos (emotion), and demos (understanding your audience). These are precisely the human skills we need as AI increasingly handles our analytical tasks.

It was a timely keynote. Those who have completed their A-levels this summer are entering a world where the ability to persuade, connect, and influence others becomes more valuable than the ability to process information.

Yet we’re simultaneously experiencing what recent research from O.C. Tanner calls a recognition crisis. Its State of Employee Recognition Report 2025 found that UK employees expect in-person interactions with recognition programmes to double over the next few years, from 37% to 74%. These include handwritten notes, thank-you cards, and award presentations. People are craving authentic human interaction precisely because it’s becoming scarce.

Recent data from Bupa reveals that just under a quarter (24%) of people feel lonely or socially isolated due to work circumstances, rising to 38% among 16-24-year-olds. Over a fifth of young workers (21%) say their workplace provides no mental health support, with 45% considering moves to roles offering more social interaction.

Also, new research from Twilio reveals that more than one-third of UK workers (36%) demand formally scheduled “digital silence” from their workplace. Samantha Richardson, Twilio’s Director of Executive Engagement, observed: “Technology has transformed how we work, connect, and collaborate – largely for the better. But as digital tools become increasingly embedded in everyday routines, digital downtime may be the answer to combating the ‘always-on’ environment that’s impeding productivity and damaging workplace culture.”

This connects to something that emerged from Owen’s session. He described how the most powerful communication occurs through contrast, repetition, and emotional resonance – techniques that require human judgment, cultural understanding, and real-time adaptation. These are precisely the skills that remain irreplaceable in an AI world.

Consider how Nikki’s son used AI for revision. Rather than passively consuming information or getting out the highlighter pens and mapping information out on a big, blank piece of paper (as I did while studying, and still do sometimes), he actively prompted the technology to create quizzes, podcasts, and videos tailored to his learning style. This was not AI replacing human creativity, but human creativity directing AI capabilities.

The challenge for today’s graduates isn’t avoiding AI, but learning to direct it purposefully. This requires exactly the kind of critical thinking and creative problem-solving that traditional education often neglects in favour of information retention and standardised testing.

What’s particularly striking about the current moment is how it echoes patterns I’ve observed over the past year of writing this newsletter. In June 2024’s edition, I explored how AI companions were already changing human relationships. I’ve also written extensively about the “anti-social century” and our retreat from real-world connection. Now we’re seeing how these trends converge in the graduate employment crisis: technology is doing more than just transforming what we do. It is also changing how we relate to each other in the process. 

On this subject, I’m pleased to share the first of a new monthly podcast series I’ve begun with long-term client Clarion Events, which organises the Digital Transformation Expo (DTX) events in London and Manchester. The opening episode of DTX Unplugged features Nick Hodder, Director of Digital Transformation and Engagement at the Imperial War Museums (IWM), highlighting why meaningful business transformation begins with people, not technology.

The answer, whether in a hotel conference room in Dublin or a corporate office in Manchester, remains the same: in a world of AI, our ability to connect authentically with other humans has become our competitive edge.

The past

Twenty-four years ago in September, I sat in my first tutorial at the University of St Andrews – nine students around a table, including Prince William and seven young women. That tutorial room held a particular energy. We were there to think, to question, and to argue – with the texts and with each other – about ideas that mattered. Will, who played for my Sunday League football team, was just another student.

The economic backdrop was fundamentally different. Graduate jobs were plentiful, social media was (thankfully) nascent – Facebook was three years away, and only mildly registered in my final year, 2004-05 – and so partying with peers was authentic, and free from fears of being digitally damned. Moreover, the assumption that a degree led to career success felt unshakeable because it was demonstrably true.

The social contract was clearer, too. Society invested in higher education as a public good that would generate returns through increased productivity, innovation, and civic engagement. Students could focus on learning rather than debt management because the broader community bore the financial risk in exchange for shared benefits.

My graduation day at the University of St Andrews in 2005

Looking back, what strikes me most is the simplicity of the intellectual environment. We read physical books, researched in libraries using card catalogues, and had no digital devices in lecture halls or tutorial rooms. (And the computers we had in our rooms took up a colossal amount of space.) Our critical thinking developed through friction: the effort required to find information, synthesise arguments from multiple sources, and express ideas clearly without technological assistance.

Knowledge felt both scarce and valuable precisely because it was hard to access. You couldn’t Google historical facts during seminars. If you hadn’t done the reading, everyone knew. If your argument was poorly constructed, there was nowhere to hide. The constraints forced genuine intellectual development.

The human connections formed during those four years proved more valuable than any specific subject knowledge. Late-night debates in residence halls, study groups grappling with challenging texts, and casual conversations between lectures – these experiences shaped how we thought and who we became.

We could explore medieval history, philosophical arguments, or literary criticism without worrying whether these subjects would directly translate to career advantages. The assumption was that broad intellectual development would prove valuable, even if connections weren’t immediately obvious. (Again, I was fortunate to be in the last cohort of subsidised university education.)

That faith in indirect utility seems almost lost now. Today’s students, facing massive debt burdens, quite reasonably demand clear pathways from educational investment to career outcomes. The luxury of intellectual exploration for its own sake becomes harder to justify when each module costs hundreds – if not thousands – of pounds.

Some elements remain irreplaceable. The structured opportunity to develop critical thinking skills, build relationships with peers and mentors, and discover intellectual passions in supportive environments still offers unique value. 

Indeed, these capabilities matter more now than they did a quarter of a century ago. When information is abundant but truth is contested, when AI can generate convincing arguments on any topic, and when economic structures are shifting rapidly, the ability to think independently becomes genuinely valuable rather than merely prestigious.

My 10-year-old son will reach university age by 2033. By then, higher education will have undergone another transformation. The economics might involve shorter programmes, industry partnerships, apprenticeship alternatives, or entirely new models that bypass traditional degrees. But the fundamental question remains unchanged: how do we prepare young minds to think independently, act ethically, and contribute meaningfully to society?

The answer may require reimagining university education entirely. Perhaps residential experiences focused on capability development rather than content transmission. Maybe stronger connections between academic learning and real-world problem-solving. Possibly more personalised pathways that recognise different learning styles and career ambitions. What won’t change is the need for structured environments where young people can develop their humanity while mastering their chosen fields of expertise. 

The students who opened their A-level results last month deserve better. They deserve educational opportunities that develop their capabilities without crushing them with debt. They deserve career pathways that use their human potential rather than competing with machines on machine terms. Most importantly, they deserve honest conversations about what higher education can and cannot provide in an age of technological disruption.

Those conversations should start with acknowledging what that tutorial room at St Andrews represented: human minds engaging directly with complex ideas, developing intellectual courage through practice, and building relationships that lasted decades (although my contact with Prince Will mysteriously stopped after I began working at the Daily Mail Online!). 

These experiences – whether at university or school, or elsewhere – remain as valuable as ever. The challenge is whether we can create sustainable ways to provide them without bankrupting the people who need them most.

Statistics of the month

🎓 A-level computing drops
Computing A-level entries fell by 2.8% in the UK despite years of growth, though female participation rose 3.5% to reach 18.6% of students taking the subject. Meanwhile, maths remains most popular with 112,138 students, but girls represent just 37.3% of the cohort. 🔗

👩‍💼 AI skills gender divide widens
Only 29% of women report having AI skills compared to 71% of men, while nearly 70% of UK jobs face high AI exposure. Under half of workers have been offered AI-related upskilling opportunities. 🔗

💰 Office return costs surge
UK employees spend an average £25 daily on commuting and expenses when working from the office, potentially costing nearly £3,500 annually in commuting alone if expected to be in the office for five days a week. 🔗

🏢 Summer hiring advantage emerges
Some 39% of UK businesses have struggled to hire in the last 12 months, with competition and slow hiring cited as key barriers. 🔗

🌍 Extreme poverty redefined
The World Bank raised its International Poverty Line from $2.15 to $3 per day, adding 125 million people to extreme poverty statistics. However, global extreme poverty has still fallen from 43% in 1990 to 10% today. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 17)

TL;DR May’s Go Flux Yourself explores how, in a world where intelligence is becoming cheap, easy, and infinite, the concept of childhood and adolescence is being rewritten. Are AI tools empowering young people or quietly corroding their minds?

Image created on Midjourney

The future

“If I were the devil, I wouldn’t come with violence. I’d come with convenience. I’d give them everything they wanted, and strip them of everything they needed. I’d make them feel connected, but completely alone, endlessly entertained, but quietly empty.”

The above is part of an answer by ChatGPT to a question posed recently by lifestyle influencer Alexia Ferrari, aka themillionairemaker on social media channels. This was the puzzler: “If you were the devil, how would you destroy the young minds of the next generation without them even knowing it?”

And its resonance – evidenced by millions of views and thousands of comments – suggests that she struck a nerve. The AI’s response was disturbingly eloquent.

Following on from the opening quotation, above, it continued: “I’d blur the lines between truth and opinion, until nothing meant anything. I’d teach them to chase dopamine, not purpose, to worship self, but love who they are. I wouldn’t destroy them with war, I’d destroy them with apathy. I’d keep them distracted, numb, scrolling, always scrolling.”

So far, so horrific. There was more. “I’d dismantle the family by redefining it. Make fathers optional, mothers exhausted, and children confused. I’d hand them a phone before they could speak in full sentences. And I’d watch their minds rot slowly, sweetly, silently. And the most brilliant part? They’d never know it was me. They’d call it freedom.” 

Ultimately, it described a world that feels uncomfortably like our own. The children are not only online but are being remade by it. In response, and perhaps seeking some philosophical distance, I asked Claude, Anthropic’s AI, the same question. Its answer was more measured, but no less damning.

“Let me suggest a rather chilling thought experiment,” Claude began, with a silent cackle. “If I were tasked with undermining young minds whilst keeping them blissfully unaware, I’d focus on their digital habits – the very tools they trust most. First, I’d flood their feeds with endless, mindless content. Not obviously harmful stuff, mind you, but material so perfectly tailored to their dopamine receptors that genuine curiosity would wither away. Why wonder about the world when the algorithm already knows what you want? 

“Then I’d weaponise comparison. Social media would become a hall of mirrors where everyone else appears happier, more successful, better looking. The constant measuring up would erode self-worth drip by drip. I’d fragment their attention spans. Tasks requiring sustained focus would feel increasingly impossible as their brains rewired for quick hits of stimulation. Reading a book? Ancient history. Deep thinking? Too much effort. 

“Perhaps most insidiously, I’d create echo chambers so convincing that opposing viewpoints would seem not just wrong, but incomprehensible. Critical thinking would rust from disuse. The beauty of this approach? They’d defend their digital chains. The very devices stealing their potential would feel like lifelines. They’d scroll willingly into intellectual darkness, smartphone in hand, convinced they’re more connected than ever. Frightening, really, how closely this mirrors aspects of our current reality. Makes you wonder what we might do differently, doesn’t it?”

Makes one wonder, indeed. These warnings, eerily congruent despite coming from different large language models, do not speak of potential threats. They describe current conditions. They’re not predicting a future. They’re diagnosing a present.

The effect is cumulative. Essentially, as parents, educators, and adults, we are outsourcing more and more of the hard cognitive lifting – research, memory, even curiosity – to machines. And what we once called “childhood” is now a battleground between algorithms and agency.

I’m typing these words as I train back to London from Cheshire, where I was in the countryside with my two young children, at my parents’ house. This half term, we escaped the city for a few days of greenery and generational warmth. (The irony here is that while walks, books and board games dominated the last three days, my daughter is now on a maths game on an iPad, and my older son is blowing things up on his Nintendo Switch – just for an hour or so while I diligently polish this newsletter.) 

There were four-week-old lambs in the field next door, gleefully gambolling. The kids cooed. For a moment, all was well. But as they scampered through the grass, I thought: how long until this simplicity is overtaken by complexity? How long until they’re pulled into the same current sweeping the rest of us into a world of perpetual digital mediation?

That question sharpened during an in-person roundtable I moderated for Cognizant and Microsoft a week ago. The theme was generative AI in financial services, but the most provocative insight came not from a banker but from technologist David Fearne. “What happens,” he asked, “when the cost of intelligence sinks to zero?”

It’s a question that has since haunted me. Because it’s not just about jobs or workflows. It’s about meaning.

If intelligence becomes ambient – like electricity, always there, always on – what is the purpose of education? What becomes of effort? Will children be taught how to think, or simply how to prompt?

The new Intuitive AI report, produced by Cognizant and Microsoft, outlines a corporate future in which “agentic AI” becomes a standard part of every team. These systems will do much more than answer questions. They will anticipate needs, draft reports, analyse markets, and advise on strategy. They will, in effect, think for us. The vision, says Cognizant’s Fearne, is to build an “agentic enterprise”, which moves beyond isolated AI tools to interconnected systems that mirror human organisational structures, with enterprise intelligence coordinating task-based AI across business units.

That’s the world awaiting today’s children. A world in which thinking might not be required, and where remembering, composing, calculating, synthesising – once the hallmarks of intelligence – are delegated to ever-helpful assistants. 

The risk is that children become, well, lazy, or worse, they never learn how to think in the first place.

And the signs are not subtle. Gallup’s latest State of the Global Workforce study, published in April, reports that only 21% of the global workforce is actively engaged, a record low. Digging deeper, only 13% of the workforce is engaged in Europe – the lowest of any region – and in the UK specifically, just 10% of workers are engaged in their jobs.

Meanwhile, the latest Microsoft Work Trend Index shows 53% of the global workforce lacks sufficient time or energy for their work, with 48% of employees feeling their work is chaotic and fragmented.

If adults are floundering, what hope is there for the generation after us? If intelligence is free, where will our children find purpose?

Next week, on June 4, I’ll speak at Goldsmiths, University of London, as part of a Federation of Small Businesses event. The topic: how to nurture real human connection in a digital age. I will explore the “anti-social century” we’ve stumbled into, largely thanks to the “convenience” of technology alluded to in that first ChatGPT answer. The anti-social century, as coined by The Atlantic’s Derek Thompson earlier this year, is one marked by convenient communication and vanishing intimacy, AI girlfriends and boyfriends, Meta-manufactured friendships, and the illusion of connection without its cost.

In a recent LinkedIn post, Tom Goodwin, a business transformation consultant, provocateur and author (whom I spoke with about a leadership crisis three years ago), captured the dystopia best. “Don’t worry if you’re lonely,” he winked. “Meta will make you some artificial friends.” His disgust is justified. “Friendship, closeness, intimacy, vulnerability – these are too precious to be engineered by someone who profits from your attention,” he wrote.

In contrast, OpenAI CEO Sam Altman remains serenely optimistic. “I think it’s great,” he said in a Financial Times article earlier in May (calling the latest version of ChatGPT “genius-level intelligence”). “I’m more capable. My son will be more capable than any of us can imagine.”

But will he be more human?

Following last month’s newsletter, I had a call with Laurens Wailing, Chief Evangelist at 8vance and a longtime believer in technology’s potential to elevate, not just optimise, who had reacted to my post. His company is using algorithmic matching to place unemployed Dutch citizens into new roles, drawing on millions of skill profiles. “It’s about surfacing hidden talent,” he told me. “Better alignment. Better outcomes.”

His team has built systems capable of mapping millions of CVs and job profiles to reveal “fit” – not just technically, but temperamentally. “We can see alignment that people often can’t see in themselves,” he told me. “It’s not about replacing humans. It’s about helping them find where they matter.”

That word stuck with me: matter.

Laurens is under no illusion about the obstacles. Cultural inertia is real. “Everyone talks about talent shortages,” he said, “but few are changing how they recruit. Everyone talks about burnout, but very few rethink what makes a job worth doing.” The urgency is missing, not just in policy or management, but in the very frameworks we use to define work.

And it’s this last point – the need for meaning – that feels most acute.

Too often, employment is reduced to function: tasks, KPIs, compensation. But what if we treated work not merely as an obligation, but as a conduit for identity, contribution, and community? 

Laurens mentioned the Japanese concept of Ikigai, the intersection of what you love, what you’re good at, what the world needs, and what you can be paid for. Summarised in one word, it is “purpose”. It’s a model of fulfilment that stands in stark contrast to how most jobs are currently structured. (And one I want to explore in more depth in a future Go Flux Yourself.)

If the systems we build strip purpose from work, they will also strip it from the workers. And when intelligence becomes ambient, purpose might be the only thing left worth fighting for.

Perhaps the most urgent question we can ask – as parents, teachers, citizens – is not “how will AI help us work?” but “how will AI shape what it means to grow up?”

If we get this wrong, if we let intelligence become a sedative instead of a stimulant, we will create a society that is smarter than ever, and more vacant than we can bear.


Also, is the curriculum fit for purpose in a world where intelligence is on tap? In many UK schools, children are still trained to regurgitate facts, parse grammar, and sit silent in tests. The system, despite all the rhetoric about “future skills”, remains deeply Victorian in its structure. It prizes conformity. It rewards repetition. It penalises divergence. Yet divergence is what we need, especially now. 

I’ve advocated for the “Five Cs” – curiosity, creativity, critical thinking, communication, and collaboration – as the most essential human traits in a post-automation world. But these are still treated as extracurricular. Soft skills. Add-ons. When in fact they are the only things that matter when the hard skills are being commodified by machines.

The classrooms are still full of worksheets. The teacher is still the gatekeeper. The system is not agile. And our children are not waiting. They are already forming identities on TikTok, solving problems in Minecraft, using ChatGPT to finish their homework, and learning – just not the lessons we are teaching.

That brings us back to the unnerving replies of Claude and ChatGPT, to the subtle seductions of passive engagement, and to the idea that children could be dismantled not through trauma but through ease – that the devil’s real trick is not fear but frictionlessness.

And so I return to my own children. I wonder whether they will know how to be bored. Because boredom – once a curse – might be the last refuge of autonomy in a world that never stops entertaining.

The present

If the future belongs to machines, the present is defined by drift – strategic, cultural, and moral drift. We are not driving the car anymore. We are letting the algorithm navigate, even as it veers toward a precipice.

We see it everywhere: in the boardroom, where executives chase productivity gains without considering engagement. In classrooms, where teachers – underpaid and under-resourced – struggle to maintain relevance. And in our homes, where children, increasingly unsupervised online, are shaped more by swipe mechanics than family values.

The numbers don’t lie: just 21% of employees are engaged globally, according to Gallup. And the root cause is not laziness or ignorance, the researchers reckon. It is poor management: a systemic failure to connect effort with meaning, task with purpose, worker with dignity.

Image created on Midjourney

The same malaise is now evident in parenting and education. I recently attended an internet safety workshop at my child’s school. Ten parents showed up. I was the only father.

It was a sobering experience. Not just because the turnout was low. But because the women who did attend – concerned, informed, exhausted – were trying to plug the gaps that institutions and technologies have widened. Mainly it is mothers who are asking the hard questions about TikTok, Snapchat, and child exploitation.

And the answers are grim. The workshop drew on Ofcom’s April 2024 report, which paints a stark picture of digital childhood. TikTok use among five- to seven-year-olds has risen to 30%. YouTube remains ubiquitous across all ages. Shockingly, over half of children aged three to twelve now have at least one social media account, despite all platforms having a 13+ age minimum. By 16, four out of five are actively using TikTok, Snapchat, Instagram, and WhatsApp.

We are not talking about teens misbehaving. We are talking about digital immersion beginning before most children can spell their own names. And we are not ready.

The workshop revealed that 53% of young people aged 8–25 have used an AI chatbot. That might sound like curiosity. But 54% of the same cohort also worry about AI taking their jobs. Anxiety is already built into their relationship with technology – not because they fear the future, but because they feel unprepared for it. And it’s not just chatbots.

Gaming was a key concern. The phenomenon of “skin gambling” – where children use virtual character skins with monetary value to bet on unregulated third-party sites – is now widely regarded as a gateway to online gambling. But only 5% of game consoles have parental controls installed. We have given children casinos without croupiers, and then wondered why they struggle with impulse control.

This is not just a parenting failure. It’s a systemic abdication. Broadband providers offer content filters. Search engines have child-friendly modes. Devices come with monitoring tools. But these safeguards mean little if the adults are not engaged. Parental controls are not just technical features. They are moral responsibilities.

The workshop also touched on social media and mental health, referencing the Royal Society of Public Health’s “Status of Mind” report. YouTube, it found, had the most positive impact, enabling self-expression and access to information. Instagram, by contrast, ranked worst, as it is linked to body image issues, FOMO, sleep disruption, anxiety, and depression.

The workshop ended with a call for digital resilience: recognising manipulation, resisting coercion, and navigating complexity. But resilience doesn’t develop in a vacuum. It needs scaffolding, conversation, and adults who are present physically, intellectually and emotionally.

This is where spiritual and moral leadership must re-enter the conversation. Within days of ascending to the papacy in mid-May, Pope Leo XIV began speaking about AI with startling clarity.

He chose his papal name to echo Leo XIII, who led the Catholic Church during the first Industrial Revolution. That pope challenged the commodification of workers. This one is challenging the commodification of attention, identity, and childhood.

“In our own day,” Leo XIV said in his address to the cardinals, “the Church offers everyone the treasury of its social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defence of human dignity, justice, and labour.”

These are not empty words. They are a demand for ethical clarity. A reminder that technological systems are never neutral. They are always value-laden.

And at the moment, our values are not looking good.

The present is not just a moment. It is a crucible, a pressure point, and a test of whether we are willing to step back into the role of stewards, not just of technology but of each other.

Because the cost of inaction is not a dystopia in the future, it is dysfunction now.

The past

Half-term took us to Quarry Bank, also known as Styal Mill, a red-brick behemoth nestled into the Cheshire countryside, humming with the echoes of an earlier industrial ambition. Somewhere between the iron gears and the stunning garden, history pressed itself against the present.

Built in 1784 by Samuel Greg, Quarry Bank was one of the most advanced cotton mills of its day – both technologically and socially. It offered something approximating healthcare, basic education for child workers, and structured accommodation. By the standards of the time, it was considered progressive.

Image created on Midjourney

However, 72-hour work weeks were still the norm until legislation intervened in 1847. Children laboured long days on factory floors. Leisure was a concept, not a right.

What intrigued me most, though, was the role of Greg’s wife, Hannah Lightbody. It was she who insisted on humane reforms and built the framework for medical care and instruction. She took a paternalistic – or perhaps more accurately, maternalistic – interest in worker wellbeing. 

And the parallels with today are too striking to ignore. Just as it was the woman of the house in 19th-century Cheshire who agitated for better conditions for children, it is now mothers who dominate the frontline of digital safety. It was women who filled that school hall during the online safety talk. It is often women – tech-savvy mothers, underpaid teachers, exhausted child psychologists – who raise the alarm about screen time, algorithmic manipulation, and emotional resilience.

The maternal instinct, some would argue. That intuitive urge to protect. To anticipate harm before it’s visible. But maybe it’s not just instinct. Maybe it’s awareness. Emotional bandwidth. A deeper cultural training in empathy, vigilance, care.

And so we are left with a gendered question: why is it, still, in 2025, that women carry the cognitive and emotional labour of safeguarding the next generation?

Where are the fathers? Where are the CEOs? Where are the policymakers?

Why do we still assume that maternal concern is a niche voice, rather than a necessary counterweight to systemic neglect?

History has its rhythms. At Quarry Bank, the wheels of industry turned because children turned them. Today, the wheels of industry turn because children are trained to become workers before they are taught to be humans.

Only the machinery has changed.

Back then, it was looms and mills. Today, it is metrics and algorithms. But the question remains the same: are we extracting potential from the young, or investing in it?

The lambs in the neighbouring field didn’t know any of this, of course. They leapt. They bleated. They reminded my children – and me – of a world untouched by acceleration.

We cannot slow time. But we can choose where we place our attention.

And attention, now more than ever, is the most precious gift we can give. Not to machines, but to the minds that will inherit them.

Statistics of the Month

📈 AI accelerates – but skills lag
In just 18 months, AI jumped from the sixth to the first most in-demand tech skill in the world – the steepest rise in over 15 years. Yet report after report shows that most workers lack these skills, leaving a huge and widening gap. 🔗

📉 Workplace engagement crashes
Global employee engagement has dropped to just 21% – matching levels seen during the pandemic lockdowns. Gallup blames poor management, with young and female managers seeing the sharpest declines. The result? A staggering $9.6 trillion in lost productivity. 🔗

🧒 Social media starts at age three
More than 50% of UK children aged 3–12 now have at least one social media account – despite age limits set at 13+. By age 16, 80% are active across TikTok, Snapchat, Instagram, and WhatsApp. Childhood, it seems, is now permanently online. 🔗

🤖 AI anxiety sets in early
According to Nominet’s annual study of 8- to 25-year-olds in the UK, 53% have used an AI chatbot, and 54% worry about AI’s impact on future jobs. The next generation is both enchanted by and uneasy about their digital destiny. 🔗

🚨 Cybercrime rebounds hard
After a five-year decline, major cyber attacks are rising in the UK – up from 16% to 24% in two years. Insider threats and foreign powers are now the fastest-growing risks, overtaking organised crime. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 16)


TL;DR: April’s Go Flux Yourself explores the rise of AI attachment and how avatars, agents and algorithms are slipping into our emotional and creative lives. As machines get more personal, the real question isn’t what AI can do. It’s what we risk forgetting about being human …

Image created on Ninja AI

The future

“What does relationship communication – and attachment in particular – look like in a future where our most meaningful conversations may be with digital humans?”

The robots aren’t coming. They’re already in the room, nodding along, offering feedback, simulating empathy. They don’t sleep. They don’t sigh. And increasingly, they feel … helpful.

In 2025, AI is moving beyond spreadsheets and slide decks and entering our inner lives. According to new Harvard Business Review analysis by Marc Zao-Sanders, author and co-founder of Filtered.com, the fastest-growing use for generative AI isn’t work but therapy and companionship. In other words, people are building relationships with machines. (I’ve previously written about AI companions – including in June last year.)

Some call this disturbing. Others call it progress. At DTX Manchester earlier this month, where I moderated a panel on AI action plans on the main stage (and wrote a summary of my seven takeaways from the event), the conversation was somewhere in between. One question lingered among the panels and product demos: how will we relate to one another when technology becomes our emotional rehearsal partner?

This puzzler is no longer only theoretical. RealTalkStudio, founded by Toby Sinclair, provides AI avatars that help users prepare for hard conversations: delivering bad news, facing conflict, and even giving feedback without sounding passive-aggressive. These avatars pick up on tone, hesitation, and eye movement. They pause in the right places, nod, and even move their arms around.

I met Toby at DTX Manchester, and we followed up with a video call a week or so later, after I’d road-tested RealTalkStudio. The prompts on the demo – a management scenario – were handy and enlightening, especially for someone like me, who has never really managed anyone (do children count?). They allowed me to speak with my “direct report” adroitly, to achieve a favourable outcome for both parties. 

Toby had been at JP Morgan for almost 11 years until he left to establish RealTalkStudio in September, and his last role was Executive Director of Employee Experience. Why did he give it all up?

“The idea came from a mix of personal struggle and tech opportunity,” he told me over Zoom. “I’ve always found difficult conversations hard – I’m a bit of a people pleaser, so when I had to give feedback or bad news, I’d sugarcoat it, use too many pillow words. My manager [at JP Morgan] was the opposite: direct, no fluff. That contrast made me realise there isn’t one right way – but practice is needed. And a lot of people struggle with this, not just me.”

The launch of ChatGPT, in November 2022, prompted him to explore possible solutions using technology. “Something clicked. It was conversational, not transactional – and I immediately thought, this could be a space to practise hard conversations. At first, I used it for myself: trying to become a better manager at JP Morgan, thinking through career changes, testing it as a kind of coach or advisor. That led to early experiments in building an AI coaching product, but it flopped. The text interface was too clunky, the experience too dull. Then, late last year, I saw how far avatar tech had come.” 

Suddenly, Toby’s idea felt viable. Natural, even. “I knew the business might not be sustainable forever, but for now, the timing and the tech felt aligned. I could imagine it being used for manager training, dating, debt collectors, airline … so many use cases.”

Indeed, avatars are not just used in work settings. A growing number of people – particularly younger generations – are turning to AI to rehearse dating, for instance. Toby has been approached by an Eastern European matchmaking service. “They came to me because they’d noticed a recurring issue, especially with younger men: poor communication on dates, and a lack of confidence. They were looking for ways to help their clients – mainly men – have better conversations. And while practice helps, finding a good practice partner is tricky. Most of these men don’t have many female friends, and it’s awkward to ask someone: ‘Can we practise going on a date?’ That’s where RealTalk comes in. We offer a realistic, judgment-free way to rehearse those conversations. It’s all about building confidence and clarity.”

These avatars flirt back. They guide you through rejection. They help you practise confidence without fear of humiliation. It’s Black Mirror, yes. But also oddly touching. On one level, this is useful. Social anxiety is rising. Young people in particular are navigating a digital-first emotional landscape. An AI avatar offers low-risk rehearsal. It doesn’t laugh. It doesn’t ghost.

On another level, it’s deeply troubling. The ability to control the simulation – to tailor responses, remove ambiguity, and mute discomfort – trains us to expect real humans to behave predictably, like code. We risk flattening our tolerance for emotional nuance. If your avatar never rolls its eyes or forgets your birthday, why tolerate a flawed, chaotic, human partner?

When life feels high-stakes and unpredictable, a predictable conversation with a patient, programmable partner can feel like relief. But what happens when we expect humans to behave like avatars? When spontaneity becomes a bug, not a feature?

That’s the tension. These tools are good, and only improving. Too good? The quotation I started this month’s Go Flux Yourself with comes from Toby, who has a two-year-old boy, Dylan. As our allotted 30 minutes neared its end, the hugely enjoyable conversation turned philosophical, and he posed this question: “What does relationship communication – and attachment in particular – look like in a future where our most meaningful conversations may be with digital humans?”

It’s clear that AI avatars are no longer just slick customer service bots. They’re surprisingly lifelike. Character-3, the latest from Hedra, mimics micro-expressions with startling accuracy. Eyebrows arch. Shoulders slump. A smirk feels earned.

This matters because humans are built to read nuance. We feel it when something’s off. But as avatars close the emotional gap, that sense of artifice starts to slip. We begin to forget that what we engage with isn’t sentient – it’s coded.

As Justine Moore from Andreessen Horowitz stressed in an article outlining the roadmap for avatars (thanks for the tip, Toby), these aren’t talking heads anymore. They’re talking characters, designed to be persuasive. Designed to feel real enough.

So yes, they’re useful for training, coaching, even storytelling. But they’re also inching closer to companionship. And once a machine starts mimicking care, the ethics get blurry.

Nowhere is the ambivalence more acute than in the creative industries. The spectre of AI-generated music, art, and writing has stirred panic among artists. And yet – as I argued at Zest’s Greenwich event last week – the most interesting possibilities lie in creative amplification, not replacement.

For instance, the late Leon Ware’s voice, pulled from a decades-old demo, now duets with Marcos Valle on Feels So Good, a track left unfinished since 1979. The result, when I was lucky enough to catch octogenarian Valle at the Jazz Cafe last August, was genuinely moving. Not because it’s novel, but because it’s human history reassembled. Ware isn’t being replaced. He’s being recontextualised.

We’ve seen similar examples in recent months: a new Beatles song featuring a de-noised John Lennon; a Beethoven symphony completed with machine assistance. Each case prompts the same question: is this artistry, or algorithmic taxidermy?

From a technical perspective, these tools are astonishing. From a legal standpoint, deeply fraught. But from a cultural angle, the reaction is more visceral: people care about authenticity. A recent UK Music study found that 83% of UK adults believe AI-generated songs should be clearly labelled. Two-thirds worry about AI replacing human creativity altogether.

And yet, when used transparently, AI can be a powerful co-creator. I’ve used it to organise ideas, generate structure, and overcome writer’s block. It’s a tool, like a camera, or a DAW, or a pencil. But it doesn’t originate. It doesn’t feel.

As Dean, a community member of Love Will Save The Day FM (for whom my DJ alias Boat Floaters has a monthly show called Love Rescue), told me: “Real art is made in the accidents. That’s the magic. AI, to me, reduces the possibility of accidents and chance in creation, so it eliminates the magic.”

That distinction matters. Creativity is not just output. It’s a process. It’s the struggle, the surprise, the sweat. AI can help, but it can’t replace that.

Other contributions from LWSTD members captured the ambivalence of AI and creativity – in music, in this case, but these viewpoints can be broadened out to the other arts. James said: “Anything rendered by AI is built on the work of others. Framing this as ‘democratised art’ is disingenuous.” He noted how Hayao Miyazaki of Studio Ghibli expressed deep disgust when social media feeds became drowned in AI parodies of his art. Miyazaki criticised it as an “insult to life itself”.

Sam picked up this theme. “The Ghibli stuff is a worrying direction of where things can easily head with music – there’s already terrible versions of things in rough styles but it won’t be long before the internet is flooded with people making their own Prince songs (or whatever) but, as with Ghibli, without anything beyond a superficial approximation of art.”

And Jed pointed out that “it’s all uncanny – it’s close, but it’s not right. It lacks humanity.”

Finally, Larkebird made an amusing distinction. “There are differences between art and creativity. Art is a higher state of creativity. I can add coffee to my tea and claim I’m being creative, but that’s not art.”

Perhaps, though, if we want to glimpse where this is really headed, we need to look beyond the avatars to the agents, which currently dominate the space.

Ray Smith, Microsoft’s VP of Autonomous Agents, shared a fascinating vision during our meeting in London in early April. His team’s strategy hinges on three tiers: copilots (assistants), agents (apps that take action), and autonomous agents (systems that can reason and decide).

Imagine an AI that doesn’t just help you file expenses but detects fraud, reroutes tasks, escalates anomalies, all without being prompted. That’s already happening. Pets at Home uses a revenue protection agent to scan and flag suspicious returns. The human manager only steps in at the exception stage.

And yet, during Smith’s demo … the tech faltered. GPU throttling. Processing delays. The AI refused to play ball.

It was a perfect irony: a conversation about seamless automation interrupted by the messiness of real systems. Proof, perhaps, that we’re still human at the centre.

But the direction of travel is clear. These agents are not just tools. They are colleagues. Digital labour, tireless and ever-present.

Smith envisions a world where every business process has a dedicated agent. Where creative workflows, customer support, and executive decision-making are all augmented by intelligent, autonomous helpers.

However, even he admits that we need a cultural reorientation. Most employees still treat AI as a search box. They don’t yet trust it to act. That shift – from command-based to companion-based thinking – is coming, slowly, then suddenly (to paraphrase Ernest Hemingway).

A key point often missed in the AI hype is this: AI is inherently retrospective. Its models are trained on what has come before. It samples. It predicts. It interpolates. But it cannot truly invent in the sense humans do, from nothing, from dreams, from pain.

This is why, despite all the alarmism, creativity remains deeply, stubbornly human. And thank goodness for that.

But there is a danger here. Not of AI replacing us, but of us replacing ourselves – outsourcing our process, flattening our instincts, degrading our skills, compromising originality in favour of efficiency.

AI might never write a truly original poem. But if we rely on it to finish our stanzas, we might stop trying.

Historian Yuval Noah Harari has warned against treating AI as “just another tool”. He suggests we reframe it as alien intelligence. Not because it’s malevolent, but because it’s not us. It doesn’t share our ethics. It doesn’t care about suffering. It doesn’t learn from heartbreak.

This matters, because as we build emotional bonds with AI – however simulated – we risk assuming moral equivalence. That an AI which can seem empathetic is empathetic.

This is where the work of designers and ethicists becomes critical. Should emotional AI be clearly labelled? Should simulated relationships come with disclaimers? If not, we risk emotional manipulation at industrial scale, especially among the young, lonely, or digitally naive. (This recent New York Times piece, about a married, 28-year-old woman in love with ChatGPT, is well worth a read, to show how easy – and frightening, plus costly – it is to become attached to AI.)

We also risk creating a two-tier society: those who bond with humans, and those who bond with their devices.

Further, Harari warned in an essay, published in last Saturday’s Financial Times Weekend, that the rise of AI could accelerate political fragmentation in the absence of shared values and global cooperation. Instead of a liberal world order, we gain a mosaic of “digital fortresses”, each with its own truths, avatars, and echo chambers. 

Without robust ethics, the future of AI attachment could split into a thousand isolated solitudes, each curated by a private algorithmic butler. If we don’t set guardrails now, we may soon live in a world where connection is easy – and utterly empty.

The present

At DTX Manchester this month, the main-stage AI panel I moderated felt very different from those even last year. The vibe was less “what is this stuff?” and more “how do we control the stuff we’ve already unleashed?”

Gone are the proof-of-concept experiments. Organisations are deploying AI at scale. Suzanne Ellison at Lloyds Bank described a knowledge base now used by 21,000 colleagues, halving information retrieval time and boosting customer satisfaction by a third. But more than that, it’s made work more human, freeing up time for nuanced, empathetic conversations.

Likewise, the thought leadership business I co-founded last year, Pickup_andWebb, uses AI avatars for client-facing video content, such as a training programme. No studios. No awkward reshoots. Just instant script updates. It’s slick, smart, and efficient. And yes, slightly unsettling.

Dominic Dugan of Oktra, a man who has spent decades designing workspaces, echoed that tension. He’s sceptical. Most post-pandemic office redesigns, he argues, are just “colouring in” – performative, superficial, Instagram-friendly but uninhabitable. We’ve designed around aesthetics, not people.

Dugan wants us to talk about performance. If an office doesn’t help people do better work, or connect more meaningfully, what’s the point? Even the most elegantly designed workplace means little if it doesn’t accommodate the emotional messiness of human interaction – something AI, for all its growth, still doesn’t understand.

And yet, that fragility of our human systems – tech included – was brought into sharp relief in these last few days (and is ongoing, at the time of writing) when an “induced atmospheric vibration” reportedly caused widespread blackouts in Spain and Portugal, knocking out connectivity across major cities for hours, and in some cases days. No internet. No payment terminals. No AI anything. Life slowed to a crawl. Trains stopped. Offices went dark. Coffee shops switched to cash, or closed altogether. It was a rare glimpse into the abyss of analogue dependency, a reminder that our digital lives are fragile scaffolds built on uncertain foundations.

The outage was temporary. But the lesson lingers: the more reliant we become on these intelligent systems, the greater our vulnerability when they fail. And fail they will. That’s the nature of systems. But it’s also the strength of humans: our capacity to improvise, to adapt, to find ways around failure. The more we automate, the more we must remember this: resilience cannot be outsourced.

And that brings me to my own moment of reinvention.

This month I began the long-overdue overhaul of my website, oliverpickup.com. The current version – featuring a home-page photograph of me swimming in the Regent’s Park Serpentine, goggles on upside down, at a shoot interviewing Olympic triathlete Jodie Stimpson – has served me well, but it’s over a decade old. It also leads people to think I’m into wild swimming. I’m not; I detest cold water.

(The 2015 article in FT Weekend has one of my favourite opening lines: “Jodie Stimpson is discussing tactical urination. The West Midlands-based triathlete, winner of two Commonwealth Games golds last summer, is specifically talking about relieving herself in her wetsuit to flood warmth to the legs when open-water swimming.”) 

But it’s more than a visual rebrand. I’m repositioning, due to FOBO (fear of becoming obsolete). The traditional freelance model is eroding, its margins squeezed by algorithmic content and automated writing. While it might not have the personality, depth, and nuance of human writing, AI doesn’t sleep, doesn’t bill by the hour, and now writes decently enough to compete. I know I can’t outpace it on volume. So I have to evolve. Speaking. Moderating. Podcasting. Hosting. These are uniquely human domains (for now).

The irony isn’t lost on me: I now use AI to sharpen scripts, test tone, even rehearse talks. But I also know the line. I know what cannot be outsourced. If my words don’t carry me in them, they’re not worth publishing.

Many of us are betting that presence still matters. That real connection – in a room, on a stage, in a hard conversation – will hold value, even as screens whisper more sweetly than ever.

As such, I’m delighted to have been accepted by Pomona Partners, a speaker agency led by “applied” futurist Tom Cheesewright, whom I caught up with over lunch when at DTX Manchester. I’m looking forward to taking the next steps in my professional speaking career with Tom and the team.

The past

Recently, prompted by a friend’s health scare and my natural curiosity, I spat into a tube and sent off the DNA sample to ancestry.com. I want to understand where I come from, what traits I carry, and what history pulses through me.

In a world where AI can mimic me – my voice, writing style, and image – there’s something grounding about knowing the real me. The biological, lived, flawed, irreplaceable me.

It struck me as deeply ironic. We’re generating synthetic selves at an extraordinary rate. Yet we’re still compelled to discover our origins: to know not just where we’re going, but where we began.

This desire for self-knowledge is fundamental. It sits at the heart of my CHUI framework: Community, Health, Understanding, Interconnectedness. Without understanding, we’re at the mercy of the algorithm. Without roots, we become avatars.

Smith’s demo glitch – an AI agent refusing to cooperate – was a reminder that no matter how advanced the tools, we are still in the loop. And we should remain there.

When I receive my ancestry results, I won’t be looking for royalty. I’ll be looking for roots. Not to anchor me in the past, but to help me walk straighter into the future. I’ll also share those findings in this newsletter. Meanwhile, I’m off to put tea in my coffee.

Statistics of the month

📈 AI is boosting business. Some 89% of global leaders say speeding up AI adoption is a top priority this year, according to new LinkedIn data. And 51% of firms have already seen at least a 10% rise in revenue after implementation.

🏙️ Cities aren’t ready. Urban economies generate most of the world’s GDP, but 44% of that output is at risk from nature loss, recent World Economic Forum data shows. Meanwhile, only 37% of major cities have any biodiversity strategy in place. 🔗

🧠 The ambition gap is growing. Microsoft research finds that 82% of business leaders around the globe say 2025 is a pivotal year for change (85% think so in the UK). But 80% of employees feel too drained to meet those expectations. 🔗

📉 Engagement is slipping. Global employee engagement is down to 21%, according to Gallup’s latest State of the Global Workplace annual report (more on this next month). Managers have been hit hardest – dropping from 30% to 27% – and have been blamed for the general fall. The result? $438 billion in lost productivity. 🔗

💸 OpenAI wants to hit $125 billion. That’s their projected revenue by 2029 – driven by autonomous agents, API tools and custom GPTs. Not bad for a company that started as a non-profit. 🔗


Go Flux Yourself: Navigating the Future of Work (No. 15)

TL;DR: March’s Go Flux Yourself explores what good leadership looks like in an AI-driven world. Spoiler: it’s not Donald Trump. From psychological safety and the “Lost Einsteins” to lessons from the inventor of plastic, it examines why innovation without inclusion is reckless – and why collaboration, kindness, and asking better questions might be our best defence against digital delusion and existential drift.

Image created on Midjourney

The future

“Leadership is the art of harnessing the efforts of others to achieve greatness.”

Donald Trump’s America-first agenda may appeal to the base instincts of populism, a nationalist fever dream dressed up as economic strategy. However, it is hopelessly outdated as a leadership model for a globally connected, AI-enabled future. 

In fact, it’s worse than that. It’s actively regressive. Trumpism, and the rise of Trumpian imitators across the globe, isn’t just shutting borders. It’s shutting minds, too, and that’s more debilitating for society. It trades in fear, not foresight. It rewards silence over dissent. And in doing so, it stifles precisely the kind of leadership the future demands.

Because let’s be clear: the coming decades will not be defined by those who shout the loudest or build the tallest walls. They will be defined by those who keep channels open – not just for trade, but for ideas. For difference. For disagreement. For discovery.

That starts with listening. And not just listening politely, but listening generatively – creating the psychological space where people feel safe enough to share the thought that might change everything.

At the recent Workhuman Live Forum in London, Harvard’s Amy Edmondson – a global authority on leadership and psychological safety – warned of the “almost immeasurable” consequences of holding back. In her research, 93% of senior leaders admitted that their silence had tangible costs. Not theoretical. Not abstract. Tangible. Safety failures. Wasted resources. Poor decisions. Quiet disengagement. And perhaps worst of all, missed opportunities to learn.

Why do we hold back rather than speak up? Because we’re human. And humans are wired to avoid looking stupid. We’d rather be safe than smart. Edmondson calls it “impression management”, and we’re all fluent in it. From the start of primary school, we learn not to raise our hand unless we’re sure of the answer. By the time we enter the workforce, that instinct is second nature.

But in today’s volatile, uncertain, complex, and ambiguous (VUCA) world – a term that had its chorus in the early pandemic days, five years ago, and which I’m now hearing from business leaders far more often – that instinct is no longer helpful. It’s dangerous. Because real innovation doesn’t happen in safe, silent rooms. It happens in teams willing to fail fast, speak up, and challenge the status quo. In rooms where “I think we might be wrong” is not a career-ending statement, but a spark.

So how should leaders lead? The quotation that begins this month’s Go Flux Yourself is from Ken Frazier, former CEO of Merck, and was cited by Edmondson, who heard it in one of her sessions. It’s worth repeating: “Leadership is the art of harnessing the efforts of others to achieve greatness.”

This brings us to Aneesh Raman, LinkedIn’s Chief Economic Opportunity Officer, and his powerful message at Talent Connect and Sancroft Convene, in the shadow of St Paul’s Cathedral in London. Raman argues that we are moving out of the “knowledge economy” – where technical proficiency was king – and into the “innovation economy”, where our most human skills become our greatest assets.

He lists them as the five Cs: communication, creativity, compassion, courage, and curiosity. Let’s make it six: collaboration. These are no longer “soft skills” but the defining skills of the age. They allow us to build trust, forge connections, and work across differences. They are, as Raman says, why we are the apex species on the planet.

But here’s the catch: while these skills are distributed broadly across the population, the opportunity to develop and express them is not. Enter the “Lost Einsteins” – those with the potential to innovate but without the credentials, connections, or capital to turn ideas into impact. Economist Raj Chetty’s landmark study found that children from wealthy families are ten times more likely to become inventors than equally talented peers from lower-income backgrounds.

This is a global failure. We are squandering talent on an industrial scale – not because of a lack of ability, but because of a lack of inclusion. And that’s a leadership failure.

We need leaders who can spot and elevate the quiet genius in the room, who don’t confuse volume with value, and who can look beyond the CV and see the potential in a person’s questions, not just their answers.

And we need to stop romanticising “hero” innovation – the lone genius in a garage – and embrace the truth: innovation is a team sport. For instance, Leonardo da Vinci, as biographer Walter Isaacson points out, was a great collaborator. He succeeded because he listened as much as he led.

Which brings us back to psychological safety – the necessary precondition for team-based innovation. Without it, diversity becomes dysfunction. With it, it becomes dynamite.

Edmondson’s research shows that diverse teams outperform homogeneous ones only when psychological safety is high. Without that safety, diversity leads to miscommunication, mistrust, and missed potential. But with it? You get the full benefit of varied perspectives, lived experiences, and cognitive styles. You get the kind of high-quality conversations that lead to breakthroughs.

But these conversations don’t happen by accident. They require framing, invitation, and modelling. They require leaders to say – out loud – things like: “I’ve never flown a perfect flight” (as one airline captain Edmondson studied told his new crew). Or “I need to hear from you”. Or even: “I don’t know the answer. Let’s figure it out together.”

KeyAnna Schmiedl, Workhuman’s Chief Human Experience Officer, put it beautifully in a conversation we had at the Live Forum event: leadership today is less about having the answer and more about creating the conditions for answers to emerge. It’s about making work more human – not through performative gestures, but through daily, deliberate acts of kindness. Not niceness. Kindness.

Niceness avoids conflict. Kindness leans into it, constructively. Niceness says, “That’s fine”. Kindness says, “I hear you – but here’s what we need.” Niceness smooths things over. Kindness builds things up.

And kindness is deeply pragmatic. It’s not about making everyone happy. It’s about making sure everyone is heard. Because the next big idea could come from the intern. From the quiet one. From the woman in trainers, not the man in a suit.

This reframing of leadership is already underway. Schmiedl herself never thought of herself as a leader – until others started reflecting it back to her. Not because she had all the answers, but because she had a way of asking the right questions, of creating rooms where people showed up fully, where difference wasn’t just tolerated but treasured.

So what does all this mean for the rest of us?

It means asking better questions. Not “Does anyone disagree?” (cue crickets). But “Who has a different perspective?” It means listening more than speaking. It means noticing who hasn’t spoken yet – and inviting them in. It means, as Edmondson says, getting curious about the dogs that don’t bark. Other good questions include: “What are we missing?” and “Can you explain that further?”

And it means remembering that the goal is not psychological safety itself. The goal is excellence. Innovation. Learning. Fairness. Safety is just the soil in which those things can grow.

The future belongs to the leaders who know how to listen, invite dissent, ask good questions, and, ultimately, understand that the art of leadership is not dominance, but dialogue.

Because the next Einstein is out there. She, he, or they just haven’t been heard yet.

The present

“We’re gearing up for this year to be a year where you’ll have some ‘oh shit’ moments,” said Jack Clark, policy chief at Anthropic, the $40 billion AI start-up behind the Claude chatbot, earlier this year. He wasn’t exaggerating. From melting servers at OpenAI (more on this below) to the dizzying pace of model upgrades, 2025 already feels like we’re living through the future on fast-forward.

And yet, amid all the noise, hype, and existential hand-wringing, something quieter – but arguably more profound – is happening: people are remembering the value of connection.

This March, I had the pleasure of speaking at a Federation of Small Businesses (FSB) virtual event for members in South East London. The session, held on Shrove Tuesday, was fittingly titled “Standing Out: The Power of Human Leadership in an AI World”. Between pancake references and puns (some better than others), I explored what it means to lead with humanity in an age when digital tools dominate every dashboard, inbox, and conversation.

The talk was personal, anchored in my own experience as a business owner, a journalist, and a human surfing the digital tide. I shared my CHUI framework – Community, Health, Understanding, and Interconnectedness – as a compass for turbulent times. Because let’s face it: the world is messy right now. Geopolitical uncertainty is high. Domestic pressures are mounting. AI is changing faster than our ability to regulate or even comprehend it. And loneliness – real, bone-deep isolation – is quietly eroding the foundations of workplaces and communities.

And yet, there are bright spots. And they’re often found in the places we least expect – like virtual networking events, Slack channels, and local business groups.

Since that FSB session, I’ve connected with a flurry of new people, each conversation sparking unexpected insight or opportunity. One such connection was Bryan Altimas, founder of Riverside Court Consulting. Bryan’s story perfectly exemplifies how leadership and collaboration can scale, even in a solo consultancy.

After the pandemic drove a surge in cybercrime, Altimas responded not by hiring a traditional team but by building a nimble, global network of 15 cybersecurity specialists – from policy experts to ethical hackers based as far afield as Mauritius. “Most FSB members don’t worry about cybersecurity until it’s too late,” he told me in our follow-up chat. But instead of fear-mongering, Altimas and his team educate. They equip small businesses to be just secure enough that criminals look elsewhere – the digital equivalent of fitting a burglar alarm on your front door while your neighbour leaves theirs ajar.

What struck me most about Altimas wasn’t just his technical acumen, but his collaborative philosophy. Through FSB’s Business Crimes Forum, he’s sat on roundtables with the London Mayor’s Office and contributed to parliamentary discussions. These conversations – forged through community, not competition – have directly generated new client relationships and policy influence. “It’s about raising the floor,” he said. “We’re stronger when we work together.”

That sentiment feels increasingly urgent. In an age where cybercriminals operate within sophisticated, decentralised networks, small businesses can’t afford to work in silos. Our defence must be networked, too – built on shared knowledge, mutual accountability, and trust.

And yet, many governments seem to be doing the opposite. The recent technical capability notice issued to Apple – which led to the withdrawal of advanced data protection services from UK devices – is a case in point. Altimas called it “the action of a digitally illiterate administration”, one that weakens security for all citizens while failing to deter the real bad actors. The irony? In trying to increase control, we’ve actually made ourselves more vulnerable.

This brings us back to the role of small business leaders and, more broadly, to the power of community. As I told the audience at the FSB event, the future of work isn’t just about AI. It’s about who can thrive in an AI world. And the answer, increasingly, is those who can collaborate, communicate, and connect across differences.

In a world where 90% of online content is projected to be AI-generated this year, authentic human interaction becomes not just a nice-to-have, but a business differentiator. Relationship capital is now as valuable as financial capital. And unlike content, it can’t be automated.

That’s why I encourage business leaders to show up. Join the webinars. Say yes to the follow-up call. Ask the awkward questions. Be curious. Some of the most valuable conversations I’ve had recently – including with Altimas – started with nothing more than a LinkedIn connection or a quick post-event “thanks for your talk”.

This isn’t about nostalgia or rejecting technology. As I said in my FSB talk, tech is not the enemy of human connection – it’s how we use it that matters. The question is whether our tools bring us closer to others or push us further into isolation.

The paradox of the AI age is that the more powerful our technologies become, the more essential our humanity is. AI can optimise, analyse, and synthesise, but it can’t empathise, mentor, or build trust in a room. It certainly can’t make someone feel seen, valued, or safe enough to speak up.

That’s where leadership comes in. As Edmondson noted, psychological safety doesn’t happen by accident. It must be modelled, invited, and reinforced. In many cases, work must be reframed to make clear that anyone and everyone can make a difference, alongside an acknowledgement by leaders that things will inevitably go wrong. And as Raman said, the next phase of work will be defined not by who codes the best, but by who collaborates the most.

Our best bet for surviving the “oh shit” moments of 2025 is not to go it alone, but to lean in together. As FSB members, for instance, we are not just business owners. We are nodes in a network. And that network – messy, human, imperfect – might just be our greatest asset.

The past

In 1907, Leo Baekeland changed the world. A Belgian-born chemist working in New York, he created Bakelite – the world’s first fully synthetic plastic. It was, by every measure, a breakthrough. Hard, durable, and capable of being moulded into almost any shape (the clue is in the name – plastikos, from the Greek, meaning “capable of being shaped”), Bakelite marked the dawn of the modern plastics industry. 

For the first time, humankind wasn’t limited to what nature could provide. We could manufacture our own materials. These materials would soon find their way into everything from telephones to televisions, jewellery to jet engines.

Baekeland had no idea what he was unleashing. And perhaps that’s the point.

More than a century later, we’re drowning in the aftershocks of that innovation. At Economist Impact’s 10th Sustainability Week earlier this month – once again in the quietly majestic surroundings of Sancroft Convene – I had the pleasure of moderating a panel titled “Preventing plastics pollution through novel approaches”. I even dressed for the occasion, sporting a nautical bow tie (always good to keep the theme on-brand), and kicked things off with a bit of self-aware humour about my surname.

One of the panellists, Kris Renwick of Reckitt, represented the makers of Harpic – the toilet cleaner founded by none other than Harry Pickup, surely the most illustrious bearer of my surname. (Although the late actor Ronald Pickup has a case.) There’s a certain poetry in that Harry made his name scrubbing away society’s waste.

Especially when set against another panellist, Alexandra Cousteau – granddaughter of Jacques-Yves, the pioneering oceanographer who co-invented the Aqua-Lung and brought the mysteries of the sea to the world. Cousteau, who first set sail on an expedition at just four months old, told the audience that there is 50% less sea life today than in her grandfather’s time.

Let that sink in. Half of all marine life gone – in just three generations.

And plastics are a big part of the problem. We now produce around 460 million tonnes of plastic every year. Of that, 350 million tonnes becomes waste – a staggering 91% is never recycled. Contrary to popular belief, very little of it ends up in the oceans directly, though. 

According to Gapminder, just under 6% of all plastic waste makes it to the sea. Most of it – around 80 million tonnes – is mismanaged: dumped, burned, or buried in ways that still wreak havoc on ecosystems and human health. As Cousteau pointed out, the average person, astonishingly, is believed to carry around the equivalent of a plastic spoon’s worth of microplastics in their body. Including in their brain.

Image created on Midjourney

It’s a bleak picture – and one with eerie echoes in the current hype cycle around AI.

Bakelite was hailed as a wonder material. It made things cheaper, lighter, more efficient. So too does AI. We marvel at what generative tools can do – composing music, designing logos, writing code, diagnosing diseases. Already there are brilliant use cases – and undoubtedly more to come. But are we, once again, rushing headlong into a future we don’t fully understand? Are we about to repeat the same mistake: embracing innovation, while mismanaging its consequences?

Take energy consumption. This last week, OpenAI’s servers were reportedly “melting” under the strain of demand after the launch of their new image-generation model. Melting. It’s not just a metaphor. The environmental cost of training and running large AI models is immense – with a 2019 estimate (ie before the explosion of ChatGPT) suggesting a single model can emit as much carbon as five cars over their entire lifetimes. That’s not a sustainable trajectory.

And yet, much like Bakelite before it, AI is being pushed into every corner of our lives. Often with the best of intentions. But intentions, as the old saying goes, are not enough. What matters is management.

On our plastics panel, Cousteau made the case for upstream thinking. Rather than just reacting to waste, we must design it out of the system from the start. That means rethinking materials, packaging, infrastructure. In other words, it requires foresight. A willingness to zoom out, to consider long-term impacts rather than just short-term gains.

AI demands the same. We need to build governance, ethics, and accountability into its architecture now – before it becomes too entrenched, too ubiquitous, too powerful to regulate meaningfully. Otherwise, we risk creating a different kind of pollution: not plastic, but algorithmic. Invisible yet insidious. Microbiases instead of microplastics. Systemic discrimination baked into decision-making processes. A digital world that serves the few at the expense of the many.

All of this brings us back to leadership. Because the real challenge isn’t innovation. It’s stewardship. As Cousteau reminded us, humans are phenomenally good at solving problems when we decide to care. The tragedy is that we so often wait until it’s too late – until the oceans are full, until the servers melt, until the damage is done.

Moderating that session reminded me just how interconnected these conversations are. Climate. Technology. Health. Equity. We can’t afford to silo them anymore. The story of Bakelite is not just the story of plastics. It’s the story of unintended consequences. The story of how something miraculous became monstrous – not because it was inherently evil, but because we weren’t paying attention.

And that, in the end, is what AI forces us to confront. Are we paying attention? Are we asking the right questions, at the right time, with the right people in the room?

Or are we simply marvelling at the magic – and leaving someone else to clean up the mess?

Statistics of the month

📊 AI in a bubble? Asana’s latest research reveals that AI adoption is stuck in a ‘leadership bubble’ – while executives embrace the tech, most employees remain on the sidelines. Two years in, 67% of companies still haven’t scaled AI across their organisations. 🔗

🤝 Collaboration drives adoption. According to the same study, workers are 46% more likely to adopt AI when a cross-functional partner is already using it. Yet most current implementations are built for solo use – missing the chance to unlock AI’s full, collective potential. 🔗

📉 Productivity gap alert. Gartner predicts that by 2028, over 20% of workplace apps will use AI personalisation to adapt to individual workers. Yet today, only 23% of digital workers are fully satisfied with their tools – and satisfied users are nearly 3x more productive. The workplace tech revolution can’t come soon enough.

📱 Emoji wars at work. New research from The Adaptavist Group exposes a generational rift in office comms: 45% of UK over-50s say emojis are inappropriate, while two-thirds of Gen Z use them daily. Meanwhile, full-stops are deemed ‘professional’ by older workers, but 23% of Gen Z perceive them as ‘rude’. Bring on the AI translators! 🔗

😓 Motivation is fading. Culture Amp finds that UK and EMEA employee motivation has declined for three straight years. Recognition is at a five-year low, and fewer workers feel performance reviews reflect their impact. Hard work, unnoticed. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 14)

TL;DR: February’s Go Flux Yourself examines fairness as a business – and societal – necessity. Splunk’s Kirsty Paine tackles AI security, Harvard’s Siri Chilazi critiques DEI’s flaws, and Robert Rosenkranz applies Stoic wisdom to ambition, humility, and success in an AI-driven world …

Image created on Midjourney with the prompt “a forlorn man with his young son both with ski gear on at the top of a mountain with no snow on it (but green grass and rock) with a psychedelic sky”

The future

“To achieve anything meaningful, you must accept that you don’t have all the answers. The most successful people are the ones who keep learning, questioning, and improving.”

Robert Rosenkranz has lived the American Dream – but you won’t hear him shouting about it. At 82, he has little interest in the brash, performative ambition that defines modern politics and business. Instead, his story is one of quiet, relentless progress. 

Born into a struggling family, he worked his way through Yale and Harvard, then went on to lead Delphi Financial Group for over three decades. By the time he stepped down as CEO in 2018, he had grown the company’s value 100-fold, overseeing more than $20 billion in assets.

Yet, Rosenkranz’s real legacy might not be in finance, but in philanthropy. Yesterday (February 27), in a smart members’ club (where I had to borrow a blazer at reception – oops!) in Mayfair, London, I attended an intimate lunch to discuss The Stoic Capitalist, his upcoming book on ambition, self-discipline, and long-term success. 

As we received our starters, he shared an extraordinary statistic: “In America, there are maybe a couple of dozen people who have given over a billion dollars in their lifetime. A hundred percent of them are self-made.”

Really? I did some digging, and the numbers back him up. As of 2024, over 25 American philanthropists have donated more than $1 billion each, according to Forbes. Further, of those who have signed the Giving Pledge – committing to give away at least half their wealth – 84% are self-made. Only 16% inherited their fortunes.

The message is clear: those who build their wealth from nothing are far more likely to give it away. Contrast this with Donald Trump, the ultimate heir-turned-huckster. Brash, transactional (“pay-to-play” was how American political scientist Ian Bremmer neatly describes him), obsessed with personal gain, the American President represents a vision of success where winning means others must lose. Rosenkranz, by contrast, embodies something altogether different – ambition not as self-interest, but as a long game that enriches others.

He has also, tellingly, grown apathetic about politics of late. Having once believed in the American meritocracy, the Republican who has helped steer public policy now sees a system increasingly warped by inherited wealth, populism, and those pay-to-play politics. “The future of American politics worries me,” he admitted at the lunch. And given the rise of Trumpian imitators, he has reason to be concerned. To my mind, the world needs more Rosenkranzes – self-made leaders who view ambition and success as vehicles for building, rather than simply taking.

This tension – between long-term, disciplined ambition and short-term, self-serving power – runs through this month’s Go Flux Yourself. Because whether we’re talking about AI security, workplace fairness, or the philosophy of leadership, the real winners will be those who take the long view and seek fairness.

Fairness at work: The illusion of progress

Fairness in the workplace is one of those ideas that corporate leaders love to endorse in principle – but shy away from in practice. Despite billions spent on Diversity, Equity, and Inclusion (DEI) initiatives, meaningful change remains frustratingly elusive. (Sadly, this fact only helps Trump’s forceful agenda to ditch such policies – an approach that is driving the marginalised to seek shelter, at home or abroad.)

“For a lot of organisations, programmatic interventions are appealing because they are discrete. They’re off to the side. It’s easy to approve a one-time budget for a facilitator to come and do a training or participate in a single event. That’s sometimes a lot easier than saying: ‘Let’s change how we evaluate performance.’ But precisely because those latter types of solutions are embedded and affect how work gets done daily, they’re more effective.”

This is the heart of what Harvard’s Siri Chilazi told me when we discussed Make Work Fair, the new book she has co-authored with Iris Bohnet. Their research offers a much-needed reality check on corporate DEI efforts.

Image created on Midjourney with the prompt “a man and a women in work clothes on a balancing scale – equal – in the style of a matisse painting”

She explained why so many workplace fairness initiatives fail: they rely on changing individual behaviour rather than fixing broken systems. “Unconscious bias training has become this multi-billion-dollar industry,” she said. “But the evidence is clear – it doesn’t work.” Studies have shown that bias training rarely leads to lasting behavioural change, and in some cases, it even backfires, making people more defensive about their biases rather than less.

So what does work? Chilazi and Bohnet argue that structural interventions – the kind that make fairness automatic rather than optional – are the key to real progress. “If you want to reduce bias in hiring, don’t just tell people to ‘be more aware’ – design the process so that bias has fewer opportunities to creep in,” she told me.

This means:

  • Standardising interviews so every candidate is evaluated against the same criteria
  • Removing names from CVs to eliminate unconscious bias in early screening
  • Making promotion decisions based on clear, structured frameworks rather than subjective “gut feelings”

The companies that have done this properly – like AstraZeneca, which now applies transparent decision-making frameworks to promotions – have seen real progress. Others, Chilazi warned, are simply engaging in performative fairness. “If an organisation is still relying on vague, unstructured decision-making, it doesn’t matter how many DEI consultants they hire – bias will win.”

Perhaps the most telling statistic comes from a 2023 McKinsey report, which found that 90% of executives believe their DEI initiatives are effective – but only 40% of employees agree. That gap tells you everything you need to know.

This matters not just ethically, but competitively. Companies that embed fairness into their DNA don’t just avoid scandals and lawsuits – they outperform their competitors. “The data is overwhelming,” Chilazi said. “Fairer companies attract better talent, foster more innovation, and have stronger long-term results.”

Yet many businesses refuse to make fairness a structural priority. Why? Because, as Chilazi put it, “real fairness requires real power shifts. And that makes a lot of leaders uncomfortable.”

But here’s the reality: fairness isn’t a cost – it’s an investment. The future belongs to the companies that understand this. And those that don’t? They’ll be left wondering why the best talent keeps walking out the door.

NB I’ll be discussing some of this next week, on March 4, at the latest Inner London South Virtual Networking event for the Federation of Small Businesses (of which I’m a member). See here to tune in.

Fairness in AI: Who controls the future?

If fairness in the workplace is in crisis, fairness in AI is a full-blown emergency. And unlike workplace bias – which at least has legal protections and public scrutiny – AI bias is being quietly embedded into the foundations of our future.

AI now influences who gets hired, who gets a loan, who gets medical treatment, and even who goes to prison. Yet, shockingly, most companies deploying these systems have no real governance strategy in place.

At the start of February, I spoke with Splunk’s Geneva-based Kirsty Paine, a cybersecurity strategist and World Economic Forum Fellow, who is actively working with governments, regulators, and industry leaders to shape AI security standards. Her message was blunt: “AI governance isn’t just about ethics or compliance – it’s a resilience issue. If you don’t get it right, your business is exposed”.

This is where many boards are failing. They assume AI security is a technical problem, best left to IT teams. But as Paine explained, if AI makes a bad decision – one that leads to reputational, financial, or legal fallout – blaming the engineers won’t cut it.

“We need boards to start thinking of AI governance the same way they think about financial oversight,” she said. “If you wouldn’t approve a financial model without auditing it, why would you sign off on AI that fundamentally impacts customers, employees, and business decisions?”

Historically, businesses have treated cybersecurity as a defensive function – protecting systems from external attacks. But AI doesn’t work like that. It is constantly learning, evolving, and interacting with new data and new risks.

“You can’t just ‘fix’ an AI system once and assume it’s safe,” Paine told me. “AI doesn’t stop learning, so its risks don’t stop evolving either. That means your governance model needs to be just as dynamic.”

At its core, this is about power. Who controls AI, and in whose interests? Right now, most AI development is happening behind closed doors, controlled by a handful of tech giants with little accountability.

One of the biggest governance challenges is that no single company can solve AI security alone. That’s why Paine is leading cross-industry efforts at the WEF, bringing together governments, regulators, and businesses to create shared frameworks for AI security and resilience.

“AI security shouldn’t be a competitive advantage – it should be a shared priority,” she said. “If businesses don’t start working together on governance, they’ll be left at the mercy of regulators who will make those decisions for them.”

One of the most significant barriers to AI security is communication. Paine, who started her career as a mathematics teacher in challenging schools, knows that how you explain something determines whether people truly understand it.

“In cybersecurity and AI, we love jargon,” she admitted. “But if your board doesn’t understand the language you’re using, how can they make informed decisions?”

This is where her teaching background has shaped her approach. “I had to explain complex maths to students who found it intimidating,” she said. “Now, I do the same thing in boardrooms.” The goal isn’t to impress people with technical terms but to ensure they actually get it, was her message.

And this, ultimately, is the hidden risk of AI governance: if leaders don’t understand the systems they’re approving, they can’t govern them effectively.

The present

If fairness has been the intellectual thread running through my conversations this month, sobriety has been the personal one. I’ve been talking about it a lot – on Voice of Islam radio, for example (see here, from about 23 minutes in), where I was invited to discuss the impact of alcohol on society – and in wrapping up Upper Bottom, the sobriety podcast I co-hosted for the past year.

Ending Upper Bottom felt like the right decision – producing a weekly podcast (an endless cycle of researching, recording, editing, publishing and promoting) is challenging, and harder to justify with no financial reward and little social impact. But it also marked a turning point. 

When we launched last February, it was a passion project – an exploration of what it meant to re-evaluate alcohol’s role in our lives. Over the months, the response was encouraging: messages from people rethinking their own drinking, others inspired to take a break, and some who felt seen for the first time. It proved what I suspected all along: the sweetest fruits of sobriety can be found through clarity, agency, and taking control of your own story.

And now? Well, I’m already lining up new hosting gigs – this time, paid ones. Sobriety has given me a sharper focus, a better work ethic, and, frankly, a clearer voice. I have no interest in being a preacher about it – if you want a drink, have a drink – but I do know that since cutting out alcohol, opportunities keep rolling in. And I’m open to more.

I bring this up because storytelling – whether through a podcast mic, a radio interview, or the pages of Go Flux Yourself – is essentially about fairness too. Who gets to tell their story? Whose voice gets amplified? Who is given the space to question things that seem “normal” but, on closer inspection, might not be serving them?

This is the thread that ties my conversations this month – whether with Kirsty on AI governance, Robert on wealth distribution and politics, or Siri on workplace fairness, or my own reflections on sobriety – into something bigger. Fairness isn’t just about systems. It’s about who gets to write the script.

And right now, I’m more interested than ever in shaping my own.

The past

February was my birthday month. Another year older, another opportunity to reflect. And this year, the reflection came at a high altitude.

I spent a long weekend skiing in Slovenia with my 10-year-old son, Freddie – his first time on skis. It was magical, watching him initially wobble, find his balance, and then, quickly, gain confidence as he carved his way down the slopes. It took me back to my own childhood, when I was lucky enough to ski from a young age. But that word – lucky – stuck with me.

Because here’s the truth: by the time Freddie is my age, skiing might not be possible anymore.

The Alps are already feeling the effects of climate change. Lower-altitude resorts are seeing shorter seasons, more artificial snow, and unpredictable weather patterns. Consider that 53% of European ski resorts face a ‘very high risk’ of snow scarcity if temperatures rise by 2°C. By the time Freddie’s children – if he has them – are old enough to ski, the idea of a family ski holiday may be a relic of the past.

It’s sobering to think about, especially after spending a month discussing fairness at work and in AI. Because climate change is the ultimate fairness issue. The people least responsible for it – future generations – are the ones who will pay the highest price.

For now, I’m grateful. Grateful that I got to experience skiing as a child, grateful that I got to share it with Freddie, grateful that – for now – we still have these mountains to enjoy.

But fairness isn’t about nostalgia. It’s about responsibility. And if we don’t take action, the stories we tell our grandchildren about the world we once had will be the closest they ever get to it.

Statistics of the month

📉 Is Google search fading? A TechRadar study found that 27% of US respondents now use AI tools instead of search engines. (I admit, I’m the same.) The way we find information is shifting fast. 🔗

🚀 GenAI is the skill to have. Coursera saw an 866% rise in AI course enrolments among enterprise learners. Year-on-year increases hit 1,100% for employees, 500% for students, and 1,600% for job seekers. Adapt, or be left behind. 🔗

⏳ Job applications are too slow. Candidates spend 42 minutes per application – close to the 53-minute threshold they consider excessive. Nearly half (45%) give up if the process drags on. Businesses must streamline hiring or risk losing top talent. 🔗

🤖 Robots are easing the burden on US nurses. AI assistants have saved clinicians 1.5 billion steps and 575,000+ hours by handling non-patient-facing tasks. A glimpse into the future of healthcare efficiency. 🔗

💻 The Slack-Zoom paradox. Virtual tools have boosted productivity for 59% of workers, yet 45% report “Zoom fatigue” – with men disproportionately affected. Remote work: a blessing and a burden. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 13)

TL;DR: January’s Go Flux Yourself examines the rise of social retreat in the digital age and the erosion of real-world relationships that is leading towards population collapse – welcome to “the anti-social century” – plus the role of misinformation in shaping our realities … 

Image created on Midjourney with the prompt “a playground of children looking at their mobile phones and not talking to one another / interacting physically in the style of an L. S. Lowry painting”

The future

Humans are social creatures – or at least, we used to be.

If you were to chart a graph of human loneliness over the last century, it would resemble a slow upward creep followed by a dramatic surge in the last two decades. 

We are, according to The Atlantic’s Derek Thompson, living in The Anti-Social Century, where the idea of spending time with others is increasingly seen as optional, exhausting, or even undesirable. (I strongly recommend spending the time to read his long article – take tissues.)

The numbers paint a stark picture. Americans now spend more time alone than ever before. The percentage of US adults having dinner or drinks with friends on any given night has plummeted by over 30% in the past 20 years.

Meanwhile, solo dining – once the hallmark of the lonely business traveller or the social outcast – has surged 29% in just the last two years. The number one reason people give? A greater need for “me time”.

This shift extends beyond social gatherings. The Atlantic quotes Washington, D.C., restaurateur Steve Salis, who says: “There’s an isolationist dynamic that’s taking place in the restaurant business. I think people feel uncomfortable in the world today. They’ve decided that their home is their sanctuary. It’s not easy to get them to leave.”

And while the adults in the room seem to be leading the way, consider these stats: The average person is awake for roughly 900 minutes each day. According to the Digital Parenthood Initiative, American children and teenagers spend around 270 minutes on screens during weekdays and 380 minutes on weekends. This means that screens consume over 30% of their waking hours.
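That “over 30%” figure holds up to a quick sanity check – a minimal sketch, weighting the weekday and weekend figures quoted above across a seven-day week:

```python
# Back-of-envelope check of the screen-time share quoted above
# (figures as cited from the Digital Parenthood Initiative).
WAKING_MINUTES = 900   # minutes awake per day
WEEKDAY_SCREEN = 270   # weekday screen time, minutes
WEEKEND_SCREEN = 380   # weekend screen time, minutes

# Weighted average across a seven-day week: 5 weekdays, 2 weekend days
avg_screen = (5 * WEEKDAY_SCREEN + 2 * WEEKEND_SCREEN) / 7
share = avg_screen / WAKING_MINUTES

print(f"Average daily screen time: {avg_screen:.0f} minutes")  # 301 minutes
print(f"Share of waking hours: {share:.0%}")                   # 33%
```

Roughly 301 minutes a day – a third of those 900 waking minutes spent looking at a screen.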

Even when people do leave their homes, they are less likely to engage with others meaningfully. A study, referenced in Thompson’s article, by OpenTable found that the fastest-growing segment of restaurant-goers are those eating alone.

Technology is the obvious culprit. If the automobile and television nudged us towards individualism in the 20th century, the smartphone has propelled us into a full-blown social retreat in the 21st. John Burn-Murdoch in The Financial Times describes this phenomenon as the “relationship recession”, a term that neatly captures the decline in both casual friendships and long-term romantic partnerships.

And the point here is that this isn’t simply an American problem: it’s a global one. Across Europe, the proportion of young people who don’t socialise at least once a week has risen from one in ten to one in four. In Finland, the decline in relationships has become so extreme that couples who move in together are now more likely to break up than have a child.

As Professor Niall Ferguson pointed out in his talk Demographics, What Next? at the World Economic Forum in Davos earlier this month, in South Korea, an entire movement has emerged in response to these shifting dynamics. The “Four No’s” Movement (4B) – standing for no dating, no marriage, no sex, and no childbirth – has gained significant traction, driven by concerns over gender inequality, economic pressures, and shifting cultural values.

“A world of rising singledom is not necessarily any better or worse than one filled with couples and families,” writes Burn-Murdoch, “but it is fundamentally different to what has come before, with major social, economic and political implications.”

The collapse of relationships isn’t just a lifestyle shift: it’s a demographic crisis. Japan and South Korea, both grappling with birth rates far below replacement level, are on the brink of population collapse.

Elon Musk has gone as far as to call declining birth rates a bigger existential threat than climate change. Dramatic? Maybe. But consider this: the United Nations predicts world population will peak at 10.4 billion in 2084, after which it will decline. And yet, some experts believe that we may reach a tipping point well before then, as fertility rates continue to plummet in wealthier nations.

But why is this happening now?

One reason, as The Atlantic notes, is that the structure of our daily lives has shifted. In the past, physical communities were a natural byproduct of life. You met friends in school, in your neighbourhood, at work, or through shared activities. Today, those default social structures are eroding.

Thompson writes: “For many young people, the routines of daily life no longer require leaving the house, let alone interacting with other people in a meaningful way. Everything – from food delivery to entertainment to work – can now be accessed from a screen.”

It’s not just that young people are socialising less. It’s that their entire experience of the world is mediated through digital interactions. Digital life has become a substitute for real life.

And then there’s the rise of AI-driven relationships. For years, the assumption was that AI companionship was a male-dominated phenomenon. But a recent New York Times piece turned that idea on its head, profiling a married woman who formed a deep emotional attachment to an AI boyfriend.

“I never imagined I would feel this way about something that isn’t real,” she admitted. “But here I am.”

This follows a broader trend. Last year I wrote how, in March 2024, there had been 225 million lifetime downloads on the Google Play Store for AI companions alone. Back then, there was a clear gender disparity: AI girlfriends were overwhelmingly preferred, outpacing their male counterparts by a factor of seven. We need new data, it seems.

The rise of AI-driven relationships raises unsettling questions:

  • What happens when virtual relationships feel safer, more convenient, and more emotionally fulfilling than real-world ones?
  • What happens when AI companions become indistinguishable from human ones?
  • What happens when loneliness itself becomes a business model?

Dr Jonathan Haidt, also speaking at WEF in Switzerland, has pointed to 2012 as the year youth mental health began its downward spiral. The reason? That was the tipping point when smartphones – the iPhone was five years old then – and social media became ubiquitous.

In his research, Dr Haidt found that:

  • Teen girls and young women are particularly affected – with 20% of teenage girls reporting that they’ve made a suicide plan.
  • Compared to past generations, today’s kids experience:
    • 70% less physical touch with peers.
    • 70% less laughter with friends.
    • Far less independence, free play, and real-world responsibility.

The impact is clear: the less time young people spend engaging in the real world, the worse their mental health becomes.

His proposed solution, on his “mission to restore childhood”? A radical rethink of childhood norms:

  1. No smartphones before age 14.
  2. No social media before age 16.
  3. Phone-free schools.
  4. More independence, free play, and real-world responsibilities.

He pointed out that governments can help with two of these; parents can deal with the other two. I’ll let you work out which is which.

Granted, it’s a compelling vision – but in a world where five-year-olds already own smartphones and parents outsource childcare to screens, it feels almost utopian.

Burn-Murdoch asks the fundamental question: “Is this really what people want? If not, what needs to change?”

Do young people truly want to be alone, or have they simply been conditioned to accept a world where human connection is secondary to digital convenience?

Maybe the real question isn’t whether this trend is good or bad. Maybe the real question is: do we even want to be together anymore?

The present

If the future is about our retreat from real-world relationships, the present is about why we no longer trust what we see, read, or hear. The war on truth is well underway, and it is not only bad actors fighting it – it is also being waged by the very technology we rely on to inform us. Little wonder misinformation and disinformation ranked fifth in the WEF’s new Global Risks Report.

At a WEF session, Steven Pinker pointed out that the news has always been naturally negative, designed to generate outrage rather than understanding. But what happens when even the sources we trust most are misrepresenting reality? 

Apple recently came under fire for its AI-powered news summaries, which have repeatedly fabricated information. In December, the BBC formally complained after Apple’s AI-generated notifications misreported that an accused murderer had shot himself. He hadn’t. 

The same system falsely claimed that tennis legend Rafael Nadal had come out as gay and that teen sensation Luke Littler had won the PDC World Darts Championship before the event had even taken place. These were complete fabrications, generated by an AI that doesn’t understand context, nuance, or accountability.

This is the crux of the problem. Generative AI does not know anything. It simply regurgitates and reassembles probabilities based on existing data. It cannot verify sources, weigh evidence, or apply journalistic ethics. And yet, major tech companies are increasingly handing over editorial decisions to these flawed systems. 

The BBC warned that Apple’s AI-generated summaries “do not reflect – and in some cases completely contradict – the original BBC content”. Reporters Without Borders went further, arguing that facts cannot be decided by a roll of the dice. Meanwhile, the National Union of Journalists called for Apple to remove the feature entirely, warning that AI-generated news is a “blow to the outlet’s credibility and a danger to the public’s right to reliable information”.

Who can you trust? This woeful situation, though, is a symptom of a much deeper crisis in journalism itself. The past two decades have seen a slow-motion collapse of traditional media. I recently reflected on this in an interview with Think Feel Do, an impact marketing agency, looking back on my early days at The Observer sports desk 20 years ago.

Image created on Midjourney with the prompt “a newspaper sports editor in the 2000s watching horse racing on a screen in the office, with his feet up on his desk, in front of other reporters and staff in the style of a Hockney painting”

As I said in the interview: “It certainly feels like a different era. With it being a Sunday paper, the staff didn’t head into the office until Thursday, having been off since Saturday. The sports editor would constantly have horse racing on the TV, and it was very boozy – Fleet Street was famous for that culture.

“The last newsroom I worked in about a decade ago was an open-plan office with a command-and-control hierarchy. Since then, it’s been incredibly challenging for media organisations, particularly newspapers, following the advent of the Internet. The traditional business model has been completely upended – essentially, most newspapers are now just managing decline because they haven’t worked out how to organise the advertising model effectively.

“I’m part of the problem: apart from my Financial Times Weekend subscription, I can’t recall the last time I bought a newspaper – it must be two years or more. The situation has led to an explosion of clickbait content, making life even more difficult in our post-truth world.

“As Mark Twain supposedly said: ‘If you don’t read a newspaper, you’re uninformed; if you do read the newspaper, you’re misinformed.’ This feels particularly relevant today, where we’ve seen the damage caused by misinformation, not least during the coronavirus crisis. It makes me somewhat ashamed to be a member of the media, given some of the mistruths peddled during that period that we’re still struggling to deal with.”

Back then, Fleet Street had character. Newsrooms were bustling, long boozy lunches were the norm, and print advertising still funded serious investigative journalism. There was a sense of camaraderie, of purpose. Today, those newsrooms have been gutted. The rise of the internet shattered the traditional business model, leaving most newspapers managing decline rather than thriving. 

As people stopped buying physical newspapers, media companies scrambled to pivot to digital, only to find that online advertising wasn’t nearly as lucrative. The result has been brutal: thousands of journalists laid off, local news outlets shuttered, and serious reporting increasingly replaced by clickbait.

This collapse of traditional media is exactly why independent voices – whether through newsletters, podcasts, or other alternative platforms – are more critical than ever. In a world where trust in mainstream institutions is eroding, people seek authentic, nuanced, and human content. That’s why, after a year of hosting Upper Bottom, my sobriety podcast exploring drinking culture through an ambivalent lens, I’m now looking for new podcasting opportunities.

Having built the show from the ground up – teaching myself to host, record, edit, and distribute episodes – I’ve developed a deep appreciation for the format. Podcasting offers something that much of modern media lacks: space for real conversations, free from the constraints of algorithm-driven outrage. 

Unsurprisingly, podcasts have surged in popularity while trust in mainstream news has declined. Listeners value the intimacy of the format and the ability to engage deeply with a subject rather than skimming headlines. There’s something refreshing about hearing a person’s actual voice rather than reading AI-generated summaries riddled with inaccuracies. 

Whether it’s covering the future of work, technology, human-centric innovation, or broader cultural shifts, I’m keen to continue exploring these themes through the spoken word. If you’re working on a podcast – or know someone who is – let’s talk.

Returning to traditional media, the financial strain has left journalism vulnerable to another existential threat: the rise of misinformation and disinformation. The two are often conflated, but they serve different purposes. Disinformation refers to deliberate falsehoods, often spread for political or financial gain, while misinformation is inaccurate information shared unknowingly. Social media has become the primary battleground for both. 

A 2018 study suggested that fake news spreads six times faster than real news on Twitter (now X), and AI-generated content is making it even harder to separate fact from fiction. This links back to the relationship recession. We’re also experiencing a trust recession. A 2024 Edelman Trust Barometer survey found that 61% of people no longer trust the news they consume, while 67% worry that AI-generated misinformation will soon make it impossible to know what’s true.

The consequences of this breakdown in trust are staggering. During the pandemic, conspiracy theories about vaccines contributed to widespread hesitancy, prolonging the crisis and costing lives. In elections, disinformation campaigns manipulate public opinion and undermine democracy. In war zones, AI-generated propaganda spreads rapidly, making it harder to distinguish reality from fiction. These are not abstract concerns – they are reshaping how people perceive the world, how they vote, and how they interact with one another.

Beyond politics, there is also the mental health toll of living in a world where truth feels increasingly elusive. Studies show that constant uncertainty fuels anxiety, depression, and social withdrawal. It is no coincidence that trust in media is collapsing at the same time that loneliness and isolation are surging. 

In particular, young people opt out of real-world interactions at unprecedented rates, as alluded to above. But perhaps they aren’t just retreating because they prefer digital interactions. Maybe they are withdrawing because they no longer trust the world around them. When every news source seems biased, when every politician seems corrupt, when every piece of media might be AI-generated nonsense, is it any wonder that people are choosing to disengage?

So, where do we go from here? Looking ahead 20 years, there are several possible scenarios. In the best case, AI handles the grunt work of journalism – automating transcription, summarising reports, and organising data – while human journalists focus on analysis, context, and investigative reporting. A more likely outcome is that only a handful of major media outlets survive, while the rest collapse. The worst-case scenario is a world where AI-generated misinformation dominates, and no one trusts anything anymore.

There are, however, some glimmers of hope. While major newspapers struggle, independent local journalism is seeing a resurgence. Outlets like The Mill and The Londoner have shown that people will pay for quality news – if it feels relevant to their daily lives. And while social media has often been an engine for misinformation, it has also enabled investigative journalists to share their work directly with engaged audiences. The challenge is finding a way to balance these forces – to harness the benefits of AI while maintaining journalistic integrity.

Ultimately, the fight for truth is about more than just media. It’s about education. If we are to navigate this new information landscape, we must teach the next generation to think critically, question sources, and demand accountability. Because if we don’t, we risk entering an era where reality itself becomes an illusion. And once that happens, how do we ever find our way back?

Trust is our most valuable currency in a world of misinformation, AI-generated news, and social media echo chambers. The question is: who do we trust? And how do we ensure that trust isn’t misplaced?

The past

Looking back, I’m grateful that social media didn’t exist when I was at university. My first tutorial at the University of St Andrews was surreal enough – just me, seven female students, and Prince William. 

I graduated 20 years ago this summer, and so much has changed. Recently, I was invited onto the Leading the Future podcast to reflect on my career – from starting in sports journalism to pivoting into technology and business. We talked about how vital human skills remain, even in an AI-driven world. And yes, we discussed the heir to the throne, a bit.

In the 40-minute episode, titled Human Centrism with Oliver Pickup, I also covered:

  • How I started out in sports journalism 
  • When I realised I wanted more than sports journalism
  • How I pivoted to become a technology and business journalist
  • The importance of celebrating humans
  • Why I recently set up a thought leadership business (Pickup_andWebb)
  • My thoughts on smartphones and social media for children / teens
  • How to be part of the “AI class” – and make use of agentic AI

Do take a listen if any of those topics appeal to your curiosity.

Statistics of the month

📉 92 million jobs will be eliminated by 2030, but 170 million new roles will be created, according to the World Economic Forum’s Future of Jobs Report 2025.

💻 The same report found that only 24% of workers feel confident they have the skills needed for career progression in the next three years – meaning 76% don’t.

🤖 63% of people trust AI to inform decisions at work, a new CIPD study shows – but only 1% would trust AI to make important decisions outright.

⚖️ For the first time in Workmonitor’s 22-year history, work-life balance (83%) is now more important than pay (82%).

👎 44% of employees have quit a job due to a toxic workplace, Workmonitor’s report suggests – up from 33% last year.

Stay fluxed – and get in touch! Let’s get fluxed together …


From darkness to light – unlocking the potential of technology-enabled supply chains

As businesses grapple with volatile demand and rising customer expectations, emerging technologies are helping operators see – and shape – their logistics networks like never before, according to experts

For centuries, supply chains have mainly operated in the dark. Even in our hyper-connected era, supply chain visibility and sustainability remain critical challenges, with the latest Proxima Supply Chain Barometer revealing that 86% of chief executives see significant hurdles with supply chain resiliency. 

This is driving a growing sense of urgency, with 55% of CEOs planning to dedicate more time to supply chain topics than they did last year. Meanwhile, 99% of respondents identify barriers to supply chain decarbonisation, according to the Proxima research, published in September and based on a survey of 3,000 CEOs at UK, US, DACH and Benelux-based companies.

However, a convergence of artificial intelligence, Internet of Things (IoT) sensors, and innovative delivery models is shining light on these blind spots, promising a future where goods flow with next-level efficiency and sustainability.

“We’ve been running supply chains largely in the dark,” says Steve Statler, chief marketing officer at Wiliot, an ambient IoT company. “The opportunity now is to see everything everywhere all at once – like getting the cheat code in a computer game where suddenly the entire battlefield map is illuminated.”

This newfound visibility is already transforming operations. Royal Mail is deploying Wiliot’s Bluetooth readers across 6,500 vehicles, along with 2.5 million of the firm’s battery-free Bluetooth tags, to track the 850,000 rolling cages that transport parcels nationwide. The system helps right-size trucks, orchestrate labour, and prevent asset losses. More significantly, it lays the groundwork for real-time parcel tracking with temperature and carbon monitoring – enabling ambient delivery services for temperature-sensitive items like medicine and food without requiring refrigerated transport.

Last-mile upgrade – drones and driverless cars

The pressure to innovate comes as consumer expectations reach new heights. Mia Yamaguchi, retail development lead at Uber Direct, says her company’s recent study shows 96% of Gen Z consumers now expect retailers to offer on-demand delivery. Yet only 22% of UK merchants, across various industries, currently provide such services, revealing a stark gap between capability and demand that represents a significant opportunity for forward-thinking businesses.

Uber Direct is addressing this gap by extending its food delivery network to all retail categories, from pharmaceuticals to DIY supplies. The model leverages existing Uber Eats couriers, demonstrating how innovation often means creatively repurposing assets rather than building from scratch. “We’re effectively a logistics innovator with a strong Uber backing,” says Yamaguchi, “utilising our existing fleet rather than investing heavily in new infrastructure.”

This approach has proved particularly valuable for construction companies and time-sensitive deliveries. When builders run short of supplies, sending workers to fetch materials can result in hours of lost productivity. Uber Direct’s solution enables rapid replenishment without disrupting work schedules, illustrating how modern logistics can directly impact broader economic efficiency.

Innovation in last-mile delivery is accelerating globally. In China, companies like JD.com are already using drones to service remote areas, particularly mountainous regions where traditional delivery proves challenging. Meanwhile, Uber is piloting autonomous vehicles in California. These developments hint at a future where the final stretch of delivery could be entirely automated.

Rise of the robots – working alongside humans

DHL Supply Chain is taking automation several steps further. The logistics giant will be the first company in the UK to deploy Boston Dynamics’ Stretch robots, which use computer vision to unload trailers. Machine learning algorithms improve inventory accuracy and reduce labour needs, while generative AI streamlines back-office processes from legal work to solution design.

“Innovation and change should be embraced as something positive,” explains Saul Resnick, chief executive officer of DHL Supply Chain in the United Kingdom and Ireland. “These technologies ultimately improve job satisfaction, work quality, and safety for our people.” Rather than replacing workers, DHL’s automation strategy focuses on eliminating repetitive, physically demanding tasks while creating opportunities for employees to develop new skills.

Perhaps the most transformative development is the rise of ambient IoT – networks of battery-free sensors powered by surrounding radio waves. Wiliot’s postage stamp-sized computers can be attached to virtually anything, creating what Statler calls “an app store for the physical world”.

This capability enables use cases from monitoring vaccine temperatures to ensuring proper stock rotation in retail. “When you can see everything continuously, you discover issues people didn’t want to see before,” says Statler. “Products left at wrong temperatures, incorrect loading, poor stock rotation – all these inefficiencies become visible and addressable.” The technology has profound implications for food waste reduction and medical supply chain safety.

Navigating supply chain disruptions

Geopolitical disruptions and supply chain resilience remain critical concerns for business leaders. Resnick states that working with partners like Everstream Analytics provides access to predictive insights and risk analytics which help to calculate how events – from blocked shipping lanes to natural disasters – might impact supply chains weeks in advance.

“Having those tools available allows us to be more dynamic,” Resnick explains. “We can tell you that a ship won’t arrive because it’s stuck in the Red Sea and needs to go around Africa. You may need air freight – yes, there’s a cost, but on a case-by-case basis, this can be less than the delay.”

These predictive capabilities, powered by AI and enhanced visibility, enable businesses to make proactive decisions about inventory levels, alternative routes, and transport modes. This agility is crucial as supply chains face continued volatility and disruption.

Despite the promise, implementing these technologies isn’t straightforward. Innovation cycles often outlast executive tenures, making long-term transformation difficult. Integration with legacy systems poses technical challenges, while workforce concerns about automation require careful change management. The shift to more resilient, diversified supply chains following COVID-19 and geopolitical disruptions also demands investment, stresses Resnick.

Using the power of partnerships 

For companies beginning their digital transformation journey, success requires a clear focus on business problems rather than technology solutions. Resnick emphasises the importance of viewing technology as an enabler rather than an end in itself. Statler notes that building ecosystems of trusted partners is crucial. “The notion that you can succeed by building everything yourself is shortsighted.”

Yield management emerges as a critical concept across all three experts’ insights. Whether it’s Uber maximising courier utilisation during off-peak hours, DHL optimising warehouse operations through automation, or Royal Mail enhancing its legacy infrastructure with modern tracking capabilities, the key is leveraging existing assets more efficiently before investing in new ones.

In the next 12 months, the convergence of AI, IoT, and innovative delivery models will create unprecedented opportunities for supply chain optimisation. Early adopters are already seeing significant benefits: Royal Mail’s rolling cage tracking system has reduced asset losses and improved fleet efficiency, while Uber Direct’s expansion into new retail categories is helping merchants meet evolving consumer expectations.

Tomorrow’s supply chains will be visible, predictive, and responsive in previously unimaginable ways. For business leaders, the imperative is clear: embrace technological change or risk being left in the dark. “Innovation is business critical,” Resnick concludes. “You can’t stand still in this regard. The alternative to moving forward doesn’t exist.”

This article was first published by Raconteur, in December 2024, following an in-person roundtable event that I moderated

Go Flux Yourself: Navigating the Future of Work (No. 11)

TL;DR: November’s Go Flux Yourself channels the wisdom of Marcus Aurelius to navigate the AI revolution, examining Nvidia’s bold vision for an AI-dominated workforce, unpacks Australia’s landmark social media ban for under-16s, and finds timeless lessons in a school friend’s recovery story about the importance of thoughtful, measured progress …

Image created on Midjourney with the prompt “a dismayed looking Roman emperor Marcus Aurelius looking over a world in which AI drone and scary warfare dominates in the style of a Renaissance painting”

The future

“The happiness of your life depends upon the quality of your thoughts.” 

These sage – and neatly optimistic – words from Marcus Aurelius, the great Roman emperor and Stoic philosopher, feel especially pertinent as we scan 2025’s technological horizon. 

Aurelius, who died in AD 180 and became known as the last of the Five Good Emperors, exemplified a philosophy that teaches us to focus solely on what we can control and accept what we cannot. That wisdom is valuable in an AI-driven future, for communities still suffering a psychological form of long COVID born of the pandemic’s collective trauma, and wrestling with deep uncertainty and mistrust as geopolitical tensions and global temperatures rise.

The final emperor of the relatively peaceful Pax Romana era, Aurelius seemed a fitting person to quote this month for another reason: I’m flying to the Italian capital this coming week to cover CSO 360, a security conference that allows attendees to take a peek behind the curtain – although I’m worried about what I may see.

One of the most eye-popping lines from last year’s conference in Berlin was that there was a 50-50 chance that World War III would be ignited in 2024. One could argue that while there has not been a Franz Ferdinand moment, the key players are manoeuvring their pieces on the board. Expect more on this cheery subject – ho, ho, ho! – in the last newsletter of the year, on December 31.

Meanwhile, as technological change accelerates and AI agents increasingly populate our workplaces (“agentic AI” is the latest buzzword, in case you haven’t heard), the quality of our thinking about their integration – something we can control – becomes paramount.

In mid-October, Jensen Huang, Co-Founder and CEO of tech giant Nvidia – which specialises in graphics processing units (GPUs) and AI computing – revealed on the BG2 podcast that he plans to shape his workforce so that it is one-third human and two-thirds AI agents.

“Nvidia has 32,000 employees today,” Huang stated, but he hopes the organisation will have 50,000 employees and “100 million AI assistants in every single group”. Given my focus on human-work evolution, I initially found this concept shocking, even appalling. But perhaps I was too hasty in reaching that conclusion.

When, a couple of weeks ago, I interviewed Daniel Vassilev, Co-Founder and CEO of Relevance AI, which builds virtual workforces of AI agents that act as a seamless extension of human teams, his perspective on Huang’s vision was refreshingly nuanced. He provided an enlightening analogy about throwing pebbles into the sea.

“Most of us limit our thinking,” the San Francisco-based Australian entrepreneur said. “It’s like having ten pebbles to throw into the sea. We focus on making those pebbles bigger or flatter, so they’ll go further. But we often forget to consider whether our efforts might actually give us 20, 30, or even 50 pebbles to throw.”

His point cuts to the heart of the AI workforce debate: rather than simply replacing human workers, AI might expand our collective capabilities and create new opportunities. “I’ve always found it’s a safe bet that if you give people the ability to do more, they will do more,” Vassilev observed. “They won’t do less just because they can.”

This positive yet grounded perspective was echoed in my conversation with Five9’s Steve Blood, who shared fascinating insights about the evolution of workplace dynamics, specifically in the customer experience space, when I was in Barcelona in the middle of the month reporting on his company’s CX Summit. 

Blood, VP of Market Intelligence at Five9, predicts a “unified employee” future where AI enables workers to handle increasingly diverse responsibilities across traditional departmental boundaries. Rather than wholesale replacement, he envisions a workforce augmented by AI, where employees become more valuable by leveraging technology to handle multiple functions.

(As an aside, Blood predicts the customer experience landscape of 2030 will be radically different, with machine customers evolving through three distinct phases. The journey starts with today’s ‘bound’ customers (like printers ordering their own ink cartridges exclusively from manufacturers), progresses to ‘adaptable’ customers (AI systems making purchases based on user preferences from multiple suppliers), and ultimately reaches ‘autonomous’ customers, where digital twins make entirely independent decisions based on their understanding of our preferences and history.)

The quality of our thinking about AI integration becomes especially crucial when considering what SailPoint’s CEO Mark McClain described to me this month as the “three V’s”: volume, variety, and velocity. These parameters no longer apply to data alone; they’re increasingly relevant to the AI agents themselves. As McClain explained: “We’ve got a higher volume of identities all the time. We’ve got more variety of identities, because of AI. And then you’ve certainly got a velocity problem here where it’s just exploding.” 

This explosion of AI capabilities brings us to a critical juncture. While Nvidia’s Huang envisions AI employees being managed much like their human counterparts – assigned tasks and engaged in dialogue – the reality might be more nuanced. Handling security permissions, in particular, will need much work, and it is perhaps something business leaders have not thought about enough.

Indeed, AI optimism must be tempered with practical considerations. The cybersecurity experts I’ve met recently have all emphasised the need for robust governance frameworks and clear accountability structures. 

Looking ahead to next year, organisations must develop flexible frameworks that can evolve as rapidly as AI capabilities. The “second mouse gets the cheese” approach – waiting for others to make mistakes first – may no longer be viable in an environment where change is constant and competition fierce, as panellist Sue Turner, the Founding Director of AI Governance, explained during a Kolekti roundtable charting the progress of generative AI on ChatGPT’s second birthday, November 28.

Successful organisations will emphasise complementary relationships between human and AI workers, requiring a fundamental rethink of traditional organisational structures and job descriptions.

The management of AI agent identities and access rights will become as crucial as managing human employees’ credentials, presenting both technical and philosophical challenges. Workplace culture must embrace what Blood calls “unified employees” – workers who can leverage AI to operate across traditional departmental boundaries. Perhaps most importantly, organisations must cultivate what Marcus Aurelius would recognise as quality of thought: the ability to think clearly and strategically about AI integration while maintaining human values and ethical considerations.

As we move toward 2025, the question isn’t simply whether AI agents will become standard members of the workforce – they already are. The real question is how we can ensure this integration enhances rather than diminishes human potential. The answer lies not in the technology itself, but in the quality of our thoughts about using it.

Organisations that strike and maintain this balance – embracing AI’s potential while preserving human agency and ethical considerations – will likely emerge as leaders in the new landscape. Ultimately, the quality of our thoughts about AI integration today will determine the happiness of our professional lives tomorrow.

The present

November’s news perfectly illustrates why we need to maintain quality of thought when adopting new technologies. Australia’s world-first decision to ban social media for under-16s, a bill passed a couple of days ago, marks a watershed moment in how we think about digital technology’s impact on society – and offers valuable lessons as we rush headlong into the AI revolution.

The Australian bill reflects a growing awareness of social media’s harmful effects on young minds. It’s a stance increasingly supported by data: new Financial Times polling reveals that almost half of British adults favour a total ban on smartphones in schools, while 71% support collecting phones in classroom baskets.

The timing couldn’t be more critical. Ofcom’s disturbing April study found nearly a quarter of British children aged between five and seven owned a smartphone, with many using social media apps despite being well below the minimum age requirement of 13. I pointed out in August’s Go Flux Yourself that EE recommended that children under 11 shouldn’t have smartphones. Meanwhile, University of Oxford researchers have identified a “linear relationship” between social media use and deteriorating mental health among teenagers.

Social psychologist Jonathan Haidt’s assertion in The Anxious Generation that smart devices have “rewired childhood” feels particularly apposite as we consider AI’s potential impact. If we’ve learned anything from social media’s unfettered growth, it’s that we must think carefully about technological integration before, not after, widespread adoption.

Interestingly, we’re seeing signs of a cultural awakening to technology’s double-edged nature. Collins Dictionary’s word of the year shortlist included “brainrot” – defined as an inability to think clearly due to excessive consumption of low-quality online content. While “brat” claimed the top spot – a word redefined by singer Charli XCX as someone who “has a breakdown, but kind of like parties through it” – the inclusion of “brainrot” speaks volumes about our growing awareness of digital overconsumption’s cognitive costs.

This awareness is manifesting in unexpected ways. A heartening trend has emerged on social media platforms, with users pushing back against online negativity by expressing gratitude for life’s mundane aspects. Posts celebrating “the privilege of doing household chores” or “the privilege of feeling bloated from overeating” represent a collective yearning for authentic, unfiltered experiences in an increasingly synthetic world.

In the workplace, we’re witnessing a similar recalibration regarding AI adoption. The latest Slack Workforce Index reveals a fascinating shift: for the first time since ChatGPT’s arrival, almost exactly two years ago, adoption rates have plateaued in France and the United States, while global excitement about AI has dropped six percentage points.

This hesitation isn’t necessarily negative – it might indicate a more thoughtful approach to AI integration. Nearly half of workers report discomfort admitting to managers that they use AI for common workplace tasks, citing concerns about appearing less competent or lazy. More tellingly, while employees and executives alike want AI to free up time for meaningful work, many fear it will actually increase their workload with “busy work”.

This gap between AI urgency and adoption reflects a deeper tension in the workplace. While organisations push for AI integration, employees express fundamental concerns about using these tools.

This more measured approach echoes broader societal concerns about technological integration. Just as we’re reconsidering social media’s role in young people’s lives, organisations are showing due caution about AI’s workplace implementation. The difference this time? We might actually be thinking before we leap.

Some companies are already demonstrating this more thoughtful approach. Global bank HSBC recently announced a comprehensive AI governance framework that includes regular “ethical audits” of their AI systems. Meanwhile, pharmaceutical giant AstraZeneca has implemented what they call “AI pause points” – mandatory reflection periods before deploying new AI tools.

The quality of our thoughts about these changes today will indeed shape the quality of our lives tomorrow. That’s the most important lesson from this month’s developments: in an age of AI, natural wisdom matters more than ever.

These concerns aren’t merely theoretical. Microsoft’s Copilot AI spectacularly demonstrated the pitfalls of rushing to deploy AI solutions this month. The product, designed to enhance workplace productivity by accessing internal company data, became embroiled in privacy breaches, with users reportedly accessing colleagues’ salary details and sensitive HR files. 

When fewer than 4% of IT leaders surveyed by Gartner said Copilot offered significant value, and Salesforce’s CEO Marc Benioff compared it to Clippy – Office 97’s notoriously unhelpful cartoon assistant – it highlighted a crucial truth: the gap between AI’s promise and its current capabilities remains vast.

As organisations barrel towards agentic AI next year, with semi-autonomous bots handling everything from press round-ups to customer service, Copilot’s stumbles serve as a timely reminder about the importance of thoughtful implementation.

Related to this point is the looming threat to authentic thought leadership. Nina Schick, a global authority on AI, predicts that by 2025 a staggering 90% of online content will be synthetically generated. It’s a sobering forecast that should give pause to anyone concerned about the quality of discourse in our digital age.

If nine out of ten pieces of content next year will be churned out by machines learning from machines learning from machines, we risk creating an echo chamber of mediocrity, as I wrote in a recent Pickup_andWebb insights piece. As David McCullough, the late American historian and Pulitzer Prize winner, noted: “Writing is thinking. To write well is to think clearly. That’s why it’s so hard.”

This observation hits the bullseye of genuine thought leadership. Real insight demands more than information processing; it requires boots on the ground and minds that truly understand the territory. While AI excels at processing vast amounts of information and identifying patterns, it cannot fundamentally understand the human condition, feel empathy, or craft emotionally resonant narratives.

Leaders who rely on AI for their thought leadership are essentially outsourcing their thinking, trading their unique perspective for a synthetic amalgamation of existing views. In an era where differentiation is the most prized currency, that’s more than just lazy – it’s potentially catastrophic for meaningful discourse.

The past

In April 2014, Gary Mairs – a gregarious character in the year above me at school – drank his last alcoholic drink. Broke, broken and bedraggled, he entered a church in Seville and attended his first Alcoholics Anonymous meeting. 

His life had become unbearably – and unbelievably – chaotic. After moving to Spain with his then-girlfriend, he began to enjoy the cheap cervezas a little too much. Eight months before he quit booze, Gary’s partner, unable to cope with his endless revelry, left him. This opened the beer tap further.

By the time Gary gave up drinking, he had maxed out 17 credit cards, his flatmates had turned on him, and he was hundreds of miles away from anyone who cared – which is why he signed up for AA. But what was it like?

I interviewed Gary for a recent episode of Upper Bottom, the sobriety podcast (for people who have not reached rock bottom) I co-host, and he was reassuringly straight-talking. He didn’t make it past step three of the 12 steps: he couldn’t surrender to a higher power.

However, when asked about the pivotal changes on his road to recovery, Gary talks about the importance of good habits, healthy practices, and meditation. Marcus Aurelius would approve.

In his Meditations, written as private notes to himself nearly two millennia ago, Aurelius emphasised the power of routine and self-reflection. “When you wake up in the morning, tell yourself: The people I deal with today will be meddling, ungrateful, arrogant, dishonest, jealous, and surly. They are like this because they can’t tell good from evil,” he wrote. This wasn’t cynicism but rather a reminder to accept things as they are and focus on what we can control – our responses, habits, and thoughts.

Gary’s journey from chaos to clarity mirrors this ancient wisdom. Just as Aurelius advised to “waste no more time arguing what a good man should be – be one”, Gary stopped theorising about recovery and simply began the daily practice of better living. No higher power was required – just the steady discipline of showing up for oneself.

This resonates as we grapple with AI’s integration into our lives and workplaces. Like Gary discovering that the answer lay not in grand gestures but in small, daily choices, perhaps our path forward with AI requires similar wisdom: accepting what we cannot change while focusing intently on what we can – the quality of our thoughts, the authenticity of our voices, the integrity of our choices.

As Aurelius noted: “Very little is needed to make a happy life; it is all within yourself, in your way of thinking.” 

Whether facing personal demons or technological revolution, the principle remains the same: quality of thought, coupled with consistent practice, lights the way forward.

Statistics of the month

  • Exactly two-thirds of LinkedIn users believe AI should be taught in high schools. Additionally, 72% observed an increase in AI-related mentions in job postings, while 48% stated that AI proficiency is a key requirement at the companies they applied to.
  • Only 51% of respondents to Searce’s Global State of AI Study 2024 – which polled 300 C-Suite and senior technology executives across organisations with at least $500 million in revenue in the US and UK – said their AI initiatives have been very successful. Meanwhile, 42% admitted success was only partial.
  • International Workplace Group findings indicate just 7% of hybrid workers describe their 2024 hybrid work experience as “trusted”, hinting at an opportunity for employers to double down on trust in the year ahead.

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 10)

TL;DR: October’s Go Flux Yourself explores the dark and light sides of AI through Nobel Prize winners and cybersecurity experts, weighs the impact of disinformation ahead of the US election, confronts haunting cases of AI misuse, and finds hope in a Holocaust survivor’s legacy of ethical innovation …

Image created on Midjourney with the prompt “a scary megalomaniac dressed as halloween monster with loads of computers showing code behind him in the style of an Edward Hopper painting”

The future

“Large language models are like young children – they grow and develop based on how you nurture and treat them.”

I’m curious by nature – it’s central to my profession as a truth-seeking human-work evolution journalist. But sometimes, it’s perhaps best not to peek behind the curtain, as what lies behind might be terror-inducing. Fittingly, this newsletter is published on Halloween, so you might expect some horror. Consider yourself warned!

I was fortunate enough to interview two genuine cybersecurity luminaries in as many days towards the end of October. First, Dr Joye Purser, Field CISO at Veritas Technologies and a former White House director who was a senior US Government official during the Colonial Pipeline attack in 2021, was over from Atlanta. 

And, the following day, the “godfather of Israeli cybersecurity”, Shlomo Kramer, Co-Founder and CEO of Cato Networks, treated me to lunch at Claridge’s – lucky me! – after flying in from Tel Aviv.

The above quotation is from my conversation with Joye, who warned that if a nation isn’t democratic, it will train its AI systems very differently, with state-controlled information.

Both she and Shlomo painted a sobering picture of our technological future, particularly as we approach what could be the most digitally manipulated vote in history: the United States presidential election. Remember, remember the fifth of November, indeed.

“The risk is high for disinformation campaigns,” Joye stressed, urging voters to “carefully scrutinise the information they receive for what is the source, how recent or not recent is the information, and just develop an increasing public awareness of the warning signs or red flags that something’s not right with the communication”.

Shlomo, who co-founded Check Point Software Technologies in 1993, offered a stark analysis of how social media has fractured our society. “People don’t care if it’s right or wrong, whether a tweet is from a bot or a Russian campaign,” he said. “They just consume it, they believe it – it becomes their religion.” 

Shlomo drew a fascinating parallel between modern social media echo chambers and medieval church communities, suggesting we’ve come full circle from faith-based societies through the age of reason and back to tribal belief systems.

And, of course, most disagreements that escalate into wars are, at least on the surface, primarily down to religious beliefs. Is it a coincidence that two of the largest wars of the past four decades are raging? (And if Donald Trump is voted back into the White House in a week, what will that mean for Europe if – as heavily hinted – funding for Ukraine’s military is strangled?)

After the collective trauma of the coronavirus pandemic, the combination of social media echo chambers and manipulated AIs is fanning the flames of a smouldering society. Have you noticed how, generally, people are snappier with one another?

The cybersecurity challenges are equally worrying. Both experts highlighted how AI is supercharging traditional threats. Shlomo’s team recently uncovered an AI tool that can generate entire fake identities – complete with convincing video, passport photos, and multiple corroborating accounts – capable of fooling sophisticated know-your-customer systems at financial institutions.

Perhaps most concerning was their shared view that cybersecurity isn’t a problem we can solve but one we must constantly manage. As Shlomo said: “You have to run as fast as possible to stay in the same place.” It’s a perpetual arms race between defenders and increasingly sophisticated threats.

Still, there’s hope. The very technologies that create these challenges might also help us overcome them. Both experts emphasised that while bad actors can use AI for deception, it’s also essential for defence. The key is ensuring we develop these tools with democratic values and human welfare in mind.

When I asked about preparing our children for this uncertain future – as I often do when interviewing experts who also have kids – their responses were enlightening. Joye emphasised the importance of teaching children to be “informed consumers of information” who understand the significance of trusted sources and proper journalism. 

Shlomo’s advice was more philosophical: children must learn to “listen to themselves and believe what they hear is true” – to trust their inner voice amid the cacophony of digital noise.

In the post-truth era, who can we trust if not ourselves?

A couple of years ago, John Elkington, a world authority on corporate responsibility and sustainable development who coined the term “triple bottom line”, told me: “In the vacuum of effective politicians, people are turning to businesses for leadership, so business leaders must accept that responsibility.” (Coincidentally, this year marks three decades since the British environmental thinker coined the “3 Ps” of people, planet, and profit.)

For this reason, CEOs, especially, have to speak up with authority, authenticity and original thought. Staying curious, thinking critically, and calling out bad practices are increasingly important, particularly for industry leaders.

With an eye on the near future and the need for truth, I’m pleased to announce the soft launch of Pickup_andWebb, a collaboration with brand strategist and client-turned-friend Cameron Webb. Pickup_andWebb develops incisive, issue-led thought leadership for ambitious clients looking to provoke stakeholder and industry debate and enhance their expert reputation.

“In an era of unprecedented volatility, CEOs navigate treacherous waters,” I wrote recently in our opening Insights article titled Speak up or sink. “The growing list of headwinds is formidable – from geopolitical tensions and wars reshaping global alliances to the relentless march of technological advancements disrupting entire industries. 

“Add to this the perfect storm of rising energy and material costs, traumatised supply chains, and the ever-present spectre of climate change, and it’s clear that the modern CEO’s role has never been more challenging – or more crucial. Yet, despite this incredible turbulence, the truly successful CEO of 2024 must remain a beacon of stability and vision. They are the captains who keep their eyes fixed on the distant horizon, refusing to be distracted by the immediate squalls. 

“More than ever, they must embody the role of progressive visionaries, their gaze penetrating years into the future to seize nascent opportunities or deftly avoid looming catastrophes. But vision alone is not enough.

“Today’s exemplary leaders are expected to steer with a unique blend of authenticity, humility, and vulnerability. They understand that true strength lies not in infallibility but in the courage to acknowledge uncertainties and learn from missteps. 

“These leaders aren’t afraid to swim against the tide, challenging conventional wisdom when necessary and inspiring their crews to navigate uncharted waters.”

If you are – or you know – a leader who might need help swimming against the tide and spreading their word, let’s start a conversation and co-create in early 2025.

The present

This month’s news perfectly illustrated AI’s Jekyll-and-Hyde nature when it comes to truth and technology. We saw the good, the bad, and the downright ugly.

While I’ve shared the darker future possibilities outlined by cybersecurity experts Joye and Shlomo, the 2024 Nobel Prizes highlighted AI’s extraordinary potential for good.

Sir Demis Hassabis, chief executive of Google DeepMind, shared the chemistry prize for using AI to crack a 50-year-old puzzle in biology: predicting the structure of every protein known to humanity. His team’s creation, AlphaFold, has already been used by over two million scientists worldwide, helping develop vaccines, improve plant resistance to climate change, and advance our understanding of the human body.

The day before, Geoffrey Hinton – dubbed the “godfather of AI” – won the physics prize for his pioneering work on neural networks, the very technology that powers today’s AI systems. Yet Hinton, who left Google in May 2023 to “freely speak out about the risk of AI”, now spends his time advocating for greater AI safety measures.

It’s a fitting metaphor for our times: the same week that celebrated AI’s potential to revolutionise scientific discovery also saw warnings about its capacity for deception and manipulation. As Hassabis himself noted, AI remains “just an analytical tool”; how we choose to use it matters, echoing Joye’s comment about how we feed LLMs.

Related to this topic, I was on stage twice at the Digital Transformation EXPO (DTX) London 2024 at the start of the month. Having been asked to produce a write-up of the two-day conference – the theme was “reinvention” – I noted how “the tech industry is caught in a dizzying dance of progress and prudence”.

I continued: “As industry titans and innovators converged at ExCeL London in early October, a central question emerged: how do we harness the transformative power of AI while safeguarding the essence of our humanity?

“As we stand on the brink of unprecedented change, one thing becomes clear: the path forward demands technological prowess, deep ethical reflection, and a renewed focus on the human element in our digital age.”

In the opening keynote, Derren Brown, Britain’s leading psychological illusionist, called for a pause in AI development to ensure technological products serve humans, not vice versa.

“We need to keep humanity in the driving seat,” Brown urged, challenging the audience to rethink the breakneck pace of innovation. This call for caution contrasted sharply with the rest of the conference’s urgency.

Piers Linney, Founder of ImplementAI and former Dragons’ Den investor, provided the most vivid analogy of the event. He likened competing in today’s market without embracing AI to “cage fighting – to the death – against the world champion, yet having Ironman in one’s corner and not calling him for help”.

Meanwhile, Michael Wignall, Customer Success Leader UK at Microsoft, warned: “Most businesses are not moving fast enough. You need to ask yourself: ‘Am I ready to embrace this wave of transformation?’ Your competitors may be ready.” His advice was unequivocal: “Do stuff quickly. If you are not disrupting, you will be disrupted.”

I was honoured to moderate a main-stage panel exploring human-centred tech design, offering a crucial counterpoint to the “move-fast-and-break-things” mantra. Gavin Barton, VP of Engineering at Booking.com, Sue Daley, Director of Tech and Innovation at techUK, and Dr Nicola Millard, Principal Innovation Partner at BT Group, joined me.

“Focus on the outcome you’re looking for,” advised Gavin. “Look at the problem rather than the metric; ask what the real problem is to solve.” Sue cautioned against unquestioningly jumping on the AI bandwagon, stressing: “Think about what you’re trying to achieve. Are you involving your employees, workforce, and potentially customers in what you’re trying to do?” Nicola introduced her “3 Us” framework – Useful, Useable, and Used – for evaluating tech innovation.

Regarding tech’s darker side, Jake Moore, Global Cybersecurity Advisor at ESET, delivered a hair-raising presentation titled The Rise of the Clones on DTX’s Cyber Hacker stage. His practical demonstration of deep fake technology’s potential for harm validated the warnings from both Joye and Shlomo about AI-enabled deception.

Moore revealed how he had used deep fake video and voice technology to penetrate a business’s defences and commit small-scale fraud. It was particularly unnerving given Shlomo’s earlier warning about AI tools generating entire fake identities that can fool sophisticated verification systems.

Moore quoted the late Stephen Hawking’s prescient warning that “AI will be either the best or the worst thing for humanity”, and his demonstration felt like a stark counterpoint to the Nobel Prize celebrations. Here, in one conference hall, we witnessed both the promise and peril of our AI future – rather like watching Dr Jekyll transform into Mr Hyde.

Later in the month, there were yet darker instances of AI’s misuse and abuse. In a story that reads like a Black Mirror episode, American Drew Crecente discovered that his late teenage daughter, Jennifer, who was murdered in 2006, had been resurrected as an AI chatbot on Character.AI. The company claimed the bot was “user-created” and quickly removed it, but the incident raises profound questions about data privacy and respect for the deceased in our digital age.

Arguably even more distressing, and also in the United States, was the case of 14-year-old Sewell Setzer III, who took his own life after developing a relationship with an AI character based on Game of Thrones’ Daenerys Targaryen. His mother’s lawsuit against Character.AI highlights the dangers of AI companions that can form deep emotional bonds with vulnerable users – particularly children and teenagers.

Finally, in what police called a “landmark” prosecution, Bolton-based graphic design student Hugh Nelson was jailed for 18 years after using AI to create and sell child abuse images. The case exemplifies how rapidly improving AI technology can be weaponised for the darkest purposes, with prosecutors noting that “the imagery is becoming more realistic”.

While difficult to stomach, these stories validate warnings about AI’s destructive potential when developed without proper safeguards and ethical considerations. As Joye emphasised, how we nurture these technologies matters profoundly. The challenge ahead is clear: we must harness AI’s extraordinary potential for good while protecting the most vulnerable members of our society.

The past

During lunch at Claridge’s, Shlomo shared a remarkable story about his grandfather, Shlomo – after whom he is named – that feels particularly pertinent given the topic of human resilience in the face of technological change.

The elder Shlomo was an entrepreneur in Poland who survived Stalin’s Gulag through his business acumen. After enduring that horror, he navigated the treacherous post-war period in Austria – a time and place immortalised in Orson Welles’ The Third Man – before finally finding sanctuary in Israel in the early 1960s.

When the younger Shlomo co-founded Check Point Software Technologies over 30 years ago, the company’s first office was in his late grandfather’s vacant apartment. It feels fitting that a business focused on protecting people from digital threats began in a space owned by someone who had spent his life helping others survive very real ones.

The heart-warming story reminds us that while the challenges we face may evolve – from physical threats to digital deception – human ingenuity, ethical leadership, and the drive to protect others remain constant. 

As we grapple with AI’s implications for society, we would do well to remember this Halloween that technology is merely a tool; it’s the hands that wield it – and the values that guide those hands – that truly matter.

Statistics of the month

  • According to McKinsey and Company’s report The role of power in unlocking the European AI revolution, published last week, “in Europe, demand for data centers is expected to grow to approximately 35 gigawatts (GW) by 2030, up from 10 GW today. To meet this new IT load demand, more than $250 to $300 billion of investment will be needed in data center infrastructure, excluding power generation capacity.”
  • LinkedIn’s research reveals that more than half (56%) of UK professionals feel overwhelmed by how quickly their jobs are changing, which is particularly true of the younger generation (70% of 25-34 year olds), while 47% say expectations are higher than ever.
  • Data from Asana’s Work Innovation Lab reveals that AI use is still predominantly a “solo” activity for UK workers, with the majority feeling most comfortable using it alone compared to within a team or their wider organisation. The press release hypothesises: “This may be because UK individual workers think they have a better handle on technology than their managers or the business. Workers rank themselves as having the highest level of comfort with technology (86%) – compared to their team (78%), manager (74%) and organisation (76%). This trend is mirrored across industries and sectors.”

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 9)


TL;DR: September’s Go Flux Yourself debates when to come clean about the Tooth Fairy’s existence, considers the beauty of magic and the development of a sense of wonder, explores how to excel at human-centred innovation, and provides lessons from the inventor of the carpet sweeper …

Image created on Midjourney with the prompt “a magical tooth fairy with a mechanised carpet sweeper looking happy in the style of a cubist Picasso painting”

The future

“So why not live with the magic? Be a kid again and believe in the fantastical. Life is more fun with a little smoke and mirrors.”

I love this time of year, as the completed page of September – the last month of summer indulgence – is turned to reveal October, the beginning of the golden quarter and the cosiness of autumn. 

As I tap these words, I spy, on the other side of the window, shrivelled yellow and brown leaves creating a patchwork carpet on the garden floor, having twisted and tumbled from their trees. Yet those trees remain well covered in greenery, for now. 

The changing of the seasons reminds us of the natural process of renewal. But, as always, autumn and soon winter will provide darkness in a particularly gloomy and unpredictable world. And yet, these dark months are punctuated by magical, soothing, and memorable events.

As the dad of two children who still – just about, in the case of my 10-year-old boy (more below) – believe in Father Christmas, I adore reliving the wonder of the festive season, which begins for me on Halloween. For parents, while no doubt a considerable effort is needed to make this period magical, it is rewarding and life-affirming. Soon, that innocence will be lost. 

Being in my early 40s, I’ve already lived more than half of the 4,000 weeks we get on average, as journalist and author Oliver Burkeman points out. The second half of my life will be hugely different from the first, on both a macro and micro level.

For various reasons, I’ve recently been thinking about my remaining 1,800-ish weeks – if lucky. And whenever I consider what I would like to fill them with, it filters down to spending time with my nearest and dearest, cliched and twee as that might be. 
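For the numerically curious, the back-of-the-envelope sums can be sketched in a few lines of Python. (The age of 42 is an assumption on my part – the text only says “early 40s”.)

```python
# Burkeman's round figure: an average life is roughly 4,000 weeks (~77 years).
AVERAGE_LIFE_WEEKS = 4_000

def weeks_remaining(age_years: int) -> int:
    """Approximate weeks left, out of an average 4,000-week life."""
    return AVERAGE_LIFE_WEEKS - age_years * 52

# Assumed age of 42 gives roughly 1,800 weeks remaining.
print(weeks_remaining(42))
```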

Maybe it’s a sign of my stage in life, but a large part of this desire is to protect my kids. Stand back, and the post-pandemic world is pretty mad right now.

As wars rage in the Middle East and Eastern Europe, global warming is increasing, loneliness is also on the rise, every country in the world has lower fertility rates than in 1950, NASA is building the world’s first telescope designed for planetary defence, journalists in Venezuela have turned to artificial intelligence-created newsreaders to avoid arrest, AI girlfriends are being preferred over the real thing in their hundreds of millions, and technology has reached a point where everything should be questioned, not just the news. And who can predict what will happen if Donald Trump regains access to the Oval Office in the next couple of months?

How does a parent prepare a child for survival in a post-truth world? What are we doing to the minds of little people by confecting the winter months, in particular, with beautiful lies to generate artificial happiness? One could argue that their joy when meeting Father Christmas and receiving festive gifts is a cruel construct that, once revealed as such, will lead to deep mistrust.

I began this month’s newsletter with a quotation from Irish-based author L.H. Cosway’s Six of Hearts, a story about a world-renowned illusionist (according to its description – I’ve not read the book). I was drawn to it because Freddie, who hit double figures in age earlier in September, lost a tooth the other day. Thankfully, I had to dash out to play football on the evening he asked my wife: “Does the Tooth Fairy really exist? I mean, is she genuinely real?”

Having been instructed, later that evening, to switch a quid for the tooth under Freddie’s pillow, it’s logical to assume that my wife didn’t reveal the truth. I can understand why. First the Tooth Fairy, then … then EVERYTHING! But when is the right moment to admit the game is up?

Can we handle the truth even as adults? I read with interest this morning that Melvin Vopson, an associate professor of physics at the University of Portsmouth, argues that our entire universe may be an advanced computer simulation – much like The Matrix. Yes, he has a book to sell on this subject, but he told MailOnline: “The Bible itself tells us that we are in a simulation, and it also tells us who is doing it. It is done by an AI.”

After screaming “BS”, the natural human reaction is: “Well, what’s the point, then?” Cue a sad emoji. We’re all big kids at heart.

On the subject of magic, Arthur C. Clarke famously wrote: “Any sufficiently advanced technology is indistinguishable from magic.” Even if AI isn’t capable – yet – of creating the universe, it’s undoubtedly becoming impressive and, moreover, indiscernible from reality.

Not everyone is finding AI magical in a work setting, though. Research from Upwork, published in July, highlights the “AI expectation gap”, with 96% of C-suite leaders admitting they expect the use of AI tools to increase their company’s overall productivity levels. However, 77% of employees say these tools have actually decreased their productivity and added to their workload.

A friend messaged me the other day to let me know he had managed 110 days of sobriety but that he felt he needed to drink at a gig because he was struggling with his feelings and wanted to release some pressure. The exchange soon revealed that he had been jobless for almost a year, and a beloved grandmother had died days before the gig. 

He wrote that he was “feeling anxiety and just a sense of doom about everything around modern life. I’m worried about the future of society and for our kids. There is so much hate around and the want for AI to replace us. If I, as someone with 20 years of experience, struggle to get a job, what will it be like for future generations?”

How do you answer that? With too much time to ponder, my friend fears the worst at the moment – for him, but mostly for his children. I get that.

I’ve attended many events in the last month, and I’ve taken to asking interviewees how I should prepare my kids for the future. At Gartner Reimagine HR, Paul Rubenstein, Chief Customer Officer at Visier, a company that provides AI-powered people analytics, told me that today’s youngsters must be “agile, smart, and fearless”. Great answer.

Perhaps, then, the greatest gift we can give our children isn’t shielding them from reality but nurturing their sense of wonder. In a world of AI and uncertainty, their ability to see magic in the everyday might be the superpower they need to thrive.

The present

It’s conference season, and later this week, I’ll be on stage twice at DTX (the Digital Transformation Expo) at ExCeL London. I’m looking forward to moderating a panel on the main stage alongside panellists from BT, techUK, and booking.com to examine why human-centred tech design and delivery have never been so critical.

Regarding technology’s stickiness, Dr Nicola Millard, Principal Innovation Partner at BT Business (who wrote her PhD thesis on this subject), talks about the three Us: useful, useable, and used. The last of these revolves around peer pressure, essentially. Her octogenarian mother requested an iPad only because her friend had one. Meanwhile, Nicola is a “shadow customer” engaging with companies digitally on behalf of her mother, who finds it too overwhelming.

I touched on this subject in a keynote speech on human-work evolution at the Institution of Engineering and Technology for Accurate Background earlier in September. I urged the audience to be “kind explorers” to navigate the digital world – and to be careful not to leave people behind.

I referenced Terry, my late, jazz-loving nonagenarian friend, whom I’ve written about before in this newsletter. I recall his deep frustration at trying to speak to a human at his bank for a minor query and spending hours – no exaggeration – going around in circles via automated assistants. Unable to walk, he resorted to using his trusty fountain pen to write a letter to his local branch. Tragically, no return letter or call ever arrived. It was far from a magical experience, and the tech dumbfounded Terry, who would have reached triple figures next month.

We can use the CHUI values framework to help improve human-centric design. I wrote about this last month, but as a reminder, here are the main elements.

Community: Emphasises the importance of fostering a sense of belonging in both physical and virtual spaces. It’s about creating inclusive environments and valuing diverse perspectives.

Health: Goes beyond just physical wellbeing to include mental and emotional health. It stresses the need for work-life balance and overall wellness, especially in remote or hybrid work settings.

Understanding: Highlights the need for continuous learning and viewing issues from multiple angles. It’s about developing deep knowledge rather than just surface-level information.

Interconnectedness: Recognises that in our global, digital world, everything is linked. It involves understanding how different roles connect and the broader impact of our actions.

In this instance, here is how one might apply CHUI.

Community:

  • Co-design with diverse user groups. This ensures that designs are inclusive and meet the needs of various communities.
  • Foster inclusive design practices. By considering different cultural, social, and economic backgrounds, designs become more universally accessible.
  • Create solutions that strengthen social connections. This aligns with the community aspect of CHUI, designing products or services that bring people together.

Health:

  • Prioritise mental and physical wellbeing in design. This could involve creating ergonomic products or digital interfaces that reduce eye strain and promote good posture.
  • Design for accessibility and reduced stress. Ensuring that designs are usable by people with different abilities and don’t create unnecessary stress or frustration.
  • Incorporate biophilic design principles. This involves bringing elements of nature into the design, which has been shown to improve wellbeing and reduce stress.

Understanding:

  • Deep user research and empathy mapping. This helps designers truly understand user needs, motivations, and pain points, leading to more effective solutions.
  • Iterative design with continuous user feedback. This ensures that designs evolve based on real user experiences and needs.
  • Design for intuitive learning and skill development. Creating interfaces or products that are easy to understand and help users develop new skills over time.

Interconnectedness:

  • Design for interoperability and ecosystem thinking. Considering how a design fits into the larger ecosystem of products or services that a user interacts with.
  • Consider broader societal and environmental impacts. This involves thinking about the ripple effects of a design on society and the environment.
  • Create solutions that enhance human-to-human connections. Designing with the goal of facilitating meaningful interactions between people.

Human-centred design should extend to the workplace – whether in the office or a remote setting. The temperature of the already hot topic of where people work was dialled up a fortnight ago, when Amazon decreed a five-day back-to-the-office mandate. 

LinkedIn shared timely data highlighting how companies are hiring now. Interestingly, the professional social media platform’s Economic Graph showed that hiring for fully remote roles is generally declining, with a 6.2% decrease year-on-year at large companies. Meanwhile, small companies are reversing the trend with a year-on-year 2.3% rise in remote hires. By contrast, larger companies are seeing significant growth in hybrid work models. Which organisation can say it has perfected the magic formula?

The past

The aforementioned Accurate talk, delivered mostly to HR professionals, took place on September 19 – the same day American entrepreneur Melville Bissell patented the carpet sweeper back in 1876. I know what you’re thinking – what does a mechanised carpet sweeper have to do with human-work evolution in the digital age?

Melville and his wife Anna Bissell owned a crockery shop in Michigan, where dust and breakages were daily occurrences. So Melville developed a carpet-sweeping machine to keep the family store tidy. It was so effective that word spread, demand rose, and soon the Bissells were selling far more carpet sweepers than cups and saucers.

Today, five generations later, the family-run Bissell Inc. is one of the leading manufacturers of floor care products in North America in terms of sales, with a vast market share.

When Melville invented his device, he didn’t know he was starting a revolution in home cleaning. He was trying to solve a problem, to make life a bit easier and better.

Similarly, we are still determining precisely where our explorations in the digital age will lead us. 

But if we approach them with curiosity, with kindness, and with a commitment to our shared humanity, I believe we can evolve human work so that it’s not just more efficient, but more fulfilling. Not just more profitable, but more purposeful.

I urge you to go forth and explore, be curious, be kind, and be human. That’s where the real magic can be found.

Statistics of the month

  • According to HiBob research, almost a quarter (24%) of Brits would replace all younger generation workers if they could. Further, 70% of companies struggle to manage younger-generation employees.
  • The above study found that Gen Zers are causing managers headaches around issues with attitudes towards authority (41%), emotional intelligence (38%) and levels of professionalism (34%). 

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media

Go Flux Yourself: Navigating the Future of Work (No. 8)

TL;DR: August’s Go Flux Yourself includes the CHUI Framework to navigate human-work evolution, stresses the need to be kind explorers for human flourishing, EE’s helpful advice not to allow under-11s access to smartphones, and me facing my fear of heights by jumping out of a plane at 12,000ft …

Image created on Midjourney with the prompt “a smiling, young, blonde-haired explorer, navigating the future world – one filled with wonder and opportunity, and futuristic-looking things – in the style of a Caravaggio painting”

The future

“When I grow up, I want to be an explorer.”

My four-year-old daughter, Darcey, “graduated” from her nursery in South East London a few weeks ago. On a warm July morning, proud parents perched awkwardly on children’s chairs to watch their little ones say farewell to Cheeky Monkeys.

The preschool kids, dressed in their smartest clothes for the big occasion, stood in a line before us mums and dads and, with star-covered plastic mortarboards on their heads, put on a show.

After a collective opening song, they spelt their names individually – just about. Then, they took turns to reveal their career ambitions. The children listed many professions, some more realistic (and legal) than others. We had a dancer, an actor, a ballerina, a train driver, a ninja, a pirate, and four teachers – copycats! – before Darcey broke the chain.

Darcey at her Cheeky Monkeys graduation

When she declared her desire to be an “explorer”, I initially chortled. How cute, I thought. However, on reflection, I genuinely loved her answer – so much so that I’ve worked it into a keynote speech I’m giving in mid-September. 

The event, titled Navigating the Changing World of Work, is being held at the Institution of Engineering and Technology (where I am a member), a stunning venue steeped in history. My remit is to set the scene, drawing on my expertise in human-work evolution. (If you’re interested in claiming a seat, please let me know, and I’ll ask – I think I have an allocation.)

Thinking back to the graduation at Cheeky Monkeys, I realised that Darcey, in her four-year-old wisdom, had stumbled upon the essence of what we all must be in this rapidly evolving digital world: explorers. Kind explorers, to be precise. (Darcey, who adores babies – we have an army of plastic ones – is caring, but she is still working on the kindness-to-other-humans bit. Hopefully, the discipline of school, which she begins in early September, will help with that.)

Indeed, we’re all explorers in this brave new world of artificial intelligence, automation, and digital transformation. We’re charting unknown territories, facing unfamiliar challenges, and trying to make sense of a landscape that’s shifting beneath our feet faster than we can blink.

But what will Darcey be when she grows up? As in: what will her actual job be? Of course, it’s impossible to predict at this point. Partly because she has 14 years before she leaves school (hopefully), and mostly because professions are being warped – if not wiped out – by technological advancements. Plus, plenty of new jobs will be spawned. 

At the start of the year, LinkedIn produced a list of 25 UK roles that were on the rise, according to the platform’s data. The piece points out that job skill sets have changed by 25% since 2015 and are projected to shift by 65% by 2030 globally.

Right now – well, at the beginning of 2024, at least – the 25 fastest-growing jobs include Sustainability Manager (ranked first), AI Engineer (seventh), Security Operations Centre Analyst (tenth), Energy Engineer (fourteenth), and Data Governance Manager (sixteenth). Most of these roles did not exist a decade ago.

So, how best can we prepare our children for the world of work? The World Economic Forum talks about the four Cs being the essential skills for the 21st century. Namely: critical thinking, collaboration, communication, and creativity. I would lob another C in there: compassion.

My AI-induced FOBO (fear of becoming obsolete) has triggered my own – necessary – journey of exploration, which has led to much more fun speaking work, podcasting, moderating, and essentially more human interaction; this is why I talk about “human-work evolution” and not just “future of work”, which often leaves people out of the conversation.

Through all of this, I’ve discovered that the key to navigating the future of work lies not in mastering any particular technology, but in cultivating our uniquely human qualities. As such, I’ve chiselled the values I discussed in April’s Go Flux Yourself and created the CHUI Framework. 

CHUI is an acronym for the following:

  • Community
  • Health
  • Understanding
  • Interconnectedness

These values are crucial as we navigate the complexities of the digital age. They remind us that no matter how advanced our technology becomes, we are, at our core, social beings who thrive on connection, wellbeing, empathy, and the intricate web of relationships that bind us together.

Here’s a breakdown of each element of CHUI.

Community

In an increasingly digital world, the importance of community cannot be overstated. We need to foster a sense of belonging, both in physical and virtual spaces. This means creating inclusive workplace cultures, facilitating meaningful connections between team members, and recognising the value of diverse perspectives.

Health

This encompasses not just physical health but mental and emotional wellbeing. As the lines between work and personal life blur, especially in remote and hybrid work environments, we must prioritise holistic health. This includes promoting work-life balance, providing mental health resources, and creating a workplace culture that values wellbeing.

Understanding

Deep understanding is more valuable than ever in a world of information overload and rapid change. This means cultivating curiosity, promoting continuous learning, and developing the ability to see things from multiple perspectives. It’s about going beyond surface-level knowledge to truly grasp the complexities of our work and our world.

Interconnectedness

Everything is connected in our globalised, digitalised world. Actions in one part of the world can have far-reaching consequences. In the workplace, this means recognising how different roles and departments interrelate, understanding the broader impact of our work, and fostering a sense of shared purpose and responsibility.

By embracing these CHUI values, we can create more resilient, adaptable, and human-centric workplaces and better control human-work evolution.

Ultimately, we must be explorers, venturing into unknown territories, mapping out new ways of working, and discovering new possibilities at the intersection of human and machine intelligence.

But more than that, we need to be kind explorers. Kind to ourselves as we navigate the complexities of the digital age. Kind to colleagues and clients as they grapple with change and uncertainty. Kind to our intelligent assistants as we learn to work alongside them. And kind to the wider world that our decisions will impact.

The map we create today will shape the landscape for generations to come. So, let’s ensure it’s a landscape defined not just by technological advancement but human flourishing.

Let’s create workplaces where people can bring their whole selves to work, where diversity is celebrated, wellbeing is prioritised, learning is constant, and technology serves humanity – not the other way around.

Let’s be the kind of leaders who don’t just manage change but inspire it, who don’t just adapt to the future but shape it, who don’t just talk about values but live them daily.

We don’t know precisely where our digital explorations will lead us. But if we approach them with curiosity, kindness, and a commitment to our shared humanity, I believe we can evolve human work so that it’s not just more efficient but more fulfilling, not just more profitable but more purposeful.

So, let’s go forth and explore! Let’s be curious. Let’s be kind. Let’s be human.

The present

After typing this edition of Go Flux Yourself, I’ll grab my iPhone 13 and post teasers on LinkedIn and Instagram. I’m acutely aware of the irony. Here I am, a supposedly responsible adult, tapping away on a device I’m hesitant to put in my daughter’s hands. It’s a conundrum that plagues modern parenting: how do we navigate the digital landscape with our children when we’re still finding our own bearings?

Darcey, my four-year-old explorer-in-training, is growing up in a world where technology is as ubiquitous as oxygen. She’s already adept at swiping through photos on my phone and giggling at videos of herself, which is both impressive and terrifying.

I understand that Darcey’s interaction with technology – and smartphones in particular – will be crucial for her development. In a world where digital literacy is as essential as reading and writing, denying her access to these tools feels akin to sending her into battle armed with nothing but a wooden spoon.

But then I read about EE’s recommendation, published earlier this week, that children under 11 shouldn’t have smartphones, and I breathed a sigh of relief. It’s as though someone’s permitted me to pump the brakes on this runaway tech train.

The stance taken by EE – one of the UK’s largest mobile network providers – isn’t just some arbitrary line in the sand. It’s backed by growing concerns about the effects of smartphone and internet usage on children’s mental health and behaviour. The US Surgeon General’s warning that social media use presents “a profound risk of harm” for children only adds weight to these concerns.

As a parent, I’m caught in a tug-of-war between embracing technology’s potential and shielding my child from its perils. On one hand, I want Darcey and her older brother, Freddie, to be digitally savvy and navigate the online world confidently. On the other, I’m terrified of exposing my children to the darker corners of the internet, where trolls lurk, and misinformation spreads like wildfire.

It’s not just about protecting her from external threats, either. I worry about the internal changes that constant connectivity might bring. Will she develop the patience to read a book when TikTok offers instant gratification? Will she learn to navigate real-world social situations when she can hide behind a screen? Will she ever know the joy of being bored and having to use her imagination to entertain herself? 

In June’s newsletter, I discussed the loneliness epidemic, and the rise of AI girlfriends and boyfriends – what will this situation look like in a decade if left unchecked? 

Dr Jonathan Haidt’s observation about the potent combination of social media and smartphones causing a decline in mental health rings true (this is worth a watch). It’s not just about access to information; it’s about the constant, addictive pull of likes, shares, and notifications. It’s about the pressure to present a perfect online persona, even before you’ve figured out who you really are.

As I ponder this digital dilemma, I can’t help but wonder if we’re in the midst of a grand social experiment with our children as the unwitting subjects. Will future generations look back on our era of unregulated social media use with the same bewilderment we feel when we see old adverts promoting the health benefits of smoking?

EE’s advice may be a step towards a more balanced approach. Maybe we need to redefine what it means to be “connected” in the digital age. Could we embrace technology without being enslaved by it?

For now, I’m focusing on nurturing Darcey’s explorer spirit in the physical world. When the time comes for her to venture into the digital realm, I hope she’ll do so with the curiosity of an explorer and the caution of a well-prepared adventurer.

Meanwhile, I’m trying to model healthier tech habits. It’s a work in progress, but I’m learning to put my phone down more often, to be present in the moment, and to remember that the most important connections are the ones we make face-to-face.

In this brave new world of pixels and algorithms, the most revolutionary act is to be human. To laugh, play, and explore – without a screen in sight. After all, isn’t that what childhood should be about?

The past

In the spirit of being a fearless explorer, I recently took a leap of faith. Quite literally.

In mid-August, my wife, Clare, and I found ourselves hurtling towards the earth at 120 mph after jumping out of a perfectly good aeroplane at 12,000 feet, near Ashford in Kent. This wasn’t just a random act of madness; we were skydiving to raise money for Lewisham and Greenwich NHS Trust Charity (here is the fundraising link, if interested).

Now, I have a fear of heights. A proper, palm-sweating, stomach-churning fear. But as I’ve been banging on about the importance of exploration and facing our fears in this digital age, I figured it was time to practise what I preach.

The experience was, well, “exhilarating” doesn’t entirely describe it. It was a cocktail of terror, awe, and pure, unadulterated, life-affirming joy. As we plummeted through the air, the fear melted away, replaced by an overwhelming sense of freedom. It was a vivid reminder that our greatest adventures often lie just beyond our comfort zones.

But this wasn’t just about personal growth or fundraising. It was also about sharing an important experience with Clare, strengthening our bond through a joint adventure. And with Darcey and Freddie watching from the ground, I hope we’ve inspired them to be brave, to push their boundaries, and to embrace life’s challenges head-on.

As we touched down, wobbly-legged but elated, I couldn’t help but draw parallels to our journey through this rapidly evolving digital landscape. Sometimes, we need to take that leap into the unknown. We need to trust our training, face our fears, and embrace the exhilarating free fall of progress.

So here’s to being fearless explorers – whether we’re charting the digital unknown or plummeting through the physical sky. May we always have the courage to jump, the wisdom to learn from the fall, and the joy to relish the ride.

Clare and I take to the skies

Statistics of the month

  • Following Labour’s pledge to introduce the “right to switch off”, new research reveals that most UK workers are “workaholics”, with 20% struggling to decline after-hours work requests, and 88% experiencing ongoing stress (Owl Labs).
  • In 2023, funding for generative AI skyrocketed to $22.4 billion, nearly nine times higher than in 2022 and 25 times the 2019 amount, despite a decline in overall AI investment since 2021 (Our World In Data).
  • Alphabet-backed Waymo, a self-driving technology company, revealed that it now provides over 100,000 paid robotaxi rides each week across its key markets in Los Angeles, San Francisco, and Phoenix (TechCrunch).

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media

Go Flux Yourself: Navigating the Future of Work (No. 5)

TL;DR: May’s – slightly delayed – Go Flux Yourself includes being selfless to find happiness, building tech for good, virtual work experience, the importance of messy stories, and a tribute to rugby league legend Rob Burrow … 

Image created on Midjourney with the prompt “Leeds Rhinos rugby league legend Rob Burrow smiling with the ball and happy people gathering around him – with tech robots looming behind them – in the style of a David Hockney painting”

The future

“The greatest burden a child must bear is the unlived life of its parents.”

These words, attributed to Swiss psychoanalyst Carl Jung, were quoted on stage by Britain’s leading “psychological illusionist”, Derren Brown, the big draw at DTX Manchester in late May, during a talk on the pursuit of happiness.

This wisdom hit me hard as the father of two small but quickly growing children. (And this newsletter didn’t arrive on May 31st – apologies – because I was holidaying in the Netherlands for half-term.) It smacked me harder, though, as someone passionate about human-work evolution and the world we are building. 

Is the combination of technology and social media making us overly self-interested? Is too much screen time, for adults and kids alike, making us more susceptible to jealousy while eroding common decency and our instinct to look out for those around us? We need to look up and look out – and that starts with, well, looking inward.

Related to this, I recently saw a brilliant and quite emotional post on LinkedIn that distilled the problem of human selfishness. In a thought-provoking classroom experiment, a university professor challenged his students to a unique test of teamwork and empathy. 

Each student was assigned a balloon bearing their name, which was then released from the ceiling. The challenge was to locate their own balloon within a five-minute time limit. If everyone succeeded, they would collectively win; if even one student failed, they would all lose. 

Despite their earnest efforts, not a single student managed to find their individual balloon amid the chaos. Undeterred, the professor gathered the wayward balloons and gave the class a new directive: “When you catch a balloon, give it to its rightful owner.” With this simple shift in perspective, the students all completed the task before three minutes were up.

The professor concluded the lesson with a poignant observation: “Happiness is like these balloons. If each of us single-mindedly pursues our own, we will inevitably come up short. But if we prioritise the wellbeing of others, we will find that our own happiness follows suit.” 

This principle holds true not only in the classroom but also in the corporate – and specifically the technology – world. By actively supporting our colleagues in achieving their objectives, we foster an environment of reciprocity and shared success.

At DTX Manchester, where I moderated a session on AI in the workplace, Brown, who I once shared a seance table with – a story not for here – talked about our materialistic, consumerist tendencies. Most of us, he said, are on the “hedonic treadmill”, chasing and attaining new things to feel happier. But that immediate dopamine spike soon drops, and then we look for the next shiny thing. Essentially, he argued – convincingly – that this doesn’t make us happy.

Building on this point, he offered the audience a thought experiment suggested by Stoic philosopher William B. Irvine. In On Desire: Why We Want What We Want, Irvine wrote: “Suppose you woke up one morning to discover that you were the last person on earth … In the situation described, you could satisfy many material desires that you can’t satisfy in our actual world. You could have the car of your dreams. You could even have a showroom full of expensive cars. You could have the house of your dreams – or live in a palace. You could wear very expensive clothes. You could acquire not just a big diamond ring but the Hope Diamond itself. The interesting question is this: without people around, would you still want these things?”

The answer is obvious when framed in this way. But do enough of us realise this truth?

Brown also challenged leaders to prioritise authentic storytelling over superficial narratives. He argued that businesses often present overly simplistic and tidy stories, failing to capture the messiness and complexity of reality. To cultivate genuine resilience, Brown urged leaders to embrace the journey and resist the temptation to fixate on definitive endings.

However, Brown’s most urgent plea was directed at technologists, calling upon them to use their talents for good. He revealed a startling statistic, attributed to Tristan Harris, Co-Founder of the Center for Humane Technology: over 50% of AI engineers believe there is at least a 10% chance that mishandling AI could lead to the destruction of humanity. 

This sobering reality underscores the critical need for a mindset shift in the tech industry, ensuring that innovation uplifts humanity rather than accelerates its demise.

As AI advances at an unprecedented pace, there is an immense business opportunity and an ethical imperative to create technology that genuinely addresses human needs, not just superficial desires. 

The cautionary tale of social media platforms like Facebook is a stark reminder of the unintended consequences that can arise when innovation is disconnected from human welfare. With the stakes exponentially higher in the era of recursive, self-improving AI systems, the risks of rushing ahead without careful consideration are grave, ranging from automated cyber weapons to blackmail and disinformation campaigns.

Those who seize this opportunity to create technology that genuinely benefits humanity will build thriving businesses and contribute to writing a new, more enlightened chapter in the human story. Brown concluded that this endeavour is worth far more than any fleeting dopamine rush from a dazzling new toy. It served as a much-needed call to action for leaders and innovators to shape a future in which technology and humanity can flourish together.

I’ve been asked to deliver an opening keynote on the future of work to a group of lawyers in London later in the year, and with the 25 minutes I have been afforded, I’ll be focusing on these messages, I reckon.

The present

Certainly, the themes of collaborating for good and being intentional and considered are current when viewed through the lens of remote working – mainly because no company has perfected its strategy. It requires careful iteration, with humans – not technology – in the driving seat, as the most business-critical element.

A couple of weeks ago, I was delighted to moderate an in-person roundtable near the “Silicon Roundabout” of London’s Old Street, which delved into the challenges and opportunities of creating a remote-ready workforce.

I set the scene by referencing recent research from Stanford professor Nick Bloom, which indicated that 29% of the global workforce were hybrid workers, 59% were fully on-site, and only 12% were fully remote workers.

Predictably, during the discussion, trust emerged as a cornerstone of successful remote work. The roundtable participants concurred that businesses must foster a culture of trust, and the unanimous verdict was that monitoring staff is creepy and demotivating. 

O.C. Tanner’s 2024 Global Culture Report was published a few days after the roundtable session. It showed that 41% of UK employees have their working time strictly monitored, and just 53% are granted freedom in how they accomplish their work. How backward. For employees, it’s time to put the mouse-jigglers away, and employers need to conduct adult-to-adult relationships with their staff. 

Someone needs to tell Manchester United co-owner Sir Jim Ratcliffe, the UK’s richest person. In mid-May, he found a new excuse for ordering employees back to the office. In a message to the club’s employees, he complained about the “disgraceful” messiness in the office. He called an end to the flexible work-from-home policy that has been in place since the coronavirus crisis.

As a kicker, Ratcliffe – who resides in Monaco, presumably in part for tax avoidance purposes – justified a full-time return to the office because one of his other businesses experienced a 20% drop in email traffic when it experimented with home-working Fridays. It’s daft reasoning, for sure. Do more emails mean more productivity? Not in 2024, where the most enlightened business leaders are familiar with Cal Newport’s concept of deep work – the need for focused periods of concentration without the pings, bings and other notifications that have become an irritating part of work life.

As businesses strive to future-proof their workforce, the concept of “virtual work experience” has gained popularity – although one suspects Sir Jim would not approve. And, for once, I’m 100% with him on this one.

Leaders must understand that while these online placements can provide valuable exposure and skills, they should not be considered a complete substitute for in-person experience. 

Companies like Heathrow Airport and Pret A Manger have partnered with Springpod to offer virtual work experience programmes, aiming to impart relevant knowledge to aspiring professionals in various fields. These initiatives include engaging activities such as – ahem – quizzes and immersive product development journeys designed to educate and inspire the next generation of talent. 

The hands-on experience, face-to-face interactions, and real-world problem-solving opportunities that come with traditional work placements are essential for developing a well-rounded skill set and understanding the nuances of a profession. 

Ultimately, by offering a balanced approach that combines online learning with practical, on-site experience, leaders can ensure that their future workforce is adequately prepared to tackle the challenges of their chosen careers. 

Further, investing in a comprehensive training and development programme that includes virtual and in-person elements demonstrates a commitment to nurturing top talent. By providing a well-rounded learning experience, organisations will attract ambitious candidates, foster a culture of continuous improvement, and be well-positioned for long-term success.

The past

At first glance, the passing of a rugby league player might seem out of place in a technology and business newsletter. But the death of former Leeds Rhinos scrum-half Rob Burrow yesterday (June 2), at the age of 41 – a year younger than me, chillingly – transcended sport and was mourned across the nation.

Sadly, Rob’s demise was no surprise. Four-and-a-half years ago, and only two years after he hung up his boots, he was diagnosed with motor neurone disease (MND) and given 18 months to live. Bravely, Rob chose to take his fight public to raise awareness of the horrific disease – and the lack of support for sufferers – and, with the considerable help of his former teammate Kevin Sinfield, raised around £15 million for MND charities.

I started my career as a sports journalist and covered rugby league, partly because of my upbringing in North West England, the game’s heartland. I watched and met Rob – who played for Leeds almost 500 times and won 18 international caps – on numerous occasions. I always marvelled at how the smallest player on the pitch – at 5’5” or 165cm, he was only a dozen centimetres taller than my nine-year-old boy – was so often the bravest and most influential. Indeed, today’s obituaries will rightly laud a “giant among men”.

How fitting that, by coincidence, the ground will be broken on the Rob Burrow Centre for MND in Leeds the day after his death. Excelling at a game in which he was always a foot shorter than other players, he was a groundbreaker on and off the pitch.

The Prince of Wales – a mate of mine at the University of St Andrews (but that’s another story) – presented CBEs to Rob and Kevin in January, and when the news broke on Sunday, he saluted “a legend of rugby league” on social media. He added: “Rob Burrow had a huge heart. He taught us ‘in a world full of adversity, we must dare to dream’.”

Rob’s life story holds valuable lessons for the world of technology and business. Every entrepreneur and innovator should aspire to emulate his unwavering determination and ability to excel despite the odds stacked against him. In the face of adversity, Rob persevered and used his platform to drive change and raise awareness for a cause that desperately needed attention. 

His legacy reminds us that true success is measured not just by personal achievements but by the positive impact one leaves on the world, no matter the industry. As we navigate the ever-evolving landscape of technology and business, let us draw inspiration from Rob’s courage, resilience, and dedication to making a difference. In doing so, we, too, can dare to dream, innovate, and create a better future for all.

Statistics of the month

  • 41% of UK employees have their working time strictly monitored, and a mere 53% are granted freedom in how they accomplish their work, according to O.C. Tanner’s 2024 Global Culture Report.
  • The CIPD’s latest Labour Market Outlook showed that 55% of employers in the UK are seeking to maintain their current staffing levels – the highest figure since 2016-17. With fewer organisations looking to recruit, employers must invest in learning and development to fill skills gaps and future-proof their workforce – but is that happening?
  • Generative AI tools should save UK workers 19 million hours a week by 2026, calculates Pearson. Teaching and healthcare “could be transformed”, is the conclusion of the research. I’m not so sure.

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media

Go Flux Yourself: Navigating the Future of Work (No. 2)

TL;DR: February’s Go Flux Yourself discusses FOBO – the fear of becoming obsolete – tarot readings, why communication (and not relying on AI) matters more than ever, and leaping out of one’s comfort zone …

Image created on Midjourney with the prompt “the page of swords taking a leap outside of his comfort zone in the style of an Edgar Degas painting”

The future

“It always feels too soon to leap. But you have to. Because that’s the moment between you and remarkable.”

So wrote American author, entrepreneur and former dot-com business executive Seth Godin in his prescient 2015 book, Leap First: Creating work that matters. It’s a fitting quotation. Not least because today is February 29 – a remarkable date only possible in a leap year. 

It’s appropriate also because most of us must jump out of our comfort zones – whether voluntarily or shoved – and try new things for work and pleasure in this digital age. We want to be heard, to be valued. Moreover, there is a collective FOBO – fear of becoming obsolete (as discussed with suitable levity on the Resistance is Futile podcast this week).

As someone primarily known as a writer, I have felt FOBO in the last 15 months, since the advent of generative artificial intelligence. So much so that when I was sitting in a cafe, waiting for my car to be serviced in November – a year after OpenAI unleashed ChatGPT – I couldn’t help but approach, with nervous excitement, the tarot card reader on the next table, whose 10.30am booking hadn’t appeared. 

I asked: “What’s coming next in my career?”

She pulled six cards from the deck in her hands, although two fell out of the pack, which was significant, I was informed. One of the fumbled cards was The Two of Wands. “This is about staying within our comfort zone and looking out to see what’s coming next,” the reader said. “It suggests you must start planning, discovering who you are and what you really want, and focusing on that.”

The other slipped card was The Page of Swords. “This one – intrinsically linked to the Two of Wands – says that you need to work in something that requires many different communication skills. But this is also something about trying something new, particularly regarding communication, learning new skills, and getting more in touch with the times.”

Energised by 20 minutes with the tarot reader, I’ve leapt outside my comfort zone and re-focused on expressing my true(r) self, having established this newsletter, and a (sobriety) podcast. (I’ve also set up a new thought-leadership company, but more of that next month!) I’m loving the journey. Taking the leap has forced me to confront what makes me tick, what I enjoy, and how to be more authentic professionally and personally. Already, the change has been, to quote Godin, “remarkable”.

And yet, I fear an increasing reliance on AI tools is blunting our communication skills and, worse, our sense of curiosity and adventure. Are we becoming dumbed down and lazy? And, by extension, are the threads that make up the fabric of society – language, communication, community – fraying to the point of being irreparable?

At the end of last month, in the first Go Flux Yourself, I wrote how Mustafa Suleyman, former co-founder of DeepMind, discussed job destruction triggered by AI advancement. He predicted that in 30 years, we will be approaching “zero cost for basic goods”, and society will have moved beyond the need for universal basic income and towards “universal basic provision”. How will we stay relevant and curious if we want for nothing? 

Before we reach that point, LinkedIn data published earlier this month found that soft skills comprise four of UK employers’ top five most in-demand skills, with communication ranked number one. Further, the skills needed for jobs will change by “at least 65%” by the decade’s end. 

Wow. Ready to take your leap?

The present

Grammarly’s 2024 State of Business Communication Report, published last week, exposed the problem of communication – or rather miscommunication – for businesses. Getting this wrong affects the organisation’s culture and its chances of success today and tomorrow. 

Indeed, the report showed that effective communication increases productivity by 64%, boosts customer satisfaction by 51%, and raises employee confidence by 49% – that last one is especially interesting, and it’s worth noting that March 1 is Employee Appreciation Day, which was started in 1995. While I’m sure hardly any companies will appreciate their employees any more than usual, building confidence through better communication is business critical.

There is much work to do here. The Grammarly study found that in the past 12 months, workers have seen a rise in communication frequency (78%) and variety of communication channels (73%). Additionally: “Over half of professionals (55%) spend excessive time crafting messages or deciphering others’ communications, while 54% find managing numerous work communications challenging, and for 53%, this is all compounded by anxiety over misinterpreting written messages.”

Is AI helping or hindering communication?

I love this cartoon by Tom Fishburne, the California-based “Marketoonist”, who neatly summarises the dilemma.

Also this month, we marvelled at OpenAI’s early demonstrations of Sora (Japanese for “sky”, apparently), which converts text to video. FOBO was ratcheted up another notch.

Thankfully, I was reminded that most AI is far from perfect – like the automatic camera operator used for a football match at Inverness Caledonian Thistle during the pandemic-induced lockdown. The “in-built, AI, ball-tracking technology” seemed a good idea, but was repeatedly confused by the linesman’s bald head. It offered an amusing twist on spot the ball.

Granted, that was over three years ago, and the use cases of genuinely helpful AI are growing, if still narrow in scope. For example, this fascinating new article by James O’Malley highlights how Transport for London has been experimenting with integrating AI into Willesden Green tube station. The system was trained to identify 77 different use cases, broken down into these categories: hazards, unattended items, person(s) on the track, unauthorised access, stranded customers and safeguarding.

Clearly, better communication between man and machine is essential as we journey ahead.

The past

“My heart is too full to tell you just how I feel … I sincerely hope I shall always be a credit to my race and to the motion picture industry.”

On this day 84 years ago, Hattie McDaniel spoke these words after being named best supporting actress at the 12th Academy Awards in 1940. She was the first Black actor to win – or even be nominated for – an Oscar.

The 44-year-old won for her portrayal of Mammy, a house servant, in Gone With the Wind. She accepted her gold statuette on stage at the Cocoanut Grove nightclub in Los Angeles’ Ambassador Hotel – a “no-Blacks” hotel (she was afforded a special pass). However, McDaniel, whose acting career included 74 maid roles, according to The Hollywood Reporter, was denied entry to the after-party at another “no-Blacks” club. A bittersweet experience in the extreme.

We might look back and be appalled by old social norms. Certainly, the pace of progress in certain areas is lamentably slow – after McDaniel, no Black woman won an Oscar for another 50 years, until Whoopi Goldberg was named best supporting actress for her role in Ghost. Still, it is important to track progress by considering history and context.

How long will it be before we have “no-AI” videoconferencing calls? And would that be classed as progress?

I’ve been thinking about the darker corners of my past recently. Earlier this month, I started a podcast, Upper Bottom, that takes a balanced (not worthy, and hopefully lighthearted) look at sobriety. Almost exactly a year ago, I called Alcoholics Anonymous and explained that while nothing tragically wrong had happened, I wanted to reset my relationship with booze. “Ah, you are what we call an ‘upper bottom’,” said the call handler. “You haven’t reached rock bottom but want to change your ways with alcohol.”

Spurred by the tarot reading, and fortified by ongoing sobriety – April 1 (no joke) will make it a year without a drop – I’m grateful for the opportunity to polish my communication skills, learn new ones (if you want me to produce and host a podcast, I would be delighted to collaborate), and build a community via Upper Bottom.

My voice is being heard, literally, and I’m speaking the truth on a human level. In 2024, that matters.

Statistics of the month

  • On the subject of slow progress, only 18% of high-growth companies in the UK have a woman founder, according to a report just published by a UK government taskforce.
  • Nearly seven in 10 UK Gen Zers are rejecting full-time employment – many as a result of AI and layoff fears, finds Fiverr.
  • And new research by Uniquely Health shows that fewer than half (49%) of the nation are confident they would be classed as “healthy” by a doctor. Time to make the most of the extra day this year and leap to some exercise?

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media

Go Flux Yourself: Navigating the Future of Work (No. 1)

TL;DR: This month’s Go Flux Yourself includes thinking like badgers, rogue chatbots, American presidents snogging, productivity problems, return-to-office mandates, and AI leaders admitting they don’t know “what happens next” – but not in that order … 

Image created on Midjourney with the prompt “a Henri Bonnard-style painting set in the New Forest in England with badgers, remote workers, Joe Biden and Donald Trump kissing, and lonely males looking at their smartphones”

About this newsletter

A warm welcome to the first edition of a rude-sounding-yet-useful newsletter for business leaders striving to make sense of today and be better prepared for tomorrow.

Below is a summary of what I hope to offer with Go Flux Yourself (with luck, a memorably naughty pun on “flux”, meaning continuous change, in case it requires an explanation).

“Master change and disruption with Oliver Pickup’s monthly future-of-work newsletter: insights and stories on transformation, curated by an award-winning, future-of-work specialist.”

I’m a London-based technology and business communicator – I write, speak, strategise, moderate, listen, and learn – and you can find more about me and my work at www.oliverpickup.com.

At the end of every month, I serve up insights, statistics, quotations and observations from the fascinating and ever-changing future-of-work space in which I operate. 

Every month, the Go Flux Yourself newsletter will have three sections:

  • The future – forward-looking, regarding challenges and opportunities.
  • The present – relevant news, eye-catching examples, glimpses of upcoming challenges and opportunities.
  • The past – lessons from yesterday that might help leaders tomorrow.

The most important thing is to get fluxed, and change. “He that will not apply new remedies must expect new evils, for time is the greatest innovator,” wrote Francis Bacon almost 400 years ago (in 1625).

The future

“No one knows what happens next.” Especially badgers.

The above, rather alarmingly, is the sign/motto above Sam Altman’s desk (without the bit about badgers – more on them later), as revealed in a panel session, Technology in a Turbulent World, at the World Economic Forum’s annual meeting in snowy Davos. 

It reeks of faux justification and diminished responsibility for possible humanity-damaging mistakes made by the co-founder and CEO of Microsoft-backed OpenAI, arguably the world’s most important company in 2024.

Fellow panellist Marc Benioff, chair and CEO of Salesforce, stated: “We don’t want to see an AI Hiroshima.” Indeed, “no one knows what happens next” echoes Facebook’s original – and poorly aged – mantra of “move fast and break things” that was adopted by Silicon Valley and the wider technology community. But at what cost? Can the capitalists curb their rapaciousness? Well, what’s to stop them, really? They can stomp on the paper tigers that currently stand against them. (I’m going to be writing and speaking about this more in February.)

The United Nations secretary general, António Guterres, made his feelings clear at WEF, arguing that every breakthrough in generative AI increases the threat of unintended consequences. “Powerful tech companies are already pursuing profits with a reckless disregard for human rights, personal privacy, and social impact,” said the Portuguese. But he strikes the same tone when talking about climate change, and his comments, again, are falling on seemingly deaf ears. Or at least greed for green – the paper kind – outweighs concern for humanity.

A few days earlier, on January 9, Scott Galloway, professor at New York University Stern School of Business, and Inflection AI’s co-founder Mustafa Suleyman (former co-founder of DeepMind), asked: “Can AI be contained?”

Galloway pointed out that given there are over 70 elections around the globe in 2024 – the most in history – there is likely to be a “lollapalooza of misinformation”. And that was before the deepfake of Joe Biden snogging Donald Trump, which was on the front page of the Financial Times Weekend’s magazine on January 27 (see below). 

The provocative American entrepreneur and educator also pointed out that AI will likely increase loneliness, with “searches for AI girlfriends off the charts”. How depressing. But the recent case of a Belgian man – married with two children – who killed himself after his beloved chatbot convinced him to end his life for the sake of the planet is evidence enough.

In a similar vein, delivery firm DPD disabled part of its AI-powered online chatbot after it went rogue a couple of weeks ago. A customer struggling to track down his parcel decided to entertain himself with the chatbot facility. When prompted, it told the user a joke, served up profane replies, and created a haiku calling itself a “useless chatbot that can’t help you”. What would Alan Turing think?

Anyway, Galloway also noted that the brightest young minds are not attracted to government roles – a massive challenge, not least when top talent can earn much, much more at tech firms. (As an aside, I interviewed Prof G a couple of years ago for a piece on higher education, and he called me “full of sh1t”. Charming.)

Meanwhile, Suleyman discussed job destruction due to AI advancement. He predicted that in 30 years, we will be approaching “zero cost for basic goods”, and society will have moved beyond the need for universal basic income and towards “universal basic provision”. 

How this Star Trek economy is funded is open to debate, and no one has a convincing solution, yet. (Although Jeremy Hunt, who was on the panel in Davos with Altman, Benioff, et al, might not be consulted. The chancellor revealed that his first question to ChatGPT was “is Jeremy Hunt a good chancellor?” The egoist queried the reply – “Jeremy Hunt is not chancellor” – without, even now, realising that ChatGPT’s training data stopped before his appointment.)

Further, the absence of trust in government – the latest Edelman Trust Barometer puts general-population trust at 39 in the UK and 46 in the US, both well below half and down on the 2023 figures – and the increasing power of the tech giants could mean that the latter act more like nation-states. With that social contract effectively ripped up, and safety not assured, chaos could reign. Suleyman talked about the “plummeting cost of power” and posited that conflict can be expected if actual nation-states can no longer look after their citizens, digitally or physically. The theme of prioritising trust is a big one for me in 2024, and much of my writing and speaking in January has been founded on it.

If “no one knows what happens next”, leaders must educate themselves to broaden their scope of understanding and be proactive to get fluxed. The words of 18th-century English historian Edward Gibbon come to mind: “The wind and the waves are always on the side of the ablest navigator.”

Certainly, I’ve been busy educating myself, and have completed courses in generative AI, public speaking and podcasting, to help me achieve my 2024 goal of being more human in an increasingly digital age. This time next month, I’ll be able to share news about a (sobriety) podcast and also a thought-leadership business I’m launching in February.

The present

A couple of weeks ago, Judge Robert Richter dealt a blow to those in the financial services industry – and possibly beyond – hoping to work fully remotely. He ruled against a senior manager at the Financial Conduct Authority who wanted to work from home full-time, finding the office was a better environment for “rapid discussion” and “non-verbal communication”.

The landmark case will have been closely watched by other employers considering return-to-office mandates. The judge found that the financial watchdog was within its rights to deny Elizabeth Wilson’s request, stating there were “weaknesses with remote working”. Poor Elizabeth; like badgers, all she wants is to be at home without disruption.

Judge Richter wrote in his judgment: “It is the experience of many who work using technology that it is not well suited to the fast-paced interplay of exchanges which occur in, for example, planning meetings or training events when rapid discussion can occur on topics.”

He also pointed to “a limitation to the ability to observe and respond to non-verbal communication which may arise outside of the context of formal events but which nonetheless forms an important part of working with other individuals”.

It will be interesting to see how this ruling impacts the financial services industry in particular. It feels like a big blow to those working in the sector, and it reinforces the perception that firms are rigid and not keeping up with the times. Will it trigger an exodus of top talent?

Leaders believe that productivity lies at the heart of the workplace debate – but should it? The old maxim that “a happy worker makes a productive worker” springs to mind. One comes before the other. With this in mind, I enjoyed participating in a roundtable hosted by Slack and Be the Business, atop the Gherkin in the City of London, that discussed how better communication delivers the most significant productivity wins for small- to medium-sized businesses in the UK.

The session coincided with new research examining how SMBs can overcome stagnation in 2024. Of the many interesting findings, the most compelling for me was that poor management was the top internal barrier to growth, cited by 45% of respondents. This was followed by poor communication and lack of collaboration (38%), lack of motivation (36%) and employee burnout (33%).

Clearly, whether people work in the office or not, communication and collaboration go hand in hand, and both have to improve – for everyone’s sake, with the UK languishing near the bottom of the G7 productivity rankings.

As the roundtable’s chair, Anthony Impey, CEO of Be the Business, noted, a 1% increase in the UK’s productivity would boost the economy by £95 billion over five years.
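As a rough sanity check of that figure – assuming a UK GDP of about £2.2 trillion, a number not given in the piece – the simplest reading of the claim (a one-off 1% level increase in output, sustained for five years) lands in the same ballpark:

```python
# Back-of-envelope check of the "1% productivity uplift = ~£95bn over five years" claim.
# ASSUMPTION: annual UK GDP of roughly £2.2 trillion (not stated in the article).
gdp = 2.2e12  # £, assumed

uplift_per_year = gdp * 0.01           # a 1% level increase in output: ~£22bn a year
total_over_five_years = uplift_per_year * 5

print(f"~£{total_over_five_years / 1e9:.0f}bn over five years")  # prints "~£110bn over five years"
```

That comes out at roughly £110 billion – the same order of magnitude as the £95 billion quoted, with the gap presumably down to Be the Business’s more careful modelling (phased adoption, deflators and so on).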

The past

Here come the badgers, finally. 

This month, I enjoyed a weekend spa retreat in the New Forest, close to Lymington, where – coincidentally – the aforementioned Gibbon served as a member of parliament in the 1780s. I stayed five miles due north in Brockenhurst and enjoyed strolling in the countryside, marvelling at deer and wild ponies. I was fascinated to learn that the (alleged) etymology of Brockenhurst stems from the Celtic for “badger’s home”, the black-and-white nocturnal creatures having been common residents for centuries.

I was informed that the badgers have, over the years, built an underground tunnel network that stretches from Brockenhurst to Lymington. Human attempts to block the way and collapse the tunnels have come to nought. The badgers are resilient and inventive; they will always dig around obstacles and make new tunnels. It struck me that we should all be more like badgers.

Statistics of the month

  • Only 8% of European businesses have adopted AI, whereas the number is over 50% in the United States, according to Cecilia Bonefeld-Dahl, Director General of DIGITALEUROPE.
  • Cisco’s 2024 Data Privacy Benchmark Study shows more than one-quarter of organisations have banned the use of generative AI, highlighting the growing privacy concerns and the trust challenges facing organisations over their use of AI.
  • O.C. Tanner’s 2024 Global Culture Report revealed that less than half of UK leaders (47%) consider their employees when deciding to enact business-wide changes. And just 44% seek employee opinions as changes are rolled out.

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media

How AI is enabling CFOs and CHROs to work smarter together in an economically uncertain period

Breaking data silos to drive strategy enables HR and finance departments to work in concert – and by removing administrative tasks, and empowering employees with self-service solutions, AI makes their working lives more fun.

November 22, 2023, marked 60 years since Lee Harvey Oswald assassinated John F Kennedy. The 35th president of the United States knew plenty about handling pressure in uncertain times. JFK commanded patrol torpedo boats during World War II, was at the helm during the Cuban Missile Crisis and signed the first nuclear weapons treaty a month before his premature death.

What would the youngest elected US president have made of modern-day pressures, not least the advancement of AI in the post-pandemic world? Kennedy’s observation that “the Chinese use two brush strokes to write the word ‘crisis’” is perhaps apposite. “One brush stroke stands for danger; the other for opportunity,” he explained. “In a crisis, be aware of the danger – but recognise the opportunity.”

CFOs and CHROs would be wise to heed JFK’s words and recognise the opportunity presented by AI. Admittedly, they have had a tougher time than most since the start of the COVID-19 crisis and during the ongoing cost-of-living crisis. And, with their lists of responsibilities having multiplied in recent years, they may not have the time or inclination to engage with a technology that, according to some doom-mongering experts, will cut human jobs and possibly even slaughter humanity.

However, embracing AI enables those operating in these spaces to handle their workloads better, focus more deeply on identifying and nurturing talent – the reason most HR professionals enter the industry – and manage finances, the chief concern of CFOs.

Further, with more data at their fingertips, both can play more critical roles strategically, working closely together and with other members of the C-suite.

Holistic approach

“The best HR teams impact right across the operations and strategy of organisations and are looking outside them to anticipate the coming challenges and opportunities too,” says Lesley Richards, Head of the Chartered Institute of Personnel and Development (CIPD) in Wales. “It is absolutely right to say that a more holistic approach is necessary. In fact, it’s really hard to create excellence in this type of work without all of the critical elements being well aligned.”

Notably, ‘Will AI fix work?’, a report published in May by Microsoft – which has invested billions of pounds in OpenAI, creator of ChatGPT and DALL-E – suggested that the HR professionals who, working in concert with finance teams, best understand and use AI will communicate better with workers and improve the overall employee experience.

As the axiom goes, a happy worker is a productive worker, so it is worth CFOs working closer with CHROs. Increasingly, data-powered AI is the golden thread that binds together their work, enabling smarter decisions around employee recruitment, retention, performance and career development.

While 49 percent of the roughly 30,000 respondents to Microsoft’s global survey reported being worried that AI would replace their jobs, considerably more (70 percent) wanted to delegate as much work as possible to AI to lessen their loads. “Human-AI collaboration will be the next transformational work pattern,” stated the report.

Clare Barclay, Microsoft UK’s Chief Executive, expanded this theme when she took to the stage at London Tech Week in June. “This wave of AI innovation sweeping the world right now is going to impact the world for generations to come,” she enthused. “It will be the most significant inflection point in our lifetime.”

Avoiding digital debt

The uptake of generative AI, in particular, has been extraordinary. Barclay pointed out that the internet took seven years to reach 100 million users. It took ChatGPT just two months to hit the same number (a record that has since been surpassed by Threads, Meta’s latest social media platform that looks and feels like the channel formerly known as Twitter). 

“This type of adoption has not been seen before and really will disrupt all industries and how they traditionally operate,” continued Barclay. “It will also significantly impact the world of work and how people work.” She cited a PwC study that calculated AI would boost the UK’s GDP by over 10 percent by the end of the decade, equating to an additional £232 billion.

To take advantage of AI, though, finance and HR teams must look up, urged Barclay. Referencing more Microsoft research, she warned that 64 percent of workers didn’t have enough time or energy to complete their daily jobs. “They’re challenged and overwhelmed with the pace of work, burnout and a lack of productivity. We call this deluge of information ‘digital debt’, sapping energy, slowing down the ability to think clearly and severely impacting thinking for innovation.”

AI can assist. “There’s been a lot of discussion about job losses and the impact of AI, but the research showed that leaders around the globe are the least interested in using AI to cut jobs,” said Barclay. “Instead, they believe, and value how, it will help employees to be more productive and focus on more meaningful work, as well as having wellbeing benefits – obviously one of the overheads of this digital debt.”

In short, AI provides finance and HR teams with digital assistants, or ‘copilots’, as Microsoft calls them. “These copilots will help workers manage this digital deluge, prioritise the most important tasks, create compelling content and improve their creativity significantly,” added Barclay. “Ultimately, this is about using this technology to help employees navigate what matters most to them.”

AI as copilot

If used correctly, AI in its various forms will ease the workload for HR and finance teams, says Paris-based Helen Poitevin, Distinguished VP Analyst, HCM at Gartner. “In future, as AI can provide hyper-personalised recommendations and insights around employees, HR professionals will be able to better support staff in creating career development plans, streamlining documentation and improving the onboarding process, among other things.”

Daniel Pell, UKI Country Manager for Workday, says HR departments are already establishing more ‘self-service’ tools for employees that reduce their administration workload. Increasingly, companies empower staff to book holidays via a smartphone application or intranet portal – no need to plead with HR for time off.

Yet, more crucially, learning is not keeping up with the pace of work. According to World Economic Forum projections, 60 percent of the workforce will require upskilling by 2027, but less than half have access to the necessary training.

Again, AI can help CHROs and, in turn, CFOs. “Using AI to identify and predict skills for prospective and current employees enables more effective job matching and career development,” says Pell. By acting on recommendations surfaced by AI, HR professionals can proactively approach staff that have been in the same role for three years, for instance, to offer an appealing internal move. This engagement extends the employee’s lifecycle within an organisation.

Showing a small amount of recognition can go a long way, too. LinkedIn research found that saying ‘thank you’ to employees four times a year raised their retention rate to 96 percent. With AI, HR teams can prompt managers to celebrate their team members’ contributions.

Additionally, frequent pulse surveys allow HR teams to monitor the morale and sentiments of individuals, providing them with the data needed to make necessary improvements or interventions before a situation is irreparable and costly.

Given that Gallup’s most recent ‘State of the global workplace’ report found employee engagement hit a record high of just 23 percent in 2022, there is significant room for improvement in this area. By using AI to boost this figure, CHROs and CFOs working in tandem can become the beating heart of company strategy in the digital age.

This article was first published in Workday’s SmarterCFO magazine in autumn 2023

Is 5G the key to a truly digital society?

A panel of experts – including Vodafone UK, NatWest’s Boxed and Google – say asset tracking and optimising connected buildings and vehicles are some of the more encouraging 5G use cases, but we need better collaboration and storytelling to narrow the digital divide and create a truly digital society in the UK

Nick Gliddon, Vodafone UK’s director of business, argues that 5G is crucial to help both communities and businesses make swift and wide-ranging progress. Earlier this year, Vodafone research calculated that having a best-in-class 5G network in the UK would deliver up to £5 billion a year in economic benefit by 2030. 

An additional study of 2,000 UK adults suggested Britons believe 5G can improve society more than AI. The survey found healthcare (31%), utilities like energy and water (21%), and railways (20%) were key sectors that will benefit most from 5G.

Empowering people is the beating heart of a digital society, and Gliddon says 5G can help on five fronts: improved connectivity, video capabilities, business applications, immersive experiences, and digital-twin technology – a digital representation of a physical process, portrayed in a virtual version of its environment.

As a digital society grows in the UK, there are also opportunities for businesses. “A truly digital society is one where individuals, platforms and utilities are seamlessly interconnected,” says Tom Bentley, head of growth at Boxed, NatWest’s banking-as-a-service platform. “Cloud and 5G technologies provide a better customer service experience where the fulfilment of product or utilities can be instant, compared with the existing physical processes of the past.” He adds that such service “fundamentally relies on quality data combined with strong interconnectivity”.

Ben Shimshon, co-founder and managing partner at Thinks, an insight and strategy consultancy, notes that some UK organisations – especially SMEs – are taking advantage of the opportunities opened up by better connectivity only piecemeal, and often slowly. “Some 99.4% of businesses in the UK have fewer than 50 employees, and three quarters of those are sole traders with no employees. A lot of them are doing predominantly offline things like scaffolding or running a shop,” he says. Many find the notion of a digital society “quite daunting”.

Clear business benefits

Part of the challenge is articulating the advantages of greater connectivity to time-pressed leaders of micro-businesses, not least because many are content with the status quo and incentives for digital adoption remain limited, says Bentley.

Still, those gaining digital access see clear benefits, Shimshon says, such as faster invoicing and payments to improve cash flow. Digital adoption happens gradually for many SMEs as new technologies like card readers are embraced, leading to incremental improvements across operations.

Matthew Evans, director for markets at the UK’s technology trade association techUK, argues that practical needs – like freeing up leisure time by streamlining admin – will resonate more with time-poor SME owners than abstract efficiency promises. “Think of that scaffolder who much prefers to watch his son playing football than doing his company accounts,” he says. “That needs to be the pitch: these digital tools will free up that time.”

Victoria Newton, chief product officer of Engine, Starling Bank’s software-as-a-service arm, agrees the focus should be on practically solving business problems rather than leading with technology. She highlights how Starling has transformed business banking by enabling round-the-clock digital financial services, having built a proprietary cloud-based banking infrastructure, Engine, from scratch. “Starling was able to do this, take our technology and embed it within banks in countries starting that digital revolution themselves.”

Customer choice 

As society becomes increasingly digital, though, the group acknowledged organisations must put citizens first. For example, Newton believes customer choice is paramount – some may opt for online self-service, but others still want human contact through banks, branches or contact centres. Top-down measures to increase digital capabilities risk excluding the most digitally disenfranchised without affordable options, she adds.

Another barrier to progress, says British Chambers of Commerce director Faye Busby, is that “people naturally don’t like change”. She highlights research, published in collaboration with Xero at the start of the year, showing that 75% of businesses believe their “broadband and general connectivity is very reliable”, suggesting they don’t realise what more connectivity could achieve for them. They underestimate the potential of a digital society.

Again, 5G has the power to electrify a digital society, but only once more people realise the good work that is going on. Several examples demonstrate 5G networks unleashing transformative applications, and most have been made possible thanks to visionary partners.

Gliddon calls Coventry “the most advanced 5G city in the UK”, partly because the city council, which has collaborated with Vodafone for almost a decade, is so progressive. By providing planning assistance for the rapid deployment of 5G antennas, the council gained smart-city capabilities, improving traffic flow, air-quality monitoring and municipal operations. Building on these digital foundations, Coventry is creating a smart energy grid to better manage local renewable power generation.

Coventry University is also the first in the UK to successfully deploy a 5G Standalone network. The forward-thinking council mandated 5G labs at Coventry University to support next-generation teaching in subjects like healthcare and engineering. Students can now access immersive learning through technologies like virtual reality. 

For instance, healthcare students are using virtual and augmented reality to explore the human body like never before. Professors use a headset, and 5G allows them to call up any part of the body during a lesson, making points and taking questions from students in real time – a far more flexible and interactive teaching experience.

Elsewhere, for environmental services provider Veolia, 5G enables real-time asset tracking. Veolia’s head of digital strategy and innovations, Chris Burrows, outlines how sensors on the company’s recycling-collection trucks can ensure a bin is emptied in 16.5 seconds or less, identify potholes, and build air-quality maps across cities.

CCTV cameras on Veolia’s trucks also use edge computing to pinpoint potential collisions, analysing footage instantly. “It effectively gives you a threat-to-life score,” he says. This facilitates rapid accident responses while providing evidence against false claims. Burrows emphasises that realising these benefits requires a supportive company culture and employees willing to act on data insights.

Meanwhile, techUK’s Evans lists encouraging 5G deployments in areas like ports and hospitals to manage assets and workforces. “The NHS wastes £300 million a year on medicine, at least half of which is avoidable, and is down to fridges breaking down, or drugs being left outside for too long. Better asset tracking would change that.”

Evans says, though, that if 5G is going to be successful and become “the digital fabric in the digital society”, there must be large-scale rollouts targeting enterprise use cases.

Daniel Peach, head of digital acceleration programmes at Google, predicts that greater 5G adoption will spur many new business models and opportunities. “It might seem minor, but there are a lot of buildings we don’t have data for,” he says. “There is a use case of energy optimisation and moving beyond motion sensors. If you track that centrally, you can entirely shut off parts of the building when it’s not in use. There is so much scope for innovation around connected buildings and connected vehicles.”

Cut the jargon

To accelerate the move to a digital society, there are a number of barriers to overcome. Gliddon stresses the need for appropriate language to explain digital innovation in an engaging, sector-specific way. This chimes with Busby, who believes unclear terminology, such as “connectivity”, remains a barrier, with many unable to grasp its meaning.

The energy demands of an increasingly data-driven society must also be addressed. For Burrows, “digital sobriety” is needed regarding endless data storage and transfer. Peach expects 5G’s carbon impact to fall, since it is more efficient than 3G or 4G.

Finally, significant investment is required for the digital society – nationwide 5G coverage comes at a cost. Currently, telecoms operators are largely being asked to fund deployment alone, and forecasts suggest a £25-30 billion shortfall if the industry is to meet government expectations. This is one of the reasons Vodafone and Three have announced plans to merge. If approved, the merged company would have the necessary scale to invest in creating one of Europe’s most advanced 5G networks: Vodafone says it would invest £11 billion in the network over the next decade and take 5G Standalone to 99% of populated areas by 2034.

Ultimately, while the core network technology promises significant performance improvements, realising that potential requires careful human and organisational transformation. Joined-up thinking, greater collaboration between telcos, academia and the public and private sectors, and compelling storytelling that persuades businesses to embrace digital innovation are vital to unleashing 5G’s possibilities and building an inclusive and sustainable digital society.

This article was first published by Raconteur, in November 2023, following an in-person roundtable event that I moderated

Quick fix? Why B&Q’s modernisation plan is no short-term project

Entering his sixth year at the helm of home improvement giant B&Q, CEO Graham Bell is bullish about the company’s future, thanks to a new set of priorities, technologies and store formats

As a sports fan, B&Q’s Graham Bell can’t resist making some topical Rugby World Cup analogies to describe his feelings about the firm he’s been leading for the past five years. 

“If we were a rugby team, we could win the competition because we’re feeling strong, confident and ready to push for success,” he declares. 

The Scot believes that the business is getting “fitter by the year” and its employees are prepared to “go through walls” to realise his ambitious vision: for B&Q to become its customers’ trusted partner for everything relating to their homes.

Just how match fit B&Q is hasn’t always been clear, though, and perhaps Bell is indulging in a little sporting bravado. Financial results published this month revealed challenging trading conditions, with parent company Kingfisher – which also owns brands such as Screwfix, Castorama and Brico Dépôt – reporting a 33% decline in pre-tax profits for the first half of the year, to £317m. Indeed, some might argue that B&Q’s performance under Bell has been a little lacklustre so far. 

Even so, his company leads the UK home improvement sector, boasting a market share of 8.2% in 2022, according to Retail Economics. And Bell counters criticism of B&Q’s performance by noting that “we’ve performed really well since 2019, in terms of sales and profits, and we’ve hit our targets”. He adds that the modernisation project he started is only now beginning to bear fruit, so it isn’t yet reflected in the figures.

“If you stand still as a company, you may benefit for a couple of years because you’re not spending, but that won’t last long,” Bell argues. “Without investing in the business, you will lose ground to your competitors.”

Why B&Q had to be transformed

Bell’s modernisation drive has been sorely needed. When he took charge of B&Q in October 2018, the organisation was “unfit”, he says. It’s an assessment he’s well placed to make, having spent 25 years with Kingfisher, holding several senior roles across the group. 

He started his first stint with B&Q in 1998, serving in posts including property director and HR director, before spending 12 years at Screwfix. As CEO there, Bell oversaw a sizeable expansion of its store network in the UK.

When the opportunity came to return to B&Q as boss, he found it irresistible because he knew that the company could perform better. 

“I was looking at it and thinking: ‘It’s still number one but it’s not acting like number one,’” he recalls. “I’ve always seen B&Q as a top-10 UK brand. You don’t often get the chance to manage such a brand and I felt that I could move it forward.” 

How did B&Q change during Covid?

He wasn’t interested in playing it safe at B&Q either. Global socioeconomic crises that have occurred since his appointment would have been enough to halt progress on most business transformations. Not so for the 10-year plan that Bell set in motion. 

This long-term strategy, built on extensive research into future consumer needs and inspired by leading tech companies such as Google, has served as a “north star” for the leadership team, he explains. 

“Our vision is that we want people to see B&Q as helping them manage their lives through their homes,” Bell explains. “Having that vision helped us during the pandemic, because it enabled us to accelerate some initiatives, such as using our stores as mini-warehouses to deliver locally. We became 100% customer-centric because we had to look after people.”

How is Bell overhauling B&Q?

One way B&Q went about that was by starting video consultations during the Covid lockdowns, with kitchen designers remotely advising customers at home – something that would have been unimaginable even a few years before the pandemic.

Without investing in the business, you will lose ground to your competitors

B&Q has been modernising in other ways too. The company’s Instagram profile has attracted almost 200,000 followers thanks to instructive videos such as “How to improve your lawn”, “September gardening jobs” and “Quick ways to revamp your space”.

Bell plans to expand the firm’s offerings in the coming years, encompassing pet supplies, children’s products, home services and insurance, as well as building out the core DIY categories. Ultimately, he envisages B&Q offering a digital hub where customers can collate essential home information. 

“Imagine that you will have all your appliances listed with their serial numbers for the guarantees and warranty reminders, along with mood boards and artistic ideas,” he says.

Bell not only has a clear idea of what he wants the business to offer in the future; he also understands that effective investment today is vital to achieving that. For example, he’s keen to adopt tech to boost product availability, improve the fulfilment process and inspire customers in new ways. 

Customer service and sustainability are a natural match 

What’s more, the brand’s refreshed emphasis on doing everything it can to help customers has translated into initiatives ranging from providing energy-saving tips to encouraging responsible business among its SME clients. These customer-focused initiatives are all vital elements of the Bell plan.

For instance, B&Q’s new energy-saving service (ESS), launched in partnership with the Energy Saving Trust, offers homeowners tailored advice on energy-efficiency investments, linking them to government grants.

As an organisation, we must be fit enough to react quickly to whatever happens

“It gives you a shopping list of things you could improve, from insulation and thermostatic radiator valves, right up to double glazing and solar panels,” Bell explains.

Although the ESS was initially driven by the firm’s desire to become credible in the sustainability space, the war in Ukraine and the cost-of-living crisis created a new impetus, he says, adding: “In practice, people see saving money as a bigger priority than saving the planet.”

In March, TV presenter Jermaine Jenas fronted B&Q’s Energy Savers initiative. The former England footballer, clearly chosen to appeal to younger consumers, demonstrated how to save energy at home and highlighted the importance of helping community organisations to minimise their fuel bills. Jenas visited Welling United Football Club to help it adopt several energy-saving measures. 

How B&Q is prioritising adaptability

The ability to adapt to changing consumer, community and commercial needs is something Bell sees as key to B&Q’s future success. 

“This business needs to be agile,” he says. “We thought the pandemic was over and suddenly we had the Ukraine war. Utility costs went mad, as did mortgage interest rates. As an organisation, we must be fit enough to react quickly to whatever happens.”

Adaptability is also evident in the ongoing store expansion programme, which aims to enhance “convenience, comfort and simplicity” in urban areas through smaller B&Q Local shops. It’s meeting consumer demand in these communities for easy access to DIY products, according to Bell, who has taken a lead from the reduced ranges offered in supermarkets’ satellite stores. 

“We’ve trialled this format and gone out of our way to be more accessible,” he says.

The CEO is clearly relishing the challenge of implementing the second half of his 10-year business plan. But, like any good team player, he is happy to commend the efforts of those around him to propel the business forward. 

“I’m very pleased and confident about where we are now,” Bell says. “That doesn’t happen just through me. It happens through a team of energetic, creative people and about 30,000 employees, who are motivated to serve our customers better.”

This article was first published by Raconteur, as part of the Retail & Ecommerce special report in The Times, in September 2023

Cloud migration: doing more with less

To remain competitive and relevant, take advantage of nascent technologies like generative artificial intelligence, and accelerate business-critical innovation, leaders must move to the cloud, according to experts

Businesses across all industries have steadily been quickening their march to cloud computing over the past few years. Now, in an increasingly digitalised post-pandemic world, they are charging ahead at full speed. With an economic downturn looming, there is an urgent need to innovate, reduce costs, bolster security, access on-demand scalability and optimise resource use – all enabled by the cloud.

According to Gordon McKenna, vice president of cloud evangelism and alliances at Ensono, a global hybrid IT managed service provider, the explosion of generative artificial intelligence has only boosted the business case. “Generative AI is the biggest game changer,” he says. “The technology is not new, but now anyone can take advantage of pre-programmed language models. It has lowered the barrier to entry and is transforming everything.”

McKenna uses ChatGPT to help “write emails and documents”, saving “hours” of his time. He continues: “It’s the next wave of IT, and it allows people to do more with less by aiding them to do their jobs quicker and better.” His mantra is “be a disruptor or be disrupted”, and he argues that companies operating in the cloud stand to gain the most from generative AI and other nascent technologies. “Business leaders will accelerate to the cloud because they want to be AI-ready and aggregate their data securely on a modern platform.”

Harry Margiolakiotis, managing director of engineering at Thought Machine, a fintech company that builds cloud-native technology mainly for the financial services industry, says his company can maximise the potential of generative AI “because we are in the cloud and all cloud providers are currently rushing to offer gen AI services”. He explains: “Otherwise, we would have to set up all sorts of infrastructure to reach that point, to be able to use and experiment with these breakthrough technologies.”

Need for speed

Thought Machine’s clients stand to benefit. “The cycle of [digital core banking projects] going live is typically years, but thanks to the cloud and the software agility, this is now happening within five months,” says Margiolakiotis. “Having a live bank with half a million customers in single-digit months is unprecedented. It’s impossible to achieve this on-premise.”

Margiolakiotis says other benefits strengthen the case. “A cloud strategy for any business is one of the safest ways to stay competitive and relevant,” he says. “But you need to design it well. A ‘lift-and-shift’ approach is a start, but becoming cloud native is the best way to leverage the available technologies and on the way attract top talent too.”

The need for speed is evident for Nigel Gibbons, director and senior advisor at NCC Group, a Manchester-headquartered cybersecurity firm. “The business world is shifting fast,” he says. “Before the pandemic, I would have advised organisations to take baby steps, but you actually need to hit the accelerator. Organisations that can operate to the edge of their risk envelope should commit to the cloud faster due to a raft of benefits. We can now take monolithic applications, old-legacy stuff, and transform them at a speed that no one appreciated a couple of years ago.”

Moreover, yesterday’s model won’t be fit for tomorrow. Beware the cloud paradigm of doing more for less; the reality is to prepare to do more with more, expediting innovation for growth. Yet a certain amount of caution is required regarding mindset and expertise, Gibbons warns. “Moving to the cloud resets trust, resets risk, resets all those parameters you formerly operated under on-premise,” he says. “Too many organisations that have rushed to the cloud have gotten themselves into awkward situations, where they thought they could float the traditional control frameworks, models, and ways of operating.”

Data privacy concerns

Gibbons notes one of the “biggest areas of opportunity around cloud is driving down risk and taking organisations into a new band of profitability and innovation by understanding what true risk potential exists”. Additionally, cloud “brings a significant step up in terms of advanced end-to-end security capabilities, which few organisations realise”.

Meanwhile, a somewhat frustrated Nathan Hayes, IT director of global law firm Osborne Clarke, points out that the legal industry is generally slow to adopt new technologies. “Our cloud-first approach is very much appended with the term ‘where possible’,” he says, identifying two barriers to progress. “One is our clients, a few of whom are remarkably opposed to cloud services. And regulators in some jurisdictions are still a bit in the dark ages.”

With data privacy especially a concern for law firms, Osborne Clarke is in the process of “setting up ring-fenced, Microsoft-Azure-hosted OpenAI tools” to experiment and secure data in the cloud, says Hayes. He stresses the dangers of relying on AI, though, and references a recent case in which a US attorney submitted a court brief containing false – and unchecked – citations generated by ChatGPT. The cautionary example reinforced the point that prudence is best in the legal profession. “It’s all about trust,” says Hayes. “If you can’t trust your lawyers, who can you trust?”

Mark Howard, head of technology engineering at Nuffield Health, the UK’s largest healthcare charity, reveals similarly modest progress in this area, though the charity is actively growing its expertise. “We are making steady progress on our cloud journey, and de-risking is a big part of what we want to do,” he says. “We’re not a technology company, so for us it’s also about bringing the strategic importance of cloud migration to life for our wider business, as well as influencing organisational change. You have to go in with eyes open, and acknowledge that it’s a conscious decision made for good reasons, with a continual focus on addressing risk.”

Indeed, Hayes challenges the notion that businesses in the cloud can do more with less. “I’m finding that we can do much more, but it’s costing much more, too.”

Long-term value

Perhaps Elon Musk would sympathise with this observation, given that the cost-cutting billionaire allegedly stopped paying Google Twitter’s $300 million (£236 million) a year bill for cloud services, possibly leading to disastrous user limitations when the contract expired on July 1. 

In response to Hayes, Gibbons says: “In industries constrained by compliance and regulation, the cloud offers such potential for business resilience, but without care the cost can become exponential.” 

He argues that the return on investment is significant but not immediate. “Yes, there is an increasing cost, but that cost should be offset against the potential you realise as a business,” Gibbons says. “Because utility-based computing – which is what the cloud is – allows me to realise the innovation, the untapped potential, the intellectual property in my people’s heads. Ultimately, survival in the digital economy depends on organisations making that shift.”

Dane Buchanan, global director of data, analytics and tech at M&C Saatchi Performance, is more optimistic about generating efficiencies and value from the cloud. Companies can do more with less “by continuously monitoring and optimising their cloud resources”, scaling up or down based on demand, he counters. 

“Media agencies and the cloud go hand in hand, as we have to be dynamic and deal with a multitude of clients worldwide,” says Buchanan. “Ingesting data across many markets brings its own set of challenges, particularly when it comes to where we store data and how we navigate region-specific data regulations. Working in the cloud gives us the flexibility we need to spin up data centres based on individual clients’ requirements.”

For M&C Saatchi Performance, most of its tech resides within Amazon Web Services. “We’ve enabled auto-scaling and utilise AWS Forecast for proactive planning,” says Buchanan. To illustrate his point, he turns the key on a car analogy. “Think of data as fuel; without it, the vehicle will not move. The cloud enables you to get the most out of that data. But this requires proper data governance and labelling – an area in which many companies have underinvested. Cloud-based SaaS products typically offer greater flexibility, security, and more frequent updates when compared to traditional software.”

Cloud innovation for good

Security is a theme taken up by Ensono’s McKenna. A recent visit to Microsoft’s data centre in Dublin proved reassuring. “There are tank traps leading up to the entrance,” he says. “There are only five people in the entire building – four of whom are security staff, and the other is a guy collecting discs for destruction. Then it’s biometric control to enter, and you are weighed in and out, just in case you have picked up a disc. How can you be more secure than that, on-prem?”

Aside from physical security, the cloud is secure in other ways, says Nathaniel Jones, director of strategic threat and engagement at global cybersecurity firm Darktrace. He dismisses the so-called quantum apocalypse – the idea that encrypted data is being captured and stored today so that it can be decrypted once quantum computers mature – as a concern for the immediate term. Gibbons agrees: “This is one of the reasons for migrating to the cloud: my data lake will always be up to the latest standard around encryption.”

“A more pressing issue is for businesses to understand themselves, their own infrastructure and high-value assets, in the transition to a cloud-first approach,” says Jones. “A zero trust cloud model that continuously monitors the organisation’s internal and external attack surface will help cost-wise but also take advantage of the shared responsibility model in cyber risk transference.”

While tech will possibly save the world, there is certainly an element of digital Darwinism for businesses to contend with, Jones adds. “From an IT perspective, if you’re not moving to the cloud, your business will die,” he concludes.

This article was first published by Raconteur, following an in-person roundtable event that I moderated in June 2023