Go Flux Yourself: Navigating the Future of Work (No. 23)

TL;DR: November’s Go Flux Yourself marks three years since ChatGPT’s launch by examining the “survival of the shameless” – Rutger Bregman’s diagnosis of Western elite failure. With responsible innovation falling out of fashion and moral ambition in short supply, it asks what purpose-driven technology actually looks like when being bad has become culturally acceptable.

Image created on Nano Banana

The future

“We’ve taught our best and brightest how to climb, but not what ladder is worth climbing. We’ve built a meritocracy of ambition without morality, of intelligence without integrity, and now we are reaping the consequences.”

The above quotation comes from Rutger Bregman, the Dutch historian and thinker who shot to prominence at the World Economic Forum in Davos in 2019. You may recall the viral clip. Standing before an audience of billionaires, he did something thrillingly bold: he told them to pay their taxes.

“It feels like I’m at a firefighters’ conference and no one’s allowed to speak about water,” he said almost seven years ago. “Taxes, taxes, taxes. The rest is bullshit in my opinion.”

Presumably due to his truth-telling, he has not been invited back to the Swiss Alps for the WEF’s annual meeting.

Bregman is this year’s BBC Reith Lecturer, and, again, he is holding a mirror up to society to reveal its ugly, venal self. His opening lecture, A Time of Monsters – a title borrowed from Antonio Gramsci’s 1929 prison notebooks – delivered at the end of November, builds on that Davos provocation with something more troubling: a diagnosis of elite failure across the Western world. This time, his target isn’t just tax avoidance. It’s what he calls the “survival of the shameless”: the systematic elevation of the unscrupulous over the capable, and the brazen over the virtuous.

Even Bregman isn’t immune to the censorship he critiques. The BBC reportedly removed a line from his lecture describing Donald Trump as “the most openly corrupt president in American history”. The irony, as Bregman put it, is that the lecture was precisely about “the paralysing cowardice of today’s elites”. When even the BBC flinches from stating the obvious – and presumably fears how Trump might react (he has threatened to sue the broadcaster for $5 billion over the doctored-footage scandal that, earlier in November, prompted the resignations of the director general and the CEO of BBC News) – you know something is deeply rotten.

Bregman’s opening lecture is well worth a listen, as is the Q&A afterwards. His strong opinions chimed with the beliefs of Gemma Milne, a Scottish science writer and lecturer at the University of Glasgow, whom I caught up with a couple of weeks ago, having first interviewed her almost a decade ago.

The author of Smoke & Mirrors: How Hype Obscures the Future and How to See Past It has recently submitted her PhD thesis at the University of Edinburgh (Putting the future to work – The promises, product, and practices of corporate futurism), and has been tracking this shift for years. Her research focuses on “corporate futurism” and the political economy of deep tech – essentially, who benefits from the stories we tell about innovation.

Her analysis is blunt: we’re living through what she calls “the age of badness”.

“Culturally, we have peaks and troughs in terms of how much ‘badness’ is tolerated,” she told me. “Right now, being the bad guy is not just accepted, it’s actually quite cool. Look at Elon Musk, Trump, and Peter Thiel. There’s a pragmatist bent that says: the world is what it is, you just have to operate in it.”

When Smoke & Mirrors came out in 2020, conversations around responsible innovation were easier. Entrepreneurs genuinely wanted to get it right. The mood has since curdled. “If hype is how you get things done and people get misled along the way, so be it,” Gemma said of the shift in attitude among those in power. “‘The ends justify the means’ has become the prevailing logic.”

On a not-unrelated note, November 30 marked exactly three years since OpenAI launched ChatGPT. (This end-of-the-month newsletter arrives a day later than usual – the weekend, plus an embargo on the Adaptavist Group research below.) We’ve endured three years of breathless proclamations about productivity gains, creative disruption, and the democratisation of intelligence. And three years of pilot programmes, failed implementations, and so much hype. 

Meanwhile, the graduate job market has collapsed by two-thirds in the UK alone, and unemployment has risen to 5% – the highest since September 2021, the height of the pandemic fallout – according to Office for National Statistics data published in mid-November.

New research from The Adaptavist Group, gleaned from almost 5,000 knowledge workers split evenly across the UK, US, Canada and Germany, underscores the insidious social cost: a third (32%) of workers report speaking to colleagues less since using GenAI, and 26% would rather engage in small talk with an AI chatbot than with a human.

So here’s the question that Bregman forces us to confront: if we now have access to more intelligence than ever before – both human and artificial – what exactly are we doing with it? And are we using technology for good, for human enrichment and flourishing? On the whole, with artificial intelligence, I don’t think so.

Bregman describes consultancy, finance, and corporate law as a “gaping black hole” that sucks up brilliant minds: a Bermuda Triangle of talent that has tripled in size since the 1980s. Every year, he notes, thousands of teenagers write beautiful university application essays about solving climate change, curing disease, or ending poverty. A few years later, most have been funnelled towards the likes of McKinsey, Goldman Sachs, and Magic Circle law firms.

The numbers bear this out. Around 40% of Harvard graduates now end up in that Bermuda Triangle of talent, according to Bregman. Include big tech, and the share rises above 60%. One Facebook employee, a former maths prodigy, quoted by the Dutchman in his first Reith lecture, said: “The best minds of my generation are thinking about how to make people click ads. That sucks.”

If we’ve spent decades optimising our brightest minds towards rent-seeking and attention-harvesting, AI accelerates that trajectory. The same tools that could solve genuine problems are instead deployed to make advertising more addictive, to automate entry-level jobs without creating pathways to replace them, and to generate endless content that says nothing new.

Gemma sees this in how technology and politics have fused. “The entanglement has never been stronger or more explicit.” Twelve months ago, Trump won his second term. At his inauguration in January, the front-row seats were taken by several technology leaders, happy to pay the price of genuflection in return for deregulation. But what is the ultimate cost to humanity of such cosy relationships?

“These connections aren’t just more visible, they’re culturally embedded,” Gemma told me. “People know Musk’s name and face without understanding Tesla’s technology. Sam Altman is AI’s hype guru, but he’s also a political leader now. The two roles have merged.”

Against this backdrop, I spent two days at London’s Guildhall in early November for the Thinkers50 conference and gala. The theme was “regeneration”, exploring whether businesses can restore rather than extract.

Erinch Sahan from Doughnut Economics Action Lab offered concrete examples of businesses demonstrating that purpose and profit needn’t be mutually exclusive: Patagonia’s steward-ownership model, Fairphone’s “most ethical smartphone in the world” with modular repairability, and LUSH’s commitment to fair taxes and employee ownership.

Erinch’s – frankly heartwarming – list, of which this trio is a small fraction, contrasted sharply with Gemma’s observation about corporate futurism: “The critical question is whether it actually transforms organisations or simply attends to the fear of perma-crisis. You bring in consultants, do the exercises, and everyone feels better about uncertainty. But does anything actually change?”

Some forms of the practice can be transformative. Others primarily manage emotion without producing radical change. The difference lies in whether accountability mechanisms exist, whether outcomes are measured, tracked, and tied to consequences.

This brings me to Delhi-based Ruchi Gupta, whom I met over a video call a few weeks ago. She runs the not-for-profit Future of India Foundation and has built something that embodies precisely the kind of “moral ambition” Bregman describes, although she’d probably never use that phrase. 

India is home to the world’s largest youth population, with one in every five young people globally being Indian. Not many – and not enough – are afforded the skills and opportunities to thrive. Ruchi’s assessment of the current situation is unflinching. “It’s dire,” she said. “We have the world’s largest youth population, but insufficient jobs. The education system isn’t skilling them properly; even among the 27% who attend college, many graduate without marketable skills or professional socialisation. Young people will approach you and simply blurt things out without introducing themselves. They don’t have the sophistication or the networks.”

Notably, cities comprise just 3% of India’s land area but account for 60% of its GDP. That concentration tells you everything about how poorly opportunities are distributed.

Gupta’s flagship initiative, YouthPOWER, responds to this demographic reality by creating India’s first and only district-level youth opportunity and accountability platform, covering all 800 districts. The platform synthesises data from 21 government sources to generate the Y-POWER Score, a composite metric designed to make youth opportunity visible, comparable, and politically actionable.
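For the technically minded, here is a minimal sketch of how a composite score of this kind might be computed: normalise each raw indicator onto a common scale, then take a weighted sum. To be clear, the indicator names, ranges, and weights below are my own illustrative assumptions, not the foundation’s actual methodology.

```python
# Hypothetical sketch of a composite district score – not the real Y-POWER formula.
# Indicator names, plausible ranges, and weights are illustrative assumptions.

def min_max(value, lo, hi):
    """Scale a raw indicator onto 0-1 so different units become comparable."""
    return (value - lo) / (hi - lo) if hi > lo else 0.0

WEIGHTS = {  # assumed weighting across education, skilling, and jobs
    "college_enrolment_rate": 0.4,
    "skilling_centres_per_100k": 0.3,
    "formal_jobs_per_1k_youth": 0.3,
}

RANGES = {  # assumed min/max used for normalisation
    "college_enrolment_rate": (0, 100),
    "skilling_centres_per_100k": (0, 50),
    "formal_jobs_per_1k_youth": (0, 300),
}

def composite_score(district: dict) -> float:
    """Weighted sum of normalised indicators, scaled to 0-100."""
    total = sum(w * min_max(district[k], *RANGES[k]) for k, w in WEIGHTS.items())
    return round(100 * total, 1)

# A fictional district's raw indicators -> one comparable number.
print(composite_score({
    "college_enrolment_rate": 27,
    "skilling_centres_per_100k": 12,
    "formal_jobs_per_1k_youth": 90,
}))  # 27.0
```

The value of such a metric lies less in the arithmetic than in the comparability: once every district reduces to one number produced by the same formula, rankings – and political accountability – become possible.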

“Approximately 85% of Indians continue to live in the district of their birth,” Ruchi explained. “That’s where they situate their identity; when young people introduce themselves to me, they say their name and their district. If you want to reach all young people and create genuine opportunities, it has to happen at the district level. Yet nothing existed to map opportunity at that granularity.”

What makes YouthPOWER remarkable, aside from the smart data aggregation, is the accountability mechanism. Each district is mapped to its local elected representative, the Member of Parliament who chairs the district oversight committee. The platform creates a feedback loop between outcomes and political responsibility.

“Data alone is insufficient; you need forward motion,” Ruchi said. “We mapped each district to its MP. The idea is to work directly with them, run pilots that demonstrate tangible improvement, then scale a proven playbook across all 543 constituencies. When outcomes are linked to specific politicians, accountability becomes real rather than rhetorical.”

Her background illuminates why this matters personally. Despite attending good schools in Delhi, her family’s circumstances meant she didn’t know about premier networking institutions. She went to an American university because it let her work while studying, not because it was the best fit. She applied only to Harvard Business School – which she had learnt about from Erich Segal’s Love Story – without any work experience.

“Your background determines which opportunities you even know exist,” she told me. “It was only at McKinsey that I finally understood what a network does – the things that happen when you can simply pick up the phone and reach someone.” Thankfully, for India’s sake, Ruchi has found her purpose after time spent lost in the Bermuda Triangle of talent.

But the lack of opportunities and woeful political accountability are global challenges. Ruchi continued: “The right-wing surge you’re seeing in the UK and the US stems from the same problem: opportunity isn’t reaching people where they live. The normative framework is universal: education, skilling, and jobs on one side; empirical baselines and accountability mechanisms on the other. Link outcomes to elected representatives, and you create a feedback loop that drives improvement.”

So what distinguishes genuine technology for good from its performative alternative?

Gemma’s advice is to be explicit about your relationship with hype. “Treat it like your relationship with money. Some people find money distasteful but necessary; others strategise around it obsessively. Hype works the same way. It’s fundamentally about persuasion and attention, getting people to stop and listen. In an attention economy, recognising how you use hype is essential for making ethical and pragmatic decisions.”

She doesn’t believe we’ll stay in the age of badness forever. These things are cyclical. Responsible innovation will become fashionable again. But right now, critiquing hype lands very differently because the response is simply: “Well, we have to hype. How else do you get things done?”

Ruchi offers a different lens. The economist Joel Mokyr has argued that innovation is fundamentally about culture, not just human capital or resources. “Our greatness in India will depend on whether we can build that culture of innovation,” Ruchi said. “We can’t simply skill people as coders and rely on labour arbitrage. That’s the current model, and it’s insufficient. If we want to be a genuinely great country, we need to pivot towards something more ambitious.”

Three years into the ChatGPT era, we have a choice. We can continue funnelling talent into the Bermuda Triangle, using AI to amplify artificial importance. Or we can build something different. For instance, pioneering accountability systems like YouthPOWER that make opportunity visible, governance structures that demand transparency, and cultures that invite people to contribute to something larger than themselves.

Bregman ends his opening Reith Lecture with a simple observation: moral revolutions happen when people are asked to participate.

Perhaps that’s the most important thing leaders can do in 2026: not buy more AI subscriptions or launch more pilots, but ask the question: what ladder are we climbing, and who benefits when we reach the top?

The present

Image created on Midjourney

The other Tuesday, on the 8.20am train from Waterloo to Clapham Junction, heading to The Portfolio Collective’s Portfolio Career Festival at Battersea Arts Centre, I witnessed a small moment that captured everything wrong with how we’re approaching AI.

The guard announced himself over the tannoy. But it wasn’t his voice. It was a robotic, AI-generated monotone informing passengers he was in coach six, should anyone need him.

I sat there, genuinely unnerved. This was the Turing trap in action, using technology to imitate humans rather than augment them. The guard had every opportunity to show his character, his personality, perhaps a bit of warmth on a grey November morning. Instead, he’d outsourced the one thing that made him irreplaceable: his humanity.

Image created on Nano Banana (using the same prompt as the Midjourney one above)

Erik Brynjolfsson, the Stanford economist who coined the term in 2022, argues we consistently fall into this software snare. We design AI to mimic human capabilities rather than complement them. We play to our weaknesses – the things machines do better – instead of our strengths. The train guard’s voice was his strength. His ability to set a tone, to make passengers feel welcome, to be a human presence in a metal tube hurtling through South London. That’s precisely what got automated away.

It’s a pattern I’m seeing everywhere. By blindly grabbing AI and outsourcing tasks that reveal what makes us unique, we risk degrading human skills, eroding trust and connection, and – I say this without hyperbole – automating ourselves to extinction.

The timing of that train journey felt significant. I was heading to a festival entirely about human connection – networking, building personal brand, the importance of relationships for business and greater enrichment. And here was a live demonstration of everything working against that.

It was also Remembrance Day. As we remembered those who fought for our freedoms, not least during a two-minute silence (that felt beautifully calming – a collective, brief moment without looking at a screen), I was about to argue on stage that we’re sleepwalking into a different kind of surrender: the quiet handover of our professional autonomy to machines.

The debate – Unlocking Potential or Chasing Efficiency: AI’s Impact on Portfolio Work – was held before around 200 ambitious portfolio professionals. The question was straightforward: should we embrace AI as a tool to amplify our skills, creativity, and flow – or hand over entire workflows to autonomous agents and focus our attention elsewhere?

Pic credit: Afonso Pereira

You can guess which side I argued. The battle for humanity isn’t against machines, per se. It’s about knowing when to direct them and when to trust ourselves. It’s about recognising that the guard’s voice – warm, human, imperfect – was never a problem to be solved. It was a feature to be celebrated.

The audience wanted an honest conversation about navigating this transition thoughtfully. I hope we delivered. But stepping off stage, I couldn’t shake the irony: a festival dedicated to human connection, held on the day we honour those who preserved our freedoms, while outside these walls the evidence mounts that we’re trading professional agency for the illusion of efficiency.

To watch the full video session, please see here: 

A day later, I attended an IBM panel at the tech firm’s London headquarters. Their Race for ROI research contained some encouraging news: two-thirds of UK enterprises are experiencing significant AI-driven productivity improvements. But dig beneath the headline, and the picture darkens. Only 38% of UK organisations are prioritising inclusive AI upskilling opportunities. The productivity gains are flowing to those already advantaged. Everyone else is figuring it out on their own – 77% of those using AI at work are entirely self-taught.

Leon Butler, General Manager for IBM UK & Ireland, offered a metaphor that’s stayed with me. He compared working with opaque AI models to drinking from a test tube you can’t see into.

“There’s liquid in it – that’s the training data – but you can’t see it. You pour your own data in, mix it, and you’re drinking something you don’t fully understand. By the time you make decisions, you need to know it’s clean and true.”

That demand for transparency connects directly to Ruchi’s work in India and Gemma’s critique of corporate futurism. Data for good requires good data. Accountability requires visibility. You can’t build systems that serve human flourishing if the foundations are murky, biased, or simply unknown.

As Sue Daley OBE, who leads techUK’s technology and innovation work, pointed out at the IBM event: “This will be the last generation of leaders who manage only humans. Going forward, we’ll be managing humans and machines together.”

That’s true. But the more important point is this: the leaders who manage that transition well will be the ones who understand that technology is a means, not an end. Efficiency without purpose is just faster emptiness.

The question of what we’re building, and for whom, surfaced differently at the Thinkers50 conference. Lynda Gratton, whom I’ve interviewed a couple of times about living and working well, opened with her weaving metaphor. We’re all creating the cloth of our lives, she argued, from productivity threads (mastering, knowing, cooperating) and nurturing threads (friendship, intimacy, calm, adventure).

Not only is this an elegant idea, but I love the warm embrace of messiness and complexity. Life doesn’t follow a clean pattern. Threads tangle. Designs shift. The point isn’t to optimise for a single outcome but to create something textured, resilient, human.

That messiness matters more now. My recent newsletters have explored the “anti-social century” – how advances in technology correlate with increased isolation. Being in that Guildhall room – surrounded by management thinkers from around the world, having conversations over coffee, making new connections – reminded me why physical presence still matters. You can’t weave your cloth alone. You need other people’s threads intersecting with yours.

Earlier in the month, an episode of The Switch, St James’s Place Financial Adviser Academy’s career change podcast, was released. Host Gee Foottit wanted to explore how professionals can navigate AI’s impact on their working lives – the same territory I cover in this newsletter, but focused specifically on career pivots.

We talked about the six Cs – communication, creativity, compassion, courage, collaboration, and curiosity – and why these human capabilities become more valuable, not less, as routine cognitive work gets automated. We discussed how to think about AI as a tool rather than a replacement, and why the people who thrive will be those who understand when to direct machines and when to trust themselves.

The conversations I’m having – with Gemma, Ruchi, the panellists at IBM, the debaters at Battersea – reinforce the central argument. Technology for good isn’t a slogan. It’s a practice. It requires intention, accountability, and a willingness to ask uncomfortable questions about who benefits and who gets left behind.

If you’re working on something that embodies that practice – whether it’s an accountability platform, a regenerative business model, or simply a team that’s figured out how to use AI without losing its humanity – I’d love to hear from you. These conversations are what fuel the newsletter.

The past

A month ago, I fired my one and only work colleague. It was the best decision for both of us. But the office still feels lonely and quiet without him.

Frank is a Jack Russell I’ve had since he was a puppy, almost five years ago. My daughter, only six months old when he came into our lives, grew up with him. Many people with whom I’ve had video calls will know Frank – especially if the doorbell went off during our meeting. He was the most loyal and loving dog, and for weeks after he left, I felt bereft. Suddenly, no one was nudging me in the middle of the afternoon to go for a much-needed, head-clearing stroll around the park.

Pic credit: Samer Moukarzel

So why did I rehome him?

As a Jack Russell, he is fiercely territorial. And the corner of south-east London where I live and work is busy. He was always on guard, trying to protect and serve me. The postman, Pieter, various delivery folk, and other visitors to the house felt his presence, let’s say. Countless letters were torn to shreds by his vicious teeth – so many that I had to install an external letterbox.

A couple of months ago, while I was trying to retrieve a sock that Frank had stolen and was guarding on the sofa, he snapped and drew blood. After multiple sessions with two different behaviourists, following previous incidents, he was already on a yellow card. If he bit me, who wouldn’t he bite? Red card.

The decision was made to find a new owner. I made a three-hour round trip to meet Frank’s new family, whose home is in the Norfolk countryside – much better suited to a Jack Russell’s temperament. After a walk together in a neutral venue, he travelled back to their house and apparently took 45 minutes to leave their car, snarling, unsure, and confused. It was heartbreaking to think he would never see me again.

But I knew Frank would be happy there. Later that day, I received videos of him dashing around fields. His new owners said they already loved him. A day later, in the bag of stuff I’d handed over, they found the cartoon my daughter had drawn of Frank, captioned to say she loved him.

Now, almost a month on, the house is calmer. My daughter has stopped drawing pictures of Frank with tearful captions. And Frank? He’s made friends with Ralph, the black Labrador who shares his new home. The latest photo shows them sleeping side by side, exhausted from whatever countryside adventures Jack Russells and Labradors get up to together.

The proverb “if you love someone, set them free” helped ease the hurt. But there’s something else in this small domestic drama that connects to everything I’ve been writing about this month.

Bregman asks what ladder we’re climbing. Gemma describes an age where doing the wrong thing has become culturally acceptable. Ruchi builds systems that create accountability where none existed. And here I was, facing a much smaller question: what do I owe this dog?

The easy path was to keep him. To manage the risk, install more barriers, and hope for the best. The more challenging path was to acknowledge that the situation wasn’t working – not for him, not for us – and to make a change that felt like failure but was actually responsibility.

Moral ambition doesn’t only show up in accountability platforms and regenerative business models. Sometimes it’s in the quiet decisions: the ones that cost you something, that nobody else sees, that you make because it’s right rather than because it’s easy.

Frank needed space to run, another dog to play with, and owners who could give him the environment his breed demands. I couldn’t provide that. Pretending otherwise would have been a disservice to him and a risk to my family.

The age of badness that Gemma describes isn’t just about billionaires and politicians. It’s also about the small surrenders we make every day: the moments we choose convenience over responsibility, comfort over honesty, the path of least resistance over the path that’s actually right.

I don’t want to overstate this. Rehoming a dog is not the same as building YouthPOWER or challenging tax-avoiding elites at Davos. But the muscle is the same. The willingness to ask uncomfortable questions. The courage to act on the answers.

My daughter’s drawings have stopped. The house is quieter. And somewhere in Norfolk, Frank is sleeping on a Labrador, finally at peace.

Sometimes the most important thing you can do is recognise when you’re climbing the wrong ladder – and have the grace to climb down.

Statistics of the month

🛒 Cyber Monday breaks records
Today marks the 20th annual Cyber Monday, projected to hit $14.2 billion in US sales – surpassing last year’s record. Peak spending occurs between 8pm and 10pm, when consumers spend roughly $15.8 million per minute. A reminder that convenience still trumps almost everything. (National Retail Federation)

🎯 Judgment holds, execution collapses
US marketing job postings dropped 8% overall in 2025, but the divide is stark: writer roles fell 28%, computer graphic artists dropped 33%, while creative directors held steady. The pattern likely mirrors the UK – the market pays for strategic judgment; it’s automating production. (Bloomberry)

🛡️ Cybersecurity complacency exposed
More than two in five (43%) UK organisations believe their cybersecurity strategy requires little to no improvement – yet 71% have paid a ransom in the past 12 months, averaging £1.05 million per payment. (Cohesity)

💸 Cyber insurance claims triple
UK cyber insurance claims hit at least £197 million in 2024, up from £60 million the previous year – a stark reminder that threats are evolving faster than our defences. (Association of British Insurers)

🤖 UK leads Europe in AI optimism
Some 88% of UK IT professionals want more automation in their day-to-day work, and only 10% feel AI threatens their role – the lowest of any European country surveyed. Yet 26% say they need better AI training to keep pace. (TOPdesk)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 22)


TL;DR: October’s Go Flux Yourself explores the epidemic of disconnection in our AI age. As 35% of Britons use smart doorbells to avoid human contact on Hallowe’en, and children face 2,000 social media posts daily, we’re systematically destroying the one skill that matters most: genuine human connection.

Image created on Midjourney

The future

“The most important single ingredient in the formula of success is knowing how to get along with people.”

Have we lost the knowledge of how to get along with people? And to what extent is an increasing dependence on large language models degrading this skill for adults, and not allowing it to bloom for younger folk?

When Theodore Roosevelt, the 26th president of the United States, spoke the above words in the early 20th century, he couldn’t have imagined a world where “getting along with people” would require navigating screens, algorithms, and artificial intelligence. Yet here we are, more than a century after he died in 1919, rediscovering the wisdom in the most unsettling way possible.

Indeed, this Hallowe’en, 35% of UK homeowners plan to use smart doorbells to screen trick-or-treaters, according to estate agents eXp UK. Two-thirds will ignore the knocking. We’re literally using technology to avoid human contact on the one night of the year when strangers are supposed to knock on our doors.

It’s the perfect metaphor for where we’ve ended up. The scariest thing isn’t what’s at your door. It’s what’s already inside your house.

Princess Catherine put it perfectly earlier in October in her essay, The Power of Human Connection in a Distracted World, for the Centre for Early Childhood. “While digital devices promise to keep us connected, they frequently do the opposite,” she wrote, in collaboration with Robert Waldinger, part-time professor of psychiatry at Harvard Medical School. “We’re physically present but mentally absent, unable to fully engage with the people right in front of us.”

I was a contemporary of Kate’s at the University of St Andrews in the wilds of East Fife, Scotland. We both graduated in 2005, a year before Twitter launched and a year after “TheFacebook” appeared. We lived in a world where difficult conversations happened face-to-face, where boredom forced creativity, and where friendship required actual presence. That world is vanishing with terrifying speed.

The Princess of Wales warns that an overload of smartphones and computer screens is creating an “epidemic of disconnection” that disrupts family life. Notably, her three kids are not allowed smartphones (and I’m pleased to report my eldest, aged 11, has a simple call-and-text mobile). “When we check our phones during conversations, scroll through social media during family dinners, or respond to emails while playing with our children, we’re not just being distracted, we are withdrawing the basic form of love that human connection requires.”

She’s describing something I explored in January’s newsletter about the “anti-social century”. As Derek Thompson of The Atlantic, who coined the phrase, describes it, we’re living through a period marked by convenient communication and vanishing intimacy. We’re raising what Catherine calls “a generation that may be more ‘connected’ than any in history while simultaneously being more isolated, more lonely, and less equipped to form the warm, meaningful relationships that research tells us are the foundation of a healthy life”.

The data is genuinely frightening. Recent research from online safety app Sway.ly found that children in the UK and the US are exposed to around 2,000 social media posts per day. Some 77% say it harms their physical or emotional health. And, scariest yet, 72% of UK children have seen content in the past month that made them feel uncomfortable, upset, sad or angry.

Adults fare little better. A recent study on college students found that AI chatbot use is hollowing out human interaction. Students who used to help each other via class Discord channels now ask ChatGPT. Eleven out of 17 students in the study reported feeling more isolated after AI adoption.

One student put it plainly: “There’s a lot you have to take into account: you have to read their tone, do they look like they’re in a rush … versus with ChatGPT, you don’t have to be polite.”

Who needs niceties in the AI age?! We’re creating technology to connect us, to help us, to make us more productive. And it’s making us lonelier, more isolated, less capable of basic human interactions.

Marvin Minsky, who won the Turing Award back in 1969, said something that feels eerily relevant now: “Once the computers get control, we might never get it back. We would survive at their sufferance. If we’re lucky, they might decide to keep us as pets.”

He said that 56 years ago. We’re not there yet. But we’re building towards something, and whether that something serves humanity or diminishes it depends entirely on the choices we make now.

Anthony Cosgrove, who started his career at the Ministry of Defence as an intelligence analyst in 2003 and has earned an MBE, has seen this play out from the inside. Having led global teams at HSBC and now running data marketplace platform Harbr, he’s witnessed first-hand how organisations stumble into AI adoption without understanding the foundations.

“Most organisations don’t even know what data they already hold,” he told me over a video call a few weeks ago. “I’ve seen millions of pounds wasted on duplicate purchases across departments. That messy data reality means companies are nowhere near ready for this type of massive AI deployment.”

After spending years building intelligence functions and technology platforms at HSBC – first for wholesale banking fraud, then expanding to all financial crime across the bank’s entire customer base – he left to solve what he calls “the gap between having aggregated data and turning it into things that are actually meaningful”.

What jumped out from our conversation was his emphasis on product management. “For a really long time, there was a lack of product management around data. What I mean by that is an obsession about value, starting with the value proposition and working backwards, not the other way round.”

This echoes the findings I discussed in August’s newsletter about graduate jobs. As I wrote then, graduate jobs in the UK have dropped by almost two-thirds since 2022 – roughly double the decline for all entry-level roles. That’s the year ChatGPT launched. The connection isn’t coincidental.

Anthony’s perspective on this is particularly valuable. “AI can only automate fragments of a job, not replace whole roles – even if leaders desperately want it to.” He shared a conversation with a recent graduate who recognised that his data science degree would, ultimately, be useless. “The thing he was doing is probably going to be commoditised fairly quickly. So he pivoted into product management.”

This smart graduate’s instinct was spot-on. He’s now, in Anthony’s words, “actively using AI to prototype data products, applications, digital products, and AI itself. And because he’s a data scientist by background, he has a really good set of frameworks and set of skills”.

Yet the broader picture remains haunting. Microsoft’s 2025 Work Trend Index reveals that 71% of UK employees use unapproved consumer AI tools at work. Fifty-one per cent use these tools weekly, often for drafting reports and presentations, or even managing financial data, all without formal IT approval.

This “Shadow AI” phenomenon is simultaneously encouraging and terrifying. “It shows that people are agreeable to adopting these types of tools, assuming that they work and actually help and aren’t hard to use,” Anthony observed. “But the second piece that I think is really interesting impacts directly the shareholder value of an organisation.”

He painted a troubling picture: “If a big percentage of your employees are becoming more productive and finishing their existing work faster or in different ways, but they’re doing so essentially untracked and off-books, you now have your employees that are becoming essentially more productive, and some of that may register, but in many cases it probably won’t.”

Assuming that many employees are using AI for work without being open about it with their employers, how concerned about security and data privacy are they likely to be?

Earlier in the month, Cybernews discovered that two AI companion apps, Chattee Chat and GiMe Chat, exposed millions of intimate conversations from over 400,000 users. The exposed data contained over 43 million messages and over 600,000 images and videos.

At the time of writing, one of the apps, Chattee, ranked 121st among Entertainment apps on the Apple App Store, having been downloaded over 300,000 times. This is a symptom of what people, including Microsoft’s AI chief Mustafa Suleyman (as per August’s Go Flux Yourself), are calling AI psychosis: the willingness to confide our deepest thoughts to algorithms while losing the ability to confide in actual humans.

As I explored in June 2024’s newsletter about AI companions, this trend has been accelerating. Back in March 2024, there had been 225 million lifetime downloads on the Google Play Store for AI companions alone. The problem isn’t scale. It’s the hollowing out of human connection.

Then there’s the AI bubble itself, which everyone in the space has been talking about in the last few weeks. The Guardian recently warned that AI valuations are “now getting silly”. The CAPE ratio – the cyclically adjusted price-to-earnings ratio – has reached dotcom-bubble levels. The “Magnificent 7” tech companies now represent slightly more than a third of the whole S&P 500 index.

OpenAI’s recent deals exemplify the circular logic propping up valuations. The arrangement under which OpenAI will pay Nvidia for chips and Nvidia will invest $100bn in OpenAI has been criticised as exactly what it is: circular. The latest move sees OpenAI pledging to buy lots of AMD chips and take a stake in AMD over time.

And yet amid this chaos, there are plenty of people going back to human basics: rediscovering real, in-person connection through physical activity and genuine community.

Consider walking football in the UK. What began in Chesterfield in 2011 as a gentle way to coax older men back into exercise is now one of Britain’s fastest-growing sports. More than 100,000 people play regularly across the UK, many managing chronic illnesses or disabilities. It has become “a masterclass in human communication” that no AI could replicate. Tony Jones, 70, captain of the over-70s, described it simply: “It’s the camaraderie, the dressing room banter.”

Research from Nottingham Trent University found that walking footballers’ emotional well-being exceeded the national average, and loneliness was less common. “The national average is about 5% for feeling ‘often lonely’,” said professor Ian Varley. “In walking football, it was 1%.”

This matters because authentic human interaction – the kind that requires you to read body language, manage tone, and show up physically – can’t be automated. Princess Catherine emphasises this in her essay, citing Harvard Medical School’s research showing that “the people who were more connected to others stayed healthier and were happier throughout their lives. And it wasn’t simply about seeing more people each week. It was about having warmer, more meaningful connections. Quality trumped quantity in every measure that mattered.”

The digital world offers neither warmth nor meaning. It offers convenience. And as Catherine warns, convenience is precisely what’s killing us: “We live increasingly lonelier lives, which research shows is toxic to human health, and it’s our young people (aged 16 to 24) that report being the loneliest of all – the very generation that should be forming the relationships that will sustain them throughout life.”

Roosevelt understood this instinctively over a century ago: success isn’t about what you know or what you can do. It’s about how you relate to other people. That skill – the ability to truly connect, to read a room, to build trust, to navigate conflict, to offer genuine empathy – remains stubbornly, beautifully human.

And it’s precisely what we’re systematically destroying. If we don’t take action to arrest this dark and deepening trend of digitally supercharged disconnection, the dream of AI and other technologies being used for enlightenment and human flourishing will quickly prove to be a living nightmare.

The present

Image runner’s own

As the walking footballers demonstrate, the physical health benefits of group exercise are sometimes secondary to camaraderie – but winning and hitting goals are also fun and life-affirming. In October, I ran my first half-marathon in under 1 hour and 30 minutes, crossing the line at Walton-on-Thames to complete the River Thames half in 1:29:55. A whole five seconds to spare! I would have been nowhere near that time without Mike.

Mike is a member of the Crisis of Dads, the running group I founded in November 2021. What started as a clutch of portly, middle-aged plodders meeting at 7am every Sunday in Ladywell Fields, in south-east London, has grown to 26 members. Men in their 40s and 50s exercising to limit the dad bod and creating space to chat through things on our minds.

The male suicide rate in the UK in 2024 was 17.1 per 100,000, compared to 5.6 per 100,000 for women, according to the charity Samaritans. Males aged 50-54 had the highest rate: 26.8 per 100,000. Connection matters. Friendship matters. Physical presence matters.

Mike paced me during the River Thames half-marathon. With two miles to go, we were on track to go under 90 minutes, but the pain was horrible. His encouragement became more vocal – and more profane – as I closed in on something I thought beyond my ability.

Sometimes you need someone who believes in your ability more than you do to swear lovingly at you to cross that line quicker.

Work in the last month has been equally high octane, and (excuse the not-so-humble brag) record-breaking – plus full of in-person connection. My fledgling thought leadership consultancy, Pickup_andWebb (combining brand strategy and journalistic expertise to deliver guaranteed ROI – or your money back), is taking flight.

And I’ve been busy moderating sessions at leading technology events across the country, around the hot topic of how to lead and prepare the workforce in the AI age.

Moderating at DTX London (image taken by organisers)

On the main stage at DTX London, I opened by asking the audience – in keeping with the session’s theme of AI readiness – whose workforce was suitably prepared. One person, out of hundreds, stuck their hand up: Andrew Melville, who leads customer strategy for Mission Control AI in Europe. Sportingly, he took the microphone and explained the key to his success.

I caught him afterwards. His confidence wasn’t bravado. Mission Control recently completed a data reconciliation project for a major logistics company. The task involved 60,000 SKUs of inventory data. A consulting firm had quoted two to three months and a few million pounds. Mission Control’s AI configuration completed it in eight hours. Hundreds of times faster, and 80% cheaper.

“You’re talking orders of magnitude,” Andrew said. “We’re used to implementing an Oracle database, and things get 5 or 10% more efficient. Now you’re seeing a thousand times more efficiency in just a matter of days and hours.”

He drew a parallel to the Ford Motor Company’s assembly line. Before that innovation, it took 12 hours to build a car. After? Ninety minutes. Eight times faster. “Imagine being a competitor of Ford,” Andrew said, “and they suddenly roll out the assembly line. And your response to that is: we’re going to give our employees power tools so they can build a few more cars every day.”

That’s what most companies are doing with AI: giving workers ChatGPT subscriptions, hoping for magic, and missing the fundamental transformation required. As I said on stage at DTX London, it’s like handing workers the keys to a Formula 1 car without instructions, then wondering why there are so many immediate and expensive crashes.

“I think very quickly what you’re going to start seeing,” Andrew said, “is executives that can’t visualise what an AI transformation looks like are going to start getting replaced by executives that do.”

At Mission Control, he’s building synthetic worker architectures – AI agents that can converse with each other, collaborate across functions, and complete higher-order tasks. Not just analysing inventory data, but coordinating with procurement systems and finance teams simultaneously.

“It’s the equivalent of having three human experts in different fields,” Andrew explained, “and you put them together and you say, we need you to connect some dots and solve a problem across your three areas of expertise.”
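To make that pattern concrete, here is a minimal, hypothetical sketch of the coordination Andrew describes: multiple specialist agents contributing to one task, each seeing what the others have already said. The class names and message format are mine, not Mission Control’s, and a production system would call an LLM where the comment indicates.

```python
# Illustrative multi-agent sketch – not Mission Control's actual architecture.
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    content: str

@dataclass
class Agent:
    name: str
    expertise: str
    inbox: list = field(default_factory=list)

    def act(self, task: str) -> Message:
        # A real synthetic worker would call an LLM here; we return a canned contribution.
        context = "; ".join(m.content for m in self.inbox)
        return Message(self.name, f"[{self.expertise}] take on '{task}', given: {context or 'nothing yet'}")

def coordinate(agents: list, task: str) -> list:
    """Round-robin coordination: each agent sees earlier contributions before acting."""
    transcript = []
    for agent in agents:
        agent.inbox = list(transcript)  # share the conversation so far
        transcript.append(agent.act(task))
    return transcript

agents = [Agent("A1", "inventory"), Agent("A2", "procurement"), Agent("A3", "finance")]
for msg in coordinate(agents, "reconcile 60,000 SKUs"):
    print(msg.sender, "->", msg.content)
```

The design point is the shared transcript: the finance agent acts with the inventory and procurement views already in hand, which is what lets the trio “connect some dots” across their areas of expertise.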

The challenge is conceptual. How do you lead a firm where human workers and digital workers operate side by side, where the tasks best suited for machines are done by machines and the tasks best suited for humans are done by humans?

This creates tricky questions throughout organisations. Right now, most people are rewarded for being at their desks for 40 hours a week. But what happens when half that time involves clicking around in software tools, downloading data sets, reformatting, and loading back? What happens when AI can do all of that in minutes?

“We have to start abstracting the concept of work,” Andrew said, “and separating all of the tasks that go into creating a result from the result itself.”

Digging into that is for another edition of the newsletter, coming soon. 

Elsewhere, at the first Data Decoded in Manchester, I moderated a 30-minute discussion on leadership in the age of AI. We were just getting going when time was up, which feels very much like 2025. The appetite for genuine insight was palpable. People are desperate for answers beyond the hype. Leaders sense the scale of the shift. However, their calendars still favour show-and-tell over do-and-learn. That will change, but not without bruises.

Also in October, my essay on teenage hackers was finally published in the New Statesman. The main message: we’re criminalising the young people whose skills we desperately need, instead of offering them a path into cybersecurity and related industries – and away from the darker criminal world.

Looking slightly ahead, on 11 November, I’ll be expanding on these AI-related themes, debating at The Portfolio Collective’s Portfolio Career Festival at Battersea Arts Centre. The subject, Unlocking Potential or Chasing Efficiency: AI’s Impact on Portfolio Work, prompts the question: should professionals embrace AI as a tool to amplify skills, creativity and flow, or hand over entire workflows to autonomous agents?

I know which side I’m on. 

(If you fancy listening in and rolling your sleeves up alongside over 200 ambitious professionals – for a day of inspiration, connection and, most importantly, growth – I can help with a discounted ticket. Use OLIVERPCFEST for £50 off the cost here.)

The past

In 2013, I was lucky enough to edit the Six Nations Guide with Lewis Moody, the former England rugby captain, a blood-and-thunder flanker who clocked up 71 caps. At the time, Lewis was a year into retirement, grappling with the physical aftermath of a brutal professional career.

When the tragic news broke earlier in October that Lewis, 47, had been diagnosed with the cruelly life-sapping motor neurone disease (MND), it unleashed an outpouring of sorrow from the rugby community and far beyond. I simply sent him a heart emoji. He texted the same back a few hours later.

Lewis’s hellish diagnosis and the impact it has had on so many feels especially poignant given Princess Catherine’s reflections on childhood development. She writes about a Harvard study showing that “people who developed strong social and emotional skills in childhood maintained warmer connections with their spouses six decades later, even into their eighties and nineties”.

She continued: “Teaching children to better understand both their inner and outer worlds sets them up for a lifetime of healthier, more fulfilling relationships. But if connection is the key to human thriving, we face a concerning reality: every social trend is moving in the opposite direction.”

AI has already changed work. The deeper question is whether we’ll preserve the skills that make us irreplaceably human.

This Hallowe’en, the real horror isn’t monsters at the door. It’s the quiet disappearance of human connection, one algorithmically optimised interaction at a time.

Roosevelt was right. Success depends on getting along with people. Not algorithms. Not synthetic companions. Not virtual influencers.

People.

Real, messy, complicated, irreplaceable people. 

Statistics of the month

💰 AI wage premium grows
Workers with AI skills now earn a 56% wage premium compared to colleagues in the same roles without AI capabilities – showing that upskilling pays off in cold, hard cash. (PwC)

🔄 A quarter of jobs face radical transformation
Roughly 26% of all jobs on Indeed appear poised to transform radically in the near future as GenAI rewrites the DNA of work across industries. (Indeed)

📈 AI investment surge continues
Over the next three years, 92% of companies plan to increase their AI investments – yet only 1% of leaders call their companies “mature” on the deployment spectrum, revealing a massive gap between spending and implementation. (McKinsey)

📉 Workforce reduction looms
Some 40% of employers expect to reduce their workforce where AI can automate tasks, according to the World Economic Forum’s Future of Jobs Report 2025 – a stark reminder that transformation has human consequences. (WEF)

🎯 Net job creation ahead
A reminder that despite fears, AI will displace 92 million jobs but create 170 million new ones by 2030, resulting in a net gain of 78 million jobs globally – proof that every industrial revolution destroys and creates in equal (or greater) measure. (WEF)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 21)


TL;DR: September’s Go Flux Yourself examines the fundamentals of AI success: invest £10 in people for every £1 on technology, build learning velocity into your culture, and show up as a learner yourself. England’s women’s rugby team went from amateurs juggling jobs to world champions through one thing: investing in people.

Image created on Midjourney

The future

“Some people are on [ChatGPT] too much. There are young people who just say ‘I can’t make any decision in my life without telling chat everything that’s going on. It knows me, it knows my friends, I’m going to do whatever it says.’ That feels really bad to me … Even if ChatGPT gives way better advice than any human therapist, there is something about collectively deciding we’re going to live our lives the way that the AI tells us feels bad and dangerous.”

The (unusually long) opening quotation for this month’s Go Flux Yourself comes – not for the first time – from the CEO of OpenAI, Sam Altman, arguably the most influential technology leader right now. How will future history books – if there is anyone with a pulse around to write them – judge the man who allegedly has “no one knows what happens next” as a sign in his office?

The above words come from an interview a few weeks ago, and smack of someone who is deeply alarmed by the power he has unleashed. When Altman starts worrying aloud about his own creation, you’d think more people would pay attention. But here we are, companies pouring millions into AI while barely investing in the people who’ll actually use it.

We’ve got this completely backwards. Organisations are treating AI as a technology problem when it’s fundamentally a people problem. Companies are spending £1 on AI technology when they should spend an additional £10 on people, as Kian Katanforoosh, CEO and Founder of Workera, told me over coffee in Soho a couple of weeks ago.

We discussed the much-quoted MIT research, published a few weeks ago (read the main points without signing up to download the paper in this Forbes piece), which shows that 95% of organisations are failing to achieve a return on investment from their generative AI pilots. Granted, the sample size was only 300 organisations, but that’s a pattern you can’t ignore.

Last month’s newsletter considered the plight of school leavers and university students in a world where graduate jobs have dropped by almost two-thirds in the UK since 2022, and entry-level hiring is down 43% in the US and 67% in the UK since Altman launched ChatGPT in November 2022.

It was easily the most read of all 20 editions of Go Flux Yourself. Why? I think it captured many people’s concerns about how damaging blindly following the AI path could be for human flourishing. If young people are unable to gain employment, what happens to the talent pipeline, and where will tomorrow’s leaders come from? The maths doesn’t work. The logic doesn’t hold. And the consequences are starting to show.

To continue this critically important conversation, I met (keen Arsenal fan) Kian in central London, as he was over from his Silicon Valley HQ. Alongside running Workera – an AI-powered skills intelligence platform that helps Fortune 500 and Global 2000 organisations assess, develop, and manage innovation skills in areas such as AI, data science, software engineering, cloud computing, and cybersecurity – he is an adjunct lecturer in computer science at Stanford University.

“Companies have bought a huge load of technology,” he said. “And now they’re starting to realise that it can’t work without people.”

That’s the pattern repeated everywhere. Buy the tools. Deploy the systems. Wonder why nothing changes. The answer is depressingly simple: your people don’t know how to use what you’ve bought. They don’t have the foundational skills. And when they try, they’re putting you at risk because they don’t know what they’re uploading to these tools.

This is wrongheaded. We’ve treated AI like it’s just another software rollout when it’s closer to teaching an entire workforce a new language. And business leaders have to invest significantly more in their current and future human workforce to maximise the (good) potential of AI and adjacent technologies, or everyone fails. Updated leadership thinking is paramount to success.

McKinsey used to advocate spending $1 (or £1) on technology for every $1 / £1 on people. Then, last year, the company revised it: £1 on technology, £3 on people. “Our experience has shown that a good rule of thumb for managing gen AI costs is that for every $1 spent on developing a model, you need to spend about $3 for change management. (By way of comparison, for digital solutions, the ratio has tended to be closer to $1 for development to $1 for change management.)”

Kian thinks this is still miles off what should be spent on people. “I think it’s probably £1 in technology, £10 in people,” he told me. “Because when you look at AI’s potential productivity enhancements on people, even £10 in people is nothing.”

That’s not hyperbole. That’s arithmetic based on what he sees daily at Workera. Companies contact him, saying they’ve purchased 25 different AI agents and software packages, but employee usage starts strong for a week and then collapses. What’s going on? The answer is depressingly predictable.

“Your people don’t even know how to use that technology. They don’t even have the 101 skills to understand how to use it. And even when they try, they’re putting you (the organisation) at risk because they don’t even know what they’re uploading to these tools.”

One of the main things Workera offers is an “AI-readiness test”, and Kian’s team’s findings uncover a worrying truth: right now, outside tech companies, only 28 out of 100 people are AI-ready. That’s Workera’s number, based on assessing thousands of employees in the US and elsewhere. In tech companies, the readiness rate is over 90%, which is perhaps unsurprising. The gap between tech-industry businesses and everyone else is already a chasm – and it is growing.

But here’s where it gets really interesting. Being AI-ready today means nothing if your learning velocity is too slow. The technology changes every month. New capabilities arrive. Old approaches become obsolete. Google just released Veo, which means anyone can become a videographer. Next month, there’ll be something else.

“You can be ahead today,” Kian said. “If your learning velocity is low, you’ll be behind in five years. That’s what matters at the end of the day.”

Learning velocity. I liked that phrase. It captures something essential about this moment: that standing still is the same as moving backwards, that capability without adaptability is a temporary advantage at best.

However, according to Kian, the UK and Europe are already starting from behind, as his data shows a stark geographic divide in AI readiness. American companies – even outside pure tech firms – are moving faster on training and adoption. European organisations are more cautious, more bound by regulatory complexity, and more focused on risk mitigation than experimentation.

“The US has a culture of moving fast and breaking things,” Kian said. “Europe wants to get it right the first time. That might sound sensible, but in AI, you learn by doing. You can’t wait for perfect conditions.”

He pointed to the EU AI Act as emblematic of the different approaches. Comprehensive regulation arrived before widespread adoption. In the US, it’s the reverse: adoption at scale, regulation playing catch-up. Neither approach is perfect, but one creates momentum while the other creates hesitation.

The danger isn’t just that European companies fall behind American competitors. It’s that European workers become less AI literate, less adaptable, and less valuable in a global labour market increasingly defined by technological fluency. The skills gap becomes a prosperity gap.

“If you’re a European company and you’re waiting for clarity before you invest in your people’s AI skills, you’ve already lost,” Kian said. “Because by the time you have clarity, the game has moved on.”

Fresh research backs this up. (And a note on the need for the latest data – as a client told me a few days ago, data is like milk, and it has a short use-by date. I love that metaphor.) A new RAND Corporation study examining AI adoption across healthcare, financial services, climate and energy, and transportation found something crucial: identical AI technologies achieve wildly different results depending on the sector. A chatbot in banking operates at a different capability level than the same technology in healthcare, not because the tech differs but because the context, regulatory environment, and implementation constraints differ.

RAND proposes five levels of AI capability.

Level 1 covers basic language understanding and task completion: chatbots, simple diagnostic tools, and fraud detection. Humanity has achieved this.

Level 2 involves enhanced reasoning and problem-solving across diverse domains: systems that analyse complex scenarios and draw inferences. We’re emerging into this now.

Level 3 is sustained autonomous operation in complex environments, where systems make sequential decisions over time without human intervention. That’s mainly in the future, although Waymo’s robotaxis and some grid management pilots are testing it.

Levels 4 and 5 – creative innovation and full organisational replication – remain theoretical.

Here’s what matters: most industries currently operate at Levels 1 and 2. Healthcare lags behind despite having sophisticated imaging AI, as regulatory approval processes and evidence requirements slow down adoption. Finance advances faster because decades of algorithmic trading have created infrastructure and acceptance. Climate and energy sit in the middle, promising huge optimisation gains but constrained by infrastructure build times and regulatory uncertainty. Transportation is inching toward Level 3 autonomy while grappling with ethical dilemmas about life-or-death decisions.

The framework reveals why throwing technology at problems doesn’t work. You can’t skip levels. You can’t buy Level 3 capability and expect it to function in an organisation operating at Level 1 readiness. The gap between what the technology can do and what your people can do with it determines the outcome.
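
To make that concrete, here is a minimal sketch of the constraint. The five levels are RAND’s; the min() rule is my own simplification of “you can’t skip levels”, not anything RAND publishes as code.

# The capability levels follow RAND's framework; the min() rule is my
# own simplification, not RAND's code.
RAND_LEVELS = {
    1: "Basic language understanding and task completion",
    2: "Enhanced reasoning and problem-solving across domains",
    3: "Sustained autonomous operation in complex environments",
    4: "Creative innovation",
    5: "Full organisational replication",
}

def effective_capability(tool_level: int, workforce_level: int) -> int:
    """Outcomes are capped by whichever is lower: the tech or the people."""
    return min(tool_level, workforce_level)

# Buying Level 3 technology for a Level 1 organisation yields Level 1 results.
level = effective_capability(tool_level=3, workforce_level=1)
print(f"Effective capability: Level {level} - {RAND_LEVELS[level]}")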

RAND identified six challenges that cut across every sector: workforce transformation, privacy protection, algorithmic bias, transparency and oversight, disproportionate impacts on smaller organisations, and energy consumption. Small institutions serving rural and low-income areas face particular difficulties. They lack resources and technical expertise. The benefits of AI concentrate among major players, while vulnerabilities accumulate at the edges.

For instance, the algorithmic bias problem is insidious. Even without explicitly considering demographic characteristics, AI systems exhibit biases. Financial algorithms can devalue real estate in vulnerable areas. Climate models might overlook impacts on marginalised communities. The bias creeps in through training data, through proxy variables, through optimisation functions that encode existing inequalities.

Additionally, and as I’ve written about previously, the energy demands are staggering. AI’s relationship with climate change cuts both ways. Yes, it optimises grids and accelerates the development of green technology. However, if AI scales productivity across the economy, it also scales emissions, unless we intentionally direct applications toward efficiency gains and invest heavily in clean energy infrastructure. The transition from search-based AI to generative AI has intensified computational requirements. Some experts argue potential efficiency gains could outweigh AI’s carbon footprint, but only if we pursue those gains deliberately through measured policy and investment rather than leaving it to market forces.

RAND’s conclusion aligns with everything Kian told me: coordination is essential, both domestically and internationally. Preserve optionality through pilot projects and modular systems. Employ systematic risk management frameworks. Provide targeted support to smaller institutions. Most importantly, invest in people at a ratio that reflects the actual returns.

The arithmetic remains clear across every analysis: returns on investing in people dwarf the costs. But we’re not doing it.

How, though, do you build learning velocity into an organisation? Kian had clear thoughts on this. Yes, you need to dedicate time to learning. Ten per cent of work time isn’t unreasonable. But the single most powerful thing a leader can do is simpler than that: lead by example.

“Show up as a learner,” he said. “If your manager, or your manager’s manager, or your manager’s manager’s manager is literally showing you how they learn and how much time they spend learning and how they create time for learning, that is already enough to create a mindset shift in the employee base.”

Normalising learning, then, is vital. That shift in culture matters more than any training programme you can buy off the shelf.

We talked about Kian’s own learning habits. Every morning starts with reading. He’s curated an X feed of people he trusts who aren’t talking nonsense, scans it quickly, and bookmarks what he wants to read in depth at night. He tracks the top AI conferences, skims the papers they accept – thousands of them – looking at figures and titles to get the gist. Then he picks 10% to read more carefully, and maybe 3% to spend an entire day on. “You need to have that structure or else it just becomes overwhelming,” he said.
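
For the programmers among you, that funnel can be sketched in a few lines of Python. The 10% and 3% proportions are Kian’s; the paper list and the skim-scoring below are invented purely for illustration.

# A toy sketch of the paper-triage funnel described above.
import random

papers = [f"paper_{i}" for i in range(2_000)]  # stand-in for a conference's accepted papers

def skim_score(paper: str) -> float:
    """Stand-in for skimming a paper's title and figures for relevance."""
    return random.random()

ranked = sorted(papers, key=skim_score, reverse=True)
read_carefully = ranked[: int(len(ranked) * 0.10)]  # ~10%: read properly
deep_dives = ranked[: int(len(ranked) * 0.03)]      # ~3%: spend a day on each

print(f"{len(read_carefully)} to read carefully; {len(deep_dives)} deep dives")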

The alternative is already playing out, and it’s grim. Some people – particularly young people – are on ChatGPT too much, as Altman admitted. They can’t make any decision without consulting the chatbot. It knows them, knows their friends, knows everything. They’ll do whatever it says.

Last month, Mustafa Suleyman, Co-Founder of DeepMind and now in charge of AI at Microsoft, published an extended essay about what he calls “seemingly conscious AI”: systems that exhibit all the external markers of consciousness without possessing it. He thinks we’re two to three years away from having the capability to build such systems using technology that already exists.

“My central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship,” he wrote.

Researchers working on consciousness tell him they’re being inundated with queries from people asking whether their AI is conscious, whether it’s acceptable to love it, and what it means if it is. The trickle has become a flood.

Tens of thousands of users already believe their AI is God. Others have fallen in love with their chatbots. Indeed, a Harvard Business Review survey of 6,000 regular AI users – the results of which were published in April (so how stale is the milk?) – found that companionship and therapy were the most common use cases.

This isn’t speculation about a distant future. This is happening now. And we’re building the infrastructure – the long memories, the empathetic personalities, the claims of subjective experience – that will make these illusions even more convincing.

Geoffrey Hinton, the so-called godfather of AI, who won the Nobel Prize last year, told the Financial Times in a fascinating lunch profile published in early September that “rich people are going to use AI to replace workers. It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer. That’s not AI’s fault, that is the capitalist system.”

Dark, but there’s something clarifying about his honesty. The decisions we make now about how to implement AI, whether to invest in people or just technology, whether to prioritise adoption or understanding – these will shape what comes next.

The Adaptavist Group’s latest report, published last week, surveyed 900 professionals responsible for introducing AI across the UK, US, Canada and Germany. It found a divide: 42% believe their company’s AI claims are over-inflated. Among these “AI sceptics”, 65% believe their company’s AI stance puts customers at risk, 67% worry that AI adoption poses a threat to jobs, and 59% report having no formal AI training.

By contrast, AI leaders in companies that communicated AI’s value honestly reported far greater benefits: 58% say AI has improved work quality, 61% report time savings, and 48% note increased output. Only 37% worry about ethics issues, compared with 74% in over-hyped environments.

The difference? Training. Support. Honest communication. Investing in people rather than just technology.

Companies are spending between £1 million and £10 million implementing AI. Some are spending over £10 million. But 59% aren’t providing basic training. It’s like buying everyone in your company a Formula One car and being shocked when most people crash it.

“The next year is all going to be about adoption, skills, and doing right by employees,” Kian said. “Companies that do it well are going to see better adoption and more productivity. Those who don’t? They’re going to get hate from their employees. Like literally. Employees will be really mad at companies for not being human at all.”

That word – human – kept coming up in our conversation. In a world increasingly mediated by AI, being human becomes both more difficult and more essential. The companies that remember this, that invest in their people’s ability to learn, adapt, and think critically, will thrive. The ones that don’t will wonder why their expensive AI implementations gather digital dust.

The present

Image created on Midjourney

On Thursday (October 2), I’ll be at DTX London moderating a main-stage session asking: is your workforce ready for what’s next? The questions we’ll tackle include how organisations can create inclusive, agile workplaces that foster belonging and productivity, how AI will change entry-level positions, and crucially, how we safeguard critical thinking in an AI-driven world. These are urgent, practical challenges that every organisation faces right now. (I’ll also be recording episode three of DTX Unplugged, the new podcast series I co-host, looking at business evolution – listen to the series so far here.)

Later in October, on the first day of the inaugural Data Decoded in Manchester (October 21-22), I’ll moderate another session on a related topic: what leadership looks like in a world of AI, because leadership must evolve. The ethical responsibilities are staggering. The pace of change is relentless. And the old playbooks simply don’t work.

I’ve also started writing the Go Flux Yourself book (any advice on self-publishing welcome). More on that soon. The conversations I’m having, the research I’m doing, the patterns I’m seeing all point towards something bigger than monthly newsletters can capture. We’re living through a genuine transformation, and I’m in a unique and privileged position to document what it feels like from the inside rather than just analysing it from the outside.

The responses to last month’s newsletter on graduate jobs and universities showed me how hungry people are for honest conversations about what’s really happening, on the ground and behind the numbers. Expect more clear-eyed analysis of where we are and what we might do about it. And please do reach out if you think you can contribute to this ongoing discussion, as I’m open to featuring interviewees in the newsletter (and, in time, the book).

The past

Almost exactly two years ago, I took my car for its annual service at a garage at Elmers End, South East London. While I waited, I wandered down the modest high street and discovered a Turkish café. I ordered a coffee, a lovely breakfast (featuring hot, gooey halloumi cheese topped with dripping honey and sesame seeds) and, on a whim, had my tarot cards read by a female reader at the table opposite. We talked for 20 minutes, and it changed my life (see more on this here, in Go Flux Yourself No.2).

A couple of weeks ago, I returned for this year’s car service. The café is boarded up now, alas. A blackboard dumped outside showed the old WiFi password: kate4cakes. Another casualty of our changing times, a small loss in the great reshuffling of how we live, work, and connect with each other. With autumn upon us, the natural state of change and renewal is fresh in the mind. However, it still saddened me as I pondered what the genial Turkish owner and his family were doing instead of running the café.

Autumn has indeed arrived. Leaves are twisting from branches and falling to create a multicoloured carpet. But what season are we in, really? What cycle of change?

I thought about that question as I watched England’s women’s rugby team absolutely demolish Canada 33-13 in the World Cup final at Twickenham last Saturday, with almost 82,000 people in attendance – a world record. The Red Roses had won all 33 games since their last World Cup defeat, the final against New Zealand’s Black Ferns.

Being put through my paces with Katy Mclean (© Tina Hillier)

In July 2014, I trained with the England women’s squad for pieces I wrote for the Daily Telegraph (“The England women’s rugby team are tougher than you’ll ever be”) and the Financial Times (“FT Masterclass: Rugby training with Katy Mclean” (now Katy Daley-McLean)). They weren’t professional then. They juggled jobs with their international commitments. Captain Katy Daley-McLean was a primary school teacher in Sunderland. The squad included policewomen, teachers, and a vet. They spent every spare moment either training or playing rugby.

I arrived at Surrey Sports Park in Guildford with what I now recognise was an embarrassing air of superiority. I’m bigger, stronger, faster, I thought. I’d played rugby at university. Surely I could keep up with these amateur athletes.

The England women’s team knocked such idiotic thoughts out of my head within minutes.

We started with touch rugby, which was gentle enough. Then came sprints. I kept pace with the wingers and fullbacks for the first four bursts, then tailed off. “Tactically preserving my energy,” I told myself.

Then strength and conditioning coach Stuart Pickering barked: “Malcolms next.”

Katy winked at me. “Just make sure you keep your head up and your hands on your hips. If you show signs of tiredness, we will all have to do it again … so don’t.”

Malcolms – a rugby league drill invented by the evidently sadistic Malcolm Reilly – involve lying face down with your chin on the halfway line, pushing up, running backwards to the 10-metre line, going down flat again, pushing up, sprinting to the far 10-metre line. Six times.

By the fourth repetition, I was blowing hard. By the final one, I was last by some distance, legs burning, expelling deeply unattractive noises of effort. The women, heads turned to watch me complete the set, cheered encouragement rather than jeered. “Suck it up Ollie, imagine it’s the last five minutes of the World Cup final,” full-back Danielle Waterman shouted.

Then came the circuit training. Farmers’ lifts. Weights on ropes. The plough. Downing stand-up tackle bags. Hit and roll. On and on we moved, and as my energy levels dipped uncomfortably low, it became a delirious blur.

The coup de grâce was wrestling the ball off 5ft 6in fly-half Daley-McLean. I gripped as hard as I could. She stole it from me within five seconds. Completely zapped, I couldn’t wrest it back. Not to save my life.

Emasculated and humiliated, I feigned willingness to take part in the 40-minute game that followed. One of the coaches tugged me back. “I don’t think you should do this mate … you might actually get hurt.”

I’d learned my lesson. These women were tougher, fitter, and more disciplined than I’d ever be.

That was 2014. The England women, who went on to win the World Cup in France that year, didn’t have professional contracts. They squeezed their training around their jobs. Yet they were world-class athletes who’d previously reached three consecutive World Cup finals, losing each time to New Zealand.

Then something changed. The Rugby Football Union invested heavily. The women’s team went professional. They now have the same resources, support systems, and infrastructure as the men’s team.

The results speak for themselves. Thirty-three consecutive victories. A World Cup trophy, after two more final defeats to New Zealand. Record crowds. A team that doesn’t just compete but dominates.

This is what happens when you invest in people, providing them with the training, resources, time, and support they need to develop their skills. You treat them not as amateur enthusiasts fitting excellence around the edges of their lives, but as professionals whose craft deserves proper investment.

The parallels to AI adoption are striking. Right now, most organisations are treating their workers like those 2014 England rugby players and expecting them to master AI in their spare time. To become proficient without proper training. To deliver world-class results with amateur-level support.

It’s not going to work.

The England women didn’t win that World Cup through superior technology. They won it through superior preparation. Through investment in people, in training, and in creating conditions for excellence to flourish.

That’s the lesson for every organisation grappling with AI. Technology is cheap. Talent is everything. Training matters more than tools. And if you want your people to keep pace with change, you need to create a culture where learning isn’t a luxury but the whole point.

As Kian put it: “We need to move from prototyping to production AI. And you need 10 times more skills to put AI in production reliably than you need to put a demo out.”

Ten times the skills, and £10 spent on people for every £1 on technology. The arithmetic isn’t complicated. The will to act on it is what’s missing.

Statistics of the month

📈 Sick days surge
Employees took an average of 9.4 days off sick in 2024, compared with 5.8 days before the pandemic in 2019 and 7.8 days just two years ago. (CIPD)

📱 Daily exposure
Children are exposed to around 2,000 social media posts per day. Over three-quarters (77%) say it harms their physical or emotional health. (Sway.ly via The Guardian)

📉 UK leadership crisis
UK workers’ confidence in their company leaders has plummeted from 77% to 67% between 2022 and 2025 – well below the global average of 73% – while motivation fell from 66% to just 60%. (Culture Amp)

🎯 L&D budget reality
Despite fears that AI could replace their roles entirely (43% of L&D leaders believe this), learning and development budgets are growing: 70% of UK organisations and 84% in Australia/New Zealand increased L&D spending in 2025. (LearnUpon)

🔒 Email remains the weakest link
83% of UK IT leaders have faced an email-related security incident, with government bodies hit hardest at 92%. Yet email still carries over half (52%) of all organisational communication. (Exclaimer UK Business Email Report)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 18)

TL;DR: June’s edition explores truth-telling in an age of AI-generated misinformation, the flood of low-quality content threatening authentic expertise, and why human storytelling becomes our most valuable asset when everything else can be faked – plus some highlights from South by Southwest London.

Image created on Midjourney

The future

“When something is moving a million times every 10 years, there’s only one way that you can survive it. You’ve got to get on that spaceship. Once you get on that spaceship, you’re travelling at the same speed. When you’re on the rocket ship, all of a sudden, everything else slows down.”

Nvidia CEO Jensen Huang’s words, delivered at London Tech Week earlier this month alongside Prime Minister Keir Starmer, capture the current state of artificial intelligence. We are being propelled by technological change at an unprecedented speed, orders of magnitude quicker than Moore’s law, and it feels alien and frightening.

Before setting foot on the rocket ship, though, the first barrier to overcome for many is trust in AI. Indeed, AI is advancing so rapidly that, for plenty of people, the potential for missed or hidden consequences is alarming enough to prompt a hard brake – or a refusal to climb aboard at all.

Others understand the threats but focus on the opportunities promised by AI and are jostling for position, bracing for warp speed. Nothing will stop them, but at what cost to society?

For example, we’re currently witnessing two distinct trajectories for the future of online content and, to some extent, services. One leads towards an internet flooded with synthetic mediocrity and, worse, untrustworthy information; the other towards authentic human expertise becoming our most valuable currency.

Because the truth crisis has already landed, and AI is taking over, attacking the veracity of, well, everything we read and much of what we see on a screen. 

In May, NewsGuard, which provides data to help identify reliable information online, identified 1,271 AI-generated news and information sites across 16 languages, operating with little to no human oversight, up from 750 last year.

It’s easy not to see this as you pull on your astronaut helmet and space gloves, but this is an insidious, industrial-scale production of mediocrity. Generative AI, fed on historical data, produces content that reflects the average of what has been published before, offering no new insights, lived experiences, or authentic perspectives. The result is an online world increasingly polluted with bland, sourceless, soulless and often inaccurate information. The slop is only going to get sloppier, too. What does that mean for truth and, yes, trust?

The 2025 State of AI in Marketing Report, published by HubSpot last week, reveals that 84% of UK marketers now use AI tools daily in their roles, compared to a global average of 66%.

Media companies are at risk of hosting, citing, and copying the marketing content. Some are actively creating it while swinging the axe liberally, culling journalists, and hacking away at integrity. 

The latest Private Eye reported how Piers North, CEO of Reach – struggling publisher of the Mirror, Express, Liverpool Echo, Manchester Evening News, and countless other titles – has a “cunning plan: to hand it all over to the robots to sort out”. 

According to the magazine, North told staff: “It feels like we’re on the cusp of another digital revolution, and obviously that can be pretty daunting, but here I think we’ve got such an opportunity to do more of the stuff we love and are brilliant at. So with that in mind, you won’t be surprised to hear that embracing AI is going to feature heavily in my strategic priorities.”

The incentive structure is clear: publish as much as possible and as quickly as possible to attract traffic. Quality, alas, becomes secondary to volume.

But this crisis creates opportunity. Real expertise becomes more valuable precisely because it’s becoming rarer. The brands and leaders who properly emphasise authentic human knowledge will enjoy a competitive advantage over competitors drowning in algorithmic sameness, now and in the future.

What does this mean for our children? They’re growing up in a world where they’ll need to become master detectives of truth. The skills we took for granted – being able to distinguish reliable sources from unreliable ones and recognising authentic expertise from synthetic mimicry – are becoming essential survival tools. 

They’ll need to develop what we might call “truth literacy”: the ability to trace sources, verify claims, and distinguish between content created by humans with lived experience and content generated by algorithms with training data.

This detective work extends beyond text to every form of media. Deepfakes are becoming indistinguishable from reality. Voice cloning requires just seconds of audio. Even video evidence can no longer be trusted without verification.

The implications for work – and, well, life – are profound. For instance, with AI agents being the latest business buzzword, Khozema Shipchandler, CEO of global cloud communications company Twilio, shared with me how their technology is enabling what he calls “hyper-personalisation at scale”. But the key discovery isn’t the technology itself; it’s how human expertise guides its application.

“We’re not trying to replace human agents,” Khozema told me. “We’re creating experiences where virtual agents handle lower complexity interactions but can escalate seamlessly to humans when genuine expertise is needed.”

He shared a healthcare example. Cedar Health, based in the United States, found that 97% of patient inquiries were related to a lack of understanding of bills. However, patients initially preferred engaging with AI agents because they felt less embarrassed about gaps in their medical terminology. The AI could process complex insurance data instantly, but when nuanced problem-solving was required, human experts stepped in with full context.

In this case, man and machine are working together brilliantly. As Shipchandler put it: “The consumer gets an experience where they’re being listened to all the way through, they’re getting accuracy because everything gets recapped, and they’re getting promotional offers that aren’t annoying because they reference things they’ve actually done before.”

The crucial point, though, is that none of this works without human oversight, empathy, and strategic thinking. The AI handles the data processing; humans provide the wisdom.

Jesper With-Fogstrup, Group CEO of Moneypenny, a telephone answering service, echoed this theme from a different angle. His global company has been testing AI voice agents for a few months, handling live calls across various industries. The early feedback has been mixed, but revealing.

“Some people expect it’s going to be exactly like talking to a human,” With-Fogstrup told me in a cafe down the road from Olympia, the venue for London Tech Week. “It just isn’t. But we’re shipping updates to these agents every day, several times a day. They’re becoming better incredibly quickly.”

What’s fascinating is how customers reveal more of themselves to AI agents than they do to human agents. “There’s something about being able to have a conversation for a long time,” Jesper observed. “The models are very patient. Sometimes that’s what’s required.”

But again, the sweet spot isn’t AI replacing humans. It’s AI handling routine complexity so humans can focus on what they do uniquely well. As Jesper explained: “If it escalates into one of our Moneypenny personal assistants, they get a summary, they can pick up the conversation, they understand where it got stuck, and they can resolve the issue.”

The future of work, then, isn’t about choosing between human and artificial intelligence. It’s about designing systems where each amplifies the other’s strengths while maintaining the ability to distinguish between them.

Hilary Cottam’s research for her new book, The Work We Need, arrives at the same conclusion from a different direction. After interviewing thousands of workers, from gravediggers to the Microsoft CEO, she identified six principles for revolutionising work: 

  • Securing the basics
  • Working with meaning
  • Tending to what sustains us
  • Rethinking our use of time
  • Enabling play
  • Organising in place

Work, Cottam argues, is “a sort of chrysalis in which we figure out who we are and what we’re doing here, and what we should be doing to be useful”. That existential purpose can’t be automated away.

The young female welder Cottam profiled, working on nuclear submarines for BAE in Barrow-in-Furness, exemplifies this. She and her colleagues are “very, very convinced that their work is meaningful, partly because they’re highly skilled. And what’s very unusual in the modern workplace is that a submarine takes seven years to build, and most of the teamwork on that submarine is end-to-end.”

This is the future we should be building towards: AI handling the routine complexity, humans focusing on meaning and purpose, and the irreplaceable work of creating something that lasts. But we must teach our children how to distinguish between authentic human expertise and sophisticated synthetic imitation. Not easy.

Meanwhile, the companies already embracing this approach are seeing remarkable results. They’re not asking whether AI will replace humans, but how human expertise can be amplified by AI to create better outcomes for everyone while maintaining transparency about when and how AI is being used.

As Huang noted in his conversation with the Prime Minister: “AI is the great equaliser. The new programming language is called ‘human’. Anybody can learn how to program in AI.”

But that democratisation only works if we maintain the distinctly human capabilities that give that programming direction, purpose, and wisdom. The rocket ship is accelerating. Will we use that speed to amplify human potential or replace it entirely?

The present

At the inaugural South by Southwest London, held in Shoreditch, East London, at the beginning of June, I witnessed fascinating tensions around truth-telling that illuminate our current moment. The festival brought together storytellers, technologists, and pioneers, each grappling with how authentic voices survive in an increasingly synthetic world. Here are some of my highlights.

Image created on my iPhone

Tina Brown, former editor-in-chief of Tatler, Vanity Fair, The New Yorker, and The Daily Beast, reflecting on journalism’s current challenges, offered a deceptively simple observation: “To be a good writer, you have to notice things.” In our AI-saturated world, this human ability to notice becomes invaluable. While algorithms identify patterns in data, humans notice what’s missing, what doesn’t fit, and what feels wrong.

Brown’s observation carries particular weight, given her experience navigating media transformation over the past five decades. She has watched industries collapse and rebuild, seen power structures shift, and observed how authentic voices either adapt or fade away.

“Legacy media itself is reinventing itself all over the place,” she said. “They’re all trying to do things differently. But what you really miss in these smaller platforms is institutional backing. You need good lawyers, institutional backing for serious journalism.”

This tension between democratised content creation and institutional accountability sits at the heart of our current crisis. Anyone can publish anything, anywhere, anytime. But who ensures accuracy? Who takes responsibility when misinformation spreads? Who has the resources to fact-check, verify sources, and maintain standards?

This is a cultural challenge, as well as a technical one. When US President Donald Trump can shout down critics with “fake news”, and seemingly run a corrupt government – the memecoin $TRUMP and his involvement with World Liberty Financial have reportedly raised over half a billion dollars, and there was the $400m (£303m) gift of a new official private jet from Qatar, among countless other questionable dealings – what does that mean for the rest of us?

Brown said: “The incredible thing is that the US President … doesn’t care how bad it looks. The first term was like, well, the president shouldn’t be making money out of himself. All that stuff is out of the window.”

When truth-telling itself becomes politically suspect, when transparency is viewed as a weakness rather than a strength, the work of authentic communication becomes both more difficult and more essential.

This dynamic played out dramatically in the spy world, as Gordon Corera, the BBC’s Security Correspondent, and former CIA analyst David McCloskey revealed during a live recording of their podcast, The Rest is Classified, about intelligence operations. The most chilling story they shared wasn’t about sophisticated surveillance or cutting-edge technology. It was about children discovering their parents’ true identities only when stepping off a plane in Moscow, greeted by Vladimir Putin himself.

Imagine learning that everything you believed about your family, your identity, and your entire childhood was constructed fiction. These children of deep-cover Russian operatives lived authentic lives built on complete deception. The psychological impact, as McCloskey noted, requires “all kinds of exotic therapies”.

Just imagine. Those children will have gone past the anger about being lied to and crashed into devastation, having had their sense of reality torpedoed. When the foundation of truth crumbles, it’s not simply the facts that disappear: it’s the ability to trust anything, anywhere, ever again.

This feeling of groundlessness is what our children risk experiencing if we don’t teach them how to navigate an increasingly synthetic information environment. 

The difference is that while those Russian operatives’ children experienced one devastating revelation, our children face thousands of micro-deceptions daily: each AI-generated article, each deepfake video, each synthetic voice clip eroding their ability to distinguish real from artificial.

Zelda Perkins, speaking about whistleblowing at SXSW London, captured something essential about the courage required to tell brutal truths. When she broke her NDA to expose Harvey Weinstein’s behaviour and detonate the #MeToo movement in 2017, she was trying to dismantle an institution that enables silence rather than bringing down a powerful man. “The problem wasn’t really Weinstein,” she emphasised. “The problem is the system. The problem is these mechanisms that protect those in power.”

Her most powerful reflection was that she has no regrets about speaking out and telling the truth despite the unimaginable impact on her career and beyond. “My life has been completely ruined by speaking out,” she said. “But I’m honestly not sure I’ve ever been more fulfilled. I’ve never grown more, I’ve never learned more, I’ve never met more people with integrity.”

I’m reminded of a quote from Jesus in the Bible (John 8:32 – and, yes, I had to look that up, of course): “And ye shall know the truth and the truth shall make you free.”

Truth can set you free, but it may come at a cost. This paradox captures something essential about truth-telling in our current moment. Individual courage matters, but systemic change requires mass action. As Perkins noted: “Collective voice is the most important thing for us right now.”

Elsewhere at SXSW London, the brilliantly named Mo Brings Plenty – an Oglala Lakota television, film, and stage actor (Mo in Yellowstone) – spoke with passion about Indigenous perspectives. “In our culture, we talk about the next seven generations,” he said. “What are we going to pass on to them? What do we leave behind?”

This long-term thinking feels revolutionary in our culture of instant gratification. Social media rewards immediate engagement. AI systems optimise for next-click prediction. Political cycles focus on next-election victories.

But authentic leaders think in generations, not quarters. They build systems that outlast their own tenure. They tell truths that may be uncomfortable now but are necessary for future flourishing.

The creative community at SXSW London embodied this thinking. Whether discussing children’s environmental education or music’s power to preserve cultural memory, artists consistently framed their work in terms of legacy and impact beyond immediate success.

As Dr Deepak Chopra noted in the “Love the Earth” session featuring Mo Brings Plenty: “Protecting our planet is something we can all do joyfully with imagination and compassion.”

This joyful approach to brutal truths offers a template for navigating our current information crisis. We don’t need to choose between honesty and hope. We can tell hard truths while building better systems and expose problems while creating solutions.

The key is understanding that truth-telling isn’t about punishment or blame. It’s about clearing space for authentic progress that will precipitate the flourishing of humanity, not its dulling.

The (recent) past

Three weeks ago, I took a 12-minute Lime bike (don’t worry, I have a clever folding helmet and never run red lights) from my office in South East London to Goldsmiths, University of London. I spoke to a room full of current students, recent graduates, and business leaders, delivering a keynote titled: “AI for Business Success: Fostering Human Connection in the Digital Age.” The irony wasn’t lost on me: here I was, using my human capabilities to argue for the irreplaceable value of human connection in an age of AI.

Image taken by my talented friend Samer Moukarzel

The presentation followed a pattern that I had been perfecting over the past year. I began with a simple human interaction: asking audience members to turn to each other and share their favourite day of the week and favourite time of that day. (Tuesday at 8.25pm, before starting five-a-side footie, for me.) It triggered a minute or two of genuine curiosity, slight awkwardness, perhaps a shared laugh or unexpected discovery.

That moment captures everything I’m trying to communicate. While everyone obsesses over AI’s technical capabilities, we’re forgetting that humans crave connection, meaning, and the beautiful unpredictability of authentic interaction.

A week or so later, for Business and IP Centre (BIPC) Lewisham, I delivered another presentation: “The Power of Human-Led Storytelling in an AI World.” This one was delivered over Zoom, and the theme remained consistent, but the context shifted. These were local business leaders, many of whom were struggling with the same questions. How do we stay relevant? How do we compete with automated content? How do we maintain authenticity in an increasingly synthetic world?

Both presentations built on themes I’ve been developing throughout this year of Go Flux Yourself. The CHUI framework, the concept of being “kind explorers”, the recognition that we’re living through “the anti-social century”, where technology promises connection but often delivers isolation.

But there’s something I’ve learned from stepping onto stages and speaking directly to people that no amount of writing can teach: the power of presence. When you’re standing in front of an audience, there’s no algorithm mediating the exchange. No filter softening hard-to-hear truths, and no AI assistant smoothing rough edges.

You succeed or fail based on your ability to read the room, adapt in real time, and create a genuine connection. These are irreplaceable human skills that become more valuable as everything else becomes automated.

The historical parallel keeps returning to me. On June 23, I delivered the BIPC presentation on what would have been Alan Turing’s 113th birthday. The brilliant mathematician whose work gave rise to modern computing and AI would probably be fascinated – and perhaps concerned – by what we’ve done with his legacy.

I shared the myth that Apple’s bitten logo was supposedly Steve Jobs’ tribute to Turing, who tragically died after taking a bite from a cyanide-laced apple. It’s compelling and poetic, connecting our digital age to its origins. There’s just one problem: it’s entirely false.

Rob Janoff, who designed the logo, has repeatedly denied any homage to Turing. Apple itself has stated there’s no link. The bite was added so people wouldn’t mistake the apple for a cherry. Sometimes, the mundane truth is just mundane.

But here’s why I started with this myth: compelling narratives seem more important than accurate ones, and everything is starting to sound exactly the same because algorithms are optimised for engagement over truth.

As I’ve refined these talks over the past months, I’ve discovered that as our environment becomes increasingly artificial, the desire for authentic interaction grows stronger. The more content gets automated, the more valuable genuine expertise becomes. The more relationships are mediated by algorithms, the more precious unfiltered, messy human connections feel.

That’s the insight I’ll carry forward into the second half of 2025. Not that we should resist technological change, but that we should use it to amplify our most human capabilities while teaching our children how to be master detectives of truth in an age of synthetic everything, and encouraging them to experiment, explore, and love.

Statistics of the month

💼 Executive AI race
Almost two-thirds (65%) of UK and Irish CEOs are actively adopting AI agents, with 58% pushing their organisations to adopt Generative AI faster than people are comfortable with. Two-thirds confirm they’ll take more risks than the competition to stay competitive. 🔗

📧 The infinite workday
Microsoft’s 2025 Annual Work Trend Index Report reveals employees are caught in constant churn, with 40% triaging emails by 6am, receiving 117 emails and 153 chats daily. Evening meetings after 8pm are up 16% year-over-year, and weekend work continues rising. 🔗

🤖 AI trust paradox
While IBM replaced 94% of HR tasks with AI, many executives have serious reservations. Half (51%) don’t trust AI fully with financial decision-making, and 22% worry about data quality feeding AI models. 🔗

📉 Gender gap persists
The World Economic Forum’s 2025 Global Gender Gap Report shows 68.8% of the gap closed, yet full parity remains 123 years away. Despite gains in health and education, economic and political gaps persist. 🔗

Unemployment warning
Anthropic CEO Dario Amodei predicts AI could eliminate half of all entry-level white-collar jobs and send unemployment rocketing to 20% within five years. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 11)

TL;DR: November’s Go Flux Yourself channels the wisdom of Marcus Aurelius to navigate the AI revolution, examines Nvidia’s bold vision for an AI-dominated workforce, unpacks Australia’s landmark social media ban for under-16s, and finds timeless lessons in a school friend’s recovery story about the importance of thoughtful, measured progress …

Image created on Midjourney with the prompt “a dismayed looking Roman emperor Marcus Aurelius looking over a world in which AI drone and scary warfare dominates in the style of a Renaissance painting”

The future

“The happiness of your life depends upon the quality of your thoughts.” 

These sage – and neatly optimistic – words from Marcus Aurelius, the great Roman emperor and Stoic philosopher, feel especially pertinent as we scan 2025’s technological horizon. 

Aurelius, who died in 180 and became known as the last of the Five Good Emperors, exemplified a philosophy that teaches us to focus solely on what we can control and accept what we cannot. That wisdom is valuable in an AI-driven future, for communities still suffering a psychological form of long COVID drawn from the collective trauma of the pandemic – on top of deep uncertainty and general mistrust as geopolitical tensions and global temperatures rise.

The final emperor in the relatively peaceful Pax Romana era, Aurelius seemed a fitting person to quote this month for another reason: I’m flying to the Italian capital this coming week, to cover CSO 360, a security conference that allows attendees to take a peek behind the curtain – although I’m worried about what I may see. 

One of the most eye-popping lines from last year’s conference in Berlin was that there was a 50-50 chance that World War III would be ignited in 2024. One could argue that while there has not been a Franz Ferdinand moment, the key players are manoeuvring their pieces on the board. Expect more on this cheery subject – ho, ho, ho! – in the last newsletter of the year, on December 31.

Meanwhile, as technological change accelerates and AI agents increasingly populate our workplaces (“agentic AI” is the latest buzzword, in case you haven’t heard), the quality of our thinking about their integration – something we can control – becomes paramount.

In mid-October, Jensen Huang, Co-Founder and CEO of tech giant Nvidia – which specialises in graphics processing units (GPUs) and AI computing – revealed on the BG2 podcast that he plans to shape his workforce so that it is one-third human and two-thirds AI agents.

“Nvidia has 32,000 employees today,” Huang stated, but he hopes the organisation will have 50,000 employees and “100 million AI assistants in every single group”. Given my focus on human-work evolution, I initially found this concept shocking, and appalling. But perhaps I was too hasty to reach a conclusion.

When, a couple of weeks ago, I interviewed Daniel Vassilev, Co-Founder and CEO of Relevance AI, which builds virtual workforces of AI agents that act as a seamless extension of human teams, his perspective on Huang’s vision was refreshingly nuanced. He provided an enlightening analogy about throwing pebbles into the sea.

“Most of us limit our thinking,” the San Francisco-based Australian entrepreneur said. “It’s like having ten pebbles to throw into the sea. We focus on making those pebbles bigger or flatter, so they’ll go further. But we often forget to consider whether our efforts might actually give us 20, 30, or even 50 pebbles to throw.”

His point cuts to the heart of the AI workforce debate: rather than simply replacing human workers, AI might expand our collective capabilities and create new opportunities. “I’ve always found it’s a safe bet that if you give people the ability to do more, they will do more,” Vassilev observed. “They won’t do less just because they can.”

This positive yet grounded perspective was echoed in my conversation with Five9’s Steve Blood, who shared fascinating insights about the evolution of workplace dynamics, specifically in the customer experience space, when I was in Barcelona in the middle of the month reporting on his company’s CX Summit. 

Blood, VP of Market Intelligence at Five9, predicts a “unified employee” future where AI enables workers to handle increasingly diverse responsibilities across traditional departmental boundaries. Rather than wholesale replacement, he envisions a workforce augmented by AI, where employees become more valuable by leveraging technology to handle multiple functions.

(As an aside, Blood predicts the customer experience landscape of 2030 will be radically different, with machine customers evolving through three distinct phases. Starting with today’s ‘bound’ customers (like printers ordering their own ink cartridges exclusively from manufacturers), progressing to ‘adaptable’ customers (AI systems making purchases based on user preferences from multiple suppliers), and ultimately reaching ‘autonomous’ customers, where digital twins make entirely independent decisions based on their understanding of our preferences and history.)

The quality of our thinking about AI integration becomes especially crucial when considering what SailPoint’s CEO Mark McClain described to me this month as the “three V’s”: volume, variety, and velocity. These parameters no longer apply to data alone; they’re increasingly relevant to the AI agents themselves. As McClain explained: “We’ve got a higher volume of identities all the time. We’ve got more variety of identities, because of AI. And then you’ve certainly got a velocity problem here where it’s just exploding.” 

This explosion of AI capabilities brings us to a critical juncture. While Nvidia’s Huang envisions AI employees as being managed much like their human counterparts, assigned tasks, and engaged in dialogues, the reality might be more nuanced – and handling security permissions will need much work, which is perhaps something business leaders have not thought about enough.

Indeed, AI optimism must be tempered with practical considerations. The cybersecurity experts I’ve met recently have all emphasised the need for robust governance frameworks and clear accountability structures. 

Looking ahead to next year, organisations must develop flexible frameworks that can evolve as rapidly as AI capabilities. The “second mouse gets the cheese” approach – waiting for others to make mistakes first, as panellist Sue Turner, Founding Director of AI Governance, explained during a Kolekti roundtable on the progress of generative AI held on ChatGPT’s second birthday, November 28 – may no longer be viable in an environment where change is constant and competition fierce.

Successful organisations will emphasise complementary relationships between human and AI workers, requiring a fundamental rethink of traditional organisational structures and job descriptions.

The management of AI agent identities and access rights will become as crucial as managing human employees’ credentials, presenting both technical and philosophical challenges. Workplace culture must embrace what Blood calls “unified employees” – workers who can leverage AI to operate across traditional departmental boundaries. Perhaps most importantly, organisations must cultivate what Marcus Aurelius would recognise as quality of thought: the ability to think clearly and strategically about AI integration while maintaining human values and ethical considerations.

As we move toward 2025, the question isn’t simply whether AI agents will become standard members of the workforce – they already are. The real question is how we can ensure this integration enhances rather than diminishes human potential. The answer lies not in the technology itself, but in the quality of our thoughts about using it.

Organisations that strike and maintain this balance – embracing AI’s potential while preserving human agency and ethical considerations – will likely emerge as leaders in the new landscape. Ultimately, the quality of our thoughts about AI integration today will determine the happiness of our professional lives tomorrow.

The present

November’s news perfectly illustrates why we need to maintain quality of thought when adopting new technologies. Australia’s world-first decision to ban social media for under-16s, a bill passed a couple of days ago, marks a watershed moment in how we think about digital technology’s impact on society – and offers valuable lessons as we rush headlong into the AI revolution.

The Australian bill reflects a growing awareness of social media’s harmful effects on young minds. It’s a stance increasingly supported by data: new Financial Times polling reveals that almost half of British adults favour a total ban on smartphones in schools, while 71% support collecting phones in classroom baskets.

The timing couldn’t be more critical. Ofcom’s disturbing April study found nearly a quarter of British children aged between five and seven owned a smartphone, with many using social media apps despite being well below the minimum age requirement of 13. I pointed out in August’s Go Flux Yourself that EE recommended that children under 11 shouldn’t have smartphones. Meanwhile, University of Oxford researchers have identified a “linear relationship” between social media use and deteriorating mental health among teenagers.

Social psychologist Jonathan Haidt’s assertion in The Anxious Generation that smart devices have “rewired childhood” feels particularly apposite as we consider AI’s potential impact. If we’ve learned anything from social media’s unfettered growth, it’s that we must think carefully about technological integration before, not after, widespread adoption.

Interestingly, we’re seeing signs of a cultural awakening to technology’s double-edged nature. Collins Dictionary’s word of the year shortlist included “brainrot” – defined as an inability to think clearly due to excessive consumption of low-quality online content. While “brat” claimed the top spot – a word redefined by singer Charli XCX as someone who “has a breakdown, but kind of like parties through it” – the inclusion of “brainrot” speaks volumes about our growing awareness of digital overconsumption’s cognitive costs.

This awareness is manifesting in unexpected ways. A heartening trend has emerged on social media platforms, with users pushing back against online negativity by expressing gratitude for life’s mundane aspects. Posts celebrating “the privilege of doing household chores” or “the privilege of feeling bloated from overeating” represent a collective yearning for authentic, unfiltered experiences in an increasingly synthetic world.

In the workplace, we’re witnessing a similar recalibration regarding AI adoption. The latest Slack Workforce Index reveals a fascinating shift: for the first time since ChatGPT’s arrival, almost exactly two years ago, adoption rates have plateaued in France and the United States, while global excitement about AI has dropped six percentage points.

This hesitation isn’t necessarily negative – it might indicate a more thoughtful approach to AI integration. Nearly half of workers report discomfort admitting to managers that they use AI for common workplace tasks, citing concerns about appearing less competent or lazy. More tellingly, while employees and executives alike want AI to free up time for meaningful work, many fear it will actually increase their workload with “busy work”.

This gap between AI urgency and adoption reflects a deeper tension in the workplace. While organisations push for AI integration, employees express fundamental concerns about using these tools.

This more measured approach echoes broader societal concerns about technological integration. Just as we’re reconsidering social media’s role in young people’s lives, organisations are showing due caution about AI’s workplace implementation. The difference this time? We might actually be thinking before we leap.

Some companies are already demonstrating this more thoughtful approach. Global bank HSBC recently announced a comprehensive AI governance framework that includes regular “ethical audits” of their AI systems. Meanwhile, pharmaceutical giant AstraZeneca has implemented what they call “AI pause points” – mandatory reflection periods before deploying new AI tools.

The quality of our thoughts about these changes today will indeed shape the quality of our lives tomorrow. That’s the most important lesson from this month’s developments: in an age of AI, natural wisdom matters more than ever.

These concerns aren’t merely theoretical. Microsoft’s Copilot AI spectacularly demonstrated the pitfalls of rushing to deploy AI solutions this month. The product, designed to enhance workplace productivity by accessing internal company data, became embroiled in privacy breaches, with users reportedly accessing colleagues’ salary details and sensitive HR files. 

When fewer than 4% of IT leaders surveyed by Gartner said Copilot offered significant value, and Salesforce CEO Marc Benioff compared it to Clippy – Office 97’s notoriously unhelpful cartoon assistant – it highlighted a crucial truth: the gap between AI’s promise and its current capabilities remains vast.

As organisations barrel towards agentic AI next year, with semi-autonomous bots handling everything from press round-ups to customer service, Copilot’s stumbles serve as a timely reminder of the importance of thoughtful implementation.

Related to this point is the looming threat to authentic thought leadership. Nina Schick, a global authority on AI, predicts that by 2025 a staggering 90% of online content will be AI-generated. It’s a sobering forecast that should give pause to anyone concerned about the quality of discourse in our digital age.

If nine out of ten pieces of content next year are churned out by machines learning from machines learning from machines, we risk creating an echo chamber of mediocrity, as I wrote in a recent Pickup_andWebb insights piece. As David McCullough, the late American historian and Pulitzer Prize winner, noted: “Writing is thinking. To write well is to think clearly. That’s why it’s so hard.”

This observation cuts to the heart of genuine thought leadership. Real insight demands more than information processing; it requires boots on the ground and minds that truly understand the territory. While AI excels at processing vast amounts of information and identifying patterns, it cannot fundamentally understand the human condition, feel empathy, or craft emotionally resonant narratives.

Leaders who rely on AI for their thought leadership are essentially outsourcing their thinking, trading their unique perspective for a synthetic amalgamation of existing views. In an era where differentiation is the most prized currency, that’s more than just lazy – it’s potentially catastrophic for meaningful discourse.

The past

In April 2014, Gary Mairs – a gregarious character in the year above me at school – drank his last alcoholic drink. Broke, broken and bedraggled, he entered a church in Seville and attended his first Alcoholics Anonymous meeting. 

His life had become unbearably – and unbelievably – chaotic. After moving to Spain with his then-girlfriend, he began to enjoy the cheap cervezas a little too much. Eight months before he quit booze, Gary’s partner, unable to cope with his endless revelry, left him. This opened the beer tap further.

By the time Gary gave up drinking, he had maxed out 17 credit cards, his flatmates had turned on him, and he was hundreds of miles away from anyone who cared – which is why he signed up for AA. But what was it like?

I interviewed Gary for a recent episode of Upper Bottom, the sobriety podcast (for people who have not reached rock bottom) I co-host, and he was reassuringly straight-talking. He didn’t make it past step three of the 12 steps: he couldn’t bring himself to supplicate to a higher power.

However, when asked about the pivotal changes on his road to recovery, Gary points to good habits, healthy practices, and meditation. Marcus Aurelius would approve.

In his Meditations, written as private notes to himself nearly two millennia ago, Aurelius emphasised the power of routine and self-reflection. “When you wake up in the morning, tell yourself: The people I deal with today will be meddling, ungrateful, arrogant, dishonest, jealous, and surly. They are like this because they can’t tell good from evil,” he wrote. This wasn’t cynicism but rather a reminder to accept things as they are and focus on what we can control – our responses, habits, and thoughts.

Gary’s journey from chaos to clarity mirrors this ancient wisdom. Just as Aurelius advised to “waste no more time arguing what a good man should be – be one”, Gary stopped theorising about recovery and simply began the daily practice of better living. No higher power was required – just the steady discipline of showing up for oneself.

This resonates as we grapple with AI’s integration into our lives and workplaces. Like Gary discovering that the answer lay not in grand gestures but in small, daily choices, perhaps our path forward with AI requires similar wisdom: accepting what we cannot change while focusing intently on what we can – the quality of our thoughts, the authenticity of our voices, the integrity of our choices.

As Aurelius noted: “Very little is needed to make a happy life; it is all within yourself, in your way of thinking.” 

Whether facing personal demons or technological revolution, the principle remains the same: quality of thought, coupled with consistent practice, lights the way forward.

Statistics of the month

  • Exactly two-thirds of LinkedIn users believe AI should be taught in high schools. Additionally, 72% observed an increase in AI-related mentions in job postings, while 48% said AI proficiency is a key requirement at the companies they applied to.
  • Only 51% of respondents to Searce’s Global State of AI Study 2024 – which polled 300 C-suite and senior technology executives at organisations with at least $500 million in revenue in the US and UK – said their AI initiatives had been very successful, while 42% admitted success was only somewhat achieved.
  • International Workplace Group findings indicate just 7% of hybrid workers describe their 2024 hybrid work experience as “trusted”, hinting at an opportunity for employers to double down on trust in the year ahead.

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Thanks to Gen Zers, post-pandemic bullying call-outs have skyrocketed

If the coronavirus crisis was the darkest of clouds, the silver lining was that the fallout accelerated countless technologies, spurred working trends, and shifted societal norms. And now there is further cause for celebration: A new study has found the last three years were increasingly tough on alleged workplace bullies.

Ethisphere’s 2023 Ethical Insights Report, published in January and based on the responses of 2 million employees globally, suggested bullying was being called out at an unprecedented rate. Before the pandemic, 20% of respondents said they had observed bullying at work; after Covid-19 arrived, that figure rose to 33%, according to the study.

Moreover, the research indicated Gen Zers’ lower tolerance for bullying – compared to other generations – was making a massive difference.

Of the 26 other types of misconduct tracked by Ethisphere – a firm that defines and measures corporate ethical standards – only five increased over the same period, and none aside from bullying rose by more than 1.1 percentage points (insider trading, and violations of health-and-safety policies).

Could it be that people are more sensitive to bullying in the wake of the #MeToo and Black Lives Matter movements and, therefore, more willing to stand up for themselves and others?

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in February 2023 – to read the complete piece, please click HERE.

Broken meetings culture is causing people to switch off, literally

It was only a matter of time. The endless meeting cycles that have become embedded in the working cultures of so many organizations across industries have escalated to the point where people are simply tuning out of them.

And with so many meetings still taking place on video rather than in person, a large number of people don’t think they need to be in them at all – which is leading to mass disengagement, according to some workplace sources.

A whopping 43% of the 31,000 workers polled across 31 countries by Microsoft said they don’t feel included in meetings.

“Meeting culture is broken, and it’s having a significant impact on employee productivity and business efficiency,” said Sam Liang, CEO and co-founder of Otter.ai, a California-based software company that uses artificial intelligence to convert speech to text.

A recent Otter.ai study revealed that, on average, workers spend one-third of their time in meetings, 31% of which are considered unnecessary. But employers continue to plow ahead without changing these embedded structures.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in October 2022 – to read the complete piece, please click HERE.

Glass half-full or half-empty: How to balance a partying culture at work

What was your honest reaction when Sanna Marin, Finland’s prime minister, was pilloried for partying recently? In August, the 36-year-old sparked controversy after leaked videos showed her dancing and drinking with friends.

Whichever side of the bar you sit on, Marin’s partying raised important questions about how business leaders in all walks of life should conduct themselves when with and without colleagues in a social environment. 

How do employees feel about a boozy boss? And do enforced work events, where people are encouraged to imbibe at a free bar, help or hinder the health of a workplace in a post-pandemic world?

Indeed, in most industries, for decades – if not centuries – socializing with colleagues and attending work drinks has been central to company culture. Away from the workplace, over a glass or two, people can relax, make meaningful memories, share challenges and opportunities – at work and home – and, ultimately, strengthen bonds with coworkers. But is the glass half-full, half-empty, or completely empty in 2022?

This article was first published on Digiday’s future-of-work platform, WorkLife, in September 2022 – to read the complete piece, please click HERE.

Workers share their worst toxic boss experiences

All the chatter about quiet quitting – namely, doing what a job requires and no more – has provoked deeper discussions about toxic workplace culture and poor management as organizations firm up their hybrid-working strategies.

Some execs have aired concerns that the bring-your-whole-selves-to-work trend has backfired, and in many cases has caused fragmented workforces, while some leaders have taken advantage of the concept to justify their own questionable behavior.

WorkLife spoke to a range of employees – from those who consider themselves quiet quitters to those who have resigned outright, plus those still considering it – to find out what prompted them to take their current course of action. On condition of anonymity – for fear of career-damaging repercussions – they shared recent experiences that highlight the alarming management they have endured. We’ve selected three of the worst accounts.

This article was first published on Digiday’s future-of-work platform, WorkLife, in September 2022 – to read the complete piece, please click HERE.

How hybrid working has complicated mergers & acquisitions

Up to 90% of business acquisitions don’t achieve the expected value or benefits. The principal reason for this failure is that integrating groups is notoriously challenging – and even more so now, with many organizations shifting to hybrid working strategies. 

Global deal-making activities hit a record $6 trillion last year. And yet, while employees are the most critical asset of most companies, they often get neglected in the excitement of an M&A.

The age-old M&A model typically involved the employees of one company leaving their offices to join those of their new employer. But with today’s hybrid and flexible working setups, that looks very different – and it adds new complexity to the long-term challenge of successful cultural integration.

Organic opportunities for new colleagues to connect are likely to be missed because of the move to hybrid working. So what could – and should – be done?

This article was first published on Digiday’s future-of-work platform, WorkLife, in August 2022 – to read the complete piece, please click HERE.

How to fix the metaverse’s sexual harassment problem (and make ‘metawork’ a reality)

Since Meta – the tech titan formerly known as Facebook – revealed last year that it would invest heavily in the metaverse, there has been massive enthusiasm about the possibilities of this nascent technology, not least in a future-of-work capacity. 

Indeed, at the end of July, a study by Grand View Research predicted the booming metaverse market would reach $6.8 trillion by 2030. However, alarming recent data indicates that almost two-thirds of adults believe metaverse technologies will enable sexual harassment.

A national tracking poll by business-intelligence company Morning Consult, published in March, found that 61% of 4,420 U.S. adults were concerned about this specific subject. Women seem most worried, with 41% of female respondents saying they have “major” concerns, compared with 34% of male respondents.

The same research showed that 79% of adults are worried about the tracking and misuse of personal data in the metaverse. Add in the numerous articles written about people’s personal experiences of harassment in the metaverse, and it’s clear there is a deep-rooted trust issue that business leaders should consider before funding metaverse worlds for employees, whether for onboarding staff, hosting events, or holding meetings.

This article was first published on Digiday’s future-of-work platform, WorkLife, in August 2022 – to read the complete piece, please click HERE.

‘It’s pulling us apart’: Has the ‘bring our whole selves to work’ trend backfired?

In the post-Covid-19 era, business leaders are advised to be authentic in word and deed, display their vulnerabilities, and encourage staff to bring their whole selves to work. But some argue this has merely opened a can of worms within organizations — an outcome that may be hard to rectify.

Almost half (44%) of U.S. employees said they have actively avoided some co-workers because they disagree with their political views since returning to the office following the coronavirus crisis, according to unpublished Gartner research seen by WorkLife.

Brian Kropp, group vice president and chief of research for Gartner’s human resources practice, acknowledged that events of the last 2 1/2 years have frayed work relationships. Still, in his view, we have brought this problem on ourselves.

“We spend so much time talking about ‘bringing your whole self to work,’ making sure that we’re inclusive and encouraging people to be who they are when they’re in the office,” he said. “Part of an employee’s whole self is their political beliefs.”

As workplaces have become more open and inclusive, they have also invited in the day’s political, societal and cultural debates. “Unfortunately, in this period of extreme political and cultural tension, that conflict has permeated into the workplace, and now it’s pulling us apart from each other,” added Kropp.

This article was first published on Digiday’s future-of-work platform, WorkLife, in August 2022 – to read the complete piece, please click HERE.

People are being harsher in the workplace post-pandemic – how did we get here?

Be honest: are you snappier with your colleagues and harsher with your spoken and written words than two years ago? We might not like to admit it, but the pandemic altered us all, to a degree – at work and home. 

Individually, the change might be imperceptible. Collectively, however, it adds up to something more troubling. And if left unchecked, this general lack of positivity will toxify the workplace and corrode relationships.

Brian Kropp, group vp and chief of research for Gartner’s HR practice, expressed his concern for employers and their staff. “There are numerous things pulling employees apart from each other, and that’s incredibly difficult as an organization because the purpose of having a company is bringing people together, to collaborate, and to achieve something bigger than any individual could achieve alone,” he said.

Could this be the start of a worrying trend? “We’re finding that we are entering a period where things inside and outside our organizations are causing the workforce fragmentation,” Kropp added. 

This article was first published on Digiday’s future-of-work platform, WorkLife, in August 2022 – to read the complete piece, please click HERE.

WTF is Tropicalization?

The purest distillation of Darwinism is “evolve or die.” And following the acceleration of trends spurred by the coronavirus crisis, most business leaders have realized they must lasso and partner with specialists all over the planet to survive and thrive in the post-pandemic world.

Little wonder the average value of the 10 highest M&A deals in the U.K. in 2021 was £3.3 billion ($3.9 billion), according to Office for National Statistics data — over five times more than the previous year’s £600 million ($716 million). In the U.S., the value of M&A deals amounted to roughly $212 billion in December 2021, with America Online’s acquisition of Time Warner still ranked, as of 2022, as the largest M&A deal in U.S. history, according to Statista. And globally, M&A volumes hit a record $5.9 trillion, up 62% on the 2020 figure, Dealogic data showed.

While transformation is necessary for growth, few welcome it. Change management can determine the success — or failure — of merging companies. If not handled with sensitivity, a clash of cultures and ways of working can be toxic. For this reason, the word “tropicalization” is increasingly being used in business circles.

This article was first published on Digiday’s future-of-work platform, WorkLife, in July 2022 – to read the complete piece, please click HERE.

Mojitos in the metaverse? More companies take to hosting team happy hours via virtual reality headsets

Before the pandemic, U.S. marketing agency The Starr Conspiracy’s employees would enjoy Olympic-like competitions in the office car parks and revel in regular in-person, happy-hour meetings. However, with the fun tap turned off by the coronavirus-induced restrictions, company bosses sensed disconnection and isolation were growing for remote-working staff. So they reached for virtual reality headsets.

Now, all 72 employees have Oculus Quest 2s, which cost about $300 per set, and join in for happy hours and quiz nights in the metaverse. But, aside from the obvious practical issues — it’s hard first to locate and then swig a mojito while wearing an obstructive plastic mask — will employees swallow such activities, and can they genuinely re-engage staff?

This article was first published on Digiday’s WorkLife platform in February 2022 – to read the complete piece, please click HERE.