Go Flux Yourself: Navigating the Future of Work (No. 23)

TL;DR: November’s Go Flux Yourself marks three years since ChatGPT’s launch by examining the “survival of the shameless” – Rutger Bregman’s diagnosis of Western elite failure. With responsible innovation falling out of fashion and moral ambition in short supply, it asks what purpose-driven technology actually looks like when being bad has become culturally acceptable.

Image created on Nano Banana

The future

“We’ve taught our best and brightest how to climb, but not what ladder is worth climbing. We’ve built a meritocracy of ambition without morality, of intelligence without integrity, and now we are reaping the consequences.”

The above quotation comes from Rutger Bregman, the Dutch historian and thinker who shot to prominence at the World Economic Forum in Davos in 2019. You may recall the viral clip. Standing before an audience of billionaires, he did something thrillingly bold: he told them to pay their taxes.

“It feels like I’m at a firefighters’ conference and no one’s allowed to speak about water,” he said almost seven years ago. “Taxes, taxes, taxes. The rest is bullshit in my opinion.”

Presumably due to his truth-telling, he has not been invited back to the Swiss Alps for the WEF’s annual meeting.

Bregman is this year’s BBC Reith Lecturer, and, again, he is holding a mirror up to society to reveal its ugly, venal self. His opening lecture, A Time of Monsters – a title borrowed from Antonio Gramsci’s 1929 prison notebooks – delivered at the end of November, builds on that Davos provocation with something more troubling: a diagnosis of elite failure across the Western world. This time, his target isn’t just tax avoidance. It’s what he calls the “survival of the shameless”: the systematic elevation of the unscrupulous over the capable, and the brazen over the virtuous.

Even Bregman’s lecture fell victim to the elite cowardice it critiques. The BBC reportedly removed a line describing Donald Trump as “the most openly corrupt president in American history”. The irony, as Bregman put it, is that the lecture was precisely about “the paralysing cowardice of today’s elites”. When even the BBC flinches from stating the obvious – and presumably fears how Trump might react (he has threatened to sue the broadcaster for $5 billion over doctored footage, a scandal that, earlier in November, saw the director general and News CEO resign) – you know something is deeply rotten.

Bregman’s opening lecture is well worth a listen, as is the Q&A afterwards. His strong opinions chimed with the beliefs of Gemma Milne, a Scottish science writer and lecturer at the University of Glasgow, whom I caught up with a couple of weeks ago, having first interviewed her almost a decade ago.

The author of Smoke & Mirrors: How Hype Obscures the Future and How to See Past It has recently submitted her PhD thesis at the University of Edinburgh (Putting the future to work – The promises, product, and practices of corporate futurism), and has been tracking this shift for years. Her research focuses on “corporate futurism” and the political economy of deep tech – essentially, who benefits from the stories we tell about innovation.

Her analysis is blunt: we’re living through what she calls “the age of badness”.

“Culturally, we have peaks and troughs in terms of how much ‘badness’ is tolerated,” she told me. “Right now, being the bad guy is not just accepted, it’s actually quite cool. Look at Elon Musk, Trump, and Peter Thiel. There’s a pragmatist bent that says: the world is what it is, you just have to operate in it.”

When Smoke & Mirrors came out in 2020, conversations around responsible innovation were easier. Entrepreneurs genuinely wanted to get it right. The mood has since curdled. “If hype is how you get things done and people get misled along the way, so be it,” Gemma said of the shift in attitude among those in power. “‘The ends justify the means’ has become the prevailing logic.”

On a not-unrelated note, November 30 marked exactly three years since OpenAI launched ChatGPT. (This end-of-the-month newsletter arrives a day later than usual – the weekend, plus an embargo on the Adaptavist Group research below.) We’ve endured three years of breathless proclamations about productivity gains, creative disruption, and the democratisation of intelligence. And three years of pilot programmes, failed implementations, and so much hype. 

Meanwhile, the graduate job market has collapsed by two-thirds in the UK alone, and unemployment levels have risen to 5%, the highest since September 2021, the height of the pandemic fallout, as confirmed by Office for National Statistics data published in mid-November.

New research from The Adaptavist Group, gleaned from almost 5,000 knowledge workers split evenly across the UK, US, Canada and Germany, underscores the insidious social cost: a third (32%) of workers report speaking to colleagues less since using GenAI, and 26% would rather engage in small talk with an AI chatbot than with a human.

So here’s the question that Bregman forces us to confront: if we now have access to more intelligence than ever before – both human and artificial – what exactly are we doing with it? And are we using technology for good, for human enrichment and flourishing? On the whole, with artificial intelligence, I don’t think so.

Bregman describes consultancy, finance, and corporate law as a “gaping black hole” that sucks up brilliant minds: a Bermuda Triangle of talent that has tripled in size since the 1980s. Every year, he notes, thousands of teenagers write beautiful university application essays about solving climate change, curing disease, or ending poverty. A few years later, most have been funnelled towards the likes of McKinsey, Goldman Sachs, and Magic Circle law firms.

The numbers bear this out. Around 40% of Harvard graduates now end up in that Bermuda Triangle of talent, according to Bregman. Include big tech, and the share rises above 60%. One Facebook employee, a former maths prodigy, quoted by the Dutchman in his first Reith lecture, said: “The best minds of my generation are thinking about how to make people click ads. That sucks.”

If we’ve spent decades optimising our brightest minds towards rent-seeking and attention-harvesting, AI accelerates that trajectory. The same tools that could solve genuine problems are instead deployed to make advertising more addictive, to automate entry-level jobs without creating pathways to replace them, and to generate endless content that says nothing new.

Gemma sees this in how technology and politics have fused. “The entanglement has never been stronger or more explicit.” Trump won his second term a year ago. At his inauguration in January, the front-row seats were taken by several technology leaders, happy to pay the price of genuflection in return for deregulation. But what is the ultimate cost to humanity of such cosy relationships?

“These connections aren’t just more visible, they’re culturally embedded,” Gemma told me. “People know Musk’s name and face without understanding Tesla’s technology. Sam Altman is AI’s hype guru, but he’s also a political leader now. The two roles have merged.”

Against this backdrop, I spent two days at London’s Guildhall in early November for the Thinkers50 conference and gala. The theme was “regeneration”, exploring whether businesses can restore rather than extract.

Erinch Sahan from Doughnut Economics Action Lab offered concrete examples of businesses demonstrating that purpose and profit needn’t be mutually exclusive: Patagonia’s steward-ownership model, Fairphone’s modular, repairable handset (billed as “the most ethical smartphone in the world”), and LUSH’s commitment to fair taxes and employee ownership.

Erinch’s – frankly heartwarming – list, of which this trio is a small fraction, contrasted sharply with Gemma’s observation about corporate futurism: “The critical question is whether it actually transforms organisations or simply attends to the fear of perma-crisis. You bring in consultants, do the exercises, and everyone feels better about uncertainty. But does anything actually change?”

Some forms of the practice can be transformative. Others primarily manage emotion without producing radical change. The difference lies in whether accountability mechanisms exist, whether outcomes are measured, tracked, and tied to consequences.

This brings me to Delhi-based Ruchi Gupta, whom I met over a video call a few weeks ago. She runs the not-for-profit Future of India Foundation and has built something that embodies precisely the kind of “moral ambition” Bregman describes, although she’d probably never use that phrase. 

India is home to the world’s largest youth population, with one in every five young people globally being Indian. Not many – and not enough – are afforded the skills and opportunities to thrive. Ruchi’s assessment of the current situation is unflinching. “It’s dire,” she said. “We have the world’s largest youth population, but insufficient jobs. The education system isn’t skilling them properly; even among the 27% who attend college, many graduate without marketable skills or professional socialisation. Young people will approach you and simply blurt things out without introducing themselves. They don’t have the sophistication or the networks.”

Notably, cities comprise just 3% of India’s land area but account for 60% of its GDP. That concentration tells you everything about how poorly opportunities are distributed. 

Gupta’s flagship initiative, YouthPOWER, responds to this demographic reality by creating India’s first and only district-level youth opportunity and accountability platform, covering all 800 districts. The platform synthesises data from 21 government sources to generate the Y-POWER Score, a composite metric designed to make youth opportunity visible, comparable, and politically actionable.
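The foundation hasn’t published the formula behind the Y-POWER Score, but composite indices of this kind usually follow a standard recipe: normalise each indicator so districts are comparable, then combine them into a single number. Here is a minimal sketch in Python – with min-max scaling, equal weights, and all district and indicator names being my invented placeholders, not YouthPOWER’s:

```python
# Hypothetical composite district score (illustrative only, not
# YouthPOWER's actual methodology): scale each indicator to 0-100
# across districts, then take an equal-weighted average.

def minmax(values):
    """Rescale raw values to 0-100 across all districts."""
    lo, hi = min(values), max(values)
    return [100 * (v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def composite_scores(districts):
    """districts: {name: {indicator: raw_value}} -> {name: 0-100 score}."""
    names = list(districts)
    indicators = sorted(next(iter(districts.values())))
    # Normalise each indicator across every district.
    scaled = {ind: minmax([districts[n][ind] for n in names]) for ind in indicators}
    # Equal-weighted average of the scaled indicators per district.
    return {
        name: round(sum(scaled[ind][i] for ind in indicators) / len(indicators), 1)
        for i, name in enumerate(names)
    }

# Toy data: two invented indicators for three invented districts.
print(composite_scores({
    "District A": {"college_enrolment_pct": 31, "job_postings_per_1k": 120},
    "District B": {"college_enrolment_pct": 18, "job_postings_per_1k": 540},
    "District C": {"college_enrolment_pct": 27, "job_postings_per_1k": 260},
}))
```

The interesting design question isn’t the averaging; it’s the normalisation, which determines whether a district is judged against the national best or against an absolute benchmark.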

“Approximately 85% of Indians continue to live in the district of their birth,” Ruchi explained. “That’s where they situate their identity; when young people introduce themselves to me, they say their name and their district. If you want to reach all young people and create genuine opportunities, it has to happen at the district level. Yet nothing existed to map opportunity at that granularity.”

What makes YouthPOWER remarkable, aside from the smart data aggregation, is the accountability mechanism. Each district is mapped to its local elected representative, the Member of Parliament who chairs the district oversight committee. The platform creates a feedback loop between outcomes and political responsibility.

“Data alone is insufficient; you need forward motion,” Ruchi said. “We mapped each district to its MP. The idea is to work directly with them, run pilots that demonstrate tangible improvement, then scale a proven playbook across all 543 constituencies. When outcomes are linked to specific politicians, accountability becomes real rather than rhetorical.”

Her background illuminates why this matters personally. Despite attending good schools in Delhi, her family’s circumstances meant she didn’t know about premier networking institutions. She went to an American university because it let her work while studying, not because it was the best fit. And she applied only to Harvard Business School – which she had learnt about from Erich Segal’s Love Story – without any work experience.

“Your background determines which opportunities you even know exist,” she told me. “It was only at McKinsey that I finally understood what a network does – the things that happen when you can simply pick up the phone and reach someone.” Thankfully, for India’s sake, Ruchi has found her purpose after time spent lost in the Bermuda Triangle of talent.

But the lack of opportunities and woeful political accountability are global challenges. Ruchi continued: “The right-wing surge you’re seeing in the UK and the US stems from the same problem: opportunity isn’t reaching people where they live. The normative framework is universal: education, skilling, and jobs on one side; empirical baselines and accountability mechanisms on the other. Link outcomes to elected representatives, and you create a feedback loop that drives improvement.”

So what distinguishes genuine technology for good from its performative alternative?

Gemma’s advice is to be explicit about your relationship with hype. “Treat it like your relationship with money. Some people find money distasteful but necessary; others strategise around it obsessively. Hype works the same way. It’s fundamentally about persuasion and attention, getting people to stop and listen. In an attention economy, recognising how you use hype is essential for making ethical and pragmatic decisions.”

She doesn’t believe we’ll stay in the age of badness forever. These things are cyclical. Responsible innovation will become fashionable again. But right now, critiquing hype lands very differently because the response is simply: “Well, we have to hype. How else do you get things done?”

Ruchi offers a different lens. The economist Joel Mokyr has argued that innovation is fundamentally about culture, not just human capital or resources. “Our greatness in India will depend on whether we can build that culture of innovation,” Ruchi said. “We can’t simply skill people as coders and rely on labour arbitrage. That’s the current model, and it’s insufficient. If we want to be a genuinely great country, we need to pivot towards something more ambitious.”

Three years into the ChatGPT era, we have a choice. We can continue funnelling talent into the Bermuda Triangle, using AI to amplify artificial importance. Or we can build something different. For instance, pioneering accountability systems like YouthPOWER that make opportunity visible, governance structures that demand transparency, and cultures that invite people to contribute to something larger than themselves.

Bregman ends his opening Reith Lecture with a simple observation: moral revolutions happen when people are asked to participate.

Perhaps that’s the most important thing leaders can do in 2026: not buy more AI subscriptions or launch more pilots, but ask the question – what ladder are we climbing, and who benefits when we reach the top?

The present

Image created on Midjourney

The other Tuesday, on the 8.20am train from Waterloo to Clapham Junction, heading to The Portfolio Collective’s Portfolio Career Festival at Battersea Arts Centre, I witnessed a small moment that captured everything wrong with how we’re approaching AI.

The guard announced himself over the tannoy. But it wasn’t his voice. It was a robotic, AI-generated monotone informing passengers he was in coach six, should anyone need him.

I sat there, genuinely unnerved. This was the Turing trap in action, using technology to imitate humans rather than augment them. The guard had every opportunity to show his character, his personality, perhaps a bit of warmth on a grey November morning. Instead, he’d outsourced the one thing that made him irreplaceable: his humanity.

Image created on Nano Banana (using the same prompt as the Midjourney one above)

Erik Brynjolfsson, the Stanford economist who coined the term in 2022, argues we consistently fall into this snare. We design AI to mimic human capabilities rather than complement them. We play to our weaknesses – the things machines do better – instead of our strengths. The train guard’s voice was his strength. His ability to set a tone, to make passengers feel welcome, to be a human presence in a metal tube hurtling through South London. That’s precisely what got automated away.

It’s a pattern I’m seeing everywhere. By blindly grabbing AI and outsourcing tasks that reveal what makes us unique, we risk degrading human skills, eroding trust and connection, and – I say this without hyperbole – automating ourselves to extinction.

The timing of that train journey felt significant. I was heading to a festival entirely about human connection – networking, building personal brand, the importance of relationships for business and greater enrichment. And here was a live demonstration of everything working against that.

It was also Remembrance Day. As we remembered those who fought for our freedoms, not least during a two-minute silence (that felt beautifully calming – a collective, brief moment without looking at a screen), I was about to argue on stage that we’re sleepwalking into a different kind of surrender: the quiet handover of our professional autonomy to machines.

The debate – Unlocking Potential or Chasing Efficiency: AI’s Impact on Portfolio Work – was held before around 200 ambitious portfolio professionals. The question was straightforward: should we embrace AI as a tool to amplify our skills, creativity, and flow – or hand over entire workflows to autonomous agents and focus our attention elsewhere?

Pic credit: Afonso Pereira

You can guess which side I argued. The battle for humanity isn’t against machines, per se. It’s about knowing when to direct them and when to trust ourselves. It’s about recognising that the guard’s voice – warm, human, imperfect – was never a problem to be solved. It was a feature to be celebrated.

The audience wanted an honest conversation about navigating this transition thoughtfully. I hope we delivered. But stepping off stage, I couldn’t shake the irony: a festival dedicated to human connection, held on the day we honour those who preserved our freedoms, while outside these walls the evidence mounts that we’re trading professional agency for the illusion of efficiency.

To watch the full video session, please see here: 

A day later, I attended an IBM panel at the tech firm’s London headquarters. Their Race for ROI research contained some encouraging news: two-thirds of UK enterprises are experiencing significant AI-driven productivity improvements. But dig beneath the headline, and the picture darkens. Only 38% of UK organisations are prioritising inclusive AI upskilling opportunities. The productivity gains are flowing to those already advantaged. Everyone else is figuring it out on their own – 77% of those using AI at work are entirely self-taught.

Leon Butler, General Manager for IBM UK & Ireland, offered a metaphor that’s stayed with me. He compared using an opaque AI model to drinking from a test tube you can’t see into.

“There’s liquid in it – that’s the training data – but you can’t see it. You pour your own data in, mix it, and you’re drinking something you don’t fully understand. By the time you make decisions, you need to know it’s clean and true.”

That demand for transparency connects directly to Ruchi’s work in India and Gemma’s critique of corporate futurism. Data for good requires good data. Accountability requires visibility. You can’t build systems that serve human flourishing if the foundations are murky, biased, or simply unknown.

As Sue Daley OBE, who leads techUK’s technology and innovation work, pointed out at the IBM event: “This will be the last generation of leaders who manage only humans. Going forward, we’ll be managing humans and machines together.”

That’s true. But the more important point is this: the leaders who manage that transition well will be the ones who understand that technology is a means, not an end. Efficiency without purpose is just faster emptiness.

The question of what we’re building, and for whom, surfaced differently at the Thinkers50 conference. Lynda Gratton, whom I’ve interviewed a couple of times about living and working well, opened with her weaving metaphor. We’re all creating the cloth of our lives, she argued, from productivity threads (mastering, knowing, cooperating) and nurturing threads (friendship, intimacy, calm, adventure).

Not only is this an elegant idea, but I love the warm embrace of messiness and complexity. Life doesn’t follow a clean pattern. Threads tangle. Designs shift. The point isn’t to optimise for a single outcome but to create something textured, resilient, human.

That messiness matters more now. My recent newsletters have explored the “anti-social century” – how advances in technology correlate with increased isolation. Being in that Guildhall room – surrounded by management thinkers from around the world, having conversations over coffee, making new connections – reminded me why physical presence still matters. You can’t weave your cloth alone. You need other people’s threads intersecting with yours.

Earlier in the month, an episode of The Switch, St James’s Place Financial Adviser Academy’s career change podcast, was released. Host Gee Foottit wanted to explore how professionals can navigate AI’s impact on their working lives – the same territory I cover in this newsletter, but focused specifically on career pivots.

We talked about the six Cs – communication, creativity, compassion, courage, collaboration, and curiosity – and why these human capabilities become more valuable, not less, as routine cognitive work gets automated. We discussed how to think about AI as a tool rather than a replacement, and why the people who thrive will be those who understand when to direct machines and when to trust themselves.

The conversations I’m having – with Gemma, Ruchi, the panellists at IBM, the debaters at Battersea – reinforce the central argument. Technology for good isn’t a slogan. It’s a practice. It requires intention, accountability, and a willingness to ask uncomfortable questions about who benefits and who gets left behind.

If you’re working on something that embodies that practice – whether it’s an accountability platform, a regenerative business model, or simply a team that’s figured out how to use AI without losing its humanity – I’d love to hear from you. These conversations are what fuel the newsletter.

The past

A month ago, I fired my one and only work colleague. It was the best decision for both of us. But the office still feels lonely and quiet without him.

Frank is a Jack Russell I’ve had since he was a puppy, almost five years ago. My daughter, only six months old when he came into our lives, grew up with him. Many people with whom I’ve had video calls will know Frank – especially if the doorbell went off during our meeting. He was the most loyal and loving dog, and for weeks after he left, I felt bereft. Suddenly, no one was nudging me in the middle of the afternoon to go for a much-needed, head-clearing stroll around the park.

Pic credit: Samer Moukarzel

So why did I rehome him?

As a Jack Russell, he is fiercely territorial. And where I live and work in south-east London, it’s busy. He was always on guard, trying to protect and serve me. The postman, Pieter, various delivery folk, and other people who came into the house have felt his presence, let’s say. Countless letters were torn to shreds by his vicious teeth – so many that I had to install an external letterbox.

A couple of months ago, as I tried to retrieve a sock that Frank had stolen and was guarding on the sofa, he snapped and drew blood. After multiple sessions with two different behaviourists, following previous incidents, he was already on a yellow card. If he bit me, who wouldn’t he bite? Red card.

The decision was made to find a new owner. I made a three-hour round trip to meet Frank’s new family, whose home is in the Norfolk countryside – much better suited to a Jack Russell’s temperament. After a walk together at a neutral venue, he travelled back to their house and apparently took 45 minutes to leave their car, snarling, unsure, and confused. It was heartbreaking to think he would never see me again.

But I knew Frank would be happy there. Later that day, I received videos of him dashing around fields. His new owners said they already loved him. A day later, they found the cartoon picture my daughter had drawn of Frank, saying she loved him, in the bag of stuff I’d handed them.

Now, almost a month on, the house is calmer. My daughter has stopped drawing pictures of Frank with tearful captions. And Frank? He’s made friends with Ralph, the black Labrador who shares his new home. The latest photo shows them sleeping side by side, exhausted from whatever countryside adventures Jack Russells and Labradors get up to together.

The proverb “if you love someone, set them free” helped ease the hurt. But there’s something else in this small domestic drama that connects to everything I’ve been writing about this month.

Bregman asks what ladder we’re climbing. Gemma describes an age where doing the wrong thing has become culturally acceptable. Ruchi builds systems that create accountability where none existed. And here I was, facing a much smaller question: what do I owe this dog?

The easy path was to keep him. To manage the risk, install more barriers, and hope for the best. The more challenging path was to acknowledge that the situation wasn’t working – not for him, not for us – and to make a change that felt like failure but was actually responsibility.

Moral ambition doesn’t only show up in accountability platforms and regenerative business models. Sometimes it’s in the quiet decisions: the ones that cost you something, that nobody else sees, that you make because it’s right rather than because it’s easy.

Frank needed space to run, another dog to play with, and owners who could give him the environment his breed demands. I couldn’t provide that. Pretending otherwise would have been a disservice to him and a risk to my family.

The age of badness that Gemma describes isn’t just about billionaires and politicians. It’s also about the small surrenders we make every day: the moments we choose convenience over responsibility, comfort over honesty, the path of least resistance over the path that’s actually right.

I don’t want to overstate this. Rehoming a dog is not the same as building YouthPOWER or challenging tax-avoiding elites at Davos. But the muscle is the same. The willingness to ask uncomfortable questions. The courage to act on the answers.

My daughter’s drawings have stopped. The house is quieter. And somewhere in Norfolk, Frank is sleeping on a Labrador, finally at peace.

Sometimes the most important thing you can do is recognise when you’re climbing the wrong ladder – and have the grace to climb down.

Statistics of the month

🛒 Cyber Monday breaks records
Today marks the 20th annual Cyber Monday, projected to hit $14.2 billion in US sales – surpassing last year’s record. Peak spending occurs between 8pm and 10pm, when consumers spend roughly $15.8 million per minute. A reminder that convenience still trumps almost everything. (National Retail Federation)

🎯 Judgment holds, execution collapses
US marketing job postings dropped 8% overall in 2025, but the divide is stark: writer roles fell 28%, computer graphic artists dropped 33%, while creative directors held steady. The pattern likely mirrors the UK – the market pays for strategic judgment; it’s automating production. (Bloomberry)

🛡️ Cybersecurity complacency exposed
More than two-fifths (43%) of UK organisations believe their cybersecurity strategy requires little to no improvement – yet 71% have paid a ransom in the past 12 months, averaging £1.05 million per payment. (Cohesity)

💸 Cyber insurance claims triple
UK cyber insurance claims hit at least £197 million in 2024, up from £60 million the previous year – a stark reminder that threats are evolving faster than our defences. (Association of British Insurers)

🤖 UK leads Europe in AI optimism
Some 88% of UK IT professionals want more automation in their day-to-day work, and only 10% feel AI threatens their role – the lowest of any European country surveyed. Yet 26% say they need better AI training to keep pace. (TOPdesk)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 22)


TL;DR: October’s Go Flux Yourself explores the epidemic of disconnection in our AI age. As 35% of Britons use smart doorbells to avoid human contact on Hallowe’en, and children face 2,000 social media posts daily, we’re systematically destroying the one skill that matters most: genuine human connection.

Image created on Midjourney

The future

“The most important single ingredient in the formula of success is knowing how to get along with people.”

Have we lost the knowledge of how to get along with people? And to what extent is an increasing dependence on large language models degrading this skill for adults, and not allowing it to bloom for younger folk?

When Theodore Roosevelt, the 26th president of the United States, spoke the above words in the early 20th century, he couldn’t have imagined a world where “getting along with people” would require navigating screens, algorithms, and artificial intelligence. Yet here we are, more than a century after he died in 1919, rediscovering the wisdom in the most unsettling way possible.

Indeed, this Hallowe’en, 35% of UK homeowners plan to use smart doorbells to screen trick-or-treaters, according to estate agents eXp UK. Two-thirds will ignore the knocking. We’re literally using technology to avoid human contact on the one night of the year when strangers are supposed to knock on our doors.

It’s the perfect metaphor for where we’ve ended up. The scariest thing isn’t what’s at your door. It’s what’s already inside your house.

Princess Catherine put it perfectly earlier in October in her essay, The Power of Human Connection in a Distracted World, for the Centre for Early Childhood. “While digital devices promise to keep us connected, they frequently do the opposite,” she wrote, in collaboration with Robert Waldinger, part-time professor of psychiatry at Harvard Medical School. “We’re physically present but mentally absent, unable to fully engage with the people right in front of us.”

I was a contemporary of Kate’s at the University of St Andrews in the wilds of East Fife, Scotland. We both graduated in 2005, a year before Twitter launched and a year after “TheFacebook” appeared. We lived in a world where difficult conversations happened face-to-face, where boredom forced creativity, and where friendship required actual presence. That world is vanishing with terrifying speed.

The Princess of Wales warns that an overload of smartphones and computer screens is creating an “epidemic of disconnection” that disrupts family life. Notably, her three kids are not allowed smartphones (and I’m pleased to report my eldest, aged 11, has a simple call-and-text mobile). “When we check our phones during conversations, scroll through social media during family dinners, or respond to emails while playing with our children, we’re not just being distracted, we are withdrawing the basic form of love that human connection requires.”

She’s describing something I explored in January’s newsletter about the “anti-social century” – Derek Thompson of The Atlantic’s term for a period marked by convenient communication and vanishing intimacy. We’re raising what Catherine calls “a generation that may be more ‘connected’ than any in history while simultaneously being more isolated, more lonely, and less equipped to form the warm, meaningful relationships that research tells us are the foundation of a healthy life”.

The data is genuinely frightening. Recent research from online safety app Sway.ly found that children in the UK and the US are exposed to around 2,000 social media posts per day. Some 77% say it harms their physical or emotional health. And, scarier still, 72% of UK children have seen content in the past month that made them feel uncomfortable, upset, sad or angry.

Adults fare little better. A recent study on college students found that AI chatbot use is hollowing out human interaction. Students who used to help each other via class Discord channels now ask ChatGPT. Eleven out of 17 students in the study reported feeling more isolated after AI adoption.

One student put it plainly: “There’s a lot you have to take into account: you have to read their tone, do they look like they’re in a rush … versus with ChatGPT, you don’t have to be polite.”

Who needs niceties in the AI age?! We’re creating technology to connect us, to help us, to make us more productive. And it’s making us lonelier, more isolated, less capable of basic human interactions.

Marvin Minsky, who won the Turing Award back in 1969, said something that feels eerily relevant now: “Once the computers get control, we might never get it back. We would survive at their sufferance. If we’re lucky, they might decide to keep us as pets.”

He said that 56 years ago. We’re not there yet. But we’re building towards something, and whether that something serves humanity or diminishes it depends entirely on the choices we make now.

Anthony Cosgrove, who started his career at the Ministry of Defence as an intelligence analyst in 2003 and has earned an MBE, has seen this play out from the inside. Having led global teams at HSBC and now running data marketplace platform Harbr, he’s witnessed first-hand how organisations stumble into AI adoption without understanding the foundations.

“Most organisations don’t even know what data they already hold,” he told me over a video call a few weeks ago. “I’ve seen millions of pounds wasted on duplicate purchases across departments. That messy data reality means companies are nowhere near ready for this type of massive AI deployment.”

After spending years building intelligence functions and technology platforms at HSBC – first for wholesale banking fraud, then expanding to all financial crime across the bank’s entire customer base – he left to solve what he calls “the gap between having aggregated data and turning it into things that are actually meaningful”.

What jumped out from our conversation was his emphasis on product management. “For a really long time, there was a lack of product management around data. What I mean by that is an obsession about value, starting with the value proposition and working backwards, not the other way round.”

This echoes the findings I discussed in August’s newsletter about graduate jobs. As I wrote then, graduate jobs in the UK have dropped by almost two-thirds since 2022 – roughly double the decline for all entry-level roles. That’s the year ChatGPT launched. The connection isn’t coincidental.

Anthony’s perspective on this is particularly valuable. “AI can only automate fragments of a job, not replace whole roles – even if leaders desperately want it to.” He shared a conversation with a recent graduate who recognised that his data science degree would, ultimately, be useless. “The thing he was doing is probably going to be commoditised fairly quickly. So he pivoted into product management.”

This smart graduate’s instinct was spot-on. He’s now, in Anthony’s words, “actively using AI to prototype data products, applications, digital products, and AI itself. And because he’s a data scientist by background, he has a really good set of frameworks and set of skills”.

Yet the broader picture remains haunting. Microsoft’s 2025 Work Trend Index reveals that 71% of UK employees use unapproved consumer AI tools at work. Fifty-one per cent use these tools weekly, often for drafting reports and presentations, or even managing financial data, all without formal IT approval.

This “Shadow AI” phenomenon is simultaneously encouraging and terrifying. “It shows that people are agreeable to adopting these types of tools, assuming that they work and actually help and aren’t hard to use,” Anthony observed. “But the second piece that I think is really interesting impacts directly the shareholder value of an organisation.”

He painted a troubling picture: “If a big percentage of your employees are becoming more productive and finishing their existing work faster or in different ways, but they’re doing so essentially untracked and off-books, you now have your employees that are becoming essentially more productive, and some of that may register, but in many cases it probably won’t.”

Assuming that many employees are using AI for work without being open about it with their employers, how concerned about security and data privacy are they likely to be?

Earlier in the month, Cybernews discovered that two AI companion apps, Chattee Chat and GiMe Chat, exposed millions of intimate conversations from over 400,000 users. The exposed data contained over 43 million messages and over 600,000 images and videos.

At the time of writing, one of the apps, Chattee, was the 121st Entertainment app on the Apple App Store, downloaded over 300,000 times. This is a symptom of what people, including Microsoft’s AI chief Mustafa Suleyman (as per August’s Go Flux Yourself), are calling AI psychosis: the willingness to confide our deepest thoughts to algorithms while losing the ability to confide in actual humans.

As I explored in June 2024’s newsletter about AI companions, this trend has been accelerating. Back in March 2024, there had been 225 million lifetime downloads on the Google Play Store for AI companions alone. The problem isn’t scale. It’s the hollowing out of human connection.

Then there’s the AI bubble itself, which everyone in the space has been talking about in the last few weeks. The Guardian recently warned that AI valuations are “now getting silly”. The CAPE ratio – the cyclically adjusted price-to-earnings multiple – has reached dotcom-bubble levels. The “Magnificent 7” tech companies now represent slightly more than a third of the whole S&P 500 index.

OpenAI’s recent deals exemplify the circular logic propping up valuations. The arrangement under which OpenAI will pay Nvidia for chips and Nvidia will invest $100bn in OpenAI has been criticised as exactly what it is: circular. The latest move sees OpenAI pledging to buy lots of AMD chips and take a stake in AMD over time.

And yet amid this chaos, there are plenty of people going back to human basics: rediscovering real, in-person connection through physical activity and genuine community.

Consider walking football in the UK. What began in Chesterfield in 2011 as a gentle way to coax older men back into exercise has become one of Britain’s fastest-growing sports. More than 100,000 people now play regularly across the UK, many managing chronic illnesses or disabilities. It has become “a masterclass in human communication” that no AI could replicate. Tony Jones, 70, captain of the over-70s, described it simply: “It’s the camaraderie, the dressing room banter.”

Research from Nottingham Trent University found that walking footballers’ emotional well-being exceeded the national average, and loneliness was less common. “The national average is about 5% for feeling ‘often lonely’,” said Professor Ian Varley. “In walking football, it was 1%.”

This matters because authentic human interaction – the kind that requires you to read body language, manage tone, and show up physically – can’t be automated. Princess Catherine emphasises this in her essay, citing Harvard Medical School’s research showing that “the people who were more connected to others stayed healthier and were happier throughout their lives. And it wasn’t simply about seeing more people each week. It was about having warmer, more meaningful connections. Quality trumped quantity in every measure that mattered.”

The digital world offers neither warmth nor meaning. It offers convenience. And as Catherine warns, convenience is precisely what’s killing us: “We live increasingly lonelier lives, which research shows is toxic to human health, and it’s our young people (aged 16 to 24) that report being the loneliest of all – the very generation that should be forming the relationships that will sustain them throughout life.”

Roosevelt understood this instinctively over a century ago: success isn’t about what you know or what you can do. It’s about how you relate to other people. That skill – the ability to truly connect, to read a room, to build trust, to navigate conflict, to offer genuine empathy – remains stubbornly, beautifully human.

And it’s precisely what we’re systematically destroying. If we don’t take action to arrest this dark and deepening trend of digitally supercharged disconnection, the dream of AI and other technologies being used for enlightenment and human flourishing will quickly prove to be a living nightmare.

The present

Image runner’s own

As the walking footballers demonstrate, the physical health benefits of group exercise are sometimes secondary to camaraderie – but winning and hitting goals are also fun and life-affirming. In October, I ran my first half-marathon in under 1 hour and 30 minutes, crossing the line at Walton-on-Thames to complete the River Thames half in 1:29:55. A whole five seconds to spare! I would have been nowhere near that time without Mike.

Mike is a member of the Crisis of Dads, the running group I founded in November 2021. What started as a clutch of portly, middle-aged plodders meeting at 7am every Sunday in Ladywell Fields, in south-east London, has grown to 26 members. Men in their 40s and 50s exercising to limit the dad bod and creating space to chat through things on our minds.

The male suicide rate in the UK in 2024 was 17.1 per 100,000, compared to 5.6 per 100,000 for women, according to the charity Samaritans. Males aged 50-54 had the highest rate: 26.8 per 100,000. Connection matters. Friendship matters. Physical presence matters.

Mike paced me during the River Thames half-marathon. With two miles to go, we were on track to go under 90 minutes, but the pain was horrible. His encouragement became more vocal – and more profane – as I closed in on something I thought beyond my ability.

Sometimes you need someone who believes in your ability more than you do to swear lovingly at you to cross that line quicker.

Work in the last month has been equally high-octane, and (excuse the not-so-humble brag) record-breaking – plus full of in-person connection. My fledgling thought leadership consultancy, Pickup_andWebb (combining brand strategy and journalistic expertise to deliver guaranteed ROI – or your money back), is taking flight.

And I’ve been busy moderating sessions at leading technology events across the country, around the hot topic of how to lead and prepare the workforce in the AI age.

Moderating at DTX London (image taken by organisers)

On the main stage at DTX London, I opened a session on AI readiness by asking the audience whose workforce was suitably prepared. One person, out of hundreds, stuck their hand up: Andrew Melville, who leads customer strategy for Mission Control AI in Europe. Sportingly, he took the microphone and explained the key to his success.

I caught him afterwards. His confidence wasn’t bravado. Mission Control recently completed a data reconciliation project for a major logistics company. The task involved 60,000 SKUs of inventory data. A consulting firm had quoted two to three months and a few million pounds. Mission Control’s AI configuration completed it in eight hours. A thousand times faster, and 80% cheaper.
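Andrew didn’t detail what the AI configuration actually did, so treat the following as a toy sketch of what “data reconciliation” typically means at its core – matching records across two systems and flagging disagreements – with every SKU and quantity invented:

```python
# Toy reconciliation of two inventory exports keyed by SKU (all data
# invented). The real 60,000-SKU job is this comparison plus the hard
# part: investigating and resolving every flagged mismatch.

warehouse = {"SKU-001": 40, "SKU-002": 15, "SKU-003": 7}
erp       = {"SKU-001": 40, "SKU-002": 12, "SKU-004": 9}

def reconcile(a, b):
    """Return (sku, problem) pairs where the two systems disagree."""
    issues = []
    for sku in sorted(a.keys() | b.keys()):
        if sku not in a:
            issues.append((sku, "missing from warehouse export"))
        elif sku not in b:
            issues.append((sku, "missing from ERP export"))
        elif a[sku] != b[sku]:
            issues.append((sku, f"quantity mismatch: {a[sku]} vs {b[sku]}"))
    return issues

for sku, problem in reconcile(warehouse, erp):
    print(sku, "->", problem)
```

Presumably the speed-up came from automating the triage that follows this comparison – which is where the consultants’ weeks would otherwise go.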

“You’re talking orders of magnitude,” Andrew said. “We’re used to implementing an Oracle database, and things get 5 or 10% more efficient. Now you’re seeing a thousand times more efficiency in just a matter of days and hours.”

He drew a parallel to the Ford Motor Company’s assembly line. Before that innovation, it took 12 hours to build a car. After? Ninety minutes. Eight times faster. “Imagine being a competitor of Ford,” Andrew said, “and they suddenly roll out the assembly line. And your response to that is: we’re going to give our employees power tools so they can build a few more cars every day.”

That’s what most companies are doing with AI: giving workers ChatGPT subscriptions, hoping for magic, and missing the fundamental transformation required. As I said on stage at DTX London, it’s like handing workers the keys to a Formula 1 car without instructions, then wondering why there are so many immediate and expensive crashes.

“I think very quickly what you’re going to start seeing,” Andrew said, “is executives that can’t visualise what an AI transformation looks like are going to start getting replaced by executives that do.”

At Mission Control, he’s building synthetic worker architectures – AI agents that can converse with each other, collaborate across functions, and complete higher-order tasks. Not just analysing inventory data, but coordinating with procurement systems and finance teams simultaneously.

“It’s the equivalent of having three human experts in different fields,” Andrew explained, “and you put them together and you say, we need you to connect some dots and solve a problem across your three areas of expertise.”
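Mission Control’s architecture isn’t public, so here is only a generic sketch of the pattern Andrew describes – specialised agents handing a shared task between functions – with every agent, note, and figure invented for illustration:

```python
# Generic multi-agent hand-off (not Mission Control's code): each
# "synthetic worker" covers one specialism and appends its findings,
# so a cross-functional task accumulates context as it moves along.

from dataclasses import dataclass, field

@dataclass
class Task:
    goal: str
    notes: list = field(default_factory=list)

def inventory_agent(task):
    task.notes.append("inventory: 412 SKUs flagged as overstocked (invented figure)")
    return task

def procurement_agent(task):
    task.notes.append("procurement: reorders paused on flagged SKUs")
    return task

def finance_agent(task):
    task.notes.append("finance: working-capital release estimated")
    return task

# A coordinator chains the specialists. In a real system each step
# would be an LLM call with tools, and agents could query one another
# mid-task rather than running in a fixed order.
task = Task(goal="reduce excess inventory")
for agent in (inventory_agent, procurement_agent, finance_agent):
    task = agent(task)
print("\n".join(task.notes))
```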

The challenge is conceptual. How do you lead a firm where human workers and digital workers operate side by side, where the tasks best suited for machines are done by machines and the tasks best suited for humans are done by humans?

This creates tricky questions throughout organisations. Right now, most people are rewarded for being at their desks for 40 hours a week. But what happens when half that time involves clicking around in software tools, downloading data sets, reformatting, and loading back? What happens when AI can do all of that in minutes?

“We have to start abstracting the concept of work,” Andrew said, “and separating all of the tasks that go into creating a result from the result itself.”

Digging into that is for another edition of the newsletter, coming soon. 

Elsewhere, at the first Data Decoded in Manchester, I moderated a 30-minute discussion on leadership in the age of AI. We were just getting going when time was up, which feels very much like 2025. The appetite for genuine insight was palpable. People are desperate for answers beyond the hype. Leaders sense the scale of the shift. However, their calendars still favour show-and-tell over do-and-learn. That will change, but not without bruises.

Also in October, my essay on teenage hackers was finally published in the New Statesman. The main message is that we’re criminalising the young people whose skills we desperately need, instead of offering them a path into cybersecurity and related industries, away from the darker criminal world.

Looking slightly ahead, on 11 November, I’ll be expanding on these AI-related themes, debating at The Portfolio Collective’s Portfolio Career Festival at Battersea Arts Centre. The subject, Unlocking Potential or Chasing Efficiency: AI’s Impact on Portfolio Work, prompts the question: should professionals embrace AI as a tool to amplify skills, creativity and flow, or hand over entire workflows to autonomous agents?

I know which side I’m on. 

(If you fancy listening in and rolling your sleeves up alongside over 200 ambitious professionals – for a day of inspiration, connection and, most importantly, growth – I can help with a discounted ticket. Use OLIVERPCFEST for £50 off the cost here.)

The past

In 2013, I was lucky enough to edit the Six Nations Guide with Lewis Moody, the former England rugby captain, a blood-and-thunder flanker who clocked up 71 caps. At the time, Lewis was a year into retirement, grappling with the physical aftermath of a brutal professional career.

When the tragic news broke earlier in October that Lewis, 47, had been diagnosed with the cruelly life-sapping motor neurone disease (MND), it unleashed a wave of sorrow from the rugby community and far beyond. I simply sent him a heart emoji. He texted the same back a few hours later.

Lewis’s hellish diagnosis and the impact it has had on so many feels especially poignant given Princess Catherine’s reflections on childhood development. She writes about a Harvard study showing that “people who developed strong social and emotional skills in childhood maintained warmer connections with their spouses six decades later, even into their eighties and nineties”.

She continued: “Teaching children to better understand both their inner and outer worlds sets them up for a lifetime of healthier, more fulfilling relationships. But if connection is the key to human thriving, we face a concerning reality: every social trend is moving in the opposite direction.”

AI has already changed work. The deeper question is whether we’ll preserve the skills that make us irreplaceably human.

This Hallowe’en, the real horror isn’t monsters at the door. It’s the quiet disappearance of human connection, one algorithmically optimised interaction at a time.

Roosevelt was right. Success depends on getting along with people. Not algorithms. Not synthetic companions. Not virtual influencers.

People.

Real, messy, complicated, irreplaceable people. 

Statistics of the month

💰 AI wage premium grows
Workers with AI skills now earn a 56% wage premium compared to colleagues in the same roles without AI capabilities – showing that upskilling pays off in cold, hard cash. (PwC)

🔄 A quarter of jobs face radical transformation
Roughly 26% of all jobs on Indeed appear poised to transform radically in the near future as GenAI rewrites the DNA of work across industries. (Indeed)

📈 AI investment surge continues
Over the next three years, 92% of companies plan to increase their AI investments – yet only 1% of leaders call their companies “mature” on the deployment spectrum, revealing a massive gap between spending and implementation. (McKinsey)

📉 Workforce reduction looms
Some 40% of employers expect to reduce their workforce where AI can automate tasks, according to the World Economic Forum’s Future of Jobs Report 2025 – a stark reminder that transformation has human consequences. (WEF)

🎯 Net job creation ahead
A reminder that despite fears, AI will displace 92 million jobs but create 170 million new ones by 2030, resulting in a net gain of 78 million jobs globally – proof that every industrial revolution destroys and creates in equal (or greater) measure. (WEF)


Go Flux Yourself: Navigating the Future of Work (No. 19)


TL;DR: July’s newsletter explores why servers are literally melting, teenagers are hacking from their bedrooms, and we’re wasting resources on an industrial scale. Better stewardship – of energy, talent, and attention – separates tomorrow’s winners from the rest.

Image created on Midjourney

The future

“We cannot build enough chips, we cannot build enough data centres, we cannot build enough power grids to meet the demand. But whether or not we need it and where do we draw the line, that’s a different topic.”

The servers are melting, literally.

On July 19, Google and Oracle revealed that their UK data centres had experienced cooling-related failures during Britain’s third heatwave of the summer. Google’s London-based region, europe-west2, went down, while Oracle’s local facilities faced similar technical difficulties. Our digital infrastructure buckled under the relentless heat.

These weren’t isolated incidents, either. OpenAI’s image-generation models regularly push graphics processing unit temperatures beyond sustainable limits, while cooling systems struggle to keep pace with computational demands. 

According to the Boston Consulting Group (BCG), generative AI is expected to account for 60% of data centre power growth through 2028. Each new generation of AI models consumes far more power, water, and computational capacity than the last.

What we’re witnessing is the collision of two unsustainable systems: our heating planet and our energy-hungry digital ambitions. It’s hard to ignore the brutal mathematics, which adds up to a planet heating up and being sucked dry.

Notably, on May 20, the United Kingdom reached its Earth Overshoot Day: the date by which, if everyone lived like Britons, humanity’s demand for ecological resources would exceed what the Earth can regenerate in a year. Alarmingly, we hit this tipping point three days before supposed emissions-guzzler China, and far earlier than the global average. We’re over-consuming everything, not just digital resources.

Mehdi Paryavi, Chairman of the International Data Center Authority (IDCA), put it starkly when I spoke with him recently: the global race to build computing capacity is outpacing our ability to power it sustainably. “AI becoming the future of our world today does require massive investments,” he told me. “We cannot build enough chips, we cannot build enough data centres, we cannot build enough power grids to meet the demand,” he added, which is where this month’s opening quotation comes from.

The numbers tell the story. According to the International Energy Agency (IEA), data centres are on course to consume 3% of global electricity by 2030, roughly equivalent to the entire electricity consumption of Japan. Elsewhere, the Center for Strategic and International Studies calculates that United States data centres alone will need 84 gigawatts by the end of the decade, a 2,100% increase from today’s 4 gigawatts.
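The Japan comparison survives a back-of-envelope check, using round numbers of my own – roughly 30,000 TWh of global electricity generation a year and roughly 900 TWh of Japanese consumption, approximations rather than IEA figures:

```python
# Back-of-envelope check on the IEA comparison. Both constants are my
# rough approximations, not official figures.
global_generation_twh = 30_000  # world electricity output, ~TWh/year
japan_consumption_twh = 900     # Japan's electricity use, ~TWh/year

data_centre_draw = 0.03 * global_generation_twh
print(f"3% of global generation ≈ {data_centre_draw:,.0f} TWh/year")
print(f"Japan's consumption     ≈ {japan_consumption_twh:,} TWh/year")
# ~900 TWh against ~900 TWh: 'roughly equivalent to Japan' holds up.
```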

Meanwhile, Morgan Stanley research published last September projects that data centre carbon emissions will triple by 2030, reaching 2.5 billion tons of CO2. Forget building a digital future – we’re constructing a carbon catastrophe.

The regulatory landscape is tightening, too. The upcoming Cyber Security and Resilience Bill, expected to be introduced within weeks in the UK, will bring data centres within the scope of critical national infrastructure regulation for the first time. Andy Green, a partner at cybersecurity consultancy Avella, explains the significance: “Where previously it was just best endeavours or they tried to comply with an ISO standard, now they’ll have actual regulations that they need to meet.”

This means mandatory incident reporting within specific timeframes, supply chain security assurance, and adherence to the National Cyber Security Centre’s Cyber Assessment Framework. As Green notes: “It’s a high bar that they’re going to have to hit in quite short order.”

The shift represents another constraint on an already strained system. Data centres now face not only physical limits from heat and power, but also regulatory compliance, which could further complicate their operations.

At South by Southwest London in early June, I enjoyed speaking with Will Alpine (he and his wife, Holly, fellow co-founder of the Enabled Emissions Campaign, chose their surname to reflect their climate activism – which I love).

Alpine is a former Microsoft sustainability engineer who helped coin the term “Green AI” back in 2020. He was among the first to warn about AI’s energy impacts before the generative AI boom made those concerns impossible to ignore. Alpine eventually left Microsoft after internal battles over what he calls “enabled emissions”: the carbon cost of AI accelerating high-pollution industries.

His frustration with corporate sustainability theatre was palpable: “It doesn’t matter how green your compute is if it’s being used to enable fossil fuel extraction.”

This insight points to the heart of our resource crisis: we’re optimising individual components while ignoring systemic waste.

Alpine warns that we must consider not just the operational energy costs, but “the physical materials that go into making the chips and precious materials, all the resource constraints associated with that, and then the waste at the end of the life cycle”. Often, hardware is recycled before its natural end to chase the latest specifications, and companies sleepwalk through needless software upgrades.

Here’s where the crisis becomes truly absurd. Tomás O’Leary, CEO of Origina, captured this waste perfectly when we spoke recently. “Every pointless software upgrade, every vendor-mandated migration, every compliance-driven refresh adds to infrastructure strain,” he said. “Companies waste enormous resources on changes that deliver zero business value.”

The infrastructure barons are exacerbating the situation. Hyperscalers now command around 60% of data centre capacity, with Amazon, Microsoft, and Meta leading the land grab. They’re securing entire campuses before they hit the market, creating what O’Leary calls “digital feudalism”, where smaller companies queue cap in hand for access to computational resources.

Yet amid this escalating consumption, some organisations prove that efficiency beats excess. At the start of the year, DeepSeek achieved generative AI performance comparable to ChatGPT at roughly 6% of typical costs: $6 million versus the industry-standard $100 million. The problem, in other words, isn’t computational limits but computational waste.

Chicago-based Roger Strukhoff, IDCA’s Chief Research Officer, sees this reflected in the organisation’s latest global digital readiness rankings. Despite hosting 44% of the world’s data centres, the US ranks just 38th globally because its approach prioritises scale over sustainability. He told me: “Are we environmentally sound? Are we socially, economically sound? There’s no point having a digital economy if your society is going in other directions.”

The Scandinavian countries top the IDCA rankings because they understand holistic resource management. Building the biggest data centres means nothing if you’re burning through energy, materials, and human potential at unsustainable rates.

This brings us to our most mismanaged resource: human talent. While we obsess over GPU shortages, we’re squandering cognitive capacity on an industrial scale. The arrests connected to the Scattered Spider cybercriminal gang show just how badly we’re failing young people with extraordinary capabilities.

On July 10, police arrested four people in connection with the M&S and Co-op cyberattacks from a couple of months earlier: a 20-year-old woman from Staffordshire and three males aged 17 to 19 from London and the West Midlands.

Indeed, these weren’t sophisticated criminal masterminds but teenagers operating from their bedrooms. A 2022 study by the University of East London found that 69% of European teenagers have committed some form of cybercrime. Meanwhile, there are 3.5 million vacant cybersecurity positions globally. We’re criminalising the very people we need to protect our digital infrastructure.

For the New Statesman, I’ve been investigating the subject of teenage hackers: their motivations, the lack of career opportunities, and how initiatives like The Hacking Games, which uses gaming to identify skills that can combat cybercrime, are addressing these issues. The pattern is depressingly predictable: bedroom gaming leads to online gaming, then gaming cheats, hacking forums, minor cybercrime, financial gain, and finally serious cybercrime. Each step feels logical and harmless, yet the pathway leads from curiosity to destruction. (I will share the essay once it is published. As a long-time NS subscriber, I am thrilled to feature in its hallowed pages.)

The rise of AI should free humans to focus on tasks that are uniquely human, such as creativity, empathy, and complex problem-solving. Instead, we’re automating away entry-level opportunities while burning out experienced workers with increasingly demanding tasks. Two-thirds of HR professionals now believe that colleagues who do not use AI risk falling behind, potentially creating a two-tier workforce, according to new Culture Amp research. Yet 77% of those using AI are entirely self-taught, hinting at a shortfall in formal AI training across organisations.

The solution isn’t to abandon technology but to steward it more thoughtfully. As Paryavi explained, data centres “are not the drivers impacting our environment negatively. They are actually enablers of creating a cleaner environment if we do it right, if we think outside the box.”

Smart organisations are already making this shift. Rather than constantly upgrading systems, they’re optimising existing ones. Instead of demanding the latest AI models, they’re solving specific problems with targeted solutions. And rather than competing for scarce computational resources, they’re building efficiency into their operations from the ground up.

Those who choose efficiency over excess, sustainability over waste, and genuine innovation over artificial obsolescence will thrive within physical constraints while others chase computational fantasies.

As we enter an era where every joule of energy and every moment of human attention becomes precious, the future belongs to the resource stewards, not the resource raiders.

The present

Image created on Midjourney

By the time this month’s Go Flux Yourself is published, on the last day of July, I’ll be in Greece on a family holiday, taking a proper break for the first time in months. No newsletters or keynotes to write, no podcasts to organise, no workshops to facilitate, and no interviews to conduct. Just sea, sun, and the kind of decompression that only comes when you physically remove yourself from your usual environment.

This break feels particularly timely, as I facilitated a couple of workshops on the power of taking breaks earlier in July. I’ve been working with a global pharmaceutical company on their LinkedIn summer challenge, helping technical professionals find their voices during the holiday season. The core message was simple: time away helps creativity flourish in ways that always-on hustle simply can’t match.

During an unrelated visit to this client’s UK headquarters, I interviewed six young professionals about their international assignments. The company sends young European graduates anywhere in the world for up to two years through its early-career development programme, and their roles are often very different from what they studied, for good reason. For example, one employee shifted from project management to employee engagement. Another moved from pricing to global market access. A third transitioned from pharmacy to brand management. The pattern wasn’t about perfect qualifications but about recognising capabilities that traditional recruitment processes miss entirely.

This approach reminds me of February’s newsletter, where Harvard’s Siri Chilazi warned about the distinction between “performative fairness” and structural change. This company has embedded skills-based career development into its culture, actively cultivating potential across international boundaries.

On the subject of expanding opportunities, I’m pleased to announce I’m now hosting a new podcast for Clarion Events called DTX Unplugged, which delves into innovations, trends, and challenges shaping business evolution. After years of moderating the company’s main stage panels and writing event takeaways, this feels like a natural evolution. Good working relationships develop organically over time, and when they mature into new opportunities, that’s genuine relationship capital at work.

The timing of the podcast – which I will share in future newsletters, once live – aligns with broader shifts I’ve been tracking since launching Go Flux Yourself in January 2024. Ultimately, in our resource-constrained world, the most successful collaborations are no longer transactional but built on a shared curiosity about solving real problems together.

Back to switching off, now. Remember that rest doesn’t equal unproductive time. It’s an investment in capabilities that can’t be automated or optimised away. The ideas that emerge from genuine downtime, the connections that develop through unhurried conversations, and the perspectives that shift when you step outside familiar patterns all remain irreplaceably human.

It will take me a couple of days to properly decompress in Greece. And, no doubt, in the final days before my flight home, I’ll start thinking about my next moves: a new speaker-focused website, for instance, more speaking opportunities, and exciting podcast developments. But for now, the most productive thing I can do is absolutely nothing at all.

The past

Image created on Midjourney

As a bushy-haired 12-year-old football addict with a love of Alessandro Del Piero, I scored a hat-trick for Tigers against Panthers to win our internal school football competition. It remains one of my most cherished memories: being carried off the pitch on my teammates’ shoulders, feeling that perfect combination of individual achievement and collective celebration. Sad, I know!

Over three decades later, watching my own children at their sports day a couple of weeks ago, I was reminded why these moments matter so much.

The temperature had hit nearly 30 degrees as parents gathered on the Astroturf. It was Darcey’s first sports day, and I watched, a little taken aback, as she threw herself into each event with uninhibited enthusiasm. There was space hopping (where she excelled), the egg-and-spoon race, football dribbling, and the climactic tug of war, among other more traditional events. The children competed in teams representing continents of the world, with both Darcey and elder brother Freddie proudly wearing Europe’s (blue) colours.

What struck me wasn’t just the competition but the collaboration. Freddie, despite being one of the tallest and strongest in his year, made sure to encourage a teammate with special educational needs, offering words of support and reassuring touches during the more challenging events. In that moment, I saw everything that matters for the future of work: competitiveness balanced with compassion, individual capability channelled toward collective success.

I thought about this before appearing as a human-work evolution expert on St James’s Place Financial Academy’s The Switch Podcast, where I discussed preparing careers for an AI-driven future with host Gee Foottit. I was asked whether it’s possible to future-proof careers anymore, or if adaptability is the only strategy. My answer drew on exactly these sporting metaphors. “You can’t future-proof a career,” I said, “but you can future-ready yourself by developing what I call the ‘six Cs’: the uniquely human capabilities that become more valuable as AI advances.”

I listed communication, creativity, compassion, courage, curiosity, and collaboration. These so-called soft skills are the defining capabilities of the age. They allow us to build trust, forge connections, and work across differences in ways that no algorithm can replicate. (Again, I will share the episode once it has been aired, later in the year.)

The sports day reinforced this: children learning to compete fairly, support teammates, handle disappointment gracefully, and celebrate others’ achievements. These are precisely the skills that will matter in a world where routine tasks become automated.

The most successful societies have always been those that channel competitive instincts toward collaborative ends. Whether we’re talking about Olympic teams, scientific research groups, or the small cohort of reformed teenage hackers now working to protect rather than exploit digital infrastructure, the magic happens when individual capabilities serve shared purposes.

The pre-digital era naturally fostered this kind of collaboration. You couldn’t build a cathedral, win a war, or explore new continents without complex coordination between different specialists. Physical constraints meant that waste was visible and resources had to be allocated thoughtfully.

We’ve gained tremendous individual capabilities since then, but we’ve lost something essential: the productive friction that forced people to work together, to consider long-term consequences, to balance personal ambition with collective welfare.

As we enter a period where every joule of energy, every moment of human attention, and every ounce of raw material becomes precious, those childhood lessons from the playing field become more vital than ever. Competition drives excellence, but collaboration determines whether that excellence serves human flourishing or just individual advancement.

The teammates carrying me on their shoulders weren’t just celebrating my goals but what we’d achieved together. In our resource-constrained future, that distinction may well determine which teams, organisations, and societies thrive.

Statistics of the month

🏢 AI governance vacuum
Some 93% of UK organisations use AI, but only 7% have fully embedded governance frameworks. Alarmingly, 35% of companies have no clear owner for AI strategy, despite EU AI Act obligations. [🔗] [🔗]

⚙️ AI testing failures exposed
Only 28% of organisations apply bias detection during AI testing, while just 22% test for model interpretability. Most rely on legacy development processes that have not been updated to address AI-specific risks, such as bias and explainability gaps. [🔗]

🎯 AI skills paradox
One in five UK business leaders now depends on freelancers to deliver critical AI skills they lack in-house, while 46% of freelancers report increased earnings from AI work. [🔗]

💼 Freelance revolution accelerates
Over a third of UK businesses now save more than £40,000 monthly using freelancers, with 87% planning to engage them up to 10 times in the next six months. Meanwhile, 70% of self-employed workers earn more than in full-time employment. [🔗]

🚫 AI resistance grows
Meanwhile, across the Atlantic, nearly two-thirds of US adults (64%) say they’re more likely to resist using AI-powered technologies as long as possible, compared with 35% who say they’re more likely to embrace using AI as soon as possible. [🔗]

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 15)

TL;DR: March’s Go Flux Yourself explores what good leadership looks like in an AI-driven world. Spoiler: it’s not Donald Trump. From psychological safety and the “Lost Einsteins” to lessons from the inventor of plastic, it examines why innovation without inclusion is reckless – and why collaboration, kindness, and asking better questions might be our best defence against digital delusion and existential drift …

Image created on Midjourney

The future

“Leadership is the art of harnessing the efforts of others to achieve greatness.”

Donald Trump’s America-first agenda may appeal to the base instincts of populism, a nationalist fever dream dressed up as economic strategy. However, it is hopelessly outdated as a leadership model for a globally connected, AI-enabled future. 

In fact, it’s worse than that. It’s actively regressive. Trumpism, and the rise of Trumpian imitators across the globe, isn’t just shutting borders. It’s shutting minds, too, and that’s more debilitating for society. It trades in fear, not foresight. It rewards silence over dissent. And in doing so, it stifles precisely the kind of leadership the future demands.

Because let’s be clear: the coming decades will not be defined by those who shout the loudest or build the tallest walls. They will be defined by those who keep channels open – not just for trade, but for ideas. For difference. For disagreement. For discovery.

That starts with listening. And not just listening politely, but listening generatively – creating the psychological space where people feel safe enough to share the thought that might change everything.

At the recent Workhuman Live Forum in London, Harvard’s Amy Edmondson – a global authority on leadership and psychological safety – warned of the “almost immeasurable” consequences of holding back. In her research, 93% of senior leaders admitted that their silence had tangible costs. Not theoretical. Not abstract. Tangible. Safety failures. Wasted resources. Poor decisions. Quiet disengagement. And perhaps worst of all, missed opportunities to learn.

Why do we hold back, and not speak up? Because we’re human. And humans are wired to avoid looking stupid. We’d rather be safe than smart. Edmondson calls it “impression management”, and we’re all fluent in it. From the start of primary school, we learn not to raise our hand unless we’re sure of the answer. By the time we enter the workforce, that instinct is second nature.

But in today’s volatile, uncertain, complex, and ambiguous (VUCA) world – a term that had its chorus in the early pandemic days, five years ago, and which I’m now hearing a lot more from business leaders – that instinct is no longer helpful. It’s dangerous. Because real innovation doesn’t happen in safe, silent rooms. It happens in teams willing to fail fast, speak up, and challenge the status quo. In rooms where “I think we might be wrong” is not a career-ending statement, but a spark.

So how should leaders lead? The quotation that begins this month’s Go Flux Yourself is from Ken Frazier, former CEO of Merck, and was cited by Edmondson, who heard it in one of her sessions. It’s worth repeating: “Leadership is the art of harnessing the efforts of others to achieve greatness.”

This brings us to Aneesh Raman, LinkedIn’s Chief Economic Opportunity Officer, and his powerful message at Talent Connect and Sancroft Convene, in the shadow of St Paul’s Cathedral in London. Raman argues that we are moving out of the “knowledge economy” – where technical proficiency was king – and into the “innovation economy”, where our most human skills become our greatest assets.

He lists them as the five Cs: communication, creativity, compassion, courage, and curiosity. Let’s make it six: collaboration. These are no longer “soft skills” but the defining skills of the age. They allow us to build trust, forge connections, and work across differences. They are, as Raman says, why we are the apex species on the planet.

But here’s the catch: while these skills are distributed broadly across the population, the opportunity to develop and express them is not. Enter the “Lost Einsteins” – those with the potential to innovate but without the credentials, connections, or capital to turn ideas into impact. Economist Raj Chetty’s landmark study found that children from wealthy families are 10 times more likely to become inventors than equally talented peers from lower-income backgrounds.

This is a global failure. We are squandering talent on an industrial scale – not because of a lack of ability, but because of a lack of inclusion. And that’s a leadership failure.

We need leaders who can spot and elevate the quiet genius in the room, who don’t confuse volume with value, and who can look beyond the CV and see the potential in a person’s questions, not just their answers.

And we need to stop romanticising “hero” innovation – the lone genius in a garage – and embrace the truth: innovation is a team sport. For instance, Leonardo da Vinci, as biographer Walter Isaacson points out, was a great collaborator. He succeeded because he listened as much as he led.

Which brings us back to psychological safety – the necessary precondition for team-based innovation. Without it, diversity becomes dysfunction. With it, it becomes dynamite.

Edmondson’s research shows that diverse teams outperform homogenous ones only when psychological safety is high. Without that safety, diversity leads to miscommunication, mistrust, and missed potential. But with it? You get the full benefit of varied perspectives, lived experiences, and cognitive styles. You get the kind of high-quality conversations that lead to breakthroughs.

But these conversations don’t happen by accident. They require framing, invitation, and modelling. They require leaders to say – out loud – things like: “I’ve never flown a perfect flight” (as one airline captain Edmondson studied told his new crew). Or “I need to hear from you”. Or even: “I don’t know the answer. Let’s figure it out together.”

KeyAnna Schmiedl, Workhuman’s Chief Human Experience Officer, put it beautifully in a conversation we had at the Live Forum event: leadership today is less about having the answer and more about creating the conditions for answers to emerge. It’s about making work more human – not through performative gestures, but through daily, deliberate acts of kindness. Not niceness. Kindness.

Niceness avoids conflict. Kindness leans into it, constructively. Niceness says, “That’s fine”. Kindness says, “I hear you – but here’s what we need.” Niceness smooths things over. Kindness builds things up.

And kindness is deeply pragmatic. It’s not about making everyone happy. It’s about making sure everyone is heard. Because the next big idea could come from the intern. From the quiet one. From the woman in trainers, not the man in a suit.

This reframing of leadership is already underway. Schmiedl herself never thought of herself as a leader – until others started reflecting it back to her. Not because she had all the answers, but because she had a way of asking the right questions, of creating rooms where people showed up fully, where difference wasn’t just tolerated but treasured.

So what does all this mean for the rest of us?

It means asking better questions. Not “Does anyone disagree?” (cue crickets). But “Who has a different perspective?” It means listening more than speaking. It means noticing who hasn’t spoken yet – and inviting them in. It means, as Edmondson says, getting curious about the dogs that don’t bark. Other good questions include “What are we missing?” and “Can you explain that further, please?”

And it means remembering that the goal is not psychological safety itself. The goal is excellence. Innovation. Learning. Fairness. Safety is just the soil in which those things can grow.

The future belongs to the leaders who know how to listen, invite dissent, ask good questions, and, ultimately, understand that the art of leadership is not dominance, but dialogue.

Because the next Einstein is out there. She, he, or they just haven’t been heard yet.

The present

“We’re gearing up for this year to be a year where you’ll have some ‘oh shit’ moments,” said Jack Clark, policy chief at Anthropic, the $40 billion AI start-up behind the Claude chatbot, earlier this year. He wasn’t exaggerating. From melting servers at OpenAI (more on this below) to the dizzying pace of model upgrades, 2025 already feels like we’re living through the future on fast-forward.

And yet, amid all the noise, hype, and existential hand-wringing, something quieter – but arguably more profound – is happening: people are remembering the value of connection.

This March, I had the pleasure of speaking at a Federation of Small Businesses (FSB) virtual event for members in South East London. The session, held on Shrove Tuesday, was fittingly titled “Standing Out: The Power of Human Leadership in an AI World”. Between pancake references and puns (some better than others), I explored what it means to lead with humanity in an age when digital tools dominate every dashboard, inbox, and conversation.

The talk was personal, anchored in my own experience as a business owner, a journalist, and a human surfing the digital tide. I shared my CHUI framework – Community, Health, Understanding, and Interconnectedness – as a compass for turbulent times. Because let’s face it: the world is messy right now. Geopolitical uncertainty is high. Domestic pressures are mounting. AI is changing faster than our ability to regulate or even comprehend it. And loneliness – real, bone-deep isolation – is quietly eroding the foundations of workplaces and communities.

And yet, there are bright spots. And they’re often found in the places we least expect – like virtual networking events, Slack channels, and local business groups.

Since that FSB session, I’ve connected with a flurry of new people, each conversation sparking unexpected insight or opportunity. One such connection was Bryan Altimas, founder of Riverside Court Consulting. Bryan’s story perfectly exemplifies how leadership and collaboration can scale, even in a solo consultancy.

After the pandemic drove a surge in cybercrime, Altimas responded not by hiring a traditional team but by building a nimble, global network of 15 cybersecurity specialists – from policy experts to ethical hackers based as far afield as Mauritius. “Most FSB members don’t worry about cybersecurity until it’s too late,” he told me in our follow-up chat. But instead of fear-mongering, Altimas and his team educate. They equip small businesses to be just secure enough that criminals look elsewhere – the digital equivalent of fitting a burglar alarm on your front door while your neighbour leaves theirs ajar.

What struck me most about Altimas wasn’t just his technical acumen, but his collaborative philosophy. Through FSB’s Business Crimes Forum, he’s sat on roundtables with the London Mayor’s Office and contributed to parliamentary discussions. These conversations – forged through community, not competition – have directly generated new client relationships and policy influence. “It’s about raising the floor,” he said. “We’re stronger when we work together.”

That sentiment feels increasingly urgent. In an age where cybercriminals operate within sophisticated, decentralised networks, small businesses can’t afford to work in silos. Our defence must be networked, too – built on shared knowledge, mutual accountability, and trust.

And yet, many governments seem to be doing the opposite. The recent technical capability notice issued to Apple – which led to the withdrawal of its Advanced Data Protection service from UK devices – is a case in point. Altimas called it “the action of a digitally illiterate administration”, one that weakens security for all citizens while failing to deter the real bad actors. The irony? In trying to increase control, we’ve actually made ourselves more vulnerable.

This brings us back to the role of small business leaders and, more broadly, to the power of community. As I told the audience at the FSB event, the future of work isn’t just about AI. It’s about who can thrive in an AI world. And the answer, increasingly, is those who can collaborate, communicate, and connect across differences.

In a world where 90% of online content is projected to be AI-generated this year, authentic human interaction becomes not just a nice-to-have, but a business differentiator. Relationship capital is now as valuable as financial capital. And unlike content, it can’t be automated.

That’s why I encourage business leaders to show up. Join the webinars. Say yes to the follow-up call. Ask the awkward questions. Be curious. Some of the most valuable conversations I’ve had recently – including with Altimas – started with nothing more than a LinkedIn connection or a quick post-event “thanks for your talk”.

This isn’t about nostalgia or rejecting technology. As I said in my FSB talk, tech is not the enemy of human connection – it’s how we use it that matters. The question is whether our tools bring us closer to others or push us further into isolation.

The paradox of the AI age is that the more powerful our technologies become, the more essential our humanity is. AI can optimise, analyse, and synthesise, but it can’t empathise, mentor, or build trust in a room. It certainly can’t make someone feel seen, valued, or safe enough to speak up.

That’s where leadership comes in. As Edmondson noted, psychological safety doesn’t happen by accident. It must be modelled, invited, and reinforced. In many cases, work must be reframed to make clear that anyone and everyone can make a difference, alongside an acknowledgement by leaders that things will inevitably go wrong. And as Raman said, the next phase of work will be defined not by who codes the best, but by who collaborates the most.

Our best bet for surviving the “oh shit” moments of 2025 is not to go it alone, but to lean in together. As FSB members, for instance, we are not just business owners. We are nodes in a network. And that network – messy, human, imperfect – might just be our greatest asset.

The past

In 1907, Leo Baekeland changed the world. A Belgian-born chemist working in New York, he created Bakelite – the world’s first fully synthetic plastic. It was, by every measure, a breakthrough. Hard, durable, and capable of being moulded into almost any shape (the clue is in the name – plastikos, from the Greek, meaning “capable of being shaped”), Bakelite marked the dawn of the modern plastics industry. 

For the first time, humankind wasn’t limited to what nature could provide. We could manufacture our own materials. These materials would soon find their way into everything from telephones to televisions, jewellery to jet engines.

Baekeland had no idea what he was unleashing. And perhaps that’s the point.

More than a century later, we’re drowning in the aftershocks of that innovation. At Economist Impact’s 10th Sustainability Week earlier this month – once again in the quietly majestic surroundings of Sancroft Convene – I had the pleasure of moderating a panel titled “Preventing plastics pollution through novel approaches”. I even dressed for the occasion, sporting a nautical bow tie (always good to keep the theme on-brand), and kicked things off with a bit of self-aware humour about my surname.

One of the panellists, Kris Renwick of Reckitt, represented the makers of Harpic – the toilet cleaner brand founded by none other than Harry Pickup, surely the most illustrious bearer of my surname. (Although the late actor Ronald Pickup has a case.) There’s a certain poetry in the fact that Harry made his name scrubbing away society’s waste.

Especially when set against another panellist, Alexandra Cousteau – granddaughter of Jacques-Yves, the pioneering oceanographer who co-invented the Aqua-Lung and brought the mysteries of the sea to the world. Cousteau, who first set sail on an expedition at just four months old, told the audience that there is 50% less sea life today than in her grandfather’s time.

Let that sink in. Half of all marine life gone – in just three generations.

And plastics are a big part of the problem. We now produce around 460 million tonnes of plastic every year. Of that, 350 million tonnes becomes waste – and a staggering 91% of it is never recycled. Contrary to popular belief, though, very little of it ends up in the oceans directly.

According to Gapminder, just under 6% of all plastic waste makes it to the sea. A far greater share – around 80 million tonnes – is mismanaged: dumped, burned, or buried in ways that still wreak havoc on ecosystems and human health. As Cousteau pointed out, the average person, astonishingly, is believed to carry around the equivalent of a plastic spoon’s worth of microplastics in their body. Including in their brain.

Image created on Midjourney

It’s a bleak picture – and one with eerie echoes in the current hype cycle around AI.

Bakelite was hailed as a wonder material. It made things cheaper, lighter, more efficient. So too does AI. We marvel at what generative tools can do – composing music, designing logos, writing code, diagnosing diseases. Already there are brilliant use cases – and undoubtedly more to come. But are we, once again, rushing headlong into a future we don’t fully understand? Are we about to repeat the same mistake: embracing innovation, while mismanaging its consequences?

Take energy consumption. This past week, OpenAI’s servers were reportedly “melting” under the strain of demand after the launch of their new image-generation model. Melting. It’s not just a metaphor. The environmental cost of training and running large AI models is immense – one 2019 estimate (ie before the explosion of ChatGPT) suggested that training a single model can emit as much carbon as five cars over their entire lifetimes. That’s not a sustainable trajectory.

And yet, much like Bakelite before it, AI is being pushed into every corner of our lives. Often with the best of intentions. But intentions, as the old saying goes, are not enough. What matters is management.

On our plastics panel, Cousteau made the case for upstream thinking. Rather than just reacting to waste, we must design it out of the system from the start. That means rethinking materials, packaging, infrastructure. In other words, it requires foresight. A willingness to zoom out, to consider long-term impacts rather than just short-term gains.

AI demands the same. We need to build governance, ethics, and accountability into its architecture now – before it becomes too entrenched, too ubiquitous, too powerful to regulate meaningfully. Otherwise, we risk creating a different kind of pollution: not plastic, but algorithmic. Invisible yet insidious. Microbiases instead of microplastics. Systemic discrimination baked into decision-making processes. A digital world that serves the few at the expense of the many.

All of this brings us back to leadership. Because the real challenge isn’t innovation. It’s stewardship. As Cousteau reminded us, humans are phenomenally good at solving problems when we decide to care. The tragedy is that we so often wait until it’s too late – until the oceans are full, until the servers melt, until the damage is done.

Moderating that session reminded me just how interconnected these conversations are. Climate. Technology. Health. Equity. We can’t afford to silo them anymore. The story of Bakelite is not just the story of plastics. It’s the story of unintended consequences. The story of how something miraculous became monstrous – not because it was inherently evil, but because we weren’t paying attention.

And that, in the end, is what AI forces us to confront. Are we paying attention? Are we asking the right questions, at the right time, with the right people in the room?

Or are we simply marvelling at the magic – and leaving someone else to clean up the mess?

Statistics of the month

📊 AI in a bubble? Asana’s latest research reveals that AI adoption is stuck in a ‘leadership bubble’ – while executives embrace the tech, most employees remain on the sidelines. Two years in, 67% of companies still haven’t scaled AI across their organisations. 🔗

🤝 Collaboration drives adoption. According to the same study, workers are 46% more likely to adopt AI when a cross-functional partner is already using it. Yet most current implementations are built for solo use – missing the chance to unlock AI’s full, collective potential. 🔗

📉 Productivity gap alert. Gartner predicts that by 2028, over 20% of workplace apps will use AI personalisation to adapt to individual workers. Yet today, only 23% of digital workers are fully satisfied with their tools – and satisfied users are nearly 3x more productive. The workplace tech revolution can’t come soon enough.

📱 Emoji wars at work. New research from The Adaptavist Group exposes a generational rift in office comms: 45% of UK over-50s say emojis are inappropriate, while two-thirds of Gen Z use them daily. Meanwhile, full-stops are deemed ‘professional’ by older workers, but 23% of Gen Z perceive them as ‘rude’. Bring on the AI translators! 🔗

😓 Motivation is fading. Culture Amp finds that UK and EMEA employee motivation has declined for three straight years. Recognition is at a five-year low, and fewer workers feel performance reviews reflect their impact. Hard work, unnoticed. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 14)

TL;DR: February’s Go Flux Yourself examines fairness as a business – and societal – necessity. Splunk’s Kirsty Paine tackles AI security, Harvard’s Siri Chilazi critiques DEI’s flaws, and Robert Rosenkranz applies Stoic wisdom to ambition, humility, and success in an AI-driven world …

Image created on Midjourney with the prompt “a forlorn man with his young son both with ski gear on at the top of a mountain with no snow on it (but green grass and rock) with a psychedelic sky”

The future

“To achieve anything meaningful, you must accept that you don’t have all the answers. The most successful people are the ones who keep learning, questioning, and improving.”

Robert Rosenkranz has lived the American Dream – but you won’t hear him shouting about it. At 82, he has little interest in the brash, performative ambition that defines modern politics and business. Instead, his story is one of quiet, relentless progress. 

Born into a struggling family, he worked his way through Yale and Harvard, then went on to lead Delphi Financial Group for over three decades. By the time he stepped down as CEO in 2018, he had grown the company’s value 100-fold, overseeing more than $20 billion in assets.

Yet, Rosenkranz’s real legacy might not be in finance, but in philanthropy. Yesterday (February 27), in a smart members’ club (where I had to borrow a blazer at reception – oops!) in Mayfair, London, I attended an intimate lunch to discuss The Stoic Capitalist, his upcoming book on ambition, self-discipline, and long-term success. 

As we received our starters, he shared an extraordinary statistic: “In America, there are maybe a couple of dozen people who have given over a billion dollars in their lifetime. A hundred percent of them are self-made.”

Really? I did some digging, and the numbers back him up. As of 2024, over 25 American philanthropists have donated more than $1 billion each, according to Forbes. Further, of those who have signed the Giving Pledge – committing to give away at least half their wealth – 84% are self-made. Only 16% inherited their fortunes.

The message is clear: those who build their wealth from nothing are far more likely to give it away. Contrast this with Donald Trump, the ultimate heir-turned-huckster. Brash, transactional (“pay-to-play” is how American political scientist Ian Bremmer neatly describes him), obsessed with personal gain, the American President represents a vision of success where winning means others must lose. Rosenkranz, by contrast, embodies something altogether different – ambition not as self-interest, but as a long game that enriches others.

He has also, tellingly, lately grown apathetic about politics. Having once believed in the American meritocracy, the Republican who has helped steer public policy now sees a system increasingly warped by inherited wealth, populism, and those pay-to-play politics. “The future of American politics worries me,” he admitted at the lunch. And given the rise of Trumpian imitators, he has reason to be concerned. To my mind, the world needs more Rosenkranzes – self-made leaders who view ambition and success as vehicles for building, rather than simply taking.

This tension – between long-term, disciplined ambition and short-term, self-serving power – runs through this month’s Go Flux Yourself. Because whether we’re talking about AI security, workplace fairness, or the philosophy of leadership, the real winners will be those who take the long view and seek fairness.

Fairness at work: The illusion of progress

Fairness in the workplace is one of those ideas that corporate leaders love to endorse in principle – but shy away from in practice. Despite billions spent on Diversity, Equity, and Inclusion (DEI) initiatives, meaningful change remains frustratingly elusive. (Sadly, this fact only helps Trump’s forceful agenda to ditch such policies – an approach that is driving the marginalised to seek shelter, at home or abroad.)

“For a lot of organisations, programmatic interventions are appealing because they are discrete. They’re off to the side. It’s easy to approve a one-time budget for a facilitator to come and do a training or participate in a single event. That’s sometimes a lot easier than saying: ‘Let’s change how we evaluate performance.’ But precisely because those latter types of solutions are embedded and affect how work gets done daily, they’re more effective.”

This is the heart of what Harvard’s Siri Chilazi told me when we discussed Make Work Fair, the new book she has co-authored with Iris Bohnet. Their research offers a much-needed reality check on corporate DEI efforts.

Image created on Midjourney with the prompt “a man and a women in work clothes on a balancing scale – equal – in the style of a matisse painting”

She explained why so many workplace fairness initiatives fail: they rely on changing individual behaviour rather than fixing broken systems. “Unconscious bias training has become this multi-billion-dollar industry,” she said. “But the evidence is clear – it doesn’t work.” Studies have shown that bias training rarely leads to lasting behavioural change, and in some cases, it even backfires, making people more defensive about their biases rather than less.

So what does work? Chilazi and Bohnet argue that structural interventions – the kind that make fairness automatic rather than optional – are the key to real progress. “If you want to reduce bias in hiring, don’t just tell people to ‘be more aware’ – design the process so that bias has fewer opportunities to creep in,” she told me.

This means:

  • Standardising interviews so every candidate is evaluated against the same criteria
  • Removing names from CVs to eliminate unconscious bias in early screening
  • Making promotion decisions based on clear, structured frameworks rather than subjective “gut feelings”
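
To make this concrete, here is a minimal, hypothetical sketch of what “designing bias out” can look like in code – name-blind screening plus a fixed scoring rubric. The criteria, weights, and field names are my own illustration, not Chilazi and Bohnet’s:

```python
# Illustrative only: name-blind screening with a shared scoring rubric,
# so every candidate is judged on identical, pre-agreed criteria.
from dataclasses import dataclass

# Hypothetical rubric - the criteria and weights are invented for this sketch.
RUBRIC = {"problem_solving": 0.4, "communication": 0.3, "domain_knowledge": 0.3}

@dataclass
class Application:
    name: str                # stripped before reviewers ever see the application
    answers: dict[str, str]  # responses to the same structured questions

def anonymise(app: Application) -> dict[str, str]:
    """Reviewers receive the structured answers only - never the name."""
    return dict(app.answers)

def score(marks: dict[str, int]) -> float:
    """Weighted score (marks out of 5 per criterion) against the shared rubric."""
    return sum(weight * marks[criterion] for criterion, weight in RUBRIC.items())

app = Application(name="Jane Doe", answers={"q1": "...", "q2": "..."})
print(anonymise(app))  # {'q1': '...', 'q2': '...'} - no name attached
print(score({"problem_solving": 4, "communication": 5, "domain_knowledge": 3}))  # 4.0
```

The point isn’t the code itself but the design choice it encodes: the name never reaches the reviewer, and “gut feeling” has no column in the rubric.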

The companies that have done this properly – like AstraZeneca, which now applies transparent decision-making frameworks to promotions – have seen real progress. Others, Chilazi warned, are simply engaging in performative fairness. “If an organisation is still relying on vague, unstructured decision-making, it doesn’t matter how many DEI consultants they hire – bias will win.”

Perhaps the most telling statistic comes from a 2023 McKinsey report, which found that 90% of executives believe their DEI initiatives are effective, but only 40% of employees agree. That gap tells you everything you need to know.

This matters not just ethically, but competitively. Companies that embed fairness into their DNA don’t just avoid scandals and lawsuits – they outperform their competitors. “The data is overwhelming,” Chilazi said. “Fairer companies attract better talent, foster more innovation, and have stronger long-term results.”

Yet many businesses refuse to make fairness a structural priority. Why? Because, as Chilazi put it, “real fairness requires real power shifts. And that makes a lot of leaders uncomfortable.”

But here’s the reality: fairness isn’t a cost – it’s an investment. The future belongs to the companies that understand this. And those that don’t? They’ll be left wondering why the best talent keeps walking out the door.

NB I’ll be discussing some of this next week, on March 4, at the latest Inner London South Virtual Networking event for the Federation of Small Businesses (of which I’m a member). See here to tune in.

Fairness in AI: Who controls the future?

If fairness in the workplace is in crisis, fairness in AI is a full-blown emergency. And unlike workplace bias – which at least has legal protections and public scrutiny – AI bias is being quietly embedded into the foundations of our future.

AI now influences who gets hired, who gets a loan, who gets medical treatment, and even who goes to prison. Yet, shockingly, most companies deploying these systems have no real governance strategy in place.

At the start of February, I spoke with Splunk’s Geneva-based Kirsty Paine, a cybersecurity strategist and World Economic Forum Fellow, who is actively working with governments, regulators, and industry leaders to shape AI security standards. Her message was blunt: “AI governance isn’t just about ethics or compliance – it’s a resilience issue. If you don’t get it right, your business is exposed”.

This is where many boards are failing. They assume AI security is a technical problem, best left to IT teams. But as Paine explained, if AI makes a bad decision – one that leads to reputational, financial, or legal fallout – blaming the engineers won’t cut it.

“We need boards to start thinking of AI governance the same way they think about financial oversight,” she said. “If you wouldn’t approve a financial model without auditing it, why would you sign off on AI that fundamentally impacts customers, employees, and business decisions?”

Historically, businesses have treated cybersecurity as a defensive function – protecting systems from external attacks. But AI doesn’t work like that. It is constantly learning, evolving, and interacting with new data and new risks.

“You can’t just ‘fix’ an AI system once and assume it’s safe,” Paine told me. “AI doesn’t stop learning, so its risks don’t stop evolving either. That means your governance model needs to be just as dynamic.”

At its core, this is about power. Who controls AI, and in whose interests? Right now, most AI development is happening behind closed doors, controlled by a handful of tech giants with little accountability.

One of the biggest governance challenges is that no single company can solve AI security alone. That’s why Paine is leading cross-industry efforts at the WEF, bringing together governments, regulators, and businesses to create shared frameworks for AI security and resilience.

“AI security shouldn’t be a competitive advantage – it should be a shared priority,” she said. “If businesses don’t start working together on governance, they’ll be left at the mercy of regulators who will make those decisions for them.”

One of the most significant barriers to AI security is communication. Paine, who started her career as a mathematics teacher in challenging schools, knows that how you explain something determines whether people truly understand it.

“In cybersecurity and AI, we love jargon,” she admitted. “But if your board doesn’t understand the language you’re using, how can they make informed decisions?”

This is where her teaching background has shaped her approach. “I had to explain complex maths to students who found it intimidating,” she said. “Now, I do the same thing in boardrooms.” The goal isn’t to impress people with technical terms but to ensure they actually get it, was her message.

And this, ultimately, is the hidden risk of AI governance: if leaders don’t understand the systems they’re approving, they can’t govern them effectively.

The present

If fairness has been the intellectual thread running through my conversations this month, sobriety has been the personal one. I’ve been talking about it a lot – on Voice of Islam radio, for example (see here, from about 23 minutes in), where I was invited to discuss the impact of alcohol on society – and in wrapping up Upper Bottom, the sobriety podcast I co-hosted for the past year.

Ending Upper Bottom felt like the right decision – producing a weekly podcast (an endless cycle of researching, recording, editing, publishing and promoting) is challenging, and harder to justify with no financial reward and little social impact. But it also marked a turning point. 

When we launched last February, it was a passion project – an exploration of what it meant to re-evaluate alcohol’s role in our lives. Over the months, the response was encouraging: messages from people rethinking their own drinking, others inspired to take a break, and some who felt seen for the first time. It proved what I suspected all along: the sweetest fruits of sobriety can be found through clarity, agency, and taking control of your own story.

And now? Well, I’m already lining up new hosting gigs – this time, paid ones. Sobriety has given me a sharper focus, a better work ethic, and, frankly, a clearer voice. I have no interest in being a preacher about it – if you want a drink, have a drink – but I do know that since cutting out alcohol, opportunities keep rolling in. And I’m open to more.

I bring this up because storytelling – whether through a podcast mic, a radio interview, or the pages of Go Flux Yourself – is essentially about fairness too. Who gets to tell their story? Whose voice gets amplified? Who is given the space to question things that seem “normal” but, on closer inspection, might not be serving them?

This is the thread that ties this month’s conversations – with Kirsty on AI governance, Robert on wealth distribution and politics, and Siri on workplace fairness – and my own reflections on sobriety into something bigger. Fairness isn’t just about systems. It’s about who gets to write the script.

And right now, I’m more interested than ever in shaping my own.

The past

February was my birthday month. Another year older, another opportunity to reflect. And this year, the reflection came at a high altitude.

I spent a long weekend skiing in Slovenia with my 10-year-old son, Freddie – his first time on skis. It was magical, watching him initially wobble, find his balance, and then, quickly, gain confidence as he carved his way down the slopes. It took me back to my own childhood, when I was lucky enough to ski from a young age. But that word – lucky – stuck with me.

Because here’s the truth: by the time Freddie is my age, skiing might not be possible anymore.

The Alps are already feeling the effects of climate change. Lower-altitude resorts are seeing shorter seasons, more artificial snow, and unpredictable weather patterns. Consider that 53% of European ski resorts face a ‘very high risk’ of snow scarcity if temperatures rise by 2°C. By the time Freddie’s children – if he has them – are old enough to ski, the idea of a family ski holiday may be a relic of the past.

It’s sobering to think about, especially after spending a month discussing fairness at work and in AI. Because climate change is the ultimate fairness issue. The people least responsible for it – future generations – are the ones who will pay the highest price.

For now, I’m grateful. Grateful that I got to experience skiing as a child, grateful that I got to share it with Freddie, grateful that – for now – we still have these mountains to enjoy.

But fairness isn’t about nostalgia. It’s about responsibility. And if we don’t take action, the stories we tell our grandchildren about the world we once had will be the closest they ever get to it.

Statistics of the month

📉 Is Google search fading? A TechRadar study found that 27% of US respondents now use AI tools instead of search engines. (I admit, I’m the same.) The way we find information is shifting fast. 🔗

🚀 GenAI is the skill to have. Coursera saw an 866% rise in AI course enrolments among enterprise learners. Year-on-year increases hit 1,100% for employees, 500% for students, and 1,600% for job seekers. Adapt, or be left behind. 🔗

Job applications are too slow. Candidates spend 42 minutes per application – close to the 53-minute threshold they consider excessive. Nearly half (45%) give up if the process drags on. Businesses must streamline hiring or risk losing top talent. 🔗

🤖 Robots are easing the burden on US nurses. AI assistants have saved clinicians 1.5 billion steps and 575,000+ hours by handling non-patient-facing tasks. A glimpse into the future of healthcare efficiency. 🔗

💻 The Slack-Zoom paradox. Virtual tools have boosted productivity for 59% of workers, yet 45% report “Zoom fatigue” – with men disproportionately affected. Remote work: a blessing and a burden. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 10)

TL;DR: October’s Go Flux Yourself explores the dark and light sides of AI through Nobel Prize winners and cybersecurity experts, weighs the impact of disinformation ahead of the US election, confronts haunting cases of AI misuse, and finds hope in a Holocaust survivor’s legacy of ethical innovation …

Image created on Midjourney with the prompt  “a scary megalomaniac dressed as halloween monster with loads of computers showing code behind him in the style of an Edward Hopper painting”

The future

“Large language models are like young children – they grow and develop based on how you nurture and treat them.”

I’m curious by nature – it’s central to my profession as a truth-seeking human-work evolution journalist. But sometimes, it’s perhaps best not to peek behind the curtain, as what lies behind might be terror-inducing. Fittingly, this newsletter is published on Halloween, so you might expect some horror. Consider yourself warned!

I was fortunate enough to interview two genuine cybersecurity luminaries in as many days towards the end of October. First, Dr Joye Purser, Field CISO at Veritas Technologies and a former White House director who was a senior US Government official during the Colonial Pipeline attack in 2021, was over from Atlanta. 

And, the following day, the “godfather of Israeli cybersecurity”, Shlomo Kramer, Co-Founder and CEO of Cato Networks, treated me to lunch at Claridge’s – lucky me! – after flying in from Tel Aviv.

The above quotation is from my conversation with Joye, who warned that a nation that isn’t democratic will train its AI systems very differently, with state-controlled information.

Both she and Shlomo painted a sobering picture of our technological future, particularly as we approach what could be the most digitally manipulated vote in history: the United States presidential election. Remember, remember the fifth of November, indeed.

“The risk is high for disinformation campaigns,” Joye stressed, urging voters to “carefully scrutinise the information they receive for what is the source, how recent or not recent is the information, and just develop an increasing public awareness of the warning signs or red flags that something’s not right with the communication”.

Shlomo, who co-founded Check Point Software Technologies in 1993, offered a stark analysis of how social media has fractured our society. “People don’t care if it’s right or wrong, whether a tweet is from a bot or a Russian campaign,” he said. “They just consume it, they believe it – it becomes their religion.” 

Shlomo drew a fascinating parallel between modern social media echo chambers and medieval church communities, suggesting we’ve come full circle from faith-based societies through the age of reason and back to tribal belief systems.

And, of course, most disagreements that develop into wars are primarily down to religious beliefs, at least on the surface. Is it a coincidence that two of the largest wars we have known in four decades are raging? (And if Donald Trump is voted back into the White House in a week, then what will that mean for Europe if – as heavily hinted – funding for Ukraine’s military is strangled?) 

After the collective trauma of the coronavirus pandemic, the combination of echo chambers on social media and manipulated AIs is fanning the flames of an already smouldering society. Have you noticed how, generally, people are snappier with one another?

The cybersecurity challenges are equally worrying. Both experts highlighted how AI is supercharging traditional threats. Shlomo’s team recently uncovered an AI tool that can generate entire fake identities – complete with convincing video, passport photos, and multiple corroborating accounts – capable of fooling sophisticated know-your-customer systems at financial institutions.

Maybe most concerning was their shared view that cybersecurity isn’t a problem we can solve but must constantly manage. As Shlomo said: “You have to run as fast as possible to stay in the same place.” It’s a perpetual arms race between defenders and increasingly sophisticated threats.

Still, there’s hope. The very technologies that create these challenges might also help us overcome them. Both experts emphasised that while bad actors can use AI for deception, it’s also essential for defence. The key is ensuring we develop these tools with democratic values and human welfare in mind.

When I asked about preparing our children for this uncertain future – as I often do when interviewing experts who also have kids – their responses were enlightening. Joye emphasised the importance of teaching children to be “informed consumers of information” who understand the significance of trusted sources and proper journalism. 

Shlomo’s advice was more philosophical: children must learn to “listen to themselves and believe what they hear is true” – to trust their inner voice amid the cacophony of digital noise.

In the post-truth era, who can we trust if not ourselves?

A couple of years ago, John Elkington, a world authority on corporate responsibility and sustainable development who coined the term “triple bottom line”, told me: “In the vacuum of effective politicians, people are turning to businesses for leadership, so business leaders must accept that responsibility.” (Coincidentally, this year marks three decades since the British environmental thinker coined the “3 Ps” of people, planet, and profit.)

For this reason, CEOs, especially, have to speak up with authority, authenticity and original thought. Staying curious, thinking critically, and calling out bad practices are increasingly important, particularly for industry leaders.

With an eye on the near future and the need for truth, I’m pleased to announce the soft launch of Pickup_andWebb, a collaboration with brand strategist and client-turned-friend Cameron Webb. Pickup_andWebb develops incisive, issue-led thought leadership for ambitious clients looking to provoke stakeholder and industry debate and enhance their expert reputation.

“In an era of unprecedented volatility, CEOs navigate treacherous waters,” I wrote recently in our opening Insights article titled Speak up or sink. “The growing list of headwinds is formidable – from geopolitical tensions and wars reshaping global alliances to the relentless march of technological advancements disrupting entire industries. 

“Add to this the perfect storm of rising energy and material costs, traumatised supply chains, and the ever-present spectre of climate change, and it’s clear that the modern CEO’s role has never been more challenging – or more crucial. Yet, despite this incredible turbulence, the truly successful CEO of 2024 must remain a beacon of stability and vision. They are the captains who keep their eyes fixed on the distant horizon, refusing to be distracted by the immediate squalls. 

“More than ever, they must embody the role of progressive visionaries, their gaze penetrating years into the future to seize nascent opportunities or deftly avoid looming catastrophes. But vision alone is not enough.

“Today’s exemplary leaders are expected to steer with a unique blend of authenticity, humility, and vulnerability. They understand that true strength lies not in infallibility but in the courage to acknowledge uncertainties and learn from missteps. 

“These leaders aren’t afraid to swim against the tide, challenging conventional wisdom when necessary and inspiring their crews to navigate uncharted waters.”

If you are – or you know – a leader who might need help swimming against the tide and spreading their word, let’s start a conversation and co-create in early 2025.

The present

This month’s news perfectly illustrated AI’s Jekyll-and-Hyde nature on the subject of truth and technology. We saw the good, the bad, and the downright ugly.

While I’ve shared the darker future possibilities outlined by cybersecurity experts Joye and Shlomo, the 2024 Nobel Prizes highlighted AI’s extraordinary potential for good.

Sir Demis Hassabis, chief executive of Google DeepMind, shared the chemistry prize for using AI to crack a 50-year-old puzzle in biology: predicting the structure of every protein known to humanity. His team’s creation, AlphaFold, has already been used by over two million scientists worldwide, helping develop vaccines, improve plant resistance to climate change, and advance our understanding of the human body.

The day before, Geoffrey Hinton – dubbed the “godfather of AI” – shared the physics prize with John Hopfield for his pioneering work on neural networks, the very technology that powers today’s AI systems. Yet Hinton, who left Google in May 2023 to “freely speak out about the risk of AI”, now spends his time advocating for greater AI safety measures.

It’s a fitting metaphor for our times: the same week that celebrated AI’s potential to revolutionise scientific discovery also saw warnings about its capacity for deception and manipulation. As Hassabis himself noted, AI remains “just an analytical tool”; how we choose to use it matters, echoing Joye’s comment about how we feed LLMs.

Related to this topic, I was on stage twice at the Digital Transformation EXPO (DTX) London 2024 at the start of the month. Having been asked to produce a write-up of the two-day conference – the theme was “reinvention” – I noted how “the tech industry is caught in a dizzying dance of progress and prudence”.

I continued: “As industry titans and innovators converged at ExCeL London in early October, a central question emerged: how do we harness the transformative power of AI while safeguarding the essence of our humanity?

“As we stand on the brink of unprecedented change, one thing becomes clear: the path forward demands technological prowess, deep ethical reflection, and a renewed focus on the human element in our digital age.”

In the opening keynote, Derren Brown, Britain’s leading psychological illusionist, called for a pause in AI development to ensure technological products serve humans, not vice versa.

“We need to keep humanity in the driving seat,” Brown urged, challenging the audience to rethink the breakneck pace of innovation. This call for caution contrasted sharply with the rest of the conference’s urgency.

Piers Linney, Founder of ImplementAI and former Dragons’ Den investor, provided the most vivid analogy of the event. He likened competing in today’s market without embracing AI to “cage fighting – to the death – against the world champion, yet having Iron Man in one’s corner and not calling him for help”.

Meanwhile, Michael Wignall, Customer Success Leader UK at Microsoft, warned: “Most businesses are not moving fast enough. You need to ask yourself: ‘Am I ready to embrace this wave of transformation?’ Your competitors may be ready.” His advice was unequivocal: “Do stuff quickly. If you are not disrupting, you will be disrupted.”

I was honoured to moderate a main-stage panel exploring human-centred tech design, offering a crucial counterpoint to the “move-fast-and-break-things” mantra. Gavin Barton, VP of Engineering at Booking.com, Sue Daley, Director of Tech and Innovation at techUK, and Dr Nicola Millard, Principal Innovation Partner at BT Group, joined me.

“Focus on the outcome you’re looking for,” advised Gavin. “Look at the problem rather than the metric; ask what the real problem is to solve.” Sue cautioned against unquestioningly jumping on the AI bandwagon, stressing: “Think about what you’re trying to achieve. Are you involving your employees, workforce, and potentially customers in what you’re trying to do?” Nicola introduced her “3 Us” framework – Useful, Useable, and Used – for evaluating tech innovation.

Regarding tech’s darker side, Jake Moore, Global Cybersecurity Advisor at ESET, delivered a hair-raising presentation titled The Rise of the Clones on DTX’s Cyber Hacker stage. His practical demonstration of deep fake technology’s potential for harm validated the warnings from both Joye and Shlomo about AI-enabled deception.

Moore revealed how he had used deep fake video and voice technology to penetrate a business’s defences and commit small-scale fraud. It was particularly unnerving given Shlomo’s earlier warning about AI tools generating entire fake identities that can fool sophisticated verification systems.

Moore quoted the late Stephen Hawking’s prescient warning that “AI will be either the best or the worst thing for humanity”, and his demonstration felt like a stark counterpoint to the Nobel Prize celebrations. Here, in one conference hall, we witnessed both the promise and peril of our AI future – rather like watching Dr Jekyll transform into Mr Hyde.

Later in the month, there were yet darker instances of AI’s misuse and abuse. In a story that reads like a Black Mirror episode, American Drew Crecente discovered that his teenage daughter, Jennifer, who was murdered in 2006, had been resurrected as an AI chatbot on Character.AI. The company claimed the bot was “user-created” and quickly removed it, but the incident raises profound questions about data privacy and respect for the deceased in our digital age.

Arguably even more distressing, and also in the United States, was the case of 14-year-old Sewell Setzer III, who took his own life after developing a relationship with an AI character based on Game of Thrones’ Daenerys Targaryen. His mother’s lawsuit against Character.AI highlights the dangers of AI companions that can form deep emotional bonds with vulnerable users – particularly children and teenagers.

Finally, in what police called a “landmark” prosecution, Bolton-based graphic design student Hugh Nelson was jailed for 18 years after using AI to create and sell child abuse images. The case exemplifies how rapidly improving AI technology can be weaponised for the darkest purposes, with prosecutors noting that “the imagery is becoming more realistic”.

While difficult to stomach, these stories validate warnings about AI’s destructive potential when developed without proper safeguards and ethical considerations. As Joye emphasised, how we nurture these technologies matters profoundly. The challenge ahead is clear: we must harness AI’s extraordinary potential for good while protecting the most vulnerable members of our society.

The past

During lunch at Claridge’s, Shlomo shared a remarkable story about his grandfather, Shlomo – after whom he is named – that feels particularly pertinent given the topic of human resilience in the face of technological change.

The elder Shlomo was an entrepreneur in Poland who survived Stalin’s Gulag through his business acumen. After enduring that horror, he navigated the treacherous post-war period in Austria – a time and place immortalised in Carol Reed’s The Third Man, starring Orson Welles – before finally finding sanctuary in Israel in the early 1960s.

When the younger Shlomo co-founded Check Point Software Technologies over 30 years ago, the company’s first office was in his late grandfather’s vacant apartment. It feels fitting that a business focused on protecting people from digital threats began in a space owned by someone who had spent his life helping others survive very real ones.

The heart-warming story reminds us that while the challenges we face may evolve – from physical threats to digital deception – human ingenuity, ethical leadership, and the drive to protect others remain constant. 

As we grapple with AI’s implications for society, we would do well to remember this Halloween that technology is merely a tool; it’s the hands that wield it – and the values that guide those hands – that truly matter.

Statistics of the month

  • According to McKinsey and Company’s report The role of power in unlocking the European AI revolution, published last week, “in Europe, demand for data centers is expected to grow to approximately 35 gigawatts (GW) by 2030, up from 10 GW today. To meet this new IT load demand, more than $250 to $300 billion of investment will be needed in data center infrastructure, excluding power generation capacity.”
  • LinkedIn’s research reveals that more than half (56%) of UK professionals feel overwhelmed by how quickly their jobs are changing, which is particularly true of the younger generation (70% of 25- to 34-year-olds), while 47% say expectations are higher than ever.
  • Data from Asana’s Work Innovation Lab reveals that AI use is still predominantly a “solo” activity for UK workers, with the majority feeling most comfortable using it alone compared to within a team or their wider organisation. The press release hypothesises: “This may be because UK individual workers think they have a better handle on technology than their managers or the business. Workers rank themselves as having the highest level of comfort with technology (86%) – compared to their team (78%), manager (74%) and organisation (76%). This trend is mirrored across industries and sectors.”

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

WTF is an insider threat – and why is it a growing problem for businesses?

Most (95%) cybersecurity incidents were caused by human error last year, the World Economic Forum calculated. Such incidents appear to be spiraling, with the annual cost of cybercrime predicted to reach $8 trillion this year, according to Cybersecurity Ventures. If that wasn’t alarming enough, experts warn that bad actors within organizations are a growing security risk.

Some employees, manipulated and compromised through “social engineering,” might not even realize they are aiding and abetting criminals. Similarly, employers might not know they have been attacked, until it’s too late.

Worse, all too often, businesses — which are, in the post-pandemic era, being urged to grant employees greater autonomy and trust — are blindsided by this so-called “insider threat.”

In a nutshell, an insider threat refers to someone who steals data or breaks the internal systems of the organization they work for, for their own purposes. For example, in 2017, an administrator working for Dutch hosting provider Verelox deleted all customer data and wiped most of the company’s servers.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in January 2023 – to read the complete piece, please click HERE.

How the drive to improve employee experience could trigger a ‘data-privacy crisis’

How much personal information would you feel comfortable with your company knowing, even if it improves the working experience? Where is the line? Also, will that boundary be different for your colleagues?

Right now, it’s all a gray area, but it could darken quickly. Because of that fuzziness and subjectivity, it’s a tricky balance to strike for employers. On the one hand, they are being encouraged — if not urged — to dial up personalization to attract and retain top talent. On the other hand, with too much information on staff, they risk being accused of taking liberties and trespassing on employees’ data privacy.

In 2023, organizations are increasingly using emerging technologies — artificial intelligence (AI) assistants, wearables, and so on — to collect more data on employees’ health, family situations, living conditions, and mental health to respond more effectively to their needs. But embracing these technologies has the potential to trigger a “data-privacy crisis,” warned Emily Rose McRae, senior director of management consultancy Gartner’s human resources practice.

Earlier in January, Gartner identified that “as organizations get more personal with employee support, it will create new data risks” as one of the top nine workplace predictions for chief human resources officers this year.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in January 2023 – to read the complete piece, please click HERE.

How hybrid working brings teams closer but also creates ‘micro cultures’ and internal conflicts

Who needs a water cooler in the digital age? Paradoxically, the pandemic-induced shift to hybrid and remote working has, in many instances, drawn teams closer together, according to Gartner research. 

“We have seen that people have stronger ties with their immediate hybrid team as they have more interactions with those members,” said Piers Hudson, senior director of Gartner’s HR functional strategy and management research team.

Conversely, Hudson noted that bonds between people in different departments – colleagues they would previously have run into more often in an office environment – have weakened in hybrid and remote setups. “We found that employees interact once a week or less with their ‘weak ties’ — people outside their function — versus several times a week before the pandemic,” he said.

For most hybrid or remote workers, though, team members are “the only people they interact with several times a day,” added Hudson. 

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in January 2023 – to read the complete piece, please click HERE.

Why cybersecurity leaders are actively recruiting neurodiverse talent

In an attempt to counter the rising cybersecurity threats facing businesses, tech leaders are actively hiring neurodivergent people for the strong problem-solving and analytical skills they can offer.

The neurodiversity spectrum is wide, ranging from attention deficit hyperactivity disorder (ADHD), dyslexia, dyspraxia and Tourette syndrome, to autism and bipolar disorder. But the common strengths of neurodivergent individuals – including pattern-spotting, creative insights and visual-spatial thinking – are finally being recognized, not least in the cybersecurity sector.

Holly Foxcroft, head of neurodiversity in cyber research and consulting at London-based professional search firm Stott and May Consulting, said that neurodivergent individuals have “spiky profiles.” Foxcroft, who is neurodivergent herself, explained that these visual representations highlight an individual’s strengths and the areas where they need development or support. “Neurodivergent profiles show that individuals perform highly in areas where neurotypicals have a consistent and moderate line,” she said.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in January 2023 – to read the complete piece, please click HERE.

The future of work is not evenly distributed – how employers can prepare

“The future is already here; it’s just not evenly distributed.” U.S.-Canadian writer William Gibson, the father of the cyberpunk sub-genre of science fiction, has had his finger on the pulse of breakthrough innovations for decades. However, in early 2023, this perceptive comment is especially apt for the working world, which is going through the most seismic transformation in its history.

The digital revolution, accelerated by the pandemic fallout, presents challenges and opportunities. For instance, technology has enabled remote working. And yet, employees are clocking up more hours when not in the office, and loneliness that harms mental health is becoming a worrying side effect. The number of meetings has also shot up, and people often mistake being busy for being productive.

Moreover, while workers demand more time and location flexibility, where does that leave industries in which it isn’t feasible? It’s all very well for those in desk-based jobs to use tech to improve their work-life balance, yet around 80% of global workers are “deskless.” They need to be physically present to do their jobs. 

To help navigate the journey ahead, WorkLife selected nine recent statistics to show the direction of travel, identify the most prominent likely obstacles, and offer advice from experts on how employers can overcome them. In this article, we have included four, and the remaining five will be published separately.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in December 2022 – to read the complete piece, please click HERE.

WTF is social engineering?

Who can you trust online? Given the surging number of global identity thefts, it seems we are nowhere near cautious enough regarding digital interactions.

Neil Smith, partner success manager for EMEA North at cybersecurity firm Norton, said 55% of people in the U.K. admit that they would have no idea what to do if their identity was stolen. “The biggest worry is that it is often ourselves that is the root cause of identity theft,” he added.

Further, Allen Ohanian, chief information security officer of Los Angeles County, said that, alarmingly, 67% of us trust people online more than in the physical world.

In early 2022, the World Economic Forum calculated that 95% of cybersecurity incidents occur due to human error. “Almost every time there’s an attack, it’s down to a mistake by or manipulation of people like you and me,” said Jenny Radcliffe, who goes by the moniker “The People Hacker.”

Indeed, 98% of all cyberattacks involve some form of social engineering, cybersecurity experts PurpleSec calculated.

But what exactly is social engineering?

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in December 2022 – to read the complete piece, please click HERE.

How the move to hybrid working has become a ‘buffet’ for cybercriminals

The future of work may be flexible, but are businesses – particularly small- to medium-sized organizations – investing enough time, money, and effort to ramp up cybersecurity sufficiently? No, is the short answer, and it’s a massive concern on the eve of 2023.

With the sophistication of cyber threats on the rise and the increased attack vectors exposed by hybrid working, bad actors are preying on the weakest links in the chain to reach top-tier targets. 

A witticism doing the rounds on the cybersecurity circuit jokes that the hackers who have transformed ransomware attacks – whereby criminals lock their target’s computer systems or data until a ransom is paid – into a multibillion-dollar industry are more professional than their most high-profile corporate victims. But it’s no laughing matter.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in December 2022 – to read the complete piece, please click HERE.

The appliance of prescience

Advances in artificial intelligence are giving organisations in both the public and private sectors increasingly powerful forecasting capabilities. How much further down this predictive path is it possible for them to go?

Minority Report, Steven Spielberg’s 2002 sci-fi thriller based on a short story by Philip K. Dick, explores the concept of extremely proactive policing. The film, starring Tom Cruise, is set in 2054 Washington DC. The city’s pre-crime department, using visions provided by three clairvoyants, can accurately forecast where a premeditated homicide is about to happen. The team is then able to dash to the scene and collar the would-be murderer just before they strike.

While police forces are never likely to have crack teams of incredibly useful psychics at their disposal, artificial intelligence has advanced to such an extent in recent years that its powerful algorithms can crunch huge volumes of data to make startlingly accurate forecasts.

Could a Minority Report style of super-predictive governance ever become feasible in the public sector – or, indeed, in business? If so, what would the ethical implications of adopting such an approach be?

There is a growing list of narrow-scope cases in which predictive analytics has been used to fight crime and save lives. In Durham, North Carolina, for instance, the police reported a 39% fall in the number of violent offences recorded between 2007 and 2014 after using AI-based systems over that period to observe trends in criminal activities and identify hotspots where they could make more timely interventions.
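
To make the hotspot idea concrete, here is a deliberately minimal sketch of the counting step – bucketing incidents into map grid cells and ranking the busiest. It is illustrative only: the coordinates, cell size and code are my own inventions, not Durham’s actual system.

```python
from collections import Counter

# Toy sketch of grid-based crime "hotspot" detection. Not the system Durham
# used - just the core counting idea, with invented coordinates.
def top_hotspots(incidents, cell_size=0.01, n=3):
    """incidents: iterable of (latitude, longitude) pairs.
    Buckets each incident into a grid cell and returns the n busiest cells."""
    counts = Counter(
        (round(lat / cell_size), round(lon / cell_size)) for lat, lon in incidents
    )
    return counts.most_common(n)

# Hypothetical incident reports clustered around two streets
reports = [(54.7761, -1.5733)] * 12 + [(54.7802, -1.5601)] * 7 + [(54.7698, -1.5805)] * 2
for cell, count in top_hotspots(reports):
    print(f"grid cell {cell}: {count} incidents")
```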

AI has also been used to tackle human trafficking in the US, where it has helped the authorities to locate and rescue thousands of victims. Knowing that about 75% of child trafficking cases involve grooming on the internet, the government’s Defense Advanced Research Projects Agency monitors suspicious online ads, detects coded messages and finds connections between these and criminal gangs.

In Indonesia, the government has partnered with Qlue, a specialist in smart city technology, to predict when and where natural disasters are most likely to strike. Its systems analyse flood data collected from sensors and information reported by citizens. This enables it to identify the localities most at risk, which informs disaster management planning and enables swifter, more targeted responses.

While all these cases are positive examples of the power of predictive AI, it would be nigh-on impossible to roll out a Minority Report style of governance on a larger scale. That’s the view of Dr Laura Gilbert, chief analyst and director of data science at the Cabinet Office. “To recreate a precognitive world, you would need an incredibly advanced, highly deterministic model of human behaviour – using an AI digital-twin model, perhaps – with low levels of uncertainty being tolerable,” she says. “It’s not certain that this is even possible.”

An abundance of information is required to understand a person’s likely behaviour, such as their genetic make-up, upbringing, current circumstances and more. Moreover, achieving errorless results would require everyone to be continuously scrutinised.

“Doing this on a grand scale – by closely monitoring every facet of every life; accurately analysing and storing (or judiciously discarding) all the data collected; and creating all the technology enhancements to enable such a programme – would be a huge investment and also cost us opportunities to develop other types of positive intervention,” Gilbert says. “This is unlikely to be even close to acceptable, socially or politically, in the foreseeable future.”

Tom Cheesewright, a futurist, author and consultant, agrees. He doubts that such an undertaking would ever be considered worthwhile, even in 2054. “The cost to the wider public in terms of the loss of privacy would be too great,” Cheesewright argues, adding that, in any case, “techniques for bypassing surveillance are widely understood”.

Nonetheless, Vishal Marria, founder and CEO of enterprise intelligence company Quantexa, notes that the private sector, particularly the financial services industry, is making great use of AI in nipping crimes such as money-laundering in the bud. “HSBC has pioneered a new approach to countering financial crime on a global scale across billions of records,” he says. “Only by implementing contextual analytics technology could it identify the risk more accurately, remove it and enable a future-proof mitigation strategy.”

Alex Case, senior director in EMEA for US software company Pegasystems, believes that governments and their agencies can take much from the private sector’s advances. Case, who worked as a deputy director in the civil service from 2018 to 2021, says: “The levels of service being routinely provided by the best parts of the private sector can be replicated in government. In contrast with the dystopian future depicted in Minority Report, the increasing use of AI by governments may lead to a golden age of citizen-centric public service.”

Which other operations or business functions have the most to gain from advances in predictive analytics? Cheesewright believes that “the upstream supply chain is an obvious one in the current climate. If you can foresee shortages owing to pandemics, wars, economic failures and natural disasters, you could gain an enormous competitive advantage.”

The biggest barriers to wielding such forecasting power are a lack of high-quality data and a shortage of experts who can analyse the material and draw actionable insights from it. “Bad data can turn even a smooth deployment on the technology side into a disaster for a business,” notes Danny Sandwell, data strategist at Quest Software. “Data governance – underpinned by visibility into, and insights about, your data landscape – is the best way to ensure that you’re using the right material to inform your decisions. Effective governance helps organisations to understand what data they have, its fitness for use and how it should be applied.”

Sandwell adds that a well-managed data governance programme will create a “single version of the truth”, eliminating duplicate data and the confusion it can cause. Moreover, the most advanced organisations can build self-service platforms by establishing standards and investing in data literacy. “Data governance enables a system of best practice, expertise and collaboration – the hallmarks of an analytics-driven business,” he says.
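
As a toy illustration of that “single version of the truth” idea, the sketch below collapses duplicate customer records by a normalised key, keeping the freshest entry. The field names and the “latest update wins” rule are assumptions made for the example, not a description of Quest’s product.

```python
# Toy illustration of one data-governance chore Sandwell describes: collapsing
# duplicate records into a "single version of the truth". Field names and the
# "latest update wins" rule are assumptions made for this example.
def deduplicate(records):
    canonical = {}
    for rec in records:
        # Normalise the matching key so trivial variants count as duplicates
        key = (rec["name"].strip().lower(), rec["postcode"].replace(" ", "").upper())
        if key not in canonical or rec["updated"] > canonical[key]["updated"]:
            canonical[key] = rec
    return list(canonical.values())

records = [
    {"name": "Ada Lovelace ", "postcode": "ec1a 1bb", "updated": "2022-01-05"},
    {"name": "Ada Lovelace", "postcode": "EC1A 1BB", "updated": "2022-06-20"},
]
print(deduplicate(records))  # one record survives - the June update
```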

Gilbert offers business leaders one final piece of advice in this area: recruit carefully. She argues that “a great data analyst is worth, at a conservative estimate, 20 average ones. They can often do things that any number of average analysts working together still can’t achieve. What’s more, a bad analyst will cost you both money and time.”

And, as Minority Report’s would-be criminals discover to their cost, time is the one resource that’s impossible to claw back.

This article was first published in Raconteur’s Future of Data report in October 2022

How financial services operators are dialling up conversational AI to catch out fraudsters

Organisations are using new technology to analyse the voices of those posing as customers in real time while reducing false positives

Great Britain is the fraud capital of the world, according to a Daily Mail investigation published in June. The study calculated that 40 million adults have been targeted by scammers this year. In April, a reported £700m was lost to fraud, compared to an average of £200m per month in 2021. As well as using convincing ruses, scammers are increasingly sophisticated cybercriminals.

If the UK does go into recession, as predicted, then the level of attacks is likely to increase even further. Jon Holden is head of security at digital-first bank Atom. “Any economic and supply-chain pressure has always had an impact and motivated more fraud,” he says. He suggests that the “classic fraud triangle” of pressure, opportunity and rationalisation comes into play. 

Financial service operators are investing in nascent fraud-prevention technologies such as conversational AI and other biometric solutions to reduce fraud. “Conversational AI is being used across the industry to recognise patterns in conversations, with agents or via chatbots, that may indicate social engineering-type conversations, to shut them down in real time,” continues Holden. “Any later than real time and the impact of such AI can be deadened as the action comes too late. Linking this to segmentation models that identify the most vulnerable customers can help get action to those that need it fastest and help with target prevention activity too.”
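
Holden doesn’t describe the models involved, but the core pattern-flagging step is easy to sketch. The toy example below scores an utterance against a handful of scam cues and escalates above a threshold – the phrases, weights and threshold are invented stand-ins for a trained classifier.

```python
import re

# Minimal sketch of real-time social-engineering flagging in a call or chat
# transcript. Production systems use trained models; these regex cues, their
# weights and the threshold are invented stand-ins for illustration.
RED_FLAGS = {
    r"\b(one[- ]time|security) (code|passcode)\b": 3,
    r"\burgent(ly)?\b": 1,
    r"\bsafe account\b": 5,
    r"\bdo not (tell|contact) (the|your) bank\b": 5,
}

def risk_score(utterance: str) -> int:
    text = utterance.lower()
    return sum(weight for pattern, weight in RED_FLAGS.items() if re.search(pattern, text))

def should_escalate(utterance: str, threshold: int = 4) -> bool:
    """True if the conversation looks like a scam and should be interrupted."""
    return risk_score(utterance) >= threshold

print(should_escalate("This is urgent - please read me the one-time code"))  # True
```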

That segmentation point is crucial because educating customers about swindlers is not straightforward. “Unfortunately, there will always be vulnerable people being scammed,” Holden says. “The banks are doing a lot of work to identify and protect vulnerable customers, but clever social engineering, often over a long period, will always create more victims of romance scams, investment scams, or purchase scams when victims send money for goods never received.”

How AI can help fight fraud

AI is a critical tool to fight fraud. Not only does it reduce the possibility of human error but it raises the flag quickly, which enables faster, smarter interventions. Additionally, it provides “far better insight of the cyber ecosystem”, adds Holden, “almost at the point of predictive detection, which helps with both threat decisioning and threat hunting”. 

Jason Costain is head of fraud prevention at NatWest, which serves 19 million customers across its banking and financial services brands. He agrees it is vital for conversational AI to join the chat. Because the call centre is an important customer service channel and a prime target for fraudulent activity – both from lone-wolf attackers and organised crime networks – he resolved to establish more effective security mechanisms while delivering a fast, smooth experience for genuine customers. 

In late 2020, NatWest opted for a speech recognition solution by Nuance, a company which Microsoft recently acquired. It screens every incoming call and compares voice characteristics – including pitch, cadence, and accent – to a digital library of voices associated with fraud against the bank. The software immediately flags suspicious calls and alerts the call centre agent about potential fraud attempts.
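
Nuance doesn’t publish how Gatekeeper works under the bonnet, but voice-matching systems generically reduce each call to an embedding, or “voiceprint”, and compare it against a watchlist. Here is a minimal sketch of that comparison step, with made-up numbers throughout.

```python
import math

# Sketch of the watchlist-matching step in voice biometrics. Nuance doesn't
# publish its algorithms; the generic approach below - compare a caller's
# "voiceprint" embedding with known fraudster voiceprints - is illustrative,
# and all the numbers are made up.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def flag_call(caller_voiceprint, fraud_watchlist, threshold=0.85):
    """Alert the agent if the caller's voiceprint resembles a known fraud voice."""
    best = max((cosine_similarity(caller_voiceprint, v) for v in fraud_watchlist), default=0.0)
    return best >= threshold, best

watchlist = [[0.9, 0.1, 0.4], [0.2, 0.8, 0.5]]  # embeddings of known fraud voices
alert, score = flag_call([0.88, 0.12, 0.42], watchlist)
print(alert, round(score, 3))  # True 0.999 - the similarity clears the threshold
```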

Before the end of the first year of deploying the Nuance Gatekeeper system, NatWest had screened 17 million incoming calls. Of those, 23,000 led to alerts and the bank found that around one in every 3,500 calls is a fraud attempt. As well as a library of ‘bad’ voices, NatWest agents now have a safe list of genuine customer voices that can be used for rapid authentication without customers needing to recall passwords and other identifying information. That knowledge enables the bank to identify and disrupt organised crime activities to protect its customers and assist law enforcement.

“We’re using voice-biometric technology to build a clear picture of our customers’ voices and what criminal voices sound like,” Costain says. “We can detect when we get a fraudulent voice coming in across our network as soon as it happens. Using a combination of biometric and behavioural data, we now have far greater confidence that we are speaking to our genuine customers and keeping them safe.”

He estimates the return on investment from the tool is more than 300%. “As payback from technology deployment, it’s been impressive. But it’s not just about stopping financial loss; it’s about disrupting criminals.” For instance, NatWest identified a prolific fraudster connected to suspect logins on 1,500 bank accounts, and an arrest followed.

“For trusted organisations like banks, where data security is everything, the identification of the future is all about layers of security: your biometrics, the devices you use, and understanding your normal pattern of behaviour,” adds Costain. “At NatWest, we are already there, and our customers are protected by it.”

Benefits of investing in conversational AI

There are other benefits to be gained by investing in conversational AI solutions. Dr Hassaan Khan is head of the School of Digital Finance at Arden University. He points to a recent survey that indicates almost 90% of the banking sector’s interactions will be automated by 2023. “To stay competitive, organisations must rethink their strategies for improved customer experience. Banks are cognisant that conversational AI can help them be prepared and meet their customers’ rising demands and expectations,” he says.

This observation chimes with Livia Benisty. She is the global head of anti-money laundering at Banking Circle, the B2B bank relied on by Stripe, Paysafe, Shopify and other big businesses, responsible for settling approximately 6% of the world’s ecommerce payments. “With AML fines rocketing – the Financial Conduct Authority dished out a record $672 million (£559m) in 2021 – it’s clear that transaction monitoring cannot cope in its current state,” Benisty says. “That’s why adopting AI and machine learning is vital for overturning criminal activity.”

She argues, however, that many in the financial services industry are reluctant to invest in the newest AML solutions for fear of being reprimanded by regulators. “If you’re a bank, you come under a lot of scrutiny and there’s been resistance to using AI like ours,” she says. “AI is seen as unproven and risky to use but the opposite is true. Since our initial implementation of AI three years ago, the improvements to alert quality have been incredible. AI alleviates admin-heavy processes, enhancing detection by increasing rules precision and highlighting red flags the naked human eye could never spot.”

Even regulators would be impressed by the results revealed by Banking Circle’s head of AML. More than 600 bank accounts have been closed or escalated to the compliance department, thanks to AI-related findings. Further, the solution “dramatically reduces” the so-called false positive alerts. “It’s well known the industry can see rates of a staggering 99%,” adds Benisty. “In highlighting fewer non-risky payments, fewer false positives are generated, ultimately meaning more time to investigate suspicious payments.”
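
Some back-of-envelope arithmetic shows why that 99% figure matters. The numbers below are invented for illustration – they are not Banking Circle’s.

```python
# Back-of-envelope arithmetic behind Benisty's false-positive point.
# Every number here is invented for the example - not a Banking Circle figure.
alerts_per_month = 10_000
false_positive_rate = 0.99   # 99% of alerts turn out to be innocent payments
minutes_per_review = 15

genuine = alerts_per_month * (1 - false_positive_rate)
wasted_hours = alerts_per_month * false_positive_rate * minutes_per_review / 60

print(f"{genuine:.0f} genuine cases buried in {alerts_per_month:,} alerts")
print(f"{wasted_hours:,.0f} analyst-hours a month spent clearing false positives")
# Halving the false positives would free roughly 1,238 of those hours for the
# suspicious payments that actually matter.
```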

As the economy weakens, and criminals grow stronger, financial services operators would be wise to dial up their conversational AI capabilities to improve customer experience today and pave the way to a password-less tomorrow.

This article was first published in Raconteur’s Fraud, Cybersecurity and Financial Crime report in July 2022

Ransomware is your biggest threat, NCSC CEO tells business

As head of the National Cyber Security Centre, Lindy Cameron believes company leaders must improve preparedness and resilience by educating staff – and themselves

Lindy Cameron is a difficult person to reach. That’s understandable: as CEO of the National Cyber Security Centre (NCSC), she’s at the forefront of the UK’s fight against computer security threats. While it’s tough for a journalist to negotiate an interview, it’s reassuring that she’s dedicated to her task. 

The NCSC provides advice and support for public and private sector organisations, helping them avoid computer security threats. Cameron took the helm in October 2020, succeeding inaugural CEO Ciaran Martin, who stepped aside after four years in the job.

Her assessment of cyber threats, themes and advice should be required reading for CIOs and other members of the C-suite. Indeed, on the rare occasions she has spoken in public since taking up the role, she hasn’t held back.

For instance, in March she warned of the UK’s need to be “clear-eyed about Chinese ambition in technological advancement”. Speaking in her first address as CEO, she chided China’s “hostile activity in cyberspace” while adding that “Russia [is] the most acute and immediate threat” to the country.

Ransomware: an immediate danger 

The former number two at the Northern Ireland Office has over two decades of experience working in national security policy and crisis management. She was equally forthright and insightful in October’s keynote speech at Chatham House’s Cyber 2021 conference, where she reflected on her first year at the NCSC and identified four key cybersecurity themes. The most alarming is the pervasiveness of ransomware, the scourge of business leaders.

In May, US cloud-based information security company Zscaler calculated that cybercrime was up 69% in 2020. Ransomware accounted for over a quarter (27%) of all attacks, with a total of $1.4 billion demanded in payments. And those figures didn’t include two hugely damaging breaches in 2021 that underlined bad actors’ expanding reach.

July’s ransomware attack on multinational remote management software company Kaseya affected thousands of organisations and saw the largest ever ransomware demand of $70 million. The REvil ransomware gang that claimed responsibility for the attack demanded ransoms ranging from a few thousand dollars to multiple millions, although it’s unclear how much was paid. The gang said 1 million systems had been impacted across almost 20 countries. While those numbers are likely to be exaggerated, the attack triggered widespread operational downtime for over 1,000 companies.

The Kaseya incident came two months after the attack on Colonial Pipeline, one of the largest petroleum pipelines in the United States. The attack disabled the 5,500-mile system, sparking fuel shortages and panic buying at gas stations. Within hours of the breach, a $4.4m ransom was paid to DarkSide, an aptly named Russian hacking group. Despite the payment – later recovered – the pipeline was down for a week.

“Ransomware presents the most immediate danger to the UK, UK businesses and most other organisations – from FTSE 100 companies to schools; from critical national infrastructure to local councils,” Cameron told the October conference. “Many organisations – but not enough – routinely plan and prepare for this threat, and have confidence their cybersecurity and contingency planning could withstand a major incident. But many have no incident response plans, or ever test their cyber defences.”

Managing and mitigating cyber risk

The sheer number of cyberattacks, their broader scope and growing sophistication should keep CIOs awake at night. The latest Imperva Cyber Threat Index score is 764 out of 1,000, nearing the top-level “critical” category. Other statistics hint at the prevalence of cybercrime in 2021: some 30,000 websites on average are breached every day, with a cyberattack occurring every 11 seconds, almost twice as often as in 2019.

Cybersecurity organisation Mimecast reckons six in 10 UK companies suffered such an attack in 2020. In her Raconteur interview, conducted a fortnight after her appearance at Chatham House, Cameron reiterated her concerns.

“Right now, ransomware poses the most immediate threat to UK businesses, and sadly it is an issue which is growing globally,” she says. “While many organisations are alert to this, too few are testing their defences or their planned response to a major incident.”

Despite the headline-stealing attacks, businesses aren’t doing enough to prepare for ransomware attacks, says Cameron. Cyber risks can and must be managed and mitigated. To an extent, CIOs and chief information security officers (CISOs) are responsible for communicating the potentially fatal threat to various stakeholders.

Cyberattacks are different from other shocks as they aren’t readily perceptible. They are deliberate and can be internal and external. They hit every aspect of an organisation – human resources, finance, operations and more – making them incredibly hard to contain.

“The impact of a ransomware attack on victims can be severe,” Cameron continues, “and I’ve heard powerful testimonies from CEOs facing the repercussions of attacks they were unprepared for. Attacks can affect an organisation’s finances, operations and reputation, both in the short and long term.”

Building cyber resilience 

CEOs can’t hide behind their security teams if breached by a cyberattack. Cameron warns that defending against these incidents can’t be treated as “just a technical issue” – it’s a board-level matter, demanding action from the top. 

“A CEO would never say they don’t need to understand legal risk just because they have a General Counsel. The same applies to cybersecurity.” 

Cybersecurity should be central to boardroom thinking, Cameron adds. “We need to go further to ensure good practice is understood and resilience is being built into organisations. Investing resources and time into putting good security practices into place is crucial for boosting cyber resilience.”

Cameron notes that the NCSC’s guidance, updated in September, will reduce the likelihood of becoming infected by malware – including ransomware – and limit the impact of the infection. It also includes advice on what CIOs, CISOs and even CEOs should do if systems are already infected with malware. 

Cameron, who was previously director general responsible for the Department for International Development’s programmes in Africa, Asia and the Middle East, echoes Benjamin Franklin’s famous maxim: “By failing to prepare, you are preparing to fail.” 

There’s a wide range of practical, actionable advice available on the NCSC website, she notes.

“One of the key things I have learned in my first year as NCSC CEO is that organisations can prevent the vast majority of high-profile cyber incidents we’ve seen following guidance we have already issued,” she adds. 

Low-hanging fruit

At the Chatham House event, Cameron acknowledged that small- and medium-sized enterprises are especially vulnerable to cyberattacks. “I completely understand this is getting harder, especially for small businesses with less capability,” she said. “But it is crucial to build layered defences that are resilient to this.”

SMEs are the low-hanging fruit for cybercriminals, as they usually lack the budget and access to sufficient IT support and security. “We appreciate smaller organisations may not have the same resources to put into cybersecurity as larger businesses,” Cameron says.

The NCSC has produced tailored advice for such organisations in its Small Business Guide. This explains what to consider when backing up data, how to protect an organisation from malware, tips to secure mobile devices and the information stored on them, things to bear in mind when using passwords and advice on identifying phishing attacks.

Criminals will seek to exploit a weak point, which could include an SME in a supply chain. Larger organisations, says Cameron, have a “responsibility to work with their suppliers to ensure operations are secured. In the past year, we have seen an increase in supply chain attacks with impacts felt around the world, underlining how widespread supply networks can be.”

Supply chain concerns

Supply chain attacks were another of Cameron’s four key themes at the Chatham House conference. Such vulnerabilities “continue to be an attractive vector at the hand of sophisticated actors and … the threat from these attacks is likely to grow,” she said. “This is particularly the case as we anticipate technology supply chains will become increasingly complicated in the coming years.”

The most infamous recent supply chain attack was on SolarWinds, said Cameron. According to the former CEO and other SolarWinds officials, the breach happened because criminals hacked a key password – it was solarwinds123. This highlights the importance of strong passcodes for companies large and small. 
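
It also shows why password checks need to look beyond raw length. A naive strength checker – a simplified illustration, with an assumed pattern list – would pass “solarwinds123” on entropy alone but fail it on guessable patterns.

```python
import math
import string

# Naive strength check showing why "solarwinds123" fails even basic hygiene.
# Real policies also screen breach corpora; this scoring is a simplified
# illustration, and the pattern list is an assumption for the example.
COMMON_PATTERNS = ["123", "password", "qwerty", "solarwinds"]  # incl. company name

def entropy_bits(pw: str) -> float:
    """Rough 'on paper' entropy: length x log2(size of character pool used)."""
    pool = sum(
        size
        for charset, size in [
            (string.ascii_lowercase, 26),
            (string.ascii_uppercase, 26),
            (string.digits, 10),
            (string.punctuation, 32),
        ]
        if any(c in charset for c in pw)
    )
    return len(pw) * math.log2(pool) if pool else 0.0

def is_weak(pw: str) -> bool:
    lowered = pw.lower()
    return entropy_bits(pw) < 60 or any(p in lowered for p in COMMON_PATTERNS)

print(round(entropy_bits("solarwinds123")))  # 67 bits on paper...
print(is_weak("solarwinds123"))              # ...yet True: built from guessable patterns
```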

“SolarWinds was a stark reminder of the need for governments and enterprises to make themselves more resilient should one of their key technology suppliers be compromised,” Cameron said at Chatham House.

The two other areas of cyber concern she highlighted were the vulnerabilities exposed by the coronavirus pandemic and the development of strategically important technology. “We are all increasingly dependent on that technology and it is now fundamental to both our safety and the functioning of society,” she said of the latter.

On the former theme, Cameron said that malicious actors are trying to access Covid-related information, whether vaccine procurement plans or data on new variants. 

“Some groups may also seek to use this information to undermine public trust in government responses to the pandemic. The coronavirus pandemic continues to cast a significant shadow on cybersecurity and is likely to do so for many years to come.”

CIOs must keep this in mind as many organisations grapple with post-pandemic ways of working. This involves more remote workers using personal or poorly protected devices on unsecured networks, all of which play into the hands of bad actors.

“Over the past 18 months, many organisations will have likely increased remote working for staff and introduced new online services and devices to stay connected,” says Cameron. “While this has offered a solution for many businesses, it’s vital for the risks to be mitigated so users and networks work securely. Our home-working guidance offers practical steps to help with safe remote working.”

Post-pandemic cybersecurity 

Cameron’s final piece of advice underlines that building cyber resilience is a job for organisations of every size – and for their people.

“It’s vital that organisations of all sizes take the right steps to build their cyber resilience. Educating employees is an important aspect of keeping any business secure. Staff can be an effective first line of defence against cyberattacks if they are equipped with the right understanding and feel they can report anything suspicious.”

Businesses should put a clear IT policy in place that guides employees on best practices, while staff should be encouraged to use the NCSC’s “Top Tips for Staff” training package. 

“These steps are about creating a positive cybersecurity culture and we believe senior leaders should lead by example,” she adds. 

The NCSC’s Board Toolkit is particularly useful for CIOs, designed to help facilitate cybersecurity discussions between board members and technical experts. It will “help ensure leaders are informed and cybersecurity considerations can be integrated into business objectives”.

These conversations are now critical, as advances in artificial intelligence, the internet of things, 5G and quantum computing multiply attack surfaces. Reflecting on the NCSC’s work since its inception five years ago, Cameron says the organisation has achieved a huge amount, including dealing with significant cyber incidents, improving the resilience of critical networks and developing a skills pipeline for the future. 

“This is delivering real benefits for the nation, from protecting multinational companies to defending citizens against online harm. However, the challenges we face in cyberspace are always changing, so we can’t rest on our laurels.”

This article was first published in Raconteur’s Future CIO report in November 2021

Is China dominating the West in the artificial intelligence arms race?

The US has warned that it is behind its historical foe in the East, and the European bloc is also concerned, but there are ways in which the UK, for example, could catch up, according to experts

If you ask technology experts in the West which country is winning the artificial intelligence arms race, a significant majority will point to China. But is that right? Indeed, Nicolas Chaillan, the Pentagon’s first Chief Software Officer, effectively waved the white flag when, in September, his resignation letter lamented his country’s “laggard” approach to skilling up for AI and a lack of funding. 

A month later, he was more explicit when telling the Financial Times: “We have no competing fighting chance against China in 15 to 20 years. Right now, it’s already a done deal; it is already over, in my opinion.”

The 37-year-old spent three years steering a Pentagon-wide effort to increase the United States’ AI, machine learning, and cybersecurity capabilities. After stepping down, he said there was “good reason to be angry.” He argued that his country’s supposed slow technological transformation was allowing China to achieve global dominance and effectively take control of critical areas, from geopolitics to media narratives and everywhere in between.

Chaillan suggested that some US government departments had a “kindergarten level” of cybersecurity and stated he was worried about his children’s future. He made his outspoken comments mere months after a congressionally mandated national security commission predicted in March that China could speed ahead as the world’s AI superpower within the next decade.

Following a two-year study, the National Security Commission on Artificial Intelligence concluded that the US needed to develop a “resilient domestic base” for creating semiconductors required to manufacture a range of electronic devices, including diodes, transistors, and integrated circuits. Chair Eric Schmidt, the former Google CEO, warned: “We are very close to losing the cutting edge of microelectronics, which power our companies and our military because of our reliance on Taiwan.”

Countering the rise of China

Jens Stoltenberg, the Nato Secretary-General since 2014, echoed the US concerns about how China is galloping away from competitors thanks to its investment in innovative technology that other countries have embraced. The implicit – yet hard-to-prove – worry is that the ubiquitous tech is a strategic asset for the Chinese government. But is this a case of deep-rooted, centuries-old mistrust of the East by the West?

The former Norwegian Prime Minister, ever the diplomat, was at pains to stress that China was not considered an “adversary.” However, he did make the point that its cyber capabilities, new technologies, and long-distance missiles were on the radar of European security services.

In late October, Stoltenberg admitted that Nato would expand its focus to counter the “rise of China” in an interview with the Financial Times. “Nato is an alliance of North America and Europe,” he said, “but this region faces global challenges: terrorism, cyber but also the rise of China.”

Ominously, Stoltenberg continued: “China is coming closer to us. We see them in the Arctic. We see them in cyberspace. We see them investing heavily in critical infrastructure in our countries. They have more and more high-range weapons that can reach all Nato-allied countries.”

But is China truly so far in front of others? According to the venerated Global AI Index, calculated by Tortoise Media, the US leads the race, with China second. In late September, the UK – currently third in the rankings, slightly ahead of Canada and South Korea – unveiled its National AI Strategy, which sets out a 10-year plan to make it a “global AI superpower”.

UK plans to become global AI superpower

Some £2.3 billion has already been poured into AI initiatives by the UK government since 2014, though this document – the country’s first package solely focused on AI and machine learning – will accelerate progress, enthuses the Department for Digital, Culture, Media and Sport’s digital minister, Chris Philp. 

“The UK already punches above its weight internationally, and we are ranked third in the world behind the US and China in the list of top countries for AI,” he said. “AI technologies generate billions [of pounds] for the economy and improve our lives. They power the technology we use daily and help save lives through better disease diagnosis and drug discovery.”

A self-styled AI champion and World Economic Forum AI Council member, Simon Greenman states that the UK is home to the largest share of AI companies and start-ups (8%) outside the US (40%). Additionally, venture capital investment in UK AI projects was £2.4bn in 2019.

“Money isn’t the issue,” says the Checkit Non-Executive Director, when discussing the perceived lack of progress being made by the UK. “The problem is we don’t have enough good commercial AI skills, such as product management and enterprise sales, to put the theory, research, and vision into practice.

“For instance, the ‘Office of AI’ doesn’t have an AI implementation budget. If we’re going to realise the potential that AI can bring to the UK, the government needs to put its money where its mouth is and appoint somebody who has a central budget to implement large-scale AI deployments when it comes to public policy.”

Greater collaboration needed

Fakhar Khalid, Chief Scientist of London-headquartered SenSat, a cloud-based 3D interactive virtual engineering platform, is more optimistic about the UK’s chances of becoming an AI superpower and calls for patience. While he agrees that “the US and China are the leading nations in terms of AI innovation and commercialisation,” he notes that China published its first AI strategy in 2017. The US followed with equivalent plans two years later. 

“Although these strategies have recently started to emerge in the public and policy domain, these countries have been investing healthily in their ecosystems since the early 1990s,” he says. “In the 90s, the US was not only the leading country for AI education, but its academic innovation also had strong ties with the industry, ensuring a direct impact on the growth of their economy.”

Hinting at the different types of government that enable more collaboration in China compared to the US, the UK, and even Europe as a bloc, he continues: “China, on the other hand, has been radical and ambitious in building its technology capabilities by strongly linking government, academia, and industry to show the beneficial impact of AI on their economy. The government centrally controls China’s AI strategy with hyperlocal implementation.

“The UK’s long overdue AI strategy is a clear indication that we are here to declare ourselves as the key leader in this field, yet we have much to learn from these nations about commercialising our research and creating a strong and impactful link between academia and industry.”

While China and the US are ahead in the AI race, there are ways in which the UK can catch up, according to Dr Mahlet Zimeta, Head of Public Policy at the Open Data Institute. “The territories that are lined up to be global AI superpowers are China, US, and the European Union,” she says, “because the great access to and availability of data means the analysis is better. They have massive advantages of scale, but the UK could show international leadership around AI ethics.”

With a greater focus on data skills, standards and sharing, and by fostering an international collaborative ecosystem for AI innovation, the West could yet leap ahead of China. And perhaps, in time, the AI superpowers will work together to the benefit of humanity.

How critical infrastructure is dealing with the threat of cyber attacks

A crippling ransomware attack on one of the largest fuel distribution networks in the US has brought into sharp focus the cyber threats facing infrastructure of national importance

In 2020, the Cybersecurity and Infrastructure Security Agency alerted the US to the risk of a devastating cyber attack on a crucial system of national importance. On 7 May this year, the UK’s National Cyber Security Centre (NCSC) issued a stark warning along similar lines. By coincidence, it was the same day that hackers would cripple one of the largest fuel distribution networks in North America. 

The attack on the Colonial Pipeline brought the authorities’ worst fears to life. The ransomware disabled the 5,500-mile network, causing fuel shortages in the south-eastern states of the US and prompting the Biden administration to declare a state of emergency. Although the Colonial Pipeline Company’s CEO, Joseph Blount, controversially paid the $4.4m (£3.2m) ransom, the network was out of action for a week.

This case was “not shocking” to Sarah Lyons, the NCSC’s deputy director for economy and society. There had been warnings aplenty. Only three months previously, for instance, a hacker unsuccessfully attempted to poison the water supply of Oldsmar, a city in Florida. 

“The pandemic has exacerbated cyber attacks targeting organisations, including providers of critical national infrastructure, which will always be an attractive target,” she says. “The Colonial Pipeline incident confirmed our belief that any such attack could have wide-ranging societal ramifications. It also gave us a glimpse at the kind of attack with a physical impact that could materialise in future if connected places providing critical public services are compromised.”

Fatal warning: potential cyber-physical attacks

The way that critical national infrastructure has evolved to use interconnected digital networks makes it far more vulnerable than it used to be, according to Lyons, who believes that the risks could be even greater when 5G is more widely adopted. 

“Regulated industries such as telecoms and energy are being connected to unregulated services and suppliers,” she explains. “These industries, which we all rely on daily, are an attractive target for a range of threat actors, unfortunately. A successful attack could cause significant disruptions to key public services and compromise citizens’ sensitive data.” 

Lyons urges operators to “recognise that it’s vital that we ensure these networks are resilient to cyber attacks. In a worst-case scenario, a successful one could endanger people.”

George Patsis, CEO of Obrela Security Industries, agrees, warning that “the sky is the limit” when it comes to the extent of the damage that cyber attacks on critical infrastructure could wreak. “These have the potential to be cyber physical, putting many people’s lives at risk,” he says. 

Patsis uses the London Underground as an example. “Computers control the timing of when trains arrive at junctions. If someone were to infiltrate the network and alter their synchronisation by only a few seconds, it could cause multiple fatal crashes,” he says.

Most worrying is a lack of robustness in operational technology (OT) security, which Gartner defines as “practices and technologies used to protect people, assets, and information; monitor and/or control physical devices, processes and events; and initiate state changes to enterprise OT systems.”

Patsis says: “As OT increasingly becomes internet-enabled, it creates new attack avenues. There is now a big focus on securing OT in the same way we do the IT estate.” 

While he notes that the Colonial Pipeline affair has been a “huge driver” for improving OT security, Patsis stresses that there is much work to do in this area.

Unique challenge: securing operational technology

Theresa Lanowitz, head of evangelism at AT&T Cybersecurity, takes much the same view. “With the convergence of IT and OT systems, there has been an exponential growth in internet-of-things devices that has heightened concerns about the digital security of these systems,” she says. 

Lanowitz calls for a “mindset shift” in securing OT assets. “Legacy infrastructure has been in place for decades and is now being combined as part of the convergence of IT and OT,” she says. “This can be challenging for organisations that previously used separate security tools for each environment and now require holistic asset visibility to prevent blind spots. Attacks are coming from all sides and are creeping across from IT to OT and vice versa. Organisations should adopt a risk-based approach that recognises that there is no perfect security solution.” 

She continues: “Enterprises that strategically balance security, scalability, access, usability and cost can ultimately provide the best long-term protection against an evolving adversary.”

Has the Colonial Pipeline attack encouraged infrastructure providers to take more effective defensive measures? “Frankly, not enough,” argues Rob Carew, chief product officer at Arcadis Gen, the digital arm of Arcadis, a Dutch engineering consultancy. “There is still a disconnect between cybersecurity and critical infrastructure.” 

He suggests that cybersecurity is widely seen in the sector as an “add-on”, rather than intrinsic, when it comes to monitoring the health of critical infrastructure.

“The problem is compounded by ageing hardware and software technology, which can often be exploited through unforeseen vulnerabilities,” Carew says. “Transparency and trust are key in having robust and executable action plans. Everyone has a role to play in security. If it becomes a regular topic of conversations among asset owners, operators, managers, maintainers and the supply chain, it will become part of the organisation’s DNA.”

Actions, though, speak louder than words. While the Colonial Pipeline incident may have set alarm bells ringing, the sector remains – months later – in a state of high alert, with cybercriminals seemingly better equipped to expose vulnerabilities and to profit from doing so.

This article first appeared in Raconteur’s Future of Infrastructure report in September 2021

Mastercard cyber chief on using AI in the fight against fraud

Ajay Bhalla, Mastercard’s president of cyber and intelligence solutions, thinks innovations like AI can tackle cybercrime – and help save the planet

The fight against fraud has always been a messy business, but it’s especially grisly in the digital age. To keep ahead of the cybercriminals, investment in technology – particularly artificial intelligence – is paramount, says Ajay Bhalla, president of cyber and intelligence solutions at Mastercard. 

Since the opening salvo of the coronavirus crisis, cybercriminals have launched increasingly sophisticated attacks across a multitude of channels, taking advantage of heightened emotions and poor online security.

Some £1.26 billion was lost to financial fraud in the UK in 2020, according to UK Finance, a trade association, while internet banking fraud losses surged by 43% year on year. The banking industry managed to stop some £1.6 billion of fraud over the course of the year – equivalent to £6.73 in every £10 of attempted fraud.

The landscape has rapidly evolved over the past year, says Bhalla, due to factors like the rapid growth of online shopping and the emergence of digital solutions in the banking sector and beyond. These changes have broken down the barriers to innovation, driving an unprecedented pace of change in the way we pay, bank and shop, says the executive, who’s responsible for deploying innovative technology to ensure the safety and security of 90 billion transactions every year. 

“Against that backdrop, cybercrime is a $5.2 trillion annual problem that must be met head-on. Standing still will mean effectively going backwards, as fraudsters are increasingly persistent, agile and well-funded.”

AI: the new electricity

It isn’t just the growing number of transactions that attracts criminal attention, but the diversity of opportunity, according to London-based Bhalla, who has held various roles at Mastercard around the world since 1993. 

“As the Internet of Things becomes ever more pervasive, so the size of the attack surface grows,” he says, noting that there will be 50 billion connected devices by 2025. 

Against this backdrop, AI will be essential to tackle cyber threats. 

“AI is fundamental to our work in areas such as identity and ecommerce, and we think of it as the new electricity, powering our society and driving forward progress,” says the 55-year-old.

Mastercard has pioneered the use of AI in banking through its worldwide network of R&D labs and AI innovation centres, and its AI-powered solutions have prevented more than $30bn from being lost to fraud over the past two years.

In 2020, it opened an Intelligence and Cyber Centre in Vancouver, aimed at accelerating innovation in AI and IoT. The company filed at least 40 AI-related patent applications last year and, according to Bhalla, has developed the biggest cyber risk assessment capability on the planet.

“We are constantly testing, adapting and improving algorithms to solve real-world challenges.”

Turning to examples of the company’s work, Bhalla says Mastercard has built an ability to trace and alert on financial crime across its network, a world first. He also points to the recently launched Enhanced Contactless, or ECOS, which leverages state-of-the-art security and privacy technology to make contactless payments resistant to attacks from quantum computers, using next-generation algorithms and cryptography. 

“With ECOS, contactless payments still happen in less than half a second, but they are three million times harder to break.”
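
As a back-of-envelope reading of that claim – our own arithmetic, not Mastercard’s published analysis – a three-million-fold increase in attack difficulty equates to roughly 21.5 extra bits of brute-force security margin, since log2(3,000,000) ≈ 21.5:

```python
import math

# “Three million times harder to break” expressed as extra brute-force
# key-search bits: log2(3,000,000) ≈ 21.5 additional bits of margin.
print(math.log2(3_000_000))  # ~21.52
```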

Building security through biometrics

Such innovations are transforming customers’ interactions with financial services providers. For example, Mastercard has combined AI-powered technologies with physical biometrics – like face, fingerprint and palm – to identify legitimate account holders. These technologies also recognise behavioural traits, such as the way customers hold their phone or how fast they type – actions that can’t easily be replicated by fraudsters.

“We see a future where biometrics don’t just authenticate a payment; they are the payment, with consumers simply waving to pay.”

Excited by developments in this area, Bhalla says Mastercard recently detected an attack involving hundreds of login attempts from a phone that had reported itself as lying flat on its back. “Given the speed at which the credentials were typed, we knew it was unlikely it could be done with the phone flat on a surface,” Bhalla says. “In this way, a sophisticated attack that looked otherwise legitimate was detected before any fraud losses could occur.”
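
For the technically curious, here is a minimal sketch of the kind of check Bhalla describes – flagging a login when the typing speed is implausible for the posture the device reports. Every field name and threshold below is an illustrative assumption, not a description of Mastercard’s actual system.

```python
# A minimal sketch, assuming hypothetical field names and thresholds,
# of the behavioural check described above. Illustrative only; not a
# description of Mastercard's production systems.

from dataclasses import dataclass


@dataclass
class LoginAttempt:
    chars_typed: int        # number of credential characters entered
    typing_seconds: float   # time taken to type them
    pitch_degrees: float    # device tilt from horizontal (0 = flat on its back)


# Illustrative assumption: a person typing on a phone lying flat rarely
# sustains more than ~8 characters per second.
MAX_FLAT_TYPING_SPEED = 8.0   # characters per second
FLAT_PITCH_THRESHOLD = 10.0   # degrees from horizontal


def is_suspicious(attempt: LoginAttempt) -> bool:
    """Flag logins whose typing speed is implausible for a flat device."""
    if attempt.typing_seconds <= 0:
        return True  # instantaneous entry is a classic bot signature
    speed = attempt.chars_typed / attempt.typing_seconds
    device_is_flat = attempt.pitch_degrees < FLAT_PITCH_THRESHOLD
    return device_is_flat and speed > MAX_FLAT_TYPING_SPEED


# Example: 24 characters typed in half a second on a phone lying flat.
print(is_suspicious(LoginAttempt(24, 0.5, 2.0)))  # True
```

In practice, a signal like this would be one feature among many feeding a broader AI risk model, rather than a single hard rule.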

Mastercard might boast an impressive list of successful fraud-fighting solutions, but wrong turns are vital for the journey, Bhalla admits. “If you don’t test things to break them, you can be sure their vulnerabilities will be discovered down the line,” he says. “At Mastercard, trust in and reliance on our services is far too important to take that risk, so rigorously testing solutions before they get anywhere near the end user is our standard operating procedure.”

Trust is a must

A keen rower and golfer, Bhalla volunteers as an executive-in-residence at the University of Oxford’s Saïd Business School. He has a bachelor’s degree in commerce from Delhi University and a master’s degree in management from the University of Mumbai. 

Even with his experience and tech knowledge, Bhalla insists that Mastercard and others within the industry must go back to basics and focus on customer experience. The company’s leadership in standards has been core to earning and retaining the trust of its customers, he notes. 

The technology may be evolving quickly, but one core principle remains, says Bhalla. “Our business is based on trust, which is hard-won and easily lost.”

The correct operating processes and standards must be in place from the outset so that both customers and businesses can have confidence in the technology and trust that it will be useful, safe and secure. 

“What has changed is the sharp focus now placed on developing leading-edge solutions that prevent fraud and manage its impact, which is not surprising given that the average cost of a single data breach has now grown to $3.86 million,” Bhalla says.

Providing a blueprint for business leaders, Bhalla strongly believes that “innovation must be good for people … and address their needs at the fundamental design stage of the systems and solutions we create.”

Bhalla is using tech to fight fraud and drive financial inclusion, with Mastercard aiming to connect 1 billion people globally to the digital economy by 2025. His ambitions are wider still, with much of his work focused on “protecting the world we have”.

Mindful that climate change is high on the agenda, especially for younger generations, Mastercard has launched a raft of programmes in the area, including this year’s Sustainable Card Badge, which looks to identify cards made more sustainably from recyclable, recycled, bio-sourced, chlorine-free, degradable or ocean plastics.

Much like the fight against fraud, the fight against global warming is reaching a crucial stage. Thanks to the efforts of industry leaders like Bhalla, the world stands a better chance of prevailing on both fronts.

This article was originally written for Raconteur’s Fighting Fraud report, published in June 2021

The worrying rise of ransomware as a service

The Colonial Pipeline cyberattack that cost a US fuel distributor $4.4m in May highlights why businesses need to treat the fast-emerging threat of ‘ransomware as a service’ more seriously

A wry observation doing the rounds among cybersecurity experts is that the hackers who’ve transformed ransomware attacks into a multibillion-dollar industry are more professional than their high-profile corporate victims. 

It was certainly no laughing matter for the CEO of the Colonial Pipeline, one of the largest fuel-distribution networks in the US, when an attack in early May disabled the 5,500-mile system, triggering fuel shortages and panic-buying at filling stations. Within hours of the breach, Joseph Blount controversially paid a $4.4m (£3.1m) ransom to DarkSide, the Russian hacking group that mounted the attack, on the basis that it was “for the good of the country”. Despite this, the network was still out of action for a week.

The Colonial Pipeline case is one of many similar incidents, which have increased sharply in number since the pandemic started but have tended to go under the radar, as the victims are understandably reluctant to publicise their security failings. This high-profile example has exposed the rise of so-called ransomware as a service (RaaS), which DarkSide and various other professional hackers are now offering. 

The number of cybercrimes committed worldwide in 2020 was 69% higher than the previous year’s total. Ransomware was involved in 27% of these and a total of $1.4bn was demanded, according to a report published in May by US data security company Zscaler. In the UK, cybersecurity specialist Mimecast believes that as many as 60% of companies suffered a ransomware attack during the year. 

Ransomware is on the rise (Soumil Kumar from Pexels)

“Covid-19 has driven a huge ransomware surge,” reports Deepen Desai, Zscaler’s chief information security officer. “Our researchers witnessed a fivefold increase in such attacks starting in March 2020, when the World Health Organization declared the pandemic.”

Criminals seeking to exploit the network vulnerabilities created by the general shift to remote working during the Covid crisis either developed more sophisticated hacking methods or, seeking a shortcut, paid for RaaS. 

RaaS business model rings alarm bells

“RaaS has enabled even the least technically advanced criminals to launch attacks,” says George Papamargaritis, director of managed security services operations at Obrela Security Industries. “Gangs are advertising their services on the dark web, collaborating to share code, infrastructure, techniques and profits.” 

The RaaS model means that the spoils are split among three partners in crime: the programmer, the service provider and the attacker. “This is a highly structured and organised machine that operates much like many other legitimate organisations,” he adds.

The earliest reference to RaaS can be traced back to 2016. But, as Jen Ellis, vice-president of community and public affairs at Rapid7 and co-chair of the Ransomware Task Force, notes: “There are indications that it’s on the rise as more criminals take the chance to make a quick, easy and relatively risk-free profit by entering the ransomware market.”

This collaborative approach to ransomware attacks is terrible news for businesses, warns Ian Pratt, global head of security for personal systems at Hewlett-Packard. “Once, it was the preserve of opportunistic individuals who targeted consumers with demands of a few hundred pounds. Today, criminal gangs operating ransomware make millions from corporate victims in so-called big-game hunts,” he says. “This should have the alarm bells ringing in boardrooms.”

By educating themselves and their employees, business leaders can improve company-wide security protocols and so minimise the risk of ransomware attacks. Pratt explains that “users are the point of entry for most attacks”, accounting for 70% of successful network breaches. Malware is “almost always delivered via email attachments, web links and downloadable files”.
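
As a rough illustration of that advice – a minimal sketch, not any vendor’s product – a first line of defence might screen inbound attachments for file types commonly abused to deliver malware. The extension list below is an assumption for the sake of example.

```python
# A minimal sketch of first-line email-attachment screening. The extension
# list is an illustrative assumption: file types commonly abused to
# deliver malware, not an exhaustive or authoritative set.

from pathlib import Path

RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".bat", ".docm", ".xlsm"}


def quarantine_candidates(attachments: list[str]) -> list[str]:
    """Return attachment names whose file type warrants quarantine or review."""
    return [
        name for name in attachments
        if Path(name).suffix.lower() in RISKY_EXTENSIONS
    ]


print(quarantine_candidates(["invoice.pdf", "update.exe", "macro.docm"]))
# -> ['update.exe', 'macro.docm']
```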

Prevention better than cure

Michiel Prins, co-founder of HackerOne, a vulnerability-disclosure platform connecting businesses with penetration testers, agrees. “Difficult as it may seem to prevent these attacks, prevention is always better than cure when it comes to ransomware,” he says. “This means maintaining a nimble and adversarial approach to cybersecurity that takes into account the perspective of an attacker, getting beyond traditional solutions that miss more elusive vulnerabilities.”

Prins argues that working with ethical hackers will “strengthen an organisation’s overall security posture”, as potential weak spots are reported and fixed “before serious damage is done”. Additionally, establishing a so-called bug-bounty programme, which rewards people for highlighting faults in code, “signals a high level of security maturity”, meaning that criminals might look for easier prey.

If they do fall victim to an attack, should organisations accede to ransomware demands? CrowdStrike estimates that just over a quarter of victims end up paying the hackers to unlock their systems. Nearly 60% of UK businesses would enter negotiations, according to Sam Curry, chief security officer at Cybereason. 

“We’d advise against paying ransoms. But in extreme situations, where lives are at risk or a national emergency is likely, it could be better to pay,” he says. “Before making that decision, it’s essential to notify your legal counsel, your insurer and the relevant law-enforcement agencies.”

Even when a business does cough up, there’s no guarantee that this will put an end to its problems. Peter Yapp, former deputy director at the UK’s National Cyber Security Centre and now a partner at law firm Schillings, cites the Travelex attack in December 2019 as an example. Many of the company’s web pages were still out of action two months later and a $2.3m ransom was eventually paid to the hackers. Later in 2020, Travelex sank into administration, “partly due to the losses and reputational damage caused by the attack”, he says.

Charles Brook, threat intelligence specialist at cybersecurity company Tessian, acknowledges that it’s a tough decision. “Ethically speaking, you have to consider that you are enabling cybercrime by paying a ransom,” he says. “But I can sympathise with organisations that may have no other option.”

There are other considerations, Brook adds. “If you pay, you could put a target on your back for further attacks. And, even after your files are decrypted, there may still be something malicious left behind.”

With the hackers in the ascendancy, Yapp believes that the government needs to step up its efforts to combat ransomware. “This has become such a serious problem that perhaps it’s time to lobby for the UK’s new National Cyber Force to fight back against these criminals in a different, military, way,” he suggests.

Perhaps the hackers won’t have the last laugh, after all.

This article was originally written for Raconteur’s Connected Business report, published as a supplement in The Times in June 2021