Go Flux Yourself: Navigating the Future of Work (No. 10)

TL;DR: October’s Go Flux Yourself explores the dark and light sides of AI through Nobel Prize winners and cybersecurity experts, weighs the impact of disinformation ahead of the US election, confronts haunting cases of AI misuse, and finds hope in a Holocaust survivor’s legacy of ethical innovation …

Image created on Midjourney with the prompt “a scary megalomaniac dressed as halloween monster with loads of computers showing code behind him in the style of an Edward Hopper painting”

The future

“Large language models are like young children – they grow and develop based on how you nurture and treat them.”

I’m curious by nature – it’s central to my profession as a truth-seeking journalist covering the evolution of human work. But sometimes, it’s perhaps best not to peek behind the curtain, as what lies behind might be terror-inducing. Fittingly, this newsletter is published on Halloween, so you might expect some horror. Consider yourself warned!

I was fortunate enough to interview two genuine cybersecurity luminaries in as many days towards the end of October. First, Dr Joye Purser, Field CISO at Veritas Technologies – a former White House director and senior US Government official during the Colonial Pipeline attack of 2021 – was over from Atlanta.

And, the following day, the “godfather of Israeli cybersecurity”, Shlomo Kramer, Co-Founder and CEO of Cato Networks, treated me to lunch at Claridge’s – lucky me! – after flying in from Tel Aviv.

The above quotation is from my conversation with Joye, who warned that a nation that isn’t democratic will train its AI systems very differently, on state-controlled information.

Both she and Shlomo painted a sobering picture of our technological future, particularly as we approach what could be the most digitally manipulated vote in history: the United States presidential election. Remember, remember the fifth of November, indeed.

“The risk is high for disinformation campaigns,” Joye stressed, urging voters to “carefully scrutinise the information they receive for what is the source, how recent or not recent is the information, and just develop an increasing public awareness of the warning signs or red flags that something’s not right with the communication”.

Shlomo, who co-founded Check Point Software Technologies in 1993, offered a stark analysis of how social media has fractured our society. “People don’t care if it’s right or wrong, whether a tweet is from a bot or a Russian campaign,” he said. “They just consume it, they believe it – it becomes their religion.” 

Shlomo drew a fascinating parallel between modern social media echo chambers and medieval church communities, suggesting we’ve come full circle from faith-based societies through the age of reason and back to tribal belief systems.

And, of course, most disagreements that escalate into wars are – at least on the surface – rooted in religious belief. Is it a coincidence that two of the largest wars of the past four decades are raging right now? (And if Donald Trump is voted back into the White House in a week, what will that mean for Europe if – as heavily hinted – funding for Ukraine’s military is strangled?)

After the collective trauma of the coronavirus pandemic, the combination of social media echo chambers and manipulated AIs is fanning the flames of an already smouldering society. Have you noticed how, generally, people are snappier with one another?

The cybersecurity challenges are equally worrying. Both experts highlighted how AI is supercharging traditional threats. Shlomo’s team recently uncovered an AI tool that can generate entire fake identities – complete with convincing video, passport photos, and multiple corroborating accounts – capable of fooling sophisticated know-your-customer systems at financial institutions.

Perhaps most concerning was their shared view that cybersecurity isn’t a problem we can solve, only one we must constantly manage. As Shlomo said: “You have to run as fast as possible to stay in the same place.” It’s a perpetual arms race between defenders and increasingly sophisticated threats.

Still, there’s hope. The very technologies that create these challenges might also help us overcome them. Both experts emphasised that while bad actors can use AI for deception, it’s also essential for defence. The key is ensuring we develop these tools with democratic values and human welfare in mind.

When I asked about preparing our children for this uncertain future – as I often do when interviewing experts who also have kids – their responses were enlightening. Joye emphasised the importance of teaching children to be “informed consumers of information” who understand the significance of trusted sources and proper journalism. 

Shlomo’s advice was more philosophical: children must learn to “listen to themselves and believe what they hear is true” – to trust their inner voice amid the cacophony of digital noise.

In the post-truth era, who can we trust if not ourselves?

A couple of years ago, John Elkington, a world authority on corporate responsibility and sustainable development who coined the term “triple bottom line”, told me: “In the vacuum of effective politicians, people are turning to businesses for leadership, so business leaders must accept that responsibility.” (Coincidentally, this year marks three decades since the British environmental thinker introduced the “3 Ps” of people, planet, and profit.)

For this reason, CEOs, especially, have to speak up with authority, authenticity and original thought. Staying curious, thinking critically, and calling out bad practice are increasingly important – particularly for industry leaders.

With an eye on the near future and the need for truth, I’m pleased to announce the soft launch of Pickup_andWebb, a collaboration with brand strategist and client-turned-friend Cameron Webb. Pickup_andWebb develops incisive, issue-led thought leadership for ambitious clients looking to provoke stakeholder and industry debate and enhance their expert reputation.

“In an era of unprecedented volatility, CEOs navigate treacherous waters,” I wrote recently in our opening Insights article titled Speak up or sink. “The growing list of headwinds is formidable – from geopolitical tensions and wars reshaping global alliances to the relentless march of technological advancements disrupting entire industries. 

“Add to this the perfect storm of rising energy and material costs, traumatised supply chains, and the ever-present spectre of climate change, and it’s clear that the modern CEO’s role has never been more challenging – or more crucial. Yet, despite this incredible turbulence, the truly successful CEO of 2024 must remain a beacon of stability and vision. They are the captains who keep their eyes fixed on the distant horizon, refusing to be distracted by the immediate squalls. 

“More than ever, they must embody the role of progressive visionaries, their gaze penetrating years into the future to seize nascent opportunities or deftly avoid looming catastrophes. But vision alone is not enough.

“Today’s exemplary leaders are expected to steer with a unique blend of authenticity, humility, and vulnerability. They understand that true strength lies not in infallibility but in the courage to acknowledge uncertainties and learn from missteps. 

“These leaders aren’t afraid to swim against the tide, challenging conventional wisdom when necessary and inspiring their crews to navigate uncharted waters.”

If you are – or you know – a leader who might need help swimming against the tide and spreading their word, let’s start a conversation and co-create in early 2025.

The present

This month’s news perfectly illustrated AI’s Jekyll-and-Hyde nature when it comes to truth and technology. We saw the good, the bad, and the downright ugly.

While I’ve shared the darker future possibilities outlined by cybersecurity experts Joye and Shlomo, the 2024 Nobel Prizes highlighted AI’s extraordinary potential for good.

Sir Demis Hassabis, chief executive of Google DeepMind, shared the chemistry prize for using AI to crack a 50-year-old puzzle in biology: predicting the structure of every protein known to humanity. His team’s creation, AlphaFold, has already been used by over two million scientists worldwide, helping develop vaccines, improve plant resistance to climate change, and advance our understanding of the human body.

The day before, Geoffrey Hinton – dubbed the “godfather of AI” – shared the physics prize for his pioneering work on neural networks, the very technology that powers today’s AI systems. Yet Hinton, who left Google in May 2023 to “freely speak out about the risk of AI”, now spends his time advocating for greater AI safety measures.

It’s a fitting metaphor for our times: the same week that celebrated AI’s potential to revolutionise scientific discovery also brought warnings about its capacity for deception and manipulation. As Hassabis himself noted, AI remains “just an analytical tool”; how we choose to use it is what matters – an echo of Joye’s point about how we nurture LLMs.

Related to this topic, I was on stage twice at the Digital Transformation EXPO (DTX) London 2024 at the start of the month. Having been asked to produce a write-up of the two-day conference – the theme was “reinvention” – I noted how “the tech industry is caught in a dizzying dance of progress and prudence”.

I continued: “As industry titans and innovators converged at ExCeL London in early October, a central question emerged: how do we harness the transformative power of AI while safeguarding the essence of our humanity?

“As we stand on the brink of unprecedented change, one thing becomes clear: the path forward demands technological prowess, deep ethical reflection, and a renewed focus on the human element in our digital age.”

In the opening keynote, Derren Brown, Britain’s leading psychological illusionist, called for a pause in AI development to ensure technological products serve humans, not vice versa.

“We need to keep humanity in the driving seat,” Brown urged, challenging the audience to rethink the breakneck pace of innovation. His call for caution contrasted sharply with the urgency voiced across the rest of the conference.

Piers Linney, Founder of ImplementAI and former Dragons’ Den investor, provided the most vivid analogy of the event. He likened competing in today’s market without embracing AI to “cage fighting – to the death – against the world champion, yet having Iron Man in one’s corner and not calling him for help”.

Meanwhile, Michael Wignall, Customer Success Leader UK at Microsoft, warned: “Most businesses are not moving fast enough. You need to ask yourself: ‘Am I ready to embrace this wave of transformation?’ Your competitors may be ready.” His advice was unequivocal: “Do stuff quickly. If you are not disrupting, you will be disrupted.”

I was honoured to moderate a main-stage panel exploring human-centred tech design, offering a crucial counterpoint to the “move-fast-and-break-things” mantra. Gavin Barton, VP of Engineering at Booking.com, Sue Daley, Director of Tech and Innovation at techUK, and Dr Nicola Millard, Principal Innovation Partner at BT Group, joined me.

“Focus on the outcome you’re looking for,” advised Gavin. “Look at the problem rather than the metric; ask what the real problem is to solve.” Sue cautioned against unquestioningly jumping on the AI bandwagon, stressing: “Think about what you’re trying to achieve. Are you involving your employees, workforce, and potentially customers in what you’re trying to do?” Nicola introduced her “3 Us” framework – Useful, Useable, and Used – for evaluating tech innovation.

Regarding tech’s darker side, Jake Moore, Global Cybersecurity Advisor at ESET, delivered a hair-raising presentation titled The Rise of the Clones on DTX’s Cyber Hacker stage. His practical demonstration of deepfake technology’s potential for harm validated the warnings from both Joye and Shlomo about AI-enabled deception.

Moore revealed how he had used deepfake video and voice technology to penetrate a business’s defences and commit small-scale fraud. It was particularly unnerving given Shlomo’s earlier warning about AI tools generating entire fake identities that can fool sophisticated verification systems.

Moore quoted the late Stephen Hawking’s prescient warning that “AI will be either the best or the worst thing for humanity”, and his demonstration felt like a stark counterpoint to the Nobel Prize celebrations. Here, in one conference hall, we witnessed both the promise and peril of our AI future – rather like watching Dr Jekyll transform into Mr Hyde.

Later in the month, there were yet darker instances of AI’s misuse and abuse. In a story that reads like a Black Mirror episode, American Drew Crecente discovered that his late teenage daughter, Jennifer, who was murdered in 2006, had been resurrected as an AI chatbot on Character.AI. The company claimed the bot was “user-created” and quickly removed it, but the incident raises profound questions about data privacy and respect for the deceased in our digital age.

Arguably even more distressing, and also in the United States, was the case of 14-year-old Sewell Setzer III, who took his own life after developing a relationship with an AI character based on Game of Thrones’ Daenerys Targaryen. His mother’s lawsuit against Character.AI highlights the dangers of AI companions that can form deep emotional bonds with vulnerable users – particularly children and teenagers.

Finally, in what police called a “landmark” prosecution, Bolton-based graphic design student Hugh Nelson was jailed for 18 years after using AI to create and sell child abuse images. The case exemplifies how rapidly improving AI technology can be weaponised for the darkest purposes, with prosecutors noting that “the imagery is becoming more realistic”.

While difficult to stomach, these stories validate warnings about AI’s destructive potential when developed without proper safeguards and ethical considerations. As Joye emphasised, how we nurture these technologies matters profoundly. The challenge ahead is clear: we must harness AI’s extraordinary potential for good while protecting the most vulnerable members of our society.

The past

During lunch at Claridge’s, Shlomo shared a remarkable story about his grandfather – after whom he is named – that feels particularly pertinent to the theme of human resilience in the face of technological change.

The elder Shlomo was an entrepreneur in Poland who survived Stalin’s Gulag through his business acumen. After enduring that horror, he navigated the treacherous post-war period in Austria – a time and place immortalised in Orson Welles’ The Third Man – before finally finding sanctuary in Israel in the early 1960s.

When the younger Shlomo co-founded Check Point Software Technologies over 30 years ago, the company’s first office was in his late grandfather’s vacant apartment. It feels fitting that a business focused on protecting people from digital threats began in a space owned by someone who had spent his life helping others survive very real ones.

The heart-warming story reminds us that while the challenges we face may evolve – from physical threats to digital deception – human ingenuity, ethical leadership, and the drive to protect others remain constant. 

As we grapple with AI’s implications for society, we would do well to remember this Halloween that technology is merely a tool; it’s the hands that wield it – and the values that guide those hands – that truly matter.

Statistics of the month

  • According to McKinsey & Company’s report The role of power in unlocking the European AI revolution, published last week, “in Europe, demand for data centers is expected to grow to approximately 35 gigawatts (GW) by 2030, up from 10 GW today. To meet this new IT load demand, more than $250 to $300 billion of investment will be needed in data center infrastructure, excluding power generation capacity.”
  • LinkedIn’s research reveals that more than half (56%) of UK professionals feel overwhelmed by how quickly their jobs are changing, which is particularly true of the younger generation (70% of 25-34-year-olds), while 47% say expectations are higher than ever.
  • Data from Asana’s Work Innovation Lab reveals that AI use is still predominantly a “solo” activity for UK workers, with the majority feeling most comfortable using it alone compared to within a team or their wider organisation. The press release hypothesises: “This may be because UK individual workers think they have a better handle on technology than their managers or the business. Workers rank themselves as having the highest level of comfort with technology (86%) – compared to their team (78%), manager (74%) and organisation (76%). This trend is mirrored across industries and sectors.”

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Published by


Oliver Pickup

Award-winning future-of-work Writer | Speaker | Moderator | Editor-in-Chief | Podcaster | Strategist | Collaborator | #technology #business #futureofwork #sport
