Go Flux Yourself: Navigating the Future of Work (No. 11)

TL;DR: November’s Go Flux Yourself channels the wisdom of Marcus Aurelius to navigate the AI revolution, examines Nvidia’s bold vision for an AI-dominated workforce, unpacks Australia’s landmark social media ban for under-16s, and finds timeless lessons in a school friend’s recovery story about the importance of thoughtful, measured progress …

Image created on Midjourney with the prompt “a dismayed looking Roman emperor Marcus Aurelius looking over a world in which AI drone and scary warfare dominates in the style of a Renaissance painting”

The future

“The happiness of your life depends upon the quality of your thoughts.” 

These sage – and neatly optimistic – words from Marcus Aurelius, the great Roman emperor and Stoic philosopher, feel especially pertinent as we scan 2025’s technological horizon. 

Aurelius, who died in AD 180 and became known as the last of the Five Good Emperors, exemplified a philosophy that teaches us to focus solely on what we can control and to accept what we cannot. He offers valuable wisdom for an AI-driven future, particularly for communities still suffering a psychological form of long COVID drawn from the collective trauma of the pandemic, compounded by deep uncertainty and general mistrust as geopolitical tensions and global temperatures rise.

The final emperor of the relatively peaceful Pax Romana era, Aurelius seemed a fitting person to quote this month for another reason: I’m flying to the Italian capital this coming week to cover CSO 360, a security conference that allows attendees to take a peek behind the curtain – although I’m worried about what I may see. 

One of the most eye-popping lines from last year’s conference in Berlin was that there was a 50-50 chance that World War III would be ignited in 2024. One could argue that while there has not been a Franz Ferdinand moment, the key players are manoeuvring their pieces on the board. Expect more on this cheery subject – ho, ho, ho! – in the last newsletter of the year, on December 31.

Meanwhile, as technological change accelerates and AI agents increasingly populate our workplaces (“agentic AI” is the latest buzzword, in case you haven’t heard), the quality of our thinking about their integration – something we can control – becomes paramount.

In mid-October, Jensen Huang, Co-Founder and CEO of tech giant Nvidia – which specialises in graphics processing units (GPUs) and AI computing – revealed on the BG2 podcast that he plans to shape his workforce so that it is one-third human and two-thirds AI agents.

“Nvidia has 32,000 employees today,” Huang stated, but he hopes the organisation will have 50,000 employees and “100 million AI assistants in every single group”. Given my focus on human-work evolution, I initially found this concept shocking, even appalling. But perhaps I was too hasty in reaching a conclusion.

When I interviewed Daniel Vassilev a couple of weeks ago – Co-Founder and CEO of Relevance AI, which builds virtual workforces of AI agents that act as a seamless extension of human teams – I found his perspective on Huang’s vision refreshingly nuanced. He offered an enlightening analogy about throwing pebbles into the sea.

“Most of us limit our thinking,” the San Francisco-based Australian entrepreneur said. “It’s like having ten pebbles to throw into the sea. We focus on making those pebbles bigger or flatter, so they’ll go further. But we often forget to consider whether our efforts might actually give us 20, 30, or even 50 pebbles to throw.”

His point cuts to the heart of the AI workforce debate: rather than simply replacing human workers, AI might expand our collective capabilities and create new opportunities. “I’ve always found it’s a safe bet that if you give people the ability to do more, they will do more,” Vassilev observed. “They won’t do less just because they can.”

This positive yet grounded perspective was echoed in my conversation with Five9’s Steve Blood, who shared fascinating insights about the evolution of workplace dynamics, specifically in the customer experience space, when I was in Barcelona mid-month to report on his company’s CX Summit. 

Blood, VP of Market Intelligence at Five9, predicts a “unified employee” future where AI enables workers to handle increasingly diverse responsibilities across traditional departmental boundaries. Rather than wholesale replacement, he envisions a workforce augmented by AI, where employees become more valuable by leveraging technology to handle multiple functions.

(As an aside, Blood predicts the customer experience landscape of 2030 will be radically different, with machine customers evolving through three distinct phases: starting with today’s ‘bound’ customers (like printers ordering their own ink cartridges exclusively from manufacturers), progressing to ‘adaptable’ customers (AI systems making purchases based on user preferences from multiple suppliers), and ultimately reaching ‘autonomous’ customers, where digital twins make entirely independent decisions based on their understanding of our preferences and history.)

The quality of our thinking about AI integration becomes especially crucial when considering what SailPoint’s CEO Mark McClain described to me this month as the “three V’s”: volume, variety, and velocity. These parameters no longer apply to data alone; they’re increasingly relevant to the AI agents themselves. As McClain explained: “We’ve got a higher volume of identities all the time. We’ve got more variety of identities, because of AI. And then you’ve certainly got a velocity problem here where it’s just exploding.” 

This explosion of AI capabilities brings us to a critical juncture. While Nvidia’s Huang envisions AI employees being managed much like their human counterparts – assigned tasks and engaged in dialogue – the reality might be more nuanced. Handling their security permissions will need much work, something business leaders have perhaps not thought about enough.

Indeed, AI optimism must be tempered with practical considerations. The cybersecurity experts I’ve met recently have all emphasised the need for robust governance frameworks and clear accountability structures. 

Looking ahead to next year, organisations must develop flexible frameworks that can evolve as rapidly as AI capabilities. The “second mouse gets the cheese” approach – waiting for others to make mistakes first, as panellist Sue Turner, Founding Director of AI Governance, explained during a Kolekti roundtable on the progress of generative AI held on ChatGPT’s second birthday, November 28 – may no longer be viable in an environment where change is constant and competition fierce. 

Successful organisations will emphasise complementary relationships between human and AI workers, requiring a fundamental rethink of traditional organisational structures and job descriptions.

The management of AI agent identities and access rights will become as crucial as managing human employees’ credentials, presenting both technical and philosophical challenges. Workplace culture must embrace what Blood calls “unified employees” – workers who can leverage AI to operate across traditional departmental boundaries. Perhaps most importantly, organisations must cultivate what Marcus Aurelius would recognise as quality of thought: the ability to think clearly and strategically about AI integration while maintaining human values and ethical considerations.
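What might that identity management look like in practice? Here is a minimal, purely illustrative sketch in Python – every name, scope, and credential in it is hypothetical, not any vendor’s actual API – of the principle above: putting AI agents through the same scoped, expiring, audited credential checks we already apply to human employees.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class WorkforceIdentity:
    """One identity model for humans and AI agents alike (hypothetical)."""
    name: str
    kind: str                                       # "human" or "ai_agent"
    scopes: set[str] = field(default_factory=set)   # e.g. {"crm:read"}
    expires: Optional[datetime] = None              # agent credentials should expire

    def can(self, scope: str) -> bool:
        if self.expires and datetime.now() > self.expires:
            return False                            # expired credentials grant nothing
        return scope in self.scopes

def authorise(identity: WorkforceIdentity, scope: str) -> bool:
    allowed = identity.can(scope)
    # Audit every decision: the velocity of agent requests makes logs essential
    print(f"{identity.kind}:{identity.name} requested {scope} -> "
          f"{'ALLOW' if allowed else 'DENY'}")
    return allowed

# A human employee and a short-lived AI agent pass through the same gate
alice = WorkforceIdentity("alice", "human", {"crm:read", "hr:read"})
bot = WorkforceIdentity("summary-bot", "ai_agent", {"crm:read"},
                        expires=datetime.now() + timedelta(hours=1))

authorise(alice, "hr:read")  # ALLOW: within her scopes
authorise(bot, "hr:read")    # DENY: the agent never sees HR files
```

The design point is the single gate: if agents authenticate through a side door, McClain’s three V’s – volume, variety, velocity – quickly become unmanageable.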

As we move toward 2025, the question isn’t simply whether AI agents will become standard members of the workforce – they already are. The real question is how we can ensure this integration enhances rather than diminishes human potential. The answer lies not in the technology itself, but in the quality of our thoughts about using it.

Organisations that strike and maintain this balance – embracing AI’s potential while preserving human agency and ethical considerations – will likely emerge as leaders in the new landscape. Ultimately, the quality of our thoughts about AI integration today will determine the happiness of our professional lives tomorrow.

The present

November’s news perfectly illustrates why we need to maintain quality of thought when adopting new technologies. Australia’s world-first ban on social media for under-16s – the bill passed a couple of days ago – marks a watershed moment in how we think about digital technology’s impact on society, and offers valuable lessons as we rush headlong into the AI revolution.

The Australian bill reflects a growing awareness of social media’s harmful effects on young minds. It’s a stance increasingly supported by data: new Financial Times polling reveals that almost half of British adults favour a total ban on smartphones in schools, while 71% support collecting phones in classroom baskets.

The timing couldn’t be more critical. Ofcom’s disturbing April study found nearly a quarter of British children aged between five and seven owned a smartphone, with many using social media apps despite being well below the minimum age requirement of 13. I pointed out in August’s Go Flux Yourself that EE recommended that children under 11 shouldn’t have smartphones. Meanwhile, University of Oxford researchers have identified a “linear relationship” between social media use and deteriorating mental health among teenagers.

Social psychologist Jonathan Haidt’s assertion in The Anxious Generation that smart devices have “rewired childhood” feels particularly apposite as we consider AI’s potential impact. If we’ve learned anything from social media’s unfettered growth, it’s that we must think carefully about technological integration before, not after, widespread adoption.

Interestingly, we’re seeing signs of a cultural awakening to technology’s double-edged nature. Collins Dictionary’s word of the year shortlist included “brainrot” – defined as an inability to think clearly due to excessive consumption of low-quality online content. While “brat” claimed the top spot – a word redefined by singer Charli XCX as someone who “has a breakdown, but kind of like parties through it” – the inclusion of “brainrot” speaks volumes about our growing awareness of digital overconsumption’s cognitive costs.

This awareness is manifesting in unexpected ways. A heartening trend has emerged on social media platforms, with users pushing back against online negativity by expressing gratitude for life’s mundane aspects. Posts celebrating “the privilege of doing household chores” or “the privilege of feeling bloated from overeating” represent a collective yearning for authentic, unfiltered experiences in an increasingly synthetic world.

In the workplace, we’re witnessing a similar recalibration regarding AI adoption. The latest Slack Workforce Index reveals a fascinating shift: for the first time since ChatGPT’s arrival, almost exactly two years ago, adoption rates have plateaued in France and the United States, while global excitement about AI has dropped six percentage points.

This hesitation isn’t necessarily negative – it might indicate a more thoughtful approach to AI integration. Nearly half of workers report discomfort admitting to managers that they use AI for common workplace tasks, citing concerns about appearing less competent or lazy. More tellingly, while employees and executives alike want AI to free up time for meaningful work, many fear it will actually increase their workload with “busy work”.

This gap between AI urgency and adoption reflects a deeper tension in the workplace. While organisations push for AI integration, employees express fundamental concerns about using these tools.

This more measured approach echoes broader societal concerns about technological integration. Just as we’re reconsidering social media’s role in young people’s lives, organisations are showing due caution about AI’s workplace implementation. The difference this time? We might actually be thinking before we leap.

Some companies are already demonstrating this more thoughtful approach. Global bank HSBC recently announced a comprehensive AI governance framework that includes regular “ethical audits” of their AI systems. Meanwhile, pharmaceutical giant AstraZeneca has implemented what they call “AI pause points” – mandatory reflection periods before deploying new AI tools.

The quality of our thoughts about these changes today will indeed shape the quality of our lives tomorrow. That may prove the most important lesson from this month’s developments: in an age of AI, natural wisdom matters more than ever.

These concerns aren’t merely theoretical. Microsoft’s Copilot AI spectacularly demonstrated the pitfalls of rushing to deploy AI solutions this month. The product, designed to enhance workplace productivity by accessing internal company data, became embroiled in privacy breaches, with users reportedly accessing colleagues’ salary details and sensitive HR files. 

When fewer than 4% of IT leaders surveyed by Gartner said Copilot offered significant value, and Salesforce’s CEO Marc Benioff compared it to Clippy – Microsoft Office 97’s notoriously unhelpful cartoon assistant – it highlighted a crucial truth: the gap between AI’s promise and its current capabilities remains vast. 

As organisations barrel towards agentic AI next year, with semi-autonomous bots handling everything from press round-ups to customer service, Copilot’s stumbles serve as a timely reminder about the importance of thoughtful implementation.

Related to this point is the looming threat to authentic thought leadership. Nina Schick, a global authority on AI, predicts that by 2025 a staggering 90% of online content will be synthetically generated by AI. It’s a sobering forecast that should give pause to anyone concerned about the quality of discourse in our digital age.

If nine out of ten pieces of content next year are churned out by machines learning from machines learning from machines, we risk creating an echo chamber of mediocrity, as I wrote in a recent Pickup_andWebb insights piece. As David McCullough, the late American historian and Pulitzer Prize winner, noted: “Writing is thinking. To write well is to think clearly. That’s why it’s so hard.”

This observation hits the bullseye of genuine thought leadership. Real insight demands more than information processing; it requires boots on the ground and minds that truly understand the territory. While AI excels at processing vast amounts of information and identifying patterns, it cannot fundamentally understand the human condition, feel empathy, or craft emotionally resonant narratives.

Leaders who rely on AI for their thought leadership are essentially outsourcing their thinking, trading their unique perspective for a synthetic amalgamation of existing views. In an era where differentiation is the most prized currency, that’s more than just lazy – it’s potentially catastrophic for meaningful discourse.

The past

In April 2014, Gary Mairs – a gregarious character in the year above me at school – drank his last alcoholic drink. Broke, broken and bedraggled, he entered a church in Seville and attended his first Alcoholics Anonymous meeting. 

His life had become unbearably – and unbelievably – chaotic. After moving to Spain with his then-girlfriend, he began to enjoy the cheap cervezas a little too much. Eight months before he quit booze, Gary’s partner left him, unable to cope with his endless revelry. This opened the beer tap further.

By the time Gary gave up drinking, he had maxed out 17 credit cards, his flatmates had turned on him, and he was hundreds of miles away from anyone who cared – hence his decision to sign up for AA. But what was it like?

I interviewed Gary for a recent episode of Upper Bottom, the sobriety podcast (for people who have not reached rock bottom) I co-host, and he was reassuringly straight-talking. He didn’t make it past step three of the 12 steps: he couldn’t supplicate to a higher power. 

However, when asked about the key changes on his road to recovery, Gary talks about the importance of good habits, healthy practices, and meditation. Marcus Aurelius would approve. 

In his Meditations, written as private notes to himself nearly two millennia ago, Aurelius emphasised the power of routine and self-reflection. “When you wake up in the morning, tell yourself: The people I deal with today will be meddling, ungrateful, arrogant, dishonest, jealous, and surly. They are like this because they can’t tell good from evil,” he wrote. This wasn’t cynicism but rather a reminder to accept things as they are and focus on what we can control – our responses, habits, and thoughts.

Gary’s journey from chaos to clarity mirrors this ancient wisdom. Just as Aurelius advised to “waste no more time arguing what a good man should be – be one”, Gary stopped theorising about recovery and simply began the daily practice of better living. No higher power was required – just the steady discipline of showing up for oneself.

This resonates as we grapple with AI’s integration into our lives and workplaces. Like Gary discovering that the answer lay not in grand gestures but in small, daily choices, perhaps our path forward with AI requires similar wisdom: accepting what we cannot change while focusing intently on what we can – the quality of our thoughts, the authenticity of our voices, the integrity of our choices.

As Aurelius noted: “Very little is needed to make a happy life; it is all within yourself, in your way of thinking.” 

Whether facing personal demons or technological revolution, the principle remains the same: quality of thought, coupled with consistent practice, lights the way forward.

Statistics of the month

  • Exactly two-thirds of LinkedIn users believe AI should be taught in high schools. Additionally, 72% observed an increase in AI-related mentions in job postings, while 48% said AI proficiency is a key requirement at the companies they applied to.
  • Only 51% of respondents to Searce’s Global State of AI Study 2024 – which polled 300 C-Suite and senior technology executives at organisations with at least $500 million in revenue in the US and UK – said their AI initiatives have been very successful. Meanwhile, 42% admitted success was only somewhat achieved.
  • International Workplace Group findings indicate just 7% of hybrid workers describe their 2024 hybrid work experience as “trusted”, hinting at an opportunity for employers to double down on trust in the year ahead.

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.