Go Flux Yourself: Navigating the Future of Work (No. 12)

TL;DR: December’s Go Flux Yourself explores how AI agents are reshaping the workforce, draws philosophical parallels between death and AI uncertainty, examines predictions for 2025, considers the growing AI-induced loneliness epidemic, and reflects on key themes from the newsletter’s inaugural year … 

Image created on Midjourney with the prompt “a thoughtful AI agent working alongside humans in an office setting, with both harmony and tension visible, in the style of a Rembrandt painting”

The future

“Stop thinking of AI as doing the work of 50% of the people. Start thinking of AI as doing 50% of the work for 100% of the people.”

These words from Jensen Huang, the CEO of Nvidia (whose vision of a two-thirds non-human workforce featured in November’s Go Flux Yourself), crystallise a profound shift in how we should conceptualise the relationship between artificial intelligence and human work – a subject that has been top of mind for me over the past year. On the eve of 2025, the conversation is evolving from whether AI will replace humans to how AI agents will augment and transform human capabilities.

At the start of December, I spoke with Sultan Saidov, Co-Founder and President at Beamery, an HR management software company headquartered in London. He offers valuable insights into this transformation and suggests we’re witnessing a fundamental restructuring of organisations, shifting from traditional pyramid structures to diamond shapes.

“In a world where AI might not be super smart or reliable, but it can do lots of low-cost tasks, you may not need as wide a pyramid at the bottom of the organisation,” Saidov explained. This structural evolution reflects Huang’s maxim: AI doing half the work for all the people, rather than all the work for half of them.

The implications of this shift are far-reaching. “If you start with empowerment,” Saidov argued, “you don’t just have to think about what tasks you are doing today that an agent could do. You have to think about what work would be really valuable in my time in a world where agents are available to me.”

This transformation brings profound questions about human identity and purpose in an AI-augmented world. Chris Langan, dubbed the “smartest man in the world” by the Daily Mail, with an IQ reportedly between 190 and 210, offers an intriguing perspective through his Cognitive-Theoretic Model of the Universe.

He suggests that when we die, we transition entirely to another plane of existence – one we cannot access while alive. In this new state, we might not even remember who we were before, existing in what Langan describes as a “meta-simultaneous” state where all possible incarnations exist at once.

The parallels between Langan’s metaphysical musings and our current AI moment are striking. Just as no one knows what happens next when we die – to borrow the motto above OpenAI CEO Sam Altman’s desk – we face similar uncertainty about how AI will transform our existence. Both scenarios involve a fundamental transformation of consciousness and being, with outcomes that remain tantalisingly beyond our current understanding.

Saidov recognises this uncertainty in the workplace context. “You have certain roles that are gradually becoming agents,” he noted, “like, let’s say, scheduling interviews and coordinating. That’s increasingly becoming this primarily non-human task. The humans that did that before are gradually moving into other things; otherwise, you won’t need as many of them.”

The emergence of “agentic AI” – autonomous systems that can perform complex tasks with minimal human supervision – represents perhaps the most significant shift in how work gets done since the Industrial Revolution. No doubt there will be opportunities and challenges (not least with security permissions and data sharing, as raised by SailPoint’s CEO Mark McClain in last month’s newsletter). Saidov emphasises that this isn’t just about efficiency: “The purpose of the HR function is increasingly to help navigate this massive human transformation.”

Just as Langan suggests our consciousness might persist in a different form after death, Saidov sees human work evolving rather than disappearing. “The nuance of how do you do this right, especially for people topics, has to come with a bit of governance,” he stressed. This governance becomes crucial as we navigate what he calls “task taxonomies” – the complex mapping of which tasks AI agents should handle and which should remain with humans.

In 2025, the challenge isn’t just technical implementation but preparing the next generation for this uncertain future. As I have come to ask many interviewees, I quizzed Saidov about how we should prepare our children. (I’m interested in these answers as the father of two children, aged 10 and four.) “It’s hard to predict what tools you will use [in the coming years], so probably the best thing you could do is encourage a curiosity for finding a passion, which isn’t so much a skill as a mindset that lets you explore what you care about more proactively,” he advised. 

I love this reply. The emphasis on curiosity and adaptability echoes both Langan’s philosophical framework and Saidov’s practical insights. In a world where AI agents are transforming traditional roles, the ability to navigate uncertainty becomes paramount. “There’s probably going to be toolkits for kids who are trying to zoom in on what’s real and what’s not real, to be more sophisticated in thinking about what’s fake and not fake than we are,” Saidov added.

The transformation of human interaction extends beyond the workplace, however. A Financial Times article about Meta’s plans to create bots on its social media platforms confirmed fears that – driven by misguided, rapacious capitalist goals – the biggest players are missing the point of technology, accelerating tech-induced loneliness and a cycle of content eating itself.

Indeed, Meta revealed its alarming vision for AI-generated characters to populate its social media platforms, with Connor Hayes, Vice-President of Product for Generative AI, suggesting these AI personas will exist “in the same way that accounts do”, complete with biographies, profile pictures, and the ability to generate and share content. 

While this might make platforms “more entertaining and engaging” – Meta’s stated priority for the next two years, according to Hayes – it raises profound questions about authenticity and connection in our digital future. As Becky Owen, former head of Meta’s creator innovations team, warned: “Unlike human creators, these AI personas don’t have lived experiences, emotions, or the same capacity for relatability.” 

This observation feels particularly pertinent when considered alongside Langan’s theories about consciousness and Saidov’s emphasis on human value in an AI-augmented workplace.

Echoing Langan’s view of consciousness, human-work evolution may exist simultaneously in multiple states – part human, part machine, with boundaries increasingly fluid. As organisations evolve toward diamond structures, the key to success is not resisting this transformation but embracing its uncertainty while maintaining our essential human qualities.

As we stand on this threshold of unprecedented change, perhaps the most valuable insight comes from combining Langan’s metaphysical perspective with Saidov’s practical wisdom: the future, whether of consciousness or work, may be unknowable, but our ability to adapt and maintain our humanity within it remains firmly within our control.

In early January, I’ll be talking about my latest thinking as the first guest of 2025 on the Leading the Future podcast – look out for that here.

The present

As we peer into the uncertainty of 2025, the predictions from industry leaders paint a picture of unprecedented transformation. Yet beneath the surface enthusiasm for AI adoption lies a growing concern about its impact on human connection and wellbeing.

“The biggest threat AI offers is loneliness,” warned Scott Galloway, professor at NYU Stern School of Business, in his annual pre-Christmas predictions. His concern is far from theoretical – in March, SplitMetrics found that AI companion apps had reached 225 million lifetime downloads on the Google Play store alone. We’re witnessing a fundamental shift in how humans seek connection. The rise of AI girlfriend apps, downloaded seven times more often than their AI boyfriend counterparts, signals a troubling retreat from human-to-human relationships.

This shift towards digital relationships parallels broader workplace transformations. Roshan Kindred, Chief Diversity Officer at PagerDuty, predicts that “companies will face increasing demands for transparency in their DEI efforts, including publishing data on workforce demographics, pay equity, and diversity initiatives”. The human element becomes more crucial even as automation increases.

The numbers tell a sobering story. According to Culture Amp’s latest data, nearly one in four workers (23%) plan to quit their jobs in 2025. This predicted attrition rate is particularly pronounced in the UK, exceeding both US (19%) and Australian (18%) figures. Only Germany shows a higher potential turnover at 24%.

Leadership quality emerges as the critical factor in this equation. With a great manager and leader, employees’ commitment to stay reaches 94%; with poor leadership, it plummets to just 19%. The financial implications are staggering – replacing an employee can cost between 30% and 200% of their salary, which at the 2024 UK average of £37,400 works out at roughly £11,200 to £74,800 per departure.

Jessie Scheepers, Belonging & Impact Lead at Pleo, envisions 2025 as a year when “human-centric leadership will come to the fore, with authentic and empathetic leaders placing a greater focus on their team’s mental health”. This prediction gains urgency when considered alongside Galloway’s warnings about technology-induced isolation.

Meanwhile, the education sector faces its own reckoning. Nikolaz Foucaud, Managing Director EMEA at Coursera, notes that 54% of employers now prioritise skills over traditional credentials. This shift comes as higher education faces financial challenges, with university fee caps rising to £9,535 a year. The solution, Foucaud suggests, lies in industry “micro-credentials” that bridge the gap between academic learning and workplace demands.

Perhaps most telling is the quantum-computing horizon outlined by Dominic Allon, CEO of Pipedrive. While quantum technologies promise revolutionary advances in optimisation and data security, their immediate impact may be less dramatic than feared. “Small businesses may benefit from breakthroughs in areas like optimisation, but those that adopt or integrate quantum solutions early on could gain a competitive edge in innovation, cost efficiency, and scalability,” he notes.

The financial landscape presents particular challenges. Hila Harel from Fiverr predicts UK businesses will face significant pressures, with average losses expected to reach £138,000 and a quarter of companies anticipating losses over £100,000. Yet within this disruption lies opportunity – particularly for freelancers and flexible workers who can navigate the evolving landscape.

These predictions collectively suggest that 2025 will be less about technological revolution and more about human evolution. The successful integration of AI and other advanced technologies will depend not on the tools’ sophistication but on our ability to maintain and strengthen human connections in an increasingly digital world.

The past

As it’s the end of 2024, I’d like to look back and reflect on the inaugural year of Go Flux Yourself. The themes that emerged across my monthly explorations feel eerily prescient. I began in January examining Sam Altman’s aforementioned desk motto – “no one knows what happens next” – a humble acknowledgement that set the tone for a year of thoughtful examination rather than confident predictions.

That uncertainty proved to be one of my most reliable companions through 2024. In February, I explored the concept of FOBO – fear of becoming obsolete – through an unexpected lens: a chance encounter with a tarot card reader in a cafe. The Page of Swords card suggested the need to embrace new forms of communication and continuous learning, while The Two of Wands warned about the dangers of remaining in our comfort zones while watching the world transform around us.

March’s Go Flux Yourself featured Minouche Shafik’s prescient observation that while “jobs were about muscles in the past, now they’re about brains, but in the future, they’ll be about the heart”. This insight gained particular resonance as the year progressed and AI capabilities expanded, emphasising the enduring value of human empathy and emotional intelligence.

In April, I shared my thoughts about values. In August, I revealed the CHUI Framework – Community, Health, Understanding, and Interconnectedness – providing structured guidance for navigating human-work evolution. These values proved essential as organisations grappled with technological change and the persistent challenge of maintaining human connection in increasingly digital workplaces.

During the summer months, I examined what Scott Galloway calls “the biggest threat we’re not discussing enough”: loneliness. June’s newsletter warned about the potential social costs of increasing reliance on digital relationships.

July’s Go Flux Yourself provided one of the most sobering insights of the year. Futurist Gerd Leonhard compared the arrival of artificial general intelligence (AGI) to “a meteor coming down from above, stopping culture and knowledge as we know it”. This metaphor gained particular potency when considered alongside Chris Langan’s theories about consciousness and existence, which we explored earlier in this edition.

August introduced the concept of being “kind explorers” in the digital age, inspired by my daughter’s career ambition shared at her nursery graduation. September reflected on the importance of wonder and magic in an increasingly automated world, while October examined the dark side of AI through conversations with cybersecurity luminaries Dr Joye Purser and Shlomo Kramer.

November channelled Marcus Aurelius’s wisdom about the quality of our thoughts determining the quality of our lives – a theme that resonates powerfully as we conclude our year’s journey. Throughout these explorations, certain constants emerged: the importance of human connection in an increasingly digital world, the need for thoughtful implementation of technology, and the enduring value of authentic leadership.

I’ve witnessed the workplace transform from a location to a concept, and documented the rise of what we now call the “relationship economy”. Further, research suggests that by 2025, up to 90% of online content could be AI-generated, making human authenticity more valuable than ever. (This is one of the reasons I have this year established Pickup_andWebb, a content company providing human-first thought leadership for businesses and C-suite executives. Read this blog on our thinking about the near future of thought leadership here.)

The year brought tangible changes too. Australia made history by banning social media for under-16s, while EE recommended against smartphones for under-11s. Microsoft’s Copilot AI demonstrated both the promise and perils of workplace AI integration, with privacy breaches highlighting the gap between technological capability and practical implementation.

My explorations of – and writing and speaking about – human-work evolution have taken me from London’s Silicon Roundabout to Barcelona’s tech hubs, from Manchester’s Digital Transformation Expo to a security conference in Rome, and elsewhere.

Looking back, perhaps Go Flux Yourself’s most significant achievement in 2024 has been maintaining a balanced perspective – neither succumbing to techno-optimism nor falling into dystopian pessimism. I’ve documented the challenges while highlighting opportunities, always emphasising the importance of human agency in shaping our technological future.

The questions I asked in January remain relevant. How do we maintain our humanity in an increasingly automated world? How do we ensure technology serves human flourishing rather than diminishing it? But I’ve gained valuable insights into answering them, understanding that the key lies not in resisting change but in thoughtfully shaping it.

As I close the final chapter of Go Flux Yourself for 2024, we can appreciate that while uncertainty remains our constant companion in the coming year, our capacity for adaptation, innovation, and human connection provides a reliable compass for navigating whatever comes next.

Ultimately, whether facing AGI, armies of AI agents, or augmented workplaces, the quality of our thoughts and the strength of our human bonds will determine the success of our journey.

Statistics of the month

  • AI is the fastest-growing skill among employees, job seekers and students in the UK and globally, with Coursera course enrolments in this domain having increased 866% year-on-year, according to newly released data.
  • Nobel laureate Geoffrey Hinton – the “Godfather of AI” – has doubled his doomsday prediction, speaking to BBC Radio 4’s Today programme, now warning of a 1-in-5 chance that AI could wipe out humanity by 2054. He cautions we’ll be like toddlers attempting to control super-intelligent machines, adding: “We’ve never had to deal with things more intelligent than ourselves before.”

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 11)

TL;DR: November’s Go Flux Yourself channels the wisdom of Marcus Aurelius to navigate the AI revolution, examines Nvidia’s bold vision for an AI-dominated workforce, unpacks Australia’s landmark social media ban for under-16s, and finds timeless lessons in a school friend’s recovery story about the importance of thoughtful, measured progress …

Image created on Midjourney with the prompt “a dismayed looking Roman emperor Marcus Aurelius looking over a world in which AI drone and scary warfare dominates in the style of a Renaissance painting”

The future

“The happiness of your life depends upon the quality of your thoughts.” 

These sage – and neatly optimistic – words from Marcus Aurelius, the great Roman emperor and Stoic philosopher, feel especially pertinent as we scan 2025’s technological horizon. 

Aurelius, who died in AD 180 and became known as the last of the Five Good Emperors, exemplified a philosophy that teaches us to focus solely on what we can control and accept what we cannot. His wisdom is valuable for an AI-driven future – and for communities still suffering a psychological form of long COVID born of the pandemic’s collective trauma, compounded by deep uncertainty and mistrust as geopolitical tensions and global temperatures rise.

The final emperor in the relatively peaceful Pax Romana era, Aurelius seemed a fitting person to quote this month for another reason: I’m flying to the Italian capital this coming week, to cover CSO 360, a security conference that allows attendees to take a peek behind the curtain – although I’m worried about what I may see. 

One of the most eye-popping lines from last year’s conference in Berlin was that there was a 50-50 chance that World War III would be ignited in 2024. One could argue that while there has not been a Franz Ferdinand moment, the key players are manoeuvring their pieces on the board. Expect more on this cheery subject – ho, ho, ho! – in the last newsletter of the year, on December 31.

Meanwhile, as technological change accelerates and AI agents increasingly populate our workplaces (“agentic AI” is the latest buzzword, in case you haven’t heard), the quality of our thinking about their integration – something we can control – becomes paramount.

In mid-October, Jensen Huang, Co-Founder and CEO of tech giant Nvidia – which specialises in graphics processing units (GPUs) and AI computing – revealed on the BG2 podcast that he plans to shape his workforce so that it is one-third human and two-thirds AI agents.

“Nvidia has 32,000 employees today,” Huang stated, but he hopes the organisation will grow to 50,000 employees and “100 million AI assistants in every single group”. Given my focus on human-work evolution, I initially found this concept shocking, even appalling. But perhaps I was too hasty to reach a conclusion.

When, a couple of weeks ago, I interviewed Daniel Vassilev, Co-Founder and CEO of Relevance AI, which builds virtual workforces of AI agents that act as a seamless extension of human teams, his perspective on Huang’s vision was refreshingly nuanced. He provided an enlightening analogy about throwing pebbles into the sea.

“Most of us limit our thinking,” the San Francisco-based Australian entrepreneur said. “It’s like having ten pebbles to throw into the sea. We focus on making those pebbles bigger or flatter, so they’ll go further. But we often forget to consider whether our efforts might actually give us 20, 30, or even 50 pebbles to throw.”

His point cuts to the heart of the AI workforce debate: rather than simply replacing human workers, AI might expand our collective capabilities and create new opportunities. “I’ve always found it’s a safe bet that if you give people the ability to do more, they will do more,” Vassilev observed. “They won’t do less just because they can.”

This positive yet grounded perspective was echoed in my conversation with Five9’s Steve Blood, who shared fascinating insights about the evolution of workplace dynamics, specifically in the customer experience space, when I was in Barcelona in the middle of the month reporting on his company’s CX Summit. 

Blood, VP of Market Intelligence at Five9, predicts a “unified employee” future where AI enables workers to handle increasingly diverse responsibilities across traditional departmental boundaries. Rather than wholesale replacement, he envisions a workforce augmented by AI, where employees become more valuable by leveraging technology to handle multiple functions.

(As an aside, Blood predicts the customer experience landscape of 2030 will be radically different, with machine customers evolving through three distinct phases: starting with today’s ‘bound’ customers (like printers ordering their own ink cartridges exclusively from manufacturers), progressing to ‘adaptable’ customers (AI systems making purchases based on user preferences from multiple suppliers), and ultimately reaching ‘autonomous’ customers, where digital twins make entirely independent decisions based on their understanding of our preferences and history.)

The quality of our thinking about AI integration becomes especially crucial when considering what SailPoint’s CEO Mark McClain described to me this month as the “three V’s”: volume, variety, and velocity. These parameters no longer apply to data alone; they’re increasingly relevant to the AI agents themselves. As McClain explained: “We’ve got a higher volume of identities all the time. We’ve got more variety of identities, because of AI. And then you’ve certainly got a velocity problem here where it’s just exploding.” 

This explosion of AI capabilities brings us to a critical juncture. While Nvidia’s Huang envisions AI employees being managed much like their human counterparts – assigned tasks and engaged in dialogue – the reality might be more nuanced, and handling security permissions will need much work, something business leaders have perhaps not thought about enough.

Indeed, AI optimism must be tempered with practical considerations. The cybersecurity experts I’ve met recently have all emphasised the need for robust governance frameworks and clear accountability structures. 

Looking ahead to next year, organisations must develop flexible frameworks that can evolve as rapidly as AI capabilities. The “second mouse gets the cheese” approach – waiting for others to make mistakes first, as panellist Sue Turner, Founding Director of AI Governance, explained during a Kolekti roundtable on generative AI’s progress held on ChatGPT’s second birthday, November 28 – may no longer be viable in an environment where change is constant and competition fierce.

Successful organisations will emphasise complementary relationships between human and AI workers, requiring a fundamental rethink of traditional organisational structures and job descriptions.

The management of AI agent identities and access rights will become as crucial as managing human employees’ credentials, presenting both technical and philosophical challenges. Workplace culture must embrace what Blood calls “unified employees” – workers who can leverage AI to operate across traditional departmental boundaries. Perhaps most importantly, organisations must cultivate what Marcus Aurelius would recognise as quality of thought: the ability to think clearly and strategically about AI integration while maintaining human values and ethical considerations.

As we move toward 2025, the question isn’t simply whether AI agents will become standard members of the workforce – they already are. The real question is how we can ensure this integration enhances rather than diminishes human potential. The answer lies not in the technology itself, but in the quality of our thoughts about using it.

Organisations that strike and maintain this balance – embracing AI’s potential while preserving human agency and ethical considerations – will likely emerge as leaders in the new landscape. Ultimately, the quality of our thoughts about AI integration today will determine the happiness of our professional lives tomorrow.

The present

November’s news perfectly illustrates why we need to maintain quality of thought when adopting new technologies. Australia’s world-first decision to ban social media for under-16s, a bill passed a couple of days ago, marks a watershed moment in how we think about digital technology’s impact on society – and offers valuable lessons as we rush headlong into the AI revolution.

The Australian bill reflects a growing awareness of social media’s harmful effects on young minds. It’s a stance increasingly supported by data: new Financial Times polling reveals that almost half of British adults favour a total ban on smartphones in schools, while 71% support collecting phones in classroom baskets.

The timing couldn’t be more critical. Ofcom’s disturbing April study found nearly a quarter of British children aged between five and seven owned a smartphone, with many using social media apps despite being well below the minimum age requirement of 13. I pointed out in August’s Go Flux Yourself that EE recommended that children under 11 shouldn’t have smartphones. Meanwhile, University of Oxford researchers have identified a “linear relationship” between social media use and deteriorating mental health among teenagers.

Social psychologist Jonathan Haidt’s assertion in The Anxious Generation that smart devices have “rewired childhood” feels particularly apposite as we consider AI’s potential impact. If we’ve learned anything from social media’s unfettered growth, it’s that we must think carefully about technological integration before, not after, widespread adoption.

Interestingly, we’re seeing signs of a cultural awakening to technology’s double-edged nature. Collins Dictionary’s word of the year shortlist included “brainrot” – defined as an inability to think clearly due to excessive consumption of low-quality online content. While “brat” claimed the top spot – a word redefined by singer Charli XCX as someone who “has a breakdown, but kind of like parties through it” – the inclusion of “brainrot” speaks volumes about our growing awareness of digital overconsumption’s cognitive costs.

This awareness is manifesting in unexpected ways. A heartening trend has emerged on social media platforms, with users pushing back against online negativity by expressing gratitude for life’s mundane aspects. Posts celebrating “the privilege of doing household chores” or “the privilege of feeling bloated from overeating” represent a collective yearning for authentic, unfiltered experiences in an increasingly synthetic world.

In the workplace, we’re witnessing a similar recalibration regarding AI adoption. The latest Slack Workforce Index reveals a fascinating shift: for the first time since ChatGPT’s arrival, almost exactly two years ago, adoption rates have plateaued in France and the United States, while global excitement about AI has dropped six percentage points.

This hesitation isn’t necessarily negative – it might indicate a more thoughtful approach to AI integration. Nearly half of workers report discomfort admitting to managers that they use AI for common workplace tasks, citing concerns about appearing less competent or lazy. More tellingly, while employees and executives alike want AI to free up time for meaningful work, many fear it will actually increase their workload with “busy work”.

This gap between AI urgency and adoption reflects a deeper tension in the workplace. While organisations push for AI integration, employees express fundamental concerns about using these tools.

This more measured approach echoes broader societal concerns about technological integration. Just as we’re reconsidering social media’s role in young people’s lives, organisations are showing due caution about AI’s workplace implementation. The difference this time? We might actually be thinking before we leap.

Some companies are already demonstrating this more thoughtful approach. Global bank HSBC recently announced a comprehensive AI governance framework that includes regular “ethical audits” of their AI systems. Meanwhile, pharmaceutical giant AstraZeneca has implemented what they call “AI pause points” – mandatory reflection periods before deploying new AI tools.

The quality of our thoughts about these changes today will indeed shape the quality of our lives tomorrow. That’s the most important lesson from this month’s developments: in an age of AI, natural wisdom matters more than ever.

These concerns aren’t merely theoretical. Microsoft’s Copilot AI spectacularly demonstrated the pitfalls of rushing to deploy AI solutions this month. The product, designed to enhance workplace productivity by accessing internal company data, became embroiled in privacy breaches, with users reportedly accessing colleagues’ salary details and sensitive HR files. 

When less than 4% of IT leaders surveyed by Gartner said Copilot offered significant value, and Salesforce’s CEO Marc Benioff compared it to Clippy – Office 97’s notoriously unhelpful cartoon assistant – it highlighted a crucial truth: the gap between AI’s promise and its current capabilities remains vast.

As organisations barrel towards agentic AI next year, with semi-autonomous bots handling everything from press round-ups to customer service, Copilot’s stumbles serve as a timely reminder of the importance of thoughtful implementation.

Related to this point is the looming threat to authentic thought leadership. Nina Schick, a global authority on AI, predicts that by 2025 a staggering 90% of online content will be synthetically generated by AI. It’s a sobering forecast that should give pause to anyone concerned about the quality of discourse in our digital age.

If nine out of ten pieces of content next year will be churned out by machines learning from machines learning from machines, we risk creating an echo chamber of mediocrity, as I wrote in a recent Pickup_andWebb insights piece. As David McCullough, the late American historian and Pulitzer Prize winner, noted: “Writing is thinking. To write well is to think clearly. That’s why it’s so hard.”

This observation hits the bullseye of genuine thought leadership. Real insight demands more than information processing; it requires boots on the ground and minds that truly understand the territory. While AI excels at processing vast amounts of information and identifying patterns, it cannot fundamentally understand the human condition, feel empathy, or craft emotionally resonant narratives.

Leaders who rely on AI for their thought leadership are essentially outsourcing their thinking, trading their unique perspective for a synthetic amalgamation of existing views. In an era where differentiation is the most prized currency, that’s more than just lazy – it’s potentially catastrophic for meaningful discourse.

The past

In April 2014, Gary Mairs – a gregarious character in the year above me at school – drank his last alcoholic drink. Broke, broken and bedraggled, he entered a church in Seville and attended his first Alcoholics Anonymous meeting. 

His life had become unbearably – and unbelievably – chaotic. After moving to Spain with his then-girlfriend, he began to enjoy the cheap cervezas a little too much. Eight months before he quit booze, Gary’s partner, unable to cope with his endless revelry, left him. This opened the beer tap further.

By the time Gary gave up drinking, he had maxed out 17 credit cards, his flatmates had turned on him, and he was hundreds of miles away from anyone who cared – which is why he signed up for AA. But what was it like?

I interviewed Gary for a recent episode of Upper Bottom, the sobriety podcast (for people who have not reached rock bottom) I co-host, and he was reassuringly straight-talking. He didn’t make it past step three of the 12 steps: he couldn’t bring himself to supplicate to a higher power.

However, when asked about the important changes on his road to recovery, Gary talks about the importance of good habits, healthy practices, and meditation. Marcus Aurelius would approve. 

In his Meditations, written as private notes to himself nearly two millennia ago, Aurelius emphasised the power of routine and self-reflection. “When you wake up in the morning, tell yourself: The people I deal with today will be meddling, ungrateful, arrogant, dishonest, jealous, and surly. They are like this because they can’t tell good from evil,” he wrote. This wasn’t cynicism but rather a reminder to accept things as they are and focus on what we can control – our responses, habits, and thoughts.

Gary’s journey from chaos to clarity mirrors this ancient wisdom. Just as Aurelius advised to “waste no more time arguing what a good man should be – be one”, Gary stopped theorising about recovery and simply began the daily practice of better living. No higher power was required – just the steady discipline of showing up for oneself.

This resonates as we grapple with AI’s integration into our lives and workplaces. Like Gary discovering that the answer lay not in grand gestures but in small, daily choices, perhaps our path forward with AI requires similar wisdom: accepting what we cannot change while focusing intently on what we can – the quality of our thoughts, the authenticity of our voices, the integrity of our choices.

As Aurelius noted: “Very little is needed to make a happy life; it is all within yourself, in your way of thinking.” 

Whether facing personal demons or technological revolution, the principle remains the same: quality of thought, coupled with consistent practice, lights the way forward.

Statistics of the month

  • Exactly two-thirds of LinkedIn users believe AI should be taught in high schools. Additionally, 72% observed an increase in AI-related mentions in job postings, while 48% stated that AI proficiency is a key requirement at the companies they applied to.
  • Only 51% of respondents to Searce’s Global State of AI Study 2024 – which polled 300 C-suite and senior technology executives across organisations with at least $500 million in revenue in the US and UK – said their AI initiatives have been very successful. Meanwhile, 42% admitted success was only somewhat achieved.
  • International Workplace Group findings indicate just 7% of hybrid workers describe their 2024 hybrid work experience as “trusted”, hinting at an opportunity for employers to double down on trust in the year ahead.

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.