Go Flux Yourself: Navigating the Future of Work (No. 23)

TL;DR: November’s Go Flux Yourself marks three years since ChatGPT’s launch by examining the “survival of the shameless” – Rutger Bregman’s diagnosis of Western elite failure. With responsible innovation falling out of fashion and moral ambition in short supply, it asks what purpose-driven technology actually looks like when being bad has become culturally acceptable.

Image created on Nano Banana

The future

“We’ve taught our best and brightest how to climb, but not what ladder is worth climbing. We’ve built a meritocracy of ambition without morality, of intelligence without integrity, and now we are reaping the consequences.”

The above quotation comes from Rutger Bregman, the Dutch historian and thinker who shot to prominence at the World Economic Forum in Davos in 2019. You may recall the viral clip. Standing before an audience of billionaires, he did something thrillingly bold: he told them to pay their taxes.

“It feels like I’m at a firefighters’ conference and no one’s allowed to speak about water,” he said almost seven years ago. “Taxes, taxes, taxes. The rest is bullshit in my opinion.”

Presumably due to his truth-telling, he has not been invited back to the Swiss Alps for the WEF’s Annual Meeting.

Bregman is this year’s BBC Reith Lecturer, and, again, he is holding a mirror up to society to reveal its ugly, venal self. His opening lecture, A Time of Monsters – a title borrowed from Antonio Gramsci’s 1929 prison notebooks – delivered at the end of November, builds on that Davos provocation with something more troubling: a diagnosis of elite failure across the Western world. This time, his target isn’t just tax avoidance. It’s what he calls the “survival of the shameless”: the systematic elevation of the unscrupulous over the capable, and the brazen over the virtuous.

Even Bregman isn’t immune to the censorship he critiques. The BBC reportedly removed a line from his lecture describing Donald Trump as “the most openly corrupt president in American history”. The irony, as Bregman put it, is that the lecture was precisely about “the paralysing cowardice of today’s elites”. When even the BBC flinches from stating the obvious – and presumably fears how Trump might react (he has threatened to sue the broadcaster for $5 billion over doctored footage, a scandal that earlier in November prompted the resignations of the director-general and the CEO of BBC News) – you know something is deeply rotten.

Bregman’s opening lecture is well worth a listen, as is the Q&A afterwards. His strong opinions chimed with the beliefs of Gemma Milne, a Scottish science writer and lecturer at the University of Glasgow, whom I caught up with a couple of weeks ago, having first interviewed her almost a decade ago.

The author of Smoke & Mirrors: How Hype Obscures the Future and How to See Past It has recently submitted her PhD thesis at the University of Edinburgh (Putting the future to work – The promises, product, and practices of corporate futurism), and has been tracking this shift for years. Her research focuses on “corporate futurism” and the political economy of deep tech – essentially, who benefits from the stories we tell about innovation.

Her analysis is blunt: we’re living through what she calls “the age of badness”.

“Culturally, we have peaks and troughs in terms of how much ‘badness’ is tolerated,” she told me. “Right now, being the bad guy is not just accepted, it’s actually quite cool. Look at Elon Musk, Trump, and Peter Thiel. There’s a pragmatist bent that says: the world is what it is, you just have to operate in it.”

When Smoke & Mirrors came out in 2020, conversations around responsible innovation were easier: entrepreneurs genuinely wanted to get it right. The mood has since curdled. “If hype is how you get things done and people get misled along the way, so be it,” Gemma said of the shift in attitude among those in power. “‘The ends justify the means’ has become the prevailing logic.”

On a not-unrelated note, November 30 marked exactly three years since OpenAI launched ChatGPT. (This end-of-the-month newsletter arrives a day later than usual – the weekend, plus an embargo on the Adaptavist Group research below.) We’ve endured three years of breathless proclamations about productivity gains, creative disruption, and the democratisation of intelligence. And three years of pilot programmes, failed implementations, and so much hype. 

Meanwhile, graduate vacancies have fallen by two-thirds in the UK alone, and unemployment has risen to 5% – the highest since September 2021, at the height of the pandemic fallout – according to Office for National Statistics data published in mid-November.

New research from The Adaptavist Group, gleaned from almost 5,000 knowledge workers split evenly across the UK, US, Canada and Germany, underscores the insidious social cost: a third (32%) of workers report speaking to colleagues less since using GenAI, and 26% would rather engage in small talk with an AI chatbot than with a human.

So here’s the question that Bregman forces us to confront: if we now have access to more intelligence than ever before – both human and artificial – what exactly are we doing with it? And are we using technology for good, for human enrichment and flourishing? On the whole, with artificial intelligence, I don’t think so.

Bregman describes consultancy, finance, and corporate law as a “gaping black hole” that sucks up brilliant minds: a Bermuda Triangle of talent that has tripled in size since the 1980s. Every year, he notes, thousands of teenagers write beautiful university application essays about solving climate change, curing disease, or ending poverty. A few years later, most have been funnelled towards the likes of McKinsey, Goldman Sachs, and Magic Circle law firms.

The numbers bear this out. Around 40% of Harvard graduates now end up in that Bermuda Triangle of talent, according to Bregman. Include big tech, and the share rises above 60%. One Facebook employee, a former maths prodigy, quoted by the Dutchman in his first Reith lecture, said: “The best minds of my generation are thinking about how to make people click ads. That sucks.”

If we’ve spent decades optimising our brightest minds towards rent-seeking and attention-harvesting, AI accelerates that trajectory. The same tools that could solve genuine problems are instead deployed to make advertising more addictive, to automate entry-level jobs without creating pathways to replace them, and to generate endless content that says nothing new.

Gemma sees this in how technology and politics have fused. “The entanglement has never been stronger or more explicit.” Twelve months ago, Trump won his second term. At his inauguration in January, the front-row seats were taken by several technology leaders, happy to genuflect in return for deregulation. But what is the ultimate cost to humanity of such cosy relationships?

“These connections aren’t just more visible, they’re culturally embedded,” Gemma told me. “People know Musk’s name and face without understanding Tesla’s technology. Sam Altman is AI’s hype guru, but he’s also a political leader now. The two roles have merged.”

Against this backdrop, I spent two days at London’s Guildhall in early November for the Thinkers50 conference and gala. The theme was “regeneration”, exploring whether businesses can restore rather than extract.

Erinch Sahan from Doughnut Economics Action Lab offered concrete examples of businesses demonstrating that purpose and profit needn’t be mutually exclusive: Patagonia’s steward-ownership model, Fairphone’s modular, repairable “most ethical smartphone in the world”, and LUSH’s commitment to fair taxes and employee ownership.

Erinch’s – frankly heartwarming – list, of which this trio is a small fraction, contrasted sharply with Gemma’s observation about corporate futurism: “The critical question is whether it actually transforms organisations or simply attends to the fear of perma-crisis. You bring in consultants, do the exercises, and everyone feels better about uncertainty. But does anything actually change?”

Some forms of the practice can be transformative. Others primarily manage emotion without producing radical change. The difference lies in whether accountability mechanisms exist, whether outcomes are measured, tracked, and tied to consequences.

This brings me to Delhi-based Ruchi Gupta, whom I met over a video call a few weeks ago. She runs the not-for-profit Future of India Foundation and has built something that embodies precisely the kind of “moral ambition” Bregman describes, although she’d probably never use that phrase. 

India is home to the world’s largest youth population, with one in every five young people globally being Indian. Not many – and not enough – are afforded the skills and opportunities to thrive. Ruchi’s assessment of the current situation is unflinching. “It’s dire,” she said. “We have the world’s largest youth population, but insufficient jobs. The education system isn’t skilling them properly; even among the 27% who attend college, many graduate without marketable skills or professional socialisation. Young people will approach you and simply blurt things out without introducing themselves. They don’t have the sophistication or the networks.”

Notably, cities comprise just 3% of India’s land area but account for 60% of its GDP. That concentration tells you everything about how poorly opportunity is distributed.

Gupta’s flagship initiative, YouthPOWER, responds to this demographic reality by creating India’s first and only district-level youth opportunity and accountability platform, covering all 800 districts. The platform synthesises data from 21 government sources to generate the Y-POWER Score, a composite metric designed to make youth opportunity visible, comparable, and politically actionable.
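
The newsletter doesn’t spell out how the Y-POWER Score is calculated, but composite metrics of this kind are typically built by normalising each indicator across all districts and taking a weighted average. Here’s a minimal sketch of that general approach – the district names, indicators, and weights below are entirely hypothetical, not Y-POWER’s actual methodology:

```python
def minmax(values):
    """Scale a list of raw values to 0-100 across all districts."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [50.0 for _ in values]  # no spread: treat all as mid-range
    return [100.0 * (v - lo) / (hi - lo) for v in values]

def composite_scores(districts, weights):
    """districts: {name: {indicator: raw_value}}; weights should sum to 1."""
    names = list(districts)
    indicators = list(weights)
    # Normalise each indicator across districts, then weight and sum.
    scaled = {
        ind: minmax([districts[n][ind] for n in names])
        for ind in indicators
    }
    return {
        n: round(sum(weights[ind] * scaled[ind][i] for ind in indicators), 1)
        for i, n in enumerate(names)
    }

# Illustrative data only.
districts = {
    "District A": {"education": 62, "jobs": 40, "skilling": 55},
    "District B": {"education": 48, "jobs": 70, "skilling": 30},
    "District C": {"education": 80, "jobs": 55, "skilling": 65},
}
weights = {"education": 0.4, "jobs": 0.35, "skilling": 0.25}
scores = composite_scores(districts, weights)
```

The value of any such score lies less in the arithmetic than in what the newsletter goes on to describe: making the number comparable across districts and attaching it to someone accountable.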

“Approximately 85% of Indians continue to live in the district of their birth,” Ruchi explained. “That’s where they situate their identity; when young people introduce themselves to me, they say their name and their district. If you want to reach all young people and create genuine opportunities, it has to happen at the district level. Yet nothing existed to map opportunity at that granularity.”

What makes YouthPOWER remarkable, aside from the smart data aggregation, is the accountability mechanism. Each district is mapped to its local elected representative, the Member of Parliament who chairs the district oversight committee. The platform creates a feedback loop between outcomes and political responsibility.

“Data alone is insufficient; you need forward motion,” Ruchi said. “We mapped each district to its MP. The idea is to work directly with them, run pilots that demonstrate tangible improvement, then scale a proven playbook across all 543 constituencies. When outcomes are linked to specific politicians, accountability becomes real rather than rhetorical.”

Her background illuminates why this matters personally. Despite attending good schools in Delhi, her family’s circumstances meant she didn’t know about premier networking institutions. She went to an American university because it let her work while studying, not because it was the best fit. She applied only to Harvard Business School, having learnt of it from Erich Segal’s Love Story, without any work experience.

“Your background determines which opportunities you even know exist,” she told me. “It was only at McKinsey that I finally understood what a network does – the things that happen when you can simply pick up the phone and reach someone.” Thankfully, for India’s sake, Ruchi has found her purpose after time spent lost in the Bermuda Triangle of talent.

But the lack of opportunities and woeful political accountability are global challenges. Ruchi continued: “The right-wing surge you’re seeing in the UK and the US stems from the same problem: opportunity isn’t reaching people where they live. The normative framework is universal: education, skilling, and jobs on one side; empirical baselines and accountability mechanisms on the other. Link outcomes to elected representatives, and you create a feedback loop that drives improvement.”

So what distinguishes genuine technology for good from its performative alternative?

Gemma’s advice is to be explicit about your relationship with hype. “Treat it like your relationship with money. Some people find money distasteful but necessary; others strategise around it obsessively. Hype works the same way. It’s fundamentally about persuasion and attention, getting people to stop and listen. In an attention economy, recognising how you use hype is essential for making ethical and pragmatic decisions.”

She doesn’t believe we’ll stay in the age of badness forever. These things are cyclical. Responsible innovation will become fashionable again. But right now, critiquing hype lands very differently because the response is simply: “Well, we have to hype. How else do you get things done?”

Ruchi offers a different lens. The economist Joel Mokyr has argued that innovation is fundamentally about culture, not just human capital or resources. “Our greatness in India will depend on whether we can build that culture of innovation,” Ruchi said. “We can’t simply skill people as coders and rely on labour arbitrage. That’s the current model, and it’s insufficient. If we want to be a genuinely great country, we need to pivot towards something more ambitious.”

Three years into the ChatGPT era, we have a choice. We can continue funnelling talent into the Bermuda Triangle, using AI to amplify artificial importance. Or we can build something different. For instance, pioneering accountability systems like YouthPOWER that make opportunity visible, governance structures that demand transparency, and cultures that invite people to contribute to something larger than themselves.

Bregman ends his opening Reith Lecture with a simple observation: moral revolutions happen when people are asked to participate.

Perhaps the most important thing leaders can do in 2026 is not to buy more AI subscriptions or launch more pilots, but to ask: what ladder are we climbing, and who benefits when we reach the top?

The present

Image created on Midjourney

The other Tuesday, on the 8.20am train from Waterloo to Clapham Junction, heading to The Portfolio Collective’s Portfolio Career Festival at Battersea Arts Centre, I witnessed a small moment that captured everything wrong with how we’re approaching AI.

The guard announced himself over the tannoy. But it wasn’t his voice. It was a robotic, AI-generated monotone informing passengers he was in coach six, should anyone need him.

I sat there, genuinely unnerved. This was the Turing trap in action, using technology to imitate humans rather than augment them. The guard had every opportunity to show his character, his personality, perhaps a bit of warmth on a grey November morning. Instead, he’d outsourced the one thing that made him irreplaceable: his humanity.

Image created on Nano Banana (using the same prompt as the Midjourney one above)

Erik Brynjolfsson, the Stanford economist who coined the term in 2022, argues we consistently fall into this software snare. We design AI to mimic human capabilities rather than complement them. We play to our weaknesses – the things machines do better – instead of our strengths. The train guard’s voice was his strength. His ability to set a tone, to make passengers feel welcome, to be a human presence in a metal tube hurtling through South London. That’s precisely what got automated away.

It’s a pattern I’m seeing everywhere. By blindly grabbing AI and outsourcing tasks that reveal what makes us unique, we risk degrading human skills, eroding trust and connection, and – I say this without hyperbole – automating ourselves to extinction.

The timing of that train journey felt significant. I was heading to a festival entirely about human connection – networking, building personal brand, the importance of relationships for business and greater enrichment. And here was a live demonstration of everything working against that.

It was also Remembrance Day. As we remembered those who fought for our freedoms, not least during a two-minute silence (that felt beautifully calming – a collective, brief moment without looking at a screen), I was about to argue on stage that we’re sleepwalking into a different kind of surrender: the quiet handover of our professional autonomy to machines.

The debate – Unlocking Potential or Chasing Efficiency: AI’s Impact on Portfolio Work – was held before around 200 ambitious portfolio professionals. The question was straightforward: should we embrace AI as a tool to amplify our skills, creativity, and flow – or hand over entire workflows to autonomous agents and focus our attention elsewhere?

Pic credit: Afonso Pereira

You can guess which side I argued. The battle for humanity isn’t against machines, per se. It’s about knowing when to direct them and when to trust ourselves. It’s about recognising that the guard’s voice – warm, human, imperfect – was never a problem to be solved. It was a feature to be celebrated.

The audience wanted an honest conversation about navigating this transition thoughtfully. I hope we delivered. But stepping off stage, I couldn’t shake the irony: a festival dedicated to human connection, held on the day we honour those who preserved our freedoms, while outside these walls the evidence mounts that we’re trading professional agency for the illusion of efficiency.


A day later, I attended an IBM panel at the tech firm’s London headquarters. Their Race for ROI research contained some encouraging news: two-thirds of UK enterprises are experiencing significant AI-driven productivity improvements. But dig beneath the headline, and the picture darkens. Only 38% of UK organisations are prioritising inclusive AI upskilling opportunities. The productivity gains are flowing to those already advantaged. Everyone else is figuring it out on their own – 77% of those using AI at work are entirely self-taught.

Leon Butler, General Manager for IBM UK & Ireland, offered a metaphor that’s stayed with me. He compared opaque AI models to drinking from an opaque test tube.

“There’s liquid in it – that’s the training data – but you can’t see it. You pour your own data in, mix it, and you’re drinking something you don’t fully understand. By the time you make decisions, you need to know it’s clean and true.”

That demand for transparency connects directly to Ruchi’s work in India and Gemma’s critique of corporate futurism. Data for good requires good data. Accountability requires visibility. You can’t build systems that serve human flourishing if the foundations are murky, biased, or simply unknown.

As Sue Daley OBE, who leads techUK’s technology and innovation work, pointed out at the IBM event: “This will be the last generation of leaders who manage only humans. Going forward, we’ll be managing humans and machines together.”

That’s true. But the more important point is this: the leaders who manage that transition well will be the ones who understand that technology is a means, not an end. Efficiency without purpose is just faster emptiness.

The question of what we’re building, and for whom, surfaced differently at the Thinkers50 conference. Lynda Gratton, whom I’ve interviewed a couple of times about living and working well, opened with her weaving metaphor. We’re all creating the cloth of our lives, she argued, from productivity threads (mastering, knowing, cooperating) and nurturing threads (friendship, intimacy, calm, adventure).

Not only is this an elegant idea, but I love the warm embrace of messiness and complexity. Life doesn’t follow a clean pattern. Threads tangle. Designs shift. The point isn’t to optimise for a single outcome but to create something textured, resilient, human.

That messiness matters more now. My recent newsletters have explored the “anti-social century” – how advances in technology correlate with increased isolation. Being in that Guildhall room – surrounded by management thinkers from around the world, having conversations over coffee, making new connections – reminded me why physical presence still matters. You can’t weave your cloth alone. You need other people’s threads intersecting with yours.

Earlier in the month, an episode of The Switch, St James’s Place Financial Adviser Academy’s career change podcast, was released. Host Gee Foottit wanted to explore how professionals can navigate AI’s impact on their working lives – the same territory I cover in this newsletter, but focused specifically on career pivots.

We talked about the six Cs – communication, creativity, compassion, courage, collaboration, and curiosity – and why these human capabilities become more valuable, not less, as routine cognitive work gets automated. We discussed how to think about AI as a tool rather than a replacement, and why the people who thrive will be those who understand when to direct machines and when to trust themselves.

The conversations I’m having – with Gemma, Ruchi, the panellists at IBM, the debaters at Battersea – reinforce the central argument. Technology for good isn’t a slogan. It’s a practice. It requires intention, accountability, and a willingness to ask uncomfortable questions about who benefits and who gets left behind.

If you’re working on something that embodies that practice – whether it’s an accountability platform, a regenerative business model, or simply a team that’s figured out how to use AI without losing its humanity – I’d love to hear from you. These conversations are what fuel the newsletter.

The past

A month ago, I fired my one and only work colleague. It was the best decision for both of us. But the office still feels lonely and quiet without him.

Frank is a Jack Russell I’ve had since he was a puppy, almost five years ago. My daughter, only six months old when he came into our lives, grew up with him. Many people with whom I’ve had video calls will know Frank – especially if the doorbell went off during our meeting. He was the most loyal and loving dog, and for weeks after he left, I felt bereft. Suddenly, no one was nudging me in the middle of the afternoon to go for a much-needed, head-clearing stroll around the park.

Pic credit: Samer Moukarzel

So why did I rehome him?

As a Jack Russell, he is fiercely territorial. And where I live and work in south-east London, it’s busy. He was always on guard, trying to protect and serve me. The postman, Pieter, various delivery folk, and anyone else who came into the house felt his presence, let’s say. Countless letters were torn to shreds by his vicious teeth – so many that I had to install an external letterbox.

A couple of months ago, Frank snapped and drew blood while I was trying to retrieve a sock he had stolen and was guarding on the sofa. After multiple sessions with two different behaviourists, following previous incidents, he was already on a yellow card. If he bit me, who wouldn’t he bite? Red card.

The decision was made to find a new owner. I made a three-hour round trip to meet Frank’s new family, whose home is in the Norfolk countryside – much better suited to a Jack Russell’s temperament. After a walk together in a neutral venue, he travelled back to their house and apparently took 45 minutes to leave their car, snarling, unsure, and confused. It was heartbreaking to think he would never see me again.

But I knew Frank would be happy there. Later that day, I received videos of him dashing around fields. His new owners said they already loved him. A day later, they found the cartoon picture my daughter had drawn of Frank, saying she loved him, in the bag of stuff I’d handed them.

Now, almost a month on, the house is calmer. My daughter has stopped drawing pictures of Frank with tearful captions. And Frank? He’s made friends with Ralph, the black Labrador who shares his new home. The latest photo shows them sleeping side by side, exhausted from whatever countryside adventures Jack Russells and Labradors get up to together.

The proverb “if you love someone, set them free” helped ease the hurt. But there’s something else in this small domestic drama that connects to everything I’ve been writing about this month.

Bregman asks what ladder we’re climbing. Gemma describes an age where doing the wrong thing has become culturally acceptable. Ruchi builds systems that create accountability where none existed. And here I was, facing a much smaller question: what do I owe this dog?

The easy path was to keep him. To manage the risk, install more barriers, and hope for the best. The more challenging path was to acknowledge that the situation wasn’t working – not for him, not for us – and to make a change that felt like failure but was actually responsibility.

Moral ambition doesn’t only show up in accountability platforms and regenerative business models. Sometimes it’s in the quiet decisions: the ones that cost you something, that nobody else sees, that you make because it’s right rather than because it’s easy.

Frank needed space to run, another dog to play with, and owners who could give him the environment his breed demands. I couldn’t provide that. Pretending otherwise would have been a disservice to him and a risk to my family.

The age of badness that Gemma describes isn’t just about billionaires and politicians. It’s also about the small surrenders we make every day: the moments we choose convenience over responsibility, comfort over honesty, the path of least resistance over the path that’s actually right.

I don’t want to overstate this. Rehoming a dog is not the same as building YouthPOWER or challenging tax-avoiding elites at Davos. But the muscle is the same. The willingness to ask uncomfortable questions. The courage to act on the answers.

My daughter’s drawings have stopped. The house is quieter. And somewhere in Norfolk, Frank is sleeping on a Labrador, finally at peace.

Sometimes the most important thing you can do is recognise when you’re climbing the wrong ladder – and have the grace to climb down.

Statistics of the month

🛒 Cyber Monday breaks records
Today marks the 20th annual Cyber Monday, projected to hit $14.2 billion in US sales – surpassing last year’s record. Peak spending occurs between 8pm and 10pm, when consumers spend roughly $15.8 million per minute. A reminder that convenience still trumps almost everything. (National Retail Federation)

🎯 Judgment holds, execution collapses
US marketing job postings dropped 8% overall in 2025, but the divide is stark: writer roles fell 28%, computer graphic artists dropped 33%, while creative directors held steady. The pattern likely mirrors the UK – the market pays for strategic judgment; it’s automating production. (Bloomberry)

🛡️ Cybersecurity complacency exposed
More than two in five (43%) UK organisations believe their cybersecurity strategy requires little to no improvement – yet 71% have paid a ransom in the past 12 months, averaging £1.05 million per payment. (Cohesity)

💸 Cyber insurance claims triple
UK cyber insurance claims hit at least £197 million in 2024, up from £60 million the previous year – a stark reminder that threats are evolving faster than our defences. (Association of British Insurers)

🤖 UK leads Europe in AI optimism
Some 88% of UK IT professionals want more automation in their day-to-day work, and only 10% feel AI threatens their role – the lowest of any European country surveyed. Yet 26% say they need better AI training to keep pace. (TOPdesk)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 21)


TL;DR: September’s Go Flux Yourself examines the fundamentals of AI success: invest £10 in people for every £1 on technology, build learning velocity into your culture, and show up as a learner yourself. England’s women’s rugby team went from amateurs juggling jobs to world champions through one thing: investing in people.

Image created on Midjourney

The future

“Some people are on [ChatGPT] too much. There are young people who just say ‘I can’t make any decision in my life without telling chat everything that’s going on. It knows me, it knows my friends, I’m going to do whatever it says.’ That feels really bad to me … Even if ChatGPT gives way better advice than any human therapist, there is something about collectively deciding we’re going to live our lives the way that the AI tells us feels bad and dangerous.”

The (unusually long) opening quotation for this month’s Go Flux Yourself comes – not for the first time – from the CEO of OpenAI, Sam Altman, arguably the most influential technology leader right now. How will future history books – if there is anyone with a pulse around to write them – judge the man who allegedly has “no one knows what happens next” as a sign in his office?

The above words come from an interview a few weeks ago, and they smack of someone deeply alarmed by the power he has unleashed. When Altman starts worrying aloud about his own creation, you’d think more people would pay attention. But here we are: companies pour millions into AI while barely investing in the people who’ll actually use it.

We’ve got this completely backwards. Organisations are treating AI as a technology problem when it’s fundamentally a people problem. Companies are spending £1 on AI technology when they should spend an additional £10 on people, as Kian Katanforoosh, CEO and Founder of Workera, told me over coffee in Soho a couple of weeks ago.

We discussed the much-quoted MIT research, published a few weeks ago (read the main points without signing up to download the paper in this Forbes piece), which shows that 95% of organisations are failing to achieve a return on investment from their generative AI pilots. Granted, the sample size was only 300 organisations, but that’s a pattern you can’t ignore.

Last month’s newsletter considered the plight of school leavers and university students in a world where graduate jobs have dropped by almost two-thirds in the UK since 2022, and entry-level hiring is down 43% in the US and 67% in the UK since Altman launched ChatGPT in November 2022.

It was easily the most read of all 20 editions of Go Flux Yourself. Why? I think it captured many people’s concerns about how damaging blindly following the AI path could be for human flourishing. If young people are unable to gain employment, what happens to the talent pipeline, and where will tomorrow’s leaders come from? The maths doesn’t work. The logic doesn’t hold. And the consequences are starting to show.

To continue this critically important conversation, I met (keen Arsenal fan) Kian in central London, as he was over from his Silicon Valley HQ. Alongside running Workera – an AI-powered skills intelligence platform that helps Fortune 500 and Global 2000 organisations assess, develop, and manage innovation skills in areas such as AI, data science, software engineering, cloud computing, and cybersecurity – he is an adjunct lecturer in computer science at Stanford University.

“Companies have bought a huge load of technology,” he said. “And now they’re starting to realise that it can’t work without people.”

That’s the pattern repeated everywhere. Buy the tools. Deploy the systems. Wonder why nothing changes.

This is wrongheaded. We’ve treated AI like it’s just another software rollout when it’s closer to teaching an entire workforce a new language. And business leaders have to invest significantly more in their current and future human workforce to maximise the (good) potential of AI and adjacent technologies, or everyone fails. Updated leadership thinking is paramount to success.

McKinsey used to advocate spending $1 (or £1) on technology for every $1 on people. Then, last year, the firm revised the ratio: $1 on technology, $3 on people. “Our experience has shown that a good rule of thumb for managing gen AI costs is that for every $1 spent on developing a model, you need to spend about $3 for change management. (By way of comparison, for digital solutions, the ratio has tended to be closer to $1 for development to $1 for change management.)”

Kian thinks this is still miles off what should be spent on people. “I think it’s probably £1 in technology, £10 in people,” he told me. “Because when you look at AI’s potential productivity enhancements on people, even £10 in people is nothing.”
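For illustration, it is easy to put the competing rules of thumb side by side. A quick Python sketch follows – the ratios are those quoted above, but the £2m technology budget is a hypothetical figure of mine, not Workera or McKinsey data:

```python
# Implied people-investment under each rule of thumb,
# applied to a hypothetical £2m technology budget.
RATIOS = {
    "Old McKinsey rule (1:1)": 1,
    "Revised McKinsey rule (1:3)": 3,
    "Kian's estimate (1:10)": 10,
}

tech_spend = 2_000_000  # hypothetical £2m on tools and models

for label, multiplier in RATIOS.items():
    people_spend = tech_spend * multiplier
    print(f"{label}: £{tech_spend:,} on technology -> £{people_spend:,} on people")
```

Under Kian’s ratio, that hypothetical £2m technology budget implies £20m of investment in people – which is precisely why he argues most organisations are an order of magnitude short.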

That’s not hyperbole. That’s arithmetic based on what he sees daily at Workera. Companies contact him, saying they’ve purchased 25 different AI agents and software packages, but employee usage starts strong for a week and then collapses. What’s going on? The answer is depressingly predictable.

“Your people don’t even know how to use that technology. They don’t even have the 101 skills to understand how to use it. And even when they try, they’re putting you (the organisation) at risk because they don’t even know what they’re uploading to these tools.”

One of the main things Workera offers is an “AI-readiness test”, and the findings from Kian’s team reveal a worrying truth: right now, outside tech companies, only 28 out of 100 people are AI-ready. That’s Workera’s number, based on assessing thousands of employees in the US and elsewhere. In tech companies, the readiness rate is over 90%, which is perhaps unsurprising. The gap between tech-industry businesses and everyone else is already a chasm, and it is growing.

But here’s where it gets really interesting. Being AI-ready today means nothing if your learning velocity is too slow. The technology changes every month. New capabilities arrive. Old approaches become obsolete. Google just released Veo, which means anyone can become a videographer. Next month, there’ll be something else.

“You can be ahead today,” Kian said. “If your learning velocity is low, you’ll be behind in five years. That’s what matters at the end of the day.”

Learning velocity. I liked that phrase. It captures something essential about this moment: that standing still is the same as moving backwards, that capability without adaptability is a temporary advantage at best.

However, according to Kian, the UK and Europe are already starting from behind, as his data shows a stark geographic divide in AI readiness. American companies – even outside pure tech firms – are moving faster on training and adoption. European organisations are more cautious, more bound by regulatory complexity, and more focused on risk mitigation than experimentation.

“The US has a culture of moving fast and breaking things,” Kian said. “Europe wants to get it right the first time. That might sound sensible, but in AI, you learn by doing. You can’t wait for perfect conditions.”

He pointed to the EU AI Act as emblematic of the different approaches. Comprehensive regulation arrived before widespread adoption. In the US, it’s the reverse: adoption at scale, regulation playing catch-up. Neither approach is perfect, but one creates momentum while the other creates hesitation.

The danger isn’t just that European companies fall behind American competitors. It’s that European workers become less AI literate, less adaptable, and less valuable in a global labour market increasingly defined by technological fluency. The skills gap becomes a prosperity gap.

“If you’re a European company and you’re waiting for clarity before you invest in your people’s AI skills, you’ve already lost,” Kian said. “Because by the time you have clarity, the game has moved on.”

Fresh research backs this up. (And a note on the need for the latest data – as a client told me a few days ago, data is like milk: it has a short use-by date. I love that metaphor.) A new RAND Corporation study examining AI adoption across healthcare, financial services, climate and energy, and transportation found something crucial: identical AI technologies achieve wildly different results depending on the sector. A chatbot in banking operates at a different capability level than the same technology in healthcare, not because the tech differs but because the context, regulatory environment, and implementation constraints differ.

RAND proposes five levels of AI capability.

Level 1 covers basic language understanding and task completion: chatbots, simple diagnostic tools, and fraud detection. Humanity has achieved this.

Level 2 involves enhanced reasoning and problem-solving across diverse domains: systems that analyse complex scenarios and draw inferences. We’re emerging into this now.

Level 3 is sustained autonomous operation in complex environments, where systems make sequential decisions over time without human intervention. That’s mainly in the future, although Waymo’s robotaxis and some grid management pilots are testing it.

Levels 4 and 5 – creative innovation and full organisational replication – remain theoretical.

Here’s what matters: most industries currently operate at Levels 1 and 2. Healthcare lags behind despite having sophisticated imaging AI, as regulatory approval processes and evidence requirements slow down adoption. Finance advances faster because decades of algorithmic trading have created infrastructure and acceptance. Climate and energy sit in the middle, promising huge optimisation gains but constrained by infrastructure build times and regulatory uncertainty. Transportation is inching toward Level 3 autonomy while grappling with ethical dilemmas about life-or-death decisions.

The framework reveals why throwing technology at problems doesn’t work. You can’t skip levels. You can’t buy Level 3 capability and expect it to function in an organisation operating at Level 1 readiness. The gap between what the technology can do and what your people can do with it determines the outcome.
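To make that point concrete, the framework can be treated as data. Here is a minimal Python sketch – the level summaries follow the descriptions above, but the gap-check function is my own illustration, not part of RAND’s methodology:

```python
# RAND's five capability levels, as summarised above.
RAND_LEVELS = {
    1: "Basic language understanding and task completion",
    2: "Enhanced reasoning and problem-solving across domains",
    3: "Sustained autonomous operation in complex environments",
    4: "Creative innovation",
    5: "Full organisational replication",
}

def deployment_gap(tech_level: int, org_readiness_level: int) -> str:
    """Flag the mismatch when purchased capability outruns organisational readiness."""
    if tech_level > org_readiness_level:
        return (f"Gap of {tech_level - org_readiness_level} level(s): "
                f"'{RAND_LEVELS[tech_level]}' tools in a "
                f"'{RAND_LEVELS[org_readiness_level]}' organisation.")
    return "Capability and readiness are aligned."

# e.g. buying Level 3 autonomy into a Level 1 organisation
print(deployment_gap(3, 1))
```

The function simply surfaces the mismatch; in practice, closing the gap means raising organisational readiness, not buying more capability.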

RAND identified six challenges that cut across every sector: workforce transformation, privacy protection, algorithmic bias, transparency and oversight, disproportionate impacts on smaller organisations, and energy consumption. Small institutions serving rural and low-income areas face particular difficulties. They lack resources and technical expertise. The benefits of AI concentrate among major players, while vulnerabilities accumulate at the edges.

For instance, the algorithmic bias problem is insidious. Even without explicitly considering demographic characteristics, AI systems exhibit biases. Financial algorithms can devalue real estate in vulnerable areas. Climate models might overlook impacts on marginalised communities. The bias creeps in through training data, through proxy variables, through optimisation functions that encode existing inequalities.

Additionally, and as I’ve written about previously, the energy demands are staggering. AI’s relationship with climate change cuts both ways. Yes, it optimises grids and accelerates the development of green technology. However, if AI scales productivity across the economy, it also scales emissions, unless we intentionally direct applications toward efficiency gains and invest heavily in clean energy infrastructure. The transition from search-based AI to generative AI has intensified computational requirements. Some experts argue potential efficiency gains could outweigh AI’s carbon footprint, but only if we pursue those gains deliberately through measured policy and investment rather than leaving it to market forces.

RAND’s conclusion aligns with everything Kian told me: coordination is essential, both domestically and internationally. Preserve optionality through pilot projects and modular systems. Employ systematic risk management frameworks. Provide targeted support to smaller institutions. Most importantly, invest in people at a ratio that reflects the actual returns.

The arithmetic remains clear across every analysis: returns on investing in people dwarf the costs. But we’re not doing it.

How, though, do you build learning velocity into an organisation? Kian had clear thoughts on this. Yes, you need to dedicate time to learning. Ten per cent of work time isn’t unreasonable. But the single most powerful thing a leader can do is simpler than that: lead by example.

“Show up as a learner,” he said. “If your manager, or your manager’s manager, or your manager’s manager’s manager is literally showing you how they learn and how much time they spend learning and how they create time for learning, that is already enough to create a mindset shift in the employee base.”

Normalising learning, then, is vital. That shift in culture matters more than any training programme you can buy off the shelf.

We talked about Kian’s own learning habits. Every morning starts with reading. He’s curated an X feed of people he trusts who aren’t talking nonsense, scans it quickly, and bookmarks what he wants to read more deeply at night. He tracks the top AI conferences and skims the papers they accept – thousands of them – looking at figures and titles to get the gist. Then he picks 10% to read more carefully, and maybe 3% to spend an entire day on. “You need to have that structure or else it just becomes overwhelming,” he said.

The alternative is already playing out, and it’s grim. Some people – particularly young people – are on ChatGPT too much, as OpenAI CEO Sam Altman has admitted. They can’t make any decision without consulting the chatbot. It knows them, knows their friends, knows everything. They’ll do whatever it says.

Last month, Mustafa Suleyman, Co-Founder of DeepMind and now in charge of AI at Microsoft, published an extended essay about what he calls “seemingly conscious AI”: systems that exhibit all the external markers of consciousness without possessing it. He thinks we’re two to three years away from having the capability to build such systems using technology that already exists.

“My central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship,” he wrote.

Researchers working on consciousness tell him they’re being inundated with queries from people asking whether their AI is conscious, whether it’s acceptable to love it, and what it means if it is. The trickle has become a flood.

Tens of thousands of users already believe their AI is God. Others have fallen in love with their chatbots. Indeed, a Harvard Business Review survey of 6,000 regular AI users – the results of which were published in April (so how stale is the milk?) – found that companionship and therapy were the most common use cases.

This isn’t speculation about a distant future. This is happening now. And we’re building the infrastructure – the long memories, the empathetic personalities, the claims of subjective experience – that will make these illusions even more convincing.

Geoffrey Hinton, the so-called godfather of AI, who won the Nobel Prize last year, told the Financial Times in a fascinating lunch profile published in early September, that “rich people are going to use AI to replace workers. It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer. That’s not AI’s fault, that is the capitalist system.”

Dark, but there’s something clarifying about his honesty. The decisions we make now about how to implement AI, whether to invest in people or just technology, whether to prioritise adoption or understanding – these will shape what comes next.

The Adaptavist Group’s latest report, published last week, surveyed 900 professionals responsible for introducing AI across the UK, US, Canada and Germany. It found a divide: 42% believe their company’s AI claims are over-inflated. These “AI sceptics” work in environments where 65% believe their company’s AI stance puts customers at risk, 67% worry that AI adoption poses a threat to jobs, and 59% report having no formal AI training.

By contrast, AI leaders in companies that communicated AI’s value honestly reported far greater benefits: 58% say AI has improved work quality, 61% report time savings, and 48% note increased output. Only 37% worry about ethics issues, compared with 74% in over-hyped environments.

The difference? Training. Support. Honest communication. Investing in people rather than just technology.

Companies are spending between £1 million and £10 million implementing AI. Some are spending over £10 million. But 59% aren’t providing basic training. It’s like buying everyone in your company a Formula One car and being shocked when most people crash it.

“The next year is all going to be about adoption, skills, and doing right by employees,” Kian said. “Companies that do it well are going to see better adoption and more productivity. Those who don’t? They’re going to get hate from their employees. Like literally. Employees will be really mad at companies for not being human at all.”

That word – human – kept coming up in our conversation. In a world increasingly mediated by AI, being human becomes both more difficult and more essential. The companies that remember this, that invest in their people’s ability to learn, adapt, and think critically, will thrive. The ones that don’t will wonder why their expensive AI implementations gather digital dust.

The present

Image created on Midjourney

On Thursday (October 2), I’ll be at DTX London moderating a main-stage session asking: is your workforce ready for what’s next? The questions we’ll tackle include how organisations can create inclusive, agile workplaces that foster belonging and productivity, how AI will change entry-level positions, and crucially, how we safeguard critical thinking in an AI-driven world. These are urgent, practical challenges that every organisation faces right now. (I’ll also be recording episode three of DTX Unplugged, the new podcast series I co-host, looking at business evolution – listen to the series so far here.)

Later in October, on the first day of the inaugural Data Decoded in Manchester (October 21-22), I’ll moderate another session on a related topic to the above: what leadership looks like in a world of AI, because leadership must evolve. The ethical responsibilities are staggering. The pace of change is relentless. And the old playbooks simply don’t work.

I’ve also started writing the Go Flux Yourself book (any advice on self-publishing welcome). More on that soon. The conversations I’m having, the research I’m doing, the patterns I’m seeing all point towards something bigger than monthly newsletters can capture. We’re living through a genuine transformation, and I’m in a unique and privileged position to document what it feels like from the inside rather than just analysing it from the outside.

The responses to last month’s newsletter on graduate jobs and universities showed me how hungry people are for honest conversations about what’s really happening, on the ground and behind the numbers. Expect more clear-eyed analysis of where we are and what we might do about it. And please do reach out if you think you can contribute to this ongoing discussion, as I’m open to featuring interviewees in the newsletter (and, in time, the book).

The past

Almost exactly two years ago, I took my car for its annual service at a garage at Elmers End, South East London. While I waited, I wandered down the modest high street and discovered a Turkish café. I ordered a coffee, a lovely breakfast (featuring hot, gooey halloumi cheese topped with dripping honey and sesame seeds) and, on a whim, had my tarot cards read by a female reader at the table opposite. We talked for 20 minutes, and it changed my life (see more on this here, in Go Flux Yourself No.2).

A couple of weeks ago, I returned for this year’s car service. The café is boarded up now, alas. A blackboard dumped outside showed the old WiFi password: kate4cakes. Another casualty of our changing times, a small loss in the great reshuffling of how we live, work, and connect with each other. With autumn upon us, the natural cycle of change and renewal is fresh in the mind. Still, it saddened me to ponder what the genial Turkish owner and his family were doing instead of running the café.

Autumn has indeed arrived. Leaves are twisting from branches and falling to create a multicoloured carpet. But what season are we in, really? What cycle of change?

I thought about that question as I watched England’s women’s rugby team absolutely demolish Canada 33-13 in the World Cup final at Twickenham last Saturday, with almost 82,000 people in attendance – a world record for a women’s rugby match. The Red Roses had won all 33 games since their last World Cup defeat, the final against New Zealand’s Black Ferns.

Being put through my paces with Katy Mclean (© Tina Hillier)

In July 2014, I trained with the England women’s squad for pieces I wrote for the Daily Telegraph (“The England women’s rugby team are tougher than you’ll ever be“) and the Financial Times (“FT Masterclass: Rugby training with Katy Mclean” (now Katy Daley-McLean)). They weren’t professional then. They juggled jobs with their international commitments. Captain Katy Daley-McLean was a primary school teacher in Sunderland. The squad included policewomen, teachers, and a vet. They spent every spare moment either training or playing rugby.

I arrived at Surrey Sports Park in Guildford with what I now recognise was an embarrassing air of superiority. I’m bigger, stronger, faster, I thought. I’d played rugby at university. Surely I could keep up with these amateur athletes.

The England women’s team knocked such idiotic thoughts out of my head within minutes.

We started with touch rugby, which was gentle enough. Then came sprints. I kept pace with the wingers and fullbacks for the first four bursts, then tailed off. “Tactically preserving my energy,” I told myself.

Then strength and conditioning coach Stuart Pickering barked: “Malcolms next.”

Katy winked at me. “Just make sure you keep your head up and your hands on your hips. If you show signs of tiredness, we will all have to do it again … so don’t.”

Malcolms – a rugby league drill invented by the evidently sadistic Malcolm Reilly – involve lying face down with your chin on the halfway line, pushing up, running backwards to the 10-metre line, going down flat again, pushing up, sprinting to the far 10-metre line. Six times.

By the fourth repetition, I was blowing hard. By the final one, I was last by some distance, legs burning, expelling deeply unattractive noises of effort. The women, heads turned to watch me complete the set, cheered encouragement rather than jeered. “Suck it up, Ollie – imagine it’s the last five minutes of the World Cup final,” full-back Danielle Waterman shouted.

Then came the circuit training. Farmers’ lifts. Weights on ropes. The plough. Downing stand-up tackle bags. Hit and roll. On and on we moved, and as my energy levels dipped uncomfortably low, it became a delirious blur.

The coup de grâce was wrestling the ball off 5ft 6in fly-half Daley-McLean. I gripped as hard as I could. She stole it from me within five seconds. Completely zapped, I couldn’t wrest it back. Not to save my life.

Emasculated and humiliated, I feigned willingness to take part in the 40-minute game that followed. One of the coaches tugged me back. “I don’t think you should do this mate … you might actually get hurt.”

I’d learned my lesson. These women were tougher, fitter, and more disciplined than I’d ever be.

That was 2014. The England women, who went on to win the World Cup in France that year, didn’t have professional contracts. They squeezed their training around their jobs. Yet they were world-class athletes who’d previously reached three consecutive World Cup finals, losing each time to New Zealand.

Then something changed. The Rugby Football Union invested heavily. The women’s team went professional. They now have the same resources, support systems, and infrastructure as the men’s team.

The results speak for themselves. Thirty-three consecutive victories. A World Cup trophy, after two more final defeats to New Zealand. Record crowds. A team that doesn’t just compete but dominates.

This is what happens when you invest in people, providing them with the training, resources, time, and support they need to develop their skills. You treat them not as amateur enthusiasts fitting excellence around the edges of their lives, but as professionals whose craft deserves proper investment.

The parallels to AI adoption are striking. Right now, most organisations are treating their workers like those 2014 England rugby players and expecting them to master AI in their spare time. To become proficient without proper training. To deliver world-class results with amateur-level support.

It’s not going to work.

The England women didn’t win that World Cup through superior technology. They won it through superior preparation. Through investment in people, in training, and in creating conditions for excellence to flourish.

That’s the lesson for every organisation grappling with AI. Technology is cheap. Talent is everything. Training matters more than tools. And if you want your people to keep pace with change, you need to create a culture where learning isn’t a luxury but the whole point.

As Kian put it: “We need to move from prototyping to production AI. And you need 10 times more skills to put AI in production reliably than you need to put a demo out.”

Ten times the skills, and £10 spent on people for every £1 on technology. The arithmetic isn’t complicated. The will to act on it is what’s missing.

Statistics of the month

📈 Sick days surge
Employees took an average of 9.4 days off sick in 2024, compared with 5.8 days before the pandemic in 2019 and 7.8 days just two years ago. (CIPD)

📱 Daily exposure
Children are exposed to around 2,000 social media posts per day. Over three-quarters (77%) say it harms their physical or emotional health. (Sway.ly via The Guardian)

📉 UK leadership crisis
UK workers’ confidence in their company leaders has plummeted from 77% to 67% between 2022 and 2025 – well below the global average of 73% – while motivation fell from 66% to just 60%. (Culture Amp)

🎯 L&D budget reality
Despite fears that AI could replace their roles entirely (43% of L&D leaders believe this), learning and development budgets are growing: 70% of UK organisations and 84% in Australia/New Zealand increased L&D spending in 2025. (LearnUpon)

🔒 Email remains the weakest link
83% of UK IT leaders have faced an email-related security incident, with government bodies hit hardest at 92%. Yet email still carries over half (52%) of all organisational communication. (Exclaimer UK Business Email Report)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, pass it on! Please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 11)

TL;DR: November’s Go Flux Yourself channels the wisdom of Marcus Aurelius to navigate the AI revolution, examining Nvidia’s bold vision for an AI-dominated workforce, unpacking Australia’s landmark social media ban for under-16s, and finding timeless lessons in a school friend’s recovery story about the importance of thoughtful, measured progress …

Image created on Midjourney with the prompt “a dismayed looking Roman emperor Marcus Aurelius looking over a world in which AI drone and scary warfare dominates in the style of a Renaissance painting”

The future

“The happiness of your life depends upon the quality of your thoughts.” 

These sage – and neatly optimistic – words from Marcus Aurelius, the great Roman emperor and Stoic philosopher, feel especially pertinent as we scan 2025’s technological horizon. 

Aurelius, who died in AD 180 and became known as the last of the Five Good Emperors, exemplified a philosophy that teaches us to focus solely on what we can control and to accept what we cannot. His wisdom is valuable for communities navigating an AI-driven future while still suffering a psychological form of long COVID – the collective trauma of the pandemic – alongside deep uncertainty and general mistrust, as geopolitical tensions and global temperatures rise.

The final emperor in the relatively peaceful Pax Romana era, Aurelius seemed a fitting person to quote this month for another reason: I’m flying to the Italian capital this coming week, to cover CSO 360, a security conference that allows attendees to take a peek behind the curtain – although I’m worried about what I may see. 

One of the most eye-popping lines from last year’s conference in Berlin was that there was a 50-50 chance that World War III would be ignited in 2024. One could argue that while there has not been a Franz Ferdinand moment, the key players are manoeuvring their pieces on the board. Expect more on this cheery subject – ho, ho, ho! – in the last newsletter of the year, on December 31.

Meanwhile, as technological change accelerates and AI agents increasingly populate our workplaces (“agentic AI” is the latest buzzword, in case you haven’t heard), the quality of our thinking about their integration – something we can control – becomes paramount.

In mid-October, Jensen Huang, Co-Founder and CEO of tech giant Nvidia – which specialises in graphics processing units (GPUs) and AI computing – revealed on the BG2 podcast that he plans to shape his workforce so that it is one-third human and two-thirds AI agents.

“Nvidia has 32,000 employees today,” Huang stated, but he hopes the organisation will have 50,000 employees and “100 million AI assistants in every single group”. Given my focus on human-work evolution, I initially found this concept shocking, even appalling. But perhaps I was too hasty in reaching a conclusion.

When, a couple of weeks ago, I interviewed Daniel Vassilev, Co-Founder and CEO of Relevance AI, which builds virtual workforces of AI agents that act as a seamless extension of human teams, his perspective on Huang’s vision was refreshingly nuanced. He provided an enlightening analogy about throwing pebbles into the sea.

“Most of us limit our thinking,” the San Francisco-based Australian entrepreneur said. “It’s like having ten pebbles to throw into the sea. We focus on making those pebbles bigger or flatter, so they’ll go further. But we often forget to consider whether our efforts might actually give us 20, 30, or even 50 pebbles to throw.”

His point cuts to the heart of the AI workforce debate: rather than simply replacing human workers, AI might expand our collective capabilities and create new opportunities. “I’ve always found it’s a safe bet that if you give people the ability to do more, they will do more,” Vassilev observed. “They won’t do less just because they can.”

This positive yet grounded perspective was echoed in my conversation with Five9’s Steve Blood, who shared fascinating insights about the evolution of workplace dynamics, specifically in the customer experience space, when I was in Barcelona in the middle of the month reporting on his company’s CX Summit. 

Blood, VP of Market Intelligence at Five9, predicts a “unified employee” future where AI enables workers to handle increasingly diverse responsibilities across traditional departmental boundaries. Rather than wholesale replacement, he envisions a workforce augmented by AI, where employees become more valuable by leveraging technology to handle multiple functions.

(As an aside, Blood predicts the customer experience landscape of 2030 will be radically different, with machine customers evolving through three distinct phases. Starting with today’s ‘bound’ customers (like printers ordering their own ink cartridges exclusively from manufacturers), progressing to ‘adaptable’ customers (AI systems making purchases based on user preferences from multiple suppliers), and ultimately reaching ‘autonomous’ customers, where digital twins make entirely independent decisions based on their understanding of our preferences and history.)

The quality of our thinking about AI integration becomes especially crucial when considering what SailPoint’s CEO Mark McClain described to me this month as the “three V’s”: volume, variety, and velocity. These parameters no longer apply to data alone; they’re increasingly relevant to the AI agents themselves. As McClain explained: “We’ve got a higher volume of identities all the time. We’ve got more variety of identities, because of AI. And then you’ve certainly got a velocity problem here where it’s just exploding.” 

This explosion of AI capabilities brings us to a critical juncture. While Nvidia’s Huang envisions AI employees as being managed much like their human counterparts, assigned tasks, and engaged in dialogues, the reality might be more nuanced – and handling security permissions will need much work, which is perhaps something business leaders have not thought about enough.

Indeed, AI optimism must be tempered with practical considerations. The cybersecurity experts I’ve met recently have all emphasised the need for robust governance frameworks and clear accountability structures. 

Looking ahead to next year, organisations must develop flexible frameworks that can evolve as rapidly as AI capabilities. The “second mouse gets the cheese” approach – waiting for others to make mistakes first, as panellist Sue Turner, Founding Director of AI Governance, explained during a Kolekti roundtable on generative AI’s progress, held on ChatGPT’s second birthday, November 28 – may no longer be viable in an environment where change is constant and competition fierce.

Successful organisations will emphasise complementary relationships between human and AI workers, requiring a fundamental rethink of traditional organisational structures and job descriptions.

The management of AI agent identities and access rights will become as crucial as managing human employees’ credentials, presenting both technical and philosophical challenges. Workplace culture must embrace what Blood calls “unified employees” – workers who can leverage AI to operate across traditional departmental boundaries. Perhaps most importantly, organisations must cultivate what Marcus Aurelius would recognise as quality of thought: the ability to think clearly and strategically about AI integration while maintaining human values and ethical considerations.

As we move toward 2025, the question isn’t simply whether AI agents will become standard members of the workforce – they already are. The real question is how we can ensure this integration enhances rather than diminishes human potential. The answer lies not in the technology itself, but in the quality of our thoughts about using it.

Organisations that strike and maintain this balance – embracing AI’s potential while preserving human agency and ethical considerations – will likely emerge as leaders in the new landscape. Ultimately, the quality of our thoughts about AI integration today will determine the happiness of our professional lives tomorrow.

The present

November’s news perfectly illustrates why we need to maintain quality of thought when adopting new technologies. Australia’s world-first decision to ban social media for under-16s, a bill passed a couple of days ago, marks a watershed moment in how we think about digital technology’s impact on society – and offers valuable lessons as we rush headlong into the AI revolution.

The Australian bill reflects a growing awareness of social media’s harmful effects on young minds. It’s a stance increasingly supported by data: new Financial Times polling reveals that almost half of British adults favour a total ban on smartphones in schools, while 71% support collecting phones in classroom baskets.

The timing couldn’t be more critical. Ofcom’s disturbing April study found nearly a quarter of British children aged between five and seven owned a smartphone, with many using social media apps despite being well below the minimum age requirement of 13. I pointed out in August’s Go Flux Yourself that EE recommended that children under 11 shouldn’t have smartphones. Meanwhile, University of Oxford researchers have identified a “linear relationship” between social media use and deteriorating mental health among teenagers.

Social psychologist Jonathan Haidt’s assertion in The Anxious Generation that smart devices have “rewired childhood” feels particularly apposite as we consider AI’s potential impact. If we’ve learned anything from social media’s unfettered growth, it’s that we must think carefully about technological integration before, not after, widespread adoption.

Interestingly, we’re seeing signs of a cultural awakening to technology’s double-edged nature. Collins Dictionary’s word of the year shortlist included “brainrot” – defined as an inability to think clearly due to excessive consumption of low-quality online content. While “brat” claimed the top spot – a word redefined by singer Charli XCX as someone who “has a breakdown, but kind of like parties through it” – the inclusion of “brainrot” speaks volumes about our growing awareness of digital overconsumption’s cognitive costs.

This awareness is manifesting in unexpected ways. A heartening trend has emerged on social media platforms, with users pushing back against online negativity by expressing gratitude for life’s mundane aspects. Posts celebrating “the privilege of doing household chores” or “the privilege of feeling bloated from overeating” represent a collective yearning for authentic, unfiltered experiences in an increasingly synthetic world.

In the workplace, we’re witnessing a similar recalibration regarding AI adoption. The latest Slack Workforce Index reveals a fascinating shift: for the first time since ChatGPT’s arrival, almost exactly two years ago, adoption rates have plateaued in France and the United States, while global excitement about AI has dropped six percentage points.

This hesitation isn’t necessarily negative – it might indicate a more thoughtful approach to AI integration. Nearly half of workers report discomfort admitting to managers that they use AI for common workplace tasks, citing concerns about appearing less competent or lazy. More tellingly, while employees and executives alike want AI to free up time for meaningful work, many fear it will actually increase their workload with “busy work”.

This gap between AI urgency and adoption reflects a deeper tension in the workplace. While organisations push for AI integration, employees express fundamental concerns about using these tools.

This more measured approach echoes broader societal concerns about technological integration. Just as we’re reconsidering social media’s role in young people’s lives, organisations are showing due caution about AI’s workplace implementation. The difference this time? We might actually be thinking before we leap.

Some companies are already demonstrating this more thoughtful approach. Global bank HSBC recently announced a comprehensive AI governance framework that includes regular “ethical audits” of their AI systems. Meanwhile, pharmaceutical giant AstraZeneca has implemented what they call “AI pause points” – mandatory reflection periods before deploying new AI tools.

The quality of our thoughts about these changes today will indeed shape the quality of our lives tomorrow. That’s the most important lesson from this month’s developments: in an age of AI, natural wisdom matters more than ever.

These concerns aren’t merely theoretical. Microsoft’s Copilot AI spectacularly demonstrated the pitfalls of rushing to deploy AI solutions this month. The product, designed to enhance workplace productivity by accessing internal company data, became embroiled in privacy breaches, with users reportedly accessing colleagues’ salary details and sensitive HR files. 

When less than 4% of IT leaders surveyed by Gartner said Copilot offered significant value, and Salesforce CEO Marc Benioff compared it to Clippy – Office 97’s notoriously unhelpful cartoon assistant – it highlighted a crucial truth: the gap between AI’s promise and its current capabilities remains vast.

As organisations barrel towards agentic AI next year, with semi-autonomous bots handling everything from press round-ups to customer service, Copilot’s stumbles serve as a timely reminder about the importance of thoughtful implementation.

Related to this point is the looming threat to authentic thought leadership. Nina Schick, a global authority on AI, predicts that by 2025, a staggering 90% of online content will be synthetically generated by AI. It’s a sobering forecast that should give pause to anyone concerned about the quality of discourse in our digital age.

If nine out of ten pieces of content next year will be churned out by machines learning from machines learning from machines, we risk creating an echo chamber of mediocrity, as I wrote in a recent Pickup_andWebb insights piece. As David McCullough, the late American historian and Pulitzer Prize winner, noted: “Writing is thinking. To write well is to think clearly. That’s why it’s so hard.”

This observation hits the bullseye of genuine thought leadership. Real insight demands more than information processing; it requires boots on the ground and minds that truly understand the territory. While AI excels at processing vast amounts of information and identifying patterns, it cannot fundamentally understand the human condition, feel empathy, or craft emotionally resonant narratives.

Leaders who rely on AI for their thought leadership are essentially outsourcing their thinking, trading their unique perspective for a synthetic amalgamation of existing views. In an era where differentiation is the most prized currency, that’s more than just lazy – it’s potentially catastrophic for meaningful discourse.

The past

In April 2014, Gary Mairs – a gregarious character in the year above me at school – drank his last alcoholic drink. Broke, broken and bedraggled, he entered a church in Seville and attended his first Alcoholics Anonymous meeting. 

His life had become unbearably – and unbelievably – chaotic. After moving to Spain with his then-girlfriend, he began to enjoy the cheap cervezas a little too much. Eight months before he quit booze, Gary’s partner left him, unable to cope with his endless revelry. This opened the beer tap further.

By the time Gary gave up drinking, he had maxed out 17 credit cards, his flatmates had turned on him, and he was hundreds of miles away from anyone who cared – which is why he signed up for AA. But what was it like?

I interviewed Gary for a recent episode of Upper Bottom, the sobriety podcast (for people who have not reached rock bottom) I co-host, and he was reassuringly straight-talking. He didn’t make it past step three of the 12 steps: he couldn’t supplicate to a higher power.

However, when asked about the important changes on his road to recovery, Gary talks about the importance of good habits, healthy practices, and meditation. Marcus Aurelius would approve. 

In his Meditations, written as private notes to himself nearly two millennia ago, Aurelius emphasised the power of routine and self-reflection. “When you wake up in the morning, tell yourself: The people I deal with today will be meddling, ungrateful, arrogant, dishonest, jealous, and surly. They are like this because they can’t tell good from evil,” he wrote. This wasn’t cynicism but rather a reminder to accept things as they are and focus on what we can control – our responses, habits, and thoughts.

Gary’s journey from chaos to clarity mirrors this ancient wisdom. Just as Aurelius advised to “waste no more time arguing what a good man should be – be one”, Gary stopped theorising about recovery and simply began the daily practice of better living. No higher power was required – just the steady discipline of showing up for oneself.

This resonates as we grapple with AI’s integration into our lives and workplaces. Like Gary discovering that the answer lay not in grand gestures but in small, daily choices, perhaps our path forward with AI requires similar wisdom: accepting what we cannot change while focusing intently on what we can – the quality of our thoughts, the authenticity of our voices, the integrity of our choices.

As Aurelius noted: “Very little is needed to make a happy life; it is all within yourself, in your way of thinking.” 

Whether facing personal demons or technological revolution, the principle remains the same: quality of thought, coupled with consistent practice, lights the way forward.

Statistics of the month

  • Exactly two-thirds of LinkedIn users believe AI should be taught in high schools. Additionally, 72% observed an increase in AI-related mentions in job postings, while 48% said AI proficiency is a key requirement at the companies they applied to.
  • Only 51% of respondents to Searce’s Global State of AI Study 2024 – which polled 300 C-suite and senior technology executives at organisations with at least $500 million in revenue in the US and UK – said their AI initiatives have been very successful. Meanwhile, 42% admitted success was only somewhat achieved.
  • International Workplace Group findings indicate just 7% of hybrid workers describe their 2024 hybrid work experience as “trusted”, hinting at an opportunity for employers to double down on trust in the year ahead.

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Why ‘re-recruiting’ existing employees is critical for 2023

As the long tail of the Great Resignation continues to swish and sting, labor markets contract and economic uncertainty bites, organizations should make every effort in 2023 to hold on to their employees. More specifically, they should “re-recruit” workers already at the company, urged Microsoft’s Liz Leigh-Bowler.

To support the case for re-recruiting, the product marketing leader, based in Epsom, U.K., cited the results of Microsoft’s recent global hybrid work survey, which captured answers from over 20,000 employees in 11 countries. Of the many telling statistics surfaced by the report, she said a handful stood out on this subject.

For example, two-thirds of employees would stay longer at their company if it were easier to switch jobs internally. Similarly, 76% of respondents would remain with their employer if they could benefit more from learning-and-development support. 

Unsurprisingly, without growth opportunities, most workers across all levels would depart. Without chances to develop, 68% of business decision-makers would not hang around. Worryingly, 55% of all employees reckoned the best way for them to learn or enhance skills would be to change employers. 

The level of workforce thirst for development has never been higher, according to the research. In fact, the opportunity to learn and grow is the number-one driver of a great work culture – a jump from ninth position in the rankings in 2019.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in January 2023 – to read the complete piece, please click HERE.

How the drive to improve employee experience could trigger a ‘data-privacy crisis’

How much personal information would you feel comfortable with your company knowing, even if it improves the working experience? Where is the line? Also, will that boundary be different for your colleagues?

Right now, it’s all a gray area, but it could darken quickly. Because of that fuzziness and subjectivity, it’s a tricky balance for employers to strike. On the one hand, they are being encouraged — if not urged — to dial up personalization to attract and retain top talent. On the other hand, with too much information on staff, they might be accused of taking liberties and trespassing on employees’ data privacy.

In 2023, organizations are increasingly using emerging technologies — artificial intelligence (AI) assistants, wearables, and so on — to collect more data on employees’ health, family situations, living conditions, and mental health to respond more effectively to their needs. But embracing these technologies has the potential to trigger a “data-privacy crisis,” warned Emily Rose McRae, senior director of management consultancy Gartner’s human resources practice.

Earlier in January, Gartner identified that “as organizations get more personal with employee support, it will create new data risks” as one of the top nine workplace predictions for chief human resources officers this year.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in January 2023 – to read the complete piece, please click HERE.

People are struggling to make new pals at work

Greek philosopher Aristotle, who died in 322 BC, considered “friendliness” one of his 12 virtues. Over two millennia later, Woodrow Wilson, the U.S. president throughout World War I, said: “Friendship is the only cement that will ever hold the world together.” However, almost a century after his death, the business world is crumbling — because having a best mate at work is increasingly ancient history.

New data from audio-only social media platform Clubhouse, based on a sample size of 1,000 U.S. workers, suggested 74% of people lost touch with a work friend during the coronavirus crisis.

The combination of the Great Resignation, enforced hybrid working policies, and organizations chopping and changing staff — in addition to any health complications suffered — means that fewer people now have besties at work.

Meanwhile, playing “you’re-on-mute” tennis on videoconferencing is not conducive to achieving game, set, match for a smashing new work friendship. The report reveals a worrying statistic: 61% of respondents said work friends are more critical post-pandemic. 

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in November 2022 – to read the complete piece, please click HERE.

How fair are employers really being about pay raises during the cost-of-living crisis?

You’d think the resignation of U.K. Prime Minister Liz Truss would have sent shockwaves of relief across the country. Perhaps it did in some ways, but the scorched earth she left behind, as a result of her cabinet’s hasty economic decisions, has U.K. public morale at an all-time low.

With inflation at a 40-year high and employees mired in a cost-of-living crisis that looks set to deepen, financial anxiety is sky-high. The worries pile up — including that some may not be able to afford their mortgage this time next year, due to the latest changes made by the Bank of England in response to the disastrous “mini-budget”. It’s clear we’re in for a shaky recovery.

A new Indeed and YouGov survey of 2,500 U.K. workers reaffirmed this. It showed 52% don’t think they are currently being paid enough to weather the current cost-of-living crisis. And that has a direct correlation to employees feeling undervalued, found the same report. Notably, healthcare and medical staff were most likely to feel underpaid (64%). Next on the list of unhappy workers were those who work in hospitality and leisure (61%) and legal (58%) industries.

To boost bank balances, 13% of those surveyed asked their employers for a pay raise. However, despite the real-earning squeeze, 61% of those who requested an increase either received less than they wanted or nothing at all. Little wonder that overall, 9% had applied for a new role, while others have resorted to taking on additional jobs.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in October 2022 – to read the complete piece, please click HERE.

Broken meetings culture is causing people to switch off, literally

It was only a matter of time. The endless meeting cycles embedded in the working cultures of so many organizations across industries have escalated to the point where people simply tune out during them.

And with so many meetings still taking place on video, rather than in-person, a large number of people don’t think they need to be in them at all – which is leading to mass disengagement, according to some workplace sources.

A whopping 43% of 31,000 workers, polled from across 31 countries by Microsoft, said they don’t feel included in meetings. 

“Meeting culture is broken, and it’s having a significant impact on employee productivity and business efficiency,” said Sam Liang, CEO and co-founder of Otter.ai, a California-based software company that uses artificial intelligence to convert speech to text.

A recent Otter.ai study revealed that, on average, workers spend one-third of their time in meetings, 31% of which are considered unnecessary. But employers continue to plow ahead without changing these embedded structures.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in October 2022 – to read the complete piece, please click HERE.

Is long-term employee retention a losing battle?

Is the concept of a job for life dead?

The mass reassessment of careers people have undergone over the past few years – described by many as the Great Resignation, by others as the Great Reshuffle – is showing no signs of calming down. In fact, in the U.K., the trend seems to be accelerating.

More than 6.5 million people (20% of the U.K. workforce) are expected to quit their job in the next 12 months, according to estimates from the Chartered Institute of Personnel and Development (CIPD), which published the data in June after surveying more than 6,000 workers. That’s up from 2021, when 16% of the U.K. workforce said they plan to quit within a year, according to the CIPD. Meanwhile, in March, Microsoft’s global Work Trend Index found that 52% of Gen Zers and Millennials — the two generations that represent the vast majority of the workforce — were likely to consider changing jobs within the following year.

Tania Garrett, chief people officer at Unit4, a global cloud software provider for services companies, argued that it is time for organizations to get real — they are no longer recruiting people for the long term. Instead, they should embrace this reality, and stop creating rewards that encourage more extended service from employees. 

This article was first published on Digiday’s future-of-work platform, WorkLife, in October 2022 – to read the complete piece, please click HERE.

Most HR professionals have got it wrong – longer hours do not mean better performance

The phrase “hard work pays off” (or subtle variations thereof) has to be one of the most popular nuggets of advice in the last century and beyond. This maxim, passed down from generation to generation, has conditioned us to believe that the more we do something, the more we will be rewarded. 

However, there is growing evidence that shows this attitude is counter-productive. Moreover, overworking is dangerous. And most worryingly, over two-thirds (68%) of European human resources professionals are peddling the idea that high-performing employees work longer hours than average employees, according to a study by Gartner.

How, then, can performance be improved in a world where people are exhausted (because they are working harder)?

This article was first published on Digiday’s future-of-work platform, WorkLife, in October 2022 – to read the complete piece, please click HERE.

How employee monitoring has shifted from creepy to empowering HR teams

A friend giddily informed me a few days ago that she had “found the perfect eraser.” Perplexed as to why something that rubs out pencil marks would evoke such glee, I asked for more details. “This eraser is the ideal weight; I can rest it on the space bar, so the screen stays awake if I leave the desk,” she said. “That way, my manager thinks I’m still being active at my computer.”

Employees who feel they are being observed for no good reason tend to find a way to game the system, argued Brian Kropp, group vp and chief of research for Gartner’s HR practice. “If your employer is trying to screw you by creepily monitoring you, there are various things you can do to screw them over,” he said.

For instance, he revealed that if computer mouse activity is being tracked, an analog watch can help: position the mouse on the watch face, and the second hand creates just enough motion to keep the mouse registering as active.

Monitoring is on the rise, though. According to Gartner’s research, around 30% of the medium and large corporate organizations it assesses had tracking systems in place before the pandemic. “Now the percentage is more than 60%,” said Kropp.

This article was first published on Digiday’s future-of-work platform, WorkLife, in September 2022 – to read the complete piece, please click HERE.

WTF is pre-covery?

Before employees start working at SevenRooms, a global guest experience and retention platform for the hospitality industry, they are automatically provided two weeks’ paid time off by their new employer.

The initiative, called Fresh Start, is part of the growing “pre-covery” trend — a term to describe an acknowledgment that employees must recharge before beginning a new challenge to avoid burnout.

Some professionals believe it can be a protective layer between success and failure. “The best organizations have realized employees can’t run at 100% for 100% of the time,” said Brian Kropp, group vice president and chief of research for Gartner’s HR practice. “We have to create time for breaks, moments of rest and recovery. The best organizations are increasingly thinking about ‘pre-covery.’”

This article was first published on Digiday’s future-of-work platform, WorkLife, in August 2022 – to read the complete piece, please click HERE.

People are being harsher in the workplace post-pandemic – how did we get here?

Be honest: are you snappier with your colleagues and harsher with your spoken and written words than two years ago? We might not like to admit it, but the pandemic altered us all, to a degree – at work and home. 

Individually, the change might be imperceptible. However, collectively it adds up to a negative conclusion. And if left unchecked, this general lack of positivity will toxify the workplace and corrode relationships.

Brian Kropp, group vp and chief of research for Gartner’s HR practice, expressed his concern for employers and their staff. “There are numerous things pulling employees apart from each other, and that’s incredibly difficult as an organization because the purpose of having a company is bringing people together, to collaborate, and to achieve something bigger than any individual could achieve alone,” he said.

Could this be the start of a worrying trend? “We’re finding that we are entering a period where things inside and outside our organizations are causing the workforce fragmentation,” Kropp added. 

This article was first published on Digiday’s future-of-work platform, WorkLife, in August 2022 – to read the complete piece, please click HERE.

EY and others are offering employees MBAs and masters degrees – but is it a good investment?

When global accountancy firm EY discovered, through an internal survey, that almost three-quarters (74%) of its 312,000 staff in over 150 countries wanted to “participate in activities that help communities and the environment,” action was swiftly taken.

In late February, a unique course was launched: the EY Masters in Sustainability, in association with Hult International Business School in the U.K. The best part? It is free for all EY employees, regardless of rank, tenure or location.

The online-only learning program, which students can work through at their own pace, is designed to expand sustainability and climate literacy among EY workers. The hope is that these newly acquired skills will accelerate innovative sustainability services for clients.

EY’s budget for staff training is likely to be significantly larger than that of most other organizations. But as the Great Resignation trend drags on, more companies realize that investing in employee education – even if it’s not directly related to work – is good value. It can boost morale, generate fresh thinking, accelerate innovation, and – possibly most importantly right now – help attract and retain the best people.

This article was first published on Digiday’s future-of-work platform, WorkLife, in July 2022 – to read the complete piece, please click HERE.

What business leaders can learn from the rise of cross-industry U.K. strikes and union activism in the U.S.

It’s ironic that Britons, who stereotypically bask in small talk about the weather, are experiencing a so-called “summer of discontent” as temperatures hit record highs. The strike action in late June of railway workers, followed by criminal barristers, is likely to be copied in the coming weeks by inflamed teachers, airport staff, and healthcare workers, among others. 

With the cost-of-living crisis raging, it’s a tinderbox. And while it’s broadly understood that the predominant reason for the strike action is pay, some believe there is more to it. In fact, employees feeling they are not being heard by leadership is at the crux of the issue.

Business leaders in the U.K., U.S., or elsewhere would be wise to heed the lessons or risk sparking employee revolts that they can’t contain.

This article was first published on Digiday’s future-of-work platform, WorkLife, in July 2022 – to read the complete piece, please click HERE.

WTF is an employee engagement platform?

For decades – if not centuries – every successful company has realized that its people are its greatest asset. Now, more than ever before in the history of work, employers have to understand in great detail what their employees want and need, because of the seismic shifts happening.

With most organizations figuring out flexible and hybrid working models, their employees are the most critical stakeholders. For this reason, to gauge their sentiments, companies are turning to employee experience (EX) platforms.

What exactly are EX platforms, and when did they become a thing?

This article was first published on Digiday’s future-of-work platform, WorkLife, in May 2022 – to continue reading please click HERE.

The seven biggest hybrid-working challenges, and how to fix them

The phrase “new normal” is a misnomer, given the state of flux in the business world. Few organizations have been able to normalize operations; who can say with a straight face that they’ve nailed their hybrid working strategy?

As Kate Thrumble, executive director of talent at marketing company R/GA London, said: “We are all on a – to use an overused word – ‘journey’ with the post-pandemic way of working. No one has cracked it yet. Even those with the best intentions will have to wait a year or two to understand the impact of today’s decisions.”

However, by matching the right technology solutions with the most pressing hybrid-working challenges, organizations will reach their end destination quicker: a happy, productive, engaged and empowered workforce.

So what exactly are the seven most significant business challenges and the best tech, tools and processes to solve them and speed up progress?

This article was first published on Digiday’s future-of-work platform, WorkLife, in May 2022 – to continue reading please click HERE.

‘It’s going to get messy’: How rising generational divides could kill workplace culture

Intergenerational divides are more expansive than ever, and if left unchecked could quickly lead to toxic workplace cultures, experts warn.

Opinions on post-pandemic work values vary wildly across generations, according to a report from London-based global recruitment firm Robert Walters published in early March.

Some 60% of the 4,000 U.K. office workers surveyed reported a rise in “new challenges” when working with teammates from different generations. And 40% of respondents are “annoyed” at the post-pandemic working values and global-minded outlooks of colleagues in other age ranges.

This article was first published on Digiday’s WorkLife platform in March 2022 – to continue reading please click here.

‘What’s in it for me?’: The employee question that needs answering in any return-to-office playbook

It’s crunch time for hybrid return-to-office plans, again.

After numerous false starts (thanks Delta and Omicron) it looks like a full-scale return to the office, in whatever shape or form that takes, has arrived. As such, a growing number of major organizations have started to show what hybrid model they’re going for.

Last week, Google told staff in the San Francisco Bay Area and several other U.S. locations that it will end its voluntary work-from-home phase in April, in favor of a plan where most employees will spend three days in the office and two working remotely.

Microsoft has also said it will reopen its Washington state and Bay Area offices, and that employees can configure what days they come to the office with their managers. Likewise, with all coronavirus restrictions officially lifted in England, organizations there are being pressured to articulate and activate their return-to-the-office plans.

Trite as it may be, it’s vital to acknowledge that an incredible amount has changed in the world of work since the pandemic struck almost precisely two years ago. And the most significant transformation has been where most of us work.

Models will naturally vary depending on the company, but there are a few essential guidelines that are worthwhile for all employers to take note of. Here’s a breakdown of five key areas employers need to have in their playbook.

This article was first published on Digiday’s WorkLife platform in March 2022 – to continue reading please click here.

How to steer clear of ‘employee whiplash’ if driving a return to the office

On Valentine’s Day, Microsoft showed its affection to staff by announcing plans to reopen its Washington state and California Bay Area offices on February 28 — but will workers love it?

Due to the ongoing pandemic, the technology titan had indefinitely postponed return-to-work plans for its 103,000 employees last September. But now that its hybrid-working strategy has been revealed and staff members are being called back into the office, it will likely spur other prominent organizations to follow suit.

But could the sudden shift from remote to in-office working cause what Brian Kropp, chief of research for Gartner’s HR practice, calls “employee whiplash”? And, if so, what are the likely short- and long-term effects, and how can they be avoided?

This article was first published on Digiday’s WorkLife platform in February 2022 – to continue reading please click here.