Go Flux Yourself: Navigating Human-Work Evolution (No. 28)

TL;DR: April’s Go Flux Yourself celebrates World Autism Acceptance Month and examines why the economy is being rewired to reward pattern recognition, analytical depth and creative problem-solving, while simultaneously locking out the population that does those things best.

Image created using Luma’s Uni-1

The future

“The disorder framing says: something is wrong with you. Fix it. Suppress it. Medicate it away. The superpower framing says: actually you’re special! Lucky you! Neither is useful. Neither is honest. Neither asks the more important question, which is: ‘What does this person need to understand about how they function?'”

I want to start with a confession. I ended contractual work at newspapers and established Pickup Media Limited well over a decade ago. While I have constructed various respectable-sounding explanations for that decision over the years, the honest version is simpler: I am just not very good at working in offices or with other people’s agendas.

I love collaborating, and I’m lucky to have a roster of fun clients, most of them longstanding, across a range of industries. But working on my own terms, with flexibility, in my own rhythm, knowing my value, and organising my days around how my brain actually functions has been the difference between surviving and thriving. I mention this not because I am claiming any particular neurological distinction, but because the older I become, the more I notice how many of the most capable people I know have quietly arranged their working lives around similar principles, whether or not they have a diagnosis to explain why.

I should also say clearly: I am in no way an expert on neurodiversity. I am a journalist. What follows in this month’s Go Flux Yourself is the product of research, interviews, and reporting, not clinical authority or lived experience. But the story I found when I started pulling at this thread is so striking, and so relevant to the questions this newsletter returns to every month – who benefits from the way we organise work, who loses, and what happens when we get it wrong? – that it felt irresponsible not to write about it. If I have got anything wrong, I would genuinely like to hear about it.

Here, then, is the story. Something peculiar is happening in the global labour market. The skills that employers say they most urgently need – pattern recognition, analytical depth, sustained focus, creative problem-solving, the capacity to hold a complex system in your head and spot the fault line nobody else can see – are the cognitive traits that many neurodivergent people demonstrate as their default setting.

Can you imagine doing a job that perfectly suits you? Not tolerably, but properly: one that uses the way your particular brain works as an asset rather than an inconvenience. Over three years ago, I interviewed Professor Erik Brynjolfsson for Raconteur. Brynjolfsson directs the Digital Economy Lab at the Stanford Institute for Human-Centered AI and is arguably the world’s leading authority on the relationship between digital technology and productivity. His argument was bracingly simple and remains, in 2026, largely unaddressed. “Human capital is a $200 trillion asset in the US, bigger than all the other assets put together, and about 10 times the country’s gross domestic product,” he told me in early 2023. “The most important asset on the planet is the one we’ve been measuring the worst.” Three years and an AI revolution later, there is no evidence that the measurement has improved. The consequence? Human capital is “probably the most misallocated asset on the planet. Businesses are not putting the right people in the right jobs.”

Consider what that means. Not just at the macro level, where the numbers are so large they become abstract, but at the human level. Brynjolfsson put it plainly: “Think of how many people are not in the right job, living lives of quiet desperation. They probably have some capabilities that could fulfil another job much better, but they’re not being matched to it because the infrastructure is not there.”

Around five years ago, he and his Stanford colleagues started building a platform called work2vec, which used data from 200 million online job postings to map the distance between skills, roles and people in a multidimensional space, making it possible to see how close an electrician is to a fibre-optics engineer, or a data scientist to a machine-learning specialist. If you can see the adjacencies, you can redesign the pathways. For instance, if you need a fibre-optics technician and you can see that electricians share 85% of the required skills, you train the gap rather than advertising for a unicorn.
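The adjacency idea can be sketched in miniature. The roles, skill names and the resulting overlap figure below are invented for illustration – the real work2vec learns embeddings from 200 million postings rather than comparing hand-written skill sets – but the "train the gap" logic falls out the same way:

```python
# Toy illustration of skill adjacency, in the spirit of work2vec.
# Each role is modelled as a set of required skills; the overlap tells
# you how far one role already is from another, and the set difference
# tells you exactly what to train. All names here are hypothetical.

def skill_overlap(role_a: set[str], role_b: set[str]) -> float:
    """Fraction of role_b's required skills already covered by role_a."""
    if not role_b:
        return 0.0
    return len(role_a & role_b) / len(role_b)

electrician = {"circuit wiring", "cable pulling", "safety testing",
               "fault diagnosis", "blueprint reading", "conduit installation"}
fibre_technician = {"cable pulling", "safety testing", "fault diagnosis",
                    "blueprint reading", "fibre splicing"}

coverage = skill_overlap(electrician, fibre_technician)
gap = fibre_technician - electrician

print(f"Skills covered: {coverage:.0%}")   # 4 of 5 required skills
print(f"Train the gap: {sorted(gap)}")
```

The real system replaces the sets with learned vectors, so "closeness" captures skills that co-occur in postings rather than only exact matches – but the managerial conclusion is the same: fill the gap, don't hunt the unicorn.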

In 2026, the matching infrastructure Brynjolfsson called for is still largely missing. The World Economic Forum’s January 2026 scenario analysis suggests that the variable that determines whether AI augments or displaces us is not the technology. It is whether we invest in the people who use it.

The WEF’s Future of Jobs Report 2025, surveying over 1,000 employers representing 14 million workers, found that analytical thinking remains the top core skill, with seven out of ten companies considering it essential, followed by resilience, creative thinking, and curiosity. Demand for AI literacy skills increased by 70% in a single year. And 39% of all skills required in the job market are expected to change by 2030.

Brynjolfsson was talking about the entire labour market. But there is one population where the misallocation is so severe, so well documented, and so absurdly at odds with what the economy actually needs, that it amounts to a case study in institutional self-sabotage.

In the 2024/25 financial year, just 34% of disabled autistic people in the UK – autism being one form of neurodivergence – were in employment, compared with 82% of non-disabled adults. That figure has improved over time: from roughly 15% in full-time work (the National Autistic Society’s long-standing survey figure), to 22% when the ONS first measured it formally in 2020, to 30% at the time of the Buckland Review in 2024, and to 34% now. A recent National Autistic Society report found that 77% of unemployed autistic people want to work.

This is not a population that has opted out of employment. Rather, it is a segment that has been designed out, by application forms that penalise unconventional communication, interview processes that assess social performance rather than professional capability, and office environments built for a cognitive profile that represents, generously, 80% of the population. It is rather like designing a restaurant that only serves right-handed diners and then wondering why the left-handed 10% of the population never books a table.

Autistic graduates, according to the Buckland Review, are twice as likely to be unemployed 15 months after university as non-disabled graduates. Those who do find work are the most likely of any group to be overqualified, the most likely to be on zero-hour contracts, and the least likely to be in a permanent role. If you deliberately set out to take some of the most analytically capable minds in the labour market and make them grateful for insecure work beneath their qualifications, you would struggle to design a more efficient system than the one we have now.

Indeed, consider cybersecurity, an industry where ISC2’s 2024 Workforce Study found 5.5 million people active globally against a gap of 4.8 million unfilled positions – meaning the workforce needs to grow by 87% to meet current demand. Already, 19% of UK professionals in the field self-identify as neurodivergent, and NeuroCyberUK estimates that up to three quarters of cognitively able autistic adults could possess the aptitude for the work. We have a talent emergency the size of Ireland in cybersecurity alone, and a largely untapped population whose brains are wired for exactly the work that needs to be done. The fact that these two things have not been connected at scale is Brynjolfsson’s “most misallocated asset” argument in miniature.

April is World Autism Acceptance Month, the prompt for this edition of Go Flux Yourself. The opening quotation for this edition comes from Ben Branson, founder of Seedlip, the world’s first distilled non-alcoholic spirit, and more recently of The Hidden 20%, an award-winning neurodiversity charity, chart-topping podcast and community of over 250,000 people. 

Ben spent 39 years not knowing his brain worked differently. In those years, he built Seedlip – which began with a 17th-century book on herbal remedies, The Art of Distillation by John French, and a copper still bought online – from his kitchen in the Chilterns to 35 countries and 7,500 venues, sold a majority stake to Diageo, and pioneered a category now worth billions.

He also experienced addiction, homelessness, institutionalisation and childhood bullying so severe that he was seeing a psychiatrist at seven. I asked him how much of Seedlip’s success was his autism working for him, and how much was it destroying him. “Both,” he said. “Simultaneously. The whole time. The pattern recognition, the obsessive research, finding a book from 1651 and seeing a business nobody else could see, the sensory acuity: smell, taste, memory. One part of my brain was building something. The other was quietly unravelling.”

Image provided by Ben Branson

His diagnoses, for autism and Attention Deficit Hyperactivity Disorder (ADHD), arrived at 39. I asked what first made sense. I expected something structural, a relationship pattern perhaps, or a career decision. Instead: “The hearing. I’ve always had extraordinary hearing. I can listen to a TV on volume one. Then there’s the skin. Wool hurts me. Labels in clothing. I’d spent 39 years thinking that was just me being fussy.” Then, more quietly: “Before: I was broken. After: I had a different operating system.”

The Hidden 20%’s position is deliberately, almost stubbornly, precise: neurodivergence is neither a superpower nor a disease. “Both sides are lazy,” he said. “The disorder camp produces terrible statistics, not because neurodivergent people are broken, but because the system was built for a brain it doesn’t include. The superpower camp produces something almost as damaging: a toxic positivity where suffering gets minimised, and anyone who isn’t thriving is somehow doing neurodivergence wrong.” More plainly: “My brain has given me everything I’ve built. It’s also put me in hospital. The goal isn’t the silver lining. The goal is the truth.”

March 2025’s Go Flux Yourself explored the “Lost Einsteins”: economist Raj Chetty’s term for those with the potential to innovate but without the credentials, connections or capital to do so. Neurodivergent people are a particularly visible subset of those Lost Einsteins. Indeed, over 20 years ago, researchers at Cambridge, led by Sir Simon Baron-Cohen (Sacha Baron-Cohen’s cousin), assessed both Sir Isaac Newton and Albert Einstein as “fairly certain” to have been on the autism spectrum. The minds that changed the world, from Newton to Turing to Tesla, were rarely the ones that fitted the systems designed to assess them (more on this, and on Brynjolfsson’s concept of “the Turing Trap,” in The past, below).

I wrote last year for the New Statesman about teenage hackers and the pipeline from bedroom gaming to cybercrime. Holly Foxcroft, a neurodivergent cybersecurity specialist at OneAdvanced, told me that autistic minds are “analytical and data-driven, spotting patterns others miss”, and that neurodivergent teenagers often lack dopamine, making hacking “basically a puzzle with a reward at the end: social acceptance otherwise lacking in the physical world”. These teenagers are not short on talent. They are short on legitimate education and (legal) work pathways. And when you brick up the front door, you should not be entirely surprised when people start climbing through the window.

Caroline Cavanagh, a hypnotherapist, speaker and anxiety specialist based in Wiltshire – and, like me, represented by Pomona Partners for speaking – put it to me with rather less academic caution. “You are missing out on the Mozarts of their time, the Turings of their time. You are missing out on having this employee in your business.” One of her current clients, a severely autistic young man, spent most of secondary school at home because his school could not cope. His talent proved exceptional: video and animation of a quality most professional studios would envy. He is now making a documentary about why employers should be more aware of neurodivergent needs, which is either deeply inspiring or deeply damning, depending on how long you sit with it. “All of our systems were designed at times when neurodiversity wasn’t acknowledged,” Caroline told me. “If you weren’t within the typical, you either had to bend and cut your raw edges off to fit in, or you were excluded.”

The problem starts long before anyone reaches a workplace. One in seven children is estimated to be neurodivergent, according to the government’s latest figure, yet a 2023 report by the National Autistic Society found that only 39% of primary teachers and 14% of secondary teachers surveyed had received more than half a day of autism-relevant training in the course of their careers. And before anyone reaches a clinician, there is the queue: as of December 2025, 254,108 people in England were waiting for an autism assessment – more than the population of Southampton.

Ben was blunter: “If someone reads a resource on our website and it’s the only support they’ve had in five years, something has gone catastrophically wrong upstream of us.”

Some pioneering, supportive companies are producing results that make the inaction of others look increasingly peculiar. For instance, JPMorgan Chase’s Autism at Work programme now employs over 150 people on the spectrum across nine countries with a 99% retention rate. Auticon, the global IT consultancy where every consultant is on the autism spectrum, does not accommodate autism so much as it is engineered around it. Microsoft has run a Neurodiversity Hiring Program since 2015, replacing traditional interviews with work trials, a move so obviously sensible you wonder why it took a trillion-dollar company to think of it.

But the argument is no longer just about inclusion: it is about competitive advantage in an AI-driven economy. Josh Hough, founder of home care software firm CareLineLive, put it well during Neurodiversity Celebration Week in March: as AI reshapes the workplace, traits often linked to neurodivergence – focus, pattern recognition, problem-solving – are becoming more valuable, not less. “A lot of businesses still want people who tick every box,” he said. “The reality is, people who think differently often solve problems differently. You need people who don’t just follow a process, but can see a better way of doing things.” In an economy that automates the routine and rewards the lateral, the neurodivergent mind is not a risk to be managed but an advantage. So why do so few companies have formal neuroinclusion policies in place? And what actually needs to change?

Both Ben and Caroline are specific in ways most corporate neurodiversity initiatives are not, largely because they work with actual humans rather than advisory boards. Ben ran a no-internal-email policy at Seedlip from day one, routing everything through Slack, not as a quirk but because his brain needs segmentation and clarity to function, and it turned out to make the whole company faster besides. His advice to the CEO who reads his story and thinks it sounds admirable but irrelevant: “You already have the answer. You just haven’t asked the right question. You’ve been told ‘get different voices in the room’. You nod. You hire a diverse panel. You put it in the deck. But have you ever asked ‘do we have a diverse mix of brains in here?’”

The people companies are losing – the undiagnosed, the mismanaged, the ones quietly burning out in environments designed for a cognitive profile they do not share – are, in Ben’s view, “probably your best problem-solvers. You just haven’t built the conditions for them to show you.” 

Caroline told me about a woman she works with who was ready to leave a job she was excelling at, for no other reason than that nobody had ever told her she was doing it well. Not a pay dispute, not a culture clash, not a better offer elsewhere. Just silence where a sentence would have been enough. “It might just be a positive email, ‘well done today’,” Caroline said. “With that tiny investment, you can get such a phenomenal return.” This is not indulgence. It is calibration, the kind of management attentiveness good bosses already apply to their strongest performers without ever calling it an accommodation.

Kay Sargent, a workplace design specialist I spoke to earlier this year, extends the argument to the physical environment: workplaces should offer a spectrum of sensory experiences rather than standardising for a single cognitive norm. “Over 50% of 20-year-olds consider themselves to be neurodivergent,” she told me. “That is no longer a neurominority.”

The Buckland Review described autistic people as “an untapped workforce” and made recommendations across five areas. An independent panel led by Professor Amanda Kirby was appointed in January 2025 to improve job prospects for neurodivergent people. Their report was due last summer. The silence since has been conspicuous, though perhaps not surprising from a system that has managed to build a 254,000-person assessment waiting list without apparently regarding it as urgent.

Ben’s daughter River has been diagnosed autistic. Nine generations of Bransons have worked in agriculture and invention; his great-grandfather was knighted for services to both. “Is neurodivergence the thread?” he said. “I find it very hard to believe it isn’t. My great-grandfather was knighted for what his brain produced. His great-great-granddaughter has been diagnosed autistic. They almost certainly share something fundamental. We just finally have the word for it.”

When I asked what he is doing differently for River, he said: “She already knows. She has the language. She won’t spend 39 years gathering questions without answers.”

Brynjolfsson told me in 2023 that the next decade could be “the best we’ve ever had on this planet,” provided we close the gap between technological capability and our human response to it. Three years on, that gap is wider. For the 34% of autistic adults in work, and the 77% who want to be, and the quarter of a million people waiting for a diagnosis that might open a door the system should never have closed, Brynjolfsson’s optimism remains theoretical.

I asked Ben, finally, what he would tell the 25-year-old version of himself about his brain. His answer was six words: “It’s not chaos. It’s a pattern.”

The pattern is there. The talent is there. What remains is for UK employers to do something as simple, and apparently as difficult, as asking the right question about the brains already inside their buildings.

The present

This last month, I’ve been in various rooms where the arguments in this newsletter played out in real time, with real people.

For instance, on 15 April, I attended a session at the Houses of Parliament hosted by the All-Party Parliamentary Group on the Future of Work, chaired by Lord Knight of Weymouth and organised by the Institute for the Future of Work. The topic was technological disruption and its impact on young people entering the labour market. The panellists included Fiona Aldridge, chief executive of the Skills Federation and member of the Skills England board, and Darius Norell, founder of Radical Employability (I followed up with both, and their words of wisdom about how to better prepare and serve youngsters for more successful careers will appear in May’s Go Flux Yourself). 

Fiona framed the paradox neatly, paraphrasing Bill Gates’ line: we are probably overestimating the short-term impact of AI on youth employment and underestimating the long-term impact. Much of what is currently being attributed to AI is actually driven by economic conditions and the technology choices businesses are making in response to them. But the long game is far less comfortable. Perhaps surprisingly, her financial services members found that only 0.5% to 1.5% of the workforce will ever need to be genuine AI specialists. The rest will need to work alongside AI, which is a fundamentally different proposition and one the education system is not yet configured to deliver.

David Hughes, CEO of the Association of Colleges, was blunter in the IFOW session. The knowledge-rich curriculum to age 16 trains young people to pass tests, not to learn for life, he argued. The skills employers consistently say they want – confidence, problem-solving, working with others, the belief that you can work through a challenge – he called “middle-class skills”, because most middle-class kids absorb them at home. This hit me hard, as a middle-class father of two school-age children, each of whom has access to and enjoys a raft of extra-curricular activities, ranging from sport to scouts to drama and dancing.

Many others do not, and the education system does not compensate. Some 40% of free-school-meals children leave school without good GCSEs in English and maths, Hughes added. And there are nine million adults in the UK with poor literacy. If you cannot read or write confidently, your chances of adapting to technological change are slim.

The most affecting contribution came from Deborah, a King’s Trust Young Ambassador who is neurodivergent and has ADHD. She described doing everything the system asks of her: upskilling, posting on LinkedIn, attending networking events, and completing courses. Then … nothing. “A lot of the young people I’ve spoken to feel like they’ve been led to a cliff.” The system invests in getting people to the edge and then stops.

Lord Knight identified the absurdity that lies beneath much of this: candidates using AI to optimise their applications, employers using AI to filter them, and yet everyone still applying in the old analogue way, with AI merely bolted on to both sides of the process. Nobody in the room could explain why this was better than an AI-native, skills-portfolio-based approach in which people are matched to roles on the basis of what they can actually do. Brynjolfsson’s work2vec, in other words, is not just a Stanford research project. It is the answer to a question many are asking.

Image created using Luma’s Uni-1

A week later, on April 22, I was at Olympia London, moderating a panel at Data Decoded called “Creating a data-led culture: the barriers and how to break them”, with Jason Foster (CEO of Cynozure), Matt Yates (Head of Talent Acquisition EMEA at Uber), Sam Davies (Director of Global Product Insight & Analytics for Comcast/Sky), and Kam Karaji (Director of Cybersecurity and Risk Management at the NFL, and holder of a Queen’s Gallantry Medal for counter-terrorism).

The main thrust of the session was that organisations are drowning in data yet starved of wisdom, and the barrier is not the technology: it is the culture. You can spend millions on dashboards, but if nobody in the room trusts the numbers enough to act on them, or is brave enough to say “I don’t know what this means”, then the dashboards are (expensive) furniture. Meanwhile, the people most fluent in these tools, the ones who grew up with AI in their pockets, are the generation having the door shut on them. Entry-level roles are being hollowed out across sectors, and with them the apprenticeship layer where mid-level judgment used to be built.

Image created using Luma’s Uni-1

The day before Data Decoded, I moderated a CX Divide breakfast roundtable for Moneypenny at the Ivy’s Granary Square Brasserie in King’s Cross, alongside 13 senior customer experience leaders – and I wrote about it here. Moneypenny had surveyed 2,000 UK business decision-makers and 5,001 consumers on the same questions, and the perception gap is extraordinary: on social media, businesses rated their own performance 36 percentage points higher than customers did. On web forms, 32 points. On chatbots, 28. Businesses think they are nailing it. Customers know otherwise. The gap between what organisations believe they deliver and what people actually receive is, I suspect, the CX equivalent of the neurodiversity employment gap: a system that measures its own performance by criteria the people inside it never agreed to.

Every room I was in this month, from the House of Lords to Olympia to the Ivy in King’s Cross, asked the same question: are we building systems for the people we actually have, or for the people we assume?

The past

In 1950, the famed mathematician and codebreaker Alan Turing proposed what he called an “imitation game”: could a machine imitate a human so convincingly that an observer could not tell the difference? The question launched an entire field (and lent its name to a 2014 film about him). It also, inadvertently, set a trap for technology innovators.

Erik Brynjolfsson, the Stanford economist I quoted earlier in this edition, has written extensively about what he calls “the Turing Trap”. His argument is elegant and uncomfortable. When AI is focused on replicating human capabilities (passing the Turing Test, in other words), it creates machines that substitute for human labour. Workers lose bargaining power. Value concentrates. The people who control the technology become richer. The people the technology replaces become poorer.

However, when AI is focused on augmenting human capabilities – doing things humans cannot do alone, rather than mimicking what they already can – then humans remain indispensable. Complementarity, not substitution, is the path to shared prosperity.

The trap, then, is this: our instinct is to build machines in our own image, because imitation is how we measure intelligence. But the more successfully we do that, the more we make human workers replaceable. The goal, Brynjolfsson argues, should not be human-like AI but human-complementary AI. Not a machine that thinks like us, but a machine that thinks differently from us, so that together we can do more than either could alone.

Image created using Luma’s Uni-1

The irony is exquisite. The man who set the original test (can a machine pass for a human?) was himself a mind that the human system could barely accommodate. Turing was highly literal, socially unconventional, obsessively focused, and thought in patterns so alien to his contemporaries that his 1936 paper on computability was not widely understood for years. His colleagues at Bletchley Park found him brilliant but baffling. He chained his tea mug to a radiator to stop people borrowing it, cycled to work wearing a gas mask to avoid hay fever, and was, by every informed modern assessment, almost certainly autistic.

He was also prosecuted for homosexuality in 1952, chemically castrated, and died two years later at the age of 41. The system he saved could not tolerate the mind that saved it.

Turing is the most striking example, but he is not alone. Isaac Newton spent days without eating when absorbed in a problem, had virtually no close relationships, and was so socially withdrawn that his lectures at Cambridge were often delivered to an empty room. Henry Cavendish, who discovered hydrogen and measured the density of the Earth with extraordinary precision, was so averse to human contact that he communicated with his servants by letter. Oliver Sacks wrote that Cavendish’s biography “constitutes perhaps the fullest account we shall ever have of the life and mind of a unique autistic genius”.

Elsewhere, Charles Darwin was a solitary, anxious child who preferred long walks and letter-writing to social interaction; Michael Fitzgerald at Trinity College Dublin published research concluding he had Asperger’s syndrome. Nikola Tesla, whose alternating-current motor powers the world, had an obsessive need for numerical patterns (he would circle a building three times before entering), extreme sensory sensitivity, and virtually no capacity for casual social engagement.

What connects them is not just that they were brilliant. It is that their brilliance was inseparable from the way their brains worked, the traits that made them difficult to employ, difficult to manage, and difficult to assess by conventional means. Newton’s ability to focus for days without interruption was not a personal quirk. It was the cognitive engine that produced the Principia. Tesla’s compulsive pattern-seeking was not a disorder. It was the source of his inventions. And Turing’s literalism and unconventional thinking were not social deficits but the attributes that enabled him to conceive of a machine that could think.

We tend to celebrate these minds in retrospect while systematically excluding their contemporary equivalents. Newton gets a statue; the autistic physics graduate gets a zero-hour contract. Turing gets a posthumous pardon; the neurodivergent teenager gets a 17-month wait for a diagnosis.

Brynjolfsson’s Turing Trap offers a way of thinking about why. If the default instinct is to measure intelligence by how closely it resembles the norm (the imitation game), then any mind that deviates from the norm will be penalised, regardless of what it can do. The trap is not just economic: it is cognitive. We have built assessment, education, and employment systems around the assumption that intelligence looks a particular way: fluent, sociable, compliant, generalist. The minds that break through are the ones lucky enough, or stubborn enough, to route around the system entirely.

Ben Branson built Seedlip from his kitchen because no employer would have known what to do with him. Turing broke the Enigma code because Bletchley Park was desperate enough to overlook his eccentricities. Newton produced his greatest work during the plague, when Cambridge shut and he was left alone. So far, civilisation’s most reliable method of supporting neurodivergent genius has been to accidentally leave it alone. If I were a teacher marking some work, I would probably write “room for improvement”.

Tech for good: Liverpool City Region Combined Authority

It is one thing to talk about AI serving people. It is another to be handed health, transport and education for 1.6 million residents and told to make it happen.

Tiffany St James is the Chief AI Officer for the Liverpool City Region Combined Authority, the UK’s first regional public sector CAIO, a role she took up in September 2025. I spoke to her for a recent episode of DTX Unplugged, the podcast I co-host. She clearly articulated the gap between AI enthusiasm and AI usefulness.

“Almost everyone starts with the problems,” she said. “What are the wicked problems? How can I fix them? It’s not bad doctrine, but I like to start with maturity. Where are we now? Where are we trying to get to?”

That sequencing matters because Liverpool is not starting from scratch. The region has over 450 kilometres of high-fibre infrastructure enabling 5G, a supercomputer at STFC Hartree offering affordable slices of compute to smaller businesses, and undersea cables running from the US and Ireland into Southport. The building blocks are there. What was missing was someone to connect them to outcomes that matter to people’s lives.

Image of Tiffany taken from DTX Manchester in April

One programme Tiffany inherited and is now expanding puts assistive learning technology into primary schools, currently 10% of the region’s primaries, focused on the critical Year 6 transition before secondary school, offering personalised support in science, maths and English. “We can’t fund this in perpetuity,” she said. “But what we can do is help our teachers have exposure to different tools to enable them to be more critical consumers of the technology on offer.”

What makes her approach distinctive is the insistence on people before technology. “What I see time and time again is organisations leaning into technology first, and that just amplifies good and bad culture and processes. The pace of change of people, their skills, their infrastructure, their confidence, is mismatched with the pace of technology.”

That principle was forged under pressure. In June 2017, Tiffany was called into Gold Command to run digital communications during the Grenfell Tower response. From that experience, she developed what she calls a “single-line strategy”: one sentence that gives a team the clarity to say no to distractions and yes to what matters. For Liverpool’s AI programme, the strategic touchstone is: We are here to help better outcomes for residents, citizens and visitors, enabled by AI.

Liverpool has also produced a resident-led Data and AI Charter, developed through a Civic Data Cooperative, with 11 principles setting out how the region’s residents want their data to be used. It has been recognised by central government as a model of best practice.

“If you say, ‘Please give me your data,’ they’ll say no,” Tiffany told me. “But if you say, ‘We could understand if your house was at risk of fire and get the fire brigade to you faster,’ they’ll say yes.”

This is what AI for good looks like when someone actually has to deliver it: a maturity assessment, a single-line strategy, a resident charter, and the honesty to admit you cannot fund everything in perpetuity.

Statistics of the month

🧠 The engagement slump
Global employee engagement fell to 20% in 2025, its lowest level since 2020 (remember what happened that year?), marking the first time Gallup has recorded two consecutive years of decline. Low engagement costs the world economy an estimated $10 trillion in lost productivity, equivalent to 9% of global GDP. Europe reports the lowest regional engagement at just 12%. (Gallup, State of the Global Workplace 2026)

📉 The manager crisis
Manager engagement dropped from 31% in 2022 to 22% in 2025, accounting for most of the wider engagement decline, according to the same Gallup report. Non-manager engagement has stayed roughly flat. In best-practice organisations, 79% of managers are engaged, nearly four times the global average. The technology works. The management doesn’t. (Gallup, State of the Global Workplace 2026)

🔐 The shadow AI blind spot
Two-thirds of UK organisations do not know what data is being shared with AI tools. A third admit employees are sharing data through external, unsanctioned tools. When 44% of UK workers have used unapproved AI in the past 30 days, and 39% have done so with confidential data, the question is no longer whether shadow AI is a problem. It is whether anyone is looking. (SailPoint)

🏙️ London’s AI exposure
More than a million Londoners are in jobs facing major change from AI, roughly one in five. The exposure is heaviest in the same entry-level and administrative roles that the IFOW session at Parliament identified as already being hollowed out. The future is arriving fastest for the people least prepared for it. (Mayor of London / GLA Economics)

🏥 The wellbeing strategy gap
Some 43% of UK companies do not have a formal health and wellbeing strategy in place. For 18%, simply offering benefits is the strategy. A further 13% offer support on an ad hoc basis. In a labour market where engagement has hit a five-year low, nearly half of UK employers are, to use a technical term, winging it. (Everywhen)

If you’re reading this – thank you – and haven’t yet subscribed, you can sign up for Go Flux Yourself (there should be a pop-up). Please feel free to share it with friends and colleagues, too. Each edition lands on the last day of the month.

Get in touch: oliver@pickup.media. I write, speak, and strategise on the future of work, AI, and human capability. For speaking enquiries, contact Pomona Partners.

Go Flux Yourself: Navigating Human-Work Evolution (No. 25)

TL;DR: January’s Go Flux Yourself explores the backlash against cognitive outsourcing, led by the generation most fluent in AI. Some 79% of Gen Z worry that AI makes people lazier; IQ scores are declining within families; and children are arriving at school unable to finish books. Welcome to the “stupidogenic society”. 

Image created by Nano Banana

The future

“Gen Z’s relationship with AI is fraught. Even as they use AI extensively, they harbor [sic] concerns about its long-term effects on human capability. It may be that, as young adults observe themselves and their peers offload more and more cognitive work to AI, they wonder whether convenience today brings diminished capacities tomorrow.”

The above quotation comes not from a Luddite technophobe or a concerned headteacher, but from Harvard Business Review, published a couple of days ago. The researchers surveyed nearly 2,500 American adults aged 18 to 28, the most AI-fluent generation on the planet, and discovered something that should give pause to anyone in the business of building intelligent machines: the people using AI most are the people most worried about what it’s doing to them.

This is rather like discovering that Cadbury employees are most concerned about chocolate addiction. It suggests they know something the rest of us don’t.

According to the HBR paper, 79% of adult Gen Zers worry that AI makes people lazier, while 62% worry it makes people less intelligent. One respondent offered a comparison that would make a public health official wince: “The mind is a muscle like any other. When you don’t use it, that muscle atrophies incredibly fast. Any regular use of AI to outsource thinking is as bad for you as a pack of cigarettes or a hit of heroin.”

Perhaps that’s overstating things. A ChatGPT query probably won’t give you lung cancer (although it is reckoned that being lonely has the equivalent mortality impact of smoking 15 cigarettes a day). Nonetheless, the underlying anxiety is worth taking seriously because it comes from people with no ideological opposition to the technology. They use it constantly. They just don’t like what’s happening to themselves.

The backlash against an insidious over-reliance on technology in general, and AI in particular, is, in other words, being spearheaded by digital-native users. This is the equivalent of a focus group for a new soft drink concluding that the product is delicious, highly refreshing, and almost certainly corroding their internal organs.

By the end of this decade, Gen Z will comprise around a third of the global workforce. These are the people who will inherit the AI systems we’re building today. And increasingly, they’re asking an existential question that their (current or prospective) employers haven’t yet considered: what happens to human capability when machines do our thinking for us? It’s a reasonable thing to wonder. Especially if you’re the one whose capability is at stake.

Meanwhile, it was a case of back to the future with the theme of this agenda-setting year’s World Economic Forum, held in mid-January in Davos, “a spirit of dialogue”. Not “innovation” or “transformation” or “the intelligent age”, but dialogue. Conversation. The radical proposition that perhaps we should talk to each other.

As Børge Brende, the Forum’s President, put it: “Dialogue is not a luxury in times of uncertainty; it is an urgent necessity.” The meeting drew record numbers: close to 65 heads of state and government, nearly 850 of the world’s top CEOs, and almost 100 unicorn founders and technology pioneers. All gathered, ostensibly, to talk. One hopes they did more listening than is typical at such events.

It’s a telling choice of theme. After years of breathless enthusiasm for disruption, the global elite has discovered the merits of slowing down and having a chat. Historically, the technology industry – especially on the other side of the Atlantic – has operated under the principle that it’s easier to ask forgiveness than to ask for permission. The fact that Davos is now hosting panels on “cognitive atrophy” suggests that even those who profit from disruption are starting to wonder whether they’ve disrupted something important.

At one such session, William Hague, the former Foreign Secretary and current Chancellor of the University of Oxford, offered a framing that deserves wider circulation: “As AI adoption accelerates, cognitive skills are becoming more economically valuable. Yet we are outsourcing more of those than ever before.”

This is a genuinely fascinating paradox. The more valuable thinking becomes, the less of it we seem inclined to do. It’s as if, upon discovering that physical fitness correlates with longevity, we collectively decided to spend more time on the sofa. Which, come to think of it, is exactly what happened.

Analytical thinking remains the most sought-after core skill among employers, according to WEF’s research. But human-centric skills such as empathy, active listening, and judgment can be quietly eroded without regular practice. “Core cognitive capabilities deteriorate over time,” Hague noted. “Will we divide as humanity between people whose mental faculties are enhanced by AI, and others for whom they are reduced?”

It’s the right question. And the answer, increasingly, looks like yes. We may be building a world with two kinds of people: those who use AI as a tool, and those who become tools of AI. The distinction is subtle but important. A carpenter uses a hammer. A nail does not.

A couple of weeks ago, I spoke with Daisy Christodoulou, Director of Education at No More Marking and one of the sharpest thinkers on teaching in Britain. She offered a diagnosis that’s both illuminating and magnificently ugly.

“We created an obesogenic society,” she told me. “Physical machines got better and better, the need for physical labour went down, and we ended up with conditions where it’s easy to become obese. I think we’re heading for something similar with thinking. As AI gets better, there’ll be less thinking for humans to do. And if you don’t practise something, it wastes away.”

Daisy calls it “a stupidogenic society”. It sounds like a dystopian novel that’s trying too hard. But the concept is precise: an environment where intelligent machines reduce the need for mental effort, and unused mental muscles atrophy accordingly.

Nobody set out to make people fat. We simply made food abundant, cheap, and delicious, then acted surprised when waistlines expanded. Similarly, nobody is setting out to make people stupid. We’re simply making thinking optional, then acting surprised when fewer people bother.

Daisy draws an illuminating historical parallel. “Read any diary from the past, Samuel Pepys’ is full of it, and you’ll find people playing instruments constantly. Not because they were more musical than us, but because if they didn’t play, they’d have no entertainment.” Now Spotify delivers better music than most amateurs can produce, so the incentive to practise has evaporated. Why spend 10,000 hours mastering the violin when you can summon Hilary Hahn with a tap?

This chimes with something an old University of St Andrews friend, Jono Stevens, told me over spicy pho (which seems to have cleared a nascent cold) this week. Jono co-founded Just So, a brilliant, human-story-focused film company, and he’s grappling with the same questions as a parent. His daughter announced that she didn’t see the point of piano lessons anymore. Why struggle through scales when Spotify can deliver Chopin on demand?

The piano student isn’t just learning to play the piano. She’s learning to persist, to tolerate frustration, to hear the gap between intention and execution and narrow it through practice. Spotify, however wonderful, teaches you nothing except how to make a playlist.

Is this where we’re going? Where we can’t be bothered to do the hard work of mastering things, whether it’s the piano, philosophy, or more practical cognitive skills?

“What happened with music over centuries is being sped up for writing because of AI,” Daisy argues. And writing, she stresses, isn’t merely communication. “There’s a limit to the level of complex thoughts you can have without the process of writing. If you offload that to AI, you risk losing the ability to develop sophisticated ideas at all.”

The act of putting thoughts into words forces a rigour that thinking alone does not. Anyone who has tried to explain something and realised mid-sentence that they don’t actually understand it will recognise this. Outsourcing writing is, in a very real sense, outsourcing thought itself.

The HBR researchers found that the dominant worry among Gen Z, raised by 68%, was “crowding out learning by doing”. As one respondent put it: “Bots do the work for people, so they don’t have to learn anything.” A study from MIT’s Media Lab last year backs this up: participants who used AI assistance showed reduced activation in brain regions associated with critical thinking and creative problem-solving.

The brain, it turns out, is not a hard drive. You can’t store capabilities and expect them to remain intact. It’s more like a garden: without regular tending, things wither. The Victorian educationalists who insisted that learning Latin was good for the mind were probably onto something, even if their reasoning was wrong. The struggle itself was the point.

Humans are “cognitive misers”, Daisy explains. “We instinctively want to conserve mental energy. That’s a good thing. The philosopher Alfred North Whitehead said civilisation expands by the number of things you can do without thinking about them. But that drive for progress has a byproduct: we just stop thinking as much.”

This is the great irony of efficiency. We automate tedious tasks so we can focus on the important ones, only to discover that the tedious tasks were training for the important ones.

The evidence predates smartphones. The Flynn effect, the observed rise in intelligence quotient scores across generations throughout the 20th century, has reversed in several developed nations from the 1990s onwards. The reverse Flynn effect shifted into a different gear in 2010, when the iPhone became ubiquitous. The most striking data comes from Scandinavia, where military conscription provides consistent testing across generations. Researchers have observed IQ declining within families: the third brother scoring lower than the second, who scored lower than the first. Same genetics, same household, but something cultural shifting between cohorts. “It’s not genetic,” Daisy emphasises. “It’s environmental.”

Colin Macintosh sees this play out in his classrooms every day. He’s the headmaster of Cargilfield, an Edinburgh prep school, and – full disclosure – my A-level English teacher at Shrewsbury School. He’s the reason I studied at St Andrews. When Colin reached out to discuss children’s attention spans, following December’s New Statesman piece on the under-16s social media ban in Australia, his concern was palpable, and this is a man who has spent decades dealing with 12-year-olds. He is not easily rattled.

“We’ve noticed that pupils with phones at home struggle more with sustained reading,” he told me. “Year 7 and 8: they can’t finish books. Their attention has been fragmented before they even arrive.”

Without due care and consideration, children arrive at secondary school already damaged, having spent too much of their formative years on a screen. Their capacity for sustained focus is compromised before anyone has had a chance to develop it. This is a failure of the environment in which education happens.

Colin has become an advocate for what Cal Newport calls “deep work”: the ability to focus without distraction on a cognitively demanding task. His school is minimising screen use rather than trying to use screens “in a better way”. No one-to-one devices. Deliberate cultivation of concentration as a skill, scaffolded by teachers, built up over time.

“Deep work doesn’t appear on the WEF’s list of future skills,” Colin observes. “But it underpins virtually everything else on that list. Critical thinking, complex problem-solving, creativity: none of them is possible without the ability to sustain focus.”

It’s rather like noting that “breathing” doesn’t appear on the list of skills needed to run a marathon.

The WEF’s scenario planning paper, Four Futures for Jobs in the New Economy: AI and Talent in 2030, published this month, inadvertently illustrates the point. After surveying over 10,000 executives globally, it finds a telling asymmetry: 54% expect AI to displace existing jobs, 45% expect it to increase profit margins, but only 12% expect it to raise wages. The gains flow to capital; the losses fall on labour. 

The paper’s proposed solution is a new category of worker: the “agent orchestrator”, someone who manages “portfolios of capable machines”, overseeing hundreds of AI systems simultaneously, defining objectives, evaluating outputs, and handling exceptions. It’s presented as the human role that survives automation. But consider what orchestration actually demands: task decomposition, systems thinking, quality judgement across multiple complex domains, the ability to spot when AI is subtly wrong rather than obviously broken.

These are not skills you develop while scrolling. They require sustained, deep cognitive work that our attention-fragmenting environment systematically erodes. We’re being told the future belongs to people who can concentrate intensely on complicated problems, while raising a generation that struggles to finish a book.

Colin revealed his lowest professional moment, from a couple of years ago. An Ofsted inspection in which colleagues panicked that inspectors hadn’t seen enough screens during lessons. “The next day, screens appeared everywhere. Madness.”

We’ve been so worried about preparing children for the digital future that we’ve undermined the cognitive foundations they need to thrive in it.

Not everyone agrees that technology is the enemy, however. Neil Trivedi, head of mathematics at MyEdSpace, teaches thousands of students simultaneously through TikTok. His results are remarkable: 51% of his students achieve grades 7-9 at GCSE, roughly triple the national average.

“Blanket bans push children underground,” Neil argues. “They remove adult visibility and ignore the genuine democratisation of excellent teaching that these platforms enable.” Students who might never access great maths teaching in their local school can now learn from qualified teachers anywhere.

This is genuinely good. The postcode lottery of educational quality is a scandal, and anything that addresses it deserves credit. But even Trivedi acknowledges the “wild west” nature of social media education, where credentials are rarely verified. “Do your due diligence,” he urges parents. “Verify qualifications, teaching experience, exam track records.”

The question isn’t whether technology can be beneficial. It clearly can. It’s whether we’re building the cognitive foundations children need to use it well. A calculator is a wonderful tool if you understand mathematics. If you don’t, it’s just a box that produces numbers you can’t evaluate.

On the Defying Cognitive Atrophy panel in Davos, Omar Abbosh, CEO at Pearson, described devices as “weapons of mass distraction”. The smartphone is not a neutral tool. It’s a device optimised by some of the world’s cleverest engineers to capture and hold attention (“behavioural cocaine”, as Aza Raskin, who designed the infinite scroll in 2006, called it). Giving one to a child and expecting them to use it wisely is like handing a teenager a bottle of vodka and expecting them to appreciate the subtle notes of grain.

At another Davos session, Dario Amodei, CEO of Anthropic, and Demis Hassabis, CEO of Google DeepMind, sat down for what was billed as “The Day After AGI”.

Amodei was characteristically direct about labour market impacts. “Half of entry-level white-collar roles could be affected within one to five years. We’re already seeing early signs at Anthropic: fewer junior and intermediate roles needed in software.”

Here we have a sitting CEO of one of the world’s leading AI companies describing what’s already happening in his own organisation.

Hassabis called for international coordination and minimum safety standards, suggesting “slightly slowing the pace to align societal readiness”. From a man whose company is racing to build artificial general intelligence. When even the competitors think they’re going too fast, it’s worth paying attention.

At another panel on parenting in an anxious world, speakers warned that after a decade of tech companies successfully “hacking” human attention, they are now poised to hack the fundamental human attachment system. The development of frictionless, sycophantic relationships with AI companions threatens to corrupt the blueprint for human connection that children form in their early years.

Attention is one thing. We can rebuild attention spans, with effort. But attachment? The way children learn to relate to other humans? If that gets corrupted, we’re not talking about a generation that struggles to read books. We’re talking about a generation that struggles to love.

Jonathan Haidt, author of The Anxious Generation, called for a “state of emergency”. Any application intended for children, the panel argued, must come with “a mountain of evidence” proving it is both effective and safe before being introduced. Ah, yes, the precautionary principle: the thing we abandoned somewhere around 2007, when the iPhone was launched.

Perhaps the starkest illustration of where we are came from OpenAI this month. The company advertised a new role: head of preparedness. The salary: $555,000. The job: defending against risks from ever more powerful AIs to human mental health, cybersecurity, and biological weapons, before worrying about the possibility that AIs may soon begin training themselves.

“This will be a stressful job,” CEO Sam Altman said, “and you’ll jump into the deep end pretty much immediately.”

If your business requires a dedicated position to defend against the risks it creates, you might consider creating fewer risks. But that would be naive. The train has left the station.

It has been reported that Altman has a sign above his desk that reads: “No one knows what happens next.” I’ve quoted it before, and it still reeks of diminished responsibility. But I find myself wondering: will Altman allow his own young children access to AI, social media, and chatbots? Many tech leaders, going back to Steve Jobs, have limited or banned their children’s access to the technologies they build for everyone else.

When drug dealers don’t use their own product, we draw conclusions.

There is, perhaps, a different path. Adam Hammond, whom I interviewed at the end of last year about IBM’s quantum computing programme, offered a useful distinction. “Unlike AI, where we have this amazing technology, and we’re now looking for the problems to solve with it, with quantum, we’ve actually got a pretty good idea of the problems we’ll be able to solve,” says the Business Leader of IBM Quantum across EMEA.

IBM expects to demonstrate quantum advantage within the next 12 months. The technology is designed to augment human expertise, not replace it. And critically, it requires human judgment to direct. No one is worried about quantum computers becoming our friends or raising our children.

Technology that serves human capability, rather than substituting for it. That’s the distinction we need to hold onto.

Daisy believes the tide is turning. “I’ve been saying the same things for 15 years: more in-person exams, fewer devices, more focus on concentration. I used to get real opposition. Now people just say, ‘Yeah, it’s got to happen.'”

Will we act before the damage becomes irreversible? A generation that cannot think is a generation that cannot solve problems, including the problem of how to think. At some point, the decline becomes self-reinforcing.

The present

I appeared on the Work is Weird Now podcast in the middle of the month, in a LinkedIn live session (with all the technical issues you would imagine), on the eve of the WEF’s AGM in Davos. We discussed what 2026 might hold for work, and I found myself struck by a tension that I suspect will define the year.

Days earlier, the Consumer Electronics Show in Las Vegas had showcased the year’s technological highlights. The theme was unmistakable: robotic AI, physical AI, humanoid machines that walk and talk and, we’re told, will soon fold our laundry. The energy was bullish. The future belongs to the machines. Very expensive machines that currently fall over on uneven surfaces, but machines nonetheless.

Then Davos happened, with “a spirit of dialogue” and panels on cognitive atrophy. In one city, technologists were racing to build autonomous systems. In another, policymakers were asking whether we’ve built too much, too fast. Accelerating and soul-searching at the same time. Driving at 100 miles per hour while earnestly discussing whether we should have taken a different road.

To attempt to capture more of what I have learnt and am thinking about, I’ve started a new weekly video called “Thank Flux It’s Friday” and even set up a YouTube channel, Go Flux Yourself. It’s rough and ready: two or three minutes of me talking through what I’ve seen and thought about during the week. The first edition featured me red-faced after a morning jog, too close to the camera, and wandering more than a drunken sailor. I’ve since invested in a vlogging kit (who have I become?). 

The idea emerged from a conversation with Simon Bullmore, Lead Facilitator on the Digital Marketing Strategy and Analytics programme at the London School of Economics, among other things, who coined a phrase I now can’t stop using: “AI slopportunity”. 

“AI slop” – low-quality, generic content churned out by generative AI – has reached such prominence that Australia’s Macquarie Dictionary named it Word of the Year for 2025. The term captures a growing weariness with content that technically exists but says nothing worth reading.

People are craving authentic human content because so much of what they encounter is machine-generated. A colossal majority of content on LinkedIn is now AI-written. You can spot it a mile off: the relentless enthusiasm, the suspiciously perfect structure, the way it says nothing while appearing to say something.

If I’m genuinely concerned about AI making us intellectually lazy, I need to do something about it. Humans need to be the lead dance partner. So I’m showing up, unscripted, every Friday – and now with a tripod.

The AI slopportunity is real because AI slop is everywhere. On LinkedIn a couple of weeks ago, I saw a post from someone who had bought a book on Amazon. It was a legitimate-looking business title with a proper cover and ISBN. Inside, at the end of every chapter, the author had left the ChatGPT response in the text. “Excellent flow and structure, Simon,” the AI had written. “This content is shaping up beautifully: informative, sharp, and just irreverent enough to keep readers engaged.”

The “author” had forgotten to delete the AI’s praise for his own work. Scaffolding still attached to the building. Except that the scaffolding is smarter than the building. As the LinkedIn poster wrote: “The bar is on the floor at this point.”

When slop becomes indistinguishable from substance, how do we know what to trust? Authenticity, even clumsy authenticity, has value. Perhaps especially clumsy authenticity.

Image created by Nano Banana

Who – or what – we can trust is now a multidimensional question. My son turns 12 later this year, the age I was when my father attempted to give me “the talk” about the birds and the bees. That chat is still necessary, of course. But I’m wondering whether we need an additional version: not about bodies, but about social media.

Call it the “feeds and the feels” chat. Or “the likes and the lies”. Suggestions welcome.

Here’s what I’d want my children to understand:

  1. You are the product. These apps are free because they’re selling your attention. When something is free, you’re not the customer. You’re the merchandise.
  2. Nothing disappears. Screenshots exist, and your digital fingerprints are on everything you create or comment on. What you post at 11 can follow you to job interviews at 21. The internet has a longer memory than you do, and considerably less forgiveness.
  3. Not everyone is real. Fake profiles, AI-generated people, adults pretending to be kids. If someone online feels off, they probably are. Trust your instincts. They evolved over millions of years. The people trying to fool you have had about 15.
  4. Likes are a vanity metric. Highlight reels aren’t real life. The person with 10,000 followers can be just as lonely as anyone else. Probably lonelier. Nobody counts their friends if they feel they have enough.
  5. Possibly most importantly, tell me when it goes wrong. You won’t be in trouble. I’d rather know than have you deal with it alone.

None of this is radical. But unlike sex education, there’s no curriculum, no embarrassed teacher with a banana. Parents are largely on their own, navigating platforms we don’t fully understand, making rules that feel arbitrary because they are.

The good news is that policy is catching up. A few days ago, the House of Lords voted 261 to 150 to back an amendment to the Children’s Wellbeing and Schools Bill that would ban social media for children under 16. The amendment, supported by a cross-party coalition of Conservative, Liberal Democrat, and cross-bench peers, puts significant pressure on the government to strengthen online safety regulations.

Baroness Hilary Cass, the paediatrician who led the landmark review into NHS treatment of children with gender dysphoria, was characteristically direct: “Direct harms are really overwhelming. This vote begins the process of stopping the catastrophic harm that social media is inflicting on a generation.”

The government has indicated it will try to overturn the amendment in the House of Commons, preferring a three-month consultation to explore options. One suspects the consultation will conclude that more consultation is needed.

Some 60 Labour MPs had already written to Keir Starmer urging him to “show leadership”. The letter notes that the average 12-year-old now spends 29 hours a week on a smartphone. More than a part-time job, except in this job the worker is the one being advertised to and manipulated by algorithms, and the pay is anxiety.

Indeed, more than 500 children a day are being referred for anxiety in England alone. For teenage boys, going from zero to five hours of daily social media use is associated with a doubling of depression rates. For girls, rates triple. If a medication produced these outcomes, it would be withdrawn immediately.

Cass offered a powerful analogy: “Consider nut allergies. When children died, their families demanded action to protect others. We did not tell grieving parents we needed more data, or that causation wasn’t conclusive, or that most children like nuts, so we wouldn’t act. Why is social media different?”

Good question. The answer, presumably, is that nuts don’t have lobbyists.

Cross-party consensus is forming at an unusual speed. Denmark, France, Norway, New Zealand, and Greece are expected to follow Australia’s lead. The generation we’re trying to protect may be the last one capable of understanding why it matters.

The past

In 1944, the Office of Strategic Services, the wartime predecessor to the Central Intelligence Agency, produced a classified document called the Simple Sabotage Field Manual. Its purpose was to instruct ordinary citizens in occupied Europe on how to disrupt enemy operations from within. Not through explosives or assassinations, but through bureaucratic friction.

The section on “General Interference with Organisations and Production” reads thus: “Insist on doing everything through ‘channels’. Never permit short-cuts to be taken in order to expedite decisions. Make speeches. Talk as frequently as possible and at great length. When possible, refer all matters to committees for ‘further study and consideration’. Attempt to make the committees as large as possible, never less than five. Bring up irrelevant issues as frequently as possible. Haggle over precise wordings of communications, minutes, resolutions.”

You’ve probably attended a meeting this week that followed this playbook exactly.

The super-smart and inspirational BS-cutter Rebecca Hinds, a Stanford-trained organisational behaviour expert, opens her new book, Your Best Meeting Ever, with this manual. The first chapter is titled “How Meetings Turned into Weapons of Mass Dysfunction”. Her argument is that the meeting culture that paralyses modern organisations wasn’t designed to help us work. It was literally designed to sabotage.

The central thesis is deceptively simple: treat meetings like products. “Meetings are your most powerful product,” Rebecca writes, “but like any great product, they require deliberate design, constant iteration, and the courage to rethink everything.” Reid Hoffman’s endorsement puts it well: “We should design great meetings like we design great products.”

Few organisations do this. Most default to 30 or 60-minute slots because that’s what the calendar offers. Most over-invite, adding spectators rather than stakeholders. Most use agendas as checklists rather than action plans. The result, Rebecca calculates, is that inefficient meetings cost American companies an estimated $1.4 trillion annually. That’s roughly Australia’s GDP.

Her 4D CEO test offers a brutal filter: does this meeting involve a decision, a debate, a discussion, or the development of yourself or your team? If not, it probably shouldn’t be a meeting. Status updates fail the test. Boss briefings fail the test. Even brainstorming, she argues, often fails the test.

And then there’s the aftermath. “You leave the meeting,” Rebecca told me when we spoke earlier this month. “The meeting doesn’t leave you.” She calls it a “meeting hangover”: the cognitive residue that lingers for minutes or hours after the calendar slot ends. Even good meetings are cognitively taxing. Bad ones are worse. The true cost is far higher than salary multiplied by time.

Which links back to cognitive outsourcing: if meetings were deliberately designed to derail enemy operations, what exactly are we designing when we let AI do our thinking? When we hand children devices that fragment their attention before they can read? When we build systems that make the hard work of mastery feel optional?

The OSS knew what it was doing. It understood that the accumulation of small inefficiencies, each one seemingly reasonable in isolation, could bring an organisation to its knees.

We’re doing the same thing to the human mind. And unlike the saboteurs of 1944, we’re not even doing it on purpose.

The manual was declassified in 2008. It’s available online, free to read (here). I recommend it. Rebecca’s book offers the antidote. The saboteurs had a plan. Now, at least, we have one too.

Tech for good example of the month: Alba Health

This month’s edition has focused heavily on what technology is doing to children. It seems fitting to end with an example of what technology can do for them.

Alba Health is a startup using AI and microbiome science to support childhood gut health. Designed for children aged 0 to 12, the platform offers research-grade gut analysis, personalised nutrition plans, and one-to-one coaching from certified health experts.

The science is increasingly clear: the gut microbiome in early childhood influences everything from immune function to mental health. Conditions like allergies, eczema, and asthma often have roots in gut dysbiosis that, if caught early, can be addressed through dietary and lifestyle changes rather than medication.

Founded in 2022 by molecular biologist Eleonora Cavani and microbiome expert Professor Willem M de Vos, Alba Health emerged from Cavani’s personal experience. Severe eczema that had plagued her for years resolved through gut-focused lifestyle changes. The question she asked: why isn’t this knowledge accessible to parents when it matters most?

What makes Alba Health interesting isn’t just the AI, which analyses microbiome data and generates personalised recommendations, but the human layer on top. Every family gets access to certified health coaches who translate the science into practical advice. The technology augments human expertise rather than replacing it.

This is the distinction Adam Hammond drew when discussing quantum computing: technology that serves human capability, rather than substituting for it. Alba Health uses AI to process complex microbiome data at scale, but the relationship, the trust, the behaviour change stays human.

It’s also preventive rather than reactive. Instead of waiting for chronic conditions to develop and then treating them, Alba Health aims to intervene before the damage is done. We could all learn a lot from Alba Health’s approach.

Do you know of a “tech for good” company, initiative, or innovation that deserves a spotlight? I’m building a database of examples for future editions. Drop me a line at oliver@pickup.media.

Statistics of the month

🌊 AI tsunami warning
The head of the International Monetary Fund, Kristalina Georgieva, told Davos that AI will be “a tsunami hitting the labour market”, with young people worst affected. The IMF expects 60% of jobs in advanced economies to be affected by AI – enhanced, eliminated, or transformed – and 40% globally. (IMF)

😰 Job anxiety rises
Some 27% of UK workers worry their jobs could disappear within five years due to AI. Meanwhile, 56% say their employers are already encouraging AI use at work. The gap between encouragement and reassurance is telling. (Randstad Workmonitor 2026)

⏱️ 85 seconds to midnight
The Doomsday Clock advanced to 85 seconds to midnight, the closest it has ever been to catastrophe in its 79-year history. The Bulletin of the Atomic Scientists cited nuclear tensions, climate breakdown, and, for the first time, the increasing sophistication of large language models as factors. In 2017, when I wrote about it for Raconteur to mark its 70th anniversary, it stood at two and a half minutes, the closest to the apocalypse since the 1950s. Now we’re measuring in seconds. (Bulletin of the Atomic Scientists)

💔 AI as boyfriend
Two-thirds of Gen Z adults use AI chatbots as a replacement for Google searches. But the social uses are growing: 32% turn to AI for relationship or life advice, 23% use chatbots “as a friend”, and one in ten use an AI chatbot “as a girlfriend or boyfriend”. (Harvard Business Review)

📸 Deepfake abuse industrialised
A Guardian investigation identified at least 150 Telegram channels offering AI-generated “nudified” photos and videos, with users in countries from the UK to Brazil, China to Nigeria. Some charge fees to create deepfake pornography of any woman from a single uploaded photo. The industrialisation of digital abuse. (The Guardian)

⚖️ The other billion
Obesity now affects over one billion people worldwide, a figure projected to double by 2035 on current trends. As we build a “stupidogenic society” that atrophies cognitive muscles, it’s worth remembering we’ve already built an obesogenic one. The pattern is familiar. (JAMA)

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Go Flux Yourself: Navigating the Future of Work (No. 11)

TL;DR: November’s Go Flux Yourself channels the wisdom of Marcus Aurelius to navigate the AI revolution, examining Nvidia’s bold vision for an AI-dominated workforce, unpacks Australia’s landmark social media ban for under-16s, and finds timeless lessons in a school friend’s recovery story about the importance of thoughtful, measured progress …

Image created on Midjourney with the prompt “a dismayed looking Roman emperor Marcus Aurelius looking over a world in which AI drone and scary warfare dominates in the style of a Renaissance painting”

The future

“The happiness of your life depends upon the quality of your thoughts.” 

These sage – and neatly optimistic – words from Marcus Aurelius, the great Roman emperor and Stoic philosopher, feel especially pertinent as we scan 2025’s technological horizon. 

Aurelius, who died in 180 and became known as the last of the Five Good Emperors, exemplified a philosophy that teaches us to focus solely on what we can control and accept what we cannot. He offers valuable wisdom for an AI-driven future, to communities still suffering a psychological form of long COVID born of the pandemic’s collective trauma, and weighed down by deep uncertainty and mistrust as geopolitical tensions and global temperatures rise.

The final emperor in the relatively peaceful Pax Romana era, Aurelius seemed a fitting person to quote this month for another reason: I’m flying to the Italian capital this coming week, to cover CSO 360, a security conference that allows attendees to take a peek behind the curtain – although I’m worried about what I may see. 

One of the most eye-popping lines from last year’s conference in Berlin was that there was a 50-50 chance that World War III would be ignited in 2024. One could argue that while there has not been a Franz Ferdinand moment, the key players are manoeuvring their pieces on the board. Expect more on this cheery subject – ho, ho, ho! – in the last newsletter of the year, on December 31.

Meanwhile, as technological change accelerates and AI agents increasingly populate our workplaces (“agentic AI” is the latest buzzword, in case you haven’t heard), the quality of our thinking about their integration – something we can control – becomes paramount.

In mid-October, Jensen Huang, Co-Founder and CEO of tech giant Nvidia – which specialises in graphics processing units (GPUs) and AI computing – revealed on the BG2 podcast that he plans to shape his workforce so that it is one-third human and two-thirds AI agents.

“Nvidia has 32,000 employees today,” Huang stated, but he hopes the organisation will have 50,000 employees and “100 million AI assistants in every single group”. Given my focus on human-work evolution, I initially found this concept shocking, even appalling. But perhaps I was too hasty to judge.

When, a couple of weeks ago, I interviewed Daniel Vassilev, Co-Founder and CEO of Relevance AI, which builds virtual workforces of AI agents that act as a seamless extension of human teams, his perspective on Huang’s vision was refreshingly nuanced. He provided an enlightening analogy about throwing pebbles into the sea.

“Most of us limit our thinking,” the San Francisco-based Australian entrepreneur said. “It’s like having ten pebbles to throw into the sea. We focus on making those pebbles bigger or flatter, so they’ll go further. But we often forget to consider whether our efforts might actually give us 20, 30, or even 50 pebbles to throw.”

His point cuts to the heart of the AI workforce debate: rather than simply replacing human workers, AI might expand our collective capabilities and create new opportunities. “I’ve always found it’s a safe bet that if you give people the ability to do more, they will do more,” Vassilev observed. “They won’t do less just because they can.”

This positive yet grounded perspective was echoed in my conversation with Five9’s Steve Blood, who shared fascinating insights about the evolution of workplace dynamics, specifically in the customer experience space, when I was in Barcelona in the middle of the month reporting on his company’s CX Summit. 

Blood, VP of Market Intelligence at Five9, predicts a “unified employee” future where AI enables workers to handle increasingly diverse responsibilities across traditional departmental boundaries. Rather than wholesale replacement, he envisions a workforce augmented by AI, where employees become more valuable by leveraging technology to handle multiple functions.

(As an aside, Blood predicts the customer experience landscape of 2030 will be radically different, with machine customers evolving through three distinct phases. Starting with today’s ‘bound’ customers (like printers ordering their own ink cartridges exclusively from manufacturers), progressing to ‘adaptable’ customers (AI systems making purchases based on user preferences from multiple suppliers), and ultimately reaching ‘autonomous’ customers, where digital twins make entirely independent decisions based on their understanding of our preferences and history.)

The quality of our thinking about AI integration becomes especially crucial when considering what SailPoint’s CEO Mark McClain described to me this month as the “three V’s”: volume, variety, and velocity. These parameters no longer apply to data alone; they’re increasingly relevant to the AI agents themselves. As McClain explained: “We’ve got a higher volume of identities all the time. We’ve got more variety of identities, because of AI. And then you’ve certainly got a velocity problem here where it’s just exploding.” 

This explosion of AI capabilities brings us to a critical juncture. While Nvidia’s Huang envisions AI employees as being managed much like their human counterparts, assigned tasks, and engaged in dialogues, the reality might be more nuanced – and handling security permissions will need much work, which is perhaps something business leaders have not thought about enough.

Indeed, AI optimism must be tempered with practical considerations. The cybersecurity experts I’ve met recently have all emphasised the need for robust governance frameworks and clear accountability structures. 

Looking ahead to next year, organisations must develop flexible frameworks that can evolve as rapidly as AI capabilities. The “second mouse gets the cheese” approach – waiting for others to make mistakes first – may no longer be viable in an environment where change is constant and competition fierce, as panellist Sue Turner, Founding Director of AI Governance, explained during a Kolekti roundtable on the progress of generative AI, held on ChatGPT’s second birthday, November 28.

Successful organisations will emphasise complementary relationships between human and AI workers, requiring a fundamental rethink of traditional organisational structures and job descriptions.

The management of AI agent identities and access rights will become as crucial as managing human employees’ credentials, presenting both technical and philosophical challenges. Workplace culture must embrace what Blood calls “unified employees” – workers who can leverage AI to operate across traditional departmental boundaries. Perhaps most importantly, organisations must cultivate what Marcus Aurelius would recognise as quality of thought: the ability to think clearly and strategically about AI integration while maintaining human values and ethical considerations.

As we move toward 2025, the question isn’t simply whether AI agents will become standard members of the workforce – they already are. The real question is how we can ensure this integration enhances rather than diminishes human potential. The answer lies not in the technology itself, but in the quality of our thoughts about using it.

Organisations that strike and maintain this balance – embracing AI’s potential while preserving human agency and ethical considerations – will likely emerge as leaders in the new landscape. Ultimately, the quality of our thoughts about AI integration today will determine the happiness of our professional lives tomorrow.

The present

November’s news perfectly illustrates why we need to maintain quality of thought when adopting new technologies. Australia’s world-first decision to ban social media for under-16s, a bill passed a couple of days ago, marks a watershed moment in how we think about digital technology’s impact on society – and offers valuable lessons as we rush headlong into the AI revolution.

The Australian bill reflects a growing awareness of social media’s harmful effects on young minds. It’s a stance increasingly supported by data: new Financial Times polling reveals that almost half of British adults favour a total ban on smartphones in schools, while 71% support collecting phones in classroom baskets.

The timing couldn’t be more critical. Ofcom’s disturbing April study found nearly a quarter of British children aged between five and seven owned a smartphone, with many using social media apps despite being well below the minimum age requirement of 13. I pointed out in August’s Go Flux Yourself that EE recommended that children under 11 shouldn’t have smartphones. Meanwhile, University of Oxford researchers have identified a “linear relationship” between social media use and deteriorating mental health among teenagers.

Social psychologist Jonathan Haidt’s assertion in The Anxious Generation that smart devices have “rewired childhood” feels particularly apposite as we consider AI’s potential impact. If we’ve learned anything from social media’s unfettered growth, it’s that we must think carefully about technological integration before, not after, widespread adoption.

Interestingly, we’re seeing signs of a cultural awakening to technology’s double-edged nature. Collins Dictionary’s word of the year shortlist included “brainrot” – defined as an inability to think clearly due to excessive consumption of low-quality online content. While “brat” claimed the top spot – a word redefined by singer Charli XCX as someone who “has a breakdown, but kind of like parties through it” – the inclusion of “brainrot” speaks volumes about our growing awareness of digital overconsumption’s cognitive costs.

This awareness is manifesting in unexpected ways. A heartening trend has emerged on social media platforms, with users pushing back against online negativity by expressing gratitude for life’s mundane aspects. Posts celebrating “the privilege of doing household chores” or “the privilege of feeling bloated from overeating” represent a collective yearning for authentic, unfiltered experiences in an increasingly synthetic world.

In the workplace, we’re witnessing a similar recalibration regarding AI adoption. The latest Slack Workforce Index reveals a fascinating shift: for the first time since ChatGPT’s arrival, almost exactly two years ago, adoption rates have plateaued in France and the United States, while global excitement about AI has dropped six percentage points.

This hesitation isn’t necessarily negative – it might indicate a more thoughtful approach to AI integration. Nearly half of workers report discomfort admitting to managers that they use AI for common workplace tasks, citing concerns about appearing less competent or lazy. More tellingly, while employees and executives alike want AI to free up time for meaningful work, many fear it will actually increase their workload with “busy work”.

This gap between AI urgency and adoption reflects a deeper tension in the workplace. While organisations push for AI integration, employees express fundamental concerns about using these tools.

This more measured approach echoes broader societal concerns about technological integration. Just as we’re reconsidering social media’s role in young people’s lives, organisations are showing due caution about AI’s workplace implementation. The difference this time? We might actually be thinking before we leap.

Some companies are already demonstrating this more thoughtful approach. Global bank HSBC recently announced a comprehensive AI governance framework that includes regular “ethical audits” of their AI systems. Meanwhile, pharmaceutical giant AstraZeneca has implemented what they call “AI pause points” – mandatory reflection periods before deploying new AI tools.

The quality of our thoughts about these changes today will indeed shape the quality of our lives tomorrow. That’s the most important lesson from this month’s developments: in an age of AI, natural wisdom matters more than ever.

These concerns aren’t merely theoretical. Microsoft’s Copilot AI spectacularly demonstrated the pitfalls of rushing to deploy AI solutions this month. The product, designed to enhance workplace productivity by accessing internal company data, became embroiled in privacy breaches, with users reportedly accessing colleagues’ salary details and sensitive HR files. 

When fewer than 4% of IT leaders surveyed by Gartner said Copilot offered significant value, and Salesforce’s CEO Marc Benioff compared it to Clippy – Office 97’s notoriously unhelpful cartoon assistant – it highlighted a crucial truth: the gap between AI’s promise and its current capabilities remains vast. 

As organisations barrel towards agentic AI next year, with semi-autonomous bots handling everything from press round-ups to customer service, Copilot’s stumbles serve as a timely reminder about the importance of thoughtful implementation.

Related to this point is the looming threat to authentic thought leadership. Nina Schick, a global authority on AI, predicts that by 2025, a staggering 90% of online content will be synthetically generated by AI. It’s a sobering forecast that should give pause to anyone concerned about the quality of discourse in our digital age.

If nine out of ten pieces of content next year will be churned out by machines learning from machines learning from machines, we risk creating an echo chamber of mediocrity, as I wrote in a recent Pickup_andWebb insights piece. As David McCullough, the late American historian and Pulitzer Prize winner, noted: “Writing is thinking. To write well is to think clearly. That’s why it’s so hard.”

This observation hits the bullseye of genuine thought leadership. Real insight demands more than information processing; it requires boots on the ground and minds that truly understand the territory. While AI excels at processing vast amounts of information and identifying patterns, it cannot fundamentally understand the human condition, feel empathy, or craft emotionally resonant narratives.

Leaders who rely on AI for their thought leadership are essentially outsourcing their thinking, trading their unique perspective for a synthetic amalgamation of existing views. In an era where differentiation is the most prized currency, that’s more than just lazy – it’s potentially catastrophic for meaningful discourse.

The past

In April 2014, Gary Mairs – a gregarious character in the year above me at school – drank his last alcoholic drink. Broke, broken and bedraggled, he entered a church in Seville and attended his first Alcoholics Anonymous meeting. 

His life had become unbearably – and unbelievably – chaotic. After moving to Spain with his then-girlfriend, he began to enjoy the cheap cervezas a little too much. Eight months before he quit booze, Gary’s partner left him, unable to cope with his endless revelry. This opened the beer tap further.

By the time Gary gave up drinking, he had maxed out 17 credit cards, his flatmates had turned on him, and he was hundreds of miles from anyone who cared – which is why he signed up for AA. But what was it like?

I interviewed Gary for a recent episode of Upper Bottom, the sobriety podcast (for people who have not reached rock bottom) I co-host, and he was reassuringly straight-talking. He didn’t make it past step three of the 12 steps: he couldn’t bring himself to supplicate to a higher power. 

However, when asked about the key changes on his road to recovery, Gary pointed to good habits, healthy practices, and meditation. Marcus Aurelius would approve. 

In his Meditations, written as private notes to himself nearly two millennia ago, Aurelius emphasised the power of routine and self-reflection. “When you wake up in the morning, tell yourself: The people I deal with today will be meddling, ungrateful, arrogant, dishonest, jealous, and surly. They are like this because they can’t tell good from evil,” he wrote. This wasn’t cynicism but rather a reminder to accept things as they are and focus on what we can control – our responses, habits, and thoughts.

Gary’s journey from chaos to clarity mirrors this ancient wisdom. Just as Aurelius advised to “waste no more time arguing what a good man should be – be one”, Gary stopped theorising about recovery and simply began the daily practice of better living. No higher power was required – just the steady discipline of showing up for oneself.

This resonates as we grapple with AI’s integration into our lives and workplaces. Like Gary discovering that the answer lay not in grand gestures but in small, daily choices, perhaps our path forward with AI requires similar wisdom: accepting what we cannot change while focusing intently on what we can – the quality of our thoughts, the authenticity of our voices, the integrity of our choices.

As Aurelius noted: “Very little is needed to make a happy life; it is all within yourself, in your way of thinking.” 

Whether facing personal demons or technological revolution, the principle remains the same: quality of thought, coupled with consistent practice, lights the way forward.

Statistics of the month

  • Exactly two-thirds of LinkedIn users believe AI should be taught in high schools. Additionally, 72% observed an increase in AI-related mentions in job postings, while 48% stated that AI proficiency is a key requirement for the companies they applied to.
  • Only 51% of respondents to Searce’s Global State of AI Study 2024 – which polled 300 C-suite and senior technology executives at US and UK organisations with at least $500 million in revenue – said their AI initiatives had been very successful. A further 42% said success had been only partial.
  • International Workplace Group findings indicate just 7% of hybrid workers describe their 2024 hybrid work experience as “trusted”, hinting at an opportunity for employers to double down on trust in the year ahead.

Stay fluxed – and get in touch! Let’s get fluxed together …

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

WTF is Quittok – and why Gen Z is increasingly doing it when they leave jobs

You’ve heard of quiet quitting but what about loud quitting?

Last year, there was a great deal of noise about quiet quitting — namely, doing only the minimum required by one’s job description. Gen Zers led that trend. (Click here for WorkLife’s guide to The Quiet Workplace).

Now many young professionals are taking a very different approach when heading for the exit, being as loud as possible by live-streaming their resignations on social media. Their platform of choice: TikTok. Hence the inevitable hashtag #quittok.

So what exactly is quittok, where does it come from, and what are the pros and cons?

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in April 2023 – to read the complete piece, please click HERE.

What ‘human-centric’ tech is fixing HR challenges

Has anyone had it more difficult at work than human resources professionals in the last three years?

First, they had to manage and enable a workforce that suddenly couldn’t come into the office due to lockdowns – aside from frontline sectors with increasingly stressed staff, such as healthcare, the emergency services, and education. 

Next, the great resignation trend, spurred by the pandemic and elongated by, in particular, Gen Zers’ innovative approach to career development and well-being, made life even more challenging. 

On top of most companies rethinking their work policies – adding to the HR workload – came the critical task of attracting and retaining workers amid economic uncertainty, a tightening labor market, and technological advancement, matched by the need to train and upskill staff so organizations could operate in the years ahead.

No wonder a new global survey published by Humaans – a London-headquartered employee management software company – found that 54% of the 1,000 HR managers quizzed considered their roles to have grown more complex, as they navigated an increasingly rocky landscape with ever-shrinking teams and fewer resources.

Thankfully, various HR technology tools have made their working lives more manageable. And there is little surprise that almost half (46%) of HR leaders are planning to invest more in HR tech, according to Gartner research shared in early March. 

But what exactly is the most effective HR tech?

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in March 2023 – to read the complete piece, please click HERE.

‘Full-time no experience’: How cost-of-living crisis is shaping labor trends

Newly released data from global job-search platform Indeed has confirmed what most people already suspected: that the cost-of-living crisis is shaping labor trends, and specifically what prospective employees want from their job. 

While some findings were predictable – for example, there were more searches for “full-time no experience” positions, zero-hour contracts, and greater demand for weekly pay in the three months leading up to Jan. 2023 compared to the same period a year ago – the alarmingly steep rise in these areas might shock business leaders and human resources professionals.

For instance, in the U.K., searches on Indeed for zero-hour contracts were up 70%, requests for part-time work had increased by 65%, and “weekend-only” searches jumped 120%. Demand for weekly pay surged by 122%, “full-time no experience” searches rose 219%, and “support worker no experience” was 337% higher.

Ultimately, the results indicated that recruitment models, learning and development and employee experience should urgently be modernized to keep pace with and accommodate workers’ needs and wants.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in March 2023 – to read the complete piece, please click HERE.

Businesses are not putting people in the right jobs – how tech can help

Most business leaders who offer variants of the cliché that “people are the company’s greatest asset” seldom match words with deeds. More worrying, though, is that people are not being matched to jobs in which they can excel – now more than ever. 

Alarmingly, a vast majority of organizations were taking the wrong, outdated approach to managing and developing human capital, argued professor Erik Brynjolfsson, director of the Digital Economy Lab at the Stanford Institute for Human-Centered AI, and arguably the world’s leading expert on the role of digital technology in improving productivity.

“Human capital is a $220 trillion asset in the U.S. – bigger than all the other assets put together, and about ten times the country’s gross domestic product,” said Prof. Brynjolfsson. “The most important asset on the planet is the one we’ve been measuring the worst.” 

As a result, human capital has been “probably the most misallocated asset on the planet. Businesses are not putting the right people in the right jobs; they’re not hiring, firing, and reassigning where they need to be doing it.”

This gloomy analysis is a lose-lose for employer, employee, and society, added Brynjolfsson. “Think of how many people are not in the right job, living lives of quiet desperation,” he said. 

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in March 2023 – to read the complete piece, please click HERE.

‘These challenges will only deepen’: Confessions of a PR exec on mounting hybrid-working pressures

Numerous studies indicate that middle managers are feeling the squeeze in the post-pandemic rush to move to hybrid- and remote-working models. Further, they are not being adequately supported, financially or otherwise.

At the start of 2023, Gartner identified “managers will be sandwiched by leader and employee expectations” as one of the top nine workplace predictions for chief human resource officers this year. Workplace culture and recognition firm O.C. Tanner’s 2023 Global Culture Report, published last September, found that 41% of U.K. managers felt pressured to choose between what their leaders want and the demands of their direct reports.

For WorkLife’s latest installment of Confessions, where anonymity is traded for candor, a senior PR executive based in London shared how rising pressure to manage expectations from above and below is unsustainable, and she feels unsupported and under-compensated. She’s currently looking for another job.

To what extent is managing a hybrid team making you more squeezed and why?

Managing a remote team in a distributed environment requires more time to support junior staff members. While some junior team members thrive, others – unaccustomed to keeping up the pace from home – fall behind.

To some extent, it’s understandable. After all, we are individuals who work well in different environments. However, it’s the responsibility of senior leadership to step in and resolve these challenges early within an employee’s onboarding cycle. Unfortunately, this often doesn’t happen, allowing these challenges to deepen and develop over time.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in February 2023 – to read the complete piece, please click HERE.

‘Protirement’ is trending again – but ageism remains rife

In late January, Jeremy Hunt, the U.K. chancellor of the exchequer, invoked the spirit of Uncle Sam, who had implored Americans to enlist for World War I over a century earlier. “I want YOU for the U.S. Army,” read the caption on the four million recruitment posters – featuring the scowling, pointing, bearded fictitious character – plastered across the country. 

With, at the last count, 1.1 million job vacancies to fill in the U.K., Hunt adopted a similarly commanding tone, this time to persuade troops to rejoin the workforce and ease the war for talent. “To those who retired early after the pandemic or haven’t found the right role after furlough, I say ‘Britain needs you,’” he said. “We will look at the conditions necessary to make work worth your while.”

This plea was part of a campaign to encourage the 630,000 people who left the U.K. workforce between 2019 and 2022 – so-called “protirees” – to return to employment and help the country fight off the recession.

However, more recent research from the Chartered Management Institute (CMI) that surveyed more than 1,000 managers working in U.K. businesses and public services indicated firms are overlooking older people and instead opting for younger workers. Indeed, just 42% of respondents were open “to a large extent” to hiring people aged between 50 and 64 years old.

How, then, can protirees who want to return to employment be better welcomed by organizations so that their considerable talents are not squandered? 

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in February 2023 – to read the complete piece, please click HERE.

Why ‘re-recruiting’ existing employees is critical for 2023

As the long tail of the Great Resignation continues to swish and sting, labor markets contract and economic uncertainty bites, organizations should make every effort in 2023 to hold on to their employees. More specifically, they should “re-recruit” workers already at the company, urged Microsoft’s Liz Leigh-Bowler.

To support the case for re-recruiting, the product marketing leader, based in Epsom, U.K., cited the results of Microsoft’s recent global hybrid work survey, which captured answers from over 20,000 employees in 11 countries. Of the many telling statistics surfaced by the report, she said a handful stood out on this subject.

For example, two-thirds of employees would stay longer at their company if it were easier to switch jobs internally. Similarly, 76% of respondents would remain with their employer if they could benefit more from learning-and-development support. 

Unsurprisingly, without growth opportunities, most workers across all levels would depart. Without chances to develop, 68% of business decision-makers would not hang around. Worryingly, 55% of all employees reckoned the best way for them to learn or enhance skills would be to change employers. 

The level of workforce thirst for development has never been higher, according to the research. In fact, the opportunity to learn and grow is the number-one driver of a great work culture – a jump from ninth position in the rankings in 2019.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in January 2023 – to read the complete piece, please click HERE.

How recruitment firms are embracing flexible working policies

Are recruitment firms practicing what they preach when it comes to flexible working?

After all, these organizations have had a front-row seat as workforce trends have evolved over the last three years, with growing demand for flexibility and, from some candidates, for partly or fully remote roles.

To find out how the most pioneering recruitment firms have changed their working methods, WorkLife spoke to various organizations within the industry.

Here we consider the challenges and opportunities of embracing a four-day week – aka “Flex Friday” – digital detox holidays, and supporting employees to achieve the optimal work-life balance.

This article is the third of a three-part series in which Digiday’s future-of-work platform, WorkLife, rounds up a range of flexible models used by employers in different sectors.

The full version of this piece was first published on WorkLife, in December 2022. To read the complete piece, please click HERE. And to read the other two articles in the series please use the links below.
What media and marketing execs have learned from flexible-working experiments
Remote-first, WFA, nine-day weeks: Flexible working experiments of 2022

HR teams admit fault for why most new hires aren’t working out

Most human resource departments across the planet are feeling deep buyer’s remorse, according to new research.

Thomas International, a talent assessment platform provider, surveyed 900 HR professionals globally and found three in five (60%) new hires are not working out. And the majority of respondents blamed themselves for effectively taking shortcuts that turned out to be dead ends.

Nearly half (49%) of hiring managers said recruits were unsuccessful because of a “poor fit between the candidate and the role,” and 74% admitted to compromising candidate quality due to time pressures in response to the Great Resignation and a tight labor market.

It seems that this post-job-move remorse hasn’t just been a burden on HR teams, but the new hires themselves. “We see a higher level of regretted choices because things have not worked out the way the candidate had hoped,” said Piers Hudson, senior director of Gartner’s HR functional strategy and management research team, referencing trends his organization’s proprietary data has highlighted.

However, he added that overall, there has been an “elevation in expectations,” particularly among younger generations, that employers are finding it difficult to live up to.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in November 2022 – to read the complete piece, please click HERE.

Cost-of-living worries prompt workers to seek higher-paid jobs

Sorry kids, Santa’s sack might not be so full this year. According to new research, an alarming 88% of U.K. workers are unsure whether their current role can sustain them financially during this economically uncertain period.

Further, productivity platform ClickUp’s study, published in late November, calculated that 26% of Britain-based employees are planning to switch jobs because of the cost-of-living crisis — inflation hit 11.1% in October, a 41-year high — and the desperate need to earn more money.

“With the highest inflation rate among the G7 countries [consisting of Canada, France, Germany, Italy, Japan, the U.K. and the U.S.], there’s no doubt almost every working family in the U.K. is feeling the pinch,” said Alan Bradstock, a senior insolvency practitioner at Accura Accountants in London. “Many have no choice but to seek higher paid work.”

Citizens Advice, a U.K. charity, said the number of employed people seeking crisis support between July and September jumped 150% compared to the same three-month span two years ago. “Every day, our advisers hear stories of people skipping meals, going without essentials, and then coming to us when they simply can’t cut back anymore,” said Morgan Wild, the charity’s head of policy. “This cannot continue.”

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in November 2022 – to read the complete piece, please click HERE.

‘It’s a future that’s upon us’: Will robots ever have the top jobs?

How would you feel about having a robot boss? And not just a line manager but the head honcho of the company.

You might think this is an idle, hypothetical question. Indeed, back in 2017, then-Alibaba CEO Jack Ma stated we are mere decades from having robots at the helm of organizations. He predicted that by 2047, a robot CEO would make the cover of Time magazine.

And yet, those provocative guesstimates from five years ago now look generous. In late August, the world’s first artificial intelligence-powered, humanoid robot CEO, called Mika, was appointed to the top job at Dictador, a luxury rum company.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in October 2022 – to read the complete piece, please click HERE.

How fair are employers really being about pay raises during the cost-of-living crisis?

You’d think the resignation of U.K. Prime Minister Liz Truss would have sent waves of relief across the country. Perhaps it did in some ways, but the scorched earth she left behind, as a result of her cabinet’s hasty economic decisions, has U.K. public morale at an all-time low.

With inflation at a 40-year high and employees mired in a cost-of-living crisis that looks set to deepen, financial anxiety is sky-high. The worries pile up — including that some may not be able to afford their mortgage this time next year, due to the latest changes made by the Bank of England in response to the disastrous “mini budget.” It’s clear we’re in for a shaky recovery.

A new Indeed and YouGov survey of 2,500 U.K. workers reaffirmed this. It showed 52% don’t think they are currently being paid enough to weather the current cost-of-living crisis. And that has a direct correlation to employees feeling undervalued, found the same report. Notably, healthcare and medical staff were most likely to feel underpaid (64%). Next on the list of unhappy workers were those who work in hospitality and leisure (61%) and legal (58%) industries.

To boost bank balances, 13% of those surveyed asked their employers for a pay raise. However, despite the real-earnings squeeze, 61% of those who requested an increase either received less than they wanted or nothing at all. Little wonder that overall, 9% had applied for a new role, while others resorted to taking on additional jobs.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in October 2022 – to read the complete piece, please click HERE.

Time to break the stereotypes about Gen Z attitudes to work

In the scramble to attract and retain the best young talent, organizations are over-relying on stereotypes to try to understand what makes Generation Z tick.

Sure, Generation Zers have unique perspectives on careers and how to succeed in the workforce that differ from those of previous generations, but in the race to better understand an entire generation, important details are falling through the cracks.

For instance, Gen Z bore the brunt of the criticism for harboring so-called lazy work ethics like “quiet quitting.” But that falls short of the full truth, talent execs have asserted.

Meanwhile, new research has emerged that disproves another myth: that Gen Zers don’t want to work in an office, ever. It turns out a large proportion does want to experience in-person workplace environments. Indeed, 72% of 4,000 U.K. Gen Zers said they want to be in the office between three and five days a week, according to research published in September by Bright Network, a graduate careers and employment firm.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in October 2022 – to read the complete piece, please click HERE.

Is long-term employee retention a losing battle?

Is the concept of a job for life dead?

The mass reassessment of careers people have undergone over the past few years – described by many as the Great Resignation, by others as the Great Reshuffle – is showing no signs of calming down. In fact, in the U.K., the trend seems to be accelerating.

More than 6.5 million people (20% of the U.K. workforce) are expected to quit their jobs in the next 12 months, according to estimates from the Chartered Institute of Personnel and Development (CIPD), which published the data in June after surveying more than 6,000 workers. That’s up from 2021, when 16% of the U.K. workforce said they planned to quit within a year, according to the CIPD. Meanwhile, in March, Microsoft’s global Work Trend Index found that 52% of Gen Zers and Millennials — the two generations that represent the vast majority of the workforce — were likely to consider changing jobs within the following year.

Tania Garrett, chief people officer at Unit4, a global cloud software provider for services companies, argued that it is time for organizations to get real — they are no longer recruiting people for the long term. Instead, they should embrace this reality, and stop creating rewards that encourage more extended service from employees. 

This article was first published on Digiday’s future-of-work platform, WorkLife, in October 2022 – to read the complete piece, please click HERE.

Amid economic turmoil, HR budgets are under threat

As the specter of a global financial crash looms, businesses are pruning budgets, and human resources departments are first in line for the chop, according to new research by HR software company Personio.

More than half (55%) of HR managers have either had their budgets slashed already, or expect them to be cut in the coming months, according to the report, which surveyed 500 HR professionals and 1,000 workers in the U.K. and Ireland. Fifty-two percent of the respondents said they’re used to their department’s budget being the first to get trimmed when businesses tighten their belts.

But this approach is wrongheaded and will have lasting ramifications, argued Ross Seychell, Personio’s chief people officer. “HR should be even more of a priority now, not less,” he said.

That’s because areas typically within the HR remit — like company culture and employee experience — are more important than ever, as organizations continue to battle to get people into the office and ensure the experience is worthwhile when they do. All at a time when talent retention is just as vital.

This article was first published on Digiday’s future-of-work platform, WorkLife, in October 2022 – to read the complete piece, please click HERE.

How to ask for a raise amid a cost-of-living crisis

Asking the boss for a raise can be awkward at the best of times.

As the cost-of-living crisis deepens in the U.K. and U.S. and company purse strings are pulled tight, it’s arguably even more difficult. However, given the perilous state of the economy, it’s critical to pluck up the courage to discuss a pay bump.

The temptation might be to blunder into an informal chat, but that could come across as desperate. Instead, a better strategy is to prepare well to effectively make your business case.

Below are some tried and tested expert tips to help those seeking a raise seal the deal.

This article was first published on Digiday’s future-of-work platform, WorkLife, in July 2022 – to read the complete piece, please click HERE.

How companies are attempting to tackle diversity ‘blind spots’ at the hiring stage

In an attempt to root out all biases – conscious or unconscious – at the hiring stage, more organizations are overhauling their recruitment processes.

For many, that’s meant stripping their recruitment methods to the bare bones and examining everything from how language in job ads can influence who applies, to improving interview questions so they focus on a person’s aptitude and skill, rather than background and experience.

This article was first published on Digiday’s future-of-work platform, WorkLife, in July 2022 – to read the complete piece, please click HERE.