Go Flux Yourself: Navigating the Future of Work (No. 16)


TL;DR: April’s Go Flux Yourself explores the rise of AI attachment and how avatars, agents and algorithms are slipping into our emotional and creative lives. As machines get more personal, the real question isn’t what AI can do. It’s what we risk forgetting about being human …

Image created on Ninja AI

The future

“What does relationship communication – and attachment in particular – look like in a future where our most meaningful conversations may be with digital humans?”

The robots aren’t coming. They’re already in the room, nodding along, offering feedback, simulating empathy. They don’t sleep. They don’t sigh. And increasingly, they feel … helpful.

In 2025, AI is moving beyond spreadsheets and slide decks and entering our inner lives. According to new Harvard Business Review analysis by Marc Zao-Sanders, author and co-founder of Filtered.com, the fastest-growing use for generative AI isn’t work but therapy and companionship. In other words, people are building relationships with machines. (I’ve previously written about AI companions – including in June last year.)

Some call this disturbing. Others call it progress. At DTX Manchester earlier this month, where I moderated a panel on AI action plans on the main stage (and wrote a summary of my seven takeaways from the event), the conversation was somewhere in between. One question lingered among the panels and product demos: how will we relate to one another when technology becomes our emotional rehearsal partner?

This puzzler is no longer only theoretical. RealTalkStudio, founded by Toby Sinclair, provides AI avatars that help users prepare for hard conversations: delivering bad news, facing conflict, and even giving feedback without sounding passive-aggressive. These avatars pick up on tone, hesitation, and eye movement. They pause in the right places, nod, and even move their arms around.

I met Toby at DTX Manchester, and we followed up with a video call a week or so later, after I’d road-tested RealTalkStudio. The prompts on the demo – a management scenario – were handy and enlightening, especially for someone like me, who has never really managed anyone (do children count?). They allowed me to speak with my “direct report” adroitly, to achieve a favourable outcome for both parties. 

Toby spent almost 11 years at JP Morgan – latterly as Executive Director of Employee Experience – before leaving in September to establish RealTalkStudio. Why did he give it all up?

“The idea came from a mix of personal struggle and tech opportunity,” he told me over Zoom. “I’ve always found difficult conversations hard – I’m a bit of a people pleaser, so when I had to give feedback or bad news, I’d sugarcoat it, use too many pillow words. My manager [at JP Morgan] was the opposite: direct, no fluff. That contrast made me realise there isn’t one right way – but practice is needed. And a lot of people struggle with this, not just me.”

The launch of ChatGPT, in November 2022, prompted him to explore possible solutions using technology. “Something clicked. It was conversational, not transactional – and I immediately thought, this could be a space to practise hard conversations. At first, I used it for myself: trying to become a better manager at JP Morgan, thinking through career changes, testing it as a kind of coach or advisor. That led to early experiments in building an AI coaching product, but it flopped. The text interface was too clunky, the experience too dull. Then, late last year, I saw how far avatar tech had come.” 

Suddenly, Toby’s idea felt viable. Natural, even. “I knew the business might not be sustainable forever, but for now, the timing and the tech felt aligned. I could imagine it being used for manager training, dating, debt collectors, airline … so many use cases.”

Indeed, avatars are not just used in work settings. A growing number of people – particularly younger generations – are turning to AI to rehearse dating, for instance. Toby has been approached by an Eastern European matchmaking service. “They came to me because they’d noticed a recurring issue, especially with younger men: poor communication on dates, and a lack of confidence. They were looking for ways to help their clients – mainly men – have better conversations. And while practice helps, finding a good practice partner is tricky. Most of these men don’t have many female friends, and it’s awkward to ask someone: ‘Can we practise going on a date?’ That’s where RealTalk comes in. We offer a realistic, judgment-free way to rehearse those conversations. It’s all about building confidence and clarity.”

These avatars flirt back. They guide you through rejection. They help you practise confidence without fear of humiliation. It’s Black Mirror, yes. But also oddly touching. On one level, this is useful. Social anxiety is rising. Young people in particular are navigating a digital-first emotional landscape. An AI avatar offers low-risk rehearsal. It doesn’t laugh. It doesn’t ghost.

On another level, it’s deeply troubling. The ability to control the simulation – to tailor responses, remove ambiguity, and mute discomfort – trains us to expect real humans to behave predictably, like code. We risk flattening our tolerance for emotional nuance. If your avatar never rolls its eyes or forgets your birthday, why tolerate a flawed, chaotic, human partner?

When life feels high-stakes and unpredictable, a predictable conversation with a patient, programmable partner can feel like relief. But what happens when we expect humans to behave like avatars? When spontaneity becomes a bug, not a feature?

That’s the tension. These tools are good, and only improving. Too good? The quotation I started this month’s Go Flux Yourself with comes from Toby, who has a two-year-old boy, Dylan. As our allotted 30 minutes neared its end, the hugely enjoyable conversation turned philosophical, and he posed this question: “What does relationship communication – and attachment in particular – look like in a future where our most meaningful conversations may be with digital humans?”

It’s clear that AI avatars are no longer just slick customer service bots. They’re surprisingly lifelike. Character-3, the latest from Hedra, mimics micro-expressions with startling accuracy. Eyebrows arch. Shoulders slump. A smirk feels earned.

This matters because humans are built to read nuance. We feel it when something’s off. But as avatars close the emotional gap, that sense of artifice starts to slip. We begin to forget that what we engage with isn’t sentient – it’s coded.

As Justine Moore from Andreessen Horowitz stressed in an article outlining the roadmap for avatars (thanks for the tip, Toby), these aren’t talking heads anymore. They’re talking characters, designed to be persuasive. Designed to feel real enough.

So yes, they’re useful for training, coaching, even storytelling. But they’re also inching closer to companionship. And once a machine starts mimicking care, the ethics get blurry.

Nowhere is the ambivalence more acute than in the creative industries. The spectre of AI-generated music, art, and writing has stirred panic among artists. And yet – as I argued at Zest’s Greenwich event last week – the most interesting possibilities lie in creative amplification, not replacement.

For instance, the late Leon Ware’s voice, pulled from a decades-old demo, now duets with Marcos Valle on Feels So Good, a track left unfinished since 1979. The result, when I heard it at the Jazz Cafe last August – where I was lucky enough to catch the octogenarian Valle – was genuinely moving. Not because it’s novel, but because it’s human history reassembled. Ware isn’t being replaced. He’s being recontextualised.

We’ve seen similar examples in recent months: a new Beatles song featuring a de-noised John Lennon; a Beethoven symphony completed with machine assistance. Each case prompts the same question: is this artistry, or algorithmic taxidermy?

From a technical perspective, these tools are astonishing. From a legal standpoint, deeply fraught. But from a cultural angle, the reaction is more visceral: people care about authenticity. A recent UK Music study found that 83% of UK adults believe AI-generated songs should be clearly labelled. Two-thirds worry about AI replacing human creativity altogether.

And yet, when used transparently, AI can be a powerful co-creator. I’ve used it to organise ideas, generate structure, and overcome writer’s block. It’s a tool, like a camera, or a DAW, or a pencil. But it doesn’t originate. It doesn’t feel.

As Dean, a member of the Love Will Save The Day FM community (the station where my DJ alias, Boat Floaters, has a monthly show called Love Rescue), told me: “Real art is made in the accidents. That’s the magic. AI, to me, reduces the possibility of accidents and chance in creation, so it eliminates the magic.”

That distinction matters. Creativity is not just output. It’s a process. It’s the struggle, the surprise, the sweat. AI can help, but it can’t replace that.

Other contributions from LWSTD members captured the ambivalence of AI and creativity – in music, in this case, though these viewpoints can be broadened to the other arts. James said: “Anything rendered by AI is built on the work of others. Framing this as ‘democratised art’ is disingenuous.” He noted how Hayao Miyazaki of Studio Ghibli expressed deep disgust when social media feeds became drowned in AI parodies of his art, condemning such work as an “insult to life itself”.

Sam picked up this theme. “The Ghibli stuff is a worrying direction of where things can easily head with music – there’s already terrible versions of things in rough styles but it won’t be long before the internet is flooded with people making their own Prince songs (or whatever) but, as with Ghibli, without anything beyond a superficial approximation of art.”

And Jed pointed out that “it’s all uncanny – it’s close, but it’s not right. It lacks humanity.”

Finally, Larkebird made an amusing distinction. “There are differences between art and creativity. Art is a higher state of creativity. I can add coffee to my tea and claim I’m being creative, but that’s not art.”

Perhaps, though, if we want to glimpse where this is really headed, we need to look beyond the avatars to the agents, which currently dominate the space.

Ray Smith, Microsoft’s VP of Autonomous Agents, shared a fascinating vision during our meeting in London in early April. His team’s strategy hinges on three tiers: copilots (assistants), agents (apps that take action), and autonomous agents (systems that can reason and decide).

Imagine an AI that doesn’t just help you file expenses but detects fraud, reroutes tasks and escalates anomalies – all without being prompted. That’s already happening. Pets at Home uses a revenue protection agent to scan and flag suspicious returns. The human manager steps in only at the exception stage.

And yet, during Smith’s demo … the tech faltered. GPU throttling. Processing delays. The AI refused to play ball.

It was a perfect irony: a conversation about seamless automation interrupted by the messiness of real systems. Proof, perhaps, that we’re still human at the centre.

But the direction of travel is clear. These agents are not just tools. They are colleagues. Digital labour, tireless and ever-present.

Smith envisions a world where every business process has a dedicated agent. Where creative workflows, customer support, and executive decision-making are all augmented by intelligent, autonomous helpers.

However, even he admits that we need a cultural reorientation. Most employees still treat AI as a search box. They don’t yet trust it to act. That shift – from command-based to companion-based thinking – is coming, slowly, then suddenly (to paraphrase Ernest Hemingway).

A key point often missed in the AI hype is this: AI is inherently retrospective. Its models are trained on what has come before. It samples. It predicts. It interpolates. But it cannot truly invent in the sense humans do, from nothing, from dreams, from pain.

This is why, despite all the alarmism, creativity remains deeply, stubbornly human. And thank goodness for that.

But there is a danger here. Not of AI replacing us, but of us replacing ourselves – outsourcing our process, flattening our instincts, degrading our skills, compromising originality in favour of efficiency.

AI might never write a truly original poem. But if we rely on it to finish our stanzas, we might stop trying.

Historian Yuval Noah Harari has warned against treating AI as “just another tool”. He suggests we reframe it as alien intelligence. Not because it’s malevolent, but because it’s not us. It doesn’t share our ethics. It doesn’t care about suffering. It doesn’t learn from heartbreak.

This matters, because as we build emotional bonds with AI – however simulated – we risk assuming moral equivalence. That an AI which can seem empathetic is empathetic.

This is where the work of designers and ethicists becomes critical. Should emotional AI be clearly labelled? Should simulated relationships come with disclaimers? If not, we risk emotional manipulation at industrial scale, especially among the young, lonely, or digitally naive. (This recent New York Times piece, about a married, 28-year-old woman in love with ChatGPT, is well worth a read: it shows how easy – and how frightening and costly – it is to become attached to AI.)

We also risk creating a two-tier society: those who bond with humans, and those who bond with their devices.

Further, Harari warned in an essay, published in last Saturday’s Financial Times Weekend, that the rise of AI could accelerate political fragmentation in the absence of shared values and global cooperation. Instead of a liberal world order, we could end up with a mosaic of “digital fortresses”, each with its own truths, avatars, and echo chambers.

Without robust ethics, the future of AI attachment could split into a thousand isolated solitudes, each curated by a private algorithmic butler. If we don’t set guardrails now, we may soon live in a world where connection is easy – and utterly empty.

The present

At DTX Manchester this month, the main-stage AI panel I moderated felt very different from those of even a year ago. The vibe was less “what is this stuff?” and more “how do we control the stuff we’ve already unleashed?”

Gone are the proof-of-concept experiments. Organisations are deploying AI at scale. Suzanne Ellison at Lloyds Bank described a knowledge base now used by 21,000 colleagues, halving information-retrieval time and boosting customer satisfaction by a third. But more than that, it’s made work more human, freeing up time for nuanced, empathetic conversations.

Likewise, the thought leadership business I co-founded last year, Pickup_andWebb, uses AI avatars for client-facing video content, such as a training programme. No studios. No awkward reshoots. Just instant script updates. It’s slick, smart, and efficient. And yes, slightly unsettling.

Dominic Dugan of Oktra, a man who has spent decades designing workspaces, echoed that tension. He’s sceptical. Most post-pandemic office redesigns, he argues, are just “colouring in” – performative, superficial, Instagram-friendly but uninhabitable. We’ve designed around aesthetics, not people.

Dugan wants us to talk about performance. If an office doesn’t help people do better work, or connect more meaningfully, what’s the point? Even the most elegantly designed workplace means little if it doesn’t accommodate the emotional messiness of human interaction – something AI, for all its growth, still doesn’t understand.

And yet the fragility of our human systems – tech included – was brought into sharp relief in recent days (and remains ongoing at the time of writing), when an “induced atmospheric vibration” reportedly caused widespread blackouts in Spain and Portugal, knocking out connectivity across major cities for hours, and in some cases days. No internet. No payment terminals. No AI anything. Life slowed to a crawl. Trains stopped. Offices went dark. Coffee shops switched to cash, or closed altogether. It was a rare glimpse into the abyss of analogue dependency, a reminder that our digital lives are fragile scaffolds built on uncertain foundations.

The outage was temporary. But the lesson lingers: the more reliant we become on these intelligent systems, the greater our vulnerability when they fail. And fail they will. That’s the nature of systems. But it’s also the strength of humans: our capacity to improvise, to adapt, to find ways around failure. The more we automate, the more we must remember this: resilience cannot be outsourced.

And that brings me to my own moment of reinvention.

This month I began the long-overdue overhaul of my website, oliverpickup.com. The current version – featuring a home-page photograph of me swimming in the Serpentine, goggles on upside down, taken at a shoot for an interview with Olympic triathlete Jodie Stimpson – has served me well, but it’s over a decade old. It also makes people think I’m into wild swimming. I’m not, and I detest cold water.

(The 2015 article in FT Weekend has one of my favourite opening lines: “Jodie Stimpson is discussing tactical urination. The West Midlands-based triathlete, winner of two Commonwealth Games golds last summer, is specifically talking about relieving herself in her wetsuit to flood warmth to the legs when open-water swimming.”) 

But it’s more than a visual rebrand. I’m repositioning, due to FOBO (fear of becoming obsolete). The traditional freelance model is eroding, its margins squeezed by algorithmic content and automated writing. While it might not have the personality, depth, and nuance of human writing, AI doesn’t sleep, doesn’t bill by the hour, and now writes decently enough to compete. I know I can’t outpace it on volume. So I have to evolve. Speaking. Moderating. Podcasting. Hosting. These are uniquely human domains (for now).

The irony isn’t lost on me: I now use AI to sharpen scripts, test tone, even rehearse talks. But I also know the line. I know what cannot be outsourced. If my words don’t carry me in them, they’re not worth publishing.

Many of us are betting that presence still matters. That real connection – in a room, on a stage, in a hard conversation – will hold value, even as screens whisper more sweetly than ever.

As such, I’m delighted to have been accepted by Pomona Partners, a speaker agency led by “applied” futurist Tom Cheesewright, whom I caught up with over lunch at DTX Manchester. I’m looking forward to taking the next steps in my professional speaking career with Tom and the team.

The past

Recently, prompted by a friend’s health scare and my natural curiosity, I spat into a tube and sent off a DNA sample to ancestry.com. I want to understand where I come from, what traits I carry, and what history pulses through me.

In a world where AI can mimic me – my voice, writing style, and image – there’s something grounding about knowing the real me. The biological, lived, flawed, irreplaceable me.

It struck me as deeply ironic. We’re generating synthetic selves at an extraordinary rate. Yet we’re still compelled to discover our origins: to know not just where we’re going, but where we began.

This desire for self-knowledge is fundamental. It sits at the heart of my CHUI framework: Community, Health, Understanding, Interconnectedness. Without understanding, we’re at the mercy of the algorithm. Without roots, we become avatars.

Smith’s demo glitch – an AI agent refusing to cooperate – was a reminder that no matter how advanced the tools, we are still in the loop. And we should remain there.

When I receive my ancestry results, I won’t be looking for royalty. I’ll be looking for roots. Not to anchor me in the past, but to help me walk straighter into the future. I’ll also share those findings in this newsletter. Meanwhile, I’m off to put tea in my coffee.

Statistics of the month

📈 AI is boosting business. Some 89% of global leaders say speeding up AI adoption is a top priority this year, according to new LinkedIn data. And 51% of firms have already seen at least a 10% rise in revenue after implementation.

🏙️ Cities aren’t ready. Urban economies generate most of the world’s GDP, but 44% of that output is at risk from nature loss, recent World Economic Forum data shows. Meanwhile, only 37% of major cities have any biodiversity strategy in place. 🔗

🧠 The ambition gap is growing. Microsoft research finds that 82% of business leaders around the globe say 2025 is a pivotal year for change (85% think so in the UK). But 80% of employees feel too drained to meet those expectations. 🔗

📉 Engagement is slipping. Global employee engagement has fallen to 21%, according to Gallup’s latest State of the Global Workplace annual report (more on this next month). Managers have been hit hardest – dropping from 30% to 27% – and their decline accounts for much of the overall fall. The result? $438 billion in lost productivity. 🔗

💸 OpenAI wants to hit $125 billion. That’s its projected revenue by 2029 – driven by autonomous agents, API tools and custom GPTs. Not bad for a company that started as a non-profit. 🔗

Thank you for reading Go Flux Yourself. Subscribe for free to receive this monthly newsletter straight to your inbox.

All feedback is welcome, via oliver@pickup.media. If you enjoyed reading, please consider sharing it via social media or email. Thank you.

And if you are interested in my writing, speaking and strategising services, you can find me on LinkedIn or email me using oliver@pickup.media.

Published by


Oliver Pickup

Award-winning future-of-work Writer | Speaker | Moderator | Editor-in-Chief | Podcaster | Strategist | Collaborator | #technology #business #futureofwork #sport
