Why leaders are blindly ignoring the dangers of ‘confidently incorrect’ AI – and why it’s a massive problem

Why don’t scientists trust atoms? Because they make everything up. 

When Greg Brockman, president and co-founder of OpenAI, demonstrated the possibilities of GPT-4 – Generative Pre-trained Transformer 4, the fourth-generation autoregressive language model that uses deep learning to produce human-like text – upon its launch on Mar. 14, he tasked it with creating a website from a notebook sketch.

Brockman prompted GPT-4, on which ChatGPT is built, to select a “really funny joke” to entice would-be viewers to click for the answer. It chose the above gag. Presumably, the irony wasn’t purposeful, because the issues of “trust” and “making things up” remain massive, despite the incredible, entrancing capabilities of generative artificial intelligence.

Many business leaders are spellbound, said futurist David Shrier, professor of practice (AI and innovation) at Imperial College Business School in London. And it’s easy to understand why: the technology can build websites, invent games, create pioneering drugs and pass legal exams – all in mere seconds.

Those impressive feats are making it harder for leaders to be clear-eyed, said Shrier, who has written books on nascent technologies. In the race to embrace ChatGPT, companies and individual users are “blindly ignoring the dangers of confidently incorrect AI.” As a result, he warned, significant risks are emerging as companies rapidly re-orient themselves around ChatGPT while unaware of – or ignoring – its numerous pitfalls.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in March 2023.

WTF is the Turing trap – and how businesses that embrace AI can avoid it

All the recent chatter about ChatGPT and advancements in generative artificial intelligence has been impossible to avoid for business leaders. More than ever, they are being urged to embrace AI. 

True, if used correctly, it can improve efficiencies and forecasting while reducing costs. But many people make the mistake of thinking AI could – and should – be more human. 

Science-fiction tropes do not help this perception. Additionally, Alan Turing’s famous test for machine intelligence, proposed in 1950, has conditioned us to think about this technology in a certain way. Originally called the imitation game, the Turing test was designed to gauge the cleverness of a machine compared to humans. Essentially, if a machine displays intelligent behavior equivalent to, or indistinguishable from, that of a human, it passes the Turing test.

But this is a wrongheaded strategy, according to professor Erik Brynjolfsson, arguably the world’s leading expert on the role of digital technology in improving productivity. Indeed, the director of the Digital Economy Lab at the Stanford Institute for Human-Centered AI recently coined the term the Turing trap, as he wanted people to avoid being snared by this approach.

So what exactly is the Turing trap?

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in March 2023.

Stanford professor on the AI skills gap and the dangers of exponential innovation

ChatGPT and its ilk represent a welcome quantum leap for productivity, according to eminent AI expert professor Erik Brynjolfsson. But he adds that such rapid developments also present a material risk.

Erik Brynjolfsson is in great demand. The US professor whose research focuses on the relationship between digital tech and human productivity is nearing the end of a European speaking tour that’s lasted nearly a month. Despite this, he’s showing no signs of fatigue – quite the opposite, in fact. 

Speaking via Zoom as he prepares for his imminent lecture in Oxford, the director of the Digital Economy Lab at the Stanford Institute for Human-Centered AI is enthused by recent “seminal breakthroughs” in the field.

Brynjolfsson’s tour – which has included appearances at the World Economic Forum in Davos and the Institute for the Future of Work in London – is neatly timed, because the recent arrival of ChatGPT on the scene has been capturing human minds, if not yet hearts. 

The large-scale language model, fed 300 billion words by developer OpenAI, caused a sensation with its powerful capabilities, attracting 1 million users within five days of its release in late November 2022. At the end of January, Microsoft’s announcement of a substantial investment in OpenAI “to accelerate AI breakthroughs” generated yet more headlines. 

ChatGPT’s popularity is likely to trigger an avalanche of similarly extraordinary AI tools, Brynjolfsson predicts, with a possible economic value extending to “trillions of dollars”. But he adds that proper safeguards and a better understanding of how AI can augment – not replace – jobs are urgently required.

What’s next in AI?

“There have been some amazing, seminal breakthroughs in AI lately that are advancing the frontier rapidly,” Brynjolfsson says. “Everyone’s playing with ChatGPT, but this is just part of a larger class of ‘foundation models’ that is becoming very important.”

He points to the image generator DALL-E (another OpenAI creation) and lists similar tools designed for music, coding and more. Such advances are comparable to those brought by deep learning, which enabled significant leaps in object recognition a decade ago. 

“There’s been a quantum improvement in the past couple of years as these foundational models have been introduced more widely. And this is just the first wave,” Brynjolfsson says. “The folks working on them tell me that there’s far more in the pipeline that we’ll be hearing about in the coming weeks.”

As much as I’m blown away by these technologies, the bottleneck is our human response

When pushed for examples of advances that could shape the future of work, he reveals that Generative Pre-trained Transformer 3 (GPT-3) – the language model that uses deep learning to emulate human writing – will be superseded by GPT-4 “within weeks. This is a ‘phase change of improvement’ compared with the last one, but it’ll be even more capable of solving all sorts of problems.” 

Elsewhere, great strides are being made with “multi-agent systems” designed to enable more effective interactions between AI and humans. In effect, AI tech will gain the social skills required to cooperate and negotiate with other systems and their users. 

“This development is opening up a whole space of new capabilities,” Brynjolfsson declares.

The widening AI skills gap

As thrilling as these pioneering tools may sound, the seemingly exponential rate of innovation presents some dangers, he warns. 

“AI is no longer a laboratory curiosity or something you see in sci-fi movies,” Brynjolfsson says. “It can benefit almost every company. But governments and other organisations haven’t been keeping up with developments – and our skills haven’t either. The gap between our capabilities and what the technology enables and demands has widened. I think that gap will be where most of the big challenges – and opportunities – for society lie over the next decade or so.”

Brynjolfsson, who studied applied maths and decision sciences at Harvard in the 1980s, started in his role at Stanford in July 2020 with the express aim of tackling some of these challenges. 

“We created the Digital Economy Lab because, as much as I’m blown away by these technologies, the bottleneck is our human response,” he says. “What will we do about the economy, jobs and ethics? How will we transform organisations that aren’t changing nearly fast enough? I want to speed up our response.”

Brynjolfsson spoke passionately about this subject at Davos in a session entitled “AI and white-collar jobs”. In it, he advised companies to adopt technology in a controlled manner. Offering a historical analogy, he pointed out that, when electricity infrastructure became available about a century ago, it took at least three decades for most firms to fully realise the productivity gain it offered because they first needed to revamp their workplaces to make the best use of it. 

“We’re in a similar period with AI,” Brynjolfsson told delegates. “What AI is doing is affecting job quality and how we do the work. So we must address to what extent we keep humans in the loop rather than focus on driving down wages.”

Why AI will create winners and losers 

The risk of technology racing too far ahead of humanity for comfort is a familiar topic for Brynjolfsson. In both Race Against the Machine (2011) and The Second Machine Age (2014), he and his co-author, MIT scientist Andrew McAfee, called for greater efforts to update organisations, processes and skills. 

AI can benefit almost every company. But governments and other organisations haven’t been keeping up with developments – and our skills haven’t either

How would he assess the current situation? “When we wrote those books, we were optimistic about the pace of technological change and pessimistic about our ability to adapt,” Brynjolfsson says. “It turns out that we weren’t optimistic enough about the technology or pessimistic enough about our institutions and skills.”

In fact, the surprising acceleration of AI means that the “timeline for when we’ll have artificial general intelligence” should be shortened by decades, he argues. “AGI will be able to do most of the things that humans can. Some predicted that this would be achieved by the 2060s, but now people are talking about the 2030s or even earlier.”

Given the breakneck speed of developments, how many occupations are at risk of obsolescence through automation? 

Brynjolfsson concedes that the range of roles affected is looking “much broader than earlier thought. There will be winners and losers. Jobs will be enhanced in many cases, but some will be eliminated. Routine work will become increasingly automated – and there will also be a flourishing of fantastic creativity. If we use these tools correctly, there will be positive disruption. If we don’t, inequality could deepen, further concentrating wealth and political power.” 

How to apply AI in the workplace

How, then, should businesses integrate AI into their operations? First, they must avoid what Brynjolfsson has labelled the Turing trap.

“One of the biggest misconceptions about AI – especially among AI researchers, by the way – is that it needs to do everything that humans do and replace them to be effective,” he explains, arguing that the famous test for machine intelligence, proposed by Alan Turing in 1950, is “an inspiring but misguided vision”.

Brynjolfsson contends that a “mindset shift” at all levels – from scientists and policy-makers to employers and workers – is required to harness AI’s power to shape society for good. “We should ask: ‘What do we want these powerful tools for? And how can we use them to achieve our goals?’ The tools don’t decide; we decide.”

One of the biggest misconceptions about AI is that it needs to do everything that humans do and replace them

He adds that many business leaders have the wrong attitude to applying new tech in general and AI in particular. This amounts to a “pernicious problem”. 

To illustrate this, he cites Waymo’s experiments with self-driving vehicles: “These work 99.9% of the time, but there is a human safety driver overseeing the system and a second safety driver in case the first one falls asleep. People watching each other is not the right path to driverless cars.”

Brynjolfsson commends an alternative route, which has been taken by the Toyota Research Institute, among others. When he was in Davos, the institute’s CEO, Dr Gill Pratt, “told me how his team has flipped things around so that the autonomous system is used as the guardian angel. Creating a self-driving car that works in all possible conditions is tough, but humans can handle those exceptions.” 

With a person making most decisions in the driving seat, the AI intervenes “occasionally – for instance, when there’s a looming accident. I think this is a good model, not only for self-driving cars, but for many other applications where humans and machines work together.” 

For similar reasons, Brynjolfsson lauds Cresta, a provider of AI systems for customer contact centres. Its products keep humans “at the forefront” of operations instead of chatbots, whose apparent Turing test failures continue to frustrate most people who deal with them. 

“The AI gives them suggestions about what to mention to customers,” he says. “This system does dramatically better in terms of both productivity and customer satisfaction. It closes the skills gap too.”

Does Brynjolfsson have a final message for business leaders before he heads off to give his next lecture? “We need to catch up and keep control of these technologies,” he says. “If we do that, I think the next 10 years will be the best decade we’ve ever had on this planet.”

This article was first published by Raconteur, as part of the Future of Work special report in The Times, in February 2023

WTF is learning quotient – and why it matters now

In January, at the World Economic Forum in the Swiss Alps, there was much chat about ChatGPT, OpenAI’s large-scale language model that has been fed 300 billion words to help it generate plausible, passable answers to most questions. An Elon Musk tweet summed up the sentiment for many: “It’s a new world. Goodbye homework!”

With generative AI advanced enough to produce eerily human text responses, and other related foundational models now able to create music, art and code, is it time to turn the page on traditional education? Further, are rote learning and cramming for exams, only to forget the key facts instantly afterwards, finished? Granted, they have their place for times tables and languages, but what else, really? 

While some may want to defer answering these uncomfortable puzzlers, speakers on oversubscribed AI-related panels at Davos 2023 heralded LQ as the new IQ.

So what exactly is LQ?

It stands for “learning quotient” – as opposed to intelligence quotient. Essentially, it’s a measure of adaptability: one’s desire and ability to update one’s skills throughout life.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in February 2023.