All the recent chatter about ChatGPT and advancements in generative artificial intelligence has been impossible to avoid for business leaders. More than ever, they are being urged to embrace AI.
True, used correctly, AI can improve efficiency and forecasting while reducing costs. But many people make the mistake of thinking it could – and should – be more human.
Science-fiction tropes do not help this perception. Nor does Alan Turing’s famous test for machine intelligence, proposed in 1950, which has conditioned us to think about the technology in a particular way. Originally called the imitation game, the Turing test was designed to gauge a machine’s intelligence against a human’s. Essentially, if a machine displays intelligent behavior equivalent to, or indistinguishable from, that of a human, it passes the Turing test.
But striving for human imitation is a wrongheaded strategy, according to Professor Erik Brynjolfsson, arguably the world’s leading expert on the role of digital technology in improving productivity. Indeed, the director of the Digital Economy Lab at the Stanford Institute for Human-Centered AI recently coined the term “the Turing trap” because he wanted people to avoid being snared by this approach.
So what exactly is the Turing trap?
The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in March 2023 – to read the complete piece, please click HERE.