How financial services operators are dialling up conversational AI to catch out fraudsters

Organisations are using new technology to analyse the voices of those posing as customers in real time while reducing false positives

Great Britain is the fraud capital of the world, according to a Daily Mail investigation published in June. The study calculated that 40 million adults have been targeted by scammers this year. In April, a reported £700m was lost to fraud, compared with an average of £200m per month in 2021. As well as relying on convincing ruses, scammers are becoming increasingly sophisticated cybercriminals.

If the UK does go into recession, as predicted, then the level of attacks is likely to increase even further. Jon Holden is head of security at digital-first bank Atom. “Any economic and supply-chain pressure has always had an impact and motivated more fraud,” he says. He suggests that the “classic fraud triangle” of pressure, opportunity and rationalisation comes into play. 

Financial services operators are investing in nascent fraud-prevention technologies such as conversational AI and other biometric solutions to reduce fraud. “Conversational AI is being used across the industry to recognise patterns in conversations, with agents or via chatbots, that may indicate social engineering-type conversations, to shut them down in real time,” continues Holden. “Any later than real time and the impact of such AI can be deadened as the action comes too late. Linking this to segmentation models that identify the most vulnerable customers can help get action to those that need it fastest and help with targeted prevention activity too.”
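Holden does not detail Atom’s system, but a minimal sketch shows the shape of the idea: score each utterance of a live transcript against patterns associated with social-engineering scripts, and alert the moment the accumulated risk crosses a threshold. The phrases, weights and threshold below are illustrative assumptions; a production system would typically use a trained language model rather than keyword rules.

```python
import re

# Illustrative phrases associated with social-engineering scripts. The
# patterns, weights and threshold are assumptions for this sketch, not
# any bank's actual model.
SUSPICIOUS_PATTERNS = {
    r"\b(one[- ]time (pass)?code|OTP)\b": 0.4,
    r"\bmove (your )?money to a safe account\b": 0.8,
    r"\bdo not tell (the|your) bank\b": 0.9,
    r"\bremote access\b": 0.5,
}

def score_utterance(utterance: str) -> float:
    """Return a crude social-engineering risk score for one utterance."""
    return sum(weight for pattern, weight in SUSPICIOUS_PATTERNS.items()
               if re.search(pattern, utterance, re.IGNORECASE))

def monitor_conversation(utterances, alert_threshold=1.0):
    """Accumulate risk across a live transcript; flag in real time."""
    risk = 0.0
    for utterance in utterances:
        risk += score_utterance(utterance)
        if risk >= alert_threshold:
            yield ("ALERT", utterance, risk)  # hand off to agent or fraud team

# A streamed call transcript would be fed in utterance by utterance.
transcript = [
    "Hi, I'm calling from your bank's security team.",
    "Please read me the one-time passcode we just sent you.",
    "Move your money to a safe account and do not tell your bank.",
]
for event in monitor_conversation(transcript):
    print(event)
```

Once a conversation is flagged, the same score could feed the segmentation models Holden mentions, so the most vulnerable customers are contacted first.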

This last point is crucial because educating customers about swindlers is not straightforward. “Unfortunately, there will always be vulnerable people being scammed,” Holden says. “The banks are doing a lot of work to identify and protect vulnerable customers, but clever social engineering, often over a long period, will always create more victims of romance scams, investment scams, or purchase scams when victims send money for goods never received.”

How AI can help fight fraud

AI is a critical tool to fight fraud. Not only does it reduce the possibility of human error, it also raises red flags quickly, enabling faster, smarter interventions. Additionally, it provides “far better insight of the cyber ecosystem”, adds Holden, “almost at the point of predictive detection, which helps with both threat decisioning and threat hunting”.

Jason Costain is head of fraud prevention at NatWest, which serves 19 million customers across its banking and financial services brands. He agrees it is vital for conversational AI to join the chat. Because the call centre is an important customer service channel and a prime target for fraudulent activity – both from lone-wolf attackers and organised crime networks – he resolved to establish more effective security mechanisms while delivering a fast, smooth experience for genuine customers. 

In late 2020, NatWest opted for a voice-biometric screening solution from Nuance, a company that Microsoft recently acquired. It screens every incoming call and compares voice characteristics – including pitch, cadence and accent – against a digital library of voices associated with fraud against the bank. The software immediately flags suspicious calls and alerts the call centre agent to the potential fraud attempt.
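Nuance does not publish Gatekeeper’s internals, but voice-biometric screening of this kind is commonly built on fixed-length “voiceprint” embeddings compared against a watchlist. The sketch below assumes an upstream model has already produced those embeddings; the names and the 0.85 match threshold are illustrative.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen_call(voiceprint: np.ndarray,
                fraud_watchlist: dict[str, np.ndarray],
                threshold: float = 0.85) -> list[str]:
    """Return watchlist IDs whose stored voiceprints match this caller."""
    return [vid for vid, reference in fraud_watchlist.items()
            if cosine_similarity(voiceprint, reference) >= threshold]

# Random vectors stand in for real embeddings in this example.
rng = np.random.default_rng(0)
watchlist = {"fraudster-001": rng.normal(size=256)}
caller = watchlist["fraudster-001"] + rng.normal(scale=0.05, size=256)
print(screen_call(caller, watchlist))  # -> ['fraudster-001']
```

The safe list of genuine customer voices that NatWest describes can use the same comparison in reverse, with a strong match granting authentication rather than raising an alert.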

Since our initial implementation of AI three years ago, the improvements to alert quality have been incredible

Before the end of the first year of deploying the Nuance Gatekeeper system, NatWest had screened 17 million incoming calls. Of those, 23,000 led to alerts, and the bank found that around one in every 3,500 calls was a fraud attempt – roughly 4,900 of the 17 million, suggesting that about one alert in five pointed to genuine fraud. As well as a library of ‘bad’ voices, NatWest agents now have a safe list of genuine customer voices that can be used for rapid authentication, without customers needing to recall passwords and other identifying information. That knowledge enables the bank to identify and disrupt organised crime activities, protect its customers and assist law enforcement.

“We’re using voice-biometric technology to build a clear picture of our customers’ voices and what criminal voices sound like,” Costain says. “We can detect when we get a fraudulent voice coming in across our network as soon as it happens. Using a combination of biometric and behavioural data, we now have far greater confidence that we are speaking to our genuine customers and keeping them safe.”

He estimates the return on investment from the tool is more than 300%. “As payback from technology deployment, it’s been impressive. But it’s not just about stopping financial loss; it’s about disrupting criminals.” For instance, NatWest identified a prolific fraudster connected to suspect logins on 1,500 bank accounts, and an arrest followed.

“For trusted organisations like banks, where data security is everything, the identification of the future is all about layers of security: your biometrics, the devices you use, and understanding your normal pattern of behaviour,” adds Costain. “At NatWest, we are already there, and our customers are protected by it.”

Benefits of investing in conversational AI

There are other benefits to be gained by investing in conversational AI solutions. Dr Hassaan Khan is head of the School of Digital Finance at Arden University. He points to a recent survey that indicates almost 90% of the banking sector’s interactions will be automated by 2023. “To stay competitive, organisations must rethink their strategies for improved customer experience. Banks are cognisant that conversational AI can help them be prepared and meet their customers’ rising demands and expectations,” he says.

This observation chimes with Livia Benisty. She is the global head of anti-money laundering at Banking Circle, the B2B bank relied on by Stripe, Paysafe, Shopify and other big businesses, responsible for settling approximately 6% of the world’s ecommerce payments. “With AML fines rocketing – the Financial Conduct Authority dished out a record $672 million (£559m) in 2021 – it’s clear that transaction monitoring cannot cope in its current state,” Benisty says. “That’s why adopting AI and machine learning is vital for overturning criminal activity.”

She argues, however, that many in the financial services industry are reluctant to invest in the newest AML solutions for fear of being reprimanded by regulators. “If you’re a bank, you come under a lot of scrutiny and there’s been resistance to using AI like ours,” she says. “AI is seen as unproven and risky to use but the opposite is true. Since our initial implementation of AI three years ago, the improvements to alert quality have been incredible. AI alleviates admin-heavy processes, enhancing detection by increasing rules precision and highlighting red flags the naked human eye could never spot.”

Even regulators would be impressed by the results revealed by Banking Circle’s head of AML. More than 600 bank accounts have been closed or escalated to the compliance department thanks to AI-related findings. Further, the solution “dramatically reduces” so-called false-positive alerts. “It’s well known the industry can see rates of a staggering 99%,” adds Benisty. “Highlighting fewer non-risky payments generates fewer false positives, ultimately meaning more time to investigate suspicious payments.”
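Banking Circle has not disclosed its models, but one common way to cut rule-driven false positives is to layer an unsupervised anomaly score over the rule hits, so analysts see the strangest payments first and routine hits are closed automatically. The sketch below uses scikit-learn’s IsolationForest; the rule, features and data are synthetic assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# A blunt rule fires on many payments; an anomaly model trained on
# historical payment features then ranks the hits for analysts.
rng = np.random.default_rng(42)
# Features per payment: [amount, payments per day from this account]
history = rng.normal(loc=[500.0, 2.0], scale=[200.0, 1.0], size=(5000, 2))
model = IsolationForest(contamination=0.01, random_state=42).fit(history)

rule_hits = np.array([[600.0, 2.5],     # ordinary payment the rule caught
                      [9500.0, 40.0]])  # large amount, abnormal velocity
scores = model.decision_function(rule_hits)  # lower = more anomalous

for payment, score in zip(rule_hits, scores):
    queue = "investigate" if score < 0 else "auto-close"
    print(payment, round(float(score), 3), queue)
```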

As the economy weakens, and criminals grow stronger, financial services operators would be wise to dial up their conversational AI capabilities to improve customer experience today and pave the way to a password-less tomorrow.

This article was first published in Raconteur’s Fraud, Cybersecurity and Financial Crime report in July 2022

Mastercard cyber chief on using AI in the fight against fraud

Ajay Bhalla, Mastercard’s president of cyber and intelligence solutions, thinks innovations like AI can tackle cybercrime – and help save the planet

The fight against fraud has always been a messy business, but it’s especially grisly in the digital age. To keep ahead of the cybercriminals, investment in technology – particularly artificial intelligence – is paramount, says Ajay Bhalla, president of cyber and intelligence solutions at Mastercard. 

Since the opening salvo of the coronavirus crisis, cybercriminals have launched increasingly sophisticated attacks across a multitude of channels, taking advantage of heightened emotions and poor online security.

Some £1.26 billion was lost to financial fraud in the UK in 2020, according to UK Finance, a trade association, while there was a 43% year-on-year explosion in internet banking fraud losses. The banking industry managed to stop some £1.6 billion of fraud over the course of the year, equivalent to £6.73 in every £10 of attempted fraud.

If you don’t test things to break them, you can be sure their vulnerabilities will be discovered down the line

The landscape has rapidly evolved over the past year, says Bhalla, due to factors like the rapid growth of online shopping and the emergence of digital solutions in the banking sector and beyond. These changes have broken down the barriers to innovation, driving an unprecedented pace of change in the way we pay, bank and shop, says the executive, who’s responsible for deploying innovative technology to ensure the safety and security of 90 billion transactions every year. 

“Against that backdrop, cybercrime is a $5.2 trillion annual problem that must be met head-on. Standing still will mean effectively going backwards, as fraudsters are increasingly persistent, agile and well-funded.”

AI: the new electricity

It isn’t just the growing number of transactions that attracts criminal attention, but the diversity of opportunity, according to London-based Bhalla, who has held various roles at Mastercard around the world since 1993. 

“As the Internet of Things becomes ever more pervasive, so the size of the attack surface grows,” he says, noting that there will be 50 billion connected devices by 2025. 

Against this backdrop, AI will be essential to tackle cyber threats. 

“AI is fundamental to our work in areas such as identity and ecommerce, and we think of it as the new electricity, powering our society and driving forward progress,” says the 55-year-old.

Mastercard has pioneered the use of AI in banking through its worldwide network of R&D labs and AI innovation centres, and its AI-powered solutions have saved more than $30bn from being lost to fraud over the past two years.

In 2020, it opened an Intelligence and Cyber Centre in Vancouver, aimed at accelerating innovation in AI and IoT. The company filed at least 40 AI-related patent applications last year and, according to Bhalla, has developed the biggest cyber risk assessment capability on the planet.

“We are constantly testing, adapting and improving algorithms to solve real-world challenges.”

Turning to examples of the company’s work, Bhalla says Mastercard has built an ability to trace and alert on financial crime across its network, a world first. He also points to the recently launched Enhanced Contactless, or ECOS, which leverages state-of-the-art security and privacy technology to make contactless payments resistant to attacks from quantum computers, using next-generation algorithms and cryptography. 

“With ECOS, contactless payments still happen in less than half a second, but they are three million times harder to break.”

Building security through biometrics

Such innovations are transforming customers’ interactions with financial services providers. For example, Mastercard has combined AI-powered technologies with physical biometrics – like face, fingerprint and palm – to identify legitimate account holders. These technologies recognise behavioural traits, like the way in which customers hold their phone or how fast they type, actions that can’t be replicated by fraudsters. 

“We see a future where biometrics don’t just authenticate a payment; they are the payment, with consumers simply waving to pay.”

Excited by developments in this area, Bhalla says Mastercard recently detected an attack involving hundreds of attempted logins from a device that had reported itself as lying flat on its back. “Given the speed at which the credentials were typed, we knew it was unlikely it could be done with the phone flat on a surface,” Bhalla says. “In this way, a sophisticated attack that looked otherwise legitimate was detected before any fraud losses could occur.”
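That check is simple enough to sketch. The heuristic below compares what a device’s orientation sensors report with what the typing speed implies; the field names and the six-characters-per-second cut-off are illustrative assumptions, not Mastercard’s actual rules.

```python
def is_session_suspicious(session: dict) -> bool:
    """Flag sessions where fast typing contradicts a flat, untouched phone."""
    lying_flat = session["pitch_deg"] < 5 and session["roll_deg"] < 5
    typing_rate = session["chars_typed"] / session["typing_seconds"]
    return lying_flat and typing_rate > 6  # hard to do on a phone left flat

session = {"pitch_deg": 1.0, "roll_deg": 0.5,
           "chars_typed": 64, "typing_seconds": 4.0}  # 16 chars per second
print(is_session_suspicious(session))  # -> True: flag before any loss
```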

Cybercrime is a $5.2 trillion annual problem that must be met head-on. Standing still will mean effectively going backwards, as fraudsters are increasingly persistent, agile and well-funded

Mastercard might boast an impressive list of successful fraud-fighting solutions, but wrong turns are vital for the journey, Bhalla admits. “If you don’t test things to break them, you can be sure their vulnerabilities will be discovered down the line,” he says. “At Mastercard, trust in and reliance on our services is far too important to take that risk, so rigorously testing solutions before they get anywhere near the end user is our standard operating procedure.”

Trust is a must

A keen rower and golfer, Bhalla volunteers as an executive-in-residence at the University of Oxford’s Saïd Business School. He has a bachelor’s degree in commerce from Delhi University and a master’s degree in management from the University of Mumbai. 

Even with his experience and tech knowledge, Bhalla insists that Mastercard and others within the industry must go back to basics and focus on customer experience. The company’s leadership in standards has been core to earning and retaining the trust of its customers, he notes. 

The technology may be evolving quickly, but one core principle remains, says Bhalla. “Our business is based on trust, which is hard-won and easily lost.”

The correct operating processes and standards must be in place from the outset so that both customers and businesses can have confidence in the technology and trust that it will be useful, safe and secure. 

“What has changed is the sharp focus now placed on developing leading-edge solutions that prevent fraud and manage its impact, which is not surprising given that the average cost of a single data breach has now grown to $3.86 million,” Bhalla says.

Providing a blueprint for business leaders, Bhalla strongly believes that “innovation must be good for people … and address their needs at the fundamental design stage of the systems and solutions we create.”

We see a future where biometrics don’t just authenticate a payment; they are the payment, with consumers simply waving to pay

Bhalla is using tech to fight fraud and drive financial inclusion, with Mastercard aiming to connect 1 billion people globally to the digital economy by 2025. His ambitions are wider still, with much of his work focused on “protecting the world we have”.

Mindful that climate change is high on the agenda, especially for younger generations, Mastercard has launched a raft of programmes in the area, including this year’s Sustainable Card Badge, which looks to identify cards made more sustainably from recyclable, recycled, bio-sourced, chlorine-free, degradable or ocean plastics.

Much like the fight against fraud, the fight against global warming is reaching a crucial stage. Thanks to the efforts of industry leaders like Bhalla, the world stands a better chance of ultimate triumph on both fronts.

This article was originally written for Raconteur’s Fighting Fraud report, published in June 2021

Fighting fraud in times of crisis

Cybercrime is always distressing for those affected, but when the resultant losses come from the public purse, it must be taken even more seriously

Coronavirus has coursed through every facet of our lives, and society and business have already paid a colossal price to restrict its flow. We will be counting the cost for years, if not decades. And while people have become almost anaesthetised to the enormous, unprecedented sums of support money administered by the government, it was still painful to learn, in October, that taxpayers could face losing up to £26 billion on COVID-19 loans, according to an alarming National Audit Office report.

Given the likely scale of abuse, how should the authorities go about eliminating public sector fraud? Could artificial intelligence (AI) fraud detection be the answer?

Admittedly, the rapid deployment of financial-aid schemes, when the public sector was also dealing with a fundamental shift in service delivery, created opportunities for both abuse and risk of systematic error. Fraudsters have taken advantage of the coronavirus chaos. But their nefariousness is not limited to the public sector.

Ryan Olson, vice president of threat intelligence at American multinational cybersecurity organisation Palo Alto Networks, says COVID-19 triggered “the cybercrime gold rush of 2020”.

Indeed, the latest crime figures published at the end of October by the Office for National Statistics show that, in the 12 months to June, there were approximately 11.5 million offences in England and Wales. Some 51 per cent of them comprised 4.3 million incidents of fraud and 1.6 million cybercrime events – year-on-year jumps of 65 per cent and 12 per cent respectively.

Cybercrime gold rush – counting the cost

Jim Gee, national head of forensic services at Crowe UK, a leading audit, tax, advisory and risk firm, says: “Even more worryingly, while the figures are for a 12-month period, a comparison with the previous quarterly figures shows this increase has occurred in the April-to-June period of 2020, the three months after the COVID-19 health and economic crisis hit. The size of the increase needed in a single quarter to result in a 65 per cent increase over the whole 12 months could mean actual increases of up to four times this percentage.”

In terms of eliminating public sector fraud, Mike Hampson, managing director at consultancy Bishopsgate Financial, fears an expensive game of catch-up. “Examples of misuse have increased over the last few months,” he says. “These include fraudulent support-loan claims and creative scams such as criminals taking out bounce-back loans in the name of car dealerships, in an attempt to buy high-end sports cars.”

AI fraud detection and machine-learning algorithms should be put in the driving seat to pump the brakes on iniquitous activity, he argues. “AI can certainly assist in carrying out basic checks and flagging the most likely fraud cases for a human to review,” Hampson adds.
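Hampson’s pattern – flag the most likely cases, then queue them for a person – is essentially supervised triage: train on past adjudicated claims, score new ones, and send only the highest-risk handful to investigators. A minimal sketch with synthetic data and scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
# Synthetic features per claim, e.g. loan size, company age, prior claims.
X_past = rng.normal(size=(2000, 3))
# Synthetic past outcomes: 1 = confirmed fraud, 0 = legitimate.
y_past = (X_past[:, 0] + rng.normal(size=2000) > 1.5).astype(int)
clf = LogisticRegression().fit(X_past, y_past)

X_new = rng.normal(size=(500, 3))
risk = clf.predict_proba(X_new)[:, 1]       # probability a claim is fraudulent
review_queue = np.argsort(risk)[::-1][:25]  # top 25 go to a human reviewer
print(review_queue[:5], risk[review_queue[:5]].round(2))
```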

John Whittingdale, media and data minister, concedes that the government “needs to adapt and respond better”, but says AI and machine-learning are now deemed critical to eliminating public sector fraud. “As technology advances, it can be used for ill, but at the same time we can adapt new technology to meet that threat,” he says. “AI has a very important part to play.”

Teaming up with technology leaders

Technology is already vital in eliminating public sector fraud at the highest level. In March, the Cabinet Office rolled out Spotlight, the government grants automated due-diligence tool built on a Salesforce platform. Ivana Gordon, head of the government grants management function’s COVID-19 response at the Cabinet Office, says Spotlight “speeds up initial checks by processing thousands of applications in minutes, replacing manual analysis that, typically, can take at least two hours per application”. The tool draws on open datasets from Companies House, the Charity Commission and 360Giving, plus government databases that are not available to the public.
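The Cabinet Office has not published Spotlight’s code, but the shape of such automated due diligence is easy to sketch: pull an applicant’s record from each register and flag any mismatch for a human to investigate. The data structures and lookup below are hypothetical stand-ins, not the real Companies House or Charity Commission APIs.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Application:
    company_number: str
    claimed_incorporation_year: int

def vet(app: Application,
        companies_lookup: Callable[[str], Optional[dict]]) -> list[str]:
    """Cross-check one grant application against a company register."""
    record = companies_lookup(app.company_number)
    if record is None:
        return ["company number not found in register"]
    flags = []
    if record["status"] != "active":
        flags.append(f"company status is {record['status']}")
    if record["incorporated"] != app.claimed_incorporation_year:
        flags.append("incorporation year does not match application")
    return flags

# An in-memory dict stands in for the register in this example.
fake_register = {"01234567": {"status": "dissolved", "incorporated": 2019}}
print(vet(Application("01234567", 2015), fake_register.get))
```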

“Spotlight has proven robust and reliable,” says Gordon, “supporting hundreds of local authorities and departments to administer COVID-19 funds quickly and efficiently. To date, Spotlight has identified irregularities in around 2 per cent of payments, enabling grant awards to be investigated and payments halted to those who are not eligible.”

We need to watch how the technology fits into the whole process. AI doesn’t get things right 100 per cent of the time

She adds that Spotlight is one of a suite of countermeasure tools, including AI fraud detection, developed with technology companies, and trialled and implemented across the public sector to help detect and prevent abuse and error.

Besides, critics shouldn’t be too hard on the public sector, argues David Shrier, adviser to the European Parliament in the Centre for AI, because it was “understandably dealing with higher priorities, like human life, which may have distracted somewhat from cybercrime prevention”. He believes that were it not for the continued investment in the National Cyber Security Centre (NCSC), the cost of fraudulent activity would have been significantly higher.

Work to be done to prevent fraud

Greg Day, vice president and chief security officer, Europe, Middle East and Africa, at Palo Alto Networks, who sits on Europol’s cybersecurity advisory board, agrees. Day points to the success of the government’s Cyber Essentials digital toolkit. He thinks, however, that the NCSC must “further specialise, tailor its support and advice, and strengthen its role as a bridge into information both from the government, but also trusted third parties, because cyber is such an evolving space”.

The public sector has much more to do on three fronts in combating cybercrime and preventing fraud, says Peter Yapp, who was deputy director of incident management at the NCSC until last November. It must encourage more reporting, make life difficult for criminals by upping investment in AI fraud detection, and reallocate investigative resources from physical to online crime, he says.

Yapp, who now leads law firm Schillings’ cyber and information security team, says a good example of an initiative that has reduced opportunity for UK public sector fraud is the NCSC’s Mail Check, which monitors 11,417 domains classed as public sector. “This is used to set up and maintain good domain-based message authentication, reporting and conformance (DMARC), making email spoofing much harder,” he says. “Organisations that deploy DMARC can ensure criminals do not successfully use their email addresses as part of their campaigns.”
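A domain’s DMARC policy is simply a DNS TXT record published at _dmarc.<domain>, so checking whether one is in place is straightforward. Below is a minimal check using the third-party dnspython package; the domain queried is just an example.

```python
from typing import Optional

import dns.resolver  # third-party package: dnspython

def dmarc_policy(domain: str) -> Optional[str]:
    """Return the DMARC record a domain publishes, if any."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            return txt  # e.g. "v=DMARC1; p=reject; rua=mailto:..."
    return None

print(dmarc_policy("gov.uk"))
```

A policy of p=reject tells receiving mail servers to refuse messages that fail authentication, which is what makes spoofing a protected domain so much harder.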

While such guidance is welcome, there are potential problems with embracing tech to solve the challenge of eliminating public sector fraud, warns Dr Jeni Tennison, vice president and chief strategy adviser at the Open Data Institute. If unchecked, AI fraud detection could be blocking people and businesses that are applying for loans in good faith, or worse, she says.

“We need to watch out how the technology and AI fit into the whole process,” says Tennison. “As we have seen this year, with the Ofqual exam farrago, AI doesn’t get things right 100 per cent of the time. If you assume it is perfect, then when it doesn’t work, it will have a very negative impact on the people who are wrongly accused or badly affected to the extent they, and others, are fearful of using public sector services.”

There are certainly risks with blindly following any technology, concurs Nick McQuire, senior vice president and head of enterprise research at CCS Insight. But the public sector simply must arm itself with AI or the cost to the taxpayer will be, ultimately, even more significant. “Given the scale of the security challenge, particularly for cash-strapped public sector organisations that lack the resources and skills to keep up with the current threat environment, AI, warts and all, is going to become a crucial tool in driving automation into this environment to help their security teams cope.”

This article was originally published in Raconteur’s Public Sector Technology report in December 2020