WTF is an insider threat – and why is it a growing problem for businesses?

The vast majority (95%) of cybersecurity incidents last year were caused by human error, according to the World Economic Forum. Such incidents appear to be spiraling, with the annual cost of cybercrime predicted to reach $8 trillion this year, according to Cybersecurity Ventures. If that wasn’t alarming enough, experts warn that bad actors within organizations are a growing security risk.

Some employees, manipulated and compromised through “social engineering,” might not even realize they are aiding and abetting criminals. Similarly, employers might not know they have been attacked, until it’s too late.

Worse, all too often, businesses — which are, in the post-pandemic era, being urged to provide greater autonomy to and trust in employees — are blindsided by this so-called “insider threat.”

In a nutshell, an insider threat refers to someone who steals data from, or sabotages the internal systems of, the organization they work for, for their own purposes. For example, in 2017, an administrator working for Dutch hosting provider Verelox deleted all customer data and wiped most of the company’s servers.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in January 2023 – to read the complete piece, please click HERE.

How the drive to improve employee experience could trigger a ‘data-privacy crisis’

How much personal information would you feel comfortable with your company knowing, even if it improves the working experience? Where is the line? Also, will that boundary be different for your colleagues?

Right now, it’s all a gray area, but it could darken quickly. Because of that fuzziness and subjectivity, it’s a tricky balance for employers to strike. On the one hand, they are being encouraged — if not urged — to dial up personalization to attract and retain top talent. On the other hand, with too much information on staff, they might be accused of taking liberties and trespassing on employees’ data privacy.

In 2023, organizations are increasingly using emerging technologies — artificial intelligence (AI) assistants, wearables, and so on — to collect more data on employees’ health, family situations, living conditions, and mental health to respond more effectively to their needs. But embracing these technologies has the potential to trigger a “data-privacy crisis,” warned Emily Rose McRae, senior director of management consultancy Gartner’s human resources practice.

Earlier in January, Gartner listed “as organizations get more personal with employee support, it will create new data risks” among its top nine workplace predictions for chief human resources officers this year.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in January 2023 – to read the complete piece, please click HERE.

How hybrid working brings teams closer but also creates ‘micro cultures’ and internal conflicts

Who needs a water cooler in the digital age? Paradoxically, the pandemic-induced shift to hybrid and remote working has, in many instances, drawn teams closer together, according to Gartner research. 

“We have seen that people have stronger ties with their immediate hybrid team as they have more interactions with those members,” said Piers Hudson, senior director of Gartner’s HR functional strategy and management research team.

Conversely, Hudson noted, bonds between people from different departments — colleagues whom employees would previously have run into more often in an office environment — have weakened in hybrid and remote setups. “We found that employees interact once a week or less with their ‘weak ties’ — people outside their function — versus several times a week before the pandemic,” he said.

For most hybrid or remote workers, though, team members are “the only people they interact with several times a day,” added Hudson. 

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in January 2023 – to read the complete piece, please click HERE.

Why cybersecurity leaders are actively recruiting neurodiverse talent

In an attempt to clamp down harder on the increased risk of cybersecurity threats to businesses, tech leaders are actively hiring neurodivergent people because of the strong problem-solving and analytical skills they can offer.

The neurodiversity spectrum is wide, ranging from attention deficit hyperactivity disorder (ADHD), dyslexia, dyspraxia and Tourette syndrome to autism and bipolar disorder. But the common strengths of neurodivergent individuals – including pattern-spotting, creative insights and visual-spatial thinking – are finally being recognized, not least in the cybersecurity sector.

Holly Foxcroft, head of neurodiversity in cyber research and consulting at London-based professional search firm Stott and May Consulting, said that neurodivergent individuals have “spiky profiles.” Foxcroft, who is neurodivergent herself, explained that these visual representations highlight an individual’s strengths and the areas where they need development or support. “Neurodivergent profiles show that individuals perform highly in areas where neurotypicals have a consistent and moderate line,” she said.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in January 2023 – to read the complete piece, please click HERE.

The future of work is not evenly distributed – how employers can prepare

“The future is already here; it’s just not evenly distributed.” U.S.-Canadian writer William Gibson, the father of the cyberpunk sub-genre of science fiction, has had his finger on the pulse of breakthrough innovations for decades. In early 2023, this perceptive comment is especially apt for the working world, which is going through the most seismic transformation in its history.

The digital revolution, accelerated by the pandemic, presents both challenges and opportunities. For instance, technology has enabled remote working. And yet employees are clocking up more hours when not in the office, and loneliness, which harms mental health, is becoming a worrying side effect. The number of meetings has also shot up, and people often mistake being busy for being productive.

Moreover, while workers demand more time and location flexibility, where does that leave industries in which it isn’t feasible? It’s all very well for those in desk-based jobs to use tech to improve their work-life balance, yet around 80% of global workers are “deskless.” They need to be physically present to do their jobs. 

To help navigate the journey ahead, WorkLife selected nine recent statistics to show the direction of travel, identify the most prominent likely obstacles, and offer advice from experts on how employers can overcome them. In this article, we have included four, and the remaining five will be published separately.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in December 2022 – to read the complete piece, please click HERE.

WTF is social engineering?

Who can you trust online? Given the surging number of global identity thefts, it seems we are nowhere near cautious enough regarding digital interactions.

Neil Smith, partner success manager for EMEA North at cybersecurity firm Norton, said 55% of people in the U.K. admit that they would have no idea what to do if their identity was stolen. “The biggest worry is that we ourselves are often the root cause of identity theft,” he added.

Further, Allen Ohanian, chief information security officer of Los Angeles County, said that, alarmingly, 67% of us trust people online more than in the physical world.

In early 2022, the World Economic Forum calculated that 95% of cybersecurity incidents occur due to human error. “Almost every time there’s an attack, it’s down to a mistake by or manipulation of people like you and me,” said Jenny Radcliffe, who goes by the moniker “The People Hacker.”

Indeed, 98% of all cyberattacks involve some form of social engineering, according to cybersecurity firm Purplesec.

But what exactly is social engineering?

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in December 2022 – to read the complete piece, please click HERE.

How the move to hybrid working has become a ‘buffet’ for cybercriminals

The future of work may be flexible, but are businesses – particularly small- to medium-sized organizations – investing enough time, money, and effort to ramp up cybersecurity sufficiently? No, is the short answer, and it’s a massive concern on the eve of 2023.

With the sophistication of cyber threats on the rise and the increased attack vectors exposed by hybrid working, bad actors are preying on the weakest links in the chain to reach top-tier targets. 

A witticism doing the rounds on the cybersecurity circuit jokes that the hackers who have transformed ransomware attacks – whereby criminals lock their target’s computer systems or data until a ransom is paid – into a multibillion-dollar industry are more professional than their most high-profile corporate victims. But it’s no laughing matter.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in December 2022 – to read the complete piece, please click HERE.

The appliance of prescience

Advances in artificial intelligence are giving organisations in both the public and private sectors increasingly powerful forecasting capabilities. How much further down this predictive path is it possible for them to go?

Minority Report, Steven Spielberg’s 2002 sci-fi thriller based on a short story by Philip K. Dick, explores the concept of extremely proactive policing. The film, starring Tom Cruise, is set in 2054 Washington DC. The city’s pre-crime department, using visions provided by three clairvoyants, can accurately forecast where a premeditated homicide is about to happen. The team is then able to dash to the scene and collar the would-be murderer just before they strike.

While police forces are never likely to have crack teams of incredibly useful psychics at their disposal, artificial intelligence has advanced to such an extent in recent years that its powerful algorithms can crunch huge volumes of data to make startlingly accurate forecasts.

Could a Minority Report style of super-predictive governance ever become feasible in the public sector – or, indeed, in business? If so, what would the ethical implications of adopting such an approach be?

There is a growing list of narrow-scope cases in which predictive analytics has been used to fight crime and save lives. In Durham, North Carolina, for instance, the police reported a 39% fall in the number of violent offences recorded between 2007 and 2014 after using AI-based systems over that period to observe trends in criminal activities and identify hotspots where they could make more timely interventions.

AI has also been used to tackle human trafficking in the US, where it has helped the authorities to locate and rescue thousands of victims. Knowing that about 75% of child trafficking cases involve grooming on the internet, the government’s Defense Advanced Research Projects Agency monitors suspicious online ads, detects coded messages and finds connections between these and criminal gangs.

In Indonesia, the government has partnered with Qlue, a specialist in smart city technology, to predict when and where natural disasters are most likely to strike. Its systems analyse flood data collected from sensors and information reported by citizens. This enables it to identify the localities most at risk, which informs disaster management planning and enables swifter, more targeted responses.

While all these cases are positive examples of the power of predictive AI, it would be nigh-on impossible to roll out a Minority Report style of governance on a larger scale. That’s the view of Dr Laura Gilbert, chief analyst and director of data science at the Cabinet Office. “To recreate a precognitive world, you would need an incredibly advanced, highly deterministic model of human behaviour – using an AI digital-twin model, perhaps – with low levels of uncertainty being tolerable,” she says. “It’s not certain that this is even possible.”

An abundance of information is required to understand a person’s likely behaviour, such as their genetic make-up, upbringing, current circumstances and more. Moreover, achieving errorless results would require everyone to be continuously scrutinised.

“Doing this on a grand scale – by closely monitoring every facet of every life; accurately analysing and storing (or judiciously discarding) all the data collected; and creating all the technology enhancements to enable such a programme – would be a huge investment and also cost us opportunities to develop other types of positive intervention,” Gilbert says. “This is unlikely to be even close to acceptable, socially or politically, in the foreseeable future.”

Tom Cheesewright, a futurist, author and consultant, agrees. He doubts that such an undertaking would ever be considered worthwhile, even in 2054. “The cost to the wider public in terms of the loss of privacy would be too great,” Cheesewright argues, adding that, in any case, “techniques for bypassing surveillance are widely understood”.

Nonetheless, Vishal Marria, founder and CEO of enterprise intelligence company Quantexa, notes that the private sector, particularly the financial services industry, is making great use of AI in nipping crimes such as money-laundering in the bud. “HSBC has pioneered a new approach to countering financial crime on a global scale across billions of records,” he says. “Only by implementing contextual analytics technology could it identify the risk more accurately, remove it and enable a future-proof mitigation strategy.”

Alex Case, senior director in EMEA for US software company Pegasystems, believes that governments and their agencies can take much from the private sector’s advances. Case, who worked as a deputy director in the civil service from 2018 to 2021, says: “The levels of service being routinely provided by the best parts of the private sector can be replicated in government. In contrast with the dystopian future depicted in Minority Report, the increasing use of AI by governments may lead to a golden age of citizen-centric public service.”

Which other operations or business functions have the most to gain from advances in predictive analytics? Cheesewright believes that “the upstream supply chain is an obvious one in the current climate. If you can foresee shortages owing to pandemics, wars, economic failures and natural disasters, you could gain an enormous competitive advantage.”

The biggest barriers to wielding such forecasting power are a lack of high-quality data and a shortage of experts who can analyse the material and draw actionable insights from it. “Bad data can turn even a smooth deployment on the technology side into a disaster for a business,” notes Danny Sandwell, data strategist at Quest Software. “Data governance – underpinned by visibility into, and insights about, your data landscape – is the best way to ensure that you’re using the right material to inform your decisions. Effective governance helps organisations to understand what data they have, its fitness for use and how it should be applied.”

Sandwell adds that a well-managed data governance programme will create a “single version of the truth”, eliminating duplicate data and the confusion it can cause. Moreover, the most advanced organisations can build self-service platforms by establishing standards and investing in data literacy. “Data governance enables a system of best practice, expertise and collaboration – the hallmarks of an analytics-driven business,” he says.

Gilbert offers business leaders one final piece of advice in this area: recruit carefully. She argues that “a great data analyst is worth, at a conservative estimate, 20 average ones. They can often do things that any number of average analysts working together still can’t achieve. What’s more, a bad analyst will cost you both money and time.”

And, as Minority Report’s would-be criminals discover to their cost, time is the one resource that’s impossible to claw back.

This article was first published in Raconteur’s Future of Data report in October 2022

How financial services operators are dialling up conversational AI to catch out fraudsters

Organisations are using new technology to analyse the voices of those posing as customers in real time while reducing false positives

Great Britain is the fraud capital of the world, according to a Daily Mail investigation published in June. The study calculated that 40 million adults have been targeted by scammers this year. In April, a reported £700m was lost to fraud, compared to an average of £200m per month in 2021. As well as using convincing ruses, scammers are becoming increasingly sophisticated cybercriminals.

If the UK does go into recession, as predicted, then the level of attacks is likely to increase even further. Jon Holden is head of security at digital-first bank Atom. “Any economic and supply-chain pressure has always had an impact and motivated more fraud,” he says. He suggests that the “classic fraud triangle” of pressure, opportunity and rationalisation comes into play. 

Financial service operators are investing in nascent fraud-prevention technologies such as conversational AI and other biometric solutions to reduce fraud. “Conversational AI is being used across the industry to recognise patterns in conversations, with agents or via chatbots, that may indicate social engineering-type conversations, to shut them down in real time,” continues Holden. “Any later than real time and the impact of such AI can be deadened as the action comes too late. Linking this to segmentation models that identify the most vulnerable customers can help get action to those that need it fastest and help with target prevention activity too.”

This last point is crucial because educating customers about swindlers is not straightforward. “Unfortunately, there will always be vulnerable people being scammed,” Holden says. “The banks are doing a lot of work to identify and protect vulnerable customers, but clever social engineering, often over a long period, will always create more victims of romance scams, investment scams, or purchase scams when victims send money for goods never received.”

How AI can help fight fraud

AI is a critical tool to fight fraud. Not only does it reduce the possibility of human error but it raises the flag quickly, which enables faster, smarter interventions. Additionally, it provides “far better insight of the cyber ecosystem”, adds Holden, “almost at the point of predictive detection, which helps with both threat decisioning and threat hunting”. 

Jason Costain is head of fraud prevention at NatWest, which serves 19 million customers across its banking and financial services brands. He agrees it is vital for conversational AI to join the chat. Because the call centre is an important customer service channel and a prime target for fraudulent activity – both from lone-wolf attackers and organised crime networks – he resolved to establish more effective security mechanisms while delivering a fast, smooth experience for genuine customers. 

In late 2020, NatWest opted for a voice-biometric solution from Nuance, a company which Microsoft recently acquired. It screens every incoming call and compares voice characteristics – including pitch, cadence, and accent – to a digital library of voices associated with fraud against the bank. The software immediately flags suspicious calls and alerts the call centre agent about potential fraud attempts.
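Conceptually, this kind of screening works like a nearest-neighbour comparison between the caller’s “voiceprint” and a library of voiceprints linked to past fraud. The sketch below is purely illustrative — the feature vectors, library entries, threshold, and function names are all invented for the example, not NatWest’s or Nuance’s actual implementation:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length voice feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical voiceprints: numeric features standing in for traits
# such as pitch, cadence, and accent.
FRAUD_LIBRARY = {
    "fraudster_042": [0.91, 0.30, 0.55],
    "fraudster_107": [0.12, 0.80, 0.40],
}

def screen_call(voiceprint, threshold=0.98):
    """Flag the call if the caller's voiceprint closely matches a known-fraud voice."""
    for label, known in FRAUD_LIBRARY.items():
        if cosine_similarity(voiceprint, known) >= threshold:
            return ("flagged", label)
    return ("clear", None)
```

A caller whose voiceprint sits very close to a library entry — say `screen_call([0.90, 0.31, 0.54])` — would be flagged against `fraudster_042`, while a dissimilar voice passes as clear. Production systems use far richer acoustic models, but the match-against-a-watchlist shape is the same.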

Before the end of the first year of deploying the Nuance Gatekeeper system, NatWest had screened 17 million incoming calls. Of those, 23,000 led to alerts and the bank found that around one in every 3,500 calls is a fraud attempt. As well as a library of ‘bad’ voices, NatWest agents now have a safe list of genuine customer voices that can be used for rapid authentication without customers needing to recall passwords and other identifying information. That knowledge enables the bank to identify and disrupt organised crime activities to protect its customers and assist law enforcement.

“We’re using voice-biometric technology to build a clear picture of our customers’ voices and what criminal voices sound like,” Costain says. “We can detect when we get a fraudulent voice coming in across our network as soon as it happens. Using a combination of biometric and behavioural data, we now have far greater confidence that we are speaking to our genuine customers and keeping them safe.”

He estimates the return on investment from the tool is more than 300%. “As payback from technology deployment, it’s been impressive. But it’s not just about stopping financial loss; it’s about disrupting criminals.” For instance, NatWest identified a prolific fraudster connected to suspect logins on 1,500 bank accounts, and an arrest followed.

“For trusted organisations like banks, where data security is everything, the identification of the future is all about layers of security: your biometrics, the devices you use, and understanding your normal pattern of behaviour,” adds Costain. “At NatWest, we are already there, and our customers are protected by it.”

Benefits of investing in conversational AI

There are other benefits to be gained by investing in conversational AI solutions. Dr Hassaan Khan is head of the School of Digital Finance at Arden University. He points to a recent survey that indicates almost 90% of the banking sector’s interactions will be automated by 2023. “To stay competitive, organisations must rethink their strategies for improved customer experience. Banks are cognisant that conversational AI can help them be prepared and meet their customers’ rising demands and expectations,” he says.

This observation chimes with Livia Benisty. She is the global head of anti-money laundering at Banking Circle, the B2B bank relied on by Stripe, Paysafe, Shopify and other big businesses, responsible for settling approximately 6% of the world’s ecommerce payments. “With AML fines rocketing – the Financial Conduct Authority dished out a record $672 million (£559m) in 2021 – it’s clear that transaction monitoring cannot cope in its current state,” Benisty says. “That’s why adopting AI and machine learning is vital for overturning criminal activity.”

She argues, however, that many in the financial services industry are reluctant to invest in the newest AML solutions for fear of being reprimanded by regulators. “If you’re a bank, you come under a lot of scrutiny and there’s been resistance to using AI like ours,” she says. “AI is seen as unproven and risky to use but the opposite is true. Since our initial implementation of AI three years ago, the improvements to alert quality have been incredible. AI alleviates admin-heavy processes, enhancing detection by increasing rules precision and highlighting red flags the naked human eye could never spot.”

Even regulators would be impressed by the results revealed by Banking Circle’s head of AML. More than 600 bank accounts have been closed or escalated to the compliance department, thanks to AI-related findings. Further, the solution “dramatically reduces” the so-called false positive alerts. “It’s well known the industry can see rates of a staggering 99%,” adds Benisty. “In highlighting fewer non-risky payments, fewer false positives are generated, ultimately meaning more time to investigate suspicious payments.”
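The scale of that false-positive problem is easier to grasp with a little arithmetic. The figures below are invented for illustration (the article gives only the 99% rate); they sketch why fewer, more precise alerts translate directly into investigator time:

```python
# Illustrative arithmetic for the false-positive problem described above.
# All volumes are invented for the example, not Banking Circle's figures.

alerts = 10_000    # monthly transaction-monitoring alerts
fp_rate = 0.99     # 99% of alerts are false positives (the industry rate cited above)

true_hits = round(alerts * (1 - fp_rate))   # genuinely suspicious alerts
print(true_hits)   # prints 100 — investigators sift 10,000 alerts for 100 real cases

# If better models cut raw alert volume by 70% while keeping every true hit,
# precision improves and investigator time shifts to genuine risk.
reduced_alerts = alerts * 0.30
precision_after = true_hits / reduced_alerts
print(round(precision_after, 3))   # prints 0.033 — roughly 1 alert in 30 now worth investigating
```

Before the cut, only 1 alert in 100 was worth investigating; afterwards it is about 1 in 30 — the same real cases surfaced from less than a third of the workload.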

As the economy weakens, and criminals grow stronger, financial services operators would be wise to dial up their conversational AI capabilities to improve customer experience today and pave the way to a password-less tomorrow.

This article was first published in Raconteur’s Fraud, Cybersecurity and Financial Crime report in July 2022

Ransomware is your biggest threat, NCSC CEO tells business

As head of the National Cyber Security Centre, Lindy Cameron believes company leaders must improve preparedness and resilience by educating staff – and themselves

Lindy Cameron is a difficult person to reach. That’s understandable: as CEO of the National Cyber Security Centre (NCSC), she’s at the forefront of the UK’s fight against computer security threats. While it’s tough for a journalist to negotiate an interview, it’s reassuring that she’s dedicated to her task. 

The NCSC provides advice and support for public and private sector organisations, helping them avoid computer security threats. Cameron took the helm in October 2020, succeeding inaugural CEO Ciaran Martin, who stepped aside after four years in the job.

Her assessment of cyber threats, themes and advice should be required reading for CIOs and other members of the C-suite. Indeed, on the rare occasions she has spoken in public since taking up the role, she hasn’t held back.

For instance, in March she warned of the UK’s need to be “clear-eyed about Chinese ambition in technological advancement”. Speaking in her first address as CEO, she chided China’s “hostile activity in cyberspace” while adding that “Russia [is] the most acute and immediate threat” to the country.

Ransomware: an immediate danger 

The former number two at the Northern Ireland Office has over two decades of experience working in national security policy and crisis management. She was equally forthright and insightful in October’s keynote speech at Chatham House’s Cyber 2021 conference, where she reflected on her first year at the NCSC and identified four key cybersecurity themes. The most alarming is the pervasiveness of ransomware, the scourge of business leaders.

In May, US cloud-based information security company Zscaler calculated that cybercrime was up 69% in 2020. Ransomware accounted for over a quarter (27%) of all attacks, with a total of $1.4 billion demanded in payments. And those figures didn’t include two hugely damaging breaches in 2021 that demonstrated the expanding reach of bad actors.

July’s ransomware attack on multinational remote management software company Kaseya affected thousands of organisations and saw the largest ever ransomware demand: $70 million. The REvil ransomware gang that claimed responsibility for the attack demanded ransoms ranging from a few thousand dollars to multiple millions, although it’s unclear how much was paid. The gang said 1 million systems had been impacted across almost 20 countries. While those numbers are likely to be exaggerated, the attack triggered widespread operational downtime for over 1,000 companies.

The Kaseya incident came two months after the attack on Colonial Pipeline, one of the largest petroleum pipelines in the United States. The attack disabled the 5,500-mile system, sparking fuel shortages and panic buying at gas stations. Within hours of the breach, a $4.4m ransom was paid to DarkSide, an aptly named Russian hacking group. Despite the payment – later recovered – the pipeline was down for a week.

“Ransomware presents the most immediate danger to the UK, UK businesses and most other organisations – from FTSE 100 companies to schools; from critical national infrastructure to local councils,” Cameron told the October conference. “Many organisations – but not enough – routinely plan and prepare for this threat, and have confidence their cybersecurity and contingency planning could withstand a major incident. But many have no incident response plans and never test their cyber defences.”

Managing and mitigating cyber risk

The sheer number of cyberattacks, their broader scope and growing sophistication should keep CIOs awake at night. The latest Imperva Cyber Threat Index score is 764 out of 1,000, nearing the top-level “critical” category. Other statistics hint at the prevalence of cybercrime in 2021: some 30,000 websites on average are breached every day, with a cyberattack occurring every 11 seconds, almost twice as often as in 2019.

Cybersecurity organisation Mimecast reckons six in 10 UK companies suffered such an attack in 2020. In her Raconteur interview, conducted a fortnight after her appearance at Chatham House, Cameron reiterated her concerns.

“Right now, ransomware poses the most immediate threat to UK businesses, and sadly it is an issue which is growing globally,” she says. “While many organisations are alert to this, too few are testing their defences or their planned response to a major incident.”

Despite the headline-stealing attacks, businesses aren’t doing enough to prepare for ransomware attacks, says Cameron. Cyber risks can and must be managed and mitigated. To an extent, CIOs and chief information security officers (CISOs) are responsible for communicating the potentially fatal threat to various stakeholders.

Cyberattacks are different from other shocks because they aren’t readily perceptible. They are deliberate, and they can come from inside or outside the organisation. They hit every aspect of an organisation – human resources, finance, operations and more – making them incredibly hard to contain.

“The impact of a ransomware attack on victims can be severe,” Cameron continues, “and I’ve heard powerful testimonies from CEOs facing the repercussions of attacks they were unprepared for. Attacks can affect an organisation’s finances, operations and reputation, both in the short and long term.”

Building cyber resilience 

CEOs can’t hide behind their security teams if breached by a cyberattack. Cameron warns that defending against these incidents can’t be treated as “just a technical issue” – it’s a board-level matter, demanding action from the top. 

“A CEO would never say they don’t need to understand legal risk just because they have a General Counsel. The same applies to cybersecurity.” 

Cybersecurity should be central to boardroom thinking, Cameron adds. “We need to go further to ensure good practice is understood and resilience is being built into organisations. Investing resources and time into putting good security practices into place is crucial for boosting cyber resilience.”

Cameron notes that the NCSC’s guidance, updated in September, will reduce the likelihood of becoming infected by malware – including ransomware – and limit the impact of the infection. It also includes advice on what CIOs, CISOs and even CEOs should do if systems are already infected with malware. 

Cameron, who was previously director general responsible for the Department for International Development’s programmes in Africa, Asia and the Middle East, echoes Benjamin Franklin’s famous maxim: “By failing to prepare, you are preparing to fail.” 

There’s a wide range of practical, actionable advice available on the NCSC website, she notes.

“One of the key things I have learned in my first year as NCSC CEO is that organisations can prevent the vast majority of high-profile cyber incidents we’ve seen following guidance we have already issued,” she adds. 

Low-hanging fruit

At the Chatham House event, Cameron acknowledged that small- and medium-sized enterprises are especially vulnerable to cyberattacks. “I completely understand this is getting harder, especially for small businesses with less capability,” she said. “But it is crucial to build layered defences that are resilient to this.”

SMEs are low-hanging fruit for cybercriminals, as they usually don’t have the budget or access to sufficient IT support or security. “We appreciate smaller organisations may not have the same resources to put into cybersecurity as larger businesses,” Cameron says.

The NCSC has produced tailored advice for such organisations in its Small Business Guide. This explains what to consider when backing up data, how to protect an organisation from malware, tips to secure mobile devices and the information stored on them, things to bear in mind when using passwords and advice on identifying phishing attacks.
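As a toy illustration of the guide’s backup advice – the function and paths below are hypothetical, not taken from the NCSC guide itself – keeping dated copies on separate storage limits what an attacker can destroy:

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_directory(source: str, dest_root: str) -> Path:
    """Copy a directory tree into a new timestamped folder under dest_root."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = Path(dest_root) / f"backup-{stamp}"
    shutil.copytree(source, dest)  # fails if dest exists, so older copies survive
    return dest

# e.g. backup_directory("/srv/accounts", "/mnt/offsite-backups")
# Several dated copies, ideally on media disconnected from the network,
# limit what ransomware can encrypt or delete.
```

The point of the timestamped folders is that a backup job can never silently overwrite the last good copy with an already-encrypted one.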

Criminals will seek to exploit a weak point, which could include an SME in a supply chain. Larger organisations, says Cameron, have a “responsibility to work with their suppliers to ensure operations are secured. In the past year, we have seen an increase in supply chain attacks with impacts felt around the world, underlining how widespread supply networks can be.”

Supply chain concerns

Supply chain attacks were another of Cameron’s four key themes at the Chatham House conference. Such vulnerabilities “continue to be an attractive vector at the hand of sophisticated actors and … the threat from these attacks is likely to grow,” she said. “This is particularly the case as we anticipate technology supply chains will become increasingly complicated in the coming years.”

The most infamous recent supply chain attack was on SolarWinds, said Cameron. According to the former CEO and other SolarWinds officials, the breach happened because criminals obtained a key password – it was solarwinds123. This highlights the importance of strong passwords for companies large and small. 
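A minimal sketch of the kind of weak-password screening that would have rejected a credential like solarwinds123 – the denylist and rules here are illustrative only, not any official policy:

```python
import re

# Tiny illustrative denylist; real checks use large breached-password corpora.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "solarwinds123"}

def password_is_weak(password: str) -> bool:
    """Reject passwords that are common, short, or low in character variety."""
    if password.lower() in COMMON_PASSWORDS:
        return True
    if len(password) < 12:
        return True
    # Require at least three classes: lower, upper, digit, symbol.
    classes = sum(bool(re.search(pattern, password))
                  for pattern in (r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"))
    return classes < 3

print(password_is_weak("solarwinds123"))            # True: on the denylist
print(password_is_weak("correct-Horse-battery-9"))  # False: long and varied
```

Screening new passwords against known-breached lists is cheap to do and closes off exactly the guessable credentials attackers try first.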

“SolarWinds was a stark reminder of the need for governments and enterprises to make themselves more resilient should one of their key technology suppliers be compromised,” Cameron said at Chatham House.

The two other areas of cyber concern she highlighted were the vulnerabilities exposed by the coronavirus pandemic and the development of strategically important technology. “We are all increasingly dependent on that technology and it is now fundamental to both our safety and the functioning of society,” she said of the latter.

On the former theme, Cameron said that malicious actors are trying to access Covid-related information, whether vaccine procurement plans or data on new variants. 

“Some groups may also seek to use this information to undermine public trust in government responses to the pandemic. The coronavirus pandemic continues to cast a significant shadow on cybersecurity and is likely to do so for many years to come.”

CIOs must keep this in mind as many organisations grapple with post-pandemic ways of working. This involves more remote workers using personal or poorly protected devices on unsecured networks, all of which play into the hands of bad actors.

“Over the past 18 months, many organisations will have likely increased remote working for staff and introduced new online services and devices to stay connected,” says Cameron. “While this has offered a solution for many businesses, it’s vital for the risks to be mitigated so users and networks work securely. Our home-working guidance offers practical steps to help with safe remote working.”

Post-pandemic cybersecurity 

Providing other essential advice, Cameron underlines the importance for organisations of all sizes to build their cyber resilience. 

“It’s vital that organisations of all sizes take the right steps to build their cyber resilience. Educating employees is an important aspect of keeping any business secure. Staff can be an effective first line of defence against cyberattacks if they are equipped with the right understanding and feel they can report anything suspicious.”

Businesses should put a clear IT policy in place that guides employees on best practices, while staff should be encouraged to use the NCSC’s “Top Tips for Staff” training package. 

“These steps are about creating a positive cybersecurity culture and we believe senior leaders should lead by example,” she adds. 

The NCSC’s Board Toolkit is particularly useful for CIOs, designed to help facilitate cybersecurity discussions between board members and technical experts. It will “help ensure leaders are informed and cybersecurity considerations can be integrated into business objectives”.

These conversations are now critical, as advances in artificial intelligence, the internet of things, 5G and quantum computing multiply attack surfaces. Reflecting on the NCSC’s work since its inception five years ago, Cameron says the organisation has achieved a huge amount, including dealing with significant cyber incidents, improving the resilience of critical networks and developing a skills pipeline for the future. 

“This is delivering real benefits for the nation, from protecting multinational companies to defending citizens against online harm. However, the challenges we face in cyberspace are always changing, so we can’t rest on our laurels.”

This article was first published in Raconteur’s Future CIO report in November 2021

Is China dominating the West in the artificial intelligence arms race?

The US has warned that it is behind its historical foe in the East, and the European bloc is also concerned, but there are ways in which the UK, for example, could catch up, according to experts

If you ask technology experts in the West which country is winning the artificial intelligence arms race, a significant majority will point to China. But is that right? Nicolas Chaillan, the Pentagon’s first Chief Software Officer, effectively waved the white flag in September, when his resignation letter lamented his country’s “laggard” approach to skilling up for AI and a lack of funding. 

A month later, he was more explicit when telling the Financial Times: “We have no competing fighting chance against China in 15 to 20 years. Right now, it’s already a done deal; it is already over, in my opinion.”

The 37-year-old spent three years steering a Pentagon-wide effort to increase the United States’ AI, machine learning, and cybersecurity capabilities. After stepping down, he said there was “good reason to be angry.” He argued that his country’s supposed slow technological transformation was allowing China to achieve global dominance and effectively take control of critical areas, from geopolitics to media narratives and everywhere in between.

Chaillan suggested that some US government departments had a “kindergarten level” of cybersecurity and stated he was worried about his children’s future. He made his outspoken comments mere months after a congressionally mandated national security commission predicted in March that China could speed ahead as the world’s AI superpower within the next decade.

Following a two-year study, the National Security Commission on Artificial Intelligence concluded that the US needed to develop a “resilient domestic base” for creating semiconductors required to manufacture a range of electronic devices, including diodes, transistors, and integrated circuits. Chair Eric Schmidt, the former Google CEO, warned: “We are very close to losing the cutting edge of microelectronics, which power our companies and our military because of our reliance on Taiwan.”

Countering the rise of China

Jens Stoltenberg, the Nato Secretary-General since 2014, echoed the US concerns about how China is galloping away from competitors due to its investment in innovative technology, which other countries have embraced. The implicit – yet hard-to-prove – worry is that the ubiquitous tech is a strategic asset for the Chinese government. But is this a case of deep-rooted, centuries-old mistrust of the East by the West?

The former Norwegian Prime Minister, ever the diplomat, was at pains to stress that China was not considered an “adversary.” However, he did make the point that its cyber capabilities, new technologies, and long-distance missiles were on the radar of European security services. 

In a late-October interview with the Financial Times, Stoltenberg admitted that Nato would expand its focus to counter the “rise of China”. “Nato is an alliance of North America and Europe,” he said, “but this region faces global challenges: terrorism, cyber but also the rise of China.”

Ominously, Stoltenberg continued: “China is coming closer to us. We see them in the Arctic. We see them in cyberspace. We see them investing heavily in critical infrastructure in our countries. They have more and more high-range weapons that can reach all Nato-allied countries.”

But is China truly so far in front of others? According to the venerated Global AI Index, calculated by Tortoise Media, the US leads the race, with China second. In late September, the UK – currently third in the rankings, slightly ahead of Canada and South Korea – unveiled its National AI Strategy, which sets out a 10-year plan to make it a “global AI superpower”.

UK plans to become global AI superpower

Some £2.3 billion has already been poured into AI initiatives by the UK government since 2014, though this document – the country’s first package solely focused on AI and machine learning – will accelerate progress, enthuses the Department for Digital, Culture, Media and Sport’s digital minister, Chris Philp. 

“The UK already punches above its weight internationally, and we are ranked third in the world behind the US and China in the list of top countries for AI,” he said. “AI technologies generate billions [of pounds] for the economy and improve our lives. They power the technology we use daily and help save lives through better disease diagnosis and drug discovery.”

A self-styled AI champion and World Economic Forum AI Council member, Simon Greenman notes that the UK is home to the largest number of AI companies and start-ups (8%) outside the US (40%). Additionally, venture capital investment in UK AI projects reached £2.4bn in 2019. 

“Money isn’t the issue,” says the Checkit Non-Executive Director, when discussing the perceived lack of progress being made by the UK. “The problem is we don’t have enough good commercial AI skills, such as product management and enterprise sales, to put the theory, research, and vision into practice.

“For instance, the ‘Office of AI’ doesn’t have an AI implementation budget. If we’re going to realise the potential that AI can bring to the UK, the government needs to put its money where its mouth is and appoint somebody who has a central budget to implement large-scale AI deployments when it comes to public policy.”

Greater collaboration needed

Fakhar Khalid, Chief Scientist of London-headquartered SenSat, a cloud-based 3D interactive virtual engineering platform, is more optimistic about the UK’s chances of becoming an AI superpower and calls for patience. While he agrees that “the US and China are the leading nations in terms of AI innovation and commercialisation,” he notes that China published its first AI strategy in 2017. The US followed with equivalent plans two years later. 

“Although these strategies have recently started to emerge in the public and policy domain, these countries have been investing healthily in their ecosystems since the early 1990s,” he says. “In the 90s, the US was not only the leading country for AI education, but its academic innovation also had strong ties with the industry, ensuring a direct impact on the growth of their economy.”

Hinting at the different types of government that enable more collaboration in China compared to the US, the UK, and even Europe as a bloc, he continues: “China, on the other hand, has been radical and ambitious in building its technology capabilities by strongly linking government, academia, and industry to show the beneficial impact of AI on their economy. The government centrally controls China’s AI strategy with hyperlocal implementation.

“The UK’s long overdue AI strategy is a clear indication that we are here to declare ourselves as the key leader in this field, yet we have much to learn from these nations about commercialising our research and creating a strong and impactful link between academia and industry.”

For Dr Mahlet Zimeta, Head of Public Policy at the Open Data Institute in the UK, while China and the US are ahead in the AI race, there are ways in which her country can catch up. “The territories that are lined up to be global AI superpowers are China, US, and the European Union,” she says, “because the great access to and availability of data means the analysis is better. They have massive advantages of scale, but the UK could show international leadership around AI ethics.”

With a greater focus on data skills, standards, and sharing, and encouraging an international collaborative ecosystem driving AI innovation, the West can leap ahead of China. And perhaps, in time, all AI superpowers will work together, in harmony, to the benefit of humanity.  

How critical infrastructure is dealing with the threat of cyber attacks

A crippling ransomware attack on one of the largest fuel distribution networks in the US has brought into sharp focus the cyber threats facing infrastructure of national importance

In 2020, the Cybersecurity and Infrastructure Security Agency alerted the US to the risk of a devastating cyber attack on a crucial system of national importance. On 7 May this year, the UK’s National Cyber Security Centre (NCSC) issued a stark warning along similar lines. By coincidence, it was the same day that hackers would cripple one of the largest fuel distribution networks in North America. 

The taking of the Colonial Pipeline brought the authorities’ worst fears to life. The ransomware attack disabled the 5,500-mile network, causing fuel shortages in the south-eastern states of the US and prompting the Biden administration to declare a state of emergency. Although the Colonial Pipeline Company’s CEO, Joseph Blount, controversially paid the $4.4m (£3.2m) ransom, the network was out of action for a week.

Transparency and trust are key to having robust and executable action plans. Everyone has a role to play in security

This case was “not shocking” to Sarah Lyons, the NCSC’s deputy director for economy and society. There had been warnings aplenty. Only three months previously, for instance, a hacker unsuccessfully attempted to poison the water supply of Oldsmar, a city in Florida. 

“The pandemic has exacerbated cyber attacks targeting organisations, including providers of critical national infrastructure, which will always be an attractive target,” she says. “The Colonial Pipeline incident confirmed our belief that any such attack could have wide-ranging societal ramifications. It also gave us a glimpse at the kind of attack with a physical impact that could materialise in future if connected places providing critical public services are compromised.”

Fatal warning: potential cyber-physical attacks

The way that critical national infrastructure has evolved to use interconnected digital networks makes it far more vulnerable than it used to be, according to Lyons, who believes that the risks could be even greater when 5G is more widely adopted. 

“Regulated industries such as telecoms and energy are being connected to unregulated services and suppliers,” she explains. “These industries, which we all rely on daily, are an attractive target for a range of threat actors, unfortunately. A successful attack could cause significant disruptions to key public services and compromise citizens’ sensitive data.” 

Lyons urges operators to “recognise that it’s vital that we ensure these networks are resilient to cyber attacks. In a worst-case scenario, a successful one could endanger people.”

George Patsis, CEO of Obrela Security Industries, agrees, warning that “the sky is the limit” when it comes to the extent of the damage that cyber attacks on critical infrastructure could wreak. “These have the potential to be cyber physical, putting many people’s lives at risk,” he says. 

Patsis uses the London Underground as an example. “Computers control the timing of when trains arrive at junctions. If someone were to infiltrate the network and alter their synchronisation by only a few seconds, it could cause multiple fatal crashes,” he says.

Most worrying is a lack of robustness in operational technology (OT) security, which Gartner defines as “practices and technologies used to protect people, assets, and information; monitor and/or control physical devices, processes and events; and initiate state changes to enterprise OT systems.”

Patsis says: “As OT increasingly becomes internet-enabled, it creates new attack avenues. There is now a big focus on securing OT in the same way we do the IT estate.” 

While he notes that the Colonial Pipeline affair has been a “huge driver” for improving OT security, Patsis stresses that there is much work to do in this area.

Unique challenge: securing operational technology

Theresa Lanowitz, head of evangelism at AT&T Cybersecurity, takes much the same view. “With the convergence of IT and OT systems, there has been an exponential growth in internet-of-things devices that has heightened concerns about the digital security of these systems,” she says. 

Lanowitz calls for a “mindset shift” in securing OT assets. “Legacy infrastructure has been in place for decades and is now being combined as part of the convergence of IT and OT,” she says. “This can be challenging for organisations that previously used separate security tools for each environment and now require holistic asset visibility to prevent blind spots. Attacks are coming from all sides and are creeping across from IT to OT and vice versa. Organisations should adopt a risk-based approach that recognises that there is no perfect security solution.” 

She continues: “Enterprises that strategically balance security, scalability, access, usability and cost can ultimately provide the best long-term protection against an evolving adversary.”

Has the Colonial Pipeline attack encouraged infrastructure providers to take more effective defensive measures? “Frankly, not enough,” argues Rob Carew, chief product officer at Arcadis Gen, the digital arm of Arcadis, a Dutch engineering consultancy. “There is still a disconnect between cybersecurity and critical infrastructure.” 

He suggests that cybersecurity is widely seen in the sector as an “add-on”, rather than intrinsic, when it comes to monitoring the health of critical infrastructure.

“The problem is compounded by ageing hardware and software technology, which can often be exploited through unforeseen vulnerabilities,” Carew says. “Transparency and trust are key in having robust and executable action plans. Everyone has a role to play in security. If it becomes a regular topic of conversations among asset owners, operators, managers, maintainers and the supply chain, it will become part of the organisation’s DNA.”

Actions, though, speak louder than words. While the Colonial Pipeline incident may have set alarm bells ringing, there is still – months later – heightened anxiety across the infrastructure sector, with cybercriminals seemingly better equipped to expose vulnerabilities and to profit from doing so.

This article first appeared in Raconteur’s Future of Infrastructure report in September 2021

Mastercard cyber chief on using AI in the fight against fraud

Ajay Bhalla, Mastercard’s president of cyber and intelligence solutions, thinks innovations like AI can tackle cybercrime – and help save the planet

The fight against fraud has always been a messy business, but it’s especially grisly in the digital age. To keep ahead of the cybercriminals, investment in technology – particularly artificial intelligence – is paramount, says Ajay Bhalla, president of cyber and intelligence solutions at Mastercard. 

Since the opening salvo of the coronavirus crisis, cybercriminals have launched increasingly sophisticated attacks across a multitude of channels, taking advantage of heightened emotions and poor online security.

Some £1.26 billion was lost to financial fraud in the UK in 2020, according to UK Finance, a trade association, while internet banking fraud losses surged 43% year on year. The banking industry managed to stop some £1.6 billion of fraud over the course of the year, equivalent to £6.73 in every £10 of attempted fraud.

If you don’t test things to break them, you can be sure their vulnerabilities will be discovered down the line

The landscape has rapidly evolved over the past year, says Bhalla, due to factors like the rapid growth of online shopping and the emergence of digital solutions in the banking sector and beyond. These changes have broken down the barriers to innovation, driving an unprecedented pace of change in the way we pay, bank and shop, says the executive, who’s responsible for deploying innovative technology to ensure the safety and security of 90 billion transactions every year. 

“Against that backdrop, cybercrime is a $5.2 trillion annual problem that must be met head-on. Standing still will mean effectively going backwards, as fraudsters are increasingly persistent, agile and well-funded.”

AI: the new electricity

It isn’t just the growing number of transactions that attracts criminal attention, but the diversity of opportunity, according to London-based Bhalla, who has held various roles at Mastercard around the world since 1993. 

“As the Internet of Things becomes ever more pervasive, so the size of the attack surface grows,” he says, noting that there will be 50 billion connected devices by 2025. 

Against this backdrop, AI will be essential to tackle cyber threats. 

“AI is fundamental to our work in areas such as identity and ecommerce, and we think of it as the new electricity, powering our society and driving forward progress,” says the 55-year-old.

Mastercard has pioneered the use of AI in banking through its worldwide network of R&D labs and AI innovation centres, and its AI-powered solutions have saved more than $30bn from being lost to fraud over the past two years. 

In 2020, it opened an Intelligence and Cyber Centre in Vancouver, aimed at accelerating innovation in AI and IoT. The company filed at least 40 AI-related patent applications last year; it has developed the biggest cyber risk assessment capability on the planet, according to Bhalla. 

“We are constantly testing, adapting and improving algorithms to solve real-world challenges.”

Turning to examples of the company’s work, Bhalla says Mastercard has built an ability to trace and alert on financial crime across its network, a world first. He also points to the recently launched Enhanced Contactless, or ECOS, which leverages state-of-the-art security and privacy technology to make contactless payments resistant to attacks from quantum computers, using next-generation algorithms and cryptography. 

“With ECOS, contactless payments still happen in less than half a second, but they are three million times harder to break.”

Building security through biometrics

Such innovations are transforming customers’ interactions with financial services providers. For example, Mastercard has combined AI-powered technologies with physical biometrics – like face, fingerprint and palm – to identify legitimate account holders. These technologies recognise behavioural traits, like the way in which customers hold their phone or how fast they type, actions that can’t be replicated by fraudsters. 

“We see a future where biometrics don’t just authenticate a payment; they are the payment, with consumers simply waving to pay.”

Excited by developments in this area, Bhalla says Mastercard recently detected an attack that involved hundreds of login attempts from a phone that had reported itself as lying flat on its back. “Given the speed at which the credentials were typed, we knew it was unlikely it could be done with the phone flat on a surface,” Bhalla says. “In this way, a sophisticated attack that looked otherwise legitimate was detected before any fraud losses could occur.”
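The detection described here amounts to a behavioural consistency rule: does the typing speed match the posture the device reports? The sketch below is purely illustrative – the threshold, field names and `looks_automated` function are invented, not Mastercard’s actual system:

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    device_flat: bool        # sensors report the phone lying flat on its back
    chars_per_second: float  # observed typing speed during credential entry

# Invented cut-off: typing much faster than this with the phone flat on a
# surface is implausible for a human; real systems learn per-user baselines.
MAX_PLAUSIBLE_FLAT_TYPING_SPEED = 6.0

def looks_automated(attempt: LoginAttempt) -> bool:
    """Flag credential entry that is implausibly fast for the reported posture."""
    return attempt.device_flat and attempt.chars_per_second > MAX_PLAUSIBLE_FLAT_TYPING_SPEED

bot = LoginAttempt(device_flat=True, chars_per_second=25.0)
human = LoginAttempt(device_flat=False, chars_per_second=4.0)
print(looks_automated(bot), looks_automated(human))  # True False
```

In practice no single rule like this is decisive; many weak behavioural signals are combined into a risk score before a login is challenged or blocked.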

Cybercrime is a $5.2 trillion annual problem that must be met head-on. Standing still will mean effectively going backwards, as fraudsters are increasingly persistent, agile and well-funded

Mastercard might boast an impressive list of successful fraud-fighting solutions, but wrong turns are vital for the journey, Bhalla admits. “If you don’t test things to break them, you can be sure their vulnerabilities will be discovered down the line,” he says. “At Mastercard, trust in and reliance on our services is far too important to take that risk, so rigorously testing solutions before they get anywhere near the end user is our standard operating procedure.”

Trust is a must

A keen rower and golfer, Bhalla volunteers as an executive-in-residence at the University of Oxford’s Saïd Business School. He has a bachelor’s degree in commerce from Delhi University and a master’s degree in management from the University of Mumbai. 

Even with his experience and tech knowledge, Bhalla insists that Mastercard and others within the industry must go back to basics and focus on customer experience. The company’s leadership in standards has been core to earning and retaining the trust of its customers, he notes. 

The technology may be evolving quickly, but one core principle remains, says Bhalla. “Our business is based on trust, which is hard-won and easily lost.”

The correct operating processes and standards must be in place from the outset so that both customers and businesses can have confidence in the technology and trust that it will be useful, safe and secure. 

“What has changed is the sharp focus now placed on developing leading-edge solutions that prevent fraud and manage its impact, which is not surprising given that the average cost of a single data breach has now grown to $3.86 million,” Bhalla says.

Providing a blueprint for business leaders, Bhalla strongly believes that “innovation must be good for people … and address their needs at the fundamental design stage of the systems and solutions we create.”

We see a future where biometrics don’t just authenticate a payment; they are the payment, with consumers simply waving to pay

Bhalla is using tech to fight fraud and drive financial inclusion, with Mastercard aiming to connect 1 billion people globally to the digital economy by 2025. His ambitions are wider still, with much of his work focused on “protecting the world we have”. 

Mindful that climate change is high on the agenda, especially for younger generations, Mastercard has launched a raft of programmes in the area, including this year’s Sustainable Card Badge, which looks to identify cards made more sustainably from recyclable, recycled, bio-sourced, chlorine-free, degradable or ocean plastics.

Much like the fight against fraud, the battle against global warming is reaching a crucial stage. Thanks to the efforts of industry leaders like Bhalla, the world stands a better chance of ultimate triumph on both fronts.

This article was originally written for Raconteur’s Fighting Fraud report, published in June 2021

The worrying rise of ransomware as a service

The Colonial cyberattack that cost a US fuel pipeline $4.4m in May highlights why businesses need to treat the fast-emerging threat of ‘ransomware as a service’ more seriously

A wry observation doing the rounds among cybersecurity experts is that the hackers who’ve transformed ransomware attacks into a multibillion-dollar industry are more professional than their high-profile corporate victims. 

It was certainly no laughing matter for the CEO of the Colonial Pipeline, one of the largest fuel-distribution networks in the US, when an attack in early May disabled the 5,500-mile system, triggering fuel shortages and panic-buying at filling stations. Within hours of the breach, Joseph Blount controversially paid a $4.4m (£3.1m) ransom to DarkSide, the Russian hacking group that mounted the attack, on the basis that it was “for the good of the country”. Despite this, the network was still out of action for a week.

The Colonial Pipeline case is one of many similar incidents, which have increased sharply in number since the pandemic started but have tended to go under the radar, as the victims are understandably reluctant to publicise their security failings. This high-profile example has exposed the rise of so-called ransomware as a service (RaaS), which DarkSide and various other professional hackers are now offering. 

Ethically speaking, you have to consider that you are enabling cybercrime by paying a ransom

The number of cybercrimes committed worldwide in 2020 was 69% higher than the previous year’s total. Ransomware was involved in 27% of these and a total of $1.4bn was demanded, according to a report published in May by US data security company Zscaler. In the UK, cybersecurity specialist Mimecast believes that as many as 60% of companies suffered a ransomware attack during the year. 

Ransomware is on the rise (Soumil Kumar from Pexels)

“Covid-19 has driven a huge ransomware surge,” reports Deepen Desai, Zscaler’s chief information security officer. “Our researchers witnessed a fivefold increase in such attacks starting in March 2020, when the World Health Organization declared the pandemic.”

Criminals seeking to exploit the network vulnerabilities created by the general shift to remote working during the Covid crisis either developed more sophisticated hacking methods or, seeking a shortcut, paid for RaaS. 

RaaS business model rings alarm bells

“RaaS has enabled even the least technically advanced criminals to launch attacks,” says George Papamargaritis, director of managed security services operations at Obrela Security Industries. “Gangs are advertising their services on the dark web, collaborating to share code, infrastructure, techniques and profits.” 

The RaaS model means that the spoils are split among three partners in crime: the programmer, the service provider and the attacker. “This is a highly structured and organised machine that operates much like many other legitimate organisations,” he adds.

The earliest reference to RaaS can be traced back to 2016. But, as Jen Ellis, vice-president of community and public affairs at Rapid7 and co-chair of the Ransomware Task Force, notes: “There are indications that it’s on the rise as more criminals take the chance to make a quick, easy and relatively risk-free profit by entering the ransomware market.”

This collaborative approach to ransomware attacks is terrible news for businesses, warns Ian Pratt, global head of security for personal systems at Hewlett-Packard. “Once, it was the preserve of opportunistic individuals who targeted consumers with demands of a few hundred pounds. Today, criminal gangs operating ransomware make millions from corporate victims in so-called big-game hunts,” he says. “This should have the alarm bells ringing in boardrooms.”

By educating themselves and their employees, business leaders can improve company-wide security protocols and so minimise the risk of ransomware attacks. Pratt explains that “users are the point of entry for most attacks”, accounting for 70% of successful network breaches. Malware is “almost always delivered via email attachments, web links and downloadable files”.
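Pratt’s point about delivery vectors can be pictured as a simple mail-gateway rule – the extension denylist and `should_quarantine` helper below are hypothetical, sketched for illustration rather than drawn from any vendor’s product:

```python
# Quarantine attachments whose extensions are commonly abused to deliver
# malware. The list is illustrative; real gateways also scan file contents.
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".iso", ".docm", ".xlsm"}

def should_quarantine(filename: str) -> bool:
    """Return True if the attachment's final extension is on the denylist."""
    name = filename.lower().rstrip(".")  # ignore trailing-dot tricks
    # Catches double extensions such as "invoice.pdf.exe" too.
    return any(name.endswith(ext) for ext in RISKY_EXTENSIONS)

print(should_quarantine("invoice.pdf.exe"))  # True
print(should_quarantine("report.pdf"))       # False
```

Filtering like this blunts the most common delivery route cheaply, though it is only one layer: links and compromised downloads still need user awareness and endpoint controls.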

Prevention better than cure

Michiel Prins, co-founder of HackerOne, a vulnerability-disclosure platform connecting businesses with penetration testers, agrees. “Difficult as it may seem to prevent these attacks, prevention is always better than cure when it comes to ransomware,” he says. “This means maintaining a nimble and adversarial approach to cybersecurity that takes into account the perspective of an attacker, getting beyond traditional solutions that miss more elusive vulnerabilities.”

Prins argues that working with ethical hackers will “strengthen an organisation’s overall security posture”, as potential weak spots are reported and fixed “before serious damage is done”. Additionally, establishing a so-called bug-bounty programme, which rewards people for highlighting faults in the coding, “signals a high level of security maturity,” meaning that the criminals might look for easier prey.

If they do fall victim to an attack, should organisations accede to ransomware demands? CrowdStrike estimates that just over a quarter of victims end up paying the hackers to unlock their systems. Nearly 60% of UK businesses would enter negotiations, according to Sam Curry, chief security officer at Cybereason. 

Gangs are advertising their services on the dark web, collaborating to share code, infrastructure, techniques and profits

“We’d advise against paying ransoms. But in extreme situations, where lives are at risk or a national emergency is likely, it could be better to pay,” he says. “Before making that decision, it’s essential to notify your legal counsel, your insurer and the relevant law-enforcement agencies.”

Even when a business does cough up, there’s no guarantee that this will put an end to its problems. Peter Yapp, former deputy director at the UK’s National Cyber Security Centre and now a partner at law firm Schillings, cites the Travelex attack in December 2019 as an example. Many of the company’s web pages were still out of action two months later and a $2.3m ransom was eventually paid to the hackers. Later in 2020, Travelex sank into administration, “partly due to the losses and reputational damage caused by the attack”, he says.

Charles Brook, threat intelligence specialist at cybersecurity company Tessian, acknowledges that it’s a tough decision. “Ethically speaking, you have to consider that you are enabling cybercrime by paying a ransom,” he says. “But I can sympathise with organisations that may have no other option.”

There are other considerations, Brook adds. “If you pay, you could put a target on your back for further attacks. And, even after your files are decrypted, there may still be something malicious left behind.”

With the hackers in the ascendancy, Yapp believes that the government needs to step up its efforts to combat ransomware. “This has become such a serious problem that perhaps it’s time to lobby for the UK’s new National Cyber Force to fight back against these criminals in a different, military, way,” he suggests.

Perhaps the hackers won’t have the last laugh, after all.

This article was originally written for Raconteur’s Connected Business report, published as a supplement in The Times in June 2021

Silver surfers ride the digital learning wave

Record numbers of baby boomers and older retirees are enjoying the manifold benefits of taking online courses

The proverb “you can’t teach an old dog new tricks” is barking up the wrong tree in 2021. Record numbers of baby boomers, aged between 57 and 75, and older retirees, including care home residents, are taking advantage of digital technologies to acquire novel skills and develop hobbies. In droves, they are turning on, logging in and not dropping out. 

The enforced lockdowns of the last year have accelerated this trend. Silver web surfers, unable to hug friends and family, have had the time, confidence and access to technology to embrace digital learning. Indeed, 41 per cent of people in the UK over 55 said they were comfortable learning a new digital skill during lockdown, according to BT research. 

Moreover, older generations are expressing a greater thirst for knowledge when compared to younger cohorts. The 2020 LinkedIn Opportunity Index suggested that not only are baby boomers more willing to welcome change (84 per cent) than millennials (74 per cent) and members of Generation Z (72 per cent), they are also more likely to invest time in learning transferable skills (78 per cent) than the two other groups (72 and 74 per cent, respectively).

Rocketing interest in online groups provided by the University of the Third Age (u3a), whose network has expanded to almost 500,000 older adults no longer working full time, supports this data. A year ago, with members forced to stay at home in an attempt to stem the spread of coronavirus, the UK-wide charity, which celebrates its 40th anniversary in 2022, pivoted online, establishing Trust u3a. 

“We’ve been excited to see huge numbers of members embracing digital learning and turning to online and social media, sometimes for the first time, to keep their interest groups going,” says Sam Mauger, chief executive of u3a.

Online learning opens minds and virtual doors

Trust u3a’s online offering has attracted hundreds of new members and spawned more than 80 online groups and courses, ranging from Japanese to birds of prey, from cooking to painting. “Digital technology has empowered us to keep learning and active, and allowed us to remain connected with one another,” says Mauger. 

“Instead of meeting face to face, photography groups can share images on WhatsApp, ukulele players have turned to Twitch to make music together and ballroom dancers are using Zoom to show off their moves.”

She plans to adopt a blended learning model when lockdown restrictions lift, as going digital has opened minds and virtual doors. “It has removed geographical barriers and enabled members to expand their learning and forge new relationships across the movement, from Scotland to Cornwall,” she says.

Discovering new interests and friends is one of the biggest pluses of digital learning for retirees, according to Amanda Rosewarne, business psychologist and co-founder of the Professional Development Consortium, which accredits online courses. “By learning via live online classes, you can interact with others who may also be feeling isolated and lonely,” she says. 

Elderly students enjoy several other benefits. “Studies show that learning new things triggers serotonin release in the brain, which is akin to the effect of antidepressants,” says Rosewarne. 

Further, a 2017 study for Age UK, Europe’s largest charity supporting older people, found that keeping the mind active can help stave off age-related conditions, such as dementia. Committed learning, rather than crosswords or sudoku puzzles, is most effective, though.

Thanks to a variety of user-friendly devices and online courses, picking up a language, for instance, has never been easier or more convenient for retirees willing to enter the digital classroom. 

Trust issues: beware scammers

Birmingham-based septuagenarian John Bishop has attended a Greek class for years. Soon after his course went online in the autumn, with lessons conducted on Zoom, he “took the plunge” and bought a smartphone. Technology is not all Greek to Bishop now; all that is required to join his group is the click of a hotlink. “The ease of access and ease of use are key for my generation when it comes to online learning,” he says. “My advice is keep it simple and provide non-bot help.”

While Bishop is delighted that his lessons can continue online, he is looking forward to returning to in-person sessions. “Zoom is not superior to live lessons,” he says. “Video conferencing requires more concentrated eye focus, because all you are seeing is the screen rather than a room, and student interaction is less fluid. It also lacks the ancillary benefits, like the exercise of walking to and from the class.”

General manager Sarah-Jane McQueen argues the convenience of online learning is hugely appealing to elderly students. “Rather than having to get up early and travel a sizeable distance to learn,” she says, “users can now get the same experience from the comfort of their own home and at a time that suits them, allowing them to easily balance learning around their daily schedules.”

However, McQueen notes the surging popularity of online courses for retirees has not gone unnoticed by those seeking to make quick money. “Particularly since lockdown, there has been a rise in the number of fraudulent courses being offered by scammers who are looking to profit from people’s willingness to learn,” she warns. 

“To help address these concerns, providers should make a concerted effort to highlight the feedback and reviews they’ve obtained from previous users that can work as testimonials which assure new users they are legitimate.”

Building trust so older people feel comfortable online, and don’t get left in the wake of technology, is vital. Pleasingly, there is now a vast number of online resources and initiatives designed to boost digital literacy among the elderly. For example, Barclays’ Digital Eagles scheme, launched in 2013, has delivered digital skills training to staff and residents in more than 500 UK care homes.

“There are many retirees who have achieved great things thanks to digital learning, often in fields that were perhaps far removed from what their previous careers encompassed,” McQueen adds. 

Clearly, a more apposite idiom for 2021 is “you are never too old to learn” and, with easy-to-use digital technology, there is no obstacle to becoming a very mature student.

This article first appeared in Raconteur’s Digital Learning report, published as a supplement in The Times in March 2021

Fighting fraud in times of crisis

Cybercrime is always distressing for those affected, but when the resultant losses come from the public purse, it must be taken even more seriously

Coronavirus has coursed through every facet of our lives, and society and business have already paid a colossal price to restrict its flow. We will be counting the cost for years, if not decades. And while people have become almost anaesthetised to the enormous, unprecedented sums of support money administered by the government, it was still painful to learn, in October, that taxpayers could face losing up to £26 billion on COVID-19 loans, according to an alarming National Audit Office report.

Given the likely scale of abuse, it raises the question of how authorities should go about eliminating public sector fraud. Could artificial intelligence (AI) fraud detection be the answer?

Admittedly, the rapid deployment of financial-aid schemes, when the public sector was also dealing with a fundamental shift in service delivery, created opportunities for both abuse and risk of systematic error. Fraudsters have taken advantage of the coronavirus chaos. But their nefariousness is not limited to the public sector.

Ryan Olson, vice president of threat intelligence at American multinational cybersecurity organisation Palo Alto Networks, says COVID-19 triggered “the cybercrime gold rush of 2020”.

Indeed, the latest crime figures published at the end of October by the Office for National Statistics show that, in the 12 months to June, there were approximately 11.5 million offences in England and Wales. Some 51 per cent of them were made up of 4.3 million incidents of fraud and 1.6 million cybercrime events, a year-on-year jump of 65 per cent and 12 per cent respectively.

Cybercrime gold rush – counting the cost

Jim Gee, national head of forensic services at Crowe UK, a leading audit, tax, advisory and risk firm, says: “Even more worryingly, while the figures are for a 12-month period, a comparison with the previous quarterly figures shows this increase has occurred in the April-to-June period of 2020, the three months after the COVID-19 health and economic crisis hit. The size of the increase needed in a single quarter to result in a 65 per cent increase over the whole 12 months could mean actual increases of up to four times this percentage.”

In terms of eliminating public sector fraud, Mike Hampson, managing director at consultancy Bishopsgate Financial, fears an expensive game of catch-up. “Examples of misuse have increased over the last few months,” he says. “These include fraudulent support-loan claims and creative scams such as criminals taking out bounce-back loans in the name of car dealerships, in an attempt to buy high-end sports cars.”

AI fraud detection and machine-learning algorithms should be put in the driving seat to pump the brakes on iniquitous activity, he argues. “AI can certainly assist in carrying out basic checks and flagging the most likely fraud cases for a human to review,” Hampson adds.

John Whittingdale, media and data minister, concedes that the government “needs to adapt and respond better”, but says AI and machine-learning are now deemed critical to eliminating public sector fraud. “As technology advances, it can be used for ill, but at the same time we can adapt new technology to meet that threat,” he says. “AI has a very important part to play.”

Teaming up with technology leaders

Technology is already vital in eliminating public sector fraud at the highest level. In March, the Cabinet Office rolled out Spotlight, the government grants automated due-diligence tool built on a Salesforce platform. Ivana Gordon, head of the government grants management function COVID-19 response at the Cabinet Office, says Spotlight “speeds up initial checks by processing thousands of applications in minutes, replacing manual analysis that, typically, can take at least two hours per application”. The tool draws on open datasets from Companies House, the Charity Commission and 360Giving, plus government databases that are not available to the public.

“Spotlight has proven robust and reliable,” says Gordon, “supporting hundreds of local authorities and departments to administer COVID-19 funds quickly and efficiently. To date Spotlight has identified around 2 per cent of payment irregularities, enabling grant awards to be investigated and payments halted to those who are not eligible.”
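The cross-referencing that a due-diligence tool like Spotlight automates can be sketched in a few lines: check each application against a public register and flag anything that does not line up. Everything below (the field names, the toy register, the checks themselves) is invented for illustration; the real tool runs on a Salesforce platform against Companies House and other datasets.

```python
# Hypothetical sketch of automated grant due-diligence, loosely modelled
# on the register cross-checks described above. All data and field names
# are invented for illustration.

def flag_irregular(applications, register):
    """Return (application, reason) pairs that fail basic register checks."""
    flagged = []
    for app in applications:
        company = register.get(app["company_number"])
        if company is None:
            flagged.append((app, "company number not on register"))
        elif company["status"] != "active":
            flagged.append((app, f"company status is {company['status']}"))
    return flagged

register = {
    "01234567": {"name": "Acme Ltd", "status": "active"},
    "07654321": {"name": "Shell Co Ltd", "status": "dissolved"},
}
applications = [
    {"applicant": "Acme Ltd", "company_number": "01234567"},
    {"applicant": "Shell Co Ltd", "company_number": "07654321"},
    {"applicant": "Ghost Ltd", "company_number": "09999999"},
]

for app, reason in flag_irregular(applications, register):
    print(app["applicant"], "->", reason)
```

The point of automating this step is speed: a batch of thousands of applications passes through checks like these in minutes, leaving only the flagged minority for human investigation.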

We need to watch how the technology fits into the whole process. AI doesn’t get things right 100 per cent of the time

She adds that Spotlight is one of a suite of countermeasure tools, including AI fraud detection, developed with technology companies, and trialled and implemented across the public sector to help detect and prevent abuse and error.

Besides, critics shouldn’t be too hard on the public sector, argues David Shrier, adviser to the European Parliament in the Centre for AI, because it was “understandably dealing with higher priorities, like human life, which may have distracted somewhat from cybercrime prevention”. He believes that were it not for the continued investment in the National Cyber Security Centre (NCSC), the cost of fraudulent activity would have been significantly higher.

Work to be done to prevent fraud

Greg Day, vice president and chief security officer, Europe, Middle East and Africa, at Palo Alto Networks, who sits on Europol’s cybersecurity advisory board, agrees. Day points to the success of the government’s Cyber Essentials digital toolkit. He thinks, however, that the NCSC must “further specialise, tailor its support and advice, and strengthen its role as a bridge into information both from the government, but also trusted third parties, because cyber is such an evolving space”.

The public sector has much more to do in combating cybercrime and fraud prevention on three fronts, says Peter Yapp, who was deputy director of incident management at the NCSC up to last November. It must encourage more reporting, make life difficult for criminals by upping investment in AI fraud detection and reallocate investigative resources from physical to online crime, he says.

Yapp, who now leads law firm Schillings’ cyber and information security team, says a good example of an initiative that has reduced opportunity for UK public sector fraud is the NCSC’s Mail Check, which monitors 11,417 domains classed as public sector. “This is used to set up and maintain good domain-based message authentication, reporting and conformance (DMARC), making email spoofing much harder,” he says. “Organisations that deploy DMARC can ensure criminals do not successfully use their email addresses as part of their campaigns.”
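In practice, a DMARC policy is just a DNS TXT record of semicolon-separated tag=value pairs published at `_dmarc.<domain>`. A minimal sketch of reading one follows; the example record and the `example.gov.uk` reporting address are illustrative, not any real organisation’s policy.

```python
# Minimal sketch: split a DMARC DNS TXT record into its tag=value pairs.
# The example record below is invented for illustration; real records
# are published at _dmarc.<domain> and fetched via a DNS TXT lookup.

def parse_dmarc(record):
    """Return the DMARC record's tags as a dict, e.g. {'p': 'reject', ...}."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")  # split at the first '='
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.gov.uk; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # the policy receivers apply to mail failing DMARC checks
```

A `p=reject` tag is the strictest setting: receiving servers are asked to refuse outright any mail that fails SPF/DKIM alignment for the domain, which is what makes spoofing “much harder”.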

While such guidance is welcome, there are potential problems with embracing tech to solve the challenge of eliminating public sector fraud, warns Dr Jeni Tennison, vice president and chief strategy adviser at the Open Data Institute. If unchecked, AI fraud detection could be blocking people and businesses that are applying for loans in good faith, or worse, she says.

“We need to watch out how the technology and AI fit into the whole process,” says Tennison. “As we have seen this year, with the Ofqual exam farrago, AI doesn’t get things right 100 per cent of the time. If you assume it is perfect, then when it doesn’t work, it will have a very negative impact on the people who are wrongly accused or badly affected to the extent they, and others, are fearful of using public sector services.”

There are certainly risks with blindly following any technology, concurs Nick McQuire, senior vice president and head of enterprise research at CCS Insight. But the public sector simply must arm itself with AI or the cost to the taxpayer will be, ultimately, even more significant. “Given the scale of the security challenge, particularly for cash-strapped public sector organisations that lack the resources and skills to keep up with the current threat environment, AI, warts and all, is going to become a crucial tool in driving automation into this environment to help their security teams cope.”

This article was originally published in Raconteur’s Public Sector Technology report in December 2020

Seven elearning scams to watch out for

While online learning is booming, charlatans and scammers are looking to take advantage. Cowboy coaches are flooding the market, making official accreditation and authenticity essential for individual students and businesses alike. Here are seven online scams to beware of

01 Cloak-and-dagger sales presentations

Online learning can be a crook’s cloak: a course with little educational content or value that is really a sales presentation full of commercial advertising. Through advertising and regular email communications, the course is a guise to persuade you to buy a sometimes unrelated product or service.

One anonymous respondent to a CPD (continuing professional development) Standards Office survey says: “I paid to attend a training conference that I thought would genuinely give me some training in beauty and aesthetics for my practice. However, it was a sell, sell, sell session for buying botox and chemical peel products.”

Amanda Rosewarne, chief executive of the CPD Standards Office, advises: “To avoid online scams like this, people should look for training courses listed with many learning objectives and seek out independent review sites such as Trustpilot.”

02 Fake qualifications


It is easy to fall foul of scammers who promise professional qualifications. They hook you in by selling a course, but then fail to provide the correct certificate or licence.

Dr Emma Woodward, a New Zealand-based educational psychologist, says: “I’m concerned by the number of online courses offering training in areas that cross over into fields that are highly regulated, such as ‘diploma in child development’ or ‘diploma in cognitive behavioural therapy’.

“These courses allude to having more gravitas than what they offer, which is both unethical and dangerous as the application to real people is a skill that needs more than a few PDFs online.”

03 Promises of employment

“There are several ‘professional coaching organisations’ we have encountered that promise on completion of their, usually very expensive, coaching ‘qualification’ they will forward clients to you,” says Rosewarne at the CPD Standards Office.

“In this case, the course is not the problem; it’s just that the clients, business development opportunities or financial guarantees given at the point of sale do not materialise, leaving people at a loss as to how to make a living, or develop a business, from their new skillset.”

Performing due diligence is critical. Robert Clarke, the managing editor of Learning News, says: “In these times of change and uncertainty, unscrupulous providers are on the make. Recognised training and CPD helps buyers avoid the tricksters and scams, and buy with greater confidence.”

04 Non-existent colleges and academies


“The words ‘college’ and ‘academy’ are unprotected when registering an organisation at Companies House,” Rosewarne points out. Therefore, anyone can set up an online learning course linked to a fake education centre. There are two typical online scams. Firstly, the scammers charge for an expensive and prestigious course before liquidating the organisation. Alternatively, buyers are duped into long-term membership commitments that are impossible to cancel and the content is often freely available elsewhere.

“Make sure it is a well-known provider and check it with a phone call,” says Hilarie Owen, chief executive of the Leaders’ Institute. “Don’t part with any money until you have checked.”

05 Rogue conferences

Rogue conference providers advertise fake event agendas, using the names of top academics, business leaders and talking heads to sell tickets. Supposed keynote speakers will have “cancelled at the last minute” only to be replaced by lower-grade alternatives.

A Trustpilot review of ConferenceSeries provides an example of this dubious practice. “Attended the fifth International CAM Conference in Vancouver in October 2019. Only a few speakers showed up and the rest apparently had their visas rejected or had health issues. Total fabrication.” There were 44 names advertised originally, but only four speakers attended and there was no one from the organisation present. “I wish I had checked before registering,” the reviewer adds.

06 Poor-quality online learning courses


“This online scam involves a concise overview course for a minimal fee, usually £50 or less, which offers what we call ‘skimpy content’,” says Rosewarne. “Buyers will encounter heavy promotion and sophisticated digital marketing tricks for purchasing a further, more expensive, course, which might be £1,000 or more. Sometimes these courses also lack engagement and are ‘chalk-and-talk’ presentations with little practical application.”

Simon de Cintra, director of Act Naturally, agrees. “Professional training providers know that reputation is key to long-term success and actively encourage well-informed purchasing at every stage,” he says, warning that users should always read reviews before buying.

07 Free online learning

Not only are numerous free online learning courses, peddled by charlatans, a waste of time, but the purported expertise they provide is also substandard and therefore potentially harmful. “This learning often focuses on a specific topic, such as beauty aesthetics, child mental health support, or IT engineering technical training,” says Rosewarne. “Most of the time, the authors have had a single fluke success online and are not at all experts in the topic.”

This chimes with Jo Cook, founder and director of Lightbulb Moment. “A lot of people are jumping on the COVID-19 bandwagon, either as a scam or with little expertise in how to provide quality remote courses and live online sessions,” she says. “Make sure to go to a company with years of experience behind them.”

This article was originally published in Raconteur’s Digital Learning report in September 2020

Hackers smell blood as crisis exposes cyber vulnerabilities

Less than two years ago, in June 2018, when Ticketmaster UK revealed cybercriminals had stolen data from up to 5 per cent of its global customer base via a supplier, it set alarm bells ringing.

The following month, a CrowdStrike report laid bare how ill-prepared organisations all around the globe were against hackers seeking to exploit third-party cybersecurity weaknesses. Two thirds of the 1,300 respondents said they had experienced a software supply chain attack. Almost 90 per cent believed that they were at risk via a third party. Yet approximately the same number admitted they didn’t deem vetting suppliers a critical necessity.

Given Symantec’s latest Internet Security Threat Report, launched early last year, highlighted that supply chain attacks had increased by 78 per cent in 2018, one hopes organisations heeded the warning signs and shored up their third-party cybersecurity policies well before COVID-19 hit businesses.

Experts fear companies that failed to bolster their cyber defences are now even more exposed because supply chains have become fragmented, and hackers, like great white sharks, smell blood. “Criminal groups have recognised that to catch the big fish they need to catch some smaller fish first,” explains James McQuiggan, security awareness advocate at KnowBe4.

To extend the fishing – or rather phishing – analogy: to net the whopper organisations, hackers are scooping up the tiddlers in the supply chain, McQuiggan says, as they “may not have the robust security programs and [are] often unable to afford adequate cybersecurity resources or personnel.

“As such, they are potentially more susceptible to social engineering scams or attacks. The criminal groups will attempt to gain access and then leverage the connection to attack a larger organisation.”

You’re only as secure as your weakest link

Predators know when to attack vulnerable prey, and COVID-19 has weakened the cybersecurity of countless organisations. “Coronavirus passes from person to person, and a percentage of victims are asymptomatic, yet can infect others – cyberattacks work in a similar way,” says Matt Lock, UK technical director at Varonis.

“A smaller supplier that’s fallen behind on their basic cyber hygiene can become infected with malware and unknowingly spread it to their business partners.”

Alluding to the issues presented by lockdowns enforced because of the pandemic, he continues: “At first, we were seeing cases where companies took shortcuts to get their employees online to keep their businesses running. Now companies are starting to settle into their new normal. They’re taking a step back, actively trying to rein in access and resolve security issues that cropped up in their race to get everyone the access they needed to do their work.”

Chris Sherry, a regional vice president at Forescout, argues there has never been a more vital time to have a cyber-resilient supply chain. “COVID-19 is the ultimate stress test for many supply chains,” he says. “The demand for critical supplies has never been greater, and it’s the biggest challenge. It’s a marathon to continue with ‘business as usual’ while trying to achieve an output of 150 per cent. Industry 4.0 and the industrial internet of things are driving improvements in operational efficiency, but also leaving suppliers more vulnerable than ever to downtime or data loss if critical processes are interrupted.

“The benefits of operational technology and automation are clear, but they also significantly increase the potential attack surface of any organisation. As bad actors look to take advantage of the crisis, the cybersecurity strategy of any supplier should ensure this is well understood, continuously monitored, and appropriately secured.”

Top tips to shore up cybersecurity

If an organisation’s cybersecurity is only as sturdy as its weakest link in the supply chain, what could – and should – be done in the face of an increasing number of attacks?

“Ultimately, the relationship of ‘trust’ many organisations once had with their third-party suppliers is no longer enough,” says Sherry. “The National Cyber Security Centre puts out a huge amount of guidance on the right questions to ask, as well as the right parameters to measure the security of your supply chain.”

Nigel Stanley, chief technology officer at TÜV Rheinland, agrees that the NCSC is a good source of information, and points to its Cyber Essentials certification scheme, which offers a “base level of cybersecurity assurance”. For him, streamlining supplier assessments is crucial, as is how deeply the supply chain network is traversed.

However, he notes: “Managing this is a challenge as presenting suppliers with 150 questions to answer every month can be a real turn-off. Using supplier contracts to enforce cybersecurity controls can be useful as it links payments and contracts to cybersecurity performance. The problem is how such a program can be implemented proportionately, balancing supplier and customer requirements.”

Criminal groups have recognised that to catch the big fish they need to catch some smaller fish first

The ‘zero-trust’ certification offered by analyst firm Forrester is worth the money to improve cybersecurity across the supply chain, suggests Patrick Martin, head of threat intelligence at Skurio. “Securing the supply chain is key,” he says. “Look for suppliers with certifications like Cyber Essentials Plus, BS 10012 and ISO/IEC 27001, and don’t be afraid to ask suppliers and partners to provide proof of their practices.”

Serving up a final piece of expert advice, he adds: “Another great first step is to monitor the deep and dark parts of the web for breached data, credentials and mentions in attack planning scenarios. In this way, businesses can be much better prepared to mitigate an attack if they see it coming.”
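The mechanics of Martin’s suggestion are simple to sketch: collect whatever breached credentials surface on the deep and dark web, then check whether any belong to your own domain. The toy in-memory “dump” below is invented for illustration; a real monitoring service would query curated breach feeds rather than a hard-coded set.

```python
# Illustrative sketch of breached-credential monitoring: flag any address
# from the company's own domain that appears in a breach dump. The dump
# here is a toy in-memory set, invented for illustration.

def exposed_accounts(company_domain, breach_dump):
    """Return breached addresses belonging to the company's domain, sorted."""
    suffix = "@" + company_domain  # the '@' prevents lookalike-domain matches
    return sorted(addr for addr in breach_dump if addr.endswith(suffix))

breach_dump = {
    "alice@example.co.uk",
    "bob@othercorp.com",
    "carol@example.co.uk",
}

print(exposed_accounts("example.co.uk", breach_dump))
```

Any hit is an early warning: the affected accounts can have passwords rotated and multi-factor authentication enforced before the stolen credentials are used in an attack.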

Considering Ticketmaster UK’s supply chain breach was almost two years ago, it’s fair to say organisations have had ample time to prepare, but those who failed need to move quickly, with the fallout from COVID-19 likely to be long and painful.

This article was originally published in Raconteur’s Procurement and Supply Chain Innovation report in May 2020