Banning TikTok: Should companies follow the U.S. and U.K. governments?

With government workers in the U.S., U.K., Canada, France, and elsewhere recently banned from installing or having TikTok on their official devices, is it time for companies to follow their lead? With greater awareness of allegedly nefarious data-harvesting activity, the clock is ticking.

Political leaders posit that because TikTok is owned by ByteDance, China’s state-linked technology corporation with ties to the Chinese Communist Party, there is a significant cybersecurity risk. The wildly popular social media platform – with 150 million U.S. users, it is currently one of the country’s top-ranking apps – is being used to promote the party’s interests overseas, runs the logic.

Organizations must think hard about whether these two alleged risks – data harvesting and covert influence – justify banning the app, and whether, on balance, the company and its employees gain more than they lose from engaging with TikTok to inspire and amplify content.

The full version of this article was first published on Digiday’s future-of-work platform, WorkLife, in April 2023.

Will the new national strategy make the UK an AI superpower?

Westminster’s new AI strategy is a step in the right direction, but there are hurdles – particularly concerning regulation, data-sharing and skills – that could hinder the UK’s progress

In the global AI investment, innovation and implementation stakes, the UK lies in a creditable third place. Trailing the US and second-placed China, it holds a slight lead over Canada and South Korea, according to the Global AI Index published in December 2020 by Tortoise Media. The moral of Aesop’s most famous fable involving a tortoise may be ‘more haste, less speed’, but Westminster is seeking to hare ahead in this race over the coming decade. Its national AI strategy, published in September 2021, is a 10-year plan to make the country an “AI superpower”. But what does that mean exactly?

Although Westminster has already poured more than £2.3bn into AI initiatives since 2014, this strategy will accelerate progress, promises Chris Philp, minister for technology and the digital economy at the Department for Digital, Culture, Media and Sport. 

“It’s a hugely significant vision to help the UK strengthen its position as a global science superpower and seize the potential of modern technology to improve people’s lives and solve global challenges such as climate change,” he declares.

The Croydon South MP explains that the strategy has three main aims. These are to ensure that the country invests in the long-term growth of AI; that the technology benefits every sector of the economy and all parts of the country; and that its development is governed in a way that protects the public and preserves the UK’s fundamental values while encouraging investment and innovation. 

“We have heard repeatedly from people working in and around AI that these issues are entirely connected,” says Philp, hinting at the complexity of the task at hand.

What will life be like for people living and working in an AI superpower? “There are huge opportunities for the government to capitalise on this technology to improve lives,” he says. “We can deliver more for less and give a better experience as we do so. For people working in the public sector, it could mean a reduction in the hours they spend on basic tasks, which will give them more time to find innovative ways of improving public services.” 

Philp continues: “For businesses, we want to ensure that there are clear rules, applied ethical principles and a pro-innovation regulatory environment that can create tech powerhouses across the country.”

AI will also be crucial in helping the UK to meet its legal obligations to achieve net-zero carbon emissions by 2050. Pleasingly for Philp, progress is already being made in this field. He notes that the Alan Turing Institute has been “exploring AI applications that could help to improve power storage and optimise renewable energy deployment by feeding solar and wind power into the national grid”.

The artificial elephant in the room is human resistance to data-sharing

The strategy has been generally well-received in the tech world, with most people acknowledging that it’s an important step in the right direction. But some experts have identified a few potential shortcomings.

Peter van der Putten is assistant professor of AI and creative research at Leiden University and director of decisioning and AI solutions at cloud software firm Pegasystems. He is “encouraged to see a shift from broad strategic statements to more concrete, action-oriented recommendations”, but he would have preferred to see a more complete ethical framework for AI application. 

“A large portion of the document focuses on AI governance, but it appears that a lot of the emphasis is still on analysis, discussion and policy-making. There is less on proposing hard legislation or determining which authority will be accountable for governance,” van der Putten explains. “This is an area in which the UK will need to accelerate, given that both the EU and China have made relatively concrete proposals for the regulation of AI recently.”

Liz O’Driscoll is head of innovation at Civica, a supplier of software designed to improve the efficiency of public services. She believes that the UK has “made great progress so far, with many organisations starting to embrace data standards and invest in data skills. But the artificial elephant in the room is human resistance to data-sharing. Privacy remains crucial, especially when it comes to citizens’ information, but wider uncertainty about issues such as regulation, public perception and peer endorsement will also prompt many in the public sector to play it safe with AI.”

There are some encouraging signs that people’s general reservations about data-sharing are softening, thanks to the success of collaborative AI solutions during the Covid crisis, O’Driscoll adds. 

“Sharing data has been essential in our defence against the virus. It has enabled key public services to stay focused on people who are most at risk,” she says. “Success stories have entered the public domain, so we need to make the most of these cases and continue driving further positive change.”

It’s clear that more education about the benefits of data-sharing and work on AI ethics are required, but could a shortage of recruits prove to be the most significant challenge for the national AI strategy? A survey published by Experian in September indicates that more than two-thirds (68%) of UK students wrongly believe that they would need to earn a STEM qualification to stand a chance of landing a data-related job.

Dr Mahlet Zimeta, head of public policy at the Open Data Institute, thinks that the widely held view that “the UK needs to produce more people who can code” is unhelpful at best. 

“Although improving data literacy is important, we’re going to need a much broader range of skills, including critical thinking,” she argues. “Leaders require a change of mindset to maximise the potential of AI. At the moment, it feels as though no one wants to be the first mover, but this is why experimenting and being transparent about the results will drive progress.”

From the government’s perspective, Philp urges both “students and businesses to equip themselves with the skills they’ll need to take advantage of future developments in AI”. For employers, this will include ensuring that their staff “have access to suitable training and development opportunities”, he adds, pointing out that the government’s online list of so-called skills bootcamps is an excellent place to start.

Tortoise Media’s Global AI Index ranks the UK fourth in the world on its supply of talent and third for the quality of its research. The country is a relative laggard in terms of both infrastructure (19th) and development (11th), so there is plenty of ground to make up on both the US and China. The national AI strategy suggests that some haste will be required if the UK is to even keep these rivals within its sights. Ultimately, though, if all goes to plan, humanity stands to win.

This article was first published in Raconteur’s AI for Business report in October 2021.

Is China dominating the West in the artificial intelligence arms race?

The US has warned that it is behind its historical foe in the East, and the European bloc is also concerned, but there are ways in which the UK, for example, could catch up, according to experts

If you ask technology experts in the West which country is winning the artificial intelligence arms race, a significant majority will point to China. But is that right? Indeed, Nicolas Chaillan, the Pentagon’s first Chief Software Officer, effectively waved the white flag when, in September, his resignation letter lamented his country’s “laggard” approach to skilling up for AI and a lack of funding. 

A month later, he was more explicit when telling the Financial Times: “We have no competing fighting chance against China in 15 to 20 years. Right now, it’s already a done deal; it is already over, in my opinion.”

The 37-year-old spent three years steering a Pentagon-wide effort to increase the United States’ AI, machine learning, and cybersecurity capabilities. After stepping down, he said there was “good reason to be angry.” He argued that his country’s supposedly slow technological transformation was allowing China to achieve global dominance and effectively take control of critical areas, from geopolitics to media narratives and everywhere in between.

Chaillan suggested that some US government departments had a “kindergarten level” of cybersecurity and stated he was worried about his children’s future. He made his outspoken comments mere months after a congressionally mandated national security commission predicted in March that China could speed ahead as the world’s AI superpower within the next decade.

Following a two-year study, the National Security Commission on Artificial Intelligence concluded that the US needed to develop a “resilient domestic base” for creating semiconductors required to manufacture a range of electronic devices, including diodes, transistors, and integrated circuits. Chair Eric Schmidt, the former Google CEO, warned: “We are very close to losing the cutting edge of microelectronics, which power our companies and our military because of our reliance on Taiwan.”

Countering the rise of China

Jens Stoltenberg, the Nato Secretary-General since 2014, echoed US concerns that China is galloping away from its competitors thanks to its investment in innovative technology, which other countries have embraced. The implicit – yet hard-to-prove – worry is that the ubiquitous tech is a strategic asset for the Chinese government. But is this a case of deep-rooted, centuries-old mistrust of the East by the West?

The former Norwegian Prime Minister, ever the diplomat, was at pains to stress that China was not considered an “adversary.” However, he did make the point that its cyber capabilities, new technologies, and long-range missiles were on the radar of European security services.

In late October, Stoltenberg admitted in an interview with the Financial Times that Nato would expand its focus to counter the “rise of China”. “Nato is an alliance of North America and Europe,” he said, “but this region faces global challenges: terrorism, cyber but also the rise of China.”

Ominously, Stoltenberg continued: “China is coming closer to us. We see them in the Arctic. We see them in cyberspace. We see them investing heavily in critical infrastructure in our countries. They have more and more high-range weapons that can reach all Nato-allied countries.”

But is China truly so far in front of others? According to the venerated Global AI Index, calculated by Tortoise Media, the US leads the race, with China second. In late September, the UK – currently third in the rankings, slightly ahead of Canada and South Korea – unveiled its National AI Strategy, which sets out a 10-year plan to make it a “global AI superpower”.

UK plans to become global AI superpower

Some £2.3 billion has already been poured into AI initiatives by the UK government since 2014, though this document – the country’s first package solely focused on AI and machine learning – will accelerate progress, enthuses the Department for Digital, Culture, Media and Sport’s digital minister, Chris Philp. 

“The UK already punches above its weight internationally, and we are ranked third in the world behind the US and China in the list of top countries for AI,” he said. “AI technologies generate billions [of pounds] for the economy and improve our lives. They power the technology we use daily and help save lives through better disease diagnosis and drug discovery.”

A self-styled AI champion and World Economic Forum AI Council member, Simon Greenman, states that the UK is home to the largest share of AI companies and start-ups (8%) after the US (40%). Additionally, venture capital investment in UK AI projects was £2.4bn in 2019.

“Money isn’t the issue,” says the Checkit Non-Executive Director, when discussing the perceived lack of progress being made by the UK. “The problem is we don’t have enough good commercial AI skills, such as product management and enterprise sales, to put the theory, research, and vision into practice.

“For instance, the ‘Office of AI’ doesn’t have an AI implementation budget. If we’re going to realise the potential that AI can bring to the UK, the government needs to put its money where its mouth is and appoint somebody who has a central budget to implement large-scale AI deployments when it comes to public policy.”

Greater collaboration needed

Fakhar Khalid, Chief Scientist of London-headquartered SenSat, a cloud-based 3D interactive virtual engineering platform, is more optimistic about the UK’s chances of becoming an AI superpower and calls for patience. While he agrees that “the US and China are the leading nations in terms of AI innovation and commercialisation,” he notes that China published its first AI strategy in 2017. The US followed with equivalent plans two years later. 

“Although these strategies have recently started to emerge in the public and policy domain, these countries have been investing healthily in their ecosystems since the early 1990s,” he says. “In the 90s, the US was not only the leading country for AI education, but its academic innovation also had strong ties with the industry, ensuring a direct impact on the growth of their economy.”

Hinting at how China’s system of government enables closer collaboration than is possible in the US, the UK, or even Europe as a bloc, he continues: “China, on the other hand, has been radical and ambitious in building its technology capabilities by strongly linking government, academia, and industry to show the beneficial impact of AI on their economy. The government centrally controls China’s AI strategy with hyperlocal implementation.

“The UK’s long overdue AI strategy is a clear indication that we are here to declare ourselves as the key leader in this field, yet we have much to learn from these nations about commercialising our research and creating a strong and impactful link between academia and industry.”

Dr Mahlet Zimeta, Head of Public Policy at the Open Data Institute in the UK, accepts that China and the US are ahead in the AI race but believes there are ways in which her country can catch up. “The territories that are lined up to be global AI superpowers are China, US, and the European Union,” she says, “because the great access to and availability of data means the analysis is better. They have massive advantages of scale, but the UK could show international leadership around AI ethics.”

With a greater focus on data skills, standards and sharing, and by encouraging an international, collaborative ecosystem to drive AI innovation, the West could leap ahead of China. And perhaps, in time, all AI superpowers will work together, in harmony, to the benefit of humanity.