Businesses are feeling less confident about their ability to identify customers online as advancements in artificial intelligence (AI) fuel opportunities for fraudsters.

That’s a key takeaway from the Experian UK Fraud and FinCrime Report 2024, which found that the explosive evolution of AI technologies is mirrored by rising anxiety about identity-related theft and fraud among businesses and consumers alike.

For this year’s report, we surveyed more than 200 UK businesses about their fraud and FinCrime strategies, and more than 2,000 consumers about their online experiences.

Our research found that more than a third (34%) of businesses encountered identity theft in 2023, with click-and-mortar retailers, telecommunications providers and retail banks particularly affected.

Synthetic identity fraud – where fraudsters create identities from a mixture of real and fake information – also escalated, our research found: 32% of businesses encountered it in 2023, up from 30% in 2022.

34% of businesses encountered identity theft in 2023, and 32% encountered synthetic identity fraud

What was striking this year was the wavering confidence of businesses in their ability to identify customers online. Whereas 80% of businesses were “very” or “extremely” confident they could recognise their customers digitally in 2022, rising to 83% in 2023, the proportion slumped to 68% at the start of 2024.

And consumer confidence has been dented too. Identity theft remains the dominant – and growing – concern for consumers conducting activities online, our survey shows. Eight out of 10 householders aged 55-69 told us they were worried about it, with 66% of respondents overall expressing concern – up from 58% in 2023.

Deepfake deluge

No doubt this anxiety is exacerbated by the recent surge in media reports about how fraudsters around the world are exploiting generative AI – particularly deepfake capabilities – to misuse stolen IDs and fabricate new identities for everything from phishing scams to account takeovers.

Hong Kong police reported in February that a finance worker at a multinational firm had been duped into paying $25 million to fraudsters who posed as his colleagues through an elaborate deepfake hoax. The worker was initially suspicious when he received an email, purportedly from his chief financial officer, discussing the need for a secret transaction. But he cast his doubts aside after attending a video conference call seemingly attended by the CFO and colleagues he recognised. All the attendees were deepfakes, however.

One type of cyber fraud causing alarm is the video ‘injection attack’, where criminals insert a stream of pre-recorded or fabricated content between a capture device (such as a mobile camera) and a security system (such as a liveness detection check). In 2021, a pair of fraudsters was caught using this method to issue fake tax invoices worth over US$76 million, circumventing the Chinese government’s facial recognition system with affordable, widely available software. The duo had set up a shell company and ‘staffed’ it with synthetic identities created from real personal data bought on the black market. They then generated highly realistic video footage and hijacked a mobile camera feed to convince the tax invoice system it was detecting a live person.

AI models that can clone voices and mimic speech patterns pose another challenge for individuals and businesses. In May, the chief executive of communications company WPP, Mark Read, revealed how fraudsters used a clone of his voice, alongside his profile picture and YouTube footage of a colleague in a fraudulent video call, with the aim of extracting money and personal details from a fellow executive. Fortunately, the targeted executive saw through the scam. But in 2023, a Guardian journalist was able to generate a clone of his own voice to skirt around the Australian taxation office’s security system, and there have been several reports of fraudsters using deepfake audio to trick individuals into thinking their loved ones have been kidnapped and require urgent financial help.

The AI arms race

So how can the industry respond to the emerging AI and deepfake threats?

“The answer is, there is no single answer. To keep a step ahead, you’ll need to carefully layer multiple capabilities and technologies that restore your confidence in accurately and repeatedly identifying your customers online.”

Grant MacDonald, Director, Fraud and AML Strategic Initiatives, Experian

Grant continues, “Businesses need to combine the power of multiple customer-authentication and fraud-prevention methods. We would recommend frictionless technologies such as behavioural biometrics, device and email intelligence.”

“Determining trusted combinations of customer credentials with email, phone and device data will be key. New data combinations, where applicant data is pieced together from multiple sources, are a key characteristic of synthetic ID fraud. The answer is to adopt a holistic, end-to-end strategy that leaves no door open to malicious actors.”

“Smart orchestration will be essential for designing the very best customer experience and carefully knitting all technologies together.”
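To make the layering idea concrete, here is a minimal, illustrative sketch in Python of how several independent signals – a device-intelligence score, an email-risk score, a behavioural-biometrics score, and a check for previously unseen credential combinations – might be orchestrated into a single decision. This is not Experian’s CrossCore API; every name, weight and threshold below is a hypothetical placeholder.

```python
def assess_application(signals, known_combinations):
    """Combine independent fraud signals into one decision.

    signals: dict holding the applicant's credentials plus hypothetical
             risk scores in [0, 1], where higher means riskier.
    known_combinations: set of (email, phone, device_id) tuples previously
             seen together for genuine customers.
    """
    combo = (signals["email"], signals["phone"], signals["device_id"])
    # A never-before-seen pairing of otherwise valid credentials is a
    # classic marker of synthetic identity fraud.
    novel_combination = combo not in known_combinations

    # Weighted blend of the individual risk scores (weights are invented).
    risk = (
        0.4 * signals["device_risk"]
        + 0.3 * signals["email_risk"]
        + 0.3 * signals["behaviour_risk"]
    )
    if novel_combination:
        risk += 0.2  # escalate rather than auto-decline

    if risk >= 0.7:
        return "decline"
    if risk >= 0.4:
        return "step_up"  # e.g. request a stronger proof of identity
    return "approve"
```

The point of the sketch is the shape of the logic, not the numbers: no single signal decides the outcome, and a suspicious combination raises friction (a step-up check) instead of immediately rejecting a possibly genuine customer.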

Most businesses tell us that detecting and preventing identity theft and synthetic identity fraud is a top operational priority in 2024 and around half intend to ramp up investments on this front this year. Implementing and improving their own AI models for fraud prevention and detection is the number one driver of increased investments in 2024.

Grant added, “Adopting machine learning to optimise decision-making across all capabilities will be critical for accurately discriminating fraudulent applications from good customers. Advanced analytics will strike the right balance between customer privacy and security, and ensure all technologies and capabilities are working together optimally and in unison.”
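As a toy illustration of the kind of model such machine-learning systems rest on, the sketch below scores an application with a logistic function over a few binary risk features. The features, weights and bias are entirely invented for illustration – in practice they would be learned from large volumes of labelled application data – and this is not any specific vendor’s model.

```python
import math

# Hypothetical weights, as if learned from labelled application data.
WEIGHTS = {"ip_mismatch": 1.8, "new_device": 1.1, "velocity": 2.3}
BIAS = -3.0  # baseline log-odds: most applications are genuine

def fraud_probability(features):
    """Logistic model: map risk features (0 or 1) to a fraud probability."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))
```

A clean application (all features 0) scores close to the low baseline probability, while one that trips several features at once is pushed towards 1 – which is how a learned score can separate fraudulent applications from good customers more finely than any single rule.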

“Businesses also need to ensure their document verification vendors are investing sufficiently in AI to maintain their ability to detect even the most convincing deepfake selfies and AI-generated documentary proofs.”

“With fraud tactics evolving fast in line with the sophistication of AI models, fraud teams have no choice but to stay ahead of the curve in this technological arms race.”

How can we help?

Experian’s CrossCore platform can help you bring together a range of fraud, ID and authentication solutions to drive security and customer experience KPIs. Visit our website to discover more or get in touch to discuss your specific requirements.

Download our latest report

The Experian UK Fraud and FinCrime Report is now available


All statistics within this article are available in the UK Fraud and FinCrime Report, a survey of more than 200 businesses and 2,000 consumers about the latest trends in fraud and financial crime.

Post tagged in: Fraud Risk Management