Generative artificial intelligence (GenAI) has the capacity to create new opportunities, disrupt how we work, and change how we think about AI regulation. Some predict it will be as disruptive as the widespread adoption of the internet, if not more so. But with new opportunities come new challenges and threats. While GenAI continues to dominate the attention of businesses, the media, and regulators, it has also caught the attention of fraudsters.

Recent technological advances mean it’s never been cheaper or easier to be a fraudster. In this brave new digital-first world, fraudsters have more tools at their fingertips than ever before. And the cost is set to soar: online payment fraud losses are predicted to grow from $38 billion in 2023 to $91 billion in 2028.

The rise of the GenAI fraudster

Fraudsters generally fall into two groups: lone amateurs and organized criminal enterprises. Traditionally, the latter, with more resources at their disposal, have posed the greater threat to businesses. But GenAI gives even the most amateur fraudsters easy access to more scalable and increasingly sophisticated types of fraud.

The evidence is in the data. Over the last few years, less sophisticated or “easy” fraud dominated. Proprietary data from Onfido, an Entrust company, found that between 2022 and 2023, 80.3% of fraud caught fell into this category; the remainder was classed as “medium” (19.6%) or “hard” (0.1%). But recently there has been a shift toward more sophisticated fraud: data from the last six months shows a jump in both medium fraud (36.4%) and hard fraud (1.4%).

How fraudsters are using generative AI

Deepfakes

GenAI programs have made it easy for anyone to create realistic, fabricated content, including audio, photos, and videos. Deepfake videos in particular (sophisticated synthetic media in which one person’s likeness is replaced with another’s) are becoming increasingly common and convincing. Fraudsters have started using deepfakes to try to bypass biometric verification and authentication methods. These videos, which typically superimpose one person’s face onto another’s, can be pre-recorded or generated in real time with a GPU and a fake webcam.

This type of attack has surged in recent years: deepfake attempts increased 3,000% from 2022 to 2023. This is particularly concerning in the realm of digital onboarding and identity verification, where the integrity of personal identification is paramount.

Currently, a small number of fraudsters are responsible for creating deepfakes at scale. But the growing popularity of “fraud-as-a-service” offerings (where experienced fraudsters sell their services to others), combined with improvements in deepfake software, suggests that deepfake volume and sophistication will increase in 2024.

Document forgeries

Many customer due diligence processes involve the authentication of identity documents. But image manipulation software and the emergence of websites such as OnlyFakes (an online service that sells images of identity documents it claims are generated using AI) have made it easier for fraudsters to fake documents.

There are four ways for fraudsters to create fake documents, modeled in the short sketch after this list:

  1. Physical counterfeit: A fake physical document created from scratch
  2. Digital counterfeit: A fake digital representation of a document created from scratch (i.e., in Photoshop)
  3. Physical forgery: An existing document that is physically altered or edited
  4. Digital forgery: An existing document that is altered or edited using digital tools
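These four categories really form a two-by-two grid: medium (physical vs. digital) crossed with origin (created from scratch vs. an altered original). Below is a minimal Python sketch of that taxonomy; the class and field names are our own illustration, not terminology from the report.

```python
# A minimal model of the 2x2 document-fraud taxonomy described above.
# Class and field names are illustrative, not standard terminology.
from dataclasses import dataclass
from enum import Enum

class Medium(Enum):
    PHYSICAL = "physical"
    DIGITAL = "digital"

class Origin(Enum):
    COUNTERFEIT = "created from scratch"
    FORGERY = "existing document altered"

@dataclass(frozen=True)
class DocumentFraudType:
    medium: Medium
    origin: Origin

    def describe(self) -> str:
        return f"{self.medium.value} {self.origin.name.lower()}: {self.origin.value}"

# Enumerate the four combinations from the numbered list above.
for medium in Medium:
    for origin in Origin:
        print(DocumentFraudType(medium, origin).describe())
```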

Historically, most fake documents were physical counterfeits (fake documents created entirely from scratch). In 2023, Onfido identified that 73.2% of all document fraud caught was from physical counterfeits. In the last six months, that share has dropped to 59.56%, with digital forgeries accounting for a larger proportion of document fraud (34.8%) than in prior years.

This increase in digital forgeries can be attributed to the emergence of websites such as OnlyFakes. Fraudsters have wised up to the fact that digital forgery is a faster, cheaper, and more scalable way to create fake documents.

Synthetic identity fraud

Synthetic identity fraud is a type of fraud where criminals combine real and fabricated personal information, such as Social Security Numbers (SSNs) and names, to create a new identity. This new, fake identity is then used to open accounts, access credit, or make fraudulent purchases.

Generative AI tools offer a way for fraudsters to generate fake information for synthetic identities at scale. Fraudsters can use AI bots to scrape personal information from online sources, including online databases and social platforms, before using this information to create synthetic identities.
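To make the scale concrete, here is a minimal sketch using the open-source Python library Faker. Faker is a rule-based test-data generator rather than GenAI, so this is an analogy of our own choosing, but it shows how trivially well-formatted personal data can be mass-produced; LLM-based tools do the same with more realistic, context-aware output.

```python
# Illustrative only: Faker (a rule-based test-data library, not GenAI)
# shows how cheaply plausible, well-formatted PII can be generated at scale.
from faker import Faker

fake = Faker("en_US")  # the en_US locale includes an SSN provider

# Generate 1,000 synthetic identity records in well under a second.
synthetic_identities = [
    {
        "name": fake.name(),
        "ssn": fake.ssn(),  # fabricated, but correctly formatted
        "dob": fake.date_of_birth(minimum_age=18, maximum_age=80).isoformat(),
        "address": fake.address().replace("\n", ", "),
    }
    for _ in range(1_000)
]

print(synthetic_identities[0])
```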

With synthetic identity fraud projected to generate $23 billion USD in losses by 2030, businesses are adopting advanced fraud detection and prevention technologies to root out synthetic fraud. The foundation of this detection framework is a reliable identity verification solution at onboarding, which keeps fraudsters from entering in the first place.
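One simple building block in such a framework is a cross-application consistency check. The sketch below (with hypothetical field names and data) flags any SSN that appears under conflicting names or birth dates, a classic signal of a synthetic identity.

```python
# Minimal consistency check: flag SSNs tied to conflicting identities.
# Field names, records, and the flagging rule are illustrative assumptions.
from collections import defaultdict

applications = [
    {"ssn": "123-45-6789", "name": "Alice Smith", "dob": "1990-01-01"},
    {"ssn": "123-45-6789", "name": "Alicia Smyth", "dob": "1985-07-12"},
    {"ssn": "987-65-4321", "name": "Bob Jones", "dob": "1978-03-30"},
]

identities_per_ssn = defaultdict(set)
for app in applications:
    identities_per_ssn[app["ssn"]].add((app["name"], app["dob"]))

for ssn, identities in identities_per_ssn.items():
    if len(identities) > 1:  # same SSN, conflicting name/DOB pairs
        print(f"SSN {ssn} flagged: {len(identities)} distinct identities")
```

A production system would combine many such signals (device, velocity, and bureau data, for example) rather than rely on any single rule.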

Phishing

During phishing attacks, fraudsters reach out to individuals via email or other forms of communication, requesting that they provide sensitive data or click a link to a malicious website that may contain malware.

Generative AI tools offer fraudsters an easy way to create more sophisticated and personalized social engineering scams at scale, for example by using AI tools to write convincing phishing emails or to automate card cracking. Research has found that the top tools used by bad actors in 2023 include the dark web, fraud-as-a-service, and generative AI. This includes WormGPT, a tool that provides a fast method for generating phishing attacks and malicious code.
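Defenders can answer in kind, a theme the next section expands on. As a minimal illustration (the tiny inline dataset and model choice are our own assumptions, not a production design), a classic machine learning text classifier can score messages for phishing likelihood:

```python
# Minimal sketch of a phishing-text classifier using scikit-learn.
# The four-example dataset is illustrative; real systems train on
# thousands of labeled emails and combine many additional signals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked. Verify your password here immediately.",
    "Urgent: confirm your SSN to avoid account suspension.",
    "Meeting moved to 3pm, see the updated agenda attached.",
    "Quarterly report draft attached for your review.",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, labels)

test = "Please verify your password now to keep your account active."
print("phishing probability:", clf.predict_proba([test])[0][1])
```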

Combatting GenAI fraud with… AI

The advancement of GenAI means we’re entering a new phase of fraud and cyberattacks. But the good news is that any technology fraudsters can access is also accessible to those building fraud detection solutions. The best cyber defense systems of tomorrow will need AI to match the speed and scale of attacks. Think of it as an “AI versus AI showdown.”

With the right training, AI algorithms can recognize the subtle differences between authentic and synthetic images or videos, which are often imperceptible to the human eye. Machine learning, a subset of AI, plays a crucial role in identifying irregularities in digital content. By training on vast datasets of both real and fake media, machine learning models can learn to differentiate between the two with high accuracy.
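As a concrete, deliberately simplified sketch of this idea, the snippet below trains a tiny convolutional classifier for one step on stand-in data. The architecture, hyperparameters, and data are illustrative assumptions, not any vendor’s production model; a real system would train on large labeled corpora of genuine and synthetic media.

```python
# Minimal sketch of a binary real-vs-fake image classifier (PyTorch).
# Architecture, hyperparameters, and data are illustrative assumptions.
import torch
import torch.nn as nn

class FakeMediaClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)
        )

    def forward(self, x):
        return self.head(self.features(x))  # raw logit: > 0 leans "fake"

model = FakeMediaClassifier()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in batch: in practice, labeled real and deepfake frames.
images = torch.randn(8, 3, 128, 128)          # 8 RGB frames, 128x128
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = fake, 0 = real

logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(f"training-step loss: {loss.item():.4f}")
```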

One of the strengths of using AI to fight deepfakes and other GenAI fraud is its ability to continuously learn and adapt. As deepfake technology evolves, so too do the AI algorithms designed to detect it.

Securing digital identities against fraud

With AI-driven attacks such as phishing, deepfakes, and synthetic identities on the rise, Entrust’s AI-powered, identity-centric solutions are critical to ensuring the integrity and authenticity of digital identities.

By innovating and integrating Onfido capabilities across the Entrust portfolio, we’re committed to helping:

  • Fight phishing and credential misuse with enhanced authentication leveraging biometrics and digital certificates
  • Neutralize deepfakes while creating secure digital experiences with AI/ML-driven identity verification
  • Enable trusted digital onboarding, authenticate customers or employees, and issue credentials in a matter of minutes, all while reducing fraud exposure and staying compliant with regulations and standards
  • Secure data and cryptographic assets with cutting-edge encryption, key management, and compliance solutions

To learn more, download the full report here: https://go.entrust.com/identity-fraud-report-2024