Deepfake Dangers Part 2: How AI Is Fighting the Fraudsters

Deepfakes are a serious threat to our industry, but AI can help us fight back.

In my last blog article, I discussed how deepfake fraud is a growing threat in the real estate industry and what you can do to combat it in your workplace. This time, I thought it would be helpful to take a deeper dive into some of the latest AI tools on the market that may be able to assist in these efforts. Of course, careful consideration is warranted before implementing any new solution, and it’s important to consult with your IT and security team to ensure it aligns with your business needs and data security standards. With that said, let’s dig in!

What is deepfake fraud?

Deepfake fraud has exploded in recent years, with some reports showing an increase of over 2,000%. Scammers are using AI-generated videos and voices to impersonate real people convincingly. To combat this technology, experts have developed cutting-edge tools and techniques to recognize and stop deepfakes.

What are people doing about it?

Here are some of the latest detection methods that your agency might consider to keep deepfake fraudsters at bay, and how they work.

  • AI-powered detection tools are designed to analyze videos and images in real time to detect whether they have been manipulated. A couple of promising tools include:
    • HONOR’s AI Deepfake Detection – Launching April 2025
      HONOR’s deepfake solution can be thought of as a built-in lie detector for images and videos. The technology scans media in real time and alerts users if something seems fake. This could help businesses and individuals avoid being misled by AI-generated content.[i]
    • Reality Defender – Real-time Deepfake Detection for Video Calls
      In a world of constant video meetings, it has unfortunately become possible for someone to get on a call with you and pretend to be your boss or a family member by using deepfake technology. Reality Defender combats this type of fraud by scanning facial movements, voice patterns and subtle glitches in real time. If anything is flagged, the technology alerts users so they don’t become victims of scams.[ii]
  • Lightweight AI models are another tool people are deploying to deal with the rise of deepfakes and other fraudulent activity. These AI detection tools offer unique advantages to users. For one thing, they require far less computing power than other models, but they are still capable of effectively detecting deepfakes. Let’s look at a specific example:
    • Tiny-LaDeDa – A mini AI model with 96% accuracy
      Unlike traditional AI models that consume an inordinate amount of power, Tiny-LaDeDa can sniff out deepfakes even while running on smaller devices. Despite being lightweight, it still claims 96% accuracy in detecting deepfake videos by analyzing tiny details in how faces and voices are generated.[iii]
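Under the hood, detection tools like these generally follow the same pattern: a classifier scores individual video frames for signs of manipulation, and those scores are aggregated into a single verdict. The sketch below illustrates just that aggregation step in plain Python; the per-frame scores are hypothetical stand-ins for a trained model's output, not the behavior of any product named above.

```python
# Illustrative sketch of frame-level deepfake detection.
# In a real system, a trained classifier would produce a
# fake-probability (0.0-1.0) for each video frame; here we
# only show how those scores become a verdict.

def aggregate_scores(frame_scores, threshold=0.5):
    """Average per-frame fake probabilities and flag the video
    if the mean exceeds the threshold."""
    if not frame_scores:
        raise ValueError("no frames to score")
    mean_score = sum(frame_scores) / len(frame_scores)
    return mean_score, mean_score > threshold

# Example: scores a detector might emit for a short, mostly
# "fake-looking" clip
scores = [0.82, 0.91, 0.77, 0.88]
mean, is_fake = aggregate_scores(scores)
print(f"mean fake probability: {mean:.2f}, flagged: {is_fake}")
```

Real detectors use more sophisticated aggregation (e.g., weighting frames or requiring consecutive flagged frames), but the threshold-on-aggregate idea is the common core.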

Comprehensive benchmarking frameworks

Given that deepfake technology is always evolving, cybersecurity researchers are not resting on their laurels. The industry has been developing standardized testing platforms to improve detection tools and ensure that security solutions can keep up with even the most creative of fraudsters. Let’s take a peek at some of the most notable:

  • DF40 – A giant deepfake training library
    The DF40 library can be thought of as a gym for deepfake detectors. It contains thousands of deepfake samples created using 40 different AI techniques. Researchers can train and test tools against a wide variety of fake content, which helps them get far better at spotting new fakes as they emerge.[iv]
  • DeepfakeBench – A fair testing ground
    As with many cybersecurity tools, not all deepfake detectors are created equal. Additionally, some detectors are good at spotting one type of fraud but perform poorly when dealing with another. DeepfakeBench seeks to remedy this by ensuring that every detection tool is tested under the same conditions. It is an important solution for those who want to compare different products and assess which ones are the most effective.[v]

Smarter deepfake detection techniques

Sometimes, deepfake detectors can cause more problems than they solve. For example, certain tools may focus too much on “fake-looking” elements instead of checking if a person’s identity is real by cross-referencing IDs against verified data or analyzing biometric consistency. Luckily, there are many researchers currently working hard to fix this problem:

  • Rebalanced Deepfake Detection Protocol (RDDP)
    RDDP improves deepfake detection by making sure tools don’t just look for obvious digital artifacts like weird lighting or blurry patches. This prevents hackers from bypassing detection by using better-quality deepfakes.[vi]

Government and military efforts

Governments are also stepping into the fight against deepfake fraud, especially because deepfakes can pose a considerable risk to national security and election integrity.

  • Defense Advanced Research Projects Agency (DARPA)
    DARPA is an agency within the Defense Department that focuses on investigating emerging technologies. As part of that effort, it is investing in AI tools that go beyond simple detection and combat deepfakes on a forensic level. The agency sees this work as a critical piece of the puzzle in dealing with everything from misinformation and identity fraud to AI-generated impersonations.[vii]

Tools for real estate transactions

While deepfake technology is advancing, so too are the tools designed to prevent all types of fraud in real estate transactions.

  • SecureMyTransaction® from Alliant National
    SecureMyTransaction (SMT) leverages AI-driven facial recognition to verify identities by comparing ID photos with selfie images, helping ensure that parties involved in a transaction are legitimate. In addition, SMT helps verify bank accounts and business entities to add multiple layers of security. By integrating these advanced fraud prevention tools into the title and escrow workflow, SMT provides an important safeguard against deepfakes and other fraud tactics. Learn more at securemytransaction.com.
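Face-matching systems of this kind typically reduce each image to a numeric embedding vector and compare the two with cosine similarity; a score above a tuned threshold counts as a match. The sketch below shows only that comparison step, with tiny hand-made vectors standing in for the output of a real face-recognition model. It is a generic illustration of the technique, not SMT's actual implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def faces_match(id_embedding, selfie_embedding, threshold=0.85):
    """Declare a match when the two face embeddings are
    sufficiently similar."""
    return cosine_similarity(id_embedding, selfie_embedding) >= threshold

# Toy 3-dimensional embeddings; a real model would output
# hundreds of dimensions per face
id_photo = [0.9, 0.1, 0.4]
same_person_selfie = [0.88, 0.12, 0.41]  # nearly identical direction
stranger_selfie = [0.1, 0.9, 0.2]        # very different direction
print(faces_match(id_photo, same_person_selfie))
print(faces_match(id_photo, stranger_selfie))
```

The threshold is the key tuning knob in production systems: set too low, impostors slip through; set too high, legitimate customers get falsely rejected.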

Final thoughts

Scammers are increasingly using AI-powered deepfakes to target real estate transaction stakeholders—which makes them a major threat to our industry. But thankfully, new detection technologies are pushing back on these ambitious criminals. For title agencies, it is imperative to understand how these solutions work and how they may enhance your cybersecurity posture. The threat landscape is always evolving, but by staying apprised of the most cutting-edge solutions out there, you can fight fraud and keep your agency moving forward.


[i] HONOR to roll out AI-powered Deepfake Detection globally in April 2025

[ii] https://www.wired.com/story/real-time-video-deepfake-scams-reality-defender/

[iii] https://www.devdiscourse.com/article/technology/3264484-lightweight-ai-model-exposes-deepfake-threats-with-96-accuracy

[iv] https://github.com/YZY-stack/DF40

[v] https://arxiv.org/abs/2307.01426

[vi] https://arxiv.org/abs/2405.00483

[vii] https://www.biometricupdate.com/202502/darpa-continues-work-on-technology-to-combat-deepfakes
