In August 2019, an employee at a British company was conned into sending the equivalent of about $260,000 to cyber criminals, who used a deepfake to emulate his boss's voice on a phone call demanding urgent payment to a "supplier."
Deepfakes have been used to make politicians and celebrities seem to say and do things they've never actually done. However, the technology isn't just for entertainment or fake news. As deepfake technology advances, cyber criminals are stealing identities to access or create online accounts and commit fraud.
This short guide explains deepfake identity theft and how to protect your company.
Want to learn how to protect your organization against identity theft and data breaches? Watch our webinar with Carrie Kerskie, founder of the Association of Certified Identity Theft Investigators.
What are Deepfakes?
Deepfakes are audio, video and photo spoofs made with AI technology. By analyzing facial features, body movements and voice data, fraudsters can create realistic-looking media. In visual media, a victim's face may be superimposed onto another body. For audio, the AI can synthesize any words in the victim's voice, down to accent and speaking patterns.
Joseph Carson, Chief Security Scientist at Thycotic, explains that "with these data sets and layers of digital footprints, deepfakes are able to mimic identities." While it is certainly easier (and more impactful) to imitate someone who appears in the media frequently, anyone with a digital presence is susceptible to deepfake identity theft.
RELATED: Synthetic Identity Theft: A Serious Risk for Financial Institutions
What is Deepfake Identity Theft?
As with other types of identity theft, the fraudsters pretend to be someone else to access or create accounts, products or services. However, deepfake identity theft is harder to catch because the fraudsters falsify identity documents and/or fake a victim's voice on the phone to bypass verification measures.
Cyber criminals gather data for deepfake identity theft in a number of ways. They may hack into devices that use biometrics, especially facial recognition, to collect data. Using social media, they take audio, video and photo samples to feed into the AI. For audio samples, they may even tap a victim's phone.
Deepfake identity theft can take a lot of forms, including:
- Falsifying ID documents to access accounts
- Using the victim's face and voice to change account numbers, passwords and authorized users
- Faking the victim's voice on a phone call to family or friends to request a funds transfer
Deepfakes at Work
Deepfake identity theft is such a complex process that most people aren't at major risk. However, fraudulent media "of those in positions of power can be effectively used to compel someone to conduct ruinous financial transactions—without their knowledge," says Rene Hendrikse, Vice President and Managing Director of Mitek Systems.
The British case illustrates why: employees don't want to risk their jobs by ignoring a boss's command. That pressure is what makes deepfakes so effective when they impersonate executives, and why employees should be skeptical of seemingly odd or urgent messages.
Customers should be cautious online, but it's up to businesses to protect their clients from all types of fraud, including deepfake identity theft. Update your AI verification software frequently to ensure criminals can't use stolen identities, even highly convincing ones, to access online accounts.
"Now that deepfaked IDs are a real concern, the 'naked eye' is no longer enough to protect against identity theft and ensure compliance—especially while maintaining real time customer service," Hendrikse says.
Case management software makes fraud investigations easier and more effective. Get more details in our free eBook.
Deepfake identity theft can be difficult to catch, even for the victims. Train your employees to be careful about what and how much they post online, and to always double-check finance-related phone calls and online media before acting on them.
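The "always double-check" advice can be made concrete as an out-of-band verification policy: never act on a payment request based solely on the channel it arrived through, and confirm it via a contact method already on file. The sketch below illustrates this workflow in Python; all names, numbers and thresholds are hypothetical, and in practice the callback would be placed by a person, not code.

```python
# Hypothetical sketch of an out-of-band verification policy for payment
# requests. Contact details and the approval rule are illustrative only.
from dataclasses import dataclass

# Numbers maintained independently of any incoming message -- never use a
# callback number supplied inside the request itself.
KNOWN_CONTACTS = {"ceo@example.com": "+44 20 7946 0000"}


@dataclass
class PaymentRequest:
    requester: str              # claimed sender of the request
    amount: float               # requested transfer amount
    channel: str                # "phone", "email", ...
    verified_out_of_band: bool = False


def confirm_via_known_number(request: PaymentRequest) -> bool:
    """Mark the request verified only after a callback to the number on
    file for the claimed requester."""
    number = KNOWN_CONTACTS.get(request.requester)
    if number is None:
        return False            # unknown requester: escalate, don't pay
    # In a real workflow, a human places the call here and confirms the
    # request verbally before this flag is set.
    request.verified_out_of_band = True
    return True


def approve(request: PaymentRequest) -> bool:
    # Urgency is a red flag, never a reason to skip verification.
    return request.verified_out_of_band
```

A convincing deepfaked voice on the original call changes nothing under this rule, because approval depends on the separate callback, not on how authentic the request sounds.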