
Deepfake Texts and Voices: The New Risk to Employers

Article written by: Chintana Bhaskara

Article designed by: Chintana Bhaskara & Sanvi Desai




AI Deepfake Voices

Cybercriminals are now creating AI-generated voices and videos that look and sound like real people. These deepfakes trick employees into sending money or sharing sensitive data and files with their supposed “boss.” Attackers mine old emails, texts, and videos to make their messages more realistic and believable, and they use AI to adjust the tone, pace, and emotion of the cloned voice so that it sounds natural.

Image by Scientific American


These techniques make the scams harder for employees to detect and show how quickly this new form of social engineering is advancing.



Real-Life AI Scams

These scams are not hypothetical; they have already caused companies and individuals real financial losses. Using fake voices and videos, attackers are tricking companies into sending large sums of money into the wrong hands. In one case, employees authorized a $25 million transfer requested on a video call by someone they believed was their CEO, never realizing they were being deceived by cybercriminals.


Image by CFC Underwriting

In another case, a company lost hundreds of thousands of dollars after attackers used a voice clone to make fraudulent instructions sound as if they came from the real person. These two incidents are just a few of the many in which AI deepfakes have been used to exploit trust, leaving people to wonder whether they are really talking to who they think they are. They also show how attackers combine tools such as deepfakes and phishing to make their scams more convincing and more successful.


AI Scam Protection

To defend against AI scams, companies need new prevention strategies. Prevention starts with employees, who are often the victims: they should be trained to pause and spot deepfake videos and voices. Deepfakes can often be detected through subtle signs such as unnatural pauses or movements, or strange phrasing that sounds machine-generated. Companies can also deploy security tools, such as voice biometrics and anomaly detection, to flag fake audio and video. For sensitive requests received by phone or email, employees should verify the request through a separate, secure channel, such as calling back a known contact number. Experts also suggest simulating AI-based attacks in training so employees get real practice recognizing these deepfakes.




