How to Avoid AI Voice Scams


In an era where artificial intelligence (AI) is weaving itself into the fabric of daily life, the innovation of AI voice cloning has opened new avenues for scammers. AI voice cloning scams involve cybercriminals using generative AI technology to mimic the voice of a loved one or a family member, creating a sense of urgency to trick people into parting with their money or sensitive information.

Every day, thousands of Americans receive calls from scammers who use AI voice technology. Here’s a guide on how to stay safe from these increasingly sophisticated identity theft schemes and other scams.

Understand the Technology

AI voice cloning uses algorithms to generate a digital replica of a person’s voice from just a short audio clip. This technology, while revolutionary, can also be misused to create convincing scams. These deepfake audio messages can make it seem as though a family member is in trouble—perhaps claiming they’ve been in a car accident or are in legal trouble—and urgently need money, often in the form of cryptocurrency, credit card details, or gift cards.

Recognize the Red Flags

The Federal Trade Commission (FTC) highlights several red flags associated with AI voice cloning and other scams:

  1. Urgent requests for money: Be wary if the caller insists on immediate action.
  2. Requests for secrecy: Scammers often ask victims not to tell other family members about the situation.
  3. Unusual payment methods: Requests for cryptocurrencies, gift cards, or wire transfers are almost always a sign of fraud.

Top 7 Most Common AI Voice Scams

  1. Grandparent Scam: Scammers use AI-generated voice cloning to mimic a grandchild's voice, calling the grandparents with an urgent request for money due to a fake emergency like a car accident, often asking for funds to be sent via gift cards or cryptocurrency.
  2. Fake Authority Imposter Calls: Cybercriminals clone the voices of law enforcement or government officials to intimidate victims into handing over sensitive information or money, leveraging the authority of agencies like the FTC to create a sense of urgency.
  3. Tech Support Scams: Using AI technology, scammers pose as tech support from reputable companies like Apple or Microsoft, claiming that your computer is at risk from malware. They manipulate victims into granting remote access or paying for unnecessary cybersecurity protection.
  4. Romance Scams on Social Media: AI voice cloning is employed to deepen fake relationships cultivated on platforms like Facebook or LinkedIn. Scammers create emotionally charged scenarios, asking for money to handle fake crises.
  5. Phishing Voicemails: This scam involves leaving AI-generated voicemails that sound like they're from a trusted source, such as a bank or Amazon, urging you to call back on a provided phone number that leads to a scammer ready to extract credit card details or personal information.
  6. Business Executive Fraud: AI is used to mimic the voice of a CEO or high-level executive in audio clips sent via email or phone calls to employees, requesting urgent wire transfers or sensitive data disclosure.
  7. Insurance Fraud: Scammers impersonate insurance agents after disasters, using cloned voices of real agents harvested from social media or company websites. They exploit victims' states of distress, pushing for immediate claim payments or personal information in exchange for aid.

In all these cases, the FTC advises verifying identities through direct communication on known, secure lines, enabling two-factor authentication, and staying cautious of unknown numbers and unsolicited requests.

Verify Suspicious Calls

If you receive an unexpected phone call or voicemail that triggers alarm, take the following steps to protect yourself:

  1. Hang up and call back: Use a known phone number to contact the family member or friend who supposedly called you.
  2. Use a code word: Agree on a family code word for emergencies so you can quickly verify whether a call is legitimate.
  3. Check the caller ID: Be skeptical of phone calls from unknown numbers or numbers that mimic known contacts, a tactic known as caller ID spoofing.

Stay Informed and Secure

Enhancing your cybersecurity measures can also help protect against scams:

  1. Educate yourself and others: Share information about scams with friends and family, especially those who might be more vulnerable, such as elderly relatives targeted in grandparent scams.
  2. Secure your digital presence: Use two-factor authentication on all important accounts, from social media to financial services.
  3. Stay updated: Keep your devices and apps current; companies like McAfee and major tech platforms like Apple and Amazon regularly update their security features to guard against new threats.

What the Experts Say

Cybersecurity experts urge vigilance. According to recent insights from McAfee, scam calls and phishing attempts are rising every year. LinkedIn and other social media platforms are also common hunting grounds for cybercriminals gathering the personal data used in these scams.

Taking Action

If you suspect you’ve encountered an AI voice cloning scam:

  1. Report it to the authorities: Notify the FTC or your local law enforcement.
  2. Inform your network: Alert your social circle to prevent further spread of the scam.

As AI technology evolves, so too do the tactics of cybercriminals. By staying educated about the latest in AI tools and cybersecurity threats, and by remaining vigilant in your daily communications, you can protect yourself and your loved ones from falling victim to these nefarious schemes.

Protecting Against AI Voice Cloning and Scams

To protect against AI voice cloning, establish verification methods like a unique code word with family and friends, and always verify suspicious calls by hanging up and calling back using a known number.

Protect yourself from AI threats by using strong cybersecurity practices like enabling two-factor authentication on your accounts, keeping your software updated, and educating yourself about the latest AI-driven threats.

Detecting voice cloning can be challenging, but warning signs include slight distortions or unusual tones in the voice, deviations from the person’s usual speech patterns, and odd or missing background noise; always verify through an alternate communication method if you’re suspicious.

Cliff Weitzman

Cliff Weitzman is a dyslexia advocate and the CEO and founder of Speechify, the #1 text-to-speech app in the world, totaling over 100,000 5-star reviews and ranking first place in the App Store for the News & Magazines category. In 2017, Weitzman was named to the Forbes 30 under 30 list for his work making the internet more accessible to people with learning disabilities. Cliff Weitzman has been featured in EdSurge, Inc., PC Mag, Entrepreneur, Mashable, among other leading outlets.