A smartphone showing a scam exchange requesting money with a blue credit card in the background.

4 Ways to Stay Ahead of AI Scams

09/06/2025

AI threats are on the rise. Knowing how to spot scams is key to preventing fraud and protecting your identity in the age of generative AI.

Cybercrime is evolving fast—and artificial intelligence is its newest weapon.

In the past year, criminals have begun using generative AI to create more convincing, more personal and more widespread AI scams. Voice cloning and deepfakes are being used to mimic loved ones. Sophisticated phishing emails look like they came straight from your bank, your boss or your kid’s school. And AI-powered tools are giving novice criminals the power to scale scams like never before.

“AI is accelerating the threat landscape and creating an easy way for fraudsters to operate more effectively,” says Bradley Haacke, vice president and fraud director at Fifth Third Bank. “What used to take skill and people can now be automated and scaled with ease using technology.”

According to a 2024 report by Authority Hacker based on data from the Federal Trade Commission, Americans lost more than $108 million to scams involving AI in just one year. The biggest losses came from investment-related scams, imposter scams and business and job opportunity scams.

Key takeaways:

  • Scammers are using AI tools like voice cloning, deepfakes and personalized phishing emails to create more convincing and emotionally manipulative fraud attempts.
  • The best way to protect yourself from AI threats is to slow down, verify unexpected messages or requests through trusted channels, and stay alert to red flags like urgency or secrecy.
  • As part of Fifth Third’s SmartShield® security approach, the SmartShield® dashboard gives you access to 24/7 fraud monitoring, identity alerts and scam protection tools that help safeguard your information in the age of AI.1

Here’s a look at four major ways scammers are using AI—and what you can do now to protect yourself and your family.

1. AI phishing scams are smarter—and harder to spot

Many scam emails are still chock-full of misspellings and too-good-to-be-true promises, but thanks to AI, some phishing messages may appear grammatically flawless, personalized and extremely convincing.

“We’re seeing criminals use AI to create fake websites, mimic legitimate businesses and craft emails and texts that don’t raise suspicion,” says Haacke. “They’ll even use it to analyze publicly available data or scan your social media to tailor their approach—using real names, locations or interests to make it more believable.”

AI-enhanced fraud attempts commonly employ:

  • Fake order confirmation emails from major retailers
  • Phony bank alerts prompting you to “verify your identity”
  • Rental or job listings designed to steal personal info
  • Fake websites that lead to online sale scams

How to prevent fraud: No matter how official a communication may appear, always verify it before clicking or responding.

  • Contact the company directly through a trusted source, like their website or app.
  • Log into the account in question to look for messages on a more secure platform, or call your provider to confirm the correspondence came from them.
  • Never use the contact phone number or website listed on a suspicious communication.
  • Keep an eye out for red flags, like a sense of urgency, secrecy, suspicious links or offers that are too good to be true.

2. AI voice cloning and AI deepfakes are fueling emotional manipulation

Generative AI tools can now mimic someone’s voice based on “hearing” a short clip of audio—which can be scraped from a voicemail, social media post, podcast or YouTube video. That’s made it easier than ever for scammers to impersonate your boss, a colleague, even loved ones in distress, tricking you into acting quickly without thinking.

“A caller’s voice may sound like a loved one or close friend,” says Haacke. “They’ll say they’ve been in an accident, need bail money or are stuck overseas. In that moment of panic, people often don’t stop to question it. Those who do question it rightly follow up with relatives or friends to confirm whether they’re really in distress.”

The same technology can create deepfake videos that show a celebrity or trusted figure endorsing a fraudulent cause, or videos that appear to come from a family member asking for a wire transfer or crypto payment. These videos are sometimes embedded in fake news articles or sent via text to boost credibility.

Common AI-powered voice and deepfake scams include:

  • Fake family emergencies asking for urgent money transfers.
  • Deepfake videos asking for charitable donations.
  • Romance scams where fraudsters use altered images and AI to build trust over time.

How to prevent fraud: When it comes to AI-powered impersonation scams, a few simple habits can go a long way.

  • Establish a family safe word for emergencies. If a caller can’t produce that word on demand, they’re likely an imposter. (And remember that criminals can spoof caller ID, so it’s no longer a reliable indicator that a caller is legitimate.)
  • Be wary of any communication that’s inconsistent with your loved one’s typical behavior. If something feels off, disconnect and verify the situation independently.
  • Remind loved ones—especially seniors and teens—not to post voice notes or videos that could be used for cloning.

3. AI has led to more sophisticated malware attacks

Cybercriminals are now developing their own custom large language models, such as WormGPT and FraudGPT (which don’t have the same built-in ethical guardrails as ChatGPT), and selling them on the dark web or on encrypted platforms.

These models help criminals:

  • Write professional-sounding phishing emails.
  • Code malware that steals login credentials.
  • Generate fake documents, job offers and rental applications that collect your Social Security number and bank info.
  • Launch “smishing” campaigns (text-message scams) in bulk.

“Phishing-as-a-Service (PhaaS) has emerged as a business model,” says Haacke. “It allows low-skill scammers to rent powerful tools and templates to run full-fledged scams. They don’t need to be hackers—just opportunists.”

How to prevent fraud: The best way to protect yourself from stronger malware attacks is through awareness—supplemented by digital tools. Fifth Third’s SmartShield® dashboard, available to anyone with a Fifth Third Momentum® Checking account1, offers access to tools like:

  • 24/7 fraud monitoring that uses AI to help detect suspicious activity.
  • Check tracking, Zelle® fraud detection and Identity Alert® to help monitor your personal information.
  • A partnership with HackerOne, the world’s largest community of trusted ethical hackers who find vulnerabilities and report them to the bank.

4. When AI and fraud combine, scams scale faster than ever—and reach more people

AI’s automation power enables one bad actor to deploy bots that target millions of inboxes, phone numbers or accounts in minutes, each with slightly different, personalized language.

“With AI, criminals don’t need to cast a wide net with generic messages,” says Haacke. “They can send hyper-targeted emails or texts at scale—and with AI learning from past responses, they get better over time. Fraud in general is dynamic; it’s always changing. And these tools enhance its ability to do so.”

This scale means even the most cautious families may be exposed, not just once but repeatedly.

Common high-volume attacks include:

  • IRS or tax scams during filing season.
  • Fake delivery or utility notices with links to phony login pages.
  • Social engineering attacks disguised as bank or health plan updates.

How to prevent fraud: Smart security habits can reduce how often you’re targeted—and help you spot the rest before it’s too late.

  • Always practice basic digital hygiene. That starts with using strong, unique passwords, especially for sensitive accounts like those with your bank or healthcare providers.
  • Enable two-factor authentication, which adds an extra layer of verification at login.
  • Make sure to keep software and devices updated, particularly your smartphone (updates often include security fixes and patches to any vulnerabilities).
  • Be hyper-vigilant and suspicious of requests or notices sent via text. (For instance, the IRS will never initiate contact with you via text or email; it initiates contact by mail.)

AI fraud prevention: What to do if you think you’ve been targeted

If you think you’ve been the victim of fraud or identity theft, contact your bank immediately. You can report suspicious account activity to Fifth Third Bank at 800-972-3030 or forward suspected phishing emails related to Fifth Third to 53investigation@security.53.com. You can also report any Bank-related phishing attempts via the SmartShield® dashboard in your Fifth Third Mobile App.

“We need to know as soon as possible,” says Haacke. “If we catch it early, there might still be funds in play that we’re able to recover.”

In addition to working with your bank, you should file a police report and contact the FBI’s Internet Crime Complaint Center.

The bottom line:

You can’t control how scammers use AI—but you can control how prepared you are. By staying alert, verifying communications and using tools like SmartShield®, you can help shield yourself and your family from the evolving threats of AI-powered fraud.
