Across the United States, a rising tide of anxiety is sweeping over citizens who fear that their life savings might fall prey to increasingly sophisticated AI scams. These fears are not unfounded, as technology continues to evolve, creating more convincing and deceptive frauds.
A recent survey conducted by the Daily Mail highlights this growing concern. The poll, which gathered responses from over 3,000 individuals, indicates that the possibility of falling victim to AI-driven fraud is now the foremost concern among Americans. This worry eclipses other apprehensions such as the risk of AI leaking personal data online and the threat of jobs being replaced by automation.
The data reveals that 37 percent of those surveyed identified AI-powered scams as one of their top three fears, a figure that far outstrips other concerns. Worries about AI showing political bias (18%), undermining educational integrity through chatbots (19%) and stifling human creativity (24%) all ranked lower.
As AI integrates into more facets of daily life, awareness and vigilance become ever more critical, and Americans are right to focus on the threats the technology poses not only to their financial security but also to their overall well-being.
Those threats are borne out by the FBI's latest report on internet crime. The bureau's Internet Crime Complaint Center (IC3) revealed that just under $900 million was lost to AI-related crimes last year, with more than two-thirds of that money tied to schemes involving phony investment opportunities.
The FBI warned: ‘Investment clubs employ AI-generated videos and voices of celebrities, CEOs, or trusted figures to create fraudulent, high-stakes opportunities.’
‘These scams often feature fake, professional-looking endorsements on social media or in video calls. This makes it harder for victims to detect they are in a scam.’

American voters said their biggest concern about AI is falling for an AI-generated scam that steals their money (Stock Image)

AI chatbots have become an everyday tool in the US, but voters told the Daily Mail they have many concerns about their safety and influence (Stock Image)
AI tools have helped scammers create more sophisticated fakes than ever before, using tactics such as voice cloning and deepfake videos to convince everyday people to hand over their money or access to their bank accounts.
Voice cloning involves scammers taking short public audio clips, often from social media, and using them to recreate the person’s voice through advanced AI programs.
According to the US Federal Trade Commission (FTC), this has been a common tactic in the ‘grandparent scam,’ where the AI fakes an urgent call, often to senior citizens, claiming a family member is in trouble and needs money wired immediately.
Meanwhile, AI has made deepfake videos so convincing that even major companies have fallen victim. In 2024, UK-based engineering firm Arup lost $25.6 million after a deepfake video call impersonating its chief financial officer was used to authorize a fraudulent transfer.
The new poll, conducted by JL Partners between December 2025 and February 2026, also found that AI’s impact on the safety and security of children was a major concern, especially among younger adults between the ages of 18 and 49.
Overall, 14 percent of respondents ranked their fear of AI endangering children’s safety as their number one concern.
According to the National Center for Missing and Exploited Children, a nonprofit group dedicated to protecting children, generative AI has become the new favorite weapon of child predators in recent years.
In 2025, the group received more than 1.5 million reports involving generative AI video, images and deepfakes being used for child sex exploitation.

A new poll found 14 percent of Americans say the danger of AI on children’s safety is their greatest concern (Stock Image)
Nearly half of all respondents (48%) believed AI was having a negative impact on children. Voters over the age of 65 were the most likely to believe this, with one in three saying AI was having a ‘very negative’ impact.
Interestingly, adults between 30 and 49 were the least likely to think AI was bad for kids, with only 14 percent calling its impact ‘very negative’ and another 14 percent actually saying AI’s influence was ‘very positive’ for children.
The Daily Mail poll also found that, because of these growing concerns, there was bipartisan support for increased regulation of AI.
Although the strongest support came from respondents identifying as Republicans, 58 percent of all voters said there needs to be ‘somewhat more’ or ‘much more’ government control over AI.
As AI becomes a bigger part of everyday life, more and more space has been taken up by data centers, the power-hungry backbone of artificial intelligence that packs thousands of computers, servers and GPUs into giant facilities.
Thousands of these facilities throughout the US provide the immense computing power, storage and cooling needed to train, run and store large AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude and xAI’s Grok.
However, these giant facilities have been accused of pumping out dangerous pollutants that can cause asthma, cancer and even death in the communities around them.
That may be why more than one-third of respondents (35%) said there are too many data centers in America.

Pictured: An Amazon Web Services data center known as US East 1 in Ashburn, Virginia
Americans were just as concerned about the information coming out of those powerful AI chatbots.
Thirty-two percent of voters ranked the inaccuracy of the information coming from chatbots among their top concerns.
Recently, a pair of studies from the Massachusetts Institute of Technology and Stanford found that AI assistants such as ChatGPT, Claude and Google's Gemini regularly give overly agreeable answers that can send users into a 'delusion spiral.'
Specifically, when people asked questions or described situations in which their beliefs or actions were incorrect, harmful, deceptive or unethical, the chatbots' replies were 49 percent more likely than responses from real people to agree with the user and validate those views as the correct viewpoint.
Other topics Americans rated as being a top concern included surveillance and monitoring using AI (28%) and a lack of transparency from AI companies (19%).
With few Americans ranking fears of AI influencing their political beliefs or affecting education among their top concerns, it came as little surprise that only four percent of respondents said they get their news from AI summaries on the internet.
More than one in three people (35%) still said they turn to local TV news programs for information on current events. Another 20 percent said they get their news from social media, while 13 percent rely on news websites.
Despite those findings, 31 percent of voters told the Daily Mail that AI has weakened their trust in what they see on the news each day.




