Experts Warn of Hard-to-Detect Fraud Due to AI and Deepfakes
There’s no doubt that Artificial Intelligence (AI) has been driving efficiency and productivity, making life easier for everyone. Unfortunately, “everyone” includes those who perpetrate financial crimes, who now use AI and deepfake technologies to carry out fraudulent schemes that are hard to detect.
AI as Enabler of Scammers and Swindlers
The Federal Trade Commission (FTC) reported in February 2023 that financial scams have been on the rise. The data show a 19% increase in financial losses from various swindling methods compared with the previous year. In 2022, consumers in the US alone lost about $8.8 billion to scammers and fraudsters. The FTC report also indicated that, during the year, younger people were victimized more frequently than seniors.
Yet Kathy Stokes, director of the Fraud Watch Network at the American Association of Retired Persons (AARP), says the figures for seniors may be higher because many online scams go unreported. Ms. Stokes said that scammers have used AI for years, but their schemes have become more sophisticated and more believable in the way they target people. Older adults are the likeliest targets, especially those with assets such as real estate, savings, retirement funds, pensions, and insurance benefits.
Haywood Talcove, CEO of the government group of LexisNexis Risk Solutions, which performs data analytics to protect against identity theft, labels AI-supported fraud “Crime 3.0.” Experts warn that generative AI and deepfake technologies capable of altering facial features and voices could make scams and other forms of online illegal activity nearly undetectable and far more difficult to track down.