- Haylee Friel
Finding Reality in a Deepfaked World: The AI Threat to Personal and Digital Safety
While Microsoft and Google keep up an ongoing competition over how quickly they can add AI enhancements to their digital services, the latest advancements in AI are also causing ripples of concern in the cybersecurity field. Big tech companies have promised, and studied, ways to use AI to strengthen protection against a rapidly growing market of cybercrime, but there is a hole in this picture as well: AI is making it easier for criminals to impersonate their victims, at terrifying speed and scale.
What can be faked?
A better question might be what cannot be faked by AI and other technology right now. Social media has become inundated with AI-generated images; AI policies are becoming prevalent in bargaining between companies and unions seeking to protect positions that may be affected in the future; and a wave of digital assistants promises to redefine and streamline our workflows forever. The latest advancements in what AI is capable of dominate technology news daily. AI tools readily available to the public have been shown to mimic voices from a small sample of uploaded data, create entire political ad commercials, and even produce a now-popular song presented as a collaboration between music artists that never actually happened. While much of the public-facing use of voice AI has been in entertainment, making ‘AI cover’ versions of popular songs by equally popular artists and, more controversially, replacing voice actors in commercial work, there is a notable rise in far more sinister uses of voice technology that many people aren’t aware of yet.
What are the dangers so far?
Scammers are beginning to scrape voice data from videos posted on social media to create fake voice profiles of individuals. They then use these profiles to imitate the person on voice calls to friends and family members. The voices are practically clones, designed to sound exactly like the individual in question, and scammers use them to beg a vulnerable relative for emergency money, or even to stage a fake kidnapping before demanding ransom.
As some people are discovering, these fake voices can also fool automated voice-biometric security systems at banks, putting account information at risk of exploitation. Another risk operates on a larger, social scale: deepfake videos are already being used to alter social perceptions. As AI becomes more capable of generating convincing video and audio together, experts are concerned that the already prevalent issue of faked videos on social media might spiral into an age of disinformation, where realistic videos created with malicious intent are both easily made and widely spread before they can be properly vetted and debunked.
Preventative Measures to Take Against AI Fraud
While there’s no way to fully prevent every type of cyberattack or fraud, it helps to be aware of the kinds of threats out there and to make sure your friends and family know about the latest risks. Scams often hit our least tech-savvy associates, with elderly individuals making up a high percentage of digital fraud victims.

Reasonable measures include limiting the number of videos you post on social media that contain samples of your voice, which could be used to copy it, and being mindful of how many details you share online that could make you a target of fraud. Be wary of mentioning who you bank with or what your personal details are: pet names, places of birth, family names, and birthdays are among the details most often used in security questions for private accounts. As always, be careful about what you share online, because you can never be sure just who’s watching.

The most important thing you can do is maintain a reasonable level of suspicion. If a supposed relative reaches out for help, try contacting them through another channel, such as a messenger on a website, or ask them for information that only your relative would know and that could not have been posted online. Given how easily some phone numbers can be imitated, or ‘spoofed,’ the scammer might even be calling from a familiar number. Having a family password that cannot be easily guessed can also help prevent falling for these scams.

Finally, it’s important to do careful research to avoid general misinformation in the modern age; videos uploaded by individuals online cannot always be trusted as reliable news sources. Before assuming that something drastic or shocking you see online is real, double-check whether it comes from a verifiable source.