We recently published a post on wormGPT, a malicious AI tool, to show how attackers have started abusing AI technologies. In this article, we look at another example of AI abuse. AI and machine learning were created to help us work faster and better, but cybercriminals have figured out how to use them to impersonate real people and trick victims into handing over money or sensitive information.
We will examine what virtual kidnapping is, how AI tools are abused in these schemes, how ChatGPT helps scammers, and the broader impact of new technology on fraud.
What is Virtual Kidnapping and How are AI Tools Used for Abuse?
Virtual kidnapping is a type of scam where criminals deceive and manipulate victims into believing that a loved one has been kidnapped. Young individuals and public figures, who are quick to embrace new technology and popular social platforms, are at higher risk of having their biometric information collected for use in virtual kidnapping scams. Platforms like TikTok, Facebook, and Instagram provide convenient avenues for criminals to identify potential victims and gather specific details to make their scams more convincing.
Virtual kidnapping is a deceptive scheme that tricks victims into paying a ransom, causing emotional distress. The attackers can target numerous victims and only need a few successful attempts to generate significant profits.
The key steps involved in a virtual kidnapping attack are as follows:
- Identifying a potential victim who is capable of paying the ransom, often a relative of the targeted individual.
- Selecting the virtual kidnapping victim, usually a family member, and creating an emotionally manipulative story to impair the victim’s judgment.
- Gathering voice biometrics from the victim’s social media or utilizing deepfake technology to create audio that sounds like the victim has been kidnapped.
- Identifying the optimal time and logistics for the attack based on the victim’s social media updates to ensure a successful ransom payment.
- Making a phone call using voice modulation software to sound intimidating, while playing the deepfake audio to add credibility to the ransom demand.
- Engaging in post-call activities such as laundering the ransom payment, deleting relevant files, and disposing of the burner phone used in the scam.
The attackers study the target and time the call carefully, waiting until the supposed kidnapping victim is physically away from the ransom victim so the story is harder to disprove on the spot. During the call, voice modulation software creates an intimidating tone, and the deepfake audio of the alleged kidnappee adds credibility to the ransom demand. Afterwards, the attackers launder the payment, delete relevant files, and dispose of the burner phone used in the scam.
Case Study – A Real-World Incident of Virtual Kidnapping
Recently, CNN reported a case in which a mother received a call claiming that her 15-year-old daughter had been kidnapped and was being held hostage. The mother reported hearing her daughter’s voice yelling and crying for help in the background. The kidnappers initially demanded a ransom of $1 million but lowered it to $50,000 after negotiation. Fortunately, before any payment was made, the mother confirmed that her daughter was safe and had never been kidnapped. The incident was reported to the police, who noted that this is a common scam method.
Virtual kidnapping is a growing cybercrime that misuses AI technology to manipulate people’s decision-making. Malicious actors exploit AI to evoke fear and panic and control victims for their own gain. In the real-life case above, the criminals exploited the intense distress a parent feels when a child appears to be in danger in order to coerce a ransom payment.
To further amplify the victims’ fear and hopelessness, attackers may employ a technique called SIM jacking. This involves gaining unauthorized access to a victim’s mobile number and redirecting calls and data to a device under the attacker’s control. As a result, the victim’s phone becomes unreachable, increasing the likelihood of a successful ransom payment.
How Does ChatGPT Help the Scammers?
In addition to AI voice cloning tools, malicious actors can misuse an AI chatbot like wormGPT or ChatGPT to streamline their attack processes. This includes using the chatbot to automate the filtering of extensive victim data, which would otherwise be a manual and time-consuming task.
Using ChatGPT, an attacker can merge large datasets on potential victims, including voice, video, and geolocation information, through API connectivity. The collected data is processed by ChatGPT and a message is sent to the target; when the target responds, the attacker can use ChatGPT to generate an improved follow-up. By combining public sources with advertising analytics, the attacker can filter and prioritize victims by potential revenue and the probability of a successful ransom payout. This risk-based scoring system makes such attacks more profitable and scalable.
In the future, or with substantial research investment, attackers could convert ChatGPT-generated text into audio using text-to-speech software, making both the attacker’s voice and the supposed kidnapping victim fully synthetic. Distributed through mass calling services, such fully virtual attacks could make virtual kidnapping far more widespread and impactful.
How are Attackers Abusing AI Technologies?
Scammers are using the latest technologies to make their scams more convincing and realistic. One approach is propensity modeling, a statistical technique that predicts the likelihood of a user performing a certain action.
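In its legitimate use, such as marketing analytics, a propensity model can be as simple as a logistic function over a few behavioral features. The sketch below illustrates the basic idea; the feature names and coefficients are purely illustrative (hand-set, not fitted to real data):

```python
import math

def propensity_score(features, weights, bias):
    """Logistic model: estimated probability that a user performs the action."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

# Hypothetical behavioral features per user: [recent visits, past purchases, email opens]
weights = [0.8, 1.5, 0.4]   # illustrative coefficients, not trained
bias = -2.0

users = {
    "user_a": [3, 1, 5],    # active user
    "user_b": [0, 0, 1],    # mostly inactive user
}

# Score each user, then rank by propensity to prioritize outreach
scores = {name: propensity_score(f, weights, bias) for name, f in users.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
```

In practice the coefficients would be fitted with logistic regression on historical data; the ranking step is what makes the technique attractive to anyone, legitimate or not, who wants to focus effort on the highest-probability targets.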
Victim selection for virtual kidnapping draws on similar techniques, including social network analysis, which studies people’s connections and activity on social media and is normally used to target advertisements or services.
Propensity modeling can likewise be misused for criminal purposes to extract a list of potential victims. Scammers may exploit social network analysis and propensity modeling to increase the efficiency and reach of their fraud, using the same techniques businesses use to find potential customers. They can also streamline their operations by outsourcing specific tasks, such as acquiring ready-made SIM jacking exploits, procuring compromised credentials from data breaches, and accessing money mule services, all of which are available on the diverse and commercialized dark web.
Below are a few recommendations provided by the FBI based on previous scam cases:
- Avoid posting information about upcoming trips on social media.
- Create a family password that can be used to verify the identity of callers.
- If you receive a call claiming a kidnapping, buy yourself extra time to make a plan and inform law enforcement.
- Write a note to someone else in the house to let them know what’s happening, and call someone for help.
- Be cautious about sharing financial information with strangers over the phone, as scammers often demand ransom through wire transfers, cryptocurrency, or gift cards.
- Don’t rely solely on the voice you hear on the call; have someone else in the room try to reach your loved one directly.
Virtual kidnapping scams are becoming more common, and criminals are borrowing techniques from established cybercrime operations such as ransomware. They are now targeting communication channels like voice and video, as well as new environments like the metaverse. Combating this will require advanced antifraud techniques that focus on identity. As virtual kidnapping attacks increase, more data will become available for security analytics, which can then be used to strengthen identity-aware security systems and better protect against these scams.