The Federal Bureau of Investigation (FBI) has just posted an alert about a different type of sextortion. It involves malicious actors using innocent photos and videos of a target, taken from social media, public forums, or by request, and manipulating them using AI to make the end result sexually explicit. Deepfakes, in other words.
Technological advancements mean that deepfakes can now appear more convincing than ever. In this case, the altered images are circulated on social media, public forums, or pornographic websites. The deepfakes are then sent to the victim for sextortion or harassment.
In traditional sextortion schemes, criminals often bluff, threatening to leak compromising material they do not actually possess in order to extract money from the target. AI-generated deepfakes change the equation: attackers can fabricate realistic explicit images or videos of a person and threaten to send them to the victim's family and friends unless their demands are met.
As the technology for realistically altering images and videos improves, deepfakes are becoming scarily convincing, which makes awareness and preventive measures all the more important.
Every year, minors fall victim to sextortion, often completely unaware of what has happened until someone else brings it to their attention.
Once images of a victim appear online, it is nearly impossible to contain them. The FBI has been observing an increasing number of sextortion reports involving fake images or videos of victims.
Criminals typically demand money or gift cards from victims, or request sexually suggestive images and videos of them.
For prevention, the FBI recommends that parents monitor their children's online activity and discuss the risks of posting personal information.
Parents should also ensure that their children's social media accounts are set to private rather than public, and that strong passwords and two-factor authentication are used. If a contact starts acting out of character, their account may have been hacked, and caution is warranted.
Deepfakes are increasingly realistic and easier than ever to produce; China's Tencent reportedly offers a deepfake-creation service for as little as $145. The danger is not new, either: in 2020, a bot on the messaging platform Telegram was used to generate and share more than 100,000 faked nude photos of women.