Deepfake anyone? AI synthetic media technology enters a perilous phase



December 13 (Reuters) – “Do you want to see yourself acting in a movie or on television?” read the description of an app on online stores, offering users the ability to create synthetic, AI-generated media, also known as deepfakes.

“Do you want to see your best friend, colleague or boss dance?” it added. “Have you ever wondered how you would look if your face was swapped with that of your friend or a celebrity?”

The same app was advertised differently on dozens of adult sites: “Create deepfake porn in a second,” the ads read. “Deepfake anyone.”


Such pitches illustrate the increasingly sophisticated technology behind synthetic media software, which uses machine learning to digitally model faces from images and then swap them into videos as seamlessly as possible.

The technology, just four years old, may be at a crossroads, according to Reuters interviews with businesses, researchers, policymakers and activists.

It is now sufficiently advanced that general viewers have difficulty distinguishing many fake videos from reality, experts said, and it has proliferated to the extent that it is available to almost anyone with a smartphone, with no specialist knowledge required.

“Once the entry point is so low that it requires no effort at all, and an unsophisticated person can create a very sophisticated, non-consensual deepfake pornographic video – that’s the inflection point,” said Adam Dodge, a lawyer and the founder of online safety company EndTab.

“This is where we start to get into trouble.”

With the genie of the technology out of the bottle, many online safety campaigners, researchers and software developers say the key is securing the consent of those being faked, though that is easier said than done. Some advocate a tougher approach to synthetic pornography, given the risk of abuse.

Non-consensual deepfake pornography accounted for 96% of a study sample of more than 14,000 deepfake videos posted online, according to a 2019 report by Sensity, a company that detects and monitors synthetic media. The report added that the number of deepfake videos online was roughly doubling every six months.

“The vast, vast majority of harm caused by deepfakes right now is a form of gendered digital violence,” said Henry Ajder, one of the study’s authors and the head of policy and partnerships at AI company Metaphysic, adding that his research indicated millions of women have been targeted worldwide.

Consequently, there is a “big difference” between whether an app is explicitly marketed as a pornographic tool or not, he said.


ExoClick, the online advertising network used by the “Create deepfake porn in a second” app, told Reuters it was not familiar with this kind of AI face-swapping software. It said it had suspended the app from serving ads and would not promote face-swap technology irresponsibly.

“It’s a type of product that’s new to us,” said Bryan McDonald, ad compliance chief at ExoClick, which, like other large ad networks, offers customers a dashboard of sites they can customize themselves to decide where to place their ads.

“After a review of the marketing materials, we decided the wording used in them was not acceptable. We are sure the vast majority of users of these apps use them for entertainment with no bad intentions, but we also recognize they could be used for malicious purposes.”

Six other major online ad networks approached by Reuters did not respond to requests for comment as to whether they had encountered deepfake software or had a policy on it.

There is no mention of the app’s possible pornographic uses in its description on Apple’s App Store (AAPL.O) or the Google Play Store (GOOGL.O), where it is available to anyone over 12 years old.

Apple said there are no specific rules for deepfake apps, but its broader guidelines prohibit apps that include content that is defamatory, discriminatory, or likely to humiliate, intimidate, or harm anyone.

It added that developers are prohibited from marketing their products in a misleading way, inside or outside the App Store, and that it was working with the app’s developer to ensure compliance with its guidelines.

Google did not respond to requests for comment. After being contacted by Reuters about the “Deepfake porn” ads on adult sites, Google temporarily took down the app’s Play Store page, which had been rated “E for Everyone.” The page was restored after about two weeks, with the app now rated “T for Teen” due to “Sexual Content.”


While there are bad actors in the growing face-swap software industry, there is a wide variety of apps available to consumers, and many do take steps to try to prevent abuse, said Ajder, who promotes the ethical use of synthetic media through the industry group Synthetic Futures.

Some apps only allow users to swap images into pre-selected scenes, for example, or require identity verification of the person being swapped, or use AI to detect pornographic uploads, though these measures are not always effective, he added.

Reface is one of the world’s most popular face-swap apps, having attracted more than 100 million downloads globally since 2019, with users encouraged to swap their faces with celebrities, superheroes and meme characters to create funny video clips.

The US-based company told Reuters it uses automated and human moderation of content, including a pornography filter, and has other controls to prevent misuse, including labeling and visual watermarks to flag videos as synthetic.

“From the early days of the technology and the establishment of Reface as a company, there has been a recognition that synthetic media technology could be abused or misused,” it said.


Advances in deepfake technology and synthetic media quality have gone hand in hand with the expansion of consumers’ access to powerful computing through their smartphones.

For example, EndTab founder Dodge and other experts interviewed by Reuters said that when these tools first emerged in 2017, they required large amounts of data, often totaling thousands of images, to achieve the kind of quality that can be produced today from just a single image.

“With the quality of these images becoming so high, protests of ‘It’s not me!’ are not enough, and if it looks like you, then the impact is the same as if it is you,” said Sophie Mortimer, manager of the UK-based Revenge Porn Helpline.

Policymakers looking to regulate deepfake technology are making patchy progress, faced as they are with new technical and ethical snags.

Laws specifically targeting online abuse using deepfake technology have been passed in some jurisdictions, including China, South Korea and California, where maliciously depicting someone in pornography without their consent, or distributing such material, can carry statutory damages of $150,000.

“Specific legislative intervention or criminalization of deepfake pornography is still lacking,” researchers for the European Parliament said in a study presented to a panel of lawmakers in October, suggesting that legislation should cast a wider net of liability to include actors such as developers and distributors, as well as abusers.

“As it stands, only the perpetrator is liable. However, many perpetrators go to great lengths to initiate such attacks at such an anonymous level that neither law enforcement nor platforms can identify them.”

Marietje Schaake, international policy director at Stanford University’s Cyber Policy Center and a former member of the European Parliament, said broad new digital laws, including the EU’s proposed AI Act and the GDPR in Europe, could regulate elements of deepfake technology, but there were gaps.

“While it may sound like there are many legal options to pursue, in practice it is a challenge for a victim to be empowered to do so,” Schaake said.

“The draft AI Act under consideration foresees that manipulated content should be disclosed,” she added.

“But the question is whether being aware does enough to halt the harmful impact. If the virality of conspiracy theories is any indicator, information that is too absurd to be true can still have wide and harmful societal impact.”


Reporting by Shane Raymond; Editing by Hazel Baker and Pravin Char

Our Standards: Thomson Reuters Trust Principles.
