Perspective | Anyone with an iPhone can now make deepfakes. We aren't ready for what happens next.

The past few months have brought advances in this controversial technology that I knew were coming, but am still shocked to see. A few years ago, deepfake videos (named after the deep-learning artificial intelligence used to generate faces) required a Hollywood studio or at least a crazy-powerful computer. Then around 2020 came apps, like one called Reface, that let you map your own face onto a clip of a celebrity.

Now with a single source photo and zero technical expertise, an iPhone app called Avatarify lets you actually control the face of another person like a puppet. Using your phone's selfie camera, whatever you do with your own face happens on theirs. Avatarify doesn't make videos as sophisticated as the pro fakes of Tom Cruise that have been flying around the social network TikTok, but it has been downloaded more than 6 million times since February alone. (See for yourself in the video I made on my phone to accompany this column.)

Another app for iPhone and Android devices called Wombo turns a straight-on photo into a funny lip-sync music video. It generated 100 million clips in just its first two weeks.

And MyHeritage, a genealogy website, lets anyone use deepfake tech to bring old still photos to life. Upload a shot of a long-lost relative or friend, and it produces a remarkably convincing short video of them looking around and smiling. Even the little wrinkles around the eyes look real. The company calls the feature "Deep Nostalgia" and has reanimated more than 65 million photos of people in the past four weeks.

These deepfakes may not fool everyone, but it's still a cultural tipping point we aren't ready for. Forget laws to keep fakes from running amok; we hardly even have social norms for this stuff.

All three of the latest free services say they're mostly being used for positive purposes: satire, entertainment and historical re-creations. The problem is, we already know there are plenty of bad uses for deepfakes, too.

"It's all very cute when we do this with grandpa's pictures," says Michigan State University responsible-AI professor Anjana Susarla. "But you can take anyone's picture from social media and make manipulated images of them. That's what's concerning."

So I spoke to the people making deepfake apps and the ethics experts tracking their rise to see if we can figure out some rules of the road.

"You must make sure that the audience is aware this is synthetic media," says Gil Perry, the CEO of D-ID, the tech company that powers MyHeritage's deepfakes. "We have to set the guidelines, the frameworks and the policies for the world to know what is good and what is bad."

The technology to digitally alter still images (think Adobe's Photoshop editing software) has been around for decades. But deepfake videos pose new problems, like being weaponized, particularly against women, to create humiliating, nonconsensual fake pornography.

In early March, a woman in Bucks County, Pa., was arrested on allegations she sent her daughter's cheerleading coaches fake photos and video of her daughter's rivals to try to get them kicked off the squad. Police say she used deepfake tech to manipulate photos of three girls on the Victory Vipers squad to make them look like they were drinking, smoking and even nude.

"There's potential harm to the viewer. There's harm to the subject of the thing. And then there's a broader harm to society in undermining trust," says Deborah Johnson, emeritus professor of applied ethics at the University of Virginia.

Social networks say deepfakes haven't been a major source of problematic content. We shouldn't wait for them to become one.

It's probably not realistic to think that deepfake tech could be successfully banned. One 2019 effort in Congress to forbid some uses of the technology faltered.

But we can insist on some guardrails from these consumer apps and services, the app stores promoting them and the social networks making the videos popular. And we can start talking about when it is and isn't okay to make deepfakes, including when that involves reanimating grandpa.

Installing guardrails

Avatarify's creator, Ali Aliev, a former Samsung engineer in Moscow, told me he's also concerned that deepfakes could be misused. But he doesn't believe his current app will cause problems. "I think the technology is not that good at this point," he said.

That doesn't put me at ease. "They will become that good," says Mutale Nkonde, CEO of the nonprofit AI For the People and a fellow at Stanford University. Given the way AI systems learn from being trained on new images, she says, "it's not going to take very long for those deepfakes to be really, really convincing."

Avatarify's terms of service say it can't be used in hateful or obscene ways, but it doesn't have any systems to check. Moreover, the app itself doesn't limit what you can make people say or do. "We didn't limit it because we are looking for use cases and they are mainly for fun," Aliev says. "If we are too preventive, then we could miss something."

Hany Farid, a computer science professor at the University of California at Berkeley, says he's heard that move-fast-and-break-things ethos before from companies like Facebook. "If your technology is going to lead to harm and it's reasonable to foresee that harm, I think you have to be held liable," he says.

What guardrails might mitigate harm? Wombo's CEO, Ben-Zion Benkhin, says deepfake app makers should be very careful about giving people the power to control what comes out of other people's mouths. His app is limited to deepfake animations from a curated collection of music videos, with head and lip movements recorded by actors. "You're not able to pick something that's super offensive or that could be misconstrued," Benkhin says.

MyHeritage won't let you add lip motion or voices to its videos at all, though it broke its own rule by using its tech to produce an advertisement featuring a fake Abraham Lincoln.

There are also privacy concerns about sharing faces with an app, a lesson we learned from 2019's controversial FaceApp, a Russian service that needed access to your photos to use AI to make faces look old. Avatarify (also Russian) says it doesn't ever receive your photos because it works entirely on the phone, but Wombo and MyHeritage do take your photos to process them in the cloud.

App stores that distribute this technology could be doing a lot more to set standards. Apple removed Avatarify from its China App Store, saying it violated unspecified Chinese law. But the app is available in the United States and elsewhere, and Apple says it doesn't have specific rules for deepfake apps, aside from general prohibitions on defamatory, discriminatory or mean-spirited content.

Labels or watermarks that make it clear when you're looking at a deepfake could help, too. All three of these services include visible watermarks, though Avatarify removes its watermark for users who pay for a $2.50-per-week premium subscription.

Even better would be hidden watermarks in video files, which might be harder to remove and could help identify fakes. All three creators say they think that's a good idea but need somebody to develop the standards.
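To make the hidden-watermark idea concrete, here is a minimal sketch in Python of the classic least-significant-bit approach: hide a short tag in image bits no viewer would notice but software can read back. This is an illustration only; the tag text and function names are invented for the example, none of these companies has published how (or whether) it marks its videos this way, and a real standard would also need to survive video compression, which this toy version does not.

```python
# Toy invisible "watermark": hide a short tag in the least significant
# bits of a frame's pixels. Illustrative only; not any app's real scheme.
import numpy as np

TAG = b"SYNTHETIC"  # hypothetical label marking the frame as generated

def embed_tag(frame: np.ndarray, tag: bytes = TAG) -> np.ndarray:
    """Write each bit of `tag` into the lowest bit of successive pixels."""
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    flat = frame.reshape(-1).copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # clear bit 0, then set it
    return flat.reshape(frame.shape)

def read_tag(frame: np.ndarray, length: int = len(TAG)) -> bytes:
    """Recover `length` bytes from the lowest bits of successive pixels."""
    bits = frame.reshape(-1)[: length * 8] & 1
    return np.packbits(bits).tobytes()

# Quick check on a fake 8-bit video frame: the mark is invisible to the
# eye (each pixel changes by at most 1) but trivially detectable in code.
frame = np.random.randint(0, 256, size=(720, 1280, 3), dtype=np.uint8)
marked = embed_tag(frame)
assert read_tag(marked) == TAG
```

The catch, and the reason the creators say they need shared standards, is that a mark this simple is also simple to strip or corrupt; robust schemes spread the signal across the whole image so it survives cropping and re-encoding.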

Social networks, too, will play a key role in making sure deepfakes aren't used for ill. Their policies generally treat deepfakes like other content that misinforms or could lead to people getting hurt: Facebook and Instagram's policy is to remove "manipulated media," though it has an exception for parodies. TikTok's policy is to remove "digital forgeries" that mislead and cause harm to the subject of the video or society, such as inaccurate health information. YouTube's "deceptive practices" policy prohibits technically manipulated content that misleads and may pose a serious risk.

But it's not clear how good a job the social networks can do of enforcing their policies when the volume of deepfakes skyrockets. What if, say, a student makes a mean joke deepfake of his math teacher, and the principal doesn't immediately understand it's a fake? All the companies say they'll continue to evaluate their approaches.

One idea: Social networks could bolster guardrails by making a practice of automatically labeling deepfakes (a use for those hidden watermarks), even if it's not immediately obvious they're causing harm. Facebook and Google have been investing in technology to identify them.

"The burden here has to be on the companies and our government and our regulators," Farid says.

AI-generated videos that show a person's face on another's body are called deepfakes. They're becoming easier to make and weaponize against women. (Drew Harwell, Jhaan Elker/The Washington Post)

New norms

Whatever steps the industry and government take, deepfakes are also where personal tech meets personal ethics.

You might not think twice about taking or posting a photo of someone else. But making a deepfake of them is different. You're turning them into a puppet.

"Deepfakes play with identity and agency, because you can take over someone else; you can make them do something that they've never done before," says Wombo's Benkhin.

Nkonde, who has two teenagers, says families need to talk about norms around this sort of media. "I think our norm should be: Ask people if you have their permission," she says.

But that might be easier said than done. Creating a video is a free-speech right. And getting permission isn't even always practical: One major use of the latest apps is to surprise a friend.

Permission to create a deepfake is also not entirely the point. What matters most is how it's shared.

If someone in my family wants to take my childhood picture and make this video, then I would be comfortable with it in the context of a family event, Susarla says. But if that person is showing it outside an immediate family circle, that would make it a very uncomfortable proposition.

The Internet is great at taking things out of context. Once a video is online, you can quickly lose control over how it might get interpreted or misused.

Then there's a more existential question: How will deepfakes change us?

I discovered deepfake apps as a way to play with my nephew, livening up our Zoom chats by making him look like he's doing goofy things.

But then I started to wonder: What am I teaching him? Perhaps it's a useful life lesson to know that even videos can be manipulated, but he's also going to need to learn how to figure out what he should trust.

Aliev, from Avatarify, says the sooner everyone learns videos can be faked, the better off we'll all be. "I think that the right approach is to make this technology a commodity, like Photoshop," he said.

"What really worries me is what you saw happen over the last few years, where any fact that is inconvenient to an individual, a CEO, a politician, they just have to say it's fake," Farid says.

And at the risk of sounding obvious: We don't want to lose sight of what's real.

Some people have shared on social media that reanimating the dead with MyHeritage's videos made them weep with joy. I am sympathetic to that. D-ID says that in its own analysis, only 5 percent of tweets about the service were negative.

But when I tried it with the photo of a friend who died a few years ago, I didn't feel good at all. I knew my friend didn't move like that, with the limited range of these computer-generated mannerisms.

"Do I really want these people and this technology messing with my memories?" says Johnson, the U-Va. ethicist. "If I want ghosts in my life, I want real ones."

Deepfakes are also a form of deception we're using on ourselves.




