How AI-generated faces are being weaponized online

Originally Posted: FEB 20, 2020, 11:58 ET

Updated: FEB 20, 2020, 12:14 ET

(CNN) – As an activist, Nandini Jammi has grown used to being harassed online, often by anonymous social media accounts. But this time it was different: A threatening tweet was sent to her from an account with a profile photo of a woman with blond hair and a beaming smile.

The woman went by a first name only, “Jessica,” and her short Twitter biography read, “If you’re a bully, I’ll fight you.” In her tweet to Jammi last July, she wrote, “Why haven’t you cleaned up your Adult Friend Finder information? It’s only been three years.”

An AI-generated face creates a new threat with old information

The implication seemed clear. “Jessica” claimed to have potentially embarrassing information about Jammi from a former online dating profile. Jammi is a social media activist and co-founder of Sleeping Giants, a group that campaigns for businesses to pull their ads from websites that allow discrimination and hate to spread. She told CNN Business that she has never actively used the online dating account. “Jessica” also tweeted a reference to a former dating profile at EJ Gibney, an independent researcher who has worked on Sleeping Giants campaigns.

What sets “Jessica” apart from other Twitter users, however, is that the smiling woman in the account’s profile picture apparently never existed. The image was created using sophisticated new artificial intelligence technology, several experts who examined it told CNN Business.

Online trolls sometimes manage dozens or hundreds of accounts at once and use them to flood their targets’ social media feeds with hate and harassment. They usually do so anonymously.

AI creates photos of new faces

To look like real accounts, anonymous online trolls often use pictures stolen from other users as profile photos. “Jenna Abrams,” an account that posed as a conservative American woman, amassed more than 70,000 followers before Twitter finally deleted it in 2017. The account was run by a group of trolls linked to the Russian government, and the photo it used actually belonged to a 26-year-old Russian woman, who said she didn’t know her image was being used that way until CNN contacted her in 2017.

Most major social media platforms have rules prohibiting the use of other people’s photos this way and let people file impersonation complaints when their likeness is misused. But by using AI-generated faces of people who don’t exist, trolls can potentially avoid being reported for impersonation.

“Jessica” was part of a coordinated network of about 50 accounts managed by the same person or people, Twitter confirmed to CNN Business. The accounts were used to harass activists, according to details gathered by Gibney and shared with CNN Business. Images on other accounts in the campaign, appearing to show different people, were also created using AI, experts told CNN Business.

Rapidly evolving technology

The underlying technology has developed rapidly in recent years and allows people to create realistic fake videos and images, often referred to as deepfakes. While deepfake videos have arguably drawn more attention in recent months, the use of fake faces like “Jessica” shows how AI-generated images can help lend credibility to online harassment campaigns as well as coordinated disinformation campaigns.

In December, Facebook announced that it had removed accounts that used AI-generated faces in an attempt to game its systems. The accounts were part of a network that posted largely in favor of President Donald Trump and against the Chinese government, experts who reviewed the accounts said.

Artificially generated media such as deepfakes are already on the US government’s radar. The Pentagon has invested in research to detect deepfakes. Last year, the US intelligence community warned in its Worldwide Threat Assessment, “Adversaries and strategic competitors probably will attempt to use deepfakes or similar machine-learning technologies to create convincing, but false, image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners.”

Coordinated harassment

Last summer, Gibney began diligently documenting a network of accounts, including “Jessica,” that harassed him and his fellow activists.

CNN Business asked two of the country’s top visual forensics experts to review the images used by a dozen accounts Gibney believes are part of the same campaign. Both experts agreed that most of the dozen images they examined, including the one on the “Jessica” account, showed evidence of having been generated using AI, in particular via a method known as generative adversarial networks, or GANs.
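A GAN, in brief, pits two neural networks against each other: a generator that fabricates samples and a discriminator that tries to tell the fakes from real data, with each improving against the other until the fakes become convincing. As a minimal illustrative sketch (our toy example, not the face-generation models themselves), here is that adversarial loop in PyTorch, learning a simple one-dimensional distribution rather than faces:

```python
# Toy GAN: a generator learns to mimic samples from N(3, 0.5) while a
# discriminator learns to tell its output apart from the real thing.
# Real face generators use the same adversarial idea at vastly larger scale.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into fake "samples".
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))

    # Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# Mean of generated samples; it should drift toward 3.0 as training proceeds.
print(G(torch.randn(1000, 8)).mean().item())
```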

Hany Farid, a professor at the University of California, Berkeley, pointed to a “very distorted” earring on Jessica’s left ear and said the reflections in her left and right eyes were inconsistent.

Jeff Smith, associate director of the National Center for Media Forensics at the University of Colorado at Denver, made similar observations and also pointed out how the wall in the background of the image appeared to be distorted.
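Cues like the inconsistent eye reflections Farid describes can, in principle, be checked programmatically: in a genuine photo, both corneas reflect the same light sources, so the brightest highlight should sit at roughly the same relative position in each eye. A hypothetical sketch of that idea, assuming the two eye regions have already been cropped out as grayscale arrays (the helper functions here are illustrative, not the experts’ actual tooling):

```python
# Hypothetical sketch (not Farid's or Smith's tooling): compare where the
# brightest specular highlight falls in each eye. In a real photo the two
# positions should roughly agree; GAN faces often violate this.
import numpy as np

def highlight_position(eye_gray):
    """Return the brightest pixel's position, normalized to [0, 1] x [0, 1]."""
    y, x = np.unravel_index(np.argmax(eye_gray), eye_gray.shape)
    h, w = eye_gray.shape
    return np.array([y / h, x / w])

def reflection_inconsistency(left_eye_gray, right_eye_gray):
    """Distance between the two highlights; large values are suspicious."""
    return float(np.linalg.norm(
        highlight_position(left_eye_gray) - highlight_position(right_eye_gray)
    ))
```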

In addition, Siwei Lyu, a professor of computer science at the State University of New York at Albany, examined the photo of “Jessica.” Lyu has built a system to detect manipulated and synthetic images. He determined with “high confidence” that the image of “Jessica” was created using AI. (No system yet detects manipulated images like these with 100% accuracy.)
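CNN Business’s reporting does not describe how Lyu’s system works. One family of published detection approaches, offered here purely as illustration, looks for the periodic artifacts that GAN upsampling leaves behind, which show up as unusual energy in the high frequencies of an image’s Fourier spectrum:

```python
# Illustrative sketch (not Lyu's actual system): GAN upsampling tends to leave
# periodic "checkerboard" artifacts that appear as excess energy in the high
# frequencies of an image's 2-D Fourier spectrum.
import numpy as np
from PIL import Image

def high_freq_energy(path, cutoff=0.75):
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    # Normalized distance of each frequency bin from the spectrum's center.
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    # Fraction of spectral energy beyond the cutoff radius. Synthetic images
    # often score anomalously here, though a real detector would train a
    # classifier on such features rather than threshold a single number.
    return spectrum[r > cutoff].sum() / spectrum.sum()
```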

Gibney flagged the accounts to Twitter as soon as they became active and targeted him and his colleagues last July. (One of the accounts used the address of the building next to Gibney’s house as its username.) Twitter says it removed dozens of the accounts at the time, but removed others, including the account with the address, only after being contacted by CNN Business. The company confirmed to CNN that it removed around 50 accounts that appeared to be operated by the same person or people.

The fakes

Although they may seem sophisticated, fake images generated by AI are easily accessible online.

Last year, Phil Wang, a former software engineer at Uber, created a website called “This Person Does Not Exist.” Every time you visit the site, you see a new face that in most cases looks like a real person. But the faces are created using AI.

The people, as the site’s name suggests, literally don’t exist. Wang’s goal, he told CNN Business, is to show people what the technology can do. By exposing people to these fake faces, he hopes the site will “vaccinate them against this future attack.”
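Retrieving one of these faces takes only a few lines of code. A minimal sketch, assuming the site serves a freshly generated JPEG at its root URL, as it did at the time of writing:

```python
# Minimal sketch: fetch a freshly generated face from Wang's site. Assumes
# the site returns a new JPEG at its root URL on every request, which is how
# it behaved when this article was written.
import requests

resp = requests.get(
    "https://thispersondoesnotexist.com",
    headers={"User-Agent": "example-script"},  # the site may reject bare clients
    timeout=10,
)
resp.raise_for_status()
with open("fake_face.jpg", "wb") as f:
    f.write(resp.content)
print(f"Saved {len(resp.content)} bytes to fake_face.jpg")
```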

There are other sites similar to Wang’s from which people can download fake images. Wang’s site, fun and enlightening, lets people see this new technology in an accessible way. But it also reflects a broad ethical dilemma for Silicon Valley: Just because the technology exists and can do something, should technologists make it available to everyone?

Nathaniel Gleicher, who leads the Facebook team that tackles coordinated disinformation campaigns, including those linked to the Russian and Iranian governments, said developers need to think about how tools like these could be used by bad actors once made accessible.

“Building these datasets is essential for research, but it is just as important that we think about the consequences as we build,” Gleicher tweeted in reaction to the release of a fake-faces dataset earlier this year.

After looking at the photo of “Jessica,” Wang couldn’t tell whether it was created through his site; he does not save the images as they are generated. But he was certain that “Jessica” was not real, pointing, like the others, to the earring. The AI system, he said, “hasn’t seen enough jewelry to learn it properly.”

But he also warned that fake faces like “Jessica” might just be a small sign of what’s to come.

“Faces are just the tip of the iceberg,” he said. “Ultimately, the underlying technology can synthesize coherent text, the voices of loved ones and even video. Social media companies that have AI researchers should devote time and research funds to this ever-growing problem.”

The-CNN-Wire™ & © 2020 Cable News Network, Inc., a WarnerMedia Company. All rights reserved.



