Clemson Connection: Safeguarding reputations with artificial intelligence

Maybe the photo shows you in a bar, and your friends from church just wouldn’t understand. Or maybe the picture of you suntanning at the beach reveals a little more skin than you would like your co-workers to see.

Billions of photos have been posted to social media, and some of them can be troublesome or downright embarrassing, especially when shared out of context. They strain relationships, threaten careers and needlessly tarnish reputations.

For Hongxin Hu of Clemson University, it’s a job for artificial intelligence.

Hu and his students are creating an artificial intelligence system, AutoPri, that would automatically detect which photos contain sensitive objects and assign each a sensitivity score. Depending on the score, parts of the photo would be blurred out, or a message could be sent to the social-media user who posted the photo.
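In rough terms, that decision flow might look something like the sketch below. The score scale, thresholds and helper names are illustrative assumptions, not AutoPri's actual implementation, which the article does not detail.

```python
# A minimal, hypothetical sketch of the decision flow described above.
# The score scale, thresholds and helper names are assumptions for
# illustration only; they are not AutoPri's actual implementation.
from dataclasses import dataclass
from PIL import Image, ImageFilter

@dataclass
class SensitiveObject:
    label: str          # e.g. "face", "license plate"
    box: tuple          # (left, upper, right, lower) in pixels
    sensitivity: float  # 0.0 = clearly public, 1.0 = clearly private

BLUR_AT = 0.8   # assumed cutoff for blurring the region outright
WARN_AT = 0.5   # assumed cutoff for messaging the person who posted the photo

def protect(photo: Image.Image, objects: list) -> Image.Image:
    """Blur high-sensitivity regions; warn the poster about borderline ones."""
    for obj in objects:
        if obj.sensitivity >= BLUR_AT:
            # Blur only the sensitive region, leaving the rest of the photo intact.
            region = photo.crop(obj.box).filter(ImageFilter.GaussianBlur(radius=12))
            photo.paste(region, obj.box)
        elif obj.sensitivity >= WARN_AT:
            print(f"Notify poster: '{obj.label}' may be private "
                  f"(score {obj.sensitivity:.2f})")
    return photo
```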

Hu and his students are feeding 30,000 photos into AutoPri to train it. Some of the photos were perceived as private by social media users, and some were perceived as public. AutoPri looks for commonalities in each group to learn which objects should be considered sensitive.
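The training step Hu describes amounts to a binary classification problem: photos labeled as perceived private versus perceived public. The sketch below shows one common way such a classifier could be trained with an off-the-shelf pretrained network; the model choice and hyperparameters are assumptions for illustration, since AutoPri's actual architecture is not described here.

```python
# A rough sketch of that training setup, assuming binary "perceived private"
# vs. "perceived public" labels and an off-the-shelf pretrained network.
# The model and hyperparameters are illustrative, not AutoPri's design.
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on everyday images and replace its last
# layer with a single logit that predicts "this photo is private".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: a batch shaped (N, 3, 224, 224); labels: (N,), 1 = perceived private."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```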

Hongxin Hu (right) and Ph.D. student Nishant Vishwamitra are using artificial intelligence to safeguard reputations on social media. Other faces are blurred to show how AutoPri would look to social media users.

When it’s ready to be unveiled, the system will be aimed at helping protect people who appear in photos but don’t own them.

“The problem with existing social networks is that only photo owners can decide who can see their photos,” Hu said. “But the owners’ friends may not want their images shared. This is what we call privacy conflicts. Different people may have different privacy concerns, even with the same photo.”

Hu and his team have used the same AI technology to create an app to counteract visual cyberbullying. The app, Bully Defender, could be available in app stores within a year, said Hu, an associate professor and Dean’s Faculty Fellow of Computer Science.

“When we detect visual cyberbullying, we give a warning to the users,” Hu said. “We also send a notification to parents and the schools.”

The team has also begun collecting data to apply the same AI techniques to detecting hate speech.

Through the research, students in Hu’s lab are learning cutting-edge techniques in artificial intelligence.

Nishant Vishwamitra, a Ph.D. student in computer science, has worked with Hu for four years and has had a hand in all three social-media projects. He is weighing whether to stay in academia or move into the private sector after receiving his doctorate.

Either way, he plans to stick with artificial intelligence research.

“What I like best is that we can use AI, a very powerful, emerging technology, to address problems like privacy, cyberbullying and hate,” Vishwamitra said. “The convergence of those is where I love to work.”

 
