
This MIT team is fighting malicious AI image manipulation a few pixels at a time

July 26, 2023 Hi-network.com
Screenshot by Artie Beaty/

As AI image creation and editing become more prevalent, a new digital privacy concern has arisen -- the unauthorized AI editing of someone's artwork or photo. To date, there's nothing to stop someone from taking a picture they find online, uploading it to an AI program, and manipulating it for all sorts of purposes.

But a new technique from a team at MIT could change that. 

Also: The best AI image generators to try

Simply called "PhotoGuard," the method builds on a deep understanding of the algorithms that image-editing AI runs on. With that understanding, the team developed ways to very subtly change a picture, disrupting how AI interprets it. And if AI can't understand an image, it can't edit it.

"At the core of our approach," the MIT team explained in a paper on their project, "is the idea of image immunization -- that is, making a specific image resistant to AI-powered manipulation by adding a carefully crafted (imperceptible) perturbation to it."

PhotoGuard works by altering a few select pixels in each image in such a way that AI sees things that aren't there. These changes aren't visible to the human eye, but they're blindingly bright to AI. When the AI sees the edited pixels, it overestimates their importance and directs its edits at those pixels instead of the rest of the image.
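
To make that concrete, here is a minimal, hypothetical sketch of the general idea -- not the team's actual code: a small, budgeted perturbation is optimized so that an image encoder's output shifts, while each pixel change stays below a visibility threshold. The StandInEncoder, the epsilon budget, the step size, and the iteration count are all illustrative assumptions rather than PhotoGuard's real components or settings.

import torch
import torch.nn as nn

class StandInEncoder(nn.Module):
    # Placeholder for the image encoder of a generative editing model
    # (an assumption, not the encoder PhotoGuard actually targets).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

def immunize(image, encoder, epsilon=4 / 255, step=1 / 255, iters=50):
    # Optimize a perturbation, kept within +/- epsilon per pixel, that pushes the
    # image's embedding away from its original value -- so the model "sees"
    # something different from what a human sees.
    delta = torch.zeros_like(image, requires_grad=True)
    original = encoder(image).detach()
    for _ in range(iters):
        loss = -nn.functional.mse_loss(encoder(image + delta), original)
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()                 # step that widens the embedding gap
            delta.clamp_(-epsilon, epsilon)                   # stay inside the imperceptibility budget
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep pixel values valid
        delta.grad.zero_()
    return (image + delta).detach()

encoder = StandInEncoder()
photo = torch.rand(1, 3, 256, 256)              # stand-in for a real photo, values in [0, 1]
protected = immunize(photo, encoder)
print(float((protected - photo).abs().max()))   # maximum per-pixel change stays within epsilon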

Also: How to use ChatGPT: Everything you need to know

To test their results, the MIT team took 60 images and generated AI edits using various prompts -- both on immunized and non-immunized versions of the same image. Once the new images were created, they used several metrics to determine how similar the edits were. The end result? Edits of immunized images were "noticeably different from those of non-immunized images."
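
The article doesn't spell out which metrics were used, so the snippet below is only a rough, hypothetical illustration of that kind of comparison: it scores how far the edit of an original image diverges from the edit of its immunized copy, using PSNR as a stand-in similarity measure (the editing model itself is not reproduced here).

import torch

def psnr(a, b):
    # Peak signal-to-noise ratio between two images scaled to [0, 1];
    # a lower value means the two results diverge more.
    mse = torch.mean((a - b) ** 2)
    return float("inf") if mse == 0 else float(10 * torch.log10(1.0 / mse))

# Stand-ins for "the AI's edit of the original" and "the AI's edit of the immunized
# copy" under the same prompt; a real test would generate both with an editing model.
edit_of_original = torch.rand(3, 256, 256)
edit_of_immunized = torch.rand(3, 256, 256)

print(f"PSNR between the two edits: {psnr(edit_of_original, edit_of_immunized):.1f} dB")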

Of course, the method isn't foolproof. If someone wanted to badly enough, they could still maliciously edit an image -- perhaps by cropping a photo until they cut out the pixels causing trouble, or by simply applying a filter to the image. Still, this presents a significant hurdle that could deter a lot of people.

And while this method is effective against the current generation of AI, that doesn't necessarily mean it will be against future ones. That's why PhotoGuard's creators encourage growth in this area not just through technical methods, but through "collaboration between organizations that develop large diffusion models, end-users, as well as data hosting and dissemination platforms."

Also: Generative AI is coming for your job. Here are 4 reasons to get excited

Right now, PhotoGuard is simply a technique. There's no software available to the public, and its creators admit there's a lot of work to do before it's practical for everyday use. Still, this is a step forward in guarding against new threats from AI, the MIT team says, and a sign that companies need to invest in the fight.


