
'Data poisoning' anti-AI theft tools emerge - but are they ethical?

November 13, 2023 Hi-network.com

Technologists are helping artists fight back against what they see as intellectual property (IP) theft by generative artificial intelligence (genAI) tools whose training algorithms automatically scrape the internet and other places for content.

The fight over what constitutes fair use of content found online is at the heart of ongoing court battles. The fight goes beyond artwork to whether genAI companies like Microsoft and its partner, OpenAI, can incorporate software code and other published content into their models.

Software engineers, many from university computer science departments, have taken the fight into their own hands. Digital "watermarks" are one option for claiming authorship over unique art or other content.

Digital watermarking methods, however, have been thwarted in the past by developers who changed network parameters, allowing intruders to claim the content as their own. New techniques have surfaced to prevent those kinds of workarounds, but it's an ever-evolving battle.
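Watermarking schemes vary widely, but the basic idea shows up even in the simplest classic form: hiding a bit string in the least significant bits of an image's pixels. The Python sketch below is a minimal illustration of that general technique - not any of the specific tools discussed here - and the function names are invented for the example.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide watermark bits in the least significant bit of each pixel."""
    flat = pixels.flatten()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the watermark back out of the low-order bits."""
    return pixels.flatten()[:n_bits] & 1

# Toy 8-bit grayscale "image" and a 16-bit watermark.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
mark = rng.integers(0, 2, size=16, dtype=np.uint8)

stamped = embed_watermark(image, mark)
assert np.array_equal(extract_watermark(stamped, mark.size), mark)
```

The sketch also makes the weakness plain: anything that rewrites pixel values - recompression, resizing, re-rendering - scrambles the low-order bits and destroys the mark, which is why such schemes keep having to evolve.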

One new method uses "data poisoning attacks" to manipulate genAI training data and introduce unexpected behaviors into machine learning models. Called Nightshade, the technology uses "cloaking" to trick a genAI training algorithm into believing it's getting one thing when in reality it's ingesting something completely different.

First reported in MIT's Technology Review, Nightshade essentially gets AI models to interpret an image as something other than what it actually shows.

Nightshade - a genAI nightmare?

The technology can cause damage to image-generating genAI tools by corrupting AI large language model (LLM) training data, which leads platforms like DALL-E, Midjourney, and Stable Diffusion to spew out erroneous pictures or videos. For example, a photo interpreted by AI as a car could actually be a boat; a house becomes a banana; a person becomes a whale; and so on.
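Nightshade's exact optimization isn't detailed here, but cloaking-style poisoning is generally described as an adversarial perturbation: alter an image so little that a human sees no change, while a feature extractor embeds it near a different concept. The sketch below illustrates that general idea only; the toy encoder, the pixel budget eps, and the random "car" and "boat" images are all stand-in assumptions, not Nightshade's actual method.

```python
import torch
import torch.nn as nn

# Stand-in feature extractor; a real attack would target the encoder
# a genAI pipeline actually uses (this toy CNN is purely illustrative).
encoder = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
encoder.eval()

def cloak(image: torch.Tensor, target: torch.Tensor,
          eps: float = 0.03, steps: int = 50, lr: float = 0.01) -> torch.Tensor:
    """Perturb `image` (within an L-infinity budget of `eps`) so the
    encoder's embedding drifts toward the embedding of `target`."""
    with torch.no_grad():
        target_emb = encoder(target)
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = nn.functional.mse_loss(encoder(image + delta), target_emb)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the change visually small
    return (image + delta).clamp(0, 1).detach()

car = torch.rand(1, 3, 64, 64)   # image a scraper would label "car"
boat = torch.rand(1, 3, 64, 64)  # concept the poisoner wants it tied to
poisoned = cloak(car, boat)      # looks like `car`, embeds like `boat`
```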

Nightshade was developed by University of Chicago researchers under computer science professor Ben Zhao. Zhao worked with graduate students in the school's SAND Lab, which earlier this year also launched a free service called Glaze that lets artists mask their own IP so it cannot be scraped by genAI models. The Nightshade technology will eventually be integrated into Glaze, according to Zhao.

"A tool like Nightshade is very real, and similar tools have been used by hackers and criminals for years to poison model training data to their advantage - for example, to fool a satellite or a GPS system and thus avoid enemy detection" said Avivah Litan, a vice president and distinguished analyst with Gartner.

Foundation models, often built on "transformer" architectures, are large-scale generative AI models trained on thousands - even millions - of pieces of raw, unlabeled data. The models learn from the data they gather from the internet and other places, including purchased data sets, to produce answers or solve queries from users.

So, is data poisoning unethical?

Braden Hancock, head of technology and co-founder of Snorkel AI, a startup that helps companies develop LLMs for domain-specific use, believes Nightshade could spur other efforts to thwart data scraping by AI developers. While technological defenses against data scraping date back as far as 2018, Nightshade is something he hasn't seen before.

Whether the use of such tools is ethical or not depends on where they're aimed, he said.

"I think there are unethical uses of it - for example, if you're trying to poison self-driving car data that helps them recognize stop signs and speed limit signs," Hancock said. "If your goal is more towards 'don't scrape me' and not actively trying to ruin a model, I think that's where the line is for me."

Ritu Jyoti, a vice president analyst at research firm IDC, sees it less as a question about what Nightshade does and more as one about ethics. "It's my data or artwork," she said. "I've put it out in public and I've masked it with something. So, if without my permission you're taking it, then it's your problem."

Companies routinely train AI content-generation tools on data lakes containing thousands or even millions of licensed or unlicensed works, according to Jyoti. For example, Getty Images, an image licensing service, earlier this year filed a lawsuit against Stability AI, maker of the AI art tool Stable Diffusion, alleging improper use of its photos in violation of both copyright and trademark rights.

Google is currently involved in a class-action lawsuit that claims the company's scraping of data to train genAI systems violates millions of people's privacy and property rights. In 2015, Google won a landmark court ruling allowing it to digitize library books.

Evolving too fast to regulate?

In each case, the legal system is being asked to clarify what a derivative work is under intellectual property laws, according to Jyoti.

"And there are lots of variations in these cases depending on the jurisdiction; different state or federal circuit courts may respond with different interpretations," she said. "So, the outcome of these cases is expected to hinge on the interpretation of the fair-use doctrine, which allows copyrighted work to be used without the owner's permission for purposes such as criticism, such as satire, or fair comment, or news reporting, or teaching, or for classroom use."

Hancock said genAI development companies are waiting to see how aggressive - or not - government regulators will be with IP protections. "I suspect, as is often the case, we'll look to Europe to lead here. They're often a little more comfortable protecting data privacy than the US is, and then we end up following suit," Hancock said.

To date, government efforts to address IP protection against genAI models are at best uneven, according to Litan.

"The EU AI Act proposes a rule that AI model producers and developers must disclose copyright materials used to train their models. Japan says AI generated art does not violate copyright laws," Litan said. "US federal laws on copyright are still non-existent, but there are discussions between government officials and industry leaders around using or mandating content provenance standards."

Companies that develop genAI are increasingly turning away from indiscriminate scraping of online content and instead purchasing content to ensure they don't run afoul of IP statutes. That way, they can reassure customers purchasing their AI services that they won't be sued by content creators.

"Every company I'm speaking to - all the technology companies - IBM, Adobe, Microsoft are all offering indemnification," Jyoti said. "IBM has announced [it] will be launching a model and if an enterprise is making use of it, they're in safe hands if they ever get into a lawsuit, because IBM will provide them with indemnification.

"This is a big debatable topic right now," she added.

Hancock said he's seeing a lot more companies being explicit in warning AI developers against simply scraping content. "Reddit, Stack Overflow, Twitter and other places are getting more explicit and aggressive around saying, 'We will sue you if you use this for your models without our permission,'" Hancock said.

Microsoft has gone so far as to tell its Copilot users they won't be legally protected if they don't use the content filters and guardrails the company has built into its tool.

A Microsoft spokesperson said the company had no comment. OpenAI and IBM did not respond to requests for comment.

Along with indemnifying users against stolen IP, industry efforts are underway to create content authentication standards that support provenance of images and other objects, according to Gartner's Litan.

For example, Adobe has created Content Credentials - metadata that carries contextual details, such as who made the artwork, when they did it, and how it was created. Another method for protecting creators involves source content references in genAI outputs, which are provided by various AI model vendors or third-party firms such as Calypso AI and DataRobot.
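Content Credentials is built on the C2PA provenance standard, and the real manifest format is defined by that spec. As a loose illustration of the kind of record such metadata carries, a simplified sketch might look like the following; the field names are invented for the example, not taken from the spec.

```python
import hashlib
import json

def provenance_record(image_bytes: bytes, author: str, tool: str,
                      created: str) -> dict:
    """Build a simplified provenance record (illustrative fields only;
    the real Content Credentials format is defined by the C2PA spec)."""
    return {
        "author": author,
        "created": created,
        "tool": tool,
        # The hash binds the record to this exact file: any later edit
        # changes the digest and breaks the claim.
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }

record = provenance_record(b"<raw image bytes>", author="Jane Artist",
                           tool="Some Editor 1.0", created="2023-11-13")
print(json.dumps(record, indent=2))
```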

Finally, genAI training techniques such as prompt engineering, retrieval-augmented generation (RAG), and fine-tuning can instruct a model to use only private, validated data from the user organization.

"Microsoft 365 Copilot uses RAG, so that responses to the users from the models are always based on the enterprise's private data, which is why they indemnify enterprises from copyright violations as long as they follow the M365 Copilot rules and use their guardrails," Litan said.

Customized genAI to the rescue?

Snorkel AI is one company focused entirely on customizing and specializing base genAI models for specific domains and applications. The result: LLMs with data sets orders of magnitude smaller than those behind OpenAI's GPT-4, Google's PaLM 2, or Meta's Llama 2 models.

"We're still not talking about tens or hundreds of data points, but thousands or tens of thousands of data points to teach the model what it needs to know from its general training," Hancock said. "But that's still quite a bit different from substantial portions of the Internet that are used for pre-training those other base models."

Smaller domain-specific LLMs that address vertical industry needs are already emerging as the next frontier of AI. They rely on more targeted data and language, such as financial services terms and market information, whereas base LLMs consume vast amounts of processor cycles and cost millions of dollars to train.

"When you've got that much data that you need to pump through a model, you often end up needing hundreds or thousands of specialized accelerators - TPUs or GPUs - that you run for weeks or months depending on how much you parallelize," Hancock said. "The hardware itself is expensive, but then you're also running it with a non-stop electricity bill for a long period of time. That doesn't even include the time spent on data collection."

Amorphous, general-purpose LLMs will continue to grow alongside domain-specific LLMs because they can be used for a broad range of tasks - which means tools to thwart unchecked IP scraping will also continue to grow.

"I can't judge the ethics of such a tool -I can only say it often helps to fight fire with fire, and that it just ups the ante for large model developers and providers," Litan said. "They will now have to spend a lot of money training their models to ignore such types of adversarial attacks and data poisoning. Whoever has the strongest and most effective AI will win. 

"In the meantime, the artists are totally justified in their frustrations and response."
