
US media executives call for legislation on AI content compensation

Jan 11, 2024 Hi-network.com

Media executives and academic experts testified before the Senate Judiciary Subcommittee on Privacy, Technology and the Law, urging the US Congress to enact legislation to prevent AI models from training on 'stolen goods.' Their central concern is that AI models are being built on content used without proper authorisation. The executives proposed that AI companies rely on licensed content and compensate publishers for the material they use.

Roger Lynch, Chief Executive Officer of Conde Nast, underscored that existing AI models are frequently built on data obtained without authorisation, as chatbots scrape and display news articles without compensating publishers or seeking their permission. He also highlighted how little control news organisations typically have over whether their content is used to train AI models, raising concerns about its unauthorised use.

The issue extends beyond news media, with lawsuits filed against AI companies by notable figures and a class-action lawsuit involving renowned authors. In response to concerns about content appropriation, Lynch suggested that AI companies license the content they use and compensate publishers both for training and for output.

Danielle Coffey, CEO of the News Media Alliance, stressed that a licensing ecosystem already exists, noting that digitising archives spanning several centuries and making those contents available to the public has become common practice for many publishers, and she advocated that AI companies pay for the content used in training. Coffey also highlighted the risk of AI models introducing inaccuracies and misinformation, particularly when they scrape content from less reputable sources.

Curtis LeGeyt, CEO of the National Association of Broadcasters, expressed concern about AI's impact on the trust local personalities have built with their audiences, particularly through deepfakes and misinformation. Collectively, the executives emphasised the need for legal safeguards to protect publishers and maintain the quality of content in the evolving landscape of AI technology.

Why does it matter?

The significance of this push for legislation lies in the acknowledgment that AI chatbots often scrape and use news articles without compensating publishers or seeking their permission. This perceived breach of intellectual property rights has prompted news organisations to file lawsuits challenging how their content is collected and used to train AI models. Notably, high-profile cases, such as The New York Times suing major AI companies over copyright matters, have emerged as pivotal instances shaping the ongoing legal landscape.

