

Is this legal? For AI and IoT, W&M Law School aims to find out

Oct 25, 2018 | Hi-network.com

Innovators have been developing artificial intelligence (AI), IoT, cybersecurity, and related technologies for some time, yet the legal world has had limited engagement with the issues likely to arise from these technologies, except for those in the area of privacy. And this poses real challenges. For example, it won't do us a great deal of good to produce an AI prediction device if the result is a successful class-action lawsuit that bankrupts the companies that made it, sold it, and used it. Beginning in 2017, William & Mary's Center for Legal and Court Technology (CLCT) began research into the legal aspects of these developments. Cisco supports CLCT with research funding.

The nation's oldest law school tackles the world's newest technologies
CLCT is a joint initiative of William & Mary Law School, the nation's oldest law school (W&M had the nation's first law professor), and the National Center for State Courts. W&M is world famous for its work in court and courtroom technology and has long been interested in the interrelationship between technology and law. Cisco's grant support enabled CLCT to expand into the AI, IoT, and related technologies space.

CLCT's first two findings were fundamental and important. First, CLCT confirmed that legal professionals, including judges and lawyers, were, with notable exceptions, largely unfamiliar with AI and related technologies. The classic approach of most legal professionals is to wait until a case presents itself and then pursue self-education, often under great time pressure.

Second, CLCT concluded that if and when they dealt with these issues, legal academics, judges, and lawyers tended to consider each technology in a vacuum. This can easily produce erroneous results. Consider AI, for example. A "perfect" AI algorithm is vulnerable to the data used to train it and to the data that it uses. If the data used by the algorithm comes from an IoT device with an uncontrolled Internet connection, it's vulnerable to the false and error-prone data found throughout the Internet. In the event of a lawsuit, who would be liable: designer, data provider, trainer, user, or some other party? Equally important, could we even determine the cause of a bad result if it were arguably based on erroneous data that cannot be replicated or adequately explained?

Students pose the questions
With Cisco's support, CLCT is augmenting its academic research work by conducting conferences, presentations, and international law student paper contests in which students discuss legal issues related to these interconnected technologies.

Last year's contest was open to students in the EU, Canada, and the United States. The first prize went to Jordan Cohen, now at Emory Law, for his paper "Lights, Camera, AI: Artificial Intelligence and Copyright Ownership in the Entertainment Industry of Tomorrow." The second prize was awarded to "Perfect Enforcement & Filtering Technology," written by Brian Mund, who graduated from Yale Law School in May 2018. The third prize went to "AI-'Agents': to be or not to be in legal 'domain'?" co-authored by Federica Casano and Francesco Cavinato, both from Alma Mater Studiorum, Università di Bologna.

Hot tags: Cybersecurity, Internet of Things (IoT), Education, higher education, Artificial Intelligence (AI), Machine Learning (ML), William & Mary Law School
