
Why designing AI for humans requires 'productive discomfort'

Dec 05, 2022 | Hi-network.com

When the consumer version of Google Glass appeared on the scene in 2014, it was heralded as the start of a new era of human-computer interfaces.


People could go about their day with access to all the information they need, right in front of their eyes.

Eight years on, how many people do you see walking around wearing smart glasses? 

The lesson here, as described by Stanford professor Elizabeth Gerber, is that "technology can only reach people if they want it." 

Speaking at the recent fall conference of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), she noted that "we didn't want to wear Google Glass because they invaded our privacy. We didn't want to because it changed human interaction. Just remember Google Glass when you're thinking about what can AI do -- people have to want it." (For a comprehensive overview of the entire conference, check out Shana Lynch's article at the HAI site.)

Designing "AI that people want is as important as making sure it works," Gerber continued. Another lesson learned was the adoption of AI-based tutors over Zoom during the COVID-instigated shutdowns of schools -- which served to turn children off from subjects. The same applies to workers who need to work with AI-driven systems, she added.


Designing human-centered AI involves greater interaction with people across the enterprise, and it is often hard work to get everyone on the same page as to which systems are helpful and of value to the business. "Having the right people in the room doesn't guarantee consensus and, in fact, results often come from disagreement and discomfort. We need to manage with and look toward productive discomfort," said Genevieve Bell, professor at the Australian National University and a speaker at the HAI event. "How do you teach people to be good at being in a place where it feels uncomfortable?"

It may even mean that no AI is better than some AI, Gerber pointed out. "Remember that as you're designing, take this human-centered approach and design for people's work -- sometimes you just need a script. Instead of taking an AI-first approach, take a human-centric approach. Design and iteratively test with people to augment their job satisfaction and engagement."

Perhaps counterintuitively, when designing AI, it may be best to avoid attempting to make AI more human-like, such as the use of natural language processing for conversational interfaces. In the process, the functionality of the system that helps make people more productive may be diluted or lost altogether. "Look what happens when someone who doesn't get it is designing the prompt system," said University of Maryland professor Ben Shneiderman. "Why is it a conversational thing? Why is it a natural language interface, when it's a great place for a design of a structured prompt that would have the different components, designed along the semantics of prompt formation?"
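
Shneiderman's structured-prompt idea is easy to make concrete. The Python sketch below is purely illustrative -- the component names (task, context, constraints, output format) are assumptions rather than any established prompt-design framework -- but it shows how a prompt assembled from explicit, named fields differs from a free-form conversational request.

```python
from dataclasses import dataclass

# Illustrative sketch only: a prompt built from explicit, named components
# instead of one free-form natural-language request. All field names here
# are invented for the example.
@dataclass
class StructuredPrompt:
    task: str           # what the system should do
    context: str        # background the model needs
    constraints: str    # limits on the output
    output_format: str  # the shape the answer should take

    def render(self) -> str:
        # Assemble the components in a fixed, predictable order, so the
        # semantics of the prompt stay visible and editable piece by piece.
        return (
            f"Task: {self.task}\n"
            f"Context: {self.context}\n"
            f"Constraints: {self.constraints}\n"
            f"Output format: {self.output_format}"
        )

prompt = StructuredPrompt(
    task="Summarize the quarterly sales report",
    context="The report covers Q3 for the EMEA region",
    constraints="No more than five bullet points",
    output_format="Plain-text bullet list",
)
print(prompt.render())
```

Each component can then be validated, templated, or swapped independently -- the kind of designed structure Shneiderman is contrasting with an open-ended chat box.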


The thinking that human-computer interaction should be based on human-human interaction "is sub-optimal -- it's a poor design," Shneiderman continued. "Human-human interaction is not the best model. We have better ways to design, and changing from natural language interaction is an obvious one. There are lots of ways we should get past that model, and reframe to the idea of designing tools -- super tools, telebots, and active appliances."

"We do not know how to design AI systems to have a positive impact on humans," said James Landay, vice director of Stanford HAI and host of the conference. "There's a better way to design AI."  

The following recommendations came out of the conference:

  • Reframe and redefine human-centered design: Panelists proposed a new definition of human-centered AI -- one that emphasizes the need for systems that improve human life and challenges the problematic incentives that currently drive the creation of AI tools. Current efforts are based on the "denial of human expertise," Shneiderman said. "Yes, humans make mistakes, but they're also remarkable in their creativity and their capacity for expertise. What we really need to do is build machines that make smart people smarter. We want to enhance their abilities. We understand that in a lot of designs by having limitations, guard rails, interlocks. These are all the things that have gone into the human factors literature for 70 years -- about how we prevent failure. So your self-cleaning oven, once the temperature is above 600 degrees Fahrenheit, you can't open the door, okay? And that's built into a lot of technologies. That's design at work. That's the right kind of design -- we need to build more of that, and we need to enhance human expertise while lowering the chance of error." (A minimal code sketch of such an interlock follows this list.)
  • Seek multiple perspectives: This calls for multidisciplinary teams made up of workers, managers, software designers, and others with conflicting perspectives, said Jodi Forlizzi, professor at Carnegie Mellon University. In addition, according to Saleema Amershi, senior principal research manager at Microsoft Research, "We have to reframe some of our processes, because even if there are people like designers or folks who understand human-centered principles, a lot of those people aren't in the room making the decisions about what gets built. We have to rethink our processes and have those folks working with the technologists, working with AI people, up front and early on."
  • Rethink AI success metrics: "We're most often asking the question of what these models can do, but we really need to be asking what people can do with these models," Amershi said. "We currently measure AI by optimizing for accuracy. But accuracy is not the sole measure of value. Designing for human-centered AI requires human-centered metrics." (A sketch of such metrics appears after this list.)
  • Keep humans in the loop -- and AI easily overridable: "We want AI models that are comprehensible, predictable, and controllable," said Shneiderman. "That's still the durable notion, that you're in charge and that you can override it. We come to depend on reliable, safe, and trustworthy things, such as our cameras setting the shutter, the focus, and the color balance. But if we see that the focus is wrong, we can adjust that. The mental model should be that users have the control panel by which they can get what they want, and then the system gives them some previews, offers some opportunities, but they can override." (See the override sketch after this list.)
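
To make the interlock idea from the first recommendation concrete, here is a minimal Python sketch modeled on Shneiderman's self-cleaning-oven example: the unsafe action becomes impossible while the unsafe condition holds, rather than merely warned against. Class and attribute names are invented for illustration.

```python
# Hypothetical sketch of an interlock, modeled on the self-cleaning oven:
# while the unsafe condition holds, the unsafe action simply cannot happen.
class OvenDoorInterlock:
    MAX_SAFE_TEMP_F = 600  # the threshold quoted in the talk

    def __init__(self) -> None:
        self.temperature_f = 75
        self.door_open = False

    def open_door(self) -> bool:
        # The guard rail: refuse the action outright above the threshold.
        if self.temperature_f > self.MAX_SAFE_TEMP_F:
            return False  # door stays locked
        self.door_open = True
        return True

oven = OvenDoorInterlock()
oven.temperature_f = 850          # mid self-cleaning cycle
assert oven.open_door() is False  # interlock holds
oven.temperature_f = 200          # cooled down
assert oven.open_door() is True   # normal operation resumes
```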
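
The success-metrics point can be sketched the same way. The fields and example values below are hypothetical; the takeaway is that accuracy sits alongside, rather than replaces, measures of what people actually accomplish with the system.

```python
from dataclasses import dataclass

# Hypothetical human-centered evaluation record. Accuracy is one field
# among several, not the whole score; all names and values are invented.
@dataclass
class EvalResult:
    accuracy: float               # fraction of model outputs that are correct
    task_completion: float        # fraction of user tasks actually finished
    time_saved_minutes: float     # average time saved per task vs. a baseline
    reported_satisfaction: float  # 0-1 user survey score

def human_centered_report(r: EvalResult) -> str:
    return (
        f"accuracy={r.accuracy:.0%}, "
        f"tasks completed={r.task_completion:.0%}, "
        f"time saved={r.time_saved_minutes:.1f} min/task, "
        f"satisfaction={r.reported_satisfaction:.0%}"
    )

# A model can score well on accuracy while people still fail to get work done.
print(human_centered_report(EvalResult(0.92, 0.61, 3.5, 0.48)))
```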
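
Finally, the control-panel-plus-override pattern from the last recommendation, illustrated with Shneiderman's camera example: the system proposes settings, but an explicit user choice always wins. Function and field names are invented for the sketch.

```python
# Hypothetical sketch of "the user can always override": the AI proposes
# settings, and any explicit user choice takes precedence over the proposal.
def resolve_settings(ai_proposal: dict, user_overrides: dict) -> dict:
    final = dict(ai_proposal)     # start from the system's suggestions
    final.update(user_overrides)  # then let the user win
    return final

proposal = {"shutter": "1/250", "focus": "auto", "white_balance": "daylight"}
overrides = {"focus": "manual:2.4m"}  # the user disagrees with the autofocus
print(resolve_settings(proposal, overrides))
# {'shutter': '1/250', 'focus': 'manual:2.4m', 'white_balance': 'daylight'}
```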


