Speakers ranging from artificial intelligence (AI) developers to law firms grappled this week with questions about the efficacy and ethics of AI during MIT Technology Review's EmTech Digital conference. Among those who had a somewhat alarmist view of the technology (and regulatory efforts to rein it in) was Tom Siebel, CEO of C3 AI and founder of CRM vendor Siebel Systems.
Siebel was on hand to talk about how businesses can prepare for an incoming wave of AI regulations, but in his comments Tuesday he touched on various facets of the debate over generative AI, including the ethics of using it, how it could evolve, and why it could be dangerous.
For about 30 minutes, MIT Technology Review Editor-in-Chief Mat Honan and several conference attendees posed questions to Siebel, beginning with what constitutes ethical and unethical uses of AI. The conversation quickly turned to AI's potential to cause damage on a global scale, as well as the nearly impossible task of setting up guardrails against its use for unintended and intended nefarious purposes.
The following are excerpts from that conversation.
[Honan] What is ethical AI, what are ethical uses of AI, or even unethical uses of AI?

"The last 15 years we've spent a couple billion dollars building a software stack we used to design, develop, provision, and operate, at massive scale, enterprise predictive analytics applications. So, what are applications of these technologies where I don't think we have to deal with bias and we don't have ethical issues?
"I think anytime we're dealing with physical systems, we're dealing with pressure, temperature, velocity, torque, rotational velocity. I don't think we have a problem with ethics. For example, we're...using it for one of the largest commercial applications for AI, the area of predictive maintenance.
"Whether it's for power generation and distribution assets in the power grid or predictive maintenance for offshore oil rigs, where we're dealing with extraordinarily large data sets arriving at very rapid velocity, ...we're building machine-learning models that are going to identify device failure before it happens - avoiding the failure of, say, an offshore oil rig of Shell's. The cost of that would be incalculable. I don't think there are any ethical issues. I think we can agree on that.
"Now, anytime we get to the intersection of artificial intelligence and sociology, it gets pretty slippery, pretty fast. This is where we get into perpetuating cultural bias. I can give you specific examples, but it seems like it was yesterday - it was earlier this year - that this business of generative AI came out. And is generative AI an interesting technology? It's really an interesting technology. Are these large language models important? They're hugely important.
"Now all of a sudden, somebody woke up and found, gee, there are ethical situations associated with AI. I mean, people, we've had ethical situations with AI going back many, many years. I don't happen to have a smartphone in my pocket because they stripped it from me on the way in, but how about social media? Social media may be the most destructive invention in the history of mankind. And everybody knows it. We don't need ChatGPT for that.
MIT Technology Review/Computerworld: Tom Siebel (left) speaks with MIT Technology Review Editor-in-Chief Mat Honan at EmTech.
"So, I think that's absolutely an unethical application of AI. I mean, we're using these smartphones in everybody's pocket to manipulate two to three billion people at the level of the limbic brain, where we're using this to regulate the release of dopamine. We have people addicted to these technologies. We know it causes an enormous health problem, particularly among young women. We know it causes suicide, depression, loneliness, body image issues - documented. We know these systems are the primary exchange for the slave trade in the Middle East and Asia. These systems call into question our ability to conduct a free and open democratic society.
"Does anyone have an ethical problem with that? And that's the old stuff. Now we get into the new stuff."
Siebel spoke about government requests made of his company. "Where have I [seen] problems that we've been posed? OK. So, I'm in Washington, D.C., and I won't say in whose office or what administration, but it's a big office. We do a lot of work in the Beltway, in things like contested logistics, AI predictive maintenance for assets in the United States Air Force, command-and-control dashboards, what have you, for SOCOM [Special Operations Command], TransCom [Transportation Command], National Guard, things like this.
"And I'm in this important office, and this person turns his office over to his civilian advisor, who's a PhD in behavioral psychology..., and she starts asking me these increasingly uncomfortable questions. The third question was, 'Tom, can we use your system to identify extremists in the United States population?'
"I'm like holy moly; what's an extremist? Maybe a white male Christian? I just said, 'I'm sorry, I don't feel comfortable with this conversation. You're talking to the wrong people. And this is not a conversation I want to have.' Now, I have a competitor who will do that transaction in a heartbeat.
"Now, to the extent we have the opportunity to do work for the United States government, we do so. I'm in a meeting - not this administration - but with the Undersecretary of the Army in California, and he says, 'Tom, we want to use your system to build an AI-based human resource system for the Department of the Army.'
"I said, 'OK, tell me what the scale of this system is.' The Department of the Army is about a million and a half people by the time you get into the reserves. I said, 'What is this system going to do?' He says we're going to make decisions about who to assign to a billet and who to promote. I said, 'Mr. Secretary, this is a really bad idea. The problem is, yes, we can build the system, and yes, we can have it at the scale of the Department of the Army in, say, six months. The problem is we have this thing in the data called cultural bias. The problem is no matter what the question is, the answer is going to be: white, male, went to West Point.'
"In 2020 or 2021 - whatever year it was - that's just not going to fly. Then we've got to read about ourselves on the front page of The New York Times; then we've got to get dragged before Congress to testify, and I'm not going with you.
"So, this is what I'd describe as the unethical use of AI."
MIT Technology Review: Tom Siebel speaks at MIT's EmTech conference.
Siebel also spoke about AI's use in predictive healthcare. "Let's talk about one I'm particularly concerned about. The largest commercial application of AI - hard stop - will be precision health. There's no question about that.
"There's a big project going on in the UK right now, which may be on the order of 400 million pounds. There's a billion-dollar project going on in the [US] Veterans Administration. An example of precision medicine ... [would be to] aggregate the genome sequences and the healthcare records of the population of the UK or the United States or France, or whatever nation it may be..., and then build machine-learning models that will predict, with very high levels of precision and recall, who's going to be diagnosed with what disease in the next five years.
"This is not really disease detection; this is disease prediction. And this gives us the opportunity to intervene clinically and avoid the diagnosis. I mean, what could go wrong? Then we combine that with the cellphone, where we can reach previously underserved communities and, in the future, every one of us. How many people have devices emitting telemetry? Heart arrhythmia, pulse, blood glucose levels, blood chemicals, whatever it may be.
"We have these devices today and we'll have more of them in the future. We'll be able to provide medical care to largely underserved [people]..., so, net-net we have a healthier population, we're delivering more efficacious medicine... at a lower cost to a larger population. What could go wrong here? Let's think about it.
"Who cares about pre-existing conditions when we know what you'll be diagnosed with in the next five years? The idea that it won't be used to set rates - get over it, because it will.
"Even worse, it doesn't matter which side of the fence you're on - whether you believe in a single-payer system or a quasi-free-market system like we have in the United States. The idea that this government entity or this private-sector company is going to act beneficially, you can get over that, because they're not going to act beneficially. And these systems absolutely