Anyone who's ever bought a car might find it hard to feel sympathy towards a car dealership, but you just gotta feel it for the folks at one Chevy dealership who fielded a version of ChatGPT to potential car buyers. As reported in Inc., Yahoo Finance, and Driving.ca, among others, the Chevrolet dealership in Watsonville, CA created a custom chatbot that quickly went off the rails.
Users were variously able to convince the chatbot to offer a 2024 Chevy Tahoe (roughly a$55,000 truck) for a buck, get it to recommend buying from Ford instead of Chevy, and write limericks that sang the praises of the Toyota Tundra (another Chevy competitor).
Also: Here's how to create your own custom chatbots using ChatGPT
In a statement in Driver.ca attributed to "Chevrolet" (we don't know if it's the main company or the dealership doing damage control), the company said:
The recent advancements in generative AI are creating incredible opportunities to rethink business processes at GM, our dealer networks and beyond. We certainly appreciate how chatbots can offer answers that create interest when given a variety of prompts, but it's also a good reminder of the importance of human intelligence and analysis with AI-generated content.
See what I mean? Somebody definitely had a very bad day. But, in the interests of making sure you don't have a similarly bad day, we've compiled a list of 20 things you should consider before opening up an AI to your customers.
Also: Bill Gates predicts a 'massive technology boom' from AI coming soon
I'm not going to cover the technical machinations required to make the AI comply with these guidelines, because that will differ from implementation to implementation. But everything I am suggesting is doable, either via AI APIs or with specific tools meant to build custom chatbots.
Let's get started, shall we?
Compared to old-school expert systems, which were trained on specific sets of information, most AI chatbots use unsupervised training, which means they can access just about anything. Don't allow that. Only train your AI on context-specific information, and then limit the kinds of answers it can provide.
Don't just let your employees test this out. Bring in friends and ask them to go wild with what they ask. Test, test some more, and then test again.
Make sure you carefully examine the results of preliminary testing and make any changes that prove to be required. But don't limit your testing to simple trials. Be sure to push the limits of the AI and see how it responds to edge cases. This is how you can develop guard rails that keep the AI from straying into dangerous territory.
You're still not going to find all the flaws, despite your testing. When you roll out your AI, limit it to controlled groups. But be careful: don't just limit it to compliant clients. Find a high school class and let them run amok in your AI, and watch what it does. Get a few curious and mischievous outsiders and ask them to find flaws. Then roll out to another small group and see if you get any weird behavior. Do it slowly, perhaps by invitation only.
Even with all of that, you'll get unexpected results. You'll need a way to implement small improvements as you see how the AI performs. Make sure the developers are on board for continuous improvement, and also be sure you log enough information to allow the developers to trace any unplanned behavior.
Create an escalation and rapid response plan if the AI starts to go off the rails. Make sure you have a way to quickly turn it off, and then a way to escalate the details to the developers to fix what went wrong.
This can be presented as part of an account creation process, before the AI interface is ever provided to the user. It allows you (and your lawyers) to specify the legal bounds of the experience and disavow any unexpected behaviors or promises made by the AI. It is your posterior protection plan, just in case the AI runs amok as it did for Chevrolet of Watsonville when it tried to sell a truck for a buck.
Make sure you build monitoring into your process. You might want to be sure a human audits each interaction the AI has with customers, so that you're able to understand just what your customers are saying and being told. Make sure you have a way to provide input to the developers so improvements can be rapidly deployed.
Your company (or, at least, the manufacturer whose products you sell) is likely to have very carefully defined brand identity guidelines. Make sure the AI follows those guidelines. Be sure to train the staffers doing the monitoring on brand guidelines so that they can also monitor AI activity from a brand identity perspective.
Similarly, train your monitoring staff on how to identify biases and discriminatory behavior. As part of the regular monitoring and auditing process, look out for this behavior and escalate to the devs anything that needs to be fixed. Also keep in mind that if the behavior is particularly egregious, this is a way your staff can reach out to the customer or prospect in question to do damage control before a situation might escalate.
Make sure you're clear to customers when they're interacting with an AI. This will both help you set expectations and give you a bit of an "out" if the AI doesn't respond as expected. Explain that the AI is an experimental feature or a bonus service offering.
Some customers will be excited to try out the new AI features, but some customers will find it either weird or dehumanizing. Be sure to show sensitivity to these variations in responses, and give those customers who don't want to use the AI an alternative channel for getting help. In fact...
You don't want customers to feel they're trapped in some sort of AI bot purgatory. Make sure there's a clear and easy path for humans to reach other humans, whether that's by phone or text interaction. And, whatever you do, don't tell people they're talking to a person when they're actually talking to a bot.
Even customers who think the AI experience is cool may have some feedback, suggestions, or complaints. Provide an easy way to gather that feedback as part of the AI interaction. ChatGPT, for example, has a thumbs up/thumbs down button after each AI response, along with a way for users to provide more details about why they gave the answer that rating.
An AI is capable of tuning its communication style by modifying factors like formality, friendliness, complexity of vocabulary, and overall tone. This encompasses not just the words chosen, but also the style and approach of communication -whether it's more casual or professional, straightforward or elaborate, empathetic or objective. Based on feedback from your customers, modify the AI's manner of speaking to best meet what they are most comfortable with.
We're all very aware of the need for security, especially as more and more criminals attempt to hack, or succeed in hacking, businesses. Information gathered by the AI must be protected. Carefully develop security protocols to make sure the AI asks as few questions as possible when requesting personal information, and then protects whatever information it gathers.
We talked previously about assigning staff to continuously audit AI responses, but didn't specifically call out training. Not only will the folks doing the audits need to be fully trained, but your whole staff will require training on how to use the AI, its limits, how to spot problems, how to escalate when serious problems are found, and how to communicate expectations and limits to customers. Ideally, training will be a continual process, with introductory training and regular refreshers as the technology evolves.
Undoubtedly, the introduction of AI customer service solutions will concern members of your team about their long-term job security. This is where you need to be fully transparent with employees, set expectations, and keep in mind you're dealing with real people with responsibilities, families, and feelings. Be sure to read 's Special Report, The Future of AI, Jobs, and Automation, for some very in-depth coverage and analysis of this complex issue.
AI, particularly generative AI, is evolving at warp speed. Be sure to plan for regular maintenance and updates of your AI systems. Something that's cutting edge in January could well be three generations behind by June.
Each industry is different, so this is one you're going to have to research based on your business type and location. Be sure to check with your attorneys to be sure your AI efforts (and the responses elicited from the AI) are within the bounds of your legal requirements.
Also: Have 10 hours? IBM will train you in AI fundamentals - for free
Whew! Well, there you go. Twenty things to consider before allowing your customers to encounter your AI. If you follow these guidelines, you can avoid many of the troubles that beset Chevrolet of Watsonville.
What do you think? Did I leave anything out? Are you deploying an AI in your business? Do you have any interesting stories to tell? Let us know in the comments below.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter on Substack, and follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.