A major concern with generative AI, and ChatGPT specifically, is what happens to the data users share in their interactions with the model.
The Mar. 20 ChatGPT incident, which allowed some users to see other users' chat histories, only exacerbated privacy and security concerns. The event even prompted Italy to ban ChatGPT entirely.
On Tuesday, OpenAI unveiled changes to ChatGPT that address these privacy concerns by giving users more control over their own data and chat history.
ChatGPT users can now turn off chat history, allowing you to choose which conversations can be used to train our models: https://t.co/0Qi5xV7tLi
- OpenAI (@OpenAI) April 25, 2023
Users will now be able to turn off their chat history, which will prevent their conversations from being used to train and improve OpenAI's AI models.
The downside of turning off the chat history is that users will not be able to see previous chats in the sidebar, making it impossible to revisit past conversations.
The new controls begin rolling out to users on Apr. 25, and can be found in ChatGPT's settings.
Even when chat history is disabled, ChatGPT will still retain new conversations for 30 days, reviewing them only when needed to monitor for abuse. After 30 days, the conversations are permanently deleted.
OpenAI also integrated a new export option in settings that will allow users to export their ChatGPT data and, as a result, better understand what information ChatGPT is actually storing, according to the release.
Lastly, OpenAI shared that it is working on a new ChatGPT Business subscription for professionals who need more control to protect confidential company data and for enterprises that need to manage end users.