OpenAI, the Microsoft-backed firm behind the groundbreaking ChatGPT generative AI system, announced this week that it will let users turn off the chat history feature for its flagship chatbot, a change seen as a partial answer to critics concerned about the security of data provided to ChatGPT.
With history disabled, conversations won’t be used to train OpenAI’s underlying models and won’t appear in the history sidebar. They will still be stored on the company’s servers, but will be reviewed only on an as-needed basis to monitor for abuse, and will be deleted after 30 days.
“We hope this provides an easier way to manage your data than our existing opt-out process,” the company said in an official blog post.
OpenAI also said that the company is working on a new ChatGPT business subscription model, aimed at organizational users who may need more direct control over their data. ChatGPT Business will adhere to the company’s API data usage policies, meaning that user data will not, by default, be used for model training. OpenAI said that it hopes to debut this subscription model “in the coming months.”
Regulators set sights on OpenAI
The news comes in the wake of a move by the European Data Protection Board, earlier this month, to investigate ChatGPT, after complaints from privacy watchdogs that the chatbot did not comply with the EU’s General Data Protection Regulation. Italy, in March, temporarily banned the use of ChatGPT over alleged violations of user privacy. That country’s data protection authority, the Garante, demanded that the service demonstrate compliance with applicable privacy laws and provide greater transparency into how the system handles user data.
It’s clear that privacy and data governance were not top-of-mind at the outset for OpenAI, according to Gartner vice president and analyst Nader Henein, who noted that this is nothing new for a startup focused on getting a workable product to market.
“They are continuing to build the airplane mid-flight,” he said. “I imagine most of the development underway at Microsoft on Copilot is focused on wrapping that governance and enterprise support around the OpenAI [large language model].”
It’s a step in the right direction, Henein added, but it reflects design decisions underlying much of ChatGPT that may have treated privacy as an afterthought rather than a core component.
“There is no doubt in my mind that the team at OpenAI are working feverishly to retrofit governance to their architecture,” he said. “It’s a matter of how much can be done after the fact. The analogy that we have seen used time and time again is that of baking a cake and trying to add sugar or baking powder after you’ve taken it out of the oven.”