OpenAI has led the generative AI space since launching ChatGPT, setting multiple precedents for future AI developments. To that end -- and to further foster transparency with the public -- OpenAI has shared a new document that gives users deeper insight into its AI models.
On Wednesday, OpenAI released the first draft of its Model Spec, a document that outlines how the startup wants its models to behave in the OpenAI API and ChatGPT. The Model Spec is also meant to provide insights into how OpenAI makes decisions about its models' behaviors.
Model behavior refers to how AI models respond to user inputs, including characteristics such as tone, response length, and more. OpenAI explains that AI models are not explicitly programmed but rather learn from data; therefore, shaping their behavior remains a "nascent science" with many nuances to account for.
The Model Spec documents how OpenAI approaches the complex task of shaping model behavior, drawing on the company's experience and ongoing research in the area. OpenAI notes that it has yet to use the Model Spec "in its current form" but is developing techniques that will allow its models to learn directly from the spec.
Within the document, OpenAI breaks the Model Spec into three types of principles: objectives, which include assisting the developer and end user and benefiting humanity; rules, instructions intended to help ensure safety and legality; and default behaviors, guidelines consistent with the objectives and rules.
The company says it is sharing the Model Spec publicly to serve as a guideline for researchers and AI trainers who work on reinforcement learning from human feedback. By publishing the document, OpenAI aims to increase transparency around how its models are shaped and to gather feedback from stakeholders on its approach.
The Model Spec Feedback form is open until May 22. All stakeholders are encouraged to complete the form, regardless of technical expertise.
The Model Spec fits into OpenAI's ongoing effort to deepen stakeholder trust in its models. Just this week, OpenAI released a tool that can detect images generated by its own DALL-E 3 model.