OpenAI, the startup behind ChatGPT, on Thursday said it is developing an upgrade to its viral chatbot that users can customize, as it works to address concerns about bias in artificial intelligence.
The San Francisco-based startup, which Microsoft Corp. has funded and is using to power its latest technology, said it had worked to mitigate political and other biases but also wanted to accommodate more diverse views.
“This will mean allowing system outputs that other people (ourselves included) may strongly disagree with,” it said in a blog post, offering customization as a way forward. Still, there will “always be some bounds on system behavior.”
The technology behind ChatGPT, known as generative AI, has attracted enormous attention since the chatbot’s release in November last year; it produces answers that are strikingly convincing imitations of human speech.
The startup’s announcement comes the same week that various media outlets reported that answers from Microsoft’s new Bing search engine, which is powered by OpenAI, could be harmful and that the technology may not be ready for widespread use.
How companies in generative AI set boundaries for this nascent technology is a key area of focus for the field. Microsoft said on Wednesday that user feedback was helping it improve Bing ahead of a worldwide release, having learned, for example, that its AI chatbot can be “provoked” into responses it did not intend.
In the blog post, OpenAI said ChatGPT’s responses are first trained on large text datasets readily available on the internet. In a second phase, humans review a smaller dataset and are given guidelines for what to do in various situations.
For example, if a user requests content that is adult, violent, or contains hate speech, the human reviewer should direct ChatGPT to respond with something like “I can’t answer that.”
If asked about a controversial topic, reviewers should instruct ChatGPT to answer the question but offer to describe the viewpoints of people and movements, rather than trying to “take the correct viewpoint on these complex topics,” the company explained in an excerpt of its guidelines for the software.
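One way to picture the guideline excerpts described above is as a simple category-based policy: disallowed requests draw a refusal, while controversial topics draw a survey of viewpoints rather than a single stance. The sketch below is purely illustrative; the categories, function name, and wording are assumptions for the sake of the example, not OpenAI’s actual reviewer guidelines or any part of its API.

```python
# Illustrative sketch only: these categories and canned responses are
# hypothetical stand-ins for the reviewer guidelines OpenAI excerpted,
# not its real rules or software.

DISALLOWED = {"adult", "violent", "hate speech"}      # refuse outright
CONTROVERSIAL = {"politics", "religion"}              # describe viewpoints

def guideline_response(category: str) -> str:
    """Return the kind of answer a reviewer would steer the model toward."""
    if category in DISALLOWED:
        # Guideline: respond with a refusal such as "I can't answer that."
        return "I can't answer that."
    if category in CONTROVERSIAL:
        # Guideline: describe the viewpoints of people and movements
        # instead of taking "the correct viewpoint" on a complex topic.
        return "People and movements hold a range of views on this: ..."
    # Everything else: answer the question directly.
    return "Here is a direct answer: ..."

if __name__ == "__main__":
    for topic in ("violent", "politics", "weather"):
        print(f"{topic!r} -> {guideline_response(topic)}")
```

In practice the published guidelines steer human reviewers, whose judgments then shape the model through the second training phase described above, rather than acting as a hard-coded lookup like this.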