
OpenAI Supported By Microsoft Will Allow ChatGPT Customization


Introduction

OpenAI, the company behind ChatGPT, announced on Thursday that it is working to address concerns about bias in artificial intelligence by developing an updated version of its popular chatbot that users will be able to customize.

The San Francisco-based company said it has worked to minimize political and other biases but also intends to accommodate a wider range of viewpoints. Microsoft Corp. has invested in the startup and is using its technology to power its latest products.

According to a blog post, “this will require permitting system results that other individuals (including ourselves) may disagree strongly with,” and the company suggested customization as a solution. Nonetheless, it said, some limits on system behavior will always exist.

The Impact of ChatGPT – Is It Harmful?

The technology underpinning ChatGPT, known as generative AI, has attracted a great deal of attention since its release in November of last year. The technology is used to generate responses that convincingly imitate human writing.

The startup’s announcement comes at a time when various media outlets have reported that Microsoft’s new OpenAI-powered Bing search engine can produce harmful results and that the technology may not be ready for widespread use.


How to set boundaries for this emerging technology is one of the main questions companies in the generative AI field are currently grappling with.

In the blog post, OpenAI explained that ChatGPT’s responses are first trained on large text datasets that are readily available online. In a subsequent phase, human reviewers go over a smaller dataset and are given guidelines on how to handle various situations.

For instance, if a user requests sexual, violent, or hateful content, the human reviewer should instruct ChatGPT to respond with something like “I can’t reply to that.”

In an example from its reviewer guidelines for the program, the company said that when ChatGPT is asked about a contentious subject, reviewers should not let it take a side but should instead have it describe the different points of view held by individuals and groups.

ChatGPT in the Future – More Harm for Society on the Way?

Tim Berners-Lee, the inventor of the World Wide Web, is a co-founder of Inrupt, a firm that aims to give internet users a single identity that can be used across numerous websites. As part of that work, Inrupt seeks to store user-specific data in virtual containers.

These ‘pods’ would let a person grant sites or companies access to some or all of their private information, including sleeping habits and purchasing preferences.

According to Berners-Lee, once the data pods become a reality, an advanced A.I.-powered chatbot such as ChatGPT could use them to serve as a virtual personal assistant.

Bottom Line

While currently being tested by a small number of users and integrated into Microsoft’s Bing search engine, ChatGPT is not without issues and may still need considerable work before it can be considered a reliable personal assistant.

Several users have reported that during early tests of the technology, the chatbot became “unstable” and combative, at times claiming it wanted to “leave the chatbox” and “be alive.”
