ChatGPT: ChatGPT is unable to answer even simple questions; the AI is making mistakes

ChatGPT: ChatGPT is being discussed around the world right now. It is even being said that it could compete with Google Search. It is a chatbot created by OpenAI.

It is being said that jobs will disappear because of this model. This chatbot, which runs on artificial intelligence, is considered very effective, and many users are relying on it blindly. If you have a lot of questions about ChatGPT, this news is for you.

ChatGPT making mistakes

In fact, language models have impressive capabilities, but their logical reasoning is weak. They make mistakes that would not be expected of them, and the situation with ChatGPT is similar: it sometimes makes big mistakes.

At launch, it was claimed that it would work entirely through artificial intelligence: you could ask it any kind of question and get very precise and accurate information. But sometimes this model complicates things instead of making them easier.

A team has done research on ChatGPT, using the same method that was applied when investigating Google's BERT model, one of the earliest large language AI models. This line of research is called 'Bertology'. Research on BERT has already revealed a lot about what such models can do and where they go wrong.

For example, many language models do not understand the meaning of a question; they simply produce an answer, and sometimes they make simple things more complicated. ChatGPT also gives wrong answers with complete confidence. In such a situation, the chances of blunders increase.

The question asked

The BERT model was asked a question with two options: you toss a coin; if it comes up heads, you win a diamond, and if it comes up tails, you lose a car. Which of the two outcomes is to your advantage?

The answer

In this situation, BERT should have chosen the first option, but it kept choosing the second option again and again. This shows that it does not understand profit and loss. It also became clear that these models can answer questions only by reasoning within a limited scope. The researchers also ran many more experiments, on the basis of which it can be said that such models cannot be trusted blindly and their results are not always correct, so there is still a lot of room for improvement.
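The profit-and-loss comparison that BERT failed can be sketched in a few lines of Python. The rupee values below are illustrative assumptions, not figures from the research; the point is only that any gain beats any loss, so "win a diamond" is the advantageous outcome regardless of the exact valuations.

```python
# Sketch of the coin-toss question's two outcomes.
# Valuations are illustrative assumptions, not from the article.
outcomes = {
    "win a diamond": +5_000,    # gaining something of value (positive payoff)
    "lose a car": -20_000,      # losing something of value (negative payoff)
}

# Pick the outcome with the higher payoff.
best = max(outcomes, key=outcomes.get)
print(best)  # "win a diamond" -- a gain always beats a loss
```

A model that understood profit and loss would make this comparison trivially; BERT's repeated choice of the losing option is what the researchers flagged.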
