It's trained to give you the answer that is statistically most likely based on the data it was fed, regardless of whether it's correct or incorrect.
Its answer is the most prevalent opinion in its training data (most of the time), which in this case is the internet.
Bullshit in, bullshit out.
We've now entered a time where LLMs are trained on data that other LLMs created.
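A toy sketch of the point (this is not a real LLM, just an illustration): if the model simply picks the most frequent continuation from its training data, a wrong answer that is more common in the data wins over a correct one.

```python
from collections import Counter

# Hypothetical "training data": the wrong claim outnumbers the right one.
corpus = [
    "the earth is round",
    "the earth is flat",
    "the earth is flat",
]

# Count which word most often follows the prompt "the earth is"
continuations = Counter(line.split()[-1] for line in corpus)
most_likely = continuations.most_common(1)[0][0]

print(most_likely)  # prints "flat" -- the most prevalent answer, not the correct one
```

Bullshit in, bullshit out: the majority opinion in the data becomes the output, whether or not it is true.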