
How Google taught AI to doubt itself


This is Platformer, a newsletter on the intersection of Silicon Valley and democracy from Casey Newton and Zoë Schiffer. Sign up here.

Today let’s talk about an advance in Bard, Google’s answer to ChatGPT, and how it addresses one of the most pressing problems with today’s chatbots: their tendency to make things up.

From the day that the chatbots arrived last year, their makers warned us not to trust them. The text generated by tools like ChatGPT does not draw on a database of established facts. Instead, chatbots are predictive — making probabilistic guesses about which words seem right based on the massive corpus of text that their underlying large language models were trained on.
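To make that mechanism concrete, here is a minimal sketch in Python, using a made-up toy distribution rather than a real model, of what next-token prediction looks like: the model assigns a score to each candidate token, converts the scores into probabilities, and samples one. Nowhere in that loop is there a lookup against a database of established facts.

```python
import math
import random

# Toy illustration of next-token prediction (hypothetical scores, not a real model).
# A real LLM produces a score (logit) for every token in its vocabulary; here we
# hard-code a handful of candidates for the prompt "The capital of France is".
logits = {
    "Paris": 4.1,
    "London": 2.3,
    "Lyon": 1.7,
    "bananas": -3.0,
}

# Softmax: turn raw scores into a probability distribution.
max_logit = max(logits.values())
exp_scores = {tok: math.exp(score - max_logit) for tok, score in logits.items()}
total = sum(exp_scores.values())
probs = {tok: s / total for tok, s in exp_scores.items()}

# Sample the next token in proportion to its probability.
# Nothing here checks whether the chosen word is factually correct.
next_token = random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print(probs)
print("Prompt: 'The capital of France is' ->", next_token)
```

Most of the time the highest-probability word is also the right one, but when it isn't, the model has no internal signal telling it so.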

As a result, chatbots are often “confidently wrong,” to…

