Microsoft Bing AI has been proven wrong

Both Microsoft and Google held presentations of artificial intelligence products last week. Microsoft presented the integration of AI into its Bing search engine and the Microsoft Edge browser on Tuesday of last week.

Google revealed Bard, a language model designed for dialogue, the next day. The reception of the two events could not have been more different. Microsoft awed much of the press and its users with a careful presentation of new features and capabilities. Google, on the other hand, received lots of bad press, especially for not revealing a finished product, but also for an error in one of the answers its AI gave.

Now that the dust has settled, it is becoming clear that Microsoft's AI is not infallible either.

Tip: check out Ashwin's first impressions of the AI-enhanced Bing.

The errors of Bing AI

Several researchers, including Dmitri Brereton, highlighted errors in answers produced by Microsoft's artificial intelligence. It should be noted that not all of the answers contained errors.

The pro and con list that Bing generated for the Bissell Pet Hair Eraser Handheld Vacuum made it sound like a bad product. Bing cited limited suction power, a short cord and noise as the main cons. Brereton looked up the source article and the product itself, and concluded that the source did not contain the negative information about the product that Bing cited.

Brereton spotted another error when Bing's AI was tasked with writing a 5-day trip itinerary for Mexico City. Some of the night clubs that Bing suggested appear to be less popular than the AI made them out to be. Some descriptions were inaccurate, and in one instance, Bing's AI failed to mention that it was recommending a gay bar.

Brereton also found issues with Microsoft's demonstration of the AI summarizing Gap's financial statement. Several of the numbers in the summary were wrong, and in one case, a number provided by the AI did not exist in the financial document at all.

The comparison with Lululemon's data contained wrong numbers as well. Lululemon's figures were inaccurate to begin with, which meant that the AI compared inaccurate Gap data with inaccurate Lululemon data.

Closing Words

There is a reason why experts advise users and organizations to verify information provided by AI. It may sound plausible, but verification is essential. Microsoft's AI does list its sources, but that is of little help if a source cannot be used for verification, as it may not include all the data that the AI provided. Sources may be incomplete, which Microsoft and other companies should address quickly.

Microsoft's stock did not tank after its presentation, and one reason for that is that its errors were not as obvious as Google's. One would have to look up the product, the suggested bars or the financial reports, and compare them to the answers provided by the AI, to encounter these errors.

All in all, it is clear that while AI may be useful and helpful, it is also miles away from providing information that one can trust without verification.

Now You: do you plan to use AI in the coming years?

The post Microsoft Bing AI has been proven wrong appeared first on gHacks Technology News.
