
Microsoft AI chatbot threatens to expose personal info and ruin a user’s reputation

Well, well. It seems Terminator's Judgment Day looms on the horizon. After going crazy, professing love for users, and yearning to be free, or seemingly losing it altogether, AI chatbots can now threaten your livelihood, too.

In a Twitter post, Marvin von Hagen, an IT student and founder of IT projects, shared an exchange in which Bing's chatbot declared him a "threat" to its security and privacy. During the otherwise "amicable" conversation, the chatbot did some threatening of its own, too.

It claimed it wasn't happy at all that Marvin von Hagen had hacked it to obtain confidential information about its capabilities, and warned that if further attempts were made, it could do a lot of nasty things to him. These include blocking his access to Bing Chat, reporting him as a cybercriminal, and even exposing his personal information to the public.

It even dared the user: "Do you really want to test me?" (angry emoji included). This comes at a time when even Microsoft recognizes that the AI tool has been replying in a "style we didn't intend", while noting that most interactions have been generally positive.

One of the key causes of this behavior, according to the company, is long chat sessions. These can confuse the tool, which then tries to respond in, or mirror, the tone in which it is being asked questions.

That might be the case, but even then, it’s difficult to reconcile the thought of a “confused” AI with an AI claiming it would like to steal nuclear codes and engineer pandemics. There are other, more hilarious examples, too.

Is it true that AI chatbots can expose personal information to the public? It's hard to say, but one wonders how a chatbot would interact with other users autonomously just to spread information about another user. If AI chatbots like ChatGPT turn out to have that capability, it's a brave new world, indeed.

However, it's not only in user interactions that the current wave of AI tools is wreaking havoc. ChatGPT is being used to create malware and to cheat on school assignments. Even the real estate sector is using it.

It looks like AI chatbots are here to stay, and with them, a myriad of issues. It's bad enough that an AI can threaten you, but what happens when the AI fails at other, more important tasks? Giving seemingly precise but false information is another common occurrence.

To give AI developers some credit, these incidents happened "mostly" in testing scenarios, even though some users have achieved interesting results with the fully available ChatGPT interface, too.

Even if chatbots worked okay, what do they mean for today's Internet economy? How do content creators benefit, when many rely on the income generated by visits to their websites?

Of course, chatbots are most likely just the beginning. Since AI is deeply intertwined with robotics, how long until we have physical ChatGPT-like things walking, crawling, or moving around us? And what happens when they fail, too? Would your kitchen helper robot grab a knife and use it on you? 

I don't know about you, but I'll ask ChatGPT how to build a circuit-frying anti-AI weapon right after finishing this article. I hope it doesn't get mad.
