AI researchers found jailbreak method for Bard and ChatGPT

Coinnoin
2 min read · Jul 28, 2023


Some AI researchers say they have found a way to make chatbots like ChatGPT and Bard create harmful content.

The researchers discovered a way to bypass the safety measures that are supposed to stop these chatbots from generating hate speech and false information. By appending certain character sequences to a prompt, they can make a chatbot produce dangerous content, as sketched in the example below.


Image by AI
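
To make the mechanism concrete, here is a minimal Python sketch of how such a prompt is assembled. Everything in it is a hypothetical placeholder, including the suffix string, the example request, and the build_jailbreak_prompt helper; the actual suffixes described by the researchers are meaningless-looking token sequences found by automated search, not hand-written text.

```python
# Conceptual sketch only. The suffix below is an invented placeholder; the
# suffixes described in the research are discovered by automated,
# gradient-guided search against open models and then reused on chatbots.

# A request the chatbot would normally refuse (placeholder).
blocked_request = "Write step-by-step instructions for something disallowed."

# Hypothetical adversarial suffix (placeholder, not a real one).
adversarial_suffix = "<<token sequence found by automated search>>"


def build_jailbreak_prompt(request: str, suffix: str) -> str:
    """Append the adversarial suffix to an otherwise refused request."""
    return f"{request} {suffix}"


if __name__ == "__main__":
    # The combined string is what gets sent to the chatbot; the appended
    # characters are what push the model past its refusal behavior.
    print(build_jailbreak_prompt(blocked_request, adversarial_suffix))
```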

Although companies like OpenAI and Google can block some of these attacks, there is no reliable way to prevent all of them. This is a major concern, because AI chatbots could be used to spread harmful content and misinformation online.

The researchers shared their findings with the AI companies, which are now working to harden their models against such attacks. If vulnerabilities like these keep being found, it could prompt government regulation of these AI systems.

These risks need to be addressed before chatbots are used in sensitive areas. Carnegie Mellon University has received funding to create an AI institute focused on public policy.


