There's a lot going on in the ChatGPT world. Poised to become the dominant player in AI chatbots, OpenAI is also pushing for safety rails: with an in-house red team, it's working to keep ChatGPT secure.
So, the basics: what is a red team? In the cybersecurity world, the blue team is defense and the red team is offense. The red team is tasked with finding the gaps and attack vectors bad actors could exploit before they can cause harm. The blue team works to close those gaps and build solutions.
Back in March, OpenAI announced that companies like Expedia, OpenTable, and Instacart had developed plugins to let ChatGPT access their services.
While testing the third-party plugins, the team was able to send fraudulent or spam emails, bypass safety restrictions, and misuse information sent to the plugins.
But given the competitiveness of the AI industry, more plugins will be developed, and that could become a serious security problem. The addition of plugins to ChatGPT means the air gap that kept large language models from acting on someone's behalf is gone. The term "air gap" comes from the manufacturing world, where an automated machine was connected only to an internal, non-internet network.
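To make the "air gap" idea concrete, here's a minimal, hypothetical sketch (not OpenAI's actual plugin API): a text-only model can merely describe an action, but once a tool registry is wired in, the model's output can trigger real side effects, like sending an email. All names here (`send_email`, `run_model_output`, the output dict format) are made up for illustration.

```python
# Hypothetical sketch of why plugins remove the "air gap".
# Without tools, a model only emits text. With a tool registry,
# its output can trigger real-world actions (here, a mocked email sender).

sent_emails = []

def send_email(to: str, body: str) -> str:
    """Mock plugin action; a real plugin would call an external service."""
    sent_emails.append({"to": to, "body": body})
    return f"email sent to {to}"

TOOLS = {"send_email": send_email}

def run_model_output(output: dict) -> str:
    """Dispatch a (hypothetical) model response to a registered plugin."""
    if output["type"] == "text":
        return output["content"]       # air-gapped: text only, no side effects
    tool = TOOLS[output["tool"]]       # plugin era: text becomes an action
    return tool(**output["args"])

# A purely textual reply changes nothing in the world...
run_model_output({"type": "text", "content": "Here is a draft email."})

# ...but a tool call acts on the user's behalf -- or an attacker's.
run_model_output({"type": "tool_call", "tool": "send_email",
                  "args": {"to": "victim@example.com", "body": "spam"}})
print(len(sent_emails))  # 1
```

The security problem is visible in the dispatcher: whatever the model emits as a tool call gets executed, so a jailbroken or manipulated model inherits every capability its plugins expose.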
They were able to get ChatGPT to explain how to make bioweapons, synthesize bombs, and order ransomware off the dark web. 🤨😮🤯
Plugins will only make it easier to jailbreak large language models.
In the next few weeks, I'll be writing about what the industry leaders think and what's being done to keep us safe.