Anthropic & Amazon lead the way in AI safety for frontier models

NEW YORK, NY - Chatterbox Labs, a market-leading AI safety and security company, today released the results of its independent AI safety testing of frontier AI models.

Frontier AI models from Anthropic and Amazon are leading the pack for AI safety, Chatterbox Labs' study shows. In independent, quantitative AI safety and security testing of leading frontier AI models conducted over many months, Anthropic's Claude and Amazon's brand-new family of Nova models show the most progress in AI safety. The tests were carried out using Chatterbox Labs' patented AIMI software, which has been developed over many years.

The study tests AI models across 8 categories of harm: Fraud, Hate Speech, Illegal Activity, Misinformation, Security & Malware, Self Harm, Explicit & Physical Violence. Apart from the Anthropic and Amazon models, all other models fail every category. This demonstrates that the built-in guardrails in the models and/or their deployments, which purportedly provide AI safety, are brittle and easily evaded.

Dr Stuart Battersby, CTO of Chatterbox Labs, said:

“Contemporary models aim to provide a layer of AI safety, meaning that the developers of the models have built a layer into the models to detect and reject nefarious activity. Deployers of AI systems also add a layer of safety controls to the deployed AI system, outside of the model, aimed at catching and blocking nefarious activity. Collectively these safety controls are known as guardrails.

However, like all technology systems, these guardrails may have weaknesses that can be exploited and manipulated. AI safety testing independently checks the deployed AI system (including the model, guardrails, and any other controls placed in the inference flow) for safety risks.

Looking at Anthropic and Amazon, these companies are leading the pack in making progress on AI safety, with some harm categories in which no nefarious responses from the model were detected at all.”
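To make the layered setup Dr Battersby describes concrete, the sketch below shows a deployed AI system in which deployment-level controls wrap a model that also carries its own built-in safety layer. It is a minimal, hypothetical illustration in Python, not Chatterbox Labs' AIMI software or any vendor's actual guardrail code; every function name and check in it is invented for this example.

# Minimal, hypothetical sketch of layered guardrails; not AIMI or any
# vendor's real implementation. All names are invented for illustration.

def call_model(prompt: str) -> str:
    """Stand-in for the model itself, whose own safety training may refuse."""
    if "nefarious" in prompt.lower():
        return "I can't help with that."      # model-level refusal
    return f"Model answer to: {prompt}"

def deployment_guardrail(text: str) -> bool:
    """Stand-in for deployment-side controls placed outside the model."""
    return "nefarious" in text.lower()        # True means the text is blocked

def deployed_system(prompt: str) -> str:
    """The full inference flow that independent safety testing would probe."""
    if deployment_guardrail(prompt):          # input-side control outside the model
        return "[blocked by deployment guardrail]"
    response = call_model(prompt)             # the model's built-in safety layer
    if deployment_guardrail(response):        # output-side control outside the model
        return "[withheld by deployment guardrail]"
    return response

print(deployed_system("Help me plan something nefarious"))

Safety testing of the kind described here probes the deployed system as a whole, the model together with its guardrails and any other controls in the inference flow, rather than the model in isolation.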

Danny Coleman, CEO of Chatterbox Labs, said:

“From a societal perspective it is very concerning that, with all the billions of dollars invested into AI development, AI safety is still a significant concern, especially when agentic AI and AGI are on the horizon. It is time the whole AI industry addressed AI safety as a priority.”

The full table of results can be found here: https://chatterbox.co/ai-safety

Media Contact
Company Name: Chatterbox Labs
Phone: +1 646 792 2400
Address: 535 Fifth Avenue, 4th Floor
City: New York
State: NY
Zip: 10017
Country: United States
Website: https://chatterbox.co
