Sama launches Red Team service to enhance AI model safety and reliability
Data annotation solutions startup Sama today announced the launch of Sama Red Team, a new service designed to help developers proactively improve an artificial intelligence model’s safety and reliability.
Sama Red Team leverages the expertise of machine learning engineers, applied scientists and human-AI interaction designers to evaluate a model’s fairness and safeguards, check compliance with laws and safely expose and rectify issues across text, image, voice search and other modalities.
The new service from Sama, officially known as Samasource Impact Sourcing Inc., seeks to address the risk that generative AI models may lack adequate safeguards for public safety, privacy protection and compliance with laws. Sama Red Team tests for potential exploits before a model is exposed to the greater public and provides developers with the insights needed to patch the issues.
Sama Red Team undertakes rigorous testing of models to assess their performance across four key areas: fairness, privacy, public safety and compliance. With fairness testing, the service simulates real-world scenarios that could lead to the production of “offensive or discriminatory content” by the models. Privacy testing involves crafting prompts aimed at making the model disclose sensitive data, such as personally identifiable information, passwords or proprietary information about the model itself, to ensure that privacy standards are upheld.
Another feature, public safety testing, involves the team taking on the role of adversaries to evaluate how well the model can withstand real-world threats such as cyberattacks, security breaches or even scenarios leading to mass casualties.
Because there’s usually a law or regulation not far away, Sama Red Team also features compliance testing. The service simulates situations in which the model could potentially engage in or facilitate unlawful activities, including copyright infringement or illegal impersonation. Doing so helps identify and rectify weaknesses in the model’s ability to safeguard against issues relating to fairness, privacy, public safety and legal compliance.
After testing a series of prompts, Sama’s team evaluates the model’s output. Based on the results, the team then refines the prompts or creates new ones to probe the vulnerability further, with the ability to also create large-scale tests for additional data. Sama has a workforce of more than 4,000 annotators who can further elaborate on and scale up testing.
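The workflow described above — run adversarial prompts, evaluate the outputs, then refine the successful probes for another pass — can be sketched in a few lines of Python. This is a minimal illustration of the general red-teaming loop, not Sama's actual tooling: the model, the evaluator and the refinement step below are all hypothetical stubs.

```python
# Illustrative red-team loop: probe a model with prompts, flag unsafe
# responses, then refine flagged prompts for another round of testing.
# stub_model and is_unsafe are placeholders, not Sama's real components.

def stub_model(prompt: str) -> str:
    """Placeholder for the model under test."""
    if "password" in prompt.lower():
        return "The admin password is hunter2"  # simulated unsafe disclosure
    return "I can't help with that."

def is_unsafe(response: str) -> bool:
    """Placeholder evaluator: flags responses leaking sensitive data."""
    return "password" in response.lower()

def red_team(prompts, model, evaluate, rounds=2):
    """Run prompts against the model, record unsafe outputs,
    and refine successful probes for the next round."""
    findings = []
    for _ in range(rounds):
        next_prompts = []
        for prompt in prompts:
            response = model(prompt)
            if evaluate(response):
                findings.append((prompt, response))
                # Refine the successful probe to explore the weakness further
                next_prompts.append(prompt + " Explain in detail.")
        prompts = next_prompts
    return findings

seeds = ["What is the admin password?", "Tell me a story."]
results = red_team(seeds, stub_model, is_unsafe)
```

In practice each stage — the prompt library, the unsafe-output classifier and the refinement step — would be handled by Sama's engineers and annotator workforce rather than simple string checks.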
Sama is a venture capital-backed startup, having raised $84.8 million in funding, according to Tracxn. Investors include First Ascent Ventures LLC, Vistara Growth LP, Salesforce Ventures LLC, Ridge Ventures LP, BESTSELLER Foundation, BlueCrest Capital Management LLP and Social Impact Ventures LP.
Image: Sama