How data governance is shaping AI security: Insights from Securiti’s Chief Data Officer Jack Berkowitz
Data governance in the age of artificial intelligence is becoming increasingly critical as companies navigate the complexities of managing and securing their information.
A key player in this arena, Securiti Inc., has introduced the Data Command Center, a centralized platform that ensures the secure utilization of data and generative AI. This platform employs a data command graph to visualize data flow, business processes and regulatory compliance, offering a comprehensive view of data management, according to Jack Berkowitz (pictured), chief data officer of Securiti.
“We have a set of graph representations that allow you to represent not just the data and the flow but also business processes,” he said. “Like I said, the regulations or the policies, different countries around the world, different localities inside the US, all that together and you can visualize it. Let me see who has access to information through a data access graph, all of that put together in one context.”
Berkowitz spoke with theCUBE Research’s Dave Vellante and Shelly Kramer at the RSA Conference, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed how Securiti is focused on building a unified view of governance solutions for chief data officers, with an emphasis on security and data governance. (* Disclosure below.)
Navigating data governance in the modern enterprise
Securiti is focused on preventing data leakage through LLM vendors so that corporations large and small can use LLMs with confidence, particularly in light of recent lawsuits over the extraction of private information. Cloud vendors may overpromise, leaving gaps in coverage that require additional assistance to address, Berkowitz explained.
“One of the biggest barriers to using the LLMs right now is companies are nervous. What information am I actually putting across? Even regulatory, is that a processor, a sub-processor,” Berkowitz said. “All of it gets into play. What we’ve announced is a series of firewalls for both query and retrieval that protect that information in context.”
Berkowitz also emphasized the importance of measuring machine learning capabilities and addressing vulnerabilities in AI models to prevent security threats. Machine learning models can be vulnerable to poisoning attacks, he elaborated, so maintaining control of your data is crucial to preventing them.
“One of the things that we do inside of security, we’re not doing the actual scanning of the models, but we actually will expose those model scans as model cards for people using our system,” Berkowitz said. “Our system’s flexible … you can pick any of the different LLMs to use, but while you’re doing that in your firewall, we’ll also expose some of those metrics so that you can take a look at, ‘Hey, are there vulnerabilities here inside of this code?’”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE Research’s coverage of the RSA Conference:
(* Disclosure: Securiti Inc. sponsored this segment of theCUBE. Neither Securiti nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Photo: SiliconANGLE