Protect AI warns of increasing security risks in open-source AI and ML tools
A new report released today by Protect AI Inc. is sounding the alarm over increasing security vulnerabilities in popular open-source artificial intelligence and machine learning tools, including 20 critical flaws found across widely used AI and ML projects.
Found through Protect AI’s AI/ML “huntr” bug bounty program, which has more than 15,000 community members, the vulnerabilities affect widely used tools such as ZenML, lollms and AnythingLLM. The critical issues range from privilege escalation to local file inclusion and path traversal attacks, any of which can lead to unauthorized access, data breaches and complete system takeover.
In ZenML, the flaws include a privilege escalation issue that allows unauthorized users to elevate their privileges to the server account by sending a crafted HTTP request. An attacker exploiting the flaw could potentially compromise the entire system, gaining unauthorized access and control.
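To illustrate the bug class, here is a minimal and purely hypothetical Python sketch (not ZenML’s actual code; the route, header and role names are invented) of how an endpoint that derives authorization from client-controlled request data can let a crafted HTTP request claim server-level privileges:

```python
# Hypothetical sketch of the privilege-escalation class described above.
# This is NOT ZenML's code; the route, header and role names are invented.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

@app.route("/api/admin/users")
def list_users():
    # VULNERABLE: authorization is derived from a header the client
    # controls rather than from a server-side session or token lookup,
    # so any request can simply claim the "server" role.
    if request.headers.get("X-Role") != "server":
        abort(403)
    return jsonify(users=["alice", "bob"])
```

A robust version would resolve the caller’s role from an authenticated server-side session or verified token, never from anything the request itself supplies.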
Another critical vulnerability discovered through Protect AI’s bug bounty program is a local file inclusion flaw in lollms that allows attackers to read or delete sensitive files on the server. The flaw stems from improper sanitization of Windows-style paths, leaving the application open to directory traversal attacks.
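As an illustration of that failure mode, the following Python sketch (hypothetical code, not taken from lollms) filters only POSIX-style “../” sequences and therefore misses Windows-style separators entirely:

```python
import os

def sanitize_path(user_path: str) -> str:
    # Hypothetical sanitizer that strips only POSIX-style "../" sequences,
    # mirroring the class of bug described above; NOT lollms' actual code.
    return user_path.replace("../", "")

# A Windows-style payload passes through untouched, because the filter
# never considers backslash separators.
payload = "..\\..\\..\\Windows\\win.ini"
print(sanitize_path(payload))  # unchanged: ..\..\..\Windows\win.ini

# On Windows, backslashes are path separators, so joining and normalizing
# the "sanitized" value still escapes the intended directory.
print(os.path.normpath(os.path.join("uploads", sanitize_path(payload))))
```

On a Windows host, the final line resolves to a location outside the intended directory, which is exactly the kind of traversal the researchers describe.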
Bug hunters also discovered a path traversal vulnerability in AnythingLLM that allows attackers to read, delete or overwrite critical files, including the application’s database and configuration files. The bypass, found in the normalizePath() function, can lead to data breaches, application compromise or denial of service.
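AnythingLLM is a Node.js application, so its normalizePath() function lives in JavaScript; the Python sketch below (illustrative only, with a hypothetical storage directory) shows the general defensive pattern of validating the fully resolved path rather than filtering strings:

```python
from pathlib import Path

# Hypothetical base directory; AnythingLLM's real layout will differ.
STORAGE_DIR = Path("/var/anythingllm/storage").resolve()

def safe_resolve(user_path: str) -> Path:
    # Join the untrusted input to the base directory, resolve symlinks
    # and ".." components, then confirm the result is still inside the
    # base. Validating the resolved path closes bypasses that survive
    # naive string filtering of "../" sequences.
    candidate = (STORAGE_DIR / user_path).resolve()
    if not candidate.is_relative_to(STORAGE_DIR):  # Python 3.9+
        raise ValueError(f"path escapes storage directory: {user_path!r}")
    return candidate

print(safe_resolve("workspace/config.json"))  # allowed
safe_resolve("../../etc/passwd")              # raises ValueError
```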
The vulnerabilities were disclosed ethically, with maintainers given a minimum of 45 days to fix the issues before the details were shared with the public. Protect AI also worked with maintainers to ensure a timely fix before publication.
“Through our own research and the huntr community, we’ve found the tools used in the supply chain to build the machine learning models that power AI applications to be vulnerable to unique security threats,” Protect AI writes in the report. “These tools are open source and downloaded thousands of times a month to build enterprise AI Systems… They also likely come out of the box with vulnerabilities that can lead directly to complete system takeovers such as unauthenticated remote code execution or local file inclusion.”
Protect AI was last in the news in May, when it unveiled Sightline, a vulnerability database that provides insight into known and emerging AI and machine learning vulnerabilities, along with an early warning system to defend against threats.