Embracing AI and cyber resiliency: IBM enhances its portfolio as storage moves into the enterprise spotlight
Speeds and feeds in the storage market are so 2020. Now the conversation is all about data security and how storage can enable artificial intelligence to drive business results.
This was one of the key messages that emerged from two major events hosted by theCUBE, SiliconANGLE Media’s livestreaming studio, and IBM Corp. in July and October. Executives from IBM, company partners and industry experts appeared on the programs to share their insights into how the world of storage has undergone its own transformation.
“Although storage has presented itself mainly as a background service, it’s becoming more and more forefront in the consideration for what an appropriate data strategy needs to look like,” said Scott Baker, chief marketing officer and vice president of IBM Infrastructure Portfolio product marketing at IBM. “The future of storage is expected to be characterized by the extensive use of AI to unlock data value and provide a solid foundation for businesses to anticipate and withstand market shifts.”
Baker spoke to theCUBE in July during the IBM Storage Summit, an exclusive broadcast on theCUBE that focused on new capabilities for data and AI. In October, theCUBE offered a second exclusive broadcast, “Beyond Firewalls: Resilience Strategies for All,” that featured interviews with IBM executives and partners to discuss how storage now plays an important role in strengthening cyber resilience against data attacks. (* Disclosure below.)
Shifting workloads for the right data, in the right place
The influence of AI in storage gained significant momentum in 2023 as organizations pursued uses for the technology in several operational areas. IBM noticed a shift in workloads among its customers that required a different approach to scaling and managing data flows through the information supply chain.
“We’ve started to see changes in workloads from media and entertainment, healthcare, life sciences [and] financial services sectors,” said Christopher Maestas, worldwide executive solutions architect at IBM, in an interview during the Storage Summit in July. “AI really has changed it, because it picked the middle of the road — not the itty-bitty files that you see or the large streaming data that you’ve been doing. We’re really seeing that data size change and, again, having to adapt to a different data size that we’ve not traditionally handled in the past.”
To accommodate the changes in data size driven by AI, IBM launched several new initiatives for its storage portfolio. These included the introduction of the IBM Storage Scale System 6000, a scale-out file and object storage system targeted at unstructured data and AI and machine learning workloads.
The enhancements to IBM’s Storage Scale platform highlighted the company’s focus on compression, the ability to shrink the size of stored data. Compression can maximize storage capacity, improve the speed of file transfers and lower overall costs. Updates to the Storage Scale System 6000 included inline, hardware-accelerated data compression using 38TB FlashCore Module drives.
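To make the capacity arithmetic concrete, here is a minimal sketch using Python’s standard-library zlib module. The sample data and the resulting ratio are purely illustrative and say nothing about what IBM’s hardware-accelerated compression achieves on real workloads.

```python
import zlib

# Illustrative only: compress a block of repetitive, log-style data and
# compare the stored size before and after. Real ratios depend entirely on
# the data; a storage array with hardware-accelerated compression does this
# inline, without burning host CPU cycles.
original = b"2023-10-01 12:00:00 INFO request served in 12ms\n" * 10_000

compressed = zlib.compress(original)

ratio = len(original) / len(compressed)
print(f"original:   {len(original):>9,} bytes")
print(f"compressed: {len(compressed):>9,} bytes")
print(f"ratio:      {ratio:.1f}:1  (effective capacity scales accordingly)")
```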
“Compression is a huge concept in storage,” said Sarbjeet Johal, guest analyst, during a panel discussion on theCUBE’s broadcast in July. “How much you can compress and where we are storing it, that also matters. We still have a long way to go to bring intelligence into storage, because we are storing a lot more data.”
IBM also enhanced its Storage Fusion offering, a data services solution that integrates compute, storage and networking into a hyperconverged system. The objective was to eliminate frustration among data scientists who were bogged down by endless searches for the right information, according to Pete Brey, global product executive, IBM Storage Fusion.
“We have unique capabilities in terms of being able to catalog and label and tag the data so it’s quickly and easily found,” said Brey, during an interview on theCUBE. “The number one problem for the data scientists today is not how long my inferencing takes or not how long it takes to do model training; can I get to the right data quickly? Some of the estimates are like 80% to 90% of their time is spent just trying to find the right data, and that’s the problem that we solve.”
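As a rough illustration of that cataloging idea, the hypothetical Python sketch below tags datasets with labels and resolves a tag query to storage locations. It is not Storage Fusion’s actual API; every name and path here is invented.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal tag-based catalog -- for illustration only.
@dataclass
class Dataset:
    name: str
    location: str
    tags: set = field(default_factory=set)

catalog = [
    Dataset("mri_scans_2023", "s3://medical/mri/2023", {"healthcare", "imaging", "training"}),
    Dataset("claims_q3", "s3://finance/claims/q3", {"financial", "tabular"}),
    Dataset("mri_scans_2022", "s3://medical/mri/2022", {"healthcare", "imaging"}),
]

def find(required_tags: set) -> list:
    """Return every dataset carrying all of the requested tags."""
    return [d for d in catalog if required_tags <= d.tags]

# A data scientist asks for labeled imaging data suitable for model training
# and gets storage locations back immediately, instead of searching by hand.
for ds in find({"imaging", "training"}):
    print(ds.name, "->", ds.location)
```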
IBM’s work with Storage Fusion also offers insight into its integration with the cloud-native world and Red Hat Inc. A combination of Storage Fusion with Red Hat OpenShift has created a comprehensive platform that can support both Kubernetes applications and virtual machines.
“We work very closely with our friends at Red Hat … they have a technology called OpenShift Virtualization based on KubeVirt,” Brey noted. “Customers bringing these VMs over want to be able to co-host not just the containers, but also the VMs together and have a single substrate to support it all. That’s really what Fusion is all about.”
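To illustrate the “single substrate” point in general terms, the hedged sketch below uses the open-source Kubernetes Python client to list ordinary Pods and KubeVirt VirtualMachine custom resources from the same API server. The “demo” namespace and the presence of OpenShift Virtualization are assumptions, and this is not IBM- or Red Hat-specific code.

```python
from kubernetes import client, config

# Assumes a kubeconfig pointing at a cluster with KubeVirt
# (OpenShift Virtualization) installed. Illustrative only.
config.load_kube_config()

core = client.CoreV1Api()
custom = client.CustomObjectsApi()

# Containers: ordinary Pods, served by the core Kubernetes API.
for pod in core.list_namespaced_pod(namespace="demo").items:
    print(f"pod: {pod.metadata.name}")

# Virtual machines: KubeVirt VirtualMachine custom resources,
# served by the very same API server -- one control plane for both.
vms = custom.list_namespaced_custom_object(
    group="kubevirt.io", version="v1",
    namespace="demo", plural="virtualmachines",
)
for vm in vms.get("items", []):
    print(f"vm:  {vm['metadata']['name']}")
```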
Rising data breach costs
IBM’s storage architecture has also been designed with cybersecurity in mind, an important element given the speed and prevalence of attacks plaguing businesses today. While IBM built threat-detection capabilities into its FlashCore Modules, the adoption of AI has complicated the picture for security operations teams.
“Just like we have access to AI to help with detection, the bad guys have access to the same AI to help cause attacks faster,” said Ram Parasuraman, executive director at IBM, during an interview with theCUBE in October. “It’s about how you use and harness the tools. There’s research from IBM that states what used to take attackers 60 days to cause these attacks today take less than four days.”
Part of IBM’s message is that cyber vulnerability comes at a cost. The company’s annual “Cost of a Data Breach” study pegged the average financial hit from a breach globally at approximately $4.5 million per incident. This is where AI could make a significant difference, according to Jeff Crume, distinguished engineer, cybersecurity architect and chief technology officer of IBM Security Americas, in his appearance on theCUBE in October.
“One thing that came from that report is that the most significant way to cut the cost of a data breach was organizations who had an extensive use of AI and automation,” Crume said. “They saved on average $1.76 million off that $4.5 million. That was the most significant way to cut the cost of that data breach.”
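Taking the figures exactly as cited, the arithmetic works out to roughly a 39% reduction in the cost of a breach:

```python
# Figures as cited above from IBM's "Cost of a Data Breach" report (rounded).
average_breach_cost = 4.5e6   # ~$4.5 million average cost per breach
savings_with_ai = 1.76e6      # average savings with extensive AI and automation

remaining_cost = average_breach_cost - savings_with_ai
pct_reduction = savings_with_ai / average_breach_cost * 100

print(f"average breach cost:           ${average_breach_cost:,.0f}")
print(f"with extensive AI/automation:  ${remaining_cost:,.0f}")
print(f"reduction:                     {pct_reduction:.0f}%")   # about 39%
```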
While IBM’s customers have invested in perimeter defenses, many of them have also turned to the company for guidance in data resilience, driven by the harsh reality that breaches have become inevitable. IBM’s executives noted throughout their discussions on theCUBE that the process for protection is a journey to be followed one step at a time.
“You can take baby steps — cybersecurity [and] data resilience is not about an on/off switch,” said Del Hoobler, principal storage software advanced technology specialist at IBM, during the broadcast event in October. “It’s a spectrum, from very weak to very strong. You never can be perfectly data resilient. The most important thing is to get started.”
(* Disclosure: TheCUBE is a paid media partner for the IBM Storage Summit and the “Beyond Firewalls: Resilience Strategies for All” event. Neither IBM Corp., the sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Image: Created with the assistance of DALL·E