From data to intelligence: Jeff Denworth reveals Vast Data’s journey toward a smarter future
In the dynamic realm of artificial intelligence, revolutionary advancements are continuously reshaping the limits of what can be achieved.
A leader in this shift is Vast Data Inc., a data platform enterprise that is helping to change the approach to AI infrastructure for businesses. Jeff Denworth (pictured), co-founder of Vast Data, offered a glimpse into the company’s mission and the profound influence of its platform on the broader industry, during a recent interview with theCUBE.
“I think the separation between application and storage, whether you’re talking about unstructured data or database infrastructure, has always been an encumbrance to people that ultimately want to try to solve that ‘first principles’ problem,” Denworth said. “Data has a ton of gravity … but if you can essentially attach code to your data … there’s something simpler here by the synthesis of data and code. We need to work a lot over the next couple years to convince the market.”
Denworth has a background spanning more than 10 years in the realm of big data and cloud storage technologies. Before co-founding Vast, Denworth served as senior vice president of marketing at CTERA Networks Ltd. and vice president of marketing at DataDirect Networks Inc., where he directed marketing, business and corporate development during a phase of rapid sales expansion. Preceding his time at DDN, Denworth worked in sales positions at Cluster File Systems Inc. and Dataram Corp.
This feature is part of SiliconANGLE Media’s ongoing series exploring the latest developments in the data storage and AI market.
Revolutionizing data architecture
Vast is a company deeply rooted in the creation of cutting-edge distributed systems architecture, Denworth explained.
“We studied organizations like Facebook and Google, and it was less around big data … and more around all the real unstructured data that inform some of these really rich models,” he added.
This shift highlights the importance of handling unstructured data beyond traditional notions of structured datasets, as well as the need for novel solutions that harness the power of massive GPU systems and apply it effectively to unstructured data reservoirs.
Recent developments from Vast underscore its commitment to revolutionizing data infrastructure: The company became the first enterprise distributed storage systems company to receive support for Nvidia Corp.'s SuperPOD architecture. SuperPODs are the pinnacle of Nvidia's machines, deployed by those creating foundation models in the market.
“We announced that we were the first enterprise distributed storage systems company to be supported for Nvidia SuperPOD architecture,” Denworth said. “That requires a very specialized type of infrastructure that, up until now, has been kind of the domain of the large hyperscalers. We think there’s a huge opportunity to just go democratize that to the masses.”
There are many challenges that global organizations have to deal with when it comes to large datasets and data gravity, including concepts such as federated learning. Robust data platforms must unify disparate systems and enable globally distributed models for a cohesive view and efficient data processing, according to Denworth.
In a landscape where cloud computing and data-driven AI are converging, Denworth envisions a future where data platform solutions will be central to stitching together diverse systems, enabling distributed processing and unlocking the full potential of unstructured data in the AI equation.
Reshaping data storage: The fusion of AI and innovative infrastructure
The landscape of data storage is in the midst of a remarkable transformation, driven by the fusion of data and AI. Amid the quest to unlock data’s immense potential, firms such as Vast Data are reimagining conventional storage paradigms. Denworth’s vision for democratizing advanced infrastructure and harnessing unstructured data’s potency offers a glimpse into the groundbreaking strides undertaken to reshape the industry.
“People are now starting to architect strategies to collect tons more data … for the first time ever, they can actually go and process it,” he said in a recent interview with theCUBE.
This synergy, in turn, births novel infrastructure models and data resources, unlocking fresh avenues for value creation and application, Denworth added. As the demarcation between data and AI blurs, organizations are compelled to reevaluate their data storage and management strategies, with a heightened emphasis on platforms that can effectively exploit the potential of these technologies.
“Customers are looking for something more integrated,” Denworth said. “There’s a huge division in the market right now between products and services that were designed for the era of big data and what is needed to make that next step into the era where we can actually process natural data — data that doesn’t fit in a classic data warehouse.”
Beyond the conventional discourse on big data centering around structured information, Vast recognizes the worth of unstructured data that fuels intricate models.
"Data input drives the value, and the intellectual property issue is privacy. This is an important part of the data," Denworth said.
Envisioning data’s evolution: A co-founder’s perspective
The concept of the data developer holds significant promise, according to Denworth. The reality is that no singular platform will dominate. Rather, success entails a strategic amalgamation of platforms capturing substantial market share.
Denworth cautions against chasing trends, commonly referred to as “shiny objects.” Instead, he encourages forward-looking inventive solutions, recognizing that prevailing trends have typically been tackled by entities with significantly greater resources. This viewpoint champions true innovation and proactive problem-solving.
“We’re working with those hyperscalers that are building the largest of the foundation models. And if we can figure out how to get those things to scale to five, 10,000 GPUs, hundreds of petabytes of infrastructure, then the hope and the expectation is we can go apply that out to the rest of the enterprise that’s lagging those companies by a few years,” Denworth said.
Considering the prevailing notion of "good enough" in contemporary open-source scenarios, what discussions will shape the industry in five years? Denworth envisions a potential shift away from an exclusive focus on large language models toward abstracting complexity and prioritizing data. In his view, operational data will fade into seamless invisibility, supporting application integration while data infrastructure scales organically.
This transformative vision underscores data’s enduring importance while signaling emergent paradigms. Presently, advanced models largely mirror their training data; however, the future portends amplified computational capabilities and unprecedented data abundance. The ultimate goal lies in enabling machines to comprehend and reason, unveiling a realm of exciting possibilities.
“Today, these models are basically just parroting out what they learned on some Twitter scrape or something like that,” Denworth said. “Five years from now, you’ll have compute horsepower and data sets that are five, 10 times, 20 times more than what we have today. We’re looking forward to a world where you actually have machines that can start to reason and understand the fundamentals of what they’re talking about. And that’s exciting.”
Photo: SiliconANGLE