Juniper Networks unveils Ops4AI Lab and designs to help customers fast-track AI deployments
There has been plenty of hype and ballyhoo around artificial intelligence and networking, but much of the vendor focus has been on AI for networking, where AI is used to improve network operations.
The other side of the AI coin is networking for AI, where a network must be designed and provisioned to support an AI implementation. Though many businesses will likely deploy AI in the cloud, making the supporting network the problem of the hyperscaler, 58% of respondents to a recent ZK Research/theCube Research study stated they have deployed or will be deploying AI in their own private data center. To do this, they must deploy a network that can support the AI cluster.
The challenge is that networking for AI isn’t cookie-cutter, and the risk of doing it wrong is a lot of wasted time and money, neither of which companies have to spare today. To help, Juniper Networks Inc. on July 16 announced a multi-vendor lab for validating end-to-end automated AI data center solutions and operations. The lab integrates switching, routing, storage and compute solutions from various vendors.
In addition, the company introduced Juniper Validated Designs, or JVDs, aimed at accelerating AI cluster deployment, and is rolling out software enhancements to optimize AI workload performance and management over Ethernet. These initiatives, branded “Ops4AI,” involve collaboration with numerous infrastructure partners to enhance AI workload performance and simplify data center management.
A validated design is the best way to ensure that all the knobs and levers in an AI environment are set correctly. In our study, we asked what vendors can do to make buyers comfortable deploying Ethernet for AI. The top response was “Create validated blueprints,” which highlights the need for them.
Amit Sanyal, Juniper’s head of data center product marketing, briefed me recently on the announcement. “AI is top-of-mind for everybody out there right now,” he told me. “From an infrastructure perspective, the stack starts with the hardware layer. But there’s a significant amount of infrastructure software that goes on top. And on top of that, you have all these large language models, gen AI models, and then you develop applications, agents, AI applications and agents.”
He said that the complexity and scale of the LLMs have been increasing rapidly, which means computing needs are growing just as fast. Sanyal told me that Juniper has been designing and building switch systems and solutions from the ground up with AI in mind.
“This manifests itself in two ways,” he said. “This brings all the goodness of AI into networking — something we’ve done for nearly 10 years. And we are bringing all the goodness of AIOps into the rest of the portfolio. We’ve extended it to data centers now, so that when you run and operate the networking, you can use AI to help simplify management.”
Networking for AI
A central component of Juniper’s AI-Native Networking Platform is its Networking for AI solution, which features a spine-leaf data center architecture built on AI-optimized 400G and 800G QFX Series Switches and PTX Series Routers. The company says this solution includes high-performance firewalls, Juniper Apstra data center assurance software and the Marvis Virtual Network Assistant.
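To give a rough sense of what a leaf-spine fabric implies in hardware terms, here is a minimal, hypothetical sizing sketch in Python. The port counts are generic assumptions, not QFX or PTX model specifications, and the calculation is not drawn from Juniper’s validated designs; it simply works through the usual non-blocking two-tier arithmetic.

```python
# Illustrative back-of-the-envelope sizing for a non-blocking two-tier
# (leaf-spine) AI fabric. Port counts are generic assumptions, not
# specifications of any particular switch model or vendor design.
import math


def size_two_tier(gpus: int,
                  leaf_down_ports: int = 32,
                  leaf_up_ports: int = 32,
                  spine_ports: int = 64) -> dict:
    """Size a non-blocking two-tier fabric for a GPU cluster.

    Assumes one fabric-facing NIC port per GPU, equal downlink and
    uplink capacity on every leaf (the usual non-blocking condition),
    and one uplink from every leaf to every spine. In practice, small
    clusters often use fewer spines with parallel leaf-spine links.
    """
    max_leaves = spine_ports                  # each spine terminates one link per leaf
    max_gpus = max_leaves * leaf_down_ports   # two-tier capacity ceiling
    if gpus > max_gpus:
        raise ValueError(f"{gpus} GPUs exceeds two-tier capacity of {max_gpus}; "
                         "a third tier or higher-radix switches would be needed")
    leaves = math.ceil(gpus / leaf_down_ports)
    spines = leaf_up_ports                    # one uplink from each leaf to each spine
    return {"gpus": gpus, "leaves": leaves, "spines": spines, "max_gpus": max_gpus}


if __name__ == "__main__":
    for cluster in (256, 1024, 2048):
        print(size_two_tier(cluster))
```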
Juniper Apstra and Marvis offer Ops4AI capabilities such as intent-based networking, multi-vendor switch management, application/flow/workload awareness, AIOps proactive actions and a gen AI conversational interface. The solution aims to make AI training more efficient and improve graphics processing unit utilization, and the company claims it can reduce deployment times by up to 85% and operational costs by up to 90%.
The Ops4AI software features include:
- Fabric autotuning for AI, which draws on telemetry from routers and switches to automatically configure optimal congestion-control settings via Juniper Apstra’s closed-loop automation for peak AI workload performance (a conceptual sketch of this closed loop follows the list).
- Global load-balancing that provides real-time AI traffic load-balancing by monitoring network congestion hotspots, resulting in lower latency, better network utilization and reduced job completion times (JCTs).
- End-to-end visibility from network to SmartNICs, which offers a holistic view of the network, including smart network interface cards from Nvidia Corp. (BlueField and ConnectX) and others.
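The fabric autotuning item above describes a closed loop: telemetry from the fabric drives automatic adjustment of congestion-control settings. The minimal Python sketch below illustrates that idea in the abstract; the telemetry fields, thresholds and the apply_settings stub are illustrative assumptions, not Juniper Apstra’s API or Juniper’s actual tuning logic.

```python
# Hypothetical sketch of closed-loop congestion-control tuning.
# Not Juniper Apstra's API: the telemetry fields, thresholds and the
# apply_settings() stub are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class SwitchTelemetry:
    """Simplified per-switch counters a fabric controller might poll."""
    name: str
    avg_queue_depth_kb: float   # average egress queue depth
    ecn_marked_pct: float       # share of packets marked with ECN
    pfc_pause_frames: int       # priority flow control pauses observed


@dataclass
class CongestionSettings:
    """Example knobs for ECN-based congestion control on an AI fabric."""
    ecn_min_threshold_kb: int
    ecn_max_threshold_kb: int


def tune(telemetry: SwitchTelemetry,
         current: CongestionSettings) -> CongestionSettings:
    """Derive new ECN thresholds from observed congestion signals.

    The rules are deliberately simple: if the switch is pausing traffic
    (PFC) or queues are deep, mark earlier; if the fabric is quiet,
    relax the thresholds to improve utilization.
    """
    if telemetry.pfc_pause_frames > 0 or telemetry.avg_queue_depth_kb > current.ecn_max_threshold_kb:
        return CongestionSettings(
            ecn_min_threshold_kb=max(50, current.ecn_min_threshold_kb // 2),
            ecn_max_threshold_kb=max(200, current.ecn_max_threshold_kb // 2),
        )
    if telemetry.ecn_marked_pct < 1.0 and telemetry.avg_queue_depth_kb < current.ecn_min_threshold_kb:
        return CongestionSettings(
            ecn_min_threshold_kb=current.ecn_min_threshold_kb * 2,
            ecn_max_threshold_kb=current.ecn_max_threshold_kb * 2,
        )
    return current


def apply_settings(switch: str, settings: CongestionSettings) -> None:
    """Stand-in for pushing the new intent to a fabric controller."""
    print(f"{switch}: ECN thresholds -> "
          f"{settings.ecn_min_threshold_kb}/{settings.ecn_max_threshold_kb} KB")


if __name__ == "__main__":
    # One pass of the loop over synthetic telemetry from two leaf switches.
    fleet = [
        SwitchTelemetry("leaf1", avg_queue_depth_kb=900.0, ecn_marked_pct=12.0, pfc_pause_frames=40),
        SwitchTelemetry("leaf2", avg_queue_depth_kb=80.0, ecn_marked_pct=0.2, pfc_pause_frames=0),
    ]
    settings = CongestionSettings(ecn_min_threshold_kb=150, ecn_max_threshold_kb=600)
    for t in fleet:
        apply_settings(t.name, tune(t, settings))
```

In a real deployment this loop would run continuously against live telemetry, which is the point of pairing it with intent-based automation: the operator states the desired outcome and the system keeps adjusting the knobs.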
Juniper created the Ops4AI Lab to foster collaboration with its partner ecosystem, including Broadcom Inc., Intel Corp., Nvidia, WekaIO Inc. and other industry leaders. Located at Juniper’s Sunnyvale, California, headquarters, the lab is open to qualified customers and partners that wish to test their AI workloads using advanced GPU compute, storage technologies, Ethernet-based networking fabrics and automated operations. Juniper says the lab demonstrates that validated Ethernet fabrics can deliver performance comparable to InfiniBand-based AI infrastructure.
Juniper Validated Designs
JVDs offer detailed implementation documents to ensure new customers can confidently deploy well-characterized, well-tested and repeatable solutions. These pre-validated blueprints include AI data center designs built on Nvidia A100 and H100 compute, storage from Juniper’s ecosystem partners, and Juniper’s data center leaf and spine switches.
The new Ops4AI JVD complements existing JVDs for automated, secure data centers, incorporating QFX and PTX spines, QFX leaf switching, data center automation, and Juniper’s SRX and vSRX/cSRX solutions for data center security.
Some final thoughts
Juniper’s approach is intriguing, focused on optimization, load balancing and visibility, hallmarks of the company for the past quarter-century. The Ops4AI Lab will have the company working closely with heavyweights such as Broadcom and Nvidia, which should serve Juniper well.
One of the aspects I like most about this announcement is that it brings together AI for networking and networking for AI. It has been my thesis that customers need the former to do the latter. Put a bit more simply, networking for AI is sufficiently complex that network engineers can only support it through AI.
Zeus Kerravala is a principal analyst at ZK Research, a division of Kerravala Consulting. He wrote this article for SiliconANGLE.