HPE loads up AI networking portfolio, strengthens Nvidia, AMD partnerships

Credit: Network World

At its Discover Barcelona 2025 customer event, HPE unveiled a wide range of networking gear and software to help enterprise customers move efficiently into the AI networking era. The rollout includes new switches and routers as well as deepened partnerships with AMD and Nvidia. In addition, HPE demonstrated how it plans to integrate the AI technology it acquired from Juniper in the deal finalized in July.

HPE plans to combine the best of HPE Aruba Networking Central — the flagship cloud-based management and orchestration platform for wired and wireless networks spanning campus, branch, and data center sites — and Juniper’s core Mist AIOps software, according to Rami Rahim, president and general manager of the HPE networking business.

The natural-language Mist AI and Marvis virtual network assistant (VNA) work by gathering telemetry and user state data from Juniper’s routers, switches, access points, firewalls, and applications to offer actionable insights and automated workflows to detect and resolve a broad range of enterprise networking problems. 

HPE is now adding the Mist Large Experience Model (LEM) to Aruba Networking Central, according to Rahim. LEM uses billions of data points from apps such as Zoom and Teams, combined with synthetic data from digital twins, to rapidly detect, fix and predict video issues.

At the same time, Aruba Networking’s Agentic Mesh technology will be available to Mist, enhancing AI-based anomaly detection and root-cause analysis. In addition, Mist will gain organizational and global NOC views from HPE Aruba Networking Central, letting customers manage across both platforms, according to Rahim.

“Mist was written largely for cloud deployment, and Aruba Central has much more of a diverse deployment model capability,” Rahim said. “Now, over time, our goal is to unify the experience of these two platforms by leveraging the microservices architecture, and cross-pollinating capabilities from one onto another.”

On the hardware front, HPE is targeting the AI data center edge with a new MX router and scale-out data center networking with a new QFX switch. Juniper’s MX series is its flagship routing family aimed at carriers and large-scale enterprise data center and WAN customers, while the QFX line serves data center customers, anchoring spine/leaf networks as well as top-of-rack systems.

The new 1U, 1.6Tbps MX301 multiservice edge router, available now, is aimed at bringing AI inferencing closer to the source of data generation and can be positioned in metro, mobile backhaul, and enterprise routing applications, Rahim said. It includes high-density support for 16 x 1/10/25/50GbE, 10 x 100GbE and 4 x 400GbE interfaces.

“The MX301 is essentially the on-ramp to provide high-speed, secure connections from distributed inference cluster users, devices and agents from the edge all the way to the AI data center,” Rahim said. “The requirements here are typically around high performance, but also very high logical scale and integrated security.”

In the QFX arena, the new QFX5250 switch, available in 1Q 2026, is a fully liquid-cooled box designed to interconnect Nvidia Rubin and/or AMD MI400 GPUs across the AI data center. It is built on Broadcom Tomahawk 6 silicon and supports up to 102.4Tbps of Ethernet bandwidth, Rahim said.

“The QFX5250 combines HPE liquid cooling technology with Juniper networking software (Junos) and integrated AIOps intelligence to deliver high-performance, power-efficient and simplified operations for next-generation AI inference,” Rahim said.
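As a rough illustration (the article does not specify the QFX5250’s port configuration), the back-of-the-envelope math below shows how a 102.4Tbps ASIC bandwidth budget divides into common Ethernet port speeds:

```python
# Port math for a 102.4 Tbps switch ASIC such as Broadcom's Tomahawk 6.
# Illustrative only: the actual QFX5250 port layout is not given in the article.

ASIC_GBPS = 102_400  # aggregate Ethernet bandwidth in Gbps (102.4 Tbps)

for port_gbps in (1600, 800, 400):
    ports = ASIC_GBPS // port_gbps  # maximum full-rate ports at this speed
    print(f"{port_gbps} GbE -> up to {ports} ports")
```

At 800GbE, for example, the full 102.4Tbps budget works out to 128 line-rate ports, which is why this class of silicon is positioned for dense GPU spine/leaf fabrics.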

Partnership expansions

Also key to HPE/Juniper’s AI networking plans are its partnerships with Nvidia and AMD. The company announced that its relationship with Nvidia now includes HPE Juniper edge on-ramp and long-haul data center interconnect (DCI) support in the Nvidia AI Computing by HPE portfolio. This extension uses the MX and Juniper’s PTX hyperscaler routers to support high-scale, secure, low-latency connections from users, devices, and agents to AI factories, as well as connections between clusters deployed across longer distances or across multiple clouds, Rahim said.

Announced earlier this year, Nvidia AI Computing by HPE is a partnership designed to accelerate enterprise AI deployment and includes Nvidia’s Enterprise AI Factory validated designs, the Spectrum-X Ethernet networking platform and Nvidia BlueField-3 data processing units (DPUs).

In addition, the vendors said they will create an AI factory lab in Grenoble, France. At this lab, customers will be able to test and refine AI workloads.

With AMD, HPE said it will support AMD’s Helios AI rack-scale architecture with integrated scale-up Ethernet networking. Specifically, the system will be a turnkey scale-up Ethernet package featuring a purpose-built Juniper Networks switch based on Broadcom’s 102.4Tbps Tomahawk 6 network silicon and employing the Ultra Accelerator Link over Ethernet (UALoE) specification.

That specification, set earlier this year, was crafted by many of the UALink consortium’s 75 members — which include AMD, Broadcom, Cisco, Google, HPE, Intel, Meta, Microsoft and Synopsys — and lays out the technology needed to support a maximum data rate of 200 gigatransfers per second (GT/s) per lane between accelerators and switches in AI computing pods of up to 1,024 accelerators, UALink stated.

Helios is built around the next-generation AMD Instinct MI450 Series GPUs and features up to 260 TB/s of scale-up interconnect bandwidth and 43 TB/s of Ethernet-based scale-out bandwidth, helping ensure high-performance communication across GPUs, nodes, and racks, according to AMD. The vendor says it will ultimately support trillion-parameter training and large-scale AI inference development. 

“This is a true first of its kind — bringing Ethernet to a new layer of the AI data center network for the first time,” Rahim said. “This is a scale-up solution using standard Ethernet. So that means it’s 100% open standard, avoids proprietary vendor lock-in, and leverages proven HPE Juniper networking technology to deliver scale and optimized performance for AI workloads.”

A few other networking news items coming out of HPE this week include:

  • HPE/Juniper’s Apstra Data Center Director and Data Center Assurance software will be integrated with HPE’s OpsRamp management package, which monitors servers, networks, storage, databases, and applications. The idea is to further enable data center automation, as Apstra’s automation capabilities can deliver consistent network and security policies for workloads across physical and virtual infrastructures. Available through GreenLake, the combination delivers full-stack observability, predictive assurance, and proactive issue resolution across compute, storage, networking, and cloud, HPE stated.
  • HPE is introducing software-defined networking for VMs hosted by the HVM hypervisor in HPE Morpheus VM Essentials and HPE Morpheus Enterprise Software. The idea is to bring cloud-enabled networking and security to the virtual machine platform.
  • HPE added two new storage options. The StoreOnce 5720 and all-flash 7700 are new, next-generation backup appliances designed for the rapid protection of critical workloads, HPE stated. Both models integrate directly with HPE Alletra Storage MP and HPE SimpliVity to let customers mount copies directly, making it easy to reuse protected data for forensics, analysis, or testing.
