
Broadcom is betting big on ethernet to disrupt AI workloads and data centers

Credit: Victor Dey, Fast Company

Behind the curtain of generative AI breakthroughs and GPU hype, a quieter transformation is taking place. Data center architecture has become a fierce battleground as AI models expand in size and demand ever-greater compute power. Today, AI’s performance, scalability, and cost are all tied to the choice of network fabric. Broadcom, long known for its dominance in networking and semiconductors, has reemerged as one of the most consequential players in AI’s infrastructure revolution.

“There’s a shift happening in the market. Today, real AI innovation isn’t just limited to models or the infrastructure—it’s in what connects them,” Ram Velaga, senior vice president and general manager of Broadcom’s Core Switching Group, told Fast Company during the NTT Upgrade 2025 event. “AI is not just about GPUs or compute anymore. It’s about how data moves, power is managed, and how systems scale.”

Founded in 1991 as Broadcom Corporation, Broadcom began as a semiconductor company focused on wireless and broadband communication, operating from a modest Los Angeles garage. A major turning point occurred in 2015 when Avago Technologies acquired Broadcom for $37 billion, leading to Broadcom’s transformation into a global semiconductor and infrastructure technology leader. Avago’s origins trace back to HP’s semiconductor division, linking Broadcom’s current parent company to HP’s semiconductor legacy. Through strategic acquisitions, including ServerWorks in 2001 and VMware in 2023, Broadcom expanded its reach, especially in the data center space. 

Its influence is vast, yet often underestimated. The company’s reputation is driven by high-speed Ethernet chips like the Tomahawk series, which are crucial for high-bandwidth networking within data architectures. Now, the semiconductor giant, whose lineage stretches back decades to HP’s chip business, isn’t chasing headlines with ChatGPT-style theatrics. Instead, it’s embracing a less flashy but more foundational role: building the infrastructure for AI developers to scale the technology. Velaga and his team are quietly helping tech giants and hyperscalers (large-scale cloud service providers that offer extensive computing resources) rethink the architecture of their data centers through deeply integrated systems—codesigned chips, bespoke interconnects, and a commitment to Ethernet, even as others in the industry begin to move on.

Currently, Nvidia dominates the data center network market with its GPU and Ethernet-integrated data center network platforms like Spectrum-X, which promise to drive AI training to new heights. As of 2025, Nvidia commands an estimated 25% share of the entire data center segment, and a dominant 98% share in data center GPU shipments. 

However, according to Broadcom CEO Hock Tan, the company’s strength in custom AI processors and Ethernet networking products is fueling its growth. Broadcom expects to capture a significant portion of the expanding market, projecting its serviceable addressable market (SAM) for AI processors and networking chips at $60–90 billion by fiscal 2027, provided the company maintains its current share of roughly 55% to 70% of the AI chip segment.

While Nvidia offers both InfiniBand and Ethernet in its data center portfolio, Broadcom’s Velaga contends that Ethernet is poised to become the backbone of tomorrow’s AI infrastructure, and the company is investing heavily to innovate the technology further. 

“We’re seeing hyperscalers including Meta and others, really leaning into Ethernet for AI infrastructure. Unlike alternatives like InfiniBand, Ethernet is inherently designed to handle data failures, recalibrate quickly, and maintain performance for AI models even under real-world conditions like heat and congestion,” Velaga told Fast Company during the event. “Ethernet is built for all these use cases, and beats InfiniBand.”

What is Ethernet and Why Now?

Ethernet is a foundational networking technology that enables wired communication between devices in data centers. It transmits data through physical cables like twisted pair or fiber optics, connecting servers, storage, and networking equipment. In modern data centers, Ethernet link speeds have scaled from 1 Gbps to 400 Gbps, with 800 Gbps already on the horizon, to handle the massive data throughput demanded by AI workloads. Moreover, the technology facilitates high-speed data transfer between GPUs and storage, enabling efficient AI training and the creation of distributed GPU clusters. 
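To see why those link-speed generations matter for AI workloads, a back-of-the-envelope sketch helps. The snippet below computes the ideal serialization time for moving a hypothetical 10 GB model shard over a single link at each speed mentioned above; the payload size and the zero-overhead assumption are illustrative, not figures from the article.

```python
# Back-of-the-envelope: ideal time to move one model shard over a
# single Ethernet link, ignoring protocol overhead and congestion.
# The 10 GB payload is a hypothetical figure chosen for illustration.

PAYLOAD_GB = 10  # hypothetical shard size, in gigabytes

def transfer_seconds(payload_gb: float, link_gbps: float) -> float:
    """Serialization delay: bits to send divided by the link rate."""
    return (payload_gb * 8) / link_gbps  # 8 bits per byte

for gbps in (1, 100, 400, 800):
    print(f"{gbps:>4} Gbps link: {transfer_seconds(PAYLOAD_GB, gbps):7.2f} s")
```

At 1 Gbps the shard takes 80 seconds to serialize; at 400 Gbps it takes 0.2 seconds, which is why link-speed generations translate directly into how tightly GPU clusters can be coupled.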

Broadcom’s argument is simple: Ethernet, the backbone of the internet for decades, is finally ready for its AI prime time. Ethernet’s openness, flexibility, and multivendor support give tech giants like Meta and Google the freedom to innovate without being boxed in by a proprietary stack. 

“Ethernet lets you scale horizontally across thousands of GPUs. Copper has always been the cheaper and more reliable option compared to optics. For a while, people tried cramming as many racks as possible into data centers utilizing copper networking, but that approach just isn’t sustainable,” Velaga said. “Now, we’re seeing a shift toward optics to meet higher power and bandwidth demands. Now, with our current and upcoming chipsets that integrate copackaged photonics, we’re very well positioned for helping enterprises with future workloads.”

Another alternative, InfiniBand, is designed for environments that demand ultrafast data transfer and minimal latency—such as HPC clusters and advanced data centers. Known for its high throughput (up to 400 Gbps) and ultralow latency (as low as one microsecond), it’s currently a popular choice for mission-critical workloads requiring rapid, reliable communication.

However, InfiniBand operates on the assumption of a flawless environment—and according to Velaga, that’s precisely the problem. He explained that modern data centers and GPU clusters exist in far-from-perfect conditions. As organizations scale their AI infrastructure, they quickly run into challenges like heat, signal degradation, and system noise. “In the real world, systems aren’t perfect,” he said. “There’s noise, heat, jitter. InfiniBand assumes everything is lossless. Ethernet was built to deal with reality.”

Tomahawk5 vs. Spectrum-X: A Battle of Philosophies

Nvidia’s Spectrum‑X isn’t just Ethernet—it’s Nvidia’s customized version of it. The company markets Spectrum‑X as a purpose-built platform for AI, combining proprietary clustering with claimed performance and efficiency gains: 1.6–1.7 times higher network throughput, 2.5 times better bandwidth for collective operations, and 1.7 times improved power-performance, leading to a lower total cost of ownership for distributed AI training. By April 2025, Spectrum‑X had been adopted by major tech players including Dell, HPE, Lenovo, and leading hyperscalers. 

But Velaga argues that real flexibility and reliability come from open-standard Ethernet, where any GPU can plug in without locking users into a single vendor’s ecosystem. Nvidia’s market approach, he says, is contradictory to Ethernet’s core principles: openness, interoperability, and customer choice. 

“When someone says their Ethernet is better than others, they probably don’t fully understand what Ethernet is,” he asserts. “The beauty of Ethernet is you can connect any GPU from any vendor using our switches, and it just works. That’s interoperability. Solutions that lock you into one vendor’s world are not scalable.”

Currently, Broadcom’s chief answers to Nvidia’s Spectrum-X are its Tomahawk5 and Jericho3-AI switch ASICs. Tomahawk5 is a high-throughput Ethernet switch designed for hyperscale and AI data centers, featuring advanced congestion management to reduce latency and supporting interoperability with any vendor’s data center infrastructure, helping customers avoid vendor lock-in. Likewise, Jericho3-AI is purpose-built for AI and machine learning workloads, enabling near-lossless Ethernet performance across large-scale clusters, similar to the performance claims made by Nvidia’s Spectrum-X.

“I’d challenge Nvidia and others any day on both interoperability and performance. Broadcom’s Ethernet offerings are miles ahead of Spectrum-X or any proprietary offerings out there,” Velaga told Fast Company.

Strategic Partnerships and Silicon Ambitions Amidst the Rise of AI

Broadcom is creating custom silicon for AI leaders like Alphabet, Meta, OpenAI, and Apple, designing ASIC chips tailored to optimize bandwidth, memory efficiency, and power draw for AI workloads in data centers and AI architectures. The company also provides key technologies such as high-bandwidth Ethernet switches, PCIe connectivity, and optical interconnects, all essential for scaling AI clusters. Velaga emphasized that these innovations enable clients to achieve superior data movement, processing speed, and energy efficiency, far surpassing off-the-shelf solutions.

“Our goal is to help customers differentiate themselves,” Velaga said. “We provide the tools they need to build what works best for them—without dictating the approach. They want flexible, cost-effective networking solutions to optimize their data centers and accelerators. With our Ethernet portfolio, ASICs, and silicon innovations, we are empowering large-scale GPU clusters to perform efficiently and at scale, essential for advancing AI.” He added that Broadcom’s flexible approach positions the company as a key collaborative partner, an advantage likely to grow as AI infrastructure evolves.

Despite his confidence, Velaga admits there are risks. AI investment is surging now, but what if the momentum stalls? “Everyone’s asking how long this wave will last. From my perspective, it feels like a real paradigm shift,” he said. “LLMs are changing how companies analyze data, make decisions, and engage with customers. There’s a lot at stake in this cycle.”

What keeps him up at night isn’t hype—it’s execution. “We have to keep delivering innovation and scale so our customers stay confident in Broadcom’s ecosystem. And so far, the signals are strong. Our customers aren’t pulling back, they’re doubling down. We’re ready to lead.”

Whether the boom continues or levels off, Broadcom is betting that the demand for fast, open data movement will only intensify. If Velaga’s vision is right, tomorrow’s AI data centers will be stitched together with open Ethernet, copackaged optics, and modular designs. “We want to be the connective tissue of AI,” he said. “It’s not the flashy part—but it’s the part that makes everything else work.”
