
Supply constraints, optical advances dominate Arista’s Q1

Credit: Network World

It’s mostly good news for Arista Networks, which reported total first-quarter revenue of $2.71 billion, up 35.1% compared to the year-ago quarter. The vendor raised its growth forecast to 27.7%, now targeting full-year 2026 revenue of $11.5 billion. Arista also raised its AI revenue target to $3.5 billion, effectively saying it would double its AI sales this year.

Less positive is the supply-chain pressure, as networking components – memory, chips, and wafers – face ongoing shortages and rising costs.

“Our demand is actually the best I’ve ever seen in my Arista tenure. The supply, however, is a slightly different and opposite tale. We are experiencing industry-wide shortages across the board, be it wafers, silicon chips, CPUs, optics and, of course, memory that I referred to last quarter, coupled with elevated costs to procure these,” Jayshree Ullal, CEO and chairperson of Arista, told financial analysts during the vendor’s earnings call. 

“Clearly, our demand is outstripping our supply this year. While we hope the supply chain will ease in the next year or two, the Arista operations team has been diligently engaging with our vendors in strengthening supply agreements and engaging in multiyear purchase commitments,” Ullal said. “I think the supply chain problem is not a one- or two-quarter phenomena. We now think it’s a one- or two-year phenomena… At first, we thought it was memory. Now it’s all the wafer fabrication facilities. Every chip is challenged… So we think a lot of this will continue into next year and keep us constrained for the next couple of years.”

It’s all about the optics

At the recent Optical Fiber Conference, Arista unveiled extended pluggable optics (XPO), a form factor designed specifically for high-speed optics, and assembled 45 optics module suppliers as part of a multi-source agreement to build and support XPO. Arista has seen major industry interest in XPO since the conference, and it’s now endorsed by more than 100 vendors, according to Ullal.

“Salient features include record-breaking throughput, delivering 12.8 terabits per pluggable module, unprecedented rack density achieving 204.8 terabits per OCP rack unit, integrated cold plate capable of cooling up to 400 watts of power per module, and the universality and flexibility across a range of pluggable optics, copper, as well as linear, half-retimed, or retimed interfaces,” Ullal said. 
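The density and cooling figures quoted above hang together arithmetically. A quick sanity check, assuming (the article doesn’t state this explicitly) that the per-rack-unit throughput comes from fully populating one OCP rack unit with identical XPO modules:

```python
# Figures quoted in the article.
TBPS_PER_MODULE = 12.8       # terabits per pluggable XPO module
TBPS_PER_RACK_UNIT = 204.8   # terabits per OCP rack unit
WATTS_PER_MODULE = 400       # max cooling per module (integrated cold plate)

# Implied module count per rack unit (assumes a fully populated unit).
modules_per_rack_unit = TBPS_PER_RACK_UNIT / TBPS_PER_MODULE

# Implied worst-case cooling load for one fully loaded rack unit.
cooling_watts = modules_per_rack_unit * WATTS_PER_MODULE

print(f"{modules_per_rack_unit:.0f} modules per OCP rack unit")      # 16
print(f"{cooling_watts / 1000:.1f} kW cooling per rack unit")        # 6.4 kW
```

The roughly 6.4 kW of heat per rack unit at full load is consistent with Ullal’s point that 1.6T and 3.2T generations will need liquid cooling.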

“While the industry has been talking a lot about co-packaged optics, these are still science experiments, and they’re very proprietary with individual vendors doing their own thing. We’ll embrace open CPO a few years from now, but we think XPO has a 10-year run, especially at 1.6T and 3.2T where you need liquid cooling and you need that kind of capacity. So, all the scale-up racks we’re talking about wouldn’t be possible without XPO or CPC or any one of those technologies,” Ullal said.

“Just as the last decade was greatly influenced by OSFP, the next decade will be greatly influenced by XPO,” Ullal said. “And remember, 99% of the optical market today that we connect to is all pluggable optics. So this is a very crucial invention and innovation, not just for Arista, but the industry at large.”

Enterprise AI: Calm before the storm?

When it comes to AI for enterprise network customers, Arista’s Ullal says it’s just getting started.

The company is seeing a shift from AI training toward more AI inference, “which means you don’t always need the GPU,” Ullal said. “You’re going to have high-end CPUs, and you’re going to have a smaller set of parameters and tokens to manage, and you’re going to have specific agentic AI use cases and applications. We’re seeing very, very early trials and stages. Nothing super big yet.” 

Some customers are deploying clusters that are “more inference-based, more agentic AI, edge, inference-based as well,” Ullal said. “I think we’ll see more of that. This is the calm before the storm, if you will. As AI gets more distributed, I think it doesn’t need GPUs alone. It’s going to need more high-performance compute… I think it’s gonna take a couple of years to fully happen.”
