“We’re going to space,” Nvidia CEO Jensen Huang announced at the company’s GTC conference on Monday.
But then he qualified those remarks. “We’ve already been out in space,” he said. The company’s chips are already in satellites in orbit above Earth. What’s new is that Nvidia is moving from isolated satellite deployments of its chips to larger-scale plans. “We’ll also build data centers in space,” he said.
To prepare for that, Nvidia is working on a new version of its Vera Rubin flagship chip platform, Huang said. “It’s going to go out into space and start data centers out in space.”
According to Chen Su, Nvidia’s head of edge AI product marketing, the Space-1 Vera Rubin Module will be available in 2027. The company also announced a new chip that is available today, the Nvidia IGX Thor, which provides eight times the compute of the previous gold standard for space-based AI computing. IGX Thor is based on the Blackwell architecture.
Currently, satellite companies typically use the Nvidia Jetson Orin, an AI computer originally developed for robotics and other edge AI applications. “I would say it’s the most popular GPU that people use for space,” Su tells Network World. “It’s our embedded AI supercomputer.”
The Jetson Orin was released in 2022 and is based on the older Ampere GPU architecture. Customers are using it to run image processing workloads in orbit instead of sending raw data down to the ground for processing, Su says. In other words, they can upgrade from being "data as a service" providers to "intelligence as a service" providers.
For example, instead of sending down raw image data, which can take hours, or even days, a satellite can transmit the information that, say, a particular bridge is down, or that a certain road is having issues—actionable information of immediate business value.
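The downlink savings come from sending a compact event record rather than the imagery itself. A minimal sketch of the idea, with a hypothetical stand-in for the onboard model and an invented message format (nothing here is an Nvidia or satellite-operator API):

```python
# Illustrative only: a stand-in "model" and made-up downlink message format,
# not a real Nvidia or satellite API. The point is the size difference between
# raw imagery and an actionable event record.
import json

def analyze_tile(pixels):
    """Stand-in for an onboard inference model; returns (label, confidence).

    A real system would run a trained vision model on the GPU here.
    """
    brightness = sum(pixels) / len(pixels)
    return ("bridge_damaged", 0.93) if brightness < 50 else ("nominal", 0.99)

def to_downlink_message(tile_id, label, score):
    # A few hundred bytes, versus gigabytes of raw imagery that could
    # take hours or days to transmit.
    return json.dumps({"tile": tile_id, "event": label, "confidence": score})

label, score = analyze_tile([30] * 1024)          # pretend tile of pixels
msg = to_downlink_message("tile-042", label, score)
print(msg)
```

The shape of the message is the point: the ground station receives "this bridge appears damaged, with this confidence," not the pixels needed to work that out.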
"AI can also help satellites navigate low Earth orbit much more confidently, avoid other satellites, and operate much more autonomously," says Su.
And it can be used for other heavy workloads as well. For example, Kepler Communications is using Jetson Orin in its satellite communication network. That helps the company make its satellites smarter, CEO Mina Mitry said in a statement, “allowing us to intelligently manage and route data across our constellation.”
The Jetson Orin is already bringing data center-level compute capability to space, Su says, and, with the new chips, there will be even more real-time capability for the next generation of satellites.
Not everyone is convinced. According to Gartner analyst Bill Ray, orbital data centers are a waste of time and money. "The rush to develop orbital data centers has reached a period of peak insanity," he wrote in a recent report. "For all the hype around them, these space-based data centers will not be able to deliver on the promise of useful analysis of terrestrial data for terrestrial applications for decades, and may not ever be able to do so."
But that’s not where today’s use cases are, Su points out. “It is edge computing workloads,” he says. “It’s AI inference for multi-dimensional data for disaster recovery and weather forecasting.”
Kepler Communications, for example, announced Monday that it will offer scalable, cloud-like processing in space as a service, expanding beyond the connectivity services it was offering previously. The company has a constellation of ten satellites powered by 40 Jetson Orin modules and connected by optical links; each satellite can support AI workloads, including distributed computing models that allow workloads to scale dynamically across the constellation.
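Scaling workloads dynamically across a constellation amounts to a scheduling problem: route each job to whichever node has capacity. A minimal greedy sketch, with invented satellite names and job costs (Kepler's actual scheduling logic is not public):

```python
# Hypothetical sketch of least-loaded scheduling across constellation nodes.
# Satellite names, jobs, and costs are invented for illustration; this is
# not Kepler's (or anyone's) real scheduler.
from collections import defaultdict

def assign_jobs(jobs, satellites):
    """Greedily route each (job, cost) pair to the least-loaded satellite."""
    load = {sat: 0 for sat in satellites}
    placement = defaultdict(list)
    for job, cost in jobs:
        target = min(load, key=load.get)  # pick the least-loaded node
        placement[target].append(job)
        load[target] += cost
    return dict(placement), load

placement, load = assign_jobs(
    [("img-1", 3), ("img-2", 1), ("img-3", 2), ("img-4", 1)],
    ["sat-A", "sat-B"],
)
print(load)  # → {'sat-A': 4, 'sat-B': 3}
```

A real system would also have to account for link latency between satellites, power budgets, and thermal limits, but the basic pattern of spreading inference jobs across connected nodes is the same one terrestrial clusters use.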
Other companies that are also using Nvidia chips to power AI computing in space include Sophia Space, which recently closed a $10 million seed round for its space computing systems and proprietary cooling technology, and Starcloud, which launched the Nvidia H100 GPU into space this past November. Starcloud plans to launch a GPU cluster to orbit in 2027.
“Space will have huge potential in the future,” says Su.