Can datacentres in orbit solve for AI models’ soaring energy demand?



Datacentres account for a growing share of global electricity consumption, and artificial intelligence (AI) is driving those power demands up. This is because AI datacentres use dense clusters of graphics processing units (GPUs) to run machine learning workloads, both when training large language models and when deploying them. Since the generative AI boom shows no signs of slowing down — industry estimates point to at least $3 trillion in planned investments by 2030 — datacentres are guzzling more energy than ever, drawing on whatever electricity sources are available.

That has pushed Google Research to explore a literally outlandish prospect: launching datacentres into outer space, and running them entirely on solar energy.

Bandwidth in the datacentre

Is this even possible? Google Research is mostly confident that it is. So much so that the firm's researchers have already chalked out a few of the key technical challenges they will face, and how they might solve them. To get into these questions, it is important to first lay out how an AI datacentre differs from the regular variant.

Traditional datacentres have been driven, more than anything, by growing consumption of content. In markets like India, that is mainly video, since it is among the most data-intensive (by volume) use cases for the aggregated networking and storage facilities that datacentres offer. The bandwidth such a datacentre needs within its own premises is therefore, in theory, the same as the bandwidth it delivers to, or receives from, the outside world. That has fuelled booms in things like undersea cable capacity, which has had to keep pace with domestic datacentre growth (the data must, after all, come from somewhere).

AI datacentres are different. They need high levels of bandwidth not between the infrastructure they host and the users they serve, but within the datacentre itself, and with other datacentres situated nearby. For instance, Microsoft's AI datacentre complexes, called Fairwater, have petabit-per-second links between facilities. That is 10 lakh gigabits per second, about a million times faster than the best consumer-grade internet connection typically offered in Indian metros.
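To put that figure in perspective, the arithmetic can be checked in a few lines of Python; the 1 Gbps baseline below is an assumption standing in for a top-tier home fibre plan, not a measured figure.

# Back-of-the-envelope check of the petabit-per-second comparison above.
PETABIT_PER_SECOND = 1e15              # one petabit per second, in bits per second
consumer_plan_bps = 1 * 1e9            # assumed 1 Gbps home fibre connection

gigabits_per_second = PETABIT_PER_SECOND / 1e9
print(f"petabit link: {gigabits_per_second:,.0f} Gbps (10 lakh Gbps)")
print(f"ratio to a 1 Gbps plan: {PETABIT_PER_SECOND / consumer_plan_bps:,.0f}x")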

That kind of densely networked architecture would therefore matter for datacentres in space too. Since the bulk of the bandwidth would be consumed by workloads distributed across multiple satellites, the downlink bandwidth to earth-based ground stations is not nearly as important. An analogy is available closer to home: ChatGPT needs these superfast connections within its own infrastructure, but all the user needs bandwidth for is the query they send and the response they receive.
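A rough sketch of that asymmetry, with every number an assumption chosen purely for scale, might look like this:

# Compare traffic between users and the datacentre with traffic inside it.
# All figures below are illustrative assumptions, not measured values.
queries_per_second = 1_000_000         # hypothetical global request rate
bytes_per_query = 10_000               # ~10 KB of prompt plus response text
user_facing_bps = queries_per_second * bytes_per_query * 8   # bits per second

internal_link_bps = 1e15               # one petabit-per-second internal link

print(f"user-facing traffic: {user_facing_bps / 1e9:,.0f} Gbps")
print(f"internal link      : {internal_link_bps / 1e9:,.0f} Gbps")
print(f"the internal link is {internal_link_bps / user_facing_bps:,.0f}x larger")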

(Bandwidth between low earth orbit and the ground is limited because only a narrow range of radio frequencies can carry data over that kind of distance, which caps the total capacity available. This is why SpaceX’s Starlink satellite internet constellation can be “sold out” in certain parts of the world: the airwaves get choked very quickly if a few lakh people sign up in a single location.)
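A toy calculation shows how quickly a shared downlink runs out; the spectrum and efficiency figures below are assumptions for illustration, not Starlink's actual numbers.

# One satellite beam has a fixed slice of spectrum that everyone in its
# footprint must share. Assumed figures, for illustration only.
beam_spectrum_hz = 1e9                 # assume ~1 GHz of usable downlink spectrum
spectral_efficiency = 3.0              # assume ~3 bits per second per hertz
beam_capacity_bps = beam_spectrum_hz * spectral_efficiency   # ~3 Gbps per beam

for subscribers in (1_000, 10_000, 100_000):
    per_user_mbps = beam_capacity_bps / subscribers / 1e6
    print(f"{subscribers:>7,} users sharing one beam -> {per_user_mbps:6.2f} Mbps each")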

Many challenges

Google’s Project Suncatcher proposes a constellation like Starlink’s, but instead of an evenly spread swarm blanketing the earth, it would rely on densely choreographed clusters, with each satellite no more than a few kilometres from its neighbours. The constellation would follow an orbit that always keeps the satellites in the sun’s line of sight, giving them access to the incredible power of sunlight with no atmosphere to dilute or obstruct it. That, combined with technologies like multiplexing — which allows more data to be packed into a single beam — would theoretically let the satellites distribute their work among themselves while generating enough power to run it.
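The power side of that promise can be sketched with one well-known figure, the solar constant of about 1,361 W per square metre above the atmosphere; the array size, cell efficiency and ground capacity factor below are assumptions for illustration only.

# Compare solar energy harvested by an array kept in continuous sunlight in
# orbit with the same array on the ground. Panel figures are assumptions.
SOLAR_CONSTANT = 1361.0        # W per square metre above the atmosphere
panel_area_m2 = 100.0          # assumed array size
panel_efficiency = 0.30        # assumed cell efficiency
HOURS_PER_YEAR = 8760

# A dawn-dusk orbit can keep the array in sunlight essentially all the time.
orbit_kwh = SOLAR_CONSTANT * panel_area_m2 * panel_efficiency * HOURS_PER_YEAR / 1000

# On the ground, night, weather and the atmosphere cut the average sharply;
# ~20% of a 1,000 W/m^2 peak is a common rule of thumb.
ground_kwh = 1000.0 * panel_area_m2 * panel_efficiency * HOURS_PER_YEAR * 0.20 / 1000

print(f"in orbit : {orbit_kwh:,.0f} kWh per year")
print(f"on ground: {ground_kwh:,.0f} kWh per year")
print(f"roughly {orbit_kwh / ground_kwh:.1f}x more energy in orbit")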

Of course, there are many other challenges, and Google is working its way through them. One obvious issue is solar radiation, and how it might affect the tensor processing units (TPUs) over months and years of operation. Here, Google has seen some headway. “While the High Bandwidth Memory (HBM) subsystems were the most sensitive component, they only began showing irregularities after a cumulative dose of 2 krad(Si) — nearly three times the expected (shielded) five year mission dose of 750 rad(Si),” Travis Beals, a researcher at Google, wrote in a post last November about Suncatcher.

“No hard failures were attributable to total ionising dose up to the maximum tested dose of 15 krad(Si) on a single chip, indicating that Trillium TPUs are surprisingly radiation-hard for space applications.”
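The margin described in those quotes is straightforward arithmetic:

# Dose figures quoted above, expressed as simple ratios.
mission_dose_rad = 750            # expected shielded five-year mission dose, rad(Si)
hbm_irregularities_rad = 2_000    # 2 krad(Si), where HBM first showed irregularities
max_tested_rad = 15_000           # 15 krad(Si), no hard failures up to this dose

print(f"headroom before irregularities: {hbm_irregularities_rad / mission_dose_rad:.1f}x")
print(f"headroom at maximum tested dose: {max_tested_rad / mission_dose_rad:.0f}x")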

But datacentres have to be maintained all the time, and once the equipment is in orbit, there is no cheap way of reaching it for troubleshooting. Another “significant engineering challenge” that Beals underlined was thermal management. In terrestrial datacentres, liquid cooling is practical. But if the satellites are going to be blasted with direct sunlight all day long, and there is no air or water around them to carry heat away, dissipating that heat fast enough for the silicon components to run efficiently becomes a serious problem.
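In a vacuum, waste heat can only leave by radiation, so the scale of the problem can be sketched with the Stefan-Boltzmann law; the radiator temperature, emissivity and heat load below are assumptions for illustration.

# Size a radiator that sheds waste heat purely by radiation (ignoring sunlight
# absorbed by the radiator and its view of the earth). Assumed figures.
STEFAN_BOLTZMANN = 5.67e-8     # W per square metre per kelvin^4
emissivity = 0.9               # assumed radiator surface emissivity
radiator_temp_k = 330.0        # assumed radiator temperature, about 57 degrees C
waste_heat_w = 10_000.0        # assumed heat from one satellite's chips, 10 kW

flux_w_per_m2 = emissivity * STEFAN_BOLTZMANN * radiator_temp_k ** 4
radiator_area_m2 = waste_heat_w / flux_w_per_m2

print(f"radiative flux: {flux_w_per_m2:.0f} W per square metre")
print(f"radiator area : {radiator_area_m2:.1f} square metres to shed 10 kW")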

Moving targets

Perhaps the biggest issue may not be engineering, but economics. For space-based datacentres to work, the cumulative cost of developing the technology, putting clusters in space, and launching replacements for satellites that stop working must be competitive with what it costs to do the same work with technology that already exists on the ground.

Google expects satellite launch prices to fall to around $200 per kilogram by the mid-2030s, and says the power savings from this architecture’s solar-first design could also make a compelling economic case for space-based datacentres. Time will tell if Google — or ISRO, which is also reportedly studying space-based datacentre technology — can hit all these technological and economic targets while keeping pace with advancements in ground-based datacentres. Microsoft’s Project Natick, which put datacentres underwater to make water-cooling them easier, was ultimately abandoned in spite of the promise it showed.
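A hedged back-of-the-envelope version of that comparison, in which every figure other than the $200-per-kilogram launch price is an assumption, looks like this:

# Amortised launch cost of a kilowatt of solar power in orbit versus the cost
# of buying that kilowatt on the ground. All figures are assumptions except
# the launch price Google anticipates for the mid-2030s.
launch_cost_per_kg = 200.0     # USD per kilogram to orbit
kg_per_kw = 10.0               # assumed satellite mass per kW of usable power
mission_years = 5.0            # assumed satellite lifetime

grid_price_per_kwh = 0.10      # assumed industrial electricity price, USD
HOURS_PER_YEAR = 8760

launch_per_kw_year = launch_cost_per_kg * kg_per_kw / mission_years
grid_per_kw_year = grid_price_per_kwh * HOURS_PER_YEAR

print(f"amortised launch cost: ${launch_per_kw_year:,.0f} per kW-year")
print(f"grid electricity     : ${grid_per_kw_year:,.0f} per kW-year")

On these assumed numbers the two land in the same ballpark, which is roughly the shape of the case Google is trying to make.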

But scepticism about the viability and usefulness of satellite technologies tends not to age very well. After all, few could have predicted that Starlink would reach the scale and performance it boasts today — with practically the whole earth covered by perfectly serviceable internet speeds — when SpaceX launched its first batch of test satellites in 2019.

aroon.deep@thehindu.co.in

Published – January 16, 2026 05:20 am IST


