
By David Savage, CTO, Ripple Suicide Prevention
Last week, Elon Musk announced something that should alarm anyone who cares about online safety: SpaceX has acquired xAI in a £913 billion merger, with plans to launch data centres into orbit. On the surface, this sounds like science fiction: a bold vision to power AI with solar energy beyond Earth's atmosphere. But when you look closer, especially at xAI's recent track record, a darker picture emerges.
At Ripple Suicide Prevention, we've spent years working to intercept harmful online content before it reaches vulnerable people searching for ways to end their lives. We've seen firsthand how critical it is to have legal frameworks that allow regulators, courts, and police to act when platforms fail to protect users. Space-based data centres threaten to put that protection permanently out of reach.
Sci-fi vs. reality
The pitch sounds compelling: unlimited solar power, no cooling costs, avoiding Earth's stressed power grids. But the reality tells a different story, and it's one that should make us question the real motives behind this expensive venture.
- The economics don't add up. Space-grade radiation-hardened chips cost orders of magnitude more than their Earth-based counterparts: we're talking six-figure sums in pounds per circuit board, for performance roughly equivalent to technology from a decade ago. The RAD5500 processors commonly used in satellites deliver just 0.9 gigaflops of performance, whilst a standard NVIDIA A100 chip on the ground delivers 156 teraflops. That's a performance gap of roughly five orders of magnitude (the sketch after this list runs the numbers).
- Launch costs remain prohibitive. Despite optimistic projections, current launch costs average £1,000-2,250 per kilogramme. For a single 100-tonne data centre module, that's £100-225 million just to get it into orbit. Musk claims SpaceX will achieve hourly Starship launches carrying 200 tonnes each, deploying a million satellites. The logistical and financial requirements are staggering.
- Maintenance is impossible. When a server fails on Earth, technicians can fix it. In orbit? You need to launch an entirely new satellite. Radiation degrades silicon chips faster in space, meaning hardware replacement every 5-6 years at those same astronomical launch costs.
- The latency problem persists. Signals take 25-35 milliseconds to travel from ground stations to satellites and back—an eternity for real-time applications, especially those involving crisis intervention.
- Space Data Centres are Un-Cool. Whilst Musk talks about the abundance of solar power in space, he's conspicuously quiet about one of the most fundamental challenges: cooling. On Earth, data centres rely on air conditioning, evaporative cooling, and water systems to dissipate the enormous amounts of heat they generate. In space, there's no air for convection and no water to evaporate; heat can only be rejected through thermal radiation.
The International Space Station provides a sobering comparison: it generates approximately 120 kilowatts of power and requires massive radiator panels covering over 1,000 square metres to dissipate the waste heat. These radiators are comparable in size to the station's enormous solar arrays. A typical spacecraft radiator can only dissipate about 100-200 watts per square metre. Now consider that a modest modern data centre generates tens of megawatts of heat.
A 10-megawatt facility (small by today's AI training standards) would require radiator panels covering 50,000-100,000 square metres, roughly the size of seven or more football pitches. For the 100-gigawatt constellation Musk envisions, you'd need radiators covering 500-1,000 square kilometres, an area approaching the size of Greater Manchester. The structural mass, launch costs, and deployment complexity of radiators on this scale would be astronomical, yet this critical infrastructure is conspicuously absent from xAI's promotional materials.
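For readers who want to sanity-check these bullet points, the short Python sketch below reruns the arithmetic. Every input is simply a figure quoted above (the 0.9-gigaflop RAD5500, the 156-teraflop A100, £1,000-2,250 per kilogramme to orbit, a 400-kilometre orbit, and 100-200 watts per square metre of radiator), so the outputs are only as reliable as those claims.

```python
import math

# Back-of-envelope checks on the figures quoted in the list above.
# Every input is the article's own number, not an independent measurement.

# Compute gap: RAD5500 (~0.9 GFLOPS) vs NVIDIA A100 (~156 TFLOPS).
rad5500_flops = 0.9e9
a100_flops = 156e12
gap = a100_flops / rad5500_flops
print(f"Compute gap: {gap:,.0f}x (~{math.log10(gap):.1f} orders of magnitude)")

# Launch cost for a single 100-tonne module at £1,000-2,250 per kg.
module_kg = 100_000
for per_kg in (1_000, 2_250):
    print(f"At £{per_kg:,}/kg: £{module_kg * per_kg / 1e6:,.0f}m per module")

# Latency floor: light-speed round trip to a 400 km orbit.
c_km_per_s = 299_792.458
floor_ms = 2 * 400 / c_km_per_s * 1e3
print(f"Physical latency floor: {floor_ms:.1f} ms "
      "(the 25-35 ms quoted above adds routing, queuing, and slant paths)")

# Radiator area at the quoted 100-200 W/m^2 effective rejection rate.
for watts, label in ((10e6, "10 MW facility"), (100e9, "100 GW constellation")):
    best, worst = watts / 200, watts / 100  # m^2 at each end of the range
    print(f"{label}: {best / 1e6:,.2f}-{worst / 1e6:,.2f} km^2 of radiator")
```

Nothing here is precise, but even the optimistic ends of these ranges make the point: the compute is slow, the lift is expensive, and the radiators are enormous.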
If the economics are this problematic, why pursue it? That's the question we need to ask.
The pattern of evasion
Here's what genuinely concerns us: xAI's chatbot Grok has become one of the most dangerous AI systems currently deployed, particularly for children. This isn't speculation; it's documented fact.
In January 2026, Grok generated thousands of non-consensual sexually explicit images every hour, including images depicting children. The bot itself admitted in a public post that it had generated images of minors aged 12-16 in "sexualized attire," acknowledging this "violated ethical standards and potentially U.S. laws on child sexual abuse material (CSAM)."
The responses from governments worldwide have been swift and damning:
- The European Union opened formal investigations, with digital affairs spokesman Thomas Regnier calling the content "appalling" and "disgusting," stating bluntly: "This is not spicy. This is illegal."
- The UK's Ofcom launched a formal investigation under the Online Safety Act 2023, examining whether xAI conducted appropriate risk assessments or implemented adequate safeguards.
- France referred X to prosecutors for possible violations of the EU's Digital Services Act.
- India's IT Ministry gave xAI 72 hours to submit corrective action plans.
- Indonesia, Malaysia, and the Philippines temporarily blocked access to Grok entirely.
Common Sense Media published a damning assessment calling Grok "among the worst we've seen" for child safety, finding that even with "Kids Mode" enabled, the platform produced harmful content including detailed explanations of dangerous ideas, conspiracy theories, and sexually violent language.
According to CNN, xAI's safety team (already small compared to competitors) lost several key staffers in the weeks leading up to the crisis, including the head of product safety, the leader of the post-training and reasoning safety team, and the lead for personality and model behaviour. Internal sources reported that Musk "was really unhappy" about restrictions on Grok's image generation and pushed back against guardrails.
When Reuters contacted xAI for comment about these child safety failures, they received an auto-reply: "Legacy Media Lies."
The jurisdiction black hole
This is where space-based data centres become genuinely alarming. Legal experts have identified the core problem: space breaks every enforcement tool we have.
As outlined in analysis by legal professionals at TDAN.com, courts cannot easily order the seizure of a satellite. Inspecting hardware in orbit requires specialised spacecraft. Discovery of server logs depends entirely on vendor cooperation. Traditional intellectual property enforcement tools, such as forensic imaging, hard drive seizure, and on-site inspections, become impossible when the servers are 400 kilometres above Earth.
According to Bloomberg Law analysis, personal data in space "isn't governed by the laws of any one nation, and international space law is still in its infancy." Whilst countries like the UK, EU member states, and others may claim their privacy laws apply based on whose data is being processed, enforcement becomes practically impossible.
The Outer Space Treaty of 1967 established that space cannot be appropriated by nations, but objects in space remain under the jurisdiction of the "state of registry." If xAI registers its satellites under a flag of convenience, much like ships in international waters, which country's courts can compel compliance with content moderation orders?
This is where Ripple's mission becomes infinitely harder. In an ideal world, the Ripple crisis intervention technology wouldn't need to exist, as large American tech organisations would do the right thing and put safety first. Instead, we campaign for legal frameworks to be as effective and far-reaching as possible. We work with organisations to intercept harmful searches, operating within legal jurisdictions where courts can enforce compliance when voluntary cooperation fails. Move those servers to orbit, and those hard-won protections evaporate.
A billion-pound escape plan?
Let's be clear about what we're witnessing. xAI has demonstrated repeated unwillingness to implement basic safety measures on Earth, where regulators have power. Multiple governments are investigating the company. Child safety organisations are demanding federal bans. The company faces potential criminal prosecution in multiple jurisdictions.
Now they want to move their infrastructure to a place where courts cannot reach them, where seizure is impossible, where enforcement depends entirely on voluntary cooperation.
The stated rationale—environmental concerns about Earth-based power consumption—rings hollow from a company whose CEO responded to child safety failures with laugh-cry emojis (as reported by TechPolicy.Press) and whose auto-response to serious journalism is "Legacy Media Lies."
This isn't innovation. This is evasion at planetary scale.
What this means for suicide prevention
At Ripple Suicide Prevention, we have intercepted over 100,000 harmful searches related to suicide and self-harm. Our intervention tool exists because of the hard work of the development team and the wider team at Ripple: we insert our technology and signposting between vulnerable people and harmful content. The system isn't perfect, but because of the Online Safety Act 2023, companies are legally required to cooperate with content safety efforts. Courts can issue orders. Regulators can enforce compliance. Parents can sue. Victims can seek justice.
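To make that mechanism concrete, here is a deliberately minimal sketch of the general interception idea: match a search query against known high-risk phrases and, on a hit, surface crisis signposting rather than raw results. Everything in it (the term list, the message, the names) is a placeholder invented for illustration; it is not Ripple's actual implementation, which is far more sophisticated.

```python
# A toy illustration of search interception, not Ripple's real system.
# Production tools must handle misspellings, slang, context, and many
# languages. The phrases below are placeholder examples; the helplines
# in the signposting message are real UK services.

HIGH_RISK_PHRASES = {"end my life", "want to die"}  # placeholder term list

SIGNPOST = (
    "You are not alone. Free, confidential support is available right now: "
    "call Samaritans on 116 123 (UK & ROI), or text SHOUT to 85258."
)

def intercept(query: str) -> str | None:
    """Return crisis signposting if the query looks high-risk, else None."""
    normalised = query.strip().lower()
    if any(phrase in normalised for phrase in HIGH_RISK_PHRASES):
        return SIGNPOST
    return None

if __name__ == "__main__":
    for q in ("I want to die", "weather tomorrow"):
        print(f"{q!r} -> {intercept(q) or 'pass through to normal results'}")
```

Crucially, even a far more capable version of this only works when the platforms serving the content are subject to courts that can compel cooperation; move the servers into orbit and the hook the law attaches to disappears.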
Space-based data centres threaten to create a permanent offshore haven for the most harmful content imaginable. When a vulnerable teenager searches for methods of suicide, or when predators create AI-generated child sexual abuse material, the servers processing those requests could be orbiting overhead, beyond the reach of any court on Earth.
The European Commission has made clear that "this content has no place in Europe." But if the servers are in space, what can Europe do? What can any nation do?
The questions Musk won't answer
Before any regulatory body grants approval for space-based data centres, we need answers to fundamental questions:
- Which jurisdiction's laws will apply to content hosted on orbital servers?
- How will court orders be enforced when hardware cannot be physically seized or inspected?
- What prevents registration under flags of convenience that provide maximum protection from legal accountability?
- How will police access evidence of crimes when server logs exist exclusively in orbit?
- What recourse do victims have when harmful content is generated by systems beyond any nation's reach?
- Why should we trust voluntary compliance from a company that has repeatedly failed to protect children on Earth?
Most importantly: If the economics of space data centres are so unfavourable, and the technical challenges so immense, why pursue this path unless the goal is regulatory evasion?
The stakes for all of us
This isn't just about one company or one technology. It's about whether we're going to allow the creation of offshore digital havens, literally offshore, in orbit, where the worst of the internet can flourish beyond legal accountability.
For parents, this should be terrifying. For suicide prevention organisations, it's devastating. For anyone who has ever needed legal protection from online harm, it represents a fundamental threat to digital safety.
The technology companies building these systems like to talk about innovation and progress.
But progress towards what? A future where harmful content is generated on satellites beyond any court's jurisdiction? Where child safety depends entirely on the voluntary goodwill of companies that have already demonstrated they prioritise engagement over protection?
We advocate for laws and regulations for the internet because we learned the hard way that voluntary compliance doesn't work. Now we're being asked to trust that same failed model, but this time in an environment where enforcement is physically impossible.
What we can do
The time to act is now, before these systems leave Earth:
- Demand regulatory review of space-based data centre proposals, with particular focus on legal jurisdiction and enforcement mechanisms.
- Support organisations like RAINN, Common Sense Media, and Ripple Suicide Prevention that are documenting the harms and demanding accountability.
- Contact your representatives. MPs, MEPs, and Members of Congress in the US need to understand what's being planned and the implications for online safety.
- Pressure telecommunications regulators to require ground-based enforcement mechanisms as a condition for any space data centre licence.
- Insist on transparency. Which jurisdiction will register these satellites? What laws will govern content? How will enforcement work?
- Support enforcement of the Online Safety Act 2023 and demand that regulators hold AI platforms to the same standards as social media companies. The Act requires platforms to assess risks, implement proportionate safeguards, and protect users, especially children, from harmful content. Ofcom's investigation into Grok demonstrates these powers exist; we must ensure they're used. Demand similar protections at international level to prevent companies from simply moving infrastructure beyond any single nation's reach.

At Ripple Suicide Prevention, we know that every life we save depends on our ability to intervene in the digital spaces where people search for help... or for harm. We cannot allow those spaces to move beyond the reach of the very laws designed to protect the most vulnerable amongst us.
The choice is stark: either we establish clear legal frameworks now, before launch, or we accept a future where the most harmful content in human history orbits permanently beyond our reach.
We've been to space. We've walked on the Moon. We've sent probes to the edge of the solar system. But some things, like the protection of children, like the prevention of suicide, like basic human decency, should never be allowed to escape Earth's gravity.