
Tesla Revives Dojo 3 AI Supercomputer: Space-Based Computing Becomes Reality

Tesla revives Dojo 3 with AI5 and space-based compute, aiming to power next-gen AI chips using orbital data centers, solar energy, and rapid chip cycles.

Sankalp Dubedy
January 30, 2026

Tesla has restarted work on its Dojo 3 supercomputer project after CEO Elon Musk announced significant progress on the company's AI5 chip design. This marks a dramatic reversal from August 2025, when Tesla disbanded the Dojo team to focus on vehicle-based AI chips. The revival comes with an unexpected twist: Dojo 3 will be dedicated to space-based AI compute, positioning Tesla at the forefront of orbital data center technology.

The announcement signals Tesla's commitment to building what Musk calls "the highest volume chips in the world." With AI5 nearing completion and a rapid nine-month development cycle planned for future chips, Tesla is reshaping its AI infrastructure strategy. The company now aims to deploy computing power in space, where unlimited solar energy and natural cooling could solve the massive power demands of modern AI training.

Understanding Tesla's Dojo Supercomputer Journey

Tesla's Dojo project started as an ambitious plan to build custom supercomputers for training self-driving neural networks on the enormous volume of video data collected from Tesla's vehicle fleet. The Dojo ExaPod configuration comprises 120 training tiles with 1,062,000 usable cores, delivering roughly 1 exaflop of compute in BF16 and CFloat8 formats.
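The headline core count follows directly from the tile layout. As a rough check, here is a minimal sketch assuming Tesla's previously published figures of 25 D1 dies per training tile and 354 cores per die (treat these as illustrative rather than confirmed for every configuration); it reproduces the 1,062,000-core number and implies a per-die throughput in the low hundreds of teraflops:

```python
# Back-of-envelope check of the published Dojo ExaPod figures.
# Assumes 25 D1 dies per training tile and 354 cores per die
# (figures from Tesla's Hot Chips presentations); illustrative only.

TILES_PER_EXAPOD = 120
D1_DIES_PER_TILE = 25          # assumption based on Tesla's tile design
CORES_PER_D1_DIE = 354         # assumption based on Tesla's D1 spec

dies = TILES_PER_EXAPOD * D1_DIES_PER_TILE            # 3,000 D1 dies
cores = dies * CORES_PER_D1_DIE                       # 1,062,000 cores

EXAPOD_BF16_EFLOPS = 1.0
per_die_tflops = EXAPOD_BF16_EFLOPS * 1e6 / dies      # ~333 TFLOPS per die

print(f"D1 dies per ExaPod : {dies:,}")
print(f"Usable cores       : {cores:,}")
print(f"Implied BF16/CFloat8 throughput per die: ~{per_die_tflops:.0f} TFLOPS")
```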

The original Dojo ran on Tesla's custom D1 chip, each containing hundreds of specialized processing nodes designed for machine learning tasks. Each node is a general-purpose 64-bit superscalar core with simultaneous multithreading. The system's distinctive architecture was built to handle the massive data throughput that training for autonomous driving demands.

However, the project faced challenges. In August 2025, Bloomberg News reported that Tesla had disbanded the Dojo team after losing key engineers to competitors, choosing to focus resources on the chips that run inside its vehicles rather than on training supercomputers. The decision seemed to end Tesla's custom supercomputer ambitions.

Why Tesla Restarted Dojo 3 Development

The restart announcement came after Tesla made progress on the design of its AI5 chip. Musk confirmed on X that the AI5 chip design is now stable enough to resume the Dojo 3 project. This stability gave Tesla confidence to rebuild the team it disbanded months earlier.

The timing connects to Tesla's broader chip strategy. Musk later indicated that Tesla's AI7, not AI6, would effectively be the new Dojo. The company plans aggressive development, with AI7, AI8, and AI9 each slated for a nine-month design cycle.

Tesla recruited engineers through direct appeals on social media. Musk invited job applications for what he described as the highest volume chips in the world. The company seeks engineers who can solve complex technical problems at unprecedented scale.

Tesla's AI Chip Roadmap Explained

Tesla's chip development follows a clear progression from vehicle-based inference to space-based training systems. Understanding each generation reveals the company's long-term strategy.

AI4 (current). Purpose: vehicle self-driving. Key specs: baseline performance standard. Timeline: in production now.

AI5. Purpose: enhanced FSD and Optimus. Key specs: 40x faster than AI4, with 8x raw compute, 9x memory capacity, and 5x memory bandwidth. Timeline: samples in 2026, volume production in 2027.

AI6. Purpose: Optimus robots and Tesla data centers. Key specs: roughly 2x the performance of AI5. Timeline: mid-2028.

AI7 / Dojo 3. Purpose: space-based AI compute. Key specs: advanced architecture for orbital deployment. Timeline: post-2028.

The AI5 chip represents a massive leap in capability. Industry observers note that its single-chip performance is reportedly roughly comparable to Nvidia's Hopper while consuming far less power, approximately 250 W versus the H100's 700 W.
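Those power figures imply a sizable efficiency gap. A minimal sketch using the reported numbers (roughly Hopper-class performance at ~250 W versus the H100's ~700 W; both are report-level figures, not confirmed specs) puts AI5's performance per watt at roughly 2.8 times the H100's:

```python
# Rough performance-per-watt comparison from the reported figures.
# Both chips are treated as delivering comparable raw performance,
# per the claims above; actual benchmarks may differ.

ai5_power_w = 250      # reported estimate for Tesla AI5
h100_power_w = 700     # H100 SXM board power

relative_perf = 1.0    # assumption: roughly equal single-chip performance
ai5_perf_per_watt = relative_perf / ai5_power_w
h100_perf_per_watt = relative_perf / h100_power_w

print(f"AI5 vs H100 perf/W advantage: {ai5_perf_per_watt / h100_perf_per_watt:.1f}x")
# -> 2.8x under these assumptions
```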

Manufacturing will happen at multiple facilities. Tesla signed a $16.5 billion deal with Samsung Electronics to manufacture the next-generation AI6 chip. Both Samsung and TSMC will produce AI5 variants, giving Tesla manufacturing flexibility and supply chain resilience.

Space-Based AI Computing: The Future of Data Centers

The most radical aspect of Dojo 3 is its planned orbital deployment. Musk says it will be dedicated to space-based AI compute. This approach addresses fundamental limitations of Earth-based data centers.

Space offers unique advantages for AI training:

Unlimited Solar Power: A solar panel in orbit can be up to eight times more productive than the same panel on Earth and can produce power nearly continuously. Satellites in sun-synchronous orbits receive constant sunlight, eliminating the need for backup power systems.

Natural Cooling: Starcloud's space-based data centers radiate waste heat to the cold of deep space, which acts as an effectively infinite heat sink. Passive radiative cooling eliminates the massive water consumption of terrestrial data centers (a rough sizing sketch follows this list).

Scalability: A 5 GW cluster, the scale needed for next-generation AI models and more than most of the world's largest power plants can supply, is achievable in orbit. Space offers effectively unlimited physical expansion without land permits or community opposition.

Cost Efficiency: The 60-kilogram Starcloud-1 satellite, about the size of a small fridge, is expected to offer 100x more powerful GPU compute than any previous space-based operation. Launch costs continue declining, making orbital infrastructure increasingly economical.
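For a sense of scale, the sketch below estimates two of these quantities: the radiator area needed to reject a given compute load via the Stefan-Boltzmann law, and the solar array area needed to supply it at the ~1,361 W/m² solar constant above the atmosphere. The radiator temperature, emissivity, panel efficiency, and the 1 MW module size are illustrative assumptions, not figures from Tesla or Starcloud.

```python
SIGMA = 5.670374419e-8      # Stefan-Boltzmann constant, W / (m^2 K^4)
SOLAR_CONSTANT = 1361.0     # W / m^2 above the atmosphere

def radiator_area_m2(heat_w: float, temp_k: float = 320.0, emissivity: float = 0.9) -> float:
    """Area needed to radiate `heat_w` watts to deep space (background ~3 K ignored)."""
    return heat_w / (emissivity * SIGMA * temp_k ** 4)

def solar_array_area_m2(power_w: float, efficiency: float = 0.30) -> float:
    """Array area needed to generate `power_w` watts in continuous sunlight."""
    return power_w / (efficiency * SOLAR_CONSTANT)

MODULE_POWER_W = 1_000_000  # hypothetical 1 MW compute module

print(f"Radiator area for 1 MW : ~{radiator_area_m2(MODULE_POWER_W):,.0f} m^2")   # ~1,900 m^2
print(f"Solar array for 1 MW   : ~{solar_array_area_m2(MODULE_POWER_W):,.0f} m^2") # ~2,400 m^2
```

Under these assumptions, a single megawatt of compute needs radiators and arrays each on the order of a few thousand square meters, which is why deployable structures dominate orbital data center designs.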

How Space-Based Computing Works

Orbital data centers use a modular architecture designed for the space environment. The core is composed of compute containers, each housing server racks along with networking, liquid-cooling, and power-distribution infrastructure.

The technical challenges are significant. Large-scale ML workloads require distributing tasks across numerous accelerators with high-bandwidth, low-latency connections. Inter-satellite links must support terabits per second of data transfer.
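To see why terabit-class links come up, consider a rough gradient-synchronization estimate. The model size, precision, and step time below are illustrative assumptions, not figures from any announced system:

```python
# Rough estimate of the inter-satellite bandwidth needed to exchange
# gradients once per training step. All inputs are illustrative.

params = 100e9            # hypothetical 100B-parameter model
bytes_per_param = 2       # BF16 gradients
step_time_s = 1.0         # hypothetical time per training step

gradient_bytes = params * bytes_per_param          # 200 GB per step
required_bps = gradient_bytes * 8 / step_time_s    # bits per second

print(f"Gradient payload per step : {gradient_bytes / 1e9:.0f} GB")
print(f"Sustained link bandwidth  : ~{required_bps / 1e12:.1f} Tb/s")   # ~1.6 Tb/s
```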

Radiation protection is essential. High Bandwidth Memory subsystems began showing irregularities after a cumulative dose of 2 krad(Si), nearly three times the expected five-year mission dose of 750 rad(Si). Proper shielding ensures chips survive the harsh space environment.
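The reported numbers imply a comfortable margin. A minimal sketch using the cited figures (irregularities at 2 krad(Si) versus an expected 750 rad(Si) over five years) works out to roughly a 2.7x dose margin, or about 13 years at the same dose rate before reaching the observed threshold:

```python
# Dose-margin arithmetic from the cited radiation-test figures.

threshold_rad = 2000.0      # cumulative dose at which HBM irregularities appeared, rad(Si)
mission_dose_rad = 750.0    # expected dose over a 5-year mission, rad(Si)
mission_years = 5.0

margin = threshold_rad / mission_dose_rad
dose_rate_per_year = mission_dose_rad / mission_years
years_to_threshold = threshold_rad / dose_rate_per_year

print(f"Dose margin        : {margin:.1f}x")                               # ~2.7x
print(f"Years to threshold : ~{years_to_threshold:.0f} (same dose rate)")  # ~13 years
```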

Google is already testing the concept. Project Suncatcher is a moonshot exploring a new frontier: equipping solar-powered satellite constellations with TPUs and free-space optical links. Their demonstration mission is slated to launch two prototype satellites by early 2027.

Real-World Progress in Orbital Computing

Space-based AI computing has moved beyond theory. Starcloud trained an artificial intelligence model from space for the first time. The company's satellite successfully ran Google's Gemma language model in orbit, proving that complex AI workloads can operate in space.

According to Starcloud, this is the first time a large language model has been trained in outer space, demonstrating the viability of orbital AI infrastructure.

The economic case strengthens as launch costs decline. Analysis of historical and projected launch pricing data suggests that with a sustained learning rate, prices may fall to less than $200/kg by the mid-2030s. At this price point, the cost of launching and operating a space-based data center could become roughly comparable to the reported energy costs of an equivalent terrestrial data center.
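That comparison can be made concrete with a toy calculation. The spacecraft mass per kilowatt of IT load, electricity price, and operating lifetime below are purely illustrative assumptions used to show the shape of the argument, not figures from the cited analysis:

```python
# Toy comparison: launch cost of orbital capacity vs. electricity cost of
# running the same IT load on the ground. All inputs are illustrative.

launch_cost_per_kg = 200.0        # $/kg, projected mid-2030s launch price
spacecraft_kg_per_kw = 10.0       # hypothetical spacecraft mass per kW of IT load
lifetime_years = 10.0
grid_price_per_kwh = 0.08         # $/kWh, illustrative industrial rate

launch_cost_per_kw = launch_cost_per_kg * spacecraft_kg_per_kw
ground_energy_cost_per_kw = grid_price_per_kwh * 8760 * lifetime_years

print(f"Launch cost per kW of orbital capacity : ${launch_cost_per_kw:,.0f}")       # ~$2,000
print(f"10-year grid energy cost per kW        : ${ground_energy_cost_per_kw:,.0f}") # ~$7,000
```

Under these assumptions the two figures land within the same order of magnitude, which is the core of the economic argument; heavier spacecraft or cheaper grid power would tilt it back toward the ground.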

Challenges and Concerns

Space-based computing faces legitimate obstacles. Environmental researchers warn of potential issues. Researchers at Saarland University calculated that an orbital data center powered by solar energy could still create an order of magnitude greater emissions than a data center on Earth. Most of those emissions come from rocket stages and hardware burning up on reentry, forming pollutants that can further deplete Earth's protective ozone layer.

Technical hurdles remain substantial. Thermal management, high-bandwidth ground communications, and on-orbit system reliability all require solutions. For ML accelerators to be effective in space, they must withstand the environment of low-Earth orbit.

Astronomical observations could be affected. Orbital data centers with large solar arrays might interfere with telescope observations, particularly during twilight hours when astronomers search for near-Earth asteroids.

Tesla's Competitive Position

Tesla's approach differs from competitors. Dojo 3 is expected to be Tesla's first supercomputer built entirely on internal hardware, without relying on Nvidia components. This vertical integration gives Tesla control over its entire AI stack.

The company aims for unprecedented production volume. Musk stated the company wants to shift toward more in-house solutions to reduce dependence on external GPU suppliers. With millions of vehicles and potentially millions of Optimus robots, Tesla could deploy more AI chips than any other company.

The nine-month development cycle represents aggressive ambition. In the semiconductor industry, a nine-month cycle for a major architectural overhaul is unheard of. Even major tech companies operate on much longer timelines for chip development.

What This Means for Tesla's Future

The Dojo 3 restart signals confidence in Tesla's ability to solve autonomous driving with AI5 hardware. It seems that AI5 will be the last major architecture and hardware jump for Tesla's vehicles in the near future. This suggests Tesla believes AI5 provides sufficient compute power for full autonomy.

Future chips target different applications. AI6 will be a chip dedicated to Optimus and Tesla's data centers. The roadmap shows Tesla expanding beyond vehicles into robotics and cloud computing infrastructure.

The space-based computing vision remains ambitious. There are many roadblocks to making AI data centers in space a possibility, not least the challenge of cooling high-power compute in a vacuum. However, Musk's comments fit a familiar pattern: float an idea that sounds far-fetched, then try to brute-force it into reality.

Key Takeaways

Tesla's Dojo 3 revival represents a bold bet on space-based computing. The project combines the company's advanced AI chip development with the emerging field of orbital data centers. Success could give Tesla a significant competitive advantage in AI training infrastructure.

The technology faces real challenges. Environmental concerns, technical obstacles, and high initial costs all need solutions. However, the fundamental physics and economics appear increasingly favorable as launch costs decline and AI compute demands surge.

Tesla's aggressive chip development timeline shows the company's determination to lead in AI hardware. Whether the nine-month development cycles prove realistic remains to be seen. But the combination of vehicle deployment, robotic applications, and space-based training infrastructure positions Tesla uniquely in the AI industry.

The race to space-based computing has begun. Tesla joins Google, Starcloud, and other companies exploring orbital infrastructure. The coming years will reveal whether Dojo 3 and AI7 successfully launch the era of space-based AI training.
