AI space computing concept showing orbital data centers, AI CEOs, and chip war between tech giants

From space-based GPUs to chip factories, AI expansion accelerates worldwide

Kepler Communications operates the largest orbital compute cluster today, with 40 Nvidia Orin processors across 10 satellites. New partner Sophia Space will deploy an operating system across six of those GPUs, the first attempt at orbital software configuration. This is early proof that space-based compute infrastructure works, well before large-scale data centers arrive in the 2030s.

On the ground, Intel is joining Elon Musk’s Terafab, Nvidia is using AI to cut chip design time from months to overnight, and Amazon is acquiring Globalstar for $11.57 billion to boost its Direct-to-Device services against Starlink. Let’s get into the details:

World’s Largest Orbital Compute Cluster Just Went Live – AI’s New Home Is Space

Kepler Communications, operating the largest orbital compute cluster with 40 Nvidia Orin processors, has partnered with Sophia Space to test space-based AI infrastructure. Sophia will deploy its OS across six GPUs on two satellites, a first-ever attempt at orbital software configuration.

Amazon’s $11.57B Globalstar Deal Targets Musk’s Starlink

Amazon is acquiring Globalstar for $11.57 billion to accelerate its Direct-to-Device satellite services and close the gap in the space telecom rivalry with Starlink’s 9 million users. The deal adds Globalstar’s 24 satellites to Amazon’s existing 200-plus network, with D2D deployment targeted for 2028.

Intel Joins Musk’s Terafab to Build AI Chips in Texas

Intel is joining Elon Musk’s Terafab venture with SpaceX and Tesla to build a semiconductor factory in Texas. Backed by Intel’s chip fabrication expertise, the venture targets an output of 1 TW of AI compute per year. Intel’s foundry business also gains two anchor customers as it works to close the gap with Nvidia and AMD.

Write Once, Run Everywhere: The Secret to Cross‑AI Automation in Cursor & Claude

This guide shows how to build reusable slash commands that turn repeatable prompts into consistent pull request reviews across tools like Cursor and Claude. The approach enforces fixed scope, structured checklists, and standardized reports, replacing inconsistent manual reviews with reliable AI workflow automation.
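As one illustration of the pattern (the file path, command name, and checklist items below are assumptions, not taken from the guide), Claude Code reads custom slash commands from markdown files in a project’s commands directory, so a reusable PR-review command might look like:

```markdown
<!-- .claude/commands/pr-review.md — hypothetical example file -->
Review the pull request on the current branch. Stay strictly within this scope:

1. Correctness: flag logic errors, off-by-one bugs, and unhandled edge cases.
2. Security: flag injection risks, secrets in code, and unsafe deserialization.
3. Tests: note any changed behavior that lacks a corresponding test.

Report format (always use exactly these headings):
## Blocking issues
## Suggestions
## Test gaps
```

Invoking it as `/pr-review` then yields the same fixed scope and report structure on every run; Cursor supports a similar per-project commands convention, so one file can be mirrored across both tools.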

NVIDIA Shows AI Can Design Chips Faster Than Entire Teams

Nvidia now uses AI at every stage of chip design, cutting a task that once took eight engineers ten months down to a single overnight GPU run. Its internal LLMs, ChipNeMo and Bug Nemo, train junior designers on proprietary GPU architecture. Still, chief scientist William Dally says fully autonomous chip design remains years away.

Claude Code Launches Routines to Automate Your Dev Workflow

Anthropic has launched Routines in Claude Code, letting developers configure automated workflows triggered by schedules, API calls, or GitHub events. Routines run on Claude Code’s web infrastructure, removing any dependency on local machines. The feature is available to Pro, Max, Team, and Enterprise plan users.

Microsoft Is Testing OpenClaw-Like AI Bots for Copilot

Microsoft is testing OpenClaw-style features that would let Copilot run continuously as an autonomous agent, handling tasks like monitoring Outlook and suggesting daily priorities. The company plans role-specific agents for marketing, sales, and accounting, each with limited permissions. Key details are expected at Microsoft Build on June 2nd.

From AI Pilots to Production: Scaling Claude Successfully

Most enterprise AI deployment efforts fail due to wrong metrics, poor data readiness, and weak adoption design. Leading organizations are fixing this with hub-and-spoke AI structures, centralized MCP infrastructure, and workforce redesign programs that embed AI into core workflows, not treat it as an add-on.

Zuckerberg Becomes the First AI-Native CEO With Internal ‘Boss Chats’

Meta is developing a photorealistic 3D AI version of Mark Zuckerberg, trained on his mannerisms, tone, and voice, for employees to interact with. Zuckerberg is personally involved in the project. This is separate from his CEO agent initiative, as Meta commits up to $135 billion to AI development this year.

Apple’s AI Smart Glasses Are Coming to Crush Meta’s Ray-Bans

Apple’s AI smart glasses are planned for 2027, with a possible unveiling later this year. Apple is testing four frame designs in multiple colors. The glasses have no display but support photos, calls, music, and Siri, signaling a shift toward a model closer to Meta’s Ray-Ban glasses.

AI Hype vs Reality Gap: Stanford’s Explosive New Findings

Stanford’s 2026 AI study shows frontier models now exceed human performance on PhD-level science and competition mathematics, while agent task success jumped from 20% to 77.3% in one year. Global corporate AI investment hit $581.7 billion in 2025, up 130%. Training Grok 4 alone emitted 72,816 tons of CO2 equivalent.

3-Layered LLM Evaluation for Every AI Agent You Build

LLM evaluation for AI agents is not a pre-deployment checklist. It is a continuous control system built on step-level trace visibility, golden datasets, and a three-tier eval stack covering unit checks, LLM judges, and live user experiments. Teams that govern agent behavior this way turn random failures into permanent safeguards.
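The bottom tier of that stack can be sketched in a few lines. Below is a minimal, hedged example of a step-level unit check run against a golden dataset; the `Trace` shape, tool names, and golden cases are all illustrative assumptions, not a specific framework’s API.

```python
# Minimal sketch of a tier-1 "unit check" eval: deterministic assertions
# over an agent's step-level trace, compared against a golden dataset.
# The Trace shape, tool names, and golden cases are illustrative only.

from dataclasses import dataclass


@dataclass
class Trace:
    steps: list   # ordered tool calls the agent made, e.g. ["search_docs", "answer"]
    answer: str   # the agent's final reply


# Golden dataset: query -> expected tool sequence and a required phrase.
GOLDEN = {
    "refund policy?": {
        "tools": ["search_docs", "answer"],
        "must_contain": "30 days",
    },
}


def unit_check(query: str, trace: Trace) -> list:
    """Return a list of failure messages; an empty list means the trace passes."""
    failures = []
    expected = GOLDEN[query]
    # Check 1: the agent took exactly the expected tool-call path.
    if trace.steps != expected["tools"]:
        failures.append(f"tool sequence {trace.steps} != {expected['tools']}")
    # Check 2: the final answer contains the required fact.
    if expected["must_contain"] not in trace.answer:
        failures.append(f"answer missing required phrase {expected['must_contain']!r}")
    return failures
```

Each failing check becomes a permanent regression case in the golden set, which is what turns a random one-off failure into a safeguard; the two higher tiers (LLM judges and live user experiments) then cover the qualities these deterministic checks cannot.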

Anthropic’s Mythos Model Gains White-House Backing

Treasury Secretary Bessent and Fed Chair Powell urged JPMorgan, Goldman Sachs, and three other major banks to test Anthropic’s Mythos model for cybersecurity vulnerabilities. The recommendation directly contradicts the Pentagon’s active effort to blacklist Anthropic over its refusal to remove AI safety restrictions.

What Else Is Happening?

Subscribe to our tech newsletter. Receive regular insights that keep you informed. And if you find it valuable, please share it with your network to help spread the word.
Catch you next time with fresh insights on AI and Tech, right here.