Apple Finally Approves External GPUs for Mac — But Only If You’re Training AI, Not Gaming
Six years after killing eGPU support, Apple quietly reverses course. The catch? It’s for AI developers only.
For six years, Apple Silicon Mac users have lived with a frustrating reality: their machines, despite having Thunderbolt ports capable of driving external GPUs, simply wouldn't work with them. Apple killed eGPU support when it transitioned from Intel to its own M-series chips in 2020, and despite years of user complaints, the company held firm.
Until now.
In a move that slipped under the radar of mainstream tech coverage, Apple has officially approved third-party drivers that allow AMD and Nvidia graphics cards to connect to Apple Silicon Macs via Thunderbolt or USB4. But before gamers start celebrating, there’s a significant catch: this isn’t for you.
The Tiny Corp Breakthrough
The drivers come from Tiny Corp, the company best known for tinygrad, an open-source deep learning framework positioned as a leaner alternative to PyTorch and TensorFlow. Announced on April 1, 2026, and confirmed by multiple outlets including Tom's Hardware and Apple Insider, Apple's approval means users can finally connect external GPUs without the previous workaround: disabling System Integrity Protection (SIP), a security feature most users were understandably reluctant to turn off.
“It’s so easy to install now a Qwen could do it,” Tiny Corp quipped in their announcement, referencing the popular open-source language model.
The technical requirements are straightforward: macOS 12.1 or later, a Thunderbolt/USB4 port, and a supported GPU (AMD RDNA3+ or Nvidia Ampere+). But the use case is narrowly defined: AI and machine learning workloads only.
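Those requirements can be sanity-checked from Python's standard library. This is a hedged sketch: the macOS 12.1 floor comes from this article rather than Apple documentation, and Thunderbolt/USB4 and GPU-generation detection are out of scope.

```python
# Hedged sketch: check the requirements listed above (macOS 12.1+,
# Apple Silicon). Thunderbolt/USB4 and GPU checks are omitted; the
# 12.1 floor is taken from this article, not from Apple documentation.
import platform

MIN_MACOS = (12, 1)

def parse_macos_version(ver: str) -> tuple:
    """Turn a version string like '14.4.1' into a comparable (major, minor) tuple."""
    return tuple(int(p) for p in ver.split(".")[:2])

def meets_requirements() -> bool:
    # Apple Silicon Macs only: Darwin kernel on arm64.
    if platform.system() != "Darwin" or platform.machine() != "arm64":
        return False
    return parse_macos_version(platform.mac_ver()[0]) >= MIN_MACOS
```

Running `meets_requirements()` on an M-series Mac with a current macOS should return `True`; on anything else it returns `False`.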
💡 Key Stat: Macs with 128GB+ unified memory now face delivery windows of 6 days to 6 weeks, and Apple has discontinued the 512GB Mac Studio configuration entirely due to AI demand.
What You Can (and Can’t) Do
Let’s be clear about the limitations. This is compute-only support. The drivers enable AI training and inference through the tinygrad framework, but they don’t provide graphics acceleration. Video output to external monitors remains unaccelerated. Gaming? Forget about it.
For AI developers, though, this is genuinely transformative. A MacBook Pro with an external RTX 4090 suddenly becomes a serious local AI workstation. The combination of Apple’s unified memory architecture for inference and Nvidia’s CUDA cores for training creates a hybrid setup that was previously impossible on macOS.
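To make the compute-only point concrete, here is a minimal sketch of what a tinygrad training-style step on an eGPU might look like. The backend strings ("NV" for Nvidia, "AMD" for AMD) and the `Device.DEFAULT` pattern follow tinygrad's documented conventions, but whether the external card is actually visible depends on the driver setup described in this article; treat this as an illustration, not a verified recipe.

```python
# Hedged sketch: route a tinygrad matmul + backward pass to an external GPU.
# Backend names are an assumption; check the tinygrad docs for your setup.

def pick_backend(vendor: str) -> str:
    """Map a GPU vendor to a tinygrad backend string (assumed mapping)."""
    return {"nvidia": "NV", "amd": "AMD"}[vendor.lower()]

try:
    from tinygrad import Tensor, Device

    Device.DEFAULT = pick_backend("nvidia")  # send compute to the eGPU
    x = Tensor.rand(256, 256)
    w = Tensor.rand(256, 256, requires_grad=True)
    loss = (x @ w).sum()   # compute-only: pure matrix math, no graphics
    loss.backward()        # gradients are computed on the external card
    print(w.grad.shape)
except ImportError:
    print("tinygrad not installed: pip install tinygrad")
```

Note that nothing here touches the display pipeline: the eGPU does the arithmetic, while the Mac's built-in GPU keeps driving the screen.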
The setup process, while simpler than before, still requires technical comfort: Docker Desktop for Nvidia users, custom compiler installation for AMD, and working within the tinygrad ecosystem rather than mainstream frameworks.
Why Now? Follow the AI Money
Apple’s timing isn’t coincidental. The company has watched demand for high-memory Macs explode as local AI agents — tools like OpenClaw that run large language models on-device — have gained traction. According to Tom’s Hardware, that demand has created genuine supply constraints:
- Macs with 128GB+ unified memory now face delivery windows of 6 days to 6 weeks
- Apple has discontinued the 512GB Mac Studio configuration
- The 256GB Mac Studio received a $400 price increase
When your customers are scrambling for more compute to run AI workloads, and your own silicon can’t satisfy that demand, you have two choices: lose those customers to Linux workstations, or find a way to accommodate external compute. Apple chose the latter — but on its own terms.
The Strategic Calculation
This move reveals Apple’s strategic priorities with unusual clarity. The company spent six years resisting calls to restore eGPU support from gamers, video editors, and creative professionals. But when AI developers started hitting the limits of M-series chips, Apple moved within months.
The message is unambiguous: AI productivity matters more to Apple than gaming ever did.
It’s a calculated trade. By allowing external GPU compute for AI while maintaining the graphics lockdown, Apple keeps developers in its ecosystem without ceding ground to Microsoft’s DirectX or opening the door to gaming competition. The Metal graphics API remains the only game in town for Mac visuals, but AI workloads now have an escape valve.
The Tinygrad Gambit
Perhaps the most interesting aspect of this story isn’t Apple’s reversal — it’s who benefited from it. Tiny Corp isn’t Nvidia. It isn’t AMD. It’s a small company building an alternative to the CUDA ecosystem that has dominated AI development for years.
By approving tinygrad-specific drivers rather than generic GPU support, Apple is effectively endorsing a challenger to Nvidia’s software monopoly. If developers start building AI applications on tinygrad instead of PyTorch or TensorFlow, the entire AI infrastructure landscape shifts.
This isn’t just about Mac compatibility. It’s a potential wedge in the CUDA empire.
What This Means for Users
For the average Mac user, this changes nothing. Your MacBook Air won’t suddenly run Cyberpunk 2077 at 4K. Your Mac mini won’t become a gaming console.
But for AI developers, researchers, and the growing community of users running local language models, this is a genuine inflection point. The ability to add serious GPU compute to a Mac without compromising security or wrestling with unsupported hacks removes a major barrier to using Apple’s hardware for serious AI work.
The question is whether this represents a permanent shift in Apple’s stance or a temporary concession to AI demand. History suggests caution: Apple has a habit of opening doors slightly, then slamming them shut when strategic priorities change.
For now, though, the door is open. If you’re training neural networks, that is. Gamers will have to keep waiting.
The Bottom Line
Apple’s eGPU approval isn’t the gaming victory users have wanted for six years. It’s something more strategically significant: an acknowledgment that even Apple’s vertically integrated silicon strategy has limits, and that the AI revolution is forcing compromises the company would prefer not to make.
The winners here are clear: AI developers who want to stay in the Mac ecosystem, and Tiny Corp, which just became the sanctioned gateway to external GPU compute on macOS. The losers? Anyone who hoped this meant native gaming support was coming. Apple’s walls are still high. They’ve just added a small, AI-specific door.
For a company that prides itself on controlling the full stack, that’s a remarkable concession — and a telling sign of where the technology landscape is heading.
Related Reading
- Are We Being Trained by Our Own AI Constructs? — How AI tools are reshaping human cognition
- UK Postponing AI Compliance Deadlines — Regulatory confusion as AI adoption accelerates
- Anthropic Launches AnthroPAC — When AI safety companies start playing politics
Sources
- Tom’s Hardware — Apple approves drivers that let AMD and Nvidia eGPUs run on Mac
- Apple Insider — AMD or Nvidia eGPUs can work on Apple Silicon Macs
- The Verge — Apple approves driver that lets Nvidia eGPUs work with Arm Macs
- Tinygrad Documentation — TinyGPU for Mac
- TechPlanet — Apple Approves Nvidia eGPU Driver
- Hacker News Discussion
- ResetEra Forum Thread
