When Apple announced its new Manzano large language model (LLM) at WWDC 2024, the AI community’s reaction wasn’t the uniform applause Apple might have expected. Instead, a wave of skepticism swept across Reddit and Twitter, with developers and AI researchers questioning just how “open” Apple’s approach to AI truly is. As one prominent AI researcher put it in a viral post: “Apple wants the PR benefits of ‘openness’ without actually opening the hood.”
The tension between Apple’s carefully crafted messaging about transparency and the reality of its AI implementation raises a fundamental question: Is Apple genuinely embracing open AI development, or is this another case of the company maintaining its legendary control while adopting the language of openness?
Apple’s Complicated Relationship with “Open” Technology
Apple has long maintained a walled garden approach to its ecosystem. From its proprietary hardware to its tightly controlled App Store, the company has built its reputation on integrated experiences that “just work” – largely because Apple controls every aspect of them.
This philosophy has served Apple well, helping it become the world’s most valuable company. But in the rapidly evolving AI landscape, where open-source models like Meta’s Llama have accelerated innovation through collaborative improvement, Apple’s traditional approach faces new challenges.
The Manzano Announcement: What Apple Actually Said
At WWDC 2024, Apple unveiled Manzano with careful language about transparency. Craig Federighi, Apple’s senior VP of Software Engineering, described it as “our most capable on-device model yet, with transparency into how it was built.” But notably, Apple stopped short of calling Manzano “open-source” – instead using terms like “transparency” and “documentation.”
This distinction matters enormously. True open-source AI models provide:
- Complete model weights – allowing developers to examine and modify the actual parameters
- Training methodologies – detailing exactly how the model was trained
- Training datasets – revealing what information the model learned from
- Licensing for commercial use – permitting integration into products
What Apple offered instead was a “model card” – a document describing Manzano’s capabilities and limitations – along with limited API access. The actual model remains firmly behind Apple’s walls.
The Community’s Skepticism: Reading Between the Lines
The response from the AI community was swift and critical. On Reddit’s r/MachineLearning, a thread titled “Apple’s ‘Open’ AI Model Isn’t Actually Open” garnered thousands of upvotes. One commenter with a background in AI research noted: “This is ‘open’ in the same way that a restaurant with a glass window to the kitchen is ‘open’ – you can see what’s happening, but you can’t go in and cook yourself.”
This skepticism isn’t merely academic posturing. It reflects genuine concerns about what Apple’s approach means for the AI ecosystem.
Three Critical Concerns from the AI Community
- Limited innovation potential – Without access to model weights, developers can’t fine-tune or adapt Manzano for specialized use cases
- Lack of scrutiny – The AI safety community can’t properly evaluate models they can’t examine
- Competitive disadvantage – Developers building on Apple platforms have fewer options than those working with truly open models
Dr. Emily Bender, a computational linguistics professor at the University of Washington, put it succinctly in a recent interview: “There’s a fundamental tension between Apple’s business model of control and the collaborative nature of meaningful AI safety and progress. You can’t claim to prioritize both equally.”
Apple’s AI Strategy: Control vs. Collaboration
To understand Apple’s approach to AI transparency, we need to examine the company’s broader AI strategy. Unlike Google, Microsoft, or Meta, Apple came relatively late to the generative AI race, choosing to focus on on-device capabilities rather than cloud-based models.
This strategy aligns with Apple’s privacy-focused messaging but also serves another purpose: keeping users within the Apple ecosystem. By developing AI that works best (or exclusively) on Apple devices, the company reinforces its walled garden.
The Business Case for Apple’s Approach
From a business perspective, Apple’s strategy makes sense. By maintaining control over its AI models, the company can:
- Differentiate its hardware – Creating AI features that only work on Apple devices
- Protect competitive advantages – Preventing competitors from adopting Apple’s innovations
- Maintain quality control – Ensuring AI features meet Apple’s standards
As Benedict Evans, technology analyst and former Andreessen Horowitz partner, observed: “Apple isn’t interested in advancing AI for humanity – they’re interested in advancing Apple products for Apple customers. That’s not evil; it’s just their business model.”
This approach has proven enormously profitable in the past. Apple’s ecosystem lock-in has created the most loyal customer base in consumer technology, with switching costs that grow higher with each new integrated service.
The PR Challenge: Claiming Openness in a Closed System
Apple faces a unique challenge in the AI space. The most significant advancements in AI have come through open collaboration – from academic research sharing to open-source models that improve through community contributions. Companies like Meta have earned genuine goodwill by releasing models like Llama with permissive licenses.
This creates a PR problem for Apple. The company wants to be perceived as contributing to AI progress while maintaining its traditional control. The solution appears to be adopting the language of openness without fully embracing its practice.
The Transparency Gap
When we compare Apple’s approach to truly open AI initiatives, the differences become stark:
- Meta’s Llama 3 – Full model weights available, commercial use permitted, training methodology published
- Mistral AI – Open weights, transparent training process, commercial-friendly license
- Apple’s Manzano – API access only, limited documentation, no access to model weights
This gap between messaging and reality hasn’t gone unnoticed. As AI researcher Andrej Karpathy (formerly of OpenAI and Tesla) noted on Twitter: “The AI community has a finely-tuned BS detector when it comes to claims of openness. You either open the model or you don’t – there’s no middle ground that satisfies both business control and community contribution.”
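The gap can be made concrete with a toy checklist that encodes the comparison above. The four criteria come from the open-source list earlier in the article (weights, methodology, training data, commercial license); the per-model flags simply restate the bullet points (none of the three fully discloses its training data per those bullets), so this is an illustrative sketch of the scoring idea, not an authoritative audit:

```python
# Toy openness checklist: flags restate the article's comparison bullets,
# not an authoritative audit of any model's actual license or documentation.
CRITERIA = ("weights", "methodology", "training_data", "commercial_license")

MODELS = {
    "Meta Llama 3":  {"weights": True,  "methodology": True,  "training_data": False, "commercial_license": True},
    "Mistral":       {"weights": True,  "methodology": True,  "training_data": False, "commercial_license": True},
    "Apple Manzano": {"weights": False, "methodology": False, "training_data": False, "commercial_license": False},
}

def openness_score(flags: dict) -> float:
    """Fraction of the four openness criteria a model satisfies."""
    return sum(flags[c] for c in CRITERIA) / len(CRITERIA)

# Rank models from most to least open (stable sort preserves listing order on ties)
ranking = sorted(MODELS, key=lambda name: openness_score(MODELS[name]), reverse=True)
for name in ranking:
    print(f"{name}: {openness_score(MODELS[name]):.2f}")
# → Meta Llama 3: 0.75, Mistral: 0.75, Apple Manzano: 0.00
```

However crude, the scoring makes the point of the comparison explicit: on the criteria the open-source community actually cares about, an API-only release scores zero.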
The Implications for Apple’s Future in AI
Apple’s approach to AI transparency will have significant consequences for both the company and the broader AI ecosystem. Three scenarios seem most plausible:
Scenario 1: The Walled Garden Advantage
If Apple’s bet pays off, its controlled AI approach could create compelling experiences that further lock users into its ecosystem. By integrating AI capabilities that work seamlessly across Apple devices – while performing less well or not at all on competitors’ products – Apple could strengthen its hardware premium and ecosystem advantage.
This would follow Apple’s historical playbook: not being first to market, but delivering a more refined, integrated experience that justifies its premium pricing.
Scenario 2: Left Behind by Open Innovation
The risk for Apple is that open-source AI development continues to accelerate, creating capabilities that surpass Apple’s controlled models. If developers can build more innovative applications on truly open models, Apple’s AI features could begin to feel limited by comparison.
This scenario becomes more likely if Apple’s AI capabilities lag significantly behind competitors. As AI becomes more central to computing experiences, falling behind could threaten Apple’s premium positioning.
Scenario 3: A Hybrid Approach Emerges
The most likely outcome may be that Apple gradually adjusts its approach, selectively opening certain aspects of its AI technology while maintaining control of core components. This would mirror Apple’s evolution in other areas, like its gradual opening of iOS to third-party keyboards, extensions, and widgets.
We’re already seeing hints of this approach with Apple’s partnership with OpenAI to integrate ChatGPT – acknowledging that some AI capabilities are better sourced externally while keeping core experiences in-house.
What This Means for the Tech Industry
Apple’s approach to AI transparency isn’t just about one company’s strategy – it reflects a broader tension in the technology industry between open collaboration and proprietary advantage. Three key takeaways emerge:
- The definition of “open” is being contested – Companies are adopting the language of openness while maintaining varying degrees of control
- User experience may trump openness – If Apple’s AI features provide a superior experience despite being closed, many consumers won’t care about the underlying model’s openness
- The AI ecosystem is still taking shape – Whether open or closed models dominate will depend on where innovation happens fastest
For developers, the message is clear: build for multiple AI backends rather than committing exclusively to one company’s vision. For consumers, the choice increasingly becomes not just between devices but between philosophies of technology development.
Conclusion: The Transparency Imperative
Apple’s approach to AI transparency represents more than just a technical decision – it reflects the company’s fundamental philosophy about technology development. While Apple has built its empire on controlling the user experience, the collaborative nature of AI progress presents new challenges to this model.
The skepticism from the AI community isn’t simply about technical details – it’s about whether meaningful AI progress can happen in closed ecosystems. As AI becomes more central to computing experiences, the tension between control and collaboration will only intensify.
For Apple, the challenge will be finding a balance that maintains its business advantages while genuinely contributing to AI progress. For the rest of us, the question becomes whether we value integrated experiences more than open innovation – a choice that will shape not just the devices we use, but the future of artificial intelligence itself.
Where This Insight Came From
This analysis was inspired by real discussions from working professionals who shared their experiences and strategies.
- Share Your Experience: Have similar insights? Tell us your story
At ModernWorkHacks, we turn real conversations into actionable insights.