Apple is set to open up its Apple Intelligence system in iOS 27, allowing users to select third-party AI models for core generative tasks instead of relying solely on its own models or those of its initial partner, OpenAI. According to reports, the change will extend to features such as Writing Tools, Image Playground, and broader Siri capabilities, giving users more flexibility across iPhone, iPad, and Mac.
The framework, referred to internally as Extensions, will appear in the Settings app. Developers opt in through compatible App Store applications, and users then assign specific AI services to handle individual functions. Apple has been testing the system with Google and Anthropic, partners it already works with on various infrastructure and development fronts. Its own in-house models will remain available as an option, creating a mixed environment rather than a full handover to outsiders.
This marks a notable shift from the original Apple Intelligence rollout, which centered on a prominent ChatGPT integration that has seen lower adoption than both companies anticipated. The relationship between Apple and OpenAI has cooled further amid reports of talent poaching, highlighting the competitive tensions beneath their earlier collaboration. By broadening access, Apple appears to be hedging against over-reliance on any single external provider while positioning its devices as a more neutral platform for AI services.
Additional planned refinements include the ability to assign distinct voices to different models during Siri interactions, making it easier to distinguish between responses from Apple’s systems and those from services like Claude. A dedicated section in the App Store for compatible AI apps is also expected, along with disclaimers clarifying that Apple bears no responsibility for third-party generated content. These details suggest a cautious approach to integration, acknowledging the risks of entrusting user data, and the quality of generated output, to outside models.
The changes arrive alongside other iOS 27 updates, including a standalone Siri app, Visual Intelligence enhancements in the Camera, improved Photos editing tools, and Wallet customizations. Historically, Apple has moved slowly on opening its ecosystem, often prioritizing control and privacy claims while gradually yielding to user and market pressures. This latest step echoes past concessions, such as allowing alternative app stores in certain regions or supporting RCS messaging, where pragmatism tempered initial resistance.
Critics may view the move as reactive rather than visionary. Apple Intelligence launched with considerable fanfare around on-device processing and privacy, yet real-world usage has exposed limitations in scope and performance. Enabling rival models could improve results for writing, image generation, or voice interactions, but it also introduces new variables around consistency, data handling, and reliability. Users will need to navigate these trade-offs themselves, something the company has traditionally tried to shield them from.
Overall, the expansion reflects Apple’s evolving stance in a fast-moving AI landscape. Rather than doubling down exclusively on proprietary technology, the company is creating space for competition within its tightly controlled environment. Whether this leads to meaningfully better experiences or simply more fragmented options will depend on execution, and on how readily developers and users embrace the new flexibility when iOS 27 arrives this fall.
