Apple is expected to announce a series of feature updates for its AirPods lineup during the WWDC 2025 keynote on June 9, according to reports. The updates aim to expand how users interact with their wireless earbuds, with a particular focus on hands-free control, sleep detection, and broader device compatibility.
Among the most notable additions are new head gestures that build on functionality first introduced in the AirPods Pro 2 and AirPods 4. These gestures, which include nodding or shaking the head, are designed to let users answer or reject phone calls, or respond to notifications without physically touching the AirPods or their connected device.
Apple is also expected to enhance the Conversation Awareness feature, which automatically lowers media volume and reduces ambient noise when someone is speaking directly to the user. Currently, users need to manually end this mode by pressing or swiping on the AirPods stem, but the upcoming update may allow a head gesture to end Conversation Awareness, creating a more seamless experience.
Another anticipated feature is automatic sleep detection. When AirPods recognize that a user has fallen asleep, playback could pause automatically. This would likely leverage existing sleep-tracking data from an Apple Watch, enhancing integration between the two wearables.
Further expanding the utility of AirPods, Apple is said to be working on a feature that lets users remotely control an iPhone's or iPad's camera by tapping an AirPods stem, essentially turning the earbuds into a wireless shutter button for taking photos.
Apple is also targeting improvements in AirPods pairing for education and shared environments, particularly on iPads used by multiple people. Streamlining the pairing process could address a common pain point in classrooms and other shared-device settings.
One of the more intriguing developments is a new “studio microphone” mode aimed at content creators. This feature would focus on delivering higher-quality voice recordings, potentially as part of a broader “Audio Mix” suite that Apple is rumored to be building into the iPhone 16. The technology would use machine learning to isolate the user’s voice from background noise during video recordings, offering improved clarity without the need for external microphones.
These feature enhancements come as Apple prepares to reframe how it labels its operating systems. Starting this year, the company is expected to replace sequential version numbers with year-based branding: the release that would have been iOS 19, for instance, will instead be called iOS 26. Similar changes will apply across Apple's platforms, including iPadOS 26, macOS 26, watchOS 26, tvOS 26, and visionOS 26.
While Apple may revise or withhold some of these AirPods features before their official announcement, the focus at WWDC appears to be less about showcasing generative AI and more about refining user experience across its ecosystem. If these features make the cut, they would mark another step in Apple’s ongoing effort to make AirPods a more versatile tool—not just for listening, but for productivity, accessibility, and daily interaction.