Unveiled at Google I/O, Project Aura is a collaboration between Xreal and Google, and the second Android XR device to arrive after Samsung’s Galaxy XR headset. Expected in 2026, these smart glasses stretch the traditional definition of the term.
During a recent demo, I kept wondering whether Project Aura counts as a headset or as smart glasses. At first glance, it looks like a pair of oversized sunglasses, but the cable running to a combined battery pack and trackpad suggests otherwise. Google representatives describe it as a headset disguised as glasses, calling it “wired XR glasses.”
The device connects wirelessly to a laptop, letting me create an expansive virtual workspace with a 70-degree field of view. I launched Lightroom on the virtual desktop while streaming YouTube in another window. I also played a 3D tabletop game, manipulating the board with simple hand gestures. And when I focused on a painting, Circle to Search activated and Gemini gave me details about the artwork and its artist.
Similar experiences are possible on the Vision Pro and Galaxy XR, but Project Aura is a less obtrusive alternative; you could wear it in public without attracting much attention. It doesn’t overlay digital information onto the physical world the way true augmented reality does, though. Instead, it projects apps in front of your view, much like the Galaxy XR.
According to a Google spokesperson, everything I tested on Project Aura was originally built for Galaxy XR, and it all ran without any apps or features being redesigned for Aura’s form factor. That’s a significant advantage.
App availability is one of the XR space’s biggest challenges. Devices like the Meta Ray-Ban Display and the Vision Pro launched with few third-party apps, giving consumers little reason to buy in. Developers have to prioritize which platforms to support, which stifles innovation from smaller companies eager to experiment and compete.
That’s what makes Android XR fascinating. Smaller players, like Xreal, can access apps developed for Samsung’s headset. Android apps will also work on the AI glasses launching next year from Warby Parker and Gentle Monster.
“I think this is probably the best thing for all the developers. You just don’t see any fragmentation anymore. And I do believe there will be more and more devices converging together. That’s the whole point of Android XR,” says Xreal CEO Chi Xu.

Slipping on Google’s latest prototype AI glasses, I’m treated to an Uber demo in which a fictional version of me is hailing a ride from JFK Airport. A rep summons an Uber on a phone, and an Uber widget pops up on the glasses display. It shows the estimated pickup time, plus the driver’s car model and license plate. If I look down, a map of the airport appears with real-time directions to the pickup zone.
It’s all powered by Uber’s regular Android app, meaning Uber didn’t have to code an Android XR app from scratch. In theory, users could simply pair the glasses and start using the apps they already have.
When I’m prompted to ask Gemini to play some music, a YouTube Music widget pops up, showing the title of a funky jazz mix and media controls. It’s also just using the YouTube Music app on an Android phone.
I’m asked to tell Gemini to take a photo with the glasses. A preview of it appears in the display and on a paired Pixel Watch. The idea is that integrating smartwatches gives users more options. Say someone wants audio-only glasses with a camera. They can now take a picture and view what it looks like on the wrist. It’ll work on any compatible Wear OS watch.

I also try live translations where the glasses detect the language being spoken. I take Google Meet video calls. I get Nano Banana Pro to add K-pop elements to another photo I’ve taken. I try a second prototype with a display in both lenses, enabling a larger field of view. (These are not coming out next year.) I watch a 3D YouTube video.
It’s all impressive. I hear a few spiels about how Gemini truly is the killer app. But my jaw really drops when I’m told next year’s Android XR glasses will support iOS.
“The goal is to give this ability to have multimodal Gemini in your glasses to as many people as possible. If you’re an iPhone user and you have the Gemini app on your phone, great news. You’re gonna get the full Gemini experience there,” says Juston Payne, Google’s director of product management for XR.
Payne notes that this will be broadly true across Google’s iOS apps, such as Google Maps and YouTube Music. The limitations on iOS will mostly involve third-party apps. But even there, Payne says the Android XR team is exploring workarounds. At a time when wearable ecosystem lock-in is at an all-time high, this is a breath of fresh air.
Google’s use of its existing Android ecosystem is an astute move that could give Android XR an edge over Meta, which currently leads in hardware but has only just opened its API to developers. It also ramps up the pressure on Apple, which has fallen behind on both the AI and glasses fronts. Making things interoperable between device form factors? Frankly, it’s the only way an in-between device like Project Aura has a shot.
“I know we can make these glasses smaller and smaller in the future, but we don’t have this ecosystem,” adds Xu, Xreal’s CEO. “There are only two companies right now in the world that can really have an ecosystem: Apple and Google. Apple, they’re not going to work with others. Google is the only option for us.”
Google is trying to avoid past mistakes. It’s deliberately partnering with other companies to make the hardware. It’s steering clear of the conspicuous design of the original Google Glass. It has apps pre-launch. The prototypes explore multiple form factors — audio-only and displays in one or both lenses.
Payne doesn’t dodge when I ask the big cultural question: How do you discourage glassholes?
“There’s a very bright, pulsing light if anything’s being recorded. So if the sensor is on with the intent to save anything, it will tell everyone around,” says Payne. That includes queries to Gemini for any task involving the camera. On and off switches will have clear red and green markings, so wearers can show the people around them that the glasses really aren’t recording. Payne says Android’s and Gemini’s existing permissions frameworks, privacy policies, encryption, data retention, and security guarantees will also apply.
“There’s going to be a whole process for getting certain sensor access so we can avoid certain things that could happen if somebody decides to use the camera in a bad way,” Payne says, noting Google’s taking a conservative approach to granting third parties access to the cameras.
On paper, Google is making smart moves that address many of the challenges inherent to this space. It sounds good, but that’s easy to say before these glasses launch. A lot could change between now and then.