Google plans to launch its next-generation artificial intelligence (AI) glasses next year, allowing wearers to use apps without pulling out their phones.
The tech giant announced its AI glasses project earlier this year, and said on Monday that the first glasses will arrive next year.
Here’s what we know.
How it works
One model relies on audio and camera features to interact with Google’s Gemini AI assistant, for example to chat, take photos, or “get help,” the company said.
The other model adds in-lens displays for navigation and translation.
The hardware is being developed with South Korean high-end eyewear brand Gentle Monster, electronics conglomerate Samsung, and American eyewear company Warby Parker.
The glasses will run on Android XR, which powers Google’s mixed-reality devices, the company said.
Google’s latest foray into smart glasses
The announcement marks Google’s return to the smart glasses market after its earlier Google Glass project stalled in 2015, just two years after its rollout.
The original Google Glass was widely criticised for its limited battery life and uncomfortable design, as well as privacy concerns and a lack of public understanding of the product.
The market Google is re-entering is now largely led by Meta. Its Ray-Ban Meta smart glasses, developed with EssilorLuxottica, have become a breakout success.
In September, Meta introduced a display-equipped model that shows messages, photo previews, and live captions through a small lens-embedded screen.
Google is also working on a wired mixed-reality headset known as Project Aura, designed to bring a virtual workspace or entertainment environment anywhere.
The device uses optical see-through technology to blend digital interfaces with the real world in a 70-degree field of view.
Google said it will share more details about the launch of its glasses in 2026.