Google is making Gemini AI part of everything you do with your smartphone — here’s how

Themearound
3 min read · Aug 14, 2024


Google showed off a lot of impressive hardware at the Made by Google event this year, with new Pixel smartphones, earbuds, and smartwatches. But the company's Gemini AI model was arguably the real star, playing a central or supporting role in nearly every feature unveiled.

We’ve put together the most notable, interesting, and quirky ways Gemini will be a part of Google’s mobile future.

Gemini Live

The most up-front appearance of Gemini came in the form of Gemini Live, which, as the name implies, breathes life into the AI assistant and lets it act much more human. Not only can you talk casually to Gemini without formal commands, but you can also interrupt its response and steer the conversation in a new direction without having to start over. Plus, with ten new voice options and a better speech engine, Gemini Live is closer to a phone call with a friend or personal assistant than its more robotic forebears.

Pixel Screenshots

Screenshots is a banal name for what is a significant element of the new Pixel 9 smartphone series. The native app uses the Gemini Nano model built into the phone to automatically turn your images into a searchable database. The AI can essentially process an image the way a human would.

For example, say you take a picture of a sign with the details of an event. When you open that picture, Gemini will offer options to add the event to your calendar, map a route to its location, or even open a webpage listed on the sign. The AI also enhances more routine searches, like looking for pictures of a spotted dog or a brick building.
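Google hasn't said exactly how Screenshots works under the hood, but for a rough sense of the pipeline, here is a deliberately simplified Kotlin sketch: it recognizes text in an image on-device with ML Kit, then offers a standard Android calendar intent. The crude "first non-blank line becomes the title" step is a hypothetical stand-in for the far smarter understanding Gemini Nano actually provides.

```kotlin
// Illustrative sketch only, not Google's actual Pixel Screenshots implementation.
// It shows the general shape: recognize text in an image on-device, then
// surface an "add to calendar" action. The event parsing here is a crude,
// hypothetical stand-in for what Gemini Nano does far more intelligently.
import android.content.Context
import android.content.Intent
import android.graphics.Bitmap
import android.provider.CalendarContract
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

fun suggestCalendarAction(context: Context, screenshot: Bitmap) {
    // ML Kit's on-device text recognizer pulls the raw text out of the image.
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
    val image = InputImage.fromBitmap(screenshot, /* rotationDegrees = */ 0)

    recognizer.process(image)
        .addOnSuccessListener { visionText ->
            val text = visionText.text
            // Hypothetical heuristic: treat the first non-blank line as the event
            // title. A real assistant would extract a proper title, date, and place.
            val title = text.lineSequence().firstOrNull { it.isNotBlank() }
                ?: return@addOnSuccessListener

            // Hand off to the system calendar with a standard insert intent.
            val intent = Intent(Intent.ACTION_INSERT).apply {
                data = CalendarContract.Events.CONTENT_URI
                putExtra(CalendarContract.Events.TITLE, title)
                putExtra(CalendarContract.Events.DESCRIPTION, text)
            }
            context.startActivity(intent)
        }
        .addOnFailureListener {
            // A real app would log or surface the recognition failure here.
        }
}
```

The details here are ours, not Google's, but the shape of it (the image is read and understood on the phone before an action is suggested) is the part that matters.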

Pixel Studio

Google is using Gemini and its new smartphones to try to gain an edge in the fast-growing AI image generation market with the Pixel Studio app. The text-to-image tool pairs on-device processing via Gemini Nano with cloud models like Imagen 3 to create images faster than the standard web portals.
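Google didn't explain how the app splits work between the phone and the cloud, so treat the following as pure speculation: one common way to pair a fast on-device model with a heavier cloud model is an on-device-first, cloud-fallback pattern. Everything in this Kotlin sketch is hypothetical, and none of these types correspond to real Pixel or Imagen APIs.

```kotlin
// Hypothetical sketch of an "on-device first, cloud fallback" image generator.
// None of these types correspond to real Pixel Studio or Imagen APIs.
import kotlinx.coroutines.withTimeoutOrNull

interface ImageGenerator {
    // Returns encoded image bytes, or null if generation fails.
    suspend fun generate(prompt: String): ByteArray?
}

class HybridImageGenerator(
    private val onDevice: ImageGenerator,      // stand-in for a local model on the phone
    private val cloud: ImageGenerator,         // stand-in for a cloud model such as Imagen 3
    private val onDeviceBudgetMs: Long = 3_000 // how long the local attempt gets to run
) : ImageGenerator {
    override suspend fun generate(prompt: String): ByteArray? {
        // Try the fast local path first; if it fails or exceeds its time budget,
        // fall back to the slower but more capable cloud model.
        val local = withTimeoutOrNull(onDeviceBudgetMs) { onDevice.generate(prompt) }
        return local ?: cloud.generate(prompt)
    }
}
```

Whether Pixel Studio actually splits the work this way is anyone's guess; the point is simply that this kind of hybrid setup is one way to beat the round trip of a purely server-side web portal.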

The app also includes a menu for changing the image style. The biggest caveat is that it won't generate human faces. Google didn't say whether that's a response to the controversy earlier this year or simply a case of erring on the side of caution.

Add Me

Another image-based AI feature Google announced is almost the inverse of the face-shy Pixel Studio. Add Me uses AI to create a (mostly) seamless group photo that includes the person taking the photo.

All it takes is for the photographer to swap places with someone else. The AI then guides the new photographer in lining up a second shot and composites the two photos into a single image with everyone in it.

Pixel Weather and more

The Pixel Weather app is arguably both the least necessary use of Gemini's advanced AI and the one people will probably use most often. The Gemini Nano model produces customized weather reports tailored to what each user wants to see in the app, simplifying customization in subtle but very real ways. There were plenty of other, smaller AI highlights throughout the presentation as well.

For instance, Android users can overlay Gemini on their screens and ask questions about what's visible. Meanwhile, the new Research with Gemini tool will tailor research reports to specific questions, which will probably see the most use in academic settings. Other examples aren't out just yet, but Android phones will soon be able to share what they find using the Circle to Search feature.

Originally published at https://www.themearound.com on August 14, 2024.
