Google’s Gemini Live AI assistant gains new features for better user interaction

Google is enhancing its AI assistant, Gemini Live, with a series of new features aimed at improving user interaction. Starting next week, Gemini Live will be able to highlight items on your screen while you share your camera, making it easier to identify tools or objects. The enhancement is designed to speed up hands-on projects: users can simply point their camera at a collection of tools, and Gemini Live will indicate the right one for the task, reports 24brussels.

The feature will debut on the newly announced Pixel 10 devices, set to launch on August 28th. Visual guidance will roll out to other Android devices at the same time, and to iOS users in the coming weeks. Beyond the visual capabilities, Google is also expanding Gemini Live's functionality to allow interaction with apps such as Messages, Phone, and Clock.

This means that a user discussing directions with Gemini could, for instance, shift the conversation to texting a friend about running late. Google says users will be able to prompt the AI to draft such messages on their behalf.

Google is also introducing an updated audio model for Gemini Live that aims to make the chatbot's speech more natural by emulating key aspects of human delivery, such as intonation and rhythm. Gemini will soon adapt its tone to the context; for instance, it may use a calmer voice when discussing stressful topics.

Users will also be able to control the speed at which Gemini speaks, similar to features available in other AI voice assistants. And when asked for a dramatic narration, the chatbot may even adopt characters' accents for a more engaging storytelling experience.