Google May Introduce 'Live for AI Mode' in Lens: What It Is and How It Works

06 May 2025
6 min read

News Synopsis

Google is reportedly working on a new feature called "Live for AI" mode, which could be integrated into Google Lens in the near future. The feature is expected to significantly expand Lens's current capabilities by letting users share their screens with Google's artificial intelligence (AI), which would then provide real-time information about whatever is displayed, a notable step beyond Lens's existing camera-only functionality.

What is Live for AI Mode in Google Lens?

Google Lens currently lets users point their smartphone's camera at an object or scene to receive contextual information about it. The function has become increasingly useful, helping users identify landmarks, objects, plants, and even text. With the upcoming "Live for AI" mode, users will also be able to share their screens with Google's AI, allowing Lens to provide real-time contextual insights about on-screen content in addition to the camera view.

For example, if a user is browsing a webpage or using an app, the "Live for AI" mode could analyze the content on the screen and offer relevant information or suggestions. This is an evolution of Lens's current function, which is limited to recognizing physical objects through the camera.
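Google has not published an API for the rumored mode, but the underlying flow the report describes, capturing what is on screen and sending it to a multimodal model for a one-shot answer, can be sketched with the public google-generativeai Python SDK. Everything in this sketch is an assumption for illustration: the model name, the prompt, and the use of a local screenshot as a stand-in for Lens's screen sharing. It is not Google's implementation.

```python
# Hypothetical sketch of a "Live for AI"-style query: grab the current
# screen and ask a multimodal model about it in a single, stateless call.
# This is NOT Google's implementation; it only illustrates the idea using
# the public google-generativeai SDK and Pillow.
import google.generativeai as genai
from PIL import ImageGrab  # screen capture; supported on Windows and macOS

genai.configure(api_key="YOUR_API_KEY")          # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model choice

def ask_about_screen(question: str) -> str:
    """Send one screenshot plus a question; no conversation state is kept."""
    screenshot = ImageGrab.grab()  # stand-in for Lens's screen sharing
    response = model.generate_content([question, screenshot])
    return response.text

print(ask_about_screen("What product is shown on this page?"))
```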

How Does Live for AI Mode Compare to Google’s Project Astra?

The "Live for AI" mode in Google Lens shares some similarities with another Google project—Project Astra (also known as Gemini Live). Both systems are designed to facilitate contextual interactions using a combination of visual inputs and on-screen elements. However, the core purposes of these two technologies differ slightly.

  • Live for AI Mode: This mode focuses primarily on improving search capabilities. It allows users to obtain real-time information related to on-screen content, such as text or images displayed on their phones. The feature’s main aim is to help users find relevant details quickly by analyzing what is presented on the screen.

  • Project Astra: In contrast, Astra, developed by Google DeepMind, serves as a more comprehensive AI assistant. It is designed to interact with users in a broader context, supporting text, voice, images, and video. Astra emphasizes real-time assistance and conversational engagement, while Live for AI is more narrowly focused on gathering and delivering information.

Limitations of Live for AI Mode

Despite the potential benefits, Google’s Live for AI mode comes with one significant limitation: it does not allow for follow-up questions. Each query will be treated as a new, independent search, which means that the AI will not retain context from previous interactions. This lack of continuity may limit the ability to engage in deeper or more complex conversations with the AI, as users will have to start fresh with each new query.
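In code terms, this is the difference between independent one-shot calls and a chat session that carries history. Below is a minimal sketch of that contrast, again using the public google-generativeai SDK as a stand-in; the session behavior attributed to each feature here is an assumption based on the report, not confirmed behavior.

```python
# Contrast between the reported "Live for AI" behavior (each query is a
# fresh, context-free search) and an Astra-style session that keeps history.
# Hypothetical illustration using the public google-generativeai SDK.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")          # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model choice

# "Live for AI"-style: two independent calls; the second cannot resolve
# "it" because no context is retained between queries.
first = model.generate_content("What plant has spiky oval leaves?")
second = model.generate_content("How often should I water it?")  # 'it' is lost

# Astra-style: a chat session keeps the history, so follow-ups work.
chat = model.start_chat()
chat.send_message("What plant has spiky oval leaves?")
followup = chat.send_message("How often should I water it?")  # 'it' resolves
print(followup.text)
```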

What is Google’s Project Astra?

Google’s Project Astra, or Gemini Live, is a multimodal AI agent created by Google DeepMind. Unlike the more search-focused Live for AI mode, Project Astra is a fully integrated AI assistant designed to communicate and engage with users across multiple platforms, including text, voice, images, and video. The project’s aim is to create an AI that can offer real-time responses and engage in deeper interactions by drawing on both visual and contextual inputs.

A key feature of Project Astra is its ability to combine real-world cues (such as images and videos) with internet-based data to provide a more natural and human-like experience. This interaction style aims to make conversations with the AI more intuitive and responsive to users’ needs.

Key Differences Between Live for AI Mode and Project Astra

Feature             | Live for AI Mode                   | Project Astra
--------------------|------------------------------------|--------------------------------------------
Core Function       | Information retrieval and search   | Real-time assistance and interaction
Focus               | Search-centric, on-screen content  | Multimodal interaction (text, voice, image)
Follow-up Questions | No                                 | Yes
Platform            | Google Lens                        | DeepMind's multimodal AI assistant

The Future of Google Lens and AI

With the introduction of the "Live for AI" mode, Google is pushing AI further into real-time information retrieval and the everyday user experience. Integrating AI with both physical objects (via the camera) and digital content (via screen sharing) opens up new possibilities for seamless interactions. While Project Astra caters to more advanced real-time assistant functionality, Live for AI mode appears tailored to users who want quick, context-driven insights about what is on their screens.