Google unveils upgraded Gemini Deep Research to power safer, smarter AI agents and apps: 5 things to know


Updated on 12 Dec 2025 • Category: Technology • Author: Scoopliner Editorial Team
Google has made its advanced Deep Research AI agent accessible to developers for the first time.


Google is taking a big leap in making its most capable AI research tools more accessible, and a little more human in how they think. The company has rolled out a new and more powerful version of its Deep Research Agent, now open to developers for the first time, along with a fresh benchmark designed to test how well AI systems handle complex, multi-step web searches. Think of it as giving developers their own mini research assistant, powered by Gemini, that knows how to dig deep, double-check its work, and even admit when it doesn’t have all the answers.
Here are five key things to know about Google’s latest AI push.
Deep Research graduates from the Gemini app
First introduced inside the Gemini app in late 2024, Deep Research is now stepping out into the world. Developers can finally embed Google's most advanced autonomous research features directly into their own products and workflows. The agent doesn't just pull up search results; it thinks through them.
Its workflow resembles how a careful human researcher might operate: it creates queries, reads results, identifies gaps, and then refines its search again. The process repeats until it reaches what it deems a satisfactory conclusion. Google says this iterative approach helps the system produce richer, more complete insights than simple prompt-response models can offer.
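The loop described above — plan a query, read the results, spot the gaps, search again — can be sketched in a few lines. Everything below is a hypothetical stand-in written for illustration; none of these functions are the actual Deep Research internals.

```python
# Illustrative plan -> search -> read -> refine loop of the kind the
# article describes. All functions here are toy stand-ins.

def run_research(question, max_rounds=3):
    """Iterate until no gaps remain or the round budget is spent."""
    findings, gaps = [], [question]
    for _ in range(max_rounds):
        if not gaps:
            break  # the agent deems its conclusion satisfactory
        query = gaps.pop(0)
        results = search(query)          # issue a query
        notes, new_gaps = read(results)  # read results, identify gaps
        findings.extend(notes)
        gaps.extend(new_gaps)            # refine: follow up on gaps
    return findings

# Toy stand-ins so the sketch runs end to end.
def search(query):
    return [f"result for: {query}"]

def read(results):
    notes = [r.upper() for r in results]
    return notes, []  # no further gaps in this toy example

print(run_research("history of MCP"))
```

The point of the structure is the feedback edge: `new_gaps` from one round become the queries of the next, which is what separates this style of agent from a single prompt-response call.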
Built on Gemini 3 Pro
Under the hood, the upgraded agent runs on Gemini 3 Pro, Google’s most advanced multimodal model, which powers the agent’s “reasoning core.” The company says the model has been fine-tuned to minimise hallucinations, a persistent challenge for large language models, while improving the accuracy and quality of long-form research summaries.
In internal testing, this version of Deep Research reportedly outperformed the web search capabilities of Gemini 3 Pro itself. While Google acknowledges that users shouldn’t take every answer as gospel, it argues that Deep Research is invaluable for exploratory information gathering, especially when dealing with unfamiliar topics or cross-domain analysis.
A new open-source benchmark for complex web queries
Alongside the agent, Google is introducing DeepSearchQA, an open-source benchmark that aims to reflect how real research tasks unfold online. Existing benchmarks, the company argues, often test isolated facts, not the step-by-step reasoning humans use to connect dots across multiple sources.
DeepSearchQA includes 900 hand-crafted "causal chain" tasks across 17 subject areas, from history and policy to climate science and health. Each task builds on the previous one, making it a tougher and more realistic test of how well an AI can sustain reasoning over time. Instead of simply checking for factual correctness, the benchmark measures answer completeness: whether the model fully captures the nuances and dependencies of a question.
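A completeness-style score rewards partial coverage rather than a single right/wrong bit. The sketch below is an assumption about what such a metric could look like — the scoring details are not Google's actual DeepSearchQA metric, just a minimal illustration of the idea.

```python
# Hypothetical completeness metric: credit for how many of a task's
# required facts an answer covers. Not the real DeepSearchQA scorer.

def completeness(answer: str, required_facts: list[str]) -> float:
    """Fraction of required facts mentioned in the answer."""
    lowered = answer.lower()
    found = sum(1 for fact in required_facts if fact.lower() in lowered)
    return found / len(required_facts)

score = completeness(
    "The 2010 drought cut wheat yields, which raised bread prices.",
    ["drought", "wheat yields", "bread prices"],
)
print(score)  # 1.0 — all links in the causal chain are covered
```

An answer that names the drought but drops the downstream price effect would score 1/3 here, which is the distinction — chain coverage, not a binary fact check — that the benchmark is described as targeting.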
To support the community, Google is also releasing a dataset, leaderboard, and technical report, allowing developers and researchers to benchmark their own systems against Deep Research.
Developer-friendly features (and what’s coming next)
Developers will have access to a range of powerful tools through the Deep Research API. These include PDF, CSV, and document parsing, structured report templates, granular source citations, and JSON schema outputs that make integration smoother.
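The article mentions JSON-schema-shaped outputs with source citations. As a rough illustration, a caller might supply a schema like the one below and parse the conforming response as ordinary JSON — the field names here (`summary`, `citations`, `url`, `quote`) are assumptions for the sketch, not the documented shape of the Deep Research API.

```python
import json

# Hypothetical JSON schema a caller might supply for a structured
# research report with citations. Field names are illustrative only.
report_schema = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "citations": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "url": {"type": "string"},
                    "quote": {"type": "string"},
                },
                "required": ["url"],
            },
        },
    },
    "required": ["summary", "citations"],
}

# A response shaped to that schema parses as ordinary JSON, so the
# integration code stays free of ad-hoc text scraping.
raw = ('{"summary": "Gemini 3 Pro powers the agent.", '
       '"citations": [{"url": "https://example.com"}]}')
report = json.loads(raw)
print(report["summary"])
```

Schema-constrained output is what makes the "smoother integration" claim concrete: downstream code can index into known keys instead of parsing free-form prose.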
Future updates are set to add native chart generation, allowing the agent to visualise data on its own, and expanded support for the Model Context Protocol (MCP), which lets developers plug in custom data sources. The upgraded Deep Research is also slated to roll out soon across Google Search, NotebookLM, and Google Finance, making its capabilities available to end users more broadly.
A new API for the age of “thinking” models
To tie everything together, Google has also unveiled the Interactions API, a new standard for connecting AI models like Gemini 3 Pro and agents like Deep Research. Available in public beta via Google AI Studio, it replaces the simple request-response style of the older generateContent interface with something more dynamic and stateful.
The new API supports server-side session management, nested message structures, background execution for long-running tasks, and built-in MCP compatibility. Google describes it as an essential step towards more autonomous, persistent AI systems, the kind that can remember context, plan multi-step reasoning, and handle complex workflows without constant human supervision.
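The difference between the old request-response style and a stateful session can be shown with a toy class. The `Session` below is a local illustration of the concept only — it is not the Interactions API, whose actual surface lives in Google AI Studio.

```python
# Toy contrast: a server-side session keeps conversation state across
# turns, unlike a stateless generateContent-style call. Hypothetical.

class Session:
    """Accumulates turn history, like a server-managed session."""

    def __init__(self):
        self.history = []  # list of (role, text) tuples

    def send(self, message: str) -> str:
        self.history.append(("user", message))
        turns = sum(1 for role, _ in self.history if role == "user")
        reply = f"reply #{turns}"  # stand-in for a model response
        self.history.append(("model", reply))
        return reply

s = Session()
s.send("Plan a research task.")
print(s.send("Continue where you left off."))  # reply #2
```

Because the second call sees the first turn in `history`, "continue where you left off" is answerable at all — that persistence, held server-side rather than resent by the client each time, is the shift the Interactions API is described as standardising.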
Google’s latest upgrades mark a shift in how the company sees AI agents: not just as text generators, but as active researchers that can assist in building smarter, safer, and more reliable systems. With Deep Research now open to developers and a benchmark to keep them honest, Google is clearly betting on a future where AI doesn’t just answer questions, it learns how to ask better ones.

Source: India Today   •   12 Dec 2025
