Google Makes Gemini 3 Flash Its Default AI, Raising the Stakes in the Global AI Race
Google escalates the AI competition by setting Gemini 3 Flash as the default AI in its app and Search, challenging OpenAI's GPT-5.2 with speed and efficiency.
Google is intensifying its efforts in the artificial intelligence arena by introducing Gemini 3 Flash, a new, rapid AI model. It is now the default model for both the Gemini app and AI Mode in Google Search. Launched just a month after Gemini 3, the Flash version is designed to be faster, more efficient, and cheaper to run. Google asserts that it surpasses Gemini 2.5 Pro in performance while operating three times faster.
This announcement highlights Google's ambition to bring sophisticated AI to a broad audience of everyday users. Simultaneously, it puts pressure on competitors such as OpenAI, which recently launched GPT-5.2 in an effort to regain market leadership. Google's strategy with Gemini 3 Flash appears to be centered on the idea that speed, efficiency, and accessibility can be as crucial as sheer processing power.
Faster, Cheaper, and Powerful
Gemini 3 Flash is engineered to be Google's fastest and most cost-effective AI model to date. The company reports that it offers a "significant" performance improvement over Gemini 2.5 Flash, which was released earlier in the year. Benchmark data appears to back this claim. On the Humanity’s Last Exam benchmark—which evaluates general reasoning skills—the new model achieved a score of 33.7% without using external tools, well above Gemini 2.5 Flash’s score of 11%.
Notably, its performance approaches that of premium competitors. Gemini 3 Pro scored 37.5% on the same test, while OpenAI’s GPT-5.2 reached 34.5%. On the multimodal MMMU-Pro benchmark, Gemini 3 Flash actually leads with a score of 81.2%, surpassing all competing models.
Default Model Across Gemini and Search
Google is immediately deploying Gemini 3 Flash at scale by making it the default model in the Gemini app and AI Mode in Search globally. Although users retain the option to switch to Gemini 3 Pro for tasks requiring complex mathematics or coding, Flash will handle the majority of routine inquiries.
The model is designed for multimodal comprehension, enabling users to integrate text, images, video, and audio within a single prompt. Gemini 3 Flash can deliver context-aware responses—often enhanced with visuals like charts and tables—whether it's analyzing brief sports clips, interpreting sketches, or extracting insights from audio recordings. It can even assist creators in prototyping applications directly within the Gemini interface.
Strong Adoption by Developers and Enterprises
Beyond individual consumers, Gemini 3 Flash is gaining momentum among developers and enterprise clients. Google indicates that companies like JetBrains, Figma, Cursor, Harvey, and Latitude are integrating the model via Vertex AI and Gemini Enterprise. Developers can also gain access to it in preview through the API or Google’s new coding assistant, Antigravity.
Pricing is set at $0.50 per million input tokens and $3.00 per million output tokens. While this is slightly more expensive than Gemini 2.5 Flash, Google points out that the model utilizes approximately 30% fewer tokens for "thinking tasks," which improves overall efficiency.
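To make the pricing concrete, here is a minimal sketch of how a developer might estimate per-request API cost at the quoted rates. The function name and the sample token counts are illustrative, not from Google's documentation; only the $0.50 and $3.00 per-million-token rates come from the article.

```python
def gemini3_flash_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD at the quoted Gemini 3 Flash rates.

    Rates from the announcement: $0.50 per million input tokens,
    $3.00 per million output tokens.
    """
    INPUT_RATE = 0.50 / 1_000_000
    OUTPUT_RATE = 3.00 / 1_000_000
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE


# Hypothetical request: 10,000 input tokens, 2,000 output tokens.
print(f"${gemini3_flash_cost(10_000, 2_000):.4f}")  # → $0.0110
```

Note that the ~30% reduction in "thinking" tokens Google cites would shrink the output-token term, which dominates the cost here, so effective per-query spend can come out below what the raw rates suggest.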
Renewed Competition with OpenAI
This launch occurs amidst increasing competition in the AI sector. Earlier reports indicated that OpenAI issued an internal "Code Red" following signs of slowing traffic to ChatGPT, which was then followed by the swift release of GPT-5.2. By making Gemini 3 Flash the default for millions of users, Google is clearly aiming to regain a competitive advantage.
As both companies race to deliver faster and smarter AI, the battle for dominance is only intensifying.