Google Just Launched Gemini 2.0 Flash and Pro for Users and Developers

More artificial intelligence for the people.


Credit: Google


Google has issued another round of significant AI model announcements, upgrading its Gemini offerings across the board to bring users and developers artificial intelligence engines that are, according to the company, more capable and reliable. In the wake of DeepSeek's rise and new OpenAI models, the pace of AI development isn't slowing down.

First up, the Gemini 2.0 Flash model that appeared in December for a select few is now rolling out to everyone, so you'll see it in the Gemini apps on desktop and mobile (the rollout actually began last week, so you may have already used it). The Flash models are designed to be faster and more lightweight, without too many performance trade-offs.

Google is also making a Gemini 2.0 Flash Thinking Experimental model available for all users to test. It's another "reasoning" model, like the ones we've seen in ChatGPT, where the AI displays its thinking as it goes—with the intention of producing results that are more accurate and more transparent.

There's also a version of this model rolling out to all users that includes access to connected apps: Google Search, Google Maps, and YouTube. It can return real-time information from the web, as well as Google Maps data (including journey times and location details) and information pulled from YouTube videos.

Lastly for the Flash models, Google is making Gemini 2.0 Flash-Lite available to developers. It's the most cost-efficient Gemini model yet—which will appeal to those building tools with Gemini—while still maintaining high levels of processing performance across a variety of multimodal inputs (text, images, and more).

Pro-level models


You need Gemini Advanced to get at some of these models. Credit: Lifehacker

Next up, the even more capable Gemini 2.0 Pro Experimental model is here—a little slower than the Flash equivalents, but better at thinking, writing, coding, and problem solving. This model is now appearing in experimental form for developers, and for any users who are paying $20 a month for Gemini Advanced.

"It has the strongest coding performance and ability to handle complex prompts, with better understanding and reasoning of world knowledge, than any model we’ve released so far," says Google. It can also take in two million tokens per prompt, which averages out at about 1.4 million words—roughly the Bible, twice.
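As a quick back-of-the-envelope check on that figure (assuming the common rule of thumb of roughly 0.7 English words per token and a King James Bible word count of about 780,000—both assumptions, not figures from Google):

```python
# Sanity check on the two-million-token context-window claim.
# Assumption: ~0.7 English words per token (a common rule of thumb,
# not an official Google figure).
TOKENS_PER_PROMPT = 2_000_000
WORDS_PER_TOKEN = 0.7

approx_words = int(TOKENS_PER_PROMPT * WORDS_PER_TOKEN)
print(f"~{approx_words:,} words")  # ~1,400,000 words

# Assumption: the King James Bible runs to roughly 780,000 words.
BIBLE_WORDS = 780_000
print(f"~{approx_words / BIBLE_WORDS:.1f} Bibles")  # ~1.8 Bibles
```

So the window works out to about 1.8 Bibles' worth of text—close enough to "the Bible, twice" for a rule-of-thumb conversion.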

That's double the capacity of the 2.0 Flash models, and Google provided some benchmarks, too. In the general MMLU-Pro benchmark, we've got scores of 71.6 percent, 77.6 percent, and 79.1 percent respectively for Gemini 2.0 Flash-Lite, 2.0 Flash, and 2.0 Pro, compared to 67.3 percent for 1.5 Flash and 75.8 percent for 1.5 Pro.

There are similar improvements across the board on other AI benchmarks, with Gemini 2.0 Pro Experimental hitting a score of 91.8 percent in a leading math test. That compares to 90.9 percent for 2.0 Flash, 86.8 percent for Flash-Lite, 86.5 percent for 1.5 Pro, and 77.9 percent for 1.5 Flash.

As is the norm for AI model launches like this, details are thin on the training data used, hallucination risks and inaccuracies, and energy demands—though Google does say the new Flash models are its most efficient yet, and that all of its latest models are better than ever at incorporating reasoning feedback and at blocking potential safety and security exploits.


David Nield

David Nield is a technology journalist from Manchester in the U.K. who has been writing about gadgets and apps for more than 20 years.
