The artificial intelligence wars are heating up. A week to the day after OpenAI released its o1 model for public use, Silicon Valley mega player Google is today offering a sneak peek at its Gemini 2.0 model. In a blog post credited to Google CEO Sundar Pichai, the company asserts that 2.0 is its most advanced model yet, with the ability to natively generate image and audio output. "It would help us continue to develop new AI agents that guide us towards our ambition of having a single AI helper," said Pichai.
Google is doing something different with Gemini 2.0. Instead of opening today's preview with the latest and most sophisticated version of the model, Gemini 2.0 Pro, the search giant is starting with 2.0 Flash only. As of today, all Gemini users have access to the more efficient, and therefore cheaper, model. If you want to try it yourself, you can enable Gemini 2.0 from the dropdown menu in the Gemini web client, with availability within the mobile app to follow at a later date.
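If you would rather experiment programmatically than through the web client, a minimal sketch using Google's google-generativeai Python SDK might look like the following. Note that the model identifier "gemini-2.0-flash-exp" and its availability through the public Gemini API are assumptions on my part rather than details confirmed in Google's announcement.

```python
# Minimal sketch: calling a Gemini Flash model through the google-generativeai SDK.
# The model name "gemini-2.0-flash-exp" is an assumed identifier for the new model;
# swap in whichever identifier Google actually exposes for 2.0 Flash.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # use your own Gemini API key

model = genai.GenerativeModel("gemini-2.0-flash-exp")
response = model.generate_content(
    "Summarise what native audio output means for an AI assistant."
)
print(response.text)
```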
As for the future, Google's main priority is to bring 2.0's intelligence to Search (as you would expect), beginning with AI Overviews. The new model, the company says, will enable the feature to answer more complex, multi-step queries, such as advanced math problems and coding questions. Additionally, following an expansion in October, Google will bring AI Overviews to more languages and countries.
Further ahead, Gemini 2.0 has brought improvements to some of Google's more futuristic AI efforts, like Project Astra, the multi-modal AI assistant the company showcased at I/O '24. According to Google, the new model enables the most recent version of Astra to converse in multiple languages and switch between them with ease. It can also "remember" things for longer, has lower latency, and can use tools such as Google Lens and Maps.
As you might imagine, Gemini 2.0 Flash performs considerably better than Gemini 1.5 Flash. For example, it scored 63 per cent on HiddenMath, a benchmark that tests how well AI models can solve competition-level math problems. On the same test, Gemini 1.5 Flash scored 47.2 per cent, according to the press release. More interesting, though, is that the experimental version of Gemini 2.0 Flash outperforms Gemini 1.5 Pro in many areas, according to the data Google provided, with long-context understanding and automatic speech translation being the only areas where the experimental model lags behind Gemini 1.5 Pro.
That is why Google is keeping the older model around for at least a little while longer, so that folks such as myself have a chance to get our heads around the changes. Alongside the Gemini 2.0 release, the company also launched Deep Research today, which uses Gemini 1.5 Pro's long context window to generate in-depth reports on complex topics.