100 Highlights from Google I/O 2025

That’s a wrap on Google I/O 2025! From groundbreaking advancements in generative AI to smarter Search, Gemini upgrades, and creative tools that blur the lines between imagination and reality — this year’s I/O was packed with innovations you can explore today.
Let’s dive into 100 of the most exciting announcements, demos, and product launches:
AI-Enhanced Search Gets Smarter and More Helpful
1. AI Mode in Google Search is rolling out across the U.S. with more in-depth responses than ever. Opt in through Search Labs for early access.
2. Deep Search brings research-level responses, helping users dig deeper into complex questions.
3. Search Live, powered by Project Astra, arrives this summer: use your phone’s camera and talk in real time with Google about what you’re seeing.
4. Project Mariner brings agent-like abilities to Search Labs — enabling real-time actions like booking tickets or making reservations.
5. Coming Soon: Smart data analysis. Search will analyze complex financial and sports datasets and visualize them for you.
6. New Shopping Experience: AI Mode integrates with Shopping Graph to help you make smarter buying decisions.
7. Try-On Tool: Upload a photo and see how clothes look on you. It’s now rolling out via Search Labs in the U.S.
8. Smart Checkout Agent: Set your budget and track price drops — a new AI-powered way to shop smarter.
9. AI Overviews now serve 1.5 billion users monthly across 200+ countries.
10. In the U.S. and India, AI Overviews have driven a 10%+ increase in Google Search usage.
11. Gemini 2.5 is now powering both AI Mode and AI Overviews in the U.S., bringing smarter responses everywhere.
Major Gemini Upgrades You Can Use Now
12. New Study Buddy: Ask Gemini to create custom practice quizzes instantly.
13. Gemini Live gets smarter: Soon you’ll be able to add events to Calendar or get info from Maps, Tasks, and Keep during a conversation.
14. Camera & screen sharing for Gemini Live is rolling out to iOS users.
15. Canvas Create Menu: Turn text into infographics, quizzes, web pages, and even audio stories in 45 languages.
16. Deep Research now supports uploading PDFs and images — mixing your data with public insights.
17. Soon: Link Gmail or Drive documents to tailor your research sources.
18. Introducing Agent Mode: Just state your goal, and Gemini will take action for you. Coming soon for Google AI Ultra users.
19. Gemini in Chrome rolls out for Google AI Pro and Ultra users in the U.S.
20. Gemini now has 400 million monthly active users worldwide.
Under the Hood: Advancements in Gemini Models
21. Gemini 2.5 Pro now tops the WebDev Arena and LMArena AI leaderboards.
22. LearnLM is built into Gemini 2.5, making it the most advanced learning model available.
23. Meet Gemini 2.5 Flash: Optimized for speed and complex reasoning tasks — now available in the Gemini app.
24. Coming in June: Flash and Pro versions will roll out to Google AI Studio and Vertex AI.
25. Introducing Deep Think mode: Unlock advanced reasoning for coding and math with Gemini 2.5 Pro.
26. Gemini 2.5 has the strongest security protections yet — dramatically improving defenses against prompt injection attacks.
27. Project Mariner’s computer-use capabilities are coming to the Gemini API and Vertex AI.
28. Thought summaries organize the model’s reasoning into a clear, structured format, now in the Gemini API and Vertex AI.
29. Thinking Budgets let developers manage latency and quality. Now in Flash, coming to Pro soon. (A minimal sketch follows this list.)
30. Native SDK support for Model Context Protocol (MCP) makes integrating Gemini into open-source tools easier than ever. (A second sketch follows this list.)
31. Gemini Diffusion: A new research model that generates text by refining noise — similar to how image generation models work.
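To make item 29 concrete, here’s a minimal sketch of capping a thinking budget through the google-genai Python SDK. The model id and budget value are illustrative assumptions, not the only supported configuration.

```python
# Sketch of a thinking budget (item 29): cap the tokens Gemini 2.5 Flash may
# spend "thinking" to trade answer depth for latency. Assumes the google-genai
# SDK and a GEMINI_API_KEY in the environment.
from google import genai
from google.genai import types

client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model id; check current docs
    contents="List three prime numbers greater than 100.",
    config=types.GenerateContentConfig(
        # 0 disables thinking; larger budgets allow deeper reasoning.
        thinking_config=types.ThinkingConfig(thinking_budget=512),
    ),
)
print(response.text)
```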
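And for item 30, a sketch of what the native MCP support can look like: a connected MCP ClientSession is passed straight into the SDK’s tools list, which then routes tool calls automatically. The stdio server command below is a hypothetical placeholder.

```python
# Sketch of native MCP support (item 30): hand an MCP ClientSession to the
# google-genai SDK as a tool. The server command is a hypothetical placeholder.
import asyncio

from google import genai
from google.genai import types
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

client = genai.Client()
server = StdioServerParameters(command="npx", args=["-y", "@example/mcp-weather"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            response = await client.aio.models.generate_content(
                model="gemini-2.5-flash",  # assumed model id
                contents="What's the weather in Boston right now?",
                config=types.GenerateContentConfig(tools=[session]),
            )
            print(response.text)

asyncio.run(main())
```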
New Plans to Access AI Tools
32. Introducing Google AI Ultra: Our most powerful AI subscription. Includes premium models, YouTube Premium, and 30 TB of storage.
33. Available now in the U.S. for $249.99/month. First-timers get 50% off for three months.
34. Free student upgrades: Students in the U.S., UK, Brazil, Japan, and Indonesia can access premium Gemini features all year long.
35. Google AI Pro is available for $19.99/month, offering core tools like Flow, NotebookLM, and enhanced Gemini access.
Unlocking Creativity with Generative AI
36. Try it now! Veo 3 lets you generate video with audio. Available in the Gemini app (AI Ultra only) and Vertex AI.
37. Veo 2 adds camera controls, outpainting, and object manipulation.
38. Watch films created using Veo models on Flow TV.
39. Try it now! Imagen 4 creates ultra-detailed photorealistic and abstract images — now live in the Gemini app.
40. Imagen 4 is also available in Whisk and Vertex AI.
41. Coming soon: Imagen 4 Fast — up to 10x faster than Imagen 3.
42. Imagen 4 supports a range of aspect ratios and up to 2K resolution for printing and presentations. (A developer sketch follows at the end of this section.)
43. Vastly improved at spelling and typography, making it great for cards, posters, and comics.
44. Flow is our new AI filmmaking tool, letting creators script entire films with full control over scenes and characters.
45. Available now to AI Pro and Ultra users in the U.S.
46. Music AI Sandbox (powered by Lyria 2) is now available to YouTube Shorts creators and Vertex AI users.
47. Lyria 2 supports rich vocal compositions — from solo voices to choirs.
48. Lyria RealTime allows users to create and perform music interactively. Now in Google AI Studio and Vertex AI.
49. Google DeepMind has partnered with Primordial Soup, a new studio by director Darren Aronofsky, to develop AI-powered films.
50. The first film, ANCESTRA, premieres June 13 at the Tribeca Festival — directed by Eliza McNitt.
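For developers, the Imagen controls called out in item 42 surface as simple config fields. Here’s a minimal sketch using the google-genai Python SDK; the model id is an assumption, and 2K output availability may vary by surface.

```python
# Sketch of requesting a specific aspect ratio from Imagen (item 42) via the
# google-genai SDK. The model id is an assumption; check the current docs.
from google import genai
from google.genai import types

client = genai.Client()

result = client.models.generate_images(
    model="imagen-4.0-generate-preview",  # assumed id
    prompt="A minimalist poster with the word HELLO in bold serif type",
    config=types.GenerateImagesConfig(
        number_of_images=1,
        aspect_ratio="16:9",  # portrait and square ratios are also supported
    ),
)
result.generated_images[0].image.save("poster.png")
```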
AI Transparency and Safety
51. We introduced SynthID Detector, a portal to identify AI-generated content using digital watermarks (more details coming soon).
52. And since launch, SynthID has already watermarked over 10 billion pieces of content.
53. We are starting to roll out the SynthID Detector portal to a group of early testers. Journalists, media professionals and researchers can join our waitlist for access.
Take a look at the future of AI assistance
Video demonstrating Google's Project Astra.
54. We’re working to extend our best multimodal foundation model, Gemini 2.5 Pro, to become a “world model” that can make plans and imagine new experiences by understanding and simulating aspects of the world, just as the brain does.
55. Updates to Project Astra, our research prototype that explores the capabilities of a universal AI assistant, include more natural voice output with native audio, improved memory and computer control. Over time we’ll bring these new capabilities to Gemini Live, new experiences in Search, the Live API for developers, and new form factors like Android XR glasses.
56. And as part of our Project Astra research, we partnered with the visual interpreting service Aira to build a prototype that assists members of the blind and low-vision community with everyday tasks, complementing the skills and tools they already use.
57. With Project Astra, we’re prototyping a conversational tutor that can help with homework. Not only can it follow along with what you’re working on, but it can also walk you through problems step-by-step, identify mistakes and even generate diagrams to help explain concepts if you get stuck.
58. This research experience will be coming to Google products later this year and Android Trusted Testers can sign up for the waitlist for a preview.
59. We took a look at the first Android XR device coming later this year: Samsung’s Project Moohan. This headset will offer immersive experiences on an infinite screen.
60. And we shared a sneak peek at how Gemini will work on glasses with Android XR in real-world scenarios, including messaging friends, making appointments, asking for turn-by-turn directions, taking photos and more.
61. We even demoed live language translation between two people, showing the potential for these glasses to break down language barriers.
62. Android XR prototype glasses are now in the hands of trusted testers, who are helping us make sure we’re building a truly assistive product and doing so in a way that respects privacy for you and those around you.
63. Plus we’re partnering with innovative eyewear brands, starting with Gentle Monster and Warby Parker, to create glasses with Android XR that you’ll want to wear all day.
64. We’re advancing our partnership with Samsung to go beyond headsets and extend Android XR to glasses. Together we’re creating a software and reference hardware platform that will enable the ecosystem to make great glasses. Developers will be able to start building for this platform later this year.
Communicate better, in near real time
A demo of Google Beam.
65. A few years ago, we introduced Project Starline, a research project that used 3D video technology to make remote conversations feel like two people were in the same room. Now, it’s evolving into a new platform called Google Beam.
66. We’re working with Zoom and HP to bring the first Google Beam devices to market with select customers later this year. We’re also partnering with industry leaders like Zoom, Diversified and AVI-SPL to bring Google Beam to businesses and organizations worldwide.
67. You’ll even see the first Google Beam products from HP at InfoComm in a few weeks.
68. We announced speech translation, which is available now in Google Meet. This translation feature not only happens in near real-time, thanks to Google AI, but it’s able to maintain the quality, tone, and expressiveness of someone’s voice. The free-flowing conversation enables people to understand each other and feel connected, with no language barrier.
Build better with developer launches
A demo of Jules.
69. Over 7 million developers are building with Gemini, five times more than this time last year.
70. Gemini usage on Vertex AI is up 40 times compared to this time last year.
71. We’re releasing new previews for text-to-speech in 2.5 Pro and 2.5 Flash. These offer first-of-their-kind support for multiple speakers, enabling text-to-speech with two voices via native audio out. Like native audio dialogue, this text-to-speech is expressive and can capture subtle nuances, such as whispers. It works in over 24 languages and seamlessly switches between them. (A minimal sketch follows at the end of this section.)
72. The Live API is introducing a preview version of audio-visual input and native audio-out dialogue, so you can directly build conversational experiences. (A second sketch follows at the end of this section.)
73. Try it now! Jules is a parallel, asynchronous agent for your GitHub repositories to help you improve and understand your codebase. It is now open to all developers in beta. With Jules you can delegate multiple backlog items and coding tasks at the same time, and even get an audio overview of all the recent updates to your codebase.
74. Gemma 3n is our latest fast and efficient open multimodal model that’s engineered to run smoothly on your phones, laptops, and tablets. It handles audio, text, image, and video. The initial rollout is underway on Google AI Studio and Google Cloud with plans to expand to open-source tools in the coming weeks.
75. Try it now! Google AI Studio now has a cleaner UI, integrated documentation, usage dashboards, new apps, and a new Generate Media tab to explore and experiment with our cutting-edge generative models, including Imagen, Veo and native image generation.
76. Colab will soon be a new, fully agentic experience. Simply tell Colab what you want to achieve, and watch as it takes action in your notebook, fixing errors and transforming code to help you solve hard problems faster.
77. SignGemma is an upcoming open model that translates sign language into spoken-language text (it’s strongest at American Sign Language to English), enabling developers to create new apps and integrations for Deaf and Hard of Hearing users.
78. MedGemma is our most capable open model for multimodal medical text and image comprehension, designed for developers to adapt as they build health applications, like analyzing medical images. MedGemma is available now as part of Health AI Developer Foundations.
79. Stitch is a new AI-powered tool to generate high-quality UI designs and corresponding frontend code for desktop and mobile by using natural language descriptions or image prompts.
80. Try it now! We announced Journeys in Android Studio, which lets developers test critical user journeys using Gemini by describing test steps in natural language.
81. Version Upgrade Agent in Android Studio is coming soon to automatically update dependencies to the latest compatible version, parsing through release notes, building the project and fixing any errors.
82. We introduced new updates across the Google Pay API designed to help developers create smoother, safer, and more successful checkout experiences, including Google Pay in Android WebViews.
83. Flutter 3.32 has new features designed to accelerate development and enhance apps.
84. And we shared updates for our Agent Development Kit (ADK), the Vertex AI Agent Engine, and our Agent2Agent (A2A) protocol, which enables interactions between multiple agents.
85. Try it now! Developer Preview for Wear OS 6 introduces Material 3 Expressive and updated developer tools for Watch Faces, richer media controls and the Credential Manager for authentication.
86. Try it now! We announced that Gemini Code Assist for individuals and Gemini Code Assist for GitHub are generally available, and developers can get started in less than a minute. Gemini 2.5 now powers both the free and paid versions of Gemini Code Assist, delivering advanced coding performance and helping developers excel at tasks like creating visually compelling web apps, along with code transformation and editing.
87. Here’s an example of a recent update you can explore in Gemini Code Assist: Quickly resume where you left off and jump into new directions with chat history and threads.
88. Firebase announced new features and tools to help developers build AI-powered apps more easily, including updates to the recently launched Firebase Studio and Firebase AI Logic, which enables developers to integrate AI into their apps faster.
89. We also introduced a new Google Cloud and NVIDIA developer community, a dedicated forum to connect with experts from both companies.
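To ground item 71, here’s a minimal sketch of multi-speaker text-to-speech with the google-genai Python SDK. The preview model id and voice names are assumptions; the output is raw 24 kHz PCM, which we wrap in a WAV container.

```python
# Sketch of multi-speaker TTS (item 71). Assumes the google-genai SDK and a
# GEMINI_API_KEY in the environment; model id and voice names are assumptions.
import wave

from google import genai
from google.genai import types

client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-tts",  # assumed preview id; check docs
    contents=(
        "TTS the following conversation:\n"
        "Joe: Did you watch the keynote?\n"
        "Jane: Every minute of it."
    ),
    config=types.GenerateContentConfig(
        response_modalities=["AUDIO"],
        speech_config=types.SpeechConfig(
            multi_speaker_voice_config=types.MultiSpeakerVoiceConfig(
                speaker_voice_configs=[
                    types.SpeakerVoiceConfig(
                        speaker="Joe",
                        voice_config=types.VoiceConfig(
                            prebuilt_voice_config=types.PrebuiltVoiceConfig(
                                voice_name="Kore"
                            )
                        ),
                    ),
                    types.SpeakerVoiceConfig(
                        speaker="Jane",
                        voice_config=types.VoiceConfig(
                            prebuilt_voice_config=types.PrebuiltVoiceConfig(
                                voice_name="Puck"
                            )
                        ),
                    ),
                ]
            )
        ),
    ),
)

# The model returns raw 16-bit PCM; wrap it in a WAV container to play it.
pcm = response.candidates[0].content.parts[0].inline_data.data
with wave.open("dialogue.wav", "wb") as f:
    f.setnchannels(1)      # mono
    f.setsampwidth(2)      # 16-bit samples
    f.setframerate(24000)  # 24 kHz
    f.writeframes(pcm)
```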
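And for item 72, a sketch of a minimal Live API session that sends one text turn and collects the streamed audio reply. The model id and config shape are assumptions based on the preview.

```python
# Sketch of a minimal Live API session (item 72): one text turn in, streamed
# audio out. Assumes the google-genai SDK; the model id is an assumption.
import asyncio

from google import genai

client = genai.Client()

async def main() -> None:
    async with client.aio.live.connect(
        model="gemini-2.5-flash-preview-native-audio-dialog",  # assumed id
        config={"response_modalities": ["AUDIO"]},
    ) as session:
        await session.send_client_content(
            turns={"role": "user", "parts": [{"text": "Say hello in French."}]},
            turn_complete=True,
        )
        audio = bytearray()
        async for message in session.receive():
            if message.data:  # chunks of raw PCM audio
                audio.extend(message.data)
        print(f"Received {len(audio)} bytes of audio")

asyncio.run(main())
```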
Work smarter with AI enhancements
A video showing Gemini features in Gmail.
90. Gmail is getting new, personalized smart replies that incorporate your own context and tone. They’ll pull from your past emails and files in your Drive to draft a response, while also matching your typical tone so your replies sound like you. Try it yourself later this year.
91. Try it now! Google Vids is now available to Google AI Pro and Ultra users.
92. Try it now! Starting today, we’re making the NotebookLM app available on the Play Store and App Store, to help users take Audio Overviews on the go.
93. Also for NotebookLM, we’re bringing more flexibility to Audio Overviews, allowing you to select the ideal length for your summaries, whether you prefer a quick overview or a deeper exploration.
94. Video Overviews are coming soon to NotebookLM, helping you turn dense information like PDFs, docs, images, diagrams and key quotes into more digestible narrated overviews.
95. We even shared one of our NotebookLM notebooks with you — which included a couple of previews of Video Overviews!
96. Our new Labs experiment Sparkify helps you turn your questions into a short animated video, made possible by our latest Gemini and Veo models. These capabilities will be coming to Google products later this year, but in the meantime you can sign up for the waitlist for a chance to try it out.
97. We’re also bringing improvements based on your feedback to Learn About, an experiment in Labs where conversational AI meets your curiosity.
Finally… we’ll leave you with a few numbers:
99. As Sundar shared in his opening keynote, people are adopting AI more than ever before. As one example: This time last year, we were processing 9.7 trillion tokens a month across our products and APIs. Now, we’re processing over 480 trillion — 50 times more.
100. Given that, it’s no wonder the word “AI” was said 92 times during the keynote. But even “AI” took second place to the word we heard most: Gemini!