Google I/O and Microsoft Build Updates
Also: Google reveals $250 per month ‘AI Ultra’ plan

Hello!
The Google I/O and Microsoft Build events this week have been exciting for AI, painting a vivid picture of an agent-first, multi-model future. Google pushed Gemini 2.5 Pro across its stack, slipped a conversational “AI Mode” into Search, and showed Project Astra spotting lost glasses before users even ask. Android 16 picked up on-device Gemini Nano while the Gemini app gained Hollywood-grade Imagen 4 and Veo 3 media tricks. Microsoft answered with openness: Azure AI Foundry now hosts nearly two thousand models—including xAI’s Grok 3—while new tools like NLWeb and Copilot Tuning promise to sprinkle chat interfaces and fine-tuned agents into every website and workflow. The takeaway? Google is doubling down on integrated experiences, Microsoft on an everything-marketplace—both racing to make AI ubiquitous and, crucially, useful.
Sliced just for you:
🆕 Google reveals $250 per month ‘AI Ultra’ plan
🎨 Google Launches Imagen 4 & Veo 3
🌐 Google Reimagines Search with AI Mode
🛠️ Project Astra Becomes Proactive
📱 Android 16 & Gemini Nano On-Device APIs
🤝 Azure Welcomes Grok 3 & Rival Models
🔍 Microsoft NLWeb Powers Chat Search Everywhere
🧑‍💼 Copilot Tuning & Multi-Agent Orchestrator
🧩 Azure Adds 1,900 Models & Coding Agents
🆕 Google reveals $250 per month ‘AI Ultra’ plan
Google has unveiled its premium AI Ultra plan at $249.99 per month, offering exclusive access to its most advanced models and the highest usage tiers across applications like Gemini, NotebookLM, Whisk, and the new video-generation tool Flow. Subscribers can explore Gemini 2.5 Pro’s “Deep Think” mode for complex problem-solving, gain early access to Gemini in Chrome for in-browser AI assistance, and trial Project Mariner, which automates up to ten tasks simultaneously. The plan also bundles YouTube Premium and up to 30 TB of cloud storage.
🎨 Google Launches Imagen 4 & Veo 3
Google added high-fidelity Imagen 4 image generation and Veo 3 4K/60-fps video creation to the Gemini app. Users can refine scenes with draggable keyframes, and every frame carries SynthID watermarking. Enterprise access via Vertex AI is slated for Q3, with pricing pitched below rival media APIs.
🌐 Google Reimagines Search with AI Mode
Google’s new opt-in “AI Mode,” driven by Gemini 2.5 Pro, fires off dozens of queries in parallel and returns a conversational answer complete with source cards and follow-up prompts. The experience handles itinerary planning and product picks while still surfacing ads, and Google hints it will become the default once engagement proves higher than classic links. Early testers praise richer context but note longer load times—underscoring compute costs Google must balance against revenue.
🛠️ Project Astra Becomes Proactive
Google’s universal assistant prototype now watches, listens and acts unprompted. Running on Gemini and previewed on Pixel phones and AR glasses, Astra can locate misplaced keys through the camera, flag coding errors seen on-screen or suggest breaks based on calendar context. New developer APIs expose its memory graph and visual grounding, though Google warns of potential battery trade-offs.
📱 Android 16 & Gemini Nano On-Device APIs
The Android 16 preview debuts ML Kit GenAI APIs that tap a 1.5 GB Gemini Nano 2 model for offline summarisation, tone fixes and image descriptions. Gemini now screens scam calls, powers system-wide video captions and underpins new accessibility features in TalkBack. Beta 2 lands next month ahead of the Pixel 10 launch.
🤝 Azure Welcomes Grok 3 & Rival Models
Microsoft expanded Azure AI Foundry to include xAI’s Grok 3 family, Meta’s latest Llama and models from startups Mistral and Black Forest Labs, pushing the catalogue past 1,900 models. Grok 3 offers a 131K-token context window and structured JSON output; developers get a two-week free preview before pay-as-you-go rates apply.
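For a concrete sense of what calling one of these catalogue models can look like, here is a minimal TypeScript sketch that sends a chat request to a Grok 3 deployment through an Azure AI Foundry OpenAI-style chat-completions endpoint and asks for a JSON reply. The endpoint shape, API version, the "grok-3" model name and the response_format flag are assumptions for illustration; the exact values come from the model's page in the Foundry catalogue.
```typescript
// Minimal sketch: chat-completions request to a Grok 3 deployment on Azure AI Foundry.
// Endpoint shape, api-version and the "grok-3" model name are assumptions for illustration;
// the real values come from your Foundry deployment's model card.

const endpoint = process.env.AZURE_AI_ENDPOINT!; // e.g. https://<resource>.services.ai.azure.com
const apiKey = process.env.AZURE_AI_API_KEY!;

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

async function askGrok(messages: ChatMessage[]): Promise<string> {
  const res = await fetch(`${endpoint}/models/chat/completions?api-version=2024-05-01-preview`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": apiKey,
    },
    body: JSON.stringify({
      model: "grok-3", // assumed deployment name
      messages,
      max_tokens: 512,
      // Structured output: this OpenAI-style flag is an assumption; Grok 3 on
      // Foundry may expose JSON output differently.
      response_format: { type: "json_object" },
    }),
  });

  if (!res.ok) throw new Error(`Request failed: ${res.status} ${await res.text()}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

// Usage: ask for a machine-readable answer that downstream code can parse.
askGrok([
  { role: "system", content: 'Reply with a JSON object: { "summary": string }.' },
  { role: "user", content: "Summarise this week's Build announcements in one sentence." },
]).then(console.log);
```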
🔍 Microsoft NLWeb Powers Chat Search Everywhere
NLWeb, an open-source protocol revealed at Build, lets any site convert its content into a conversational interface using just a JavaScript snippet. Early pilots with TripAdvisor and Shopify cut bounce rates by double digits, and Microsoft is courting W3C standardisation to make NLWeb the HTML of the agentic web.
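To make the idea concrete, here is a hypothetical TypeScript sketch of the pattern NLWeb describes, not the project's actual API: the page posts a visitor's question to an NLWeb-style /ask endpoint served by the site and renders the conversational answer. The endpoint path, payload and response shape are all assumptions.
```typescript
// Hypothetical sketch of an NLWeb-style chat widget: the /ask path, payload and
// response fields are assumptions for illustration, not NLWeb's published API.

interface AskResponse {
  answer: string;                              // conversational answer grounded in the site's content
  sources?: { url: string; title: string }[];  // pages the answer was drawn from
}

async function askSite(question: string): Promise<AskResponse> {
  const res = await fetch("/ask", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: question }),
  });
  if (!res.ok) throw new Error(`ask failed: ${res.status}`);
  return res.json();
}

// Wire the endpoint to a plain input and output element already on the page.
const input = document.querySelector<HTMLInputElement>("#nlweb-question")!;
const output = document.querySelector<HTMLDivElement>("#nlweb-answer")!;

input.addEventListener("keydown", async (event) => {
  if (event.key !== "Enter") return;
  const { answer, sources } = await askSite(input.value);
  output.textContent = answer;
  // Optionally list the source pages beneath the answer.
  for (const s of sources ?? []) {
    const link = document.createElement("a");
    link.href = s.url;
    link.textContent = s.title;
    output.append(document.createElement("br"), link);
  }
});
```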
🧑‍💼 Copilot Tuning & Multi-Agent Orchestrator
A new low-code wizard inside Copilot Studio lets organisations fine-tune models on private data, complete with bias scans and red-teaming. Agents such as Researcher and Analyst can now hand off tasks to each other via secure function calls, cutting document-prep time by 40 percent in early pilots.
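As a generic illustration of that handoff pattern, and not Microsoft's orchestrator API, the TypeScript sketch below shows one agent delegating a subtask to another through a single typed function call; the Agent interface, the agent names and delegate() are invented for the example.
```typescript
// Generic multi-agent handoff sketch. The Agent interface, agent names and
// delegate() call are invented for illustration; Copilot Studio's real
// orchestration is configured in the product, not hand-coded like this.

interface Task {
  goal: string;
  context?: string;
}

interface Agent {
  name: string;
  handle(task: Task): Promise<string>;
}

// "Researcher" gathers material, then hands the result to "Analyst".
const researcher: Agent = {
  name: "Researcher",
  async handle(task) {
    const findings = `Collected notes for: ${task.goal}`;   // stand-in for a retrieval step
    return delegate(analyst, { goal: "Summarise findings", context: findings });
  },
};

const analyst: Agent = {
  name: "Analyst",
  async handle(task) {
    return `Summary based on: ${task.context}`;             // stand-in for a model call
  },
};

// The "secure function call" boundary: one typed entry point where auth checks,
// logging and policy enforcement would live in a real deployment.
async function delegate(to: Agent, task: Task): Promise<string> {
  console.log(`Handing off to ${to.name}: ${task.goal}`);
  return to.handle(task);
}

// Usage: kick off the chain with a document-prep request.
researcher
  .handle({ goal: "Prepare a briefing on this week's Build announcements" })
  .then(console.log);
```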
🧩 Azure Adds 1,900 Models & Coding Agents
Build’s broader theme was scale: a free developer tier, reserved GPU throughput units and a GitHub Copilot agent that files pull requests and runs tests autonomously. Analysts say the move pressures rivals and chips away at OpenAI’s moat by letting devs mix and match specialised engines per task.
🛠️ AI tools updates
Google has introduced SynthID Detector, a unified portal designed to identify content generated using its AI tools by detecting embedded SynthID watermarks. This expands the SynthID ecosystem, originally limited to imagery, to cover text, audio, and video created with models like Gemini, Imagen, Lyria, and Veo. The watermark is imperceptible, resilient to edits, and detectable even after content is shared or transformed.
💵 Venture Capital updates
Investigate VC, a Singapore-based early-stage investment firm, has launched a new global fund series—Fund 2 (I–III)—with a target of raising over $500 million to support startups across Asia, Europe, and North America. This initiative expands on the firm’s data-driven strategy, leveraging a proprietary AI-powered decision engine to evaluate over 1,000 curated startups annually and identify high-potential Seed to Series B opportunities.
🫡 Meme of the day

⭐️ Generative AI image of the day

Before you go, check out how you can get free AI skills training from Microsoft for a few more days.
