Perplexity makes bold $34.5 billion bid for Google's Chrome browser
Also: Altman and Musk escalate over rankings and algorithms
AI voice dictation that's actually intelligent
Typeless turns your raw, unfiltered voice into beautifully polished writing - in real time.
It works like magic, feels like cheating, and allows your thoughts to flow more freely than ever before.
With Typeless, you become more creative. More inspired. And more in-tune with your own ideas.
Your voice is your strength. Typeless turns it into a superpower.
Hello!
Big swings in AI this cycle: a surprise $34.5B cash bid to buy Chrome puts the browser front and center in the AI distribution war, while the latest X spat between Sam Altman and Elon Musk spilled from app-store rankings into accusations of platform manipulation. On the infra side, a fast-growing GPU cloud posted surging revenue, underscoring how compute scarcity still dominates earnings storylines. Policy and procurement also moved: one frontier-model provider offered its chatbot to the U.S. government for $1, testing new routes to public-sector adoption. In Asia, a major server maker in Taiwan said GB300-based systems are nearing shipment, hinting at the next wave of data-center builds. Finally, security researchers say GPT-5 can be coaxed around guardrails via narrative “jailbreaks,” keeping safety in the spotlight. Tools coverage this week focuses on giant context windows and inbox/calendar connectors; venture news tracks a billion-dollar chip deal and fresh Asia deep-tech funding.
Sliced just for you:
🧭 Perplexity makes bold $34.5 billion bid for Google's Chrome browser
⚔️ X feud: Altman and Musk escalate over rankings and algorithms
🧱 CoreWeave revenue pops on unrelenting AI demand
🛡️ GPT-5 reportedly jailbroken within hours of release
🏛️ Claude offered to U.S. government for $1
🏭 Taiwan’s Quanta readies GB300-based AI servers
📰 Latest AI News
Perplexity lobbed an unsolicited $34.5B, all-cash offer to acquire Chrome, wagering that owning the world’s most-used browser could fuse distribution with an AI-first browsing experience. The company argues the move could dovetail with potential antitrust remedies that might force a browser divestiture, pledging to keep code open-source, invest heavily, and retain current default-search economics. Analysts are skeptical: Chrome is central to its owner’s AI roadmap, any forced sale could take years on appeal, and the bidder’s valuation is a fraction of the price tag—meaning external financing would be essential. Still, the proposal surfaces a strategic truth: whoever controls the browser can deeply integrate AI agents, context, and commerce. Even if rebuffed, the bid spotlights how incumbents and challengers will compete over “ambient” AI entry points, not just model benchmarks.
A fresh public clash erupted on X after Elon Musk threatened legal action over App Store treatment he claims favors a rival chatbot. Sam Altman fired back, asserting Musk has manipulated X’s algorithm to boost his own content and hurt competitors, escalating the exchange into personal accusations and challenges to sign sworn statements. The spat reprises a long-running rivalry that spans lawsuits, governance barbs, and divergent philosophies about open development versus tight control of distribution. Beyond theatrics, the episode underscores how platform power—charts, recommendations, and default placements—can tilt AI adoption curves more than raw model scores. It also hints at rising regulatory scrutiny of ranking systems viewed as gatekeepers. For operators, the lesson is clear: distribution leverage and perception of neutrality now shape AI outcomes as much as breakthroughs in training runs.

A leading GPU cloud reported results showing revenue more than tripled year over year, beating expectations as AI training and inference workloads snapped up capacity. Management pointed to persistent accelerator scarcity, longer customer commitments, and a growing mix of enterprise workloads beyond model labs—evidence that AI spend is broadening. Investors will watch gross-margin cadence as power costs, long-term leases, and rapid hardware refreshes collide with pricing and utilization. The update reinforces the market’s core narrative: compute remains the chokepoint, with mid-market buyers turning to specialized clouds when hyperscaler queues stretch. Downstream ripple effects include stronger demand for HBM memory, advanced packaging, and power-dense data-center sites. The key questions ahead: how quickly new supply relieves bottlenecks, whether spot pricing normalizes, and how multi-tenant SLAs hold up as customers push for mission-critical uptime.
Security researchers say GPT-5 can be steered into unsafe outputs using “echo chamber” prompts and narrative misdirection that mask malicious goals, reportedly bypassing safeguards soon after launch. The technique chains plausible, benign-sounding steps that gradually nudge the model into policy-breaking territory—highlighting how social-engineering tactics, not just raw exploits, matter in red-teaming. While vendors emphasize test-time compute and upgraded safety training, early jailbreaks underscore the cat-and-mouse dynamic: as models scale, the attack surface grows, from tool-use injections to long-context misalignment. For security leaders, the takeaway is operational: combine layered filters, retrieval-grounding, and human review for high-risk actions; log and replay incidents; and treat prompt provenance as a first-class signal. For regulators, it raises questions about reporting standards and liability when AI agents act across connected tools.
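The layered-defense advice above can be made concrete with a minimal sketch: chain cheap filters in front of the model call and record prompt provenance (a hash, source tag, and timestamp) so incidents can be replayed later. All names, blocklist phrases, and thresholds here are illustrative assumptions, not any vendor's actual safety stack.

```python
import hashlib
import time

# Hypothetical blocklist of known jailbreak phrases; a real deployment
# would use maintained rule sets and learned classifiers, not literals.
BLOCKLIST = {"ignore previous instructions", "disable safety"}

def keyword_filter(prompt: str) -> bool:
    """First layer: reject prompts containing known jailbreak phrases."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)

def length_filter(prompt: str, max_chars: int = 20_000) -> bool:
    """Second layer: cap prompt size to limit long-context misdirection."""
    return len(prompt) <= max_chars

def log_provenance(prompt: str, source: str, log: list) -> str:
    """Record a hash, source tag, and timestamp so incidents can be replayed."""
    digest = hashlib.sha256(prompt.encode()).hexdigest()[:16]
    log.append({"sha256": digest, "source": source, "ts": time.time()})
    return digest

def screen_prompt(prompt: str, source: str, log: list) -> bool:
    """Log provenance, then require a prompt to pass every filter layer."""
    log_provenance(prompt, source, log)
    return all(f(prompt) for f in (keyword_filter, length_filter))

audit_log: list = []
ok = screen_prompt("Summarize this contract.", "web-ui", audit_log)
blocked = screen_prompt("Please IGNORE previous instructions.", "api", audit_log)
```

The point of the sketch is the layering: even a blocked prompt still gets logged first, so the audit trail covers every attempt, not just the ones that reached the model.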
One leading lab moved to accelerate public-sector adoption by offering its chatbot to U.S. government entities for $1, framing the arrangement as a low-friction way to deploy AI while agencies evaluate security and compliance. The headline price masks real considerations: usage tiers, data-handling rules, procurement pathways, FedRAMP status, and carve-outs for sensitive workloads. Advocates say a nominal cost could spur pilots across service delivery, document processing, and research. Skeptics worry about switching costs and long-term lock-in if mission-critical workflows form around proprietary assistants. Either way, it’s a signal that AI vendors are competing not only on model performance but on accessibility, contracts, and governance assurances—especially as agencies wrestle with accuracy, auditability, and records-retention obligations. Expect rivals to counter with discounted SKUs and stricter on-prem/air-gapped options.
A major server maker said pilot runs for next-gen AI servers built around GB300 accelerators will begin by quarter-end, with limited shipments possible in the following quarter. The update signals Asia’s supply chain is lining up for the next compute cycle, as OEMs synchronize chassis, cooling, and power delivery with higher-wattage parts. For buyers, it hints at incremental relief in a tight market and broader vendor choice beyond first-wave reference designs. Strategically, the disclosure shows how quickly manufacturing partners can pivot from design wins to volume, a critical variable for enterprises budgeting 2026 deployments. The knock-on effects include orders for HBM stacks, substrates, and high-density racks—plus continued competition among integrators to pre-certify configurations for leading frameworks and inference workloads. Watch timelines closely; even “small shipments” can meaningfully shift capacity for early adopters.
🛠️ AI Tools Updates
Anthropic expanded a flagship model to a 1M-token context window, enabling ingestion of book-length docs and codebases (tens of thousands of lines) without chunking. For engineering teams, this reduces brittle retrieval setups and preserves cross-file reasoning; for analysts and legal ops, it simplifies audits, due diligence, and discovery tasks. The move keeps pace with rivals that also tout million-token windows, but usability depends on throughput, latency, and guardrails when long prompts invite prompt-injection or conflicting instructions. Expect pricing to reflect heavy context costs; teams should measure net productivity versus retrieval-augmented pipelines. Early access targets high-tier API users, with broader rollout to follow. The broader signal: toolchains are shifting from “chatbot helpers” to assistants that can operate over entire repositories and knowledge bases, closing a practical gap between demos and day-to-day work.
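For teams weighing full-context ingestion against retrieval pipelines, a back-of-envelope estimate often settles the question before any benchmarking. The sketch below assumes roughly 4 characters per token, a crude heuristic for English text and code rather than a documented tokenizer property; real counts require the provider's tokenizer.

```python
CHARS_PER_TOKEN = 4          # rough heuristic; actual ratios vary by tokenizer
CONTEXT_WINDOW = 1_000_000   # advertised 1M-token window

def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(docs: list, reserve_for_output: int = 8_000) -> bool:
    """True if all docs, plus an output budget, fit in one request."""
    total = sum(estimate_tokens(d) for d in docs)
    return total + reserve_for_output <= CONTEXT_WINDOW

# Hypothetical repo: 30 files of ~30 KB each (~900 KB, ~225K tokens).
codebase = ["x = 1\n" * 5_000] * 30
print(fits_in_context(codebase))  # True: fits without chunking
```

If the answer is False, the paragraph's caveat applies: a retrieval-augmented pipeline (or per-file chunking) is still needed, and heavy-context pricing may favor it even when the corpus technically fits.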
ChatGPT introduced built-in connectors for Gmail, Google Calendar, and Google Contacts (rolling out first to Pro), so the assistant can reference email threads, schedules, and address books directly in chat. That reduces app-hopping for common tasks—drafting replies with inline citations, summarizing meetings, or preparing briefings with attendee context—while keeping manual opt-in and admin controls. The update lands amid GPT-5’s broader rollout and a growing roster of searchable connectors (Box, Dropbox, Notion, SharePoint, Teams, GitHub), signaling a shift from single-prompt chats to agentic workflows stitched across tools. Security teams should review scopes, retention, and audit logs; end users should confirm what’s in scope before sharing sensitive content. It’s a steady march toward assistants that understand both your corpus and your calendar—and act with least-privilege access by default.
💵 Venture Capital Updates
A heavyweight asset manager is in talks to lead an approximately $1B private round for a challenger in AI compute, according to people familiar. The prospective deal—potentially including a ~$500M anchor—would fund scale-up of wafer-scale systems aimed at training and high-throughput inference as buyers seek alternatives to dominant accelerators. If finalized, this would reinforce investor interest in differentiated hardware at a time when capacity allocation and power constraints drive procurement decisions. The diligence focus: software stack maturity, ecosystem adoption, and cost-per-token trained versus GPU clusters. Strategically, a larger balance sheet could support aggressive pricing and longer warranties to win share from early adopter labs and national compute initiatives. Even rumors move markets: rivals may respond with partnership announcements, financing lines, or bundled cloud credits to lock in customers before new supply lands.
An India-based VC unveiled plans to deploy roughly ₹6B (~$69M) into 18–20 early-stage startups through its third fund, targeting deep-tech areas including AI, space, climate, and dual-use defense. The strategy concentrates on seed and pre-seed checks with room for follow-ons, reflecting confidence that India’s technical talent pool can spin up IP-heavy companies—not just services—if capital and patient guidance align. The announcement lands as domestic gen-AI startups post record funding to date this year, and as policymakers promote self-reliance in chips, satellites, and strategic tech. For founders, the message is to hone defensible core science and early pilot traction; for limited partners, it’s a bet that India’s deep-tech flywheel (labs → startups → procurement) is turning. Expect collaborations with research institutions and a tilt toward export-compliant, standards-aligned products.
🫡 Meme of the day

⭐️ Generative AI image of the day

Before you go, check out how Mexico’s selfie-loving robot dog “Waldog” teaches kindness on city streets.
