• AI KATANA

Nvidia, AMD agree to pay the US 15% of China AI-chip sales

Also: Upstage vaults Korea into the top tier of AI model makers

In partnership with

Rank #1 on Amazon—Effortlessly with Micro-Influencers!

Ready to reach the #1 page on Amazon and skyrocket your recurring revenue? Stack Influence empowers brands like Magic Spoon, Unilever, and MaryRuth Organics to quickly achieve top Amazon rankings by automating thousands of micro-influencer collaborations each month. Simply send free products—no influencer fees, no negotiations—just genuine user-generated content driving external traffic to your Amazon listings.

Stack Influence's fully automated platform allows you to effortlessly scale influencer campaigns, improving organic search positioning and significantly boosting sales. Trusted by leading brands who've experienced up to 13X revenue increases in just two months, Stack Influence provides complete rights to all influencer-created content, letting you authentically amplify your brand.

Start scaling your brand today—claim the #1 spot on Amazon and multiply your revenue.

Hello!

A busy 24 hours for AI: chip geopolitics took center stage as the US required Nvidia and AMD to hand over 15% of China AI-chip revenues, while the White House also floated letting a scaled-down next-gen GPU trickle into China—two moves that could reshape global supply chains and data-center buildouts overnight. Markets digested the whiplash: Micron lifted guidance on surging AI memory demand even as chip stocks wobbled on policy headlines. In M&A, Rumble weighed a $1.2B deal for Northern Data to grab GPU capacity—another sign that “compute is king.” Asia delivered a standout: South Korea’s Upstage pushed into the global leaderboard with a compact model that punches above its weight, underscoring how efficient training is challenging giant labs. Below, you’ll find the most consequential coverage, plus fresh tool notes, venture moves, and a lighthearted closer.

Sliced just for you:

  • 🔌 Nvidia, AMD agree to pay the US 15% of China AI-chip sales

  • 🧠 Micron raises forecasts on unrelenting AI memory demand

  • 📡 Rumble circles $1.2B deal for Northern Data’s GPU cloud

  • 🇰🇷 Upstage vaults Korea into the top tier of AI model makers

  • 📉 Markets wobble as policy shocks collide with the AI build-out

📰 Latest AI News

Washington struck an unprecedented arrangement requiring Nvidia and AMD to remit 15% of revenue from their China AI-chip sales to the US government, a quid pro quo tied to export approvals. Officials argue the framework preserves national-security guardrails while letting US firms compete with downgraded parts, but the move could complicate pricing, channel incentives, and monitoring of gray-market leakage. Nvidia says it will follow government rules; regulators maintain the licenses won't compromise security. The policy adds a new cost layer for Chinese buyers and could reweight demand toward domestic accelerators while preserving some US vendor presence. Investors are parsing second-order effects: where margins land, whether Chinese customers delay purchases pending clarity, and how rivals like Intel or local suppliers respond. The decision also sets a template other jurisdictions may try to emulate as AI industrial policy globalizes.
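To make the "where margins land" question concrete, here is a back-of-the-envelope sketch of how a 15% revenue remittance compresses gross margin on a single unit. Every figure below is invented purely for illustration; real selling prices and unit costs for these accelerators are not public.

```python
# Hypothetical illustration only: how a 15% revenue remittance squeezes per-unit margin.
# ASP and unit cost are made-up round numbers, not actual Nvidia/AMD figures.
asp = 20_000.0        # hypothetical average selling price per accelerator (USD)
unit_cost = 8_000.0   # hypothetical cost of goods per unit (USD)
remit_rate = 0.15     # 15% of China sale revenue remitted to the US government

gross_margin_before = (asp - unit_cost) / asp
remittance = asp * remit_rate
gross_margin_after = (asp - unit_cost - remittance) / asp

print(f"margin before remittance: {gross_margin_before:.1%}")  # 60.0%
print(f"margin after remittance:  {gross_margin_after:.1%}")   # 45.0%
```

Because the toll is levied on revenue rather than profit, it takes a fixed 15-point bite out of gross margin regardless of how fat that margin is, which is why vendors may try to recover it through higher China pricing.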

Micron raised Q4 revenue and profit guidance on stronger demand for high-bandwidth memory and advanced DRAM tied to AI servers, sending shares higher in after-hours trading. The company cited broadening orders from hyperscalers and AI infrastructure providers, with pricing and mix improving as supply tightens. HBM remains capacity-constrained industry-wide; Micron is scaling output while navigating long lead times for substrates and equipment. The update reinforces the thesis that memory is the stealth winner of AI capex cycles, capturing value as model sizes and context windows balloon. Near-term watch items: wafer-supply bottlenecks, any spillover from US-China licensing gyrations, and elasticity on unit pricing as new capacity ramps into 2026. For customers, the signal is clear—plan for elevated memory costs and earlier commitments to secure allocations amid persistent AI demand. 

Rumble is evaluating an all-stock acquisition of Northern Data that would hand it control of Taiga, a GPU-rich AI cloud, with Tether expected to become Rumble’s largest single shareholder. The contemplated deal would pivot Rumble from a video-platform pure-play into a compute-infrastructure owner—part of a broader land-grab for accelerator capacity as AI workloads surge. Northern Data would reportedly divest its crypto mining operations, sharpening focus on AI services. Strategic questions loom: can Rumble monetize compute beyond internal demand, what SLAs and utilization can it sustain, and how will governance balance new stakeholders? For the market, the proposal highlights a trend: media and software companies buying or leasing GPUs directly, not just via hyperscalers, to secure supply, manage costs, and differentiate with low-latency inference. Expect scrutiny of balance-sheet strength and execution risk if talks advance.

Amid tighter controls, the administration suggested it could allow a reduced-capability variant of Nvidia’s upcoming GPU to reach Chinese buyers—while signaling hesitation on permitting the flagship Blackwell. The idea threads a needle: preserve leverage on cutting-edge compute while alleviating pressure on US suppliers and discouraging illicit workarounds. Any greenlight would still sit atop complex license regimes and might require telemetry or location-verification mechanisms. For Chinese AI labs and cloud providers, a “lite” path could stabilize near-term planning but reinforces a widening performance gap versus global peers. For Nvidia, the calculus is balancing revenue retention against political risk and compliance costs. The policy debate underscores how AI hardware has become a diplomatic instrument, with iterative, model-specific rules shaping the speed and direction of model training across borders. 

Seoul-based Upstage is drawing global attention after its Solar Pro 2 model earned “frontier”-class recognition from an independent benchmarking firm despite a relatively modest 30B-parameter scale. The company credits a training approach focused on depth-up scaling and data efficiency, enabling performance that competes with far larger models from US labs at lower compute cost. Early enterprise traction reportedly includes deployments with chipmakers and insurers, and inquiries from additional US carriers. The story speaks to a broader shift: smarter recipes and fine-tuned curricula are narrowing the gap with giant labs, while national champions target sectors—like financial services and public digital services—where data sovereignty and latency are paramount. With Korean policymakers boosting funding and talent incentives, Upstage underscores how Asia’s AI ecosystem is moving from follower to fast innovator in the foundation-model race. 

US equities eased with semiconductor names particularly volatile after headlines about revenue-sharing on China AI-chip sales and mixed signals on next-gen GPU exports. Traders weighed tighter policy friction against undeniable infrastructure momentum: memory suppliers guided up, but investors rotated tactically on uncertainty around margins and shipments into China. Strategists flag that earnings sensitivity now hinges not just on unit demand but also on policy-imposed “tolls” and the pace of license processing. Meanwhile, cloud capex roadmaps—still anchored in AI training and inference—appear intact, suggesting drawdowns could stay headline-driven rather than fundamental. Near term, watch for guidance from suppliers exposed to China revenue, commentary on substitution by domestic Chinese accelerators, and potential copy-cat mechanisms from other governments as they seek a slice of AI economics while preserving control over advanced compute flows. 

🛠️ AI Tools Updates

Google’s hardware showcase later today is poised to lean heavily on on-device and cloud-assisted AI, with rumors pointing to deeper Gemini integrations across Pixel 10 and companion devices. Expect tighter multimodal capture → summarize → act loops (voice, camera, text), richer on-device reasoning on NPUs, and more proactive assistance in first-party apps. If delivered, those features would push more tasks out of the cloud into silicon at the edge, improving latency and privacy while preserving hand-offs to larger cloud models for complex cases. Hardware-software co-design is the backdrop: longer battery life must coexist with always-on inference and local vector stores, and developers will want clearer APIs to plug into the new agentic primitives. Keep an eye on how Google positions subscriptions, as premium AI tiers increasingly gate the most capable models. 

💵 Venture Capital Updates

UK-based CuspAI is negotiating a funding round north of $100M to scale its AI-first platform for designing novel materials—an area drawing rising enthusiasm as investors seek near-term industrial ROI beyond pure software. The pitch: couple graph neural nets and generative models with high-throughput simulation and lab validation to shrink discovery cycles for batteries, catalysts, and carbon-capture media. If finalized, the raise would add to a summer wave of AI-for-science financings and could position CuspAI as a flagship in Europe’s applied-AI ecosystem. Diligence focuses on data advantage, time-to-prototype, and partnerships with chemical majors for downstream scaling. The bet is that “atoms + AI” will mint the next set of defensible moats as compute prices and open-weights compress differentiation in general-purpose language models. 

🫡 Meme of the day

⭐️ Generative AI image of the day