OpenAI Unleashes GPT-5

Also: Singapore Sees a Surge in AI Skills Development

In partnership with

Training Generative AI? It starts with the right data.

Your AI is only as good as the data you feed it. If you're building or fine-tuning generative models, Shutterstock offers enterprise-grade training data across images, video, 3D, audio, and templates—all rights-cleared and enriched with 20+ years of human-reviewed metadata.

With 600M+ assets and scalable licensing, our datasets help leading AI teams accelerate development, simplify procurement, and boost model performance—safely and efficiently.

Book a 30-minute discovery call to explore how our multimodal catalog supports smarter model training. Qualified decision-makers will receive a $100 Amazon gift card.

For complete terms and conditions, see the offer page.

Hello!

This week's AI news compressed into a single day: a sweeping GPT-5 debut landed with promises of sturdier reasoning and agentic behavior, quickly followed by real-world integrations—from productivity suites to enterprise copilots—signaling an immediate push from demo to deployment. Markets reacted in kind: language-learning platforms and engineering services tied to AI demand popped, while one notable retrenchment—Tesla winding down its Dojo team—underscored how expensive bespoke compute bets can be. In Asia, SoftBank’s quarterly results and timeline update on its mega-infrastructure ambitions offered a pragmatic counterpoint: profits today, patience on moonshots. Meanwhile, investors kept writing large checks (or preparing to), with fresh late-stage activity and speculation around a towering secondary share sale reminding everyone that capital is still chasing category leaders. Bottom line: new models are here, distribution is accelerating, and the winners are those turning AI capabilities into dependable, cost-aware products—fast.

Sliced just for you:

  • 🚀 GPT-5 launches with a push for better reasoning and broad availability

  • ⚡ Tesla reportedly shuts down its Dojo supercomputer team

  • 📈 Duolingo surges as AI-led features lift outlook

  • 🗾 SoftBank posts profit and delays its mega AI “Stargate” timeline

  • 🛠️ EPAM raises guidance on enterprise AI demand

  • 🎯 Pinterest revenue tops estimates as AI ad tools drive spend

GPT-5 arrived with claims of stronger multi-step reasoning, more agent-like task execution, and expanded access through flagship apps and APIs. The launch frames a key test: can newer models convert curiosity into revenue while keeping inference costs in check? Early positioning emphasizes improved coding, longer context, and “mini” variants to serve latency- and price-sensitive workloads, stretching the lineup from enterprise copilots down to on-device assistants. The debut also lands amid heightened scrutiny over safety guardrails and training transparency, with providers promising tighter policy enforcement and evaluation harnesses. Strategically, timing matters: rival suites are racing to standardize around built-in copilots across productivity, customer support, and developer tooling. If GPT-5’s reliability holds up in production, expect procurement to tilt toward consolidated stacks—where model quality, predictable pricing, and integration speed decide winners.

Tesla is reallocating the Dojo team and shifting resources toward broader data-center compute, a sharp turn for an effort once billed as a bespoke training engine for autonomy. The move reflects a harsh reality: building and maintaining custom supercomputers at frontier scale is capital-intensive, talent-hungry, and risky when commercial timelines slip. It also hints at a pragmatic pivot toward established silicon and cloud procurement models as competitors double down on standardized GPU clusters. For the autonomy roadmap, leadership now faces two imperatives: sustaining model progress with diversified compute while preserving cost discipline. For the ecosystem, it’s a reminder that vertical AI infrastructure bets demand patience and deep pockets; when returns lag, consolidation back to mainstream providers accelerates. Investors will watch whether redeployed teams can translate lessons from Dojo into faster model iteration and safer, more reliable driving features. 

Duolingo shares jumped after the company raised guidance, crediting AI-powered experiences—from conversational practice to personalization—that are nudging users into higher-priced tiers. Management highlighted improved gross margins helped by falling model-call costs and better ad performance, cushioning earlier concerns about AI expenses. The growth strategy blends social mechanics, price experimentation, and upsell to premium plans like a feature-rich “Max,” demonstrating how content platforms can monetize AI beyond chatbots. The bigger takeaway: durable AI revenue comes from embedding helpful loops (practice, feedback, streaks) directly into workflows people already value, not standalone novelties. With pricing power intact and conversion improving, the challenge shifts to retention—ensuring new features stay sticky as rivals adopt similar tools. For investors, the print reinforces that operational discipline plus targeted AI can expand ARPU without crushing unit economics.

SoftBank returned to profit on gains from listed holdings while signaling that the ultra-ambitious “Stargate” compute build-out needs more time. The message from Tokyo: keep compounding where returns are tangible (public stakes, selective late-stage) and be patient on capex-heavy infrastructure. For Asia’s AI scene, the update matters in two ways. First, it underscores how even deep-pocketed backers sequence mega-projects amid volatile chip supply and policy flux. Second, it suggests a continued appetite for strategic investments tied to foundational models, robotics, and enabling platforms—just with tightened milestones. As capital rotates from hype to throughput, expect more emphasis on partner ecosystems, offtake commitments, and cost-per-token trajectories. The delayed timeline is less retreat than recalibration, aligning ambition with procurement realism and the maturing economics of training and serving frontier-scale systems.

EPAM lifted its annual outlook, citing a steady ramp in enterprise AI work—from proofs of concept to production deployments across customer service, knowledge management, and software development. The company pointed to rising demand for retrieval-augmented generation, safety tuning, and copilot integrations, coupled with clients’ push for cost-controlled inference via right-sized models. Notably, the guidance implies that services revenue tied to AI modernization is becoming less episodic and more programmatic as governance patterns harden and change management catches up. For buyers, the signal is that integration partners are standardizing toolchains and playbooks, shortening time-to-value. Risks remain (model drift, data residency, cost sprawl), but vendors are stress-testing architectures to keep TCO predictable. If sustained, expect services firms to compete on accelerators, reusable components, and outcomes-based contracts rather than hourly bodies.
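For readers newer to the pattern, retrieval-augmented generation boils down to fetching relevant context and grounding the model's prompt in it. Here is a minimal, illustrative sketch: the toy corpus, the crude lexical scorer, and the prompt template are all invented for this example (production systems typically use embedding-based retrieval over a vector store).

```python
# Minimal RAG sketch: retrieve the most relevant snippets for a query,
# then build a prompt that grounds the model in that context.

def score(query: str, doc: str) -> int:
    """Crude lexical overlap: count query terms appearing in the doc."""
    terms = set(query.lower().split())
    return sum(1 for t in terms if t in doc.lower())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by overlap score."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a prompt that restricts the model to retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am to 5pm on weekdays.",
    "Shipping is free for orders over $50.",
]
print(build_prompt("When are refunds processed?", corpus))
```

The "right-sized models" point then becomes a routing decision on top of this: cheap retrieval plus a small model for routine questions, escalating to a larger model only when needed.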

Pinterest beat revenue expectations as AI-enhanced ad and shopping tools improved retrieval, targeting, and creative generation, nudging brands to spend more. Beyond growth headlines, the quarter hints at a broader pattern: platforms that tightly couple first-party intent signals with on-platform AI see better ROAS and faster iteration cycles. The company emphasized product velocity in visual search, automated creative, and merchant tooling—areas where inference cost curves are easing and feedback loops are rich. While profit dynamics remain sensitive to macro and traffic mix, the results reinforce that practical AI in ad stacks—recommendations, creative variants, conversion modeling—now materially moves P&L. With competitors racing on similar features, defensibility will hinge on proprietary data, retail integrations, and guardrails that avoid brand-safety missteps as generative systems scale across commerce touchpoints. 

🛠️ AI tools updates

Microsoft Copilot is adding GPT-5 access behind a “smart mode” toggle that automatically selects models for tasks across documents, coding, and data analysis. The update emphasizes less prompt fiddling and more context-aware assistance, plus guardrail and reliability upgrades aimed at enterprise rollouts. By baking GPT-5 into Office, Windows, and web experiences, the strategy is distribution first: meet users where they already work. Expect faster agentic actions (summarize, draft, transform, reconcile) and improved hand-offs between text, images, and tables. Technically, “smart mode” abstracts model choice and versions, which could reduce change-management headaches as families of GPT-5 variants evolve. The bigger story: platform owners are turning model launches into instant product upgrades, compressing the distance between research progress and day-to-day productivity.
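To make the abstraction concrete, a "smart mode" router can be thought of as a per-request policy that maps task type and rough complexity to a model tier, so callers never hard-code a model name. The tier names, task categories, and thresholds below are hypothetical, not Microsoft's actual routing logic.

```python
# Hypothetical model router: pick a tier per request so that model
# upgrades behind the scenes don't require client changes.

def route(task: str, prompt: str) -> str:
    """Return a model tier for the request (illustrative heuristics)."""
    heavy_tasks = {"coding", "data_analysis", "multi_step_reasoning"}
    if task in heavy_tasks or len(prompt) > 2000:
        return "full"   # strongest reasoning variant
    if task in {"summarize", "draft"}:
        return "mini"   # cheaper mid-tier for routine generation
    return "nano"       # lowest latency for quick lookups

print(route("coding", "refactor this module"))    # heavy task -> "full"
print(route("summarize", "one short paragraph"))  # routine -> "mini"
```

The payoff is operational: when a new variant family ships, only the router's mapping changes, not every integration that calls it.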

The developer-facing rollout highlights four GPT-5 variants tuned for logic, speed, cost, and chat, plus noticeable bumps in code generation and tool-use reliability. Pricing tiers are designed to widen the funnel: “mini” and “nano” options promise lower latency and spend for background tasks, while full-fat models target complex workflows. Early materials point to stronger function-calling, longer context windows, and improved policy enforcement—critical for enterprise tasks where determinism and auditability matter. The cadence suggests a model family strategy: a core set refreshed often, plus small, specialized siblings for embedded or on-device scenarios. If benchmarks translate to production, expect IDEs, agents, and RAG systems to upgrade fast, particularly where token-heavy coding tasks previously constrained ROI.
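Stronger function-calling, mentioned above, matters because the client side of the loop is where determinism and auditability live: the model proposes a structured tool call, and the application validates it against a registry before executing. The tool names and the simulated model reply below are illustrative, not part of any real API.

```python
# Client-side sketch of function calling: validate a model-proposed
# tool call against a registry, then execute it.
import json

TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "add": lambda a, b: a + b,
}

def dispatch(tool_call_json: str):
    """Validate and run a model-proposed tool call."""
    call = json.loads(tool_call_json)
    name, args = call["name"], call.get("arguments", {})
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")  # auditable rejection
    return TOOLS[name](**args)

# Pretend the model replied with this structured call:
reply = '{"name": "add", "arguments": {"a": 2, "b": 3}}'
print(dispatch(reply))  # 5
```

Rejecting unknown tools at this boundary is what lets enterprises log, rate-limit, and audit every action an agent takes.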

💵 Venture Capital updates

A new late-stage round values n8n around $2.3B, reflecting investor appetite for automation platforms that blend AI models with workflow orchestration. The thesis: enterprises want pragmatic agent-plus-rules systems that talk to SaaS apps, enforce governance, and keep human-in-the-loop where needed. For founders, the signal is clear—buyers reward connectors, reliability, and audit trails as much as model flash. The round also reinforces a broader 2025 pattern: capital flowing to tools that tame cost and complexity (data lineage, monitoring, caching) rather than pure chat front-ends. If n8n leverages proceeds to deepen integrations and inference-cost controls, it could ride the wave of “agentic ops” budgets now emerging in IT and RevOps. Watch for M&A pressure as incumbents seek ready-made AI automation suites to accelerate product roadmaps. 
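The "agent-plus-rules with human-in-the-loop" idea can be sketched as a workflow step where automated actions run only if policy rules pass, and everything else is queued for sign-off. The rule thresholds and action names here are invented for illustration and are not n8n's actual mechanics.

```python
# Sketch of an agent-plus-rules workflow step: governance rules gate
# which actions run automatically vs. wait for human approval.

def needs_human(action: str, amount: float) -> bool:
    """Governance rule: risky or large actions require sign-off."""
    return action in {"delete_record", "wire_transfer"} or amount > 1000

def run_step(action: str, amount: float = 0.0) -> str:
    if needs_human(action, amount):
        return f"queued for approval: {action}"  # human-in-the-loop path
    return f"executed: {action}"                 # automated path

print(run_step("send_email"))          # low risk, runs automatically
print(run_step("wire_transfer", 50))   # policy match, waits for a human
```

The audit-trail angle falls out naturally: every branch decision is a loggable event, which is exactly what enterprise buyers mean by "connectors, reliability, and audit trails."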

Fresh commentary underscores ongoing share-sale discussions that could value the company near $500B—massive for a venture-backed startup and a barometer for late-stage liquidity. The piece flags concentration risks: revenue ramp is steep but business lines remain young, and leadership continuity is central to investor confidence. Still, the implied valuation—on the heels of accelerated user metrics and enterprise traction—signals that private markets expect durable model-driven cash flows and healthy platform take-rates. For VC portfolios, this sets a reference point that could reprice comparable assets across code assistants, agents, and AI infrastructure. But there’s a cautionary note: if operating costs or policy headwinds bite, markdowns could be swift. Net-net, secondary liquidity may relieve employee pressure while testing how much of AI’s future profitability is already priced in. 

🫡 Meme of the day

⭐️ Generative AI image of the day