
Biden and Xi will sign a deal to keep AI out of control systems for nuclear weapons

Also: NVIDIA's next generation of AI supercomputer chips is here


In today's newsletter, we cover significant developments in AI and technology: U.S. President Biden and Chinese President Xi are set to sign an agreement limiting AI in nuclear weapons at the APEC summit; NVIDIA unveils its advanced AI supercomputer chips, promising enhanced AI capabilities; AI models show over 90% accuracy in detecting Alzheimer's from brain scans, highlighting their potential in medical diagnostics; Wedbush predicts a surge in AI and cloud technology spending, potentially powering a tech bull market; a study finds AI-generated images of white faces to be "hyper-real," raising concerns about racial bias in AI; AntiFake, a new tool, aims to protect voices from AI deepfake misuse; and Intel's strategic investment in Stability AI indicates a growing focus on generative AI technologies.


  • 🤝 Biden and Xi will sign a deal to keep AI out of control systems for nuclear weapons

  • 🆕 NVIDIA's next generation of AI supercomputer chips is here

  • 🧠 AI that reads brain scans shows promise for finding Alzheimer’s genes

  • 📈 ‘Tidal Wave’ of AI Spending to Power Tech Bull Market, Wedbush Says

  • 👩🏻 AI images of white faces are now 'hyper-real': Study

U.S. President Joe Biden and Chinese President Xi Jinping are poised to sign an agreement to restrict the use of artificial intelligence in nuclear weapons systems. This significant move comes as they meet at the Asia-Pacific Economic Cooperation (APEC) summit in San Francisco. Amid escalating tensions between the two nations, including differences over the Ukraine conflict and the Israel-Hamas war, this meeting represents a critical effort to find common ground. The agreement aims to limit AI in autonomous weaponry, including drones and nuclear warhead control systems. This decision follows growing concerns about the potential use of AI in autonomous weapons that can independently select and engage targets. Both the U.S. and China, having previously endorsed the responsible use of AI in military applications, are now focusing on managing the risks associated with this technology in warfare. This agreement is a step towards addressing these concerns, especially in the realm of nuclear weapons, where the introduction of AI poses significant risks and ethical questions.

NVIDIA has announced its latest generation of AI supercomputer chips, marking a significant advancement in AI and deep learning technology. The flagship product, the HGX H200 GPU, is based on NVIDIA's "Hopper" architecture and uses HBM3e memory, offering nearly double the capacity and 2.4 times the bandwidth of its predecessor, which roughly doubles inference speed on large language models. NVIDIA also introduced the GH200 Grace Hopper "superchip," which pairs the H200 GPU with an Arm-based Grace CPU and is designed for supercomputers running complex AI and high-performance computing workloads. One of the most notable deployments will be the JUPITER supercomputer in Germany, anticipated to be the world's most powerful AI system and expected to aid scientific breakthroughs across various fields. These announcements underscore NVIDIA's growing dominance in the AI and data center market segments, where it has recently seen a significant increase in revenue.

The article from Nature discusses the development of AI techniques for identifying Alzheimer’s disease by analyzing brain scans. These AI algorithms can efficiently sort through a large number of brain images, identifying characteristics of Alzheimer’s and important structural brain features. This approach aims to use brain images as visual biomarkers of Alzheimer's, potentially enabling scientists to pinpoint genes contributing to the disease. This could aid in creating treatments and predictive models for Alzheimer's risk. Notably, an AI model developed by the AI4AD consortium has shown over 90% accuracy in detecting Alzheimer’s from brain scans. However, researchers note that these AI models are limited by the diversity of the data they are trained on, indicating a need for more varied datasets to ensure broader applicability.

Wedbush analysts predict a robust setup for the tech sector heading into 2024, with an expected acceleration in spending on cloud and AI technologies that the market may be underestimating. IT budgets are projected to increase modestly in 2024, while cloud- and AI-driven spending could rise by 20%-25% over the next year. This trend is evidenced by the positive impact of AI monetization on tech sector earnings, with companies like Microsoft, Datadog, and Palantir reporting strong results. Wedbush views AI as the most transformative technology trend since the Internet's commercial breakout in 1995, projecting $1 trillion in AI spending over the next decade that would benefit the chip and software sectors, with Nvidia and Microsoft leading the domain. The analysts favor tech stocks including Apple, Microsoft, Google, Palo Alto, Palantir, Zscaler, CrowdStrike, and MongoDB.
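As a back-of-the-envelope illustration (our own arithmetic, not from the Wedbush note): if cloud and AI spending really did sustain 20%-25% annual growth, it would double roughly every three to four years.

```python
import math

def years_to_double(rate: float) -> float:
    """Years for spending to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + rate)

for rate in (0.20, 0.25):
    # 20% growth -> ~3.8 years to double; 25% -> ~3.1 years
    print(f"{rate:.0%} annual growth: doubles in {years_to_double(rate):.1f} years")
```

Over a full decade, 20% compounding multiplies the base by about 6x, which is the scale of shift behind the "tidal wave" framing.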

A recent study in the journal "Psychological Science" reveals that AI can generate images of white faces that people judge as more realistic than photographs of actual humans, a phenomenon termed "hyper-realism." The effect is attributed to biases in AI training data, which also often lead to non-white ethnicities being depicted with white features. In the study's experiments, participants frequently misidentified AI-generated faces as human, and human faces as AI-generated. The authors warn that because AI-generated white faces are perceived as more realistic, the technology could reinforce racial biases online. Given the rapid progress of AI, the remaining differences between AI and human faces may soon vanish entirely, underscoring the need for public awareness of AI hyper-realism to curb misinformation and online scams, and the importance of addressing bias in AI training data.

🛠️ AI tools updates

The article from NPR discusses the rising concern among celebrities regarding the misuse of their likenesses in AI-generated deepfakes. Highlighted cases include Scarlett Johansson's legal action against an AI app that used her voice and face without consent, and a deepfake scam involving social media personality MrBeast. The article underscores the challenge in distinguishing synthetic from human-generated content, a problem particularly acute for public figures. Addressing this, researchers at Washington University in St. Louis have developed AntiFake, a tool that scrambles audio signals, making it difficult for AI systems to generate convincing voice clones while remaining comprehensible to the human ear. This innovation, inspired by the University of Chicago's Glaze, aims to protect individuals' voices from being misappropriated by generative AI models.

💵 Venture Capital updates

Intel recently made a strategic investment of approximately $50 million in UK-based generative AI startup Stability AI. The investment is a significant boost for Stability AI, which had been facing financial struggles despite its initial success and the $1 billion valuation it reached after raising $101 million in October. The company, known for its AI image generation technology, ran into difficulties due to high costs and slower-than-expected revenue in the competitive generative AI sector. Intel's investment is particularly notable because it follows the departure of major investors Coatue Management and Lightspeed Venture Partners from Stability AI's board over disagreements about the company's direction. The move aligns with Intel's increasing focus on generative AI, as evidenced by its recent announcement of a large AI supercomputer project featuring Xeon processors and Gaudi2 AI accelerators, further establishing Intel as a key player in the advancement of generative AI technologies.

🫡 Meme of the day

⭐️ Generative AI image of the day