
Nvidia develops AI chips for China in latest bid to avoid US restrictions

Also: Former Apple designers launch $700 Humane AI Pin as smartphone replacement


In today's roundup, Nvidia counters U.S. export controls by tailoring AI chips for China, while OpenAI's Data Partnerships initiative aims to cultivate more inclusive AI training datasets. Big Tech's measures against AI-driven election misinformation prompt debate over their effectiveness, and former Apple designers unveil a $700 AI Pin touted as a smartphone alternative. The U.S. Department of State launches its inaugural Enterprise AI Strategy, focusing on ethical AI implementation. Meanwhile, Hollywood actors have won new protections on AI-generated content use. In the venture capital arena, Eilla AI's €1.4 million funding marks a leap towards revolutionizing financial decision-making in private markets with generative AI technology.


  • Nvidia develops AI chips for China in latest bid to avoid US restrictions

  • 🤝 OpenAI seeks partnerships to generate AI training data

  • 🤔 Tech Companies Are Taking Action on AI Election Misinformation. Will it Matter?

  • 🆕 Former Apple designers launch $700 Humane AI Pin as smartphone replacement

  • 🤖 The U.S. Department of State Unveils its First-Ever Enterprise Artificial Intelligence Strategy

  • 🎭 Hollywood actors secure safeguards around AI use on screen

Nvidia has developed three new AI chips for the Chinese market, the H20, L20, and L2, designed to comply with U.S. export controls while still meeting China's demand for artificial intelligence technology. This move comes after the U.S. restricted sales of high-performance chips to China that could be used to build advanced AI systems—a development that forms part of the ongoing technology rivalry between the two countries. While the performance of these new chips has been deliberately capped, they are expected to be competitive in the Chinese market. The response by Nvidia, whose processors are essential to AI development and underpin much of its market value, illustrates the rapid strategic adjustments tech companies are making in the face of evolving geopolitical constraints. As the U.S. tightens its technology export policies, Nvidia's quick pivot to develop and test new chips underscores the high-stakes race for leadership in the global AI chip market.

OpenAI has launched a new initiative called Data Partnerships, aiming to collaborate with various organizations to create new data sets for AI training, addressing the limitations and biases of current data sets. Recognizing that present AI data sets are often flawed due to biases such as being U.S.-centric, containing toxic language, and lacking in diversity, OpenAI seeks to build more inclusive data sets that better reflect global society. The initiative seeks large-scale, diverse data across multiple formats, including images, audio, and video, with a particular interest in data expressing human intention like long-form writing and conversations. OpenAI plans to create both open-source data sets for public use and private data sets for proprietary AI models, and has already collaborated with entities like the Icelandic Government to enhance GPT-4’s Icelandic language capabilities and the Free Law Project to improve understanding of legal documents.

In an era poised for significant elections globally, Big Tech's efforts to combat AI-generated election misinformation seem to miss the more profound challenges at hand. Meta and Microsoft's recent initiatives to label politically altered ads and provide tools for content authentication are steps forward, yet experts argue that deepfakes and misinformation have had little demonstrable impact on election outcomes so far. Despite concerns over the misuse of generative AI in political campaigns, evidence suggests that misinformation has not significantly swayed U.S. election results to date. Nevertheless, there is an undercurrent of fear that the mere existence of deepfake technologies could erode trust in credible information sources. This article from TIME underscores the complexities of AI's role in misinformation, highlighting a diverse range of expert opinions and noting the broader, perhaps more crucial, political reforms needed to address the roots of misinformation.

Former Apple designers Imran Chaudhri and Bethany Bongiorno have introduced Humane's first product: the AI Pin, a lapel-worn device priced at $699 and positioned as a smartphone replacement. It enables voice-controlled calls, texts, and data searches, and features a laser display that projects information onto surfaces such as the user's palm. With a $24 monthly T-Mobile data plan, the AI Pin operates independently, without tethering to a smartphone. Humane, which has raised over $200 million from high-profile investors including Microsoft and OpenAI's Sam Altman, will start taking orders for the device on November 16. Powered by a Qualcomm chipset, the AI Pin includes a speaker, camera, and real-time translation, and integrates AI services from the web without requiring app downloads, offering features such as music playback via Tidal and health data summaries.

The U.S. Department of State has unveiled its first Enterprise AI Strategy to harness the power of AI in advancing diplomacy and development objectives. The strategy outlines a responsible approach to AI adoption, emphasizing its alignment with democratic values and human rights, the rule of law, and international commitments. It aims to improve decision-making processes, reduce administrative burdens, and foster partnerships to advance AI literacy and skills. The Department of State's strategy is comprehensive, addressing AI governance, ethical principles, technical standards, and talent cultivation, with the intent to position the Department at the forefront of AI usage in federal government entities.

Hollywood actors have successfully negotiated new safeguards against the use of their images and performances by artificial intelligence (AI). Under a labor agreement that concluded a 118-day strike, movie studios are now required to obtain actors' permission for AI-generated content and must compensate them for the use of their digital likenesses. The contract, which sets a minimum compensation level, aims to protect actors against potential replacement by AI-generated "metahumans" and unauthorized use of their likenesses. Full details of the contract will be disclosed after a union board vote and subsequent ratification by union members. The deal reflects growing concerns in the entertainment industry about AI's impact on creative professions and the need for regulations ensuring fair and ethical use of the technology.

🛠️ AI tools updates

💵 Venture Capital updates

Eilla AI, a London-based generative AI platform, has secured €1.4 million in seed funding to transform the financial decision-making process in private markets, especially in M&A, venture capital, and private equity sectors. The funding round, led by Eleven Ventures and supported by Fuel Ventures with personal investment from Mark Pearson, aims to help the platform grow its team and scale its operations. Eilla AI's technology automates the time-consuming tasks of financial research, analysis, and document creation for financial professionals. It uses generative AI to mirror industry experts, enabling the aggregation and analysis of vast amounts of information swiftly, thereby supporting complex decision-making and offering new insights.

🫡 Meme of the day

⭐️ Generative AI image of the day