
OpenAI shares preview of new AI voice technology amid rising deepfake concerns

Also: South Korea contends with AI and electoral integrity

Hi there,

Today's newsletter highlights recent advancements and partnerships in AI, covering both the opportunities and challenges posed by this rapidly evolving technology. OpenAI introduced Voice Engine, a new AI tool that produces natural-sounding speech and is aimed at assisting people with speech impairments, while also raising deepfake concerns. Meanwhile, the U.S. and U.K. have formalized a partnership on AI safety testing, a move that aligns with global efforts to ensure responsible AI development. South Korea is grappling with AI's impact on electoral integrity, implementing laws to curb deepfake misuse in political campaigns. On the corporate front, Microsoft and OpenAI are planning a $100 billion AI supercomputer, the Stargate project, to lessen reliance on traditional computing infrastructure and vendors like Nvidia. In healthcare, DrugGPT is emerging as a promising AI tool in England, with the potential to significantly reduce medication errors. Lastly, in venture capital, a new fund aims to provide agile support to AI startups, recognizing the need for a more dynamic investment model to match the sector's growth.

Sliced:

  • 🆕 OpenAI shares preview of new AI voice technology amid rising deepfake concerns

  • ⚠️ U.S., U.K. Announce Partnership to Safety Test AI Models

  • 🌏 South Korea contends with AI and electoral integrity

  • ⭐ Microsoft, OpenAI plan $100B Stargate AI data center, easing reliance on Nvidia

OpenAI has recently showcased a new AI technology named Voice Engine that can generate natural-sounding speech, closely mimicking a human voice from a brief 15-second audio sample. The technology is intended to aid reading, translate content, and provide speech for nonverbal individuals or those with speech impairments. However, the company acknowledges serious potential risks, particularly in light of election-year concerns. Voice Engine has been tested with select partners under strict policies, including a requirement for the original speaker's consent, a prohibition on unauthorized impersonation, mandatory disclosure that voices are AI-generated, and embedded watermarking so that generated audio can be traced to its origin. OpenAI also suggests voice-authentication measures to confirm that the original speaker has consented, and recommends moving away from voice-based authentication for access to sensitive information.
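To make the safeguards described above more concrete, here is a minimal Python sketch of a consent-gated synthesis workflow. This is not OpenAI's Voice Engine API: the `VoiceRequest` type, the consent check, the disclosure label, and the placeholder watermark tag are all hypothetical stand-ins for the policy steps the article mentions (speaker consent, disclosure of AI-generated audio, and traceable watermarking).

```python
from dataclasses import dataclass

# Hypothetical sketch of a consent-gated voice-generation workflow.
# None of these names correspond to OpenAI's actual API; they only model
# the policy steps described above: consent, disclosure, watermarking.

@dataclass
class VoiceRequest:
    speaker_id: str          # whose 15-second reference sample is used
    consent_given: bool      # explicit consent from the original speaker
    text: str                # content to be spoken


def synthesize_with_safeguards(request: VoiceRequest) -> dict:
    """Refuse generation without consent; label and watermark the output."""
    if not request.consent_given:
        raise PermissionError("Original speaker consent is required.")

    # Placeholder for the actual model call that would clone the voice.
    audio = f"<synthetic audio of '{request.text}'>"

    return {
        "audio": audio,
        "disclosure": "This voice is AI-generated.",   # mandatory disclosure
        "watermark": f"trace:{request.speaker_id}",    # origin-tracing tag
    }


if __name__ == "__main__":
    req = VoiceRequest(speaker_id="spk-001", consent_given=True,
                       text="Hello, this is a demo.")
    print(synthesize_with_safeguards(req))
```

The point of the sketch is simply that consent is a hard precondition and that disclosure and traceability travel with the output, rather than being optional add-ons.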

The U.S. and U.K. have officially joined forces to improve AI safety testing, combining their resources and expertise in a strategic partnership aimed at ensuring the responsible development of AI technologies. The announcement follows each country's efforts to establish its own AI Safety Institute, with the U.K. having hosted the first AI Safety Summit and committed significant financial resources to its institute. The partnership, formalized at a meeting between U.K. Science, Innovation and Technology Secretary Michelle Donelan and U.S. Commerce Secretary Gina Raimondo, will foster shared methodologies and infrastructure for AI safety testing. These measures are significant in the context of international regulatory efforts, such as the EU's AI Act and U.S. President Joe Biden's executive order, which mandate transparency and safety testing for advanced AI models.

As South Korea prepares for its general elections, it is addressing AI's growing influence on its democratic processes. The country acknowledges the benefits of AI for political engagement, such as streamlining the creation of campaign materials and personalizing communication with voters. The risks, however, particularly around misinformation and deepfakes, have prompted legislative action, including a ban on AI-generated deepfake content in political campaigns during the critical pre-election period, with substantial penalties for violations, including imprisonment and heavy fines. Established guidelines requiring clear disclosure of AI-assisted content reflect efforts to ensure transparency and accountability in political uses of AI, and South Korea's leading tech companies are strengthening their monitoring and reporting mechanisms to combat AI-generated misinformation. Nevertheless, concerns persist about the enforceability of these regulations, particularly against content produced abroad, and about the potential for overly restrictive measures to impinge on free expression.

Microsoft and OpenAI have embarked on an ambitious venture, the Stargate project, to construct a cutting-edge AI supercomputer data center expected to be operational by 2028 at an estimated cost exceeding $100 billion. The project is a significant step toward meeting the surging demand for the high-performance computing that generative AI workloads require. Stargate would reduce dependency on traditional data centers and shift away from reliance on Nvidia, opting for Ethernet connections in place of Nvidia's InfiniBand cables, and it represents a deepening of the partnership between Microsoft and OpenAI. One of the most costly projects of its kind, the plan involves multiple phases; Microsoft is currently working on the fourth phase, which includes a smaller supercomputer expected to launch in 2026.

🛠️ AI tools updates

DrugGPT, an innovative AI tool developed at Oxford University, could change the way healthcare professionals in England prescribe medication, offering a potential answer to the estimated 237 million medication errors that occur each year. Designed as a prescription safety net, the tool provides immediate recommendations and information on drugs, their potential adverse effects, and drug interactions. With performance reported to be competitive with human experts, DrugGPT aims to improve the accuracy of prescribing. The technology is part of a broader trend toward integrating AI into healthcare to support decision-making and improve patient outcomes, and its introduction is seen as a critical step toward reducing the considerable human and financial costs of medication errors.
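To illustrate the "safety net" idea, here is a minimal Python sketch of the kind of pairwise interaction check such a tool might layer over a prescription. This is not DrugGPT itself: the tiny interaction table and drug names are illustrative placeholders, and a real system would draw on a clinical knowledge base and a language model rather than a hard-coded dictionary.

```python
# Illustrative sketch of a prescription "safety net" check, not DrugGPT itself.
# The interaction table is a hypothetical placeholder; a real tool would consult
# a clinical knowledge base (and, in DrugGPT's case, a language model).

from itertools import combinations

KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "ibuprofen"}): "Increased bleeding risk.",
    frozenset({"simvastatin", "clarithromycin"}): "Raised statin levels; myopathy risk.",
}


def check_prescription(drugs: list[str]) -> list[str]:
    """Return warnings for any known pairwise interactions in a prescription."""
    normalized = [d.lower() for d in drugs]
    warnings = []
    for a, b in combinations(normalized, 2):
        note = KNOWN_INTERACTIONS.get(frozenset({a, b}))
        if note:
            warnings.append(f"{a} + {b}: {note}")
    return warnings


if __name__ == "__main__":
    print(check_prescription(["Warfarin", "Ibuprofen", "Paracetamol"]))
    # -> ['warfarin + ibuprofen: Increased bleeding risk.']
```

The value of such a check lies in running automatically at the point of prescribing, flagging issues before the prescription reaches the patient rather than after an error has occurred.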

💵 Venture Capital updates

Taizo Son, co-founder of The Edgeof, has identified a significant challenge in the venture capital landscape: AI startups can grow faster than traditional funding models are able to support. His view draws on extensive experience, including roles with GungHo Online Entertainment and Yahoo Japan, and on observing how conventional venture funding struggles to keep pace with the exponential growth of AI companies. To address this, Son plans to launch a new fund focused on early-stage AI startups, promoting a nimble investment approach that prioritizes quick decision-making and an Asian market orientation. The initiative aims to support the rapid scaling of AI businesses, emphasizing access to semiconductors over traditional growth metrics such as staff numbers or physical infrastructure.

🫡 Meme of the day

⭐️ Generative AI image of the day