AI KATANA

OpenAI sets up safety committee as it starts training new model

Also: Elon Musk’s xAI raises $6 billion to fund its race against ChatGPT and all the rest

Good morning,

In today's newsletter, we highlight key developments in the AI sector. OpenAI has formed a Safety and Security Committee to enhance its safety protocols as it begins training its new model, marking a significant shift in its operations. Elon Musk's AI startup, xAI, has secured $6 billion in funding to propel its AI initiatives and build a supercomputer, intensifying competition in the AI landscape. In a rare legal case, a Kawasaki man was arrested for creating malware using generative AI tools. Nvidia's market value has soared, nearing that of Apple, driven by robust demand for its AI chips. Additionally, IBM and AI Singapore are collaborating on a localized large language model for Southeast Asia. Lastly, Stanford's study on AI legal research tools reveals a 17% hallucination rate, prompting further investigation. Stay tuned for more insights and updates.

Also, check out our upcoming event during AI week in Singapore!

Sliced:

  • ⚠️ OpenAI sets up safety committee as it starts training new model

  • 💵 Elon Musk’s xAI raises $6 billion to fund its race against ChatGPT and all the rest

  • 🇯🇵 Kawasaki man arrested over malware made using generative AI

  • 📈 AI darling Nvidia's market value surges closer to Apple

  • 🇸🇬 IBM and AI Singapore to collaborate on developing Southeast Asian large language model

AI KATANA is partnering with The Great Room to run a panel event session for business leaders during AI week in Singapore.

Date: June 4th
Time: 6:30pm
Location: The Great Room, Centennial Tower, Singapore

Speakers:
Kevin Chung, COO, Writer AI
Kisson Lin, Co-Founder & COO, Mindverse
Tiang Lim Foo, Co-Founder, Forge Ventures

Register here

OpenAI has established a Safety and Security Committee to oversee and recommend safety protocols as it embarks on training its next AI model. The committee, led by CEO Sam Altman and board members Bret Taylor, Adam D'Angelo, and Nicole Seligman, will spend 90 days evaluating existing safety practices before presenting recommendations to the board, which OpenAI says it will then share publicly. The move comes as OpenAI shifts toward operating as a commercial entity, aiming to streamline product development while maintaining accountability. Notable recent changes include the departure of former Chief Scientist Ilya Sutskever and the dissolution of the Superalignment team. The new committee, which includes Chief Scientist Jakub Pachocki and head of security Matt Knight, will also consult outside experts such as former NSA cybersecurity director Rob Joyce to strengthen its safety measures.

Elon Musk's AI startup, xAI, has raised $6 billion in funding to advance its efforts in developing AI technologies and infrastructure. The funding will support bringing xAI's first products to market, building an AI supercomputer by late next year, and accelerating research and development. xAI has already launched Grok, a chatbot similar to ChatGPT, available to X Premium subscribers on the platform formerly known as Twitter. The funding round included contributions from investors like Andreessen Horowitz, Sequoia Capital, and Saudi Arabian Prince Al Waleed bin Talal. xAI's ambitious plans include acquiring 100,000 Nvidia H100 chips to power its supercomputer, aiming to launch the new data center by fall 2025. This initiative places xAI in direct competition with other major tech firms heavily investing in AI, such as Google, Apple, Amazon, Microsoft, and Meta.

Tokyo police have arrested Ryuki Hayashi, a 25-year-old from Kawasaki, for allegedly creating malware using free generative AI tools. Hayashi reportedly assembled the malware in March last year by combining malicious code designs obtained from interactive AI tools, allegedly intending to "earn easy money." The malware resembled ransomware, which encrypts data on computers and demands payment, but investigators believe it caused no damage, as Hayashi never obtained the program needed to put it to practical use. The case is one of the few instances in which law enforcement has taken action over malware created with AI. Hayashi, a former factory worker with no malware expertise, reportedly learned online how to bypass the safeguards that prevent AI tools from generating malicious code. His arrest follows an earlier charge of using fake identification to obtain a SIM card.

Nvidia's market value has surged to a record high, coming within $100 billion of overtaking Apple's valuation, driven by a recent 6% rise in its shares. The company's stock, which has more than doubled this year, reached $1,128 per share, pushing its market capitalization to $2.8 trillion. Nvidia's growth has been fueled by high demand for its AI chips, leading to a five-fold increase in revenue from its data center segment. This performance outpaces Apple, which has struggled with weak iPhone demand and competition in China. Nvidia's strong outlook and investor enthusiasm following a stock split and positive revenue forecasts have further propelled its stock, making it a standout in the tech sector.

IBM and AI Singapore (AISG) have signed a memorandum of understanding to collaborate on developing and testing the SEA-LION large language model (LLM), tailored for Southeast Asia. This partnership aims to enhance the SEA-LION model using IBM's AI technology and data platform, watsonx, facilitating localized generative AI applications for data scientists, developers, and engineers. The collaboration underscores the importance of customizable AI models for diverse business needs and emphasizes the role of public-private partnerships in advancing AI capabilities in the region. Both organizations also plan to integrate AI governance into SEA-LION to address compliance and risk management as regulatory scrutiny increases. This initiative aligns with AISG's goal to boost Singapore’s AI capabilities and IBM’s mission to support digital transformation across various industries in ASEAN nations.

🛠️ AI tools updates

A recent study by Stanford's RegLab and Institute for Human-Centered AI found that AI legal research tools from LexisNexis and Thomson Reuters hallucinate in 17% of queries, producing incorrect or misgrounded responses. The study, however, drew criticism for comparing non-equivalent products: it tested LexisNexis's general legal research tool against Thomson Reuters's more limited Practical Law AI, rather than its AI-Assisted Research feature in Westlaw Precision. Thomson Reuters has since granted the researchers access to the appropriate tool, and the study will be updated accordingly. Despite the hallucination rates identified, the study acknowledged that AI tools still offer significant value over traditional research methods. The findings highlight the need for greater transparency and rigorous benchmarking of AI products in the legal field.

💵 Venture Capital updates

Amplitude Ventures, a Montréal-based investment firm specializing in health and life sciences, has closed $263 million for its second precision medicine venture fund. This new fund, which brings the firm's total assets under management to over $500 million, aims to invest in companies at the intersection of biology and AI. Backed by existing and new investors, including BDC Capital, RBC, NorthLeaf Capital Partners, and the Government of Canada’s Venture Capital Catalyst Initiative, Amplitude plans to make 14 to 16 investments. The firm has already invested in companies such as Evommune, Tentarix Biotherapeutics, Reverb Therapeutics, and Evolved Therapeutics. Amplitude's strategy focuses on early-stage companies leveraging AI for drug discovery, supported by a team of experienced venture partners and strategic investors.

🫡 Meme of the day

⭐️ Generative AI image of the day