OpenAI o1 Sets New Standards for AI Reasoning

Also: Meta fed its AI on almost everything you’ve posted publicly since 2007

Good morning! Here’s your daily digest of the latest in AI advancements, privacy concerns, and industry shifts. OpenAI’s new “o1” series has set impressive benchmarks in reasoning and safety, dramatically outperforming its predecessors in tasks like competitive programming and math competitions. Meanwhile, AI is also showing promise as a tool for combating misinformation, as new research demonstrates its ability to counter conspiracy beliefs. On the privacy front, Meta’s use of public user data since 2007 for AI training is under fire, especially as many countries outside the EU lack adequate opt-out options. In the policy realm, top tech leaders, including those from Nvidia, OpenAI, and Google, met with the White House to address AI’s growing energy and infrastructure demands. These stories highlight the rapid evolution of AI and its societal impacts.

Sliced just for you:

  • 🤖 OpenAI o1 Sets New Standards for AI Reasoning

  • 🔍 Generative AI as a tool for truth

  • 🛡️ Meta fed its AI on almost everything you’ve posted publicly since 2007

  • 🏛️ Nvidia, OpenAI, Anthropic and Google execs meet with White House to talk AI energy and data centers

OpenAI recently introduced the “o1” model series, a breakthrough in AI reasoning and safety. The new generation is designed to tackle complex reasoning tasks and clearly surpasses earlier models: on a qualifying exam for the International Mathematics Olympiad, o1 solved 83% of problems, compared with GPT-4o’s 13%. The model also excels at coding, placing highly in competitive programming challenges. OpenAI has made strides on safety as well, with o1 demonstrating markedly better resistance to jailbreak attempts. The o1 models are available through ChatGPT and the API, with features such as web browsing and file uploads planned, positioning them as powerful tools for professionals in scientific, research, and software development fields.
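
For developers, the models are reachable through the same chat completions interface as earlier OpenAI models. Here is a minimal sketch, assuming the official OpenAI Python SDK and the “o1-preview” model identifier; the prompt is purely illustrative, and at launch the o1 models accepted only user messages (no system prompts).

```python
# Minimal sketch: querying an o1-series model via OpenAI's Python SDK.
# The model name "o1-preview" and the prompt are illustrative; check
# OpenAI's model list for the identifiers available to your account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        # o1 models reason internally before answering, so a plain user
        # message suffices; at launch they did not accept system messages.
        {"role": "user", "content": "Prove that the sum of two odd integers is even."}
    ],
)

print(response.choices[0].message.content)
```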

Generative AI, often criticized for amplifying misinformation, has shown promising potential for reducing conspiracy beliefs. A study by Costello et al. found that conversations with an AI chatbot lowered participants’ conspiracy beliefs by about 20%, with the effect lasting at least two months. This finding highlights AI’s capacity to deliver scalable interventions against entrenched misinformation. Its strength lies in providing personalized, detailed counterarguments to a person’s specific conspiracy claims, a task that is difficult for human interlocutors. While the approach appears effective against well-established conspiracy theories, its efficacy against other forms of misinformation, such as health myths or climate skepticism, remains uncertain. The study also emphasizes the need for further research into the frequency and duration of these AI interactions, along with strategies for engaging users who are resistant to AI-based corrections. Integrating AI into search results and social media could also help intercept misinformation, though trust and engagement remain challenges.
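
The intervention itself is essentially a short, multi-turn dialogue in which the model responds to the user’s own stated belief with tailored evidence. Below is a minimal sketch of that pattern, again assuming the OpenAI Python SDK; the system prompt, model choice, and three-turn loop are illustrative assumptions, not the protocol actually used by Costello et al.

```python
# Illustrative sketch of a belief-targeted dialogue: elicit the user's
# specific claim, then answer it with tailored counter-evidence over a
# few turns. Prompts, model name, and turn count are assumptions, not
# the study's actual protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [
    {"role": "system", "content": (
        "You are a factual, non-judgmental assistant. Respond to the "
        "user's specific claims with concrete evidence, not generic rebuttals."
    )},
    {"role": "user", "content": input("Describe a claim you believe, and why: ")},
]

for _ in range(3):  # a short multi-turn exchange, as in the study
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed here; the study used a GPT-4-class model
        messages=history,
    ).choices[0].message.content
    print(f"\nAssistant: {reply}\n")
    history.append({"role": "assistant", "content": reply})
    history.append({"role": "user", "content": input("Your response: ")})
```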

Meta has admitted to using all public posts and photos from Facebook and Instagram users since 2007 to train its AI models, unless users had actively set their content to private. This includes everything adults have publicly shared on the platforms over the years. While European users can opt out of this data scraping due to privacy laws, most users globally, including Australians, do not have this option. Meta’s global privacy director confirmed the data collection during an Australian government inquiry, acknowledging that the practice applies to all public content, including photos of children shared by their parents. The revelation has sparked criticism and calls for stronger privacy protections in countries like Australia, where users’ data is not safeguarded as it is in the European Union. The company maintains that large datasets are necessary to build effective AI tools, but the lack of transparency and the absence of opt-out options have raised concerns over privacy rights.

Leaders from Nvidia, OpenAI, Anthropic, Google, and several major tech and utility companies met with White House officials to discuss AI’s energy and infrastructure needs. The meeting, attended by executives including Nvidia CEO Jensen Huang and OpenAI CEO Sam Altman, focused on the growing energy demands of AI and the data center capacity needed to support the technology’s rapid growth. The discussion also touched on semiconductor manufacturing and grid capacity, with an emphasis on public-private collaboration. Following the meeting, the White House announced a task force to coordinate policy on AI infrastructure. Huang highlighted the scale of energy needed for AI development, calling it the beginning of a new industrial revolution, while OpenAI shared an analysis of the economic benefits of building large-scale data centers, emphasizing job creation and the importance of maintaining U.S. leadership in AI innovation. The meeting follows recent moves by the Biden administration to strengthen AI safety and ensure responsible development of the technology.

🛠️ AI tools updates

Researchers at King’s College London have developed an AI tool that uses anonymized NHS eye data from more than 100,000 diabetic patients to predict the likelihood of developing sight-threatening diabetic retinopathy (DR) up to three years in advance. The condition, which affects roughly one in three people with diabetes, is a leading cause of vision loss in working-age adults. The AI model, trained on more than 1.2 million retinal images, could transform the NHS Diabetic Eye Screening Programme (DESP), which currently screens 3.2 million people annually at a cost of over £85 million. By enabling individualized screening intervals, the tool can flag high-risk patients early while reducing unnecessary appointments for those at low risk, potentially saving the NHS millions of pounds and improving patient care. The approach demonstrates the practical benefits of AI in healthcare, offering both social and economic advantages while easing the burden on current screening programs.
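
The operational change such a tool enables is risk-based recall: predicted low-risk patients are screened less often, freeing capacity for those most likely to progress. Below is a minimal sketch of that triage logic; the risk thresholds and recall intervals are hypothetical assumptions for illustration, not the model or policy used by the King’s College team or the NHS.

```python
# Illustrative risk-based recall logic for a screening programme.
# Thresholds and intervals are hypothetical, not the NHS DESP's actual
# policy; the real tool predicts 3-year DR risk from retinal images.
from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: str
    predicted_3yr_dr_risk: float  # model output in [0, 1]

def recall_interval_months(p: Patient) -> int:
    """Map predicted risk to a screening interval: low-risk patients
    are recalled less often, freeing capacity for high-risk ones."""
    if p.predicted_3yr_dr_risk < 0.05:
        return 24  # low risk: extend to two years
    if p.predicted_3yr_dr_risk < 0.20:
        return 12  # standard annual recall
    return 6       # high risk: see sooner

for patient in [Patient("A001", 0.02), Patient("A002", 0.31)]:
    print(patient.patient_id, recall_interval_months(patient), "months")
```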

💵 Venture Capital updates

Despite a challenging funding environment, South Korea’s AI and robotics startups have continued to attract investment, particularly at the seed stage. In August 2024, Holiday Robotics, a developer of AI humanoid robots for manufacturers, secured $12 million in seed funding, the largest seed round raised by a Korean startup this year. Other notable startups, such as AIZ Entertainment, Beeble Inc., and Sionic AI, also raised significant seed capital. These companies, which focus on AI virtual humans, virtual studios for filmmakers, and generative AI solutions, respectively, highlight sustained interest in AI despite a broader downturn in seed investment. Overall, seed funding for Korean startups has dropped sharply year over year, but investors are concentrating on companies with proven founders and strong business models, particularly in the AI sector.

🫡 Meme of the day

⭐️ Generative AI image of the day