Flawed data ranked as top AI risk
Also: New ‘Physics-Inspired’ Generative AI Exceeds Expectations

Welcome!
In recent AI news, CEOs express concerns over potential errors and data inaccuracies in AI adoption, despite acknowledging its benefits. Google DeepMind introduces an AI tool that boosts the identification of disease-causing genes from 0.1% to 89%. Carl Benedikt Frey and Michael Osborne discuss how generative AI technologies could reshape opportunities for average workers. MIT’s Max Tegmark explores physics-inspired generative AI models for rapid image production. Google’s Bard chatbot can now source data from users’ Gmail, Docs, and Drive, and cybersecurity startup HiddenLayer raises $50M for its AI-defending tools.
Sliced:
⚠️ Flawed data ranked as top AI risk
🧬 Google DeepMind AI speeds up search for disease genes
👨‍💼 Carl Benedikt Frey and Michael Osborne on how AI benefits lower-skilled workers
✅ New ‘Physics-Inspired’ Generative AI Exceeds Expectations
In a recent Workday survey, 67% of CEO respondents identified potential errors as a primary concern regarding AI adoption, even though 98% acknowledge AI’s potential immediate benefits for their organizations. Many CEOs are hesitant to fully adopt AI due to risks associated with data privacy and inaccuracies. Despite the rapid evolution of these technologies, 28% of CEOs prefer to observe AI’s impact on their organizations before finalizing their approach. Similarly, a KPMG survey found that 80% of executives believe generative AI, which creates content based on training data, will disrupt their industries, but almost half fear it could harm their organizations’ trustworthiness without proper risk management. The success of AI hinges on the quality of its training data, with many organizations struggling with siloed or low-quality data. This emphasis on AI’s risks and benefits has also caught the attention of regulators and policymakers in Washington.
Google DeepMind has developed an AI model that significantly enhances the identification of disease-causing genes in human DNA. While previously only 0.1% of mutations had been classified as either benign or disease-causing, DeepMind's new tool raises that figure to 89%, enabling researchers to pinpoint potentially harmful regions of DNA far more efficiently. The tool, published in the journal Science, has been evaluated by Genomics England in collaboration with the NHS. Dr. Ellen Thomas of Genomics England said the health sector would be among the early beneficiaries, as the tool helps clinical scientists derive meaningful insights from genetic data for patient care. Experts such as Prof. Birney foresee AI playing a transformative role in molecular biology and related fields.
A decade ago, Carl Benedikt Frey and Michael Osborne highlighted the rise of AI and its potential to automate routine jobs, sidelining the “average” worker while benefiting highly skilled professionals. However, with the advent of generative AI like GPT-4 and DALL·E 2, there has been a shift. These technologies can produce human-like content, potentially replacing or assisting professionals in fields like design and advertising. Furthermore, tools like GitHub’s Copilot, an AI “pair programmer”, amplify the scope of automation. As a result, the narrative is changing, signaling a resurgence of opportunities for the “average” worker.
Max Tegmark, a physicist at MIT, believes that the principles of physics can contribute to advancements in AI, particularly in the domain of generative models. Tegmark and his team have been exploring physics-inspired generative models, including those based on diffusion processes. Their recent work introduced the Poisson flow generative model (PFGM), in which data points are treated as charged particles whose electric field is governed by the Poisson equation. The model produces images rapidly by following the trajectories defined by that electric field, and it is trained much like a diffusion model: noise is added to images and the model learns to remove it. An upgraded version, PFGM++, lets researchers adjust the dimensionality of the system, trading off robustness against ease of training. The MIT team is now exploring other physical processes, such as the Yukawa potential, to inspire new generative models.
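The physical picture behind PFGM can be sketched in a few equations. This is a simplified illustration based on the description above, not the authors' exact formulation: treat the data density as a charge density sourcing a potential via the Poisson equation, and generate samples by following the resulting field lines.

```latex
% Data density \rho acts as a charge distribution sourcing a potential \varphi
% via the Poisson equation (simplified sketch):
\nabla^2 \varphi(\mathbf{x}) = -\rho(\mathbf{x}),
\qquad
\mathbf{E}(\mathbf{x}) = -\nabla \varphi(\mathbf{x}).

% Sampling follows the electric field lines as an ODE,
% integrating from a distant prior distribution back toward the data:
\frac{d\mathbf{x}}{dt} = \mathbf{E}(\mathbf{x}).
```

Intuitively, far from the data the field looks simple (like that of a point charge), so samples can start from an easy distribution and flow along field lines until they land on the data distribution.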
🛠️ AI tools updates
Google's Bard AI chatbot has evolved from merely sourcing information from the web to accessing user-specific data in Gmail, Docs, and Drive to provide answers. With this enhanced capability, Bard can extract and summarize email contents or pinpoint key passages in documents stored in Drive, giving users a streamlined way to locate particular information without rummaging through their storage. The feature, currently available only in English, can be activated with a prefix such as @mail or by asking Bard directly. Although users may have privacy concerns, Google states that this data will neither be used to train Bard's public model nor be visible to human reviewers. Opting in is the user's choice, and the feature can be deactivated at any time. Beyond Gmail, Docs, and Drive, extensions also connect Bard to Maps, YouTube, and Google Flights, and Google aims to expand these integrations further, both within its ecosystem and with external partners.
💵 Venture Capital updates
HiddenLayer, a cybersecurity startup specializing in defending AI systems against adversarial attacks, has secured $50 million in funding from prominent investors including M12, Moore Strategic Ventures, IBM, and Capital One. Co-founded by Chris Sestito, Jim Ballard, and Tanner Burns in 2019, the company offers tools that guard AI models against vulnerabilities, malicious code injections, and other threats, ensuring the integrity of AI models before their deployment. Sestito highlighted that many organizations use pre-trained open-source models, which can expose them to potential threats. HiddenLayer emphasizes its unique AI-driven detection and response system and has already partnered with notable organizations, including the U.S. Air Force. As AI adoption surges, HiddenLayer's platform aims to address the increasing concerns around AI security.
🫡 Meme of the day

⭐️ Generative AI image of the day

Before you go, check out how telling an AI model to “take a deep breath” caused its math scores to soar in a recent study.
