
How real is the threat of AI deepfakes in the 2024 election?

Also: Broadening demand points to faster AI growth

Welcome!

In today's edition, we delve into some compelling discussions around the potential threats and challenges posed by AI, including the use of deepfakes in political campaigns, the propagation of misinformation by AI chatbots, and the looming legal issues facing generative AI. On a more optimistic note, we explore the broadening demand for AI and its implications for market growth, as well as a promising new tool developed by MIT researchers to protect images from AI manipulation. In venture capital updates, AWS and Accel are boosting support for generative AI startups in India with the launch of their ML Elevate 2023 program. Let's dive in.

Sliced:

  • 🥸 How real is the threat of AI deepfakes in the 2024 election?

  • 📈 Broadening demand points to faster AI growth

  • 😱 Fragmented truth: How AI is distorting and challenging our reality

  • 👩🏻‍⚖️ AI’s Growing Legal Troubles

The threat of AI deepfakes influencing the 2024 election is real and worrisome, says Hany Farid, a digital forensics expert at the University of California, Berkeley, pointing to a campaign ad supporting Florida Republican Governor Ron DeSantis that used an AI-cloned voice of former President Trump. He highlights the risks of such technology, including fabricated audio of opponents making statements they never made and the potential for discrediting genuine content. While AI voices still have noticeable imperfections, rapid technological advances could soon produce near-perfect fakes, a problem compounded by the pace at which content is consumed on social media. Deepfakes appear in both national and local political campaigns, but regulating them is difficult because of free speech protections and the complexity of proving intent to deceive. Farid suggests that campaign ethics pledges and tech companies can play a vital role in mitigating the threat, the latter by refusing to generate misleading content and by watermarking what their models produce. Even so, disinformation could spread faster than fact-checkers can respond, making it a pressing concern for upcoming elections.

The technology earnings season began with mixed results, creating uncertainty about the third-quarter performance of tech stocks, which have rallied 25-30% year-to-date. Broadening demand trends nonetheless point to faster AI growth, with AI contributing roughly USD 2 trillion of the USD 6 trillion increase in combined global tech market capitalization. Current AI revenues derive primarily from the infrastructure layer, but as demand broadens, the software layer and the layer of internet-based applications and data models are expected to take over. Accordingly, forecasts for long-term AI end-demand have been raised from a 20% CAGR during 2020–25 to a 61% CAGR during 2022–27, which would grow global AI demand from USD 28bn in 2022 to roughly USD 300bn in 2027. Given these trends, software stocks appear to offer more attractive risk-reward profiles and are better positioned to capitalize on broadening AI demand than early-cyclical segments like semiconductors and hardware, which currently look overvalued.
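As a quick sanity check on that forecast (the 61% CAGR and the 2022 and 2027 endpoints are the source's; the compounding arithmetic is ours): growing USD 28bn at 61% per year over the five years from 2022 to 2027 gives USD 28bn × (1.61)^5 ≈ USD 28bn × 10.8 ≈ USD 302bn, in line with the quoted USD 300bn.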

The AI landscape has grown complex and fraught with issues of truth and trust. AI chatbots like ChatGPT, Microsoft's Sydney, and others produce varying outputs depending on their training data and embedded guardrails, raising concerns about the propagation of misinformation and bias. Companies like Anthropic and Meta have taken different approaches to securing ethical AI interaction, but the release of open-source models has made enforcing guardrails less feasible, and new research showing ways to bypass those guardrails deepens the safety implications and further erodes trust. The fragmentation extends to the rise of digital humans, which are becoming increasingly advanced and are already being deployed as digital newscasters. The realism of these AI-generated faces, coupled with the potential for personalized news content, could enable further manipulation and disinformation, threatening the very fabric of truth in society.

Generative AI systems like OpenAI's ChatGPT are coming under legal fire over copyright and defamation, since they fall outside Section 230 protection. Because these systems generate content by scanning vast amounts of information from many sources, they face a barrage of copyright lawsuits over whether their outputs are "derivative" works requiring the original copyright owners' permission. High-profile cases include a class-action suit led by comedian Sarah Silverman and others against OpenAI and Meta, and a $9 billion claim against Microsoft and OpenAI over the use of copyrighted code. Potential defamation liability is also rising: AI systems are treated as publishers rather than content hosts, so Section 230-like protections do not apply. These legal challenges underscore the urgent need for a new legal protection mechanism, dubbed Section 230.ai, for the burgeoning AI industry. The current legal quandary threatens to slow the rollout of generative AI tools, making it urgent to address AI's legal vulnerabilities.

🛠️ AI tools updates

Researchers at MIT have developed a tool called PhotoGuard that could protect pictures from manipulation by artificial intelligence (AI) systems. The tool adds imperceptible perturbations to images; invisible to the human eye, they prevent AI models from modifying the pictures convincingly, which could significantly reduce the risk of photos being used maliciously, for example in nonconsensual deepfake pornography. PhotoGuard implements two techniques: an 'encoder attack', which causes the AI to interpret the image differently, and a more effective 'diffusion attack', which disrupts the AI's image-generation process. While the tool currently works reliably only on the Stable Diffusion model, it is an encouraging step towards mitigating the harm of AI-powered image manipulation, and experts suggest that integrating such protective measures into tech platforms could further shield users' images.
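For readers curious about the mechanics, here is a minimal sketch of what an 'encoder attack' can look like in PyTorch. This illustrates the general adversarial-perturbation idea rather than PhotoGuard's actual code: the `encode` callable, the `eps` budget, and the hyperparameters are our assumptions, standing in for a latent-diffusion model's differentiable image encoder.

```python
# Sketch of an "encoder attack" in the spirit of PhotoGuard: search, via
# gradient descent, for a tiny perturbation that pushes an image's latent
# representation toward a decoy target, so a latent-diffusion editor
# effectively "sees" a different image. Illustrative, not PhotoGuard itself.
import torch

def encoder_attack(image, target_image, encode, eps=0.03, steps=100, lr=0.01):
    """Return image + delta with ||delta||_inf <= eps whose latent is
    close to the latent of `target_image`. `encode` maps a [0, 1] image
    tensor to its latent and must be differentiable."""
    target_latent = encode(target_image).detach()
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        latent = encode((image + delta).clamp(0, 1))
        # Pull the protected image's latent toward the decoy latent.
        loss = torch.nn.functional.mse_loss(latent, target_latent)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the change imperceptible
    return (image + delta).clamp(0, 1).detach()
```

With the Hugging Face diffusers library, for instance, `encode` could plausibly be `lambda x: vae.encode(x).latent_dist.mean` for a pretrained `AutoencoderKL`. The diffusion attack described in the article goes further, differentiating through the full image-generation process, which is costlier to compute but disrupts editing more robustly.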

💵 Venture Capital updates

Amazon Web Services (AWS) and venture capital firm Accel have launched ML Elevate 2023, a six-week accelerator program designed to support Indian startups building generative AI applications. The initiative gives these startups access to AI models and tools, mentorship, resources, and up to $200,000 in AWS credits, among other benefits. The program targets startups with a Minimum Viable Product (MVP) that plan to seek funding within the next 12-18 months. Participants will learn from industry experts and potential investors in live virtual masterclasses and get a chance to pitch their ideas during a dedicated Demo Week. The initiative is a testament to AWS's continued commitment to India, where it plans to invest $12.7 billion by 2030.

🫡 Meme of the day

⭐️ Generative AI image of the day