
MIT study shows AI conversations are more positive if users think AI is empathetic, negative if they think it’s nefarious

Also: How 'AI watermarking' system pushed by Microsoft and Adobe will and won't work

Hi!

In recent AI news, a study by MIT and Arizona State University highlights how users' perceptions of a chatbot's motives can shape their interactions, with conversations leaning positive when users believe the AI is empathetic. Harvard emphasizes the swift progression of AI technologies and their imminent impacts on various sectors, citing ChatGPT as a prime example. On the application front, Farmer.CHAT, an AI chatbot, is aiding farmers in addressing contemporary agricultural challenges. Furthermore, a coalition led by Microsoft and Adobe seeks to enhance transparency by introducing an 'AI watermarking' system for AI-generated images. Adobe is set to redefine video editing with its new generative AI tool unveiled at Adobe MAX. In the venture capital realm, a list of 100 promising South Korean AI startups will be released later this month, shedding light on the burgeoning AI sector in the region.

Sliced:

  • 🙂 MIT study shows AI conversations are more positive if users think AI is empathetic, negative if they think it’s nefarious

  • ⚠️ A Tech Warning: AI's Rapid Advancements

  • 🚜 AI chatbot helps farmers navigate a changing world

  • 🔖 How 'AI watermarking' system pushed by Microsoft and Adobe will and won't work

The article discusses a study by MIT and Arizona State University revealing that users' preconceived notions about a chatbot's motives significantly shape how they interact with it. Participants were primed with different descriptions of the chatbot's intentions (empathetic, neutral, or manipulative), and that priming altered both how they perceived the bot and how the conversations unfolded: sentiment turned more positive when users believed the AI was empathetic, and more negative when they believed it was manipulative. The findings underscore the importance of accounting for human factors in AI development and deployment, since users' mental models of AI are easily shaped by external influences such as media and popular culture.
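
To make the comparison concrete, here is a minimal sketch of how average conversation sentiment could be compared across the three priming conditions. This is not the study's actual pipeline: the transcripts are invented and the word-count lexicon is a crude stand-in for a real sentiment model.

```python
from statistics import mean

# Hypothetical transcripts keyed by how the chatbot was described to users.
conversations = {
    "empathetic": ["that was really helpful, thank you", "i feel better now"],
    "neutral": ["ok", "noted, thanks"],
    "manipulative": ["this feels wrong and unhelpful", "i do not trust this"],
}

# Toy lexicon; a real analysis would use a trained sentiment model.
POSITIVE = {"helpful", "thank", "thanks", "better", "good"}
NEGATIVE = {"wrong", "unhelpful", "bad", "worse"}

def score(text: str) -> int:
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = text.lower().replace(",", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Print mean sentiment per priming condition.
for condition, texts in conversations.items():
    print(f"{condition:>12}: mean sentiment {mean(score(t) for t in texts):+.2f}")
```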

A recent publication from Harvard underscores the rapid pace at which Artificial Intelligence (AI) technologies are advancing, particularly spotlighting the writing and artistic capabilities of ChatGPT. The article delves into the consequential impacts anticipated for the U.S. economy, national security, and other essential facets of life due to the swift evolution of AI. It suggests that the current trajectory of AI development, epitomized by ChatGPT, is not only a testament to the significant progress already made but also a harbinger of the transformative changes awaiting us in the near future. The narrative serves as a reminder of the boundless possibilities and the critical challenges that AI presents, urging stakeholders to brace for a potentially rough ride ahead as AI continues to permeate every sector of society.

A novel AI chatbot, Farmer.CHAT, is helping farmers tackle contemporary agricultural hurdles. Food and climate experts see such tools as critical for addressing food insecurity amid global heating and the Ukraine war. By supporting the development of drought-tolerant crops, early-warning systems for climate-induced disasters, and sustainable land management practices, AI chatbots like Farmer.CHAT are expected to considerably fortify agricultural and food systems against emerging challenges, marking a substantial stride towards resilient and sustainable agriculture.

Microsoft, Adobe, and other companies have pledged to add metadata to AI-generated images indicating their machine-made nature via a unique symbol, with the aim of promoting transparency. The initiative, led by the Coalition for Content Provenance and Authenticity, is meant to give users an easy way to tell whether an image was created by a model or a human through applications compatible with the metadata system. When an image is viewed in a supporting application, a symbol is displayed on it; interacting with the symbol reveals the image's source and other related details.
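
As a rough illustration of the general idea, the sketch below attaches and reads back a simple provenance note using PNG text chunks via Pillow. This is not the C2PA / Content Credentials format itself, which relies on signed manifests rather than plain text fields, and the field name and values here are placeholders.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# "Generate" a stand-in image and attach a simple provenance note to it.
img = Image.new("RGB", (64, 64), color="gray")
meta = PngInfo()
meta.add_text("provenance", "generator=example-image-model; ai_generated=true")
img.save("tagged.png", pnginfo=meta)

# A viewer that understands the field can surface it to the user,
# analogous to the symbol-and-details flow described above.
loaded = Image.open("tagged.png")
print(loaded.text.get("provenance", "no provenance metadata found"))
```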

🛠️ AI tools updates

At its annual creative conference, Adobe MAX, Adobe unveiled a new AI-driven video editing tool named Project Fast Fill. This generative AI tool, an extension of Adobe's popular Generative Fill, promises to significantly streamline video editing: with simple text prompts, editors can swiftly add or remove objects and alter backgrounds, tasks that were once labor-intensive. In demonstrations, objects were seamlessly removed from a video's background and a tie was added to a man simply by typing "tie." Notably, the tool does not merely edit a single frame; it adjusts every frame in the video, accounting for factors like lighting and movement. Although a release date has not been announced, Adobe says the technology may soon be incorporated into its product lineup.
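
Adobe has not published an API for Project Fast Fill, so the sketch below only illustrates the naive per-frame idea the paragraph alludes to: the same text-prompted edit applied to every frame. The `generative_fill` function is a hypothetical placeholder, and a real system would additionally keep the edit consistent across frames as lighting and motion change.

```python
from typing import List

Frame = List[List[int]]  # toy grayscale frame: a 2D grid of pixel values

def generative_fill(frame: Frame, prompt: str) -> Frame:
    """Hypothetical stand-in for a text-prompted inpainting model."""
    # Just brighten every pixel so the sketch runs end to end.
    return [[min(px + 10, 255) for px in row] for row in frame]

def edit_video(frames: List[Frame], prompt: str) -> List[Frame]:
    # Naive approach: apply the same prompted edit to each frame independently.
    # A production tool would also enforce temporal consistency between frames.
    return [generative_fill(frame, prompt) for frame in frames]

video = [[[100, 120], [90, 110]] for _ in range(3)]  # three tiny 2x2 frames
edited = edit_video(video, "remove the microphone from the background")
print(edited[0])  # [[110, 130], [100, 120]]
```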

💵 Venture Capital updates

The Korea Economic Daily, in partnership with KT Corp., will unveil a list of 100 emerging AI startups in South Korea on October 26, marking the third consecutive year of this collaboration. The initiative aims to bolster South Korea's burgeoning AI startup ecosystem amid the digital transformation era. Across previous editions, 157 startups have been recognized, with nine going public and 37 securing subsequent funding rounds. Startups like Upstage and Riiid have showcased advances in large language models that outperform OpenAI's GPT-3.5 in standardized evaluations. The 2023 AI Startup 100 list is expected to serve as an essential guide for investors keen on the Korean AI sector.

🫡 Meme of the day

⭐️ Generative AI image of the day

Before you go, check out Adobe’s new animated dress tech.