
Adobe adds more AI models to its palette

Also: US Space Force pauses use of AI tools like ChatGPT over data security risks


In recent AI news, Adobe has enhanced its suite with Project Stardust, a generative AI-powered image editing engine that enables seamless object manipulation. Meanwhile, data security concerns have led the U.S. Space Force to temporarily suspend the use of AI tools like ChatGPT, pending further review. As AI's influence grows across sectors, its transformative potential remains difficult to realize, with data quality a persistent challenge. On the global stage, nations are grappling with AI regulation, seeking to balance innovation and safety. In the corporate arena, tech giants are shaping AI's role in recruitment, emphasizing fairness and the need to counteract bias. In tool updates, the AI-based EVEscape offers insights into potential viral mutations, supporting proactive health measures. In the financial sector, Conveyor has secured a $12.5M Series A round for its AI-driven security review platform, while AMD's acquisition of Nod.ai signals a strategic push to strengthen its position in the AI chip market against competitors like Nvidia.


  • 🎨 Adobe adds more AI models to its palette

  • 🪐 US Space Force pauses use of AI tools like ChatGPT over data security risks

  • 🤖 New Technologies Arrive in Clusters. What Does That Mean for AI?

  • 🌎 How countries around the world are trying to regulate AI

  • 👩🏻 Microsoft, Amazon among the companies shaping AI-enabled hiring policy

Adobe's Project Stardust, unveiled at its MAX conference, is an object-aware editing engine powered by generative AI and designed to transform image editing. The technology lets users move or remove objects within images with a single click, showcasing the potential of generative AI and 3D technologies across creative domains such as photography and videography. Adobe also highlighted its Firefly generative AI models, now commercially available, which expand the scope of applications beyond image generation. Project Stardust itself is powered by Firefly Model 2, underscoring Adobe's continuing efforts to pioneer generative AI tools for creative work.

The U.S. Space Force has temporarily halted the use of AI tools like ChatGPT over data security concerns, according to a memo. The prohibition will remain in place until formal approval is granted by the force's Chief Technology and Innovation Office. The decision, termed a "strategic pause," aims to safeguard service data while the force assesses how to responsibly integrate these technologies into its operations. A task force is exploring the strategic use of generative AI, with further guidance expected in the coming month.

AI's potential to revolutionize industries is undeniable, with some likening its transformative power to that of fire and electricity. It promises to address long-standing challenges and has showcased its capability in various sectors like retail and manufacturing. However, realizing AI's promise is not straightforward. Past excitement around AI has often been met with periods of disillusionment and underachievement. The current wave of AI optimism is tempered by the reality that over 80% of AI projects fail and that AI tools often lack accuracy. Beyond the technology itself, successful AI adoption requires a convergence of supportive technologies, organizational adaptations, and societal accommodations. For AI, crucial ingredients include vast computing power and high-quality data. While computational power continues to grow, data quality remains a challenge.

Governments globally are striving to balance the promotion of AI innovation with ensuring safety and fairness. In the US, tech firms initially led AI safety measures, but officials are now exploring necessary regulations, including licensing for high-risk AI models. The UK aims for principle-based regulation to avoid stifling innovation, with existing bodies overseeing AI. The EU is advancing the AI Act, setting stricter standards particularly for high-risk AI systems. China mandates a security assessment for generative AI providers and adherence to socialist values, with a goal to lead in AI by 2030.

Only 12% of hiring professionals currently use AI in their talent recruitment processes, despite the potential benefits touted by AI proponents. Tech giants like Microsoft and Amazon, along with other major companies like Unilever and Koch Industries, have collaborated to formulate policies for AI's integration in hiring and recruitment. While AI can streamline some hiring tasks, there are reservations about its full-scale adoption due to potential bias and other challenges. Companies must ensure the use of AI in hiring upholds principles of transparency, fairness, and non-discrimination. Meanwhile, experts like Josh Millet of Criteria and Eric Reicin of BBB National Programs emphasize the importance of trustworthy AI systems that don’t amplify existing biases, along with the necessity of human oversight in the AI-driven recruitment process.

🛠️ AI tools updates

EVEscape, a newly developed AI tool, harnesses evolutionary and biological information to predict how a virus might mutate to evade the immune response. The tool proved its mettle during the COVID-19 pandemic by identifying several concerning variants before they emerged. Researchers believe EVEscape could steer the development of effective vaccines and therapies, not just for SARS-CoV-2 but also for other rapidly mutating viruses. By providing foresight into potential viral mutations, it enables a proactive approach to combating outbreaks.

💵 Venture Capital updates

Conveyor, a San Francisco-based company, recently secured $12.5 million in a Series A funding round. The firm has developed an AI-powered platform that streamlines, automates, and scales the process of customer security reviews. This influx of capital illustrates continued investor interest in applying artificial intelligence to cybersecurity, offering a more robust and efficient way to conduct necessary security reviews. The funding will likely enable Conveyor to further refine its AI technology, expand its team, and extend its market reach, giving more organizations the means to strengthen their security infrastructure.

In a strategic move to bolster its software capabilities and better compete with rival chipmaker Nvidia in the AI chip market, AMD acquired Silicon Valley-based Nod.ai. The startup, which specializes in software for deploying AI models, had previously raised between $20 million and $36.5 million in funding, according to differing sources. The acquisition is part of AMD's broader initiative to invest heavily in the software needed for advanced AI chips, with the aim of building a unified software stack to power the various chips the company produces. By acquiring Nod.ai, AMD can make it easier to deploy AI models tuned for its chips, demonstrating its commitment to executing this strategy through both internal investment and external acquisition. The move should bring AMD a step closer to narrowing the software advantage Nvidia has built over more than a decade in the AI chip market.

🫡 Meme of the day

⭐️ Generative AI image of the day