AI News of April

UN Takes Historic Step: International Guidelines Established for AI Development

The United Nations General Assembly has adopted a resolution, backed by more than 120 member states, on the development and use of AI systems.

The core principle of the resolution is the promotion of “safe, secure, and trustworthy” AI that advances the UN’s Sustainable Development Goals (SDGs). It emphasizes the importance of integrating human rights considerations throughout the entire AI lifecycle, from design to deployment, and signifies a global commitment to ensuring AI is used ethically.

U.S. Ambassador to the United Nations Linda Thomas-Greenfield said:

Today, all 193 members of the United Nations General Assembly have spoken in one voice, and together, chosen to govern artificial intelligence rather than let it govern us.

The resolution also calls for international collaboration to close the digital divide and empower developing countries to participate in the responsible development of AI. This includes ensuring inclusive access to AI technologies and fostering global digital literacy.

OpenAI Aims for Hollywood, But Can Sora Live Up to the Hype?

OpenAI is reportedly courting Hollywood with its latest marvel: Sora, a text-to-video AI model.

According to an OpenAI statement reported by Bloomberg:

OpenAI has a deliberate strategy of working in collaboration with industry through a process of iterative deployment – rolling out AI advances in phases – to ensure safe implementation and to give people an idea of what’s on the horizon. We look forward to an ongoing dialogue with artists and creatives.

While the promise of instant visuals based on a text prompt sounds like a dream come true, some cracks are starting to show in OpenAI’s pitch.

The biggest question mark is Sora’s grasp of reality. Critics have raised concerns about the model’s ability to understand and depict the physical world. Sora can generate striking visuals, but they may lack the nuance of real-world physics.

Secondly, there’s the issue of accessibility. While Sora could streamline pre-production, it’s unlikely to be readily available to everyone, and OpenAI could leave smaller studios and independent filmmakers out in the cold. This raises concerns about further consolidating power in the hands of major studios.

Finally, the potential for misuse looms large. With the ability to create hyper-realistic videos based on text alone, the lines between fact and fiction could easily blur. Deepfakes could reach a whole new level of sophistication, making it even harder to discern truth from manipulation. So, Sora might be a powerful tool, but its limitations and potential downsides need serious consideration before Hollywood embraces it wholeheartedly. 

Faster, Faster, Faster: MIT Unveils AI That Generates Images in a Flash

MIT’s CSAIL is celebrating a breakthrough – a new AI framework that makes image generation roughly 30 times faster. This one-step method, dubbed Distribution Matching Distillation (DMD), promises to revolutionize design tools and potentially accelerate fields like 3D modeling.

The DMD framework simplifies the multi-step process used by traditional diffusion models. This simplification translates to a dramatic increase in speed while maintaining high-quality image generation, according to MIT researchers.

The researchers believe DMD represents a significant step forward and that, with further development, it could deliver both speed and quality. Additionally, the one-step process could simplify AI use, making it more accessible to non-experts.

Tianwei Yin, the lead researcher on the DMD framework, said in a statement published on MIT’s website:

Our work is a novel method that accelerates current diffusion models such as Stable Diffusion and DALLE-3 by 30 times. This advancement not only significantly reduces computational time but also retains, if not surpasses, the quality of the generated visual content. Theoretically, the approach marries the principles of generative adversarial networks (GANs) with those of diffusion models, achieving visual content generation in a single step — a stark contrast to the hundred steps of iterative refinement required by current diffusion models. It could potentially be a new generative modeling method that excels in speed and quality.
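To make the contrast concrete, here is a minimal, hypothetical PyTorch sketch of the difference between the hundred-step iterative sampling the quote describes and the single forward pass of a distilled one-step generator. The TinyDenoiser model, the crude update rule, and the sampling functions are illustrative assumptions, not MIT’s actual DMD code, which also relies on a distribution-matching training objective and GAN-style supervision not shown here.

```python
# Hypothetical sketch contrasting many-step diffusion sampling with a
# one-step distilled generator, in the spirit of Distribution Matching
# Distillation (DMD). All names and model details are illustrative only.
import torch
import torch.nn as nn


class TinyDenoiser(nn.Module):
    """Stand-in for a diffusion model's noise predictor (e.g. a U-Net)."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Condition on the timestep by concatenating it to the input.
        return self.net(torch.cat([x, t.expand(x.shape[0], 1)], dim=-1))


def multi_step_sample(denoiser: nn.Module, dim: int = 64, steps: int = 100) -> torch.Tensor:
    """Traditional diffusion sampling: start from noise and refine over many steps."""
    x = torch.randn(1, dim)
    for i in reversed(range(steps)):
        t = torch.tensor([[i / steps]])
        noise_estimate = denoiser(x, t)
        x = x - noise_estimate / steps  # crude update rule, for illustration only
    return x


def one_step_sample(generator: nn.Module, dim: int = 64) -> torch.Tensor:
    """Distilled generator: a single forward pass maps noise directly to a sample."""
    z = torch.randn(1, dim)
    t = torch.tensor([[1.0]])
    return generator(z, t)


if __name__ == "__main__":
    teacher = TinyDenoiser()           # the slow, multi-step diffusion model
    student = TinyDenoiser()           # would be trained via DMD to mimic the teacher
    slow = multi_step_sample(teacher)  # ~100 network evaluations per sample
    fast = one_step_sample(student)    # 1 network evaluation, the source of the reported ~30x speedup
    print(slow.shape, fast.shape)
```

The point of the sketch is simply the cost model: the multi-step sampler calls the network once per refinement step, while the distilled student collapses that loop into a single call, which is where the speedup in the MIT announcement comes from.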