Runway Introduces Video AI Gen-3

Competition to generate high-quality AI video is intensifying. On Monday, Runway, which develops artificial intelligence tools for film and imaging, unveiled Gen-3 Alpha, a new AI model that can create video clips from text descriptions and still images. Runway claims that Gen-3 offers significant improvements in generation speed and quality over the previous Gen-2 model, as well as detailed control over video structure, style, and motion.

Gen-3 will soon be available to Runway subscribers, including corporate clients and creators participating in Runway’s Creative Partner Program. “Gen-3 Alpha stands out in its ability to create expressive human characters, showcasing a diverse array of actions, gestures, and emotions,” Runway stated on its blog. “It was designed to understand different styles and cinematic terms, while also allowing for creative transitions and precise framing of scene elements.”

Despite its capabilities, Gen-3 Alpha has limitations, chief among them a maximum clip length of 10 seconds. However, Runway co-founder Anastasis Germanidis says Gen-3 is only the first in a series of advanced models that will be built on the company’s updated infrastructure. “The model can struggle with complex interactions between characters and objects, and the results don’t always obey the laws of physics exactly,” Germanidis told TechCrunch. “This initial release will support 5- and 10-second high-resolution clips with much faster generation times than Gen-2. It takes 45 seconds to make a 5-second clip, and 90 seconds to make a 10-second clip.”

Gen-3 Alpha, like all video generation models, was trained on a large dataset of videos and images to “learn” the patterns needed to generate new clips. However, Runway did not disclose the source of its training data, a common practice among AI companies that protects a competitive edge and avoids potential intellectual property disputes, including copyright complications. “We have our research team that is responsible for all of our training using curated in-house datasets,” Germanidis noted.

To address copyright concerns, Runway said it consulted with artists while developing the model. The company also plans to release Gen-3 with new safeguards, including a moderation system to block attempts to generate copyrighted content and a provenance system, based on the C2PA standard backed by Microsoft, Adobe, OpenAI, and others, to identify videos created by Gen-3. “Our new visual and text moderation system automatically filters out objectionable or harmful content,” Germanidis said. “C2PA authentication verifies the origin and authenticity of media created with our Gen-3 models. As we improve our modeling capabilities and generate high-fidelity content, we will continue to invest heavily in compliance and security measures.”
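
For readers curious about what C2PA provenance checking looks like in practice, the minimal sketch below (not Runway’s own tooling) reads a clip’s C2PA manifest with the open-source `c2patool` CLI from the Content Authenticity Initiative. The file name is hypothetical, and the sketch assumes `c2patool` is installed and on the PATH:

```python
import json
import subprocess
from typing import Optional

def read_c2pa_manifest(path: str) -> Optional[dict]:
    """Return the parsed C2PA manifest store for `path`, or None if none is found."""
    # c2patool prints the manifest store as JSON on stdout; a missing or
    # unreadable manifest is reported as an error with a nonzero exit code.
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None
    return json.loads(result.stdout)

manifest = read_c2pa_manifest("gen3_clip.mp4")  # hypothetical file name
if manifest:
    # "active_manifest" identifies the most recent signed claim, which for an
    # AI-generated clip should name the generator that produced it.
    print("Provenance found:", manifest.get("active_manifest"))
else:
    print("No C2PA provenance data embedded in this file.")
```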

Runway is also working with leading entertainment and media organizations to create custom versions of Gen-3, enabling more stylistically consistent characters across scenes and support for specific artistic requirements. Consistency remains a significant challenge for video generation models: tasks that are routine in traditional filmmaking, such as keeping the color of a character’s clothing the same from shot to shot, are difficult for generative models to solve and often result in additional manual editing.

Runway has raised more than $236.5 million from investors including Google and Nvidia, as well as venture capital firms such as Amplify Partners, Felicis, and Coatue. The company works closely with the creative industry, operating Runway Studios as a corporate production partner and organizing an AI film festival to showcase AI-powered films.

Other companies are also entering the generative AI video space. Luma recently launched Dream Machine, a viral video generator, and Adobe is developing a video generation model based on the Adobe Stock media library. OpenAI’s Sora, while still restricted, has been used by marketing agencies and filmmakers, including at the 2024 Cannes Film Festival and the Tribeca Festival, which showcased films created with AI tools. Google has offered its Veo video generation model to select creators, with plans to integrate it into products like YouTube Shorts.

Generative AI video tools have the potential to revolutionize the film and television industry. Director Tyler Perry put his studio’s $800 million expansion on hold after seeing Sora’s potential, and Avengers: Endgame director Joe Russo predicts that AI will be able to create entire movies within a year. A 2024 study commissioned by the Animation Guild indicates that 75% of film companies that have adopted AI have already cut or eliminated jobs, and that by 2026 AI could affect more than 100,000 US entertainment jobs. Safeguards will be important to ensure that video-generating AI tools do not significantly reduce the demand for creative work, as has been observed with other generative AI technologies.
