Gen-3 Alpha: Runway introduces AI generator for realistic videos

June 19, 2024  14:05

New York startup Runway, known for its innovative developments in the field of artificial intelligence, introduced a new model for generating video called Gen-3 Alpha.

Technology and capabilities

Gen-3 Alpha is the first model in the series trained on Runway's new infrastructure, which is designed for large-scale multimodal training. According to the developers, these AI models can simulate "a wide range of situations and interactions found in the real world."

The new system can create high-quality, detailed, and realistic videos up to 10 seconds long, featuring a wide range of character emotions and moving camera shots. Generating a 5-second video takes 45 seconds; a 10-second video takes 90 seconds.

The Gen-3 Alpha model was trained on video and images by an interdisciplinary team of researchers, engineers, and artists. However, the company does not disclose the origin of all the material in the training set. A Runway spokesperson explained that an internal research team oversees model training using carefully curated internal datasets.

Availability and plans

Gen-3 Alpha is not yet publicly available, but according to Runway CTO Anastasis Germanidis, the model will be rolled out to the platform's paid subscribers (from $15 per month, or $144 per year) in the coming days. Later this year, the model will become available to all users.

Runway is also partnering with leading entertainment and media organizations to create custom versions of Gen-3 Alpha. These allow the generation of stylistically controlled, consistent characters that meet specific artistic and narrative requirements. Runway products have already been used by the directors of films such as Everything Everywhere All at Once and The People's Joker.
