Meta's new AI project can generate videos from text descriptions

September 29, 2022  21:25

A couple of months ago, Meta introduced Make-A-Scene, a project that generated images from text descriptions. Now the company has unveiled a new project built on an even more advanced AI model, one that turns text descriptions into short video clips. The new project is called Make-A-Video.

So far, the videos Make-A-Video produces are of low quality, but the technology will likely improve and could play an important role in advancing AI-assisted content creation.

Functionally, Make-A-Video works much like Make-A-Scene: neural networks process a written text prompt and convert it into visual content. The difference lies in the format of the output, which is a video rather than a still image.

The Make-A-Video AI learns what the world looks like from paired text-image data, and it learns how the world moves from videos that have no accompanying text. Together, these two sources of training data allow it to convert text descriptions into video clips.
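The sketch below is only a conceptual illustration of that two-stage idea, not Meta's actual implementation: a text-conditioned module learned from text-image pairs supplies the appearance of a single frame, and a separate temporal module, trainable on unlabeled video, expands that frame into a short sequence. All class names, dimensions, and the use of a GRU here are hypothetical simplifications.

```python
# Conceptual sketch of "appearance from text-image pairs, motion from video".
# Hypothetical names and architecture; not Meta's Make-A-Video code.
import torch
import torch.nn as nn

class TextToImagePrior(nn.Module):
    """Maps a text embedding to a single image latent (learned from text-image pairs)."""
    def __init__(self, text_dim=512, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim, 512),
            nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, text_emb):
        return self.net(text_emb)            # (batch, latent_dim)

class TemporalExpander(nn.Module):
    """Expands one image latent into a sequence of frame latents (motion).
    Because it only needs video frames, it can be trained without text labels."""
    def __init__(self, latent_dim=256, num_frames=16):
        super().__init__()
        self.num_frames = num_frames
        self.rnn = nn.GRU(latent_dim, latent_dim, batch_first=True)

    def forward(self, image_latent):
        # Repeat the single appearance latent across time and let the
        # recurrent module introduce frame-to-frame variation.
        seq = image_latent.unsqueeze(1).repeat(1, self.num_frames, 1)
        frames, _ = self.rnn(seq)
        return frames                         # (batch, num_frames, latent_dim)

# Usage: text embedding -> image latent -> short sequence of frame latents.
text_emb = torch.randn(1, 512)                # stand-in for a real text encoder
image_latent = TextToImagePrior()(text_emb)
frame_latents = TemporalExpander()(image_latent)
print(frame_latents.shape)                    # torch.Size([1, 16, 256])
```

In the real system the frame latents would still need to be decoded into pixels and upsampled in space and time; this sketch stops at the latent level to keep the two-stage structure visible.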

Meta's examples include a clip generated from the prompt "A dog wearing a superhero outfit with a red cape flying through the sky" and another from "A teddy bear painting a portrait."

Meta representatives believe the project could open up new opportunities for artists and content creators. The company also published a research paper detailing the AI model that underpins Make-A-Video.

To see examples of videos produced with Make-A-Video, go to: https://makeavideo.studio/.
