Meta Introduces "Movie Gen" AI-Powered Video Generation Model

A screenshot from a video released by Meta, created using its new model.


Meta unveiled its new generative AI model, Movie Gen, on Friday. It lets users create videos from text or image prompts and offers editing options for existing videos. The model expands Meta's suite of AI image-generation tools, following two earlier models launched in July 2022 and November 2023.

For now, none of these models is publicly available. Movie Gen is in a testing phase in which content creators and filmmakers are evaluating its capabilities and helping to refine the software. Meta has not yet announced a release date.

Alongside demo videos, Meta released a research paper detailing the new model. Generative models like Movie Gen are trained on vast datasets and produce content (text, images, video, audio, or code) in response to natural-language prompts.

Movie Gen can generate videos from text or image prompts and lets users edit existing videos. It can also add soundtracks, again driven by natural-language prompts.

In a short video shared by Meta, a single spoken command adds scenes of an SUV driving through the desert, complete with a roaring engine and guitar soundtrack.

Meta hinted that Movie Gen might one day be available on platforms like Instagram and Facebook, or even through WhatsApp messaging.

Meta is not the first company to develop AI video generation, however. Runway's tools, for example, can create short clips, turn a series of still images into video, or transform existing footage. OpenAI unveiled its Sora model in February, and Google is developing Lumiere.

Meta claims that in human evaluations, Movie Gen outperforms comparable industry models.
