This new AI animation tool is blowing people’s minds

AI research company Runway lets you beef up your generative AI images with its new Motion Brush tool, part of the latest update to its service.



More AI tools keep popping up to extend the capabilities of the popular generators already available, and the latest one is blowing people’s minds.

One AI research company, Runway, recently introduced its Motion Brush tool as part of an update to Gen-2, the second generation of its video model. Motion Brush helps animate aspects of AI-generated images, such as those created in Midjourney. Bringing an image to life with a simple brush stroke feels like magic, which is always true when AI is done right. The video below, posted by AI enthusiast Rory Flynn, shows the new tool in action.

Many creators are already having fun with the Motion Brush tool, bringing still images to life: trucks driving down a dirt road, panning nature shots, people and animals in motion, leaves twisting in the wind, and drifting clouds. Runway has also showcased examples that set waterfalls, fish in a tank, fire, and the smoke from a burning cigarette in motion.

To use Motion Brush, upload an image to the service, select Start with Image, and then select the Motion Brush tool. Use it to paint a highlight over the area of the image you would like to animate. You can also generate an image within Runway from a text prompt before using Motion Brush. Set the horizontal, vertical, and proximity controls at the bottom of the screen, then press Save. Once saved, generate the video by selecting the Extend 4s button, and click Extend 4s again to lengthen the clip in four-second increments, up to 16 seconds. Generated videos can be downloaded, shared, and used in other editors, among other options.
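If it helps to picture that workflow as a handful of settings rather than clicks, the short sketch below models the same steps in plain Python. Everything in it is hypothetical and invented purely for illustration, including the MotionBrushJob and MotionBrushStroke names and their fields; it is not Runway's actual API, just a way to see which knobs the tool exposes.

```python
from dataclasses import dataclass, field

# Hypothetical data model for a Motion Brush job. These class and field names
# are invented for illustration only and are not Runway's real API.

@dataclass
class MotionBrushStroke:
    """A painted highlight plus the motion applied to that region."""
    mask_path: str            # the painted area, thought of as an image mask
    horizontal: float = 0.0   # left/right drift set by the horizontal control
    vertical: float = 0.0     # up/down drift set by the vertical control
    proximity: float = 0.0    # movement toward or away from the camera

@dataclass
class MotionBrushJob:
    source_image: str                                   # uploaded or generated in Runway
    strokes: list[MotionBrushStroke] = field(default_factory=list)
    duration_s: int = 4                                 # first generation is 4 seconds

    def extend(self) -> int:
        """Mimic pressing Extend 4s: add four seconds, capped at 16."""
        self.duration_s = min(self.duration_s + 4, 16)
        return self.duration_s

# Example: animate only the sky in a still image, then extend the clip twice.
job = MotionBrushJob(source_image="dirt_road_truck.png")
job.strokes.append(MotionBrushStroke(mask_path="sky_mask.png", horizontal=3.0))
job.extend()                 # 8 seconds
job.extend()                 # 12 seconds
print(job.duration_s)
```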

Motion Brush also works together with Camera Controls, which lets you set parts of the image in motion while the camera pans or zooms at the same time. Other updated features include Gen-2 Style Presets, which apply a visual style to content without prompts, and Director Mode updates that let you adjust camera moves down to fractions of a second.

Runway Motion Brush preview. RunwayML

Runway’s interface resembles most image or video editors. You’ll have access to different functions and limits in the service depending on your price tier: Basic, Standard, Pro, Unlimited, or Enterprise. The Motion Brush tool is currently in beta and available to all Runway members.

In addition to the Motion Brush update, Runway recently introduced new Gen-2 Style Presets and updated Camera Controls, among other features, the company said on its X (formerly Twitter) profile.

Runway is free to sign up for, and you can use a Google or Apple account to do so. Single sign-on (SSO) is also available, but only on the Enterprise tier.


