Google's YouTube is set to roll out advanced creative AI tools for creators in the coming months, enabling them to produce video content using the AI models Veo and Imagen 3 through a feature called Dream Screen. The new technology will simplify content creation for YouTube Shorts by providing users with high-quality, AI-generated video backgrounds.

“On YouTube we want creators to be able to express their creativity, build communities + run sustainable businesses. New tools on #MadeOnYouTube helping: We’re bringing Veo to Dream Screen to create high-quality, custom backgrounds on Shorts + more,” Google chief Sundar Pichai posted on X.

Google’s Veo is a generative video model unveiled at Google I/O 2024. It can create high-quality 1080p videos over a minute in length and can understand and accurately capture nuances in text prompts, including cinematic terms.

Dream Screen will initially allow creators to generate images from text prompts, presenting four options to choose from. Once one is selected, the Veo model will generate a 6-second background video based on the creator’s vision. Starting in 2025, the feature will be expanded to allow the creation of standalone 6-second video clips.

The technology builds on Google’s extensive research in AI, including its Transformer architecture and diffusion models, and is designed for widespread use among millions of YouTube creators. These AI-generated creations will be watermarked with SynthID, and YouTube will label the content to indicate its AI origin.

YouTube is working to make creative AI technologies more accessible globally, making content production easier for millions of creators.

Meanwhile, OpenAI has yet to release its video generation model, Sora. Hollywood actor Ashton Kutcher recently praised Sora, saying that creators would be able to render an entire movie with it. “I have the beta version of it, and it’s pretty amazing,” he said.

In a recent interview with the Wall Street Journal, CTO Mira Murati said that OpenAI will make Sora publicly accessible later this year.
