{"id":169590,"date":"2025-05-20T17:45:00","date_gmt":"2025-05-20T17:45:00","guid":{"rendered":"https:\/\/entertainment.runfyers.com\/index.php\/2025\/05\/20\/veo-3-can-generate-videos-and-soundtracks-to-go-along-with-them-techcrunch\/"},"modified":"2025-05-20T17:45:00","modified_gmt":"2025-05-20T17:45:00","slug":"veo-3-can-generate-videos-and-soundtracks-to-go-along-with-them-techcrunch","status":"publish","type":"post","link":"https:\/\/entertainment.runfyers.com\/index.php\/2025\/05\/20\/veo-3-can-generate-videos-and-soundtracks-to-go-along-with-them-techcrunch\/","title":{"rendered":"Veo 3 can generate videos \u2014 and soundtracks to go along with them | TechCrunch"},"content":{"rendered":"<div>\n<p id=\"speakable-summary\" class=\"wp-block-paragraph\">Google\u2019s latest video-generating AI model, Veo 3, can create audio to go along with the clips that it generates.<\/p>\n<p class=\"wp-block-paragraph\">On Tuesday <a href=\"https:\/\/techcrunch.com\/storyline\/google-i-o-2025-live-coverage-gemini-android-16-updates-and-more\/\" target=\"_blank\" rel=\"noopener\">during the Google I\/O 2025 developer conference<\/a>, Google unveiled Veo 3, which the company claims can generate sound effects, background noises, and even dialogue to accompany the videos it creates. 
Veo 3 also improves upon its predecessor, <a href=\"https:\/\/techcrunch.com\/2025\/04\/15\/googles-veo-2-video-generator-comes-to-gemini\/\" target=\"_blank\" rel=\"noopener\">Veo 2<\/a>, in terms of the quality of footage it can generate, Google says.<\/p>\n<p class=\"wp-block-paragraph\">Veo 3 is available beginning Tuesday in Google\u2019s Gemini chatbot app for subscribers to <a href=\"https:\/\/techcrunch.com\/2025\/05\/20\/google-ai-ultra-youll-have-to-pay-249-99-per-month-for-googles-best-ai\/\" target=\"_blank\" rel=\"noopener\">Google\u2019s $249.99-per-month AI Ultra plan<\/a>, where it can be prompted with text or an image.<\/p>\n<p class=\"wp-block-paragraph\">\u201cFor the first time, we\u2019re emerging from the silent era of video generation,\u201d Demis Hassabis, the CEO of Google DeepMind, Google\u2019s AI R&amp;D division, said during a press briefing. \u201c[You can give Veo 3] a prompt describing characters and an environment, and suggest dialogue with a description of how you want it to sound.\u201d<\/p>\n<p class=\"wp-block-paragraph\">The wide availability of tools to build video generators has led to such an explosion of providers that the space is becoming saturated. 
Startups including\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/09\/16\/runway-announces-an-api-for-its-video-generating-models\/\" target=\"_blank\" rel=\"noopener\">Runway<\/a>,\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/02\/28\/lightricks-announces-ai-powered-filmmaking-studio-to-help-creators-visualize-stories\/\" target=\"_blank\" rel=\"noopener\">Lightricks<\/a>, Genmo,\u00a0<a href=\"https:\/\/techcrunch.com\/2023\/11\/28\/pika-labs-which-is-building-ai-tools-to-generate-and-edit-videos-raises-55m\/\" target=\"_blank\" rel=\"noopener\">Pika<\/a>,\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/04\/03\/former-snap-ai-chief-launches-higgsfield-to-take-on-openais-sora-video-generator\/\" target=\"_blank\" rel=\"noopener\">Higgsfield<\/a>, Kling,\u00a0and\u00a0<a href=\"https:\/\/venturebeat.com\/ai\/luma-ai-releases-ray2-generative-video-model-with-fast-natural-motion-and-better-physics\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Luma<\/a>, as well as tech giants such as\u00a0<a href=\"https:\/\/techcrunch.com\/2025\/02\/28\/openai-plans-to-bring-soras-video-generator-to-chatgpt\/\" target=\"_blank\" rel=\"noopener\">OpenAI<\/a> and Alibaba, are releasing models at a fast clip. In many cases, little distinguishes one model from another.<\/p>\n<p class=\"wp-block-paragraph\">Audio output stands to be a big differentiator for Veo 3, if Google can deliver on its promises. 
AI-powered sound-generating tools <a href=\"https:\/\/techcrunch.com\/2024\/06\/05\/stability-ai-releases-a-sound-generator\/\" target=\"_blank\" rel=\"noopener\">aren\u2019t<\/a> <a href=\"https:\/\/techcrunch.com\/2024\/05\/31\/elevenlabs-debuts-ai-powered-tool-to-generate-sound-effects\/\" target=\"_blank\" rel=\"noopener\">novel<\/a>, nor are models to create <a href=\"https:\/\/newatlas.com\/technology\/microsoft-vasa1-ai-video-audio-photo\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">video<\/a> <a href=\"https:\/\/venturebeat.com\/ai\/pika-adds-generative-ai-sound-effects-to-its-video-maker\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">sound<\/a> <a href=\"https:\/\/www.geekwire.com\/2024\/ai-startups-new-tool-creates-music-for-video-footage-without-requiring-text-prompts\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">effects<\/a>. But Veo 3 uniquely can understand the raw pixels from its videos and sync generated sounds with clips automatically, per Google.<\/p>\n<p class=\"wp-block-paragraph\">Veo 3 was likely made possible by <a href=\"https:\/\/techcrunch.com\/2024\/06\/17\/deepminds-new-ai-generates-soundtracks-and-dialog-for-videos\/\" target=\"_blank\" rel=\"noopener\">DeepMind\u2019s earlier work<\/a> in \u201cvideo-to-audio\u201d AI. Last June, DeepMind revealed that it was developing AI tech to generate soundtracks for videos by training a model on a combination of sounds and dialogue transcripts as well as video clips.<\/p>\n<p class=\"wp-block-paragraph\">DeepMind won\u2019t say exactly where it sourced the content to train Veo 3, but YouTube is a strong possibility. 
Google owns YouTube, and DeepMind\u00a0<a href=\"https:\/\/techcrunch.com\/2024\/05\/14\/google-veo-a-serious-swing-at-ai-generated-video-debuts-at-google-io-2024\/\" target=\"_blank\" rel=\"noopener\">previously<\/a>\u00a0told TechCrunch that Google models like Veo \u201cmay\u201d be trained on some YouTube material.<\/p>\n<p class=\"wp-block-paragraph\">To mitigate the risk of deepfakes, DeepMind says it\u2019s using its proprietary watermarking technology, SynthID, to embed invisible markers into frames Veo 3 generates.\u00a0<\/p>\n<p class=\"wp-block-paragraph\">While companies like Google pitch Veo 3 as powerful creative tools, many artists are understandably wary of them \u2014 they threaten to upend entire industries. A 2024\u00a0<a href=\"https:\/\/animationguild.org\/wp-content\/uploads\/2024\/01\/Future-Unscripted-The-Impact-of-Generative-Artificial-Intelligence-on-Entertainment-Industry-Jobs-pages-1.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">study<\/a>\u00a0commissioned by the Animation Guild, a union representing Hollywood animators and cartoonists,\u00a0estimates that more than 100,000 U.S.-based film, television, and animation jobs will be disrupted by AI by 2026.<\/p>\n<p class=\"wp-block-paragraph\">Google also today rolled out new capabilities for Veo 2, including a feature that lets users give the model images of characters, scenes, objects, and styles for better consistency. 
The latest Veo 2 can understand camera movements like rotations, dollies, and zooms, and it allows users to add or erase objects from videos or broaden the frames of clips to, for example, turn them from portrait into landscape.<\/p>\n<p class=\"wp-block-paragraph\">Google says that all of these new Veo 2 capabilities will come to its Vertex AI API platform in the coming weeks.<\/p>\n<\/div>\n<p><br \/>\n<br \/><a href=\"https:\/\/techcrunch.com\/2025\/05\/20\/googles-veo-3-can-generate-videos-and-soundtracks-to-go-along-with-them\/\" target=\"_blank\" rel=\"noopener\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Google\u2019s latest video-generating AI model, Veo 3, can create audio to go along with the clips that it generates. On Tuesday during the Google I\/O 2025 developer conference, Google unveiled Veo 3, which the company claims can generate sound effects, background noises, and even dialogue to accompany the videos it creates. Veo 3 also improves [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":169591,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[14],"tags":[],"class_list":{"0":"post-169590","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-tech"},"_links":{"self":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/169590","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/comments?post=169590"}],"version-history":[{"count":0,"href":"https:\/\/entertai
nment.runfyers.com\/index.php\/wp-json\/wp\/v2\/posts\/169590\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media\/169591"}],"wp:attachment":[{"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/media?parent=169590"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/categories?post=169590"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/entertainment.runfyers.com\/index.php\/wp-json\/wp\/v2\/tags?post=169590"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}