Future Thinker @Benji
Stable Diffusion ComfyUI Workflow - Using Multimodal Pipeline To Create AI Video
We'll be exploring how to create stunning AI videos with the help of a multimodal pipeline.
As we all know, AI video models have been evolving rapidly, and we now have companies like Kling AI, Luma AI, and the latest Gen-3 AI video model from RunwayML. Although Runway Gen-3 is currently available only for text-to-video, image-to-video capabilities are just around the corner.
If you need to set up local Ollama to host your LLM with ComfyUI, here are the previous tutorials:
1 - ua-cam.com/video/EQZWyn9eCFE/v-deo.html
2 - ua-cam.com/video/yR2Y9G71w6E/v-deo.html
For Freebies : www.patreon.com/posts/stable-diffusion-107319629
Goodies For Patreon Supporters: www.patreon.com/posts/create-story-llm-107317948
In this workflow, I'll show you how to transform natural-language content into Stable Diffusion text prompts, which can be used to generate images for each scene. These images serve as the initial keyframes for the AI video generator. We'll be using a powerful LLaMA 3 large language model fine-tuned for SD prompting to transform the storyline into text prompts for image generation in Stable Diffusion.
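As a rough illustration of that story-to-prompt step, here is a minimal sketch of sending a scene description to a locally hosted Ollama model and reading back an SD text prompt. The model name (`llama3`) and the instruction wording are my own assumptions, not the exact ones used in the workflow.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

# Illustrative instruction - not the exact prompt used in the video's workflow.
INSTRUCTION = (
    "Convert the following scene description into a comma-separated "
    "Stable Diffusion text prompt. Reply with the prompt only.\n\nScene: "
)

def build_request(scene: str, model: str = "llama3") -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({
        "model": model,
        "prompt": INSTRUCTION + scene,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")

def scene_to_sd_prompt(scene: str, model: str = "llama3") -> str:
    """Send the scene to the local Ollama server and return its response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(scene, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()

# Usage (requires a running Ollama server with the model pulled):
# scene_to_sd_prompt("A lone astronaut walks through a neon-lit city at night.")
```

The returned string can then be fed straight into a CLIP Text Encode node as the positive prompt for the scene's keyframe.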
Here's what's on the to-do list:
Connect a database to query the story background setting and its SD prompt
Character settings and their SD prompts
Store story contents in a database table structure, so each scene can be processed automatically
Connect ComfyUI as a client app to AI video providers' APIs (if they open them up for connections)
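The database items above could look something like this minimal SQLite sketch. Table and column names are my own illustration of the idea (story background + prompt, scenes processed in order), not the workflow's actual schema.

```python
import sqlite3

# Illustrative schema only - the actual table structure in the workflow may differ.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE story (
    id         INTEGER PRIMARY KEY,
    title      TEXT NOT NULL,
    background TEXT,          -- story background setting
    bg_prompt  TEXT           -- its Stable Diffusion prompt
);
CREATE TABLE scene (
    id        INTEGER PRIMARY KEY,
    story_id  INTEGER NOT NULL REFERENCES story(id),
    ordinal   INTEGER NOT NULL,   -- scene order, so scenes run in sequence
    content   TEXT NOT NULL,      -- natural-language scene text
    sd_prompt TEXT                -- generated SD prompt for the keyframe
);
""")
conn.execute("INSERT INTO story (id, title, background) VALUES (1, 'Demo', 'a small coastal town')")
conn.execute("INSERT INTO scene (story_id, ordinal, content) VALUES (1, 1, 'Two friends meet at the pier.')")

# Each scene row can then be fetched in order and handed to the LLM prompt step.
rows = conn.execute(
    "SELECT ordinal, content FROM scene WHERE story_id = 1 ORDER BY ordinal"
).fetchall()
print(rows)  # [(1, 'Two friends meet at the pier.')]
```

Iterating over `scene` rows in `ordinal` order is what would let the workflow process each scene automatically, as the to-do list describes.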
Don't forget to hit that subscribe button and turn on the notification bell, so you won't miss any of my upcoming tutorials on AI video generation and more.
If you like tutorials like this, you can support our work on Patreon:
www.patreon.com/aifuturetech/
Discord : discord.gg/BTXWX4vVTS
Views: 1,373

Videos

AI Diffusion Models From Static to Dynamic - AutoStudio, MOFA-Video, and ViewDiff
1.9K views · 1 day ago
In today's video, we are diving deep into the world of AI frameworks and models. We will be exploring three exciting and game-changing AI models: AutoStudio, MOFA-Video, and ViewDiff. These models are revolutionizing image generation, video animation, and 3D object creation. So, if you're ready to explore the cutting-edge technology behind these AI models, make sure to hit that subscribe button...
AI Video Story - 2 Good Friends Shattered Innocence
606 views · 1 day ago
Scenes created with Kling AI, and some with Stable Diffusion. I haven't edited much; I just tried to put a story into text-to-video and images. Next, I'll build a workflow to make characters consistent. I just wanted to test how the whole processing workflow can use AI tools to create a short story. If you like tutorials like this, you can support our work on Patreon: www.patreon.com/aifuturetech/ Discord ...
Stable Diffusion ComfyUI And Diffutoon Create AI Videos - Domo AI Alternative?
7K views · 1 day ago
Stable Diffusion Animation - ComfyUI And Diffutoon Create Deflickering Videos - Domo AI Alternative? We're diving into the world of Stable Diffusion's animation and exploring a fascinating new project called Diffutoon. This project takes video-to-video transformations to a whole new level by turning dance videos into anime or cartoon-style videos. Related Video: Stable Diffusion Video To Anime ...
ComfyUI With Florence 2 Vision LLM - This Is Not Just A Segmentation Model
7K views · 1 day ago
ComfyUI With Florence 2 Vision LLM In this video, I delve into a new LLM - Florence 2, an extraordinary vision foundation model developed by Microsoft. Join me as I discuss its features, demonstrate its capabilities, and guide you through the installation process. Florence 2: An Image-to-Text Prompt Large Language Model Florence 2 is trained with the massive FLD-5B dataset, making it one of the...
ComfyUI With Dense Diffusion - Better Control For Your AI Images
3.5K views · 1 day ago
How can we use ComfyUI and leverage Dense Diffusion for more control over your AI images? We're diving deep into Dense Diffusion, a powerful image generation model that allows for precise control and accuracy in allocating specific elements to regions within an image canvas. Dense Diffusion is not a new concept, but it has gained significant attention since its introduction in a previous video ...
Kling AI Video - The First Real Practical Demo On YouTube - Create AI Video On Mobile
3.5K views · 2 days ago
Kling AI Video - The First Real Practical Demo On YouTube - Create AI Video On Mobile. We delve into the world of Kuaishou Kling AI, a popular video model in mainland China. Join us as we discuss the hype surrounding this AI video model and showcase real practical demo results directly from our phone. If you've been keeping up with YouTube and Twitter, you've probably seen a lot of buzz about Kua...
RunwayML Gen-3 Alpha Is Coming To The New AI Video Model Battleground
1.4K views · 14 days ago
In this exciting video, we explore the cutting-edge world of AI video generation with RunwayML's groundbreaking model, Gen-3 Alpha. Prepare to be amazed as we delve into the features and capabilities of this high-fidelity and controllable video generation tool. If you like tutorials like this, you can support our work on Patreon: www.patreon.com/aifuturetech/ Discord : discord.gg/BTXWX4vVTS Gen-...
Luma AI Video Is About to Blow Your Mind - Alternative To Kling AI Video Model or Maybe Sora AI
1.8K views · 14 days ago
How To Use Stable Diffusion 3 - A Full Tutorial Guide And Review
6K views · 14 days ago
AI Models Are Getting Insanely Better! MotionFollower, Ouroboros3D, Kling, Claude 3, TRVTurbo
2.3K views · 21 days ago
Stable Diffusion ComfyUI Face Parsing Fine Tune Faces For AI Images
2.5K views · 21 days ago
Omost Canvas Code AI Image Generation - Installation Guide For WebUI and ComfyUI
2.9K views · 21 days ago
ToonCrafter - Is This Diffusion Model Really Changing The Industry? (An Honest Review)
2.3K views · 21 days ago
This Custom Node Allows Anyone To Become A ComfyUI Developer - Any Node With LLM
3K views · 28 days ago
How To Install A Lip Sync AI Talking Avatar In ComfyUI (The Easiest, Most Beginner-Friendly Way)
3.6K views · 28 days ago
This Diffusion Model Is Insanely Great! Instance Diffusion Creates Animation In ComfyUI
6K views · 1 month ago
Open-Source AI Video Framework Edits Video Styling With Consistency - AnyV2V
1.9K views · 1 month ago
How To Make Stable Diffusion Video To Anime Style (No Limitation By Discord App)
4.3K views · 1 month ago
Stable Diffusion eCommerce Accessory Virtual Try On Workflow Perfect Product Display
1.3K views · 1 month ago
How To Use Custom Trained Motion Lora In Stable Diffusion AnimateDiff
2.8K views · 1 month ago
How To Train Motion Lora Model For Stable Diffusion AnimateDiff
4.3K views · 1 month ago
Stable Diffusion XL Finally Got A Better LineArt-Style ControlNet Model - MistoLine
6K views · 1 month ago
How To Create Music Video With Stable Diffusion AnimateDiff Workflow
3.9K views · 1 month ago
Google Veo Video Generation AI Models Released - It Is Mind-blowing!
1.2K views · 1 month ago
IC Light Installation Guide In ComfyUI - Add Light Effect To AI Images & Animations
3.4K views · 1 month ago
Stable Diffusion ComfyUI & Suno AI Create An AI Music Video Under Our Control
4.2K views · 1 month ago
StoryDiffusion - The Future Of Comics And Videos Using AI?
2.2K views · 1 month ago
Stable Diffusion Create Facial Expressions For AI Images And Videos
4.3K views · 1 month ago
How To Use Stable Diffusion ComfyUI Workflows For The eCommerce Jewelry Niche
2.5K views · 1 month ago

COMMENTS

  • @AI_LookBook_Studio_AI · 1 day ago

  • @mordokai597 · 1 day ago

    i made a custom gpt for gpt4 that converts prompts/sentences to wd 1.4 tag prompts. it's called '(BooruKai_Prompter: 1.4)' "### Example Prompt Conversion: - **Original Sentence**: "A sci-fi cyberpunk heroine in an industrial area, wielding futuristic weapons." - **Token-Sparse Prompt**: "1girl, cyberpunk, weapon, industrial_area, futuristic, short_hair, blue_eyes, armor, neon, machinery""
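A toy version of that sentence-to-tags idea can be sketched in a few lines. The keyword map below is purely illustrative and has nothing to do with the commenter's actual custom GPT, which uses an LLM rather than a lookup table.

```python
# Tiny keyword-to-booru-tag map; entries are illustrative only.
TAG_MAP = {
    "heroine": "1girl",
    "cyberpunk": "cyberpunk",
    "industrial area": "industrial_area",
    "futuristic": "futuristic",
    "weapons": "weapon",
}

def sentence_to_tags(sentence: str) -> str:
    """Emit matching tags, in map order, for keywords found in the sentence."""
    s = sentence.lower()
    tags = [tag for key, tag in TAG_MAP.items() if key in s]
    return ", ".join(tags)

print(sentence_to_tags(
    "A sci-fi cyberpunk heroine in an industrial area, wielding futuristic weapons."
))
# 1girl, cyberpunk, industrial_area, futuristic, weapon
```

The real value of an LLM-based converter is handling phrasings a fixed keyword table would miss; this sketch only shows the output format.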

  • @hirotanuki5592 · 1 day ago

    The music is amazing, is it AI-made too?

  • @crazyleafdesignweb · 1 day ago

    Pretty cool idea, looking forward to it. The AutoStudio you talked about does similar image generation, but yours is built on a workflow that runs all of this. What backend SQL and code will you use for the next update?

    • @TheFutureThinker · 1 day ago

      Yup, something like that for the next update. As we talked about the Comfy API in a previous video, these things are doable.

  • @kalakala4803 · 1 day ago

    It can be integrated like LBX Studio, but with a better AI video model, not SVD. 🤭

  • @TheFutureThinker · 1 day ago

    For freebies: www.patreon.com/posts/stable-diffusion-107319629 Goodies for Patreon supporters: www.patreon.com/posts/create-story-llm-107317948 If you need to set up local Ollama to host your LLM with ComfyUI, here are the previous tutorials: 1 - ua-cam.com/video/EQZWyn9eCFE/v-deo.html 2 - ua-cam.com/video/yR2Y9G71w6E/v-deo.html

  • @RamonGuthrie · 1 day ago

    Hey, can you prompt the Florence 2 model on which parts of an image you want described? For example, describe the background only, or describe the person in detail only. Or are there better vision models for this?

    • @TheFutureThinker · 1 day ago

      Yes, this vision model can segment the background and then do captioning.

  • @hleet · 1 day ago

    That's nice, but I don't know how to feel about it. It's funny and disturbing at the same time. I mean, these are not real people; one day there might be some kind of ChatGPT-like actors with "LoRA models" attached to them that will "live" inside the machine to make their movies. An honestly disturbing future for AI and mankind.

    • @TheFutureThinker · 1 day ago

      I understand what you mean. The actors aren't like what you usually see in film.

  • @drucshlook · 2 days ago

    I'd love to have lightweight workflows to try these out.

  • @TheFutureThinker · 2 days ago

    ViewDiff: 3D-Consistent Image Generation - lukashoel.github.io/ViewDiff/
    MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model - myniuuu.github.io/MOFA_Video/ , github.com/MyNiuuu/MOFA-Video , huggingface.co/MyNiuuu/MOFA-Video-Hybrid
    AutoStudio: Crafting Consistent Subjects in Multi-turn Interactive Image Generation - howe183.github.io/AutoStudio.io/ , github.com/donahowe/AutoStudio

  • @peacetoall1858 · 2 days ago

    Wonder when we can get access to these models?

    • @TheFutureThinker · 2 days ago

      They have the GitHub projects already, but no one has made a ComfyUI node or any WebUI extension yet.

    • @peacetoall1858 · 2 days ago

      @@TheFutureThinker Looking forward to that

  • @reaperhammer · 2 days ago

    Cool... now I wonder about VRAM requirements for these tasks... any info on that?

    • @TheFutureThinker · 2 days ago

      I assume it's similar to Comfy or even lighter weight, because it's all SD models packed into a code pipeline.

  • @bgmspot7242 · 2 days ago

    Please provide links

  • @crazyleafdesignweb · 2 days ago

    Looks like AutoStudio can work with an AI video generator: create storytelling images, then img2vid, using Luma AI, Gen 3, Kling, or maybe Sora later.

    • @TheFutureThinker · 2 days ago

      You got it, Alex 😉👍 Experienced designers think fast.

  • @kalakala4803 · 2 days ago

    MOFA-Video, will they make a ComfyUI node for it?

  • @rickyneeter69 · 2 days ago

    How do you use inpainting to change something in an existing picture?

    • @TheFutureThinker · 2 days ago

      On the Load Image node, right-click; there's an Inpaint Editor. You can start from there.

    • @rickyneeter69 · 2 days ago

      @@TheFutureThinker Can't find the inpaint editor when right-clicking the Load Image node :(

  • @xdevx9623 · 2 days ago

    Wow, amazing! How did you get consistent characters?

    • @TheFutureThinker · 1 day ago

      Basically, for characters, I set up a character face and person images before generating the video. Check out the last video I did on the workflow to generate scenes.

  • @reaperhammer · 3 days ago

    Pretty good! Only a few morphs for a long video.

    • @TheFutureThinker · 3 days ago

      Yup, I want to test it. The next step should be finding a way to fix those melting faces and morphing objects.

  • @K-A_Z_A-K_S_URALA · 3 days ago

    Cool and simple...

  • @drucshlook · 3 days ago

    Very nice!

  • @Homopolitan_ai · 3 days ago

    👍

  • @MartinZanichelli · 3 days ago

    So from best to worst: 1) OpenAI Sora, 2) Kling AI, 3) Luma Labs, 4) Runway Gen-2 (this one very far behind). Runway Gen-3 I don't know how to rank yet.

    • @TheFutureThinker · 3 days ago

      Looking forward to trying Runway soon. The OG of AI video should have some surprises.

  • @timothywells8589 · 3 days ago

    Would love to be able to play with this, but alas, it seems it's not for us peasants 😭 I like the level of consistency in each scene, unlike Luma, which too often mutilates faces if they move even a fraction. But they did add start and stop frames, which is something, I guess.

    • @TheFutureThinker · 3 days ago

      You can try Dream Machine. Actually, I'm looking forward to RunwayML Gen-3; it looks good from their videos.

  • @eccentricballad9039 · 3 days ago

    Global warming is so bad those friends are melting and shifting like ice-cream

  • @user-pn6ey5dn4y · 3 days ago

    @benji - another great video. If you can nail the expressions and blinking, that would be amazing!

    • @TheFutureThinker · 3 days ago

      Thanks, that will be another workflow to play with the face part. ;)

    • @user-pn6ey5dn4y · 3 days ago

      @@TheFutureThinker You are the MASTER of suspense my friend haha

  • @enthuesd · 3 days ago

    Wonderful, thank you. No one else on YouTube is covering this.

  • @Kevlord22 · 3 days ago

    It killed my ComfyUI; it won't start anymore. I only tried the first node with the Manager. RIP.

  • @SavageBro. · 3 days ago

    The app is in English for me, how do I get Chinese?

  • @promptaganda · 4 days ago

    Using the spacepxl node, I'm getting strange polygons for all images I try to run region-to-segmentation on. The captioning is working correctly. Any ideas?

  • @vaporchickenwave6980 · 4 days ago

    I tried FastBlend on Automatic1111 and it's very, very slow. I tried increasing the batch, but it takes all my 24 GB of VRAM and 96 GB of RAM. What parameters are better for a 1920×1080 video?

  • @iamtoufick · 4 days ago

    Can 8 GB of VRAM handle this? 😅

    • @vaporchickenwave6980 · 4 days ago

      I have a 4090 with 24 GB and it doesn't handle it properly.

    • @TheFutureThinker · 4 days ago

      That's a waste of the 4090 😅

    • @iamtoufick · 4 days ago

      @@TheFutureThinker 😂😂

    • @vaporchickenwave6980 · 4 days ago

      @@TheFutureThinker What do you mean? I tried to use it; that plugin uses all my VRAM and is very slow!!

    • @TheFutureThinker · 4 days ago

      I'm not sure... On my 4090 setup it works; evidence is shown in the video. 😉

  • @Vashthareaper · 4 days ago

    Two hours and the smooth video node hasn't moved an inch, with vid2vid, a 60-frame cap, the LCM checkpoint and LoRA. Tried again and again.

    • @TheFutureThinker · 2 days ago

      Recent ComfyUI needs an update; I experienced that two days ago.

  • @NLPprompter · 5 days ago

    stupidity ai LOL... ROFL...

  • @LahiruBandara-iq8xd · 5 days ago

    Can I do this on an RTX 3060 GPU?

  • @cfcmoon1 · 5 days ago

    Fantastic work, as always.

  • @MrJgrez · 5 days ago

    I get an error at realistic lineart for some reason. Do you know how to resolve it? Thanks.

  • @RenjithRS · 6 days ago

    It works on StableSwarmUI.

    • @TheFutureThinker · 6 days ago

      Nice 👍 How is it in StableSwarm? Generation time, memory consumption?

    • @RenjithRS · 6 days ago

      It works fine on my 4 GB VRAM, 16 GB RAM laptop. Speed is around 3.5 it/s.

    • @TheFutureThinker · 6 days ago

      That's great.

  • @jacekb4057 · 6 days ago

    Hey man, any tutorial/workflow on your video-to-video method? Thanks in advance :)

  • @user-eh7vz4de4q · 7 days ago

    I watched the video and tried to follow along. I saw this error message on the Video Combine node: "Error occurred when executing VHS_VideoCombine: Cannot handle this data type: (1, 1, 512, 3), |u1". The only things I changed were the checkpoint, to an SD 1.5 one I have, and roughly the size of the sample video. The rest of the settings were the same, but an error occurred.

    • @user-eh7vz4de4q · 7 days ago

      Try changing the size as it was in the original example file.

    • @user-eh7vz4de4q · 7 days ago

      How can I fix this error?

    • @user-eh7vz4de4q · 6 days ago

      Fortunately, I changed the example file and it worked.

    • @TheFutureThinker · 6 days ago

      Great that you could troubleshoot it 👍
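For readers hitting the same error: this sketch is my own diagnosis, not the author's fix. The "(1, 1, 512, 3), |u1" in the message is a uint8 frame shape that the image library rejects; after dropping the leading batch dimension it is (1, 512, 3), i.e. a one-pixel-tall "image", which points to the video size being set to a degenerate value, matching the resolution fix in the thread above.

```python
def normalize_frame_shape(shape):
    """Strip leading singleton (batch) dimensions and check that what remains
    is a plausible H x W x C image shape (C of 1, 3, or 4 channels)."""
    dims = list(shape)
    while len(dims) > 3 and dims[0] == 1:
        dims.pop(0)  # drop batch dimensions of size 1
    if len(dims) == 3 and dims[0] > 1 and dims[1] > 1 and dims[2] in (1, 3, 4):
        return tuple(dims)
    return None  # not a usable image shape

# The shape from the error collapses to (1, 512, 3): height 1, so invalid.
print(normalize_frame_shape((1, 1, 512, 3)))    # None
print(normalize_frame_shape((1, 512, 512, 3)))  # (512, 512, 3)
```

A check like this before the combine step would turn the cryptic dtype error into a clear "your frames have a degenerate size" message.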

  • @huichan5140 · 7 days ago

    You'll get a blurry result if the stylized image's pattern (edges, contrast) differs from the original image, due to the blending algorithm; extra steps are needed to deblur.

  • @MaghrabyANO · 7 days ago

    Does Google Colab approve the usage of Stable Diffusion notebooks?

    • @TheFutureThinker · 1 day ago

      Why not? It's only limited VRAM for free accounts, and the Gradio library for the WebUI public link.

    • @MaghrabyANO · 1 day ago

      @@TheFutureThinker What do you mean?

  • @greenTech88 · 7 days ago

    Will a GTX 1650 Super work? 😢😢

  • @patagonia4kvideodrone91 · 7 days ago

    There are other nodes, I don't remember the name now, where you say "detect such-and-such a thing" and it generates the mask automatically, not a square but with its real contour.

    • @TheFutureThinker · 7 days ago

      Segment Anything

    • @promptaganda · 4 days ago

      @@TheFutureThinker Every time I've tried to add a prompt to a Segment Anything node it makes zero masks... any suggestions?

    • @TheFutureThinker · 4 days ago

      @@promptaganda What are your settings?

  • @digitalflick · 7 days ago

    Thanks! Where is the ComfyUI node/workflow?

  • @peacetoall1858 · 7 days ago

    Cool stuff. Would have been good for dance videos, but sadly it will get a copyright strike on YouTube.

    • @TheFutureThinker · 7 days ago

      That's why it's better to change the character and background and only use the movement.

  • @ysy69 · 7 days ago

    This is wonderful. I assume this is not SDXL-ready yet, right?

    • @TheFutureThinker · 7 days ago

      02:10 - they have SDXL listed. 😉 Img2img in ComfyUI and the Diffutoon script, good to go.

    • @ysy69 · 7 days ago

      @@TheFutureThinker really!!!! I will check it out!!! thank you!!!!

  • @crazyleafdesignweb · 7 days ago

    This is fun, one script can do an anime video. I'll try it.