
FurkanGozukara edited this page Oct 23, 2025 · 1 revision

ComfyUI Tutorial - How to Install ComfyUI on Windows, RunPod & Google Colab | Stable Diffusion SDXL


Updated for SDXL 1.0. #ComfyUI is a powerful and modular node-based Stable Diffusion GUI and backend. This UI lets you design and execute advanced Stable Diffusion pipelines using a graph/nodes/flowchart-based interface. In this video I will teach you how to install ComfyUI on PC, Google Colab (free) and RunPod. I will also show you how to install and use #SDXL with ComfyUI, including how to do inpainting and use LoRAs with ComfyUI. This is the Zero to Hero ComfyUI tutorial.

Source GitHub Readme File ⤵️

https://github.com/FurkanGozukara/Stable-Diffusion/blob/main/Tutorials/How-To-Use-ComfyUI-On-Your-PC-On-RunPod-On-Colab-With-SDXL.md

Automatic RunPod Installer Script File ⤵️

https://www.patreon.com/posts/runpod-comfyui-86062569

Automatic Windows Installer Script File ⤵️

https://www.patreon.com/posts/92013455

Our Discord server ⤵️

https://bit.ly/SECoursesDiscord

If I have been of assistance to you and you would like to show your support for my work, please consider becoming a patron on 🥰 ⤵️

https://www.patreon.com/SECourses

Technology & Science: News, Tips, Tutorials, Tricks, Best Applications, Guides, Reviews ⤵️

https://www.youtube.com/playlist?list=PL_pbwdIyffsnkay6X91BWb9rrfLATUMr3

Playlist of #StableDiffusion Tutorials, Automatic1111 and Google Colab Guides, DreamBooth, Textual Inversion / Embedding, LoRA, AI Upscaling, Pix2Pix, Img2Img ⤵️

https://www.youtube.com/playlist?list=PL_pbwdIyffsmclLl0O144nQRnezKlNdx3

00:00:00 Introduction to the Zero to Hero ComfyUI tutorial

00:01:26 How to install ComfyUI on Windows

00:02:15 How to update ComfyUI

00:02:55 How to install Stable Diffusion models into ComfyUI

00:03:14 How to download Stable Diffusion models from Hugging Face

00:04:08 How to download Stable Diffusion XL (SDXL)

00:05:17 Where to put downloaded VAE and Stable Diffusion model checkpoint files in ComfyUI installation

00:06:07 How to start / run ComfyUI after installation

00:06:30 Start using ComfyUI - explanation of nodes and everything

00:07:52 How to add a custom VAE decoder to ComfyUI

00:08:22 Image saving and saved image naming convention in ComfyUI

00:08:44 Queue system of ComfyUI - best feature

00:09:48 How to save workflow in ComfyUI

00:10:07 How to use generated images to load workflow

00:10:54 How to use SDXL with ComfyUI

00:13:29 How to batch add operations to the ComfyUI queue

00:13:57 How to generate multiple images at the same time

00:15:01 File name prefixes of generated images

00:15:22 SDXL base image vs refiner improved image comparison

00:15:49 How to disable refiner or nodes of ComfyUI

00:16:30 Where you can find shorts of ComfyUI

00:17:18 How to re-enable disabled nodes

00:17:38 How to use inpainting with SDXL with ComfyUI

00:20:43 How to use SDXL refiner as the base model

00:20:57 How to use LoRAs with SDXL

00:23:06 How to see which part of the workflow ComfyUI is processing

00:23:48 How to learn more about how to use ComfyUI

00:24:47 Where is the ComfyUI support channel

00:25:01 How to install and use ComfyUI on a free Google Colab

00:28:10 How to download SDXL model into Google Colab ComfyUI

00:30:33 How to use ComfyUI with SDXL on Google Colab after the installation

00:32:45 Testing out SDXL on a free Google Colab

00:33:40 How to use SDXL on a low-VRAM machine

00:34:10 How to download all images generated on Google Colab

00:36:18 How to install and use ComfyUI (latest version) on RunPod including SDXL

00:37:19 Where to learn how to use RunPod

00:38:40 Instructions to the manual installation of ComfyUI on a RunPod

00:41:52 How to start ComfyUI after the installation

00:43:19 How to download generated images from RunPod very quickly with runpodctl

00:44:06 How to download SDXL on RunPod manually

Thumbnail artwork by moldadorite on DeviantArt

Video Transcription

  • 00:00:00 Greetings everyone. In this video, I will show you how to install and use ComfyUI on your

  • 00:00:06 computer. Moreover, I will explain most important fundamentals of ComfyUI. Then I will show you how

  • 00:00:13 you can use SDXL with ComfyUI without refiner and with refiner. Then I will show how you can

  • 00:00:21 use SDXL with LoRAs like you are seeing right now. This is a LoRA that is trained on myself.

  • 00:00:28 Hopefully, my next tutorial will be about this: how to train LoRAs with SDXL. Then I will show how

  • 00:00:35 you can use inpainting with SDXL as you are seeing right now. Then I will show how to install and use

  • 00:00:42 ComfyUI on a free Google Colab with SDXL. Then I will show you how you can install and use ComfyUI

  • 00:00:51 on RunPod with SDXL support as well. So I have prepared a very detailed GitHub readme file. All

  • 00:00:59 of the commands and links that you are going to need will be posted here. Moreover, if something

  • 00:01:06 gets changed in the future, I will update this readme file so you will always have up-to-date

  • 00:01:14 instructions. The link of this file will be in the description of the video and also comment section

  • 00:01:20 of the video. So let's begin with installing on our PC. PC installation is very easy. Just click

  • 00:01:28 this link. In here you will see releases page and direct download link. I will click direct download

  • 00:01:35 link. This is a 7-zip file. So the download is completed. You see it is here. Let's show

  • 00:01:42 in folder. Cut the downloaded file. It is 1.4 GB. Move into any folder where you want to install. I

  • 00:01:51 will install it inside my F drive. Right click and extract here. Since this is a 7-zip file,

  • 00:01:58 you need WinRAR or 7-zip to extract it. I have put the WinRAR link here. Just open it and download

  • 00:02:05 WinRAR x64 version and install it. The files are extracted. This is all you need to do for

  • 00:02:13 using ComfyUI. But before using it, let's also update. So enter inside update folder and click

  • 00:02:21 update ComfyUI.bat file and it has updated. This readme file is important. It gives you some of

  • 00:02:30 the information. You should use update of ComfyUI with dependencies only if you are having issues.

  • 00:02:37 Moreover, if you have NVIDIA GPU, run with NVIDIA GPU. If you don't have NVIDIA GPU, run with CPU.

  • 00:02:45 I don't know if it supports AMD GPUs. So I will begin using ComfyUI with NVIDIA GPU. However,

  • 00:02:53 we don't have any checkpoints yet. So go inside ComfyUI. Go inside models. Here you will see

  • 00:03:00 checkpoints. We need to put checkpoints here. I have added Realistic Vision version 4 direct

  • 00:03:05 download link. So let's click and download it. You see the download started. I will also use the best

  • 00:03:11 VAE file. Let's also click and download it. You can download models from CivitAI or from Hugging

  • 00:03:18 Face. I prefer to use Hugging Face for downloading models. When you click models, it will list

  • 00:03:24 your models. In here, for example, go to the Stable Diffusion 1.5 version. You will see a label like

  • 00:03:30 this, Stable Diffusion, and when you click it, it will list all of the Stable Diffusion models,

  • 00:03:36 trending first by default, like this. You can also sort by other criteria. I don't prefer CivitAI

  • 00:03:43 because it is extremely saturated. Therefore, I prefer downloading my models from Hugging Face.

  • 00:03:50 And let's say you want to download DreamLike photo real. Enter here, go to files and versions in here

  • 00:03:57 look for the safetensors file and download the biggest safetensors files usually. For downloading

  • 00:04:04 click this link and the download will start. But the main purpose of this tutorial is for

  • 00:04:09 Stable Diffusion x large. So we need to download Stable Diffusion x large model. How you are going

  • 00:04:15 to do it is that you need to have a Hugging Face account. So if you don't have an account, click

  • 00:04:20 here and join. If you have an account, click here and login. After that, this is really important.

  • 00:04:25 Currently, the Stable Diffusion x large version is only available as research purposes. So you need

  • 00:04:32 to open both of these links and accept their terms and services. Then click files and versions here.

  • 00:04:40 And then click this download icon to download SDXL base version. Also download the refiner version

  • 00:04:49 from files and versions you see SDXL refiner version. Just click and download. Let me show you

  • 00:04:56 the currently the downloading files. SDXL refiner is 5.7 gigabytes. SDXL safetensors base model is

  • 00:05:05 12.9 gigabytes. We are also downloading Dream Like photo two gigabytes. This is pruned version. Our

  • 00:05:12 VAE file is downloaded. Also Realistic Vision version 4 is being downloaded with 4 gigabytes.
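The manual downloads above can also be scripted. Below is a minimal Python sketch (not from the video) using only the standard library; the URL, destination folder, and token are placeholders you would substitute, and the optional token adds the Authorization header that gated repositories such as SDXL require:

```python
import os
import urllib.request
from typing import Optional

def download_model(url: str, dest_dir: str, token: Optional[str] = None) -> str:
    """Download a checkpoint file into a ComfyUI models folder.

    url, dest_dir, and token are placeholders -- substitute your own
    Hugging Face file URL, ComfyUI path (e.g. .../ComfyUI/models/checkpoints),
    and, for gated models, a Hugging Face access token.
    """
    os.makedirs(dest_dir, exist_ok=True)
    filename = url.rstrip("/").split("/")[-1]
    dest = os.path.join(dest_dir, filename)
    request = urllib.request.Request(url)
    if token:  # gated repos (e.g. SDXL at release) need an auth header
        request.add_header("Authorization", f"Bearer {token}")
    with urllib.request.urlopen(request) as response, open(dest, "wb") as out:
        out.write(response.read())
    return dest
```

For public files, omit the token; for gated files, pass a Hugging Face access token with read scope, as the video describes later.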

  • 00:05:17 All downloads are completed. Let's first copy the VAE file. This VAE file is necessary for Stable

  • 00:05:26 Diffusion 1.5 version. So I will cut it, move into our installation folder, which is here, ComfyUI,

  • 00:05:35 ComfyUI inside here models, inside here VAE and paste it there. Then our model files are also

  • 00:05:42 here. Let's select all of them and cut them. Then move to the models, move into checkpoints. This is

  • 00:05:48 where you need to put the model checkpoint files. Either they are safetensors or either they are

  • 00:05:53 ckpt files. If you have a LoRA file, then you need to put the LoRAs here. If you have Hyper Networks,

  • 00:05:59 you need to put them here. If you have Embeddings, you need to put them here. This is also supporting

  • 00:06:04 Diffusers as well. Then you need to put them here. Once you have put the necessary model files, just

  • 00:06:11 run with Nvidia GPU. It won't install anything and it will start this UI immediately. So let me

  • 00:06:20 clear it. Okay. When you clear the UI, it is like this. This is a node based UI. It is a little bit

  • 00:06:27 hard to use if you are accustomed to Automatic1111 web UI. So click load default and it will load the

  • 00:06:35 default workflow. In the default workflow you see our base model here. Currently the Realistic Vision

  • 00:06:42 version 4 is selected. This is SD1.5 version based model. It is able to generate very cool images

  • 00:06:50 at 768 by 768, so I changed the resolution here. This is the default text that it comes with. So

  • 00:06:59 this is our seed. Based on seed, the generated image will change control after generate so it

  • 00:07:05 will change the seed every time you generated an image. Number of steps to generate image. The CFG

  • 00:07:12 value to generate image. Sampler name. You can see the samplers here. For Stable Diffusion 1.5

  • 00:07:18 models I prefer Euler a as the sampler. Scheduler normal. Denoise. Normally we use denoise for

  • 00:07:26 image to image, but in text to image we are also using it because of the structure of the Stable

  • 00:07:33 Diffusion model, but when we use text to image, it is by default 1. Why? Because we are turning

  • 00:07:40 the latent noise into a full image, therefore denoise 1 is necessary. It is going to use this

  • 00:07:46 VAE Decode and the VAE is provided from the model itself. However, if you want to use the VAE that

  • 00:07:54 you have downloaded so right click here. Add node. In here you will see many options. So we are going

  • 00:08:02 to use loaders. Load VAE. In here you see our downloaded VAE is selected. So all we need to

  • 00:08:09 do is change this VAE to here like this. So VAE Decode will use the VAE we selected. You can of

  • 00:08:18 course always use the model embedded VAE but if you want to use specific VAE this is the way. So

  • 00:08:24 save image. This is going to save our image. The save file naming is pretty different. So you need

  • 00:08:32 to define a prefix if you wish. Let's define as Realistic Vision. So all of the images we generate

  • 00:08:40 will have the file name prefix of Realistic Vision. The very cool thing of Comfy web UI

  • 00:08:48 is the queue system. I like it very much. So let's queue this. It is added to the queue. By the way,

  • 00:08:54 if you want to increase the batch size, you can increase it here. So let's

  • 00:08:59 queue another one. So you see now we have another queue. Let's say I also want to test this sampler

  • 00:09:04 name queue it. Let's say you want to test the cfg queue it. Let's say you want to test steps

  • 00:09:10 25 queue it. You see they are all being added to the queue. Then you can see the queue here. This

  • 00:09:17 is very cool. You can change the settings, you can change the prompt like fast car added to queue.

  • 00:09:23 You see you can do anything you wish. Add to the queue and everything will be processed according

  • 00:09:28 to the queue. You see the previews are displayed here. I think we can expand this. Let's expand

  • 00:09:35 it. You see when you expand it you will see them bigger. You can also zoom in and zoom out. For

  • 00:09:41 zooming in and for zooming out I keep pressing the left control button on my keyboard and I

  • 00:09:47 am using my mouse wheel. So let's say you want to save this workflow. Here you can just click save,

  • 00:09:53 give it a name as Realistic Vision for example. Click ok. The file will be saved as a json file.
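As a side note, a queue can also be driven programmatically: ComfyUI exposes a small HTTP API (by default on http://127.0.0.1:8188), and its /prompt endpoint accepts a workflow in API format (note this differs from the UI-format JSON saved above; the UI can export API format separately). A hedged sketch that only builds the request, assuming the default host and port:

```python
import json
import urllib.request

def queue_workflow(workflow: dict, host: str = "http://127.0.0.1:8188") -> urllib.request.Request:
    """Build a POST request that queues an API-format workflow.

    Assumes a local ComfyUI server with default settings; send it with
    urllib.request.urlopen(request) while the server is running.
    """
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        f"{host}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```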

  • 00:09:59 Then you can load this json file. You can share it. It is only 6 kilobytes. However, this is

  • 00:10:06 not also necessary. The files are saved inside ComfyUI folder inside output. So all of the images

  • 00:10:14 generated will be saved here and these images have metadata of your configuration. So you can

  • 00:10:22 drag and drop and load all of the workflow by using these images. This is really convenient and

  • 00:10:29 these are the generated images. For example let's clear our workflow. Let's drag and drop one of

  • 00:10:36 the images, and you see all of the workflow of that image is loaded, with its seed value. So basically

  • 00:10:44 when we queue again, we should get the same image generated again. All of the settings are here.
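This drag-and-drop loading works because ComfyUI embeds the workflow JSON in the PNG's text metadata. Here is a standard-library sketch (not from the video) that walks the PNG chunk structure and pulls out those text entries; to my knowledge the "workflow" keyword is what ComfyUI uses for the UI workflow:

```python
import struct

def read_png_text(path: str) -> dict:
    """Extract tEXt chunks (keyword -> value) from a PNG file.

    ComfyUI stores generation metadata under keywords such as 'workflow'
    and 'prompt'; this walks the PNG chunk layout with the stdlib only.
    """
    chunks = {}
    with open(path, "rb") as f:
        assert f.read(8) == b"\x89PNG\r\n\x1a\n", "not a PNG file"
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            length, ctype = struct.unpack(">I4s", header)
            data = f.read(length)
            f.read(4)  # skip the CRC
            if ctype == b"tEXt":
                keyword, _, value = data.partition(b"\x00")
                chunks[keyword.decode("latin-1")] = value.decode("latin-1")
            if ctype == b"IEND":
                break
    return chunks
```

For example, `json.loads(read_png_text("ComfyUI_00001_.png")["workflow"])` should recover the workflow dict, assuming a default-named ComfyUI output file.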

  • 00:10:50 Selected model and selected VAE and everything. But how are you going to use Stable Diffusion XL? You can

  • 00:10:58 add the nodes yourself one by one, but it is really hard. There is a learning curve and

  • 00:11:03 examples of ComfyUI are shared here. You can open this link and you will see the examples here, but

  • 00:11:11 I also shared some of the workflows in my GitHub file. So SDXL_1 is the base file that has both

  • 00:11:20 refiner and base model. How you are going to use it. Right click and save link as. It will download

  • 00:11:26 the image. Let's download it as sdxl111. Okay, it is saved, then drag and drop into your ComfyUI and

  • 00:11:34 our Stable Diffusion XL (SDXL) workflow is loaded. So this may look a little bit confusing

  • 00:11:42 and hard to understand. And yes, it is like that, but when you look all of the elements one by one,

  • 00:11:48 you can understand it actually. It begins with base latent image which is a noise. By default,

  • 00:11:56 SDXL supports 1024 by 1024. Therefore, our generated image settings are set here as a latent

  • 00:12:06 noise. Then we enter our prompts. So this is the prompt that I have decided. Masterpiece, realistic

  • 00:12:12 photo expensive sports car. You can type anything. This is our negative prompts. And here our models

  • 00:12:18 are selected. We have refiner you see SDXL refiner and we have base model SDXL base model. Then we

  • 00:12:27 have our samplers. For example, this is a sampler of the base model. These are the settings that

  • 00:12:33 I find better. Sampler name dpmpp_2s_ancestral. Number of steps 30. CFG 7. Then we have refiner.
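For reference, here are the base sampler settings just mentioned, plus the refiner values the video gives next (denoise 0.25, 15 steps), written out in the shape a KSampler node takes in an API-format workflow export. Illustrative values only; connection inputs such as model and latent are omitted, and the refiner's CFG is assumed equal to the base's:

```python
# Illustrative KSampler settings from the video, in the shape ComfyUI's
# API-format export uses (connection inputs like model/positive omitted).
base_sampler = {
    "class_type": "KSampler",
    "inputs": {
        "sampler_name": "dpmpp_2s_ancestral",
        "steps": 30,
        "cfg": 7.0,
        "scheduler": "normal",
        "denoise": 1.0,   # text-to-image: start from pure latent noise
    },
}
refiner_sampler = {
    "class_type": "KSampler",
    "inputs": {
        "sampler_name": "dpmpp_2s_ancestral",
        "steps": 15,
        "cfg": 7.0,       # assumption: same CFG as the base sampler
        "scheduler": "normal",
        "denoise": 0.25,  # image-to-image: refine 25% of the way
    },
}
```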

  • 00:12:43 Now the refiner is image to image we know from Automatic1111 web UI. So it will get the base

  • 00:12:51 image generated by the base model. Then it will improve it. So we have denoise 25% here. You

  • 00:13:00 need to change this and see which one is working best for you. Number of refining steps by default:

  • 00:13:06 15 the seed and the sampler. So let's generate several images. First I queue the first item,

  • 00:13:13 then let's make the denoise 30% create the second item, then let's make the denoise 20% queued the

  • 00:13:21 third item. You see I can add everything to the queue very quickly and it will do everything

  • 00:13:27 automatically with the order. You can also see the queue from view queue here. And let's say you want

  • 00:13:34 to generate 100 images with the current settings. So what you need. Click these extra options and

  • 00:13:40 set the batch count so it will generate the number of times your settings. And let's make

  • 00:13:48 this as 5 and queue. So you see it has added this workflow 5 times to the queue. This batch size

  • 00:13:55 is one by one executed. If you want to generate multiple images at the same time, then you need

  • 00:14:01 to increase this batch size here. Currently, while recording video with NVIDIA broadcast

  • 00:14:07 open and also some other applications, this is my VRAM usage. It is working very well. If you have

  • 00:14:14 8 gigabyte VRAM I think you should be able to use maybe even with 6 gigabyte. I haven't tested it,

  • 00:14:20 but you should be able to use SDXL very well on your computer. The speed is not that great.

  • 00:14:27 Currently it is 1.5 it per second on an RTX 3090. This is when the base model is generating. Let me

  • 00:14:36 zoom in so 1.5 it per second when generating base image and when doing refiner it is even slower.

  • 00:14:45 1.4 it per second. Automatic1111 is also working on implementing SDXL and I think it will be much

  • 00:14:53 faster than this. So our images are being saved here with refiner output and also base output.

  • 00:15:01 You see you define the file name prefix from here. Whatever you define will be used as a

  • 00:15:08 prefix of the generated images. And image quality of SDXL is amazing. You see. Here another image

  • 00:15:16 you see. It even has the reflection. It is not very correct but it is here. It is amazing. So

  • 00:15:23 the base image is here and here we see the refined image. You see it is amazing. A lot of details and

  • 00:15:30 quality has been added to the image. It is much more clear, the blueness has gone. It is looking

  • 00:15:36 much better than the base image. You can also delete the pending queue. Or you can cancel the

  • 00:15:42 current running queue from here and let's say you want to disable refiner. How you can do that. You

  • 00:15:51 need to move these icons here as you are seeing so you need to open some space. Okay let's move

  • 00:15:58 them. Let's check for it. Let's move them. Okay we are opening some space. So how refiner is

  • 00:16:05 working. You see this is the sampler of refiner. So what we need to do is we need to cancel this

  • 00:16:14 sampler to be working. How are we going to do that? We can disable this refiner, and we

  • 00:16:21 can disable this sampler. How did I do it? While pressing left Ctrl, hit the M key and it will disable

  • 00:16:28 the node. So where you can find these shortcuts. I added a shortcut link here. Open it. You will

  • 00:16:36 see all of the shortcuts. You see mute, unmute selected nodes, select all nodes, load workflow,

  • 00:16:42 save workflow, and other shortcuts are all shared here. And since I have disabled these two nodes,

  • 00:16:48 the refiner will not be executed. But now it shows us an error because this node is supposed to be

  • 00:16:56 executed. Therefore, we need to also disable it. And let's queue again. Now it is telling me that

  • 00:17:01 I also need to disable this node and now all of the nodes are disabled. It will only generate the

  • 00:17:09 base image. Okay, now it should work very well. The base image is selected. We can queue more. It

  • 00:17:15 is queuing 5 times because we have batch count 5. So this is how you disable nodes. To enable

  • 00:17:22 them back, select them and hit Ctrl+M and they will get enabled again. What about if you want

  • 00:17:29 to use LoRA or inpainting with SDXL with ComfyUI. I also shared 4 other workflows for them. Let's

  • 00:17:39 begin with SDXL inpaint. So right click. Save link as download as you wish. Let's say SDXL in paint.

  • 00:17:49 Then let's go back to our UI. Cancel all of the pending operations, just drag and drop the image

  • 00:17:56 here and you will see the workflow is loaded. It is not very organized. This is what it came up

  • 00:18:02 with. So this is the base model, and how are you going to do inpainting? You need to choose your file to

  • 00:18:09 inpaint and let's pick a file. Let's go to our ComfyUI generated images inside output folder.

  • 00:18:17 Let's look at them bigger size. Okay let's try refined image. I will use this one. So this is

  • 00:18:25 the image that we are going to inpaint. Which part you may be wanting to inpaint. So right

  • 00:18:31 click here and you will see open in mask editor and it will load masking screen. With using mouse

  • 00:18:39 wheel you can increase or decrease the size of the inpainting circle. You can also change

  • 00:18:45 the thickness from here. Let's mask this area. If you hit clear it will clear the mask. Okay,

  • 00:18:51 I will just mask this area and save to node, so you see now we have a masked area. You can also upload

  • 00:18:58 mask I think but I didn't look for it yet so I will use this masked image. Then we need to

  • 00:19:05 change our prompt. So currently this is a positive prompt. Let's type something there. For example

  • 00:19:11 an emblem of Lamborghini on a car. I don't know how it will work. This is a pretty small area. We

  • 00:19:19 have the settings here. Okay now denoise is very important. Based on the noise, it will change the

  • 00:19:25 image. I don't know which one would work best you need to test. Moreover, there is one more

  • 00:19:31 important setting here. This is grow mask by. This is like the inpainting padding pixels of the

  • 00:19:38 Stable Diffusion Automatic1111 web UI. So I make this 64 pixels. You can also change it

  • 00:19:46 and you can give a new name here. For example inpainted image and okay and hit queue. Now it

  • 00:19:56 added 5 queue. By the way since we are using fixed seed they will be all same. So let's make this as

  • 00:20:03 different. Let's change this to randomize and hit queue again. When you click here you will see all

  • 00:20:09 of the options, so you can click these input boxes and you will see the value entry fields. You can also

  • 00:20:16 select them from here or by using these arrow keys as you are seeing. So it may get really confusing

  • 00:20:23 in the output folder. Right click, sort by, sort by date and you will see the last generated image

  • 00:20:28 in the beginning. So this is our inpainted. Okay, I see. Yes nice. It is looking pretty decent. You

  • 00:20:36 see the emblem is added here. So this is how you do inpainting with SDXL. So what if you want to

  • 00:20:45 use refiner as a base model instead of the base model? Click here and select refiner from here.

  • 00:20:51 With this way you will use refiner model as an inpainting model. That's it. So how you can use

  • 00:20:59 LoRA with SDXL? I also have a workflow for that. Right click. Save link as SDXL LoRA png. So let's

  • 00:21:08 drag and drop our image to here and our LoRA workflow will get loaded. So the difference is

  • 00:21:14 that we are adding a load LoRA node anywhere we wish and we connect the model of base to the LoRA

  • 00:21:23 model like this. We connect the clip to the LoRA model. Like this and instead of from model to clip

  • 00:21:29 text encode, we change the direction from LoRA to clip text encode like this as you are seeing.
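The same rewiring can be seen in an API-format workflow fragment. A hand-written sketch (node ids, the prompt text, and the LoRA file name are made up for illustration; the [node_id, output_index] pairs express the connections described above):

```python
# Hypothetical API-format fragment showing LoraLoader wiring: node "2" takes
# MODEL and CLIP from the checkpoint loader ("1") and feeds both onward.
workflow_fragment = {
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"},
    },
    "2": {
        "class_type": "LoraLoader",
        "inputs": {
            "lora_name": "test8.safetensors",  # example file in models/loras
            "strength_model": 1.0,
            "strength_clip": 1.0,
            "model": ["1", 0],  # MODEL output of the checkpoint loader
            "clip": ["1", 1],   # CLIP output of the checkpoint loader
        },
    },
    "3": {
        "class_type": "CLIPTextEncode",
        # the text encoder now takes CLIP from the LoRA node, not the checkpoint
        "inputs": {"text": "photo of a sports car", "clip": ["2", 1]},
    },
}
```

The sampler would then take its model from node "2" as well, which is exactly the redirection shown in the video.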

  • 00:21:38 If you look for how the nodes are connected, you start to understand the working logic of

  • 00:21:45 ComfyUI. Actually, this gives you more freedom and more options. However, it is really hard to

  • 00:21:52 understand at the beginning so you really need to look carefully how the nodes are connected to

  • 00:21:58 each other. So with this way you can use LoRA. This is a LoRA of myself. I have done over 15

  • 00:22:06 trainings. So you see my testing LoRAs are here. I am using Kohya UI for doing LoRA training and I

  • 00:22:13 am trying to find the best workflow for training yourself with extreme realism. Hopefully after

  • 00:22:20 this video it is my next video. So stay tuned, stay subscribed. So this is the way of loading

  • 00:22:26 LoRA. Nothing else you need. The LoRA model output goes to our sampler and that's it. So this LoRA

  • 00:22:33 exists in another folder. I will copy one of the LoRAs. For example, test8, and let's put it into our

  • 00:22:41 new installation which is here ComfyUI, models, LoRAs. Okay, I paste it here so when I click here,

  • 00:22:50 it won't be displayed. So I need to refresh my page and after that I need to click it and you

  • 00:22:57 will see the file is here. Now I can generate myself. Actually, I'm going to do that right

  • 00:23:03 now. Okay, we have this prompt so let's queue it. ComfyUI will highlight which

  • 00:23:09 node it is processing right now. So we are at the sampler node right now. After that it will

  • 00:23:15 decode with VAE and we will see the image here. Whatever is happening will be displayed on the

  • 00:23:22 command line instance of ComfyUI. So you can see the messages here. If you get an error,

  • 00:23:27 you should look here. Okay, you see, this is me. According to the prompt. This is decent. This is

  • 00:23:34 not the best, but this is decent. I am still working on the best workflow for training LoRA

  • 00:23:40 models with SDXL and hopefully it will be on the channel very soon. So let's say you want to learn

  • 00:23:47 more about how to use ComfyUI. As I said, you can look the examples posted here for example,

  • 00:23:53 image to image. Let's open it. You will see that they will display the workflow for image to

  • 00:24:00 image here. This file actually contains the workflow metadata. So right click, save image as,

  • 00:24:08 save it into your download folder. Go to your download and drag and drop the image and it will load the

  • 00:24:15 workflow displayed there like this. So with this way you can download these examples and load them

  • 00:24:22 if you wish. This is an extremely convenient way. ComfyUI also has a Discord-like channel system,

  • 00:24:29 so open their main GitHub repository. Go to the very bottom. In here you will see their chatting

  • 00:24:38 system, support and development channel. It is matrix space. I already joined it. It is free to

  • 00:24:44 join. When you open it you will get the rooms like this. Join the rooms and you can ask questions to

  • 00:24:52 the developer and other experts of ComfyUI. They can give you feedback. They can give you workflow

  • 00:24:58 files. You can load and use them as you wish. Now I will show you how to install and use ComfyUI on

  • 00:25:06 Google Colab, a free Google Colab. Click this link. It will open this Google Colab page.

  • 00:25:12 Click connect. Once you are connected, click here and verify that you have GPU ram. Which means you

  • 00:25:19 are assigned to a GPU. If you don't have a GPU, you can change the runtime from here. Click it and

  • 00:25:25 select python from here. GPU from here and this is the GPU type. Since I am on a free account,

  • 00:25:31 I am not able to select these other GPUs and hit save. You can use Google Drive to save your files,

  • 00:25:39 upload them, save them. Or if you don't use Google Drive, they will be saved in here. Left,

  • 00:25:46 click here. This is the runtime's storage. Everything here will get deleted when you terminate your

  • 00:25:52 runtime from here. Click here and disconnect and delete runtime and everything here will

  • 00:25:57 get deleted forever. Okay, the setup is pretty easy. First run this cell. Click run anyway. Wait

  • 00:26:05 until it is fully executed. You will also see new folders are appearing here. It will also display

  • 00:26:12 the progress in the bottom of this cell as you are seeing right now. This is the progress. It is

  • 00:26:18 downloading everything and installing everything. You see the ComfyUI appeared here. When I refresh

  • 00:26:24 I will also see whatever is coming new. Okay, the cell execution has been completed. It took only 33

  • 00:26:30 seconds. Now we can move to the next point. Here they have added quickly download links. If you

  • 00:26:39 want to download any of these certain models, just remove the # in front of them and they will get

  • 00:26:46 downloaded. So by default they are downloading SD 1.5 version. Let's say you want to use Realistic

  • 00:26:53 Vision. So how are you going to use it? Search for Realistic Vision Hugging Face on Google.

  • 00:26:59 Go to the Realistic Vision page. In here click this username; that is the person who uploads Realistic

  • 00:27:06 Vision models. Select the realistic vision version that you like. Go to files and versions. In here

  • 00:27:12 use the biggest file with safetensors. Right click this download icon, copy link,

  • 00:27:17 and then all you need to do is copy this command. Paste it so it is copy pasted. Then let's copy the

  • 00:27:26 link again because it is gone. Okay, delete this file and that's all you need to do. Now it will

  • 00:27:32 also download Realistic Vision. Actually, let's disable the other download and let's just download

  • 00:27:37 Realistic Vision as a beginning so you see it will also download the best VAE file. And there are

  • 00:27:43 many other models they have added. So I will just download these two models. They will automatically

  • 00:27:49 get downloaded into the correct folder with wget command. Actually we can open the ComfyUI,

  • 00:27:56 in here models, in here we will see checkpoints and you see they are getting downloaded. You can

  • 00:28:01 also see the download progress here. It is very fast. Over 200 megabytes per second and the models

  • 00:28:07 are downloaded. So this is the way of downloading models. But we want to use SDXL. So how are we

  • 00:28:13 going to download it? The procedure is totally the same. So let's return to our GitHub file.

  • 00:28:19 Open the SDXL base files. So this was the base file. Let's go to files and versions. Right click,

  • 00:28:26 copy link and this time let's change this file link into SDXL file. Let's also copy paste it

  • 00:28:35 one more time. Like this, let's also open the refiner okay files and versions. Right

  • 00:28:41 click the safetensors, copy link and let's also change this link. Okay, but will this work? No,

  • 00:28:49 this won't work, because these files are behind a verification: a research agreement. So what you need

  • 00:28:59 to do is generate a token. How are you going to do that? This is my profile. Click your

  • 00:29:06 profile link, click settings. In here you will see access tokens. Okay. Go to access tokens

  • 00:29:13 new token, test read, selected, generate, copy the token to clipboard and also memorize your username

  • 00:29:21 from here. Then go back to the GitHub readme file. At the bottom of the readme file you will see

  • 00:29:28 these commands that I have shared. So actually copy this link. We will replace this link with

  • 00:29:35 that, like this, and in here we will change the username to your Hugging Face username, and also we need

  • 00:29:44 to copy our token again and paste the token here and it is done. So the same thing applies to the

  • 00:29:52 below link as well. So copy this part that starts from Hugging Face and paste it here so you see it

  • 00:30:00 has become exactly the same. By the way, I also need to delete the extra wget command. So you see this is

  • 00:30:06 the final version of downloading the SDXL models with wget, then click this play icon again. In the

  • 00:30:15 bottom we should see they will get downloaded. You see it started downloading SDXL model to

  • 00:30:22 the Google Colab. When we refresh our folder we should see them in the checkpoints folder. Yes, they are

  • 00:30:27 coming. Okay, both files are downloaded. It didn't download the previously downloaded file again. Now

  • 00:30:34 we can start using ComfyUI on a free Google Colab. So just click this play icon. This is

  • 00:30:40 the suggested way of using it and it is working. I tested it previously. Just patiently wait. So you

  • 00:30:46 see it has automatically enabled high VRAM mode because my GPU has more VRAM than my system RAM.

  • 00:30:53 You can see the system RAM is less than the GPU RAM. Then all you need to do is open this

  • 00:31:00 link and copy this IP. This is really important. Select it, press Ctrl+C to copy, and paste it here. You

  • 00:31:07 can also look at it and type it manually, then click submit on the opened page, and it will open ComfyUI. This

  • 00:31:16 is really slow when compared to our computer. Okay, in the first time it didn't load properly

  • 00:31:22 so I will just refresh. Okay, I still can't see it, so let's try load default. Okay. Okay, it says

  • 00:31:30 that it cannot read properties. Okay, it just came up. Nice. It is loaded. You see, by default it

  • 00:31:37 has selected Realistic Vision and I also see other models. Let's do a test with Realistic Vision.

  • 00:31:43 So click queue. It is queued. We should see the messages here. Yes, we are seeing them. This is the GPU

  • 00:31:50 ram being used. Okay, we are still waiting for the output. Okay, it started to load the model. I see

  • 00:31:57 the GPU RAM usage is increasing. Okay, now we can see the iterations per second. It is 7 it per second, and then

  • 00:32:05 we should see the image here. It will be slower than a local computer, obviously, because the data will

  • 00:32:11 be transferred; we are still waiting. Okay, the image arrived. Now you can right click and open the image

  • 00:32:17 in a new tab and you can save it if you wish. Alternatively, it will be saved in this left part.

  • 00:32:23 You see I click here. This is our runtime. In here output and our image is saved here. You can double

  • 00:32:30 click it and open it. You can right click and download. If I have selected Google Drive, I think

  • 00:32:37 these files would be saved inside my Google Drive and I would have all of them by default. So

  • 00:32:45 let's try SDXL on Google Colab. This was the image that we downloaded from my GitHub file. Let's drag

  • 00:32:53 and drop it. Okay, it is loaded and let's try. Okay, hit queue. By the way currently we are

  • 00:33:00 trying to use refiner as well, but it should work I think. Let's just patiently wait. Okay,

  • 00:33:06 GPU ram usage is increasing. Currently it is loading the checkpoint you see it is highlighted.

  • 00:33:12 Loading is also taking time. It is also displaying the messages here while loading. Okay then it is

  • 00:33:19 going to start generating the image. It is 1.5 seconds per it, using 8.4 gigabytes of VRAM right now. The

  • 00:33:27 first image has been generated. Now it should appear here. Yes, we can see the image. Now it

  • 00:33:33 will try to generate the refined image. The GPU RAM usage is increasing. Okay, it has loaded the refiner

  • 00:33:39 model as well, now using 12 gigabytes of VRAM. So if you want to use this on a low VRAM machine,

  • 00:33:47 like one with 8 gigabytes of VRAM, then you need to have more system RAM so that the models

  • 00:33:54 will be loaded onto RAM and you will be able to use it. Okay, this time it is 1.8 seconds per it

  • 00:34:01 and it is also generated and the image is here. Now it should also appear here as well. Okay,

  • 00:34:08 it has appeared here as well. What if you want to download this entire folder? Right click,

  • 00:34:15 copy path, then I will open ChatGPT and ask it: Give me Google Colab code that will

  • 00:34:24 download the following folder. Okay, I copied and pasted it. It is giving me the code; copy the code and click

  • 00:34:34 here. Add code, paste the code and hit execute. But since the ComfyUI cell is still running, this part

  • 00:34:43 of the code won't be executed. So let's terminate it. Once we terminate it, our ComfyUI will

  • 00:34:50 stop. Then we can download the entire folder. Let's play. Okay, we have got some errors so we

  • 00:34:57 need to fix them. So to fix them, I will just copy this message and give it back to ChatGPT. It

  • 00:35:04 will give me the updated code, copy it. I am showing all of this so you will also learn. Then hit the

  • 00:35:10 play icon again. Okay, we got a download link, but this link is not working, so the file is saved inside

  • 00:35:18 the content folder as download, which should be somewhere around here. Yes! So this second file is the download.
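If you'd rather not ask ChatGPT for download code, the same result can be had with one shell command in a Colab cell (prefix it with `!`). The paths below are assumptions based on the default Colab layout shown in the video:

```shell
# Pack ComfyUI's output folder into a single archive, then download
# that one file from the Colab file browser.
tar -czf /content/download.tar.gz -C /content/ComfyUI output
```

`-C` switches into `/content/ComfyUI` before archiving, so the archive contains a clean top-level `output/` folder.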

  • 00:35:25 Click download and now you can download all of the images. I have shared this download code

  • 00:35:33 in my Patreon post. You will see the link here. When you open it, at the very bottom you will see

  • 00:35:39 download colab.txt; click it. It will download the txt file. Open it and you will find the entire code,

  • 00:35:46 copy it, then go to the very bottom of the page. You will see a plus code icon here when you hover

  • 00:35:52 your mouse, so add a code cell. Alternatively, use the insert code cell option from here,

  • 00:35:59 copy and paste the code, and play, and you will get a zip download of all the generated images, like

  • 00:36:06 this. So why do I share this there? Because I need your support on Patreon. This Patreon post also

  • 00:36:13 includes the auto installer script for RunPod. Now I will show you the RunPod installation. For the ComfyUI

  • 00:36:21 RunPod installation I prepared an automated script and also step-by-step instructions.

  • 00:36:28 Let's begin with the automated script. So register or log in to RunPod from this link, click login. Go

  • 00:36:36 to community cloud. ComfyUI and SDXL work very well on an RTX 3090, which is only 29 cents per

  • 00:36:44 hour. So click deploy. In here type test. You will see the RunPod Fast Stable Diffusion template. This

  • 00:36:50 is important, make sure it is selected. You can also customize the deployment and increase your volume disk size

  • 00:36:56 if you wish. So just click continue, click deploy. Why am I using this template? Because it

  • 00:37:03 has the necessary files to install and run and it is also very lightweight and easy to use. Just

  • 00:37:09 patiently wait until it is loaded. Okay, you see this was very fast. Why? Because it was probably

  • 00:37:16 cached previously by someone else. So click connect, click Jupyter Lab. If you don't know how to use

  • 00:37:22 RunPod, I have this master tutorial. Why master? When you open it you will see it is over 100

  • 00:37:30 minutes and when you expand the description you will see all of these chapters. This video

  • 00:37:37 will significantly help you to learn how to use RunPod. Okay, we have connected our Jupyter lab

  • 00:37:44 so open this Patreon post, click here and download the ComfyUI.sh file. Alternatively, at the very bottom

  • 00:37:52 you will see attached files. You can also click here and download it. Then in your jupyter lab,

  • 00:37:57 click this upload icon, select the ComfyUI.sh file. You will see it here. Then all you need to do

  • 00:38:04 is copy this, open a new terminal, paste and hit enter and it will install ComfyUI fully

  • 00:38:11 automatically for you. Let's also start another machine for manually installing ComfyUI. Deploy. I

  • 00:38:18 will do the same Fast Stable Diffusion. Continue deploy. Let's name this machine as manual like

  • 00:38:25 this and the other machine will be auto, like this. Okay, the manual machine is also loading.

  • 00:38:30 You don't have to do anything else for the automatic installation beyond running the initial command.

  • 00:38:36 Okay, the manual machine is ready. Let's connect from Jupyter Lab. So I also prepared very

  • 00:38:42 detailed instructions for the manual installation as well. But if you support me on Patreon, I would

  • 00:38:48 appreciate that very much because my Youtube revenue is very bad. Your Patreon support is

  • 00:38:54 tremendously important for me. Okay, Jupyter Lab has started. So first we need to move into the workspace.

  • 00:39:00 We are already in that. So start a terminal. You see this is where we are. Copy this, hit enter and

  • 00:39:08 it will clone. Then you need to move into ComfyUI. So for moving into ComfyUI, refresh folders here,

  • 00:39:15 ComfyUI and open a new launcher here and terminal. You see now we are inside ComfyUI. Copy this code,

  • 00:39:23 copy and paste it and hit enter. It will create a new virtual environment. Then we need to move inside the

  • 00:39:30 virtual environment folder, so you will see the virtual environment folder here. Enter inside it,

  • 00:39:36 venv, open a new terminal. Copy this command, paste and hit enter. Now the virtual environment is

  • 00:39:44 activated. Then we need to execute this command: copy and paste it, hit execute, and wait until it is

  • 00:39:50 completed. RunPod automatic ComfyUI installer will also download best VAE file and Realistic Vision

  • 00:39:57 model and SDXL models automatically for you so you don't need to do anything for them as well. Okay,

  • 00:40:04 we can continue with the manual installation. As a next step we need to install xFormers. This will install the

  • 00:40:10 latest xFormers. This is also a special command that I have searched and found for you. So you

  • 00:40:17 see it has installed development 564 version of xFormers. Then we need to copy this and execute

  • 00:40:25 it. Then we will install the requirements: copy, execute. I am installing the requirements while

  • 00:40:31 the virtual environment is activated. This is really important. People sometimes skip

  • 00:40:36 this step, and then the applications on RunPod or Google Colab do not work. Okay, it has been

  • 00:40:43 installed. Now we need to move into the VAE folder. So let's move into the VAE folder from here: ComfyUI,

  • 00:40:50 models, VAE, open a new terminal, copy this command. This will download the VAE

  • 00:40:58 into this folder, you see, and then we can also download Realistic Vision. So copy this command,

  • 00:41:06 move into the checkpoints folder here. Unfortunately, we are not able to enter inside it. So let's open

  • 00:41:13 a new terminal, copy paste it. The model will be downloaded in here. This is weird. I don't

  • 00:41:19 know why I am not able to enter inside checkpoints folder, but this is happening. Once the file has

  • 00:41:25 been downloaded, drag and drop it into checkpoints like this, and now it is inside checkpoints. But I

  • 00:41:32 am still not able to see the checkpoints folder. That is very weird. This is probably a bug in Jupyter

  • 00:41:37 lab. However, you can open a new terminal here and move into checkpoints like this and you can

  • 00:41:44 type this and it will show you what is inside this folder like this. Meanwhile, automatic

  • 00:41:49 installation has been fully completed. So for using ComfyUI on RunPod after installation, copy

  • 00:41:56 this command entirely, open a new terminal, paste the command and hit enter and it will start the

  • 00:42:05 ComfyUI. Just patiently wait. Okay, it has started. Once you see this message, it means it is running.
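Put together, the manual install sequence above looks roughly like this. It is a sketch: the repository URL and launch flags are standard ComfyUI usage, but the exact pinned commands (the xFormers install line in particular) live in the GitHub readme linked in the video.

```shell
# Manual ComfyUI install on RunPod, condensed (a sketch; see the readme
# for the exact commands used in the video).
cd /workspace
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
python -m venv venv                  # create an isolated environment
source venv/bin/activate             # activate it before installing anything
pip install xformers                 # the video pins a specific dev build instead
pip install -r requirements.txt      # ComfyUI's own dependencies
python main.py --listen --port 3001  # then connect via RunPod's HTTP 3001 proxy
```

Installing while the venv is active is the step the video warns about: without activation, pip installs into the system Python and ComfyUI's launcher won't find the packages.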

  • 00:42:11 Go back to your My Pods page and in here click connect. Click connect to HTTP service 3001 and now we will

  • 00:42:20 get the ComfyUI interface. It is loading. Okay it has been loaded. It has been loaded with Realistic

  • 00:42:26 Vision version 4. Just click queue and you can see the progress in this new terminal that you

  • 00:42:34 have started and it was super fast. You see it was 18 it per second. Let's load our SDXL. So

  • 00:42:42 from downloads, drag and drop this png. SDXL is loaded. Let's clear and let's see the speed of

  • 00:42:50 SDXL. So it is going to load the SDXL model. It is loading everything. Meanwhile we can see the

  • 00:42:57 pod utilization from this screen. So currently it is using the CPU. Okay wow! The base SDXL model was

  • 00:43:05 1.98 it per second and now it is doing the refiner. The refiner was 1.77 it per second. You see how much faster it is

  • 00:43:15 compared to the free Google Colab, and we should see images here. Yes, they are now here. On RunPod,

  • 00:43:22 downloading is much easier. Enter inside ComfyUI, then click this plus icon, open a new

  • 00:43:29 terminal, type runpodctl send, and then the name of the folder that you want to download, which

  • 00:43:36 is output. It will give you a link like this, then open a cmd wherever you want to download,

  • 00:43:43 copy paste it and it will download inside that opened folder like this as you are seeing right

  • 00:43:48 now. By the way, for this to work, that is, for runpodctl to work on your computer, watch this tutorial

  • 00:43:54 and you will learn how to use RunPodctl on your computer. Alternatively, without using RunPodctl

  • 00:44:01 right click this folder download as an archive. It will download it. However, if you have too many

  • 00:44:07 images then it will be very slow. Okay, we can continue with our manual installation. So where

  • 00:44:14 were we left? We were left at the part where you need to download SDXL. So you need to have

  • 00:44:22 a Hugging Face account. I already explained this in the Google Colab part, but let's say you just

  • 00:44:28 jumped to the RunPod part, so I will explain again. You need to open these two links and accept the terms

  • 00:44:34 of service. Click the links to open them. Once you have accessed the files and versions, go to

  • 00:44:39 your account, go to settings. In here you have to generate an access token. So go to access tokens. New

  • 00:44:45 token test, test, test2. You can give any name, generate token, copy the token, open a new notepad

  • 00:44:53 file, and paste it like this. Then copy the first command written here. Paste it into your notepad.
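Once the username and token are filled in, the assembled command looks roughly like this. USERNAME and TOKEN are placeholders for your own Hugging Face values; the base-model URL shown is the standard stabilityai SDXL path, but double-check it against the readme.

```shell
# Token-authenticated download of a gated Hugging Face file (a sketch).
# Replace USERNAME and TOKEN with your own values from your account settings.
USERNAME="your-hf-username"
TOKEN="hf_xxxxxxxxxxxxxxxx"
wget "https://$USERNAME:$TOKEN@huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors"
# An equivalent form passes the token as a header instead of embedding
# credentials in the URL:
#   wget --header="Authorization: Bearer $TOKEN" <same URL>
```

Repeat the same command with the refiner repository's URL for the second file.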

  • 00:45:01 Check out your username from here. This is my username MonsterMMORPG. Then change the username

  • 00:45:07 here and copy paste the token here. Then copy this command and you need to now download it into

  • 00:45:15 checkpoints. So now we are inside checkpoints. Just copy and paste it and it will download

  • 00:45:21 SDXL into the checkpoints folder, then repeat the same process for the refiner as well. After this you are

  • 00:45:29 ready to use it on RunPod because the installation is completed. Just run this command. This command is

  • 00:45:36 the same as in the automatic installation, because after installation everything is the same, and you will be able to

  • 00:45:43 use ComfyUI on RunPod like this. Once you are able to use ComfyUI, it is the same on Windows,

  • 00:45:52 RunPod, or Google Colab. Only where the files are saved and where they are uploaded changes.

  • 00:45:59 Everything else is the same. It is working perfectly fine. Thank you so much for watching. I hope you

  • 00:46:05 have enjoyed it. Please join my YouTube channel and support me. It is tremendously important for me.

  • 00:46:11 Why? Because my YouTube views are terrible, as you are seeing right now. I am spending a huge amount of time.

  • 00:46:17 For example, I have been working on training LoRA models on SDXL for days now and maybe it will be

  • 00:46:25 watched by very few, but your channel join support and your Patreon support help me significantly. When

  • 00:46:31 you open this link, you will get to my Patreon page. You will also see the Patreon link in the

  • 00:46:37 description of the video and also in the comment section of the video. You see I have over 300

  • 00:46:42 supporters. I appreciate them very much. They are giving me support to continue producing videos.

  • 00:46:48 On Patreon I have an index page. When you open it (this is a public page), you will see all of

  • 00:46:55 my Patreon posts. You will see their details, you will see their links. I am sharing very useful

  • 00:47:01 resources here. I am explaining them in the videos as well, but this will make your life easier. So

  • 00:47:07 this is a little incentive for you to support me. If you do support me, I appreciate that very

  • 00:47:12 much. Please also comment, share, like, and ask me anything you want. If something gets broken,

  • 00:47:18 just comment on this video and I will update this readme file with the newest instructions. Also in

  • 00:47:24 this readme file you will see our YouTube channel, Patreon page, and my LinkedIn and

  • 00:47:29 my Twitter profile. Open them and you can start following me on Twitter. Or you can connect with me

  • 00:47:35 and follow me on LinkedIn. So hopefully see you in another amazing tutorial video. Thank you so much.
