
How To Do Stable Diffusion XL (SDXL) DreamBooth Training For Free - Utilizing Kaggle - Easy Tutorial


🌟 Master Stable Diffusion XL Training on Kaggle for Free! 🌟 Welcome to this comprehensive tutorial where I'll be guiding you through the exciting world of setting up and training Stable Diffusion XL (SDXL) with Kohya on a free Kaggle account. This video is your one-stop resource for learning everything from initiating a Kaggle session with dual T4 GPUs to fine-tuning your SDXL model for optimal performance.

#Kaggle #StableDiffusion #SDXL

Notebook ⤵️

https://www.patreon.com/posts/kohya-sdxl-lora-88397937

Tutorial GitHub Readme File ⤵️

https://github.com/FurkanGozukara/Stable-Diffusion/blob/main/Tutorials/Full-Stable-Diffusion-XL-SDXL-DreamBooth-Training-Tutorial-On-Kaggle.md

00:00:00 Introduction To The Kaggle Free SDXL DreamBooth Training Tutorial

00:02:01 How to register a Kaggle account and log in

00:02:26 Where and how to download the Kaggle training notebook for Kohya GUI

00:02:47 How to import / load downloaded Kaggle Kohya GUI training notebook

00:03:08 How to enable GPUs and Internet on your Kaggle session

00:03:52 How to start your Kaggle session / cloud machine

00:04:02 How to see the free hardware Kaggle gives you

00:04:18 How to install Kohya GUI on a Kaggle notebook

00:04:46 How to know when the Kohya GUI installation has been completed on a Kaggle notebook

00:05:00 How to download regularization images before starting training

00:05:22 Introduction to the classification dataset that I prepared

00:06:35 How to set up and enter your token to use Kohya Web UI on Kaggle

00:08:20 How to load pre-prepared configuration json file on Kohya GUI

00:08:48 How to do Dataset Preparation after the configuration is loaded

00:08:59 How to upload your training dataset to your Kaggle session

00:09:12 Properties of my training images dataset

00:09:22 What kind of training dataset is good and why

00:10:06 How to upload any data to Kaggle and use it on your notebook

00:10:20 How to use a previously created Kaggle dataset in a new Kaggle session

00:10:34 How to get the path of a dataset included in your session

00:10:44 Why I train with 100 repeats and 1 epoch

00:10:54 Explanation of 1 epoch and how to calculate epochs

00:11:23 How to set path of regularization images

00:11:33 How to set instance prompt and why we set it to a rare token

00:11:46 How to set destination directory and model output into temp disk space

00:12:29 How to set Kaggle temporary models folder path

00:13:07 How many GB of temporary space Kaggle provides us for free

00:13:23 Which parameters you need to set on Kohya GUI before starting training

00:13:33 How to calculate N for the save every N steps parameter to save checkpoints

00:13:45 How to calculate the total number of steps your Kohya Stable Diffusion training is going to take

00:14:10 How to calculate the number of steps needed to save 5 checkpoints

00:14:33 How to download saved configuration json file

00:14:43 Click start training and training starts

00:14:55 Can we combine the VRAM of both GPUs and use it as a single pool

00:15:05 How the base model used for training is set

00:15:55 The SDXL full DreamBooth training speed we get on a free Kaggle notebook

00:16:51 Can you close your browser or computer during training

00:17:54 Can we download models during training

00:18:26 Training has been completed

00:18:57 How to prevent the last checkpoint from being saved twice

00:19:30 How to download generated checkpoints / model files

00:21:11 How to know the download status when downloading from the Kaggle working directory

00:22:03 How to upload generated checkpoints / model files into Hugging Face for blazing fast upload and download

00:25:02 Where to find Hugging Face uploaded models after upload has been completed

00:26:54 Explanation of why the last 2 generated checkpoints are duplicates

00:27:27 Hugging Face upload started and the amazing speed of the upload

00:27:49 All uploads have been completed now how to download them

00:29:02 Download speed from Hugging Face repository

00:29:17 How to terminate your Kaggle session

00:29:36 Where to see how much GPU time you have left for free on Kaggle for that week

00:29:46 How to make a fresh installation of Automatic1111 SD Web UI

00:31:05 How to download Hugging Face uploaded models with wget very fast

00:31:57 Which settings to set on a freshly installed Automatic1111 Web UI, e.g. VAE quick selection

00:32:07 How to install after detailer (adetailer) extension to improve faces automatically

00:32:51 Why you should add --no-half-vae to your command line arguments

00:33:05 How to start / restart Automatic1111 Web UI

00:33:37 How to switch to the development branch of Automatic1111 Web UI to use the latest version

00:34:24 Where to download amazing prompts list for DreamBooth trained models

00:35:07 How to use PNG info to quickly load prompts

00:35:52 How to do x/y/z checkpoint comparison to find the best checkpoint of your SDXL DreamBooth training

00:38:09 How to make SDXL work faster on weak GPUs

00:38:37 How to analyze results of x/y/z checkpoint comparison to decide best checkpoint

00:42:06 How to obtain better images

00:42:20 How to install TensorRT and use it to generate images very fast with the same quality

00:44:41 How to use amazing prompt list as a list txt file

Video Transcription

  • 00:00:00 In this tutorial video, I will guide you through  setting up your Stable Diffusion XL (SDXL) Kohya  

  • 00:00:06 training notebook on a free Kaggle account. Here  is what you will learn: How to select the correct  

  • 00:00:11 Kaggle notebook settings and start your session.  Steps to install and initiate the Kohya graphical  

  • 00:00:17 user interface Stable Diffusion trainer. Setting  best parameters and configurations for SDXL  

  • 00:00:23 training with Kohya on a free Kaggle notebook,  utilizing dual T4 GPUs simultaneously. Simply load  

  • 00:00:30 my pre-shared configuration and click prepare data  set. Adding new data to your Kaggle account as a  

  • 00:00:36 data set for use in your session, like training  images. The types of training images to use in  

  • 00:00:42 your data set. A new training approach, instead  of epochs, use a higher repetition count and save  

  • 00:00:48 checkpoints based on step count. How to calculate  checkpoint saves every N steps. Estimating the  

  • 00:00:54 total number of steps your training will take.  Downloading saved checkpoints or files directly  

  • 00:00:58 from the Kaggle working directory. Uploading  generated checkpoints to Hugging Face from Kaggle  

  • 00:01:04 or from other cloud services such as Google Colab,  RunPod, and AWS. Quickly downloading checkpoints  

  • 00:01:10 from Hugging Face using a browser or wget.  Switching your Automatic1111 Stable Diffusion  

  • 00:01:17 web UI to the development branch. Finding and  using amazing prompt list PNGs. Installing and  

  • 00:01:23 effectively using the after-detailer extension  for automatic face inpainting to enhance image  

  • 00:01:29 quality, including specific face improvements.  Comparing checkpoints by using X/Y/Z comparison  

  • 00:01:35 to identify the best trained model. Correct  Automatic1111 command line arguments for  

  • 00:01:41 using SDXL on low-end graphics cards for image generation. Installing TensorRT for faster image

  • 00:01:47 generation and improved quality. As usual, I have  prepared an amazing GitHub Readme file for this  

  • 00:01:54 tutorial. Follow this tutorial while this file is open. I will update this file if necessary.

  • 00:02:01 We will begin with logging into our Kaggle account. If you don't have a Kaggle account,

  • 00:02:06 register one from here. It is free. After that, log in and make sure that your phone number is

  • 00:02:14 verified. Otherwise, you won't be able to use the GPUs that Kaggle provides us, 30 hours every week.

  • 00:02:21 And it is amazing. So this is my Kaggle account.  I am logged in right now. So as a next step,  

  • 00:02:27 go to this link to download the notebook file  that we are going to use in this tutorial. You can  

  • 00:02:32 download it from here, version 9. Alternatively, and this is an even better way, go to the very bottom

  • 00:02:39 of the post. And in here, you will see the  attachment here. Downloading from this attachment  

  • 00:02:45 is usually better. So we have downloaded our  notebook file, then click create icon here,  

  • 00:02:50 click new notebook, then go to the file here  and click import notebook. Click browse files,  

  • 00:02:56 select the downloaded notebook from your downloads  or wherever you have downloaded and click import.  

  • 00:03:02 Then the notebook will be imported. Click X  to close it. So this will be your notebook.  

  • 00:03:07 Before starting, you need to make sure that your accelerator is set to GPU T4 x2. This is super

  • 00:03:15 important. Turn on GPU as you have seen right  now. Then you can also select persistence. If  

  • 00:03:21 you select files only, it will try to recover  files, but this will make it work very slow,  

  • 00:03:28 extremely slow. I don't suggest it. So when  you are going to do training, you should be  

  • 00:03:34 available until training completes. Then once the training has completed, you should upload your

  • 00:03:39 models to Hugging Face or download them to your computer. Uploading to Hugging Face

  • 00:03:44 is what I suggest you do. Then make sure that Internet on is selected. Once you have made those

  • 00:03:52 changes, you need to click this button. It will  start the session. Once the session started,  

  • 00:03:58 you will see your hardware when you click here.  Recently, Kaggle did a huge upgrade to the RAM  

  • 00:04:05 they provide for free accounts. Now it is 29 gigabytes and it is amazing. And we still get

  • 00:04:10 dual T4 GPUs with 15 gigabytes of VRAM each. They are amazing. Then all you need to do

  • 00:04:19 is click this cell. Once this cell is selected you  can click this icon or this icon. Let's click this  

  • 00:04:26 icon. Wait until this cancel run disappears.  So currently it will execute this cell and  

  • 00:04:34 install everything automatically for you. This  installation may take a while. So patiently wait.  

  • 00:04:40 While installing you will see the installation  messages like this. Just wait at the end here.  

  • 00:04:47 Once the setup has been completed, you see there  is no cancel run anymore, there is no executing  

  • 00:04:53 cell and you will see setup finished here. After  that, whether you are training a woman or whether  

  • 00:04:59 you are training a man, you can execute this cell to automatically download and extract them.

  • 00:05:07 These are the latest regularization images that  I have prepared for you guys. Since I'm going to  

  • 00:05:13 train myself, it is going to be a man data set. So I click this cell. It will download and extract

  • 00:05:19 all of the man data set automatically for me. I prepared this data set by spending days,

  • 00:05:25 literally days, and I am providing all of it to you. So when I enter inside the 1024 folder,

  • 00:05:33 1024 pixels folder, you see they are even sorted  by the face quality. How did I calculate the face  

  • 00:05:40 quality? I used the very best algorithm available  to rank the faces among different images based on  

  • 00:05:49 the sharpness and focus of the face. You see  in each image, the face is extremely clear,  

  • 00:05:56 very high quality. The reason I did this, and sorted them with a specific naming pattern

  • 00:06:02 as you are seeing, is that the Kohya script will use the very first images in the folder

  • 00:06:10 during your training; I will explain how many will be used. So using these regularization

  • 00:06:15 images, classification images will improve our  training significantly. This is the very best  

  • 00:06:22 regularization / classification images data set  that you can find. It is manually prepared by  

  • 00:06:28 me. So the extraction has been completed as you are seeing right now, and our classification images are ready.
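(The video does not name the ranking algorithm, so treat this as an illustrative sketch only: a common way to score face sharpness is the variance of the Laplacian over a detected face crop. The OpenCV Haar cascade used here is an assumption, not necessarily what was used for this data set.)

```python
import cv2

def face_sharpness(image_path: str) -> float:
    """Score an image by the sharpness (variance of Laplacian) of its largest detected face."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    # Haar cascade face detector bundled with OpenCV (an assumption; any detector would do)
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return 0.0
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # take the largest detected face
    return cv2.Laplacian(gray[y:y + h, x:x + w], cv2.CV_64F).var()
```

Sorting images by this score in descending order and renaming them with a zero-padded index would reproduce the kind of quality-ordered naming pattern described above.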

  • 00:06:33 Then as a next step, we are going to do something special to run the GUI on

  • 00:06:40 the Kaggle notebook. Unfortunately, Kaggle no longer allows public Gradio sharing.

  • 00:06:47 Therefore, we are going to use a specific workaround. So first, click the link that you will see

  • 00:06:54 on your screen, read the description and the  steps. Get your authentication token, copy it,  

  • 00:07:01 then paste your authentication token where you  will see your token here, then execute the cell.  

  • 00:07:07 Whenever you start your GUI, you need to execute  this cell. So it will give you a link here that  

  • 00:07:14 you will see on your screen. Open it and just  wait before clicking visit site. Then we will  

  • 00:07:21 execute this link. This will also download the very best SDXL DreamBooth configuration

  • 00:07:27 automatically for you, which is curated for  Kaggle. With this configuration, we will be able  

  • 00:07:33 to do full fine tuning, full DreamBooth training  of Stable Diffusion XL, SDXL for free on Kaggle,  

  • 00:07:41 which is amazing. If you used paid services, you would get much lower quality and

  • 00:07:47 you would pay a huge amount of money. However, I am bringing all of this to you for free so that you

  • 00:07:53 can use it. I have made over 100 full trainings to find the best hyperparameters for Kohya to do

  • 00:08:02 Stable Diffusion XL, SDXL DreamBooth training. So you will have all of it. After this cell has executed,

  • 00:08:08 you will get this screen, then go to the other  link that we have opened and click the visit site.  

  • 00:08:15 And now you see, we have loaded our Kohya GUI, as  you are seeing right now, then we will begin with  

  • 00:08:22 loading the configuration file. So click here,  return back to your notebook. And when you go to  

  • 00:08:29 the output folder and refresh it by clicking here,  you should see the configuration. Yes, we see the  

  • 00:08:36 configuration. Kaggle_SDXL_DreamBooth_Best.json.  Click this here to copy file path, then paste it  

  • 00:08:43 here. Click load. It will load the very best  configuration for you. All you need to do is  

  • 00:08:49 just data set preparation, nothing else. So to  be able to use your training images, you need to  

  • 00:08:56 first generate a data set in the Kaggle. So click  this icon here. It will allow you to upload files,  

  • 00:09:03 click browse files, go to your training images,  wherever they are. My training images are in  

  • 00:09:08 here and this is my training images data set.  They are all 1024 pixels exactly. This is not a  

  • 00:09:17 very good data set. Why? Because I have repeating  backgrounds and I have repeating clothing. This is  

  • 00:09:22 a decent data set, but not a good data set. I have  different distances as you are seeing right now,  

  • 00:09:27 close shot, mid shot, and I have different angles.  I don't have different emotions. If you want to  

  • 00:09:32 have different emotions in the output, you can  also add them. So this is my medium quality data  

  • 00:09:38 set. Hopefully I will make a much better and  bigger tutorial. I will explain all of them.  

  • 00:09:43 So stay subscribed. So I select all of them with  control A and open. I have to enter a data set  

  • 00:09:50 title. When entering a data set title here,  make sure that you use English characters and  

  • 00:09:54 no spaces. My_train_images, whatever the name you  want to give. This is private. So no one else will  

  • 00:10:01 see them. Click create. They will get uploaded  into my Kaggle data sets. If you want to use  

  • 00:10:07 anything with a Kaggle notebook, you should make  a data set first. Whether they are models, whether  

  • 00:10:12 they are images, whatever they are, or whether  they are a configuration file. And now you see,  

  • 00:10:17 I have my data set here. If I do training again,  I don't need to re-upload. When you click add data  

  • 00:10:23 set and when you click your data sets here,  you will see it. And when you click this plus  

  • 00:10:29 icon, they will get added into your running notebook session. So my data set is added now.

  • 00:10:34 I click copy file path and I paste it here into training images. I will train with 100 repeats.

  • 00:10:42 And since this is a dual GPU setup, it will be 200 epochs in total. Now the epoch calculation is really,

  • 00:10:50 really confusing at the beginning, hard to understand. One epoch means that all of your

  • 00:10:56 training images are trained one time. Since we are going to train with dual GPUs, at one step

  • 00:11:03 we will train two images. Therefore, with 100 repeats, we will actually do 200 epochs. You

  • 00:11:10 don't need to think about it a lot because I will  also show you how to get checkpoints and compare  

  • 00:11:16 them. So you will find the best checkpoint.  Regularization images. They are also here as  

  • 00:11:22 you are seeing right now. So copy the directory and put it here like this. Regularization images

  • 00:11:27 repeat is 1. The instance prompt will be a rare token, ohwx, and the class prompt will be man.

  • 00:11:34 I have other tutorials. If you want to learn  more about instance prompt and class prompt,  

  • 00:11:39 you can watch them. They are all linked in the  GitHub Readme file, as you are seeing right now.  

  • 00:11:45 So the destination directory will be inside our working directory, in outputs, like this.

  • 00:11:53 So everything will be saved here. Click prepare  training data, copy info to folders tab. And now  

  • 00:11:59 you see they are all ready. You can give any model  output name. I will use My_DB_Kaggle and you can  

  • 00:12:05 see they are all copied here. Now we will be using more output folder space, as you are seeing right

  • 00:12:11 now. The outputs arrived here after I clicked  refresh. However, since each checkpoint will be  

  • 00:12:18 about 7 gigabytes, we don't have sufficient output  folder space. So what we are going to do is we  

  • 00:12:25 are going to save them into the Kaggle temporary  models folder. So copy the path from here, go back  

  • 00:12:33 to your Kohya GUI, go back to folders, and change the output folder's model path. Set it

  • 00:12:41 to the Kaggle temp models folder, because this directory has been generated

  • 00:12:46 automatically. So go back here and change it to this. This is really important. If you give

  • 00:12:52 another name, it may not work because the models folder may not be automatically generated by

  • 00:12:58 Kohya the second time. So use this models folder. The model files, the checkpoints, will

  • 00:13:03 be saved into the temporary space of Kaggle, and Kaggle provides us about 50 gigabytes of temporary

  • 00:13:10 space, as you can see from here, which means we can roughly save seven checkpoints, but

  • 00:13:16 we will stop at saving 6 checkpoints to be sure. Okay. Now the parameters. There is one parameter

  • 00:13:23 that you need to set, which is when you go to  the advanced tab, save every N steps. Now you  

  • 00:13:30 need to calculate this yourself. To calculate it easily, click print training command here, and

  • 00:13:37 it will display how many training steps it is going to take. Since I have 13 images, it is going

  • 00:13:43 to take 13 multiplied by 100 repeats, which is 1300, divided by 1 because the batch size is 1, divided

  • 00:13:52 by 1 because the gradient accumulation steps are 1, multiplied by 1 because we are going to do

  • 00:13:58 training with only 1 epoch, and multiplied by 2 because we are also using regularization images.

  • 00:14:04 So the total training will take 2,600 steps and we will do 200 epochs of training in total. If

  • 00:14:11 I want to save 5 checkpoints, I just need to divide this by 5. So 2,600 divided by 5 is 520

  • 00:14:19 steps. This is the step count at which I need to save checkpoints. With every 520 steps,

  • 00:14:27 I will get a checkpoint and I will download all of them. After that, you can save your json

  • 00:14:32 if you want to use it later; you can download it from here if you wish, and we are ready.
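To recap that arithmetic in one place, here is the calculation as a minimal Python sketch using the numbers from the video (the +1 offset anticipates the fix recommended later in this tutorial, so the final step is not saved twice):

```python
# Kohya total-step arithmetic from the video
num_images = 13      # training images
repeats = 100        # repeats set in Kohya
batch_size = 1       # per-GPU batch size
grad_accum = 1       # gradient accumulation steps
epochs = 1           # training epochs
reg_factor = 2       # x2 because regularization images are used

total_steps = num_images * repeats // batch_size // grad_accum * epochs * reg_factor
print(total_steps)   # 2600

# Save interval for 5 checkpoints; +1 avoids duplicating the final checkpoint
num_checkpoints = 5
save_every_n_steps = total_steps // num_checkpoints + 1
print(save_every_n_steps)  # 521
```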

  • 00:14:38 Then click start training and you don't need to do anything else. The training will start. First

  • 00:14:44 it will cache all of the images that we are going  to use, including the regularization images. The  

  • 00:14:51 caching will take some time. It will also display everything here. You see, it is going to fully

  • 00:14:56 load the models into both of the GPUs, so it is not like we can combine the VRAM of both GPUs. It

  • 00:15:03 will download the SDXL base model and the SDXL 32-bit precision VAE as well. So

  • 00:15:11 our models will have the best SDXL VAE already  embedded. The download speed of Kaggle is just  

  • 00:15:18 amazing. As you are seeing right now, then it is  going to load the models into the VRAM after they  

  • 00:15:24 are downloaded. You see additional VAE is loaded.  I have prepared everything for you. So it is just  

  • 00:15:31 super easy for you. You don't need to do anything.  First, it will cache latents, then it will start  

  • 00:15:36 training. The training for 2600 steps is taking about 3.5 hours. With the caching included, it is

  • 00:15:45 going to take like 4 hours, which is amazing. Why?  I will explain in a moment once the caching is  

  • 00:15:52 completed. So the training has started. You see, currently we are getting 5.08 seconds per iteration and

  • 00:16:00 you may say this is slow. No, this is actually  a great speed. Why? Because currently we are  

  • 00:16:07 utilizing dual GPUs. Therefore, this is effectively 2.5 seconds per iteration, and on my RTX 3090 Ti GPU,

  • 00:16:18 I am getting about 1.6 seconds per iteration. So we are almost getting a free RTX 3090 Ti GPU from Kaggle,

  • 00:16:29 30 hours every week. This is amazing speed. So the total training is going to take about 3

  • 00:16:36 hours and 32 minutes, plus the 7 minutes already passed. So it is about 4 hours in total for training. So just

  • 00:16:46 wait and we will download all of the models after  the training has been completed. You cannot close  

  • 00:16:52 your browser while the training is happening.  This is a limitation of free Kaggle. Therefore,  

  • 00:17:04 your computer and browser have to be open during the entire training session. Currently, we are

  • 00:17:04 using 32.4 gigabytes on our temporary disk. After  checkpoints are generated, we will use more. So  

  • 00:17:14 we will compare once the first checkpoint has been  generated. The first checkpoint will be generated  

  • 00:17:20 after exactly 520 steps because we did set it like  that and currently we are at the 108th step. So  

  • 00:17:29 the first checkpoint has been generated. You see, exactly at 20% of the steps, because we divided

  • 00:17:37 the required number of steps by 5. So we got our first checkpoint. This is its saved name

  • 00:17:45 and you see the disk usage is now 38.9 gigabytes  and the training is continuing successfully. We  

  • 00:17:54 just need to wait. Unfortunately, there is no way to download this model before canceling the training or waiting

  • 00:18:01 for it to finish. Therefore, we have to wait until the training has been completed to download

  • 00:18:06 this saved checkpoint since it is saved in the  temporary folder. We are going to use the Hugging  

  • 00:18:14 Face repository upload method, which will be super  fast for uploading. And when you are downloading,  

  • 00:18:20 it will also be super fast. I will show it in a moment, after the training has been completed.

  • 00:18:26 All right, the training has been completed.  It took 3 hours and 44 minutes. However, we  

  • 00:18:33 have an issue. You see, currently we are using 77 gigabytes of temporary disk space and they give

  • 00:18:39 us only 73 gigabytes of hard drive. Therefore, we need to reduce the number of saved checkpoints.

  • 00:18:47 We saved 5 checkpoints. However, since the save interval was exactly 20% of the steps, it also saved

  • 00:18:56 the last step 2 times. So how do we prevent it? Instead of using the exactly divided step number,

  • 00:19:03 increase it by 1 step, so make it 521 steps. Then you will not get the extra last-step checkpoint;

  • 00:19:12 you will still get the final checkpoint that the training saves at the end. Therefore, remember, when you are

  • 00:19:17 following this tutorial, make the step count 1 more than the result of the division by 5. All right. So how are

  • 00:19:26 we going to download the generated models right  now? There are two ways to download them. The  

  • 00:19:32 first way is moving them into the output directory  and downloading from there. To do that, we need to  

  • 00:19:39 first cancel run, which is the running instance  of the Kohya GUI. So I will cancel the run. Okay,  

  • 00:19:45 now I can execute the other cells. The first  cell that we will execute is this cell. This  

  • 00:19:52 cell will list the directory of the temporary  models folder where we have saved our models.  
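Those two notebook cells most likely boil down to something like the following sketch (the temp path and the file name are assumptions; /kaggle/working is Kaggle's standard downloadable output directory):

```python
import os
import shutil

temp_models_dir = "/kaggle/temp/models"  # assumed temporary models folder used by the notebook

# Cell 1: list the checkpoints saved during training
print(os.listdir(temp_models_dir))

# Cell 2: copy one checkpoint into the working directory so it can be downloaded
checkpoint = "My_DB_Kaggle-step00000520.safetensors"  # hypothetical file name
shutil.copy(os.path.join(temp_models_dir, checkpoint), "/kaggle/working/")
```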

  • 00:19:58 And these are the models that have been saved as  you are seeing right now. So we can download them  

  • 00:20:03 one by one. Let's say we want to download the  first checkpoint, which is this one. Copy its  

  • 00:20:09 name. Change the name here like this. Execute this cell. After you execute this cell, this

  • 00:20:16 model will also be copied into the Kaggle working directory. We will see it in a moment after this

  • 00:20:22 cell execution has been completed. The temporary  disk space of the Kaggle is really really slow  

  • 00:20:29 compared to the output folder's hard drive space. Actually, there was once a bug which made it much

  • 00:20:36 slower than the output. By the way, it looks like it is also counting the output folder in this

  • 00:20:43 total disk space utilization. Okay, it is almost  completed. Yes, it is completed. Now refresh here  

  • 00:20:50 and you see the safetensor file is also here.  So you can click these three dots and click  

  • 00:20:55 download. When you do this, watch the address bar  of your browser. If you start multiple downloads,  

  • 00:21:02 it will ask you to allow multiple downloads from  Kaggle.com and it will not display the download  

  • 00:21:08 status. So how will you know the download status? You need to open your task manager, go to the

  • 00:21:14 performance and check out your ethernet. So you  see currently it is downloading with 200 megabits  

  • 00:21:20 per second. Once the download is completed, I will  see an immediate download from my browser. So it  

  • 00:21:27 will be immediately saved into the downloads folder. But I do not suggest this

  • 00:21:32 method, because you have to move every file one by one and you will not see the progress of

  • 00:21:38 the download. For example, if I want to also download the second checkpoint, I have to first remove this

  • 00:21:45 first checkpoint from the output folder by using  this. This will delete this model from here. So  

  • 00:21:52 I will be able to repeat the first process and  move the second model into the output folder. So  

  • 00:21:59 what I suggest is the Hugging Face upload method. To do that, you need to have

  • 00:22:04 a Hugging Face account. The join link is here. You  can click this join and register. Currently I will  

  • 00:22:12 also register to show you. So this is the account that I am going to use for registering. I typed

  • 00:22:18 a password and clicked next. It will ask you for a username and other details. So let's set this account's

  • 00:22:24 username to video tutorials, let's say. Okay. The other fields are not important. I have read,

  • 00:22:32 create account. Okay. It is first asking me for human verification. This may be an interesting step

  • 00:22:39 for you. So let's make it like this. Okay. It was a success and we got our account. We need to

  • 00:22:45 verify our email. So let's verify it. Okay. It is  verified. So then you need to go to the new model.  

  • 00:22:51 This is important. Click new model, give a model  name, my checkpoints, whatever the model name you  

  • 00:22:58 want. You can make it private or public. I will  make it private. So no one else will be able to  

  • 00:23:03 see or download it. So this will be my username  and model folder. Click here to copy it, return  

  • 00:23:09 back to your Kaggle notebook, change the repo ID  from here. The folder path is this folder where I  

  • 00:23:16 have generated and saved the models. So you don't  need to change it. Then click this icon first. It  

  • 00:23:21 will ask you to enter your token. So how will you  generate the token? When you go back to our GitHub  

  • 00:23:28 readme file, you see there is a tokens link, right  click and open it in browser. Alternatively, you  

  • 00:23:33 can click here and you can go to settings. In the  settings you will have access tokens. Click here,  

  • 00:23:39 click new token, make sure that you have selected  write. This is super important. And let's say  

  • 00:23:44 upload. You can give any name. Generate the token, copy the token from here, go back to your Kaggle,

  • 00:23:51 paste it, click login. And you see token is valid  permission write. This is super important. Then  

  • 00:23:57 set the repo ID, as I said from here and click  this play icon. So this will upload all of the  

  • 00:24:05 models into the Hugging Face repository that you have created, permanently. And then later you

  • 00:24:11 can download them as you wish, whenever you want, with ultra fast speed. From here, I thank Kaggle

  • 00:24:18 and Hugging Face a lot. They are contributing hugely to the machine learning and

  • 00:24:24 AI space. Their importance is significant. So I thank them a lot. I suggest you also follow

  • 00:24:31 this upload strategy. On the first run it may take a while because it first calculates

  • 00:24:38 the hashes of the models. Then it will start uploading. You see, our model download has also

  • 00:24:43 just completed. And now when I check my Ethernet, you see the usage speed has dropped

  • 00:24:50 like this. So as I said, you can move each file one by one and download

  • 00:24:57 them. Alternatively, you can upload all of them to Hugging Face. Let's just wait a little bit.
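For reference, an upload cell like the one described here can be written with the huggingface_hub library; this is a minimal sketch with an assumed repo ID and folder path, not the notebook's exact code:

```python
from huggingface_hub import HfApi, login

login(token="hf_...")  # paste your WRITE token here; never share real tokens

api = HfApi()
# Upload every checkpoint in the temporary models folder to your private model repo
api.upload_folder(
    repo_id="your-username/my-checkpoints",  # assumption: the repo created above
    folder_path="/kaggle/temp/models",       # assumption: where the checkpoints were saved
    repo_type="model",
)
```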

  • 00:25:02 So after you upload those files to Hugging Face, where will they appear? When you go to your

  • 00:25:08 profile, go to your checkpoints model here. Go to files and versions; they will be here. And this

  • 00:25:14 will be only visible to you because it is set to  private. No one else will be able to access them.  

  • 00:25:20 Let's just wait a little bit more. Okay. Meanwhile, while waiting, the cell below is for LoRA training. I will

  • 00:25:26 search for a better configuration for LoRA training, but I don't suggest doing LoRA training. SDXL

  • 00:25:33 DreamBooth is much stronger and better than LoRA. So if you really need LoRAs,

  • 00:25:40 you can use the Kohya GUI to extract LoRAs out of the fully trained checkpoints,

  • 00:25:47 the fully trained models. And they are much better quality than LoRA training itself. Hopefully

  • 00:25:53 I will make a video about that too. You can use  your own computer to extract them. You don't  

  • 00:25:59 need VRAM for doing that, actually. So it will hopefully be the topic of another video. I will

  • 00:26:05 search for the best LoRA extraction settings and  share them with you. I will also search for best  

  • 00:26:11 LoRA training configuration as well. But this  is the configuration I have at the moment when  

  • 00:26:16 you download this notebook, you will have that.  So on our channel, I suggest you to watch these  

  • 00:26:23 2 tutorials as well if you wish to learn more  about using the Kaggle, using the Automatic1111  

  • 00:26:30 Web UI on a Kaggle and watch this tutorial to  learn more about LoRA training if you prefer.  

  • 00:26:36 Hopefully new tutorials are coming too. So please  subscribe to our channel. You won't regret it  

  • 00:26:41 believe me. The upload still hasn't started. I think it is taking time to calculate the hashes of all the

  • 00:26:48 models. We can see the CPU usage and the RAM usage  because it is working right now. So from these  

  • 00:26:54 uploaded models, actually this final file, you  see it is named like this, is equal to 2600 steps  

  • 00:27:03 model. So these 2 model checkpoints are actually duplicates. As I have explained, to prevent that, we

  • 00:27:11 need to make the save-every-N-steps value 1 more than the division result. So calculate

  • 00:27:20 the total number of steps, divide it by 5, and add 1 step. So the upload has started. You see

  • 00:27:26 the upload speed. It is huge. Currently, it is  uploading 5 models and the total upload speed  

  • 00:27:33 is like 100 megabytes per second. So it is equal  to 800 megabits per second. You see the speed. It  

  • 00:27:41 is huge. Once these uploads have been completed,  we will download them from the Hugging Face as we  

  • 00:27:46 wish. So all of the uploads have been completed. You see, it took about 5 minutes in total, not more

  • 00:27:53 than that to upload all of the models. So you see  we got the link here where the uploads have been  

  • 00:27:59 completed. Let's refresh our files and versions  and you see all of the files are here. We already  

  • 00:28:06 have downloaded this checkpoint. Now I will  download the other ones too. So click this icon  

  • 00:28:13 and it will ask you where to download. So I will  download them here. Let's download all of them.  

  • 00:28:18 Why am I downloading all models? Because I will do a checkpoint comparison. So instead of the 2600

  • 00:28:26 steps one, I am going to download this file. Which one was the latest? So if we look

  • 00:28:34 at the checkpoints, yeah, the last checkpoint is My_DB_Kaggle.safetensors. So that is the one we

  • 00:28:41 are going to download, which is this one. This is the last checkpoint. The other one is equal to

  • 00:28:45 2600 steps, so that duplicate checkpoint shouldn't have been generated. I will report this to

  • 00:28:52 the developer for fixing. Our last checkpoint is named like this, so let's also download it. All

  • 00:28:58 the files are getting downloaded as you are seeing  right now. Let's see the download speed. You see  

  • 00:29:03 it is 430 megabits per second and my maximum  download speed is 500 megabits. So it is almost  

  • 00:29:10 maximum. Okay, now we are completely done with the  Kaggle. So we don't need anything else from here.  

  • 00:29:17 Then you can terminate your session by clicking  here and start using your models. Now I will show  

  • 00:29:24 how to do a checkpoint comparison to find the best checkpoint among all of the saved checkpoints. So

  • 00:29:31 let's terminate this and I still have over 19  hours this week. When I hover my mouse here,  

  • 00:29:38 you see it shows when my quota will be reset. So  all of the downloads have been completed. I am  

  • 00:29:45 going to do a fresh installation of Automatic1111  UI. Let's go to our automatic installer list. In  

  • 00:29:52 here we have automatic windows installer which  is automatic installer for SDXL. So let's open  

  • 00:29:58 this link and we have automatic installer bat  file here. Let's download it. Where should we  

  • 00:30:05 install? I will install it into my F drive.  Let's say test. Let's enter inside test and  

  • 00:30:12 let's double click. More info, run anyway. It is cloning the Automatic1111 Web UI. This

  • 00:30:18 will install everything automatically for you.  It skipped the download SDXL.py file because it  

  • 00:30:24 doesn't exist. We didn't download it. I will move  my models into this folder. So they were inside my  

  • 00:30:31 downloads folder. Let's refresh and let's cut all  of the downloaded files. This is from yesterday,  

  • 00:30:38 not today. And let's move them into the models folder. If you don't have a computer with a GPU,

  • 00:30:45 then you can follow my how to use Kaggle for  Automatic1111 Web UI tutorial and you can upload  

  • 00:30:53 these models to your Kaggle with just wget into  the models folder and use them. So it is very  

  • 00:31:00 easy to upload them once you have them on your  Hugging Face account. If you want to use these  

  • 00:31:06 with the wget command, all you need to do is right-click, copy the link address, then, inside the

  • 00:31:14 models folder, wget this link, remove this part, and that's it. However, for this to work, you need

  • 00:31:20 to make this public. So when you are downloading  into your Kaggle or Google Cloud or RunPod, make  

  • 00:31:26 it public temporarily, then make it private again  and it will allow you to quickly download models.  
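If you prefer to script the download instead of using wget, the huggingface_hub library can fetch files even from a private repo when you pass a read token (repo and file names here are hypothetical):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="your-username/my-checkpoints",  # hypothetical private repo
    filename="My_DB_Kaggle.safetensors",     # hypothetical checkpoint file
    local_dir="models/Stable-diffusion",     # Automatic1111 models folder
    token="hf_...",                          # read token; no need to make the repo public
)
print(path)
```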

  • 00:31:33 All right, automatic installer is installing. Our  models are here. These models have the embedded  

  • 00:31:39 VAE, the best VAE, so we don't need a secondary VAE for this to work. Okay, the installation has been

  • 00:31:46 completed. The Stable Diffusion Automatic1111  Web UI automatically started. We can see the  

  • 00:31:52 checkpoints here. There are several things that  I do. First of all, go to the settings tab,  

  • 00:31:57 in here go to the user interface, and in here add the VAE quick selection like this. Apply settings

  • 00:32:04 and then go to the extensions tab, click available, load from. Search for after and

  • 00:32:11 install After Detailer. You see this extension,  click install and you can watch the status from  

  • 00:32:17 here. So you see, my automatic installer installed everything for me. For this to work,

  • 00:32:23 you need to have Python installed; you see my default Python is 3.10.11. In addition to that, you

  • 00:32:30 need to have Git installed; when you type git, you should get a message like this. All right, the

  • 00:32:36 web UI is getting reloaded. It is installing the  necessary requirements. Everything is installed.  

  • 00:32:43 After this, go to the installed tab, apply and restart the UI, so you will have the newly installed

  • 00:32:49 extension. We should also add --no-half-vae to our Web UI arguments. So it is here: I am editing the webui-user.bat

  • 00:33:01 file so let's add no-half-vae and after that  I will restart the web UI. So let's go back to  

  • 00:33:09 Web UI and start again. This is how you start the  Automatic1111 Web UI. I am starting with xFormers  

  • 00:33:14 and no-half-vae. After detailer is initialized.  I will make this none so it will use the embedded  

  • 00:33:21 VAE. I prefer to use DPM++ 2M SDE Karras. This is  really important as a sampling method and first  

  • 00:33:30 we need to decide which checkpoint is best. But before we start, there is one more thing that

  • 00:33:35 I will do. There haven't been any significant  updates to the Automatic1111 Web UI for weeks,  

  • 00:33:44 actually for months. When we go to the development  branch in the Web UI GitHub repository, we will  

  • 00:33:50 see that it is 367 commits ahead. Therefore, I will switch the Web UI to the development

  • 00:34:01 branch. So how will we do that? Let's close this.  Let's go back to our installation, start a cmd  

  • 00:34:08 like this and do git checkout dev. Now we are at  the development branch. Let's also update it. Git  

  • 00:34:15 pull and now we are at the latest version of the  development branch. Then let's restart the Web UI  

  • 00:34:22 like this and I will use the amazing prompts that  I have found after a lot of research to do x/y/z  

  • 00:34:31 checkpoint comparison testing. The prompts are in  this post. Let's click it from here. You see it  

  • 00:34:38 is under suggested resources in the GitHub Readme  file. This Readme file will be in the description  

  • 00:34:43 of the video and you see there is version 1  prompt list.pdf or you can also download the  

  • 00:34:48 images and use the png info. Actually, let's  do that. So these are the images. They are  

  • 00:34:54 getting downloaded. Let's open, extract and the  images are here. So now I can use png info. You  

  • 00:35:02 can alternatively also download the pdf file and  use that. So the prompt images are here. I have  

  • 00:35:08 gone to the PNG info tab. Now I will just drag and  drop the image that I want to use as a checkpoint  

  • 00:35:16 comparison testing. You see, the prompt images are here. Which one should we use? Maybe, yeah,

  • 00:35:23 let's use this one perhaps. This is a good one. Let's see the image. It is a decent image.

  • 00:35:29 By the way with the very best configuration  that I have on Patreon right now, I can get  

  • 00:35:35 even better than this image. This image has been  generated without Text Encoder. Unfortunately,  

  • 00:35:41 the Kaggle performance is not as good as what you get on your own computer with a 24 gigabyte GPU or

  • 00:35:48 on RunPod. But still, we will see the performance  of the Kaggle training. I will use sampling steps  

  • 00:35:55 20. You can also use 40. It will be a little  bit better, but for this demonstration let's  

  • 00:36:00 use 20. You see, this is the rare token that we trained with: ohwx man. If you are training a woman,

  • 00:36:07 you need to change this to woman, or to whatever rare token and class token you have chosen.

  • 00:36:13 Don't forget that. Width and height have to be 1024 by 1024. Or, if you have done the training at a

  • 00:36:20 different resolution, then you should use that. Let's make the batch count 9. Let's make this a random

  • 00:36:26 seed. Let's uncheck this VAE, because we have used the best VAE. Now these models have the best VAE

  • 00:36:34 automatically. You don't need any additional VAE.  Let's enable after detailer. You see the prompt  

  • 00:36:39 is also here. Let's set the detection mask only the top k largest option to 1 so it will only inpaint my

  • 00:36:47 face. If there are multiple persons in the image, the other faces won't get inpainted. And I will also change

  • 00:36:53 the inpainting denoising strength to 0.5; it is already like that. You don't need to change any

  • 00:36:59 other options if you are not sure; the defaults are good. Photo of ohwx man is the prompt. Let's make

  • 00:37:06 the x/y/z plot. So this is how we are going to do  checkpoint comparison. We go to the very bottom.  

  • 00:37:11 We select the x/y/z plot from here. We select  the checkpoint name from here. It will list the  

  • 00:37:17 checkpoints. I will start from the fewest training steps and go to the most. By the way,

  • 00:37:23 2600 and the very last one are the same. I also compared them. So I will select the 2600, or

  • 00:37:31 maybe let's select the last one. It shouldn't  matter. And let's also make the grid margins  

  • 00:37:36 50. All right. So this is it. Now we are ready to  do testing. At every checkpoint it will generate 9  

  • 00:37:43 images with these prompts and it will also do face  inpainting to improve the face quality. Cfg scale  

  • 00:37:50 is 7, then let's generate. Okay, the generation has started. This way we will compare the outputs

  • 00:37:58 of each checkpoint. Then we will decide which checkpoint looks best. All right,

  • 00:38:03 we got the images. There is one more thing that  I want to mention. If your GPU is not good then  

  • 00:38:10 you should edit the webui-user.bat file and add --medvram here. If it is still working very slowly

  • 00:38:18 or not working at all, then you can use --lowvram. This will make the application work slower, but it will

  • 00:38:26 work. So try --medvram first, and if that still fails, use --lowvram. Don't forget that. But I don't

  • 00:38:34 need to add them right now. Okay, so we click this icon. It will open the outputs folder like this.

  • 00:38:41 Let's go to the text to image grids and in here at  the very bottom we will find the very last grid.  

  • 00:38:50 Okay. So we got the images. Let's look at them.  Here we see the images. The first checkpoint is  

  • 00:38:58 looking very bad actually. Let me also open one of  the images from the training data set so we will  

  • 00:39:05 have a better comparison. Let's go to the pictures folder where my training images were. Okay, here. For

  • 00:39:13 example, let's use this one. Okay, now we will have a better idea. You see, the first checkpoint

  • 00:39:19 is not good, so let's look at the second checkpoint. By the way, I am using paint.net,

  • 00:39:25 a free image editor. The second checkpoint is also not looking good. Let's look at the third

  • 00:39:31 checkpoint. The third checkpoint is very decent.  You see you can get very good images by generating  

  • 00:39:39 more images. As you can see, the realism is really good. Not all the images will be

  • 00:39:44 super good, because we are doing a free training; we don't have a 24 gigabyte GPU, but it is very decent, and

  • 00:39:52 my data set is not good, as I said. If you improve your data set you will get better results. I am

  • 00:39:58 using this at-best medium quality data set, as I said, so that you can get a better data set

  • 00:40:04 than this and collect better images. Also, some images will take after your other training images,

  • 00:40:10 not the very best looking one. So improve your data set to get better quality images. Okay,

  • 00:40:15 this is the fourth checkpoint, 2080 steps. It is effectively 4160 steps because the

  • 00:40:22 batch size is 2. Yeah, we can see the results. The shape of the head is broken

  • 00:40:29 for some reason. And this is the last checkpoint; yeah, the head is broken. I think it is a little

  • 00:40:36 bit over trained. So among these I think the best  looking one is the third checkpoint. Maybe if we  

  • 00:40:44 had a checkpoint between these two it would be  better. So you can do multiple trainings and  

  • 00:40:49 aim for different checkpoints to see which number of steps is best. I think it can be between these

  • 00:40:55 two on these settings. But you can also do more checkpoint comparison tests with more images, not

  • 00:41:03 just a single prompt. So let's select another prompt from our downloaded prompts. Let's see

  • 00:41:09 which one we should try. Okay, let's try this prompt, so I will copy the positive prompt and

  • 00:41:17 I will copy the negative prompt. The rest will be the same. So let's hit generate and see the results.

  • 00:41:22 Okay, the second test has been completed as well.  Let's open the image file and let's look at the  

  • 00:41:28 each checkpoint. Okay let's see: first, second,  third checkpoint, fourth one and the fifth one  

  • 00:41:38 is here. Yeah, the fifth one's face is really broken for some reason. Some of them are looking

  • 00:41:44 good though. I think the third one is still looking the best. Probably from the third checkpoint,

  • 00:41:50 we can get really good images, but I would say  that something between third and fourth would  

  • 00:41:56 be better if we had more frequent checkpoints.  So perhaps we could reduce the number of steps  

  • 00:42:03 and have more frequent checkpoints. So to get good  images that we would like, what can we do? We need  

  • 00:42:10 to generate a lot of images and find the very best  ones. And we need to generate fast. For generating  

  • 00:42:17 fast we need TensorRT. For TensorRT I have this  auto installer in this post. Let's download the  

  • 00:42:24 TensorRT installer, this bat file. I also have a full tutorial here, so you can watch that

  • 00:42:31 tutorial to install it manually yourself and learn everything about it. Let's move this into our

  • 00:42:37 repository. So I will quickly install it into my  Stable Diffusion Web UI. Just double click. More  

  • 00:42:45 info, run anyway. It will install the extension  and also the necessary cuDNN library for me  

  • 00:42:52 automatically. Then I will generate the TensorRT  model quickly. Okay, download has been completed.  

  • 00:42:59 Let's restart the Web UI. I am in the development  branch. That is really important. Okay, TensorRT  

  • 00:43:06 installed and started. Let's enable the U-Net selector from the user interface settings: U-Net, from here sd_unet,

  • 00:43:14 apply and reload UI. Since I think that the third  checkpoint which is 1560 steps is best, I will  

  • 00:43:23 generate a TensorRT engine for that one. To do that, first I will select the model from here as you

  • 00:43:29 are seeing right now. Once the model is loaded, I will use 1024 resolution, batch size 1, but I will change the

  • 00:43:37 optimal prompt token count to 225. So if the  prompt is long, it will not cause any errors.  

  • 00:43:46 Let's export engine. It will export the engine  and do everything for me automatically. Let's  

  • 00:43:52 see if there is any error. Okay, we have an error somewhere. It is an assertion error about

  • 00:44:00 the batch size not matching. Okay, maybe we don't need to use static shapes since we

  • 00:44:07 are changing the prompt token count. Yeah, let's make it like this: export engine. Yeah, you see,

  • 00:44:14 after I enabled the advanced settings, I set the min prompt token count to 75, optimal to 75, and max to

  • 00:44:22 225. Now it is starting. This is really important.  Watch this full tutorial to learn more about it.  

  • 00:44:29 The tutorial link is here. The automatic installer  is here. Okay, so the TensorRT file has been  

  • 00:44:35 generated and saved into disk. Now we can generate  images very fast and I am going to use the amazing  

  • 00:44:44 prompt list as a txt file from this post. Let's  download it. Let's open all of the prompts like  

  • 00:44:52 this. If you have a different rare token, just  use notepad++ and replace all of them as you wish  

  • 00:44:59 Okay, let's copy it. Let's go here and, from the very bottom, select

  • 00:45:06 prompts from file or text box. You can upload or  copy them. Let's upload the downloaded file like  

  • 00:45:12 this. All right and let's enable after detailer.  Photo of ohwx man. For non-realistic prompts this  

  • 00:45:20 may override the style, but for realistic prompts this will work very well. And for inpainting denoising I

  • 00:45:27 made it 0.5, which is equal to 50 percent. Let's change the height and width. Batch count 1.

  • 00:45:36 Let's select the best sampling method. All right,  everything is looking good. Then from SD_UNET I  

  • 00:45:42 will select the newly generated TensorRT file  and I will also delete the older generated  

  • 00:45:50 images, so we will see the newly generated images, and let's hit generate and see the speed. All right,

  • 00:45:56 let's open the command line interface to see the speed while I am generating. Uh, okay, we got

  • 00:46:03 an error; let's see what it is. Maybe it didn't read them as single lines. So let's try with a

  • 00:46:12 single line first and verify whether TensorRT is working or not. Okay, let's generate. Yeah,

  • 00:46:18 we have a problem: a multi-head attention forward name error. It is still trying to use all of the

  • 00:46:25 prompts for some reason. That is weird. All right,  let's restart the Web UI. Sometimes there might be  

  • 00:46:32 some errors; it is not a very important issue. You will also get these errors. You can ignore

  • 00:46:37 them. Okay, the correct model is selected. Let's set the resolution. Let's try quickly. Okay,

  • 00:46:44 the image is getting generated. The speed is very  very decent. Now let's enable After Detailer and  

  • 00:46:52 change everything to the correct settings. All right. Okay, let's try one more time with After

  • 00:46:59 Detailer enabled. Okay. Okay the speed is huge as  you are seeing right now. It is really fast even  

  • 00:47:06 though I am recording a video. Okay, the face improved, but I can see the clothing overtraining. This is

  • 00:47:15 like my shirt in my training images. So you really  need to improve your training data set if you want  

  • 00:47:21 the very best quality. Okay, it is looking decent.  If you want to use high resolution fix then you  

  • 00:47:27 need to increase the resolution of the TensorRT  that you have generated. Now time to try the  

  • 00:47:34 prompts from file or textbox. Now let's upload  all of them, insert prompts at the start or end  

  • 00:47:42 it says. So let's try like this: let's delete the original prompt. Okay, now

  • 00:47:49 it started generating all of the images. For some  reason, probably due to a bug in the Automatic1111  

  • 00:47:56 Web UI, it wasn't working before. Now it will generate all of the images very quickly and amazingly.

  • 00:48:04 With this strategy, you can get perfect images in different styles and use the very best

  • 00:48:11 ones. So you see, in that image there were multiple faces, but it only changed a single face. Okay,

  • 00:48:17 you see this face is also getting improved like  this, but the quality is not at the 24 gigabyte  

  • 00:48:24 training level with BF16. On Kaggle, we are using  FP16 and I think BF16 is working better than FP16  

  • 00:48:32 for some reason. I don't know, probably due to  the precision of the weights, but the results are  

  • 00:48:37 really, really good. You cannot get these results with Stable Diffusion 1.5 based model training.

  • 00:48:44 Of course not all the images will be best so  you can generate multiple images and find the  

  • 00:48:50 very best ones. You see, a very decent image. You can even do high resolution fix by making

  • 00:48:57 a TensorRT engine for it. So each image generation is taking about 2 seconds, or with After Detailer

  • 00:49:05 about 5 seconds. I also have a tutorial for how to use Automatic1111

  • 00:49:11 Web UI on Kaggle: how to use Stable Diffusion SDXL ControlNet LoRAs

  • 00:49:17 for free without a GPU on Kaggle. If you don't have a strong GPU or a

  • 00:49:22 strong computer, watch that tutorial to learn how to use the Automatic1111 Web UI on Kaggle with super

  • 00:49:29 fast speed. Believe me, it's really working very  well with the GPU that Kaggle provides. So you can  

  • 00:49:35 do everything that we are doing on our computer  on Kaggle for free and we are getting very good  

  • 00:49:41 images as well, like this one you are seeing right now. The results are decent, I like them, and all of

  • 00:49:48 these are made with a free Kaggle account. We didn't spend anything, you see. This is decent.

  • 00:49:54 Okay, there is another one, as you are seeing right now. Very decent. The model is performing decently.

  • 00:50:00 I think for free these are really really good  and you can try different prompts, improve your  

  • 00:50:05 training data set, and change the number of steps you train for to get the very best results.

  • 00:50:10 If you decide to use paid Google Colab, Kaggle, or RunPod and have a 24 gigabyte GPU, our

  • 00:50:18 very best settings are shared here. These settings were found after 100+ full trainings. I just

  • 00:50:26 updated them yesterday, so you can download the 24 gigabyte Text Encoder json. It is the very best

  • 00:50:33 version; do your training and get even higher quality results and more accurate images. But

  • 00:50:42 even with SDXL DreamBooth training on Kaggle, we are able to obtain amazing quality images like

  • 00:50:49 this one. This is all for today. You will find all  of the links and instructions that you need on the  

  • 00:50:55 GitHub Readme file. I hope you have enjoyed.  Please join our Discord channel and ask me any  

  • 00:51:02 questions that you have or reply to this video.  Our Discord channel is really really important.  

  • 00:51:09 You see, we have Discord online here. When you  click this link you will get to this page. Join  

  • 00:51:15 our server. We have over 5500 members, the majority of them involved with Stable Diffusion, AI, and Generative

  • 00:51:23 AI. I am working on even more amazing tutorials.  You can also purchase our Udemy course, you  

  • 00:51:29 can follow me on LinkedIn, you can follow me on  Twitter, on Deviantart, on CivitAI or Medium. You  

  • 00:51:35 can support me with Patreon or Buy Me A Coffee. I  appreciate that very much. On our channel you will  

  • 00:51:42 find amazing videos. Click the playlist and you  will find the playlists that I have. So I suggest  

  • 00:51:49 you to watch the playlists that we have. Hopefully  see you in another amazing tutorial video.
