

FurkanGozukara edited this page Oct 23, 2025 · 1 revision

Stable Diffusion XL (SDXL) Locally On Your PC - 8GB VRAM - Easy Tutorial With Automatic Installer



#SDXL is currently in beta, and in this video I will show you how to install and use it on your PC. This tutorial should work on all devices including Windows, Unix, and Mac, and it may even work with AMD, but I couldn't test that. I have also shown the settings for 8GB VRAM, so don't forget to watch that chapter.

Source GitHub Readme File ⤵️

https://github.com/FurkanGozukara/Stable-Diffusion/blob/main/Tutorials/How-To-Use-Stable-Diffusion-SDXL-Locally-And-Also-In-Google-Colab.md

Automatic Installer Script File ⤵️

https://www.patreon.com/posts/auto-installer-85678961

Our Discord server ⤵️

https://bit.ly/SECoursesDiscord

If I have been of assistance to you and you would like to show your support for my work, please consider becoming a patron on Patreon 🥰 ⤵️

https://www.patreon.com/SECourses

Technology & Science: News, Tips, Tutorials, Tricks, Best Applications, Guides, Reviews ⤵️

https://www.youtube.com/playlist?list=PL_pbwdIyffsnkay6X91BWb9rrfLATUMr3

Playlist of #StableDiffusion Tutorials, Automatic1111 and Google Colab Guides, DreamBooth, Textual Inversion / Embedding, LoRA, AI Upscaling, Pix2Pix, Img2Img ⤵️

https://www.youtube.com/playlist?list=PL_pbwdIyffsmclLl0O144nQRnezKlNdx3

00:00:00 How to use SDXL locally on your PC

00:01:01 How to install via Automatic installer script

00:01:35 Beginning manual installation

00:01:47 How to accept terms and conditions to access SDXL weights and model files (instantly approved)

00:02:08 What the agreement page looks like and how to fill in the form for instant access

00:02:38 How to generate Hugging Face access token

00:02:53 Continuing the manual installation

00:03:36 Automatic installation is completed. How to start using SDXL

00:04:00 How to add your Hugging Face token so that Gradio will work

00:04:45 Continuing the manual installation

00:05:19 Manual installation is completed. How to start using SDXL

00:06:17 How to delete cached model and weight files

00:06:44 Live demonstration of the app downloading the weight files

00:07:20 Advanced settings of the Gradio APP of SDXL

00:08:11 Speed of image generation with RTX 3090 TI

00:08:39 Where the generated images are saved

00:09:44 8 GB VRAM settings - minimum VRAM settings for SDXL

00:10:06 How to see file extensions on Windows

Paper: https://github.com/Stability-AI/generative-models/blob/main/assets/sdxl_report.pdf

SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis

Abstract

We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: the increase of model parameters is mainly due to more attention blocks and a larger cross-attention context, as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model which is used to improve the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared to previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators. In the spirit of promoting open research and fostering transparency in large model training and evaluation, we provide access to code and model weights.

The last year has brought enormous leaps in deep generative modeling across various data domains, such as natural language [50], audio [17], and visual media [38, 37, 40, 44, 15, 3, 7]. In this report, we focus on the latter and unveil SDXL, a drastically improved version of Stable Diffusion. Stable Diffusion is a latent text-to-image diffusion model (DM), which serves as the foundation for an array of recent advancements in, e.g., 3D classification [43], controllable image editing [54], image personalization [10], synthetic data augmentation [48], graphical user interface prototyping [51], etc. Remarkably, the scope of applications has been extraordinarily extensive, encompassing fields as diverse as music generation [9] and reconstructing images from fMRI brain scans [49].

User studies demonstrate that SDXL consistently surpasses all previous versions of Stable Diffusion by a significant margin (see Fig. 1). In this report, we present the design choices which lead to this boost in performance, encompassing i) a 3× larger UNet backbone compared to previous Stable Diffusion models (Sec. 2.1), ii) two simple yet effective additional conditioning techniques (Sec. 2.2) which do not require any form of additional supervision, and iii) a separate diffusion-based refinement model which applies a noising-denoising process [28] to the latents produced by SDXL to improve the visual quality of its samples (Sec. 2.5).

A major concern in the field of visual media creation is that while black-box models are often recognized as state-of-the-art, the opacity of their architecture prevents faithfully assessing and validating their performance.

Thumbnail photo taken from Twitter: stonekaiju

Video Transcription

  • 00:00:00 Greetings everyone.

  • 00:00:01 In this video I will show you how to install and use Stable Diffusion XL on your computer

  • 00:00:06 with an advanced Gradio interface like this.

  • 00:00:09 It is very convenient to use, very easy to use.

  • 00:00:12 You can now set number of images that you want to generate.

  • 00:00:15 You see I have generated 20 images with that simple prompt.

  • 00:00:19 Some of the images are amazing quality.

  • 00:00:22 I didn't have much time to test different prompts and you can test them.

  • 00:00:27 On my computer I am also able to use refiner steps.

  • 00:00:30 So this is better than free Colab.

  • 00:00:32 However, if you don't have such a strong GPU, I will also publish a RunPod tutorial as well.

  • 00:00:38 So I have prepared an amazing Github readme file.

  • 00:00:41 All of the information for this tutorial will be here.

  • 00:00:44 You will find the link of this tutorial in the description of the video.

  • 00:00:48 Also, I will update this file if necessary.

  • 00:00:52 To follow this tutorial, all you need is to install Python.

  • 00:00:55 If you don't know how to install Python, watch this tutorial video.

  • 00:00:59 I prepared also automatic installer and we have manual installation as well.

  • 00:01:05 So if you are my Patreon supporter, go to this link.

  • 00:01:08 The instructions are also written here.

  • 00:01:11 Download the install.bat file and run.bat file.

  • 00:01:15 The automatic installation is an automated version of the manual installation.

  • 00:01:19 I will begin with automatic installation.

  • 00:01:21 I cut the downloaded files.

  • 00:01:23 Make a new folder here as auto SDXL.

  • 00:01:27 Enter inside it.

  • 00:01:28 Just double click the install.bat file.

  • 00:01:31 More info, run anyway and it will start downloading and installing automatically.

  • 00:01:35 Let's also begin our manual installation: make a new folder, SDXL manual.

  • 00:01:39 Enter inside this folder.

  • 00:01:41 We will begin with cloning my repo.

  • 00:01:43 Copy.

  • 00:01:44 Start a new cmd.

  • 00:01:46 Git clone.

  • 00:01:47 To be able to use SDXL, you have to join Hugging Face and get a token.

  • 00:01:52 So if you don't have an account on Hugging Face, click this link and register from here.

  • 00:01:58 If you already have an account, login from here.

  • 00:02:01 Once you are logged in to your account, you need to accept the terms of service of these two

  • 00:02:06 repositories.

  • 00:02:07 Click both of them.

  • 00:02:08 When you click the link, it will show you a researcher early access agreement like this.

  • 00:02:13 Go to very bottom.

  • 00:02:15 Fill the fields as you wish.

  • 00:02:18 I am filling them with mine.

  • 00:02:20 Check this checkbox and submit application.

  • 00:02:23 Then you will see all of the files like this.

  • 00:02:26 You need to complete this step.

  • 00:02:28 Otherwise, you won't be able to use Stable Diffusion SDXL.

  • 00:02:33 Once you have accepted the terms and conditions, they will give you access to those files.

  • 00:02:38 Then go to your Hugging Face tokens from this link.

  • 00:02:42 Generate a new token.

  • 00:02:44 Test one, read and write, generate token, copy test one.

  • 00:02:48 You can give it any name.

  • 00:02:50 Paste the generated token somewhere else to keep it.

  • 00:02:53 Then we will begin with generating a new virtual environment.

  • 00:02:57 So installation of Stable Diffusion SDXL won't affect your other installations such as Automatic1111

  • 00:03:04 web UI.

  • 00:03:05 So enter inside your cloned repository, start a new CMD, copy paste the command python -m

  • 00:03:12 venv venv.

  • 00:03:14 Then just follow the commands here copy, paste, copy, paste, copy, paste.

  • 00:03:21 Now our virtual environment is activated as you are seeing.

  • 00:03:25 Copy, paste, copy, paste, copy paste and hit enter.

  • 00:03:30 It will install the Torch latest version with CUDA support.
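The point of the virtual environment is isolation: packages installed inside it do not touch your other installations such as Automatic1111. As a small stdlib sketch (the helper name is mine, not from the app), you can confirm a script is actually running inside the activated venv:

```python
import sys

def in_virtualenv() -> bool:
    """True when this interpreter runs inside a venv: an activated venv
    gives Python a sys.prefix different from the base installation's."""
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)
```

If this returns False when you expected True, the venv was probably not activated before running the script.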

  • 00:03:34 Meanwhile, automatic installation is doing everything for us automatically.

  • 00:03:38 Automatic installation has been completed.

  • 00:03:40 Now we can begin to use it.

  • 00:03:43 By the way, if you encounter any problem with installation, just reply to this video or

  • 00:03:48 my Patreon post.

  • 00:03:50 And hopefully I will update the necessary readme file and automatic scripts.

  • 00:03:54 So let's run our automatic installation.

  • 00:03:57 After installation has been completed, you need to make this change both in automatic

  • 00:04:02 installation and also in manual installation.

  • 00:04:04 Go to inside your cloned folder, right click app2.py file and edit it.

  • 00:04:11 I am using Notepad++ but you can use any editor.

  • 00:04:14 In here all you need to do is you need to change access token.

  • 00:04:18 So I copy paste my access token save it.

  • 00:04:22 This is necessary.
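The token edit can be sketched roughly like this; the variable name and the sanity-check helper are my assumptions, so match whatever your actual app2.py uses:

```python
# Hypothetical sketch of the token edit in app2.py; the variable name
# "access_token" is an assumption -- check your actual file.
access_token = "hf_your_token_here"  # paste the token you generated

def token_looks_filled_in(token: str) -> bool:
    """Rough sanity check: a real Hugging Face token starts with 'hf_'
    and is much longer than a placeholder string."""
    return token.startswith("hf_") and len(token) > 20
```

The token is what authorizes the app to download the gated SDXL weights from Hugging Face on first run.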

  • 00:04:23 And then all I need is double clicking run.bat file.

  • 00:04:26 More info.

  • 00:04:28 Run anyway.

  • 00:04:29 When I first time run it, it will download the necessary weights and model files from

  • 00:04:35 Hugging Face automatically.

  • 00:04:37 Since I have them in my cache, it didn't download them.

  • 00:04:40 So it just started and I am able to now use it.

  • 00:04:44 But before doing that, let's continue with our manual installation.

  • 00:04:47 So the last step we executed was this one.

  • 00:04:50 Now we will execute this one.

  • 00:04:53 This development version may get outdated when you are watching this video.

  • 00:04:58 If it does, just reply to me and I will update the xFormers to the latest version.

  • 00:05:03 xFormers is an optimization library that will speed up our inference and that will lower

  • 00:05:10 our VRAM usage.
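diffusers pipelines expose `enable_xformers_memory_efficient_attention()` for this; a defensive sketch (the wrapper name is mine) keeps the app running even if xFormers is missing or outdated:

```python
def enable_xformers_if_available(pipe) -> bool:
    """Try to switch the pipeline's attention to xFormers' memory-efficient
    kernels; return False (and keep the default attention) if that fails,
    e.g. because xFormers is not installed or its version is incompatible."""
    try:
        pipe.enable_xformers_memory_efficient_attention()
        return True
    except Exception:
        return False
```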

  • 00:05:11 Okay, it is installed, then we will copy this and execute it here.

  • 00:05:16 Manual installation is also completed.

  • 00:05:18 So close this window.

  • 00:05:20 First, we need to add our access token.

  • 00:05:22 So I will go to my manual installation here inside this repository, edit app2 file.

  • 00:05:29 Right click.

  • 00:05:30 Edit, copy my access token, paste it here as you are seeing, save it.

  • 00:05:34 Then you need to save these commands as a bat file. I will use the high VRAM one; copy

  • 00:05:41 it.

  • 00:05:42 So go to this folder, new run.bat file.

  • 00:05:47 You can give any name.

  • 00:05:48 Edit, copy paste the code, then double click it.

  • 00:05:52 It has to be inside this folder.

  • 00:05:54 Otherwise, it will throw you an error.

  • 00:05:57 Alternatively, if you want to put this inside this folder, edit it and remove this part

  • 00:06:03 of the folder path.

  • 00:06:04 Moreover, if you do that, you need to also remove this part as well.

  • 00:06:08 Okay, I am not saving changes.

  • 00:06:10 Okay, it is starting the manual installation.

  • 00:06:13 However, it didn't download the model files and weights because they were cached.

  • 00:06:17 Now I will delete them.

  • 00:06:19 So the files are located on your C drive: go to your Users folder, then your username.

  • 00:06:26 In there you will see a .cache folder; inside it, a huggingface folder; in there, go into

  • 00:06:32 hub, and there you will see this particular folder. You see it is 13 gigabytes.

  • 00:06:39 I will delete all of them like this, then I will run the bat file again.

  • 00:06:43 And let's see what happens now.

  • 00:06:45 Okay, you see it is starting to download all of the necessary files.

  • 00:06:49 Once these are downloaded, it will start the application.

  • 00:06:54 All files have been downloaded.

  • 00:06:55 Now it will start the application.

  • 00:06:57 You may also get a "model.safetensors not found" message.

  • 00:07:02 However, it is working.

  • 00:07:03 I don't know why it is displaying that.

  • 00:07:05 Now we need to open this URL, copy it, paste it into your browser.

  • 00:07:10 And here is our Stable Diffusion SDXL application.

  • 00:07:15 Type anything you want such as sports car.

  • 00:07:19 And let's generate one image.

  • 00:07:20 There is also advanced settings here.

  • 00:07:23 This number of images set here is for batch generation.

  • 00:07:27 I mean generating multiple images at the same time.

  • 00:07:32 This requires more VRAM.

  • 00:07:34 So I don't suggest increasing this.

  • 00:07:37 This is number of steps that it will use to generate images.

  • 00:07:40 This is refiner steps.

  • 00:07:42 Refiner is a new feature of SDXL.

  • 00:07:44 It improves the quality.

  • 00:07:47 However, it also increases the VRAM that you need.

  • 00:07:51 And guidance scale is the CFG scale.
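These sliders correspond to standard diffusers pipeline-call arguments (`num_inference_steps`, `guidance_scale`, `num_images_per_prompt`, `negative_prompt`); the mapping helper below is a hypothetical sketch, not code from the app:

```python
def generation_kwargs(steps=50, guidance=7.5, num_images=1, negative_prompt=""):
    """Map the Gradio sliders onto diffusers pipeline-call arguments:
    steps -> denoising steps, guidance -> CFG scale,
    num_images -> images generated at once (more VRAM per extra image)."""
    return dict(
        num_inference_steps=steps,
        guidance_scale=guidance,
        num_images_per_prompt=num_images,
        negative_prompt=negative_prompt,
    )

# The actual call is heavy (needs the SDXL weights and a GPU), sketched only:
# images = pipe("a fancy sports car", **generation_kwargs(steps=30)).images
```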

  • 00:07:54 Okay, our first image is here.

  • 00:07:56 When we typed a very primitive prompt like this, it is not very good.

  • 00:07:59 So let's try another one.

  • 00:08:01 A whale is flying in the sky amazing quality.

  • 00:08:07 And let's put some negative prompts let's generate.

  • 00:08:09 Let me show you the speed.

  • 00:08:10 Currently, I am recording a video with NVIDIA broadcast.

  • 00:08:14 However, my speed is over three iterations per second, as you are seeing right now.

  • 00:08:20 Let's look at the VRAM usage.

  • 00:08:22 It is using a lot of VRAM.

  • 00:08:24 Because we are also using refiner.

  • 00:08:26 In a moment I will show you the possible lowest VRAM settings as well.

  • 00:08:32 You see refiner is a little bit slower.

  • 00:08:35 Okay, we got our image.

  • 00:08:36 Yes, we got a better image.

  • 00:08:38 So where are these generated images saved?

  • 00:08:42 Go into your installation, inside our cloned folder in here you will see outputs.

  • 00:08:47 And here the generated images.

  • 00:08:50 Let's make this like 3 and let it generate 3 images.

  • 00:08:54 It will generate 3 images 1 by 1.

  • 00:08:57 This is the VRAM usage right now.

  • 00:08:59 The speed is decent.

  • 00:09:00 I am pretty sure it may get better when Automatic1111 web UI starts supporting it.

  • 00:09:06 But for now this is what we got.

  • 00:09:08 Moreover, since this is very new, Hugging Face is also working on transformers to improve

  • 00:09:14 the speed of inference as well.

  • 00:09:16 Okay, first image generated it is not displayed here yet.

  • 00:09:20 But we can see it here.

  • 00:09:21 This is another image.

  • 00:09:23 As usual, Stable Diffusion SDXL is also very dependent on how good your prompting is.

  • 00:09:29 For example, with another prompt, this is what I got.

  • 00:09:32 Or with another prompt, or with another prompt, this is what I got.

  • 00:09:36 On Twitter, I have seen much better images.

  • 00:09:38 It is all about how you are prompting.

  • 00:09:40 And now it has generated 3 images as you are seeing right now.

  • 00:09:44 Okay, now I will show you how to use Stable Diffusion XL with minimal amount of VRAM.

  • 00:09:51 So let me close this.

  • 00:09:53 After I closed it down, my dedicated GPU memory dropped to 1.8 gigabytes.

  • 00:09:58 For low VRAM, I need to copy this, make a new bat file here.

  • 00:10:05 run2.bat file.

  • 00:10:07 If you are not seeing the file extensions like this, go to view and check file name

  • 00:10:13 extensions here.

  • 00:10:15 Right click edit, copy paste the code.

  • 00:10:17 I also need to edit app2.py file, right click Edit with notepad++.

  • 00:10:24 So we are going to disable pipe to CUDA and we will enable CPU offload like this.

  • 00:10:30 This will reduce our inference speed, but it will also reduce the VRAM usage.
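Both placement strategies are real diffusers methods; the switch can be sketched like this (the helper and its name are mine, and `enable_model_cpu_offload()` requires the accelerate package):

```python
def configure_vram_mode(pipe, low_vram: bool):
    """High-VRAM mode keeps the whole pipeline resident on the GPU;
    low-VRAM mode streams submodules to the GPU only while they run,
    trading inference speed for a much smaller memory footprint."""
    if low_vram:
        pipe.enable_model_cpu_offload()  # needs `pip install accelerate`
    else:
        pipe.to("cuda")
    return pipe
```

This is exactly the trade-off shown in the video: roughly 8 GB of VRAM in offload mode versus much more with the whole pipeline on the GPU.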

  • 00:10:37 So let's try like this.

  • 00:10:39 I saved.

  • 00:10:40 Closed.

  • 00:10:41 This time we won't be using the refiner.

  • 00:10:44 Okay, let's see how much VRAM will increase, it is loading the model.

  • 00:10:49 So the model is loaded and it is only using about 900 megabytes right now.

  • 00:10:54 So let's refresh and let's try something like this, a fancy sports car.

  • 00:10:59 Okay, let's generate one image.

  • 00:11:01 Okay, it is starting to generate.

  • 00:11:03 Wow, it is only using about seven or eight gigabyte VRAM.

  • 00:11:08 So if you have a GPU with eight gigabytes of VRAM, you should be able to use the Stable Diffusion

  • 00:11:15 XL model on your computer like this.

  • 00:11:18 And it was pretty fast as well.

  • 00:11:20 And here's our generated image.

  • 00:11:22 So with these settings with only 8 gigabyte VRAM, you can use Stable Diffusion XL on your

  • 00:11:29 computer.

  • 00:11:30 If you don't have a suitable computer, you can use our Google Colab as well.

  • 00:11:33 I hope you have enjoyed this video.

  • 00:11:35 Please like, subscribe, join, and support me on YouTube.

  • 00:11:39 I would appreciate that very much.

  • 00:11:41 Please also consider supporting me on Patreon.

  • 00:11:43 Your support on Patreon is tremendously important for me, because my YouTube views are very

  • 00:11:50 low due to the style of content that I am teaching.

  • 00:11:54 However, my patron supporters are helping me tremendously.

  • 00:11:57 I thank all of them very much.

  • 00:12:00 I hope that you will also support me.

  • 00:12:02 Hopefully see you in another amazing video and one more thing I will also make RunPod

  • 00:12:06 tutorial for SDXL as well.
