When it comes to running Stable Diffusion, Automatic1111 is by far the most popular Web UI out there. This free Stable Diffusion Web UI lets you take advantage of all Stable Diffusion features and generate beautiful AI art.
In this comprehensive Automatic1111 guide, I’ll teach you step-by-step how Automatic1111 Web UI works and how you can use it to create images in Stable Diffusion.
Whether you’re just starting out with Stable Diffusion or someone experienced, you can use this Automatic1111 guide as a documentation manual covering all aspects of it.
So, without any further ado, let’s get started.
What is Automatic1111
Automatic1111 or A1111 is a GUI (Graphical User Interface) for running Stable Diffusion. It’s a Web UI that runs in your browser and lets you use Stable Diffusion with a simple and user-friendly interface.
Automatic1111 was originally called Stable Diffusion WebUI but the name Automatic1111 caught on instead as it was the GitHub username of the original author of the software.
The Automatic1111 Stable Diffusion WebUI has over 100k stars on GitHub and more than 480 people have so far contributed to improving the WebUI.
One of the biggest reasons why Automatic1111 has become the most popular, or the de facto, Stable Diffusion WebUI is its ease of use. It has a very user-friendly interface where you enter your prompts and generate your image.
As you go through this guide, you’ll realize how easy it is to use Stable Diffusion with Automatic1111.
Note: Throughout this guide, I’ll be using the terms Automatic1111 and Stable Diffusion WebUI interchangeably but they both mean the same thing.
How To Download & Install Automatic1111
To download and use Automatic1111, you’ll need Python and Git installed on your computer. Then you can clone the Automatic1111 GitHub repository.
While this seems like a straightforward way of installing Automatic1111, it requires you to enter commands in a command prompt, which is where people often make mistakes.
So, I recommend a different approach that’s relatively new but has been praised by the Stable Diffusion community for its simplicity.
The quickest and easiest way to install Automatic1111 is with a tool called Stability Matrix.
This is a one-click Automatic1111 installer that doesn’t require you to install Python or Git manually. The best part is that it works for Windows, Linux, and Mac OS.
It’s also fully portable so you can move the installation folder to a new directory any time you want.
Here’s how to install Stable Diffusion WebUI using Stability Matrix:
Step 1: Download & Install Stability Matrix
Visit the Stability Matrix GitHub page and you’ll find the download link right below the first image.
Click on the operating system for which you want to install Stability Matrix and download it. A .zip file will be downloaded to your chosen destination.
Extract the .zip file and you’ll find an .exe file named StabilityMatrix. Double-click on it to open it.
Step 2: Install Stable Diffusion WebUI (Automatic1111)
When you open Stability Matrix, you’ll see a pop-up window prompting you to install Stable Diffusion.
By default, it’s selected to Stable Diffusion WebUI (Automatic1111) but you can also install other interfaces such as ComfyUI, InvokeAI, and more.
Click on the big green Install button and it’ll start downloading the necessary dependencies to install Stable Diffusion.
This step might take some time as it’ll download all the packages.
Step 3. Run Stable Diffusion
Once the installation is complete, you’ll see a green button named Launch. Click on the Launch button and it’ll start running Stable Diffusion.
You’ll see everything that’s happening on the console in Stability Matrix.
Once Stable Diffusion is initialized, the WebUI will automatically launch in your web browser. In case it doesn’t, you can launch it by clicking on the Launch WebUI button.
Now, whenever you want to run Stable Diffusion, you can just open Stability Matrix and launch the Automatic1111 WebUI from there.
Related: How To Fix Stable Diffusion Exit Code 9009
How To Update Automatic1111
Updating Automatic1111 is very easy if you’re using Stability Matrix as you can do it without opening any command prompts.
Go to the Packages menu in Stability Matrix and you’ll find the Automatic1111 package installed. Click on Update and it’ll update Automatic1111 along with all of its installed packages.
If you’ve already updated to the latest version, you won’t see the Update button in Stability Matrix.
It’s that simple to update your Stable Diffusion WebUI to the latest version using Stability Matrix.
How To Use Automatic1111
When you launch the Stable Diffusion WebUI, this is how it will look:
From the above screenshot, you can see there are a ton of options on the screen which may seem a bit overwhelming to you.
But let’s go step-by-step with each setting and feature of Automatic1111.
At the very top, we have a checkpoint selector dropdown where you can choose the model you want to use while generating images. You can either use the default SD 1.5 model that came with your installation or download some really cool Stable Diffusion models.
Below the checkpoint selector, you’ll find tabs for the various features of Stable Diffusion.
Let’s go through each feature in detail.
Txt2Img (Text To Image)
The Txt2Img feature lets you generate an image by entering some prompts. You can enter a positive and negative prompt in the two big text fields.
The positive prompt defines what you wish to see in your image and the negative prompt lets you define what you don’t wish to see in the image.
Additionally, you can also define other parameters for your image in the Generation tab below the prompt fields:
Here are the definitions of each parameter and how to use them:
Sampling Method: The algorithm you want to use for the image sampling process. There are many methods you can choose from and they can affect how your output image looks. As a general rule, you should stick with DPM++ 2M Karras as it’s fast and tends to generate the best outputs.
Sampling Steps: The number of steps or iterations you want the sampler to go through while generating an image. More steps generally mean a more refined image, but with diminishing returns and a longer generation time. Anything between 20-35 sampling steps is the sweet spot as it generates a good image without taking too long.
Width & Height: The width and height of the image you want to generate. You can choose from many image dimensions but 512×768 or 768×512 is a good choice.
Batch Count: The number of times you want to run an image generation.
Batch Size: The number of images you want to generate each time you run image generation.
CFG (Classifier Free Guidance) Scale: The CFG scale controls how close you want the model to follow your prompt. The higher the scale, the more closely your prompt will be followed. Setting it to 7-8 strikes a good balance between your prompt and the model’s freedom.
Seed: Each image generated in Stable Diffusion has a seed value that determines the content of the image. By default, it’s set to -1, which means a random seed is chosen for every generation. Reusing the same seed with the same prompt and settings reproduces the same image, but in most cases you can leave it at -1.
Hires Fix: This enables the high-resolution fix which applies an upscaler to scale up your image. This improves the image quality especially if you’re generating images at 512 or 768 size.
Once you enable Hires Fix, you will have to configure the following settings:
- Upscaler: There are various upscalers available that have their advantages and disadvantages. As a general rule, I prefer using the R-ESRGAN 4x+ upscaler for most of the images.
- Hires Steps: The number of sampling steps for the upscaler. Set this to half the value of your sampling steps for good results. So, if you have set 30 sampling steps for the model, set 15 hires steps.
- Denoising Strength: The amount of noise added to the image before upscaling. This value can help you get sharper images but can also change your image drastically if you set it too high. You should set it between 0.5-0.8 for best results.
Refiner: A refiner lets you refine your generated image to improve it further. This feature is mostly useful if you’re using an SDXL model, as you can pair it with the SDXL refiner.
The refiner also has a Switch At option, which tells the sampler at what point in the sampling process to switch over to the refiner model.
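Side note: all of the Generation-tab settings above map onto plain JSON fields if you ever want to script the WebUI through its local API, which Automatic1111 exposes when launched with the --api flag. Here’s a minimal Python sketch; the endpoint and field names reflect the API as commonly documented, but double-check them against your own installation:

```python
import json
import urllib.request

def build_txt2img_payload(prompt, negative_prompt="", steps=30,
                          sampler_name="DPM++ 2M Karras", cfg_scale=7,
                          width=512, height=768, seed=-1, batch_size=1):
    """Map the WebUI's Generation-tab settings onto the API's JSON fields."""
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "steps": steps,                # Sampling Steps
        "sampler_name": sampler_name,  # Sampling Method
        "cfg_scale": cfg_scale,        # CFG Scale
        "width": width,
        "height": height,
        "seed": seed,                  # -1 = pick a random seed
        "batch_size": batch_size,
    }

def txt2img(payload, base_url="http://127.0.0.1:7860"):
    """POST the payload to a locally running WebUI started with --api."""
    req = urllib.request.Request(
        base_url + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # response contains base64-encoded images
```

Just like in the UI, sending the same seed with the same prompt and settings reproduces the same image.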
Now that you know all about the Txt2Img configuration settings in Stable Diffusion, let’s generate a sample image.
Here are the configuration settings along with the prompt I’ve used:
- Checkpoint Model: v1-5-pruned-emaonly.
- Width: 512
- Height: 768
- Sampling Steps: 30
- Sampling Method: DPM++ 2M Karras
- CFG Scale: 7
- Seed: -1
Positive Prompt:
(masterpiece, top quality, best quality, official art, beautiful and aesthetic:1.2),(indian princess:1.3),extremely detailed,(fractal art:1.2),(colorful:1.2),highest detailed,(zentangle:1.2),(dynamic pose),(abstract background:1.5),(traditional indian dress:1.2),(shiny skin),(many colors:1.4),upper body, masterpiece,best quality,high quality,highres,16K,RAW,ultra highres,ultra details,finely detail,an extremely delicate and beautiful,extremely detailed,real shadow,slime girl, realistic,highly detailed photo,award winning glamour photograph,photorealistic,by Marta Nael
Negative Prompt:
nsfw,bareness,EasyNegative,anime,cartoon,large breasts, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry bad-hands-5,
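The parentheses with colons in the prompts above, such as (colorful:1.2), are Automatic1111’s attention syntax: the number multiplies how strongly the model weighs that phrase. As a rough illustration, here’s a small sketch (my own helper, not part of the WebUI) that pulls those explicit weights out of a prompt:

```python
import re

# Matches Automatic1111-style "(text:weight)" attention spans.
WEIGHTED = re.compile(r"\(([^():]+):([0-9.]+)\)")

def extract_weights(prompt):
    """Return (phrase, weight) pairs for explicitly weighted spans."""
    return [(m.group(1), float(m.group(2))) for m in WEIGHTED.finditer(prompt)]

pairs = extract_weights("(indian princess:1.3), extremely detailed, (colorful:1.2)")
# pairs == [("indian princess", 1.3), ("colorful", 1.2)]
```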
I haven’t used any hires fix or refiner for this image generation. Here is the generated image:
Once your image is generated, you can choose one of the following options:
Open Directory: Opens the folder directory where the generated image(s) is located.
Save Image: Save the generated image(s) to your desired location.
Zip Archive: Archive the generated image(s) into a zip file.
Send To Img2Img: Send the selected image to the Img2Img tab
Send To Inpaint: Send the selected image to the Inpaint section of the Img2Img tab.
Send To Extras: Send the selected image to the Extras tab.
Let’s move on to the next tab.
Img2Img (Image To Image)
The Img2Img feature lets you generate an image using some other image. This can be useful if you want to create an image that looks similar to some image or if you want to modify an existing image using Stable Diffusion.
In the img2img tab, you have the same prompt fields for the positive and negative prompts. This works the same way as txt2img where you can add prompts in both fields.
In the generation tab, there are tabs for the different img2img methods you can use.
Let’s go through each tab in detail.
Img2img
The first tab is the img2img tab which is the basic functionality where you add a base image and generate a new image that has a similar composition as your base image.
You can drag and drop the image you want to use for img2img in this tab.
Once you’ve uploaded your image on the canvas, you can now set the parameters for the image generation. You’ll notice that there are a few additional options in the generation parameters.
Here’s what they mean:
Resize Mode: The resize mode lets you choose how the output image will be resized if it doesn’t have the same dimensions as your input image. There are different options you can choose from such as:
- Just Resize: The input image will be scaled to the specified dimensions of the output image. This could cause the image to be squeezed or stretched.
- Crop & Resize: The input image is resized to fill the output dimensions while keeping its aspect ratio. As a result, the output isn’t stretched or squeezed, but the parts that don’t fit are cropped out.
- Resize & Fill: The input image is fit within the output dimensions while keeping its aspect ratio, and the leftover empty space is filled with the average color of the input image.
- Just Resize (Latent Upscale): It’s similar to “Just Resize” but the image is upscaled in this method.
Denoising Strength: The denoising strength controls how much the input image will be changed. When it’s set to 0, nothing changes, while a value close to 1 replaces the image almost entirely. You should set it anywhere between 0.3 and 0.75 for good results.
Besides the above two options, the other parameters are all the same as txt2img. The height and width parameters are replaced by the Resize To and Resize By tabs, which let you choose how you want to resize the image.
By choosing Resize To, you can specify the exact dimensions for the output image. On the other hand, the Resize By option lets you specify how much you want to scale the image.
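The arithmetic behind the two tabs is simple; here’s a quick sketch (my own helper functions, just to illustrate the difference):

```python
def resize_to(width, height):
    """Resize To: you name the exact output dimensions."""
    return (width, height)

def resize_by(in_width, in_height, scale):
    """Resize By: the output is the input scaled by a factor."""
    return (round(in_width * scale), round(in_height * scale))

# A 512x768 input scaled by 1.5 becomes 768x1152.
print(resize_by(512, 768, 1.5))  # → (768, 1152)
```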
Here’s an image I generated using the img2img feature in Automatic1111. The input and output images are displayed side by side below:
Here is the prompt and configuration settings for the image:
- Checkpoint Model: RevAnimated
- Resize To Width: 512
- Resize To Height: 768
- Sampling Steps: 35
- Sampling Method: DPM++ 2M SDE Karras
- CFG Scale: 7
- Denoising Strength: 0.5
- Seed: -1
Positive Prompt:
hailstorm, thunder, lightning, ice, melting, heavy rain
Sketch
The Sketch feature lets you generate an image from a sketch which is very cool. If you switch over to the Sketch tab, you’ll find an option to upload an image.
Here, you should upload an image with a plain black or white background. Once you’ve uploaded that, you can start sketching on the canvas.
You can also change the size of the brush and change its color allowing you to draw some really good sketches.
The rest of the generation parameters are the same as the img2img tab.
Here’s an image I generated from a sketch:
I used the prompt:
sci-fi, cyberpunk city, dark night, foggy atmosphere
So, with the Sketch tool, you can turn drawings into beautiful images using Stable Diffusion.
Inpaint
This is probably the most commonly used img2img feature of Stable Diffusion as it’s very powerful.
With inpaint, you can customize certain parts of an image by drawing a mask over it. Here are two scenarios where inpainting can be very useful.
- You generated an image using txt2img but it has a minor defect, such as badly drawn hands. With inpaint, you can draw a mask over the hands and fix them.
- You have a beautiful image of a landscape but you want to add birds flying in the sky. You can paint over the sky and add birds using inpainting.
These are just two examples of how inpainting can be used. You can do a lot more with it; in my opinion, it’s one of the most powerful features of Stable Diffusion.
So, how does this work? Well, it’s pretty simple.
You upload an image into the canvas and paint over the part you want to customize. You can paint over the entire image or a certain part that needs to be changed or modified.
Before we generate an image using inpainting, let me explain the different generation parameters you’ll find in this tab.
Mask Blur: The mask blur controls how much the edges of the masked area are blurred before inpainting, which helps the result blend in. I usually leave it at the default value of 4.
Mask Mode: This lets you decide what part of the image you want to change. With “Inpaint Masked”, the masked area will be changed whereas “Inpaint Not Masked” changes the area that’s not covered by the mask.
Masked Content: This lets you choose how the content of the masked area will be generated. From all the options available, you should stick with “Original” for most cases.
Inpaint Area: You can choose if you want to inpaint all over the image or only the selected mask area.
Only Masked Padding: The padding area of the mask. By default, it’s set to 32 pixels.
Here’s an image I generated using inpainting in Automatic1111.
I just added the positive prompt: birds flying in the sky
I’ve used the following configuration for the generated image:
Inpaint Sketch
The Inpaint Sketch feature is very similar to Inpaint, but here the colors you paint with actually guide the generation: only the masked area is modified, and the unmasked area is not touched at all.
While you can sometimes get similar results with plain Inpaint, that’s often a matter of trial and error.
With Inpaint Sketch, you have more direct control over the output, since the brush colors are taken literally when determining the color of the masked area in the output image.
Here’s an image I generated using Inpaint Sketch:
Notice how the output follows the color I’ve used for the mask.
Inpaint Upload
With Inpaint Upload, you can upload a mask image separately instead of drawing a mask over your input image.
This is useful if you’re drawing your own mask image on a different software such as Adobe Photoshop.
Batch
The batch feature in Automatic1111 allows you to perform img2img generation on multiple images at once.
You can set the input and output directory of where the images are stored and where you want the output images to be stored respectively.
Extras
The Extras feature in Automatic1111 lets you upscale your generated image to a larger dimension using upscalers. This is why you’ll see the “Send To Extras” button below the generated image.
Let’s say you generated an image using txt2img with 512×768 dimensions. In most cases, this image isn’t usable because it’s too small.
In the Extras tab, you can either manually upload the image or send it here from the txt2img or img2img tabs.
Here, you’ll find options to select the upscaler for scaling the image along with the dimension by which you want to upscale the image.
You can either scale the image by specifying the exact dimensions or you can scale it by a value.
There are two upscalers you can choose while scaling the image.
Why two?
Well, it’s sometimes useful to blend the outputs of two upscalers to get a smoother result. You don’t necessarily have to use both upscalers.
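Conceptually, blending the two upscalers is just a weighted mix of their outputs, controlled by the second upscaler’s visibility slider. Here’s a rough sketch of the idea, with flat lists of pixel values standing in for the upscaled images (this is an illustration of the math, not the WebUI’s actual code):

```python
def blend(pixels_a, pixels_b, visibility):
    """Mix upscaler A's output with upscaler B's, weighted by B's visibility.

    visibility = 0.0 keeps only upscaler A; 1.0 keeps only upscaler B.
    """
    return [
        (1.0 - visibility) * a + visibility * b
        for a, b in zip(pixels_a, pixels_b)
    ]

# With visibility 0.25, the result leans 75% toward upscaler A.
print(blend([100, 200], [200, 100], 0.25))  # → [125.0, 175.0]
```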
PNG Info
The PNG Info tab is very useful if you want to retrieve the information of an image.
Let’s say you generated an image using Stable Diffusion. Now, you want to go back and see the parameters of the image such as the prompts used and configuration settings.
You can upload the image to PNG Info and it’ll show you the image’s generation information, provided it’s still embedded in the file.
This can also be useful if you find an image generated using Stable Diffusion on the Internet and want to see its information. Just upload it to PNG Info and you’ll get information about its prompts and configuration settings.
Note that this only works for images that still have these parameter values embedded in their file. This is a very helpful tab if you want to view prompt history in Stable Diffusion.
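Under the hood, the WebUI embeds these settings as a block of text in the image file, roughly in this shape: the prompt, a “Negative prompt:” line, then a comma-separated settings line. Here’s a small stdlib-only sketch of parsing that text; the exact layout can vary between versions, so treat this format as an assumption:

```python
def parse_parameters(text):
    """Split an embedded parameters string into prompt, negative prompt, settings."""
    lines = text.strip().split("\n")
    prompt_lines, negative, settings = [], "", {}
    for i, line in enumerate(lines):
        if line.startswith("Negative prompt:"):
            negative = line[len("Negative prompt:"):].strip()
        elif i == len(lines) - 1 and ":" in line:
            # Last line: "Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, ..."
            for part in line.split(","):
                key, _, value = part.partition(":")
                settings[key.strip()] = value.strip()
        else:
            prompt_lines.append(line)
    return " ".join(prompt_lines), negative, settings

sample = (
    "a cozy cabin in the woods\n"
    "Negative prompt: blurry, lowres\n"
    "Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 12345"
)
prompt, neg, cfg = parse_parameters(sample)
# cfg["Steps"] == "30", cfg["Seed"] == "12345"
```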
Checkpoint Merger
The Checkpoint Merger is used to combine two or more models into one. You can use it to create a new model that has the styles of two different models.
The merged model may produce the results you want, but that’s not always the case.
Since this is a beginner’s guide to Automatic1111, I’ll not be covering Checkpoint Merger here as it’s an advanced feature and requires a dedicated guide of its own.
Train
The Train feature lets you train your own model using images. You can set a directory where the images are stored that you want the model to be trained on.
This is a useful feature if you wish to create your own models. But I won’t be covering it in this guide as it’s a very detailed process and certainly not for beginners.
Settings
You can find Automatic1111 settings on this page where you can enable or disable certain features and customize various options.
There are a lot of options you can go through and change on this page. By default, most of the settings don’t need to be changed.
But if there is something you wish to change, make sure to click on the Apply Settings button after every change. You’ll also have to Reload UI for the changes to take effect.
Related: Stable Diffusion VAE Guide
Extensions
The Extensions tab lets you install additional extensions in Stable Diffusion. You can view the list of currently installed and enabled extensions on the Installed tab.
If you wish to install an extension, switch over to the Install from URL tab and enter the URL of the Git repository of the extension.
Whenever you install an extension in Automatic1111, you will have to restart it for it to work.
Automatic1111 Requirements
To run Automatic1111, you’ll need a system that is capable enough to handle the requirements of Stable Diffusion.
Here are the requirements for running Stable Diffusion:
- Intel/AMD CPU
- 16GB RAM
- Nvidia GTX 7xx or newer with at least 4GB VRAM
- 10GB Storage Space
These are the minimum requirements for running Stable Diffusion. The most important thing to know is that the more powerful GPU you have, the better performance you’ll get.
Using Models In Automatic1111
In Automatic1111, you can use various types of Stable Diffusion models while generating images.
Here’s a breakdown of different models you can use and what they do:
| Model Type | Purpose |
| --- | --- |
| Checkpoint | Pre-trained models designed to generate images with a specific style or genre |
| LORA | Models that apply small changes to a checkpoint model |
| LyCORIS | An alternative to LORA models that works the same way but with a smaller file size |
| Textual Inversion | Injects a new style into a model without changing the model itself |
| Hypernetwork | A fine-tuning model similar to LORA |
| Upscaler | Models that scale up an image while improving its quality and sharpness |
You can download and store Stable Diffusion models in your directory in the following path:
stable-diffusion\stable-diffusion-webui\models
If you’ve used Stability Matrix to install Automatic1111, your model directory will be:
stabilityMatrix\Models
In this folder, you’ll find folders for each type of model. The checkpoint models go in the Stable Diffusion folder whereas the rest are self-explanatory.
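For example, a quick way to see which models you’ve collected is to scan those folders for the usual model file extensions. This is just a convenience sketch of my own; the folder names follow the layout described above, so adjust the path to your installation:

```python
from pathlib import Path

# Common file extensions for Stable Diffusion model files.
MODEL_EXTENSIONS = {".safetensors", ".ckpt", ".pt"}

def list_models(models_dir):
    """Return model filenames grouped by the subfolder (model type) they live in."""
    found = {}
    for path in Path(models_dir).rglob("*"):
        if path.suffix.lower() in MODEL_EXTENSIONS:
            found.setdefault(path.parent.name, []).append(path.name)
    return found

# e.g. list_models("stabilityMatrix/Models") might return something like:
# {"Stable Diffusion": ["v1-5-pruned-emaonly.safetensors"], "Lora": [...]}
```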
There are many Checkpoint, LORA, LyCORIS, and Textual Inversion models available that can be downloaded from Civitai.
In Automatic1111, you can select the checkpoint model at the very top of the screen.
To apply other models, you can click on the Model tab and click on the model you want to add to your prompt.
Once you click on the model, it’s automatically added to the positive prompt. You can add multiple LORA, LyCoris, or Textual Inversion models to your prompts.
Models can also be used in your negative prompts. There are certain models trained for bad body anatomy and you can use them in your negative prompts to ensure your images don’t come out looking bad.
Related: How To Generate Consistent Faces In Stable Diffusion
FAQs
Here are some frequently asked questions about Automatic1111:
Does Automatic1111 use GPU?
Automatic1111 uses your computer’s GPU to generate images in Stable Diffusion. However, you can also run it on your CPU, although generation will be much slower.
What are some alternatives to Automatic1111?
There are many other Stable Diffusion GUIs such as ComfyUI, InvokeAI, and DiffusionBee.
Can I run Automatic1111 online?
You can run Automatic1111 online by installing it on Google Colab. You can also use one of the various Stable Diffusion websites available.
Conclusion
If you’re someone who wants to use Stable Diffusion to generate AI art, then Automatic1111 is the best and easiest interface out there to get started.
The ease of use of A1111 makes it the ideal choice for beginners and even some experts. In this Automatic1111 guide, we covered all aspects of this software in-depth.
Hopefully, now you’ll be able to install Automatic1111 and start generating images in Stable Diffusion with ease.
But if you have any additional questions or doubts, feel free to ask them in the comments section below.