
AI painting with Stable Diffusion

final-output.jpg

There are 80GB A100 GPUs in the OAsis cluster. We will leverage them to play around with the Stable Diffusion model. Stable Diffusion is a generative AI model that supports text-to-image generation, image-to-image generation, and image inpainting.

Stable_Diffusion_architecture.png

If you are interested in all the technical details, you may check out the original paper here.

This model is so popular that its community is growing extremely fast. Because it consistently produces stunning output with modest computing power, end users can train an extra network or an embedding to strongly steer the output. There is also a platform called Civitai where users share their models.

In this exercise, we will use the DreamShaper model.

The model file is 5.6 GB. To ease your journey, we have already placed it in /pfss/toolkit/stable-diffusion, so you can use it directly without downloading anything.
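If you want to confirm the shared model is in place before you start, listing the toolkit folder (the path above) should show the 5.6 GB file:

ls -lh /pfss/toolkit/stable-diffusion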

Prepare the Conda environment.

# we'll use the scratch file system here since model files are large
cd $SCRATCH

# check out the webui from git
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git

# create a symbolic link to load the DreamShaper model
# since DreamShaper is a base model, place it in the models/Stable-diffusion folder
ln -s /pfss/toolkit/stable-diffusion/dreamshaper4bakedvae.WZEK.safetensors \
  stable-diffusion-webui/models/Stable-diffusion/

# create the conda environment
cd stable-diffusion-webui
module load Anaconda3/2022.05
conda create --name StableDiffusionWebui python=3.10.6
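Before moving on, it is worth a quick sanity check that the new environment activates and carries the expected Python version (a minimal check, using the environment name created above):

source activate StableDiffusionWebui
python --version   # should report Python 3.10.6
conda deactivate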

Prepare the quick job script for launching in the portal

Create a file start-sdwebui.sbatch in your home folder with the content below.

#!/bin/bash -le

%node%
#SBATCH --time=0-03:00:00
#SBATCH --output=sdwebui.out

<<setup
desc: Start a stable diffusion web ui
inputs:
  - code: node
    display: Node
    type: node
    required: true
    placeholder: Please select a node
    default:
      cpu: 8
      mem: 16
setup

module load Anaconda3/2022.05 CUDA GCCcore git
source activate StableDiffusionWebui
cd $SCRATCH/stable-diffusion-webui

host=$(hostname)
port=$(hc acquire-port -j $SLURM_JOB_ID -u web --host $host -l WebUI)

./webui.sh --listen --port $port
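Before launching through the portal, two optional sanity checks can catch path mistakes early (these assume the locations used above):

# the symlink should resolve to the 5.6 GB DreamShaper model file
ls -lL $SCRATCH/stable-diffusion-webui/models/Stable-diffusion/

# parse the job script without executing it
bash -n ~/start-sdwebui.sbatch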

Request a GPU node to launch the web UI

Log in to the web portal, open the file browser, and click on the .sbatch file you created. Pick a node with a GPU and launch.

launch-sdwebui.png

Installing all the Python libraries and dependencies takes a long time on the first run. You can check the progress in the $HOME/sdwebui.out file.
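To follow the installation log live, you can tail the output file from a terminal (the file name matches the --output setting in the job script):

tail -f $HOME/sdwebui.out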

Once the web UI is launched, you can access it from the running jobs window.

open-sdwebui.png

There are many options in the web UI waiting for you to explore, and it may look overwhelming at first. An easier way to start is to find an artwork that someone shared on Civitai and use it as a starting point. For example, I chose this one.

We can replicate the prompts, sampler, and step settings to generate our artwork. If you replicate the seed, you may also reproduce the same image.

Stable-Diffusion-webui.png

Here I decided to generate five pieces at a time. Once I find a good one, I can upscale it to a larger image with more details.

Stable-Diffusion-hires-fix.png

Tada.

final-output.jpg