
AI painting with Stable Diffusion

final-output.jpg

The OAsis cluster is equipped with 80GB A100 GPUs that can be leveraged to create artwork using a generative AI model called Stable Diffusion. This model supports text-to-image generation, image-to-image generation, and image inpainting.

Stable_Diffusion_architecture.png

If you're interested in learning all the technical details, you can refer to the original paper available here.

The popularity of this model is on the rise, and the community is growing at an exponential rate due to its ability to produce stunning output with minimal computing power. End-users can train additional networks or embeddings to significantly influence the output. Additionally, there's a platform called Civitai that allows users to share their models.

For this exercise, we'll be using the DreamShaper model.

The model is 5.6 GB in size. To make it easier for you, we've already placed it in /pfss/toolkit/stable-diffusion, so you can use it directly without downloading.
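Before setting anything up, you can optionally confirm the shared copy is readable from your account (the path below is the one mentioned above):

# optional: confirm the shared DreamShaper checkpoint is readable
ls -lh /pfss/toolkit/stable-diffusion/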

Prepare

To get started, let's prepare the Conda environment first.

# we'll use the scratch file system here since model files are large
cd $SCRATCH

# check out the webui from git
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git

# create a symbolic link to load the DreamShaper model
# since DreamShaper is a base model, place it in the models/Stable-diffusion folder
ln -s /pfss/toolkit/stable-diffusion/dreamshaper4bakedvae.WZEK.safetensors \
  stable-diffusion-webui/models/Stable-diffusion/

# create the conda environment
cd stable-diffusion-webui
module load Anaconda3/2022.05
conda create --name StableDiffusionWebui python=3.10.6
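
As an optional sanity check (assuming the steps above completed without errors), you can confirm that the environment activates and that the symlinked model resolves:

# optional sanity check: the environment should report Python 3.10.6
source activate StableDiffusionWebui
python --version

# the symlink should resolve to the shared DreamShaper checkpoint
ls -lh models/Stable-diffusion/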

Prepare the job script

Then we will create a quick job script for launching it in the portal.

Create a file called "start-sdwebui.sbatch" in your home folder and fill it with the following content. Once done, request a GPU node to launch the web UI.

#!/bin/bash -le

%node%
#SBATCH --time=0-03:00:00
#SBATCH --output=sdwebui.out

<<setup
desc: Start a stable diffusion web ui
inputs:
  - code: node
    display: Node
    type: node
    required: true
    placeholder: Please select a node
    default:
      cpu: 8
      mem: 16
setup

module load Anaconda3/2022.05 CUDA GCCcore git
source activate StableDiffusionWebui
cd $SCRATCH/stable-diffusion-webui

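# determine the node's hostname and ask the portal for a free port,
# so the running jobs window can link to the web UI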
host=$(hostname)
port=$(hc acquire-port -j $SLURM_JOB_ID -u web --host $host -l WebUI)

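# start the web UI, listening on the acquired port (dependencies are installed on the first run)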
./webui.sh --listen --port $port

Request a GPU node to launch the web UI

Once you log in to the web portal, open the file browser and select the .sbatch file you created. Pick a node with GPU and launch it.

launch-sdwebui.png

Please note that the installation of all Python libraries and dependencies may take some time on the first run. You can monitor the progress in the $HOME/sdwebui.out file.
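For example, from a login node:

# follow the installation and start-up log
tail -f $HOME/sdwebui.out

# confirm the job is still running
squeue -u $USER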

When the web UI is launched, you can access it from the running jobs window.

open-sdwebui.png

Once the web UI is launched, you'll have access to numerous options to explore. It may seem overwhelming at first, but a simpler way to get started is to find an artwork that people shared on Civitai and use it as a starting point. For example, we chose this one.

You can replicate the prompts, sampler, and step settings to generate your own artwork. If you also replicate the seed, you can reproduce the same image.

Stable-Diffusion-webui.png
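
If you prefer scripting over clicking through the browser, the web UI can also expose an HTTP API when webui.sh is started with the additional --api flag. The sketch below is only illustrative: the hostname, port, and prompt values are placeholders, and the exact field names can vary between web UI versions.

# illustrative sketch: send prompt/sampler/steps/seed settings to the txt2img endpoint
# (requires launching webui.sh with --api; <node> and <port> are placeholders)
curl -s -X POST "http://<node>:<port>/sdapi/v1/txt2img" \
  -H "Content-Type: application/json" \
  -d '{
        "prompt": "your prompt here",
        "negative_prompt": "things to avoid",
        "sampler_name": "DPM++ 2M Karras",
        "steps": 30,
        "cfg_scale": 7,
        "seed": 1234567890,
        "width": 512,
        "height": 512
      }' > txt2img-response.json

# the response stores the generated image(s) as base64-encoded strings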

In our case, we decided to generate five pieces at a time. Once we found a good one, we upscaled it to a larger image with more details.

Stable-Diffusion-hires-fix.png

And voila! This is how we created the cover image for this article.

Wrap-up

In conclusion, this is just the beginning of a rapidly developing field, and this article only scratches the surface. There's so much more to explore, from trying different models shared by others to training the model to understand new concepts or styles.