Go to HuggingFace or Civitai to find a model.
Use the wget command to download the model (use the resolve path on HuggingFace rather than blob, so you download the file itself and not the web page):
wget https://huggingface.co/andite/anything-v4.0/resolve/main/anything-v4.5-pruned.ckpt
Here are some good models for your reference.
Stable Diffusion, the original model published by CompVis and StabilityAI.
I would suggest starting with the “Anything” model if you want to draw anime artwork.
1. Install the proprietary Nvidia drivers in order to use CUDA, then reboot.
- sudo apt update
- sudo apt purge '*nvidia*'
- # List available drivers for your GPU
- ubuntu-drivers list
- sudo apt install nvidia-driver-525
2. Follow the instructions on the Nvidia Developer site to install CUDA, then reboot again.
- wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin
- sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600
- wget https://developer.download.nvidia.com/compute/cuda/12.1.0/local_installers/cuda-repo-ubuntu2204-12-1-local_12.1.0-530.30.02-1_amd64.deb
- sudo dpkg -i cuda-repo-ubuntu2204-12-1-local_12.1.0-530.30.02-1_amd64.deb
- sudo cp /var/cuda-repo-ubuntu2204-12-1-local/cuda-*-keyring.gpg /usr/share/keyrings/
- sudo apt-get update
- sudo apt-get -y install cuda
3. Verify the installation
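A quick check, assuming both the driver and the CUDA toolkit installed correctly (nvcc lives under /usr/local/cuda/bin, so you may need to add that directory to your PATH first):
- nvidia-smi
- nvcc --version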
4. Install Python, wget, and git
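On Ubuntu these are all available through apt; the package names below are the usual ones:
- sudo apt install python3 python3-pip python3-venv wget git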
5. Because SD WebUI needs a dedicated Python 3.10.6 environment, we have to install Anaconda
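A minimal sketch; the installer version in the filename is only an example, so check anaconda.com for the current release:
- wget https://repo.anaconda.com/archive/Anaconda3-2023.03-Linux-x86_64.sh
- bash Anaconda3-2023.03-Linux-x86_64.sh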
6. Create a Python 3.10.6 virtual environment
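With conda this is a single command; the environment name sdwebui is arbitrary:
- conda create -n sdwebui python=3.10.6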
1. Clone the Stable Diffusion WebUI repository
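Assuming the AUTOMATIC1111 repository, which is the WebUI the rest of this guide configures:
- git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git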
2. Move the .ckpt models into stable-diffusion-webui/models/Stable-diffusion
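For example, assuming the model file was downloaded to the current directory:
- mv anything-v4.5-pruned.ckpt stable-diffusion-webui/models/Stable-diffusion/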
3. Enter the virtual environment
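Using the conda environment created earlier (sdwebui is the example name from step 6):
- conda activate sdwebui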
4. If you want to activate the virtual environment from a bash script, add these lines at the top of webui-user.sh
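A sketch, assuming Anaconda was installed to the default location in your home directory:
- source ~/anaconda3/etc/profile.d/conda.sh
- conda activate sdwebui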
According to the wiki, we have to change some command-line arguments in order to start SD WebUI.
Edit webui-user.sh
If your GPU has less than 4GB of VRAM, add: COMMANDLINE_ARGS=--medvram --opt-split-attention
If your PC has less than 8GB of RAM, add: COMMANDLINE_ARGS=--lowvram --opt-split-attention
You can also add --listen so you can access the WebUI from other PCs on the same network, or add --share to generate a public Gradio link for accessing the WebUI when deploying SD WebUI on a server. An example is shown below.
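For instance, for a low-VRAM GPU that should also be reachable over the local network, the relevant line in webui-user.sh could look like this (keep only the flags that apply to your setup):
- export COMMANDLINE_ARGS="--medvram --opt-split-attention --listen"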
1. Run webui.sh; it will install all the dependencies. Then a link should appear: http://127.0.0.1:7860
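With the conda environment active:
- cd stable-diffusion-webui
- bash webui.sh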
2. To access the WebUI from another PC on the same network, enter http://<the host machine's IP address>:7860 in the address bar of your browser. Don't forget to open the firewall port.
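If you use ufw, opening the default WebUI port could look like this:
- sudo ufw allow 7860/tcp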
1. Check the current branch and commit
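Inside stable-diffusion-webui, note where you are before updating so you can return to it later:
- git branch
- git log -1 --oneline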
2. Pull the latest files
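Still inside the repository:
- git pull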
3. If something is broken after updating, roll back to the version you noted earlier
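Using the commit hash recorded in step 1 (the value below is just a placeholder):
- git checkout <commit hash from step 1>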
Use “Prompts” and “Negative Prompts” to tell the AI what to draw.
See Vodly Artist name and Danbooru tags for choosing prompts.
For example, to draw Jeanne from Fate/Grand Order, we type the character's name and her physical characteristics in the prompt fields.
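The exact tags are up to you; a Danbooru-style prompt might look something like this (purely an illustration):
- jeanne d'arc (fate), 1girl, blonde hair, long braid, armor, masterpiece, best quality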
Then type negative prompts.
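A commonly used set of negative prompts for anime models, again only as an example:
- lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, blurry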
Go to SD WebUI and type the prompts.
Check Restore faces.
Click the Generate button; it will start generating an image.
You will see the result in the right panel.
All generated images will be stored in stable-diffusion-webui/outputs
You can also increase the Batch count value so it will generate multiple images in one run.
Type the prompts.
Upload an image. Check Restore faces. Click Generate.
You can change the values of CFG Scale and Denoising strength. The lower the Denoising strength, the more similar the output will be to the original image.
Click Interrogate DeepBooru to generate prompts automatically according to the image you uploaded.