merge upstream

Sheldon Hull 2022-09-16 20:51:22 -05:00
parent 6081a26d25
commit 1d0a00f5d2
17 changed files with 203 additions and 94 deletions

Bug report issue template

@@ -7,13 +7,17 @@ assignees: ''
---
**Has this issue been opened before? Check the [FAQ](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Main), the [issues](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues?q=is%3Aissue) and in [the issues in the WebUI repo](https://github.com/hlky/stable-diffusion-webui)**
**Has this issue been opened before? Check the [FAQ](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Main), the [issues](https://github.com/AbdBarho/stable-diffusion-webui-docker/issues?q=is%3Aissue)**
**Describe the bug**
**Which UI**
hlky or auto or auto-cpu or lstein?
**Steps to Reproduce**
1. Go to '...'
2. Click on '....'
@@ -22,8 +26,11 @@ assignees: ''
**Hardware / Software:**
- OS: [e.g. Windows / Ubuntu and version]
- RAM:
- GPU: [Nvidia 1660 / No GPU]
- Version [e.g. 22]
- VRAM:
- Docker Version, Docker compose version
- Release version [e.g. 1.0.1]
**Additional context**
Add any other context about the problem here. If applicable, add screenshots to help explain your problem.

GitHub Actions build workflow

@@ -3,13 +3,17 @@ name: Build Images
on: [push]
jobs:
build_all:
build:
strategy:
matrix:
profile:
- auto
- hlky
- lstein
- download
runs-on: ubuntu-latest
name: All
name: ${{ matrix.profile }}
steps:
- uses: actions/checkout@v3
# better caching?
- run: docker compose --profile auto build --progress plain
- run: docker compose --profile hlky build --progress plain
- run: docker compose --profile lstein build --progress plain
- run: docker compose --profile download build --progress plain
- run: docker compose --profile ${{ matrix.profile }} build --progress plain
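The hunk above interleaves the old single `build_all` job with its matrix replacement. Assembled from only the lines shown there, the resulting workflow reads roughly as below (indentation and key ordering are assumptions):

```yaml
# Sketch assembled from the hunk above; only lines present in the diff are used.
name: Build Images

on: [push]

jobs:
  build:
    strategy:
      matrix:
        profile:
          - auto
          - hlky
          - lstein
          - download
    runs-on: ubuntu-latest
    name: ${{ matrix.profile }}
    steps:
      - uses: actions/checkout@v3
      # better caching?
      - run: docker compose --profile ${{ matrix.profile }} build --progress plain
```

Each profile now builds in its own job, so the four images build in parallel rather than sequentially inside one job.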

.github/workflows/stale.yml (new file)

@@ -0,0 +1,20 @@
name: 'Close stale issues and PRs'
on:
schedule:
- cron: '30 1 * * *'
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v5
with:
only-labels: awaiting-response
stale-issue-message: This issue is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 7 days.
stale-pr-message: This PR is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 7 days.
close-issue-message: This issue was closed because it has been stalled for 7 days with no activity.
close-pr-message: This PR was closed because it has been stalled for 7 days with no activity.
days-before-issue-stale: 14
days-before-pr-stale: 14
days-before-issue-close: 7
days-before-pr-close: 7

README.md

@@ -7,12 +7,12 @@ This repository provides multiple UIs for you to play around with stable diffusi
## Quick Start
- Install [Taskfile](https://taskfile.dev/installation?ref=AbdBarho-stable-diffusion-webui-docker)
- Quick Snippet:
- Linux: `sudo snap install task --classic`
- MacOS: `HOMEBREW_NO_AUTO_UPDATE=1 brew install go-task/tap/go-task`
- Windows: `choco install go-task -y` or `scoop install task`
- Curl: `sh -c "$(curl --location https://taskfile.dev/install.sh)" -- -d -b ~/.local/bin`
- Go: `go install github.com/go-task/task/v3/cmd/task@latest`
- Run `task` and see a list of all the pre-built tasks.
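`task` reads a `Taskfile.yml` at the repository root and lists the tasks defined there. The snippet below only illustrates that shape; the task names and commands are invented and are not the repository's actual tasks:

```yaml
# Illustrative Taskfile.yml only -- these tasks are invented, not the repo's.
version: "3"

tasks:
  default:
    desc: List the available tasks
    cmds:
      - task --list
  build:
    desc: Build the images for one compose profile
    cmds:
      - docker compose --profile auto build
```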
@@ -27,7 +27,6 @@ This will download the models with the resume option using curl (allowing it to
## Features
### AUTOMATIC1111
[AUTOMATIC1111's fork](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is imho the most feature-rich yet elegant UI:
@@ -63,13 +62,22 @@ Screenshots:
### lstein
[lstein's fork](https://github.com/lstein/stable-diffusion) is very mature when it comes to the CLI, but less so for the WebUI.
[lstein's fork](https://github.com/lstein/stable-diffusion) is very mature when it comes to the CLI, and the WebUI has potential.
| Text to image | Image to image | Extras |
| ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| ![](https://user-images.githubusercontent.com/24505302/190662506-dabdc967-93af-4d78-8533-394604d29ba4.jpg) | ![](https://user-images.githubusercontent.com/24505302/190662557-7640d9f0-30d8-4527-97b0-07d3f48108d4.jpg) | ![](https://user-images.githubusercontent.com/24505302/190662588-37a01fad-f993-4674-9ae6-8714aa229f7b.jpg) |
## Setup & Usage
Visit the wiki for [Setup](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Setup) and [Usage](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Usage) instructions, checkout the [FAQ](https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/FAQ) page if you face any problems, or create a new issue!
## Contributing
Contributions are welcome! Please create an issue describing what you want to contribute (before you implement anything) so we can talk about it.
## Disclaimer

docker-compose.yml

@@ -54,3 +54,6 @@ services:
<<: *base_service
profiles: [ "lstein" ]
build: ./services/lstein/
environment:
- PRELOAD=false
- CLI_ARGS=
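Putting the hunk together, the lstein service entry now carries its own environment block. A sketch of the whole entry is below; the `lstein:` service key is inferred, and the `base_service` anchor shown is only a placeholder for the shared block defined elsewhere in the compose file (its real contents are not in this diff):

```yaml
# Sketch only: the x-base_service block is a placeholder for the shared anchor;
# the service entry matches the lines in the hunk above, with the key inferred.
x-base_service: &base_service
  ports:
    - "7860:7860"   # placeholder; 7860 is the port the service Dockerfiles expose

services:
  lstein:
    <<: *base_service
    profiles: [ "lstein" ]
    build: ./services/lstein/
    environment:
      - PRELOAD=false
      - CLI_ARGS=
```

Both variables are consumed by the lstein image later in this commit: `PRELOAD=true` makes `mount.sh` run `scripts/preload_models.py` at startup, and `CLI_ARGS` is appended to the `dream.py` command in the Dockerfile's CMD.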

AUTOMATIC1111 service Dockerfile

@@ -41,7 +41,8 @@ RUN pip install --prefer-binary --no-cache-dir -r ${ROOT}/repositories/CodeForme
# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step
ARG SHA=91c56c51c7c83e18adb8fc52a950ec481c93b1de
ARG SHA=7fe00d08402b8bf9f7f0ffef59ee3f3ad0187cfc
RUN <<EOF
cd stable-diffusion-webui
git pull --rebase

AUTOMATIC1111 README (deleted)

@@ -1,14 +0,0 @@
# WebUI for AUTOMATIC1111
The WebUI of [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) as docker container!
## Setup
Clone this repo, download the `model.ckpt` and `GFPGANv1.3.pth` and put them into the `models` folder as mentioned in [the main README](../README.md), then run
```
cd AUTOMATIC1111
docker compose up --build
```
You can change the CLI parameters in `AUTOMATIC1111/docker-compose.yml`. The full list of CLI parameters can be found [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/shared.py)

AUTOMATIC1111 service info.py

@@ -7,7 +7,7 @@ file.write_text(
.replace(' return demo', """
with demo:
gr.Markdown(
'Created by [AUTOMATIC1111 / stable-diffusion-webui-docker](https://github.com/AbdBarho/stable-diffusion-webui-docker/tree/master/AUTOMATIC1111)'
'Created by [AUTOMATIC1111 / stable-diffusion-webui-docker](https://github.com/AbdBarho/stable-diffusion-webui-docker/)'
)
return demo
""", 1)

Download service Dockerfile

@@ -1,6 +1,6 @@
FROM bash:alpine3.15
RUN apk add parallel
RUN apk add parallel aria2
COPY . /docker
RUN chmod +x /docker/download.sh
ENTRYPOINT ["/docker/download.sh"]

Download service download.sh

@@ -2,32 +2,10 @@
set -Eeuo pipefail
# [[ "$(sha256sum -b $file | head -c 64)" == "$sha" ]]
echo "Downloading, this might take a while..."
declare -A MODELS
MODELS['model.ckpt']='https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media'
MODELS['GFPGANv1.3.pth']='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth'
MODELS['RealESRGAN_x4plus.pth']='https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'
MODELS['RealESRGAN_x4plus_anime_6B.pth']='https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth'
MODELS['LDSR.yaml']='https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1'
MODELS['LDSR.ckpt']='https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1'
echo "Downloading..."
for file in "${!MODELS[@]}"; do
url=${MODELS[$file]}
full_path="/cache/models/$file"
if [[ -f "$full_path" ]]; then
echo "- $file exists"
continue
fi
mkdir -p $(dirname $full_path)
wget --tries=10 -c -O $full_path $url
done
aria2c --input-file /docker/links.txt --dir /cache/models --continue
echo "Checking SHAs..."
time parallel --will-cite -a /docker/checksums.sha256 "echo -n {} | sha256sum -c"
parallel --will-cite -a /docker/checksums.sha256 "echo -n {} | sha256sum -c"

Download service links.txt (new file)

@@ -0,0 +1,12 @@
https://www.googleapis.com/storage/v1/b/aai-blog-files/o/sd-v1-4.ckpt?alt=media
out=model.ckpt
https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth
out=GFPGANv1.3.pth
https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth
out=RealESRGAN_x4plus.pth
https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth
out=RealESRGAN_x4plus_anime_6B.pth
https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1
out=LDSR.yaml
https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1
out=LDSR.ckpt

hlky service Dockerfile

@@ -9,10 +9,11 @@ ENV DEBIAN_FRONTEND=noninteractive
RUN conda install python=3.8.5 && conda clean -a -y
RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y
RUN apt-get update && apt install fonts-dejavu-core rsync -y && apt-get clean
RUN apt-get update && apt install fonts-dejavu-core rsync gcc -y && apt-get clean
RUN <<EOF
git config --global http.postBuffer 1048576000
git clone https://github.com/sd-webui/stable-diffusion-webui.git stable-diffusion
cd stable-diffusion
git reset --hard 7623a5734740025d79b710f3744bff9276e1467b

hlky service webui.yaml

@@ -2,9 +2,12 @@
general:
gpu: 0
outdir: /outputs
ckpt: "/cache/models/model.ckpt"
default_model: "Stable Diffusion v1.4"
default_model_config: "configs/stable-diffusion/v1-inference.yaml"
default_model_path: "/cache/models/model.ckpt"
fp:
name: "embeddings/alex/embeddings_gs-11000.pt"
name:
GFPGAN_dir: "./src/gfpgan"
RealESRGAN_dir: "./src/realesrgan"
RealESRGAN_model: "RealESRGAN_x4plus"
@@ -15,43 +18,90 @@ general:
extra_models_cpu: False
extra_models_gpu: False
save_metadata: True
save_format: "png"
skip_grid: False
skip_save: False
grid_format: "jpg:95"
n_rows: -1
no_verify_input: False
no_half: False
use_float16: False
precision: "autocast"
optimized: False
optimized_turbo: False
optimized_turbo: True
optimized_config: "optimizedSD/v1-inference.yaml"
update_preview: True
update_preview_frequency: 1
update_preview_frequency: 5
txt2img:
prompt:
height: 512
width: 512
cfg_scale: 5.0
cfg_scale: 7.5
seed: ""
batch_count: 1
batch_size: 1
sampling_steps: 50
default_sampler: "k_lms"
sampling_steps: 30
default_sampler: "k_euler"
separate_prompts: False
update_preview: True
update_preview_frequency: 5
normalize_prompt_weights: True
save_individual_images: True
save_grid: True
group_by_prompt: True
save_as_jpg: False
use_GFPGAN: True
use_RealESRGAN: True
use_GFPGAN: False
use_RealESRGAN: False
RealESRGAN_model: "RealESRGAN_x4plus"
variant_amount: 0.0
variant_seed: ""
write_info_files: True
txt2vid:
default_model: "CompVis/stable-diffusion-v1-4"
custom_models_list:
[
"CompVis/stable-diffusion-v1-4",
"naclbit/trinart_stable_diffusion_v2",
"hakurei/waifu-diffusion",
"osanseviero/BigGAN-deep-128",
]
prompt:
height: 512
width: 512
cfg_scale: 7.5
seed: ""
batch_count: 1
batch_size: 1
sampling_steps: 30
num_inference_steps: 200
default_sampler: "k_euler"
scheduler_name: "klms"
separate_prompts: False
update_preview: True
update_preview_frequency: 5
dynamic_preview_frequency: True
normalize_prompt_weights: True
save_individual_images: True
save_video: True
group_by_prompt: True
write_info_files: True
do_loop: False
save_as_jpg: False
use_GFPGAN: False
use_RealESRGAN: False
RealESRGAN_model: "RealESRGAN_x4plus"
variant_amount: 0.0
variant_seed: ""
beta_start: 0.00085
beta_end: 0.012
beta_scheduler_type: "linear"
max_frames: 1000
img2img:
prompt:
sampling_steps: 50
sampling_steps: 30
# Adding an int to toggles enables the corresponding feature.
# 0: Create prompt matrix (separate multiple prompts using |, and get all combinations of them)
# 1: Normalize Prompt Weights (ensure sum of weights add up to 1.0)
@@ -64,11 +114,12 @@ img2img:
# 8: jpg samples
# 9: Fix faces using GFPGAN
# 10: Upscale images using Real-ESRGAN
sampler_name: k_lms
sampler_name: "k_euler"
denoising_strength: 0.45
# 0: Keep masked area
# 1: Regenerate only masked area
mask_mode: 0
mask_restore: False
# 0: Just resize
# 1: Crop and resize
# 2: Resize and fill
@@ -76,7 +127,7 @@ img2img:
# Leave blank for random seed:
seed: ""
ddim_eta: 0.0
cfg_scale: 5.0
cfg_scale: 7.5
batch_count: 1
batch_size: 1
height: 512
@@ -86,16 +137,19 @@ img2img:
loopback: True
random_seed_loopback: True
separate_prompts: False
update_preview: True
update_preview_frequency: 5
normalize_prompt_weights: True
save_individual_images: True
save_grid: True
group_by_prompt: True
save_as_jpg: False
use_GFPGAN: True
use_RealESRGAN: True
use_GFPGAN: False
use_RealESRGAN: False
RealESRGAN_model: "RealESRGAN_x4plus"
variant_amount: 0.0
variant_seed: ""
write_info_files: True
gfpgan:
strength: 100

lstein service Dockerfile

@@ -9,7 +9,7 @@ ENV DEBIAN_FRONTEND=noninteractive
RUN conda install python=3.8.5 && conda clean -a -y
RUN conda install pytorch==1.11.0 torchvision==0.12.0 cudatoolkit=11.3 -c pytorch && conda clean -a -y
RUN apt-get update && apt install fonts-dejavu-core rsync -y && apt-get clean
RUN apt-get update && apt install fonts-dejavu-core rsync gcc -y && apt-get clean
RUN <<EOF
@@ -20,12 +20,25 @@ conda env update --file environment.yaml -n base
conda clean -a -y
EOF
ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch CLI_ARGS=""
ARG BRANCH=development SHA=45af30f3a4c98b50c755717831c5fff75a3a8b43
# ARG BRANCH=main SHA=89da371f4841f7e05da5a1672459d700c3920784
RUN <<EOF
cd stable-diffusion
git fetch
git checkout ${BRANCH}
git reset --hard ${SHA}
conda env update --file environment.yaml -n base
conda clean -a -y
EOF
RUN pip uninstall opencv-python -y && pip install --prefer-binary --upgrade --force-reinstall --no-cache-dir opencv-python-headless
COPY . /docker/
RUN python3 /docker/info.py /stable-diffusion/static/dream_web/index.html && chmod +x /docker/mount.sh
ENV TRANSFORMERS_CACHE=/cache/transformers TORCH_HOME=/cache/torch PRELOAD=false CLI_ARGS=""
WORKDIR /stable-diffusion
EXPOSE 7860
# run, -u to not buffer stdout / stderr
CMD mkdir -p /stable-diffusion/models/ldm/stable-diffusion-v1/ && \
ln -sf /cache/models/model.ckpt /stable-diffusion/models/ldm/stable-diffusion-v1/model.ckpt && \
python3 -u scripts/dream.py --outdir /output --web --host 0.0.0.0 --port 7860 ${CLI_ARGS}
CMD /docker/mount.sh && python3 -u scripts/dream.py --outdir /output --web --host 0.0.0.0 --port 7860 ${CLI_ARGS}

lstein README (deleted)

@@ -1,14 +0,0 @@
# WebUI for lstein
The WebUI of [lstein/stable-diffusion](https://github.com/lstein/stable-diffusion) as docker container!
Although it is a simple UI, the project has a lot of potential.
## Setup
Clone this repo, download the `model.ckpt` and put it into the `models` folder as mentioned in [the main README](../README.md), then run
```
cd lstein
docker compose up --build
```

services/lstein/info.py (new file)

@@ -0,0 +1,10 @@
import sys
from pathlib import Path
file = Path(sys.argv[1])
file.write_text(
file.read_text()\
.replace('GitHub site</a>', """
GitHub site</a>, Deployed with <a href="https://github.com/AbdBarho/stable-diffusion-webui-docker/">stable-diffusion-webui-docker</a>
""", 1)
)

services/lstein/mount.sh (new, executable)

@@ -0,0 +1,26 @@
#!/bin/bash
set -eu
ROOT=/stable-diffusion
mkdir -p "${ROOT}/models/ldm/stable-diffusion-v1/"
ln -sf /cache/models/model.ckpt "${ROOT}/models/ldm/stable-diffusion-v1/model.ckpt"
if test -f /cache/models/GFPGANv1.3.pth; then
base="${ROOT}/src/gfpgan/experiments/pretrained_models/"
mkdir -p "${base}"
ln -sf /cache/models/GFPGANv1.3.pth "${base}/GFPGANv1.3.pth"
echo "Mounted GFPGANv1.3.pth"
fi
# facexlib
FACEX_WEIGHTS=/opt/conda/lib/python3.8/site-packages/facexlib/weights
rm -rf "${FACEX_WEIGHTS}"
mkdir -p /cache/weights
ln -sf -T /cache/weights "${FACEX_WEIGHTS}"
if "${PRELOAD}" == "true"; then
python3 -u scripts/preload_models.py
fi