
Commit 7d8d4bc

thinkyhead and lstein authored
Global replace [ \t]+$, add "GB" (#1751)
* "GB"
* Replace [ \t]+$ global

Co-authored-by: Lincoln Stein <[email protected]>
1 parent 4fd97ce commit 7d8d4bc
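The `[ \t]+$` replacement named in the commit title can be sketched as a short Python script. This is an illustration only, not necessarily the tool the authors actually ran:

```python
import re
from pathlib import Path

# Regex from the commit title: one or more trailing spaces or tabs at the
# end of a line. MULTILINE makes $ match just before every newline.
TRAILING_WS = re.compile(r"[ \t]+$", flags=re.MULTILINE)

def strip_trailing_whitespace(text: str) -> str:
    """Remove trailing spaces/tabs from every line of the given text."""
    return TRAILING_WS.sub("", text)

def clean_file(path: Path) -> None:
    """Rewrite a file in place, but only if something actually changed."""
    original = path.read_text(encoding="utf-8")
    cleaned = strip_trailing_whitespace(original)
    if cleaned != original:
        path.write_text(cleaned, encoding="utf-8")

print(repr(strip_trailing_whitespace("a line   \nanother\t\n")))
# 'a line\nanother\n'
```

Run over every tracked text file, this produces exactly the kind of whitespace-only diff shown below.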


45 files changed: +575 additions, −148 deletions

.gitattributes

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
 # Auto normalizes line endings on commit so devs don't need to change local settings.
-# Only affects text files and ignores other file types.
+# Only affects text files and ignores other file types.
 # For more info see: https://www.aleksandrhovhannisyan.com/blog/crlf-vs-lf-normalizing-line-endings-in-git/
 * text=auto
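For context on what that `* text=auto` rule does: on commit, Git normalizes text files by storing CRLF line endings as LF in the repository. A minimal Python sketch of the conversion it performs:

```python
def normalize_line_endings(data: bytes) -> bytes:
    # Git's text=auto stores text files with LF endings in the object
    # database, so on commit any CRLF ("\r\n") becomes a bare LF ("\n").
    return data.replace(b"\r\n", b"\n")

print(normalize_line_endings(b"one\r\ntwo\n"))  # b'one\ntwo\n'
```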

InvokeAI_Statement_of_Values.md

Lines changed: 2 additions & 2 deletions
@@ -1,4 +1,4 @@
-<img src="docs/assets/invoke_ai_banner.png" align="center">
+<img src="docs/assets/invoke_ai_banner.png" align="center">

 Invoke-AI is a community of software developers, researchers, and user
 interface experts who have come together on a voluntary basis to build

@@ -81,5 +81,5 @@ area. Disputes are resolved by open and honest communication.

 ## Signature

-This document has been collectively crafted and approved by the current InvokeAI team members, as of 28 Nov 2022: **lstein** (Lincoln Stein), **blessedcoolant**, **hipsterusername** (Kent Keirsey), **Kyle0654** (Kyle Schouviller), **damian0815**, **mauwii** (Matthias Wild), **Netsvetaev** (Artur Netsvetaev), **psychedelicious**, **tildebyte**, and **keturn**. Although individuals within the group may hold differing views on particular details and/or their implications, we are all in agreement about its fundamental statements, as well as their significance and importance to this project moving forward.
+This document has been collectively crafted and approved by the current InvokeAI team members, as of 28 Nov 2022: **lstein** (Lincoln Stein), **blessedcoolant**, **hipsterusername** (Kent Keirsey), **Kyle0654** (Kyle Schouviller), **damian0815**, **mauwii** (Matthias Wild), **Netsvetaev** (Artur Netsvetaev), **psychedelicious**, **tildebyte**, and **keturn**. Although individuals within the group may hold differing views on particular details and/or their implications, we are all in agreement about its fundamental statements, as well as their significance and importance to this project moving forward.

README.md

Lines changed: 5 additions & 5 deletions
@@ -53,11 +53,11 @@ For full installation and upgrade instructions, please see:

 1. Go to the bottom of the [Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/latest)
 2. Download the .zip file for your OS (Windows/macOS/Linux).
-3. Unzip the file.
+3. Unzip the file.
 4. If you are on Windows, double-click on the `install.bat` script. On macOS, open a Terminal window, drag the file `install.sh` from Finder into the Terminal, and press return. On Linux, run `install.sh`.
-5. Wait a while, until it is done.
+5. Wait a while, until it is done.
 6. The folder where you ran the installer from will now be filled with lots of files. If you are on Windows, double-click on the `invoke.bat` file. On macOS, open a Terminal window, drag `invoke.sh` from the folder into the Terminal, and press return. On Linux, run `invoke.sh`
-7. Press 2 to open the "browser-based UI", press enter/return, wait a minute or two for Stable Diffusion to start up, then open your browser and go to http://localhost:9090.
+7. Press 2 to open the "browser-based UI", press enter/return, wait a minute or two for Stable Diffusion to start up, then open your browser and go to http://localhost:9090.
 8. Type `banana sushi` in the box on the top left and click `Invoke`:

 <div align="center"><img src="docs/assets/invoke-web-server-1.png" width=640></div>

@@ -161,9 +161,9 @@ problems and other issues.
 # Contributing

 Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
-cleanup, testing, or code reviews, is very much encouraged to do so.
+cleanup, testing, or code reviews, is very much encouraged to do so.

-To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.
+To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.

 If you are unfamiliar with how
 to contribute to GitHub projects, here is a

Stable_Diffusion_v1_Model_Card.md

Lines changed: 9 additions & 9 deletions
@@ -21,7 +21,7 @@ This model card focuses on the model associated with the Stable Diffusion model,

 # Uses

-## Direct Use
+## Direct Use
 The model is intended for research purposes only. Possible research areas and
 tasks include

@@ -68,11 +68,11 @@ Using the model to generate content that is cruel to individuals is a misuse of
 considerations.

 ### Bias
-While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
-Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
-which consists of images that are primarily limited to English descriptions.
-Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
-This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
+While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
+Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
+which consists of images that are primarily limited to English descriptions.
+Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
+This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
 ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.

@@ -84,7 +84,7 @@ The model developers used the following dataset for training the model:
 - LAION-2B (en) and subsets thereof (see next section)

 **Training Procedure**
-Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
+Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,

 - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
 - Text prompts are encoded through a ViT-L/14 text-encoder.

@@ -108,12 +108,12 @@ filtered to images with an original size `>= 512x512`, estimated aesthetics scor
 - **Batch:** 32 x 8 x 2 x 4 = 2048
 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant

-## Evaluation Results
+## Evaluation Results
 Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
 5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
 steps show the relative improvements of the checkpoints:

-![pareto](assets/v1-variants-scores.jpg)
+![pareto](assets/v1-variants-scores.jpg)

 Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
 ## Environmental Impact
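An aside on the arithmetic in the training-procedure lines of the model card above: with the stated downsampling factor f = 8, an H x W x 3 image maps to an H/f x W/f x 4 latent. A quick sketch to check the numbers:

```python
def latent_shape(height, width, f=8, latent_channels=4):
    # Per the model card: images of shape H x W x 3 are encoded to latents
    # of shape H/f x W/f x 4, with a relative downsampling factor f = 8.
    assert height % f == 0 and width % f == 0, "H and W must be divisible by f"
    return (height // f, width // f, latent_channels)

print(latent_shape(512, 512))  # (64, 64, 4)
```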

backend/modules/get_canvas_generation_mode.py

Lines changed: 1 addition & 1 deletion
@@ -43,7 +43,7 @@ def get_canvas_generation_mode(
 )

 """
-Mask images are white in areas where no change should be made, black where changes
+Mask images are white in areas where no change should be made, black where changes
 should be made.
 """
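The docstring in the diff above describes the mask convention: white means "keep unchanged", black means "regenerate". A hypothetical sketch (not InvokeAI code) of turning one grayscale mask row into per-pixel change flags under that convention:

```python
def pixels_to_change(mask_row, threshold=128):
    # White (255) means "make no change"; black (0) means "change here".
    # True marks pixels that should be regenerated.
    return [value < threshold for value in mask_row]

row = [255, 255, 0, 0, 255]      # white, white, black, black, white
print(pixels_to_change(row))     # [False, False, True, True, False]
```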

configs/INITIAL_MODELS.yaml

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ stable-diffusion-1.4:
   width: 512
   height: 512
 waifu-diffusion-1.3:
-  description: Stable Diffusion 1.4 fine tuned on anime-styled images (4.27)
+  description: Stable Diffusion 1.4 fine tuned on anime-styled images (4.27 GB)
   repo_id: hakurei/waifu-diffusion-v1-3
   config: v1-inference.yaml
   file: model-epoch09-float32.ckpt

configs/stable-diffusion/v1-finetune.yaml

Lines changed: 1 addition & 1 deletion
@@ -107,4 +107,4 @@ lightning:
 benchmark: True
 max_steps: 4000000
 # max_steps: 4000
-
+

configs/stable-diffusion/v1-m1-finetune.yaml

Lines changed: 1 addition & 1 deletion
@@ -107,4 +107,4 @@ lightning:
 benchmark: False
 max_steps: 6200
 # max_steps: 4000
-
+

docs/features/UNIFIED_CANVAS.md

Lines changed: 10 additions & 10 deletions
@@ -1,4 +1,4 @@
-The Unified Canvas is a tool designed to streamline and simplify the process of composing an image using Stable Diffusion. It offers artists all of the available Stable Diffusion generation modes (Text To Image, Image To Image, Inpainting, and Outpainting) as a single unified workflow. The flexibility of the tool allows you to tweak and edit image generations, extend images beyond their initial size, and to create new content in a freeform way both inside and outside of existing images.
+The Unified Canvas is a tool designed to streamline and simplify the process of composing an image using Stable Diffusion. It offers artists all of the available Stable Diffusion generation modes (Text To Image, Image To Image, Inpainting, and Outpainting) as a single unified workflow. The flexibility of the tool allows you to tweak and edit image generations, extend images beyond their initial size, and to create new content in a freeform way both inside and outside of existing images.

 This document explains the basics of using the Unified Canvas, introducing you to its features and tools one by one. It also describes some of the more advanced tools available to power users of the Canvas.

@@ -21,7 +21,7 @@ Accepting generations will commit the new generation to the **Base Layer**. You
 The **Mask Layer** consists of any masked sections that have been created to inform Inpainting generations. You can paint a new mask, or edit an existing mask, using the Brush tool and the Eraser with the Mask layer set as your Active layer. Any masked areas will only affect generation inside of the current bounding box.

 ### Bounding Box
-When generating a new image, Invoke will process and apply new images within the area denoted by the **Bounding Box**. The Width & Height settings of the Bounding Box, as well as its location within the Unified Canvas and pixels or empty space that it encloses, determine how new invocations are generated - see [Inpainting & Outpainting](#inpainting-and-outpainting) below. The Bounding Box can be moved and resized using the Move (V) tool. It can also be resized using the Bounding Box options in the Options Panel. By using these controls you can generate larger or smaller images, control which sections of the image are being processed, as well as control Bounding Box tools like the Bounding Box fill/erase.
+When generating a new image, Invoke will process and apply new images within the area denoted by the **Bounding Box**. The Width & Height settings of the Bounding Box, as well as its location within the Unified Canvas and pixels or empty space that it encloses, determine how new invocations are generated - see [Inpainting & Outpainting](#inpainting-and-outpainting) below. The Bounding Box can be moved and resized using the Move (V) tool. It can also be resized using the Bounding Box options in the Options Panel. By using these controls you can generate larger or smaller images, control which sections of the image are being processed, as well as control Bounding Box tools like the Bounding Box fill/erase.

 ### <a name="inpainting-and-outpainting"></a> Inpainting & Outpainting
 "Inpainting" means asking the AI to refine part of an image while leaving the rest alone. For example, updating a portrait of your grandmother to have her wear a biker's jacket.

@@ -48,9 +48,9 @@ To get started with the Unified Canvas, you will want to generate a new base lay

 From there, you can consider the following techniques to augment your image:
 * **New Images**: Move the bounding box to an empty area of the Canvas, type in your prompt, and Invoke, to generate a new image using the Text to Image function.
-* **Image Correction**: Use the color picker and brush tool to paint corrections on the image, switch to the Mask layer, and brush a mask over your painted area to use **Inpainting**. You can also use the **ImageToImage** generation method to invoke new interpretations of the image.
+* **Image Correction**: Use the color picker and brush tool to paint corrections on the image, switch to the Mask layer, and brush a mask over your painted area to use **Inpainting**. You can also use the **ImageToImage** generation method to invoke new interpretations of the image.
 * **Image Expansion**: Move the bounding box to include a portion of your initial image, and a portion of transparent/empty pixels, then Invoke using a prompt that describes what you'd like to see in that area. This will Outpaint the image. You'll typically find more coherent results if you keep about 50-60% of the original image in the bounding box. Make sure that the Image To Image Strength slider is set to a high value - you may need to set it higher than you are used to.
-* **New Content on Existing Images**: If you want to add new details or objects into your image, use the brush tool to paint a sketch of what you'd like to see on the image, switch to the Mask layer, and brush a mask over your painted area to use **Inpainting**. If the masked area is small, consider using a smaller bounding box to take advantage of Invoke's automatic Scaling features, which can help to produce better details.
+* **New Content on Existing Images**: If you want to add new details or objects into your image, use the brush tool to paint a sketch of what you'd like to see on the image, switch to the Mask layer, and brush a mask over your painted area to use **Inpainting**. If the masked area is small, consider using a smaller bounding box to take advantage of Invoke's automatic Scaling features, which can help to produce better details.
 * **And more**: There are a number of creative ways to use the Canvas, and the above are just starting points. We're excited to see what you come up with!

@@ -82,27 +82,27 @@ Features with non-obvious behavior are detailed below, in order to provide clari
 ## Toolbar

 ### Mask Options
-* **Enable Mask** - This flag can be used to Enable or Disable the currently painted mask. If you have painted a mask, but you don't want it affect the next invocation, but you *also* don't want to delete it, then you can set this option to Disable. When you want the mask back, set this back to Enable.
+* **Enable Mask** - This flag can be used to Enable or Disable the currently painted mask. If you have painted a mask, but you don't want it affect the next invocation, but you *also* don't want to delete it, then you can set this option to Disable. When you want the mask back, set this back to Enable.
 * **Preserve Masked Area** - When enabled, Preserve Masked Area inverts the effect of the Mask on the Inpainting process. Pixels in masked areas will be kept unchanged, and unmasked areas will be regenerated.

 ### Creative Tools
-* **Brush - Base/Mask Modes** - The Brush tool switches automatically between different modes of operation for the Base and Mask layers respectively.
-  * On the Base layer, the brush will directly paint on the Canvas using the color selected on the Brush Options menu.
+* **Brush - Base/Mask Modes** - The Brush tool switches automatically between different modes of operation for the Base and Mask layers respectively.
+  * On the Base layer, the brush will directly paint on the Canvas using the color selected on the Brush Options menu.
   * On the Mask layer, the brush will create a new mask. If you're finding the mask difficult to see over the existing content of the Unified Canvas, you can change the color it is drawn with using the color selector on the Mask Options dropdown.
 * **Erase Bounding Box** - On the Base layer, erases all pixels within the Bounding Box.
 * **Fill Bounding Box** - On the Base layer, fills all pixels within the Bounding Box with the currently selected color.

 ### Canvas Tools
 * **Move Tool** - Allows for manipulation of the Canvas view (by dragging on the Canvas, outside the bounding box), the Bounding Box (by dragging the edges of the box), or the Width/Height of the Bounding Box (by dragging one of the 9 directional handles).
-* **Reset View** - Click to re-orients the view to the center of the Bounding Box.
+* **Reset View** - Click to re-orients the view to the center of the Bounding Box.
 * **Merge Visible** - If your browser is having performance problems drawing the image in the Unified Canvas, click this to consolidate all of the information currently being rendered by your browser into a merged copy of the image. This lowers the resource requirements and should improve performance.

 ## Seam Correction
-When doing Inpainting or Outpainting, Invoke needs to merge the pixels generated by Stable Diffusion into your existing image. To do this, the area around the `seam` at the boundary between your image and the new generation is automatically blended to produce a seamless output. In a fully automatic process, a mask is generated to cover the seam, and then the area of the seam is Inpainted.
+When doing Inpainting or Outpainting, Invoke needs to merge the pixels generated by Stable Diffusion into your existing image. To do this, the area around the `seam` at the boundary between your image and the new generation is automatically blended to produce a seamless output. In a fully automatic process, a mask is generated to cover the seam, and then the area of the seam is Inpainted.

 Although the default options should work well most of the time, sometimes it can help to alter the parameters that control the seam Inpainting. A wider seam and a blur setting of about 1/3 of the seam have been noted as producing consistently strong results (e.g. 96 wide and 16 blur - adds up to 32 blur with both sides). Seam strength of 0.7 is best for reducing hard seams.
 * **Seam Size** - The size of the seam masked area. Set higher to make a larger mask around the seam.
-* **Seam Blur** - The size of the blur that is applied on *each* side of the masked area.
+* **Seam Blur** - The size of the blur that is applied on *each* side of the masked area.
 * **Seam Strength** - The Image To Image Strength parameter used for the Inpainting generation that is applied to the seam area.
 * **Seam Steps** - The number of generation steps that should be used to Inpaint the seam.
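A small check of the seam arithmetic quoted in the Seam Correction doc above: blur is applied on each side of the masked seam, so a 16-pixel blur contributes 32 pixels of blurred band in total.

```python
def seam_totals(seam_size, seam_blur):
    # seam_blur applies on *each* side of the masked area, so the
    # combined blurred band is twice the single-side value.
    return {"seam_size": seam_size, "total_blur": 2 * seam_blur}

print(seam_totals(96, 16))  # {'seam_size': 96, 'total_blur': 32}
```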

docs/help/SAMPLER_CONVERGENCE.md

Lines changed: 1 addition & 1 deletion
@@ -39,7 +39,7 @@ Looking for a short version? Here's a TL;DR in 3 tables.
 !!! tip "suggestions"

     For most use cases, `K_LMS`, `K_HEUN` and `K_DPM_2` are the best choices (the latter 2 run 0.5x as quick, but tend to converge 2x as quick as `K_LMS`). At very low steps (≤ `-s8`), `K_HEUN` and `K_DPM_2` are not recommended. Use `K_LMS` instead.
-
+

     For variability, use `K_EULER_A` (runs 2x as quick as `K_DPM_2_A`).

 ---
---

0 commit comments
