Inpainting Tutorial - Stable Diffusion
245,724 Views • Apr 6, 2023
I'll teach you what you need to know about inpainting in this Stable Diffusion tutorial. Learn how to fix any Stable Diffusion-generated image by inpainting the details.

github.com/richrobber2/canvas-zoom.git
huggingface.co/runwayml/stable-diffusion-inpaintin…

FREE Prompt styles here:
www.patreon.com/posts/sebs-hilis-79649068

Support me on Patreon to get access to unique perks! www.patreon.com/sebastiankamph

Chat with me in our community discord: discord.gg/sebastiankamph

My workflow to Perfect Images
   • Revealing my Workflow to Perfect AI I...  

Control Lights in Stable Diffusion
   • Control Light in AI Images  

LIVE Pose in Stable Diffusion
   • LIVE Pose in Stable Diffusion's Contr...  

ControlNet tutorial and install guide
   • NEW ControlNet for Stable diffusion R...  

Ultimate Stable diffusion guide
   • Stable diffusion tutorial. ULTIMATE g...  

The Rise of AI Art: A Creative Revolution
   • The Rise of AI Art - A Documentary  

7 Secrets to writing with ChatGPT (Don't tell your boss!)
   • 7 Secrets in ChatGPT (Don't tell your...  

Ultimate Animation guide in Stable diffusion
   • Stable diffusion animation tutorial. ...  

Dreambooth tutorial for Stable diffusion
   • Dreambooth tutorial for stable diffus...  

5 tricks you're not using
   • Top 5 Stable diffusion tips for newco...  

Avoid these 7 mistakes
   • Don't make these 7 mistakes in Stable...  

How to ChatGPT. ChatGPT explained:
   • How to ChatGPT? Chat GPT explained!  

How to fix live render preview:
   • Stable diffusion gui most important s...  
Metadata And Engagement

Genre: Education
Rating: 4.896 (161/6,039 LTDR)
RYD date created: 2024-05-15T16:22:56.67532Z

YouTube Comments - 303 Comments

Top Comments

@sebastiankamph

8 months ago

Early access to videos as a Patreon supporter www.patreon.com/sebastiankamph

1 like

@Antalion20

1 year ago

The reason the coffee cup doesn't fit well within the image is that the render box for the inpaint area is such a small part of the image - anything generated is done only within the context of a) what is not denoised, i.e. the original image, and b) what SD can actually see (within the render box). For high denoising strength you generally want a larger render box, otherwise it's easy to lose context.

But what if you only want to change a small area? No problem! The render area is created as a bounding box that contains all the inpaint area you've selected - so you can increase it by adding tiny dots of inpaint area to the scene. If you make them very small, then whatever is behind them will generally be unchanged once rendered, so only the main area you've selected will be altered - but the dots will still count towards the bounding box.

In the example given, I would put one dot above and to the left of the first coffee cup, and one at the bottom and to the right of the table. That way, what is rendered will probably adhere more closely to both the focus (the blur on the coffee cup) and the orientation and size of the table. For lower denoising strength (0.5 or below, I'd say) it will generally be able to glean the context from what remains of the original image, but for anything higher I get much better results with this method.

116 likes
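The bounding-box trick described in the comment above can be sketched in a few lines. This is an illustrative numpy model of how an "Only masked" crop region is derived from the mask - the function name and exact padding behaviour are my own simplification, not the webui's actual code:

```python
import numpy as np

def only_masked_crop_region(mask: np.ndarray, padding: int = 32):
    """Return the (x0, y0, x1, y1) crop that an "Only masked" inpaint
    would render: the bounding box of every masked pixel, expanded by
    the padding setting and clamped to the image bounds."""
    ys, xs = np.nonzero(mask)
    x0 = max(int(xs.min()) - padding, 0)
    y0 = max(int(ys.min()) - padding, 0)
    x1 = min(int(xs.max()) + padding + 1, mask.shape[1])
    y1 = min(int(ys.max()) + padding + 1, mask.shape[0])
    return x0, y0, x1, y1

# A 512x512 mask with one small inpaint blob near the centre.
mask = np.zeros((512, 512), dtype=bool)
mask[200:230, 240:270] = True
print(only_masked_crop_region(mask, padding=0))  # (240, 200, 270, 230)

# Two 1-pixel "context dots" far from the blob stretch the bounding
# box, so the model sees much more of the scene when it generates.
mask[50, 50] = True
mask[450, 450] = True
print(only_masked_crop_region(mask, padding=0))  # (50, 50, 451, 451)
```

The dots barely change the output pixels (they are tiny and surrounded by original image), but they drag the crop rectangle outward, which is exactly why the trick restores context at high denoising strength.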

@bellsTheorem1138

1 year ago

A much better solution when using "Masked Only" is to place a tiny dot of masking on or near content of the image that gives your masked region and prompt some context. What happens with Masked Only is that the image is cropped to fit just the masked part, so it often loses the context to fit the new generation into. So if you want to inpaint a hand, add a dot of mask further up the arm so it knows how the hand should be positioned or sized to match the rest of the arm. In your example, adding a tiny dot of mask to the other coffee cup would have produced a better result, simply because the cropping would include that contextual information. You have to leave in enough for the AI to work with.

148 likes

@zvit

1 year ago

The reason you were struggling with the coffee cup is that "full image" doesn't just keep the inpainted part at the same resolution - it tells the inpaint engine to look at the entire picture when drawing. So you will get a perfectly sized cup, and correct sunlight on the cup coming from the window, for example. But with "masked only", it only looks at the area of the cup, and that's why it won't fit the scene as well.

29 likes
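The distinction in the comment above can be shown with a toy sketch: in "Whole picture" mode the model conditions on the entire image, while "Only masked" crops down to the mask's bounding box before generating. A minimal numpy illustration (the function name and behaviour are an assumed simplification, not the webui's real code path):

```python
import numpy as np

def inpaint_context(image: np.ndarray, mask: np.ndarray, mode: str):
    """Return the pixels the model can condition on in each mode."""
    if mode == "whole picture":
        return image  # full scene: lighting, scale, and style all visible
    # "only masked": crop to the mask's bounding box, losing the rest
    ys, xs = np.nonzero(mask)
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

img = np.arange(16 * 16).reshape(16, 16)
m = np.zeros((16, 16), dtype=bool)
m[4:8, 4:8] = True
print(inpaint_context(img, m, "whole picture").shape)  # (16, 16)
print(inpaint_context(img, m, "only masked").shape)    # (4, 4)
```

With a cup-sized mask, "only masked" hands the model a 4x4 patch of a 16x16 scene - no window, no table edge - which is why the generated cup ignores the scene's lighting and perspective.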

@nightai

6 months ago

I immediately scrolled to the comments after the coffee cup fiasco and I wasn't disappointed. This community is so great - you can learn so much, so fast.

3 likes

@Smudgie

7 months ago

It's like ASMR for SD. Thank you!

1 like

@zoybean

1 year ago

Hey Seb, just a little tip. Once you're done inpainting, you can put the final image into img2img with very low denoise to remove inpainting blurs, shadows, etc. for a smoother result.

163 likes
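Why the very low denoise pass suggested above only lightly touches the image: in diffusers-style img2img, the `strength` (denoise) value decides how far back into the noise schedule the init image is pushed, and only that tail of the schedule is re-run. A sketch of that bookkeeping, mirroring the logic of diffusers' `get_timesteps` in simplified form:

```python
def img2img_steps(num_inference_steps: int, strength: float):
    """Return (steps_actually_run, first_step_index) for an img2img pass.

    The init image is noised to roughly the `strength` fraction of the
    schedule, then only the remaining steps are denoised - so low
    strength means few steps and an output close to the input image.
    """
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return num_inference_steps - t_start, t_start

print(img2img_steps(30, 0.2))  # (6, 24): a gentle cleanup pass
print(img2img_steps(30, 1.0))  # (30, 0): full regeneration
```

At denoise 0.2 only the last 6 of 30 steps run, which is enough to smooth inpainting seams and shadow mismatches without redrawing the composition.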

@user-iv9bh1cv5x

1 year ago

Thank you for these tutorials and sharing your process. They've been a huge help for me as a beginner to these tools.

16 likes

@coda514

1 year ago

Thank you for the knowledge. BTW, Did you hear about the artist who took things too far? Guess he didn't know where to draw the line.

5 likes

@alexkatzfey

1 year ago

Awesome stuff! I was wondering what all the different settings for inpainting were for. I've just started messing around with Stable Diffusion and watching your videos has definitely helped me to start figuring out what's possible. Thanks again!

1 like

@ovalshrimp

1 year ago

This was helpful Seb, thanks. Inpainting has always been a bit of a mystery.

3 likes

@AIKnowledge2Go

11 months ago

Awesome tutorial. Understanding the difference between the masked content options "original" and "latent noise" helped me so much. As others have already mentioned, making the inpaint area bigger also helps. When I inpaint body parts like the upper torso, I often mask the start of the arms and the neck as well, so that inpaint understands which direction the body is moving in.

2 likes

@Minimalici0us

11 months ago

1:09 - The scroll works by holding down Shift and scrolling the mouse wheel

1 like

@runebinder

3 months ago

Hi, I only got into Stable Diffusion a couple of weeks ago and hadn't had much luck with inpainting. This tutorial made a lot of sense, and I got much better results with my first inpaint after watching it. Thanks :)


@matbeedotcom

1 year ago

Alright, I've subbed? Joined? I don't know what the term is - I'm giving myself two full-time weeks to really pick up Stable Diffusion and you're helping to launch me. Thank you.

3 likes

@sphericalearther1461

4 months ago

Thanks. Finally got it working. So cool. My best AI idea. Develop a model from stereoscopic images only so it can draw in SBS 3D!


@aliceleblanc7318

1 year ago

I am new to the whole inpainting topic and this video helped me a lot to get an overview of the possibilities. Many thanks ❤

3 likes

@MrPlasmo

1 year ago

Damn thanks for the zooooom! Was wondering when we gonna get it

1 like

@joeduffy52

9 months ago

A tutorial for inpainting in ComfyUI would be good 😉 Your SDXL Workflow file is the best I've tried so far


@pizzaluvah

5 months ago

I'm all for self-deprecating humo(u)r, but in all seriousness, you are a very capable teacher. Thank you.

