ComfyUI SDXL Refiner

 
Stability AI shipping the SDXL base and refiner models openly seems to give the community both the credibility and the license it needed to get started, and ComfyUI is where the two-model setup works best.

The two-model setup that SDXL uses pairs a base model that is good at generating original images from 100% noise with a refiner that is good at adding detail once roughly 0.2 of the noise remains (as u/Entrypointjip put it). Per the announcement, SDXL 1.0 combines a 3.5B parameter base model with a 6.6B parameter refiner, making it one of the largest open image generators today, and it takes natural-language prompts (e.g. "a closeup photograph of a korean k-pop idol") rather than tag soup.

A typical ComfyUI workflow generates an image with the base first and then passes it to the refiner for further detailing; in testing, handing roughly the last fifth of the total steps to the refiner worked well, including when upscaling. Because ComfyUI embeds the full graph in the images it saves, you can drag and drop a workflow .png or .json onto the ComfyUI window, click "Queue prompt", and you are generating. Community packs such as Searge-SDXL: EVOLVED and the WAS Node Suite ship ready-made base-plus-refiner graphs and update them at the same download links over time, and extensions like AnimateDiff already have beta ComfyUI support. If a loaded graph complains about missing nodes, the ComfyUI Manager can install them, and a checkpoint that refuses to load is often just a corrupted download: re-download it directly into the checkpoint folder. (One gap in the ecosystem: there is still no way to see thumbnails or metadata when organizing folders full of SDXL LoRAs.)

After gathering some more knowledge about SDXL and ComfyUI, and experimenting a few days with both, I ended up with a basic (no upscaling) 2-stage (base + refiner) workflow: I change dimensions, prompts, and sampler parameters, but the flow itself stays as it is. Settings that work well: width 896, height 1152, CFG scale 7, 30 steps, sampler DPM++ 2M Karras. For a purely base-model generation without the refiner, the built-in samplers in Comfy are probably the better option, and pairing the SDXL base with a LoRA in ComfyUI clicks and works pretty well. For inpainting with SDXL 1.0 in ComfyUI, three methods are commonly used: the base model with a Latent Noise Mask, the base model via InPaint VAE Encode, and the dedicated UNET "diffusion_pytorch" inpaint model from Hugging Face. Coming from other UIs the node-based approach is tough at first, but the absolute power and efficiency of node-based generation shows quickly; on the training side, there is an open diffusers request (#4085) for an example script for training a LoRA for the SDXL refiner.
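The same base-then-refiner handoff can be scripted outside ComfyUI with the diffusers library (the `from_pretrained` fragment above comes from that world). Below is a minimal sketch of the documented "ensemble of expert denoisers" pattern; the model IDs are the official Stability repos, and the 0.8 handoff point mirrors the rough 80/20 step split described above. Treat it as illustrative, not as anyone's exact workflow.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Load the base model.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

# Load the refiner, sharing the second text encoder and VAE to save VRAM.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "a closeup photograph of a korean k-pop idol, studio lighting"

# Base handles the first 80% of denoising and hands off a still-noisy latent.
latent = base(
    prompt=prompt, num_inference_steps=30,
    denoising_end=0.8, output_type="latent",
).images

# Refiner finishes the last 20%, where it specializes in fine detail.
image = refiner(
    prompt=prompt, num_inference_steps=30,
    denoising_start=0.8, image=latent,
).images[0]
image.save("refined.png")
```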
To download and install ComfyUI using Pinokio, simply go to the Pinokio site, download the browser, and install ComfyUI from inside it. Before SDXL support landed elsewhere (Vlad's SD.Next was still pending), my advice was to have a go with ComfyUI: it was likely to be the first UI that worked with SDXL when it fully dropped on the 18th. You can find SDXL on both HuggingFace and CivitAI; you need the SDXL 1.0 base checkpoint and the SDXL 1.0 refiner checkpoint (since 1.0 was released, there has been a point release for both of these models), and optionally the SDXL ControlNet models. Much of the early material dates to the 0.9 testing phase, when Stability wrote: "We are releasing two new diffusion models for research purposes: SDXL-base-0.9 and SDXL-refiner-0.9."

There are two ways to use the refiner: use the base and refiner models together to produce a refined image, or use the base alone and refine afterwards. SDXL is a two-step model, and the combined approach needs the advanced KSampler nodes, whose start/end-step controls let the base hand a still-noisy latent to the refiner mid-generation; the workflow should generate images first with the base and then pass them to the refiner for further refinement. Comparing the stages, the base output has a harsh outline where the refined image does not, and even a 0.2 noise value changed quite a bit of the face; I used the refiner model for all my tests even though some fine-tuned SDXL models don't require one. One caveat: don't use the refiner with a LoRA; those are two different models (more on why below).

Front ends differ in how much of this they automate. Automatic1111's support for SDXL and the refiner was quite rudimentary at first and required that the models be manually switched to perform the second step of image generation; there are settings and scenarios that take masses of manual clicking there. Fooocus runs the refiner automatically (performance mode, cinematic style by default). In the Searge-SDXL workflow for ComfyUI (a custom node extension with workflows for txt2img, img2img, and inpainting), you enable the refiner in the "Functions" section and set the "End at Step / Start at Step" switch to 2 in the "Parameters" section. Speed-wise, I saw approximately 2-3 it/s for a 1024x1024 image in ComfyUI, and a commit dated 2023-08-11 fixed very poor local SDXL performance that had made it basically unusable. An example config from the 0.9 tests: the 0.9 VAE, image size 1344x768 px, sampler DPM++ 2S Ancestral, Karras scheduler, 70 steps, CFG scale 10, aesthetic score 6.

Workflows can also load source images in two ways, 1) direct load from HDD and 2) load from a folder (picking the next image after each generation), and a "prediffusion" stage can create a very basic image from a simple prompt and send it on as a source. The result can be a hybrid SDXL + SD 1.5 graph: take a 0.9-era workflow (the one from Olivio Sarikas's video works just fine) and replace the refiner-stage models with 1.5 checkpoints for a tiled 1.5 render. The prompts in the shared examples aren't optimized or very sleek; after inputting your text prompt and choosing the image settings, experiment.
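The "start at step / end at step" handoff is just arithmetic over a shared step count. Here is a tiny sketch of how the widget values for a base/refiner pair of KSampler (Advanced) nodes can be derived; `split_steps` is a hypothetical helper for illustration, not a ComfyUI API.

```python
def split_steps(total_steps: int, base_ratio: float = 0.8) -> tuple[int, int]:
    """Return (handoff_step, total_steps) for a base/refiner split."""
    return round(total_steps * base_ratio), total_steps

handoff, total = split_steps(30, base_ratio=0.8)  # -> (24, 30)

# Base KSampler (Advanced): runs the first 80% and keeps leftover noise.
print(f"base:    start_at_step=0, end_at_step={handoff}, "
      "add_noise=enable, return_with_leftover_noise=enable")
# Refiner KSampler (Advanced): finishes on the same latent, adding no new noise.
print(f"refiner: start_at_step={handoff}, end_at_step={total}, "
      "add_noise=disable, return_with_leftover_noise=disable")
```

With 30 total steps this gives the common 24 base + 6 refiner pattern; the 10+5 laptop split mentioned later is the same idea with a smaller budget.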
The model files ship as .safetensors (the base and refiner checkpoints; a pruned sdxl_refiner_pruned_no-ema.safetensors also circulates). Compared with the SD 1.5 base model and its later fine-tuned iterations, SDXL has 2 text encoders on its base and a specialty text encoder on its refiner, so good graphs run SDXL 1.0 with separate prompts for the text encoders (Part 2 of this series added the SDXL-specific conditioning implementation and tested the impact of the conditioning parameters on the generated images). You can type in tag-style text tokens, but they won't work as well as natural language. The creator of ComfyUI and I are working on an officially endorsed SDXL workflow that uses far fewer steps and gives amazing results; one reason home-grown graphs underperform is feeding the normal text encoders rather than the specialty text encoders for the base and the refiner (special thanks to @WinstonWoof and @Danamir for their contributions; the SDXL Prompt Styler node also picked up minor changes to output names and printed log prompts).

Place LoRAs in the folder ComfyUI/models/loras; Embeddings/Textual Inversion work as usual. A standard two-stage graph has two samplers (base and refiner) and two Save Image nodes, one per stage, so you can compare outputs. To use the refiner, which seems to be one of SDXL's defining features, you have to build a flow that actually uses it, but it is entirely optional and could be used equally well to refine images from sources other than the SDXL base: you can use the base model by itself and move to the refiner only when you want additional detail (especially on faces), do txt2img with SDXL and then img2img with an SD 1.5 fine-tune (though at a 1.5x upscale some users can't get the refiner to work in that hybrid), send SD 1.5 latents to the SDXL base, or use the SDXL refiner with old models' outputs. The same node packs include the SDXL base and refiner sampling nodes along with image upscaling. Users coming from A1111 are accustomed to switching models frequently, and many SDXL-based fine-tunes are going to come out with no refiner at all.

One of the most powerful features of ComfyUI is that within seconds you can load an appropriate workflow for the task at hand: download the ZIP, or grab a .json (some custom front ends want it added to the ComfyUI/web folder). Continuing with the car analogy, ComfyUI vs Auto1111 is like driving manual shift vs automatic: with ComfyUI it took 12 sec and 1 min 30 sec respectively, without any optimization, though both ComfyUI and Fooocus can be slower for generation than A1111, your mileage may vary. If you don't need LoRA support, separate seeds, CLIP controls, or hires fix, you can just grab the basic v1 workflow; it is fully configurable, and StableSwarmUI (developed by stability-ai, using ComfyUI as its backend, still in early alpha) wraps the same engine. Start with something simple where it will be obvious that it's working. The next step for Stable Diffusion as a whole has to be fixing prompt engineering and applying multimodality; I feel like we are at the bottom of a big hill with Comfy, and the workflows will continue to rapidly evolve.
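Continuing the diffusers sketch from above, here is how the LoRA placement and the dual text encoders look in code. The LoRA directory and filename are hypothetical placeholders; the point is that the LoRA attaches to the base only, which is why running the refiner afterwards can wash the LoRA's influence back out.

```python
# Attach a LoRA to the base pipeline only (hypothetical local path/filename).
base.load_lora_weights("./loras", weight_name="my_sdxl_lora.safetensors")

# SDXL's base has two text encoders (CLIP-L and OpenCLIP-G); diffusers
# exposes a second prompt so each encoder can receive its own text.
image = base(
    prompt="photo of a male warrior, medieval armor",        # goes to CLIP-L
    prompt_2="majestic oil painting, dramatic rim lighting",  # goes to OpenCLIP-G
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("lora_base_only.png")
```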
Most graphs include a VAE selector, which needs a VAE file: download the SDXL VAE (a BF16 build exists) and a VAE file for SD 1.5 if your graph mixes both, and put them into ComfyUI/models/vae (one for SDXL, one for SD15). Having previously covered how to use SDXL with StableDiffusionWebUI and ComfyUI, let's now explore how SDXL 1.0 performs. SDXL generations work so much better in ComfyUI than in Automatic1111 because ComfyUI supports using the base and refiner models together in the initial generation; I also automated the split of the diffusion steps between the base and the refiner. A chain of SDXL base, then SDXL refiner, then HiResFix/Img2Img (using Juggernaut as the model for the last pass, at low denoise) is a popular extension of the idea; if the noise reduction is set higher it tends to distort or ruin the original image, and there is a high likelihood that problems here come from misunderstanding how to use the two models in conjunction within Comfy. The SDXL Discord server has an option to specify a style, the SDXL Prompt Styler Advanced node brings more elaborate linguistic and supportive terms to workflows, ComfyUI is the UI recommended by stability-ai and is highly customizable with custom workflows, and everything here also runs on Google Colab.

On performance: since switching from A1111 to ComfyUI, a 1024x1024 base + refiner render takes around 2 minutes for me; SDXL works "fine" with just the base model too, taking around 2 min 30 s to create a 1024x1024 image, far slower than SD 1.5 on the same box. One commenter's best balance was 1024x720 images, 10 base steps plus 5 refiner steps, with samplers and schedulers chosen so we can use SDXL on our laptops without those expensive, bulky desktop GPUs. In A1111, a pre-release finally fixed the high-VRAM issue (the release candidate reportedly takes only about 7.5 GB of VRAM even while swapping in the refiner); use the --medvram-sdxl flag when starting. With Tiled VAE (the one that comes with the multidiffusion-upscaler extension) on, you should be able to generate 1920x1080 with the base model, both in txt2img and img2img. For my SDXL model comparison test I used the same configuration with the same prompts (one example image used Dream ShaperXL 1.0); note that a 4x upscaling model produces 2048x2048 output, and a 2x model should get better times with much the same effect.

A few caveats collected along the way: the issue with the refiner is simply Stability's OpenCLIP model; traditionally, working with SDXL required two separate KSamplers, one for the base model and another for the refiner; just training the base model isn't feasible for accurately generating images of subjects such as specific people or animals; and to compare fairly you should regenerate the same picture with the correct workflow and the refiner. For extras, search the Manager for "post processing" custom nodes, click Install, and when prompted close the browser and restart ComfyUI. (On the SD 1.5 side, the v2 inpainting model still does a fine job, e.g. inpainting a woman into a scene.)
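In diffusers the equivalent VAE swap is one object assignment. The VAE below is the community madebyollin/sdxl-vae-fp16-fix build, which addresses the fine-detail artifacts of the original SDXL VAE in half precision; a small sketch continuing the pipelines from earlier:

```python
import torch
from diffusers import AutoencoderKL

# Community-fixed SDXL VAE that is stable in fp16 (avoids the fine-detail
# artifacts reported with the release VAE).
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
).to("cuda")

# Both stages should decode with the same VAE.
base.vae = vae
refiner.vae = vae
```

In ComfyUI the same swap is dropping the VAE .safetensors into ComfyUI/models/vae and selecting it with a Load VAE node wired into the VAE Decode.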
If a shared workflow runs surprisingly well on a modest card, that's often because the creator of the workflow has the same 4 GB GPU and tuned for it; and while waiting on SD.Next support, ComfyUI is a cool opportunity to learn a different UI anyway. (Chinese-language tutorials cover the same ground: generating 18 high-quality styles from keywords alone with an "SDXL Styles + Refiner" webUI flow, an SDXL Roop workflow optimization, and a promise to cover ComfyUI in depth if there is demand.) In a full two-model graph, all images are generated using both the SDXL base model and the refiner model, each automatically configured to perform a certain number of diffusion steps according to the "Base/Refiner Step Ratio" formula defined in a dedicated widget; roughly 35% of the noise is left at the point where the refiner takes over. Once you've successfully downloaded the two main files, keep the refiner in the same folder as the base model (though with the refiner some users can't go higher than 1024x1024 in img2img). To simplify the workflow, set up a base generation and a refiner refinement using two Checkpoint Loaders, then adjust the workflow and add in the pieces you need. For samplers, try DPM++ 2S a Karras, DPM++ SDE Karras, DPM++ 2M Karras, Euler a, and DPM adaptive, but compare carefully: misconfiguring nodes can lead to erroneous conclusions, and it's essential to understand the correct settings for a fair assessment. A technical report on SDXL is available for the details (the 0.9 files were sd_xl_base_0.9 and sd_xl_refiner_0.9, alongside the 1.0 refiner checkpoint and VAE).

The generation times quoted in most write-ups are for a total batch of 4 images at 1024x1024, and as soon as you go far out of the one-megapixel range the model is unable to understand the composition. That is why one common graph creates a very basic image from a simple prompt and sends it as a source to SD 1.5 models for refining and upscaling, and why you can also use the standard image resize node (with Lanczos) and pipe the result back into SDXL and then the refiner. SDXL has bad performance in anime out of the box, so just training the base is not enough there. The Impact Pack's pipe functions, FromDetailer (SDXL/pipe), BasicPipe -> DetailerPipe (SDXL), and Edit DetailerPipe (SDXL), let the Detailer use the refiner model of SDXL as well. Free hosted options exist too: SDXL 1.0 + LoRA + refiner with ComfyUI runs on Google Colab (there is an sdxl_v1.0_comfyui_colab notebook for the 1024x1024 model), with hosted services exposing ports for the different tools. For A1111 users, the low-VRAM launch line is: set COMMANDLINE_ARGS=--medvram --no-half-vae --opt-sdp-attention.

If you get stuck (able to run base models, LoRAs, and multiple samplers, but hanging whenever the refiner model attempts to load), it is usually resources rather than the graph. And to close the LoRA question from earlier: running the refiner over a LoRA-styled base render will destroy the likeness, because the LoRA isn't interfering with the latent space anymore; that is why "don't use the refiner with a LoRA" is the standing advice. ComfyUI also exposes an HTTP API for driving all of this programmatically; the stock example script begins with import json, from urllib import request, parse, and import random ("this is the ComfyUI api prompt format").
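Here is a runnable sketch modeled on ComfyUI's bundled basic_api_example.py (the `parse` import from the fragment above is only needed for the fuller websocket version, so it is omitted). The node IDs, wiring, and checkpoint filename are illustrative placeholders; export your own graph via "Save (API Format)" to get the real thing. The server address assumes a default local install on port 8188.

```python
import json
import random
from urllib import request

# A minimal workflow in ComfyUI's API ("prompt") format: one checkpoint,
# one sampler, decode, save. Checkpoint filename is a placeholder.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a closeup photograph of a korean k-pop idol",
                     "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, lowres", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": random.randint(0, 2**32), "steps": 30,
                     "cfg": 7.0, "sampler_name": "dpmpp_2m",
                     "scheduler": "karras", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "sdxl"}},
}

# Queue the prompt on a local ComfyUI server (default port 8188).
data = json.dumps({"prompt": workflow}).encode("utf-8")
request.urlopen(request.Request("http://127.0.0.1:8188/prompt", data=data))
```

A base + refiner graph adds a second CheckpointLoaderSimple and swaps the KSampler for two KSamplerAdvanced nodes split as in the earlier step arithmetic.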
The ecosystem keeps moving: an LCM update brings SDXL and SSD-1B to the game, and test prompts have scaled up accordingly ("photo of a male warrior, modelshoot style, (extremely detailed CG unity 8k wallpaper), medieval armor, professional majestic oil painting, intricate, high detail, trending on ArtStation" is a typical example). With the SDXL 1.0 base and refiner models downloaded and saved in the right place, workflows shared as .json files (often via a Drive link) load by dragging the file onto the ComfyUI window. This repo contains examples of what is achievable with ComfyUI, and all the images in it contain metadata, which means they can be loaded with the Load button (or dragged onto the window) to get the full workflow, seed included, that was used to create them. There are tutorials for SDXL 1.0 with ComfyUI's Ultimate SD Upscale custom node, and community graphs now bundle ControlNet, hires fix, and a switchable face detailer that regenerates faces, with support for SDXL, the SDXL refiner, and LoRAs.

It is worth being precise about what the refiner is, because it is not hires fix. Hires fix is just creating an image at a lower resolution, upscaling it, and then sending it through img2img; that only increases the resolution and details a bit, since it's a very light pass that doesn't change the overall composition. The refiner is different: the base model generates a (noisy) latent, and to make full use of SDXL you load both models, run the base starting from an empty latent image, and then run the refiner on the base model's output to improve detail. Please do not use the refiner as an img2img pass on top of the base: in Auto1111 you could generate with the base model by itself and then use the refiner for img2img, but that's not quite the same thing and it doesn't produce the same output. SDXL has more inputs than people are used to, and nobody is entirely sure of the best way to use them all; the refiner makes things even more different because it should be used mid-generation, not after it, and A1111 was not built for such a use case. (You can even move a .latent file from the ComfyUI/output/latents folder to the inputs folder to hand half-finished latents between runs.) The denoise control, as always, sets the amount of noise added back to the image.

Experiences at launch varied. Some users downloaded the base model and the refiner only to find that loading took upward of 2 minutes and a single render 30 minutes with very weird output; given that SD 1.5 works with 4 GB even on A1111, that points to resources or corrupted files rather than ComfyUI itself. Others found that with a correct two-step setup the overall image output can outperform single-model runs, that the refiner simply makes an existing image better, and that blurry backgrounds still want a dedicated background-fix workflow. I have updated my own workflow since (cleaning up the layout a bit and adding functions I wanted to learn better), and I know a lot of people prefer Comfy for exactly this kind of iteration.
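To make the hires-fix / refiner distinction concrete, here is what hires fix alone looks like in the diffusers sketch: a light img2img pass reusing the base model's own components, deliberately not the refiner, per the warning above. The sizes and the 0.3 strength are arbitrary illustration values, and the component-unpacking idiom assumes a reasonably recent diffusers version.

```python
from PIL import Image
from diffusers import StableDiffusionXLImg2ImgPipeline

# Reuse the base pipeline's weights for an img2img pass (no refiner here).
hires = StableDiffusionXLImg2ImgPipeline(**base.components).to("cuda")

low = base(prompt=prompt, num_inference_steps=30).images[0]   # 1024x1024
up = low.resize((1536, 1536), Image.LANCZOS)                  # plain Lanczos upscale

# Low strength keeps the composition and just re-adds detail at the new size.
final = hires(prompt=prompt, image=up, strength=0.3,
              num_inference_steps=30).images[0]
final.save("hires_fix.png")
```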
Think of the quality of SD 1.5 and what could be achieved by refining it, and the two-stage design makes sense. You can assign the first 20 of 30 steps to the base model and delegate the remaining steps to the refiner, and the training explains why this split works: the base model was trained on the full range of denoising strengths, while the refiner was specialized on "high-quality, high resolution data" and denoising of <0.2. Yes, only the refiner has the aesthetic score conditioning. Since SDXL 1.0 was released (26 July 2023) it has been warmly received by many users; the chart in the announcement evaluates user preference for SDXL (with and without refinement) over SDXL 0.9, and per the announcement, SDXL 1.0 is "built on an innovative new architecture composed of a 3.5B parameter base model and a 6.6B parameter refiner". The sample prompts show really great results, and the fact that SDXL allows NSFW is a big plus: expect some amazing checkpoints out of this. A detailed walkthrough of the stable SDXL ComfyUI workflow used as an internal AI-art tool at Stability puts it simply: load the SDXL base model first, then load a refiner as well; that part can be wired up later, no rush.

Using the SDXL refiner in practice (Step 6, if you are following the guide): at least 8 GB of VRAM is recommended, and the first load of both checkpoints can take 4-6 minutes. Install what a workflow needs by opening the Manager, clicking "install missing custom nodes", and restarting ComfyUI; if something still fails, make sure everything is updated, since custom nodes can fall out of sync with the base ComfyUI version, though the simplest graphs work with bare ComfyUI, no custom nodes needed. Suites like Comfyroll and the Impact Pack add conveniences: the Impact Pack's install script downloads the YOLO models for person, hand, and face detection, its SEGSPaste node pastes the results of SEGS onto the original, and such a detailer pass MAY occasionally fix a bad face. For latent upscaling, I add an Upscale Latent node after the refiner's KSampler and pass the result to another KSampler. Fuller workflow packs offer automatic calculation of the steps required for both the base and the refiner, quick selection of image width and height based on the SDXL training set, an XY Plot, and ControlNet with the XL OpenPose model released by Thibaud Zamora (installing ControlNet for Stable Diffusion XL works on Windows or Mac). In WebUI-style front ends, you enable the refiner in the "Functions" section and set the "refiner_start" parameter to a value between 0 and 1; use the "Load" button on the menu to pull in saved graphs, and use the SDXL VAE rather than an SD 1.5 one. For output, download an upscaler such as NMKD Superscale x4 to take your images to 2048x2048; the example images here were generated with SDXL base plus refiner and upscaled with Ultimate SD Upscale and 4x_NMKD-Superscale, and the same models are tested and verified working in Automatic1111 (per SECourses).
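In diffusers, the aesthetic-score conditioning shows up as two optional arguments on the refiner's call (the refiner's model config enables them via requires_aesthetics_score); in ComfyUI the same numbers are the ascore widgets on the refiner's CLIPTextEncodeSDXLRefiner node. A hedged sketch continuing from the handoff example earlier:

```python
# Re-run the refiner stage with explicit aesthetic-score conditioning.
# 6.0 / 2.5 are the defaults used in SDXL training; raising the positive
# score nudges the refiner toward its "more aesthetic" training data.
image = refiner(
    prompt=prompt,
    image=latent,
    denoising_start=0.8,
    num_inference_steps=30,
    aesthetic_score=6.0,
    negative_aesthetic_score=2.5,
).images[0]
image.save("refined_ascore.png")
```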
You will need a powerful Nvidia GPU, or Google Colab, to generate pictures with ComfyUI at SDXL sizes, along with the sd_xl_base and sd_xl_refiner .safetensors files; install or update the custom nodes your chosen workflow requires before loading it. What you get in return is a nodes/graph/flowchart interface to experiment with and create complex Stable Diffusion workflows without needing to code anything. One final caveat: as identified in community threads, the VAE on release had an issue that could cause artifacts in the fine details of images, so swap in a fixed VAE (such as the fp16-fix mentioned earlier) if you see them. With that sorted, the "better than Midjourney" SDXL 0.9 tutorials that followed Stability AI's release hold up well in practice.