Best upscaler for comfyui reddit

Is there any benefit in using something like in my post? I tried adjusting prompts to see if maybe the idea is to inject some differences into the photo, but it doesn't change the style or adjust the photo at all.

I'm not an expert in upscaling, but my workflow right now is that I render 768x512 with the first KSampler. If you want more resolution you can simply add another Ultimate SD Upscale node. It doesn't turn out well with my hands, unlucky.

Tried the llite custom nodes with lllite models and was impressed. Good for depth and OpenPose; so far so good.

This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0 Alpha + SDXL Refiner 1.0. You can also look into the custom node "Ultimate SD Upscaler"; there is a YouTube tutorial for it.

waifu2x does not change details. Always wanted to integrate one myself.

I then use a tiled ControlNet and Ultimate Upscale to upscale by 3-4x, resulting in up to 6Kx6K images that are quite crisp. So my question is: is there a way to upscale an already existing image in Comfy, or do I need to do that in A1111?

Welcome to the unofficial ComfyUI subreddit.

Use it when generating, not for upscaling a final image. LDSR might be in this family; it's SLOW.

Hires fix with an add-detail LoRA.

Go up by 4x, then downscale to your desired resolution using an image upscale.

I had the same problem, and those steps tank performance as well. I solved it by using only 1 step and adding multiple iterative upscale nodes. Thanks.

My current workflow involves going back and forth between a regional sampler, an upscaler, and Krita (for inpainting to fix errors and fill in the details) to refine the output iteratively.

I try to use ComfyUI to upscale (using SDXL 1.0 + Refiner). Upscale Latent By: 1.5. But I have some really old images I'd like to add detail to.
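The "go up by 4x then downscale to your desired resolution" tip is just size arithmetic. A minimal sketch in plain Python (the function name and the 768x512 / 1.5x numbers are illustrative, not from any poster's workflow):

```python
def plan_upscale(width, height, model_factor=4, target_factor=1.5):
    """Plan a 'model upscale then downscale' pass.

    A 4x model (e.g. an ESRGAN-family upscaler) always multiplies both
    dimensions by model_factor; to land on target_factor overall, you
    downscale the result by target_factor / model_factor.
    """
    up_w, up_h = width * model_factor, height * model_factor
    down = target_factor / model_factor      # factor for the downscale node
    final = (round(up_w * down), round(up_h * down))
    return (up_w, up_h), down, final

# 768x512 render, 4x model, 1.5x desired overall scale:
# model output (3072, 2048), downscale by 0.375, final (1152, 768)
print(plan_upscale(768, 512))
```

The same arithmetic applies whatever 4x model you pick; only the downscale factor changes with your target size.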
Clarity AI | AI Image Upscaler & Enhancer: a free and open-source Magnific alternative.

Which do you think is best/most realistic? Any thoughts on how to improve? perfecteyes and SkinUpMerge LoRAs for the first image, resampled based on a mask of the hair and face.

(code) This introduces 2 new upscaling methods: '4x_overlapped_checkboard' and '4x_overlapped_constant'.

After Ultimate SD Upscale: then I upscale with 2x ESRGAN and sample the 2048x2048 again, and upscale again with 4x ESRGAN.

...which is traditionally an image-generator AI. Problem is, you advertise it as an upscaler, and it's dogshit by upscaler standards! I am sorry, but there's no other way to put it; even the thumbnail exposes the issue very well.

If you don't want the distortion, decode the latent, upscale the image, then encode it for whatever you want to do next; the image upscale is pretty much the only distortion-"free" way to do it.

I've been working on my first decent workflow and uploaded a version a few days back onto OpenArt; the idea was to have txt2img with a face swapper and upscaler.

I don't want any changes or additions to the image, just a straightforward upscale and quality enhancement.

However, I switched to the Ultimate SD Upscale custom node. I use this YouTube video workflow, and he uses a basic one.

Btw, I don't know what size you're trying to go to, but I get the best results doing only a 1.5x upscale. ATM I start the first sampling at 512x512, upscale with 4x ESRGAN, downscale the image to 1024x1024, and sample it again, like the docs say. I haven't needed to.
Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear.

Then add another node under loaders > "Load Upscale Model".

This way it replicates the SD upscale / Ultimate upscale scripts from A1111.

Just curious if anyone knows of a workflow that could basically clean up/upscale screenshots from a late-90s animation (like Escaflowne or Rurouni Kenshin).

Just use an upscale node.

Is there a version of Ultimate SD Upscale that has been ported to ComfyUI? I am hoping to find a way to implement img2img in a pipeline that includes multi-ControlNet, and a way to make all generations automatically get passed through something like SD upscale without me having to run the upscaling as a separate step.

The SUPIR upscaler is the highest quality, but it is very slow, requires high-end hardware/configuration, and has a few exceptions that cause the image to come out blurrier than the original (like the gem image).

I've seen this reported as a memory issue in A1111, but could that also be the problem here in ComfyUI? (I'm using a 1080 Ti.) It happens no matter if I scale up 1.5x, 2x, or 4x. Would appreciate the help!

Best upscaler/refiner for AnimateDiff vid2vid workflows?

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale like you mentioned.

I usually use 512x768, so it goes 512x768 -> 1024x1536 -> 4096x6144.

ComfyUI is a completely different conceptual approach to generative art.

Images reduced from 12288 to 3840 px width.

I need to KSampler it again after upscaling. I find that tricky, and it eats up more time.
But if you expect everything to work right away without learning how it applies to your own workflows, ComfyUI might not be the best for you.

You should bookmark the upscaler DB; it's the best place to look: https://openmodeldb.info

One recommendation I saw a long time ago was to use a tile width that matched the width of the upscaled output.

New to ComfyUI, so not an expert.

You can use () to change the emphasis of a word or phrase, like: (good code:1.2) or (bad code:0.8). To use () characters in your actual prompt, escape them.

I'm trying to find a way of upscaling the SD video up from its 1024x576.

ControlNet (canny), inpainting, hires upscale using the same models.

Ultimate SD Upscaler Colab.

I'm a recent convert from A1111, since rendering an image in it takes around 5 minutes.

I've tried CCSR, SUPIR, latent upscales, and UltimateSDUpscaler by itself; your workflow beats them all. I'm sure my VRAM is why the workflow takes so long, but I can't argue with the results.

Connect the Load Upscale Model node to Upscale Image (using Model), feed it the image from VAE Decode, then send that image to your preview/save image node.

Subsequently, I'd cherry-pick the best one and employ Ultimate SD Upscale for a 2x upscale.

So, segmenting the car with SAM & DINO, inverting the mask, and putting the car in the scene, I got some great compositions. The only issue I have with some of them is that, while the lighting works, the colours differ between the inpainted car and the overall scene.

Okay, I'm trying to do a kind of hires fix in my workflow, where I upscale the latent (with NNLatentUpscale) and do a low-denoise (0.3) sampler pass.

And here's my first question: is one better than the other?

Today we will use ComfyUI to upscale Stable Diffusion images to any resolution we want, and even add details along the way using an iterative workflow!
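The (text:weight) emphasis syntax mentioned above can be illustrated with a toy parser. This is a simplified sketch, not the real prompt parser (real implementations handle nesting and escaped parentheses); the 1.1 default for a bare (text) group follows the convention quoted later in the thread:

```python
import re

# Matches flat "(text)" or "(text:weight)" groups; no nesting, no escapes.
TOKEN = re.compile(r"\(([^():]+)(?::([\d.]+))?\)")

def parse_emphasis(prompt):
    """Return (text, weight) pairs for every parenthesised group.

    A bare "(text)" group gets the conventional default weight of 1.1.
    """
    return [(m.group(1), float(m.group(2)) if m.group(2) else 1.1)
            for m in TOKEN.finditer(prompt)]

# [('good code', 1.2), ('bad code', 0.8), ('emphasised', 1.1)]
print(parse_emphasis("(good code:1.2) or (bad code:0.8) and (emphasised)"))
```

This also shows why () characters in a literal prompt need escaping: otherwise they are read as weight groups.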
Added support for AuraSR.

Because I don't understand why ultimate-sd-upscale can manage the same resolution in the same situation.

The ComfyUI workflow uses the latent upscaler (nearest/exact) set to 512x912 multiplied by 2, and it takes around 120-140 seconds per image at 30 steps with SDXL 0.9.

I recently started tinkering with Ultimate SD Upscaler as well as other upscale workflows in ComfyUI. But somehow it creates an additional person inside already-generated images.

I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, etc.? Hopefully you can see this in the following picture.

Latent upscale is worth it (if the base resolution is not too high) because it allows you to use greater batch sizes and has virtually no processing-time cost.

Sample a 3072x1280 image, sample again for more detail, then upscale 4x, and the result is a 12288x5120 px image.

I love to go with an SDXL model for the initial image and with a good 1.5 model afterwards.

If you want to use RealESRGAN_x4plus_anime_6B, you need to work in pixel space and forget any latent upscale.

2x upscale using Ultimate SD Upscale and a tile ControlNet.

Edit: also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax).

Here's what I typed last time this question was asked: AFAIK, for Automatic1111 only the "SD upscaler" script uses SD to upscale, and it's hit or miss.

Increasing the mask blur lost details, but increasing the tile padding to 64 helped.

I added a switch toggle for the group on the right.
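For context on the latent-upscaler numbers above: with the SD/SDXL VAE, latents are 1/8 of the pixel resolution and have 4 channels, so sizes can be sanity-checked with a sketch like this (plain Python; the (batch, channels, height, width) convention is the usual one, and this applies to SD 1.5/SDXL, not to every model family):

```python
def latent_shape(width, height, batch=1):
    """Shape of an SD/SDXL latent for a given pixel size.

    The SD VAE downsamples by 8 in each spatial dimension and uses
    4 latent channels, so a WxH image becomes (batch, 4, H/8, W/8).
    """
    assert width % 8 == 0 and height % 8 == 0, "SD sizes should be multiples of 8"
    return (batch, 4, height // 8, width // 8)

print(latent_shape(512, 912))    # the base size above: (1, 4, 114, 64)
print(latent_shape(1024, 1824))  # after the 2x latent upscale: (1, 4, 228, 128)
```

This is also why "latent upscale has virtually no processing-time cost": the tensor being resized is 64x smaller than the pixel image.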
I tried all the possible upscalers in ComfyUI: LDSR, Latent Upscale, several models such as the NMKD ones, the Ultimate SD Upscale node, "hires fix" (yuck!), the iterative latent upscale via pixel space node (mouthful), and more.

Ultimate SD Upscale is the best for me; you can use it with ControlNet tile in SD 1.5.

Hello. For more consistent faces, I sample an image using the IPAdapter node (so that the sampled image has a similar face), then I latent-upscale the image and use the ReActor node to map the same face used in the IPAdapter onto the latent-upscaled image.

ComfyUI SDXL upscaler / hires fix: sorry for the possibly repetitive question, but I wanted to get an image with a resolution of 1080x2800, while the original image is generated as 832x1216.

Please share your tips, tricks, and workflows for using this software to create your AI art. Please keep posted images SFW.

You are the best! Upscale speed is fast compared to other upscalers.

I generate an image that I like, then mute the first KSampler, unmute the Ultimate SD upscaler, and upscale from that.

Also, IPAdapter was used.

Updated ComfyUI Workflow: SDXL (Base+Refiner) + XY Plot + Control-LoRAs + ReVision + ControlNet XL OpenPose + Upscaler. I have updated the workflow submitted last week, cleaning up the layout a bit.

ssitu/ComfyUI_UltimateSDUpscale: ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A.

Put your folder in the top-left text input.

I would start here and compare different upscalers.

This may not be perfect, as I am a ComfyUI newbie, and I spent way too many hours making the lines look nice. Cheers.

Here is a workflow that I use currently with Ultimate SD Upscale.

Now I have made a workflow that has an upscaler in it, and it works fine; the only thing is that it upscales everything, and that is not worth the wait with most outputs.

In the saved workflow it's at 4, with 10 steps (Turbo model), which is like a 60% denoise.
The workflow used is the Default Turbo Postprocessing from this Gdrive folder.

Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out.

Please stick to the notes; I have tried to explain as well as I can what happens everywhere.

You don't need to press the queue. First I generate an image with txt2img.

You've changed the batch size on the "pipeLoader - Base" node to be greater than 1 -> change it to 1 and try again. You've possibly messed the noodles up on the "Get latent size" node under the Ultimate SD Upscale node -> it should use the two INT outputs.

Basically, if I find the SDXL Turbo preview close enough to what I have in mind, I one-click the group toggle node and use the normal SDXL model to iterate on SDXL Turbo's result, effectively iterating with a 2nd KSampler at a low denoise strength.

This breaks the composition a little bit, because the mapped face is most of the time too clean.

Using Ultimate SD Upscaler, I'm always getting this weird black grid pattern, with the SDXL image only visible through the cracks.

Would anyone be able to provide a link to such workflows? Thanks in advance!

The best result I have gotten so far is from the regional sampler from the Impact Pack, but it doesn't support SDE or UniPC samplers, unfortunately.

There are also other upscale methods that can upscale latents with less distortion; the standard ones are bicubic, bilinear, and bislerp.

Here is my demo of the Würstchen v3 architecture at 1120x1440 resolution.

I'm using Ultimate SD Upscale with SDXL Lightning without any issues.

0.45 denoise is the minimum and fairly jagged.

I always lose details with SD upscaler 🤔

Best ComfyUI detail-adding upscaler?

Upscale Image: these can be used to downscale by setting either a direct resolution or going under 1 on the "Upscale Image By" node.
That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and I pass into the node whatever image I like.

So from VAE Decode you need an "Upscale Image (using Model)" node, under loaders.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space.

ComfyUI upscaling is best for a dozen or so upscales; alas, it would take all week to do 100+. Your UltimateSDUpscale Lightning workflow is slow as heck, but 100% the best upscaler I have used so far.

So normally I will work from a 1344x768 image and then take it up about 1.5x, sometimes 2x.

That means I'm also a beginner.

It's 100% depending on what you are upscaling and what you want it to look like when done. I only have 4GB VRAM, so I haven't gotten SUPIR working on my local system.

You just need to press 'refresh' and go to the node to see if the models are there to choose.

It's the best at keeping faithful to the original image whilst adding extra detail. (Change the pos and neg prompts in this method to match the primary pos and neg prompts.)

I'm trying to use Ultimate SD Upscale for upscaling images. I find upscaling useful, but as I often upscale to 6144x6144, Gigapixel has the batch speed and capacity to make 100+ upscales worthwhile.

For the best results, diffuse again with a low denoise, tiled or via Ultimate Upscale (without scaling!).

My workflow is more: generate images at a smaller size, like 512x384, once you have a good prompt and/or seed, then use hires fix to upscale in the txt2img tab (the main thing there for me is cutting the default denoising down to something like 0.3).

If they are wired correctly, clicking Queue Prompt should show two large images, one with the AI upscaler and the other with Ultimate Upscale.
Then use those with the Upscale Using Model node. I've had very mixed results with SD Upscale and don't use it much.

Hi there, I am using Ultimate SD Upscale, but it just does the same process again and again. Below is the console output; hoping to get some help:

Upscaling iteration 1 with scale factor 2

What upscalers work best with animation based on LCM + diffusion models?

Outdated custom nodes -> Fetch Updates and Update in ComfyUI Manager.

Last is the original upscale only.

They can all "upscale" (bigger image but slightly different detail, hallucination-esque) as well as import trained models.

Always good to see more ComfyUI users in the wild; it's too underappreciated IMO.

However, I can't find any workflows that incorporate upscalers for 1.5 models.

The "Upscale and Add Details" part splits the generated image, upscales each part individually, adds details using a new sampling step, and after that stitches the parts back together.

If you were advertising it as an "image enhancer" instead of an upscaler, then sure.
For illustration/anime models you will want something smoother, which would tend to look "airbrushed" or overly smoothed-out on more realistic images; there are many options.

The standard ESRGAN 4x is a good jack of all trades that doesn't come with a crazy performance cost, and if you're low on VRAM, I would expect you're using some sort of tiled upscale.

The best upscaler I've used in Comfy is SUPIR. 4 steps with a CFG of 1, RealVisXL V4.0.

I just learned Comfy, and I found that if I just upscale it, even 4x, it won't do much.

If you really want to learn Comfy, the best way IMO is to start by drilling the core pipeline into your head, not by jumping straight into someone else's advanced workflow.

For realism I use 4x_NMKD-Siax_200k as the upscaler when using hires fix @ 2x scale (512x512 or 512x768 base res); then, if I want to go further, I push the image to img2img and use Ultimate SD Upscale (with the same upscaler) at 4x scale with a low denoise.

Yup, quite often I will happily use an iterative upscale with some sharpening and face detailing in between, through 3 or 4 steps, to get to 8K so I can downscale to 4K for the best image.

The final steps are as follows: apply the inpaint mask, run it through a KSampler, then take the latent output and send it to a latent upscaler (doing a 1.5x upscale), then to a KSampler running 20-30 steps.
The workflow has different upscale flows that can upscale up to 4x, and in my recent version I added a more complex flow that is meant to add details to a generated image.

I learned this from Sytan's workflow; I like the result.

Adding in the Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image.

0.4 for denoise for the original SD Upscale.

r/StableDiffusion • [Major Update] sd-webui-controlnet 1.1.400

I hope this is due to your settings or because this is a WIP, since otherwise I'll stay away.

The problem with simply upscaling them is that they are kind of "dirtier", so a simple upscale doesn't really clean them up around the lines, and the colors are a bit dimmer/darker.

If you have any questions, please write to me! As a bonus, there is also a workflow inside to create images with style transfer :) Caution!

Hello guys, I've discovered that Magnific and Krea excel in upscaling while automatically enhancing images, creatively repairing distortions and filling in gaps with contextually appropriate details, all without the need for prompts; just images as input, that's it.

So your workflow should look like this: KSampler (1) -> VAE Decode -> Upscale Image (using Model) -> Upscale Image By (to downscale the 4x result to the desired size) -> VAE Encode -> KSampler (2).

So my favorite so far is ESRGAN_4x, but I am willing to try other upscalers good for adding fine detail and sharpness.

I'm looking for a workflow for ComfyUI that can take an uploaded image and generate an identical one, but upscaled using Ultimate SD Upscaling. The other approach is to use a locked upscaler.

Maybe it doesn't seem intuitive, but it's better to use a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale.
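The KSampler (1) -> VAE Decode -> Upscale Image (using Model) -> Upscale Image By -> VAE Encode -> KSampler (2) chain can be sketched in ComfyUI's API ("prompt") JSON format. This is an illustrative fragment, not a complete workflow: the node IDs, the upscaler filename, and the sampler settings are placeholder assumptions, and nodes 4/6/7/9 (checkpoint loader, prompts, first sampler) are assumed to exist elsewhere in the graph:

```python
# Each entry is {"node_id": {"class_type": ..., "inputs": ...}};
# ["11", 0] means "output slot 0 of node 11".
chain = {
    "10": {"class_type": "VAEDecode",
           "inputs": {"samples": ["9", 0], "vae": ["4", 2]}},
    "11": {"class_type": "UpscaleModelLoader",
           "inputs": {"model_name": "4x_NMKD-Siax_200k.pth"}},
    "12": {"class_type": "ImageUpscaleWithModel",
           "inputs": {"upscale_model": ["11", 0], "image": ["10", 0]}},
    "13": {"class_type": "ImageScaleBy",
           "inputs": {"image": ["12", 0], "upscale_method": "lanczos",
                      "scale_by": 0.375}},  # 4x model * 0.375 = 1.5x overall
    "14": {"class_type": "VAEEncode",
           "inputs": {"pixels": ["13", 0], "vae": ["4", 2]}},
    "15": {"class_type": "KSampler",
           "inputs": {"model": ["4", 0], "positive": ["6", 0],
                      "negative": ["7", 0], "latent_image": ["14", 0],
                      "seed": 12345, "steps": 20, "cfg": 7.0,
                      "sampler_name": "euler", "scheduler": "normal",
                      "denoise": 0.35}},  # second pass at low denoise
}
```

The key design points the thread keeps repeating are visible here: the downscale happens in pixel space (node 13), and the second KSampler runs at a low denoise so it refines rather than repaints.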
I decided to pit the two head to head; here are the results, workflow pasted below (did not bind it to the image metadata because I am using a very custom, weird setup).

Thanks for sharing those. Superscale is the other general upscaler I use a lot.

Thoughts on variations for detailing a face.

Using ComfyUI, you can increase the size. I will say SUPIR is the best upscaling at the moment.

Please see the original 512px image: original 512px, 4x_UltraSharp upscale to 1536x1536px, ESRGAN_4x upscale to 1536x1536px.

Create a new ComfyUI (I have created a comfyuiSUPIR only for SUPIR), and in the new ComfyUI, link the model folders with the full path for the base models folder and the checkpoint folder (at least) in comfy/extra-model.

Do you just upscale it, or? Or is it a custom node from Searge / others? I can't see it, because I can't find the link for the workflow.

But basically txt2img, img2img, 4x upscale with a few different upscalers.

Does such a workflow exist? If so, could you guide me on how to set it up? Thanks in advance!

I too use SUPIR, but just to sharpen my images on the first pass.

Only the LCM Sampler extension is needed, as shown in this video.

Any idea? (Already tried SIAX_200K: good details, but it adds too much texture/noise to the image.)

So I added the 4x/16x quick upscaler option with high speed, which doesn't require much VRAM.

ComfyUI workflow: 4x upscaler, variable prompter (SD1.5 & SDXL/Turbo).

Usually I use two of my workflows: "Latent upscale" and then denoising, or "Upscaling with model" and then denoising.

Here's an example with some math to double the original image's resolution.

That might be a great upscale if you want semi-cartoony output, but it's nowhere near realistic.

The second image only used the FaceDetailer node. 0.80 denoise is usually mutated but sometimes looks great.

I made one (FaceDetailer > Ultimate SD Upscale > EyeDetailer > EyeDetailer). It's messy right now, but it does the job.
ComfyUI-WD14-Tagger, ComfyUI_UltimateSDUpscale, ComfyUI-Advanced-ControlNet, ComfyUI-KJNodes, ComfyUI-Frame-Interpolation, ComfyUI-AnimateDiff-Evolved, rgthree-comfy, comfyui_controlnet_aux, ComfyUI_Dave_CustomNode, ComfyUI-Flowty-LDSR, ComfyUI_InstantID, ComfyUI-VideoHelperSuite, ComfyUI-Manager, clipseg.

The only approach I've seen so far is using the hires fix node, where its latent input comes from AI upscale > downscale image nodes.

Workflow: Google Drive link.

This method consists of a few steps: decode the samples into an image, upscale the image using an upscaling model, and encode the image back into the latent.

In Episode 12 of the ComfyUI tutorial series, you'll learn how to upscale AI-generated images without losing quality.

0.3 usually gives you the best results.

An all-in-one workflow would be awesome.

It's not very fancy, but it gets the job done.

Us - the upscaler is the Ultimate SD Upscale node. DlRp - using two roop nodes for two persistent characters (for illustrations). LoRA - I have a LoRA group that I connect, swap, and change as I want to.

In faceswaplab on A1111 there is an option to upscale the swapped image before it gets pasted back, resulting in a much sharper result.

They work as a single node without a sampler, but of course they can be part of a larger Comfy workflow.

You could also try a standard checkpoint with, say, 13 and 30 steps.

Upscale-by-model will take you up to 2x or 4x or whatever.

I did once get some noise I didn't like, but I rebooted and all was good on the second try.

In A1111, I employed a resolution of 1280x1920 (with hires fix), generating 10-20 images per prompt.

The initial latents are randomized fractal noise (custom node named Perlin Power Fractal Noise).
Resource - Update: I created my first workflow for ComfyUI and decided to share it with you, since I found it quite helpful.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI.

Maybe someone can help me out with this: I'm using your basic upscale latent -> KSampler approach to upscaling my generations, but as you can see in the picture, the original image has a load of character and fine detail, while the higher-resolution image seems to have lost more or less all of it and has become hugely diffused (ha) and indistinct.

After 2 days of testing, I found Ultimate SD Upscale to be detrimental here.
Images are too blurry and lack detail; it's like upscaling any regular image with some traditional method.

Started to use ComfyUI/SD locally a few days ago, and I wanted to know how to get the best upscaling results.

If it's a distant face, then you probably don't have enough pixel area to do the fix.

USD is great at upscaling without changing details, of course with very low denoise.

Was using FaceID v2, and the upscaler I had was good.

ComfyUI SDXL upscaler / hires fix, Question/Help: sorry for the possibly repetitive question, but I wanted to get an image with a resolution of 1080x2800, while the original image is generated as 832x1216. Please help me fix this issue.

Now, transitioning to Comfy, my workflow continues at the 1280x1920 resolution.

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch": put in the image numbers you want to upscale and rerun the workflow.

Set the tiles to 1024x1024 (or your SDXL resolution) and set the tile padding to 128.

With it, I either can't get rid of visible seams, or the image is too constrained by the low denoise and so lacks detail.

It's the .txt file after you remove the « txt » extension.

It's so wonderful what the ComfyUI Kohya Deep Shrink node can do on a video card with just 8GB.

A step-by-step guide to mastering image quality.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.
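The tile settings quoted above (1024x1024 tiles, padding 128) determine how many tiled diffusion passes an Ultimate-SD-Upscale-style run makes. A rough sketch of the arithmetic (simplified: the real node also handles seam fixing, masks, and edge tiles):

```python
import math

def tile_grid(width, height, tile=1024, padding=128):
    """Rough tile count for an Ultimate-SD-Upscale-style pass.

    Simplified model: the upscaled image is cut into a
    ceil(H/tile) x ceil(W/tile) grid, and each tile is diffused with
    `padding` extra pixels of surrounding context to hide the seams.
    """
    cols = math.ceil(width / tile)
    rows = math.ceil(height / tile)
    context = tile + 2 * padding  # pixels of context per tile pass
    return rows, cols, rows * cols, context

# 2x upscale of a 1024x1024 SDXL image -> 2048x2048:
# four 1024px tiles, each diffused with 1280px of context
print(tile_grid(2048, 2048))
```

This is also why large upscales get slow: a 4096x6144 target at these settings means 24 separate sampling passes.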
It says by default "masterpiece best quality girl"; how does CLIP interpret "best quality" as 1 concept rather than 2? The whole point of commas is to make sure CLIP understands 2 words as 1.

In fact, waifu2x is NOT an AI image generator.

To get the absolute best upscales requires a variety of techniques, and it often requires regional upscaling at some points.

That works with an 8GB card :)

Are there any nodes / possibilities for handling an RGBA image (preserving the alpha channel and the related exact transparency) in iterative upscale methods? I tried "Ultimate SD Upscale", but it has a 3-channel input and refuses the alpha channel; "VAE Encode for inpainting" (which has a mask input) also refuses 4-channel input.

I tried LCM with Ultimate SD Upscale; it's fast but very bad quality.

Also make sure you install missing nodes with ComfyUI Manager.

I eventually just started using a simple Upscale Image (using Model) node, selecting the upscaler model and sending it my image.

I have reduced the noise to 0.1 with the Ultimate Upscaler, otherwise it gets even worse. Maybe somewhere I will point out the issue.

The best method, as said below, is to upscale the image with a model (then downscale if necessary to the desired size, because most upscalers do 4x and that's often too big to process), then send it back to VAE Encode.

The ComfyUI workflow I'm currently utilizing with an upscaler for SDXL is functioning smoothly.

All hair strands are super thick and contrasty, the lips look plastic, and the upscale couldn't deal with her weird mouth expression because she was singing.

I use the defaults and a 1024x1024 tile.

The workflow posted here relies heavily on useless third-party nodes from unknown extensions.
It encapsulates the difficulties and idiosyncrasies of Python programming by breaking the problem down into units which are represented as nodes.

The higher the denoise number, the more things it tries to change.

I created this workflow to do just that.

I've uploaded the workflow link and the generated pictures from before and after Ultimate SD Upscale for reference.

In this case he also uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained LoRAs; however, in my tests that node made no difference.

Resize to roughly 1.5x the size. Seed: 12345 (same seed), CFG: 3 (same CFG), Steps: 5 (same), Denoise: this is where you have to test. This will allow detail to be built in during the upscale.

(In the 250-pixel range)? I assume most everything is 512 and higher, based on SD 1.5.

I have yet to find an upscaler that can outperform the proteus model.

Go to civitai and filter for upscale models to find the best (that you like / that suits the style of wallpaper).

How can I run Ultimate SD Upscaler on Colab? Anyone know?

You'll find upscale models here: https://openmodeldb.info

I want to replicate the "upscale" feature inside "extras" in A1111, where you can select a model and the final size of the image.

I just generate my base image at 2048x2048 or higher, and if I need to upscale the image, I run it through Topaz Video AI to 4K and up.

There is no best-at-everything option IMO.

What am I doing wrong, and what are the best practices for doing a hires pass?

Curious what my best option/operation/workflow and upscale model would be.

0.6 denoise and either CNet strength 0.9 or 0.5 (euler, sgm_uniform).
If you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook.

…1.5-based models with greater detail in SDXL 0.9, but it looks like I need to switch my upscaling method. Still working on the whole thing, but I got the idea down.

Thank you for your help! I switched to the Ultimate SD Upscale (with Upscale), but the results appear less real to me and it seems like it is making my machine work harder.

The default emphasis for () is 1.1 (possibly for Automatic1111, but I only use ComfyUI now).

I had seen a tutorial method a while back that would allow you to upscale your image by grid areas, potentially letting you specify the "desired grid size" on the output of an upscale and how many grids (rows and columns) you wanted.

I just told you: chaiNNer, Midjourney, A1111, ComfyUI — it's all Stable Diffusion based.

Got a tiled ControlNet and PatchModelAddDownscale. But if I try the same settings in something like NNLatentUpscale or Upscale Latent By, it still changes details quite a bit. The process changes the image so much that the output is useless in every case other than brute-force t2i diffusion.

Now I am trying different start-up parameters for ComfyUI, like disabling smart memory, etc. Help me make it better!

I was always told to use cfg:10 and around 0.5 noise.

Another good alternative is tiled ControlNet upscaling.

I needed a workflow to upscale and interpolate the frames to improve the quality of the video.

TL;DR: Check out DALL-E 3 or some of the other all-in-one workflows if minutiae isn't your thing.
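An iterative upscale chain usually runs several passes, and a pattern mentioned elsewhere in this thread is decreasing the denoise each pass: big changes early, gentle refinement at the end. A sketch of that schedule (`iterative_schedule` is an illustrative helper; the start/end values are placeholders you would tune):

```python
def iterative_schedule(start_denoise=0.5, end_denoise=0.2, passes=3):
    """Denoise values for a chain of upscale passes: allow big changes
    early, then only gentle refinement on the final (largest) pass."""
    if passes == 1:
        return [start_denoise]
    step = (start_denoise - end_denoise) / (passes - 1)
    return [round(start_denoise - i * step, 3) for i in range(passes)]

print(iterative_schedule())  # → [0.5, 0.35, 0.2]
```

Each value would feed the denoise input of one upscale-plus-sample stage, with the injected noise playing the role the comment above describes.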
It depends on how large the face in your original composition is. If it's a close-up, then fix the face first.

Spend some time (a week or two, assuming you're a daily user) forcing yourself to open up a blank canvas and build the basic workflow from scratch.

Latent upscale to double, so 1536x1024, detailer for the face, and then SD Ultimate Upscale.

I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile.

Is there any way something similar can be done in Comfy using ReActor or a similar node?

The classic GAN-based upscalers are the most straightforward and, IMO, the best at the moment.

End result (with workflow): I have no idea what else I can or must do; I have already tried various upscale models, but none of them really work.

I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

Using ComfyUI, is there a good way to downscale a 4096x4096 (for example) image, sample it, then re-upscale it for faster generations? I'm playing around with "Image Scale by Ratio" and "Upscale Latent" but unsure of a good approach.

Learn how to upscale images using ComfyUI and the 4x-UltraSharp model for crystal-clear enhancements.

But I've been thinking about trying a detailer after SD Ultimate Upscaler. …(1.5x upscale) upscaler into a KSampler running 20-30 steps.

Don't use YAML; try the default one first, and only that.

The images were created with ComfyUI (Dream ShaperXL 1.0 Alpha + SDXL Refiner 1.0).
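Tiled upscalers like Ultimate SD Upscale run a full sampler pass per tile, which is why large targets get slow. The sketch below is only a rough estimate of the tile grid — not the extension's exact tiling math (padding and seam-fix modes add extra passes):

```python
import math

def tile_count(width, height, tile_w=1024, tile_h=1024):
    """Rough number of tiles for a tiled upscale pass; each tile is a
    full KSampler run, so total time grows with this count."""
    cols = math.ceil(width / tile_w)
    rows = math.ceil(height / tile_h)
    return cols, rows, cols * rows

print(tile_count(4096, 4096))  # → (4, 4, 16)
print(tile_count(3072, 2048))  # → (3, 2, 6)
```

This is also why downscale-sample-reupscale (as asked above) helps: sampling at 2048x2048 instead of 4096x4096 quarters the tile count.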
Having said that, more than half of the 10 variations of the FaceID implementation explored by the developer did well; it was just a few that seemed to have issues.

You have two different ways to perform a "Hires Fix" natively in ComfyUI: a latent upscale, or an upscaling model. You can download the workflows over on the Prompting Pixels website.

Hello, I'm a beginner looking for a somewhat simple all-in-one workflow that would work on my 4070 Ti Super with 16 GB of VRAM.

I have switched over to the Ultimate SD Upscale as well, and it works the same for the most part, only with better results.

And at the end of it, I have a latent upscale step that I can't for the life of me figure out.

I upscaled it to a resolution of 10240x6144 px for us to examine the results.

IMHO the best use case for latent upscalers is in highres fix, with a reasonable upscale (1.5x or so).

I used it with SD 1.5; now I use it only with SDXL (bigger tiles, 1024x1024), and I do it multiple times with decreasing denoise and CFG.

You'll find them at openmodeldb.info, including some 1:1 models for the reduction of JPEG artefacts, etc.

Upscaling from 2K to 4K is no problem, using 2K tiles, half-tile seam fix, and Chess mode, with denoise set low.

This is done after the refined image is upscaled and encoded into a latent. There is no tiling in the default A1111 hires fix.

It's a 2x upscale workflow, after borrowing many ideas and learning ComfyUI. Using tiled diffusion to go higher never looks as good.

Not familiar with that upscaler, though. HTH.

Hello, A1111 user here, trying to make a transition to ComfyUI, or at least to learn ways to use both.

Pop the one you choose into models > upscale_models.
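The two native hires-fix routes mentioned above can be sketched as ComfyUI API-format fragments (plain Python dicts). The class names match ComfyUI's built-in nodes, but the node IDs, upstream references, and the model filename are placeholders — treat this as a sketch, not a drop-in workflow:

```python
# Route 1: latent upscale between two KSampler passes (hypothetical IDs).
latent_route = {
    "10": {
        "class_type": "LatentUpscale",
        "inputs": {
            "samples": ["3", 0],          # latent output of the first KSampler
            "upscale_method": "nearest-exact",
            "width": 1536, "height": 1024,
            "crop": "disabled",
        },
    },
    # ...a second KSampler at moderate denoise would consume node 10 here
}

# Route 2: pixel-space upscale with a trained model after VAE decode.
model_route = {
    "20": {"class_type": "UpscaleModelLoader",
           "inputs": {"model_name": "4x-UltraSharp.pth"}},  # placeholder file
    "21": {"class_type": "ImageUpscaleWithModel",
           "inputs": {"upscale_model": ["20", 0],
                      "image": ["8", 0]}},                  # image from VAEDecode
}
```

Route 1 re-samples, so it invents detail but also changes the image; route 2 is deterministic but can only sharpen what is already there — which is why several comments here combine a model upscale with a low-denoise sampling pass afterwards.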
This may have more to do with the base model than with FaceID, but in some cases cranking the weight of the IPAdapter to the max (3) would result in a tan face with few ethnic features at best.

Upscale about 1.7 times, then face detail and sharpen.

A pixel upscale using a model like UltraSharp is a bit better (and slower), but it'll still be fake detail when examined closely.

Hey guys, so I've been doing some inpainting, putting a car into other scenes using masked inpainting.

It seemed like a smaller tile would add more detail, and a larger tile would add less. A denoise of 0.65 seems to be the best. But I probably wouldn't upscale by 4x at all.

I call it "The Ultimate ComfyUI Workflow": easily switch from Txt2Img to Img2Img, built-in Refiner, LoRA selector, Upscaler & Sharpener.

It does fix some things, like faces or hands.