ComfyUI workflow examples (Reddit)

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art, and please keep posted images SFW.

You can find the Flux Dev diffusion model weights here. If you were asking how to get the workflow into the PNG: you just need to create the PNG in ComfyUI, and it will automatically contain the workflow as well. If you have any of those generated images as original PNGs, you can just drop them into ComfyUI and the workflow will load.

It's not meant to overwhelm anyone with complex, cutting-edge tech, but rather to show the power of building modules/groups as blocks and merging them into a workflow through muting (easily done from the Fast Muter nodes) and Context Switches. A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel. Let me know if you need help replicating some of the concepts in my process. Mine do include workflows, for the most part, in the video description. Civitai has a few workflows as well.

That being said, here's a 1024x1024 comparison as well. It upscales the second image up to 4096x4096 (4x-UltraSharp) by default for simplicity, but that can be changed to whatever you like. Only the LCM Sampler extension is needed, as shown in this video.

Is there a workflow with all features and options combined that I can simply load and use?

2/ Run the step 1 workflow ONCE; all you need to change is where the original frames are and the dimensions of the output you wish to have. It would require many specific image-manipulation nodes to cut out an image region, pass it through the model, and paste it back.

Upcoming tutorial: SDXL LoRA + using a 1.5 LoRA with SDXL, upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

It's a simpler setup than u/Ferniclestix uses, but I think he likes to generate and inpaint in one session, whereas I generate several images, then import them and inpaint later (like this).

ComfyUI - Ultimate Starter Workflow + Tutorial. Heya, I've been working on this workflow for like a month and it's finally ready, so I also made a tutorial on how to use it. It'll add nodes as needed if you enable LoRAs or ControlNet or want it refined at 2x scale or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to.

I recently switched from A1111 to ComfyUI to mess around with AI image generation. My actual workflow file is a little messed up at the moment; I don't like sharing workflow files that people can't understand. My process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs. I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, etc. would be welcome. If the term "workflow" is something that has only been used exclusively to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or just "nodes".
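Several comments above rely on the fact that ComfyUI PNGs carry their own workflow. Concretely, ComfyUI writes the graph as JSON into the PNG's text chunks (under the keys "workflow" and "prompt"), which is why dragging an original PNG back in restores the graph, and why a re-encoded or stripped copy loads nothing. A minimal sketch of reading it back, assuming Pillow is installed; the file name is hypothetical:

```python
# Minimal sketch: recover the ComfyUI workflow embedded in a generated PNG.
# ComfyUI's SaveImage node stores the graph JSON in PNG text chunks.
import json
from PIL import Image

def extract_workflow(png_path: str):
    """Return the embedded ComfyUI workflow graph as a dict, or None."""
    img = Image.open(png_path)
    raw = img.info.get("workflow")  # text chunk written at save time
    return json.loads(raw) if raw else None

if __name__ == "__main__":
    wf = extract_workflow("example_output.png")  # hypothetical file name
    if wf is None:
        print("No workflow metadata (image was probably re-encoded).")
    else:
        print(f"Workflow contains {len(wf.get('nodes', []))} nodes")
```

This also explains the "metadata is not complete" complaint below: screenshots and images recompressed by an image host lose these chunks, so only the original PNG will load.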
Open-sourced the nodes and example workflow in this GitHub repo, and my colleague Polina made a video walkthrough to help explain how they work! Nodes include: LoadOpenAIModel. It works by converting your workflow.json files into an executable Python script that can run without launching the ComfyUI server. Everything else is the same. But as a base to start from, it'll work. This is just a simple node built off what's given and some of the newer nodes that have come out.

You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder. Put the flux1-dev.sft file in your ComfyUI/models/unet/ folder as well. Flux.1 ComfyUI install guidance, workflow and example.

While waiting for it, as always, the amount of new features and changes snowballed to the point that I must release it as is. You can then load or drag the following image in ComfyUI to get the workflow. Workflow image with generated image. But standard A1111 inpaint works mostly the same as this ComfyUI example you provided; ignore the prompts and setup. That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.

- LoRA selector (for example, download the SDXL LoRA example from StabilityAI and put it into ComfyUI\models\lora\)
- VAE selector (download the default VAE from StabilityAI and put it into ComfyUI\models\vae\), just in case there's a better VAE or a mandatory VAE for some models in the future; use this selector
- Restart ComfyUI

Hey everyone, got a lot of interest in the documentation we did of 1600+ ComfyUI nodes, and I wanted to share the workflow + nodes we used to do so using GPT-4. Just my two cents.

Starting workflow. The idea of this workflow is to sample different parts of the sigma_min, cfg_scale, and steps space with a fixed prompt and seed. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

Here are approx. 150 workflow examples of things I created with ComfyUI and AI models from Civitai. This is the most well-organised and easy-to-use ComfyUI workflow I've come across so far for showing the difference between the Preliminary, Base and Refiner setup. The sample prompt as a test shows a really great result.

Here is an example of 3 characters, each with its own pose, outfit, features, and expression. Left: woman wearing full armor, ginger hair, braided hair, hands on hips, serious. Middle: girl, princess dress, blonde hair, tiara, jewels, sitting on a throne, blushing.

Users of ComfyUI, which premade workflows do you use? I read through the repo, but it has individual examples for each process we use: img2img, ControlNet, upscale, and all. An all-in-one workflow would be awesome. I found it very helpful; hopefully this will be useful to you. I then just sort of pasted them together.

Step 2: Download this sample image. Step 3: Update ComfyUI. Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options). Step 5: Drag and drop the sample image into ComfyUI. Step 6: The fun begins! If the queue didn't start automatically, press Queue Prompt.

I've got 3 tutorials that can teach you how to set up a decent ComfyUI inpaint workflow. So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.
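The project above compiles workflow.json into a standalone script. A related, simpler pattern (my own sketch, not that project's code) is to drive a running ComfyUI server over its HTTP API: the server accepts a workflow exported with "Save (API Format)" as a POST to /prompt. Assumes the default address 127.0.0.1:8188 and a hypothetical exported file name:

```python
# Sketch: queue an API-format workflow against a running ComfyUI server.
# This is the complement of the "no server needed" extension above: here
# the server does the work and we just submit the graph.
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # ComfyUI's default listen address

def queue_workflow(api_workflow_path: str) -> str:
    """Queue a workflow and return the prompt_id ComfyUI assigns to it."""
    with open(api_workflow_path) as f:
        workflow = json.load(f)
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(f"{COMFY_URL}/prompt", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]

if __name__ == "__main__":
    print(queue_workflow("workflow_api.json"))  # hypothetical exported file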
You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well. The example images on the top are using the "clip_g" slot on the SDXL encoder on the left, and the default workflow CLIPText on the right. This probably isn't the completely recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format. EDIT: For example, this workflow shows the use of the other prompt windows.

The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not.

[If for some reason you want to run something that is less than 16 frames long, all you need is this part of the workflow.] Seems very hit and miss; most of what I'm getting looks like 2D camera pans. The video is just a screenshot of the workflow I used in ComfyUI to get the output files.

SDXL Default ComfyUI workflow. Merging 2 Images together. That's where I'd gotten my second workflow I posted from, which got me going. I put the workflow to the test by creating people with hands etc. (second pic), and it got very good results. Flux Dev: surprisingly, I got the most realistic images of all so far. The examples were generated with the RealisticVision 5.1 checkpoint. No LoRAs, no fancy detailing (apart from face detailing). It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you.

Flux Schnell is a distilled 4-step model. I think the perfect place for them is the Wiki on GitHub. ComfyUI could have workflow screenshots, like the example repo has, to demonstrate possible usage and also the variety of extensions.

You can encode, then decode back to a normal KSampler with a 1.5 model using LCM with 4 steps and 0.2 denoise to fix the blur and soft details. You can just use the latent without decoding and encoding to make it much faster, but that causes problems with anything less than 1.0 denoise, due to the VAE; maybe there is an obvious solution, but I don't know it.

This workflow by Antzu is a good example of prompt scheduling, which is working well in Comfy thanks to Fitzdorf's great work.

A few examples of my ComfyUI workflow for making very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060). https://youtu.be/ppE1W0-LJas - the tutorial.

Creating such a workflow with ComfyUI's default core nodes is not possible at the moment. I think it was 3DS Max. In addition, I provide some sample images that can be imported into the program.

What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc.

If the term "workflow" has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched.

Still working on the whole thing, but I got the idea down. And the video in the post shows a rather simple layout that proves out the building blocks of a mute-based, context-building workflow.

You can't change clip skip and get anything useful from some models (SD2.0 and Pony, for example; Pony, I think, always needs 2) because of how their CLIP is encoded.
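The clip-skip mismatch between the two UIs trips people up: A1111 counts skipped layers as a positive "Clip skip" value, while ComfyUI's CLIP Set Last Layer node takes a negative stop index. A tiny aid reflecting my reading of the two conventions, not an official mapping:

```python
# Rough conversion between A1111 "Clip skip" and ComfyUI's
# CLIPSetLastLayer stop_at_clip_layer value (my reading of the
# conventions, not an official mapping).
def a1111_to_comfy_clip_skip(clip_skip: int) -> int:
    """A1111 'Clip skip' N -> ComfyUI stop_at_clip_layer."""
    if clip_skip < 1:
        raise ValueError("A1111 clip skip starts at 1 (no skipping)")
    return -clip_skip

assert a1111_to_comfy_clip_skip(1) == -1  # default: use the last layer
assert a1111_to_comfy_clip_skip(2) == -2  # the 'always 2' Pony-style setting
```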
You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell. This guide is about how to set up ComfyUI on your Windows computer to run Flux.1. It covers the following topics: Introduction to Flux.1; Overview of different versions of Flux.1; Flux Hardware Requirements; and How to install and use Flux.1 with ComfyUI.

I built a free website where you can share & discover thousands of ComfyUI workflows: https://comfyworkflows.com/. Run any ComfyUI workflow with zero setup (free & open source). How it works: download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. Share, discover, & run thousands of ComfyUI workflows.

ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. You can construct an image generation workflow by chaining different blocks (called nodes) together.

AP Workflow 9.0 for ComfyUI. The 9.0 release was originally meant to ship with support for the new Stable Diffusion 3, but that was way too optimistic.

(Same seed, etc., etc., of course.) To make the differences somewhat easier to see, the above image is at 512x512.

Think Diffusion's Stable Diffusion ComfyUI Top 10 Cool Workflows. In this case he also uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained LoRAs; however, in my tests that node made no difference whatsoever, so it can be ignored as well. The best workflow examples are through the GitHub examples pages. ComfyUI's inpainting and masking ain't perfect.

Hi everyone, I'm working on a project to generate furnished interiors from images of empty rooms using ComfyUI and Stable Diffusion, but I want to avoid using inpainting.

A1111 has great categories, like Features and Extensions, that simply show what the repo can do and what addons are out there.

I used to work with Latent Couple and then the Regional Prompter module for A1111, which allowed me to generate separate regions of an image through masks, guided with ControlNets (for instance, generating several characters using poses derived from a preprocessed picture). ControlNet Depth ComfyUI workflow.

This by Nathan Shipley didn't use this exact workflow, but it is a great example of how powerful and beautiful prompt scheduling can be.

Potential use cases include: streamlining the process of creating a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values.

A higher clip skip (in A1111; lower, or more negative, in ComfyUI's terms) equates to LESS detail in CLIP (not to be confused with details in the image).

Create animations with AnimateDiff. Over the last few months I have been working on a project with the goal of allowing users to run ComfyUI workflows from devices other than a desktop, as ComfyUI isn't well suited to devices with smaller screens.

Inside the workflow, you will find a box with a note containing instructions and specifications on the settings to optimize its use. Upscaling ComfyUI workflow. It runs at about 1.86 s/it on a 4070 with the 25-frame model, and 2.75 s/it with the 14-frame model.
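The "programmatic experiments" use case above pairs naturally with the fixed-seed parameter sweep described earlier. A hedged sketch: patch an API-format workflow and queue one variant per (cfg, steps) pair. The node id "3" for the KSampler is an assumption; ids differ per graph, so check your own export:

```python
# Sketch: sweep cfg and steps with a fixed prompt and seed by patching an
# API-format workflow and queueing each variant against a running server.
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"

def queue(workflow: dict) -> str:
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(f"{COMFY_URL}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]

with open("workflow_api.json") as f:      # hypothetical API-format export
    base = json.load(f)

KSAMPLER_ID = "3"                          # assumption: adjust to your graph
base[KSAMPLER_ID]["inputs"]["seed"] = 123456789  # fixed seed for comparability

for cfg in (4.0, 6.0, 8.0):
    for steps in (15, 25, 35):
        wf = json.loads(json.dumps(base))  # cheap deep copy of the graph
        wf[KSAMPLER_ID]["inputs"]["cfg"] = cfg
        wf[KSAMPLER_ID]["inputs"]["steps"] = steps
        print(cfg, steps, queue(wf))
```

Keeping the seed fixed means any visible change between outputs is attributable to the swept parameters rather than sampling noise.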
You can also find examples through searching Reddit; the ComfyUI manual needs updating, IMO. The WAS suite has some workflow stuff in its GitHub links somewhere as well. Just a base sampler and upscaler; it's nothing spectacular, but it gives good, consistent results. The workflow posted here relies heavily on useless third-party nodes from unknown extensions.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searg's workflow, then copied the official ComfyUI i2v workflow into it, and I pass into the node whatever image I like (for 12 GB VRAM, the max is about 720p resolution).

Img2Img ComfyUI workflow.
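The "12 GB VRAM tops out around 720p" remark suggests a pre-scaling step before the i2v node. A hypothetical helper along those lines: the 720p pixel budget comes from the comment above, and the multiple-of-8 rounding is the usual latent-model dimension constraint:

```python
# Hypothetical pre-scaling helper for the i2v setup above: cap the input
# image to a pixel budget (~720p per the VRAM remark), keep aspect ratio,
# and round dimensions to multiples of 8 for the latent space.
from PIL import Image

MAX_PIXELS = 1280 * 720  # ~720p budget for ~12 GB VRAM, per the comment above

def fit_for_i2v(img: Image.Image, max_pixels: int = MAX_PIXELS) -> Image.Image:
    w, h = img.size
    scale = min(1.0, (max_pixels / (w * h)) ** 0.5)  # shrink only, never upscale
    new_w = max(8, int(w * scale) // 8 * 8)
    new_h = max(8, int(h * scale) // 8 * 8)
    return img.resize((new_w, new_h), Image.LANCZOS)
```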