How to use LoRA trigger words in ComfyUI: a Reddit Q&A roundup

Sometimes it's easier to load a workflow you saved 5-10 minutes ago than to spend 15-30 seconds reconnecting nodes and readjusting settings.

Yes: tinyterraNodes or ComfyRoll.

I use the following actions in the custom_nodes directory. Less is best.

And boy, I was blown away by how well it uses a GPU.

ComfyUI command-line arguments: --listen [IP] specifies the IP address to listen on (default: 127.0.0.1).

Once an image has been generated into an image preview, it is possible to right-click and save the image, but this process is a bit too manual, as it makes you type context-based filenames unless you like having "Comfy-[number]" as the name, plus browser save dialogues are annoying.

I've started to use ComfyUI, but loras don't work: they are in the correct folder and I have used all the trigger words, but nothing happens with any of them.

Then, in a second stage, I define the composition using ControlNet and apply the IPAdapter using the reference images.

In contrast, the SDXL-clip driven image on the left has much greater complexity of composition.

This can easily be done in ComfyUI using the Masquerade custom nodes.

Both work, but I don't want to duplicate resources. Just saying.

Use strengths of around 0.4 - 0.6, on just 1-2 loras only. You can also do this all in one with the Mile High Styler.

I've used ComfyUI to rotoscope the actor and modify the background to look like a different style of living room, so it doesn't look like we're shooting the same location for every video.

My ComfyUI backend is an API that can be used by other apps if they want to do things with Stable Diffusion, so chaiNNer could add support for the ComfyUI backend and nodes if they wanted to.

To install a custom node's requirements: STEP 1: Open the venv folder (in the case of ComfyUI there is none, so create it), then click on its path bar, clear it, and type "cmd" instead to open a command prompt there. Type this: pip install -r (make sure there is a space after that), then drag the requirements_win.txt file into the command prompt; dragging it will copy its path. If you're not on Windows, grab the other file, requirements.txt, instead.

Then add an empty text box so you can write a prompt, add a text concat to combine the prompt and the style, and run that into the input.

I am personally using it as a layer between a Telegram bot and ComfyUI to run different workflows and get the results.

Just set both weights to 1 and play with the values.

You need a CLIP Set Last Layer node set to -2.

Strip the data and copy-paste the image, don't upload it per se.

You have to load loras (Load LoRA nodes) before the positive/negative prompt, right after Load Checkpoint. Don't mention anything related to the lora in the prompt, and you will see its effect anyway.

Right-click the "Load line from text file" node and choose the "convert index to input" option. This should convert the "index" to a connector. Try ezXY.

ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. As in, you should be able to define a new node which contains a set of nodes connected to each other; the super-node should then be able to take inputs, pass them through the inner nodes, and give you the final output.

In Automatic1111, for example, you load a lora and control its strength by simply typing something like this in the prompt: <lora:Dragon_Ball_Backgrounds_XL:0.8>. On the other hand, in ComfyUI you load the lora with a Lora Loader node and you get two options, strength_model and strength_clip, and you also still have the text prompt thing, <lora:Dragon_Ball_Backgrounds_XL>.
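To make the strength_model / strength_clip split concrete, here is a minimal sketch of how a Lora Loader is wired in ComfyUI's API-format JSON. The node ids, filenames, and prompt text are placeholders, not taken from any particular workflow:

    # A LoraLoader sits between the checkpoint loader and the text encoders,
    # patching the UNet (strength_model) and CLIP (strength_clip) separately.
    workflow = {
        "4": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},  # placeholder
        "10": {"class_type": "LoraLoader",
               "inputs": {"lora_name": "Dragon_Ball_Backgrounds_XL.safetensors",
                          "strength_model": 1.0,  # weight applied to the diffusion model
                          "strength_clip": 1.0,   # weight applied to the text encoder
                          "model": ["4", 0],      # MODEL output of the checkpoint
                          "clip": ["4", 1]}},     # CLIP output of the checkpoint
        "6": {"class_type": "CLIPTextEncode",
              "inputs": {"text": "trigger words go here, like any other words",
                         "clip": ["10", 1]}},     # encode with the lora-patched CLIP
    }

The point several commenters make follows directly from this wiring: the lora is applied by the loader node itself, so it affects the output whether or not the trigger words appear in the text.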
I have seen on CivitAI that people use "Steps: 6" or so and get great-looking images using Turbo Dreamshaper with DPM++ SDE Karras. When I use the same values, I get terrible results.

Usefully, look at the image you want to imitate on CivitAI and take a look at their CFG values, where they place their loras in the prompt, and how long their prompt is, and try to reproduce it.

If you only have one folder in the training dataset, the lora's filename is the trigger word.

I always add a lora node when I need to use loras in my prompts.

Follow Scott Detweiler on YouTube.

Assuming you're using a fixed seed, you could link the output to a preview and a save node, then press Ctrl+M on the save node to disable it.

Using multiple LoRAs in ComfyUI: connect the multiple LoRA nodes in a series connection, set the correct LoRA within each node, and include the relevant trigger words in the text prompt before clicking Queue Prompt. Usually the upper one is the most important.

Some update to either ComfyUI or maybe a custom node must have broken that functionality. I have noticed this behavior with the KSampler Efficient and Primere Seed nodes.

Open the image in Photoshop or a similar program and resave it as a PNG; that will remove all the saved data in the image. Besides getting rid of the metadata in a ComfyUI PNG that way, you could save as a JPG if you're just sharing to random people.

You will need to restart ComfyUI to activate the new nodes.

Actually no, I found his approach better for me.

The file run_nvidia_gpu.bat runs as expected.

Plus, there are just the tools, like segment and depth maps for images and video.

To begin with, I have only 4GB of VRAM, which by today's standards is considered potato.

I came across ComfyUI purely by chance, and despite the fact that there is something of a learning curve compared to a few others I have tried, it's well worth the effort, since even on a low-end machine image generation seems to be much quicker (at least when using the default workflow).

EDIT: There is something already like this built into WAS.

Hi all: how to ComfyUI with ZLUDA. All credit goes to the people who did the work (lshqqytiger, LeagueRaINi, and the Next Tech and AI YouTube channel); I just pieced things together so it would work for me. Sorry for formatting; it's pretty much just copied and pasted out of the command prompt.

ComfyUI workflow to play with this, embedded here (credit: nxde_ai).

ttn nodes also has an XY plot, but it's a little clunky.

So I generate a batch of reference images to capture the style, not caring about composition and layout, then use them as inputs to the IPAdapter.

I came across the SaveImageWebsocket node. Install the custom nodes via the Manager; use "pythongoss" as the search term to find the "Custom Scripts" pack.

To set a clip skip of 1 is to not skip any layers and to use all 12. CLIP Set Last Layer inverts this, where -1 is the last layer, and a clip skip of 2 omits the final layer.
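Since the two conventions run in opposite directions, a tiny helper makes the mapping explicit. This is my own sketch, not part of any node pack:

    def a1111_clip_skip_to_comfy(clip_skip: int) -> int:
        """A1111 counts clip skip from 1 (= keep all 12 CLIP layers);
        ComfyUI's CLIP Set Last Layer counts back from the end,
        where -1 keeps all layers and -2 drops the final one."""
        if clip_skip < 1:
            raise ValueError("A1111 clip skip starts at 1")
        return -clip_skip

    assert a1111_clip_skip_to_comfy(1) == -1  # no layers skipped
    assert a1111_clip_skip_to_comfy(2) == -2  # matches "Clip Skip 2" models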
I have, once, left the damned thing running overnight, thinking I had stopped an active session when I had not. I'm more careful now.

Let me know.

It's in the Manager.

I've noticed Comfy be like, "Yo, you ran out of RAM, let me tile that for you."

I even changed the config file manually and set it to read-only; it still doesn't show the button.

This really just boils down to the strength of the ControlNet in relation to the CFG scale.

It's called "Image Refiner"; you should look into it.

Both are unlimited use.

Create a Load Checkpoint node, and in that node select the sd_xl_refiner_0.9 safetensors file. There is an SDXL 0.9 refiner node, but there is no such thing as an SD 1.5 refiner node.

They have two sample workflows in the GitHub.

There are two weights for the loras, and to tell you the truth, I don't understand them very well.

It's not soooo simple, but I figured it out.

Knowing what you just tried, remote desktop may be the way to go.

Prompt: Add a Load Image node to upload the picture you want to modify.

Copy that path (we'll need it later).

It's a B-, but it works.

Excellent; works fine on SDXL 1.0 ;)

I think you should be able to create super-nodes.

How do you do the ControlNet "Guess Mode" within Comfy? "controlnet is more important" is using the Advanced ControlNet Apply node and only connecting the positive.

I don't know if it's supported in ComfyUI or not, but the JS library it's built on does support that.

EDIT: the website https://comfyui.

Observe the circled selection, as it shows the connected input/output colors.

Just use your mask as a new image and make an image from it (independently of image A). Then just paste this over your image A using the mask.

As far as that's concerned, it looks like it's baked in.

The little grey dot on the upper left of the various nodes will minimize a node if clicked.

Or, best bet, start at -1; this sets the second lora to null, switching it off if necessary.

Civitai Helper used to have a button for the trigger word under each lora, but it is broken now for some reason.

Maybe ComfyUI just needs quick settings, or previously used settings saved the way the All-in-One Prompt extension does, so people don't have to type it all again.

You'd probably want to right-click the CLIP Text Encode node and turn the prompt into an input.

Not sure how Comfy handles loras that have been trained on different characters/styles for different trigger words! Loaded loras have an effect even without using trigger words.

Put loras in ComfyUI\models\loras. Be careful, because you will need to read the description of the lora to know which trigger words in the prompt activate it.

I want to send a prompt to the API and receive the generated image as a result. I've been exploring the ComfyUI API and trying to integrate it into my own application. I've tried a few approaches, such as using the /history and /view endpoints to retrieve the address of the image on the hard drive.
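For reference, here is one way that integration can look: a minimal sketch using the stock HTTP endpoints against a default local server, with no error handling.

    import time
    import requests

    SERVER = "http://127.0.0.1:8188"  # default ComfyUI address and port

    def generate(workflow: dict) -> bytes:
        # Queue an API-format workflow; the server answers with a prompt_id.
        prompt_id = requests.post(
            f"{SERVER}/prompt", json={"prompt": workflow}
        ).json()["prompt_id"]

        # Poll /history until the finished job shows up under that id.
        while True:
            history = requests.get(f"{SERVER}/history/{prompt_id}").json()
            if prompt_id in history:
                break
            time.sleep(0.5)

        # The history entry lists the saved images; fetch the first one via /view,
        # whose query parameters are exactly the image's filename/subfolder/type.
        image = next(iter(history[prompt_id]["outputs"].values()))["images"][0]
        return requests.get(f"{SERVER}/view", params=image).content

There is also a websocket interface for progress updates, which is the route the SaveImageWebsocket node mentioned above builds on to stream results without touching the disk.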
The easiest way is to RD (remote desktop) into the desktop and run it through that. The harder way is to change the IP Comfy uses to an address on your LAN unused by other devices, reconfigure the firewall on your PC and router, and configure port forwarding on the router; then anything on your LAN can hit that IP's webserver.

You must be mistaken; I will reiterate again, I am not the OG of this question. Please repost it to the OG question instead.

If you encounter that case, ignore the first step.

But adding trigger words in the prompt for a lora in ComfyUI does nothing besides the model interpreting those words like any other words in the prompt.

Yes, this is the most reliable way.

options: -h, --help show this help message and exit

Yes, you can actually run two instances at once by having a second tab of ComfyUI up and loading the second workflow on the second tab. So you can, say, queue your first task in the first workflow, then switch to the second tab and load an image. Note that when you close the UIs, the last one you closed is the one that will pop up the next time you open Comfy.

Changing the style affects the layout/composition and vice versa.

It saves about 7K mps, which isn't much, but adds up at scale.

To duplicate parts of a workflow from one area to another, select the nodes as usual and press Ctrl+C to copy, but use Ctrl+Shift+V to paste.

They were working up until about a week or so ago for me. Do I need to adjust something or do anything else? The same loras worked fine in 1111.

Here is ComfyUI's workflow. Checkpoint: first, download the inpainting model Dreamshaper 8-inpainting and place it in the models/checkpoints folder inside ComfyUI. Then switch to this model in the checkpoint node.

I did some XY testing and this is clear.

Does anyone have a way of getting lora trigger words in ComfyUI? I was using Civitai Helper on A1111 and don't know if there's anything similar for getting that information. Currently I'm just going on CivitAI and looking up the pages manually, but I'm hoping there's an easier way.

That might work, but it has the complexity of an SD 1.5 model.

If your lora was created by a responsible individual, it will show the "Training dataset tags", which will list your trigger word(s). If it does not, blame the lora author for not including that information.

His previous tutorial using 1.5 was very basic, with some few tips and tricks, but I used that basic workflow and figured out myself how to add a lora, an upscale, and a bunch of other stuff using what I learned. This is a series, and I have a feeling there is a method and a direction these tutorials are going.

I've also used ComfyUI to do a style transfer to videos and images with our brand style.

The second point hasn't been addressed here, so just a note that loras cannot be added as part of the prompt the way textual inversions can, due to what they modify (model/CLIP weights vs. text encoding).

I use a script that updates ComfyUI and checks all the custom nodes. I know there are lots of other commands, but this just does the job very quickly; in cmd, run from your ComfyUI folder (in a .bat file, double the percent signs):

    for /r %a in (.) do @if exist "%a/.git" cmd /c "cd /d %a && git pull"
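If you'd rather not depend on cmd syntax, the same update loop is easy to express in Python. This is a rough cross-platform equivalent; the install path is an assumption, so adjust it to yours:

    import os
    import subprocess

    CUSTOM_NODES = os.path.join("ComfyUI", "custom_nodes")  # adjust to your install

    # Visit every entry in custom_nodes and "git pull" the ones that are
    # git repositories, mirroring the cmd one-liner above.
    for name in sorted(os.listdir(CUSTOM_NODES)):
        repo = os.path.join(CUSTOM_NODES, name)
        if os.path.isdir(os.path.join(repo, ".git")):
            print(f"Updating {name} ...")
            subprocess.run(["git", "pull"], cwd=repo, check=False)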
It will take a bit of getting used to, and things like inpainting take a bit of getting used to with custom nodes (from Data; the man's a godsend), but on the whole, ComfyUI is hands down way better than any of the other AI generation tools out there.

That was my issue. I didn't know you could do this.

I see examples with 200+ nodes on that site.

ComfyUI is pretty dope, to be honest.

So I have a few questions about loras and LyCORIS (as I understand it, I use them both the same way in ComfyUI). I have the following workflow: I generate an image with a 2.5D / 3D model that can give me good body anatomy and positioning of limbs as I describe them (RevAnimated). My next step is using ControlNet to extract a depth map or a canny edge map.

Use ComfyUI as a backend (Python): hey community, just wanted to share that I have updated the comfy_api_simplified package, and now it can be used to send images, run workflows, and receive images from a running ComfyUI server.

Above that, there's an $8/mo tier for more storage and VRAM. I typically use the T4 option, which costs £0.20 an hour; it's shown as 2.05 units an hour. Paperspace has a free tier, and you can fit a ComfyUI install into the available storage space (I think it's 8GB, iirc). You will need to stick to one model at a time, though, and keep extensions to a minimum.

I show a couple of use cases and go over general usage.

It defaults to saving without asking where on OperaGX, which is what I run Comfy through. If you checkmark the button for "always show button", it won't ever save without asking.

Great question! I hate the efficiency-node workflows, and I looked around. To get the kind of button functionality you want, you would need a different UI mod of some kind that sits above ComfyUI.

Usually the smaller workflows are more efficient or make use of specialized nodes; it's not that big workflows are better. For example, a faceswap with a decent detailer and upscaler should contain no more than 20 nodes.

Go to this page, and the first example picture shows how to set it up.

It seems to be an issue with the randomize button widget. I did a full reinstall of ComfyUI and it still doesn't work.

To disable/mute a node (or a group of nodes), select them and press Ctrl+M.

SDXL CLIP text encode node used on the left, default on the right (sdxl-clip vs. default clip). I mean, the image on the right looks "nice" and all.

Prerequisites: ComfyUI installed, Automatic1111 installed.

Some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler. You can construct an image generation workflow by chaining different blocks (called nodes) together.

You can wire the nodes up so that the calculations are automatic, or just do the math.

I keep the string in a text file.

cd into your Comfy directory and run python main.py -h.

So you have one image A (here the portrait of the woman) and one mask.

Edit 9/13: someone made something to help read lora metadata and Civitai info.

It would be awesome if this works, but I have tested lots of loras on my system using prompt reference only, and it has no effect whatsoever. Using only the trigger word in the prompt, you cannot control the lora. To prevent the application of a lora that is not wanted for a given prompt, you need to directly connect the model that does not have the lora applied.

Try making a simple prompt like "Photo of a woman." Then add a Lora Loader and generate images with, for example, strengths 0.00, 0.25, 0.50, 0.75, and 1.00. Try it with loras that have a drastic effect that can't happen by coincidence. Replace with your favored loras one at a time.
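That A/B test is easy to automate through the API. Here is a sketch reusing the hypothetical workflow dict and generate() helper from the earlier examples, and assuming the workflow has been extended with a sampler and the usual output nodes:

    import copy

    # Same seed and prompt throughout; only the lora strengths vary,
    # so any difference between the images comes from the lora itself.
    for strength in (0.00, 0.25, 0.50, 0.75, 1.00):
        wf = copy.deepcopy(workflow)
        wf["10"]["inputs"]["strength_model"] = strength  # "10" = the LoraLoader id above
        wf["10"]["inputs"]["strength_clip"] = strength
        with open(f"lora_strength_{strength:.2f}.png", "wb") as f:
            f.write(generate(wf))

If the 0.00 image matches the 1.00 image, the lora is not being applied at all; if they differ while the prompt never mentions the trigger words, that confirms the lora loads regardless of them.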
Tutorial video showing how to use the new node for ComfyUI called AnyNode: it uses an LLM (the OpenAI API or a local LLM) to generate code that creates any node you can think of, as long as the solution can be written with code.

ComfyUI has supplied directions; it's not abundantly clear at this URL.

Anyway, I hope you have fun messing around with the workflows!

I am very well aware of how to inpaint/outpaint in ComfyUI; I use Krita.

Just started using ComfyUI when I got to know about it in the recent SDXL news.

But if you train a lora with several folders to teach it multiple characters/concepts, the folder name is the trigger word (i.e., if the training data has the two folders 20_bluefish and 20_redfish, then bluefish and redfish are the trigger words), CMIIW.

I have never tried the load styles CSV.

At the corner of each icon is (at minimum) a little tool gear icon that shows up on hover.

ComfyUI is a node-based GUI for Stable Diffusion.

Make sure you're using the VAE from the Pony checkpoint and not a different VAE loader.
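In API-format terms, that means feeding VAEDecode from the checkpoint loader's own VAE output instead of from a separate VAELoader node. A sketch, with hypothetical node ids matching the earlier examples:

    # Decode with the VAE baked into the checkpoint (its third output, slot 2),
    # so a Pony model decodes with its own VAE rather than one from a VAELoader.
    workflow["8"] = {
        "class_type": "VAEDecode",
        "inputs": {
            "samples": ["7", 0],  # LATENT from a KSampler (hypothetical id "7")
            "vae": ["4", 2],      # VAE output of CheckpointLoaderSimple
        },
    }

Swapping the "vae" input to point at a VAELoader node is exactly the mistake the tip above warns about when the checkpoint ships with its own VAE.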