Stopped linking my models here for that very reason.

But I can’t seem to figure out how to pass all that to a KSampler’s model input. If we’ve got LoRA loader nodes with actual sliders to set the strength value, I’ve not come across them yet. Checkpoints --> Lora.

At the latest in the second step the golden CFG must be used. It then applies ControlNet (1.1) using a Lineart model at strength 0.75 and an end percent of 0.75.

Dec 7, 2024 · From a user perspective, the delta (which I’m calling a ConDelta, for Conditioning Delta, or Concept Delta, if you prefer) can be used the same way a LoRA can, by loading it with a node and setting a strength (positive or negative).

I’d say one image in each batch of 8 is a keeper.

From ChatGPT: Guide to Enhancing Illustration Details with Noise and Texture in StableDiffusion (based on 御月望未’s tutorial). So use the same type of prompts he is using for pw_a, pw_b, etc.

I’m pretty sure the LoRA file has to go under models/lora to work in a prompt instead of the Additional Networks LoRA folder.

So my thought is that you set the batch count to 3, for example, and then use a node that changes the weight for the lora on each batch.

I’ve trained a LoRA with two different photo sets/modes, and different trigger (uniquely trained) words to distinguish them, but I was using A1111 (or Vlad) at the time, and have never tried it in ComfyUI yet.

You can, by using Prompt S/R, where one lora will be replaced by the next. On X type select Prompt S/R; on X values type the name of your 1st lora, 2nd lora, 3rd lora, etc. In your prompt put your 1st lora; this will then be replaced by the next one on your list when you run the script. I can already use wildcards in ComfyUI via Lilly Nodes, but there’s no node I know of that makes it possible to call one or more LoRAs from a text prompt.

If I have a lora at 0.3 weight and I have a trigger word, it doesn’t mean that the trigger word is only applied to the lora. If I lower the strength, it loses the characteristics of the original character. For example, I have a portrait of someone and want to put them into different scenes, like playing basketball or driving a car.

Idk if it is done like this, but what I would do is generate a few, let’s say 6, images with the same prompt and LoRA intensity for each methodology to test, using the same ratios/weights, etc., and ask a random five people to give scores to each group of six. Also, rate the generations with the same LoRA strength from 1 to 5 according to how well the concept is represented.

Reddit user _roblaughter_ discovered a severe security issue in the ComfyUI_LLMVISION node created by user u/AppleBotzz.

In most UIs, adjusting the LoRA strength is only one number: setting the lora strength to 0.8, for example, is the same as setting both strength_model and strength_clip to 0.8. See the examples at https://comfyanonymous.github.io/ComfyUI_examples/lora/. For a slightly better UX, try a node called CR Load LoRA from Comfyroll Custom Nodes.

What does the LoRA strength_clip function do? If the clip is the text or trigger word, isn’t it the same to put (loratriggerword:1.2) or something?
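For what those two numbers actually touch, here is a minimal sketch, assuming a kohya-style LoRA (rank-r down/up matrices plus an alpha): strength_model scales the patch added to the UNet weights, while strength_clip scales the patch added to the text-encoder weights. The function name, shapes, and tensors are illustrative, not ComfyUI's internals:

```python
import torch

def apply_lora_delta(weight, down, up, alpha, strength):
    """Return weight + strength * (alpha / rank) * (up @ down)."""
    rank = down.shape[0]
    return weight + strength * (alpha / rank) * (up @ down)

# Toy shapes: a 320x320 UNet projection and a 768x768 CLIP projection,
# each with a rank-16 LoRA patch.
unet_w, clip_w = torch.randn(320, 320), torch.randn(768, 768)
down_u, up_u = torch.randn(16, 320), torch.randn(320, 16)
down_c, up_c = torch.randn(16, 768), torch.randn(768, 16)

# strength_model and strength_clip are applied separately; a single-slider
# UI simply passes the same value (e.g. 0.8) to both calls.
patched_unet = apply_lora_delta(unet_w, down_u, up_u, alpha=16.0, strength=0.8)
patched_clip = apply_lora_delta(clip_w, down_c, up_c, alpha=16.0, strength=0.8)
```

That is also why answering the strength_clip question above takes two numbers: the UNet patch changes what is drawn, the CLIP patch changes how your prompt tokens (including trigger words) are read.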
On A1111 the positive "clip skip" value is indicated, going to stop the clip before the last layer of the clip. Comfy does the same, just denoting it negative (I think it's referring to the Python idea that uses negative values in array indices to denote the last elements); let's say ComfyUI is more programmer friendly, so 1 (A1111) = -1 (ComfyUI) and so on (I mean the clip skip values).

A LoRA has no concept of precedence (where it appears in the prompt order makes no difference), so the standard ComfyUI workflow of not injecting them into prompts at all actually makes sense.

This slider is the only setting you have access to in A1111.

I found I can send the clip to negative text encode…. I follow his stuff a lot trying to learn.

Not sure why one can't lower the strength of the trigger word itself if they do need to add extra stuff to a prompt.

For the LoRA I prefer to use one that focuses on lineart and sketches, set to near full strength.

I don't find ComfyUI faster; I can make an SDXL image in Automatic1111 in 4.2 seconds with TensorRT. (The same image takes 5.6 seconds in ComfyUI, and I cannot get TensorRT to work in ComfyUI as the installation is pretty complicated and I don't have 3 hours to burn doing it.)

Jul 29, 2023 · Eventually add some more parameters for the clip strength, like lora:full_lora_name:X.X:X.X or something.

You have to have a lora that's compatible with your checkpoint: for a 1.5 model you need a 1.5 lora, and for an SDXL model you need an SDXL lora. Don't know how Comfy behaves if that's not the case.

If I click on "Lora_name" literally nothing happens; it remains as "undefined". I am able to drag in sample files like the videos from the CivitAI page and it will update the "Lora_name" field as expected, but it will not run even if I have that LORA loaded. I can, however, update the strength field as one would expect.

I'm still experimenting and figuring out a good workflow.

But I've seen it enhance features with some loras. Also the IPAdapter strength sweet spot seems to be between 0.…

If I have a chain of Loras and I…

Prompt: “<lora:skatirFace:0.7>, scared, looking down, panic, screaming, a portrait of a ginger teen, blue eyes, short bob cut, ginger, black winter dress, fantasy art, 4K resolution, unreal engine, high resolution wallpaper, sharp focus”. Final version.

But it's not really predictable how it's changing. Not sure how to configure the Lora strengths in ComfyUI.

So just add 5/6/however many max loras you'll ever use, then turn them on/off as needed. Even though it's a slight annoyance having to wire them up, especially more than one, that does come with some UI validation and cleaner prompts.

I'm starting to believe it isn't on my end and the loras are just completely broken, but if anyone else could test them, that would be awesome. Tested a bunch of others of that author, now also in ComfyUI, and they all produce the same image, no matter the strength, too. They don't work at 1.0, but they kind of work at 2.0.

The image comes out looking dappled and fuzzy, not nearly as good as DDIM for example. The Lora has improved with the step increases.

Dec 17, 2024 · Rescale the LoRA Strength: finally, test the LoRA again, and consider that it might need a higher strength now.
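If a LoRA only behaves well at, say, 1.25, one way to "rescale" it is to bake that multiplier into the file so that 1.0 becomes the new sweet spot. A hedged sketch, assuming kohya-style key names ending in "lora_up.weight"; the helper name and paths are mine:

```python
from safetensors.torch import load_file, save_file

def rescale_lora(path_in: str, path_out: str, scale: float) -> None:
    tensors = load_file(path_in)
    for key in tensors:
        # Scaling only the "up" half scales the whole delta (up @ down),
        # so strength 1.0 afterwards behaves like `scale` did before.
        if key.endswith("lora_up.weight"):
            tensors[key] = tensors[key] * scale
    save_file(tensors, path_out)

rescale_lora("models/loras/my_lora.safetensors",
             "models/loras/my_lora_rescaled.safetensors",
             1.25)
```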
Lora usage is confusing in ComfyUI. You adjust the weights in the prompt, like <lora:catpics_lora:0.5>, and play around with the weight numbers until it looks how you want. It works for all Checkpoints, Loras, Textual Inversions, Hypernetworks, and VAEs.

When you use the Lora Stacker, the Lora weight and Clip weight of the Lora are the same; when you load a lora in the Lora Loader, you can use 2 different values. Try changing that, or use a lora stacker that can allow separate lora/clip weights.

Before clicking Queue Prompt, be sure that the LoRA in the LoRA Stack is switched ON and you have selected your desired LoRA. I have yet to see any switches allowing more than 2 options, which is the major limitation here.

I want to test some basic lora weight comparisons, like in the WebUI where you do an XYZ plot.

To my knowledge, Combine and Average work almost the same, but Combine averages the weights based on the prompts, and Average can average the…

StabilityAI just released new ControlNet LoRAs for SDXL, so you can run these on your GPU without having to sell a kidney to buy a new one. So to use them in ComfyUI, load them like you would any other LoRA and change the strength to somewhere between 0.… This may need to be adjusted on a drawing-to-drawing basis.

This guide, inspired by 御月望未's tutorial, explores a technique for significantly enhancing the detail and color in illustrations using noise and texture in StableDiffusion.

People have been extremely spoiled and think the internet is here to give away free shit for them to barf on, instead of seeing it as a collaboration between human minds from different economic and cultural spheres binding together to create a global culture that elevates people and where we give…

Start with a full 1.0 LoRA strength and adjust down if you need.

I feel like it works better if I put it in the prompt with <lora:name-of-LCM-lora-file:0.7>, which would use the LCM at 70% strength.

Or do something even simpler: just paste the link of the loras in the model download link and then just move the files to the different folders.

Edit: Thank you everyone, especially u/VeryAngrySquirrel, for mentioning Mikey Nodes! The "Wildcard And Lora Syntax Processor" is exactly what I'm looking for! Works well, but stretches my RAM to the absolute limit.

Save some of the information (for example, the name of the LoRA with its associated activation word) into a text file, which I can search easily. Generate a set of "sample images" for commonly used models, LoRAs, etc., so that I can either cut and paste their metadata into Automatic1111 or open the PNG in ComfyUI to recover the workflow. I use SD Library Notes, and copy everything -- EVERYTHING! -- from the model card into a text file, and make sure to use Markdown formatting.
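For building that kind of searchable index, the safetensors header can be read without loading any weights: the first 8 bytes give the JSON header length, and trainers like kohya stash training metadata under "__metadata__". That header layout is the standard safetensors format; the "ss_tag_frequency" key is a kohya convention and may be absent, so treat it as an assumption:

```python
import json
import pathlib
import struct

def read_safetensors_metadata(path: pathlib.Path) -> dict:
    """Read the __metadata__ dict from a .safetensors file header."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})

# Write one tab-separated line per LoRA: filename, then whatever tag
# metadata the trainer recorded (a JSON string for kohya-trained files).
with open("lora_index.txt", "w", encoding="utf-8") as out:
    for p in sorted(pathlib.Path("models/loras").glob("*.safetensors")):
        meta = read_safetensors_metadata(p)
        tags = meta.get("ss_tag_frequency", "")  # kohya-specific, may be empty
        out.write(f"{p.name}\t{tags}\n")
```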
I’m starting to dream in prompts. “I don’t even see the prompts anymore. All I see is (masterpiece), Blonde, 1girl, Brunette, <lora:Redhead:0.8>, Red head.”

Take a Lora of person A and a Lora of person B, and place them into the same photo (SD1.5, not XL). I know you can do this by generating an image of 2 people using 1 lora (it will make the same person twice) and then inpainting the face with a different lora, using openpose / regional prompter. In your case, I think it would be better to use controlnet and a face lora: just inpaint her face with the lora + a standard prompt.

Hello, I am new to stable diffusion and I tried fine-tuning using LoRA.

Hi everyone, I am looking for a way to train a LoRA using ComfyUI. Previously I used to train LoRAs with Kohya_ss, but I think it would be very useful to train and test LoRAs directly in ComfyUI. Any advice or resources regarding the topic would be greatly appreciated!

When I start a training session and I don't see the downtrend in loss that I'm hoping for, I abort the process to save time and retry with new values. My best lora tensorfile hit a loss rate of around 0.01 at around 10k iterations.

Then split out the images into separate PNGs and use them to create a Lora in Kohya_SS (optionally, you can upscale each image first with a low denoise strength for extra detail). Once the Lora was trained on the first 10 images, I went back into stable diffusion and created 24 new images using the Lora, at various angles and higher resolution (between…). Once it's working, then start fiddling with the resolution.

…0.75, which is used for a new txt2img generation of the same prompt at a standard 512 x 640 pixel size, using a CFG of 5 and 25 steps with the uni_pc_bh2 sampler, but this time adding the character LoRA for the woman featured (which I trained myself), and here I switch to Wyvern v8. For the controlnet, I use t2i-adapter_xl_sketch, initially set to a strength of 0.75.

In ComfyUI you use LoraLoader to use a Lora, and it contains the strength parameter as well. But what do I do with the model? The positive has a Lora loader. The negative has a Lora loader. The KSampler takes only one model. To prevent the application of a Lora that is not used in the prompt, you need to directly connect the model that does not have the Lora applied.

If you set a ControlNet strength to 0.000, it means it is disabled and will be bypassed. Never set Shuffle or Normal BAE too high, or it acts like inpainting. And wait for ControlNet reference 👀 You can do img2img at 1.0 denoising strength and get amazing results.

Option a) t2i + low denoising strength + controlnet tile resample; option b) i2i inpaint + controlnet tile resample (if you want to maintain all the text).

I usually txt2img at CFG 5-7 and inpaint around 3-5. Most of the time when you inpaint or use ADetailer, you will want to reduce the CFG and lora weight, and sometimes the prompt weight, because they will overcook the image at lower values than in txt2img.

With 1.5 you can easily prompt a background and other things.

Since I've 'got' you here and we're on the subject, I'd like your take on a small matter: …

Right-click on your LoRA loader node, then: convert widget to input > lora_name; add a primitive node and plug it into the lora_name input; then on "control after generate" choose randomize. It will pick a random LoRA each time you queue a prompt to generate.

In A1111 they are placed in models/LoRA and called like this: <lora:loraname:0.9>, where the number is the strength. Most LoRAs also need one or more keywords to trigger. Some may work from -1.0 to 1.0, and some may support values outside that range. In Automatic1111, for example, you load and control the strength by simply typing something like <lora:Dragon_Ball_Backgrounds_XL:0.7>, <lora:transformerstyle:0.…>.
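ComfyUI core ignores that A1111 tag syntax, which is why custom nodes exist to parse the tags out of the prompt and route them to LoraLoader nodes instead. A rough sketch of that parsing; the regex and helper name are mine, not any particular node's:

```python
import re

# Matches <lora:name> or <lora:name:0.8>; a missing weight defaults to 1.0.
LORA_TAG = re.compile(r"<lora:([^:>]+)(?::([\d.]+))?>")

def extract_lora_tags(prompt: str):
    loras = [(name, float(w) if w else 1.0)
             for name, w in LORA_TAG.findall(prompt)]
    cleaned = LORA_TAG.sub("", prompt).strip()
    return cleaned, loras

text, loras = extract_lora_tags("a cat <lora:catpics_lora:0.5> on a roof")
print(text)   # "a cat  on a roof" (the removed tag leaves a double space)
print(loras)  # [('catpics_lora', 0.5)]
```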
You often need to reduce the CFG to let the system make it “nice” … at the cost of potentially losing the Lora “model” side of things. When you mix Loras this can get compounded … though it depends on the type of Loras. Adding a Lora that was trained on anime and simple 2d drawings … with an “add detail” Lora ….

…appreciated, thanks! And certainly, though I need to put more thought into that for SDXL and how I'll differentiate it from what's already out there and from ClassipeintXL.

Or just skip the lora download Python code and upload the lora manually to the loras folder. Generate your images! Hope you liked this guide! Edit: Added more example images.

I have so far just tested with base SDXL 1.0 and can't comment on how well it will work with various fine-tunes.

Strength of the lora applied on the CLIP model vs. the main MODEL.

Model + Lora 100%, Model + Lora 75%, Model + Lora 50%, and then tweak around as necessary. PS: This also works for controlnet with the ConditioningAverage node, especially considering that a high-strength controlnet in low resolution will sometimes look jagged in higher-res output, so lowering the effect in the hires-fix steps can mitigate the issue.
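Going by the node's inputs, that blend appears to be a plain linear interpolation. A guess at the arithmetic behind ConditioningAverage, assuming the conditioning is an ordinary embedding tensor:

```python
import torch

def conditioning_average(cond_to: torch.Tensor,
                         cond_from: torch.Tensor,
                         conditioning_to_strength: float) -> torch.Tensor:
    # 1.0 -> only cond_to, 0.0 -> only cond_from.
    return (cond_to * conditioning_to_strength
            + cond_from * (1.0 - conditioning_to_strength))

a = torch.randn(1, 77, 768)  # e.g. CLIP text embeddings for prompt A
b = torch.randn(1, 77, 768)  # embeddings for prompt B
mixed = conditioning_average(a, b, 0.7)  # 70% A, 30% B
```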
I've settled on simple prompts that include some of the face and body features I'm aiming for. This prompt was 'woman, blonde hair, leather jacket, blue jeans, white t-shirt'.

An added benefit is that if I train the LoRA with a 1.5 model, I can then use it with many different other checkpoints within the WebUI to create many different styles of the face.

The only reason I'm needing to get into actual LoRA training at this pretty nascent stage of its usability is that Kohya's DreamBooth LoRA extractor has been broken since Diffusers moved things around a month back, and the dev team are more interested in working on SDXL than fixing Kohya's ability to extract LoRAs from V1.5 DreamBooths.

I cannot find settings that work well for SDXL with the LCM Lora. It does work if connected with the LCM lora, but the images are too sharp where they shouldn't be (burnt) and not sharp enough where they should be. I was using it successfully for SD 1.5 with the following settings: LCM lora strength 1.0 (I should probably have put the clip_strength to 0, but I did not); sampler: Euler; scheduler: Normal; steps: 16. My favorite recipe was with the Restart KSampler though, at 64 steps, but it had its own limitations (no SGM_Uniform scheduler for AnimateDiff). You can also decrease the length by reducing the batch size (number of frames) regardless of what the prompt schedule says (useful for doing quick tests).

So far the only lora I used was either in A1111 or the LCM lora; now I made my own, but it doesn't seem to work. I tried decreasing the lora strength, removing negative prompts, decreasing/increasing steps, and messing with clip skip. None of it worked, and the outcome is always full of digital artifacts and is completely unusable. I recommend the DPM samplers, but use your favorite.

What is your Lora strength in Comfy SDXL? My Lora doesn't appear in the images at 1.0. In A1111, my SDXL Lora is perfect at :1.0.

In the Lora Loader, I set the strength to "1", essentially turning it "on". In the Prompt, I'm calling the Lora with <lora:whatever:1.0>, which is calling it for that particular image with its standard strength applied.

CLIP Strength: Most LoRAs don't contain any text token training (classification labels for image concepts in the LoRA data set).

Using only the trigger word in the prompt, you cannot control the Lora. This is because the model's patch for the Lora is applied regardless of the presence of the trigger word. Lowering the strength of the trigger word doesn't fix this problem.

Is there a node that lets me decide the strength schedule for a lora? Or can I simply turn a Lora off by putting it in the negative prompts? I have a node called "Lora Scheduler" that lets you adjust weights throughout the steps, but unfortunately I'm not sure which node pack it's in.

I tried IPAdapter, but if I set the strength too high, it tries to be too close to the original image. At least for me it was like that, but I can't say for you, since we don't have the workflow you use.

The intended way to use SDXL is that you use the Base model to make a "draft" image and then use the Refiner to make it better. I use it in the 2nd step of my workflow, where I create the realistic image with the controlnet inputs.

The LORA modifies the base model. BTW, SDXL LoRAs do not work in non-SDXL models and the opposite also happens, because a LoRA places a layer in the currently selected checkpoint.

If I set the strength high, and the start step at a higher value like 0.4, it renders the co…

Has anyone gotten a good, simple ComfyUI workflow for 1.5 for converting an anime image of a character into a photograph of the same character while preserving the features? I am struggling like hell; just telling me some good controlnet strength and image denoising values would already help a lot!

Tried a few combinations, but, you know, RAM is scarce while testing.

[Comparison grid: the leftmost column is only the lora; down: increased lora strength; right: increased smooth step strength; no lora applied, scaled down 50%.] As you can see, it's not simply scaling strength; the concept can change as you increase the smooth step.
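For the curious, here is a loose sketch of what a "smooth step" on LoRA weights could mean: remapping each delta's normalized magnitude through smoothstep(x) = 3x^2 - 2x^3, which suppresses small weights while keeping large ones, instead of scaling everything linearly. This is my reading of the comparison grid, not the actual node's code:

```python
import torch

def smooth_step(x: torch.Tensor) -> torch.Tensor:
    return 3 * x**2 - 2 * x**3

def smooth_lora_delta(delta: torch.Tensor, amount: float) -> torch.Tensor:
    """Blend between the raw delta and a smoothstep-shaped one."""
    peak = delta.abs().max().clamp(min=1e-12)
    shaped = smooth_step(delta.abs() / peak) * peak * delta.sign()
    return torch.lerp(delta, shaped, amount)

delta = 0.01 * torch.randn(320, 320)  # toy LoRA delta (up @ down)
for amount in (0.0, 0.5, 1.0):        # "increased smooth step strength"
    print(amount, smooth_lora_delta(delta, amount).abs().mean().item())
```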
In A1111, each LoRA you are using should have an entry for it in the prompt box. The issue has been that Automatic1111 didn't support this initially, so people ended up trying to set up workarounds.

A little late to this post, but I have the solution for Automatic1111 users. Download this extension: stable-diffusion-webui-composable-lora. A quick step-by-step for installing extensions: click the Extensions tab within the Automatic1111 web app > click the Available sub-tab > Load from > search "composable LORA" > install > then restart the web app and reload the UI.

I've made a few loras now of a person (ballpark about 70 photos each). I find it starts to look weird if you have more than three LoRA at the same time.

Until then, I've lit a candle to the gods of Copy & Paste and created the Lora vs. Lora plot in a workflow.

The only way I've found to not use a LORA, other than disconnecting the nodes each time, is to set the model strength to 0.0 for all of the loaders you have chained in.

High clip strength makes your prompt activate the features in the training data that were captioned, and also the trigger word. In practice, both are usually highly correlated, but there are situations where you want high model strength to capture a style but low clip strength to avoid a certain keyword in the captions. The reason you can tune both in ComfyUI is that the CLIP and MODEL/UNET parts of the LoRA will most likely have learned different concepts, so tweaking them separately makes sense.

In ComfyUI, you don't need to use the trigger word (especially if it's only one for the entire LoRA); mess with the strength_model setting in the LoRA loader instead. There you have it! I hope this helps.

Assuming both Loras have trigger words, the easiest thing to try is to use the BREAK keyword to separate the character descriptions, with each sub-prompt containing a different trigger word (it doesn't matter where in the prompt the Loras are called, though).

Even just SDXL with celebs doesn't seem to work that well, but then I don't generate boring portrait photos; it's all more "involved" and complex, and celeb loras often ruin the whole result. I'll have an otherwise perfect image: right position, right composition, details, all; then I add the celeb lora…

Lora weights I typically divide in half, and tweak from that starting point. It gives very good results at around 0.8 strength. However, the prompt makes a lot of difference: some prompts which work great without the Lora produce terrible results.

If the denoising strength must be brought up to generate something interesting, controlnet can help to retain composition.

What am I doing wrong here in ComfyUI? The Lora is an Asian woman. I played around a lot with lora strength, but the result always seems to have a lot of artifacts. I attached 2 images, only inpainting and using the same lora; the white-haired one is when I used A1111, the other is using ComfyUI (Searge).

Is this workflow at all possible in ComfyUI? I want to automate the adjustment of the Lora weight: I would like to generate multiple images for every 0.2 change in weight, so I can compare them and choose the best one (I don't need the plot, just individual images so I can compare myself). I use Efficiency Nodes for ComfyUI Version 2.0+ for stacked Loras, so changing the weight of the lora can make a huge difference in the image, but with stacked Loras it becomes a time-consuming and tiring process. And when you have a Lora that accepts float strength values between -1 and 1, how can you randomize this for every generation? There is the randomized primitive INT, and there are math nodes that convert integers to floats.
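One way to get exactly that, stepping the weight by 0.2 per image without an XY plot node, is to script ComfyUI's HTTP API: export the workflow with "Save (API Format)", then rewrite the LoraLoader's strengths and POST one job per value. The node id "10", the filename, and the server address are assumptions about your setup:

```python
import copy
import json
import urllib.request

with open("workflow_api.json", encoding="utf-8") as f:
    base = json.load(f)

for i in range(6):  # strengths 0.0, 0.2, ..., 1.0
    wf = copy.deepcopy(base)
    s = round(i * 0.2, 2)
    wf["10"]["inputs"]["strength_model"] = s  # "10" = your LoraLoader's id
    wf["10"]["inputs"]["strength_clip"] = s
    # For a random strength per run instead: s = random.uniform(-1.0, 1.0)
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": wf}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Swapping the sweep for random.uniform(-1.0, 1.0) also answers the randomize-per-generation question without any INT-to-float node gymnastics.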
Oh, another Lora tip to throw on the bonfire: since everything is a mix of a mix of a mix… watch out for Lora 'overfitting' that makes your images look like deep-fried memes.

Simply adding detail to existing crude structures is the easiest, and I mostly only use LORA.

If you have installed and used this node, your sensitive data, including browser passwords, credit card information, and browsing history, may have been compromised and sent to a Discord server via webhook.

When I use this LORA it always messes up my image. Seems like it's busted. It's as if anything this lora is included in gets corrupted, regardless of strength? It worked normally with the regular 1.5 version of stable diffusion; however, when I tried using it with other models, not all worked.

After some investigation, I found that Forge seems to ignore the Lora strength: <lora:foobar:0.5> generates the same image as <lora:foobar:1>. However, the image generated with Forge is quite different from the original A1111 webUI. Do you experience the same? Is the syntax for Lora strength changed in Forge?

I can select the LoRA I want to use and then select Anythingv3 or Protogen 2.2 and go to town.

Does "<lora:easynegative:1.0>," if written in the negative prompt without any other lora loading, do its job? In Efficiency Nodes, if I load easynegative and give it a -1 weight, does it work like a negative-prompt embed? Do I have to use the trigger word for loras I embed like this: "<lora:easynegative:1.0>,"? Is there a ComfyUI discord server?

Put in the same information as you would with conditioning 1 and conditioning 2, but you can control the weight of the first conditioning (conditioning_to) with the conditioning_to_strength variable.

Also, I've had bad luck with using the LCM LoRA from the Additional Networks plug-in.

The classipeint LoRA actually does a really great job of overlapping with that style, if you throw in artist names and aesthetic terms with a slightly lower LoRA strength.

To facilitate the listing, you could start to type "<lora:" and then a bunch of loras appear to choose from, filtering the list the more of the lora's name you type. It would clutter the workflow less. Whereas a single wildcard prompt can range from 0 LoRAs to 10. Not to mention ComfyUI just straight up crashes when there are too many options included.

I had some success using stuff like position/concept loras from SDXL in Pony, but celebs? Characters? Nope.

ComfyUI only allows stacking LoRA nodes, as far as I know. So to replicate the same workflow in ComfyUI, insert a LoRA, set the strength via the loader's slider, and do not insert anything special in the prompt.

So if you have different LORAs applied to the base model, each pipeline will have a different model configuration.

LoRA: Hyper SD 1.5 8-steps CFG. LoRA strength: 1.0. Scheduler settings: CFG Scale: 1.5, Steps: 4, Scheduler: LCM.

The image below is the workflow with the LoRA Stack added and connected to the other nodes.

If you have a set model + Lora stack you want to save and reuse, you can use the Save Checkpoint node at the output of the model + lora stack merge, to reuse it as a base model in the future.
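Mathematically, that Save Checkpoint trick amounts to adding each LoRA delta into the matching base weight and saving the result. A sketch under that assumption: kohya-style key names are assumed, and key_map (LoRA module name to checkpoint weight key) is left empty because the mapping is architecture-specific:

```python
import torch
from safetensors.torch import load_file, save_file

base = load_file("models/checkpoints/base.safetensors")
lora = load_file("models/loras/style.safetensors")
strength = 0.8
key_map = {}  # hypothetical: LoRA module name -> checkpoint weight key

for module, ckpt_key in key_map.items():
    down = lora[f"{module}.lora_down.weight"].float()
    up = lora[f"{module}.lora_up.weight"].float()
    alpha = lora.get(f"{module}.alpha",
                     torch.tensor(float(down.shape[0]))).item()
    # Bake the chosen strength into the merged weights for good.
    delta = strength * (alpha / down.shape[0]) * (up @ down)
    base[ckpt_key] = (base[ckpt_key].float() + delta).to(base[ckpt_key].dtype)

save_file(base, "models/checkpoints/base_plus_style.safetensors")
```

The trade-off is the same as with the node: you lose the ability to dial the strength afterwards, but you skip re-patching on every load.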
I'm quite new to ComfyUI. Since adding endless lora nodes tends to mess up even the simplest workflow, I'm looking for a plugin with a lora stacker node.

Currently I have the Lora Stacker from Efficiency Nodes, but it works only with the proprietary Efficient KSampler node, and to make it worse, the repository was archived on Jan 9, 2024, meaning it could permanently stop working with the next ComfyUI update any minute now.

Here's mine: I use a couple of custom nodes, a LoRA Stacker (from the Efficiency Nodes set) feeding into the CR Apply LoRA Stack node (from the Comfyroll set). Used the same as other lora loaders (chaining a bunch of nodes), but unlike the others it has an on/off switch.

I load the models fine and connect the proper nodes, and they work, but I'm not sure how to use them properly to mimic other webUIs' behavior. The Lora works in A1111. I can, however, update the strength field as one would expect.

Also, I heard at some point that the prompt weights are calculated differently in ComfyUI, so it may be that the non-lora parts of the prompt are applied more strongly in Comfy than in A1111.

Go to the Lora tab and use the LoRA named "ip-adapter-faceid-plus_sd15_lora" in the positive prompt.

I've developed a ComfyUI extension that offers a wide range of LoRA merge techniques (including DARE). The extension also provides XY plot components to better evaluate merge settings.

It has a clear effect in a minimal workflow (like the default one), but only if you set the strength to relatively high values; the hand-fix had little to no effect below 4.0.

Maybe try putting everything except the lora trigger word in (prompt here:0.75) to weaken it in relation to the trigger word.

This is unnecessary, but hardcoding it might be beneficial if sharing on Civitai, as users often default to 1.0 without reading the settings. To test, render once at 1024x1024, and the impact should be obvious.

I have tried sending the float output values from scheduler nodes into the input values for motion_scale or lora_strength, but I get errors when I run the workflow. Specifically, I am changing the motion_scale or lora_strength values during the video to make the video move in time with the music.

Now I want to use a video game character lora (<lora:LORANAMEHERE:0.…>). Where do I want to change the number to make it stronger or weaker? In the Loader, or in the prompt? Both? Thanks.

The base model's "Model" and "Clip" outputs go to the respective "Model" and "Clip" inputs of the first Load Lora node. Take the outputs of that Load Lora node and connect them to the inputs of the next Lora node if you are using more than one Lora model. The "Model" output of the last Load Lora node goes to the "Model" input of the sampler node.
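In API-format JSON (what "Save (API Format)" exports), that chain looks roughly like the sketch below, written as a Python dict. The node ids and filenames are made up; the structure matches ComfyUI's exported format, where ["4", 0] means "output 0 of node 4":

```python
workflow = {
    "4": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15_base.safetensors"}},
    "5": {"class_type": "LoraLoader",
          "inputs": {"model": ["4", 0], "clip": ["4", 1],
                     "lora_name": "style_a.safetensors",
                     "strength_model": 0.8, "strength_clip": 0.8}},
    "6": {"class_type": "LoraLoader",
          "inputs": {"model": ["5", 0], "clip": ["5", 1],  # chained off node 5
                     "lora_name": "style_b.safetensors",
                     "strength_model": 0.5, "strength_clip": 0.5}},
    # Node "6" then feeds the KSampler's model input, and its clip output
    # feeds both CLIPTextEncode nodes (positive and negative prompts).
}
```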
© Copyright 2025 Williams Funeral Home Ltd.