ComfyUI
Revision as of 00:57, 10 February 2026
TODO
Upscale Fix?
Tiled Diffusion?
IPAdapter
Outpainting - https://www.youtube.com/watch?v=j20P4hAZS1Q https://www.youtube.com/watch?v=qLZJ7iSq9tY
Misc
Regional prompting for AI Image Generation (Flux and ComfyUI) - Both DetailD (including the custom sampler stuff) and region based prompting
ComfyUI Tutorial, SEGS Picker: How to prompt for each individual face
EASY Outpainting in ComfyUI: 3 Simple Ways with Auto-Tagging (Joytag) | Creative Workflow Tutorial
https://www.youtube.com/@goshniiAI/videos - Upscaling
Generate Million-Megapixel Images with Z Image Turbo!
How to Train a Z-Image-Turbo LoRA with AI Toolkit
Stop Making Blurry AI Art! The "Halftone" Fix for Wan & Qwen (ComfyUI Workflow)
Insane ComfyUI Upgrade: 165 Custom Nodes That Actually Work
Different Region = Different LoRA! New ComfyUI Nodes for Area Composition
Comfyui 101: How to Create X/Y Plots for SDXL & Flux
How to Use Flux Unet (& GGUF) Models With TinyTerraNodes XY Plot
https://civitaiarchive.com https://civitasbay.org/
zer0int/CLIP-GmP-ViT-L-14/tree/main - What's the best version of the ViT-L CLIP encoder? The smooth and detail variants are both good.
Updating
git pull
Installing
uv pip install --upgrade torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu130
uv pip install sageattention==2.2.0 --no-build-isolation
pip install -r requirements.txt
Prompts
[X : Y : N] - Use X for N steps, then Y. Not sure if this works in ComfyUI.
Prompting Guide - FLUX.2 [klein]
https://docs.bfl.ai/guides/prompting_guide_kontext_i2i
https://danbooru.donmai.us/wiki_pages/tag_groups
https://danbooru.donmai.us/related_tag
https://danbooru.donmai.us/posts?tags=list_of_style_parodies&z=2
https://betterwaifu.com/blog/ai-pose-prompts
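The [X : Y : N] prompt-editing syntax above can be sketched as a per-step prompt switch. This is a hypothetical helper illustrating the behaviour (real samplers swap the text conditioning tensors, not the raw string), and as noted above it is unconfirmed whether ComfyUI honours this syntax at all:

```python
# Sketch of [X : Y : N] prompt editing:
# use prompt X for the first N steps, then switch to prompt Y.
def prompt_for_step(step, x, y, n):
    return x if step < n else y

# Example: 20 sampling steps, switching after step 10.
schedule = [prompt_for_step(s, "oil painting", "watercolor", 10) for s in range(20)]
# First 10 entries are "oil painting", the remaining 10 are "watercolor".
```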
Artists
Deak Ferrand - Dune, Sandman
by Diane Pierce and Alena Aenami in the style of Dan Mumford,
Asaf Hanuka
Rodolphe Wytsman impressionism
by Michal Karcz, by Arthur Sarnoff, by Marc Simonetti,
Greg Rutkowski
Syd Mead
Cyberpunk
Syd Mead - Bladerunner concept art
Josan Gonzalez
Sparth (Nicolas Bouvier)
Beeple (Mike Winkelmann) - Weird colours
Simon Stålenhag - Giant robots guy
Fantasy
Greg Rutkowski
Benchmark
Video Upscale
| Date | Time | Model / Template | Notes |
|---|---|---|---|
| 2026-01-30 | 336.80s | GAN upscaler template in ComfyUI | For a 3 second video |
| 2026-01-30 | 19.04s | GAN upscaler template in ComfyUI | For a second pass (cached or load time?) |
| 2026-01-30 | 343.10s | seedvr2_ema_3b_fp16 | |
| 2026-01-30 | 351.37s | seedvr2_ema_3b_fp8 | |
| 2026-01-30 | 354.54s | seedvr2_ema_3b-Q4_K_M | |
| 2026-01-30 | 442.38s | seedvr2_ema_3b_fp8 | torch compile, tile size 768 (first run) |
| 2026-01-30 | 305.02s | seedvr2_ema_3b_fp8 | torch compile, tile size 768 (second run) |
| 2026-01-30 | 408.12s | seedvr2_ema_3b_fp8 | torch compile, tile size 768, swap blocks 6 (first run) |
| 2026-01-30 | 341.21s | seedvr2_ema_3b_fp8 | torch compile, tile size 768, swap blocks 6, batch size 21 |
| 2026-01-30 | 375.13s | seedvr2_ema_3b_fp8 | torch compile, encoding tile size 1024, swap blocks 6, batch size 21 |
| 2026-01-30 | 323.73s | seedvr2_ema_3b_fp8 | torch compile, encoding tile size 1024, swap blocks 9, batch size 21, SageAttention2 |
| 2026-01-30 | 405.08s | seedvr2_ema_7b_Q4_K_M | Default settings |
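A quick sanity check on the numbers above, comparing the fp16 baseline against the fp8 + torch compile second run (figures taken from the table; the percentage is my own arithmetic):

```python
# Relative speedup of fp8 + torch.compile (second run, warm cache)
# over the plain fp16 baseline, using the timings from the table above.
baseline_s = 343.10   # seedvr2_ema_3b_fp16
compiled_s = 305.02   # seedvr2_ema_3b_fp8, torch compile, tile 768, second run
speedup_pct = (1 - compiled_s / baseline_s) * 100
print(f"{speedup_pct:.1f}% faster")  # roughly 11% faster
```

So on these runs, compilation only pays off once the compile cache is warm; the first compiled run (442.38s) is actually slower than the baseline.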
Training LoRAs
All the training programs just seem to be wrappers around Hugging Face scripts.
| Trainer | Last Updated | ⭐ | Z-Image | Flux.2 Klein | Qwen |
|---|---|---|---|---|---|
| Kohya SS | 7 months ago | 11.9k | | | |
| OneTrainer | 2 days ago | 2.7k | ✔️ (unknown base/turbo) | ❌? | ✔️ |
| AI Toolkit | 2 days ago | 9.2k | | | |
| Musubi | Yesterday | 1.7k | ✔️ (base) | ✔️ | ✔️ (including layered) |
SeedVR2
SeedVR2 BEST SETTINGS for Every GPU! (6GB–32GB+) Nobody Told You This…
🤿 SeedVR2 v2.5 Video Upscaling: Official Guide from the ComfyUI Integration Team | AInVFX Nov 7 - Talks about out of memory at different phases.
LTX-2
LTX 2 Q6 16GB on 8GB VRAM? 🤯 Free ComfyUI Workflow 2026
LTX-2 generate a 30s video in 310seconds - Has distilled GGUFs
LTX-2 Simplified Workflow 🔥 Distilled Checkpoints or Separated VAE & Transformer?
LTX-2 Fix: Yes, You Can Actually Use It Now (16-24gb VRAM) - Didn't work
This fixed my OOM issues with LTX-2 - Didn't work.
Ace Step 1.5
Civit AI Workflow - "Cover - Style transfer that preserves the semantic structure (rhythm, melody) while generating new audio with different characteristics"