Trying the Compositor Node with Flux
So I was messing around with this Compositor Node that lets you blend up to 8 images into one scene—dragging, rotating, and placing them wherever you want. At first, I tried using SD1.5 like the docs suggested, but the results weren’t quite what I expected. The edges felt off, and the blending wasn’t as smooth as I’d hoped.
That’s when I switched to the Flux model. Here’s the thing: Flux handles multi-image compositions way better, especially with its hybrid architecture. The alignment between layers just works, and you get sharper details without that weird artifacting SD1.5 sometimes throws at you.
How I Got It Running
I grabbed the Compositor Node from erosDiffusion's GitHub, either by cloning the repo into the custom_nodes folder or by installing it through ComfyUI Manager (search "Compositor" and pick the one from erosDiffusion). If you're lazy like me, the Manager's "Install via Git URL" button works too; paste this link: https://github.com/erosDiffusion/ComfyUI-enricos-nodes.git.
For Flux, you’ll need a few extra files:
- The Flux FP8 model (link)
- Text encoders (here)
- VAE (nerualdreming’s version)
Drop those into the right subfolders under ComfyUI/models, and you're set. The whole setup took maybe 10 minutes, including the trial-and-error with SD1.5.
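For reference, here's where those files usually land. This is a hedged sketch based on common ComfyUI folder conventions, not something the node's docs spell out; in particular, an all-in-one FP8 checkpoint goes in models/checkpoints, while a unet-only Flux file (the usual case when you supply text encoders and a VAE separately, as above) goes in models/unet:

```shell
# Typical ComfyUI model layout for a Flux setup with separate
# text encoders and VAE. Subfolder names may vary by install.
mkdir -p ComfyUI/models/unet   # Flux FP8 model (unet-only file)
mkdir -p ComfyUI/models/clip   # text encoders
mkdir -p ComfyUI/models/vae    # VAE
ls ComfyUI/models
```

If a model doesn't show up in the loader node's dropdown after a restart, it's almost always in the wrong subfolder.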
What Actually Worked
Once Flux was in place, the Compositor Node just clicked. I stacked 4 images—a background, two characters, and some text—and tweaked their positions with the node’s sliders. The real win? Flux’s prompt adherence kept everything cohesive, even when I rotated layers or resized them. No weird seams or mismatched lighting.
If you're trying this, I'd skip SD1.5 entirely and go straight to Flux. The difference is night and day. For ControlNet users, there's a solid pre-trained model here too: https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro.
Next up: testing how far I can push the 8-image limit. I’ll report back once I’ve crashed my GPU a few more times.