Here’s the thing about AI image generation—every new model promises to be a game-changer. Pulid 2 actually delivers on some of that hype, especially with the dual image support and that new Wave Speed feature. I just finished setting it up in ComfyUI, and yeah, it’s noticeably faster than the last version.
Getting the Models
First, you’ll need to grab the Flux models. The full 32-bit/16-bit version needs about 22GB of VRAM, which is pretty hefty. If you’re running tight on resources, the 8-bit FP8 variants work surprisingly well; I tested the e4m3fn version on a 12GB card and it handled most prompts without issues.
All the model links are on the Flux Hugging Face page, including the required CLIP and VAE files. Just download and drop them into your usual ComfyUI model folders, or script the downloads as shown after the list below.
Models you’ll need:
- 32-bit/16-bit (~22GB VRAM): model, encoder
- 8-bit GGUF (~12GB VRAM): model, encoder
- 8-bit FP8 e5m2 (~12GB VRAM): model, encoder
- 8-bit FP8 e4m3fn (~12GB VRAM): model, encoder
- CLIP and VAE (for all models): clip, vae
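If you’d rather script the downloads than click through the pages, here’s a minimal sketch using the huggingface_hub library. The repo and file names are my assumptions based on the official Flux releases, so double-check them against the links on the Hugging Face page:

```python
# Minimal download sketch using huggingface_hub (pip install huggingface_hub).
# Repo and file names are assumptions based on the official Flux releases;
# verify them against the links on the Flux Hugging Face page.
from huggingface_hub import hf_hub_download

downloads = [
    # (repo_id, filename, target ComfyUI folder)
    ("black-forest-labs/FLUX.1-dev", "flux1-dev.safetensors", "ComfyUI/models/unet"),
    ("comfyanonymous/flux_text_encoders", "t5xxl_fp8_e4m3fn.safetensors", "ComfyUI/models/clip"),
    ("comfyanonymous/flux_text_encoders", "clip_l.safetensors", "ComfyUI/models/clip"),
    ("black-forest-labs/FLUX.1-dev", "ae.safetensors", "ComfyUI/models/vae"),
]

for repo_id, filename, local_dir in downloads:
    # FLUX.1-dev is a gated repo, so you may need `huggingface-cli login` first.
    path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir=local_dir)
    print(f"saved {filename} -> {path}")
```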
Cleaning Up Old Installations
This part caught me off guard: if you had the older ComfyUI-PuLID-Flux nodes installed, you’ll need to remove them first. They use the same node names (ApplyPulidFlux), which causes conflicts. I learned this the hard way after my first test run crashed with cryptic errors.
After uninstalling the old version, installing Pulid 2 through ComfyUI Manager is straightforward. Just search for “Pulid 2” and hit install.
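If you want to double-check for the old pack before installing, here’s a small sketch. It assumes the old nodes live in a folder named ComfyUI-PuLID-Flux under custom_nodes; adjust the paths to your install:

```python
# Sketch: find (and optionally remove) the conflicting old node pack.
# Assumes the default folder name ComfyUI-PuLID-Flux; adjust paths to your install.
import shutil
from pathlib import Path

custom_nodes = Path("ComfyUI/custom_nodes")
old_pack = custom_nodes / "ComfyUI-PuLID-Flux"

if old_pack.exists():
    print(f"Removing conflicting node pack: {old_pack}")
    shutil.rmtree(old_pack)
else:
    print("No old PuLID-Flux pack found; safe to install Pulid 2.")
```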
Dealing With Missing Nodes
When I first loaded the example workflow, ComfyUI flagged a few missing nodes. The Manager’s “Missing Nodes” section handled most of them automatically, but I had to manually install one dependency by running the following from the ComfyUI_PuLID_Flux_ll folder:

```bash
pip install -r requirements.txt
```

Not a big deal, but worth noting if you’re not comfortable with command line stuff.
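If you’re not sure whether that install actually took, a quick sanity check from Python helps. The package names below are my guesses at the usual PuLID-Flux dependencies, so compare them against the node’s requirements.txt:

```python
# Quick dependency sanity check. Package names are assumptions based on
# typical PuLID-Flux requirements; compare with the node's requirements.txt.
import importlib.util

for pkg in ("insightface", "facexlib", "onnxruntime", "timm"):
    status = "ok" if importlib.util.find_spec(pkg) else "MISSING"
    print(f"{pkg}: {status}")
```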
Models the PuLID 2 nodes need (a quick path check follows the list):
- pulid_flux_v0.9.1.safetensors: /ComfyUI/models/pulid/
- EVA02-CLIP-L-14-336: /ComfyUI/models/clip/ (auto-download supported)
- AntelopeV2 models: /ComfyUI/models/insightface/models/antelopev2/ (auto-download supported)
- Parsing models
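To confirm everything landed where the nodes expect, a quick path check like this works. The ComfyUI root and the EVA02 entry are assumptions about your layout; adjust as needed:

```python
# Sketch: verify the PuLID model files are where the nodes expect them.
# The ComfyUI root path is an assumption; adjust to your install.
from pathlib import Path

comfy = Path("ComfyUI")
expected = [
    comfy / "models/pulid/pulid_flux_v0.9.1.safetensors",
    comfy / "models/clip",                           # EVA02-CLIP-L-14-336 lands here
    comfy / "models/insightface/models/antelopev2",  # AntelopeV2 models
]

for p in expected:
    print(f"{'found' if p.exists() else 'missing':>7}  {p}")
```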
Where Things Get Interesting
The real magic happens when you add the Tea Cache and Wave Speed nodes. Wave Speed especially makes a visible difference—it’s like having a turbo button for image generation. The GitHub repo has the installation details if you want to try it.
What surprised me most was how well the dual image support works. Feeding it two reference images actually blends the styles in ways I didn’t expect. It’s not perfect—sometimes the merge gets a bit chaotic—but when it works, the results are impressive.