Installation
These nodes are built to work with the ComfyUI-MochiWrapper nodes, and support for native ComfyUI Mochi is planned. For now, please follow the installation instructions for the wrapper.
Then git clone this repo into your ComfyUI/custom_nodes/ directory, or use the ComfyUI Manager to install it (once this repo has been added there).
There are no additional requirements.
Mochi Resampler
This node creates a sampler that converts the noise back into a video. Its inputs:
- `latents`: the latents of the original video
- `eta`: how strongly the generation should align with the original video
  - higher values keep the generation closer to the original
- `start_step`: the step at which the original video starts guiding the generation
  - a lower value (e.g. 0) will follow the original much more closely, but won't allow additional objects (like a hat) to be added
  - a higher value (e.g. 6) will allow new objects (like a hat) to be added, but may not follow the original video; higher values can also lead to bad results (blurs)
- `end_step`: the step at which to stop guiding the generation toward the original video
  - a lower value will lead to more differences in the video output
- `eta_trend`: whether the eta (strength of guidance) should stay constant, increase, or decrease as the steps progress; `linear_decrease` is the recommended setting for most changes (see the sketch after this list for how these trends might interact)
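For intuition, here is a rough sketch of how an eta schedule and the guidance blend could behave. This is not the node's actual code: the function names, the linear trend shapes, and the simple interpolation toward the original latents are assumptions made purely for illustration.

```python
import torch

def eta_schedule(eta, start_step, end_step, total_steps, trend="linear_decrease"):
    # Per-step guidance strength; steps outside [start_step, end_step) get no guidance.
    etas = torch.zeros(total_steps)
    span = max(end_step - start_step, 1)
    for i in range(start_step, min(end_step, total_steps)):
        progress = (i - start_step) / span  # 0.0 at start_step, approaching 1.0 at end_step
        if trend == "constant":
            etas[i] = eta
        elif trend == "linear_increase":
            etas[i] = eta * progress
        else:  # "linear_decrease", the recommended default
            etas[i] = eta * (1.0 - progress)
    return etas

def guide_toward_original(denoised, original_latents, eta_t):
    # Pull the current prediction toward the original video's latents by eta_t.
    return denoised + eta_t * (original_latents - denoised)
```

Read this way, `linear_decrease` would mean the original video constrains the generation strongly during the early steps and then gradually lets go, which fits the recommendation above.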
SamplerCustom (MochiWrapper)
This is the classic KSampler/SamplerCustom from ComfyUI, but adapted for the MochiWrapper.
- `positive` and `negative` can be anything you like; `positive` should be the target prompt
- `cfg`: any cfg that would work with normal Mochi (e.g. 4.50)
- `latents`: should be the latent from unsampling
- `sigmas`: must be prepared but NOT flipped (a sketch of the difference follows below)
- `seed`: the seed has no effect
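To illustrate what "prepared but NOT flipped" means, here is a hypothetical helper that builds a simple linear sigma schedule. The real schedule comes from the wrapper's own sigma nodes and may use a different curve; only the flip/no-flip distinction is the point here.

```python
import torch

def make_sigmas(num_steps: int, flip: bool = False) -> torch.Tensor:
    # Illustrative linear schedule from 1.0 (full noise) down to 0.0 (clean).
    sigmas = torch.linspace(1.0, 0.0, num_steps + 1)
    return torch.flip(sigmas, dims=[0]) if flip else sigmas

# Unsampling (video -> noise) typically runs a reversed (flipped) schedule,
# while this resampling pass expects the normal, un-flipped one.
resampling_sigmas = make_sigmas(30, flip=False)
```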