Installation
These nodes are built to work with the ComfyUI-MochiWrapper nodes and will soon work with native ComfyUI Mochi as well. For now, please follow the installation instructions for the wrapper.
Then `git clone` this repo into your `ComfyUI/custom_nodes/` directory, or install it with the ComfyUI Manager (once this repo has been added there). There are no additional requirements.
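For reference, a manual install might look like the following; the clone URL is a placeholder, so substitute this repository's actual GitHub URL:

```bash
cd ComfyUI/custom_nodes
git clone https://github.com/<user>/<this-repo>.git
# Restart ComfyUI afterwards so the new nodes are loaded.
```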
Mochi Resampler
This node creates a sampler that can convert the noise into a video.
- `latents`: the latents of the original video
- `eta`: how strongly the generation should align with the original video
  - higher values keep the generation closer to the original
- `start_step`: the step at which the original video starts guiding the generation
  - a lower value (e.g. 0) follows the original much more closely, but does not allow additional objects (like a hat) to be added
  - a higher value (e.g. 6) allows new objects (like a hat) to be added, but may not follow the original video as closely; higher values can also lead to bad results (blurring)
- `end_step`: the step at which to stop guiding the generation toward the original video
  - a lower value leads to more differences in the video output
- `eta_trend`: whether the eta (strength of guidance) should stay constant, increase, or decrease as the steps progress (see the sketch after this list)
  - `linear_decrease` is the recommended setting for most changes
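To make the interaction between `eta`, `start_step`, `end_step`, and `eta_trend` concrete, here is a minimal sketch of how a per-step guidance schedule could behave. This is an illustration only, not this node's actual implementation; the function name `eta_schedule` and the trend names other than `linear_decrease` are assumed for the example.

```python
# Illustrative sketch only -- not this node's actual code.
def eta_schedule(eta, start_step, end_step, total_steps, eta_trend="linear_decrease"):
    """Return one eta value per step; 0.0 means no pull toward the original video."""
    values = []
    for step in range(total_steps):
        if step < start_step or step >= end_step:
            values.append(0.0)  # outside the guidance window
            continue
        # Progress through the guidance window, from 0.0 to 1.0.
        progress = (step - start_step) / max(end_step - start_step - 1, 1)
        if eta_trend == "constant":
            values.append(eta)
        elif eta_trend == "linear_increase":
            values.append(eta * progress)
        elif eta_trend == "linear_decrease":
            values.append(eta * (1.0 - progress))
        else:
            raise ValueError(f"unknown eta_trend: {eta_trend}")
    return values

# Example: strong guidance in the early steps that fades out by end_step,
# leaving the later steps free to introduce changes such as a new object.
print(eta_schedule(eta=0.8, start_step=0, end_step=6, total_steps=10))
```

With `linear_decrease`, guidance is strongest right after `start_step` and fades out toward `end_step`, which matches the recommendation above.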
SamplerCustom (MochiWrapper)
This is the classic KSampler/SamplerCustom from ComfyUI, but for the MochiWrapper.
- `positive` and `negative` can be anything you like; `positive` should be the target prompt
- `cfg`: any cfg value that would work with normal Mochi (e.g. 4.50)
- `latents`: the latent from unsampling
- `sigmas`: must be prepared but NOT flipped (see the sketch after this list)
- `seed`: has no effect
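A quick illustration of the "NOT flipped" note on `sigmas`: a normal sampling schedule runs from high noise to low noise, while unsampling workflows typically reverse (flip) it. The snippet below is only a sketch of that distinction, assuming the sigmas arrive as a torch tensor; it is not code from this repo.

```python
import torch

# A typical denoising sigma schedule: high noise -> low noise.
sigmas = torch.tensor([1.0, 0.8, 0.6, 0.4, 0.2, 0.0])

# Unsampling workflows usually flip the schedule (low noise -> high noise)...
flipped_sigmas = torch.flip(sigmas, dims=[0])

# ...but this SamplerCustom expects the normal, unflipped order.
print(sigmas)          # tensor([1.0000, 0.8000, 0.6000, 0.4000, 0.2000, 0.0000])
print(flipped_sigmas)  # tensor([0.0000, 0.2000, 0.4000, 0.6000, 0.8000, 1.0000])
```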