
ComfyUI MV-Adapter Advanced Workflow


Today, I’ll walk you through my experience setting up ComfyUI, writing effective prompts, and dialing in the best settings for the MV-Adapter. If you’ve been struggling, or just want to know how to get professional-quality images, you’re in the right place. Let’s get started!

Writing the Best Prompts

Creating stunning images starts with a good prompt. Here’s what I learned about writing effective prompts:

  • Be Specific: Instead of saying, “a cat,” I wrote something like, “a fluffy orange tabby cat with blue eyes, sitting on a wooden chair.”
  • Describe the Style: I added details like “3D style” or “realistic painting” to get the look I wanted.
  • Include Lighting and Background: Prompts like “soft morning light in a quiet meadow” really helped bring my images to life.

Here’s an example prompt that worked perfectly for me:
“3D style, a decorative figurine of a young anime-style girl, with pink hair and a shy smile, standing in a sunny garden.”
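Since I build every prompt from the same three ingredients (subject, style, lighting/background), a tiny Python helper keeps them consistent. This is just string handling, nothing ComfyUI-specific, and the field names are my own convention rather than anything the MV-Adapter expects:

```python
# Minimal prompt builder: joins the pieces described above into one prompt string.
# The part names (style, subject, lighting) are just my own convention.

def build_prompt(style: str, subject: str, lighting: str) -> str:
    """Combine style, subject, and lighting/background details into a single prompt."""
    return ", ".join(part.strip() for part in (style, subject, lighting) if part.strip())

prompt = build_prompt(
    style="3D style",
    subject="a decorative figurine of a young anime-style girl, with pink hair and a shy smile",
    lighting="standing in a sunny garden, soft morning light",
)
print(prompt)
```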

Best Settings to Run the MV-Adapter

To get the best results with the MV-Adapter, follow these settings (a rough code equivalent is sketched after the list):

  1. Choose the Right Model:
    • Use Stable Diffusion XL (SDXL) for high-quality images.
    • You can also try SD 2.1 if you want a lighter model that runs a bit faster.
  2. Set Image Size:
    • For clear and detailed results, use a size of 768×768 pixels.
  3. Configure Steps:
    • Set the Inference Steps to 50. This gives the model enough time to process and create detailed images.
  4. Set CFG Scale:
    • Keep the CFG scale at 7. This ensures the output matches your prompt while allowing some creative flexibility.
  5. Use the Diffuser MV Sampler:
    • Add the MV-Adapter node to your workflow.
    • Ensure the sampler is set to generate six views of the image.
  6. GPU Settings:
    • If your system has a GPU, enable it for faster processing.
    • For removing backgrounds or applying textures, you can turn on GPU-specific settings in the MV-Adapter.
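In ComfyUI these values live in the MV-Adapter and sampler nodes, but if you want a rough feel for what they map to in code, here is a minimal diffusers sketch of a single SDXL render using the same size, steps, and CFG. The six-view sampling itself is handled by the MV-Adapter node, which this sketch does not reproduce; the model ID and output file name are assumptions.

```python
# Rough diffusers equivalent of the settings above (single view only).
# The MV-Adapter node handles the six-view sampling inside ComfyUI; this sketch
# just shows where image size, inference steps, and CFG scale plug in.
import torch
from diffusers import StableDiffusionXLPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"   # step 6: use the GPU if you have one

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",            # step 1: SDXL
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

image = pipe(
    prompt="3D style, a decorative figurine of a young anime-style girl, "
           "with pink hair and a shy smile, standing in a sunny garden",
    width=768, height=768,        # step 2: 768x768
    num_inference_steps=50,       # step 3: 50 steps
    guidance_scale=7.0,           # step 4: CFG 7
).images[0]
image.save("mv_adapter_preview.png")
```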

Upscaling the Images

Once I had the multi-view images, I wanted to enhance them further. I used the 4x Megapixel Upscaler, and the difference was mind-blowing!

Download the 4x upscaler model here: https://www.mediafire.com/file/txp3gv5mxl1av99/comfyuiupsacle.pth/file

The details became sharper, the textures on the character’s hair and clothes were more refined, and the colors popped beautifully. Even small elements, like the folds in fabric or highlights on the figurine’s face, looked stunning after upscaling.
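For reference, ComfyUI loads .pth upscalers like this one through the spandrel library (the loader behind its Load Upscale Model node). Here is a minimal standalone sketch of running a 4x model on one of the saved views; the file names and tensor handling are my own assumptions, not part of the original workflow.

```python
# Standalone sketch: run a 4x .pth upscaler on a saved view using spandrel.
# File names below are placeholders -- point them at your own files.
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor, to_pil_image
from spandrel import ModelLoader

model = ModelLoader().load_from_file("comfyuiupsacle.pth")   # the upscaler .pth from the link above
model.eval()
if torch.cuda.is_available():
    model.cuda()

img = to_tensor(Image.open("view_0.png").convert("RGB")).unsqueeze(0)  # BCHW, values in 0..1
if torch.cuda.is_available():
    img = img.cuda()

with torch.no_grad():
    upscaled = model(img)                                    # 4x larger, still BCHW

to_pil_image(upscaled.squeeze(0).clamp(0, 1).cpu()).save("view_0_4x.png")
```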


Troubleshooting Errors

I did face a few issues, like errors when trying to remove backgrounds. Here’s what fixed it:

Download the replacement nodes.py file here: https://www.mediafire.com/file/rhddod5pgyk9xxe/nodes.py/file

  1. I downloaded the nodes.py file from the link above.
  2. I went to the ComfyUI MV-Adapter folder.
  3. I replaced the existing nodes.py with the downloaded one (a small script for this swap is sketched below).
  4. I refreshed the workflow, and everything worked perfectly!
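If you would rather script the swap than do it by hand, here is a small sketch that backs up the existing nodes.py before copying the downloaded one over it. Both paths are assumptions about where your ComfyUI install and your download live, so adjust them first.

```python
# Back up the current nodes.py and drop in the downloaded replacement.
# Both paths below are assumptions -- adjust them to your own install.
import shutil
from pathlib import Path

custom_node_dir = Path("ComfyUI/custom_nodes/ComfyUI-MVAdapter")   # assumed folder name
downloaded = Path("~/Downloads/nodes.py").expanduser()             # the file from the link above

target = custom_node_dir / "nodes.py"
shutil.copy2(target, target.parent / "nodes.py.bak")   # keep a backup of the original
shutil.copy2(downloaded, target)                       # replace it with the downloaded file
print(f"Replaced {target}; refresh or restart ComfyUI to reload the node.")
```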

If you’re using a GPU and still run into errors, disable the GPU option. But if you specifically want the GPU for tasks like removing backgrounds, set it back to true.

LoRA file: 3d_render_style_xl.safetensors from goofyai/3d_render_style_xl on Hugging Face
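If you want the same 3D-render look outside ComfyUI, diffusers can pull this LoRA straight from its Hugging Face repo. This is a minimal sketch assuming the SDXL base model from the earlier example; the prompt and output file name are just placeholders.

```python
# Attach the 3D-render-style LoRA linked above to a plain SDXL pipeline.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights(
    "goofyai/3d_render_style_xl",                  # Hugging Face repo from the link above
    weight_name="3d_render_style_xl.safetensors",
)

image = pipe(
    "3d style, a decorative figurine of a young anime-style girl in a sunny garden",
    width=768, height=768, num_inference_steps=50, guidance_scale=7.0,
).images[0]
image.save("figurine_3d_lora.png")
```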

