FLUX, the open-source text-to-image model from Black Forest Labs, the team founded by the original creators of Stable Diffusion, has become hugely popular. With generation quality approaching Midjourney's, it has earned the nickname of "open-source king" in the text-to-image field. But producing top-tier images normally demands serious GPU horsepower. Is there a way to run FLUX on a machine with limited video memory? There is, and this article shows you how.
II. Download GGUF
The key to running FLUX on low-VRAM machines is GGUF, a quantized model format that reproduces the original FLUX model's output quality and preserves detail as closely as possible.
It also runs faster than the original model, and the VRAM requirement drops to as little as 6 GB.
https://huggingface.co/city96/FLUX.1-dev-gguf/tree/main
Which file should you download? It depends on how much video memory you have.
If you have 24 GB of VRAM, download one of the larger quantizations (close to 24 GB in size).
Likewise, if your card only has 6 GB, pick a 4-6 GB file.
The larger the file, the better the detail, but in practice the difference is not that big.
- Place the downloaded model in models/unet under your ComfyUI root directory.
- After downloading the model, we also need to download the CLIP text encoder models:
https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main
All three files (clip_l and the two t5xxl variants) need to be downloaded and placed in the root directory under models/clip; the workflow will use clip_l plus whichever t5xxl variant fits your VRAM (fp8 for smaller cards, fp16 if you have memory to spare).
Finally, we install the FLUX VAE:
https://huggingface.co/black-forest-labs/FLUX.1-dev
Put the VAE file (ae.safetensors) in the root directory's models/vae folder. Note that the FLUX.1-dev repository is gated, so you may need to log in to Hugging Face and accept the license first. If you prefer scripting these downloads, see the sketch below.
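Here is a minimal download sketch using the huggingface_hub Python package. The GGUF filename is just an example; substitute whichever quantization fits your VRAM, and adjust COMFY to your actual ComfyUI root.

```python
# Minimal download sketch (pip install huggingface_hub).
# The GGUF filename is an example -- pick the quant you chose above.
from huggingface_hub import hf_hub_download

COMFY = "ComfyUI"  # adjust to your ComfyUI root directory

# Quantized FLUX UNet (GGUF) -> models/unet
hf_hub_download("city96/FLUX.1-dev-gguf", "flux1-dev-Q4_K_S.gguf",
                local_dir=f"{COMFY}/models/unet")

# Text encoders -> models/clip
for name in ("clip_l.safetensors",
             "t5xxl_fp8_e4m3fn.safetensors",
             "t5xxl_fp16.safetensors"):
    hf_hub_download("comfyanonymous/flux_text_encoders", name,
                    local_dir=f"{COMFY}/models/clip")

# VAE -> models/vae (gated repo: accept the FLUX.1-dev license on
# Hugging Face and run `huggingface-cli login` before this will work)
hf_hub_download("black-forest-labs/FLUX.1-dev", "ae.safetensors",
                local_dir=f"{COMFY}/models/vae")
```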
III. Set up the FLUX Text-to-Image workflow
First, install the new custom node pack: ComfyUI-GGUF.
If the online installation fails, we can install it manually:
https://github.com/city96/ComfyUI-GGUF?tab=readme-ov-file
Download and unzip ComfyUI-GGUF-main into your custom_nodes folder.
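If you'd rather script the manual install, here is a minimal sketch, assuming git is on your PATH and you run it from the directory containing ComfyUI:

```python
# Manual install sketch: clone ComfyUI-GGUF into custom_nodes and
# install the gguf Python package it depends on for inference.
import subprocess
import sys

subprocess.run(
    ["git", "clone", "https://github.com/city96/ComfyUI-GGUF",
     "ComfyUI/custom_nodes/ComfyUI-GGUF"],
    check=True,
)
subprocess.run(
    [sys.executable, "-m", "pip", "install", "--upgrade", "gguf"],
    check=True,
)
```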
Then we start (or restart) ComfyUI so the new node is loaded.
Add a new Unet Loader (GGUF) node and select the GGUF model you downloaded.
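For orientation, this is roughly how the loader nodes look in ComfyUI's API-format workflow JSON, expressed here as a Python dict. UnetLoaderGGUF comes from ComfyUI-GGUF, while DualCLIPLoader and VAELoader are stock ComfyUI nodes; the filenames are the examples from the download step, so substitute your own.

```python
# Loader portion of a FLUX GGUF workflow in ComfyUI API format (sketch).
prompt = {
    "1": {  # replaces the usual diffusion-model loader
        "class_type": "UnetLoaderGGUF",
        "inputs": {"unet_name": "flux1-dev-Q4_K_S.gguf"},
    },
    "2": {  # loads clip_l plus one t5xxl variant in "flux" mode
        "class_type": "DualCLIPLoader",
        "inputs": {
            "clip_name1": "t5xxl_fp8_e4m3fn.safetensors",
            "clip_name2": "clip_l.safetensors",
            "type": "flux",
        },
    },
    "3": {
        "class_type": "VAELoader",
        "inputs": {"vae_name": "ae.safetensors"},
    },
    # Wire these into CLIPTextEncode, KSampler, and VAEDecode as in a
    # standard FLUX text-to-image workflow.
}
```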