The checkpoint goes in ComfyUI/models/unet (not in ComfyUI/models/checkpoints).
Download the original weights here:

Download the fp8 version for systems with <24 GB of VRAM: flux1-dev-fp8.safetensors from Kijai/flux-fp8 (huggingface.co)
Text encoders go in ComfyUI/models/clip: comfyanonymous/flux_text_encoders (huggingface.co)
The VAE (ae.sft) goes in ComfyUI/models/vae:
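If you want to sanity-check the layout before launching, here is a minimal Python sketch, run from the folder that contains ComfyUI. Only ae.sft and flux1-dev-fp8.safetensors are named in this post; the clip/t5xxl filenames are assumptions based on the flux_text_encoders repo, so rename them to match what you actually downloaded:

from pathlib import Path

# Root of your ComfyUI install; adjust as needed.
comfy = Path("ComfyUI")

# Expected locations per the instructions above.
expected = [
    comfy / "models/unet/flux1-dev-fp8.safetensors",  # or the original full weights
    comfy / "models/clip/clip_l.safetensors",         # assumed filename
    comfy / "models/clip/t5xxl_fp16.safetensors",     # or the fp8 t5xxl variant
    comfy / "models/vae/ae.sft",
]

for path in expected:
    status = "OK     " if path.exists() else "MISSING"
    print(f"{status} {path}")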
Download the fp8 t5xxl if you want lower RAM use at the cost of somewhat degraded quality.
Launch ComfyUI with the "--lowvram" argument (set in the .bat file) to offload the text encoder to the CPU; see the sketch below.
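On the Windows portable build, that usually means editing the launch .bat (typically run_nvidia_gpu.bat). A sketch of what the edited line might look like; the path and the other flags are assumptions, so keep whatever your file already has and just append --lowvram:

.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram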
I can confirm this runs on:
- RTX 3090 (24 GB): 1.29 s/it
- RTX 4070 (12 GB): 85 s/it
Both were running the fp8 quantized version; the 4070 is very slow, though.
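For scale: at an assumed 20 sampling steps (pick whatever your workflow uses), 1.29 s/it works out to roughly 26 s per image on the 3090, while 85 s/it is roughly 28 minutes per image on the 4070.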