IF by DeepFloyd Lab at StabilityAI

We introduce DeepFloyd IF, a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding. DeepFloyd IF is a modular system composed of a frozen text encoder and three cascaded pixel diffusion modules: a base model that generates a 64x64 px image from a text prompt, and two super-resolution models, each designed to generate images of increasing resolution: 256x256 px and 1024x1024 px. All stages of the model utilize a frozen text encoder based on the T5 transformer to extract text embeddings, which are then fed into a UNet architecture enhanced with cross-attention and attention pooling. The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset. Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis.

Minimum requirements to use all IF models:
- 16GB vRAM for IF-I-XL (4.3B, text to 64x64 base module) & IF-II-L (1.2B, 64x64 to 256x256 upscaler module)
- 24GB vRAM for IF-I-XL (4.3B, text to 64x64 base module) & IF-II-L (1.2B, 64x64 to 256x256 upscaler module) & Stable x4 (256x256 to 1024x1024 upscaler)
- xformers installed, with the environment variable FORCE_MEM_EFFICIENT_ATTN=1 set
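The xformers requirement can be satisfied roughly as follows (a sketch; the exact install command may vary with your CUDA/PyTorch setup):

```shell
# Install the memory-efficient attention kernels
pip install xformers

# Enable memory-efficient attention in the IF code
export FORCE_MEM_EFFICIENT_ATTN=1
```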
Quick Start
Local notebooks
The Dream, Style Transfer, Super Resolution and Inpainting modes are available in a Jupyter Notebook here.
Integration with Diffusers

IF is also integrated with the Hugging Face Diffusers library.
Diffusers runs each stage individually, allowing the user to customize the image generation process and to inspect intermediate results easily.
Example
Before you can use IF, you need to accept its usage conditions. To do so:
- Make sure to have a Hugging Face account and be logged in
- Accept the license on the model card of DeepFloyd/IF-I-XL-v1.0
- Make sure to log in locally: install huggingface_hub, run the login function in a Python shell, and enter your Hugging Face Hub access token.
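The local login step can be sketched as follows (you will be prompted for your access token):

```shell
# Install the Hub client, then run the login function in a Python shell
pip install huggingface_hub
python -c "from huggingface_hub import login; login()"
```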
Next we install diffusers and its dependencies, and we can then run the model locally.
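A typical install looks roughly like this (the exact package list is a sketch; pin versions as the Diffusers docs recommend):

```shell
pip install --upgrade diffusers transformers accelerate safetensors sentencepiece
```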
By default, diffusers makes use of model CPU offloading to run the whole IF pipeline with as little as 14 GB of VRAM.
If you are using torch>=2.0.0, make sure to delete all enable_xformers_memory_efficient_attention() calls.
There are multiple ways to speed up inference and lower memory consumption even further with diffusers; to do so, please have a look at the Diffusers docs.
For more detailed information about how to use IF, please have a look at the IF blog post and the documentation.
Run the code locally
Loading the models into VRAM
I. Dream
Dream is the text-to-image mode of the IF model.

II. Zero-shot Image-to-Image Translation
In Style Transfer mode, the output of your prompt comes out in the style of the support_pil_img.

III. Super Resolution
For super-resolution, users can run IF-II and IF-III, or 'Stable x4', on an image that was not necessarily generated by IF (two cascades):
IV. Zero-shot Inpainting

Model Zoo
Links to download the weights, as well as the model cards, will be available soon for each model in the model zoo.
Original
| Name | Cascade | Params | FID | Batch size | Steps |
|---|---|---|---|---|---|
| IF-I-M | I | 400M | 8.86 | 3072 | 2.5M |
| IF-I-L | I | 900M | 8.06 | 3200 | 3.0M |
| IF-I-XL* | I | 4.3B | 6.66 | 3072 | 2.42M |
| IF-II-M | II | 450M | - | 1536 | 2.5M |
| IF-II-L* | II | 1.2B | - | 1536 | 2.5M |
| IF-III-L* (soon) | III | 700M | - | 3072 | 1.25M |

*best modules
Quantitative Evaluation
Zero-shot FID on the COCO dataset: FID = 6.66

License
The code in this repository is released under a bespoke license (see added point two).
The weights will be available soon via the DeepFloyd organization at Hugging Face and have their own LICENSE.
Disclaimer: The initial release of the IF model is under a restricted research-purposes-only license temporarily to gather feedback, and after that we intend to release a fully open-source model in line with other Stability AI models.
Limitations and Biases
The models available in this codebase have known limitations and biases. Please refer to the model card for more information.
DeepFloyd IF creators:

Research Paper (Soon)

Acknowledgements
Special thanks to StabilityAI and its CEO Emad Mostaque for invaluable support, providing GPU compute and infrastructure to train the models (our gratitude goes to Richard Vencu); thanks to LAION, and Christoph Schuhmann in particular, for their contribution to the project and well-prepared datasets; thanks to the Hugging Face teams for optimizing the models' speed and memory consumption during inference, creating demos and giving cool advice!
External Contributors
- The biggest thanks to @Apolinário, for ideas, consultations, help and support at all stages to make IF available in open source; for writing a lot of documentation and instructions; for creating a friendly atmosphere in difficult moments;

- Thanks to @patrickvonplaten, for improving the loading time of UNet models by 80%, and for integrating Stable-Diffusion-x4 as a native pipeline;

- Thanks to @williamberman and @patrickvonplaten for the diffusers integration;

- Thanks to @hysts and @Apolinário for creating the best Gradio demo with IF;
- Thanks to @Dango233, for adapting IF to xformers memory-efficient attention.