Nov 20, 2024
10 min read

Use FLUX, PyTorch, and Streamlit to Build an AI Image Generation App

The need for AI-generated images has been growing rapidly in recent years. These images are not only used for artistic purposes, but also for practical applications in various industries. For example, in the fashion industry, AI-generated images can be used to create virtual models for showcasing clothing. In the automotive industry, AI-generated images can be used for designing and testing new car models. And the best part? You can now run your own AI image generation machine on Koyeb GPUs.

The FLUX.1 [dev] model (by BlackForestLabs) is an advanced AI image generation model that produces outstanding output quality. It is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. It features competitive prompt following, and is trained using guidance distillation. Additionally, the generated outputs can be utilized for personal, scientific, and non-commercial purposes as outlined in the FLUX.1 [dev] Non-Commercial License.

In this tutorial, we will learn how to set up a Streamlit application, integrate the FLUX model for real-time image generation, and deploy the application using Docker and Koyeb, ensuring a scalable image generation service.

You can deploy the FLUX application as built in this tutorial using the Deploy to Koyeb button below:

Deploy to Koyeb

Requirements

To successfully follow this tutorial, you will need the following:

  • Python installed on your local machine
  • A Hugging Face account and an access token with access to the FLUX.1 [dev] model
  • A GitHub account to host the application code
  • A Koyeb account to deploy the application

Understanding of the components

Text-To-Image Generation and FLUX Model

The process of text-to-image generation involves the model interpreting input text and translating it into visual representations. This process consists of several steps: first, the model encodes the textual input into a latent space, capturing the semantic meaning of the words. Next, it employs a generative process to sample from this latent space, producing images that align with the described concepts.

The FLUX model, developed by BlackForestLabs, is a state-of-the-art 12 billion parameter rectified flow transformer trained on extensive datasets containing pairs of images and their corresponding textual descriptions. This training enables the model to learn the intricate relationships between language and visual content. Consequently, users can input detailed prompts, and the FLUX model generates images that reflect those prompts with accuracy (and creativity).

Streamlit

Streamlit is an open-source Python library designed to create interactive data applications, often referred to as dashboards. It empowers developers to build and share data apps simply and intuitively, eliminating the need for extensive web development expertise.

Streamlit apps are created as Python scripts, which are then executed within the Streamlit environment. The library offers a set of functions for adding interactive elements to the app, such as buttons, text inputs, and file uploaders.

Steps

  1. Set up the environment: Start by setting up your project directory, installing necessary dependencies, and configuring environment variables.
  2. Set up Streamlit: Next, install Streamlit and create the initial user interface for your application.
  3. Generate AI images with the FLUX model: Use the FLUX model to generate AI images based on the user prompt and configuration.
  4. Dockerize the Streamlit application: Create a Dockerfile to containerize your application for consistent deployment.
  5. Deploy to Koyeb: Finally, deploy your application on the Koyeb platform.

Set up the environment

Let's start by creating a new Streamlit project. To keep your Python dependencies organized, you should create a virtual environment.

First, create and navigate into a local directory:

# Create and move to the new directory
mkdir example-koyeb-flux-images
cd example-koyeb-flux-images

Afterwards, create and activate a new virtual environment:

# Create a virtual environment
python -m venv venv

# Activate the virtual environment (Windows)
.\venv\Scripts\activate.bat

# Activate the virtual environment (Linux)
source ./venv/bin/activate

Now, create a requirements.txt file with the following dependencies:

streamlit
watchdog
diffusers
torch
torchvision
einops
huggingface_hub[hf_transfer]
safetensors
sentencepiece
transformers
tokenizers
protobuf
requests
invisible-watermark
accelerate
peft

In terms of dependencies, we have included Streamlit for building the web app in Python, huggingface_hub for downloading and running the FLUX model locally, and watchdog to monitor file system events. The accelerate package streamlines the training and inference of machine learning models, making it easier to manage distributed setups and optimize performance. The invisible-watermark package embeds invisible watermarks in generated content, making it possible to identify outputs produced by the model.
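If you later want to confirm that the environment resolved correctly, a small helper script like the following (hypothetical, not part of the app) can report whether each package is importable:

```python
# check_deps.py: report which packages from requirements.txt are importable
import importlib.util

def check_packages(names):
    """Return a dict mapping package name -> whether it can be imported."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

if __name__ == "__main__":
    status = check_packages(["streamlit", "torch", "diffusers", "transformers"])
    for name, found in status.items():
        print(f"{name}: {'ok' if found else 'MISSING'}")
```

Note that `find_spec` only checks that the package can be located; it does not import it, so the check is fast even for heavy packages like torch.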

Now, you can install the dependencies with the following command:

pip install -r requirements.txt

Now, let's move on to creating the Streamlit application.

Set up Streamlit

In this step, you will set up the Streamlit UI that will define the visual layout of the page, and the ability for users to load the FLUX model to start generating images with AI. All the logic for the project will reside in this file, so you can start by creating an app.py file with the following code:

# File: app.py

import os
import torch
import streamlit
from diffusers import FluxPipeline
from huggingface_hub import login, snapshot_download

# Log in to Hugging Face using the provided token from environment variables
login(token=os.getenv('HF_TOKEN'))

# Set the title of the Streamlit application
streamlit.title("AI Image Generation with FLUX.1-dev")

# Create a text input field for the user to enter a prompt for image generation
prompt = streamlit.text_input("Enter your prompt:", "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k")

# Create four columns for user inputs
col1, col2, col3, col4 = streamlit.columns(4)

# Column 1: Input for image width
with col1:
    num_width = streamlit.number_input("Width:", min_value=1, value=256)

# Column 2: Input for image height
with col2:
    num_height = streamlit.number_input("Height:", min_value=1, value=256)

# Column 3: Input for the number of images to generate
with col3:
    num_images = streamlit.number_input("Images:", min_value=1, value=3)

# Column 4: Input for the number of inference steps
with col4:
    num_inference_steps = streamlit.number_input("Steps:", min_value=1, value=3)

# Button to load the model from Hugging Face
if streamlit.button("Load Model"):
    snapshot_download(repo_id="black-forest-labs/FLUX.1-dev", cache_dir="./FLUX_1_dev")

The code above does the following:

  • Begins by importing the required modules: os, torch, streamlit, the FluxPipeline from diffusers, and the Hugging Face Hub helpers.
  • Logs into Hugging Face using a token stored in environment variables, allowing access to the model repository.
  • Sets the page title and creates a text input field for the prompt, pre-filled with an example.
  • Creates four columns to organize user inputs for image dimensions, the number of images, and inference steps. Each column contains a number input field with specified minimum and default values.
  • Creates a Load Model button to load the FLUX model from Hugging Face. When clicked, it downloads the model snapshot to a specified cache directory using the snapshot_download function.

With this, you have set up a UI that can load the FLUX model per user request. Now, let's move on to generating AI images based on the user prompt and settings.

Generate AI Images with FLUX Model

In this step, you will invoke the FLUX model to generate AI images based on the user prompt. Per the default values set in the inputs, the model will generate three images for each prompt. Make the following additions in the app.py file:

# File: app.py

# Existing code

# Button to generate images using the FLUX model
if streamlit.button("Generate Image"):
    # Load the FLUX model with specified parameters
    pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16, cache_dir="./FLUX_1_dev")

    # Generate images based on the user input parameters
    images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, height=num_height, width=num_width, num_images_per_prompt=num_images).images

    # Create three columns to display the generated images
    cols = streamlit.columns(3)

    # Loop through the generated images and display them in the columns
    for i, image in enumerate(images):
        cols[i % 3].image(image, caption=prompt)

The code addition above does the following:

  • Creates a button titled Generate Image. When clicked, it will instantiate a Flux pipeline for text-to-image generation.
  • Invokes the pipeline with the user-configured parameters: the desired height and width, the number of inference steps to take, and the number of images to generate.
  • Creates three columns and displays each generated image across them.
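The `i % 3` indexing in the display loop distributes the images across the three columns round-robin. The idea can be sketched in plain Python (a toy illustration, no Streamlit required):

```python
# Round-robin assignment of images to 3 columns, mirroring cols[i % 3]
def column_for_image(i, n_cols=3):
    """Return the column index that image i lands in."""
    return i % n_cols

# Five images across three columns: the 4th image wraps to the first column
assignments = [column_for_image(i) for i in range(5)]
print(assignments)  # -> [0, 1, 2, 0, 1]
```

This is why any number of generated images lays out cleanly even though only three columns are created.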

Now, you can run the Streamlit application with:

streamlit run ./app.py --server.port 8000

The application will now be available at http://localhost:8000. Test it by entering your own custom prompt, altering the default input values, and generating images.

Now, let's dockerize the application to ensure consistency between multiple deployments.

Dockerize the Streamlit application

Dockerizing deployments helps by creating a consistent and reproducible environment, ensuring that the application runs the same way on any system. It simplifies dependency management and enhances scalability, making deployments more efficient and reliable. To dockerize the application, create a Dockerfile at the root of your project with the following code:

FROM python:3.12 AS runner

WORKDIR /app

COPY app.py requirements.txt ./

RUN pip install -r requirements.txt --root-user-action=ignore
RUN pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121 --root-user-action=ignore

RUN apt-get update && apt-get install -y libsm6 libxext6 git git-lfs && rm -rf /var/lib/apt/lists/*

EXPOSE 8000

ENV HF_HUB_ENABLE_HF_TRANSFER=1

ARG HF_TOKEN
CMD ["streamlit", "run", "./app.py", "--server.port", "8000"]

Apart from a typical Dockerfile for deploying Python applications, the following tweaks and additions have been made:

  • pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121 --root-user-action=ignore to install PyTorch with CUDA support for GPU acceleration.
  • RUN apt-get update && apt-get install -y libsm6 libxext6 git git-lfs && rm -rf /var/lib/apt/lists/* is used to install git-lfs and git, and then clean up package lists to reduce image size.
  • EXPOSE 8000 is used to specify the port on which the Streamlit application will run.
  • CMD ["streamlit", "run", "./app.py", "--server.port", "8000"] is used to define the command to start the Streamlit app on port 8000.
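Before deploying, you can optionally build and run the image locally to confirm it starts (this assumes Docker is installed and a GPU is available; the image tag `flux-streamlit` is an arbitrary choice):

```shell
# Build the image from the project root
docker build -t flux-streamlit .

# Run it, passing the Hugging Face token and mapping the Streamlit port
docker run --gpus all -p 8000:8000 -e HF_TOKEN=<your-token> flux-streamlit
```

The `-e HF_TOKEN` flag supplies the token at runtime, which is where the app reads it via os.getenv.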

With all configured, let's move on to deploy the application to Koyeb.

Deploy to Koyeb

Now that you have the application running locally, you can also deploy it on Koyeb and make it available on the internet.

Create a new repository on your GitHub account so that you can push your code.

You can download a standard .gitignore file for Python from GitHub to exclude certain directories and files from being pushed to the repository:

curl -L https://raw.githubusercontent.com/github/gitignore/main/Python.gitignore -o .gitignore

Run the following commands in your terminal to commit and push your code to the repository:

git init
git add .
git commit -m "first commit"
git branch -M main
git remote add origin [Your GitHub repository URL]
git push -u origin main

You should now have all your local code in your remote repository. Now it is time to deploy the application.

Within the Koyeb control panel, while on the Overview tab, initiate the app creation and deployment process by clicking Create Web Service.

  1. Select GitHub as the deployment method.
  2. Select your repository from the menu. Alternatively, deploy from the example repository associated with this tutorial by entering https://github.com/koyeb/example-flux-1-pytorch in the public repository field.
  3. In the Instance selection, select a GPU Instance.
  4. Set your Hugging Face access token in the HF_TOKEN environment variable.
  5. Finally, click the Deploy button.

Once the application is deployed, you can visit the Koyeb service URL (ending in .koyeb.app) to access the Streamlit application.

Conclusion

In this tutorial, you built an AI image generation application using the FLUX.1 [dev] model and the Streamlit framework. Along the way, you learned how to invoke the Flux pipeline in Python to generate AI images on the fly, customized with user-provided prompts and settings, and how to use Streamlit to quickly prototype the user interface.

Given that the application was deployed using the Git deployment option, subsequent code push to the deployed branch will automatically initiate a new build for your application. Changes to your application will become live once the deployment is successful. In the event of a failed deployment, Koyeb retains the last operational production deployment, ensuring the uninterrupted operation of your application.

