---
title: Flux 1 Panorama
emoji: 🖼️
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 5.50.0
app_file: app.py
pinned: true
license: apache-2.0
short_description: Flux 1 Panorama
---
# Panorama FLUX 🏞️✨
Create stunning, seamless panoramic images by combining multiple distinct scenes with the power of the FLUX.1-schnell model. This application uses an advanced "Mixture of Diffusers" tiling pipeline to generate high-resolution compositions from left, center, and right text prompts.
## What is Panorama FLUX?
Panorama FLUX is a creative tool that leverages a sophisticated tiling mechanism to generate a single, wide-format image from three separate text prompts. Instead of stretching a single concept, you can describe different but related scenes for the left, center, and right portions of the image. The pipeline then intelligently generates each part and seamlessly blends them together.
This is ideal for:
- Creating expansive landscapes: Describe a beach that transitions into an ocean, which then meets a distant jungle.
- Composing complex scenes: Place different characters or objects side-by-side in a shared environment.
- Generating ultra-wide art: Create unique, high-resolution images perfect for wallpapers or digital art.
The core technology uses a custom `FluxMoDTilingPipeline` built on the Diffusers library, specifically adapted for the FLUX.1-schnell model's "Embedded Guidance" mechanism for fast, high-quality results.
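To make the tiling idea concrete, here is a minimal sketch of how three horizontal tiles with a shared overlap could be laid out across the panorama. The tile width and the 384-pixel overlap are illustrative assumptions, not the pipeline's actual internals.

```python
def tile_positions(target_width: int, num_tiles: int, overlap: int):
    """Return (x_start, x_end) for each tile so that adjacent tiles
    share `overlap` pixels of horizontal overlap."""
    # Solve: num_tiles * tile_width - (num_tiles - 1) * overlap = target_width
    tile_width = (target_width + (num_tiles - 1) * overlap) // num_tiles
    stride = tile_width - overlap
    return [(i * stride, i * stride + tile_width) for i in range(num_tiles)]

# Three 1280-px tiles with 384 px of overlap cover a 3072-px panorama.
print(tile_positions(target_width=3072, num_tiles=3, overlap=384))
# [(0, 1280), (896, 2176), (1792, 3072)]
```

Each of the three prompts drives one tile, and the overlapping regions are where the blending happens.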
## Key Features
- Multi-Prompt Composition: Control the left, center, and right of your image with unique prompts.
- Seamless Stitching: Uses advanced blending methods (Cosine or Gaussian) to eliminate visible seams between tiles.
- High-Resolution Output: Generates images far wider than what a standard pipeline can handle in a single pass.
- Efficient Memory Management: Integrates `mmgp` for local use on consumer GPUs and supports standard `diffusers` offloading for cloud environments via the `USE_MMGP` environment variable.
- Optimized for FLUX.1-schnell: Tailored to the 4-step inference and `guidance_scale=0.0` architecture of the distilled FLUX model.
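As a sketch of the seam-free blending idea (an assumption about the general technique, not the pipeline's exact masks), a raised-cosine profile gives each tile a weight of 0 at its edges and 1 at its center, so overlapping tiles can be cross-faded without a visible seam:

```python
import numpy as np

def raised_cosine_profile(n: int) -> np.ndarray:
    """Hann (raised-cosine) window: 0 at the edges, 1 at the center."""
    i = np.arange(n)
    return 0.5 * (1.0 - np.cos(2.0 * np.pi * i / (n - 1)))

def blend(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Cross-fade two 1-D tiles whose last/first `overlap` samples coincide."""
    w = raised_cosine_profile(2 * overlap)[overlap:]  # falling half: ~1 -> 0
    mixed = w * left[-overlap:] + (1.0 - w) * right[:overlap]
    return np.concatenate([left[:-overlap], mixed, right[overlap:]])
```

A Gaussian weight curve works the same way; it simply concentrates more of the weight toward the tile center.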
## Running the App Locally
Follow these steps to run the Gradio application on your own machine.
### 1. Prerequisites
- Python 3.9+
- Git and Git LFS installed (`git-lfs` is required to clone large model files).
### 2. Clone the Repository
```bash
git clone https://huggingface.co/spaces/elismasilva/flux-1-panorama
cd flux-1-panorama
```
### 3. Set Up a Virtual Environment (Recommended)
```bash
# Windows
python -m venv venv
.\venv\Scripts\activate

# macOS / Linux
python3 -m venv venv
source venv/bin/activate
```
### 4. Install Dependencies
This project includes a specific requirements file for local execution.
```bash
pip install -r requirements_local.txt
```
### 5. Configure the Model Path
By default, the app is configured to load the model from the Hugging Face Hub (`black-forest-labs/FLUX.1-schnell`). If you have downloaded the model locally (e.g., to `F:\models\flux_schnell`), you need to update the path in `app.py`.
Open `app.py` and modify this line:

```python
# app.py - Line 26 (approximately)
pipe = FluxMoDTilingPipeline.from_pretrained(
    "path/to/your/local/model",  # <-- CHANGE THIS
    torch_dtype=torch.bfloat16
).to("cuda")
```
### 6. Run the Gradio App

```bash
python app.py
```
The application will start and provide a local URL (usually `http://127.0.0.1:7860`) that you can open in your web browser.
## Using the Command-Line Script (`infer.py`)

The `infer.py` script is a great way to test the pipeline directly, without the Gradio interface. This is useful for debugging, checking performance, and ensuring everything works correctly.
### 1. Configure the Script

Open the `infer.py` file in a text editor. You can modify the parameters inside the `main()` function to match your desired output.
```python
# infer.py
# ... (imports)

def main():
    # --- 1. Load Model ---
    MODEL_PATH = "black-forest-labs/FLUX.1-schnell"  # Or your local path
    # ... (model loading code)

    # --- 2. Set Up Inference Parameters ---
    prompt_grid = [[
        "Your left prompt here.",
        "Your center prompt here.",
        "Your right prompt here."
    ]]
    target_height = 1024
    target_width = 3072
    # ... and so on for other parameters like steps, seed, etc.
```
### 2. Run the Script

Execute the script from your terminal:

```bash
python infer.py
```
The script will print its progress to the console, including the `tqdm` progress bar, and save the final image as `inference_output_schnell.png` in the project directory.
## Environment Variables

### `USE_MMGP`

This variable controls which memory optimization strategy is used.
- To use `mmgp` (recommended for local use): ensure the variable is not set, or set it to `true`. This is the default behavior.

  ```bash
  # (No action needed, or run)
  # Linux/macOS:
  export USE_MMGP=true
  # Windows CMD:
  set USE_MMGP=true

  python app.py
  ```

- To disable `mmgp` and use standard `diffusers` CPU offloading (for Hugging Face Spaces or troubleshooting): set the variable to `false`.

  ```bash
  # Linux/macOS
  USE_MMGP=false python app.py

  # Windows CMD
  set USE_MMGP=false
  python app.py

  # Windows PowerShell
  $env:USE_MMGP="false"
  python app.py
  ```
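For illustration, the flag could be parsed like this. This is a hedged sketch of the default-to-`true` behavior described above; the actual logic in `app.py` may differ.

```python
import os

def use_mmgp() -> bool:
    """Treat USE_MMGP as enabled unless it is explicitly set to "false"."""
    # Unset or "true" -> mmgp offloading (the local-use default)
    # "false"         -> standard diffusers CPU offloading
    return os.environ.get("USE_MMGP", "true").strip().lower() != "false"
```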
## Acknowledgements
- Black Forest Labs for the powerful FLUX models.
- The original authors of the Mixture of Diffusers technique.
- Hugging Face for the `diffusers` library.