import os
import random

import gradio as gr
import numpy as np
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel
import spaces
from translatepy import Translator

# Environment variables
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

translator = Translator()
HF_TOKEN = os.environ.get("HF_TOKEN", None)

# Constants
model = "black-forest-labs/FLUX.1-dev"
MAX_SEED = np.iinfo(np.int32).max

# CSS and JS settings
CSS = """
footer {
    visibility: hidden;
}
"""

JS = """function () {
    gradioURL = window.location.href
    if (!gradioURL.endsWith('?__theme=dark')) {
        window.location.replace(gradioURL + '?__theme=dark');
    }
}"""

# Initialize `pipe` to None globally
pipe = None

# Try to load the model
try:
    transformer = FluxTransformer2DModel.from_pretrained(
        "sayakpaul/FLUX.1-merged", torch_dtype=torch.bfloat16
    )
    if torch.cuda.is_available():
        pipe = FluxPipeline.from_pretrained(
            model, transformer=transformer, torch_dtype=torch.bfloat16
        ).to("cuda")
    else:
        print("CUDA is not available. Check your GPU settings.")
except Exception as e:
    print(f"Failed to load the model: {e}")


# Image generation function
def generate_image(prompt, width=1024, height=1024, scales=5, steps=4,
                   seed=-1, nums=1, progress=gr.Progress(track_tqdm=True)):
    if pipe is None:
        print("Model is not loaded properly. Please check the logs for details.")
        return None, "Model not loaded."

    # Pick a random seed when the user passes -1
    if seed == -1:
        seed = random.randint(0, MAX_SEED)
    seed = int(seed)

    # Translate the prompt to English before passing it to the pipeline
    text = str(translator.translate(prompt, "English"))

    generator = torch.Generator().manual_seed(seed)
    try:
        images = pipe(
            prompt=text,
            height=height,
            width=width,
            guidance_scale=scales,
            num_inference_steps=steps,
            max_sequence_length=512,
            num_images_per_prompt=nums,
            generator=generator,
        ).images
    except Exception as e:
        print(f"Error generating image: {e}")
        return None, "Error during image generation."

    return images, seed


# Build and launch the Gradio interface
with gr.Blocks(css=CSS, js=JS, theme="soft") as demo:
    gr.HTML("