{"metadata":{"kernelspec":{"language":"python","display_name":"Python 3","name":"python3"},"language_info":{"name":"python","version":"3.10.10","mimetype":"text/x-python","codemirror_mode":{"name":"ipython","version":3},"pygments_lexer":"ipython3","nbconvert_exporter":"python","file_extension":".py"}},"nbformat_minor":4,"nbformat":4,"cells":[{"cell_type":"markdown","source":"\n## NovelAi sd-webui AI绘画项目 修复版(完全免费,无需任何配置!)\n**torch: 2.0.0+cu118  •  xformers: 0.0.19**\n​\n# 有问题请加qq群632428790 (865/2000)\n### 急需一名宣传人员,无偿,会做B站视频就行\n## 拿我云端倒卖的,死个妈先,臭不要脸。早点死掉\n## 若您是通过付费渠道获得的此笔记,请立即退款并收集证据给群主以便起诉","metadata":{}},{"cell_type":"markdown","source":"## 使用教程:https://www.kaggle.com/code/at2020dead/novelai-stable-diffusion/notebook \n","metadata":{}},{"cell_type":"markdown","source":"\n
\n 📌 2022年11月18日: Created By Yiyiooo & Loading\n
\n最近更新日志:\n
\n 2023年3月5日更新:现在支持通过下载链接直接添加模型了,省去了先下载模型再手动上传的麻烦(配置示例见本节末尾)\n
\n
\n 2023年5月15日更新:现在可以同时开启两个webui,双线程并行跑图(GPU请选择 T4 x2,并将use2设置为True,示例见本节末尾)\n
\n
\n 2023年5月15日更新:启动流程改为多线程,启动速度更快一些\n
\n
\n 2023年6月6日更新:更新了xformers版本,生成速度更快一些\n
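\n\n下面是与上述更新对应的最小配置示意(仅作示例,变量名 use2 与 sd_model_urls 即后文配置单元中的同名变量,实际请在那里修改,不要单独运行本段):双开webui需把GPU选为 T4 x2 并将 use2 设为 True;通过下载链接添加模型则把链接填入 sd_model_urls 列表。\n\n```python\n# 最小配置示意(示例值,请在下方对应的配置单元中修改)\nuse2 = True   # 开启第二个webui,需在右侧设置中把GPU选为 T4 x2\nsd_model_urls = [\n    'https://civitai.com/api/download/models/59685',  # 示例:GhostMix_v1.2,启动时自动下载\n]\n```\n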
","metadata":{}},{"cell_type":"markdown","source":"# 注意事项/WARNING:\n- ### 1.将设置中的PERSISTENCE改为Files Only方便下次打开提高启动速度,第一次启动后下载Python环境包就不用下载第二次了\n- ### 2.检测到出现涩图会容易导致封号\n- ### 3.如果不能启动,请新建一个notebook并且重新导入\n- ### 4.若出现BUG,请跟我们反馈","metadata":{}},{"cell_type":"markdown","source":"## Ai绘画模型下载站:\n#### [Civitai](http://civitai.com)\n \n#### [huggingface](http://huggingface.co)\n# 友情合作 \n### [pix.ink](http://pix.ink) # 片绘\n## [hua-der.com](http://hua-der.com) # 画der","metadata":{}},{"cell_type":"code","source":"# 安装目录\ninstall_path=\"/kaggle/working\" #或者/kaggle\nupdata_webui = False #是否开机自动更新webui\n\n# 重置变量 会删掉sd_webui重新安装\nreLoad = False\nupdata_webui = False\n\n#清理和打包生成的图片\nzip_output=True\nclear_output=True\n#打包环境减少下次启动时\nuse_zip_venv = False\n\n# 使用huggingface保存和载入webui配置文件\nhuggingface_use = True\nhuggingface_token_file = '/kaggle/input/tenkens/hugfacetoken.txt'\nhuggiingface_repo_id = 'ACCA225/sdconfig'\n# 将会同步的文件\nyun_files = [\n'ui-config.json',\n'config.json',\n'styles.csv'\n]","metadata":{"execution":{"iopub.status.busy":"2023-07-04T14:44:03.091415Z","iopub.execute_input":"2023-07-04T14:44:03.091790Z","iopub.status.idle":"2023-07-04T14:44:03.106210Z","shell.execute_reply.started":"2023-07-04T14:44:03.091759Z","shell.execute_reply":"2023-07-04T14:44:03.104893Z"},"trusted":true},"execution_count":1,"outputs":[]},{"cell_type":"code","source":"#模型和插件\n\n# 插件列表: git仓库地址\n# 不需要的插件在前面加 # ,插件地址之间需要用英语逗号隔开\nextensions = [\n 'https://github.com/Elldreth/loopback_scaler',\n 'https://github.com/jexom/sd-webui-depth-lib',\n #'https://github.com/AlUlkesh/stable-diffusion-webui-images-browser',\n 'https://github.com/camenduru/sd-civitai-browser',\n 'https://github.com/Mikubill/sd-webui-controlnet',\n 'https://github.com/nonnonstop/sd-webui-3d-open-pose-editor',\n 'https://github.com/2575044704/stable-diffusion-webui-localization-zh_CN2',\n 'https://github.com/opparco/stable-diffusion-webui-two-shot',\n #'https://github.com/minicacas/stable-diffusion-webui-composable-lora',\n 'https://github.com/DominikDoom/a1111-sd-webui-tagcomplete',\n 'https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111',\n #'https://github.com/KohakuBlueleaf/a1111-sd-webui-locon',\n 'https://github.com/hnmr293/sd-webui-cutoff',\n 'https://github.com/hako-mikan/sd-webui-lora-block-weight',\n 'https://github.com/butaixianran/Stable-Diffusion-Webui-Civitai-Helper',\n 'https://github.com/catppuccin/stable-diffusion-webui',\n #'https://github.com/Nevysha/Cozy-Nest',\n 'https://github.com/Scholar01/sd-webui-mov2mov',\n 'https://github.com/toriato/stable-diffusion-webui-wd14-tagger',\n 'https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris',\n 'https://github.com/deforum-art/sd-webui-deforum',\n 'https://github.com/Scholar01/sd-webui-mov2mov',\n 'https://github.com/zanllp/sd-webui-infinite-image-browsing',\n]\n\n# Stable Diffusion模型请放在这里(不用填模型的文件名,只填模型的目录即可)\nsd_model = [\n#'/kaggle/input/cetus-mix/',\n#'/kaggle/input/aom3ackpt',\n'/kaggle/input/9527-fp16',\n#'/kaggle/input/dalcefo-painting',\n ]\n# Stable 
Diffusion模型下载链接放这里\nsd_model_urls=[\n#GhostMix_v1.2\n'https://civitai.com/api/download/models/59685',\n'https://huggingface.co/swl-models/9527/resolve/main/9527-non-ema-fp16.safetensors',\n#Counterfeit-V3.0\n'https://civitai.com/api/download/models/57618',\n#LibraMix\n'https://civitai.com/api/download/models/41391',\n'https://huggingface.co/datasets/sukaka/sd_models_fp16/resolve/main/cetusMix_Coda2.safetensors',\n'https://huggingface.co/datasets/sukaka/sd_models_fp16/resolve/main/cetusMix_Version35.safetensors',\n\n]\n\n# VAE模型请放在这里(不用填模型的文件名,只填模型的目录即可)\nvae_model = []\n#VAE模型下载链接放这里\nvae_model_urls=[\n'https://huggingface.co/stabilityai/sd-vae-ft-ema-original/resolve/main/vae-ft-ema-560000-ema-pruned.safetensors',\n'https://huggingface.co/datasets/sukaka/sd_models_fp16/resolve/main/clearvae.vae.pt',\n'https://huggingface.co/datasets/sukaka/sd_models_fp16/resolve/main/klF8Anime2.vae.pt',\n'https://huggingface.co/dector/vae-840000/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt'\n]\n\n# Lora模型的数据集路径请写在这里:\nlora_model = [\n#'/kaggle/input/lora-1',\n] \n# Lora模型下载链接放这里\nlora_model_urls=[\n#墨心\n'https://civitai.com/api/download/models/14856',\n#山楂糕\n'https://civitai.com/api/download/models/41580',\n#细节调整\n'https://civitai.com/api/download/models/62833'\n]\n# Lycoris和loha模型的数据集路径请写在这里:\nlyco_model = [\n#'/kaggle/input/lora-1',\n] \n# Lycoris和loha模型下载链接放这里\nlyco_model_urls=[\n#FilmGirl 胶片风\n'https://civitai.com/api/download/models/75069',\n#Teacher clothes 教师衣服\n\"https://civitai.com/api/download/models/65426\",\n#伪日光\n'https://civitai.com/api/download/models/71235'\n]\n\n# ControlNet模型data请放在这里:\ncn_model = [\n]\n# controlnet模型下载链接放这里\ncn_model_urls = [\n'https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11e_sd15_ip2p_fp16.safetensors',\n'https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11e_sd15_shuffle_fp16.safetensors',\n'https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11f1p_sd15_depth_fp16.safetensors',\n'https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_canny_fp16.safetensors',\n'https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors',\n'https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_lineart_fp16.safetensors',\n'https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_mlsd_fp16.safetensors',\n'https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_normalbae_fp16.safetensors',\n'https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_openpose_fp16.safetensors',\n'https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_scribble_fp16.safetensors',\n'https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15_softedge_fp16.safetensors',\n'https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors',\n'https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11u_sd15_tile_fp16.safetensors',\n]\n\n# Hypernetworks超网络模型路径请放在这里:\nhypernetworks_model = []\n#Hypernetworks超网络模型下载链接请放在这里\nhypernetworks_model_urls = []\n\n#放大算法路径请放在这里\nESRGAN = 
[]\n#放大算法链接请放在这里\nESRGAN_urls = [\n'https://huggingface.co/FacehugmanIII/4x_foolhardy_Remacri/resolve/main/4x_foolhardy_Remacri.pth',\n'https://huggingface.co/konohashinobi4/4xAnimesharp/resolve/main/4x-AnimeSharp.pth',\n'https://huggingface.co/lokCX/4x-Ultrasharp/resolve/main/4x-UltraSharp.pth',\n]\n\n# embeddings(pt文件)请放在这里:\nembeddings_model = [\n'/kaggle/input/bad-embedding',\n] \n# embeddings(pt文件)下载链接请放在这里:\nembeddings_model_urls=[\n'https://huggingface.co/datasets/sukaka/sd_configs/resolve/main/%E4%BA%BA%E4%BD%93%E4%BF%AE%E6%AD%A3/EasyNegative.pt',\n'https://huggingface.co/datasets/sukaka/sd_configs/resolve/main/%E4%BA%BA%E4%BD%93%E4%BF%AE%E6%AD%A3/bad-artist-anime.pt',\n'https://huggingface.co/datasets/sukaka/sd_configs/resolve/main/%E4%BA%BA%E4%BD%93%E4%BF%AE%E6%AD%A3/bad-hands-5.pt',\n'https://huggingface.co/datasets/sukaka/sd_configs/resolve/main/%E4%BA%BA%E4%BD%93%E4%BF%AE%E6%AD%A3/bad_prompt_version2.pt',\n'https://huggingface.co/datasets/sukaka/sd_configs/resolve/main/%E4%BA%BA%E4%BD%93%E4%BF%AE%E6%AD%A3/bad-image-v2-39000.pt',\n]\n\n#script文件导入\nscripts = []\n#script文件下载链接导入\nscripts_urls = [\n#'https://huggingface.co/datasets/sukaka/sd_configs/resolve/main/repositories/k-diffusion/k_diffusion/sampling.py'\n]\n\n#tag词库文件导入\ntags = []\n#tag词库文件下载链接导入\ntags_urls=[\n\"https://huggingface.co/datasets/sukaka/sd_configs/resolve/main/danbooru.zh_CN.csv\",\n]\n","metadata":{"execution":{"iopub.status.busy":"2023-07-04T14:44:03.109828Z","iopub.execute_input":"2023-07-04T14:44:03.110789Z","iopub.status.idle":"2023-07-04T14:44:03.126257Z","shell.execute_reply.started":"2023-07-04T14:44:03.110756Z","shell.execute_reply":"2023-07-04T14:44:03.125161Z"},"trusted":true},"execution_count":2,"outputs":[]},{"cell_type":"code","source":"#ngrok穿透\nngrok_use = True\nngrokTokenFile='/kaggle/input/tenkens/Authtoken.txt' # 非必填 存放ngrokToken的文件的路径\n#Frp 内网穿透\nuse_frpc = False\nfrpconfigfile = '/kaggle/input/tenkens/7860.ini' # 非必填 frp 配置文件,本地端口 7860\n\n# 启动时默认加载的模型名称 填模型名称,名称建议带上文件名后缀\nusedCkpt = 'cetusMix_Coda2.safetensors'\n\n#启动参数\nargs = [\n '--share',\n '--xformers',\n '--lowram',\n '--no-hashing',\n '--disable-nan-check',\n '--enable-insecure-extension-access',\n '--disable-console-progressbars',\n '--enable-console-prompts',\n '--no-gradio-queue',\n '--no-half-vae',\n '--api',\n f'--lyco-dir {install_path}/stable-diffusion-webui/models/lyco',\n]","metadata":{"execution":{"iopub.status.busy":"2023-07-04T14:44:03.128332Z","iopub.execute_input":"2023-07-04T14:44:03.128633Z","iopub.status.idle":"2023-07-04T14:44:03.141223Z","shell.execute_reply.started":"2023-07-04T14:44:03.128610Z","shell.execute_reply":"2023-07-04T14:44:03.140082Z"},"trusted":true},"execution_count":3,"outputs":[]},{"cell_type":"code","source":"use2 = False#是否开启两个webui\n#ngrok穿透\nngrok_use1 = True\nngrokTokenFile1='/kaggle/input/tenkens/Authtoken1.txt' # 非必填 存放ngrokToken的文件的路径\n#Frp 内网穿透\nuse_frpc1 = False\nfrpconfigfile1 = '/kaggle/input/tenkens/7861.ini' # 非必填 frp 配置文件,本地端口 7860\n\n#第二个webui使用的模型\nusedCkpt1 = 'cetusMix_Coda2.safetensors'\n\n#启动参数\nargs1 = [\n '--share',\n '--xformers',\n '--lowram',\n '--no-hashing',\n '--disable-nan-check',\n '--enable-insecure-extension-access',\n '--disable-console-progressbars',\n '--enable-console-prompts',\n '--no-gradio-queue',\n '--no-half-vae',\n '--api',\n f'--lyco-dir 
{install_path}/stable-diffusion-webui/models/lyco',\n]","metadata":{"execution":{"iopub.status.busy":"2023-07-04T14:44:03.144137Z","iopub.execute_input":"2023-07-04T14:44:03.145273Z","iopub.status.idle":"2023-07-04T14:44:03.156403Z","shell.execute_reply.started":"2023-07-04T14:44:03.145226Z","shell.execute_reply":"2023-07-04T14:44:03.155453Z"},"trusted":true},"execution_count":4,"outputs":[]},{"cell_type":"code","source":"#使用的库\nfrom pathlib import Path\nimport subprocess\nimport pandas as pd\nimport shutil\nimport os\nimport time\nimport re\nimport gc\nimport requests\nimport zipfile\nfrom concurrent.futures import ProcessPoolExecutor\nos.environ['install_path'] = install_path","metadata":{"execution":{"iopub.status.busy":"2023-07-04T14:44:03.157804Z","iopub.execute_input":"2023-07-04T14:44:03.158362Z","iopub.status.idle":"2023-07-04T14:44:03.174459Z","shell.execute_reply.started":"2023-07-04T14:44:03.158328Z","shell.execute_reply":"2023-07-04T14:44:03.173344Z"},"trusted":true},"execution_count":5,"outputs":[]},{"cell_type":"code","source":"#功能函数,内存优化\ndef libtcmalloc():\n if os.path.exists('/kaggle/temp'):\n os.chdir('/kaggle')\n os.chdir('temp')\n os.environ[\"LD_PRELOAD\"] = \"libtcmalloc.so\"\n print('内存优化已安装')\n else:\n \n if use_frpc:\n !aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/datasets/ACCA225/Frp/resolve/main/frpc -d /kaggle/working/frpc -o frpc\n os.system('pip install -q pyngrok ')\n os.chdir('/kaggle')\n os.makedirs('temp', exist_ok=True)\n os.chdir('temp')\n os.system('wget -qq http://launchpadlibrarian.net/367274644/libgoogle-perftools-dev_2.5-2.2ubuntu3_amd64.deb')\n os.system('wget -qq https://launchpad.net/ubuntu/+source/google-perftools/2.5-2.2ubuntu3/+build/14795286/+files/google-perftools_2.5-2.2ubuntu3_all.deb')\n os.system('wget -qq https://launchpad.net/ubuntu/+source/google-perftools/2.5-2.2ubuntu3/+build/14795286/+files/libtcmalloc-minimal4_2.5-2.2ubuntu3_amd64.deb')\n os.system('wget -qq https://launchpad.net/ubuntu/+source/google-perftools/2.5-2.2ubuntu3/+build/14795286/+files/libgoogle-perftools4_2.5-2.2ubuntu3_amd64.deb')\n os.system('apt install -qq libunwind8-dev -y')\n !dpkg -i *.deb\n os.environ[\"LD_PRELOAD\"] = \"libtcmalloc.so\"\n !rm *.deb\n print('内存优化已安装')","metadata":{"execution":{"iopub.status.busy":"2023-07-04T14:44:03.248235Z","iopub.execute_input":"2023-07-04T14:44:03.248588Z","iopub.status.idle":"2023-07-04T14:44:03.268389Z","shell.execute_reply.started":"2023-07-04T14:44:03.248561Z","shell.execute_reply":"2023-07-04T14:44:03.267426Z"},"trusted":true},"execution_count":6,"outputs":[]},{"cell_type":"code","source":"#功能函数,环境和sd_webui安装\ndef unzip_file(src: str, dest: str = '/kaggle/outputs'):\n if os.path.exists(src):\n with zipfile.ZipFile(src, 'r') as zip_ref:\n for member in zip_ref.namelist():\n filename = os.path.basename(member)\n if not filename:\n continue\n dest_file = os.path.join(dest, filename)\n if os.path.exists(dest_file):\n os.remove(dest_file)\n zip_ref.extract(member, dest)\n\ndef webui_config_download(yun_files, huggiingface_repo_id):\n %cd $install_path/stable-diffusion-webui/\n for yun_file in yun_files:\n url = f'https://huggingface.co/datasets/{huggiingface_repo_id}/resolve/main/{yun_file}'\n response = requests.head(url)\n if response.status_code == 200:\n result = subprocess.run(['wget', '-O', yun_file, url, '-q'], capture_output=True)\n if result.returncode != 0:\n print(f'Error: Failed to download {yun_file} from {url}')\n else:\n print(f'Error: Invalid URL {url}')\n \ndef 
venv_install():\n %cd /opt/conda/envs\n if os.path.exists('venv'):\n print('环境已安装')\n else:\n %cd /kaggle/working/\n if not os.path.exists('venv.tar.gz'):\n print('Downloading venv')\n !wget https://huggingface.co/datasets/sukaka/venv_ai_drow/resolve/main/sd_webui/sd_webui_torch201_cu118_xf20.tar.gz -O venv.tar.gz\n print('successfully downloaded venv.tar.gz')\n %cd /opt/conda/envs/\n !mkdir venv\n %cd venv\n print('installing venv')\n os.system('apt -y install -qq pigz > /dev/null 2>&1')\n !pigz -dc -p 5 /kaggle/working/venv.tar.gz | tar xf -\n !source /opt/conda/bin/activate venv\n print('环境安装完毕')\n\n\ndef install_webui():\n %cd $install_path\n if reLoad:\n !rm -rf stable-diffusion-webui\n if Path(\"stable-diffusion-webui\").exists():\n if updata_webui:\n %cd $install_path/stable-diffusion-webui/\n !git pull\n else:\n os.system('git clone https://github.com/PNuwa/stable-diffusion-webui.git')\n %cd $install_path/stable-diffusion-webui/\n with open('launch.py', 'r') as f:\n content = f.read()\n with open('launch.py', 'w') as f:\n f.write('import ssl\\n')\n f.write('ssl._create_default_https_context = ssl._create_unverified_context\\n')\n f.write(content)\n if huggingface_use:\n webui_config_download(yun_files, huggiingface_repo_id)\n \n unzip_file('/kaggle/working/图片.zip')\n install_extensions(install_path, extensions)\n download_model()\n link_models()","metadata":{"execution":{"iopub.status.busy":"2023-07-04T14:44:03.270444Z","iopub.execute_input":"2023-07-04T14:44:03.271108Z","iopub.status.idle":"2023-07-04T14:44:03.395686Z","shell.execute_reply.started":"2023-07-04T14:44:03.271075Z","shell.execute_reply":"2023-07-04T14:44:03.394523Z"},"trusted":true},"execution_count":7,"outputs":[]},{"cell_type":"code","source":"from concurrent.futures import ThreadPoolExecutor\n# 安装插件,下载和同步模型\ndef install_extensions(install_path, extensions):\n print('安装插件,此处出现红条是正常的')\n os.chdir(os.path.join(install_path, 'stable-diffusion-webui'))\n os.makedirs('extensions', exist_ok=True)\n os.chdir('extensions')\n\n def clone_repo(ex):\n repo_name = ex.split('/')[-1]\n if not os.path.exists(repo_name):\n os.system('git clone ' + ex)\n\n with ThreadPoolExecutor(max_workers=4) as executor:\n executor.map(clone_repo, extensions)\n \ndef download_link(link, target_folder):\n if link.startswith('https://huggingface.co/'):\n filename = re.search(r'[^/]+$', link).group(0)\n return f'aria2c --console-log-level=error -q -c -x 16 -s 16 -k 1M -d \"{target_folder}\" -o \"{filename}\" \"{link}\"'\n else:\n return f'aria2c --console-log-level=error -q -c -x 16 -s 16 -k 1M --remote-time -d \"{target_folder}\" \"{link}\"'\n\ndef download_links(links, target_folder):\n tasks = []\n for link in links:\n tasks.append(download_link(link, target_folder))\n return tasks\n\ndef download_links_all(tasks):\n with ThreadPoolExecutor(max_workers=5) as executor:\n for task in tasks:\n executor.submit(os.system, task)\n \n# 下载模型文件\ndef download_model():\n os.chdir('/kaggle')\n os.makedirs('models', exist_ok=True)\n os.chdir('models')\n os.makedirs('VAE', exist_ok=True)\n os.makedirs('Stable-diffusion', exist_ok=True)\n os.makedirs('Lora', exist_ok=True)\n os.makedirs('cn-model', exist_ok=True)\n os.makedirs('hypernetworks', exist_ok=True)\n os.makedirs('ESRGAN', exist_ok=True)\n os.makedirs('lyco', exist_ok=True)\n tasks = []\n tasks.extend(download_links(vae_model_urls, 'VAE'))\n tasks.extend(download_links(sd_model_urls, 'Stable-diffusion'))\n tasks.extend(download_links(lora_model_urls, 'Lora'))\n 
tasks.extend(download_links(cn_model_urls, 'cn-model'))\n tasks.extend(download_links(hypernetworks_model_urls, 'hypernetworks'))\n tasks.extend(download_links(ESRGAN_urls, 'ESRGAN'))\n tasks.extend(download_links(lyco_model_urls, 'lyco'))\n tasks.extend(download_links(embeddings_model_urls, f'{install_path}/stable-diffusion-webui/embeddings'))\n tasks.extend(download_links(scripts_urls, f'{install_path}/stable-diffusion-webui/scripts'))\n tasks.extend(download_links(tags_urls, f'{install_path}/stable-diffusion-webui/extensions/a1111-sd-webui-tagcomplete/tags'))\n download_links_all(tasks)\n\ndef create_symlinks(folder_paths, target_dir):\n # Create target directory if it doesn't exist\n if not os.path.exists(target_dir):\n os.makedirs(target_dir)\n # Remove broken symlinks in target directory\n for filename in os.listdir(target_dir):\n target_path = os.path.join(target_dir, filename)\n if os.path.islink(target_path) and not os.path.exists(target_path):\n os.unlink(target_path)\n # Create new symlinks\n for source_path in folder_paths:\n if not os.path.exists(source_path):\n continue\n if os.path.isdir(source_path):\n for filename in os.listdir(source_path):\n source_file_path = os.path.join(source_path, filename)\n target_file_path = os.path.join(target_dir, filename)\n if not os.path.exists(target_file_path):\n os.symlink(source_file_path, target_file_path)\n print(f'Created symlink for {filename} in {target_dir}')\n else:\n filename = os.path.basename(source_path)\n target_file_path = os.path.join(target_dir, filename)\n if not os.path.exists(target_file_path):\n os.symlink(source_path, target_file_path)\n print(f'Created symlink for {filename} in {target_dir}')\n\n# 链接模型文件\ndef link_models():\n cn_model.append('/kaggle/models/cn-model')\n vae_model.append('/kaggle/models/VAE')\n sd_model.append('/kaggle/models/Stable-diffusion')\n lora_model.append('/kaggle/models/Lora')\n hypernetworks_model.append('/kaggle/models/hypernetworks')\n ESRGAN.append('/kaggle/models/ESRGAN')\n lyco_model.append('/kaggle/models/lyco')\n \n create_symlinks(vae_model,f'{install_path}/stable-diffusion-webui/models/VAE')\n create_symlinks(sd_model,f'{install_path}/stable-diffusion-webui/models/Stable-diffusion')\n create_symlinks(lora_model,f'{install_path}/stable-diffusion-webui/models/Lora')\n create_symlinks(cn_model,f'{install_path}/stable-diffusion-webui/extensions/sd-webui-controlnet/models')\n create_symlinks(embeddings_model,f'{install_path}/stable-diffusion-webui/embeddings')\n create_symlinks(hypernetworks_model,f'{install_path}/stable-diffusion-webui/models/hypernetworks')\n create_symlinks(ESRGAN,f'{install_path}/stable-diffusion-webui/models/ESRGAN')\n create_symlinks(tags,f'{install_path}/stable-diffusion-webui/extensions/a1111-sd-webui-tagcomplete/tags')\n create_symlinks(scripts,f'{install_path}/stable-diffusion-webui/scripts')\n create_symlinks(lyco_model,f'{install_path}/stable-diffusion-webui/models/lyco')\n","metadata":{"execution":{"iopub.status.busy":"2023-07-04T14:44:03.398890Z","iopub.execute_input":"2023-07-04T14:44:03.399484Z","iopub.status.idle":"2023-07-04T14:44:03.426140Z","shell.execute_reply.started":"2023-07-04T14:44:03.399450Z","shell.execute_reply":"2023-07-04T14:44:03.425063Z"},"trusted":true},"execution_count":8,"outputs":[]},{"cell_type":"code","source":"# 功能函数:内网穿透\n#ngrok\ndef ngrok_start(ngrokTokenFile: str, port: int, address_name: str, should_run: bool):\n if not should_run:\n print('Skipping ngrok start')\n return\n if Path(ngrokTokenFile).exists():\n with 
open(ngrokTokenFile, encoding=\"utf-8\") as nkfile:\n ngrokToken = nkfile.readline()\n print('use nrgok')\n from pyngrok import conf, ngrok\n conf.get_default().auth_token = ngrokToken\n conf.get_default().monitor_thread = False\n ssh_tunnels = ngrok.get_tunnels(conf.get_default())\n if len(ssh_tunnels) == 0:\n ssh_tunnel = ngrok.connect(port, bind_tls=True)\n print(f'{address_name}:' + ssh_tunnel.public_url)\n else:\n print(f'{address_name}:' + ssh_tunnels[0].public_url)\n else:\n print('skip start ngrok')\n\n#Frp内网穿透 \nimport subprocess\n\ndef install_Frpc(port, frpconfigfile, use_frpc):\n if use_frpc:\n subprocess.run(['chmod', '+x', '/kaggle/working/frpc/frpc'], check=True)\n print(f'正在启动frp ,端口{port}')\n subprocess.Popen(['/kaggle/working/frpc/frpc', '-c', frpconfigfile])\n","metadata":{"execution":{"iopub.status.busy":"2023-07-04T14:44:03.429565Z","iopub.execute_input":"2023-07-04T14:44:03.429930Z","iopub.status.idle":"2023-07-04T14:44:03.444563Z","shell.execute_reply.started":"2023-07-04T14:44:03.429898Z","shell.execute_reply":"2023-07-04T14:44:03.443438Z"},"trusted":true},"execution_count":9,"outputs":[]},{"cell_type":"code","source":"#sd_webui启动\ndef start_webui_1():\n if use2:\n install_Frpc('7861',frpconfigfile1,use_frpc1)\n ngrok_start(ngrokTokenFile1,7861,'第二个webui',ngrok_use1)\n !sleep 90\n %cd $install_path/stable-diffusion-webui\n args1.append(f'--ckpt=models/Stable-diffusion/{usedCkpt1}')\n !/opt/conda/envs/venv/bin/python3 launch.py {' '.join(args1)} --port 7861 --device-id=1\n pass\n\ndef start_webui_0():\n %cd $install_path\n install_Frpc('7860',frpconfigfile,use_frpc)\n ngrok_start(ngrokTokenFile,7860,'第一个webui',ngrok_use)\n %cd $install_path/stable-diffusion-webui\n !mkdir models/lyco\n args.append(f'--ckpt=models/Stable-diffusion/{usedCkpt}')\n !/opt/conda/envs/venv/bin/python3 launch.py {' '.join(args)}\n \ndef start_webui():\n with ProcessPoolExecutor() as executor:\n futures = []\n for func in [start_webui_0, start_webui_1]:\n futures.append(executor.submit(func))\n time.sleep(1)\n for future in futures:\n future.result()","metadata":{"execution":{"iopub.status.busy":"2023-07-04T14:44:03.447867Z","iopub.execute_input":"2023-07-04T14:44:03.448658Z","iopub.status.idle":"2023-07-04T14:44:03.479149Z","shell.execute_reply.started":"2023-07-04T14:44:03.448629Z","shell.execute_reply":"2023-07-04T14:44:03.478257Z"},"trusted":true},"execution_count":10,"outputs":[]},{"cell_type":"code","source":"def main():\n startTicks = time.time()\n os.system('apt-get update')\n os.system('apt -y install -qq aria2')\n with ProcessPoolExecutor() as executor:\n futures = []\n for func in [install_webui, venv_install,libtcmalloc]:\n futures.append(executor.submit(func))\n time.sleep(0.5)\n for future in futures:\n future.result()\n libtcmalloc()\n ticks = time.time()\n print(\"加载耗时:\",(ticks - startTicks),\"s\")\n start_webui()","metadata":{"ExecutionIndicator":{"show":false},"tags":[],"execution":{"iopub.status.busy":"2023-07-04T14:44:03.480566Z","iopub.execute_input":"2023-07-04T14:44:03.481286Z","iopub.status.idle":"2023-07-04T14:44:03.492150Z","shell.execute_reply.started":"2023-07-04T14:44:03.481253Z","shell.execute_reply":"2023-07-04T14:44:03.491240Z"},"trusted":true},"execution_count":11,"outputs":[]},{"cell_type":"code","source":"#功能函数,清理打包上传\nfrom pathlib import Path\nfrom huggingface_hub import HfApi, login\n\ndef zip_venv():\n !pip install conda-pack\n !rm -rf /kaggle/working/venv.tar.gz\n !conda pack -n venv -o /kaggle/working/venv.tar.gz --compress-level 0\n\ndef 
hugface_upload(huggingface_token_file, yun_files, repo_id):\n if Path(huggingface_token_file).exists():\n with open(huggingface_token_file, encoding=\"utf-8\") as nkfile:\n hugToken = nkfile.readline()\n if hugToken != '':\n # 使用您的 Hugging Face 访问令牌登录\n login(token=hugToken)\n # 实例化 HfApi 类\n api = HfApi()\n print(\"HfApi 类已实例化\")\n %cd $install_path/stable-diffusion-webui\n # 使用 upload_file() 函数上传文件\n print(\"开始上传文件...\")\n for yun_file in yun_files:\n if Path(yun_file).exists():\n response = api.upload_file(\n path_or_fileobj=yun_file,\n path_in_repo=yun_file,\n repo_id=repo_id,\n repo_type=\"dataset\"\n )\n print(\"文件上传完成\")\n print(f\"响应: {response}\")\n else:\n print(f'Error: File {yun_file} does not exist')\n else:\n print(f'Error: File {huggingface_token_file} does not exist')\n\ndef clean_folder(folder_path):\n if not os.path.exists(folder_path):\n return\n for filename in os.listdir(folder_path):\n file_path = os.path.join(folder_path, filename)\n if os.path.isfile(file_path):\n os.remove(file_path)\n elif os.path.isdir(file_path):\n shutil.rmtree(file_path)\n\ndef zip_clear_updata():\n if zip_output:\n output_folder = '/kaggle/outputs/'\n if os.path.exists(output_folder):\n shutil.make_archive('/kaggle/working/图片', 'zip', output_folder)\n print('图片已压缩到output')\n else:\n print(f'文件夹 {output_folder} 不存在,跳过压缩操作')\n if clear_output:\n %cd /kaggle/outputs/\n clean_folder('img2img-images')\n clean_folder('txt2img-images')\n clean_folder('img2img-grids')\n clean_folder('txt2img-grids')\n clean_folder('extras-images')\n print('清理完毕')\n if huggingface_use == True:\n hugface_upload(huggingface_token_file,yun_files,huggiingface_repo_id)\n if use_zip_venv == True:\n zip_venv()","metadata":{"execution":{"iopub.status.busy":"2023-07-04T14:44:03.495542Z","iopub.execute_input":"2023-07-04T14:44:03.495840Z","iopub.status.idle":"2023-07-04T14:44:03.813874Z","shell.execute_reply.started":"2023-07-04T14:44:03.495816Z","shell.execute_reply":"2023-07-04T14:44:03.812931Z"},"trusted":true},"execution_count":12,"outputs":[]},{"cell_type":"code","source":"# 启动的输出日志,部署结果在此处看\nmain()","metadata":{"_kg_hide-input":true,"_kg_hide-output":false,"execution":{"iopub.status.busy":"2023-07-04T14:44:03.815216Z","iopub.execute_input":"2023-07-04T14:44:03.815575Z"},"trusted":true},"execution_count":null,"outputs":[{"name":"stdout","text":"Get:1 http://packages.cloud.google.com/apt gcsfuse-focal InRelease [5002 B]\nGet:2 https://packages.cloud.google.com/apt cloud-sdk InRelease [6361 B]\nGet:3 https://packages.cloud.google.com/apt google-fast-socket InRelease [5015 B]\nHit:4 http://archive.ubuntu.com/ubuntu jammy InRelease\nGet:5 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64 InRelease [1581 B]\nGet:6 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]\nGet:7 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]\nGet:8 http://packages.cloud.google.com/apt gcsfuse-focal/main amd64 Packages [2356 B]\nGet:9 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [108 kB]\nGet:10 https://packages.cloud.google.com/apt cloud-sdk/main amd64 Packages [474 kB]\nGet:11 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64 Packages [410 kB]\nGet:12 http://archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 Packages [674 kB]\nGet:13 http://archive.ubuntu.com/ubuntu jammy-updates/multiverse amd64 Packages [49.0 kB]\nGet:14 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [1197 kB]\nGet:15 http://archive.ubuntu.com/ubuntu 
jammy-updates/main amd64 Packages [978 kB]\nGet:16 http://archive.ubuntu.com/ubuntu jammy-backports/universe amd64 Packages [25.5 kB]\nGet:17 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages [684 kB]\nGet:18 http://security.ubuntu.com/ubuntu jammy-security/multiverse amd64 Packages [43.2 kB]\nGet:19 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages [633 kB]\nGet:20 http://security.ubuntu.com/ubuntu jammy-security/universe amd64 Packages [944 kB]\nFetched 6471 kB in 2s (3110 kB/s)\nReading package lists...\n","output_type":"stream"},{"name":"stderr","text":"W: http://packages.cloud.google.com/apt/dists/gcsfuse-focal/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.\nW: https://packages.cloud.google.com/apt/dists/google-fast-socket/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.\n\nWARNING: apt does not have a stable CLI interface. Use with caution in scripts.\n\n","output_type":"stream"},{"name":"stdout","text":"The following additional packages will be installed:\n libaria2-0 libc-ares2 libssh2-1\nThe following NEW packages will be installed:\n aria2 libaria2-0 libc-ares2 libssh2-1\n","output_type":"stream"},{"name":"stderr","text":"dpkg-preconfigure: unable to re-open stdin: No such file or directory\n","output_type":"stream"},{"name":"stdout","text":"0 upgraded, 4 newly installed, 0 to remove and 84 not upgraded.\nNeed to get 1622 kB of archives.\nAfter this operation, 5817 kB of additional disk space will be used.\nSelecting previously unselected package libc-ares2:amd64.\n(Reading database ... 115376 files and directories currently installed.)\nPreparing to unpack .../libc-ares2_1.18.1-1ubuntu0.22.04.2_amd64.deb ...\nUnpacking libc-ares2:amd64 (1.18.1-1ubuntu0.22.04.2) ...\nSelecting previously unselected package libssh2-1:amd64.\nPreparing to unpack .../libssh2-1_1.10.0-3_amd64.deb ...\nUnpacking libssh2-1:amd64 (1.10.0-3) ...\nSelecting previously unselected package libaria2-0:amd64.\nPreparing to unpack .../libaria2-0_1.36.0-1_amd64.deb ...\nUnpacking libaria2-0:amd64 (1.36.0-1) ...\nSelecting previously unselected package aria2.\nPreparing to unpack .../aria2_1.36.0-1_amd64.deb ...\nUnpacking aria2 (1.36.0-1) ...\nSetting up libc-ares2:amd64 (1.18.1-1ubuntu0.22.04.2) ...\nSetting up libssh2-1:amd64 (1.10.0-3) ...\nSetting up libaria2-0:amd64 (1.36.0-1) ...\nSetting up aria2 (1.36.0-1) ...\nProcessing triggers for man-db (2.10.2-1) ...\nProcessing triggers for libc-bin (2.35-0ubuntu3.1) ...\n/kaggle/working\nstable-diffusion-webui安装中\n","output_type":"stream"},{"name":"stderr","text":"Cloning into 'stable-diffusion-webui'...\n","output_type":"stream"},{"name":"stdout","text":"/opt/conda/envs\n/kaggle/working\n环境包下载中\n--2023-07-04 14:44:13-- https://huggingface.co/datasets/sukaka/venv_ai_drow/resolve/main/sd_webui/sd_webui_torch201_cu118_xf20.tar.gz\nResolving huggingface.co (huggingface.co)... 65.8.49.38, 65.8.49.53, 65.8.49.2, ...\nConnecting to huggingface.co (huggingface.co)|65.8.49.38|:443... connected.\nHTTP request sent, awaiting response... 
302 Found\nLocation: https://cdn-lfs.huggingface.co/repos/93/61/936160f9623602ad97a9fe4c639531b59f4fe39854fcc22d75692344fb5dfbe2/609336a63928c38f99b0f842dd5d08e1bf255a8add61cbe82301ca5611831b34?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27sd_webui_torch201_cu118_xf20.tar.gz%3B+filename%3D%22sd_webui_torch201_cu118_xf20.tar.gz%22%3B&response-content-type=application%2Fgzip&Expires=1688738514&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzkzLzYxLzkzNjE2MGY5NjIzNjAyYWQ5N2E5ZmU0YzYzOTUzMWI1OWY0ZmUzOTg1NGZjYzIyZDc1NjkyMzQ0ZmI1ZGZiZTIvNjA5MzM2YTYzOTI4YzM4Zjk5YjBmODQyZGQ1ZDA4ZTFiZjI1NWE4YWRkNjFjYmU4MjMwMWNhNTYxMTgzMWIzND9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPSoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2ODg3Mzg1MTR9fX1dfQ__&Signature=K3qO1uPsep4RsUfYO--Gdi7YhI3WsGMwm33D4ie6d6TqRqT0ZHsb0EiNSew7vHGEG1XDx2HitFKhWWv5RgtoPDaSY%7E5ChePd74kCXbOkIfHRDqvPKJylMHKe7USOgsi7q9UsOGgnQKN1V2jKrv02Cz%7EvpkcgbiPVhn3IJP-kEhurI8XVLyvFEru%7E7ZbWjzF3G79HRu7jV4d-8SS3GdeSR97m39iZL6AlJH3TA-5VK09pgtK7IOyxqGig1c94TwlvC6zqUjA6aqLNV4aXt0iX-1AnOtA7fB7%7EwduM%7E5jv9bs950aoXx-9r3y0VrcEk3S78YxKo-3K1%7E0S3Evs5V59Sw__&Key-Pair-Id=KVTP0A1DKRTAX [following]\n--2023-07-04 14:44:14-- https://cdn-lfs.huggingface.co/repos/93/61/936160f9623602ad97a9fe4c639531b59f4fe39854fcc22d75692344fb5dfbe2/609336a63928c38f99b0f842dd5d08e1bf255a8add61cbe82301ca5611831b34?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27sd_webui_torch201_cu118_xf20.tar.gz%3B+filename%3D%22sd_webui_torch201_cu118_xf20.tar.gz%22%3B&response-content-type=application%2Fgzip&Expires=1688738514&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzkzLzYxLzkzNjE2MGY5NjIzNjAyYWQ5N2E5ZmU0YzYzOTUzMWI1OWY0ZmUzOTg1NGZjYzIyZDc1NjkyMzQ0ZmI1ZGZiZTIvNjA5MzM2YTYzOTI4YzM4Zjk5YjBmODQyZGQ1ZDA4ZTFiZjI1NWE4YWRkNjFjYmU4MjMwMWNhNTYxMTgzMWIzND9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPSoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2ODg3Mzg1MTR9fX1dfQ__&Signature=K3qO1uPsep4RsUfYO--Gdi7YhI3WsGMwm33D4ie6d6TqRqT0ZHsb0EiNSew7vHGEG1XDx2HitFKhWWv5RgtoPDaSY%7E5ChePd74kCXbOkIfHRDqvPKJylMHKe7USOgsi7q9UsOGgnQKN1V2jKrv02Cz%7EvpkcgbiPVhn3IJP-kEhurI8XVLyvFEru%7E7ZbWjzF3G79HRu7jV4d-8SS3GdeSR97m39iZL6AlJH3TA-5VK09pgtK7IOyxqGig1c94TwlvC6zqUjA6aqLNV4aXt0iX-1AnOtA7fB7%7EwduM%7E5jv9bs950aoXx-9r3y0VrcEk3S78YxKo-3K1%7E0S3Evs5V59Sw__&Key-Pair-Id=KVTP0A1DKRTAX\nResolving cdn-lfs.huggingface.co (cdn-lfs.huggingface.co)... 54.230.18.98, 54.230.18.111, 54.230.18.21, ...\nConnecting to cdn-lfs.huggingface.co (cdn-lfs.huggingface.co)|54.230.18.98|:443... connected.\nHTTP request sent, awaiting response... 
200 OK\nLength: 2912904935 (2.7G) [application/gzip]\nSaving to: ‘venv.tar.gz’\n\nvenv.tar.gz 7%[> ] 207.55M 147MB/s /kaggle/working/stable-diffusion-webui\nstable-diffusion-webui已安装\n/kaggle/working/stable-diffusion-webui\nvenv.tar.gz 16%[==> ] 460.58M 191MB/s 安装插件,此处出现红条是正常的\nvenv.tar.gz 17%[==> ] 497.84M 190MB/s ","output_type":"stream"},{"name":"stderr","text":"Cloning into 'loopback_scaler'...\nCloning into 'sd-webui-depth-lib'...\nCloning into 'sd-civitai-browser'...\nCloning into 'sd-webui-controlnet'...\n","output_type":"stream"},{"name":"stdout","text":"venv.tar.gz 22%[===> ] 618.36M 196MB/s eta 11s ","output_type":"stream"},{"name":"stderr","text":"Cloning into 'sd-webui-3d-open-pose-editor'...\nCloning into 'stable-diffusion-webui-localization-zh_CN'...\n","output_type":"stream"},{"name":"stdout","text":"venv.tar.gz 25%[====> ] 716.63M 182MB/s eta 12s ","output_type":"stream"},{"name":"stderr","text":"Cloning into 'stable-diffusion-webui-two-shot'...\nCloning into 'a1111-sd-webui-tagcomplete'...\n","output_type":"stream"},{"name":"stdout","text":"venv.tar.gz 28%[====> ] 793.90M 147MB/s eta 13s ","output_type":"stream"},{"name":"stderr","text":"Cloning into 'multidiffusion-upscaler-for-automatic1111'...\n","output_type":"stream"},{"name":"stdout","text":"venv.tar.gz 29%[====> ] 833.03M 130MB/s eta 13s ","output_type":"stream"},{"name":"stderr","text":"Cloning into 'sd-webui-cutoff'...\n","output_type":"stream"},{"name":"stdout","text":"venv.tar.gz 30%[=====> ] 856.28M 120MB/s eta 13s ","output_type":"stream"},{"name":"stderr","text":"Cloning into 'sd-webui-lora-block-weight'...\n","output_type":"stream"},{"name":"stdout","text":"venv.tar.gz 31%[=====> ] 880.97M 119MB/s eta 13s ","output_type":"stream"},{"name":"stderr","text":"Cloning into 'Stable-Diffusion-Webui-Civitai-Helper'...\nCloning into 'stable-diffusion-webui'...\n","output_type":"stream"},{"name":"stdout","text":"venv.tar.gz 34%[=====> ] 965.91M 116MB/s eta 12s ","output_type":"stream"},{"name":"stderr","text":"Cloning into 'sd-webui-mov2mov'...\n","output_type":"stream"},{"name":"stdout","text":"venv.tar.gz 36%[======> ] 1.00G 117MB/s eta 12s ","output_type":"stream"},{"name":"stderr","text":"Cloning into 'stable-diffusion-webui-wd14-tagger'...\nCloning into 'a1111-sd-webui-lycoris'...\n","output_type":"stream"},{"name":"stdout","text":"venv.tar.gz 40%[=======> ] 1.09G 121MB/s eta 12s ","output_type":"stream"},{"name":"stderr","text":"Cloning into 'sd-webui-deforum'...\nCloning into 'sd-webui-infinite-image-browsing'...\n","output_type":"stream"},{"name":"stdout","text":"venv.tar.gz 100%[===================>] 2.71G 147MB/s in 20s \n\n2023-07-04 14:44:34 (138 MB/s) - ‘venv.tar.gz’ saved [2912904935/2912904935]\n\n环境包已下载\n/opt/conda/envs\n/opt/conda/envs/venv\n环境安装中\nCreated symlink for clearvae.vae.pt in /kaggle/working/stable-diffusion-webui/models/VAE\nCreated symlink for vae-ft-mse-840000-ema-pruned.ckpt in /kaggle/working/stable-diffusion-webui/models/VAE\nCreated symlink for vae-ft-ema-560000-ema-pruned.safetensors in /kaggle/working/stable-diffusion-webui/models/VAE\nCreated symlink for klF8Anime2.vae.pt in /kaggle/working/stable-diffusion-webui/models/VAE\nCreated symlink for ghostmix_v12.safetensors in /kaggle/working/stable-diffusion-webui/models/Stable-diffusion\nCreated symlink for libramix_v10.safetensors in /kaggle/working/stable-diffusion-webui/models/Stable-diffusion\nCreated symlink for cetusMix_Coda2.safetensors in /kaggle/working/stable-diffusion-webui/models/Stable-diffusion\nCreated symlink for 
9527-non-ema-fp16.safetensors in /kaggle/working/stable-diffusion-webui/models/Stable-diffusion\nCreated symlink for CounterfeitV30_v30.safetensors in /kaggle/working/stable-diffusion-webui/models/Stable-diffusion\nCreated symlink for cetusMix_Version35.safetensors in /kaggle/working/stable-diffusion-webui/models/Stable-diffusion\nCreated symlink for shanzhagao128dim-epoch-000010.safetensors in /kaggle/working/stable-diffusion-webui/models/Lora\nCreated symlink for add_detail.safetensors in /kaggle/working/stable-diffusion-webui/models/Lora\nCreated symlink for MoXinV1.safetensors in /kaggle/working/stable-diffusion-webui/models/Lora\nCreated symlink for control_v11p_sd15_mlsd_fp16.safetensors in /kaggle/working/stable-diffusion-webui/extensions/sd-webui-controlnet/models\nCreated symlink for control_v11p_sd15s2_lineart_anime_fp16.safetensors in /kaggle/working/stable-diffusion-webui/extensions/sd-webui-controlnet/models\nCreated symlink for control_v11p_sd15_openpose_fp16.safetensors in /kaggle/working/stable-diffusion-webui/extensions/sd-webui-controlnet/models\nCreated symlink for control_v11p_sd15_normalbae_fp16.safetensors in /kaggle/working/stable-diffusion-webui/extensions/sd-webui-controlnet/models\nCreated symlink for control_v11p_sd15_canny_fp16.safetensors in /kaggle/working/stable-diffusion-webui/extensions/sd-webui-controlnet/models\nCreated symlink for control_v11p_sd15_lineart_fp16.safetensors in /kaggle/working/stable-diffusion-webui/extensions/sd-webui-controlnet/models\nCreated symlink for control_v11e_sd15_ip2p_fp16.safetensors in /kaggle/working/stable-diffusion-webui/extensions/sd-webui-controlnet/models\nCreated symlink for control_v11p_sd15_inpaint_fp16.safetensors in /kaggle/working/stable-diffusion-webui/extensions/sd-webui-controlnet/models\nCreated symlink for control_v11p_sd15_scribble_fp16.safetensors in /kaggle/working/stable-diffusion-webui/extensions/sd-webui-controlnet/models\nCreated symlink for control_v11u_sd15_tile_fp16.safetensors in /kaggle/working/stable-diffusion-webui/extensions/sd-webui-controlnet/models\nCreated symlink for control_v11f1p_sd15_depth_fp16.safetensors in /kaggle/working/stable-diffusion-webui/extensions/sd-webui-controlnet/models\nCreated symlink for control_v11e_sd15_shuffle_fp16.safetensors in /kaggle/working/stable-diffusion-webui/extensions/sd-webui-controlnet/models\nCreated symlink for control_v11p_sd15_softedge_fp16.safetensors in /kaggle/working/stable-diffusion-webui/extensions/sd-webui-controlnet/models\nCreated symlink for 4x-AnimeSharp.pth in /kaggle/working/stable-diffusion-webui/models/ESRGAN\nCreated symlink for 4x_foolhardy_Remacri.pth in /kaggle/working/stable-diffusion-webui/models/ESRGAN\nCreated symlink for 4x-UltraSharp.pth in /kaggle/working/stable-diffusion-webui/models/ESRGAN\nCreated symlink for 5bloconlora5D20pseudoDaylight.lJ6p.safetensors in /kaggle/working/stable-diffusion-webui/models/lyco\nCreated symlink for E69599E5B888E8A1A3E6.rDrD.safetensors in /kaggle/working/stable-diffusion-webui/models/lyco\nCreated symlink for FilmProvia2.safetensors in /kaggle/working/stable-diffusion-webui/models/lyco\n","output_type":"stream"},{"name":"stderr","text":"WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\n\nWARNING: apt does not have a stable CLI interface. 
Use with caution in scripts.\n\n","output_type":"stream"},{"name":"stdout","text":"The following additional packages will be installed:\n liblzma-dev\nSuggested packages:\n liblzma-doc\nThe following NEW packages will be installed:\n liblzma-dev libunwind-dev\n","output_type":"stream"},{"name":"stderr","text":"dpkg-preconfigure: unable to re-open stdin: No such file or directory\n","output_type":"stream"},{"name":"stdout","text":"0 upgraded, 2 newly installed, 0 to remove and 84 not upgraded.\nNeed to get 2040 kB of archives.\nAfter this operation, 6754 kB of additional disk space will be used.\nSelecting previously unselected package liblzma-dev:amd64.\n(Reading database ... 115466 files and directories currently installed.)\nPreparing to unpack .../liblzma-dev_5.2.5-2ubuntu1_amd64.deb ...\nUnpacking liblzma-dev:amd64 (5.2.5-2ubuntu1) ...\nSelecting previously unselected package libunwind-dev:amd64.\nPreparing to unpack .../libunwind-dev_1.3.2-2build2_amd64.deb ...\nUnpacking libunwind-dev:amd64 (1.3.2-2build2) ...\nSetting up liblzma-dev:amd64 (5.2.5-2ubuntu1) ...\nSetting up libunwind-dev:amd64 (1.3.2-2build2) ...\nProcessing triggers for man-db (2.10.2-1) ...\n环境安装完毕\nSelecting previously unselected package google-perftools.\n(Reading database ... 115559 files and directories currently installed.)\nPreparing to unpack google-perftools_2.5-2.2ubuntu3_all.deb ...\nUnpacking google-perftools (2.5-2.2ubuntu3) ...\nSelecting previously unselected package libgoogle-perftools-dev.\nPreparing to unpack libgoogle-perftools-dev_2.5-2.2ubuntu3_amd64.deb ...\nUnpacking libgoogle-perftools-dev (2.5-2.2ubuntu3) ...\nSelecting previously unselected package libgoogle-perftools4.\nPreparing to unpack libgoogle-perftools4_2.5-2.2ubuntu3_amd64.deb ...\nUnpacking libgoogle-perftools4 (2.5-2.2ubuntu3) ...\nSelecting previously unselected package libtcmalloc-minimal4.\nPreparing to unpack libtcmalloc-minimal4_2.5-2.2ubuntu3_amd64.deb ...\nUnpacking libtcmalloc-minimal4 (2.5-2.2ubuntu3) ...\nSetting up libtcmalloc-minimal4 (2.5-2.2ubuntu3) ...\nSetting up libgoogle-perftools4 (2.5-2.2ubuntu3) ...\nSetting up google-perftools (2.5-2.2ubuntu3) ...\nSetting up libgoogle-perftools-dev (2.5-2.2ubuntu3) ...\nProcessing triggers for man-db (2.10.2-1) ...\nProcessing triggers for libc-bin (2.35-0ubuntu3.1) ...\n内存优化已安装\n内存优化已安装\n加载耗时: 245.30565690994263 s\n/kaggle/working\nskip start ngrok\n/kaggle/working/stable-diffusion-webui\nmkdir: cannot create directory ‘models/lyco’: File exists\nfatal: No names found, cannot describe anything.\nPython 3.10.11 | packaged by conda-forge | (main, May 10 2023, 18:58:44) [GCC 11.3.0]\nVersion: ## 1.4.0\nCommit hash: 60a69ce2e17578f816e6ec557b94e8b64348ef2e\nInstalling clip\nInstalling open_clip\nCloning Stable Diffusion into /kaggle/working/stable-diffusion-webui/repositories/stable-diffusion-stability-ai...\nCloning K-diffusion into /kaggle/working/stable-diffusion-webui/repositories/k-diffusion...\nCloning CodeFormer into /kaggle/working/stable-diffusion-webui/repositories/CodeFormer...\nCloning BLIP into /kaggle/working/stable-diffusion-webui/repositories/BLIP...\nInstalling requirements for CodeFormer\nInstalling requirements\nInstalling sd-webui-controlnet requirement: mediapipe\nInstalling sd-webui-controlnet requirement: svglib\nInstalling sd-webui-controlnet requirement: fvcore\n\nInstalling requirements for CivitAI Browser\n\nDownloading pose_landmark_full.tflite...\nDownloading pose_web.binarypb...\nDownloading pose_solution_packed_assets.data...\nDownloading 
pose_solution_simd_wasm_bin.wasm...\nDownloading pose_solution_packed_assets_loader.js...\nDownloading pose_solution_simd_wasm_bin.js...\n\nInstalling Deforum requirement: numexpr\nInstalling Deforum requirement: av\nInstalling Deforum requirement: pims\nInstalling Deforum requirement: imageio_ffmpeg\nInstalling Deforum requirement: rich\n\nInstalling requirements for Mov2mov\nInstalling requirements for ffmpeg\n\nInstalling sd-webui-infinite-image-browsing requirement: python-dotenv\n\nLaunching Web UI with arguments: --share --xformers --lowram --no-hashing --disable-nan-check --enable-insecure-extension-access --disable-console-progressbars --enable-console-prompts --no-gradio-queue --no-half-vae --api --lyco-dir /kaggle/working/stable-diffusion-webui/models/lyco --ckpt=models/Stable-diffusion/cetusMix_Coda2.safetensors\nDownloading (…)_schema%400.0.5.json: 100%|█| 13.4k/13.4k [00:00<00:00, 53.3MB/s]\nCivitai Helper: Get Custom Model Folder\nCivitai Helper: Load setting from: /kaggle/working/stable-diffusion-webui/extensions/Stable-Diffusion-Webui-Civitai-Helper/setting.json\nCivitai Helper: No setting file, use default\n2023-07-04 14:49:56,449 - ControlNet - \u001b[0;32mINFO\u001b[0m - ControlNet v1.1.229\nControlNet preprocessor location: /kaggle/working/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads\n2023-07-04 14:49:56,650 - ControlNet - \u001b[0;32mINFO\u001b[0m - ControlNet v1.1.229\nLoading weights [None] from /kaggle/working/stable-diffusion-webui/models/Stable-diffusion/cetusMix_Coda2.safetensors\n\u001b[0;32m*Deforum ControlNet support: enabled*\u001b[0m\nCreating model from config: /kaggle/working/stable-diffusion-webui/configs/v1-inference.yaml\nLatentDiffusion: Running in eps-prediction mode\nDiffusionWrapper has 859.52 M params.\nDownloading (…)olve/main/vocab.json: 100%|███| 961k/961k [00:00<00:00, 5.22MB/s]\nDownloading (…)olve/main/merges.txt: 100%|███| 525k/525k [00:00<00:00, 4.66MB/s]\nDownloading (…)cial_tokens_map.json: 100%|█████| 389/389 [00:00<00:00, 1.91MB/s]\nDownloading (…)okenizer_config.json: 100%|█████| 905/905 [00:00<00:00, 5.96MB/s]\nDownloading (…)lve/main/config.json: 100%|█| 4.52k/4.52k [00:00<00:00, 19.9MB/s]\nLoading VAE weights specified in settings: /kaggle/working/stable-diffusion-webui/models/VAE/clearvae.vae.pt\nApplying attention optimization: xformers... done.\nTextual inversion embeddings loaded(5): bad-artist-anime, bad-hands-5, bad-image-v2-39000, bad_prompt_version2, EasyNegative\nModel loaded in 32.2s (load weights from disk: 20.5s, create model: 2.4s, apply weights to model: 4.8s, apply half(): 1.7s, load VAE: 1.1s, move model to device: 1.3s, calculate empty prompt: 0.2s).\nRunning on local URL: http://127.0.0.1:7860\npreload_extensions_git_metadata for 25 extensions took 1.11s\nRunning on public URL: https://2b8f0a2f3310456e85.gradio.live\n\nThis share link expires in 72 hours. 
For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces\nStartup time: 48.5s (import torch: 2.5s, import gradio: 2.2s, import ldm: 2.3s, other imports: 1.8s, opts onchange: 0.3s, setup codeformer: 0.1s, load scripts: 2.8s, create ui: 32.8s, gradio launch: 3.4s, add APIs: 0.1s).\n\ntxt2img: masterpiece, (best quality), 1girl with hair ornament and yellow hair, sunlight, forest, nature \n100%|███████████████████████████████████████████| 20/20 [00:07<00:00, 2.53it/s]\n\ntxt2img: masterpiece, (best quality), 1girl with hair ornament and yellow hair, sunlight, forest, nature \n100%|███████████████████████████████████████████| 20/20 [00:05<00:00, 3.67it/s]\n","output_type":"stream"}]},{"cell_type":"code","source":"#跑图结束,手动执行,清理图片并打包到output方便下载,同时同步配置文件\nzip_clear_updata()","metadata":{"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"# 第一次部署好之后可以直接手动执行下面这个代码块极速启动","metadata":{}},{"cell_type":"code","source":"# 要第一次部署好之后并且右边的PERSISTENCE设置为Files Only才能单击运行这个代码块,只需要30秒就能加载完毕\nstart_webui()","metadata":{"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"# 使用帮助\n## kaggle账号\n- 注册账号需要手机号,国内手机号也行,如果点击注册后没反应,估计是需要梯子,用于人机验证\n- 注册后点此笔记的 **Copy & Edit** 按钮就进到编辑界面\n\n## **准备工作**\n1. 右侧面板 **Settings/ACCELERATOR** 需要选择GPU **P100 或 T4x2** 这两据说有差异,但我用起来差不多\n2. 右侧面板 **Settings/LANGUAGE** 需要选择Python\n2. 右侧面板 **Settings/PERSISTENCE** 建议选择 Files only **作用是保存Outpot目录内的文件**\n3. 右侧面板 **Settings/ENVIRONMENT** 建议不改这个配置,使用当前默认值就行\n4. 右侧面板 **Settings/INTERNET** 需要打开 用于联网,没网跑不起来的啊\n\n## **启动**\n#### 启动方式一 **直接点击页面上边的 RunAll**\n- 在没有关闭电源的情况下,后几次点击RunAll的输出在页面上端 (其实没有必要了,之前不知道代码块可以收起,很烦滚动到页面底端才能看见输出)\n- 手机端可能会出现页面上边的工具栏不显示的情况,左侧菜单按钮里也有相关的操作\n- 长时间不操作页面会导致脚本停止 (应该是40分钟吧)\n\n#### 启动方式二 **使用页面上边的 Save Version 后台运行**\n- 后台运行不用担心长时间不操作脚本停止\n- Version Type 选择 **Save & Run All**\n- 在Save Version弹窗里需要选择使用**GPU**环境 (Advanced Settings 里最后一个选项)\n- 后台运行的输出的图片可以在运行结束后下载(但是保存时间有限制,我就经常下不到,不够问题不大,喜欢的图在生成后就下载了)\n- 如果你需要下载运行后的图片,请不要把安装目录修改到 /kaggle/working 这个目录下,因为没有写打包功能,下载只能下载整个输出目录,也就是 /kaggle/working 目录\n\n## 访问\n- 如果你使用了ngrok或者frpc,可以访问你这两对应的地址\n- 如果你不知道你的ngrok或者frpc的地址可以在控制台(页面最下方Console)的输出里面查看\n- 使用Run All方式启动,控制台在启动完成后会输出访问网址,网址内容包含**gradio.live**,可以在页面中搜索快速找到\n- 如果使用Save Verson的方式启动,点击左下角的**View Active Events**点击刚刚启动的脚步,在**Log**里找访问网址\n- 一般情况下第一次启动此脚本需要等待kaggle下载模型文件,进度在页面上方\n- 第二次及以后(不增加新的文件)需要3到5分钟\n\n## **增加模型**\n1. 先创建数据集,也就是dataset\n2. 创建时需要添加文件,选择自己的模型文件就行\n3. 同类型文件放相同的数据集里面,一个数据集也不要太大\n4. 可以在dataset搜索其他人上传的模型\n5. 通过右侧的 **Add Data** 按钮选择已经上传的模型文件或者别人上传的模型文件\n - input 下面的列表就是模型文件,可以点击名称后面的复制按钮复制路径\n6. 
将模型路径放在配置里的对应配置里即可,支持文件夹和文件路径,参考 **modelDirs**\n - 如果目录里还有子目录也是需要加载的,可以用*表示子目录 例子:比如Loras目录下还有角色、画风、涩涩的文件夹,那路径里写成 '/kaggle/input/Loras/*'就可以加载子目录里面的文件了\n - 模型加载使用的文件链接方式,如果你融模型的时候新模型名字和原有模型名字一样,会出现不能修改只读文件的错误\n - 同理,直接对模型做编辑的工具可能也会出现相同的错误\n \n \n- **为了提高启动速度,导致切换模型过程较慢,点击切换模型后进度条大概率会一直存在,但模型在1分半左右基本能加载完。** \n- **受到kaggle内存大小的影响,切换多个模型后大概率爆内存导致停止运行**\n \n**下边的配置项都写了对应配置的作用和使用说明,不理解的话也不用改,用默认的就好**\n\n## 下载文件\n#### 方式一\n- 在浏览器直接下 比如你需要下载的文件路径在 /kaggle/stable-diffusion-webui/models/Lora/dow_a.safetensors\n - 比如你需要下载的文件路径在 /kaggle/stable-diffusion-webui/models/Lora/dow_a.safetensors\n - 你的访问地址是 https://123123123.gradio.live\n - 则可以在浏览器输入 https://123123123.gradio.live/file=/kaggle/stable-diffusion-webui/models/Lora/dow_a.safetensors 下载你的文件\n \n#### 方式二\n- 复制到Output目录下载 仅支持使用Run All方式运行的\n - 比如你需要下载的文件路径在 /kaggle/stable-diffusion-webui/models/Lora/dow_a.safetensors\n - 先停止笔记本(不是关机,是停止)\n - 然后新建一个代码块,在里面输入 !cp -f /kaggle/stable-diffusion-webui/models/Lora/dow_a.safetensors /kaggle/working/\n - 就可以在右侧列表的Output目录看见复制出来的文件,点击下载即可\n \n#### 方式三\n- 开启链接输出目录的配置 (配置在第二个代码块,通过搜索**配置文件链接**快速查找)\n - 此方法会把已知的三个训练输出目录链接到Output目录下,直接去下载即可(两种启动方式都可以用)\n - 如果有新的目录需要链接,可以参考着自己写或者联系我\n \n#### 方式四\n- 将安装目录改到输出目录(配置在第二个代码块,通过搜索**安装目录**快速查找)\n - 此方式会把所有文件都放在安装目录,找到并下载即可\n - 如果使用这个方式,右侧的设置里**PERSISTENCE**这个设置项建议选No pensistence。如果选其他项,可能会出现关机特别慢的情况,因为需要上传输出目录的文件。\n\n## **一些可能没用的说明**\n- 配置说明 **True或者False**表示布尔值 **True**表示“**是**” **False**表示“**否**” 只有这两个值\n- 配置说明 **[]** 表示数组,里面可以存放内容,每个内容需要用**英语(半角)逗号**隔开\n- 配置说明 **''或者\"\"** 英语(半角)的双引号或者单引号包裹的内容是**字符串**,比如放在数组里面的路径就需要是一个字符串\n- 配置说明 **#** **#** 后面的内容是**注释**,是帮助性内容,对整个代码的执行不会有影响\n\n## **一些常见的错误**\n 1.Run All后白屏:可能是开了网页自动翻译导致,请重试\n 2.跑到一半出错了:更新到最新版本重新导入,下载地址 https://huggingface.co/datasets/ACCA225/Kaggle-Stable-Diffusion , 如果还是出问题了,请联系管理员\n # 群号码:632428790","metadata":{}},{"cell_type":"markdown","source":"-----------------","metadata":{}},{"cell_type":"markdown","source":"---------------","metadata":{}},{"cell_type":"markdown","source":"## 以下代码以后可能有用,全部由Yiyiooo编写","metadata":{}},{"cell_type":"code","source":"# def initKaggleConfig():\n# if Path('~/.kaggle/kaggle.json').exists():\n# return True\n# if Path(kaggleApiTokenFile).exists():\n# !mkdir -p ~/.kaggle/\n# os.system('cp '+kaggleApiTokenFile+' ~/.kaggle/kaggle.json')\n# !chmod 600 ~/.kaggle/kaggle.json\n# return True\n# print('缺少kaggle的apiToken文件,访问:https://www.kaggle.com/你的kaggle用户名/account 获取')\n# return False\n\n# def getUserName():\n# if not initKaggleConfig(): return\n# import kaggle\n# return kaggle.KaggleApi().read_config_file()['username']\n\n# def createOrUpdateDataSet(path:str,datasetName:str):\n# if not initKaggleConfig(): return\n# print('创建或更新数据集 '+datasetName)\n# import kaggle\n# os.system('mkdir -p $install_path/kaggle_cache')\n# os.system('rm -rf $install_path/kaggle_cache/*')\n# datasetDirPath = install_path+'/kaggle_cache/'+datasetName\n# os.system('mkdir -p '+datasetDirPath)\n# os.system('cp -f '+path+' '+datasetDirPath+'/')\n# username = getUserName()\n# print(\"kaggle username:\"+username)\n# datasetPath = username+'/'+datasetName\n# datasetList = kaggle.api.dataset_list(mine=True,search=datasetPath)\n# print(datasetList)\n# if len(datasetList) == 0 or datasetPath not in [str(d) for d in datasetList]: # 创建 create\n# os.system('kaggle datasets init -p' + datasetDirPath)\n# metadataFile = datasetDirPath+'/dataset-metadata.json'\n# os.system('sed -i s/INSERT_TITLE_HERE/'+ datasetName + '/g ' + metadataFile)\n# os.system('sed -i s/INSERT_SLUG_HERE/'+ datasetName + '/g ' + metadataFile)\n# os.system('cat 
'+metadataFile)\n# os.system('kaggle datasets create -p '+datasetDirPath)\n# print('create database done')\n# else:\n# kaggle.api.dataset_metadata(datasetPath,datasetDirPath)\n# kaggle.api.dataset_create_version(datasetDirPath, 'auto update',dir_mode='zip')\n# print('upload database done')\n\n# def downloadDatasetFiles(datasetName:str,outputPath:str):\n# if not initKaggleConfig(): return\n# print('下载数据集文件 '+datasetName)\n# import kaggle\n# username = getUserName()\n# datasetPath = username+'/'+datasetName\n# datasetList = kaggle.api.dataset_list(mine=True,search=datasetPath)\n# if datasetPath not in [str(d) for d in datasetList]:\n# return False\n# os.system('mkdir -p '+outputPath)\n# kaggle.api.dataset_download_files(datasetPath,path=outputPath,unzip=True)\n# return True\n","metadata":{"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"# ~~绕过 os.systen 的限制执行命令~~","metadata":{}},{"cell_type":"code","source":"\n# def run(shell:str,shellName=''):\n# if shellName == '': shellName = str(time.time())\n# !mkdir -p $install_path/run_cache\n# with open(install_path+'/run_cache/run_cache.'+shellName+'.sh','w') as sh:\n# sh.write(shell)\n# !bash {install_path}/run_cache/run_cache.{shellName}.sh\n\n# 连接多个路径字符串 让路径在shell命令中能正常的执行\n# def pathJoin(*paths:str):\n# pathStr = ''\n# for p in paths:\n# pathStr += '\"'+p+'\"'\n# pathStr = '\"*\"'.join(pathStr.split('*'))\n# pathStr = '\"$\"'.join(pathStr.split('$'))\n# pathStr = '\"(\"'.join(pathStr.split('('))\n# pathStr = '\")\"'.join(pathStr.split(')'))\n# pathStr = '\"{\"'.join(pathStr.split('{'))\n# pathStr = '\"}\"'.join(pathStr.split('}'))\n# pathStr = re.sub(r'\"\"','',pathStr)\n# pathStr = re.sub(r'\\*{2,}','\"',pathStr)\n# pathStr = re.sub(r'/{2,}','/',pathStr)\n# pathStr = re.sub(r'/\\./','/',pathStr)\n# return pathStr\n\n# 判断路径是不是一个文件或者可能指向一些文件\n# def pathIsFile(path):\n# if Path(path).is_file():\n# return True\n# if re.search(r'\\.(ckpt|safetensors|png|jpg|txt|pt|pth|json|yaml|\\*)$',path):\n# return True\n# return False\n\n# def echoToFile(content:str,path:str):\n# with open(path,'w') as sh:\n# sh.write(content)\n\n# ngrok\n# def startNgrok(ngrokToken:str,ngrokLocalPort:int):\n# from pyngrok import conf, ngrok\n# try:\n# conf.get_default().auth_token = ngrokToken\n# conf.get_default().monitor_thread = False\n# ssh_tunnels = ngrok.get_tunnels(conf.get_default())\n# if len(ssh_tunnels) == 0:\n# ssh_tunnel = ngrok.connect(ngrokLocalPort)\n# print('address:'+ssh_tunnel.public_url)\n# else:\n# print('address:'+ssh_tunnels[0].public_url)\n# except:\n# print('启动ngrok出错')\n \n# def startFrpc(name,configFile):\n# run(f'''\n# cd $install_path/frpc/\n# $install_path/frpc/frpc {configFile}\n# ''',name)\n \n# def installProxyExe():\n# if useFrpc:\n# print('安装frpc')\n# !mkdir -p $install_path/frpc\n# if Path(frpcExePath).exists():\n# os.system(f'cp -f -n {frpcExePath} $install_path/frpc/frpc')\n# else:\n# !wget \"https://huggingface.co/datasets/ACCA225/Frp/resolve/main/frpc\" -O $install_path/frpc/frpc\n \n# for ssl in frpcSSLFFlies:\n# if Path(ssl).exists():\n# os.system('cp -f -n '+pathJoin(ssl,'/*')+' $install_path/frpc/')\n# !chmod +x $install_path/frpc/frpc\n# !$install_path/frpc/frpc -v\n# if useNgrok:\n# %pip install pyngrok\n \n# def startProxy():\n# if useNgrok:\n# startNgrok(ngrokToken,webuiPort)\n# if useFrpc:\n# startFrpc('frpc_proxy',frpcStartArg)\n\n \n# def zipPath(path:str,zipName:str,format='tar'):\n# if path.startswith('$install_path'):\n# path = path.replace('$install_path',install_path)\n# if 
path.startswith('$output_path'):\n# path = path.replace('$install_path',output_path)\n# if not path.startswith('/'):\n# path = pathJoin(install_path,'/stable-diffusion-webui','/',path)\n# if Path(path).exists():\n# if 'tar' == format:\n# os.system('tar -cf $output_path/'+ zipName +'.tar -C '+ path +' . ')\n# elif 'gz' == format:\n# os.system('tar -czf $output_path/'+ zipName +'.tar.gz -C '+ path +' . ')\n# return\n# print('指定的目录不存在:'+path)\n","metadata":{"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"~~下载文件的判断逻辑~~","metadata":{}},{"cell_type":"code","source":"# import os\n# import re\n# # 加入文件到下载列表\n# def putDownloadFile(url:str,distDir:str,file_name:str=None):\n# if re.match(r'^[^:]+:(https?|ftps?)://', url, flags=0):\n# file_name = re.findall(r'^[^:]+:',url)[0][:-1]\n# url = url[len(file_name)+1:]\n# if not re.match(r'^(https?|ftps?)://',url):\n# return\n# file_name = re.sub(r'\\s+','_',file_name or '')\n# dir = str(hash(url)).replace('-','')\n# down_dir = f'{install_path}/down_cache/{dir}'\n# !mkdir -p {down_dir}\n# return [url,file_name,distDir,down_dir]\n\n# def get_file_size_in_gb(file_path):\n# size_in_bytes = Path(file_path).stat().st_size\n# size_in_gb = size_in_bytes / (1024 ** 3)\n# return '%.2f' % size_in_gb\n \n# # 下载文件\n# def startDownloadFiles(download_list):\n# print('下载列表:\\n','\\n'.join([f'{item[0]} -> {item[2]}/{item[1]}' for item in download_list]))\n# dist_list = []\n# for dow_f in download_list:\n# !mkdir -p {dow_f[3]}\n# print('下载 名称:',dow_f[1],'url:',dow_f[0])\n# output_file = f' -O {dow_f[3]}/{dow_f[1]}'\n# if len(os.listdir(dow_f[3])) > 0:\n# continue\n# os.system(f\"wget {dow_f[0]} --tries=3 --timeout=60 -P {dow_f[3]} {output_file if len(dow_f[1]) > 0 else ''} -o {install_path}/down_cache/log.log\")\n# if len(os.listdir(dow_f[3])) == 0:\n# print('下载出错:',dow_f[0])\n# continue\n# file_name = os.listdir(dow_f[3])[0]\n# !mkdir -p {dow_f[2]}\n# down_file_path = f'{dow_f[3]}/{file_name}'\n# if Path(down_file_path).is_symlink():\n# down_file_path = os.readlink(down_file_path)\n# print('文件真实地址:'+down_file_path)\n# if not Path(down_file_path).exists():\n# print('文件异常')\n# continue\n# print(f'文件大小:{get_file_size_in_gb(down_file_path)}G')\n# dist_path = f'{dow_f[2]}/{file_name}'\n# dist_path = dist_path.replace('%20',' ').strip().replace(' ','_')\n# print(f'移动文件 {down_file_path} -> {dist_path}')\n# os.system(f'ln -f \"{down_file_path}\" \"{dist_path}\"')\n# if dow_f[2] not in dist_list:\n# dist_list.append(dow_f[2])\n# for dist_dir in dist_list:\n# print(dist_dir,os.listdir(dist_dir))\n","metadata":{"trusted":true},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":"###### ?","metadata":{}}]}