import os

import gradio as gr
import openai
import requests


def greet(modelpath1="", modelpath2="", modelpath3="", modelpath4="", modelpath5=""):
    # Collect the non-empty model names.
    names = [name for name in [modelpath1, modelpath2, modelpath3, modelpath4, modelpath5] if name != ""]
    if not names:
        return "Please enter at least one model name."
    if len(names) < 2:
        return "Please enter at least 2 model names."

    # Fetch each model's config.json from the Hugging Face Hub.
    urls = ["https://huggingface.co/" + name + "/raw/main/config.json" for name in names]
    configs = []
    index_to_ignore = []
    for i, url in enumerate(urls):
        get_result = requests.get(url, timeout=10)
        if get_result.status_code == 200:
            configs.append(get_result.json())
        else:
            # Keep list indices aligned with names; skip this model later.
            configs.append("")
            index_to_ignore.append(i)
    if len(index_to_ignore) == len(urls):
        return "Could not find any models. Please check the model names."

    # Build the prompt: the base instructions plus each model's name and config.
    gpt_input = os.environ["prompts"] + "\n\n"
    for i in range(len(names)):
        if i not in index_to_ignore:
            gpt_input += "modelname: " + names[i] + "\n config file:" + str(configs[i]) + "\n\n"

    openai.api_key = os.environ["APIKEY"]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": gpt_input},
        ],
    )
    response_text = response["choices"][0]["message"]["content"]
    # Keep only the Markdown table: from the first "|" to the last "|" (inclusive).
    response_text = response_text[response_text.find("|"):response_text.rfind("|") + 1]
    return response_text


text1 = gr.Textbox(placeholder="owner/modelname1", label="Model 1 (e.g. rinna/japanese-gpt-neox-3.6b)", lines=1)
text2 = gr.Textbox(placeholder="owner/modelname2", label="Model 2", lines=1)
text3 = gr.Textbox(placeholder="owner/modelname3", label="Model 3 (optional)", lines=1)

if __name__ == "__main__":
    interFace = gr.Interface(
        fn=greet,
        inputs=[text1, text2, text3],
        outputs=gr.Markdown(),
        title="LLM Comparer⚖️",
        description="Please copy and paste the owner/model name from the Hugging Face model hub.",
        theme="finlaymacklon/smooth_slate",
        allow_flagging="never",
    )
    interFace.launch(share=False)