yhyu13 · Add chatgpt alpaca eval · commit 43b1a6a
INFO:root:Evaluating the phi-2-alpaca-gpt4-dpo outputs.
INFO:root:Creating the annotator from `chatgpt_fn`.
INFO:root:Saving annotations to `/home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json`.
INFO:root:Loading all annotations from /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
https://api.openai-proxy.org/v1
Annotation chunk: 0%| | 0/7 [00:00<?, ?it/s]
INFO:root:Annotating 0 examples with chatgpt_fn
INFO:root:Saving all annotations to /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
INFO:root:Loading all annotations from /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
Annotation chunk: 14%|β–ˆβ– | 1/7 [00:00<00:01, 3.87it/s]
INFO:root:Annotating 0 examples with chatgpt_fn
INFO:root:Saving all annotations to /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
INFO:root:Loading all annotations from /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
Annotation chunk: 29%|β–ˆβ–ˆβ–Š | 2/7 [00:00<00:01, 4.15it/s]
INFO:root:Annotating 0 examples with chatgpt_fn
INFO:root:Saving all annotations to /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
INFO:root:Loading all annotations from /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
Annotation chunk: 43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 3/7 [00:00<00:00, 4.19it/s]
INFO:root:Annotating 64 examples with chatgpt_fn
INFO:root:Using `openai_completions` on 64 prompts using gpt-3.5-turbo-16k-0613.
INFO:root:Kwargs to completion: {'n': 1, 'model': 'gpt-3.5-turbo-16k-0613', 'is_chat': True, 'temperature': 0, 'function_call': {'name': 'print_best_model'}, 'functions': [{'name': 'print_best_model', 'description': 'Print the best model given the preferred output.', 'parameters': {'type': 'object', 'properties': {'best_output': {'type': 'string', 'description': "Name of the best output, should be 'Output (a)' or 'Output (b)'"}}}, 'required': ['best_output']}]}. num_procs=5
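The kwargs above show that the `chatgpt_fn` annotator forces the judge model to answer through a `print_best_model` function call, so each preference comes back as structured JSON (`{"best_output": "Output (a)"}` or `"Output (b)"`) rather than free text. As a minimal illustrative sketch of handling that response shape (the `parse_preference` helper below is hypothetical, not part of alpaca_eval):

```python
import json

# Function schema matching the `print_best_model` tool shown in the kwargs above.
PRINT_BEST_MODEL = {
    "name": "print_best_model",
    "description": "Print the best model given the preferred output.",
    "parameters": {
        "type": "object",
        "properties": {
            "best_output": {
                "type": "string",
                "description": "Name of the best output, should be "
                               "'Output (a)' or 'Output (b)'",
            }
        },
        "required": ["best_output"],
    },
}


def parse_preference(function_call_arguments: str) -> str:
    """Extract the preferred output label from the judge's function-call
    arguments, which arrive as a JSON string such as
    '{"best_output": "Output (a)"}'. Rejects any other label."""
    args = json.loads(function_call_arguments)
    label = args["best_output"]
    if label not in ("Output (a)", "Output (b)"):
        raise ValueError(f"unexpected label: {label!r}")
    return label
```

With `temperature=0` and `function_call={'name': 'print_best_model'}`, the API is constrained to call this function on every request, which is why each prompt in the log yields exactly one structured judgment.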
prompt_batches: 0%| | 0/64 [00:00<?, ?it/s]
INFO:httpx:HTTP Request: POST https://api.openai-proxy.org/v1/chat/completions "HTTP/1.1 200 OK"
[repeated "HTTP/1.1 200 OK" request lines and interleaved progress updates elided]
prompt_batches: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 64/64 [00:17<00:00, 3.70it/s]
INFO:root:Completed 64 examples in 17.3 seconds.
INFO:root:Saving all annotations to /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
INFO:root:Loading all annotations from /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
Annotation chunk: 57%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 4/7 [00:18<00:21, 7.11s/it]
INFO:root:Annotating 128 examples with chatgpt_fn
INFO:root:Using `openai_completions` on 128 prompts using gpt-3.5-turbo-16k-0613.
INFO:root:Kwargs to completion: {'n': 1, 'model': 'gpt-3.5-turbo-16k-0613', 'is_chat': True, 'temperature': 0, 'function_call': {'name': 'print_best_model'}, 'functions': [{'name': 'print_best_model', 'description': 'Print the best model given the preferred output.', 'parameters': {'type': 'object', 'properties': {'best_output': {'type': 'string', 'description': "Name of the best output, should be 'Output (a)' or 'Output (b)'"}}}, 'required': ['best_output']}]}. num_procs=5
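`num_procs=5` means the annotator fans the 128 prompts out across 5 worker processes, and the `prompt_batches` bar then advances once per completed prompt. A generic sketch of that kind of fan-out split (an illustration of the idea, not alpaca_eval's actual implementation):

```python
def split_into_procs(prompts, num_procs=5):
    """Split prompts into num_procs roughly equal contiguous chunks,
    one chunk per worker process."""
    base, remainder = divmod(len(prompts), num_procs)
    chunks, start = [], 0
    for i in range(num_procs):
        # The first `remainder` workers take one extra prompt each.
        size = base + (1 if i < remainder else 0)
        chunks.append(prompts[start:start + size])
        start += size
    return chunks
```

For the 128-prompt chunk above this would give workers 26, 26, 26, 25, and 25 prompts respectively, which is consistent with the roughly 3–5 it/s aggregate rate the progress bar reports while five requests are in flight.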
prompt_batches: 0%| | 0/128 [00:00<?, ?it/s]
INFO:httpx:HTTP Request: POST https://api.openai-proxy.org/v1/chat/completions "HTTP/1.1 200 OK"
[repeated "HTTP/1.1 200 OK" request lines and interleaved progress updates elided]
prompt_batches: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 128/128 [00:34<00:00, 3.71it/s]
INFO:root:Completed 128 examples in 34.5 seconds.
INFO:root:Saving all annotations to /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
INFO:root:Loading all annotations from /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
Annotation chunk: 71%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 5/7 [00:53<00:34, 17.14s/it]
INFO:root:Annotating 127 examples with chatgpt_fn
INFO:root:Using `openai_completions` on 127 prompts using gpt-3.5-turbo-16k-0613.
INFO:root:Kwargs to completion: {'n': 1, 'model': 'gpt-3.5-turbo-16k-0613', 'is_chat': True, 'temperature': 0, 'function_call': {'name': 'print_best_model'}, 'functions': [{'name': 'print_best_model', 'description': 'Print the best model given the preferred output.', 'parameters': {'type': 'object', 'properties': {'best_output': {'type': 'string', 'description': "Name of the best output, should be 'Output (a)' or 'Output (b)'"}}}, 'required': ['best_output']}]}. num_procs=5
prompt_batches:   0%|          | 0/127 [00:00<?, ?it/s]
INFO:httpx:HTTP Request: POST https://api.openai-proxy.org/v1/chat/completions "HTTP/1.1 200 OK"
prompt_batches: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 127/127 [00:33<00:00, 3.75it/s]
INFO:root:Completed 127 examples in 33.9 seconds.
INFO:root:Saving all annotations to /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
INFO:root:Loading all annotations from /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
Annotation chunk:  86%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 6/7 [01:27<00:22, 22.97s/it]
INFO:root:Annotating 37 examples with chatgpt_fn
INFO:root:Using `openai_completions` on 37 prompts using gpt-3.5-turbo-16k-0613.
INFO:root:Kwargs to completion: {'n': 1, 'model': 'gpt-3.5-turbo-16k-0613', 'is_chat': True, 'temperature': 0, 'function_call': {'name': 'print_best_model'}, 'functions': [{'name': 'print_best_model', 'description': 'Print the best model given the preferred output.', 'parameters': {'type': 'object', 'properties': {'best_output': {'type': 'string', 'description': "Name of the best output, should be 'Output (a)' or 'Output (b)'"}}}, 'required': ['best_output']}]}. num_procs=5
prompt_batches:   0%|          | 0/37 [00:00<?, ?it/s]
INFO:httpx:HTTP Request: POST https://api.openai-proxy.org/v1/chat/completions "HTTP/1.1 200 OK"
prompt_batches: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 37/37 [00:10<00:00, 3.48it/s]
INFO:root:Completed 37 examples in 10.7 seconds.
INFO:root:Saving all annotations to /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
INFO:root:Loading all annotations from /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
Annotation chunk: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 7/7 [01:38<00:00, 14.09s/it]
INFO:root:drop 1 outputs that are not[0, 1, 2]
INFO:root:Saving all results to output/chatgpt_fn_--phi-2-alpaca-gpt4-dpo-eval
INFO:root:Saving result to the precomputed leaderboard at /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/leaderboards/data_AlpacaEval/chatgpt_fn_leaderboard.csv
                       win_rate  standard_error  n_total  avg_length
gpt4                      73.79            1.54      805        1365
claude                    70.37            1.60      805        1082
chatgpt                   66.09            1.66      805         811
wizardlm-13b              65.16            1.67      805         985
vicuna-13b                64.10            1.69      805        1037
guanaco-65b               62.36            1.71      805        1249
oasst-rlhf-llama-33b      62.05            1.71      805        1079
alpaca-farm-ppo-human     60.25            1.72      805         803
falcon-40b-instruct       56.52            1.74      805         662
phi-2-alpaca-gpt4-dpo     55.60            1.75      804        4532
text_davinci_003          50.00            0.00      805         307
alpaca-7b                 45.22            1.74      805         396
text_davinci_001          28.07            1.56      805         296
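In the leaderboard above, `win_rate` is the mean per-example preference against the `text_davinci_003` reference (whose own row sits at 50.00 with standard error 0.00 by construction), and `standard_error` is the standard error of that mean over `n_total` examples. A rough sketch of how such numbers could be computed, assuming wins score 1, losses 0, and ties 0.5 (the tie convention here is an assumption, not read from this log):

```python
import math


def win_rate_and_se(preferences):
    """Given per-example preference scores in {0, 0.5, 1} (1 = model's
    output preferred, 0.5 = tie), return (win_rate_pct, standard_error_pct).
    Requires at least two examples for the sample variance."""
    n = len(preferences)
    mean = sum(preferences) / n
    # Sample variance of the per-example scores, then SE of the mean.
    var = sum((p - mean) ** 2 for p in preferences) / (n - 1)
    se = math.sqrt(var / n)
    return 100 * mean, 100 * se
```

For example, 55 wins and 45 losses over 100 examples give a 55.00 win rate with a standard error of 5.00, which matches the rough magnitude of the ~1.7 errors in the table at n=805.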