
INFO:root:Evaluating the phi-2-alpaca-gpt4-dpo outputs.

INFO:root:Creating the annotator from `chatgpt_fn`.

INFO:root:Saving annotations to `/home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json`.

INFO:root:Loading all annotations from /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.

https://api.openai-proxy.org/v1

Annotation chunk:   0%|          | 0/7 [00:00<?, ?it/s]
INFO:root:Annotating 0 examples with chatgpt_fn

INFO:root:Saving all annotations to /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.

INFO:root:Loading all annotations from /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.

Annotation chunk:  14%|█▍        | 1/7 [00:00<00:01,  3.87it/s]
INFO:root:Annotating 0 examples with chatgpt_fn

INFO:root:Saving all annotations to /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.

INFO:root:Loading all annotations from /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.

Annotation chunk:  29%|██▉       | 2/7 [00:00<00:01,  4.15it/s]
INFO:root:Annotating 0 examples with chatgpt_fn

INFO:root:Saving all annotations to /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.

INFO:root:Loading all annotations from /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.

Annotation chunk:  43%|████▎     | 3/7 [00:00<00:00,  4.19it/s]
INFO:root:Annotating 64 examples with chatgpt_fn

INFO:root:Using `openai_completions` on 64 prompts using gpt-3.5-turbo-16k-0613.

INFO:root:Kwargs to completion: {'n': 1, 'model': 'gpt-3.5-turbo-16k-0613', 'is_chat': True, 'temperature': 0, 'function_call': {'name': 'print_best_model'}, 'functions': [{'name': 'print_best_model', 'description': 'Print the best model given the preferred output.', 'parameters': {'type': 'object', 'properties': {'best_output': {'type': 'string', 'description': "Name of the best output, should be 'Output (a)' or 'Output (b)'"}}}, 'required': ['best_output']}]}. num_procs=5

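The kwargs logged above force the model to call a `print_best_model` function and return the preferred output as structured JSON rather than free text. Below is a minimal sketch of how those kwargs map onto the OpenAI chat-completions function-calling format; the helpers `completion_kwargs` and `parse_best_output` are hypothetical, no API call is made, and note that standard JSON Schema nests `required` inside `parameters`, whereas the logged dict places it one level up.

```python
import json

# Function schema matching the logged completion kwargs: the annotator
# forces the model to call `print_best_model` with the preferred output.
PRINT_BEST_MODEL = {
    "name": "print_best_model",
    "description": "Print the best model given the preferred output.",
    "parameters": {
        "type": "object",
        "properties": {
            "best_output": {
                "type": "string",
                "description": "Name of the best output, should be "
                               "'Output (a)' or 'Output (b)'",
            }
        },
        "required": ["best_output"],  # nested per standard JSON Schema
    },
}

def completion_kwargs(messages):
    """Build chat-completion kwargs mirroring the logged configuration
    (hypothetical helper)."""
    return {
        "model": "gpt-3.5-turbo-16k-0613",
        "messages": messages,
        "n": 1,
        "temperature": 0,
        "function_call": {"name": "print_best_model"},  # force this call
        "functions": [PRINT_BEST_MODEL],
    }

def parse_best_output(response):
    """Extract the preference from a chat-completions response dict
    (hypothetical helper; shape follows the function-calling API)."""
    call = response["choices"][0]["message"]["function_call"]
    return json.loads(call["arguments"])["best_output"]

# Example with a mocked response, so nothing is sent over the network:
mock = {"choices": [{"message": {"function_call": {
    "name": "print_best_model",
    "arguments": json.dumps({"best_output": "Output (a)"}),
}}}]}
print(parse_best_output(mock))  # Output (a)
```

With `temperature: 0` and the forced `function_call`, each comparison deterministically yields one of the two allowed labels, which is what makes the downstream win-rate computation straightforward.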

prompt_batches:   0%|          | 0/64 [00:00<?, ?it/s]
INFO:httpx:HTTP Request: POST https://api.openai-proxy.org/v1/chat/completions "HTTP/1.1 200 OK"
prompt_batches: 100%|██████████| 64/64 [00:17<00:00,  3.70it/s]

INFO:root:Completed 64 examples in 17.3 seconds.

INFO:root:Saving all annotations to /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.

INFO:root:Loading all annotations from /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.

Annotation chunk:  57%|█████▋    | 4/7 [00:18<00:21,  7.11s/it]
INFO:root:Annotating 128 examples with chatgpt_fn

INFO:root:Using `openai_completions` on 128 prompts using gpt-3.5-turbo-16k-0613.

INFO:root:Kwargs to completion: {'n': 1, 'model': 'gpt-3.5-turbo-16k-0613', 'is_chat': True, 'temperature': 0, 'function_call': {'name': 'print_best_model'}, 'functions': [{'name': 'print_best_model', 'description': 'Print the best model given the preferred output.', 'parameters': {'type': 'object', 'properties': {'best_output': {'type': 'string', 'description': "Name of the best output, should be 'Output (a)' or 'Output (b)'"}}}, 'required': ['best_output']}]}. num_procs=5

prompt_batches:   0%|          | 0/128 [00:00<?, ?it/s]
INFO:httpx:HTTP Request: POST https://api.openai-proxy.org/v1/chat/completions "HTTP/1.1 200 OK"
prompt_batches: 100%|██████████| 128/128 [00:34<00:00,  3.71it/s]

INFO:root:Completed 128 examples in 34.5 seconds.

INFO:root:Saving all annotations to /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.

INFO:root:Loading all annotations from /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.

Annotation chunk:  71%|███████▏  | 5/7 [00:53<00:34, 17.14s/it]
INFO:root:Annotating 127 examples with chatgpt_fn

INFO:root:Using `openai_completions` on 127 prompts using gpt-3.5-turbo-16k-0613.

INFO:root:Kwargs to completion: {'n': 1, 'model': 'gpt-3.5-turbo-16k-0613', 'is_chat': True, 'temperature': 0, 'function_call': {'name': 'print_best_model'}, 'functions': [{'name': 'print_best_model', 'description': 'Print the best model given the preferred output.', 'parameters': {'type': 'object', 'properties': {'best_output': {'type': 'string', 'description': "Name of the best output, should be 'Output (a)' or 'Output (b)'"}}}, 'required': ['best_output']}]}. num_procs=5

prompt_batches: 100%|██████████| 127/127 [00:33<00:00, 3.75it/s]
|
INFO:root:Completed 127 examples in 33.9 seconds.
INFO:root:Saving all annotations to /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
INFO:root:Loading all annotations from /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
Annotation chunk:  86%|█████████ | 6/7 [01:27<00:22, 22.97s/it]
INFO:root:Annotating 37 examples with chatgpt_fn
INFO:root:Using `openai_completions` on 37 prompts using gpt-3.5-turbo-16k-0613.
INFO:root:Kwargs to completion: {'n': 1, 'model': 'gpt-3.5-turbo-16k-0613', 'is_chat': True, 'temperature': 0, 'function_call': {'name': 'print_best_model'}, 'functions': [{'name': 'print_best_model', 'description': 'Print the best model given the preferred output.', 'parameters': {'type': 'object', 'properties': {'best_output': {'type': 'string', 'description': "Name of the best output, should be 'Output (a)' or 'Output (b)'"}}}, 'required': ['best_output']}]}. num_procs=5
|
|
|
prompt_batches: 100%|██████████| 37/37 [00:10<00:00, 3.48it/s]
|
INFO:root:Completed 37 examples in 10.7 seconds.
INFO:root:Saving all annotations to /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
INFO:root:Loading all annotations from /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/evaluators_configs/chatgpt_fn/annotations_seed0_configs.json.
Annotation chunk: 100%|██████████| 7/7 [01:38<00:00, 14.09s/it]
INFO:root:drop 1 outputs that are not [0, 1, 2]
INFO:root:Saving all results to output/chatgpt_fn_--phi-2-alpaca-gpt4-dpo-eval
INFO:root:Saving result to the precomputed leaderboard at /home/hangyu5/Documents/Git-repoMy/AIResearchVault/repo/LLM-infrastructure/alpaca_eval/src/alpaca_eval/leaderboards/data_AlpacaEval/chatgpt_fn_leaderboard.csv
|
                        win_rate  standard_error  n_total  avg_length
gpt4                       73.79            1.54      805        1365
claude                     70.37            1.60      805        1082
chatgpt                    66.09            1.66      805         811
wizardlm-13b               65.16            1.67      805         985
vicuna-13b                 64.10            1.69      805        1037
guanaco-65b                62.36            1.71      805        1249
oasst-rlhf-llama-33b       62.05            1.71      805        1079
alpaca-farm-ppo-human      60.25            1.72      805         803
falcon-40b-instruct        56.52            1.74      805         662
phi-2-alpaca-gpt4-dpo      55.60            1.75      804        4532
text_davinci_003           50.00            0.00      805         307
alpaca-7b                  45.22            1.74      805         396
text_davinci_001           28.07            1.56      805         296
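The `win_rate` and `standard_error` columns are consistent with averaging per-example preference scores and reporting the standard error of that mean. A small sketch follows, under stated assumptions: the `[0, 1, 2]` coding from the "drop" log line is interpreted hypothetically as 1 = baseline preferred, 2 = evaluated model preferred, 0 = tie counted as half a win, and the helper name `win_rate_and_se` is illustrative, not alpaca_eval's API.

```python
import math

def win_rate_and_se(preferences, model_label=2, tie_label=0):
    """Win rate (%) of the evaluated model and the standard error of the mean.

    Assumed coding (hypothetical, matching the '[0, 1, 2]' values in the log):
    1 = baseline preferred, 2 = evaluated model preferred, 0 = tie (half win).
    """
    scores = [1.0 if p == model_label else 0.5 if p == tie_label else 0.0
              for p in preferences]
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                               # SE of the mean
    return 100 * mean, 100 * se

# e.g. 594 wins out of 805 with no ties lands close to gpt4's row above
rate, se = win_rate_and_se([2] * 594 + [1] * 211)
```

Under this reading, `text_davinci_003` is the reference model (win rate pinned at 50.00 with zero standard error), and the dropped annotation explains why `phi-2-alpaca-gpt4-dpo` reports `n_total` 804 instead of 805.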
|
|