evelyn / conversation_log / gradio_web_server.log
2024-07-03 06:04:17 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-03 06:04:17 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-03 06:04:17 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-03 06:04:18 | INFO | stdout | Running on local URL: http://0.0.0.0:7860
2024-07-03 06:04:18 | INFO | stdout |
2024-07-03 06:04:18 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-03 06:04:35 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-03 06:04:40 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 12
2024-07-03 06:04:40 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-03 06:04:40 | INFO | gradio_web_server | monitor error: HTTPConnectionPool(host='localhost', port=9090): Max retries exceeded with url: /is_limit_reached?model=vicuna-7b-v1.5&user_id=127.0.0.1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fbb151112e0>: Failed to establish a new connection: [Errno 111] Connection refused'))
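
The repeated "monitor error" lines come from the web server probing a rate-limit monitor on localhost:9090 that is evidently not running; the error is logged and the request still goes through, as the following lines show. A minimal sketch of such a probe, inferred only from the URL in the log (the function name and the GET shape are assumptions, not FastChat's exact code):

    import requests

    def is_limit_reached(model, user_id, monitor_url="http://localhost:9090"):
        # Probe the rate-limit monitor seen in the log; if it is down
        # (connection refused, as above), fall back to "not limited" so the
        # chat request can still proceed.
        try:
            ret = requests.get(
                f"{monitor_url}/is_limit_reached",
                params={"model": model, "user_id": user_id},
                timeout=1,
            )
            return bool(ret.json().get("is_limit_reached", False))
        except requests.exceptions.RequestException:
            return False
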
2024-07-03 06:04:40 | INFO | gradio_web_server | model_name: vicuna-7b-v1.5, worker_addr: http://127.0.0.1:21003
2024-07-03 06:04:40 | INFO | gradio_web_server | ==== request ====
{'model': 'vicuna-7b-v1.5', 'prompt': "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hello there! ASSISTANT:", 'temperature': 0.7, 'repetition_penalty': 1.0, 'top_p': 1.0, 'max_new_tokens': 1024, 'stop': None, 'stop_token_ids': None, 'echo': False}
2024-07-03 06:04:42 | INFO | gradio_web_server | Hello! How can I help you today?
2024-07-03 06:04:44 | INFO | gradio_web_server | upvote. ip: 127.0.0.1
2024-07-03 06:04:50 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 12
2024-07-03 06:04:51 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-03 06:04:51 | INFO | gradio_web_server | monitor error: HTTPConnectionPool(host='localhost', port=9090): Max retries exceeded with url: /is_limit_reached?model=vicuna-7b-v1.5&user_id=127.0.0.1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fbb1509e910>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-07-03 06:04:51 | INFO | gradio_web_server | model_name: vicuna-7b-v1.5, worker_addr: http://127.0.0.1:21003
2024-07-03 06:04:51 | INFO | gradio_web_server | ==== request ====
{'model': 'vicuna-7b-v1.5', 'prompt': "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hello there! ASSISTANT: Hello! How can I help you today?</s>USER: who are you? ASSISTANT:", 'temperature': 0.7, 'repetition_penalty': 1.0, 'top_p': 1.0, 'max_new_tokens': 1024, 'stop': None, 'stop_token_ids': None, 'echo': False}
2024-07-03 06:04:52 | INFO | gradio_web_server | My name is Vicuna, and I'm a language model developed by Large Model Systems Organization (LMSYS).
2024-07-03 06:04:56 | INFO | gradio_web_server | upvote. ip: 127.0.0.1
2024-07-03 06:05:57 | INFO | stdout | Keyboard interruption in main thread... closing server.
2024-07-03 06:05:57 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 06:05:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2664, in block_thread
2024-07-03 06:05:57 | ERROR | stderr | time.sleep(0.1)
2024-07-03 06:05:57 | ERROR | stderr | KeyboardInterrupt
2024-07-03 06:05:57 | ERROR | stderr |
2024-07-03 06:05:57 | ERROR | stderr | During handling of the above exception, another exception occurred:
2024-07-03 06:05:57 | ERROR | stderr |
2024-07-03 06:05:57 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 06:05:57 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
2024-07-03 06:05:57 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-07-03 06:05:57 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
2024-07-03 06:05:57 | ERROR | stderr | exec(code, run_globals)
2024-07-03 06:05:57 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server.py", line 1049, in <module>
2024-07-03 06:05:57 | ERROR | stderr | demo.queue(
2024-07-03 06:05:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2569, in launch
2024-07-03 06:05:57 | ERROR | stderr | self.block_thread()
2024-07-03 06:05:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2668, in block_thread
2024-07-03 06:05:57 | ERROR | stderr | self.server.close()
2024-07-03 06:05:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/http_server.py", line 68, in close
2024-07-03 06:05:57 | ERROR | stderr | self.thread.join(timeout=5)
2024-07-03 06:05:57 | ERROR | stderr | File "/usr/lib/python3.8/threading.py", line 1015, in join
2024-07-03 06:05:57 | ERROR | stderr | self._wait_for_tstate_lock(timeout=max(timeout, 0))
2024-07-03 06:05:57 | ERROR | stderr | File "/usr/lib/python3.8/threading.py", line 1027, in _wait_for_tstate_lock
2024-07-03 06:05:57 | ERROR | stderr | elif lock.acquire(block, timeout):
2024-07-03 06:05:57 | ERROR | stderr | KeyboardInterrupt
2024-07-03 06:06:10 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/urllib3/connection.py", line 196, in _new_conn
2024-07-03 06:06:10 | ERROR | stderr | sock = connection.create_connection(
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/urllib3/util/connection.py", line 85, in create_connection
2024-07-03 06:06:10 | ERROR | stderr | raise err
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/urllib3/util/connection.py", line 73, in create_connection
2024-07-03 06:06:10 | ERROR | stderr | sock.connect(sa)
2024-07-03 06:06:10 | ERROR | stderr | ConnectionRefusedError: [Errno 111] Connection refused
2024-07-03 06:06:10 | ERROR | stderr |
2024-07-03 06:06:10 | ERROR | stderr | The above exception was the direct cause of the following exception:
2024-07-03 06:06:10 | ERROR | stderr |
2024-07-03 06:06:10 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/urllib3/connectionpool.py", line 789, in urlopen
2024-07-03 06:06:10 | ERROR | stderr | response = self._make_request(
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/urllib3/connectionpool.py", line 495, in _make_request
2024-07-03 06:06:10 | ERROR | stderr | conn.request(
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/urllib3/connection.py", line 398, in request
2024-07-03 06:06:10 | ERROR | stderr | self.endheaders()
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/lib/python3.8/http/client.py", line 1251, in endheaders
2024-07-03 06:06:10 | ERROR | stderr | self._send_output(message_body, encode_chunked=encode_chunked)
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/lib/python3.8/http/client.py", line 1011, in _send_output
2024-07-03 06:06:10 | ERROR | stderr | self.send(msg)
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/lib/python3.8/http/client.py", line 951, in send
2024-07-03 06:06:10 | ERROR | stderr | self.connect()
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/urllib3/connection.py", line 236, in connect
2024-07-03 06:06:10 | ERROR | stderr | self.sock = self._new_conn()
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/urllib3/connection.py", line 211, in _new_conn
2024-07-03 06:06:10 | ERROR | stderr | raise NewConnectionError(
2024-07-03 06:06:10 | ERROR | stderr | urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fe97dcc79d0>: Failed to establish a new connection: [Errno 111] Connection refused
2024-07-03 06:06:10 | ERROR | stderr |
2024-07-03 06:06:10 | ERROR | stderr | The above exception was the direct cause of the following exception:
2024-07-03 06:06:10 | ERROR | stderr |
2024-07-03 06:06:10 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/requests/adapters.py", line 667, in send
2024-07-03 06:06:10 | ERROR | stderr | resp = conn.urlopen(
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/urllib3/connectionpool.py", line 843, in urlopen
2024-07-03 06:06:10 | ERROR | stderr | retries = retries.increment(
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/urllib3/util/retry.py", line 519, in increment
2024-07-03 06:06:10 | ERROR | stderr | raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
2024-07-03 06:06:10 | ERROR | stderr | urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=21001): Max retries exceeded with url: /refresh_all_workers (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe97dcc79d0>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-07-03 06:06:10 | ERROR | stderr |
2024-07-03 06:06:10 | ERROR | stderr | During handling of the above exception, another exception occurred:
2024-07-03 06:06:10 | ERROR | stderr |
2024-07-03 06:06:10 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
2024-07-03 06:06:10 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
2024-07-03 06:06:10 | ERROR | stderr | exec(code, run_globals)
2024-07-03 06:06:10 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_multi.py", line 277, in <module>
2024-07-03 06:06:10 | ERROR | stderr | models, all_models = get_model_list(
2024-07-03 06:06:10 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server.py", line 178, in get_model_list
2024-07-03 06:06:10 | ERROR | stderr | ret = requests.post(controller_url + "/refresh_all_workers")
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/requests/api.py", line 115, in post
2024-07-03 06:06:10 | ERROR | stderr | return request("post", url, data=data, json=json, **kwargs)
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/requests/api.py", line 59, in request
2024-07-03 06:06:10 | ERROR | stderr | return session.request(method=method, url=url, **kwargs)
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/requests/sessions.py", line 589, in request
2024-07-03 06:06:10 | ERROR | stderr | resp = self.send(prep, **send_kwargs)
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/requests/sessions.py", line 703, in send
2024-07-03 06:06:10 | ERROR | stderr | r = adapter.send(request, **kwargs)
2024-07-03 06:06:10 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/requests/adapters.py", line 700, in send
2024-07-03 06:06:10 | ERROR | stderr | raise ConnectionError(e, request=request)
2024-07-03 06:06:10 | ERROR | stderr | requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=21001): Max retries exceeded with url: /refresh_all_workers (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe97dcc79d0>: Failed to establish a new connection: [Errno 111] Connection refused'))
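
Unlike the monitor errors above, this one is fatal: get_model_list cannot reach a FastChat controller at localhost:21001, so gradio_web_server_multi exits at startup. The controller (and at least one model worker) has to be running before the web server starts; a quick probe of the endpoint named in the traceback makes the failure obvious (a sketch using the URL shown above, not part of FastChat):

    import requests

    controller_url = "http://localhost:21001"  # the address the traceback above was hitting
    try:
        # get_model_list refreshes the worker list through this endpoint
        # (see the requests.post call in the traceback).
        requests.post(controller_url + "/refresh_all_workers", timeout=5).raise_for_status()
        print("controller reachable")
    except requests.exceptions.ConnectionError:
        print("controller not reachable; start fastchat.serve.controller first")
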
2024-07-03 06:07:28 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-03 06:07:28 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-03 06:07:28 | INFO | gradio_web_server | All models: []
2024-07-03 06:07:28 | INFO | gradio_web_server | Visible models: []
2024-07-03 06:07:28 | ERROR | stderr | /usr/local/lib/python3.8/dist-packages/gradio/components/dropdown.py:181: UserWarning: The value passed into gr.Dropdown() is not in the list of choices. Please update the list of choices to include: or set allow_custom_value=True.
2024-07-03 06:07:28 | ERROR | stderr | warnings.warn(
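
The UserWarning fires because the second model list above came back empty, so the dropdown's default value is not among its choices. The warning itself names the usual remedies; a small sketch (the variable names are placeholders, not FastChat's):

    import gradio as gr

    models = []  # mirrors the empty "All models: []" list above
    model_selector = gr.Dropdown(
        choices=models,
        value=models[0] if models else None,  # avoid a default that is not a choice
        allow_custom_value=not models,        # or allow free-form values, as the warning suggests
    )
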
2024-07-03 06:07:28 | INFO | stdout | Running on local URL: http://0.0.0.0:7860
2024-07-03 06:07:28 | INFO | stdout |
2024-07-03 06:07:28 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-03 06:08:06 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-03 06:08:06 | INFO | gradio_web_server | monitor error: HTTPConnectionPool(host='localhost', port=9090): Max retries exceeded with url: /is_limit_reached?model=vicuna-7b-v1.5&user_id=127.0.0.1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe037fb45e0>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-07-03 06:08:06 | INFO | gradio_web_server | model_name: vicuna-7b-v1.5, worker_addr: http://127.0.0.1:21003
2024-07-03 06:08:06 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-03 06:08:06 | INFO | gradio_web_server | monitor error: HTTPConnectionPool(host='localhost', port=9090): Max retries exceeded with url: /is_limit_reached?model=vicuna-7b-v1.5&user_id=127.0.0.1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe03a010fd0>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-07-03 06:08:06 | INFO | gradio_web_server | model_name: vicuna-7b-v1.5, worker_addr: http://127.0.0.1:21003
2024-07-03 06:08:06 | INFO | gradio_web_server | ==== request ====
{'model': 'vicuna-7b-v1.5', 'prompt': "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hi ASSISTANT:", 'temperature': 0.7, 'repetition_penalty': 1.0, 'top_p': 1.0, 'max_new_tokens': 1024, 'stop': None, 'stop_token_ids': None, 'echo': False}
2024-07-03 06:08:06 | INFO | gradio_web_server | ==== request ====
{'model': 'vicuna-7b-v1.5', 'prompt': "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hi ASSISTANT:", 'temperature': 0.7, 'repetition_penalty': 1.0, 'top_p': 1.0, 'max_new_tokens': 1024, 'stop': None, 'stop_token_ids': None, 'echo': False}
2024-07-03 06:08:12 | INFO | gradio_web_server | Hello! How can I help you today? Is there something specific you would like to know? I'm here to answer any questions you may have.
2024-07-03 06:08:13 | INFO | gradio_web_server | Hello! How can I help you today? Is there something specific you would like to know or discuss? I am here to provide information and answer any questions you may have. Feel free to ask me anything.
2024-07-03 06:09:58 | INFO | stdout | Keyboard interruption in main thread... closing server.
2024-07-03 06:09:59 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 06:09:59 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2664, in block_thread
2024-07-03 06:09:59 | ERROR | stderr | time.sleep(0.1)
2024-07-03 06:09:59 | ERROR | stderr | KeyboardInterrupt
2024-07-03 06:09:59 | ERROR | stderr |
2024-07-03 06:09:59 | ERROR | stderr | During handling of the above exception, another exception occurred:
2024-07-03 06:09:59 | ERROR | stderr |
2024-07-03 06:09:59 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 06:09:59 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
2024-07-03 06:09:59 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-07-03 06:09:59 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
2024-07-03 06:09:59 | ERROR | stderr | exec(code, run_globals)
2024-07-03 06:09:59 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_multi.py", line 301, in <module>
2024-07-03 06:09:59 | ERROR | stderr | demo.queue(
2024-07-03 06:09:59 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2569, in launch
2024-07-03 06:09:59 | ERROR | stderr | self.block_thread()
2024-07-03 06:09:59 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2668, in block_thread
2024-07-03 06:09:59 | ERROR | stderr | self.server.close()
2024-07-03 06:09:59 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/http_server.py", line 68, in close
2024-07-03 06:09:59 | ERROR | stderr | self.thread.join(timeout=5)
2024-07-03 06:09:59 | ERROR | stderr | File "/usr/lib/python3.8/threading.py", line 1015, in join
2024-07-03 06:09:59 | ERROR | stderr | self._wait_for_tstate_lock(timeout=max(timeout, 0))
2024-07-03 06:09:59 | ERROR | stderr | File "/usr/lib/python3.8/threading.py", line 1027, in _wait_for_tstate_lock
2024-07-03 06:09:59 | ERROR | stderr | elif lock.acquire(block, timeout):
2024-07-03 06:09:59 | ERROR | stderr | KeyboardInterrupt
2024-07-03 07:00:42 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 07:00:42 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
2024-07-03 07:00:42 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-07-03 07:00:42 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
2024-07-03 07:00:42 | ERROR | stderr | exec(code, run_globals)
2024-07-03 07:00:42 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server.py", line 116, in <module>
2024-07-03 07:00:42 | ERROR | stderr | "anony_only": false
2024-07-03 07:00:42 | ERROR | stderr | NameError: name 'false' is not defined
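
The NameError points at a JSON-style literal pasted into Python source around line 116 of gradio_web_server.py: Python spells booleans with a capital letter, so bare false is just an undefined name. Two ways to fix the offending entry (the surrounding dict name here is hypothetical):

    # Option 1: keep it as Python -- booleans are capitalized.
    endpoint_config = {          # hypothetical name for the dict being edited
        "anony_only": False,     # `false` would raise the NameError above
    }

    # Option 2: if the block was copied from a JSON file, parse it instead.
    import json
    endpoint_config = json.loads('{"anony_only": false}')
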
2024-07-03 07:01:06 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-03 07:01:06 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-03 07:01:06 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-03 07:01:06 | INFO | stdout | Running on local URL: http://0.0.0.0:7860
2024-07-03 07:01:06 | INFO | stdout |
2024-07-03 07:01:06 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-03 07:01:14 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-03 07:02:00 | INFO | stdout | Keyboard interruption in main thread... closing server.
2024-07-03 07:02:00 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 07:02:00 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2664, in block_thread
2024-07-03 07:02:00 | ERROR | stderr | time.sleep(0.1)
2024-07-03 07:02:00 | ERROR | stderr | KeyboardInterrupt
2024-07-03 07:02:00 | ERROR | stderr |
2024-07-03 07:02:00 | ERROR | stderr | During handling of the above exception, another exception occurred:
2024-07-03 07:02:00 | ERROR | stderr |
2024-07-03 07:02:00 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 07:02:00 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
2024-07-03 07:02:00 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-07-03 07:02:00 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
2024-07-03 07:02:00 | ERROR | stderr | exec(code, run_globals)
2024-07-03 07:02:00 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server.py", line 1060, in <module>
2024-07-03 07:02:00 | ERROR | stderr | demo.queue(
2024-07-03 07:02:00 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2569, in launch
2024-07-03 07:02:00 | ERROR | stderr | self.block_thread()
2024-07-03 07:02:00 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2668, in block_thread
2024-07-03 07:02:00 | ERROR | stderr | self.server.close()
2024-07-03 07:02:00 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/http_server.py", line 68, in close
2024-07-03 07:02:00 | ERROR | stderr | self.thread.join(timeout=5)
2024-07-03 07:02:00 | ERROR | stderr | File "/usr/lib/python3.8/threading.py", line 1015, in join
2024-07-03 07:02:00 | ERROR | stderr | self._wait_for_tstate_lock(timeout=max(timeout, 0))
2024-07-03 07:02:00 | ERROR | stderr | File "/usr/lib/python3.8/threading.py", line 1027, in _wait_for_tstate_lock
2024-07-03 07:02:00 | ERROR | stderr | elif lock.acquire(block, timeout):
2024-07-03 07:02:00 | ERROR | stderr | KeyboardInterrupt
2024-07-03 07:02:06 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-03 07:02:07 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-03 07:02:07 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-03 07:02:07 | INFO | stdout | Running on local URL: http://0.0.0.0:7860
2024-07-03 07:02:07 | INFO | stdout |
2024-07-03 07:02:07 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-03 07:02:18 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-03 07:02:29 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-03 07:02:30 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-03 07:05:44 | INFO | stdout | Keyboard interruption in main thread... closing server.
2024-07-03 07:05:44 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 07:05:44 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2664, in block_thread
2024-07-03 07:05:44 | ERROR | stderr | time.sleep(0.1)
2024-07-03 07:05:44 | ERROR | stderr | KeyboardInterrupt
2024-07-03 07:05:44 | ERROR | stderr |
2024-07-03 07:05:44 | ERROR | stderr | During handling of the above exception, another exception occurred:
2024-07-03 07:05:44 | ERROR | stderr |
2024-07-03 07:05:44 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 07:05:44 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
2024-07-03 07:05:44 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-07-03 07:05:44 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
2024-07-03 07:05:44 | ERROR | stderr | exec(code, run_globals)
2024-07-03 07:05:44 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server.py", line 1060, in <module>
2024-07-03 07:05:44 | ERROR | stderr | demo.queue(
2024-07-03 07:05:44 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2569, in launch
2024-07-03 07:05:44 | ERROR | stderr | self.block_thread()
2024-07-03 07:05:44 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2668, in block_thread
2024-07-03 07:05:44 | ERROR | stderr | self.server.close()
2024-07-03 07:05:44 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/http_server.py", line 68, in close
2024-07-03 07:05:44 | ERROR | stderr | self.thread.join(timeout=5)
2024-07-03 07:05:44 | ERROR | stderr | File "/usr/lib/python3.8/threading.py", line 1015, in join
2024-07-03 07:05:44 | ERROR | stderr | self._wait_for_tstate_lock(timeout=max(timeout, 0))
2024-07-03 07:05:44 | ERROR | stderr | File "/usr/lib/python3.8/threading.py", line 1027, in _wait_for_tstate_lock
2024-07-03 07:05:44 | ERROR | stderr | elif lock.acquire(block, timeout):
2024-07-03 07:05:44 | ERROR | stderr | KeyboardInterrupt
2024-07-03 07:06:30 | ERROR | stderr | usage: gradio_web_server.py [-h] [--host HOST] [--port PORT] [--share]
2024-07-03 07:06:30 | ERROR | stderr | [--controller-url CONTROLLER_URL]
2024-07-03 07:06:30 | ERROR | stderr | [--concurrency-count CONCURRENCY_COUNT]
2024-07-03 07:06:30 | ERROR | stderr | [--model-list-mode {once,reload}]
2024-07-03 07:06:30 | ERROR | stderr | [--moderate] [--show-terms-of-use]
2024-07-03 07:06:30 | ERROR | stderr | [--register-api-endpoint-file REGISTER_API_ENDPOINT_FILE]
2024-07-03 07:06:30 | ERROR | stderr | [--gradio-auth-path GRADIO_AUTH_PATH]
2024-07-03 07:06:30 | ERROR | stderr | [--gradio-root-path GRADIO_ROOT_PATH]
2024-07-03 07:06:30 | ERROR | stderr | [--use-remote-storage]
2024-07-03 07:06:30 | ERROR | stderr | gradio_web_server.py: error: unrecognized arguments: --register_api_endpoint_file /LLM_32T/evelyn/FastChat/fastchat/serve/azure_api.json
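
argparse accepts the flag with dashes (--register-api-endpoint-file, as the usage text above shows) even though it stores the value under an underscored attribute; passing the underscored spelling on the command line is what triggers "unrecognized arguments". A minimal illustration of that dash/underscore split:

    import argparse

    parser = argparse.ArgumentParser()
    # The flag is declared with dashes; argparse exposes it as
    # args.register_api_endpoint_file (underscores) in the Namespace.
    parser.add_argument("--register-api-endpoint-file", type=str, default=None)

    args = parser.parse_args(
        ["--register-api-endpoint-file", "/LLM_32T/evelyn/FastChat/fastchat/serve/azure_api.json"]
    )
    print(args.register_api_endpoint_file)
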
2024-07-03 07:07:40 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file='/LLM_32T/evelyn/FastChat/fastchat/serve/azure_api.json', share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-03 07:07:40 | INFO | gradio_web_server | All models: ['gpt-3.5-turbo', 'vicuna-7b-v1.5']
2024-07-03 07:07:40 | INFO | gradio_web_server | Visible models: ['gpt-3.5-turbo', 'vicuna-7b-v1.5']
2024-07-03 07:07:41 | INFO | stdout | Running on local URL: http://0.0.0.0:7860
2024-07-03 07:07:41 | INFO | stdout |
2024-07-03 07:07:41 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-03 07:07:48 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-03 07:07:57 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 15
2024-07-03 07:07:57 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/queueing.py", line 541, in process_events
2024-07-03 07:07:57 | ERROR | stderr | response = await route_utils.call_process_api(
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/route_utils.py", line 276, in call_process_api
2024-07-03 07:07:57 | ERROR | stderr | output = await app.get_blocks().process_api(
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1928, in process_api
2024-07-03 07:07:57 | ERROR | stderr | result = await self.call_function(
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1514, in call_function
2024-07-03 07:07:57 | ERROR | stderr | prediction = await anyio.to_thread.run_sync(
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 56, in run_sync
2024-07-03 07:07:57 | ERROR | stderr | return await get_async_backend().run_sync_in_worker_thread(
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
2024-07-03 07:07:57 | ERROR | stderr | return await future
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 859, in run
2024-07-03 07:07:57 | ERROR | stderr | result = context.run(func, *args)
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 833, in wrapper
2024-07-03 07:07:57 | ERROR | stderr | response = f(*args, **kwargs)
2024-07-03 07:07:57 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server.py", line 337, in add_text
2024-07-03 07:07:57 | ERROR | stderr | flagged = moderation_filter(all_conv_text, [state.model_name])
2024-07-03 07:07:57 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/utils.py", line 203, in moderation_filter
2024-07-03 07:07:57 | ERROR | stderr | return oai_moderation(text, custom_thresholds)
2024-07-03 07:07:57 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/utils.py", line 156, in oai_moderation
2024-07-03 07:07:57 | ERROR | stderr | import openai
2024-07-03 07:07:57 | ERROR | stderr | ModuleNotFoundError: No module named 'openai'
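
add_text fails here because the moderation filter imports openai lazily and the package is not installed in this environment (pip install openai is the straightforward remedy). A defensive sketch that degrades to "not flagged" instead of raising; this is an illustration, not FastChat's moderation_filter:

    def safe_moderation_filter(text, model_list):
        # Degrade gracefully when the openai package is missing instead of
        # letting every add_text call crash with ModuleNotFoundError.
        try:
            import openai  # noqa: F401
        except ImportError:
            return False  # treat the message as not flagged
        # ...real moderation call would go here...
        return False
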
2024-07-03 07:07:57 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-03 07:07:57 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/queueing.py", line 541, in process_events
2024-07-03 07:07:57 | ERROR | stderr | response = await route_utils.call_process_api(
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/route_utils.py", line 276, in call_process_api
2024-07-03 07:07:57 | ERROR | stderr | output = await app.get_blocks().process_api(
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1928, in process_api
2024-07-03 07:07:57 | ERROR | stderr | result = await self.call_function(
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1526, in call_function
2024-07-03 07:07:57 | ERROR | stderr | prediction = await utils.async_iteration(iterator)
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 657, in async_iteration
2024-07-03 07:07:57 | ERROR | stderr | return await iterator.__anext__()
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 650, in __anext__
2024-07-03 07:07:57 | ERROR | stderr | return await anyio.to_thread.run_sync(
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 56, in run_sync
2024-07-03 07:07:57 | ERROR | stderr | return await get_async_backend().run_sync_in_worker_thread(
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
2024-07-03 07:07:57 | ERROR | stderr | return await future
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 859, in run
2024-07-03 07:07:57 | ERROR | stderr | result = context.run(func, *args)
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 633, in run_sync_iterator_async
2024-07-03 07:07:57 | ERROR | stderr | return next(iterator)
2024-07-03 07:07:57 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 816, in gen_wrapper
2024-07-03 07:07:57 | ERROR | stderr | response = next(iterator)
2024-07-03 07:07:57 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server.py", line 429, in bot_response
2024-07-03 07:07:57 | ERROR | stderr | if state.skip_next:
2024-07-03 07:07:57 | ERROR | stderr | AttributeError: 'NoneType' object has no attribute 'skip_next'
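
bot_response receives state=None because the add_text call just above crashed before it could build a conversation state, so the very first attribute access fails. A minimal guard sketch (not FastChat's actual code) that skips generation when there is no state to work with:

    def bot_response(state, *args):
        # If the preceding add_text step failed, there is no conversation
        # state yet; bail out instead of raising AttributeError on None.
        if state is None or state.skip_next:
            return
        # ...normal generation path...
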
2024-07-03 07:10:52 | INFO | stdout | Keyboard interruption in main thread... closing server.
2024-07-03 07:10:52 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 07:10:52 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2664, in block_thread
2024-07-03 07:10:52 | ERROR | stderr | time.sleep(0.1)
2024-07-03 07:10:52 | ERROR | stderr | KeyboardInterrupt
2024-07-03 07:10:52 | ERROR | stderr |
2024-07-03 07:10:52 | ERROR | stderr | During handling of the above exception, another exception occurred:
2024-07-03 07:10:52 | ERROR | stderr |
2024-07-03 07:10:52 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 07:10:52 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
2024-07-03 07:10:52 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-07-03 07:10:52 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
2024-07-03 07:10:52 | ERROR | stderr | exec(code, run_globals)
2024-07-03 07:10:52 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server.py", line 1060, in <module>
2024-07-03 07:10:52 | ERROR | stderr | auth=auth,
2024-07-03 07:10:52 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2569, in launch
2024-07-03 07:10:52 | ERROR | stderr | self.block_thread()
2024-07-03 07:10:52 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2668, in block_thread
2024-07-03 07:10:52 | ERROR | stderr | self.server.close()
2024-07-03 07:10:52 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/http_server.py", line 68, in close
2024-07-03 07:10:52 | ERROR | stderr | self.thread.join(timeout=5)
2024-07-03 07:10:52 | ERROR | stderr | File "/usr/lib/python3.8/threading.py", line 1015, in join
2024-07-03 07:10:52 | ERROR | stderr | self._wait_for_tstate_lock(timeout=max(timeout, 0))
2024-07-03 07:10:52 | ERROR | stderr | File "/usr/lib/python3.8/threading.py", line 1027, in _wait_for_tstate_lock
2024-07-03 07:10:52 | ERROR | stderr | elif lock.acquire(block, timeout):
2024-07-03 07:10:52 | ERROR | stderr | KeyboardInterrupt
2024-07-03 07:10:59 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file='/LLM_32T/evelyn/FastChat/fastchat/serve/azure_api.json', share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-03 07:10:59 | INFO | gradio_web_server | All models: ['gpt-3.5-turbo', 'vicuna-7b-v1.5']
2024-07-03 07:10:59 | INFO | gradio_web_server | Visible models: ['gpt-3.5-turbo', 'vicuna-7b-v1.5']
2024-07-03 07:10:59 | INFO | stdout | Running on local URL: http://0.0.0.0:7860
2024-07-03 07:10:59 | INFO | stdout |
2024-07-03 07:10:59 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-03 07:11:05 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-03 07:11:08 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 2
2024-07-03 07:11:08 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 07:11:08 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/queueing.py", line 541, in process_events
2024-07-03 07:11:08 | ERROR | stderr | response = await route_utils.call_process_api(
2024-07-03 07:11:08 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/route_utils.py", line 276, in call_process_api
2024-07-03 07:11:08 | ERROR | stderr | output = await app.get_blocks().process_api(
2024-07-03 07:11:08 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1928, in process_api
2024-07-03 07:11:08 | ERROR | stderr | result = await self.call_function(
2024-07-03 07:11:08 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1514, in call_function
2024-07-03 07:11:08 | ERROR | stderr | prediction = await anyio.to_thread.run_sync(
2024-07-03 07:11:08 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 56, in run_sync
2024-07-03 07:11:08 | ERROR | stderr | return await get_async_backend().run_sync_in_worker_thread(
2024-07-03 07:11:08 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
2024-07-03 07:11:08 | ERROR | stderr | return await future
2024-07-03 07:11:08 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 859, in run
2024-07-03 07:11:08 | ERROR | stderr | result = context.run(func, *args)
2024-07-03 07:11:08 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 833, in wrapper
2024-07-03 07:11:08 | ERROR | stderr | response = f(*args, **kwargs)
2024-07-03 07:11:08 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server.py", line 328, in add_text
2024-07-03 07:11:08 | ERROR | stderr | flagged = moderation_filter(all_conv_text, [state.model_name])
2024-07-03 07:11:08 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/utils.py", line 203, in moderation_filter
2024-07-03 07:11:08 | ERROR | stderr | return oai_moderation(text, custom_thresholds)
2024-07-03 07:11:08 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/utils.py", line 156, in oai_moderation
2024-07-03 07:11:08 | ERROR | stderr | import openai
2024-07-03 07:11:08 | ERROR | stderr | ModuleNotFoundError: No module named 'openai'
2024-07-03 07:11:09 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-03 07:11:09 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 07:11:09 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/queueing.py", line 541, in process_events
2024-07-03 07:11:09 | ERROR | stderr | response = await route_utils.call_process_api(
2024-07-03 07:11:09 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/route_utils.py", line 276, in call_process_api
2024-07-03 07:11:09 | ERROR | stderr | output = await app.get_blocks().process_api(
2024-07-03 07:11:09 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1928, in process_api
2024-07-03 07:11:09 | ERROR | stderr | result = await self.call_function(
2024-07-03 07:11:09 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1526, in call_function
2024-07-03 07:11:09 | ERROR | stderr | prediction = await utils.async_iteration(iterator)
2024-07-03 07:11:09 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 657, in async_iteration
2024-07-03 07:11:09 | ERROR | stderr | return await iterator.__anext__()
2024-07-03 07:11:09 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 650, in __anext__
2024-07-03 07:11:09 | ERROR | stderr | return await anyio.to_thread.run_sync(
2024-07-03 07:11:09 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 56, in run_sync
2024-07-03 07:11:09 | ERROR | stderr | return await get_async_backend().run_sync_in_worker_thread(
2024-07-03 07:11:09 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
2024-07-03 07:11:09 | ERROR | stderr | return await future
2024-07-03 07:11:09 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 859, in run
2024-07-03 07:11:09 | ERROR | stderr | result = context.run(func, *args)
2024-07-03 07:11:09 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 633, in run_sync_iterator_async
2024-07-03 07:11:09 | ERROR | stderr | return next(iterator)
2024-07-03 07:11:09 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 816, in gen_wrapper
2024-07-03 07:11:09 | ERROR | stderr | response = next(iterator)
2024-07-03 07:11:09 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server.py", line 420, in bot_response
2024-07-03 07:11:09 | ERROR | stderr | if state.skip_next:
2024-07-03 07:11:09 | ERROR | stderr | AttributeError: 'NoneType' object has no attribute 'skip_next'
2024-07-03 07:12:04 | INFO | stdout | Keyboard interruption in main thread... closing server.
2024-07-03 07:12:04 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 07:12:04 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2664, in block_thread
2024-07-03 07:12:04 | ERROR | stderr | time.sleep(0.1)
2024-07-03 07:12:04 | ERROR | stderr | KeyboardInterrupt
2024-07-03 07:12:04 | ERROR | stderr |
2024-07-03 07:12:04 | ERROR | stderr | During handling of the above exception, another exception occurred:
2024-07-03 07:12:04 | ERROR | stderr |
2024-07-03 07:12:04 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 07:12:04 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
2024-07-03 07:12:04 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-07-03 07:12:04 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
2024-07-03 07:12:04 | ERROR | stderr | exec(code, run_globals)
2024-07-03 07:12:04 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server.py", line 1051, in <module>
2024-07-03 07:12:04 | ERROR | stderr | demo.queue(
2024-07-03 07:12:04 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2569, in launch
2024-07-03 07:12:04 | ERROR | stderr | self.block_thread()
2024-07-03 07:12:04 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2668, in block_thread
2024-07-03 07:12:04 | ERROR | stderr | self.server.close()
2024-07-03 07:12:04 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/http_server.py", line 68, in close
2024-07-03 07:12:04 | ERROR | stderr | self.thread.join(timeout=5)
2024-07-03 07:12:04 | ERROR | stderr | File "/usr/lib/python3.8/threading.py", line 1015, in join
2024-07-03 07:12:04 | ERROR | stderr | self._wait_for_tstate_lock(timeout=max(timeout, 0))
2024-07-03 07:12:04 | ERROR | stderr | File "/usr/lib/python3.8/threading.py", line 1027, in _wait_for_tstate_lock
2024-07-03 07:12:04 | ERROR | stderr | elif lock.acquire(block, timeout):
2024-07-03 07:12:04 | ERROR | stderr | KeyboardInterrupt
2024-07-03 07:12:42 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file='/LLM_32T/evelyn/FastChat/fastchat/serve/azure_api.json', share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-03 07:12:42 | INFO | gradio_web_server | All models: ['gpt-3.5-turbo', 'vicuna-7b-v1.5']
2024-07-03 07:12:42 | INFO | gradio_web_server | Visible models: ['gpt-3.5-turbo', 'vicuna-7b-v1.5']
2024-07-03 07:12:43 | INFO | stdout | Running on local URL: http://0.0.0.0:7860
2024-07-03 07:12:43 | INFO | stdout |
2024-07-03 07:12:43 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-03 07:12:51 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-03 07:12:53 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 2
2024-07-03 07:12:53 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 07:12:53 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/queueing.py", line 541, in process_events
2024-07-03 07:12:53 | ERROR | stderr | response = await route_utils.call_process_api(
2024-07-03 07:12:53 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/route_utils.py", line 276, in call_process_api
2024-07-03 07:12:53 | ERROR | stderr | output = await app.get_blocks().process_api(
2024-07-03 07:12:53 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1928, in process_api
2024-07-03 07:12:53 | ERROR | stderr | result = await self.call_function(
2024-07-03 07:12:53 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1514, in call_function
2024-07-03 07:12:53 | ERROR | stderr | prediction = await anyio.to_thread.run_sync(
2024-07-03 07:12:53 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 56, in run_sync
2024-07-03 07:12:53 | ERROR | stderr | return await get_async_backend().run_sync_in_worker_thread(
2024-07-03 07:12:53 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
2024-07-03 07:12:53 | ERROR | stderr | return await future
2024-07-03 07:12:53 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 859, in run
2024-07-03 07:12:53 | ERROR | stderr | result = context.run(func, *args)
2024-07-03 07:12:53 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 833, in wrapper
2024-07-03 07:12:53 | ERROR | stderr | response = f(*args, **kwargs)
2024-07-03 07:12:53 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server.py", line 328, in add_text
2024-07-03 07:12:53 | ERROR | stderr | flagged = moderation_filter(all_conv_text, [state.model_name])
2024-07-03 07:12:53 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/utils.py", line 203, in moderation_filter
2024-07-03 07:12:53 | ERROR | stderr | return oai_moderation(text, custom_thresholds)
2024-07-03 07:12:53 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/utils.py", line 158, in oai_moderation
2024-07-03 07:12:53 | ERROR | stderr | client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])
2024-07-03 07:12:53 | ERROR | stderr | AttributeError: module 'openai' has no attribute 'OpenAI'
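
This run has the openai package installed, but an 0.x release: the openai.OpenAI client class only exists in openai>=1.0, hence the AttributeError. Checking the installed version makes the mismatch explicit; upgrading (pip install --upgrade "openai>=1.0") is the usual remedy:

    import openai

    # openai.OpenAI arrived with the 1.0 client rewrite; the 0.x series only
    # exposes module-level helpers, which is what the AttributeError implies.
    print(openai.__version__)
    print(hasattr(openai, "OpenAI"))  # False on 0.x, True on >=1.0
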
2024-07-03 07:12:54 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-03 07:12:54 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 07:12:54 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/queueing.py", line 541, in process_events
2024-07-03 07:12:54 | ERROR | stderr | response = await route_utils.call_process_api(
2024-07-03 07:12:54 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/route_utils.py", line 276, in call_process_api
2024-07-03 07:12:54 | ERROR | stderr | output = await app.get_blocks().process_api(
2024-07-03 07:12:54 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1928, in process_api
2024-07-03 07:12:54 | ERROR | stderr | result = await self.call_function(
2024-07-03 07:12:54 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1526, in call_function
2024-07-03 07:12:54 | ERROR | stderr | prediction = await utils.async_iteration(iterator)
2024-07-03 07:12:54 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 657, in async_iteration
2024-07-03 07:12:54 | ERROR | stderr | return await iterator.__anext__()
2024-07-03 07:12:54 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 650, in __anext__
2024-07-03 07:12:54 | ERROR | stderr | return await anyio.to_thread.run_sync(
2024-07-03 07:12:54 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 56, in run_sync
2024-07-03 07:12:54 | ERROR | stderr | return await get_async_backend().run_sync_in_worker_thread(
2024-07-03 07:12:54 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
2024-07-03 07:12:54 | ERROR | stderr | return await future
2024-07-03 07:12:54 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 859, in run
2024-07-03 07:12:54 | ERROR | stderr | result = context.run(func, *args)
2024-07-03 07:12:54 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 633, in run_sync_iterator_async
2024-07-03 07:12:54 | ERROR | stderr | return next(iterator)
2024-07-03 07:12:54 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 816, in gen_wrapper
2024-07-03 07:12:54 | ERROR | stderr | response = next(iterator)
2024-07-03 07:12:54 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server.py", line 420, in bot_response
2024-07-03 07:12:54 | ERROR | stderr | if state.skip_next:
2024-07-03 07:12:54 | ERROR | stderr | AttributeError: 'NoneType' object has no attribute 'skip_next'
2024-07-03 07:20:26 | INFO | stdout | Keyboard interruption in main thread... closing server.
2024-07-03 07:20:26 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 07:20:26 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2664, in block_thread
2024-07-03 07:20:26 | ERROR | stderr | time.sleep(0.1)
2024-07-03 07:20:26 | ERROR | stderr | KeyboardInterrupt
2024-07-03 07:20:26 | ERROR | stderr |
2024-07-03 07:20:26 | ERROR | stderr | During handling of the above exception, another exception occurred:
2024-07-03 07:20:26 | ERROR | stderr |
2024-07-03 07:20:26 | ERROR | stderr | Traceback (most recent call last):
2024-07-03 07:20:26 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
2024-07-03 07:20:26 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-07-03 07:20:26 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
2024-07-03 07:20:26 | ERROR | stderr | exec(code, run_globals)
2024-07-03 07:20:26 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server.py", line 1051, in <module>
2024-07-03 07:20:26 | ERROR | stderr | demo.queue(
2024-07-03 07:20:26 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2569, in launch
2024-07-03 07:20:26 | ERROR | stderr | self.block_thread()
2024-07-03 07:20:26 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 2668, in block_thread
2024-07-03 07:20:26 | ERROR | stderr | self.server.close()
2024-07-03 07:20:26 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/http_server.py", line 68, in close
2024-07-03 07:20:26 | ERROR | stderr | self.thread.join(timeout=5)
2024-07-03 07:20:26 | ERROR | stderr | File "/usr/lib/python3.8/threading.py", line 1015, in join
2024-07-03 07:20:26 | ERROR | stderr | self._wait_for_tstate_lock(timeout=max(timeout, 0))
2024-07-03 07:20:26 | ERROR | stderr | File "/usr/lib/python3.8/threading.py", line 1027, in _wait_for_tstate_lock
2024-07-03 07:20:26 | ERROR | stderr | elif lock.acquire(block, timeout):
2024-07-03 07:20:26 | ERROR | stderr | KeyboardInterrupt
2024-07-03 09:11:12 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-03 09:11:12 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-03 09:11:12 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-03 09:11:13 | INFO | stdout | Running on local URL: http://0.0.0.0:7860
2024-07-03 09:11:13 | INFO | stdout |
2024-07-03 09:11:13 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-04 01:18:15 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-04 01:18:15 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-04 01:18:15 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-04 01:18:15 | INFO | stdout | Running on local URL: http://0.0.0.0:7860
2024-07-04 01:18:15 | INFO | stdout |
2024-07-04 01:18:15 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-04 01:18:22 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-04 01:18:41 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 5
2024-07-04 01:18:42 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-04 01:18:42 | INFO | gradio_web_server | monitor error: HTTPConnectionPool(host='localhost', port=9090): Max retries exceeded with url: /is_limit_reached?model=vicuna-7b-v1.5&user_id=127.0.0.1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe924b05b50>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-07-04 01:18:42 | INFO | gradio_web_server | model_name: vicuna-7b-v1.5, worker_addr: http://127.0.0.1:21003
2024-07-04 01:18:42 | INFO | gradio_web_server | ==== request ====
{'model': 'vicuna-7b-v1.5', 'prompt': "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hello ASSISTANT:", 'temperature': 0.7, 'repetition_penalty': 1.0, 'top_p': 1.0, 'max_new_tokens': 1024, 'stop': None, 'stop_token_ids': None, 'echo': False}
2024-07-04 01:18:45 | INFO | gradio_web_server | Hello! How can I help you today? Is there something you want to talk about or ask me? I'm here to assist you with any information or advice you might need. Just let me know what's on your mind.
2024-07-04 01:18:54 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-04 01:28:52 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 2
2024-07-04 01:28:52 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-04 01:28:52 | INFO | gradio_web_server | monitor error: HTTPConnectionPool(host='localhost', port=9090): Max retries exceeded with url: /is_limit_reached?model=vicuna-7b-v1.5&user_id=127.0.0.1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe90fea27c0>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-07-04 01:28:52 | INFO | gradio_web_server | model_name: vicuna-7b-v1.5, worker_addr: http://127.0.0.1:21003
2024-07-04 01:28:52 | INFO | gradio_web_server | ==== request ====
{'model': 'vicuna-7b-v1.5', 'prompt': "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hello ASSISTANT: Hello! How can I help you today? Is there something you want to talk about or ask me? I'm here to assist you with any information or advice you might need. Just let me know what's on your mind.</s>USER: hi ASSISTANT:", 'temperature': 0.7, 'repetition_penalty': 1.0, 'top_p': 1.0, 'max_new_tokens': 1024, 'stop': None, 'stop_token_ids': None, 'echo': False}
2024-07-04 01:28:54 | INFO | gradio_web_server | Hello! How can I help you today? Is there something specific you would like to know or discuss? I'm here to answer any questions you may have. Just let me know what's on your mind.
2024-07-04 06:39:15 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-04 06:39:15 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-04 06:39:15 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-04 06:39:15 | ERROR | stderr | Traceback (most recent call last):
2024-07-04 06:39:15 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
2024-07-04 06:39:15 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-07-04 06:39:15 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
2024-07-04 06:39:15 | ERROR | stderr | exec(code, run_globals)
2024-07-04 06:39:15 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server.py", line 1075, in <module>
2024-07-04 06:39:15 | ERROR | stderr | demo.queue(
2024-07-04 06:39:15 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 837, in wrapper
2024-07-04 06:39:15 | ERROR | stderr | return queue(*args, **kwargs)
2024-07-04 06:39:15 | ERROR | stderr | TypeError: queue() got an unexpected keyword argument 'default_concurrency_limit'
2024-07-04 06:39:15 | INFO | stdout | IMPORTANT: You are using gradio version 3.48.0, however version 4.29.0 is available, please upgrade.
2024-07-04 06:39:15 | INFO | stdout | --------
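
demo.queue(default_concurrency_limit=...) is a Gradio 4.x keyword; the stdout note confirms this environment is on 3.48.0, where queue() takes concurrency_count instead. A hedged compatibility sketch that picks the keyword by installed version:

    import gradio as gr
    from packaging.version import Version

    with gr.Blocks() as demo:
        gr.Markdown("placeholder UI")

    if Version(gr.__version__) >= Version("4"):
        # Gradio 4.x: concurrency is configured via default_concurrency_limit.
        demo.queue(default_concurrency_limit=10)
    else:
        # Gradio 3.x: a single worker pool sized by concurrency_count.
        demo.queue(concurrency_count=10)
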
2024-07-04 06:41:04 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-04 06:41:04 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-04 06:41:04 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-04 06:41:04 | ERROR | stderr | Traceback (most recent call last):
2024-07-04 06:41:04 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
2024-07-04 06:41:04 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-07-04 06:41:04 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
2024-07-04 06:41:04 | ERROR | stderr | exec(code, run_globals)
2024-07-04 06:41:04 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 1131, in <module>
2024-07-04 06:41:04 | ERROR | stderr | demo = build_demo(models)
2024-07-04 06:41:04 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 1033, in build_demo
2024-07-04 06:41:04 | ERROR | stderr | state, model_selector = build_single_model_ui(models)
2024-07-04 06:41:04 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 984, in build_single_model_ui
2024-07-04 06:41:04 | ERROR | stderr | feedback_btn.click(None, None, hidden_textbox, _js=js_feedback)
2024-07-04 06:41:04 | ERROR | stderr | TypeError: event_trigger() got an unexpected keyword argument '_js'
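`event_trigger() got an unexpected keyword argument '_js'` is the Gradio 4 rename of the event listeners' `_js=` parameter to `js=`. A minimal sketch of a version-tolerant click binding; the components and the `js_feedback` snippet stand in for the script's `feedback_btn`, `hidden_textbox`, and JavaScript string:

    import gradio as gr

    js_feedback = "() => window.prompt('feedback?')"   # placeholder for the script's JS snippet

    with gr.Blocks() as demo:
        hidden_textbox = gr.Textbox(visible=False)
        feedback_btn = gr.Button("Feedback")

        # Gradio 4 uses js=, Gradio 3 used _js=
        kwarg = "js" if int(gr.__version__.split(".")[0]) >= 4 else "_js"
        feedback_btn.click(None, None, hidden_textbox, **{kwarg: js_feedback})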
2024-07-04 06:41:51 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-04 06:41:51 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-04 06:41:51 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-04 06:41:51 | ERROR | stderr | Traceback (most recent call last):
2024-07-04 06:41:51 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
2024-07-04 06:41:51 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-07-04 06:41:51 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
2024-07-04 06:41:51 | ERROR | stderr | exec(code, run_globals)
2024-07-04 06:41:51 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 1131, in <module>
2024-07-04 06:41:51 | ERROR | stderr | demo = build_demo(models)
2024-07-04 06:41:51 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 1033, in build_demo
2024-07-04 06:41:51 | ERROR | stderr | state, model_selector = build_single_model_ui(models)
2024-07-04 06:41:51 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 984, in build_single_model_ui
2024-07-04 06:41:51 | ERROR | stderr | feedback_btn.click(None, None, hidden_textbox, _js=js_feedback)
2024-07-04 06:41:51 | ERROR | stderr | TypeError: event_trigger() got an unexpected keyword argument '_js'
2024-07-04 06:41:52 | INFO | stdout | IMPORTANT: You are using gradio version 4.13.0, however version 4.29.0 is available, please upgrade.
2024-07-04 06:41:52 | INFO | stdout | --------
2024-07-04 06:42:24 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-04 06:42:24 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-04 06:42:24 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-04 06:42:24 | ERROR | stderr | Traceback (most recent call last):
2024-07-04 06:42:24 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
2024-07-04 06:42:24 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-07-04 06:42:24 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
2024-07-04 06:42:24 | ERROR | stderr | exec(code, run_globals)
2024-07-04 06:42:24 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 1131, in <module>
2024-07-04 06:42:24 | ERROR | stderr | demo = build_demo(models)
2024-07-04 06:42:24 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 1033, in build_demo
2024-07-04 06:42:24 | ERROR | stderr | state, model_selector = build_single_model_ui(models)
2024-07-04 06:42:24 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 988, in build_single_model_ui
2024-07-04 06:42:24 | ERROR | stderr | inputs=[hidden_textbox, num],
2024-07-04 06:42:24 | ERROR | stderr | NameError: name 'num' is not defined
2024-07-04 06:42:24 | INFO | stdout | IMPORTANT: You are using gradio version 3.50.0, however version 4.29.0 is available, please upgrade.
2024-07-04 06:42:24 | INFO | stdout | --------
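The NameError above is a plain scoping bug: `build_single_model_ui` lists `num` in `inputs=[hidden_textbox, num]` without ever creating a component bound to that name. Defining the component before wiring the event (or dropping it from the list) is the fix; a minimal sketch with hypothetical names, where `gr.Number` is only a guess at the intended component type:

    import gradio as gr

    with gr.Blocks() as demo:
        hidden_textbox = gr.Textbox(visible=False)
        num = gr.Number(value=0, visible=False)   # defining the component removes the NameError
        send_btn = gr.Button("Send")              # hypothetical trigger standing in for the real one
        # The failing line referenced `num` in inputs=[...] before any such component existed;
        # with the definition above the same wiring builds cleanly.
        send_btn.click(lambda text, n: None, inputs=[hidden_textbox, num], outputs=None)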
2024-07-04 06:43:05 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-04 06:43:05 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-04 06:43:05 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-04 06:43:05 | ERROR | stderr | /usr/local/lib/python3.8/dist-packages/gradio/utils.py:812: UserWarning: Expected 2 arguments for function <function save_feedback at 0x7ff434f193a0>, received 1.
2024-07-04 06:43:05 | ERROR | stderr | warnings.warn(
2024-07-04 06:43:05 | ERROR | stderr | /usr/local/lib/python3.8/dist-packages/gradio/utils.py:816: UserWarning: Expected at least 2 arguments for function <function save_feedback at 0x7ff434f193a0>, received 1.
2024-07-04 06:43:05 | ERROR | stderr | warnings.warn(
2024-07-04 06:43:05 | ERROR | stderr | Traceback (most recent call last):
2024-07-04 06:43:05 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
2024-07-04 06:43:05 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-07-04 06:43:05 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
2024-07-04 06:43:05 | ERROR | stderr | exec(code, run_globals)
2024-07-04 06:43:05 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 1132, in <module>
2024-07-04 06:43:05 | ERROR | stderr | demo.queue(
2024-07-04 06:43:05 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 837, in wrapper
2024-07-04 06:43:05 | ERROR | stderr | return queue(*args, **kwargs)
2024-07-04 06:43:05 | ERROR | stderr | TypeError: queue() got an unexpected keyword argument 'default_concurrency_limit'
2024-07-04 06:43:05 | INFO | stdout | IMPORTANT: You are using gradio version 3.50.0, however version 4.29.0 is available, please upgrade.
2024-07-04 06:43:05 | INFO | stdout | --------
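The paired UserWarnings ("Expected 2 arguments ... received 1" here, and "received 3" in the later 08:36:59 and 08:39:15 attempts) mean the number of components in the event's `inputs=` list does not match the number of parameters `save_feedback` declares; Gradio warns at build time and passes the wrong number of values at call time. The rule is one function parameter per input component, sketched below with hypothetical components and a hypothetical handler body:

    import gradio as gr

    def save_feedback(feedback_text, model_name):
        # two parameters, so the click below must list exactly two input components
        print(f"{model_name}: {feedback_text}")

    with gr.Blocks() as demo:
        hidden_textbox = gr.Textbox(visible=False)
        model_selector = gr.Dropdown(choices=["vicuna-7b-v1.5"], value="vicuna-7b-v1.5")
        feedback_btn = gr.Button("Send feedback")

        # inputs=[hidden_textbox] alone would reproduce "Expected 2 arguments ..., received 1";
        # a three-element list would reproduce the later "received 3" warning.
        feedback_btn.click(save_feedback, inputs=[hidden_textbox, model_selector], outputs=None)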
2024-07-04 06:43:29 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-04 06:43:29 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-04 06:43:29 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-04 06:43:29 | ERROR | stderr | /usr/local/lib/python3.8/dist-packages/gradio/utils.py:812: UserWarning: Expected 2 arguments for function <function save_feedback at 0x7f9a9d6923a0>, received 1.
2024-07-04 06:43:29 | ERROR | stderr | warnings.warn(
2024-07-04 06:43:29 | ERROR | stderr | /usr/local/lib/python3.8/dist-packages/gradio/utils.py:816: UserWarning: Expected at least 2 arguments for function <function save_feedback at 0x7f9a9d6923a0>, received 1.
2024-07-04 06:43:29 | ERROR | stderr | warnings.warn(
2024-07-04 06:43:30 | INFO | stdout | Running on local URL: http://0.0.0.0:7861
2024-07-04 06:43:30 | INFO | stdout |
2024-07-04 06:43:30 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-04 06:43:30 | INFO | stdout | IMPORTANT: You are using gradio version 3.50.0, however version 4.29.0 is available, please upgrade.
2024-07-04 06:43:30 | INFO | stdout | --------
2024-07-04 06:43:37 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: None
2024-07-04 06:43:37 | ERROR | stderr | Traceback (most recent call last):
2024-07-04 06:43:37 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/queueing.py", line 407, in call_prediction
2024-07-04 06:43:37 | ERROR | stderr | output = await route_utils.call_process_api(
2024-07-04 06:43:37 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/route_utils.py", line 226, in call_process_api
2024-07-04 06:43:37 | ERROR | stderr | output = await app.get_blocks().process_api(
2024-07-04 06:43:37 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1550, in process_api
2024-07-04 06:43:37 | ERROR | stderr | result = await self.call_function(
2024-07-04 06:43:37 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1185, in call_function
2024-07-04 06:43:37 | ERROR | stderr | prediction = await anyio.to_thread.run_sync(
2024-07-04 06:43:37 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 56, in run_sync
2024-07-04 06:43:37 | ERROR | stderr | return await get_async_backend().run_sync_in_worker_thread(
2024-07-04 06:43:37 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
2024-07-04 06:43:37 | ERROR | stderr | return await future
2024-07-04 06:43:37 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 859, in run
2024-07-04 06:43:37 | ERROR | stderr | result = context.run(func, *args)
2024-07-04 06:43:37 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 661, in wrapper
2024-07-04 06:43:37 | ERROR | stderr | response = f(*args, **kwargs)
2024-07-04 06:43:37 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 250, in load_demo
2024-07-04 06:43:37 | ERROR | stderr | return load_demo_single(models, url_params)
2024-07-04 06:43:37 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 229, in load_demo_single
2024-07-04 06:43:37 | ERROR | stderr | if "model" in url_params:
2024-07-04 06:43:37 | ERROR | stderr | TypeError: argument of type 'NoneType' is not iterable
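`load_demo. params: None` followed by `TypeError: argument of type 'NoneType' is not iterable` shows `load_demo_single` receiving `None` instead of a dict of URL parameters, so the membership test `"model" in url_params` fails. Guarding the argument is enough; a minimal sketch of that guard, with the function and variable names copied from the traceback but the body heavily simplified:

    def load_demo_single(models, url_params):
        url_params = url_params or {}            # treat a missing/None payload as "no params"
        selected_model = models[0] if models else ""
        if "model" in url_params and url_params["model"] in models:
            selected_model = url_params["model"]
        return selected_model

    # With the guard, load_demo_single(["vicuna-7b-v1.5"], None) returns "vicuna-7b-v1.5"
    # instead of raising the TypeError seen above.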
2024-07-04 06:44:35 | INFO | stdout | Keyboard interruption in main thread... closing server.
2024-07-04 06:45:48 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-04 06:45:48 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-04 06:45:48 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-04 06:45:48 | ERROR | stderr | Traceback (most recent call last):
2024-07-04 06:45:48 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
2024-07-04 06:45:48 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-07-04 06:45:48 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
2024-07-04 06:45:48 | ERROR | stderr | exec(code, run_globals)
2024-07-04 06:45:48 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 1131, in <module>
2024-07-04 06:45:48 | ERROR | stderr | demo = build_demo(models)
2024-07-04 06:45:48 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 1033, in build_demo
2024-07-04 06:45:48 | ERROR | stderr | state, model_selector = build_single_model_ui(models)
2024-07-04 06:45:48 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 984, in build_single_model_ui
2024-07-04 06:45:48 | ERROR | stderr | feedback_btn.click(None, None, hidden_textbox, _js=js_feedback)
2024-07-04 06:45:48 | ERROR | stderr | TypeError: event_trigger() got an unexpected keyword argument '_js'
2024-07-04 06:45:48 | INFO | stdout | IMPORTANT: You are using gradio version 4.20.0, however version 4.29.0 is available, please upgrade.
2024-07-04 06:45:48 | INFO | stdout | --------
2024-07-04 06:46:53 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-04 06:46:53 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-04 06:46:53 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-04 06:46:53 | ERROR | stderr | /usr/local/lib/python3.8/dist-packages/gradio/utils.py:855: UserWarning: Expected 2 arguments for function <function save_feedback at 0x7fccb504b040>, received 1.
2024-07-04 06:46:53 | ERROR | stderr | warnings.warn(
2024-07-04 06:46:53 | ERROR | stderr | /usr/local/lib/python3.8/dist-packages/gradio/utils.py:859: UserWarning: Expected at least 2 arguments for function <function save_feedback at 0x7fccb504b040>, received 1.
2024-07-04 06:46:53 | ERROR | stderr | warnings.warn(
2024-07-04 06:46:55 | INFO | stdout | Running on local URL: http://0.0.0.0:7861
2024-07-04 06:46:55 | INFO | stdout |
2024-07-04 06:46:55 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-04 06:46:55 | INFO | stdout | IMPORTANT: You are using gradio version 4.20.0, however version 4.29.0 is available, please upgrade.
2024-07-04 06:46:55 | INFO | stdout | --------
2024-07-04 06:47:04 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-04 06:47:16 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 2
2024-07-04 06:47:16 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-04 06:47:16 | INFO | gradio_web_server | monitor error: HTTPConnectionPool(host='localhost', port=9090): Max retries exceeded with url: /is_limit_reached?model=vicuna-7b-v1.5&user_id=127.0.0.1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fcca31f13a0>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-07-04 06:47:16 | INFO | gradio_web_server | model_name: vicuna-7b-v1.5, worker_addr: http://127.0.0.1:21003
2024-07-04 06:47:16 | INFO | gradio_web_server | ==== request ====
{'model': 'vicuna-7b-v1.5', 'prompt': "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hi ASSISTANT:", 'temperature': 0.7, 'repetition_penalty': 1.0, 'top_p': 1.0, 'max_new_tokens': 1024, 'stop': None, 'stop_token_ids': None, 'echo': False}
2024-07-04 06:47:24 | INFO | gradio_web_server | Hello! How can I help you today? Is there something on your mind that you would like to talk about or ask about? I'm here to assist with any questions you may have.
2024-07-04 06:52:49 | INFO | stdout | Keyboard interruption in main thread... closing server.
2024-07-04 06:52:57 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-04 06:52:57 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-04 06:52:57 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-04 06:52:57 | ERROR | stderr | /usr/local/lib/python3.8/dist-packages/gradio/utils.py:855: UserWarning: Expected 2 arguments for function <function save_feedback at 0x7f109f96e0d0>, received 1.
2024-07-04 06:52:57 | ERROR | stderr | warnings.warn(
2024-07-04 06:52:57 | ERROR | stderr | /usr/local/lib/python3.8/dist-packages/gradio/utils.py:859: UserWarning: Expected at least 2 arguments for function <function save_feedback at 0x7f109f96e0d0>, received 1.
2024-07-04 06:52:57 | ERROR | stderr | warnings.warn(
2024-07-04 06:52:57 | INFO | stdout | Running on local URL: http://0.0.0.0:7861
2024-07-04 06:52:57 | INFO | stdout |
2024-07-04 06:52:57 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-04 06:52:57 | INFO | stdout | IMPORTANT: You are using gradio version 4.20.0, however version 4.29.0 is available, please upgrade.
2024-07-04 06:52:57 | INFO | stdout | --------
2024-07-04 06:53:08 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-04 06:53:12 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 2
2024-07-04 06:53:12 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-04 06:53:12 | INFO | gradio_web_server | monitor error: HTTPConnectionPool(host='localhost', port=9090): Max retries exceeded with url: /is_limit_reached?model=vicuna-7b-v1.5&user_id=127.0.0.1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f109e383220>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-07-04 06:53:12 | INFO | gradio_web_server | model_name: vicuna-7b-v1.5, worker_addr: http://127.0.0.1:21003
2024-07-04 06:53:12 | INFO | gradio_web_server | ==== request ====
{'model': 'vicuna-7b-v1.5', 'prompt': "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hi ASSISTANT:", 'temperature': 0.7, 'repetition_penalty': 1.0, 'top_p': 1.0, 'max_new_tokens': 1024, 'stop': None, 'stop_token_ids': None, 'echo': False}
2024-07-04 06:53:17 | INFO | gradio_web_server | Hello! How can I help you today? Is there something you would like to talk about or ask me a question? I'm here to assist you in any way I can.
2024-07-04 06:56:52 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 18
2024-07-04 06:56:52 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-04 06:56:52 | INFO | gradio_web_server | monitor error: HTTPConnectionPool(host='localhost', port=9090): Max retries exceeded with url: /is_limit_reached?model=vicuna-7b-v1.5&user_id=127.0.0.1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f10900d8310>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-07-04 06:56:52 | INFO | gradio_web_server | model_name: vicuna-7b-v1.5, worker_addr: http://127.0.0.1:21003
2024-07-04 06:56:52 | INFO | gradio_web_server | ==== request ====
{'model': 'vicuna-7b-v1.5', 'prompt': "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hi ASSISTANT: Hello! How can I help you today? Is there something you would like to talk about or ask me a question? I'm here to assist you in any way I can.</s>USER: what is your name! ASSISTANT:", 'temperature': 0.7, 'repetition_penalty': 1.0, 'top_p': 1.0, 'max_new_tokens': 1024, 'stop': None, 'stop_token_ids': None, 'echo': False}
2024-07-04 06:56:55 | INFO | gradio_web_server | I'm a language model called Vicuna, and I was trained by Large Model Systems Organization (LMSYS) researchers.
2024-07-04 06:57:57 | INFO | gradio_web_server | downvote. ip: 127.0.0.1
2024-07-04 07:24:04 | INFO | stdout | Keyboard interruption in main thread... closing server.
2024-07-04 08:29:23 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-04 08:29:23 | INFO | gradio_web_server | All models: []
2024-07-04 08:29:23 | INFO | gradio_web_server | Visible models: []
2024-07-04 08:29:23 | ERROR | stderr | /usr/local/lib/python3.8/dist-packages/gradio/components/dropdown.py:173: UserWarning: The value passed into gr.Dropdown() is not in the list of choices. Please update the list of choices to include: or set allow_custom_value=True.
2024-07-04 08:29:23 | ERROR | stderr | warnings.warn(
2024-07-04 08:29:23 | ERROR | stderr | Traceback (most recent call last):
2024-07-04 08:29:23 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
2024-07-04 08:29:23 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-07-04 08:29:23 | ERROR | stderr | File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
2024-07-04 08:29:23 | ERROR | stderr | exec(code, run_globals)
2024-07-04 08:29:23 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 1151, in <module>
2024-07-04 08:29:23 | ERROR | stderr | demo = build_demo(models)
2024-07-04 08:29:23 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 1053, in build_demo
2024-07-04 08:29:23 | ERROR | stderr | state, model_selector = build_single_model_ui(models)
2024-07-04 08:29:23 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 1006, in build_single_model_ui
2024-07-04 08:29:23 | ERROR | stderr | feedback_btn.click(None, None, hidden_feedback, _js=js_feedback)
2024-07-04 08:29:23 | ERROR | stderr | TypeError: event_trigger() got an unexpected keyword argument '_js'
2024-07-04 08:29:23 | INFO | stdout | IMPORTANT: You are using gradio version 4.20.0, however version 4.29.0 is available, please upgrade.
2024-07-04 08:29:23 | INFO | stdout | --------
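At 08:29 the server comes up with `All models: []`, the dropdown warns about an empty value, and later requests log `model_name: , worker_addr:`; the controller at localhost:21002 reported no registered workers, so there is nothing to route to. Checking the controller directly makes this easy to spot. A small sketch, assuming the standard FastChat controller endpoints `/refresh_all_workers` and `/list_models` that the web UI uses to populate its model list:

    import requests

    controller_url = "http://localhost:21002"    # controller_url from the args line

    # Ask the controller to re-poll its workers, then list what it knows about.
    requests.post(controller_url + "/refresh_all_workers", timeout=5)
    models = requests.post(controller_url + "/list_models", timeout=5).json()["models"]
    print(models)   # [] here means the model worker is down or never (re)registered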
2024-07-04 08:29:45 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-04 08:29:45 | INFO | gradio_web_server | All models: []
2024-07-04 08:29:45 | INFO | gradio_web_server | Visible models: []
2024-07-04 08:29:45 | ERROR | stderr | /usr/local/lib/python3.8/dist-packages/gradio/components/dropdown.py:173: UserWarning: The value passed into gr.Dropdown() is not in the list of choices. Please update the list of choices to include: or set allow_custom_value=True.
2024-07-04 08:29:45 | ERROR | stderr | warnings.warn(
2024-07-04 08:29:47 | INFO | stdout | Running on local URL: http://0.0.0.0:7861
2024-07-04 08:29:47 | INFO | stdout |
2024-07-04 08:29:47 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-04 08:29:47 | INFO | stdout | IMPORTANT: You are using gradio version 4.20.0, however version 4.29.0 is available, please upgrade.
2024-07-04 08:29:47 | INFO | stdout | --------
2024-07-04 08:29:55 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-04 08:29:55 | ERROR | stderr | /usr/local/lib/python3.8/dist-packages/gradio/components/dropdown.py:173: UserWarning: The value passed into gr.Dropdown() is not in the list of choices. Please update the list of choices to include: or set allow_custom_value=True.
2024-07-04 08:29:55 | ERROR | stderr | warnings.warn(
2024-07-04 08:30:00 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 2
2024-07-04 08:30:00 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-04 08:30:00 | INFO | gradio_web_server | monitor error: HTTPConnectionPool(host='localhost', port=9090): Max retries exceeded with url: /is_limit_reached?model=&user_id=127.0.0.1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9ddebad2b0>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-07-04 08:30:00 | INFO | gradio_web_server | model_name: , worker_addr:
2024-07-04 08:30:11 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 11
2024-07-04 08:30:11 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-04 08:30:11 | INFO | gradio_web_server | monitor error: HTTPConnectionPool(host='localhost', port=9090): Max retries exceeded with url: /is_limit_reached?model=&user_id=127.0.0.1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9ddebb59d0>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-07-04 08:30:11 | INFO | gradio_web_server | model_name: , worker_addr:
2024-07-04 08:30:24 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-04 08:30:26 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 2
2024-07-04 08:30:27 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-04 08:30:27 | INFO | gradio_web_server | monitor error: HTTPConnectionPool(host='localhost', port=9090): Max retries exceeded with url: /is_limit_reached?model=&user_id=127.0.0.1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9defe4e8b0>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-07-04 08:30:27 | INFO | gradio_web_server | model_name: , worker_addr:
2024-07-04 08:30:43 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-04 08:30:49 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 14
2024-07-04 08:30:49 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-04 08:30:49 | INFO | gradio_web_server | monitor error: HTTPConnectionPool(host='localhost', port=9090): Max retries exceeded with url: /is_limit_reached?model=&user_id=127.0.0.1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f9defe39130>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-07-04 08:30:49 | INFO | gradio_web_server | model_name: , worker_addr:
2024-07-04 08:31:45 | INFO | stdout | Keyboard interruption in main thread... closing server.
2024-07-04 08:31:53 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-04 08:31:53 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-04 08:31:53 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-04 08:31:54 | INFO | stdout | Running on local URL: http://0.0.0.0:7861
2024-07-04 08:31:54 | INFO | stdout |
2024-07-04 08:31:54 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-04 08:31:54 | INFO | stdout | IMPORTANT: You are using gradio version 4.20.0, however version 4.29.0 is available, please upgrade.
2024-07-04 08:31:54 | INFO | stdout | --------
2024-07-04 08:31:58 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-04 08:32:01 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 2
2024-07-04 08:32:01 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-04 08:32:01 | INFO | gradio_web_server | monitor error: HTTPConnectionPool(host='localhost', port=9090): Max retries exceeded with url: /is_limit_reached?model=vicuna-7b-v1.5&user_id=127.0.0.1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb7b494d550>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-07-04 08:32:01 | INFO | gradio_web_server | model_name: vicuna-7b-v1.5, worker_addr: http://127.0.0.1:21003
2024-07-04 08:32:01 | INFO | gradio_web_server | ==== request ====
{'model': 'vicuna-7b-v1.5', 'prompt': "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hi ASSISTANT:", 'temperature': 0.7, 'repetition_penalty': 1.0, 'top_p': 1.0, 'max_new_tokens': 1024, 'stop': None, 'stop_token_ids': None, 'echo': False}
2024-07-04 08:32:06 | INFO | gradio_web_server | Hello! How can I assist you today?
2024-07-04 08:35:09 | INFO | stdout | Keyboard interruption in main thread... closing server.
2024-07-04 08:35:17 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-04 08:35:17 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-04 08:35:17 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-04 08:35:18 | INFO | stdout | Running on local URL: http://0.0.0.0:7861
2024-07-04 08:35:18 | INFO | stdout |
2024-07-04 08:35:18 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-04 08:35:18 | INFO | stdout | IMPORTANT: You are using gradio version 4.20.0, however version 4.29.0 is available, please upgrade.
2024-07-04 08:35:18 | INFO | stdout | --------
2024-07-04 08:35:22 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-04 08:35:28 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 14
2024-07-04 08:35:28 | ERROR | stderr | Traceback (most recent call last):
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/queueing.py", line 501, in call_prediction
2024-07-04 08:35:28 | ERROR | stderr | output = await route_utils.call_process_api(
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/route_utils.py", line 252, in call_process_api
2024-07-04 08:35:28 | ERROR | stderr | output = await app.get_blocks().process_api(
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1673, in process_api
2024-07-04 08:35:28 | ERROR | stderr | data = await anyio.to_thread.run_sync(
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 56, in run_sync
2024-07-04 08:35:28 | ERROR | stderr | return await get_async_backend().run_sync_in_worker_thread(
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
2024-07-04 08:35:28 | ERROR | stderr | return await future
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 859, in run
2024-07-04 08:35:28 | ERROR | stderr | result = context.run(func, *args)
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1429, in postprocess_data
2024-07-04 08:35:28 | ERROR | stderr | self.validate_outputs(fn_index, predictions) # type: ignore
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1403, in validate_outputs
2024-07-04 08:35:28 | ERROR | stderr | raise ValueError(
2024-07-04 08:35:28 | ERROR | stderr | ValueError: An event handler (add_text) didn't receive enough output values (needed: 9, received: 8).
2024-07-04 08:35:28 | ERROR | stderr | Wanted outputs:
2024-07-04 08:35:28 | ERROR | stderr | [state, chatbot, textbox, button, button, button, button, button, button]
2024-07-04 08:35:28 | ERROR | stderr | Received outputs:
2024-07-04 08:35:28 | ERROR | stderr | [<__main__.State object at 0x7f808d953f40>, [['hi how are you', None]], "", button, button, button, button, button]
2024-07-04 08:35:28 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-04 08:35:28 | ERROR | stderr | Traceback (most recent call last):
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/queueing.py", line 501, in call_prediction
2024-07-04 08:35:28 | ERROR | stderr | output = await route_utils.call_process_api(
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/route_utils.py", line 252, in call_process_api
2024-07-04 08:35:28 | ERROR | stderr | output = await app.get_blocks().process_api(
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1664, in process_api
2024-07-04 08:35:28 | ERROR | stderr | result = await self.call_function(
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1217, in call_function
2024-07-04 08:35:28 | ERROR | stderr | prediction = await utils.async_iteration(iterator)
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 514, in async_iteration
2024-07-04 08:35:28 | ERROR | stderr | return await iterator.__anext__()
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 507, in __anext__
2024-07-04 08:35:28 | ERROR | stderr | return await anyio.to_thread.run_sync(
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 56, in run_sync
2024-07-04 08:35:28 | ERROR | stderr | return await get_async_backend().run_sync_in_worker_thread(
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
2024-07-04 08:35:28 | ERROR | stderr | return await future
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 859, in run
2024-07-04 08:35:28 | ERROR | stderr | result = context.run(func, *args)
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 490, in run_sync_iterator_async
2024-07-04 08:35:28 | ERROR | stderr | return next(iterator)
2024-07-04 08:35:28 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 673, in gen_wrapper
2024-07-04 08:35:28 | ERROR | stderr | response = next(iterator)
2024-07-04 08:35:28 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 437, in bot_response
2024-07-04 08:35:28 | ERROR | stderr | if state.skip_next:
2024-07-04 08:35:28 | ERROR | stderr | AttributeError: 'NoneType' object has no attribute 'skip_next'
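The two tracebacks above are one bug seen twice: `add_text` is wired to nine outputs (state, chatbot, textbox plus six buttons) but returns only eight values (five buttons), so Gradio rejects the result, the shared `state` output is never updated, and the chained `bot_response` then receives `state=None` and fails on `state.skip_next`. Keeping the returned tuple in lockstep with the `outputs=` list fixes both. A schematic sketch, assuming the six-button layout shown in the "Wanted outputs" line; the State stand-in and handler body are hypothetical:

    import gradio as gr

    BTN_COUNT = 6                          # the UI in this log lists six buttons as outputs
    disable_btn = gr.update(interactive=False)

    def add_text(state, text):
        # Trimmed, hypothetical handler: whatever else it does, it must return one value
        # per entry in outputs=[state, chatbot, textbox, *btn_list].
        state = state or {"messages": []}                  # stand-in for the real State object
        state["messages"].append([text, None])
        return (state, state["messages"], "") + (disable_btn,) * BTN_COUNT   # 3 + 6 = 9 values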
2024-07-04 08:36:51 | INFO | stdout | Keyboard interruption in main thread... closing server.
2024-07-04 08:36:59 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-04 08:36:59 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-04 08:36:59 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-04 08:36:59 | ERROR | stderr | /usr/local/lib/python3.8/dist-packages/gradio/utils.py:855: UserWarning: Expected 2 arguments for function <function build_single_model_ui.<locals>.save_feedback at 0x7fdf3ed7b0d0>, received 3.
2024-07-04 08:36:59 | ERROR | stderr | warnings.warn(
2024-07-04 08:36:59 | ERROR | stderr | /usr/local/lib/python3.8/dist-packages/gradio/utils.py:863: UserWarning: Expected maximum 2 arguments for function <function build_single_model_ui.<locals>.save_feedback at 0x7fdf3ed7b0d0>, received 3.
2024-07-04 08:36:59 | ERROR | stderr | warnings.warn(
2024-07-04 08:37:00 | INFO | stdout | Running on local URL: http://0.0.0.0:7861
2024-07-04 08:37:00 | INFO | stdout |
2024-07-04 08:37:00 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-04 08:37:00 | INFO | stdout | IMPORTANT: You are using gradio version 4.20.0, however version 4.29.0 is available, please upgrade.
2024-07-04 08:37:00 | INFO | stdout | --------
2024-07-04 08:37:04 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-04 08:37:06 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 2
2024-07-04 08:37:06 | ERROR | stderr | Traceback (most recent call last):
2024-07-04 08:37:06 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/queueing.py", line 501, in call_prediction
2024-07-04 08:37:06 | ERROR | stderr | output = await route_utils.call_process_api(
2024-07-04 08:37:06 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/route_utils.py", line 252, in call_process_api
2024-07-04 08:37:06 | ERROR | stderr | output = await app.get_blocks().process_api(
2024-07-04 08:37:06 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1673, in process_api
2024-07-04 08:37:06 | ERROR | stderr | data = await anyio.to_thread.run_sync(
2024-07-04 08:37:06 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 56, in run_sync
2024-07-04 08:37:06 | ERROR | stderr | return await get_async_backend().run_sync_in_worker_thread(
2024-07-04 08:37:06 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
2024-07-04 08:37:06 | ERROR | stderr | return await future
2024-07-04 08:37:06 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 859, in run
2024-07-04 08:37:06 | ERROR | stderr | result = context.run(func, *args)
2024-07-04 08:37:06 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1429, in postprocess_data
2024-07-04 08:37:06 | ERROR | stderr | self.validate_outputs(fn_index, predictions) # type: ignore
2024-07-04 08:37:06 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1403, in validate_outputs
2024-07-04 08:37:06 | ERROR | stderr | raise ValueError(
2024-07-04 08:37:06 | ERROR | stderr | ValueError: An event handler (add_text) didn't receive enough output values (needed: 9, received: 8).
2024-07-04 08:37:06 | ERROR | stderr | Wanted outputs:
2024-07-04 08:37:06 | ERROR | stderr | [state, chatbot, textbox, button, button, button, button, button, button]
2024-07-04 08:37:06 | ERROR | stderr | Received outputs:
2024-07-04 08:37:06 | ERROR | stderr | [<__main__.State object at 0x7fdf3e930040>, [['hi', None]], "", button, button, button, button, button]
2024-07-04 08:37:07 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-04 08:37:07 | ERROR | stderr | Traceback (most recent call last):
2024-07-04 08:37:07 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/queueing.py", line 501, in call_prediction
2024-07-04 08:37:07 | ERROR | stderr | output = await route_utils.call_process_api(
2024-07-04 08:37:07 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/route_utils.py", line 252, in call_process_api
2024-07-04 08:37:07 | ERROR | stderr | output = await app.get_blocks().process_api(
2024-07-04 08:37:07 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1664, in process_api
2024-07-04 08:37:07 | ERROR | stderr | result = await self.call_function(
2024-07-04 08:37:07 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1217, in call_function
2024-07-04 08:37:07 | ERROR | stderr | prediction = await utils.async_iteration(iterator)
2024-07-04 08:37:07 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 514, in async_iteration
2024-07-04 08:37:07 | ERROR | stderr | return await iterator.__anext__()
2024-07-04 08:37:07 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 507, in __anext__
2024-07-04 08:37:07 | ERROR | stderr | return await anyio.to_thread.run_sync(
2024-07-04 08:37:07 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 56, in run_sync
2024-07-04 08:37:07 | ERROR | stderr | return await get_async_backend().run_sync_in_worker_thread(
2024-07-04 08:37:07 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
2024-07-04 08:37:07 | ERROR | stderr | return await future
2024-07-04 08:37:07 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 859, in run
2024-07-04 08:37:07 | ERROR | stderr | result = context.run(func, *args)
2024-07-04 08:37:07 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 490, in run_sync_iterator_async
2024-07-04 08:37:07 | ERROR | stderr | return next(iterator)
2024-07-04 08:37:07 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 673, in gen_wrapper
2024-07-04 08:37:07 | ERROR | stderr | response = next(iterator)
2024-07-04 08:37:07 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 437, in bot_response
2024-07-04 08:37:07 | ERROR | stderr | if state.skip_next:
2024-07-04 08:37:07 | ERROR | stderr | AttributeError: 'NoneType' object has no attribute 'skip_next'
2024-07-04 08:39:07 | INFO | stdout | Keyboard interruption in main thread... closing server.
2024-07-04 08:39:15 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-04 08:39:15 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-04 08:39:15 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-04 08:39:15 | ERROR | stderr | /usr/local/lib/python3.8/dist-packages/gradio/utils.py:855: UserWarning: Expected 2 arguments for function <function build_single_model_ui.<locals>.save_feedback at 0x7ffaf13a9d30>, received 3.
2024-07-04 08:39:15 | ERROR | stderr | warnings.warn(
2024-07-04 08:39:15 | ERROR | stderr | /usr/local/lib/python3.8/dist-packages/gradio/utils.py:863: UserWarning: Expected maximum 2 arguments for function <function build_single_model_ui.<locals>.save_feedback at 0x7ffaf13a9d30>, received 3.
2024-07-04 08:39:15 | ERROR | stderr | warnings.warn(
2024-07-04 08:39:16 | INFO | stdout | Running on local URL: http://0.0.0.0:7861
2024-07-04 08:39:16 | INFO | stdout |
2024-07-04 08:39:16 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-04 08:39:16 | INFO | stdout | IMPORTANT: You are using gradio version 4.20.0, however version 4.29.0 is available, please upgrade.
2024-07-04 08:39:16 | INFO | stdout | --------
2024-07-04 08:39:19 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-04 08:39:23 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 2
2024-07-04 08:39:23 | ERROR | stderr | Traceback (most recent call last):
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/queueing.py", line 501, in call_prediction
2024-07-04 08:39:23 | ERROR | stderr | output = await route_utils.call_process_api(
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/route_utils.py", line 252, in call_process_api
2024-07-04 08:39:23 | ERROR | stderr | output = await app.get_blocks().process_api(
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1673, in process_api
2024-07-04 08:39:23 | ERROR | stderr | data = await anyio.to_thread.run_sync(
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 56, in run_sync
2024-07-04 08:39:23 | ERROR | stderr | return await get_async_backend().run_sync_in_worker_thread(
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
2024-07-04 08:39:23 | ERROR | stderr | return await future
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 859, in run
2024-07-04 08:39:23 | ERROR | stderr | result = context.run(func, *args)
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1429, in postprocess_data
2024-07-04 08:39:23 | ERROR | stderr | self.validate_outputs(fn_index, predictions) # type: ignore
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1403, in validate_outputs
2024-07-04 08:39:23 | ERROR | stderr | raise ValueError(
2024-07-04 08:39:23 | ERROR | stderr | ValueError: An event handler (add_text) didn't receive enough output values (needed: 9, received: 8).
2024-07-04 08:39:23 | ERROR | stderr | Wanted outputs:
2024-07-04 08:39:23 | ERROR | stderr | [state, chatbot, textbox, button, button, button, button, button, button]
2024-07-04 08:39:23 | ERROR | stderr | Received outputs:
2024-07-04 08:39:23 | ERROR | stderr | [<__main__.State object at 0x7ffaf13ea700>, [['hi', None]], "", button, button, button, button, button]
2024-07-04 08:39:23 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-04 08:39:23 | ERROR | stderr | Traceback (most recent call last):
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/queueing.py", line 501, in call_prediction
2024-07-04 08:39:23 | ERROR | stderr | output = await route_utils.call_process_api(
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/route_utils.py", line 252, in call_process_api
2024-07-04 08:39:23 | ERROR | stderr | output = await app.get_blocks().process_api(
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1664, in process_api
2024-07-04 08:39:23 | ERROR | stderr | result = await self.call_function(
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1217, in call_function
2024-07-04 08:39:23 | ERROR | stderr | prediction = await utils.async_iteration(iterator)
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 514, in async_iteration
2024-07-04 08:39:23 | ERROR | stderr | return await iterator.__anext__()
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 507, in __anext__
2024-07-04 08:39:23 | ERROR | stderr | return await anyio.to_thread.run_sync(
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 56, in run_sync
2024-07-04 08:39:23 | ERROR | stderr | return await get_async_backend().run_sync_in_worker_thread(
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
2024-07-04 08:39:23 | ERROR | stderr | return await future
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 859, in run
2024-07-04 08:39:23 | ERROR | stderr | result = context.run(func, *args)
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 490, in run_sync_iterator_async
2024-07-04 08:39:23 | ERROR | stderr | return next(iterator)
2024-07-04 08:39:23 | ERROR | stderr | File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 673, in gen_wrapper
2024-07-04 08:39:23 | ERROR | stderr | response = next(iterator)
2024-07-04 08:39:23 | ERROR | stderr | File "/LLM_32T/evelyn/FastChat/fastchat/serve/gradio_web_server_evelyn.py", line 437, in bot_response
2024-07-04 08:39:23 | ERROR | stderr | if state.skip_next:
2024-07-04 08:39:23 | ERROR | stderr | AttributeError: 'NoneType' object has no attribute 'skip_next'
2024-07-04 08:43:51 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-04 10:40:43 | INFO | stdout | Keyboard interruption in main thread... closing server.
2024-07-05 07:26:25 | INFO | gradio_web_server | args: Namespace(concurrency_count=10, controller_url='http://localhost:21002', gradio_auth_path=None, gradio_root_path=None, host='0.0.0.0', model_list_mode='once', moderate=False, port=None, register_api_endpoint_file=None, share=False, show_terms_of_use=False, use_remote_storage=False)
2024-07-05 07:26:25 | INFO | gradio_web_server | All models: ['vicuna-7b-v1.5']
2024-07-05 07:26:25 | INFO | gradio_web_server | Visible models: ['vicuna-7b-v1.5']
2024-07-05 07:26:25 | INFO | stdout | Running on local URL: http://0.0.0.0:7860
2024-07-05 07:26:25 | INFO | stdout |
2024-07-05 07:26:25 | INFO | stdout | To create a public link, set `share=True` in `launch()`.
2024-07-05 07:26:25 | INFO | stdout | IMPORTANT: You are using gradio version 4.20.0, however version 4.29.0 is available, please upgrade.
2024-07-05 07:26:25 | INFO | stdout | --------
2024-07-05 07:26:32 | INFO | gradio_web_server | load_demo. ip: 127.0.0.1. params: {}
2024-07-05 07:26:38 | INFO | gradio_web_server | add_text. ip: 127.0.0.1. len: 2
2024-07-05 07:26:39 | INFO | gradio_web_server | bot_response. ip: 127.0.0.1
2024-07-05 07:26:39 | INFO | gradio_web_server | monitor error: HTTPConnectionPool(host='localhost', port=9090): Max retries exceeded with url: /is_limit_reached?model=vicuna-7b-v1.5&user_id=127.0.0.1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f3ca497a970>: Failed to establish a new connection: [Errno 111] Connection refused'))
2024-07-05 07:26:39 | INFO | gradio_web_server | model_name: vicuna-7b-v1.5, worker_addr: http://127.0.0.1:21003
2024-07-05 07:26:39 | INFO | gradio_web_server | ==== request ====
{'model': 'vicuna-7b-v1.5', 'prompt': "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hi ASSISTANT:", 'temperature': 0.7, 'repetition_penalty': 1.0, 'top_p': 1.0, 'max_new_tokens': 1024, 'stop': None, 'stop_token_ids': None, 'echo': False}
2024-07-05 07:26:40 | INFO | gradio_web_server | Hello! How can I assist you today?