Spaces:
Sleeping
/usr/lib/python3/dist-packages/requests/__init__.py:87: RequestsDependencyWarning: urllib3 (2.2.1) or chardet (4.0.0) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
2024-05-15 14:59:12.669109: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-05-15 14:59:14.457459: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data]   Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data]   Package stopwords is already up-to-date!
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
[nltk_data] Downloading package cmudict to /root/nltk_data...
[nltk_data]   Package cmudict is already up-to-date!
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data]   Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data]   Package stopwords is already up-to-date!
[nltk_data] Downloading package wordnet to /root/nltk_data...
[nltk_data]   Package wordnet is already up-to-date!
Collecting en_core_web_sm==2.3.1
  Using cached en_core_web_sm-2.3.1-py3-none-any.whl
Requirement already satisfied: spacy<2.4.0,>=2.3.0 in /usr/local/lib/python3.9/dist-packages (from en_core_web_sm==2.3.1) (2.3.9)
Requirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (3.0.9)
Requirement already satisfied: thinc<7.5.0,>=7.4.1 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (7.4.6)
Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (52.0.0)
Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.0.10)
Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (2.0.8)
Requirement already satisfied: blis<0.8.0,>=0.4.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (0.7.11)
Requirement already satisfied: wasabi<1.1.0,>=0.4.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (0.10.1)
Requirement already satisfied: tqdm<5.0.0,>=4.38.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (4.66.2)
Requirement already satisfied: numpy>=1.15.0 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.26.4)
Requirement already satisfied: requests<3.0.0,>=2.13.0 in /usr/lib/python3/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (2.25.1)
Requirement already satisfied: plac<1.2.0,>=0.9.6 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.1.3)
Requirement already satisfied: catalogue<1.1.0,>=0.0.7 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.0.2)
Requirement already satisfied: srsly<1.1.0,>=1.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<2.4.0,>=2.3.0->en_core_web_sm==2.3.1) (1.0.7)
✔ Download and installation successful
You can now load the model via spacy.load('en_core_web_sm')
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/gradio/queueing.py", line 527, in process_events
    response = await route_utils.call_process_api(
  File "/usr/local/lib/python3.9/dist-packages/gradio/route_utils.py", line 270, in call_process_api
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1847, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1433, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.9/dist-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.9/dist-packages/gradio/utils.py", line 788, in wrapper
    response = f(*args, **kwargs)
  File "/home/aliasgarov/copyright_checker/predictors.py", line 119, in update
    corrected_text, corrections = correct_text(text, bias_checker, bias_corrector)
NameError: name 'bias_checker' is not defined
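The `NameError` above means `bias_checker` was never bound at module scope in `predictors.py` before the Gradio callback fired. A minimal sketch of the usual fix is to bind both objects at import time, before the app can route a request to `update`. The loader below is a hypothetical stand-in (the lambdas are placeholders, not the project's real bias models):

```python
# Sketch of predictors.py with the globals bound at import time.
# load_bias_models and its lambda bodies are hypothetical stand-ins.

def load_bias_models():
    """Return (checker, corrector); real code would load the HF models here."""
    checker = lambda text: []        # stand-in: reports no biased spans
    corrector = lambda text: text    # stand-in: returns text unchanged
    return checker, corrector

# Bind at import time so every Gradio worker thread sees defined names.
bias_checker, bias_corrector = load_bias_models()

def correct_text(text, checker, corrector):
    corrections = checker(text)
    return corrector(text), corrections

def update(text):
    # This call raised NameError while bias_checker was undefined.
    corrected_text, corrections = correct_text(text, bias_checker, bias_corrector)
    return corrected_text, corrections
```

With the names bound before `demo.launch()` runs, the worker thread that executes `update` can resolve them and the traceback disappears.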
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
	- Avoid using `tokenizers` before the fork if possible
	- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
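The fork warning above names its own fix: set `TOKENIZERS_PARALLELISM` before any Hugging Face tokenizer is used and before the process forks. A one-line sketch, placed at the very top of the entry-point script:

```python
import os

# Must run before transformers/tokenizers are imported and before the
# process forks (e.g. before Gradio spawns its worker threads/processes).
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```

Setting it to `"false"` silences the warning at the cost of single-threaded tokenization in the forked workers; `"true"` keeps parallelism but risks the deadlock the warning describes.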
/usr/local/lib/python3.9/dist-packages/torch/cuda/__init__.py:619: UserWarning: Can't initialize NVML
  warnings.warn("Can't initialize NVML")
IMPORTANT: You are using gradio version 4.28.3, however version 4.29.0 is available, please upgrade.
--------
Running on local URL:  http://0.0.0.0:80
Running on public URL: https://a5b565cd42a2675e81.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
["OpenAI's chief scientist and co-founder, Ilya Sutskever, is leaving the artificial-intelligence company about six months after he voted to fire Chief Executive Sam Altman only to say he regretted the move days later"]