/usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
(message repeated 2 times)
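The FutureWarning above is advisory: `resume_download` no longer needs to be passed at all. If a forced re-fetch is actually wanted, the replacement the warning suggests looks like this (a minimal sketch; the repo id and filename are illustrative, taken from elsewhere in this log):

    from huggingface_hub import hf_hub_download

    # force_download=True re-fetches the file even if it is already cached
    path = hf_hub_download(
        repo_id="textattack/roberta-base-CoLA",
        filename="config.json",
        force_download=True,
    )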
2024-05-16 12:11:24.934695: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data]   Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data]   Package stopwords is already up-to-date!
/usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
(message repeated 5 times)
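This warning is emitted each time a model is converted with Optimum's BetterTransformer. A minimal sketch of the conversion that triggers it (the checkpoint name is taken from this log; inference-only use avoids the padded-training caveat the warning describes):

    from optimum.bettertransformer import BetterTransformer
    from transformers import AutoModelForSequenceClassification

    model = AutoModelForSequenceClassification.from_pretrained("textattack/roberta-base-CoLA")
    # Swaps in fused attention kernels; fine for inference, but padded
    # batches during *training* are unsupported, as the warning says.
    model = BetterTransformer.transform(model)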
Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
Framework not specified. Using pt to export the model.
Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Using the export variant default. Available variants are:
- default: The default ONNX variant.
***** Exporting submodel 1/1: RobertaForSequenceClassification *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
- use_cache -> False
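The "Exporting submodel" block is Optimum's ONNX exporter at work. A plausible way such an export is initiated (a sketch assuming the optimum.onnxruntime path, not necessarily the app's exact call; the checkpoint name matches the log):

    from optimum.onnxruntime import ORTModelForSequenceClassification

    # export=True converts the PyTorch checkpoint to ONNX on the fly, which
    # produces the "Using framework PyTorch" / "use_cache -> False" lines above
    ort_model = ORTModelForSequenceClassification.from_pretrained(
        "textattack/roberta-base-CoLA", export=True
    )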
Framework not specified. Using pt to export the model.
Using the export variant default. Available variants are:
- default: The default ONNX variant.
Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}
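The fix this warning asks for is to move those four parameters out of the model config and into a GenerationConfig. A hedged sketch of that migration (the save directory is a placeholder, not a path from the app):

    from transformers import GenerationConfig

    generation_config = GenerationConfig(
        max_length=512,
        min_length=8,
        num_beams=2,
        no_repeat_ngram_size=4,
    )
    # Written as generation_config.json alongside the model weights
    generation_config.save_pretrained("path/to/model_dir")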
/usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
***** Exporting submodel 1/3: T5Stack *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
- use_cache -> False
***** Exporting submodel 2/3: T5ForConditionalGeneration *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
- use_cache -> True
/usr/local/lib/python3.9/dist-packages/transformers/modeling_utils.py:1017: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if causal_mask.shape[1] < attention_mask.shape[1]:
***** Exporting submodel 3/3: T5ForConditionalGeneration *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
- use_cache -> True
/usr/local/lib/python3.9/dist-packages/transformers/models/t5/modeling_t5.py:503: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  elif past_key_value.shape[2] != key_value_states.shape[1]:
In-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode
(message repeated 2 times)
Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}
/usr/local/lib/python3.9/dist-packages/torch/cuda/__init__.py:619: UserWarning: Can't initialize NVML
  warnings.warn("Can't initialize NVML")
[nltk_data] Downloading package cmudict to /root/nltk_data...
[nltk_data]   Package cmudict is already up-to-date!
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data]   Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data]   Package stopwords is already up-to-date!
[nltk_data] Downloading package wordnet to /root/nltk_data...
[nltk_data]   Package wordnet is already up-to-date!
/usr/local/lib/python3.9/dist-packages/torch/cuda/__init__.py:619: UserWarning: Can't initialize NVML
  warnings.warn("Can't initialize NVML")
Collecting en-core-web-sm==3.7.1
  Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.7.1/en_core_web_sm-3.7.1-py3-none-any.whl (12.8 MB)
Requirement already satisfied: spacy<3.8.0,>=3.7.2 in /usr/local/lib/python3.9/dist-packages (from en-core-web-sm==3.7.1) (3.7.2)
Requirement already satisfied: langcodes<4.0.0,>=3.2.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (3.4.0)
Requirement already satisfied: typer<0.10.0,>=0.3.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.9.4)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (24.0)
Requirement already satisfied: smart-open<7.0.0,>=5.2.1 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (6.4.0)
Requirement already satisfied: wasabi<1.2.0,>=0.9.1 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.1.2)
Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.0.10)
Requirement already satisfied: spacy-loggers<2.0.0,>=1.0.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.0.5)
Requirement already satisfied: requests<3.0.0,>=2.13.0 in /usr/lib/python3/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.25.1)
Requirement already satisfied: srsly<3.0.0,>=2.4.3 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.4.8)
Requirement already satisfied: thinc<8.3.0,>=8.1.8 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (8.2.3)
Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (52.0.0)
Requirement already satisfied: numpy>=1.19.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.26.4)
Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.0.8)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (3.1.4)
Requirement already satisfied: tqdm<5.0.0,>=4.38.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (4.66.4)
Requirement already satisfied: catalogue<2.1.0,>=2.0.6 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.0.10)
Requirement already satisfied: weasel<0.4.0,>=0.1.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.3.4)
Requirement already satisfied: spacy-legacy<3.1.0,>=3.0.11 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (3.0.12)
Requirement already satisfied: pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.7.1)
Requirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (3.0.9)
Requirement already satisfied: language-data>=1.2 in /usr/local/lib/python3.9/dist-packages (from langcodes<4.0.0,>=3.2.0->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.2.0)
Requirement already satisfied: marisa-trie>=0.7.7 in /usr/local/lib/python3.9/dist-packages (from language-data>=1.2->langcodes<4.0.0,>=3.2.0->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.1.1)
Requirement already satisfied: pydantic-core==2.18.2 in /usr/local/lib/python3.9/dist-packages (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.18.2)
Requirement already satisfied: typing-extensions>=4.6.1 in /usr/local/lib/python3.9/dist-packages (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (4.11.0)
Requirement already satisfied: annotated-types>=0.4.0 in /usr/local/lib/python3.9/dist-packages (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.6.0)
Requirement already satisfied: blis<0.8.0,>=0.7.8 in /usr/local/lib/python3.9/dist-packages (from thinc<8.3.0,>=8.1.8->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.7.11)
Requirement already satisfied: confection<1.0.0,>=0.0.1 in /usr/local/lib/python3.9/dist-packages (from thinc<8.3.0,>=8.1.8->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.1.4)
Requirement already satisfied: click<9.0.0,>=7.1.1 in /usr/local/lib/python3.9/dist-packages (from typer<0.10.0,>=0.3.0->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (8.1.7)
Requirement already satisfied: cloudpathlib<0.17.0,>=0.7.0 in /usr/local/lib/python3.9/dist-packages (from weasel<0.4.0,>=0.1.0->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.16.0)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.9/dist-packages (from jinja2->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.1.5)
✔ Download and installation successful
You can now load the package via spacy.load('en_core_web_sm')
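As the installer notes, the model is now loadable by name; a minimal usage sketch (the sample sentence is illustrative):

    import spacy

    # Loads the en_core_web_sm 3.7.1 package installed above
    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Predicting financial market trends with NLP.")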
/usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
/usr/local/lib/python3.9/dist-packages/gradio/utils.py:924: UserWarning: Expected 1 arguments for function <function depth_analysis at 0x7f402afeff70>, received 2.
  warnings.warn(
/usr/local/lib/python3.9/dist-packages/gradio/utils.py:932: UserWarning: Expected maximum 1 arguments for function <function depth_analysis at 0x7f402afeff70>, received 2.
  warnings.warn(
IMPORTANT: You are using gradio version 4.26.0, however version 4.29.0 is available, please upgrade.
--------
Running on local URL: http://0.0.0.0:80
Running on public URL: https://3122a891c774a52363.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
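The paired local and *.gradio.live URLs come from a share-enabled launch. A sketch consistent with these lines (the Blocks body is a placeholder, not the app's actual layout):

    import gradio as gr

    with gr.Blocks() as demo:
        gr.Markdown("copyright checker")  # placeholder component

    # share=True creates the temporary *.gradio.live tunnel seen above
    demo.launch(server_name="0.0.0.0", server_port=80, share=True)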
correcting text..: 100%|██████████| 1/1 [00:00<00:00, 13.00it/s]
/usr/local/lib/python3.9/dist-packages/optimum/bettertransformer/models/encoder_models.py:301: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. (Triggered internally at ../aten/src/ATen/NestedTensorImpl.cpp:178.)
  hidden_states = torch._nested_tensor_from_mask(hidden_states, ~attention_mask)
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/gradio/queueing.py", line 527, in process_events
    response = await route_utils.call_process_api(
  File "/usr/local/lib/python3.9/dist-packages/gradio/route_utils.py", line 261, in call_process_api
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1786, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1338, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.9/dist-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.9/dist-packages/gradio/utils.py", line 759, in wrapper
    response = f(*args, **kwargs)
  File "/home/aliasgarov/copyright_checker/app.py", line 35, in ai_generated_test
    return predict_bc_scores(input), predict_mc_scores(input)
  File "/home/aliasgarov/copyright_checker/predictors.py", line 390, in predict_mc_scores
    for key in models
NameError: name 'models' is not defined
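This traceback was logged twice for the same failure: predict_mc_scores references a name `models` that is never bound in predictors.py's scope. Without the app's source, a plausible shape of the bug and its fix looks like this (all names and the body are hypothetical reconstructions, not the actual code):

    # predictors.py (hypothetical sketch): the comprehension at line 390
    # reads a module-level `models` mapping, so that mapping must be
    # defined (or imported) before predict_mc_scores runs.
    models = {}  # e.g. filled at startup with detector-name -> classifier

    def predict_mc_scores(text):
        # with `models` bound, the comprehension no longer raises NameError
        return {key: models[key](text) for key in models}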
/usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
(message repeated 2 times)
2024-05-16 12:30:06.614564: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data]   Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data]   Package stopwords is already up-to-date!
/usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
(message repeated 5 times)
Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
Framework not specified. Using pt to export the model.
Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Using the export variant default. Available variants are:
- default: The default ONNX variant.
***** Exporting submodel 1/1: RobertaForSequenceClassification *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
- use_cache -> False
Framework not specified. Using pt to export the model.
Using the export variant default. Available variants are:
- default: The default ONNX variant.
Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}
/usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
***** Exporting submodel 1/3: T5Stack *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
- use_cache -> False
***** Exporting submodel 2/3: T5ForConditionalGeneration *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
- use_cache -> True
/usr/local/lib/python3.9/dist-packages/transformers/modeling_utils.py:1017: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if causal_mask.shape[1] < attention_mask.shape[1]:
***** Exporting submodel 3/3: T5ForConditionalGeneration *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
- use_cache -> True
/usr/local/lib/python3.9/dist-packages/transformers/models/t5/modeling_t5.py:503: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  elif past_key_value.shape[2] != key_value_states.shape[1]:
In-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode
(message repeated 2 times)
Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}
/usr/local/lib/python3.9/dist-packages/torch/cuda/__init__.py:619: UserWarning: Can't initialize NVML
  warnings.warn("Can't initialize NVML")
[nltk_data] Downloading package cmudict to /root/nltk_data...
[nltk_data]   Package cmudict is already up-to-date!
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data]   Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data]   Package stopwords is already up-to-date!
[nltk_data] Downloading package wordnet to /root/nltk_data...
[nltk_data]   Package wordnet is already up-to-date!
/usr/local/lib/python3.9/dist-packages/torch/cuda/__init__.py:619: UserWarning: Can't initialize NVML
  warnings.warn("Can't initialize NVML")
Collecting en-core-web-sm==3.7.1
  Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.7.1/en_core_web_sm-3.7.1-py3-none-any.whl (12.8 MB)
Requirement already satisfied: spacy<3.8.0,>=3.7.2 in /usr/local/lib/python3.9/dist-packages (from en-core-web-sm==3.7.1) (3.7.2)
(remaining "Requirement already satisfied" lines identical to the first install above)
✔ Download and installation successful
You can now load the package via spacy.load('en_core_web_sm')
/usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
/usr/local/lib/python3.9/dist-packages/gradio/utils.py:924: UserWarning: Expected 1 arguments for function <function depth_analysis at 0x7f41bcff3f70>, received 2.
  warnings.warn(
/usr/local/lib/python3.9/dist-packages/gradio/utils.py:932: UserWarning: Expected maximum 1 arguments for function <function depth_analysis at 0x7f41bcff3f70>, received 2.
  warnings.warn(
WARNING: Invalid HTTP request received.
/usr/local/lib/python3.9/dist-packages/optimum/bettertransformer/models/encoder_models.py:301: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. (Triggered internally at ../aten/src/ATen/NestedTensorImpl.cpp:178.)
  hidden_states = torch._nested_tensor_from_mask(hidden_states, ~attention_mask)
IMPORTANT: You are using gradio version 4.26.0, however version 4.29.0 is available, please upgrade.
--------
Running on local URL: http://0.0.0.0:80
Running on public URL: https://0cdfd3e29fbd88d78e.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Original BC scores: AI: 1.0, HUMAN: 5.14828402131684e-09
Calibration BC scores: AI: 0.9995505136986301, HUMAN: 0.00044948630136987244
Input Text: <s>Predicting Financial Market Trends using Time Series Analysis and Natural Language Processing</s>
(messages repeated 2 times)
Original BC scores: AI: 1.0, HUMAN: 5.14828402131684e-09
Calibration BC scores: AI: 0.9995505136986301, HUMAN: 0.00044948630136987244
MC Score: {'OPENAI GPT': 0.9995232800159636, 'MISTRAL': 2.5594878925034267e-09, 'CLAUDE': 9.711950776533429e-08, 'GEMINI': 2.9362617399423087e-07, 'GRAMMAR ENHANCER': 2.6840377496865067e-05}
correcting text..: 100%|██████████| 1/1 [00:00<00:00, 4.61it/s]
Original BC scores: AI: 1.0, HUMAN: 5.14828402131684e-09
Calibration BC scores: AI: 0.9995505136986301, HUMAN: 0.00044948630136987244
Input Text: <s>Predicting Financial Market Trends using Time Series Analysis and Natural Language Processing</s>
Original BC scores: AI: 1.0, HUMAN: 5.14828402131684e-09
Calibration BC scores: AI: 0.9995505136986301, HUMAN: 0.00044948630136987244
MC Score: {'OPENAI GPT': 0.9995232800159636, 'MISTRAL': 2.5594878925034267e-09, 'CLAUDE': 9.711950776533429e-08, 'GEMINI': 2.9362617399423087e-07, 'GRAMMAR ENHANCER': 2.6840377496865067e-05}
correcting text..: 100%|██████████| 1/1 [00:00<00:00, 35.91it/s]
correcting text..: 100%|██████████| 1/1 [00:00<00:00, 37.42it/s]
/home/aliasgarov/copyright_checker/predictors.py:247: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  probas = F.softmax(tensor_logits).detach().cpu().numpy()
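The fix this deprecation asks for is simply an explicit dimension argument. A minimal sketch of the corrected call (the logits tensor here is a stand-in, since the app's source is not shown):

    import torch
    import torch.nn.functional as F

    tensor_logits = torch.randn(1, 2)  # stand-in for the classifier logits
    # dim=-1 makes the class dimension explicit and silences the warning
    probas = F.softmax(tensor_logits, dim=-1).detach().cpu().numpy()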
{'Predicting Financial Market Trends using Time Series Analysis and Natural Language Processing': -0.01068235683389165} quillbot
correcting text..: 100%|██████████| 1/1 [00:00<00:00, 36.51it/s]
Some characters could not be decoded, and were replaced with REPLACEMENT CHARACTER.
["Hö'elün (fl.", '1162–1210) was a Mongolian noblewoman and the mother of Temüjin, better known as Genghis Khan.', 'She played a major role in his rise to power.', "Born into the Olkhonud clan of the Onggirat tribe, Hö'elün was originally married to Chiledu, but was captured shortly after her wedding by Yesügei, an important member of the Mongols, becoming his primary wife.", 'She and Yesügei had three sons and one daughter, as well as Temüjin.', "After Yesügei was fatally poisoned and the Mongols abandoned her family, Hö'elün shepherded all her children through poverty to adulthood—her resilience and organisational skills have been remarked upon by historians.", "She continued to play an important role after Temüjin's marriage to Börte.", "Hö'elün married Münglig, an old retainer of Yesügei, in thanks for his support after a damaging defeat.", "During the next decades, she arranged marriages, maintained alliances, and was heavily involved in disputes between Genghis, his brothers, and Münglig's sons."]
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
(message repeated 7 more times)
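The remediation the warning itself suggests is a one-line environment setting, applied before any tokenizer runs in the parent process (e.g. at the top of app.py; the placement is an assumption):

    import os

    # Must be set before tokenizers are used in the process that forks
    os.environ["TOKENIZERS_PARALLELISM"] = "false"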
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/gradio/queueing.py", line 527, in process_events
    response = await route_utils.call_process_api(
  File "/usr/local/lib/python3.9/dist-packages/gradio/route_utils.py", line 261, in call_process_api
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1786, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.9/dist-packages/gradio/blocks.py", line 1338, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.9/dist-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.9/dist-packages/gradio/utils.py", line 759, in wrapper
    response = f(*args, **kwargs)
  File "/home/aliasgarov/copyright_checker/app.py", line 66, in main
    depth_analysis_plot = depth_analysis(bias_buster_selected, input)
TypeError: depth_analysis() takes 1 positional argument but 2 were given
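This failure (logged twice in the section) matches the earlier Gradio startup warnings "Expected 1 arguments for function <function depth_analysis ...>, received 2": the event handler wires two inputs to a one-parameter function. Without the app's source, a plausible fix is to widen the signature (the parameter names come from the traceback; the body is hypothetical):

    # app.py (sketch): the handler passes (bias_buster_selected, input),
    # so depth_analysis must accept both arguments.
    def depth_analysis(bias_buster_selected, input_text):
        ...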
PLAGIARISM PROCESSING TIME: 10.25221395799963
['Predicting Financial Market Trends using Time Series Analysis and Natural\nLanguage Processing']
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
(message repeated 7 more times)
/usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
(message repeated 2 times)
2024-05-16 13:20:02.406179: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data]   Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data]   Package stopwords is already up-to-date!
/usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
(message repeated 5 times)
Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
Framework not specified. Using pt to export the model.
Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Using the export variant default. Available variants are:
- default: The default ONNX variant.
***** Exporting submodel 1/1: RobertaForSequenceClassification *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
- use_cache -> False
Framework not specified. Using pt to export the model.
Using the export variant default. Available variants are:
- default: The default ONNX variant.
Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}
/usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
***** Exporting submodel 1/3: T5Stack *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
- use_cache -> False
/usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
(message repeated 2 times)
2024-05-16 13:34:26.723234: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data]   Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data]   Package stopwords is already up-to-date!
/usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
(message repeated 5 times)
Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
The BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co/docs/optimum/bettertransformer/overview for more details.
Framework not specified. Using pt to export the model.
Some weights of the model checkpoint at textattack/roberta-base-CoLA were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
- This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Using the export variant default. Available variants are:
- default: The default ONNX variant.
***** Exporting submodel 1/1: RobertaForSequenceClassification *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
- use_cache -> False
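The export lines above correspond to an ONNX export through Optimum. A sketch of one way to trigger an equivalent export (the output path is an assumption):

```python
# Sketch: exporting the sequence classifier to ONNX via Optimum's
# ONNX Runtime wrapper. export=True triggers the submodel export
# logged above.
from optimum.onnxruntime import ORTModelForSequenceClassification

ort_model = ORTModelForSequenceClassification.from_pretrained(
    "textattack/roberta-base-CoLA",
    export=True,
)
ort_model.save_pretrained("onnx/roberta-base-cola")  # assumed path
```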
Framework not specified. Using pt to export the model.
Using the export variant default. Available variants are:
- default: The default ONNX variant.
Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
Non-default generation parameters: {'max_length': 512, 'min_length': 8, 'num_beams': 2, 'no_repeat_ngram_size': 4}
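The warning suggests moving these decoding parameters into a standalone GenerationConfig. A sketch of the suggested fix, assuming write access to the model directory (the save path is an assumption):

```python
# Sketch of the fix the warning recommends: store the non-default
# decoding parameters in a GenerationConfig rather than the model config.
from transformers import GenerationConfig

gen_config = GenerationConfig(
    max_length=512,
    min_length=8,
    num_beams=2,
    no_repeat_ngram_size=4,
)
gen_config.save_pretrained("path/to/model_dir")  # writes generation_config.json
```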
***** Exporting submodel 1/3: T5Stack *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
- use_cache -> False
***** Exporting submodel 2/3: T5ForConditionalGeneration *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
- use_cache -> True
/usr/local/lib/python3.9/dist-packages/transformers/modeling_utils.py:1017: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if causal_mask.shape[1] < attention_mask.shape[1]:
***** Exporting submodel 3/3: T5ForConditionalGeneration *****
Using framework PyTorch: 2.3.0+cu121
Overriding 1 configuration item(s)
- use_cache -> True
/usr/local/lib/python3.9/dist-packages/transformers/models/t5/modeling_t5.py:503: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  elif past_key_value.shape[2] != key_value_states.shape[1]:
In-place op on output of tensor.shape. See https://pytorch.org/docs/master/onnx.html#avoid-inplace-operations-when-using-tensor-shape-in-tracing-mode
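The three-submodel export above (encoder, decoder, decoder-with-past) matches Optimum's seq2seq ONNX path. A sketch of an equivalent export call; the checkpoint name is an assumption, since the log never names the T5 model:

```python
# Sketch: a seq2seq ONNX export that yields the encoder/decoder/
# decoder-with-past submodels logged above. Checkpoint is assumed.
from optimum.onnxruntime import ORTModelForSeq2SeqLM

ort_t5 = ORTModelForSeq2SeqLM.from_pretrained(
    "t5-base",
    export=True,
)
ort_t5.save_pretrained("onnx/t5")  # assumed output directory
```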
/usr/local/lib/python3.9/dist-packages/torch/cuda/__init__.py:619: UserWarning: Can't initialize NVML
  warnings.warn("Can't initialize NVML")
[nltk_data] Downloading package cmudict to /root/nltk_data...
[nltk_data] Package cmudict is already up-to-date!
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package wordnet to /root/nltk_data...
[nltk_data] Package wordnet is already up-to-date!
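These messages come from standard NLTK data downloads; a short sketch of the calls that produce them:

```python
# Sketch: fetching the NLTK resources listed above; each call is a
# no-op when the package is already up to date.
import nltk

for pkg in ("cmudict", "punkt", "stopwords", "wordnet"):
    nltk.download(pkg)
```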
Collecting en-core-web-sm==3.7.1
  Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.7.1/en_core_web_sm-3.7.1-py3-none-any.whl (12.8 MB)
Requirement already satisfied: spacy<3.8.0,>=3.7.2 in /usr/local/lib/python3.9/dist-packages (from en-core-web-sm==3.7.1) (3.7.2)
Requirement already satisfied: smart-open<7.0.0,>=5.2.1 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (6.4.0)
Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (52.0.0)
Requirement already satisfied: spacy-legacy<3.1.0,>=3.0.11 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (3.0.12)
Requirement already satisfied: catalogue<2.1.0,>=2.0.6 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.0.10)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (24.0)
Requirement already satisfied: typer<0.10.0,>=0.3.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.9.4)
Requirement already satisfied: numpy>=1.19.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.26.4)
Requirement already satisfied: srsly<3.0.0,>=2.4.3 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.4.8)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (3.1.4)
Requirement already satisfied: langcodes<4.0.0,>=3.2.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (3.4.0)
Requirement already satisfied: tqdm<5.0.0,>=4.38.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (4.66.4)
Requirement already satisfied: weasel<0.4.0,>=0.1.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.3.4)
Requirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (3.0.9)
Requirement already satisfied: thinc<8.3.0,>=8.1.8 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (8.2.3)
Requirement already satisfied: wasabi<1.2.0,>=0.9.1 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.1.2)
Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.0.10)
Requirement already satisfied: pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.7.1)
Requirement already satisfied: spacy-loggers<2.0.0,>=1.0.0 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.0.5)
Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.9/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.0.8)
Requirement already satisfied: requests<3.0.0,>=2.13.0 in /usr/lib/python3/dist-packages (from spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.25.1)
Requirement already satisfied: language-data>=1.2 in /usr/local/lib/python3.9/dist-packages (from langcodes<4.0.0,>=3.2.0->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.2.0)
Requirement already satisfied: marisa-trie>=0.7.7 in /usr/local/lib/python3.9/dist-packages (from language-data>=1.2->langcodes<4.0.0,>=3.2.0->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (1.1.1)
Requirement already satisfied: typing-extensions>=4.6.1 in /usr/local/lib/python3.9/dist-packages (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (4.11.0)
Requirement already satisfied: annotated-types>=0.4.0 in /usr/local/lib/python3.9/dist-packages (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.6.0)
Requirement already satisfied: pydantic-core==2.18.2 in /usr/local/lib/python3.9/dist-packages (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.18.2)
Requirement already satisfied: blis<0.8.0,>=0.7.8 in /usr/local/lib/python3.9/dist-packages (from thinc<8.3.0,>=8.1.8->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.7.11)
Requirement already satisfied: confection<1.0.0,>=0.0.1 in /usr/local/lib/python3.9/dist-packages (from thinc<8.3.0,>=8.1.8->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.1.4)
Requirement already satisfied: click<9.0.0,>=7.1.1 in /usr/local/lib/python3.9/dist-packages (from typer<0.10.0,>=0.3.0->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (8.1.7)
Requirement already satisfied: cloudpathlib<0.17.0,>=0.7.0 in /usr/local/lib/python3.9/dist-packages (from weasel<0.4.0,>=0.1.0->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (0.16.0)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.9/dist-packages (from jinja2->spacy<3.8.0,>=3.7.2->en-core-web-sm==3.7.1) (2.1.5)
✔ Download and installation successful
You can now load the package via spacy.load('en_core_web_sm')
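As the installer notes, the pipeline can now be loaded by name. A short usage sketch (the example sentence is illustrative):

```python
# Sketch: loading the freshly installed spaCy pipeline and running it
# on a sample sentence.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The quick brown fox jumps over the lazy dog.")
print([(token.text, token.pos_) for token in doc])
```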
/usr/local/lib/python3.9/dist-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
2024-05-24 14:36:36.892111: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
Traceback (most recent call last):
  File "/home/aliasgarov/copyright_checker/app.py", line 4, in <module>
    from predictors import predict_bc_scores, predict_mc_scores
  File "/home/aliasgarov/copyright_checker/predictors.py", line 39, in <module>
    ).to(device)
NameError: name 'device' is not defined
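The crash is a plain NameError: predictors.py calls .to(device) at module import time before device is defined. A sketch of the usual fix, placed near the top of predictors.py (the surrounding code is not visible in this log, so the exact placement is an assumption):

```python
# Sketch of a fix for the NameError above: define `device` before any
# `.to(device)` call in predictors.py. Placement is an assumption.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# The failing line (predictors.py:39) can then move its model safely,
# e.g. model = AutoModelForSequenceClassification.from_pretrained(...).to(device)
```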