---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: Neither this act nor any other act relating to said Cherokee Indians of Robeson County shall be construed so as to impose on said Indians any powers, privileges, rights or immunities, or limitations on their power to contract, heretofore enacted with reference to the eastern band of Cherokee Indians residing in Cherokee, Graham, Swain, Jackson and other adjoining counties in North Carolina, or any other band or tribe of Cherokee Indians other than those now residing, or who have, since the Revolutionary War, resided in Robeson County, nor shall said Cherokee Indians of Robeson County, as herein designated, be subject to the limitations provided in sections nine hundred and seventy-five and nine hundred and seventy-six of The Revisal of one thousand nine hundred and five of North Carolina. ;
- text: That Section one hundred and twenty-two eightythree of the General Statutes of North Carolina is hereby amended by striking out the word insane in the catch line and in lines two, four, nine and fifteen and inserting in lieu thereof the words mentally disordered.
datasets:
- biglam/on_the_books
co2_eq_emissions:
  emissions: 0.2641096478393395
license: mit
library_name: transformers
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 64771135885
- CO2 Emissions (in grams): 0.2641
## Validation Metrics
- Loss: 0.057
- Accuracy: 0.986
- Precision: 0.988
- Recall: 0.992
- AUC: 0.998
- F1: 0.990
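
As a quick consistency check, the reported F1 is the harmonic mean of the precision and recall above; a small illustrative computation:

```python
precision, recall = 0.988, 0.992

# F1 is the harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # ~0.990, matching the reported F1
```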
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-testblog-64771135885
```
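
The same Inference API call can be made from Python with the `requests` library; a minimal sketch (the `YOUR_API_KEY` placeholder stands in for a real Hugging Face token, as in the cURL example above):

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/davanstrien/autotrain-testblog-64771135885"
headers = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder; use your own Hugging Face token

# POST a single text and print the returned label scores
response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```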
Or use the `transformers` Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model and tokenizer (use_auth_token is needed for private repos)
model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-testblog-64771135885", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-testblog-64771135885", use_auth_token=True)

# Tokenize the input text and run a forward pass to get classification logits
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
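
The outputs above are raw logits. As a minimal sketch continuing the snippet (assuming `torch` is installed and that label names were stored in the model config during training), you can convert them to a predicted label:

```python
import torch

# Convert logits to class probabilities and pick the highest-scoring class
probs = torch.softmax(outputs.logits, dim=-1)
predicted_class = probs.argmax(dim=-1).item()

# id2label maps class indices back to the label names used at training time
print(model.config.id2label[predicted_class], probs[0, predicted_class].item())
```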