Commit 0a1bdfc (parent: 444a8db): add info on using the model

README.md, changed: @@ -27,6 +27,35 @@ More information needed
## Intended uses & limitations

### Using in a transformer pipeline

The easiest way to use this model is via a [Transformers pipeline](https://huggingface.co/docs/transformers/main/en/pipeline_tutorial#vision-pipeline). To do this, first load the model and feature extractor:

```python
from transformers import AutoFeatureExtractor, AutoModelForObjectDetection

extractor = AutoFeatureExtractor.from_pretrained("davanstrien/detr-resnet-50_fine_tuned_nls_chapbooks")
model = AutoModelForObjectDetection.from_pretrained("davanstrien/detr-resnet-50_fine_tuned_nls_chapbooks")
```

Then you can create a pipeline for object detection using the model:

```python
from transformers import pipeline

pipe = pipeline("object-detection", model=model, feature_extractor=extractor)
```

To make predictions, pass in an image (or a file path/URL for the image):

```python
>>> pipe("https://huggingface.co/davanstrien/detr-resnet-50_fine_tuned_nls_chapbooks/resolve/main/Chapbook_Jack_the_Giant_Killer.jpg")
[{'box': {'xmax': 290, 'xmin': 70, 'ymax': 510, 'ymin': 261},
  'label': 'early_printed_illustration',
  'score': 0.998455286026001}]
```
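The `box` coordinates in each detection can be used to crop the detected illustration out of the page image. Below is a minimal sketch using Pillow; the detection dict is copied from the example output above, while the blank placeholder image and the `crop_detection` helper are illustrative assumptions, not part of this model's API:

```python
from PIL import Image

# Example detection, in the format returned by the pipeline above
detection = {
    "box": {"xmax": 290, "xmin": 70, "ymax": 510, "ymin": 261},
    "label": "early_printed_illustration",
    "score": 0.998455286026001,
}


def crop_detection(image: Image.Image, detection: dict) -> Image.Image:
    """Crop the region described by a pipeline detection out of the image."""
    box = detection["box"]
    return image.crop((box["xmin"], box["ymin"], box["xmax"], box["ymax"]))


# Demo with a blank placeholder page; replace with Image.open("page.jpg")
page = Image.new("RGB", (600, 900), "white")
illustration = crop_detection(page, detection)
print(illustration.size)  # (220, 249)
```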

More information needed

## Training and evaluation data