STEM-AI-mtl committed on
Commit
ca67c85
1 Parent(s): 8a95488

Update README.md

Files changed (1)
  1. README.md +2 -8
README.md CHANGED
@@ -12,12 +12,6 @@ tags:
 datasets:
 - STEM-AI-mtl/City_map
 
-widget:
-- image: https://cdn.britannica.com/50/69550-050-B9DA3DCA/Central-New-York-City-borough-Manhattan-Park.jpg
-  output:
-    text: NYC
-metrics:
-- accuracy
 ---
 
 # The fine-tuned ViT model that beats [Google's state-of-the-art model](https://huggingface.co/google/vit-base-patch16-224) and OpenAI's famous GPT4
@@ -30,7 +24,7 @@ The Vision Transformer (ViT) base model is a transformer encoder model (BERT-like
 
 ### How to use:
 
-[Inference script](https://github.com/STEM-ai/Vision/raw/7d92c8daa388eb74e8c336f2d0d3942722fec3c6/ViT_inference.py)
+[Inference script](https://github.com/STEM-ai/Vision/blob/7d92c8daa388eb74e8c336f2d0d3942722fec3c6/ViT_inference.py)
 
 For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/vit.html#).
@@ -46,7 +40,7 @@ A Transformer training was performed on [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224)
 
 ## Training evaluation results
 
-The most accurate output model was obtained from a learning rate of 1e-3. The quality of the training was evaluated with the training dataset and resulted in the following metrics:\
+The most accurate output model was obtained from a learning rate of 1e-3. The quality of the training was evaluated with the training dataset and resulted in the following metrics:
 
 {'eval_loss': 1.3691096305847168,\
 'eval_accuracy': 0.6666666666666666,\
 
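For context on the "How to use" section touched by this diff: the linked `ViT_inference.py` is the authoritative script, but a minimal sketch of standard `transformers` ViT image-classification inference looks roughly like the following. This is an assumption about the script's contents, not a copy of it; the base checkpoint `google/vit-base-patch16-224` from the README is used as a stand-in for the fine-tuned model, and the blank test image is a placeholder for a real city-map image.

```python
# Minimal ViT inference sketch (assumed pattern; the real repo script may differ).
from PIL import Image
import torch
from transformers import ViTImageProcessor, ViTForImageClassification

# Stand-in checkpoint; the fine-tuned STEM-AI-mtl model would be loaded the same way.
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
model.eval()

# Placeholder image; in practice this would be a city-map photo, e.g. loaded
# with Image.open(path).
image = Image.new("RGB", (224, 224), color="white")

# Preprocess (resize/normalize) and run a forward pass without gradients.
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its label string.
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```

The same three steps (processor, model, argmax over logits) apply to any `ViTForImageClassification` checkpoint; only the model id changes.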