TiC-CLIP · Zero-Shot Image Classification

fartashf committed bcf88e9 (1 Parent(s): 1dd32d7)

Add files using large-upload tool

Files changed (1):
  1. README.md (+12 -69)

README.md CHANGED
@@ -88,79 +88,22 @@ python evaluate.py --data_dir data/ --train_output_dir ./results --use_model "Vi
 
 <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
- [More Information Needed]
+ Please refer to [TiC-DataComp](https://huggingface.co/datasets/apple/TiC-DataComp).
 
 ### Training Procedure
 
 Please refer to Sections 2-3 of our [TiC-CLIP](https://github.com/apple/ml-tic-clip) paper.
 
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
+ ## Citation
+
+ **[TiC-CLIP: Continual Training of CLIP Models](https://arxiv.org/abs/2310.16226). (ICLR 2024)**
+ *Garg, S., Farajtabar, M., Pouransari, H., Vemulapalli, R., Mehta, S., Tuzel, O., Shankar, V. and Faghri, F.*
+
+ ```bibtex
+ @inproceedings{garg2024tic,
+   title={TiC-CLIP: Continual Training of CLIP Models},
+   author={Garg, Saurabh and Farajtabar, Mehrdad and Pouransari, Hadi and Vemulapalli, Raviteja and Mehta, Sachin and Tuzel, Oncel and Shankar, Vaishaal and Faghri, Fartash},
+   booktitle={The Twelfth International Conference on Learning Representations (ICLR)},
+   year={2024},
+   url={https://openreview.net/forum?id=TLADT8Wrhn}
+ }
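
With this change the card points to TiC-DataComp for training data and keeps the model under the Zero-Shot Image Classification pipeline. For orientation, here is a minimal zero-shot inference sketch using open_clip (the library the upstream TiC-CLIP repository builds on); the architecture name `ViT-B-16`, the local path `./checkpoint.pt`, the image file, and the class prompts are illustrative assumptions, not values taken from this commit:

```python
# Minimal zero-shot classification sketch (pip install open_clip_torch).
# "ViT-B-16" and "./checkpoint.pt" are placeholders: substitute the
# architecture and locally downloaded TiC-CLIP checkpoint you actually use.
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-16", pretrained="./checkpoint.pt"
)
tokenizer = open_clip.get_tokenizer("ViT-B-16")
model.eval()

# Preprocess one image and tokenize candidate class prompts (hypothetical labels).
image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # L2-normalize both embeddings, then score classes by scaled cosine similarity.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # one probability per class prompt
```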