
We use the `SequenceClassification` model as a reference to create our own sequence classification model, in which a classification layer is attached on top of the pre-trained BERT model in order to perform multi-class classification. Following the convention for the English VQA task, 3129 answer labels are chosen (the label set can be found here); these are the same labels used when fine-tuning the VisualBERT models. The outputs shown here have been translated using the mtranslate Google Translate API library. We then take various pre-trained checkpoints and train the sequence classification model for varying numbers of steps.
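
As an illustration, the sketch below shows one way to attach such a classification head using the Hugging Face `transformers` API. This is a minimal sketch, not the project's actual code: the checkpoint name and example question are assumptions chosen for demonstration.

```python
# Minimal sketch: a classification head with 3129 answer labels on top of
# a pre-trained BERT encoder, via Hugging Face transformers.
# NOTE: the checkpoint name is an illustrative assumption.
from transformers import BertTokenizerFast, BertForSequenceClassification

NUM_ANSWERS = 3129  # answer vocabulary size conventionally used for English VQA

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-uncased",
    num_labels=NUM_ANSWERS,  # adds a linear classification layer on top of BERT
)

# Hypothetical question; in the actual model the text is paired with image features.
inputs = tokenizer("What color is the cat?", return_tensors="pt")
logits = model(**inputs).logits          # shape: (1, NUM_ANSWERS)
answer_id = logits.argmax(-1).item()     # index into the 3129 answer labels
```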