---
language:
  - de
  - en
  - es
  - fr
---

# Model Card for answer-finder-v1-L-multilingual

This is an extractive question answering model developed by Sinequa. Given a question and a passage, it produces two lists of logit scores, one for the start token and one for the end token of the answer span.
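
How the two logit lists become an answer can be sketched as follows: score every valid `(start, end)` token pair and keep the best one. This is an illustrative sketch with toy logits, not Sinequa's production decoding code, and the length cap is an assumed parameter:

```python
import numpy as np

def best_span(start_logits, end_logits, max_answer_len=30):
    """Pick the (start, end) token pair with the highest combined
    logit score, subject to start <= end and a maximum answer length."""
    start_logits = np.asarray(start_logits, dtype=float)
    end_logits = np.asarray(end_logits, dtype=float)
    best, best_score = (0, 0), -np.inf
    for s in range(len(start_logits)):
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best_score, best = score, (s, e)
    return best, best_score

# Toy logits over 6 tokens: the strongest span is tokens 2..3.
start = [0.1, 0.2, 5.0, 0.3, 0.1, 0.0]
end   = [0.0, 0.1, 0.2, 4.0, 0.3, 0.1]
span, score = best_span(start, end)  # span == (2, 3)
```

The selected token span is then mapped back to character offsets in the passage to produce the final answer text.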

Model name: answer-finder-v1-L-multilingual

## Supported Languages

The model was trained and tested in the following languages:

- English
- French
- German
- Spanish

## Scores

| Metric                                                        | Value |
|---------------------------------------------------------------|-------|
| F1 Score on SQuAD v2 EN with Hugging Face evaluation pipeline | 75    |
| F1 Score on SQuAD v2 EN with Haystack evaluation pipeline     | 75    |
| F1 Score on SQuAD v2 FR with Haystack evaluation pipeline     | 73.4  |
| F1 Score on SQuAD v2 DE with Haystack evaluation pipeline     | 90.8  |
| F1 Score on SQuAD v2 ES with Haystack evaluation pipeline     | 67.1  |
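
The F1 figures above are SQuAD-style token-level scores. A simplified sketch of how such an F1 is computed for one prediction/reference pair (the official SQuAD script additionally strips punctuation and articles, which is omitted here):

```python
from collections import Counter

def squad_f1(prediction, reference):
    """Token-level F1: precision and recall over the multiset
    overlap of whitespace-separated, lowercased tokens."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

f1 = squad_f1("the Eiffel Tower", "Eiffel Tower")  # 0.8
```

The reported dataset-level scores average this per-example F1 over all questions (taking the maximum over reference answers when several are given).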

## Inference Time

| GPU        | Batch size 1 | Batch size 32 |
|------------|--------------|---------------|
| NVIDIA A10 | 4 ms         | 84 ms         |
| NVIDIA T4  | 15 ms        | 362 ms        |

Note that the Answer Finder models are only used at query time.

## Requirements

- Minimal Sinequa version: 11.10.0
- GPU memory usage: 1060 MiB

Note that the GPU memory usage above only covers how much GPU memory the model itself consumes on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used.

## Model Details

### Overview

- Number of parameters: 110 million
- Base language model: bert-base-multilingual-cased pre-trained by Sinequa in English, French, German and Spanish
- Insensitive to casing and accents
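
Insensitivity to casing and accents means that, e.g., "Éléphant" and "elephant" are treated alike. A minimal sketch of this kind of normalization using Unicode NFD decomposition (an illustration only, not Sinequa's actual preprocessing code):

```python
import unicodedata

def normalize(text):
    """Lowercase the text and strip combining accent marks
    (Unicode category 'Mn') after NFD decomposition."""
    decomposed = unicodedata.normalize("NFD", text.lower())
    return "".join(ch for ch in decomposed
                   if unicodedata.category(ch) != "Mn")

normalize("Éléphant")  # "elephant"
```

Under such a scheme, queries match answers regardless of capitalization or diacritics in any of the four supported languages.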

### Training Data