Using hard negatives vs. (query, positive) pairs to train embedding models
Does using (query, positive, negative) triplets to train embedding models lead to better performance than just using (query, positive) pairs with MultipleNegativesRankingLoss? If so, how significant is the improvement?
Hello!
Yes, using (query, positive, negative) triplets generally improves performance over just (query, positive) pairs with MultipleNegativesRankingLoss. With this loss, the model has to pick the correct positive (i.e. the "answer") for a given query out of all the positives and negatives in the batch, which together act as the pool of candidate answers. Including an explicit negative per example makes that task harder, and the harder the task (up to a point), the stronger the model becomes.
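To illustrate, here's a minimal sketch using the sentence-transformers v3+ training API; the model name and toy data are just placeholders. The only difference between the two setups is the extra "negative" column:

```python
# Minimal sketch (assumes sentence-transformers v3+ and the `datasets` library).
# Model name and toy data are illustrative placeholders.
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("all-MiniLM-L6-v2")

# (query, positive) pairs: only in-batch negatives are used.
pair_dataset = Dataset.from_dict({
    "anchor": ["What is the capital of France?", "Who wrote 'Pride and Prejudice'?"],
    "positive": ["Paris is the capital of France.", "Jane Austen wrote 'Pride and Prejudice'."],
})

# (query, positive, negative) triplets: each row also contributes one hard
# negative on top of the in-batch negatives.
triplet_dataset = Dataset.from_dict({
    "anchor": ["What is the capital of France?", "Who wrote 'Pride and Prejudice'?"],
    "positive": ["Paris is the capital of France.", "Jane Austen wrote 'Pride and Prejudice'."],
    "negative": ["Lyon is the third-largest city in France.", "Charlotte Brontë wrote 'Jane Eyre'."],
})

# The same loss handles both formats.
loss = losses.MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=triplet_dataset,  # or pair_dataset
    loss=loss,
)
trainer.train()
```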
Overall, the relative improvement is typically around 1-4%, so it's not huge, but the best models do use "hard negatives", i.e. negatives that have been mined specifically to act as difficult candidate answers.
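For context, here's a rough sketch of what "mining" a hard negative can look like for a single query: embed a candidate corpus with a baseline model and pick the most similar passage that isn't the true positive. The corpus and model name are just examples; recent sentence-transformers versions also include a `mine_hard_negatives` utility that does this at scale.

```python
# Rough sketch of mining one hard negative with a baseline embedding model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "What is the capital of France?"
positive = "Paris is the capital of France."
corpus = [
    "Paris is the capital of France.",
    "Lyon is the third-largest city in France.",
    "Paris Hilton is an American media personality.",
    "Bananas are rich in potassium.",
]

query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)
scores = util.cos_sim(query_emb, corpus_emb)[0]

# The hard negative is the highest-scoring passage that isn't the positive.
ranked = scores.argsort(descending=True).tolist()
hard_negative = next(corpus[i] for i in ranked if corpus[i] != positive)
print(hard_negative)
```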
- Tom Aarsen
Cool! What about when training cross-encoders: if we have a query, 1 positive, and 4 negatives, do all 4 negative passages have to be "hard negatives", or should some of them be randomly sampled?
Also, unlike embedding models, cross-encoders are trained as binary classifiers, so no in-batch negatives are used, right?
Good question: they don't all have to be hard negatives. Some papers have shown that adding random negatives can improve performance on "clearly negative" examples, which are underrepresented in training if you only use hard negatives.
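As a hedged sketch, with the classic binary-classifier style of CrossEncoder training (the pre-refactor API), mixing hard and random negatives just means labelling both kinds of (query, passage) pairs as 0. The model name, texts, and 3-hard / 1-random split below are illustrative assumptions:

```python
# Sketch only: classic CrossEncoder training as a binary classifier,
# mixing mined hard negatives with a randomly sampled negative.
import random

from torch.utils.data import DataLoader
from sentence_transformers import InputExample
from sentence_transformers.cross_encoder import CrossEncoder

query = "What is the capital of France?"
positive = "Paris is the capital of France."
hard_negatives = [
    "Lyon is the third-largest city in France.",
    "Paris Hilton is an American media personality.",
    "Brussels is the capital of Belgium.",
]
# A random negative drawn from unrelated passages ("clearly negative").
random_negative = random.choice([
    "Bananas are rich in potassium.",
    "The 2020 Olympics were held in Tokyo.",
])

train_samples = [InputExample(texts=[query, positive], label=1.0)]
train_samples += [InputExample(texts=[query, neg], label=0.0) for neg in hard_negatives]
train_samples.append(InputExample(texts=[query, random_negative], label=0.0))

model = CrossEncoder("distilroberta-base", num_labels=1)
train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=16)
model.fit(train_dataloader=train_dataloader, epochs=1)
```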
And indeed, usually no in-batch negatives are used, but there are losses that do use them. For example, the MultipleNegativesRankingLoss from my upcoming CrossEncoder refactor: https://github.com/UKPLab/sentence-transformers/pull/3222
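A hedged sketch of what that can look like, assuming the post-refactor API from that PR (sentence-transformers v4+); exact import paths and defaults may differ:

```python
# Sketch: in-batch negatives for a CrossEncoder, assuming the post-refactor API.
from datasets import Dataset
from sentence_transformers.cross_encoder import CrossEncoder, CrossEncoderTrainer
from sentence_transformers.cross_encoder.losses import MultipleNegativesRankingLoss

model = CrossEncoder("distilroberta-base", num_labels=1)

# (query, answer) pairs; the other answers in each batch serve as in-batch negatives.
train_dataset = Dataset.from_dict({
    "query": [
        "What is the capital of France?",
        "Who wrote 'Pride and Prejudice'?",
    ],
    "answer": [
        "Paris is the capital of France.",
        "Jane Austen wrote 'Pride and Prejudice'.",
    ],
})

loss = MultipleNegativesRankingLoss(model)
trainer = CrossEncoderTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```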
In my experience, using primarily hard negatives, essentially as hard as possible, generally gives the best finetuning performance.
- Tom Aarsen
Cool, thanks!