How to use this reranker?

#1 by brunnolou
// npm i @xenova/transformers
import { pipeline } from '@xenova/transformers';

// Allocate pipeline
const reranker = await pipeline('text-classification', 'Xenova/bge-reranker-base');

I've tried:

const score = await reranker([
  "I love you. I like you",
  "I love you\n\nI like you",
  "<s>I love you</s><s>I like you</s>",
  "I love you</s>I like you",
]);

And array pairs throw an error:

const score = await reranker([
  ["I love you</s>I like you", "Banana</s>I like you"], // <- throw an error
]);

The results are always the same: { "label": "LABEL_0", "score": 1 }.

All of the examples above return different results from the Hosted Inference API.

Hi there! You can use it in the same way as shown in the README, just with minor syntax differences:

Python (original)

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-base')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-base')
model.eval()

pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
    scores = model(**inputs, return_dict=True).logits.view(-1).float()
    print(scores)

outputs [-8.1544, 6.1821]

JavaScript (ours)

import { AutoModelForSequenceClassification, AutoTokenizer } from '@xenova/transformers';

// Load the tokenizer and the unquantized sequence-classification model
let tokenizer = await AutoTokenizer.from_pretrained('Xenova/bge-reranker-base');
let model = await AutoModelForSequenceClassification.from_pretrained('Xenova/bge-reranker-base', { quantized: false });

// Queries and passages go in two parallel arrays (see note below)
let texts = ['what is panda?', 'what is panda?'];
let pairs = ['hi', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.'];

let inputs = tokenizer(texts, { text_pair: pairs, padding: true, truncation: true });

// The raw logits are the relevance scores
let scores = await model(inputs);
console.log(scores.logits.data);

outputs [ -8.154397964477539, 6.182114601135254 ]

The main difference is the way texts are passed into the tokenizer, which is unfortunately due to limitations in how JavaScript handles optional positional and keyword arguments. For this reason, we require users to separate the inputs into two parallel arrays.
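If your data is already organized as [query, passage] pairs (as in the Python example), a small helper can split it into the two parallel arrays. This is just a convenience sketch; splitPairs is our own name, not part of the library:

// Hypothetical helper: split [query, passage] pairs into the two
// parallel arrays that the tokenizer expects.
function splitPairs(pairs) {
  const texts = pairs.map(([query]) => query);
  const others = pairs.map(([, passage]) => passage);
  return { texts, others };
}

const { texts, others } = splitPairs([
  ['what is panda?', 'hi'],
  ['what is panda?', 'The giant panda is a bear species endemic to China.'],
]);
const inputs = tokenizer(texts, { text_pair: others, padding: true, truncation: true });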

Moreover, we use the unquantized model here (quantized: false), but you can use the quantized version by removing this parameter; its output is still quite close: [ -7.527967929840088, 6.233025550842285 ].
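Also note that these are raw logits, not probabilities. If you want relevance scores in the [0, 1] range, a common approach for cross-encoder rerankers like this one is to pass each logit through a sigmoid; a minimal sketch:

// Map each raw logit to a [0, 1] score with a sigmoid, then rank
// candidate indices from most to least relevant.
const sigmoid = (x) => 1 / (1 + Math.exp(-x));
const logits = Array.from(scores.logits.data);
const probs = logits.map(sigmoid);
const ranking = probs
  .map((p, i) => i)
  .sort((a, b) => probs[b] - probs[a]);
console.log(probs);   // e.g. [ ~0.0003, ~0.998 ] for the example above
console.log(ranking); // e.g. [ 1, 0 ]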
