Cross-Encoder
This is fav-kky/FERNET-C5 fine-tuned with the Cross-Encoder architecture on the Czech News Dataset for Semantic Textual Similarity and on DaReCzech. A Cross-Encoder processes both input texts together in a single forward pass, which typically gives better accuracy than comparing independently computed sentence embeddings.
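To make "together in a single forward pass" concrete, the sketch below (a hypothetical illustration; it assumes the model's tokenizer is downloadable under the same Hub ID) shows that both texts are packed into one input sequence rather than encoded separately:

from transformers import AutoTokenizer
# Hypothetical illustration: a cross-encoder feeds both texts through the
# network as one sequence, so attention operates across the whole pair.
tokenizer = AutoTokenizer.from_pretrained('ctu-aic/CE-fernet-c5-sfle512')
encoded = tokenizer("sentence_1", "sentence_2", truncation=True, max_length=512)
print(tokenizer.decode(encoded['input_ids']))
# For a BERT-style tokenizer this prints something like:
# [CLS] sentence_1 [SEP] sentence_2 [SEP]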
The model can be used for both Semantic Textual Similarity and re-ranking.
Semantic Textual Similarity: the model takes two input sentences and returns a score reflecting how similar their meanings are.
from sentence_transformers import CrossEncoder
# Load the fine-tuned cross-encoder from the Hugging Face Hub
model = CrossEncoder('ctu-aic/CE-fernet-c5-sfle512', max_length=512)
# predict takes a list of sentence pairs and returns one score per pair
scores = model.predict([["sentence_1", "sentence_2"]])
print(scores)
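predict also accepts a whole batch of pairs in one call; a minimal sketch with made-up example sentences:

from sentence_transformers import CrossEncoder
model = CrossEncoder('ctu-aic/CE-fernet-c5-sfle512', max_length=512)
# Score several sentence pairs at once; the result holds one score per pair
pairs = [
    ["The weather is nice today.", "It is sunny outside."],
    ["The weather is nice today.", "The stock market fell sharply."],
]
scores = model.predict(pairs)
for pair, score in zip(pairs, scores):
    print(f"{score:.4f} {pair}")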
Re-ranking task: given a query, the model scores each candidate passage against the query and ranks the passages in descending order of relevance.
from sentence_transformers import CrossEncoder
model = CrossEncoder('ctu-aic/CE-fernet-c5-sfle512', max_length=512)
query = "Example query."
documents = [
    "Example document one.",
    "Example document two.",
    "Example document three."
]
# Score every document against the query and return the top_k best matches,
# including the document text itself (return_documents=True)
results = model.rank(
    query=query,
    documents=documents,
    top_k=3,
    return_documents=True
)
for i, res in enumerate(results):
    print(f"{i+1}. ({res['score']:.4f}) {res['text']}")
Base model: fav-kky/FERNET-C5