
Degarbayan-SC: A Colloquial Farsi Paraphrase Model Using Pre-trained mT5

This is the model from the Degarbayan-SC paper. You can fine-tune this model, or similar models, using the code on GitHub; a minimal fine-tuning sketch follows the summary below.

Paraphrase generation and detection are important tasks in Natural Language Processing (NLP), with applications in information retrieval, text simplification, question answering, and chatbots. The lack of comprehensive datasets for Persian paraphrasing is a major obstacle to progress in this area. Despite their importance, no large-scale corpus has been made available so far, given the difficulties of its creation and the intensive labor required. Degarbayan-SC is constructed from movie subtitles, which are written in colloquial language and differ from formal language. To the best of our knowledge, Degarbayan-SC is the first freely released large-scale (on the order of a million words) Persian paraphrase corpus. We hope this newly introduced dataset will help the growth of Persian paraphrase research.
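
As a rough orientation only, the sketch below fine-tunes the model with the Hugging Face Seq2SeqTrainer. The CSV file name paraphrase_train.csv, the source/target column names, and the hyperparameters are assumptions for illustration; the official scripts in the GitHub repository may differ.

from datasets import load_dataset
from transformers import (AutoTokenizer, T5ForConditionalGeneration,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "m0javad/Degarbayan-SC"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# "paraphrase_train.csv" and the "source"/"target" column names are placeholders
dataset = load_dataset("csv", data_files={"train": "paraphrase_train.csv"})

def preprocess(batch):
  inputs = ["paraphrase: " + s for s in batch["source"]]
  model_inputs = tokenizer(inputs, max_length=100, truncation=True)
  labels = tokenizer(text_target=batch["target"], max_length=100, truncation=True)
  model_inputs["labels"] = labels["input_ids"]
  return model_inputs

tokenized = dataset["train"].map(preprocess, batched=True,
                                 remove_columns=dataset["train"].column_names)

# hyperparameters are illustrative, not the values used in the paper
args = Seq2SeqTrainingArguments(
  output_dir="degarbayan-finetuned",
  per_device_train_batch_size=8,
  num_train_epochs=3,
  learning_rate=5e-4,
  logging_steps=100,
  save_total_limit=1,
)

trainer = Seq2SeqTrainer(
  model=model,
  args=args,
  train_dataset=tokenized,
  data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()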

Training Dataset

Access and Download

You can find the dataset at this Google Drive link.

  • The dataset is in .csv format.
  • The dataset has two columns: the first column contains the source sentences and the second contains the target sentences (a loading sketch follows below).
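
As a quick orientation, the snippet below loads the CSV with pandas and reads the two columns by position; the file name degarbayan_sc.csv is only a placeholder for whatever the downloaded file is called.

import pandas as pd

# "degarbayan_sc.csv" is a placeholder name for the downloaded file
df = pd.read_csv("degarbayan_sc.csv")
source_sentences = df.iloc[:, 0].tolist()  # first column: source sentences
target_sentences = df.iloc[:, 1].tolist()  # second column: target paraphrases
print(len(source_sentences), "paraphrase pairs")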

Test

import torch
from transformers import T5ForConditionalGeneration, AutoTokenizer

# make sampling reproducible
def set_seed(seed):
  torch.manual_seed(seed)

set_seed(42)

# load the paraphrase model and its tokenizer
tx2tx_model = "m0javad/Degarbayan-SC"
model = T5ForConditionalGeneration.from_pretrained(tx2tx_model)
tokenizer = AutoTokenizer.from_pretrained(tx2tx_model)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("device:", device)
model = model.to(device)

# provide the sentences to paraphrase and see how the network behaves
sents = ["first sentence", "second sentence"]  # add your own sentences here
for sentt in sents:
  text = "paraphrase: " + sentt
  encoding = tokenizer(text, padding='longest', return_tensors="pt")
  input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device)
  # sample with top_k = 120 and top_p = 0.98, returning one sequence per input
  beam_outputs = model.generate(
      input_ids=input_ids, attention_mask=attention_masks,
      do_sample=True,
      max_length=120,
      top_k=120,
      top_p=0.98,
      early_stopping=True,
      num_return_sequences=1
  )
  print ("\nOriginal sentence: ")
  print (sentt)
  print ("\n")
  print ("Paraphrased sentences: ")
  final_outputs =[]
  for beam_output in beam_outputs:
      sent = tokenizer.decode(beam_output, skip_special_tokens=True,clean_up_tokenization_spaces=True)
      if sent.lower() != sentt.lower() and sent not in final_outputs:
          final_outputs.append(sent)
  for i, final_output in enumerate(final_outputs):
      print("{}: {}".format(i, final_output))

Citation

@misc {mohammadjavadaghajani_2024,
    author       = { {MohammadJavadAghajani} },
    title        = { Degarbayan-SC (Revision 6de2b06) },
    year         = 2024,
    url          = { https://huggingface.co/m0javad/Degarbayan-SC },
    doi          = { 10.57967/hf/1774 },
    publisher    = { Hugging Face }
}

Contact

mjaghajani.ai@gmail.com
