
This repository stores the weights for the PaperPersiChat pipeline models.

The pipeline is presented in the paper *PaperPersiChat: Scientific Paper Discussion Chatbot using Transformers and Discourse Flow Management*.

## Installation

```bash
git lfs install
git clone https://huggingface.co/ai-forever/paper_persi_chat
```

## Usage

1. **Full pipeline:**

   See more details at https://github.com/ai-forever/paper_persi_chat

2. **Single models:**

Inference examples: Open In Colab

Three models (the Summarizer, the QA module, and the Response Generator) can be loaded with the `transformers` library after downloading the weights:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Summarizer; use 'paper_persi_chat/bart_response_generator' for the Response Generator
model_name_or_path = 'paper_persi_chat/distilbart_summarizer'
tokenizer = BartTokenizer.from_pretrained(model_name_or_path)
model = BartForConditionalGeneration.from_pretrained(model_name_or_path).to('cuda')
```

```python
from transformers import pipeline

# QA module
qa_model = pipeline("question-answering", model='paper_persi_chat/deberta_qa')
pred = qa_model(question,   # question string
               context,     # paper segment to extract the answer from
               max_seq_len=384,
               doc_stride=64,
               max_answer_len=384)
```
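The `doc_stride` parameter controls how the question-answering pipeline handles contexts longer than `max_seq_len`: the context is split into overlapping windows so that an answer falling on a window boundary is still fully contained in some window. A minimal sketch of that sliding-window logic (the token counts are illustrative, not the tokenizer's actual output):

```python
def sliding_windows(tokens, max_len=384, stride=64):
    """Split a token sequence into overlapping windows, mimicking how a
    question-answering pipeline chunks an over-long context.
    Consecutive windows overlap by `stride` tokens."""
    if len(tokens) <= max_len:
        return [tokens]
    windows = []
    step = max_len - stride  # advance by this many tokens per window
    for start in range(0, len(tokens), step):
        windows.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):  # last window reaches the end
            break
    return windows

# Example: 1000 pseudo-tokens, 384-token windows overlapping by 64.
chunks = sliding_windows(list(range(1000)))
print([len(c) for c in chunks])  # → [384, 384, 360]
```

The pipeline then extracts a candidate answer from each window and returns the highest-scoring one, so a larger `doc_stride` trades extra computation for more boundary overlap.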

## Citation

If you find our models helpful, feel free to cite our publication *PaperPersiChat: Scientific Paper Discussion Chatbot using Transformers and Discourse Flow Management*:

```bibtex
@inproceedings{chernyavskiy-etal-2023-paperpersichat,
    title = "{P}aper{P}ersi{C}hat: Scientific Paper Discussion Chatbot using Transformers and Discourse Flow Management",
    author = "Chernyavskiy, Alexander  and
      Bregeda, Max  and
      Nikiforova, Maria",
    booktitle = "Proceedings of the 24th Meeting of the Special Interest Group on Discourse and Dialogue",
    month = sep,
    year = "2023",
    address = "Prague, Czechia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.sigdial-1.54",
    pages = "584--587",
}
```