Model Details

Model name: AskMe

Model type: GPT-based, fine-tuned on Arabic instruction-based dataset

Base model: aubmindlab/aragpt2-large

Languages: Arabic

Author: Research team at Naseej

Introduction

AskMe is a GPT-based model fine-tuned on an Arabic instruction-based dataset generated with ChatGPT. The research team at Naseej fine-tuned it from the aubmindlab/aragpt2-large base model. AskMe aims to be a high-quality, context-aware Arabic language model that helps users generate human-like responses, particularly when given instructions or prompts.

Dataset

The dataset used for fine-tuning AskMe consists of Arabic instruction-based conversations generated with ChatGPT. The research team at Naseej curated and cleaned the dataset to improve model performance and to reduce biases present in the data.
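
The exact record schema of the dataset is not published on this card. Purely as a hypothetical illustration, a single instruction-style record might be organized along these lines (field names and content are assumptions, not the actual schema):

# Hypothetical record layout; the real schema of Naseej's dataset is not documented here.
example_record = {
    "instruction": "لخص النص التالي في جملتين.",  # an Arabic instruction ("Summarize the following text in two sentences.")
    "input": "...",    # optional context passage (elided)
    "output": "...",   # the target response the model learns to generate (elided)
}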

Fine-tuning

AskMe was fine-tuned from the aubmindlab/aragpt2-large base model, which is designed for Arabic language understanding and generation. The research team at Naseej fine-tuned it on the instruction-based dataset described above so that it produces accurate and contextually relevant responses to instructions.
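
This card does not publish Naseej's training script or hyperparameters. The snippet below is only a minimal sketch of a generic causal-language-modeling fine-tune with the Hugging Face Trainer, assuming the grover-based GPT2LMHeadModel exposes the standard transformers interface (input_ids plus labels); the example text, output directory, and hyperparameters are illustrative placeholders.

import torch
from transformers import (AutoTokenizer, Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)
from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("aubmindlab/aragpt2-large")
model = GPT2LMHeadModel.from_pretrained("aubmindlab/aragpt2-large")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 tokenizers ship without a pad token

# Placeholder strings standing in for the curated instruction/response pairs.
texts = [
    "التعليمات: لخص النص التالي. النص: ... الإجابة: ...",
]

class InstructionDataset(torch.utils.data.Dataset):
    """Wraps tokenized instruction texts for the Trainer."""
    def __init__(self, texts, tokenizer, max_length=512):
        self.enc = tokenizer(texts, truncation=True, max_length=max_length)

    def __len__(self):
        return len(self.enc["input_ids"])

    def __getitem__(self, idx):
        return {"input_ids": self.enc["input_ids"][idx],
                "attention_mask": self.enc["attention_mask"][idx]}

# mlm=False: the collator pads each batch and copies input_ids into labels (causal LM loss).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="askme-finetune",      # illustrative values, not the published hyperparameters
    num_train_epochs=3,
    per_device_train_batch_size=4,
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=InstructionDataset(texts, tokenizer),
    data_collator=collator,
)
trainer.train()

In practice, the flattened instruction/response strings would come from the curated dataset described above rather than from an inline placeholder list.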

Demo

https://askme.naseej.ai

Usage

AskMe can be used for a variety of tasks that involve understanding and responding to instructions or prompts in Arabic. This includes tasks such as:

  • Question-answering
  • Conversation modeling
  • Summarization
  • Translation
  • Generating instructions
  • Text completion

You can use the model with the Hugging Face Transformers library by loading it with the from_pretrained method:

# AraGPT2-large uses a grover-based architecture, so the model class is provided by the
# arabert package (pip install arabert) rather than transformers' stock GPT2LMHeadModel.
from transformers import AutoTokenizer
from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("Naseej/AskMe-Large")
model = GPT2LMHeadModel.from_pretrained("Naseej/AskMe-Large")
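
Once the tokenizer and model are loaded, text can be generated with the standard generate method. The example below is an illustrative sketch: the Arabic prompt and the sampling settings are assumptions rather than a documented AskMe prompt format, and max_new_tokens assumes a reasonably recent transformers release.

# Illustrative generation example (prompt and decoding parameters are not official settings).
prompt = "ما هي فوائد القراءة اليومية؟"
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    max_new_tokens=128,        # cap on newly generated tokens
    do_sample=True,            # sample instead of greedy decoding
    top_p=0.95,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))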

Limitations and Bias

Although AskMe has been fine-tuned on a curated dataset, it is still susceptible to biases present in the training data. This can result in the generation of biased or politically incorrect responses. Users should be cautious and critically evaluate the generated outputs.

Additionally, as a language model, AskMe may produce incorrect or nonsensical answers, especially when handling complex or ambiguous prompts. It is recommended to use the model as a tool to assist in decision-making and content generation rather than as a standalone solution.

Feedback and Contributions

We welcome feedback and contributions to improve the AskMe model. If you have any issues, suggestions, or questions, please feel free to open an issue on our GitHub repository, or reach out to the research team at Naseej.

License

AskMe is released under the MIT License.
