---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
language:
  - sq
library_name: transformers
pipeline_tag: text-generation
tags:
  - nlp
  - code
inference:
  parameters:
    temperature: 0.7
widget:
  - messages:
      - role: user
        content: >-
          Identifiko emrat e personave në këtë artikull 'Majlinda Kelmendi
          (lindi më 9 maj 1991), është një xhudiste shqiptare nga Peja, Kosovë.'
---

# Kushtrim/Phi-3-mini-4k-instruct-sq

## Model Overview

Kushtrim/Phi-3-mini-4k-instruct-sq is a fine-tuned version of Phi-3-mini-4k-instruct, tailored for Albanian-language tasks. It supports a context length of 4K (4,096) tokens, making it suitable for applications that require strong reasoning and high-quality output in Albanian.

## Model Details

- **Model Name:** Kushtrim/Phi-3-mini-4k-instruct-sq
- **Base Model:** Phi-3-Mini-4K-Instruct
- **Context Length:** 4K (4,096) tokens
- **Language:** Albanian
- **License:** MIT License

## Limitations

- **Representation of Harms & Stereotypes:** Outputs may reflect real-world societal biases.
- **Inappropriate or Offensive Content:** The model can generate content that is offensive or inappropriate in some contexts.
- **Information Reliability:** Generated information may be inaccurate or outdated.
- **Dataset Size:** The Albanian fine-tuning dataset was relatively small, which may limit the model's performance and coverage.

## Responsible AI Considerations

Developers using this model should:

- Evaluate and mitigate risks related to accuracy, safety, and fairness.
- Ensure compliance with applicable laws and regulations.
- Implement additional safeguards for high-risk scenarios and sensitive contexts.
- Inform end users that they are interacting with an AI system.
- Use feedback mechanisms and retrieval-augmented generation (RAG) to ground outputs in trusted context and improve reliability.
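The RAG suggestion above amounts to a prompt-assembly step before generation. A minimal sketch, assuming retrieved passages are already in hand (here a hard-coded list; a real system would fetch them from a vector store or search index) — the function name and message layout are illustrative, not part of this model's API:

```python
def build_grounded_messages(question: str, passages: list[str]) -> list[dict]:
    """Assemble a chat prompt that grounds the model in retrieved passages.

    The system message instructs the model (in Albanian) to answer only
    from the supplied context; passages are appended as numbered notes.
    """
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    system = (
        "Je një asistent i dobishëm. Përgjigju vetëm duke u bazuar "
        "në kontekstin e mëposhtëm:\n" + context
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# Hypothetical retrieved passages, for illustration only.
passages = [
    "Majlinda Kelmendi është xhudiste shqiptare nga Peja, Kosovë.",
]
messages = build_grounded_messages("Nga është Majlinda Kelmendi?", passages)
```

The resulting `messages` list has the same shape as the one in the usage example below, so it can be passed to the `pipeline` call unchanged.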

## How to Use

```shell
pip3 install -U transformers peft accelerate bitsandbytes
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

hf_token = "hf_...."  # your Hugging Face access token

torch.random.manual_seed(0)

model = AutoModelForCausalLM.from_pretrained(
    "Kushtrim/Phi-3-mini-4k-instruct-sq",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
    token=hf_token,
)

tokenizer = AutoTokenizer.from_pretrained(
    "Kushtrim/Phi-3-mini-4k-instruct-sq", token=hf_token
)

messages = [
    {"role": "system", "content": "Je një asistent inteligjent shumë i dobishëm."},
    {"role": "user", "content": "Identifiko emrat e personave në këtë artikull 'Majlinda Kelmendi (lindi më 9 maj 1991), është një xhudiste shqiptare nga Peja, Kosovë.'"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 1024,
    "return_full_text": False,  # return only the newly generated text
    "temperature": 0.7,
    "do_sample": True,
}

output = pipe(messages, **generation_args)
print(output[0]["generated_text"])
```
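The install command above pulls in bitsandbytes, which the snippet itself does not use. If GPU memory is tight, the model can instead be loaded with 4-bit quantization; a sketch, assuming a CUDA device with bitsandbytes available (the config values shown are common defaults, not settings recommended by the model author):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization: roughly quarters the weight memory,
# at some cost in output quality.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Kushtrim/Phi-3-mini-4k-instruct-sq",
    device_map="auto",
    quantization_config=bnb_config,
    trust_remote_code=True,
)
```

The tokenizer and `pipeline` calls from the main example work unchanged with the quantized model.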

## Acknowledgements

This model builds on Phi-3-Mini-4K-Instruct, leveraging its capabilities and fine-tuning it further for Albanian-language tasks. Special thanks to the developers and researchers behind the original Phi-3 models.