---
license: apache-2.0
language:
  - en
datasets:
  - togethercomputer/RedPajama-Data-1T
  - Muennighoff/P3
  - Muennighoff/natural-instructions
pipeline_tag: text-generation
tags:
  - gpt_neox
  - red_pajama
---

Original Model Link: https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1

This will NOT work with llama.cpp as of 5/8/2023. It will ONLY work with the GGML fork in https://github.com/ggerganov/ggml/pull/134, and soon with https://github.com/keldenl/gpt-llama.cpp (which uses llama.cpp or ggml).

# RedPajama-INCITE-Instruct-3B-v1

RedPajama-INCITE-Instruct-3B-v1 was developed by Together and leaders from the open-source AI community including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.

The model was fine-tuned for few-shot applications on the data of GPT-JT, excluding tasks that overlap with the HELM core scenarios.

## Model Details

- **Developed by:** Together Computer.
- **Model type:** Language Model
- **Language(s):** English
- **License:** Apache 2.0
- **Model Description:** A 2.8B parameter pretrained language model.

## Prompt Template

To prompt the model, use a typical instruction format combined with few-shot prompting, for example:

```
Paraphrase the given sentence into a different sentence.

Input: Can you recommend some upscale restaurants in New York?
Output: What upscale restaurants do you recommend in New York?

Input: What are the famous places we should not miss in Paris?
Output: Recommend some of the best places to visit in Paris?

Input: Could you recommend some hotels that have cheap price in Zurich?
Output:
```
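A prompt in this format can be assembled programmatically. The sketch below is illustrative only; `build_prompt` is a hypothetical helper, not part of any library:

```python
def build_prompt(instruction, examples, query):
    """Assemble an instruction plus few-shot prompt in the format shown above."""
    parts = [instruction, ""]
    for inp, out in examples:
        parts.append(f"Input: {inp}")
        parts.append(f"Output: {out}")
        parts.append("")
    # The final input is left with an empty "Output:" for the model to complete.
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)

prompt = build_prompt(
    "Paraphrase the given sentence into a different sentence.",
    [
        ("Can you recommend some upscale restaurants in New York?",
         "What upscale restaurants do you recommend in New York?"),
        ("What are the famous places we should not miss in Paris?",
         "Recommend some of the best places to visit in Paris?"),
    ],
    "Could you recommend some hotels that have cheap price in Zurich?",
)
print(prompt)
```

The resulting string ends with a bare `Output:` line, so the model's completion is the paraphrase of the final input.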

## Which model to download?

- The q4_0 file provides lower quality but maximal compatibility. It will work with past and future versions of llama.cpp.
- The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues; see below.
- The q5_0 file uses the new 5-bit method released 26th April. It is the 5-bit equivalent of q4_0.
- The q5_1 file uses the new 5-bit method released 26th April. It is the 5-bit equivalent of q4_1.
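The quality/size trade-off between these formats comes from how many bytes each quantization block spends per weight. The sketch below estimates bits per weight and rough file size for a 2.8B-parameter model; the block layouts are assumptions based on the ggml implementations as of spring 2023 (q4_2 in particular was still subject to change) and the totals ignore non-quantized tensors and file overhead:

```python
# Assumed ggml block layouts (weights per block, bytes per block).
# These are assumptions from the spring-2023 implementations, not a spec.
FORMATS = {
    "q4_0": (32, 2 + 16),          # fp16 scale + 32 x 4-bit weights
    "q4_2": (16, 2 + 8),           # fp16 scale + 16 x 4-bit weights
    "q5_0": (32, 2 + 4 + 16),      # fp16 scale + 32 extra high bits + 4-bit weights
    "q5_1": (32, 2 + 2 + 4 + 16),  # fp16 scale + fp16 min + high bits + 4-bit weights
}

N_PARAMS = 2.8e9  # parameter count of this model

for name, (block_size, block_bytes) in FORMATS.items():
    bits_per_weight = 8 * block_bytes / block_size
    approx_gb = N_PARAMS * bits_per_weight / 8 / 1e9
    print(f"{name}: {bits_per_weight:.1f} bits/weight, ~{approx_gb:.2f} GB")
```

Under these assumptions q4_0 comes out around 4.5 bits per weight and q5_1 around 6, which is why the 5-bit files are larger but closer to the original weights.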