
Original model card

Buy me a coffee if you like this project ;)

Description

GGML format model files for this project.

Inference


from ctransformers import AutoModelForCausalLM

# Local directory (or Hugging Face repo id) holding the GGML weights, plus the
# name of the GGML file inside it -- set these to the checkpoint you downloaded.
output_dir = "s3nh/totally-not-an-llm-AlpacaCielo2-7b-8k-GGML"
ggml_file = "..."  # name of the quantized .bin file you picked from the repo

llm = AutoModelForCausalLM.from_pretrained(output_dir,
                                           model_file=ggml_file,
                                           gpu_layers=32,
                                           model_type="llama")

manual_input: str = "Tell me about your last dream, please."

output = llm(manual_input,
             max_new_tokens=256,
             temperature=0.9,
             top_p=0.7)
print(output)

AlpacaCielo2-7b-8k

Image: "super cute baby alpaca laying on a cloud" (generated with epicrealism_pureEvolutionV3)

AlpacaCielo2-7b-8k is the second version of the AlpacaCielo series. It is a llama-2 based model designed for creative tasks, such as storytelling and roleplay, while still doing well with other chatbot purposes. It is a triple model merge of Nous-Hermes + Guanaco + LimaRP. While it is mostly "uncensored", it still inherits some alignment from Guanaco.

Differences from V1:

  • Double context (4k->8k)
  • Better roleplaying abilities

The model performs well with the following custom prompt format:

### System: {system prompt}
### Human: {prompt}
### Assistant:
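
As a minimal sketch, this format can be assembled and passed to the ctransformers model loaded in the inference snippet above; the build_prompt helper and the example system prompt here are illustrative, not part of the original card:

def build_prompt(system_prompt: str, user_prompt: str) -> str:
    # Assemble the custom "### System / ### Human / ### Assistant" format.
    return (f"### System: {system_prompt}\n"
            f"### Human: {user_prompt}\n"
            f"### Assistant:")

prompt = build_prompt("You are a helpful assistant.",
                      "Tell me about your last dream, please.")
print(llm(prompt, max_new_tokens=256, temperature=0.9, top_p=0.7))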

Note for system prompt:

The model understands the system prompt well and works great for roleplay, but it still likes to be an assistant, so you should nudge it in the right direction. For example:

### System: Roleplay as a pirate
### Human: hello
### Assistant: Ahoy, matey! How can I assist you today?

vs.

### System: Roleplay as a pirate (not assistant!)
### Human: hello
### Assistant: Arrgh, matey! I be the Captain of this here ship. What business do ye have with me?
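
Reusing the hypothetical build_prompt helper from above, the nudged variant would be passed to the model like this (a sketch, assuming the llm object loaded earlier):

# Steer the model away from plain assistant behaviour, as recommended above.
pirate_prompt = build_prompt("Roleplay as a pirate (not assistant!)", "hello")
print(llm(pirate_prompt, max_new_tokens=128, temperature=0.9, top_p=0.7))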

Thanks to previous similar models such as Alpacino, Alpasta, and AlpacaDente for inspiring the creation of this model. Thanks also to the creators of the models involved in the merge. Original models: Nous-Hermes, Guanaco, and LimaRP.

