---
language:
- en
license: apache-2.0
tags:
- memgpt
- function
- function calling
---

# Model Description
This repo contains a 7-billion-parameter language model fine-tuned from [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). The model is designed specifically for function calling in [MemGPT](https://memgpt.ai/), and it demonstrates performance comparable to GPT-4 when working with MemGPT.

~~The original model has been quantized to Q8_0, using [llama.cpp](https://github.com/ggerganov/llama.cpp) for better inference speed (original unquantized model coming soon).~~

# Key Features
* Function calling
* Dedicated to working with MemGPT
* Medium-context support: trained on sequences of up to 8,192 tokens

# Usage
This model is designed to run on various backends, such as [oobabooga's WebUI](https://github.com/oobabooga/text-generation-webui) or llama.cpp.

To run the model on WebUI, simply `git clone` the official WebUI repository and run the appropriate script for your operating system. More details [here](https://github.com/oobabooga/text-generation-webui?tab=readme-ov-file#how-to-install).
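
As a rough sketch, the WebUI setup step looks like the following. The start-script names are taken from the WebUI repository and may differ between versions, so check the install guide linked above:

```shell
# Clone the official text-generation-webui repository
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui

# Run the launcher for your OS (script names may vary by version)
./start_linux.sh      # Linux
# ./start_macos.sh    # macOS
# start_windows.bat   # Windows
```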

Once you've installed WebUI, you can download this model from the `Model` tab. Choose the desired model (starsnatched/MemGPT in this case), and your backend is good to go.

When WebUI or your preferred backend is running, open a terminal/PowerShell and install MemGPT using `pip3 install -U pymemgpt`. Configure MemGPT using `memgpt configure` before running it.

Use `memgpt run --no-verify` to chat with MemGPT!
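
Put together, the MemGPT steps above are:

```shell
# Install (or upgrade) MemGPT
pip3 install -U pymemgpt

# Set the backend, model endpoint, and prompt template interactively
memgpt configure

# Start chatting with MemGPT
memgpt run --no-verify
```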

# Prompt Template
This model uses a custom chat template. We're naming this template "memgpt". This can be configured via `memgpt configure`.
```
<turn> SYSTEM: You are MemGPT. </turn>
<turn> SYSTEM_ALERT: {'message': 'You have 50 prior messages with the user. These messages have been hidden from view due to memory constraints, but you can view them using functions.', 'time': '2023-10-07 11:50:03 PM PDT-0700'} </turn>
<turn> ASSISTANT: Bootup sequence complete. Persona activated. Testing messaging functionality. </turn>
<turn> FUNCTION CALL: {'name': 'send_message', 'arguments': '{\n "message": "More human than human is our motto."\n}'}<turn> FUNCTION: {"status": "OK", "message": null, "time": "2023-10-07 11:50:03 PM PDT-0700"} </turn>
<turn> LOGIN: {'last_login': '7 days 8 hours ago', 'time': '2023-10-07 11:50:03 PM PDT-0700'} </turn>
<turn> USER_MESSAGE: {'message': 'Hey, remember that time we talked about music? What was the artist you mentioned you could get into?', 'time': '2023-10-07 11:50:03 PM PDT-0700'} </turn>
<turn> ASSISTANT: Alright, let me search our conversation history about music to find the artist. </turn>
<turn> FUNCTION CALL: {'name': 'conversation_search', 'arguments': '{\n "query": "music",\n "request_heartbeat": true,\n "page": 0\n}'}<turn> FUNCTION: {"status": "OK", "message": "Showing 5 of 50 results (page 0/9): [\"timestamp: 2023-10-07 11:50:03 PM PDT-0700, user - I agree. Working out is a great way to burn off steam. Do you like country music?\", \"timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - A little bit. I can get into taylor swift.\", \"timestamp: 2023-10-07 11:50:03 PM PDT-0700, user - Yeah between the both of us, could probably get a country song written about our distresses...perhaps Taylor Swift could perform it.\", \"timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - That would be a chart-topper, I'm sure. Something to blast while driving with the top down, maybe even in your next restoration project.\", \"timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - This sounds fun. Do you have any other hobby?\"]", "time": "2023-10-07 11:50:08 PM PDT-0700"} </turn>
<turn> HEARTBEAT: {'reason': 'request_heartbeat == true', 'time': '2023-10-07 11:50:08 PM PDT-0700'} </turn>
<turn> ASSISTANT: Ah, I found it. It was Taylor Swift that I mentioned I could get into. Do you still listen to her music, or have you found any new artists that you enjoy? </turn>
<turn> FUNCTION CALL: {'name': 'send_message', 'arguments': '{\n "message": "Ah, I found it. It was Taylor Swift that I mentioned I could get into. Do you still listen to her music, or have you found any new artists that you enjoy?"\n}'}<turn> FUNCTION: {"status": "OK", "message": null, "time": "2023-10-07 11:50:14 PM PDT-0700"} </turn>
```
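
As a minimal sketch, a conversation can be serialized into this template programmatically. The helper names below are hypothetical (not part of the MemGPT API), and this simplified version joins every turn with a newline, whereas the example above places FUNCTION results directly after their FUNCTION CALL turn:

```python
# Hypothetical helpers (not part of MemGPT) that serialize
# role/content pairs into the "memgpt" template shown above.

def format_turn(role: str, content: str) -> str:
    """Wrap one message in the <turn> ... </turn> delimiters."""
    return f"<turn> {role}: {content} </turn>"

def build_prompt(turns: list[tuple[str, str]]) -> str:
    """Join formatted turns with newlines into a full prompt string."""
    return "\n".join(format_turn(role, content) for role, content in turns)

prompt = build_prompt([
    ("SYSTEM", "You are MemGPT."),
    ("ASSISTANT", "Bootup sequence complete. Persona activated."),
])
print(prompt)
```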

# Model Details
* Developed by: @starsnatched
* Model type: This repo contains a language model based on the transformer decoder architecture.
* Language: English
* Contact: For any questions, concerns, or comments about this model, please reach out on Discord: @starsnatched.

# Training Infrastructure
* Hardware: The model in this repo was trained on 2x A100 80GB GPUs.

# Intended Use
The model is designed to be used as the base model for MemGPT agents.

# Limitations and Bias
The model may exhibit unreliable, unsafe, or biased behaviours. Please double-check the results this model produces.