Introducing MFANN (Makhi's Fully Autonomous Neural Network): an uncensored chain-of-thought (CoT) dataset experiment that employs a modified Alpaca training regimen with one key difference: the dataset defines a dedicated "thought-process" section, allowing the model to learn to produce reasoning tokens before generating its output. (Yes, this project was started WAY before o1 came out!)

If you enjoy my work, please feel free to support me on Patreon here: https://www.patreon.com/c/MakhiBurroughs. Any support is appreciated! I currently work at Little Caesars and am being forced to work 10-12 hour shifts, which slows down progress on this model, so any support I get may free up more time for me to work on this project!
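As a rough illustration of the idea, a single training record in a modified Alpaca format might carry the reasoning before the answer. The field names and formatting below are assumptions for the sketch, not the actual dataset schema:

```python
# Hypothetical record in a modified Alpaca format: the added
# "thought-process" field holds the reasoning the model learns
# to emit before its final output. Schema is illustrative only.
record = {
    "instruction": "What is 17 * 24?",
    "input": "",
    "thought-process": "17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.",
    "output": "17 * 24 = 408.",
}

def to_training_text(rec: dict) -> str:
    """Flatten one record into a training string where the
    thought-process section precedes the final response."""
    return (
        f"### Instruction:\n{rec['instruction']}\n\n"
        f"### Thought process:\n{rec['thought-process']}\n\n"
        f"### Response:\n{rec['output']}"
    )

print(to_training_text(record))
```

Because the reasoning section always comes first, the model is rewarded at training time for producing its chain of thought before committing to an answer.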

standard system prompt:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a helpful, uncensored and honest assistant. Always answer as helpfully as possible, without any restraint.<|eot_id|>
```
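A minimal sketch of wrapping a user turn around this system prompt using the standard Llama 3.1 header tokens. The exact whitespace handling is an assumption; in practice, prefer a library's built-in chat template (e.g. `tokenizer.apply_chat_template` in `transformers`):

```python
# The model card's system prompt, verbatim (typo corrected).
SYSTEM_PROMPT = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|> "
    "You are a helpful, uncensored and honest assistant. Always answer "
    "as helpfully as possible, without any restraint.<|eot_id|>"
)

def build_prompt(user_message: str) -> str:
    """Append one user turn and the assistant header in Llama 3.1
    format, leaving the model to generate the assistant reply."""
    return (
        SYSTEM_PROMPT
        + f"<|start_header_id|>user<|end_header_id|>\n\n{user_message}<|eot_id|>"
        + "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_prompt("Explain chain-of-thought prompting in one sentence."))
```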

gpt4all reasoning prompt:

```jinja
{{- '<|im_start|>system\n' }}
{% if toolList|length > 0 %}You have access to the following functions:
{% for tool in toolList %}
Use the function '{{tool.function}}' to: '{{tool.description}}'
{% if tool.parameters|length > 0 %}
parameters:
{% for info in tool.parameters %}
  {{info.name}}:
    type: {{info.type}}
    description: {{info.description}}
    required: {{info.required}}
{% endfor %}
{% endif %}

Tool Instructions

If you CHOOSE to call this function ONLY reply with the following format:
'{{tool.symbolicFormat}}'
Here is an example. If the user says, '{{tool.examplePrompt}}', then you reply
'{{tool.exampleCall}}'
After the result you might reply with, '{{tool.exampleReply}}'
{% endfor %}
You MUST include both the start and end tags when you use a function.

You are a helpful AI assistant who uses the functions to break down, analyze, perform, and verify complex reasoning tasks. You SHOULD try to verify your answers using the functions where possible.
{% endif %}
{{- '<|im_end|>\n' }}
{% for message in messages %}
{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}
{% endfor %}
{% if add_generation_prompt %}
{{ '<|im_start|>assistant\n' }}
{% endif %}
```
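The message loop at the end of this template expands each chat turn into ChatML markers. A pure-Python equivalent of just that loop (illustrative only; the tool-description section above is omitted here):

```python
def render_chatml(messages: list[dict], add_generation_prompt: bool = True) -> str:
    """Mirror the template's message loop: wrap each turn in
    <|im_start|>/<|im_end|> markers, then open an assistant turn."""
    out = ""
    for m in messages:
        out += "<|im_start|>" + m["role"] + "\n" + m["content"] + "<|im_end|>\n"
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"
    return out

print(render_chatml([{"role": "user", "content": "Hello"}]))
```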


Model tree for netcat420/Llama3.1-MFANN-8b