jeiku committed
Commit f986df9 · verified · 1 Parent(s): 9c606a9

Update README.md

Files changed (1): README.md +6 −10
README.md CHANGED
@@ -10,21 +10,21 @@ language:
  base_model:
  - IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml
  ---
- ## Aura-4B
+ ## Aura-MoE-2x4B
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/jT4LeWC0ioarPieWtNZkE.png)
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/LpCTIR45g099eXDIwYmKa.png)
 
  ## Introduction
 
- **Aura-4B** is a state-of-the-art dedicated roleplaying model designed to fulfill your every desire.
+ **Aura-MoE-2x4B** is a state-of-the-art dedicated roleplaying model designed to fulfill your every desire.
 
- This finetune has seen several hundred million tokens of completion, instruction, and roleplaying data. Kahneman-Tversky Optimization was applied to give this model a unique output style.
+ The finetunes used in this merge saw several hundred million tokens of completion, instruction, and roleplaying data. Kahneman-Tversky Optimization was applied as a low-rank adapter, both to heal the merge and to give this model a unique output style.
 
  Developed by **Aura Industries**, with contributions from **Anthracite Org**
 
  ## Model Details
 
- - **Model Name**: Aura-4B
+ - **Model Name**: Aura-MoE-2x4B
  - **Base Model**: [IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml](https://huggingface.co/IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml)
  - **Model Type**: Chat Completions
  - **Prompt Format**: ChatML
@@ -38,11 +38,7 @@ This model is licensed under the [Apache 2.0 License](https://www.apache.org/lic
 
  ## Quantizations
 
- [Static GGUF](https://huggingface.co/mradermacher/Aura-4B-GGUF)
-
- [Imatrix GGUF](https://huggingface.co/mradermacher/Aura-4B-i1-GGUF)
-
- EXL2 coming soon...
+ Coming soon...
 
  # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
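The model card above lists ChatML as the prompt format. As a minimal sketch (not part of the card itself), this is how a ChatML prompt can be assembled by hand; the helper name and example messages are hypothetical, and the `<|im_start|>`/`<|im_end|>` markers are the standard ChatML turn delimiters:

```python
def chatml_prompt(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts as a ChatML prompt.

    Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers,
    which is the prompt format this model card specifies.
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    if add_generation_prompt:
        # Leave an open assistant turn for the model to complete.
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


prompt = chatml_prompt([
    {"role": "system", "content": "You are a helpful roleplaying assistant."},
    {"role": "user", "content": "Describe the tavern."},
])
print(prompt)
```

In practice the same result is usually obtained from the tokenizer's built-in chat template (e.g. `tokenizer.apply_chat_template` in `transformers`) rather than hand-formatting, but the sketch shows what the model actually sees.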