TheBloke committed on
Commit
fe7157b
1 Parent(s): e17c9f0

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -38,13 +38,13 @@ quantized_by: TheBloke
 <!-- header end -->
 
 # Athena V2 - GGUF
-- Model creator: [IkariDev](https://huggingface.co/IkariDev)
+- Model creator: [IkariDev and Undi95](https://huggingface.co/IkariDev)
 - Original model: [Athena V2](https://huggingface.co/IkariDev/Athena-v2)
 
 <!-- description start -->
 ## Description
 
-This repo contains GGUF format model files for [IkariDev's Athena V2](https://huggingface.co/IkariDev/Athena-v2).
+This repo contains GGUF format model files for [IkariDev and Undi95's Athena V2](https://huggingface.co/IkariDev/Athena-v2).
 
 <!-- description end -->
 <!-- README_GGUF.md-about-gguf start -->
@@ -71,7 +71,7 @@ Here is an incomplate list of clients and libraries that are known to support GG
 * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Athena-v2-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Athena-v2-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Athena-v2-GGUF)
-* [IkariDev's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/IkariDev/Athena-v2)
+* [IkariDev and Undi95's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/IkariDev/Athena-v2)
 <!-- repositories-available end -->
 
 <!-- prompt-template start -->
@@ -297,7 +297,7 @@ And thank you again to a16z for their generous grant.
 <!-- footer end -->
 
 <!-- original-model-card start -->
-# Original model card: IkariDev's Athena V2
+# Original model card: IkariDev and Undi95's Athena V2
 
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/y9gdW2923RkORUxejcLVL.png)