---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- llama
- llama-3
- coreml
- text-generation
license: llama3
---


# Meta Llama 3 8B – Core ML

This repository contains a Core ML conversion of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B).


This conversion does not include a KV cache. The interface is simple: inputs are `int32` token IDs, outputs are `float16` logits.
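Because there is no KV cache, a generation loop has to re-feed the entire sequence to the model at every step. Below is a minimal sketch of what greedy decoding against this interface might look like; the `predict` callable, the input layout, and the `(1, seq_len, vocab)` logits shape are assumptions about the converted model, not a confirmed interface:

```python
import numpy as np

def greedy_generate(predict, prompt_ids, max_new_tokens=20, eos_id=None):
    """Greedy decoding without a KV cache.

    `predict` is assumed to take an int32 array of shape (1, seq_len) and
    return float16 logits of shape (1, seq_len, vocab_size). Since nothing
    is cached, every step re-runs the full sequence, so each forward pass
    costs O(seq_len).
    """
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        input_ids = np.asarray([ids], dtype=np.int32)  # model expects int32
        logits = predict(input_ids)                    # float16 logits out
        next_id = int(np.argmax(logits[0, -1]))        # pick the top token
        ids.append(next_id)
        if eos_id is not None and next_id == eos_id:
            break
    return ids
```

With `coremltools`, `predict` could be a thin wrapper around `MLModel.predict` that packs the array into the model's input dict and pulls the logits back out (the actual input/output names depend on how the model was converted, so check them with `model.get_spec()` first).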

I haven't been able to test this myself, so please leave a note in the Community tab letting me know how you tested it and how it worked.

I ran `model.half()` before scripting/converting, thinking it would reduce my memory usage (I've since read online that it doesn't). I'm not sure whether it affected the conversion process.