---
title: Apple OpenELM-270M-Instruct
emoji: 🍎
colorFrom: green
colorTo: red
sdk: gradio
sdk_version: 4.28.2
app_file: app.py
pinned: false
license: other
suggested_hardware: t4-small
---

# Apple OpenELM Models

OpenELM was introduced in [this paper](https://arxiv.org/abs/2404.14619).

This Space demonstrates [apple/OpenELM-270M-Instruct](https://huggingface.co/apple/OpenELM-270M-Instruct) from Apple. Please check the original model card for details.
You can find the other models of the OpenELM family [here](https://huggingface.co/apple/OpenELM).
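
For reference, below is a minimal sketch of how such a demo can be wired up with `transformers` and Gradio. It is an illustration under the stated assumptions, not this Space's actual `app.py`; the tokenizer pairing follows the original model card, and all parameter values shown are placeholders.

```python
# A minimal sketch, not this Space's actual app.py. Assumes gradio 4.x,
# transformers, and torch are installed. OpenELM ships custom modeling code,
# so trust_remote_code=True is required; the original model card pairs it
# with a Llama-2 tokenizer, which lives in a gated repository.
import gradio as gr
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "apple/OpenELM-270M-Instruct"
TOKENIZER_ID = "meta-llama/Llama-2-7b-hf"  # per the original model card; access may be gated

tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)
model.eval()


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    # Tokenize the prompt, generate a continuation, and decode it back to text.
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=int(max_new_tokens))
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


demo = gr.Interface(
    fn=generate,
    inputs=[
        gr.Textbox(label="Prompt"),
        gr.Slider(16, 512, value=128, step=1, label="Max new tokens"),
    ],
    outputs=gr.Textbox(label="Completion"),
    title="Apple OpenELM-270M-Instruct",
)

if __name__ == "__main__":
    demo.launch()
```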

# The following information was taken "as is" from the original model card

## Bias, Risks, and Limitations

The release of OpenELM models aims to empower and enrich the open research community by providing access to state-of-the-art language models. Trained on publicly available datasets, these models are made available without any safety guarantees. Consequently, there exists the possibility of these models producing outputs that are inaccurate, harmful, biased, or objectionable in response to user prompts. Thus, it is imperative for users and developers to undertake thorough safety testing and implement appropriate filtering mechanisms tailored to their specific requirements.

## Citation

If you find our work useful, please cite:

```bibtex
@article{mehtaOpenELMEfficientLanguage2024,
    title = {{OpenELM}: {An} {Efficient} {Language} {Model} {Family} with {Open} {Training} and {Inference} {Framework}},
    shorttitle = {{OpenELM}},
    url = {https://arxiv.org/abs/2404.14619v1},
    language = {en},
    urldate = {2024-04-24},
    journal = {arXiv.org},
    author = {Mehta, Sachin and Sekhavat, Mohammad Hossein and Cao, Qingqing and Horton, Maxwell and Jin, Yanzi and Sun, Chenfan and Mirzadeh, Iman and Najibi, Mahyar and Belenko, Dmitry and Zatloukal, Peter and Rastegari, Mohammad},
    month = apr,
    year = {2024},
}

@inproceedings{mehta2022cvnets,
    author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
    title = {CVNets: High Performance Library for Computer Vision},
    year = {2022},
    booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
    series = {MM '22}
}
```