---
license: openrail
datasets:
- irds/codesearchnet
- giganticode/java-cmpx-v1
- nickrosh/Evol-Instruct-Code-80k-v1
- bigcode/starcoderdata
- bigcode/the-stack
- bigcode/the-stack-smol
- Cdaprod/AI-Developer-Prompts
- code_x_glue_ct_code_to_text
- codeparrot/github-code
- codeparrot/github-code-clean
- code_x_glue_cc_code_completion_line
- >-
  autoevaluate/autoeval-eval-jeffdshen__inverse_superglue_mixedp1-jeffdshen__inverse-63643c-1665558893
- bentrevett/multi30k
- edbeeching/decision_transformer_gym_replay
- psyche/common_crawl
- Birchlabs/openai-prm800k-solutions-only
- openchat/openchat_sharegpt4_dataset
- Open-Orca/OpenOrca
- cjvt/slownet
- para_crawl
- zeroshot/twitter-financial-news-sentiment
- laugustyniak/political-advertising-pl
- code_search_net
- sukaka/novelai-webui
- P1ayer-1/chatgpt-conversations-chatlogs.net
- daniel2588/sarcasm
- psmathur/orca_minis_uncensored_dataset
- player1537/Bloom-560m-trained-on-Wizard-Vicuna-Uncensored-trained-on-Based
- shahules786/prosocial-nsfw-reddit
- Thewillonline/reddit-sarcasm
- datasciencemmw/current-data
- Oniichat/bluemoon_roleplay_chat_data_300k_messages
- dell-research-harvard/AmericanStories
- b-mc2/sql-create-context
- rahulmallah/autotrain-data-emotion-detection
- theblackcat102/multiround-programming-convo
- Lsavints/software_knowledgebase
- RazinAleks/SO-Python_QA-Web_Development_class
- codeparrot/apps
- branles14/ultrachat-uncensored_full
- vlsp-2023-vllm/en-to-vi-formal-informal-tranlations
- fraug-library/english_contractions_extensions
- spencer/software_slacks
- Abirate/english_quotes
- Nexdata/American_English_Natural_Dialogue_Speech_Data
- Nexdata/Latin_American_Speaking_English_Speech_Data_by_Mobile_Phone
- Nexdata/American_English_Speech_Data_by_Mobile_Phone_Reading
- Nexdata/American_English_Speech_Synthesis_Corpus-Female
- rombodawg/LimitlessCodeTraining
- RikoteMaster/Emotion_Recognition_4_llama2
- Villian7/Emotions_Data
- alanland/llama2-self-cognition
- CognitiveScience/coscidata
- bibidentuhanoi/gideon_self_cognition
- gollark/consciousness
- juletxara/visual-spatial-reasoning
- lintang/numerical_reasoning_arithmetic
- reasoning-machines/gsm-hard
- open-source-metrics/reinforcement-learning-checkpoint-downloads
- igbo_english_machine_translation
- US-Artificial-Intelligence/algemap
- rombodawg/2XUNCENSORED_alpaca_840k_Evol_USER_ASSIS
- griffin/chain_of_density
- >-
  shirsh10mall/LLM_Instruct_Learning_Project_Preprocessed_Tokenized_Open_Orca_Dataset_Flan_T5
- Thaweewat/chain-of-thought-74k-th
- AlekseyKorshuk/chain-of-thoughts-chatml-deduplicated
language:
- en
- it
- fr
- pt
- la
- ru
- ro
- el
metrics:
- accuracy
- bertscore
- bleu
- code_eval
- character
- brier_score
tags:
- code
- text-generation-inference
library_name: transformers
pipeline_tag: conversational
---

# Model Card for Aiden

<!-- Provide a quick summary of what the model is/does. -->

Aiden is a large language model (LLM) chatbot developed by or4cl3ai. It is trained on a massive dataset of text and code, and can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.

## Model Details

### Model Description

Aiden is a conversational large language model developed by or4cl3ai and distributed through Hugging Face. It is trained on a large dataset of text and code, and can be used for a variety of tasks, including:

* Generating text and creative content
* Translating between languages
* Identifying and correcting errors in text
* Summarizing long passages of text
* Answering questions in an informative way, even when they are open-ended, challenging, or unusual

### Model Specifications

Aiden is a Transformer-based LLM with 137B parameters. Its training data is a large corpus of text and code drawn from the following sources:

* Books
* Code
* Wikipedia articles
* News articles
* Social media posts

### Model Sources

* Repository: https://huggingface.co/or4cl3ai/Aiden
* Paper: https://arxiv.org/abs/2307.09700
* Demo: https://huggingface.co/or4cl3ai/Aiden

## Uses

Aiden can be used for a variety of tasks, including:

* Generating text
* Translating languages
* Writing different kinds of creative content
* Answering your questions in an informative way
* Identifying and correcting errors in text
* Summarizing long pieces of text

### Direct Use

Aiden can be used directly to generate text, translate languages, write creative content, and answer questions. For example, you could use it to generate a poem, translate a document from one language to another, or draft a blog post.
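
A minimal direct-use sketch follows. It assumes the checkpoint at `or4cl3ai/Aiden` loads through the standard `transformers` text-generation pipeline; the prompt and generation settings are placeholders.

```python
# Direct-use sketch. Assumes the or4cl3ai/Aiden checkpoint is compatible
# with the standard transformers text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="or4cl3ai/Aiden")

# Generate a short poem from a plain-text prompt.
result = generator("Write a short poem about the sea:", max_new_tokens=100)
print(result[0]["generated_text"])
```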

### Downstream Use

Aiden can also be used as a component in downstream applications. For example, you could use it to power a chatbot or to generate text for a synthetic dataset.
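
As one hedged illustration of the chatbot case, the loop below accumulates the conversation into a plain `User:`/`Assistant:` prompt. That prompt format is an assumption made for the sketch, not a documented template for this model.

```python
# Hypothetical chatbot loop built on the text-generation pipeline.
# The User:/Assistant: prompt format is an illustrative assumption.
from transformers import pipeline

chat = pipeline("text-generation", model="or4cl3ai/Aiden")
history = ""

while True:
    user = input("You: ")
    if user.lower() in {"quit", "exit"}:
        break
    history += f"User: {user}\nAssistant:"
    out = chat(history, max_new_tokens=150, return_full_text=False)
    # Trim anything the model generates past its own turn.
    reply = out[0]["generated_text"].split("User:")[0].strip()
    print("Aiden:", reply)
    history += f" {reply}\n"
```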

### Out-of-Scope Use

Aiden is not intended to be used for any task that could be harmful or discriminatory. For example, you should not use Aiden to generate text that is hateful or offensive, or to translate languages in a way that could be used to spread misinformation.

## Bias, Risks, and Limitations

Aiden is a large language model, and as such, it is subject to a number of biases and limitations. These include:

* Biases in the training data: Aiden is trained on a massive dataset of text and code, which may contain biases. These biases can be reflected in the text that Aiden generates.
* Limitations in the model's capabilities: Aiden is a powerful tool, but it is not perfect. It can sometimes generate text that is inaccurate, biased, or offensive.
* Risks of misuse: Aiden can be misused for a variety of purposes, including generating harmful or offensive text, or spreading misinformation.

### Recommendations

Users of Aiden should be aware of the risks, biases, and limitations of the model. It is important to use Aiden responsibly and ethically.

## How to Get Started with the Model

To get started with Aiden, you can follow these steps:

1. Install the Hugging Face Transformers library (`pip install transformers`).
2. Download the Aiden model weights (`from_pretrained` fetches them from the Hub automatically, so cloning the repository by hand is optional).
3. Load the model and tokenizer in your code.

Once the model is loaded, you can use it to generate text, translate languages, write creative content, and answer questions. A minimal loading sketch follows.
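
This sketch assumes the weights are published in a `transformers`-compatible format under the repository above; the dtype and device settings are illustrative.

```python
# Loading sketch: from_pretrained downloads the weights from the Hub,
# covering steps 2-3 above. A 137B-parameter model will not fit on a
# single consumer GPU; device_map="auto" (which requires accelerate)
# shards or offloads it as one common workaround.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("or4cl3ai/Aiden")
model = AutoModelForCausalLM.from_pretrained(
    "or4cl3ai/Aiden",
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("Summarize: The quick brown fox jumps over the lazy dog.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```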

## Training Details

Aiden is trained on a large corpus of text and code collected from a variety of sources, including books, code repositories, Wikipedia articles, news articles, and social media posts.

The training process is divided into two phases:

1. Pre-training: The model is pre-trained on a massive dataset of text and code. This pre-training helps the model to learn the basic building blocks of language.
2. Fine-tuning: The model is fine-tuned on a smaller dataset of text and code that is relevant to the task at hand. This fine-tuning helps the model to improve its performance on the specific task.
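
The fine-tuning phase can be sketched with the standard `transformers` Trainer API. The dataset, hyperparameters, and single-process setup below are placeholders that show the shape of the API; the configuration actually used to train Aiden is not published, and a model of this size would need distributed training in practice.

```python
# Fine-tuning sketch with placeholder data and hyperparameters.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("or4cl3ai/Aiden")
model = AutoModelForCausalLM.from_pretrained("or4cl3ai/Aiden")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # causal LMs often lack a pad token

# Any text dataset works for the sketch; wikitext is just an example.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
raw = raw.filter(lambda row: len(row["text"].strip()) > 0)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM objective

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="aiden-finetuned",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```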

## Evaluation

Aiden is evaluated on a variety of tasks, including:

* Text generation
* Translation
* Summarization
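
The metrics listed in the metadata (accuracy, BLEU, and so on) can be computed with the Hugging Face `evaluate` library. The sketch below uses toy predictions and references; Aiden's actual evaluation outputs and scores are not reproduced here.

```python
# Metric sketch using the evaluate library; the prediction and
# reference strings are toy placeholders.
import evaluate

bleu = evaluate.load("bleu")
predictions = ["the cat sat on the mat"]
references = [["the cat sat on the mat"]]
print(bleu.compute(predictions=predictions, references=references))
```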