doberst committed on
Commit
8f05c4b
1 Parent(s): b670bed

initial commit to repo

Files changed (4)
  1. README.md +107 -0
  2. config.json +96 -0
  3. pytorch_model.bin +3 -0
  4. tokenizer.json +0 -0
README.md CHANGED
@@ -1,3 +1,110 @@
  ---
  license: apache-2.0
  ---
+
+ # Model Card for bling-falcon-1b-0.1
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ bling-falcon-1b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, instruct-trained on top of a falcon-rw-1b base model.
+
+ BLING models are fine-tuned with distilled, high-quality custom instruct datasets, targeted at a specific subset of instruct tasks, with
+ the objective of providing a high-quality instruct model that is 'inference-ready' on a CPU laptop, even
+ without any advanced quantization optimizations.
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ - **Developed by:** llmware
+ - **Model type:** Falcon instruct-trained decoder
+ - **Language(s) (NLP):** English
+ - **License:** Apache 2.0
+ - **Finetuned from model:** tiiuae/falcon-rw-1b
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ The intended use of BLING models is two-fold:
+
+ 1. Provide high-quality instruct models that can run on a laptop for local testing. We have found them extremely useful when building a
+ proof of concept, or when working with sensitive enterprise data that must be closely guarded, especially in RAG use cases.
+
+ 2. Push the state of the art for instruct-following models in the sub-7B parameter range, especially 1B-3B, as single-purpose
+ automation tools for specific tasks, through targeted fine-tuning datasets and focused "instruction" tasks.
+
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries such as financial services
+ and legal and regulatory industries with complex information sources. Rather than trying to be "all things to all people," BLING models focus on a narrower set of instructions better suited to a ~1B-parameter GPT model.
+
+ BLING is ideal for rapid prototyping and testing, and for performing an end-to-end workflow locally on a laptop without
+ having to send sensitive information over an Internet-based API.
+
+ The first BLING models have been trained for common RAG scenarios, specifically question-answering, key-value extraction, and basic summarization as the core instruction types,
+ without the need for a lot of complex instruction verbiage - provide a text passage as context, ask questions, and get clear, fact-based responses.
+
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ 1. BLING is not designed for 'chatbot' or 'consumer-oriented' applications.
+
+ 2. BLING is not optimal for most production applications, other than simple and highly specific use cases.
+
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ BLING has not been designed for end consumer-oriented applications, and training has not focused on safeguards to mitigate potential bias. We strongly discourage any use of BLING for any 'chatbot' use case.
+
+
+ ## How to Get Started with the Model
+
+ The fastest way to get started with BLING is through direct import in transformers:
+
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ tokenizer = AutoTokenizer.from_pretrained("llmware/bling-falcon-1b-0.1")
+ model = AutoModelForCausalLM.from_pretrained("llmware/bling-falcon-1b-0.1")
+
+
+ The BLING model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:
+
+ full_prompt = "\<human>\: " + my_prompt + "\n" + "\<bot>\: "
+
+ The BLING model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:
+
+ 1. Text passage context, and
+ 2. A specific question or instruction based on the text passage
+
+ To get the best results, package "my_prompt" as follows:
+
+ my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
+
+
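Putting the pieces above together, here is a minimal end-to-end sketch for CPU inference. The sample passage, question, and generation settings (e.g., max_new_tokens=100) are illustrative placeholders rather than values from the model card.

    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    # Repeat the load step from above so the sketch is self-contained
    tokenizer = AutoTokenizer.from_pretrained("llmware/bling-falcon-1b-0.1")
    model = AutoModelForCausalLM.from_pretrained("llmware/bling-falcon-1b-0.1")
    model.eval()

    # Placeholder closed-context inputs - substitute your own text passage and question
    text_passage = "The lease term is 36 months, beginning on January 1, 2024."
    question = "What is the length of the lease term?"

    # Package the prompt as described above: context + newline + question,
    # wrapped in the <human>/<bot> markers used during fine-tuning
    my_prompt = text_passage + "\n" + question
    full_prompt = "<human>: " + my_prompt + "\n" + "<bot>: "

    inputs = tokenizer(full_prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=100)

    # Keep only the newly generated tokens after the prompt
    response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    print(response)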
+ ## Citation
+
+ This BLING model was built on top of a Falcon model base - for more information, please see the paper referenced below:
+
+ {
+ Title: "The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only"
+ Authors: Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, Julien Launay
+ Publication Date: June 1, 2023
+ }
+
+
+ ## Model Card Contact
+
+ Darren Oberst & llmware team
+
+ Please reach out anytime if you are interested in this project and would like to participate and work with us!
+
config.json ADDED
@@ -0,0 +1,96 @@
+ {
+ "model_name": "bling-falcon-1b-0.1",
+ "description": "Instruct train fine-tuning using distilled knowledge based critical reading tasks training dataset",
+ "training_timestamp": "Sat Oct 7 09:46:53 2023",
+ "training_comments": "falcon-rw-1b-base",
+ "_name_or_path": "tiiuae/falcon-rw-1b",
+ "model_type": "falcon",
+ "vocab_size": 50304,
+ "hidden_size": 2048,
+ "num_hidden_layers": 24,
+ "num_attention_heads": 32,
+ "layer_norm_epsilon": 1e-05,
+ "initializer_range": 0.02,
+ "use_cache": true,
+ "hidden_dropout": 0.0,
+ "attention_dropout": 0.0,
+ "bos_token_id": 1,
+ "eos_token_id": 2,
+ "num_kv_heads": 32,
+ "alibi": true,
+ "new_decoder_architecture": false,
+ "multi_query": false,
+ "parallel_attn": false,
+ "bias": true,
+ "return_dict": true,
+ "output_hidden_states": false,
+ "output_attentions": false,
+ "torchscript": false,
+ "torch_dtype": "bfloat16",
+ "use_bfloat16": false,
+ "tf_legacy_loss": false,
+ "pruned_heads": {},
+ "tie_word_embeddings": true,
+ "is_encoder_decoder": false,
+ "is_decoder": false,
+ "cross_attention_hidden_size": null,
+ "add_cross_attention": false,
+ "tie_encoder_decoder": false,
+ "max_length": 20,
+ "min_length": 0,
+ "do_sample": false,
+ "early_stopping": false,
+ "num_beams": 1,
+ "num_beam_groups": 1,
+ "diversity_penalty": 0.0,
+ "temperature": 1.0,
+ "top_k": 50,
+ "top_p": 1.0,
+ "typical_p": 1.0,
+ "repetition_penalty": 1.0,
+ "length_penalty": 1.0,
+ "no_repeat_ngram_size": 0,
+ "encoder_no_repeat_ngram_size": 0,
+ "bad_words_ids": null,
+ "num_return_sequences": 1,
+ "chunk_size_feed_forward": 0,
+ "output_scores": false,
+ "return_dict_in_generate": false,
+ "forced_bos_token_id": null,
+ "forced_eos_token_id": null,
+ "remove_invalid_values": false,
+ "exponential_decay_length_penalty": null,
+ "suppress_tokens": null,
+ "begin_suppress_tokens": null,
+ "architectures": [
+ "FalconForCausalLM"
+ ],
+ "finetuning_task": null,
+ "id2label": {
+ "0": "LABEL_0",
+ "1": "LABEL_1"
+ },
+ "label2id": {
+ "LABEL_0": 0,
+ "LABEL_1": 1
+ },
+ "tokenizer_class": null,
+ "prefix": null,
+ "pad_token_id": null,
+ "sep_token_id": null,
+ "decoder_start_token_id": null,
+ "task_specific_params": null,
+ "problem_type": null,
+ "_name_or_path": "tiiuae/falcon-rw-1b",
+ "transformers_version": "4.28.1",
+ "apply_residual_connection_post_layernorm": false,
+ "auto_map": {
+ "AutoConfig": "configuration_falcon.FalconConfig",
+ "AutoModel": "modeling_falcon.FalconModel",
+ "AutoModelForSequenceClassification": "modeling_falcon.FalconForSequenceClassification",
+ "AutoModelForTokenClassification": "modeling_falcon.FalconForTokenClassification",
+ "AutoModelForQuestionAnswering": "modeling_falcon.FalconForQuestionAnswering",
+ "AutoModelForCausalLM": "modeling_falcon.FalconForCausalLM"
+ },
+ "trained": "custom training"
+ }
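As a quick sanity check (a sketch, assuming the standard transformers AutoConfig API and a transformers version with native Falcon support), the key architecture fields above can be inspected directly:

    from transformers import AutoConfig

    # Load the config shipped with the model and confirm the architecture fields listed above
    config = AutoConfig.from_pretrained("llmware/bling-falcon-1b-0.1")
    print(config.model_type)            # "falcon"
    print(config.hidden_size)           # 2048
    print(config.num_hidden_layers)     # 24
    print(config.num_attention_heads)   # 32
    print(config.vocab_size)            # 50304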
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b6ea6aeb7de3e54bd5d183c789a2785b607f67d52f9a0f3924bb3ec085e53aa0
+ size 5246615285
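The entry above is a Git LFS pointer rather than the weights themselves. As an illustrative sketch (assuming pytorch_model.bin has already been downloaded locally; the path is a placeholder), the file can be checked against the recorded oid and size:

    import hashlib
    import os

    # Expected values taken from the LFS pointer above
    EXPECTED_SHA256 = "b6ea6aeb7de3e54bd5d183c789a2785b607f67d52f9a0f3924bb3ec085e53aa0"
    EXPECTED_SIZE = 5246615285

    path = "pytorch_model.bin"  # path to the downloaded weights (placeholder)

    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MB chunks to avoid loading the ~5 GB file into memory
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            sha256.update(chunk)

    print("size ok:  ", os.path.getsize(path) == EXPECTED_SIZE)
    print("sha256 ok:", sha256.hexdigest() == EXPECTED_SHA256)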
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff