---
language: en
datasets:
- FinTalk-19k
tags:
- summarization
- classification
- translation
- NLP
- finance
- domain specific llm
license: apache-2.0
pipeline_tag: text-generation
---

# FinanceConnect

FinanceConnect is a state-of-the-art, open-source chat model tailored for finance and economic discussions. Built on the robust Llama2-13B architecture, this model has been fine-tuned on a combination of the FinTalk-19k and Alpaca datasets, making it a valuable resource for finance professionals, researchers, and enthusiasts.

## Model Details

- Architecture: Llama2-13B
- Training Datasets: [FinTalk-19k](https://huggingface.co/datasets/ceadar-ie/FinTalk-19k), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca)

## Datasets Utilized: FinTalk-19k and Alpaca

Drawing strength from the FinTalk-19k and Alpaca datasets, curated collections focused on financial knowledge, this model provides insights and information related to the finance industry. For a deeper dive into the datasets, visit: [FinTalk-19k](https://huggingface.co/datasets/ceadar-ie/FinTalk-19k), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca)

### Model Specification

- **Developed by:** CeADAR Connect Group
- **Model type:** Large Language Model
- **Language(s):** en
- **Finetuned from model:** Llama2-13B

## Key Features and Functionalities

### Domain Specialization
The FinanceConnect model is specialized in finance conversations, serving as a resource for financial researchers and enthusiasts.
### Model API Accessibility
Offers a straightforward Python integration for generating financial content and insights.
### Performance Optimisation
Efficient performance across both CPU and GPU platforms.
### Data Representation
Utilises a combination of comprehensive finance datasets, enabling content generation to professional standards.

## Model Usage

Experience the capabilities of the FinanceConnect model through a well-structured Python interface. To kick-start your exploration, follow the steps and snippets below:

## Prerequisites
### 1. Ensure required packages are available

```python
# These imports assume the torch and transformers packages are installed;
# bitsandbytes and accelerate are also required for the 8-bit loading and
# device_map="auto" used in the next step.
import time
from typing import Any, Dict

import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    HfArgumentParser,
    TrainingArguments,
    PreTrainedTokenizerFast,
    pipeline,
    logging,
)
```
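
Of the imports above, `logging` is not used by the snippets in this card; if desired, it can quiet the library's progress and warning output while experimenting. This is optional and purely a convenience:

```python
# Optional: silence transformers' informational and warning messages.
logging.set_verbosity_error()
```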

### 2. Initiate the model and tokenizer

```python
model_name = "ceadar-ie/FinanceConnect-13B"

tokenizer = AutoTokenizer.from_pretrained(model_name)

# 8-bit loading reduces memory usage and requires the bitsandbytes package.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    load_in_8bit=True,
    device_map="auto",
    trust_remote_code=True,
)
```
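
Newer `transformers` releases deprecate passing `load_in_8bit=True` directly to `from_pretrained` in favour of an explicit quantization config. Below is a sketch of the equivalent call using the `BitsAndBytesConfig` class imported earlier; it is an alternative to, not a replacement for, the snippet above:

```python
# Alternative 8-bit loading path via an explicit quantization config;
# functionally equivalent to load_in_8bit=True and also requires bitsandbytes.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    quantization_config=quant_config,
    device_map="auto",
    trust_remote_code=True,
)
```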

### 3. Create a function for generating text

```python
def generate_text(input_text):
    tokenizer.padding_side = "right"
    num_words_input = len(input_text.split())

    start_time = time.time()

    # Build a text-generation pipeline around the loaded model and tokenizer.
    pipe = pipeline(
        task="text-generation",
        model=model,
        tokenizer=tokenizer,
        max_length=1000,
        temperature=1,
    )

    # Wrap the input in the Llama2-style [INST] instruction format.
    prompt = f" <s>[INST] {input_text} [/INST]"
    result = pipe(prompt)
    generated_text = result[0]["generated_text"].replace(prompt, "").strip()

    end_time = time.time()
    time_taken = end_time - start_time

    num_words_generated = len(generated_text.split())

    print("Generated Output:")
    print(generated_text)
    print("=" * 60)
    print("\nNumber of Words in Input Text:", num_words_input)
    print("Number of Words in Generated Output:", num_words_generated)
    print("Time Taken to Generate Output: {:.2f} seconds".format(time_taken))
```
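
With the model, tokenizer, and helper defined above, generating a response is a single call, shown here with the first example prompt from the section below:

```python
generate_text("List the key factors influencing the current state of the global economy.")
```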

## Example Evaluation and Use

### Example Prompt 1:
List the key factors influencing the current state of the global economy.

#### Generated Output:
The key factors influencing the current state of the global economy are:

1. COVID-19 pandemic and its impact on consumer spending and supply chains
2. Inflation and its impact on consumer prices and central banks' monetary policies
3. Interest rates and their impact on borrowing and investment
4. Global economic growth and its impact on trade and investment
5. Geopolitical tensions and their impact on trade and investment
6. Technological advancements and their impact on industries and employment
7. Shifts in global economic power and their impact on trade and investment
8. Environmental factors such as climate change and their impact on industries and economies.

These are some of the key factors that influence the current state of the global economy.

### Example Prompt 2:
Explain the concept of quantitative easing and its impact on financial markets.

#### Generated Output:
Quantitative easing (QE) is a monetary policy tool used by central banks to stimulate economic growth during times of recession or low inflation. It involves the central bank purchasing government bonds from banks, which increases the money supply and lowers interest rates. This can lead to a decrease in the value of a country's currency, making exports cheaper for foreign buyers and potentially boosting economic growth. However, QE can also lead to inflation, as the increased money supply can drive up prices.

## Training Details

### Training Hyperparameters
- per_device_train_batch_size = 10
- gradient_accumulation_steps = 4
- optim = "paged_adamw_32bit"
- learning_rate = 2e-4
- max_grad_norm = 0.3
- warmup_ratio = 0.03

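For reference, these headline values map directly onto `transformers.TrainingArguments`; the sketch below shows that mapping, with fields not listed in this card (output directory, epoch count, precision) filled in as illustrative placeholders only:

```python
from transformers import TrainingArguments

# Sketch only: values not listed above are placeholders, not the card's settings.
training_args = TrainingArguments(
    output_dir="./financeconnect-13b",  # placeholder
    per_device_train_batch_size=10,
    gradient_accumulation_steps=4,
    optim="paged_adamw_32bit",
    learning_rate=2e-4,
    max_grad_norm=0.3,
    warmup_ratio=0.03,
    num_train_epochs=1,  # placeholder
    fp16=True,           # placeholder
)
```
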
## Model Limitations
Potential Biases: With its fine-tuning centered on financial conversation sources, inherent biases from these sources may be reflected in the model's outputs.
## Licensing
The FinanceConnect model, developed by CeADAR Connect Group, combines the licensing frameworks of Llama2, FinTalk-19k, and Alpaca. Under Meta's terms, users are granted a non-exclusive, worldwide, non-transferable, royalty-free limited license to use and modify the Llama Materials, inclusive of the Llama2 model and its associated documentation. When redistributing, the provided Agreement and a specific attribution notice must be included. In alignment with the licensing of the FinTalk dataset and the Alpaca dataset, the model is also distributed under the "cc-by-nc-4.0" license.
## Out-of-Scope Use
FinanceConnect is specifically tailored for financial discussions and knowledge. It is not optimized for:
- General, non-finance-related conversations.
- Domain-specific tasks outside finance.
- Direct interfacing with physical devices or applications.
## Bias, Risks, and Limitations
- Dataset Biases: The FinTalk-19k and Alpaca datasets may contain inherent biases that influence the model's outputs.
- Over-reliance: The model is an aid, not a replacement for human expertise. Decisions should be made with careful consideration.
- Content Understanding: The model lacks human-like understanding and cannot judge the veracity of knowledge.
- Language Limitations: The model's primary language is English. Performance may decrease with other languages.
- Knowledge Cut-off: The model may not be aware of events or trends after its last training update.
## Citation:

## Contact:
For any further inquiries or feedback concerning FinanceConnect, please forward your communications to ahtsham.zafar@ucd.ie