|
---
license: apache-2.0
---
|
|
|
# HelixNet-LMoE |
|
|
|
HelixNet-LMoE is a simple LoRA-based Mixture of Experts version of the [HelixNet](https://huggingface.co/migtissera/HelixNet) 3-model system by [Migel Tissera](https://huggingface.co/migtissera).
|
|
|
_Update_: There is a 6bpw LMoE version that runs the entire 3-model system much faster, using 8 GB of GPU memory in total. ExLlamaV2 version here: [HelixNet-LMoE-6.0bpw-h6-exl2](https://huggingface.co/rhysjones/HelixNet-LMoE-6.0bpw-h6-exl2).
|
|
|
For each HelixNet model, a separate LoRA adapter was extracted:
|
* [HelixNet-LMoE-Actor](https://huggingface.co/rhysjones/HelixNet-LMoE-Actor) |
|
* [HelixNet-LMoE-Critic](https://huggingface.co/rhysjones/HelixNet-LMoE-Critic) |
|
* [HelixNet-LMoE-Regenerator](https://huggingface.co/rhysjones/HelixNet-LMoE-Regenerator) |
|
|
|
These are then loaded together with the base [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) model to give the combined LMoE model.
|
|
|
As HelixNet processes its inputs using the actor, critic and regenerator actions, the corresponding LoRA adapter is dynamically enabled as required. |
|
|
|
It is similar in approach to [Airoboros' LMoE implementation](https://github.com/jondurbin/airoboros/tree/main#lmoe), allowing GPU memory requirements in this (unquantized) instance to be reduced from 3 x 14 GB to 1 x 14 GB + 3 x 320 MB.
|
The LoRAs were extracted using the process given in [https://github.com/uukuguy/multi_loras](https://github.com/uukuguy/multi_loras), with a rank of 64 and an alpha of 128.
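Conceptually, this kind of extraction approximates the weight delta between each fine-tuned HelixNet model and the Mistral-7B base with a truncated SVD. The sketch below illustrates the idea only; it is not the exact multi_loras pipeline, and the helper name and scaling convention are assumptions.

```python
import torch

def extract_lora_pair(w_base, w_finetuned, rank=64, alpha=128):
    """Approximate (w_finetuned - w_base) with a rank-`rank` LoRA pair.

    PEFT applies a LoRA update as (alpha / rank) * B @ A, so the factors are
    scaled here so the reconstruction matches the weight delta.
    (Illustrative sketch, not the multi_loras implementation.)
    """
    delta = (w_finetuned - w_base).float()          # (out_features, in_features)
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    u, s, vh = u[:, :rank], s[:rank], vh[:rank, :]  # keep the top-`rank` components
    s_scaled = s * (rank / alpha)                   # undo PEFT's alpha/rank scaling
    lora_B = u * s_scaled.sqrt()                    # (out_features, rank)
    lora_A = s_scaled.sqrt()[:, None] * vh          # (rank, in_features)
    return lora_A, lora_B
```

In practice something like this would be run over each target projection matrix (for example the attention and MLP projections) of the actor, critic and regenerator, against the shared base weights.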
|
|
|
# Prompt Format
|
|
|
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: What is the relationship between Earth's atmosphere, magnetic field and gravity?
ASSISTANT:
```
|
# Example Usage |
|
|
|
The following code example shows how to use HelixNet-LMoE. No special system-context messages are needed for the `critic` and the `regenerator`.
|
At the **You:** prompt, enter a question such as _What is the relationship between Earth's atmosphere, magnetic field and gravity?_ |
|
|
|
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel


def load_model(model_path):
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        torch_dtype=torch.float16,
        device_map="cuda",
        trust_remote_code=True,
    )
    return model


def load_tokenizer(model_path):
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    return tokenizer


def generate_text(instruction, adapter):
    # Select our required LoRA adapter
    adapter_model.set_adapter(adapter)

    tokens = base_tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = adapter_model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
            pad_token_id=base_tokenizer.eos_token_id,
        )
    output = rest[0][length:]
    string = base_tokenizer.decode(output, skip_special_tokens=True)
    return f"{string}"


# Load our base Mistral 7B model and tokenizer
base_model = load_model("mistralai/Mistral-7B-v0.1")
base_tokenizer = load_tokenizer("mistralai/Mistral-7B-v0.1")

# Load in our three different LoRA adapters for the actor, critic and regenerator
adapter_model = PeftModel.from_pretrained(base_model, "rhysjones/HelixNet-LMoE-Actor", adapter_name="actor")
adapter_model.load_adapter("rhysjones/HelixNet-LMoE-Critic", adapter_name="critic")
adapter_model.load_adapter("rhysjones/HelixNet-LMoE-Regenerator", adapter_name="regenerator")

system_prompt = "You are HelixNet. Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."

while True:
    user_input = input("You: ")

    # Actor: produce the initial response to the question
    prompt_actor = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nASSISTANT: "
    actor_response = generate_text(prompt_actor, "actor")
    print(f"ACTOR: {actor_response}\n\n")

    # Critic: critique the actor's response
    prompt_critic = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE:"
    critic_response = generate_text(prompt_critic, "critic")
    print(f"CRITIQUE: {critic_response}\n\n")

    # Regenerator: rewrite the answer taking the critique into account
    prompt_regenerator = f"SYSTEM: {system_prompt} \nUSER: {user_input} \nRESPONSE: {actor_response} \nCRITIQUE: {critic_response} \nREGENERATOR:"
    regenerator_response = generate_text(prompt_regenerator, "regenerator")
    print(f"REGENERATION: {regenerator_response}")
```
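This example assumes `torch`, `transformers` and `peft` are installed (plus `accelerate` for `device_map`). In fp16, the base model and the three adapters together need roughly 14 GB + 3 x 320 MB of GPU memory, as noted above.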
|
|
|
# LLM Evaluation |
|
|
|
Evaluation of a merged version of each base + LoRA model has yet to be run on the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) to see how it compares to the corresponding full HelixNet model.
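For reference, one way such a merged checkpoint could be produced with PEFT is sketched below (the output directory name is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16)

# Fold a single adapter's weights back into the base model so it can be evaluated standalone.
actor = PeftModel.from_pretrained(base, "rhysjones/HelixNet-LMoE-Actor")
merged = actor.merge_and_unload()
merged.save_pretrained("HelixNet-LMoE-Actor-merged")
```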
|
|
|
# HelixNet Details |
|
|
|
HelixNet is a Deep Learning architecture consisting of 3 x Mistral-7B LLMs. It has an `actor`, a `critic`, and a `regenerator`. The `actor` LLM produces an initial response to a given system-context and a question. The `critic` then takes as input a tuple of (system-context, question, response) and provides a critique of the response with respect to the given system-context and question. Its job is not to criticize, but to provide an intelligent critique so that the answer can be modified/regenerated to address the question better. Finally, the `regenerator` takes in a tuple of (system-context, question, response, critique) and regenerates the answer.
|
|
|
HelixNet is inspired by the actor-critic architecture most prominent in Reinforcement Learning algorithms. The name derives from Helix, referring to the spiral structure of a DNA molecule. It symbolizes the intertwined nature of the three networks, working in tandem, much like the strands of a DNA molecule.
|
|
|
HelixNet regenerates very pleasing and accurate responses, due to the entropy preservation of the regenerator. The regenerator was trained on a dataset of only 1,000 samples, similar to Meta's LIMA. The actor network was trained on about 250K very high-quality samples, and the critic network was trained on a further 10K samples.
|
|
|
Full details on how HelixNet was trained and evaluated are available at [https://huggingface.co/migtissera/HelixNet](https://huggingface.co/migtissera/HelixNet).
|
|