junelegend committed on
Commit
67e77a4
1 Parent(s): 64e8817

Update README.md

Files changed (1)
  1. README.md +34 -1
README.md CHANGED
@@ -9,6 +9,9 @@ tags:
  - llama
  - trl
  base_model: unsloth/llama-3-8b-bnb-4bit
+ datasets:
+ - adeocybersecurity/DockerCommand
+ pipeline_tag: text-generation
  ---
 
  # Uploaded model
@@ -17,6 +20,36 @@ base_model: unsloth/llama-3-8b-bnb-4bit
  - **License:** apache-2.0
  - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
 
+ ## Model Details
+ 
+ This model is finetuned on the [adeocybersecurity/DockerCommand](https://huggingface.co/datasets/adeocybersecurity/DockerCommand) dataset, using [unsloth/llama-3-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit) as the base model. This repository contains only the LoRA adapters; the base model is downloaded automatically when the adapters are loaded.
+ 
+ ## How to use
+ 
+ ```python
+ from unsloth import FastLanguageModel
+ 
+ max_seq_length = 2048  # sequence length used during training
+ dtype = None           # None = auto-detect (bfloat16 on Ampere+ GPUs, float16 otherwise)
+ load_in_4bit = True    # load the base model in 4-bit to reduce memory use
+ 
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     model_name = "llama-3-docker-command-lora",  # YOUR MODEL YOU USED FOR TRAINING
+     max_seq_length = max_seq_length,
+     dtype = dtype,
+     load_in_4bit = load_in_4bit,
+ )
+ FastLanguageModel.for_inference(model)  # Enable native 2x faster inference
+ 
+ # Alpaca-style prompt template; must match the template used during training
+ # (the standard Unsloth Alpaca template is assumed here).
+ alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
+ 
+ ### Instruction:
+ {}
+ 
+ ### Input:
+ {}
+ 
+ ### Response:
+ {}"""
+ 
+ inputs = tokenizer(
+     [
+         alpaca_prompt.format(
+             "translate this sentence in docker command.",  # instruction
+             "Give me a list of all containers, indicating their status as well.",  # input
+             "",  # output - leave this blank for generation!
+         )
+     ], return_tensors = "pt").to("cuda")
+ 
+ outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
+ tokenizer.batch_decode(outputs)
+ ```
+ 
  This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+ [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
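If Unsloth is not available, the same adapters can likely be loaded through Hugging Face's PEFT library instead; the following is a minimal sketch, not part of the commit, under the assumption that the repository contains standard PEFT LoRA adapter files (`adapter_config.json` plus adapter weights) and that `llama-3-docker-command-lora` resolves to that adapter checkpoint:

```python
# Sketch only, not from the commit: load the LoRA adapters with PEFT.
# AutoPeftModelForCausalLM reads adapter_config.json, downloads the base
# model recorded there (unsloth/llama-3-8b-bnb-4bit), and applies the adapters.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "llama-3-docker-command-lora",  # assumed adapter path / Hub id
    device_map = "auto",            # place weights on the available GPU(s)
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")
```

Prompting and generation then proceed exactly as in the README example above, since the PEFT-wrapped model exposes the same `generate` API.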