doberst committed
Commit
c8319d6
1 Parent(s): c2902b7

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -82,10 +82,11 @@ The fastest way to get started with BLING is through direct import in transformers
  tokenizer = AutoTokenizer.from_pretrained("dragon-yi-6b-0.1")
  model = AutoModelForCausalLM.from_pretrained("dragon-yi-6b-0.1")

+ Please refer to the generation_test .py files in the Files repository, which include 200 samples and a script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and retrieval, so that the test set can be swapped out for a RAG workflow over business documents.

  The DRAGON model was fine-tuned with a simple "\<human>" and "\<bot>" wrapper, so to get the best results, wrap inference entries as:

- full_prompt = "\<human>\: " + my_prompt + "\n" + "\<bot>\:"
+ full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

  The BLING model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:

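
As a quick reference, here is a minimal end-to-end sketch combining the loading lines and the prompt wrapper from the diff above. The sample question and the generation settings (`max_new_tokens`, greedy decoding) are illustrative assumptions, not values taken from the repo.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load exactly as in the README above.
tokenizer = AutoTokenizer.from_pretrained("dragon-yi-6b-0.1")
model = AutoModelForCausalLM.from_pretrained("dragon-yi-6b-0.1")

# Wrap the inference entry in the <human>/<bot> format the model was fine-tuned on.
my_prompt = "What is the total amount of the invoice?"  # hypothetical sample question
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

# Tokenize and generate; max_new_tokens and greedy decoding are assumptions.
inputs = tokenizer(full_prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=100)

# Slice off the echoed prompt tokens so only the model's answer is decoded.
answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)
```

Slicing `outputs[0]` past the prompt length avoids re-printing the wrapped prompt, since causal LMs return the input tokens along with the completion.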
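
The fact-checking workflow mentioned for **generation_test_llmware_script.py** can also be reached directly through the llmware library. This is a rough sketch, assuming llmware's `Prompt` interface (`load_model` and `prompt_main` with a `context` argument); the script in this repo is the authoritative example.

```python
# Sketch only: assumes llmware's Prompt API; see generation_test_llmware_script.py
# in the Files repository for the actual test harness and fact-checking steps.
from llmware.prompts import Prompt

# Load the model through llmware rather than raw transformers, using the
# model name as given in the README.
prompter = Prompt().load_model("dragon-yi-6b-0.1")

# Closed-context inference: the passage to reason over is passed as context.
context_passage = "The invoice total is $1,250, due on March 1."  # hypothetical sample
response = prompter.prompt_main("What is the total amount of the invoice?",
                                context=context_passage)

print(response["llm_response"])
```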