srowen committed
Commit
e88788c
1 Parent(s): 61c07d4

Port simple python markdown change from 3B repo

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -35,7 +35,7 @@ on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tr
 To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed.
 In a Databricks notebook you could run:
 
-```
+```python
 %pip install accelerate>=0.12.0 transformers[torch]==4.25.1
 ```
 
@@ -44,7 +44,7 @@ found in the model repo [here](https://huggingface.co/databricks/dolly-v2-3b/blo
 Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality.
 It is also fine to remove it if there is sufficient memory.
 
-```
+```python
 import torch
 from transformers import pipeline
 
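The hunk above stops at the imports; the pipeline call itself appears only, truncated, in the next hunk's header. A minimal sketch of how that construction plausibly continues, where `trust_remote_code=True` and `device_map="auto"` are assumptions (the diff context confirms only the `model` and `torch_dtype` arguments):

```python
import torch
from transformers import pipeline

# model= and torch_dtype= are visible in the diff context; trust_remote_code and
# device_map are assumed here to make the sketch self-contained.
generate_text = pipeline(model="databricks/dolly-v2-7b", torch_dtype=torch.bfloat16,
                         trust_remote_code=True, device_map="auto")
```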
@@ -53,7 +53,7 @@ generate_text = pipeline(model="databricks/dolly-v2-7b", torch_dtype=torch.bfloa
 
 You can then use the pipeline to answer instructions:
 
-```
+```python
 res = generate_text("Explain to me the difference between nuclear fission and fusion.")
 print(res[0]["generated_text"])
 ```
@@ -61,7 +61,7 @@ print(res[0]["generated_text"])
 Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-3b/blob/main/instruct_pipeline.py),
 store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
 
-```
+```python
 import torch
 from instruct_pipeline import InstructionTextGenerationPipeline
 from transformers import AutoModelForCausalLM, AutoTokenizer
@@ -77,7 +77,7 @@ generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokeniz
 To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned
 and the default for the pipeline is to only return the new text.
 
-```
+```python
 import torch
 from transformers import pipeline
 
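The point of this paragraph is the extra `return_full_text=True` argument, but the hunk again cuts off before the call. A short sketch, with the other arguments assumed as in the earlier construction:

```python
import torch
from transformers import pipeline

# Same construction as before, with return_full_text=True added so LangChain
# receives the full generated text rather than only the continuation.
generate_text = pipeline(model="databricks/dolly-v2-7b", torch_dtype=torch.bfloat16,
                         trust_remote_code=True, device_map="auto", return_full_text=True)
```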
@@ -87,7 +87,7 @@ generate_text = pipeline(model="databricks/dolly-v2-7b", torch_dtype=torch.bfloa
 
 You can create a prompt that either has only an instruction or has an instruction with context:
 
-```
+```python
 from langchain import PromptTemplate, LLMChain
 from langchain.llms import HuggingFacePipeline
 
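The prompt templates themselves fall outside the hunk; only these imports and the closing `llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context)` line (visible in the next hunk header) are confirmed. A sketch of the wiring in between, with the template strings as assumptions:

```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline

# Template texts are assumptions; only the chain construction at the bottom is
# visible in the diff context.
prompt = PromptTemplate(input_variables=["instruction"], template="{instruction}")

prompt_with_context = PromptTemplate(
    input_variables=["instruction", "context"],
    template="{instruction}\n\nInput:\n{context}",
)

# generate_text is the transformers pipeline built above with return_full_text=True.
hf_pipeline = HuggingFacePipeline(pipeline=generate_text)

llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt)
llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context)
```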
@@ -109,13 +109,13 @@ llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context)
 
 Example predicting using a simple instruction:
 
-```
+```python
 print(llm_chain.predict(instruction="Explain to me the difference between nuclear fission and fusion.").lstrip())
 ```
 
 Example predicting using an instruction with context:
 
-```
+```python
 context = """George Washington (February 22, 1732[b] – December 14, 1799) was an American military officer, statesman,
 and Founding Father who served as the first president of the United States from 1789 to 1797."""
 
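The hunk ends just after the context string, before the call that uses it. A plausible follow-up, mirroring the simple-instruction example above; the instruction text here is an assumption, since it is not part of the diff:

```python
context = """George Washington (February 22, 1732[b] – December 14, 1799) was an American military officer, statesman,
and Founding Father who served as the first president of the United States from 1789 to 1797."""

# The instruction below is assumed; the diff hunk ends before this call.
print(llm_context_chain.predict(instruction="When was George Washington president?", context=context).lstrip())
```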