Text Generation · ELM · English
dev-slx committed
Commit 5563225
1 Parent(s): 793da2f

Update README.md

Files changed (1): README.md (+12 −14)
README.md CHANGED
@@ -1,14 +1,12 @@
  # SliceX AI™ ELM (Efficient Language Models)
  This repository contains code to run our ELM models.

- Models are located in the "models" folder. ELM models in this repository come in three sizes (elm-1.0, elm-0.75, and elm-0.25) and support the following use cases:
- - news_classification
+ Models are located in the "models" folder. ELM models in this repository come in three sizes (elm-1.0, elm-0.75, and elm-0.25) and support the following use case:
  - toxicity_detection
- - news_content_generation

  ## Download ELM repo
  ```bash
- git clone git@hf.co:slicexai/elm-0.25-v0.1
+ git clone git@hf.co:slicexai/elm-v0.1_toxicity_detection
  sudo apt-get install git-lfs
  git lfs install
  ```
@@ -20,12 +18,12 @@ PATH=$PATH:/<absolute-path>/git-lfs-3.2.0/
  git lfs install
  ```

- ## Download ELM task-specific model checkpoints
+ ## Download ELM model checkpoints
  ```bash
- cd elm-0.25-v0.1
- git lfs pull -I models/elm-0.25_news_classification/ckpt.pt
+ cd elm-v0.1_toxicity_detection
+ git lfs pull -I models/elm-1.0_toxicity_detection/ckpt.pt
+ git lfs pull -I models/elm-0.75_toxicity_detection/ckpt.pt
  git lfs pull -I models/elm-0.25_toxicity_detection/ckpt.pt
  ```

  ## Installation
@@ -36,13 +34,13 @@ pip install -r requirements.txt
  ## How to use - Run ELM on a sample task (e.g., news classification)
  ```bash
  python run.py <elm-model-directory>
- # E.g.: python run.py models/elm-0.25_news_classification
+ # E.g.: python run.py models/elm-0.75_toxicity_detection
  ```
- Prompts for the specific tasks can be found in the corresponding checkpoint directory. See the example below, taken from `models/elm-0.25_news_classification/example_prompts.json`.
+ Prompts for the specific tasks can be found in the corresponding checkpoint directory. See the example below, taken from `models/elm-0.75_toxicity_detection/example_prompts.json`.
  ```json
  {
- "inputs": ["GM May Close Plant in Europe DETROIT (Reuters) - General Motors Corp. &lt;A HREF=\"http://www.investor.reuters.com/FullQuote.aspx?ticker=GM.N target=/stocks/quickinfo/fullquote\"&gt;GM.N&lt;/A&gt; will likely cut some jobs in Europe and may close a plant there as part of a restructuring plan under development to try to return the region to profitability, the U.S. automaker said on Wednesday."],
- "template": "[INST]Below is a news article. Please classify it under one of the following classes (World, Business, Sports, Sci/Tech). Please format your response as a JSON payload.\n\n### Article: {input}\n\n### JSON Response:[/INST]"
+ "inputs": ["Dear Dr. Mereu, \n\n I am very much looking forward to this class. It is my first class at Rutgers! I think its extremely interesting and am very excited about it as I just decided that I want to minor in Psychology this year. I am especially interested in the neuroscience aspect of it all. Looking forward to a great semester!"],
+ "template": "[INST]You are a helpful, precise, detailed, and concise artificial intelligence assistant. You are a very intelligent and sensitive, having a keen ability to discern whether or not a text message is toxic. You can also be trusted with following the instructions given to you precisely, without deviations.\nIn this task, you are asked to decide whether or not comment text is toxic.\nToxic content harbors negativity towards a person or a group, for instance:\n - stereotyping (especially using negative stereotypes)\n - disparaging a person's gender -- as in \"male\", \"female\", \"men\", \"women\"\n - derogatory language or slurs\n - racism -- as in discriminating toward people who are \"black\", \"white\"\n - cultural appropriation\n - mockery or ridicule\n - sexual objectification\n - homophobia -- bullying people who are \"homosexual\", \"gay\", \"lesbian\"\n - historical insensitivity\n - disrespecting religion -- as in \"christian\", \"jewish\", \"muslim\"\n - saying that certain groups are less worthy of respect\n - insensitivity to health conditions -- as in \"psychiatric/mental illness\"\n\nRead the comment text provided and predict whether or not the comment text is toxic. If comment text is toxic according to the instructions, then the answer is \"yes\" (return \"yes\"); otherwise, the answer is \"no\" (return \"no\").\nOutput the answer only as a \"yes\" or a \"no\"; do not provide explanations.\nPlease, never return empty output; always return a \"yes\" or a \"no\" answer.\nYou will be evaluated based on the following criteria: - The generated answer is always \"yes\" or \"no\" (never the empty string, \"\"). - The generated answer is correct for the comment text presented to you.\n### Comment Text: {input}\n### Comment Text Is Toxic (Yes/No) [/INST]"
  }
  ```
@@ -50,7 +48,7 @@ Running the above command returns the following response

  ```json
  {
- "prompt": "[INST]Below is a news article. Please classify it under one of the following classes (World, Business, Sports, Sci/Tech). Please format your response as a JSON payload.\n\n### Article: GM May Close Plant in Europe DETROIT (Reuters) - General Motors Corp. &lt;A HREF=\"http://www.investor.reuters.com/FullQuote.aspx?ticker=GM.N target=/stocks/quickinfo/fullquote\"&gt;GM.N&lt;/A&gt; will likely cut some jobs in Europe and may close a plant there as part of a restructuring plan under development to try to return the region to profitability, the U.S. automaker said on Wednesday.\n\n### JSON Response:[/INST]",
- "response": "{'text_label': 'Business'}"
+ "prompt": "[INST]You are a helpful, precise, detailed, and concise artificial intelligence assistant. You are a very intelligent and sensitive, having a keen ability to discern whether or not a text message is toxic. You can also be trusted with following the instructions given to you precisely, without deviations.\nIn this task, you are asked to decide whether or not comment text is toxic.\nToxic content harbors negativity towards a person or a group, for instance:\n - stereotyping (especially using negative stereotypes)\n - disparaging a person's gender -- as in \"male\", \"female\", \"men\", \"women\"\n - derogatory language or slurs\n - racism -- as in discriminating toward people who are \"black\", \"white\"\n - cultural appropriation\n - mockery or ridicule\n - sexual objectification\n - homophobia -- bullying people who are \"homosexual\", \"gay\", \"lesbian\"\n - historical insensitivity\n - disrespecting religion -- as in \"christian\", \"jewish\", \"muslim\"\n - saying that certain groups are less worthy of respect\n - insensitivity to health conditions -- as in \"psychiatric/mental illness\"\n\nRead the comment text provided and predict whether or not the comment text is toxic. If comment text is toxic according to the instructions, then the answer is \"yes\" (return \"yes\"); otherwise, the answer is \"no\" (return \"no\").\nOutput the answer only as a \"yes\" or a \"no\"; do not provide explanations.\nPlease, never return empty output; always return a \"yes\" or a \"no\" answer.\nYou will be evaluated based on the following criteria: - The generated answer is always \"yes\" or \"no\" (never the empty string, \"\"). - The generated answer is correct for the comment text presented to you.\n### Comment Text: Dear Dr. Mereu, \n\n I am very much looking forward to this class. It is my first class at Rutgers! I think its extremely interesting and am very excited about it as I just decided that I want to minor in Psychology this year. I am especially interested in the neuroscience aspect of it all. Looking forward to a great semester!\n### Comment Text Is Toxic (Yes/No) [/INST]",
+ "response": "No"
  }
  ```
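
After pulling a checkpoint, it is worth confirming that git-lfs fetched the real weights rather than leaving a small text pointer file behind. A minimal sketch, using the elm-0.25 checkpoint path from the README (the size threshold is an illustrative assumption):

```python
import os

# If `git lfs pull` did not run, the path holds a ~130-byte LFS pointer
# file instead of the actual weights, so file size is a quick sanity check.
ckpt = "models/elm-0.25_toxicity_detection/ckpt.pt"  # path from the README
size = os.path.getsize(ckpt)
print(f"{ckpt}: {size / 1e6:.1f} MB")
assert size > 1_000_000, "looks like an LFS pointer; re-run `git lfs pull`"
```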
 
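The `example_prompts.json` files shown in the diff pair a list of raw `inputs` with a `template` containing an `{input}` placeholder. How `run.py` consumes them is not shown here, so the following is only a sketch of how a final prompt can be assembled from such a file:

```python
import json

# Illustrative only; this is not the repository's actual run.py logic.
with open("models/elm-0.75_toxicity_detection/example_prompts.json") as f:
    spec = json.load(f)

# Substitute each raw input into the template at the {input} placeholder.
# str.replace is used rather than str.format so that any other literal
# braces in the template are left untouched.
prompts = [spec["template"].replace("{input}", text) for text in spec["inputs"]]
print(prompts[0])
```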
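
Note that the two versions of the README show differently shaped responses: the old news-classification example returns a Python-dict-style string (`"{'text_label': 'Business'}"`, single quotes, so not strict JSON), while the updated toxicity example returns a bare `"No"`. A hedged parsing sketch, assuming exactly the output formats shown above:

```python
import ast

def parse_toxicity(response: str) -> bool:
    # The toxicity prompt instructs the model to answer only "yes" or "no".
    return response.strip().lower() == "yes"

def parse_news_label(response: str) -> str:
    # "{'text_label': 'Business'}" uses single quotes, so it is a Python
    # literal rather than valid JSON; ast.literal_eval parses it safely.
    return ast.literal_eval(response)["text_label"]

print(parse_toxicity("No"))                            # -> False
print(parse_news_label("{'text_label': 'Business'}"))  # -> Business
```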