phoebeklett committed
Commit 4982797
1 Parent(s): 0f40f95

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -4,17 +4,17 @@
 {}
 ---
 
-# Model Card for Extended-Mind-MPT-7b
+# Model Card for Extended-Mind-MPT-7b-Chat
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-Extended Mind MPT-7b, as described in [Supersizing Transformers](https://blog.normalcomputing.ai/posts/2023-09-12-supersizing-transformers/supersizing-transformers.html).
+Extended Mind MPT-7b-chat, as described in [Supersizing Transformers](https://blog.normalcomputing.ai/posts/2023-09-12-supersizing-transformers/supersizing-transformers.html).
 
 ### Model Description
 
 <!-- Provide a longer summary of what this model is. -->
 
-This model implements active externalism for MPT's 7b model. The model weights have not been edited. Original architecture and code by Mosaic ML.
+This model implements active externalism for MPT's 7b chat model. The model weights have not been edited. Original architecture and code by Mosaic ML.
 
 For more details on active externalism, check out our [blog](https://blog.normalcomputing.ai/posts/2023-09-12-supersizing-transformers/supersizing-transformers.html)!
 
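
For reference, a minimal sketch of how the renamed model might be loaded with the Hugging Face `transformers` library. The repository id `normalcomputing/extended-mind-mpt-7b-chat` is an assumption based on the model card title, and `trust_remote_code=True` is assumed to be needed because the card says the original MPT architecture and code are used rather than a stock `transformers` class.

```python
# Sketch only: repository id and remote-code requirement are assumptions,
# not taken from the diff above.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "normalcomputing/extended-mind-mpt-7b-chat"  # assumed repo id

# Load tokenizer and model; trust_remote_code=True allows the custom
# MPT modeling code shipped with the repository to be executed.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

# Simple generation example.
prompt = "What is active externalism?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```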