---
license: mit
datasets:
- monsoon-nlp/asknyc-chatassistant-format
language:
- en
pipeline_tag: text-generation
tags:
- nyc
- reddit
---
# GPT-NYC

## About

GPT2-Medium fine-tuned on questions and responses from https://reddit.com/r/asknyc

**2023 Update: try a larger model: [monsoon-nlp/nyc-savvy-llama2-7b](https://huggingface.co/monsoon-nlp/nyc-savvy-llama2-7b)**

I filtered comments to those with scores >= 3 that responded directly
to the original post (i.e., ignoring replies to other commenters).
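
A minimal sketch of that filtering step, assuming a pandas DataFrame of Pushshift-style comment records (the columns `score`, `parent_id`, and `link_id` are assumptions about the dump format):

```python
import pandas as pd

# Hypothetical Pushshift-style dump of r/AskNYC comments, one JSON record per line
comments = pd.read_json("asknyc_comments.jsonl", lines=True)

# Keep comments scored >= 3 that reply directly to the post:
# for top-level comments, parent_id equals link_id (both "t3_<post id>")
filtered = comments[
    (comments["score"] >= 3)
    & (comments["parent_id"] == comments["link_id"])
]
```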

I added tokens to cover NYC neighborhoods, subway stations, foods, and other
common terms in the original batches of questions and comments.
You would be surprised what is missing from the GPT-2 vocabulary!
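
A sketch of how such tokens can be added before fine-tuning (the `nyc_terms` list is an illustrative stand-in for the terms mined from the data):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

# Illustrative subset of neighborhood / subway / food terms
nyc_terms = ["Bushwick", "Astoria", "bodega", "bagel"]
added = tokenizer.add_tokens(nyc_terms)

# New tokens need matching rows in the embedding matrix before fine-tuning
model.resize_token_embeddings(len(tokenizer))
print(f"added {added} tokens")
```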

Try prompting with `question? %%` or `question? - more info %%`
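
For example, a minimal generation sketch with `transformers` (the repo id `monsoon-nlp/gpt-nyc` and the sampling settings are assumptions):

```python
from transformers import pipeline

# Assumed repo id for this model card
generator = pipeline("text-generation", model="monsoon-nlp/gpt-nyc")

prompt = "Where can I find the best bagels in Queens? %% "
print(generator(prompt, max_length=100, do_sample=True, top_p=0.95)[0]["generated_text"])
```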

## Status

I would like to continue by:
- fine-tuning GPT2-Large with a larger dataset of questions
- examining bias and toxicity
- examining memorization vs. original responses
- releasing a reusable benchmark

## Blog

https://mapmeld.medium.com/gpt-nyc-part-1-9cb698b2e3d

## Notebooks

### Data processing / new tokens

https://colab.research.google.com/drive/13BOw0uekoAYB4jjQtaXTn6J_VHatiRLu

### Fine-tuning GPT2 (small)

https://colab.research.google.com/drive/1FnXcAh4H-k8dAzixkV5ieygV96ePh3lR

### Fine-tuning GPT2-Medium

Same code as the small model, but run on Google Cloud to use an A100 GPU.
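
A minimal fine-tuning sketch with the Hugging Face `Trainer`, standing in for the linked notebook (the training file path, its `question? %% answer` line format, and the hyperparameters are illustrative assumptions):

```python
from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

# Hypothetical training file: one "question? %% answer" pair per line
train_dataset = TextDataset(tokenizer=tokenizer,
                            file_path="asknyc_train.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt-nyc",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()
```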

### Predictive text and probabilities

Scroll to the end of

https://colab.research.google.com/drive/1FnXcAh4H-k8dAzixkV5ieygV96ePh3lR

to see how to install git-lfs and trick ecco into loading this model.
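
A hedged sketch of that workaround: clone the repo locally with git-lfs, then point ecco at the local directory (the local path and generation settings are assumptions; see the notebook for the exact steps):

```python
# In a Colab cell, first install git-lfs and clone the model weights:
#   !sudo apt-get install git-lfs && git lfs install
#   !git clone https://huggingface.co/monsoon-nlp/gpt-nyc
import ecco

# Load from the local clone so ecco can attach to the fine-tuned weights
lm = ecco.from_pretrained("./gpt-nyc", activations=True)
output = lm.generate("Where can I find good pizza? %% ", generate=20, do_sample=True)
```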