# CS 670 Project - Finetuning Language Models

************************

Deliverables

************************

Milestone-3 notebook: https://github.com/aye-thuzar/CS670Project/blob/main/CS670_milestone_3_AyeThuzar.ipynb

Hugging Face App: https://huggingface.co/spaces/ayethuzar/can-i-patent-this

Landing Page for the App: https://sites.google.com/view/cs670-finetuning-language-mode/home

App Demonstration Video: https://youtu.be/IXMJDoUqXK4

The tuned model shared on the Hugging Face Hub: https://huggingface.co/ayethuzar/tuned-for-patentability/tree/main

************************

Dataset: https://github.com/suzgunmirac/hupd




**Data Preprocessing**

I used the `load_dataset` function to load all of the patent applications filed with the USPTO in January 2016. The date ranges for the training and validation sets are January 1-21, 2016 and January 22-31, 2016, respectively. This is a smaller subset of the full dataset.
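
A minimal loading sketch, assuming the filing-date parameters documented in the HUPD dataset card (`train_filing_start_date`, `val_filing_end_date`, etc.); the exact call in the notebook may differ:

```python
from datasets import load_dataset

# Load the January 2016 sample of the Harvard USPTO Patent Dataset (HUPD).
# The train/validation split is made by filing date, matching the ranges above.
dataset_dict = load_dataset(
    "HUPD/hupd",
    name="sample",                        # the small January 2016 subset
    train_filing_start_date="2016-01-01",
    train_filing_end_date="2016-01-21",
    val_filing_start_date="2016-01-22",
    val_filing_end_date="2016-01-31",
)

print(dataset_dict)  # DatasetDict with 'train' and 'validation' splits
```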

There are two splits: train and validation. Here are the preprocessing steps (a condensed code sketch follows the list):

 - Build a label-to-index mapping for the decision status field
 - Map over the 'abstract' and 'claims' sections and tokenize them with the pretrained 'distilbert-base-uncased' tokenizer
 - Set the dataset format for PyTorch
 - Wrap each split in a DataLoader with batch_size = 16
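
A condensed sketch of these steps, continuing from the `dataset_dict` loaded above; the max length, padding choices, and helper structure are assumptions, not the notebook's exact code:

```python
from torch.utils.data import DataLoader
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# Label-to-index mapping for the decision status field
# (the binary mapping used for milestone 3 is shown in the next section)
decision_to_str = {'REJECTED': 0, 'ACCEPTED': 1, 'PENDING': 1,
                   'CONT-REJECTED': 0, 'CONT-ACCEPTED': 1, 'CONT-PENDING': 1}

def preprocess(example):
    # Tokenize the 'abstract' and 'claims' sections as a text pair
    tokens = tokenizer(example["abstract"], example["claims"],
                       truncation=True, padding="max_length", max_length=512)
    tokens["labels"] = decision_to_str[example["decision"]]
    return tokens

train_set = dataset_dict["train"].map(
    preprocess, remove_columns=dataset_dict["train"].column_names)
val_set = dataset_dict["validation"].map(
    preprocess, remove_columns=dataset_dict["validation"].column_names)

# Format as PyTorch tensors and wrap each split in a DataLoader with batch_size = 16
train_set.set_format(type="torch", columns=["input_ids", "attention_mask", "labels"])
val_set.set_format(type="torch", columns=["input_ids", "attention_mask", "labels"])

train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
val_loader = DataLoader(val_set, batch_size=16)
```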

**Milestone 3:**

The following notebook has the tuned model. There are 6 classes in the Harvard USPTO patent dataset, and I decided to encode them as follows:

`decision_to_str = {'REJECTED': 0, 'ACCEPTED': 1, 'PENDING': 1, 'CONT-REJECTED': 0, 'CONT-ACCEPTED': 1, 'CONT-PENDING': 1}`

so that I can get a patentability score between 0 and 1.

I use the pretrained model 'distilbert-base-uncased' from the Hugging Face Hub and fine-tune it on the smaller dataset.
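
A minimal fine-tuning sketch using the DataLoaders from the preprocessing sketch above; the optimizer, learning rate, and epoch count here are illustrative, not necessarily what the notebook uses:

```python
import torch
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification

device = "cuda" if torch.cuda.is_available() else "cpu"

# Binary classification head: 0 = rejected, 1 = accepted/pending
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2).to(device)
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(2):                 # epoch count is illustrative
    for batch in train_loader:         # DataLoader from the preprocessing sketch
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)       # the 'labels' key lets the model return a loss
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Share the tuned weights on the Hub (requires `huggingface-cli login`)
model.push_to_hub("tuned-for-patentability")
```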

My tuned model's performance is not good, but I ran out of time to improve it. =(

Milestone-3 notebook: https://github.com/aye-thuzar/CS670Project/blob/main/CS670_milestone_3_AyeThuzar.ipynb

The tuned model shared on the Hugging Face Hub: https://huggingface.co/ayethuzar/tuned-for-patentability/tree/main

I tested my shared model here: https://github.com/aye-thuzar/CS670Project/blob/main/CS670_Examples.ipynb
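
A sketch of how the shared checkpoint could be loaded and queried for a patentability score, assuming the tokenizer was pushed alongside the model; the abstract and claims here are made-up placeholders:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "ayethuzar/tuned-for-patentability"
tokenizer = AutoTokenizer.from_pretrained(repo)   # assumes tokenizer files were pushed too
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.eval()

# Made-up placeholder texts; replace with a real abstract and claims section
abstract = "A method for predicting patentability using a fine-tuned language model."
claims = "1. A method comprising receiving an abstract and claims and producing a score."

inputs = tokenizer(abstract, claims, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Patentability score = probability of the 'accepted' class (label index 1)
score = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"Patentability score: {score:.3f}")
```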


**Milestone 4:**

This is the landing page for milestone 4: https://sites.google.com/view/cs670-finetuning-language-mode/home

The documentation for milestone 4: https://github.com/aye-thuzar/CS670Project/blob/main/milestone4Documentation.md

I did not get a chance to fix my video, so it only shows the model before I tuned it. After tuning, my model only shows one patentability score no matter which texts I put in for the abstract and claims. =(
    
**************

References:

1. https://colab.research.google.com/drive/1_ZsI7WFTsEO0iu_0g3BLTkIkOUqPzCET?usp=sharing#scrollTo=B5wxZNhXdUK6
2. https://huggingface.co/AI-Growth-Lab/PatentSBERTa
3. https://huggingface.co/anferico/bert-for-patents
4. https://huggingface.co/transformers/v3.2.0/custom_datasets.html
5. https://colab.research.google.com/drive/1TzDDCDt368cUErH86Zc_P2aw9bXaaZy1?usp=sharing
6. https://huggingface.co/docs/transformers/model_sharing
7. https://docs.streamlit.io/library/api-reference/widgets/st.file_uploader