---
license: apache-2.0
---

Mirror of the OpenFold parameters provided in https://github.com/aqlaboratory/openfold. This is a stopgap solution, as the original download link was down. All rights belong to the authors.

OpenFold model parameters, v. 06_22.
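
The parameter files are hosted as ordinary repository files, so they can be fetched with `huggingface_hub`. A minimal sketch, assuming the library is installed; the `repo_id` below is a placeholder, not something this README specifies:

```python
from huggingface_hub import hf_hub_download

# Download one of the parameter files listed below into the local cache.
# repo_id is hypothetical; substitute this mirror's actual repository id.
ckpt_path = hf_hub_download(
    repo_id="your-username/openfold-params",
    filename="finetuning_ptm_2.pt",
)
print(ckpt_path)  # local path to the cached checkpoint
```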

# Training details:

Trained using OpenFold on 44 A100s using the training schedule from Table 4 in
the AlphaFold supplement. AlphaFold was used as the pre-distillation model.
Training data is hosted publicly in the "OpenFold Training Data" RODA repository.

# Parameter files:

Parameter files fall into the following categories:

- `initial_training.pt`: OpenFold at the end of the initial training phase.
- `finetuning_x.pt`: Checkpoints in chronological order corresponding to peaks in the validation LDDT-Ca during the finetuning phase. Roughly evenly spaced across the 45 finetuning epochs.
- `finetuning_ptm_x.pt`: Checkpoints in chronological order corresponding to peaks in the pTM training phase, a short additional training phase that takes place after finetuning. Models in this category include the pTM module and comprise the most recent of the checkpoints in this directory.
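
Since these are `.pt` checkpoints, `torch.load` is enough to inspect one locally. A minimal sketch, assuming the file has already been downloaded and holds a dict-like mapping of parameter names to tensors (an assumption, not something this README specifies):

```python
import torch

# Load on CPU so no GPU is needed just to inspect the file.
ckpt = torch.load("finetuning_ptm_2.pt", map_location="cpu")

# Assumption: a state-dict-style mapping; print a few parameter names and shapes.
if isinstance(ckpt, dict):
    for name, value in list(ckpt.items())[:5]:
        print(name, getattr(value, "shape", None))
else:
    print(type(ckpt))  # fall back to showing what the file actually contains
```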

Average validation LDDT-Ca scores for each of the checkpoints are listed below.
The validation set contains approximately 180 chains drawn from CAMEO over a
three-month period at the end of 2021.

| Checkpoint | Validation LDDT-Ca |
| --- | --- |
| initial_training | 0.9088 |
| finetuning_ptm_1 | 0.9075 |
| finetuning_ptm_2 | 0.9097 |
| finetuning_1 | 0.9089 |
| finetuning_2 | 0.9061 |
| finetuning_3 | 0.9075 |
| finetuning_4 | 0.9059 |
| finetuning_5 | 0.9054 |