---
license: apache-2.0
dataset: yield
tags:
  - finetuned
  - multimodal
inference: false
---

These are the weights for a version of `checkpoints/stage2/llava-moleculestm-vicuna-7b-v1.5-pretrain_all` fine-tuned for multimodal applications.

### Modalities

* Molecule2DModality (use `<molecule_2d>` in the prompt text and provide the corresponding `molecules` input; see the sketch below)
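
A minimal sketch of how a prompt and molecule features might be paired, assuming a generic `generate`-style call. The commented call and its argument names other than `molecules` are assumptions, not the confirmed PRESTO API; see the repository under Usage for the actual entry points.

```python
# Illustrative sketch only: the generate() call below is a hypothetical
# placeholder; the real loading / inference helpers live in the PRESTO repo.
import torch

prompt = (
    "The reaction involves the molecule <molecule_2d>. "
    "Predict the yield of the reaction."
)

# One 2D molecule representation per <molecule_2d> placeholder in the prompt.
# Per the architecture dump below, the projector expects 300-dimensional
# graph features (e.g. from a MoleculeSTM-style graph encoder).
molecules = [torch.randn(1, 300)]  # placeholder features

# Hypothetical call, shown for shape only:
# output_ids = model.generate(input_ids, molecules=molecules, max_new_tokens=64)
```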

### Usage

GitHub: https://github.com/IDEA-XL/PRESTO (includes training scripts and a basic inference server)

### Dataset

yield (9,515 examples)


### Training Device(s)

```
name, pci.bus_id, vbios_version
NVIDIA RTX A6000, 00000000:01:00.0, 94.02.5C.00.02
NVIDIA RTX A6000, 00000000:25:00.0, 94.02.5C.00.02
NVIDIA RTX A6000, 00000000:41:00.0, 94.02.5C.00.02
NVIDIA RTX A6000, 00000000:61:00.0, 94.02.5C.00.02
NVIDIA RTX A6000, 00000000:81:00.0, 94.02.5C.00.02
NVIDIA RTX A6000, 00000000:A1:00.0, 94.02.5C.00.02
NVIDIA RTX A6000, 00000000:C1:00.0, 94.02.5C.00.02
NVIDIA RTX A6000, 00000000:E1:00.0, 94.02.5C.00.02
```
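
This listing appears to come from an `nvidia-smi` GPU query; a command along the lines of `nvidia-smi --query-gpu=name,pci.bus_id,vbios_version --format=csv` reproduces the same columns.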


### Model

```
LlamaLMMForCausalLM =

LlamaLMMForCausalLM(
  (model): LlamaLMMModel(
    (embed_tokens): Embedding(32000, 4096, padding_idx=0)
    (layers): ModuleList(
      (0-31): 32 x LlamaDecoderLayer(
        (self_attn): LlamaSdpaAttention(
          (q_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (k_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (v_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (o_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (rotary_emb): LlamaRotaryEmbedding()
        )
        (mlp): LlamaMLP(
          (gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
          (up_proj): Linear(in_features=4096, out_features=11008, bias=False)
          (down_proj): Linear(in_features=11008, out_features=4096, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): LlamaRMSNorm()
        (post_attention_layernorm): LlamaRMSNorm()
      )
    )
    (norm): LlamaRMSNorm()
    (molecule_2d_lmm_projector): _MLPVectorProjector(
      (mlp): Sequential(
        (0): Linear(in_features=300, out_features=4096, bias=True)
        (1): GELU(approximate='none')
        (2): Linear(in_features=4096, out_features=4096, bias=True)
      )
    )
  )
  (lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
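
For reference, a minimal PyTorch sketch of the `molecule_2d_lmm_projector` printed above. The layer shapes follow the dump exactly; the constructor and forward signature are assumptions.

```python
import torch
import torch.nn as nn


class _MLPVectorProjector(nn.Module):
    """Maps 300-dim 2D molecule features into the 4096-dim LLaMA hidden space,
    mirroring the module shown in the dump above (construction details assumed)."""

    def __init__(self, in_features: int = 300, hidden_features: int = 4096):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_features, hidden_features, bias=True),
            nn.GELU(approximate="none"),
            nn.Linear(hidden_features, hidden_features, bias=True),
        )

    def forward(self, molecule_features: torch.Tensor) -> torch.Tensor:
        return self.mlp(molecule_features)


# One projected embedding per <molecule_2d> placeholder:
projector = _MLPVectorProjector()
embedding = projector(torch.randn(1, 300))  # -> shape (1, 4096)
```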