Commit History

style: lint
d5d442a

boris committed on

feat: handle gradient checkpointing
5173ec7

boris committed on

feat: load from bucket
1c4e839

boris committed on

feat(train): save to bucket
50498e6

boris committed on

feat: reduce artifact space + offset step
34cf91c

boris committed on
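The "offset step" in the commit above refers to continuing the global step counter from the checkpointed value when a run resumes, so logged metrics line up across runs. A minimal sketch of that idea, with all names illustrative rather than the repository's actual API:

```python
# Hypothetical sketch: on resume, global steps continue from the saved
# checkpoint's step so the logging x-axis is monotonic across runs.
def resume_steps(saved_step, num_new_batches):
    """Yield (global_step, local_step) pairs for a resumed run."""
    for local_step in range(num_new_batches):
        yield saved_step + local_step + 1, local_step + 1

steps = list(resume_steps(saved_step=100, num_new_batches=3))
# steps == [(101, 1), (102, 2), (103, 3)]
```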

feat: restore weights on CPU
5f954fc

boris committed on

feat(train): simplify tokenizer loading
4cb21dd

boris committed on

feat(train): use compilation cache
da9367c

boris committed on

feat: log num_parameters early
7cfe576

boris committed on

feat(train) - handle multiple nodes (#130)
0952927

boris committed on

feat: handle model parallel
1bb3269

boris committed on

feat(train): more custom x-axis
5f28cd2

boris committed on

fix(train): opt_state_shape for distributed_shampoo
225b6ff

boris committed on

feat(train): split artifact into model/state
fa5b058

boris committed on

feat(train): another 25% faster
14abe8c

boris committed on

feat(train): overhead from 70% to 1% 🥳
2b7f5f1

boris committed on

feat(pjit): follow t5x style
7b5868f

boris committed on

fix(train): grads spec
00710bc

boris committed on

feat(train): improve pjit speed
f254058

boris committed on

fix(train): consider correct batch size
b7c7458

boris committed on

feat(train): custom start_preconditioning_step
8149924

boris committed on

feat(train): handle distributed_shampoo in pjit
032f623

boris committed on

feat(train): distributed_shampoo with pjit
cc34d07

boris committed on

fix style
f044cb8

boris committed on

feat(train): restore opt_state efficiently
1bfc1b5

boris committed on

feat(model): clean way to load on cpu
12f323d

boris committed on

feat(train): load model on CPU
3d43591

boris committed on

feat(train): different rng per node
2d212d8

boris committed on
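The commit above gives each node its own RNG stream so per-node randomness (e.g. dropout, data order) does not repeat across hosts. In JAX this is typically done with `jax.random.fold_in`; the stdlib stand-in below only sketches the idea of deterministically mixing the process index into a shared base seed, and all names are illustrative:

```python
import hashlib

# Hypothetical sketch: fold the process index into a shared base seed so
# every node gets a distinct but reproducible RNG seed.
def per_node_seed(base_seed: int, process_index: int) -> int:
    """Deterministically mix the process index into the base seed."""
    digest = hashlib.sha256(f"{base_seed}:{process_index}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

seeds = {per_node_seed(42, i) for i in range(4)}
assert len(seeds) == 4  # each node draws from a different stream
```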

feat(train): no batch dimension with pjit
df1fe19

boris committed on

feat(train): progress on pjit
49597a2

boris committed on

feat(train): start pjit support
0081723

boris committed on

Load from wandb artifact (#121)
f69b21b

boris committed on

Use DalleBartTokenizer. State restoration reverted to previous method:
ae983d7

Pedro Cuenca committed on

fix(train): variable not defined
4c87adf

boris committed on

feat(train): cleanup args
a2bf605

boris committed on

refactor(train): cleanup
274ba73

boris committed on

feat: custom gradient accumulation
2d07559

boris committed on
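The custom gradient accumulation above (the history later also tries `optax.MultiSteps` for the same purpose) sums gradients over several micro-batches and applies a single optimizer step on the average, emulating a larger effective batch size. A pure-Python sketch of the idea, with illustrative names and plain SGD standing in for the real optimizer:

```python
# Hypothetical sketch of gradient accumulation: average the gradients from
# k micro-batches, then take one SGD step with the averaged gradient.
def accumulate_and_step(params, grads_per_microbatch, lr=0.1):
    """Average micro-batch gradients, then apply a single SGD update."""
    k = len(grads_per_microbatch)
    avg = [sum(gs) / k for gs in zip(*grads_per_microbatch)]
    return [p - lr * g for p, g in zip(params, avg)]

new_params = accumulate_and_step([1.0, 2.0], [[0.2, 0.4], [0.6, 0.8]])
# averaged grads are [0.4, 0.6], so params move to roughly [0.96, 1.94]
```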

fix: style
df01fa8

boris committed on

feat(train): use MultiSteps for gradient accumulation
4fa53a5

boris committed on

Accept changes suggested by linter.
9f522b8

Pedro Cuenca committed on

Update help string for `model_name_or_path`.
290e443

Pedro Cuenca committed on

Update `resume_from_checkpoint` to use `from_pretrained`.
bb3f53e

Pedro Cuenca committed on

Load tokenizer associated to the model checkpoint, if possible.
a77c0d4

Pedro Cuenca committed on

Use model configuration unless a specific one is supplied.
5ec61cc

Pedro Cuenca committed on

fix: style
25862e8

boris committed on

feat: add more config of distributed_shampoo
89cf9ea

boris committed on

feat(train): refactor learning rate params
e2781bc

boris committed on

fix(train): handle seed_dataset
8b72ed8

boris committed on

feat: refactor TrainingArguments
adbdff9

boris committed on

fix: push_to_hub deprecated
23389f6

boris committed on