Add TF weights

#1 opened by joaogante (HF staff)

Validated by the pt_to_tf CLI. Max crossload hidden state difference=1.121e-05; Max converted hidden state difference=1.121e-05.
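For reference, a similar crossload comparison can be reproduced by hand. A minimal sketch, where the model id is a placeholder for this repo and only the hidden states are compared (the actual CLI runs a more thorough check):

```python
import numpy as np
from transformers import AutoTokenizer, AutoModel, TFAutoModel

model_id = "org/model-name"  # placeholder for the repo this PR targets

tokenizer = AutoTokenizer.from_pretrained(model_id)
pt_model = AutoModel.from_pretrained(model_id)
tf_model = TFAutoModel.from_pretrained(model_id, from_pt=True)  # crossload PT weights

text = "A short test sentence."
pt_out = pt_model(**tokenizer(text, return_tensors="pt"), output_hidden_states=True)
tf_out = tf_model(**tokenizer(text, return_tensors="tf"), output_hidden_states=True)

# Max absolute element-wise difference across all hidden states
max_diff = max(
    np.abs(pt_h.detach().numpy() - tf_h.numpy()).max()
    for pt_h, tf_h in zip(pt_out.hidden_states, tf_out.hidden_states)
)
print(f"Max crossload hidden state difference = {max_diff:.3e}")
```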

The weights look good according to our conversion tool, but they take 2x the storage of the PT weights. Are the original PT weights stored in a 16-bit format? ( @valhalla )

Related to this GH PR: https://github.com/huggingface/transformers/pull/16543

EDIT -- after running stricter tests, I found further differences between PT and TF. The original question is still relevant, but do not merge these weights.

Sounds good! @valhalla do you know?

Just checked: the PT weights are indeed in float16. BTW, an easy rule of thumb is "size of model checkpoint in bytes" / 4 = number of parameters if stored in float32. Here, a 1GB file would mean 250M parameters, but the model has 564M -> so it's most likely fp16.
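To verify directly instead of inferring from file size, the checkpoint's tensors can be inspected; a minimal sketch, assuming the standard `pytorch_model.bin` filename:

```python
import torch

# Load the checkpoint on CPU and inspect its tensors directly
state_dict = torch.load("pytorch_model.bin", map_location="cpu")

num_params = sum(t.numel() for t in state_dict.values())
dtypes = {str(t.dtype) for t in state_dict.values()}
print(f"{num_params / 1e6:.0f}M parameters, dtypes: {dtypes}")

# Rule of thumb: bytes on disk / 4 ~= parameter count if float32,
# bytes on disk / 2 ~= parameter count if float16.
# A ~1GB file holding 564M parameters only adds up at 2 bytes/param -> fp16.
```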

That makes sense. The PT-to-TF error in the internal layers is slightly higher than usual (~1e-4), but the weights being float16 probably explains the difference πŸ‘
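For context on the magnitude: float16 carries roughly 3 decimal digits of precision (machine epsilon ~1e-3), so weights that have round-tripped through fp16 can plausibly shift downstream activations by ~1e-4. A quick check:

```python
import numpy as np

# Machine epsilon, i.e. the relative precision of each format
print(np.finfo(np.float16).eps)  # ~9.8e-4
print(np.finfo(np.float32).eps)  # ~1.2e-7

# Rounding a float32 value through float16 loses accuracy on this scale
x = np.float32(0.123456789)
print(abs(np.float32(np.float16(x)) - x))  # on the order of 1e-5 to 1e-4
```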

joaogante changed pull request status to merged
