QARAC / qarac

Commit History

Ensure tokenizer is on GPU
ca642d2

PeteBleackley committed

Diagnostics
1a9032d

PeteBleackley committed

Diagnostics
3fc3a1b

PeteBleackley committed

Diagnostics
7758fd9

PeteBleackley committed

Diagnostics
4808422

PeteBleackley committed

Ensure attention mask is assigned to GPU
7b60f84

PeteBleackley committed

Ensure consistency of device assignment when training
f5599c3

PeteBleackley committed
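
The three GPU commits above ("Ensure tokenizer is on GPU", "Ensure attention mask is assigned to GPU", "Ensure consistency of device assignment when training") all deal with tensors ending up on different devices. A minimal sketch of the pattern, with illustrative names; the repo's actual model and batch variables are not shown here:

```python
import torch

# Pick one device, move the model to it once, then move every batch
# tensor (including the attention mask) to the same device before the
# forward pass. Model and batch contents are illustrative stand-ins.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = torch.nn.Linear(768, 2).to(device)

batch = {'input_ids': torch.randint(0, 100, (4, 128)),
         'attention_mask': torch.ones(4, 128, dtype=torch.long)}
batch = {key: tensor.to(device) for key, tensor in batch.items()}
```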

Fixed import
4a7707c

PeteBleackley committed

Fixed import
e8324a1

PeteBleackley committed

Factorized the weight matrix in the GlobalAttentionPoolingHead, thus reducing the number of parameters in this layer by a factor of 48
a1e9f64

PeteBleackley committed
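
The factorization commit replaces a full d x d weight matrix with two low-rank factors. A sketch of the idea, assuming a hidden size of 768 and rank 8, which yields exactly the stated factor of 48 (768 x 768 = 589,824 parameters versus 2 x 768 x 8 = 12,288); the repo's actual scoring function and initialisation may differ:

```python
import torch

class FactorizedGlobalAttentionPoolingHead(torch.nn.Module):
    """Sketch: approximate a (d x d) attention weight matrix as A @ B
    with rank r, cutting parameters from d*d to 2*d*r."""

    def __init__(self, hidden_size=768, rank=8):
        super().__init__()
        self.A = torch.nn.Parameter(torch.randn(hidden_size, rank) / hidden_size ** 0.5)
        self.B = torch.nn.Parameter(torch.randn(rank, hidden_size) / rank ** 0.5)

    def forward(self, hidden_states, attention_mask):
        # hidden_states: (batch, seq, d); attention_mask: (batch, seq)
        scores = (hidden_states @ self.A @ self.B).sum(dim=-1)   # (batch, seq)
        scores = scores.masked_fill(attention_mask == 0, float('-inf'))
        weights = torch.softmax(scores, dim=-1)  # attention over tokens
        return torch.einsum('bs,bsd->bd', weights, hidden_states)  # (batch, d)
```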

Reduced batch size
b2ffb12

PeteBleackley committed

Created HuggingFace Space
cce945c

PeteBleackley committed

Ensure consistency value is 32-bit
452e9a6

PeteBleackley committed

Correct dimension of consistency cosine
14d83dc

PeteBleackley committed

Using torch.nn.CosineSimilarity to simplify code
798488e

PeteBleackley committed
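
The three consistency commits above fit together: torch.nn.CosineSimilarity computes the cosine along one named dimension (the dimension fix), and the target value is cast to float32 so it matches the model's dtype (the 32-bit fix). A sketch with illustrative names (consistency_loss, statement0, statement1 are not from the repo):

```python
import torch

cosine = torch.nn.CosineSimilarity(dim=-1)  # compare along the feature axis

def consistency_loss(statement0, statement1, consistency):
    # statement0, statement1: (batch, d) sentence vectors
    # consistency: targets in [-1, 1], possibly loaded as float64
    consistency = consistency.to(torch.float32)
    return torch.mean((cosine(statement0, statement1) - consistency) ** 2)
```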

Removed unnecessary parameters
684c1d8

PeteBleackley committed

Attention mask in decoder
69cf4c5

PeteBleackley committed

Set use_cache argument
9052370

PeteBleackley committed
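
For context on the use_cache commit: HuggingFace decoder models accept a use_cache argument controlling whether past key/values are returned for incremental generation; during training it is typically switched off. A runnable sketch with a deliberately tiny, illustrative config:

```python
import torch
from transformers import RobertaConfig, RobertaModel

config = RobertaConfig(vocab_size=100, hidden_size=32, num_hidden_layers=2,
                       num_attention_heads=2, intermediate_size=64,
                       is_decoder=True)
decoder = RobertaModel(config)
input_ids = torch.randint(0, 100, (2, 10))
outputs = decoder(input_ids=input_ids, use_cache=False)  # no past key/values
```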

Fix Einstein summation notation
7fe1144

PeteBleackley committed
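
On the Einstein summation fix: in torch.einsum the subscript string decides everything. Indices shared by the operands and kept in the output are batched; indices dropped from the output are summed. A wrong string often still runs, just computing the wrong contraction, which is what makes this class of bug easy to introduce. The subscripts below are illustrative, not the repo's:

```python
import torch

weights = torch.rand(2, 5)      # (batch, seq) attention weights
hidden = torch.rand(2, 5, 8)    # (batch, seq, d) token vectors
# 'b' is batched, 's' is summed away, 'd' is kept
pooled = torch.einsum('bs,bsd->bd', weights, hidden)
assert pooled.shape == (2, 8)
```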

Use keepdim option when normalising vectors
738b546

PeteBleackley committed

Make EPSILON a tensor
cf5f935

PeteBleackley committed

torch.maximum, not torch.max
98ad67d

PeteBleackley committed
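
The three commits above (keepdim, EPSILON as a tensor, torch.maximum) all point at the same normalisation code. torch.maximum takes the elementwise maximum of two tensors, unlike the reduction form torch.max(t, dim), and both of its arguments must be tensors, which is why EPSILON becomes a tensor. A minimal sketch:

```python
import torch

EPSILON = torch.tensor(1e-12)  # a tensor, as torch.maximum requires

def normalise(vectors):
    # keepdim=True leaves the summed axis in place, so (batch, d)
    # divides by (batch, 1) and broadcasting lines up
    norms = torch.sqrt(torch.sum(vectors ** 2, dim=-1, keepdim=True))
    return vectors / torch.maximum(norms, EPSILON)  # elementwise, not a reduction
```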

Unsqueeze attention mask
bc77ce5

PeteBleackley committed
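
The unsqueeze commit is the usual broadcasting fix: the tokenizer's attention mask is (batch, seq), while the hidden states are (batch, seq, d), so the mask needs a trailing singleton dimension before the two can be multiplied:

```python
import torch

attention_mask = torch.ones(4, 128)      # (batch, seq)
hidden_states = torch.rand(4, 128, 768)  # (batch, seq, d)
masked = hidden_states * attention_mask.unsqueeze(-1)  # broadcasts to (batch, seq, d)
```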

Unpack BatchEncoding
b5ce6f8

PeteBleackley committed

Create BatchEncoding in pad
a0c9643

PeteBleackley committed

Use BatchEncoding for training, not batch_encoding
08197ec

PeteBleackley committed

Use BatchEncoding for training
80162eb

PeteBleackley committed
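
The four BatchEncoding commits above follow one pattern: the pad step wraps its padded tensors in a transformers.BatchEncoding, and the training loop unpacks it with ** so the keys map onto the model's forward() arguments. A sketch with illustrative tensors standing in for real padding logic:

```python
import torch
from transformers import BatchEncoding

def pad(features):
    # real code would pad 'features' to a common length; these tensors
    # are illustrative stand-ins for the padded result
    return BatchEncoding({'input_ids': torch.randint(0, 100, (4, 128)),
                          'attention_mask': torch.ones(4, 128, dtype=torch.long)})

batch = pad([])
input_ids, attention_mask = batch['input_ids'], batch['attention_mask']
# or unpack straight into a HuggingFace forward pass: model(**batch)
```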

Removed unnecessary on_epoch_end
02e37b9

PeteBleackley committed

input_embeddings not needed
a5b7b8e

PeteBleackley committed

Removed unnecessary parameter
8172944

PeteBleackley committed

get_input_embeddings() directly from base model
e095479

PeteBleackley committed
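
On the embedding commits ("input_embeddings not needed", "get_input_embeddings() directly from base model"): get_input_embeddings() is the standard PreTrainedModel accessor, so a wrapper needn't keep its own reference to the embedding layer. A tiny illustrative example, with a deliberately small config:

```python
import torch
from transformers import RobertaConfig, RobertaModel

config = RobertaConfig(vocab_size=100, hidden_size=32, num_hidden_layers=1,
                       num_attention_heads=2, intermediate_size=64)
base = RobertaModel(config)
embeddings = base.get_input_embeddings()               # the word-embedding layer
vectors = embeddings(torch.randint(0, 100, (2, 10)))   # (2, 10, 32)
```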

Missing 'from_pretrained'
215b416

PeteBleackley committed

config didn't need to be a property
0abed2a

PeteBleackley committed

There's a simpler way of doing this, I hope
858f75e

PeteBleackley committed

Might be simpler to inherit from RobertaModel rather than PreTrainedModel
f0ad7f1

PeteBleackley committed

Removed a base model that was causing a loop in model initialisation
87535ff

PeteBleackley committed

Problems with config
2f6dc26

PeteBleackley committed

Removed line that would have failed
dbfe7ff

PeteBleackley committed

Fixed import
acda749

PeteBleackley committed

Typo
ed62a1c

PeteBleackley committed

Further changes for compatibility with HuggingFace PyTorch implementation
5b7a8ed

PeteBleackley committed

PyTorch implementation of HuggingFace PreTrainedModel class does not allow direct setting of base_model. Rejig constructors accordingly
519dfd1

PeteBleackley committed
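
The constructor rejig reflects how PreTrainedModel works: base_model is a read-only property that resolves base_model_prefix, so the wrapped encoder has to live under a named attribute rather than being assigned to self.base_model. A sketch; the class name matches the repo but the body is illustrative:

```python
from transformers import PreTrainedModel, RobertaConfig, RobertaModel

class QaracEncoderModel(PreTrainedModel):
    config_class = RobertaConfig
    base_model_prefix = 'encoder'   # the base_model property resolves this name

    def __init__(self, config):
        super().__init__(config)
        self.encoder = RobertaModel(config)  # not self.base_model = ...

    def forward(self, input_ids, attention_mask=None):
        return self.encoder(input_ids=input_ids, attention_mask=attention_mask)

config = RobertaConfig(vocab_size=100, hidden_size=32, num_hidden_layers=1,
                       num_attention_heads=2, intermediate_size=64)
model = QaracEncoderModel(config)
assert model.base_model is model.encoder  # resolved via base_model_prefix
```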

Removed superfluous ()
4cda7b6

PeteBleackley committed

Removed superfluous ()
518e821

PeteBleackley committed

Corrected inheritance
8823ce8

PeteBleackley committed

Modified CombinedCorpus to use PyTorch
7a9be99

PeteBleackley committed

Converted QaracTrainerModel to use PyTorch
56e5680

PeteBleackley committed

Converted QaracDecoderModel to use PyTorch
13f1508

PeteBleackley committed

Converted QaracEncoderModel to use PyTorch
37a581e

PeteBleackley committed

Converted GlobalAttentionPoolingHead to use PyTorch
32df2f1

PeteBleackley committed