
Allow for attention weights to be extracted.

#2 by FJFehr - opened

There is a small indexing bug that prevented the attention weights from being returned. I think this change fixes it.
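For context, here is a minimal sketch of how attention weights are usually requested from a Transformers model. The model name is a hypothetical stand-in and the exact output layout may differ for this repo's custom_code model; this only illustrates the kind of output the fix is meant to expose.

```python
# Minimal sketch: requesting attention weights via the Transformers API.
# "bert-base-uncased" is a placeholder; a custom_code model may differ.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"  # hypothetical stand-in for this repo's model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)

inputs = tokenizer("Attention weights, please.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer, each shaped
# (batch_size, num_heads, seq_len, seq_len).
for layer_idx, attn in enumerate(outputs.attentions):
    print(layer_idx, attn.shape)
```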

Ready to merge
This branch is ready to get merged automatically.
