FJFehr committed
Commit 1dc5982
1 Parent(s): c4a6531

Allow for attention weights to be extracted.


There is a small indexing bug that prevented the attention weights from being returned. I think this change fixes it.
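For context, a minimal sketch of the slicing issue in plain Python, using stand-in strings for tensors (the tuple layout is assumed from the surrounding block, not copied from the upstream file): by this point, outputs already holds only the extra tensors from the attention call, so slicing it a second time discards the attention weights.

# Hedged sketch of the tuple handling in CodeSageBlock.forward.
# Stand-ins for what the attention module returns when
# output_attentions=True (names assumed for illustration).
attn_outputs = ("attn_output", "attention_weights")
outputs = attn_outputs[1:]               # ("attention_weights",) -- extras only

hidden_states = "hidden_states"          # stand-in for residual + MLP result

buggy = (hidden_states,) + outputs[1:]   # second slice drops the weights
fixed = (hidden_states,) + outputs       # keeps the attention weights

print(buggy)   # ('hidden_states',)
print(fixed)   # ('hidden_states', 'attention_weights')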

Files changed (1)
  1. modeling_codesage.py +1 -1
modeling_codesage.py CHANGED
@@ -149,7 +149,7 @@ class CodeSageBlock(nn.Module):
         feed_forward_hidden_states = self.mlp(hidden_states)
         hidden_states = residual + feed_forward_hidden_states

-        outputs = (hidden_states,) + outputs[1:]
+        outputs = (hidden_states,) + outputs
         return outputs  # hidden_states, present, (attentions)


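With the fix applied, the weights should come back through the standard transformers flag. A hedged usage sketch: the checkpoint id codesage/codesage-small is an assumption for illustration, and whether .attentions is populated (rather than a plain tuple) depends on this repo's custom modeling code.

from transformers import AutoModel, AutoTokenizer

# Checkpoint id is an assumption for illustration; use the actual repo id.
checkpoint = "codesage/codesage-small"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModel.from_pretrained(checkpoint, trust_remote_code=True)

inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")
outputs = model(**inputs, output_attentions=True)

# With the fix, one attention tensor per layer should be returned,
# each shaped (batch, num_heads, seq_len, seq_len).
print(len(outputs.attentions), outputs.attentions[0].shape)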