Browser test and local test give different results
Hi,
I'm playing with this model and noticed that for the sequence "US guys what do you think about this piece of shit?" (a Reddit title) I get different results from the browser test and from my local machine. Here is what I get locally:
Local machine:
[
  { "label": "anger",    "score": 0.30374875664711 },
  { "label": "disgust",  "score": 0.4684724807739258 },
  { "label": "fear",     "score": 0.089825838804245 },
  { "label": "joy",      "score": 0.002520856214687228 },
  { "label": "neutral",  "score": 0.10477810353040695 },
  { "label": "sadness",  "score": 0.008448047563433647 },
  { "label": "surprise", "score": 0.02220587059855461 }
]
I'm curious why there is such a discrepancy.
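One common source of small score differences between a hosted inference widget and a local run is numeric precision: the server may run the model in fp16 (or with other runtime optimizations) while a local pipeline defaults to fp32. This is only a hypothesis about this thread, not confirmed by it, but the effect is easy to demonstrate in isolation. The logits below are made up for illustration; they are not taken from this model.

import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical raw logits for the 7 emotion labels
# (anger, disgust, fear, joy, neutral, sadness, surprise).
logits = np.array([0.9, 1.3, -0.4, -4.1, -0.2, -3.0, -1.7])

# Same computation in fp32 and fp16.
p32 = softmax(logits.astype(np.float32))
p16 = softmax(logits.astype(np.float16)).astype(np.float32)

print("fp32 scores:", np.round(p32, 6))
print("fp16 scores:", np.round(p16, 6))
print("max abs diff:", np.abs(p32 - p16).max())

The ranking of labels usually survives the precision change, but the individual scores shift in the third or fourth decimal place, which would already explain small disagreements between a browser widget and a local machine. If the gap you see is larger than that, it is more likely a different tokenizer/model revision or different preprocessing.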
I am facing the same issue. Also, pipeline inference is noticeably slower than calling the HF model directly. Is the model used for hosted inference different from the one on the model card?