'num_return_sequences' & 'num_beams' can't be changed in inference API calls

#80
by mrscoopers - opened

While using the Inference API, altering the 'num_beams' and 'num_return_sequences' parameters does not change the output for me: the API always returns the same single generated text.

Could anybody please explain why this is?

For some models (such as gpt-2), both parameters work. For others (e.g., bloom), only one does (e.g., num_beams).

%%
import json
import requests

headers = {'Content-type': 'application/json',
           "Authorization": f"Bearer hf_bearer"}

def query_falcon(prompt):
    parameters = {'max_new_tokens': 25, 'early_stopping': True,
                  'return_full_text': False, 'do_sample': False,
                  'num_beams': 10, 'num_return_sequences': 2}
    options = {'use_cache': False}
    payload = {'inputs': prompt,
               'parameters': parameters,
               'options': options}
    data = json.dumps(payload)
    response = requests.request(
        "POST",
        "https://api-inference.huggingface.co/models/tiiuae/falcon-7b-instruct",
        headers=headers,
        data=data)
    try:
        # The API returns a JSON body; decode it into Python objects.
        return json.loads(response.content.decode("utf-8"))
    except Exception:
        return 'Model error'
%%
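For reference, a minimal sketch of how the response body would parse if num_return_sequences were honored. The raw JSON here is hypothetical (hard-coded for illustration, not a live API response): the text-generation endpoint returns a list of dicts with a 'generated_text' key, so two returned sequences should yield a list of length two. The single-element list in the problem above is what this check would surface.

```python
import json

# Hypothetical response body shaped like the Inference API's
# list-of-dicts output when num_return_sequences=2 (assumption for
# illustration; the live API in the question keeps returning one).
raw = '[{"generated_text": "first candidate"}, {"generated_text": "second candidate"}]'

results = json.loads(raw)
texts = [r["generated_text"] for r in results]

# With num_return_sequences=2 honored, two candidates come back.
print(len(texts))
```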
