Correct long-form generation config parameters 'max_initial_timestamp_index' and 'prev_sot_token_id'.
Hey openai 👋,
Your model repository seems to contain an outdated generation config parameter, 'max_initial_timestamp_index', and is missing the 'prev_sot_token_id' parameter. These parameters need to be updated to correctly handle long-form generation, as described in https://github.com/huggingface/transformers/pull/27658. This PR makes sure that everything is up to date and can be safely merged.
Best, the Transformers team.
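For illustration, the relevant portion of a Whisper `generation_config.json` after such an update might look like the fragment below. The values shown are illustrative assumptions, not the exact ones from the PR; the correct values for a given checkpoint come from the PR itself:

```json
{
  "max_initial_timestamp_index": 50,
  "prev_sot_token_id": 50361
}
```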
Hi, just curious - I read about your merged PR, and I appreciate you implementing this!
If I understood correctly, are these parameters still not updated in the model configs? I really need parameters like "condition_on_prev_tokens" and "no_speech_threshold" to tackle long-form speech, and I'd love to see them supported in transformers.
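In case it helps, here is a minimal sketch of how these long-form options could be passed to `model.generate()` for a Whisper model. The specific values (thresholds, temperature fallback schedule) are assumptions for illustration, not the defaults shipped with the PR:

```python
# Sketch of long-form generation kwargs for Whisper (values are illustrative).
long_form_kwargs = {
    "condition_on_prev_tokens": True,  # condition each chunk on previously decoded text
    "no_speech_threshold": 0.6,        # treat segments above this no-speech probability as silence
    "temperature": (0.0, 0.2, 0.4, 0.6, 0.8, 1.0),  # fallback schedule for retries
    "logprob_threshold": -1.0,         # retry at a higher temperature if avg logprob is too low
    "return_timestamps": True,         # timestamps are needed for long-form chunking
}

# With a loaded model and processed inputs, this would be invoked as:
# result = model.generate(**inputs, **long_form_kwargs)
```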