Model consistently gets into a loop and repeats itself when there is too much in the context window

#48
by mstachow - opened

I've found that the model will consistently get into a loop and repeat itself, which is unfortunate because it's otherwise excellent. This happens when the input gets too long, although I haven't pinned down exactly where the errors start. I am running the model behind a FastAPI endpoint, but I doubt that is the cause. Here is the function I have been using. Note that the generation parameters and the model loading are per the documentation; max_length is passed in as a parameter with the web request. Using a shorter max_length doesn't seem to matter, though. The problem is when the prompt itself gets too long.

from fastapi import HTTPException

# RequestModel and ResponseModel are pydantic models carrying `prompt`,
# `max_length`, and `response`; `pipe` is the transformers pipeline
# loaded per the documentation.

async def generate_text(request: RequestModel):
    print(request)

    # Greedy decoding, per the documentation.
    generation_args = {
        "max_new_tokens": request.max_length,
        "return_full_text": False,
        "temperature": 0.0,
        "do_sample": False,
    }
    try:
        # Wrap the raw prompt as a single-turn chat message.
        messages = [{"role": "user", "content": request.prompt}]
        outputs = pipe(messages, **generation_args)
        response_text = outputs[0]["generated_text"]
        print(response_text)
        return ResponseModel(response=response_text)
    except Exception as e:
        print(e)
        raise HTTPException(status_code=500, detail=str(e))
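
One mitigation I haven't tried yet is transformers' standard repetition controls, which the pipeline passes straight through to generate(); the values here are guesses rather than anything from the documentation:

generation_args = {
    "max_new_tokens": request.max_length,
    "return_full_text": False,
    "do_sample": False,
    # Standard transformers generate() kwargs; these suppress verbatim
    # loops but don't address the underlying long-context degradation.
    "repetition_penalty": 1.1,    # guess; values > 1.0 penalize repeats
    "no_repeat_ngram_size": 6,    # guess; blocks repeating any 6-gram
}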

I am having the same issue in LM Studio. I suspect it's a prompt template issue.
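
If it is the template, a quick way to check is to print exactly what the chat template renders before anything hits the model (sketch only; model_id stands in for this repo's id, and the message is just an example):

from transformers import AutoTokenizer

model_id = "..."  # this repo's model id
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "hello"}]
# Render the prompt as a string, without tokenizing, so the special
# tokens and role markers the template adds can be inspected by eye.
rendered = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(rendered)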

I see. Do you imagine the issue will go away if I were to avoid using the pipeline?
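
If I do try it, I'd expect to call generate directly along these lines (untested sketch; it assumes the same model and tokenizer the pipeline wraps are already loaded):

# Build the same chat-formatted input the pipeline builds internally.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=request.max_length,
    do_sample=False,
)
# Decode only the newly generated tokens, skipping the echoed prompt.
response_text = tokenizer.decode(
    output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
)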

Hi there,
I'm experiencing the same issue. I haven't found the exact cutoff either, but Phi seems to start generating gibberish once the input exceeds 4k tokens.
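
For anyone trying to reproduce the threshold, this is roughly how I count the input tokens (the count includes the chat-template wrapping, which the raw character length misses):

# Count the tokens the model actually sees: the chat template adds
# role markers and special tokens on top of the raw prompt.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)
print(len(input_ids))  # outputs degrade somewhere past ~4k here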

I must admit I've given up on Phi for this reason. Qwen 32B is smaller, better at reasoning, and doesn't have this issue, and I haven't found it to be any slower.
