My discussions crash fairly quickly with an out-of-context-memory error. The model believes it can only use 4,096 tokens of context.
The context window can be increased to 16k, and possibly further.
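For example, if the model is being run locally with llama.cpp (an assumption; the thread does not say which runtime is in use), the context window is set with the `-c`/`--ctx-size` flag; the model path here is hypothetical:

```shell
# Assuming a llama.cpp setup; -c sets the context window size in tokens.
# model.gguf is a placeholder path, not a file named in this thread.
./llama-cli -m model.gguf -c 16384 -p "Hello"
```

Whether the model actually handles 16k tokens well depends on the context length it was trained or extended to, so raising the flag alone may not be enough.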