How can I fix this issue?

#1
by BoreGuy1998 - opened

Can't determine model type from model name

Well, what software are you running? If you can provide details, maybe someone can help. There's probably a certain way you have to name the file for the program you're using.

Nvidia if that is what you mean by software.

No, I mean what program are you using to run the model? I googled the error you got, and it looks like it's from oobabooga's text-generation web UI and that it needs a certain naming scheme.

oobabooga_windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 2405, in from_pretrained
raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory models\Monero_Pygmalion-Metharme-7b-4bit-TopScore.

I hope this helps.
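A note on that OSError: from_pretrained only looks for the four weight-file names listed in the message, while 4-bit GPTQ checkpoints are usually shipped as a .safetensors or .pt file instead, so the stock transformers loader will never find them. A quick way to see what the folder actually contains (the directory name is taken from the error above; adjust to your setup):

```python
import os

# Directory name taken from the OSError above; adjust to your setup.
model_dir = os.path.join("models", "Monero_Pygmalion-Metharme-7b-4bit-TopScore")

# The file names from_pretrained looks for, per the error message.
expected = ["pytorch_model.bin", "tf_model.h5",
            "model.ckpt.index", "flax_model.msgpack"]

print("Files in model dir:", os.listdir(model_dir))
print("Standard weight files present:",
      [f for f in expected if os.path.exists(os.path.join(model_dir, f))])
```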

There is also this, if that is what you meant by the program:

The detected CUDA version (12.1) mismatches the version that was used to compile PyTorch (11.7). Please make sure to use the same CUDA versions.
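For the CUDA mismatch, a quick sanity check is to compare the CUDA version your PyTorch build was compiled against with the toolkit installed on the machine; a minimal sketch using the standard torch API:

```python
import torch

# CUDA version this PyTorch build was compiled against (e.g. "11.7")
print("torch built with CUDA:", torch.version.cuda)
# Whether PyTorch can currently see a usable CUDA device
print("CUDA available:", torch.cuda.is_available())
```

If torch reports 11.7 while your installed toolkit is 12.1, you need a PyTorch build that matches the toolkit (or a toolkit that matches PyTorch), as the message says.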

Also having this issue. I am using oobabooga with pygmalion-7b-4bit-128g-cuda, following a video from Aitrepreneur.

start it with --model_type llama
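For clarity: --model_type llama is a command-line flag for server.py, not a standalone command. A minimal sketch of passing it, assuming you launch from inside the text-generation-webui directory:

```python
import subprocess

# Sketch: start the web UI with the model type forced to llama.
# Assumes the current working directory is text-generation-webui.
subprocess.run(["python", "server.py", "--model_type", "llama"])
```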

That fixed it for me, thanks!

Where should I put that in?

I'm getting this, so I doubt I am putting it in the right place:

At line:1 char:3
+ --model_type llama
+   ~
Missing expression after unary operator '--'.
At line:1 char:3
+ --model_type llama
+   ~~~~~~~~~~
Unexpected token 'model_type' in expression or statement.
    + CategoryInfo          : ParserError: (:) [], ParentContainsErrorRecordException
    + FullyQualifiedErrorId : MissingExpressionAfterOperator

Where do I put this in oobabooga?

Probably as part of the launch options, or via the model type selector in the web UI.

I think that would be in the webui.py file. Open the file, search for 'python server.py', and add the flag there, roughly as in the sketch below.

Not sure, but it might then treat every model as llama, so other models may or may not work. So maybe try selecting llama from the oobabooga web UI first and reloading the model; if that doesn't work, you can try the solution above.
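A minimal sketch of the idea above, not the exact contents of webui.py (which vary by installer version): the one-click launcher ultimately builds a 'python server.py ...' command, and the flag gets appended to it.

```python
import subprocess

# Hypothetical recreation of what the one-click webui.py does:
# assemble the server command, then append extra flags to it.
cmd = ["python", "server.py", "--chat"]
cmd += ["--model_type", "llama"]  # the flag suggested above
subprocess.run(cmd)
```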
