liuhaotian committed
Commit ff6a08f
Parent: d85229e

Update app.py

Files changed (1): app.py +4 -6
app.py CHANGED
@@ -69,12 +69,10 @@ if __name__ == "__main__":
     ONLY WORKS WITH GPU! By default, we load the model with 4-bit quantization to make it fit in smaller hardwares. Set the environment variable `bits` to control the quantization.

     Set the environment variable `model` to change the model, and switch hardware accordingly:
-    | Model                            | Hardware   |
-    |----------------------------------|------------|
-    | liuhaotian/llava-v1.6-mistral-7b | T4 small   |
-    | liuhaotian/llava-v1.6-vicuna-7b  | T4 small   |
-    | liuhaotian/llava-v1.6-vicuna-13b | T4 small   |
-    | liuhaotian/llava-v1.6-34b        | A10G large |
+    [`liuhaotian/llava-v1.6-mistral-7b`](https://huggingface.co/liuhaotian/llava-v1.6-mistral-7b)
+    [`liuhaotian/llava-v1.6-vicuna-7b`](https://huggingface.co/liuhaotian/llava-v1.6-vicuna-7b)
+    [`liuhaotian/llava-v1.6-vicuna-13b`](https://huggingface.co/liuhaotian/llava-v1.6-vicuna-13b)
+    [`liuhaotian/llava-v1.6-34b`](https://huggingface.co/liuhaotian/llava-v1.6-34b)
     """

     print(f"args: {gws.args}")
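The docstring being edited says the Space is configured through the environment variables `bits` and `model`. A minimal sketch of how such a launcher might read them — the variable names come from the docstring, but the default values and the parsing code here are illustrative assumptions, not the app's actual implementation:

```python
import os

# Hypothetical configuration reader, assuming the conventions described in
# the docstring: `model` picks the checkpoint, `bits` the quantization level.
# Defaults below are assumptions for illustration only.
model_path = os.environ.get("model", "liuhaotian/llava-v1.6-mistral-7b")
bits = int(os.environ.get("bits", "4"))  # e.g. 4 for 4-bit quantization

print(f"loading {model_path} with {bits}-bit quantization")
```

Under this assumption, one would switch models at launch time with something like `model=liuhaotian/llava-v1.6-34b bits=8 python app.py`, choosing hardware to match the checkpoint size.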