Update README.md
README.md CHANGED
@@ -30,6 +30,7 @@ For generating single-step actions in GUI agent tasks, you can use:

`OS-Atlas-Action-4B` is a GUI action model finetuned from OS-Atlas-Base-4B. By taking as input a system prompt, basic and custom actions, and a task instruction, the model generates thoughtful reasoning (`thought`) and executes the appropriate next step (`action`).

+Note that the released OS-Atlas-Pro-4B model is described in Section 5.4 of the paper. Compared to the OS-Atlas models in Tables 4 and 5, the Pro model demonstrates superior generalizability and performance: it is not constrained to specific tasks or training datasets merely to satisfy particular experimental conditions such as OOD and SFT. This also keeps us from overloading HuggingFace with 20+ distinct model checkpoints.
### Installation
To use `OS-Atlas-Action-4B`, first install the necessary dependencies:
```
@@ -131,7 +132,7 @@ model = AutoModel.from_pretrained(
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)

# set the max number of tiles in `max_num`
-pixel_values = load_image('/
+pixel_values = load_image('./examples/images/action_example_1.jpg', max_num=6).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)

sys_prompt = """
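For context, the changed `pixel_values` line sits inside the README's quick-start snippet. Below is a minimal sketch of that inference flow, assuming the InternVL2-style `model.chat` interface that OS-Atlas-Base-4B inherits and the `load_image` tiling helper and `sys_prompt` string defined earlier in the README; the checkpoint name, image path, and task instruction are illustrative placeholders, not part of this diff.

```python
# Minimal sketch of the quick-start inference flow (assumptions: InternVL2-style
# `model.chat` API via trust_remote_code; `load_image` and `sys_prompt` are the
# helpers/strings defined earlier in the README; paths below are placeholders).
import torch
from transformers import AutoModel, AutoTokenizer

path = "OS-Copilot/OS-Atlas-Base-4B"  # placeholder; point this at the checkpoint you actually use
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)

# `max_num` caps how many tiles the screenshot is split into before encoding.
pixel_values = load_image('./examples/images/action_example_1.jpg', max_num=6).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)

# The system prompt (action space, output format) is combined with a per-step
# task instruction; the exact template is the one given in the README.
question = sys_prompt + "\nTask instruction: open the settings app"
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(response)  # expected to contain the model's `thought` and the next `action`
```

Note that with `do_sample=True` the generated `thought`/`action` can vary across runs; setting `do_sample=False` gives deterministic decoding if reproducible actions are needed.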