Update README.md
README.md CHANGED
@@ -23,8 +23,8 @@ For GUI grounding tasks, you can use:
 - [OS-Atlas-Base-4B](https://huggingface.co/OS-Copilot/OS-Atlas-Base-4B)
 
 For generating single-step actions in GUI agent tasks, you can use:
-- [OS-Atlas-
-- [OS-Atlas-
+- [OS-Atlas-Pro-7B](https://huggingface.co/OS-Copilot/OS-Atlas-Pro-7B)
+- [OS-Atlas-Pro-4B](https://huggingface.co/OS-Copilot/OS-Atlas-Pro-4B)
 
 ## OS-Atlas-Action-4B
 
@@ -39,6 +39,9 @@ pip install transformers
 For additional dependencies, please refer to the [InternVL2 documentation](https://internvl.readthedocs.io/en/latest/get_started/installation.html)
 
 ### Example Inference Code
+First download the [example image](https://github.com/OS-Copilot/OS-Atlas/blob/main/examples/images/action_example_1.jpg) and save it to the current directory.
+
+Inference code:
 ```python
 import torch
 import torchvision.transforms as T
@@ -123,7 +126,7 @@ def load_image(image_file, input_size=448, max_num=6):
     return pixel_values
 
 # If you want to load a model using multiple GPUs, please refer to the `Multiple GPUs` section.
-path = '
+path = './action_example_1.jpg' # change to your example image path
 model = AutoModel.from_pretrained(
     path,
     torch_dtype=torch.bfloat16,
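The hunks above only surface fragments of the README's inference example. Below is a hedged sketch of the input-preparation step it describes, assuming the README's `load_image(image_file, input_size=448, max_num=6)` helper (shown only by its signature in the last hunk header) is already defined, and deriving a raw-download URL from the blob link to the example image; none of this is the README's verbatim code.

```python
# Sketch: fetch the example image and build the tiled pixel tensor.
import urllib.request

import torch

# Raw URL inferred from the blob link shown in the diff above (assumption).
IMAGE_URL = (
    "https://raw.githubusercontent.com/OS-Copilot/OS-Atlas/main/"
    "examples/images/action_example_1.jpg"
)
urllib.request.urlretrieve(IMAGE_URL, "./action_example_1.jpg")

# `load_image` (defined earlier in the README's example) tiles the screenshot
# into 448x448 crops and stacks them into one tensor; the bfloat16/cuda cast
# matches the dtype the model is loaded with below.
pixel_values = load_image("./action_example_1.jpg", max_num=6).to(torch.bfloat16).cuda()
```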
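A similarly hedged sketch of the model-loading and generation step: the model identifier (here `OS-Copilot/OS-Atlas-Pro-4B`, one of the links added above) is what `from_pretrained` expects, while the downloaded image path belongs to `load_image`. The chat-style call assumes the InternVL2-style API that the linked documentation describes; check the exact signature against the checkpoint's own model card.

```python
# Sketch under the assumption that the checkpoint ships InternVL2-style
# custom modeling code (hence trust_remote_code) exposing `model.chat(...)`.
import torch
from transformers import AutoModel, AutoTokenizer

model_path = "OS-Copilot/OS-Atlas-Pro-4B"  # assumed model id, see the links above

model = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, use_fast=False)

# `pixel_values` comes from the input-preparation sketch above.
question = "<image>\nIn this UI screenshot, where is the search bar?"  # hypothetical prompt
generation_config = dict(max_new_tokens=1024, do_sample=False)
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(response)
```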