HugoLaurencon committed on
Commit 2f09a11 · verified · 1 Parent(s): 4746d53

Update README.md

Files changed (1): README.md +4 -4
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 license: mit
 datasets:
-- HuggingFaceM4/img2html
+- HuggingFaceM4/WebSight
 language:
 - en
 tags:
@@ -15,7 +15,7 @@ tags:
 
 This model converts screenshots of website components into HTML/CSS codes.
 
-It is based on a very early checkpoint of our forthcoming vision-language foundation model, which has been fine-tuned using the [img2html](https://huggingface.co/datasets/HuggingFaceM4/img2html) dataset.
+It is based on a very early checkpoint of our forthcoming vision-language foundation model, which has been fine-tuned using the [Websight](https://huggingface.co/datasets/HuggingFaceM4/Websight) dataset.
 
 This is very much an alpha version. The goal is to kick off an effort to develop improved models capable of converting a website screenshot into actual code.
 
@@ -96,10 +96,10 @@ print(generated_text)
 - **Parent Models:** [SigLIP](https://github.com/huggingface/transformers/pull/26522) and [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
 - **Resources for more information:**
 <!-- - [GitHub Repo](https://github.com/huggingface/m4/) -->
-- img2html dataset: [Dataset card](https://huggingface.co/datasets/HuggingFaceM4/img2html)
+- Websight dataset: [Dataset card](https://huggingface.co/datasets/HuggingFaceM4/Websight)
 
 # License
 
 The model is built on top of two pre-trained models: [SigLIP](https://github.com/huggingface/transformers/pull/26522) and [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). As such, users should comply with the licenses of these models.
 
 The two pre-trained models are connected to each other with newly initialized parameters that we train. These are not based on any of the two base frozen models forming the composite model. We release the additional weights we trained under an MIT license.