---
language:
- ar
pipeline_tag: text-to-image
---

## Model Details

Arabic CLIP is an adaptation of Contrastive Language-Image Pre-training (CLIP) for the Arabic language. CLIP, developed by OpenAI, learns visual concepts from images and relates them to textual descriptions. This work adapts that approach to improve the model's understanding and interpretation of visual information in the context of the Arabic language.

## Model Use

```python
from transformers import AutoTokenizer, FlaxVisionTextDualEncoderModel

# Load the Arabic CLIP dual encoder (Arabic BERT text tower + ViT-B/32 vision tower),
# converting the published PyTorch weights to Flax.
model = FlaxVisionTextDualEncoderModel.from_pretrained(
    "LinaAlhuri/Arabic-clip-vit-base-patch32", logit_scale_init_value=1, from_pt=True
)
model.save_pretrained("arabic_clip")

# Tokenizer matching the Arabic BERT text encoder.
tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-base-arabic", use_fast=True)
```

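Once loaded, the model can score how well Arabic text matches an image. The snippet below is a minimal sketch rather than an official example from this repository: the image path and candidate captions are placeholders, and it assumes the vision tower uses the standard CLIP ViT-B/32 preprocessing (the `openai/clip-vit-base-patch32` image processor).

```python
import jax
from PIL import Image
from transformers import AutoTokenizer, CLIPImageProcessor, FlaxVisionTextDualEncoderModel

model = FlaxVisionTextDualEncoderModel.from_pretrained(
    "LinaAlhuri/Arabic-clip-vit-base-patch32", logit_scale_init_value=1, from_pt=True
)
tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-base-arabic", use_fast=True)

# Assumption: standard CLIP ViT-B/32 image preprocessing.
image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")

captions = ["قطة تجلس على الأريكة", "كلب يركض في الحديقة", "سيارة حمراء"]  # cat on a sofa, dog in a park, red car
image = Image.open("example.jpg")  # placeholder image path

text_inputs = tokenizer(captions, padding=True, return_tensors="np")
pixel_values = image_processor(images=image, return_tensors="np").pixel_values

outputs = model(
    input_ids=text_inputs.input_ids,
    attention_mask=text_inputs.attention_mask,
    pixel_values=pixel_values,
)

# logits_per_image has shape (num_images, num_captions); softmax gives relative match scores.
probs = jax.nn.softmax(outputs.logits_per_image, axis=-1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{caption}: {p:.3f}")
```

For retrieval over larger collections, the `image_embeds` and `text_embeds` returned in the same output can be cached and compared directly.
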
## Data

The data was collected by crawling Wikipedia and combining it with commonly used pre-existing image-caption datasets such as [CC](https://ai.google.com/research/ConceptualCaptions/). One of the most challenging obstacles for multimodal work in Arabic is the scarcity of data resources, which makes constructing a huge dataset difficult. Another is the quality degradation of datasets translated from well-known, publicly available English datasets. Whether one relies on translated or genuine data, it is difficult to achieve the desired results from a single source, as each choice has its pros and cons. The goal of this work is therefore to construct the largest feasible collection of Arabic image-text pairs by merging diverse data sources. Genuine datasets contribute rich, natural Arabic descriptions that compensate for the information lost in translation, while translated datasets contribute enough pairs to cover a wide range of domains, scenarios, and objects.

| Dataset name | Images |
| --- | --- |
| Arabic Conceptual Captions | 1,427,210 |
| Arabic COCO 2014 | 414,113 |
| Arabic WIT | 109,366 |
| Arabic Flickr8K | 24,272 |
| Proposed (WAP) dataset | 151,252 |
| Total | 2,126,213 |

## Performance and Limitations

We have tested the efficacy of Arabic CLIP on benchmarks tailored for tasks such as zero-shot learning, image retrieval, localization, and image search, using the following benchmarks:
- Conceptual Captions
- COCO
- ImageNet
- Unsplash

### Zero-shot Learning

| Multilingual CLIP | Top 1 | Top 5 | Top 10 | Top 100 |
|------------------------|-------|--------|--------|---------|
| **Short translation** | 10.10 | 21.99 | 26.70 | 47.57 |
| **Long translation** | 9.518 | 20.942 | 25.54 | 45.59 |

| Arabic Baseline Patch 32 | Top 1 | Top 5 | Top 10 | Top 100 |
|--------------------------|-------|-------|--------|---------|
| **Short translation** | 17.58 | 37.15 | 45.60 | 73.02 |
| **Long translation** | 16.94 | 37.12 | 45.44 | 72.94 |

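Reading the Top 1/5/10/100 columns as top-k accuracy, the sketch below illustrates how such scores can be computed from an image-to-text similarity matrix. This is a generic illustration only; the prompts, label translations, and evaluation data behind the tables above are not reproduced here, and the similarity matrix and labels are toy placeholders.

```python
import numpy as np

def top_k_accuracy(similarity: np.ndarray, true_labels: np.ndarray, k: int) -> float:
    """similarity: (num_images, num_classes) image-to-text scores;
    true_labels: (num_images,) index of the correct class for each image."""
    # Indices of the k highest-scoring classes per image.
    top_k = np.argsort(-similarity, axis=1)[:, :k]
    hits = (top_k == true_labels[:, None]).any(axis=1)
    return float(hits.mean())

# Toy example: 3 images, 4 candidate classes.
sim = np.array([[0.9, 0.1, 0.3, 0.2],
                [0.2, 0.4, 0.8, 0.1],
                [0.5, 0.6, 0.1, 0.3]])
labels = np.array([0, 2, 0])
print(top_k_accuracy(sim, labels, k=1))  # 2/3 of images correct at top-1
print(top_k_accuracy(sim, labels, k=2))  # all 3 correct within top-2
```
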
### Image Retrieval

#### Conceptual Captions Evaluation

| Metric | MCLIP | Baseline Patch 32 |
|-----------|-------|-------------------|
| **MRR@1** | 0.064 | 0.165 |
| **MRR@5** | 0.093 | 0.231 |
| **MRR@10** | 0.100 | 0.244 |

#### COCO Evaluation

| Metric | MCLIP | Baseline Patch 32 |
|-----------|-------|-------------------|
| **MRR@1** | 0.043 | 0.082 |
| **MRR@5** | 0.068 | 0.127 |
| **MRR@10** | 0.074 | 0.138 |

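The tables report MRR@k (mean reciprocal rank with a cutoff of k): for each caption, the reciprocal of the rank of its matching image, counted as 0 when that image falls outside the top k, averaged over all captions. A minimal sketch under that reading, using a toy text-to-image similarity matrix as a placeholder:

```python
import numpy as np

def mrr_at_k(similarity: np.ndarray, correct_image: np.ndarray, k: int) -> float:
    """similarity: (num_captions, num_images) text-to-image scores;
    correct_image: (num_captions,) index of the matching image for each caption."""
    # Rank images for each caption from most to least similar.
    ranking = np.argsort(-similarity, axis=1)
    reciprocal_ranks = []
    for row, target in zip(ranking, correct_image):
        # 1-based rank of the correct image for this caption.
        rank = int(np.where(row == target)[0][0]) + 1
        reciprocal_ranks.append(1.0 / rank if rank <= k else 0.0)
    return float(np.mean(reciprocal_ranks))

# Toy example: 2 captions, 3 candidate images.
sim = np.array([[0.2, 0.9, 0.1],
                [0.7, 0.3, 0.6]])
correct = np.array([1, 2])          # caption 0 matches image 1, caption 1 matches image 2
print(mrr_at_k(sim, correct, k=1))  # caption 0 ranked 1st, caption 1 missed: (1 + 0) / 2 = 0.5
print(mrr_at_k(sim, correct, k=5))  # (1 + 1/2) / 2 = 0.75
```
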
## Limitations

To summarize, the main limitations are:
- Arabic CLIP struggles with counting objects beyond three.
- Genuine (non-translated) image-text samples for the Arabic language are limited.
- Various kinds of noise and bias may be present in Arabic CLIP, since no studies have yet addressed these issues in the published Arabic datasets or Arabic language models.

### Bias

Regarding gender bias, it is important to note that Arabic uses a two-gender system in which all nouns are classified as masculine or feminine, whereas English does not. Translating text from English to Arabic may therefore lose information or introduce gender bias.