# CCIP

CCIP (Contrastive Anime Character Image Pre-Training) is a model for calculating the visual similarity between the anime characters in two images (limited to images containing a single anime character each). The more similar the characters in the two images are, the higher the similarity score.

# Usage

CCIP can be used through [imgutils](https://dghs-imgutils.deepghs.org/main/tutorials/installation/index.html).

![](https://dghs-imgutils.deepghs.org/main/_images/ccip_small.plot.py.svg)

Calculate the character similarity between images:
```python
from imgutils.metrics import ccip_batch_differences

# Pairwise character differences (lower value = more similar characters)
ccip_batch_differences(['ccip/1.jpg', 'ccip/2.jpg', 'ccip/6.jpg', 'ccip/7.jpg'])
# array([[6.5350548e-08, 1.6583106e-01, 4.2947042e-01, 4.0375218e-01],
#        [1.6583106e-01, 9.8025822e-08, 4.3715334e-01, 4.0748104e-01],
#        [4.2947042e-01, 4.3715334e-01, 3.2675274e-08, 3.9229470e-01],
#        [4.0375218e-01, 4.0748104e-01, 3.9229470e-01, 6.5350548e-08]],
#       dtype=float32)
```
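For a quick same-character decision, the pairwise differences can be compared against a fixed threshold. The sketch below is only an illustration: the threshold value is taken from the `ccip-caformer_b36-24` row of the Performance table below and would need to match the model variant actually in use.

```python
import numpy as np
from imgutils.metrics import ccip_batch_differences

# Assumed threshold from the Performance table (ccip-caformer_b36-24);
# adjust it if you use a different model variant.
THRESHOLD = 0.213231

files = ['ccip/1.jpg', 'ccip/2.jpg', 'ccip/6.jpg', 'ccip/7.jpg']
differences = ccip_batch_differences(files)

# Boolean matrix: True where two images are predicted to show the same character.
same_character = differences <= THRESHOLD
print(same_character)
```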

[More detailed instructions](https://dghs-imgutils.deepghs.org/main/api_doc/metrics/ccip.html)

# Performance

| Model | F1 Score | Precision | Recall | Threshold | Cluster_2 | Cluster_Free |
|:-----------------------------------:|:--------:|:---------:|:--------:|:---------:|:---------:|:------------:|
| ccip-caformer_b36-24                | 0.940925 | 0.938254  | 0.943612 | 0.213231  | 0.89508   | 0.957017     |
| ccip-caformer-24-randaug-pruned     | 0.917211 | 0.933481  | 0.901499 | 0.178475  | 0.890366  | 0.922375     |
| ccip-v2-caformer_s36-10             | 0.906422 | 0.932779  | 0.881513 | 0.207757  | 0.874592  | 0.89241      |
| ccip-caformer-6-randaug-pruned_fp32 | 0.878403 | 0.893648  | 0.863669 | 0.195122  | 0.810176  | 0.897904     |
| ccip-caformer-5_fp32                | 0.864363 | 0.90155   | 0.830121 | 0.183973  | 0.792051  | 0.862289     |
| ccip-caformer-4_fp32                | 0.844967 | 0.870553  | 0.820842 | 0.18367   | 0.795565  | 0.868133     |
| ccip-caformer_query-12              | 0.823928 | 0.871122  | 0.781585 | 0.141308  | 0.787237  | 0.809426     |
| ccip-caformer-23_randaug_fp32       | 0.81625  | 0.854134  | 0.781585 | 0.136797  | 0.745697  | 0.8068       |
| ccip-caformer-2-randaug-pruned_fp32 | 0.78561  | 0.800148  | 0.771592 | 0.171053  | 0.686617  | 0.728195     |
| ccip-caformer-2_fp32                | 0.755125 | 0.790172  | 0.723055 | 0.141275  | 0.64977   | 0.718516     |

* The calculation of `F1 Score`, `Precision`, and `Recall` treats "the characters in the two images are the same" as the positive case. `Threshold` is the value that maximizes the F1 Score curve.
* `Cluster_2` is the approximate optimal clustering result obtained by tuning the `eps` value of the DBSCAN algorithm with `min_samples` fixed to `2`, evaluated by the adjusted Rand score between the obtained clusters and the true character grouping (a minimal clustering sketch follows this list).
* `Cluster_Free` is the approximate optimal result obtained by tuning the `max_eps` and `min_samples` values of the OPTICS algorithm, also evaluated by the adjusted Rand score against the true character grouping.

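A minimal sketch of the `Cluster_2`-style evaluation, assuming scikit-learn is available. The image list, ground-truth labels, and the `eps` value here are hypothetical placeholders; in practice `eps` is swept over a range and the best score is reported.

```python
from sklearn.cluster import DBSCAN
from sklearn.metrics import adjusted_rand_score
from imgutils.metrics import ccip_batch_differences

# Hypothetical image list and ground-truth character labels (same order).
files = ['ccip/1.jpg', 'ccip/2.jpg', 'ccip/6.jpg', 'ccip/7.jpg']
true_labels = [0, 0, 1, 2]

# CCIP pairwise differences serve as a precomputed distance matrix.
distances = ccip_batch_differences(files)

# DBSCAN with min_samples=2, as described for the Cluster_2 metric;
# eps would normally be tuned and the best adjusted Rand score kept.
pred_labels = DBSCAN(eps=0.2, min_samples=2, metric='precomputed').fit_predict(distances)

print(adjusted_rand_score(true_labels, pred_labels))
```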
![operations benchmark](https://dghs-imgutils.deepghs.org/main/_images/ccip_benchmark.plot.py.svg)

# Citation

```bibtex
@misc{CCIP,
    title={Contrastive Anime Character Image Pre-Training},
    author={Ziyi Dong and narugo1992},
    year={2024},
    howpublished={\url{https://huggingface.co/deepghs/ccip}}
}
```