---
license: mit
library_name: transformers
tags:
- image-to-image
- lineart
inference: false
---

# MangaLineExtraction-hf

A Hugging Face `transformers`-compatible version of [MangaLineExtraction_PyTorch](https://github.com/ljsabc/MangaLineExtraction_PyTorch), which extracts structural line drawings from manga images.

Original repo: https://github.com/ljsabc/MangaLineExtraction_PyTorch

## Example


```py
from PIL import Image
import torch

from transformers import AutoModel, AutoImageProcessor

REPO_NAME = "p1atdev/MangaLineExtraction-hf"

# both the model and the image processor are custom, so trust_remote_code is required
model = AutoModel.from_pretrained(REPO_NAME, trust_remote_code=True)
processor = AutoImageProcessor.from_pretrained(REPO_NAME, trust_remote_code=True)

image = Image.open("./sample.jpg")

# preprocess the image into a batched pixel-value tensor
inputs = processor(image, return_tensors="pt")

with torch.no_grad():
    outputs = model(inputs.pixel_values)

# the output pixel values are the extracted lines; convert to an 8-bit grayscale PIL image
line_image = Image.fromarray(outputs.pixel_values[0].numpy().astype("uint8"), mode="L")
line_image.save("./line_image.png")
```
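
The snippet above runs on CPU. A minimal sketch for GPU inference, assuming standard `torch`/`transformers` device handling (the device placement below is not part of the original example):

```py
from PIL import Image
import torch

from transformers import AutoModel, AutoImageProcessor

REPO_NAME = "p1atdev/MangaLineExtraction-hf"
device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModel.from_pretrained(REPO_NAME, trust_remote_code=True).to(device)
processor = AutoImageProcessor.from_pretrained(REPO_NAME, trust_remote_code=True)

image = Image.open("./sample.jpg")
inputs = processor(image, return_tensors="pt")

with torch.no_grad():
    # move the pixel values to the same device as the model
    outputs = model(inputs.pixel_values.to(device))

# bring the result back to CPU before converting it to a PIL image
line_image = Image.fromarray(outputs.pixel_values[0].cpu().numpy().astype("uint8"), mode="L")
line_image.save("./line_image.png")
```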
Alternatively, you can use the `pipeline` API:

```py
from transformers import pipeline

pipe = pipeline("image-to-image", model="p1atdev/MangaLineExtraction-hf", trust_remote_code=True)
pipe("sample.jpg")
```
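
Assuming the custom pipeline behaves like the stock image-to-image pipeline and returns a PIL image for a single input (the original card does not state this), saving the result might look like:

```py
from transformers import pipeline

pipe = pipeline("image-to-image", model="p1atdev/MangaLineExtraction-hf", trust_remote_code=True)

# assumption: a single input yields a single PIL.Image, as with the standard image-to-image pipeline
line_image = pipe("sample.jpg")
line_image.save("./line_image.png")
```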

|`sample.jpg`|Generated line image|
|-|-|
|<img src="./images/sample.jpg" width="320px" alt="Source image">|<img src="./images/line_image.png" width="320px" alt="Generated line image">|


## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. It has been automatically generated.

- **Developed by:** Chengze Li, Xueting Liu, Tien-Tsin Wong
- **Converted by:** Plat
- **License:** MIT

### Model Sources

- **Repository:** https://github.com/ljsabc/MangaLineExtraction_PyTorch
- **Paper:** https://ttwong12.github.io/papers/linelearn/linelearn.pdf
- **Project page:** https://www.cse.cuhk.edu.hk/~ttwong/papers/linelearn/linelearn.html 

## Citation 

**BibTeX:**

```bibtex
@article{li-2017-deep,
    author   = {Chengze Li and Xueting Liu and Tien-Tsin Wong},
    title    = {Deep Extraction of Manga Structural Lines},
    journal  = {ACM Transactions on Graphics (SIGGRAPH 2017 issue)},
    month    = {July},
    year     = {2017},
    volume   = {36},
    number   = {4},
    pages    = {117:1--117:12},
}
```