Vivien Chappelier committed on
Commit
e3f5433
1 Parent(s): 584a7bc

set model commit_hash

Files changed (2)
  1. README.md +3 -2
  2. detect_demo.py +2 -1
README.md CHANGED
```diff
@@ -19,7 +19,8 @@ import sys
 import torch
 
 image_processor = BlipImageProcessor.from_pretrained("imatag/stable-signature-bzh-detector-resnet18")
-model = AutoModel.from_pretrained("imatag/stable-signature-bzh-detector-resnet18", trust_remote_code=True)
+commit_hash = "584a7bc01dc0f02e53bf8b8b295717ed09ed7294"
+model = AutoModel.from_pretrained("imatag/stable-signature-bzh-detector-resnet18", trust_remote_code=True, revision=commit_hash)
 
 img = Image.open(sys.argv[1]).convert("RGB")
 inputs = image_processor(img, return_tensors="pt")
@@ -33,7 +34,7 @@ print(f"approximate p-value: {p}")
 
 This model is an approximate version of [IMATAG](https://www.imatag.com/)'s BZH decoder for the watermark embedded in our [SDXL-turbo watermarking demo](https://huggingface.co/spaces/imatag/stable-signature-bzh).
 It works on this watermark only and cannot be used to decode other watermarks.
-It will produce an approximate p-value measuring the risk of detecting a watermark an a benign (non-watermarked) image. Thresholding this value will give a hard decision on whether an image is watermarked (0) or not (1), with an approximate chance of mistakenly claiming the image is watermarked while it's not equal to the threshold. For an exact p-value, please use the [API](https://huggingface.co/spaces/imatag/stable-signature-bzh/resolve/main/detect_api.py) instead.
+It will produce an approximate p-value measuring the risk of detecting a watermark on a benign (non-watermarked) image. Thresholding this value gives a hard decision on whether an image is watermarked (0) or not (1), with the chance of mistakenly claiming a benign image is watermarked approximately equal to the threshold. For an exact p-value and improved robustness, please use the [API](https://huggingface.co/spaces/imatag/stable-signature-bzh/resolve/main/detect_api.py) instead.
 
 For more details on this watermarking technique, check out our [announcement](https://www.imatag.com/blog/unlocking-the-future-of-content-authentication-imatags-breakthrough-in-ai-generated-image-watermarking) and our lab's [blog post](https://imatag-lab.medium.com/stable-signature-meets-bzh-53ad0ba13691).
 
```
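The thresholding rule the README describes can be sketched as follows. `is_watermarked` and the example threshold value are illustrative only, not part of the model's API:

```python
# Minimal sketch of the thresholding described above (hypothetical helper,
# not part of the model's API). The detector outputs an approximate p-value:
# the risk of seeing this detection score on a benign image. Declaring
# "watermarked" when the p-value falls below a chosen threshold keeps the
# false-positive rate approximately equal to that threshold.
def is_watermarked(p_value: float, threshold: float = 1e-6) -> bool:
    return p_value < threshold

print(is_watermarked(1e-12))  # tiny p-value -> True (watermark detected)
print(is_watermarked(0.5))    # large p-value -> False (benign)
```

Choosing a smaller threshold trades missed detections for fewer false alarms on benign images.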
detect_demo.py CHANGED
```diff
@@ -4,7 +4,8 @@ import sys
 import torch
 
 image_processor = BlipImageProcessor.from_pretrained("imatag/stable-signature-bzh-detector-resnet18")
-model = AutoModel.from_pretrained("imatag/stable-signature-bzh-detector-resnet18", trust_remote_code=True)
+commit_hash = "584a7bc01dc0f02e53bf8b8b295717ed09ed7294"
+model = AutoModel.from_pretrained("imatag/stable-signature-bzh-detector-resnet18", trust_remote_code=True, revision=commit_hash)
 
 img = Image.open(sys.argv[1]).convert("RGB")
 inputs = image_processor(img, return_tensors="pt")
```