automatically load the model with huggingface

#9
No description provided.

This is a stable PR that adds a custom architecture, so you can use the RMBG-1.4 model directly with the transformers library.

How to use

Either load the model directly:

from transformers import AutoModelForImageSegmentation
model = AutoModelForImageSegmentation.from_pretrained("briaai/RMBG-1.4", revision="refs/pr/9", trust_remote_code=True)

Or load the pipeline:

from transformers import pipeline

pipe = pipeline("image-segmentation", model="briaai/RMBG-1.4", revision="refs/pr/9", trust_remote_code=True)

numpy_mask = pipe("img_path") # outputs numpy mask

pipe("image_path",out_name="myout.png") # applies mask and saves the extracted image as `myout.png`

Parameters and methods:

For the pipeline you can use the following parameters (see the sketch after the list):

  • model_input_size : defaults to [1024, 1024]
  • out_name : if specified, the pipeline uses the numpy mask to extract the image and saves the result under out_name
  • preprocess_image : original method created by briaai
  • postprocess_image : original method created by briaai
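A minimal sketch of passing these parameters (out_name is shown at call time in the example above; I am assuming model_input_size is accepted the same way):

from transformers import pipeline

pipe = pipeline("image-segmentation", model="briaai/RMBG-1.4", revision="refs/pr/9", trust_remote_code=True)

# resize the input to 512x512 before inference, then save the extracted image
pipe("img_path", model_input_size=[512, 512], out_name="myout.png")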

@nielsr I found a workaround by pointing the architecture to a remote repo I made that holds all the custom code.
It's sort of a hack, but as long as it works we are all good.
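For context, the rewiring lives in the auto_map entry of this PR's config.json; here is a minimal sketch of inspecting it (the exact module and class names live in the remote repo and are not shown here):

from transformers import AutoConfig

# trust_remote_code is required because the architecture code is not part of transformers itself
config = AutoConfig.from_pretrained("briaai/RMBG-1.4", revision="refs/pr/9", trust_remote_code=True)

# the auto_map entry is what points AutoModelForImageSegmentation at the remote code repo,
# e.g. something of the form "not-lain/CustomCodeForRMBG--<module>.<ClassName>"
print(getattr(config, "auto_map", None))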

@mchenbria88 can I ask for a review on this one?
Also, I grant you full permission to copy, use, and change all the code I have under not-lain/CustomCodeForRMBG. If you need any help transferring the architecture as your own, do reach out on any of my social media.

not-lain changed pull request title from add custom architecture to automatically load the model with huggingface

Following section 1.3 of the license agreement, and since there has been no contact from briaai in the past couple of days, I have chosen to close this pull request and close access to my remote repo.

Note that there is currently no way to load the model using the transformers library; check out https://github.com/huggingface/transformers/issues/28919 to find out more.
The only workaround is https://github.com/huggingface/transformers/issues/28919#issuecomment-1937728036, a solution I have tailored specifically for this kind of issue.

This is a temporary problem, and it should be fixed in future versions of the transformers library. If you ever have any questions, please do not hesitate to tag me.

not-lain changed pull request status to closed
BRIA AI org

@not-lain thanks for your great contribution! I've used your work to run the model with Transformers; I will push it in a couple of days.

@OriLib I have reopened the remote repo under not-lain/CustomCodeForRMBG for public use.
If you ever have any questions, do not hesitate to tag me.

not-lain changed pull request status to open

@OriLib can we please merge the config.json? It would really help the community.

@prateekbh there is no need for that.
Thanks to @Rocketknight1 and @Cebtenzzre fixing the issue in the transformers library, briaai is now able to push and load their custom architecture easily using the transformers library.
I was actually waiting for the new release to report this, tbh XD
I will create a new pull request that does not rewire to another repo in order to work:

  • This one does rewire to not-lain/CustomCodeForRMBG to download the architecture (check the config.json file in this pull request to see how).
  • They are also right not to merge this one: rewiring the architecture to a repo they do not control would put people at risk (what if I injected some dangerous code in there?). Keeping the community safe should take priority over everything, which is why they haven't merged it so far.
  • That said, I will never inject any dangerous code, so you can keep using this pull request XD

I will write a new pull request as soon as the next release is out to automate this model without rewiring to another repo. Until then stay tuned, and stay safe 🤗
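For the curious, here is a minimal sketch of what the rewiring-free approach looks like once the fixed transformers release is out; RMBGConfig, RMBGModel, and the tiny layer inside are illustrative placeholders, not the actual RMBG-1.4 code:

import torch
from transformers import PretrainedConfig, PreTrainedModel

# note: the classes must live in their own .py files (not a notebook) so that
# push_to_hub can upload the code next to the weights
class RMBGConfig(PretrainedConfig):  # illustrative name
    model_type = "custom-rmbg"

class RMBGModel(PreTrainedModel):  # illustrative name
    config_class = RMBGConfig

    def __init__(self, config):
        super().__init__(config)
        self.layer = torch.nn.Conv2d(3, 1, kernel_size=1)  # placeholder for the real network

    def forward(self, pixel_values):
        return self.layer(pixel_values)

# register the classes so the Auto* API can resolve them from the same repo,
# then push weights and code together -- no pointer to a second repo needed
RMBGConfig.register_for_auto_class()
RMBGModel.register_for_auto_class("AutoModelForImageSegmentation")

RMBGModel(RMBGConfig()).push_to_hub("<your-username>/RMBG-with-custom-code")  # hypothetical repo id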

@prateekbh you can always use the code in this pull request even without merging.

from transformers import AutoModelForImageSegmentation
model = AutoModelForImageSegmentation.from_pretrained("briaai/RMBG-1.4", revision="refs/pr/9", trust_remote_code=True)

or

from transformers import pipeline

pipe = pipeline("image-segmentation", model="briaai/RMBG-1.4", revision="refs/pr/9", trust_remote_code=True)

numpy_mask = pipe("img_path") # outputs numpy mask

pipe("image_path",out_name="myout.png") # applies mask and saves the extracted image as `myout.png`

Check out this Space for more details on how I used the code: https://huggingface.co/spaces/not-lain/RMBG-1.4-but-with-transformers-library

@not-lain I am actually trying to use transformers.js and I'm not sure where to pass this option while doing so. Any ideas?

@prateekbh this pull request is solely about the transformers library.

For transformers.js, I recommend checking this repo for how to use it: https://huggingface.co/spaces/Xenova/remove-background-web/tree/main
For the transformers library, you can use this repo as a reference: https://huggingface.co/spaces/not-lain/RMBG-1.4-but-with-transformers-library/tree/main

Closing this since #21 solved it ✅

not-lain changed pull request status to closed
