Rembg is a tool to remove image backgrounds.
If this project has helped you, please consider making a donation.
- withoutBG API (https://withoutbg.com): high-quality background removal API at affordable rates
- PhotoRoom Remove Background API (https://photoroom.com/api): fast and accurate background remover API
If you have onnxruntime already installed, just install rembg:
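```bash
pip install rembg        # for library
pip install "rembg[cli]" # for library + CLI
```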
Otherwise, install rembg with explicit CPU/GPU support.
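For CPU support:

```bash
pip install "rembg[cpu]"     # for library
pip install "rembg[cpu,cli]" # for library + CLI
```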
For GPU support, first check whether your system supports onnxruntime-gpu.
Go to onnxruntime.ai and check the installation matrix.
If yes, just run:
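```bash
pip install "rembg[gpu]"     # for library
pip install "rembg[gpu,cli]" # for library + CLI
```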
An Nvidia GPU may require onnxruntime-gpu, CUDA, and cudnn-devel (see #668). If rembg[gpu] doesn't work and you can't install CUDA or cudnn-devel, use rembg[cpu] and onnxruntime instead.
After the installation step, you can use rembg by simply typing rembg in your terminal window.
The rembg command has 4 subcommands, one for each input type:
- i for files
- p for folders
- s for an HTTP server
- b for an RGB24 pixel binary stream
You can get help about the main command using:
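```bash
rembg --help
```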
As well as about each subcommand using:
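```bash
rembg i --help
```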
Used when input and output are files.
Remove the background from a remote image
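For example (the image URL is a placeholder):

```bash
curl -s http://input.png | rembg i > output.png
```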
Remove the background from a local file
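```bash
rembg i path/to/input.png path/to/output.png
```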
Remove the background specifying a model
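```bash
rembg i -m u2netp path/to/input.png path/to/output.png
```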
Remove the background returning only the mask
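```bash
rembg i -om path/to/input.png path/to/output.png
```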
Remove the background applying an alpha matting
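```bash
rembg i -a path/to/input.png path/to/output.png
```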
Passing extra parameters
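For example, passing a prompt to the sam model (the point coordinates are illustrative):

```bash
rembg i -m sam -x '{ "sam_prompt": [{"type": "point", "data": [724, 740], "label": 1}] }' path/to/input.png path/to/output.png
```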
Used when input and output are folders.
Remove the background from all images in a folder
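```bash
rembg p path/to/input path/to/output
```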
Same as before, but watching for new/changed files to process
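```bash
rembg p -w path/to/input path/to/output
```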
Used to start an HTTP server.
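For example, listening on port 7000 (the host and port are up to you; they just have to match the URLs below):

```bash
rembg s --host 0.0.0.0 --port 7000
```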
To see the complete endpoint documentation, go to http://localhost:7000/api.
Remove the background from an image URL
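For example (the image URL is a placeholder):

```bash
curl -s "http://localhost:7000/api/remove?url=http://input.png" -o output.png
```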
Remove the background from an uploaded image
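```bash
curl -s -F file=@/path/to/input.jpg "http://localhost:7000/api/remove" -o output.png
```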
Process a sequence of RGB24 images from stdin. This is intended to be used with another program, such as FFMPEG, that outputs RGB24 pixel data to stdout, which is piped into this program's stdin, although nothing prevents you from manually typing images into stdin.
Arguments:
- image_width : width of input image(s)
- image_height : height of input image(s)
- output_specifier: printf-style specifier for output filenames. For example, with output-%03u.png the output files will be named output-000.png, output-001.png, output-002.png, and so on. Output files are saved in PNG format regardless of the extension specified. You can omit it to write results to stdout.
Example usage with FFMPEG:
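A sketch of the pipeline, assuming the output specifier is passed via -o (the input file, seek offset, frame dimensions, and output folder are placeholders):

```bash
ffmpeg -i input.mp4 -ss 10 -an -f rawvideo -pix_fmt rgb24 pipe:1 | rembg b 1280 720 -o folder/output-%03u.png
```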
The width and height values must match the dimensions of the output images from FFMPEG. Note that for FFMPEG, the "-an -f rawvideo -pix_fmt rgb24 pipe:1" part is required for the whole thing to work.
Input and output as bytes
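```python
from rembg import remove

# Paths are placeholders: read the input bytes, remove the background, write the result
with open('input.png', 'rb') as i, open('output.png', 'wb') as o:
    o.write(remove(i.read()))
```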
Input and output as a PIL image
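```python
from PIL import Image
from rembg import remove

# Paths are placeholders
input_image = Image.open('input.png')
output_image = remove(input_image)  # a PIL image in, a PIL image out
output_image.save('output.png')
```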
Input and output as a numpy array
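```python
import cv2
from rembg import remove

# Paths are placeholders
input_array = cv2.imread('input.png')
output_array = remove(input_array)  # a numpy array in, a numpy array out
cv2.imwrite('output.png', output_array)
```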
Force output as bytes
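A minimal sketch using the force_return_bytes keyword of remove():

```python
from PIL import Image
from rembg import remove

# Paths are placeholders
input_image = Image.open('input.png')
output_bytes = remove(input_image, force_return_bytes=True)  # bytes regardless of the input type
with open('output.png', 'wb') as o:
    o.write(output_bytes)
```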
How to iterate over files in a performant way
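A minimal sketch, assuming the images live in a single folder; creating one session up front avoids reloading the model for every file:

```python
from pathlib import Path
from rembg import remove, new_session

session = new_session()  # reuse the same model session for every file

for path in Path('path/to/folder').glob('*.png'):
    output_path = path.parent / (path.stem + '.out.png')
    with open(path, 'rb') as i, open(output_path, 'wb') as o:
        o.write(remove(i.read(), session=session))
```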
To see a full list of examples on how to use rembg, go to the examples page.
Just replace the rembg command with docker run danielgatis/rembg.
Try this:
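A sketch (the folder path and file names are placeholders, and mounting the local folder as the container's /rembg working directory is an assumption, so the CLI can read the input and write the output there):

```bash
docker run -v /path/to/images:/rembg danielgatis/rembg i input.png output.png
```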
Requirement: using CUDA in Docker requires the NVIDIA Container Toolkit to be installed on your host. See the NVIDIA Container Toolkit Install Guide.
Nvidia CUDA hardware acceleration needs cudnn-devel, so you need to build the Docker image yourself (see #668).
Here is an example that shows how to build an image and name it rembg-nvidia-cuda-cudnn-gpu:
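A sketch of the build command, assuming the GPU-enabled Dockerfile referenced in #668 is the one in the build context:

```bash
docker build -t rembg-nvidia-cuda-cudnn-gpu .
```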
Be aware: it takes about 11 GB of disk space (the CPU version only takes about 1.6 GB), and models are not included.
After you build the image, run it as a CLI like this:
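For example (the --gpus all flag needs the NVIDIA Container Toolkit; the mount path and file names are placeholders):

```bash
docker run --gpus all -v /path/to/images:/rembg rembg-nvidia-cuda-cudnn-gpu i input.png output.png
```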
- Trick 1: you can also build your own nvidia-cuda-cudnn-gpu image and install rembg[gpu,cli] in it.
- Trick 2: pass -v /somewhereYouStoresModelFiles/:/root/.u2net to download/store model files outside of the Docker image. You can even comment out the line RUN rembg d u2net so that no models are downloaded when building the image, letting you download only the specific model you want instead of the default u2net model.
All models are downloaded and saved in the user home folder in the .u2net directory.
The available models are:
- u2net (download, source): A pre-trained model for general use cases.
- u2netp (download, source): A lightweight version of u2net model.
- u2net_human_seg (download, source): A pre-trained model for human segmentation.
- u2net_cloth_seg (download, source): A pre-trained model for clothes parsing from human portraits. Clothes are parsed into 3 categories: upper body, lower body, and full body.
- silueta (download, source): Same as u2net, but the size is reduced to 43 MB.
- isnet-general-use (download, source): A new pre-trained model for general use cases.
- isnet-anime (download, source): A high-accuracy segmentation model for anime characters.
- sam (download encoder, download decoder, source): A pre-trained model for any use case.
- birefnet-general (download, source): A pre-trained model for general use cases.
- birefnet-general-lite (download, source): A light pre-trained model for general use cases.
- birefnet-portrait (download, source): A pre-trained model for human portraits.
- birefnet-dis (download, source): A pre-trained model for dichotomous image segmentation (DIS).
- birefnet-hrsod (download, source): A pre-trained model for high-resolution salient object detection (HRSOD).
- birefnet-cod (download, source): A pre-trained model for concealed object detection (COD).
- birefnet-massive (download, source): A pre-trained model trained on a massive dataset.
If you need more fine-tuned models, try this: #193 (comment)
- https://www.youtube.com/watch?v=3xqwpXjxyMQ
- https://www.youtube.com/watch?v=dFKRGXdkGJU
- https://www.youtube.com/watch?v=Ai-BS_T7yjE
- https://www.youtube.com/watch?v=D7W-C0urVcQ
- https://arxiv.org/pdf/2005.09007.pdf
- https://github.com/NathanUA/U-2-Net
- https://github.com/pymatting/pymatting
This library directly depends on the onnxruntime library. Therefore, we can only update the Python version when onnxruntime provides support for that specific version.
Liked some of my work? Buy me a coffee (or more likely a beer)
Copyright (c) 2020-present Daniel Gatis
Licensed under MIT License