
Inpaint Anything: Segment Anything Meets Image Inpainting

Inpaint Anything can inpaint anything in images, videos and 3D scenes!

  • Authors: Tao Yu, Runseng Feng, Ruoyu Feng, Jinming Liu, Xin Jin, Wenjun Zeng and Zhibo Chen.
  • Institutes: University of Science and Technology of China; Eastern Institute for Advanced Study.
  • [Paper] [Website] [Hugging Face Homepage]

TL;DR: Users can select any object in an image by clicking on it. With powerful vision models, e.g., SAM, LaMa and Stable Diffusion (SD), Inpaint Anything is able to remove the object smoothly (i.e., Remove Anything). Further, prompted by user input text, Inpaint Anything can fill the object with any desired content (i.e., Fill Anything) or arbitrarily replace its background (i.e., Replace Anything).

📜 News

[2023/9/15] Remove Anything 3D code is available!
[2023/4/30] Remove Anything Video available! You can remove any object from a video!
[2023/4/24] Local web UI supported! You can run the demo website locally!
[2023/4/22] Website available! You can experience Inpaint Anything through the interface!
[2023/4/22] Remove Anything 3D available! You can remove any 3D object from a 3D scene!
[2023/4/13] Technical report on arXiv available!

🌟 Features

💡 Highlights

  • Any aspect ratio supported
  • 2K resolution supported
  • Technical report on arXiv available (🔥NEW)
  • Website available (🔥NEW)
  • Local web UI available (🔥NEW)
  • Multiple modalities (i.e., image, video and 3D scene) supported (🔥NEW)

📌 Remove Anything


Click on an object in the image, and Inpaint Anything will remove it instantly!

Installation

Requires python>=3.8

python -m pip install torch torchvision torchaudio
python -m pip install -e segment_anything
python -m pip install -r lama/requirements.txt 

On Windows, we recommend first installing miniconda and opening Anaconda Powershell Prompt (miniconda3) as administrator. Then run python -m pip install -r lama_requirements_windows.txt instead of installing lama/requirements.txt.

Usage

Download the model checkpoints provided by Segment Anything and LaMa (e.g., sam_vit_h_4b8939.pth and big-lama), and put them into ./pretrained_models. For simplicity, you can also go here, download the pretrained_models directory directly, and put it into ./ to get ./pretrained_models.
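With the default checkpoint names, the layout ends up roughly as below (a sketch; the big-lama subdirectory follows the downloaded archive):

./pretrained_models
├── sam_vit_h_4b8939.pth
└── big-lama
    ├── config.yaml
    └── models
        └── best.ckpt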

For MobileSAM, set sam_model_type to "vit_t" and sam_ckpt to "./weights/mobile_sam.pt". For details, please refer to the MobileSAM project.
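For example, switching the Remove Anything command below to MobileSAM only swaps the two SAM flags (a sketch; all other arguments stay unchanged):

python remove_anything.py \
    --sam_model_type "vit_t" \
    --sam_ckpt ./weights/mobile_sam.pt \
    ...  # remaining arguments as in the full command below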

bash script/remove_anything.sh

Specify an image and a point, and Remove Anything will remove the object at the point.

python remove_anything.py \
    --input_img ./example/remove-anything/dog.jpg \
    --coords_type key_in \
    --point_coords 200 450 \
    --point_labels 1 \
    --dilate_kernel_size 15 \
    --output_dir ./results \
    --sam_model_type "vit_h" \
    --sam_ckpt ./pretrained_models/sam_vit_h_4b8939.pth \
    --lama_config ./lama/configs/prediction/default.yaml \
    --lama_ckpt ./pretrained_models/big-lama

You can change --coords_type key_in to --coords_type click if your machine has a display device. If click is set, the image will be displayed after running the above command. (1) Left-click to record the coordinates of the click; points can be modified, and only the last point's coordinates are kept. (2) Right-click to finish the selection.
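For reference, here is a minimal sketch of how such click collection can be implemented with matplotlib (a hypothetical helper, not the project's exact code):

import matplotlib.pyplot as plt

def get_click_coords(img):
    """Left-click records a point (only the last one is kept); right-click finishes."""
    coords = []
    fig, ax = plt.subplots()
    ax.imshow(img)

    def on_click(event):
        if event.button == 1 and event.xdata is not None:  # left click: record/overwrite
            coords[:] = [(int(event.xdata), int(event.ydata))]
        elif event.button == 3:  # right click: finish the selection
            plt.close(fig)

    fig.canvas.mpl_connect("button_press_event", on_click)
    plt.show()
    return coords[0] if coords else None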

Demo

📌 Fill Anything

Text prompt: "a teddy bear on a bench"


Click on an object, type in what you want to fill, and Inpaint Anything will fill it!

  • Click on an object;
  • SAM segments the object out;
  • Input a text prompt;
  • Text-prompt-guided inpainting models (e.g., Stable Diffusion) fill the "hole" according to the text (see the sketch below).
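The fill step maps naturally onto the diffusers inpainting API. A minimal sketch (the mask path and fp16/GPU settings are assumptions; the model ID matches the SD 2 inpainting checkpoint the scripts download):

import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Load a Stable Diffusion inpainting pipeline.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("./example/fill-anything/sample1.png").convert("RGB")
mask = Image.open("mask.png").convert("L")  # hypothetical mask: white = SAM-segmented object

# The pipeline regenerates the white (masked) region according to the prompt.
filled = pipe(prompt="a teddy bear on a bench", image=image, mask_image=mask).images[0]
filled.save("filled.png")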

Installation

Requires python>=3.8

python -m pip install torch torchvision torchaudio
python -m pip install -e segment_anything
python -m pip install diffusers transformers accelerate scipy safetensors

Usage

Download the model checkpoints provided by Segment Anything (e.g., sam_vit_h_4b8939.pth) and put them into ./pretrained_models. For simplicity, you can also go here, download the pretrained_models directory directly, and put it into ./ to get ./pretrained_models.

For MobileSAM, set sam_model_type to "vit_t" and sam_ckpt to "./weights/mobile_sam.pt". For details, please refer to the MobileSAM project.

bash script/fill_anything.sh

Specify an image, a point and a text prompt, then run:

python fill_anything.py \
    --input_img ./example/fill-anything/sample1.png \
    --coords_type key_in \
    --point_coords 750 500 \
    --point_labels 1 \
    --text_prompt "a teddy bear on a bench" \
    --dilate_kernel_size 50 \
    --output_dir ./results \
    --sam_model_type "vit_h" \
    --sam_ckpt ./pretrained_models/sam_vit_h_4b8939.pth

Demo

Text prompt: "a camera lens in the hand"
Text prompt: "a Picasso painting on the wall"
Text prompt: "an aircraft carrier on the sea"
Text prompt: "a sports car on a road"

📌 Replace Anything

Text prompt: "a man in office"


Click on an object, type in the new background you want, and Inpaint Anything will replace the background!

  • Click on an object;
  • SAM segments the object out;
  • Input a text prompt;
  • Text-prompt-guided inpainting models (e.g., Stable Diffusion) replace the background according to the text (see the mask-inversion sketch below).
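The key difference from Fill Anything is the mask: it is inverted, so the object is kept and everything around it is regenerated. A minimal sketch (assuming an 8-bit mask where white marks the object):

import numpy as np
from PIL import Image

obj_mask = np.array(Image.open("mask.png").convert("L"))  # hypothetical mask: white = object
bg_mask = Image.fromarray(255 - obj_mask)                 # inverted: white = background

# Pass bg_mask as mask_image to the same inpainting pipeline as in Fill Anything;
# the prompt now describes the desired background, e.g. "a man in office".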

Installation

Requires python>=3.8

python -m pip install torch torchvision torchaudio
python -m pip install -e segment_anything
python -m pip install diffusers transformers accelerate scipy safetensors

Usage

Download the model checkpoints provided by Segment Anything (e.g., sam_vit_h_4b8939.pth) and put them into ./pretrained_models. For simplicity, you can also go here, download the pretrained_models directory directly, and put it into ./ to get ./pretrained_models.

For MobileSAM, set sam_model_type to "vit_t" and sam_ckpt to "./weights/mobile_sam.pt". For details, please refer to the MobileSAM project.

bash script/replace_anything.sh

Specify an image, a point and a text prompt, then run:

python replace_anything.py \
    --input_img ./example/replace-anything/dog.png \
    --coords_type key_in \
    --point_coords 750 500 \
    --point_labels 1 \
    --text_prompt "sit on the swing" \
    --output_dir ./results \
    --sam_model_type "vit_h" \
    --sam_ckpt ./pretrained_models/sam_vit_h_4b8939.pth

Demo

Text prompt: "sit on the swing"
Text prompt: "a bus, on the center of a country road, summer"
Text prompt: "breakfast"
Text prompt: "crossroad in the city"

📌 Remove Anything 3D

With a single click on an object in the first source view, Remove Anything 3D can remove the object from the whole scene!

  • Click on an object in the first source view;
  • SAM segments the object out (with three possible masks);
  • Select one mask;
  • A tracking model such as OSTrack is utilized to track the object across the source views;
  • SAM segments the object out in each source view according to the tracking results;
  • An inpainting model such as LaMa is utilized to inpaint the object in each source view;
  • A novel view synthesis model such as NeRF is utilized to synthesize novel views of the scene without the object (sketched below).
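In pseudocode, the per-view part of the pipeline looks roughly like this (a sketch with hypothetical helper names standing in for the SAM, OSTrack and LaMa calls):

# Hypothetical helpers; not the project's actual function names.
masks = sam_predict(views[0], point_coords)    # three candidate masks for the first view
mask = masks[mask_idx]                         # the user-selected mask

inpainted_views = []
for view in views:
    box = track(view, target=mask)             # OSTrack follows the object across views
    view_mask = sam_predict(view, box=box)     # SAM re-segments using the tracked box
    inpainted_views.append(lama_inpaint(view, view_mask))

# Finally, a NeRF is trained on the inpainted views to synthesize
# object-free novel views of the scene.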

Installation

Requires python>=3.8

python -m pip install torch torchvision torchaudio
python -m pip install -e segment_anything
python -m pip install -r lama/requirements.txt
python -m pip install jpeg4py lmdb

Usage

Download the model checkpoints provided by Segment Anything and LaMa (e.g., sam_vit_h_4b8939.pth and big-lama), and put them into ./pretrained_models. Further, download the OSTrack pretrained model from here (e.g., vitb_384_mae_ce_32x4_ep300.pth) and put it into ./pytracking/pretrain. In addition, download nerf_llff_data (e.g., horns) and put it into ./example/3d. For simplicity, you can also go here, download the pretrained_models directory directly, and put it into ./ to get ./pretrained_models. Additionally, download pretrain and put the directory into ./pytracking to get ./pytracking/pretrain.
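With the defaults above, the expected layout is roughly (a sketch; names follow the example command below):

./pretrained_models/sam_vit_h_4b8939.pth
./pretrained_models/big-lama
./pytracking/pretrain/vitb_384_mae_ce_32x4_ep300.pth
./example/3d/horns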

For MobileSAM, set sam_model_type to "vit_t" and sam_ckpt to "./weights/mobile_sam.pt". For details, please refer to the MobileSAM project.

bash script/remove_anything_3d.sh

Specify a 3D scene, a point, a scene config and a mask index (indicating which mask result of the first view to use), and Remove Anything 3D will remove the object from the whole scene.

python remove_anything_3d.py \
      --input_dir ./example/3d/horns \
      --coords_type key_in \
      --point_coords 830 405 \
      --point_labels 1 \
      --dilate_kernel_size 15 \
      --output_dir ./results \
      --sam_model_type "vit_h" \
      --sam_ckpt ./pretrained_models/sam_vit_h_4b8939.pth \
      --lama_config ./lama/configs/prediction/default.yaml \
      --lama_ckpt ./pretrained_models/big-lama \
      --tracker_ckpt vitb_384_mae_ce_32x4_ep300 \
      --mask_idx 1 \
      --config ./nerf/configs/horns.txt \
      --expname horns

The --mask_idx is usually set to 1, which typically corresponds to the most confident mask for the first view. If the object is not segmented out well, try the other masks (0 or 2).
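The three masks come from SAM's multi-mask output. Picking the most confident one programmatically would look like this (a sketch using the segment_anything SamPredictor API; the predictor setup is assumed):

import numpy as np

# predictor is a SamPredictor with the first view already set via set_image().
masks, scores, logits = predictor.predict(
    point_coords=np.array([[830, 405]]),
    point_labels=np.array([1]),
    multimask_output=True,   # return three candidate masks with confidence scores
)
best_idx = int(np.argmax(scores))  # the programmatic analogue of choosing --mask_idx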

📌 Remove Anything Video

With a single click on an object in the first video frame, Remove Anything Video can remove the object from the whole video!

  • Click on an object in the first frame of a video;
  • SAM segments the object out (with three possible masks);
  • Select one mask;
  • A tracking model such as OSTrack is utilized to track the object in the video;
  • SAM segments the object out in each frame according to the tracking results;
  • A video inpainting model such as STTN is utilized to inpaint the object in each frame.

Installation

Requires python>=3.8

python -m pip install torch torchvision torchaudio
python -m pip install -e segment_anything
python -m pip install -r lama/requirements.txt
python -m pip install jpeg4py lmdb

Usage

Download the model checkpoints provided by Segment Anything and STTN (e.g., sam_vit_h_4b8939.pth and sttn.pth), and put them into ./pretrained_models. Further, download the OSTrack pretrained model from here (e.g., vitb_384_mae_ce_32x4_ep300.pth) and put it into ./pytracking/pretrain. For simplicity, you can also go here, download the pretrained_models directory directly, and put it into ./ to get ./pretrained_models. Additionally, download pretrain and put the directory into ./pytracking to get ./pytracking/pretrain.

For MobileSAM, set sam_model_type to "vit_t" and sam_ckpt to "./weights/mobile_sam.pt". For details, please refer to the MobileSAM project.

bash script/remove_anything_video.sh

Specify a video, a point, the video FPS and a mask index (indicating which mask result of the first frame to use), and Remove Anything Video will remove the object from the whole video.

python remove_anything_video.py \
    --input_video ./example/video/paragliding/original_video.mp4 \
    --coords_type key_in \
    --point_coords 652 162 \
    --point_labels 1 \
    --dilate_kernel_size 15 \
    --output_dir ./results \
    --sam_model_type "vit_h" \
    --sam_ckpt ./pretrained_models/sam_vit_h_4b8939.pth \
    --lama_config lama/configs/prediction/default.yaml \
    --lama_ckpt ./pretrained_models/big-lama \
    --tracker_ckpt vitb_384_mae_ce_32x4_ep300 \
    --vi_ckpt ./pretrained_models/sttn.pth \
    --mask_idx 2 \
    --fps 25

The --mask_idx is usually set to 2, which typically corresponds to the most confident mask for the first frame. If the object is not segmented out well, try the other masks (0 or 1).

Demo

Acknowledgments

Other Interesting Repositories

Citation

If you find this work useful for your research, please cite us:

@article{yu2023inpaint,
  title={Inpaint Anything: Segment Anything Meets Image Inpainting},
  author={Yu, Tao and Feng, Runseng and Feng, Ruoyu and Liu, Jinming and Jin, Xin and Zeng, Wenjun and Chen, Zhibo},
  journal={arXiv preprint arXiv:2304.06790},
  year={2023}
}

Star History Chart


Contributors

advaypal, chchnii, dlwangsan, florinshen, geekyutao, hllj, jinx-ustc, jmliu206, qiaoyu1002, ruoyufeng, rysonfeng


Issues

Expected running time for Remove Anything operation

I tried running the remove_anything command as described in the README, and it took 4m 30s to finish. I ran this on Colab without a GPU. Is this the expected timeframe, or am I doing something wrong?

Thanks

Separate all segments?

This is not an error, more a question: is it somehow possible to segment every object in the image, fill the holes, and save the segmentations as separate PNGs?

Lama Refine

Hello, I don't understand why the LaMa refine step was removed?

python -m pip install lama_requirements_windows.txt

python -m pip install lama_requirements_windows.txt
ERROR: Could not find a version that satisfies the requirement lama_requirements_windows.txt (from versions: none)
ERROR: No matching distribution found for lama_requirements_windows.txt

How to enter multiple coordinates?

Hi, I have a question, how to enter multiple coordinates?

python remove_anything.py \
    --input_img ~/input/test1.png \
    --point_coords 58 127 778 127 58 366 778 366 \
    --point_labels 4 \
    --dilate_kernel_size 15 \
    --output_dir ~/output \
    --sam_model_type "vit_h" \
    --sam_ckpt ~/sam/sam_vit_h_4b8939.pth \
    --lama_config ./lama/configs/prediction/default.yaml \
    --lama_ckpt big-lama

Traceback (most recent call last):
  File "/root/Inpaint-Anything/remove_anything.py", line 75, in <module>
    masks, _, _ = predict_masks_with_sam(
  File "/root/Inpaint-Anything/sam_segment.py", line 28, in predict_masks_with_sam
    masks, scores, logits = predictor.predict(
  File "/root/Inpaint-Anything/segment_anything/segment_anything/predictor.py", line 154, in predict
    masks, iou_predictions, low_res_masks = self.predict_torch(
  File "/root/miniconda3/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/root/Inpaint-Anything/segment_anything/segment_anything/predictor.py", line 222, in predict_torch
    sparse_embeddings, dense_embeddings = self.model.prompt_encoder(
  File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/Inpaint-Anything/segment_anything/segment_anything/modeling/prompt_encoder.py", line 155, in forward
    point_embeddings = self._embed_points(coords, labels, pad=(boxes is None))
  File "/root/Inpaint-Anything/segment_anything/segment_anything/modeling/prompt_encoder.py", line 84, in _embed_points
    points = torch.cat([points, padding_point], dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 8 but got size 2 for tensor number 1 in the list.

Demo

is there a hugging face for this?

FileNotFoundError: [Errno 2] No such file or directory: 'sam_vit_h_4b8939.pth' run error

I am trying to run Fill Anything and get the execution error below. Please help me with a solution.

Traceback (most recent call last):
  File "fill_anything.py", line 79, in <module>
    masks, _, _ = predict_masks_with_sam(
  File "/home/haga/Inpaint-Anything/sam_segment.py", line 23, in predict_masks_with_sam
    sam = sam_model_registry[model_type](...)
  File "/home/haga/Inpaint-Anything/segment_anything/segment_anything/build_sam.py", line 15, in build_sam_vit_h
    return _build_sam(
  File "/home/haga/Inpaint-Anything/segment_anything/segment_anything/build_sam.py", line 104, in _build_sam
    with open(checkpoint, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'sam_vit_h_4b8939.pth'

Does not install

I still have the same problem with the installation of the software. Tried what you suggested. Same error:

D:\Inpaint-Anything>python -m pip install -e segment_anything
Defaulting to user installation because normal site-packages is not writeable
ERROR: segment_anything is not a valid editable requirement. It should either be a path to a local project or a VCS URL (beginning with bzr+http, bzr+https, bzr+ssh, bzr+sftp, bzr+ftp, bzr+lp, bzr+file, git+http, git+https, git+ssh, git+git, git+file, hg+file, hg+http, hg+https, hg+ssh, hg+static-http, svn+ssh, svn+http, svn+https, svn+svn, svn+file).

D:\Inpaint-Anything>

Try Inpaint Anything in A1111 Stable Diffusion WebUI

An issue inside my extension redirected me here. Many people might be excited about this work but have no good user interface. If you use A1111 SD-WebUI, my SAM extension + Mikubill's ControlNet extension are all you need to try inpaint-anything inside a good and easy-to-use UI. Check the README and this tutorial for how to use them. You do not need to download a huge general-purpose inpainting model: a base model + lllyasviel's ControlNet inpainting model + SAM are all you need.

Does anyone know the list of models for all the features?

When I test fill_anything I get:

1. vae\diffusion_pytorch_model.safetensors not found
2. text_encoder\model.safetensors not found

I want to know the download URLs of all the models for these features:
[Remove Anything]
[Fill Anything]
[Replace Anything]
[Remove Anything 3D]
[Remove Anything Video]

Thank you!

Aborted (core dumped)

Although remove_anything.py works fine for me, when I try other features such as fill_anything.py or replace_anything.py, I'm stuck with the error below:

malloc(): invalid next->prev_inuse (unsorted)
Aborted (core dumped)

Apparently this error is generally displayed when running C++ code.

The Gradio Web UI segment mask shows the last image not current image

This is an awesome segmentation project. I tried your new web UI, but I got a wrong result.

First I load the baseball image, click it, and obtain the result. Then I reset the web UI and load the dog image. When I click the dog image and click "predict mask using SAM", the segmentation result is for the last image, not the current one.

Run remove_anything.py error

When I run your demo remove_anything.py, it returns this error:

ImportError: cannot import name 'SamPredictor' from 'segment_anything' (unknown location)

The error occurs in from segment_anything import SamPredictor, sam_model_registry. I find the SamPredictor class is in predictor.py, not segment_anything.py. Does anyone else have this problem?

My command is:

python ./remove_anything.py --input_img ./example/remove-anything/dog.jpg --point_coords 200 450 --point_labels 1 --dilate_kernel_size 15 --output_dir ./results --sam_model_type "vit_h" --sam_ckpt ./sam_vit_h_4b8939.pth --lama_config ./lama/configs/prediction/default.yaml --lama_ckpt ./pretrained_models/big-lama

I followed the README steps and installed successfully, but an error occurred when executing the remove_anything.py file.

Hello, I have a problem here. I followed the README step by step and the installation succeeded, but executing remove_anything.py at the end reports an error:

Detectron v2 is not installed
Traceback (most recent call last):
  File "D:\AI\Inpaint-Anything\remove_anything.py", line 75, in <module>
    masks, _, _ = predict_masks_with_sam(
  File "D:\AI\Inpaint-Anything\sam_segment.py", line 24, in predict_masks_with_sam
    sam.to(device=device)
  File "C:\Users\xxx\PycharmProjects\pythonProject2\venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "C:\Users\xxx\PycharmProjects\pythonProject2\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "C:\Users\xxx\PycharmProjects\pythonProject2\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "C:\Users\xxx\PycharmProjects\pythonProject2\venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "C:\Users\xxx\PycharmProjects\pythonProject2\venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "C:\Users\xxx\PycharmProjects\pythonProject2\venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "C:\Users\xxx\PycharmProjects\pythonProject2\venv\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

How to turn the webUI demo app into a Linux service

I have written a simple service unit, but it does not work; the error is shown in the attached screenshot. The service unit is as follows:

[Unit]
Description=inpaint_anything_service
After=network.target syslog.target
Wants=network.target

[Service]
Type=simple
Restart=on-failure
RestartSec=5s
#Environment="PATH=/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/mambaforge/envs/anything/bin:/opt/Inpaint-Anything/Inpaint-Anything"
ExecStart=/opt/mambaforge/envs/anything/bin/python /opt/Inpaint-Anything/Inpaint-Anything/app/app.py
ExecReload=/opt/mambaforge/envs/anything/bin/python /opt/Inpaint-Anything/Inpaint-Anything/app/app.py
ExecStop=/bin/kill $MAINPID
User=root
Group=root

[Install]
WantedBy=multi-user.target

OS: RedHat 9.1 x64

Of course, running "python app.py" in the app folder starts the web UI normally.

Does Inpaint-Anything not support MacBook Pro with Intel CPU?

When I run the command:

python3 -m pip install -r lama/requirements.txt

the system displays an error message saying that the clang compiler does not support '-march=native'.

Does Inpaint-Anything not support a MacBook Pro with an Intel CPU? My MacBook Pro has a quad-core Intel Core i5 at 2 GHz, and the macOS version is 13.1.

error.log

Run app.py Error in app folder

ENV:
windows 11
python 3.11.3

log:

C:\Users\Jarvis\Desktop\Project\AI\Inpaint-Anything\app>python app.py --lama_config ./lama/configs/prediction/default.yaml --lama_ckpt ./pretrained_models/big-lama --sam_ckpt ./pretrained_models/sam_vit_h_4b8939.pth
Traceback (most recent call last):
  File "C:\Users\Jarvis\Desktop\Project\AI\Inpaint-Anything\app\app.py", line 13, in <module>
    from lama_inpaint import inpaint_img_with_lama, build_lama_model, inpaint_img_with_builded_lama
  File "C:\Users\Jarvis\Desktop\Project\AI\Inpaint-Anything\lama_inpaint.py", line 19, in <module>
    from saicinpainting.evaluation.utils import move_to_device
  File "C:\Users\Jarvis\Desktop\Project\AI\Inpaint-Anything\lama\saicinpainting\evaluation\__init__.py", line 6, in <module>
    from saicinpainting.evaluation.losses.base_loss import SSIMScore, LPIPSScore, FIDScore
  File "C:\Users\Jarvis\Desktop\Project\AI\Inpaint-Anything\lama\saicinpainting\evaluation\losses\base_loss.py", line 15, in <module>
    from .lpips import PerceptualLoss
  File "C:\Users\Jarvis\Desktop\Project\AI\Inpaint-Anything\lama\saicinpainting\evaluation\losses\lpips.py", line 15, in <module>
    from saicinpainting.utils import get_shape
  File "C:\Users\Jarvis\Desktop\Project\AI\Inpaint-Anything\lama\saicinpainting\utils.py", line 12, in <module>
    from pytorch_lightning import seed_everything
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\pytorch_lightning\__init__.py", line 28, in <module>
    from pytorch_lightning import metrics  # noqa: E402
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\pytorch_lightning\metrics\__init__.py", line 14, in <module>
    from pytorch_lightning.metrics.classification import (  # noqa: F401
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\pytorch_lightning\metrics\classification\__init__.py", line 14, in <module>
    from pytorch_lightning.metrics.classification.accuracy import Accuracy  # noqa: F401
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\pytorch_lightning\metrics\classification\accuracy.py", line 18, in <module>
    from pytorch_lightning.metrics.functional.accuracy import _accuracy_compute, _accuracy_update
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\pytorch_lightning\metrics\functional\__init__.py", line 14, in <module>
    from pytorch_lightning.metrics.functional.accuracy import accuracy  # noqa: F401
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\pytorch_lightning\metrics\functional\accuracy.py", line 18, in <module>
    from pytorch_lightning.metrics.classification.helpers import _input_format_classification, DataType
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\pytorch_lightning\metrics\classification\helpers.py", line 19, in <module>
    from pytorch_lightning.metrics.utils import select_topk, to_onehot
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\pytorch_lightning\metrics\utils.py", line 18, in <module>
    from pytorch_lightning.utilities import rank_zero_warn
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\pytorch_lightning\utilities\__init__.py", line 18, in <module>
    from pytorch_lightning.utilities.apply_func import move_data_to_device  # noqa: F401
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\pytorch_lightning\utilities\apply_func.py", line 25, in <module>
    from pytorch_lightning.utilities.imports import _compare_version, _TORCHTEXT_AVAILABLE
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\pytorch_lightning\utilities\imports.py", line 76, in <module>
    _HYDRA_EXPERIMENTAL_AVAILABLE = _module_available("hydra.experimental")
                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\pytorch_lightning\utilities\imports.py", line 35, in _module_available
    return find_spec(module_path) is not None
           ^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib.util>", line 94, in find_spec
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\hydra\__init__.py", line 5, in <module>
    from hydra import utils
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\hydra\utils.py", line 8, in <module>
    import hydra._internal.instantiate._instantiate2
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\hydra\_internal\instantiate\_instantiate2.py", line 11, in <module>
    from hydra._internal.utils import _locate
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\hydra\_internal\utils.py", line 17, in <module>
    from hydra.core.utils import get_valid_filename, validate_config_path
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\hydra\core\utils.py", line 19, in <module>
    from hydra.core.hydra_config import HydraConfig
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\hydra\core\hydra_config.py", line 6, in <module>
    from hydra.conf import HydraConf
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\hydra\conf\__init__.py", line 45, in <module>
    class JobConf:
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\site-packages\hydra\conf\__init__.py", line 70, in JobConf
    @dataclass
     ^^^^^^^^^
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\dataclasses.py", line 1223, in dataclass
    return wrap(cls)
           ^^^^^^^^^
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\dataclasses.py", line 1213, in wrap
    return _process_class(cls, init, repr, eq, order, unsafe_hash,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\dataclasses.py", line 958, in _process_class
    cls_fields.append(_get_field(cls, name, type, kw_only))
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Jarvis\AppData\Local\Programs\Python\Python311\Lib\dataclasses.py", line 815, in _get_field
    raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory

Help: how to erase the shadow generated by inpainting?

desc

I have a tourist image, and I used Inpaint-Anything to remove one tourist. The removal works well, but it leaves a shadow behind. Could you please tell me how to erase the shadow, so it looks like the lady took the picture by herself?

raw image


inpainting image


The pictures are all from the Internet. If there is any infringement, please contact me to delete them. Thank you.

Detectron v2 is not installed

I get this message and the program pauses. I cannot see any way to continue. How can I run the code?

I’m having a problem, can someone help me?

File "F:\Inpaint-Anything\fill_anything.py", line 126, in
img_filled = fill_img_with_sd(
File "F:\Inpaint-Anything\stable_diffusion_inpaint.py", line 21, in fill_img_with_sd
pipe = StableDiffusionInpaintPipeline.from_pretrained(
File "F:\miniConda\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1008, in from_pretrained
loaded_sub_model = load_sub_model(
File "F:\miniConda\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 444, in load_sub_model
loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
File "F:\miniConda\lib\site-packages\diffusers\models\modeling_utils.py", line 491, in from_pretrained
config, unused_kwargs, commit_hash = cls.load_config(
File "F:\miniConda\lib\site-packages\diffusers\configuration_utils.py", line 349, in load_config
raise EnvironmentError(
OSError: Error no file named config.json found in directory C:\Users\zhou.cache\huggingface\hub\models--stabilityai--stable-diffusion-2-inpainting\snapshots\781cb3e2113c1932245692810716dfd27e355ab6\vae.

Run app.py error: cannot import name 'build_lama_model' from 'lama_inpaint'

DESC

Hi, I was so excited by your latest feature, the local web UI, that I quickly ran app.py for a test. However, I get this error: cannot import name 'build_lama_model' from 'lama_inpaint'.
Please help me solve the problem, or give me more information about it. Thanks!

COMMAND

python app.py \
>       --lama_config  /home/lcc/github.com/Inpaint-Anything/pretrained_models/big-lama/config.yaml \
>       --lama_ckpt  /home/lcc/github.com/Inpaint-Anything/pretrained_models/big-lama/models/best.ckpt \
>       --sam_ckpt /home/lcc/github.com/Inpaint-Anything/pretrained_models/sam_vit_h_4b8939.pth

ERROR

Detectron v2 is not installed
Traceback (most recent call last):
  File "/home/lcc/github.com/Inpaint-Anything/app/app.py", line 13, in <module>
    from lama_inpaint import inpaint_img_with_lama, build_lama_model, inpaint_img_with_builded_lama
ImportError: cannot import name 'build_lama_model' from 'lama_inpaint' (/home/lcc/github.com/Inpaint-Anything/lama_inpaint.py)

MY ENV

centos7
GCC9.5
python3.9
cuda 11.3
torch 1.12.1

How to get accurate point_coords?

I want to run inpainting on my own data. How do I obtain the --point_coords parameter for the object to remove?

When I run it, there is no interactive SAM interface. Is that normal? How do I enable the interactive SAM interface to obtain --point_coords?

Thanks!!

Python complains when I attempt to install segment anything

Here is the error message:

ERROR: segment_anything is not a valid editable requirement. It should either be a path to a local project or a VCS URL (beginning with bzr+http, bzr+https, bzr+ssh, bzr+sftp, bzr+ftp, bzr+lp, bzr+file, git+http, git+https, git+ssh, git+git, git+file, hg+file, hg+http, hg+https, hg+ssh, hg+static-http, svn+ssh, svn+http, svn+https, svn+svn, svn+file).

I must be doing something wrong. Please tell me what it is. Thank you.

Do you have a simple interface to use these?

I would like to make a tutorial video on my channel SECourses, but I don't see a Gradio interface or anything like it, as in your examples.

If you don't have one, can you release the interface you used to generate the example images on the main page, the one that shows the masked/segmented content after clicking?

Question: Remove Anything uses only LaMa

Good Morning

Thank you for your contribution. I was wondering why you used only LaMa for "Remove Anything". Did you find it better suited than Stable Diffusion inpainting?

An error occurred at runtime

2023-04-19 14:17:53.241788: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-04-19 14:17:53.291639: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-19 14:17:54.054049: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Traceback (most recent call last):
  File "replace_anything.py", line 77, in <module>
    masks, _, _ = predict_masks_with_sam(
  File "/home/Inpaint-Anything/sam_segment.py", line 23, in predict_masks_with_sam
    sam = sam_model_registry[model_type](...)
  File "/home/Inpaint-Anything/segment_anything/segment_anything/build_sam.py", line 15, in build_sam_vit_h
    return _build_sam(
  File "/home/Inpaint-Anything/segment_anything/segment_anything/build_sam.py", line 105, in _build_sam
    state_dict = torch.load(f)
  File "/opt/miniconda3/envs/Inpainting/lib/python3.8/site-packages/torch/serialization.py", line 809, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/opt/miniconda3/envs/Inpainting/lib/python3.8/site-packages/torch/serialization.py", line 1172, in _load
    result = unpickler.load()
  File "/opt/miniconda3/envs/Inpainting/lib/python3.8/site-packages/torch/serialization.py", line 1142, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "/opt/miniconda3/envs/Inpainting/lib/python3.8/site-packages/torch/serialization.py", line 1112, in load_tensor
    storage = zip_file.get_storage_from_record(name, numel, torch.UntypedStorage)._typed_storage()._untyped_storage
RuntimeError: PytorchStreamReader failed reading file data/11: invalid header or archive is corrupted

How should I solve this? One more question: for big-lama, should both files from the link be placed in pretrained_models?

How can you crop the painted/replaced area?

Hello! Great work, this library is amazing so far!

I am wondering how we can crop the area segmented by SAM, inpaint it with SD, and expose that as an image on its own. The output I'm trying to get is a mask of a point selection, inpainted, with everything else in the image transparent or black.

Would anyone know how to do that? Maybe I'm missing something and SAM already exposes the mask as black/white images? I feel like everything is there and it's just a matter of piping things together properly; any help would be appreciated!

Solve Prompt Matters

Hi, thank you for the good paper and project.
I saw the issue of prompts mentioned in the research paper. How about developing a model that handles complex sentences by logically decomposing them, as Visual ChatGPT does?
Specifically, it could solve this by decomposition: breaking complex sentences into shorter sentences and processing them in order.


  • The referenced image is from the Visual ChatGPT paper, section 3.1, Prompt Managing of System Principles M(P)

How to remove objects automatically?

Hello, and thank you for your great work.
I would like to ask: if I want to remove a certain kind of object automatically, for example cars, could there be a single command, such as remove car, that removes all the cars?
Your current version uses manual clicking; I wonder whether this could be automated.
Thanks.

Error: mutable default for field override_dirname is not allowed

I am getting an error when I try to run the first example:

python remove_anything.py --input_img ./example/remove-anything/dog.jpg --point_coords 200 450 --point_labels 1 --dilate_kernel_size 15 --output_dir ./results --sam_model_type "vit_h" --sam_ckpt ./pretrained_models/sam_vit_h_4b8939.pth --lama_config ./lama/configs/prediction/default.yaml --lama_ckpt ./pretrained_models/big-lama

I am running on Windows 10 - 64 with Python 3.11

I had to update the lama requirements to use the current version of scikit-image ( 0.20.0 ) as the version called for ( 0.17.2 ) would not compile whereas the current version compiles just fine.

Here is the full error output:

Traceback (most recent call last):
  File "F:\ai\Inpaint-Anything\remove_anything.py", line 9, in <module>
    from lama_inpaint import inpaint_img_with_lama
  File "F:\ai\Inpaint-Anything\lama_inpaint.py", line 19, in <module>
    from saicinpainting.evaluation.utils import move_to_device
  File "F:\ai\Inpaint-Anything\lama\saicinpainting\evaluation\__init__.py", line 6, in <module>
    from saicinpainting.evaluation.losses.base_loss import SSIMScore, LPIPSScore, FIDScore
  File "F:\ai\Inpaint-Anything\lama\saicinpainting\evaluation\losses\base_loss.py", line 15, in <module>
    from .lpips import PerceptualLoss
  File "F:\ai\Inpaint-Anything\lama\saicinpainting\evaluation\losses\lpips.py", line 15, in <module>
    from saicinpainting.utils import get_shape
  File "F:\ai\Inpaint-Anything\lama\saicinpainting\utils.py", line 12, in <module>
    from pytorch_lightning import seed_everything
  File "C:\Python311\Lib\site-packages\pytorch_lightning\__init__.py", line 28, in <module>
    from pytorch_lightning import metrics  # noqa: E402
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\pytorch_lightning\metrics\__init__.py", line 14, in <module>
    from pytorch_lightning.metrics.classification import (  # noqa: F401
  File "C:\Python311\Lib\site-packages\pytorch_lightning\metrics\classification\__init__.py", line 14, in <module>
    from pytorch_lightning.metrics.classification.accuracy import Accuracy  # noqa: F401
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\pytorch_lightning\metrics\classification\accuracy.py", line 18, in <module>
    from pytorch_lightning.metrics.functional.accuracy import _accuracy_compute, _accuracy_update
  File "C:\Python311\Lib\site-packages\pytorch_lightning\metrics\functional\__init__.py", line 14, in <module>
    from pytorch_lightning.metrics.functional.accuracy import accuracy  # noqa: F401
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\pytorch_lightning\metrics\functional\accuracy.py", line 18, in <module>
    from pytorch_lightning.metrics.classification.helpers import _input_format_classification, DataType
  File "C:\Python311\Lib\site-packages\pytorch_lightning\metrics\classification\helpers.py", line 19, in <module>
    from pytorch_lightning.metrics.utils import select_topk, to_onehot
  File "C:\Python311\Lib\site-packages\pytorch_lightning\metrics\utils.py", line 18, in <module>
    from pytorch_lightning.utilities import rank_zero_warn
  File "C:\Python311\Lib\site-packages\pytorch_lightning\utilities\__init__.py", line 18, in <module>
    from pytorch_lightning.utilities.apply_func import move_data_to_device  # noqa: F401
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\pytorch_lightning\utilities\apply_func.py", line 25, in <module>
    from pytorch_lightning.utilities.imports import _compare_version, _TORCHTEXT_AVAILABLE
  File "C:\Python311\Lib\site-packages\pytorch_lightning\utilities\imports.py", line 76, in <module>
    _HYDRA_EXPERIMENTAL_AVAILABLE = _module_available("hydra.experimental")
                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\pytorch_lightning\utilities\imports.py", line 35, in _module_available
    return find_spec(module_path) is not None
           ^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib.util>", line 94, in find_spec
  File "C:\Python311\Lib\site-packages\hydra\__init__.py", line 5, in <module>
    from hydra import utils
  File "C:\Python311\Lib\site-packages\hydra\utils.py", line 8, in <module>
    import hydra._internal.instantiate._instantiate2
  File "C:\Python311\Lib\site-packages\hydra\_internal\instantiate\_instantiate2.py", line 11, in <module>
    from hydra._internal.utils import _locate
  File "C:\Python311\Lib\site-packages\hydra\_internal\utils.py", line 17, in <module>
    from hydra.core.utils import get_valid_filename, validate_config_path
  File "C:\Python311\Lib\site-packages\hydra\core\utils.py", line 19, in <module>
    from hydra.core.hydra_config import HydraConfig
  File "C:\Python311\Lib\site-packages\hydra\core\hydra_config.py", line 6, in <module>
    from hydra.conf import HydraConf
  File "C:\Python311\Lib\site-packages\hydra\conf\__init__.py", line 45, in <module>
    class JobConf:
  File "C:\Python311\Lib\site-packages\hydra\conf\__init__.py", line 70, in JobConf
    @dataclass
     ^^^^^^^^^
  File "C:\Python311\Lib\dataclasses.py", line 1223, in dataclass
    return wrap(cls)
           ^^^^^^^^^
  File "C:\Python311\Lib\dataclasses.py", line 1213, in wrap
    return _process_class(cls, init, repr, eq, order, unsafe_hash,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\dataclasses.py", line 958, in _process_class
    cls_fields.append(_get_field(cls, name, type, kw_only))
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\dataclasses.py", line 815, in _get_field
    raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory

Detectron v2 is not installed

When I run the following command: python remove_anything.py --input_img ./example/remove-anything/dog.jpg --point_coords 200 450 --point_labels 1 --dilate_kernel_size 15 --output_dir ./results --sam_model_type "vit_h" --sam_ckpt ./pretrained_models/sam_vit_h_4b8939.pth --lama_config ./lama/configs/prediction/default.yaml --lama_ckpt ./pretrained_models/big-lama, I get the output shown in the attached screenshot.

I followed the installation and usage instructions, but I don't know what happened to cause the "Detectron v2 is not installed" error.
