
Comments (1)

dengfenglai321 commented on August 29, 2024

Good project. I would like to know what the difference is between this and StoryDiffusion.

Environment: Windows 11 x64, Python 3.10.11, torch 2.1.0+cu118; all stable-diffusion-v1-5 / sd-vae-ft-mse / IP-Adapter checkpoints are prepared.

First error message in the terminal window:

```
D:\AITest\AutoStudio\model\pipeline_stable_diffusion.py:41: FutureWarning: Importing DiffusionPipeline or ImagePipelineOutput from diffusers.pipeline_utils is deprecated. Please import from diffusers.pipelines.pipeline_utils instead.
  from diffusers.pipeline_utils import DiffusionPipeline
Using box scale: (512, 512)
D:\AITest\AutoStudio\DETECT_SAM/Grounding-DINO\groundingdino\models\GroundingDINO\ms_deform_attn.py:31: UserWarning: Failed to load custom C++ ops. Running on CPU mode Only!
  warnings.warn("Failed to load custom C++ ops. Running on CPU mode Only!")
Traceback (most recent call last):
  D:\AITest\AutoStudio\run_me.py:31
    from model.autostudio import AUTOSTUDIO, AUTOSTUDIOPlus, AUTOSTUDIOXL, AUTOSTUDIOXLPlus
  D:\AITest\AutoStudio\model\autostudio.py:16
    from detectSam import process_image
  D:\AITest\AutoStudio/DETECT_SAM\detectSam.py:36
    EFFICIENT_SAM_MODEL = load(device=DEVICE)
  D:\AITest\AutoStudio/DETECT_SAM\efficient_sam.py:14 in load
    model = torch.jit.load(GPU_EFFICIENT_SAM_CHECKPOINT)
  D:\AITest\AutoStudio\python310\lib\site-packages\torch\jit\_serialization.py:152 in load
    raise ValueError(f"The provided filename {f} does not exist")
ValueError: The provided filename D:\AITest\AutoStudio\DETECT_SAM/pretrain/efficient_sam_s_gpu.jit does not exist
Press any key to continue . . .
```

I found efficient_sam_s_gpu.jit at "https://huggingface.co/merve/EfficientSAM", downloaded it into the directory above, and continued.
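For anyone hitting the same error, a minimal sketch of doing that download with huggingface_hub (the repo id and filename are taken from the link above; the target folder is the one the traceback expects, so adjust local_dir to your own checkout):

```python
# Sketch: fetch efficient_sam_s_gpu.jit into the folder the loader expects
# (DETECT_SAM/pretrain). Repo id and filename come from the link above;
# local_dir below is an example path for this Windows checkout.
# Assumes a recent huggingface_hub that supports local_dir.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="merve/EfficientSAM",
    filename="efficient_sam_s_gpu.jit",
    local_dir=r"D:\AITest\AutoStudio\DETECT_SAM\pretrain",
)
```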

Second error, after adding the checkpoint:

```
D:\AITest\AutoStudio\model\pipeline_stable_diffusion.py:41: FutureWarning: Importing DiffusionPipeline or ImagePipelineOutput from diffusers.pipeline_utils is deprecated. Please import from diffusers.pipelines.pipeline_utils instead.
  from diffusers.pipeline_utils import DiffusionPipeline
Using box scale: (512, 512)
D:\AITest\AutoStudio\DETECT_SAM/Grounding-DINO\groundingdino\models\GroundingDINO\ms_deform_attn.py:31: UserWarning: Failed to load custom C++ ops. Running on CPU mode Only!
  warnings.warn("Failed to load custom C++ ops. Running on CPU mode Only!")
Traceback (most recent call last):
  D:\AITest\AutoStudio\python310\lib\site-packages\transformers\configuration_utils.py:629 in _get_config_dict
    resolved_config_file = cached_file(
  D:\AITest\AutoStudio\python310\lib\site-packages\transformers\utils\hub.py:417 in cached_file
    resolved_file = hf_hub_download(
  D:\AITest\AutoStudio\python310\lib\site-packages\huggingface_hub\utils\_validators.py:110 in inner_fn
    validate_repo_id(arg_value)
  D:\AITest\AutoStudio\python310\lib\site-packages\huggingface_hub\utils\_validators.py:158 in validate_repo_id
    raise HFValidationError(
HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/data2/chengjunhao/THEATERGEN/pretrained_models/dino_bert'. Use repo_type argument if needed.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  D:\AITest\AutoStudio\run_me.py:31
    from model.autostudio import AUTOSTUDIO, AUTOSTUDIOPlus, AUTOSTUDIOXL, AUTOSTUDIOXLPlus
  D:\AITest\AutoStudio\model\autostudio.py:16
    from detectSam import process_image
  D:\AITest\AutoStudio/DETECT_SAM\detectSam.py:37
    GROUNDING_DINO_MODEL = Model(f"{dpath}/Grounding-DINO/groundingdino/config/Grounding…
  D:\AITest\AutoStudio\DETECT_SAM/Grounding-DINO\groundingdino\util\inference.py:142 in __init__
    self.model = load_model(
  D:\AITest\AutoStudio\DETECT_SAM/Grounding-DINO\groundingdino\util\inference.py:40 in load_model
    model = build_model(args)
  D:\AITest\AutoStudio\DETECT_SAM/Grounding-DINO\groundingdino\models\__init__.py:17 in build_model
    model = build_func(args)
  D:\AITest\AutoStudio\DETECT_SAM/Grounding-DINO\groundingdino\models\GroundingDINO\groundingdino.py:395 in build_groundingdino
    model = GroundingDINO(
  D:\AITest\AutoStudio\DETECT_SAM/Grounding-DINO\groundingdino\models\GroundingDINO\groundingdino.py:115 in __init__
    self.bert = get_tokenlizer.get_pretrained_language_model(text_encoder_type)
  D:\AITest\AutoStudio\DETECT_SAM/Grounding-DINO\groundingdino\util\get_tokenlizer.py:25 in get_pretrained_language_model
    return BertModel.from_pretrained("/data2/chengjunhao/THEATERGEN/pretrained_model…
  D:\AITest\AutoStudio\python310\lib\site-packages\transformers\modeling_utils.py:2251 in from_pretrained
    config, model_kwargs = cls.config_class.from_pretrained(
  D:\AITest\AutoStudio\python310\lib\site-packages\transformers\configuration_utils.py:547 in from_pretrained
    config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwarg…
  D:\AITest\AutoStudio\python310\lib\site-packages\transformers\configuration_utils.py:574 in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwar…
  D:\AITest\AutoStudio\python310\lib\site-packages\transformers\configuration_utils.py:650 in _get_config_dict
    raise EnvironmentError(
OSError: Can't load the configuration of '/data2/chengjunhao/THEATERGEN/pretrained_models/dino_bert'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '/data2/chengjunhao/THEATERGEN/pretrained_models/dino_bert' is the correct path to a directory containing a config.json file
Press any key to continue . . .
```

I checked "https://github.com/IDEA-Research/GroundingDINO" and "https://github.com/donahowe/Theatergen", but I can't find your pretrained model "dino_bert".

Please help me check and analyze this error message, thanks!

It's the path of bert-base-uncased.
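In other words, the hardcoded '/data2/chengjunhao/THEATERGEN/pretrained_models/dino_bert' in get_tokenlizer.py (line 25 in the traceback) is just the author's local copy of bert-base-uncased. A minimal sketch of a workaround, assuming you are willing to pull the stock bert-base-uncased weights from the Hugging Face Hub and point that line at them (the local_dir below is only an example path):

```python
# Sketch: fetch bert-base-uncased locally, then edit
# DETECT_SAM/Grounding-DINO/groundingdino/util/get_tokenlizer.py (line 25 in
# the traceback) to pass this folder -- or simply the string
# "bert-base-uncased" so transformers resolves it from the Hub cache itself.
# Assumes a recent huggingface_hub that supports local_dir.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="bert-base-uncased",
    local_dir=r"D:\AITest\AutoStudio\pretrained\bert-base-uncased",  # example path
)
print("BERT config and weights downloaded to:", local_dir)
```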

