open-mmlab / playground
A central hub for gathering and showcasing amazing projects that extend OpenMMLab with SAM and other exciting features.
License: Apache License 2.0
After installing the dependencies and running the example code for testing, an error is shown:
Traceback (most recent call last):
File "detector_sam_demo.py", line 30, in <module>
import mmdet
File "/home/gcbm/workspace/mmdetection-master/mmdet/__init__.py", line 24, in <module>
assert (mmcv_version >= digit_version(mmcv_minimum_version)
AssertionError: MMCV==2.0.0 is used but incompatible. Please install mmcv>=1.3.17, <=1.8.0.
After I changed the version to 1.7.1, the error became:
Traceback (most recent call last):
File "detector_sam_demo.py", line 509, in <module>
main()
File "detector_sam_demo.py", line 418, in main
raise RuntimeError('detection model is not installed,\
RuntimeError: detection model is not installed, please install it follow README
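For what it's worth, the failing check can be reproduced outside mmdet. This sketch mirrors the assertion in the error message (`digit_version` here is a simplified stand-in for mmengine's helper of the same name, and the version ranges are taken from the error text). It suggests the `mmdetection-master` checkout is the mmdet 2.x series, while the playground demos expect mmdet 3.x, which accepts mmcv 2.0, so installing mmdet>=3.0 is likely the fix rather than downgrading mmcv.

```python
def digit_version(version_str):
    """Simplified stand-in for mmengine's digit_version: parse a dotted
    version string into a tuple of ints for comparison."""
    return tuple(int(p) for p in version_str.split('.') if p.isdigit())

# mmdet 2.x pins mmcv to the range [1.3.17, 1.8.0], so mmcv 2.0.0 fails:
mmcv_minimum_version, mmcv_maximum_version = '1.3.17', '1.8.0'
mmcv_version = digit_version('2.0.0')
compatible = (digit_version(mmcv_minimum_version) <= mmcv_version
              <= digit_version(mmcv_maximum_version))
print(compatible)  # False: with mmcv 2.0 you need the mmdet 3.x series
```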
Dear developers,
I set everything up according to your documentation, except that I use Python 3.10.4, and I can only install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0.
Everything looks fine except that the auto segmentation does not work when I pick a point.
Below is the output. It says "ML backend returned empty prediction".
Also I notice that the model version is "unknown" instead of "initial"
Please help. Thank you very much!
best regards
Windows 10, Simplified Chinese, 64-bit
Successfully connected to http://127.0.0.1:8003 but it doesn't look like a valid ML backend. Reason: 500 Server Error: INTERNAL SERVER ERROR for url: http://127.0.0.1:8003/setup.
Check the ML backend server console logs to check the status. There might be
something wrong with your model or it might be incompatible with the current labeling configuration.
Version: 1.7.3
{"model_dir":"E:\python\playground\label_anything\sam","status":"UP","v2":false}
Method Not Allowed The method is not allowed for the requested URL.
Anaconda prompt:
127.0.0.1 - - [26/Apr/2023 17:11:02] "POST /setup HTTP/1.1" 500 -
Why is that? Do you know what's going on?
In mmdet-sam I tried the example inference code you gave using the DINO and SAM models and got the predicted JSON file. The last image is my result. My first question: I am not sure whether it is producing correct annotation and segmentation results, because the file looks like garbled text even though I saved it as UTF-8. My second question: is the JSON file produced by this inference in standard COCO format by default?
The line `from mmdet.models.tracks import ByteTrack` may be wrong? When I change it to import from mmtrack, it raises a conflict between mmcv 2.0 and mmtrack: "AssertionError: MMCV==2.0.0 is used but incompatible. Please install mmcv>=1.3.17, <2.0.0.". I would like some help.
I follow the readme and can open the label-studio website. Most steps seem good. However, when I use the point to label the cat, the mask can't be generated by SAM.
This is my shell output when starting SAM,
and I notice that it does not show the same line as the README:
<segment_anything.predictor.SamPredictor object at .....>
Is this the problem that caused the failure?
Or when I run 'label-studio start', the output is:
By the way, after setting up the labeling interface, I can add the model successfully in 'Machine Learning' as:
The labeling page looks like:
When I open the label task, the shell outputs:
What is the problem? Thanks!
(rtmdet-sam) F:\playground\label_anything>python F:\playground\label_anything\tools\convert_to_rle_mask_coco.py --json_file_path D:/Desktop/4cats/result.json --out_dir F:\images
Traceback (most recent call last):
File "F:\playground\label_anything\tools\convert_to_rle_mask_coco.py", line 165, in
format_to_coco(args)
File "F:\playground\label_anything\tools\convert_to_rle_mask_coco.py", line 65, in format_to_coco
image_path_from=os.path.join('~/AppData/Local/label-studio/label-studio/media/upload/',os.path.dirname(contents[0]['data']['image']).split('/')[-1])
KeyError: 0
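A guess at the cause of the KeyError: 0: `contents[0]` fails when the loaded JSON is not the list-of-tasks export the script expects (for example, a dict from a different export format, or an export containing no annotated tasks). A small guard makes the failure mode explicit; `load_tasks` is an illustrative helper, not part of the script.

```python
import json

def load_tasks(json_file_path):
    """Load a Label-Studio export and fail loudly if it is not the
    non-empty list of task dicts that convert_to_rle_mask_coco.py
    indexes with contents[0]."""
    with open(json_file_path, encoding='utf-8') as f:
        contents = json.load(f)
    if not isinstance(contents, list) or not contents:
        raise ValueError(
            'Expected a non-empty JSON list exported from Label-Studio '
            f'(the plain "JSON" export); got {type(contents).__name__}')
    return contents
```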
The default image storage method in Label-Anything is to upload your images to the Label-Studio backend.
But in many cases the unannotated images already exist on the backend, e.g. on a Linux server.
When I tried to use Label Studio's 'Add Cloud Storage' method to use images stored on the server, the error is:
Traceback (most recent call last):
  File "/root/anaconda3/envs/rtmdet-sam/lib/python3.9/site-packages/label_studio_ml/exceptions.py", line 39, in exception_f
    return f(*args, **kwargs)
  File "/root/anaconda3/envs/rtmdet-sam/lib/python3.9/site-packages/label_studio_ml/api.py", line 51, in _predict
    predictions, model = _manager.predict(tasks, project, label_config, force_reload, try_fetch, **params)
  File "/root/anaconda3/envs/rtmdet-sam/lib/python3.9/site-packages/label_studio_ml/model.py", line 617, in predict
    predictions = cls._current_model.model.predict(tasks, **kwargs)
  File "/mnt/data1/zhangyijun/Code/playground/label_anything/sam/mmdetection.py", line 165, in predict
    image_path = self.get_local_path(image_url)
  File "/root/anaconda3/envs/rtmdet-sam/lib/python3.9/site-packages/label_studio_ml/model.py", line 323, in get_local_path
    return get_local_path(url, project_dir=project_dir, hostname=self.hostname, access_token=self.access_token)
  File "/root/anaconda3/envs/rtmdet-sam/lib/python3.9/site-packages/label_studio_tools/core/utils/io.py", line 71, in get_local_path
    raise FileNotFoundError(filepath)
FileNotFoundError: /data/images/IMG_20210705_084125__01.jpg
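One workaround that I believe applies for server-local images (this relies on Label Studio's documented local-files serving feature, not on this repo's code): expose the directory through Label Studio itself before starting it. The /data path below is taken from the traceback; adjust it to your own layout.

```shell
# Serve files under /data directly from Label Studio (path is an example
# taken from the traceback above; adjust to your server).
export LABEL_STUDIO_LOCAL_FILES_SERVING_ENABLED=true
export LABEL_STUDIO_LOCAL_FILES_DOCUMENT_ROOT=/data
label-studio start
```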
There is no problem running Faster R-CNN models, but running the DINO model reports an error,
with the following errors: "Konva error: You may only add groups and shapes to a layer" and "Failed to load resource: the server responded with a status of 413 ()".
I see that you have introduced semi-automatic annotation. Thank you for your work, but it is still rather laborious. Would it be possible to combine this with the previously introduced mmdet-sam to perform detection and segmentation automatically and obtain the annotations?
In that case, we would only need to specify the data folder to be processed and a category.txt listing the classes to detect and segment, then run the code to directly output an annotation file in COCO or another format. I would appreciate it if you have any plans for this.
I see that the exported label ids are ordered alphabetically. How can I change them to the order I want?
How to refine the annotations generated by SAM within Label-Studio ?
The page doesn't split automatically when I annotate the images
https://github.com/open-mmlab/playground/blob/9aedd47bfdbd0a73e08dd4ea6fc41b58b82c1da1/mmdet_sam/detector_sam_demo.py#LL254C1-L255C74
I noticed that when computing the correspondence here, the tokenizer is built with args.text_prompt rather than text_prompt. Since text_prompt is not always the same as args.text_prompt, could this lead to an incorrect mapping?
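To make the suspected mismatch concrete, here is a hedged, self-contained illustration. The names are illustrative rather than the demo's actual variables; Grounding DINO-style demos typically lowercase the prompt and append a terminating '.', which is already enough to make the two strings differ.

```python
def normalize_prompt(raw):
    """Illustrative version of the normalization Grounding DINO-style
    demos apply before tokenizing (lowercase + terminating '.')."""
    prompt = raw.lower().strip()
    if not prompt.endswith('.'):
        prompt += '.'
    return prompt

raw_prompt = 'Cat. Dog'                     # what args.text_prompt holds
text_prompt = normalize_prompt(raw_prompt)  # what the model actually sees
# Offsets computed on one string do not line up with tokens of the other,
# so the phrase map should be built from text_prompt, not args.text_prompt.
assert raw_prompt != text_prompt
```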
How can I obtain the pre-trained SAM model? Can we load a SAM model pre-trained on a particular dataset?
In mmdet_sam, can I use my own training dataset for object detection and SAM segmentation? If so, what should be changed? When I directly use the weights and configuration of a DINO model trained on the VisDrone aerial photography dataset as parameters, it reports an error, so I would like to ask whether only the 80 COCO categories are currently supported for inference.
Here is the complete log:
04/19 14:44:16 - mmengine - INFO - Creating Linaqruf/anything-v3.0 by 'HuggingFace'
/home/wushuchen/projects/SAM_test/mmediting/mmedit/models/base_archs/wrapper.py:129: FutureWarning: Accessing config attribute block_out_channels
directly via 'AutoencoderKL' object attribute is deprecated. Please access 'block_out_channels' over 'AutoencoderKL's config object instead, e.g. 'unet.config.block_out_channels'.
return getattr(self.model, name)
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:02<00:00, 5.54it/s]
0%| | 0/15 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/wushuchen/projects/SAM_test/playground/mmediting_sam/play_controlnet_animation_sam.py", line 108, in <module>
    generate_animation(
  File "/home/wushuchen/projects/SAM_test/playground/mmediting_sam/play_controlnet_animation_sam.py", line 19, in generate_animation
    editor.infer(
  File "/home/wushuchen/projects/SAM_test/mmediting/mmedit/edit.py", line 199, in infer
    return self.inferencer(
  File "/home/wushuchen/projects/SAM_test/mmediting/mmedit/apis/inferencers/__init__.py", line 116, in __call__
    return self.inferencer(**kwargs)
  File "/home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/wushuchen/projects/SAM_test/mmediting/mmedit/apis/inferencers/controlnet_animation_inferencer.py", line 208, in __call__
    result = self.pipe.infer(
  File "/home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/wushuchen/projects/SAM_test/mmediting/mmedit/models/editors/controlnet/controlnet.py", line 806, in infer
    down_block_res_samples, mid_block_res_sample = self.controlnet(
  File "/home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/wushuchen/projects/SAM_test/mmediting/mmedit/models/base_archs/wrapper.py", line 159, in forward
    return self.model(*args, **kwargs)
  File "/home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/diffusers/models/controlnet.py", line 525, in forward
    sample, res_samples = downsample_block(
  File "/home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/diffusers/models/unet_2d_blocks.py", line 867, in forward
    hidden_states = attn(
  File "/home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/diffusers/models/transformer_2d.py", line 265, in forward
    hidden_states = block(
  File "/home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/diffusers/models/attention.py", line 294, in forward
    attn_output = self.attn1(
  File "/home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/diffusers/models/attention_processor.py", line 243, in forward
    return self.processor(
  File "/home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/diffusers/models/attention_processor.py", line 382, in __call__
    attention_probs = attn.get_attention_scores(query, key, attention_mask)
  File "/home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/diffusers/models/attention_processor.py", line 284, in get_attention_scores
    attention_scores = torch.baddbmm(
RuntimeError: CUDA out of memory. Tried to allocate 9.00 GiB (GPU 0; 23.67 GiB total capacity; 14.61 GiB already allocated; 4.53 GiB free; 14.82 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
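The OOM message's own suggestion is worth trying first. A minimal sketch (128 MiB is an illustrative value to tune, not a recommendation, and the variable must be set before the first CUDA allocation, ideally before importing torch); reducing the number of frames processed per batch in the animation config is the other obvious lever.

```python
import os

# Cap the CUDA caching allocator's split size to reduce fragmentation,
# as the OOM message suggests. Set this before importing torch / before
# the first CUDA allocation; 128 is an illustrative value to tune.
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'max_split_size_mb:128'
```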
{"model_dir":"d:\playground\label_anything\sam","status":"UP","v2":false}
I use a MacBook Pro (device=mps, torch==2.0, macOS 13.3.1). No matter where I click, the mask and box appear in a fixed area in the upper-left corner. May I ask why? The output shows:
[WARNING] ML backend returned empty prediction for project SAM (id=12, url=http://192.168.1.6:8000/)
Thank you for providing such a great project. In some regions, downloading or installing dependencies can be problematic due to network conditions. How about creating a Docker image? Thanks again.
Traceback (most recent call last):
File ".\play_controlnet_animation_sam.py", line 6, in
from mmedit.edit import MMEdit
ModuleNotFoundError: No module named 'mmedit'
Neither `mim install mmedit` nor `pip install mmedit` solves it.
When I select the COCO format to export, the following error occurs.
Traceback (most recent call last):
File "F:\Anaconda\anaconda\envs\rtmdet-sam\lib\site-packages\rest_framework\views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "F:\Anaconda\anaconda\envs\rtmdet-sam\lib\site-packages\django\utils\decorators.py", line 43, in _wrapper
return bound_method(*args, **kwargs)
File "F:\Anaconda\anaconda\envs\rtmdet-sam\lib\site-packages\label_studio\data_export\api.py", line 183, in get
export_stream, content_type, filename = DataExport.generate_export_file(
File "F:\Anaconda\anaconda\envs\rtmdet-sam\lib\site-packages\label_studio\data_export\models.py", line 161, in generate_export_file
converter.convert(input_json, tmp_dir, output_format, is_dir=False)
File "F:\Anaconda\anaconda\envs\rtmdet-sam\lib\site-packages\label_studio_converter\converter.py", line 209, in convert
self.convert_to_coco(
File "F:\Anaconda\anaconda\envs\rtmdet-sam\lib\site-packages\label_studio_converter\converter.py", line 700, in convert_to_coco
(x / 100 * width, y / 100 * height) for x, y in label["points"]
KeyError: 'points'
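A hedged guess at the cause: the converter's COCO polygon branch reads label["points"], but brush/RLE masks (the kind SAM produces) have no points field. One workaround is to strip non-polygon results from the export before converting; the field names follow the Label-Studio JSON export layout, and `keep_polygon_results` is an illustrative helper, not part of either tool.

```python
def keep_polygon_results(task):
    """Drop results without a 'points' field (e.g. brush/RLE masks) so
    the COCO polygon converter does not hit KeyError: 'points'."""
    for ann in task.get('annotations', []):
        ann['result'] = [
            r for r in ann.get('result', [])
            if 'points' in r.get('value', {})
        ]
    return task

task = {'annotations': [{'result': [
    {'value': {'points': [[10, 10], [20, 20], [10, 20]]}},  # polygon: kept
    {'value': {'format': 'rle', 'rle': [5, 3, 7]}},         # brush: dropped
]}]}
assert len(keep_polygon_results(task)['annotations'][0]['result']) == 1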
The label-studio ML backend runs out of memory and gets killed after I use it for a while.
After I label some number of images, RAM usage increases substantially, and every time I switch from one image to another the RAM increases even more.
And it keeps increasing after that.
On Windows, after uploading the data, the tool cannot read images whose paths contain Chinese characters. Changing the image reading in playground\label_anything\sam\mmdetection from cv2.imread to cv2.imdecode does not fix it; the line `if kwargs.get('context') is None:` in predict in mmdetection raises an error. Is there any way to solve this?
How to install detic?
Currently the demo only supports a single positive point or a single bbox. Could it support multiple positive/negative points and bboxes at the same time?
RUNTIME ERROR
Validation error
Successfully connected to http://127.0.0.1:8003/ but it doesn't look like a valid ML backend. Reason: 500 Server Error: INTERNAL SERVER ERROR for url: http://127.0.0.1:8003/setup.
Check the ML backend server console logs to check the status. There might be
something wrong with your model or it might be incompatible with the current labeling configuration.
I loaded the SAM backend and successfully connected the inference model in the front-end page. However, when labeling, the SAM inference model does not run automatically from the prompt points or rectangles, and an error is reported in the terminal: "AttributeError: 'MMDetection' object has no attribute 'value'"
Traceback (most recent call last):
File "tracking_demo.py", line 31, in <module>
from mmdet.models.trackers import ByteTracker
File "/home/gcbm/workspace/track/playground/mmdetection/mmdet/models/__init__.py", line 10, in <module>
from .reid import * # noqa: F401,F403
File "/home/gcbm/workspace/track/playground/mmdetection/mmdet/models/reid/__init__.py", line 2, in <module>
from .base_reid import BaseReID
File "/home/gcbm/workspace/track/playground/mmdetection/mmdet/models/reid/base_reid.py", line 17, in <module>
class BaseReID(ImageClassifier):
NameError: name 'ImageClassifier' is not defined
Looking through the code, I found that the mmcls package was missing, which prevented the import from working. I manually installed mmcls 0.25.0, after which a new error appeared:
Traceback (most recent call last):
File "tracking_demo.py", line 31, in <module>
from mmdet.models.trackers import ByteTracker
File "/home/gcbm/workspace/track/playground/mmdetection/mmdet/models/__init__.py", line 3, in <module>
from .data_preprocessors import * # noqa: F401,F403
File "/home/gcbm/workspace/track/playground/mmdetection/mmdet/models/data_preprocessors/__init__.py", line 6, in <module>
from .reid_data_preprocessor import ReIDDataPreprocessor
File "/home/gcbm/workspace/track/playground/mmdetection/mmdet/models/data_preprocessors/reid_data_preprocessor.py", line 13, in <module>
import mmcls
File "/home/gcbm/workspace/anaconda3/envs/mmtracking-sam/lib/python3.8/site-packages/mmcls/__init__.py", line 55, in <module>
assert (mmcv_version >= digit_version(mmcv_minimum_version)
AssertionError: MMCV==2.0.0 is used but incompatible. Please install mmcv>=1.4.2, <=1.9.0.
I tried using a lower version of mmcv, but then the other packages didn't work. Is there a conflict between mmcls and mmcv 2.0? I haven't found a solution.
[ERROR] [label_studio_ml.model::get_result_from_last_job::131] 1685446124 job returns exception:
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/wyz-labelsam/lib/python3.9/site-packages/label_studio_ml/model.py", line 129, in get_result_from_last_job
result = self.get_result_from_job_id(job_id)
File "/home/ubuntu/anaconda3/envs/wyz-labelsam/lib/python3.9/site-packages/label_studio_ml/model.py", line 111, in get_result_from_job_id
assert isinstance(result, dict)
AssertionError
When I'm labeling, there's a lag, and the backend also prints the above error message.
I want to use Grounding DINO to generate all the bboxes it can recognize and run SAM on them.
I think I can probably use coco_cls_name.txt. Could you please share how to generate or download it? I'm not sure whether '.' should be included in the class names in the txt file.
Thanks.
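I can't confirm the exact layout these demos expect (check the repo's sample file), but a common convention is one class name per line with no trailing '.': in Grounding DINO-style prompts the '.' is a separator the demo adds at inference time, so it should not be part of the names themselves. A hedged sketch that writes such a file (the class list is truncated for illustration; fill in all 80 COCO names):

```python
import os
import tempfile

# One class name per line, no trailing '.'; only the first few of the
# 80 COCO classes are listed here for illustration.
coco_classes = ['person', 'bicycle', 'car', 'motorcycle', 'airplane']
out_path = os.path.join(tempfile.gettempdir(), 'coco_cls_name.txt')
with open(out_path, 'w') as f:
    f.write('\n'.join(coco_classes) + '\n')
```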
If you have any suggestions for Label Anything, you can leave a message here.
/site-packages/label_studio_converter/converter.py", line 917, in rotated_rectangle
label["x"],
KeyError: 'x'
This happens when I export in COCO or YOLO format; please help.
Is it possible to export file in coco format to target cloud storage? Or how can I manually convert the annotations in cloud storage to coco format? Thanks.
I wish I could add custom shortcut functionality; otherwise I need more time to switch between samples.
Does label_anything support expanding or removing a mask candidate with a single click?
The brush tool to refine the mask is not so automatic though...