
playground's Issues

Problems testing MMDet+SAM

After installing the dependencies and running the example code from the tutorial, an error is shown:

Traceback (most recent call last):
  File "detector_sam_demo.py", line 30, in <module>
    import mmdet
  File "/home/gcbm/workspace/mmdetection-master/mmdet/__init__.py", line 24, in <module>
    assert (mmcv_version >= digit_version(mmcv_minimum_version)
AssertionError: MMCV==2.0.0 is used but incompatible. Please install mmcv>=1.3.17, <=1.8.0.

After changing mmcv to version 1.7.1, the error became:

Traceback (most recent call last):
  File "detector_sam_demo.py", line 509, in <module>
    main()
  File "detector_sam_demo.py", line 418, in main
    raise RuntimeError('detection model is not installed,\
RuntimeError: detection model is not installed, please install it follow README
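This pair of errors usually points to MMCV 2.x being installed alongside an old MMDetection checkout (the path shows mmdetection-master, i.e. the 2.x line, which requires mmcv<=1.8.0). A minimal environment check, assuming the playground demos target MMDetection 3.x on MMCV 2.x:

import mmcv
import mmdet

# an mmdetection-master (2.x) checkout on sys.path will shadow a 3.x install,
# so also print where mmdet is actually imported from
print('mmcv:', mmcv.__version__)
print('mmdet:', mmdet.__version__, mmdet.__file__)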

ML backend returned empty prediction

Dear developers,

I set everything up according to your documentation, except that I use Python 3.10.4 and could only install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0.

Everything looks fine except that the auto segmentation does not work when I pick a point.

Below is the output. It says "ML backend returned empty prediction".

[screenshot]

Also, I notice that the model version is "unknown" instead of "initial":

[screenshot]

Please help. Thank you very much!

Best regards
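One way to debug an empty prediction is to POST to the backend's /predict endpoint directly and inspect the raw response; a sketch, where the task payload and image URL are placeholders:

import requests

task = {'data': {'image': 'http://localhost:8080/data/upload/1/cat.jpg'}}  # placeholder task
r = requests.post('http://127.0.0.1:8003/predict', json={'tasks': [task]})
print(r.status_code, r.json())  # an empty result list here isolates the bug to the backend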

[label_anything]: Add model - Validation error

Windows 10, Simplified Chinese, 64-bit

Successfully connected to http://127.0.0.1:8003 but it doesn't look like a valid ML backend. Reason: 500 Server Error: INTERNAL SERVER ERROR for url: http://127.0.0.1:8003/setup.
Check the ML backend server console logs to check the status. There might be
something wrong with your model or it might be incompatible with the current labeling configuration.
Version: 1.7.3

http://localhost:8003/

{"model_dir":"E:\python\playground\label_anything\sam","status":"UP","v2":false}

http://127.0.0.1:8003/setup

Method Not Allowed The method is not allowed for the requested URL.

Anaconda prompt:
127.0.0.1 - - [26/Apr/2023 17:11:02] "POST /setup HTTP/1.1" 500 -

Why does this happen?
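Opening http://127.0.0.1:8003/setup in a browser sends a GET, and the endpoint only accepts POST, so "Method Not Allowed" there is expected; the real problem is the 500 returned to the POST shown in the Anaconda prompt. A sketch for replaying that POST and reading the backend's error body (the payload keys are my guess at what Label Studio sends):

import requests

payload = {
    'project': '1',                # placeholder project id
    'schema': '<View>...</View>',  # your labeling config XML
}
r = requests.post('http://127.0.0.1:8003/setup', json=payload)
print(r.status_code)
print(r.text)  # the 500 body usually carries the backend traceback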

Error when connecting from the web page

The backend starts SAM inference successfully, but an error occurs during validation.
[two screenshots attached]

Problem with inference?

In mmdet-sam I tried the example inference code you provide using the DINO and SAM models and got the predicted JSON file. The last image is my result. The first problem is that I am not sure whether it is working and producing the correct annotation and segmentation results: the output looks garbled even though I saved it as UTF-8. The second problem is whether the JSON file produced by this inference is in standard COCO format by default.

[screenshot]
The following are my commands:
[screenshot]
The results follow:
[screenshot]
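If the exported JSON is standard COCO, it should load cleanly with pycocotools; a quick check, where the file name is a placeholder for your output:

from pycocotools.coco import COCO

coco = COCO('result.json')       # placeholder path
print(coco.dataset.keys())       # expect at least 'images', 'annotations', 'categories'
print(len(coco.getAnnIds()), 'annotations')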

Some problems with mmtrack and mmdet

The import `from mmdet.models.tracks import ByteTrack` may be wrong. When I change it to import from mmtrack instead, it raises a version error between mmcv 2.0 and mmtrack: "AssertionError: MMCV==2.0.0 is used but incompatible. Please install mmcv>=1.3.17, <2.0.0." I would appreciate some help.
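For what it's worth, the tracking demo quoted in a later issue in this thread uses the following import, so the line above may simply have a typo:

from mmdet.models.trackers import ByteTracker  # "trackers"/"ByteTracker", not "tracks"/"ByteTrack"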

Can't run SAM inference in label-anything

I followed the readme and can open the Label Studio website. Most steps seem fine. However, when I use a point to label the cat, SAM does not generate a mask.
This is my shell output when starting SAM:
[screenshot]
I notice it does not show the same line as the readme:
<segment_anything.predictor.SamPredictor object at ...>
Is this what caused the failure?
When I run `label-studio start`, the output is:
[screenshot]
By the way, after setting the labeling interface, I can add the model successfully under 'Machine Learning':
[screenshot]
The labeling page looks like:
[screenshot]
When I open a labeling task, the shell outputs:
[screenshot]

What could the problem be? Thanks!

Error converting Label Studio output to RLE-format masks

(rtmdet-sam) F:\playground\label_anything>python F:\playground\label_anything\tools\convert_to_rle_mask_coco.py --json_file_path D:/Desktop/4cats/result.json --out_dir F:\images
Traceback (most recent call last):
  File "F:\playground\label_anything\tools\convert_to_rle_mask_coco.py", line 165, in <module>
    format_to_coco(args)
  File "F:\playground\label_anything\tools\convert_to_rle_mask_coco.py", line 65, in format_to_coco
    image_path_from = os.path.join('~/AppData/Local/label-studio/label-studio/media/upload/', os.path.dirname(contents[0]['data']['image']).split('/')[-1])
KeyError: 0
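A `KeyError: 0` on `contents[0]` means the loaded JSON is a dict rather than the list of tasks the script expects (an empty list would raise IndexError instead), which usually points to exporting in the wrong format. A quick inspection sketch:

import json

with open('result.json') as f:   # the file passed via --json_file_path
    contents = json.load(f)
print(type(contents))
print(contents.keys() if isinstance(contents, dict) else len(contents))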

[label anything] FileNotFoundError when adding Cloud Storage on Linux

The default image storage method in label anything is to upload your images to the Label Studio backend. But in many cases the unannotated images already exist on the backend machine, e.g. a Linux server. When I tried Label Studio's 'Add Cloud Storage' method to use images stored on the server, the error is:
Traceback (most recent call last):
  File "/root/anaconda3/envs/rtmdet-sam/lib/python3.9/site-packages/label_studio_ml/exceptions.py", line 39, in exception_f
    return f(*args, **kwargs)
  File "/root/anaconda3/envs/rtmdet-sam/lib/python3.9/site-packages/label_studio_ml/api.py", line 51, in _predict
    predictions, model = _manager.predict(tasks, project, label_config, force_reload, try_fetch, **params)
  File "/root/anaconda3/envs/rtmdet-sam/lib/python3.9/site-packages/label_studio_ml/model.py", line 617, in predict
    predictions = cls._current_model.model.predict(tasks, **kwargs)
  File "/mnt/data1/zhangyijun/Code/playground/label_anything/sam/mmdetection.py", line 165, in predict
    image_path = self.get_local_path(image_url)
  File "/root/anaconda3/envs/rtmdet-sam/lib/python3.9/site-packages/label_studio_ml/model.py", line 323, in get_local_path
    return get_local_path(url, project_dir=project_dir, hostname=self.hostname, access_token=self.access_token)
  File "/root/anaconda3/envs/rtmdet-sam/lib/python3.9/site-packages/label_studio_tools/core/utils/io.py", line 71, in get_local_path
    raise FileNotFoundError(filepath)
FileNotFoundError: /data/images/IMG_20210705_084125__01.jpg
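The backend is resolving /data/images/... as a path on its own filesystem. If the images live on the same machine as Label Studio, one possible setup (my assumption, using Label Studio's documented local-file serving switches) is to enable local file serving before starting it:

export LABEL_STUDIO_LOCAL_FILES_SERVING_ENABLED=true
export LABEL_STUDIO_LOCAL_FILES_DOCUMENT_ROOT=/data/images   # root directory of the images
label-studio start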

Any plan for fully automated labeling?

I see that you have introduced semi-automatic annotation; thank you for your work, but it is still rather laborious. Is it possible to combine this with the previously introduced mmdet-sam, so that detection and segmentation run automatically to produce the annotations?

In that case we would only need to specify the data folder to process and the category.txt listing the classes to detect and segment, then run the code and directly obtain an annotation file in COCO or another format. I would appreciate it if you have any plans for this.
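This is not the repo's API, just a rough sketch of what such a batch loop could look like with MMDetection 3.x inference plus SAM; the model names, score threshold, and output assembly are all placeholder assumptions:

import json
from pathlib import Path

import cv2
import numpy as np
from mmdet.apis import DetInferencer
from segment_anything import SamPredictor, sam_model_registry

inferencer = DetInferencer('rtmdet_tiny_8xb32-300e_coco')             # placeholder detector
sam = sam_model_registry['vit_b'](checkpoint='sam_vit_b_01ec64.pth')  # placeholder SAM weights
predictor = SamPredictor(sam)

records = []
for img_path in sorted(Path('data/images').glob('*.jpg')):            # placeholder folder
    img = cv2.cvtColor(cv2.imread(str(img_path)), cv2.COLOR_BGR2RGB)
    pred = inferencer(str(img_path))['predictions'][0]
    predictor.set_image(img)
    for bbox, label, score in zip(pred['bboxes'], pred['labels'], pred['scores']):
        if score < 0.5:                                                # placeholder threshold
            continue
        masks, _, _ = predictor.predict(box=np.array(bbox), multimask_output=False)
        records.append({'image': img_path.name, 'category_id': int(label),
                        'bbox': [float(v) for v in bbox], 'score': float(score),
                        'mask_area': int(masks[0].sum())})             # assemble real COCO RLE here

with open('auto_annotations.json', 'w') as f:
    json.dump(records, f)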

Problem with inference?

In mmdet_sam, can I use my own trained model for object detection followed by SAM segmentation? If so, what should be changed? When I directly pass the weights and config of a DINO model trained on the VisDrone aerial dataset, an error is shown, so I would like to ask whether only the 80 COCO categories are currently supported for inference.
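For the detection half, MMDetection's plain API loads any config/checkpoint pair regardless of COCO's 80 classes; a sketch, where the file names are placeholders for a VisDrone-trained DINO. If this works, the demo script may simply be assuming the COCO class list for its text labels:

from mmdet.apis import init_detector, inference_detector

model = init_detector('dino_visdrone.py', 'dino_visdrone.pth', device='cuda:0')  # placeholder files
result = inference_detector(model, 'demo.jpg')
print(result.pred_instances.labels)  # indices into your config's classes, not COCO's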

Problem with mmediting+sam: CUDA out of memory during testing on a 24 GB GPU

Here is the complete log:

04/19 14:44:16 - mmengine - INFO - Creating Linaqruf/anything-v3.0 by 'HuggingFace'
/home/wushuchen/projects/SAM_test/mmediting/mmedit/models/base_archs/wrapper.py:129: FutureWarning: Accessing config attribute block_out_channels directly via 'AutoencoderKL' object attribute is deprecated. Please access 'block_out_channels' over 'AutoencoderKL's config object instead, e.g. 'unet.config.block_out_channels'.
return getattr(self.model, name)
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:02<00:00, 5.54it/s]
0%| | 0/15 [00:00<?, ?it/s]
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/wushuchen/projects/SAM_test/playground/mmediting_sam/play_controlnet_animation_sam.py:108 │
│ in │
│ │
│ 105 │ init_default_scope('mmedit') │
│ 106 │ │
│ 107 │ # 1. generate animation with mmediting controlnet animation │
│ ❱ 108 │ generate_animation( │
│ 109 │ │ video=config.source_video_frame_path, │
│ 110 │ │ save_path=config.middle_video_frame_path, │
│ 111 │ │ prompt=config.prompt, │
│ │
│ /home/wushuchen/projects/SAM_test/playground/mmediting_sam/play_controlnet_animation_sam.py:19 │
│ in generate_animation │
│ │
│ 16 │ │ │ │ │ height): │
│ 17 │ editor = MMEdit(model_name='controlnet_animation') │
│ 18 │ │
│ ❱ 19 │ editor.infer( │
│ 20 │ │ video=video, │
│ 21 │ │ prompt=prompt, │
│ 22 │ │ negative_prompt=negative_prompt, │
│ │
│ /home/wushuchen/projects/SAM_test/mmediting/mmedit/edit.py:199 in infer │
│ │
│ 196 │ │ │ Dict or List[Dict]: Each dict contains the inference result of │
│ 197 │ │ │ each image or video. │
│ 198 │ │ """ │
│ ❱ 199 │ │ return self.inferencer( │
│ 200 │ │ │ img=img, │
│ 201 │ │ │ video=video, │
│ 202 │ │ │ label=label, │
│ │
│ /home/wushuchen/projects/SAM_test/mmediting/mmedit/apis/inferencers/__init__.py:116 in __call__ │
│ │
│ 113 │ │ Returns: │
│ 114 │ │ │ Union[Dict, List[Dict]]: Results of inference pipeline. │
│ 115 │ │ """ │
│ ❱ 116 │ │ return self.inferencer(**kwargs) │
│ 117 │ │
│ 118 │ def get_extra_parameters(self) -> List[str]: │
│ 119 │ │ """Each inferencer may has its own parameters. Call this function to │
│ │
│ /home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torch/autograd/grad_mode.p │
│ y:27 in decorate_context │
│ │
│ 24 │ │ @functools.wraps(func) │
│ 25 │ │ def decorate_context(*args, **kwargs): │
│ 26 │ │ │ with self.clone(): │
│ ❱ 27 │ │ │ │ return func(*args, **kwargs) │
│ 28 │ │ return cast(F, decorate_context) │
│ 29 │ │
│ 30 │ def _wrap_generator(self, func): │
│ │
│ /home/wushuchen/projects/SAM_test/mmediting/mmedit/apis/inferencers/controlnet_animation_inferen │
│ cer.py:208 in __call__ │
│ │
│ 205 │ │ │ concat_hed.paste(hed_image, (image_width, 0)) │
│ 206 │ │ │ concat_hed.paste(first_hed, (image_width * 2, 0)) │
│ 207 │ │ │ │
│ ❱ 208 │ │ │ result = self.pipe.infer( │
│ 209 │ │ │ │ control=concat_hed, │
│ 210 │ │ │ │ latent_image=concat_img, │
│ 211 │ │ │ │ prompt=prompt, │
│ │
│ /home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torch/autograd/grad_mode.p │
│ y:27 in decorate_context │
│ │
│ 24 │ │ @functools.wraps(func) │
│ 25 │ │ def decorate_context(*args, **kwargs): │
│ 26 │ │ │ with self.clone(): │
│ ❱ 27 │ │ │ │ return func(*args, **kwargs) │
│ 28 │ │ return cast(F, decorate_context) │
│ 29 │ │
│ 30 │ def _wrap_generator(self, func): │
│ │
│ /home/wushuchen/projects/SAM_test/mmediting/mmedit/models/editors/controlnet/controlnet.py:806 │
│ in infer │
│ │
│ 803 │ │ │ latent_model_input = self.test_scheduler.scale_model_input( │
│ 804 │ │ │ │ latent_model_input, t) │
│ 805 │ │ │ │
│ ❱ 806 │ │ │ down_block_res_samples, mid_block_res_sample = self.controlnet( │
│ 807 │ │ │ │ latent_model_input, │
│ 808 │ │ │ │ t, │
│ 809 │ │ │ │ encoder_hidden_states=text_embeddings, │
│ │
│ /home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torch/nn/modules/module.py │
│ :1130 in _call_impl │
│ │
│ 1127 │ │ # this function, and just call forward. │
│ 1128 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │
│ 1129 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1130 │ │ │ return forward_call(*input, **kwargs) │
│ 1131 │ │ # Do not call functions when jit is used │
│ 1132 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1133 │ │ if self._backward_hooks or _global_backward_hooks: │
│ │
│ /home/wushuchen/projects/SAM_test/mmediting/mmedit/models/base_archs/wrapper.py:159 in forward │
│ │
│ 156 │ │ Returns: │
│ 157 │ │ │ Any: The output of wrapped module's forward function. │
│ 158 │ │ """ │
│ ❱ 159 │ │ return self.model(*args, **kwargs) │
│ 160 │
│ │
│ /home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torch/nn/modules/module.py │
│ :1130 in _call_impl │
│ │
│ 1127 │ │ # this function, and just call forward. │
│ 1128 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │
│ 1129 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1130 │ │ │ return forward_call(*input, **kwargs) │
│ 1131 │ │ # Do not call functions when jit is used │
│ 1132 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1133 │ │ if self._backward_hooks or _global_backward_hooks: │
│ │
│ /home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/diffusers/models/controlne │
│ t.py:525 in forward │
│ │
│ 522 │ │ down_block_res_samples = (sample,) │
│ 523 │ │ for downsample_block in self.down_blocks: │
│ 524 │ │ │ if hasattr(downsample_block, "has_cross_attention") and downsample_block.has │
│ ❱ 525 │ │ │ │ sample, res_samples = downsample_block( │
│ 526 │ │ │ │ │ hidden_states=sample, │
│ 527 │ │ │ │ │ temb=emb, │
│ 528 │ │ │ │ │ encoder_hidden_states=encoder_hidden_states, │
│ │
│ /home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torch/nn/modules/module.py │
│ :1130 in _call_impl │
│ │
│ 1127 │ │ # this function, and just call forward. │
│ 1128 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │
│ 1129 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1130 │ │ │ return forward_call(*input, **kwargs) │
│ 1131 │ │ # Do not call functions when jit is used │
│ 1132 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1133 │ │ if self._backward_hooks or _global_backward_hooks: │
│ │
│ /home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/diffusers/models/unet_2d_b │
│ locks.py:867 in forward │
│ │
│ 864 │ │ │ │ )[0] │
│ 865 │ │ │ else: │
│ 866 │ │ │ │ hidden_states = resnet(hidden_states, temb) │
│ ❱ 867 │ │ │ │ hidden_states = attn( │
│ 868 │ │ │ │ │ hidden_states, │
│ 869 │ │ │ │ │ encoder_hidden_states=encoder_hidden_states, │
│ 870 │ │ │ │ │ cross_attention_kwargs=cross_attention_kwargs, │
│ │
│ /home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torch/nn/modules/module.py │
│ :1130 in _call_impl │
│ │
│ 1127 │ │ # this function, and just call forward. │
│ 1128 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │
│ 1129 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1130 │ │ │ return forward_call(*input, **kwargs) │
│ 1131 │ │ # Do not call functions when jit is used │
│ 1132 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1133 │ │ if self._backward_hooks or _global_backward_hooks: │
│ │
│ /home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/diffusers/models/transform │
│ er_2d.py:265 in forward │
│ │
│ 262 │ │ │
│ 263 │ │ # 2. Blocks │
│ 264 │ │ for block in self.transformer_blocks: │
│ ❱ 265 │ │ │ hidden_states = block( │
│ 266 │ │ │ │ hidden_states, │
│ 267 │ │ │ │ encoder_hidden_states=encoder_hidden_states, │
│ 268 │ │ │ │ timestep=timestep, │
│ │
│ /home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torch/nn/modules/module.py │
│ :1130 in _call_impl │
│ │
│ 1127 │ │ # this function, and just call forward. │
│ 1128 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │
│ 1129 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1130 │ │ │ return forward_call(*input, **kwargs) │
│ 1131 │ │ # Do not call functions when jit is used │
│ 1132 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1133 │ │ if self._backward_hooks or _global_backward_hooks: │
│ │
│ /home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/diffusers/models/attention │
│ .py:294 in forward │
│ │
│ 291 │ │ │ norm_hidden_states = self.norm1(hidden_states) │
│ 292 │ │ │
│ 293 │ │ cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not │
│ ❱ 294 │ │ attn_output = self.attn1( │
│ 295 │ │ │ norm_hidden_states, │
│ 296 │ │ │ encoder_hidden_states=encoder_hidden_states if self.only_cross_attention els │
│ 297 │ │ │ attention_mask=attention_mask, │
│ │
│ /home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/torch/nn/modules/module.py │
│ :1130 in _call_impl │
│ │
│ 1127 │ │ # this function, and just call forward. │
│ 1128 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │
│ 1129 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1130 │ │ │ return forward_call(*input, **kwargs) │
│ 1131 │ │ # Do not call functions when jit is used │
│ 1132 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1133 │ │ if self._backward_hooks or _global_backward_hooks: │
│ │
│ /home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/diffusers/models/attention │
│ _processor.py:243 in forward │
│ │
│ 240 │ │ # The Attention class can call different attention processors / attention func │
│ 241 │ │ # here we simply pass along all tensors to the selected processor class │
│ 242 │ │ # For standard processors that are defined here, **cross_attention_kwargs is e │
│ ❱ 243 │ │ return self.processor( │
│ 244 │ │ │ self, │
│ 245 │ │ │ hidden_states, │
│ 246 │ │ │ encoder_hidden_states=encoder_hidden_states, │
│ │
│ /home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/diffusers/models/attention │
│ _processor.py:382 in __call__ │
│ │
│ 379 │ │ key = attn.head_to_batch_dim(key) │
│ 380 │ │ value = attn.head_to_batch_dim(value) │
│ 381 │ │ │
│ ❱ 382 │ │ attention_probs = attn.get_attention_scores(query, key, attention_mask) │
│ 383 │ │ hidden_states = torch.bmm(attention_probs, value) │
│ 384 │ │ hidden_states = attn.batch_to_head_dim(hidden_states) │
│ 385 │
│ │
│ /home/wushuchen/anaconda3/envs/mmedit-sam/lib/python3.8/site-packages/diffusers/models/attention │
│ _processor.py:284 in get_attention_scores │
│ │
│ 281 │ │ │ baddbmm_input = attention_mask │
│ 282 │ │ │ beta = 1 │
│ 283 │ │ │
│ ❱ 284 │ │ attention_scores = torch.baddbmm( │
│ 285 │ │ │ baddbmm_input, │
│ 286 │ │ │ query, │
│ 287 │ │ │ key.transpose(-1, -2), │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: CUDA out of memory. Tried to allocate 9.00 GiB (GPU 0; 23.67 GiB total capacity; 14.61 GiB already allocated; 4.53 GiB free; 14.82 GiB reserved in
total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and
PYTORCH_CUDA_ALLOC_CONF
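The message itself suggests trying max_split_size_mb to reduce fragmentation, and the single attention call asking for 9 GiB is the thing to shrink (fewer or lower-resolution frames per batch). A minimal sketch of the allocator setting; the value 128 is a guess, and it must take effect before CUDA is initialized:

import os

# per the suggestion in the error text; tune or remove if it doesn't help
os.environ.setdefault('PYTORCH_CUDA_ALLOC_CONF', 'max_split_size_mb:128')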

Library version problem when reproducing mmtracking_open_detection

Hello, could you share the mmdet, mmcv, mmengine and mmcls versions you used to run mmtracking_open_detection? Following the tutorial I installed mmcv==2.0.0 and mmdet==3.0.0rc6, after which mmcls was reported as missing. But `mim install mmcls` only provides version 0.25.0, as shown below:
[screenshot]
Which version combination is needed to get the algorithm running?
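mmcls 0.25.0 belongs to the 0.x line, which requires MMCV < 2.0, while the 1.x release candidates were built against MMCV 2.x; a combination worth trying (my assumption, matching the versions quoted above):

mim install "mmcv>=2.0.0"
mim install "mmdet==3.0.0rc6"
mim install "mmcls>=1.0.0rc0"   # the 1.x rc line targets mmcv 2.x; 0.25.0 requires mmcv<2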

I have this problem.

Traceback (most recent call last):
  File ".\play_controlnet_animation_sam.py", line 6, in <module>
    from mmedit.edit import MMEdit
ModuleNotFoundError: No module named 'mmedit'

Neither `mim install mmedit` nor `pip install mmedit` solves it.
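The `MMEdit` class may only exist on the mmediting development branch rather than in the released mmedit package, so a source install is worth trying; a sketch of that assumption:

git clone https://github.com/open-mmlab/mmediting.git
cd mmediting
pip install -e .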

Validation Error

I followed the documentation exactly; at "add model -> validate and save" a validation error is shown:
[screenshot]

Meanwhile, the SAM backend console reports the following error:
[screenshot]

Export data error

When I select the COCO format to export, the following error occurs.

Traceback (most recent call last):
  File "F:\Anaconda\anaconda\envs\rtmdet-sam\lib\site-packages\rest_framework\views.py", line 506, in dispatch
    response = handler(request, *args, **kwargs)
  File "F:\Anaconda\anaconda\envs\rtmdet-sam\lib\site-packages\django\utils\decorators.py", line 43, in _wrapper
    return bound_method(*args, **kwargs)
  File "F:\Anaconda\anaconda\envs\rtmdet-sam\lib\site-packages\label_studio\data_export\api.py", line 183, in get
    export_stream, content_type, filename = DataExport.generate_export_file(
  File "F:\Anaconda\anaconda\envs\rtmdet-sam\lib\site-packages\label_studio\data_export\models.py", line 161, in generate_export_file
    converter.convert(input_json, tmp_dir, output_format, is_dir=False)
  File "F:\Anaconda\anaconda\envs\rtmdet-sam\lib\site-packages\label_studio_converter\converter.py", line 209, in convert
    self.convert_to_coco(
  File "F:\Anaconda\anaconda\envs\rtmdet-sam\lib\site-packages\label_studio_converter\converter.py", line 700, in convert_to_coco
    (x / 100 * width, y / 100 * height) for x, y in label["points"]
KeyError: 'points'
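The converter reaches label["points"], which only exists for polygon results, so the export likely contains result types the COCO converter cannot handle (e.g. brush masks produced by SAM). A sketch for checking which result types a raw JSON export contains, following Label Studio's task JSON structure:

import json

with open('project_export.json') as f:   # raw JSON export, placeholder name
    tasks = json.load(f)
types = {r['type'] for t in tasks for a in t.get('annotations', []) for r in a.get('result', [])}
print(types)   # brush or keypoint results here would explain the converter failing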

RAM usage increases every time a new image is selected

The label-studio ML backend goes OOM and gets killed after I use it for a while.
After I label some number of images, RAM usage increases substantially, and every time I switch from one image to another it increases even more.

Main screen: 34.6 GB RAM
[screenshot]

Load first image: 37.5 GB RAM
[screenshot]

Load second image: 42.1 GB RAM
[screenshot]

And it keeps increasing after that.
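The pattern is consistent with the backend keeping per-image state alive between tasks. A possible mitigation sketch for the backend's predict path; the placement is hypothetical, but reset_image() is segment_anything's own API for dropping the cached image embedding:

import gc
import torch

# after a task's prediction has been returned:
predictor.reset_image()    # drop the cached image embedding
gc.collect()
torch.cuda.empty_cache()   # frees cached GPU memory; host RAM relief depends on the lines above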

Images with Chinese characters in the path cannot be processed on Windows

On Windows, after uploading data, the tool cannot read images whose paths contain Chinese characters. Changing the image reading in playground\label_anything\sam\mmdetection.py from cv2.imread to cv2.imdecode does not fix it; predict() in mmdetection.py still errors at the line `if kwargs.get('context') is None:`. Is there any solution?
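For reference, the usual Windows workaround pairs np.fromfile with cv2.imdecode; if this is what was already tried, the error at `kwargs.get('context')` suggests the failure is elsewhere in predict(), but the read itself should look like:

import cv2
import numpy as np

def imread_unicode(path):
    """Read an image from a path that may contain non-ASCII characters."""
    data = np.fromfile(path, dtype=np.uint8)  # np.fromfile copes with Unicode Windows paths
    return cv2.imdecode(data, cv2.IMREAD_COLOR)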

Following the tutorial, installing label-studio changes the numpy version and causes widespread dependency errors

Following the label-studio SAM tutorial: installing label-studio forces numpy==1.21.6, which then conflicts with pytorch, cv2 and other dependencies. Very frustrating.
[screenshot]

I fell into this trap and had to reinstall pytorch, cv2, mmcv and a series of other libraries. Fortunately I eventually sorted out the dependencies and got the tutorial working.

I strongly suggest warning users in the tutorial that label-studio changes the numpy version and causes conflicts, and installing label-studio first, before pytorch, mmcv and the rest (a sketch of that order is below).

Thanks.
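The reporter's suggested order would look roughly like this; the pins are illustrative, not taken from the tutorial:

pip install label-studio           # let it pin numpy first
pip install torch torchvision      # then the CUDA stack on top
pip install -U openmim
mim install "mmcv>=2.0.0"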

Add model: having this problem

RUNTIME ERROR
Validation error
Successfully connected to http://127.0.0.1:8003/ but it doesn't look like a valid ML backend. Reason: 500 Server Error: INTERNAL SERVER ERROR for url: http://127.0.0.1:8003/setup.
Check the ML backend server console logs to check the status. There might be
something wrong with your model or it might be incompatible with the current labeling configuration.

AttributeError: 'MMDetection' object has no attribute 'value'

I started the SAM backend and successfully connected the model on the front-end page. However, when labeling, the SAM model is not triggered by prompt points or rectangles, and the terminal reports: AttributeError: 'MMDetection' object has no attribute 'value'.
[screenshot]

MMTracking Open Detection: an error is reported while running the test sample

Traceback (most recent call last):
  File "tracking_demo.py", line 31, in <module>
    from mmdet.models.trackers import ByteTracker
  File "/home/gcbm/workspace/track/playground/mmdetection/mmdet/models/__init__.py", line 10, in <module>
    from .reid import *  # noqa: F401,F403
  File "/home/gcbm/workspace/track/playground/mmdetection/mmdet/models/reid/__init__.py", line 2, in <module>
    from .base_reid import BaseReID
  File "/home/gcbm/workspace/track/playground/mmdetection/mmdet/models/reid/base_reid.py", line 17, in <module>
    class BaseReID(ImageClassifier):
NameError: name 'ImageClassifier' is not defined

Looking at the code, I found that the mmcls package was missing, so the import failed. I manually installed mmcls-0.25.0, after which a new error appeared:

Traceback (most recent call last):
  File "tracking_demo.py", line 31, in <module>
    from mmdet.models.trackers import ByteTracker
  File "/home/gcbm/workspace/track/playground/mmdetection/mmdet/models/__init__.py", line 3, in <module>
    from .data_preprocessors import *  # noqa: F401,F403
  File "/home/gcbm/workspace/track/playground/mmdetection/mmdet/models/data_preprocessors/__init__.py", line 6, in <module>
    from .reid_data_preprocessor import ReIDDataPreprocessor
  File "/home/gcbm/workspace/track/playground/mmdetection/mmdet/models/data_preprocessors/reid_data_preprocessor.py", line 13, in <module>
    import mmcls
  File "/home/gcbm/workspace/anaconda3/envs/mmtracking-sam/lib/python3.8/site-packages/mmcls/__init__.py", line 55, in <module>
    assert (mmcv_version >= digit_version(mmcv_minimum_version)
AssertionError: MMCV==2.0.0 is used but incompatible. Please install mmcv>=1.4.2, <=1.9.0.

I tried using a lower version of MMCV, but then the other packages did not work. Is this a conflict between mmcls and MMCV 2.0? I have not found a solution.
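This looks like the same mmcls 0.x vs MMCV 2.x conflict as the mmtracking_open_detection version issue above; the mmcls 1.x release candidates target MMCV 2.x, so this is my assumption of the fix:

mim install "mmcls>=1.0.0rc0"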

AssertionError: this error occurred while I was labeling

[ERROR] [label_studio_ml.model::get_result_from_last_job::131] 1685446124 job returns exception:
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/wyz-labelsam/lib/python3.9/site-packages/label_studio_ml/model.py", line 129, in get_result_from_last_job
    result = self.get_result_from_job_id(job_id)
  File "/home/ubuntu/anaconda3/envs/wyz-labelsam/lib/python3.9/site-packages/label_studio_ml/model.py", line 111, in get_result_from_job_id
    assert isinstance(result, dict)
AssertionError
When I am labeling there is a lag, and the backend also shows the above error message.

Export error

  File "…/site-packages/label_studio_converter/converter.py", line 917, in rotated_rectangle
    label["x"],
KeyError: 'x'

This happens when exporting to COCO or YOLO; it looks like the same result-type mismatch as the "Export data error" issue above. Please help.

Export coco to target cloud storage

Is it possible to export a file in COCO format to target cloud storage? Or how can I manually convert the annotations in cloud storage to COCO format? Thanks.
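As far as I know there is no built-in export to target storage; one workaround is to convert a raw JSON export locally with label-studio-converter (the same Converter class the export traceback above passes through; the constructor arguments here are my assumption) and then upload the output directory with your cloud provider's tools:

from label_studio_converter import Converter

c = Converter(config='label_config.xml', project_dir='.')    # your labeling config XML
c.convert('export.json', 'coco_out', 'COCO', is_dir=False)   # then upload coco_out/ to the bucket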
