
aleflabo / mocodad

59 stars · 7 forks · 3.34 MB

The official PyTorch implementation of the IEEE/CVF International Conference on Computer Vision (ICCV) '23 paper Multimodal Motion Conditioned Diffusion Model for Skeleton-based Video Anomaly Detection.

Home Page: https://openaccess.thecvf.com/content/ICCV2023/html/Flaborea_Multimodal_Motion_Conditioned_Diffusion_Model_for_Skeleton-based_Video_Anomaly_Detection_ICCV_2023_paper.html

License: MIT License

Python 100.00%
anomaly-detection artificial-intelligence computer-vision gcn iccv

mocodad's Introduction

Hi! I'm Alessandro Flaborea

PhD in Computer Science @ Sapienza University of Rome

  • 🔭 Research areas: Computer Vision and Machine Learning
  • 🌱 I’m currently working on Video Anomaly Detection, Human Pose Forecasting, Egocentric Vision, Hyperbolic Neural Networks
  • 🧑‍💻 I'm a member of the Perception and Intelligence Lab (PINlab)
  • 📖 You can find my Publications here: Google Scholar
  • 📫 How to reach me: Linkedin Badge Gmail Badge X
  • ⚡ Tools I use daily: Python PyTorch VS Code Weights & Biases

mocodad's People

Contributors

aleflabo · stdrr


mocodad's Issues

Train and evaluate with custom dataset

Hi, I want to ask whether any version of the code supports training or testing on a custom dataset. The current code doesn't seem to support this, and the expected data format is not documented.
It would be really helpful if you could provide code for converting to your data format, or a note on how to organize the data files to run the code. Thank you in advance!

Inconsistent accuracy

Hi, thank you for open-sourcing your great work!
I trained the MoCoDAD model on the Avenue dataset following the README and then ran the evaluation; the accuracy is 86.3%.
But when I use the pretrained models and run the evaluation, the accuracy is 89%. I trained with the default parameters in the mocodad_train.yaml file, so I am not sure what the problem is.
I see that the "use_hr" parameter in the mocodad_train.yaml file is false. Should I set this parameter to true during training?
I hope you can take some time to answer my question. Thank you very much!

Why perform affine transformation on original coordinates?

Hi, thank you for open-sourcing your great work!
While reading your code recently, I found that an affine transformation is applied to the original coordinates during dataset preprocessing. What impact does this have on the results, and on which paper is this transformation based? I want to study the impact of this transformation on the results. Thank you!

UBnormal - hr_bool_masks

Hi, thank you for your work!
At your provided GDRive link you provide the Avenue, HR-ShanghaiTech and UBnormal data.
Are the test_frame_masks already HR-related or not?

Also according to you code in get_hr_ubnormal_mask() in the line path_to_boolean_masks = f'./data/UBnormal/hr_bool_masks/{split}/test_frame_mask/*' the folder hr_bool_masks should be in the UBnormal dataset. However, there is no such folder in the GDRive download.

Thank you in advance!
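For reference, the lookup the issue points at can be sketched as follows. `list_hr_masks` is a hypothetical helper (not in the repo) that mirrors the glob pattern from get_hr_ubnormal_mask() and makes the missing-folder case fail loudly instead of silently matching no files:

```python
import glob
import os

def list_hr_masks(split, root="./data/UBnormal"):
    """Glob the HR boolean mask files the way get_hr_ubnormal_mask()
    does, and raise if the hr_bool_masks folder is absent."""
    pattern = os.path.join(root, "hr_bool_masks", split, "test_frame_mask", "*")
    files = sorted(glob.glob(pattern))
    if not files:
        raise FileNotFoundError(f"no HR boolean masks matching {pattern}")
    return files
```

A check like this would have surfaced the missing hr_bool_masks folder immediately rather than producing a confusing downstream error.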

some questions about the dataset

I would like to ask some questions about the dataset. During testing, for example on the ShanghaiTech-HR dataset, only a small part of the data has ground-truth annotations, while there is much more test data. How do you finally make the test data correspond to the ground-truth data?

IsADirectoryError: [Errno 21] Is a directory: '/workspace/MoCoDAD/checkpoints/HR-Avenue/train_experiment'

Hi, thank you for open-sourcing your great work!
However, when I test MoCoDAD by running

python eval_MoCoDAD.py --config /args.exp_dir/args.dataset_choice/args.dir_name/config.yaml

I get the following error:

File "/workspace/MoCoDAD/eval_MoCoDAD.py", line 39, in <module>
    out = trainer.test(model, dataloaders=loader, ckpt_path=ckpt_path)
IsADirectoryError: [Errno 21] Is a directory: '/workspace/MoCoDAD/checkpoints/HR-Avenue/train_experiment'

How can I solve this problem? Hoping for your answer, thank you!
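The error suggests that `ckpt_path` points at the experiment directory rather than at a `.ckpt` file inside it. A minimal, hypothetical workaround (not part of the repo) is to resolve a directory path to a concrete checkpoint file before calling `trainer.test`:

```python
import glob
import os

def resolve_ckpt(path):
    """If `path` is a directory, return the first .ckpt file inside it;
    otherwise return `path` unchanged."""
    if os.path.isdir(path):
        candidates = sorted(glob.glob(os.path.join(path, "*.ckpt")))
        if not candidates:
            raise FileNotFoundError(f"no .ckpt files in {path}")
        return candidates[0]
    return path
```

For example, `ckpt_path = resolve_ckpt(ckpt_path)` just before the `trainer.test(...)` call (exact variable name in eval_MoCoDAD.py assumed).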

How can I test HR-UBnormal?

Hello, this project is awesome.
I'm testing your checkpoints and have checked HR-Avenue, HR-STC, and UBnormal.
However, I get an error because there are no UBnormal hr_bool_masks trajectories.
Can I only test with test_frame_mask, or how can I test HR-UBnormal?

Could you help me with this error, please?

Dear authors, since I am not familiar with the training steps: after downloading the UBnormal dataset locally and training according to your README, I get the following error:

(simclr) C:\Users\34252\Desktop\MoCoDAD-main>python train_MoCoDAD.py --config config/UBnormal/mocodad_train.yaml
Namespace(config='config/UBnormal/mocodad_train.yaml')
config/UBnormal/mocodad_train.yaml
Experiment directories created in ./checkpoints\UBnormal\train_experiment
'cp' is not recognized as an internal or external command, operable program or batch file.
Seed set to 999
Traceback (most recent call last):
  File "C:\Users\34252\Desktop\MoCoDAD-main\train_MoCoDAD.py", line 65, in <module>
    _, train_loader, _, val_loader = get_dataset_and_loader(args, split=args.split, validation=args.validation)
  File "C:\Users\34252\Desktop\MoCoDAD-main\utils\dataset.py", line 310, in get_dataset_and_loader
    dataset = PoseDatasetRobust(path_to_data=args.data_dir,
  File "C:\Users\34252\Desktop\MoCoDAD-main\utils\dataset.py", line 223, in __init__
    arr = _read(fname, dtype=dtype, comment=comment, delimiter=delimiter,
  File "C:\Users\34252\.conda\envs\simclr\Lib\site-packages\numpy\lib\npyio.py", line 999, in _read
    arr = _load_from_filelike(
UnicodeDecodeError: 'gbk' codec can't decode byte 0xc0 in position 51: illegal multibyte sequence

I would like to know how to set up the dataset structure if I train only on the UBnormal dataset and use the validation set to identify abnormal behavior. Looking forward to your early reply! I would appreciate it!
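The UnicodeDecodeError comes from NumPy falling back to the Windows locale codec (here 'gbk') when no encoding is given. A hedged sketch, assuming the released trajectory CSVs are UTF-8 encoded (the filename and call site are placeholders, not the repo's actual ones):

```python
import numpy as np

def load_trajectory_csv(fname):
    """Read a comma-separated trajectory file, forcing UTF-8 so that a
    non-UTF-8 Windows locale does not make NumPy try the 'gbk' codec."""
    return np.loadtxt(fname, dtype=np.float32, delimiter=",", encoding="utf-8")
```

In practice this would mean passing `encoding='utf-8'` through to the loadtxt call inside utils/dataset.py (exact call site assumed). The earlier `'cp' is not recognized` message is a separate issue: the training script apparently shells out to the Unix `cp` command, which does not exist on Windows; running under WSL or Git Bash avoids it.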
