
Comments (25)

Wolfybox commented on September 4, 2024

Thank you for the comment!
While reading the paper, I found that the index files should yield 'frame_number - 15' clips, since the 16-frame clips are made with a sliding window: if there are 180 frames, there will be 165 clips. The center frame of each clip is evaluated against the ground truth, which is why the ground-truth labels for the first 8 and the last 7 frames are excluded. I attained an AUC of 86.63% with MemAE on Ped2 (Test005 was excluded since its frames were missing).

I have modified the code that you suggested:

    import os
    import numpy as np

    def gen_frame_index(clip_len=16):
        vfolder = os.path.join(os.getcwd(), 'dataset/UCSD_P2_256/testing')
        save_dir = os.path.join(os.getcwd(), 'dataset/UCSD_P2_256/testing_idx')
        os.makedirs(save_dir, exist_ok=True)

        for vname in sorted(os.listdir(vfolder)):
            if vname == '.DS_Store':
                continue
            vdir = os.path.join(vfolder, vname)
            flist = sorted(os.listdir(vdir))
            fnum = len(flist)
            fnum_len = len(str(fnum))
            target_dir = os.path.join(save_dir, vname)
            os.makedirs(target_dir, exist_ok=True)
            # sliding window: one 16-frame clip starting at every frame
            for clip_i in range(fnum - clip_len + 1):
                clip_list = np.array(flist[clip_i: clip_i + clip_len])
                save_name = f'{str(clip_i).zfill(fnum_len)}.npy'
                np.save(os.path.join(target_dir, save_name), clip_list)

Thank you for sharing the code. It was very helpful.

I just noticed that the paper says "the normality of each frame is evaluated by the reconstruction error of the cuboid centering on it." So I guess the authors are referring to an overlapping sliding strategy.

lyn1874 commented on September 4, 2024

Thanks for the fruitful discussion.
I got an AUC of 94% on UCSDped2 using the pretrained model ckpt. The only difference from @Wolfybox's dataloader is that I simply used the torch transformation pipeline:

    frame_trans = transforms.Compose([
        transforms.Resize([height, width]),
        transforms.Grayscale(num_output_channels=1),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ])

I have also noticed gt_labels[8:-7] in the evaluation file, and I think the reason is that they assign the averaged reconstruction error of a video clip to its center frame: i.e., if the video clip starts with frame_001.jpg and ends with frame_016.jpg, then the averaged reconstruction error is taken as the error for frame_008.jpg.
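A minimal sketch of that center-frame alignment, assuming clip_errors[k] holds the reconstruction error of the 16-frame clip starting at frame k (names are illustrative, not from the repo):

    import numpy as np

    def align_scores(clip_errors, gt_labels, clip_len=16):
        # the clip starting at frame k covers frames [k, k + 16); its error is
        # assigned to the center frame k + 8, so score k matches gt_labels[k + 8]
        scores = np.asarray(clip_errors, dtype=float)
        # min-max normalize per video before computing frame-level AUC
        scores = (scores - scores.min()) / (scores.max() - scores.min() + 1e-8)
        # drop the first 8 and last 7 labels: those frames are never a clip center
        labels = np.asarray(gt_labels)[clip_len // 2: -(clip_len // 2 - 1)]
        assert len(scores) == len(labels)
        return scores, labels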

WYZhang999 commented on September 4, 2024

Can you share the Ped2 data preparation code or give me some guidance? I have been stuck on Ped2 data preparation for weeks.

WYZhang999 commented on September 4, 2024

Thank you very much.

Wolfybox commented on September 4, 2024

Can you share the Ped2 data preparation code or give me some guidance? I have been stuck on Ped2 data preparation for weeks.

    import math
    import os
    import numpy as np
    from tqdm import tqdm

    def gen_frame_index(clip_len=16):
        vfolder = r'F:\dataset\UCSD\ped2\testing\frames'
        save_dir = r'F:\dataset\UCSD\ped2\testing\indices'
        for vname in tqdm(os.listdir(vfolder)):
            vdir = os.path.join(vfolder, vname)
            flist = sorted(os.listdir(vdir))
            fnum = len(flist)
            clip_num = math.ceil(fnum / clip_len)
            clip_num_len = len(str(clip_num))
            target_dir = os.path.join(save_dir, vname)
            os.makedirs(target_dir, exist_ok=True)
            # non-overlapping clips: frames [0:16], [16:32], ...
            for clip_i in range(clip_num):
                start_fi = clip_i * clip_len
                end_fi = min((clip_i + 1) * clip_len, fnum)
                clip_list = np.array(flist[start_fi: end_fi])
                save_name = f'{str(clip_i).zfill(clip_num_len)}.npy'
                np.save(os.path.join(target_dir, save_name), clip_list)

Well, this is how I generate the so-called indices that the author's code requires. However, to use these indices, you will also have to modify a few lines in 'video_dataset.py'. The data preparation is actually not troublesome at all. The basic logic is simple: the frame indices (more specifically, the names of the images in each frame folder) are divided into clips and then saved to individual clip index files.
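As a sanity check, loading one of the generated index files should give back the frame filenames of that clip (the path and the .tif names below are illustrative):

    import numpy as np

    # first clip of Test001; Ped2 frames are typically named 001.tif, 002.tif, ...
    clip = np.load(r'F:\dataset\UCSD\ped2\testing\indices\Test001\00.npy')
    print(clip)  # e.g. ['001.tif' '002.tif' ... '016.tif']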

WYZhang999 commented on September 4, 2024

You are so nice! I still have a lot to learn. Have a nice day!

WYZhang999 commented on September 4, 2024

Excuse me, it's me again. Can you please share the training code? I want to learn from it, and I really appreciate your help.

Wolfybox commented on September 4, 2024

Excuse me, it's me again. Can you please share the training code? I want to learn from it, and I really appreciate your help.

Welp, I haven't done the training part. :P

WYZhang999 commented on September 4, 2024

Okay, fine. Thank you again~

WYZhang999 commented on September 4, 2024

Excuse me, it's me again. I want to use these indices, but I failed to modify the code in 'video_datasets.py'. Could I see your modified 'video_datasets.py'?

Wolfybox commented on September 4, 2024

Excuse me, it's me again. I want to use these indices, but I failed to modify the code in 'video_datasets.py'. Could I see your modified 'video_datasets.py'?

    import os

    import cv2
    import numpy as np
    import torch
    from torch.utils.data import Dataset

    class VideoDatasetOneDir(Dataset):
        def __init__(self, idx_dir, frame_root, is_testing=False, use_cuda=False, transform=None):
            self.idx_dir = idx_dir
            self.frame_root = frame_root
            self.idx_name_list = sorted(os.listdir(self.idx_dir))
            self.use_cuda = use_cuda
            self.transform = transform
            self.is_testing = is_testing

        def __len__(self):
            return len(self.idx_name_list)

        def __getitem__(self, clip_idx):
            """Get a video clip of stacked frames indexed by clip_idx."""
            idx_name = self.idx_name_list[clip_idx]
            frame_idx = np.load(os.path.join(self.idx_dir, idx_name))
            v_dir = self.frame_root

            sample_frame = cv2.imread(os.path.join(v_dir, frame_idx[0]), cv2.IMREAD_GRAYSCALE)
            h, w = sample_frame.shape

            # each sample is a concatenation of the indexed frames
            clip = []
            for fname in frame_idx:
                cur_frame = cv2.imread(os.path.join(v_dir, fname), cv2.IMREAD_GRAYSCALE)
                cur_frame = cv2.resize(cur_frame, (w + 8, h), interpolation=cv2.INTER_CUBIC)
                clip.append(torch.from_numpy(cur_frame))
            # pad short trailing clips by repeating the last frame
            if len(clip) < 16:
                clip += [clip[-1]] * (16 - len(clip))
            clip = torch.stack(clip, dim=0).unsqueeze(dim=0).float()
            return clip_idx, clip
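For reference, a minimal usage sketch for one test video (the paths are illustrative):

    from torch.utils.data import DataLoader

    dataset = VideoDatasetOneDir(
        idx_dir=r'F:\dataset\UCSD\ped2\testing\indices\Test001',
        frame_root=r'F:\dataset\UCSD\ped2\testing\frames\Test001',
        is_testing=True,
    )
    loader = DataLoader(dataset, batch_size=1, shuffle=False)
    for clip_idx, clip in loader:
        print(clip_idx.item(), clip.shape)  # e.g. torch.Size([1, 1, 16, H, W + 8])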

WYZhang999 commented on September 4, 2024

Thank you very much!!! Have a nice day! I love HIT!

callbarian commented on September 4, 2024

Thanks for sharing your code. I have a little confusion. I obtained the indices from the frames using the shared code, but what about the gt files? I downloaded the dataset directly from UCSD, and the gt is also given as frames, but in the dataset structure the author showed here, it seems the gt frames are transformed into one matrix file instead of clips.

I also do not understand the purpose of video_datasets.py.

Wolfybox commented on September 4, 2024

Thanks for sharing your code. I have a little confusion. I obtained the indices from the frames using the shared code, but what about the gt files? I downloaded the dataset directly from UCSD, and the gt is also given as frames, but in the dataset structure the author showed here, it seems the gt frames are transformed into one matrix file instead of clips.

I also do not understand the purpose of video_datasets.py.

The gt file for Ped2 is named 'ped2.mat'; it holds an array of 12 tuples indicating the starting and ending frame of the anomalous event in each test video. The corresponding evaluation part lies in 'scrip_eval_video.py' and 'util/eval.py'. However, the format of the gt file doesn't influence 'script_testing.py', since testing and evaluation are two separate files. This is how I load the gt file for Ped2:

    import scipy.io as sio

    gt_path = r'F:\dataset\UCSD\ped2\ped2.mat'
    gt_list = []
    gt_data = sio.loadmat(gt_path)['gt'][0]
    for gt_tuple in gt_data:
        gt_tuple = gt_tuple.squeeze()
        start, end = gt_tuple[0], gt_tuple[1]
        gt_list.append((start, end))

To generate the ground-truth label sequence, I applied the following processing, which is simple:

    # fnum_list holds the frame count of each test video
    y_trues = []
    for i in range(len(gt_list)):
        start, end = gt_list[i]
        fnum = fnum_list[i]
        y_true = [0] * start + [1] * (end - start) + [0] * (fnum - end)
        y_trues.extend(y_true)
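From there, frame-level AUC is just the ROC AUC over the concatenated per-frame scores and labels; a hedged sketch, assuming y_scores holds per-frame anomaly scores aligned one-to-one with y_trues (with any per-video trimming like gt_labels[8:-7] applied consistently to both):

    from sklearn.metrics import roc_auc_score

    # y_trues, y_scores: one entry per evaluated frame, over all test videos
    auc = roc_auc_score(y_trues, y_scores)
    print(f'frame-level AUC: {auc:.4f}')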

About the 'video_dataset.py' code, I'm afraid I can't explain further, since it would be a long story to describe my ideas for data loading.

WYZhang999 commented on September 4, 2024

Thanks a lot. It's very helpful.

callbarian commented on September 4, 2024

Thank you for the comment!
While reading the paper, I found that the index files should yield 'frame_number - 15' clips, since the 16-frame clips are made with a sliding window: if there are 180 frames, there will be 165 clips. The center frame of each clip is evaluated against the ground truth, which is why the ground-truth labels for the first 8 and the last 7 frames are excluded. I attained an AUC of 86.63% with MemAE on Ped2 (Test005 was excluded since its frames were missing).

I have modified the code that you suggested:

    import os
    import numpy as np

    def gen_frame_index(clip_len=16):
        vfolder = os.path.join(os.getcwd(), 'dataset/UCSD_P2_256/testing')
        save_dir = os.path.join(os.getcwd(), 'dataset/UCSD_P2_256/testing_idx')
        os.makedirs(save_dir, exist_ok=True)

        for vname in sorted(os.listdir(vfolder)):
            if vname == '.DS_Store':
                continue
            vdir = os.path.join(vfolder, vname)
            flist = sorted(os.listdir(vdir))
            fnum = len(flist)
            fnum_len = len(str(fnum))
            target_dir = os.path.join(save_dir, vname)
            os.makedirs(target_dir, exist_ok=True)
            # sliding window: one 16-frame clip starting at every frame
            for clip_i in range(fnum - clip_len + 1):
                clip_list = np.array(flist[clip_i: clip_i + clip_len])
                save_name = f'{str(clip_i).zfill(fnum_len)}.npy'
                np.save(os.path.join(target_dir, save_name), clip_list)

Thank you for sharing the code. It was very helpful.

LiUzHiAn commented on September 4, 2024

@callbarian

Actually, I think the way you prepared the dataset is more likely to be consistent with the original paper (i.e. the 16-frame-long clip sliding strategy)

LiUzHiAn commented on September 4, 2024

@Wolfybox

Yep, the 'cuboid centering on it' might give the clues. BTW, have you guys finished the training process?

Wolfybox commented on September 4, 2024

@Wolfybox

Yep, the 'cuboid centering on it' might give the clues. BTW, have you guys finished the training process?

I wrote a training script, yet it only got me an AUC of around 86% on Ped2. BTW, I noticed the author didn't implement cosine similarity when computing the attention weights.
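For reference, the paper describes addressing the memory with cosine similarity, a softmax, and hard shrinkage; a minimal sketch of that scheme (my reading of the paper, not the author's code; the threshold value is illustrative):

    import torch
    import torch.nn.functional as F

    def memory_addressing(z, mem, shrink_thres=0.0025, eps=1e-12):
        """z: (N, C) encoder features; mem: (M, C) memory items."""
        # attention weights from cosine similarity between queries and memory items
        att = F.linear(F.normalize(z, dim=1), F.normalize(mem, dim=1))  # (N, M)
        w = F.softmax(att, dim=1)
        # hard shrinkage promotes sparse addressing, then re-normalize to sum to 1
        w = F.relu(w - shrink_thres) * w / (torch.abs(w - shrink_thres) + eps)
        w = F.normalize(w, p=1, dim=1)
        return w @ mem  # (N, C) features reconstructed from memory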

sdjsngs commented on September 4, 2024

@Wolfybox Can you share some training details, like the initial learning rate, optimizer, and total epochs? I am re-implementing this paper this week.
Since the author's code uses gt_labels[8:-7], I suppose he ignores the border frames of each video when evaluating AUC. Did you do that too?

lyn1874 commented on September 4, 2024

Thanks for the fruitful discussion.
I got an AUC of 94% on UCSDped2 using the pretrained model ckpt. The only difference from @Wolfybox's dataloader is that I simply used the torch transformation pipeline:

    frame_trans = transforms.Compose([
        transforms.Resize([height, width]),
        transforms.Grayscale(num_output_channels=1),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ])

I have also noticed gt_labels[8:-7] in the evaluation file, and I think the reason is that they assign the averaged reconstruction error of a video clip to its center frame: i.e., if the video clip starts with frame_001.jpg and ends with frame_016.jpg, then the averaged reconstruction error is taken as the error for frame_008.jpg.

Can you share the training code, please?

https://github.com/lyn1874/memAE

donggong1 commented on September 4, 2024

Hi guys, thanks for the discussion and clarification. In particular, thanks to @lyn1874 for the wonderful repo and reproduction. I uploaded an example for dataset preparation and training. Hope it can be helpful.

gdwang08 commented on September 4, 2024

Thanks for the fruitful discussion.
I got an AUC of 94% on UCSDped2 using the pretrained model ckpt. The only difference from @Wolfybox's dataloader is that I simply used the torch transformation pipeline:

    frame_trans = transforms.Compose([
        transforms.Resize([height, width]),
        transforms.Grayscale(num_output_channels=1),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ])

I have also noticed gt_labels[8:-7] in the evaluation file, and I think the reason is that they assign the averaged reconstruction error of a video clip to its center frame: i.e., if the video clip starts with frame_001.jpg and ends with frame_016.jpg, then the averaged reconstruction error is taken as the error for frame_008.jpg.

Exactly! Using the PyTorch built-in transformation pipeline and the pretrained model provided by the author, I could also get 94.1277% on Ped2. Thanks very much.

abhishekaich27 commented on September 4, 2024

@gdwang08 @donggong1 @lyn1874 In the testing script, for a given video, why don't we compare the scores frame-wise? We can always save the reconstruction error, and hence the score, for every frame. Why would that be incorrect/different? Why do it with the center frame, as explained in the earlier comments?

It would be great if you could define what "frame-level AUC" means. I was under the impression that we compare each frame's score, but that doesn't seem to be the case.

huyi1998 commented on September 4, 2024

Thanks for the fruitful discussion.
I got an AUC of 94% on UCSDped2 using the pretrained model ckpt. The only difference from @Wolfybox's dataloader is that I simply used the torch transformation pipeline:

    frame_trans = transforms.Compose([
        transforms.Resize([height, width]),
        transforms.Grayscale(num_output_channels=1),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ])

I have also noticed gt_labels[8:-7] in the evaluation file, and I think the reason is that they assign the averaged reconstruction error of a video clip to its center frame: i.e., if the video clip starts with frame_001.jpg and ends with frame_016.jpg, then the averaged reconstruction error is taken as the error for frame_008.jpg.

Exactly! Using the PyTorch built-in transformation pipeline and the pretrained model provided by the author, I could also get 94.1277% on Ped2. Thanks very much.

Is the author lyn1874?
