
feathernets_face-anti-spoofing-attack-detection-challenge-cvpr2019's People

Contributors

andrey1994, dspmeng, edwardpwtsoi, flywme, hhb, liuyinhangx, shincytu, skibey, softwaregift, uartie, windyuan, wujunkai166, yaowang1, zongwave


feathernets_face-anti-spoofing-attack-detection-challenge-cvpr2019's Issues

About training and data augmentation

Hello, I have two questions about the training process.
1. The competition dataset includes RGB, IR, and depth data, but your code seems to train only on the depth data. Why not train the classifier on all three modalities together? Does mixed training perform worse?
2. On data augmentation: in data/our_filelist/2depth_train.txt, is sample-eye-nose_out obtained by keeping only the eye and nose regions of the original depth map and setting every other region to 0? If so, does this noticeably improve the network's generalization?
Thanks for sharing!
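If it helps, here is a minimal sketch of the masking idea described in point 2, under the assumption that the eye/nose region is given as a rectangle from a landmark detector; the coordinates below are illustrative placeholders, not the repo's actual preprocessing:

import numpy as np
from PIL import Image

def mask_outside_eye_nose(depth_path, box):
    """Keep only a rectangular eye/nose region of a depth map; zero out the rest.

    box = (left, top, right, bottom) is assumed to come from a face-landmark
    detector; the values used in the example call are illustrative only.
    """
    depth = np.array(Image.open(depth_path))
    masked = np.zeros_like(depth)
    left, top, right, bottom = box
    masked[top:bottom, left:right] = depth[top:bottom, left:right]
    return Image.fromarray(masked)

# hypothetical usage: region roughly covering eyes and nose in a 112x112 depth crop
# masked_img = mask_outside_eye_nose("sample_depth.jpg", box=(20, 25, 92, 80))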

About RGB-D-IR

Hello, in the RGB-D-IR dataset the depth map and the near-infrared image have the same resolution, but the RGB image's resolution differs from both. How should these three kinds of images be fused? Do you predict on each of them separately?
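One common pattern (an illustration only, not necessarily this repo's approach) is to resize every modality to the network input size and fuse at the score level:

import torch
from PIL import Image
from torchvision import transforms

# Resize every modality to the same input size before feeding the networks;
# the 224x224 size and the per-modality models are assumptions.
to_tensor = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def score_level_fusion(models_by_modality, image_paths):
    """Average softmax scores from per-modality models (score-level fusion)."""
    scores = []
    for name, model in models_by_modality.items():
        img = Image.open(image_paths[name]).convert('RGB')
        x = to_tensor(img).unsqueeze(0)
        with torch.no_grad():
            scores.append(torch.softmax(model(x), dim=1))
    return torch.stack(scores).mean(dim=0)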

Cannot use the pre-trained model mobilenet_v2.pth.tar

The mobilenet_v2.pth.tar downloaded from the Baidu Yun link you provided cannot be extracted. An earlier issue said the model can be used for training without extracting it, but it still does not work: when the model is loaded, parameters such as epoch cannot be read. Is the model unusable? Could you upload a new one?
The test command is:
python main.py --config="cfgs/mobilenetv2.yaml" --resume ./checkpoints/pre-trainedModels/mobilenet_v2.pth.tar --phase-test True --val True --val-save True
The output is:
Input image size: 224, test size: 224

Number of FLOPs: 306.17M
The network has 2226434 params.
total_params 2226434
D:\FeatherNets_Face-Anti-spoofing-Attack
=> loading checkpoint './checkpoints/pre-trainedModels/mobilenet_v2.pth.tar'
Traceback (most recent call last):
  File "main.py", line 406, in <module>
    main()
  File "main.py", line 151, in main
    args.start_epoch = checkpoint['epoch']
KeyError: 'epoch'
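If the checkpoint is otherwise valid, a possible workaround is to load it defensively instead of assuming the 'epoch' key exists; this is a sketch, assuming the .pth.tar file is a plain PyTorch checkpoint (the extension is just a naming convention, it is not an archive that needs extracting):

import torch

# Load the checkpoint file directly.
checkpoint = torch.load('./checkpoints/pre-trainedModels/mobilenet_v2.pth.tar',
                        map_location='cpu')

# Fall back gracefully if the bookkeeping keys are missing.
start_epoch = checkpoint.get('epoch', 0) if isinstance(checkpoint, dict) else 0
state_dict = checkpoint.get('state_dict', checkpoint) if isinstance(checkpoint, dict) else checkpoint

# model.load_state_dict(state_dict, strict=False)  # hypothetical model instance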

Data Preprocessing Code

Could you please attach the data preprocessing code? I want to test the model with my own pictures.
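In case it helps others, here is a minimal single-image preprocessing sketch, assuming a 224x224 input and the normalization constants quoted in other issues on this page; the repo's actual preprocessing pipeline may differ:

import torch
from PIL import Image
from torchvision import transforms

# Normalization constants taken from the training configs quoted in other issues.
normalize = transforms.Normalize(mean=[0.14300402, 0.1434545, 0.14277956],
                                 std=[0.10050353, 0.100842826, 0.10034215])

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize,
])

# img = Image.open('my_face_depth.jpg').convert('RGB')
# batch = preprocess(img).unsqueeze(0)  # shape (1, 3, 224, 224), ready for model(batch)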

FileNotFoundError: [Errno 2] No such file or directory: 'test_private_list.txt'

run data/fileList.py

Traceback (most recent call last):
  File "fileList.py", line 78, in <module>
    for line in fileinput.input("test_private_list.txt"):
  File "D:\ProgramData\Anaconda3\lib\fileinput.py", line 252, in __next__
    line = self._readline()
  File "D:\ProgramData\Anaconda3\lib\fileinput.py", line 364, in _readline
    self._file = open(self._filename, self._mode)
FileNotFoundError: [Errno 2] No such file or directory: 'test_private_list.txt'
@SoftwareGift

batch_size?

Why do the training results get much worse when I increase batch_size to 512? Normally a larger batch size should not give worse results than a smaller one.

about paper

Hi author, could you post the paper in advance so we can study it? Thanks.

FileNotFoundError: [Errno 2] No such file or directory: 'F:\\CVPR2019\\data/Training/fake_part/CLKJ_BS1010/06_enm_b.rssdk/depth/151.jpg'

When I run "python main.py --config="cfgs/fishnet150-32.yaml" --b 32 --lr 0.01 --every-decay 30 --fl-gamma 2 >> fishnet150-train.log", the following error occurs:

D:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\upsampling.py:129: UserWarning: nn.Upsample is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.{} is deprecated. Use nn.functional.interpolate instead.".format(self.name))
Traceback (most recent call last):
  File "main.py", line 402, in <module>
    main()
  File "main.py", line 198, in main
    train(train_loader, model, criterion, optimizer, epoch)
  File "main.py", line 230, in train
    for i, (input, target) in enumerate(train_loader):
  File "D:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 637, in __next__
    return self._process_next_batch(batch)
  File "D:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 658, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
FileNotFoundError: Traceback (most recent call last):
  File "D:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 138, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "D:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 138, in <listcomp>
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "F:\CVPR2019\read_data.py", line 94, in __getitem__
    depth = Image.open(depth_dir[idx])
  File "D:\ProgramData\Anaconda3\lib\site-packages\PIL\Image.py", line 2609, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'F:\\CVPR2019\\data/Training/fake_part/CLKJ_BS1010/06_enm_b.rssdk/depth/151.jpg'

@SoftwareGift

Error when training with FeatherNetB

I use the command line
python main.py --config="cfgs/FeatherNetB-32.yaml" --b 32 --lr 0.01 --every-decay 60 --fl-gamma 3 >> MobileLiteNetB-bs32--train.log
but get the following error:

Traceback (most recent call last):
  File "main.py", line 406, in <module>
    main()
  File "main.py", line 113, in main
    model = models.__dict__[args.arch]()
  File "/home/plato/FeatherNets_Face-Anti-spoofing-Attack-Detection-Challenge-CVPR2019-regression/models/FeatherNet.py", line 161, in FeatherNetB
    model = FeatherNet(se = True,avgdown=True)
  File "/home/plato/FeatherNets_Face-Anti-spoofing-Attack-Detection-Challenge-CVPR2019-regression/models/FeatherNet.py", line 87, in __init__
    super(MobileLiteNet, self).__init__()
NameError: name 'MobileLiteNet' is not defined

How should I fix this?
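A hedged guess at the fix, based only on the traceback: the class appears to have been renamed from MobileLiteNet to FeatherNet, but the super() call still uses the old name. Updating that call (or using the argument-free form) should resolve the NameError; a minimal sketch:

import torch.nn as nn

class FeatherNet(nn.Module):
    def __init__(self, se=False, avgdown=False):
        # was: super(MobileLiteNet, self).__init__()  -> NameError after the class rename
        super(FeatherNet, self).__init__()  # or simply: super().__init__()
        self.se = se
        self.avgdown = avgdown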

Request for the validation set

Hello, the dataset downloaded from Baidu Yun does not contain a Val folder. How can I obtain the validation set? Thanks.

Problem during validation

Hello, I get an error when validating after training for one epoch. I traced it to:
_,predicted = torch.max(soft_output.data, 1)
predicted = predicted.to('cpu').detach().numpy()

Here predicted is not a tensor of 0s and 1s; instead it is a tensor like this:
Tensor: tensor([920, 744, 920, 818, 416, 920, 818, 920, 919, 920, 818, 818, 920, 974,
920, 855, 919, 744, 503, 818, 744, 920, 818, 684, 818, 503, 920, 919,
920, 818, 919, 974], device='cuda:0')

so tn, fp, fn, and tp cannot be computed correctly in the later confusion_matrix step.
Where could the problem be? I randomly took 2000 samples from train_list as my val_list and changed nothing else.
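One possible explanation (an assumption, not confirmed against the repo): indices such as 920 can only come from a 1000-way output, which suggests the ImageNet classification head was never replaced, so torch.max over dim 1 returns indices in [0, 999] rather than {0, 1}. A hedged sketch of swapping in a binary head, assuming the last child module of the network is the classifier Linear layer:

import torch.nn as nn

def ensure_binary_head(model, num_classes=2):
    """Replace a 1000-way final Linear layer with a binary head.

    Assumes the last child module of the network is the classifier Linear layer;
    adapt the attribute lookup to the actual model definition.
    """
    name, last = list(model.named_children())[-1]
    if isinstance(last, nn.Linear) and last.out_features != num_classes:
        setattr(model, name, nn.Linear(last.in_features, num_classes))
    return model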

About model training

Has anyone used a trained model in a real system? How is the accuracy?

How to convert FeatherNet from PyTorch to ncnn

We trained a PyTorch model based on FeatherNet and want to run it on an embedded device.
We first converted the PyTorch model to ONNX and verified in Python that the ONNX model gives the same results as the PyTorch model. We then converted it to an ncnn model with ncnn's onnx2ncnn tool; the conversion succeeded and the model can be loaded and run from C++, but its predictions are completely wrong. How should we go about debugging this?
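For reference, a minimal ONNX export sketch of the kind described above (the input size and opset are assumptions). When the ONNX model is correct but the ncnn output is wrong, the mismatch is usually in the C++ preprocessing (BGR vs RGB order, 0-255 vs 0-1 scaling, missing mean/std normalization) rather than in the converted weights, so that is worth checking first:

import torch

def export_to_onnx(model, onnx_path="feathernet.onnx", input_size=224):
    """Export an eval-mode PyTorch model to ONNX for later onnx2ncnn conversion."""
    model.eval()
    dummy = torch.randn(1, 3, input_size, input_size)  # assumed input shape
    torch.onnx.export(model, dummy, onnx_path,
                      input_names=["input"], output_names=["output"],
                      opset_version=11)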

How to perform well on test set?

Hello, I used your code to train FeatherNetB on depth images and got an ACER of 0.9080% on the val set, which is a good result. But I get an ACER of 2.1473% on the test set, an apparent decline. Could you offer some advice on improving the result, other than ensembling? We want to keep the model simple and fast.

test code

Hi, is there a standalone test script? Could you release it?

Fusion

Which part of the paper describes the fusion of the three kinds of feature maps? In your code I only see the depth part. Looking forward to your reply ;-)

streaming module

Hi, the streaming module flattens the 64×4×4 feature map into a 1024-dimensional vector, but only the first two values are used for binary classification. Is the DWConv still meaningful in that case, or am I misunderstanding something? Thanks.
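For context, a minimal sketch of a streaming-module-style head as I understand it (the sizes are assumptions chosen so that a stride-2 depthwise conv yields 64×4×4 = 1024 values when flattened; this is not copied from the repo):

import torch
import torch.nn as nn

class StreamingHead(nn.Module):
    """Sketch of a streaming-module-style head (sizes are assumptions).

    A stride-2 depthwise conv downsamples the last feature map, and the result
    is flattened directly (no global average pooling, no fully connected layer).
    """
    def __init__(self, channels=64, kernel_size=3, stride=2):
        super().__init__()
        self.dwconv = nn.Conv2d(channels, channels, kernel_size,
                                stride=stride, padding=1, groups=channels, bias=False)

    def forward(self, x):                # e.g. x: (N, 64, 8, 8)
        x = self.dwconv(x)               # -> (N, 64, 4, 4)
        return torch.flatten(x, 1)       # -> (N, 1024) feature vector

# fv = StreamingHead()(torch.randn(2, 64, 8, 8))  # fv.shape == (2, 1024)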

How to compute the normalization values from the CASIA-SURF val set?

# computed according to the CASIA-SURF val set
normalize = transforms.Normalize(mean=[0.14300402, 0.1434545, 0.14277956],
std=[0.10050353, 0.100842826, 0.10034215])
How are these normalization values computed? Do you have any references?
How much does training benefit from using this normalization?
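For reference, a minimal sketch of how per-channel mean/std values like these are typically computed from a set of images; the file-list handling and image size here are placeholders, not the repo's script:

import numpy as np
from PIL import Image

def compute_mean_std(image_paths, size=(224, 224)):
    """Per-channel mean and std over images scaled to [0, 1]."""
    pixel_sum = np.zeros(3)
    pixel_sq_sum = np.zeros(3)
    count = 0
    for path in image_paths:
        img = np.asarray(Image.open(path).convert('RGB').resize(size), dtype=np.float64) / 255.0
        pixel_sum += img.reshape(-1, 3).sum(axis=0)
        pixel_sq_sum += (img.reshape(-1, 3) ** 2).sum(axis=0)
        count += img.shape[0] * img.shape[1]
    mean = pixel_sum / count
    std = np.sqrt(pixel_sq_sum / count - mean ** 2)
    return mean, std

# mean, std = compute_mean_std(open('val_list.txt').read().split())  # hypothetical file list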

How do I run the "6 Train FeatherNetB" step?

Hi @SoftwareGift, thanks for your great project! I trained your model with python3 main.py --config="cfgs/mobilenetv2.yaml" --b 8 --lr 0.001 --every-decay 40 --fl-gamma 2 >> mobilenetv2-bs32-train.log and obtained more than 95% accuracy. One small problem: I had to mkdir the logs directory first. But I also want to train your best model, so I used your 6th training method, python3 main.py --config="cfgs/FeatherNetB-32.yaml" --b 32 --lr 0.01 --every-decay 60 --fl-gamma 3 >> MobileLiteNetB-bs32--train.log, and an error appears. The feedback is:

main.py:91: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
config = yaml.load(f)
Traceback (most recent call last):
  File "main.py", line 406, in <module>
    main()
  File "main.py", line 113, in main
    model = models.__dict__[args.arch]()
KeyError: 'FeatherNetB'

I don't know how to fix this. Please tell me how to train FeatherNetB correctly. Thank you.
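A hedged guess at the cause: KeyError: 'FeatherNetB' means models.__dict__ has no entry under that name, i.e. the constructor is not exported from the models package under the architecture name used in the config. The kind of export that would make the lookup succeed looks like this (the actual module layout is an assumption):

# models/__init__.py (sketch; the actual file layout may differ)
# Re-export the constructor so that models.__dict__['FeatherNetB'] resolves.
from .FeatherNet import FeatherNetB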

How to get the actual prediction result for one image?

How do I get the prediction result, e.g. the final prediction score, from the last part of the code below?


import numpy as np
import torch
from PIL import Image
from torchvision import transforms

import models

# Load the model
model = models.fishnet150()

# Load the weights from the checkpoint file into the model
checkpoint = torch.load('checkpoints/pre-trainedModels/fishnet150_ckpt.tar')  # load the given checkpoint file
model.load_state_dict(checkpoint['state_dict'])  # import the weights into the model

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")  # choose the compute device, CPU or GPU
torch.cuda.set_device(device)
# torch.backends.cudnn.benchmark = True
model = model.module.to(device)
model.eval()

# Prepare the transform
img_size = 224
ratio = 224.0 / float(img_size)
normalize = transforms.Normalize(mean=[0.14300402, 0.1434545, 0.14277956],
                                 std=[0.10050353, 0.100842826, 0.10034215])
transform = transforms.Compose([transforms.Resize(int(224 * ratio)), transforms.CenterCrop(img_size),
                                transforms.ToTensor(), normalize])

# Read a single RGB face image for prediction
img = Image.open("face1.jpg")
img = img.convert('RGB')
img = img.resize((224, 224))
img = transform(img)

img = np.array(img)  # shape (3, 224, 224)
img = np.expand_dims(img, 0)  # shape is now (1, 3, 224, 224)

input = torch.tensor(img, dtype=torch.float32, device=device)
output = model(input)  # output shape is [1, 1000]

soft_output = torch.softmax(output, dim=-1)  # softmax over the last dimension; shape stays [1, 1000]
preds = soft_output.to('cpu').detach().numpy()  # convert to a numpy array of shape (1, 1000)
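Based on the validation snippet quoted in another issue on this page (torch.max(soft_output.data, 1)), one hedged way to read a score out of the code above is shown below. It assumes the loaded checkpoint has a binary head where index 1 is the live/real class; with the 1000-way ImageNet head noted in the comment above, the first two softmax values are not meaningful scores:

# Assumption: index 1 is the "real face" class and index 0 is "attack",
# mirroring the binary labels used during training.
real_score = preds[0][1]                         # probability-like score for the live class
predicted_class = int(np.argmax(preds[0][:2]))   # 0 = attack, 1 = real (assumed mapping)
print(predicted_class, real_score)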
