
cr-fiqa's Introduction

This is the official repository of the paper:

CR-FIQA: Face Image Quality Assessment by Learning Sample Relative Classifiability

Paper accepted at CVPR 2023

arXiv: CR-FIQA

Update


CR-FIQA training

  1. In the paper, we employ MS1MV2 as the training dataset for CR-FIQA(L), which can be downloaded from InsightFace (MS1M-ArcFace in DataZoo)
    1. Download the MS1MV2 dataset from InsightFace and strictly follow its license terms
  2. We use CASIA-WebFace as the training dataset for CR-FIQA(S), which can be downloaded from InsightFace (CASIA in DataZoo)
    1. Download the CASIA-WebFace dataset from InsightFace and strictly follow its license terms
  3. Unzip the dataset and place it in the data folder
  4. Install the requirements from requirements.txt
  5. pip install -r requirements.txt
  6. All code is trained and tested with PyTorch 1.7.1. Installation details are available at https://pytorch.org/get-started/locally/
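
Before training, it can help to verify that the unpacked dataset is where the code expects it. The sketch below is only an illustration: the faces_emore folder name matches the LFW bin path used later in this README, and the train.rec/train.idx/property file names are assumptions based on the standard InsightFace MS1M-ArcFace archive layout.

# Hypothetical sanity check for the unpacked training archive (file names are
# assumptions based on the standard InsightFace MS1M-ArcFace package).
import os

data_root = os.path.join(".", "data", "faces_emore")  # adjust if your data folder lives elsewhere
for name in ["train.rec", "train.idx", "property", "lfw.bin"]:
    path = os.path.join(data_root, name)
    print(f"{path}: {'found' if os.path.exists(path) else 'MISSING'}")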

CR-FIQA(L)

Set the following in config.py:

  1. config.output to output dir
  2. config.network = "iresnet100"
  3. config.dataset = "emoreIresNet"
  4. Run ./run.sh
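
For reference, a minimal sketch of the relevant config.py entries for CR-FIQA(L). The field names follow the list above; the EasyDict container and the concrete output path are assumptions based on common InsightFace-style training configurations, and all other fields keep their defaults.

# Sketch of the CR-FIQA(L) settings in config.py (container type and output path are assumptions).
from easydict import EasyDict as edict

config = edict()
config.output = "output/cr_fiqa_l"   # where checkpoints and logs are written (example path)
config.network = "iresnet100"        # large backbone used for CR-FIQA(L)
config.dataset = "emoreIresNet"      # MS1MV2 training dataset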

CR-FIQA(S)

Set the following in config.py:

  1. config.output to output dir
  2. config.network = "iresnet50"
  3. config.dataset = "webface"
  4. Run ./run.sh
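
The CR-FIQA(S) configuration differs from the sketch above only in the backbone and dataset entries (again, the output path is just an example):

config.output = "output/cr_fiqa_s"   # example output directory
config.network = "iresnet50"         # smaller backbone used for CR-FIQA(S)
config.dataset = "webface"           # CASIA-WebFace training dataset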

Pretrained model

Evaluation

Follow these steps to reproduce the results on XQLFW:

  1. Download the aligned XQLFW (please download xqlfw_aligned_112.zip)
  2. Note: if you have an unaligned dataset, please do alignment first using Alignment
  3. Unzip XQLFW (Folder structure should look like this ./data/XQLFW/xqlfw_aligned_112/)
  4. Also download xqlfw_pairs.txt to ./data/XQLFW/xqlfw_pairs.txt
  5. Set (in feature_extraction/extract_xqlfw.py) path = "./data/XQLFW" to your XQLFW data folder and outpath = "./data/quality_data" where you want to save the preprocessed data
  6. Run python extract_xqlfw.py (it creates the output folder, saves the images in BGR format, creates image_path_list.txt and pair_list.txt)
  7. Run evaluation/getQualityScore.py to estimate the quality scores
    1. CR-FIQA(L)
      1. Download the pretrained model
      2. run: python3 evaluation/getQualityScore.py --data_dir "./data/quality_data" --datasets "XQLFW" --model_path "path_to_pretrained_CR_FIQA_L_model" --backbone "iresnet100" --model_id "181952" --score_file_name "CRFIQAL.txt"
    2. CR-FIQA(S)
      1. Download the pretrained model
      2. run: python3 evaluation/getQualityScore.py --data_dir "./data/quality_data" --datasets "XQLFW" --model_path "path_to_pretrained_CR_FIQA_S_model" --backbone "iresnet50" --model_id "32572" --score_file_name "CRFIQAS.txt"

Note: Our model processes aligned images (see Alignment). The images should be in RGB color space. If you use OpenCV to load the images, make sure to convert from BGR to RGB, since OpenCV loads images in BGR channel order. All images are normalized to pixel values between -1 and 1, as in QualityModel.py
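
A minimal sketch of this preprocessing, assuming images are loaded with OpenCV; the function name is illustrative, and the actual logic lives in QualityModel.py.

# Illustrative preprocessing, mirroring the note above: BGR -> RGB conversion
# and scaling of pixel values to [-1, 1] before feeding the network.
import cv2
import numpy as np

def preprocess(image_path):
    img = cv2.imread(image_path)                 # OpenCV reads images as BGR
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # convert to the RGB order the model expects
    img = img.astype(np.float32)
    img = (img / 255.0 - 0.5) / 0.5              # scale pixel values to [-1, 1]
    img = np.transpose(img, (2, 0, 1))           # HWC -> CHW for PyTorch
    return img[np.newaxis, ...]                  # add a batch dimension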

The quality score of LFW, AgeDB-30, CFP-FP, CALFW, CPLFW can be produced by following these steps:

  1. LFW, AgeDB-30, CFP-FP, CALFW, and CPLFW are included in the training dataset folder downloaded from InsightFace
  2. Set (in extract_bin.py) path = "/data/faces_emore/lfw.bin" to your LFW bin file and outpath = "./data/quality_data" where you want to save the preprocessed data (subfolder will be created)
  3. Run python extract_bin.py (it creates the output folder, saves the images in BGR format, creates image_path_list.txt and pair_list.txt)
  4. Run evaluation/getQualityScore.py to estimate the quality scores (set --datasets to the name of the subfolder created under ./data/quality_data by extract_bin.py, e.g. "lfw")
    1. CR-FIQA(L)
      1. Download the pretrained model
      2. run: python3 evaluation/getQualityScore.py --data_dir "./data/quality_data" --datasets "lfw" --model_path "path_to_pretrained_CR_FIQA_L_model" --backbone "iresnet100" --model_id "181952" --score_file_name "CRFIQAL.txt" --color_channel "BGR"
    2. CR-FIQA(S)
      1. Download the pretrained model
      2. run: python3 evaluation/getQualityScore.py --data_dir "./data/quality_data" --datasets "lfw" --model_path "path_to_pretrained_CR_FIQA_S_model" --backbone "iresnet50" --model_id "32572" --score_file_name "CRFIQAS.txt" --color_channel "BGR"
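
Once a score file has been written, it can be used to rank images by predicted quality. The helper below is hypothetical and assumes one "image_path score" pair per line, which may differ from the exact format written by getQualityScore.py.

# Hypothetical helper: load a CR-FIQA score file (assumed format: "image_path score"
# per line) and return the images sorted from highest to lowest predicted quality.
def load_scores(score_file):
    scores = {}
    with open(score_file) as f:
        for line in f:
            if not line.strip():
                continue
            path, score = line.strip().rsplit(maxsplit=1)
            scores[path] = float(score)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: ranked = load_scores("CRFIQAL.txt")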

Plotting ERC curves

  1. Download a pretrained face recognition model, e.g. ElasticFace-Arc, MagFace, CurricularFace or ArcFace
  2. Run CUDA_VISIBLE_DEVICES=0 python feature_extraction/extract_emb.py --model_path ./pretrained/ElasticFace --model_id 295672 --dataset_path "./data/quality_data/XQLFW" --modelname "ElasticFaceModel"
    1. Note: change the path to the pretrained model and the other arguments according to the evaluated model
  3. Run python3 ERC/erc.py (details in ERC/README.md)
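
For intuition, an ERC plots a verification error (typically the FNMR at a fixed threshold) against the fraction of lowest-quality comparisons that are rejected. The sketch below is a conceptual illustration with illustrative variable names; the actual implementation is in ERC/erc.py.

# Conceptual ERC computation: drop comparisons in order of increasing quality score
# and recompute the false non-match rate (FNMR) on the retained pairs.
import numpy as np

def erc_curve(similarities, is_genuine, qualities, threshold, reject_rates):
    # The quality of a pair is usually taken as the minimum of its two image scores.
    order = np.argsort(qualities)                    # lowest quality first
    sims, genuine = similarities[order], is_genuine[order]
    fnmrs = []
    for r in reject_rates:
        keep = slice(int(r * len(sims)), None)       # reject the lowest-quality fraction r
        g = genuine[keep]
        fnmr = np.mean(sims[keep][g] < threshold)    # genuine pairs falsely rejected at this threshold
        fnmrs.append(fnmr)
    return np.array(fnmrs)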

Citation

If you use any of the code or models provided in this repository, please cite the following paper:

@InProceedings{Boutros_2023_CVPR,
    author    = {Boutros, Fadi and Fang, Meiling and Klemt, Marcel and Fu, Biying and Damer, Naser},
    title     = {CR-FIQA: Face Image Quality Assessment by Learning Sample Relative Classifiability},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {5836-5845}
}

License

This project is licensed under the terms of the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. Copyright (c) 2021 Fraunhofer Institute for Computer Graphics Research IGD Darmstadt


cr-fiqa's Issues

Evaluate on custom dataset

Hi,

May I know if I can evaluate the pretrained model on my custom dataset? I would like to get the predicted score for each test image. Besides the images themselves, is there anything else I should prepare to get the score results?

Pretrained model of ArcFace

Dear author, thank you so much for this work. I have implemented the other methods except ArcFace. Can you specify the pretrained model you used for extract_emb.py, and the bash script that properly calls extract_emb.py? Thank you.

inference problem

Nice work!
I am wondering why dropout=0.4 is used during training but not during inference?
QualityModel._get_model:
backbone = iresnet50(num_features=512, qs=1, use_se=False).to(f"cuda:{ctx}")

Range of values qs should take

Hi,

Thanks for sharing your work.
I wonder if there is an inherent range of values that the quality score can take; I see values greater than 2, which looks outside the expected [0, 1] range in the literature.

From the CR equation, with CCS in [-1, 1] and NNCCS in [-0.9, 1], CR appears to have a minimum just below 0 and a maximum of around 10.

Would you please elaborate on that?

Thanks

Readme Typo

Hi, There is a minor typo under point 6 of the evaluation section:
getQualityScorce.py -> getQualityScore.py

Inconsistency between the paper and the code

The paper says that the NNCCS value range is shifted from [-1, +1] to [ε, 2+ε] by adding a 1+ε term; however, the code does not add this term (max_negative_cloned[index,label.view(-1)]= -1e-12). Do we need to add 1+ε for the quality score?

BGR vs RGB

Great work!
I would like to know if there is any specific reason you use BGR format instead of regular RGB for the model training and testing.

Thank you!

benchmark for timing / speed

Hi,

In a video / real-time environment, checking face quality to determine which faces should be sent to recognition is important.

Do you have any faces-per-second benchmark? Is it suitable for use in video detection/recognition?

Question about the FIQA process

Hi, thanks for your amazing work! I have a question about the FIQA part. I'm confused about the meaning of "pair" in XQLFW. Also, why should I save the images in BGR format (by running extract_xqlfw.py) before getting the quality score?

If I have some images in a folder, can I compute their scores directly without saving them in BGR format?

The performance of face recognition

In the provided code, loss_qs is directly added to the main CE loss before the backward pass.
Since the label (CCS/NNCCS) is not stable, I wonder whether this regression task affects the final performance of the backbone / face recognition?

Thanks in advance!
