
hrnet-for-fashion-landmark-estimation.pytorch's People

Contributors

dependabot[bot], shenhanqian


hrnet-for-fashion-landmark-estimation.pytorch's Issues

Question about testing

[attached image: val_400_gt_pred]

I tested the checkpoint using the command from the README:

python tools/test.py \
    --cfg experiments/deepfashion2/hrnet/w48_384x288_adam_lr1e-3.yaml \
    TEST.MODEL_FILE models/pose_hrnet-w48_384x288-deepfashion2_mAP_0.7017.pth \
    TEST.USE_GT_BBOX True

I turned on the debug switch in the config file, but the saved image results are very bad (see the attached image above). Any idea what the problem is?

ValueError in pycocotools cocoeval.computeOks: operands could not be broadcast together with shapes (294,) (17,)

The following error occurs during training:
    for catId in catIds}
  File "/home/sa/anaconda3/envs/torch1.7/lib/python3.6/site-packages/pycocotools/cocoeval.py", line 229, in computeOks
    e = (dx**2 + dy**2) / vars / (gt['area']+np.spacing(1)) / 2
ValueError: operands could not be broadcast together with shapes (294,) (17,)
@ShenhanQian
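
The shapes in the error point at the cause: pycocotools' keypoint evaluation defaults to the 17 COCO person keypoints and their length-17 OKS sigma vector, while this model outputs 294 DeepFashion2 landmark channels. If the evaluation is going through stock pycocotools rather than the repo's own evaluation path, one possible workaround (a sketch, not the authors' fix; the helper name and the constant 0.05 sigma are placeholders) is to hand COCOeval a 294-length sigma vector:

```python
import numpy as np
from pycocotools.cocoeval import COCOeval

NUM_JOINTS = 294  # DeepFashion2 landmark channels; pycocotools defaults to 17

def make_deepfashion2_evaluator(coco_gt, coco_dt):
    # Illustrative helper: recent pycocotools versions read the OKS sigmas
    # from the params object, so a length-294 vector avoids the (294,) vs
    # (17,) broadcast error. The constant 0.05 is only a placeholder; real
    # per-landmark sigmas should come from the DeepFashion2 evaluation setup.
    evaluator = COCOeval(coco_gt, coco_dt, iouType='keypoints')
    evaluator.params.kpt_oks_sigmas = np.full(NUM_JOINTS, 0.05)
    return evaluator
```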

Infer on a test image

How can we get the clothes bbox, clothes class, scale, and the other parameters if we want to run this repo on a test image?
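
For what it's worth, a top-down pipeline like this one usually takes the clothes bbox and category from a separate detector (or a manually drawn box) and then converts the bbox into the center/scale pair the dataset code consumes. Below is a rough sketch of that conversion, assuming the 288x384 input size from the config and the pixel_std of 200 used in HRNet-style code; the function name and defaults are illustrative, not taken from this repo.

```python
import numpy as np

def box_to_center_scale(box, model_input=(288, 384), pixel_std=200.0):
    """Convert an (x, y, w, h) clothing bbox into the center/scale pair used
    by HRNet-style top-down pose pipelines (illustrative, not the repo's API)."""
    x, y, w, h = box
    center = np.array([x + 0.5 * w, y + 0.5 * h], dtype=np.float32)

    # Pad the shorter side so the box matches the network's aspect ratio.
    aspect_ratio = model_input[0] / model_input[1]   # 288 / 384 = 0.75
    if w > aspect_ratio * h:
        h = w / aspect_ratio
    else:
        w = h * aspect_ratio
    scale = np.array([w / pixel_std, h / pixel_std], dtype=np.float32) * 1.25
    return center, scale
```

The bbox and the category label themselves still have to come from a detector or a manual annotation; SELECT_CAT in the config shown further down lists the 13 DeepFashion2 category ids.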

Visualization problem

First, thanks for sharing this great work! Here is an issue I ran into.

I tried to visualize the results by running the script:

python tools/test.py --cfg experiments/deepfashion2/hrnet/w48_384x288_adam_lr1e-3.yaml TEST.MODEL_FILE models/pose_hrnet-w48_384x288-deepfashion2_mAP_0.7017.pth TEST.USE_GT_BBOX True DATASET.MINI_DATASET True TAG 'experiment description' WORKERS 4 TEST.BATCH_SIZE_PER_GPU 8 TRAIN.BATCH_SIZE_PER_GPU 8

The config file is:

AUTO_RESUME: false
CUDNN:
  BENCHMARK: true
  DETERMINISTIC: false
  ENABLED: true
DATA_DIR: ''
GPUS: (1,)
OUTPUT_DIR: 'output'
LOG_DIR: 'log'
WORKERS: 8
PRINT_FREQ: 100
PIN_MEMORY: true

DATASET:
  COLOR_RGB: false
  DATASET: 'deepfashion2'
  DATA_FORMAT: jpg
  FLIP: true
  NUM_JOINTS_HALF_BODY: 8
  PROB_HALF_BODY: 0.3
  ROOT: 'data/deepfashion2/'
  ROT_FACTOR: 15 # 45
  SCALE_FACTOR: 0.1 # 0.35
  TEST_SET: 'validation'
  TRAIN_SET: 'train'
  MINI_DATASET: True
  SELECT_CAT: [1,2,3,4,5,6,7,8,9,10,11,12,13]
MODEL:
  INIT_WEIGHTS: true
  NAME: pose_hrnet
  NUM_JOINTS: 294
  PRETRAINED: ''
  TARGET_TYPE: gaussian
  IMAGE_SIZE:
    - 288
    - 384
  HEATMAP_SIZE:
    - 72
    - 96
  SIGMA: 2 # 3
  EXTRA:
    PRETRAINED_LAYERS:
      - 'conv1'
      - 'bn1'
      - 'conv2'
      - 'bn2'
      - 'layer1'
      - 'transition1'
      - 'stage2'
      - 'transition2'
      - 'stage3'
      - 'transition3'
      - 'stage4'
    FINAL_CONV_KERNEL: 1
    STAGE2:
      NUM_MODULES: 1
      NUM_BRANCHES: 2
      BLOCK: BASIC
      NUM_BLOCKS:
        - 4
        - 4
      NUM_CHANNELS:
        - 48
        - 96
      FUSE_METHOD: SUM
    STAGE3:
      NUM_MODULES: 4
      NUM_BRANCHES: 3
      BLOCK: BASIC
      NUM_BLOCKS:
        - 4
        - 4
        - 4
      NUM_CHANNELS:
        - 48
        - 96
        - 192
      FUSE_METHOD: SUM
    STAGE4:
      NUM_MODULES: 3
      NUM_BRANCHES: 4
      BLOCK: BASIC
      NUM_BLOCKS:
        - 4
        - 4
        - 4
        - 4
      NUM_CHANNELS:
        - 48
        - 96
        - 192
        - 384
      FUSE_METHOD: SUM
LOSS:
  USE_TARGET_WEIGHT: true
TRAIN:
  BATCH_SIZE_PER_GPU: 8
  SHUFFLE: true
  BEGIN_EPOCH: 0
  END_EPOCH: 210
  OPTIMIZER: adam
  LR: 0.001 # 0.001
  LR_FACTOR: 0.1
  LR_STEP:
    - 170
    - 200
  WD: 0.
  GAMMA1: 0.99
  GAMMA2: 0.0
  MOMENTUM: 0.9
  NESTEROV: false
TEST:
  BATCH_SIZE_PER_GPU: 8
  COCO_BBOX_FILE: ''
  DEEPFASHION2_BBOX_FILE: ''
  BBOX_THRE: 1.0
  IMAGE_THRE: 0.0 # threshold for detected bbox to be fed into HRNet
  IN_VIS_THRE: 0.2
  MODEL_FILE: ''
  NMS_THRE: 1.0
  OKS_THRE: 0.9 # the lower threshold for a peak point in a heatmap to be kept
  USE_GT_BBOX: true
  FLIP_TEST: true
  POST_PROCESS: true
  SHIFT_HEATMAP: true
DEBUG:
  DEBUG: True
  SAVE_BATCH_IMAGES_GT: false
  SAVE_BATCH_IMAGES_PRED: false
  SAVE_BATCH_IMAGES_GT_PRED: True
  SAVE_HEATMAPS_GT: false
  SAVE_HEATMAPS_PRED: false
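
As a side note on the config: HEATMAP_SIZE follows directly from IMAGE_SIZE, since the HRNet head predicts heatmaps at 1/4 of the input resolution.

```python
# IMAGE_SIZE and HEATMAP_SIZE are tied together by HRNet's output stride of 4.
IMAGE_SIZE = (288, 384)
HEATMAP_SIZE = tuple(s // 4 for s in IMAGE_SIZE)
assert HEATMAP_SIZE == (72, 96)
```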

I changed the DEBUG config options to True, but it still does not save any images. Image saving only works when I change BATCH_SIZE_PER_GPU to 1. Even then, the image-saving function is based on a torch grid, which results in a very weird visualization because the scales of the keypoints and of the output differ. Could you please look into this problem? I am using a single RTX 3080 Ti GPU on Ubuntu 18.04.
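
Until the grid-based debug saver handles this properly, one workaround is to draw the predictions yourself and rescale the heatmap-space coordinates (72x96) back to the network input (288x384). A minimal sketch, assuming you already have the per-joint peak coordinates in heatmap pixels (the function below is illustrative, not part of the repo):

```python
import cv2

def draw_predictions(image_bgr, coords_heatmap,
                     heatmap_size=(72, 96), image_size=(288, 384)):
    # Rescale keypoints from heatmap resolution to input resolution and draw them.
    sx = image_size[0] / heatmap_size[0]   # 288 / 72 = 4
    sy = image_size[1] / heatmap_size[1]   # 384 / 96 = 4
    for x, y in coords_heatmap:            # (x, y) in heatmap pixel space
        cv2.circle(image_bgr, (int(x * sx), int(y * sy)), 2, (0, 255, 0), -1)
    return image_bgr
```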

Question about the test result

Thanks for the great work! I have a question about the output of the provided pretrained model.
When I test the pretrained model, it returns a tensor of shape [batch_size, 294, height, width]. Could you give some explanation of the number "294"?
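
As far as I understand, 294 is NUM_JOINTS from the config above: one heatmap channel per landmark, pooled over all 13 DeepFashion2 categories, so for any single garment only the channels of its own category carry meaningful peaks. A sketch of turning the [batch, 294, H, W] output into per-joint coordinates and confidences (illustrative; the repo's own decoding additionally applies post-processing):

```python
import torch

def decode_heatmaps(heatmaps: torch.Tensor):
    # heatmaps: [batch, 294, H, W], e.g. [8, 294, 96, 72]
    b, num_joints, h, w = heatmaps.shape
    flat = heatmaps.view(b, num_joints, -1)
    conf, idx = flat.max(dim=2)                       # peak value / flat index
    coords = torch.stack((idx % w, idx // w), dim=2)  # (x, y) in heatmap space
    return coords.float(), conf
```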

How to predict landmarks for new images?

First, I would like to thank the authors for sharing the code. The work inspired me a lot, and I'm trying to follow up on your method to find keypoints in my own input images.
However, I found it hard to modify test.py directly for the case where the input is not the validation set, which really confused me. Is there a straightforward way to generate landmarks for input images that are not in the DeepFashion2 dataset?
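
While waiting for an official answer, a rough outline of standalone inference is: crop the garment with some bbox, resize to the 288x384 input, normalize, run the network, and take per-channel argmax. The sketch below assumes `model` is the pose_hrnet network already built from the config and loaded with the released checkpoint, and that the crop comes from an external detector or a manual box; it uses the ImageNet mean/std normalization of the original HRNet code, so double-check it against this repo's dataset transforms.

```python
import cv2
import numpy as np
import torch

def predict_landmarks(model, crop_bgr):
    # Resize the garment crop to the network input (width 288, height 384).
    resized = cv2.resize(crop_bgr, (288, 384)).astype(np.float32) / 255.0
    # This config sets COLOR_RGB: false, so the channel order stays BGR.
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    chw = np.ascontiguousarray(((resized - mean) / std).transpose(2, 0, 1))
    tensor = torch.from_numpy(chw).unsqueeze(0)       # [1, 3, 384, 288]

    model.eval()
    with torch.no_grad():
        heatmaps = model(tensor)[0]                   # [294, 96, 72]
    num_joints, h, w = heatmaps.shape
    conf, idx = heatmaps.view(num_joints, -1).max(dim=1)
    # Map heatmap coordinates back to the original crop's pixel space.
    xs = (idx % w).float() * crop_bgr.shape[1] / w
    ys = (idx // w).float() * crop_bgr.shape[0] / h
    return torch.stack((xs, ys), dim=1).numpy(), conf.numpy()
```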
