
Comments (12)

BichenWuUCB commented on July 22, 2024

Thanks for your question. I'll look into it. @cksl

cksl commented on July 22, 2024

@BichenWuUCB I found a possible bug in demo.py. I modified my demo.py as shown below, inserting seven lines of code (marked with 【new codes】 comments).
The new mAP I get on KITTI is:

  • 60.11% on easy, 51.41% on hard

The result is better, but there is still a gap compared with your paper (Table 2):

  • 81.4% on easy, 68.5% on hard

I think the reason for the change is that some input images are 1224x370 and get resized to 1242x375, so the predicted boxes have to be restored to the original scale.
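
Concretely, for one such image (a minimal worked example; the 100.0 coordinate is made up):

# a 1224x370 image is resized to the 1242x375 network input
x_scale = 1224.0 / 1242.0  # ~0.9855
y_scale = 370.0 / 375.0    # ~0.9867
# a box edge predicted at x = 100.0 in network coordinates
# belongs at x ~ 98.55 in the original image
x_orig = 100.0 * x_scale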

There must be other problems; I hope you can look into this carefully. Thank you, Bichen!

import cv2
import numpy as np
import tensorflow as tf

# missing imports and flag definitions restored (as in the repo's demo.py)
from config import kitti_squeezeDetPlus_config
from nets import SqueezeDetPlus

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string('checkpoint', '', """Path to the model checkpoint.""")
tf.app.flags.DEFINE_string('gpu', '0', """GPU id.""")


def image_demo():
  """Detect pedestrians and write KITTI-format result files."""

  with tf.Graph().as_default():
    # Load model
    mc = kitti_squeezeDetPlus_config()
    mc.BATCH_SIZE = 1
    # model parameters will be restored from the checkpoint
    mc.LOAD_PRETRAINED_MODEL = False
    model = SqueezeDetPlus(mc, FLAGS.gpu)
    saver = tf.train.Saver(model.model_params)

    # set model path
    FLAGS.checkpoint = r'/home/project/HumanDetection/squeezeDet_github/models/squeezeDetPlus/model.ckpt-95000'
    # set test data paths
    basic_image_path = r'/home/dataSet/kitti/ori_data/left_image/testing/image_2/'
    list_path = r'/home/dataSet/kitti/ori_data/left_image/testing/test_list.txt'
    write_result_path = r'/home/dataSet/kitti/ori_data/left_image/testing/run_out/'

    with open(list_path, 'rt') as F_read_list:
      image_list_name = [x.strip() for x in F_read_list.readlines()]

    print('image numbers: ', len(image_list_name))

    count_num = 0
    pedestrian_index = 1  # class index for pedestrian
    keep_score = 0.05     # probability threshold for keeping detections

    # fixed fields of the KITTI result-file format around the box and score
    default_str_1 = 'Pedestrian -1 -1 -10'
    default_str_2 = '-1 -1 -1 -1000 -1000 -1000 -10'

    with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
      saver.restore(sess, FLAGS.checkpoint)

      for file_name in image_list_name:
        read_full_name = basic_image_path + file_name
        im = cv2.imread(read_full_name)
        if im is None:
          print(file_name, ' is empty!')
          continue

        # 【new codes】 remember the original size so that boxes can be
        # mapped back from network coordinates to image coordinates
        ori_height, ori_width, _ = im.shape
        x_scale = float(ori_width) / mc.IMAGE_WIDTH
        y_scale = float(ori_height) / mc.IMAGE_HEIGHT

        im = im.astype(np.float32, copy=False)
        im = cv2.resize(im, (mc.IMAGE_WIDTH, mc.IMAGE_HEIGHT))
        input_image = im - mc.BGR_MEANS

        # Detect
        det_boxes, det_probs, det_class = sess.run(
            [model.det_boxes, model.det_probs, model.det_class],
            feed_dict={model.image_input: [input_image], model.keep_prob: 1.0})

        # NMS filter
        final_boxes, final_probs, final_class = model.filter_prediction(
            det_boxes[0], det_probs[0], det_class[0])

        # only keep high-probability detections
        keep_idx = [idx for idx in range(len(final_probs))
                    if final_probs[idx] > keep_score]
        final_boxes = [final_boxes[idx] for idx in keep_idx]
        final_probs = [final_probs[idx] for idx in keep_idx]
        final_class = [final_class[idx] for idx in keep_idx]

        # -------------- write files -----------------------
        F_w_one_by_one = open(write_result_path + file_name.replace('png', 'txt'), 'wt')
        rect_num = final_class.count(pedestrian_index)

        print('count: ', count_num)
        count_num += 1

        if rect_num == 0:
          F_w_one_by_one.close()
          continue

        goal_index = [idx for idx, value in enumerate(final_class)
                      if value == pedestrian_index]

        for kk in goal_index:
          box = final_boxes[kk]

          # convert (cx, cy, w, h) to corner coordinates
          xmin = box[0] - box[2] / 2.0
          ymin = box[1] - box[3] / 2.0
          xmax = box[0] + box[2] / 2.0
          ymax = box[1] + box[3] / 2.0

          # 【new codes】 restore boxes to the original image scale
          xmin *= x_scale
          ymin *= y_scale
          xmax *= x_scale
          ymax *= y_scale

          line_2 = default_str_1 + ' ' + str(xmin) + ' ' + str(ymin) + ' ' + \
              str(xmax) + ' ' + str(ymax) + ' ' + default_str_2 + ' ' + \
              str(final_probs[kk]) + '\n'
          F_w_one_by_one.write(line_2)

        F_w_one_by_one.close()


def main(argv=None):
    image_demo()


if __name__ == '__main__':
    tf.app.run()

andreapiso commented on July 22, 2024

Both in training and in the demo, the resize function distorts the image, which cannot be good for learning. It would probably be better to rescale while keeping the aspect ratio and pad the remainder; this is already what happens when the image is smaller than the given size.
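
A minimal sketch of what I mean, assuming OpenCV and NumPy (this letterbox helper is hypothetical, not part of the repo):

import cv2
import numpy as np

def letterbox(im, target_w, target_h, pad_value=0):
    """Resize keeping the aspect ratio, then pad to (target_h, target_w)."""
    h, w = im.shape[:2]
    scale = min(float(target_w) / w, float(target_h) / h)
    new_w, new_h = int(round(w * scale)), int(round(h * scale))
    resized = cv2.resize(im, (new_w, new_h))
    canvas = np.full((target_h, target_w, im.shape[2]), pad_value, dtype=im.dtype)
    # paste into the top-left corner, so predicted boxes map back to the
    # original image with a single division by `scale`
    canvas[:new_h, :new_w] = resized
    return canvas, scale

Boxes predicted on the padded input would then map back by dividing their coordinates by scale.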

ByeonghakYim commented on July 22, 2024

In my opinion (from experience), the KITTI dataset contains sequential images (consecutive frames are highly correlated), and for the tests in the paper they use randomly sampled training and validation data. This is why the results differ.
I recommend using the split method from the 3DOP paper; they considered this problem, so no sequence appears on both sides. A sketch of the idea follows.
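
A minimal sketch of a sequence-aware split, assuming you already have a mapping from image id to drive/sequence id (e.g. parsed from the KITTI devkit's train_mapping.txt; building that mapping is not shown here):

import random
from collections import defaultdict

def split_by_sequence(image_to_drive, val_fraction=0.25, seed=0):
    """Split images so that no drive (sequence) lands in both sets."""
    by_drive = defaultdict(list)
    for img, drive in image_to_drive.items():
        by_drive[drive].append(img)
    drives = sorted(by_drive)
    random.Random(seed).shuffle(drives)
    val_target = val_fraction * len(image_to_drive)
    train, val = [], []
    for d in drives:
        # fill the validation set one whole drive at a time
        bucket = val if len(val) < val_target else train
        bucket.extend(by_drive[d])
    return train, val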

cksl commented on July 22, 2024

@ByeonghakYim Yes, the KITTI training dataset has sequential images.
However, all the results in Table 2 of the paper should be from the KITTI test set; otherwise the mAP comparison is meaningless.

dojoscan commented on July 22, 2024

@cksl I was under the impression that you can only submit to the KITTI evaluation servers once per paper. I would definitely like clarification on this issue.

Edit: Seems to be a validation set actually #18

cksl commented on July 22, 2024

@dojoscan Yes, an algorithm should run only once on the evaluation server.
Actually, I forgot to post a further explanation of the evaluation above: the result was not produced on the KITTI evaluation server. A classmate of mine participated in the KITTI pedestrian detection competition last year and labeled the KITTI test set himself, so the result above is measured against our own labels.
Yes, those labels are not identical to the official KITTI server ground truth (there is about a 5 mAP gap with the official result, in his experience).
Before applying this algorithm to my own object detection tasks, I simply ran it on my classmate's test set to confirm its effectiveness, and I found the problem above.

avavavsf commented on July 22, 2024

@cksl Did you figure out this problem? I think the keep_score is too low, leading to too many false positives.

@BichenWuUCB I did not find results on the KITTI test set in your paper. Did you run SqueezeDet on the KITTI test set? What score did you get?
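
To illustrate the keep_score point with a toy example (the scores and labels below are made up, not real detections):

def precision_at(detections, thresh):
    """detections: list of (score, is_true_positive) pairs."""
    kept = [tp for score, tp in detections if score >= thresh]
    return sum(kept) / float(len(kept)) if kept else float('nan')

# the low-confidence tail is typically dominated by false positives
dets = [(0.9, True), (0.8, True), (0.3, False), (0.1, False), (0.06, False)]
print(precision_at(dets, 0.05))  # 0.4 -- a threshold of 0.05 keeps everything
print(precision_at(dets, 0.5))   # 1.0 -- keeps only the confident boxes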

yossibiton commented on July 22, 2024

@BichenWuUCB do the results in Table 2 (SqueezeDet & SqueezeDet+) refer to the KITTI test set or the validation set?

twangnh commented on July 22, 2024

@BichenWuUCB the results on the validation split and on the official test benchmark seem to differ greatly. I randomly split the training set and ran the model (SqueezeDet); after some tuning I got close to the paper's result:
[image: validation results]
But the same model with the same hyperparameters, tested on the official test benchmark, only got:
[image: KITTI evaluation server results]
Yet the results reported in the table are compared against methods evaluated on the official test set. Can you explain what is missing here? Your feedback is really appreciated, thank you!

aditya1709 commented on July 22, 2024

@MrWanter Can you please share what hyperparameter changes (tuning) you made to reproduce the paper's result?
My training has been futile: even when the loss is low (0.3-0.5), the mAP on the validation set is only around 61.
Any help will be appreciated.

eypros commented on July 22, 2024

In my experience, a low loss on the training set does not guarantee a correspondingly high mAP on the test set. You may even get a higher test mAP at a higher training loss (i.e., from an earlier checkpoint).
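
For example, a minimal sketch of picking a checkpoint by validation mAP instead of training loss (evaluate_map is a hypothetical stand-in for whatever evaluation routine you use, e.g. running the repo's eval script on a checkpoint and parsing its output):

def pick_best_checkpoint(ckpt_paths, evaluate_map):
    """Evaluate each checkpoint on the validation set; keep the best by mAP."""
    results = {path: evaluate_map(path) for path in ckpt_paths}
    best = max(results, key=results.get)
    return best, results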
