
Comments (22)

villanuevab commented on July 3, 2024

@Lisandro79 thanks for your work on this. May I ask how you decided on

["bbox/trimming/bbox:0",
"probability/score:0",
"probability/class_idx:0"]

as the output nodes? These are from the interpretation_graph, and I am trying to understand the difference between it and the forward_graph.


keymanchen1215 commented on July 3, 2024

Hi all,
When I load the original model.ckpt-8700 and check the node names in the graph, it contains "image_input".
When I check again after running the code below, the "image_input" node has disappeared, but the "batch/fifo_queue" node mentioned by @venuktan is still there:

    output_graph_def = graph_util.convert_variables_to_constants(
        sess, input_graph_def, output_node_names.split(","))

The issue seems to be located in this code:

    self.image_input, self.input_mask, self.box_delta_input, \
        self.box_input, self.labels = tf.train.batch(
            self.FIFOQueue.dequeue(), batch_size=mc.BATCH_SIZE,
            capacity=mc.QUEUE_CAPACITY)

Does anyone know why?

Lisandro79 commented on July 3, 2024

In the end I solved this by changing the way I was saving the model to disk.

# Imports assumed by this snippet (TF 0.x/1.x API; the config and model classes
# come from the squeezeDet repo):
#   import tensorflow as tf
#   from tensorflow.python.framework import graph_util
#   from config import kitti_squeezeDetPlus_config
#   from nets import SqueezeDetPlus
with tf.Graph().as_default():

    mc = kitti_squeezeDetPlus_config()
    mc.BATCH_SIZE = 1
    mc.LOAD_PRETRAINED_MODEL = False
    model = SqueezeDetPlus(mc, FLAGS.gpu)

    saver = tf.train.Saver(model.model_params, reshape=True)

    graph = tf.get_default_graph()
    input_graph_def = graph.as_graph_def()
    output_node_names = "bbox/trimming/bbox,probability/class_idx,probability/score"

    with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:

        init = tf.initialize_all_variables()
        sess.run(init)
        tf.train.start_queue_runners(sess=sess)

        # Restores from checkpoint
        ckpts = set()
        ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_path)
        ckpts.add(ckpt.model_checkpoint_path)
        print ('Loading {}...'.format(ckpt.model_checkpoint_path))
        saver.restore(sess, ckpt.model_checkpoint_path)

         # Use a built-in TF helper to export variables to constants
        output_graph_def = graph_util.convert_variables_to_constants(sess, 
            input_graph_def,  output_node_names.split(",")  )

        # Serialize and dump the output graph to the filesystem
        output_graph = "/data/squeezeDet_TF011/logs/test_freeze/test.pb"
        with tf.gfile.GFile(output_graph, "wb") as f:
            f.write(output_graph_def.SerializeToString())
        print(("%d ops in the final graph." % len(output_graph_def.node)))

mistiansen commented on July 3, 2024

Can you offer some advice for getting SqueezeDet to work with your dataset?
Did you just put your labels in KITTI format and change the number of classes in the config?

Thank you

Lisandro79 commented on July 3, 2024

Yes, that's all you need to train on your own dataset.
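For reference, a KITTI-format label file has one line per object with the columns: type, truncation, occlusion, alpha, 2D bbox (left, top, right, bottom), 3D dimensions, 3D location, and rotation_y. As far as I can tell, the loader in this repo only uses the class name and the 2D box, so the remaining fields can be dummy values. A made-up example line:

    Car 0.00 0 -1.57 587.01 173.33 614.12 200.12 1.65 1.67 3.64 -0.65 1.71 46.70 -1.59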

Cheers

mistiansen commented on July 3, 2024

Thank you. Do you know if changing the image width and height will work? I see the parameters in the config file. I am hoping to work with very large images.

Lisandro79 commented on July 3, 2024

Yes, it works perfectly well with other image resolutions. You just need to adjust the anchor parameters appropriately.
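To make "adjust the anchors" concrete: the detection feature map is the input downsampled by 16, so the anchor grid has roughly IMAGE_WIDTH/16 x IMAGE_HEIGHT/16 cells. Below is a minimal sketch, equivalent in spirit to the set_anchors() helper in the repo's config files; the function name is illustrative and the (width, height) anchor shapes should come from your own data:

    import numpy as np

    def make_anchors(image_width, image_height, anchor_shapes):
        """Return an (H*W*B, 4) array of [cx, cy, w, h] anchors, where H and W
        are the feature-map height and width (input size / 16) and B is
        len(anchor_shapes)."""
        W, H = image_width // 16, image_height // 16
        anchors = []
        for y in range(1, H + 1):
            cy = y * float(image_height) / (H + 1)      # evenly spaced row centers
            for x in range(1, W + 1):
                cx = x * float(image_width) / (W + 1)   # evenly spaced column centers
                for (w, h) in anchor_shapes:            # e.g. 9 (width, height) pairs
                    anchors.append([cx, cy, w, h])
        return np.array(anchors)

    # Then point the model config at the new grid (attribute names follow the
    # kitti_*_config files):
    # mc.ANCHOR_BOX = make_anchors(mc.IMAGE_WIDTH, mc.IMAGE_HEIGHT, my_anchor_shapes)
    # mc.ANCHORS = len(mc.ANCHOR_BOX)
    # mc.ANCHOR_PER_GRID = len(my_anchor_shapes)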

mistiansen commented on July 3, 2024

It seems one would need to make changes to the eval tool (the KITTI-eval tool in dataset/kitti-eval/cpp/evaluate_object.cpp) to handle new detection classes.
Was this necessary for you? Were you able to evaluate accuracy on your custom dataset?

Thanks again

Lisandro79 commented on July 3, 2024

No, I do not remember making any modifications to "evaluate_object.cpp". And yes, I am able to evaluate accuracy on my custom dataset. We did modify the list of classes in the configuration files inside the "config" folder.
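Concretely, the change boils down to editing the class list in your copy of the config; the attribute names below follow the kitti_*_config files, and the class names are just placeholders:

    # e.g. inside your custom config function
    mc.CLASS_NAMES = ('person', 'bicycle', 'dog')   # your classes, matching the label files
    mc.CLASSES = len(mc.CLASS_NAMES)                # the rest of the code reads this count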

mistiansen commented on July 3, 2024

Thanks. Do you know how to resume training with a model.ckpt file? Or model.ckpt.data file? I tried putting this in pretrained_model_path but it doesn't work. It looks like it wants a .pkl file.

Lisandro79 commented on July 3, 2024

Your question is too broad and off topic. There are several tutorials online that show how to do what you are asking. I recommend that you follow them and if you still have difficulties after that, then post a minimal example with a concrete error so that other people can help you.

mistiansen commented on July 3, 2024

OK. I just didn't know if you were able to stop and resume training using this tool and model.ckpt files.

Did you get this to work with SqueezeDet+ on your own data? It seems the rule for grid spacing as 1/16 of image width, height doesn't work there.

venuktan commented on July 3, 2024

I am trying to do inference with the exported .pb and I am getting the following error

Caused by op u'fire7/expand3x3/kernels/read', defined at:
  File "inference.py", line 103, in <module>
    tf.app.run(main=Inference())
  File "inference.py", line 28, in __init__
    self.get_results(FLAGS.input_image)
  File "inference.py", line 55, in get_results
    model = SqueezeDet(mc, FLAGS.gpu)
  File "/home/ubuntu/squeezeDet/src/nets/squeezeDet.py", line 24, in __init__
    self._add_forward_graph()
  File "/home/ubuntu/squeezeDet/src/nets/squeezeDet.py", line 63, in _add_forward_graph
    'fire7', fire6, s1x1=48, e1x1=192, e3x3=192, freeze=False)
  File "/home/ubuntu/squeezeDet/src/nets/squeezeDet.py", line 104, in _fire_layer
    padding='SAME', stddev=stddev, freeze=freeze)
  File "/home/ubuntu/squeezeDet/src/nn_skeleton.py", line 533, in _conv_layer
    wd=mc.WEIGHT_DECAY, initializer=kernel_init, trainable=(not freeze))
  File "/home/ubuntu/squeezeDet/src/nn_skeleton.py", line 66, in _variable_with_weight_decay
    var = _variable_on_device(name, shape, initializer, trainable)
  File "/home/ubuntu/squeezeDet/src/nn_skeleton.py", line 48, in _variable_on_device
    name, shape, initializer=initializer, dtype=dtype, trainable=trainable)
  File "/home/ubuntu/.conda/envs/py27sq/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 988, in get_variable
    custom_getter=custom_getter)
  File "/home/ubuntu/.conda/envs/py27sq/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 890, in get_variable
    custom_getter=custom_getter)
  File "/home/ubuntu/.conda/envs/py27sq/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 348, in get_variable
    validate_shape=validate_shape)
  File "/home/ubuntu/.conda/envs/py27sq/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 333, in _true_getter
    caching_device=caching_device, validate_shape=validate_shape)
  File "/home/ubuntu/.conda/envs/py27sq/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 684, in _get_single_variable
    validate_shape=validate_shape)
  File "/home/ubuntu/.conda/envs/py27sq/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 226, in __init__
    expected_shape=expected_shape)
  File "/home/ubuntu/.conda/envs/py27sq/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 344, in _init_from_args
    self._snapshot = array_ops.identity(self._variable, name="read")
  File "/home/ubuntu/.conda/envs/py27sq/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1490, in identity
    result = _op_def_lib.apply_op("Identity", input=input, name=name)
  File "/home/ubuntu/.conda/envs/py27sq/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File "/home/ubuntu/.conda/envs/py27sq/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2395, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/ubuntu/.conda/envs/py27sq/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1264, in __init__
    self._traceback = _extract_stack()

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value fire7/expand3x3/kernels
	 [[Node: fire7/expand3x3/kernels/read = Identity[T=DT_FLOAT, _class=["loc:@fire7/expand3x3/kernels"], _device="/job:localhost/replica:0/task:0/gpu:0"](fire7/expand3x3/kernels)]]
	 [[Node: probability/score/_7 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_534_probability/score", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

I exported my protobuf this way, as instructed above:

with tf.gfile.GFile(out_model, "wb") as f:
    f.write(output_graph_def.SerializeToString())

venuktan commented on July 3, 2024

@Lisandro79 @mistiansen @BichenWuUCB after you export to a .pb, what are the input (image) and output nodes?
I am using this:

det_boxes, det_probs, det_class = sess.run(
                ["import_5/bbox/trimming/bbox:0", "import_5/probability/score:0", "import_5/probability/class_idx:0"],
                feed_dict={"import_5/batch/fifo_queue:0": [input_image]})

I am getting an error

TypeError: Cannot interpret feed_dict key as Tensor: The name 'import_5/batch/fifo_queue:0' refers to a Tensor which does not exist. The operation, 'import_5/batch/fifo_queue', does not exist in the graph.

Lisandro79 commented on July 3, 2024

Hi @venuktan,

I think you have the wrong name for the input layer. Here are some snippets of the code we used. Please note that the input layer is called "image_input:0" (not "fifo_queue").

# Assumed imports for these snippets: OpenCV, NumPy and TensorFlow (0.x/1.x API)
import cv2
import numpy as np
import tensorflow as tf

# Test with one image
IMAGE_WIDTH = 720
IMAGE_HEIGHT = 405
BGR_MEANS = np.array([[[103.939, 116.779, 123.68]]])
im_name = './test/test3.jpg'
im = cv2.imread(im_name)

im = im.astype(np.float32, copy=False)
im = cv2.resize(im, (IMAGE_WIDTH, IMAGE_HEIGHT))
input_image = im - BGR_MEANS

cv2.imshow("Image", im)
cv2.imwrite('test.jpg', im.astype(np.int16, copy=False))
cv2.destroyAllWindows()

# Unpersist the frozen graph from file
with tf.gfile.FastGFile(args.frozen_model_filename, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    _ = tf.import_graph_def(graph_def, name='')

# Look up the input and output tensors by name, then run inference
with tf.Session() as sess:
    x = sess.graph.get_tensor_by_name('image_input:0')
    bboxes = sess.graph.get_tensor_by_name('bbox/trimming/bbox:0')
    scores = sess.graph.get_tensor_by_name('probability/score:0')
    clss = sess.graph.get_tensor_by_name('probability/class_idx:0')
    keep_prob = sess.graph.get_tensor_by_name('keep_prob:0')

    # Fetch the scores only...
    det_probs = sess.run(scores, {'image_input:0': [input_image], 'keep_prob:0': 0.5})
    # ...or boxes, scores and classes in one call
    det_boxes, det_probs, det_class = sess.run([bboxes, scores, clss],
                                               feed_dict={x: [input_image], keep_prob: 1.0})
    print(det_probs)

Hope this helps

Lisandro79 commented on July 3, 2024

Hi @mistiansen

If you have the correct numbers for the anchors it should work. What do you mean by "It seems the rule for grid spacing as 1/16 of image width, height doesn't work there"?

You should post a minimal code example with an error so that everybody can understand your problem. It is very difficult to understand the problem from your message.

Cheers

Lisandro79 commented on July 3, 2024

I did not "decide" anything, those are the names of the output nodes in the graph.

If you want to check the difference between the graphs, why don't you visualize those graphs and compare the nodes?

Also, @BichenWuUCB provides useful info about this in #73. We did optimization for inference and removed dropout (this step is necessary if you want to run SqueezeDet on some mobile devices).

After freezing for production, the inference speed on a Titan X is ~0.017 s for 720x400-pixel images, which is excellent for our task (great job @BichenWuUCB).
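As a rough sketch of this kind of inference-time cleanup (not necessarily the exact steps used here), TensorFlow 1.x ships an optimize_for_inference helper that folds and strips a number of training-only operations from a frozen GraphDef; removing dropout itself may additionally require editing the model code or a graph-transform pass. The file name below is the test.pb produced earlier in this thread, and the node names match the ones used above:

    import tensorflow as tf
    from tensorflow.python.tools import optimize_for_inference_lib

    # Load the frozen graph produced by the freezing snippet above
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("test.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    optimized = optimize_for_inference_lib.optimize_for_inference(
        graph_def,
        ["image_input"],                                           # input node
        ["bbox/trimming/bbox", "probability/score", "probability/class_idx"],
        tf.float32.as_datatype_enum)                               # input dtype

    # Write the cleaned-up graph next to the original
    with tf.gfile.GFile("test_optimized.pb", "wb") as f:
        f.write(optimized.SerializeToString())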

villanuevab commented on July 3, 2024

Thank you, you answered my question. I just wanted to clarify, as stated in #73, which ops and subgraphs were needed for inference.

Thank you and @BichenWuUCB for your helpful responses and great work!

hoonkai commented on July 3, 2024

We did optimization for inference and removed dropout (this step is necessary if you want to run SqueezeDet on some mobile devices)

@Lisandro79 @BichenWuUCB Can I ask why dropout needs to be removed? Isn't the dropout layer trivial during inference?

manoj652 commented on July 3, 2024

How do I freeze the graph?

pribadihcr commented on July 3, 2024

In the end I solved this by changing the way I was saving the model to disk.

(freezing snippet quoted in full from @Lisandro79's comment above)

@Lisandro79 when I visualize the graph of the .pb file with Netron, the input node has disappeared; there is only a QueueDequeueManyV2 or batch node before the conv1/convolution node. How can I include the input node? Thanks.
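A quick way to cross-check what Netron shows is to list the source ops (nodes with no inputs) in the frozen GraphDef; if image_input is not among them, the frozen graph is feeding conv1 from the queue and the placeholder was pruned during freezing, which matches what @keymanchen1215 observed above. A small sketch, assuming the frozen file is the test.pb written by the snippet above:

    import tensorflow as tf

    graph_def = tf.GraphDef()
    with tf.gfile.GFile("test.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    # Source ops have no inputs: placeholders, constants, queue ops, ...
    for node in graph_def.node:
        if not node.input:
            print(node.name, node.op)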
