
sanet_implementation's People

Contributors: bigknight

sanet_implementation's Issues

What delta value is used when generating the density maps?

How are your density maps generated? My runs behave the same as yours: for every input image the model tends to output a fixed result (0). Did this ever improve for you, and how did you eventually fix it?
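For context on the "delta" question above, a common way to build crowd-counting density maps is to stamp a normalized Gaussian at each head annotation; the delta is then the Gaussian's standard deviation. A minimal sketch (my own illustration, not this repo's code; the function names and the sigma value are assumptions):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # 2D Gaussian, normalized so each head contributes exactly 1 to the sum.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def make_density_map(shape, points, ksize=15, sigma=4.0):
    """shape: (H, W); points: iterable of (row, col) head annotations."""
    h, w = shape
    density = np.zeros((h, w), dtype=np.float64)
    kernel = gaussian_kernel(ksize, sigma)
    half = ksize // 2
    for r, c in points:
        r, c = int(round(r)), int(round(c))
        # Stamp the kernel, clipping it at the image borders.
        r0, r1 = max(r - half, 0), min(r + half + 1, h)
        c0, c1 = max(c - half, 0), min(c + half + 1, w)
        kr0, kc0 = r0 - (r - half), c0 - (c - half)
        density[r0:r1, c0:c1] += kernel[kr0:kr0 + (r1 - r0), kc0:kc0 + (c1 - c0)]
    return density

dm = make_density_map((64, 64), [(20, 20), (40, 40)], ksize=15, sigma=4.0)
print(round(float(dm.sum()), 3))  # → 2.0, the annotated head count
```

Because the kernel is normalized, the density map integrates to the head count, which is what the MAE/MSE evaluation assumes.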

Project environment issues

Hello, thanks for open-sourcing this.
I am trying to reproduce the results of this paper, but I am not sure about your experiment environment, such as the TensorFlow version and the dependencies.
Would you please give me more detail about this, along with step-by-step instructions for training and testing?
Thank you very much.

Two questions about convolution and pooling

  1. The paper specifies a 1 × 1 convolution here, not a 2 × 2 one:
    with tf.variable_scope(name_or_scope="branch_1x1_" + str(layer_number)):
    branch_1x1 = slim.conv2d(data_input, channel_output, [2, 2], 1, "same")
  2. About the pooling:
    feature_map_encoder = slim.max_pool2d(feature_map_encoder, [2, 2], 2, "SAME", scope="max_pooling_2x2")

    Paper: "2 × 2 max-pooling layers after each module to halve the spatial resolution of feature maps."
    So I believe the padding should be VALID.

There is one more question I have not figured out and would appreciate an answer to.
3. Does 2*channel_output here mean the same thing as in the paper? What exactly do the paper's "feature dimensions" refer to?

with tf.variable_scope(name_or_scope="branch_3x3_" + str(layer_number)):
branch_3x3_part_1 = slim.conv2d(data_input, 2*channel_output, [1, 1], 1, "SAME", scope="convolution_layer_1a")

Paper: "In addition, we add a 1 × 1 convolution before the 3 × 3, 5 × 5 and 7 × 7 convolution layers to reduce the feature dimensions by half."
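On the SAME-vs-VALID point in item 2: for 2 × 2 pooling with stride 2 the two paddings only differ when a spatial dimension is odd, where SAME pads up to ceil(H/2) and VALID truncates to floor(H/2). A minimal numpy check (a hypothetical helper written for illustration, not code from this repo):

```python
import numpy as np

def max_pool_2x2(x, padding):
    # 2x2 max pooling with stride 2 over a 2D array, SAME or VALID padding.
    h, w = x.shape
    if padding == "SAME":
        # Pad bottom/right with -inf so the padding never wins the max.
        x = np.pad(x, ((0, h % 2), (0, w % 2)), constant_values=-np.inf)
    else:
        # VALID: drop the trailing odd row/column.
        x = x[: h - h % 2, : w - w % 2]
    h2, w2 = x.shape
    return x.reshape(h2 // 2, 2, w2 // 2, 2).max(axis=(1, 3))

x = np.arange(25, dtype=float).reshape(5, 5)
print(max_pool_2x2(x, "SAME").shape)   # → (3, 3)
print(max_pool_2x2(x, "VALID").shape)  # → (2, 2)
```

For even-sized feature maps (e.g. input patches whose sides are multiples of the total downsampling factor) the two choices produce identical outputs, so whether SAME versus VALID matters here depends on the patch sizes used in training.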

Generating the preprocessed ground-truth density maps

Thank you very much for open-sourcing this. I see that when generating the density-map labels you use a fixed kernel with size 5, but other write-ups I have read use 15. Does this affect the results?
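One observation relevant to the 5-vs-15 question: if the Gaussian kernel is normalized (as in common density-map code; an assumption about this repo's preprocessing), the kernel size changes how widely each head's mass is spread, but not the total count the map integrates to. A quick check with a hypothetical helper:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # Normalized 2D Gaussian kernel of the given side length.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

k5 = gaussian_kernel(5, sigma=1.25)    # sigma values are illustrative
k15 = gaussian_kernel(15, sigma=3.75)
print(round(float(k5.sum()), 6), round(float(k15.sum()), 6))  # → 1.0 1.0
```

So the choice mainly affects localization sharpness of the target maps (and hence the pixel-wise loss), not the ground-truth counts themselves.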


def inception_arg_scope(weight_decay=4e-4, std=3, batch_norm_var_collection="moving_vars"):
    instance_norm_params = {
        # "decay": 0.9997,
        "epsilon": 1e-6,
        "activation_fn": tf.nn.relu,
        "trainable": True,
        "variables_collections": {
            "beta": None,
            "gamma": None,
            "moving_mean": [batch_norm_var_collection],
            "moving_variance": [batch_norm_var_collection]},
        "outputs_collections": {
        }
    }
    with slim.arg_scope([slim.conv2d],

How images are processed at test time

Hi, the paper's Evaluation details section describes how test images are processed. How exactly is that implemented? Thanks!
P.S. I only ran the bare network structure and the results are not great, possibly because the training or testing details are missing. I also found that although adding the IN (instance normalization) layer speeds up convergence, the accuracy actually gets worse.

Paper: "For testing the model trained based on patches, we crop each test sample to patches 1/4 size of original image with 50% overlapping. For each overlapping pixels between patches, we only reserve the density value in the patch of which the center is the nearest to the pixel than others, because the center part of patches has enough contextual information to ensure accurate estimation."
