
cvpr16-deepbit's People

Contributors

kevinlin311tw, suvojit-0x55aa


cvpr16-deepbit's Issues

loss1 remains large and loss2 decreases to 0

When I fine-tune VGG16 with the code provided in stage 1, loss1 increases to 0.25 and loss2 decreases to a small value (< 1e-5). I use 64 output bits and a batch size of 32. Can anyone give me advice about this strange situation?

train.sh

Hi, when I run the ./train.sh command, I get the following error. How can I solve this problem? Thanks.
Check failed: fd != -1 (-1 vs. -1) File not found: DeepBit32_stage2_iter_5000.caffemodel

Running prepare.sh fails

Running "./prepare.sh" fails:

I can access dropbox.com but cannot access the model and dataset mentioned in prepare.sh.

Any alternative solution?

Installation Error

After installing the requirements, when I run make all -j8, I get this error.

LD -o .build_release/lib/libcaffe.so
ld: framework not found vecLib
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [.build_release/lib/libcaffe.so] Error 1

So I think it cannot find vecLib.
I am using macOS Mojave 10.14.5.

So what should I do?
Thanks in advance.

mAP

I followed your instructions step by step, but I cannot get the results presented in your paper. Is there anything I should pay extra attention to?

Why add (i*channels) in the dot-product operation? Why are there two weights?

Hello, could someone explain this loss code? I do not understand it, thank you very much! For example, why is (i*channels) added in the dot-product operation, and why are there two weights?

for (int i = 0; i < bottom[0]->num(); ++i) {
  // squared L2 norm of sample i's difference vector, which starts at offset i*channels
  dist_sq_.mutable_cpu_data()[i] = caffe_cpu_dot(channels,
      diff_.cpu_data() + (i * channels), diff_.cpu_data() + (i * channels));

  loss += rotation_weight2[i] * (rotation_weight1[i] * dist_sq_.cpu_data()[i]);
}
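One plausible reading (my own interpretation, not the authors' explanation): diff_ stores one channels-long difference vector per sample in a single flat buffer, so the offset i*channels is where sample i's vector begins, and the dot product of that vector with itself is its squared L2 norm; the two rotation weights then scale each sample's contribution to the loss. A minimal NumPy sketch of the same computation, with made-up sizes and unit weights:

```python
import numpy as np

num, channels = 4, 3                     # example batch size and feature length
diff = np.random.randn(num * channels)   # flat buffer, like diff_.cpu_data()
w1 = np.ones(num)                        # stand-ins for rotation_weight1 / rotation_weight2
w2 = np.ones(num)

loss = 0.0
for i in range(num):
    v = diff[i * channels:(i + 1) * channels]  # sample i's slice starts at i*channels
    dist_sq = v.dot(v)                         # squared L2 norm, as caffe_cpu_dot computes
    loss += w2[i] * (w1[i] * dist_sq)

# with unit weights this equals the sum of squared entries of the whole buffer
assert np.isclose(loss, (diff ** 2).sum())
```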

Error during runtest

We get the following error when we run "make runtest". We use CUDA 7.5 on Ubuntu 14.04. Could you please suggest a solution?

https://github.com/kevinlin311tw/cvpr16-deepbit/blob/master/README.md

[----------] 4 tests from ImageDataLayerTest/0, where TypeParam = caffe::CPUDevice

[ RUN ] ImageDataLayerTest/0.TestShuffle

*** Aborted at 1470802253 (unix time) try "date -d @1470802253" if you are using GNU date ***

PC: @ 0x2b17d11def20 (unknown)

*** SIGSEGV (@0x0) received by PID 43804 (TID 0x2b17cbfc91c0) from PID 0; stack trace: ***

@     0x2b17d2301d40 (unknown)

@     0x2b17d11def20 (unknown)

@           0x569cac std::operator+<>()

@     0x2b17d1645117 caffe::ImageDataLayer<>::DataLayerSetUp()

@     0x2b17d1626ce3 caffe::BasePrefetchingDataLayer<>::LayerSetUp()

@           0x5519ea caffe::ImageDataLayerTest_TestShuffle_Test<>::TestBody()

@           0x7b4803 testing::internal::HandleExceptionsInMethodIfSupported<>()

@           0x7ab4e7 testing::Test::Run()

@           0x7ab58e testing::TestInfo::Run()

@           0x7ab695 testing::TestCase::Run()

@           0x7ae9d8 testing::internal::UnitTestImpl::RunAllTests()

@           0x7aec67 testing::UnitTest::Run()

@           0x44fd4a main

@     0x2b17d22ecec5 (unknown)

@           0x456169 (unknown)

@                0x0 (unknown)

make: *** [runtest] Segmentation fault (core dumped)

Processing time of ORB and DeepBit

Regarding Section 4.2, "Results in Image Matching": what was the processing time of the ORB and DeepBit descriptors?
I would like to use DeepBit in real time for image matching.

Check failed: fd != -1 (-1 vs. -1) File not found: solver_stage1.prototxt

I got the following error while executing the train.sh script. How can I handle this error? Thanks very much.

Check failed: fd != -1 (-1 vs. -1) File not found: solver_stage1.prototxt
*** Check failure stack trace: ***
@ 0x7ffbddd355cd google::LogMessage::Fail()
@ 0x7ffbddd37433 google::LogMessage::SendToLog()
@ 0x7ffbddd3515b google::LogMessage::Flush()
@ 0x7ffbddd37e1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7ffbde3fa178 caffe::ReadProtoFromTextFile()
@ 0x7ffbde400ff6 caffe::ReadSolverParamsFromTextFileOrDie()
@ 0x40ad3a train()
@ 0x407fc0 main
@ 0x7ffbdcc2f830 __libc_start_main
@ 0x4087e9 _start
@ (nil) (unknown)

Unsupervised Binary Descriptor

Hi guys,
Although I have read the paper and looked at the source code, I could not understand how the unsupervised approach works.

This code snippet is from K2_min_quantization_loss_layer.
How is bottom_data calculated?

  for (int i = 0; i < count; i++) {
    // binarize each activation at the 0.5 threshold
    if (bottom_data[i] > 0.5) {
      binary[i] = 1;
    } else {
      binary[i] = 0;
    }
  }
  for (int i = 0; i < count; ++i) {
    // accumulate the squared quantization error and its gradient term
    loss = loss + (binary[i] - bottom_data[i]) * (binary[i] - bottom_data[i]);
    diff_.mutable_cpu_data()[0] = diff_.mutable_cpu_data()[0]
        + (binary[i] - bottom_data[i]) * (-bottom_data[i]);
  }
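The loop above can be paraphrased in NumPy (a sketch of one reading, with made-up values standing in for bottom_data, which would be the previous layer's activations in [0, 1]): each activation is binarized at 0.5, and the loss is the squared distance between the real-valued output and its 0/1 binarization.

```python
import numpy as np

bottom_data = np.array([0.2, 0.7, 0.55, 0.1])  # hypothetical activations in [0, 1]

binary = (bottom_data > 0.5).astype(float)      # threshold at 0.5, as in the C++ loop
loss = ((binary - bottom_data) ** 2).sum()      # quantization loss: distance to {0, 1}

print(binary)  # [0. 1. 1. 0.]
print(loss)    # 0.3425
```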

The deploy model file cannot be adapted to the current BVLC version of Caffe

run_cifar10
cvpr16-deepbit startup
Cleared 0 solvers and 0 stand-alone nets
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0617 21:46:16.693743 67684 upgrade_proto.cpp:52] Attempting to upgrade input file specified using deprecated V1LayerParameter: ./models/deepbit/deploy32.prototxt
F0617 21:46:16.715786 67684 upgrade_proto.cpp:660] Refusing to upgrade inconsistent NetParameter input; the definition includes both 'layer' and 'layers' fields. The current format defines 'layer' fields with string type like layer { type: 'Layer' ... } and not layers { type: LAYER ... }. Manually switch the definition to 'layer' format to continue.
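The error message points at the fix: the prototxt mixes the deprecated 'layers' syntax with the current 'layer' syntax. For reference, the two formats look roughly like this (an illustrative fragment, not the actual contents of deploy32.prototxt):

```protobuf
# deprecated V1 format ('layers', enum type)
layers {
  name: "conv1"
  type: CONVOLUTION
}

# current format ('layer', string type)
layer {
  name: "conv1"
  type: "Convolution"
}
```

Rewriting every layers { ... } block into the layer { type: "..." } form should let current BVLC Caffe parse the deploy file.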

Change max_iter for training new model

Hi, thank you for your work.

I just wonder how I can change the max_iter parameter (in the optimization step) to match the settings presented in your paper. In your CIFAR example I could not find where to change max_iter.
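If the repository follows standard Caffe conventions, max_iter is set in the solver definition (the solver_*.prototxt files), not in the net definition. A typical solver fragment looks like this (illustrative placeholder values, not the paper's actual settings):

```protobuf
# excerpt from a Caffe solver prototxt (placeholder values)
base_lr: 0.001
lr_policy: "step"
max_iter: 50000        # total number of training iterations
snapshot: 5000
snapshot_prefix: "DeepBit32_stage1"
```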

Thank you in advance.

About the K1&K2_EuclideanLossLayer

I'm a beginner in deep hashing, and I admire this method. I have already managed to generate hash codes by adding a latent layer on my own dataset.
Now I want to improve it, but I can't find **the registrations of K1_EuclideanLossLayer and K2_EuclideanLossLayer or their LayerParameters** in **caffe.proto**. Are they necessary?

Does it really outperform the baselines by that much?

Hi, I have noticed that the reported performance is far beyond the baselines, but it is mAP@1K, while the baselines all report mAP, which should be lower than mAP@1K in your implementation. As claimed in section 4.3,

Following the settings in [24](Deep hashing for compact binary codes learning, CVPR 15) ...

however, DH reported mAP rather than mAP@1K (if you double-check the setting in [24]). Could you please give more details about the real mAP on CIFAR-10 to allow a full comparison with other methods?
In our evaluation, even ITQ (a well-known and still competitive baseline) reaches 20+ mAP@1K, which makes us wonder whether DeepBit really outperforms the baselines by that much.

Anyway, good work, and thanks for the open-source code.

How to binarize the features from fc8_kevin layer

I am trying to deploy the network with Python. Can someone help me with converting the features from fc8_kevin to binary? For example, the output from fc8 looks like this:

[[-0.56248081 -0.45775688 -1.30485046 -0.1604958 1.5815804 0.77081078
-0.7585181 -1.10825634 -1.4173944 -0.79846704 0.49331596 -0.42672595
0.13242441 -0.19356699 -1.40226376 -0.30305305 0.51420587 -0.613258
-0.56232476 -1.44308019 -1.60355341 -0.82647151 0.34802282 -1.08634388
-0.70180577 -1.15342426 -1.00441277 0.03183408 0.90380871 -0.03189395
-0.22369295 0.1908104 ]]
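One common recipe (an assumption on my part, based on the quantization loss used in this repository, which thresholds sigmoid activations at 0.5): pass the raw fc8 values through a sigmoid and threshold at 0.5, which is mathematically the same as thresholding the raw values at 0.

```python
import numpy as np

# a few of the raw fc8_kevin outputs from the question above
fc8 = np.array([-0.5624, 1.5815, -0.7585, 0.4933, 0.1324, -0.1935])

# Assumption: the binary code is sigmoid(fc8) thresholded at 0.5,
# which is equivalent to thresholding the raw pre-activations at 0.
sigmoid = 1.0 / (1.0 + np.exp(-fc8))
bits_a = (sigmoid > 0.5).astype(np.uint8)
bits_b = (fc8 > 0).astype(np.uint8)

print(bits_a)  # [0 1 0 1 1 0]
assert (bits_a == bits_b).all()
```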

All binary vectors are the same

Hi,

I retrained the model on the NUS-WIDE dataset, but the output binary codes for all the test and database samples are the same; that is, the pdist2 matrix is a zero matrix.

I want to know why this happens.

Check failed on imagenet_mean.binaryproto

Hi, I tried to use train.sh to train the model. However, when Caffe tried to load the mean file imagenet_mean.binaryproto, it gave me the error "Check failed: !lines_.empty() File is empty". How can I handle this error? Thanks very much.
