Comments (4)
Hi,
The training procedure is mentioned in the 2nd paragraph of Sect. 4. I just want to point out that the stage-wise training scheme is used only for pre-training on the Chairs dataset (i.e. those models with "-pre").
For pre-training up to, say, level 6 (L6), you need to remove the code segments for levels L5 to L2. Similarly, for pre-training up to L5, you need to remove the code segments for L4 to L2.
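As a rough illustration only (the marker comments below are hypothetical; in practice you edit the actual layer definitions in the prototxt by matching their real names), the removal can be sketched as:

```python
def strip_levels(prototxt_text, drop_levels):
    """Drop the layer definitions belonging to the given pyramid levels.

    This assumes each level's block is bracketed by marker comments like
    '# level 5 begin' / '# level 5 end' -- these markers are hypothetical;
    in the real prototxt you would match the actual layer names instead.
    """
    kept, skipping = [], None
    for line in prototxt_text.splitlines():
        s = line.strip().lower()
        if skipping is None:
            opened = [lv for lv in drop_levels if s == f"# level {lv} begin"]
            if opened:
                skipping = opened[0]  # start dropping this level's block
            else:
                kept.append(line)
        elif s == f"# level {skipping} end":
            skipping = None  # the end-marker line itself is also dropped
    return "\n".join(kept)
```

For pre-training up to L6 you would call it with `drop_levels=[5, 4, 3, 2]`, for L5 with `[4, 3, 2]`, and so on.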
from liteflownet.
Thanks for the quick reply and clarifying the procedure of training.
However, I guess I was not clear in my question. What I actually want to ask is this: when I use this partially trained network and run inference on that part of the network (i.e. till layer 6 + layer 1, which does not require any training) to calculate the flow, the inference takes too much time and gives the repeated outputs described in my earlier post. However, when I run inference on the whole network (either using the trained model provided in your GitHub repository or my partially trained model, trained only till layer 6), the inference is fast and no repetition of the output takes place. Using the whole network for inference while training is still going on does not seem logical to me, as the weights of the other layers (e.g. layer 2) are not trained. I want to know what I am doing wrong and how I can check the intermediate flow results. Is it something related to data augmentation / crop size that causes multiple images to be generated at inference time, which I am missing?
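To be concrete about what I expect when checking an intermediate flow: my understanding is that the coarse flow from level 6 has to be upsampled to full resolution and its vectors rescaled by the same factor before comparing with ground truth. A rough numpy sketch of what I mean (nearest-neighbour for brevity; the function name is mine, not from the repository):

```python
import numpy as np

def upsample_flow(flow, factor):
    """Upsample a coarse (H, W, 2) flow field by an integer factor and
    scale the flow vectors by the same factor, so displacements are
    expressed in full-resolution pixels.  Nearest-neighbour is used
    here for simplicity; bilinear interpolation would be smoother."""
    up = np.repeat(np.repeat(flow, factor, axis=0), factor, axis=1)
    return up * factor
```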
Secondly, I am slightly confused about the training procedure. Please correct me if I am wrong. The procedure for training is:
1). LiteFlowNet-pre: Train the network stage-wise, as mentioned in your answer, on the FlyingChairs dataset.
2). LiteFlowNet: Using the trained LiteFlowNet-pre model, fine-tune the complete model (no stage-wise training is needed for this fine-tuning) on the FlyingThings3D dataset. (In this case, can you tell me what learning rate you used? I cannot find it in the paper.)
Thanks once again for reading the post. I am sorry if my questions seem irrelevant / silly.
Generally speaking, support other than issues related to this GitHub is not provided! Anyway, I have no idea what you meant by "till layer 6 + layer 1". If you only want to infer flow fields at, say, level 6, then you need to comment out all unused layers (i.e. from levels 5 to 2).
The procedure is correct.
Thank you for the response and for giving your time.
By "layer 6 + layer 1", I meant that for inference I use the network up to level 6 and then feed the output of level 6 into layer 1 (the post-processing layer). Layer 1 has no trainable parameters for its convolution layer, "ScaleMag_flow". So I think I can use this post-processing layer for inference, and I added it because it also writes the flow to a file, which I need for evaluating the model. However, I left the width and height parameters in the post-processing layer as they are (width: $TARGET_WIDTH, height: $TARGET_HEIGHT), and I am not sure whether this is correct.
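For the evaluation side, I read the flow file back assuming it follows the standard Middlebury .flo layout (magic float 202021.25, then width and height as int32, then row-major interleaved (u, v) float32 values); whether the post-processing layer writes exactly this layout is my assumption. A minimal reader/writer pair I use for checking (function names are mine):

```python
import numpy as np

FLO_MAGIC = 202021.25  # standard Middlebury .flo sanity-check value

def write_flo(path, flow):
    """Write a (H, W, 2) float32 flow field to a Middlebury .flo file:
    magic float, int32 width, int32 height, then row-major (u, v) pairs."""
    h, w, c = flow.shape
    assert c == 2, "flow must have exactly two channels (u, v)"
    with open(path, "wb") as f:
        np.float32(FLO_MAGIC).tofile(f)
        np.int32(w).tofile(f)
        np.int32(h).tofile(f)
        flow.astype(np.float32).tofile(f)

def read_flo(path):
    """Read a Middlebury .flo file back into a (H, W, 2) float32 array."""
    with open(path, "rb") as f:
        magic = np.fromfile(f, np.float32, 1)[0]
        assert np.isclose(magic, FLO_MAGIC), "not a valid .flo file"
        w = int(np.fromfile(f, np.int32, 1)[0])
        h = int(np.fromfile(f, np.int32, 1)[0])
        data = np.fromfile(f, np.float32, 2 * w * h)
    return data.reshape(h, w, 2)
```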
I am sorry if these questions are not related to this GitHub.