
Piecewise CRF

This is an implementation of piecewise CRF training for semantic segmentation based on the work of Lin et al. The implemented model consists of three parts:

  1. A neural network used for learning unary and binary potentials
  2. A contextual conditional random field that combines the learnt unary and binary potentials
  3. A fully connected Gaussian conditional random field used for segmentation postprocessing

The implemented system is evaluated on two publicly available datasets, Cityscapes and KITTI. For more information about the implementation and the results, see the thesis.

Usage

This section explains the usage pipeline for semantic segmentation, step by step. All scripts are documented; for details about specific scripts and their arguments, see the comments inside them or the README files in the appropriate subdirectories of this project.

IMPORTANT: In order to run the piecewisecrf scripts, set the PYTHONPATH environment variable to the project (repository) root.
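Equivalently, the repository root can be put on the module search path from inside Python. This is only a sketch; the path below is a hypothetical placeholder for your checkout location.

```python
import os
import sys

# Hypothetical checkout location; replace with wherever you cloned the repository.
REPO_ROOT = os.path.expanduser("~/piecewisecrf")

# Prepending the repository root to sys.path has the same effect as
# exporting PYTHONPATH before launching the scripts, so that
# `import piecewisecrf...` style imports resolve.
if REPO_ROOT not in sys.path:
    sys.path.insert(0, REPO_ROOT)
```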

Generating images

The first step is to generate all the necessary files used for training and validation.

  1. Download the datasets (Cityscapes or KITTI). For Cityscapes, download the ground-truth labels as well as the left camera images. Extract the downloaded archives. For KITTI, rename the valid folder to val.
  2. Run piecewisecrf/datasets/cityscapes/train_validation_split.py to generate the validation dataset. For KITTI, use piecewisecrf/datasets/kitti/train_validation_split.py.
  3. Configure the piecewisecrf/config/prefs.py file: set the dataset_dir, save_dir, img_width, img_height and img_depth flags.
  4. Run piecewisecrf/datasets/cityscapes/prepare_dataset_files.py to generate the files needed for TensorFlow records generation and for evaluation. For KITTI, use piecewisecrf/datasets/kitti/prepare_dataset_files.py.
  5. Generate the TensorFlow records used for training and validation by running piecewisecrf/datasets/prepare_tfrecords.py. The destination directory is then used to reconfigure the piecewisecrf/config/prefs.py file (train_records_dir, val_records_dir and test_records_dir flags).
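The train/validation split in step 2 can be sketched as follows. This is a minimal, hypothetical version: the real scripts also keep the dataset's folder layout and pair label files with images.

```python
import random

def split_train_val(filenames, val_fraction=0.15, seed=0):
    """Deterministically split file names into train and validation subsets."""
    rng = random.Random(seed)
    shuffled = sorted(filenames)   # sort first so the split is reproducible
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]

train, val = split_train_val(["img_%04d.png" % i for i in range(100)])
```

Fixing the seed makes the split reproducible across runs, which matters when the same validation set is later reused for CRF parameter tuning.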

Training the neural network

  1. Prepare the NumPy file with the VGG weights (see the README in caffe-tensorflow).
  2. Configure piecewisecrf/config/prefs.py (vgg_init_file, train_dir and all other training parameters).
  3. Run piecewisecrf/train.py.
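The vgg_init_file from step 1 is a NumPy archive of pretrained weights. Loading it looks roughly like this, assuming the dict-of-arrays layout that caffe-tensorflow emits; the layer name and file path below are illustrative only.

```python
import numpy as np

def load_vgg_weights(path):
    """Load a caffe-tensorflow style dump: {layer: {"weights": ..., "biases": ...}}."""
    # allow_pickle is required because the file stores a Python dict.
    return np.load(path, allow_pickle=True).item()

# Round-trip a dummy weight file to show the expected layout.
dummy = {"conv1_1": {"weights": np.zeros((3, 3, 3, 64), np.float32),
                     "biases": np.zeros(64, np.float32)}}
np.save("/tmp/vgg_dummy.npy", dummy)
weights = load_vgg_weights("/tmp/vgg_dummy.npy")
```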

Evaluating the neural network

  1. Configure piecewisecrf/config/prefs.py if not already done.
  2. Run piecewisecrf/eval.py.
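Segmentation evaluation of this kind typically reports per-class intersection-over-union (IoU). The following is a minimal sketch of the metric, not the project's exact evaluation code; the ignore_label convention is an assumption.

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_label=255):
    """Mean intersection-over-union over classes present in prediction or ground truth."""
    mask = gt != ignore_label          # drop pixels without a valid label
    pred, gt = pred[mask], gt[mask]
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                  # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))

gt = np.array([0, 0, 1, 1])
pred = np.array([0, 1, 1, 1])
miou = mean_iou(pred, gt, num_classes=2)
```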

Generating output files from contextual CRF

  1. Configure piecewisecrf/config/prefs.py if not already done.
  2. Run piecewisecrf/forward_pass.py.

This will generate predictions (at small and original resolution) as well as the unary potentials used by the fully connected CRF.
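The unary potentials are per-pixel class scores; the small-resolution prediction is simply their per-pixel argmax. A hedged sketch, with a made-up tensor:

```python
import numpy as np

def predict_from_unaries(unaries):
    """unaries: (H, W, C) per-pixel class scores -> (H, W) label map."""
    # Note: if the unaries are stored as negative log-probabilities
    # (the usual dense-CRF convention), this becomes an argmin instead.
    return np.argmax(unaries, axis=-1).astype(np.int32)

unaries = np.zeros((2, 2, 3), np.float32)
unaries[..., 1] = 1.0          # class 1 scores highest at every pixel
labels = predict_from_unaries(unaries)
```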

Learning the parameters of the fully connected CRF

This is done by applying grid search.

  1. Build the dense CRF executable (see the README in densecrf).
  2. If necessary, pick a subset of the validation dataset by using tools/validation_set_picker.py and tools/copy_files.py.
  3. Configure the tools/grid_config.py file (grid search parameters).
  4. Start the grid search by running tools/grid_search.py.
  5. Evaluate the grid search results by running tools/evaluate_grid.py.

This yields the optimal CRF parameters on the validation dataset.
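Grid search amounts to evaluating every parameter combination and keeping the best-scoring one. The sketch below is self-contained; the kernel-width parameter names and the score function are hypothetical stand-ins for the real grid_config.py parameters and validation accuracy.

```python
import itertools

def grid_search(param_grid, score_fn):
    """Exhaustively evaluate all combinations; return the best (score, params)."""
    best = None
    keys = sorted(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = score_fn(params)       # e.g. validation-set accuracy
        if best is None or score > best[0]:
            best = (score, params)
    return best

# Hypothetical appearance/smoothness kernel widths:
grid = {"theta_alpha": [30, 60], "theta_beta": [3, 5], "theta_gamma": [3]}
score, params = grid_search(
    grid, lambda p: -abs(p["theta_alpha"] - 60) - abs(p["theta_beta"] - 3))
```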

Fully connected CRF inference and evaluation

  1. To run inference with the fully connected CRF, use the tools/run_crf.py script.
  2. To evaluate the generated output, use tools/calculate_accuracy_t.py.
  3. Because the output is in binary format, run the tools/colorize.py script to generate image files.
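Colorizing a label map (step 3) is a table lookup from class id to RGB. A minimal sketch with a made-up three-class palette; the real colors come from the dataset definition:

```python
import numpy as np

# Hypothetical palette: class id -> RGB (first three Cityscapes-style colors).
PALETTE = np.array([[128, 64, 128],   # road
                    [244, 35, 232],   # sidewalk
                    [70, 70, 70]],    # building
                   dtype=np.uint8)

def colorize(labels, palette=PALETTE):
    """labels: (H, W) int array of class ids -> (H, W, 3) uint8 RGB image."""
    return palette[labels]            # fancy indexing does the per-pixel lookup

img = colorize(np.array([[0, 1], [2, 0]]))
```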

References

Efficient Piecewise Training of Deep Structured Models for Semantic Segmentation
Guosheng Lin, Chunhua Shen, Anton van den Hengel, Ian Reid
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016

Convolutional scale invariance for semantic segmentation
Ivan Krešo, Denis Čaušević, Josip Krapac, Siniša Šegvić
38th German Conference on Pattern Recognition, Hannover, 2016

Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials
Philipp Krähenbühl and Vladlen Koltun
NIPS 2011

Vision-based offline-online perception paradigm for autonomous driving
Ros, G., Ramos, S., Granados, M., Bakhtiary, A., Vazquez, D., Lopez, A.M.
IEEE Winter Conference on Applications of Computer Vision, Hawaii, 2015

The Cityscapes dataset for semantic urban scene understanding
Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., Schiele, B.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, 2016

piecewisecrf's People

Contributors

vaan5

piecewisecrf's Issues

How to Convert VGG16 caffe model to tensorflow

Hi @Vaan5

I am new to TensorFlow and Caffe. I am also from FER, a new student from Indonesia. Can you help me configure your project properly? I tried a few times and ended up with several errors while converting the caffemodel. I ran this script:

./convert.py VGG16_SalObjSub.caffemodel --code-output-path=coba.npy

I got error:

File "./convert.py", line 60, in <module>
    main()
File "./convert.py", line 56, in main
    args.phase)
File "./convert.py", line 27, in convert
    transformer = TensorFlowTransformer(def_path, caffemodel_path, phase=phase)
File "/home/adi005/ImageSegmentation/piecewisecrf/caffe-tensorflow/kaffe/tensorflow/transformer.py", line 221, in __init__
    self.load(def_path, data_path, phase)
File "/home/adi005/ImageSegmentation/piecewisecrf/caffe-tensorflow/kaffe/tensorflow/transformer.py", line 227, in load
    graph = GraphBuilder(def_path, phase).build()
File "/home/adi005/ImageSegmentation/piecewisecrf/caffe-tensorflow/kaffe/graph.py", line 140, in __init__
    self.load()
File "/home/adi005/ImageSegmentation/piecewisecrf/caffe-tensorflow/kaffe/graph.py", line 146, in load
    text_format.Merge(def_file.read(), self.params)
File "/usr/local/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 476, in Merge
    descriptor_pool=descriptor_pool)
File "/usr/local/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 526, in MergeLines
    return parser.MergeLines(lines, message)
File "/usr/local/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 559, in MergeLines
    self._ParseOrMerge(lines, message)
File "/usr/local/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 574, in _ParseOrMerge
    self._MergeField(tokenizer, message)
File "/usr/local/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 619, in _MergeField
    name = tokenizer.ConsumeIdentifierOrNumber()
File "/usr/local/lib/python2.7/dist-packages/google/protobuf/text_format.py", line 1066, in ConsumeIdentifierOrNumber
    raise self.ParseError('Expected identifier or number.')
google.protobuf.text_format.ParseError: 2:1 : Expected identifier or number.
Can you show me the proper command to convert the model?
Thank you in advance.

How do I use .json file in cityscapes dataset?

Hello

I appreciate your kindness sharing your code

I'm trying to follow the explanations in your README file, but I have a problem at the image generation step.

When I run prepare_dataset_files.py after running train_validation_split.py and configuring prefs.py, the script does not accept the .json files and gives me an error on the command line.

Should I remove the .json files after running the split script?

Or did I miss something?

I already checked the GitHub page of the Cityscapes dataset, but I did not find any guidelines saying that I should process the .json files for some reason.

How should I handle these files?

about thesis

Hello, I would like to know whether your thesis has an English version? Thank you very much!
