
Superpixels: An Evaluation of the State-of-the-Art

This repository contains the source code used for evaluation in [1], a large-scale comparison of state-of-the-art superpixel algorithms.

ArXiv | Project Page | Datasets | Doxygen Documentation

This repository subsumes earlier work on comparing superpixel algorithms: davidstutz/gcpr2015-superpixels, davidstutz/superpixels-revisited.

Please cite the following work if you use this benchmark or the provided tools or implementations:

[1] D. Stutz, A. Hermans, B. Leibe.
    Superpixels: An Evaluation of the State-of-the-Art.
    Computing Research Repository, abs/1612.01601.

Also make sure to cite the corresponding papers when using the provided datasets or superpixel algorithms.

Updates:

  • An implementation of the average metrics, i.e. Average Boundary Recall (called Average Miss Rate in the updated paper), Average Undersegmentation Error and Average Explained Variation (called Average Unexplained Variation in the updated paper), is provided in lib_eval/evaluation.h. An easy-to-use command line tool, eval_average_cli, is also provided; see the corresponding documentation and examples in Executables and Examples respectively.
  • On Mar 29, 2017, the paper was accepted for publication in CVIU.
  • The converted (i.e. pre-processed) NYUV2, SBD and SUNRGBD datasets are now available in the data repository.
  • The source code of MSS has been added.
  • The source code of PF and SEAW has been added.
  • Doxygen documentation is now available here.
  • The presented paper was in preparation over a longer period of time; some recent superpixel algorithms, including SCSP and LRW, are therefore not included in the comparison.
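
The average metrics mentioned above integrate an ordinary metric over the number of generated superpixels. The following minimal sketch (plain Python/NumPy, illustrative only, not the lib_eval API) shows the idea using trapezoidal integration:

```python
import numpy as np

def average_metric(superpixel_counts, metric_values):
    """Average a metric over a range of superpixel counts K via
    trapezoidal integration, normalized by the covered K range.

    superpixel_counts: increasing numbers of generated superpixels.
    metric_values: the metric (e.g. Boundary Recall) at each count.
    """
    k = np.asarray(superpixel_counts, dtype=float)
    m = np.asarray(metric_values, dtype=float)
    # Area under the metric-vs-K curve, trapezoid by trapezoid.
    area = np.sum((m[1:] + m[:-1]) / 2.0 * np.diff(k))
    return float(area / (k[-1] - k[0]))

# A metric that is constant in K averages to itself.
print(average_metric([200, 400, 600], [0.9, 0.9, 0.9]))  # 0.9
```

Normalizing by the covered K range makes the average comparable across algorithms evaluated on the same superpixel counts.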

Introduction

Superpixels group pixels that are similar in color and other low-level properties. In this respect, superpixels address two problems inherent to the processing of digital images: firstly, pixels are merely a result of discretization; and secondly, the high number of pixels in large images renders many algorithms computationally infeasible. Superpixels were introduced as more natural entities: groups of pixels that perceptually belong together, while heavily reducing the number of primitives.
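
To make this concrete, the following self-contained sketch mimics one assignment step of a SLIC-style algorithm: seed cluster centers on a regular grid, then assign every pixel to the nearest center in a joint color and position space. It is an illustration of the grouping idea only, not one of the benchmarked implementations:

```python
import numpy as np

def slic_like(image, grid=4, compactness=10.0):
    """One assignment step of a SLIC-style superpixel algorithm.

    Centers are seeded on a regular grid x grid layout; each pixel is
    assigned to the center minimizing a combined color + spatial
    distance, where `compactness` trades spatial regularity against
    color adherence. (Sketch only; real SLIC iterates and restricts
    the search to a local window around each center.)
    """
    h, w, _ = image.shape
    step = h // grid
    ys, xs = np.mgrid[0:h, 0:w]
    # Seed centers at grid cell midpoints.
    cy = np.arange(step // 2, h, step)[:grid]
    cx = np.arange(step // 2, w, step)[:grid]
    centers = [(y, x, image[y, x]) for y in cy for x in cx]
    best = np.full((h, w), np.inf)
    labels = np.zeros((h, w), dtype=int)
    for k, (y, x, color) in enumerate(centers):
        d_color = np.linalg.norm(image - color, axis=2)
        d_space = np.sqrt((ys - y) ** 2 + (xs - x) ** 2)
        d = d_color + (compactness / step) * d_space
        update = d < best
        labels[update] = k
        best[update] = d[update]
    return labels

# A toy RGB image: left half dark, right half bright.
image = np.zeros((32, 32, 3))
image[:, 16:] = 1.0
labels = slic_like(image)
print(labels.shape)            # (32, 32) -- one label per pixel
print(len(np.unique(labels)))  # 16 superpixels from a 4x4 seed grid
```

Raising `compactness` yields more regular, grid-like superpixels; lowering it lets superpixels follow color boundaries more closely.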

This repository can be understood as supplementary material for an extensive evaluation of 28 algorithms on 5 datasets regarding visual quality, performance, runtime, implementation details and robustness - as presented in [1]. To ensure a fair comparison, parameters have been optimized on separate training sets; as the number of generated superpixels heavily influences parameter optimization, we additionally enforced connectivity. Furthermore, to evaluate superpixel algorithms independent of the number of superpixels, we propose to integrate over commonly used metrics such as Boundary Recall, Undersegmentation Error and Explained Variation. Finally, we present a ranking of the superpixel algorithms considering multiple metrics and independent of the number of generated superpixels, as shown below.
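
The mentioned metrics can be sketched in a few lines. The following illustrative implementations (plain NumPy, not the lib_eval code; the Undersegmentation Error follows the Neubert-Protzel style formulation) show the core computations:

```python
import numpy as np

def boundary_map(labels):
    """Mark pixels whose right or bottom neighbor has a different label."""
    b = np.zeros(labels.shape, dtype=bool)
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return b

def boundary_recall(gt_labels, sp_labels, r=1):
    """Fraction of ground truth boundary pixels that have a superpixel
    boundary pixel within a (2r+1)x(2r+1) window."""
    gt = boundary_map(gt_labels)
    sp = boundary_map(sp_labels)
    # Dilate the superpixel boundaries by r pixels.
    dilated = sp.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            dilated |= np.roll(np.roll(sp, dy, axis=0), dx, axis=1)
    return dilated[gt].mean() if gt.any() else 1.0

def undersegmentation_error(gt_labels, sp_labels):
    """Per superpixel, count the pixels 'leaking' outside the ground
    truth segment it overlaps most, normalized by the image size."""
    leak = 0
    for s in np.unique(sp_labels):
        mask = sp_labels == s
        _, counts = np.unique(gt_labels[mask], return_counts=True)
        leak += mask.sum() - counts.max()
    return leak / gt_labels.size

gt = np.zeros((8, 8), dtype=int); gt[:, 4:] = 1
sp = np.zeros((8, 8), dtype=int); sp[:, 4:] = 1
print(boundary_recall(gt, sp))          # 1.0 -- boundaries coincide
print(undersegmentation_error(gt, sp))  # 0.0 -- no leakage
```

Integrating such metrics over a range of superpixel counts, as proposed above, removes the dependence on any single choice of the number of superpixels.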

Algorithm ranking.

The table shows the average ranks across the 5 datasets, taking into account Average Boundary Recall (ARec) and Average Undersegmentation Error (AUE) - lower is better in both cases, see Benchmark. The confusion matrix shows the rank distribution of the algorithms across the datasets.

Algorithms

The following algorithms were evaluated in [1], and most of them are included in this repository:

| Included | Algorithm | Reference |
|:--------:|-----------|-----------|
| ☑️ | CCS | Ref. & Web |
| Instructions | CIS | Ref. & Web |
| ☑️ | CRS | Ref. & Web |
| ☑️ | CW | Ref. & Web |
| ☑️ | DASP | Ref. & Web |
| ☑️ | EAMS | Ref., Ref., Ref. & Web |
| ☑️ | ERS | Ref. & Web |
| ☑️ | FH | Ref. & Web |
| ☑️ | MSS | Ref. |
| ☑️ | PB | Ref. & Web |
| ☑️ | preSLIC | Ref. & Web |
| ☑️ | reSEEDS | Web |
| ☑️ | SEAW | Ref. & Web |
| ☑️ | SEEDS | Ref. & Web |
| ☑️ | SLIC | Ref. & Web |
| ☑️ | TP | Ref. & Web |
| ☑️ | TPS | Ref. & Web |
| ☑️ | vlSLIC | Web |
| ☑️ | W | Web |
| ☑️ | WP | Ref. & Web |
| ☑️ | PF | Ref. & Web |
| ☑️ | LSC | Ref. & Web |
| ☑️ | RW | Ref. & Web |
| ☑️ | QS | Ref. & Web |
| ☑️ | NC | Ref. & Web |
| ☑️ | VCCS | Ref. & Web |
| ☑️ | POISE | Ref. & Web |
| ☑️ | VC | Ref. & Web |
| ☑️ | ETPS | Ref. & Web |
| ☑️ | ERGC | Ref., Ref. & Web |

Submission

To keep the benchmark alive, we encourage authors to make their implementations publicly available and integrate them into this benchmark. We are happy to help with the integration and update the results published in [1] and on the project page. Also see the Documentation for details.

License

Note that some of the provided algorithms come with different licenses, see Algorithms for details. Also note that the datasets come with different licenses, see Datasets for details.

Further, note that the additional dataset downloads as in Datasets follow the licenses of the original datasets.

The remaining source code provided in this repository is licensed as follows:

Copyright (c) 2016, David Stutz. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
  • Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
