davisvideochallenge / davis2017-evaluation
Evaluation Framework for DAVIS 2017 Semi-supervised and Unsupervised used in the DAVIS Challenges
License: BSD 3-Clause "New" or "Revised" License
Thank you very much for making the dataset and evaluation code available!
As there is no license file, I was wondering under which license the evaluation code is published?
Hi, I see that there are multiple repos available for DAVIS evaluation.
Could you please clarify which is the latest/recommended to use?
For example, the two relevant repos are:
https://github.com/davisvideochallenge/davis-2017
https://github.com/davisvideochallenge/davis2017-evaluation
python setup.py install
/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/dist.py:760: UserWarning: Usage of dash-separated 'author-email' will not be supported in future versions. Please use the underscore name 'author_email' instead
% (opt, underscore_opt)
/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/dist.py:760: UserWarning: Usage of dash-separated 'home-page' will not be supported in future versions. Please use the underscore name 'home_page' instead
% (opt, underscore_opt)
Traceback (most recent call last):
File "setup.py", line 19, in <module>
'tqdm>=4.28.1'
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/_distutils/core.py", line 122, in setup
dist.parse_config_files()
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/dist.py", line 851, in parse_config_files
self, self.command_options, ignore_option_errors=ignore_option_errors
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/config/setupcfg.py", line 167, in parse_configuration
meta.parse()
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/config/setupcfg.py", line 446, in parse
section_parser_method(section_options)
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/config/setupcfg.py", line 417, in parse_section
self[name] = value
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/config/setupcfg.py", line 238, in __setitem__
value = parser(value)
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/config/setupcfg.py", line 552, in _parse_version
return expand.version(self._parse_attr(value, self.package_dir, self.root_dir))
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/config/setupcfg.py", line 371, in _parse_attr
package_dir.update(self.ensure_discovered.package_dir)
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/_collections_abc.py", line 720, in __iter__
yield from self._mapping
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/config/expand.py", line 458, in __iter__
return iter(self._target())
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/config/expand.py", line 448, in _target
self._value = self._obtain()
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/config/expand.py", line 418, in _get_package_dir
self()
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/config/expand.py", line 408, in __call__
self._dist.set_defaults(name=False) # Skip name, we can still be parsing
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/discovery.py", line 330, in __call__
self._analyse_package_layout(ignore_ext_modules)
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/discovery.py", line 363, in _analyse_package_layout
or self._analyse_flat_layout()
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/discovery.py", line 420, in _analyse_flat_layout
return self._analyse_flat_packages() or self._analyse_flat_modules()
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/discovery.py", line 426, in _analyse_flat_packages
self._ensure_no_accidental_inclusion(top_level, "packages")
File "/home/zeus/anaconda3/envs/vfs/lib/python3.7/site-packages/setuptools/discovery.py", line 455, in _ensure_no_accidental_inclusion
raise PackageDiscoveryError(cleandoc(msg))
setuptools.errors.PackageDiscoveryError: Multiple top-level packages discovered in a flat-layout: ['pytest', 'results', 'davis2017'].
To avoid accidental inclusion of unwanted files or directories,
setuptools will not proceed with this build.
If you are trying to create a single distribution with multiple packages
on purpose, you should not rely on automatic discovery.
Instead, consider the following options:

1. set up custom discovery (`find` directive with `include` or `exclude`)
2. use a `src-layout`
3. explicitly set `py_modules` or `packages` with a list of names

To find more information, look for "package discovery" on setuptools docs.
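The failure comes from the automatic package discovery added in setuptools 61: it sees `pytest`, `results`, and `davis2017` as sibling top-level packages in a flat layout and refuses to guess. One way to sidestep it (a sketch, not an official fix from the maintainers) is to declare the package explicitly in `setup.py`, which disables auto-discovery:

```python
# Hypothetical sketch of an explicit package declaration for setup.py.
# Only the davis2017/ directory is actual package code; pytest/ and
# results/ are data folders that auto-discovery mistakes for packages.
from setuptools import setup

setup(
    name='davis2017',
    packages=['davis2017'],  # explicit list: no flat-layout auto-discovery
    # ... keep the original metadata and install_requires here ...
)
```

Alternatively, pinning setuptools below 61 (`pip install "setuptools<61"`) before running `python setup.py install` restores the old behavior, since flat-layout auto-discovery was introduced in 61.0.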
If I modify this code a little, e.g. changing val.txt and the annotation format to DAVIS 2016, could I use this code directly to evaluate on DAVIS 2016? I just want to know whether the implementation details of J/F are exactly the same as in the 2016 version.
Thanks!
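For reference, the region metric J is defined identically in the 2016 and 2017 papers: the plain Jaccard index (intersection-over-union) of the binary masks. A minimal sketch (not the repo's exact code, which may additionally handle void labels):

```python
import numpy as np

def jaccard(annotation, segmentation):
    """Region similarity J: intersection-over-union of two binary masks."""
    annotation = annotation.astype(bool)
    segmentation = segmentation.astype(bool)
    union = np.logical_or(annotation, segmentation).sum()
    if union == 0:
        # Both masks empty: treat as perfect agreement.
        return 1.0
    return np.logical_and(annotation, segmentation).sum() / union
```

The difference between the years lies less in the metric itself than in how per-object scores are aggregated, since DAVIS 2016 has a single object per sequence.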
Have you considered also supporting DAVIS 2016, mostly because of its single-object segmentation? I managed to adjust the code and am appending the diff in case anyone needs it. I can also open a pull request. Props on the code :)
diff --git a/davis2017/davis.py b/davis2017/davis.py
index d831be6..8891b88 100644
--- a/davis2017/davis.py
+++ b/davis2017/davis.py
@@ -8,10 +8,11 @@ from PIL import Image
class DAVIS(object):
SUBSET_OPTIONS = ['train', 'val', 'test-dev', 'test-challenge']
TASKS = ['semi-supervised', 'unsupervised']
+ YEARS = ['2016', '2017', '2019']
DATASET_WEB = 'https://davischallenge.org/davis2017/code.html'
VOID_LABEL = 255
- def __init__(self, root, task='unsupervised', subset='val', sequences='all', resolution='480p', codalab=False):
+ def __init__(self, root, task='unsupervised', subset='val', sequences='all', resolution='480p', codalab=False, year='2017'):
"""
Class to read the DAVIS dataset
:param root: Path to the DAVIS folder that contains JPEGImages, Annotations, etc. folders.
@@ -24,6 +25,8 @@ class DAVIS(object):
raise ValueError(f'Subset should be in {self.SUBSET_OPTIONS}')
if task not in self.TASKS:
raise ValueError(f'The only tasks that are supported are {self.TASKS}')
+ if year not in self.YEARS:
+ raise ValueError(f'Year should be one of the following {self.YEARS}')
self.task = task
self.subset = subset
@@ -31,8 +34,12 @@ class DAVIS(object):
self.img_path = os.path.join(self.root, 'JPEGImages', resolution)
annotations_folder = 'Annotations' if task == 'semi-supervised' else 'Annotations_unsupervised'
self.mask_path = os.path.join(self.root, annotations_folder, resolution)
- year = '2019' if task == 'unsupervised' and (subset == 'test-dev' or subset == 'test-challenge') else '2017'
- self.imagesets_path = os.path.join(self.root, 'ImageSets', year)
+
+ self.year = year
+ if self.year == '2019' and not (task == 'unsupervised' and (subset == 'test-dev' or subset == 'test-challenge')):
+ raise ValueError("Set 'task' to 'unsupervised' and subset to 'test-dev' or 'test-challenge'")
+
+ self.imagesets_path = os.path.join(self.root, 'ImageSets', self.year)
self._check_directories()
@@ -95,6 +102,10 @@ class DAVIS(object):
tmp = tmp * np.arange(1, num_objects + 1)[:, None, None, None]
masks = (tmp == masks[None, ...])
masks = masks > 0
+ else:
+ # for single object evaluation (e.g. DAVIS2016)
+ masks = np.expand_dims(masks, axis=0)
+ masks = masks > 0
return masks, masks_void, masks_id
def get_sequences(self):
diff --git a/davis2017/evaluation.py b/davis2017/evaluation.py
index 7bfb80f..eae777c 100644
--- a/davis2017/evaluation.py
+++ b/davis2017/evaluation.py
@@ -12,7 +12,7 @@ from scipy.optimize import linear_sum_assignment
class DAVISEvaluation(object):
- def __init__(self, davis_root, task, gt_set, sequences='all', codalab=False):
+ def __init__(self, davis_root, task, gt_set, sequences='all', codalab=False, year='2017'):
"""
Class to evaluate DAVIS sequences from a certain set and for a certain task
:param davis_root: Path to the DAVIS folder that contains JPEGImages, Annotations, etc. folders.
@@ -22,7 +22,8 @@ class DAVISEvaluation(object):
"""
self.davis_root = davis_root
self.task = task
- self.dataset = DAVIS(root=davis_root, task=task, subset=gt_set, sequences=sequences, codalab=codalab)
+ self.year = year
+ self.dataset = DAVIS(root=davis_root, task=task, subset=gt_set, sequences=sequences, codalab=codalab, year=self.year)
@staticmethod
def _evaluate_semisupervised(all_gt_masks, all_res_masks, all_void_masks, metric):
@@ -77,10 +78,12 @@ class DAVISEvaluation(object):
if 'F' in metric:
metrics_res['F'] = {"M": [], "R": [], "D": [], "M_per_object": {}}
+ separate_objects_masks = self.year != '2016'
+
# Sweep all sequences
results = Results(root_dir=res_path)
for seq in tqdm(list(self.dataset.get_sequences())):
- all_gt_masks, all_void_masks, all_masks_id = self.dataset.get_all_masks(seq, True)
+ all_gt_masks, all_void_masks, all_masks_id = self.dataset.get_all_masks(seq, separate_objects_masks)
if self.task == 'semi-supervised':
all_gt_masks, all_masks_id = all_gt_masks[:, 1:-1, :, :], all_masks_id[1:-1]
all_res_masks = results.read_masks(seq, all_masks_id)
diff --git a/evaluation_method.py b/evaluation_method.py
index 04f67d1..d364f81 100644
--- a/evaluation_method.py
+++ b/evaluation_method.py
@@ -20,6 +20,8 @@ parser.add_argument('--task', type=str, help='Task to evaluate the results', def
choices=['semi-supervised', 'unsupervised'])
parser.add_argument('--results_path', type=str, help='Path to the folder containing the sequences folders',
required=True)
+parser.add_argument("--year", type=str, help="Davis dataset year (default: 2017)", default='2017',
+ choices=['2016', '2017', '2019'])
args, _ = parser.parse_known_args()
csv_name_global = f'global_results-{args.set}.csv'
csv_name_per_sequence = f'per-sequence_results-{args.set}.csv'
@@ -34,7 +36,7 @@ if os.path.exists(csv_name_global_path) and os.path.exists(csv_name_per_sequence
else:
print(f'Evaluating sequences for the {args.task} task...')
# Create dataset and evaluate
- dataset_eval = DAVISEvaluation(davis_root=args.davis_path, task=args.task, gt_set=args.set)
+ dataset_eval = DAVISEvaluation(davis_root=args.davis_path, task=args.task, gt_set=args.set, year=args.year)
metrics_res = dataset_eval.evaluate(args.results_path)
J, F = metrics_res['J'], metrics_res['F']
Thanks for seeing this!
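The core of the single-object change in the diff is that DAVIS 2016 annotations are binary, so instead of splitting the mask per object ID, the whole mask becomes one object channel. A toy illustration of that reshaping (variable names are mine, not the repo's):

```python
import numpy as np

# Toy stand-in for a sequence of binary DAVIS 2016 masks: (frames, H, W).
masks = np.array([[[0, 1], [1, 0]],
                  [[1, 1], [0, 0]]])

# The multi-object path splits per object ID; the single-object path just
# adds a leading object axis, giving boolean masks of shape (1, frames, H, W).
single_object = np.expand_dims(masks, axis=0) > 0
print(single_object.shape)  # (1, 2, 2, 2)
```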
When I tried to use the OSVOS and RVOS results provided in your repo as '--results_path', the terminal and the .csv both report a J&F-Mean of 1.0 for every sequence (which would mean the results are identical to the ground truth, which they obviously are not). How can I fix this and get the correct evaluation result? Thanks; I am a beginner and feeling confused.
My DAVIS path is set correctly.
my configuration parameters:
--task
semi-supervised
--results_path
results/semi-supervised/osvos
terminal returns
---------- Per sequence results for val ----------
Sequence J-Mean F-Mean
bike-packing_1 1.0 1.0
bike-packing_2 1.0 1.0
blackswan_1 1.0 1.0
bmx-trees_1 1.0 1.0
bmx-trees_2 1.0 1.0
breakdance_1 1.0 1.0
camel_1 1.0 1.0
car-roundabout_1 1.0 1.0
car-shadow_1 1.0 1.0
cows_1 1.0 1.0
dance-twirl_1 1.0 1.0
dog_1 1.0 1.0
dogs-jump_1 1.0 1.0
dogs-jump_2 1.0 1.0
Hi all,
This is the command I ran, and I got an unexpected error. I have set the path to DAVIS correctly.
python evaluation_method.py --task semi-supervised --results_path my_results/semi-supervised
Evaluating sequences for the semi-supervised task...
0%| | 0/30 [00:00<?, ?it/s]bike-packing frame 00001 not found!
The frames have to be indexed PNG files placed inside the corespondent sequence folder.
The indexes have to match with the initial frame.
IOError: No such file or directory
0%| | 0/30 [00:00<?, ?it/s]
Could anyone help me to figure out the problem?
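The error above usually means the result files do not match the expected names: as far as I can tell, the evaluator expects zero-padded, frame-indexed PNGs inside each sequence folder (e.g. `my_results/semi-supervised/bike-packing/00000.png`, `00001.png`, ...). A small hypothetical helper to spot missing files before running the evaluation:

```python
import os

def missing_result_masks(results_path, sequence, num_frames):
    """Return the expected PNG names (00000.png, 00001.png, ...) that are
    absent from results_path/sequence. An empty list means the layout
    looks consistent with the evaluator's naming scheme."""
    seq_dir = os.path.join(results_path, sequence)
    expected = [f'{i:05d}.png' for i in range(num_frames)]
    return [name for name in expected
            if not os.path.isfile(os.path.join(seq_dir, name))]
```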
Hi there,
I noticed that since DAVIS 2017, temporal stability has not been one of the evaluation metrics. Would you mind explaining the reason? It seems to be an important aspect, and a numeric metric for it would be desirable.
Thanks
When the number of predicted objects is less than the number of GT objects, this line leads to an error:
https://github.com/davisvideochallenge/davis2017-evaluation/blob/master/davis2017/evaluation.py#L30
So I think the one-hot conversion of the result masks should use the GT object count instead of the count derived from the result masks.
Thanks.
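The suggested fix can be sketched like this (a hypothetical helper, not the repo's code): encode the prediction against the ground-truth object count, so that objects the method missed become empty channels rather than causing a shape mismatch downstream:

```python
import numpy as np

def to_one_hot(mask, num_objects):
    """One-hot encode an integer label mask (0 = background) against a
    fixed object count taken from the ground truth. Missed objects
    simply yield all-False channels."""
    return np.stack([mask == obj_id for obj_id in range(1, num_objects + 1)])
```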
Hi,
For semi-supervised evaluation, it seems that the last frame is excluded (link).
The reason for excluding the first frame is clear, but I would appreciate an explanation of why the last frame is excluded as well.
Thank you!