scikit-image / scikit-image
Image processing in Python
Home Page: https://scikit-image.org
License: Other
from skimage import filter, data
image = data.page()
filter.threshold_adaptive(image, block_size=10, offset=10)
(observed on the gray Lena image, both in float and ubyte)
Do you agree that this is confusing? This is an easy fix (for after the release), but it bothers me a little.
I get two test failures in the segmentation module (test_felz... and test_quickshift):
ImportError: cannot import name assert_greater
I think this function is new in nose 1.1.3, but I'm not sure. The real problem is that pip and easy_install seem to think that 1.1.2 is the current version.
This isn't a big issue (I'll probably just upgrade my nose install), but sklearn has compatibility functions.
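A compatibility shim in the spirit of sklearn's could look like the sketch below: fall back to a local definition when the installed nose predates assert_greater. The signature here is an assumption, modeled on unittest's.

```python
# Hedged sketch of a nose-compatibility shim; the fallback signature is
# an assumption modeled on unittest's assertGreater.
try:
    from nose.tools import assert_greater
except ImportError:
    def assert_greater(a, b, msg=None):
        """Fail unless a is strictly greater than b."""
        if not a > b:
            raise AssertionError(msg or "%r not greater than %r" % (a, b))
```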
Under Ubuntu (Oneiric, but this probably applies to most Debian-based distros), installing FreeImage from the repository (apt-get install libfreeimage3) installs the shared object as:
/usr/lib/libfreeimage.so.3
This is not detected by skimage's freeimage_plugin.py, hence raising:
RuntimeError: Could not find the plugin "freeimage" for imread
A workaround is to symlink /usr/lib/libfreeimage.so.3 to /usr/lib/libfreeimage.so:
sudo ln -s /usr/lib/libfreeimage.so.3 /usr/lib/libfreeimage.so
The distribution is missing many files that are present in the git repository.
from skimage.viewer import ImageViewer
from skimage.viewer.widgets import Slider
from skimage.viewer.widgets.history import SaveButtons
from skimage.viewer.plugins.overlayplugin import OverlayPlugin
from skimage import data, filter
image = data.coins()
viewer = ImageViewer(image)
plugin = OverlayPlugin(image_filter=filter.canny)
plugin += Slider('sigma', 0, 5, update_on='release')
plugin += SaveButtons()
viewer += plugin
viewer.show()
and
from skimage import io
io.imshow(io.pop())
When I do a "nosetests" at the top level, I get a segfault in
test_opencv_cv.TestCalibrateCamera2.test_cvCalibrateCamera2_Identity.
Currently, the dtype conversion functions (i.e. skimage.img_as_*) don't handle bool arrays (a ValueError is raised).
I think it would make sense for these functions to accept bool arrays and return arrays with False mapped to 0 and True mapped to the maximum value for the given image dtype.
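A minimal numpy-only sketch of the proposed behavior; the helper name bool_to_dtype is hypothetical and not part of skimage:

```python
import numpy as np

def bool_to_dtype(image, dtype):
    # Hypothetical helper: map False -> 0 and True -> the maximum value
    # for the target dtype (1.0 for floats, following skimage's convention).
    image = np.asarray(image, dtype=bool)
    if np.issubdtype(dtype, np.integer):
        imax = np.iinfo(dtype).max
    else:
        imax = 1.0
    return image.astype(dtype) * dtype(imax)

mask = np.array([[True, False], [False, True]])
print(bool_to_dtype(mask, np.uint8))   # True pixels become 255
```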
The plot_label example is broken (none of the coins are properly identified; labeled image appears as horizontal bands). Apparently, the problem arose in the following commit:
https://github.com/scikits-image/scikits-image/commit/b05c062d2490c9be607ec69d8302aa6dc9d4dff5
Hi guys,
I wonder if you're having the same problem on your machine.
So far this bug is reproduced on a Gentoo Linux and Mac OS X
(Snow Leopard).
The following code:
#!/usr/bin/env python
import logging as log
log.basicConfig(level=log.INFO)
import skimage
log.info('test')
prints:
INFO:root:test
But if one inverts the order of the imports:
#!/usr/bin/env python
import skimage
import logging as log
log.basicConfig(level=log.INFO)
log.info('test')
then nothing prints on screen.
Many of skimage's Cython extension modules use the C int type instead of ssize_t (np.intp or Py_ssize_t) for indexing, shape, and size variables. While this works in certain cases, it raises compiler warnings, is harder to review, and can fail on 64-bit platforms. It would be better to consistently use Py_ssize_t or np.intp where Python and numpy use them.
See for example:
Skimage depends on gst. Importing gst interferes with the functionality of argparse. See http://stackoverflow.com/a/12417626/19501 for an example. The same error happens if the statement "import gst" is replaced by "from skimage import io".
from skimage.viewer import ImageViewer
from skimage.viewer.widgets import Slider
from skimage.viewer.plugins.overlayplugin import OverlayPlugin
from skimage import data, filter
image = data.coins()
viewer = ImageViewer(image)
plugin = OverlayPlugin(image_filter=filter.canny)
plugin += Slider('sigma', 0, 5, update_on='release')
viewer += plugin
viewer.show()
print "Asd"
No "Asd" is ever printed.
I installed skimage cdb61f8 on Mac OS X 10.7 with pyfits 3.1-0.0.dev846.
There are two FITS-related test failures:
http://dl.dropbox.com/u/4923986/bug_reports/skimage-test.log
Also, there are lots of warnings when building the docs:
http://dl.dropbox.com/u/4923986/bug_reports/skimage-docs.log
With non-int64 images, skimage.morphology.label raises an error.
For bool images::
ValueError: Does not understand character buffer dtype format string ('?')
For other dtypes::
ValueError: Buffer dtype mismatch, expected 'DTYPE_t' but got '*'
where * is any input dtype other than int64.
The label function should take care of casting the input.
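Until the cast happens inside label itself, the workaround is just to cast on the caller's side. A numpy-only sketch (the label call itself is left commented out here):

```python
import numpy as np

# Workaround sketch: cast to int64 before calling label, since the Cython
# code currently only accepts that dtype; bool ('?') input hits the same path.
image = np.array([[True, False], [False, True]])
image64 = image.astype(np.int64)
print(image64.dtype)   # int64
# labeled = label(image64)   # would now avoid the buffer dtype mismatch
```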
Running nosetests --with-doctest skimage, I get some failures:
FAILED (SKIP=14, errors=45, failures=26)
A lot of them are
TypeError: 'module' object is not callable
which I don't understand.
Most are nuisances like numpy not being imported rather than actual errors. I found some errors while working on the color module, though, so I thought I'd give this a go.
When trying to build the docs based on the most recent commits, I get the following error:
Traceback (most recent call last):
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 321, in import_object
__import__(self.modname)
ImportError: No module named MatplotlibCanvas
and:
# Sphinx version: 1.1.3
# Python version: 2.7.3
# Docutils version: 0.9.1 release
# Jinja2 version: 2.6
Traceback (most recent call last):
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/sphinx/cmdline.py", line 189, in main
app.build(force_all, filenames)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/sphinx/application.py", line 204, in build
self.builder.build_update()
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 196, in build_update
'out of date' % len(to_build))
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 216, in build
purple, length):
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/sphinx/builders/__init__.py", line 120, in status_iterator
for item in iterable:
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/sphinx/environment.py", line 613, in update_generator
self.read_doc(docname, app=app)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/sphinx/environment.py", line 761, in read_doc
pub.publish()
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/core.py", line 221, in publish
self.settings)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/readers/__init__.py", line 69, in read
self.parse()
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/readers/__init__.py", line 75, in parse
self.parser.parse(self.input, document)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/parsers/rst/__init__.py", line 162, in parse
self.statemachine.run(inputlines, document, inliner=self.inliner)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 174, in run
input_source=document['source'])
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/statemachine.py", line 239, in run
context, state, transitions)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/statemachine.py", line 460, in check_line
return method(match, context, next_state)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2706, in underline
self.section(title, source, style, lineno - 1, messages)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 331, in section
self.new_subsection(title, lineno, messages)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 399, in new_subsection
node=section_node, match_titles=True)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 286, in nested_parse
node=node, match_titles=match_titles)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 199, in run
results = StateMachineWS.run(self, input_lines, input_offset)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/statemachine.py", line 239, in run
context, state, transitions)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/statemachine.py", line 460, in check_line
return method(match, context, next_state)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2706, in underline
self.section(title, source, style, lineno - 1, messages)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 331, in section
self.new_subsection(title, lineno, messages)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 399, in new_subsection
node=section_node, match_titles=True)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 286, in nested_parse
node=node, match_titles=match_titles)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 199, in run
results = StateMachineWS.run(self, input_lines, input_offset)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/statemachine.py", line 239, in run
context, state, transitions)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/statemachine.py", line 460, in check_line
return method(match, context, next_state)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2279, in explicit_markup
nodelist, blank_finish = self.explicit_construct(match)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2291, in explicit_construct
return method(self, expmatch)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2034, in directive
directive_class, match, type_name, option_presets)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/docutils/parsers/rst/states.py", line 2083, in run_directive
result = directive_instance.run()
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 1303, in run
documenter.generate(more_content=self.content)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 731, in generate
self.add_content(more_content)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 1046, in add_content
ModuleLevelDocumenter.add_content(self, more_content)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 477, in add_content
for i, line in enumerate(self.process_doc(docstrings)):
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 441, in process_doc
self.options, docstringlines)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/sphinx/application.py", line 314, in emit
results.append(callback(self, *args))
File "/Users/hannes/Development/scikits-image/doc/source/../ext/numpydoc.py", line 36, in mangle_docstrings
doc = get_doc_object(obj, what, u"\n".join(lines), config=cfg)
File "/Users/hannes/Development/scikits-image/doc/source/../ext/docscrape_sphinx.py", line 220, in get_doc_object
config=config)
File "/Users/hannes/Development/scikits-image/doc/source/../ext/docscrape_sphinx.py", line 201, in __init__
ClassDoc.__init__(self, obj, doc=doc, func_doc=None, config=config)
File "/Users/hannes/Development/scikits-image/doc/source/../ext/docscrape.py", line 481, in __init__
for name in sorted(self.methods)]
File "/Users/hannes/Development/scikits-image/doc/source/../ext/docscrape.py", line 490, in methods
return [name for name,func in inspect.getmembers(self._cls)
File "/usr/local/Cellar/python/2.7.3/lib/python2.7/inspect.py", line 253, in getmembers
value = getattr(object, key)
File "/Users/hannes/.virtualenvs/skimage/lib/python2.7/site-packages/scikits_image-0.7dev-py2.7-macosx-10.7-x86_64.egg/skimage/viewer/utils/core.py", line 21, in __get__
raise RuntimeError(self.msg)
RuntimeError: Widget is not attached to a Plugin.
For integer images, skimage.filter.tv_denoise returns a float image that has the same range as the integer input (e.g., 0--255 for uint8 images, whereas float images should be between 0 and 1).
This contradicts the scikits-image user guide, which says that all functions return images within the valid range. Moreover, such an image will fail to save with certain io backends (e.g. 'qt') and gets saved incorrectly with others (e.g. 'pil').
Your GitHub repo still has http://stefanv.github.com/scikits.image/ listed as the homepage for scikit-image. You might want to change it to the new home.
Running nosetests -s -x -v --with-doctest
on HEAD:
test_exposure.test_equalize_ubyte ... ERROR
======================================================================
ERROR: test_exposure.test_equalize_ubyte
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib64/python2.6/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/home/npinto/venv/skimage-shape-system/scikits-image/skimage/exposure/tests/test_exposure.py", line 17, in test_equalize_ubyte
img_eq = exposure.equalize(test_img)
File "/home/npinto/venv/skimage-shape-system/scikits-image/skimage/exposure/exposure.py", line 104, in equalize
image = skimage.img_as_float(image)
File "/home/npinto/venv/skimage-shape-system/scikits-image/skimage/util/dtype.py", line 181, in img_as_float
return convert(image, np.float64)
File "/home/npinto/venv/skimage-shape-system/scikits-image/skimage/util/dtype.py", line 75, in convert
raise ValueError("Images of type float must be between 0 and 1")
ValueError: Images of type float must be between 0 and 1
----------------------------------------------------------------------
Ran 23 tests in 0.134s
FAILED (errors=1)
I was following your code here and it gets stuck on:
c_plus = c2a * w + c1a * (1.0 - w) <= m
with: ValueError: shape mismatch: objects cannot be broadcast to a single shape
On the scikits-image.org website:
Release! Version 0.5 26/02/2011
Shouldn't this be 26/02/2012?
As part of improving the performance of certain functions, I had a look at the sobel, prewitt, etc. implementations and was wondering what the binary_erosion of the mask is for. If this step were left out, the computation could be sped up quite a bit.
If scikits-image is installed without nose, a message about unit tests not being available is printed on import; this is a little distracting to the casual end-user.
Hi,
I have been using graph.MCP_Geometric from the latest version of scikits.image 0.3 to find minimum travel costs with the find_costs function. On most cost surfaces it works without problems, but with large cost matrices it doesn't even try to find those costs. It looks as if the algorithm stopped before the second iteration: all the starts have a cumulative cost of 0, while the rest of the matrix is inf. Currently I am trying to use it on a cost matrix of size 12837 x 43345. If I split it in half it works, but given what I want to do now, I cannot split the cost matrix and work on subsets anymore.
Is there a size limit? Any ideas as to what might be causing this?
I am working on a Mac Pro with 64 GB memory, two CPUs, OS X Lion, Python 2.7.2, NumPy 1.6.1.
I appreciate any help for solving this.
I've tested this as extensively as I know how, and contrary to the Note in its docstring, img_as_float returns negative inputs as negative outputs: the returned range is [-1, 1]. Notably, the docstring for _convert appears to be correct; the documentation for the more publicly recommended img_as_float may simply not have been updated at some point.
Two potential fixes:
1. The documentation for img_as_float() could be changed to reflect this behavior, removing the second line of the Note and changing the range in its first line to [-1, 1]. Its current behavior would then be correctly described; no actual code changes necessary. This would be my recommendation.
2. Some kind of shifting of negative values before or after the _convert call: either take the norm, pseudo-alias negative values (something like image[image < 0] = 1. - image[image < 0]), or shift-rescale the data to fill the proper [0, 1] range.
I could go about fixing this either way, but decided to ask what the expected behavior is before proceeding.
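For concreteness, a numpy sketch of the shift-rescale fix suggested above, assuming an input spanning [-1, 1]; the variable names are illustrative only:

```python
import numpy as np

# Hypothetical signed output of img_as_float, spanning [-1, 1] as reported.
image = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
# Shift-rescale into the documented [0, 1] range.
rescaled = (image - image.min()) / (image.max() - image.min())
print(rescaled)   # maps to 0, 0.25, 0.5, 0.75, 1
```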
If I try to run the example at http://scikits-image.org/docs/dev/auto_examples/plot_hough_transform.html, I get the error "ImportError: No module named skimage.io"
If I change skimage to scikits.image, it works.
When using the pil plugin to load a tiff file, e.g.
http://groups.google.com/group/scikits-image/attach/36fcae4bb4fabb09/1.tif.tar.gz?part=4
the result is
array([[ <PIL.TiffImagePlugin.TiffImageFile image mode=I;16 size=1536x1024 at 0x316BE60>]], dtype=object)
instead of an array of values. We need to implement some special handling of TiffImageFile objects.
The conversion problems I mentioned on the list have some implications that I'd like to remove before the release:
io.imshow(color.rgb2gray(data.lena()))
fails, since the result is a float image ranging from 0 to 255.
If there is a consensus on rounding or dividing, I'll fix it.
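Until rounding vs. dividing is settled, a numpy-only sketch of the dividing workaround; the linspace array below is a stand-in for rgb2gray's 0--255 float output:

```python
import numpy as np

gray = np.linspace(0.0, 255.0, 16).reshape(4, 4)  # stand-in for rgb2gray output
gray01 = gray / 255.0   # back into the valid [0, 1] float range for imshow
assert 0.0 <= gray01.min() and gray01.max() <= 1.0
```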
Here is a list of improvements to be made to the HOG module, and some ideas on implementation.
At present, the descriptor only works on a given image patch. The Dalal-Triggs implementation supports a list of keypoints as an argument. This would require changing the code to assign each pixel to an orientation bin, and to compute the orientation histogram only for the patch surrounding each keypoint.
The HOG feature descriptor was written with readability in mind, so this implementation is not particularly fast (600 ms for Lena). Improvements could come in the form of a Cythonised version or a GPU implementation.
The graph.spath tests are failing because of a regression in numpy.insert(); I have filed a numpy issue.
http://scikits-image.org/docs/dev/auto_examples/plot_watershed.html links to wikipedia but has extra "<>" in the href.
% cd scikits-image/skimage
% python setup.py --user develop (ssim⚡)
Traceback (most recent call last):
File "setup.py", line 38, in <module>
config = Configuration(top_path='').todict()
NameError: name 'Configuration' is not defined
Hi,
first of all I would like to congratulate all of you on your efforts. I am using this package and am looking forward to future extensions.
Recently I started using the label() function to evaluate some segmentation masks and stumbled across an issue with the background parameter of this function.
I pasted some unit tests below to illustrate my issue. From my understanding of the functionality all tests should pass, but they don't.
I am using v0.5 of scikits-image.
import unittest

import numpy as np
from numpy.testing import assert_array_equal

from skimage.morphology import label


class LabelTestsWithBackgroundParameters(unittest.TestCase):

    def test_label_TwoRegions4ConnectedExcludeBackground_CorrectLabels(self):
        """This test fails but shouldn't if I understand the docs correctly"""
        image_array = np.array([[0, 0, 6],
                                [0, 0, 6],
                                [5, 5, 5]])
        labeled_image = label(image_array, background=0)
        expected_labels = np.array([[-1, -1, 0],
                                    [-1, -1, 0],
                                    [ 1,  1, 1]])
        assert_array_equal(labeled_image, expected_labels)

    def test_label_OneRegionExcludeSurroundingBackground_CorrectLabels(self):
        """This test fails but shouldn't if I understand the docs correctly"""
        image_array = np.array([[0, 0, 0],
                                [0, 1, 0],
                                [0, 0, 0]])
        labeled_image = label(image_array, neighbors=4, background=0)
        expected_labels = np.array([[-1, -1, -1],
                                    [-1,  0, -1],
                                    [-1, -1, -1]])
        assert_array_equal(labeled_image, expected_labels)

    def test_label_OneRegionExcludeSurroundingBackgroundInt32_CorrectLabels(self):
        """This test fails but shouldn't if I understand the docs correctly"""
        image_array = np.array([[0, 0, 0],
                                [0, 1, 0],
                                [0, 0, 0]])
        labeled_image = label(image_array.astype(np.int32),
                              neighbors=4, background=0)
        expected_labels = np.array([[-1, -1, -1],
                                    [-1,  0, -1],
                                    [-1, -1, -1]], dtype=np.int32)
        assert_array_equal(labeled_image, expected_labels)

    def test_label_TwoRegionsExcludeBackground_CorrectLabels(self):
        """This example from the documentation passes"""
        image_array = np.array([[1, 0, 0],
                                [1, 1, 5],
                                [0, 0, 0]])
        labeled_image = label(image_array, background=0)
        expected_labels = np.array([[ 0, -1, -1],
                                    [ 0,  0,  1],
                                    [-1, -1, -1]])
        assert_array_equal(labeled_image, expected_labels)


class LabelTestsWithoutBackgroundParameter(unittest.TestCase):

    def test_label_ThreeConnectedRegions_LabelsInRowMajorOrder(self):
        image_array = np.array([[0, 0, 1, 1],
                                [2, 0, 5, 5],
                                [5, 5, 5, 5],
                                [9, 9, 9, 9]])
        labeled_image = label(image_array, neighbors=4)
        expected_labels = np.array([[0, 0, 1, 1],
                                    [2, 0, 3, 3],
                                    [3, 3, 3, 3],
                                    [4, 4, 4, 4]])
        assert_array_equal(labeled_image, expected_labels)

    def test_label_ThreeConnectedRegions8Connected_CorrectLabels(self):
        image_array = np.array([[0, 0, 6, 0],
                                [0, 0, 5, 6],
                                [5, 5, 0, 6],
                                [0, 0, 0, 0]])
        labeled_image = label(image_array, neighbors=8)
        expected_labels = np.array([[0, 0, 1, 2],
                                    [0, 0, 3, 1],
                                    [3, 3, 0, 1],
                                    [0, 0, 0, 0]])
        assert_array_equal(labeled_image, expected_labels)

    def test_label_OneRegionInTopLeftCornerExcludeBackground_CorrectLabels(self):
        image_array = np.array([[1, 0, 0],
                                [0, 0, 0],
                                [0, 0, 0]], dtype=np.int32)
        labeled_image = label(image_array.astype(np.int),
                              neighbors=4, background=0)
        expected_labels = np.array([[ 0, -1, -1],
                                    [-1, -1, -1],
                                    [-1, -1, -1]])
        assert_array_equal(labeled_image, expected_labels)


if __name__ == "__main__":
    #import sys;sys.argv = ['', 'Test.testName']
    unittest.main()
Hello, this is strange to me:
/Users/sergeyk/work/timely/synthetic [switching_to_pandas *] [sergeyk@dhcp-44-222] [21:25]
> python -c "import skimage; import sklearn"
/Users/sergeyk/work/timely/synthetic [switching_to_pandas *] [sergeyk@dhcp-44-222] [21:25]
> python -c "import sklearn; import skimage"
/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nose/util.py:14: DeprecationWarning: The compiler package is deprecated and removed in Python 3.x.
from compiler.consts import CO_GENERATOR
Any idea what's going on?
Running nosetests --with-doctest
gives me 45 errors and 25 failures (actually one of those errors occurs even without the doctest flag, but that's a separate issue). I don't have time to look into this now; I just wanted to post it so I don't forget.
I'm not sure what our policy is for doctests (i.e. are doctests just examples, or are they actually tests). A few of the errors/failures aren't important (e.g. matplotlib returns objects which get printed as output), but other errors/failures suggest outdated docstrings. BTW, did anyone follow the numpy discussion (a long time ago) on writing doctest-like examples that don't get run as doctests? That could be useful (with restraint) when writing examples that aren't perfectly testable with doctest.
I need fast LAB color conversions (python-colormath would take a couple of hours to convert a 2048x1248 image), which appear to only be available in the development version. However, when I run "sudo python setup.py install", I get "gcc-4.0: command not found."
I have gcc-4.2. How can setup.py figure out to call that instead?
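One thing that may work, assuming the build is driven by distutils: distutils reads the CC environment variable, so pointing it at the installed compiler can override the recorded gcc-4.0. This is a sketch under that assumption, not a verified fix for this setup:

```shell
# Tell distutils which compiler to use before building (assumption: the
# build honors the CC environment variable, as distutils does).
export CC=gcc-4.2
# then rebuild as before, preserving the environment through sudo:
# sudo -E python setup.py install
```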
skimage 0.7.0 on Windows XP 32 bit
from skimage.morphology import watershed
Results in:
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\skimage\morphology\__init__.py", line 7, in <module>
from ._skeletonize import skeletonize, medial_axis
ImportError: cannot import name skeletonize
It would be nice to have an informative error when attempting to build if required dependencies (Cython, NumPy, etc.) are missing or have an insufficient version.
The input image parameter is sometimes called "image", "data", or "arr". For readability, I think it should be named consistently across all functions; I would prefer "image" or "input".
Consider lines 78 to 82 in the util.dtype._convert function:
image = image.astype(np.float64)
out = image - imin
out *= scale
out += shift
out = round_fn(out).astype(dtype)
This is potentially very memory and computationally intensive. For example, in the case of uint8 to uint16 conversion, two intermediate arrays of 16 times the size of the input array will be created and the conversion could be done by simply shifting bits instead of using double precision floating point operations.
Second, the scale and round functions are not always correct. For example, double to uint8 should be converted as
np.uint8(image * np.nextafter(image.dtype.type(256), image.dtype.type(0)))
instead of
np.round(image * 255).astype(np.uint8)
Try with image = np.array([1.0/257.0])
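For instance, the uint8-to-uint16 case can be done exactly with integer operations alone. This numpy sketch checks the bit-replication identity mentioned above:

```python
import numpy as np

a = np.arange(256, dtype=np.uint8)
# Widening by bit replication: (x << 8) | x == x * 257, which maps
# 0 -> 0 and 255 -> 65535 exactly, with no float intermediates.
widened = (a.astype(np.uint16) << 8) | a
assert widened[0] == 0 and widened[255] == 65535
assert np.array_equal(widened, a.astype(np.uint32) * 257)
```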
The PIL plugin should call skimage.img_as_ubyte on the data before displaying it, rather than failing.
The FreeImage IO plugin looks for the library in the _plugins folder and a few other places, which are pretty OS X/linux specific:
lib_dirs = [os.path.dirname(__file__),
'/lib',
'/usr/lib',
'/usr/local/lib',
'/opt/local/lib',
]
Where would good places on Windows be to look for additional DLLs?
Zach
I think it would make sense to automatically cast the image to be C-contiguous, i.e. img = np.array(img, order='C'). I'm not sure where in the code this would go, since it might affect more than just measure.find_contours.
Example below:
import numpy as np
from skimage import measure
img = np.ones((3,3), order='F')
measure.find_contours(img, 0.5)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-5-43050895b1f9> in <module>()
2 from skimage import measure
3 img = np.ones((3,3), order='F')
----> 4 measure.find_contours(img, 0.5)
/Users/dharhas/work/scikits-image/skimage/measure/find_contours.pyc in find_contours(array, level, fully_connected, positive_orientation)
106 ' "positive_orientation" must be either "high" or "low".')
107 point_list = _find_contours.iterate_and_store(array, level,
--> 108 fully_connected == 'high')
109 contours = _assemble_contours(_take_2(point_list))
110 if positive_orientation == 'high':
/Users/dharhas/work/scikits-image/skimage/measure/_find_contours.so in skimage.measure._find_contours.iterate_and_store (skimage/measure/_find_contours.c:1269)()
ValueError: ndarray is not C-contiguous
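A numpy-only sketch of the proposed cast, which is where I'd expect a fix to normalize the memory layout before the Cython call:

```python
import numpy as np

img = np.ones((3, 3), order='F')      # Fortran-ordered, as in the report
assert not img.flags['C_CONTIGUOUS']
img_c = np.ascontiguousarray(img)     # equivalent to np.array(img, order='C')
assert img_c.flags['C_CONTIGUOUS']
# measure.find_contours(img_c, 0.5) would then succeed (call omitted here)
```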
Hi all,
When calling feature.peak_local_max() on a flat image, all the points of the image are returned:
>>> test_image = np.ones((10,10))
>>> coords = feature.peak_local_max(test_image)
>>> len(coords)
100
I wouldn't expect that... It originates from two things in the code:
The calculated threshold can be higher than the minimum of the image:
corner_threshold = np.max(image.ravel()) * threshold
All points greater than or equal to the threshold are retrieved:
image_t = (image >= corner_threshold) * 1
So this is a bit strange to me, and it is really bad when you have an empty frame in a sequence, which tends to happen. Also, I think the threshold argument is a bit unusual: I would more often expect an absolute value for the threshold, not a relative one.
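The behavior is easy to reproduce with plain numpy, mirroring the two lines quoted above; the value 0.1 is an assumed relative threshold:

```python
import numpy as np

test_image = np.ones((10, 10))        # flat image, no real peaks
threshold = 0.1                       # relative threshold, as in the report
corner_threshold = np.max(test_image.ravel()) * threshold
image_t = (test_image >= corner_threshold) * 1
print(image_t.sum())                  # 100: every pixel passes
```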