This project has been merged into ilastik repository.
Lazyflow is a component of ilastik. It is typically installed using ilastik's conda-based build scripts.
Please see the online developer documentation for API reference.
Lazy, parallel, on-demand, zero-copy numpy array data flows with caching and dirty propagation.
License: Other
Requested by Anna: remove the endless operator.inputs["slotname"].connect(...) chains.
estimated time: 1 hour
File "/home/cstraehl/Projects/PHD/code/src/lazyflow/lazyflow/request/request.py", line 218, in _execute
self._result = self.fn()
File "/home/cstraehl/Projects/PHD/code/src/lazyflow/lazyflow/graph.py", line 811, in __call__
result_op = self.operator.execute(self.slot, (), self.roi, destination)
File "/home/cstraehl/Projects/PHD/code/src/ilastik/ilastik/workflows/carving/opPreprocessing.py", line 181, in execute
self.enableDownstream(True)
File "/home/cstraehl/Projects/PHD/code/src/ilastik/ilastik/workflows/carving/opPreprocessing.py", line 97, in enableDownstream
self.applet.enableDownstream(ed)
File "/home/cstraehl/Projects/PHD/code/src/ilastik/ilastik/workflows/carving/preprocessingApplet.py", line 35, in enableDownstream
self.guiControlSignal.emit(ControlCommand.Pop)
File "/home/cstraehl/Projects/PHD/code/src/ilastik/ilastik/utility/simpleSignal.py", line 21, in emit
f(*args, **kwargs)
File "/home/cstraehl/Projects/PHD/code/src/ilastik/ilastik/utility/bind.py", line 29, in __call__
self.f(*(self.bound_args + args[0:self.numUnboundArgs]))
File "/home/cstraehl/Projects/PHD/code/src/ilastik/ilastik/shell/gui/ilastikShell.py", line 761, in handleAppletGuiControlSignal
command = self._controlCmds[applet_index].pop()
IndexError: pop from empty list
This is on Win32, standard classification workflow:
Exception in thread Thread-2:
Traceback (most recent call last):
File "c:\Python26\lib\threading.py", line 532, in __bootstrap_inner
self.run()
File "c:\ukoethe\ilastik-git\lazyflow\lazyflow\graph.py", line 2071, in run
gr.switch( reqObject)
File "c:\ukoethe\ilastik-git\lazyflow\lazyflow\graph.py", line 2040, in processReqObject
reqObject.func(reqObject.arg1, reqObject.key, reqObject.destination)
File "c:\ukoethe\ilastik-git\lazyflow\lazyflow\operators\operators.py", line 665, in getOutSlot
req.wait()
AttributeError: 'NoneType' object has no attribute 'wait'
It seems like a bug that it doesn't do so right now. Add a unit test for this when it is fixed.
TestOpInputReader tries to remove "tests/test.png" at line 21 of tests/testOpInputDataReader.py (os.remove(cls.testImageFileName)). The file is not present, so the test fails.
It seems that it would be more convenient if some operators could have local state.
Provide means for saving and retrieving the local state of these operators when the graph is saved.
Reuse the existing continuous-memory array cache for that.
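A minimal sketch of what such save/restore hooks could look like. The serializeState/deserializeState names are hypothetical, not the real lazyflow API; the idea is that the graph would invoke them when writing and reading the project file:

```python
import pickle

class StatefulOperator:
    """Hypothetical operator with local state; all names are illustrative."""

    def __init__(self):
        self._state = {}  # e.g. the contents of a continuous-memory array cache

    def serializeState(self):
        # Called by the graph on save; returns an opaque blob.
        return pickle.dumps(self._state)

    def deserializeState(self, blob):
        # Called by the graph on load; restores the local state.
        self._state = pickle.loads(blob)

op = StatefulOperator()
op._state = {"labels": [1, 2, 3]}
blob = op.serializeState()

restored = StatefulOperator()
restored.deserializeState(blob)
assert restored._state == {"labels": [1, 2, 3]}
```

With hooks like these, graph serialization only needs to iterate over its operators and store each blob alongside the connection topology.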
classifierOperators.py:OpSegmentation
and generic.py:OpMaxChannelIndicatorOperator
have similar jobs. Perhaps they can be combined.
Slots can be assigned a default value when they are created:
MySlot = InputSlot(value=5)
But there seems to be a problem when an operator with defaulted slots is wrapped in an OperatorWrapper. To my knowledge, the test suite does not cover this case.
If you call setValue on a slot, it calls setDirty(slice(None)), which fails if the slot's rtype is a list, because a slice object is not iterable. Maybe we need special handling for this case, such as an "everything" property for all slot types?
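The failure is easy to reproduce in plain Python, and one possible guard is to normalize the dirty region before iterating. The helper name here is illustrative, not the actual lazyflow code:

```python
def normalize_dirty_region(region, length):
    """Turn a slice (including slice(None), i.e. 'everything') into an
    iterable of indices, so list-typed slots can iterate dirty entries."""
    if isinstance(region, slice):
        return range(*region.indices(length))
    return region

# slice objects themselves are not iterable -- this is exactly what
# breaks setDirty(slice(None)) when the slot rtype is a list:
try:
    iter(slice(None))
except TypeError:
    pass

# With normalization, "everything" works for a list-typed slot of length 4:
assert list(normalize_dirty_region(slice(None), 4)) == [0, 1, 2, 3]
assert list(normalize_dirty_region(slice(1, 3), 4)) == [1, 2]
```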
Discussed with Anna: find the best way to reuse the complete pipeline for prediction, and prevent state changes from destroying the old classifier when new images are added to the graph/pipeline.
OutputSlot has public members like shape and dtype alongside private properties (_shape and _dtype). This breaks the idea of properties: one would usually manipulate the public members directly rather than go through the private properties.
According to Thorben, drtile is obsolete. Remove it, then. Currently we have to compile the module and install it manually; that work is unnecessary.
Assuming we have some slot called slot:

d = {0: "astring"}
slot.setValue(d)
slot[:].wait()  # returns the full dictionary
slot.value      # returns the string "astring"

Slot.value() assumes that the value was wrapped in a numpy.ndarray. Either this wrapping should happen automatically in Slot.setValue(), or this assumption should be weakened.
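A toy model of the weakened assumption: if setValue always wraps the value in a one-element container, value() can unwrap uniformly regardless of type. All names here are stand-ins, not the real Slot API:

```python
class ToySlot:
    """Stand-in for lazyflow's Slot, illustrating uniform value wrapping."""

    def setValue(self, v):
        # Always wrap, so value() can unwrap with [0] for any type,
        # instead of assuming the caller provided a numpy.ndarray.
        self._data = [v]

    def wait(self):
        return self._data

    @property
    def value(self):
        return self.wait()[0]

s = ToySlot()
s.setValue({0: "astring"})
assert s.value == {0: "astring"}  # the whole dict, not just "astring"
```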
The lightfield student assistant pointed out that it would be convenient if slots generated from HDF5 datasets exposed the HDF5 attributes as metadata. I agree.
Inside SubRegion's __init__ it is assumed that the slot is ArrayLike. This is in general a wrong assumption, since non-array-like slots can also have SubRegion ROIs. This breaks the slot typing system.
Traceback (most recent call last):
File "c:\volumeeditor\volumeeditor\imageSceneRendering.py", line 199, in run
self._runImpl()
File "c:\volumeeditor\volumeeditor\imageSceneRendering.py", line 194, in _runImpl
self._takeJob()
File "c:\volumeeditor\volumeeditor\imageSceneRendering.py", line 189, in _takeJob
request.notify(self._onPatchFinished, request=request, patchNumber=patchNr, patchLayer=layerNr)
File "c:\volumeeditor\volumeeditor\pixelpipeline\imagesources.py", line 179, in notify
self._arrayreq.notify(self._onNotify, package = (callback, kwargs))
File "c:\volumeeditor\volumeeditor\pixelpipeline\slicesources.py", line 27, in notify
self._ar.notify(self._onNotify, package = (callback, kwargs))
File "c:\volumeeditor\volumeeditor\pixelpipeline\datasources.py", line 91, in notify
self._lazyflow_request.notify( callback, **kwargs)
File "C:\lazyflow\lazyflow\graph.py", line 414, in notify
assert self.destination is not None
AssertionError
provide a way to store a graph to a hdf5 file, including connections and arraychaches
The ArrayCacheMemoryMgr class and OpArrayCache work cooperatively to implement an LRU cache replacement strategy. This code needs review and thorough testing. Deadlocks inevitably occur during long memory-intensive batch jobs. An inspection of the code reveals multiple opportunities for lock order inversion to occur, involving ArrayCacheMemoryMgr._lock, OpArrayCache._lock, and OpArrayCache._cacheLock, which are presumably the source of the deadlocks.
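One standard way to rule out lock-order inversion is to impose a single global acquisition order on the locks involved. A sketch, where the lock variables only stand in for ArrayCacheMemoryMgr._lock and the OpArrayCache locks:

```python
import threading

def acquire_in_order(*locks):
    """Acquire all locks in one fixed global order (here: by id()).
    If every thread uses this, circular waits cannot form."""
    ordered = sorted(locks, key=id)
    for lk in ordered:
        lk.acquire()
    return ordered

mgr_lock = threading.Lock()    # stands in for ArrayCacheMemoryMgr._lock
cache_lock = threading.Lock()  # stands in for OpArrayCache._lock

def worker():
    held = acquire_in_order(mgr_lock, cache_lock)
    try:
        pass  # touch shared cache bookkeeping here
    finally:
        for lk in reversed(held):
            lk.release()

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

In the real code this would mean documenting (and enforcing) one order for ArrayCacheMemoryMgr._lock, OpArrayCache._lock, and OpArrayCache._cacheLock, and never acquiring them in any other order.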
lazyflow.Request objects automatically submit themselves upon construction. This turns out to be inconvenient in some cases.
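A sketch of opt-in submission (the submit() name is hypothetical): construction only records the work, and wait() still submits implicitly so existing call sites keep working:

```python
class DeferredRequest:
    """Toy model of a request that does NOT run on construction."""

    def __init__(self, fn):
        self._fn = fn
        self._submitted = False
        self._result = None

    def submit(self):
        # Explicit opt-in; real code would hand off to a worker pool.
        if not self._submitted:
            self._submitted = True
            self._result = self._fn()
        return self

    def wait(self):
        self.submit()  # backwards compatible: waiting implies submitting
        return self._result

req = DeferredRequest(lambda: 6 * 7)
assert not req._submitted  # nothing ran at construction time
assert req.wait() == 42
```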
lazyflow: '856baa84ef6704'...
OpTrainRandomForest: (1L, 32L, 32L, 32L, 1L) (1L, 32L, 32L, 32L, 1L)
featMatrix.shape: (1414L, 1L)
labelsMatrix.shape: (1414L, 1L)
Exception in thread Thread-3955:
Traceback (most recent call last):
File "C:\Python26\lib\threading.py", line 532, in __bootstrap_inner
self.run()
File "C:\lazyflow\lazyflow\graph.py", line 2134, in run
gr.switch( gr)
File "C:\lazyflow\lazyflow\graph.py", line 307, in _execute
self.func(self.arg1,self.key, self.destination)
File "C:\lazyflow\lazyflow\operators\generic.py", line 396, in getOutSlot
res = self.inputs["Input"][newKey].allocate().wait()
File "C:\lazyflow\lazyflow\graph.py", line 364, in wait
self._execute(gr)
File "C:\lazyflow\lazyflow\graph.py", line 307, in _execute
self.func(self.arg1,self.key, self.destination)
File "C:\lazyflow\lazyflow\operators\operators.py", line 46, in getOutSlot
res = req.wait()
File "C:\lazyflow\lazyflow\graph.py", line 364, in wait
self._execute(gr)
File "C:\lazyflow\lazyflow\graph.py", line 307, in _execute
self.func(self.arg1,self.key, self.destination)
File "C:\lazyflow\lazyflow\operators\classifierOperators.py", line 223, in getOutSlot
RF=self.inputs["Classifier"].value
File "C:\lazyflow\lazyflow\graph.py", line 556, in value
temp = self[:].allocate().wait()[0]
File "C:\lazyflow\lazyflow\graph.py", line 364, in wait
self._execute(gr)
File "C:\lazyflow\lazyflow\graph.py", line 307, in _execute
self.func(self.arg1,self.key, self.destination)
File "C:\lazyflow\lazyflow\operators\operators.py", line 366, in getOutSlot
self.graph._notifyMemoryHit()
File "C:\lazyflow\lazyflow\graph.py", line 2244, in _notifyMemoryHit
for c in self._registeredCaches:
RuntimeError: deque mutated during iteration
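The error at the bottom of the trace reproduces with any deque that is modified while being iterated. Iterating over a snapshot (taken under the registration lock in the real code) avoids it:

```python
from collections import deque

registered_caches = deque(["cacheA", "cacheB"])

# Reproduce the failure: mutating during iteration raises RuntimeError.
raised = False
try:
    for c in registered_caches:
        registered_caches.append("cacheC")  # e.g. another thread registering
except RuntimeError:
    raised = True
assert raised

# Fix sketch: iterate over a snapshot, so concurrent registration is safe.
registered_caches = deque(["cacheA", "cacheB"])
for c in list(registered_caches):
    registered_caches.append("cacheC")
assert len(registered_caches) == 4
```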
Is this allowed?
<_onAboutToResize(newSize=4), <volumeeditor.imageScene2D.ImageScene2D object at 0x00000000053BA0D0>>
</_onAboutToResize, <volumeeditor.imageScene2D.ImageScene2D object at 0x00000000053BA0D0>>
DELETING LABEL 3
Traceback (most recent call last):
File "C:\widgets\classificationWorkflow.py", line 209, in onLabelAboutToBeRemoved
self.opLabels.inputs["deleteLabel"].setValue(il+1)
File "C:\lazyflow\lazyflow\graph.py", line 546, in setValue
self._checkNotifyConnect()
File "C:\lazyflow\lazyflow\graph.py", line 609, in _checkNotifyConnect
self.operator._notifyConnect(self)
File "C:\lazyflow\lazyflow\graph.py", line 2084, in _notifyConnect
self.notifyConnect(inputSlot)
File "C:\lazyflow\lazyflow\operators\operators.py", line 765, in notifyConnect
l.inputs["deleteLabel"].setValue(self.inputs['deleteLabel'].value)
AttributeError: 'numpy.int64' object has no attribute 'inputs'
removing row: <Label name=Label 3, color=<PyQt4.QtGui.QColor object at 0x0000000004EAE118>>
switching to label=<Label name=Label 2, color=<PyQt4.QtGui.QColor object at 0x0000000004EAE0B0>>
Setting Drawnnumer 2
onBrushColor
Traceback (most recent call last):
File "C:\widgets\labelListModel.py", line 47, in onSelectionChanged
self.labelSelected.emit(selected[0].indexes()[0].row())
IndexError: sequence index out of range
removing 1 out of 2
found the prediction <Label name=Label 2, color=<PyQt4.QtGui.QColor object at 0x0000000004EAE0B0>> <
ImagePump.onRowsAboutToBeRemoved
<_onAboutToResize(newSize=3), <volumeeditor.imageScene2D.ImageScene2D object at 0x00000000053AFF28>>
(gdb) f 0
#0 0x00007fffe85afe78 in __pyx_tp_dealloc_4h5py_8_objects_ObjectID (o=0xe164ea8) at h5py/_objects.c:3599
3599 Py_XDECREF(p->_hash);
3595 static void __pyx_tp_dealloc_4h5py_8_objects_ObjectID(PyObject *o) {
3596 struct __pyx_obj_4h5py_8_objects_ObjectID *p = (struct __pyx_obj_4h5py_8_objects_ObjectID *)o;
3597 if (p->__weakref__) PyObject_ClearWeakRefs(o);
3598 Py_XDECREF(((PyObject *)p->proxy));
3599 Py_XDECREF(p->_hash);
3600 (*Py_TYPE(o)->tp_free)(o);
3601 }
(gdb) f 1
#1 0x00007fffe78d38c4 in __pyx_tp_dealloc_4h5py_3h5s_SpaceID (o=0xe164ea8) at h5py/h5s.c:4320
4320 return __pyx_r;
(gdb)
4311 __pyx_r = Py_None; __Pyx_INCREF(Py_None);
4312 goto __pyx_L0;
4313 __pyx_L1_error:;
4314 __Pyx_XDECREF(__pyx_t_1);
4315 __Pyx_AddTraceback("h5py._objects.IDProxy.locked.__get__");
4316 __pyx_r = NULL;
4317 __pyx_L0:;
4318 __Pyx_XGIVEREF(__pyx_r);
4319 __Pyx_RefNannyFinishContext();
4320 return __pyx_r;
4321 }
4322
4323 static PyObject *__pyx_tp_new_4h5py_3h5s_SpaceID(PyTypeObject *t, PyObject *a, PyObject *k) {
4324 PyObject *o = __pyx_ptype_4h5py_8_objects_ObjectID->tp_new(t, a, k);
4325 if (!o) return 0;
4326 return o;
4327 }
(gdb) pystack
/home/thorben/phd/src/lazyflow/lazyflow/operators/vigraOperators.py (1419): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (49): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (49): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/generic.py (397): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (476): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (1014): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (1099): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (49): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/vigraOperators.py (117): getSubOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (1414): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/vigraOperators.py (482): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/classifierOperators.py (230): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (49): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (49): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/generic.py (397): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (476): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (1014): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (1099): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/generic.py (340): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (2192): run
/usr/lib64/python2.7/threading.py (553): __bootstrap_inner
/usr/lib64/python2.7/threading.py (526): __bootstrap
During installation the developer still needs to go into the lazyflow/drtile folder and run cmake manually. This step is not currently documented.
Commit 1ebbcab changed how drtile finds its dependencies, but this does not always work. For instance, with Python in a virtual environment it could not find the Python header files or libraries. Reverting the commit fixed the issue.
The following code throws an IndexError:

from lazyflow.graph import Graph
from lazyflow.operators import (Op5ToMulti, OpArrayCache, OpBlockedArrayCache,
    OpArrayPiper, OpPredictRandomForest, OpSingleChannelSelector,
    OpSparseLabelArray, OpMultiArrayStacker, OpTrainRandomForest,
    OpPixelFeatures, OpMultiArraySlicer2, OpH5Reader,
    OpBlockedSparseLabelArray, OpTrainRandomForestBlocked,
    OpH5ReaderBigDataset, OpSlicedBlockedArrayCache, OpPixelFeaturesPresmoothed)

graph = Graph()
images = Op5ToMulti(graph)
features = OpPixelFeaturesPresmoothed(graph)
cache = OpBlockedArrayCache(graph)

features.inputs["Input"].connect(images.outputs["Outputs"])
cache.inputs["Input"].connect(features.outputs["Output"])
cache.inputs["innerBlockShape"].setValue((1, 32, 32, 32, 16))
cache.inputs["outerBlockShape"].setValue((1, 128, 128, 128, 64))
cache.inputs["fixAtCurrent"].setValue(False)
Instead of OperatorWrapper( SomeOp(x,y,z) ), it should be something like OperatorWrapper( SomeOp, (x,y,z) )
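A sketch of the class-plus-args calling convention (names hypothetical); the point is that the wrapper can then construct as many inner instances as it needs:

```python
class ToyOperatorWrapper:
    """Takes the operator class and its constructor args, rather than a
    single pre-built instance, so each lane gets a fresh operator."""

    def __init__(self, op_class, op_args=()):
        self._op_class = op_class
        self._op_args = tuple(op_args)
        self.inner_operators = []

    def add_lane(self):
        # The wrapper, not the caller, instantiates the inner operator.
        op = self._op_class(*self._op_args)
        self.inner_operators.append(op)
        return op

class SomeOp:
    def __init__(self, x, y, z):
        self.args = (x, y, z)

wrapper = ToyOperatorWrapper(SomeOp, (1, 2, 3))
a, b = wrapper.add_lane(), wrapper.add_lane()
assert a is not b                      # two independent inner operators
assert a.args == b.args == (1, 2, 3)   # both built from the same args
```

With the current OperatorWrapper(SomeOp(x, y, z)) form, only one instance exists and the wrapper cannot replicate it per lane.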
Currently every Roi is initialized with a corresponding Slot. This makes no sense: only a slot knows how to interpret a Roi in its own context, not vice versa. Remove this coupling.
I introduced OpArraySlicer2 as a variant of OpArraySlicer which does not squeeze singleton dimensions, for displaying data in the classification workflow entirely from the graph. These two should be merged, with a squeeze-input flag as an option.
Lazyflow edges are directed like this: Output -> Input.
Therefore, I intuitively expect the following API:
Op1.Output.connect( Op2.Input )
Currently, it's the other way around: Op2.Input.connect( Op1.Output )
This will lead to some frustration for third-party users.
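With toy slots the expected direction looks like this; making Output.connect(Input) the public API would match the edge direction (all names are illustrative):

```python
class ToyOutputSlot:
    def connect(self, input_slot):
        # Proposed reading: the Output connects TO the Input,
        # matching the Output -> Input edge direction.
        input_slot.upstream = self

class ToyInputSlot:
    def __init__(self):
        self.upstream = None

op1_output = ToyOutputSlot()
op2_input = ToyInputSlot()
op1_output.connect(op2_input)  # Op1.Output.connect(Op2.Input)
assert op2_input.upstream is op1_output
```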
The current system is not enough.
estimated time: 2 days
OpBlockedArrayCache initializes bookkeeping data structures that scale with the size of the input slot shape. See, for example, the _dirtyShape member (1 byte per block) and the _flatBlockIndices member (8 bytes per block). This means the cache has a non-negligible memory footprint even if its output is never used.
One consequence is that headless scripts intended for large datasets (e.g. 1 TB) must be careful not to let their workflow even instantiate a cache, much less use it. It would be nice if this weren't the case.
Ideally, there would be no significant memory penalty for instantiating a cache as long as it isn't actually used. Then we could use the same operators in headless workflows as in GUI workflows. The only difference would be that the GUI requests data from the output slots connected to the caches, whereas headless workflows avoid those output slots.
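A sketch of deferring the per-block bookkeeping until first use. The names are hypothetical stand-ins for the real members (_dirtyShape, _flatBlockIndices):

```python
class ToyBlockedCache:
    """Allocates per-block bookkeeping lazily, so merely instantiating
    the cache for a huge dataset has a negligible footprint."""

    def __init__(self, shape, block_shape):
        self._shape = shape
        self._block_shape = block_shape
        self._dirty = None  # 1 byte per block, allocated on first request

    def _ensure_bookkeeping(self):
        if self._dirty is None:
            n_blocks = 1
            for s, b in zip(self._shape, self._block_shape):
                n_blocks *= -(-s // b)  # ceiling division per axis
            self._dirty = bytearray(n_blocks)

    def request(self, roi):
        self._ensure_bookkeeping()
        return roi  # stand-in for actually serving the data

cache = ToyBlockedCache((1024, 1024, 1024), (32, 32, 32))
assert cache._dirty is None          # no cost until the cache is used
cache.request(None)
assert len(cache._dirty) == 32 ** 3  # bookkeeping exists only after use
```

A headless workflow that never touches the cached output slot would then pay nothing for the cache's existence.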
Provide means to configure the paths from which operators are loaded, via a config file in the user's home directory.
estimated: 3h (need to refactor all existing lazyflow use cases, may take longer...)
Check lazyflow/operators/operators.py lines 275ff
psutil.phymem_usage is deprecated; update to psutil.virtual_memory().available instead.
As we found during testing last week, the memory manager seems to think that most of the memory on the OS is already unavailable upon app startup. This may be due to some difference in how psutil reports available memory on Mac and Linux.
This leads to under-utilization of RAM for caches, causing the caches to be purged much more frequently than desired.
(gdb) f 0
#0 0x00007fffe85afe78 in __pyx_tp_dealloc_4h5py_8_objects_ObjectID (o=0x7fffcf450a28) at h5py/_objects.c:3599
3599 Py_XDECREF(p->_hash);
(gdb) p p->_hash
$1 = (PyObject *) 0x4
3595 static void __pyx_tp_dealloc_4h5py_8_objects_ObjectID(PyObject *o) {
3596 struct __pyx_obj_4h5py_8_objects_ObjectID *p = (struct __pyx_obj_4h5py_8_objects_ObjectID *)o;
3597 if (p->__weakref__) PyObject_ClearWeakRefs(o);
3598 Py_XDECREF(((PyObject *)p->proxy));
3599 Py_XDECREF(p->_hash);
3600 (*Py_TYPE(o)->tp_free)(o);
3601 }
(gdb) f 1
#1 0x00007fffe78d3ccd in __pyx_tp_dealloc_4h5py_3h5s_SpaceID (o=0x7fffcf450a28) at h5py/h5s.c:4200
4200 __pyx_why = 4;
(gdb)
4186 /* "/home/tachyon/h5py/h5py/h5s.pyx":551
4187 *
4188 * finally:
4189 * efree(start_array) # <<<<<<<<<<<<<<
4190 * efree(count_array)
4191 * efree(stride_array)
4192 */
4193 /*finally:*/ {
4194 int __pyx_why;
4195 PyObject *__pyx_exc_type, *__pyx_exc_value, *__pyx_exc_tb;
4196 int __pyx_exc_lineno;
4197 __pyx_exc_type = 0; __pyx_exc_value = 0; __pyx_exc_tb = 0; __pyx_exc_lineno = 0;
4198 __pyx_why = 0; goto __pyx_L8;
4199 __pyx_L7: {
4200 __pyx_why = 4;
4201 __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;
4202 __Pyx_ErrFetch(&__pyx_exc_type, &__pyx_exc_value, &__pyx_exc_tb);
4203 __pyx_exc_lineno = __pyx_lineno;
4204 goto __pyx_L8;
4205 }
4206 __pyx_L8:;
4207 __pyx_f_4h5py_5utils_efree(__pyx_v_start_array);
(gdb) f 11
#11 0x00007ffff7ad9aee in PyEval_EvalFrameEx (f=0xeaa0f40, throwflag=0) at Python/ceval.c:1391
1391 x = PyObject_GetItem(v, w);
(gdb) p w.ob_type.tp_name
$8 = 0x7ffff7b4b23d "tuple"
(gdb) p v.ob_type.tp_name
$9 = 0xfc0114 "Dataset"
(gdb)
/home/thorben/phd/src/lazyflow/lazyflow/operators/vigraOperators.py (1419): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (49): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (49): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/generic.py (397): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (476): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (1014): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (1099): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (49): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/vigraOperators.py (117): getSubOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (1414): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/vigraOperators.py (482): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/classifierOperators.py (230): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (49): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (49): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/generic.py (397): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (476): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (1014): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (1099): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/generic.py (340): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (2224): run
/usr/lib64/python2.7/threading.py (553): __bootstrap_inner
/usr/lib64/python2.7/threading.py (526): __bootstrap
Provide means of loading parts of a graph, with the possibility of specifying input/output ports for the partial flowgraph.
This is needed so that no recomputations are necessary after loading.
Apparently this crash sometimes happens when running Batch Processing.
It seems to be related to the request.Pool class: the Pool.onFinish callbacks appear to be called more than once, resulting in a "release unlocked lock" error in the special request.Lock class. How is that possible?
Traceback is here:
http://pastebin.com/J5LGzRB2
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffd1945700 (LWP 20751)]
0x00007ffff7addbba in PyEval_EvalFrameEx (f=0x7fff3c06b750, throwflag=0) at Python/ceval.c:2995
2995 if (tstate->frame->f_exc_type != NULL)
(gdb) pystack
/usr/lib64/python2.7/site-packages/psutil/__init__.py (378): get_memory_info
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (2351): _freeMemory
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (2299): _notifyMemoryAllocation
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (2023): _notifyMemoryAllocation
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (2023): _notifyMemoryAllocation
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (324): _allocateCache
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (390): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (1014): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (1099): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (49): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/vigraOperators.py (117): getSubOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (1414): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/vigraOperators.py (482): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (49): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/generic.py (397): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (476): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (394): wait
/home/thorben/phd/src/lazyflow/lazyflow/operators/operators.py (1014): getOutSlot
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (313): _execute
/home/thorben/phd/src/lazyflow/lazyflow/graph.py (2208): run
/usr/lib64/python2.7/threading.py (553): __bootstrap_inner
/usr/lib64/python2.7/threading.py (526): __bootstrap
frame 0:
(gdb)
#0 0x00007ffff7addbba in PyEval_EvalFrameEx (f=0x7fff3c06b750, throwflag=0) at Python/ceval.c:2995
2995 if (tstate->frame->f_exc_type != NULL)
(gdb) print tstate
$1 = (PyThreadState *) 0x7fff3cae3a08
(gdb) p tstate->frame
$2 = (struct _frame *) 0xfffffffffffffffd
(gdb) p tstate->frame->f_exc_type
Cannot access memory at address 0x55
(gdb)
estimated time: 2.0 days