
Comments (9)

olebole commented on June 26, 2024

The main difference seems to be that the failing build used libatlas 3.10.3 instead of blas 3.12.0 / libopenblas 0.3.26. All other differences seem minor:

  • Python distutils, lib2to3 and tk went from 3.12.2 to 3.12.3
  • pyproject-metadata differs only in the Debian revision number
  • liblua 5.4.6, gringo 5.6.2, clasp 3.3.5 and aspcud 1.9.6 are only in the failed (experimental) build
  • Python defcon 0.10.3 is only in the failed (experimental) build
  • Python ufolib2 0.16.0 is only in the successful (unstable) build

I must say that I don't know why libatlas was preferred over blas in the experimental build; it is a numpy dependency, and libblas is normally the preferred alternative there. However, they should give the same results, shouldn't they?
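As an aside, a quick way to confirm which BLAS/LAPACK implementation a given numpy build actually uses is a check along these lines (a minimal sketch; threadpoolctl is an optional third-party package, everything else is plain numpy):

    # Report which BLAS/LAPACK backend numpy was built against and which
    # library is actually loaded at runtime.
    import numpy as np

    np.show_config()  # prints the build-time BLAS/LAPACK configuration

    try:
        # threadpoolctl inspects the shared libraries loaded in the process.
        from threadpoolctl import threadpool_info
        for lib in threadpool_info():
            print(lib["internal_api"], lib.get("version"), lib["filepath"])
    except ImportError:
        pass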


lagru commented on June 26, 2024

Thanks for getting back to us with this! The architecture-specific nature of these failures makes this a bit harder to reproduce and fix. I'm looking into it.


jarrodmillman commented on June 26, 2024

@olebole Would you mind testing https://pypi.org/project/scikit-image/0.23.2rc1/ ?


olebole commented on June 26, 2024

I did; the failures mentioned above are gone, but now there is another one, this time on amd64 (and obviously unrelated):

_____________________ test_polynomial_weighted_estimation ______________________

    def test_polynomial_weighted_estimation():
        # Over-determined solution with same points, and unity weights
        tform = estimate_transform('polynomial', SRC, DST, order=10)
        tform_w = estimate_transform(
            'polynomial', SRC, DST, order=10, weights=np.ones(SRC.shape[0])
        )
        assert_almost_equal(tform.params, tform_w.params)
    
        # Repeating a point, but setting its weight small, should give nearly
        # the same result.
        point_weights = np.ones(SRC.shape[0] + 1)
        point_weights[0] = 1.0e-15
        tform1 = estimate_transform('polynomial', SRC, DST, order=10)
        tform2 = estimate_transform(
            'polynomial',
            SRC[np.arange(-1, SRC.shape[0]), :],
            DST[np.arange(-1, SRC.shape[0]), :],
            order=10,
            weights=point_weights,
        )
>       assert_almost_equal(tform1.params, tform2.params, decimal=4)

skimage/transform/tests/test_geometric.py:666: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/usr/lib/python3.11/contextlib.py:81: in inner
    return func(*args, **kwds)
/usr/lib/python3.11/contextlib.py:81: in inner
    return func(*args, **kwds)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

args = (<function assert_array_almost_equal.<locals>.compare at 0x7ffad9691580>, array([[ 1.16431672e-10, -1.37662973e-11,  1...       -7.77541170e-06,  1.12016980e-06,  3.45277758e-05,
        -1.55254763e-05, -1.16166818e-05, -6.45696044e-06]]))
kwds = {'err_msg': '', 'header': 'Arrays are not almost equal to 4 decimals', 'precision': 4, 'verbose': True}

    @wraps(func)
    def inner(*args, **kwds):
        with self._recreate_cm():
>           return func(*args, **kwds)
E           AssertionError: 
E           Arrays are not almost equal to 4 decimals
E           
E           Mismatched elements: 3 / 132 (2.27%)
E           Max absolute difference: 0.00044363
E           Max relative difference: 26.96123959
E            x: array([[ 1.1643e-10, -1.3766e-11,  1.5589e-07,  1.0080e-07,  9.3082e-06,
E                    2.4074e-05,  2.8658e-05, -5.3848e-05,  1.0834e-10,  5.3736e-09,
E                   -1.7943e-07,  2.8586e-08, -9.4791e-06,  7.8816e-05, -2.8878e-05,...
E            y: array([[-9.0369e-07, -1.1004e-05, -2.1400e-07,  6.5641e-06,  9.2999e-06,
E                    1.9720e-05, -3.2870e-05, -5.0437e-05, -1.3570e-05, -9.9079e-05,
E                    2.0259e-04, -1.4415e-05, -1.1206e-05,  1.7683e-05, -2.8268e-05,...

/usr/lib/python3.11/contextlib.py:81: AssertionError
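As an aside, the property this test asserts can be illustrated with plain numpy, outside scikit-image: a least-squares fit in which a duplicated row carries a near-zero weight should be almost identical to the unweighted fit. With an order-10 polynomial the design matrix is badly conditioned, which may be why the comparison at decimal=4 is sensitive to the BLAS backend. The sketch below uses np.linalg.lstsq directly and is not scikit-image's estimator:

    # Standalone illustration (not scikit-image's internal code): compare an
    # unweighted fit with a fit where a duplicated row has weight ~1e-15.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=30)
    y = np.polyval(rng.normal(size=11), x) + 1e-3 * rng.normal(size=30)

    A = np.vander(x, N=11)                  # order-10 polynomial design matrix
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Prepend a duplicate of the first row and give it a tiny weight.
    A2 = np.vstack([A[:1], A])
    y2 = np.concatenate([y[:1], y])
    w = np.ones(len(y2))
    w[0] = 1e-15
    sw = np.sqrt(w)                         # usual weighted least-squares scaling
    beta_w, *_ = np.linalg.lstsq(A2 * sw[:, None], y2 * sw, rcond=None)

    print("max |beta - beta_w|:", np.max(np.abs(beta - beta_w)))
    print("cond(A):", np.linalg.cond(A))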

On ppc64el and loongarch64, I get:

_______________________ test_ellipse_parameter_stability _______________________
_____________________________ test_reproducibility _____________________________

    def test_reproducibility():
        """ensure cut_normalized returns the same output for the same input,
        when specifying random seed
        """
        img = data.coffee()
        labels1 = segmentation.slic(img, compactness=30, n_segments=400, start_label=0)
        g = graph.rag_mean_color(img, labels1, mode='similarity')
        results = [None] * 4
        for i in range(len(results)):
            results[i] = graph.cut_normalized(
                labels1, g, in_place=False, thresh=1e-3, rng=1234
            )
        graph.cut_normalized(labels1, g, in_place=False, thresh=1e-3, rng=1234)
    
        for i in range(len(results) - 1):
>           assert_array_equal(results[i], results[i + 1])

skimage/graph/tests/test_rag.py:224: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

args = (<built-in function eq>, array([[0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0...35, 135, ...,  11,  11,  11],
       [135, 135, 135, ...,  11,  11,  11],
       [135, 135, 135, ...,  11,  11,  11]]))
kwds = {'err_msg': '', 'header': 'Arrays are not equal', 'strict': False, 'verbose': True}

    @wraps(func)
    def inner(*args, **kwds):
        with self._recreate_cm():
>           return func(*args, **kwds)
E           AssertionError: 
E           Arrays are not equal
E           
E           Mismatched elements: 231553 / 240000 (96.5%)
E           Max absolute difference: 394
E           Max relative difference: 1.
E            x: array([[0, 0, 0, ..., 0, 0, 0],
E                  [0, 0, 0, ..., 0, 0, 0],
E                  [0, 0, 0, ..., 0, 0, 0],...
E            y: array([[  0,   0,   0, ...,  11,  11,  11],
E                  [  0,   0,   0, ...,  11,  11,  11],
E                  [  0,   0,   0, ...,  11,  11,  11],...

/usr/lib/python3.12/contextlib.py:81: AssertionError

All logs here.
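For anyone trying to reproduce this on an affected machine, just the failing tests can be rerun from a scikit-image checkout with something like the sketch below (the test node IDs are taken from the tracebacks above):

    # Rerun only the tests that failed above.
    import pytest

    pytest.main([
        "skimage/transform/tests/test_geometric.py::test_polynomial_weighted_estimation",
        "skimage/graph/tests/test_rag.py::test_reproducibility",
        "-v",
    ])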


lagru commented on June 26, 2024

Thanks for testing again. Glancing over a quick diff between the working and failing runs for amd64, it seems like the environment is different: "sid (unstable)" vs "experimental". Might that be the culprit? In that case we could hope for 0.23.2 passing in the "sid (unstable)" build env.

I'm not really sure how to go about reproducing and investigating this in a feasible manner...


olebole commented on June 26, 2024

The environment is almost the same; "experimental" pulls the same package versions as "unstable" (unless a package from experimental is specifically requested). However, there may be differences because of the time of upload, since uploads of new versions to "unstable" happen at any time.
I will dig out the differences in the environments between the two builds for you later.


Czaki commented on June 26, 2024

There are two CI providers that offer Linux ARM runners: https://circleci.com/open-source/ and https://cirrus-ci.org/pricing/, so it may be possible to add such testing to the test matrix.


olebole commented on June 26, 2024

The latest failures appeared on amd64, not arm64.


lagru commented on June 26, 2024

However, they should give the same results, shouldn't they?

They should definitely give the same result. I'll create a new issue for this, but I'm not sure how soon we will be able to look into it, reproduce it, and fix it. Thanks for investigating the difference in the build environments, though; that should help!

