Comments (13)
I'm seeing some odd behavior: if I create the environment and then try to install coiled-runtime, I get different versions depending on the tool:

```shell
conda install -c coiled coiled-runtime   # --> I get coiled 0.0.71
mamba install -c coiled coiled-runtime   # --> I get coiled 0.0.73
```
However, if I have an env.yaml file that contains

```yaml
channels:
  - conda-forge
  - coiled
dependencies:
  - python=3.9
  - coiled-runtime
```

and I run

```shell
mamba env create -n test_distro -f env.yaml
```

then I get coiled 0.0.71.
So I'm not sure what's happening.
cc @jrbourbeau: do you have any idea what could be happening?
from benchmarks.
Thanks for reporting @dchudz. Following up on @ncclementi's post, I also get coiled=0.0.71 installed (with both conda and mamba) when using

```yaml
name: coiled-runtime-test
channels:
  - conda-forge
  - coiled
dependencies:
  - python=3.9
  - coiled-runtime
```
Interestingly, if I pin coiled=0.0.73

```yaml
name: coiled-runtime-test
channels:
  - conda-forge
  - coiled
dependencies:
  - python=3.9
  - coiled-runtime
  - coiled=0.0.73
```

then the solve works and I get coiled=0.0.73 installed. It's not immediately clear to me why coiled=0.0.71 is preferred over coiled=0.0.73 when the newer coiled version is valid for this environment solve.
FWIW, looking at the logs for the coiled-runtime=0.0.3 release, coiled=0.0.71 was what got installed. I wouldn't think that would matter, but it does seem suspicious.
@dchudz agreed, this is something we should investigate more to try to figure out. Question: is using coiled=0.0.71 a blocker in some way?
> Question: is using coiled=0.0.71 a blocker in some way?

I can't answer for @dchudz, but it seems bad if users don't get the newest coiled by default; I'd be hesitant to say "just install coiled-runtime" in this case.

(In case it matters, I think coiled 0.0.73 adjusted how click was pinned. The environment is still solvable with coiled 0.0.73, but maybe 0.0.71 is an easier/earlier solve?)
That's a very fair point @ntabris. I'm just trying to get a sense of whether things are completely broken right now (i.e., coiled=0.0.71 doesn't work).
coiled=0.0.71 works, and even if it didn't, you can explicitly get 0.0.73. So not a blocker exactly. But what Nat said is right: once customers start using this, it's not really okay for them to end up on old versions by default.
Interestingly, for me (Linux, Intel, Python 3.9), mamba gets the 0.0.73 version while conda gets 0.0.71. (Also, mamba is a billion times faster.)

```shell
mrocklin@carbon-7:~$ time mamba create -n coiled-test coiled-runtime -c conda-forge -c coiled --dry-run | grep coiled
  + coiled          0.0.73  pyhd8ed1ab_0  conda-forge/noarch  99 KB
  + coiled-runtime   0.0.3  py_1          coiled/noarch        4 KB
DryRunExit: Dry run. Exiting.

real    0m7.475s
user    0m6.889s
sys     0m0.433s

mrocklin@carbon-7:~$ time conda create -n coiled-test coiled-runtime -c conda-forge -c coiled --dry-run | grep coiled
  environment location: /home/mrocklin/mambaforge/envs/coiled-test
    - coiled-runtime
  coiled-0.0.71        | pyhd8ed1ab_0  97 KB  conda-forge
  coiled-runtime-0.0.3 | py_1           4 KB  coiled
  coiled          conda-forge/noarch::coiled-0.0.71-pyhd8ed1ab_0
  coiled-runtime  coiled/noarch::coiled-runtime-0.0.3-py_1
DryRunExit: Dry run. Exiting.

real    1m29.089s
user    1m27.943s
sys     0m1.239s
```
It's fine finding a 0.0.73 solution if explicitly asked, though:

```shell
mrocklin@carbon-7:~$ time conda create -n coiled-test coiled=0.0.73 coiled-runtime -c conda-forge -c coiled --dry-run | grep coiled
  environment location: /home/mrocklin/mambaforge/envs/coiled-test
    - coiled-runtime
    - coiled=0.0.73
  coiled-0.0.73        | pyhd8ed1ab_0  99 KB  conda-forge
  coiled-runtime-0.0.3 | py_1           4 KB  coiled
  coiled          conda-forge/noarch::coiled-0.0.73-pyhd8ed1ab_0
  coiled-runtime  coiled/noarch::coiled-runtime-0.0.3-py_1
```
I think you're reporting the same thing as everyone else, @mrocklin (sorry if the above wasn't clear).
I had an existing environment with coiled==0.0.71, and my only channel is conda-forge. I ran mamba update coiled and it said that all package dependencies were satisfied. Then I ran mamba install coiled==0.0.73 and it told me it had to downgrade click from 8.1.2 to 8.0.0 to satisfy coiled's dependencies. I suspect this is the cause of the behavior here: if not click specifically, then not requiring 0.0.73 explicitly leads to a "better" solve for 0.0.71 because of a more relaxed version constraint on other packages. I'm not sure how to prove out this hypothesis though 😄

Edit: Ah, I see @ntabris already mentioned this >_<
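One way to probe this hypothesis (a sketch, not a confirmed fix; the click==8.0.0 pin below is an assumption inferred from the downgrade message above, not something verified against coiled's metadata) is to add the stricter click pin to the environment spec and see whether the solver then reaches coiled 0.0.73 without pinning coiled itself:

```yaml
name: click-pin-test
channels:
  - conda-forge
  - coiled
dependencies:
  - python=3.9
  - coiled-runtime
  # Assumption: 8.0.0 is the click version coiled 0.0.73 is pinned against.
  # If the relaxed click constraint is what makes 0.0.71 the "better" solve,
  # pinning click here should flip the solver to coiled 0.0.73.
  - click==8.0.0
```

If this spec still solves to coiled 0.0.71, the click pin probably isn't the culprit.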
I think because click is listed explicitly in the meta.yaml, conda may be prioritizing that pin: https://github.com/coiled/coiled-runtime/blob/a66100ad4dee158031c2682ca2ba827f6a3b1fc0/recipe/meta.yaml#L42. I wonder whether setting click ==8.0.0 (or, probably more logically, pinning coiled) would resolve the problem.

Edit: Or just remove the pin on click.
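For illustration, the run requirements in recipe/meta.yaml could pin coiled directly instead of click. This is a hedged sketch of that idea only — the requirement names and versions below are assumptions, not a copy of the actual recipe:

```yaml
requirements:
  run:
    - python >=3.9
    # Pin the package we actually care about, so the solver can't
    # silently fall back to an older coiled release.
    - coiled ==0.0.73
    # ...and drop the explicit click pin, letting coiled's own
    # metadata constrain click to whatever version it needs:
    # - click ==8.0.0
```

The trade-off is that each coiled release would then require a coiled-runtime rebuild to bump the pin.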
It would be lovely if there were a way to tell conda to prioritize certain packages in the solve, so it gets the newest possible coiled that's solvable with the other constraints. Is there? (I don't think there is, but maybe someone knows a way?)
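Short of such a solver knob, one workaround (a sketch, assuming users are willing to edit their spec) is a version floor on coiled in the environment file, which rules out the older releases without hard-pinning one version:

```yaml
name: coiled-runtime-test
channels:
  - conda-forge
  - coiled
dependencies:
  - python=3.9
  - coiled-runtime
  # A floor rather than an exact pin: any coiled >=0.0.73 satisfies this,
  # so future coiled releases are picked up without editing the file again.
  - coiled>=0.0.73
```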
Proposing a fix over in #57