Comments (19)
@williamFalcon it works now, nice! Thanks a lot!
from lightning.
Pip install by itself should be fine.
If you're trying to clone and run master, something might have broken with relative imports
@Borda
Make sure you're using python 3.6+ as well
@Borda looks like PEP 8 recommends absolute imports as well. let's just go back to those; i remember now why i got rid of relative imports a long time ago: they cause all sorts of headaches and it's not clear where things are coming from.
there was an error in the Trainer since the subpackage was missing __init__.py;
it was fixed in #44 eacd93e
it is also the reason for #44, because running only py.test
did not discover it :)
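One common way a missing __init__.py bites at install time is that setuptools' find_packages() skips directories without it, so the subpackage silently disappears from the installed distribution. A minimal sketch (the demo_pkg layout is hypothetical, built in a temp dir):

```python
# Sketch (hypothetical demo_pkg): find_packages() only picks up
# directories that contain an __init__.py, so a subpackage missing
# that file never lands in the installed package.
import os
import tempfile
from setuptools import find_packages

root = tempfile.mkdtemp()
sub = os.path.join(root, "demo_pkg", "models")
os.makedirs(sub)
open(os.path.join(root, "demo_pkg", "__init__.py"), "w").close()
# deliberately no models/__init__.py, mirroring the bug above
print(find_packages(root))   # only ['demo_pkg'] is found

open(os.path.join(sub, "__init__.py"), "w").close()
print(find_packages(root))   # now includes 'demo_pkg.models' as well
```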
ok awesome. should be on master now
@egonuel pls could you check it now...
@Borda I just checked it and now I'm getting this error (numpy is installed):
pip install git+https://github.com/williamFalcon/pytorch-lightning.git@master --upgrade
Collecting git+https://github.com/williamFalcon/pytorch-lightning.git@master
Cloning https://github.com/williamFalcon/pytorch-lightning.git (to revision master) to /tmp/pip-req-build-42ipxsly
Running command git clone -q https://github.com/williamFalcon/pytorch-lightning.git /tmp/pip-req-build-42ipxsly
Installing build dependencies ... done
Getting requirements to build wheel ... error
ERROR: Command errored out with exit status 1:
command: /opt/miniconda3/envs/dev_pytorch_lightning36/bin/python /opt/miniconda3/envs/dev_pytorch_lightning36/lib/python3.6/site-packages/pip/_vendor/pep517/_in_process.py get_requires_for_build_wheel /tmp/tmpb9traczs
cwd: /tmp/pip-req-build-42ipxsly
Complete output (22 lines):
Traceback (most recent call last):
File "/opt/miniconda3/envs/dev_pytorch_lightning36/lib/python3.6/site-packages/pip/_vendor/pep517/_in_process.py", line 207, in <module>
main()
File "/opt/miniconda3/envs/dev_pytorch_lightning36/lib/python3.6/site-packages/pip/_vendor/pep517/_in_process.py", line 197, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/opt/miniconda3/envs/dev_pytorch_lightning36/lib/python3.6/site-packages/pip/_vendor/pep517/_in_process.py", line 54, in get_requires_for_build_wheel
return hook(config_settings)
File "/tmp/pip-build-env-u6myx0it/overlay/lib/python3.6/site-packages/setuptools/build_meta.py", line 145, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "/tmp/pip-build-env-u6myx0it/overlay/lib/python3.6/site-packages/setuptools/build_meta.py", line 126, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-u6myx0it/overlay/lib/python3.6/site-packages/setuptools/build_meta.py", line 234, in run_setup
self).run_setup(setup_script=setup_script)
File "/tmp/pip-build-env-u6myx0it/overlay/lib/python3.6/site-packages/setuptools/build_meta.py", line 141, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 5, in <module>
import pytorch_lightning
File "/tmp/pip-req-build-42ipxsly/pytorch_lightning/__init__.py", line 1, in <module>
from .models.trainer import Trainer
File "/tmp/pip-req-build-42ipxsly/pytorch_lightning/models/trainer.py", line 9, in <module>
import numpy as np
ModuleNotFoundError: No module named 'numpy'
ERROR: Command errored out with exit status 1: /opt/miniconda3/envs/dev_pytorch_lightning36/bin/python /opt/miniconda3/envs/dev_pytorch_lightning36/lib/python3.6/site-packages/pip/_vendor/pep517/_in_process.py get_requires_for_build_wheel /tmp/tmpb9traczs Check the logs for full command output.
it seems that numpy is missing from the setup requirements; see the fix in #60
Fixed with the patch from #60 on #67
I just tried to get the latest master using pip, and I still get the same error as posted above...
I googled around a bit and this has something to do with the pip version. I had pip 19.2.1 installed; after downgrading to 18.0 the installation works. (Newer pip builds packages in an isolated PEP 517 environment that contains only the declared build dependencies, so setup.py cannot import numpy there.) For reference see earthlab/earthpy#206.
Does this work for you with a version newer than 18.0?
@williamFalcon there are two possible solutions: fix __init__.py or fix setup.py.
- fixing __init__.py by wrapping the imports in try/except:

      try:
          from .models.trainer import Trainer
          from .root_module.root_module import LightningModule
          from .root_module.decorators import data_loader
      except ImportError as e:
          print(e)

- fixing setup.py: remove the import of pytorch_lightning at the beginning...

see the fix in #71; you may try it as

    pip install git+https://github.com/Borda/pytorch-lightning.git@fix-setup
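One way to drop the `import pytorch_lightning` from setup.py while still keeping the version defined in one place is to parse `__version__` out of `__init__.py` as plain text instead of importing the package. A sketch (the regex and sample text are illustrative, not the repo's actual setup.py):

```python
# Sketch: extract __version__ from a package's __init__.py without
# importing the package, so setup.py needs no runtime deps (numpy etc.).
import re

def parse_version(init_text):
    # matches a line like: __version__ = '0.4.0'
    match = re.search(r"""__version__\s*=\s*['"]([^'"]+)['"]""", init_text)
    if match is None:
        raise RuntimeError("__version__ not found")
    return match.group(1)

sample = "__version__ = '0.4.0'\nfrom .models.trainer import Trainer\n"
print(parse_version(sample))  # 0.4.0
```

In a real setup.py this would read the file with open() and pass the result to setup(version=...), avoiding the import chain that triggers the numpy error above.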
unfortunately, #68 was merged in a really fast fashion without complete testing...
we may consider modifying CI in such a way that it tests installation into a blank environment
@egonuel pls reopen this ticket till it is properly fixed... :)
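A cheap approximation of that CI step: create a fresh venv and confirm that third-party imports fail there until the package install pulls them in (POSIX venv layout assumed; the actual repo install is omitted to keep the sketch offline):

```python
# Sketch: verify a freshly created venv is really blank, i.e. imports
# like numpy fail there -- exactly the condition that exposed the
# setup.py bug discussed above.
import os
import subprocess
import sys
import tempfile

env_dir = os.path.join(tempfile.mkdtemp(), "blank-env")
# --without-pip keeps creation fast; a real CI step would keep pip
# and run `pip install .` with this interpreter afterwards
subprocess.run([sys.executable, "-m", "venv", "--without-pip", env_dir],
               check=True)
env_python = os.path.join(env_dir, "bin", "python")  # POSIX layout

result = subprocess.run([env_python, "-c", "import numpy"],
                        capture_output=True)
print(result.returncode != 0)  # True: numpy is absent in the blank env
```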
@Borda thanks, this runs through and installs without errors.
I don't think I can reopen this issue as it was closed by @williamFalcon
@egonuel sorry you’re still having install issues!
@Borda thanks for taking a look. I’m not sure whether something was introduced that broke it, haven’t had any pip install issues.
a few things:
- try catch seems hacky to me. let’s do the setup fix?
- not sure it matters but we don’t explicitly support venv, people should just be using conda here.
- is the only fix for setup making a relative import? i want to keep imports absolute. Haven’t had clone+install issues in this repo ever until just this week with these new changes people introduced. does it have to do with that? or maybe that we all use conda?
installing to blank environment would be a good move for CI but might be slow no?
> unfortunately, the #68 was merged in really fast fashion without complete testing...
> we may consider modifying CI such way that it will test installation to blank environment
> @egonuel pls reopen this ticket till it is properly fixed... :)
i run branches on my gpu machine before merging; if tests pass there, then i merge, since those are more thorough. in some cases it may look like CI didn’t finish bc the local gpu run finishes first, which is when i merge
#56 (comment) sure, but before you test the setup you install, or had already installed, the requirements, so all the needed libraries were there... which is a different story from installing into a blank env
> try catch seems hacky to me. let’s do the setup fix?

up to you; if you choose the setup fix, keep in mind that the version then lives in two files (setup.py and __init__.py)

> not sure it matters but we don’t explicitly support venv, people should just be using conda here.

for conda it should be fine since it already comes with most of the needed libraries preinstalled

> is the only fix for setup making a relative import? i want to keep imports absolute. Haven’t had clone+install issues in this repo ever until just this week with these new changes people introduced. does it have to do with that? or maybe that we all use conda?

relative/absolute imports do not change anything in this context

> installing to blank environment would be a good move for CI but might be slow no?

not necessarily; it just changes the order: the install step also installs all the requirements, so when the requirements are checked later the check passes since everything is already there...
@egonuel try again?