Comments (3)
return {} for now
from pytorch-lightning.
but we're adding support so that you won't need to implement a val function at all if you don't need one; see #82
The recommended solution:

def validation_epoch_end(self, validation_outputs):
    return {}

also produces the error AttributeError: 'dict' object has no attribute 'callback_metrics':
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-27-f49f97955aea> in <module>
4 check_val_every_n_epoch=3,
5 )
----> 6 trainer.fit(model)
~/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/states.py in wrapped_fn(self, *args, **kwargs)
46 if entering is not None:
47 self.state = entering
---> 48 result = fn(self, *args, **kwargs)
49
50 # The INTERRUPTED state can be set inside the run function. To indicate that run was interrupted
~/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders, datamodule)
1082 self.accelerator_backend = CPUBackend(self)
1083 self.accelerator_backend.setup(model)
-> 1084 results = self.accelerator_backend.train(model)
1085
1086 # on fit end callback
~/anaconda3/lib/python3.8/site-packages/pytorch_lightning/accelerators/cpu_backend.py in train(self, model)
37
38 def train(self, model):
---> 39 results = self.trainer.run_pretrain_routine(model)
40 return results
~/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in run_pretrain_routine(self, model)
1222
1223 # run a few val batches before training starts
-> 1224 self._run_sanity_check(ref_model, model)
1225
1226 # clear cache before training
~/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in _run_sanity_check(self, ref_model, model)
1255 num_loaders = len(self.val_dataloaders)
1256 max_batches = [self.num_sanity_val_steps] * num_loaders
-> 1257 eval_results = self._evaluate(model, self.val_dataloaders, max_batches, False)
1258
1259 # allow no returns from eval
~/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py in _evaluate(self, model, dataloaders, max_batches, test_mode)
397
398 # log callback metrics
--> 399 self.__update_callback_metrics(eval_results, using_eval_result)
400
401 # Write predictions to disk if they're available.
~/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py in __update_callback_metrics(self, eval_results, using_eval_result)
419 if isinstance(eval_results, list):
420 for eval_result in eval_results:
--> 421 self.callback_metrics = eval_result.callback_metrics
422 else:
423 self.callback_metrics = eval_results.callback_metrics
AttributeError: 'dict' object has no attribute 'callback_metrics'
Comment those two lines out and training runs without any issue. This issue should not be closed until either the problem is fixed or the documentation is updated to show that returning an empty dict doesn't work.
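The traceback makes the root cause visible: __update_callback_metrics does an attribute access (eval_result.callback_metrics) on whatever the validation hooks returned, and a plain {} is a dict, which has no such attribute. Here is a minimal, self-contained sketch of that failure mode; the function below is a hypothetical stand-in for Lightning's internal method (which assigns to self.callback_metrics rather than returning), not the real code:

```python
# Hypothetical stand-in for pytorch_lightning's
# EvaluationLoop.__update_callback_metrics (cf. lines 419-423 in the
# traceback): it assumes eval results expose a `.callback_metrics`
# attribute, as Lightning's Result objects do.
def update_callback_metrics(eval_results):
    if isinstance(eval_results, list):
        return [r.callback_metrics for r in eval_results]
    return eval_results.callback_metrics


# A plain dict -- what `return {}` in validation_epoch_end hands back --
# has no `.callback_metrics` attribute, so the loop raises:
try:
    update_callback_metrics({})
except AttributeError as err:
    print(err)  # 'dict' object has no attribute 'callback_metrics'
```

As a stopgap while this is unfixed, removing the validation_epoch_end override entirely (rather than returning {}) avoids this code path. Note that Trainer(num_sanity_val_steps=0) only skips the sanity check shown in the traceback; the same code runs during regular validation, so the error would likely resurface there.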