labscript-suite / labscript-utils

Shared modules used by the labscript suite. Includes a graphical exception handler, debug tools, configuration management, cross-platform filepath conversions, unit conversions, and custom GUI widgets.

Home Page: http://labscriptsuite.org

License: Other

Python 100.00%

labscript-utils's Introduction

the labscript suite


Experiment control and automation system


The labscript suite is a powerful and extensible framework for experiment composition, control, execution, and analysis. Developed for quantum science and quantum engineering, it is deployable in laboratory and in-field devices, and is equally applicable to optics, microscopy, materials engineering, biophysics, and any application predicated on the repetition of parameterised, hardware-timed experiments.

This is a metapackage for the labscript suite. It was formerly the labscript suite installer repository, prior to the packages becoming installable via PyPI and Anaconda Cloud.

Features:

  • Flexible and automated oversight of heterogeneous hardware.
  • The most mature and widely used open-source control system in quantum science.
  • Multiple analysis-based feedback modes.
  • Extensible plugin architecture (e.g. machine learning online optimisation).
  • Readily integrates with other software, including image acquisition, analysis, and even other control systems.
  • Compose experiments as human-readable Python code, leveraging modularity, revision control and re-use.
  • Dynamic visualisation of experiment composition and results.
  • Remote operation: different modules can run on physically separate hosts, and a single module can be run on multiple hosts (including the hardware supervisor, blacs).
  • Auto-generating user-interfaces.
  • High-level scripting: user-interface interaction can be programmatically synthesised.

Installing the labscript suite

We're excited to announce that accompanying the recent migration to GitHub, labscript suite components are now distributed as Python packages on PyPI and Anaconda Cloud.

This makes it far easier to get started using the labscript suite, as you no longer require a Mercurial or Git installation (or any knowledge of version control software); components can be installed and upgraded using:

  • pip: the standard package manager common to all Python distributions; or
  • conda: a binary package and environment manager, part of the Anaconda Python distribution.

For further information, please see the documentation, which includes information about both regular and developer (editable) installations of the labscript suite.

Recent changes to the labscript suite

Following the migration of the code base to GitHub and the publication of distributions on PyPI in April–May 2020, existing users should be aware of the following changes.

Profile directories

The labscript suite profile directory, containing application configurations, logs, and user-side code, is now located by default in the current user's home directory, e.g. for a local user named wkheisenberg this is:

  • C:\Users\wkheisenberg\labscript-suite on Windows.
  • ~/labscript-suite or /home/wkheisenberg/labscript-suite on Linux and Mac OS X.
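Since the profile directory is simply a folder in the current user's home directory, its default location can be computed portably. A minimal sketch (the directory name follows the convention above; this is an illustration, not the suite's actual lookup code):

```python
from pathlib import Path

# The profile directory is a "labscript-suite" folder in the current
# user's home directory, on every platform.
profile_dir = Path.home() / "labscript-suite"
print(profile_dir)  # e.g. C:\Users\wkheisenberg\labscript-suite on Windows
```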

A typical structure of the profile directory is:

    ~/labscript-suite/
    β”œβ”€β”€ app_saved_configs/
    β”‚   β”œβ”€β”€ default_experiment/
    β”œβ”€β”€ labconfig/
    β”œβ”€β”€ logs/
    └── userlib/
        β”œβ”€β”€ analysislib/
        β”œβ”€β”€ labscriptlib/
        β”œβ”€β”€ pythonlib/
        └── user_devices/

This structure is created by calling the command labscript-profile-create in a terminal after installing labscript-utils (per the installation instructions).

Note: As of labscript-suite/labscript-utils#37 an editable installation can be located within the labscript-suite profile directory.

Secure communication

Interprocess communication between components of the labscript suite is based on the ZeroMQ (ZMQ) messaging protocol. We have supported secure interprocess communication via encrypted ZMQ messaging since February 2019 (labscript-utils 2.11.0).

As of labscript-utils 2.16.0, encrypted interprocess communication is the default. If you haven't already, this means you'll need to create a new shared secret (or pre-shared key) as follows:

  1. Run python -m zprocess.makesecret from the labconfig directory.

  2. Specify the path of the resulting shared_secret in your labconfig. For example:

    [security]
    shared_secret = %(labscript_suite)s/labconfig/zpsecret-09f6dfa0.key
  3. Copy the same pre-shared key to all computers running the labscript suite that need to communicate with each other, repeating step 2 for each of them.

Treat this file like a password; it allows anyone on the same network access to labscript suite programs.

If you are on a trusted network and don't want to use secure communication, you may instead set:

[security]
allow_insecure = True

Notes:

  • Steps 1 and 2 are executed automatically as part of the labscript-profile-create command. However, for multiple hosts, step 3 above must still be followed to ensure the same pre-shared key is used by all hosts running labscript suite programs.

  • There is an outstanding issue with the ZMQ Python bindings on Windows (zeromq/pyzmq#1148), whereby encryption is significantly slower for Python distributions other than Anaconda. Until this issue is resolved, we recommend that Windows users on an untrusted network use the Anaconda Python distribution (and install pyzmq using conda install pyzmq).
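As a rough illustration of how such a labconfig entry might be parsed (an assumption on my part: labconfig is an INI-style file and %(labscript_suite)s is a configparser interpolation variable, as the example entry above suggests):

```python
import configparser

# Hedged sketch: parse a labconfig-style [security] section. The contents
# and the interpolation default below are illustrative, not a real labconfig.
labconfig_text = """
[security]
shared_secret = %(labscript_suite)s/labconfig/zpsecret-09f6dfa0.key
"""

config = configparser.ConfigParser()
# Supply the %(labscript_suite)s interpolation variable as a default.
config["DEFAULT"]["labscript_suite"] = "/home/wkheisenberg/labscript-suite"
config.read_string(labconfig_text)

print(config.get("security", "shared_secret"))
# -> /home/wkheisenberg/labscript-suite/labconfig/zpsecret-09f6dfa0.key
```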

Application shortcuts

Operating-system menu shortcuts, correct taskbar behaviour, and environment activation for the Python GUI applications (blacs, lyse, runmanager, and runviewer) are now handled by a standalone Python package, desktop-app (per the installation instructions above). This currently supports Windows and Linux (Mac OS X support is forthcoming).

Source code structure (developer installation)

Existing users who move to a developer (editable) installation, please note the following structural changes to the labscript suite source code:

  • Each package has a top-level folder containing setup.py and setup.cfg used to build a distribution from source. The functional code base now resides in a subfolder corresponding to the name of the Python module, e.g. an editable installation might contain folders:

    <path-to-your-labscript-installation>/
    β”œβ”€β”€ blacs/
    β”‚   └── blacs/
    β”œβ”€β”€ labscript/
    β”‚   └── labscript/
    β”œβ”€β”€ labscript-devices/
    β”‚   └── labscript_devices/
    β”œβ”€β”€ labscript-utils/
    β”‚   └── labscript_utils/
    β”œβ”€β”€ lyse/
    β”‚   └── lyse/
    β”œβ”€β”€ runmanager/
    β”‚   └── runmanager/
    └── runviewer/
        └── runviewer/
    
  • Package names (shared by repositories and top-level folders) are now hyphenated, e.g. labscript-devices and labscript-utils.

  • Module names remain underscored, e.g. labscript_devices and labscript_utils.

  • The mixing of hyphens and underscores is inelegant but conventional.

  • All references to blacs are now lowercase.

  • As installation no longer requires a separate package, the repository formerly named β€˜installer’ has been renamed to β€˜labscript-suite’, and is a metapackage for the labscript suite (installing it via pip/conda installs the suite).

Versioning (developer installation)

Aside from the maintenance branches documented here, versions of the labscript suite packages are introspected at run time using either the importlib.metadata library (regular installations) or setuptools_scm (developer installations). Any changes to an editable install are thus traceable by local version numbers: editing the released version of a package with version 2.4.0 will result in, e.g., 2.4.0dev1+gc28fe94. This will help us diagnose issues users have with their editable installations.
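For regular installations, this version introspection can be sketched with the standard library alone (Python 3.8+; the distribution name queried is illustrative):

```python
from importlib.metadata import version, PackageNotFoundError

def get_version(dist_name):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        # Not installed as a distribution; a developer install would fall
        # back to setuptools_scm introspection instead.
        return None

print(get_version("labscript-utils"))
```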

BitBucket archive

In April–May 2020 the labscript suite code base was migrated from BitBucket to GitHub. All commit history and issues were preserved; however, some repository metadata (such as pull request discussions) could not be migrated directly. As such, we have created an archived copy of everything that was on BitBucket. This includes:

  • Issues (as they appear on BitBucket);
  • Pull requests discussions;
  • Commit comments for every labscript suite repository; and
  • Every public fork (as of 1st February, 2020).

This archive can be found at bitbucket-archive.labscriptsuite.org (this page can take some time to load for the first time). Copies of every public fork of our repositories are at github.com/labscript-suite-bitbucket-archive. As this is an archive, we will not be transferring ownership of these repositories back to their original owners. However, should you wish to continue development on one of those repositories you can fork it into your own account through the GitHub web interface. Should you have uncommitted changes (or changes made after 1st February, 2020) that you wish to have archived, please contact us to discuss the best approach to including these. Please note that we are not recommending continuing development in such forks long term, due to the changes in package structure outlined above.

Further information about migrating your own customisations of the labscript suite can be found here.

Contributing to the labscript suite

We are very grateful for all the contributions users have made in the past decade to make the labscript suite the most widely used open-source experiment control and automation system in quantum science. These include development, suggestions, and feedback, and we look forward to this continuing on GitHub.

Issue tracking

The issue tracking on GitHub is very similar to BitBucket, with the added advantage that you can add inter-repository issue references, e.g. referring to labscript-suite/runmanager#68 in any issue or pull request will link to the corresponding issue. We have imported all issues from the BitBucket repositories into the GitHub repositories. This import is not perfect (as each comment is now posted by Phil Starkey) but the comments have been modified to contain the original author attribution. We have also updated all links to files, pull requests, issues, and commits so that they point to the equivalent GitHub location and/or the archived copy of the data (as discussed above).

Please use the issue tracker of the relevant GitHub repository for:

  • Reporting bugs (when something doesn't work or works in a way you didn't expect);
  • Suggesting enhancements: new features or requests;
  • Issues relating to installation, performance, or documentation.

For advice on how to use the existing functionality of the labscript suite, please use our mailing list.

Request for developers

We would like to reaffirm our invitation for users to directly contribute toward developing the labscript suite. We have established a separate discussion forum on Zulip for discussing development direction and design. If you are interested in being a part of these discussions, and/or testing and merging pull requests, please reach out to us.

Further guidance on contributingβ€”including the branching model we use, and the procedure for issuing pull requestsβ€”can be found in the documentation.

Citing the labscript suite

If you use the labscript suite to control your experiment or perform analysis, please cite one or more of the following publications:

P. T. Starkey, A software framework for control and automation of precisely timed experiments. PhD thesis, Monash University (2019).
  @phdthesis{starkey_phd_2019,
    title = {A software framework for control and automation of precisely timed experiments},
    author = {Starkey, P. T.},
    year = {2019},
    url = {https://doi.org/10.26180/5d1db8ffe29ef},
    doi = {10.26180/5d1db8ffe29ef},
    school = {Monash University},
  }
C. J. Billington, State-dependent forces in cold quantum gases. PhD thesis, Monash University (2018).
  @phdthesis{billington_phd_2018,
    title = {State-dependent forces in cold quantum gases},
    author = {Billington, C. J.},
    year = {2018},
    url = {https://doi.org/10.26180/5bd68acaf0696},
    doi = {10.26180/5bd68acaf0696},
    school = {Monash University},
  }
P. T. Starkey, C. J. Billington, S. P. Johnstone, M. Jasperse, K. Helmerson, L. D. Turner, and R. P. Anderson, A scripted control system for autonomous hardware-timed experiments, Review of Scientific Instruments 84, 085111 (2013). arXiv:1303.0080.
  @article{labscript_2013,
    author = {Starkey, P. T. and Billington, C. J. and Johnstone, S. P. and
              Jasperse, M. and Helmerson, K. and Turner, L. D. and Anderson, R. P.},
    title = {A scripted control system for autonomous hardware-timed experiments},
    journal = {Review of Scientific Instruments},
    volume = {84},
    number = {8},
    pages = {085111},
    year = {2013},
    doi = {10.1063/1.4817213},
    url = {https://doi.org/10.1063/1.4817213},
    eprint = {https://doi.org/10.1063/1.4817213}
  }

labscript-utils's People

Contributors

chrisjbillington, dihm, dsbarker, lincolnturner, michwill, mjasperse, pacosalces, philipstarkey, phynerd, rpanderson, shjohnst, zakv

labscript-utils's Issues

Command line argument to turn off double import denier

Original report (archived issue) by Philip Starkey (Bitbucket: pstarkey, GitHub: philipstarkey).


Given the issues we are having with the double import denier, is it feasible to add a command line argument to all of our GUI programs that turns it off? In the worst case, people could then add the argument to the shortcut for the software, allowing them to continue working while they wait for the bugs to be fixed properly.
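A minimal sketch of what such an opt-out flag could look like (the flag name and wiring are assumptions, not an actual implementation):

```python
import argparse

# Hypothetical command line flag for the GUI programs that disables the
# double import denier. The flag name is an assumption.
parser = argparse.ArgumentParser(prog="blacs")
parser.add_argument(
    "--no-double-import-denier",
    action="store_true",
    help="Disable the double import denier (workaround for false positives)",
)

args = parser.parse_args(["--no-double-import-denier"])
if not args.no_double_import_denier:
    # Only engage the denier if the user did not opt out; something like
    # labscript_utils.double_import_denier.enable() would go here.
    pass
print(args.no_double_import_denier)  # True
```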

Distutils double import for setuptools>=60.0.0

It appears that setuptools is shipping a distutils hack that has begun to trigger double import warnings from the double import denier.

The easiest way to reproduce it is to have setuptools>=60.0.0 and run the following two statements (taken from labscript_utils.ls_zprocess):

import labscript_utils  # engages the double import denier, maybe other things
from distutils.version import LooseVersion

With setuptools=60.0.5, the error is

Traceback (most recent call last):
  File "c:\users\naqsl\src\labscript-suite\labscript-utils\labscript_utils\double_import_denier.py", line 72, in find_spec
    self._raise_error(path, fullname, tb, other_name, other_tb)
  File "c:\users\naqsl\src\labscript-suite\labscript-utils\labscript_utils\double_import_denier.py", line 136, in _raise_error
    raise RuntimeError(msg) from None
RuntimeError: Double import! The same file has been imported under two different names, resulting in two copies of the module. This is almost certainly a mistake. If you are running a script from within a package and want to import another submodule of that package, import it by its full path: 'import module.submodule' instead of just 'import submodule.'

Path imported: C:\Users\naqsL\Miniconda3\envs\labscript2\Lib\site-packages\setuptools\_distutils\dir_util.py

Traceback (first time imported, as setuptools._distutils.dir_util):
------------
  File "<stdin>", line 1, in <module>
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\_distutils_hack\__init__.py", line 92, in create_module
    return importlib.import_module('setuptools._distutils')
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\setuptools\__init__.py", line 8, in <module>
    import _distutils_hack.override  # noqa: F401
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\_distutils_hack\override.py", line 1, in <module>
    __import__('_distutils_hack').do_override()
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\_distutils_hack\__init__.py", line 73, in do_override
    ensure_local_distutils()
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\_distutils_hack\__init__.py", line 60, in ensure_local_distutils
    core = importlib.import_module('distutils.core')
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\setuptools\_distutils\core.py", line 18, in <module>
    from distutils.cmd import Command
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\setuptools\_distutils\cmd.py", line 9, in <module>
    from distutils import util, dir_util, file_util, archive_util, dep_util
------------

Traceback (second time imported, as distutils.dir_util):
------------
  File "<stdin>", line 1, in <module>
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\_distutils_hack\__init__.py", line 92, in create_module
    return importlib.import_module('setuptools._distutils')
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\setuptools\__init__.py", line 8, in <module>
    import _distutils_hack.override  # noqa: F401
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\_distutils_hack\override.py", line 1, in <module>
    __import__('_distutils_hack').do_override()
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\_distutils_hack\__init__.py", line 73, in do_override
    ensure_local_distutils()
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\_distutils_hack\__init__.py", line 60, in ensure_local_distutils
    core = importlib.import_module('distutils.core')
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\setuptools\_distutils\core.py", line 18, in <module>
    from distutils.cmd import Command
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\setuptools\_distutils\cmd.py", line 9, in <module>
    from distutils import util, dir_util, file_util, archive_util, dep_util
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\setuptools\_distutils\archive_util.py", line 18, in <module>
    from distutils.dir_util import mkpath
------------

Under setuptools=60.9.3 (the most recent release), the message changes slightly and becomes decidedly less helpful.

Traceback (most recent call last):
  File "c:\users\naqsl\src\labscript-suite\labscript-utils\labscript_utils\double_import_denier.py", line 72, in find_spec
    self._raise_error(path, fullname, tb, other_name, other_tb)
  File "c:\users\naqsl\src\labscript-suite\labscript-utils\labscript_utils\double_import_denier.py", line 136, in _raise_error
    raise RuntimeError(msg) from None
RuntimeError: Double import! The same file has been imported under two different names, resulting in two copies of the module. This is almost certainly a mistake. If you are running a script from within a package and want to import another submodule of that package, import it by its full path: 'import module.submodule' instead of just 'import submodule.'

Path imported: C:\Users\naqsL\Miniconda3\envs\labscript2\Lib\site-packages\setuptools\_distutils\__init__.py

Traceback (first time imported, as setuptools._distutils):
------------
  File "<stdin>", line 1, in <module>
  File "c:\users\naqsl\src\labscript-suite\labscript-utils\labscript_utils\double_import_denier.py", line 57, in find_spec
    spec = importlib.util.find_spec(fullname, path)
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\importlib\util.py", line 103, in find_spec
    return _find_spec(fullname, parent_path)
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\_distutils_hack\__init__.py", line 90, in find_spec
    return method()
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\_distutils_hack\__init__.py", line 101, in spec_for_distutils
    mod = importlib.import_module('setuptools._distutils')
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\setuptools\__init__.py", line 8, in <module>
    import _distutils_hack.override  # noqa: F401
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\_distutils_hack\override.py", line 1, in <module>
    __import__('_distutils_hack').do_override()
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\_distutils_hack\__init__.py", line 72, in do_override
    ensure_local_distutils()
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\_distutils_hack\__init__.py", line 55, in ensure_local_distutils
    importlib.import_module('distutils')
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\_distutils_hack\__init__.py", line 90, in find_spec
    return method()
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\site-packages\_distutils_hack\__init__.py", line 101, in spec_for_distutils
    mod = importlib.import_module('setuptools._distutils')
  File "C:\Users\naqsL\Miniconda3\envs\labscript2\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
------------

Traceback (second time imported, as distutils):
------------
------------

While I suppose we could whitelist distutils, I think the more correct solution is to remove our calls to distutils (seeing as it will be fully deprecated in Python 3.12). It appears the equivalent for the present problem is packaging.version.Version. If we go this route, it's probably worth checking the whole suite for stray distutils calls and updating them as well.
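For example, the LooseVersion comparisons could be replaced like so (assuming the third-party packaging package is available; it ships alongside pip and setuptools):

```python
from packaging.version import Version

# PEP 440-aware comparisons, replacing distutils.version.LooseVersion:
assert Version("2.16.0") > Version("2.11.0")

# Unlike LooseVersion, Version understands pre- and dev-releases:
assert Version("2.4.0.dev1") < Version("2.4.0")
print("version comparisons OK")
```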

Double import error in blacs when trying to use FlyCapture2Camera

The error is

Exception in worker - Thu May 12, 22:08:08 :
Traceback (most recent call last):
  File "c:\labscript\lib\site-packages\labscript_devices\IMAQdxCamera\blacs_workers.py", line 251, in init
    self.camera = self.get_camera()
RuntimeError: Double import! The same file has been imported under two different names, resulting in two copies of the module. This is almost certainly a mistake. If you are running a script from within a package and want to import another submodule of that package, import it by its full path: 'import module.submodule' instead of just 'import submodule.'

Path imported: C:\Users\Administrator\namespace

Traceback (first time imported, as user_devices):
------------
  File "C:\Python36\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Python36\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "c:\labscript\lib\site-packages\zprocess\process_class_wrapper.py", line 86, in <module>
    _setup()
  File "c:\labscript\lib\site-packages\zprocess\process_class_wrapper.py", line 70, in _setup
    module = importlib.import_module(module_name)
  File "C:\Python36\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "c:\labscript\lib\site-packages\labscript_devices\__init__.py", line 7, in <module>
    from labscript_utils.device_registry import *
  File "c:\labscript\lib\site-packages\labscript_utils\device_registry\__init__.py", line 1, in <module>
    from ._device_registry import *
  File "c:\labscript\lib\site-packages\labscript_utils\device_registry\_device_registry.py", line 84, in <module>
    LABSCRIPT_DEVICES_DIRS = _get_device_dirs()
  File "c:\labscript\lib\site-packages\labscript_utils\device_registry\_device_registry.py", line 81, in _get_device_dirs
    return _get_import_paths(['labscript_devices'] + user_devices)
  File "c:\labscript\lib\site-packages\labscript_utils\device_registry\_device_registry.py", line 66, in _get_import_paths
    spec = importlib.util.find_spec(name)
  File "C:\Python36\lib\importlib\util.py", line 91, in find_spec
    return _find_spec(fullname, None)
------------

Traceback (second time imported, as PyCapture2):
------------
  File "C:\Python36\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Python36\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "c:\labscript\lib\site-packages\zprocess\process_class_wrapper.py", line 86, in <module>
    _setup()
  File "c:\labscript\lib\site-packages\zprocess\process_class_wrapper.py", line 83, in _setup
    instance._run()
  File "c:\labscript\lib\site-packages\zprocess\process_tree.py", line 1540, in _run
    _Process._run(self)
  File "c:\labscript\lib\site-packages\zprocess\process_tree.py", line 1121, in _run
    self.run(*args, **kwargs)
  File "c:\labscript\lib\site-packages\blacs\tab_base_classes.py", line 891, in run
    self.mainloop()
  File "c:\labscript\lib\site-packages\blacs\tab_base_classes.py", line 923, in mainloop
    results = func(*args,**kwargs)
  File "c:\labscript\lib\site-packages\labscript_devices\IMAQdxCamera\blacs_workers.py", line 251, in init
    self.camera = self.get_camera()
  File "c:\labscript\lib\site-packages\labscript_devices\IMAQdxCamera\blacs_workers.py", line 280, in get_camera
    return self.interface_class(self.serial_number)
  File "c:\labscript\lib\site-packages\labscript_devices\FlyCapture2Camera\blacs_workers.py", line 70, in __init__
    import PyCapture2
------------

Fatal exception in main process - Thu May 12, 22:08:08 :
 Traceback (most recent call last):
  File "c:\labscript\lib\site-packages\blacs\tab_base_classes.py", line 837, in mainloop
    next_yield = inmain(generator.send,results)
  File "c:\labscript\lib\site-packages\qtutils\invoke_in_main.py", line 88, in inmain
    return get_inmain_result(_in_main_later(fn, False, *args, **kwargs))
  File "c:\labscript\lib\site-packages\qtutils\invoke_in_main.py", line 150, in get_inmain_result
    raise value.with_traceback(traceback)
  File "c:\labscript\lib\site-packages\qtutils\invoke_in_main.py", line 46, in event
    result = event.fn(*event.args, **event.kwargs)
  File "c:\labscript\lib\site-packages\blacs\tab_base_classes.py", line 536, in _initialise_worker
    raise Exception('Device failed to initialise')
Exception: Device failed to initialise

@chrisjbillington @philipstarkey

Add enum control widget

Original report (archived issue) by David Meyer (Bitbucket: dihm, GitHub: dihm).


When programming devices with physical front panels, it is often the case that a control I’d like to manipulate with labscript (at a static level) is best described as an enum (looking at you SRS). I propose we add a basic combobox based widget (like AnalogOutput or DigitalOutput) that can be stuffed with a dictionary of labels and programming values at runtime from the device blacs tab. These controls do not always have an associated output or input class associated with them, rather being a device level setting that influences general operation.

I’m happy to work on this one since we have a current need, but I’d like a bit of guidance on how to integrate with the rest of the BLACS auto-creation of widgets magic. If I have understood correctly, the current paradigm for an AnalogOutput widget is to have the device_tab call a widget auto-populating function which creates AnalogOutput widgets which in turn links to the labscript AO class. This ensures settings from the connection table and blacs tab can configure each output correctly. What is the best way to modify this paradigm?

My initial thought is to drop auto-populating in the blacs_tab in favor of writing something akin to ddsoutput.py for any (often conglomerate) control that would be device specific and kept in the device folder. I’m a little less clear on how to handle enum settings at the AO class level. Should I create a commensurate class that behaves as a StaticAO with discrete values set by dictionary?

Anyway, this is starting to get long and likely confusing since I don’t really know what I’m talking about. So I’ll end by describing what our need is and what I would like to see.

We have an RF Signal Generator (SRS SG386) that has modulation controls. The controllable options include: Enable (on/off), Type (AM/FM/PM/Sweep), Function (Sine, Triangle, Square, External), Deviation (float), Depth (float), and External Coupling (AC/DC). Since all of these controls are inter-related and operate on the same function, it would be nice to create a monolithic control widget that groups them together in the BLACS tab and allows user control while enforcing allowable settings. Slightly beyond the scope of this discussion, when writing an experiment script; having corresponding SG386.mod(Enable) and/or SG386.mod.Depth(1MHz) commands would be great. Getting started, StaticAO/DO covers the boolean and float options just fine. But I need an enum for everything else.

ToolPalette._layout_widgets() appears to recurse, causing a stack overflow

Original report (archived issue) by Chris Billington (Bitbucket: cbillington, GitHub: chrisjbillington).


A lab here at NIST got a "Python has stopped working" error upon starting BLACS today. Starting Python as python -X faulthandler -m blacs revealed it to be a stack overflow in ToolPalette._layout_widgets(), line 348:

self.setMinimumSize(QSize(self.minimumSize().width(), total_height))

Adding a print statement to inspect the dimensions of the QSize() object revealed that _layout_widgets() was being called a large number of times prior to the crash, with the dimensions alternating back and forth between two values.

I added a counter to print the recursion depth and confirm that the method is recursing, but I made a syntax error and BLACS started successfully (having caught the error), modifying its save file and widget geometries such that the stack overflow no longer occurred. So unfortunately I lost the ability to reproduce the problem, as it is sensitive to the widget geometries.

Just documenting what I found here. ToolPalette._layout_widgets() seems to be recursing and not converging on a fixed layout geometry that would break the cycle. If I see it again I will back up the BLACS save file and connection table file to create a reliable reproducer of the problem.

Others have reported similar crashes before, but those were not stack overflows; they were segfaults that looked to be bugs in Qt. This one is plausibly our fault, which means we may be able to fix it.

.pth file not processed in editable install

In an editable install, labscript-suite.pth is not processed at interpreter startup because it is not in a site directory, even though it's in the python path.

This .pth file adds userlib and pythonlib (as defined in labconfig) to the python import path to make user code available for import from the python interpreter regardless of whether the code is running within a labscript suite program.

We could:

  • Work out how to get it into a site directory during an editable install (possibly not reversible or otherwise a bit magic)
  • Convince Python that .egg-link files should imply the target directory is a site directory to be processed by the site module, make a patch and wait for them to incorporate it
  • Work out if there's some way to otherwise add our package directory as a site directory during editable install.
  • Live with it and instead process the .pth file whenever labscript_utils is imported. This means user code would not be available for import in editable installs when not running inside a labscript suite program, unless labscript_utils has been imported. This could simply be documented as a limitation of editable installs.
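The last option is straightforward to sketch: site.addsitedir() registers a directory as a site directory and processes any .pth files inside it, which is exactly the step the interpreter skips for non-site directories. A minimal demonstration (names are illustrative):

```python
import site
import sys
import tempfile
from pathlib import Path

def process_pth_dir(directory):
    """Register `directory` as a site dir so any .pth files inside it are
    processed, mimicking what would happen were it in site-packages.
    (Sketch of the 'process it at labscript_utils import time' option.)"""
    site.addsitedir(str(directory))

# Demo: a .pth file naming another directory puts that directory on sys.path
with tempfile.TemporaryDirectory() as d:
    target = Path(d) / 'userlib'
    target.mkdir()
    (Path(d) / 'labscript-suite.pth').write_text(str(target) + '\n')
    process_pth_dir(d)
    assert str(target) in sys.path
```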

Log files aren't rotating

Original report (archived issue) by Russell Anderson (Bitbucket: rpanderson, GitHub: rpanderson).


The RotatingFileHandler used in setup_logging.py requires a non-zero backupCount argument for rollover to occur, but this argument defaults to zero and we hadn't specified otherwise. With the log level set to DEBUG, log files can grow pretty fast, resulting in unwieldy log files.
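The fix is essentially a one-liner: pass a non-zero backupCount (together with a maxBytes threshold) so that rollover actually occurs. A minimal demonstration, with made-up file names and sizes:

```python
import logging
import logging.handlers
import os
import tempfile

logdir = tempfile.mkdtemp()
logfile = os.path.join(logdir, 'BLACS.log')

# backupCount must be non-zero for rollover to occur:
handler = logging.handlers.RotatingFileHandler(
    logfile, maxBytes=1024, backupCount=3  # keeps BLACS.log.1 .. BLACS.log.3
)
logger = logging.getLogger('blacs_demo')
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

for i in range(200):
    logger.debug('a reasonably long debug message, number %d', i)

handler.close()
assert os.path.exists(logfile + '.1')  # rollover happened
```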

Modulewatcher: ability to blacklist modules

Original report (archived issue) by Chris Billington (Bitbucket: cbillington, GitHub: chrisjbillington).


The ModuleWatcher class should have a method called 'blacklist', which takes a fully qualified module name as an argument.

A blacklisted module will be added, unsurprisingly, to a blacklist. Any modules that import this module will also be blacklisted. This will be achieved using an import tracer (repurposed from labscript_utils.impprof) that is set up when a ModuleWatcher is instantiated.

Blacklisted modules shouldn't be deleted from sys.modules immediately; that wouldn't make sense. They will be deleted on a method call 'clear_blacklisted' or similar.

This functionality is so that calling code can blacklist a module that it knows has import side effects and thus needs to be re-imported in code that runs repeatedly in the same interpreter, like lyse routines and labscript compilation.

So that far away code can blacklist itself, ModuleWatcher should provide access to an existing instance.
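A minimal sketch of the proposed API (ignoring the import-tracer part, which would propagate blacklisting to importing modules; class and method names follow the proposal but the implementation is illustrative):

```python
import sys

class BlacklistSketch:
    """Sketch of the proposal: blacklisted modules are only deleted from
    sys.modules when clear_blacklisted() is called, so repeatedly-run code
    (lyse routines, labscript compilation) re-triggers their import side
    effects."""
    def __init__(self):
        self._blacklist = set()

    def blacklist(self, fullname):
        self._blacklist.add(fullname)

    def clear_blacklisted(self):
        for name in list(sys.modules):
            # also catch submodules of a blacklisted package
            if any(name == b or name.startswith(b + '.') for b in self._blacklist):
                del sys.modules[name]

# Demo with a harmless stdlib module:
import colorsys
watcher = BlacklistSketch()
watcher.blacklist('colorsys')
watcher.clear_blacklisted()
assert 'colorsys' not in sys.modules
```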

Concurrent log handler causes unbearable slowdown

Original report (archived issue) by Chris Billington (Bitbucket: cbillington, GitHub: chrisjbillington).


BLACS is intolerably slow to do anything when using the concurrent log handler introduced to fix the logging bug. I suppose all that file opening and closing as the processes exchange locks is just too much. Toggling a digital out takes ~1 second with all the logging we do, whereas with regular logging it takes ~30ms. This latency adds up everywhere in BLACS, reducing shot throughput and making everything you do a little sluggish.

So it's back to the drawing board with logging - I'll probably subclass a logging handler to send data over zeromq to a server, a la zlock, which was the original plan.

This sluggishness was observed on Windows 10 and is clearly gone if the logging is switched back to a regular FileHandler. I don't observe the problem on Linux.
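The eventual zeromq handler aside, the general shape of the fix - hand records off cheaply on the hot path and do the slow I/O in exactly one place - can be sketched with the standard library's QueueHandler/QueueListener pair:

```python
import io
import logging
import logging.handlers
import queue

# Records are enqueued cheaply on the calling thread; a single listener
# thread performs the slow I/O, so there is no per-message lock exchange.
log_queue = queue.Queue()
stream = io.StringIO()  # stand-in for the real log file
listener = logging.handlers.QueueListener(
    log_queue, logging.StreamHandler(stream)
)
listener.start()

logger = logging.getLogger('blacs_fast_demo')
logger.addHandler(logging.handlers.QueueHandler(log_queue))
logger.setLevel(logging.DEBUG)
logger.debug('toggling digital out...')  # cheap: just a queue.put()

listener.stop()  # flushes remaining records
assert 'toggling digital out' in stream.getvalue()
```

A zmq-based handler is the multi-process analogue of this: the per-call cost becomes a socket send, and a single server owns the file.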

.pth file has no effect in editable install

.pth files only work if they're in a site-packages directory. Editable installs add the current directory to the path, but not as a site directory, so no .pth files get processed.

Perhaps labscript_utils should process the .pth file at import time, if it sees that it has not already been processed. Then user code will be available in the interpreter so long as a labscript suite module has been imported.

Unitconversions base class should use different style of dynamic method creation

Original report (archived issue) by Chris Billington (Bitbucket: cbillington, GitHub: chrisjbillington).


exec'ing def statements is a bit magical, and makes it hard for exception handling code and code analysis tools (and humans too!) to tell what's going on if anything goes wrong. Since Python provides the tools to create methods dynamically by other means, those should be used instead.

Oooh, maybe this will be excuse enough to write a metaclass, but probably not so I won't get my hopes up. I mean they're pretty magical too, so should be avoided without a good reason.
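For illustration, methods can be built with ordinary closures and attached with setattr, which keeps tracebacks and introspection honest without exec (class and method names here are made up):

```python
def make_converter(name, factor):
    """Create a unit conversion method without exec'ing source code."""
    def convert(self, value):
        return value * factor
    convert.__name__ = name  # keeps tracebacks and introspection readable
    convert.__qualname__ = f'UnitConversion.{name}'
    return convert

class UnitConversion:
    pass

# Attach methods dynamically; tools and tracebacks see real function objects
for unit, factor in [('to_MHz', 1e-6), ('to_kHz', 1e-3)]:
    setattr(UnitConversion, unit, make_converter(unit, factor))

assert UnitConversion().to_kHz(5000) == 5.0
```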

`imp` module removed from new python

The imp module has been deprecated since Python 3.4 and is now removed from Python. importlib is the replacement.

I found it being used in the following:

labscript_utils/device_registry/_device_registry.py
labscript_utils/double_import_denier.py
labscript_utils/modulewatcher.py

At this instant I am doing feature development for lyse, which uses modulewatcher.py. In this case imp is being used to take the global import lock, but importlib does not expose this functionality.

I am looking into this right now, but it is a high-priority issue because it blocks labscript from being used on current Python releases.
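For the module-loading uses of imp, the importlib equivalent is spec_from_file_location/module_from_spec; the global import lock has no public importlib replacement, so a module-level threading.Lock is the usual substitute. A sketch of the loading side (the helper name is hypothetical):

```python
import importlib.util
import sys
import tempfile
from pathlib import Path

def load_module_from_path(fullname, path):
    """importlib-based replacement for imp.find_module/imp.load_module."""
    spec = importlib.util.spec_from_file_location(fullname, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[fullname] = module
    spec.loader.exec_module(module)
    return module

with tempfile.TemporaryDirectory() as d:
    src = Path(d) / 'demo_mod.py'
    src.write_text('ANSWER = 42\n')
    mod = load_module_from_path('demo_mod', src)
    assert mod.ANSWER == 42
```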

Can't save integer connection table properties that come from globals

Original report (archived issue) by Chris Billington (Bitbucket: cbillington, GitHub: chrisjbillington).


In Python 3.7 with latest numpy (observed on numpy 1.16.3), one cannot save a connection table attribute or unit calibration parameter that is an integer and that has been through an HDF5 file as a global. JSON serialisation chokes on the integer, saying it can't serialise it. Here is a minimal breaking example:

import json
import h5py
with h5py.File('test.h5', 'w') as f:
    f.attrs['x'] = 5
    json.dumps(dict(f.attrs))
Traceback (most recent call last):
  File "260.py", line 5, in <module>
    print(json.dumps(dict(f.attrs)))
  File "/usr/lib/python3.7/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
  File "/usr/lib/python3.7/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/lib/python3.7/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/usr/lib/python3.7/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type int64 is not JSON serializable

Of course JSON can serialise normal Python integers, but having been through the HDF5 file, the integer became a numpy integer. So an even more minimal breaking example might be:

import json
import numpy as np
json.dumps([np.int32(5)])

And it doesn't matter whether it is an np.int32 or np.int64; both break.

This works fine in Python 2 with the same numpy version, and works if you convert the integer to a float instead. It looks like a regression in either Python or numpy; I'm not sure which, but I'll see if I can figure out where to report a bug. We could work around it in labscript suite code, but shouldn't bother if it is to be imminently fixed upstream.
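In the meantime, a workaround on our side would be a `default` hook for json.dumps that coerces numpy scalars to native Python types (a sketch only; the labscript suite's actual serialisation code lives elsewhere):

```python
import json

import numpy as np

def np_default(obj):
    """json.dumps fallback that converts numpy scalars to Python types."""
    if isinstance(obj, np.integer):
        return int(obj)
    if isinstance(obj, np.floating):
        return float(obj)
    raise TypeError(
        f'Object of type {type(obj).__name__} is not JSON serializable'
    )

# Both widths now serialise fine:
assert json.dumps([np.int32(5), np.int64(7)], default=np_default) == '[5, 7]'
```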

Encapsulate running external programs

Original report (archived issue) by Chris Billington (Bitbucket: cbillington, GitHub: chrisjbillington).


Opening an HDF5 file or text file should be done with common functions in labscript_utils rather than duplicating code across each program that does it.

If there is an error or no external program is configured in labconfig, this function should be able to pop up a dialog prompting the user to pick a text editor or help it find the path to HDFview (or prompting them to install HDFview). It should then write their choice to labconfig.
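A sketch of what such a common function might look like (the helper name is hypothetical, and the dialog/labconfig fallback is only indicated in a comment):

```python
import shutil
import subprocess

def open_with(program, filepath):
    """Launch `program` on `filepath` without blocking, raising a clear
    error if the program is not installed. (Hypothetical helper name.)"""
    exe = shutil.which(program)
    if exe is None:
        # The real implementation would pop up a dialog prompting the user to
        # pick a program (or install HDFview), then write their choice back
        # to labconfig.
        raise FileNotFoundError(
            f'{program!r} not found; configure a viewer/editor in labconfig'
        )
    return subprocess.Popen([exe, filepath])
```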

h5py 2.10.0 Deprecation Warning

Hi all, I get the deprecation warning below when using h5py 2.10.0. It appears in runmanager when compiling shots and in lyse when analysis scripts run.

C:\Users\waveguide3\Desktop\labscript_test_install_3\labscript_utils\h5_lock.py:64: H5pyDeprecationWarning: The default file mode will change to 'r' (read-only) in h5py 3.0. To suppress this warning, pass the mode you need to h5py.File(), or set the global default h5.get_config().default_file_mode, or set the environment variable H5PY_DEFAULT_READONLY=1. Available modes are: 'r', 'r+', 'w', 'w-'/'x', 'a'. See the docs for details.
  _File.__init__(self, name, mode, driver, libver, **kwds)

This was done using labscript-utils mercurial changeset e5a908bdc9cb. In the most recent git commit, this command is now on line 53:

_File.__init__(self, name, mode, driver, libver, **kwds)

Looks like having mode=None is deprecated now. The h5py code for handling that case is here: https://github.com/h5py/h5py/blob/497d6a8ccbf3519fea2cd10093d0cee5da72358a/h5py/_hl/files.py#L188. I'm not very familiar with h5_lock.py but it may be fine to just use mode='a'. Alternatively the old h5py behaviour could be reproduced by recreating those try/except statements from h5py in h5_lock.py.

The warning pollutes the terminal outputs in runmanager and lyse but doesn't prevent anything from working, so this is a pretty minor issue. Also, h5py 2.9.0 works fine and doesn't issue this warning.

labscript_utils.excepthook uses tk

Is there a reason that import labscript_utils.excepthook uses the tk GUI system while the rest of labscript uses Qt? It seems like a needless use of system resources to import two different GUI libraries.

Load example_experiment.py on first run of runmanager

This PR contributes to out-of-the-box functionality. Implementation would be by including app_saved_configs/example_apparatus/runmanager/runmanager.ini in the default_profile created by labscript-utils.

[runmanager_state]
current_labscript_file = 'C:\\Users\\rpanderson\\labscript-suite\\userlib\\labscriptlib\\example_apparatus\\example_experiment.py'

However, this either needs to be dynamically generated during labscript-profile-create, or to permit interpolation, e.g.

[runmanager_state]
current_labscript_file = %(labscript_suite)s\userlib\labscriptlib\example_apparatus\example_experiment.py

The Win7AppId executable can be replaced with Python calls now

Original report (archived issue) by Philip Starkey (Bitbucket: pstarkey, GitHub: philipstarkey).


Not sure if this has always existed, but I've discovered that the shortcut property modification done by win7appid.exe can be replicated with direct Python calls via the win32com library. For example, the code below fixes a bug with Spyder that's existed for almost 5 years and which no-one bothered to solve before me:

#!python

from win32com.shell import shellcon
from win32com.propsys import propsys, pscon
import pythoncom

shortcut_path = r"C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Anaconda3 (64-bit)\Spyder.lnk"
store = propsys.SHGetPropertyStoreFromParsingName(shortcut_path, None, shellcon.GPS_READWRITE, propsys.IID_IPropertyStore)
store.SetValue(pscon.PKEY_AppUserModel_ID, propsys.PROPVARIANTType(u'spyder.Spyder', pythoncom.VT_LPWSTR))
store.Commit()

I'm pretty sure we should be able to do the same thing for our own shortcuts :)

If you have lots of tabs, the width of each tab shrinks rather than allowing scrolling

Original report (archived issue) by Philip Starkey (Bitbucket: pstarkey, GitHub: philipstarkey).


The new changes to the drag-drop-tab-bar are less than ideal for those managing a large number of tabs. Each tab now shrinks so that all tabs are visible at all times. This makes it hard to read the device name, and it can be difficult to distinguish devices if several have a similar name with a differing number at the end (e.g. pulseblaster_3, pulseblaster_4).

This should be fixed!

win32 dependency has a minimum requirement

Original report (archived issue) by Philip Starkey (Bitbucket: pstarkey, GitHub: philipstarkey).


In the dual species BEC lab at Monash University, we are running the EPD 7.0.2 (32-bit) Python 2.7.1 distribution. This came with the win32 library.

When starting runmanager (and presumably other programs) the following exception was raised

#!python

Traceback (most recent call last):
  File "C:\pythonlib\runmanager\__main__.py", line 101, in set_win_appusermodel
    set_appusermodel(window_id, appids['runmanager'], icon_path, relaunch_command, relaunch_display_name)
  File "C:\pythonlib\labscript_utils\winshell\__init__.py", line 56, in set_appusermodel
    store = propsys.SHGetPropertyStoreForWindow(window_id, propsys.IID_IPropertyStore)
AttributeError: 'module' object has no attribute 'SHGetPropertyStoreForWindow'

Upgrading win32 to build 219 (from sourceforge) made the exception go away. Presumably there is a minimum version requirement for the win32 python wrapper. Unfortunately I couldn't see an easy way to find the version number from the win32 library (but I didn't try very hard).

We should probably try to include some sort of version check on this dependency.

Excepthook should save me from my stupidity

Original report (archived issue) by Chris Billington (Bitbucket: cbillington, GitHub: chrisjbillington).


I just launched a Qt program with a typo in the topmost event filter or something, which got called about a billion times as my program started up. Of course Qt's event loop doesn't miss a beat on an exception in a callback, and so my program spawned what looked (with ps) like several hundred excepthook processes. I didn't see them actually pop up, and only know they eventually stopped accumulating, because my desktop environment immediately became unresponsive. After dropping to a virtual terminal to check things out and executing a carefully crafted few lines of Python to filter ps and kill them all, I was able to preserve my record of not yet having rebooted my laptop on this continent.

Anyway, it occurred to me that this shouldn't happen. Excepthook should show at most ten or so concurrent exception windows, then a final one saying that further errors are not being shown graphically.

Can't Import userlib

Hi All,

I'm attempting to get labscript running on a new computer with an anaconda developer install on Windows. The installation runs fine but I'm not able to import userlib, even after running labscript-profile-create. Instead I get ModuleNotFoundError: No module named 'userlib'. This occurs even with the default values in the labconfig. I'm able to import pythonlib just fine though.

In an interactive session I checked sys.path and it included C:\Users\UserName\labscript-suite\userlib and C:\Users\UserName\labscript-suite\userlib\pythonlib. I tried editing sys.path to move each of those up one directory, so sys.path then contained C:\Users\UserName\labscript-suite and C:\Users\UserName\labscript-suite\userlib. After that I was able to import both userlib and pythonlib in that interactive session without error.

Maybe the parent directories of userlib and pythonlib should be added to path instead?

import site
from configparser import ConfigParser, NoSectionError, NoOptionError
# default_labconfig_path and LABSCRIPT_SUITE_PROFILE come from labscript_profile

def add_userlib_and_pythonlib():
    """Find the user's labconfig file, read the userlib and pythonlib keys, and add
    those directories to the Python search path. This function intentionally
    re-implements finding and reading the config file so as to not import
    labscript_utils, since we don't want to import something like labscript_utils
    every time the interpreter starts up"""
    labconfig = default_labconfig_path()
    if labconfig is not None and labconfig.exists():
        config = ConfigParser(defaults={'labscript_suite': LABSCRIPT_SUITE_PROFILE})
        config.read(labconfig)
        for option in ['userlib', 'pythonlib']:
            try:
                paths = config.get('DEFAULT', option).split(',')
            except (NoSectionError, NoOptionError):
                paths = []
            for path in paths:
                site.addsitedir(path)

Somewhat related: #43

This seems like an issue that others would have run into before, so there may be something different about my setup somehow. Maybe it's because this computer never had a mercurial install of labscript on it? I checked sys.path on one of our other computers which still has a mercurial labscript install, and it included the paths to all three of C:\Users\UserName\labscript-suite, C:\Users\UserName\labscript-suite\userlib and C:\Users\UserName\labscript-suite\userlib\pythonlib.

Cheers,
Zak

Change experiment_name to apparatus_name in labconfig

The term 'experiment' is overused and currently classifies at least:

  • A given labscript (experiment) Python script (although this is strictly referred to e.g. in the [experiment!] shot file as the script_basename); and
  • The apparatus that this experiment is executed on.

This proposal would see the experiment_name keyword renamed to apparatus_name in labconfig and downstream code, which is a better acknowledgement of the intended classification (which was only termed 'experiment' owing to it being colloquially synonymous with 'apparatus').

This would mostly affect the [DEFAULT] labconfig section, i.e.

[DEFAULT]
apparatus_name = default_apparatus
shared_drive = C:
experiment_shot_storage = %(shared_drive)s\Experiments\%(apparatus_name)s
userlib=%(labscript_suite)s\userlib
pythonlib = %(userlib)s\pythonlib
labscriptlib = %(userlib)s\labscriptlib\%(apparatus_name)s
analysislib = %(userlib)s\analysislib\%(apparatus_name)s
app_saved_configs = %(labscript_suite)s\app_saved_configs\%(apparatus_name)s

The impetus for this proposal came from working example code that I am currently developing (so that the entire suite will work out-of-the-box), which I'd like to name e.g. example_experiment.py, and which runs on default_apparatus (or example_apparatus), etc. Otherwise you have example_experiment.py running on default_experiment, which is ambiguous and exposes this mis/over-use of terminology.

Labconfig hostname

Original report (archived issue) by Jan Werkmann (Bitbucket: PhyNerd, GitHub: PhyNerd).


Having labconfig named after the hostname can cause problems on mobile systems, as they can have changing hostnames in different network environments.
Possible solutions:

  • New identifier that is better than hostname

  • a file that saves what config file to use

ModuleWatcher: ability to append to whitelist

Original report (archived issue) by Russell Anderson (Bitbucket: rpanderson, GitHub: rpanderson).


Use case: reloading tensorflow fails.

Minimum failing example: lyse analysis routine containing:

import your_face
import tensorflow.core

… where your_face.py contains:

print('Your face')

Modify your_face.py to trigger module reloading, and get:

Traceback (most recent call last):
  File "C:\labscript_suite\userlib\analysislib\common\tensorflow_bug.py", line 2, in <module>
    import tensorflow.core
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\__init__.py", line 24, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\__init__.py", line 52, in <module>
    from tensorflow.core.framework.graph_pb2 import *
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\core\framework\graph_pb2.py", line 15, in <module>
    from tensorflow.core.framework import node_def_pb2 as tensorflow_dot_core_dot_framework_dot_node__def__pb2
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\core\framework\node_def_pb2.py", line 15, in <module>
    from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\core\framework\attr_value_pb2.py", line 15, in <module>
    from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\core\framework\tensor_pb2.py", line 15, in <module>
    from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\core\framework\resource_handle_pb2.py", line 91, in <module>
    __module__ = 'tensorflow.core.framework.resource_handle_pb2'
TypeError: A Message class can only inherit from Message

Similar issue reported here, where Spyder users report disabling the user-module reloader, or preventing google.* modules from being reloaded. For me, the above error was resolved by modifying modulewatcher.py to prevent tensorflow.* modules from being reloaded.
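The modulewatcher.py change amounts to a prefix check before unloading; a minimal sketch (the function name and default prefixes are illustrative):

```python
def should_reload(module_name, whitelist_prefixes=('tensorflow', 'google')):
    """Return False for modules that must never be unloaded from sys.modules,
    e.g. tensorflow.* (re-executing protobuf-generated modules raises
    'A Message class can only inherit from Message')."""
    return not any(
        module_name == prefix or module_name.startswith(prefix + '.')
        for prefix in whitelist_prefixes
    )

assert should_reload('your_face')
assert not should_reload('tensorflow.core.framework.tensor_pb2')
```

ModuleWatcher would consult this check in its unload loop, and expose a method for user code to append prefixes to the whitelist.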

FileWatcher has thread unsafe attributes exposed

FileWatcher has a locking object that it uses when updating its files and folders attributes. However, all of these attributes are actually public, and not thread safe. FileWatcher should be cleaned up, with variables renamed to indicate that they should not be used by external code directly.

Unitconversions: less magic and more explicit importing

Original report (archived issue) by Chris Billington (Bitbucket: cbillington, GitHub: chrisjbillington).


Unlike labscript_devices, we don't require a unit conversions class to be in a file with the same name as the class within it. And you can have multiple unit conversion classes within a file; in fact I wrote some today. This is good: unit conversion classes can be small and we don't want a proliferation of files when related things could be in a file together.

So that means that unitconversions.__init__.py goes out and does a from <module> import * on everything, to make sure it gets all the conversion classes.

This can have unfortunate side effects, like when you import a single unit conversion class, you unwittingly import them all (this can be a problem if you have line-in-the-sand style reloading of modules, and accidentally put these modules on the wrong side of the line. This isn't just me making up issues, it happened today to me!)

It also pollutes the namespace of unitconversions.__init__.py with all the global variables from the files in which unit conversions are defined.

What we should do instead is have labscript store the fully qualified class name of the unit conversion class that is used. So that's the fully qualified module name plus the class name (e.g. labscript_utils.unitconversions.myconversionmodule.MyConversionClass), the same way that the pickle module stores what class you're using, so it can import it and instantiate one on unpickling, regardless of whether that class has been imported.

A function in unitconversions.__init__.py should then be provided that will go and find the relevant class (which can be anywhere in the Python search path), and return it to the caller. This function should happily import the required modules every time it is called, but mostly this will just do nothing because the module will already be in sys.modules and so it will be returned without the code being run. However if a ModuleWatcher has in the meantime unloaded a module due to it changing, it will be re-executed, and the caller will get the brand, shiny new class.

When users want a unit conversion class, they will import it directly from wherever it is. When BLACS wants a unit conversion class, it should call that function to get the class by name.
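The lookup function is essentially what pickle does: split the qualified name, import the module, fetch the attribute. A sketch, using a stdlib class as a stand-in for a real unit conversion class:

```python
import importlib

def get_unit_conversion_class(fullname):
    """Import and return a class given its fully qualified name, the same way
    pickle resolves classes. Re-importing is cheap when the module is already
    in sys.modules, but picks up a fresh copy if a ModuleWatcher has unloaded
    the module in the meantime."""
    module_name, class_name = fullname.rsplit('.', 1)
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# Stand-in for e.g. 'labscript_utils.unitconversions.mymodule.MyClass':
from collections import OrderedDict
assert get_unit_conversion_class('collections.OrderedDict') is OrderedDict
```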

This will not be backward compatible, there is no easy way to make it backward compatible without defeating the benefits completely. So it will be a major version bump and changes in other code will have to have corresponding bumps in their dependency checks.

This helps us further along the path of 'don't run code that isn't yours'. We started with having literally everything in labscript.py, but as we branch out to more users and devices, we shouldn't be just executing all code everywhere, just what we need. Otherwise users are subject to the import dependencies and possible crashes of code that is not theirs and they aren't using.

Module importlib_metadata is not in installer dependency list

Original report (archived issue) by Shaun Johnstone (Bitbucket: shjohnst, GitHub: shjohnst).


importlib_metadata is now required when using Python 2.7, so it should be added to the installer dependency list. Also, when updating from an older combination of BLACS and labscript_utils, the error when this module is missing is silent when not running from a terminal - BLACS will simply fail to open. Is it possible to catch this? Obviously the main solution is to encourage everyone to move to Python 3!

labscript_suite variable in labconfig is not used as config and is misleading

The labscript_suite variable in labconfig used to point to the install directory. Now we don't have an install directory per se that we explicitly need to know about - the packages are expected to be somewhere in the python path and should work so long as they are. Instead we have a 'profile' directory where user data and config is stored. This profile's path is hard-coded so can't be modified anyway (and we can't make it customisable via labconfig since labconfig is in the profile directory - this is a bootstrapping problem).

The only purpose this path serves is as a variable the user can use in other configurable settings.

Therefore let's get rid of it, and introduce the convention that relative paths in labconfig variables are interpreted as relative to the profile directory.

This will mean writing a get_path method or something for LabConfig that will interpret paths this way, and callers that want paths will need to call it instead of the regular get function.

So I'll aim to add that to labconfig.py, remove the labscript_suite variable, and then grep through project repos for instances of reading the config variables which are paths.
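A sketch of the proposed get_path behaviour (shown as a standalone function rather than a LabConfig method; the profile path shown is the hard-coded convention):

```python
from pathlib import Path

# The real profile path is hard-coded by labscript_profile; shown here as the
# conventional location.
PROFILE_DIR = Path.home() / 'labscript-suite'

def get_path(value, profile_dir=PROFILE_DIR):
    """Interpret a labconfig value as a path, resolving relative paths
    against the profile directory (sketch of the proposed get_path method)."""
    path = Path(value)
    if not path.is_absolute():
        path = profile_dir / path
    return path
```

Callers that want paths would call this instead of the plain get(), making the labscript_suite interpolation variable unnecessary.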

Units in spinbox for analog quantities

Original report (archived issue) by Shaun Johnstone (Bitbucket: shjohnst, GitHub: shjohnst).


As per @Spielman's suggestion in pull request #12, we should consider moving the units to within the spinbox for analog quantity widgets.

I would encourage a solution where the Hz (for example) is rendered in the spinbox, and perhaps where a right click gives access to the units.

The unit change could be a sub-menu in the right-click menu for that widget. This helps to avoid the "jagged" appearance of the new vertical DDS widget layout.

dragdroptabs behave strange

Original report (archived issue) by Jan Werkmann (Bitbucket: PhyNerd, GitHub: PhyNerd).


If I try to drag an inactive tab, that works as long as I stay in the same tab container. Once I drag it out, the dragged tab is suddenly switched and I'm dragging the selected tab instead.

It would be better if the dragged tab stayed the same.

Allow check_version to require only minimum version

Original report (archived issue) by Russell Anderson (Bitbucket: rpanderson, GitHub: rpanderson).


At present, check_version requires the version of a dependency be equal to or greater than a particular version (at_least), and less than a higher specified version (less_than). This proposal would permit check_version to only require the version of a dependency be at_least or later, with no upper bound on the required version.

Reasoning: we won't have to keep updating check_version calls in line with packages that change minor version number regularly. I think check_version is more of a hindrance than a help here: e.g. if a user updates packages in their environment and the new version exceeds the less_than specified in a check_version call of a labscript module's __init__.py, then that module won't start. This may elicit the user to post an issue (diligence), downgrade the dependency (apathy), or give up (despair). If the issue is reported, checking the higher version of the dependency may not be exhaustive as we don't have unit tests, etc. Instead, this proposal permits (but does not demand) a "works with dependency vX or later until we know better" approach. If there is a real incompatibility with a newer version of a dependency, this will likely be detected in real-world use. The issue can then be resolved expeditiously if reported.

Editable installs with setuptools>63 broken

Between Johannes and Chris, an issue has been identified that prevents proper device path resolution when using editable installs with a version of setuptools that is PEP 660 compliant (i.e. >= 64).

Doing a very minor amount of sleuthing on my own, I've confirmed that PEP 660 compliant setuptools developer builds no longer call the develop command, which is what installs the labscript-suite.pth file into site-packages via a command overload in labscript-utils. For me, the called command is now editable_wheel, but this command does not appear to expose the install directory. Other called commands (like build_py) also do not seem to expose this. Furthermore, there doesn't really seem to be a great backwards compatible option, meaning whatever is done will likely need to be setuptools version dependent, which is a little annoying.

Looking over the docs, it appears that the exact method used for editable installs is not guaranteed on every system. On my system, the last method using a dynamic import finder happens to be used. Given what the docs say, this may be the consistently chosen option since we control project structure and install options, but this may be something to be aware of.

Unitconversions module does not belong in labscript_utils

Original report (archived issue) by Chris Billington (Bitbucket: cbillington, GitHub: chrisjbillington).


The unitconversions module does not belong in labscript_utils. Unit conversion classes are updated and committed often, and should not be kept in sync with the rest of labscript_utils, which provides application libraries that change less often. They should be considered 'user space' code, like labscript_devices is.

unitconversions should be its own module at the toplevel, or else part of labscript_devices. My preference leans toward the former, especially seeing that it is not coupled with other devices, and that we now have an automated installer making the minimisation of the number of packages less important (though I would not propose splitting off the rest of labscript_utils, with possible exceptions due to some of them having general interest outside of labscript, like excepthook and h5_lock, which may be released separately one day).

For backward compatibility, labscript_utils.unitconversions should continue to import the separate module to provide seamless functionality, but should print a deprecation warning. This code should be tagged with a comment # DEPRECATED or similar, so it can be seen and removed at a major version bump. (This approach should be followed for backward compatibility issues in general.)

Winshell functions fail on Windows Vista

Original report (archived issue) by Philip Starkey (Bitbucket: pstarkey, GitHub: philipstarkey).


The make_shortcut and set_appusermodel functions in winshell.__init__.py fail to run on Windows Vista.

This causes the installer to fail towards the end of the install process and also all the applications to raise exception windows (although so far these have not been reported to prevent execution of the software).

We should ensure these functions use a try-except block so they don't crash on Windows versions that are not supported, and print an appropriate warning (encouraging the user to report a bug if they are on a modern version of Windows).

Permit profile creation in existing directory

Profile creation fails if pathlib.Path.home() / 'labscript-suite' exists.

~/labscript-suite$ labscript-profile-create
Traceback (most recent call last):
  File "/home/rpanderson/labscript-suite/.venv/bin/labscript-profile-create", line 11, in <module>
    load_entry_point('labscript-utils==2.16.0.dev3', 'console_scripts', 'labscript-profile-create')()
  File "/home/rpanderson/labscript-suite/.venv/lib/python3.7/site-packages/labscript_profile/create.py", line 41, in create_profile
    raise FileExistsError(LABSCRIPT_SUITE_PROFILE)

This proposal is to permit existence of pathlib.Path.home() / 'labscript-suite', but:

  1. not if any files of the same name as those in DEFAULT_PROFILE_CONTENTS exist; or, alternatively
  2. not if directories of the same name as those in DEFAULT_PROFILE_CONTENTS exist.

This would permit, at least, installing a virtual environment and/or a local install of the suite in pathlib.Path.home() / 'labscript-suite'.

For (1) above, the dirs_exist_ok parameter of shutil.copytree could be used, but this requires python_version >= '3.8'.
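A sketch of option (2), using dirs_exist_ok (Python >= 3.8); the function name mirrors create_profile but the details are illustrative:

```python
import shutil
import tempfile
from pathlib import Path

def create_profile(profile_dir, template_dir):
    """Copy the default profile into profile_dir, permitting the directory
    to already exist but refusing to clobber entries with the same names as
    the default profile contents. Requires Python >= 3.8 for dirs_exist_ok."""
    profile_dir, template_dir = Path(profile_dir), Path(template_dir)
    for entry in template_dir.iterdir():
        if (profile_dir / entry.name).exists():
            raise FileExistsError(profile_dir / entry.name)
    shutil.copytree(template_dir, profile_dir, dirs_exist_ok=True)

# Demo: a pre-existing .venv in the profile dir no longer aborts creation
template = Path(tempfile.mkdtemp())
(template / 'userlib').mkdir()
profile = Path(tempfile.mkdtemp())  # the profile directory already exists
(profile / '.venv').mkdir()         # unrelated content is tolerated
create_profile(profile, template)
assert (profile / 'userlib').is_dir() and (profile / '.venv').is_dir()
```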

Restore check_version

labscript_utils 3.0.0 removed the check_version function. This was because it was no longer used by the labscript suite, which now manages its dependencies using pip/conda. However, code that is distributed in forms other than packages on PyPI/anaconda cloud, particularly code living in userlib, cannot use pip/conda to manage versions of its dependencies, so it is still useful to provide a function to do runtime checks.

So we should add it back in.

See brief discussion here

Zlog port not included in default profile

Totally minor thing, but the zlog default port is not configured in the default.ini profile (unlike all the other services). Everything works fine, since the default is pulled from zprocess.zlog itself, but it would be nice to include it explicitly in the config file alongside zlock so that it is obvious it can be changed there.

Note for future self: the default configuration is
zlog = 7340
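For reference, the addition could look like the snippet below; the section name is assumed from how the other service ports are configured, and 7340 is the zprocess.zlog default noted above.

```ini
[ports]
zlog = 7340
```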

labconfig should do some initialisation on config files

Original report (archived issue) by Chris Billington (Bitbucket: cbillington, GitHub: chrisjbillington).


When a LabConfig is instantiated, it should call a general initialisation function that tries to make sense of the labconfig file.

This should do things like:

  • Ensure the experiment name is a valid Python module name
  • Automatically create a subfolder for the experiment in analysislib and labscriptlib, with an __init__.py in each (if they don't already exist).
  • Ensure the paths to the shared drive exist

And I'm sure there are more things, currently scattered throughout the programs that use labconfig, which ought to be handled in one consistent place.

This initialisation should be able to be suppressed with a keyword argument.

Perhaps this is also where the setting of default arguments should live, rather than in the individual programs. We don't have 'runmanagerconfig' and 'BLACSconfig', so as long as we have one config file, all its defaults should probably live in the same place.

Double import denier breaks with runviewer v2.1.0

Original report (archived issue) by Philip Starkey (Bitbucket: pstarkey, GitHub: philipstarkey).


The latest tagged version of runviewer is v2.1.0. When using the tagged (or tip) version of labscript utils, the following exception is raised:

#!python

Traceback (most recent call last):
  File "C:\labscript_suite\labscript_py27\runviewer\__main__.py", line 78, in <module>
    from resample import resample as _resample
RuntimeError: Double import! The same file has been imported under two different names, resulting in two copies of the module. This is almost certainly a mistake. If you are running a script from within a package and want to import another submodule of that package, import it by its full path: 'import module.submodule' instead of just 'import submodule.'

Path imported: C:\labscript_suite\labscript_py27\runviewer\resample

Traceback (first time imported, as resample):
------------
  File "C:\labscript_suite\labscript_py27\runviewer\__main__.py", line 78, in <module>
    from resample import resample as _resample
------------

Traceback (second time imported, as runviewer.resample):
------------
  File "C:\labscript_suite\labscript_py27\runviewer\__main__.py", line 78, in <module>
    from resample import resample as _resample
  File "C:\labscript_suite\labscript_py27\runviewer\resample\__init__.py", line 33, in <module>
    module = importlib.import_module('runviewer.resample.%s.resample'%plat_name)
  File "C:\Anaconda3\envs\labscript_py27\lib\importlib\__init__.py", line 37, in import_module
    __import__(name)
------------

Updating runviewer to tip fixes the issue (and so we should really tag a new runviewer version ASAP), but only because it changes the way the resample algorithm is compiled and used. However, it looks to me like the double import denier is still being too aggressive here, since a single import line was enough to trigger it.

Double import denier breaks python3 stuff

Original report (archived issue) by Jan Werkmann (Bitbucket: PhyNerd, GitHub: PhyNerd).


In modules where labscript_utils is imported, the command python -m module_name_here leads to an exception:

#!python

Traceback (most recent call last):
  File "/Users/janwerkmann/anaconda/envs/snowflakes/lib/python3.6/runpy.py", line 185, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/Users/janwerkmann/anaconda/envs/snowflakes/lib/python3.6/runpy.py", line 142, in _get_module_details
    return _get_module_details(pkg_main_name, error)
  File "/Users/janwerkmann/anaconda/envs/snowflakes/lib/python3.6/runpy.py", line 155, in _get_module_details
    code = loader.get_code(mod_name)

I did a bit of debugging, and it seems to be caused by loader being of type <labscript_utils.double_import_denier.Loader object at 0x10e910438>. So we should maybe fix that for Python 3 compatibility.
Fun fact: python filepath_to_module still works.
