
mannlabs / alphamap

An open-source Python package for the visual annotation of proteomics data with sequence specific knowledge.

Home Page: https://mannlabs.github.io/alphamap/

License: Apache License 2.0

Jupyter Notebook 66.12% Python 0.88% Makefile 0.01% Shell 0.02% Inno Setup 0.01% Batchfile 0.01% HTML 0.01% CSS 0.01% JavaScript 32.94%
mass-spectrometry proteomics peptide-level visualization python gui

alphamap's People

Contributors

dependabot[bot], eugeniavoytik, ibludau, julias92, straussmaximilian, swillems


alphamap's Issues

Update to new Fragpipe output

Hi all,

Very nice tool, I really like it!!

FragPipe's IonQuant has recently been updated and now produces combined_peptide.tsv and combined_peptide_modified.tsv. Could support for this updated output be added? The update also causes the existing combined_peptide.tsv import to stop working.

Cheers,
Patrick

Add binder link

Hi

I am a contributor to HoloViz Panel and wanted to quickly check out this repo to understand how it uses Panel.

It seems I have to do a local installation if I want to play around with this. It would help me and probably others if we could quickly try out this project on Binder instead of having to install and clean up on our laptops.

Please add a Binder link.

For inspiration, check out my Binder configuration at https://github.com/MarcSkovMadsen/panel-chemistry/tree/main/binder and how I use it in the README.md of https://github.com/MarcSkovMadsen/panel-chemistry


Potential performance issue: .apply slow in pandas 1.4.0

Issue Description:

Hello.
I have discovered a performance degradation in the .apply function of pandas versions below 1.5, and I notice the repository pins pandas 1.4.0 in requirements.txt. I am not sure whether this pandas performance problem affects this repository. I found some discussions on the pandas GitHub related to this issue, including #44172 and #45404.
I also found that alphamap/preprocessing.py and alphamap/importing.py use the affected API. There may be more files using the affected API and more code relying on pandas 1.4.0.

Suggestion

I would recommend considering an upgrade to pandas >= 1.5, or exploring other solutions to optimize the performance of .apply.
Any other workarounds or solutions would be greatly appreciated.
Thank you!
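As an illustration of the optimization mentioned above, row-wise `.apply` can often be replaced with vectorized column operations, which sidesteps the slow path entirely. This is a minimal sketch with made-up column names, not code from alphamap:

```python
import pandas as pd

# Hypothetical data; the column names are illustrative, not alphamap's.
df = pd.DataFrame({
    "Peptide": ["PEPTIDE", "SEQUENCE"],
    "Charge": [2, 3],
})

# Row-wise .apply: one Python-level call per row (the slow path
# affected in pandas < 1.5).
slow = df.apply(lambda row: f"{row['Peptide']}/{row['Charge']}", axis=1)

# Equivalent vectorized form: operates on whole columns at once and is
# fast on all pandas versions.
fast = df["Peptide"] + "/" + df["Charge"].astype(str)

assert slow.equals(fast)
```

Whether this applies to the `.apply` calls in alphamap/preprocessing.py and alphamap/importing.py depends on what those lambdas do; string-building and arithmetic usually vectorize, arbitrary Python logic may not.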

problem in uploading file.tsv from DIANN

I have uploaded a .tsv file from DIA-NN. All the single files appear to be selected, but I get these messages ending in an error:

Traceback (most recent call last):
  File "tornado\ioloop.py", line 741, in _run_callback
    ret = callback()
  File "tornado\ioloop.py", line 765, in _discard_future_result
    future.result()
  File "bokeh\server\session.py", line 67, in _needs_document_lock_wrapper
    result = func(self, *args, **kwargs)
  File "bokeh\server\session.py", line 195, in with_document_locked
    return func(*args, **kwargs)
  File "bokeh\document\document.py", line 1183, in wrapper
    return doc._with_self_as_curdoc(invoke)
  File "bokeh\document\document.py", line 1169, in _with_self_as_curdoc
    return f()
  File "bokeh\document\document.py", line 1182, in invoke
    return f(*args, **kwargs)
  File "bokeh\document\document.py", line 972, in remove_then_invoke
    return callback(*args, **kwargs)
  File "panel\reactive.py", line 204, in _change_event
    self._process_events(events)
  File "panel\reactive.py", line 187, in _process_events
    self.param.set_param(**self._process_property_change(events))
  File "param\parameterized.py", line 1451, in set_param
    self_._batch_call_watchers()
  File "param\parameterized.py", line 1578, in _batch_call_watchers
    watcher.fn(*events)
  File "panel\param.py", line 739, in _replace_pane
    new_object = self.eval(self.object)
  File "panel\param.py", line 663, in eval
    return function(*args, **kwargs)
  File "param\parameterized.py", line 337, in _depends
    return func(*args,**kw)
  File "alphamap\gui.py", line 1452, in upload_data
    upload_organism_info()
  File "alphamap\gui.py", line 1138, in upload_organism_info
    full_fasta = import_fasta(select_organism.value)
  File "alphamap\organisms_data.py", line 89, in import_fasta
    with urllib.request.urlopen(github_file) as response, open(os.path.join(DATA_PATH, fasta_name), 'wb') as out_file:
PermissionError: [Errno 13] Permission denied: 'C:\Program Files (x86)\AlphaMap\alphamap\..\alphamap\data\mouse.fasta'

Dtypewarning inherited from pandas

When attempting to upload data after opening my own *.tsv file from DIA-NN, the GUI briefly indicates that it is processing the data before hanging. The cmd window shows the following warning:

pandas\io\parsers\c_parser_wrapper.py:404: DtypeWarning: Columns (2) have mixed types. Specify dtype option on import or set low_memory=False.
warnings.warn(warning_message, DtypeWarning, stacklevel=find_stack_level())

This error is reproducible using the test_diann_input.tsv file


Version 0.1.10 installed via one-click GUI installer on Windows 11 Pro for Workstations (Version 10.0.22000 Build 22000)

Installation via pip not possible on Mac with M1 silicon processor

Describe the bug
Installation not possible

To Reproduce
Run pip install alphamap

Expected behavior
Installation runs through

Screenshots
Screenshot 2022-11-24 at 14 49 32
[...]
Screenshot 2022-11-24 at 14 49 45

Desktop (please complete the following information):

  • Installation Type: pip
  • OS: macOS Ventura
  • Version: 0.1.10 (latest today)

Additional context
I guess this problem occurs because of an outdated NumPy version (see also #53).

Adding organism

Hello,

Thank you for the tool. I am interested in adding a FASTA for an organism not included in the top-13 list (porcine). Is there already a way to do this? Any guidance would be appreciated.

Thank you

Current installation approach does not work with either pip or the developer setup

I am currently having difficulty installing the alphamap package using either Pip or the developer approach.

To Reproduce
Steps to reproduce the behavior:

  1. I installed Python 3.8 using brew install python@3.8.
  2. I then installed conda using the instructions here (https://docs.anaconda.com/free/anaconda/install/mac-os/)
  3. I cloned the github repository into a folder on my local machine, and then followed the instructions here (https://mannlabs.github.io/alphamap/#Pip) in order to get alphamap installed.
  4. After running the commands conda create -n alphamap python=3.8 -y and conda activate alphamap, I try to run pip3.8 install -e ., but the following error occurs:

ERROR: Cannot install alphamap and alphamap==0.1.10 because these package versions have conflicting dependencies.

The conflict is caused by:
    alphamap 0.1.10 depends on numpy==1.19.2
    bokeh 2.2.2 depends on numpy>=1.11.3
    h5py 3.6.0 depends on numpy>=1.14.5
    pandas 1.4.0 depends on numpy>=1.20.0; platform_machine == "arm64" and python_version < "3.10"

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

I tried to solve this by changing the required numpy version from 1.19.2 to 1.20, but this resulted in the following abridged error:

ERROR: Failed building wheel for numpy
Failed to build h5py numpy
ERROR: Could not build wheels for h5py, numpy, which is required to install pyproject.toml-based projects

While the one-click GUI installer works fine, I need to work with some of the code in the developer setting.

Desktop:

  • Installation Type: Pip/Developer
  • OS: Ventura 13.4.1, Apple M1 Max Chip

Make it very easy for a user to explore the app

I just tried out the app on Binder. But I was not able to use it. I tried downloading and using some of the sample files but I did not manage to get anything working.

image

For a user just exploring the app, it is very text-heavy and depends on files on their local drive.

The application would be easier to try out if an example file were provided directly, together with a link to more files, and if one could use a drag-and-drop FileInput instead of locating files in the local or server file directory.

Also, if the docs included some kind of visualization or video, it would be easier to understand how the app works.

MSFragger / FragPipe integration

It would be very helpful to also have the possibility to upload results from a search performed with FragPipe (combined_peptide.tsv?).
Thank you for this tool!

When uploading DIA-NN report.tsv: param\parameterized.py:337: DtypeWarning: Columns (2) have mixed types

Describe the bug
When uploading a DIA-NN report.tsv file the following error is displayed in the terminal:

param\parameterized.py:337: DtypeWarning: Columns (2) have mixed types. Specify dtype option on import or set low_memory=False.

The error shown in the browser is the following:

The columns necessary for further analysis cannot be extracted from the first experimental file. Please check the data uploading instructions for a particular software tool.

From my own testing of working with DIA-NN output (version 1.8), setting low_memory=False in pandas fixes the problem.
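For context, the warning comes from pandas reading large files in chunks: a column that starts numeric but later contains strings can get different dtypes per chunk. A minimal sketch with synthetic data (not a real DIA-NN report; the column names are only DIA-NN-style guesses):

```python
import io
import pandas as pd

# Synthetic two-row stand-in for a DIA-NN report: a column that mixes
# numeric-looking and string values. On a large file, pandas' chunked
# type inference would emit DtypeWarning here; low_memory=False reads
# the whole file in one pass so the column gets one consistent dtype.
tsv = "Protein.Group\tPG.Q.Value\n1234\t0.001\nP04637\t0.002\n"

df = pd.read_csv(io.StringIO(tsv), sep="\t", low_memory=False)
print(df["Protein.Group"].dtype)  # object
```

Alternatively, passing an explicit `dtype=` mapping for the known mixed column avoids inference altogether.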

To Reproduce
Steps to reproduce the behavior:

  1. Enter the path to a DIA-NN output file
  2. See that the sample names are parsed correctly
  3. Click Upload
  4. See error

Expected behavior
No error

Desktop (please complete the following information):

  • Installation Type: Windows Installer
  • OS: Windows 10
  • Version 0.0.8

Hi, is it possible to have a more detailed guide on how to use fasta files not included in the list? Thanks


peptide.tsv file from FragPipe output fails to load

Dear alphamap team,
First, thanks for maintaining such a useful tool!

I have noticed what I think is a bug when attempting to load FragPipe outputs into AlphaMap.

After entering the file location in the corresponding area of the AlphaMap app, the loading gets stuck.

According to what I see in the command line, it is looking for the column Sequence, but since that column is not part of this file, it cannot process the file.

I created a new file with an added Sequence column containing the peptide sequences, and the loading then works.
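The workaround described above can be sketched as a small preprocessing step. This is illustrative only: it assumes FragPipe's peptide column is named "Peptide" and that any bracketed modification masses should be stripped; neither assumption comes from the alphamap code.

```python
import re
import pandas as pd

# Hypothetical FragPipe-style table; real files would be read with
# pd.read_csv(path, sep="\t").
df = pd.DataFrame({"Peptide": ["PEPTIDEK", "SEQ[147.0354]UENCER"]})

# Derive the plain "Sequence" column AlphaMap looks for by removing
# bracketed modification annotations (an assumption about the format).
df["Sequence"] = df["Peptide"].str.replace(r"\[[^\]]*\]", "", regex=True)

print(df["Sequence"].tolist())  # ['PEPTIDEK', 'SEQUENCER']
```

The patched table can then be written back with `df.to_csv(path, sep="\t", index=False)` and uploaded.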

  • Installation Type: One-Click Installer
  • OS: Win 11
  • Version: 0.1.10

The command line shows this error:

Using frozen version. Setting SSL context to unverified.
******************************
****** AlphaMap 0.1.10 *******
******************************
Launching server at http://localhost:63564
WARNING:tornado.access:404 GET /favicon.ico (::1) 1.00ms
pandas\io\parsers\c_parser_wrapper.py:404: DtypeWarning: Columns (2) have mixed types. Specify dtype option on import or set low_memory=False.
  warnings.warn(warning_message, DtypeWarning, stacklevel=find_stack_level())
ERROR:tornado.application:Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOLoop object at 0x0000020261B2F910>>, <Task finished name='Task-286' coro=<_needs_document_lock.<locals>._needs_document_lock_wrapper() done, defined at bokeh\server\session.py:51> exception=KeyError(229.1629)>)
Traceback (most recent call last):
  File "alphamap\importing.py", line 602, in import_fragpipe_data
    data_sub = pd.read_csv(file, sep=sep, low_memory=False, usecols=combined_fragpipe_columns)
  File "pandas\util\_decorators.py", line 311, in wrapper
    return func(*args, **kwargs)
  File "pandas\io\parsers\readers.py", line 680, in read_csv
    return _read(filepath_or_buffer, kwds)
  File "pandas\io\parsers\readers.py", line 575, in _read
    parser = TextFileReader(filepath_or_buffer, **kwds)
  File "pandas\io\parsers\readers.py", line 933, in __init__
    self._engine = self._make_engine(f, self.engine)
  File "pandas\io\parsers\readers.py", line 1231, in _make_engine
    return mapping[engine](f, **self.options)
  File "pandas\io\parsers\c_parser_wrapper.py", line 131, in __init__
    self._validate_usecols_names(usecols, self.orig_names)
  File "pandas\io\parsers\base_parser.py", line 913, in _validate_usecols_names
    raise ValueError(
ValueError: Usecols do not match columns, columns expected but not found: ['Sequence']

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "tornado\ioloop.py", line 741, in _run_callback
    ret = callback()
  File "tornado\ioloop.py", line 765, in _discard_future_result
    future.result()
  File "bokeh\server\session.py", line 67, in _needs_document_lock_wrapper
    result = func(self, *args, **kwargs)
  File "bokeh\server\session.py", line 195, in with_document_locked
    return func(*args, **kwargs)
  File "bokeh\document\document.py", line 1183, in wrapper
    return doc._with_self_as_curdoc(invoke)
  File "bokeh\document\document.py", line 1169, in _with_self_as_curdoc
    return f()
  File "bokeh\document\document.py", line 1182, in invoke
    return f(*args, **kwargs)
  File "bokeh\document\document.py", line 972, in remove_then_invoke
    return callback(*args, **kwargs)
  File "panel\reactive.py", line 204, in _change_event
    self._process_events(events)
  File "panel\reactive.py", line 187, in _process_events
    self.param.set_param(**self._process_property_change(events))
  File "param\parameterized.py", line 1451, in set_param
    self_._batch_call_watchers()
  File "param\parameterized.py", line 1578, in _batch_call_watchers
    watcher.fn(*events)
  File "panel\param.py", line 739, in _replace_pane
    new_object = self.eval(self.object)
  File "panel\param.py", line 663, in eval
    return function(*args, **kwargs)
  File "param\parameterized.py", line 337, in _depends
    return func(*args,**kw)
  File "alphamap\gui.py", line 1453, in upload_data
    upload_experimental_data()
  File "alphamap\gui.py", line 1039, in upload_experimental_data
    df = import_data(
  File "alphamap\importing.py", line 692, in import_data
    data = import_fragpipe_data(input_info, sample=sample)
  File "alphamap\importing.py", line 613, in import_fragpipe_data
    modif_seq = data_sub.apply(lambda row: convert_fragpipe_mq_mod(row["Peptide"], row["Assigned Modifications"]), axis=1)
  File "pandas\core\frame.py", line 8827, in apply
    return op.apply().__finalize__(self, method="apply")
  File "pandas\core\apply.py", line 727, in apply
    return self.apply_standard()
  File "pandas\core\apply.py", line 851, in apply_standard
    results, res_index = self.apply_series_generator()
  File "pandas\core\apply.py", line 867, in apply_series_generator
    results[i] = self.f(v)
  File "alphamap\importing.py", line 613, in <lambda>
    modif_seq = data_sub.apply(lambda row: convert_fragpipe_mq_mod(row["Peptide"], row["Assigned Modifications"]), axis=1)
  File "alphamap\importing.py", line 545, in convert_fragpipe_mq_mod
    modifs_posit[posit] = modif_convers_dict[mod_mass].format(add_aa)
KeyError: 229.1629

Do I need to use Uniprot?

Hi,

I am not sure if I have completely understood how to use AlphaMap correctly: being an Arabidopsis guy, I mainly work with databases from Araport or TAIR, with some individual changes and additions, but not UniProt. Does my evidence.txt need to contain the respective UniProt identifiers for AlphaMap to work at all? Or could I also use databases I have edited? I am mainly interested in comparing sequence coverage between isoforms etc.

At the moment I get the following errors when I try to upload my evidence.txt from my MaxQuant Run (2.0.3.0):

"The columns necessary for further analysis cannot be extracted from the first experimental file. Please check the data uploading instructions for a particular software tool."

The console says:

param\parameterized.py:337: DtypeWarning: Columns (2) have mixed types. Specify dtype option on import or set low_memory=False.

So am I using the tool incorrectly, or is there a fix for my problem?

Many thanks,
Nils

Uninformative separator error

In file: alphamap/importing.py
Error: local variable 'sep' referenced before assignment
Function: import_data(file, sample, verbose, dashboard)

I tried to read a file without a file extension (easily fixed on my end).
I'd still suggest either falling back to a default separator when the file type doesn't match any of the defined ones, or raising an informative message that the file type isn't supported.
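The suggested fix could look something like the sketch below. The extension-to-separator mapping and function name are illustrative, not taken from alphamap/importing.py:

```python
import os

# Hypothetical mapping of known extensions to separators.
SEPARATORS = {".csv": ",", ".tsv": "\t", ".txt": "\t"}

def guess_separator(path: str) -> str:
    """Return the separator for a known extension, or raise a clear
    error instead of leaving `sep` unassigned."""
    ext = os.path.splitext(path)[1].lower()
    try:
        return SEPARATORS[ext]
    except KeyError:
        raise ValueError(
            f"Unsupported file extension {ext!r} for {path!r}; "
            f"expected one of {sorted(SEPARATORS)}"
        ) from None

print(guess_separator("report.tsv"))  # prints a tab character
```

An unsupported path then fails with an actionable ValueError rather than an UnboundLocalError on `sep`.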

Peptide sequence is shown without modifications in the hover tool

Issue:

When the same peptide was identified in several runs but modified in only one of them (e.g. oxidized), the hover tool shows the peptide sequence without modifications, although the PTM information is still there.

Look at the attached screenshot:
issue_alphamap

FragPipe v22.0 Output File Throws KeyError

AlphaMap throws a KeyError when trying to upload a FragPipe output file. The KeyError is raised for modification masses, e.g. 57.0214 (an alkylation).

Desktop (please complete the following information):

  • Installation Type: One click installer
  • AlphaMap 0.1.10

Additional context
It seems like the columns do not match the FragPipe output peptide.tsv file?

Feature request: various q-value filters

First, thank you for a fantastic tool! Just tried it, works great.

I wonder, would it be possible to add options for applying various kinds of filters based on q-values?
Of course, the user can do the filtering before uploading the report, but it would be cool if all was automatic. With DIA-NN applying filters based on global precursor & protein q-values as well as run-specific PTM q-value & potentially site confidence score will usually be beneficial.

One simple way to implement it is to add a single textbox where the user can enter filters as text, one line per filter, e.g. Global.Q.Value < 0.01. This is very easy to parse: find '<' or '>' in the string; everything to the left is the column name, everything to the right is the value.
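The parsing rule proposed above can be sketched in a few lines. The function name is hypothetical and the column names are illustrative DIA-NN-style names:

```python
import pandas as pd

def apply_filters(df: pd.DataFrame, text: str) -> pd.DataFrame:
    """Apply one 'Column < value' or 'Column > value' filter per line."""
    for line in filter(None, (ln.strip() for ln in text.splitlines())):
        op = "<" if "<" in line else ">"
        col, value = (part.strip() for part in line.split(op, 1))
        mask = df[col] < float(value) if op == "<" else df[col] > float(value)
        df = df[mask]
    return df

# Toy report with two precursors; only the first passes both filters.
df = pd.DataFrame({
    "Global.Q.Value": [0.001, 0.05],
    "PTM.Q.Value": [0.005, 0.002],
})
filtered = apply_filters(df, "Global.Q.Value < 0.01\nPTM.Q.Value < 0.01")
print(len(filtered))  # 1
```

A production version would want to validate column names and operators and report parse errors back in the GUI, but the core loop is this simple.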

Best,
Vadim

cannot import name 'Markup' from 'jinja2'

Hello,

I installed alphamap via pip in a separate conda environment.
When I try to run alphamap, I run into an import error from the Jinja2 package:

(alphamap) $ alphamap
Traceback (most recent call last):
  File "/home/momi/miniconda3/envs/alphamap/bin/alphamap", line 33, in <module>
    sys.exit(load_entry_point('alphamap', 'console_scripts', 'alphamap')())
  File "/home/momi/miniconda3/envs/alphamap/bin/alphamap", line 25, in importlib_load_entry_point
    return next(matches).load()
  File "/home/momi/miniconda3/envs/alphamap/lib/python3.8/importlib/metadata.py", line 77, in load
    module = import_module(match.group('module'))
  File "/home/momi/miniconda3/envs/alphamap/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 843, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/momi/tools/alphamap/alphamap/gui.py", line 15, in <module>
    import panel as pn
  File "/home/momi/miniconda3/envs/alphamap/lib/python3.8/site-packages/panel/__init__.py", line 3, in <module>
    from . import layout # noqa
  File "/home/momi/miniconda3/envs/alphamap/lib/python3.8/site-packages/panel/layout/__init__.py", line 1, in <module>
    from .accordion import Accordion # noqa
  File "/home/momi/miniconda3/envs/alphamap/lib/python3.8/site-packages/panel/layout/accordion.py", line 5, in <module>
    from .base import NamedListPanel
  File "/home/momi/miniconda3/envs/alphamap/lib/python3.8/site-packages/panel/layout/base.py", line 13, in <module>
    from ..io.model import hold
  File "/home/momi/miniconda3/envs/alphamap/lib/python3.8/site-packages/panel/io/__init__.py", line 8, in <module>
    from ..config import config
  File "/home/momi/miniconda3/envs/alphamap/lib/python3.8/site-packages/panel/config.py", line 20, in <module>
    from .io.notebook import load_notebook
  File "/home/momi/miniconda3/envs/alphamap/lib/python3.8/site-packages/panel/io/notebook.py", line 16, in <module>
    import bokeh.embed.notebook
  File "/home/momi/miniconda3/envs/alphamap/lib/python3.8/site-packages/bokeh/embed/__init__.py", line 23, in <module>
    from .server import server_document, server_session
  File "/home/momi/miniconda3/envs/alphamap/lib/python3.8/site-packages/bokeh/embed/server.py", line 25, in <module>
    from ..core.templates import AUTOLOAD_REQUEST_TAG, FILE
  File "/home/momi/miniconda3/envs/alphamap/lib/python3.8/site-packages/bokeh/core/templates.py", line 42, in <module>
    from jinja2 import Environment, FileSystemLoader, Markup
ImportError: cannot import name 'Markup' from 'jinja2' (/home/momi/miniconda3/envs/alphamap/lib/python3.8/site-packages/jinja2/__init__.py)

I read that Markup is no longer exported by Jinja2 (it was removed in Jinja2 3.1).
Could you advise on how to solve the issue?

Flask and Jinja2 versions:

Successfully installed Flask-2.3.3 Jinja2-3.1.2 Werkzeug-2.3.7 blinker-1.6.2 click-8.1.7 itsdangerous-2.1.2


Update Panel and Bokeh

Panel and Bokeh have been significantly improved since the versions this project is currently pinned to. Please upgrade; it will enable your users to use recent versions of Panel and Bokeh and will bring performance gains.


pygments.util.ClassNotFound: no lexer for alias None found

Hi developers,

I am using the latest Windows installer (https://github.com/MannLabs/alphamap/releases/tag/v0.0.210730-alpha). My computer had an older version installed, and I didn't uninstall it before installing the new one; not sure if that makes any difference.

Following is the whole error message

******************************
******* AlphaMap 0.0.8 *******
******************************
Launching server at http://localhost:51189
ERROR:tornado.application:Uncaught exception GET / (::1)
HTTPServerRequest(protocol='http', host='localhost:51189', method='GET', uri='/', version='HTTP/1.1', remote_ip='::1')
Traceback (most recent call last):
  File "markdown\extensions\codehilite.py", line 133, in hilite
    lexer = get_lexer_by_name(self.lang, **self.options)
  File "C:\Users\yufe\AppData\Local\Programs\AlphaMap\pygments\lexers\__init__.py", line 106, in get_lexer_by_name
    raise ClassNotFound('no lexer for alias %r found' % _alias)
pygments.util.ClassNotFound: no lexer for alias None found

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "tornado\web.py", line 1704, in _execute
    result = await result
  File "bokeh\server\views\doc_handler.py", line 52, in get
    session = await self.get_session()
  File "bokeh\server\views\session_handler.py", line 120, in get_session
    session = await self.application_context.create_session_if_needed(session_id, self.request, token)
  File "bokeh\server\contexts.py", line 218, in create_session_if_needed
    self._application.initialize_document(doc)
  File "bokeh\application\application.py", line 171, in initialize_document
    h.modify_document(doc)
  File "bokeh\application\handlers\function.py", line 132, in modify_document
    self._func(doc)
  File "panel\io\server.py", line 91, in _eval_panel
    doc = as_panel(panel)._modify_doc(server_id, title, doc, location)
  File "panel\viewable.py", line 230, in _modify_doc
    return self.server_doc(doc, title, location)
  File "panel\viewable.py", line 749, in server_doc
    model = self.get_root(doc)
  File "panel\viewable.py", line 482, in get_root
    root = self._get_model(doc, comm=comm)
  File "panel\layout\base.py", line 112, in _get_model
    objects = self._get_objects(model, [], doc, root, comm)
  File "panel\layout\base.py", line 102, in _get_objects
    child = pane._get_model(doc, root, model, comm)
  File "panel\layout\base.py", line 112, in _get_model
    objects = self._get_objects(model, [], doc, root, comm)
  File "panel\layout\base.py", line 102, in _get_objects
    child = pane._get_model(doc, root, model, comm)
  File "panel\layout\base.py", line 112, in _get_model
    objects = self._get_objects(model, [], doc, root, comm)
  File "panel\layout\base.py", line 102, in _get_objects
    child = pane._get_model(doc, root, model, comm)
  File "panel\layout\base.py", line 112, in _get_model
    objects = self._get_objects(model, [], doc, root, comm)
  File "panel\layout\base.py", line 102, in _get_objects
    child = pane._get_model(doc, root, model, comm)
  File "panel\pane\markup.py", line 43, in _get_model
    model = self._bokeh_model(**self._get_properties())
  File "panel\pane\markup.py", line 295, in _get_properties
    html = markdown.markdown(data, extensions=self.extensions,
  File "markdown\core.py", line 387, in markdown
    return md.convert(text)
  File "markdown\core.py", line 268, in convert
    newRoot = treeprocessor.run(root)
  File "markdown\extensions\codehilite.py", line 246, in run
    placeholder = self.md.htmlStash.store(code.hilite())
  File "markdown\extensions\codehilite.py", line 137, in hilite
    lexer = guess_lexer(self.src, **self.options)
  File "C:\Users\yufe\AppData\Local\Programs\AlphaMap\pygments\lexers\__init__.py", line 311, in guess_lexer
    for lexer in _iter_lexerclasses():
  File "C:\Users\yufe\AppData\Local\Programs\AlphaMap\pygments\lexers\__init__.py", line 237, in _iter_lexerclasses
    yield from find_plugin_lexers()
  File "C:\Users\yufe\AppData\Local\Programs\AlphaMap\pygments\plugin.py", line 54, in find_plugin_lexers
    yield entrypoint.load()
  File "pkg_resources\__init__.py", line 2450, in load
  File "pkg_resources\__init__.py", line 2456, in resolve
  File "C:\Users\yufe\AppData\Local\Programs\AlphaMap\IPython\__init__.py", line 56, in <module>
    from .terminal.embed import embed
  File "C:\Users\yufe\AppData\Local\Programs\AlphaMap\IPython\terminal\embed.py", line 14, in <module>
    from IPython.core.magic import Magics, magics_class, line_magic
  File "C:\Users\yufe\AppData\Local\Programs\AlphaMap\IPython\core\magic.py", line 23, in <module>
    from decorator import decorator
ModuleNotFoundError: No module named 'decorator'
ERROR:tornado.access:500 GET / (::1) 986.46ms
WARNING:tornado.access:404 GET /favicon.ico (::1) 1.00ms

Thanks,

Fengchao

Softcode colors for experimental traces

Using the library outside the app, it would be nice to be able to define the trace colors yourself and, importantly, to add more than 5 traces. Adding more traces currently gives an index out-of-bounds error on the color list.
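One way to support both requests at once is to accept a user-supplied palette and cycle through it instead of indexing a fixed five-colour list. The function name, default palette, and signature below are all hypothetical, not alphamap's actual API:

```python
from itertools import cycle

# Illustrative default palette (matplotlib's first five tab10 colours).
DEFAULT_COLORS = ["#1f77b4", "#ff7f0e", "#2ca02c", "#d62728", "#9467bd"]

def assign_trace_colors(n_traces: int, palette=None) -> list:
    """Assign one colour per trace, wrapping around the palette instead
    of raising IndexError when n_traces exceeds the palette length."""
    colors = cycle(palette or DEFAULT_COLORS)
    return [next(colors) for _ in range(n_traces)]

print(assign_trace_colors(7)[5])  # "#1f77b4" (wraps around)
```

Exposing `palette` as a keyword argument on the plotting function would cover the custom-colour request with the same change.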

Unpin Python requirements

Overview

https://github.com/MannLabs/alphamap/blob/master/requirements.txt pins a handful of Python package dependencies, e.g. numpy==1.19.2. I cannot tell from the commit history whether this is due to specific requirements of alphamap.

Pinning packages makes it difficult to combine alphamap with other packages that also depend on e.g. numpy. Instead, it would be helpful to specify dependencies either openly (numpy>=1.19.2) or limited to the next major version (numpy>=1.19.2,<2), assuming dependency maintainers adhere to semantic versioning and don't introduce breaking changes between major versions.

Dependencies that need more specific versions could be pinned with a commit that references the issue describing the reasoning for pinning to exact versions. Then it's easier to go back later and check if the pinning is still necessary.
