
matchms's Issues

Conflicting metadata fields

Recently I have been handling MS data from mgf as well as json files (both coming from GNPS). Unfortunately, the two formats come with conflicting metadata fields.
To a large extent I think it will be fine to just parse the fields accordingly when loading the data (PR matchms/matchms-backup#207).
But this doesn't work for the precursor mass:

  • For the mgf files we use pyteomics which gives us a pepmass field with two values: [mz, intensity].
  • The json files come with a precursor_mz field.
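
A minimal sketch of how the two could be harmonized into one precursor_mz field during loading (field names taken from the list above; the function name is hypothetical):

def harmonize_precursor_mz(metadata):
    """Sketch: derive a single 'precursor_mz' entry from either input format."""
    if "precursor_mz" in metadata:           # json style
        return metadata
    pepmass = metadata.get("pepmass")        # mgf style: [mz, intensity]
    if pepmass is not None:
        metadata["precursor_mz"] = pepmass[0]
    return metadata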

Add import and export for json files

I tried using save_as_mgf on the actual data.
The good part: it seems to work even for large lists of spectrums.
The bad part: it is pretty slow. Writing the entire dataset (147,000 spectra) to an mgf file of about 1GB took >20 minutes. I slightly changed save_as_mgf (#152) but that didn't improve things much, so it is most likely due to the way pyteomics writes to an mgf file.

For comparison I ran a quickly written json export function, which was at least 5 times faster:

import json

def save_as_json(spectrums, filename):
    """Save spectrum(s) as json file.

    Args:
    ----
    spectrums: Spectrum() object or list of Spectrum() objects
        Expected input are matchms.Spectrum.Spectrum() objects.
    filename: str
        Provide filename to save spectrum(s).
    """
    if not isinstance(spectrums, list):
        # Assume that input was single Spectrum
        spectrums = [spectrums]

    # Convert matchms.Spectrum() into dictionaries
    spectrum_dicts = []
    for spectrum in spectrums:
        spec = spectrum.clone()
        spectrum_dict = {"intensities": spec.peaks.intensities.tolist(),
                         "mz": spec.peaks.mz.tolist(),
                         "metadata": spec.metadata}
        spectrum_dicts.append(spectrum_dict)

    # Write to json file
    with open(filename, 'w') as fout:
        json.dump(spectrum_dicts, fout)
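
For completeness, a matching import function could look roughly like this (a sketch; it assumes the Spectrum constructor shown elsewhere in these issues, Spectrum(mz=..., intensities=..., metadata=...)):

import json

import numpy as np

from matchms import Spectrum


def load_from_json(filename):
    """Sketch: read spectrums back from a json file written by save_as_json."""
    with open(filename) as fin:
        spectrum_dicts = json.load(fin)
    return [Spectrum(mz=np.array(d["mz"]),
                     intensities=np.array(d["intensities"]),
                     metadata=d["metadata"])
            for d in spectrum_dicts]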

Question: Does it make sense to add such a function?
We still need the save_as_mgf function because that is how the data can be shared within the community. But to create save points, store intermediate data during the processing pipeline, etc., one could switch to a different (better?) format.

By the way:
json was just what first crossed my mind. It could of course be any other suitable file format!

Unclear why isort test is failing

In PR #45 half of the tests fail because isort complains: "Imports are incorrectly sorted.".
Even after running isort on one of the named files (add_fingerprint()), the error didn't disappear.

I am now not sure whether something is going wrong with the isort test itself, or whether imports must be in one very particular order and format (in which case that order should be specified to avoid endless trial and error).

splitting off spec2vec

We should consider splitting off matchms/similarity/spec2vec/* into its own repo. If we choose to do so, we may want to use the same method I used for splitting off notebooks and old-iomega-spec2vec.

Also, we should decide on which repository (if any) should be a fork of iomega/spec2vec.

Missing similarity measure based on molecular fingerprints.

In the spec2vec project we compare different spectral similarities to the similarity between the respective molecular fingerprints (as a kind of "ground truth").
Those fingerprints are calculated from the actual structure of a molecule, so they need SMILES (or InChI) as input.

There are many different flavors of molecular fingerprints. We implemented some key types using rdkit in the old spec2vec code. I think that should be refactored and added to matchms.
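
As a starting point, a minimal sketch of one such flavor using rdkit (Morgan fingerprints compared via Tanimoto similarity; the function name and defaults are only illustrative):

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem


def fingerprint_similarity(smiles_1, smiles_2, n_bits=2048):
    """Sketch: Tanimoto similarity between Morgan fingerprints of two SMILES."""
    mol_1 = Chem.MolFromSmiles(smiles_1)
    mol_2 = Chem.MolFromSmiles(smiles_2)
    if mol_1 is None or mol_2 is None:
        return None  # SMILES could not be parsed
    fp_1 = AllChem.GetMorganFingerprintAsBitVect(mol_1, 2, nBits=n_bits)
    fp_2 = AllChem.GetMorganFingerprintAsBitVect(mol_2, 2, nBits=n_bits)
    return DataStructs.TanimotoSimilarity(fp_1, fp_2)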

About parallel execution

The current implementation of calc_scores.calculate is deliberately naive while we are sorting out the refactored skeleton of matchms:

def calculate(self):
    for i_ref, reference_spectrum in enumerate(self.reference_spectrums):
        for i_simfun, simfun in enumerate(self.similarity_functions):
            self.scores[i_ref][i_simfun] = simfun(self.measured_spectrum, reference_spectrum)
    return self

However, once we sort that out, we should start thinking about increasing performance through parallel execution of parts of the code.
There are various ways that can help us, such as:

  • vectorized evaluation
  • multi-threading
  • multi-processing
  • distributed computing

There are multiple tools available that can help with each of these.

However, I'd like to emphasize that we should first do a performance analysis (make your suggestions for tools and procedures in the comments below).

Regardless of the type of parallelization, we'll likely need some kind of chunking ability. We could opt to add a property mask to the Scores object, exactly equal in size to Scores.scores but of type Boolean. mask could thus be used to slice the larger Scores.scores matrix into chunks, which can be sent off to different processes or machines. I think multithreading might not need this, as the memory is already shared (needs confirmation).

If we do implement a mask and parallelization, it's probably convenient to also implement Scores.__add__(self, other), which could then be used to recombine partially evaluated instances of Scores according to their masks (with checks on equality of Scores.reference_spectrums and Scores.measured_spectrums).
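
A rough sketch of that recombination step under the above assumptions (plain numpy arrays standing in for the proposed scores/mask properties; none of this is existing matchms API):

import numpy


def combine_chunks(scores_a, mask_a, scores_b, mask_b):
    """Sketch: merge two partially evaluated score matrices via their masks."""
    assert scores_a.shape == scores_b.shape, "chunks must share the same layout"
    assert not numpy.any(mask_a & mask_b), "chunks must not overlap"
    combined = numpy.zeros_like(scores_a)
    combined[mask_a] = scores_a[mask_a]  # copy only the entries this chunk evaluated
    combined[mask_b] = scores_b[mask_b]
    return combined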

Cosine Greedy and Cosine Hungarian duplicate code

Both define and implement the functions get_peaks_arrays() and get_matching_pairs(), and share the same line of code to normalize the score at the end of the calculation.
To solve this we could:

  • Create an abstract base class that already implements these three blocks of code (sketched below the list).
  • Put the shared code blocks in some util file so they can be imported by the two similarity modules.
  • … (other suggestions?)
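
A skeleton of the base-class option might look like this (a sketch only; the method split and all names are assumptions, not the current matchms code):

from abc import ABC, abstractmethod


class BaseCosine(ABC):
    """Sketch: shared skeleton; the shared helpers would live in this class."""

    def __call__(self, spectrum, reference_spectrum):
        spec1, spec2 = self._get_peaks_arrays(spectrum, reference_spectrum)
        matching_pairs = self._get_matching_pairs(spec1, spec2)
        score, n_matches = self._score_matching_pairs(matching_pairs)
        return self._normalize(score, spec1, spec2), n_matches

    @abstractmethod
    def _score_matching_pairs(self, matching_pairs):
        """Greedy vs. Hungarian pair selection: the only part that differs."""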

Actual implementation on GNPS

Implement matchms on GNPS as an actual workflow. GNPS is in Conda and has an API.

  • Check what we need to do for GNPS implementation.

add isort based correcting of imports

Refs matchms/matchms-backup#189

Correction should be such that it passes

run: isort --check-only --diff --conda-env matchms --recursive --wrap-length 79 --lines-after-imports 2 --force-single-line --no-lines-before FUTURE --no-lines-before STDLIB --no-lines-before THIRDPARTY --no-lines-before FIRSTPARTY --no-lines-before LOCALFOLDER matchms/ tests/ integration-tests/

without problems. This may simply be a matter of adding an alias to setup.cfg with the line above in it, minus the --check-only and --diff parts.
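
Concretely, the rewrite command would then be the line above without the check-only flags, i.e. something like:

isort --conda-env matchms --recursive --wrap-length 79 --lines-after-imports 2 --force-single-line --no-lines-before FUTURE --no-lines-before STDLIB --no-lines-before THIRDPARTY --no-lines-before FIRSTPARTY --no-lines-before LOCALFOLDER matchms/ tests/ integration-tests/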

KNIME workflow

For users who don't like Python, we could create KNIME nodes for matchms which can be used in a KNIME workflow.

The workflow could be based on the integration tests.
A first prototype could use Python nodes with code snippets written by us.
Later we could write proper nodes with the archetype.

CosineGreedy has incorrect method signature

The signature of CosineGreedy.__call__ is

def __call__(self, spectrum: SpectrumType, reference_spectrum: SpectrumType) -> float

But it should be (returning the score together with the number of matched peak pairs):

def __call__(self, spectrum: SpectrumType, reference_spectrum: SpectrumType) -> Tuple[float, int]

Missing modified score

So far we have implemented the cosine score. The other main score we work with in the iomega project is the "modified cosine score", which in addition takes into account mass shifts of peaks due to differing precursor-m/z values.
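
To illustrate the core difference, here is a naive sketch of the peak-matching rule behind a modified cosine (function name, tolerance handling, and the brute-force loops are all just for illustration):

def matching_pairs_modified(mz1, mz2, precursor_mz1, precursor_mz2, tolerance=0.1):
    """Sketch: peaks match directly OR after shifting by the precursor-m/z difference."""
    mass_shift = precursor_mz1 - precursor_mz2
    pairs = []
    for i, m1 in enumerate(mz1):
        for j, m2 in enumerate(mz2):
            if abs(m1 - m2) <= tolerance or abs(m1 - m2 - mass_shift) <= tolerance:
                pairs.append((i, j))
    return pairs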

Nice to have: metadata import/export functions

Within the iOMEGA project I had to do the following a few times:

  • Overwrite and/or complete spectrum metadata based on a metadata collection I got (as a json file).
  • Export my cleaned-up metadata as json file.

It would hence be nice to have functions like import_metadata() or import_metadata_json().
For instance, something like this:

from matchms.importing import import_metadata

import_metadata(spectrums, metadata_json, 
                match_field_original='spectrumid', 
                match_field_json='gnpsid',
                replace_fields=['inchi', 'smiles', 'inchikey'])

And for exporting it could be something like this:

from matchms.exporting import export_metadata

export_metadata(spectrums, metadata_filename,
                fields_to_export=['name', 'spectrumid', 'inchi', 'smiles', 'inchikey'])
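
A minimal sketch of what import_metadata() could do, following the proposed signature (it assumes the metadata json contains a list of dictionaries, and uses the Spectrum get/set metadata accessors):

import json


def import_metadata(spectrums, metadata_json, match_field_original,
                    match_field_json, replace_fields):
    """Sketch: overwrite selected metadata fields from a json collection."""
    with open(metadata_json) as fin:
        entries = json.load(fin)  # assumed: a list of metadata dictionaries
    lookup = {entry[match_field_json]: entry for entry in entries}
    for spectrum in spectrums:
        entry = lookup.get(spectrum.get(match_field_original))
        if entry is None:
            continue  # no matching entry in the json collection
        for field in replace_fields:
            if field in entry:
                spectrum.set(field, entry[field])
    return spectrums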

Move lookup related functionality to its own directory

Any function that mentions:

  • library
  • inchikey
  • SMILES
  • pubchem
  • fingerprint
  • rdkit
  • openbabel

is likely part of looking up what is known about a compound, so that a match with a measured spectrum can be interpreted. This is completely separate from the similarity calculation and (optionally) happens after it. We should therefore move it to a new directory, /lookup, to be created in the root of the module.

Intensities of losses are in wrong order

The add_losses filter creates losses by subtracting peak m/z values from the precursor-m/z.
When using it on actual data I ran into two issues:

  • loss intensities are in the wrong order (inverse to the loss m/z)
  • people using losses restrict them to a much smaller range than m/z values, so the function should get from_mz and to_mz parameters (see the sketch after this list). The default should be from_mz=0, because some spectra have peaks with m/z larger than the precursor-m/z, which would otherwise yield negative losses.
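
A minimal sketch of the intended behavior, assuming mz and intensities are numpy arrays with mz in ascending order (parameter names as proposed above; the to_mz default is arbitrary):

def compute_losses(mz, intensities, precursor_mz, from_mz=0.0, to_mz=1000.0):
    """Sketch: losses in ascending loss-m/z order, intensities reordered to match."""
    losses_mz = (precursor_mz - mz)[::-1]    # mz ascends, so reverse to ascend again
    losses_intensities = intensities[::-1]   # reorder intensities the same way
    keep = (losses_mz >= from_mz) & (losses_mz <= to_mz)
    return losses_mz[keep], losses_intensities[keep]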

normalize_intensities filter should also normalize losses

Currently the filter only normalizes the peak intensities, which gives very different results for running

spectrums = [add_losses(s) for s in spectrums]
spectrums = [normalize_intensities(s) for s in spectrums]

than for

spectrums = [normalize_intensities(s) for s in spectrums]
spectrums = [add_losses(s) for s in spectrums]

Add filter to reduce number of peaks

For the similarity calculation it is crucial to remove excessive amounts of low-intensity peaks from spectra. Raw spectra vary widely in their number of peaks (about 6,000 of the 147,000 spectra have more than 1,000 peaks; the maximum is even >70,000). Main reasons why this is a problem:

  • It drastically slows down similarity score calculations. While small intensity values have little impact on classical scores (cosine, modified cosine), they cause a severe slowdown of the calculations.
  • It will confuse (and slow down) Spec2Vec. Many of the low-intensity peaks are simply noise but will still appear as "words" in documents. Spec2Vec works much better when documents have comparable numbers of words and when most small peaks are removed.

In the past, this has been done in different ways.
One of them was to fit an exponential function to the peak intensity distribution. This is already partly implemented in spectrum.plot() but would need adjustments.
Even better, in my view, is to skip that for now and move to something simpler: removing low-intensity peaks until a desired maximum number of peaks is reached.
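
A minimal sketch of that simpler option (the function name and default are just for illustration; mz and intensities are assumed to be numpy arrays with mz ascending):

import numpy


def reduce_to_number_of_peaks(mz, intensities, n_max=1000):
    """Sketch: keep only the n_max most intense peaks, in ascending m/z order."""
    if len(mz) <= n_max:
        return mz, intensities
    top = numpy.argsort(intensities)[-n_max:]  # indices of the n_max largest peaks
    top = numpy.sort(top)                      # restore ascending m/z order
    return mz[top], intensities[top]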

spikes[0] returns unexpected result

I tried to use spikes[i], but got an unexpected result.

import numpy as np
from matchms import Spectrum

spectrum = Spectrum(mz=np.array([100, 150, 200.]),
                    intensities=np.array([0.7, 0.2, 0.1]),
                    metadata={'id': 'spectrum1'})

spectrum.peaks[0]

I expected

(100.0, 0.7)

But I got

[100. 150. 200.]

Current CosineGreedy doesn't scale with large spectra

Actual spectra can have widely varying numbers of peaks (some with many 1,000s or even 10,000s).

The current CosineGreedy implementation is based on numpy vector operations. Unfortunately, that doesn't scale well to high numbers of peaks. When calculating the cosine similarity between spectrum1 and spectrum2 (with n1 and n2 peaks), the current CosineGreedy has to build and operate on matrices of size n1 x n2.
For spectra with 1,000s of peaks this explodes in terms of compute (and memory).

We should hence shift back to the former numba-based implementation, which was added in PR matchms/matchms-backup#239.

Here is a quick comparison:

import time

import numpy

from matchms import Spectrum, Scores
from matchms.similarity import CosineGreedy, CosineGreedyNumba


n_peaks = 2000
n_spectrums = 10

test_spectrum = Spectrum(mz=numpy.linspace(10, 1000, n_peaks),
                         intensities=numpy.ones(n_peaks),
                         metadata={})

# current numpy vector based implementation
cosine_greedy = CosineGreedy(tolerance=0.01)

tstart = time.time()
similarity_matrix = Scores(n_spectrums*[test_spectrum], n_spectrums*[test_spectrum],
                           cosine_greedy).calculate().scores
tend = time.time()
print("execution time:", tend-tstart)


# numba based implementation
cosine_greedy = CosineGreedyNumba(tolerance=0.01)

tstart = time.time()
similarity_matrix = Scores(n_spectrums*[test_spectrum], n_spectrums*[test_spectrum],
                           cosine_greedy).calculate().scores
tend = time.time()
print("execution time:", tend-tstart)

For the above code I got:
execution time: 22.43099617958069
execution time: 0.5635242462158203

Test and compare different cosine score implementations (profiling)

When it comes to finding bottlenecks regarding memory and compute, I strongly expect the similarity score calculations to be the most important issue.

The current implementation of the cosine score calculation, CosineGreedy, already makes use of vectorized numpy calculations. But at least in a quick check I found that a former numba-based implementation still seems to be faster (a bit more than 2-fold?).
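
Beyond simple wall-clock timing, cProfile could give a per-function breakdown (a sketch; references, queries, and cosine_greedy stand for any test setup such as the one in the scaling issue above):

import cProfile
import pstats

# Profile one full score calculation and print the ten most expensive calls.
cProfile.run("Scores(references, queries, cosine_greedy).calculate()", "cosine.prof")
pstats.Stats("cosine.prof").sort_stats("cumulative").print_stats(10)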

Missing logger for word2vec model training

Depending on the number of epochs, the number of documents, and the size of the documents, model training can take a while. In my view, some type of logger is really needed to see that the process is working fine. For now I just used the following:

from gensim.models.callbacks import CallbackAny2Vec


class EpochLogger(CallbackAny2Vec):
    """Callback to log information about training progress.
    Used to keep track of gensim model training"""

    def __init__(self, num_of_epochs):
        self.epoch = 0
        self.num_of_epochs = num_of_epochs
        self.loss = 0

    def on_epoch_end(self, model):
        """Report training progress at the end of each epoch."""
        loss = model.get_latest_training_loss()
        print('\r',
              ' Epoch ' + str(self.epoch+1) + ' of ' + str(self.num_of_epochs) + '.',
              end="")
        print('Change in loss after epoch {}: {}'.format(self.epoch+1, loss - self.loss))
        self.epoch += 1
        self.loss = loss

Maybe we can add something along those lines to spec2vec so that people could simply import it:

from spec2vec import EpochLogger
...
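
Hypothetical usage with gensim could then look like this (parameter names follow gensim 4; compute_loss=True is needed for get_latest_training_loss() to report anything):

from gensim.models import Word2Vec

epochs = 10
model = Word2Vec(sentences=documents,  # 'documents' stands for the spec2vec corpus
                 compute_loss=True,
                 callbacks=[EpochLogger(epochs)],
                 epochs=epochs)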

Docstrings missing

In my code example (#41) I would like to link to the documentation of certain methods. Sadly the docs do not show these methods because they lack docstrings.

This concerns:

  • matchms/similarity/CosineGreedy.py:CosineGreedy
  • matchms/filtering/default_filters.py:default_filters (has a docstring, but it does not explain what the function does)
  • matchms/calculate_scores.py:calculate_scores

Fix and/or add metadata through pubchem lookup

My former pre-processing of the data included a step where I tried to "repair" incorrect InChIs or SMILES by finding a matching compound via PubChem.
This is not a very simple task, because one needs to define what a good-enough match is (it used to be a very lengthy function, find_pubchem_match, which was part of ms_functions.py in the old code).

While I consider this something nice to have, it should first be decided:

  • Should this be part of matchms at all?
  • Should this better be postponed?
    (I think that working on the spec2vec part and the library matching is more 'urgent')

Handle different compound name input types and fields

When handling data from GNPS json files alongside mgf files (see matchms/matchms-backup#207), a number of new issues arise.
One of them is that GNPS json files do not have a name field, but instead provide an already (poorly) cleaned compound_name and an adduct.

The current add_adduct function does not work on the new format. And later lookup operations rely on one specific field to retrieve the compound name (preferably in a cleaned version).

Add consistent logging

Currently several filters use print statements when they change metadata.
As Stefan mentioned (in PR #34, #34 (comment)), it would probably be cleaner to do this via logging.

  • replace the print statements by logging
  • allow logging to a file (if you run many filters over a lot of data, it would be nice to get a report of the changes made); see the sketch below
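
A minimal sketch of what this could look like (logger name and messages are illustrative; Python's standard logging module handles the file output):

import logging

logger = logging.getLogger("matchms")

# Inside a filter, instead of print(...):
# logger.info("Added adduct %s to metadata.", adduct)

# In user code, to collect a report of all changes in a file:
logging.basicConfig(filename="filter_report.log", level=logging.INFO)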
