
xfuse's Introduction

XFuse: Deep spatial data fusion


This repository contains code for the paper “Super-resolved spatial transcriptomics by deep data fusion”.

Nature Biotechnology: https://doi.org/10.1038/s41587-021-01075-3

BioRxiv preprint: https://doi.org/10.1101/2020.02.28.963413

Hardware requirements

XFuse can run on CPU-only hardware, but training new models will take exceedingly long. We recommend running XFuse on a GPU with at least 8 GB of VRAM.

Software requirements

XFuse has been tested on GNU/Linux but should run on all major operating systems. XFuse requires Python 3.8. All other dependencies are pulled in by pip during the installation.

Installing

To install XFuse to your home directory, run

pip install --user git+https://github.com/ludvb/xfuse@master

This step should only take a few minutes.
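To verify the installation, you can check that the package is importable and print the installed version. A minimal sketch using only the standard library (this is not part of XFuse itself):

import importlib.metadata

# Query the metadata of the installed "xfuse" distribution
print(importlib.metadata.version("xfuse"))

Note that the xfuse command itself is installed to your user script directory (typically ~/.local/bin on GNU/Linux), which needs to be on your PATH.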

Getting started

This section will guide you through how to start an analysis with XFuse using data on human breast cancer from [fn:1].

[fn:1]: https://doi.org/10.1126/science.aaf2403

Data

The data is available from https://www.spatialresearch.org. To download all of the required files for the analysis, run

# Image data
curl -Lo section1.jpg https://www.spatialresearch.org/wp-content/uploads/2016/07/HE_layer1_BC.jpg
curl -Lo section2.jpg https://www.spatialresearch.org/wp-content/uploads/2016/07/HE_layer2_BC.jpg
curl -Lo section3.jpg https://www.spatialresearch.org/wp-content/uploads/2016/07/HE_layer3_BC.jpg
curl -Lo section4.jpg https://www.spatialresearch.org/wp-content/uploads/2016/07/HE_layer4_BC.jpg

# Gene expression count data
curl -Lo section1.tsv https://www.spatialresearch.org/wp-content/uploads/2016/07/Layer1_BC_count_matrix-1.tsv
curl -Lo section2.tsv https://www.spatialresearch.org/wp-content/uploads/2016/07/Layer2_BC_count_matrix-1.tsv
curl -Lo section3.tsv https://www.spatialresearch.org/wp-content/uploads/2016/07/Layer3_BC_count_matrix-1.tsv
curl -Lo section4.tsv https://www.spatialresearch.org/wp-content/uploads/2016/07/Layer4_BC_count_matrix-1.tsv

# Alignment data
curl -Lo section1-alignment.txt https://www.spatialresearch.org/wp-content/uploads/2016/07/Layer1_BC_transformation.txt
curl -Lo section2-alignment.txt https://www.spatialresearch.org/wp-content/uploads/2016/07/Layer2_BC_transformation.txt
curl -Lo section3-alignment.txt https://www.spatialresearch.org/wp-content/uploads/2016/07/Layer3_BC_transformation.txt
curl -Lo section4-alignment.txt https://www.spatialresearch.org/wp-content/uploads/2016/07/Layer4_BC_transformation.txt
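
As an optional sanity check of the downloads, the count matrices can be inspected with pandas before converting them. A small sketch (the exact orientation, spots as rows and genes as columns or vice versa, can be confirmed from the printed output):

import pandas as pd

# Peek at one of the downloaded count matrices
counts = pd.read_csv("section1.tsv", sep="\t", index_col=0)
print(counts.shape)
print(counts.iloc[:5, :5])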

Preprocessing

XFuse uses a specialized data format to optimize loading speeds and allow for lazy data loading. XFuse has inbuilt support for converting data from 10X Space Ranger (xfuse convert visium) and the Spatial Transcriptomics Pipeline (xfuse convert st) to its own data format. If your data has been produced by another pipeline, it may need to be wrangled into a supported format before continuing. Feel free to open an issue on our issue tracker if you run into any problems or to request support for a new platform.

The data from the Data section was produced by the Spatial Transcriptomics Pipeline, so we can run the following commands to convert it to the right format:

xfuse convert st --counts section1.tsv --image section1.jpg --transformation-matrix section1-alignment.txt --scale 0.15 --save-path section1
xfuse convert st --counts section2.tsv --image section2.jpg --transformation-matrix section2-alignment.txt --scale 0.15 --save-path section2
xfuse convert st --counts section3.tsv --image section3.jpg --transformation-matrix section3-alignment.txt --scale 0.15 --save-path section3
xfuse convert st --counts section4.tsv --image section4.jpg --transformation-matrix section4-alignment.txt --scale 0.15 --save-path section4

It may be worthwhile to try out different values for the --scale argument, which rescales the image data to the given fraction of its original size. A higher scale increases the resolution of the model but requires considerably more compute power.
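
As a rough way to gauge what a given --scale implies, the sketch below (using Pillow, which is not a dependency of this tutorial) prints the approximate image dimensions that would result from a few candidate scale factors:

from PIL import Image

image = Image.open("section1.jpg")
width, height = image.size
for scale in (0.15, 0.3, 0.5):
    # The rescaled image is roughly scale times the original size in each dimension
    print(f"scale={scale}: ~{int(width * scale)} x {int(height * scale)} px")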

Verifying tissue masks

It is usually a good idea to verify that the computed tissue masks look good. This can be done using the script ./scripts/visualize_tissue_masks.py included in this repository:

curl -LO https://raw.githubusercontent.com/ludvb/xfuse/master/scripts/visualize_tissue_masks.py
python visualize_tissue_masks.py */data.h5

The script will show the tissue images with the detected backgrounds blacked out. If tissue detection fails, a custom mask can be passed to xfuse convert using the --mask-file argument (see xfuse convert visium --help for more information).

Configuring and starting the run

Settings for the run are specified in a configuration file. Paste the following into a file named my-config.toml:

[xfuse]
network_depth = 6
network_width = 16
min_counts = 50

[expansion_strategy]
type = "DropAndSplit"
[expansion_strategy.DropAndSplit]
max_metagenes = 50

[optimization]
batch_size = 3
epochs = 100000
learning_rate = 0.0003
patch_size = 768

[analyses]
[analyses.metagenes]
type = "metagenes"
[analyses.metagenes.options]
method = "pca"

[analyses.gene_maps]
type = "gene_maps"
[analyses.gene_maps.options]
gene_regex = ".*"

[slides]
[slides.section1]
data = "section1/data.h5"
[slides.section1.covariates]
section = 1

[slides.section2]
data = "section2/data.h5"
[slides.section2.covariates]
section = 2

[slides.section3]
data = "section3/data.h5"
[slides.section3.covariates]
section = 3

[slides.section4]
data = "section4/data.h5"
[slides.section4.covariates]
section = 4

Here is a non-exhaustive summary of the available configuration options:

  • xfuse.network_depth: The number of up- and downsampling steps in the fusion network. If you are running on large images (using a large value for the --scale argument in xfuse convert), you may need to increase this number.
  • xfuse.network_width: The number of channels in the image and expression decoders. You may need to increase this value if you are studying tissues with many different cell types.
  • xfuse.min_counts: The minimum number of reads for a gene to be included in the analysis.
  • expansion_strategy.DropAndSplit.max_metagenes: The maximum number of metagenes to create during inference. You may need to increase this value if you are studying tissues with many different cell types.
  • optimization.batch_size: The mini-batch size. This number should be kept as high as possible to keep gradients stable but can be reduced if you are running XFuse on a GPU with limited memory capacity.
  • optimization.epochs: The number of epochs to run. When set to a value below zero, XFuse will use a heuristic stopping criterion.
  • optimization.patch_size: The size of training patches. This number should preferably be a multiple of 2^xfuse.network_depth to avoid misalignments during up- and downsampling steps (see the check after this list).
  • slides: This section defines which slides to include in the experiment. Each slide is associated with a unique subsection. In each subsection, a data path and optional covariates to control for are specified. For example, in the configuration file above, we have given each slide a section condition with a distinct value to control for sample-wise batch effects. If our dataset contained samples from different patients, we could, for example, also include a patient condition to control for patient-wise effects.
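
As a small illustration of the patch-size rule above, the following sketch checks a candidate patch_size against network_depth and suggests the nearest valid value; the numbers mirror the configuration file in this section:

network_depth = 6
patch_size = 768

factor = 2 ** network_depth  # total down-/upsampling factor of the network
if patch_size % factor == 0:
    print(f"{patch_size} is a multiple of {factor}: OK")
else:
    suggestion = round(patch_size / factor) * factor
    print(f"{patch_size} is not a multiple of {factor}; consider {suggestion} instead")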

We are now ready to start the analysis!

xfuse run my-config.toml --save-path my-run

Tip: XFuse can generate a template for the configuration file by running

xfuse init my-config.toml section1/data.h5 section2/data.h5 section3/data.h5 section4/data.h5

Tracking the training progress

XFuse continually writes training data to a Tensorboard log file. To check how the optimization is progressing, start a Tensorboard web server and direct it to the --save-path of the run:

tensorboard --logdir my-run

Stopping and resuming a run

To stop the run before it has completed, press Ctrl+C. A snapshot of the model state will be saved to the --save-path. The snapshot can be restored by running

xfuse run my-config.toml --save-path my-run --session my-run/exception.session

Finishing the run

Training the model from scratch will take roughly three days on a normal desktop computer with an Nvidia GeForce 20 series graphics card. After training, XFuse runs the analyses specified in the configuration file. Results will be saved to a directory named analyses in the --save-path.
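
The exact layout under the analyses directory depends on the analysis names in the configuration file. As a hypothetical sketch (assuming gene maps were written as images named <GENE>_mean.jpg; adjust the paths and pattern to whatever your run produced), the results can be listed and displayed with matplotlib:

from pathlib import Path

import matplotlib.image as mpimg
import matplotlib.pyplot as plt

# Paths are assumptions; adjust them to the layout of your own run
paths = sorted(Path("my-run/analyses").rglob("*_mean.jpg"))
for path in paths[:5]:
    print(path)

if paths:
    plt.imshow(mpimg.imread(paths[0]))
    plt.axis("off")
    plt.show()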

xfuse's People

Contributors

angadps, dependabot[bot], ludvb


xfuse's Issues

ModuleNotFoundError: No module named 'xfuse'

I am trying to run xfuse on the supercomputer system; I'm not an expert but I do manage to run other programs.

When I run xfuse
xfuse convert st --counts section1.tsv --image section1.jpg --transformation-matrix section1-alignment.txt --scale 0.15 --save-path section1

I get the following:

Traceback (most recent call last):
  File "/mydir/xfuse/bin/xfuse", line 5, in <module>
    from xfuse.__main__ import cli
ModuleNotFoundError: No module named 'xfuse'

So the problem here is that it thinks __main__ is located in mydir/xfuse/bin/xfuse, while in reality it is located in mydir/xfuse/.
The export path is set to the bin directory, otherwise it doesn't find xfuse. But now it seems like the other files cannot be found.

What am I doing wrong here???

Kind regards,
Nicolaas

Memory issue

Hello,
Thanks for the useful approach. I am trying to run xfuse on 5 Visium slides with a 16 GB GPU, but I always get a memory error (CUDA out of memory). Can I adjust the parameters to reduce the memory required by xfuse? Or can I run the 5 slides separately? If I run them separately, will the results be inaccurate?

Which results to look at?

Hi there,
I have another question about the results. The output has 3 images for each gene in the gene map folder, for example,
CD3D_invcv+.jpg,
CD3D_mean.jpg,
CD3D_stdv.jpg
They all look very similar. Which one would you recommend using as the enhanced gene expression?

Thanks.

Annotation file

Hey Lil'L!

What does an example annotation file look like?

S

License?

Hi, thank you for sharing this repo!

Could you let me know what license is associated with this repo? The Nature Biotechnology paper lists the code as being made available under the MIT license but I don't see a license file within this repo.

Thanks!

Tissue masking failed

Hi, I can't get the tissue masking in xfuse convert visium to work for any of my samples because of
OpenCV(4.6.0) /io/opencv/modules/imgproc/src/grabcut.cpp:386: error: (-215:Assertion failed) !bgdSamples.empty() && !fgdSamples.empty() in function 'initGMMs'

I use the original spaceranger input image file. #10 and #33 didn't help much in this case. Any help would be much appreciated!

Input:

xfuse convert visium --bc-matrix visium_files_copy/${sample}/outs/filtered_feature_bc_matrix.h5 --image visium_files_copy/${sample}/${sample}.tif --tissue-positions visium_files_copy/${sample}/outs/spatial/tissue_positions_list.csv --scale-factors visium_files_copy/${sample}/outs/spatial/scalefactors_json.json --scale 0.3

Output:

[2022-09-30 17:15:06,384] INFO : Running xfuse version 0.2.1
[2022-09-30 17:16:06,703] INFO : Computing tissue mask:
[2022-09-30 17:16:06,713] WARNING : UserWarning (/home/sf/.local/lib/python3.8/site-packages/xfuse/utility/mask.py:74): Failed to mask tissue
OpenCV(4.6.0) /io/opencv/modules/imgproc/src/grabcut.cpp:386: error: (-215:Assertion failed) !bgdSamples.empty() && !fgdSamples.empty() in function 'initGMMs'
[2022-09-30 17:16:21,767] WARNING : UserWarning (/home/sf/.local/lib/python3.8/site-packages/xfuse/convert/utility.py:216): Count matrix contains duplicated columns. Counts will be summed by column name.
[2022-09-30 17:16:23,430] WARNING : FutureWarning (/home/sf/.local/lib/python3.8/site-packages/xfuse/convert/utility.py:226): Using the level keyword in DataFrame and Series aggregations is deprecated and will be removed in a future version. Use groupby instead. df.sum(level=1) should use df.groupby(level=1).sum().
[2022-09-30 17:16:45,524] INFO : Writing data to data.h5

[image attachment]

Trained model

Hello,

Thank you very much for providing this exciting tool! As training on the provided data generally takes a long time, are there already-trained models or super-resolved gene expressions for the data used here? It would be greatly helpful for us to quickly understand what the super-resolved data will look like. Thanks!

Best,
Hao

Is it possible for Gene_maps values to use the same scale instead of min-max per gene?

If I understand correctly, the gene maps are scaled to the minimum and maximum expression of each gene.

However, genes that are hardly expressed in the biological sample then look like they have a strong expression pattern, which is overblown (imho).

Would it be possible to have an output where the gene maps of all genes share the same scale? That way, genes that are hardly expressed would no longer appear to have a dramatic expression pattern. For example, in my tissue CD1E expression is really low, but it appears enormous in XFuse (with a distribution that doesn't make much sense; it's mostly noise that gets amplified...)

[gene map screenshot for CD1E]

Maybe I did something wrong and this shouldn't happen? In this case I would expect the entire tissue to be a blackish colour...
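
For reference, a possible post-processing sketch that plots several genes on one shared colour scale instead of per-gene min-max. This is not an XFuse option; it assumes the predicted mean gene maps have been exported as NumPy arrays (the file names below are hypothetical):

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical: one 2D array of predicted expression per gene
gene_maps = {name: np.load(f"{name}_mean.npy") for name in ("CD1E", "CD3D")}

# Shared colour scale across all genes instead of per-gene min-max
vmin = min(m.min() for m in gene_maps.values())
vmax = max(m.max() for m in gene_maps.values())

fig, axes = plt.subplots(1, len(gene_maps))
for ax, (name, gene_map) in zip(axes, gene_maps.items()):
    im = ax.imshow(gene_map, vmin=vmin, vmax=vmax)
    ax.set_title(name)
    ax.axis("off")
fig.colorbar(im, ax=list(axes))
plt.show()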

Visualization scheme for gene maps

Hello,

This is mostly a question to better understand the current visualizations and if and how they could be configured. Hope this hasn't been already discussed.

I've run xfuse on a couple of dozen samples so far and I'm trying to investigate about a dozen genes in each of the samples. Based on the raw Visium gene expression data some of the plots aren't making sense to me. For instance, one of the genes is expressed in < 10 spots in the Visium data, still I see bright spots throughout the tissue in the gene maps analysis. Maybe having a better understanding of the visuals might help.

  1. Is the data being normalized by default when visualizing the "xfused" gene expression? If so, the relative scaling can cause some regions to "light up" even though the absolute predicted expression values might be low; that would also explain why there is no scale/legend right now. However, if the visualization shows absolute values, having a scale would be useful.

  2. Is the visualization happening on the original scale or on a log scale? Some of the genes are both highly expressed throughout the tissue and have large variation in expression levels. Visualizing on a log scale can help capture the high expression throughout while still showing the differences.

Alternatively, are there configurable options for the above which I can use to display my plots?

Thanks,
--Angad.

Docker image

Hi, can you provide a Docker image for xfuse? Would be much appreciated.

In silico spatial transcriptomics

Hello,
Thank you very much for providing this exciting tool!
I am interested in the "In silico spatial transcriptomics" analysis, which predicts gene expression from the H&E images alone.
Now I have 10 slices; 5 of them have both H&E images and associated gene expression data, and the others have only H&E images. How can I use your tool to predict the gene expression of the remaining 5 slices?

Best wishes
Le

Error: ValueError: The parameter loc has invalid values

While running on the tests data I am facing this error,

[2022-12-13 04:01:49,551] 🚨 ERROR : ValueError: The parameter loc has invalid values
              Trace Shapes:      
               Param Sites:      
          !!metagene!1!!_mu    10
          !!metagene!1!!_sd    10
mu_rate_g_condition-section  1 10
sd_rate_g_condition-section  1 10
              Sample Sites:      
        !!metagene!1!! dist 10  |
                      value 10  |
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/pyro/poutine/trace_messenger.py", line 165, in __call__
    ret = self.fn(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/pyro/poutine/messenger.py", line 12, in _context_wrap
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/xfuse/model/xfuse.py", line 133, in guide
    _go(self.get_experiment(experiment), x)
  File "/usr/local/lib/python3.8/dist-packages/xfuse/model/xfuse.py", line 129, in _go
    for i, y in enumerate(experiment.guide(x)):
  File "/usr/local/lib/python3.8/dist-packages/xfuse/model/experiment/st/st.py", line 647, in guide
    _sample_condition(batch_idx, slide, covariate, condition)
  File "/usr/local/lib/python3.8/dist-packages/xfuse/model/experiment/st/st.py", line 616, in _sample_condition
    Normal(mu_rate_g_condition, 1e-8 + sd_rate_g_condition),
  File "/usr/local/lib/python3.8/dist-packages/pyro/distributions/distribution.py", line 18, in __call__
    return super().__call__(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributions/normal.py", line 50, in __init__
    super(Normal, self).__init__(batch_shape, validate_args=validate_args)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributions/distribution.py", line 53, in __init__
    raise ValueError("The parameter {} has invalid values".format(param))
ValueError: The parameter loc has invalid values

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/xfuse/run.py", line 173, in run
    train(epochs)
  File "/usr/local/lib/python3.8/dist-packages/xfuse/train.py", line 62, in train
    _epoch(epoch=epoch)
  File "/usr/local/lib/python3.8/dist-packages/pyro/poutine/runtime.py", line 263, in _fn
    apply_stack(msg)
  File "/usr/local/lib/python3.8/dist-packages/pyro/poutine/runtime.py", line 198, in apply_stack
    default_process_message(msg)
  File "/usr/local/lib/python3.8/dist-packages/pyro/poutine/runtime.py", line 159, in default_process_message
    msg["value"] = msg["fn"](*msg["args"], **msg["kwargs"])
  File "/usr/local/lib/python3.8/dist-packages/xfuse/train.py", line 47, in _epoch
    _step(x=to_device(x))
  File "/usr/local/lib/python3.8/dist-packages/pyro/poutine/runtime.py", line 263, in _fn
    apply_stack(msg)
  File "/usr/local/lib/python3.8/dist-packages/pyro/poutine/runtime.py", line 198, in apply_stack
    default_process_message(msg)
  File "/usr/local/lib/python3.8/dist-packages/pyro/poutine/runtime.py", line 159, in default_process_message
    msg["value"] = msg["fn"](*msg["args"], **msg["kwargs"])
  File "/usr/local/lib/python3.8/dist-packages/xfuse/train.py", line 35, in _step
    pyro.infer.SVI(model.model, model.guide, optim, loss).step(x)
  File "/usr/local/lib/python3.8/dist-packages/pyro/infer/svi.py", line 128, in step
    loss = self.loss_and_grads(self.model, self.guide, *args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/pyro/infer/trace_elbo.py", line 126, in loss_and_grads
    for model_trace, guide_trace in self._get_traces(model, guide, args, kwargs):
  File "/usr/local/lib/python3.8/dist-packages/pyro/infer/elbo.py", line 170, in _get_traces
    yield self._get_trace(model, guide, args, kwargs)
  File "/usr/local/lib/python3.8/dist-packages/pyro/infer/trace_elbo.py", line 52, in _get_trace
    model_trace, guide_trace = get_importance_trace(
  File "/usr/local/lib/python3.8/dist-packages/pyro/infer/enum.py", line 44, in get_importance_trace
    guide_trace = poutine.trace(guide, graph_type=graph_type).get_trace(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/pyro/poutine/trace_messenger.py", line 187, in get_trace
    self(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/pyro/poutine/trace_messenger.py", line 171, in __call__
    raise exc from e
  File "/usr/local/lib/python3.8/dist-packages/pyro/poutine/trace_messenger.py", line 165, in __call__
    ret = self.fn(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/pyro/poutine/messenger.py", line 12, in _context_wrap
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/xfuse/model/xfuse.py", line 133, in guide
    _go(self.get_experiment(experiment), x)
  File "/usr/local/lib/python3.8/dist-packages/xfuse/model/xfuse.py", line 129, in _go
    for i, y in enumerate(experiment.guide(x)):
  File "/usr/local/lib/python3.8/dist-packages/xfuse/model/experiment/st/st.py", line 647, in guide
    _sample_condition(batch_idx, slide, covariate, condition)
  File "/usr/local/lib/python3.8/dist-packages/xfuse/model/experiment/st/st.py", line 616, in _sample_condition
    Normal(mu_rate_g_condition, 1e-8 + sd_rate_g_condition),
  File "/usr/local/lib/python3.8/dist-packages/pyro/distributions/distribution.py", line 18, in __call__
    return super().__call__(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributions/normal.py", line 50, in __init__
    super(Normal, self).__init__(batch_shape, validate_args=validate_args)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributions/distribution.py", line 53, in __init__
    raise ValueError("The parameter {} has invalid values".format(param))
ValueError: The parameter loc has invalid values
              Trace Shapes:      
               Param Sites:      
          !!metagene!1!!_mu    10
          !!metagene!1!!_sd    10
mu_rate_g_condition-section  1 10
sd_rate_g_condition-section  1 10
              Sample Sites:      
        !!metagene!1!! dist 10  |
                      value 10  |

[2022-12-13 04:01:49,552] ℹ : Saving session to exception.session

The error is not consistent; that is, it sometimes goes away if I run it from scratch again. What could be the reason behind this error?

Changes I made:
in the xfuse/_config.py file, at line 155 -> ("epochs", Item(value=200)),
because a higher epoch value was giving me a memory error.

Runtime error related to tensor size when running "analysis-gene_maps"

Hi @ludvb,

Thank you for inventing the tool for upscaling ST data.

I came across this error after training the model when running analysis "analysis-gene_maps". ERROR : RuntimeError: The size of tensor a (30598626) must match the size of tensor b (23687921) at non-singleton dimension 0

[2023-03-27 17:04:21,118] ℹ : Running analysis "analysis-gene_maps"
[2023-03-27 17:06:43,048] ⚠ WARNING : FutureWarning (/Users/fengyuzhou/myenv/lib/python3.8/site-packages/xfuse/data/dataset.py:114): iteritems is deprecated and will be removed in a future version. Use .items instead.
[2023-03-27 17:08:03,224] ⚠ WARNING : FutureWarning (/Users/fengyuzhou/myenv/lib/python3.8/site-packages/xfuse/data/dataset.py:114): iteritems is deprecated and will be removed in a future version. Use .items instead.
[2023-03-27 17:08:26,770] 🚨 ERROR : RuntimeError: The size of tensor a (30598626) must match the size of tensor b (23687921) at non-singleton dimension 0              
Traceback (most recent call last):
  File "/Users/fengyuzhou/myenv/lib/python3.8/site-packages/xfuse/analyze/prediction.py", line 146, in _sample
    yield from _run_model(
  File "/Users/fengyuzhou/myenv/lib/python3.8/site-packages/xfuse/analyze/prediction.py", line 91, in _run_model
    sample = sample / torch.as_tensor(sizes).to(sample).unsqueeze(1)
RuntimeError: The size of tensor a (30598626) must match the size of tensor b (23687921) at non-singleton dimension 0

For the training part, since I am very new to deep learning models, I tried parameters with small values so that the model can be trained fast and I could get a sense of what the result looks like. I am not sure if the parameters used caused the problem. Here is my config file.

[xfuse]
# This section defines modeling options. It can usually be left as-is.
network_depth = 3
network_width = 8
gene_regex = "^(?!RPS|RPL|MT-).*" # Regex matching genes to include in the model. By default, exclude mitochondrial and ribosomal genes.
min_counts = 50 # Exclude all genes with fewer reads than this value.

[settings]
cache_data = true
data_workers = 8 # Number of worker processes for data loading. If set to zero, run data loading in main thread.

[expansion_strategy]
# This section contains configuration options for the metagene expansion strategy.
type = "DropAndSplit" # Available choices: Extra, DropAndSplit
purge_interval = 200 # Metagene purging interval (epochs)

[expansion_strategy.Extra]
num_metagenes = 4
anneal_to = 1
anneal_epochs = 1000

[expansion_strategy.DropAndSplit]
max_metagenes = 20

[optimization]
# This section defines options used during training. It may be necessary to decrease the batch or patch size if running out of memory during training.
batch = 1
batch_size = 4
epochs = 10
learning_rate = 0.0003
patch_size = 1000 # Size of training patches. Set to '-1' to use as large patches as possible.

[analyses]
# This section defines which analyses to run. Each analysis has its own subtable with configuration options. Remove the table to stop the analysis from being run.

[analyses.analysis-gene_maps]
# Constructs a map of imputed expression for each gene in the dataset.
type = "gene_maps"

[analyses.analysis-gene_maps.options]
gene_regex = ".*"
num_samples = 1
genes_per_batch = 10
predict_mean = true
normalize = false
mask_tissue = true
scale = 1.0
writer = "image"

[analyses.analysis-metagenes]
# Creates summary data of the metagenes
type = "metagenes"

[analyses.analysis-metagenes.options]
method = "pca"

[slides]
# This section defines the slides to use in the experiment. Covariates are specified in the "covariates" table. Slide-specific options can be specified in the "options" table.
[slides.section1]
data = "converted/data.h5"
[slides.section1.covariates]
section = 1

Could you please help with the issue? Thank you in advance!

Yuzhou

Is xfuse an executable program?

The way xfuse is installed suggests that xfuse is a Python library, but the way it is called later looks like xfuse is a runnable program. This is really confusing: when I run these commands after installing xfuse as instructed, it says "No command 'xfuse' found".
By the way, I installed it in python 3.8 environment under conda.

Use Xfuse in Windows

Hi there,
When I use Xfuse in Windows, I encounter the following error.

🚨 ERROR : RuntimeError: CUDA out of memory. Tried to allocate 108.00 MiB (GPU 0; 4.00 GiB total capacity; 1.75 GiB already allocated; 0 bytes free; 1.88 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

But when I check the available memory of the GPU, it shows that there is still 3GB of unused space. How can I solve this problem?
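
For what it is worth, the error message itself suggests experimenting with PYTORCH_CUDA_ALLOC_CONF. A possible (untested) way to set it when launching XFuse from Python rather than from the command line; the split size below is only an example value:

import os
import sys

# Must be set before the first CUDA allocation; the value is an example only
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

from xfuse.__main__ import cli

sys.argv = ["xfuse", "run", "my-config.toml", "--save-path", "my-run"]
sys.exit(cli())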

RuntimeError: Annotation layer "" is missing

Hello Prof, this is a fancy paper in ST. I have run the model successfully, but when the model finished training, an error popped up. I checked the original script and I do not know how to fix it. Please help! I want to reproduce this outcome.
[screenshot of the error]

xfuse not found after installing

Dear @ludvb ,

Thanks for sharing this great work.

After installing xfuse, I met the following error when converting the data to the right format:

[screenshot of the error]

I'm sure that xfuse has been installed.

[screenshot showing the installed package]

Could you please give me some suggestions about how to fix this problem?

Error running xfuse

When I run the xfuse run my-config.toml --save-path my-run line, I ran into an error:

[2022-10-07 21:59:44,127] ℹ : Running xfuse version 0.2.1
[2022-10-07 21:59:44,146] ℹ : Using the following design table:
[2022-10-07 21:59:44,146] ℹ :
[2022-10-07 21:59:44,147] ℹ : | | section {1,2,3,4} |
[2022-10-07 21:59:44,147] ℹ : |----------+---------------------|
[2022-10-07 21:59:44,147] ℹ : | section1 | 1 |
[2022-10-07 21:59:44,147] ℹ : | section2 | 2 |
[2022-10-07 21:59:44,147] ℹ : | section3 | 3 |
[2022-10-07 21:59:44,147] ℹ : | section4 | 4 |
[2022-10-07 21:59:44,147] ℹ :
[2022-10-07 21:59:44,149] 🚨 ERROR : AttributeError: module 'os' has no attribute 'sched_getaffinity'
Traceback (most recent call last):
  File "/Users/name/xfusestuff/xfusevenv/lib/python3.8/site-packages/xfuse/__main__.py", line 585, in run
    _run(
  File "/Users/name/xfusestuff/xfusevenv/lib/python3.8/site-packages/xfuse/run.py", line 64, in run
    if (available_cores := len(os.sched_getaffinity(0))) < num_data_workers:
AttributeError: module 'os' has no attribute 'sched_getaffinity'

I searched this problem on Stack Overflow but I didn't find a solution. Is there an alternative function to use in place of os.sched_getaffinity? I am using an M1 MacBook Air 2020, 16 GB RAM.
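
A possible workaround (untested, and it assumes XFuse only needs the number of available cores at this point) is to launch XFuse from a small Python wrapper that patches the Linux-only os.sched_getaffinity with a stand-in based on os.cpu_count():

import os
import sys

if not hasattr(os, "sched_getaffinity"):
    # os.sched_getaffinity is Linux-only; fake it with the total core count
    os.sched_getaffinity = lambda pid: set(range(os.cpu_count() or 1))

from xfuse.__main__ import cli

sys.argv = ["xfuse", "run", "my-config.toml", "--save-path", "my-run"]
sys.exit(cli())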

purge_interval for metagene expansion not working

Hi,

This is a great method and we had some initial success applying it to our data! I was hoping to evaluate (add/delete) metagenes more often by setting the purge_interval, because each epoch took 5 minutes given the amount of data I have. However, it's still purging at the default interval of 1000 instead of 15. Could you provide some suggestions on how I can change the interval? Thanks!

[xfuse]
network_depth = 6
network_width = 32
min_counts = -1

[expansion_strategy]
type = "DropAndSplit"
purge_interval=15
[expansion_strategy.DropAndSplit]
max_metagenes = 50

[optimization]
batch_size = 60
epochs = -1
learning_rate = 0.001
patch_size = 200

[analyses]
[analyses.metagenes]
type = "metagenes"
[analyses.metagenes.options]
method = "pca"

[analyses.gene_maps]
type = "gene_maps"
[analyses.gene_maps.options]
gene_regex = ".*"
mask_tissue = false
writer="tensor"
scale=1
genes_per_batch=1

[slides]
[slides.section1]

The parameter v has invalid values

Hi there,
Thanks for providing this nice tool! I was able to run it using the sample data you provided, so I suppose the environment is correct. However, I ran into an error when running XFuse on another ST dataset. The error message is attached below; could you help me with that?

[2022-02-24 11:58:53,539] INFO : Running xfuse version 0.2.1
[2022-02-24 11:58:53,614] INFO : Using the following design table:
[2022-02-24 11:58:53,614] INFO :
[2022-02-24 11:58:53,615] INFO : | section1 |
[2022-02-24 11:58:53,615] INFO :
[2022-02-24 11:58:54,029] INFO : The following 14382 genes have been filtered out:
....
....
[2022-02-24 11:58:54,037] INFO : Adding metagene: 1
[2022-02-24 11:58:54,038] INFO : Registering experiment: ST (data type: "ST")
[2022-02-24 11:59:06,688] WARNING : RuntimeWarning (/appl/python-3.9/lib/python3.9/site-packages/xfuse/data/utility/misc.py:87): Mean of empty slice.
[2022-02-24 11:59:06,689] WARNING : RuntimeWarning (/appl/python-3.9/lib/python3.9/site-packages/numpy/core/_methods.py:189): invalid value encountered in double_scalars
[2022-02-24 11:59:09,574] ERROR : ValueError: The parameter v has invalid values

Traceback (most recent call last):
  File "/appl/python-3.9/lib/python3.9/site-packages/pyro/poutine/trace_messenger.py", line 165, in __call__
    ret = self.fn(*args, **kwargs)
  File "/appl/python-3.9/lib/python3.9/site-packages/pyro/poutine/messenger.py", line 12, in _context_wrap
    return fn(*args, **kwargs)
  File "/appl/python-3.9/lib/python3.9/site-packages/pyro/poutine/messenger.py", line 12, in _context_wrap
    return fn(*args, **kwargs)
  File "/appl/python-3.9/lib/python3.9/site-packages/xfuse/model/xfuse.py", line 94, in model
    _go(self.get_experiment(experiment), x)
  File "/appl/python-3.9/lib/python3.9/site-packages/xfuse/model/xfuse.py", line 91, in _go
    experiment.model(x, zs)
  File "/appl/python-3.9/lib/python3.9/site-packages/xfuse/model/experiment/st/st.py", line 372, in model
    Delta(
  File "/appl/python-3.9/lib/python3.9/site-packages/pyro/distributions/distribution.py", line 18, in __call__
    return super().__call__(*args, **kwargs)
  File "/appl/python-3.9/lib/python3.9/site-packages/pyro/distributions/delta.py", line 44, in __init__
    super().__init__(batch_shape, event_shape, validate_args=validate_args)
  File "/appl/python-3.9/lib/python3.9/site-packages/torch/distributions/distribution.py", line 53, in __init__
    raise ValueError("The parameter {} has invalid values".format(param))
ValueError: The parameter v has invalid values

How do I make a tissue mask (with photoshop?)

I want to run a new analysis, this time with a mask. I can create a mask .png file with photoshop (a black and white image, in grayscale).

When I run the code:
mydir/XFuse/bin/xfuse convert image --image /mydir/XFuse/sections/23O2788/23O2788cirroseHE.jpg --scale 1.0 --mask --mask-file /mydir/XFuse/sections/23O2788/23O2788_mask_grayscale.png --save-path /mydir/XFuse/sections/23O2788_scale1

I get the following error:

[2023-08-07 16:06:31,485] ℹ : Running xfuse version 0.2.1
[2023-08-07 16:06:34,931] ℹ : Computing tissue mask:
[2023-08-07 16:06:34,932] ⚠ WARNING : UserWarning (/mydir/XFuse/xfuse/utility/mask.py:74): Failed to mask tissue
OpenCV(4.8.0) /io/opencv/modules/imgproc/src/grabcut.cpp:343: error: (-5:Bad argument) mask element value must be equal GC_BGD or GC_FGD or GC_PR_BGD or GC_PR_FGD in function 'checkMask'
[2023-08-07 16:06:35,006] ⚠ WARNING : UserWarning (/mydir/XFuse/xfuse/convert/utility.py:207): The image resolution is very large! 😱 XFuse typically works best on medium resolution images (approximately 1000x1000 px). If you experience performance issues, please consider reducing the resolution.
[2023-08-07 16:06:38,991] ℹ : Writing data to data.h5

So the .h5 file is made, but the assignments GC_BGD, GC_FGD, GC_PR_BGD, or GC_PR_FGD need to be made. I understand I need to tell it that GC_BGD is background (black) and GC_FGD is foreground (white). But how do I do this?

This is my mask made in photoshop:
[mask image attachment]

If I run python visualize_tissue_masks.py /mydir/XFuse/sections/23O2788_scale1/data.h5 to visualize the mask, I don't get a mask, but I do get the big border (which I did not ask for).

[screenshot of the visualized mask]

So, (i) How can I make a mask
and (ii) How can I get rid of the border?
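
In case it helps: the error message suggests the mask is handed to OpenCV's grabCut, whose mask values are the constants cv2.GC_BGD (0), cv2.GC_FGD (1), cv2.GC_PR_BGD (2) and cv2.GC_PR_FGD (3). Below is a sketch that converts a black/white Photoshop mask into those labels; whether xfuse convert actually accepts a pre-labelled mask in this encoding is an assumption and would need to be confirmed:

import cv2
import numpy as np

# Load the Photoshop mask as grayscale: background black (0), tissue white (255)
gray = cv2.imread("23O2788_mask_grayscale.png", cv2.IMREAD_GRAYSCALE)

# Map black pixels to "definite background" and white pixels to "definite foreground"
labels = np.where(gray > 127, cv2.GC_FGD, cv2.GC_BGD).astype(np.uint8)

cv2.imwrite("23O2788_mask_labels.png", labels)
print(np.unique(labels))  # should print [0 1]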

TypeError: cannot pickle 'weakref' object related to save_session

Hi, @ludvb

Thank you so much for providing such a fantastic tool. I've encountered some issues while running the software. I came across the error when I ran xfuse run config_demo.toml

Traceback (most recent call last):
  File "/root/.local/lib/python3.8/site-packages/xfuse/run.py", line 145, in _panic
    save_session("exception")
  File "/root/.local/lib/python3.8/site-packages/xfuse/session/io.py", line 33, in save_session
    **{
  File "/root/.local/lib/python3.8/site-packages/xfuse/session/io.py", line 38, in <dictcomp>
    if _can_pickle(k, v)
  File "/root/.local/lib/python3.8/site-packages/xfuse/session/io.py", line 23, in _can_pickle
    _ = pickle.dumps(x)
TypeError: cannot pickle 'weakref' object

It seems like there was an issue while saving the session. Here is my config_demo.toml file.

[xfuse]
# This section defines modeling options. It can usually be left as-is.
network_depth = 6
network_width = 12
gene_regex = "^(?!RPS|RPL|MT-).*" # Regex matching genes to include in the model. By default, exclude mitochondrial and ribosomal genes.
min_counts = 1 # Exclude all genes with fewer reads than this value.

[settings]
cache_data = true
data_workers = 0 # Number of worker processes for data loading. If set to zero, run data loading in main thread.

[expansion_strategy]
# This section contains configuration options for the metagene expansion strategy.
type = "DropAndSplit" # Available choices: Extra, DropAndSplit
purge_interval = 2 # Metagene purging interval (epochs)

[expansion_strategy.Extra]
num_metagenes = 4
anneal_to = 1
anneal_epochs = 1000

[expansion_strategy.DropAndSplit]
max_metagenes = 50

[optimization]
# This section defines options used during training. It may be necessary to decrease the batch or patch size if running out of memory during training.
batch_size = 2
epochs = 10
learning_rate = 0.0003
patch_size = 768 # Size of training patches. Set to '-1' to use as large patches as possible.

[analyses]
# This section defines which analyses to run. Each analysis has its own subtable with configuration options. Remove the table to stop the analysis from being run.

[analyses.analysis-gene_maps]
# Constructs a map of imputed expression for each gene in the dataset.
type = "gene_maps"

[analyses.analysis-gene_maps.options]
gene_regex = ".*"
num_samples = 10
genes_per_batch = 10
predict_mean = true
normalize = false
mask_tissue = true
scale = 1.0
writer = "tensor"


[slides]
# This section defines the slides to use in the experiment. Covariates are specified in the "covariates" table. Slide-specific options can be specified in the "options" table.

[slides.section0]
data = "data/demo/data.h5"

[slides.section0.covariates]
section = "0"

[slides.section0.options]
min_counts = 1
always_filter = []
always_keep = [1]

Could you please help with the issue? Thank you very much.

Yizhi

xfuse on synthetic data

Hello,
Thank you very much for providing this great method!
I tried to use xfuse on synthetic data, but I ran into some difficulties. How does the synthetic data look like? And how can I modify the .toml file to get the result just like that in your article? May I ask if you can provide the synthetic data in your article and the corresponding running code for xfuse?

Many thanks

xfuse on Visium data

Hello,
Your approach looks very interesting! And I hope your pre-print will get accepted.
I would like to run xfuse on GPU cluster using one dataset from Visium. Few questions:

  • does one need to do standard preprocessing/normalization of spatial data, eg in Seurat, or even do some prior deconvolution of spots using reference sc/snRNAseq data (eg using SPOTlight)? And then run xfuse preprocessing, before xfuse run
  • what would be the example to prepare Visium data for xfuse? eg:
    xfuse convert visium --counts s1_filtered_feature_bc_matrix.h5 --image tissue_hires_image.png --transformation-matrix tissue_positions_list.csv --scale 0.3 --output-file section1.h5
    Are .h5, .png, and .csv supported?
  • in my-config.toml, what does [analyses.gene_maps] normalize = false mean exactly: no normalization of counts?
  • is there a full summary of configuration options for xfuse?

Many Thanks

TensorBoard-related error at epoch 500

Hello, I am trying to run XFuse on 8 Visium slides using GPUs. The training proceeds until epoch 499. Then at epoch 500 the session panics and I get this error. It seems to involve TensorBoard. Any idea on how to fix it?

[2020-09-30 12:28:39,561] INFO : Epoch 00480 | ELBO -7.875e+07 | Running ELBO -7.8118e+07 | Running RMSE 2.087
[2020-09-30 12:28:48,967] INFO : Epoch 00481 | ELBO -7.904e+07 | Running ELBO -7.8118e+07 | Running RMSE 2.086
[2020-09-30 12:28:58,310] INFO : Epoch 00482 | ELBO -8.582e+07 | Running ELBO -7.8119e+07 | Running RMSE 2.086
[2020-09-30 12:29:06,470] INFO : Epoch 00483 | ELBO -7.125e+07 | Running ELBO -7.8118e+07 | Running RMSE 2.085
[2020-09-30 12:29:15,118] INFO : Epoch 00484 | ELBO -7.666e+07 | Running ELBO -7.8118e+07 | Running RMSE 2.085
[2020-09-30 12:29:23,711] INFO : Epoch 00485 | ELBO -8.059e+07 | Running ELBO -7.8118e+07 | Running RMSE 2.084
[2020-09-30 12:29:31,970] INFO : Epoch 00486 | ELBO -7.903e+07 | Running ELBO -7.8118e+07 | Running RMSE 2.084
[2020-09-30 12:29:40,730] INFO : Epoch 00487 | ELBO -7.779e+07 | Running ELBO -7.8118e+07 | Running RMSE 2.083
[2020-09-30 12:29:49,476] INFO : Epoch 00488 | ELBO -7.583e+07 | Running ELBO -7.8118e+07 | Running RMSE 2.083
[2020-09-30 12:29:57,758] INFO : Epoch 00489 | ELBO -7.945e+07 | Running ELBO -7.8118e+07 | Running RMSE 2.082
[2020-09-30 12:30:06,495] INFO : Epoch 00490 | ELBO -7.764e+07 | Running ELBO -7.8118e+07 | Running RMSE 2.082
[2020-09-30 12:30:15,647] INFO : Epoch 00491 | ELBO -7.852e+07 | Running ELBO -7.8118e+07 | Running RMSE 2.082
[2020-09-30 12:30:23,895] INFO : Epoch 00492 | ELBO -7.722e+07 | Running ELBO -7.8118e+07 | Running RMSE 2.081
[2020-09-30 12:30:32,097] INFO : Epoch 00493 | ELBO -7.302e+07 | Running ELBO -7.8117e+07 | Running RMSE 2.081
[2020-09-30 12:30:40,771] INFO : Epoch 00494 | ELBO -7.402e+07 | Running ELBO -7.8116e+07 | Running RMSE 2.080
[2020-09-30 12:30:49,114] INFO : Epoch 00495 | ELBO -8.163e+07 | Running ELBO -7.8117e+07 | Running RMSE 2.080
[2020-09-30 12:30:58,408] INFO : Epoch 00496 | ELBO -7.960e+07 | Running ELBO -7.8117e+07 | Running RMSE 2.079
[2020-09-30 12:31:06,642] INFO : Epoch 00497 | ELBO -7.547e+07 | Running ELBO -7.8117e+07 | Running RMSE 2.079
[2020-09-30 12:31:15,746] INFO : Epoch 00498 | ELBO -8.405e+07 | Running ELBO -7.8118e+07 | Running RMSE 2.078
[2020-09-30 12:31:24,526] INFO : Epoch 00499 | ELBO -8.042e+07 | Running ELBO -7.8118e+07 | Running RMSE 2.078
                                                                             [2020-09-30 12:31:33,190] ERROR : session panic! 
[2020-09-30 12:31:33,483] INFO : saving session to my-run/exception.session  
Traceback (most recent call last):
  File "/nfs/users/nfs_l/lm17/.local/bin/xfuse", line 8, in <module>
    sys.exit(cli())
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/xfuse/utility/utility.py", line 281, in _wrapped
    return f(*args, **kwargs)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/xfuse/__main__.py", line 51, in _wrapped
    return f(*args, **kwargs)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/xfuse/__main__.py", line 312, in run
    _run(
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/xfuse/run.py", line 144, in run
    train(epochs)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/xfuse/train.py", line 110, in train
    elbo = _epoch(epoch=epoch)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/pyro/poutine/runtime.py", line 263, in _fn
    apply_stack(msg)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/pyro/poutine/runtime.py", line 198, in apply_stack
    default_process_message(msg)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/pyro/poutine/runtime.py", line 159, in default_process_message
    msg["value"] = msg["fn"](*msg["args"], **msg["kwargs"])
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/xfuse/train.py", line 90, in _epoch
    elbo.append(_step(x=to_device(x)))
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/pyro/poutine/runtime.py", line 263, in _fn
    apply_stack(msg)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/pyro/poutine/runtime.py", line 198, in apply_stack
    default_process_message(msg)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/pyro/poutine/runtime.py", line 159, in default_process_message
    msg["value"] = msg["fn"](*msg["args"], **msg["kwargs"])
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/xfuse/train.py", line 78, in _step
    return -pyro.infer.SVI(model.model, model.guide, optim, loss).step(x)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/pyro/infer/svi.py", line 128, in step
    loss = self.loss_and_grads(self.model, self.guide, *args, **kwargs)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/pyro/infer/trace_elbo.py", line 126, in loss_and_grads
    for model_trace, guide_trace in self._get_traces(model, guide, args, kwargs):
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/pyro/infer/elbo.py", line 170, in _get_traces
    yield self._get_trace(model, guide, args, kwargs)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/pyro/infer/trace_elbo.py", line 52, in _get_trace
    model_trace, guide_trace = get_importance_trace(
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/pyro/infer/enum.py", line 47, in get_importance_trace
    model_trace = poutine.trace(poutine.replay(model, trace=guide_trace),
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/pyro/poutine/trace_messenger.py", line 187, in get_trace
    self(*args, **kwargs)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/pyro/poutine/trace_messenger.py", line 165, in __call__
    ret = self.fn(*args, **kwargs)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/pyro/poutine/messenger.py", line 11, in _context_wrap
    return fn(*args, **kwargs)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/pyro/poutine/messenger.py", line 11, in _context_wrap
    return fn(*args, **kwargs)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/xfuse/model/xfuse.py", line 90, in model
    return {e: _go(self.get_experiment(e), x) for e, x in xs.items()}
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/xfuse/model/xfuse.py", line 90, in <dictcomp>
    return {e: _go(self.get_experiment(e), x) for e, x in xs.items()}
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/xfuse/model/xfuse.py", line 88, in _go
    return experiment.model(x, zs)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/xfuse/model/experiment/st/st.py", line 389, in model
    image_distr = self._sample_image(x, decoded)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/xfuse/model/experiment/image.py", line 184, in _sample_image
    pyro.sample(
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/pyro/primitives.py", line 113, in sample
    apply_stack(msg)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/pyro/poutine/runtime.py", line 201, in apply_stack
    frame._postprocess_message(msg)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/xfuse/handlers/stats/stats_handler.py", line 76, in _postprocess_message
    self._handle(**msg)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/xfuse/handlers/stats/image.py", line 15, in _handle
    self.add_images("image/ground_truth", (1 + value) / 2)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/xfuse/handlers/stats/stats_handler.py", line 37, in <lambda>
    lambda *args, **kwargs: method(
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/torch/utils/tensorboard/writer.py", line 583, in add_images
    image(tag, img_tensor, dataformats=dataformats), global_step, walltime)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/torch/utils/tensorboard/summary.py", line 310, in image
    tensor = convert_to_HWC(tensor, dataformats)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/torch/utils/tensorboard/_utils.py", line 107, in convert_to_HWC
    tensor_CHW = make_grid(tensor_NCHW)
  File "/nfs/users/nfs_l/lm17/.local/lib/python3.8/site-packages/torch/utils/tensorboard/_utils.py", line 76, in make_grid
    assert I.ndim == 4 and I.shape[1] == 3
AssertionError

About mouse olfactory bulb dataset

Hi author:
I obtained the mouse olfactory bulb dataset obeyed the address: https://www.spatialresearch.org/resources-published-datasets/doi-10-1126science-aaf2403/.
However, I didn't understand how to plot the data on top of the HE image. For example, the instructions in section Alignments: "you can either rescale the image to the array size (1,1,33,35) or transform the spot coordinates to image pixel coordinates using these alignment 3×3 matrices:"

Can you tell me how to easily plot the data on top of the HE image?

Thanks
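
A rough sketch of what I think the Alignments instructions mean, reusing the file names from the tutorial above for concreteness. Assumptions: the count-matrix row names are spot coordinates in the form "XxY", and the alignment file holds a 3x3 matrix mapping array coordinates to pixel coordinates (the matrix may need transposing depending on how it is stored):

import numpy as np
import pandas as pd
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

counts = pd.read_csv("section1.tsv", sep="\t", index_col=0)
transform = np.loadtxt("section1-alignment.txt").reshape(3, 3)

# Parse "XxY" spot names into array coordinates
coords = np.array([[float(v) for v in name.split("x")] for name in counts.index])

# Apply the 3x3 transformation in homogeneous coordinates
homogeneous = np.hstack([coords, np.ones((coords.shape[0], 1))])
pixels = homogeneous @ transform
pixels = pixels[:, :2] / pixels[:, 2:3]

# Overlay total counts per spot on the HE image
plt.imshow(mpimg.imread("section1.jpg"))
plt.scatter(pixels[:, 0], pixels[:, 1], c=counts.sum(axis=1), s=4)
plt.axis("off")
plt.show()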

AttributeError: 'DataLoader' object has no attribute 'reset_workers'

I run "xfuse run my-config.toml --save-path my-run" in cmd, everything is fine.
When I run this python file in debug mode,and my args is "run myconfig.toml --save-path my-run"

import re
import sys
from xfuse.__main__ import cli
if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(cli())

I got this error

Traceback (most recent call last):
  File "D:\xfuse\xfuse\__main__.py", line 585, in run
    _run(
  File "D:\xfuse\xfuse\run.py", line 149, in run
    with Session(
  File "D:\xfuse\xfuse\session\session.py", line 45, in __enter__
    _apply_session(get_session())
  File "D:\xfuse\xfuse\session\session.py", line 85, in _apply_session
    setter(getattr(session, name, default))
  File "D:\xfuse\xfuse\session\items\genes.py", line 10, in _set_genes
    dataloader.reset_workers()
AttributeError: 'DataLoader' object has no attribute 'reset_workers'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "D:\xfuse\xfuse\__main__.py", line 585, in run
    _run(
  File "D:\xfuse\xfuse\session\session.py", line 64, in __exit__
    assert self == _SESSION_STACK.pop()
AssertionError

I want to read the code in debug mode; how can I solve this problem?

Decoding error in data preprocessing

Hi,

Is there any way to fix the "UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte"?

The following is the complete error message.

xia00045@ln1002 [~/final_project] % /home/ruping/xia00045/.local/bin/xfuse convert st --counts section1.tsv --image section1.jpg --transformation-matrix section1-alignment.txt --scale 0.15 --save-path section1
[2022-05-08 14:45:33,429] ℹ : Running xfuse version 0.2.1
[2022-05-08 14:45:33,432] 🚨 ERROR : UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
Traceback (most recent call last):
  File "/home/ruping/xia00045/.local/lib/python3.8/site-packages/xfuse/__main__.py", line 78, in _wrapped
    return f(*args, **kwargs)
  File "/home/ruping/xia00045/.local/lib/python3.8/site-packages/xfuse/__main__.py", line 244, in _convert_st
    counts_data = pd.read_csv(counts, sep="\t", index_col=0)
  File "/home/ruping/xia00045/.local/lib/python3.8/site-packages/pandas/util/_decorators.py", line 311, in wrapper
    return func(*args, **kwargs)
  File "/home/ruping/xia00045/.local/lib/python3.8/site-packages/pandas/io/parsers/readers.py", line 680, in read_csv
    return _read(filepath_or_buffer, kwds)
  File "/home/ruping/xia00045/.local/lib/python3.8/site-packages/pandas/io/parsers/readers.py", line 575, in _read
    parser = TextFileReader(filepath_or_buffer, **kwds)
  File "/home/ruping/xia00045/.local/lib/python3.8/site-packages/pandas/io/parsers/readers.py", line 933, in __init__
    self._engine = self._make_engine(f, self.engine)
  File "/home/ruping/xia00045/.local/lib/python3.8/site-packages/pandas/io/parsers/readers.py", line 1235, in _make_engine
    return mapping[engine](f, **self.options)
  File "/home/ruping/xia00045/.local/lib/python3.8/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 75, in __init__
    self._reader = parsers.TextReader(src, **kwds)
  File "pandas/_libs/parsers.pyx", line 544, in pandas._libs.parsers.TextReader.__cinit__
  File "pandas/_libs/parsers.pyx", line 633, in pandas._libs.parsers.TextReader._get_header
  File "pandas/_libs/parsers.pyx", line 847, in pandas._libs.parsers.TextReader._tokenize_rows
  File "pandas/_libs/parsers.pyx", line 1952, in pandas._libs.parsers.raise_parser_error
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
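
For what it is worth, byte 0x8b at position 1 is the second byte of the gzip magic number (0x1f 0x8b), so one possible cause (an assumption, not verified) is that the downloaded .tsv is actually gzip-compressed rather than plain text. A small check-and-decompress sketch:

import gzip
import shutil

with open("section1.tsv", "rb") as f:
    head = f.read(2)
print(head)  # b"\x1f\x8b" means the file is gzip-compressed

if head == b"\x1f\x8b":
    # Write a decompressed copy and point xfuse convert at that file instead
    with gzip.open("section1.tsv", "rb") as src, open("section1_plain.tsv", "wb") as dst:
        shutil.copyfileobj(src, dst)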

RuntimeError: CUDA error: no kernel image is available for execution on the device

Hi,

I followed the instructions on the GitHub README page to install XFuse. However, when I ran the command:

xfuse run my-config.toml --save-path my-run

I encountered the following error:

[Screenshot 2024-06-13, 2:08 PM: the error message]

To resolve this, I attempted to upgrade Torch by running:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

After the upgrade, I encountered a different error:

[Screenshot 2024-06-13, 3:11 PM: the new error message]

Additionally, when I ran nvidia-smi, here is the GPU information:

[Screenshot 2024-06-13, 2:30 PM: nvidia-smi output]

Could you please provide guidance on how to resolve this issue?

Thank you!

Best,
Mingyu
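For what it's worth, "no kernel image is available for execution on the device" usually means the installed PyTorch wheel was not built for the GPU's compute capability. A quick diagnostic sketch, not specific to xfuse (get_arch_list is available in recent PyTorch versions; output will vary):

# Diagnostic sketch: compare the GPU's compute capability with the
# architectures included in the installed PyTorch build.
import torch

print(torch.__version__, torch.version.cuda)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    print(torch.cuda.get_device_capability(0))   # e.g. (8, 6) for sm_86
    print(torch.cuda.get_arch_list())             # architectures in this build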

Missing h5 file question

Hello,
I am currently using a public dataset that has only the mtx and tsv files (barcodes and features) and no h5 file. I don't have access to the raw FASTQ files to regenerate the h5 file. Is there a workaround or any recommendation for this case? I noticed that the convert visium function takes an h5 file as input.
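For reference, below is a rough sketch of packing mtx/barcodes/features files into a CellRanger-style feature-barcode .h5. The HDF5 layout is an assumption based on the Space Ranger v3 format and the file names are placeholders; verify the result against a real filtered_feature_bc_matrix.h5 before relying on it.

# Sketch only: assumed Space Ranger v3 HDF5 layout; file names are placeholders.
import h5py
import numpy as np
import scipy.io
import scipy.sparse

# The mtx file is assumed to be features x barcodes (the 10x convention).
matrix = scipy.sparse.csc_matrix(scipy.io.mmread("matrix.mtx"))
with open("barcodes.tsv") as fh:
    barcodes = [line.strip() for line in fh]
with open("features.tsv") as fh:
    features = [line.rstrip("\n").split("\t") for line in fh]

with h5py.File("filtered_feature_bc_matrix.h5", "w") as f:
    g = f.create_group("matrix")
    g.create_dataset("data", data=matrix.data.astype(np.int32))
    g.create_dataset("indices", data=matrix.indices.astype(np.int64))
    g.create_dataset("indptr", data=matrix.indptr.astype(np.int64))
    g.create_dataset("shape", data=np.array(matrix.shape, dtype=np.int32))
    g.create_dataset("barcodes", data=np.array(barcodes, dtype="S"))
    feats = g.create_group("features")
    feats.create_dataset("id", data=np.array([x[0] for x in features], dtype="S"))
    feats.create_dataset("name", data=np.array([x[1] for x in features], dtype="S"))
    feats.create_dataset(
        "feature_type",
        data=np.array(
            [x[2] if len(x) > 2 else "Gene Expression" for x in features], dtype="S"
        ),
    )
    # "unknown" genome label is a placeholder assumption
    feats.create_dataset("genome", data=np.array(["unknown"] * len(features), dtype="S"))
    feats.create_dataset("_all_tag_keys", data=np.array(["genome"], dtype="S"))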

How to use masks?

I have trained the model on Visium data, and in the gene maps I see strong signals associated with the fiducial spots. How can I tell the model to use a mask for each Visium slide so that it ignores the pixels outside the tissue?

My TIF files already contain the mask information in the alpha channel, but I had to get rid of that channel for two reasons (see the sketch after this list for one way to drop it):

  1. When I call xfuse convert visium ..., if I don't pass the argument --no-mask I get the error:

     💔 ERROR : session panic! OpenCV(4.4.0) /tmp/pip-req-build-vu_aq9yd/opencv/modules/imgproc/src/grabcut.cpp:557: error: (-5:Bad argument) image must have CV_8UC3 type in function 'grabCut'

     which, as discussed in this Stack Overflow question, is linked to having an unexpected alpha channel.
  2. When training the model, if my TIF files had an alpha channel (even if I converted the data passing --no-mask), I encountered this error.
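A minimal sketch of dropping the alpha channel with Pillow (file names are placeholders; using the alpha channel to blank out-of-tissue pixels is optional):

# Sketch: strip the alpha channel from an RGBA TIF before conversion.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("slide.tif"))
if img.ndim == 3 and img.shape[-1] == 4:       # RGBA image
    rgb, alpha = img[..., :3].copy(), img[..., 3]
    rgb[alpha == 0] = 255                       # paint masked-out pixels white
    Image.fromarray(rgb).save("slide_rgb.tif")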

Thanks for your help.

Visium: Failed to mask tissue

Hi, thank you for the great method! I tried to apply it to my Visium dataset, but I got the following warnings for all my samples at the conversion step:

[2021-12-29 13:40:49,020] ℹ : Running xfuse version 0.2.1
[2021-12-29 13:40:56,493] ℹ : Computing tissue mask:
[2021-12-29 13:40:56,500] ⚠ WARNING : UserWarning (/nfs/users/nfs_p/pm19/.local/lib/python3.9/site-packages/xfuse/utility/mask.py:67): Failed to mask tissue OpenCV(4.5.4) /tmp/pip-req-build-kv0l0wqx/opencv/modules/imgproc/src/grabcut.cpp:386: error: (-215:Assertion failed) !bgdSamples.empty() && !fgdSamples.empty() in function 'initGMMs'
[2021-12-29 13:41:07,029] ⚠ WARNING : UserWarning (/nfs/users/nfs_p/pm19/.local/lib/python3.9/site-packages/xfuse/convert/utility.py:217): Count matrix contains duplicated columns. Counts will be summed by column name.
[2021-12-29 13:41:09,749] ⚠ WARNING : FutureWarning (/nfs/users/nfs_p/pm19/.local/lib/python3.9/site-packages/xfuse/convert/utility.py:227): Using the level keyword in DataFrame and Series aggregations is deprecated and will be removed in a future version. Use groupby instead. df.sum(level=1) should use df.groupby(level=1).sum().

I am mostly worried about the "Failed to mask tissue" warning. In this dataset we instructed spaceranger to consider all spots, because tissue autodetection failed to find the relatively transparent adipose tissue. We then manually annotated tissue spots, and I introduced this information into the tissue-positions file (second column). As far as I can see, xfuse ignores this information and attempts to mask the tissue internally, but this procedure fails. Am I right that in this case xfuse considers all spots? At least it looks like this based on manual inspection of the data.h5 file and the high intensity of some metagenes in out-of-tissue regions. Can I force xfuse to use the tissue mask provided in the tissue-positions file?
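For reference, the manual in-tissue annotation described above lives in the second column of the tissue positions file. A quick sketch to inspect it (the path and the pre-2.0 Space Ranger column names are assumptions):

# Sketch: count how many spots are flagged as in-tissue in tissue_positions_list.csv
import pandas as pd

pos = pd.read_csv(
    "spatial/tissue_positions_list.csv",
    header=None,  # no header in Space Ranger < 2.0
    names=["barcode", "in_tissue", "array_row", "array_col",
           "pxl_row_in_fullres", "pxl_col_in_fullres"],
)
print(pos["in_tissue"].value_counts())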

Is "Count matrix contains duplicated columns" warning about gene names?

Also, when I run xfuse it at some point reports "Registering experiment: ST (data type: "ST")", even though this is actually Visium data. Is this important, or can I just ignore it?

error when installing

I can't execute the install command "pip install --user git+https://github.com/ludvb/xfuse@master" due to network restrictions.
So I downloaded this repository and tried to install it and its dependencies with the command "pip install -v -e .", but I got this error:

Installing collected packages: xfuse
  Running setup.py develop for xfuse
    Running command python setup.py develop
    ERROR: Can not execute `setup.py` since setuptools is not available in the build environment.
    error: subprocess-exited-with-error

    × python setup.py develop did not run successfully.
    │ exit code: 1
    ╰─> See above for output.

    note: This error originates from a subprocess, and is likely not a problem with pip.

How can I install it without setup.py?

Train on multiple GPU

Hello,
Very interesting paper! I would like to train the model on my own datasets, so I would like to know how I can run it using multiple GPUs.

Many thanks

xfuse with black and white image

Hi, xfuse convert visium works perfectly on colour images, but fails when using a black and white image. I get the same error as mentioned in #17:

WARNING : UserWarning (/home//.local/lib/python3.8/site-packages/xfuse/utility/mask.py:74): Failed to mask tissue
OpenCV(4.6.0) /io/opencv/modules/imgproc/src/grabcut.cpp:557: error: (-5:Bad argument) image must have CV_8UC3 type in function 'grabCut' 

I am not very familiar with OpenCV; it would be much appreciated if you could help out here!

Best,
Simon
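One possible workaround for the grayscale case (a sketch using Pillow, not an official fix; file names are placeholders) is to expand the image to three channels before running xfuse convert visium, since grabCut expects an 8-bit, 3-channel (CV_8UC3) input:

# Sketch: convert a single-channel (black and white) image to 3-channel RGB
# so that OpenCV's grabCut accepts it during tissue masking.
from PIL import Image

Image.open("section_bw.tif").convert("RGB").save("section_rgb.tif")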

prediction of spatial gene expression

Hi,
I have a question: in your paper, Extended Data Fig. 4 shows that xfuse can be used to predict spatial gene expression. You used two tissues as a reference and two others for prediction. Can you explain how to do that with xfuse? I want to try it with my dataset.
Thank you

When is my model fully trained?

Hello again Ludvig,

I got XFuse running on the supercomputer (thanks for the help!!).

There is a time limit of 24 hrs on the GPU node, so I stopped and restarted a few times during the first part (100k epochs), but managed to run it entirely in about 3-4 full days.

Then, at the end, it started to generate the gene_maps, but I just didn't make it in time: the gene_maps for the last 2,000 or so genes were not created because the process was aborted due to the time-out.

So my question is: is my model already fully trained? Or does the gene_maps step need to be fully executed, maybe followed by another script that needs to be run?

Or is gene_maps simply a step that is run AFTER the model is built? If so, how can I run it separately?

Thank you for your support

Nicolaas

Error when Running with Windows

Hello xfuse developers,

Thanks in advance for this package! I'm looking to utilize xfuse with H&E images to predict expression in Breast Cancer tissue. I created a conda environment to mitigate dependency conflicts and installed using the recommended method, i.e. pip install --user git+https://github.com/ludvb/xfuse@master

My specs are as follows:

Windows 11 v22H2 OS build 22621.1105, 64bit
conda v22.11.1
python v3.8.16
xfuse v0.2.1 (and all associated dependencies, I'm willing and able to provide if necessary but I will save these for brevity)

I attempted the walkthrough run on the README, but unfortunately I was met with the error below.

(xfuse_env) C:\dummy\path> xfuse run my-config.toml --save-path my-run
[2023-01-25 13:41:38,090] ℹ : Running xfuse version 0.2.1
[2023-01-25 13:41:38,122] ℹ : Using the following design table:
[2023-01-25 13:41:38,122] ℹ :
[2023-01-25 13:41:38,122] ℹ : |          |   section {1,2,3,4} |
[2023-01-25 13:41:38,122] ℹ : |----------+---------------------|
[2023-01-25 13:41:38,122] ℹ : | section1 |                   1 |
[2023-01-25 13:41:38,122] ℹ : | section2 |                   2 |
[2023-01-25 13:41:38,122] ℹ : | section3 |                   3 |
[2023-01-25 13:41:38,122] ℹ : | section4 |                   4 |
[2023-01-25 13:41:38,122] ℹ :
[2023-01-25 13:41:38,122] 🚨 ERROR : AttributeError: module 'os' has no attribute 'sched_getaffinity'
Traceback (most recent call last):
  File "C:\Users\matth\anaconda3\envs\xfuse_env\lib\site-packages\xfuse\__main__.py", line 585, in run
    _run(
  File "C:\Users\matth\anaconda3\envs\xfuse_env\lib\site-packages\xfuse\run.py", line 64, in run
    if (available_cores := len(os.sched_getaffinity(0))) < num_data_workers:
AttributeError: module 'os' has no attribute 'sched_getaffinity'

I noticed on the README that xfuse had been tested on GNU/Linux but that other operating systems should work. I have Ubuntu on my machine, but would prefer to keep this installation in a conda environment. Furthermore, the version of Python associated with Ubuntu causes dependency conflicts with torch that cannot be resolved by pip. In any case, I am unsure how to proceed and figured you may have come across this error before. I opened this as a new issue because #51 is relevant to macOS, but not Windows. Please let me know if there is any other information I can provide to help you in helping me; thank you again!
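For context, os.sched_getaffinity is only available on Linux, which is why the call fails on Windows. A possible local workaround is to fall back to os.cpu_count(); the sketch below illustrates the idea (the available_cores helper is hypothetical, not part of xfuse):

# Sketch: cross-platform core count as a stand-in for os.sched_getaffinity.
import os

def available_cores() -> int:
    # os.sched_getaffinity exists only on Linux; os.cpu_count is the fallback
    try:
        return len(os.sched_getaffinity(0))
    except AttributeError:
        return os.cpu_count() or 1

print(available_cores())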

installation problem: setup.py not found?

Hi,

I'm trying to run the tutorial by first setting XFuse up on my system; I'm installing it from an Ubuntu command line.

When running 'pip install --user git+https://github.com/ludvb/xfuse@master', I get the following error:

nico@DESKTOP-4P3PDJ4:~$ pip install --user git+https://github.com/ludvb/xfuse@master
Collecting git+https://github.com/ludvb/xfuse@master
Cloning https://github.com/ludvb/xfuse (to master) to /tmp/pip-cf1SxT-build
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
IOError: [Errno 2] No such file or directory: '/tmp/pip-cf1SxT-build/setup.py'

----------------------------------------

Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-cf1SxT-build/
nico@DESKTOP-4P3PDJ4:~$

Any idea what to do? I'm clueless...

Installation of xfuse

I am trying to install xfuse on my laptop and on our HPC. I ran the pip install command as listed on GitHub but ran into an error on both computers, as shown below. Does this package require additional dependencies? Can xfuse be installed through conda? Does it matter whether I install it with pip or pip3?

  File "", line 188, in configuration
  File "/tmp/pip-build-env-da5v2_l9/overlay/lib/python3.10/site-packages/numpy/distutils/misc_util.py", line 1050, in add_subpackage
    config_list = self.get_subpackage(subpackage_name, subpackage_path,
  File "/tmp/pip-build-env-da5v2_l9/overlay/lib/python3.10/site-packages/numpy/distutils/misc_util.py", line 1016, in get_subpackage
    config = self.get_configuration_from_setup_py(
  File "/tmp/pip-build-env-da5v2_l9/overlay/lib/python3.10/site-packages/numpy/distutils/misc_util.py", line 958, in get_configuration_from_setup_py
    config = setup_module.configuration(*args)
  File "/tmp/pip-install-455o447/scikit-learn_ed3da863168d40b98d93dbf65283ce18/sklearn/setup.py", line 83, in configuration
    cythonize_extensions(top_path, config)
  File "/tmp/pip-install-455o447/scikit-learn_ed3da863168d40b98d93dbf65283ce18/sklearn/_build_utils/__init__.py", line 70, in cythonize_extensions
    config.ext_modules = cythonize(
  File "/tmp/pip-build-env-da5v2_l9/overlay/lib/python3.10/site-packages/Cython/Build/Dependencies.py", line 1125, in cythonize
    result.get(99999)  # seconds
  File "/homes/cathal.king/anaconda3/envs/xfuse/lib/python3.10/multiprocessing/pool.py", line 774, in get
    raise self._value
Cython.Compiler.Errors.CompileError: sklearn/ensemble/_hist_gradient_boosting/splitting.pyx
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

Customizing Training Procedure

Hi Author!

This was super interesting work, and I wanted to explore the architecture of the model a bit. However, I'm getting a little confused about the specifics of the codebase. I can't seem to figure out:

  1. Where are the recognition and generator networks trained?
  2. If I wanted to slightly change any of the recognition/generator networks, where can I do that?
  3. Why is the training time so high? I can't seem to fully dissect what computations are being done and I'd like to try a few approaches that maybe don't perform data augmentation.

Sorry if these questions are somewhat novice; I just couldn't figure this out entirely by myself :(
