diffdock-pp's People

Contributors

ketatam

diffdock-pp's Issues

Need to make 'storage' directory before running inference

When running src/inference.sh with the single_pair_inference config file provided, I get this error:

Traceback (most recent call last):
  File "/home/DiffDock-PP/src/main_inf.py", line 620, in <module>
    main()
  File "/home/DiffDock-PP/src/main_inf.py", line 354, in main
    dump_predictions(args,results)
  File "/home/DiffDock-PP/src/main_inf.py", line 383, in dump_predictions
    with open(args.prediction_storage, 'wb') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'storage/single_pair_run.pkl'

The fix is simple: either create a 'storage' directory before running inference.sh, or add a guarded mkdir (only if the directory doesn't exist) somewhere in main_inf.py.
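A minimal sketch of the second option, assuming dump_predictions keeps roughly its current signature (the pickle.dump call is an assumption based on the .pkl extension and the open(..., 'wb') in the traceback):

    # Hypothetical patch to dump_predictions in src/main_inf.py: create the
    # parent directory of args.prediction_storage before opening the file.
    import os
    import pickle

    def dump_predictions(args, results):
        out_dir = os.path.dirname(args.prediction_storage)  # e.g. "storage"
        if out_dir:
            os.makedirs(out_dir, exist_ok=True)  # no-op if the dir already exists
        with open(args.prediction_storage, "wb") as f:
            pickle.dump(results, f)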

Can the authors provide the hyper-parameters used in the training step?

Hi @ketatam, thanks for releasing the code. I have a question: I want to reproduce the results on the DIPS test set, so I ran the command bash ./src/train.sh directly, but I find that the hyper-parameters in dips_esm.yaml differ from the original paper. In dips_plus.yaml the number of epochs is 2000, whereas the original paper and the README.md in this repo say 170 epochs; and in the args.yaml of the pre-trained model you provide, ns and nv are 32 and 6, but in dips_esm.yaml they are 16 and 4. This leaves me confused about which parameters to use. Could the authors tell me which parameters were used in their experiments?

Question about the SDE schedule

Hi, thanks for your amazing work! After reading your DiffDock-PP paper and running the code, I am curious why you chose the VE SDE for the score model. Have you tried other SDEs, such as the VP SDE or the one in the EDM paper? Will the schedule greatly affect the sample quality, given the different weights in the exploration and refinement stages?

Besides, I think I found a bug in the code for atom features: the residue type seems to be placed at the end of the atom feature vector, but the atom embedding layer reads the residue type from the front of the feature vector during the forward function.
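A toy illustration of the suspected mismatch (made-up tensors and column counts, not the repo's actual feature construction):

    import torch

    num_atoms = 10
    atom_feats = torch.randn(num_atoms, 4)                       # hypothetical per-atom scalars
    residue_type = torch.randint(0, 20, (num_atoms, 1)).float()  # residue type index
    # residue type appended as the LAST column when the features are built
    x = torch.cat([atom_feats, residue_type], dim=-1)

    # ...but the embedding layer reportedly reads it from the FIRST column
    res_idx_used = x[:, 0].long()    # what the forward pass appears to do
    res_idx_meant = x[:, -1].long()  # what was presumably intended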

Train.sh data loading stops at 70%

Dear all,

After installing DiffDock-PP as per the installation guide, I attempted to run the train.sh script.
Unfortunately, data loading stalls at around 70%: to be more precise, it becomes very slow and then the process dies.

Does anybody have advice or a workaround to this challenge?

Thank you!

Luka

Clarification on NUM_SAMPLES, NUM_FOLDS, and visualize_n_val_graphs

I'm setting up some custom runs, but I'm not certain about what the variables NUM_SAMPLES, NUM_FOLDS, and visualize_n_val_graphs do. Any help would be appreciated!

As far as I can tell from skimming the code, NUM_FOLDS allows the diffusion process to start from different seeds (i.e., with NUM_FOLDS=1 every prediction would start from the same centered and randomly rotated binding partners), whereas NUM_SAMPLES refers to the number of poses sampled within the same fold. Then the minimum of the number of samples (which is presumably NUM_SAMPLES, or perhaps NUM_FOLDS * NUM_SAMPLES?) and the value of visualize_first_n_samples is used to decide how many PDB files to save, showing the protein complex structure at each time step of the reverse diffusion process.

However, when I actually run my custom test set, varying all three of these values does not change what is saved in the visualization directory. Instead, I consistently get 41 ligand files numbered 0-40, a ligand-gt file, and a receptor file. I assume that the ligand-gt file is the randomized and centered starting position of the ligand, and that the ligand structures numbered 0-40 are different time steps of a single diffusion process.

Could I get some clarification on what those three flags (NUM_FOLDS, NUM_SAMPLES, and visualize_first_n_samples) mean, and how I can save all the ranked final predicted structures as PDB files? Thanks so much for your time.

Offline run of DiffDock-pp

Consider adding utilities or modifying the code to work on offline computing resources.
Most HPC clusters do not have a direct connection to the internet,
so using torch.hub to download the ESM model can be problematic!

I came up with a simple solution that could be integrated (or at least mentioned in further examples); a consolidated sketch follows the numbered steps below.

  1. Install the esm package via pip:
    pip install fair-esm
    (https://github.com/facebookresearch/esm)

  2. Download model and regression .pt files:
    https://dl.fbaipublicfiles.com/fair-esm/models/esm2_t33_650M_UR50D.pt
    https://dl.fbaipublicfiles.com/fair-esm/regression/esm2_t33_650M_UR50D-contact-regression.pt

  3. Import esm function to load precomputed models:
    from esm.pretrained import load_model_and_alphabet_local

  4. Modify the data.train.utils.compute_embeddings function [325-327]:
    modelpath = 'path/to/model/esm2_t33_650M_UR50D.pt'
    esm_model, alphabet = load_model_and_alphabet_local(modelpath)
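
Putting steps 1-4 together, a minimal sketch (the model path is a placeholder; as far as I understand, load_model_and_alphabet_local expects the matching -contact-regression.pt file to sit next to the model weights):

    from esm.pretrained import load_model_and_alphabet_local

    # placeholder path; keep esm2_t33_650M_UR50D-contact-regression.pt in the same folder
    MODEL_PATH = "path/to/model/esm2_t33_650M_UR50D.pt"

    esm_model, alphabet = load_model_and_alphabet_local(MODEL_PATH)
    esm_model.eval()  # embeddings only, no training

    batch_converter = alphabet.get_batch_converter()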

I know that with torch.hub it is possible to pre-cache the files and then just load the pre-downloaded ones, but that is not ideal.
This is just a consideration, to make the tool more scalable and useful for other teams!

Victor M

Does this belong to your institution?

I received this email; can I trust the sender? I didn't find his name in the author list of your paper.

The following is the email I received:
Hi there,

My name is Berke, and I’m the Founder and CEO of Superbio.ai. I’m reaching out because I noticed that you starred DiffDock on Github. You are Ming-Qin-tech, correct?

Superbio works to put cutting-edge apps like DiffDock in the hands of researchers, without any setup. You can navigate straight to DiffDock on Superbio and give it a go.

Let me know if you have any questions, and I’d be happy to walk you through the platform.

Kind regards,
Berke Buyuccucak
Founder & CEO, Superbio.ai

Error when running the single_pair_inference example

Hi,

I am getting the following error when running the inference example using the two provided pdb files:

Traceback (most recent call last):
  File "/root/DiffDock-PP/src/main_inf.py", line 620, in <module>
    main()
  File "/root/DiffDock-PP/src/main_inf.py", line 339, in main
    pred_list = evaluate_confidence(model_confidence,samples_loader,args) # TODO -> maybe list inside
  File "/root/DiffDock-PP/src/main_inf.py", line 61, in evaluate_confidence
    pred = model(data)
  File "/opt/mamba/envs/diffdock_pp/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/DiffDock-PP/src/model/model.py", line 84, in forward
    logits = self.encoder(batch)
  File "/opt/mamba/envs/diffdock_pp/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/DiffDock-PP/src/model/diffusion.py", line 279, in forward
    tr_t = batch.complex_t["tr"]
  File "/opt/mamba/envs/diffdock_pp/lib/python3.10/site-packages/torch_geometric/data/hetero_data.py", line 156, in __getattr__
    raise AttributeError(f"'{self.__class__.__name__}' has no "
AttributeError: 'HeteroDataBatch' has no attribute 'complex_t'

Any ideas on what could be causing the problem?

Potentially relevant info: I am running this code on CPU only for now, so I commented out all the sections that assign the model or data to CUDA.

Thanks!

Error when trying single_pair_dataset example

Errors when testing the single_pair_dataset example (1A2K):
Total time spent: 169.33692002296448
ligand_rmsd_summarized: {'mean': 23.944883, 'median': 23.944883, 'std': 0.0, 'lt1': 0.0, 'lt2': 0.0, 'lt5': 0.0, 'lt10': 0.0}
complex_rmsd_summarized: {'mean': 14.106365, 'median': 14.106365, 'std': 0.0, 'lt1': 0.0, 'lt2': 0.0, 'lt5': 0.0, 'lt10': 0.0}
interface_rmsd_summarized: {'mean': 9.92274, 'median': 9.92274, 'std': 0.0, 'lt1': 0.0, 'lt2': 0.0, 'lt5': 0.0, 'lt10': 100.0}
Traceback (most recent call last):
  File "/common/workdir/DiffDock-PP/src/main_inf.py", line 620, in <module>
    main()
  File "/common/workdir/DiffDock-PP/src/main_inf.py", line 354, in main
    dump_predictions(args,results)
  File "/common/workdir/DiffDock-PP/src/main_inf.py", line 383, in dump_predictions
    with open(args.prediction_storage, 'wb') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'storage/run_on_pdb_pairs.pkl'

How to get the best pose

Thank you for sharing this amazing work and a few examples!

I just wanted to ask how we can know which pose is ranked best. I tested your model by running src/db5_inference.sh and found 40 poses generated in the visualization path, but I was not sure which one is the best pose. By the way, does the numbering 0-40 refer to the diffusion time? Many thanks!

DB5Loader single-pair PDB error

Hi (again!)

I have been playing around with your code.
Following your suggestion to use the DB5Loader did the trick!

I found that if one uses a .csv file with only ONE pair of PDBs (i.e., one line), the code fails.
Concretely, the error arises in data_train_utils.compute_embeddings, at the torch.cat call [line 350, 350].

With 2+ pairs of PDBs the script works correctly.

Possible solution:
A simple check of the rec_reps dimensions before running the concatenation (see the sketch below).
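One possible shape of that check (placeholder names; the exact failure mode for a single pair is not diagnosed here, so this is only a sketch of the idea):

    import torch

    def safe_concat(rec_reps):
        # rec_reps: per-protein embedding tensors collected before the torch.cat call
        if torch.is_tensor(rec_reps):
            return rec_reps                    # already a single tensor: nothing to concatenate
        if len(rec_reps) == 1:
            return rec_reps[0]                 # single pair: skip the concatenation
        return torch.cat(rec_reps, dim=0)      # 2+ pairs: original behaviour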

Let me know if you require further info

Victor M

Give binding site as input

Hi,

I have two proteins whose docking I would like to test. I adapted your example script (single_pairs) to run with my proteins, but the DiffDock output seems to be quite far off from the ground-truth position of the ligand (which is available in the structure downloaded from the PDB server).

Is there a way to input some bias toward the binding site, or something that could perform a similar function?

Thanks in advance!

Interpreting output as a docked protein complex

I was able to fix the output PDB files according to #10 and get visuals that look more like proteins. However, when I combine the ligand and receptor PDBs, the ligand and receptor appear to have overlapping coordinates, as if they were simply smashed together rather than docked as a complex. Here is a screenshot of the combined PDB visualized:

[screenshot of the combined receptor and ligand PDB, 2023-07-31]

In the visualization/ output directory I get ligand PDB files with numbers 0 through 40 appended, plus a '-gt'-suffixed ligand PDB and a receptor PDB. I am still trying to figure out the meaning of these outputs.

Is combining the ligand PDB with '-40' appended (the final timestep?) and the receptor output PDB into one single "complex" PDB the correct way to get the output complex?
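
For reference, a naive way to merge the two files into one PDB (placeholder file names based on the outputs described above; this just concatenates the coordinate records):

    # merge receptor + final-timestep ligand into a single "complex" PDB
    receptor_path = "visualization/receptor.pdb"   # placeholder name
    ligand_path = "visualization/ligand-40.pdb"    # placeholder: last timestep

    with open("complex.pdb", "w") as out:
        for path in (receptor_path, ligand_path):
            with open(path) as handle:
                for line in handle:
                    if line.startswith(("ATOM", "HETATM")):  # keep coordinate records only
                        out.write(line)
            out.write("TER\n")  # chain break between the two partners
        out.write("END\n")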

PDB viz code

Hi (again)

It seems that the visualization code (in main_inf.py) is commented out.
It is not clear what should be un-commented in order to visualize the output .pdb files of the estimates.
Any insights?

V

Location of the confidence model score?

Thanks for the amazing work. I have finished the DIPS test demo. Is there a specific file where users can check the confidence scores of the protein pairs?

ToyLoader not implemented?

First, I would like to thank you for open-sourcing this tool, and congrats on the publication!

I wanted to test DiffDock-PP using my own pair of proteins.
I was expecting something similar to DiffDock, where you can pass --receptor-pdb and --ligand-pdb to main_inf.py. This was not the case. I was debugging the code to find a workaround and saw the "toy" database option; nevertheless, the ToyLoader class is not implemented.

Main question:
Any insight into how to pass my own pair of proteins relatively quickly (i.e., without changing the main code)?

Again, thanks!
Victor Montal

E3NN vs Diffusion

Hi,

The default model_type in args.py is "diffusion", but in factory.py the model is loaded only if model_type is "e3nn", and an exception is raised otherwise. Could you explain what's going on here?

Thanks,

Kevin

Obtaining confidence values from the model

Hi there, thanks a lot for your work on this software; it's quite impressive. However, I'm trying to understand whether there is a way to obtain the confidence value of each predicted pose.

I printed out the prediction pickle and got a list composed of this data structure (40 copies of it, matching the number of sampled structures):

name='2Q3A',
center=[1, 3],
receptor={
  pos=[117, 3],
  x=[117, 1281],
},
ligand={
  pos=[117, 3],
  x=[117, 1281],
},
(receptor, contact, receptor)={ edge_index=[2, 2340] },
(ligand, contact, ligand)={ edge_index=[2, 2340] }
), -7.416438102722168)]]

I suspected the -7.41 to be a gradient value rather than the confidence score, but I also couldn't find a clear way to get a handle on this via main_inf.py.
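
For reference, a hedged sketch of loading the dumped pickle and ordering the poses by that number (whether it really is a confidence score is exactly the open question, and the path is a placeholder):

    import pickle

    with open("storage/single_pair_run.pkl", "rb") as f:  # placeholder path
        results = pickle.load(f)

    # the printout above suggests nested lists of (structure, value) pairs,
    # so flatten defensively and sort by the value, highest first
    def flatten(obj):
        if isinstance(obj, list):
            for item in obj:
                yield from flatten(item)
        else:
            yield obj

    pairs = [p for p in flatten(results) if isinstance(p, tuple) and len(p) == 2]
    for data, value in sorted(pairs, key=lambda p: p[1], reverse=True)[:5]:
        print(getattr(data, "name", "?"), value)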
Could you please advise?

Kindest regards,
Yoav

How can I recover the side chains?

Hi,

Thank you for releasing such a nice tool.

After completing docking (a single pair),

I got the result shown below (PyMOL v2.5.0).

[Screenshot, 2023-05-01 2:26 PM: docking result in PyMOL]

The docking outcome is promising, but I also want to see the side chains.

Is there any protocol for recovering the side chains in the docking result?

Sincerely,

Jongseo

Installation Issue

I get the following error while running the first two commands of the -- Local, VM, Docker -- installation instructions. Kindly resolve the issue; I have also checked https://data.pyg.org/whl/torch-1.13.0+cu116.html.

Command: pip install --no-cache-dir torch-scatter==2.0.9 torch-sparse==0.6.15 torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-1.13.0+cu116.html && pip install numpy dill tqdm pyyaml pandas biopandas scikit-learn biopython e3nn wandb tensorboard tensorboardX matplotlib

Looking in indexes: https://pypi.org/simple, https://packagecloud.io/github/git-lfs/pypi/simple
Looking in links: https://data.pyg.org/whl/torch-1.13.0+cu116.html
Collecting torch-scatter==2.0.9
  Downloading https://data.pyg.org/whl/torch-1.13.0%2Bcu116/torch_scatter-2.0.9-cp310-cp310-linux_x86_64.whl (9.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.4/9.4 MB 16.8 MB/s eta 0:00:00
Collecting torch-sparse==0.6.15
  Downloading https://data.pyg.org/whl/torch-1.13.0%2Bcu116/torch_sparse-0.6.15%2Bpt113cu116-cp310-cp310-linux_x86_64.whl (4.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.6/4.6 MB 9.4 MB/s eta 0:00:00
Collecting torch-cluster
  Downloading torch_cluster-1.6.3.tar.gz (54 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.5/54.5 kB 60.6 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... done
Collecting torch-spline-conv
  Downloading https://data.pyg.org/whl/torch-1.13.0%2Bcu116/torch_spline_conv-1.2.2%2Bpt113cu116-cp310-cp310-linux_x86_64.whl (868 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 868.3/868.3 kB 2.9 MB/s eta 0:00:00
Collecting torch-geometric
  Downloading torch_geometric-2.4.0-py3-none-any.whl.metadata (63 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.9/63.9 kB 62.4 MB/s eta 0:00:00
Collecting scipy (from torch-sparse==0.6.15)
  Downloading scipy-1.11.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (60 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.4/60.4 kB 182.3 MB/s eta 0:00:00
Collecting tqdm (from torch-geometric)
  Downloading tqdm-4.66.1-py3-none-any.whl.metadata (57 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 57.6/57.6 kB 183.5 MB/s eta 0:00:00
Collecting numpy (from torch-geometric)
  Downloading numpy-1.26.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.2/61.2 kB 157.0 MB/s eta 0:00:00
Collecting jinja2 (from torch-geometric)
  Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.1/133.1 kB 199.0 MB/s eta 0:00:00
Collecting requests (from torch-geometric)
  Downloading requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)
Collecting pyparsing (from torch-geometric)
  Downloading pyparsing-3.1.1-py3-none-any.whl.metadata (5.1 kB)
Collecting scikit-learn (from torch-geometric)
  Downloading scikit_learn-1.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
Collecting psutil>=5.8.0 (from torch-geometric)
  Downloading psutil-5.9.6-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (21 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch-geometric)
  Downloading MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.0 kB)
Collecting charset-normalizer<4,>=2 (from requests->torch-geometric)
  Downloading charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (33 kB)
Collecting idna<4,>=2.5 (from requests->torch-geometric)
  Downloading idna-3.4-py3-none-any.whl (61 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.5/61.5 kB 143.5 MB/s eta 0:00:00
Collecting urllib3<3,>=1.21.1 (from requests->torch-geometric)
  Downloading urllib3-2.0.7-py3-none-any.whl.metadata (6.6 kB)
Collecting certifi>=2017.4.17 (from requests->torch-geometric)
  Downloading certifi-2023.7.22-py3-none-any.whl.metadata (2.2 kB)
Collecting joblib>=1.1.1 (from scikit-learn->torch-geometric)
  Downloading joblib-1.3.2-py3-none-any.whl.metadata (5.4 kB)
Collecting threadpoolctl>=2.0.0 (from scikit-learn->torch-geometric)
  Downloading threadpoolctl-3.2.0-py3-none-any.whl.metadata (10.0 kB)
Downloading torch_geometric-2.4.0-py3-none-any.whl (1.0 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.0/1.0 MB 189.2 MB/s eta 0:00:00
Downloading psutil-5.9.6-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (283 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 283.6/283.6 kB 183.3 MB/s eta 0:00:00
Downloading numpy-1.26.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.2/18.2 MB 149.2 MB/s eta 0:00:00
Downloading pyparsing-3.1.1-py3-none-any.whl (103 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 103.1/103.1 kB 202.3 MB/s eta 0:00:00
Downloading requests-2.31.0-py3-none-any.whl (62 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.6/62.6 kB 115.7 MB/s eta 0:00:00
Downloading scikit_learn-1.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (10.8 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.8/10.8 MB 143.8 MB/s eta 0:00:00
Downloading scipy-1.11.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (36.4 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 36.4/36.4 MB 181.0 MB/s eta 0:00:00
Downloading tqdm-4.66.1-py3-none-any.whl (78 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 78.3/78.3 kB 126.9 MB/s eta 0:00:00
Downloading certifi-2023.7.22-py3-none-any.whl (158 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 158.3/158.3 kB 200.8 MB/s eta 0:00:00
Downloading charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (142 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 142.1/142.1 kB 169.0 MB/s eta 0:00:00
Downloading joblib-1.3.2-py3-none-any.whl (302 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 302.2/302.2 kB 124.7 MB/s eta 0:00:00
Downloading MarkupSafe-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Downloading threadpoolctl-3.2.0-py3-none-any.whl (15 kB)
Downloading urllib3-2.0.7-py3-none-any.whl (124 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 124.2/124.2 kB 189.8 MB/s eta 0:00:00
Building wheels for collected packages: torch-cluster
  Building wheel for torch-cluster (setup.py) ... error
  error: subprocess-exited-with-error
  
  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> [45 lines of output]
      running bdist_wheel
      running build
      running build_py
      creating build
      creating build/lib.linux-x86_64-cpython-310
      creating build/lib.linux-x86_64-cpython-310/torch_cluster
      copying torch_cluster/knn.py -> build/lib.linux-x86_64-cpython-310/torch_cluster
      copying torch_cluster/typing.py -> build/lib.linux-x86_64-cpython-310/torch_cluster
      copying torch_cluster/sampler.py -> build/lib.linux-x86_64-cpython-310/torch_cluster
      copying torch_cluster/__init__.py -> build/lib.linux-x86_64-cpython-310/torch_cluster
      copying torch_cluster/nearest.py -> build/lib.linux-x86_64-cpython-310/torch_cluster
      copying torch_cluster/testing.py -> build/lib.linux-x86_64-cpython-310/torch_cluster
      copying torch_cluster/rw.py -> build/lib.linux-x86_64-cpython-310/torch_cluster
      copying torch_cluster/grid.py -> build/lib.linux-x86_64-cpython-310/torch_cluster
      copying torch_cluster/radius.py -> build/lib.linux-x86_64-cpython-310/torch_cluster
      copying torch_cluster/graclus.py -> build/lib.linux-x86_64-cpython-310/torch_cluster
      copying torch_cluster/fps.py -> build/lib.linux-x86_64-cpython-310/torch_cluster
      running egg_info
      writing torch_cluster.egg-info/PKG-INFO
      writing dependency_links to torch_cluster.egg-info/dependency_links.txt
      writing requirements to torch_cluster.egg-info/requires.txt
      writing top-level names to torch_cluster.egg-info/top_level.txt
      reading manifest file 'torch_cluster.egg-info/SOURCES.txt'
      reading manifest template 'MANIFEST.in'
      warning: no previously-included files matching '*' found under directory 'test'
      adding license file 'LICENSE'
      writing manifest file 'torch_cluster.egg-info/SOURCES.txt'
      running build_ext
      building 'torch_cluster._graclus_cpu' extension
      creating build/temp.linux-x86_64-cpython-310
      creating build/temp.linux-x86_64-cpython-310/csrc
      creating build/temp.linux-x86_64-cpython-310/csrc/cpu
      gcc -pthread -B /home/ubuntu/anaconda3/envs/diffdock_pp/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/ubuntu/anaconda3/envs/diffdock_pp/include -fPIC -O2 -isystem /home/ubuntu/anaconda3/envs/diffdock_pp/include -fPIC -DWITH_PYTHON -Icsrc -I/home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/include -I/home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/include/TH -I/home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/include/THC -I/home/ubuntu/anaconda3/envs/diffdock_pp/include/python3.10 -c csrc/cpu/graclus_cpu.cpp -o build/temp.linux-x86_64-cpython-310/csrc/cpu/graclus_cpu.o -O2 -Wno-sign-compare -DAT_PARALLEL_OPENMP -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -DTORCH_EXTENSION_NAME=_graclus_cpu -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
      gcc -pthread -B /home/ubuntu/anaconda3/envs/diffdock_pp/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/ubuntu/anaconda3/envs/diffdock_pp/include -fPIC -O2 -isystem /home/ubuntu/anaconda3/envs/diffdock_pp/include -fPIC -DWITH_PYTHON -Icsrc -I/home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/include -I/home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/include/TH -I/home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/include/THC -I/home/ubuntu/anaconda3/envs/diffdock_pp/include/python3.10 -c csrc/graclus.cpp -o build/temp.linux-x86_64-cpython-310/csrc/graclus.o -O2 -Wno-sign-compare -DAT_PARALLEL_OPENMP -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -DTORCH_EXTENSION_NAME=_graclus_cpu -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
      g++ -pthread -B /home/ubuntu/anaconda3/envs/diffdock_pp/compiler_compat -shared -Wl,-rpath,/home/ubuntu/anaconda3/envs/diffdock_pp/lib -Wl,-rpath-link,/home/ubuntu/anaconda3/envs/diffdock_pp/lib -L/home/ubuntu/anaconda3/envs/diffdock_pp/lib -Wl,-rpath,/home/ubuntu/anaconda3/envs/diffdock_pp/lib -Wl,-rpath-link,/home/ubuntu/anaconda3/envs/diffdock_pp/lib -L/home/ubuntu/anaconda3/envs/diffdock_pp/lib build/temp.linux-x86_64-cpython-310/csrc/cpu/graclus_cpu.o build/temp.linux-x86_64-cpython-310/csrc/graclus.o -L/home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-310/torch_cluster/_graclus_cpu.so -s
      building 'torch_cluster._graclus_cuda' extension
      creating build/temp.linux-x86_64-cpython-310/csrc/cuda
      gcc -pthread -B /home/ubuntu/anaconda3/envs/diffdock_pp/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/ubuntu/anaconda3/envs/diffdock_pp/include -fPIC -O2 -isystem /home/ubuntu/anaconda3/envs/diffdock_pp/include -fPIC -DWITH_PYTHON -DWITH_CUDA -Icsrc -I/home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/include -I/home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/include/TH -I/home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/include/THC -I/home/ubuntu/anaconda3/envs/diffdock_pp/include -I/home/ubuntu/anaconda3/envs/diffdock_pp/include/python3.10 -c csrc/cpu/graclus_cpu.cpp -o build/temp.linux-x86_64-cpython-310/csrc/cpu/graclus_cpu.o -O2 -Wno-sign-compare -DAT_PARALLEL_OPENMP -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -DTORCH_EXTENSION_NAME=_graclus_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
      /home/ubuntu/anaconda3/envs/diffdock_pp/bin/nvcc -DWITH_PYTHON -DWITH_CUDA -Icsrc -I/home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/include -I/home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/include/TH -I/home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/include/THC -I/home/ubuntu/anaconda3/envs/diffdock_pp/include -I/home/ubuntu/anaconda3/envs/diffdock_pp/include/python3.10 -c csrc/cuda/graclus_cuda.cu -o build/temp.linux-x86_64-cpython-310/csrc/cuda/graclus_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -O2 --expt-relaxed-constexpr -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -DTORCH_EXTENSION_NAME=_graclus_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_52,code=compute_52 -gencode=arch=compute_52,code=sm_52 -std=c++14
      In file included from csrc/cuda/graclus_cuda.cu:3:
      /home/ubuntu/anaconda3/envs/diffdock_pp/lib/python3.10/site-packages/torch/include/ATen/cuda/CUDAContext.h:10:10: fatal error: cusolverDn.h: No such file or directory
         10 | #include <cusolverDn.h>
            |          ^~~~~~~~~~~~~~
      compilation terminated.
      error: command '/home/ubuntu/anaconda3/envs/diffdock_pp/bin/nvcc' failed with exit code 1
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for torch-cluster
  Running setup.py clean for torch-cluster
Failed to build torch-cluster
ERROR: Could not build wheels for torch-cluster, which is required to install pyproject.toml-based projects

Making sense of PDB files generated by DiffDock-PP

Thank you for providing an example showing how to run DiffDock-PP inference with PDB files. I was able to run the example you provided, but I'm confused by the output: the PDB files generated by the inference script don't look like protein structures. As an example, consider the reference receptor for the 1A2K example. If I look at only the alpha carbons of 1A2K_r_b.pdb, I see the following, which looks like a normal protein structure:

[image: alpha-carbon trace of 1A2K_r_b.pdb]

However, the resulting protein and ligand PDB files look nothing like proteins. Is this a bug, or am I missing something?

[image: generated PDB files that do not resemble protein structures]
