
xtrimomultimer's People

Contributors

fazziekey, kk666-ai, popfido, shenggan


xtrimomultimer's Issues

cannot find 'params_model_1.npz' file

When I use bin/inference.sh to test the code, the error says it cannot find this npz file, and I cannot find it in the GitHub repo either.
The report looks like this:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/data/user/liwh/.conda/envs/xTrimoMultimer/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/data/user/liwh/xTrimoMultimer/inference.py", line 371, in predict_structure
    import_jax_weights_(model, param_path, version=model_name)
  File "/data/user/liwh/xTrimoMultimer/xtrimomultimer/utils/import_weights.py", line 550, in import_jax_weights_
    data = np.load(npz_path)
  File "/data/user/liwh/.conda/envs/xTrimoMultimer/lib/python3.9/site-packages/numpy/lib/npyio.py", line 405, in load
    fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: './data/params/params_model_1.npz'

Could someone experienced tell me what's going on?
Thanks so much.
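For context, the params_model_*.npz files loaded by import_jax_weights_ are the AlphaFold parameter files released by DeepMind; they are not shipped in this repository and have to be downloaded separately into the directory the script expects (./data/params/ here, or whatever --param_dir points to). A minimal sketch of fetching them in Python, assuming the standard AlphaFold parameter archive URL (the release date in the URL may need adjusting, and the download is several gigabytes):

  import tarfile
  import urllib.request
  from pathlib import Path

  # Assumed URL of DeepMind's AlphaFold parameter archive; pick the
  # release date that matches the models you want to run.
  PARAMS_URL = "https://storage.googleapis.com/alphafold/alphafold_params_2022-12-06.tar"

  param_dir = Path("./data/params")
  param_dir.mkdir(parents=True, exist_ok=True)

  tar_path = param_dir / "alphafold_params.tar"
  urllib.request.urlretrieve(PARAMS_URL, tar_path)   # multi-gigabyte download
  with tarfile.open(tar_path) as tar:
      tar.extractall(param_dir)                      # yields params_model_1.npz, ...

  print(sorted(p.name for p in param_dir.glob("params_model_*.npz")))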

only size-1 arrays can be converted to Python scalars

Feature processing failed with the error below. Could anyone give a suggestion?

Traceback (most recent call last):
  File "inference.py", line 657, in <module>
    main(args)
  File "inference.py", line 566, in main
    is_multimer=global_is_multimer,
  File "/data/xTrimoMultimer/xtrimomultimer/data/feature_pipeline.py", line 108, in process_features
    mode=mode,
  File "/data/xTrimoMultimer/xtrimomultimer/data/feature_pipeline.py", line 77, in np_example_to_features
    num_res = int(np_example["seq_length"])
TypeError: only size-1 arrays can be converted to Python scalars
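The error means np_example["seq_length"] arrived as a NumPy array with more than one element (for example one entry per chain, or one per residue), and int() refuses to collapse it. A hedged reproduction, not taken from the actual pipeline:

  import numpy as np

  # int() converts a size-1 array:
  print(int(np.array([128])))        # -> 128

  # ...but refuses anything larger, which is exactly the reported error.
  # (Hypothetical two-chain example; the real feature may be per-residue.)
  seq_length = np.array([128, 95])
  try:
      num_res = int(seq_length)
  except TypeError as err:
      print(err)                     # only size-1 arrays can be converted to Python scalars

  # One hedged workaround, assuming the first entry is the intended length:
  num_res = int(seq_length[0])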

Disturbing code with the hhsearch and hmmsearch wrappers

I just cloned the project, gave inference a try, and ran into this:

Traceback (most recent call last):
  File "inference.py", line 658, in <module>
    main(args)
  File "inference.py", line 535, in main
    alignment_runner.run(chain_fasta_path, local_alignment_dir)
  File "/xTrimoMultimer/xtrimomultimer/data/data_pipeline.py", line 476, in run
    if self.template_searcher.input_format == "sto":
AttributeError: 'HHSearch' object has no attribute 'input_format'

So I took a look at the HHSearch object in data/tools/hhsearch.py. It turns out it doesn't have an input_format property. I then searched for that property name within the package and found it is provided by the Hmmsearch class. It seems these two wrappers are similar enough that the HHSearch class should have had the same properties and methods, but they were apparently left out; a sketch of the missing properties is below.
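For reference, one way the wrapper could be aligned with what data_pipeline.py expects. This is a sketch, not the project's actual implementation; the property values are assumptions based on the formats HHsearch normally consumes (a3m alignments) and produces (.hhr hit files), mirroring the "sto" value the Hmmsearch wrapper reports:

  # Hypothetical patch sketch for xtrimomultimer/data/tools/hhsearch.py.
  from typing import Sequence

  class HHSearch:
      def __init__(self, binary_path: str, databases: Sequence[str]):
          self.binary_path = binary_path
          self.databases = list(databases)

      @property
      def input_format(self) -> str:
          # HHsearch queries are a3m alignments (vs. "sto" for hmmsearch).
          return "a3m"

      @property
      def output_format(self) -> str:
          # HHsearch writes .hhr hit files.
          return "hhr"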

Build fails, missing colossalai

Building the Docker image on Alma 8 fails:

$ docker buildx build . --file Dockerfile --tag xtrimomultimer:latest
...
#0 398.5 Linking aiohttp-3.8.3-py37h540881e_0
#0 398.6 Linking google-auth-1.35.0-pyh6c4a22f_0
#0 398.6 Linking google-auth-oauthlib-0.4.6-pyhd8ed1ab_0
#0 398.6 Linking tensorboard-2.4.1-pyhd8ed1ab_1
#0 398.8 Linking pytorch-lightning-1.5.10-pyhd8ed1ab_0
#0 399.1 Transaction finished
#0 399.1 
#0 399.1 Installing pip packages: deepspeed==0.5.3, ml-collections==0.1.0, colossalai
#0 400.3 Collecting deepspeed==0.5.3
#0 401.0   Downloading deepspeed-0.5.3.tar.gz (477 kB)
#0 401.2      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 477.5/477.5 kB 4.4 MB/s eta 0:00:00
#0 401.3   Preparing metadata (setup.py): started
#0 403.4   Preparing metadata (setup.py): finished with status 'done'
#0 403.5 Collecting ml-collections==0.1.0
#0 403.5   Downloading ml_collections-0.1.0-py3-none-any.whl (88 kB)
#0 403.5      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 88.7/88.7 kB 6.4 MB/s eta 0:00:00
#0 403.6 ERROR: Could not find a version that satisfies the requirement colossalai (from versions: none)
#0 403.6 ERROR: No matching distribution found for colossalai
#0 404.1 critical libmamba pip failed to install packages
------
ERROR: failed to solve: executor failed running [/bin/sh -c micromamba create -n xtrimomultimer -f /opt/xTrimoMultimer/requirements/environment.yaml -y  && micromamba clean --all]: exit code: 1

https://colossalai.org/docs/get_started/installation/ was last updated 17-Oct-22; it looks like they have done away with the vanilla pip install:

gammut@host2[~]:pip3 install --user colossalai
ERROR: Could not find a version that satisfies the requirement colossalai (from versions: none)
ERROR: No matching distribution found for colossalai

failed in hhblits

(screenshot of the hhblits error was attached to the original issue)

ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (5,) + inhomogeneous part.

  • inference.sh

python inference.py /data/scratch/alphafold/alphafold//pdb_mmcif/mmcif_files/ \
    --fasta_paths target.fasta \
    --model_preset multimer \
    --output_dir ./tmp/ \
    --param_dir /data/scratch/alphafold/alphafold//params/ \
    --gpu_nums 1 \
    --cpus 12 \
    --use_fastfold_optimize \
    --uniref90_database_path /data/scratch/alphafold/alphafold//uniref90/uniref90.fasta \
    --mgnify_database_path /data/scratch/alphafold/alphafold//mgnify/mgy_clusters_2018_12.fa \
    --pdb70_database_path /data/scratch/alphafold/alphafold//pdb70/pdb70 \
    --uniclust30_database_path /data/scratch/alphafold/alphafold//uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
    --pdb_seqres_database_path /home/lcmql/data/pdb_seqres.txt \
    --bfd_database_path /data/scratch/alphafold/alphafold/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
    --jackhmmer_binary_path $(which jackhmmer) \
    --hhblits_binary_path $(which hhblits) \
    --hhsearch_binary_path $(which hhsearch) \
    --hmmsearch_binary_path $(which hmmsearch) \
    --kalign_binary_path $(which kalign) \
    --hmmbuild_binary_path $(which hmmbuild) \
    --model_name model_1_multimer,model_2_multimer,model_3_multimer,model_4_multimer,model_5_multimer
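The "inhomogeneous shape ... (5,)" wording usually means NumPy was asked to stack five arrays of unequal length into one array, something NumPy 1.24+ refuses to do implicitly (older versions only warned and built an object array). A minimal reproduction, not tied to the actual feature or result names in the pipeline:

  import numpy as np

  # Five per-model (or per-chain) arrays of unequal length -- the
  # "(5,) + inhomogeneous part" in the error message:
  ragged = [np.zeros(10), np.zeros(10), np.zeros(12), np.zeros(10), np.zeros(10)]

  try:
      np.array(ragged)                 # NumPy >= 1.24 raises ValueError here
  except ValueError as err:
      print(err)

  # Older NumPy silently built an object array; asking for one explicitly
  # restores that behaviour if a ragged container is really what is wanted:
  obj = np.array(ragged, dtype=object)
  print(obj.shape)                     # (5,)

Pinning NumPy below 1.24, or passing dtype=object at the offending call, are the usual workarounds while the underlying length mismatch is tracked down.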

setting/script for benchmark of long sequence inference

Can you share the details of the following benchmark? Is it an end-to-end benchmark? On an A100 40G? Multimer or single chain (since OpenFold didn't support multimer yet)? Are there any scripts to reproduce the benchmark results?
What about the single-GPU results for xTrimoMultimer?
(benchmark figure was attached to the original issue)

xtrimomultimer/utils/__init__.py", line 2, No module named 'xtrimomultimer.utils.rich_utils'

python /inference.py --help

Python 3.7.12
Traceback (most recent call last):
  File "/opt/xTrimoMultimer/inference.py", line 34, in <module>
    import xtrimomultimer.np.relax.relax as relax
  File "/opt/xTrimoMultimer/xtrimomultimer/np/relax/__init__.py", line 11, in <module>
    _modules = [(m, importlib.import_module("." + m, __name__)) for m in __all__]
  File "/opt/xTrimoMultimer/xtrimomultimer/np/relax/__init__.py", line 11, in <listcomp>
    _modules = [(m, importlib.import_module("." + m, __name__)) for m in __all__]
  File "/opt/conda/envs/xtrimomultimer/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/opt/xTrimoMultimer/xtrimomultimer/np/relax/relax.py", line 19, in <module>
    from xtrimomultimer.np.relax import amber_minimize, utils
  File "/opt/xTrimoMultimer/xtrimomultimer/np/relax/amber_minimize.py", line 23, in <module>
    import xtrimomultimer.utils.loss as loss
  File "/opt/xTrimoMultimer/xtrimomultimer/utils/__init__.py", line 2, in <module>
    from xtrimomultimer.utils.rich_utils import enforce_tags, print_config_tree
ModuleNotFoundError: No module named 'xtrimomultimer.utils.rich_utils'

https://github.com/biomap-research/xTrimoMultimer/tree/main/xtrimomultimer/utils
I can't find rich_utils.py there.
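Until the missing module is added upstream, a hedged workaround is to drop a minimal stub into xtrimomultimer/utils/rich_utils.py so the package imports. Only the two function names are known from the traceback; the signatures and no-op bodies below are placeholders, not the upstream implementation:

  # xtrimomultimer/utils/rich_utils.py -- minimal stand-in so the package imports.

  def enforce_tags(*args, **kwargs):
      """No-op placeholder for the missing upstream helper."""
      return None

  def print_config_tree(*args, **kwargs):
      """No-op placeholder for the missing upstream helper."""
      return None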
