topazape / lstm_chem
Implementation of the paper "Generative Recurrent Networks for De Novo Drug Design".
License: The Unlicense
I tried to reproduce the results with the same dataset (ChEMBL22) that the author said was used in the paper, referring to the code you created, but my results are different.
I tried the following query:

```sql
SELECT DISTINCT canonical_smiles
FROM compound_structures
WHERE molregno IN (
    SELECT DISTINCT molregno
    FROM activities
    WHERE standard_type IN ("Kd", "Ki", "Kb", "IC50", "EC50")
      AND standard_units = "nM"
);
```

The result is 802,320 rows.
The author said it was a "dataset of 677,044 SMILES strings with annotated nanomolar activities (Kd/i/B, IC/EC50) from ChEMBL22".
So I used ChEMBL22, added `standard_units = "nM"` for "nanomolar", and `standard_type IN ("Kd", "Ki", "Kb", "IC50", "EC50")` for "activities (Kd/i/B, IC/EC50)".
What did I miss?
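One possible explanation (an assumption on my part, not confirmed by the author) is that the SQL query is followed by post-processing that shrinks the 802,320 rows toward 677,044: for example, dropping salts/mixtures, capping the SMILES length, and deduplicating. A minimal sketch of such filtering (the `max_len` of 74 and the rules themselves are illustrative, not the paper's actual pipeline):

```python
def filter_smiles(smiles_list, max_len=74):
    """Illustrative cleanup: drop mixtures, overlong strings, and duplicates."""
    seen = set()
    kept = []
    for smi in smiles_list:
        smi = smi.strip()
        if not smi or "." in smi:   # skip salts / multi-component entries
            continue
        if len(smi) > max_len:      # skip overly long SMILES
            continue
        if smi in seen:             # deduplicate
            continue
        seen.add(smi)
        kept.append(smi)
    return kept

sample = ["CCO", "CCO", "CC(=O)O.[Na+]", "C" * 100, "c1ccccc1"]
print(filter_smiles(sample))  # → ['CCO', 'c1ccccc1']
```

Even small differences in such filters (or in the ChEMBL point release) can easily account for a six-figure gap in row counts.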
Model training and molecule generation run on only a single GPU, even when 4 GPUs are available.
How can I run on all available GPUs?
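The repository itself does not appear to ship multi-GPU support, but since the model is a standard tf.keras model, one common approach is `tf.distribute.MirroredStrategy`, which replicates the model across all visible GPUs and splits each batch among them (falling back to a single replica on CPU). A sketch with illustrative layer sizes, not the repository's actual config:

```python
import tensorflow as tf

# MirroredStrategy replicates variables onto every visible GPU; with no
# GPUs it falls back to one CPU replica, so this also runs on CPU.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Build and compile inside the scope so variables are mirrored.
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(256, input_shape=(None, 64), return_sequences=True),
        tf.keras.layers.Dense(64, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")

# Then call model.fit(...) as usual; the global batch size is divided
# across replicas, so you may want to scale it up by the replica count.
```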
Hello,
I was trying to run the fine-tuning notebook with a different dataset for my model and it is taking too long. I was wondering how to use the GPU version of TensorFlow to run the code?
It would be great if you could help. Thank you!
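Assuming a CUDA-capable GPU and matching drivers are installed, TensorFlow picks up the GPU automatically once the GPU build is present; the notebook code itself does not need to change. For the TF 1.x-era API this repository targets, that means installing `tensorflow-gpu` (the pinned version below is illustrative) and then verifying the GPU is visible:

```python
# After e.g.:  pip install tensorflow-gpu==1.15   (illustrative version pin)
import tensorflow as tf

# On TF >= 2.1 use tf.config.list_physical_devices; on 1.15 the same call
# lives under tf.config.experimental.list_physical_devices.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)  # empty list -> CPU-only build or missing CUDA
```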
Hi
Thanks for your great work.
At the end of epoch 2, before the model was saved on the last iteration, I got this error:
```
Exception in thread Thread-2:
Traceback (most recent call last):
  File "C:\Users\Radical\Anaconda3\envs\myenv\lib\threading.py", line 926, in _bootstrap_inner
    self.run()
  File "C:\Users\Radical\Anaconda3\envs\myenv\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Radical\Anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\keras\utils\data_utils.py", line 748, in _run
    with closing(self.executor_fn(_SHARED_SEQUENCES)) as executor:
  File "C:\Users\Radical\Anaconda3\envs\myenv\lib\site-packages\tensorflow_core\python\keras\utils\data_utils.py", line 727, in pool_fn
    initargs=(seqs, None, get_worker_id_queue()))
  File "C:\Users\Radical\Anaconda3\envs\myenv\lib\multiprocessing\context.py", line 119, in Pool
    context=self.get_context())
  File "C:\Users\Radical\Anaconda3\envs\myenv\lib\multiprocessing\pool.py", line 176, in __init__
    self._repopulate_pool()
  File "C:\Users\Radical\Anaconda3\envs\myenv\lib\multiprocessing\pool.py", line 241, in _repopulate_pool
    w.start()
  File "C:\Users\Radical\Anaconda3\envs\myenv\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\Radical\Anaconda3\envs\myenv\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\Radical\Anaconda3\envs\myenv\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\Radical\Anaconda3\envs\myenv\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
RuntimeError: dictionary changed size during iteration

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\Radical\Anaconda3\envs\myenv\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\Radical\Anaconda3\envs\myenv\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
```
Could you explain why this is happening, or how to fix it? Thank you.
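This traceback is characteristic of Keras data-loading workers on Windows: worker processes are created with spawn, so the generator gets pickled across the process boundary, which can raise exactly this `RuntimeError` / `EOFError` pair. A common workaround (assuming the training call is a standard Keras-style `fit` with worker processes; the toy Sequence and model below are only stand-ins for the repository's real ones) is to disable multiprocessing workers:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in Sequence and model; only the fit(...) arguments matter here.
class ToySeq(tf.keras.utils.Sequence):
    def __len__(self):
        return 2
    def __getitem__(self, idx):
        x = np.random.rand(4, 3).astype("float32")
        y = np.random.rand(4, 1).astype("float32")
        return x, y

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="adam", loss="mse")

# Workaround: no worker processes -> nothing is pickled across a spawn.
# (workers/use_multiprocessing are fit() arguments in the TF 1.15-2.15 era
# this repository targets; newer Keras moved them onto the dataset object.)
history = model.fit(ToySeq(), epochs=1, workers=1,
                    use_multiprocessing=False, verbose=0)
```

Training is somewhat slower without worker processes, but on Windows it avoids the spawn-pickling path entirely.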
[Errno 2] No such file or directory: './datasets/fine-tune.smi'
I've noticed that although BatchSize is configurable for fine-tuning, it must effectively always be 1 for the code to function properly. If one sets it larger than 1, it triggers an error because self.max_len = 0 and no padding takes place. I don't know how training would be affected by using max_len, versus batch_size = 1 without it.
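The padding step that the `self.max_len = 0` path skips can be sketched as follows (an assumption about the intended behavior; the function name and pad value are illustrative, not the repository's): pad every tokenized SMILES in a batch to a common length so the arrays stack into one tensor, guarding against a zero `max_len`.

```python
def pad_batch(sequences, max_len=None, pad_value=0):
    """Pad token sequences to a common length so batch_size > 1 stacks cleanly."""
    if not max_len:  # guard against max_len == 0 or None
        max_len = max(len(s) for s in sequences)
    return [s + [pad_value] * (max_len - len(s)) for s in sequences]

batch = [[1, 2, 3], [4, 5]]
print(pad_batch(batch))  # → [[1, 2, 3], [4, 5, 0]]
```

With batch_size = 1 no padding is needed, which is presumably why the current code only works in that configuration; the main training effect of padding is that the loss must mask (or learn to emit) the pad token.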
In your project I can't see the file used for fine-tuning. How can I get it? If you have it, could you give me the file or tell me how to obtain it? Thanks.
Hi,
Really loved your work. I went through your code and the paper; the code is well written and documented.
I tried running the model and yes, it does take a lot of time on a CPU, especially on a laptop's mobile chip. Unfortunately, I don't have the luxury of getting a GPU, for two reasons:
If you could make your transfer-learning model public (i.e., the trained model created from model.save()), it would be a great help. I could then alter your code and use transfer learning for my use case.
One solution I found was to use 100K data points to train the initial network (LSTM cell + RNN); however, that is also surprisingly time-consuming on the CPU, and the results will suffer.
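If the trained weights were published, the transfer-learning pattern would look roughly like this (a sketch, not the author's code: the tiny model below is a stand-in for the real architecture, and the file path is hypothetical): save or download the pretrained model, reload it, and freeze all but the last layer before fine-tuning.

```python
import os
import tempfile
import tensorflow as tf

# Stand-in for the pretrained network; in practice you would download the
# author's published weights file instead of building and saving one here.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(8, input_shape=(None, 5), return_sequences=True),
    tf.keras.layers.Dense(5, activation="softmax"),
])
path = os.path.join(tempfile.mkdtemp(), "pretrained_lstm_chem.h5")  # hypothetical
model.save(path)

base = tf.keras.models.load_model(path)
for layer in base.layers[:-1]:
    layer.trainable = False  # keep the learned features fixed
base.compile(optimizer="adam", loss="categorical_crossentropy")
```

Fine-tuning only the top layer this way is far cheaper on a CPU than retraining the whole network on 100K data points.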
Awaiting your reply
Yours sincerely
I have run train.py and am trying to run the example_randomly_generated_SMILES notebook in Jupyter, but I cannot run the cell that imports LSTMChem from lstm_chem.model: the kernel keeps dying.
Is there any way to resolve this?
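One low-cost diagnostic (an assumption: silent Jupyter kernel deaths on import are often caused by the TensorFlow build itself, e.g. a CPU without AVX support or a version mismatch with the repository's TF 1.x-era API) is to import TensorFlow directly in a fresh kernel, before touching lstm_chem, and confirm it survives:

```python
import sys
import tensorflow as tf

print(sys.version)
print(tf.__version__)  # this repository targets the TF 1.x-era API
```

If this cell alone kills the kernel, the problem is the TensorFlow installation rather than anything in lstm_chem.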
Hi topazape,
Thank you for developing such excellent code.
I have some questions and I hope I can get your help.
When I run the command `python train.py`, an AttributeError occurred.
I noticed that gwanseum had the same problem, so I have borrowed his screenshots here.
I didn't see your discussion with gwanseum.
I hope I can get your help!
Thanks in advance for your help,
Sincerely
Hello topazape,
Could you please share the file named 'LSTM_Chem-22-0.42.hdf5'?
Thank you in advance!
Fantastic implementation of the paper. However, the paper has one more fine-tuning method, called 'fragment growing': if you give one fragment as a SMILES string, it will generate SMILES around that fragment. Is there any direction you can point me to?
Thanks to the author for the code.
I'd like to ask you some questions:
1. What are the memory capacity and GPU model of your computer?
2. What is the GPU memory footprint while training?
Looking forward to your reply. I want to know whether my GTX 1060 GPU can train with this code.
Heartfelt thanks.