
DeepConvDTI

Overview

This Python script trains, validates, and tests a deep learning model for prediction of drug-target interactions (DTIs). The model is built with Keras on top of TensorFlow. You can set almost every hyper-parameter as you want; see the parameter descriptions below. DTI, drug, and target protein data must be provided in .csv format, and feature columns must be tab-delimited so the script can parse them. By default, the script builds a convolutional neural network on the protein sequence. If you want traditional dense (fully connected) layers on a provided protein feature instead of convolution, specify the type and length of that feature.

Requirement

tensorflow >1.0, <2.0
keras >2.0
numpy
pandas
scikit-learn

Usage

  usage: DeepConvDTI.py [-h] [--test-name [TEST_NAME [TEST_NAME ...]]]
                     [--test-dti-dir [TEST_DTI_DIR [TEST_DTI_DIR ...]]]
                     [--test-drug-dir [TEST_DRUG_DIR [TEST_DRUG_DIR ...]]]
                     [--test-protein-dir [TEST_PROTEIN_DIR [TEST_PROTEIN_DIR ...]]]
                     [--with-label WITH_LABEL]
                     [--window-sizes [WINDOW_SIZES [WINDOW_SIZES ...]]]
                     [--protein-layers [PROTEIN_LAYERS [PROTEIN_LAYERS ...]]]
                     [--drug-layers [DRUG_LAYERS [DRUG_LAYERS ...]]]
                     [--fc-layers [FC_LAYERS [FC_LAYERS ...]]]
                     [--learning-rate LEARNING_RATE] [--n-epoch N_EPOCH]
                     [--prot-vec PROT_VEC] [--prot-len PROT_LEN]
                     [--drug-vec DRUG_VEC] [--drug-len DRUG_LEN]
                     [--activation ACTIVATION] [--dropout DROPOUT]
                     [--n-filters N_FILTERS] [--batch-size BATCH_SIZE]
                     [--decay DECAY] [--validation] [--predict]
                     [--save-model SAVE_MODEL] [--output OUTPUT]
                     dti_dir drug_dir protein_dir

Data Specification

All training, validation, and test data should follow this specification to be parsed correctly by DeepConv-DTI.

  • The model takes three types of data as a set: drug-target interaction data, target protein data, and compound data.

  • All three should be in .csv format.

  • In feature columns, each dimension of the feature vector should be delimited with a tab (\t).

After the three files are listed correctly, the target protein data and compound data are joined with the drug-target interaction data, generating the DTI features, as sketched below.
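A minimal sketch of this join using pandas (illustrative only; the file names are the toy-example ones, and the repository's exact code may differ):

    import pandas as pd

    # The three input files, following the specification above.
    dti = pd.read_csv("training_dti.csv")             # Protein_ID, Compound_ID, Label
    proteins = pd.read_csv("training_protein.csv")    # Protein_ID, Sequence, ...
    compounds = pd.read_csv("training_compound.csv")  # Compound_ID, morgan_r2, ...

    # Join target and compound information onto each interaction pair,
    # yielding one row per DTI with its protein and drug features.
    merged = dti.merge(proteins, on="Protein_ID").merge(compounds, on="Compound_ID")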

Drug target interaction data

Drug-target interaction data should have at least two columns, Protein_ID and Compound_ID,

and should also have a Label column except in the --test case. The Label column has to contain 0 for negative and 1 for positive interactions.

    Protein_ID    Compound_ID    Label
    PID001        CID001         0
    ...           ...            ...
    PID100        CID100         1

Target protein data

Because DeepConvDTI focuses on convolution over the protein sequence, the protein data specification is a little different from the other data.

If a Sequence column is present in the data and --prot-vec is Convolution, convolution will be executed on the protein sequence.

If you instead specify another type of column with --prot-vec (e.g. Prot2Vec), a dense (fully connected) network will be constructed on that feature.

The Protein_ID column is used as a foreign key referencing Protein_ID in the drug-target interaction data.

    Protein_ID    Sequence        Prot2Vec
    PID001        MALAC....ACC    0.539\t-0.579\t...\t0.39

Compound data

The format is basically the same as for target protein data, but without the Convolution option.

The Compound_ID column is used as a foreign key referencing Compound_ID in the drug-target interaction data.

    Compound_ID    morgan_r2
    CID001         0\t1\t...\t0\t1
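A minimal sketch of producing such a tab-delimited fingerprint column with RDKit (RDKit and these SMILES strings are assumptions for illustration; the repository does not ship this preprocessing):

    import pandas as pd
    from rdkit import Chem
    from rdkit.Chem import AllChem

    smiles = {"CID001": "CCO", "CID002": "c1ccccc1"}  # hypothetical compounds

    rows = []
    for cid, smi in smiles.items():
        mol = Chem.MolFromSmiles(smi)
        # Morgan fingerprint with radius 2, folded to 2048 bits (matches --drug-len 2048).
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
        rows.append({"Compound_ID": cid, "morgan_r2": "\t".join(str(b) for b in fp)})

    pd.DataFrame(rows).to_csv("compound.csv", index=False)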

Parameter specification

Positional arguments

    dti_dir               Training DTI information [drug, target, label]
    drug_dir              Training drug information [drug, SMILES,[feature_name,
                          ..]]
    protein_dir           Training protein information [protein, seq,
                          [feature_name]]

To train a model, you must provide three files: a DTI information file, a drug information file, and a target protein information file, formatted as specified above.

The DTI information file for training must have a Label column.

Optional arguments

    --validation          Execute validation with independent data; reports AUC
                          and AUPR (no prediction result)
    --predict             Predict interactions of independent test set

The DeepConvDTI script has two modes, validation and prediction.

In validation mode, performances (AUC, AUPR, and the thresholds for them) at each epoch and the selected hyperparameters are recorded.

In prediction mode, prediction results for the given test datasets are reported after training.

    --test-name [TEST_NAME [TEST_NAME ...]], -n [TEST_NAME [TEST_NAME ...]]
                          Name of test data sets
    --test-dti-dir [TEST_DTI_DIR [TEST_DTI_DIR ...]], -i [TEST_DTI_DIR [TEST_DTI_DIR ...]]
                          Test dti [drug, target, [label]]
    --test-drug-dir [TEST_DRUG_DIR [TEST_DRUG_DIR ...]], -d [TEST_DRUG_DIR [TEST_DRUG_DIR ...]]
                          Test drug information [drug, SMILES,[feature_name,
                          ..]]
    --test-protein-dir [TEST_PROTEIN_DIR [TEST_PROTEIN_DIR ...]], -t [TEST_PROTEIN_DIR [TEST_PROTEIN_DIR ...]]
                          Test Protein information [protein, seq,
                          [feature_name]]
    --with-label WITH_LABEL, -W WITH_LABEL
                          Existence of label information in test DTI

You can input multiple datasets for validation or testing with these argument specifiers.

In addition to the DTI information file, drug information file, and target protein information file, you need to provide a name for each validation or test dataset.

For a test dataset, you can indicate whether it has labels or not with the -W value.

    --n-epoch N_EPOCH, -e N_EPOCH
                          The number of epochs for training or validation

The number of epochs for model training.

In validation mode, performance evaluation stops after the specified number of epochs.

In prediction mode, the test datasets are evaluated after the specified number of epochs.

    --prot-vec PROT_VEC, -v PROT_VEC
                          Type of protein feature; if Convolution, convolution
                          is executed on the sequence
    --prot-len PROT_LEN, -l PROT_LEN
                          Protein vector length
    --drug-vec DRUG_VEC, -V DRUG_VEC
                          Type of drug feature
    --drug-len DRUG_LEN, -L DRUG_LEN
                          Drug vector length

Parameters for parsing data.

The model will parse the columns with the specified names for the drug and target protein features.

You also need to give the length of each feature to be parsed.

As a special case, with -v Convolution the model builds convolution layers on the Sequence column of the dataset, as sketched below.
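For intuition, here is a minimal sketch of how a protein sequence can be integer-encoded and padded to --prot-len (2500 in the toy examples) before the convolution layers; the repository's exact encoding may differ:

    import numpy as np
    from keras.preprocessing.sequence import pad_sequences

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    AA_INDEX = {aa: i + 1 for i, aa in enumerate(AMINO_ACIDS)}  # 0 reserved for padding

    def encode_sequence(seq, max_len=2500):
        # Unknown residues fall back to the padding index here (a simplification).
        codes = [AA_INDEX.get(aa, 0) for aa in seq[:max_len]]
        return pad_sequences([codes], maxlen=max_len, padding="post")[0]

    encoded = encode_sequence("MALAC")  # shape (2500,), ready for an embedding layer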

    --window-sizes [WINDOW_SIZES [WINDOW_SIZES ...]], -w [WINDOW_SIZES [WINDOW_SIZES ...]]
                          Window sizes for model (only works for Convolution)
    --protein-layers [PROTEIN_LAYERS [PROTEIN_LAYERS ...]], -p [PROTEIN_LAYERS [PROTEIN_LAYERS ...]]
                          Dense layers for protein
    --drug-layers [DRUG_LAYERS [DRUG_LAYERS ...]], -c [DRUG_LAYERS [DRUG_LAYERS ...]]
                          Dense layers for drugs
    --fc-layers [FC_LAYERS [FC_LAYERS ...]], -f [FC_LAYERS [FC_LAYERS ...]]
                          Dense layers for concatenated layers of drug and
                          target layer
    --n-filters N_FILTERS, -F N_FILTERS
                          Number of filters for convolution layer, only works
                          for Convolution

Hyperparameters that determine the shape of the neural network.

If you want deeper layers, you can write, for example, -c 128 32, which constructs two consecutive dense layers on the input drug feature with 128 and 32 units, as in the sketch below.
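As a rough illustration (not the repository's exact code), a layer specification such as -c 512 128 can be turned into stacked Dense layers like this:

    from keras.layers import Dense, Input
    from keras.models import Model

    drug_layers = [512, 128]       # parsed from -c 512 128
    inputs = Input(shape=(2048,))  # e.g. a 2048-bit Morgan fingerprint

    x = inputs
    for units in drug_layers:
        # Each listed value adds one fully connected layer with that many units.
        x = Dense(units, activation="elu")(x)

    drug_branch = Model(inputs, x)
    drug_branch.summary()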

    --activation ACTIVATION, -a ACTIVATION
                          Activation function of model
    --dropout DROPOUT, -D DROPOUT
                          Dropout ratio

Other hyperparameters.

We do not recommend using dropout.

    --batch-size BATCH_SIZE, -b BATCH_SIZE
                          Batch size
    --learning-rate LEARNING_RATE, -r LEARNING_RATE
                          Learning rate for training
    --decay DECAY, -y DECAY
                          Learning rate decay

Hyperparameters for training.

    --save-model SAVE_MODEL, -m SAVE_MODEL
                          Path to save the trained model

The trained model is saved to this path after training is done.
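A model saved this way can also be reloaded directly with Keras (a minimal sketch; predict_with_model.py below is the supported route, and custom layers may require extra arguments):

    from keras.models import load_model

    # Path given to --save-model / -m during training.
    model = load_model("./model.model")
    model.summary()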

    --output OUTPUT, -o OUTPUT
                          Prediction output

Path of the output .csv file.

In validation mode, it will contain results for every epoch.

In prediction mode, it will contain prediction results for the given DTI pairs.

Toy Examples

Training and validation of model

For example, suppose you have a training dataset consisting of toy_examples/training_dataset/training_dti.csv, toy_examples/training_dataset/training_compound.csv, and toy_examples/training_dataset/training_protein.csv in the correct format. You can then validate the model with the validation dataset toy_examples/validation_dataset/validation_dti.csv, toy_examples/validation_dataset/validation_compound.csv, and toy_examples/validation_dataset/validation_protein.csv by using this command line:

python DeepConvDTI.py ./toy_examples/training_dataset/training_dti.csv ./toy_examples/training_dataset/training_compound.csv ./toy_examples/training_dataset/training_protein.csv --validation -n validation_dataset -i ./toy_examples/validation_dataset/validation_dti.csv -d ./toy_examples/validation_dataset/validation_compound.csv -t ./toy_examples/validation_dataset/validation_protein.csv -W -c 512 128 -w 10 15 20 25 30 -p 128 -f 128 -r 0.0001 -v Convolution -l 2500 -V morgan_fp_r2 -L 2048 -D 0 -a elu -F 128 -b 32 -y 0.0001 -o ./validation_output.csv -m ./model.model -e 1

This command will train the model with the given hyper-parameters for one epoch (because of -e 1), resulting in a validation result file ./validation_output.csv and a corresponding model ./model.model.

Prediction by using model

There are two ways to predict DTIs from the command line. Suppose you have toy_examples/test_dataset/test_dti.csv, toy_examples/test_dataset/test_compound.csv, and toy_examples/test_dataset/test_protein.csv.

python DeepConvDTI.py ./toy_examples/training_dataset/training_dti.csv ./toy_examples/training_dataset/training_compound.csv ./toy_examples/training_dataset/training_protein.csv --predict -n predict -i ./toy_examples/test_dataset/test_dti.csv -d ./toy_examples/test_dataset/test_compound.csv -t ./toy_examples/test_dataset/test_protein.csv -c 512 128 -w 10 15 20 25 30 -p 128 -f 128 -r 0.0001 -v Convolution -l 2500 -V morgan_fp_r2 -L 2048 -D 0 -a elu -F 128 -b 32 -y 0.0001 -o ./test_output.csv -m ./model.model -e 15 -W

With this command, the model is trained on the training dataset for 15 epochs and then predicts the test dataset, resulting in test_output.csv, which has the prediction score and the true label for each pair (because of -W). The second way is to use predict_with_model.py: if you have a model saved from validation or elsewhere, you can reuse it.

python predict_with_model.py ./model.model -n predict -i ./toy_examples/test_dataset/test_dti.csv -d ./toy_examples/test_dataset/test_compound.csv -t ./toy_examples/test_dataset/test_protein.csv -v Convolution -l 2500 -V morgan_fp_r2 -L 2048 -W -o test_result.csv

This command produces the same result file as the first command.

Evaluation of model performance

In addition, you can evaluate the performance of labeled prediction results by using evaluate_performance.py, given an optimal threshold from validation and the names of the test datasets.

python evaluate_performance.py test_result.csv -n predict -T 0.2

This command will report the performance metrics.
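A minimal sketch of the kind of metrics such a report can contain, assuming the result file holds true labels and prediction scores (the column names below are hypothetical, not evaluate_performance.py's actual ones):

    import pandas as pd
    from sklearn.metrics import (average_precision_score, precision_score,
                                 recall_score, roc_auc_score)

    df = pd.read_csv("test_result.csv")
    y_true, y_score = df["label"], df["predicted"]

    # Threshold found during validation, passed above as -T 0.2.
    y_pred = (y_score >= 0.2).astype(int)

    print("AUC:      ", roc_auc_score(y_true, y_score))
    print("AUPR:     ", average_precision_score(y_true, y_score))
    print("Precision:", precision_score(y_true, y_pred))
    print("Recall:   ", recall_score(y_true, y_pred))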

License

DeepConv-DTI follows the GPL v3.0 license. Therefore, DeepConv-DTI is open source and free for everyone to use.

However, compounds found by using DeepConv-DTI follow CC-BY-NC-4.0. Those compounds are freely available for academic purposes or individual research, but restricted for commercial use.

Contact

[email protected]

[email protected]


Issues

Example Command Line Usage?

Hi,

Thanks for the code and the detailed instructions. For the command line usage, could you kindly provide an example input line, since the brackets in the usage section are not clear to me? Thanks!

Issue with Adam

While executing the DeepConvDTI.py script, I am getting the error:

    ImportError: cannot import name 'Adam' from 'keras.optimizers'

I found that this could be solved by changing the import to:

    from tensorflow.keras.optimizers import Adam

But then other errors pop up:

    AttributeError: module 'tensorflow' has no attribute 'global_variables_initializer'

Is there a better way to resolve the issue other than downgrading to tensorflow=1.*?
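One possible workaround is to keep TensorFlow 2 installed but run TF1-style code through the v1 compatibility layer instead of downgrading. A minimal sketch (untested against this repository, which also imports standalone keras, so downgrading may still be the safer route):

    # Replace `import tensorflow as tf` with the v1 compatibility layer.
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    # TF1-style symbols such as global_variables_initializer are then available again:
    init = tf.global_variables_initializer()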

Question about how to preprocess the compound features ('0\t0\t0\t...')

Dear authors:

I would like to know how the authors trained the DeepConv-DTI baseline, mainly how the original data was preprocessed. When I trained the DeepConv-DTI code, memory usage was quite high.

Thank you very much for your kind consideration and I am looking forward to your reply.

Data preparation

I want to make predictions with DeepConv-DTI using my own data, but the process of preparing the input is difficult.
Could you upload the code that creates the three input files?
If you upload it, it will be useful to many people.

Error while running the toy_examples

Hello, I'm running DeepConv-DTI on the toy_examples, but I got an error message in the prediction step.

python predict_with_model.py ./model.model -n predict -i ./toy_examples/test_dataset/test_dti.csv -d ./toy_examples/test_dataset/test_compound.csv -t ./toy_examples/test_dataset/test_protein.csv -v Convolution -l 2500 -V morgan_fp_r2 -L 2048 -W -o test_result.csv

Traceback (most recent call last):
  File "predict_with_model.py", line 118, in <module>
    d_splitted = np.array_split(prediction_dic["drug_feature"], N)
  File "<__array_function__ internals>", line 6, in array_split
  File "/Data1/program/anaconda3/lib/python3.7/site-packages/numpy/lib/shape_base.py", line 761, in array_split
    raise ValueError('number sections must be larger than 0.')

When I inspected the variable, N was zero.

    N = int(prediction_dic["drug_feature"].shape[0]/50)

In the code, prediction_dic["drug_feature"].shape[0] is 20.

Please, any help!
Thank you
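A minimal guard for the zero-section case described in this issue (an illustrative sketch, not an official fix):

    import numpy as np

    # Stand-in for prediction_dic["drug_feature"]: with only 20 drugs, 20 // 50 == 0,
    # and np.array_split rejects zero sections. Clamping N to at least 1 avoids the error.
    drug_feature = np.zeros((20, 2048))
    N = max(1, drug_feature.shape[0] // 50)
    d_splitted = np.array_split(drug_feature, N)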

model fit not using GPU

Hi,

I'm trying to run DeepConv-DTI on my data. The model starts training but it won't use the available GPUs. I don't see any parameter to force GPU usage. Am I missing something?

Thank you.

Example Files

Hello. I was wondering if you had any example files of the input data, to test whether the program runs. Thank you!

Inquiry about SAE code

Hi,

I saw in your paper that you replicated the results of SAE MFDR. I am wondering if you have any plan to share that code with the community for comparison? Thank you!

Is drug_features a list of strings?

I'm new to TF, so I may have misunderstood something in the code.

In DeepConvDTI.py, the line that extracts the drug features is:

    drug_feature = np.stack(dti_df[drug_vec].map(lambda fp: fp.split("\t")))

which makes it an array of strings. I checked the inputs before calling self.model_t.predict() and the drug feature remains unchanged. Does this mean we are giving an array of strings to the drug pipeline, or does tf.keras.Model() convert it implicitly?
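A quick check of the behaviour asked about here (a sketch; the explicit cast is an assumption, not the repository's code):

    import numpy as np

    # np.stack on the split strings yields a string-typed array, as suspected.
    fp = "0\t1\t0\t1"
    arr = np.stack([fp.split("\t")])
    print(arr.dtype)  # <U1 (strings)

    # An explicit numeric cast makes the dtype unambiguous before predict():
    arr = arr.astype(np.float32)
    print(arr.dtype)  # float32

In practice NumPy/TensorFlow will usually coerce numeric strings when the array is fed to a float input, but an explicit cast is clearer.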

KeyError: 'morgan_fp'

Hi, I was trying to run the example but ran into an error. Any advice would be greatly appreciated. Thank you.

/home/shared/DeepConv-DTI$ python DeepConvDTI.py ./toy_examples/training_dataset/training_dti.csv ./toy_examples/training_dataset/training_compound.csv ./toy_examples/training_dataset/training_protein.csv --predict -n predict -i ./toy_examples/test_dataset/test_dti.csv -d ./toy_examples/test_dataset/test_compound.csv -t ./toy_examples/test_dataset/test_protein.csv -c 512 128 -w 10 15 20 25 30 -p 128 -f 128 -r 0.0001 -n
Using TensorFlow backend.
model parameters summary

drug_layers : [512, 128]
protein_strides : [10, 15, 20, 25, 30]
protein_layers : [128]
fc_layers : [128]
learning_rate : 0.0001
decay : 0.0
activation : None
filters : 64
dropout : 0.2
prot_vec : Convolution
prot_len : 2500
drug_vec : morgan_fp
drug_len : 2048

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
2019-11-25 18:27:44.659002: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2019-11-25 18:27:44.659036: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: UNKNOWN ERROR (303)
2019-11-25 18:27:44.659056: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (ip-10-76-116-174): /proc/driver/nvidia/version does not exist
2019-11-25 18:27:44.659300: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2019-11-25 18:27:44.669291: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3000000000 Hz
2019-11-25 18:27:44.672797: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5463830 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2019-11-25 18:27:44.672822: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
WARNING:tensorflow:From DeepConvDTI.py:164: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.

Model: "model_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_2 (InputLayer)            (None, 2500)         0
embedding_1 (Embedding)         (None, 2500, 20)     520         input_2[0][0]
spatial_dropout1d_1 (SpatialDro (None, 2500, 20)     0           embedding_1[0][0]
input_1 (InputLayer)            (None, 2048)         0
conv1d_1 (Conv1D)               (None, 2500, 64)     12864       spatial_dropout1d_1[0][0]
conv1d_2 (Conv1D)               (None, 2500, 64)     19264       spatial_dropout1d_1[0][0]
conv1d_3 (Conv1D)               (None, 2500, 64)     25664       spatial_dropout1d_1[0][0]
conv1d_4 (Conv1D)               (None, 2500, 64)     32064       spatial_dropout1d_1[0][0]
conv1d_5 (Conv1D)               (None, 2500, 64)     38464       spatial_dropout1d_1[0][0]
dense_1 (Dense)                 (None, 512)          1049088     input_1[0][0]
batch_normalization_3 (BatchNor (None, 2500, 64)     256         conv1d_1[0][0]
batch_normalization_4 (BatchNor (None, 2500, 64)     256         conv1d_2[0][0]
batch_normalization_5 (BatchNor (None, 2500, 64)     256         conv1d_3[0][0]
batch_normalization_6 (BatchNor (None, 2500, 64)     256         conv1d_4[0][0]
batch_normalization_7 (BatchNor (None, 2500, 64)     256         conv1d_5[0][0]
batch_normalization_1 (BatchNor (None, 512)          2048        dense_1[0][0]
activation_3 (Activation)       (None, 2500, 64)     0           batch_normalization_3[0][0]
activation_4 (Activation)       (None, 2500, 64)     0           batch_normalization_4[0][0]
activation_5 (Activation)       (None, 2500, 64)     0           batch_normalization_5[0][0]
activation_6 (Activation)       (None, 2500, 64)     0           batch_normalization_6[0][0]
activation_7 (Activation)       (None, 2500, 64)     0           batch_normalization_7[0][0]
activation_1 (Activation)       (None, 512)          0           batch_normalization_1[0][0]
global_max_pooling1d_1 (GlobalM (None, 64)           0           activation_3[0][0]
global_max_pooling1d_2 (GlobalM (None, 64)           0           activation_4[0][0]
global_max_pooling1d_3 (GlobalM (None, 64)           0           activation_5[0][0]
global_max_pooling1d_4 (GlobalM (None, 64)           0           activation_6[0][0]
global_max_pooling1d_5 (GlobalM (None, 64)           0           activation_7[0][0]
dropout_1 (Dropout)             (None, 512)          0           activation_1[0][0]
concatenate_1 (Concatenate)     (None, 320)          0           global_max_pooling1d_1[0][0]
                                                                 global_max_pooling1d_2[0][0]
                                                                 global_max_pooling1d_3[0][0]
                                                                 global_max_pooling1d_4[0][0]
                                                                 global_max_pooling1d_5[0][0]
dense_2 (Dense)                 (None, 128)          65664       dropout_1[0][0]
dense_3 (Dense)                 (None, 128)          41088       concatenate_1[0][0]
batch_normalization_2 (BatchNor (None, 128)          512         dense_2[0][0]
batch_normalization_8 (BatchNor (None, 128)          512         dense_3[0][0]
activation_2 (Activation)       (None, 128)          0           batch_normalization_2[0][0]
activation_8 (Activation)       (None, 128)          0           batch_normalization_8[0][0]
dropout_2 (Dropout)             (None, 128)          0           activation_2[0][0]
dropout_3 (Dropout)             (None, 128)          0           activation_8[0][0]
concatenate_2 (Concatenate)     (None, 256)          0           dropout_2[0][0]
                                                                 dropout_3[0][0]
dense_4 (Dense)                 (None, 128)          32896       concatenate_2[0][0]
batch_normalization_9 (BatchNor (None, 128)          512         dense_4[0][0]
activation_9 (Activation)       (None, 128)          0           batch_normalization_9[0][0]
dense_5 (Dense)                 (None, 1)            129         activation_9[0][0]
lambda_1 (Lambda)               (None, 1)            0           dense_5[0][0]
==================================================================================================
Total params: 1,322,569
Trainable params: 1,320,137
Non-trainable params: 2,432


Parsing ./toy_examples/training_dataset/training_dti.csv, ./toy_examples/training_dataset/training_compound.csv, ./toy_examples/training_dataset/training_protein.csv with length 2500, type Convolution
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/pandas/core/indexes/base.py", line 2890, in get_loc
    return self._engine.get_loc(key)
  File "pandas/_libs/index.pyx", line 107, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 131, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 1607, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 1614, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'morgan_fp'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "DeepConvDTI.py", line 352, in <module>
    train_dic = parse_data(**train_dic)
  File "DeepConvDTI.py", line 51, in parse_data
    drug_feature = np.stack(dti_df[drug_vec].map(lambda fp: fp.split("\t")))
  File "/usr/local/lib/python3.6/dist-packages/pandas/core/frame.py", line 2975, in __getitem__
    indexer = self.columns.get_loc(key)
  File "/usr/local/lib/python3.6/dist-packages/pandas/core/indexes/base.py", line 2892, in get_loc
    return self._engine.get_loc(self._maybe_cast_indexer(key))
  File "pandas/_libs/index.pyx", line 107, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 131, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 1607, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 1614, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'morgan_fp'
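The command shown here appears truncated after -n, so the drug feature flags were never reached: the parameter summary printed at the top shows the default drug_vec : morgan_fp, while the toy compound file provides a morgan_fp_r2 column. Passing the drug feature type and length explicitly, as in the toy-example commands above, should resolve the KeyError (a suggestion inferred from this page, not an official fix):

    -V morgan_fp_r2 -L 2048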
