Deep Learning for Drug Discovery with Keras

Drug discovery is the process of identifying molecular compounds that are likely to become the active ingredient in prescription drugs. At a high level, it works by taking a set of candidate compounds (either synthetic or naturally derived) and evaluating their chemical reactions with a (typically cloned) molecule that is strongly correlated with a particular disease [1]. Machine learning, and deep learning in particular, has been highly successful in predicting the chemical reactions between candidate compounds and target molecules [2, 3, 4]. These models have enabled biomedical engineers to rapidly iterate on the design of new synthetic compounds by querying a trained deep neural network to estimate how a candidate compound would interact with a target molecule. This allows major pharmaceutical companies, such as Merck, to significantly reduce their drug discovery costs.

Applications of machine learning such as these typically require a cluster of GPU machines to perform parallel model training, parallel model selection, or both. Cloud providers reduce the capital expenses otherwise incurred on the initial deployment of such clusters, and Qubole Data Service (QDS) minimizes the time and operating expenses otherwise incurred on maintaining and updating such infrastructure. This demonstration therefore uses the framework developed in my blog post on distributed deep learning within QDS [5].

The Goal: Drug Discovery Using Deep Learning On The Merck Molecular Activity Dataset

Upon completion of this blog post, an enterprising data scientist will have trained at least one deep neural network to predict molecular activity on the Merck dataset. Within a Qubole notebook [7], you will cross validate your DNNs using the Spark ML Pipeline interface [6], while maintaining the big data best practices implemented in QDS [8]. The resulting prediction quality will be reported as an R2 score [9], as well as visually inspected with a scatter plot overlaying the model outputs with the corresponding labels.
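For reference, the R2 score [9] is the fraction of label variance explained by the predictions. Below is a minimal NumPy sketch of the quantity the Spark evaluator computes later in this post; the function name is illustrative:

import numpy as np

def r_squared(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot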

The convergence of training will also be verified by plotting the loss function of the best cross validated model as it is retrained on the training data plus the holdout data. It is important to note that the loss function decreases towards a non-unique minimum: there may be better minima elsewhere in the cost function topology.

The R2 score on holdout validation data can be estimated using the following example code. Note that this quantitative metric cannot fully describe the quality of the trained model (e.g., favoring predictions around 4.3 as seen in the labels, but overestimating molecular activity in aggregate).

##
# `cv_model` and `evaluator` are both defined in the cross validation example code below
##
df_predictions = cv_model.transform(df_holdout)
r2_score = evaluator.evaluate(df_predictions)
print(r2_score)
0.915
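Note that `df_holdout` is assumed to have been split off before any model fitting. A minimal sketch using Spark's `randomSplit`; the 80/20 ratio and seed are illustrative assumptions, not prescribed by this post:

# split a holdout set off the ingested training DataFrame;
# the ratio and seed here are illustrative assumptions
df_train, df_holdout = df_train.randomSplit([0.8, 0.2], seed=42)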

Last but not least, GPU utilization and decreasing loss function values can be observed during training by inspecting the Spark executor log file output. Instructions for navigating to these logs can be found in the final section of this blog post.

Example Code To Cross Validate Your Drug Discovery Models

Working within the framework specified by my blog post on distributed deep learning within QDS [5], you must specify your input dataset, as well as which columns to rename and which to exclude from the training features. This example uses the ACT01 dataset from the Merck molecular activity challenge [10]:

base_dir = '/fully/qualified/path/to/root/folder/of/dataset/'

excluded_columns = ('MOLECULE', 'label')
columns_renamed = (('Act', 'label'), )

train_set = 'ACT01_training.csv'
test_set = 'ACT01_testing.csv'

num_workers = 50

The base directory will be some persistent cloud storage, such as Amazon S3 or Azure Blobs, and the number of workers must be equal to the number of Spark executors currently running on your cluster. The next step is to ingest DataFrames from the CSV data, and to define the parameter grid over which you will cross validate your drug discovery models. Here is a reasonable grid with which to start on the ACT01 dataset:

df_train = process_csv(base_dir + train_set,
                       columns_renamed=columns_renamed,
                       excluded_columns=excluded_columns,
                       num_workers=num_workers)
df_test = process_csv(base_dir + test_set,
                      columns_renamed=columns_renamed,
                      excluded_columns=excluded_columns,
                      num_workers=num_workers)
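The `process_csv` helper above comes from the framework in [5]. For readers without that post handy, here is a rough sketch of the behavior it needs to provide (reading the CSV, renaming and excluding columns, assembling a features vector, and repartitioning across executors); the implementation below is an assumption for illustration, including the active SparkSession named `spark`, not the original helper:

from pyspark.ml.feature import VectorAssembler

def process_csv(path, columns_renamed=(), excluded_columns=(),
                num_workers=50):
    # hypothetical stand-in for the helper defined in [5];
    # assumes an active SparkSession named `spark`
    df = spark.read.csv(path, header=True, inferSchema=True)
    for old_name, new_name in columns_renamed:
        df = df.withColumnRenamed(old_name, new_name)
    # assemble all non-excluded columns into a single features vector
    feature_cols = [c for c in df.columns if c not in excluded_columns]
    df = VectorAssembler(inputCols=feature_cols,
                         outputCol='features').transform(df)
    # repartition so each Spark executor holds one shard of the data
    return df.repartition(num_workers)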

input_dim = 9491

param_grid = (tuning.ParamGridBuilder()
    .baseOn(['regularizer', regularizers.l1_l2])
    .addGrid('activations', [['tanh', 'relu']])
    .addGrid('initializers', [['glorot_normal',
                               'glorot_uniform']])
    .addGrid('layer_dims', [[input_dim, 2000, 300, 1]])
    .addGrid('metrics', [['mae']])
    .baseOn(['learning_rate', 1e-2])
    .baseOn(['reg_strength', 1e-2])
    .baseOn(['reg_decay', 0.25])
    .baseOn(['lr_decay', 0.90])
    .addGrid('dropout_rate', [0.20, 0.35, 0.50, 0.65, 0.80])
    .addGrid('loss', ['mse', 'msle'])
    .build())

Lastly, you must define an estimator and an evaluation metric against which it will be cross validated. Note that the grid above expands to ten parameter maps (five dropout rates times two loss functions). All tuning will be done with Spark ML Pipelines, leveraging the framework in [5].

estimator = DistKeras(trainers.ADAG,
                      {'batch_size': 64,
                       'communication_window': 3,
                       'num_epoch': 5,
                       'num_workers': num_workers},
                      **param_grid[0])

evaluator = evaluation.RegressionEvaluator(metricName='r2')

cv_estimator = tuning.CrossValidator(estimator=estimator,
                                     estimatorParamMaps=param_grid,
                                     evaluator=evaluator,
                                     numFolds=4)
cv_model = cv_estimator.fit(df_train)
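To build intuition for what the estimator does with each parameter map, here is a hypothetical sketch of how the grid's hyperparameters could translate into a Keras network. The mapping of activations and initializers onto layers is an assumption for illustration, not the actual builder from [5]:

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras import regularizers

def build_model(layer_dims, activations, initializers,
                dropout_rate, reg_strength):
    # hypothetical illustration: the first hidden layer uses the first
    # activation/initializer, deeper hidden layers use the second
    reg = lambda: regularizers.l1_l2(l1=reg_strength, l2=reg_strength)
    model = Sequential()
    model.add(Dense(layer_dims[1], input_dim=layer_dims[0],
                    activation=activations[0],
                    kernel_initializer=initializers[0],
                    kernel_regularizer=reg()))
    model.add(Dropout(dropout_rate))
    for dim in layer_dims[2:-1]:
        model.add(Dense(dim, activation=activations[1],
                        kernel_initializer=initializers[1],
                        kernel_regularizer=reg()))
        model.add(Dropout(dropout_rate))
    # single linear output unit to regress molecular activity
    model.add(Dense(layer_dims[-1]))
    return model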

Example Code To Visualize Prediction Quality And Verify Decreasing Training Loss

The Spark ML Pipelines API allows us to compute R2 scores [9] and to run a trained model to obtain predictions, in addition to providing us with a reference to the best Dist-Keras model from our cross validation above. The code below demonstrates how to leverage all of these:

##
# equivalently:
# dist_keras_model = cv_model.bestModel
# df_predictions = dist_keras_model.transform(df_holdout)
##
df_predictions = cv_model.transform(df_holdout)
df_predictions = df_predictions.select('label', 'prediction').toPandas()
yt = df_predictions['label'].values
yp = df_predictions['prediction'].values

Note the selection of only the label and prediction columns prior to collecting into a local Pandas DataFrame. Randomly sampling 1K labels and predictions yields the qualitative comparison exemplified in the goals section of this demonstration.

fig = plt.figure()

idx = np.random.randint(len(df_predictions), size=1000)
nx = np.arange(1000) + 1

axa = fig.add_subplot(121)
_ = axa.scatter(nx, yt[idx], s=20, facecolors='none', edgecolors='k')
_ = axa.scatter(nx, yp[idx], s=20, facecolors='none', edgecolors='r')
_ = axa.set_title('Qualitative Comparison: Predictions Versus Labels')
_ = axa.set_xticks([j for j in nx if not j % 100])
_ = axa.set_xticklabels([str(j) for j in nx if not j % 100])
_ = axa.set_xlabel('Molecule Subsample ID')
_ = axa.set_ylabel('Molecular Activity')
_ = axa.legend(('Label', 'Prediction'))

Dist-Keras provides us with the history of loss function values during training [11], as well as access to the native Keras model trained in a data parallel fashion. The latter provides a function which outputs the model structure in a format which is easy to plot [12]. The code below leverages these features to complete our intended objective for this demonstration:

from operator import itemgetter

loss_and_metrics = dist_keras_model._trainer.get_averaged_history()
x = np.arange(loss_and_metrics.shape[0]) + 1
y = np.array(list(map(itemgetter(0), loss_and_metrics)))

axb = fig.add_subplot(122)
_ = axb.plot(x, y, 'r-')
_ = axb.set_title('Training Loss Values')
_ = axb.set_xticks([j for j in x if not j % 50])
_ = axb.set_xticklabels([str(j) for j in x if not j % 50])
_ = axb.set_xlabel('training iteration')
_ = axb.set_ylabel('mean absolute percentage error')

fig.tight_layout()
show(fig)

##
# plot the structure of the best cross validated model to file
##
keras_model = dist_keras_model._keras_model
utils.plot_model(keras_model, to_file='drug_discovery_cv_model.png')
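Note that `plot_model` serializes the network structure to the DOT graph description language [12] under the hood, so it requires the `pydot` and `graphviz` packages to be available on the driver node.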

Viewing Logs During Model Training

Within your Qubole notebook [7], you have the ability to view Spark executor logs [13] while your model is training. To view these logs in an overlaid frame, simply click `Job UI` and follow the dark red arrows in the diagram below for subsequent clicks. Clicking on `stderr` will result in a log file similar to the one shown in the goals section of this post.

About The Author

Horia Margarit is a career data scientist with industry experience in machine learning for digital media, consumer search, cloud infrastructure, life sciences, and consumer finance industry verticals [14]. His expertise is in modeling and optimization on internet scale datasets, specifically leveraging distributed deep learning among other techniques. He earned dual Bachelor's degrees in Cognitive and Computer Science from UC Berkeley, and a Master's degree in Statistics from Stanford University.

References

  1. Wikipedia: Drug Discovery
  2. Team ‘.’ takes 3rd in the Merck Challenge
  3. Team DataRobot: Merck 2nd place Interview
  4. Deep Learning How I Did It: Merck 1st place interview
  5. Distributed Deep Learning with Keras on Apache Spark
  6. Spark ML Pipelines
  7. Introduction To Qubole Notebooks
  8. The Cloud Advantage: Decoupling Storage And Compute
  9. Wikipedia: R2 Scores
  10. Merck Molecular Activity Challenge
  11. Dist-Keras: History Of Loss Function Values
  12. Wikipedia: DOT (graph description language)
  13. Apache Spark: Monitoring And Instrumentation
  14. Author’s LinkedIn Profile