Sitemap

A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.

Pages

Posts

Bayesian Reaction Optimization Using EDBO - Part II

10 minute read

Published:

Part II - Software introduction

In part I we installed the pre-release of EDBO and ran some basic functionality tests. Now in part II we can dive into a basic introduction to using the software. In this post we provide example code for Bayesian optimization of a 1D objective, which can be used to explore some of the software’s features. The main Bayesian optimization program is accessed through the edbo.bro module. The main BO classes, edbo.bro.BO and edbo.bro.BO_express, enable users to select initial experiments with experimental designs, run BO on human-in-the-loop or computational objectives, model data, and analyze results. Note: BO parameters are preset to those optimized for DFT encodings in the paper. However, BO_express attempts to automate the selection of priors based on the search space. In general, the BO class is more flexible but as a result less user friendly. Therefore, let’s use the BO_express class in this demonstration.

To start we need to define a search space and an objective. In general, for any application it is up to us to define where the optimizer will search for conditions that maximize our objective. For a reaction your objective may be the yield of desired product; here I am using an arbitrary function, so feel free to change it to anything you want for this demo.

Define Objective and Search Space

import numpy as np
import matplotlib.pyplot as plt

# Define a computational objective
# EDBO works with feature vectors so even a 1D objective needs to be vectorized

def f(x):
    """Noise free objective."""
    
    return np.sin(x[0]) * x[0] * 5 + 30

def g(x):
    """With noise."""
    
    return f(x) + (np.random.random() - 0.5) * 15
  
# BO uses a user defined domain

X = np.linspace(0,10,1000)    # Grid of 1000 points between 0 and 10

Now we can use matplotlib to visualize the objective.

sample = np.random.choice(X, 100)
plt.figure(figsize=(5,5))
plt.plot(X, [f([x]) for x in X])
plt.scatter(sample, [g([x]) for x in sample], alpha=0.5)
plt.xlabel('x')
plt.ylabel('f(x)')
plt.title('"Unknown" Objective')
plt.show()

Using EDBO

With our search space prepared we can now use EDBO to choose initial experiments, evaluate models, and run Bayesian optimization. There are several ways in which the main BO methods can be used. Let’s start by checking out the options when instantiating BO objects. Here is a link to the documentation page: edbo.bro.

First, as we are checking out some of EDBO’s features, it will be handy to have a nice plotting function.

# Handy function to visualize the results

def map_corr(df):
    """Get corresponding points in unstandardized domain."""
    
    index = []
    for x in df.values:
        i = np.argwhere(bo.obj.domain.values == x).flatten()[0]
        index.append(i)
    
    return bo.reaction.get_experiments(index)

def plot_results(export_path=None, plot_samples=True):
    """Plot summary of 1D BO simulations."""

    mean = bo.obj.scaler.unstandardize(bo.model.predict(bo.obj.domain))                             # GP posterior mean
    std = np.sqrt(bo.model.variance(bo.obj.domain)) * bo.obj.scaler.std * 2                         # 2x GP posterior standard deviation (95% region)
    next_points = bo.reaction.get_experiments(bo.proposed_experiments.index.values).copy()          # Next points proposed by BO
    next_points['g(x)'] = [f(x) for x in next_points.values]
    results = map_corr(bo.obj.results.drop('g(x)', axis=1))                                         # Results for known data
    results['g(x)'] = [g(x) for x in results.values]    
    
    plt.figure(1, figsize=(8,8))

    # Model mean and standard deviation
    plt.subplot(211)
    plt.plot(X, [f([x]) for x in X], color='black')
    plt.plot(X, mean, label='GP')
    plt.fill_between(X, mean-std, mean+std, alpha=0.4)

    # Known results and next selected point
    plt.scatter(results['x_index'], results['g(x)'], color='black', label='known')
    plt.scatter(next_points['x_index'], next_points['g(x)'], color='red', label='next')
    plt.ylabel('f(x)')
    
    # Plot some posterior samples
    if plot_samples:
        samples = bo.obj.scaler.unstandardize(bo.model.sample_posterior(bo.obj.domain, batch_size=2))
        i = 1
        for sample in samples:
            plt.plot(X, sample.numpy(), '--', label='sample' + str(i))
            i += 1
    
    plt.legend(loc='lower left')

    # Plot the acquisition function
    plt.subplot(212)
    for p in bo.acq.function.projections:
        plt.plot(bo.obj.domain['x'], p)

    plt.xlabel('x')
    plt.ylabel('Acquisition Function')
    
    if export_path is not None:
        plt.savefig(export_path, format='svg', dpi=1200, bbox_inches='tight')
    
    plt.show()

Initialization methods

Suppose we have no data and want to start by selecting initial experiments to run. With EDBO we can do this at random or using clustering methods. I have also written some DOE add-on modules which enable you to use response surface (e.g., central composite) and fractional factorial designs; however, these are not included in EDBO 0.0.0. Here we use the centroids from k-means clustering for initialization (a short sketch of the underlying idea follows the plots below).

from edbo.bro import BO_express

# (1) Define a dictionary of components
components = {'x':X}

# (2) Define a dictionary of desired encodings
encoding={'x':'numeric'}

# (3) Instantiate BO object
bo = BO_express(components,
                encoding,
                batch_size=2,
                target='g(x)',
                init_method='kmeans')

# (4) Choose initial experiments using k-means
bo.init_sample()

print('\nNormalized domain points:')
bo.proposed_experiments

Normalized domain points:

     x
252  0.252252
751  0.751752

We can get the unnormalized experiments (or SMILES strings etc.) using the get_experiments method.

print('\nDomain points:')
bo.get_experiments()

Domain points:

     x_index
252  2.52252
751  7.51752

And we can plot the choices on the domain.

plt.figure(figsize=(6,1))
plt.scatter(bo.obj.domain['x'], np.ones((len(bo.obj.domain))))
plt.scatter(bo.proposed_experiments, np.ones((len(bo.proposed_experiments))), s=100)
plt.xlabel('x')
plt.yticks([])
plt.show()
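For intuition, k-means initialization amounts to clustering the encoded search space and running the experiments nearest each cluster center. Here is a minimal sketch of that idea using scikit-learn directly (an illustration of the concept, not EDBO’s internal code):

from sklearn.cluster import KMeans
import numpy as np

# Cluster the 1D domain into batch_size groups
domain = X.reshape(-1, 1)                        # (1000, 1) feature matrix
kmeans = KMeans(n_clusters=2, n_init=10).fit(domain)

# Pick the domain point closest to each cluster center
init_points = [X[np.argmin(np.abs(X - c[0]))] for c in kmeans.cluster_centers_]
print(init_points)   # two well-spread points, e.g. ~2.5 and ~7.5 as above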

Human-in-the-loop optimization

Now we can move on to the optimization. If you were really running experiments in the lab, you would likely just want to use the run method to iteratively choose experiments. Then go into the lab, run the experiments, collect the results, and read them back into the optimizer. Let’s see what that would look like. First, let’s export the proposed experiments to a CSV file so we can add the results after we “run” the experiments.

# Without an argument this will export 'experiments.csv' to the cwd.
bo.export_proposed()

Since this is actually a computational objective we can “run” the experiments right here.

# "Run" the experiments
expts = bo.get_experiments()
expts['g(x)'] = [g(x) for x in expts.values]

# Save the results as a CSV
expts.to_csv('results.csv')

# Load the results
bo.add_results('results.csv')

Then in order to choose the next experiments we simply use the run method.

bo.run()

And we can get a basic analysis of the acquisition process using the acquisition_summary method.

bo.acquisition_summary()
     x         predicted g(x)  variance
960  0.960961  60.723215       81.51
631  0.631632  64.528996       9.982

You can continue this process iteratively until the objective is maximized or you run out of resources. We can get an idea of what is going on under the hood using our plotting function. In the top plot, notice that the model mean fits the experimental results well and that the model confidence region (2$\sigma$) captures the unknown objective. As a result, when we sample the posterior predictive distribution of the model, you can see that one of the random functions (yellow dashed) actually captures most of the variation in the objective. The default acquisition function used by EDBO is parallel expected improvement (EI). The computed EI, used to select the next round of experiments, is shown in the bottom plot. Notice that the argmax of the acquisition function gives the next two experiments (red points).
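For reference, here is the textbook expected improvement computed by hand for a Gaussian posterior (a minimal sketch of the standard formula with hypothetical inputs; EDBO’s parallel EI differs in its batching details):

import numpy as np
from scipy.stats import norm

def expected_improvement(mean, std, best_observed, xi=0.0):
    """EI for a Gaussian posterior: EI = (mu - f* - xi) * Phi(z) + sigma * phi(z)."""
    improvement = mean - best_observed - xi
    z = np.where(std > 0, improvement / std, 0.0)
    ei = improvement * norm.cdf(z) + std * norm.pdf(z)
    return np.where(std > 0, ei, 0.0)

# EI is large where the mean is high and/or the model is uncertain
mean = np.array([60.0, 64.0])            # posterior means at two candidate points
std = np.array([9.0, 3.2])               # posterior standard deviations
print(expected_improvement(mean, std, best_observed=62.0))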

Automated optimization

Given that $f$ is actually a computational objective, we could just use EDBO to automatically optimize the objective. Below is some sample code for how you can do this using the computational objective option.

# EDBO works on the normalized search space
# We need a new function that maps to the real domain
def h(x):
    """Deal with scaling."""
    
    i = np.argwhere(bo.obj.domain.values == x).flatten()[0]
    df = bo.reaction.get_experiments(i)
    
    return g(df.values)

# Use the computational_objective argument
bo = BO_express(components,
                encoding,
                batch_size=2,
                target='g(x)',
                computational_objective=h)

# Run the optimization automatically using simulate
bo.simulate(seed=4, iterations=5)

# Plot the results
plot_results()

Configuring the optimizer

Models. In Bayesian optimization, the surrogate model type defines a prior over functions which captures our assumptions about the shape of the response surface. When we combine this prior with observed reaction data, we get a posterior distribution over functions which we can use to reason about the possible positions of global optima. Practically speaking, many acquisition functions (but not all, e.g., Thompson sampling) are formulated from the surrogate model’s mean and variance. Thus, in principle any regression model can be employed in Bayesian optimization (e.g., by bootstrapping variance estimates). EDBO currently has three different surrogate models built into the edbo.models module: Gaussian processes (edbo.models.GP_Model, GPyTorch), random forests (edbo.models.RF_Model, Scikit-Learn), and Bayesian linear regression (edbo.models.Bayesian_Linear_Model, Scikit-Learn). See the edbo.models documentation page for more details. We can get an idea of the shape of these functions using the plotting method we wrote above (see the figures below). It is also straightforward to implement your own model - see the edbo.models module for examples. Below is an example code block for utilizing a random forest model instead of the default Gaussian process, followed by a generic sketch of the bootstrapping idea.

from edbo.bro import BO_express
from edbo.models import RF_Model

# (1) Define a dictionary of components
components = {'x':X}

# (2) Define a dictionary of desired encodings
encoding={'x':'numeric'}

# (3) Instantiate BO object
bo = BO_express(components,
                encoding,
                model=RF_Model)
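
As mentioned above, a model with no native uncertainty estimate can still supply the mean and variance BO needs by bootstrapping. Here is a generic sketch of that idea with scikit-learn (purely illustrative; see edbo.models for how EDBO actually wraps its models):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def bootstrap_mean_variance(X_train, y_train, X_test, n_boot=20, seed=0):
    """Fit models on bootstrap resamples; use the spread of predictions as variance."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(X_train), len(X_train))   # resample with replacement
        model = RandomForestRegressor(n_estimators=50).fit(X_train[idx], y_train[idx])
        preds.append(model.predict(X_test))
    preds = np.array(preds)
    return preds.mean(axis=0), preds.var(axis=0)

# Usage on the toy objective from above
mu, var = bootstrap_mean_variance(np.array([[2.5], [7.5]]),
                                  np.array([g([2.5]), g([7.5])]),
                                  X.reshape(-1, 1))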

Gaussian Process Regression (EDBO’s default model): [GP model figure]

Random Forest Regression: [RF model figure]

Bayesian Linear Regression: [BLM model figure]

Acquisition functions. The acquisition function is the algorithm responsible for selecting the next experiments to run based on the information captured by the surrogate model. Most acquisition functions are built to balance exploration of the search space with exploitation of the information available from evaluated experiments. EDBO has several acquisition functions available via keyword arguments to the BO and BO_express classes. A full list can be found in the documentation. The default acquisition function, expected improvement, is derived from the expectation value of the improvement utility function. Below is an example code block for choosing a different acquisition function, followed by a schematic sketch of the Kriging Believer algorithm that the parallel acquisition functions use for batching.

from edbo.bro import BO_express

# (1) Define a dictionary of components
components = {'x':X}

# (2) Define a dictionary of desired encodings
encoding={'x':'numeric'}

# (3) Instantiate BO object
bo = BO_express(components,
                encoding,
                acquisition_function='UCB')
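
The Kriging Believer algorithm builds a batch sequentially: take the acquisition argmax, “believe” the model’s own prediction there as if it were a measured result, refit, and repeat. A schematic sketch using scikit-learn’s GP and a simple UCB acquisition (this reuses g and X from the demo above; it is not EDBO’s internal implementation):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def kriging_believer_batch(X_known, y_known, X_candidates, batch_size=2):
    """Select a batch by refitting on the model's own predictions ("beliefs")."""
    X_fant, y_fant = X_known.copy(), y_known.copy()
    batch = []
    for _ in range(batch_size):
        gp = GaussianProcessRegressor().fit(X_fant, y_fant)
        mean, std = gp.predict(X_candidates, return_std=True)
        i = int(np.argmax(mean + 2 * std))          # UCB acquisition for illustration
        batch.append(X_candidates[i])
        # "Believe" the model: append its predicted mean as if it were measured
        X_fant = np.vstack([X_fant, X_candidates[i:i + 1]])
        y_fant = np.append(y_fant, mean[i])
    return np.array(batch)

X_known = np.array([[2.5], [7.5]])
y_known = np.array([g([2.5]), g([7.5])])
print(kriging_believer_batch(X_known, y_known, X.reshape(-1, 1)))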

Expected improvement (EDBO’s default acquisition function): [figure]

Probability of improvement: [figure]

Upper confidence bound: [figure]

Mean maximization (pure exploitation): [figure]

Variance maximization (pure exploration): [figure]

Analysis

During optimization we can run miscellaneous analyses using some of EDBO’s built-in functions. For example, we can plot the optimizer’s path.

bo.plot_convergence()

And we can evaluate how well the model fits the experimental data.

bo.model.regression()
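
If you want a quick quantitative check to go with that plot, a fit statistic can also be computed by hand from the model’s predictions (a sketch that reuses the same attributes as the plotting function above):

import numpy as np

# R^2 between observed results and model predictions
y_obs = bo.obj.results['g(x)'].values
y_pred = bo.obj.scaler.unstandardize(bo.model.predict(bo.obj.results.drop('g(x)', axis=1)))
ss_res = np.sum((y_obs - y_pred) ** 2)
ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
print('R2 =', 1 - ss_res / ss_tot)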

Finally, note that if you need help, EDBO has a basic bot which can run most of its methods. You can call the bot using the help method. For example, suppose you wanted to save your workspace for later.

bo.help()

This will spawn an interactive session:

edbo bot: What can I help you with?
~  Save my workspace

edbo bot: Can you clarify: pickle BO object for later, export proposed, or exit?
~  pickle it

edbo bot: Save instace? (yes or no) You can load instance later with edbo.BO_express.load().
~  yes

edbo bot: Saving edbo.BO instance...


edbo bot: What can I help you with?
~  exit

edbo bot: Exiting...

Up next

EDBO and the main BO classes have a lot more features but hopefully this gives you an idea of how it could be used. In the next post we will see how to apply EDBO to chemical reaction data.

Bayesian Reaction Optimization Using EDBO - Part I

2 minute read

Published:

Recently, in collaboration with folks over at Princeton and Bristol Myers Squibb, I finished writing a Python package called Experimental Design via Bayesian Optimization (EDBO) for reaction optimization, which enables the application of Bayesian optimization, an uncertainty-guided response surface method, to chemical reactions in the laboratory. Now, the paper is submitted for publication and under review, so I have not yet made the repository public. However, to facilitate training and beta testing I am writing a few preliminary posts on (1) installation and basic software usage, (2) simulations with real chemical reaction data, (3) using EDBO in the lab, and (4) tackling computational optimization problems.

Reference: Shields, Benjamin J.; Stevens, Jason; Li, Jun; Parasram, Marvin; Damani, Farhan; Martinez Alvarado, Jesus; Janey, Jacob; Adams, Ryan P.; Doyle, Abigail G. “Bayesian Reaction Optimization as A Tool for Chemical Synthesis” Manuscript Submitted.

Part I - Installation

OK, boring stuff first. In this post we will be tackling software installation from the code in my private repository (so no Git, PyPI, or Anaconda for now).

Install conda

If you haven’t already installed anaconda (or miniconda) on your machine you can follow the instructions provided by conda.

Install EDBO

Windows Script

I wrote a shell script (install.sh) to install EDBO on Windows machines. You will find a copy in the edbo.zip folder provided.

  1. Download and unzip the folder.

  2. Open an anaconda prompt, navigate to the edbo directory, and run the script.

cd path/to/edbo/directory
sh install.sh

Mac/Linux Script

I wrote a slightly different shell script (install_mac.sh) to install EDBO on Mac/Linux machines. You will find a copy in the edbo.zip folder provided.

  1. Download and unzip the folder.

  2. Open a terminal and create a conda environment for EDBO.

conda create -y --name edbo python=3.7.5
conda activate edbo

  3. Navigate to the edbo directory and run the script.

cd path/to/edbo/directory
sh install_mac.sh

Software tests

Use the pytest framework to run some basic software tests to make sure the installation worked. In the anaconda prompt (or terminal for Mac/Linux), navigate to the folder containing edbo. Then run the following commands and you will see test logs appear in the testing directory. These may take a few minutes to run, and you should see some warnings but no failed tests. If you do see failures, please let me know so I can fix the issue and update the software.

conda activate edbo
cd tests
sh basic_tests.sh

Up next

That wraps up this post. In Part II we will walk through a basic introduction to the software.

Hello World!

less than 1 minute read

Published:

My site basics are finished! Later I will add some posts on things I am working on.

portfolio

publications

Direct C(sp3)–H Cross Coupling Enabled by Catalytic Generation of Chlorine Radicals

Published in Journal of the American Chemical Society, 2016

Here the development of a novel C(sp3)–H cross-coupling platform enabled by the catalytic generation of chlorine radicals by nickel and photoredox catalysis is reported. This work has led to a large body of new literature. Highlighted in an ACS Select Virtual Issue. One of the most read articles in August and September.

Recommended citation: Shields, Benjamin J.; Doyle, Abigail G. “Direct C(sp3)–H Cross Coupling Enabled by Catalytic Generation of Chlorine Radicals” J. Am. Chem. Soc., 2016, 138, 12719–12722. https://pubs.acs.org/doi/full/10.1021/jacs.6b08397?src=recsys

Mild Redox-Neutral Formylation of Aryl Chlorides through the Photocatalytic Generation of Chlorine Radicals

Published in Angewandte Chemie International Edition, 2017

A novel redox-neutral method for formylation of aryl chlorides is presented. The mild conditions give unprecedented scope from abundant and complex aryl chloride starting materials. Highlighted in Organic Process Research & Development.

Recommended citation: Nielsen, Matthew K.^; Shields, Benjamin J.^; Liu, Junyi; Williams, M. J.; Zacuto, M. J.; Doyle, Abigail G. “Mild Redox-Neutral Formylation of Aryl Chlorides through the Photocatalytic Generation of Chlorine Radicals” Angew. Chem. Int. Ed., 2017, 56, 7191–7194. ^Equal contributions. https://onlinelibrary.wiley.com/doi/abs/10.1002/anie.201702079

Long-Lived Charge Transfer States of Nickel(II) Aryl Halide Complexes Facilitate Bimolecular Photoinduced Electron Transfer

Published in Journal of the American Chemical Society, 2018

This paper summarizes a synthetic, computational, and ultrafast spectroscopic study of Ni(II) complexes common to cross-coupling and Ni/photoredox reactions. These studies reveal that the complexes feature long-lived excited states, implicating Ni as an underexplored alternative to precious metal photocatalysts.

Recommended citation: Shields, Benjamin J.; Kudisch, Bryan; Scholes, Gregory, D.; Doyle, Abigail G. “Long-Lived Charge Transfer States of Nickel(II) Aryl Halide Complexes Facilitate Bimolecular Photoinduced Electron Transfer” J. Am. Chem. Soc., 2018, 140, 3035–3039. https://pubs.acs.org/doi/10.1021/jacs.7b13281

3d-d Excited States of Ni(II) Complexes Relevant to Photoredox Catalysis: Spectroscopic Identification and Mechanistic Implications

Published in Journal of the American Chemical Society, 2020

Building on our previous work, we spectroscopically investigate the long-lived states of Ni(II) aryl halide complexes. Ultrafast UV-Vis and mid-IR transient absorption data suggest that an MLCT state is generated initially upon excitation, but decays to a long-lived state that is 3d-d in character.

Recommended citation: Ting, Stephen I.; Garakyaraghi, Sofia; Taliaferro, Chelsea M.; Shields, Benjamin J.; Scholes, Gregory D.; Castellano, Felix N.; Doyle, Abigail G. “3d-d Excited States of Ni(II) Complexes Relevant to Photoredox Catalysis: Spectroscopic Identification and Mechanistic Implications” J. Am. Chem. Soc., 2020, xxx, xxx–xxx. https://pubs.acs.org/doi/10.1021/jacs.0c00781#

Nickel/Photoredox-Catalyzed Methylation of (Hetero)aryl Chlorides Using Trimethyl Orthoformate as a Methyl Radical Source

Published in Journal of the American Chemical Society, 2020

We report a radical approach to the methylation of (hetero)aryl chlorides using a widely available solvent as the methyl source.

Recommended citation: Kariofillis, Stavros K.; Shields, Benjamin J.; Tekle-Smith, Makeda; Zacuto, Michael J.; Doyle, Abigail G. “Nickel/Photoredox-Catalyzed Methylation of (Hetero)aryl Chlorides Using Trimethyl Orthoformate as a Methyl Radical Source” J. Am. Chem. Soc., 2020, 142, 7683–7689. https://pubs.acs.org/doi/pdf/10.1021/jacs.0c02805

Regioselective Cross-Electrophile Coupling of Epoxides and (Hetero)aryl Iodides via Ni/Ti/Photoredox Catalysis

Published in ACS Catalysis, 2020

We report a novel cross-electrophile coupling reaction of epoxides enabled by Ni, Ti, and photoredox catalysis.

Recommended citation: Parasram, Marvin; Shields, Benjamin J.; Ahmad, Omar; Knauber, Thomas; Doyle, Abigail G. “Regioselective Cross-Electrophile Coupling of Epoxides and (Hetero)aryl Iodides via Ni/Ti/Photoredox Catalysis” ACS Catalysis, 2020, 10, 5821–5827. https://pubs.acs.org/doi/10.1021/acscatal.0c01199

Bayesian reaction optimization as a tool for chemical synthesis

Published in Nature, 2021

Here we report the development of a framework for Bayesian reaction optimization and an open-source software tool that allows chemists to easily integrate state-of-the-art optimization algorithms into their everyday laboratory practices.

Recommended citation: Shields, Benjamin J.; Stevens, Jason; Li, Jun; Parasram, Marvin; Damani, Farhan; Martinez Alvarado, Jesus; Janey, Jacob; Adams, Ryan; Doyle, Abigail G. "Bayesian Reaction Optimization as A Tool for Chemical Synthesis", Nature, 2021, 590, 89–96. https://www.nature.com/articles/s41586-021-03213-y

Predicting Reaction Yields via Supervised Learning

Published in Accounts of Chemical Research, 2021

In this Account, we present a review and perspective on three studies conducted by our group where ML models have been employed to predict reaction yield.

Recommended citation: Zuranski, Andrzej M.; Martinez Alvarado, Jesus; Shields, Benjamin J.; Doyle, Abigail G. "Predicting Reaction Yields via Supervised Learning", Acc. Chem. Res., 2021, 54, 1856–1865. https://pubs.acs.org/doi/10.1021/acs.accounts.0c00770?ref=pdf

Auto-QChem: an automated workflow for the generation and storage of DFT calculations for organic molecules

Published in Reaction Chemistry & Engineering, 2022

This perspective describes Auto-QChem, an automatic, high-throughput and end-to-end DFT calculation workflow that computes chemical descriptors for organic molecules.

Recommended citation: Zuranski, Andrzej M.; Wang, J. Y.; Shields, Benjamin J.; Doyle, Abigail G. "Auto-QChem: an automated workflow for the generation and storage of DFT calculations for organic molecules", React. Chem. Eng., 2022, 7, 1276. https://doi.org/10.1039/D2RE00030J

Reinforcement learning prioritizes general applicability in reaction optimization

Published in ChemRxiv, 2023

In this work, we report the design, implementation, and application of reinforcement learning bandit optimization models to identify generally applicable conditions in a variety of chemical transformations.

Recommended citation: Wang, Jason Y.; Stevens, Jason M.; Kariofillis, Stavros K.; Tom, Mai-Jan; Li, Jun; Tabora, Jose E.; Parasram, Marvin; Shields, Benjamin J.; Primer, David; Hao, Bo; Valle, David D.; DiSomma, Stacey; Furman, Ariel; Zipp, Greg G.; Melnikov, Sergey; Paulson, James; Doyle, Abigail G. "Reinforcement learning prioritizes general applicability in reaction optimization", ChemRxiv, 2023. https://doi.org/10.26434/chemrxiv-2023-dcg9d

Scoring Methods in Lead Optimization of Molecular Glues

Published in ChemRxiv, 2023

Molecular glue compounds are characterized by the potency and the depth of their protein degradation dose response measurement, representing additional complexity toward identifying drug candidates. We developed degradation efficiency metrics that are based on both potency and depth of degradation. They serve as basic scoring functions to effectively track lead optimization objectives.

Recommended citation: Jia, Lei; Weiss, Dahlia; Shields, Benjamin J.; Claus, Brian; Shanmugasundaram, Veerabahu; Johnson, Stephen; Riggs, Jennifer; Zapf, Christoph "Scoring Methods in Lead Optimization of Molecular Glues", ChemRxiv, 2023. https://doi.org/10.26434/chemrxiv-2023-4hn4s

MDFit: Automated molecular simulations workflow enables high throughput assessment of ligands-protein dynamics

Published in ChemRxiv, 2024

We present an automated workflow that streamlines setting up, running, and analyzing Desmond MD simulations.

Recommended citation: Bruechner, Alexander; Shields, Benjamin J.; Kirubakaran, Palani; Suponya, Alexander; Panda, Manoranjan; Posy, Shana; Johnson, Stephen; Lakkaraju, Sirish K. "MDFit: Automated molecular simulations workflow enables high throughput assessment of ligands-protein dynamics", ChemRxiv, 2024. https://doi.org/10.26434/chemrxiv-2024-gfcqx

talks

Machine learning in methods development: From reaction outcome prediction to mechanistic understanding

Published:

Machine learning (ML), the development and study of computer algorithms that can learn from data, is increasingly important across a wide array of applications in chemistry. For example, ML has facilitated virtual screening of druglike molecules for medical applications, rapid prediction of physical data, and computer aided synthesis planning. While ML has become well-established in these areas, scientists have only just begun to advance tools for synthetic methods development (reaction optimization, prediction, mechanistic study). Though these burgeoning areas of research have already added to the synthetic chemist’s toolbox, average research practices have remained relatively unaffected. One approach to facilitating the adoption of ML in synthetic chemistry is to develop applications which integrate seamlessly with the typical methods of synthetic chemists. Here I will discuss approaches to some obstacles to incorporating ML into the synthetic mainstream, including: (1) interpretability – scientists may not trust a model because predictions appear to be unintelligible or derived randomly from regressors. This challenge could be overcome by using simple interpretable graphics and traditional physical organic chemistry to explain and experimentally probe ML results. (2) Data – current approaches to applying ML in synthetic chemistry have focused on mining the chemical literature or actively generating new datasets on a per problem basis. However, mined data is sparse, noisy, and often incomplete, and dataset curation imposes a heavy experimental cost. An alternative approach is to draw from the success of ML in other areas which incorporate data endogenous to a given domain (e.g., product recommendation systems). Much of the data collected in synthetic chemistry laboratories is derived from the optimization of reactions. While this data is typically leveraged only towards the discovery of optimal conditions, a method which draws from optimization data, quantum chemical calculations, and ML could naturally integrate with synthetic research practices.

Bayesian optimization as an approach to drug development

Published:

Optimization is ubiquitous in pharmaceutical development, from tuning chemical structure to maximize potency to optimizing the yield of a chemical process. Likewise, parameter optimization is omnipresent in artificial intelligence, from tuning virtual personal assistants to training social media and product recommendation systems. Owing to the high cost associated with carrying out experiments, scientists in both areas set numerous (hyper)parameter values by evaluating only a small subset of the possible configurations. Bayesian optimization, an iterative response surface-based global optimization algorithm, has demonstrated exceptional performance in the tuning of machine learning models. Here we report the development of a framework for Bayesian reaction optimization and an open-source software tool that allows chemists to easily integrate state-of-the-art optimization algorithms into their everyday laboratory practices. We collect a large benchmark dataset for a palladium-catalysed direct arylation reaction, perform a systematic study of Bayesian optimization compared to human decision-making in reaction optimization, and apply Bayesian optimization to two real-world optimization efforts (Mitsunobu and deoxyfluorination reactions). Benchmarking is accomplished via an online game that links the decisions made by expert chemists and engineers to real experiments run in the laboratory. Our findings demonstrate that Bayesian optimization outperforms human decision making in both average optimization efficiency (number of experiments) and consistency (variance of outcome against initially available data). Overall, our studies suggest that adopting Bayesian optimization methods into everyday laboratory practices could facilitate more efficient synthesis of functional chemicals by enabling better-informed, data-driven decisions about which experiments to run.

teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.