Reinforcement learning prioritizes general applicability in reaction optimization
Published in ChemRxiv, 2023
Recommended citation: Wang, Jason Y.; Stevens, Jason M.; Kariofillis, Stavros K.; Tom, Mai-Jan; Li, Jun; Tabora, Jose E.; Parasram, Marvin; Shields, Benjamin J.; Primer, David; Hao, Bo; Valle, David D.; DiSomma, Stacey; Furman, Ariel; Zipp, Greg G.; Melnikov, Sergey; Paulson, James; Doyle, Abigail G. "Reinforcement learning prioritizes general applicability in reaction optimization", ChemRxiv, 2023. https://doi.org/10.26434/chemrxiv-2023-dcg9d
Statistical methods in chemistry have a rich history, but only recently has ML gained widespread attention in reaction development. As the untapped potential of ML continues to be explored, new tools are likely to emerge from future research. Our studies suggest that supervised ML can improve predictions of reaction yield relative to simpler modeling methods and can facilitate mechanistic understanding of reaction dynamics. However, further research and development are required to establish ML as an indispensable tool in reactivity modeling.