Transferability of MACE Graph Neural Network for Range Corrected Δ-Machine Learning Potential QM/MM Applications

The Journal of Physical Chemistry B | DOI: 10.1021/acs.jpcb.5c02006 | Published: 2025-05-28


Timothy J. Giese, Jinzhe Zeng, Darrin M. York


Abstract

We previously introduced a “range corrected” Δ-machine learning potential (ΔMLP) that used deep neural networks to improve the accuracy of combined quantum mechanical/molecular mechanical (QM/MM) simulations by correcting both the internal QM and QM/MM interaction energies and forces [J. Chem. Theory Comput. 2021, 17, 6993–7009]. The present work extends this approach to include graph neural networks. Specifically, the approach is applied to the MACE message passing neural network architecture, and a series of AM1/d + MACE models are trained to reproduce PBE0/6-31G* QM/MM energies and forces of model phosphoryl transesterification reactions. Several models are designed to test the transferability of AM1/d + MACE by varying the amount of training data and calculating free energy surfaces of reactions that were not included in the parameter refinement. The transferability is compared to AM1/d + DP models that use the DeepPot-SE (DP) deep neural network architecture. The AM1/d + MACE models are found to reproduce the target free energy surfaces even in instances where the AM1/d + DP models exhibit inaccuracies. We train “end-state” models that include data only from the reactant and product states of the six reactions. Unlike the uncorrected AM1/d profiles, the AM1/d + MACE method correctly reproduces a stable pentacoordinated phosphorus intermediate even though the training did not include structures with a similar bonding pattern. Furthermore, the message passing mechanism hyperparameters defining the MACE network are varied to explore their effect on the model’s accuracy and performance. The AM1/d + MACE simulations are 28% slower than AM1/d QM/MM when the ΔMLP correction is performed on a graphics processing unit. Our results suggest that the MACE architecture may lead to ΔMLP models with improved transferability.
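The “range corrected” ΔMLP idea summarized in the abstract can be sketched conceptually as follows: the total energy is the low-level QM/MM energy plus an ML correction to the internal QM energy, with QM/MM interaction corrections smoothly switched off beyond a cutoff. This is a minimal illustrative sketch, not the authors' implementation; the function names, cutoff values, and switching form are assumptions.

```python
def switch(r, r_on=4.0, r_off=6.0):
    """Smooth switching function: 1 for r <= r_on, 0 for r >= r_off.
    Cubic smoothstep in between. Cutoffs are illustrative, not the
    published values."""
    if r <= r_on:
        return 1.0
    if r >= r_off:
        return 0.0
    x = (r - r_on) / (r_off - r_on)
    return 1.0 - (3.0 * x**2 - 2.0 * x**3)

def delta_mlp_energy(e_low, e_ml_internal, pair_corrections):
    """Hypothetical ΔMLP total energy.

    e_low            -- low-level (e.g. AM1/d) QM/MM energy
    e_ml_internal    -- ML correction to the internal QM energy
    pair_corrections -- iterable of (delta_e, r) for QM/MM pair
                        interaction corrections at separation r
    """
    e = e_low + e_ml_internal
    for delta_e, r in pair_corrections:
        # Range correction: ML pair term fades out smoothly with distance.
        e += switch(r) * delta_e
    return e
```

In this picture, corrections to QM/MM interactions beyond the switching region vanish, so the ΔMLP model reduces to the uncorrected low-level potential at long range.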