https://doi.org/10.71352/ac.58.030825
A reproducing kernel framework for interpretable parameter estimation and learning in dynamical systems
Abstract. In this work, we investigate a suite of machine learning models (including Kernel Ridge Regression, Gaussian Process Regression, Convolutional Neural Networks, and Multi-Layer Perceptrons) for parameter inference in systems of differential equations. By combining these models with explainability techniques such as SHapley Additive exPlanations (SHAP), we extract explicit feature-to-parameter mappings, offering deeper insight into the inference process. Building on these insights, we propose lightweight, hand-engineered estimators that perform parameter estimation without requiring complex optimization. Additionally, we introduce a systematic methodology for dataset generation, incorporating time-series simulation and diverse feature extraction. Our results demonstrate that explainability-driven modeling can achieve accurate, interpretable, and computationally efficient parameter estimation, offering a new perspective on the integration of machine learning with domain-specific modeling.
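To make the abstract's pipeline concrete, the sketch below illustrates one possible instance of the general workflow it describes: simulate a parametrized dynamical system, extract summary features from the trajectories, train a kernel ridge regressor to recover the parameter, and rank feature contributions. It is not taken from the paper; the logistic-growth system, the hand-picked features, and the use of scikit-learn's permutation importance in place of SHAP are all illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.kernel_ridge import KernelRidge
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical test system: logistic growth dx/dt = r*x*(1 - x/K).
# The task is to recover r from summary features of a simulated trajectory.
def simulate(r, K=10.0, x0=0.5, t_max=10.0, n=50):
    sol = solve_ivp(lambda t, x: r * x * (1 - x / K),
                    (0.0, t_max), [x0],
                    t_eval=np.linspace(0.0, t_max, n))
    return sol.y[0]

def features(x):
    # Simple hand-picked summary statistics of the trajectory.
    dx = np.diff(x)
    return np.array([x.mean(), x.std(), x[-1], dx.max(), np.argmax(dx)])

# Dataset generation: sample parameters, simulate, featurize.
r_true = rng.uniform(0.2, 2.0, size=500)
X = np.stack([features(simulate(r)) for r in r_true])
X_tr, X_te, y_tr, y_te = train_test_split(X, r_true, random_state=0)

# Kernel ridge regression (with feature scaling) as the parameter estimator.
model = make_pipeline(StandardScaler(),
                      KernelRidge(kernel="rbf", alpha=1e-3)).fit(X_tr, y_tr)
print("test R^2:", model.score(X_te, y_te))

# Permutation importance as a stand-in for SHAP: rank which trajectory
# features drive the estimate of r.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
feature_names = ["mean", "std", "final value", "max slope", "argmax slope"]
for name, val in zip(feature_names, imp.importances_mean):
    print(f"{name}: {val:.3f}")
```

Under this kind of setup, the feature-importance ranking is what suggests which summary statistics a lightweight, hand-engineered estimator could rely on instead of the full learned model.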
Full text PDF