Publications
If you have questions regarding my publications, please feel free to contact me. You can also find my research on Google Scholar and ResearchGate. My persistent digital identifier is ORCID 0000-0002-3626-7925.
Preprints
Abstract
Nonlinear balanced truncation is a model order reduction technique that reduces the dimension of nonlinear systems in a manner that accounts for either open- or closed-loop observability and controllability aspects of the system. Two computational challenges have so far prevented its deployment on large-scale systems: (a) the energy functions required for characterization of controllability and observability are solutions of high-dimensional Hamilton-Jacobi-(Bellman) equations, which have been computationally intractable, and (b) the transformations to construct the reduced-order models (ROMs) are potentially ill-conditioned and the resulting ROMs are difficult to simulate on the nonlinear balanced manifolds. Part 1 of this two-part article addressed challenge (a) via a scalable tensor-based method to solve for polynomial approximations of the open- and closed-loop energy functions. This article (Part 2) addresses challenge (b) by presenting a novel and scalable method to reduce the dimensionality of the full-order model via model reduction on polynomially-nonlinear balanced manifolds. The associated nonlinear state transformation simultaneously 'diagonalizes' the relevant energy functions in the new coordinates. Since this nonlinear balancing transformation can be ill-conditioned and expensive to evaluate, inspired by the linear case, we develop a computationally efficient balance-and-reduce strategy, resulting in a scalable and better-conditioned truncated transformation that produces balanced nonlinear ROMs. The algorithm is demonstrated on a semi-discretized partial differential equation, namely the Burgers equation, which illustrates that higher-degree transformations can improve the accuracy of ROM outputs.
Bibtex
@article {KGB_NonlinearBT_Part2,
author = {Kramer, Boris and Gugercin, Serkan and Borggaard, Jeff},
title = {Nonlinear Balanced Truncation: Part 2--Model Reduction on Manifolds},
year = {2023},
note = {arXiv:2302.02036}
}
Abstract
Mathematical models are indispensable to the system biology toolkit for studying the structure and behavior of intracellular signaling networks. A common approach to modeling is to develop a system of equations that encode the known biology using approximations and simplifying assumptions. As a result, the same signaling pathway can be represented by multiple models, each with its set of underlying assumptions, which opens up challenges for model selection and decreases certainty in model predictions. Here, we use Bayesian multimodel inference to develop a framework to increase certainty in systems biology models. Using models of the extracellular regulated kinase (ERK) pathway, we first show that multimodel inference increases predictive certainty and yields predictors that are robust to changes in the set of available models. We then show that predictions made with multimodel inference are robust to data uncertainties introduced by decreasing the measurement duration and reducing the sample size. Finally, we use multimodel inference to identify a new model to explain experimentally measured sub-cellular location-specific ERK activity dynamics. In summary, our framework highlights multimodel inference as a disciplined approach to increasing the certainty of intracellular signaling activity predictions.
Bibtex
@article {LiZhKrRa,
author = {Linden-Santangeli, N. and Zhang, J. and Kramer, B. and Rangamani, P.},
title = {Increasing certainty in systems biology models using Bayesian multimodel inference},
year = {2024},
doi = {10.1101/2024.06.16.599231}
}
Abstract
We present a scalable tensor-based approach to computing input-normal/output-diagonal nonlinear balancing transformations for control-affine systems with polynomial nonlinearities. This transformation is necessary to determine the states that can be truncated when forming a reduced-order model. Given a polynomial representation for the controllability and observability energy functions, we derive the explicit equations to compute a polynomial transformation to induce input-normal/output-diagonal structure in the energy functions in the transformed coordinates. The transformation is computed degree-by-degree, similar to previous Taylor-series approaches in the literature. However, unlike previous works, we provide a detailed analysis of the transformation equations in Kronecker product form to enable a scalable implementation. We derive the explicit algebraic structure for the equations, present rigorous analyses for the solvability and algorithmic complexity of those equations, and provide general purpose open-source software implementations for the proposed algorithms to stimulate broader use of nonlinear balanced truncation model reduction. We demonstrate that with our efficient implementation, computing the nonlinear transformation is approximately as expensive as computing the energy functions.
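Schematically, the object computed here is a degree-d polynomial state transformation, written with Kronecker powers, that brings the energy functions into (approximately) input-normal/output-diagonal form. The notation below is illustrative background in the spirit of the nonlinear-balancing literature, not the paper's exact statement:

x = \Phi(z) = T_1 z + T_2 (z \otimes z) + \cdots + T_d\, z^{\otimes d}, \qquad
\mathcal{E}_c(\Phi(z)) \approx \tfrac{1}{2}\, z^\top z, \qquad
\mathcal{E}_o(\Phi(z)) \approx \tfrac{1}{2}\, z^\top \Sigma(z)\, z \ \ (\Sigma(z) \text{ diagonal}),

where the coefficient matrices T_k are computed degree-by-degree and the diagonal entries of \Sigma(z) play the role of state-dependent singular value functions.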
Bibtex
@article {CoSaSchKr_scalable_Computation_balanced_realization,
author = {Corbin, Nicholas and Sarkar, Arijit and Scherpen, Jacquelien M.A. and Kramer, Boris},
title = {Scalable computation of input-normal/output-diagonal balanced realization for control-affine polynomial systems},
year = {2024},
note = {arXiv:2410.22435}
}
Abstract
Hamiltonian operator inference has been developed in [Sharma, H., Wang, Z., Kramer, B., Physica D: Nonlinear Phenomena, 431, p.133122, 2022] to learn structure-preserving reduced-order models (ROMs) for Hamiltonian systems. The method constructs a low-dimensional model using only data and knowledge of the functional form of the Hamiltonian. The resulting ROMs preserve the intrinsic structure of the system, ensuring that the mechanical and physical properties of the system are maintained. In this work, we extend this approach to port-Hamiltonian systems, which generalize Hamiltonian systems by including energy dissipation, external input, and output. Based on snapshots of the system's state and output, together with the information about the functional form of the Hamiltonian, reduced operators are inferred through optimization and are then used to construct data-driven ROMs. To further alleviate the complexity of evaluating nonlinear terms in the ROMs, a hyper-reduction method via discrete empirical interpolation is applied. Accordingly, we derive error estimates for the ROM approximations of the state and output. Finally, we demonstrate the structure preservation, as well as the accuracy of the proposed port-Hamiltonian operator inference framework, through numerical experiments on a linear mass-spring-damper problem and a nonlinear Toda lattice problem.
Bibtex
@article {GeJuKraWa_pHOPINF_2025,
author = {Geng, Yuwei and Ju, Lili and Kramer, Boris and Wang, Zhu},
title = {Data-driven reduced-order models for port-{H}amiltonian systems with operator inference},
year = {2025},
note = {arXiv:2501.02183}
}
Abstract
Numerical simulations of complex multiphysics systems, such as char combustion considered herein, yield numerous state variables that inherently exhibit physical constraints. This paper presents a new approach to augment Operator Inference -- a methodology within scientific machine learning that enables learning from data a low-dimensional representation of a high-dimensional system governed by nonlinear partial differential equations -- by embedding such state constraints in the reduced-order model predictions. In the model learning process, we propose a new way to choose regularization hyperparameters based on a key performance indicator. Since embedding state constraints improves the stability of the Operator Inference reduced-order model, we compare the proposed state constraints-embedded Operator Inference with the standard Operator Inference and other stability-enhancing approaches. For an application to char combustion, we demonstrate that the proposed approach yields state predictions superior to the other methods regarding stability and accuracy. It extrapolates over 200% past the training regime while being computationally efficient and physically consistent.
Bibtex
@article{KimKra_OpInf_State_Constraints_2025,
author = {Kim, Hyeonghun and Kramer, Boris},
title = {Physically consistent predictive reduced-order modeling by enhancing Operator Inference with state constraints},
year = {2025},
note = {arXiv:2502.03672}
}
Abstract
Dynamical systems with quadratic or polynomial drift exhibit complex dynamics, yet compared to nonlinear systems in general form, are often easier to analyze, simulate, control, and learn. Results going back over a century have shown that the majority of nonpolynomial nonlinear systems can be recast in polynomial form, and their degree can be reduced further to quadratic. This process of polynomialization/quadratization reveals new variables (in most cases, additional variables have to be added to achieve this) in which the system dynamics adhere to that specific form, which leads us to discover new structures of a model. This chapter summarizes the state of the art for the discovery of polynomial and quadratic representations of finite-dimensional dynamical systems. We review known existence results, discuss the two prevalent algorithms for automating the discovery process, and give examples in the form of a single-layer neural network and a phenomenological model of cell signaling.
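A minimal textbook-style illustration of quadratization (not taken from the chapter): the scalar cubic system \dot{x} = x^3 becomes quadratic after introducing the auxiliary variable y = x^2, since

\dot{x} = x^3 = x\,y, \qquad \dot{y} = 2x\dot{x} = 2x^4 = 2y^2,

so the lifted two-dimensional system has only quadratic right-hand sides, at the cost of one additional state.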
Bibtex
@article{KraPog_survey_quadratization_2025,
author = {Kramer, Boris and Pogudin, Gleb},
title = {Discovering Polynomial and Quadratic Structure in Nonlinear Ordinary Differential Equations},
year = {2025},
note = {arXiv:2502.10005}
}
Bibtex
@article{ShaDraKra_quadratize_energy_2025,
author = {Sharma, Harsh and Draxl Giannoni, Juan Diego and Kramer, Boris},
title = {Nonlinear energy-preserving model reduction with lifting transformations that quadratize the energy},
year = {2025},
note = {arXiv:2503.02273}
}
Bibtex
@article{KanKimKra_pOPINF_purging_2025,
author = {Kang, Seunghyon and Kim, Hyeonghun and Kramer, Boris},
title = {Parametric Operator Inference to simulate the purging process in semiconductor manufacturing},
year = {2025},
note = {arXiv:2504.03990}
}
Abstract
Kinetic simulations are computationally intensive due to six-dimensional phase space discretization. Many kinetic spectral solvers use the asymmetrically weighted Hermite expansion due to its conservation and fluid-kinetic coupling properties, i.e., the lower-order Hermite moments capture and describe the macroscopic fluid dynamics and higher-order Hermite moments describe the microscopic kinetic dynamics. We leverage this structure by developing a parametric data-driven reduced-order model based on the proper orthogonal decomposition, which projects the higher-order kinetic moments while retaining the fluid moments intact. This approach can also be understood as learning a nonlocal closure via a reduced modal decomposition. We demonstrate analytically and numerically that the method ensures local and global mass, momentum, and energy conservation. The numerical results show that the proposed method effectively replicates the high-dimensional spectral simulations at a fraction of the computational cost and memory, as validated on the weak Landau damping and two-stream instability benchmark problems.
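The moment-split projection idea can be sketched in a few lines of Python/NumPy; the snapshot layout, variable names, and data file below are assumptions for illustration only, and the actual spectral solver is not shown:

import numpy as np

# Assumed snapshot layout: rows are Hermite moment coefficients, columns are time instances.
# The first n_fluid rows (low-order/fluid moments) are retained exactly; the remaining
# rows (higher-order/kinetic moments) are compressed with a POD basis.
def build_split_basis(snapshots, n_fluid, r):
    kinetic = snapshots[n_fluid:, :]                # higher-order moments only
    U, s, _ = np.linalg.svd(kinetic, full_matrices=False)
    basis = np.zeros((snapshots.shape[0], n_fluid + r))
    basis[:n_fluid, :n_fluid] = np.eye(n_fluid)     # identity block: fluid moments kept intact
    basis[n_fluid:, n_fluid:] = U[:, :r]            # POD block: kinetic moments projected
    return basis

# Hypothetical usage:
# X = np.load("hermite_snapshots.npy")
# V = build_split_basis(X, n_fluid=3, r=20)
# x_hat = V.T @ X[:, 0]        # reduced coordinates
# x_rec = V @ x_hat            # lifted approximation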
Bibtex
@article{IsKoHaDelKra_ROM_fluid-kinetic-solver_2025,
author = {Issan, Opal and Koshkarov, Oleksandr and Halpern, Federico D and Delzanno, Gian Luca and Kramer, Boris},
title = {Conservative data-driven model order reduction of a fluid-kinetic spectral solver},
year = {2025},
note = {arXiv:2504.09682}
}
Journal Publications
Abstract
This work presents a robust design optimization approach for a char combustion process in a limited data setting, where simulations of the fluid-solid coupled system are computationally expensive. We integrate a polynomial dimensional decomposition (PDD) surrogate model into the design optimization and induce computational efficiency in three key areas. First, we transform the input random variables to have fixed probability measures, which eliminates the need to recalculate the PDD’s basis functions associated with these probability quantities. Second, using the limited data available from a physics-based high-fidelity solver, we estimate the PDD coefficients via sparsity-promoting diffeomorphic modulation under observable response preserving homotopy regression. Third, we propose a single pass surrogate model training that avoids the need to generate new training data and update the PDD coefficients during the derivative-free optimization. The results provide insights for optimizing process parameters to ensure consistently high energy production from char combustion.
Bibtex
@article {GuLeKra25_RDO_Char_Combustion,
author = {Guo, Yulin and Lee, Dongjin and Kramer, Boris},
title = {Robust Design Optimization with Limited Data for Char Combustion},
year = {2025},
journal = {Structural \& Multidisciplinary Optimization},
volume = {68},
pages = {59},
doi = {10.1007/s00158-025-03988-y}
}
Abstract
We derive conservative closures of the Vlasov-Poisson equations discretized in velocity via the symmetrically weighted Hermite spectral expansion. This short note analyzes whether the conservative closures preserve the hyperbolicity and anti-symmetry of the Vlasov equation. Furthermore, we numerically verify the analytically derived conservative closures by simulating a classic electrostatic benchmark problem: the Langmuir wave. The numerical results and the analysis show that the closure by truncation is the most suitable conservative closure for the symmetrically weighted Hermite formulation.
Bibtex
@article {IsKoHaKraDe25_conservativeClosures_Vlasov_Poisson,
author = {Issan, Opal and Koshkarov, Oleksandr and Halpern, Federico D and Kramer, Boris and Delzanno, Gian Luca},
title = {Conservative closures of the {V}lasov-{P}oisson equations discretized with a symmetrically weighted {H}ermite spectral expansion in velocity},
journal = {Journal of Computational Physics},
volume = {524},
pages = {113741},
year = {2025},
doi = {10.1016/j.jcp.2025.113741}
}
Abstract
We present a scalable approach to computing nonlinear balancing energy functions for control-affine systems with polynomial nonlinearities. Al'brekht's power-series method is used to solve the Hamilton-Jacobi-Bellman equations for polynomial approximations to the energy functions. The contribution of this article lies in the numerical implementation of the method based on the Kronecker product, enabling scalability to over 1000 state dimensions. The tensor structure and symmetries arising from the Kronecker product representation are key to the development of efficient and scalable algorithms. We derive the explicit algebraic structure for the equations, present rigorous theory for the solvability and algorithmic complexity of those equations, and provide general purpose open-source software implementations for the proposed algorithms. The method is illustrated on two simple academic models, followed by a high-dimensional semidiscretized PDE model of dimension as large as n=1080.
Bibtex
@article {CoKr24_scalable_Computation_Hinfinity_EnergyFcts,
author = {Corbin, Nicholas and Kramer, Boris},
title = {Scalable Computation of $\mathcal{H}_\infty$ Energy Functions for Polynomial Control-Affine Systems},
journal = {IEEE Transactions on Automatic Control},
year = {2024},
doi = {10.1109/TAC.2024.3494472}
}
Abstract
This paper presents a structure-preserving Bayesian approach for learning nonseparable Hamiltonian systems using stochastic dynamic models allowing for statistically-dependent, vector-valued additive and multiplicative measurement noise. The approach comprises three main facets. First, we derive a Gaussian filter for a statistically-dependent, vector-valued, additive and multiplicative noise model that is needed to evaluate the likelihood within the Bayesian posterior. Second, we develop a novel algorithm for cost-effective application of Bayesian system identification to high-dimensional systems. Third, we demonstrate how structure-preserving methods can be incorporated into the proposed framework, using nonseparable Hamiltonians as an illustrative system class. We compare the Bayesian method to a state-of-the-art machine learning method on a canonical nonseparable Hamiltonian model and a chaotic double pendulum model with small, noisy training datasets. The results show that using the Bayesian posterior as a training objective can yield upwards of 724 times improvement in Hamiltonian mean squared error using training data with up to 10% multiplicative noise compared to a standard training objective. Lastly, we demonstrate the utility of the novel algorithm for parameter estimation of a 64-dimensional model of the spatially-discretized nonlinear Schrödinger equation with data corrupted by up to 20% multiplicative noise.
Bibtex
@article{GaShKrGo_BayesianID_nonseparable_Hamiltonians_2024,
title = {Bayesian identification of nonseparable {H}amiltonians with multiplicative noise using deep learning and reduced-order modeling},
author = {Galioto, N. and Sharma, H. and Kramer, B. and Gorodetsky, A.A.},
journal = {Computer Methods in Applied Mechanics and Engineering},
volume = {430},
pages = {117194},
year = {2024},
doi = {10.1016/j.cma.2024.117194}
}
Abstract
We analyze the anti-symmetric properties of a spectral discretization for the one-dimensional Vlasov-Poisson equations. The discretization is based on a spectral expansion in velocity with the symmetrically weighted Hermite basis functions, central finite differencing in space, and an implicit Runge-Kutta integrator in time. The proposed discretization preserves the anti-symmetric structure of the advection operator in the Vlasov equation, resulting in a stable numerical method. We apply such discretization to two formulations: the canonical Vlasov-Poisson equations and their continuously transformed square-root representation. The latter preserves the positivity of the particle distribution function. We derive analytically the conservation properties of both formulations, including particle number, momentum, and energy, which are verified numerically on the following benchmark problems: manufactured solution, linear and nonlinear Landau damping, two-stream instability, and bump-on-tail instability.
Bibtex
@article{IsKoHaKrDe_AWSPS-Vlasov-Poisson_2024,
title = {Anti-symmetric and Positivity Preserving Formulation of a Spectral Method for {V}lasov-{P}oisson Equations},
author = {Issan, O. and Koshkarov, O. and Halpern, F.D. and Kramer, B. and Delzanno, G.L.},
journal = {Journal of Computational Physics},
volume = {514},
pages = {113263},
year = {2024},
doi = {10.1016/j.jcp.2024.113263},
}
Abstract
Forward simulation-based uncertainty quantification that studies the distribution of quantities of interest (QoI) is a crucial component for computationally robust engineering design and prediction. There is a large body of literature devoted to accurately assessing statistics of QoIs, and in particular, multilevel or multifidelity approaches are known to be effective, leveraging cost-accuracy tradeoffs between a given ensemble of models. However, effective algorithms that can estimate the full distribution of QoIs are still under active development. In this paper, we introduce a general multifidelity framework for estimating the cumulative distribution function (CDF) of a vector-valued QoI associated with a high-fidelity model under a budget constraint. Given a family of appropriate control variates obtained from lower-fidelity surrogates, our framework involves identifying the most cost-effective model subset and then using it to build an approximate control variates estimator for the target CDF. We instantiate the framework by constructing a family of control variates using intermediate linear approximators and rigorously analyze the corresponding algorithm. Our analysis reveals that the resulting CDF estimator is uniformly consistent and asymptotically optimal as the budget tends to infinity, with only mild moment and regularity assumptions on the joint distribution of QoIs. The approach provides a robust multifidelity CDF estimator that is adaptive to the available budget, does not require a priori knowledge of cross-model statistics or model hierarchy, and applies to multiple dimensions. We demonstrate the efficiency and robustness of the approach using test examples of parametric PDEs and stochastic differential equations including both academic instances and more challenging engineering problems.
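The flavor of the estimator can be conveyed by a minimal two-model control-variate sketch for a single CDF value; this simplification assumes one low-fidelity surrogate with many extra cheap samples and omits the paper's model-selection and vector-valued machinery:

import numpy as np

def cv_cdf_estimate(y_hi, y_lo_paired, y_lo_many, t):
    # y_hi        : few high-fidelity output samples
    # y_lo_paired : low-fidelity outputs at the same inputs as y_hi
    # y_lo_many   : many additional low-fidelity samples (cheap to generate)
    ind_hi = (y_hi <= t).astype(float)
    ind_lo = (y_lo_paired <= t).astype(float)
    ind_lo_many = (y_lo_many <= t).astype(float)
    cov = np.cov(ind_hi, ind_lo)                  # 2x2 sample covariance of the indicator variables
    alpha = cov[0, 1] / max(cov[1, 1], 1e-12)     # estimated control-variate weight
    # Plain Monte Carlo estimate of P(Y_hi <= t), corrected by the low-fidelity discrepancy.
    return ind_hi.mean() - alpha * (ind_lo.mean() - ind_lo_many.mean())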
Bibtex
@article{HaKrLeNaXu_controlvariate_MF_distribution_estimation_2023,
title = {An approximate control variates approach to multifidelity distribution estimation},
author = {Han, R. and Kramer, B. and Lee, D. and Narayan, A. and Xu, Y.},
journal = {SIAM/ASA Journal on Uncertainty Quantification},
volume = {12},
issue = {4},
pages = {1349-1388},
year = {2024},
doi = {10.1137/23M1584307}
}
Abstract
In uncertainty quantification, variance-based global sensitivity analysis quantitatively determines the effect of each input random variable on the output by partitioning the total output variance into contributions from each input. However, computing conditional expectations can be prohibitively costly when working with expensive-to-evaluate models. Surrogate models can accelerate this, yet their accuracy depends on the quality and quantity of training data, which is expensive to generate (experimentally or computationally) for complex engineering systems. Thus, methods that work with limited data are desirable. We propose a diffeomorphic modulation under observable response preserving homotopy (D-MORPH) regression to train a polynomial dimensional decomposition surrogate of the output that minimizes the number of training data. The new method first computes a sparse Lasso solution and uses it to define the cost function. A subsequent D-MORPH regression minimizes the difference between the D-MORPH and Lasso solution. The resulting D-MORPH based surrogate is more robust to input variations and more accurate with limited training data. We illustrate the accuracy and computational efficiency of the new surrogate for global sensitivity analysis using mathematical functions and an expensive-to-simulate model of char combustion. The new method is highly efficient, requiring only 15% of the training data compared to conventional regression.
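For context, first-order Sobol indices can themselves be estimated with a standard pick-and-freeze Monte Carlo scheme; the sketch below uses a common Saltelli-style estimator on a generic model f over the unit hypercube and is not the surrogate-based procedure of the paper:

import numpy as np

def first_order_sobol(f, d, n, seed=0):
    # Estimate first-order Sobol indices of f on [0, 1]^d using n base samples per matrix.
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    S = np.zeros(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # freeze all inputs except the i-th
        S[i] = np.mean(fB * (f(ABi) - fA)) / var  # Saltelli-type first-order estimator
    return S

# Hypothetical usage on a toy test function:
# S = first_order_sobol(lambda X: np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2, d=2, n=20_000)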
Bibtex
@article{LeLaKr_GSA_limiteddata_combustion_2024,
title = {Global sensitivity analysis with limited data via sparsity-promoting {D-MORPH} regression: Application to char combustion},
journal = {Journal of Computational Physics},
volume = {511},
pages = {113116},
year = {2024},
doi = {10.1016/j.jcp.2024.113116},
author = {Dongjin Lee and Elle Lavichant and Boris Kramer}
}
Abstract
Hamiltonian Operator Inference has been introduced in [Sharma, H., Wang, Z., Kramer, B., Physica D: Nonlinear Phenomena, 431, p.133122, 2022] to learn structure-preserving reduced-order models (ROMs) for Hamiltonian systems. This approach constructs a low-dimensional model using only data and knowledge of the Hamiltonian function. Such ROMs can keep the intrinsic structure of the system, allowing them to capture the physics described by the governing equations. In this work, we extend this approach to more general systems that are either conservative or dissipative in energy, and which possess a gradient structure. We derive the optimization problems for inferring structure-preserving ROMs that preserve the gradient structure. We further derive an a priori error estimate for the reduced-order approximation. To test the algorithms, we consider semi-discretized partial differential equations with gradient structure, such as the parameterized wave and Korteweg-de Vries equations in the conservative case and the one- and two-dimensional Allen-Cahn equations in the dissipative case. The numerical results illustrate the accuracy, structure-preservation properties, and predictive capabilities of the gradient-preserving Operator Inference ROMs.
Bibtex
@article{GeSiJuKrWa_gradientpreserving_OPINF_2024,
title = {Gradient preserving operator inference: data-driven reduced-order models for equations with gradient structure},
author = {Geng, Y. and Singh, J. and Ju, L. and Kramer, B. and Wang, Z.},
journal = {Computer Methods in Applied Mechanics and Engineering},
volume = {427},
pages = {117033},
year = {2024},
doi = {10.1016/j.cma.2024.117033}
}
Abstract
Nonlinear balanced truncation is a model order reduction technique that reduces the dimension of nonlinear systems in a manner that accounts for either open- or closed-loop observability and controllability aspects of the system. Two computational challenges have so far prevented its deployment on large-scale systems: (a) the energy functions required for characterization of controllability and observability are solutions of high-dimensional Hamilton-Jacobi-(Bellman) equations, and (b) efficient model reduction and subsequent reduced-order model (ROM) simulation on the resulting nonlinear balanced manifolds. This work proposes a unifying and scalable approach to challenge (a) by considering a Taylor series-based approach to solve a class of parametrized Hamilton-Jacobi-Bellman equations that are at the core of the balancing approach. The value of a formulation parameter provides either open-loop balancing or a variety of closed-loop balancing options. To solve for coefficients of the Taylor-series approximation to the energy functions, the presented method derives a linear tensor structure and heavily utilizes this to solve structured linear systems with billions of unknowns. The strength and scalability of the algorithm are demonstrated on two semi-discretized partial differential equations, namely the Burgers equation and the Kuramoto-Sivashinsky equation.
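Schematically, the energy functions are sought as Taylor-series expansions written with Kronecker powers of the state (illustrative notation),

\mathcal{E}(x) \approx \tfrac{1}{2} \sum_{k=2}^{d} w_k^\top\, x^{\otimes k},

where each coefficient vector w_k solves a structured linear system whose matrix inherits a Kronecker (tensor) structure from the system operators; exploiting that structure is what keeps the solves tractable at the scales reported here.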
Bibtex
@article {KGB_NonlinearBT_Part1,
author = {Kramer, Boris and Gugercin, Serkan and Borggaard, Jeff and Balicki, Linus},
title = {Scalable computation of energy functions for nonlinear balanced truncation},
journal = {Computer Methods in Applied Mechanics and Engineering},
volume = {427},
pages = {117011},
year = {2024},
doi = {10.1016/j.cma.2024.117011}
}
Abstract
This work presents a nonintrusive physics-preserving method to learn reduced-order models (ROMs) of Lagrangian mechanical systems. Existing intrusive projection-based model reduction approaches construct structure-preserving Lagrangian ROMs by projecting the Euler-Lagrange equations of the full-order model (FOM) onto a linear subspace. This Galerkin projection step requires complete knowledge about the Lagrangian operators in the FOM and full access to manipulate the computer code. In contrast, the proposed Lagrangian operator inference approach embeds the mechanics into the operator inference framework to develop a data-driven model reduction method that preserves the underlying Lagrangian structure. The proposed approach exploits knowledge of the governing equations (but not their discretization) to define the form and parametrization of a Lagrangian ROM which can then be learned from projected snapshot data. The method does not require access to FOM operators or computer code. The numerical results demonstrate Lagrangian operator inference on an Euler-Bernoulli beam model and a large-scale discretization of a soft robot fishtail with 779,232 degrees of freedom. Accurate long-time predictions of the learned Lagrangian ROMs far outside the training time interval illustrate their generalizability.
Bibtex
@article{SK_LagrangianOPINF,
title = {Preserving {L}agrangian structure in data-driven reduced-order modeling of large-scale mechanical systems},
author = {Sharma, H. and Kramer, B.},
journal = {Physica D: Nonlinear Phenomena},
volume = {462},
pages = {134128},
year = {2024},
doi = {10.1016/j.physd.2024.134128}
}
Abstract
Complex mechanical systems often exhibit strongly nonlinear behavior due to the presence of nonlinearities in the energy dissipation mechanisms, material constitutive relationships, or geometric/connectivity mechanics. Numerical modeling of these systems leads to nonlinear full-order models that possess an underlying Lagrangian structure. This work proposes a Lagrangian operator inference method enhanced with structure-preserving machine learning to learn nonlinear reduced-order models (ROMs) of nonlinear mechanical systems. This two-step approach first learns the best-fit linear Lagrangian ROM via Lagrangian operator inference and then presents a structure-preserving machine learning method to learn nonlinearities in the reduced space. The proposed approach can learn a structure-preserving nonlinear ROM purely from data, unlike the existing operator inference approaches that require knowledge about the mathematical form of nonlinear terms. From a machine learning perspective, it accelerates the training of the structure-preserving neural network by providing an informed prior (i.e., the linear Lagrangian ROM structure), and it reduces the computational cost of the network training by operating on the reduced space. The method is first demonstrated on two simulated examples: a conservative nonlinear rod model and a two-dimensional nonlinear membrane with nonlinear internal damping. Finally, the method is demonstrated on an experimental dataset consisting of digital image correlation measurements taken from a lap-joint beam structure from which a predictive model is learned that captures amplitude-dependent frequency and damping characteristics accurately. The numerical results demonstrate that the proposed approach yields generalizable nonlinear ROMs that exhibit bounded energy error, capture the nonlinear characteristics reliably, and provide accurate long-time predictions outside the training data regime.
Bibtex
@article{ShNaToKr-Lagrangian-OPINF-SPML-2024,
title = {Lagrangian operator inference enhanced with structure-preserving machine learning for nonintrusive model reduction of mechanical systems},
author = {Sharma, Harsh and Najera-Flores, David A. and Todd, Michael D. and Kramer, Boris},
journal = {Computer Methods in Applied Mechanics and Engineering},
volume = {423},
pages = {116865},
year = {2024},
doi = {10.1016/j.cma.2024.116865}
}
Abstract
This review discusses Operator Inference, a nonintrusive reduced modeling approach that incorporates physical governing equations by defining a structured polynomial form for the reduced model, and then learns the corresponding reduced operators from simulated training data. The polynomial model form of Operator Inference is sufficiently expressive to cover a wide range of nonlinear dynamics found in fluid mechanics and other fields of science and engineering, while still providing efficient reduced model computations. The learning steps of Operator Inference are rooted in classical projection-based model reduction; thus, some of the rich theory of model reduction can be applied to models learned with Operator Inference. This connection to projection-based model reduction theory offers a pathway toward deriving error estimates and gaining insights to improve predictions. Furthermore, through formulations of Operator Inference that preserve Hamiltonian and other structures, important physical properties such as energy conservation can be guaranteed in the predictions of the reduced model beyond the training horizon. This review illustrates key computational steps of Operator Inference through a large-scale combustion example.
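A bare-bones sketch of the Operator Inference regression for a quadratic model d/dt x ≈ A x + H (x ⊗ x) + B u, learned from reduced state snapshots X, inputs U, and time-derivative estimates Xdot; the variable names and the derivative estimation are assumptions for illustration:

import numpy as np

def operator_inference(X, U, Xdot):
    # X    : (r, k) reduced state snapshots
    # U    : (m, k) input snapshots
    # Xdot : (r, k) time derivatives of the reduced state (e.g., finite differences)
    r, k = X.shape
    X2 = np.einsum("ik,jk->ijk", X, X).reshape(r * r, k)   # columnwise Kronecker products x ⊗ x
    D = np.vstack([X, X2, U]).T                            # data matrix, one row per snapshot
    O, *_ = np.linalg.lstsq(D, Xdot.T, rcond=None)         # least-squares fit of all operators at once
    O = O.T
    return O[:, :r], O[:, r:r + r * r], O[:, r + r * r:]   # A, H, B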
Bibtex
@article{KPW_OPINF_survey_2024,
title = {Learning Nonlinear Reduced Models from Data with Operator Inference},
author = {Kramer, B. and Peherstorfer, B. and Willcox, K.},
journal = {Annual Review of Fluid Mechanics},
volume = {56},
issue = {1},
pages = {521-548},
year = {2024},
doi = {10.1146/annurev-fluid-121021-025220}
}
Abstract
Quadratization of polynomial and nonpolynomial systems of ordinary differential equations is advantageous in a variety of disciplines, such as systems theory, fluid mechanics, chemical reaction modeling and mathematical analysis. A quadratization reveals new variables and structures of a model, which may be easier to analyze, simulate, control, and provides a convenient parametrization for learning. This paper presents novel theory, algorithms and software capabilities for quadratization of non-autonomous ODEs. We provide existence results, depending on the regularity of the input function, for cases when a quadratic-bilinear system can be obtained through quadratization. We further develop existence results and an algorithm that generalizes the process of quadratization for systems with arbitrary dimension that retain the nonlinear structure when the dimension grows. For such systems, we provide dimension-agnostic quadratization. An example is semi-discretized PDEs, where the nonlinear terms remain symbolically identical when the discretization size increases. As an important aspect for practical adoption of this research, we extended the capabilities of the QBee software towards both non-autonomous systems of ODEs and ODEs with arbitrary dimension. We present several examples of ODEs that were previously reported in the literature, and where our new algorithms find quadratized ODE systems with lower dimension than the previously reported lifting transformations. We further highlight an important area of quadratization: reduced-order model learning. This area can benefit significantly from working in the optimal lifting variables, where quadratic models provide a direct parametrization of the model that also avoids additional hyperreduction for the nonlinear terms. A solar wind example highlights these advantages.
Bibtex
@article{ByIsPoKr_Quadratization_2023,
title = {Exact and optimal quadratization of nonlinear finite-dimensional non-autonomous dynamical systems},
author = {Bychkov, A. and Issan, O. and Pogudin, G. and Kramer, B.},
journal = {SIAM Journal on Applied Dynamical Systems},
volume = {23},
number = {1},
pages = {982-1016},
year = {2024},
doi = {10.1137/23M1561129}
}
Abstract
The ambient solar wind plays a significant role in propagating interplanetary coronal mass ejections and is an important driver of space weather geomagnetic storms. A computationally efficient and widely used method to predict the ambient solar wind radial velocity near Earth involves coupling three models: Potential Field Source Surface, Wang-Sheeley-Arge (WSA), and Heliospheric Upwind eXtrapolation. However, the model chain has eleven uncertain parameters that are mainly non-physical due to empirical relations and simplified physics assumptions. We, therefore, propose a comprehensive uncertainty quantification (UQ) framework that is able to successfully quantify and reduce parametric uncertainties in the model chain. The UQ framework utilizes variance-based global sensitivity analysis followed by Bayesian inference via Markov chain Monte Carlo to learn the posterior densities of the most influential parameters. The sensitivity analysis results indicate that the five most influential parameters are all WSA parameters. Additionally, we show that the posterior densities of such influential parameters vary greatly from one Carrington rotation to the next. The influential parameters are trying to overcompensate for the missing physics in the model chain, highlighting the need to enhance the robustness of the model chain to the choice of WSA parameters. The ensemble predictions generated from the learned posterior densities significantly reduce the uncertainty in solar wind velocity predictions near Earth.
Bibtex
@article{IsRiCaKr_BayesianSolarWind_2023,
title = {Bayesian inference and global sensitivity analysis for ambient solar wind prediction},
author = {Issan, O. and Riley, P. and Camporeale, E. and Kramer, B.},
note = {arXiv:2305.08009},
year = {2023}
}
Abstract
This work presents two novel approaches for the symplectic model reduction of high-dimensional Hamiltonian systems using data-driven quadratic manifolds. Classical symplectic model reduction approaches employ linear symplectic subspaces for representing the high-dimensional system states in a reduced-dimensional coordinate system. While these approximations respect the symplectic nature of Hamiltonian systems, the linearity of the approximation imposes a fundamental limitation to the accuracy that can be achieved. We propose two different model reduction methods based on recently developed quadratic manifolds, each presenting its own advantages and limitations. The addition of quadratic terms in the state approximation, which sits at the heart of the proposed methodologies, enables us to better represent intrinsic low-dimensionality in the problem at hand. Both approaches are effective for issuing predictions in settings well outside the range of their training data while providing more accurate solutions than the linear symplectic reduced-order models.
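The data-driven quadratic manifold at the heart of both approaches can be sketched as a two-step least-squares fit; the construction below is a generic (non-symplectic) version with assumed variable names, omitting the structure-preserving constraints of the paper:

import numpy as np

def fit_quadratic_manifold(S, r):
    # Fit x ≈ V q + W (q ⊗ q) with q = V^T x, from an (n x k) snapshot matrix S.
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    V = U[:, :r]                                    # linear (POD) part of the approximation
    Q = V.T @ S                                     # reduced coordinates of the snapshots
    R = S - V @ Q                                   # residual the quadratic term should capture
    Q2 = np.einsum("ik,jk->ijk", Q, Q).reshape(r * r, -1)
    W, *_ = np.linalg.lstsq(Q2.T, R.T, rcond=None)  # minimum-norm fit handles redundant q_i q_j terms
    return V, W.T

# Hypothetical usage and reconstruction of a single snapshot x:
# V, W = fit_quadratic_manifold(S, r=10)
# q = V.T @ x;  x_approx = V @ q + W @ np.kron(q, q)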
Bibtex
@article{ShMuBuGeGlKr_symplecticMORmanifolds_2023,
title = {Symplectic model reduction of {H}amiltonian systems using data-driven quadratic manifolds},
author = {Sharma, H. and Mu, H. and Buchfink, P. and Geelen, R. and Glas, S. and Kramer, B.},
journal = {Computer Methods in Applied Mechanics and Engineering},
volume = {417},
pages = {116402},
year = {2023},
doi = {10.1016/j.cma.2023.116402}
}
Abstract
Despite gradual progress over the past decades, the simulation of progressive damage in composite laminates remains a challenging task, in part due to inherent uncertainties of material properties. This paper combines three computational methods - finite element analysis (FEA), machine learning and Markov Chain Monte Carlo - to estimate the probability density of FEA input parameters while accounting for the variation of mechanical properties. First, 15,000 FEA simulations of open-hole tension tests are carried out with randomly varying input parameters by applying continuum damage mechanics material models. This synthetically-generated data is then used to train and validate a neural network consisting of five hidden layers and 32 nodes per layer to develop a highly efficient surrogate model. With this surrogate model and the incorporation of statistical test data from experiments, the application of Markov Chain Monte Carlo algorithms enables Bayesian parameter estimation to learn the probability density of input parameters for the simulation of progressive damage evolution in fibre reinforced composites. This methodology is validated against various open-hole tension test geometries enabling the determination of virtual design allowables.
Bibtex
@article{ReLiVaZoKr_Bayesian_damage_simulation_composites,
title = {Bayesian parameter estimation for the inclusion of uncertainty in progressive damage simulation of composites},
author = {Reiner, J. and Linden, N. and Vaziri, R. and Zobeiry, N. and Kramer, B.},
journal = {Composite Structures},
volume = {321},
pages = {117257},
doi = {10.1016/j.compstruct.2023.117257},
year = {2023}
}
Abstract
We propose novel methods for Conditional Value-at-Risk (CVaR) estimation for nonlinear systems under high-dimensional dependent random inputs. We propose a DD-GPCE-Kriging surrogate that merges dimensionally decomposed generalized polynomial chaos expansion and Kriging to accurately approximate nonlinear and nonsmooth random outputs. We integrate DD-GPCE-Kriging with (1) Monte Carlo simulation (MCS) and (2) multifidelity importance sampling (MFIS). The MCS-based method samples from DD-GPCE-Kriging, which is efficient and accurate for high-dimensional dependent random inputs. A surrogate model introduces bias, so we propose an MFIS-based method where DD-GPCE-Kriging determines the biasing density efficiently and the high-fidelity model is used to estimate CVaR from biased samples. To speed up the biasing density construction, we compute DD-GPCE-Kriging using a cheap-to-evaluate low-fidelity model. Numerical results for mathematical functions show that the MFIS-based method is more accurate than the MCS-based method when the output is nonsmooth. The scalability of the proposed methods and their applicability to complex engineering problems are demonstrated on a two-dimensional composite laminate with 28 (partly dependent) random inputs and a three-dimensional composite T-joint with 20 (partly dependent) random inputs. In the former, the proposed MFIS-based method achieves 104x speedup compared to standard MCS using the high-fidelity model, while accurately estimating CVaR with 1.15% error.
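For reference, the plain sampling-based CVaR estimator that the surrogate and importance-sampling machinery accelerates can be written in a few lines (one common discrete convention; the DD-GPCE-Kriging and MFIS components are not shown):

import numpy as np

def empirical_cvar(samples, alpha=0.95):
    # Conditional value-at-risk: mean of the worst (1 - alpha) fraction of outcomes.
    y = np.sort(np.asarray(samples))
    var_alpha = np.quantile(y, alpha)   # value-at-risk (alpha-quantile)
    return y[y >= var_alpha].mean()     # average over the tail beyond VaR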
Bibtex
@article{LeeKramer_Bifidelity_CVAR_DDGPCE-Kriging,
title = {Multifidelity conditional value-at-risk estimation by dimensionally decomposed generalized polynomial chaos-Kriging},
author = {Lee, D. and Kramer, B.},
journal = {Reliability Engineering \& System Safety},
volume = {235},
pages = {109208},
doi = {10.1016/j.ress.2023.109208},
year = {2023}
}
Abstract
Digital twin models allow us to continuously assess the possible risk of damage and failure of a complex system. Yet high-fidelity digital twin models can be computationally expensive, making quick-turnaround assessment challenging. To address this challenge, this article proposes a novel bi-fidelity method for estimating the conditional value-at-risk (CVaR) for nonlinear systems subject to dependent and high-dimensional inputs. For models that can be evaluated fast, a method that integrates the dimensionally decomposed generalized polynomial chaos expansion (DD-GPCE) approximation with a standard sampling-based CVaR estimation is proposed. For expensive-to-evaluate models, a new bi-fidelity method is proposed that couples the DD-GPCE with a Fourier-polynomial expansion of the mapping between the stochastic low-fidelity and high-fidelity output data to ensure computational efficiency. The method employs a measure-consistent orthonormal polynomial in the random variable of the low-fidelity output to approximate the high-fidelity output. Numerical results for a structural mechanics truss with 36-dimensional (dependent random variable) inputs indicate that the DD-GPCE method provides very accurate CVaR estimates that require much lower computational effort than standard GPCE approximations. A second example considers the realistic problem of estimating the risk of damage to a fiber-reinforced composite laminate. The high-fidelity model is a finite element simulation that is prohibitively expensive for risk analysis, such as CVaR computation. Here, the novel bi-fidelity method can accurately estimate CVaR as it includes low-fidelity models in the estimation procedure and uses only a few high-fidelity model evaluations to significantly increase accuracy.
Bibtex
@article{LK_bifidelityDDGPCE,
title = {Bi-fidelity conditional value-at-risk estimation by dimensionally decomposed generalized polynomial chaos expansion},
author = {Lee, D. and Kramer, B.},
journal={Structural and Multidisciplinary Optimization},
volume={66},
number={2},
pages={33},
year={2023},
publisher={Springer}
}
Abstract
Operator inference learns low-dimensional dynamical-system models with polynomial nonlinear terms from trajectories of high-dimensional physical systems (non-intrusive model reduction). This work focuses on the large class of physical systems that can be well described by models with quadratic nonlinear terms and proposes a regularizer for operator inference that induces a stability bias onto quadratic models. The proposed regularizer is physics informed in the sense that it penalizes quadratic terms with large norms and so explicitly leverages the quadratic model form that is given by the underlying physics. This means that the proposed approach judiciously learns from data and physical insights combined, rather than from either data or physics alone. Additionally, a formulation of operator inference is proposed that enforces model constraints for preserving structure such as symmetry and definiteness in the linear terms. Numerical results demonstrate that models learned with operator inference and the proposed regularizer and structure preservation are accurate and stable even in cases where using no regularization or Tikhonov regularization leads to models that are unstable.
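The stability-biasing idea can be sketched as a block-weighted Tikhonov term in the operator-inference least squares, penalizing the quadratic operator more strongly than the linear one; this is a schematic of the idea rather than the paper's exact formulation (see also the plain least-squares sketch above):

import numpy as np

def regularized_opinf(D, Rhs, r, lam_lin=1e-2, lam_quad=1e1):
    # Solve min ||D O - Rhs||^2 + ||Gamma O||^2 with a larger penalty on the quadratic block.
    # D   : (k, r + r*r) data matrix [X^T, (X kron X)^T]
    # Rhs : (k, r) time-derivative snapshots
    gamma = np.concatenate([lam_lin * np.ones(r), lam_quad * np.ones(r * r)])
    O = np.linalg.solve(D.T @ D + np.diag(gamma ** 2), D.T @ Rhs)   # regularized normal equations
    return O[:r, :].T, O[r:, :].T    # A (r x r), H (r x r*r)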
Bibtex
@article{SKP_PIregulartizationOPINF,
title = {Physics-informed regularization and structure preservation for learning stable reduced models from data with operator inference},
author = {Sawant, N. and Kramer, B. and Peherstorfer, B.},
journal={Computer Methods in Applied Mechanics and Engineering},
volume={404},
pages={115836},
year={2023},
publisher={Elsevier}
}
Abstract
Dynamical systems modeling, particularly via systems of ordinary differential equations, has been used to effectively capture the temporal behavior of different biochemical components in signal transduction networks. Despite the recent advances in experimental measurements, including sensor development and '-omics' studies that have helped populate protein-protein interaction networks in great detail, systems biology modeling lacks systematic methods to estimate kinetic parameters and quantify associated uncertainties. This is because of multiple reasons, including sparse and noisy experimental measurements, lack of detailed molecular mechanisms underlying the reactions, and missing biochemical interactions. Additionally, the inherent nonlinearities with respect to the states and parameters associated with the system of differential equations further compound the challenges of parameter estimation. In this study, we propose a comprehensive framework for Bayesian parameter estimation and complete quantification of the effects of uncertainties in the data and models. We apply these methods to a series of signaling models of increasing mathematical complexity. Systematic analysis of these dynamical systems showed that parameter estimation depends on data sparsity, noise level, and model structure, including the existence of multiple steady states. These results highlight how focused uncertainty quantification can enrich systems biology modeling and enable additional quantitative analyses for parameter estimation.
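As a minimal illustration of the Bayesian workflow described here, a random-walk Metropolis sampler for the parameters of a toy one-state ODE is sketched below; the model, prior, and noise level are placeholders, not the signaling models of the paper:

import numpy as np
from scipy.integrate import solve_ivp

def simulate(theta, t_obs, y0=1.0):
    # Toy model dy/dt = -theta[0]*y + theta[1], standing in for a signaling ODE.
    sol = solve_ivp(lambda t, y: -theta[0] * y + theta[1], (0.0, t_obs[-1]), [y0], t_eval=t_obs)
    return sol.y[0]

def log_posterior(theta, t_obs, data, sigma=0.1):
    if np.any(np.asarray(theta) <= 0):              # positivity prior on the rate constants
        return -np.inf
    resid = data - simulate(theta, t_obs)
    return -0.5 * np.sum((resid / sigma) ** 2)      # Gaussian likelihood, flat prior otherwise

def metropolis(t_obs, data, theta0, n_steps=5000, step=0.05, seed=1):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_posterior(theta, t_obs, data)
    chain = []
    for _ in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_posterior(proposal, t_obs, data)
        if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept/reject step
            theta, lp = proposal, lp_prop
        chain.append(theta.copy())
    return np.array(chain)                          # posterior samples of the parameters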
Bibtex
@article {LKR_BayesianID_SysBio,
author = {Linden, Nathaniel J and Kramer, Boris and Rangamani, Padmini},
title = {Bayesian Parameter Estimation for Dynamical Models in Systems Biology},
year = {2022},
volume={18},
number={10},
pages={e1010651},
doi = {10.1371/journal.pcbi.1010651},
journal = {PLOS Computational Biology}
}
Abstract
Solar wind conditions are predominantly predicted via three-dimensional numerical magnetohydrodynamic (MHD) models. Despite their ability to produce highly accurate predictions, MHD models require computationally intensive high-dimensional simulations. This renders them inadequate for making time-sensitive predictions and for large-ensemble analysis required in uncertainty quantification. This paper presents a new data-driven reduced-order model (ROM) capability for forecasting heliospheric solar wind speeds. Traditional model reduction methods based on Galerkin projection have difficulties with advection-dominated systems, such as solar winds, since they require a large number of basis functions and can become unstable. A core contribution of this work addresses this challenge by extending the non-intrusive operator inference ROM framework to exploit the translational symmetries present in the solar wind caused by the Sun's rotation. The numerical results show that our method can adequately emulate the MHD simulations and outperforms a reduced-physics surrogate model, the Heliospheric Upwind Extrapolation model.
Bibtex
@article{IK22_solarWind_sOPINF,
title = {Predicting solar wind streams from the inner-{H}eliosphere to {E}arth via shifted operator inference},
author = {Issan, O. and Kramer, B.},
journal = {Journal of Computational Physics},
volume = {473},
pages = {111689},
doi = {10.1016/j.jcp.2022.111689},
year = {2022}
}
Bibtex
@article{kramer2022learningStateVariables,
title={Learning state variables for physical systems},
author={Kramer, Boris},
journal={Nature Computational Science},
volume={2},
number={7},
pages={414-415},
year={2022},
publisher={Nature Publishing Group}
}
Abstract
This work presents a nonintrusive physics-preserving method to learn reduced-order models (ROMs) of Hamiltonian systems. Traditional intrusive projection-based model reduction approaches utilize symplectic Galerkin projection to construct Hamiltonian reduced models by projecting Hamilton's equations of the full model onto a symplectic subspace. This symplectic projection requires complete knowledge about the full model operators and full access to manipulate the computer code. In contrast, the proposed Hamiltonian operator inference approach embeds the physics into the operator inference framework to develop a data-driven model reduction method that preserves the underlying symplectic structure. Our method exploits knowledge of the Hamiltonian functional to define and parametrize a Hamiltonian ROM form which can then be learned from data projected via symplectic projectors. The proposed method is `gray-box' in that it utilizes knowledge of the Hamiltonian structure at the partial differential equation level, as well as knowledge of spatially local components in the system. However, it does not require access to computer code, only data to learn the models. Our numerical results demonstrate Hamiltonian operator inference on a linear wave equation, the cubic nonlinear Schroedinger equation, and a nonpolynomial sine-Gordon equation. Accurate long-time predictions far outside the training time interval for nonlinear examples illustrate the generalizability of our learned models.
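For orientation, the canonical Hamiltonian structure that these learned ROMs preserve is standard background (not specific to this paper):

\dot{x} = J\,\nabla_x H(x), \qquad J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix},

and symplectic model reduction seeks an approximation x \approx V \hat{x} such that the reduced dynamics \dot{\hat{x}} = J_r \nabla_{\hat{x}} \hat{H}(\hat{x}) retain this form, so that the reduced Hamiltonian is conserved along ROM trajectories.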
Bibtex
@article{SWK21-HOPINF,
title = {{H}amiltonian Operator Inference: Physics-preserving Learning of Reduced-order Models for {H}amiltonian Systems},
author = {Sharma, Harsh and Wang, Zhu and Kramer, Boris},
journal = {Physica {D}: {N}onlinear {P}henomena},
volume = {431},
pages = {133122},
doi = {10.1016/j.physd.2021.133122},
year = {2022}
}
Abstract
Reliable, risk-averse design of complex engineering systems with optimized performance requires dealing with uncertainties. A conventional approach is to add safety margins to a design that was obtained from deterministic optimization. Safer engineering designs require appropriate cost and constraint function definitions that capture the risk associated with unwanted system behavior in the presence of uncertainties. The paper proposes two notions of certifiability. The first is based on accounting for the magnitude of failure to ensure data-informed conservativeness. The second is the ability to provide optimization convergence guarantees by preserving convexity. Satisfying these notions leads to certifiable risk-based design optimization (CRiBDO). In the context of CRiBDO, risk measures based on superquantile (a.k.a. conditional value-at-risk) and buffered probability of failure are analyzed. CRiBDO is contrasted with reliability-based design optimization (RBDO), where uncertainties are accounted for via the probability of failure, through a structural and a thermal design problem. A reformulation of the short column structural design problem leading to a convex CRiBDO problem is presented. The CRiBDO formulations capture more information about the problem to assign the appropriate conservativeness, exhibit superior optimization convergence by preserving properties of underlying functions, and alleviate the adverse effects of choosing hard failure thresholds required in RBDO.
Bibtex
@article{CKNRW21_CRiBDO,
title = {Certifiable risk-based engineering design optimization},
author = {Chaudhuri, Anirban and Kramer, Boris and Norton, Matthew and Royset, Johannes O. and Willcox, Karen},
journal = {AIAA Journal},
volume = {60},
number = {2},
pages = {551-565},
year = {2022},
doi = {10.2514/1.J060539}
}
Abstract
We present a balanced truncation model reduction approach for a class of nonlinear systems with time-varying and uncertain inputs. First, our approach brings the nonlinear system into quadratic-bilinear (QB) form via a process called lifting, which introduces transformations via auxiliary variables to achieve the specified model form. Second, we extend a recently developed QB balanced truncation method to be applicable to such lifted QB systems that share the common feature of having an indefinite system matrix. We illustrate this framework and the multi-stage lifting transformation on a tubular reactor model. In the numerical results we show that our proposed approach can obtain reduced-order models that are more accurate than proper orthogonal decomposition reduced-order models in situations where the latter are sensitive to the choice of training data.
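As background for the balancing step, the classical square-root balanced truncation algorithm for a stable linear system (A, B, C) fits in a few lines of SciPy; the quadratic-bilinear extension in the chapter replaces the Gramians below with their QB counterparts:

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def balanced_truncation(A, B, C, r):
    # Square-root balanced truncation of dx/dt = A x + B u, y = C x (A must be Hurwitz).
    P = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian: A P + P A^T + B B^T = 0
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian:  A^T Q + Q A + C^T C = 0
    S = np.linalg.cholesky(P)                       # P = S S^T
    R = np.linalg.cholesky(Q)                       # Q = R R^T
    Z, hsv, Yt = np.linalg.svd(R.T @ S)             # Hankel singular values in hsv
    Z, Y, hsv = Z[:, :r], Yt.T[:, :r], hsv[:r]
    W = R @ Z / np.sqrt(hsv)                        # left projection basis
    V = S @ Y / np.sqrt(hsv)                        # right projection basis, with W^T V = I
    return W.T @ A @ V, W.T @ B, C @ V, hsv         # reduced (A_r, B_r, C_r) and Hankel singular values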
Bibtex
@incollection{KW2019_balanced_truncation_lifted_QB,
title = {Balanced Truncation Model Reduction for Lifted Nonlinear Systems},
author = {Kramer, B. and Willcox, K.},
editor = {Beattie, Christopher and Benner, Peter and Embree, Mark and Gugercin, Serkan and Lefteriu, Sanda},
booktitle = {Realization and Model Reduction of Dynamical Systems: A Festschrift in Honor of the 70th Birthday of Thanos Antoulas},
year = {2022},
publisher = {Springer International Publishing},
address = {Cham},
pages = {157--174},
isbn = {978-3-030-95157-3},
doi = {10.1007/978-3-030-95157-3_9}
}
Abstract
We propose a computational approach to estimate the stability domain of quadratic-bilinear reduced-order models (ROMs), which are low-dimensional approximations of large-scale dynamical systems. For nonlinear ROMs, it is not only important to show that the origin is locally asymptotically stable, but also to quantify whether the operative range of the ROM is included in the region of convergence. While accuracy and structure preservation remain the main focus of development for nonlinear ROMs, a quantitative understanding of stability domains has been lacking thus far. In this work, for a given quadratic Lyapunov function, we first derive an analytical estimate of the stability domain, which is rather conservative but can be evaluated efficiently. With the goal of enlarging this estimate, we provide an optimal ellipsoidal estimate of the stability domain via solving a convex optimization problem. This provides us with valuable information about stability properties of the ROM, an important aspect of predictive simulation. We do not assume a specific ROM method, so a particular appeal is that the approach is applicable to quadratic-bilinear models obtained via data-driven approaches, where ROM stability properties cannot, by definition, be derived from the full-order model.
Bibtex
@article{K2020_stability_domains_QBROMs,
title = {Stability Domains for Quadratic-Bilinear Reduced-Order Models},
author = {Kramer, Boris},
volume = {20},
number={2},
pages = {981--996},
doi = {10.1137/20M1364849},
journal = {SIAM Journal on Applied Dynamical Systems},
year = {2021}
}
Abstract
We propose a new framework to design controllers for high-dimensional nonlinear systems. The control is designed through the iterative linear quadratic regulator (ILQR), an algorithm that computes control by iteratively applying the linear quadratic regulator on the local linearization of the system at each time step. Since ILQR is computationally expensive, we propose to first construct reduced-order models (ROMs) of the high-dimensional nonlinear system. We derive nonlinear ROMs via projection, where the basis is computed via balanced truncation (BT) and LQG balanced truncation (LQG-BT). Numerical experiments are performed on a semi-discretized nonlinear Burgers equation. We find that the ILQR algorithm produces good control on ROMs constructed either by BT or LQG-BT, with BT-ROM based controllers outperforming LQG-BT slightly for very low-dimensional systems.
Bibtex
@article{HK2020_ILQR_balancedROMs,
title = {Balanced Reduced-Order Models for Iterative Nonlinear Control of Large-Scale Systems},
author = {Huang, Yizhe and Kramer, Boris},
volume = {5},
number={5},
pages = {1699--1704},
doi = {10.1109/LCSYS.2020.3042835},
journal = {IEEE Control Systems Letters},
year = {2021}
}
Abstract
This work presents a non-intrusive model reduction method to learn low-dimensional models of dynamical systems with non-polynomial nonlinear terms that are spatially local and that are given in analytic form. In contrast to state-of-the-art model reduction methods that are intrusive and thus require full knowledge of the governing equations and the operators of a full model of the discretized dynamical system, the proposed approach requires only the non-polynomial terms in analytic form and learns the rest of the dynamics from snapshots computed with a potentially black-box full-model solver. The proposed method learns operators for the linear and polynomially nonlinear dynamics via a least-squares problem, where the given non-polynomial terms are incorporated in the right-hand side. The least-squares problem is linear and thus can be solved efficiently in practice. The proposed method is demonstrated on three problems governed by partial differential equations, namely the diffusion-reaction Chafee-Infante model, a tubular reactor model for reactive flows, and a batch-chromatography model that describes a chemical separation process. The numerical results provide evidence that the proposed approach learns reduced models with accuracy comparable to that of models constructed with state-of-the-art intrusive model reduction methods that require full knowledge of the governing equations.
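To make the least-squares step above concrete, here is a rough sketch in the spirit of the method (all array names and dimensions are hypothetical; this is not the paper's implementation): the known non-polynomial term is evaluated at the snapshots and moved to the right-hand side, and the linear and quadratic operators are fit by linear least squares.

import numpy as np

# Hypothetical reduced snapshot data: r-dimensional states, their time
# derivatives, and the known non-polynomial term evaluated at the snapshots.
r, K = 5, 200
rng = np.random.default_rng(0)
Xr = rng.normal(size=(r, K))        # reduced states
Xr_dot = rng.normal(size=(r, K))    # reduced time derivatives
G = rng.normal(size=(r, K))         # known non-polynomial term g(x) at the snapshots

# Regressor blocks [x ; x kron x], so the model reads  x_dot ~ A x + H (x kron x) + g(x).
X_kron = np.einsum('ik,jk->ijk', Xr, Xr).reshape(r * r, K)
D = np.vstack([Xr, X_kron])

# Linear least-squares problem for the unknown operators; the known
# non-polynomial term only shifts the right-hand side.
O, *_ = np.linalg.lstsq(D.T, (Xr_dot - G).T, rcond=None)
A, H = O[:r].T, O[r:].T             # learned linear and quadratic operators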
Bibtex
@article{BGKPW2020_OpInf_nonpoly,
title = {Operator inference for non-intrusive model reduction of systems with non-polynomial nonlinear terms},
author = {Benner, P. and Goyal, P. and Kramer, B. and Peherstorfer, B. and Willcox, K.},
journal = {Computer Methods in Applied Mechanics and Engineering},
volume = {372},
pages = {113433},
url = {https://doi.org/10.1016/j.cma.2020.113433},
year = {2020}
}
Abstract
This paper shows how to systematically and efficiently improve a reduced-order model (ROM) to obtain a better ROM-based estimate of the Conditional Value-at-Risk (CVaR) of a computationally expensive quantity of interest (QoI). Efficiency is gained by exploiting the structure of CVaR, which implies that a ROM used for CVaR estimation only needs to be accurate in a small region of the parameter space, called the epsilon-risk region. Hence, any full-order model (FOM) queries needed to improve the ROM can be restricted to this small region of the parameter space, thereby substantially reducing the computational cost of ROM construction. However, an example is presented which shows that simply constructing a new ROM that has a smaller error with respect to the FOM is in general not sufficient to yield a better CVaR estimate. Instead, a combination of previous ROMs is proposed that achieves a guaranteed improvement, as well as epsilon-risk regions that converge monotonically to the FOM risk region with decreasing ROM error. Error estimates for the ROM-based CVaR estimates are presented. The gains in efficiency obtained by improving a ROM only in the small epsilon-risk region, rather than via a traditional greedy procedure on the entire parameter space, are illustrated numerically.
Bibtex
@article{HKT2020_Adaptive_ROM_CVAR_estimation,
title = {Adaptive reduced-order model construction for conditional value-at-risk estimation},
author = {Heinkenschloss, M. and Kramer, B. and Takhtaganov, T.},
journal = {SIAM/ASA Journal on Uncertainty Quantification},
volume = {8},
number={2},
pages = {668--692},
url = {https://doi.org/10.1137/19M1257433},
year = {2020},
publisher={SIAM}
}
Abstract
We present Lift & Learn, a physics-informed method for learning low-dimensional models for large-scale dynamical systems. The method exploits knowledge of a system’s governing equations to identify a coordinate transformation in which the system dynamics have quadratic structure. This transformation is called a lifting map because it often adds auxiliary variables to the system state. The lifting map is applied to data obtained by evaluating a model for the original nonlinear system. This lifted data is projected onto its leading principal components, and low-dimensional linear and quadratic matrix operators are fit to the lifted reduced data using a least-squares operator inference procedure. Analysis of our method shows that the Lift & Learn models are able to capture the system physics in the lifted coordinates at least as accurately as traditional intrusive model reduction approaches. This preservation of system physics makes the Lift & Learn models robust to changes in inputs. Numerical experiments on the FitzHugh-Nagumo neuron activation model and the compressible Euler equations demonstrate the generalizability of our model.
Bibtex
@article{QKPW2020_lift_and_learn,
title = {Lift \& Learn: Physics-informed machine learning for large-scale nonlinear dynamical systems.},
author = {Qian, E. and Kramer, B. and Peherstorfer, B. and Willcox, K.},
journal = {Physica {D}: {N}onlinear {P}henomena},
volume = {406},
pages = {132401},
url = {https://doi.org/10.1016/j.physd.2020.132401},
year = {2020}
}
Abstract
This paper introduces a new approach for importance-sampling-based reliability-based design optimization (RBDO) that reuses information from past optimization iterations to reduce computational effort. RBDO is a two-loop process (an uncertainty quantification loop embedded within an optimization loop) that can be computationally prohibitive due to the numerous evaluations of expensive high-fidelity models needed to estimate the probability of failure in each optimization iteration. In this work, we use the existing information from past optimization iterations to create efficient biasing densities for importance sampling estimates of probability of failure. The method involves two levels of information reuse: (1) reusing the current batch of samples to construct an a posteriori biasing density with optimal parameters, and (2) reusing the a posteriori biasing densities of the designs visited in past optimization iterations to construct the biasing density for the current design. We demonstrate for the RBDO of a benchmark speed reducer problem and a combustion engine problem that the proposed method leads to computational savings in the range of 51% to 76%, compared to building biasing densities from scratch in each iteration.
Bibtex
@article{CKW2020_IRIS_RBDO,
title = {Information Reuse for Importance Sampling in Reliability-Based Design Optimization},
author = {Chaudhuri, A. and Kramer, B. and Willcox, K.},
journal = {Reliability Engineering \& System Safety},
volume = {201},
pages = {106853},
doi = {https://doi.org/10.1016/j.ress.2020.106853},
year = {2020}
}
Some recent media coverage of this work can be found at Tech Briefs: A Faster Way to Design Rockets: Scientific Machine Learning, as well as Science Daily: Scientific machine learning paves way for rapid rocket engine design, and UT News: Scientific Machine Learning Paves Way for Rapid Rocket Engine Design.
Abstract
This paper presents a physics-based data-driven method to learn predictive reduced-order models (ROMs) from high-fidelity simulations, and illustrates it in the challenging context of a single-injector combustion process. The method combines the perspectives of model reduction and machine learning. Model reduction brings in the physics of the problem, constraining the ROM predictions to lie on a subspace defined by the governing equations. This is achieved by defining the ROM in proper orthogonal decomposition (POD) coordinates, which embed the rich physics information contained in solution snapshots of a high-fidelity computational fluid dynamics (CFD) model. The machine learning perspective brings the flexibility to use transformed physical variables to define the POD basis. This is in contrast to traditional model reduction approaches that are constrained to use the physical variables of the high-fidelity code. Combining the two perspectives, the approach identifies a set of transformed physical variables that expose quadratic structure in the combustion governing equations and learns a quadratic ROM from transformed snapshot data. This learning does not require access to the high-fidelity model implementation. Numerical experiments show that the ROM accurately predicts temperature, pressure, velocity, species concentrations, and the limit-cycle amplitude, with speedups of more than five orders of magnitude over high-fidelity models. ROM-predicted pressure traces accurately match the phase of the pressure signal and yield good approximations of the limit-cycle amplitude.
Bibtex
@article{SKHW2020_learning_ROMs_combustor,
title = {Learning physics-based reduced-order models for a single-injector combustion process},
author = {Swischuk, R. and Kramer, B. and Huang, C. and Willcox, K.},
journal = {AIAA Journal},
volume = {58},
number = {6},
pages = {2658--2672},
url = {https://doi.org/10.2514/1.J058943},
year = {2020}
}
Abstract
This paper presents a structure-exploiting nonlinear model reduction method for systems with general nonlinearities. First, the nonlinear model is lifted to a model with more structure via variable transformations and the introduction of auxiliary variables. The lifted model is equivalent to the original model: it uses a change of variables but introduces no approximations. When discretized, the lifted model yields a polynomial system of either ordinary differential equations or differential algebraic equations, depending on the problem and lifting transformation. Proper orthogonal decomposition (POD) is applied to the lifted models, yielding a reduced-order model for which all reduced-order operators can be pre-computed. Thus, a key benefit of the approach is that there is no need for additional approximations of nonlinear terms, in contrast with existing nonlinear model reduction methods requiring sparse sampling or hyper-reduction. Application of the lifting and POD model reduction to the FitzHugh-Nagumo benchmark problem and to a tubular reactor model with Arrhenius reaction terms shows that the approach is competitive in terms of reduced model accuracy with state-of-the-art model reduction via POD and discrete empirical interpolation, while having the added benefits of opening new pathways for rigorous analysis and input-independent model reduction via the introduction of the lifted problem structure.
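As a toy illustration of lifting (not an example from the paper): the scalar equation $\dot{x} = -e^{x}$ is not polynomial in $x$, but introducing the auxiliary variable $w = e^{x}$ gives $\dot{w} = e^{x}\dot{x} = -w^{2}$, so the lifted state $(x, w)$ satisfies the quadratic system $\dot{x} = -w$, $\dot{w} = -w^{2}$. No approximation is introduced; the relation $w = e^{x}$ is preserved along trajectories started from consistent initial conditions, and model reduction can then act on the structured lifted system.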
Bibtex
@article{KW18nonlinearMORliftingPOD,
title = {Nonlinear Model Order Reduction via Lifting Transformations and Proper Orthogonal Decomposition},
author = {Kramer, B. and Willcox, K.},
journal = {AIAA Journal},
volume = {57},
number = {6},
pages = {2297-2307},
year = {2019},
doi = {10.2514/1.J057791},
URL = {https://doi.org/10.2514/1.J057791}
}
Abstract
This paper develops a multifidelity method that enables estimation of failure probabilities for expensive-to-evaluate models via information fusion and importance sampling. The presented general fusion method combines multiple probability estimators with the goal of variance reduction. We use low-fidelity models to derive biasing densities for importance sampling and then fuse the importance sampling estimators such that the fused multifidelity estimator is unbiased and has mean-squared error lower than or equal to that of any of the importance sampling estimators alone. By fusing all available estimators, the method circumvents the challenging problem of selecting the best biasing density and using only that density for sampling. A rigorous analysis shows that the fused estimator is optimal in the sense that it has minimal variance amongst all possible combinations of the estimators. The asymptotic behavior of the proposed method is demonstrated on a convection-diffusion-reaction partial differential equation model for which 100,000 samples can be afforded. To illustrate the proposed method at scale, we consider a model of a free plane jet and quantify how uncertainties at the flow inlet propagate to a quantity of interest related to turbulent mixing. Compared to an importance sampling estimator that uses the high-fidelity model alone, our multifidelity estimator reduces the required CPU time by 65% while achieving a similar coefficient of variation.
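A minimal sketch of the fusion idea (with illustrative names, not the paper's notation): for independent unbiased estimators of the same failure probability, an inverse-variance-weighted combination remains unbiased and has variance no larger than any single estimator. The paper develops the precise optimal-weighting result for the importance sampling setting; the snippet below only shows the basic combination step.

import numpy as np

def fuse_unbiased_estimators(estimates, variances):
    # Inverse-variance weights; since the weights sum to one, unbiasedness is preserved.
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()
    return float(w @ np.asarray(estimates, dtype=float))

# Hypothetical importance-sampling estimates from three different biasing densities
p_fused = fuse_unbiased_estimators(estimates=[1.2e-3, 0.9e-3, 1.1e-3],
                                   variances=[4.0e-8, 9.0e-8, 6.0e-8])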
Bibtex
@article{KMPVW17FusionEstimators,
title = {Multifidelity probability estimation via fusion of estimators},
author = {Kramer, B. and Marques, A. and Peherstorfer, B. and Villa, U. and Willcox, K.},
journal = {Journal of Computational Physics},
volume = {392},
pages = {385-402},
url = {https://doi.org/10.1016/j.jcp.2019.04.071},
year = {2019}
}
Abstract
This paper proposes and analyzes two reduced-order model (ROM) based approaches for the efficient and accurate evaluation of the Conditional-Value-at-Risk (CVaR) of quantities of interest (QoI) in engineering systems with uncertain parameters. CVaR is used to model objective or constraint functions in risk-averse engineering design and optimization applications under uncertainty. Evaluating the CVaR of the QoI requires sampling in the tail of the QoI distribution and typically requires many solutions of an expensive full-order model of the engineering system. Our ROM approaches substantially reduce this computational expense. Both ROM-based approaches use Monte Carlo (MC) sampling. The first approach replaces the computationally expensive full-order model by an inexpensive ROM. The resulting CVaR estimation error is proportional to the ROM error in the so-called risk region, a small region in the space of uncertain system inputs. The second approach uses a combination of full-order model and ROM evaluations via importance sampling, and is effective even if the ROM has large errors. In the importance sampling approach, ROM samples are used to estimate the risk region and to construct a biasing distribution. Few full-order model samples are then drawn from this biasing distribution. Asymptotically as the ROM error goes to zero, the importance sampling estimator reduces the variance by a factor $1-\beta \ll 1$, where $\beta \in (0,1)$ is the quantile level at which CVaR is computed. Numerical experiments on a system of semilinear convection-diffusion-reaction equations illustrate the performance of the approaches.
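For orientation, a plain Monte Carlo estimate of CVaR at quantile level $\beta$ can be written as below (a generic sketch with hypothetical data; the paper's contribution is replacing or combining the expensive full-order samples with ROM evaluations, which is not shown here).

import numpy as np

def cvar_estimate(samples, beta):
    # Rockafellar-Uryasev form: CVaR_beta = VaR_beta + E[(QoI - VaR_beta)^+] / (1 - beta)
    var_beta = np.quantile(samples, beta)
    return var_beta + np.mean(np.maximum(samples - var_beta, 0.0)) / (1.0 - beta)

# Hypothetical heavy-tailed QoI samples
rng = np.random.default_rng(1)
qoi = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
print(cvar_estimate(qoi, beta=0.95))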
Bibtex
@article{HKTQ18CVaRROMS,
author = {Heinkenschloss, M. and Kramer, B. and Takhtaganov, T. and Willcox, K.},
title = {Conditional-Value-at-Risk estimation via Reduced-Order Models},
journal = {SIAM/ASA Journal on Uncertainty Quantification},
volume = {6},
number = {4},
pages = {1395--1423},
year = {2018}
}
Abstract
Accurately estimating rare event probabilities with Monte Carlo can become costly if for each sample a computationally expensive high-fidelity model evaluation is necessary to approximate the system response. Variance reduction with importance sampling significantly reduces the number of required samples if a suitable biasing density is used. This work introduces a multifidelity approach that leverages a hierarchy of low-cost surrogate models to efficiently construct biasing densities for importance sampling. Our multifidelity approach is based on the cross-entropy method that derives a biasing density via an optimization problem. We approximate the solution of the optimization problem at each level of the surrogate-model hierarchy, reusing the densities found on the previous levels to precondition the optimization problem on the subsequent levels. With the preconditioning, an accurate approximation of the solution of the optimization problem at each level can be obtained from a few model evaluations only. In particular, at the highest level, only a few evaluations of the computationally expensive high-fidelity model are necessary. Our numerical results demonstrate that our multifidelity approach achieves speedups of several orders of magnitude in a thermal and a reacting-flow example compared to the single-fidelity cross-entropy method that uses a single model alone.
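A bare-bones, single-level cross-entropy update for a Gaussian biasing density is sketched below (illustrative names and a toy limit state; the paper's contribution, preconditioning this optimization across a hierarchy of surrogate models, is not reproduced here).

import numpy as np

def ce_gaussian_biasing(limit_state, dim, n=2000, rho=0.1, iters=10, seed=0):
    # Adapt the mean of a unit-variance Gaussian biasing density toward the
    # failure region {x : limit_state(x) <= 0}: elite samples fall below the
    # rho-quantile of the limit-state values, and the mean update is
    # importance-weighted back to the nominal standard-normal density.
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)
    for _ in range(iters):
        x = rng.normal(loc=mu, scale=1.0, size=(n, dim))
        g = np.apply_along_axis(limit_state, 1, x)
        gamma = max(np.quantile(g, rho), 0.0)                  # intermediate threshold
        elite = g <= gamma
        logw = 0.5 * np.sum((x - mu) ** 2 - x ** 2, axis=1)    # log[N(x;0,I)/N(x;mu,I)]
        w = np.exp(logw[elite] - logw[elite].max())
        mu = (w[:, None] * x[elite]).sum(axis=0) / w.sum()     # weighted mean update
        if gamma == 0.0:
            break
    return mu   # center of the final Gaussian biasing density

# Hypothetical limit state: failure when the sum of the inputs exceeds 6
mu_bias = ce_gaussian_biasing(lambda x: 6.0 - x.sum(), dim=2)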
Bibtex
@article{PKW17MFCE,
title = {Multifidelity preconditioning of the cross-entropy method for rare event simulation and failure probability estimation},
author = {Peherstorfer, B. and Kramer, B. and Willcox, K.},
journal = {SIAM/ASA Journal on Uncertainty Quantification},
volume = {6},
number = {2},
pages = {737-761},
year = {2018}
}
Abstract
Subspace-based system identification for dynamical systems is a sound, system-theoretic way to obtain linear, time-invariant system models from data. The interplay of data and systems theory is reflected in the Hankel matrix, a block-structured matrix whose factorization is necessary for system identification. For systems with many inputs, many outputs, or large time series of system-response data, established methods based on the singular value decomposition (SVD), such as the eigensystem realization algorithm (ERA), are prohibitively expensive. In this paper, we propose an algorithm that reduces the complexity of ERA, with respect to Hankel matrix size, from cubic to quadratic. This reduction is realized by replacing the SVD with a CUR decomposition that directly seeks a low-rank approximation of the Hankel matrix. The CUR decomposition is obtained by selecting a small number of rows and columns using a maximum-volume-based cross approximation scheme. We present a worst-case error bound for the resulting system identification algorithm, and we demonstrate its computational advantages and accuracy on a numerical example. The example demonstrates that the resulting identification yields results almost indistinguishable from the SVD-based ERA, yet comes with significant computational savings.
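For reference, the standard SVD-based ERA pipeline that the paper accelerates looks roughly as follows (a generic sketch with hypothetical data; the CUR-based variant replaces the SVD step, which is the cubic-cost bottleneck).

import numpy as np

def era(markov_params, r):
    # markov_params[k] is the p x m Markov parameter C A^k B; r is the reduced order.
    s = len(markov_params) // 2
    p, m = markov_params[0].shape
    H0 = np.block([[markov_params[i + j] for j in range(s)] for i in range(s)])
    H1 = np.block([[markov_params[i + j + 1] for j in range(s)] for i in range(s)])
    U, S, Vt = np.linalg.svd(H0, full_matrices=False)          # the expensive step
    Ur, Sr, Vr = U[:, :r], np.diag(S[:r]), Vt[:r].T
    Sri = np.diag(S[:r] ** -0.5)
    Ar = Sri @ Ur.T @ H1 @ Vr @ Sri                            # identified system matrices
    Br = (np.sqrt(Sr) @ Vr.T)[:, :m]
    Cr = (Ur @ np.sqrt(Sr))[:p, :]
    return Ar, Br, Cr

# Hypothetical usage with synthetic impulse-response data
rng = np.random.default_rng(0)
A_true = np.diag([0.9, 0.5, -0.3])
B_true, C_true = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
h = [C_true @ np.linalg.matrix_power(A_true, k) @ B_true for k in range(20)]
Ar, Br, Cr = era(h, r=3)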
Bibtex
@article{KG18ERA_CUR_Hankel,
author = {Kramer, B. and Gorodetsky, A.},
title = {System Identification via {CUR}-Factored {H}ankel Approximation},
journal = {SIAM Journal on Scientific Computing},
volume = {40},
number = {2},
pages = {A848--A866},
year = {2018},
doi = {10.1137/17M1137632}
}
Abstract
We present a learning-based method for stabilization of reduced-order models (ROMs) for thermal fluids, which are challenging multi-physics systems. The stabilization of the reduced-order model is achieved by using Lyapunov robust control theory to design a new closure model that is robust to parametric uncertainties. Furthermore, the design parameters in the proposed ROM stabilization method are optimized using a data-driven multi-parametric extremum seeking (MES) algorithm. We show the advantages of the proposed method on numerically challenging test-cases by using the 2D and 3D Boussinesq equations. The results illustrate that closure models can be extended to multi-physics systems, by using multiple design parameters.
Bibtex
@article{BBSK17stabilizationROMextremumSeeking,
author = "Benosman, M. and Borggaard, J. and San, O. and Kramer, B.",
title = "Learning-based robust stabilization for reduced-order models of 2D and 3D {B}oussinesq equations",
journal = "Applied Mathematical Modelling",
volume = "49",
pages = "162--181",
year = "2017",
issn = "0307-904X",
doi = "https://doi.org/10.1016/j.apm.2017.04.032"
}
Abstract
We consider control and stabilization for large-scale dynamical systems with uncertain, time-varying parameters. The time-critical task of controlling a dynamical system poses major challenges: Using large-scale models is prohibitive, and accurately inferring parameters can be expensive, too. We address both problems by proposing an offline-online strategy for controlling systems with time-varying parameters. During the offline phase, we use a high-fidelity model to compute a library of optimal feedback controller gains over a sampled set of parameter values. Then, during the online phase, in which the uncertain parameter changes over time, we learn a reduced-order model from system data. The learned reduced-order model is employed within an optimization routine to update the feedback control throughout the online phase. Since the system data naturally reflects the uncertain parameter, the data-driven updating of the controller gains is achieved without an explicit parameter estimation step. We consider two numerical test problems in the form of partial differential equations: a convection–diffusion system, and a model for flow through a porous medium. We demonstrate on those models that the proposed method successfully stabilizes the system model in the presence of process noise.
Bibtex
@article{KPW17FeedbackControlROMAdaptiveLearning,
author = {Kramer, B. and Peherstorfer, B. and Willcox, K.},
title = {Feedback Control for Systems with Uncertain Parameters Using Online-Adaptive Reduced Models},
journal = {SIAM Journal on Applied Dynamical Systems},
volume = {16},
number = {3},
pages = {1563-1586},
year = {2017},
doi = {10.1137/16M1088958},
URL = {https://doi.org/10.1137/16M1088958}
}
Abstract
In failure probability estimation, importance sampling constructs a biasing distribution that targets the failure event such that a small number of model evaluations is sufficient to achieve a Monte Carlo estimate of the failure probability with an acceptable accuracy; however, the construction of the biasing distribution often requires a large number of model evaluations, which can become computationally expensive. We present a mixed multifidelity importance sampling (MMFIS) approach that leverages computationally cheap but erroneous surrogate models for the construction of the biasing distribution and that uses the original high-fidelity model to guarantee unbiased estimates of the failure probability. The key property of our MMFIS estimator is that it can leverage multiple surrogate models for the construction of the biasing distribution, instead of a single surrogate model alone. We show that our MMFIS estimator has a mean-squared error that is up to a constant lower than the mean-squared errors of the corresponding estimators that use any of the given surrogate models alone, even in settings where no information about the approximation qualities of the surrogate models is available. In particular, our MMFIS approach avoids the problem of selecting the surrogate model that leads to the estimator with the lowest mean-squared error, which is challenging if the approximation quality of the surrogate models is unknown. We demonstrate our MMFIS approach on numerical examples, where we achieve orders of magnitude speedups compared to using the high-fidelity model only.
Bibtex
@article{PKW17MFIS,
title = "Combining multiple surrogate models to accelerate failure probability estimation with expensive high-fidelity models",
author = "Peherstorfer, P. and Kramer, B. and Willcox, K.",
journal = "Journal of Computational Physics",
volume = "341",
pages = "61--75",
year = "2017",
doi = "https://doi.org/10.1016/j.jcp.2017.04.012",
keywords = "Multifidelity, Model reduction, Uncertainty quantification, Failure probability estimation, Surrogate modeling, Rare event simulation"
}
Abstract
We present a sparse sensing and Dynamic Mode Decomposition (DMD) based framework to identify flow regimes and bifurcations in complex thermo-fluid systems. Motivated by real-time sensing and control of thermal fluid flows in buildings and equipment, we apply this method to a Direct Numerical Simulation (DNS) data set of a 2D laterally heated cavity, spanning several flow regimes ranging from steady to chaotic flow. We exploit the incoherence exhibited among the data generated by different regimes, which persists even if the number of measurements is very small compared to the dimension of the DNS data. We demonstrate that the DMD modes and eigenvalues capture the main temporal and spatial scales in the dynamics belonging to different regimes, and use this information to improve the classification performance of our algorithm. The coarse state reconstruction obtained by our regime identification algorithm can enable robust performance of low order models of flows for state estimation and control.
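For context, the core (exact) DMD computation underlying the framework can be sketched as follows; this is a generic snippet with hypothetical inputs, and the sparse-sensing and regime-classification machinery of the paper is not shown.

import numpy as np

def dmd(X, r):
    # Exact DMD of a snapshot matrix X (states as columns, uniform time sampling);
    # returns rank-r DMD eigenvalues and modes.
    X1, X2 = X[:, :-1], X[:, 1:]
    U, S, Vt = np.linalg.svd(X1, full_matrices=False)
    U, S, V = U[:, :r], S[:r], Vt[:r].T
    Atilde = U.T @ X2 @ V / S                  # projected linear operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ V / S @ W / eigvals           # exact DMD modes
    return eigvals, modes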
Bibtex
@article{KGBNB17SensingDMDbifurcationsFlows,
author = {Kramer, B. and Grover, P. and Boufounos, P. and Nabi, S. and Benosman, M.},
title = {Sparse Sensing and {DMD}-Based Identification of Flow Regimes and Bifurcations in Complex Flows},
journal = {SIAM Journal on Applied Dynamical Systems},
volume = {16},
number = {2},
pages = {1164-1196},
year = {2017},
doi = {10.1137/15M104565X}
}
Abstract
The solution of large-scale matrix algebraic Riccati equations (AREs) is important, for instance, in control design and model reduction, and remains an active area of research. We consider a class of matrix AREs arising from a linear system along with a weighted inner product. This problem class often arises from a spatial discretization of a partial differential equation system. We propose a projection method to obtain low-rank solutions of AREs based on simulations of linear systems coupled with proper orthogonal decomposition (POD). The method can take advantage of existing (black-box) simulation code to generate the projection matrices. We also develop new weighted-norm residual computations and error bounds. We present numerical results demonstrating that the proposed approach can produce highly accurate approximate solutions. We also briefly discuss making the proposed approach completely data-based so that one can use existing simulation codes without accessing system matrices.
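A minimal sketch of the project-then-solve idea (hypothetical names; the construction of the POD basis from linear-system simulations, the weighted inner products, and the residual bounds are the paper's contributions and are not shown here):

import numpy as np
from scipy.linalg import solve_continuous_are

def pod_riccati(A, B, C, V):
    # Project the ARE  A'P + PA - P B B' P + C'C = 0  onto a basis V with
    # orthonormal columns, solve the small projected ARE, and lift back: P ~ V Pr V'.
    Ar, Br, Cr = V.T @ A @ V, V.T @ B, C @ V
    Pr = solve_continuous_are(Ar, Br, Cr.T @ Cr, np.eye(Br.shape[1]))
    return V @ Pr @ V.T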
Bibtex
@article{KS16POD_Riccati,
title = {A {POD} projection method for large-scale algebraic {R}iccati equations},
author = {Boris Kramer and John R. Singler},
journal = {Numerical Algebra, Control \& Optimization},
volume = {6},
pages = {413--435},
year = {2016},
doi = {10.3934/naco.2016018}
}
Abstract
The Eigensystem Realization Algorithm (ERA) is a commonly used data-driven method for system identification and reduced-order modeling of dynamical systems. The main computational difficulty in ERA arises when the system under consideration has a large number of inputs and outputs, which requires computing a full SVD of a large-scale dense Hankel matrix. In this work, we present an algorithm that aims to resolve this computational bottleneck via tangential interpolation. This involves projecting the original impulse response sequence onto suitably chosen directions. Numerical examples demonstrate that the modified ERA algorithm with tangentially interpolated data produces accurate reduced models while, at the same time, significantly reducing the computational cost and memory requirements compared to the standard ERA. We also give an example to demonstrate the limitations of the proposed method.
Bibtex
@article{KG16TangentialInterpolationERA,
author = {Kramer, B. and Gugercin, S.},
title = {Tangential interpolation-based eigensystem realization algorithm for {MIMO} systems},
journal = {Mathematical and Computer Modelling of Dynamical Systems},
volume = {22},
number = {4},
pages = {282-306},
year = {2016},
publisher = {Taylor \& Francis},
doi = {10.1080/13873954.2016.1198389},
URL = {https://doi.org/10.1080/13873954.2016.1198389}
}
Conference Publications
Abstract
We consider the optimal regulation problem for nonlinear control-affine dynamical systems. Whereas the linear-quadratic regulator (LQR) considers optimal control of a linear system with quadratic cost function, we study polynomial systems with polynomial cost functions; we call this problem the polynomial-polynomial regulator (PPR). The resulting polynomial feedback laws provide two potential improvements over linear feedback laws: 1) they more accurately approximate the optimal control law, resulting in lower control costs, and 2) for some problems they can provide a larger region of stabilization. We derive explicit formulas, and a scalable, general-purpose software implementation, for computing the polynomial approximation to the value function that solves the optimal control problem. The method is illustrated first on a low-dimensional aircraft stall stabilization example, for which PPR control recovers the aircraft from more severe stall conditions than LQR control. Then we demonstrate the scalability of the approach on a semidiscretization of dimension n=129 of a partial differential equation, for which the PPR control reduces the control cost by approximately 75% compared to LQR for the initial condition of interest.
Bibtex
@INPROCEEDINGS{CorKra_PPR_CDC2024,
author={Corbin, N. and Kramer, B.},
booktitle={63rd IEEE Conference on Decision and Control (CDC)},
title={Computing solutions to the polynomial-polynomial regulator problem},
year={2024},
organization={IEEE},
doi={10.1109/CDC56724.2024.10885897},
pages={2689-2696}
}
Abstract
Data-driven model reduction methods provide a nonintrusive way of constructing computationally efficient surrogates of high-fidelity models for real-time control of soft robots. This work leverages the Lagrangian nature of the model equations to derive structure-preserving linear reduced-order models via Lagrangian Operator Inference and compares their performance with prominent linear model reduction techniques through an anguilliform swimming soft robot model example with 231,336 degrees of freedom. The case studies demonstrate that preserving the underlying Lagrangian structure leads to learned models with higher predictive accuracy and robustness to unseen inputs.
Bibtex
@INPROCEEDINGS{ShaAdiCerTolKra_DDMOR_SoftRobot_MTNS2024,
author={Sharma, Harsh and Adibnazari, I. and Cervera Torralba, J. and Tolley, M.T. and Kramer, B.},
title = {Data-driven Model Reduction for Soft Robots via Lagrangian Operator Inference},
journal = {IFAC-PapersOnLine},
volume = {58},
number = {17},
pages = {91-96},
year = {2024},
note = {26th International Symposium on Mathematical Theory of Networks and Systems MTNS 2024},
doi={10.1016/j.ifacol.2024.10.119},
}
Abstract
Data-driven model reduction methods provide a nonintrusive way of constructing computationally efficient surrogates of high-fidelity models for real-time control of soft robots. This work leverages the Lagrangian nature of the model equations to derive structure-preserving linear reduced-order models via Lagrangian Operator Inference and compares their performance with prominent linear model reduction techniques through an anguilliform swimming soft robot model example with 231,336 degrees of freedom. The case studies demonstrate that preserving the underlying Lagrangian structure leads to learned models with higher predictive accuracy and robustness to unseen inputs.
Bibtex
@INPROCEEDINGS{ShaAdiCerTolKra_Lagrangian_SoftRobot_RSS2024,
author={Sharma, Harsh and Adibnazari, I. and Cervera Torralba, J. and Tolley, M.T. and Kramer, B.},
booktitle={20th Robotics: Science and Systems (RSS) Conference},
title={Preserving Lagrangian structure in data-driven reduced-order modeling of soft robots},
year={2024},
organization={},
doi={},
pages={}
}
Abstract
This paper presents a scalable tensor-based approach to computing controllability and observability-type energy functions for nonlinear dynamical systems with polynomial drift and linear input and output maps. Using Kronecker product polynomial expansions, we convert the Hamilton-Jacobi-Bellman partial differential equations for the energy functions into a series of algebraic equations for the coefficients of the energy functions. We derive the specific tensor structure that arises from the Kronecker product representation and analyze the computational complexity to efficiently solve these equations. The convergence and scalability of the proposed energy function computation approach are demonstrated on a nonlinear reaction-diffusion model with cubic drift nonlinearity, for which we compute degree 3 energy function approximations in n = 1023 dimensions and degree 4 energy function approximations in n = 127 dimensions.
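Schematically (a sketch of the ansatz, not a verbatim statement of the paper's formulas), the energy functions are sought in the polynomial form $E(x) \approx \tfrac{1}{2}\sum_{k=2}^{d} w_k^\top (x \otimes x \otimes \cdots \otimes x)$, where the $k$-th term contains $k$ Kronecker factors of $x$; substituting this ansatz into the Hamilton-Jacobi-Bellman equation and matching terms degree by degree yields linear, Kronecker-structured systems for the coefficient vectors $w_k$, and it is this tensor structure that is exploited for scalability.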
Bibtex
@INPROCEEDINGS{CorKra_ScalableComputationEnergyFunctions_ACC2024,
author={Corbin, N. and Kramer, B.},
booktitle={2024 American Control Conference (ACC)},
title={Scalable Computation of $\mathcal{H}_\infty$ Energy Functions for Polynomial Drift Nonlinear Systems},
year={2024},
organization={IEEE},
doi={10.23919/ACC60939.2024.10644363},
pages={2506--2511}
}
Abstract
Anguilliform swimming is an effective mode of locomotion utilized by elongated fish like eels and oarfish to travel long distances. It is characterized by full-body undulations that produce thrust, and it proves highly energy efficient. The efficiency and morphological simplicity of anguilliform fish make anguilliform swimming a promising mode of locomotion for aquatic robots that require long-duration operation. However, the efficiency of anguilliform swimming is a product of the coupled dynamics of the swimmer's soft body and the surrounding water. To enable this level of efficiency on a robotic platform, the robot must implement online control that considers these coupled dynamics in real time. A model predictive control (MPC) framework lends itself well to considering these coupled physics. However, the high-fidelity soft-body and fluid simulations required for effective MPC prove too computationally expensive for real-time control. In this work, we leverage data-driven model reduction techniques to enable approximate physics simulation and high-speed MPC of a simulated, soft, anguilliform robot. We conduct a comparative study of multiple linear model reduction methods, allowing us to assess their efficacy in both the state estimation and control context so that the robot can mimic well-studied, straight-line swimming gaits of anguilliform fish.
Bibtex
@INPROCEEDINGS{AdiShaCerKraTol_DDMOR_SoftRobot_SCR2024,
author={Adibnazari, I. and Sharma, H. and Cervera Torralba, J. and Kramer, B. and Tolley, M.T.},
booktitle={2023 Southern California Robotics (SCR) Symposium, September 14-15},
title={Full-Body Optimal Control of a Swimming Soft Robot Enabled by Data-Driven Model Reduction},
year={2023},
url={https://bpb-us-e2.wpmucdn.com/sites.uci.edu/dist/2/5230/files/2023/09/28_SCR_23_Iman_Adibnazari.pdf},
pages={}
}
Abstract
This paper proposes a probabilistic Bayesian formulation for system identification (ID) and estimation of nonseparable Hamiltonian systems using stochastic dynamic models. Nonseparable Hamiltonian systems arise in models from diverse science and engineering applications such as astrophysics, robotics, vortex dynamics, charged particle dynamics, and quantum mechanics. The numerical experiments demonstrate that the proposed method recovers dynamical systems with higher accuracy and reduced predictive uncertainty compared to state-of-the-art approaches. The results further show that accurate predictions far outside the training time interval in the presence of sparse and noisy measurements are possible, which lends robustness and generalizability to the proposed approach. A quantitative benefit is prediction accuracy with less than 10% relative error for more than 12 times longer than a comparable least-squares-based method on a benchmark problem.
Bibtex
@INPROCEEDINGS{ShaGalGorKra_BayesianIDnonseparable_CDC2022,
author={Sharma, Harsh and Galioto, Nicholas and Gorodetsky, Alex A. and Kramer, Boris},
booktitle={61st IEEE Conference on Decision and Control (CDC)},
title={Bayesian Identification of Nonseparable {H}amiltonian Systems Using Stochastic Dynamic Models},
year={2022},
organization={IEEE},
doi={10.1109/CDC51059.2022.9992571},
pages={6742-6749}
}
Abstract
This paper derives a new data-driven reduced model for a challenging large-scale combustion simulation and offers a detailed performance comparison with two state-of-the-art reduced models that were previously developed for this application. This study helps to determine the ROM approaches that practitioners can choose for accurate simulations of such challenging multiphysics and multiscale problems. In particular, we learn a physics-based cubic reduced-order model (ROM) via the operator inference framework (OPINF). The key to the efficiency and physics-based nature of the model lies in the use of variable transformations of the original highly nonlinear system that make the governing equations polynomial in the new variables. We compare this approach with a quadratic operator inference ROM and with dynamic mode decomposition with control (DMDc), a modal decomposition method that has been shown to work well in many fluid dynamical applications. An extensive comparison of these approaches yields interesting insights into the model form. All reduced models are quite accurate and provide significant computational speedup compared to the high-fidelity model. We find that the cubic and quadratic models based on operator inference capture more high-frequency content of the system compared to DMDc. We also find that the presence of the cubic term in the operator inference ROM improves stability properties of the system.
Bibtex
@inbook{JMcQK,
title={Performance comparison of data-driven reduced models for a single-injector combustion process.},
author={Jain, Parikshit and McQuarrie, Shane A. and Kramer, Boris},
booktitle = {AIAA Propulsion and Energy 2021 Forum},
chapter = {},
pages = {},
doi = {10.2514/6.2021-3633},
URL = {https://doi.org/10.2514/6.2021-3633}
}
Abstract
This paper presents a physics-based data-driven method to learn predictive reduced-order models (ROMs) from high-fidelity simulations, and illustrates it in the challenging context of a single-injector combustion process. The method combines the perspectives of model reduction and machine learning. Model reduction brings in the physics of the problem, constraining the ROM predictions to lie on a subspace defined by the governing equations. This is achieved by defining the ROM in proper orthogonal decomposition (POD) coordinates, which embed the rich physics information contained in solution snapshots of a high-fidelity computational fluid dynamics (CFD) model. The machine learning perspective brings the flexibility to use transformed physical variables to define the POD basis. This is in contrast to traditional model reduction approaches that are constrained to use the physical variables of the high-fidelity code. Combining the two perspectives, the approach identifies a set of transformed physical variables that expose quadratic structure in the combustion governing equations and learns a quadratic ROM from transformed snapshot data. This learning does not require access to the high-fidelity model implementation. Numerical experiments show that the ROM accurately predicts temperature, pressure, velocity, species concentrations, and the limit-cycle amplitude, with speedups of more than five orders of magnitude over high-fidelity models. Our ROM simulation is shown to be predictive 200% past the training interval. ROM-predicted pressure traces accurately match the phase of the pressure signal and yield good approximations of the limit-cycle amplitude.
Bibtex
@inbook{SKHW_2020_learning_physicsbased_ROMs_SciTech,
title={Learning physics-based reduced-order models for a single-injector combustion process.},
author={Swischuk, Renee and Kramer, Boris and Huang, Cheng and Willcox, Karen},
booktitle = {AIAA SciTech 2020 Forum},
chapter = {},
pages = {},
doi = {10.2514/6.2020-1411},
URL = {https://doi.org/10.2514/6.2020-1411}
}
Abstract
When designing systems, uncertainties must be dealt with at various levels. The designer must define appropriate cost and constraint functions that account for such uncertainties and capture the risk associated with unwanted system behavior. The choice of these cost and constraint functions additionally plays an important role in the convergence behavior of the optimization and, among other things, the final design. This paper studies different types of risk-based optimization problem formulations that can aid in efficient and robust design of complex engineering systems. In tutorial form, the paper describes risk-based optimization problem formulations, specifically, reliability-based design optimization, conditional-value-at-risk-based optimization, and buffered-failure-probability-based optimization. The properties of each formulation are analyzed and general guidelines for the appropriate choice of the optimization problem are provided for a given application setup. An in-depth understanding of the different optimization problems should facilitate development of future methods for designing safe engineering systems.
Bibtex
@inbook{CKN_2020_risk_based_design_opt_SciTech,
title={Risk-based design optimization via probability of failure, conditional value-at-risk, and buffered probability of failure.},
author={Chaudhuri, Anirban and Kramer, Boris and Norton, Matthew},
booktitle = {AIAA SciTech 2020 Forum},
chapter = {},
pages = {},
doi = {10.2514/6.2020-2130},
URL = {https://doi.org/10.2514/6.2020-2130}
}
Abstract
This paper presents Transform & Learn, a physics-informed surrogate modeling approach that unites the perspectives of model reduction and machine learning. The proposed method uses insight from the physics of the problem, in the form of partial differential equation (PDE) models, to derive a state transformation in which the system admits a quadratic representation. Snapshot data from a high-fidelity model simulation are transformed to the new state representation and subsequently are projected onto a low-dimensional basis. The quadratic reduced model is then learned via a least-squares-based operator inference procedure. The state transformation thus plays two key roles in the proposed method: it allows the task of nonlinear model reduction to be reformulated as a structured model learning problem, and it parametrizes the machine learning problem in a way that recovers efficient, generalizable models. The proposed method is demonstrated on two PDE examples. First, we transform the Euler equations in conservative variables to the specific volume state representation, yielding low-dimensional Transform & Learn models that achieve a 0.05% relative state error when compared to a high-fidelity simulation in the conservative variables. Second, we consider a model of the Continuously Variable Resonance Combustor, a single element liquid-fueled rocket engine experiment. We show that the specific volume representation of this model also has quadratic structure and that the learned quadratic reduced models can accurately predict the growing oscillations of an unstable combustor.
Bibtex
@inbook{QKMW_2019_transform_and_learn,
title={Transform \& Learn: A data-driven approach to nonlinear model reduction},
author={Qian, Elizabeth and Kramer, Boris and Marques, Alexandre and Willcox, Karen},
booktitle = {AIAA Aviation 2019 Forum},
chapter = {},
pages = {},
doi = {10.2514/6.2019-3707},
URL = {https://doi.org/10.2514/6.2019-3707}
}
Abstract
We present new results on robust model reduction for partial differential equations. Our contribution is threefold: 1.) The stabilization is achieved via closure models for reduced order models (ROMs), where we use Lyapunov robust control theory to design a new stabilizing closure model that is robust with respect to parametric uncertainties; 2.) The free parameters in the proposed ROM stabilization method are auto-tuned using a data-driven multi-parametric extremum seeking (MES) optimization algorithm; and 3.) The challenging 3D Boussinesq equation numerical test-bed is used to demonstrate the advantages of the proposed method.
Bibtex
@INPROCEEDINGS{BBK17PODstabilizationBoussinesq,
author={Benosman, M. and Borggaard, J. and Kramer, B.},
booktitle={American Control Conference (ACC)},
title={Robust {POD} model stabilization for the 3D {B}oussinesq equations based on {L}yapunov theory and extremum seeking},
year={2017},
organization={IEEE},
pages={1827--1832}
}
Abstract
We consider control and design for coupled, multiphysics systems governed by partial differential equations (PDEs). The numerical solution of the control problem involves large systems of ordinary differential equations arising from a spatial discretization scheme, which can be prohibitively expensive. Reduced-order surrogate models have evolved as a way to circumvent this computational problem. While many reduced-order models work well for simulation, the task of control adds additional complexity. We investigate the effects of different reduced-order models on the optimal feedback control. We propose to use a structure-preserving surrogate model, constructed by computing dominant subspaces for each physical quantity separately. This method addresses the different scaling of variables commonly found in multiphysics problems. As a test example, a coupled Burgers' equation multiphysics PDE model is considered. In the numerical study, we find that the feedback gains obtained from the standard proper orthogonal decomposition for the combined variables fail to converge, while the physics-based method produces convergent control feedback matrices.
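A minimal sketch of the variable-by-variable basis construction described above (hypothetical names; not the paper's code): a separate POD basis is computed for each physical variable and the full-state basis is assembled block-diagonally, so that differently scaled variables do not contaminate each other's subspaces.

import numpy as np
from scipy.linalg import block_diag

def blockwise_pod_basis(snapshot_blocks, ranks):
    # snapshot_blocks: one snapshot matrix per physical variable
    # (rows = spatial DOFs of that variable, columns = time instances).
    bases = []
    for S, r in zip(snapshot_blocks, ranks):
        U, _, _ = np.linalg.svd(S, full_matrices=False)
        bases.append(U[:, :r])
    return block_diag(*bases)      # full-state basis, e.g. blkdiag(V1, V2)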
Bibtex
@inproceedings{kramer16MORcontrolBurgers,
title={Model reduction for control of a multiphysics system: Coupled {B}urgers' equation},
author={Kramer, Boris},
booktitle={American Control Conference (ACC)},
pages={6146--6151},
year={2016},
doi = {10.1109/ACC.2016.7526635},
organization={IEEE}
}
Abstract
We present results on stabilization for reduced order models (ROM) of partial differential equations using learning. Stabilization is achieved via closure models for ROMs, where we use a model-free extremum seeking (ES) dither-based algorithm to optimally learn the closure models’ parameters. We first propose to auto-tune linear closure models using ES, and then extend the results to a closure model combining linear and nonlinear terms, for better stabilization performance. The coupled Burgers’ equation is employed as a test-bed for the proposed tuning method.
Bibtex
@inproceedings{BKBG16learningBasedStabilizationBurgers,
title={Learning-based reduced order model stabilization for partial differential equations: Application to the coupled {B}urgers' equation},
author={Benosman, Mouhacine and Kramer, Boris and Boufounos, Petros and Grover, Piyush},
booktitle={American Control Conference (ACC)},
pages={1673--1678},
year={2016},
doi = {10.1109/ACC.2016.7525157},
organization={IEEE}
}
Abstract
If convection is the dominant mechanism for heat transfer in heat exchangers, then the devices are often modeled by hyperbolic partial differential equations. One of the difficulties with this approach is that for low (or zero) pipe flows, some of the empirical functions used to model friction can become singular. One way to address low flows is to include the full flux in the model so that the equation becomes a convection-diffusion equation with a small diffusion term. We show that solutions of the hyperbolic equation are recovered as limiting (viscosity) solutions of the convection-diffusion model. We employ a composite finite element / finite volume scheme to produce finite-dimensional systems for control design. This scheme is known to be unconditionally L2-stable, uniformly with respect to the diffusion term. We present numerical examples to illustrate how the inclusion of a small diffusion term can impact controller design.
Bibtex
@inproceedings{BK15controlHeatExchanger,
title={Full flux models for optimization and control of heat exchangers},
author={Burns, John A and Kramer, Boris},
booktitle={American Control Conference (ACC)},
pages={577--582},
year={2015},
doi = {10.1109/ACC.2015.7170797},
organization={IEEE}
}
Abstract
In this paper, we present a method to solve algebraic Riccati equations by employing a projection method based on Proper Orthogonal Decomposition. The method only requires simulations of linear systems to compute the solution of a Lyapunov equation. The leading singular vectors are then used to construct a projector which is employed to produce a reduced order system. We compare this approach to an extended Krylov subspace method and a standard Gramian based method.
Bibtex
@article{kramer14solvingRiccatiwPOD,
title={Solving Algebraic {R}iccati Equations via Proper Orthogonal Decomposition},
author={Kramer, Boris},
journal={IFAC Proceedings Volumes},
volume={47},
number={3},
pages={7767--7772},
year={2014},
publisher={Elsevier}
}
Patents
Abstract
A method determines values of the airflow measured in the conditioned environment during the operation of the air-conditioning system and selects, from a set of regimes predetermined for the conditioned environment, a regime of the airflow matching the measured values of the airflow. The method selects, from a set of models of the airflow predetermined for the conditioned environment, a model of airflow corresponding to the selected regime and models the airflow using the selected model. The operation of the air-conditioning system is controlled using the modeled airflow.
Bibtex
@misc{BGKB16PatentCompressedSensingLibrarySelection,
title={System and Method for Controlling Operations of Air-Conditioning System},
author={Boufounos, Petros and Grover, Piyush and Kramer, Boris and Benosman, Mouhacine},
year={2018},
month=Dec # "~4",
note={US Patent 10,145,576},
url = {https://patents.google.com/patent/US10145576B2/en}
}
Abstract
A method controls an operation of an air-conditioning system generating airflow in a conditioned environment. The method updates a model of airflow dynamics connecting values of flow and temperature of air conditioned during the operation of the air-conditioning system. The model is updated iteratively to reduce an error between values of the airflow determined according to the model and values of the airflow measured during the operation. Next, the method models the airflow using the updated model and controls the operation of the air-conditioning system using the model.
Bibtex
@misc{BBKG18PatentDMDairflow,
title={System and method for controlling operations of air-conditioning system},
author={Benosman, Mouhacine and Boufounos, Petros and Kramer, Boris and Grover, Piyush},
year={2018},
month=may # "~22",
note={US Patent 9,976,765},
url = {}
}
Theses
Abstract
This dissertation focuses on problems in design, optimization and control of complex, large-scale dynamical systems from different viewpoints. The goal is to develop new algorithms and methods that solve real problems more efficiently, together with providing mathematical insight into the success of those methods. There are three main contributions in this dissertation. In Chapter 3, we provide a new method to solve large-scale algebraic Riccati equations, which arise in optimal control, filtering and model reduction. We present a projection-based algorithm utilizing proper orthogonal decomposition, which is demonstrated to produce highly accurate solutions at low rank. The method is parallelizable, easy to implement for practitioners, and is a first step towards a matrix-free approach to solve AREs. Numerical examples for n >= 100,000 unknowns are presented. In Chapter 4, we develop a system identification method which is motivated by tangential interpolation. This addresses the challenge of fitting linear time-invariant systems to input-output responses of complex dynamics, where the number of inputs and outputs is relatively large. The method reduces the computational burden imposed by a full singular value decomposition, by carefully choosing directions on which to project the impulse response prior to assembly of the Hankel matrix. The identification and model reduction step follows from the eigensystem realization algorithm. We present three numerical examples, a mass spring damper system, a heat transfer problem, and a fluid dynamics system. We obtain error bounds and stability results for this method. Chapter 5 deals with control and observation design for parameter-dependent dynamical systems. We address this by using local parametric reduced-order models, which can be used online. Data available from simulations of the system at various configurations (parameters, boundary conditions) is used to extract a sparse basis to represent the dynamics (via dynamic mode decomposition). Subsequently, a new compressed sensing based classification algorithm is developed which incorporates the extracted dynamic information into the sensing basis. We show that this augmented classification basis makes the method more robust to noise, and results in superior identification of the correct parameter. Numerical examples consist of a Navier-Stokes, as well as a Boussinesq flow application.
Bibtex
@phdthesis{kramer15thesisMOR,
title={Model and data reduction for control, identification and compressed sensing},
author={Boris Kramer},
year={2015},
school={Virginia Tech},
url = {http://hdl.handle.net/10919/75179}
}
Abstract
This thesis is a numerical study of the coupled Burgers equation. The coupled Burgers equation is motivated by the Boussinesq equations that are often used to model the thermal-fluid dynamics of air in buildings. We apply Finite Element Methods to the coupled Burgers equation and conduct several numerical experiments. Based on these results, the Group Finite Element method (GFE) appears to be more stable than the standard Finite Element Method. The design and implementation of controllers heavily relies on rapid solutions to complex models such as the Boussinesq equations. Thus, we further examine the feasibility and efficiency of the Proper Orthogonal Decomposition (POD) for the coupled Burgers equation. Using POD, we reduce the system to a 'minimal' number of ODEs and conduct numerous numerical studies comparing the POD and GFE method. Further numerical experiments consider an application where the dynamics are projected on a POD basis and then the governing parameters of the system are varied.
Bibtex
@mastersthesis{kramer2011model,
title={Model reduction of the coupled {B}urgers equation in conservation form},
author={Kramer, Boris},
year={2011},
school={Virginia Tech},
url = {http://hdl.handle.net/10919/34791}
}