Variable Complexity Modeling for Speeding Up Multi-Run Design Tasks with Computationally Expensive Simulations
Optimization and the improvement of robustness and reliability in the early stages of product development are often attempted today using simulation methods together with algorithms that run these simulations in an automated manner. In this context the need often arises to run the simulation models, with small changes of the geometry, hundreds to tens of thousands of times. If these simulations are computationally expensive, such as a full vehicle crash analysis, the wall clock time to perform the runs is often prohibitively large. Even the use of big compute clusters cannot always remedy this problem. For this reason, approximation methods are often used. But no matter which technique is employed (polynomial approximation, radial basis functions, kriging or support vector machines), these techniques are purely mathematical and have no knowledge of the physical problem they approximate. This paper presents a different approach. Here the same physical phenomenon is modeled using two different simulation models: one is very accurate but computationally expensive; the other is less accurate but computes faster. Both models are used to simulate the baseline design, and the difference is recorded either as an additive correction delta or as a multiplicative correction factor. The multiple runs of the optimization algorithm or stochastic technique are then performed using only the fast low-fidelity code, with the correction applied. When the current design point is sufficiently far away from the baseline point, the correction delta or factor needs to be updated. This methodology is demonstrated on a Taguchi Robust Design study for a full vehicle side crash using LS-DYNA. The focus of this paper is to explain the methodology, not to discuss the results.
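The correction scheme described in the abstract can be sketched in a few lines of code. The following is a minimal illustration, not the authors' implementation: `hi_fi` and `lo_fi` are hypothetical placeholders for the expensive and cheap simulation models, and the re-anchoring radius is an assumed tuning parameter standing in for the paper's "sufficiently far away" criterion.

```python
import math


def make_corrected_model(hi_fi, lo_fi, x0, mode="additive"):
    """Build a corrected low-fidelity surrogate anchored at baseline x0.

    Both models are evaluated once at the baseline; afterwards only the
    cheap lo_fi model is called, with the recorded correction applied.
    """
    hi0, lo0 = hi_fi(x0), lo_fi(x0)
    if mode == "additive":
        delta = hi0 - lo0              # correction delta: hi - lo at baseline
        return lambda x: lo_fi(x) + delta
    if mode == "multiplicative":
        beta = hi0 / lo0               # correction factor: hi / lo at baseline
        return lambda x: lo_fi(x) * beta
    raise ValueError(f"unknown mode: {mode}")


def needs_reanchor(x, x0, radius):
    """Flag when the current point has drifted too far from the baseline,
    i.e. when the correction should be recomputed with a new hi_fi run."""
    return math.dist(x, x0) > radius


# Toy example with analytic stand-ins for the two simulation models.
hi = lambda x: x[0] ** 2 + 1.0   # "expensive" model (hypothetical)
lo = lambda x: x[0] ** 2         # "cheap" model (hypothetical)

corrected = make_corrected_model(hi, lo, x0=[1.0], mode="additive")
print(corrected([2.0]))               # → 5.0, matches hi([2.0]) here
print(needs_reanchor([3.0], [1.0], radius=1.5))  # → True
```

By construction, either corrected model reproduces the high-fidelity response exactly at the baseline point; the quality away from it depends on how well the low-fidelity model captures the trends of the expensive one.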
https://www.dynamore.de/en/downloads/papers/09-conference/papers/F-III-03.pdf/view