Success strategy courtesy of the AI agent

Porsche Engineering has developed an innovative calibration methodology based on deep reinforcement learning. PERL (Porsche Engineering Reinforcement Learning) delivers optimal strategies for engine calibration and considerably reduces its time and cost.

For a long time, it had not been possible to teach software the complex strategy of the board game Go: the required computing power and computing time would have been too great. The turning point came when the AI of a Go computer was trained using deep reinforcement learning.

One of the supreme disciplines of AI

Deep reinforcement learning, still a relatively new methodology, is considered one of the supreme disciplines of AI. It is a self-learning method that combines the classic methods of deep learning with those of reinforcement learning. The basic idea is that the algorithm (known as an "agent" in the jargon) interacts with its environment and is rewarded with bonus points for actions that lead to a good result and penalized with deductions in case of failure. The system initially uses trial and error to search for a way to get from the actual state to the target state; in doing so, the agent develops its own strategy during the training phase. At each step, the system uses a value network to approximate the sum of expected rewards the agent will receive from the current state onwards if it continues to behave as it is currently behaving. Based on the value network, a second network, known as the policy network, outputs the action probabilities that lead to the maximum sum of expected rewards. The result is a methodology known as the "policy," which the agent applies to other calculations after completing the learning phase.
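To make the interplay of the two networks concrete, the following is a minimal actor-critic sketch in Python/PyTorch. The toy environment, network sizes and hyperparameters are illustrative assumptions, not Porsche's code; the point is only how the value network's prediction steers the policy network's updates.

```python
import torch
import torch.nn as nn

# Value network: approximates the sum of expected rewards from a state onward.
value_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
# Policy network: outputs preferences (logits) over three discrete actions.
policy_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 3))
opt = torch.optim.Adam(list(value_net.parameters()) + list(policy_net.parameters()), lr=1e-2)

def env_step(x, a):
    """Toy environment: steer a scalar state toward the target value 0.0."""
    x_next = x + (a - 1) * 0.1           # actions 0/1/2 map to -0.1 / 0.0 / +0.1
    return x_next, -abs(x_next)          # reward: bonus for closing in on the target

gamma = 0.99                             # discount for future rewards
for episode in range(2000):
    x = float(torch.randn(()))           # random starting ("actual") state
    for _ in range(10):                  # trial-and-error rollout
        state = torch.tensor([x])
        dist = torch.distributions.Categorical(logits=policy_net(state))
        action = dist.sample()
        x_next, reward = env_step(x, action.item())
        with torch.no_grad():            # bootstrapped return estimate for the value net
            target = reward + gamma * value_net(torch.tensor([x_next]))
        value = value_net(state)
        advantage = (target - value).detach()   # did the action beat the prediction?
        loss = (-dist.log_prob(action) * advantage + (target - value).pow(2)).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        x = x_next
```

After training, sampling from the policy network alone reproduces the learned strategy; the value network is only needed during learning.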

Use in engine calibration

“Here, too, the best strategy for success is required to achieve optimal system tuning,” says Matteo Skull, Engineer at Porsche Engineering. The result is a completely new calibration approach: Porsche Engineering Reinforcement Learning (PERL). “With the help of deep reinforcement learning, we train the algorithm not only to optimize individual parameters, but to work out the strategy with which it can achieve an optimal overall calibration result for an entire function,” says Skull.
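The article does not disclose how PERL actually formulates calibration as a learning problem, but a task of this kind could hypothetically be cast as an RL environment along the following lines: the state is the current set of calibration parameters, an action nudges one parameter, and the reward measures how well the result matches reference measurements. All names here (CalibrationEnv, reference, step_size) are invented for illustration.

```python
import numpy as np

class CalibrationEnv:
    """Hypothetical calibration task: fit a parameter vector to reference data."""

    def __init__(self, n_params=16, step_size=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.reference = rng.uniform(0.0, 1.0, n_params)  # stand-in for dyno measurements
        self.params = np.full(n_params, 0.5)              # initial calibration values
        self.step_size = step_size

    def step(self, index, direction):
        # Action: nudge one calibration parameter up (+1) or down (-1).
        self.params[index] += direction * self.step_size
        error = np.abs(self.params - self.reference).mean()
        return self.params.copy(), -error                 # reward: better overall fit
```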

The application of the PERL methodology can basically be divided into two phases: the training phase is followed by real-time calibration on the engine dyno or in the vehicle. PERL is highly flexible here, because parameters such as engine design, displacement or charging system have no influence on the training success. “The only important thing is that both the training and the later target calibration use the same control logic so that the algorithm implements the results correctly,” says Skull.
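Under the same hypothetical assumptions as above, the two phases would separate roughly as follows: the policy is learned once on a training engine, and at calibration time it is only applied, without further learning. This is a sketch of the idea, not PERL's actual interface; `policy` stands for whatever the training phase produced.

```python
def apply_policy(policy, env, steps=200):
    """Phase 2 (sketch): run a trained policy on the target engine's environment.
    No learning happens here; the strategy from the training phase is reused."""
    state = env.params.copy()
    reward = None
    for _ in range(steps):
        index, direction = policy(state)   # the learned strategy picks the action
        state, reward = env.step(index, direction)
    return state, reward

# Because engine design, displacement or charging do not affect training
# success, the same policy could be reused on another variant that shares
# the control logic, e.g.: apply_policy(trained_policy, CalibrationEnv(seed=1))
```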

Continuous verification of the results

Once training is complete, PERL is ready for the actual calibration task on the engine. During testing, PERL applies the best calibration policy to the torque model under real-time conditions. “In addition, PERL allows us to specify both the calculation accuracy of the torque curve and a smoothing factor for interpolating the values between the calculated interpolation points. In this way, we improve the calibration's robustness against the influence of manufacturing tolerances or wear of engine components over the engine's lifetime,” explains Dr. Matthias Bach, Senior Manager Engine Calibration and Mechanics at Porsche Engineering.
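The smoothing factor Bach mentions can be pictured with a standard smoothing spline: a larger factor trades exact fidelity to the calculated support points for a smoother curve between them. The snippet below is a generic SciPy illustration with made-up data, not the PERL implementation.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
rpm = np.linspace(1000.0, 6000.0, 11)            # calculated support points
torque = 300.0 * np.sin(rpm / 6000.0 * np.pi) + rng.normal(0.0, 5.0, rpm.size)

spline = UnivariateSpline(rpm, torque, s=200.0)  # s = smoothing factor
dense_rpm = np.linspace(1000.0, 6000.0, 200)
smooth_torque = spline(dense_rpm)                # interpolated, smoothed curve
```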

Significantly reduced calibration effort

With today’s conventional tools, such as model-based calibration, the automated generation of parameter data (for example, the control maps in engine management) is generally not optimal and must be manually revised by the calibration engineer.

The current calibration process involves considerable time and cost. “Nowadays, the map-dependent calculation of a single parameter, for example the air charge model, requires a development time of about four to six weeks, combined with high test-bench costs,” says Bach. For the overall calibration of an engine variant, this results in a correspondingly high expenditure of time and money.

In brief

The innovative PERL methodology from Porsche Engineering uses deep reinforcement learning to develop optimal strategies for engine calibration (the “policy”).

Text: Richard Backhaus
Text first published in the Porsche Engineering Magazine, issue 1/2021
