Simulation has become an essential technology to help companies shorten time to market and reduce design costs. Engineers and researchers use simulation for a variety of applications, including:
- Use a virtual model (also known as a digital twin) to simulate and test their complex systems early and often in the design process.
- Maintain a digital thread with traceability through requirements, system architecture, component design, code, and testing.
- Extend their systems to perform predictive maintenance (PdM) and failure analysis.
Many organizations are improving their simulation capabilities by integrating artificial intelligence (AI) into their model-based design. Historically, these two areas have been separate, but used together effectively they create significant value for engineers and researchers. The strengths and weaknesses of the two technologies complement each other, helping businesses solve three main challenges.
Challenge 1: Better training data for more accurate AI models through simulation
Simulation models can synthesize quality, clean, cataloged training data that would be difficult or expensive to collect in the real world. While most AI models run with fixed parameter values, they are constantly exposed to new data that may not be captured in the training set. If this goes unnoticed, the models will generate inaccurate predictions or fail outright, leaving engineers to spend hours figuring out why the model isn’t working.
Simulation can help engineers overcome these challenges. Rather than tweaking the architecture and parameters of the AI model, engineers often gain more accuracy by spending that time improving the training data.
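To make the idea concrete, here is a minimal Python sketch that sweeps the parameters of a simple simulated system to produce a labeled training set; the mass-spring-damper model and the choice of damping as the label are illustrative assumptions, not a prescribed workflow.

```python
# Minimal sketch: generating synthetic training data from a simulation.
# The mass-spring-damper model and the labeling scheme are illustrative
# assumptions chosen for the example.
import numpy as np
from scipy.integrate import solve_ivp

def simulate_response(damping, stiffness, t_end=10.0, n_samples=100):
    """Simulate a unit-mass spring-damper released from x=1 and return x(t)."""
    def dynamics(t, state):
        x, v = state
        return [v, -damping * v - stiffness * x]
    t_eval = np.linspace(0.0, t_end, n_samples)
    sol = solve_ivp(dynamics, (0.0, t_end), [1.0, 0.0], t_eval=t_eval)
    return sol.y[0]  # displacement trace over time

# Sweep parameter combinations that would be costly to measure physically.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(500):
    damping = rng.uniform(0.1, 2.0)
    stiffness = rng.uniform(1.0, 20.0)
    X.append(simulate_response(damping, stiffness))
    y.append(damping)  # label: the quantity we later want the AI model to infer

X, y = np.array(X), np.array(y)  # clean, cataloged training set from simulation
```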
Since a model’s performance depends so heavily on the quality of the data it is trained on, engineers can improve results through an iterative process: simulate data, update the AI model, observe the conditions it cannot predict well, and collect simulated data for those conditions.
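A rough sketch of that loop, with a toy simulator and an off-the-shelf regression model standing in for the real system and AI model, might look like this:

```python
# Minimal sketch of the iterative loop: train, find weak spots, simulate more
# data for those conditions, retrain. The toy simulator and random-forest
# model are stand-ins chosen for brevity.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def simulate(params, rng):
    """Toy 'simulation': a nonlinear response plus noise for each parameter row."""
    return np.sin(3 * params[:, 0]) * params[:, 1] + 0.05 * rng.normal(size=len(params))

rng = np.random.default_rng(1)
params = rng.uniform(0, 1, size=(200, 2))          # initial operating conditions
X, y = params, simulate(params, rng)

model = RandomForestRegressor(random_state=0)
for iteration in range(3):
    model.fit(X, y)
    # Evaluate on a dense sample of conditions and find where the model is weakest.
    test = rng.uniform(0, 1, size=(1000, 2))
    err = np.abs(model.predict(test) - simulate(test, rng))
    worst = test[np.argsort(err)[-100:]]            # conditions predicted poorly
    # Simulate additional data concentrated on those conditions, then retrain.
    X = np.vstack([X, worst])
    y = np.concatenate([y, simulate(worst, rng)])
    print(f"iteration {iteration}: mean abs error = {err.mean():.3f}")
```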
Challenge 2: AI for new in-product features
Simulation has become an essential part of the design process for engineers building embedded systems for applications such as control systems and signal processing. In many cases, these engineers develop virtual sensors: estimators that calculate a value that is not directly measured from the signals of available sensors. Traditional methods are limited in their ability to capture the nonlinear behavior present in many real-world systems, so engineers are turning to AI-based approaches that have the flexibility to model these complexities. They use data (measured or simulated) to train an AI model that can predict the unobserved state from the observed states, and then integrate that AI model into the system.
In this case, the AI model is part of the control algorithm that ends up on the physical hardware and usually needs to be programmed in a lower-level language, like C/C++. These requirements may place restrictions on the types of machine learning models appropriate for such applications, so technical professionals may need to try multiple models and compare accuracy and performance trade-offs on the device.
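As a rough illustration of the virtual-sensor idea and this kind of trade-off study, the sketch below trains two candidate models to estimate an unmeasured quantity from available sensor signals and compares their accuracy and per-prediction latency; the synthetic data and the specific models are assumptions made for the example.

```python
# Minimal sketch of a virtual sensor: learn an unmeasured quantity from
# available sensor signals, then compare two candidate models on accuracy
# and inference latency as a proxy for on-device trade-offs.
import time
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
observed = rng.uniform(-1, 1, size=(5000, 4))                # available sensors
unobserved = (np.tanh(observed @ np.array([0.8, -0.5, 0.3, 1.1]))
              + 0.02 * rng.normal(size=5000))                # quantity with no sensor

X_train, X_test, y_train, y_test = train_test_split(
    observed, unobserved, random_state=0)

for name, candidate in [("linear", LinearRegression()),
                        ("boosted trees", GradientBoostingRegressor(random_state=0))]:
    candidate.fit(X_train, y_train)
    start = time.perf_counter()
    pred = candidate.predict(X_test)
    latency_us = 1e6 * (time.perf_counter() - start) / len(X_test)
    print(f"{name}: MAE={mean_absolute_error(y_test, pred):.4f}, "
          f"~{latency_us:.2f} µs per prediction")
```

A simpler model is usually cheaper to run and easier to port to C/C++, so the "best" choice depends on how much accuracy the application can trade for footprint and speed on the target hardware.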
At the forefront of research in this area, reinforcement learning takes this approach a step further: rather than learning just the estimator, it learns the entire control strategy. This technique has proven effective in some difficult applications, such as robotics and autonomous systems, but building this type of model requires an accurate model of the environment – never a guarantee – as well as massive computing power to run a large number of simulations.
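The sketch below gives a flavor of that cost: a control policy is tuned purely against a simulated environment, and every candidate policy requires fresh simulated episodes. Gymnasium’s CartPole environment and a simple hill-climbing policy search stand in here for a real plant model and a full reinforcement learning algorithm such as DQN or PPO.

```python
# Minimal sketch of training a control policy against a simulated environment.
# CartPole stands in for a plant model; hill-climbing policy search stands in
# for a full RL algorithm.
import numpy as np
import gymnasium as gym

def rollout(env, weights, max_steps=500):
    """Run one simulated episode with a linear policy and return total reward."""
    obs, _ = env.reset()
    total = 0.0
    for _ in range(max_steps):
        action = int(obs @ weights > 0)          # threshold a linear score
        obs, reward, terminated, truncated, _ = env.step(action)
        total += reward
        if terminated or truncated:
            break
    return total

env = gym.make("CartPole-v1")
rng = np.random.default_rng(3)
best_weights, best_reward = rng.normal(size=4), -np.inf

# Each candidate policy needs fresh simulated episodes -- this is where the
# "large number of simulations" cost comes from.
for episode in range(200):
    candidate = best_weights + 0.1 * rng.normal(size=4)
    reward = rollout(env, candidate)
    if reward > best_reward:
        best_weights, best_reward = candidate, reward

print(f"best episode reward after policy search: {best_reward}")
```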
Challenge 3: Balancing the “good” versus the “now”
Businesses have always struggled with time to market. Organizations that ship a buggy or flawed solution risk irreparable harm to their brand, especially startups. The reverse is equally true: those who move too slowly in an established market struggle to gain ground. Simulation was an important design innovation when first introduced, but its ever-improving fidelity and ability to create realistic scenarios can slow down perfectionist engineers. Too often, organizations try to build “perfect” simulation models that take a long time to develop, introducing the risk that the market will have moved on before they are finished.
To strike the right balance between speed and quality, technical professionals must recognize that there will always be environmental nuances that cannot be simulated. One should never blindly trust AI models, even when they serve as approximations for complex, high-fidelity systems.
The future of AI for simulation
Artificial intelligence and simulation technologies have built and maintained their momentum individually for almost a decade. Now engineers are starting to see a lot of value in their intersection, given the symbiotic nature of their strengths and weaknesses.
As models continue to serve increasingly complex applications, AI and simulation will become even more essential tools in the engineer’s toolbox. With the ability to develop, test, and validate models accurately and affordably, these methodologies will only grow.
Seth DeLand is Head of Data Analytics Product Marketing at MathWorks.