AI is a very promising technology, but it still presents many uncertainties in how it will be designed, developed, and validated.
Right now, more work is needed to ensure that AI modules are reliable, safe, and meet performance standards. We expect progress in this area.
Airbus Protect participates in numerous aviation projects. These mainly focus on testing and validating complex systems such as aircraft, air traffic management, maintenance, railway, automotive, and industrial systems. We assist industrial system designers, developers, and operators in demonstrating the effectiveness of their systems. This includes evaluating how the systems function, as well as their cybersecurity measures, reliability, availability, maintainability, safety, and environmental impact.
Moreover, Airbus Protect actively participates in specific projects that target the fundamental behavior of AI elementary modules. Experiments are conducted to understand AI behavior, deadlocks, and task completion in a specific area. The goal is to gain insights into how AI functions in various scenarios.
Researchers aim to determine the effectiveness of AI in completing tasks efficiently.
The experiments focus on observing AI performance and behavior in different situations. In this regard, worth mentioning are "Evaluation of the performance of AI", "Confidence AI" and "Dependable and Explainable Learning".
These R&T projects help Airbus Protect employees enhance their expertise in new approaches to their historical business, i.e. system IVVQ processes. They also help in understanding new technologies: AI, digital twins, quantum computing, blockchains, autonomous vehicles, shuttles, trains and UAVs, AI performance demonstration, and new cybersecurity technologies integrating AI, for example.
Overall, consultants at Airbus Protect are acquiring new skills. They are able to integrate emerging technology and science into their expertise, which equips them to enter upcoming markets.
AI is a very powerful technology, mature enough to turn the high capability of Big Data capture and processing into functionalities of very high added value, such as decision making in complex situations and environments, autonomous behaviour, supervision of fleets of systems, and other challenging services in open environments encompassing an infinite universe of use cases and unpredicted events.
Besides, work contributing to "trustworthy AI" continues to progress, especially the application of mathematical frameworks such as Topological Data Analysis, Abstract Interpretation, generalization, or adversarial techniques, to cover an infinite combination of scenario parameters and variations.
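To make the adversarial idea concrete, here is a minimal, self-contained sketch of robustness probing on a toy logistic classifier using an FGSM-style perturbation (perturbing the input in the direction of the loss gradient). The weights, inputs, and epsilon values are illustrative assumptions, not a description of any Airbus Protect method.

```python
import numpy as np

def predict(w, b, x):
    """Logistic classifier: probability of class 1 given input x."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(w, b, x, y, eps):
    """Shift x by eps in the sign of the input gradient of the logistic loss.

    For logistic loss, the gradient with respect to the input is (p - y) * w,
    so the attack pushes x in the direction that most increases the loss.
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and a correctly classified sample (hypothetical numbers).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

# Sweep the perturbation budget to find where the prediction flips.
for eps in (0.0, 0.2, 0.5, 1.0):
    x_adv = fgsm_perturb(w, b, x, y, eps)
    print(f"eps={eps}: still correct -> {predict(w, b, x_adv) > 0.5}")
```

Sweeping the perturbation budget in this way gives a rough empirical robustness radius for one sample; frameworks based on abstract interpretation aim to prove such bounds formally rather than by sampling.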
Moreover, there is a gap between demonstrating the performance of individual AI modules, such as intelligent sensors, and demonstrating the performance of a whole system, such as an autonomous vehicle.
In absolute terms, the answer is no: full confidence is not yet reachable in the current state of the art. If we want the ability to demonstrate a performance or a property, we have to restrict the perimeter of usage with very strict constraints and limiting conditions (such as a limited speed for a vehicle, or a fixed trajectory for an autonomous shuttle). Under these strong postulates, it can be possible to demonstrate formal safety properties independent of the variability of use cases or scenarios.
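The role of such restrictive postulates can be sketched with a deliberately simple example. Under the assumption "speed never exceeds a fixed cap", a basic safety property of an autonomous shuttle, that the worst-case stopping distance stays within the obstacle-detection range, can be checked over the whole restricted envelope. All numbers below are hypothetical, and this is a toy analysis, not a certification argument.

```python
# Hypothetical operating parameters for an autonomous shuttle.
V_MAX = 8.0          # m/s, enforced speed cap (the restrictive postulate)
DECEL_MIN = 2.5      # m/s^2, guaranteed minimum braking deceleration
REACTION_T = 0.5     # s, worst-case detection and actuation delay
SENSOR_RANGE = 40.0  # m, obstacle-detection range of the sensor

def stopping_distance(v):
    """Worst case: travel at speed v during the reaction time, then brake."""
    return v * REACTION_T + v * v / (2.0 * DECEL_MIN)

def property_holds(v_max, steps=1000):
    """Check 'stopping distance < sensor range' across the speed envelope.

    Stopping distance is monotone in v, so checking v_max alone would
    suffice; the sweep makes the 'whole envelope' idea explicit.
    """
    return all(
        stopping_distance(v_max * i / steps) < SENSOR_RANGE
        for i in range(steps + 1)
    )

print(property_holds(V_MAX))   # holds under the speed cap
print(property_holds(25.0))    # no longer holds once the cap is lifted
```

The point of the sketch is that the property is only demonstrable because the envelope is bounded: remove the speed cap and the same check fails, which mirrors why unrestricted operational domains defeat formal demonstration today.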
When this kind of restrictive demonstration cannot be achieved, a combination of audits, virtual simulations, controlled testing, and real-world experience can be used. This helps to establish a framework for ensuring safety and justification in engineering processes.
A process to collect in-service feedback is also important, to identify and prevent near misses during operation.
A continuous process of method improvement should eventually lead to a satisfactory level of assurance for AI-based applications or complex systems that include AI bricks. Currently, these methods have not yet matured in their demonstration capability, and full confidence is not available from a safety point of view for smart mobility, for example.
Nonetheless, we expect a lot of progress in the coming years in controlling and validating what AI can provide in terms of robustness, correctness, relevancy, explainability, and interpretability.