Foundations For Continuous Engineering Of Trustworthy Autonomy (at Liverpool)
Objective
Ubiquitous AI will soon allow complex systems to drive on our roads, fly over our heads, move alongside us in our daily lives, and work in our factories. Despite this disruptive outlook, the deployment and broader adoption of learning-enabled autonomous systems in safety-critical scenarios remain challenging. Continuous engineering (DevOps) can mitigate problems that arise when new scenarios are encountered throughout the product life cycle. However, the technical foundations and assumptions on which traditional safety engineering principles rely do not extend to learning-enabled autonomous systems engineered under continuous development.
FOCETA gathers prominent academic groups and leading industrial partners to develop foundations for the continuous engineering of trustworthy learning-enabled autonomous systems. The targeted scientific breakthrough lies in the convergence of “data-driven” and “model-based” engineering, a convergence further complicated by the need to apply verification and validation incrementally and to avoid complete re-verification and re-validation efforts. The FOCETA paradigm is built on three scientific pillars:
- integration of learning-enabled and model-based components via a contract-based methodology that allows incremental modification of systems, including threat models for cyber-security (see the sketch after this list);
- adaptation of verification techniques from model-driven design to learning components, in order to enable unbiased decision making;
- incremental synthesis techniques that unify the enforcement of safety- and security-critical properties with the optimisation of performance.
The FOCETA approach, implemented in open-source tools and with open data-exchange standards, will be applied to demanding and challenging applications such as urban driving automation and intelligent medical devices, to demonstrate its viability, scalability, and robustness while addressing the cutting-edge technology needs of European industry.
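To give a flavour of the first pillar, the minimal Python sketch below shows what an assume-guarantee contract wrapped around a learning-enabled component might look like. Everything here is a hypothetical illustration of contract-based design in general: the `Contract` class, the `monitored` wrapper, and the speed thresholds are made up for this example and are not FOCETA tooling or APIs.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Contract:
    """An assume-guarantee pair: a predicate over inputs and one over (input, output)."""
    assumption: Callable[[float], bool]
    guarantee: Callable[[float, float], bool]

def monitored(component: Callable[[float], float],
              contract: Contract) -> Callable[[float], float]:
    """Wrap a component so every call is checked against its contract at runtime."""
    def wrapped(x: float) -> float:
        if not contract.assumption(x):
            raise ValueError(f"assumption violated for input {x}")
        y = component(x)
        if not contract.guarantee(x, y):
            raise RuntimeError(f"guarantee violated: component({x}) = {y}")
        return y
    return wrapped

# Hypothetical example: a learned speed controller maps a sensed obstacle
# distance (metres) to a commanded speed (m/s). The contract assumes the
# distance lies within sensor range and guarantees a distance-dependent
# speed bound; all thresholds are invented for illustration.
speed_contract = Contract(
    assumption=lambda d: 0.0 <= d <= 100.0,
    guarantee=lambda d, v: 0.0 <= v <= min(13.9, d / 2.0),
)

controller = monitored(lambda d: min(13.9, d / 3.0), speed_contract)
print(controller(30.0))  # -> 10.0, within the contract
```

The point of such a contract is incrementality: a retrained or replaced learning-enabled component only needs to be re-checked against the same assume-guarantee pair, rather than triggering re-verification of the whole composed system.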
Funding Agency: EU H2020
Project Duration: 2020–2023
Personnel
- Dr Xiaowei Huang (Liverpool lead)
- Prof Sven Schewe (Liverpool Co-I)
- Dr Xingyu Zhao (Co-I)
- Dr Yi Dong (Postdoc)
External Collaborators
- Verimag, Université Grenoble Alpes (UGA) (project PI)
- DENSO
- fortiss
- Intel
- Siemens
- Mentor
- Graz University of Technology
- Bar Ilan University
- Aristotle University of Thessaloniki
- RGB Medical
- L-UP SAS
Publications
- Zhao, X., Huang, W., Banks, A., Cox, V., Flynn, D., Schewe, S. and Huang, X., 2021. Assessing the Reliability of Deep Learning Classifiers Through Robustness Evaluation and Operational Profiles. AISafety’21 Workshop at IJCAI’21. (link)
- Huang, W., Sun, Y., Zhao, X., Sharp, J., Ruan, W., Meng, J. and Huang, X., 2021. Coverage-Guided Testing for Recurrent Neural Networks. IEEE Transactions on Reliability. (link)
- Zhao, X., Huang, W., Huang, X., Robu, V. and Flynn, D., 2021. BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations. UAI 2021. (link)
- Zhao, X., Huang, W., Schewe, S., Dong, Y. and Huang, X., 2021. Detecting Operational Adversarial Examples for Reliable Deep Learning. DSN’21 (fast abstract). (link)
- Jin, G., Yi, X., Zhang, L., Zhang, L., Schewe, S. and Huang, X., 2020. How does Weight Correlation Affect Generalisation Ability of Deep Neural Networks? Advances in Neural Information Processing Systems, 33. (link)