EnnCore: End-to-End Conceptual Guarding of Neural Architectures (at Liverpool)



Description: EnnCore will deliver safeguarding for safety-critical neural-based (NB) architectures with the following novel properties:
  • Full-stack symbolic software verification: we will develop the first bit-precise and scalable symbolic verification framework that reasons over actual implementations of DNNs, thereby providing guarantees about security properties of the underlying hardware and software, which are routinely ignored in the existing literature (a minimal sketch of this style of verification is given after this list).
  • Explainability and interpretability: EnnCore will pioneer the integration of knowledge-based and neural explainability methods to support end users in specifying security constraints and diagnosing security risks, providing security assurances as NB models evolve. Particular attention will be given to the quantitative and qualitative characterization of semantic-drift phenomena in security scenarios.
  • Scalability: we will systematically combine contemporary symbolic methods for explaining, interpreting, and verifying neural representations. In particular, we will develop a neuro-symbolic safeguard framework that links structural knowledge-based representation elements to attentional architecture elements, achieving scalability and precision in an unprecedented manner. We will also develop new learning techniques that reuse information across verification runs to reduce formula size and consistently improve constraint solving.
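As a toy illustration of the symbolic verification direction (an assumed sketch, not EnnCore's actual framework), the following Python snippet uses the Z3 SMT solver to check a local robustness property of a single ReLU neuron by searching for a counterexample; the network, weights, and bounds are illustrative assumptions:

    # Minimal sketch: pose a robustness query for a toy ReLU neuron as an
    # SMT problem. The weights and bounds below are illustrative
    # assumptions, not taken from the EnnCore project.
    from z3 import Solver, Real, If, sat

    # Toy network: y = relu(w1*x1 + w2*x2 + b) with fixed weights.
    w1, w2, b = 0.5, -0.25, 0.1
    x1, x2 = Real("x1"), Real("x2")
    pre = w1 * x1 + w2 * x2 + b
    y = If(pre > 0, pre, 0)

    # Property: for all inputs in an epsilon-ball around (1, 1), the
    # output stays below 0.5. We assert the negation and look for a model.
    eps = 0.05
    s = Solver()
    s.add(x1 >= 1 - eps, x1 <= 1 + eps)
    s.add(x2 >= 1 - eps, x2 <= 1 + eps)
    s.add(y > 0.5)  # negation of the property

    if s.check() == sat:
        print("Counterexample found:", s.model())
    else:
        print("Property holds on the epsilon-ball.")

Note that this sketch uses real arithmetic for brevity; a bit-precise analysis in the spirit of the project would instead model the fixed- or floating-point arithmetic of the actual implementation (for example via SMT bitvector or floating-point theories).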

Funding Agency: EPSRC

Project Duration: 2020 - 2024

Personnel
  • Xiaowei Huang (Liverpool lead)
  • to be announced (postdoc)
External Collaborators
  • Lucas Cordeiro (Project PI, Manchester)
  • Gavin Brown (Co-I, Manchester)
  • Mikel Luján (Co-I, Manchester)
  • André Freitas (Co-I, Manchester)
  • Mustafa A. Mustafa (Co-I, Manchester)
Publications
  • How does Weight Correlation Affect the Generalisation Ability of Deep Neural Networks?
  • Explaining Image Classifiers Using Statistical Fault Localization
  • Regression of Instance Boundary by Aggregated CNN and GCN
  • A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability
    • Xiaowei Huang, Daniel Kroening, Wenjie Ruan, James Sharp, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi.
    • Computer Science Review, 37. 2020.
    • link to publisher
    • arXiv version
  • Reliability Validation of Learning Enabled Vehicle Tracking
  • PRODeep: a platform for robustness verification of deep neural networks
    • Renjue Li, Jianlin Li, Cheng-Chao Huang, Pengfei Yang, Xiaowei Huang, Lijun Zhang, Bai Xue, Holger Hermanns.
    • ESEC/FSE 2020.
    • link to publisher
  • CNN-GCN Aggregation Enabled Boundary Regression for Biomedical Image Segmentation
    • Yanda Meng, Meng Wei, Dongxu Gao, Yitian Zhao, Xiaoyun Yang, Xiaowei Huang, Yalin Zheng.
    • MICCAI (4) 2020: 352-362.
    • link to publisher
Please feel free to contact Xiaowei Huang for more information.