EnnCore: End-to-End Conceptual Guarding of Neural Architectures
Description
EnnCore will deliver safeguarding for safety-critical neural-based architectures with the following novel properties:
(1) Full-stack symbolic software verification: we will develop the first bit-precise and scalable symbolic verification framework to reason over actual implementations of DNNs, thereby providing further guarantees about security properties concerning the underlying hardware and software, which are routinely ignored in the existing literature.
(2) Explainability / interpretability: EnnCore will pioneer the integration of knowledge-based and neural explainability methods to support end users in specifying security constraints and diagnosing security risks, providing assurances about security as neural-based (NB) models evolve. Particular attention will be given to the quantitative and qualitative characterisation of semantic-drift phenomena in security scenarios.
(3) Scalability: we will systematically combine contemporary symbolic methods for explaining, interpreting and verifying neural representations. In particular, we will develop a neuro-symbolic safeguard framework that links structural knowledge-based representation elements to attentional architecture elements, achieving scalability and precision in an unprecedented manner. We will also develop new learning techniques for reusing information across different verification runs, reducing formula size and consistently improving constraint solving.
Funding Agency: EPSRC
Project Time: 2020 - 2024
Personnel
- Xiaowei Huang (Liverpool lead)
- To be announced (postdoc)
External Collaborators
- Lucas Cordeiro (Project PI, Manchester)
- Gavin Brown (Co-I, Manchester)
- Mikel Luján (Co-I, Manchester)
- André Freitas (Co-I, Manchester)
- Mustafa A. Mustafa (Co-I, Manchester)
Publications
- Zhao, X., Huang, W., Banks, A., Cox, V., Flynn, D., Schewe, S. and Huang, X., 2021. Assessing the Reliability of Deep Learning Classifiers Through Robustness Evaluation and Operational Profiles. In AISafety'21 Workshop at IJCAI'21. (link)
- Huang, W., Sun, Y., Zhao, X., Sharp, J., Ruan, W., Meng, J. and Huang, X., 2021. Coverage-Guided Testing for Recurrent Neural Networks. IEEE Transactions on Reliability. (link)
- Zhao, X., Huang, W., Huang, X., Robu, V. and Flynn, D., 2021. BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations. In Uncertainty in Artificial Intelligence (UAI 2021). (link)
- Jin, G., Yi, X., Zhang, L., Zhang, L., Schewe, S. and Huang, X., 2020. How does Weight Correlation Affect Generalisation Ability of Deep Neural Networks?. Advances in Neural Information Processing Systems, 33. (link)
- Sun, Y., Chockler, H., Huang, X. and Kroening, D., 2020, August. Explaining Image Classifiers using Statistical Fault Localization. In European Conference on Computer Vision (pp. 391-406). Springer, Cham. (link)
- Meng, Y., Meng, W., Gao, D., Zhao, Y., Yang, X., Huang, X. and Zheng, Y., 2020, August. Regression of Instance Boundary by Aggregated CNN and GCN. In European Conference on Computer Vision (pp. 190-207). Springer, Cham. (link)
- Huang, X., Kroening, D., Ruan, W., Sharp, J., Sun, Y., Thamo, E., Wu, M. and Yi, X., 2020. A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. Computer Science Review, 37, p.100270. (link)
- Sun, Y., Zhou, Y., Maskell, S., Sharp, J. and Huang, X., 2020, May. Reliability validation of learning enabled vehicle tracking. In 2020 IEEE International Conference on Robotics and Automation (ICRA) (pp. 9390-9396). IEEE. (link)
- Li, R., Li, J., Huang, C.C., Yang, P., Huang, X., Zhang, L., Xue, B. and Hermanns, H., 2020, November. PRODeep: a platform for robustness verification of deep neural networks. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (pp. 1630-1634). (link)
- Meng, Y., Wei, M., Gao, D., Zhao, Y., Yang, X., Huang, X. and Zheng, Y., 2020, October. CNN-GCN aggregation enabled boundary regression for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 352-362). Springer, Cham. (link)