Department of Computer Science,
University of Liverpool,
Liverpool, L69 3BX
Office: George Holt Building, Room H201A
Phone: (+44) 1517958858
Mobile: (+44) 7831378101
Email: xiaowei.huang at liverpool.ac.uk, xiaowei.huang at live.com
I am a Lecturer (Assistant Professor). Prior to Liverpool, I worked at the University of Oxford, the University of New South Wales, and the Chinese Academy of Sciences.
My research concerns the correctness (e.g., safety, trustworthiness) of autonomous systems, including
- logic-based approaches for the specification, verification and synthesis of autonomous multi-agent systems, and
- safety verification of machine learning techniques such as neural network-based deep learning.
- (07/2018) Slides of my talk at Imperial can be found here.
- (07/2018) Our paper titled "A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees" is available on arXiv. The paper summarises our work on using Lipschitz continuity and two-player games to achieve the verification; the provable guarantee relies solely on the Lipschitz constants. In addition to formulating the maximum safe radius problem as a cooperative game, we also study the feature robustness problem as a competitive game.
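The role the Lipschitz constant plays in such guarantees can be shown with a minimal sketch. This is an illustration of the general principle, not the paper's algorithm; the function name and the simple classifier setting are assumptions. If every class score of a classifier is L-Lipschitz, the classification margin at an input yields a certified lower bound on its maximum safe radius:

```python
import numpy as np

def certified_radius(logits, lipschitz_const):
    """Lower bound on the maximum safe radius of an input.

    Assumes every class score f_j is L-Lipschitz; then the margin
    f_c - f_j between the predicted class c and any other class j
    changes by at most 2L * ||delta|| under a perturbation delta,
    so no perturbation of norm below margin / (2L) can change
    the classification.
    """
    logits = np.asarray(logits, dtype=float)
    c = int(np.argmax(logits))
    runner_up = np.max(np.delete(logits, c))
    margin = logits[c] - runner_up
    return margin / (2.0 * lipschitz_const)

# Margin 5.0 - 1.0 = 4.0 and L = 2.0 give a certified radius of 1.0.
r = certified_radius([1.0, 5.0, 0.5], 2.0)
```

The bound is conservative: a tighter Lipschitz constant directly yields a larger certified radius, which is why the quality of the guarantee hinges on the constants.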
- (07/2018) Our new paper titled "Concolic Testing for Deep Neural Networks", co-authored with Youcheng Sun, Min Wu, Wenjie Ruan, Marta Kwiatkowska, and Daniel Kroening, is available on arXiv. The paper has been conditionally accepted to ASE2018; an updated version will follow soon.
- (06/2018) Our journal paper "An epistemic strategy logic", coauthored with Ron van der Meyden, has been accepted to the ACM Transactions on Computational Logic (TOCL).
- (04/2018) Gave an invited talk on "Verification and Testing of Deep Learning" at the ETAPS workshop on "Formal Methods for ML-Enabled Autonomous Systems (FoMLAS2018)".
- (04/2018) Our new paper "Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for L0 Norm" is available on arXiv.
- (04/2018) Our paper titled "Reachability Analysis of Deep Neural Networks with Provable Guarantees"
has been accepted to IJCAI2018. The paper is co-authored with Wenjie Ruan (Oxford) and Marta Kwiatkowska (Oxford).
We present a novel, global optimisation based approach to study properties (e.g., robustness, output range, safety) of deep neural networks
with provable guarantees. Unlike constraint-based approaches, our new approach is not affected by
the internal structure of the networks, and is therefore able to handle state-of-the-art networks. The paper is available on arXiv.
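The idea of bounding a black-box function through global optimisation with a known Lipschitz constant can be sketched in one dimension. This is a simplified illustration under assumed names and parameters, not the paper's method: sampling an L-Lipschitz function on a grid of spacing h pins the true extrema to within L·h/2 of the best sampled values.

```python
import numpy as np

def output_range(f, a, b, lipschitz_const, eps=1e-3):
    """Guaranteed bounds on the range of an L-Lipschitz function on [a, b].

    Treats f as a black box: on a grid of spacing h, the true
    minimum/maximum can differ from the best sampled value by at
    most L * h / 2, so choosing h = 2 * eps / L gives bounds that
    are correct and at most eps away from the sampled extrema.
    """
    h = 2.0 * eps / lipschitz_const
    n = max(2, int(np.ceil((b - a) / h)) + 1)   # actual spacing <= h
    xs = np.linspace(a, b, n)
    ys = np.array([f(x) for x in xs])
    return ys.min() - eps, ys.max() + eps

# sin is 1-Lipschitz; its true range on [0, pi] is [0, 1].
lo, hi = output_range(np.sin, 0.0, np.pi, 1.0)
```

Because only function evaluations and a Lipschitz constant are needed, the structure of the function (here, a network) never enters the computation.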
- (04/2018) Our paper titled "Model Checking Probabilistic Epistemic Logic for Probabilistic Multiagent Systems"
has been accepted to IJCAI2018. The paper is co-authored with Chen Fu (CAS), Andrea Turrini (CAS), Lei Song, Yuan Feng (UTS),
and Lijun Zhang (CAS). It presents a model checking algorithm (and its tool) for probabilistic multiagent systems with respect to a probabilistic epistemic logic PETL.
- (04/2018) Received a Defence Science and Technology Laboratory (Dstl) project on "Test Coverage Metrics for Artificial Intelligence", which will run from 2018 to 2019.
We will explore software testing techniques for the safety of deep neural networks.
- (03/2018) Received an EPSRC 2018 Vacation Bursary Programme project to support a student, Edward Barker, on "Explaining Deep Learning-based Image Classification".
- (03/2018) Our new paper titled "Testing Deep Neural Networks" is available on arXiv.
It is co-authored with Dr Youcheng Sun and Prof Daniel Kroening from Oxford.
The paper proposes a set of white-box test coverage criteria and test case generation algorithms for DNNs. It is a big step towards developing software testing approaches on systems with deep learning components.
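A much simpler cousin of such criteria, the basic neuron coverage metric (the fraction of hidden neurons activated above a threshold by at least one test input), can be sketched as follows. This is an illustrative baseline from the literature, not the criteria proposed in the paper, and the toy fully connected ReLU network is an assumption:

```python
import numpy as np

def neuron_coverage(weights, biases, test_inputs, threshold=0.0):
    """Fraction of hidden neurons activated above `threshold` by
    at least one input in the test suite.

    `weights` and `biases` define a toy fully connected ReLU
    network, given as lists of per-layer matrices and vectors.
    """
    activated = []
    for x in test_inputs:
        a = np.asarray(x, dtype=float)
        flags = []
        for W, b in zip(weights, biases):
            a = np.maximum(0.0, W @ a + b)   # ReLU activation
            flags.append(a > threshold)
        activated.append(np.concatenate(flags))
    # A neuron counts as covered if any test input activates it.
    return float(np.any(activated, axis=0).mean())

# One identity layer: input [1, -1] activates only the first neuron,
# so a single-input suite covers half of the neurons.
W, b = [np.eye(2)], [np.zeros(2)]
cov = neuron_coverage(W, b, [[1.0, -1.0]])
```

As with code coverage in conventional software testing, a higher score does not prove correctness; it only measures how thoroughly the test suite exercises the network's behaviours.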
- (01/2018) Gave a short talk and served as a panelist at ERTS 2018 in Toulouse, France. The panel was organised by Airbus.
- (12/2017) Our paper titled "Feature guided blackbox testing of deep neural networks" has been accepted to TACAS'2018. It is co-authored with Matthew Wicker from the University of Georgia, USA, and Prof Marta Kwiatkowska from Oxford. The full paper is available here. The paper proposes a black-box testing algorithm based on the SIFT feature detection technique and Monte Carlo tree search. The algorithm can achieve safety guarantees, and thus also serves as a verification approach.
- (12/2017) Gave talks on "Verification of Deep Learning Systems" in Beijing (Beijing University and Tsinghua University) and Xi'an (Xidian University). Slides are available here.
- (05/2018) We have a new vacancy for a PhD student, in conjunction with IBM, to explore "property-aware learning": how to ensure the correctness of systems with learning-enabled components
(such as deep learning components) by imposing constraints (e.g., safety, security, ethical norms) during the learning process. The advertisement can be found
on findAPhD. The funding is for four years, covering the tuition fee and providing a
stipend of £14,777 per year.
Exceptional international students may apply; please contact me first. The position needs to start before October, so please apply as soon as possible.
- (03/2018) A fully funded PhD position is available. The student will be advised by me and Dr. Wei-Yun Yao at A*STAR, Singapore, on the topic of Explainable Intelligent Robots, and will spend two years each in Liverpool and Singapore. Please feel free to contact me if you are interested. More details can be found here.
- (03/2018) A fully funded PhD position is available. The student will be advised by me and Prof. Shang-Hong Lai at National Tsing Hua University (NTHU), Taiwan, on the topic of Verified Object Recognition and Manipulation, and will spend two years each in Liverpool and Taiwan. Please feel free to contact me if you are interested. More details can be found here.
Publications: list and dblp
Recent Invited Talks, Seminars, and Panel Discussions:
- January 2018, Toulouse, France. Invited panel discussion on how machine learning techniques could be used (or not) for safety-critical applications, organised by ONERA The French Aerospace Lab and Airbus. The 9th European Congress on Embedded Real Time Software and Systems (ERTS 2018). https://www.erts2018.org/
- April 2018, Thessaloniki, Greece. Verification of Deep Neural Networks. Invited Talk to the ETAPS 2018 workshop on formal methods for ML-enabled autonomous systems (FoMLAS2018). https://fomlas2018.fortiss.org
- January 2018, Florida, US. Invited talk and panelist of a session at SciTech2018 on the Interaction of Software Assurance and Risk Assessment Based Operation of Unmanned Aircraft, organised by the American Institute of Aeronautics and Astronautics (AIAA).
- December 2017, Beijing, China. Verification of Robotics and Autonomous Systems. Invited talk at the workshop on the Verification of Large Scale Real-Time Embedded Systems. Slides are available here.
- September 2017, Visegrad, Hungary. Verification of Robotics and Autonomous Systems. Invited talk at the 11th Alpine Verification Meeting (AVM2017). http://avm2017.inf.mit.bme.hu. Slides are available here.
- November 2015, Oxford, UK. Reasoning About Trust in Autonomous Multiagent Systems. University of Oxford.
Funding and Grants
- Defence Science and Technology Laboratory (Dstl) project on "Test Coverage Metrics for Artificial Intelligence", 2018 - 2019, with Simon Maskell, Sven Schewe, and Daniel Kroening
- EPSRC 2018 Vacation Bursary Programme project, 2018
- NSFC project on "Automated Verification and Synthesis of Smart Factories", 2018 - 2021
- KTP Project with Kadfire, 2017-2019
- Lockey Grant, Oxford University, 2015.
Program Committee Memberships
Supervision of Postgraduate Research Students
If you are interested in doing a PhD with me in relevant research areas, please feel free to contact me. The University of Liverpool has a set of established scholarship schemes, including the Liverpool China Scholarship Council award. In addition to these, I may have vacancies from time to time.