I am now a Reader (Associate Professor++) of Computer Science at the University of Liverpool. Prior to Liverpool, I worked at the University of Oxford, the University of New South Wales, and the Chinese Academy of Sciences.
The research my group is currently conducting spans machine learning, formal methods, and robotics. If you are interested in these areas and would like to collaborate with us, please feel free to get in touch.
Specifically, we are interested in analysing autonomous systems -- systems that can learn, adapt, and make decisions by themselves -- in terms of their properties (e.g., safety, robustness, trustworthiness, and security), to understand whether they are applicable to safety-critical applications, and in constructing autonomous systems that satisfy these properties. This may include (but is not limited to)
- verification of safety and security properties of neural network-based deep learning,
- practical analysis techniques (software testing, safety arguments, certification, etc.) for machine learning,
- interpretation and explanation of deep learning, and
- logic-based approaches for the specification, verification, and synthesis of autonomous multi-agent systems.
Currently, the application areas we are addressing include self-driving cars, underwater vehicles, and other robotics applications. We are also interested in various healthcare applications where safety and interpretability are important.
The research has been funded by Dstl, EPSRC, the European Commission, and others. I have been the PI (or Liverpool PI) on projects valued at more than £1.86M, and a co-I on projects worth more than £15M. I direct the Autonomous Cyber Physical Systems Laboratory (https://cgi.csc.liv.ac.uk/~acps/index.html), which is currently in the Ashton Building and will be relocated to the new Digital Innovation Facility (DIF) Building.
For Prospective Students:
I am always looking for PhD students with strong motivation to actively participate in research. There are several possible ways of receiving a scholarship.
If you have other means of supporting your study, you are also welcome to get in touch.
New Open Positions:
- We are recruiting a PhD student on the topic "Learning-Enabled Human Activity Recognition and Tracking for Healthcare". It will be funded by the University of Liverpool Doctoral Network in Technologies for Healthy Ageing. Please find the advertisement at FindAPhD, and feel free to get in touch if you are eligible and interested.
- We are recruiting a postdoctoral researcher. The position will last about 2-3 years, depending on your start date. The topic will be the verification and interpretation of deep learning. Please find the job advertisement and apply through the links provided. Please get in touch if you have any questions.
- ☆ We are recruiting a PhD student to be funded by a CDT, on "Robust and interpretable graph neural networks for the analysis of MRI and EEG to classify epilepsy subtypes and predict patient outcomes". Please find the advertisement at FindAPhD, and feel free to get in touch if you are interested in ML and brain imaging.
- We are recruiting a PhD student to be funded by EPSRC through the CDT in Distributed Algorithms. Please find the advertisement at FindAPhD, and feel free to get in touch if you are eligible and interested in federated learning.
- We are recruiting a postdoctoral researcher for the H2020 project "FOCETA - Foundations for Continuous Engineering of Trustworthy Autonomy". The position will last 3 years. You will collaborate with many research institutes and large companies across Europe. The topic will be the verification and validation of learning-enabled systems. For details, please refer to the job advertisement. This position has been filled.
- (05/2021) The AISafety workshop will be held again with IJCAI2021. Please submit your papers through the AISafety website.
- (08/2020) The SafeAI workshop will be held again with AAAI2021.
- (03/2020) The AISafety workshop will be held again with IJCAI2020.
- (08/2019) SafeAI will be held again as a workshop of AAAI2020.
- (08/2019) Organising the workshop AI&FM2019 at ICFEM2019, to discuss how to make AI and formal methods (and software engineering) mutually beneficial. It will be held on 5 November 2019.
- (02/2019) SafeAI will be held again, as a workshop of IJCAI2019, website: https://www.ai-safety.org/.
- (08/2018) Co-organising an AAAI workshop on AI safety (http://www.safeai2019.org).
Recent News (for all news, please go to the News tab)
- (09/2021) Organised a workshop "Safety Assurance for Deep Learning in Underwater Robotics" (saferobot.eventbrite.co.uk)
- (08/2021) Delivered a tutorial at IJCAI'2021 on "Towards Robust Deep Learning Models: Verification, Falsification, and Rectification" with Wenjie, Elena, and Xinping. Tutorial information is available at https://tutorial-ijcai.trustai.uk.
- (07/2021) ☆ Congratulations to Wei, who is one of the winners of the Siemens AI-DA challenge (https://ecosystem.siemens.com/topic/detail/default/33), which concerns how to assess the dependability of machine learning models. Specifically, he won the "most original approach" award; 32 teams from 15 countries participated in the challenge. This work also won the best paper award at AISafety2021; the paper is available.
- (07/2021) One paper accepted by ICCV2021. Congratulations to Yanda.
- (07/2021) Our paper "Embedding and Synthesis of Knowledge in Tree Ensemble Classifiers" has been accepted by Machine Learning journal. Congratulations to Wei and Xingyu.
- (05/2021) Gave a talk to the Center for Perspicuous Computing (CEPC) colloquium.
- (06/2021) ☆ Congratulations to Xingyu, who was offered a lectureship position in the department.
- (05/2021) Gave a talk on "safety and reliability of deep learning" at VARS'21 (https://hycodev.com/VARS2021/).
- (05/2021) Congratulations to Xingyu and Wei, whose paper "BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations" has been accepted to UAI2021. This paper develops a Bayesian extension of the well-known LIME explainable-AI method to address the robustness and consistency of explanations. The resulting explanations are not only more accurate but also more robust.
- (05/2021) Congratulations to Wei, whose paper "Coverage Guided Testing for Recurrent Neural Networks" has been accepted to IEEE Transactions on Reliability. This paper develops temporal coverage metrics for the testing of LSTMs.
- (11/2020) Going to give a tutorial on "Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications" at ICDM2020 with Wenjie Ruan and Xinping Yi. Website: https://tutorial.trustdeeplearning.com
- (10/2020) Started a new project, "SOLITUDE: Safety Argument for Learning-enabled Autonomous Underwater Vehicles", with Xingyu Zhao, Simon Maskell, Sven Schewe, and Sen Wang (Heriot-Watt), on developing a safety assurance argument for autonomous underwater vehicles.
- (09/2020) Congratulations to Gaojie Jin! The paper "How does Weight Correlation Affect Generalisation Ability of Deep Neural Networks?" has been accepted to NeurIPS2020. We study a "correct by construction" question: how can we train a neural network with good generalisation ability (i.e., reliability)? We find that this is possible by tracking and controlling the weight correlation over the trainable parameters during training. Experiments show that the improvement persists across small networks and large-scale networks such as VGG16. The weight correlation can also be used to predict whether a model generalises well without using test data, which might not be available in practical scenarios. Please find the paper on arXiv.
- (08/2020) Our paper "Generalizing Universal Adversarial Attacks Beyond Additive Perturbations" has been accepted to ICDM2020.
- (08/2020) Our paper "PRODEEP: a platform for robustness verification of deep neural networks" has been accepted to ESEC/FSE2020.
- (07/2020) Our paper "Lightweight Statistical Explanations for Deep Neural Networks" has been accepted to ECCV2020.
- (07/2020) Our paper "Regression of Instance Boundary by Aggregated CNN and GCN" has been accepted to ECCV2020.
- (06/2020) Congratulations to Wei Huang! Our paper "Practical Verification of Neural Network Enabled State Estimation System for Robotics" has been accepted to IROS2020.
- (05/2020) Our survey paper "A Survey of Safety and Trustworthiness of Deep Neural Networks" has been accepted by the journal Computer Science Review. Its current arXiv version is here.
Teaching for this semester