The research my group is currently conducting spans machine learning, formal methods, and robotics. If you are interested in these areas and would like to collaborate with us, please feel free to get in touch. Most of my research publications can be found through my Google Scholar profile.
Specifically, we are interested in analysing autonomous systems -- systems that can learn, adapt, and make decisions by themselves -- in terms of their properties (e.g., safety, robustness, trustworthiness, security, etc.), to understand whether they are applicable to safety-critical applications, and in constructing autonomous systems that satisfy these properties. This may include (but is not limited to):
verification of neural network-based deep learning against safety and security properties,
interpretation and explanation of deep learning, and
logic-based approaches for the specification, verification and synthesis of autonomous multi-agent systems.
Currently, the application areas we are addressing include self-driving cars, underwater vehicles, and other robotics applications. We are also interested in various healthcare applications where safety and interpretability are important.
Our research has been funded by Dstl, EPSRC, the European Commission, Innovate UK, and others. I have been the PI (or Liverpool PI) for projects valued at more than £1.9M, and Co-I for projects valued at more than £15M. Some brief information can be found here.
I currently hold the role of school research lead for the EEECS school.
I am always looking for PhD students with strong motivation to actively participate in research. There are a few possible ways of receiving a scholarship, for example:
☆ We are recruiting a postdoctoral researcher. The position will last about 2-3 years, depending on your starting time. The topic will be the verification and interpretation of deep learning. Please find the job advertisement and apply through the links provided, and get in touch if you have any questions.
☆ We are recruiting a PhD student to be funded by a CDT, on "Robust and interpretable graph neural networks for the analysis of MRI and EEG to classify epilepsy subtypes and predict patient outcomes". Please find the advertisement at findAPhD, and feel free to get in touch if you are interested in ML and brain imaging.
☆ We are recruiting a PhD student to be funded by EPSRC through the CDT in Distributed Algorithms. Please find the advertisement at findAPhD, and feel free to get in touch if you are eligible and interested in federated learning.
☆ We are recruiting a postdoctoral researcher for the H2020 project "FOCETA - Foundations for Continuous Engineering of Trustworthy Autonomy". The position will last 3 years. You will collaborate with many research institutes and large companies across Europe. The topic will be the verification and validation of learning-enabled systems. For details, please refer to the job advertisement. This position has been filled.
Workshop Organisation
(11/2022) The SafeAI workshop will be held again with AAAI2023. Please submit your papers through the SafeAI Workshop Website.
(09/2021) Organised a workshop on "Safety Assurance for Deep Learning in Underwater Robotics" (website), with other relevant information available at the SOLITUDE Project Resources website.
(05/2021) The AISafety workshop will be held again with IJCAI2021. Please submit your papers through the AISafety Website.
(08/2020) The SafeAI workshop will be held again with AAAI2021.
(03/2020) The AISafety workshop will be held again with IJCAI2020.
(08/2019) SafeAI will be held again as a workshop at AAAI2020.
(08/2019) Organising the workshop AI&FM2019 at ICFEM2019, to discuss how to make AI and formal methods (and software engineering) mutually beneficial. It will be held on 5th Nov 2019.
Recent News (for all News please go to the News tab)
(02/2023) Paper "Randomized Adversarial Training via Taylor Expansion" accepted to CVPR2023. Congratulations to Gaojie and co-authors.
(01/2023) Paper "Decentralised and Cooperative Control of Multi-Robot Systems through Distributed Optimisation" accepted to AAMAS2023. Congratulations to Yi and co-authors.
(12/2022) Our textbook "Machine Learning Safety" will be published in December 2022.
(11/2022) Paper "Towards Verifying the Geometric Robustness of Large-scale Neural Networks" accepted to AAAI2023. Congratulations to all co-authors.
(10/2022) With Xingyu Zhao and Yi Dong, we have been awarded a project in the privacy-enhancing technologies (PETs) challenge launched by the UK and US governments. We are developing a federated/distributed learning algorithm that takes into account scalability (i.e., the number of users), privacy, accuracy, communication complexity, and efficiency, and will apply it to two applications, on financial crimes and COVID healthcare, respectively. A minimal sketch of the federated-averaging core of such an algorithm is given below.
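For illustration, here is a minimal federated-averaging sketch in Python (illustrative only, not the project's actual algorithm -- in particular, it does not yet address privacy or communication cost; the model and client data loaders are placeholders):

```python
# Minimal federated averaging (FedAvg) sketch -- illustrative only; the
# project's algorithm additionally addresses privacy, communication cost,
# and scalability, which this sketch does not.
import copy
import torch
import torch.nn.functional as F

def federated_round(global_model, client_loaders, lr=0.01):
    """One communication round: each client trains locally on private data,
    then the server averages the client models weighted by dataset size."""
    states, sizes = [], []
    for loader in client_loaders:               # each client's private data
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for x, y in loader:                     # one local epoch
            opt.zero_grad()
            F.cross_entropy(local(x), y).backward()
            opt.step()
        states.append(local.state_dict())
        sizes.append(len(loader.dataset))
    total = float(sum(sizes))
    avg = {k: sum(s[k].float() * (n / total) for s, n in zip(states, sizes))
           for k in states[0]}                  # size-weighted average
    global_model.load_state_dict(avg)
    return global_model
```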
(10/2022) Will give an invited talk at ICFEM2022 (slides).
(07/2022) Our paper "Adversarial Label Poisoning Attack on Graph Neural Networks via Label Propagation" was accepted to ECCV this year. Congratulations to Ganlin and all co-authors.
(06/2022) Our papers "Dependability Analysis of Deep Reinforcement Learning based Robotics and Autonomous Systems" and "STUN: Self-Teaching Uncertainty Estimation for Place Recognition" were accepted to IROS this year. Congratulations to Yi, Kaiwen, and all co-authors.
(06/2022) Gave an invited talk on "Is Deep Learning Certifiable at all?" at the TAI-RM2022 workshop and at the SAE G-34/EUROCAE WG-114 Technical Talk.
(03/2022) Congratulations to Gaojie, whose paper "Enhancing Adversarial Training with Second-Order Statistics of Weights" was accepted to CVPR this year.
(03/2022) Gave a talk at Université Grenoble Alpes on "Machine Learning Safety (and Security)".
(10/2021) Congratulations to Yanda, who has three papers published at ICCV2021, IEEE Transactions on Medical Imaging, and MICCAI2021, respectively, on deep learning in healthcare.
(10/2021) Warmest welcome to Mr Yi Qi and Mr Sihao Wu, who have joined the group to start their PhDs.
(08/2021) Delivered a tutorial at IJCAI2021 on "Towards Robust Deep Learning Models: Verification, Falsification, and Rectification" with Wenjie, Elena, and Xinping. Tutorial information is available at: https://tutorial-ijcai.trustai.uk.
(07/2021) Congratulations to Wei, who is one of the winners of the Siemens AI-DA challenge (https://ecosystem.siemens.com/topic/detail/default/33), which concerns how to assess the dependability of machine learning models. Specifically, he won the “most original approach” award; 32 teams from 15 countries participated in the challenge. This work also won the best paper award at AISafety2021, and the paper is available.
(07/2021) One paper accepted by ICCV2021. Congratulations to Yanda.
(07/2021) Our paper "Embedding and Synthesis of Knowledge in Tree Ensemble Classifiers" has been accepted by the Machine Learning journal. Congratulations to Wei and Xingyu.
(05/2021) Congratulations to Xingyu and Wei, whose paper "BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations" has been accepted to UAI2021. This paper develops a Bayesian treatment of the well-known LIME explainable AI method, to address the robustness and consistency issues of explanations: the resulting explanations are not only more accurate but also more robust. A sketch of the underlying idea is given below.
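As a rough illustration of the idea (a simplified sketch, not the paper's exact formulation; the black-box function `f` and the Gaussian perturbation scheme are assumptions), LIME's weighted least-squares surrogate is replaced with a Bayesian linear model whose prior regularises the explanation:

```python
# Simplified BayLIME-style sketch: fit a Bayesian linear surrogate to a
# black-box model around one instance. Illustrative only -- `f`, the
# perturbation scheme, and the kernel width are assumptions.
import numpy as np
from sklearn.linear_model import BayesianRidge

def bayesian_explanation(f, x, num_samples=500, sigma=0.25, seed=0):
    """f: black-box function returning a class probability for a 1-D input x;
    returns the posterior-mean coefficients as feature-importance weights."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=sigma, size=(num_samples, x.size))  # local samples
    y = np.array([f(z) for z in X])
    # Locality kernel: nearby samples count more, as in LIME.
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * sigma ** 2))
    surrogate = BayesianRidge()          # Gaussian prior over coefficients
    surrogate.fit(X, y, sample_weight=w)
    return surrogate.coef_
```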
(05/2021) Congratulations to Wei, whose paper "Coverage Guided Testing for Recurrent Neural Networks" has been accepted to IEEE Transactions on Reliability. This paper develops temporal coverage metrics for the testing of LSTMs.
(11/2020) Will give a tutorial on "Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications" at ICDM2020 with Wenjie Ruan and Xinping Yi. Website: https://tutorial.trustdeeplearning.com
(11/2020) Will start a 3-year H2020 project, "FOCETA - Foundations for Continuous Engineering of Trustworthy Autonomy", as the Liverpool lead (with Prof Sven Schewe).
(10/2020) Started a new project, "SOLITUDE: Safety Argument for Learning-enabled Autonomous Underwater Vehicles", with Xingyu Zhao, Simon Maskell, Sven Schewe, and Sen Wang (Heriot-Watt), on developing safety assurance arguments for autonomous underwater vehicles.
(10/2020) Warmest welcome to Kaiwen Cai, who has just joined the group to start his PhD.
(09/2020) Congratulations to Gaojie Jin! Our paper "How does Weight Correlation Affect Generalisation Ability of Deep Neural Networks?" has been accepted to NeurIPS2020. We study a "correct by construction" question -- how can we train a neural network with good generalisation ability (i.e., reliability)? -- and find that this is possible by tracking and controlling the weight correlation of the trainable parameters during training. Experiments show that the improvement persists from small networks up to large-scale networks such as VGG16. Weight correlation can also be used to predict whether a model generalises well, without using test data, which might not be available in practical scenarios. Please check the paper on arXiv; a minimal sketch of the correlation measure is given below.
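As a rough illustration (a simplified sketch, not the exact measure used in the paper), the correlation of one fully-connected layer can be computed as the average absolute cosine similarity between its neurons' weight vectors, and penalised during training:

```python
# Simplified weight-correlation sketch for one fully-connected layer --
# illustrative only, not the exact measure used in the NeurIPS2020 paper.
import torch
import torch.nn.functional as F

def weight_correlation(w: torch.Tensor) -> torch.Tensor:
    """w: (out_features, in_features) weight matrix. Returns the average
    absolute cosine similarity between distinct rows (neuron weight vectors)."""
    wn = F.normalize(w, dim=1)            # unit-norm weight vector per neuron
    cos = wn @ wn.t()                     # pairwise cosine similarities
    n = w.shape[0]
    off_diag = cos.abs().sum() - n        # drop the diagonal (each entry is 1)
    return off_diag / (n * (n - 1))

# e.g. add `lam * sum(weight_correlation(p) for p in fc_weights)` to the loss
# to discourage highly correlated neurons during training.
```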
(09/2020) Will start an EPSRC project, "EnnCore: End-to-End Conceptual Guarding of Neural Architectures", as the Liverpool lead, under the EPSRC call "Security for all in an AI enabled society".
(08/2020) Our paper "Generalizing Universal Adversarial Attacks Beyond Additive Perturbations" has been accepted to ICDM2020.
(08/2020) Our paper "PRODEEP: a platform for robustness verification of deep neural networks" has been accepted to ESEC/FSE2020.
(08/2020) Will give lectures on the verification of neural networks at the Marktoberdorf 2020 Summer School ("Safety and Security of Software Systems: Logics, Proofs, Applications").
(07/2020) Our paper "Lightweight Statistical Explanations for Deep Neural Networks" has been accepted to ECCV2020.
(07/2020) Our paper "Regression of Instance Boundary by Aggregated CNN and GCN" has been accepted to ECCV2020.
(06/2020) Congratulations to Wei Huang! Our paper "Practical Verification of Neural Network Enabled State Estimation System for Robotics" has been accepted to IROS2020.
(05/2020) Our survey paper "A Survey of Safety and Trustworthiness of Deep Neural Networks" has been accepted to the journal Computer Science Review. Its current arXiv version is here.
(05/2020) A new paper, "CNN-GCN Aggregation Enabled Boundary Regression for Biomedical Image Segmentation", has been accepted to MICCAI2020.
(05/2020) Will give an invited talk at the University of Exeter.
(04/2020) Our paper "Safety Framework for Critical Systems Utilising Deep Neural Networks" has been accepted to SAFECOMP2020. In this paper, we begin our investigation into safety assurance cases for deep learning via Bayesian inference, with evidence collected from the verification and testing of neural networks.
(03/2020) Will give an invited talk at MMB2020 on "Safety Certification of Deep Learning".
(01/2020) Our paper "Reliability Validation of Learning Enabled Vehicle Tracking" has been accepted to ICRA2020. For the first time, we discovered that there are risks in the interaction between a deep learning component and other traditional, symbolic components, such as Kalman filter-based dynamic vehicle tracking.
(01/2020) Started a project to work with MBDA & WSTC on "Adaptive & reactive mission execution".
(10/2019) We have a 4-year PhD studentship funded by Dstl to work on "Risk-Aware Shared Autonomy for Robotic Inspection Missions in Adversarial Environments". Please get in touch ASAP if you are interested in the position.
(09/2019) Welcome to Dengyu Wu, who joins the team to start his PhD.
(07/2019) Our very first paper on testing DNNs has finally been accepted to EMSOFT2019 and ACM Transactions on Embedded Computing Systems (TECS). The paper can be found on arXiv. Before publication, it had already been cited 40+ times. :-)
(07/2019) Our abstract interpretation-based DNN verification paper "Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification" has finally been accepted (though conditionally :-)) to SAS2019. It combines symbolic propagation with abstract interpretation to achieve good performance; a toy illustration of the interval baseline it improves on is given below. A preliminary version is available on arXiv.
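To give a flavour of the abstract-interpretation baseline that symbolic propagation sharpens (a toy sketch, not the paper's implementation), here is sound interval propagation through one affine layer followed by a ReLU:

```python
# Toy interval (box) propagation through one affine + ReLU layer.
# Illustrative of the abstract-interpretation baseline only; the paper's
# symbolic propagation keeps input-variable relations to tighten these bounds.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] soundly through x -> W x + b."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def interval_relu(lo, hi):
    """ReLU is monotone, so the box is simply clipped at zero."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)
```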
(06/2019) A paper "Towards Integrating Formal Verification of Autonomous Robots with Battery Prognostics and Health Management", joint with colleagues from Heriot-Watt University, is accepted to SEFM2019.
(06/2019) Awarded another project with Dstl on Test Metrics for Artificial Intelligence.
(06/2019) Our paper "Gaze-based Intention Anticipation over Driving Manoeuvres in Semi-Autonomous Vehicles" has been accepted to IROS2019, another major conference held in Macau this year, besides IJCAI.
(06/2019) Warmest welcome to Peipei Xu, who joins the team to start her PhD.
(06/2019) Finally received my HEA (Higher Education Academy) Fellow certificate.
(05/2019) Our journal paper "A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees" has been accepted to Theoretical Computer Science (TCS). The paper is a significantly extended version of our TACAS2018 paper. The newest version is available on arXiv.
(05/2019) Our paper on "Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees" has been accepted to IJCAI2019.
(05/2019) Our journal paper "Reasoning about Cognitive Trust in Stochastic Multiagent Systems" has been accepted to the ACM Transactions on Computational Logic (ToCL). The paper, an extended version of our AAAI2017 paper of the same title, provides a foundational framework for reasoning about trust between humans and AI in a dynamic system. The paper is co-authored with Maciek Olejnik and Marta Kwiatkowska, who invested a lot of effort in it.
(03/2019) Our paper "Analyzing Deep Neural Networks with Symbolic Propagation: Towards Higher Precision and Faster Verification" is available on arXiv. It improves abstract interpretation with symbolic propagation for the verification of DNNs.
(03/2019) Francesco Crecchi will visit from March to May.
(12/2018) Our tool paper for DeepConcolic has been accepted to ICSE2019.
(11/2018) Taking up the role of Undergraduate Admissions Tutor for the CS department. I will be overseeing the entry requirements of the various UG programmes available in the CS Dept. for the next few years.
(10/2018) Joined the ORCA (Offshore Robotics for Certification of Assets) Hub as a Co-Investigator at Liverpool. I will work on the topic "Robot and Asset Self-Certification".
(10/2018) Warm welcome to Ms Emese Thamo, who joins the group to start her PhD.
(09/2018) Slides of my talk at Nanjing University can be found here.
(07/2018) Slides of my talk at Imperial can be found here.
(07/2018) Our paper "A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees" is available on arXiv. The paper summarises our efforts on using Lipschitz continuity and two-player games to achieve verification; the provable guarantee relies solely on the Lipschitz constants. In addition to the maximum safe radius problem, studied as a cooperative game, we also studied the feature robustness problem as a competitive game. A sketch of the Lipschitz argument is given below.
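The Lipschitz argument at the heart of the approach can be sketched in a few lines (illustrative only, assuming a known Lipschitz constant; the paper computes much tighter bounds via the game formulation): if the classification margin at x is m(x) and the margin function is L-Lipschitz, then no perturbation of norm below m(x)/L can change the predicted class:

```python
# Simplified Lipschitz-based lower bound on the safe radius -- illustrative
# only; the paper's game-based algorithm computes much tighter bounds.
import numpy as np

def safe_radius_lower_bound(logits: np.ndarray, lipschitz_const: float) -> float:
    """logits: network outputs at input x; lipschitz_const: an (assumed known)
    Lipschitz constant L of the margin function. Any perturbation with norm
    below margin/L provably keeps the predicted class unchanged."""
    top = int(np.argmax(logits))
    runner_up = np.max(np.delete(logits, top))
    margin = logits[top] - runner_up      # distance from the decision boundary
    return margin / lipschitz_const
```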
(07/2018) Our new paper "Concolic Testing for Deep Neural Networks", co-authored with Youcheng Sun, Min Wu, Wenjie Ruan, Marta Kwiatkowska, and Daniel Kroening, is available on arXiv. The paper has been conditionally accepted to ASE2018, and we will post an updated version very soon.
(06/2018) Our journal paper "An Epistemic Strategy Logic", co-authored with Ron van der Meyden, has been accepted to the ACM Transactions on Computational Logic (TOCL).
(04/2018) Gave an invited talk on "Verification and Testing of Deep Learning" at the ETAPS workshop on "Formal Methods for ML-Enabled Autonomous Systems" (FOMLAS2018).
(04/2018) Our new paper "Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for L0 Norm" is available on arXiv.
(04/2018) Our paper "Reachability Analysis of Deep Neural Networks with Provable Guarantees" has been accepted to IJCAI2018. The paper is co-authored with Wenjie Ruan (Oxford) and Marta Kwiatkowska (Oxford). We present a novel, global optimisation-based approach to study the properties (e.g., robustness, output range, safety, etc.) of deep neural networks with provable guarantees. Unlike constraint-based approaches, our approach is not affected by the internal structure of the networks, and is therefore able to handle state-of-the-art networks. The paper is available on arXiv.
(04/2018) Our paper "Model Checking Probabilistic Epistemic Logic for Probabilistic Multiagent Systems" has been accepted to IJCAI2018. The paper is co-authored with Chen Fu (CAS), Andrea Turrini (CAS), Lei Song, Yuan Feng (UTS), and Lijun Zhang (CAS). It presents a model checking algorithm (and its tool) for probabilistic multiagent systems with respect to a probabilistic epistemic logic, PETL.
(04/2018) Got a Defence Science and Technology Laboratory (Dstl) project on "Test Coverage Metrics for Artificial Intelligence", which will run from 2018 to 2019. We are going to explore software testing techniques for the safety of deep neural networks.
(03/2018) Got an EPSRC 2018 Vacation Bursary Programme project, to support student Edward Barker on "Explaining Deep Learning-based Image Classification".
(03/2018) Our new paper "Testing Deep Neural Networks" is available on arXiv. It is co-authored with Dr Youcheng Sun and Prof Daniel Kroening from Oxford. The paper proposes a set of white-box test coverage criteria and test case generation algorithms for DNNs; a sketch of the simplest flavour of such coverage is given below. It is a big step towards developing software testing approaches for systems with deep learning components.
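For flavour only: the paper proposes MC/DC-style structural criteria, but the simplest metric in this space is plain neuron coverage, sketched here (an illustration, not the paper's criteria; the activation threshold and the restriction to ReLU modules are assumptions):

```python
# Plain neuron-coverage sketch (an illustration of coverage-guided DNN testing;
# the arXiv paper proposes stronger, MC/DC-style criteria, not this metric).
import torch

def neuron_coverage(model: torch.nn.Module, test_inputs: torch.Tensor,
                    threshold: float = 0.0) -> float:
    """Fraction of ReLU outputs exceeding `threshold` on >= 1 test input."""
    covered = {}
    hooks = []

    def make_hook(name):
        def hook(_module, _inp, out):
            fired = (out > threshold).flatten(1).any(dim=0)   # per-neuron flag
            prev = covered.get(name, torch.zeros_like(fired))
            covered[name] = prev | fired
        return hook

    for name, module in model.named_modules():
        if isinstance(module, torch.nn.ReLU):
            hooks.append(module.register_forward_hook(make_hook(name)))
    with torch.no_grad():
        model(test_inputs)
    for h in hooks:
        h.remove()
    total = sum(v.numel() for v in covered.values())
    return sum(int(v.sum()) for v in covered.values()) / max(total, 1)
```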
(01/2018) Gave a short talk and served as a panelist at ERTS 2018 in Toulouse, France. The panel was organised by Airbus.
(12/2017) Our paper "Feature Guided Black-box Testing of Deep Neural Networks" has been accepted to TACAS2018. It is co-authored with Matthew Wicker from the University of Georgia, USA, and Prof Marta Kwiatkowska from Oxford. The full paper is available here. The paper proposes a black-box testing algorithm based on the SIFT object detection technique and Monte Carlo tree search. The algorithm can achieve guarantees on safety, and is thus also a verification approach.
(12/2017) Gave talks on "Verification of Deep Learning Systems" in Beijing (Beijing University and Tsinghua University) and Xi'an (Xidian University). Slides are available here.
Software:
TrustAI: a tool set for the safety and trustworthiness of systems with deep learning components.
Projects:
MBDA & WSTC project on "Adaptive & reactive mission execution", PI, with Jason Ralph and Simon Maskell, 2019 - 2020
Defence Science and Technology Laboratory (Dstl) PhD studentship on "Statistical Approach to Assess the Trustworthiness of Robotics and AI", PI, 2020 - 2024
Test Coverage Metrics for Artificial Intelligence -- v2.0.
Recent Invited Talks, Seminars, and Panel Discussions:
(10/2022) Will give an invited talk at ICFEM2022 (slides).
(06/2022) Gave an invited talk on "Is Deep Learning Certifiable at all?" at the TAI-RM2022 workshop and at the SAE G-34/EUROCAE WG-114 Technical Talk.
(03/2022) Gave a talk at Université Grenoble Alpes on "Machine Learning Safety (and Security)".
(08/2021) Delivered a tutorial at IJCAI2021 on "Towards Robust Deep Learning Models: Verification, Falsification, and Rectification" with Wenjie, Elena, and Xinping. Tutorial information is available at: https://tutorial-ijcai.trustai.uk.
(08/2020) Will give lectures on the verification of neural networks at the Marktoberdorf 2020 Summer School ("Safety and Security of Software Systems: Logics, Proofs, Applications").
(05/2020) Will give an invited talk at the University of Exeter.
(03/2020) Will give an invited talk at MMB2020 on "Safety Certification of Deep Learning".
(09/2018) Slides of my talk at Nanjing University can be found here.
(07/2018) Slides of my talk at Imperial can be found here.
(04/2018) Gave an invited talk on "Verification and Testing of Deep Learning" at the ETAPS workshop on "Formal Methods for ML-Enabled Autonomous Systems" (FOMLAS2018).
April 2018, Thessaloniki, Greece. Verification of Deep Neural Networks. Invited talk at the ETAPS 2018 workshop on formal methods for ML-enabled autonomous systems (FoMLAS2018). https://fomlas2018.fortiss.org
January 2018, Toulouse, France. Invited panel discussion on how machine learning techniques could be used (or not) for safety-critical applications, organised by ONERA The French Aerospace Lab and Airbus. The 9th European Congress on Embedded Real Time Software and Systems (ERTS 2018). https://www.erts2018.org/
January 2018, Florida, US. Invited talk and panelist of a session at SciTech2018 on the Interaction of Software Assurance and Risk Assessment Based Operation of Unmanned Aircraft. Organised by the American Institute of Aeronautics and Astronautics (AIAA).
December 2017, Beijing, China. Verification of Robotics and Autonomous Systems. Invited talk at the workshop on the Verification of Large Scale Real-Time Embedded Systems. Slides are available here.
September 2017, Visegrad, Hungary. Verification of Robotics and Autonomous Systems. Invited talk at the 11th Alpine Verification Meeting (AVM2017). http://avm2017.inf.mit.bme.hu. Slides are available here.
November 2015, Oxford, UK. Reasoning About Trust in Autonomous Multiagent Systems. University of Oxford.
Open Positions:
(06/2020) We are recruiting a postdoctoral researcher for the H2020 project on "FOCETA - Foundations for Continuous Engineering of Trustworthy Autonomy". The position will be 3 years. You will collaborate with many research institutes and large companies across Europe. The topic will be on the verification and validation of learning-enabled systems.
(10/2019) Have a 4-year PhD studentship funded by Dstl to work on "Statistical Approach to Assess the Trustworthiness of Robotics and AI". Please get in touch ASAP if you are interested in the position.
(05/2019) We have a 2+ year postdoc position in the area of machine learning, software engineering, and automated verification, working on the topic of "verification and testing of learning-enabled systems". Please find more details here. The position can start any time from now. If you are interested, please feel free to get in touch.
(05/2018) We have a new vacancy for a PhD student, in conjunction with IBM, to explore "property-aware learning": how to ensure the correctness of systems with learning-enabled components (such as deep learning components) by imposing constraints (e.g., safety, security, ethical norms, etc.) during the learning process. The advertisement can be found at findAPhD. The funding is for four years, waiving the tuition fee and providing a stipend of £14,777 per year. Exceptional international students can apply; please contact me first. The position needs to start before October, so please apply ASAP.
(03/2018) A fully-funded PhD position is available. The student will be advised by me and Dr. Wei-Yun Yao at A*STAR, Singapore, on the topic of Explainable Intelligent Robots, and will spend two years each in Liverpool and Singapore. Please feel free to contact me if you are interested. More details can be found here.
(03/2018) A fully-funded PhD position is available. The student will be advised by me and Prof. Shang-Hong Lai at National Tsing Hua University (NTHU), Taiwan, on the topic of Verified Object Recognition and Manipulation, and will spend two years each in Liverpool and Taiwan. Please feel free to contact me if you are interested. More details can be found here.
PhD Students:
Ms Amany Alshareef (03/2019 - ), with Prof. Sven Schewe as co-supervisor. Before coming to Liverpool, Amany obtained an MSc from Ball State University and a BSc from Umm Al-Qura University. Topic: testing deep learning.
Mr Gaojie Jin (03/2019 - ), with Dr. Xinping Yi as co-supervisor. Before coming to Liverpool, Gaojie obtained an MSc from Liverpool University and a BSc from Peking University. Topic: interpretability.
Mr Wei Huang (02/2019 - ), with Prof. Shang-Hong Lai of National Tsing Hua University, Taiwan, as co-supervisor. Before coming to Liverpool, Wei obtained an MSc from Imperial College and a BSc from Xiamen University. Topic: verification and validation of adaptive systems.
Ms Emese Thamo (10/2018 - ), with Dr Yannis Goulermas as co-supervisor. Before coming to Liverpool, Emese obtained a BSc from Cambridge. Topic: improving the safety of deep reinforcement learning algorithms by making them more interpretable.
Visitors:
Dr Chen Zhang, China University of Mining and Technology. 12/2019 - 11/2020
Mr Zhixuan Xu, Renmin University of China. 10/2019 - 10/2020
Mr Francesco Crecchi, University of Pisa, Italy. 04/2019 - 06/2019
Reading Group:
The "Robotics and Artificial Intelligence" reading group holds a weekly meeting at which one member gives a 30-40 minute talk, discussing their own papers, papers from other research groups, or anything else they are interested in. This is followed by a Q&A and discussion session among the group on the topic.
Membership:
Anyone can join on request. If you are interested, please feel free to drop me a message.
Venue:
Due to the lockdown, we are mainly holding this through virtual meetings (please click: Zoom meeting).
Meeting time:
Starting from the week of 24th August, the meeting time has moved to Tuesday 11:00-12:00, London time.
Talk Schedule:
Please refer to the webpage at ACPS lab for the detailed information about the reading group.