Dr. Soteris Demetriou
Dr. Soteris Demetriou is an Associate Professor of Computer Systems Security at Imperial College London and Director of the Applications, Platforms, and Systems Security (APSS) lab, as well as Imperial’s Academic Centre of Excellence in Cyber Security Research (ACE-CSR). His research spans the security, privacy, and safety of machine learning and computer systems, uncovering flaws in Android, IoT devices, cloud services, and ML models, and developing tools and systems that strengthen end-user privacy and platform security. His work has appeared in leading venues such as NDSS, CCS, OSDI, and SOSP, earned awards including NDSS Distinguished Paper, and led to patents and collaborations with industry and academia.
Security & Privacy Generative Models Computer Systems
Chengzeng You
Charles is a PhD student at the Applications and Systems Security Lab at Imperial College London (apss@Imperial). He holds a Master's degree from the Chinese Academy of Sciences, where he focused on indoor positioning applications. As a team leader or coordinator, he has completed four national (Chinese) engineering projects and published 10 journal papers, 5 patents, and 1 monograph. As a programmer, he has worked with more than nine programming languages, including C/C++/C# (5 years), HTML (4 years), JavaScript (4 years), Matlab (3 years), Python (2 years), Android (2 years), and Java (1 year). He has leveraged these technical skills to develop service platforms, information management webpages, and indoor positioning applications.
Sensor Spoofing 3D Object Detection
Cenxi Tian
I am currently pursuing a PhD in Computing at Imperial College London as a member of the APSS (Applications, Platforms, and Systems Security) Lab, under the supervision of Dr. Soteris Demetriou. My research focuses on Mobile AI Security, exploring security and privacy challenges in on-device machine learning, particularly with emerging edge computing technologies. I develop methods to analyze interactions between hardware accelerators and ML models on mobile platforms, aiming to contribute to more secure and reliable edge AI systems. When I’m not in the lab, I’m either organizing hiking trips or sharing great food with friends and family.
Side-channel Analysis Mobile Devices
Thanh Hai Le
Works on evaluating and designing safeguards for generative models.
GenAI Safety
Mansi
My research interests lie in advancing AI safety by developing methods to make machine learning models more controllable and aligned with desired outcomes. Currently, I am focused on building an adversarially robust image generation framework for text-to-image diffusion models that supports both conditional and unconditional safety controls through policy-driven mechanisms—without compromising the generative quality. This work draws on concept localization and mechanistic interpretability to enable fine-grained control over model behavior. Additionally, I have investigated attribute leakage in diffusion models, uncovering how sensitive or unintended features can be inferred from generated outputs.
ML/AI Security, Privacy, and Safety Diffusion Models Mechanistic Interpretability