Our research uncovers critical privacy risks in text-to-image diffusion models, showing that sensitive attributes such as authorship style and even dementia-related speech markers can leak into generated images. We demonstrate that adversaries can infer these attributes with high accuracy, even from images alone, highlighting threats of unauthorized profiling and discrimination. By developing new adversarial models, multimodal embedding analyses, and explainability-driven metrics, we expose how generative AI transforms and propagates sensitive information, underscoring the urgent need for privacy-preserving safeguards in next-generation AI systems. In response, we spearhead the design and evaluation of foundational safeguards.
Projects

Security and Privacy in Machine Learning and CPS
Study of the security and privacy of machine learning models and other applications in IoT and cyber-physical systems.

Mobile, IoT, and Web Security
Study of adversarial capabilities and development of novel defense strategies for smartphone and IoT systems.

Authentication, Authorization and Access Control
Study of effective and efficient authentication, authorization, and access control mechanisms.