
Secure & Private Intelligent Systems Lab

Unmasking the Threats: Privacy and Security Attacks on LLMs

We investigate sensitive-information leakage in LLMs by modeling adversary capabilities and analyzing concrete attack vectors, including data and model extraction and jailbreaks induced by prompts or fine-tuning. Building on these findings, we develop and evaluate defenses, most notably differential privacy and homomorphic encryption, to safeguard training data and the resulting models.

Figure: LLM membership inference workflow
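
A minimal loss-threshold version of such a membership test is sketched below, assuming a Hugging Face causal LM. The gpt2 stand-in model, the candidate string, and the threshold value are all hypothetical; a real attack calibrates the threshold on known member and non-member samples.

    # Loss-threshold membership inference: sequences the model was
    # trained on tend to receive unusually low per-token loss.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def sequence_loss(model, tokenizer, text):
        # Average per-token cross-entropy of `text` under the model.
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, labels=ids)  # HF shifts labels internally
        return out.loss.item()

    def is_likely_member(model, tokenizer, text, threshold=2.5):
        # `threshold` is hypothetical; calibrate it on known samples.
        return sequence_loss(model, tokenizer, text) < threshold

    tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in target model
    lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    print(is_likely_member(lm, tok, "A candidate training sentence."))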
  • Membership and property inference against LLMs
  • Prompt injection, indirect injection, and jailbreak defenses
  • Data exfiltration and model extraction risks
  • Mitigations: red teaming, filtering, DP fine-tuning (see the sketch after this list), and safety policies
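
The DP fine-tuning mentioned above refers to training with per-example gradient clipping and Gaussian noise, as in DP-SGD. The sketch below shows one such update in plain PyTorch; lr, clip_norm, and noise_multiplier are hypothetical values, and production code would use a library such as Opacus with a proper privacy accountant.

    # One DP-SGD update: clip each example's gradient to bound its
    # influence, then add Gaussian noise scaled to the clip norm.
    import torch

    def dp_sgd_step(model, loss_fn, batch_x, batch_y, lr=0.1,
                    clip_norm=1.0, noise_multiplier=1.1):
        params = [p for p in model.parameters() if p.requires_grad]
        summed = [torch.zeros_like(p) for p in params]
        for x, y in zip(batch_x, batch_y):  # per-example gradients
            model.zero_grad()
            loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
            grads = [p.grad.detach().clone() for p in params]
            norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
            scale = min(1.0, clip_norm / (norm + 1e-12))
            for s, g in zip(summed, grads):
                s += g * scale  # clipped contribution
        with torch.no_grad():
            for p, s in zip(params, summed):
                noise = torch.normal(0.0, noise_multiplier * clip_norm,
                                     size=s.shape)
                p -= lr * (s + noise) / len(batch_x)  # noisy mean step

The noise-to-clip-norm ratio, together with the sampling rate and the number of steps, determines the (ε, δ) guarantee through the accountant.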

Privacy and Security for Federated Learning

We analyze privacy and security vulnerabilities in federated learning, including attacks targeting data confidentiality, model integrity, and robustness. Building on this understanding, we design defense mechanisms and secure training systems that protect against data leakage and model manipulation while preserving utility in decentralized environments.
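
A minimal federated-averaging round, written with toy NumPy shapes, shows the surface these attacks and defenses concern: absent protection, the server receives each client's raw update.

    # FedAvg round: client updates reach the server in the clear,
    # which is the surface inversion and poisoning attacks exploit.
    import numpy as np

    def client_update(global_weights, local_grad, lr=0.01):
        # One local gradient step; the result is sent to the server.
        return global_weights - lr * local_grad

    def fedavg(client_models):
        # Server-side average of the unprotected client models.
        return np.mean(client_models, axis=0)

    w = np.zeros(4)  # toy global model
    received = [client_update(w, np.random.randn(4)) for _ in range(3)]
    w = fedavg(received)  # each item in `received` leaks client data
    print(w)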

  • Differential privacy and privacy accounting for FL
  • Secure aggregation and communication-efficient protocols (see the masking sketch after this list)
  • Robustness to poisoning, backdoors, and model inversion
  • Personalization and fairness under privacy constraints
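
The secure aggregation listed above can be sketched with pairwise additive masking in the style of Bonawitz et al.: each pair of clients derives a shared mask that cancels in the server's sum, so the server learns only the aggregate. The seed_for helper below is a toy stand-in for real key agreement, and the sketch ignores client dropouts.

    # Pairwise additive masking: for clients i < j, client i adds a
    # shared mask that client j subtracts, so all masks cancel in the
    # server's sum.
    import numpy as np

    def seed_for(a, b):
        # Toy stand-in for a key-agreement-derived shared seed.
        return hash((min(a, b), max(a, b))) % (2**32)

    def masked_update(update, my_id, peer_ids, dim):
        masked = update.copy()
        for peer in peer_ids:
            rng = np.random.default_rng(seed_for(my_id, peer))
            mask = rng.standard_normal(dim)
            masked += mask if my_id < peer else -mask  # antisymmetric
        return masked

    dim, ids = 4, [0, 1, 2]
    updates = {i: np.ones(dim) * (i + 1) for i in ids}  # toy updates
    masked = [masked_update(updates[i], i,
                            [j for j in ids if j != i], dim)
              for i in ids]
    # Masks cancel; the server recovers only the aggregate.
    print(np.allclose(np.sum(masked, axis=0), sum(updates.values())))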

Past Research

  • Differentially Private Data Publishing and Network Analysis
  • De-Anonymization in Social Networks
  • Performance of Short-Packet Communication Systems