Wenjing Zhang

Assistant Professor
School of Computer Science
Email: 
wzhang25@uoguelph.ca
Phone number: 
(519) 824-4120 ext. 53138
Office: 
REYN 3319
Available positions for grads/undergrads/postdoctoral fellows: 
Yes
Seeking academic or industry partnerships in the area(s) of: 
Security and Privacy in Generative Models, Data Privacy, Federated Learning, Information Theory, Optimization, Reinforcement Learning, Security and Privacy in AI/Machine Learning

Education and Employment Background

Dr. Wenjing Zhang received her Ph.D. in Computer Science from the University of Guelph in February 2024. She joined the School of Computer Science as a tenure-track Assistant Professor in August 2024. From 2016 to 2018, she was a visiting research scholar in the Department of Electrical and Computer Engineering at the University of Arizona, USA. Her research focuses on cybersecurity and Artificial Intelligence (AI)/machine learning, with publications in top-tier venues such as Neural Information Processing Systems (NeurIPS), IEEE Transactions on Information Forensics and Security, IEEE Transactions on Dependable and Secure Computing, Computers & Security, and IEEE Transactions on Communications. In 2025, Dr. Zhang was awarded a five-year Discovery Grant (2025–2030) by the Natural Sciences and Engineering Research Council of Canada (NSERC) to support her research program, titled “Enhancing Security, Privacy, and Fairness in Generative Artificial Intelligence (AI).”


Research Themes 

Wenjing’s research interests center broadly on cybersecurity, Artificial Intelligence (AI), and Machine Learning (ML), with a particular emphasis on interdisciplinary work at the intersection of security, privacy, and machine learning. Her work is twofold: she applies machine learning techniques to study security and privacy, and she develops secure and privacy-preserving machine learning models. Key areas of her research include:

  • Security in AI and ML: Models in AI and ML are increasingly vulnerable to security threats aimed at compromising their functionality. These threats include poisoning attacks that corrupt training data, evasion attacks that use adversarial examples to mislead models, and prompt injection attacks that manipulate inputs to large language models (LLMs). Wenjing’s team is highly interested in developing robust defenses to secure AI and ML models against such threats. Her team’s efforts include designing empirically secure defenses that use ensemble methods to counter both unseen and adaptive attacks, creating provably secure defenses that balance rigorous security guarantees with optimal model performance, and developing defenses against prompt injection attacks, particularly for LLMs.
  • Model Privacy in AI and ML: Model privacy involves protecting a model’s internal parameters to prevent model theft, even when attackers lack direct access to the model’s architecture, parameters, or training data. This protection is crucial for preserving the intellectual property of high-value model providers, who invest heavily in expertise, large datasets, hyperparameter optimization, and computing resources to develop these models. Wenjing’s team works to advance innovative techniques that protect the privacy of both model parameters and outputs. This includes designing training frameworks that enhance the privacy-utility trade-off by leveraging statistical and information-theoretic approaches, creating mechanisms that perturb model predictions to minimize information leakage, and implementing collaborative privacy-preserving training techniques that enable multiple stakeholders to train and use models without sacrificing privacy.
  • Data Privacy in AI and ML: AI and ML models trained on real-world data can inadvertently memorize and reproduce sensitive information, thereby compromising the privacy of both training and testing data. To address this challenge, Wenjing’s team focuses on leveraging generative models to develop synthetic data generation algorithms that minimize privacy leakage while accurately replicating the statistical properties of real data. To further enhance the utility of synthetic data, her team designs reinforcement learning-based feature selection algorithms that ensure the fidelity and diversity of synthetic training datasets. Additionally, Wenjing’s team is interested in developing privacy-preserving prompt engineering techniques for LLMs to protect testing data while improving inference quality and effectiveness.


Highlights

  • Major Funding: NSERC Discovery Grant, “Enhancing Security, Privacy, and Fairness in Generative Artificial Intelligence (AI),” Natural Sciences and Engineering Research Council of Canada, 2025–2030
  • Awards: Westin Scholar Award, International Association of Privacy Professionals (IAPP), 2022
  • Professional Affiliations: Technical Program Committee Member for the IEEE Conference on Communications and Network Security (IEEE CNS 2025), the IEEE International Conference on Communications (IEEE ICC 2025), and the IEEE International Conference on Computing, Networking and Communications (IEEE ICNC 2025)


Media Coverage