Researcher develops defenses against federated graph learning backdoor attacks

Jess Goode, Chief of Staff and Vice President for Strategy | Illinois Institute of Technology

An Illinois Tech researcher has been recognized with a Best Paper Award at the Association for Computing Machinery’s Conference on Computer and Communications Security (CCS). The award was given for work on developing a defense against backdoor attacks in federated graph learning (FedGL), a framework that allows multiple users to train a shared model while maintaining data privacy.

Assistant Professor Binghui Wang and his team received the accolade in the Artificial Intelligence Security and Privacy Track for their paper titled “Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses.” The conference, organized by the ACM’s Special Interest Group on Security, has an acceptance rate of about 20 percent after peer review.

Wang highlighted the significance of their work by stating, “What excites me most about this project is how it masterfully bridges the gap between deep theoretical rigor and practical accessibility. The provable defense mechanism is both elegant in its mathematical foundation and effective in real-world applications—while remaining comprehensible to the general public.”

The research introduces an attack called the optimized distributed graph backdoor attack (Opt-GDBA), which embeds malicious triggers into training graph data. The technique achieved a 90 percent success rate across different datasets. Wang explained, “The Opt-GDBA is an optimized and learnable attack that considers all aspects of FedGL, including the graph data’s structure, the node features, and the unique clients’ information.”
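To give a concrete picture of what a graph backdoor looks like, the sketch below shows, in highly simplified form, how a trigger might be stitched into a training graph: a small, densely connected subgraph is attached to a few existing nodes, and the poisoned graph is relabeled to the attacker’s target class. This is an illustrative fixed-trigger example built with the networkx library, not the Opt-GDBA attack itself, which learns the trigger’s structure, node features, and per-client placement.

```python
import random
import networkx as nx

def inject_backdoor_trigger(graph, trigger_size=4, target_label=0, seed=42):
    """Attach a small, densely connected trigger subgraph to a few existing
    nodes and relabel the graph to the attacker's target class.

    Illustrative only: Opt-GDBA learns the trigger's structure, features, and
    placement for each client, whereas this sketch uses a fixed trigger."""
    rng = random.Random(seed)
    poisoned = graph.copy()

    # Create trigger nodes with IDs that do not collide with existing ones.
    start = max(poisoned.nodes) + 1
    trigger_nodes = list(range(start, start + trigger_size))
    poisoned.add_nodes_from(trigger_nodes)

    # Densely connect the trigger nodes to form a distinctive pattern.
    for i, u in enumerate(trigger_nodes):
        for v in trigger_nodes[i + 1:]:
            poisoned.add_edge(u, v)

    # Attach each trigger node to a randomly chosen node of the original graph.
    anchors = rng.sample(list(graph.nodes), k=min(trigger_size, graph.number_of_nodes()))
    for u, v in zip(trigger_nodes, anchors):
        poisoned.add_edge(u, v)

    # The poisoned training example carries the attacker's chosen label.
    return poisoned, target_label

# Example: poison a small random graph.
clean = nx.erdos_renyi_graph(n=20, p=0.2, seed=1)
poisoned_graph, label = inject_backdoor_trigger(clean)
print(poisoned_graph.number_of_nodes(), poisoned_graph.number_of_edges(), label)
```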

To counteract this new threat, Wang’s team developed a provable defense mechanism that breaks incoming graph data into smaller pieces to detect suspicious elements. The defense blocked every Opt-GDBA attack while leaving more than 90 percent of legitimate data intact.
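The paper’s certified defense is more involved, but its core intuition can be sketched simply: divide each incoming graph into disjoint pieces, classify each piece independently, and take a majority vote, so that a trigger confined to a handful of nodes can sway only a bounded number of votes. The snippet below is a minimal illustration of that idea using networkx; the classify_subgraph function and the hash-based grouping are placeholders, not the authors’ actual construction.

```python
from collections import Counter
import networkx as nx

def divide_and_vote(graph, classify_subgraph, num_groups=5):
    """Split a graph's nodes into disjoint groups, classify each induced
    subgraph independently, and return the majority-vote label with the
    number of supporting votes. A trigger confined to a few nodes can only
    corrupt the groups that contain those nodes, which is what makes
    vote-margin arguments (and hence provable guarantees) possible."""
    groups = {g: [] for g in range(num_groups)}
    for node in graph.nodes:
        groups[hash(node) % num_groups].append(node)

    votes = [classify_subgraph(graph.subgraph(nodes))
             for nodes in groups.values() if nodes]
    winner, support = Counter(votes).most_common(1)[0]
    return winner, support

# Toy usage with a stand-in "classifier" that labels by subgraph density.
toy = nx.erdos_renyi_graph(n=30, p=0.15, seed=0)
label, support = divide_and_vote(toy, lambda sg: int(nx.density(sg) > 0.2))
print(label, support)
```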

“The most significant challenge was developing a provable defense robust against both known attacks and future unknown threats capable of arbitrarily manipulating graph data,” Wang noted. He credited more than five years of research in AI model defenses for enabling their innovative approach.

Wang collaborated with Yuxin Yang from Jilin University and Illinois Tech; Qiang Li from Jilin University; Jinyuan Jia from Pennsylvania State University; and Yuan Hong from the University of Connecticut.
