A new framework for assessing risks in artificial intelligence systems has been published by a team including two faculty members from Illinois Institute of Technology. The framework, called STRIFE, offers a comprehensive approach to AI threat modeling by addressing technical, ethical, and legal aspects.
Ann Rangarajan, assistant professor of information technology and management at Illinois Tech, and Saran Ghatak, professor and chair of the Department of Humanities, Arts, and Social Sciences at the university, collaborated with external researchers to develop the methodology. Their work appears in the International Journal of Intelligent Information Technologies.
The STRIFE framework aims to help developers and users identify potential unintended consequences throughout an AI system’s lifecycle. These include risks such as biased outcomes, privacy breaches, psychological harm, facilitation of mass surveillance, and environmental hazards.
“AI systems operate within complex social, organizational, and cultural contexts that fundamentally shape how risks emerge,” said Ghatak. “STRIFE recognizes that threats to AI systems often originate not from technical failures alone, but from the broader ecosystem of human users, institutional policies, and societal expectations.”
The researchers highlight that many threats stem from more than just technical vulnerabilities such as algorithmic bias or adversarial attacks. Ethical concerns such as sustainability and inclusion, as well as legal issues related to intellectual property, also arise from interactions between humans and AI.
According to Rangarajan: “The real innovation of STRIFE lies in its systematic approach to addressing AI threats through domain-specific terminology, which speaks directly to different professional communities. This comprehensive approach ensures that AI threat modeling becomes as fundamental to AI system development as traditional threat modeling has become for conventional software systems.”
STRIFE is structured so it can be applied across disciplines, allowing computer scientists to work alongside social scientists, ethicists, and legal scholars. On the technical dimension it focuses on issues such as safety and transparency; on the ethical dimension, trust and inclusion; and on the legal dimension, matters such as reasonableness standards.
The framework also aligns with existing risk management practices by integrating with the four main functions outlined in the National Institute of Standards and Technology (NIST) AI Risk Management Framework: govern, map, measure, and manage. It introduces a fifth function—mediate—to extend these principles specifically for socio-technical threats posed by AI.
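The article does not describe an implementation, but as a rough illustration of how these pieces fit together, the sketch below models STRIFE’s three threat dimensions and the five lifecycle functions as simple Python enumerations and groups a few of the example risks by dimension. All class names, threat entries, and function assignments here are hypothetical choices for illustration; they are not drawn from the published framework.

```python
from dataclasses import dataclass, field
from enum import Enum


class Function(Enum):
    # The four NIST AI RMF core functions, plus the "mediate" function STRIFE adds
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"
    MEDIATE = "mediate"  # STRIFE's extension for socio-technical threats


class Dimension(Enum):
    # STRIFE's three threat dimensions as described in the article
    TECHNICAL = "technical"  # e.g., safety, transparency
    ETHICAL = "ethical"      # e.g., trust, inclusion, sustainability
    LEGAL = "legal"          # e.g., reasonableness standards, intellectual property


@dataclass
class Threat:
    # A single identified threat and the lifecycle functions that would address it
    name: str
    dimension: Dimension
    functions: list[Function] = field(default_factory=list)


# Hypothetical threat register built from example risks mentioned in the article;
# the function assignments are illustrative guesses, not taken from the paper.
threat_register = [
    Threat("biased outcomes", Dimension.TECHNICAL, [Function.MAP, Function.MEASURE]),
    Threat("privacy breaches", Dimension.LEGAL, [Function.GOVERN, Function.MANAGE]),
    Threat("mass surveillance facilitation", Dimension.ETHICAL,
           [Function.GOVERN, Function.MEDIATE]),
]

# Group threats by dimension so each professional community can review its own slice
by_dimension: dict[Dimension, list[Threat]] = {}
for threat in threat_register:
    by_dimension.setdefault(threat.dimension, []).append(threat)

for dimension, threats in by_dimension.items():
    print(dimension.value, "->", [t.name for t in threats])
```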
Rangarajan explained: “While the NIST AI Risk Management Framework provides essential guidance for trustworthy AI, practitioners often struggle with how to implement such principles in specific contexts. Our framework systematically guides threat identification across technical dimensions such as safety and transparency, ethical considerations including trust and inclusion, and legal factors such as reasonableness and intellectual property, because AI risks emerge from the complex interactions between technology, human behavior, and societal structures.”
