A Review of Social Engineering Threats in the Age of AI: Understanding Human Vulnerabilities Through Protection Motivation Theory (PMT)

Authors

  • Noor Aishah Zainiar
  • Siti Nur Edayu Hashim

Abstract

Social engineering attacks are among the most widespread cybersecurity threats today, and they are becoming even more dangerous with advances in Artificial Intelligence (AI). In contrast to traditional cybersecurity attacks that target system faults, social engineering exploits human weaknesses, making it a serious concern for both individuals and organizations. The introduction of AI techniques such as deepfakes, automated phishing, and voice cloning has made these attacks more realistic and harder to detect. Although cybersecurity risks are on the rise, little is known about how people perceive and respond to AI-driven threats. This study applies Protection Motivation Theory (PMT) to explore how individuals perceive the severity of social engineering attacks, their own vulnerability to them, and the response efficacy and self-efficacy of protective measures. Findings show that AI-driven social engineering attacks alter how people think and make decisions, increasing the likelihood of falling victim. The results also underscore the importance of cybersecurity education: for example, training programs built around PMT components can help people gain the skills and confidence needed to recognize and prevent these sophisticated threats. This study contributes to the broader effort of strengthening public and educational outreach to combat the growing threat of rapidly evolving AI-driven social engineering.

Keywords: Social Engineering, Artificial Intelligence (AI) Threats, Cybersecurity Education, Protection Motivation Theory (PMT), Human Factors in Security

Published

2025-10-18