In This Story
Whether you are an experienced software developer, a teen texting on a smartphone, or an older adult checking a bank statement, cybersecurity is part of your life. Humans and computers interact every minute of every day, and cybersecurity is there to keep information safe and actions private. But normal human behavior can compromise safety and privacy.
For the next 12 months, researchers funded by the Commonwealth Cyber Initiative’s (CCI) Northern Virginia Node (NoVa Node) will be exploring the impact of human behavior on cybersecurity systems. Divided into six teams, the researchers will seek to leverage their academic expertise in the social sciences and related fields. The teams include faculty from the Colleges of Engineering and Computing, Humanities and Social Sciences, Education and Human Development, and the School of Business. Each team will explore a different aspect of the problem, aiming to translate its findings into solutions or areas for further investigation that can improve the welfare of Virginians. Faculty from the Department of Cyber Security Engineering are members of two of those teams.
“Impact of Human Behavior in a Mixed Traffic Environment”
PI: Linghan Zhang, CEC; Co-PIs: Nirup Menon, School of Business, and Nupoor Ranade, College of Humanities and Social Sciences (CHSS)
As autonomous vehicles become more prevalent, they increasingly share the road with human-driven vehicles in a mixed traffic environment. In mixed traffic, the behaviors of human drivers are unpredictable and can confuse autonomous vehicles, leading to adverse events for both.
The CCI NoVa Node’s research in autonomous vehicles (AVs) has already garnered attention from vehicle manufacturers such as Ford, Cadillac, and Daimler-Benz. Linghan Zhang and her team aim to extend that research by studying their use in mixed traffic.
According to Zhang, the team’s goal is to reflect on-road driving reality through a multi-vehicle simulation of mixed traffic, using driving conditions that have led to real-world collisions in the past. She says, “Prior research only focuses on a single user’s behavior, and the data collected is mainly limited to surveys and interviews. With objective driving data missing, prior experiments did not reflect on-road driving reality.”
This project could yield valuable data on how human driver behaviors affect other components of mixed driving environments, especially in security- and safety-critical contexts where human errors are inevitable. It could also uncover what humans need to know when driving alongside AVs. The team expects the results to be significant for autonomous vehicle implementation and policymaking.
“Characterizing and Countering User Security Fatigue in Password Enhancement through Deep Learning”
PI: Gerald Matthews, CHSS, George Mason University; Co-PIs: Giuseppe Ateniese and Daniel Barbará, CEC, George Mason University
If you already have a demanding job, you might view maintaining security as an additional burden and fail to keep up with cybersecurity best practices, such as updating or changing your passwords.
Professor Giuseppe Ateniese has designed a tool for enhancing password strength based on a deep learning approach, but psychological factors may limit the tool’s adoption and impact. Everyone can be vulnerable to security fatigue, and lax cybersecurity practices can have major societal consequences—threats to national security, financial losses to individuals and organizations, and invasion of privacy.
When successful, security tools powered by artificial intelligence can counteract typical human fallibilities and promote safety in computer systems across government, industry, and personal use. This project investigates the effect of security fatigue on the use of Ateniese’s tool and explores strategies for mitigating fatigue and supporting user engagement.
The team believes that enhancing employees’ ability and motivation to maintain effective security protocols has immediate economic benefits. The research also has the potential to suggest design features of security tools that can support commercialization, as well as training protocols.