AI infrastructures are a national security and human safety issue, Mason professor says


A team of George Mason University researchers, led by Distinguished University Professor J.P. Singh, has received a $1.4 million grant from the Department of Defense to examine the way countries are implementing their national artificial intelligence infrastructure strategies.

Specifically, said Singh, who works out of Mason’s Schar School of Policy and Government, his Minerva Project wants to understand “how preferences or interests from society, business, or other government actors shape policy in terms of what countries are doing with their national AI infrastructures.”

“Many countries have official national AI strategies, and they’re usually announced by the government,” Singh said. “But it’s unclear at whose behest those policies arise.”

Why is this research important?

AI systems are built, at the ground level, on data and on the way those data are collected, because a machine can only learn from the data that goes in. The question is, whose data? If this is facial recognition, whose data went into it? If it’s mostly men’s data, women may be excluded; certain ethnicities might be excluded. If we’re going to have an AI system that detects breast cancer, whose regional data went into that, from what kinds of groups? That’s why there’s that famous saying: garbage in, garbage out.
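The point above can be sketched in a few lines of code. This is a toy illustration, not any real system: a one-dimensional “classifier” is trained on data that is 95% group A, and its accuracy is then measured separately on group A and on the underrepresented group B, whose feature distribution is shifted. All distributions, group names, and numbers are invented for the demonstration.

```python
# Toy illustration of "garbage in, garbage out": a decision rule fit
# to data dominated by one group performs worse on an underrepresented
# group whose data looks different. Everything here is synthetic.
import random

random.seed(42)

def make_data(group, n_pos, n_neg):
    # Hypothetical 1-D feature; group B's distribution is shifted,
    # so a rule learned mostly from group A transfers poorly.
    shift = 0.0 if group == "A" else 1.5
    data = [(random.gauss(2.0 + shift, 1.0), 1) for _ in range(n_pos)]
    data += [(random.gauss(0.0 + shift, 1.0), 0) for _ in range(n_neg)]
    return data

# Training set: 95% group A, 5% group B.
train = make_data("A", 475, 475) + make_data("B", 25, 25)

# "Learn" the simplest possible rule: a threshold halfway between the
# mean feature value of positives and of negatives.
pos_mean = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
neg_mean = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
threshold = (pos_mean + neg_mean) / 2

def accuracy(data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

test_a = make_data("A", 500, 500)
test_b = make_data("B", 500, 500)
print(f"accuracy on group A: {accuracy(test_a):.2f}")
print(f"accuracy on group B: {accuracy(test_b):.2f}")
```

Because the threshold is fit almost entirely to group A’s distribution, accuracy on group B comes out noticeably lower; no amount of cleverness in the rule fixes a training set that leaves a group out.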

Why is this a national security issue?

At the end of the day, whatever we do as human beings relates to how secure we are, so the way AI infrastructures evolve in a country can enhance security: you are able to surveil populations around the world and also stop intrusions on cyber infrastructures. That’s very much related to military-type security. But we are also examining security at a different level. What does it mean to be secure as a human being?

What does it mean to have human security?

Let’s imagine Society A. Militarily it might be secure, but people who belong to groups that, in practice, have fewer rights are not so secure. In India’s case, these may be lower-caste groups. In several Middle Eastern countries, these may be women. In the United States, unfortunately, it may be minority groups. So we’re thinking about what security means for these groups. What does it mean for them to be represented in the data that these machine-learning systems run on? Security in an AI sense would mean they are also represented.

What are the consequences for being in groups whose data is not being represented?

There may be people in the developing world with tropical diseases. But the sophisticated health systems being developed in the Global North may not have enough data about diseases in the Global South, such as blindness from smoke (people there may have no choice but to burn wood or coal), diarrhea, tuberculosis, or smallpox.

Yet you also call this data repository a double-edged sword.

You may not want your data to be out there. The consequence is that we need governance systems that guard against people’s data being exchanged freely. Right now, there’s a huge battle between the U.S. and the European Union about how data that sits in the cloud can be exchanged. In the U.S., by and large, whoever collects the data can then exchange it, as long as the person gave informed consent when the data was collected. The European Union’s position has been that every time the data is exchanged, an additional set of constraints must be met.
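The contrast between the two regimes can be expressed as two small rules. This is a deliberately simplified caricature of the positions as summarized in the interview, not a rendering of actual U.S. or EU law; the field names and functions are invented for illustration.

```python
# Toy model of the two data-exchange regimes described above.
# "US-style": consent obtained once, at collection, covers later exchanges.
# "EU-style": each additional exchange requires an additional approval.
# This is an illustrative simplification, not legal advice.
from dataclasses import dataclass

@dataclass
class Record:
    consent_at_collection: bool   # given once, when the data was collected
    exchange_approvals: int = 0   # approvals obtained for onward transfers
    exchanges: int = 0            # how many times the data has changed hands

def may_exchange_us(r: Record) -> bool:
    # Consent at collection is enough for subsequent exchanges.
    return r.consent_at_collection

def may_exchange_eu(r: Record) -> bool:
    # Every further exchange needs its own approval on top of
    # the original consent.
    return r.consent_at_collection and r.exchange_approvals > r.exchanges

r = Record(consent_at_collection=True)
print(may_exchange_us(r))   # True: the original consent carries over
print(may_exchange_eu(r))   # False: no approval exists for this new transfer
```

The asymmetry is the whole point: under the first rule a record, once consented, can circulate indefinitely; under the second, each hop is a fresh decision point where constraints can be imposed.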

What is your bottom line?

Humanity travels along several roads. We may have done ships, we may have done railways, we may have done roads. In the 21st century, our road is an artificial intelligence infrastructure. It’s very important to know whether we can travel down that road. Just as in the past some people could not get on a train even if it passed through their village, we now have to figure out whether everybody is on the train of the artificial intelligence infrastructure, and whether it is safe for them to ride it.