In July, the Mitchell Institute’s recently launched Center for UAV and Autonomy Studies dove headfirst into the national debate on drones and the future of armed conflict when it brought together 40 government and industry experts to game out UAV support for deep-strike missions in a Taiwan Strait conflict scenario. The workshop examined questions that included the role of autonomy and the readiness of the technology for use in Joint All-Domain Command and Control and all-domain operations.
With results from the workshop due in the coming weeks, the center is planning additional efforts on topics such as UAS support to the air-superiority mission and how to introduce more autonomy on these platforms.
Caitlin Lee, the Mitchell Institute’s senior fellow for UAV and Autonomy Studies, says that autonomy has key advantages for operating in contested environments, but warns that the technology also raises policy questions. She discusses those issues in this Q&A.
Lee: One thing we’re clearly seeing is that, unlike the past 20 years of counterinsurgency, in conventional warfare the life of a drone is nasty, brutish, and short. I’ve seen estimates as low as nine days [due to] electromagnetic spectrum competition and the measures/countermeasures fight going on [as well as] a kinetic fight for these drones. It’s a highly contested environment, and it’s really a laboratory and a microcosm. We can draw lessons from this as we in the US look to great power competition.
Lee: There are many unanswered questions with China. They haven’t really seen a conventional conflict in a very long time; they’re relying 100 percent on training to prepare for a conventional conflict scenario. Have we actually seen them reveal all of the capabilities that they have? Probably not.
I would expect in a conflict with China and in the Indo-Pacific that we would encounter contested, degraded, and denied communications. We are going to need to prepare to operate through that.
Lee: There’s no single autonomy framework for understanding how independently a machine operates from a human. But if you start from the principle that it’s the degree of the machine’s independence from human control, then you can start to think about these different levels of autonomy.
I would hearken back to the workshop we did this summer where we gave the participants an autonomy menu, and said, “You’re in this conflict in the Taiwan Strait. How much autonomy do you need?” On the very low end of the spectrum [there’s a] UAV that has a sensor on it that’s doing automatic target recognition, so it knows to look for the S-300 surface-to-air missile system. It knows what it looks like, and it’s going to ping a human somewhere when it sees that.
At the other extreme is platform autonomy, where you task a UAV to go out 200 nautical miles, look for the S-300 surface-to-air missile system, come back, and report out to a human. It will encounter obstacles along the way in a highly contested air environment. It will orient itself in that environment, and it will make decisions to achieve that task.
Taking it one step further, this is where you get into the swarming discussion. This is where you have a number of UAVs that are capable of operating independently of human control to execute those tasks, but they’re also talking to each other and operating in a swarm.
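To make that menu concrete, here is a minimal sketch of the three rungs as they might look in software. The level names and human-role descriptions are illustrative inventions for this article, not an official DoD or Mitchell Institute taxonomy.

```python
# Illustrative only: the rungs of the "autonomy menu" described above,
# from target-recognition alerts up to collaborative swarming.
from enum import Enum, auto


class AutonomyLevel(Enum):
    """Degrees of machine independence from human control (hypothetical names)."""
    ATR_ALERT = auto()            # sensor recognizes a target (e.g., an S-300) and pings a human
    PLATFORM_AUTONOMY = auto()    # UAV navigates, searches, and reports back on its own
    COLLABORATIVE_SWARM = auto()  # multiple UAVs divide tasks among themselves


def human_role(level: AutonomyLevel) -> str:
    """Where the human sits at each rung of the menu."""
    if level is AutonomyLevel.ATR_ALERT:
        return "reviews every detection before anything happens"
    if level is AutonomyLevel.PLATFORM_AUTONOMY:
        return "tasks the mission, reviews the report on return"
    return "sets objectives; the swarm allocates tasks itself"


print(human_role(AutonomyLevel.PLATFORM_AUTONOMY))
```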
Lee: The communications aspect has been brought back in because now the UAVs all need to talk to each other to optimize for their mission. Let’s say it’s to select and engage a surface-to-air missile system. They all need to be communicating to figure out which one is in the best position to do the tracking versus the engaging. If you think about this spectrum of autonomy and where we are today and where we need to get to for the Indo-Pacific, it’s clear that we need to make some progress.
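A toy sketch of that coordination problem appears below. The distance-based rule, the position data, and the role names are all made up for illustration; real collaborative-autonomy software would weigh geometry, sensor state, and weapons status far more carefully.

```python
# Hypothetical sketch: swarm members share positions and decide which
# UAV tracks and which engages a surface-to-air missile site.
import math


def assign_roles(uav_positions: dict[str, tuple[float, float]],
                 target: tuple[float, float]) -> dict[str, str]:
    """Closest UAV engages; next closest tracks; the rest hold."""
    def dist(p: tuple[float, float]) -> float:
        return math.hypot(p[0] - target[0], p[1] - target[1])

    ranked = sorted(uav_positions, key=lambda name: dist(uav_positions[name]))
    roles = {name: "hold" for name in uav_positions}
    if ranked:
        roles[ranked[0]] = "engage"
    if len(ranked) > 1:
        roles[ranked[1]] = "track"
    return roles


# Example: three UAVs, one SAM site at the origin.
print(assign_roles({"uav1": (5.0, 2.0), "uav2": (1.0, 1.0), "uav3": (8.0, 9.0)},
                   target=(0.0, 0.0)))
# -> {'uav1': 'track', 'uav2': 'engage', 'uav3': 'hold'}
```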
When we think about the Reaper, it’s dependent on satellite communications for control from the ground. Clearly, as we see in Ukraine, those links can be vulnerable to jamming. You can always change datalinks, but you always have to worry about the adversary catching up. The existence of the datalink itself is a vulnerability. You can either learn to live with that vulnerability, make that datalink really robust [to] fight that measures/countermeasures fight, or you say, “No, I don’t want to do that. I’m going to allow this machine to make some independent decisions and operate autonomously.”
There’s still going to be the need for communications because those UAVs have to be able to talk to each other if you want collaborative autonomy. You [need] a hard-to-detect datalink that allows that. Maybe you don’t care if that datalink gets jammed. Maybe the autonomy allows the UAV to continue operating. Maybe the UAV is so cheap that if it’s out of the fight, it’s fine.
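The link-loss trade Lee describes can be sketched as a simple decision rule. Every type and field name here is hypothetical, a sketch of the logic rather than any fielded system.

```python
# Hypothetical sketch: what a UAV does when the measures/countermeasures
# fight degrades its datalink.
from dataclasses import dataclass


@dataclass
class UavState:
    link_up: bool           # datalink currently usable?
    mission_complete: bool
    expendable: bool        # cheap enough that losing it is acceptable


def next_action(state: UavState) -> str:
    """Decide behavior under contested, degraded, or denied comms."""
    if state.mission_complete:
        return "return and report"  # humans still need to learn how it went
    if state.link_up:
        return "continue with human supervision"
    # Link jammed or lost: the onboard autonomy decides whether to press on.
    if state.expendable:
        return "press on with the tasked mission"
    return "press on, but weight decisions toward survival"


print(next_action(UavState(link_up=False, mission_complete=False, expendable=True)))
```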
If you accept that there’s not going to be a human in the loop for this target engagement and you’re going to let the UAV make the decision to engage a target, you still need communications to let humans know at some point how it went.
Autonomy is not a complete workaround, but it can reduce reliance on communications in a high-end conflict where you’re worrying about aircraft survival.
The risk goes down if the aircraft is unmanned, but maybe there’s a UAV that has a fair degree of stealth, a very low observable structure, and [a] sophisticated radar, and you actually don’t want it to get shot down. You’d have to think hard about resilient comms or whether to go with an autonomous system. From an operational perspective, autonomy is highly advantageous. From a policy and a technology maturity perspective, there are many open questions.
Lee: DoD policy requires that a certain set of wickets be cleared to build a lethal autonomous weapon, but there’s no prohibition on it. I think there are great advantages to doing so.
The US position on autonomy, as I understand it, is that we want to keep a human in the loop. What I’m suggesting is, from an operational perspective, it is more advantageous to not have a human in the loop if you can ensure that you can minimize or eliminate collateral damage and friendly fire incidents.
Another operational reason why autonomy would be advantageous is that you don’t have to wait for that human in the loop to chime in. The machine can move faster than that. And when the other side is using autonomous systems, that becomes even more important.
But from a policy perspective, it’s very fraught, especially for the US, where we put a premium on minimizing civilian casualties and friendly fire incidents. Just because something is operationally advantageous doesn’t mean we’re going to set aside the ethical issues.
Lee: Some people would assert that it’s actually easier to build autonomy into airframes and air operations because the air environment is so much less cluttered than the ground. Those who are pessimistic about autonomy might look at the self-driving car industry and say, “Look how long it is taking them to get this right.” An optimist might say, “Yes, but they’re dealing in the terrestrial domain.” The air environment is much clearer airspace, so it could be easier to develop autonomy for that environment.
Lee: That’s still very much a live issue; we need those airspace permissions to fly these aircraft more in the US. The main way we’re going to wrestle with these unknowns about autonomy is through prototyping and demonstration. The faster we can get these systems out into the National Airspace System, the more we can experiment with them and the more we can refine the autonomy and artificial intelligence software.