December 26, 2024

Trae Stephens on the Ethics of AI Warfare



Artificial intelligence and machine learning may suddenly seem to be everywhere, but that’s not true in the defense sector, despite the growing ubiquity of drone warfare and the apparently unlimited amount of money the U.S. gives to defense contractors. One company trying to outflank the big defense firms with higher tech is Anduril, which has been selling surveillance, reconnaissance, and counter-drone technologies to the U.S., including a “smart wall” system for the southern border. Last fall, it introduced its first weapon, a drone-based “loitering munition.”

In the latest episode of On With Kara Swisher, Kara grills Anduril co-founder Trae Stephens about the company’s approach to defense and its implications. They also discuss spy balloons, the war in Ukraine, AI bias, and the challenge of cutting China out of the supply chain. As seen in the excerpt below, they also get into Saint Augustine and the ethics of autonomous weapons as well as why Stephens believes big defense contractors are still struggling to innovate.

Journalist Kara Swisher brings the news and newsmakers to you twice a week, on Mondays and Thursdays.


Kara Swisher: So you focus on autonomous systems. Are we talking about military drones? Surveillance? Autonomous systems — how would you explain it?

Trae Stephens: Yeah. There are a lot of drones that have existed since even the Cold War. The plane that ended up being the Predator was actually developed in the ’80s, and these are really remotely piloted systems. So they’re manned, in actuality, by a person with a joystick, you know, even if they’re thousands of miles away. And these are incredibly expensive to manage and maintain. This isn’t like, you know, one pilot can control dozens of aircraft. And so when we talk about autonomy, we’re talking about real autonomy: how do we get a mission manager to control a bunch of assets in a battle space and be able to make decisions very efficiently and effectively, ideally where those effectors, the drones, whether they’re planes or ground vehicles or underwater vehicles, are low-cost enough to be attritable. So you’re not putting people in harm’s way. You’re lowering the cost to the taxpayer if those systems are lost, which gives you a very different strategic approach to engaging in conflict. And ideally, you know, you’re creating an ethical good by making it much less likely that human lives are lost when some activity is necessary.

Kara Swisher: And less complex, presumably. And using technologies that consumers use regularly, a lot of which haven’t moved into defense. So your first piece of tech was Lattice OS. Talk about that. Gathering data, targets — what was it doing?

Trae Stephens: So Lattice is the software that sits behind all of our products. You can kind of think of it as a computer-vision, command, and control platform that has flight controls for aircraft. It has, you know, ground-vehicle controls for ground vehicles. And it takes in all of the sensor data, fuses that data, and then helps the system make decisions about that data, with a human guiding that interaction over time. So Lattice is present in everything that we build.

Kara Swisher: Mm-hmm. And how much is the tech doing versus a person? You said you had one person running a drone — and again, we’ve seen that, whether it’s depicted in movies or whatever — what is the role of the person when you’re thinking of these things? It’s minimizing the person, correct?

Trae Stephens: I wouldn’t put it that way. I think the person is still a critical element in all of the activity that we’re engaged in, in the Defense Department and in our other national security apparatus. I think the key thing is that you’re enabling people to do what people are really, really good at, and you’re enabling machines to do what machines are really, really good at. And that’s what we’re trying to optimize. We’re trying to create the most efficient pathway for optimizing the skill set of the people who are responsible for those systems.

Kara Swisher: But the goal would be to remove people as much as possible, correct?

Trae Stephens: Uh, I think it’s optimizing people differently.

Kara Swisher: Or the inefficient use of people.

Trae Stephens: Yeah. That’s probably a fair way to put it. It’s getting people out of doing things that people shouldn’t be doing.

Kara Swisher: Mm-hmm, because either they’re bad at it or they could get hurt. Correct?

Trae Stephens: Yeah, I think those are probably the two biggest things. What we usually call this internally is “dull, dirty, dangerous.” We want people to be removed from those to the extent possible.

Kara Swisher: Dirty meaning “munitions,” right?

Trae Stephens: Dirty could mean a lot of things. It could mean, like, putting people in a position where they have to make decisions that you don’t want them making. It could be putting them in really bad spaces for their mental health. Like, you don’t want a person to be cleaning up a toxic-waste problem; you really want that handled by robots or machines. So there are a number of different definitions you could apply there.

Kara Swisher: Last fall, you announced a first weapon: the Altius drone “loitering munition.” Talk about that. Do these drones pick their targets, or how does that work? Because one of the issues I remember when they had people manning drones and doing bombing runs is the mental effect of doing that from afar, which has been an issue since they had bombs dropping out of planes, I’m presuming. But talk about this drone, this loitering munition.

Trae Stephens: Yeah, so I’m assuming that when you’re using the word weapon, you’re referring to, like, things that explode — is that kind of the direction you’re taking it?

Kara Swisher: Yes, yes, but this Altius drone, this was your first weapon that you announced.

Trae Stephens: Yeah, I mean, we’ve built a lot of systems in the past that have been used for tactical purposes. I think the Altius announcement is the first that we’ve made where something explodes. But that’s not necessarily how I would define weapon, right? There are a lot of weapons that don’t explode, but this one does happen to explode. So yeah, we are working on this platform called Altius. There are different versions of it. These are used oftentimes as air-launched effects. So it’s like you have a helicopter, it can shoot a drone out of a tube, and that drone can be used to extend its range. It could be used to do surveillance or reconnaissance to make sure that the helicopter, or whatever aircraft is launching it, isn’t putting itself in danger.

And, you know, the loitering-munition part of this is making it possible for an operator to be more precise and discriminate with a strike where a strike is needed. So one of the things you might have seen, you might be seeing, in Ukraine is the use of dumb artillery. So you’re shooting mortars; you know, these things are pockmarking farmland, they’re killing civilians. We want to put munitions on target where we know that the target is the adversary and it’s not going to lead to unnecessary casualties. And so it allows you as an operator to see what it is that you’re going after and then convert that air-launched effect from its surveillance purposes into something that can be used to deliver an impact in a way that you would normally be doing in a much less precise way.

Kara Swisher: So this is a person doing it, using a drone, precisely. It’s not that AI has to be trained to say this is a dangerous thing; it’s something that a person at the other end decides, correct?

Trae Stephens: Yeah, absolutely. You want a human in the loop on these decisions. I think, you know, the conversation around autonomous weapons is obviously very complicated.

Kara Swisher: Absolutely.

Trae Stephens: And I think the technologies that we’re building are making it possible for humans to make the right decisions about these things so that there is an accountable, responsible party in the loop for all decisions that could involve lethality, obviously.

Kara Swisher: Well, the fear being that AI will make a mistake, just the way there are controversies around AI in judging or, you know, in court cases or things like that.

Trae Stephens: Human judgment is incredibly important. We don’t want to remove that.

Kara Swisher: Or just as flawed. You know, if you listen to Daniel Kahneman, he’s like, well, AI’s a little better than humans, ’cause humans can make 60 different decisions based on almost no information.

Trae Stephens: Yeah, the place where people get hung up on that, though, is that they want someone to be responsible. And so humans might make mistakes, but we believe in the concept of being able to hold humans responsible for those mistakes. If a machine makes a mistake, like, who do you blame for the mistake? Do you blame the people that manufacture the system? There’s all sorts of questions that come up there.

Kara Swisher: Is there a deadening when you put AI in charge of so much stuff? I mean, you just build these things, presumably, but is there a thought in the defense community that if you start to really make it into a game, or make it feel like a game in some way, there’s a problem with that? Or is it, “Whoa, we’re gonna save lives by doing this”?

Trae Stephens: No, I think there’s a great deal of thought that goes into this, and it’s certainly something that I don’t feel absolved from personally, even. Like, it’s really important to have responsible conversations and dialogue about ethics and, you know, how the things that we’re building impact what other people are building and how that impacts our adversaries. And you know, one of the core reasons why we started Anduril is we believe that, you know, you can lean very heavily into “just war” theory to conduct conflict in the most ethical way possible.

Kara Swisher: Is it Saint Augustine? Just war?

Trae Stephens: Yeah, Saint Augustine was responsible for a lot of the early writings on just war.

Kara Swisher: I remember from my Georgetown days.

Trae Stephens: There you go, you got it. One great example of this is the Zawahiri raid in Kabul a few months back. That was done with what’s called a “Ginsu missile.” It’s nonexplosive; it’s purely kinetic. But we were able to take out Zawahiri, and no one else was even injured in that attack with a completely nonexplosive guided munition.

And I think once you get to the point where you can be incredibly accurate, you can be incredibly precise, you get more of a deterrent impact on the conventional side of the equation. We all understand nuclear deterrence and how that works from a strategic perspective, but if you can get to the point where you can conventionally deliver outcomes on the battlefield at very low cost, you can deter the adversary from engaging in conflict to begin with. And that’s the sort of advantage that I think is important for us to try to build.

Kara Swisher: All right, Saint Augustine aside, one of the things that’s happened, though, is that these AI technologies, especially, have become very powerful. We can see how it revolutionizes search. Talk about how it does that with defense and how we fight wars, if that’s your goal. You had written last year, “Today there’s more AI in a Tesla than in any U.S. military vehicle.” Agreed. “Better computer vision in your Snapchat app than in any system the Department of Defense owns.” What’s the problem? Is it they just don’t wanna use Snapchat in the Defense Department, or what?

Trae Stephens: Ha, I definitely don’t think they should be using Snapchat in the Defense Department.

Kara Swisher: No, they should be using TikTok, but go ahead.

Trae Stephens: Obviously, no. So I think, you know, there are a lot of problems here. Part of it is just the incentive structure. So if you look at … you know, when the Cold War ended, the secretary of Defense brought the defense industry together in what became colloquially known as the Last Supper. And he effectively said, like, “Look, defense spending is going down. Everybody needs to consolidate or die.” And so he encouraged all these companies to do exactly what they did, which is they pared themselves down and built an incentive structure that made it possible to maintain nearly perfect competition between the large primes — which is what we call the big defense contractors.

And so we got into this situation where everything was an exquisite bespoke system. Everything was built from scratch. And then you had companies like SpaceX that came up and said, “We’ve built a commercial launch system that is a fraction of the cost of what is being offered by the United Launch Alliance,” the group of primes that do this together. And they don’t get access to the contracts, because the DoD has sort of tacitly agreed with the primes that it won’t allow new entrants as long as there’s this nearly perfect competition where it’s basically an index fund of U.S. GDP. Like, these companies grow very slowly over time, tracking GDP. They’re gonna distribute revenue as evenly as they possibly can, but we get no innovation as a result. And so we’re kind of stuck in the middle of this right now.

This interview has been edited for length and clarity.

On With Kara Swisher is produced by Nayeema Raza, Blakeney Schick, Cristian Castro Rossel, and Rafaela Siewert, with mixing by Fernando Arruda, engineering by Christopher Shurtleff, and theme music by Trackademics. New episodes drop every Monday and Thursday. Follow the show on Apple Podcasts, Spotify, or wherever you get your podcasts.

