Who is in charge when it comes to AI? People or machines? In this episode, Paul Scharre, author of the books Army of None: Autonomous Weapons and the Future of War and the award-winning Four Battlegrounds: Power in the Age of Artificial Intelligence, and Robert Sparrow, coauthor with Adam Henschke of “Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming” that was featured in the Spring 2023 issue of Parameters, discuss AI and its future military implications.
Read the article: https://press.armywarcollege.edu/parameters/vol53/iss1/14/
Keywords: artificial intelligence (AI), data science, lethal targeting, professional expert knowledge, talent management, ethical AI, civil-military relations
Episode transcript: AI: Centaurs Versus Minotaurs: Who Is in Charge?
Stephanie Crider (Host)
The views and opinions expressed in this podcast are those of the authors and are not necessarily those of the Department of the Army, the US Army War College, or any other agency of the US government.
You’re listening to Conversations on Strategy.
I’m talking with Paul Scharre and Professor Rob Sparrow today. Scharre is the author of Army of None: Autonomous Weapons and the Future of War and Four Battlegrounds: Power in the Age of Artificial Intelligence. He’s the vice president and director of studies at the Center for a New American Security.
Sparrow is co-author with Adam Henschke of “Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming,” which was featured in the Spring 2023 issue of Parameters. Sparrow is a professor in the philosophy program at Monash University, Australia, where he works on ethical issues raised by new technologies.
Welcome to Conversations on Strategy. Thanks for being here today.
Paul Scharre
Absolutely. Thank you.
Host
Paul, you talk about centaur warfighting in your work. Rob and Adam re-envisioned that model in their article. What exactly is centaur warfighting?
Scharre
Well, thanks for asking, and I’m very excited to join this conversation with you and with Rob on this topic. The idea really is that as we see increased capabilities in artificial intelligence and autonomous systems, rather than thinking about machines operating on their own, we should be thinking about humans and machines as part of a joint cognitive system working together. And the metaphor here is the centaur, the mythical creature that is half human and half horse, with the human on top—the head and the torso of a human and then the body of a horse. It’s a helpful metaphor for thinking about combining humans and machines to solve problems using the best of both human and machine intelligence. That’s the goal.
Host
Rob, you see AI being used differently. What’s your perspective on this topic?
Robert Sparrow
So, I think it’s absolutely right to be talking about human-machine or manned-unmanned teaming. I do think that we will see teams of artificial intelligences, robots, and human beings working and fighting together in the future. I’m less confident that the human being will always be in charge. And I think the image of the centaur is kind of reassuring to people working in the military because it says, “Look, you’ll get to do the things that you love and think are most important. You’ll get to be in charge, and you’ll get the robots to do the grunt work.” And, actually, when we look at how human beings and machines collaborate in civilian life, we often find it’s the other way around.
(It) turns out that machines are quite good at planning and calculating and cognitive skills. They’re very weak at interactions with the physical world. Nowadays, if you, say, ask ChatGPT to write you a set of orders to deploy troops, it can probably do a passable job at that just by cannibalizing existing texts online. But if you want a machine to go over there and empty that wastepaper basket, the robot simply can’t do it. So, I think the future of manned-unmanned teaming might actually be computers, with AI systems issuing orders, or maybe advice that has the moral force of orders, to teams of human beings.
Adam and I have proffered the image of the Minotaur, the mythical creature with the head of a bull and the body of a man, as an alternative to the centaur when we’re thinking about the future of manned-unmanned teaming.
Host
Paul, do you care to respond?
Scharre
I think it’s a great paper, and I would encourage people to check it out, “Minotaurs, Not Centaurs.” And it’s a really compelling image. Maybe the humans aren’t on top. Maybe the humans are on the bottom, and we have this other creature that’s making the decisions, and we’re just the body taking the actions. (It’s) kind of creepy, the idea that maybe we’re headed towards this role of minotaurs instead, where we’re just doing the bidding of the machines.
You know, a few years ago, I think a lot of people envisioned that the types of tasks AI would be offloading would be low-skill tasks, particularly physical labor. So, a lot of the concern was that autonomy was gonna put truck drivers out of work. It turns out, maneuvering things in the physical world is really hard for machines. And, in fact, we’ve seen with progress in large language models in just the last few years, ChatGPT or the newest version (GPT-4), that they’re quite good at lower-level skills of cognitive labor, so they can do a lot of the tasks that maybe an intern might do in a white-collar job environment, and they’re passable. And as Rob is pointing out, ask a robot to throw out a trash basket for you or to make a pot of coffee . . . it’s not any good at doing that. But if you said, “Hey, write a short essay about human-machine teaming in the military environment,” it’s not that bad. And that’s pretty wild.
And I think sometimes these models have been criticized . . . people say, “Well, they’re just sort of shuffling words around.” It’s not. It’s doing more than that. Some of the outputs are just garbage, but (with) some of them, it’s clear that the model does understand, to some extent. It’s always dicey using anthropomorphic terms, but (it can) understand the prompts that you’re giving it, what you’re asking it to do, and can generate output that’s useful. And sometimes it’s vague, but so are people sometimes. And I think that this vision of, hey, are we headed towards this world of a minotaur kind of teaming environment, is a good concern to raise because presumably that’s not what we want.
So then how do we ensure that humans are in charge of the kinds of decisions that we want humans to be responsible for? How do we be intentional about using AI and autonomy, particularly in the military environment?
Sparrow
I would resist the implication that it’s only really ChatGPT that we should be looking at. I mean, in some ways it’s the history of chess or gaming that we should be looking to, the fact that machines outperform all, or at least most, human beings. And the question is, if you could develop a warfighting machine for command functions, it wouldn’t necessarily have to be able to write nice sentences. The question is, when it comes to some of the functions of battlefield command, whether or not machines can outperform human beings in that role. There are some applications, like threat assessment in aerial warfare, for instance, where the tempo of battle is sufficiently high and there are lots of things whizzing around in the sky, and we’re already at a point where human beings are relying on machines to at least prioritize tasks for them. And I think, increasingly, it will be a brave human being that overrides the machine and says, “The machine has got this wrong.”
And we shouldn’t be looking only at explicit or acknowledged hierarchies, either. We need to look at how these systems operate in practice. And because of what’s called automation bias, which is the tendency of human beings to defer to machines once their performance reaches a certain point, yeah, I think we’re looking at a future where machines may be effectively carrying out key cognitive tasks. I’m inclined to agree with Paul that there are some things that it is hard to imagine machines doing well.
I’m a little bit less confident in my ability to imagine what machines can do well in the future. If you’d asked me two years ago, five years ago, “Will AIs be able to write good philosophy essays?” I would have said, “That’s 30 years off.”
Now I can type all my essay questions into ChatGPT and this thing performs better than many of my students. You know, I’m a little bit less confident that we know what the future looks like here, but I take it that the fundamental technology of these generative AI and adversarial neural networks is actually going to be pretty effective when it comes to at least wargaming. And, actually, the issue for command in the future is how well can we feed machines the data that they need to train themselves up in simulation and apply it to the real world?
I worry about how we’ll know these things are reliable enough to move forward, but there are some pretty powerful dynamics in this area where people may effectively be forced to adopt AI command in response to either what the enemy is doing or what they think the enemy is doing. So, it’s not just the latest technology; there’s a whole set of technologies here, and a whole set of dynamics, that I think should undercut our confidence that human beings will always be in charge.
Host
Can you envision a scenario in which centaur and minotaur warfighting might both have a role, or even work in tandem?
Sparrow
I don’t think it’s all going to be centaurs, but I don’t think it will all be minotaurs. And in some ways, this is a matter of the scale of analysis. If you think about something like Uber, you know, people have this vision of the future of robot taxis. I would get into the robot taxi. And as the human being, I would be in charge of what the machine does. In fact, what we have now is human beings being told by an algorithm where to drive.
Even if I were getting into a robot taxi and telling it where to go, for the moment, there’d be a human being in charge of the robot taxi company. And I think at some level, human beings will remain in charge of war as much as human beings are ever in charge of world historical events. But I think for lots of people who are fighting in the future, it will feel as though they’re being ordered around by machines.
People will be receiving feeds of various sorts. It will be a very alienating experience, and I think in some contexts they genuinely will be effectively being ordered around by an AI. An interesting thing to think about here is how even an autonomous weapons system, which is something that Paul and I have both been concerned about, actually relies on a whole lot of human beings. And so at one level, you hope that a human being is setting the parameters of operations of the autonomous weapons system, but at another level, everyone is just following this thing around and serving its needs. You know, it returns to base, and human beings refuel and maintain it and rearm it.
Everyone has to respond to what it does in combat. Even with something like a purportedly autonomous weapons system, zoom out a bit, and what you see is a machine making a core set of warfighting decisions and a whole lot of human beings scurrying around serving the machine. Zoom out more, and you hope that there’s a human being in charge. Now, it depends a little bit on how good real-world wargaming by machines gets, and that’s not something I have a vast amount of access to, how effective AI is in wargaming. Paul may well know more about that. But at that level, if you really had a general officer that was a machine, or even a staff taking advice from machines running war games, then I think most of the military would end up being a minotaur rather than a centaur.
Scharre
It’s not just ChatGPT and GPT-4, not just large language models. We have seen, as you pointed out, really amazing progress across a whole set of games—chess, poker, computer games like StarCraft 2 and Dota 2. There is human-level, and sometimes superhuman, performance at these games. What they’re really doing is functions that militaries might think of as situational awareness and command and control.
Oftentimes when we think about the use of AI or autonomy in a military context, people tend to think about robotics, which has value because you can take a person out of a platform and then maybe make the platform more maneuverable or faster or more stealthy or smaller or more attritable or something else. In these games, the AI agents have access to the same units as the humans do. The AI playing chess has access to the same chess pieces as the humans do. What’s different is the information processing and decision making. So it’s the command and control that’s different.
And it’s not just that these AI systems are better. They actually play differently than humans in a whole variety of ways. And so it points to some of these advantages in a wartime context. Obviously, the real world is a lot more complicated than a chess or Go board game, and there are just a lot more possibilities and a lot more clever, nefarious things that an adversary can do in the real world. I think we’re going to continue to see progress. I totally agree with Rob that we really can’t say where this is going.
I mean, I’ve been working on these issues for a long time. I continue to be surprised. I have been particularly surprised in the last year, 18 – 24 months, with some of the progress. GPT-4 has human-level performance on a whole range of cognitive tasks—the SAT, the GRE, the bar exam. It doesn’t do everything that humans can do, but it’s pretty impressive.
You know, I think it’s hard to say where things are headed going forward, but I do think a core question that we’re going to grapple with in society, in the military, and in other contexts is what tasks should be done by a human and which ones by a machine? And in some cases, the answer to that will be based simply on which one performs better, and there are some things where you really just care about accuracy and reliability. And if the machine does a better job, if it’s a safer driver, then we could save lives, and maybe we should hand over those tasks to machines once machines get there. But there are lots of other things, particularly in the military context, that touch on more fundamental ethical issues, and Rob touches on many of these in the paper, where we also want to ask the question, are there certain tasks that only humans should do, not because the machines cannot do them but because they should not do them for some reason?
Are there some things that require uniquely human judgment? And why is that? And I think that these are going to be difficult things to grapple with going forward. These metaphors can be helpful. Thinking about, is it a centaur? Is the human really up top making decisions? Is it more like a minotaur, where this algorithm is making decisions and humans are running around and doing stuff . . . we don’t even know why? Garry Kasparov talks about this in a recent wonderful book on chess called Game Changer, about AlphaZero, the AI chess-playing agent. He talks about how, after he lost to IBM’s Deep Blue in the 90s, Kasparov created this field of human-machine teaming in chess, freestyle chess, or what’s sometimes been called centaur chess, which is where this idea of centaur warfighting really comes from. And there was a period of time when the best chess players were human-machine teams.
And it was better than having humans playing alone or even chess engines playing by themselves. That is no longer the case. The AI systems are now so good at chess that the human does not add any value in chess. The human just gets in the way. And so, Kasparov describes in this book chess shifting to what he calls a shepherd model, where the human is no longer pairing with the chess agent, but the human is choosing the right tool for the job and shepherding these different AI systems and saying, “Oh, we’re playing chess. I’m going to use this chess engine,” or “I’m going to write poetry. I’m going to use this AI model to do that.” And it’s a different kind of model, but I think it’s helpful to think about these different paradigms and then what are the ones that we want to use? You know, we do have choices about how we use the technology.
How should that drive our decision making in terms of how we want to employ this technology for various ends?
Host
What trends do you see in the coming years, and how concerned or confident should we be?
Sparrow
I think we should be very concerned about maintaining human control over these new technologies, not necessarily the kind of “superintelligent AI is going to eat us all” questions that some of my colleagues are concerned about, but, in practice, how much are we exercising what we think of as our core human capacities in our daily roles, both in civilian life and in military life? And how much are we just becoming servants of machines? How can we try to shape the powerful dynamics driving in that direction? The sort of game-theoretic nature of conflict, or the fact that, at some level, you really want to win a battle or a war, makes it especially hard to carve out space for the kind of moral concerns that both Paul and I think should be central to this debate. Because if your strategic adversary just says, “Look, we’re all in for AI command,” and it turns out that that is actually very effective on the battlefield, then it’s gonna be hard to say, “Hang on a moment, that’s really dehumanizing; we don’t like just following the orders of machines.” It’s really important to be having this conversation. It needs to happen at a global level—at multiple levels.
One thing that hasn’t come up in our conversation is how I think the performance of machines will actually differ in different domains—the performance of robots, in particular. So, something like war in outer space, it’s all going to be robots. Even undersea warfare, that strikes me, at least the command functions are likely to be all onboard computer systems. Or again, undersea, it’s not just about platforms on the sea, but the things that are lurking in the water are probably going to be controlled by computers. What would it be like to be the mechanic on an undersea platform?
You know, there’s someone whose job it is to grease the engines and reload the torpedoes, but, actually, all the combat decisions on the submarine are being made by an onboard computer. That would be a really miserable role to be the one or two people in this tin can under the ocean where the onboard computer is choosing what to engage and when. Aerial combat, again, I think probably manned fighters have a limited future. My guess is that the sort of manned aircraft . . . there are probably not too many more generations left of those. But infantry combat . . . I find that really hard to imagine being handed over to robots for a long time because of how difficult the physical environment is.
That’s just to say, this story looks a bit different depending upon where you’re thinking about combat taking place. I do think the metaphors matter. I mean, if you’re going to sell AI to highly trained professionals, what you don’t do is say, “Look, here’s a machine that is better than you at your job. It’s going to do all the things you love and put you out of work.” No one turns up and says that. Everybody turns up to the conference and says, “Look, I’ve got this great machine, and it’s going to do all the routine work, and you can concentrate on the things that you love.” That’s a sales pitch, and I don’t think that we should be taken in by that. If you want people to start talking about AI and take it seriously, and you go to them saying, “Look, this thing’s just going to wipe out your profession,” that’s a pretty short conversation.
But if you take seriously the idea that human beings are always going to be in charge, that also forecloses certain conversations that we need to be having. And the other thing here is how these systems reconfigure social and political relations by stealth. I’m sure there are people in the military now who are using ChatGPT or GPT-4 for routine correspondence, which includes things that are actually quite important. So, even if the bureaucracy says, “Look, no AI,” if people start to rely on it in their daily practice, it’ll seep into the bureaucracy.
I mean, in some ways, these systems are technocratic through and through. And so, they appeal to a certain sort of bureaucracy. And a certain sort of society loves the idea that all we need is good engineers, and then all the hard choices will be made by machines, and we can absolve ourselves of responsibility. There are multiple cultural and political dynamics here that we should be paying attention to. And some of them, I suspect, are likely to fly beneath the radar, which is why I hope this conversation and others like it will draw people’s attention to this challenge.
Scharre
One of the really interesting questions in my mind, and I’d be interested in your thoughts on this, Rob, is how do we balance this tension between efficacy of decision making and where we want humans to sit in terms of their proper role? And I think it’s particularly acute in a military context. When I hear the term “minotaur warfighting,” I think, like, oh, that does not sound like a good thing. You talk in your paper about some of the ethical implications, and I come away a little bit like, OK, is this something that we should be pursuing because we think it’s going to be more effective, or something we should be running away from, like this is a warning? Like, hey, if we’re not careful, we’re all gonna turn into these minotaurs and be running around listening to these AI systems. We’re gonna lose control over the things that we should be in charge of. But, of course, there’s this tension of if you’re not effective on the battlefield, you could lose everything.
In the wartime context, it’s even more compelling than for some business. If a business doesn’t use the technology in the right way, or it’s not effective, or it doesn’t improve their processes, OK, they go out of business. If a country does not invest in its national defense, it could cease to exist as a nation. And so how do we balance some of these needs? Are there some things that we should be keeping in mind, as the technology is progressing and we’re looking at these choices of do we use the system in this way or that way, to kind of help guide these decisions?
Sparrow
Ten years ago, everyone was gung ho on autonomy. It was all going to be autonomous. And I started asking people, “Would you be willing to build your next set of submarines with no space for human beings on board? Let’s go for an unmanned submersible fleet.” And a whole lot of people who, on paper, were talking about AI and autonomous weapon systems outperforming human beings would really balk at that point.
How confident would you have to be to say, “We are going to put all our eggs in the unmanned basket for something like the next-generation strike fighter or submarines”? And it turns out I couldn’t get many takers for that, which was really interesting. I mean, I was talking to a community of people who, again, all said, “Look, AI is going to outperform human beings.” I said, “OK, so let’s just build these systems. There’s no space for a human being on board.” People started to get really cagey.
And de-skilling’s a real issue here, because if we start to rely on these things, then human beings quickly lose the skills. So you might say, “Let’s move forward with minotaur warfighting. But let’s keep, you know, in the back of our minds that we might have to switch back to the human generals if our adversary’s machines are beating our machines.” Well, I’m not sure human generals will actually maintain the skill set if they don’t get to fight real wars. At another level, I think there are some questions here about the relationship between what we’re fighting for and how we’re fighting.
So, say we end up with minotaur warfighting and we get more and more command decisions, as it were, made by machines. What happens if that starts to move back into our government processes? It could either be explicit—hand over the Supreme Court to the robots. Or it could be that, in practice, everything you see in the media is now the result of some algorithm. At one level, I do think we need to take seriously these sorts of concerns about what human beings are doing and what decisions human beings are making, because the point of victory is for human beings to be able to lead their lives. Now, all of that said, in any given battle, it’s gonna be hard to avoid the thought that the machines are going to be better than us, and so we should hand over to them in order to win that battle.
Scharre
Yeah, I think this question of adoption is such a really interesting one because, like, we’ve been talking about human agency in these tasks: you know, flying a plane or being in the infantry or, you know, a general making decisions. But there’s also human agency in this question of, do you use a technology in this way? And we can see it in lots of examples of AI technology today—facial recognition, for example. There are many different paradigms for how we’re seeing facial recognition used. For example, it’s used very differently in China today than in the United States. Different regulatory environment. Different societal adoption. That’s a choice that society or the government, whoever the powers that be, have.
There’s a question of performance, and that’s always, I think, a challenge that militaries have with any new technology: when is it good enough that you go all in on the adoption, right? When are airplanes good enough that you then reorient your naval forces around carrier aviation? And that’s a difficult call to make. If you go too early, you can make mistakes. If you go too late, you can make mistakes. And I think that’s one challenge.
It’ll be interesting, I think, to see how militaries approach these things. My observation has been, so far, that militaries have moved really slowly. Certainly much, much slower than what we’ve seen out in the civilian sector. If you look at the rhetoric coming out of the Defense Department, they talk about AI a lot. And if you look at the actual doing, it’s not very much. It’s pretty thin, in fact. Former Secretary of Defense Mark Esper, when he was the secretary, testified that AI was his number one priority. But it’s not. When you look at what the Defense Department is spending money on, it’s not even close. It’s about 1 percent of the DoD budget. So, it’s a pretty tiny fraction. And it’s not even in the top 10 for priorities.
So, that, I think, is interesting because it drives choices. And, historically, you can see that, particularly with things that are relevant to identity, that becomes a big factor in how militaries adopt a technology, whether it’s cavalry officers looking at the tank or when the Navy was transitioning from sail to steam. There was pushback because sailors climbed the mast and worked the rigging. They weren’t down in the engine room turning wrenches. That wasn’t what sailors did. And one of the interesting things to me is how these identities, in some cases, can be so powerful to a military service that they even outlast the task itself. We still call the people on ships sailors. They’re not actually climbing the mast or working the rigging; they’re not actually sailors, but we call them that.
And so how militaries adopt these technologies, I think, is very much an open question, with a lot of significance both from the military effectiveness standpoint and from an ethical standpoint. One of the things that’s super interesting to me is that we were talking about AI performance in some of these games, like chess and Go and computer games. And what’s interesting is that I think some of the attributes that are valued in games might be different than what the military values.
So, in gaming environments, in computer games like StarCraft and Dota 2, one of the things computers are very, very good at is operating with greater speed and precision than humans. So they’re very good at what’s termed the microplay—basically, the tactics of maneuvering these little artificial units around on this simulated battlefield. They’re effectively invincible in small-unit tactics. So, if you let the AI systems play unconstrained, the AI units can dodge enemy fire. They are basically invincible. You have to dumb the AI systems down, then, to play against humans, because when these companies, like OpenAI or DeepMind, are training these agents, they’re not training them to do that. That’s actually easy. They’re trying to train them to do the longer-term planning that humans are doing, processing information and making higher-level strategic decisions.
And so they dumb down the speed at which the AI systems are operating. And you do get some really interesting higher-level strategic decision making from these AI systems. So, for example, in chess and Go, the AI systems have come up with new opening moves, in some cases moves that humans don’t really fully understand, like, why is this a good tactic? Sometimes they’ll make moves whose value humans don’t fully understand until further into the game, when they can see, oh, that move made a really important change in the position on the board that turned out to be really valuable. And so, you can imagine militaries viewing these advantages quite differently. Something that is fast, that’s the kind of thing that militaries could see value in. OK, it’s got quick reaction times. Something that has higher precision, they could see value in.
Something where it’s gonna do something spooky and weird, and I don’t really understand why it’s doing it, but in the long run it’ll be valuable, I could see militaries not being excited about that at all . . . and really hesitant. These are really interesting questions that militaries are going to have to grapple with and that have all of these important strategic and ethical implications going forward.
Host
Do you have any final thoughts you’d like to share before we go?
Sparrow
I kind of think that people will be really quick to adopt technologies that save their lives, for instance. Situational awareness/threat assessment, I think, is going to be adopted quite quickly. Targeting systems, I think, will be adopted. If we can take out an enemy weapon or platform more quickly because we’ve handed over targeting to an AI, I think that stuff will be adopted quite quickly. I think it’s gonna depend where in the institution one is. I’m a big fan of looking at people’s incentive structures. You know, take seriously what people say, but you should always keep in the back of your mind, what would someone like you say?
This is a very hard space to be confident in, but I just encourage people not to talk only to people like them but to take seriously what people lower down the hierarchy think, how they’re experiencing things. That question that Paul raised, about whether you go early in the hope of getting a decisive advantage or go late because you want to be conservative, those are sensible thoughts. As Paul said, it’s still quite early days for military AI. People should be, as they are, paying close attention to what’s happening in Ukraine at the moment, where, as I understand it, there is some targeting now being done by algorithms, and keep talking about it.
Host
Paul, last word to you, sir.
Scharre
Thank you, Stephanie and Rob for a great conversation, and, Rob, for just a really interesting and thoughtful paper . . . and really provocative. I think the issues that we’re talking about are just really going to be difficult ones for the defense community to struggle with going forward in terms of what are the tasks that should be done by humans versus machines. I do think there’s a lot of really challenging ethical issues.
Oftentimes, ethical issues end up getting kind of short shrift because it’s like, well, who cares if we’re going to be minotaurs as long as it works? I think it’s worth pointing out that some of these issues get to the core of professional ethics. The context for war is a particular one, and we have rules for conduct in war (the law of war) that kind of write down what we think appropriate behavior is. But there are also interesting questions about military professional ethics: decisions about the use of force, for example, are the essence of the military profession. What are those things that we want military professionals to be in charge of . . . that we want them to be responsible for? You know, some of the most conservative people I’ve ever spoken to on these issues of autonomy are the military professionals themselves, who don’t want to give up the tasks that they’re doing. And sometimes I think for reasons that are good and make sense, and sometimes, for reasons that I think are a little bit stubborn and pigheaded.
Sparrow
Paul and Stephanie, I know you said last word to Paul, so I wanted to interrupt now rather than at the end. I think it’s worth asking, why would someone join the military in the future? Part of the problem here is a recruitment problem. If you say, “You’re going to be fodder for the machines,” why would people line up for that?
You know, that question about military culture is absolutely spot on, but it matters to the effectiveness of the force as well, because otherwise you can’t get people to take on the role. And the other thing is the decision to start a war, or even to start a conflict, for instance. That’s something that we shouldn’t hand over to the machines, but the same logic that is driving towards battlefield command is driving towards making decisions about first strikes, for instance. And that’s one thing we should resist: some AI system saying now’s the time to strike. For me, that’s a hard line. You don’t start a war on the basis of the choice of the machine. So just some examples, I think, to illustrate the points that Paul was making.
Sorry, Paul.
Scharre
Not at all. All good points. I think these are gonna be the challenging questions going forward, and I think there are going to be difficult issues ahead to grapple with when we think about how to employ these technologies in a way that’s effective and that keeps humans in charge of and responsible for these kinds of decisions in war.
Host
Thank you both so much.
Sparrow
Thanks, Stephanie. And thank you, Paul.
Scharre
Thank you both. Really enjoyed the discussion.
Host
Listeners, you can find the genesis article at press.armywarcollege.edu/parameters. Look for volume 53, issue 1. If you enjoyed this episode of Conversations on Strategy and would like to hear more, you can find us on any major podcast platform.
About the authors
Paul Scharre is the executive vice president and director of studies at CNAS. He is the award-winning author of Four Battlegrounds: Power in the Age of Artificial Intelligence. His first book, Army of None: Autonomous Weapons and the Future of War, won the 2019 Colby Award, was named one of Bill Gates’ top five books of 2018, and was named by The Economist one of the top five books to understand modern warfare. Scharre previously worked in the Office of the Secretary of Defense (OSD) where he played a leading role in establishing policies on unmanned and autonomous systems and emerging weapons technologies. He led the Department of Defense (DoD) working group that drafted DoD Directive 3000.09, establishing the department’s policies on autonomy in weapon systems. He also led DoD efforts to establish policies on intelligence, surveillance, and reconnaissance programs and directed energy technologies. Scharre was involved in the drafting of policy guidance in the 2012 Defense Strategic Guidance, 2010 Quadrennial Defense Review, and secretary-level planning guidance.
Robert J. Sparrow is a professor in the philosophy program and an associate investigator in the Australian Research Council Centre of Excellence for Automated Decision-making and Society (CE200100005) at Monash University, Australia, where he works on ethical issues raised by new technologies. He has served as a cochair of the Institute of Electrical and Electronics Engineers Technical Committee on Robot Ethics and was one of the founding members of the International Committee for Robot Arms Control.
This work, Conversations on Strategy Podcast – Ep 22 – Paul Scharre and Robert J. Sparrow – AI: Centaurs Versus Minotaurs—Who Is in Charge?, by Kristen Taylor, identified by DVIDS, must comply with the restrictions shown on https://www.dvidshub.net/about/copyright.