Progress in the development and application of artificial intelligence is coming at a rapid pace today, and is challenging not only our view of work in the future, but also our view of human persons. Can machines have consciousness like a person does? How about rationality? Moral reasoning? Creativity? Join us for this fascinating discussion with philosopher Dr. Mihretu Guta as he explains what artificial intelligence involves and how it impacts our view of what constitutes a human person.
More About Our Guest
Dr. Mihretu Guta is adjunct professor of philosophy at Biola University and Azusa Pacific University. He holds an MA in philosophy from Talbot and a Ph.D. from the University of Durham in the UK. He is the author or editor of several philosophical works, including Consciousness and the Ontology of Properties. He has presented papers all over the world and published in several high-level philosophical journals.
Episode Transcript
Scott Rae: Welcome to the podcast, “Think Biblically: Conversations on Faith and Culture.” I'm your host, Scott Rae, dean of faculty and professor of Christian ethics at Talbot School of Theology, here at Biola University.
Sean McDowell: And I'm your co-host, Sean McDowell, professor of apologetics at Talbot School of Theology, Biola University.
Scott Rae: We're here today to talk about the subject of artificial intelligence and how it connects to a Christian worldview. We have a specialist here on our campus today, and I'm so excited for you to meet him, get acquainted with him, and hear some of his views; he's done a lot of thinking about this.
Dr. Mihretu Guta holds a Ph.D. in philosophy from the University of Durham in the United Kingdom, and he's adjunct faculty both here at Biola and at Azusa Pacific University. He is a specialist in metaphysics and the philosophy of mind (don't let those terms throw you; we'll define them here before too long). But he's particularly a specialist in the neurosciences, the brain, the soul, and how those connect with things like artificial intelligence.
So, Mihretu, welcome, thanks for joining us today, and we look forward to our conversation on this, what I think is a really interesting subject of artificial intelligence.
Mihretu Guta: Yeah, thank you so much. I'm very delighted to be here, and I'm excited to talk about these topics because they interest me a lot and so I'm happy to share whatever I have-
Scott Rae: [crosstalk] Maybe the best place to start would be: tell us, what exactly is artificial intelligence?
Mihretu Guta: Okay. Artificial intelligence is an attempt people make to create artificially intelligent agents, or even persons. If possible, they also aim to create consciousness in machines. So artificial intelligence is a discipline within computer science that focuses on exactly that.
So, in order to have a very good understanding of artificial intelligence, it's good to make a distinction between weak artificial intelligence and strong artificial intelligence. Weak artificial intelligence is an attempt to make tools, basically computer tools and machines, that serve us. It could be facial recognition systems, speech synthesizers, self-driving cars, or the laptops that we use; all sorts of things that help us accomplish tasks efficiently fall under the category of weak artificial intelligence.
But a lot of debate and interesting issues come up with strong artificial intelligence. Strong artificial intelligence, as the name itself suggests, is a discipline that attempts to create in digital machines people like us, thinkers like us with no distinction: beings who think like us, act like us, reason like us, solve problems like us, and so forth.
So there are an awful lot of questions we can ask when it comes to strong artificial intelligence, but the weak kind is not controversial.
Scott Rae: So, something like IBM Watson ...
Mihretu Guta: Yes.
Scott Rae: … would be an example of strong artificial intelligence?
Mihretu Guta: No, it should be taken as weak artificial intelligence. Deep Blue, for example, the system that beat the world's greatest chess player, Garry Kasparov, is weak artificial intelligence. It's not conscious; there's no thinker in that program. It's a computer program that evaluates millions of moves in a matter of seconds, and human beings do not have that high-speed capacity.
Sean McDowell: So you're on your way to a big conference at Cambridge University on the intersection of science, theology, and faith, and you're arguing that although weak artificial intelligence is possible, the-
Mihretu Guta: The objections are as follows: one would be what I call the inherent gap problem, and the other the wrong location problem. Let me explain the first one. People are trying to make the claim that the more sophisticated the machines we make, the less there should be anything impossible about bringing about super-agents who surpass us in so many ways.
But here's a simple example to help people follow my argument. Suppose you are the first person ever to climb to the summit of Everest. You then have what I call historical superiority. Anyone who subsequently climbs to the summit of Mount Everest is not going to strip you of that historical superiority. The fact that you climbed to the summit of Mount Everest is a contingent fact; I could have climbed before you did. But once you did, no matter how many people climb to the summit afterward, you necessarily have historical superiority. No one will ever be able to strip you of that.
So let's relate this to what I call ontological superiority. We have ontological superiority over machines because we are the primary causal source of these machines. We are also agents in a very special ontological sense, and rational beings in a very special ontological sense. So no matter how impressive the computational tasks machines carry out, it is metaphysically impossible for them to change this: the ontological superiority is a fixed matter. They are never going to be in a position to strip us of it.
Here's a theological example. God cannot create another necessary being. The fact that God cannot create another necessary being does not show any flaw on the part of God; it's not that God is not omnipotent. It's metaphysically impossible. Nothing can be necessary to the extent, and in the sense, that God is necessary, so it's impossible for God himself to create another necessary being. Therefore God has an ontological superiority over us humans that, no matter what we do as human beings, we're never going to strip him of. Likewise with machines: suppose we create machines that populate the state of Nevada. Suppose we even create machines that conceive other machines and give birth to other machines. Still, the buck stops with us.
We came up with the blueprint, so we are necessarily superior to them, necessarily smarter than them, and that is never going to be overturned.
Sean McDowell: So you're making an argument that it's true for both God and for human beings, that whatever kind of being we are, we cannot make an ontologically superior being. Is that a simple way to kind of grasp the point?
Mihretu Guta: A simple way to put it would be: you do not even have the property needed to bring about a conscious being exactly similar to you in kind, not merely in degree. For so many people at this point, the difference between machines and us is a matter of degree; that's the weak AI level. The move up to the strong AI level is the claim that this distinction of degree will disappear and we can then say we've brought into existence a being exactly the same in kind as us.
Sean McDowell: That's a really important distinction, because sometimes people think: once we had this machine that could play chess, it keeps getting smarter, and it's only a matter of time before it passes us. You're saying that's just a distinction in degree, but to make a self-conscious being that has strong intelligence would be a difference in kind, and that's metaphysically impossible.
Mihretu Guta: It's impossible because you don't have the property to do it to begin with. If you think you have the property to do that kind of thing, let me know what that property is. In my paper I'm actually demanding: what is that metaphysical property such that you could be in a position to bring into existence, by building complicated machines, a being the same in kind as you?
Scott Rae: So, Mihretu, does this mean that even with weak artificial intelligence, you won't ... artificial intelligence will not be able to produce intelligent agents?
Mihretu Guta: I actually don't think artificial intelligence can create intelligence, though in a metaphorical sense, yes. Suppose, for example, you have a machine and a program that you run on it. There are algorithms you've written for that machine, scripted algorithmic steps the machine has to follow in order to execute what you put into it. So when we talk about an "agent," quote unquote, what do we mean by that? Is it an independent, autonomous agent just like us, one that can come up with its own novel ideas, novel creative concepts, problem-solving mechanisms, and so on?
So in the way we're using language in this research, we're in fact abusing words, predicates, nouns, and so on. We're taking our own experience, projecting it onto the machines, and then using the words in exactly the same sense. So I doubt that we can bring about agents, but we can create machines that appear to be thinking like us. This is not mysterious. If you Google something, your computer might ask, "Are you sure?" But that doesn't mean the machine came up with that idea; the algorithm, the data, everything is fed into it so that, given certain moves you make or certain commands you give, it responds accordingly. Everything in a machine is scripted.
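To make that "everything is scripted" point concrete, here is a minimal sketch in Python. The command strings, canned replies, and function name are all invented for illustration; no real assistant works from a table this small, but the logic, a pre-authored rule in and a pre-authored response out, is the same.

```python
# A minimal sketch of the "everything is scripted" point. Every apparent
# decision below is a rule a human wrote in advance; the names and strings
# are invented for illustration and belong to no real assistant.

SCRIPTED_RESPONSES = {
    "delete all files": "Are you sure?",          # the "question" was authored by a person
    "hello": "Hello! How can I help you today?",
}

def respond(command: str) -> str:
    """Look up the pre-written response for a command, falling back to a default."""
    return SCRIPTED_RESPONSES.get(command.lower().strip(), "Command not recognized.")

print(respond("Delete all files"))  # -> "Are you sure?" (looks thoughtful; it is a lookup)
```

On Dr. Guta's picture, the machine that asks "Are you sure?" differs from this sketch only in scale, not in kind.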
Scott Rae: So let me pose an example to you. I read about this in the last month or so: a group of clinical psychologists are now using artificial intelligence to treat their patients. So instead of seeing a therapist, you see a machine that is programmed to detect your mood, to read your spirit, and then to dispense the best advice and empathy it can. What do you make of that?
Mihretu Guta: Okay. So there is what we call machine learning, right? With machine learning, you can feed patterns into the machine: for instance, neural-activity patterns from a healthy person and neural-activity patterns from a person who is having some sort of problem. You can feed those patterns into the machine, and the machine can apply them in the way you've instructed, within the parameters you've set up. It's not that the machine is conscious of your situation; the machine is still producing output based on the input that was fed into it. These are tricky areas, because the way we use language here makes it appear as if there's an agent in the machine that is almost conscious of how you're feeling. I think these are very misleading ways to think about this issue.
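As a toy illustration of that input-to-output picture, here is a minimal pattern-matching sketch in Python. The numeric "neural patterns" and mood labels are invented for the example; a real machine-learning system is vastly larger, but it likewise returns whichever stored, human-labeled pattern best matches the input.

```python
# A toy sketch of the pattern-matching point: the "mood detector" below only
# compares an input against labeled examples it was fed in advance. The
# numbers and labels are invented for illustration.

LABELED_PATTERNS = {
    "calm":     [0.2, 0.1, 0.3],   # patterns a human chose and labeled beforehand
    "agitated": [0.9, 0.8, 0.7],
}

def classify(signal):
    """Return the label whose stored pattern is closest to the input signal."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(LABELED_PATTERNS, key=lambda label: squared_distance(signal, LABELED_PATTERNS[label]))

print(classify([0.85, 0.75, 0.6]))  # -> "agitated": the output follows from the fed-in input
```

Nothing in the function is "conscious" of a mood; it computes a distance and returns the nearer label, which is the guest's point about language outrunning the mechanism.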
Scott Rae: So you would just call this a much more sophisticated algorithm.
Mihretu Guta: Absolutely.
Scott Rae: But no, I mean, in no sense does the machine actually have decision-making capacity or anything resembling consciousness analogous to human beings.
Mihretu Guta: Here's the funny thing: artificial intelligence researchers actually do not spend time thinking about consciousness. In fact, the most comprehensive textbooks in AI do not even give a chapter-length treatment of what consciousness is. The best they do is two or three lines; they mention consciousness and move on, and rightly so, because consciousness is deeply subjective, and consciousness necessarily entails a conscious being. There is no such thing as consciousness floating in the air without being the consciousness of someone; there is a thinker, a conscious being. Therefore, for AI researchers to make inroads in this research, they first have to tell us how they understand what consciousness is and how they understand what an agent is. Unless they come up with very worked-out metaphysical views on these issues, I don't think we should accept their superficial treatment of these concepts.
Scott Rae: In other words, they need philosophers.
Mihretu Guta: 100%. And in fact-
Scott Rae: [crosstalk 00:13:42] theologians [inaudible 00:13:43]
Mihretu Guta: Artificial intelligence was actually born out of philosophy and logic, so yes, we've contributed to that.
Sean McDowell: So a moment ago you were talking about this machine that, from all appearances, could seem like it's interacting and showing empathy, but it's really a sophisticated algorithm. You said there's no agency. Is that ultimately why a machine could never have strong artificial intelligence, because there's no agent or soul or self? Is that [crosstalk] can you explain what-
Mihretu Guta: Absolutely. Yeah, that's exactly the point. In my book, Consciousness and the Ontology of Properties, there's a chapter on the metaphysics of artificial intelligence where I set out certain conditions. For instance, when you think about thinking as a mental process, you have to think of a thinker. Thinking cannot exist without a thinker. Movement cannot exist without something moving; movement is relative to an object that moves, and there's no such thing as movement in abstraction from the object that moves. So we can make no sense of thinking without a thinker. Descartes already told us this in the 17th century; that was a dead end for him. He entertained the possibility of denying his own existence and the existence of everything around him, but he was not able to doubt his own existence, because the doubter must necessarily exist for the doubting even to get off the ground. Descartes understood that a very long time ago. So yes, that's precisely the point.
So AI researchers have to tell us what they mean by a self. Even if they deny these things, they have to tell us what part of the computer actually thinks. What do they take to be the central part of a thinker in a computer? Is it the program? Is it the bundle of events in the computer, or the temporal parts of the computer? This is what [Altson] argues about artificial intelligence [inaudible] University of Sheffield. As metaphysicians, our excitement level is not the same as the excitement level of people who really think we are making progress, because there are metaphysical issues that haven't even been touched, let alone tackled. So what good reason do I have to join the party?
Sean McDowell: Okay, so in every science fiction film we've seen, whether it's A.I. [Artificial Intelligence] or Terminator or I, Robot, there's always this mysterious moment where the computer just becomes alive and starts thinking, but there's never an explanation for it. Essentially what you're saying is: right now, computers are getting more sophisticated, and the algorithms are able to seem more human from the outside to the uneducated person, but there is no more of a clue how this self emerges naturalistically than there was decades ago. Is that fair, or am I misreading you?
Mihretu Guta: Let me give you one of the world's leading naturalists, at the University of California, Berkeley, by the name of John Searle. He is a naturalist, and he's one of the world's leading critics of strong AI. Machines of any degree of complexity can do excellent things in manipulating syntax, syntactical rules, with zero understanding, which means they do not have any understanding of the semantics of what they actually manipulate. So John Searle came up with a thought experiment, what he calls the Chinese Room thought experiment. For example, Sean, let's say you don't speak Amharic, right? You don't; it's Ethiopia's national language.
Sean McDowell: I do not, but thank you for thinking I might even be able to.
Mihretu Guta: Right, let me use this example on you. Suppose you do not speak a word of Amharic, and I gave you symbols to relate to one another. You can see the words even if you can't read Amharic. So you are in one room; let's suppose that's the case. And Dr. Scott, you're outside; you don't know whether Sean is a human being or a computer, but you speak Amharic perfectly. So you ask questions-
Scott Rae: Which I do.
Mihretu Guta: Which you do. So you ask a series of questions. All you have to do in that room, whenever a certain question comes up, is relate this symbol with that symbol, and that will give the right kind of answer to Scott as output. So Scott thinks a human being who perfectly understands Amharic is interacting with him. But on your part, you have zero semantic grasp of anything you're manipulating, and yet you can manipulate it. You don't have to learn Amharic; you don't have to have any grasp of the language. Insofar as you can manipulate symbols blindly and mechanically, you can do that.
Therefore Searle argues, to this very day, that we shouldn't have any concern whatsoever that machines will somehow become conscious. [inaudible] Necessarily, they can't, so no degree of complexity should magically turn them overnight into conscious beings. The Chinese Room argument has been bombarded, critiqued by so many people, but still with no luck, in my view. I think the objection stands its ground, and I don't think there's any good response to it. So yes, there's a difference between manipulating syntactical rules and understanding the content of what you're manipulating, and all machines do is the manipulating. For instance, Siri, the speech synthesizer, is a program where you can ask a question and it will give you the right output, but you have to feed every possible piece of information about that area into the machine first.
If that information has not been fed into the machine, there's nothing you can get out of it, sorry. So it's all about input and output, and in fact I'm arguing that the adjective "artificial" has to be replaced by "functional." There's no such thing as artificial intelligence; there is "functional intelligence," quote, unquote.
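Here is a minimal sketch of Searle's room in Python, purely illustrative. The symbol strings are invented stand-ins for the rule book the guest describes; the "room" produces the right reply by blind lookup, with zero grasp of what any symbol means.

```python
# A toy sketch of the Chinese Room: the "room" answers correctly by pure
# symbol lookup, with zero understanding of what the symbols mean. The
# symbol strings are invented stand-ins for the rule book.

RULE_BOOK = {
    # "when these symbols come in, hand those symbols back" -- nothing more
    "symbols-for-greeting":    "symbols-for-greeting-reply",
    "symbols-for-how-are-you": "symbols-for-i-am-well",
}

def room(incoming: str) -> str:
    """Blindly match incoming symbols against the rule book; no semantics involved."""
    return RULE_BOOK.get(incoming, "symbols-for-please-rephrase")

# From outside, the replies look fluent; inside, it is mechanical matching.
print(room("symbols-for-how-are-you"))  # -> "symbols-for-i-am-well"
```

From outside the room the output is indistinguishable from understanding; inside, it is syntax all the way down, which is why the guest prefers "functional" to "artificial" intelligence.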
Scott Rae: That was sort of my question: is the idea of artificial intelligence actually an oxymoron?
Mihretu Guta: I think it is, as far as it goes. In fact, the phrase "artificial intelligence" was introduced in 1956 at Dartmouth College in New Hampshire. There's no mystery about the phrase: a group of ten scholars got together for a conference and simply suggested it, and it has stuck with us to this day.
Scott Rae: So we're saying that artificial intelligence machines will never have consciousness analogous to human beings, is that right?
Mihretu Guta: I am arguing for that conclusion. If proponents of strong AI think otherwise, what they have to do is not make bogus claims; they have to come up with a very credible account of how they would go about creating consciousness, of which we do not have the slightest idea as we speak, because consciousness is one of the most difficult aspects of human reality in the whole world. I am part of an international research group on this issue, and I think we're still scrambling. And AI is one small discipline within computer science, and I don't think-
Scott Rae: So if that's true about consciousness, would you also say that's true about other traits like creativity, rationality, moral reasoning, things like that?
Mihretu Guta: Exactly. All of those things hinge on whether there is a being who reasons morally, a being who is rational. Rationality we can capture in two different ways. If you take logic, deductive reasoning and inductive or probabilistic reasoning, there are rules of inference, and you can gauge whether a person is reasoning well or not. But there's another dimension of rationality that's distinct from logical rationality: the ability to navigate through life generally, the ability to cope with your environment, the ability to build a very complicated social network and conduct your life within that context. That itself requires a very practical, extremely sophisticated rationality, grounded in general intelligence, not specific intelligence. Machines are good at specific intelligence: you give them a specific task, you write the rules, and they give you the output.
But we're not talking about that. Human beings have general intelligence. You don't have to have a Ph.D.; you only need to be a human being to be an owner of general intelligence. General intelligence encompasses your very nature and what you do as a result of it.
Sean McDowell: So what motivates so much of the interest, and maybe hysteria, around artificial intelligence? Is it partly that we've seen some remarkable technological breakthroughs? Is it naivete? Is it the naturalistic worldview? Why is what you're saying controversial? Because in some ways it strikes me as kind of obvious if we just reflect on it.
Mihretu Guta: As I've said, weak artificial intelligence is not controversial. I think it's doing a great job in medical science, in education, in transportation, in all sorts of areas. Strong AI, by contrast, is saying that we can actually show that we are not significant as human beings, that we can create machines smarter than us, with superintelligence. In fact, there is a proposal right now in some research circles that we will end up being their pets if strong artificial intelligence gets realized. We would become nobodies, and our centrality as human beings would diminish. That boosts a materialist ontology of human personhood, so there is definitely a very serious motivation in that regard.
Scott Rae: Given what you've said about strong AI and that it's, in your view, highly unlikely if not impossible that we will ever have machines that can do the things that strong AI proponents claim that they can do, what are the possibilities for weak artificial intelligence? What excites you about that field? What are some of the things that we ought to be on the lookout for in the future for weak artificial intelligence? Because it sounds like it has the potential to really revolutionize a lot of the way we live.
Mihretu Guta: Exactly. It has already started doing that, right? In the context of medicine, for instance, we have created all sorts of machine-learning systems that pick up on neural activity. Weak artificial intelligence is making inroads already. Take transportation, with self-driving cars; or education, with computer programs that really help people keep up with their work; or Grammarly, which corrects your typos. I don't use that, I want to do it myself, but that's one example. In almost every sphere we can possibly think of, artificial intelligence is making inroads, and the gadgets and computers we use are helping us in so many ways. If you take transportation, medicine, education, and religious contexts, all these sophisticated machines are already helping us.
That's why proponents of weak artificial intelligence say, "These are tools; they serve us well insofar as we don't abuse them." There are some scary things about these machines as well, my own concerns, which I'll point out in a moment, but as far as it goes, I don't really have any problem. You have computerized airplanes, and computerized cars that tell you on the dashboard there's a problem with the car, so stop and change this or that. All of that is very good; otherwise you get stuck somewhere in the middle of the road. Now there are cars that really give you that kind of information.
Scott Rae: Maybe now would be a good time to talk about what troubles you about that.
Mihretu Guta: Okay. I have two concerns: one is philosophical and one is practical. Robotic technology is growing fast. Amazon, FedEx, and some other companies in the United States are testing robots so that they would no longer use mail carriers, no longer use human beings to drop boxes at your door, and warehouses, for example, are also thinking of deploying more robots than human beings. So there is a potential for many people to lose their jobs in the future, and I think that's very concerning. What will we have achieved at the end of the day if half the population is out of a job while a few people who make billions of dollars enrich their assets even more?
This illusion of efficiency in the West is, I think, extremely unacceptable. To what degree do you think you can be efficient, even as a consumer, and to what extent? The jobs aspect is very, very scary. If things go according to how people are planning, there will be a huge challenge, and it will be a recipe for social chaos in the future.
Scott Rae: Stay tuned, because in a few weeks we'll have an interview with scholar Jay Richards, who is going to address this very subject: will machines take all the jobs? I think that's a very real concern. And you said you had one other concern.
Mihretu Guta: My other concern has to do with moral flourishment. I think the more we machinize society, the more my interest in engaging another human being will diminish; I will develop more and more attachment to gadgets and machines. If someone knocks at my door and I'm looking at a robot as opposed to a human being, imagine: I'm isolating myself even more from human beings. So how is it possible to flourish as a society in a moral sense? That concerns me a lot. Moral flourishment requires us to lift one another up, to grow together and flourish together as members of society. Instead, this creates a scenario where, if you have money, you can do whatever you want with it, and who cares about the rest of humanity? I don't think this is helpful at all. And the privacy issue is already a serious matter in the Western context; it will double, and it will become almost impossible to solve in the future. People are machinizing themselves. There are human-machine interfaces, for example, not only implanting things but, in the future, replacing half of your body with devices, whether in a military context or many other areas, so ...
Scott Rae: So instead of a smartphone or a smart home, you could have a smart body?
Mihretu Guta: Which means ... this is called a cybernetic organism, a cyborg. In other words, a cyborg is not an entirely biological organism; you are a mixture of gadgets and a biological body. There are proposals to upgrade your memory capacity by implanting chips, and there are many other proposals, including, in the future, marriage with robotic objects, so a lot is on the horizon. We need to think about these issues very, very carefully. We want to flourish as members of society, and as Christians I think our main responsibility is to take care of each other, of humanity, of the environment, and of everything else. But if we turn ourselves into human zombies, machinized members of society, I don't see a prospect for real, genuine moral flourishment; it will only enhance the greed that's already affecting us. In some sense we can critique the capitalist economic model here: if it's not carefully handled, it will spin out of control, and AI contributes to that a lot.
Scott Rae: It sounds like we've got both metaphysical and moral issues to think about: metaphysical issues with strong AI, and moral issues with weak AI as its application becomes much more widespread.
Well, Dr. Guta, this has been, I think, incredibly insightful. I suspect that for our listeners these are a lot of things they haven't thought about before, and I appreciate you bringing them to a level that's nontechnical and easily understandable. Thank you for your expertise and for the work that you are doing and will continue to do on this in the future.
Mihretu Guta: Thank you very much for having me.
Scott Rae: This has been an episode of the podcast, “Think Biblically: Conversations on Faith and Culture.” To learn more about us and today's guest, Dr. Mihretu Guta, and to find more episodes, go to biola.edu/thinkbiblically. That's biola.edu/thinkbiblically.
If you enjoyed today's conversation, give us a rating on your podcast app and be sure to share it with a friend. Thanks so much for listening and remember, think biblically about everything.