Artificial intelligence and robotics are becoming a major part of our everyday lives, whether we realize it or not. Where do we see it in our lives? Where is it a good thing, and where should we be concerned about it? Can AI produce machines with the ability to learn, with human consciousness, rationality, and moral decision-making? Join Scott and Sean for this very relevant conversation.
Episode Transcript
>> Artificial intelligence and robotics are becoming a part of our everyday lives, whether we realize it or not, whether we like it or not. Are robots and artificial intelligence ever going to gain consciousness like human beings and be able to think? And perhaps most importantly, how should we think biblically about artificial intelligence and robotics? My name is Sean McDowell, a professor at Biola University, here with my co-host Scott Rae, and this is Think Biblically. Scott, I've got a ton of questions for you related to artificial intelligence and robotics, one of the most pressing issues today. So let's jump right in and define our terms as we begin. Let's just ask the question: what do we mean by artificial intelligence?
>> Well, essentially what we mean by that, Sean, is it's computers being programmed to mimic human intelligence.
>> Sean: Okay.
>> That's essentially what it is.
>> Sean: Okay.
>> So, I think, the emphasis is on the term artificial.
>> Okay.
>> Because I don't think we can make a good argument that it is identical to human intelligence and human consciousness, things like that. What's known as strong AI aims to make it virtually indistinguishable from those.
>> Sean: Okay.
>> What's called weak AI is, I think, a more realistic term, which is used to describe efforts to mimic and emulate some of the things that human intelligence can do.
>> So the word mimic is really important. And I think if artificial intelligence had not been called artificial intelligence and had just been called something else, it almost would've removed a lot of the confusion that we have, wouldn't it?
>> I think that's right. But I think we should remember that what robots and AI machines do is what they're programmed to do.
>> Sean: Okay.
>> They don't do this independently. They're not human minds that operate in a self-directed way. They just do what they're programmed to do.
>> Okay, well, let's talk about some examples of this, because sometimes from science fiction, we just have this strong AI version, artificial intelligence that is on the horizon. But the reality is, it's all around us. It's the air we breathe. So give us some examples in our daily life.
>> Yeah, I think it's becoming much more common and a part of the fabric of everyday life. Take something as simple as Alexa.
>> Sean: Okay.
>> Siri, which mimics the mind's ability to understand human speech. That's one of the clearest examples, one that almost all of us use all the time. Search engines optimize certain functions; the news feed where you get your news is customized by what you click on and what you look at. That's the use of artificial intelligence to optimize your own search functions as you go forward. Surgeons today are using robots now, surgeon-guided robots.
>> Sean: Sure.
>> But in most cases they are much more efficient and much more precise than human surgeons acting alone. So there are some forms of surgery now, like prostate surgery, for example, that are done almost exclusively robotically. And, you know, robots run Amazon warehouses. I mean, they're all over the place.
>> It's amazing.
>> And they are the reason that Amazon is able to keep its costs down and, some would say, its profits high. And, you know, AI tools are used to mimic what therapists can do, what counselors can do. They're programmed to answer certain questions a certain way. They're programmed to respond to human beings in a certain way. So I wouldn't say there's reciprocity-
>> Sean: Okay.
>> In the same way that there is in a human relationship. But I think sometimes, you know, AI-powered robots can provide some of those comfort and almost therapy-type roles. So those would be some examples.
>> Sean: Okay.
>> AI is founded on the idea that human intelligence can be described so precisely, with our knowledge of what goes on in the brain, that a machine can simulate it.
>> So it's really helpful to think of AI being used by surgeons in ways that 99.9 percent of us will never utilize. But Alexa, even the managing of Uber drivers, things like Netflix that are integrated into our daily lives use artificial intelligence. Now, in some ways your answer to the previous question kind of addressed where we're going, but I want you to talk about some of the benefits of integrating AI and robots into our daily life. And just for those watching, we start with benefits, because sometimes as humans, as Christians and evangelicals, we can start with fear, and that's not always the healthy or biblical place to start.
>> Scott: Well, yeah, and I would start theologically with general revelation and common grace. The idea there is that God has embedded His wisdom into His world as well as into His word, and by common grace has given human beings all the capacities they need to unlock what God's embedded into creation. This, I think, is how we place technology within a Christian framework. And technology in general, I would say, is the good gift of God in order to enhance human flourishing and to improve the common good.
>> Sean: Okay.
>> So I think, at the start, it's right to look at some of the benefits of what this technology can do. I think we need to treat it like we treat any other technology: it's generally God's good gift, but in a fallen, broken world, it also has the possibility to go off the rails.
>> Sean: That's super helpful. So in some ways a pencil is technology; a button on a shirt is technology. So we should approach artificial intelligence the way we approach other technology, though clearly more is at stake with artificial intelligence than with a button or a pencil. But that doesn't change the in-principle approach of looking at the good use of technology. So that's a very helpful way to frame this. Give us some examples of maybe some of the benefits of robots or AI in daily life.
>> Well, I love the idea that, you know, I don't have to look at a Thomas Guide or a map.
>> Sean: Amen to that. [Sean laughs]
>> To find my way, you know, especially for those of us who are particularly directionally challenged.
>> Sean: That's me.
>> It would be me too. And without our wives, we'd be in really deep trouble, until the AI functions that fuel our GPS systems were put to work. So that's, I think, one example. You know, I love having the ability to search for just about anything with a couple of clicks. Research today is not nearly as burdensome. When I worked on my doctoral dissertation, I spent hours and hours and hours in a literal law library.
>> Sean: Wow.
>> Photocopying things that I eventually threw away later.
>> So they had photocopies back then.
>> They did.
>> Sorry, I couldn't resist. Keep going.
>> Yeah, sometime before the flood. So that's, I think, a really helpful thing that all of us benefit from. You know, it's embedded in a lot of our systems and has made jobs a lot more streamlined. I mean, you don't hear receptionists picking up the telephone today.
>> Sean: Right.
>> That's all done robotically now.
>> Sean: Okay.
>> And then I think it has the potential to provide some forms of what I'd call close-to-human interaction.
>> Sean: Okay.
>> Not quite the same thing, but it can come close to it.
>> Sean: Okay.
>> In some important ways.
>> That's great. We'll get into some of those. Now, when you say we have artificial intelligence rather than a receptionist, one of the questions we'll get to, though not yet, is: wait a minute, is this replacing jobs? We'll get there, but there are a few things we wanna cover first, kind of laying the groundwork. So how has the social acceptance of this shifted since the pandemic, and maybe why? Because some pretty significant things changed about how people think about robots during that time.
>> Yeah, and I think it may be a bit different in some parts of the world than in others.
>> Sean: Fair enough.
>> You know, for example, our colleague Mihretu Guta, who's from Ethiopia, has told us that some of the things that robots are being used for in Asia would be unthinkable in Africa, because there is such a different emphasis on presence and proximity and community in Africa than there is in other parts of the world. So I think in the West, which is all you and I have to go on, we've seen an increasing comfort with this, though at the edges there are some definite concerns-
>> Sean: Sure.
>> But I think we're sort of getting more accustomed to this. And I think what's happened is that we're actually just becoming more aware of it, 'cause I don't think there's been a huge public relations campaign in favor of artificial intelligence. It's just been gradually introduced into more and more of the fabric of our lives. With the lockdowns that resulted from the pandemic, I think we saw a much greater need for things to be automated, things that don't require the kind of human contact that the pandemic ruled out of bounds for longer than any of us anticipated.
>> That's right.
>> So I think there's something to that that can be helpful. But the danger, of course, is if it replaces...
>> Exactly.
>> Some of those forms of genuine human contact and community that we all, I think came out of the pandemic just starving for.
>> Sean: That's right.
>> So I would say it's an artificial substitute for some of those things; it can provide some help along the way, but it's not a substitute for the real thing.
>> That's a really interesting point, that what it really did is bring to the surface practices that were already there and make us more thoughtful and aware of them. I think that's a helpful distinction to make. Now, you hinted at this in your answer, in terms of how robots are incorporated into Africa versus Asia, as one example. What are some of the differences in approaching robots in the East and in the West? And I ask this because one of the examples I've often used is that, with the low birth rate in Japan, caretakers are being replaced with robots.
>> Scott: Right.
>> My initial instinct is: that's wrong. That's messed up. Here's a huge problem. And there's concern there. But the more I reflect on it, that might not be the whole picture of what's going on.
>> No, I think you're absolutely right about that. Is it the same thing as having an in-the-flesh human being take care of the elderly? The best thing, of course, is to have a family member do that. Somebody who actually knows them.
>> Sean: Sure.
>> But in the absence of that, we look to things like assisted living. And in parts of the world where those things aren't as readily accepted, for whatever reason, or readily available, I think robotics can provide the next best thing. Can it provide the same thing as a human being? Obviously not.
>> Sean: Sure.
>> But can it provide something that's good enough? It's certainly better than nothing at all. I think the difference is that the East tends to be much more people oriented, community oriented. And what you described culturally going on, the birth dearth that's taken place in countries like Japan, has made some of these things a little more necessary.
>> Sean: Right.
>> But in the West, I think we tend to be a bit more task oriented, a bit more compulsive and driven, less community oriented in general. That's a huge generalization, I realize.
>> Sean: Sure.
>> But I could see the ground in the West being a bit more fertile for the kind of widespread acceptance of AI and robotics than in a place like the East. And certainly south of the equator, I think you'd find a much different attitude toward replacing human beings with robots.
>> I think one of the key points is this: can robots help in a way that is better than not having robots, as long as they're not replacing human beings or pretending to be human in a way that is deceptive?
>> Scott: Right, that's misleading.
>> That's where the ethical line starts to get crossed. But even in things like counseling, it's interesting how some people who have trauma or difficulty can initially open up to a robot in a way they might not to a human being.
>> Scott: That's correct, that's right.
>> Now, as long as that is a gateway to get the person to open up to a human being, that seems to me a positive step in the right direction.
>> No, that's a really good example. Because for people suffering from PTSD, often the most difficult step is getting the process started. And if we can help sort of get them off the dime and get the process moving, then it has real potential for significant healing. But anytime you have those kinds of really hard emotional conversations with another human being, there's just a difference there.
>> Sean: Absolutely.
>> And I'm not exactly sure how to wrap my arms around exactly what that means for a trauma patient, but there's a difference in opening up to a person. I suspect one of those things is that with a real person, you have the possibility of being judged and evaluated.
>> Sean: Exactly.
>> Whereas you don't with a robot, unless it's programmed to be judgmental.
>> Sean: Exactly.
>> Which could certainly be a possibility.
>> Which isn't real judgment then.
>> That's correct.
>> Right?
>> That's correct, it would be artificial judgment.
>> There you go. So it lacks the judgment, but it also lacks the ability to really love.
>> Sean: Correct.
>> So if it can open a door for somebody to be loved, and if there's some appropriate judgment that needs to be made in that person's life, a human being can make it. That's true for all of us as we grow. So-
>> Let's just say, I think what people don't fear in opening up to a robot powered by artificial intelligence is being put to shame. And that may be the biggest hurdle that keeps someone from opening up and really getting the healing process started.
>> In some ways, it's like video games. Now, this raises other ethical issues, but sometimes in war, video games are used to break down somebody's barriers to using a weapon. If that's a step toward self-defense and a righteous use, and I realize there's debate about that, it in principle could be a positive use of technology. So we've talked about a lot of potential positive uses, but I know you have some concerns. Talk about some of your concerns.
>> And I know you've thought a lot about some of these concerns too. I think my biggest concern is the economic disruption that it is causing, and will cause in the future. I don't believe that robots are gonna take all of the jobs of human beings.
>> Sean: Okay.
>> But I do believe they're gonna take some of them. And it's gonna cause disruption in job markets and displacement of workers, people who have actually done everything they're supposed to do. They've gone to school, they've gotten their education, they've been trained to do a good job, they've done it well, and they find that they're being replaced. I don't think we're anything close to prepared for the economic disruption that I think is coming over the next few years with this. That's one thing that is very worrying to me.
>> Sean: Okay.
>> Because we're gonna have a lot of people who are going to have to figure out, "Well, what do I do with my life now that my livelihood has been taken by a robot?"
>> And might an example of this be truck drivers? That's a massive profession that a lot of people benefit from and have served in for years. But it seems like artificial intelligence is potentially taking over the shipping of certain goods. I don't know how large that industry is-
>> Scott: Oh, it's enormous.
>> But that would be a lot of people potentially out of jobs.
>> Scott: And, you know, one thing we didn't mention among the examples, already here and coming in more detail, is self-driving cars, which could potentially put Uber, Lyft, and taxi drivers and delivery people out of work. Not to mention, if we have self-driving cars, there's no reason we can't have self-driving trucks as well. Just that by itself is hugely disruptive. I mean, there are lots of people who have very well-paying jobs driving trucks all over the country who may be looking for other things to do.
>> So when we talk about artificial intelligence, there are a lot of angles we could take with concerns. You've mentioned a potential economic one. Now, before we get to the question I know a lot of people are asking, which is a lot of your specialty, whether intelligent agents, in the sense of robots, will ever be able to think like human beings, are there any other concerns that you have, societally speaking?
>> Yeah, here's the way I would put it. I'd use a term some people call encrypted discrimination.
>> Hmm.
>> For example, lots of job applications are not reviewed first and foremost by human beings. They're reviewed by machines.
>> Sean: Okay.
>> And machines with artificial intelligence powering them can be programmed to look for and sort by any qualifications that companies want or don't want. And it would not be difficult to encrypt into the assessment of resumes factors that I think we would pretty easily agree are discriminatory.
>> Sean: Gotcha.
>> So that's one way it could go off the rails, in ways that, my guess is, most people are never gonna be aware of. If you're not the one programming it, how will you know if that's ever been an issue? So that's one thing that troubles me. And the other thing that worries me is that if what we call strong AI becomes more and more what people are after, I think we could make a metaphysical mistake in assuming that a machine can literally do all of the same things that a human mind can do: that it can make decisions, that it can have consciousness, that it can engage in moral reasoning. Those, I think, require a broader philosophical understanding of what a human person is and what characterizes a person, as opposed to what can be programmed into a machine.
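To make the idea of encrypted discrimination concrete, here is a minimal, hypothetical sketch in Python. Every field name, weight, and zip code below is invented for illustration; no real screening system is being described. It only shows how a proxy variable can quietly tilt an automated resume screen.

```python
# Hypothetical resume screen: all fields, weights, and zip codes are invented.
def score_resume(resume: dict) -> float:
    """Return a screening score; higher means 'advance to a human reviewer'."""
    score = 0.0
    if resume.get("years_experience", 0) >= 5:
        score += 2.0
    if "python" in resume.get("skills", []):
        score += 1.0
    # The hidden problem: zip code can act as a proxy for race or income,
    # so this one line embeds discrimination without ever naming it.
    if resume.get("zip_code") in {"90210", "10021"}:
        score += 3.0
    return score

applicants = [
    {"name": "A", "years_experience": 6, "skills": ["python"], "zip_code": "90011"},
    {"name": "B", "years_experience": 2, "skills": ["python"], "zip_code": "90210"},
]

# The less experienced applicant outranks the more experienced one
# purely because of address, and no human reviewer ever sees why.
for applicant in sorted(applicants, key=score_resume, reverse=True):
    print(applicant["name"], score_resume(applicant))
```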
>> Sean: Gotcha. Now, let me come back to that and see whether you think it's going to happen or not. But I think the reality is that artificial intelligence is raising questions we've never really had to ask before. I've seen coming through on social media an app that says it creates content for you. And I'm reading that going, "Wait a minute, if I put my name on that, maybe it's a system that pulls from other things I've written and says, this is the kind of thing you would say. Do I put my name on that? Do I not? Am I the author?" Even just asking those questions stretches the bounds of what authorship means and of responsibility for content with your name on it.
>> Yeah, not to mention copyright.
>> Well, and copyright too. These are just new questions, and that's just one; artificial intelligence raises a lot of new questions. And what we're tempted to do is just use it for efficiency and skip over asking the more difficult questions. Overall, that's one of my biggest concerns.
>> Scott: And I think that's a huge and well-founded concern, because we've already seen it taking place. Think about how much this has become a part of the fabric of life without anybody really asking a lot of serious questions about whether we want it to be. You know, I'm very troubled by some of the algorithms that are used to direct self-driving cars. For example, if an accident is imminent and you are about to collide with a car with three people in it, and you're by yourself, that self-driving car is programmed to save more lives rather than fewer.
>> Sean: Okay.
>> Namely, you're in serious trouble if that's the case.
>> Sean: Okay.
>> So how these things are programmed obviously has ethical values and principles embedded in it that nobody's disclosing. That's just one example with a self-driving car, and hopefully one that's not gonna happen very often.
>> Sean: Right.
>> But if they become much more prevalent, you're gonna see more things like that happening.
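As a minimal, hypothetical sketch of the kind of rule Scott describes, consider the function below. Its name and structure are invented for illustration, and real autonomous-vehicle planners are vastly more complex, but it shows how the moral weighting is fixed by programmers in advance:

```python
# Hypothetical collision rule: minimize total lives at risk.
def choose_maneuver(own_occupants: int, other_occupants: int) -> str:
    """Pick between two unavoidable-harm outcomes by counting lives."""
    if other_occupants > own_occupants:
        # A utilitarian judgment, made by programmers long before any crash,
        # that the vehicle's own occupants may never see disclosed.
        return "swerve"  # endanger the smaller number: the car's own occupants
    return "brake"       # otherwise protect the car's own occupants

# A driver alone, about to collide with a car carrying three people:
print(choose_maneuver(own_occupants=1, other_occupants=3))  # prints "swerve"
```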
>> Now, I can imagine someone watching this going, that seems pretty common sense: if it's going to crash, save more lives rather than fewer. Your point is that's not the only possible metric. It just highlights that certain ethical ideas are built into these systems, and have to be. They're not human decisions in the moment; now it's a machine. We have to be aware of how those are built into them, and some can have troubling conclusions.
>> Right, and I think we need to be clear: the machine's not making the decision. A team of human beings has programmed the machine to, quote, make that decision. But I think we should be very clear that artificially intelligent robots are not decision-making agents like human beings are.
>> Okay, so they're not right now, will they ever be?
>> Well, for one, that presumes that they would have human consciousness. And we know so little about the origin of our own human consciousness. How is it that we have this ability to look at ourselves through first-person introspection? Which I think is good evidence that we actually are conscious of ourselves.
>> Sean: Right.
>> Independently of our bodies, that is. We can't even figure out where this comes from in human beings, much less program it into some sort of artificially intelligent machine. So I'm not optimistic that strong AI is ever going to be possible. Now, we may come closer: the more we learn about neuroscience and the brain, the more we understand neurological functions, the better we'll be able to mimic them.
>> Sean: Okay.
>> But in my view, metaphysically, that's a huge leap, from humans programming something like consciousness to a machine actually having it.
>> Sean: Gotcha.
>> That's a really big leap. And I think the same is true for things like moral reasoning and rationality. It mimics them; that's the point.
>> Mimics.
>> Emulates it.
>> That's the idea.
>> Okay.
>> But it only does what it's programmed to do.
>> I think one of the key points you're making is how, at core, worldview issues underlie this question. If naturalism is true and we're just complex physical machines, then in principle self-consciousness in robots is possible. But if we are body and soul, as you have argued, then just mimicking the brain isn't gonna get us all the way there, 'cause there's also a soul. Now, we're not gonna settle that issue right now.
>> Scott: Right.
>> But I really wanna highlight the worldview issues at play.
>> Scott: But I think you've made the right point, because almost everybody in the AI scientific and programming community believes exactly what you said about a human being: that it is a physical thing only. And this is where you're absolutely right, the worldview means everything. Because if physicalism about a human being is true, if we are nothing more than a collection of our parts and properties, then there's no reason why strong AI can't be true.
>> Sean: Hmm. So if human beings make an AI system that's self-conscious, what's often missed is that this wouldn't mirror how it allegedly happened in nature through an evolutionary process, which is unguided and purposeless. If it happens, and again, even to say I'm skeptical understates where I stand; I don't think-
>> Scott: Fair enough.
>> It's metaphysically possible. But if it did happen, it would be an argument for design, because it takes intelligence to get there. That's an interesting point that's left out. Now, with that said, let me shift a little bit. I wanna ask you a question that, in some ways, I don't know where to go with. The fact that we have to ask this question today is just eye-opening as to where our culture is at, but we've seen it in movies. Can human beings even be friends with AI robots? Can we be a friend with a robot? We've seen in movies like "Her" with Joaquin Phoenix not just a friend but a boyfriend and a girlfriend. Is such a thing possible?
>> Well, it depends on how you're defining the term friend.
>> Fair enough.
>> If you mean by that a reciprocal relationship, I think the answer is no, because what you are doing is interacting with a machine that's been programmed to respond to you in a certain way. None of that is spontaneous. None of that is intuitive, like we have in a real relationship. So there's just a whole lot missing from what we would generally take to be the normal reciprocal give-and-take of a relationship. It can mimic some of that. It can mimic sympathy, for example, when you share negative emotions; it can mimic compassion. But can it give you the same thing, the reciprocity that you get from another human being? I think the answer is no. And I don't see where that will ever be the case, because ultimately, we talk about relationships in terms of soulmates for a reason.
>> Sean: Right.
>> And it's because there's a deep, soulish connection that's made, whether a person acknowledges it or not. I mean, even physicalists are looking for their soulmate. And what they mean by that is that there's something deeper they want to connect with, something that's not reducible to physics and chemistry in a person. Now, I know you've had thoughts about this too.
>> Yeah, the idea of reciprocity is where I just hit a wall with being friends with a robot. Can someone come to care about a robot? Sure. Could someone benefit from that? Yes. But is that a friendship? I mean, even fictional movies like Big Hero 6 entertain this idea. At the end, sorry to ruin it, people have had plenty of time to see it, the robot sacrificing its life has meaning not just because it was programmed to, but because it seems to make that choice to do so-
>> Scott: Independently.
>> Which is something...
>> Yeah, and that's something a robot can't really do. So if we stretch what a robot is and make it self-conscious, then it seems we could have that kind of friendship. But robots can't and won't be able to do that. Still, the idea of friendship with a robot maybe has some legs, so to speak, with people today, because the idea of friends on social media has been really diluted. I mean, I'll call somebody a friend that I've responded to in a tweet back and forth.
>> Scott: That's right.
>> So maybe that's why this idea.
>> 800 of my closest friends.
>> Yeah, exactly. So I understand why people might find that appealing. But I think we have to remember there's something fundamentally different about a robot that is not gonna allow the kind of friendship scripture talks about.
>> Maybe I'd put it like this.
>> Sean: Okay.
>> Real friendships are organic and natural and self-initiating, not programmed. And I think that's another way to look at this idea of reciprocity. 'Cause if a machine is not a moral agent, then it's hard to see how you can have a genuine friendship with it.
>> That makes sense. So we've talked about how robots don't have creativity like human beings do; what they have is downstream from the creativity built into them. You've also talked about how robots don't have moral reasoning, but they're built to react in certain circumstances in certain ways because of the moral reasoning of the programmers.
>> Scott: Yeah, I'd put it like this. They have an algorithm that's driven by unstated, unexpressed-
>> Sean: Okay.
>> Moral values.
>> Sean: Okay, good.
>> Get that right.
>> Built into 'em by human beings who have creativity and moral reasoning.
>> Which, interestingly, sort of raises the question of where that creativity and rationality and moral sense came from in the programmers themselves in the first place. As you've pointed out numerous times, on a naturalistic, physicalist worldview there's just no adequate way to account for that.
>> Sean: That's exactly right.
>> That's the big elephant in the living room in this larger conversation.
>> That's a great question. Let's jump to what's been called the robot rights movement. When I first heard this, I was like, robot rights movement? What are we talking about? What's happening now? My first instinct is, of course robots don't have rights; they're not human beings. But that's not necessarily what's meant by robot rights. It's more the sense of: do we attribute rights to robots because of the function and role they may play in society, so that society functions better? So it's not for the sake of the robot itself, but more for the role that it plays. Is that fair? And what are your thoughts on robot rights?
>> Well, I think that is a fair description. And I would suggest that the fundamental right is the right to life, which robots don't have, and all other rights are built on that. So in that case, if we recognized a right for a robot, it would be because it has instrumental value, not intrinsic value.
>> Sean: That is it.
>> It has value to accomplish something else; the value is not intrinsic to the robot itself. So I could see it, for example, if you have, say, a robot that is actually walking someone across a busy street.
>> Sean: Okay.
>> For example.
>> The robot would have the right not to be run over by a car in the street.
>> Okay.
>> It would have the right.
>> A right in the sense of a legal right, yeah.
>> It would basically have, maybe, the right of way.
>> Okay.
>> Would be a better way to put it.
>> Sean: Okay, that's good.
>> So that would be one example. You know, if robots are performing valuable functions, I wouldn't call what they're owed a right. I'd call it just something that's wise to do. It would be in everybody's interest to have them properly maintained and, if they needed repair, to provide that. But not because the robot has a right to it. I mean-
>> Sean: Okay.
>> I suspect none of us would think twice about, when a robot outlived its usefulness, stripping it for parts and throwing the rest on a scrap heap, which we would not do with human beings.
>> And this is where we make a distinction between a legal right and a moral right. Legal rights should reflect moral rights, but this is not always the case.
>> Scott: Not always.
>> Take abortion laws: human beings have the right to life, the unborn is a human being, and sadly our laws don't reflect that. We saw a similar thing in Nazi Germany, where the right to life was denied to human beings. So they don't always match up. When we're talking about human beings, we have both the moral right not to get run over and the legal right. A robot doesn't have the moral right; it has the legal right. In fact, if you damage the robot, you're not harming the robot itself as you would a human being. If anything, you're harming the person who made it and the person who ordered the service, like a food delivery.
>> Scott: Right.
>> That distinction I think is really important when we talk about robot rights.
>> I'm just thinking out loud here on this one-
>> Uh-oh.
>> Because, yeah, watch out. But I'm wondering if there's a parallel to the way we think about animal rights.
>> Sean: Hmm.
>> The reason we have laws that protect animals from cruelty, in my view, is not because animals have rights, but because of what cruelty to animals says about human beings. It's a virtue-based approach. And I think that sort of reflects what you just suggested: we would recognize something for robots, but it's not intrinsic to the robot. It's because not providing it would involve harm to, or withhold benefit from, another human being or a community. And so to look at-
>> Sean: Interesting.
>> I think looking at it in terms of that virtue approach means asking: what does it say about us as a community if we take a sort of fast-and-loose view toward how we treat robots? We don't want to be seen as uncaring or uncompassionate toward the people whom the robots are serving. So, something like that. I'd have to think a little harder about that one, but...
>> That's fair. But I think you could also argue that how we treat a robot is going to shape us. And this is true with animals.
>> Scott: That's right.
>> If you're cruel to animals, the issue you raised is behind it, but it also affects us: maybe the way we'll treat human beings, maybe the way we'll treat creation. The same with robots. Maybe we want robot rights because, the more they seem to be human, if we mistreat them it can dehumanize others in our minds and create a culture that is less loving and caring. I think for that reason almost alone, because we do sometimes confuse them. I mean, I've found myself sometimes getting mad at Siri. I'm like, "What am I doing? This is just a program." It's just doing what it's trained to do, and it's pulling something outta me. For that reason, maybe there should be a kind of qualified robot rights, as there are animal rights, so we in turn treat one another more humanely.
>> I didn't know that you yelled at Siri.
>> I've caught myself doing that a time or two. I'm like, what am I doing? [both laugh] I just, what's the point?
>> Well, let me ask you, since you've had some pastoral experience too, 'cause I know the question comes up sometimes: what about using robotics and AI in ministry or the local church? If it's good for therapists, why wouldn't it also be good for something like pastoral counseling? I know you've thought about that a bit.
>> I have, and I don't have it all figured out. For some reason, when it comes to ministry, I become that much more cautious with all the issues we've raised, because Christianity is such an embodied-presence kind of religion, so to speak. Our job is to love God and to love others. So I'm less concerned with pastors being out of a job, although I don't want pastors out of a job, than I am with asking the question: are we ministering to people in a humane way? My bottom-line principle is that if robots can better support humans ministering to humans, save time, save money, be more efficient, I don't in principle have a problem with that. But the moment it replaces human-to-human interaction or confuses a robot with a human being, then I'm out. So that's the in-principle line I would draw. And even simpler, artificial intelligence is used all the time by pastors today.
>> Scott: Right.
>> We just have to be cautious that we're never dehumanizing somebody. If it's just a system of efficiency that replaces ministry, then we've gone too far.
>> Yeah, and I think that would hold for therapists, counselors, any type of human-to-human contact that we actually find irreplaceable. I think it can help support that; it can help get things started and move us along the way. But when it comes to replacing it, I think you're absolutely right that we've reached a danger zone we ought to retreat from.
>> And I also gotta think through robots in other roles. There have been stories of robots serving the sacraments or potentially seating people. Part of me is like, that's fine. But we also gotta remember there are roles within the body of Christ that humans are meant to fill. Is it encroaching on that? Those are the questions, again, we have to ask before we rush into technology just because it saves us time or seems cool. With that, oh, go ahead.
>> Let me, I wanna speculate.
>> Okay.
>> For a minute.
>> All right.
>> And ask you to do the same.
>> Okay.
>> Where do you see this going in the future?
>> Boy, I-
>> Open your crystal ball there for a minute and...
>> I gotta tell you, I don't know the answer to that. I'm not a prophet; obviously, we work at a non-profit organization, to steal a line Barry Corey used. I don't even know how to answer that. But I think the bottom line is it's gonna become more and more integrated into our daily life, probably going to increase like a hockey stick in some ways. We're just seeing the beginning of this, and some of it has been brought on by the pandemic. So I think it's going to increase; it's going to grow. We're seeing somewhat of an arms race across countries to get artificial intelligence that affects the way we even approach war. The fact that we're even asking how it affects church ministry is kind of downstream from this. It tells us that everything we do in life, in some fashion, is going to be touched by artificial intelligence. That's where it's headed, if it's not already there.
>> That, I think, is really insightful. Yeah, it's hard to be prophetic about this, because ten years ago, who saw any of this coming other than the people actually in the industry? I think the average person didn't see any of this on the horizon and still may not see much of it. Even what's penetrated the fabric of their lives, they just see as part of what technology provides.
>> Hmm. Well said. The bottom line is we can't be afraid of these questions; we need to invite 'em and consistently go back to scripture and think biblically about 'em. Thanks for joining us today. This conversation was brought to you by the Think Biblically podcast; go subscribe. That's an audio podcast, and yet sometimes we get to do this by video as well. Scott, great discussion on artificial intelligence-
>> Scott: Sean, lots of good questions-
>> And robotics.
>> Scott: Lots of good interaction, lots of fun.
>> We'll see you next time. [upbeat outro music]