
Alarming news about artificial intelligence is everywhere you turn: students using AI to cheat on tests, AI-powered deception, even a viral AI-generated image of Pope Francis. Every sector of work and culture is navigating the frontier of AI, trying to understand and guard against this new technology. The Church is no exception. Sifting through the alarming headlines uncovers a host of ethical and moral questions for even the most tech-savvy among us.

The Unleash the Gospel editorial team recently sat down with Jeffrey Quesnelle to tackle our biggest questions and learn how to navigate AI as a Catholic missionary disciple. Quesnelle is the founder of Nous Research, an open-source research group focused on artificial intelligence and machine learning. He is also a father of four, a parishioner at The National Shrine of the Little Flower Basilica, and a former student at Sacred Heart Major Seminary. And he attests that when it comes to AI, “we have nothing to fear.” Here are his biggest takeaways.

*This interview has been edited for length and clarity.*

UTG: For those of us who are not technocrats, explain artificial intelligence in the most basic way.

Jeffrey Quesnelle: AI is computers giving us results without having to follow explicit programming. Normally, when computers run programs, those programs are written by a computer programmer. These programs are essentially very detailed instructions that tell the computer what to do in every situation. In contrast, what people consider artificial intelligence is when the computer does something, or arrives at a result, that wasn’t explicitly programmed by the programmer.
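The distinction Quesnelle draws between explicit instructions and learned behavior can be sketched in a few lines of Python. This is a hypothetical toy example, not anything discussed in the interview: a hand-written rule versus a “rule” derived entirely from labeled examples.

```python
# Explicit programming: the programmer spells out every rule in advance.
def is_positive_review_rules(text: str) -> bool:
    # Handles only the cases the programmer anticipated.
    return "good" in text or "great" in text

# Learned behavior: the program derives its rule from training data instead.
def train_word_scores(examples):
    """Score each word by how often it appears in positive vs. negative examples."""
    scores = {}
    for text, label in examples:
        for word in text.split():
            scores[word] = scores.get(word, 0) + (1 if label else -1)
    return scores

def is_positive_review_learned(scores, text: str) -> bool:
    # The 'rule' here was never written by hand; it came from the data.
    return sum(scores.get(word, 0) for word in text.split()) > 0

# The data is what matters: change the examples and the behavior changes.
data = [("what a great film", True), ("truly awful acting", False),
        ("great fun", True), ("awful and boring", False)]
scores = train_word_scores(data)
```

Nothing in the second classifier mentions the words “great” or “awful”; its behavior comes entirely from the training examples it was shown, which echoes the point below that whoever controls the training data controls how the AI behaves.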

AI is not magic. It was designed and written by humans; we designed the math behind it. It is not a complete black box in which we have no idea what’s going on. In fact, the specific design of an AI is fairly immaterial: how it performs depends essentially only on the data it’s trained on, and we control that data. Every AI that exists right now was created by people and trained on data that people decided it should see. They said, ‘I’m going to get all these books together, or I’m going to scrape the web, to create this data set that the AI is going to train on.’ The data is what matters, and we as humans control the data that trains it. So at that level, we still control how these AIs behave.

UTG: What do you think are some of the biggest misconceptions surrounding AI?

Jeffrey Quesnelle: One of the bigger misconceptions I encounter is that AIs are constantly learning from people, and that is not the case. Take ChatGPT, for example. It’s disembodied in the sense that you talk to it once and it gives you an answer, and the next time you talk to it, it has complete amnesia about everything that happened before. It does not update or change itself at all based on the interactions people have with it. It’s as if it’s frozen in time, because it’s a brain that learns exactly nothing.

There are two modes an AI model operates in: training mode, where it learns from data, and inference mode, where you actually have it produce outputs. And when it’s producing outputs, it’s not learning at all. People have this misconception that as you talk to ChatGPT, it’s learning about you. That is not the case at all.
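The two modes can be illustrated with a toy sketch (hypothetical Python, assuming nothing about how any real system works internally): the model’s weights change only in training mode, and inference leaves them frozen.

```python
class ToyModel:
    """Toy illustration of the two modes an AI model operates in."""

    def __init__(self):
        self.weights = {}  # everything the model 'knows'

    def train(self, corpus: str):
        # Training mode: the model updates itself from data.
        for word in corpus.split():
            self.weights[word] = self.weights.get(word, 0) + 1

    def infer(self, prompt: str) -> str:
        # Inference mode: produce an output; the weights are never touched.
        known = [w for w in prompt.split() if w in self.weights]
        return f"recognized {len(known)} of {len(prompt.split())} words"

model = ToyModel()
model.train("the quick brown fox")     # learning happens here, once
frozen = dict(model.weights)
model.infer("a quick question")        # talk to it once...
model.infer("another quick question")  # ...and again: complete amnesia
assert model.weights == frozen         # inference changed nothing
```

However many prompts you send, `infer` never writes to `self.weights`; the “brain” is frozen until someone deliberately runs another training pass, which mirrors the point about companies later retraining on recorded interactions.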

Now, a company like OpenAI can record all of the interactions with its users and later use them to train a new model. In some respect, that new model has incorporated some information from you; but ultimately, the AI is not learning from you as you use it. This idea of an AI improving itself autonomously, going out and figuring out how to train itself more, is not something that currently exists.

UTG: So you’re saying AI won’t ‘take over the world?’

Jeffrey Quesnelle: I talk to a lot of people about my work in AI, and it usually goes down one of two paths. They say, ‘Well, that’s kind of interesting and kind of scary,’ and they ask me if the AIs are going to take over the world. That seems to be the first thing people jump to. I always tell them it’s a very unlikely scenario.

The big thing I really want to discourage is jumping to the fear of AI becoming sentient and taking over. I would remind people that almost anything could happen, right? You can take almost any scenario in the future and imagine some chain of events that might lead to it. What we need to ask ourselves is what is likely to happen and what the necessary conditions for it would be. Before we get to the future we fear, there must be all these intermediary steps. We just need to be sensible about regulating and legislating those intermediate steps as they happen.

I like to tell people that saying AI could become sentient and take over the universe is like if humans had never invented fire, and someone showed you fire, and you said, ‘Yeah, but what if you can make the atomic bomb with this?’ And then they said, ‘We need to legislate fire right now. No fire for any humans, because you could make an atom bomb with this fire.’ Yes, functionally, that is true: the creation of the atom bomb is in a direct lineage of technological advancement that began with the cultivation of fire. But we shouldn’t outlaw fire because of some distant path of causality. Rather, we do what we did then: as the issues become realistic and we see evidence of problems, we make it illegal to enrich uranium. We do the sensible things at the time that have a likelihood of actually stopping the very specific thing we’re trying to stop, versus this nebulous sci-fi fantasy of what could happen in the future.

UTG: What, then, are the real dangers of AI?

Jeffrey Quesnelle: What’s on the horizon in AI, likely in a year or two, is the ability to do voice and images together with text. Right now, ChatGPT is a text chatbot. All of the research right now is on what are called multimodal models, where AIs have been trained on audio, text, and images together, and can produce all of these things. So you’ll start to see videos that are highly realistic, and there will be a period of time, coming pretty shortly, when people are going to have to develop a pretty thick skin about not believing what they see. I don’t know what effect that’s going to have on people who don’t have the sensibilities to discern what’s real and what’s not. This is already in the research community and will come to market more and more throughout this year. I think by Christmas this year, there are going to be highly advanced multimodal AIs able to do voice, text, and images at levels that are pretty difficult to detect. What comes out of that remains to be seen. But people should be aware that they won’t be able to trust as much of what they see, unfortunately.

What I do tell people is that the things they ought to be concerned about when it comes to AI are the larger, softer social changes that something like this could bring. That, to me, is the actual danger: what this will do to society at large. And it can be either a good thing or a bad thing.

UTG: How do you see this changing society?

Jeffrey Quesnelle: The Industrial Revolution was either a good thing or a bad thing depending on how you look at it. It drastically improved the quality of life. Before the 1870s, nearly 90% of all Americans were employed in agriculture in some way, which meant that they were essentially working so that they could eat and meet their basic needs. It wasn’t until the Industrial Revolution that, over the course of roughly one generation, from the late 1800s to the start of World War II, 90% of those jobs were eliminated, in the sense that only 10% of the population remained in food production.

What that means is that the way of life for the majority of Americans was erased because of technology, and it had to be replaced. It is likely that something similar will start to occur over the next several decades as AI becomes more prevalent.

We can ask ourselves: what are the good things coming out of this technology, and what are the bad pieces? It will probably commoditize some intellectual labor. Some things that previously required a person to do intellectual work, whether writing or summarizing a document, will no longer require a person to do them. This sounds a little scary, but what’s interesting is that this has already happened many times throughout human history, even within our own lifespan. We’ve already seen computers take what was previously intellectual work that humans had to do and make it so that humans don’t have to do that specific class of work. This is likely to happen again with a new class of tasks. So rather than making specific predictions, we should think about the macro trends we’ve seen historically when these classes of changes happen in society.

One thing I think we’ll see is an increase in the value of authenticity. We’re going to continue to commoditize certain classes of things that were previously considered proof of humanity, right? I think we’ll see an increased value in the authenticity of the human experience and the verifiability of that authenticity of the human experience. You could see artistic representations and artistic classes becoming even more important because you’ve commoditized some aspects of it. But those things which are truly their own and have an authenticity to the human experience may actually even increase in value. 

UTG: When you say ‘proof of humanity,’ I feel like that’s probably at the core of a lot of the fear surrounding this for many people. Do you have any examples of how AI has failed the proof of humanity?

Jeffrey Quesnelle: I would say almost all of it. We have not seen a widespread demonstration of any semblance of sentience. I say this as someone who loves the technology and thinks it can be transformative, but I would not overrepresent what it’s been able to do. I know there is a lot of fear around what AI could be, but I would say we’re still pretty far away from a world where that is an immediate concern. Still, it’s good to start thinking about it now.

UTG: Do you have any concerns that human flourishing might not be at the core of technological advancements?

Jeffrey Quesnelle: I think that technologies that have augmented the average person have been a net benefit throughout history. When humans were able to harness agriculture and animals to help them plow the fields, that augmented the average human’s ability to do more and better work, all the way through the Industrial Revolution. The creation of automated machines and the ability to harness oil and energy allowed the average person to do more with their time and their skills. All of these things, I think, have been net benefits. But ultimately we need to ask ourselves: what is human flourishing? We have to approach that from a Catholic perspective, because the world has a different answer to that question.

I would argue that the world has many different answers, and they don’t even know what their answer is. That’s where the confusion and the dangers ultimately lie. I think that God designed a universe that encourages cultivation and advancement. From the beginning of Genesis, we’re told to be fruitful and multiply, and to fill the earth and subdue it. So already we see humans taking the fruits of nature and using them to build themselves up in some aspect that is at least not morally evil. Right? It may be a net neutral, but I would even argue that it is actually a good, that God constructed the universe in the way it is.

He created electromagnetism as one of the four fundamental forces of the universe. So we don’t see God restricting the advancement of technology as an evil in itself. Even our Gospel reading this last week was the parable of the talents. We’re told to take what we have and make more of it within the context of God’s ultimate plan for us. And God’s ultimate plan for us is to have abundant life through Jesus, right? That’s where we part ways, perhaps, with other people in the world, because some would say that human flourishing just means more wealth and more resources, and we believe flourishing is an abundant life in Jesus.

If we’re not directing all of that toward abundant life in Jesus, then it’s all for naught. In fact, it’s all for evil. Think of the Tower of Babel. In Genesis, we see the ancient people building this tower up to heaven, a metaphor for construction and technology. Building a tower at that time in the ancient Near East was very difficult; it took enormous labor. So it is a representation of man taking the world and building something out of technology from it. And God does not say that building the tower is evil, right? The tower itself was not the evil thing. The evil thing was their desire to reach heaven and make themselves like God. When we try to subvert the will of God with technology and view that as the definition of flourishing, that’s where I think there’s trouble. But I think we can use the technology we have today as long as it ultimately drives us toward a more fulfilling life in Jesus.

UTG: What should a Catholic be aware of?

Jeffrey Quesnelle: I would tell people that the things we need to look out for are things that will exacerbate the dehumanization that already exists within our society: things that blur the lines between what is a person and what is not. As Catholics, we have things that we hold sacred. While the Church has not offered definitive guidance such as, ‘Here’s the red line that we don’t cross,’ the larger body of Catholic social teaching can give us the broad strokes we need to be thinking about. The dignity of the human person is always at the center of Catholic social teaching. Insofar as we believe that Catholic social teaching will bring forward the best culture and the best humanity, and insofar as AI could move us further away from that, those are the areas we need to be concerned about.

This could be something such as AI girlfriends, things that further remove people from a place where they should be, which is engaging with their fellow man and having real personal relationships with them. Insofar as AI can move people away from that, that will be the sort of danger that will cause society to be worse off. 

The core questions I think that we’re going to be butting up against would be the question of the importance and the specialness of human life. We as human beings have been made in the image and likeness of God. We have a soul that is unique and special, imbued by God at the time of creation, and it cannot be replicated by any aspect of technology. And that is something that we need to be able to synthesize. 

But what we shouldn’t do is proceed from a place of fear. Equipped with knowledge and with that certainty, we can tackle this, use this technology for the betterment of our brothers, and know where the boundaries are.

UTG: How do you think Catholics should respond?

Jeffrey Quesnelle: As Catholics, we’re called to be active participants within the scientific community. We are not meant to be hermits who say, ‘Oh, well, this bad thing’s going to happen, so we’re not going to engage with it.’ For someone like me, who can work in this field and make contributions, I find it imperative that Catholics be at the forefront. I need to be there saying, ‘Here’s what we need to do,’ because sticking our heads in the sand is not the proper response.

UTG: If you had to give your thirty-second pitch to an AI skeptic, what would you say?

Jeffrey Quesnelle: I would say it can help make your life easier and better, and that’s really the measure of any technology or product from a business standpoint—does it make your life easier and better? If it does, then great. If it doesn’t, then why are we even talking? So that’s the very first pitch, that it is not here to replace you, it is not here to take over you, it’s here to make your life easier and better.

UTG: Do you have some examples of how AI will make our lives easier and better?

Jeffrey Quesnelle: I think that it will benefit us in ways the skeptics don’t see. There will be massive advancements in the education system, for example, because of AI. Imagine a teacher who knew everything and could spend all day with just one student. There will be tutors specifically tailored to individual students, that learn along with them, know what their problems are, and can ask them novel questions that cut to the heart of where they’re having trouble with a concept. That is very much within the realm of where we are with AI today.

In healthcare, there are a lot of ways we can use AI to augment human abilities in decision-making and diagnosis. You’d want your doctor to catch the one-in-a-million thing that they had simply never been exposed to but that applies to you. AI has the ability to pinpoint very specific, rare conditions that the particular doctor you happened to see would not have been able to catch.

UTG: Any final words?

Jeffrey Quesnelle: Through Jesus, we have nothing to fear. It’s all going to be okay.

UTG: Any recommended reading if we want to learn more?

Jeffrey Quesnelle: Encountering Artificial Intelligence, by the AI Research Group of the Holy See’s Centre for Digital Culture.