Lex Cordis Caritas - The law of the heart is Love

by Bishop Thomas John Paprocki

My dear brothers and sisters in Christ,

Recently, the priests at the Cathedral rectory and I watched the classic movie, 2001: A Space Odyssey. I had seen this film in the theater after it first came out in 1968 and wanted to watch it again in light of the many news reports and articles being published about the rapid advances being made in the field of artificial intelligence (AI). I do not want to spoil the story line for you in case you have not seen the movie yet and are thinking of watching it, so suffice it to say that one of the main characters is HAL, an acronym that stands for Heuristically programmed ALgorithmic computer.

According to the story line, HAL became operational on Jan. 12, 1992, at the University of Illinois at its Coordinated Science Laboratory in Urbana. HAL is responsible for controlling all the systems of the Discovery One spacecraft on its mission to the planet Jupiter. HAL's "eye" is depicted as a camera lens containing a red or yellow dot. HAL interacts with the ship's astronaut crew, speaking to them in a soft, calm voice and a conversational manner. Although HAL is programmed to think and feel like a human being, problems emerge when he starts to act all too much like one, including showing the human proclivity to do evil deeds. I will leave the specifics for you to watch or read about elsewhere.

It is amazing that this movie was originally released over half a century ago, before there were all the personal computers and smartphones that we have at our fingertips today. At the time, this seemed to me to be just another wild imaginary tale of science fiction. Entertaining, yes, but nothing real to worry about.

Fast forward 55 years and now conversations are making casual references to artificial intelligence and ChatGPT, another acronym, in which "GPT" stands for Generative Pre-trained Transformer; it is a "chatbot" developed by OpenAI and released in November 2022. Transformers are specialized algorithms for finding long-range patterns in sequences of data. A transformer can learn not just to predict the next word in a sentence but to compose an entire essay. This is not just finding data or quotes in a search engine, but generating and composing text. While ChatGPT has gained attention for its detailed responses and articulate answers across many domains of knowledge, it has also been found at times to provide factually incorrect responses with confident self-assurance in its own accuracy, similar to HAL in 2001: A Space Odyssey.

ChatGPT is already being used for nefarious purposes. One news story last month reported that employers are catching job applicants using ChatGPT to dress up their job applications and write résumés for them. Another person wrote that he asked ChatGPT for information about himself and was given a false description that accused him of professional misconduct. He wondered if he could sue the computer for libel!

In his August 2021 article in First Things on "The Threat of Artificial Intelligence," Ned Desmond, a senior executive in the technology sector, wrote, "The technologies referred to as 'artificial intelligence' or 'AI' are more momentous than most people realize. Their impact will be at least equal to, and may well exceed, that of electricity, the computer, and the internet. What's more, their impact will be massive and rapid, faster than what the internet has wrought in the past 30 years. Much of it will be wondrous, giving sight to the blind and enabling self-driving vehicles, for example, but AI-engendered technology may also devastate job rolls, enable an all-encompassing surveillance state, and provoke social upheavals yet unforeseen. The time we have to understand this fast-moving technology and establish principles for its governance is very short."

Concerned about such threats, Elon Musk and several other tech executives and artificial-intelligence researchers have called for a pause in the rapid development of powerful new AI tools, saying that a moratorium of six months or more would give the industry time to set safety standards for AI design and head off potential harms of the riskiest AI technologies. Others have called for a longer AI pause and for government regulation.

In an intriguing essay in The Wall Street Journal on April 20, 2023, entitled "Artificial Intelligence in the Garden of Eden," Peggy Noonan wrote that the logo of the Apple computer company, an apple with a bite taken out of it, made her think of Adam and Eve in the garden and their fall as described in the Book of Genesis. She warns that "developing AI is biting the apple. Something bad is going to happen. I believe those creating, fueling and funding it want, possibly unconsciously, to be God and on some level think they are God."

That's a helpful warning. While the Catholic Church does not view technology as evil, we recognize that technology can be used for good as well as for evil purposes. The late Pope Benedict XVI in his last encyclical letter, Caritas in veritate, dedicated a whole chapter to the use of technology, which he characterized as a form of stewardship in response to God's command to till and keep the land (cf. Genesis 2:15). He advised: "Technology is highly attractive because it draws us out of our physical limitations and broadens our horizon. But human freedom is authentic only when it responds to the fascination of technology with decisions that are the fruit of moral responsibility. Hence the pressing need for formation in an ethically responsible use of technology" (Caritas in veritate, 70). We would do well to heed this sage advice.

May God give us this grace. Amen.