With the rapid progression of AI development over the last three years, one thing is sure: the world will never be the same. Society will continue to implement and embrace artificial intelligence, and the coming generations will never know life without it. This will be positive and negative for the world. As Christians, how are we to think about artificial intelligence? Are we to embrace it wholesale, or abstain from its benefits?
First, it is important to note that Christians cannot write off AI as rotten to the core. Technological development is part of the creation mandate. Advancements and new tools are generally morally neutral, with the capacity to be used for good or evil, except in some extreme cases. The internet is a perfect example. The implementation of a “world wide web” has been used for immense good over the last thirty years, and for evil in the same span of time. We should expect artificial intelligence to follow a similar trajectory. Incredible good and evil will flow out of its implementation.
The key then for Christians engaging with any kind of technology is discerning how to use our new advancements wisely. Christians use the internet in a wide variety of ways to further evangelistic and ecclesiastic ends that bless millions. But this does not happen by accident. The use of technology for God’s glory requires thoughtfulness. So too we must think carefully about stewarding artificial intelligence to magnify God, and keep ourselves from error. In short, we must be open to AI, but think very carefully about how we are to avoid sin or foolishness in using it.
Substantial thought concerning this new advancement cannot be completed in this short article. That is a task for the church, which must collectively discern how to proceed in wisdom over the coming years. What can be done here is to offer a reflection on one particular point of intersection between Christianity, AI, and learning.
Learning and the Creation Mandate
In the beginning, when God commands Adam and Eve to “rule” and “subdue” the earth, he is calling them to a life of creation, work, and learning. The Proverbs, at multiple points, exhort those who fear the Lord to “increase in learning, and… obtain guidance” (Prov. 1:5). The four young men in Daniel are said to be given “knowledge and understanding of all kinds of literature and learning” directly from God (Dan. 1:17). Beyond this, several of the New Testament epistles admonish the church to excel in their respective professions, working hard, as one would work for God himself (Col. 3:23-24). Implicit within this exhortation is an expectation of learning over time. One can hardly excel in a craft or truly work hard without growing in cognitive capacity and committing to learning. There is, therefore, no question that Christians should value learning, even in subjects outside of theology or doctrine. Learning is a continuation of our worship and service unto God.
With the advent of AI, however, the virtue of Christian learning has never been more threatened. Many use AI models to conduct research, but these language models are increasingly being used to produce thoughts that are passed off as human and original. Christians and non-Christians alike are allowing AI to think on their behalf. This can be corroborated anecdotally and empirically. I completed my seminary training just as the first GPT models were being opened to the public. At my Christian university I personally witnessed classmates outsource essays, presentations, discussion posts, and more to an AI program. That was years ago. My wife currently works as a teaching assistant at a Christian institution, and she deals with AI-produced work on a daily basis.
Now, the real question is: “what’s so bad about that?” After all, everyone has different needs, competencies, and skill levels when it comes to learning. Is there anything seriously problematic with this current educational climate? The answer is yes, for two reasons. Using artificial intelligence to write for you, or to create on your behalf, is:
- Spiritually problematic.
- Pragmatically detrimental.
Spiritual Problems
To put it simply, over-reliance on AI, or using it to create for you, is spiritually problematic because it is fundamentally dishonest. Here, I’m specifically thinking of anything created by a language model that is then submitted, formally or informally, as your own original thoughts. If Chat GPT writes material that is then submitted under your own name, there is a dilemma of truthfulness. This is no different from plagiarism, or passing off someone else’s sermon as your own.
The only difference between what has traditionally been understood as plagiarism and the use of an AI model is that the material is not created by another person who could call you out for dishonesty. The content created by the program is somewhat unique to you. Most commonly, the user prompts the engine toward a certain response. This can yield a feeling of ownership and dispel thoughts of plagiarism or dishonesty – but this is foolishness.
We find ways to rationalize our behavior. But at the heart of God’s Law stands the command “do not lie to one another” (Col. 3:9). God is truth. He calls his children to walk in truth. Yes, using AI-generated content is not necessarily taking material from someone else, but passing it off as your own original thought is deceitful and transgresses God’s law. In the end, we shall all be called to give an account for every idle word spoken (Mt. 12:36). Will we not also be called to account for every word we claimed was ours?
In the eyes of God, passing off the work of an AI model as your own original thought is no different from committing fraud or lying to your family. Deceit is the native language of Satan (Jn. 8:44). If we are willing to use technology to forge essays, presentations, projects, and more, then hard questions must be asked: are we more like the Father, who is Truth, or the adversary, who delights in shadow?
It is also worth considering whether over-dependence upon AI programs for creative activities violates the creation mandate in an ontological manner. God created human beings to rule and subdue the earth. Humanity is called to create, even as they themselves are created in the image of God. The advancement of technology has, up to this point in history, only aided men and women in their creative capacities. Now that new models can take up the mantle of creation, we have to ask hard questions: at what point does it go too far, leaving human beings no longer living to faithfully rule, subdue, and grow?
Practical Problems
The spiritual reality of using AI as a crutch for learning is bad enough. To press the issue further, empirical studies show that over-reliance on artificial intelligence models is tantamount to intellectual suicide. In short, writing your essay with Chat GPT is not only spiritually bankrupt; it will damage your mind for years to come.
Nataliya Kos'myna, a research scientist at MIT, recently published the results of a study on the impact of AI in education, specifically in writing. The study focused on three distinct groups: students who used Chat GPT, students who used a search engine, and students who used only their own brains in writing an essay. The findings are frightening.
Kos'myna and her team discovered that students using AI produced essays that were largely homogenous, showing very little variance on a given topic when compared to one another. In addition, judges could easily identify the biases of the Chat GPT model based on patterns in its output. Artificial intelligence models are trained on data, so the content they produce will largely be biased toward the pool of data on which they were trained. In short, essays written with significant assistance from Chat GPT largely regurgitated similar information – nuance, complexity, and diverse perspectives were leveled out by the program.
More concerning still, each phase of the testing involved in-depth brain scans that helped clarify what was happening on a neurological level. In the end, Kos'myna and her team concluded that “Brain connectivity systematically scaled down with the amount of external support.” Physiologically, the connectivity of the brain changed depending upon the amount of outside help offered in completing the essay. Tracking a metric describing the overall usage and connectivity of brain waves, the team showed that “the Brain‑only group exhibited the strongest, widest‑ranging networks [of neural connectivity],” while “LLM assistance elicited the weakest overall coupling.” Compared to the brain-only group, individuals using Chat GPT to help with their essay experienced a 55% drop in their neural connectivity metric. Fifty-five percent.
Within the Chat GPT–assisted group, 83% of the participants reported serious difficulty when asked to reproduce a quote from their own essay from memory. The same participants were much less willing to profess ownership of their essay, while brain-only participants unanimously claimed ownership over their finished work. The final phase of the testing swapped the two groups, giving the Chat GPT participants no access to external tools and giving the brain-only group access to an AI model for one final essay. Those who started with AI performed extremely poorly, experiencing difficulty at multiple points. Those given an AI model after first beginning without such assistance excelled. More will be said about this in a moment.
In conclusion, the research team offered this concerned reflection:
“AI tools, while valuable for supporting performance, may unintentionally hinder deep cognitive processing, retention, and authentic engagement with written material. If users rely heavily on AI tools, they may achieve superficial fluency but fail to internalize the knowledge or feel a sense of ownership over it. When individuals fail to critically engage with a subject, their writing might become biased and superficial. This pattern reflects the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking.
Cognitive debt defers mental effort in the short term but results in long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity. When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalizing shallow or biased perspectives.”
The data shows that nothing is threatened more by the thoughtless use of artificial intelligence than intelligence itself. These pragmatic concerns arising from the foolish implementation of AI technology vindicate God and his natural law. The created world, including our own physical bodies, is subject to God’s design, which includes natural consequences for foolish actions. This theme, again, runs throughout Proverbs. If we do not think carefully about how to use our technology, we will run the risk of sinning against our Creator and rightfully bearing the consequences – even within our own brains. Writing about the impact of AI on writing, Paul Graham offers a diagnosis that rings true:
“This situation is not unprecedented. In preindustrial times most people's jobs made them strong. Now if you want to be strong, you work out. So there are still strong people, but only those who choose to be. It will be the same with writing. There will still be smart people, but only those who choose to be.”
What Now?
So then, with these spiritual and practical concerns identified, what is a Christian to do? The frightening nature of the data, and the spiritual threat of sin against God may scare us to the point where complete separation from this new advancement feels warranted. As outlined previously, however, there are other ways forward. Christians can thoughtfully implement AI into their own learning.
Understanding what faithful usage of AI models looks like depends heavily on the categories of usage to be avoided, which have just been outlined. Using artificial intelligence to create on your behalf, with no disclaimer, is spiritually and physiologically concerning – but generally speaking, AI can be faithfully used in any capacity where it helps facilitate learning and creation, without usurping humanity’s role in the process.
For example, Chat GPT can help facilitate research on a given topic at a high level. In the same way that search engines brought about a new era of ease in accessing information, AI models can similarly streamline the acquisition of data. This is applicable at multiple points in the life of a learner. Artificial intelligence may help compile relevant sources to be further investigated, lead an individual to helpful resources as a starting point, or even directly suggest possible methodologies in approaching a problem. There is a massive difference between asking a program to write an essay for you, and asking it for resources to aid your writing. The former is dishonest. The latter is faithfully leveraging technology to facilitate learning.
This is the same place that tutors, books, and then search engines have occupied within the learning process. The only difference is that AI protocols are radically more efficient and accessible – just as search engines are when compared to books. Each resource still has its place, mind you. This is not to say that AI will render all other aids to learning obsolete, just as the printing press did not do away with valuable teachers.
Consider the differences between the following requests:
“Please write me a cover letter with a professional voice that highlights my skills as a salesman,” versus, “Please conceptually identify the best practices for writing a professional cover letter.”
Or “Write me an essay on the impact of Paul’s second missionary journey,” versus “Show me ten key resources related to Paul’s second missionary journey.”
Beyond facilitating research, AI can also aid learning in an entirely Christian manner by examining original work in a way that spurs learning forward. This capacity is unique to AI, and has never before been possible. For example, when writing an essay, revision and editing are key. A large component of making good edits to your own work is accessing outside perspectives that provide feedback and suggest changes. Consulting a respected voice has always been recognized as part of the learning process. As we receive feedback, our minds stretch in considering whether or not we will implement the proposed changes. We war with ourselves over wording and structure, and often a comment will be rejected only to open up some other avenue of the mind to walk down.
Artificial intelligence models represent another voice in the crowd that we can go to for feedback and suggestions. They cannot replace human voices – AI models, like humans, are biased, neither omniscient nor always correct. But they can provide high-quality questions to consider. This is fascinatingly corroborated by the aforementioned MIT study. Individuals who began an essay without any AI assistance flourished when a model was made available near the end of their project.
This too requires thought and limitation, but the principle is helpful. If an individual becomes reliant on a program’s feedback, blindly accepting every proposed change, then nothing is truly gained. The intention is to use this new technology as a provider of diverse perspectives. AI models can even be prompted to react or comment on work from distinct perspectives – liberal versus conservative scholars, for example. This capacity opens up new avenues of learning, and in the right hands can be used to develop a critical mind like never before.
I’ve personally implemented AI in such a manner. After finishing a writing project, I will ask for feedback from people I trust, and as part of the editing process I will ask my preferred AI model for conceptual feedback or suggestions from a specified perspective, without allowing it to make any direct edits to the work. Sometimes I throw its comments out, finding no value in them or judging the program simply incorrect. Other times, its reflections prompt me to see a conceptual hole in my work, or I find a suggestion helpful in a different context. I will allow Chat GPT to challenge my own thinking, leading to personal growth – but I will never let it think for me.
With considerable thought and care, Christians can keep technology in its rightful place. Engaging with AI in learning requires clear boundaries. Its role is to aid; humanity’s role is to create and to utilize. When we implement advancements to facilitate learning without supplanting our role as human creators, we reap the benefits of the creation mandate and God is glorified.