Serving Others Using Generative AI

AI is often marketed as a servant for us. What if we think of it instead as a tool to help us serve others? Instead of generating content for us, could we use it to reflect thoughtfully about how our words will affect others? Instead of seeking answers, could we seek to understand others better? Could we use it to enrich cross-cultural relationships instead of automating solo tasks?

The large language model (LLM) technology underpinning chatbots like ChatGPT is a gift of God that we must cultivate with his wisdom. It’s an outflow of the glorious structure with which God created the world. He made a world with rules and patterns—day and night, seasons, the basic laws of electromagnetism. These rules are not boring or stifling; they are the framework within which the wondrous diversity of life flourishes. Patterns and regularities abound for us to learn—and God expects us to learn from them: “Go to the ant, O sluggard; consider her ways, and be wise” (Prov 6:6); “[you say] ‘It will be stormy today, for the sky is red and threatening’” (Matt 16:1-4). God built into the fabric of creation the diversity of patterns that would be needed for both human and machine learning. When a chatbot responds to our ideas, God’s rules enable the computation to happen. The diverse patterns that the model captures from the world make it possible for those machine computations to represent our thoughts and relate them to others’ written experiences, and God’s common grace has provided countless examples of helpful interactions between humans that the machine can echo in words that might be helpful to us.

Yet the use of this technology calls for wisdom and virtue. Generative AI is unprecedented in its ability to deceive.1 Dominant narratives encourage reductionistic perspectives (e.g., writing is “just” words), and dominant uses mimic people’s work without credit and replace human connections. We must not be ignorant of the schemes of the devil in this technology.

Nevertheless, if we use it with wisdom and discernment, generative AI presents many opportunities for the Church. I’ll start with a few examples of how I’ve used generative AI in my own work, then draw some implications that you might be able to take into your own uses—whether you’re developing software like I do or just trying to use the tools that others have provided in ways that thoughtfully serve others.

Linguistic Hospitality

A few people who are not comfortable in English regularly attend my English-speaking church in Grand Rapids, Michigan. I saw an opportunity for us to use technology to welcome them better. We now regularly provide a translated slideshow (including song lyrics) and near-real-time translation of notes (mostly sermon notes) that a volunteer takes for them on a laptop, aided by real-time English captions that our friends can also watch, since they are learning English. If you want to know the backstory of how I built this, check out my blog2 or contact me.

This system allowed us to be hospitable for the parts of the worship service that were prepared in advance, but it didn’t help for the sermon. Our pastor sometimes goes off-script, so we needed a way to translate in real time. I didn’t trust an automated transcript to be faithful enough to translate, so I wanted a human in control of the process.

I built a system3 in which a person can type sermon notes in an outline format, and the AI translates each outline point as it’s typed.
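
To make the idea concrete, here is a minimal sketch of translating each outline point as it is typed. It is not the actual code from my project; the helper name, prompt wording, and model name are illustrative, and it assumes the OpenAI Python library with an API key in the environment.

```python
# Illustrative sketch only; not the live-outline project's actual code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def translate_point(point: str, target_language: str = "Spanish") -> str:
    """Translate one sermon-outline point, preserving names and Scripture references."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system",
             "content": (f"Translate the sermon outline point into {target_language}. "
                         "Preserve proper names and Scripture references. "
                         "Return only the translation.")},
            {"role": "user", "content": point},
        ],
    )
    return response.choices[0].message.content.strip()


# As the note-taker finishes each point, it is sent off and shown alongside the English.
for point in ["God's grace is sufficient (2 Cor 12:9)", "We boast in our weakness"]:
    print(translate_point(point))
```

In the real system, each translated point appears on a shared screen as the note-taker types, so the human stays in control of what gets translated and when.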
We have found that these tools provide benefits beyond language barriers. For example, one person with hearing difficulty follows along with the transcript to verify their understanding. The careful translation workflow also helped us write an English translation of the lyrics of a French song. And the transcript and outline may help those who need to step out for a few moments (e.g., to care for children) catch back up when they return. These secondary benefits shouldn’t have surprised me: the principle of Universal Design reminds us that improving accessibility for some also helps others. For example, people with strollers, suitcases, and grocery carts use curb cuts that were first designed for wheelchairs, and many hearing people also use closed captions.

I’m keen to share the fruits of our work; please contact me if you want to try something like this in your ministry contexts.

Tools for Thinking in Writing

I teach computer science at Calvin University. A group of undergraduate student researchers works with me to develop tools that help people care for each other through thoughtful and clear communication. We focus on academic writing because it is the kind of writing our undergraduate students have the most experience with.

We have developed a range of tools that support and encourage thoughtful cognitive engagement in text authorship. A core principle is that our AI tools are never given direct access to place words in a person’s document; some engagement from the writer is always required.4 For example, the tools might ask questions, summarize what points seem to have been raised so far, or highlight places where the writer might focus to enact edits they want to make. Even when the tools present text that the writer could copy and paste into their document, the text is only given piecemeal. The result, we hope, is that despite receiving substantial help from AI, the writer is still unambiguously the author of their words and responsible for the content.

Missions workers may be interested in trying out these tools to refine their communications with their supporters or to reflect on the cultural appropriateness and accuracy of their materials. The “Thoughtful” add-in for MS Word provides alternative perspectives, suggestions for next directions, and (coming soon) conversational help with reflection and revision.5 We also make available work-in-progress prototypes6 that help people make low-level edits to wording and sentence structure. Anyone interested in exploring these tools should contact me.

Check Output

Don’t trust the first output from the AI. It is trained to sound as though its outputs are carefully reasoned, but the way it forms them is more akin to improvisational freestyle rap. Even in summarization, it might miss the point, omit key information, or confabulate details. So my translation workflow doesn’t trust the first AI-generated output; it uses backtranslation and self-review to check the results. How might you avoid falling for false AI confidence? Could you build both AI and human review into your own workflow? And if you can’t verify the output, should you be using AI at all?
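
To illustrate the backtranslation idea, here is a minimal sketch: the model is asked to translate a candidate translation back into English without seeing the original, so a human can compare the two. The prompts, helper name, and example text are illustrative, not my actual workflow.

```python
# Illustrative sketch of a backtranslation check; prompts and names are examples only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def backtranslate(translation: str, source_language: str) -> str:
    """Translate a candidate translation back into English, without showing the original."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system",
             "content": (f"Translate the following {source_language} text into English. "
                         "Return only the translation; do not comment on it.")},
            {"role": "user", "content": translation},
        ],
    )
    return response.choices[0].message.content.strip()


original = "Come, everyone who thirsts, come to the waters."
candidate = "Venid, todos los que tenéis sed, venid a las aguas."
# A human reviewer compares the backtranslation with the original and flags any drift in meaning.
print(backtranslate(candidate, "Spanish"))
print(original)
```

The key design choice is that the comparison step stays with a person: the AI supplies a second look, but a human decides whether the translation is faithful.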

Reflection

Use generative AI to reflect, not (just) to generate. Too often we look to AI to shortcut our thinking. But to love our neighbors, we probably need to think about them—and about how our words and actions might affect them. So rather than incorporating AI’s outputs into what you make, you can use AI instead to get feedback and alternative perspectives on your own words. This is also practical intellectual humility: We’re asking for help understanding others’ perspectives. My team has built this value into our MS Word add-in (“Thoughtful”). My blog discusses several other “reflective mode” uses of AI, such as reflecting on emails that you’re about to send.7 How might you use AI to reflect on your own words?
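
As one hedged example of what a reflective prompt might look like (the wording is mine, not a prescribed template), you could ask the model to describe how a recipient might react to a draft rather than asking it to rewrite the draft:

```python
# Illustrative "reflective mode" prompt: ask for a reader's likely reaction, not a rewrite.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

draft = "Hi team, I noticed the report is late again. Please get it to me by Friday."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system",
         "content": ("Do not rewrite the message. Describe how a busy, well-meaning "
                     "recipient might feel reading it, and point out anything that could "
                     "come across as harsher or colder than intended.")},
        {"role": "user", "content": draft},
    ],
)
print(response.choices[0].message.content)
```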

Context

Use context strategically. Fundamentally, language models are trained to select words that fit a context. The clearer the context, the more useful the response. Sometimes it’s helpful to hide context from the AI: when verifying a translation, my translation workflow intentionally obscures the origin of the translation so that the LLM doesn’t simply defend its own decisions. But more context often helps. Especially in cross-cultural settings, more specific details about cultural and situational context may lead to better outcomes.8 Practically, you might include the following kinds of information in your prompts (a short sketch follows the list):
·    whom you are serving (cultural background, language proficiency, etc.)
·    what your overall goal/task is, what you’ve done already
·    why you’re pursuing the overall goal, why you’re doing this current task
·    how the output you’ll get will be helpful to you
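
Here is a minimal sketch of packing that who/what/why/how context into a prompt. The field names and wording are illustrative, not a prescribed template.

```python
# Illustrative prompt assembly: the context fields mirror the list above.
context = {
    "who": "new attendees whose first language is Spanish; intermediate English",
    "what": "translating this week's sermon outline; the slides are already translated",
    "why": "so they can follow the sermon in real time and feel welcomed",
    "how": "I will paste the translation into the live notes document",
}

prompt = (
    f"Audience: {context['who']}\n"
    f"Task so far: {context['what']}\n"
    f"Purpose: {context['why']}\n"
    f"How I'll use your output: {context['how']}\n\n"
    "Given that context, translate the outline point below, keeping the wording simple.\n\n"
    "Outline point: God calls us to hospitality (Rom 12:13)"
)
print(prompt)
```

However you assemble it, the point is the same: tell the model whom you are serving and why before you ask it to do anything.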

Formation

Using AI forms us. Our habits form us. When we use AI to generate content for us, this habit may form us into people who value quantity over thoughtfulness.9 My team developed our MS Word writing tool add-in to try to embed values of service, reflection, and love for our readers. Although many suggested uses of GenAI replace human interaction, they can be repurposed to help us serve each other better. How can you choose uses of AI that cultivate your love for God and neighbor?

Closing Thoughts

Generative AI, like technology more generally, is a tool that God has allowed us to use in obedience, or disobedience, to his commands. Let us choose to use it to obey the Great Commandment to love him and love our neighbor. Could you use it to help you think about how people will react to a message you’re about to send? Could you use it to include people more fully in your gatherings? Could you build a disposition of skepticism and verification into how you use this technology? And how is technology—whether generative AI or smartphones or any other technology—shaping your soul?

1     Alexander Meinke et al., “Frontier Models Are Capable of In-Context Scheming” (arXiv, January 14, 2025), doi.org/10.48550/arXiv.2412.04984; Jiaxin Wen et al., “Language Models Learn to Mislead Humans via RLHF” (arXiv, September 24, 2024), doi.org/10.48550/arXiv.2409.12822.
2     See more on my blog: kenarnold.org/posts/translation-workflow/.
3     github.com/kcarnold/live-outline/.
4     It may seem that this is a heavy constraint to impose, but in fact, we have found it to be somewhat liberating, as constraints and law can be when they are aligned with God’s design.
5     app.thoughtful-ai.com.
6     huggingface.co/spaces/CalvinU/writing-prototypes.
7     kenarnold.org/posts/reflective-ai/.
8     That cultural context information might also help when asking the LLM to evaluate a potential text. Some AI systems provide contexts (“Projects” in Claude) that you can pre-load with general background. Others (like ChatGPT) can be told to update their private notes about you to include this kind of information or will automatically infer it based on your prior conversations.
9     Kate Lucky, “AI Will Shape Your Soul,” ChristianityToday.com, September 11, 2023, www.christianitytoday.com/ct/2023/october/artificial-intelligence-robots-soul-formation.html.

Author

KEN ARNOLD

Ken Arnold ([email protected], kenarnold.org) teaches computer science at Calvin University and serves at New City Fellowship Grand Rapids in music and technology.

All Scripture references are from the NIV.
