I was just granted access to the latest version of GPT-4 that allows for uploads of longer and a greater variety of files. A few thoughts.
The take-home essay is history. It’s all blue books and oral quizzes now. Even a weaker version of GPT-4 can now ingest a 300-page book. Even if the book isn’t in the training set, people can upload a PDF of it and get a B-level book report. (And no, AI detectors don’t work; they produce a lot of false positives.)
But enough inside undergraduate baseball: what implications does this have for gospel topics? We were already at the stage where you could upload a general conference talk and create an EQ lesson in seconds. These updates allow people to upload longer content. When I was in graduate school doing the Maxwell Institute seminar with the Bushmans, I spent an afternoon command+F-ing through the Journal of Discourses to find the pronatalist rhetoric of early Church leaders.
Of course, the problem with that approach is that pronatalist rhetoric can appear without the word “child,” and the word “child” can appear many times without any pronatalist rhetoric. So it took a lot of time, and to catch the really nuanced discussions and themes that lacked keyword triggers I would have had to slog through the entire Journal of Discourses. (Indeed, while reading Michael Quinn’s work I was always stunned at his ability to find juicy quotes in very out-of-the-way materials, and my only conclusion was that he basically read through the entire Church History Library looking for surprising factoids.)
I uploaded the entire first volume of the Journal of Discourses and asked it for pronatalist rhetoric. When it initially gave me a lot of quotes about how to raise children, I corrected it and said I was specifically asking about having children (not that the former isn’t important; it just wasn’t what I was looking for). It said there wasn’t much explicitly encouraging childbearing in volume 1, but it directed me toward, among other things, the section in the first volume that discusses Jesus being a polygamist, correctly noting that the emphasis on plural wives and children clearly had pronatalist undertones even if it wasn’t explicit.
Additionally, I uploaded the entirety of “Jesus the Christ” and asked it to give me quotes from sections where Jesus is angry. There was a little hallucination, with some quotes actually dealing with people being angry at Jesus, but the handful of quotes I hand-checked checked out and were actually in the book.
Finally, this isn’t relevant to the newest updates, but I am increasingly finding AI helpful in my scripture study. This evening I asked it for quotes from Jesus that are examples of him showing love to others. The other night I tried to remember which prophets literally asked God for death, and GPT-4 gave me the list along with quotations (can you name them off the top of your head? There are four; answers below).
So to summarize, the latest updates have the potential to greatly facilitate historical and social research as well as personal scripture study. Want to look at all references to ancient Egyptian or the pure Adamic language in early Church discourses and messaging? You can do that in moments now. Want to systematically track changes in Church policies across the years? Also mere moments. The caveats still apply: all work will have to be double-checked by a human because of hallucination. But the time spent on quality checks notwithstanding, we are quickly entering an era in which every Church historian, amateur or professional, has an army of research assistants on tap.
Answer: Moses, Elijah, Job (who, it specifies, is “not a prophet in the traditional sense”), and Jonah.
Well, we’re doomed, I guess. I’m not being entirely sarcastic. You can use AI to write a passable sacrament meeting talk, but that entirely undermines one of the important purposes of the exercise (getting the speaker to think and learn about the gospel). Or you can use AI to quickly mine the most awful quotes you can find in Church history, without worrying about context, and deploy them to full effect (which eliminates the silver lining of at least getting people acquainted with historical documents, people, and doctrine). I’ve seen multiple juicy quotes fail to live up to their promise once you track down the original source in the footnotes or dig into the context, but it was hard enough to get anyone to notice that already.
Also, if we’re stuck with blue books and oral quizzes now (and I think we are), the options for online teaching are a lot worse.
I think Stephen C has described a responsible way to use AI: let it identify relevant passages, but verify all of them to make sure they exist and are actually relevant in context, and leave all the higher-order thought (hopefully guided by inspiration) to the humans. Of course many people will be lazier than that, and their output will show it.
The statistician in me notes that this procedure has no mechanism for detecting false negatives (relevant passages the AI failed to recognize as such), that those false negatives will be systematically different from the passages it does identify (harder for the AI to detect as relevant, for whatever reason), and that this difference could be correlated with content (i.e., hard-to-identify passages could say something different than easy-to-identify passages). But I’ll let those who have been trained in doing this kind of research ponder whether that’s actually important.
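The selection-bias worry above can be made concrete with a toy simulation. Everything here is invented for illustration: assume each relevant passage is either “explicit” or “subtle,” and that a detector (AI or keyword search) catches explicit passages more often than subtle ones. Then the passages it misses are not a random sample; they skew subtle, so summaries based only on detected passages can be systematically off.

```python
import random

random.seed(0)

# Hypothetical setup: 10,000 relevant passages, half explicit, half subtle.
# All numbers here are made up for the sketch.
passages = [{"explicit": random.random() < 0.5} for _ in range(10_000)]

# Assume the detector catches 90% of explicit passages but only 30% of
# subtle ones (these hit rates are the illustrative assumption).
for p in passages:
    hit_rate = 0.9 if p["explicit"] else 0.3
    p["detected"] = random.random() < hit_rate

caught = [p for p in passages if p["detected"]]
missed = [p for p in passages if not p["detected"]]

def explicit_share(ps):
    return sum(p["explicit"] for p in ps) / len(ps)

# The detected set over-represents explicit passages; the missed set
# (which the verification procedure never sees) skews heavily subtle.
print(f"explicit share among detected: {explicit_share(caught):.2f}")
print(f"explicit share among missed:   {explicit_share(missed):.2f}")
```

Verifying every detected passage removes false positives but, as the sketch shows, says nothing about this skew in what was never surfaced.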