Discussion about this post

Rob Nelson

I can tell you exactly how the Khanmigo Jefferson chatbot responds to questions about Sally Hemings, at least how it responded back in October, when I wrote up my experience pretending to be a fifth grader trying to complete a homework assignment. It responds with an approximation of an encyclopedia entry. And if you ask it questions that border on the sexually explicit, it refuses to answer.

As you point out, the first problem with using LLMs as a tool for history education is confabulation. The second, as you also point out, is that LLMs generate language that has emotional and moral impact, but unlike writers and editors of encyclopedias, they have no agency or ethical understanding.

The third reason is that, unlike watching Bill and Ted, talking to one is deadly boring.

Sadly, the first two reasons are not likely to stop people from using them as educational tools. The third reason might.

Jason Gulya

I really appreciate this breakdown, John!

This use case has always felt ick (I know, a technical term), so I just never tried it.

I get your point. This exercise creates a caricature of the person, but our students will see it as authoritative.

A lot to think about...
