Abstract
This paper reports on the linguistic accuracy of five renowned chatbots, each evaluated by an ESL teacher who chatted with the chatbot for about three hours. The chat sessions consisted of a series of set questions and statements, judged to fall within the domain of an ESL learner, aimed at assessing the accuracy and felicity of the chatbots' answers at the grammatical level. Results indicate that chatbots are generally able to provide grammatically acceptable answers, with three of the five returning acceptability figures in the 90% range. When meaning is factored in, however, a different picture emerges: the chatbots often provide nonsensical answers, and the accuracy rate for the joint categories of grammar and meaning falls below 60%. The paper concludes that although chatbots, as “conversation practice machines,” do not yet make robust chatting partners, improvements in chatbot performance bode well for future developments.