Abstract
People's performance on knight/knave problems is deliberate. They make assumptions, draw deductive inferences from them, and evaluate the consequences of these inferences. In an initial paper on this topic (Rips, 1989), I proposed a model for a subset of such problems that depend on sentential reasoning. The main component of the model is a set of natural-deduction rules, drawn from prior work on propositional inference. This natural-deduction framework seems well suited to explaining the reasoning that subjects display on these problems, since it incorporates a mechanism for making assumptions and following them up. Moreover, the number of assumptions and rule applications needed to solve a problem yields an intuitively appealing measure of how difficult the problem should be. In accord with this prediction, the experiments found increases in error rates and reaction times as a function of the assumptions-plus-inferences measure. In their note, Johnson-Laird and Byrne sketch a possible alternative. Their account posits five processing strategies tailored to this problem domain and a mechanism for evaluating sentential arguments based on mental models. The mental-model component is a variation on the usual truth-table method, in which individual models correspond to truth-table rows. The main prediction of this component is that the more models subjects must consider, the harder the problem. However, the experiment reported here found no evidence for this prediction: problems with larger numbers of models do not yield higher error rates than those with fewer. What does cause difficulties for subjects is scope relations among connectives, a fact that inference-rule theories can easily explain. Given these findings, it is not surprising that the predictive burden for knight/knave problems must be carried by Johnson-Laird and Byrne's strategies, rather than by mental models.
These strategies control the order in which subjects consider parts of the problem, and they provide possible stopping points. There are, however, several difficulties with these strategies. Of their four new strategies, Johnson-Laird and Byrne offer no evidence at all for two of them. Of the remaining two, only one accounts for a significant proportion of the variance when allowance is made for confounding variables. Moreover, all four strategies are ad hoc, rather than being derived from some more general theory. Certainly, much remains to be done in filling out the picture of how such problems are handled, as both Evans and Johnson-Laird and Byrne point out.
Funding Information
  • James McKeen Cattell Fund