Reading comprehension tests employ a variety of test formats. Two commonly used formats for measuring reading comprehension are sustained passages followed by questions, and cloze items. Individual differences in handling the peculiarities of a test format can constitute a source of score variance. In this study, a bifactor Rasch model is applied to separate the cloze-specific variance in a reading comprehension test composed of sustained passages (plus questions) and a cloze passage. The results are compared with a unidimensional Rasch model in which all items load on a single dimension. Including the cloze-specific dimension, that is, the method factor, improved model fit and yielded substantially lower item difficulty estimates for the cloze items. The findings indicate that reading comprehension tests comprising sustained passages and cloze items are not unidimensional but contain a cloze-specific nuisance dimension that contaminates the latent construct variance.
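The contrast between the two models can be sketched through their item response functions. The following is a minimal illustration, not the study's estimation code: under the unidimensional Rasch model every item depends only on general reading ability, whereas under the bifactor model a cloze item's logit additionally receives a cloze-specific (method) component. All parameter values below are hypothetical.

```python
import math

def p_correct_unidimensional(theta, b):
    # Unidimensional Rasch model: probability of a correct response
    # depends only on general ability theta and item difficulty b.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def p_correct_bifactor(theta, eta_cloze, b, is_cloze):
    # Bifactor Rasch model: cloze items additionally load on a
    # cloze-specific method dimension eta_cloze; non-cloze items
    # reduce to the unidimensional case.
    logit = theta - b + (eta_cloze if is_cloze else 0.0)
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical person and item parameters for illustration only.
theta, eta_cloze, b = 0.5, 0.8, 1.0

p_uni = p_correct_unidimensional(theta, b)
p_bif = p_correct_bifactor(theta, eta_cloze, b, is_cloze=True)
print(round(p_uni, 3))  # probability ignoring the method factor
print(round(p_bif, 3))  # probability with the cloze-specific component
```

The sketch shows why absorbing part of the cloze-item response process into a method factor changes the difficulty estimates: variance that the unidimensional model must explain through the general dimension (and the item difficulties) is reallocated to the cloze-specific dimension.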