*Ubiquitous Knowledge Processing Lab; Department of Computer Science; Technische Universität Darmstadt; Germany
**German Institute for International Educational Research; Frankfurt; Germany
Abstract
While detecting simple language errors (e.g. misspellings or number agreement) is nowadays standard functionality in all but the simplest text editors, other, more complicated language errors may go unnoticed. A difficult case is errors that come in the disguise of a valid word that fits syntactically into the sentence. We use the Wikipedia revision history to extract a dataset containing such errors in their context. We show that the new dataset provides a more realistic picture of the performance of contextual fitness measures. The achieved error detection quality is generally sufficient for competent language users who are willing to accept a certain level of false alarms, but may be problematic for non-native writers who accept all suggestions made by the systems. We make the full experimental framework publicly available, allowing other researchers to reproduce our experiments and to conduct follow-up experiments.