to gender, for example, a hypothesis could be constructed that there is something about gender which
affects data quality, e.g., differences in the salience of events between males and females. A second layer
of meaning inherent in a gender effect may be conceived of as some property of gender which leads
interviewers to interact differently with male and female respondents, thereby producing differences
in data quality. This latter hypothesis may be tested by introducing interaction terms into the
model. To my knowledge, however, we do not have a theory of what such interaction effects, if any,
may mean and how they come about (see Hoag & Allerbeck 1984 for a discussion of the literature).
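For illustration only, the second-layer hypothesis could be written as a logistic model for the probability of an error in interview $i$ conducted by interviewer $j$, with an interaction between respondent gender and interviewer gender (the variable names are hypothetical, not those used in the analysis below):

```latex
\log\frac{P(\mathrm{error}_{ij})}{1-P(\mathrm{error}_{ij})}
  = \beta_0 + \beta_1\,\mathrm{RGend}_i + \beta_2\,\mathrm{IGend}_j
  + \beta_3\,(\mathrm{RGend}_i \times \mathrm{IGend}_j)
  + \mathbf{x}_{ij}'\boldsymbol{\gamma}
```

A non-zero $\beta_3$ would indicate that interviewers interact differently with male and female respondents; as noted above, however, we lack a theory of what such an interaction effect would mean substantively.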
To complicate matters further, in the model for invisible errors, the gender effect, to remain
with this example, can be interpreted on yet another level of meaning. The corresponding hypothesis
would be that some aspect of the social construction of gender, such as an (unconscious) disregard
for female labor force participation, may lead editors to be more likely to add job spells to interviews
with male respondents. Based on my experiences as supervisor of the editing group, I can make the
claim that this was not the case; yet in the framework of the causal analysis presented below, the
interpretation of given effects is open to alternative explanations corresponding to levels of meaning.
An analysis of 'editing effects' addressing this ambiguity in the interpretation of the effects
on data quality found in the analysis presented here would be desirable but poses great practical and
conceptual difficulties. First of all, each case was reviewed twice, and, as a rule, the second reviewer
was a different person from the first. Data entry of the changes was made only after the second
review was completed, so I have no way of telling who added which job spell other than going
back to the printouts and trying to identify editors by their handwriting. Second, within the small and
fluctuating group of editors a division of labor was inevitable and organized around working hours
and experience with editing; those who worked in the evening hours would specialize in call-backs
which, in turn, were largely motivated by manifest or suspected inconsistencies in the event history data.
Like interviewing, data editing is a social process and thus open to processes similar to
those analyzed here. One possible editing effect, discussed above, was the positive effect of tape
recording on invisible errors: given a tape, editors were more likely to add job spells since tapes
were the major source of information for this kind of editing. In many cases without tapes, only
call-backs could provide sufficient information. Analyzing editing effects would follow the same
design as the analysis of interviewer effects. Dummy variables indicating individual editors might
reveal differences in their respective "editing styles" and thoroughness, for example.
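The editing-effects design described here could be sketched, analogously to the interviewer-effects model, as a logistic model for the probability of an invisible error in case $i$, with dummy variables $E_{i1},\dots,E_{i,K-1}$ for $K$ individual editors (notation illustrative, not from the analysis):

```latex
\log\frac{P(\mathrm{invisible\ error}_i)}{1-P(\mathrm{invisible\ error}_i)}
  = \beta_0 + \sum_{k=1}^{K-1} \delta_k E_{ik} + \mathbf{x}_i'\boldsymbol{\gamma}
```

Here the $\delta_k$ would capture differences in individual editing style and thoroughness, while $\mathbf{x}_i$ could include respondent properties that influence the propensity of editors to detect and correct errors.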
Conceivably, some properties of respondents might influence the propensity of editors to detect
and correct errors. The occurrence of editing errors can be modeled only for invisible errors since
all visible errors were eliminated in the editing process. For visible errors, only the way in which
errors were corrected (i.e. according to the rules specified in the editing handbook or not) could
be analyzed.