This post is not directly about ‘British religion in numbers’ but I thought Christian Smith’s thought-piece on what should get published was worthy of signposting.
The paper is entitled ‘Five Proposals for Reforming (Especially Quantitative) Journal Article Publishing Practices in the Sociology of Religion toward Improving the Quality, Value, and Cumulativeness of Our Scholarship’.
Christian Smith is Professor of Sociology at Notre Dame and a major figure in the sociology of religion. In brief, the proposals are:
1. Scholarly journals should routinely solicit and publish numerous descriptive empirical research notes in every published issue.
At present, journals are dominated by the full-length article grounded in theory, which crowds out shorter pieces reporting descriptive empirical findings. Smith writes that ‘[the] expectation that every publishable piece of scholarship be “theoretically significant” is unnecessary and counter-productive’ (p. 7).
2. Scholarly journals should encourage and more frequently publish synthetic meta-analyses and literature reviews of accumulated published research notes and articles.
This type of work would allow compilation of and reportage on ‘the state of the art’, helping young researchers get up to speed and established researchers identify gaps. Such articles are generally found in Annual Review-type series, but Smith suggests that there is no reason such series should monopolise this form.
3. Journal editors, paper reviewers, and readers should require of submitted and published research notes and articles a greater amount of specific information about their particular data and methodologies.
The specific example he gives is of people not always providing the sample size or the response rate of the survey dataset they are analysing. Researchers also need to address possible systematic non-response biases more explicitly, and to document their use of weights. Smith also considers that the reporting of methodology for qualitative and experimental studies needs to be firmed up: ‘[p]aper authors sometimes claim, for example, to have conducted “ethnographies,” when really they have merely done some limited participant observation. And sometimes authors assert that they have conducted a “participant observation” study, when in fact they have only conducted some interviews, studied some printed literature, and taken a few notes on a field setting’ (p. 17).
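The basic reporting figures Smith asks for are simple to compute and state. A minimal sketch in Python, using entirely made-up numbers (the sample counts, the attendance variable, and the design weights are all hypothetical, not from Smith's paper):

```python
# Hypothetical survey-reporting sketch: sample size, response rate,
# and a design-weighted estimate. All figures are invented for illustration.

invited = 5_000     # individuals sampled and contacted
completed = 1_850   # usable responses received
response_rate = completed / invited

# Toy respondent records: (attends_weekly, design_weight).
# Weights like these adjust for unequal selection probabilities and
# non-response, and analysts should report whether and how they were used.
respondents = [(1, 1.2), (0, 0.8), (1, 1.0), (0, 1.5), (1, 0.9)]

weighted_attendance = (
    sum(y * w for y, w in respondents) / sum(w for _, w in respondents)
)

print(f"n = {completed}, response rate = {response_rate:.1%}")
print(f"weighted share attending weekly = {weighted_attendance:.1%}")
```

Reporting these three numbers alongside any analysis is exactly the kind of minimum methodological disclosure the proposal has in view.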
4. Scholarly journal publications should be both more careful and more bold and confident about claims concerning causation and causal inferences and explanations in social life.
Many datasets only allow us to identify correlation, rather than causation, but analytical sociology is concerned with explaining social phenomena, specifically the mechanisms by which different outcomes come about. As David Hume noted, we can never observe causes directly, but we can aspire to coherent causal understanding and explanation.
5. Scholarly journal publications should pay less attention to statistical significance and more attention to the actual causal force, power, or effects that variables appear to exert on outcomes.
This section of the paper refers to the corpus of literature produced by Deirdre McCloskey and Stephen Ziliak regarding the conflation of statistical significance with substantive significance. Smith concludes, ‘if a variable, particularly in a large dataset, is statistically significant but makes little difference, then it simply should not matter… [and if it] is demonstrably important in the meaningful difference it appears to make in the outcome yet remains shy of statistical significance at the (totally arbitrary) p < .05 level, then it should still count’ (p. 23).
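The McCloskey–Ziliak point is easy to demonstrate with simulated data. The sketch below (my illustration, not an example from Smith's paper; the group means, sample size, and seed are arbitrary) draws two very large groups whose true means differ by a trivial 0.02 standard deviations. A two-sample z-test declares the difference highly significant, while the effect size (Cohen's d) shows it is substantively negligible:

```python
# Illustration: with a large enough sample, a trivially small difference
# becomes "statistically significant" even though it makes little difference.
import math
import random

random.seed(42)

n = 200_000  # large survey-style sample per group
# Two groups whose true means differ by only 0.02 standard deviations
group_a = [random.gauss(0.00, 1.0) for _ in range(n)]
group_b = [random.gauss(0.02, 1.0) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Two-sample z-test (a normal approximation is fine at this sample size)
se = math.sqrt(var(group_a) / n + var(group_b) / n)
z = (mean(group_b) - mean(group_a)) / se
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Cohen's d as a simple measure of substantive effect size
pooled_sd = math.sqrt((var(group_a) + var(group_b)) / 2)
d = (mean(group_b) - mean(group_a)) / pooled_sd

print(f"p-value: {p:.2e} (statistically significant at p < .05: {p < 0.05})")
print(f"Cohen's d: {d:.3f} (substantively negligible)")
```

The p-value clears the conventional threshold comfortably, yet the standardised effect is tiny, which is precisely Smith's argument for reporting and weighing effect sizes rather than significance stars alone.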