Had I Been a Reviewer. A post-publication peer review with some added figures.
The authors of the study have responded to the points raised here. You can read their response here. We had a little bit of follow-up discussion on Twitter. All in all, I found this a productive exchange, and I’m happy the authors took the time to respond in such detail.
Sassenberg and Ditrich published a paper in Advances in Methods and Practices in Psychological Science in May. It’s on a topic I care about deeply, namely the impact of changes in academic culture on research quality. Specifically, the authors were interested in whether social psychologists have responded to the replication crisis in their subdiscipline, and to the subsequent calls for higher methodological rigour (especially higher statistical power), by switching to less effortful methods of data collection (self-report).
I was not a reviewer of the paper, but given that I’ve already re-analyzed the N-pact paper, it felt only appropriate to do the same here. I decided to do this post in the format of HIBAR.1 I think it’s an important topic and the authors collected valuable data, which surely took a lot of coding effort. Regrettably, the authors did not share any figures for their data. Their findings, which are easily summarised graphically, may therefore become less widely known. So, I made some figures from the open data (above and below).
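For readers who want to reproduce such a figure, here is a minimal sketch in R. The file name and format are assumptions on my part (adjust them to wherever the open data live); the variable names and codings are taken from the codebook below.

```r
library(dplyr)
library(ggplot2)

# File name and format are assumptions; point this at the actual open data file.
d <- rio::import("research_in_social_psychology_changed.sav")

# Recode year and journal using the codings documented in the codebook below
d <- d %>%
  mutate(
    year    = factor(Jahr, levels = 0:3, labels = c("2009", "2011", "2016", "2018")),
    journal = factor(Journal, levels = 1:4, labels = c("JESP", "JPSP", "PSPB", "SPPS"))
  )

# Share of studies relying on online data collection, by year and journal
d %>%
  group_by(year, journal) %>%
  summarise(share_online = mean(online), .groups = "drop") %>%
  ggplot(aes(year, share_online, colour = journal, group = journal)) +
  geom_line() +
  geom_point() +
  scale_y_continuous("Online data collection", labels = scales::percent) +
  xlab("Year of publication")
```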
I frequently hear senior researchers make arguments of the form “calls for rigour in some practices will just lead to less rigour in other areas” or “labour-intensive research will go extinct if higher sample sizes are required”. These arguments are often used to urge caution in response to calls for reform, and they may end up being interpreted as advocacy for the status quo.
Empirical evidence that, under constant publication pressure, researchers who are urged to increase their sample sizes will do less rigorous research in other ways is thus worrying.2
Given the presented data, I am not convinced that the researchers have shown that calls for increased rigour in terms of sample size have led to decreased rigour in measurement. To get a fuller picture, it would also have been interesting to look at other indicators of rigour, such as the number of items, their reliability, and whether the measure was ad hoc. This cannot be done with the existing data. What the authors can do is fully present the data they have collected, including data on other measurement methods. As a final note, I am not aware of many voices in the reform movement calling for more studies per article, yet we see this trend as well. This serves as a vivid example that many things are always going on simultaneously when one simply examines trends over time, which makes it hard to attribute any single trend to calls for reform.
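To make that last point concrete, here is a sketch of how the studies-per-article trend can be checked in the open data (assuming the data frame `d` from the sketch above; `Studynum` is coded for only a subset of rows, so missing values are dropped first):

```r
# Mean number of studies per article, by publication year.
# Studynum appears to be recorded per paper, so reduce to one row per paper.
d %>%
  filter(!is.na(Studynum)) %>%
  distinct(paperID, year, Studynum) %>%
  group_by(year) %>%
  summarise(mean_studies = mean(Studynum), papers = n())
```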
Dataset name: Research in social psychology changed
The dataset has N=1300 rows and 6 columns. 458 rows have no missing values on any column.
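These numbers are easy to verify against the open data (same hypothetical file name as above):

```r
raw <- rio::import("research_in_social_psychology_changed.sav")  # file name assumed
nrow(raw)                 # 1300 rows
ncol(raw)                 # 6 columns
sum(complete.cases(raw))  # 458 rows with no missing values
```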
This table contains variable names, labels, and the number of missing values:

| name     | label                                  | n_missing |
|:---------|:---------------------------------------|----------:|
| paperID  | unique paper identifier                |         0 |
| Journal  | journal in which article was published |         0 |
| Jahr     | year of publication                    |         0 |
| Studynum | number of studies per paper            |       842 |
| Sample   | sample size                            |         0 |
| online   | online data connection                 |         0 |
The following JSON-LD can be found by search engines if you share this codebook publicly on the web.
{
"name": "Research in social psychology changed",
"datePublished": "2021-03-04",
"description": "The dataset has N=1300 rows and 6 columns.\n458 rows have no missing values on any column.\n\n\n## Table of variables\nThis table contains variable names, labels, and number of missing values.\nSee the complete codebook for more.\n\n|name |label | n_missing|\n|:--------|:--------------------------------------|---------:|\n|paperID |unique paper identifier | 0|\n|Journal |journal in which article was published | 0|\n|Jahr |year of publication | 0|\n|Studynum |number of studies per paper | 842|\n|Sample |sample size | 0|\n|online |online data connection | 0|\n\n### Note\nThis dataset was automatically described using the [codebook R package](https://rubenarslan.github.io/codebook/) (version 0.9.2).",
"keywords": ["paperID", "Journal", "Jahr", "Studynum", "Sample", "online"],
"@context": "http://schema.org/",
"@type": "Dataset",
"variableMeasured": [
{
"name": "paperID",
"description": "unique paper identifier",
"@type": "propertyValue"
},
{
"name": "Journal",
"description": "journal in which article was published",
"value": "1. JESP,\n2. JPSP,\n3. PSPB,\n4. SPPS",
"maxValue": 4,
"minValue": 1,
"@type": "propertyValue"
},
{
"name": "Jahr",
"description": "year of publication",
"value": "0. 2009,\n1. 2011,\n2. 2016,\n3. 2018",
"maxValue": 3,
"minValue": 0,
"@type": "propertyValue"
},
{
"name": "Studynum",
"description": "number of studies per paper",
"@type": "propertyValue"
},
{
"name": "Sample",
"description": "sample size",
"@type": "propertyValue"
},
{
"name": "online",
"description": "online data connection",
"value": "0. no,\n1. yes",
"maxValue": 1,
"minValue": 0,
"@type": "propertyValue"
}
]
}
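For context: this kind of codebook, including the JSON-LD block above, can be generated automatically with the codebook R package. A minimal sketch, assuming the same hypothetical file name as above, run from within an R Markdown document:

```r
# In an R Markdown chunk (results = "asis"): codebook() renders variable
# summaries and embeds machine-readable JSON-LD metadata for search engines.
library(codebook)
codebook_data <- rio::import("research_in_social_psychology_changed.sav")
codebook(codebook_data)
```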
It seems easier to change standards at journals than to decrease publication pressure and competition throughout academia.
If you see mistakes or want to suggest changes, please create an issue on the source repository.
Text and figures are licensed under Creative Commons Attribution CC BY 4.0. Source code is available at https://github.com/rubenarslan/rubenarslan.github.io, unless otherwise noted. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".
For attribution, please cite this work as
Arslan (2019, June 14). One lives only to make blunders: HIBAR: How methods and practices changed after the replication crisis in social psychology. Retrieved from https://rubenarslan.github.io/posts/2019-06-14-hibar-how-methods-and-practices-changed-after-the-replication-crisis-in-social-psychology/
BibTeX citation
@misc{arslan2019hibar,
  author = {Arslan, Ruben C.},
  title = {One lives only to make blunders: HIBAR: How methods and practices changed after the replication crisis in social psychology},
  url = {https://rubenarslan.github.io/posts/2019-06-14-hibar-how-methods-and-practices-changed-after-the-replication-crisis-in-social-psychology/},
  year = {2019}
}