During the 2010s, I gradually adopted open science practices. With each study I started, I took more and more steps to make my research transparent: uploading my data, documenting analysis procedures, pre-registering my work, and so on. After adding these components piecemeal, I finally decided in fall 2017 to conduct a fully open science project. My only regret is that I didn't embrace open science fully much earlier.
The study was published in Psychological Bulletin (Warne & Burningham, 2019) and recently received an award from the Mensa Foundation. In it, my student co-author and I gathered 97 archival datasets and correlation matrices of cognitive variables collected in 31 non-Western, non-industrialized countries. We subjected the data to exploratory factor analysis to determine whether a general cognitive ability factor (called g) emerged. The results are about as uniform as results in psychology get: 94 of 97 datasets (96.9%) showed a general factor, indicating that g is likely present in most non-Western cultures.
Here’s how this is an open science story:
- All of our data were collected by other people who had access to these countries. In keeping with the principles of open science, some researchers shared their data freely when we asked for it (good), others had published their correlation matrices (better), and still others had uploaded the raw data online (best). These are varying degrees of open data sharing, and I'm thankful that so many authors found a way to make their data available.
- My co-author and I pre-registered our hypotheses and procedures. This was a huge advantage because the results supported g theory more strongly than we expected. The pre-registration showed readers and peer reviewers that we did not set up our study to get the results we wanted. This is especially important because I am a vocal proponent of the importance of g (e.g., Warne, 2016, in press; Warne, Astle, & Hill, 2018). It would have been easy to accuse me of reverse-engineering the methods to get the results I wanted. Instead, we have a time-stamped document proving that we chose our analysis methods before we found any data.
- This study is the 50th article of my career, but it was the first that I had ever uploaded as a preprint. The PsyArXiv file had hundreds of downloads in the first few days, and colleagues contacted us with suggestions. We implemented these, and after four weeks as a preprint, we submitted the manuscript to Psychological Bulletin. Even though it was the most prestigious journal I had ever submitted to, the manuscript needed only two rounds of review before it was accepted. We credit the feedback we received on the preprint for making the review process so smooth.
- Having reaped the rewards of open science, it was an easy decision to upload our syntax files to OSF so that readers could verify our work. But this potential for extra scrutiny prompted us to check our files for errors. We found a few (in 8 datasets), though the errors did not alter the results of the study. Because we checked our files, we have more confidence in our findings, and this peace of mind came about because of the decision to embrace open science.
- This study was a replication in three ways. First, we modeled our procedures after a landmark study in the history of intelligence research in which Carroll (1993) conducted an exploratory factor analysis on over 400 datasets in order to determine whether g would emerge from the data. We did the same thing, except with data from non-Western, non-industrialized countries. Second, we re-analyzed published data, which is a form of replication to determine whether the original results are robust. Finally, by subjecting 97 different datasets to the same analysis procedure, we were—in effect—conducting the equivalent of 97 mini-studies at once to determine whether the results were consistent.
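To give a feel for the kind of analysis described above, here is a minimal Python sketch of extracting a single general factor from a correlation matrix. This is illustrative only, not the study's actual syntax: the 5×5 correlation matrix is invented for the example, and the study used more elaborate exploratory factor analysis procedures than this one-step principal-axis extraction.

```python
import numpy as np

# Hypothetical 5-variable correlation matrix (made up for illustration;
# the real study used published matrices from 31 countries).
R = np.array([
    [1.00, 0.52, 0.47, 0.44, 0.40],
    [0.52, 1.00, 0.49, 0.43, 0.38],
    [0.47, 0.49, 1.00, 0.45, 0.41],
    [0.44, 0.43, 0.45, 1.00, 0.39],
    [0.40, 0.38, 0.41, 0.39, 1.00],
])

# Eigendecomposition of the correlation matrix: a dominant first
# eigenvalue is the classic signature of a general factor.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]          # sort eigenvalues descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Loadings of each variable on the first (unrotated) factor.
g_loadings = eigvecs[:, 0] * np.sqrt(eigvals[0])
g_loadings *= np.sign(g_loadings.sum())    # eigenvector sign is arbitrary

# Share of total variance carried by the first factor.
variance_explained = eigvals[0] / eigvals.sum()
```

When all cognitive variables correlate positively, as in this toy matrix, every loading on the first factor comes out positive and that factor absorbs a large share of the variance, which is the pattern the study looked for across its 97 datasets.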
This article is the result of the first fully open science study of my career, though I had adopted pieces of open science beforehand. Here is what I learned from this study:
- Open science can create new knowledge. Without data sharing, this study would have been impossible. There was no way, at a teaching university with a tiny research budget, that we could have visited 31 countries to collect data from over 50,000 people. As a result, the important fact that g appears across a wide range of cultures would not have been discovered.
- If you collect data, there is no reason to believe that you are aware of all the possible research questions that your data can answer. Some authors never considered our research questions; therefore, they never conducted an exploratory factor analysis. In fact, some of the older datasets that we found pre-dated the analysis methods we used by decades. Nobody knows what methodological advances the future will bring. If data are made available publicly, then future methods may refine the results of older studies.
- Similarly, public data sharing is an effective way to keep making contributions to science for decades to come. One example is my use of data from Guthrie (1963), who deposited a full correlation matrix of 50 variables in the Library of Congress for future analysis. Guthrie died in 2003. It is easy to imagine that his data would have been discarded or lost at his death. But the foresight he showed in archiving that matrix allowed him to make a contribution to science over a decade after his death. Guthrie’s contributions live on. Will yours?
- Open science shows the real scientific method, instead of a cleaned-up version that hides errors. Our pre-registration shows that we intended to use parallel analysis to choose the number of factors, but we were unable to implement it. Oops. Rather than pretending that this mistake did not happen, the pre-registration exposes it for all to see. While this deflates my ego a bit, it is actually a good thing. The pre-registration shows that scientific research is not a pristine process. Unforeseen circumstances occur; open science forces researchers to explain their reactions to these events and justify their decisions. As a result, readers get a realistic view of the actual practice of science.
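For readers unfamiliar with parallel analysis, the idea is simple: retain only those factors whose eigenvalues exceed the eigenvalues obtained from random data of the same dimensions. The sketch below is a generic illustration of that logic, not the study's planned procedure; the sample size, number of variables, and simulated dataset are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_vars = 300, 6     # hypothetical sample size and variable count

# Build a toy dataset with one common factor plus noise, so roughly
# one factor should survive parallel analysis.
g = rng.normal(size=(n_obs, 1))
X = 0.7 * g + 0.7 * rng.normal(size=(n_obs, n_vars))
observed = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

# Parallel analysis: average the eigenvalues of many random datasets of
# the same shape, then keep factors whose observed eigenvalue is larger.
n_sims = 200
random_eigs = np.zeros((n_sims, n_vars))
for i in range(n_sims):
    Z = rng.normal(size=(n_obs, n_vars))
    random_eigs[i] = np.sort(
        np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))
    )[::-1]
threshold = random_eigs.mean(axis=0)

n_factors = int((observed > threshold).sum())
```

The appeal of the method is that the retention threshold comes from data with no factor structure at all, so any eigenvalue that clears it is unlikely to be sampling noise.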
So, join me in practicing open science. The potential benefits are immense. If you adopt these practices, you might even contribute to research that you never imagined.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge University Press.
Guthrie, G. M. (1963). Structure of abilities in a non-western culture. Journal of Educational Psychology, 54(2), 94-103. https://doi.org/10.1037/h0040110
Warne, R. T. (2016). Five reasons to put the g back into giftedness: An argument for applying the Cattell–Horn–Carroll theory of intelligence to gifted education research and practice. Gifted Child Quarterly, 60(1), 3-15. https://doi.org/10.1177/0016986215605360
Warne, R. T. (in press). In the know: Debunking 35 myths about human intelligence. Cambridge University Press.
Warne, R. T., Astle, M. C., & Hill, J. C. (2018). What do undergraduates learn about human intelligence? An analysis of introductory psychology textbooks. Archives of Scientific Psychology, 6(1), 32-50. https://doi.org/10.1037/arc0000038
Warne, R. T., & Burningham, C. (2019). Spearman’s g found in 31 non-Western nations: Strong evidence that g is a universal phenomenon. Psychological Bulletin, 145(3), 237-272. https://doi.org/10.1037/bul0000184