I’m stretching myself this summer by teaching, for the first time ever(!), an introduction to psychology course. For over a decade, I have taught only upper-division or graduate-level courses. Now I’m teaching students who are early in their college careers, and (because the class is a general education course) most of my students are not majoring in psychology. It is quite a change, but I am enjoying it.

Because introductory psychology is one of the most popular courses at my university, there are dozens of sections offered every semester. Years ago, my department decided to standardize the course by choosing which topics everyone would teach and creating assignments for every instructor to use. Part of that standardization process is assigning a textbook to all instructors, even full-time faculty like me. The textbook for the class is Lumen Learning’s Introduction to Psychology, which is based on OpenStax’s Psychology 2e.

Many, Many Basic Errors

To say that I am unimpressed with the textbook would be an understatement. The book is riddled with errors, and I must spend time in almost every class session correcting its inaccuracies.

These errors are not typical differences of opinion among professionals. No, these are basic factual errors and glaring omissions that no reasonably well-informed professional should make. Note that I am not even counting factual errors that are irrelevant to basic psychology, such as the statement that a cleft chin is determined by a single gene; a simple Google search shows that this is not true. Another example is the incorrect claim that Einstein said that insanity was doing the same thing repeatedly and expecting different results.

Errors in introductory psychology textbooks are not unusual; in one analysis, 79% of introductory textbooks contained basic errors in their sections on intelligence (Warne et al., 2018). But the errors in the Lumen Learning/OpenStax book are far more frequent and egregious than anything I have ever seen in a traditional textbook. The authors should be embarrassed to be associated with such a shoddy product.

No replication? No problem!

There are also clear indications that the text (originally written in 2014) has not been updated to reflect psychology’s replication crisis. Almost every chapter I read prominently featured studies with small sample sizes and/or low statistical power:

  • One example from the research methods chapter has a sample size of 16 infants, and the same chapter states that one purpose of peer review is to “prevent unnecessary duplication of research findings in the scientific literature and . . . ensures that each research article provides new information.”
  • The biopsychology chapter cites a study of 21 soccer fans that is reliant on an interaction with a whopping effect size of η² = .42 (Bernhardt et al., 1998), another study with a sample size of 16 women (van Anders et al., 2007), and a third study about the neuroscience of empathy with a sample size of 22 (Beckes et al., 2013).
  • The moral development section is based mostly on studies that had < 19 infants per cell (Kiley Hamlin & Wynn, 2011).
  • The sensation and perception chapter discusses social priming (including John Bargh’s discredited “elderly priming” study) as a real phenomenon. In reality, social priming seems to consist entirely of false positive results (Rohrer et al., 2019).
  • An entire paragraph on the subjective nature of pain discusses a study with 8 people per group, for a total sample size of 16 (Leknes et al., 2013). Of course, the textbook focuses on the study’s “surprising” finding.
  • A discussion of cross-modal speech effects relies on a study with 10 people per cell in which all the statistically significant p-values fall between .01 and .05 (Rosenblum et al., 2007). Of course, the textbook author(s) call the findings “astounding”. 🙄
  • A very lengthy discussion of the Ebbinghaus illusion, based on a study of 46 golfers, where the statistically significant correlation (r = -.30, n = 46) is driven entirely by four outliers, as is clear in the original article’s scatterplot (Witt et al., 2008). (The first sketch after this list shows how easily a few outliers can manufacture such a correlation.)
  • A mention of a study where recall was better for words learned and recalled in the same environment (underwater or on dry land after a dive) than when the learning and recall environment were mismatched. The 2 x 2 ANOVA in this study had two non-significant main effects, but a significant interaction–and a sample size of 16 (Godden & Baddeley, 1975).
  • Another memory study stating that people remember black basketball players’ and white politicians’ names better because of stereotypes about these racial groups is based on . . . an interaction with a p-value reported as “p < .06” in a study of 57 people in a 2 x 2 x 2 ANOVA (Payne et al., 2004).
  • In a section on improving memory, the textbook summarizes a study in which people were randomly assigned to regularly write about traumatic experiences (instead of their best possible future self or a trivial topic). The subjects who wrote about traumatic experiences had higher working memory capacity. The textbook concludes by stating, “Psychologists can’t explain why this writing task works, but it does.” Not being able to theoretically explain how a result could occur is a strong hint that a result will not replicate. For that particular result, p = .048 (Yogo & Fujihara, 2008).
  • The chapter on learning claims that gambling is a chemical dependency based on a 1980s study of 24 pathological gamblers (25% of whom had a chemical dependency and over one-third of whom had a history of depression) and 20 controls (Roy et al., 1988). The study reported 11 p-values (including three between .01 and .05 and two between .05 and .07), but no corrections for Type I error inflation were performed (the second sketch after this list shows why that matters). A lot of emphasis is placed on a large difference in norepinephrine levels in urine, with only a brief mention of the fact that none of the urinary metabolites of norepinephrine differed, a likely indication that the norepinephrine difference is a fluke. There is also a sketchy subgroup analysis of the gamblers in the original article.
  • A discussion of an article in support of the Sapir-Whorf hypothesis that reports three experiments with n = 10 to 35 people per cell and a lot of p-values that are suspiciously just below .05 (Boroditsky, 2001).
  • A mention of a study showing that divorcing women who thought more about their spouse/ex-spouse while awake had that person appear in their dreams more often (Cartwright et al., 2006). This finding occurred in only one of three data collection sessions and had a p-value of .045. The sample size was 20 divorcing women and 10 controls. 🙄
  • A description of the “marshmallow test,” a famous study which has failed to replicate (Watts et al., 2018).
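To see how fragile an outlier-driven correlation like the one in the Ebbinghaus illusion item can be, here is a minimal sketch with entirely hypothetical data (these are invented numbers for illustration, not Witt et al.’s golfers): 42 points with no true relationship, plus four extreme points in one corner of the scatterplot.

```python
import numpy as np

# Hypothetical data (not Witt et al.'s): 42 points with no true relationship.
rng = np.random.default_rng(2008)
x = rng.normal(0.0, 1.0, 42)
y = rng.normal(0.0, 1.0, 42)

# Append four invented extreme points sitting in one corner of the scatterplot.
x_out = np.append(x, [2.5, 2.7, 3.0, 3.2])
y_out = np.append(y, [-2.6, -2.9, -3.1, -3.4])

# The first correlation is typically near zero; the second is pulled
# strongly negative by the four outliers alone.
print(f"r for the 42 unremarkable points:  {np.corrcoef(x, y)[0, 1]:+.2f}")
print(f"r after adding 4 outliers (n = 46): {np.corrcoef(x_out, y_out)[0, 1]:+.2f}")
```

With samples this small, a handful of extreme observations can manufacture a “statistically significant” correlation out of pure noise, which is why the original scatterplot matters more than the reported r.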
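More generally, the two recurring problems in this list (uncorrected multiple testing and tiny samples) are easy to quantify. The sketch below computes the familywise Type I error rate for 11 uncorrected tests, as in the Roy et al. (1988) study above, and the power of an independent-samples t-test with 16 people per group. The medium effect size (d = 0.5) is my illustrative assumption, not a value taken from any cited study.

```python
from statsmodels.stats.power import TTestIndPower

# Familywise Type I error: the chance of at least one false positive across
# 11 tests, each run at alpha = .05 (assuming the tests are independent).
alpha, n_tests = 0.05, 11
fwer = 1 - (1 - alpha) ** n_tests
print(f"Chance of >= 1 false positive in {n_tests} tests: {fwer:.2f}")  # ~0.43

# Power of a two-sided independent-samples t-test with 16 people per group,
# assuming a medium effect size (d = 0.5, an illustrative value).
power = TTestIndPower().power(effect_size=0.5, nobs1=16, alpha=0.05, ratio=1.0)
print(f"Power with n = 16 per group and d = 0.5: {power:.2f}")  # ~0.28
```

In other words, a battery of uncorrected tests has better than a 40% chance of producing at least one false positive, while a 16-per-group study has less than a 30% chance of detecting a genuine medium-sized effect.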

Unsurprisingly, the social psychology chapter was the worst in this respect. It relied on small studies far more than the others, giving the topic an inflated level of importance and the illusion of scientific credibility.

  • The description of the actor-observer bias is based on an article reporting three small studies, where all the statistically significant p-values (except one) were between .01 and .05. In fact, in this article, p-values of .07 and “.10 < p < .15” were interpreted as supporting the theory (Nisbett et al., 1973).
  • The section on attributions included a recap of a study of 110 individuals in 18 cells, which is an average of 6.1 participants per cell (Grove et al., 1991).
  • The behaviors in the Stanford Prison Experiment (SPE) are said to have developed “To the surprise of the researchers . . .” This is not true. The study’s creator, Philip Zimbardo, coached guards to act more cruelly, and he already had a press release ready on the second day of the study discussing how the prison setting is dehumanizing and encourages bad behaviors from prisoners (Le Texier, 2019). To the author’s credit, there is some explanation that the study has been questioned, but the textbook does not convey the magnitude and severity of the SPE’s problems.
  • The studies cited to support the claim that cognitive dissonance can cause changes in physiological responses have sample sizes of n = 29 (Croyle & Cooper, 1983) and n = 42 (van Veen et al., 2009) and a lot of research flexibility in the choice of dependent variables.
  • The discussion of the foot-in-the-door phenomenon does not mention that the original study has failed to replicate (Gamian-Wilk & Dolinski, 2020). The original article (cited in the textbook) looks fishy anyway: its two studies have cell sizes of n = 19 to 36 and rely on multiple statistical significance tests without controlling for Type I error inflation (Freedman & Fraser, 1966).
  • The description of social facilitation completely lacks citations.
  • The discussion on the bystander effect opens with a brief synopsis of the murder of Kitty Genovese and refers to the case again later. Psychologists have known for years that the traditional retelling of her murder (supposedly occurring while 38 witnesses watched and did nothing) is not true (Kassin, 2017; Manning et al., 2007). There is even a documentary, produced by Ms. Genovese’s brother, telling the truth about her tragic death. Other research has shown that in most situations, people do receive help from bystanders (Philpot et al., 2020).
  • A lengthy description of a study on pain perception while holding a (loved one’s or stranger’s) hand or viewing photographs (of a loved one, a stranger, or a neutral object) is recapped with the advice, “If you are going to have a painful medical procedure, bringing a picture of someone you love may be helpful in reducing the pain. In fact, based on comparison of the hand holding and picture viewing conditions, you may actually be better off bringing a picture than bringing the actual person to the painful procedure.” This advice (of dubious value) is based on a sample of 25 women, but neither the original authors nor the textbook can explain why a photograph of a loved one is better for pain relief than contact with a loved one, or why holding a neutral object is better than holding a stranger’s hand (Master et al., 2009). This is the exact sort of study with confusing, surprising, and contradictory findings that tends not to replicate.
  • Immediately after, another study on the impact of viewing photographs of a loved one is summarized. I think you know where this is going: n = 15, p = .026 (Younger et al., 2010). But, again, the findings are theoretically confusing: a photograph of a loved one and being distracted are equally effective in reducing pain, and both are more effective than viewing a photograph of an acquaintance. The textbook chalks up this strange result to the fact that “. . . things are seldom simple in the world of science,” and the textbook author never considers that the study might be capitalizing on random error. (This study is also odd because the authors claimed to have performed a Bonferroni adjustment, but if they had, their principal findings would no longer be statistically significant; the sketch after this list walks through the arithmetic. They nevertheless reported the results as if the interventions produced a statistically significant reduction in pain.)
  • It is always a treat to find a citation to an article co-authored by Amy Cuddy and Susan Fiske, with a small sample size (n = 55 in 3 groups) and barely significant p-value (p = .026) where the authors failed to perform a correction for Type I error inflation (Cuddy et al., 2005).
  • The book actually recommends that students take the Implicit Association Test! The test has terrible test-retest reliability and does not correlate with any actual discriminatory behavior (Singal, 2021). The IAT is modern pseudoscience and does not measure anything, let alone “implicit,” or hidden, attitudes about human groups.
  • The Pygmalion in the Classroom study is cited as support for the existence of self-fulfilling prophecies (original report is Rosenthal & Jacobson, 1966, though the textbook cites a later summary). I discussed the problems of this study in Chapter 18 of In the Know: Debunking 35 Myths About Human Intelligence (Warne, 2020).
  • In a discussion of discrimination against gay job applicants, the textbook states that “. . . the potential employer might treat the applicant negatively during the interview by engaging in less conversation [ps = .030 and .029], making little eye contact [exploratory post hoc analysis], and generally behaving coldly toward the applicant . . .” This latter finding is strongly statistically significant when applicants rated their perceived interactions, but when judged by independent raters, the effect was only about one-third as strong, and p = .031 (Hebl et al., 2002).
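On the Bonferroni point in the Younger et al. item above, the arithmetic is short enough to check directly: a Bonferroni adjustment divides the significance threshold by the number of comparisons, so the reported p = .026 cannot survive a correction for even two comparisons. A minimal sketch:

```python
# Bonferroni correction: with m comparisons, each test must meet alpha / m.
alpha = 0.05
p_observed = 0.026  # the p-value reported by Younger et al. (2010)

for m in range(1, 6):
    threshold = alpha / m
    verdict = "significant" if p_observed < threshold else "not significant"
    print(f"m = {m}: threshold = {threshold:.4f} -> p = .026 is {verdict}")

# Only m = 1 (i.e., no correction at all) leaves p = .026 significant.
```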

When reading these lists, keep in mind that they include only the citations that I (1) decided to look into, (2) could find, and (3) had the expertise to evaluate. This is likely just a fraction of the shoddy research that the book is based on.

While there is a section on the replication crisis and another on “the value of replication,” most of the OpenStax book does not reflect psychology’s shift toward higher-quality research. Even the discussion of the replication crisis suggests that failures to replicate may be due to hidden moderators, even though this explanation is actually quite implausible (Olsson-Collentine et al., 2020).

Many findings and theories that have been questioned because of the replication crisis are prominently featured. The book has an entire section on mindset theory, including an in-depth description of the Mueller and Dweck (1998) study that has failed to replicate (Li & Bates, 2019), and the introduction encourages students to adopt a growth mindset. Nowhere is there any indication that mindset theory fails key falsifiability tests (Burgoyne et al., 2020) or that high-quality studies only seem to support the theory if Carol Dweck is a coauthor.

Quality Control Problems

There are also basic errors of quality control. Sometimes adjacent paragraphs do not flow together well, and some chapters are poorly organized. For example, the chapter on memory often mentions topics (like false memories or proactive and retroactive interference), drops them without explanation, and then discusses them again at a later point in the chapter. The same chapter even repeats the same paragraph twice, word-for-word. One entire section in the mental health chapter contains zero references.

No chapter’s reference list matches its in-text citations, and I often could not trace citations to verify that the book was summarizing its sources correctly. Sometimes citations contradict the statement they are supposed to support (e.g., Ceci & Williams, 2011) or do not contain the information the textbook author claims they do (e.g., Asberg et al., 1976). Another problem I frequently saw was that plausible alternative explanations for findings are never explored (e.g., Adams et al., 1996, whose results may merely be due to the novelty of the stimulus). Defective citations in most chapters are, I suppose, still better than the therapy chapter’s serious dearth of citations: in that chapter, most factual claims have no supporting citations at all.

Another common problem with the book is the quality of some of the references. Popular media reports are treated with the same credulity as scholarly articles, with scientific inferences drawn from non-existent or inconclusive data in a popular source. I also suspect that some of the anecdotes (like one of a child imitating their mother by beating a teddy bear, or of students who made different choices during the Virginia Tech mass shooting) are fabricated. Most chapters have at least one typo. Lumen Learning also has a problem with its embedded videos: most are taken from YouTube, and some provide information that contradicts the text.

OpenStax’s Problems Are Not Unique

Recognizing that the Lumen Learning/OpenStax book may not be typical of all open source books, I decided to investigate other introductory psychology textbooks. While I have not examined them as closely as the OpenStax book, I am still not impressed. I browsed the social psychology and human intelligence chapters of four different textbooks and found many of the same problems. Advocates of open source textbooks need to come to grips with the fact that their products often suffer from inaccuracies, outdated information, and lax quality control. Not every traditional textbook is better than every open source book, but there is an obvious difference in average quality between the two. People who think that open source books are a better educational choice are fooling themselves.

But Open Source Is Free!

The only upside I can see to open source textbooks is the cost savings, and if the choice is between a mediocre open source textbook and no textbook, then a mediocre book may be better. I can imagine that students in developing nations or at community colleges in impoverished areas may need an open source option. But for most students in wealthy nations, the cost savings are not worth the error-filled, low-quality education.

Advocates of open source textbooks do have a point. Traditional textbooks often do cost too much — and the problem is getting worse. Since 1978, college textbooks have increased over eightfold in price. It is important to keep in mind, though, that most college students do not pay full price for all their books. Many rent books, purchase or rent e-books, borrow books from the library, use old editions, and adopt other strategies to keep the cost down. Still, when it is unavoidable to buy a new book, it can be a brutal hit to the wallet.

When I was writing my statistics textbook, Statistics for the Social Sciences: A General Linear Model Approach, I learned firsthand why textbooks are so expensive. Some of the higher price is due to arbitrary publisher choices: color printing, glossy paper, expensive photos and artwork, and other aesthetic choices can drive up costs by a surprising amount. Extra bells and whistles, such as online supplementary materials (especially if they require a one-time access code), test banks, workbooks, videos, and interactive tools, all add to the cost. While writing the first edition, I was in a constant battle with my editor to keep the price down and was always pushing for less expensive choices.

Another driver of costs is the market. Most of the anonymous reviewers of my textbook gave feedback indicating that they would not consider adopting a book unless it included certain topics, had certain features, used particular examples, etc. None of these were requested by more than one-third of reviewers, but my editor told me to accommodate as many of these requests as possible. The result was a book that was nearly twice as long as I had planned. This experience taught me why most textbooks contain far more information than can be taught in a single semester. If only 15% or 20% of potential customers (i.e., college instructors) want a certain chapter or topic, then that demand is enough to get it included in the book. All else being equal, a longer book is a more expensive book.

While publishers share some of the blame for the costs of traditional textbooks, some of the expense is due to quality control. The first edition of my textbook received feedback from at least 15 anonymous reviewers (probably many more). These people were paid for their time and expertise, and they made the book better. The professional copy editors, graphic designers, proofreaders, and other production personnel also contributed to the quality control, and to the cost. Moreover, royalties give authors a tangible incentive to write the best book possible and to improve it in later editions. I definitely watch the statistics literature more closely because I want any future editions to be up to date, which will give me an edge over the competition. It is not clear that an open source textbook author has similar incentives.

While open source textbooks may be free, they still have a cost: lower quality and inaccurate information. I do not have a complete answer to the pressing problem of rising textbook costs, but until the open source movement fixes its quality control problem, I will remain skeptical that open source textbooks are an adequate choice for my students’ education. At best, they are a limited solution.

References

Adams, H. E., Wright, L. W., & Lohr, B. A. (1996). Is homophobia associated with homosexual arousal? Journal of Abnormal Psychology, 105(3), 440-445. https://doi.org/10.1037/0021-843x.105.3.440

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. American Educational Research Association.

Asberg, M., Thorén, P., Träskman, L., Bertilsson, L., & Ringberger, V. (1976). “Serotonin depression”—a biochemical subgroup within the affective disorders? Science, 191(4226), 478–480. https://doi.org/10.1126/science.1246632

Beckes, L., Coan, J. A., & Hasselmo, K. (2013). Familiarity promotes the blurring of self and other in the neural representation of threat. Social Cognitive and Affective Neuroscience, 8(6), 670-677. https://doi.org/10.1093/scan/nss046

Bernhardt, P. C., Dabbs, J. M., Jr., Fielden, J. A., & Lutter, C. D. (1998). Testosterone changes during vicarious experiences of winning and losing among fans at sporting events. Physiology & Behavior, 65(1), 59-62. https://doi.org/10.1016/S0031-9384(98)00147-4

Boroditsky, L. (2001). Does language shape thought?: Mandarin and English speakers’ conceptions of time. Cognitive Psychology, 43(1), 1-22. https://doi.org/10.1006/cogp.2001.0748

Burgoyne, A. P., Hambrick, D. Z., & Macnamara, B. N. (2020). How firm are the foundations of mind-set theory? The claims appear stronger than the evidence. Psychological Science, 31(3), 258-267. https://doi.org/10.1177/0956797619897588

Cartwright, R., Agargun, M. Y., Kirkby, J., & Friedman, J. K. (2006). Relation of dreams to waking concerns. Psychiatry Research, 141(3), 261-270. https://doi.org/10.1016/j.psychres.2005.05.013

Ceci, S. J., & Williams, W. M. (2011). Understanding current causes of women’s underrepresentation in science. Proceedings of the National Academy of Sciences, 108, 3157–3162. https://doi.org/10.1073/pnas.1014871108

Croyle, R. T., & Cooper, J. (1983). Dissonance arousal: Physiological evidence. Journal of Personality and Social Psychology, 45(4), 782–791. https://doi.org/10.1037/0022-3514.45.4.782

Cuddy, A. J., Norton, M. I., & Fiske, S. T. (2005). This old stereotype: The pervasiveness and persistence of the elderly stereotype. Journal of Social Issues, 61, 267–285. https://doi.org/10.1111/j.1540-4560.2005.00405.x

Ferguson, C. J. (2013). Violent video games and the Supreme Court: Lessons for the scientific community in the wake of Brown v. Entertainment Merchants Association. American Psychologist, 68(2), 57-74. https://doi.org/10.1037/a0030597

Freedman, J. L., & Fraser, S. C. (1966). Compliance without pressure: The foot-in-the-door technique. Journal of Personality and Social Psychology, 4(2), 195–202. https://doi.org/10.1037/h0023552

Gamian-Wilk, M., & Dolinski, D. (2020). The foot-in-the-door phenomenon 40 and 50 years later: A direct replication of the original Freedman and Fraser study in Poland and in Ukraine. Psychological Reports, 123(6), 2582-2596. https://doi.org/10.1177/0033294119872208

Godden, D. R., & Baddeley, A. D. (1975). Context-dependent memory in two natural environments: On land and underwater. British Journal of Psychology, 66(3), 325-331. https://doi.org/10.1111/j.2044-8295.1975.tb01468.x

Hebl, M. R., Foster, J. B., Mannix, L. M., & Dovidio, J. F. (2002). Formal and interpersonal discrimination: A field study of bias toward homosexual applicants. Personality and Social Psychology Bulletin, 28(6), 815–825. https://doi.org/10.1177/0146167202289010

Kassin, S. M. (2017). The killing of Kitty Genovese: What else does this case tell us? Perspectives on Psychological Science, 12(3), 374-381. https://doi.org/10.1177/1745691616679465

Kiley Hamlin, J., & Wynn, K. (2011). Young infants prefer prosocial to antisocial others. Cognitive Development, 26(1), 30-39. https://doi.org/10.1016/j.cogdev.2010.09.001

Le Texier, T. (2019). Debunking the Stanford Prison Experiment. American Psychologist, 74(7), 823-839. https://doi.org/10.1037/amp0000401

Leknes, S., Berna, C., Lee, M. C., Snyder, G. D., Biele, G., & Tracey, I. (2013). The importance of context: When relative relief renders pain pleasant. PAIN, 154(3), 402-410. https://doi.org/10.1016/j.pain.2012.11.018

Li, Y., & Bates, T. C. (2019). You can’t change your basic ability, but you work at things, and that’s how we get hard things done: Testing the role of growth mindset on response to setbacks, educational attainment, and cognitive ability. Journal of Experimental Psychology: General, 148(9), 1640-1655. https://doi.org/10.1037/xge0000669

Manning, R., Levine, M., & Collins, A. (2007). The Kitty Genovese murder and the social psychology of helping: The parable of the 38 witnesses. American Psychologist, 62(6), 555-562. https://doi.org/10.1037/0003-066X.62.6.555

Master, S. L., Eisenberger, N. I., Taylor, S. E., Naliboff, B. D., Shirinyan, D., & Lieberman, M. D. (2009). A picture’s worth: Partner photographs reduce experimentally induced pain. Psychological Science, 20(11), 1316-1318. https://doi.org/10.1111/j.1467-9280.2009.02444.x

Mueller, C. M., & Dweck, C. S. (1998). Praise for intelligence can undermine children’s motivation and performance. Journal of Personality and Social Psychology, 75(1), 33-52. https://doi.org/10.1037/0022-3514.75.1.33

Nisbett, R. E., Caputo, C., Legant, P., & Marecek, J. (1973). Behavior as seen by the actor and as seen by the observer. Journal of Personality and Social Psychology, 27(2), 154-164. https://doi.org/10.1037/h0034779

Olsson-Collentine, A., Wicherts, J. M., & van Assen, M. A. L. M. (2020). Heterogeneity in direct replications in psychology and its association with effect size. Psychological Bulletin, 146(10), 922–940. https://doi.org/10.1037/bul0000294

Payne, B. K., Jacoby, L. L., & Lambert, A. J. (2004). Memory monitoring and the control of stereotype distortion. Journal of Experimental Social Psychology, 40(1), 52-64. https://doi.org/10.1016/S0022-1031(03)00069-6

Philpot, R., Liebst, L. S., Levine, M., Bernasco, W., & Lindegaard, M. R. (2020). Would I be helped? Cross-national CCTV footage shows that intervention is the norm in public conflicts. American Psychologist, 75(1), 66–75. https://doi.org/10.1037/amp0000469

Rohrer, D., Pashler, H., & Harris, C. R. (2019). Discrepant data and improbable results: An examination of Vohs, Mead, and Goode (2006). Basic and Applied Social Psychology, 41(4), 263-271. https://doi.org/10.1080/01973533.2019.1624965

Rosenblum, L. D., Miller, R. M., & Sanchez, K. (2007). Lip-read me now, hear me better later. Psychological Science, 18(5), 392-396. https://doi.org/10.1111/j.1467-9280.2007.01911.x

Rosenthal, R., & Jacobson, L. (1966). Teachers’ expectancies: Determinants of pupils’ IQ gains. Psychological Reports, 19(1), 115-118. https://doi.org/10.2466/pr0.1966.19.1.115

Roy, A., Adinoff, B., Roehrich, L., Lamparski, D., Custer, R., Lorenz, V., Barbaccia, M., Guidotti, A., Costa, E., & Linnoila, M. (1988). Pathological gambling: A psychobiological study. Archives of General Psychiatry, 45(4), 369-373. https://doi.org/10.1001/archpsyc.1988.01800280085011

Singal, J. (2021). The quick fix: Why fad psychology can’t cure our social ills. Farrar, Straus and Giroux.

van Anders, S. M., Hamilton, L. D., Schmidt, N., & Watson, N. V. (2007). Associations between testosterone secretion and sexual activity in women. Hormones and Behavior, 51(4), 477-482. https://doi.org/10.1016/j.yhbeh.2007.01.003

van Veen, V., Krug, M. K., Schooler, J. W., & Carter, C. S. (2009). Neural activity predicts attitude change in cognitive dissonance. Nature Neuroscience, 12, 1469-1474. https://doi.org/10.1038/nn.2413

Warne, R. T. (2020). In the know: Debunking 35 myths about human intelligence. Cambridge University Press. https://doi.org/10.1017/9781108593298

Warne, R. T., Astle, M. C., & Hill, J. C. (2018). What do undergraduates learn about human intelligence? An analysis of introductory psychology textbooks. Archives of Scientific Psychology, 6(1), 32-50. https://doi.org/10.1037/arc0000038

Watts, T. W., Duncan, G. J., & Quan, H. (2018). Revisiting the marshmallow test: A conceptual replication investigating links between early delay of gratification and later outcomes. Psychological Science, 29(7), 1159-1177. https://doi.org/10.1177/0956797618761661

Witt, J. K., Linkenauger, S. A., Bakdash, J. Z., & Proffitt, D. R. (2008). Putting to a bigger hole: Golf performance relates to perceived size. Psychonomic Bulletin & Review, 15(3), 581-585. https://doi.org/10.3758/PBR.15.3.581

Yogo, M., & Fujihara, S. (2008). Working memory capacity can be improved by expressive writing: A randomized experiment in a Japanese sample. British Journal of Health Psychology, 13(1), 77-80. https://doi.org/10.1348/135910707X252440

Younger, J., Aron, A., Parke, S., Chatterjee, N., & Mackey, S. (2010). Viewing pictures of a romantic partner reduces experimental pain: Involvement of neural reward systems. PLOS ONE, 5(10), Article e13309. https://doi.org/10.1371/journal.pone.0013309