Try these critical thinking activities to foster scientific literacy.
Every day the news media trumpet psychology-related findings with the potential to affect our lives directly and indirectly. And we do mean every day.
- "Testing Neurons With Ultrasound" (Gorman, 2015).
- "Study Does Not Link Breast-Feeding With Child’s IQ" (Bakalar, 2015).
- "Effectiveness of Talk Therapy Is Overstated, a Study Says" (Carey, 2015).
These headlines are just a sample from the website of one newspaper, The New York Times, on one day. In fact, the talk therapy article listed here even elicited a letter from the American Psychological Association pointing out that this article “was minimizing the clear benefits of psychotherapy that have been found over many years of research” (Anton, 2015), leading to an online dialogue about psychology research. The media and other Internet sources, with their abundance of psychology-related material, provide a perfect proving ground for teaching scientific literacy.
A major goal of our courses — especially introductory psychology — is to teach students to be strong critical thinkers about psychology-related claims. This approach fits with current emphasis on teaching skills, and not just content, in the psychology classroom (see APA, 2013). In our opinion, the most important of these skills is scientific literacy.
To teach this skill, we look to the growing body of research on how to teach scientific literacy. Most important, active learning, broadly defined, has been shown to lead to better outcomes than straight lecture (Freeman et al., 2014). More specifically, Lovett and Greenhouse (2000) applied cognitive psychology research to the teaching of statistics and research methods concepts and developed several principles of effective teaching. They found that students do not readily generalize new learning to other contexts. They also found that students tend to learn best when they can integrate new information into what they already know.
To help students build on what they know, we repeatedly dissect media examples so students apply psychological science to a variety of contexts. We start each class meeting by asking students to find psychology-related stories online — in major newspapers, on sports blogs, in political statements or even fashion e-zines. (For students without a connected device, we allow sharing. Alternatively, students can find an article before class.) There are only two rules: The article must be from the last 24 hours to demonstrate that psychology-related stories emerge every day, and it can’t be from a psychology-specific source like Psychology Today; it’s too easy when every story is related to psychology.
Without exception, students readily locate multiple examples — even in news sources that might seem far afield. Some are based on actual scientific research, like those in the headlines we opened with, whereas others are somewhat suspect, like the supposed relation of lipstick color to personality (Schultz, 2015) or tennis star Serena Williams’s superstitious belief in not washing her socks while on a winning streak (Brodie, 2014).
As instructors, we save the best of both scientific and not-so-scientific examples in our e-folders for the relevant introductory psychology chapter. Once a week, we engage in a longer-form exercise in which we introduce one example that offers opportunities for active learning. Over a 20- or 30-minute period, we use a four-part framework in which students:
- Identify the claim the researchers or journalists are making.
- Evaluate the evidence that is cited to support the claim.
- Consider alternative explanations for the finding.
- Consider the source of the research or claim.
Here’s a recent “ripped from the headlines” example. (For each step, we’ll include instructor preparation information.) A New York Times blog post published the same day as the articles listed above asked, “Does Mindfulness Make for a Better Athlete?” (Reynolds, 2015). The reporter concluded that the study’s findings “could mean that closely attending to our bodies might help us to be better, calmer athletic performers.”
Identify the Claim
In class, students read the article and identify the claim — in this case, that mindfulness improves athletic performance. At this step, we include a related interactive component. In fact, we choose articles that lend themselves to an activity. In this case, we might have students discuss in pairs their own anxieties about performance, whether in an athletic, artistic or academic endeavor. We would introduce some mindfulness techniques and have students practice them in the context of their own example. We might follow with a larger class discussion about how mindfulness might help performance.
As part of identifying the claim, we also ask students to talk about how the researchers have operationally defined the concepts they are studying. We encourage the students to think about different ways that the same concepts could be defined and measured, and how those differences might affect research findings.
Instructor preparation: Choose the article; develop a related activity that encourages active learning.
Evaluate the Evidence
Now we dig into the actual evidence, first by examining the source, in this case the blog post. The blog post tells us this research was published in a journal and conducted by scientists at a university, all good signs. It also tells us that the study was conducted on just seven elite BMX riders, all from the USA Cycle Team, who had their brains scanned while learning to identify signals of potential problems, underwent seven weeks of mindfulness training and then had their brains scanned again. Following the training, their response to the indicator of trouble ahead was improved, and they showed less “physiological panic.”
We then guide a discussion of the pros and cons of the study as presented in the news source. The pros include that university scientists were involved and that the study was peer reviewed. The cons include that there were just seven participants, with no random assignment and no control group; this is a within-groups (pre-post) design, so counterbalancing is not possible.
We then turn to the peer-reviewed journal article (Haase et al., 2015). In this example, we would inform the students that the researchers described this study as a pilot, acknowledging the small sample size. The published report also includes helpful graphs and brain scan images, some of which we would project so students could see the specifics of the data. We would reiterate the pros and cons that we gleaned from the article.
Instructor preparation: Locate and read the original source; identify specific information that will help students understand and evaluate the evidence.
Consider Alternative Explanations
For this step, we guide students to identify alternative explanations for the findings. We might do this as a larger group discussion or have students discuss in small groups first. For this example, we would discuss the lack of a control group and the possibility of confounds. But we would also discuss the alternative explanation that perhaps mindfulness led to different brain patterns — improved response with less “physiological panic” — without leading to improved athletic performance. After all, the researchers did not actually measure athletic performance.
We would ask students to identify where the blogger was careful to point this out. Specifically, she noted that “the experiment did not look at actual, subsequent athletic performance” (Reynolds, 2015). We would then point out that this is even more carefully discussed in the journal. The researchers explicitly point out that mindfulness training could have led to the results they reported “without actually affecting performance itself” (Haase et al., 2015, p. 10).
Instructor preparation: Develop a list of alternative explanations; locate and read any additional articles that relate to these alternative explanations.
Consider the Source
Finally, we compare and contrast the source we started with, the blog post in this case, with the peer-reviewed journal article. We talk about what to look for in a news story or other source that indicates it’s based on research, including the names and institutions of the researchers and a mention of a published study. When there is no peer-reviewed journal article behind a claim, as with websites that make wild claims to sell you something, we discuss the flaws of sources that don’t point to science.
Instructor preparation: Develop a brief overview (we use PowerPoint) of why peer-reviewed journal articles are almost always a better source than a newspaper, blog or website, and of what students should look for when evaluating these. This can be reused when this activity is repeated with a new source. We also recommend evaluating sources using the CRAP test (currency, reliability, authority and purpose/point of view) that can be found at many university websites.
This format for a recurring activity was developed based on research on the scholarship of teaching and learning and allows for active learning and repetition across contexts. In our experience, early in the semester, students have difficulty finding examples of psychological science in the news, unless a headline makes it explicitly clear that a given finding is from the field of psychology. By the end of the semester, they start to see psychological science everywhere — from sports stories to breaking international news.
Similarly, early in the semester, students have difficulty working through the four-part framework. But, just as many of them become skilled at noticing when psychological science is at play, many of them also become skilled at thinking critically about research. They learn to ask the right questions and to seek out appropriate answers for these questions — the mark of a budding psychological scientist.
- This article was originally published, with references, in the December 2015 Psychology Teacher Network.
- Alice G. Walton, PhD, is a writer in New York City.