I already blogged about an important paper showing that averaging scores across multiple tests is the most accurate method of identifying children for gifted programs (McBee, Peters, & Waterman, 2014).
A paper by my colleague, Joni M. Lakin, built on this earlier research and was named the Paper of the Year in Gifted Child Quarterly earlier this month. In this article, Lakin (2018) investigated the impact of different score combination rules on the diversity of the group of children selected for a gifted program. This is an important question because gifted programs are perpetually criticized for being disproportionately Asian and White, while Hispanic and African American students are less likely to be identified.
There are three possible rules for combining test scores from multiple measures to select gifted children. As I stated in my last post, the conjunctive rule identifies children for gifted programs if they exceed a cutoff on multiple tests. The disjunctive rule only requires a child to exceed a cutoff on a single test in order to be labeled as gifted. The compensatory rule averages the test scores, and if the average is above a cutoff, then the child qualifies for the gifted program.
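The three rules can be sketched in a few lines of code. This is a minimal illustration, not the article's implementation; the cutoff of 130 and the two-score example are hypothetical.

```python
CUTOFF = 130  # hypothetical qualifying score for illustration

def conjunctive(scores, cutoff=CUTOFF):
    """Qualify only if every score meets the cutoff."""
    return all(s >= cutoff for s in scores)

def disjunctive(scores, cutoff=CUTOFF):
    """Qualify if any single score meets the cutoff."""
    return any(s >= cutoff for s in scores)

def compensatory(scores, cutoff=CUTOFF):
    """Qualify if the average of the scores meets the cutoff."""
    return sum(scores) / len(scores) >= cutoff

# A child who is strong on one test but below the cutoff on the other:
child = (125, 138)
print(conjunctive(child))    # False
print(disjunctive(child))    # True
print(compensatory(child))   # True (mean = 131.5)
```

Note how the same child qualifies under two of the three rules: this is the mechanism behind the group-size differences described below.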
Lakin (2018) looked at the impact of using these rules on a real dataset of examinees. Unsurprisingly, as groups grow, they become less academically elite. The conjunctive rule identified the smallest but most elite group of children (n = 619, mean IQ = 134.6, SD = 6.7). The compensatory group was the next largest (n = 2,646, mean IQ = 127.5, SD = 6.2). Finally, the disjunctive rule was the most permissive in identifying gifted children (n = 5,602, average IQ = 120.9, SD = 8.1).
As a rule identified more students as gifted overall, the percentage of each demographic group labeled as gifted also increased. For every group (e.g., African American students, males, English learners, low-income children), the disjunctive rule labeled the most group members as gifted, followed by the compensatory rule. The conjunctive rule identified the fewest students from every demographic group.
On the surface, this seems to be good news: using the disjunctive rule makes gifted programs more diverse. However, Lakin (2018) conducted a follow-up analysis and found that this greater diversity is solely due to the larger number of selected students. When each method was forced to select approximately the same number of students, the gifted groups had almost the same demographics. This is apparent in Table 5 of the article.
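A toy simulation illustrates why forcing the rules to select equal-sized groups produces such similar results. The data below are simulated, not the article's dataset, and ranking by the minimum, maximum, and mean score is used as a stand-in for cutoff-free versions of the conjunctive, disjunctive, and compensatory rules. Because the two test scores are correlated, the three rankings pick largely the same children.

```python
import random

random.seed(0)

# Simulate 1,000 students, each with two correlated test scores
# (a shared ability component plus independent measurement noise).
students = []
for _ in range(1000):
    ability = random.gauss(100, 15)
    students.append((ability + random.gauss(0, 5),
                     ability + random.gauss(0, 5)))

n = 100  # force each rule to select exactly the top 100

def top_n(rank_key):
    """Indices of the n students ranked highest under rank_key."""
    ranked = sorted(range(len(students)),
                    key=lambda i: rank_key(students[i]), reverse=True)
    return set(ranked[:n])

conj = top_n(min)                        # conjunctive ~ weakest score must be high
disj = top_n(max)                        # disjunctive ~ best score must be high
comp = top_n(lambda s: sum(s) / len(s))  # compensatory ~ average must be high

# Overlap between the selected sets is high when scores are correlated.
print(len(conj & comp) / n)
print(len(disj & comp) / n)
```

With realistic score correlations the overlap is large, echoing the article's finding that equal-sized selections contain mostly the same students regardless of the combination rule.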
Not only are the demographics of the groups selected under each identification rule similar, but the rules also identify almost exactly the same children. Depending on which selected samples are compared, the methods identify 95-99% of the same children as gifted. No wonder the demographics in Table 5 are so similar!
Thus, any greater student diversity from using the disjunctive rule to identify children for a gifted program is solely due to the rule selecting more students. If you want the conjunctive or compensatory rules to identify a more diverse pool of students, then just lower the standards for program admission, and–voilà!–you have more diverse “gifted” students (and more students in the program overall).
Lakin’s (2018) article is brilliant because of the way it builds on earlier research and answers an important question with real data. It deserves the Paper of the Year award at Gifted Child Quarterly.
But the paper also slaughters one of gifted education’s sacred cows. It is axiomatic in gifted education that using multiple measures to identify gifted students is better for diverse students. (See this recommendation from the National Association for Gifted Children, as an example.) Lakin’s (2018) article shows that this is not true. Assuming that one standard applies to all groups, only lowering standards–and thereby letting more children into gifted programs–increases diversity in gifted programs. Bummer. ☹️
Lakin, J. M. (2018). Making the cut in gifted selection: Score combination rules and their impact on program diversity. Gifted Child Quarterly, 62, 210-219. doi:10.1177/0016986217752099
McBee, M. T., Peters, S. J., & Waterman, C. (2014). Combining scores in multiple-criteria assessment systems: The impact of combination rule. Gifted Child Quarterly, 58, 69-89. doi:10.1177/0016986213513794