A few years ago, my colleagues Matthew T. McBee, Scott J. Peters, and Craig Waterman published what I think is one of the most important articles in gifted education of the past 10 years. In it, they examined the effect of using different rules to combine multiple scores into a single dichotomous decision to admit or reject a child for a gifted program. They called these the “and,” “or,” and “mean” rules.

In the “and” rule (also called the conjunctive rule), students must score above a cutoff on both Test A *and* Test B. In a perfect world, the scores of gifted children using this rule would be represented in the image below (taken from p. 73 of the article). The white spaces represent the areas where non-gifted children’s scores would be in the graph.
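The conjunctive rule is simple enough to express in a couple of lines. Here is a minimal Python sketch; the cutoff of 120 is an arbitrary illustration, not a value taken from the article:

```python
def and_rule(score_a, score_b, cutoff=120):
    """Conjunctive ("and") rule: qualify only if BOTH scores meet the cutoff."""
    return score_a >= cutoff and score_b >= cutoff

print(and_rule(125, 130))  # both scores clear the cutoff -> True
print(and_rule(125, 110))  # one score falls short -> False
```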

In the “or” rule (also called the disjunctive rule), a child is selected for a gifted program if they score above the cutoff on Test A *or* Test B. Ideally, this would produce the graph below (also from p. 73 of the article). Notice that students who score above the cutoff on both tests qualify for a gifted program under both the “and” rule and the “or” rule. These are the individuals with scores in the top right quadrant. But the “or” rule also labels as gifted those with scores in the top left and bottom right quadrants, because they only had to score above the cutoff on a single test.
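The disjunctive rule only changes the logical connective. A minimal sketch, again with an illustrative (not article-specified) cutoff of 120:

```python
def or_rule(score_a, score_b, cutoff=120):
    """Disjunctive ("or") rule: qualify if EITHER score meets the cutoff."""
    return score_a >= cutoff or score_b >= cutoff

print(or_rule(125, 110))  # top left / bottom right quadrant cases qualify -> True
print(or_rule(110, 105))  # neither score clears the cutoff -> False
```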

Finally, the “mean” rule (also called the compensatory rule) averages the scores on the tests and labels as gifted those children whose average score is above the cutoff. It allows a high score on one test to compensate for a low score on another. This is represented in idealized form in the image below (taken from p. 74 of the article).
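The compensatory rule averages first and compares second. In this sketch (cutoff of 120, again purely illustrative), a 135 on one test can pull a 105 on the other over the line:

```python
def mean_rule(scores, cutoff=120):
    """Compensatory ("mean") rule: qualify if the AVERAGE score meets the cutoff."""
    return sum(scores) / len(scores) >= cutoff

print(mean_rule([105, 135]))  # average 120: the high score compensates -> True
print(mean_rule([110, 125]))  # average 117.5 -> False
```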

Each method has its advantages and disadvantages. The “and” rule produces a more elite group of gifted children and has very few false positives (i.e., children labeled as gifted who really are not). The “or” rule produces a less elite group and has more false positives, but very few false negatives (i.e., gifted children who are not labeled). The “mean” rule falls in between, balancing and minimizing both kinds of error.
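This trade-off can be checked with a quick simulation. The setup below is my own classical-test-theory sketch, not the article’s code: true ability is standard normal, each observed score correlates with true ability at the square root of its .80 reliability, and “truly gifted” means the top ~5% (a z-score cutoff of 1.645):

```python
import random

def error_rates(rule, n=50_000, reliability=0.80, cutoff=1.645, seed=2):
    """Estimate false positive and false negative rates for a selection rule.

    rule: function (score_a, score_b, cutoff) -> bool
    Observed score = sqrt(rel) * true_ability + sqrt(1 - rel) * noise, so each
    score's squared correlation with true ability equals the reliability.
    """
    rng = random.Random(seed)
    w, e = reliability ** 0.5, (1 - reliability) ** 0.5
    true = [rng.gauss(0, 1) for _ in range(n)]
    a = [w * t + e * rng.gauss(0, 1) for t in true]
    b = [w * t + e * rng.gauss(0, 1) for t in true]
    gifted = [t >= cutoff for t in true]
    picked = [rule(x, y, cutoff) for x, y in zip(a, b)]
    fp = sum(p and not g for p, g in zip(picked, gifted)) / n
    fn = sum(g and not p for p, g in zip(picked, gifted)) / n
    return fp, fn

for name, rule in [("and", lambda x, y, c: x >= c and y >= c),
                   ("or", lambda x, y, c: x >= c or y >= c),
                   ("mean", lambda x, y, c: (x + y) / 2 >= c)]:
    fp, fn = error_rates(rule)
    print(f"{name:>4}: FP = {fp:.3f}, FN = {fn:.3f}")
```

Because everyone the “and” rule selects is also selected by the “or” rule, the “and” rule can never produce more false positives, and the “or” rule can never produce more false negatives.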

What’s interesting is that the accuracy of the “mean” rule increases as more test scores are combined. With a reliability of .80 for each test score, the “mean” rule correctly identifies approximately 76% of gifted students when averaging 2 tests, 80% when averaging 3 tests, and 84% when averaging 4 tests. This improvement with additional tests is *not* true of the “or” or the “and” rules.
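The same kind of simulation shows the “mean” rule improving as more tests are averaged. Again, this is my own sketch under classical-test-theory assumptions (normal true ability, reliability .80 per test, gifted defined as the top 5%), not the article’s exact procedure, so the numbers will only roughly track those above:

```python
import random

def mean_rule_sensitivity(k, reliability=0.80, top_pct=0.05, n=50_000, seed=1):
    """Share of truly gifted students (top 5% of true ability) identified
    when the "mean" rule averages k test scores."""
    rng = random.Random(seed)
    w, e = reliability ** 0.5, (1 - reliability) ** 0.5
    true = [rng.gauss(0, 1) for _ in range(n)]
    avg = [sum(w * t + e * rng.gauss(0, 1) for _ in range(k)) / k for t in true]
    cut_index = int(n * (1 - top_pct))
    true_cut = sorted(true)[cut_index]
    avg_cut = sorted(avg)[cut_index]  # select the same top fraction by average
    gifted = [i for i, t in enumerate(true) if t >= true_cut]
    hits = sum(1 for i in gifted if avg[i] >= avg_cut)
    return hits / len(gifted)

for k in (2, 3, 4):
    print(f"averaging {k} tests: sensitivity = {mean_rule_sensitivity(k):.2f}")
```

Averaging more tests raises the reliability of the composite (the Spearman–Brown pattern), which is why the sensitivity climbs with k while the single-cutoff “and” and “or” rules do not benefit the same way.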

Theoretically, false positives and false negatives can be all but eliminated by administering a large number of tests and averaging the scores through the “mean” rule. Hmmm . . . a large number of scores combined to form one overall score. That sounds like an intelligence test battery producing an IQ score. With higher accuracy and fewer false negatives and false positives, I don’t see a downside.

References

McBee, M. T., Peters, S. J., & Waterman, C. (2014). Combining scores in multiple-criteria assessment systems: The impact of combination rule. *Gifted Child Quarterly, 58*, 69-89. https://doi.org/10.1177/0016986213513794