ACT Explains Their Essay Scores
Hoping to clear up misconceptions about the new Writing scores, ACT has released a document titled “ACT Research Explains New ACT Test Writing Scores and Their Relationship to Other Test Scores.”
I don’t want to ruin this delightful saga for you, but you should know going in: there is no relationship. Beyond that, ACT says there’s no relationship between individual subject scores, either.
ACT addresses the discrepancy between the Writing scores and the other scores by acknowledging that a difference exists, but it argues that concern about that difference is rooted in a lack of communication about how scores should be understood.
“It is true that scores on the writing test were on average 3 or more points lower than the Composite and English scores for the same percentile rank during September and October 2016. Some students may have had even larger differences between scores. This is not unexpected or an indication of a problem with the test. However, the expectation that the same or similar scores across ACT tests indicate the same or similar level of performance does signal that ACT needs to better communicate what test scores mean and how to appropriately interpret the scores.”
Much of the article reads as an attempt by ACT to debunk the idea that their individual scores are very useful. For example, “the difference in scores across tests does not provide a basis for evaluating strengths and weaknesses….” The important thing to look at, they maintain, is the percentiles, because scores don’t really translate across subject lines. A 25 in Reading is not a 25 in Math, and you were silly to think, because they’re reported on the same scale and created by the same people, that there was any relationship between them. (OK, so they don’t actually call anyone silly. But they do say that “[t]ests are not designed to ensure that a score of 25 means the same thing on ACT Math as it does for ACT Science or Reading.” Check out that skilled usage of the passive voice there!)
There have always been some differences between the scores in different sections. Anyone who looks at a percentile chart will notice that. For the most part, they’re not huge. But “even larger differences are found when comparing percentiles between the new ACT subject-level writing score and the other ACT scores. For example, a score of 30 on the ACT writing test places that same student at the 98th percentile, a full 9% higher than the reading score [of 30]. Similarly, an ACT subject-level writing score of 22 is about 10 percentiles above the Composite or other ACT scores.”
Why change over to the new scale at all, then? Don’t you think that the new scale might, you know, encourage those comparisons? The ACT admits the 1-36 score for the Writing test “[makes] comparisons with the other scores much more tempting. Perhaps too tempting!”
ACT’s insistence on talking about the test like something that sprang into being all by itself and behaves in self-determining ways is giving me robot overlord concerns.
Didn’t they… design it? Didn’t they decide how the scoring would work? Unless the ACT test itself has become sentient and begun making demands, this doesn’t make any sense. Treating the scores as if they exist as something other than a product of a series of human decisions is absurd.
They put everything on the same scale. They’ve admitted that having things on an apparently identical scale invites comparison, and they’ve also said that “[c]omparisons of ACT scores across different tests should not be made on the basis of the actual score…” Oh, OK. What?
If the actual scores don’t really help in making comparisons (which is what standardized tests are for, after all – making comparisons), and everyone should just look at the percentiles, then why do we need the 1-36 scale at all?
In the document, ACT points out multiple times that this kind of variation is common on many tests. That doesn’t change the fact that they changed the Writing score to make it look like the other scores (except lower), which has caused confusion and will continue to do so. The math that turns an 8 into a 23 and a 9 into a 30 is neither intuitive nor set in stone, and if they wanted a given Writing score and the same Math or English score to land at the same percentile, they could make it so.
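To show how little magic that would involve, here is a rough sketch of a percentile-matching conversion. The percentile tables are entirely made up for illustration (they are not ACT’s published norms), but the mechanics are the point: look up the percentile attached to a Writing score, then hand back whichever 1-36 score sits closest to that same percentile on the other scale.

```python
# Illustrative only: hypothetical percentile tables, not ACT's actual norms.
# Shows how a raw Writing score could be placed on the 1-36 scale so that
# equal scaled scores mean equal percentiles across tests.

# Hypothetical percentile for each 1-36 Composite score.
composite_pct = dict(zip(range(1, 37), [
    1, 1, 1, 1, 2, 3, 5, 8, 12, 16, 21, 27, 33, 40, 47, 54, 60, 66,
    71, 76, 80, 84, 87, 90, 92, 94, 95, 96, 97, 98, 98, 99, 99, 99, 99, 100]))

# Hypothetical percentile for each raw Writing score (2-12).
writing_pct = {2: 1, 3: 3, 4: 8, 5: 18, 6: 35, 7: 55,
               8: 72, 9: 85, 10: 93, 11: 98, 12: 100}

def equate_writing(raw_score: int) -> int:
    """Map a raw Writing score to the 1-36 scale by matching percentiles."""
    target = writing_pct[raw_score]
    return min(composite_pct, key=lambda s: abs(composite_pct[s] - target))

print(equate_writing(8))  # 19 with these made-up tables: same percentile, same number
```

With tables built that way, a Writing 25 and a Composite 25 would, by construction, sit at the same percentile, which is exactly the property ACT says the current scores don’t have.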
Beyond that, as the ACT admits, “each test score includes some level of imprecision,” and that margin of error is considerably larger for the Writing test. They explain that “a score of 20 on ACT writing would indicate that there is a two-out-of-three chance that the student’s true score would be between 16 and 24.” (Put another way, this means there is a 33% chance that your “true score,” whatever that means, is more than 4 points away from what shows up on your report.) The ACT cautions against using the Writing score on its own to make decisions, suggesting using the ELA score instead – an average of Reading, English, and Writing. But if the Writing score on its own isn’t a useful metric, then perhaps it shouldn’t be reported?
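For the curious, that “two out of three” falls out of a standard measurement-error model. Assuming, and this is my reading rather than anything ACT spells out, that the 4-point band corresponds to about one standard error of measurement under a normal error model:

```python
# Back-of-the-envelope check, assuming a normal error model with a standard
# error of measurement (SEM) of about 4 points, inferred from the 16-24
# example band above rather than from any figure ACT publishes here.
from math import erf, sqrt

def prob_within(points: float, sem: float) -> float:
    """P(|true score - reported score| <= points) for normally distributed error."""
    return erf(points / (sem * sqrt(2)))

sem = 4.0
print(f"within +/- 4 points:  {prob_within(4, sem):.0%}")      # ~68%, the "two out of three"
print(f"outside +/- 4 points: {1 - prob_within(4, sem):.0%}")  # ~32%, roughly the 33% above
```

In other words, the imprecision ACT describes is exactly what you would expect if a single essay score carries an error bar of about four points in either direction.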
Further complicating all of this, the ACT blames some of the low scoring on students’ unfamiliarity with the new version of the test: “students are only beginning to get experience with the new writing prompt. Research suggests that as students become increasingly familiar with the new prompt, scores may increase[.]”
The ACT claims that the essay tests “argumentative writing skills that are essential for college and career success.” If that were actually the case, familiarity with the test wouldn’t matter. Unless the ACT is arguing that as students become more familiar with this particular aspect of the test, they will also become more prepared for college. I suggest that those two factors – college readiness and ACT essay scores – don’t have a relationship, either.
Author: Audrey Hazzard, Premier-Level Tutor