Grades are posted below, but first, a few general trends. (Those viewing on phones should scroll rightward to see the entire spreadsheet.)
This section is rather self-explanatory. The ACT English Sub-Score only reflects the first 29 questions, taken directly from the ACT. It does not include the IXL-derived grammar questions from after the vocabulary section; that sub-score is not reported.
One observation stands out to me here: despite all the schvitzing about vocabulary, you performed just as well on that section as on the test overall. On the whole, vocabulary did not hurt your score.
I have mixed feelings about the 23 median ACT score. On the one hand, it is significantly higher than the Jennings average of 15. Scholarships start becoming available between 23 and 25, but they increase dramatically as you enter the high 20s and low 30s. So this strikes me as a solid starting point in our journey to fail better.
On the other hand, the ACT portion was a) not timed and b) at the beginning of the test. The scores here therefore don't reflect the toll that rushing and test fatigue take. That was intentional—I wanted to see first what you knew, not how quickly you could reproduce it. Eventually, our testing conditions will move closer to the real ACT's. Before that, though, we will practice techniques for increasing speed and stamina.
Fairness & Rigor
I was relieved to see an average fairness rating of 2.1, which signifies “basically fair”, and an average rigor rating of 2.4, which signifies somewhere between “same difficulty as my average test” (3) and “harder than my average test” (2). I would have been distraught if you had found the test either very unfair (4) or much harder than your other tests (5). I try to challenge you as a holistic person, not only academically but organizationally and in terms of character. Raising the bar a little above your other classes stimulates multidimensional growth.
Variability & Curve
I explain standard deviations and curving in more detail in the Upkeep section of my Materials page (Materials > Upkeep > Exams). As a quick synopsis, the standard deviation gives you a sense of how spread out the raw scores were. Small standard deviations suggest a test was either too easy (all the grades are clumped at the top) or too hard (clumped at the bottom). The standard deviation for this test was 12, almost exactly the same as on the past two tests. This shows consistency in test rigor—strong performance was possible but not guaranteed to everyone.
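For anyone curious how this works mechanically, here is a small sketch using Python's standard library. The two score lists are invented for illustration; they are not your actual scores. The point is just that clumped scores produce a small standard deviation and spread-out scores produce a large one.

```python
import statistics

# Hypothetical raw scores, made up for this example (not real class data).
clumped = [88, 90, 91, 92, 93]   # everyone near the top -> small spread
spread = [55, 65, 75, 85, 95]    # scores fan out -> large spread

# pstdev() treats the list as the whole population of scores.
print(statistics.pstdev(clumped))  # small number
print(statistics.pstdev(spread))   # much larger number
```

A standard deviation around 12, as on this test, sits comfortably in "spread out" territory: scores differed meaningfully, so the test separated stronger and weaker performances.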
“Strong performance” is a relative term, which brings me to the curve. I distributed grades according to a formula that maintained relative rankings but boosted everyone’s score. The curve was designed with the following stipulation: the lowest score was made a 50 (a 15-point bump for that student) and the average was made a 75. This strikes me as fair on two counts. First, 75% equals a C, or “satisfactory”, which is intended to signify the average student. Second, a 50% minimum means that no student can score so low that it almost guarantees his or her failure in my class. Exams constitute only 60% of your overall grade, so that lowest scorer is guaranteed at least a 30% for the quarter. To pass, he or she needs to earn only half of the 40 remaining class & homework points. (Homework points = 20% of the quarter grade; class points = the final 20%.)
Once the last student takes the test, we will review answers as a class. You will be provided with a copy of your answers.
Individuals are identified below by their class & presidential pseudonym. Some of you have begun using the same president; in such cases, I distinguished one student by pod. If you are still unsure which row is yours, you can confirm your identity by checking your grade on SIS.