Post by Deleted on Jan 15, 2023 19:55:17 GMT -5
Since we won't be getting admission statistics anymore, I've been trying to understand why that may be and how OVC is now calculating our admission averages. I believe they have switched to using Z-scores in the way many medical schools do, but this unfortunately means the importance of the interview is much lower now than in previous years (explanation below).
Prior to the introduction of CASPer, OVC calculated admission scores using only two numbers: the academic average (out of 100) and the interview score (out of 100). Because both scores were out of 100, it was easy to apply the 65:35 weighting to compute an applicant's final score and rank applicants.
What's important to notice, though, from the last year they published interview scores (class of 2023), is that interview scores spanned a much larger range (54.31-93.82%) than the academic averages of interviewed applicants (87.96-97.56%). This meant that even though the interview was worth only 35% of the final admission score, it had a larger impact than the academic average. For example, if the top academic applicant got the lowest interview score (97.56*0.65 + 54.31*0.35 = 82.4%), they would score much lower than the lowest academic applicant with the highest interview score (87.96*0.65 + 93.82*0.35 = 90.0%). In this way, the interview really opened up the playing field for people with weaker academics who could display dedication and commitment to the field in their interviews. In fact, for that year (class of 2023) the minimum final admission score was 85.03%, showing that even the academically lowest-ranked applicant still had a good chance at admission with a strong interview, and that the academically highest-ranked applicant could still easily be eliminated if their interview was not up to par.
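If my understanding of the old formula is right, the whole calculation fits in a few lines of Python. The function name and parameter defaults below are mine, not OVC's; the numbers are the published class-of-2023 extremes:

```python
# Sketch of the pre-CASPer ranking formula (65:35 academic:interview split).
# All inputs and outputs are percentages on a 0-100 scale.

def final_score(academic, interview, w_academic=0.65, w_interview=0.35):
    """Weighted final admission score."""
    return w_academic * academic + w_interview * interview

# Top academic average paired with the lowest published interview score:
top_academic_worst_interview = final_score(97.56, 54.31)   # = 82.4225, the 82.4% above

# Lowest academic average paired with the highest published interview score:
bottom_academic_best_interview = final_score(87.96, 93.82) # = 90.011, the 90.0% above

# The strong interviewee with the weakest academics outranks the
# weak interviewee with the strongest academics.
assert bottom_academic_best_interview > top_academic_worst_interview
```

Because both inputs live on the same 0-100 scale, the component with the wider spread (the interview) moves the final score more, even at a 35% weight.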
Unfortunately, the introduction of the CASPer test changed that. CASPer scores are provided to academic institutions as Z-scores (a Z-score measures how far you are from the mean, expressed in standard deviations: if you are 1 standard deviation below the mean your Z-score is -1, if you are 2 above it your Z-score is +2). A Z-score obviously cannot be averaged directly with the percentage scores from the academic average and the interview, so the easiest fix is to convert both percentage scores into Z-scores of their own and then combine all three. In a normal distribution, about 99.7% of the population falls within 3 standard deviations of the mean, so virtually all applicants will have Z-scores between -3 and +3 on every section. Now, because each score effectively ranges from -3 to +3 and the interview is weighted at only 35% of the final score, the academically lowest-ranked applicant can never overtake the academically highest-ranked one, even in the most extreme case: the best possible interview paired with the worst academics gives -3*0.65 + 3*0.35 = -0.9, while the worst possible interview paired with the best academics gives 3*0.65 + -3*0.35 = +0.9. It isn't even close.
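Here is a sketch of the post-CASPer calculation as I suspect it works. Everything here is my assumption: OVC has not published this formula, the function names are mine, and I've left the CASPer component out to isolate the academic-vs-interview comparison:

```python
# Hypothesized post-CASPer scoring: convert each component to a Z-score,
# then apply the same 65:35 weighting. (Assumed, not confirmed by OVC.)
from statistics import mean, stdev

def z_scores(values):
    """Convert raw percentage scores to Z-scores
    (standard deviations from the cohort mean)."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

def final_z(z_academic, z_interview):
    """Weighted final score on the Z-score scale."""
    return 0.65 * z_academic + 0.35 * z_interview

# Even at the extremes (roughly +/-3 SD), the academic component dominates:
best_academic_worst_interview = final_z(+3, -3)  # +0.9
worst_academic_best_interview = final_z(-3, +3)  # -0.9
```

On the percentage scale, the interview's wider spread gave it extra leverage; standardizing both components to the same unit removes that leverage, leaving only the 65:35 weights, so the academic Z-score always wins.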
If this is how the calculations are done, I believe this is a serious oversight by the admissions committee. To reduce the reliance on academics, they introduced the CASPer requirement. But this changed the way the scores are combined and has accidentally created a system wherein the top-ranked applicants pre-interview are virtually guaranteed a spot regardless of their interview performance. Similarly, it is now almost impossible for the lowest-ranked interviewed applicant to gain admission to the program, even if their interview performance is outstanding.
In a holistic admissions approach, the interview should be one of the main factors governing admission, now more than ever; yet these changes have reduced its value instead of increasing it.
OVC's decision not to release statistics is confusing, especially because one would expect increased transparency in a year when the admission requirements were modified. If OVC achieved its goal of reducing reliance on inflated academics, that should be visible in the statistics and would be important data to justify the changes. I believe OVC should be more open about how and why changes are made and about their impact on which students are ultimately admitted.