According to the IIT Kanpur website, there has been "A devastating and rather harsh exposé of the 'scientific temper' (or the lack of it) shown by members of the IIT Council. 'JEE 2013: An Open Letter to Prof. Barua' by Prof. Dheeraj Sanghi, IIT Kanpur."
Elsewhere on the same website we have "A very strong response by Prof. Dheeraj Sanghi, IIT Kanpur to the claims made by those defending the IIT Council proposal." Harsh and strong: I agree.
I have no desire to engage in an argument regarding my motives and my behaviour. I only wish to state that I reject all allegations of lying. I have defended the proposal because I think it is the best alternative under the present circumstances. I was not responsible for delaying the Aptitude Test; in fact, not just I but the IITG Senate wanted an Aptitude Test (see the IITG Senate resolution of Apr 25). It should come in later years. I have no hidden agenda, and I do not have any "irresistible urge to manage other IITs" (ridiculous! way beyond decency!).
I forgive Dheeraj for his trespasses for he knows not what
……
But I would like to focus on the meat of the proposal:
On the ISI Report and Percentile Ranks
Dheeraj Sanghi has stated that:

They gave a report which said that more studies needed to be done with data from more boards for more years. This had two problems. One, MHRD would have taken a long time to get all this data. ….
He has obviously not read the report or has not understood
its contents.
The ISI report made the following assumptions (the report is
available here):
2 Assumptions needed for comparability of different board scores

The following assumptions would have to be made in order to make the aggregate scores of different boards comparable.

• Aggregate scores are expected to increase from less meritorious to more meritorious students in any particular subject.
• Merit distribution is the same in all boards.
The first assumption is that Boards award marks according to merit. This has been challenged by many with respect to State Boards, not with any analysis of data (I am not sure such an analysis is even possible, as merit cannot be established objectively: it has to be something society by and large agrees upon), but with anecdotal evidence of corruption, fraud, etc.
The second assumption is that meritorious students are uniformly distributed across all Boards (I have used the argument of the law of large numbers in relation to the population base of Boards, and not the size of the Boards, to argue in favour of this). This has been challenged by some on the basis of the varying sizes of Boards, but again without any analysis of data (and again, I am not sure such an analysis is even possible).
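To make the law-of-large-numbers argument concrete, here is a minimal Python sketch of my own (not from the ISI report); the board sizes and the uniform "merit" model are purely illustrative assumptions. It draws candidates for a large board and a much smaller board from the same underlying merit distribution and checks that the proportion of highly meritorious students comes out nearly the same in both.

```python
import random

random.seed(0)

# Purely illustrative board sizes: one large board and one much smaller one,
# both drawing candidates from the same underlying merit distribution
# (merit modelled here as a uniform draw in [0, 1) -- an assumption).
board_sizes = {"LargeBoard": 1_000_000, "SmallBoard": 50_000}

for board, n in board_sizes.items():
    merits = [random.random() for _ in range(n)]
    # Fraction of candidates whose merit exceeds the 90th-percentile level.
    top_fraction = sum(m > 0.9 for m in merits) / n
    print(f"{board}: fraction with merit > 0.9 = {top_fraction:.4f}")

# With population bases this large, both fractions come out close to 0.10:
# meritorious students are (approximately) uniformly represented across
# boards, irrespective of the boards' relative sizes.
```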
The ISI report then goes on to state (bold mine):

3 Stability of board scores

Under the above assumptions, the percentile ranks of students in different board examinations become directly comparable. It would be of interest to observe how the raw aggregate scores relate to the percentile ranks, and how these relationships vary from year to year as well as across different boards.
There is therefore no need for any more analysis of data of other Boards to establish this assertion. I throw an open challenge to anyone to refute this assertion. It is so simple, what is there to refute? Any class IX student should be able to understand it. Unfortunately, many well-respected IIT faculty have failed to understand this. Maybe they have not read the ISI report (the full report is enclosed in another post).
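To see concretely why no cross-board data is needed for this assertion, here is a small illustrative Python sketch of my own (the boards, marks, and the "less than or equal to" ranking convention are assumptions for illustration only): a student's percentile rank is computed entirely from the ordering of aggregate scores within his or her own board.

```python
def percentile_rank(board_scores, student_score):
    """Percentile rank of a student within his or her own board:
    the percentage of candidates in that board scoring less than
    or equal to the student's aggregate score (one common convention)."""
    at_or_below = sum(s <= student_score for s in board_scores)
    return 100.0 * at_or_below / len(board_scores)

# Two hypothetical boards with very different marking patterns.
board_a = [92, 88, 85, 80, 76, 70, 65, 60, 55, 40]   # "generous" marking
board_b = [75, 68, 64, 61, 58, 52, 49, 45, 41, 30]   # "stricter" marking

# The raw aggregates (92 vs. 75) are not comparable across boards,
# but the toppers of both boards receive the same percentile rank.
print(percentile_rank(board_a, 92))  # 100.0
print(percentile_rank(board_b, 75))  # 100.0
```

Nothing in this computation refers to any other board's data; comparability across boards rests only on the two assumptions quoted above.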
Now the ISI report does talk about analysing the data of other Boards. Why? First of all, they repeat the above assertion in Section 4 (bold mine):
4 Criterion for selection

Under the two assumptions mentioned in Section 2, the percentile ranks of the students computed from aggregate scores are comparable across different boards and years. Any monotone transformation of the percentile ranks is also appropriate for comparison, as long as the same transformation is used across different boards and years. Let us now consider a few such transformations.
They then go on to consider a transformation (bold mine):
Any of the curves in the first figure is a monotone function of the percentile rank. One can use any one of them, say CBSE 2007, as standard. If the same transformation of percentile ranks is used for other boards and years, then the resulting modified score of any student of any board in any year can be regarded as the aggregate score, which could have been obtained by that student if he/she had appeared for the CBSE examinations in 2007. Thus, the transformed scores provide a common basis for comparison.

A feature of such a transformation is that, after this transformation, the scores are not evenly distributed throughout the available range of scores. In particular, when the scale of the CBSE 2007 aggregate score is used, less than 5% of the students have scores in the range of 90% to 100% of the maximum score. On the other hand, more than 10% of the students (spanning over the percentile range of 50 to 62) have scores squeezed in the narrow range of 65% to 70% of maximum score. This would lead to a loss of discriminating power in that percentile range, particularly if the board scores are used only as a component in a weighted selection criterion involving multiple components.

For maximal discrimination over the requisite range of percentile ranks, it is imperative that the scores have the uniform distribution over that range. This may be achieved if the percentile ranks themselves are used as scores. If there is a threshold percentile, say 75%, then the available range is maximally utilized by using the following linear transformation of the percentile rank:

((Percentile Rank of Student − 75) / (100 − 75)) × 100 -- (1)

According to this scale, a student with percentile rank 75 receives the score 0, a student with percentile rank 90 receives 60, and the topper receives 100. Similar computations can be done for other choices of the threshold percentile.
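Transformation (1) is easy to check mechanically. The sketch below is mine, not the ISI's code; it simply reproduces the worked values quoted above, and also shows that setting the threshold to 0 returns the percentile rank itself, which is relevant to the proposal discussed further down.

```python
def transformed_score(percentile_rank, threshold=75.0):
    """Linear transformation (1): maps the percentile range
    [threshold, 100] onto the score range [0, 100]."""
    return (percentile_rank - threshold) / (100.0 - threshold) * 100.0

print(transformed_score(75))                # 0.0   : student at the threshold
print(transformed_score(90))                # 60.0  : matches the report's example
print(transformed_score(100))               # 100.0 : the topper
print(transformed_score(90, threshold=0))   # 90.0  : no cut-off, i.e. the percentile rank itself
```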
Then come the recommendations, which have caused some confusion, as some eminent folks seem to have read only the recommendations and not the rest of the report.
5 Recommendations

(a) The above analysis regarding stability of board scores should be carried out for all the boards over a longer period of time.

(b) If the reported stability of the board scores is found to hold generally, then a transformed percentile rank with a suitable cut-off, as described in (1), may be used as a score representing performance in the board examination, for the purpose of admission to tertiary education.

(c) The different boards should be asked to indicate the percentile rank of each student in the mark sheet.

(d) In order to prepare a formal and reliable basis for selection at the tertiary level, educational institutions at that level, including the IITs, should be asked to provide to the HRD ministry a statement of marks obtained by each graduating student, together with the student's score in the admission test of that institution (if any), the board score at the class XII level and the name of the board.
Now why is the analysis mentioned in (a) above required? Because of recommendation (b)! A transformation is recommended only if the analysis of (a) is done. But if there is to be no transformation, and the percentile ranks themselves are used as scores, then there is no need to analyse any further data, since the two assumptions already make the percentile ranks comparable. One may point out that since ISI did not propose this, there must be a problem with using percentile ranks as scores. I think they wanted better discrimination and so recommended only a transformation. I confess I am not able to give a clearer answer to this. But I am confident that what has been proposed is sound (see below).
Now to the formula in the proposal. The only difference is that ISI had recommended that a suitable cut-off be used, whereas the proposal uses no cut-off. Why was this done? Because, with reservations, any cut-off could adversely affect the filling up of reserved seats. Further, while a cut-off would improve the level of discrimination, it was felt that since the proposal was likely to meet some resistance, it was better to reduce the discrimination and let the exams be the discriminating components. So there was no "Barua formula", and there was nothing sinister about the proposal. The "formula" itself, which was not given by ISI (they might have felt that they would be insulting the readers of their report if they did so – in hindsight, they should have done so!), is a standard one that can be found in any textbook on Statistics. I cannot be given credit for this (a case of reverse plagiarism?).