Early last spring, a neighbor friend texted to ask if I'd be willing to chat with her about her daughter's pending college choice. A top student at our suburban Chicago high school, the young woman had plans to become a physician, and had been accepted at both Johns Hopkins University and an elite New England liberal arts college to study pre-med.
I’m a Hopkins alum, but as a quantitative social science major, couldn’t have been more distant from the pre-med world that consumes over 20% of the JHU student body. My closest encounters: a junior year pre-med roommate and a senior year job at the medical school residence. I was, however, often the beneficiary of a Hopkins halo that presumed alums were knowledgeable about health care. That halo probably got me my first job out of grad school at a major medical center in Chicago.
When the neighbor and I met, her big question was why in the world an aspiring pre-med would choose JHU over the elite liberal arts college when the med school placement rate was 90% for the latter against only 63% for the former.
Not so fast, I thought, as my evidence-based skepticism kicked in, raising a torrent of questions. I remembered seeing comparable JHU figures on the Hopkins website a few years ago, while the neighbor said the 90% for the liberal arts college came from a 15-year-old College Confidential thread that only she could confirm.
In the research methodology world, a comparison such as this one can be likened to a group or treatment effect contrast where the “treatment” is the choice of institution and the “outcome” is the % med school acceptance. Two questions I always ask surrounding such comparisons are 1) how were the samples for the disparate groups selected and do they validly represent the population(s)? and, relatedly, 2) are there factors other than the treatment that confound and hence potentially misguide an interpretation of the results — especially in real world “trials” sans benefit of random assignment?
It turns out those considerations are probably decisive in this case. The 15-year-old 90% elite liberal arts rate is stale and almost certainly lower now than it was then, owing to a significant increase in med school applications with a less-than-comparable growth in acceptances. All pre-med programs are affected by this trend of declining acceptance rates.
A second critical factor pertains to the sample selection. At the elite liberal arts college, the 90% figure was apparently tallied as 18/20 first-semester senior applicants, while the 63% JHU figure included not only first-semester seniors but also post-baccalaureate and other graduate re-applicants, who are accepted to med school at lower rates than seniors. Indeed, the current acceptance rate for JHU first-semester seniors is 80%, and it's this percentage that should be compared to the 90%, which itself is probably lower now given the rising applicant pool.
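To see how the denominator mix alone can manufacture the gap, here's a back-of-the-envelope sketch. The 80% senior rate and the 63% pooled rate come from the discussion above; the applicant counts and the re-applicant acceptance rate are hypothetical numbers chosen only to make the arithmetic concrete.

```python
# Illustrative only: how pooling subgroups with different acceptance
# rates pulls down the overall rate. Counts and the re-applicant
# rate below are assumptions, not actual JHU data.
seniors = 250          # assumed first-semester senior applicants
senior_rate = 0.80     # JHU senior acceptance rate cited above
reapplicants = 150     # assumed post-bacc / graduate re-applicants
reapp_rate = 0.35      # assumed lower rate for re-applicants

accepted = seniors * senior_rate + reapplicants * reapp_rate
pooled_rate = accepted / (seniors + reapplicants)
print(f"pooled acceptance rate: {pooled_rate:.0%}")  # ~63%
```

With these (made-up) counts, a pool of 80% seniors and 35% re-applicants lands right at the 63% overall figure, even though no individual senior's odds are anywhere near that low.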
Another consideration I noted to my neighbor is that the 15-year-old 90% number was based on 20 applicants, while the annual JHU denominator is more than ten times that size. I’m statistically much more comfortable with the stability of the larger numbers in the Hopkins pool than those of the elite liberal arts college. In fact, I wouldn’t be surprised if the liberal arts college acceptance rate varied noticeably from year to year.
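The small-sample point can be made concrete with a quick simulation, a sketch in which the "true" rates and pool sizes are assumptions. Drawing repeated binomial samples shows how much a rate estimated from 20 applicants bounces around compared with one estimated from 250.

```python
import random

random.seed(42)

def simulate_rates(n_applicants, true_rate, n_years=1000):
    """Observed acceptance rates over many simulated years."""
    rates = []
    for _ in range(n_years):
        accepted = sum(random.random() < true_rate
                       for _ in range(n_applicants))
        rates.append(accepted / n_applicants)
    return rates

def std_dev(rates):
    mean = sum(rates) / len(rates)
    return (sum((r - mean) ** 2 for r in rates) / len(rates)) ** 0.5

small = simulate_rates(20, 0.90)   # liberal-arts-sized pool (assumed)
large = simulate_rates(250, 0.80)  # JHU-sized pool (size assumed)

print(f"std dev, n=20:  {std_dev(small):.3f}")
print(f"std dev, n=250: {std_dev(large):.3f}")
```

Under these assumptions, the 20-applicant rate swings by roughly six or seven percentage points year to year, while the 250-applicant rate moves by only about two and a half, which is exactly why I trust the stability of the larger Hopkins denominator.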
When all was said and done, I was confident that the purported difference in acceptance rates was nothing more than an artifact of the methodology used to compute the numbers. And I think my neighbor agreed. My advice to her: have your daughter visit both schools and choose the one that seems the best fit. And be ever the wary data scientist when adjudging the plausibility of evidence.