Methodology
How we rank colleges, calculate acceptance likelihood, and build the “Straight Talk” hybrid sort.
1. Ranking Sources
Our college rankings are compiled from multiple authoritative sources, each contributing a different perspective on college quality:
2. Composite Ranking Algorithm
Each source ranks colleges on different scales. We normalize all ranks to a 0–100 scale, apply the weights listed above, and combine them into a single composite score:
1. Normalize each source: score = (1 - rank/totalRanked) × 100
2. Apply weights: composite = Σ(weight × normalizedScore)
3. Program-specific boost: +5–15 points for top departmental rankings
4. Final score clamped to 1–100
Colleges not ranked by a particular source receive the median score for that source, preventing unranked schools from being unfairly penalized or boosted.
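The normalization, weighting, median fallback, and clamping steps above can be sketched as follows. The source names, weights, and totals in the example are hypothetical, since the weight table itself is not reproduced here:

```python
def normalize(rank, total_ranked):
    """Rank 1 of N maps near 100; last place maps near 0."""
    return (1 - rank / total_ranked) * 100

def composite_score(ranks, totals, weights, source_medians, program_boost=0.0):
    """Weighted blend of normalized per-source scores.

    ranks: {source: ordinal rank, or None if that source doesn't rank the school}
    totals: {source: total number of colleges that source ranks}
    weights: {source: weight}, summing to 1.0
    source_medians: {source: median normalized score across all colleges},
                    the fallback for schools a source doesn't rank
    """
    total = 0.0
    for source, weight in weights.items():
        rank = ranks.get(source)
        score = normalize(rank, totals[source]) if rank is not None else source_medians[source]
        total += weight * score
    total += program_boost              # +5 to +15 for top departmental rankings
    return max(1.0, min(100.0, total))  # clamp to 1-100
```

For example, a school ranked #1 by one source but unranked by another is scored with that second source's median rather than a zero, exactly as described above.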
3. Program Strength Scoring
When you select a specific major, colleges are ranked by how strong their program is in that area. Program strength scores (0–100) are derived from:
- US News departmental rankings (where available)
- Niche program-specific grades
- Research output in the field (publications, grants)
- Faculty-to-student ratio in the department
- Graduate employment outcomes for that major
A score of 90–100 represents a top-10 nationally ranked program, 80–89 is top-25, 70–79 is top-50, and so on. Colleges without a particular program are excluded from results when that major is selected.
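The score-to-tier bands can be expressed as a minimal lookup. Bands below 70 are not spelled out in the methodology ("and so on"), so this sketch lumps everything under 70 together:

```python
def program_tier(score: float) -> str:
    """Map a 0-100 program-strength score to its national-ranking tier."""
    if score >= 90:
        return "top-10"
    if score >= 80:
        return "top-25"
    if score >= 70:
        return "top-50"
    return "below top-50"  # bands under 70 are unspecified in the methodology
```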
4. Acceptance Likelihood Model
The acceptance likelihood calculator estimates your chances of admission in three stages:
SAT and GPA are compared to the college's admitted student profile using sigmoid functions, weighted equally (50/50). Your SAT is measured against the school's 25th/75th percentile midpoint; GPA is measured against their average admitted GPA.
Rigor only helps if your GPA backs it up. Taking all APs with a 3.8 GPA shows you challenged yourself and succeeded — a meaningful boost. Taking all APs with a 2.0 GPA means you struggled in hard classes — an actual penalty. This reflects how admissions officers actually read transcripts: rigor without performance is a red flag, not a strength.
Academic fitness is converted to a likelihood percentage through a sigmoid whose threshold and steepness vary continuously with acceptance rate. Ultra-selective schools (~4% acceptance) require very high fitness (~75+) for even a coin-flip chance, while open-admission schools have low thresholds. The parameters are interpolated smoothly between anchor points, avoiding cliff effects at arbitrary tier boundaries.
At highly selective schools, your academic fitness is the dominant signal. But at schools with high acceptance rates, the rate itself is more predictive — a 90% school admits students well below its average profile. For schools above 40% acceptance, the model blends the acceptance rate into the likelihood with increasing weight (up to 60%), so below-average stats at a 90% school still yield a realistic Safety-level result rather than an artificially low score.
Even a perfect applicant can't guarantee admission at a 4% acceptance school — essays, extracurriculars, legacy, and luck dominate at that level. A smooth, monotonically increasing ceiling caps likelihood: ~53% for ultra-selectives, ~77% for mid-range schools, approaching 95% for high-acceptance schools. This ensures a school with a higher acceptance rate always has a higher ceiling than a more selective one. The floor for very poor stats at selective schools is ~1%.
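The stages above can be sketched in runnable form. The anchor thresholds are the methodology's own; the steepness values and the linear interpolation between anchors are assumptions, since the text only says parameters are "interpolated smoothly":

```python
import math

# (acceptance rate %, fitness threshold, sigmoid steepness).
# Thresholds are the methodology's anchor points; steepness values
# are illustrative assumptions.
ANCHORS = [(5, 75, 0.10), (17, 60, 0.09), (37, 45, 0.08), (75, 30, 0.07)]

def interp_params(acc_rate):
    """Linearly interpolate (threshold, steepness) between anchor points."""
    if acc_rate <= ANCHORS[0][0]:
        return ANCHORS[0][1], ANCHORS[0][2]
    if acc_rate >= ANCHORS[-1][0]:
        return ANCHORS[-1][1], ANCHORS[-1][2]
    for (a0, t0, s0), (a1, t1, s1) in zip(ANCHORS, ANCHORS[1:]):
        if a0 <= acc_rate <= a1:
            f = (acc_rate - a0) / (a1 - a0)
            return t0 + f * (t1 - t0), s0 + f * (s1 - s0)

def academic_fitness(sat_fit, gpa_fit, rigor, gpa):
    """Stage 1: 50/50 SAT/GPA fit plus the rigor adjustment."""
    rigor_adjust = (rigor * max(0.0, gpa - 2.5) * 4     # benefit: rigor backed by GPA
                    - rigor * max(0.0, 3.0 - gpa) * 6)  # penalty: rigor without performance
    return sat_fit * 0.5 + gpa_fit * 0.5 + rigor_adjust

def likelihood(fitness, acc_rate):
    """Stages 2-3: sigmoid conversion, acceptance-rate blend, ceiling and floor."""
    threshold, steepness = interp_params(acc_rate)
    academic = 100 / (1 + math.exp(-(fitness - threshold) * steepness))
    blend = min(0.6, max(0.0, (acc_rate - 40) / 80))  # 0 at <=40%, 0.6 at >=88%
    raw = (1 - blend) * academic + blend * acc_rate
    ceiling = min(95, 48 + 50 * (1 - math.exp(-acc_rate / 35)))
    return max(1.0, min(ceiling, raw))
```

Even an arbitrarily high fitness at a 4%-acceptance school tops out near the ~53% ceiling, while very poor stats at a selective school bottom out at the ~1% floor.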
```
fitness = satFit × 0.5 + gpaFit × 0.5 + rigorAdjust

rigorAdjust = (rigor × max(0, gpa - 2.5) × 4)   // benefit
            - (rigor × max(0, 3.0 - gpa) × 6)   // penalty

academic = sigmoid((fitness - threshold) × steepness)

threshold & steepness interpolated between anchor points:
  5% acc → threshold 75 | 17% → 60 | 37% → 45 | 75% → 30

accBlend = clamp((accRate - 40) / 80, 0, 0.6)   // 0 at ≤40%, 0.6 at ≥88%
likelihood = (1 - accBlend) × academic + accBlend × accRate

ceiling = min(95, 48 + 50 × (1 - e^(-accRate/35)))
  4% → ~53% | 20% → ~62% | 50% → ~80% | 90% → ~95%
```
5. Likelihood Labels
At schools like MIT, Stanford, and the Ivies, nearly every applicant already has a near-perfect GPA and top SAT scores. Academics are table stakes, not a differentiator. The actual selection at these schools is driven by factors our model cannot measure: essays, extracurriculars, recommendations, leadership, demonstrated passion, legacy status, recruited athletes, and institutional priorities. A student with a 3.95 GPA and 1550 SAT is competitive on paper but is still more likely to be rejected than accepted at a 4% school.
This means our likelihood labels are least reliable for the most selective schools. A “Possible” at Harvard is very different from a “Possible” at a 40% acceptance school. At ultra-selective schools, treat our labels as measuring academic competitiveness only — the holistic factors that actually decide admissions are beyond what any stats-based model can predict.
6. "Straight Talk" Sorting
Default sorting ranks colleges by program strength for your selected major (or overall ranking when no major is selected). When you activate “Straight Talk,” a hybrid score adjusts rankings based on your acceptance likelihood:
```
qualityScore = programStrength (with major) or rankingScore (no major)

if likelihood ≥ 60%:   hybridScore = qualityScore                              // pure quality
elif likelihood ≥ 20%: hybridScore = qualityScore × (0.5 + 0.5 × (l - 20)/40)  // gradual penalty
else:                  hybridScore = qualityScore × (0.1 + 0.4 × l/20)         // heavy penalty

// l = likelihood as a percentage
```
With a major selected, schools are ranked by program quality tempered by your chances. Without a major, the best overall colleges you can realistically get into rise to the top. Reach schools receive a moderate penalty and long shots are pushed down but never hidden.
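The hybrid scoring above can be sketched directly; note that the two penalty bands meet continuously at the 20% and 60% boundaries:

```python
def hybrid_score(quality_score: float, likelihood: float) -> float:
    """'Straight Talk' sort key: quality tempered by admission likelihood (%)."""
    if likelihood >= 60:
        return quality_score                                          # pure quality
    if likelihood >= 20:
        return quality_score * (0.5 + 0.5 * (likelihood - 20) / 40)   # gradual penalty
    return quality_score * (0.1 + 0.4 * likelihood / 20)              # heavy penalty
```

A top program with only a 20% chance keeps half its score; even a long shot at 0% retains 10%, so it is demoted but never hidden.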
7. Data Currency
College data was compiled for the 2025–2026 academic year using the most recently published figures from each source. SAT ranges, acceptance rates, and tuition figures reflect the latest available admissions cycle. Rankings are updated when major ranking organizations publish new editions.
8. GPA Data Methodology
The acceptance likelihood model compares your GPA to each college's average admitted GPA. All GPA values in our system are unweighted (4.0 scale). Sourcing accurate unweighted GPAs is surprisingly difficult — colleges report GPA data inconsistently, and many don't report it at all.
We categorize colleges into three groups based on what they publicly disclose, and use a different methodology for each:
Group 1 colleges publish unweighted GPA data in their Common Data Set (CDS), usually in section C11 or C12. We use these figures directly. Sources include official CDS filings and institutional research pages.
Group 2 colleges (e.g. Harvard, UNC, Georgia Tech, Maryland) report only weighted GPA, which can exceed 4.0 and isn't directly comparable. For these schools, we estimate unweighted GPA using peer matching: we find the 3 most similar colleges that report both weighted and unweighted GPAs (matched by acceptance rate and weighted GPA), compute their weighted-to-unweighted gap, and apply an inverse-distance-weighted average of those gaps.
For each weighted-only school:
1. Find the 3 nearest peers from Group 1 (by acceptance rate + weighted GPA)
2. Compute each peer's gap = weightedGPA - unweightedGPA
3. Take a weighted average of the gaps (closer peers count more)
4. estimatedUW = weightedGPA - weightedAvgGap
5. Cap the result at 4.0, floor at 3.0
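The peer-matching steps can be sketched as below. The distance metric (and its scaling of acceptance rate vs. GPA) is an assumption, since the methodology only says peers are matched by acceptance rate and weighted GPA:

```python
def estimate_unweighted_gpa(weighted_gpa, acc_rate, peers, k=3):
    """Estimate unweighted GPA for a weighted-only school via peer matching.

    peers: list of (acc_rate, weighted_gpa, unweighted_gpa) tuples for
           Group 1 schools that report both figures.
    """
    def dist(p):
        # Scale acceptance rate (0-100) down so GPA differences (~0-1)
        # contribute on a comparable scale. This scaling is illustrative.
        return ((p[0] - acc_rate) / 100) ** 2 + (p[1] - weighted_gpa) ** 2

    nearest = sorted(peers, key=dist)[:k]
    # Inverse-distance weights; epsilon avoids division by zero on exact matches.
    ws = [1 / (dist(p) + 1e-9) for p in nearest]
    gaps = [p[1] - p[2] for p in nearest]           # weighted minus unweighted gap
    avg_gap = sum(w * g for w, g in zip(ws, gaps)) / sum(ws)
    return max(3.0, min(4.0, weighted_gpa - avg_gap))  # floor 3.0, cap 4.0
```

A school whose profile exactly matches a peer inherits (approximately) that peer's gap, since the inverse-distance weight of an exact match dominates the average.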
Group 3 covers elite schools like MIT, Yale, Columbia, and many top liberal arts colleges that leave the GPA sections of the CDS blank entirely. For these, we cross-reference multiple third-party sources (CollegeSimply, Clastify, CollegeVine, PrepScholar, Admissionado, CampusReel) and use the consensus estimate. Values above 4.0 from any source are flagged as weighted and excluded. Where available, official class-rank data (e.g. “93% in top decile”) is used as a sanity check.
All GPA values were last validated in February 2026 using 2024–2025 CDS data where available. Because Group 2 and Group 3 values are estimates, they carry more uncertainty than Group 1 values. We err on the side of slightly conservative estimates (rounding down rather than up) to avoid overstating difficulty.
9. Limitations & Disclaimers
This tool is for exploration, not official admissions advice.
- Acceptance likelihood is a statistical estimate, not a prediction. Holistic admissions factors (essays, extracurriculars, recommendations, legacy, athletic recruitment, demographics) are not captured.
- Program strength scores are approximations based on publicly available data and may not reflect recent changes in faculty, funding, or curriculum.
- Tuition and financial aid vary significantly by individual circumstances. Published tuition is the sticker price; actual cost after aid is often much lower.
- Rankings inherently simplify complex institutions into single numbers. A college ranked #50 may be a better fit than one ranked #10 depending on your specific goals.
- Always consult official college admissions offices and financial aid calculators for the most current and personalized information.