Think Tank Rankings and Ratings: How They Work
Think tank rankings and ratings are structured evaluation systems that assess policy research organizations on dimensions ranging from scholarly output and media visibility to funding transparency and ideological influence. These systems matter because policymakers, journalists, foundation officers, and academic researchers routinely rely on them to calibrate which institutions carry credible weight in specific policy debates. This page explains how ranking methodologies are built, what criteria drive scores, how different systems compare, and where the boundaries of these evaluations break down.
Definition and scope
A think tank ranking is a comparative scoring or classification of policy research organizations, produced by an independent body, academic team, or specialized publication, using defined methodological criteria. Ratings, by contrast, often apply to a single institution's credibility or quality without placing it in a ranked order against peers.
The most widely cited global ranking is the Global Go To Think Tank Index published annually by the Think Tanks and Civil Societies Program (TTCSP) at the University of Pennsylvania. The TTCSP index surveys over 1,950 experts — including policymakers, journalists, and academics — across more than 100 countries and generates ranked lists across 28 categories. These categories include Top Think Tanks Worldwide, Top Defense and National Security Think Tanks, and Top Think Tanks With the Best Use of Social Media, among others.
Domestic US rankings operate on a narrower scope. Organizations such as the University of Pennsylvania's TTCSP, academic journals, and watchdog groups like Transparify produce assessments focused on fiscal transparency, research quality, and donor disclosure practices. Transparify, which evaluates think tank transparency and donor disclosure, rated 200 think tanks across 47 countries on a five-star transparency scale measuring whether organizations publicly disclose their funding sources.
Understanding the scope of any ranking requires knowing whether it measures influence, output, rigor, or transparency — these are not the same variable, and conflating them produces misreadings of what a high score actually signifies.
How it works
Ranking systems apply one or more of four core methodological approaches:
- Survey-based peer nomination — Expert panels nominate institutions they consider authoritative in specific policy domains. The TTCSP Global Index uses this method, aggregating nominations from over 1,950 respondents to produce category rankings.
- Bibliometric analysis — Researchers count citations in academic journals, government reports, and legislative records. This method privileges institutions with formal publication pipelines over those focused primarily on media engagement.
- Media and legislative footprint analysis — Systems count mentions in major newspapers, congressional testimony appearances, and broadcast media. The Brookings Institution, for example, has consistently ranked first or second in TTCSP global rankings, a result partly attributable to its volume of congressional testimonies and press citations.
- Transparency auditing — Organizations like Transparify score institutions on the granularity of donor disclosure, assigning ratings from zero stars (no disclosure) to five stars (full, detailed disclosure). This approach is entirely distinct from research-quality evaluation.
Most composite rankings blend survey-based peer nomination with media- and legislative-footprint analysis, weighting media presence heavily. This creates a systematic bias toward larger, Washington-based institutions with established communications infrastructure. Think tank media and communications operations directly feed the inputs these rankings measure.
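The weighting logic behind a composite ranking can be sketched in a few lines. The function, weights, and input numbers below are illustrative assumptions for exposition only; they do not reproduce the TTCSP methodology or any published index.

```python
# Illustrative sketch of a composite think tank score that blends a
# normalized peer-nomination count with a media-footprint measure.
# Weights and inputs are hypothetical, not drawn from any real ranking.

def composite_score(nominations: int, max_nominations: int,
                    media_mentions: int, max_mentions: int,
                    w_survey: float = 0.6, w_media: float = 0.4) -> float:
    """Blend survey nominations and media footprint into a 0-100 score."""
    survey_component = nominations / max_nominations   # normalize to 0-1
    media_component = media_mentions / max_mentions    # normalize to 0-1
    return 100 * (w_survey * survey_component + w_media * media_component)

# Two hypothetical institutions with identical peer-nomination counts but
# very different media reach end up with very different composite scores.
tank_a = composite_score(nominations=800, max_nominations=1000,
                         media_mentions=9000, max_mentions=10000)
tank_b = composite_score(nominations=800, max_nominations=1000,
                         media_mentions=2000, max_mentions=10000)
print(round(tank_a, 1), round(tank_b, 1))  # 84.0 56.0
```

The gap between the two hypothetical scores comes entirely from the media term, which is the structural bias described above: institutions with larger communications operations pull ahead even when peer assessments are identical.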
Common scenarios
Policy area specialization rankings: A think tank may rank poorly overall but first within a narrow domain. The Cato Institute and Heritage Foundation, for instance, score differently depending on whether a ranking weights libertarian policy output or broad bipartisan citation counts. Consulting domain-specific lists — such as those covering top conservative think tanks or nonpartisan think tanks — produces more operationally useful results than relying on aggregate scores alone.
Foundation grantmaking decisions: Philanthropic foundations awarding grants to policy organizations frequently consult TTCSP rankings alongside independent fiscal audits. A five-star Transparify rating signals that donors and amounts are publicly disclosed, which some foundations treat as a prerequisite for funding.
Journalistic source evaluation: Reporters assessing whether to quote a think tank scholar consult ranking lists to gauge institutional standing. This feedback loop means high-ranked institutions attract more media invitations, which in turn raises their media-footprint score in future ranking cycles.
Academic hiring and fellowship programs: University departments and fellowship committees reviewing applicants from think tank internships and fellowships use institutional rankings as a proxy for the rigor of the candidate's prior research environment.
Decision boundaries
Rankings are not equivalent to peer review and carry specific limitations that determine when they are appropriate inputs for evaluation.
When rankings are informative: Comparing peer-nominated survey results across institutions within the same ideological family — for example, within the libertarian think tanks category — produces valid relative assessments. Rankings are also reliable for identifying which institutions have the highest congressional testimony frequency, a measurable structural fact documented in think tank congressional testimony records.
When rankings are unreliable: Survey-based systems are unreliable when used to compare institutions across ideological categories. A progressive institution and a conservative institution may receive identical TTCSP scores through entirely separate nomination pools, making cross-ideological comparisons methodologically invalid. Similarly, media-footprint rankings disadvantage newer organizations and those focused on think tank research methods that prioritize academic journals over press releases.
Transparency ratings vs. quality ratings: These two dimensions are independent. An institution may carry a five-star Transparify transparency rating and simultaneously produce research that criticism of think tanks literature identifies as advocacy-driven. Evaluating think tank credibility requires applying both dimensions simultaneously rather than treating either alone as dispositive.
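The point that transparency and quality are independent axes can be sketched as a simple two-dimensional screen. The thresholds, labels, and the notion of a numeric quality score are illustrative assumptions, not any published rubric; only the zero-to-five-star transparency scale mirrors Transparify's format.

```python
# Sketch of a two-axis credibility screen treating funding transparency
# (Transparify-style 0-5 stars) and an assumed 0-1 research-quality score
# as independent dimensions. Thresholds and labels are hypothetical.

def credibility_screen(transparency_stars: int, quality_score: float) -> str:
    """Classify an institution on two independent evaluation axes."""
    transparent = transparency_stars >= 4   # assumed disclosure threshold
    rigorous = quality_score >= 0.7         # assumed quality threshold
    if transparent and rigorous:
        return "transparent and rigorous"
    if transparent:
        return "transparent but advocacy-leaning"
    if rigorous:
        return "rigorous but opaque"
    return "opaque and advocacy-leaning"

# A five-star transparency rating alone does not establish research rigor.
print(credibility_screen(5, 0.4))  # transparent but advocacy-leaning
```

Because the two inputs vary independently, a high score on either axis says nothing about the other, which is why the screen checks both before returning a combined label.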
The /index for this reference network provides additional orientation across the full range of think tank topics covered, including how think tanks are funded and the role of dark money and think tanks in shaping institutional incentives that rankings often fail to capture.