Research Methods Used by Think Tanks
Think tank research methods determine the credibility, influence, and policy relevance of the analysis these organizations produce. This page covers the full range of methodological approaches — from quantitative modeling to qualitative case studies — that think tanks deploy to generate policy-relevant knowledge. Understanding these methods is essential for evaluating the strength of think tank findings, recognizing the limits of particular approaches, and distinguishing rigorous analysis from advocacy dressed as research. The page on evaluating think tank credibility extends this framework into practical assessment.
- Definition and scope
- Core mechanics or structure
- Causal relationships or drivers
- Classification boundaries
- Tradeoffs and tensions
- Common misconceptions
- Checklist or steps (non-advisory)
- Reference table or matrix
Definition and scope
Think tank research methods are the systematic procedures by which policy research organizations gather, analyze, and synthesize evidence to produce findings on public policy questions. The term encompasses both primary research — original data collection and analysis — and secondary research, which synthesizes, reanalyzes, or critiques existing bodies of knowledge.
The scope of methods used across the think tank sector is broader than commonly understood. The major US think tanks directory includes organizations ranging from the Brookings Institution, which employs formal econometric modeling, to the Cato Institute, which relies heavily on doctrinal legal analysis and economic theory, to the Urban Institute, which conducts large-scale microsimulation modeling using datasets from the U.S. Census Bureau and the Internal Revenue Service. This methodological diversity reflects the range of policy domains — tax, health, defense, education, immigration, and others — covered under the think tank policy areas framework.
Unlike university research centers, which are typically constrained by disciplinary conventions and peer-review timelines, think tanks often operate under compressed production schedules and explicit policy relevance mandates. This shapes their methodological choices in ways that diverge from academic norms, a distinction explored further on the think tank vs. university research center comparison page.
Core mechanics or structure
Think tank research generally proceeds through five identifiable phases regardless of the specific method deployed:
1. Problem framing. Researchers define the policy question, establish the scope of analysis, and identify what counts as relevant evidence. Problem framing choices — which populations are studied, which time horizons are applied, which counterfactuals are considered — embed normative assumptions that affect findings downstream.
2. Data acquisition. Think tanks draw on administrative data (federal and state government records), survey data from sources such as the Census Bureau's Current Population Survey, proprietary datasets licensed from third parties, and original data collection through surveys, interviews, or freedom-of-information requests.
3. Analysis. The analytical core varies by method. Quantitative projects use statistical regression, microsimulation, cost-benefit analysis, or econometric modeling. Qualitative projects use structured case comparison, process tracing, expert elicitation, or content analysis. Mixed-methods projects combine both in sequence or in parallel.
4. Peer or internal review. Major think tanks — including the RAND Corporation, Resources for the Future, and the Urban Institute — operate formal internal review processes. RAND has published documentation of its quality assurance procedures, which require independent review of all research products before external release.
5. Publication and dissemination. Output formats range from full technical reports to two-page policy briefs, op-eds, testimony, and interactive data tools. The choice of format affects which methodological details are visible to the audience; technical appendices are often published separately or omitted from brief formats. The think tank publications explained page covers output format conventions in detail.
Causal relationships or drivers
The methods a think tank uses are not chosen in a vacuum. Three structural factors drive methodological selection:
Funding structure. As documented in research on think tank financing — including work published by the Brookings Institution itself on organizational transparency — organizations with broad, unrestricted endowment funding tend to support longer-horizon, more technically demanding research. Organizations dependent on contract research or project-specific grants from funders with defined policy objectives tend to deploy faster, more targeted methods. The how think tanks are funded page provides the financing taxonomy that underlies this dynamic.
Institutional ideology. Organizations with explicit normative orientations — conservative, progressive, libertarian — tend to favor methods that align with or reinforce their theoretical frameworks. A think tank grounded in free-market economic theory is more likely to deploy cost-benefit analysis using standard welfare economics assumptions than one that explicitly questions those assumptions. This is not inherently problematic, but it shapes which results are treated as meaningful findings versus anomalies.
Policy window timing. When a legislative or regulatory window opens — a budget reconciliation cycle, a federal rulemaking comment period, a Supreme Court decision — think tanks accelerate production. Compressed timelines push researchers toward secondary analysis of existing data rather than original primary research, simply because data collection takes months and legislative windows can close in weeks.
Classification boundaries
Think tank research methods can be classified along three independent axes:
Primary vs. secondary. Primary research involves original data collection: field surveys, interviews, randomized pilots, ethnographic observation, or original legal and regulatory document review. Secondary research synthesizes, reanalyzes, or critiques existing datasets and published findings. The majority of think tank output is secondary research; primary data collection at scale is resource-intensive and more common at larger organizations such as the Urban Institute or the RAND Corporation.
Quantitative vs. qualitative vs. mixed. Quantitative methods produce numerical estimates — poverty rates under alternative tax schedules, projected mortality reductions from a regulatory change, fiscal impacts of immigration policy. Qualitative methods produce interpretive accounts — how a program was implemented across six states, why a legislative coalition formed or failed, what mechanisms explain an observed outcome. Mixed-methods designs use both in either a sequential or concurrent structure.
Descriptive vs. causal. Descriptive research characterizes the current state of a problem: how many households lack broadband access, what share of the federal workforce is eligible for retirement within five years, how healthcare spending is distributed across income deciles. Causal research attempts to identify mechanisms and estimate the effect of interventions. Credible causal identification requires designs — randomized controlled trials, difference-in-differences, regression discontinuity, instrumental variables — that are relatively rare in think tank output compared to descriptive or correlational analysis.
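The gap between a descriptive change and a causal estimate can be made concrete with a minimal difference-in-differences calculation. All numbers below are invented for illustration:

```python
# Minimal difference-in-differences sketch with made-up numbers.
# Hypothetical scenario: mean employment rates before and after a state
# policy change, in the treated state vs. a comparison state.

treated_pre, treated_post = 0.62, 0.68   # treated state means
control_pre, control_post = 0.60, 0.63   # comparison state means

# Descriptive claim: the treated state's rate rose six points.
naive_change = treated_post - treated_pre

# Causal (DiD) claim: subtract the change the comparison state
# experienced over the same period, under a parallel-trends assumption.
did_estimate = (treated_post - treated_pre) - (control_post - control_pre)

print(f"naive change: {naive_change:.2f}")    # 0.06
print(f"DiD estimate: {did_estimate:.2f}")    # 0.03
```

Half of the naive six-point change, in this toy example, is trend that the comparison state also experienced; the credibility of the causal claim rests entirely on the parallel-trends assumption, which is exactly the kind of assumption a methods appendix should defend.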
Tradeoffs and tensions
Rigor vs. speed. The tension between methodological rigor and production speed is the defining tradeoff in think tank research. A fully specified econometric model with robust identification can take 18 to 24 months to produce. A policy-relevant brief synthesizing existing literature can be produced in 4 to 6 weeks. Think tanks routinely sacrifice the former for the latter when policy windows demand it, creating a structural gap between what the research can credibly claim and what its summary language asserts.
Transparency vs. accessibility. Full methodological transparency — publishing code, model specifications, data sources, and sensitivity analyses — serves replication and critique but produces documents that non-specialist audiences, including legislative staff, cannot easily use. The how to read a think tank report page addresses how readers can navigate this gap. Think tanks face continuous pressure to simplify findings in ways that obscure the assumptions driving them.
Independence vs. relevance. Research designed to be maximally independent of funder preferences may produce findings with limited immediate policy application. Research scoped tightly to a funder's priority question may be more actionable but more constrained. This tension is most visible in contract research arrangements and is a central concern in discussions of think tank transparency and donor disclosure.
Generalizability vs. specificity. Case studies and qualitative process analyses produce deep, contextually rich findings that are difficult to generalize. Large-n statistical analyses produce generalizable estimates that may obscure important variation. Neither approach dominates; the choice depends on the policy question, and the failure to acknowledge the tradeoff is itself a methodological error.
Common misconceptions
Misconception: Peer review is standard across think tank research. Peer review — meaning independent expert evaluation prior to publication — is not a universal standard in the think tank sector. It is practiced rigorously at organizations such as RAND and Resources for the Future, selectively at Brookings and the Urban Institute, and inconsistently or not at all at smaller or more explicitly advocacy-oriented organizations. The absence of peer review does not automatically invalidate findings, but it removes an important quality check.
Misconception: Quantitative methods are inherently more objective. Numerical outputs depend on modeling assumptions, data selection choices, and parameter specifications that embed normative judgments. Two organizations applying different assumptions to the same microsimulation model can produce cost estimates that differ by hundreds of billions of dollars for the same policy proposal. The Congressional Budget Office has documented cases where different modeling frameworks produce significantly divergent fiscal estimates for identical legislative text.
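How assumption choices drive numerical output can be illustrated with a toy microsimulation. The households, weights, take-up rates, and phase-out thresholds below are all invented; the point is only that two defensible assumption sets cost the same hypothetical policy very differently:

```python
# Toy microsimulation sketch: the same hypothetical $1,000-per-child
# credit, costed under two assumption sets. All data are invented.

households = [
    {"income": 25_000,  "children": 2, "weight": 1_200_000},
    {"income": 60_000,  "children": 1, "weight": 2_500_000},
    {"income": 150_000, "children": 3, "weight":   800_000},
]

def cost(households, takeup_rate, phaseout_start):
    """Aggregate cost of the credit under one set of assumptions."""
    total = 0.0
    for hh in households:
        credit = 1_000 * hh["children"]
        if hh["income"] > phaseout_start:  # simple cliff phase-out
            credit = 0
        total += credit * takeup_rate * hh["weight"]
    return total

# Organization A assumes near-full take-up and a high phase-out threshold.
cost_a = cost(households, takeup_rate=0.95, phaseout_start=200_000)
# Organization B assumes lower take-up and a lower phase-out threshold.
cost_b = cost(households, takeup_rate=0.80, phaseout_start=100_000)

print(f"A: ${cost_a/1e9:.1f}B  B: ${cost_b/1e9:.1f}B")  # A: $6.9B  B: $3.9B
```

Neither estimate is "wrong"; each follows from its stated parameters. The estimates differ by nearly a factor of two, which is why parameter disclosure matters more than the precision of the headline number.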
Misconception: Think tank research is primarily original research. The majority of policy briefs, explainers, and commentary products are secondary syntheses of existing academic, governmental, or administrative data. Original primary data collection at scale — field surveys, randomized evaluations — is the exception, concentrated in resource-heavy organizations.
Misconception: Methodology is disclosed in standard output. Policy briefs and op-eds — the most widely distributed think tank formats — typically omit methodological detail entirely. Full technical reports may include methodology sections, but these are often separated from the summary products that reach broader audiences, including policymakers and journalists.
Checklist or steps (non-advisory)
The following sequence characterizes the research production cycle as documented across major think tank operational descriptions and methodological guidance, including RAND's research standards documentation:
- [ ] Policy question defined with explicit scope boundaries (geographic, temporal, population)
- [ ] Data sources identified and documented, with access and licensing confirmed
- [ ] Methodological approach selected and matched to the causal or descriptive nature of the question
- [ ] Analytic assumptions documented prior to analysis, not post-hoc
- [ ] Primary analysis completed with sensitivity checks on key parameters
- [ ] Internal or external review conducted by at least one subject-matter expert independent of the project team
- [ ] Findings distinguished from recommendations in the final document
- [ ] Technical appendix or supplemental methodology section prepared, even if published separately
- [ ] Funding sources disclosed in the publication itself, not only on organizational websites
- [ ] Limitations section included that specifies what the method cannot establish
Reference table or matrix
The table below summarizes the primary research methods used across the think tank sector, their typical applications, data requirements, and key limitations.
| Method | Typical Application | Data Requirements | Key Limitation |
|---|---|---|---|
| Econometric modeling | Tax, budget, labor market impact estimates | Administrative microdata, CPS, IRS records | Sensitive to model specification assumptions |
| Microsimulation | Distribution of policy impacts across income/demographic groups | Household-level survey or tax data | Requires calibrated baseline model; resource-intensive |
| Cost-benefit analysis | Regulatory impact, infrastructure investment | Program cost data, monetized outcome estimates | Discount rate and valuation choices drive results |
| Literature synthesis / systematic review | Evidence summaries for emerging policy areas | Published academic and government studies | Quality depends on completeness and selection criteria |
| Case study / comparative analysis | Program implementation, institutional process | Administrative records, interviews, documents | Limited generalizability across contexts |
| Legal and regulatory analysis | Statutory interpretation, regulatory compliance, constitutional questions | Federal Register, U.S. Code, case law | Interpretive; findings depend on legal framework applied |
| Public opinion surveys | Voter and public preferences on policy options | Original survey samples or secondary poll data | Wording effects; sampling frame limitations |
| Expert elicitation / Delphi method | Forecasting, emerging risk areas with thin data | Structured expert panels | Dependent on panel composition; consensus can mask disagreement |
| Process tracing | Causal mechanisms in policy adoption or failure | Historical documents, elite interviews | Labor-intensive; limited to small-n cases |
| Randomized controlled trial (RCT) | Program effectiveness evaluation | Enrolled participant data over time | Rare in pure think tank context; requires implementation partner |
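The discount-rate sensitivity noted in the cost-benefit row above can be sketched in a few lines. The program figures are invented; the 3 and 7 percent real rates are those long specified in the 2003 version of OMB Circular A-4:

```python
# Sketch of how the discount rate drives cost-benefit results.
# Hypothetical program: $100M upfront cost, $12M/year in benefits
# for 20 years. All figures are invented for illustration.

def npv(rate, upfront_cost, annual_benefit, years):
    """Net present value: discounted benefit stream minus upfront cost."""
    pv_benefits = sum(annual_benefit / (1 + rate) ** t
                      for t in range(1, years + 1))
    return pv_benefits - upfront_cost

for rate in (0.03, 0.07):  # the two real rates from OMB Circular A-4 (2003)
    result = npv(rate, upfront_cost=100e6, annual_benefit=12e6, years=20)
    print(f"discount rate {rate:.0%}: NPV = ${result/1e6:,.1f}M")
```

Under these assumptions the same program shows an NPV of roughly $79M at 3 percent but only about $27M at 7 percent — a nearly threefold difference produced entirely by the discount-rate choice, before any disagreement about costs or benefits themselves.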
The think tank research-to-legislation pipeline page documents how findings produced through these methods move from research outputs into the policy process; that page also provides broader orientation to think tanks as institutions, including how research capacity relates to organizational type.
References
- RAND Corporation — Research Standards and Quality Assurance
- Urban Institute — Research Methods and Data Sources
- Resources for the Future — Research Approach and Peer Review Policy
- Brookings Institution — Research Integrity Standards
- Congressional Budget Office — Methods and Data Documentation
- U.S. Census Bureau — Current Population Survey Technical Documentation
- Office of Management and Budget — Circular A-4: Regulatory Analysis