Research Impact Measurement: Beyond Citations and H-Index
How should research impact be measured? The question affects funding allocation, hiring decisions, and promotion criteria. Australia’s research sector is grappling with this challenge as limitations of traditional metrics become increasingly apparent.
The Citation Problem
Citations have been the dominant metric for decades. Highly cited papers are considered influential. Researchers’ h-index scores quantify sustained citation impact.
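The h-index has a simple definition: it is the largest number h such that the researcher has h papers with at least h citations each. A minimal sketch of the computation (the function name and example citation counts are illustrative):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    # Sort citation counts in descending order, then find the last
    # position where the count still meets or exceeds its 1-based rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers each have at least 4 citations
```

Note that the h-index can only stay flat or rise over a career, a property that matters for the equity concerns discussed later.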
However, citations measure only academic influence, not broader societal impact. Research that improves clinical practice, informs policy, or enables commercial products may generate few citations if its impact lies outside academia.
Citation counts are also highly field-dependent. Biomedical research typically generates far more citations than mathematics. Comparing researchers across disciplines using citation metrics is problematic.
Gaming of citation metrics occurs. Self-citation, citation rings where groups cite each other reciprocally, and predatory journals that inflate citation counts all undermine metric validity.
Alternative Metrics
“Altmetrics” track social media mentions, news coverage, policy citations, and other indicators of broader attention. These measures capture impact beyond academic citations.
However, altmetrics often reflect media attention rather than genuine impact. Controversial or surprising results generate buzz regardless of importance or validity. Promotion by individuals with large social followings inflates metrics artificially.
Correlation between altmetrics and long-term impact is unclear. Some highly tweeted papers have lasting influence; others are quickly forgotten. The metrics are too new for thorough validation.
Download and view counts measure how widely research is accessed. However, downloads don’t indicate whether papers were read carefully, understood, or applied.
Impact Case Studies
The UK’s Research Excellence Framework pioneered impact case studies where researchers document specific examples of research influence beyond academia. Australia’s Engagement and Impact Assessment includes similar approaches.
Case studies capture impact that metrics miss. A single project informing major policy change or enabling a medical breakthrough constitutes substantial impact not reflected in citation counts.
However, case study assessment is time-consuming and somewhat subjective. Evaluators must judge whether claimed impacts are genuine and significant. Attribution is often ambiguous when multiple factors contribute to outcomes.
Preparing case studies requires effort that could otherwise go to research. Critics argue the administrative burden outweighs benefits, particularly for institutions and researchers with limited support resources.
Time Lag Issues
Research impact often takes years or decades to materialize. Assessment frameworks with 3-5 year evaluation periods miss slow-burning impact.
This particularly affects fundamental research. Applied research may show practical impact quickly, while basic research that underpins future applications goes unrecognized within short evaluation windows.
The incentive structure this creates concerns some researchers. Focusing on near-term demonstrable impact could reduce investment in fundamental research with long-term payoff.
Short assessment periods also miss negative impacts that emerge over time. Research initially celebrated might later be recognized as problematic when unintended consequences become apparent.
Interdisciplinary Challenges
Interdisciplinary research typically generates broader impact than narrow disciplinary work, yet assessment frameworks often disadvantage it.
Publication venues for interdisciplinary work may have lower impact factors than top discipline-specific journals. Interdisciplinary researchers face challenges getting hired or promoted in departments with discipline-specific criteria.
Research outputs that don’t fit traditional publication models, like software, datasets, or design prototypes, often receive little recognition despite potential impact.
Some institutions are developing criteria that better accommodate interdisciplinary work, but change is slow. Many researchers still feel pressured toward discipline-specific work that fits established assessment categories.
Commercial Impact
Research commercialization through patents, licenses, and company formation represents one form of impact. However, measuring this is complicated.
Patents and licenses are countable, but not all represent significant impact. Many patents are never commercialized. License revenue varies enormously depending on industry and negotiation outcomes.
Startup company formation from research is tracked, but many startups fail. Should research be credited for founding a company that goes bankrupt within two years?
The University of Queensland and UNSW have generated substantial commercial returns from certain research programs. However, these successes are outliers. Most research doesn’t directly generate commercial value, though it may contribute indirectly.
Policy Impact
Research informing policy decisions constitutes important impact. However, documenting this influence is difficult.
Policy formation involves many inputs, and research is often one factor among many. Direct attribution is rare. More commonly, research contributes to gradual shifts in thinking and evidence bases.
Government citations of research in policy documents provide evidence of consideration. However, being cited doesn’t guarantee actual influence on final decisions.
Parliamentary inquiries and royal commissions sometimes provide clear examples of research impact when academics serve as expert witnesses or provide submissions that shape recommendations.
Social and Cultural Impact
Research can influence public discourse, challenge social attitudes, or preserve cultural heritage. These impacts are valuable but resist quantification.
Historical research that informs public understanding of national identity, or environmental research that shapes conservation attitudes, creates value not captured by academic metrics.
Museums, exhibitions, and public engagement activities extending from research represent impact. However, assessing reach and influence is subjective.
The Australian Research Council’s definition of research impact explicitly includes social, cultural, and environmental benefits. Operationalizing this into assessment criteria remains challenging.
International Collaboration
Internationally co-authored papers typically receive more citations than papers by domestic-only author teams. This could reflect broader dissemination or simply mutual citation practices.
International collaboration is generally viewed positively in assessment frameworks. However, not all collaborations are equally meaningful. Some represent substantive partnerships; others are token author inclusions.
Australia’s geographical isolation makes international collaboration particularly important for connecting to global research communities. Assessment frameworks that appropriately value this are beneficial.
Teaching and Training Impact
Research training of PhD students and postdocs represents significant impact. These individuals carry knowledge into various sectors.
However, this impact is rarely assessed. Supervisor track records in producing successful graduates don’t feature prominently in promotion criteria or funding applications.
Teaching informed by research benefits undergraduate education. However, research-intensive universities often prioritize research over teaching in assessment, despite rhetoric about research-teaching integration.
Equity Considerations
Impact assessment frameworks can disadvantage researchers with career interruptions from parental leave, illness, or caring responsibilities.
Cumulative metrics like the h-index, which can only grow over a career, inherently disadvantage those with shorter or interrupted publication records. Career-stage adjustments attempt to address this but are imperfect.
Access to impact-generating resources varies. Researchers at well-resourced institutions have more support for commercialization, public engagement, and policy impact than those at less-endowed institutions.
Early-career researchers may lack the networks and opportunities to generate certain types of impact regardless of research quality.
Institutional Pressures
Universities face competing pressures between maximizing research quantity (papers published), quality (citations and impact factors), and demonstrable broader impact.
Some institutions emphasize volume, encouraging researchers to maximize publication counts. Others prioritize elite journal publications. Still others focus on demonstrable societal benefit.
These different priorities affect what research is pursued and how it’s disseminated. Researchers respond to incentives, sometimes in ways that don’t align with maximizing genuine impact.
Performance-based research funding in Australia creates institutional motivation to game metrics. Focus may shift to optimizing assessment outcomes rather than conducting impactful research.
Unintended Consequences
Assessment frameworks shape research behavior. Emphasis on impact might discourage risky, curiosity-driven research that could fail to produce demonstrable outcomes within evaluation periods.
Pressure to demonstrate near-term impact could lead researchers to overclaim or exaggerate the significance of their work. Honest uncertainty about research implications doesn’t fit well with impact narrative requirements.
Administrative burden of documenting impact diverts time from conducting research. The opportunity cost could reduce overall research productivity even if assessment improves.
Alternative Approaches
Some argue for moving away from metrics entirely toward holistic peer evaluation. However, peer review has its own biases and inconsistencies.
Lottery-based funding allocation has been proposed to reduce gaming and administrative burden. This sounds radical but might not be worse than current systems for identifying impactful research prospectively.
Emphasis on research quality and integrity rather than quantity or impact could refocus attention on scientific rigor. However, this doesn’t address funders’ desires for demonstrable returns on investment.
The Path Forward
Perfect assessment frameworks don’t exist. All approaches involve trade-offs between different values and practical constraints.
Australian research assessment will likely continue evolving toward broader impact consideration beyond citations. However, implementation challenges remain substantial.
What seems clear is that over-reliance on simple metrics like citation counts or the h-index is inadequate. Research impact is multidimensional, and assessment frameworks should reflect this complexity even at the cost of simplicity.
Whether the research community can develop and implement assessment approaches that appropriately balance different impact dimensions while maintaining reasonable administrative burden remains an open question. The answer will significantly shape Australian research priorities and practices in coming years.