Research Computing Funding Should Mostly Just Go To Researchers

Written by   on June 08, 2021

Research computing and data — supporting research efforts with software, computing, and data expertise and resources — is fundamentally all of a piece. Today there are fewer and fewer hard boundaries between where the system requirements end and where the software or data resource requirements begin, and teams supporting researchers must have expertise across the stack.

This convergence is a huge opportunity for research computing, but it's also a challenge for funders. How much should be allocated to software, and how much to hardware? Within software, how much should go to new development or procurement, and how much to maintenance? Within hardware, what is the right balance between GPUs, CPUs, and FPGAs? And within data, how much should go to curation versus discovery, or to archival versus near-line storage?

Luckily, there is a simple, robust, time-tested mechanism that research computing funders can easily take advantage of, and they should do so. It would let funders of research computing and data efforts manage their portfolio effortlessly — in exactly the same way health funders know how to balance spending between reagents and lab staff, or physical science funders know how much to allocate to trainee salaries versus tabletop equipment.

Most research computing funding should go directly to researchers, via traditional funding councils, and the researchers should spend that research computing and data portion of their grants as and where they see fit.

With research computing and data funding as an integral component of project funding, the same review process that adjudicates the research proposal would weigh in on the computing and data resources requested to conduct it. This eliminates nonsensical but all-too-common situations where a researcher successfully wins computing cycles for a non-funded project, or gets funding for a postdoc but not enough compute or storage for that trainee to do the work. It would also allow the researcher to adjust how they were using resources midstream: if, after initial efforts, it turned out that software development to improve the code was a better use of funding than throwing hardware at the problem, the money could be spent that way, rather than applying separately and ahead of time for staff time and computing resources and hoping it all works out in the end.

[Image: A technician validates genetic variants identified through whole-exome sequencing at the Cancer Genomics Research Laboratory, part of the National Cancer Institute's Division of Cancer Epidemiology and Genetics (DCEG).]
We fund researchers to buy all kinds of complex equipment; they can handle buying research computing services.

In this model, a researcher would include a research computing and data component in their grant proposal where necessary. As with purchasing wet lab equipment, running animal experiments, or building large physical apparatus — undertakings no less technical or complex than research computing — research grants would include cost justifications for the proposed research computing services or equipment, and funding agencies would rate the quality of the justification and the worthiness of the proposed goals against the cost.

A researcher whose proposal was successful would then, as with other line items, be free to spend that research computing and data component of their grant as they saw fit: on software development, data management and analysis, or access to storage and compute resources. Obviously, as known entities with existing working relationships, local research computing centres — now operating in a familiar core facility model — would have a huge advantage. But the researcher would not be limited to working with those centres, nor to working with only one service provider.

This approach will work well for capacity computing, data, and expertise — those needs for which there are many possible service providers. In those areas, having researchers decide which services they use, and where, will push providers toward offering the kinds and quality of services researchers actually need. But not every kind of computing or expertise capability is available widely enough for researchers to easily buy the quantities they need. Researchers can't conjure a (say) quantum computing shared facility into existence one investigator-led grant at a time. Those new and emerging capabilities have to be handled separately, with existing funding councils setting priorities. Once those capabilities are operational, they can and should be sustained with the same core-facility, portable-funding model; if they can't be, maybe they didn't need to be built. Other needs, such as foundational infrastructures — research and education networks, advisory bodies — will also need to be handled separately by funders.

But for the bulk of research computing — capacity support of research using computing, data, and related expertise — there is no longer any need for endless surveys, consultations, and projections to indirectly inform decision making. Parallel competitions for different kinds of support for the same research project have long since stopped making sense. Internal debates within computing organizations about what services to offer should give way to researchers allocating the funds themselves. Let researchers decide what works best for advancing their research.
