FRANCESCA BEDDIE. The common-sense test for assessing research applications.

Nov 8, 2018

In 2014, the last year for which complete data are available on the Australian Research Council’s website, 20.7 per cent of applications for research grants were successful; 1,417 grants were made, at a cost of $1,018,017,312. The Australian taxpayer deserves to know if sums of such magnitude are being well allocated.

Taxpayers also need to know that this investment carries risk; the search for new knowledge and new ways to apply knowledge is tricky and unpredictable. It can deliver extraordinary scientific breakthroughs, cures and new treatments for diseases, and better ways to govern, farm, create and teach. Sometimes, research will confirm what is known or add contemporary nuance to perennial problems of the human condition. Sometimes, it will hit a dead end because the data are not there, the methodology does not suit the question, or the research team dissolves.

Rigorous application and selection processes should mitigate such risks, especially if they are overseen by the right people. And when it comes to using public funds it is incumbent on the applicant to demonstrate the value of their project, defined not in narrow political terms nor crude material ones, but in terms of enhancing our society.

In my experience, researchers find it difficult to step back from their chosen subject to persuade the reasonable lay person of the merit of spending taxpayers’ money. Their explanations tend to be mired in highfalutin terminology. The habit of performing for peers makes it impossible to set down in plain English, on half a page, why what they are doing matters. I know this ostensibly simple task is not easy because I’ve assessed hundreds of research applications for the National Centre for Vocational Education Research (NCVER), the ministerially owned company that collects statistics and conducts research about Australia’s training system.

I’ve also been involved in the ARC’s engagement and impact assessment. This exercise aims to see how researchers who received support engage with the end-users of their work and how universities translate their research into economic, social, environmental and other impacts. This will always be an imprecise measure. The take-up of research findings can be a long process, with impacts sometimes coming about through serendipity but usually thanks to dedicated dissemination to make sure the right people know about the research and how to use it. This costs time and money over a longer period than just the research phase of a project. Recording impact within and beyond academia is being made easier by software, but the computer program can’t, yet, write the pithy 800-character summary required on the ARC template.

As last week’s furore about ministerial intervention in decisions and a new national interest test showed, researchers still need to hone their persuasive abilities. Another box on the template won’t be the answer. A better way to ensure that proposals have resonance beyond research circles is to involve the end users of the research from the beginning of the process. This can be achieved, in part, by including them on the assessment panels for grants. At the moment, ARC panels are made up of insiders from universities, who have subject-matter expertise and an understanding of research. Sign-off is in the hands of the minister but, as many have been asking, how can the minister possibly examine and judge the merit of more than a thousand projects ranging across science, mathematics, engineering, technology, history, languages, economics, law and commerce? They can’t. Furthermore, political expediency is bound to influence their decisions.

This does not make for the sound allocation of research dollars. The current minister, Dan Tehan, insists that, because the buck stops with him, he must have the final say. That’s a cute argument for control. It won’t remove short-termism and ideology from the selection process. What is needed is an arm’s-length system, which involves both insiders and outsiders making decisions about the allocation of research dollars. Such a system can signal to the community that funding is going to research that matters, not only to the researchers and their universities but also to the nation. It can build public trust because it includes the common-sense test.

I have seen this work. When NCVER had competitive funding rounds, the research applications were considered by a panel made up of research experts as well as government, industry and provider representatives. Not only did this mix of people help align research ideas with real-world problems, it also fostered among non-researchers a better understanding of the complexities of research. They listened to how a proposed methodology was or was not the best way to answer the question and helped make decisions that weighed novelty against relevance and usually saw intrinsic value trump vested interest.

Unfortunately, this system is no longer in place at NCVER, where decisions on research are now made by senior bureaucrats. Governments certainly require evidence to help them make decisions but rigorous research on the VET system is also sorely needed in industry circles, among careers advisers and in training providers. That’s a whole other topic.

A selection process involving a cross-section of society improves the odds of getting research that serves the national interest rather than suiting the political moment. The interchange between researcher and end-user grounds the research. It also improves lay people’s appreciation of how messy research can be, and how valuable.

Francesca Beddie worked in the Department of Foreign Affairs and Trade, AusAID and in the ministerially owned company, the National Centre for Vocational Education Research. She is co-director of Make Your Point, a consultancy offering communication training, writing and editing services.

 
