Michelle Barker: My experience as a participant in the RDA COVID-19 Working Group

14 Sep 2020

Michelle Barker is the Director of the Research Software Alliance (ReSA) and moderator of the software subgroup.


What was your role in the RDA COVID-19 Working Group?

I was one of the moderators of the software subgroup.

Why did you join the group?

I am the director of the Research Software Alliance (ReSA), which facilitated the group. It was important to put forward key practices for the development and reuse of research software, to facilitate sharing and accelerate the production of results in response to the COVID-19 pandemic. ReSA coordinated the participation of 45 community members, who wrote the software chapter within very tight timelines.

Who do you think benefits most from applying these guidelines and recommendations?

The software section contains recommendations for policymakers, funders, publishers and researchers, enabling readers to facilitate the open collaborations that can help address the current challenging circumstances. The goal is to enable relatively small improvements across all aspects of software so that it can be swiftly reused, supporting the accelerated and reproducible research needed during the crisis.

How or where can these guidelines and recommendations be applied to help alleviate the impact of another potential emergency?

The guidelines for researchers will help them improve their software quality and research reproducibility, and they also have implications for policymakers, funders and publishers. They were designed to improve research innovation and efficiency in areas such as software openness, which matters particularly for the disease transmission models that governments use to make significant decisions. For example, the code for the high-profile Imperial College epidemic simulation model used by the UK government (Special report: The simulations driving the world's response to COVID-19) was not publicly available until 28 April 2020. Earlier availability could have strengthened the integrity of this work and increased trust in its outcomes.

The policy recommendations are aimed at policymakers and funders, to reinforce awareness of how they can help create opportunities to address issues around valuing research software, such as building skills in software citation to ensure reproducibility. The recommendations for publishers are designed to encourage them to promote citable software, so that it becomes recognised alongside data and scholarly publications as a research output in its own right.

The biggest value of the guidelines, however, would come from mainstream implementation of these best practices in the long term, not just in specific situations.

What were the pros and cons of producing the report in such a short timeframe?

One of the benefits was that we were able to leverage existing work on best practices for research software and review the parts most applicable to a global health emergency. One of the negatives was that achieving consensus takes time, and our timelines were tight.

What’s the value of the international research collaboration on an initiative such as this?

This is the first work on best practice for research software to meet the challenges of COVID-19. It complements existing work on research data, such as the Wellcome Trust's statement on sharing research data and findings relevant to the COVID-19 outbreak, which has 150+ signatories and affirms commitment to the principles set out in the Wellcome Trust's 2016 Statement on data sharing in public health emergencies.

What was your biggest takeaway from this experience?

This project clearly demonstrated the benefit of having the research software community collaborate on important strategic activities to increase the recognition and value of research software as a fundamental and vital component of research worldwide.