GORC International Model WG Case Statement
The GORC International Benchmarking WG changed its name to GORC International Model WG on 2 August 2021. The revised Case Statement with the new group name has now been attached to this page.
The following previous case statements have been superseded:
- Revised Case Statement following TAB review.
- Original Case Statement, which underwent community review.
Introduction
The Global Open Research Commons (GORC) is an ambitious vision of a global set of interoperable resources necessary to enable researchers to address societal grand challenges, including climate change, pandemics, and poverty. The realized vision of the GORC will provide everyone, everywhere, at all times with frictionless access to all research artifacts, including but not limited to data, publications, software, and compute resources, as well as metadata, vocabulary, and identification services.
The GORC is being built by a set of national, pan-national and domain-specific organizations such as the European Open Science Cloud, the African Open Science Platform, and the International Virtual Observatory Alliance (see Appendix A for a fuller list). The GORC IG is working on a set of deliverables to support coordination amongst these organizations, including a roadmap for global alignment to help set priorities for Commons development and integration. In support of this roadmap, this WG will establish benchmarks to compare features across commons. We will not coordinate the use of specific benchmarks by research commons. Rather, we will review and identify features currently implemented by a target set of GORC organizations and determine how they measure their user engagement with these features.
Author: Francoise Genova
Date: 03 Feb, 2021
I used to be the vice-chair of the FAIR Working Group of the EOSC Executive Board, which completed its task at the end of 2020. I would like to strongly support the proposal to create this GORC International Benchmarking WG. The EOSC FAIR WG recommended in particular that its proposal for FAIR Metrics in the EOSC, inspired by the FAIR Data Maturity Model WG, be reviewed in an international context, and we suggested the GORC IG. The point was discussed during the P16 pre-WG BoF, and it seems to fit well as one of the points that could be addressed in the International Benchmarking WG. I am also pleased to see that domain-specific organisations are considered, and that the subdivision of tasks will be defined in a flexible way.
A few minor comments:
- The long list of possible topics in Section 1 may be daunting for organisations when they are contacted to participate. It would be useful to say explicitly that not all commons-developing organisations are expected to develop all these features, depending on their mission and community requirements.
- I suggest adding the FAIR Data Maturity Model WG to the list of relevant RDA Groups in Section 3. It is cited elsewhere, but FAIR plays an essential role in enabling seamless access to data and other digital objects.
Very minor comment: in Section 9, last sentence, howwe > how we
I will disseminate information about this WG proposal in the IVOA, which is cited as one of the relevant organisations among the domain commons.
Best wishes
Francoise Genova
Author: Ville Tenhunen
Date: 08 Feb, 2021
Hi all,
We (Yin Chen and Ville Tenhunen) discussed the RDA GORC Benchmarking IG Charter proposal in the EGI office (nowadays a virtual one).
Generally, the idea of the IG is good and easy to support. This is an activity that supports the GORC WG work and global research data collaboration.
We hope you could consider a few points of view to clarify or sharpen the charter text:
- The value proposition describes the value of having a set of benchmarks. It might be better if it also defined who the target readers are and what benefits they can gain from this work.
- At some phase of the WG lifecycle it will be necessary to provide more information on how such a set of benchmarks will be developed, so that it can be evaluated whether the approach is appropriate and whether the results are valid.
- How will the benchmarks be used? The charter should provide some use scenarios to justify their usefulness.
A couple of detailed comments about the Charter and Appendices:
1. Charter
There is also a list of some benchmarks to consider. It is understandable that these are only examples of benchmarks, but here are a couple of proposals for further consideration:
2. Appendix A
3. Appendix B: Draft List of WG/IG, documents, recommendations, frameworks and roadmaps from related and relevant communities
With the best regards,
Yin & Ville