Results of an Analysis of Existing FAIR Assessment Tools
FAIR Data Maturity Model WG
Group co-chairs: Edit Herczog, Vassilios Peristeras, Keith Russell
Supporting Output title: Results of an Analysis of Existing FAIR Assessment Tools
Impact: This document provides an overview of a number of existing FAIR assessment tools, listing the indicators used in these tools to assess the FAIRness of a data set. This is useful if you wish to compare existing FAIR assessment tools and the questions they ask.
Authors: Christophe Bahim, Makx Dekkers, Brecht Wyns
DOI: 10.15497/RDA00035
Citation: Christophe Bahim, Makx Dekkers, Brecht Wyns (2019). Results of an Analysis of Existing FAIR assessment tools. Research Data Alliance. DOI: 10.15497/RDA00035
Note: More information on the development of this document can be found in the WG's GitHub repository.
Abstract
This document is a first output of the FAIR Data Maturity Model WG. As a landscaping exercise, the editorial team of the WG analysed current and existing approaches to FAIR self-assessment tools. The analysis was based on publicly available documentation and an online survey. Questions and options stemming from these different approaches were classified according to the FAIR principles/facets. Comments were collected and incorporated. This resulted in five slide decks, combined in this PDF document, that make up this preliminary analysis.
Please note that the latest version (v3) of this document supersedes the previous version 0.02 of the document, which underwent community review in May and June 2019.
Attachment | Size
---|---
Resultados de un análisis de las herramientas existentes para la evaluación FAIR.pdf | 120.83 KB
Card RDA_FAIR_Assessment_Tools_June2020.pdf | 1018.72 KB
Author: Emilie Lerigoleur
Date: 15 Jun, 2019
This is a very nice initiative to compare and analyze these existing FAIR assessment tools.
A few comments:
- it would be interesting to describe the target audience in the background
- please explain the first term "IRI"
- what does it mean "X4" page 10?
- the question page 27 "Are standard vocabularies..." is truncated!
- the question page 29 "Please provide the URL..." is truncated!
- it appeared to be quite difficult to find an answer to the following question page 34: "Granularity of data entities in dataset is appropriate in Respect of Meta-Data Granularity"
- the question page 35 "Does the researcher provide..." is truncated!
The next step is to identify core elements, without duplicates, for the evaluation of FAIRness, isn't it? I hope that as many of the core common metrics as possible will be measured automatically by machine, to ease the FAIRness assessment process.
Author: Christophe Bahim
Date: 12 Aug, 2019
Dear Emilie,
Many thanks for your comment and apologies for the late reply.
Please find below the questions that were truncated in the document.
Besides, this exercise served the sole purpose of comparing existing methodologies for measuring FAIRness; we looked at the questions and options they propose. Thus, if you have questions, such as those in your first or second-to-last bullet, I would suggest asking them directly on the dedicated GitHub repository, where the WG is very active.
Indeed, the next step is to identify core elements for the evaluation of FAIRness, which is an exercise we are currently doing.
I remain at your disposal for further clarifications.
Best,
Christophe
Author: Keith Russell
Date: 06 Mar, 2020
Hi Emilie,
Thank you for your comments and suggestions on the document.
We have now removed all the truncations and places where text disappeared off the page.
The term IRI refers to an Internationalized Resource Identifier:
https://en.wikipedia.org/wiki/Internationalized_Resource_Identifier
I hope the updated output now answers at least some of your questions; other points are being discussed on GitHub as we work toward the list of community-agreed indicators.
Kind regards,
Keith