Results of an Analysis of Existing FAIR Assessment Tools


23 May 2019


By Stefanie Kethers


FAIR Data Maturity Model WG

Group co-chairs: Edit Herczog, Vassilios Peristeras, Keith Russell

Supporting Output title: Results of an Analysis of Existing FAIR Assessment Tools

Impact: This document provides an overview of a number of existing FAIR assessment tools, listing the indicators used in these tools to assess the FAIRness of a data set. It is useful for comparing existing FAIR assessment tools and the questions they ask.

Authors: Christophe Bahim, Makx Dekkers, Brecht Wyns

DOI: 10.15497/RDA00035

Citation: Christophe Bahim, Makx Dekkers, Brecht Wyns (2019). Results of an Analysis of Existing FAIR Assessment Tools. Research Data Alliance. DOI: 10.15497/RDA00035

Note: More information on the development of this document can be found in the WG's GitHub repository.

Abstract

This document is a first output of the FAIR Data Maturity Model WG. As a landscaping exercise, the editorial team of the WG analysed existing approaches to FAIR self-assessment tools. The analysis was based on publicly available documentation and an online survey. Questions and options stemming from these different approaches were classified according to the FAIR principles/facets. Comments were collected and incorporated. The result is five slide decks, combined in this PDF document, that make up this preliminary analysis.

Please note that the latest version (v3) of this document supersedes version 0.02, which underwent community review in May and June 2019.

Output Status: RDA Supporting Outputs
Review period: Monday, 27 May 2019 to Thursday, 27 June 2019
Primary WG Focus / Output focus: Domain Agnostic
  • Author: Emilie Lerigoleur

    Date: 15 Jun, 2019

    This is a very nice initiative to compare and analyze these existing FAIR assessment tools.

    A few comments:

    - it would be interesting to describe the target audience in the background

    - please explain the term "IRI" when it first appears

    - what does "X4" on page 10 mean?

    - the question on page 27 "Are standard vocabularies..." is truncated!

    - the question on page 29 "Please provide the URL..." is truncated!

    - it appeared to be quite difficult to find an answer to the following question on page 34: "Granularity of data entities in dataset is appropriate in Respect of Meta-Data Granularity"

    - the question on page 35 "Does the researcher provide..." is truncated!

    The next step is to identify core elements without duplicates for the evaluation of FAIRness, isn't it? I hope that as many of the core common metrics as possible can be measured automatically by machine to ease the FAIRness assessment process.

  • Author: Christophe Bahim

    Date: 12 Aug, 2019

    Dear Emilie, 

    Many thanks for your comment and apologies for the late reply.

    Please find below the questions that were truncated in the document:

    • Are standard vocabularies, thesaurus or ontologies used for all data types present in datasets, to enable interdisciplinary interoperability between well defined domains? If not, is a well-defined open data dictionary provided?
    • Please provide the URL to a formal Linkset or copy/paste the content of a formal linkset that describes at least a portion of the content at RESOURCE ID
    • Does the researcher provide information on methods and tools that permit the understanding, integrity, value and readability of data intended to be kept on the long-term ? (e.g. versioning, archival and long term reuse issue for protocols, softwares, required methods and contexts to create, read and understand data)

    Besides, this exercise served the sole purpose of comparing existing methodologies for measuring FAIRness; we looked at the questions and options they propose. Thus, if you have questions, such as your first or second-to-last bullet, I would suggest asking them directly on the dedicated GitHub repository, where the WG is very active.

    Indeed, the next step is to identify core elements for the evaluation of FAIRness, which is an exercise we are currently doing. 

    I remain at your disposal for further clarifications. 

    Best, 

    Christophe

  • Author: Keith Russell

    Date: 06 Mar, 2020

    Hi Emilie,

    Thank you for your comments and suggestions on the document. 

    We have now removed all the truncations and places where text disappeared off the page.

    The term IRI refers to an Internationalized Resource Identifier; see https://en.wikipedia.org/wiki/Internationalized_Resource_Identifier.

    I hope the updated output now answers at least some of your questions; others are being discussed on GitHub as we work towards the list of community-agreed indicators.

    Kind regards

    Keith