We were at the VIVO conference and ran a workshop with stakeholders talking about all things FAIR. For those who maybe aren’t aware, FAIR refers to the recent calls to make all research outputs findable, accessible, interoperable, and reusable (FAIR) for both humans and machines.
By Mark Hahnel.
At Figshare, we’re big fans of this call to action and we are actively following these guidelines at the top level. We feel this fits in with our ethos that all publicly funded research outputs should be available to all of humanity, to question and build on top of. We are also big advocates for open APIs across all research systems.
Funders often point to the FAIR principles as a stepping stone in their push to move the research they fund further, faster.
These ideas combined have led to the suggestion of FAIR badges, or stamps, and this is what we spent most of the workshop discussing. What would these badges or stamps of approval look like, and when, where, and how would they be awarded? We have previously heard that this may not be as simple as it sounds, via a report by the 4TU in the Netherlands. The report reviewed repository infrastructures like Figshare. We were very encouraged to see that our tools were deemed to be headed in the right direction, but two of the requirements were scored as a zero for all systems. This wasn’t because of any lack of functionality. It was because the requirements were too ambiguous, meaning no consensus on adherence could be reached.
After walking through each of the steps on the Data FAIRport website, we noticed that some of the requirements would need some kind of human judgment, while others could be checked automatically. For example, ‘appropriate metadata’ is a very nuanced requirement. A metadata or subject-specific librarian may be able to determine this. Someone working in a similar field may be able to confirm that the research output has ‘all of the metadata required to understand and reproduce the research.’ However, for machines to be able to interpret this for every single field and subfield of research is a monumental task, one we will not see the results of any time soon.
On the flip side, machine-readable licenses are a simple thing to implement, and a simple thing for a machine to check for (as the name suggests). It would be odd for a human to verify this in a curation workflow; a machine can simply check the API documentation or even the landing page HTML.
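To make this concrete, here is a rough sketch of how a machine could check a landing page for a machine-readable license. This is illustrative only, not how Figshare or any other repository actually implements it; it looks for two common conventions (`<link rel="license">` and a schema.org `license` field in embedded JSON-LD), and a real checker would need to handle many more variants.

```python
# Sketch: detect a machine-readable license declaration in landing page
# HTML. Illustrative only; checks two common conventions.
import json
from html.parser import HTMLParser

class LicenseFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.licenses = []
        self._in_jsonld = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Convention 1: <link rel="license" href="...">
        if tag == "link" and attrs.get("rel") == "license" and attrs.get("href"):
            self.licenses.append(attrs["href"])
        # Convention 2: schema.org metadata in a JSON-LD script block
        if tag == "script" and attrs.get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_data(self, data):
        if self._in_jsonld:
            try:
                doc = json.loads(data)
                if isinstance(doc, dict) and "license" in doc:
                    self.licenses.append(doc["license"])
            except ValueError:
                pass  # ignore malformed JSON-LD

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

def find_licenses(html):
    """Return any machine-readable license URLs declared in the page."""
    finder = LicenseFinder()
    finder.feed(html)
    return finder.licenses

page = """
<html><head>
  <link rel="license" href="https://creativecommons.org/licenses/by/4.0/">
</head><body>A dataset landing page.</body></html>
"""
print(find_licenses(page))  # → ['https://creativecommons.org/licenses/by/4.0/']
```

The point is simply that this check requires no human interpretation at all, which is exactly what makes it a good candidate for automated badging.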
Finally, somewhere in the middle is the ‘use of an appropriate repository.’ This could be checked by machines, if some form of ‘suitable repository’ criteria were defined and machine-readable badges were issued.
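As a thought experiment, if such criteria existed and a repository published a machine-readable badge, a funder's system could evaluate it automatically. No such badge format exists today; every field name and criterion below is invented purely for illustration.

```python
# Hypothetical sketch: checking a repository's (invented) machine-readable
# badge against an (invented) set of 'suitable repository' criteria.
SUITABLE_REPOSITORY_CRITERIA = {
    "persistent_identifiers",    # e.g. a DOI for every output
    "machine_readable_license",  # license discoverable via API or HTML
    "open_api",                  # programmatic access to metadata
}

def is_suitable(badge):
    """Check whether a badge declares all required features."""
    return SUITABLE_REPOSITORY_CRITERIA.issubset(badge.get("features", []))

example_badge = {
    "repository": "example-repo",
    "features": ["persistent_identifiers", "machine_readable_license", "open_api"],
}
print(is_suitable(example_badge))  # → True
```

The hard part, of course, is not the check itself but agreeing on the criteria, which is precisely the consensus problem the 4TU report highlighted.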
After many hours discussing the best ways to achieve FAIR data, the group settled on a position. We are not proposing this as a solution, but as a starting point for conversation. We felt that, with the appropriate resources, we could award the following badges:
What do you think? We would love to hear your thoughts on how a solution could and should be achieved here. We will continue to develop services (see more on solutions in this space at https://dataverse.org/blog/comparative-review-various-data-repositories) for funders, institutions, and publishers, with the goal of creating infrastructure that supports data that is as FAIR as possible.
If you have any questions, feedback, or comments, please get in touch at firstname.lastname@example.org or via Twitter, Facebook, or Google+.