
Annotometer: Encouraging annotation of published works.

journal contribution
posted on 2017-03-14, 03:30, authored by Chris Hunter, Xiao SiZhe, Peter Li, Laurie Goodman, Scott Edmunds

It is clear that the amount of data being generated worldwide cannot be curated and annotated by any individual or small group. Even those generating the data tend to curate it only with information directly relevant to their own interests and goals, perhaps neglecting information that would be important for someone re-using the data in an unrelated study.

There is a general realization that the only way to provide the necessary depth and breadth of annotation is to employ the power of the community; as the saying goes, “many hands make light work”. To achieve this, we first required user-friendly tools and apps that non-expert curators would be comfortable with and capable of using. Such tools are now in place, including iCLiKVAL (http://iclikval.riken.jp) and Hypothes.is (https://hypothes.is).

In order for these tools to become the powerhouse behind community curation, they need a kick-start: something to seed them with useful information, allowing users to recognize their utility and begin both to use them habitually and to add information to them.

To this end, GigaScience created and ran the first “Giga-Curation Challenge” at the BioCuration2016 meeting in April 2016. We created “The Annotometer” app (http://annotometer.com) to track and measure curations made over the duration of the conference.
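
As a rough illustration of the kind of tracking involved, the sketch below tallies one user's public annotation count via the Hypothes.is search API. This is not the Annotometer's actual implementation, which is not described here; the participant name is hypothetical.

    import json
    import urllib.parse
    import urllib.request

    # Public Hypothes.is search endpoint; responses include a "total" count.
    API = "https://api.hypothes.is/api/search"

    def count_annotations(username: str) -> int:
        """Count public annotations by one user (acct:<name>@hypothes.is)."""
        params = urllib.parse.urlencode({
            "user": f"acct:{username}@hypothes.is",
            "limit": 1,  # only the "total" field is needed, not the rows
        })
        with urllib.request.urlopen(f"{API}?{params}") as resp:
            return json.load(resp)["total"]

    # Hypothetical participant name, for illustration only.
    print(count_annotations("example_curator"))

Polling a per-user count like this is enough to drive a simple leaderboard over the duration of a conference.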

Following the success of the Giga-Curation Challenge at BioCuration2016, we would like to run the challenge again this year at BioCuration2017, with some minor changes to the format based on feedback from that event.

The principles remain the same: we wish to engage the entire BioCuration community and inspire them to spread their expertise and knowledge through publicly available curation tools. This time, however, we wish to provide more comprehensive hands-on training in the use of both tools, as well as a more directed set of items to be curated. The prize structure will also change slightly: instead of a single big prize for the overall most productive user, we will provide multiple smaller prizes (exact details still to be finalized).

