Article-Level Metrics Workshop
“The digital environment of today’s research enables the collection and analysis of many more data sources and types than ever before, which trace the dissemination and reach of the article itself. Article-level metrics (ALMs) measure these activities at the level of the article and provide a valuable service lacking in traditional metrics: a real-time indicator of impact for research. In addition to the conventional measure of citations, ALMs incorporate altmetrics, newer measures of scholarly interaction based on the social web. Overall, they can provide much-needed new checks and balances, greater speed of feedback, and superior relationship mapping and influence tracking, none of which can be replicated by the traditional impact factor. They can form the basis of recommendation and collaborative filtering systems able to power navigation and discovery of articles synchronized to the needs of the researcher, publisher, institutional decision-maker, or funder”.
A detailed report from the workshop is now available at dx.doi.org/10.6084/m9.figshare.98828
One of the highlights of the event was the Altmetrics Hackathon which took place on the final day.
“To close the PLOS ALM Workshop, PLOS & ImpactStory co-sponsored the Altmetrics Hackathon on November 3, 2012. The purpose of the event was to bring the research community together to articulate concrete problem areas for altmetrics. Together, we worked with developers to seed technical solutions and raised awareness of the pressing need for expanded altmetrics tool development. A full listing of the application ideas board, development resources, and attendee interests can be found here.”
ReRank It allows users to rank their PubMed search results based on the impact the articles have had. The application lets users see which articles have generated the most discussion, been cited most often, or been recommended by academics. ReRank It was voted winner of the Audience Award at the end of the hackathon.
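Conceptually, the reranking step is simple once each PubMed record carries altmetrics counts. Here is a minimal Python sketch of the idea; the field names and weights are illustrative assumptions, not the team's actual scoring scheme:

```python
# Minimal sketch of ReRank-It-style reranking: given PubMed results annotated
# with altmetrics counts, order them by a composite "impact" score.
# The metric names and weights below are assumptions for illustration.

def impact_score(article, weights=None):
    """Combine per-source altmetrics counts into a single score."""
    weights = weights or {"citations": 3.0, "discussions": 1.0, "recommendations": 2.0}
    return sum(weights[k] * article.get(k, 0) for k in weights)

def rerank(articles):
    """Return articles sorted from highest to lowest impact."""
    return sorted(articles, key=impact_score, reverse=True)

if __name__ == "__main__":
    results = [  # stand-ins for annotated PubMed records
        {"pmid": "12345678", "citations": 4, "discussions": 20, "recommendations": 1},
        {"pmid": "23456789", "citations": 10, "discussions": 2, "recommendations": 0},
    ]
    for a in rerank(results):
        print(a["pmid"], impact_score(a))
```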
The AltViz (altmetrics visualization) group worked on better ways to visualize altmetrics data at various levels of aggregation (a single article, a few articles, many articles). The goal was to find ways to present the data meaningfully while keeping the result easy to embed in a variety of platforms. They discussed a variety of chart formats, including sunburst plots, heat maps, dot charts, and tree maps, among others, and settled on two versions to implement during the hackathon.
Data sets are available in the git repository.
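To give a flavour of what the group discussed, here is a minimal matplotlib sketch of one of the chart formats mentioned above, a dot chart comparing a few articles across altmetrics sources. The DOIs and counts are made-up placeholders; real input would come from the datasets in the repository.

```python
# Dot chart sketch: one row per article, one dot per altmetrics source.
import matplotlib.pyplot as plt

articles = ["10.1371/example.one", "10.1371/example.two", "10.1371/example.three"]
sources = {  # placeholder counts, not real data
    "Page views": [1200, 300, 2500],
    "Tweets": [45, 5, 150],
    "Mendeley readers": [80, 20, 210],
    "Citations": [12, 3, 30],
}

fig, ax = plt.subplots(figsize=(7, 3))
ys = range(len(articles))
for label, counts in sources.items():
    ax.scatter(counts, ys, s=60, label=label)
ax.set_yticks(list(ys))
ax.set_yticklabels(articles)
ax.set_xscale("log")  # counts span orders of magnitude
ax.set_xlabel("count (log scale)")
ax.legend(fontsize=8)
fig.tight_layout()
plt.show()
```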
Metrics to assess scientific output have traditionally focused on highly indirect measures such as journal impact factors. While article-level metrics are a critical step forward in that they focus on a meaningful unit (i.e., the paper), they do not solve the problem of assessing a scientist's overall output. The purpose of this project was to develop such a metric.
“Our proof-of-principle implementation, which pulled ALM data from the PLOS ALM API, shows that even middle authors add value”.
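As a rough illustration of the idea (not the team's actual formula), an author-level score can be built by rolling per-article ALM scores up with author-position weights that down-weight, but do not ignore, middle authorship:

```python
# Sketch of an author-level metric built from per-article ALM scores.
# The weighting scheme is an illustrative assumption.

def author_weight(position, n_authors):
    """First and last authors count fully; middle authors count partially."""
    if position == 0 or position == n_authors - 1:
        return 1.0
    return 0.5  # assumed middle-author weight

def author_level_score(papers):
    """papers: list of (alm_score, author_position, n_authors) tuples."""
    return sum(score * author_weight(pos, n) for score, pos, n in papers)

print(author_level_score([(120.0, 0, 3), (80.0, 1, 5), (40.0, 4, 5)]))
# -> 120*1.0 + 80*0.5 + 40*1.0 = 200.0
```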
The altmetrics effort to track the dissemination of research is currently impeded by the existence of multiple referral URLs for each single artifact, and by the lack of a standard naming convention across publishers for the articles they publish. This group followed example articles across journals via their article identifiers (DOIs) and recorded the resulting referrer URLs across multiple dissemination channels. The dataset will support altmetrics providers' efforts to capture these traces more effectively.
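One simple way to observe this multiplicity in practice is to resolve a DOI and record every URL along the redirect chain. The sketch below is an assumption about approach rather than the group's actual tooling; it uses the public dx.doi.org resolver and the Python requests library:

```python
# Resolve a DOI and record every URL visited on the way to the landing page,
# which is one reason a single artifact accumulates multiple referral URLs.
import requests

def redirect_chain(doi):
    """Return the list of URLs visited while resolving a DOI."""
    resp = requests.get(f"https://dx.doi.org/{doi}", allow_redirects=True, timeout=10)
    return [r.url for r in resp.history] + [resp.url]

for url in redirect_chain("10.6084/m9.figshare.98828"):
    print(url)
```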
• AltGaming (aka BatSignal)
Changes in altmetrics activity express the changing levels of interest in a research artifact. The specific pattern of activity depends on the metric being considered (e.g., journal page views, tweets, bookmarks). Spikes in activity are of interest as they may indicate either sudden interest in a paper (e.g., mainstream news coverage, a prize awarded to an author) or manipulation of metrics (i.e., gaming). The group sought to develop a tool to detect deviations from expected activity levels.
“The code we generated is on GitHub. We used the literature in the Mendeley burst detection group to focus on algorithms that might help. The data sets we generated are in a Dropbox folder. We generated one large dataset that algorithms can be tested against; however, it will need to be converted into time-series sub-datasets”.
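For a sense of what such a detector might look like, here is a minimal sketch that flags observations deviating strongly from a trailing baseline. The window size and threshold are illustrative assumptions, not the algorithms the group evaluated:

```python
# Flag days whose activity exceeds mean + threshold * stdev of the
# preceding `window` observations (a simple rolling z-score test).
import statistics

def find_spikes(series, window=7, threshold=3.0):
    """Return indices of values that deviate strongly from the trailing baseline."""
    spikes = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline) or 1.0  # avoid zero division on flat baselines
        if series[i] > mu + threshold * sigma:
            spikes.append(i)
    return spikes

# e.g., daily tweet counts for one article; day 10 looks like a spike
daily_tweets = [2, 3, 1, 2, 4, 2, 3, 2, 1, 3, 40, 5, 2]
print(find_spikes(daily_tweets))  # -> [10]
```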
Finally, the output from the Breakout Sessions is also worth a look.