Normalization of Mendeley reader impact on the reader- and paper-side: A comparison of the Mean Discipline Normalized Reader Score (MDNRS) with the Mean Normalized Reader Score (MNRS) and bare reader counts

For the normalization of citation counts, two different kinds of methods are possible and used in bibliometrics: the cited-side and citing-side normalizations, both of which can also be applied in the normalization of "Mendeley reader counts". Haunschild and Bornmann (2016a) introduced the paper-side normalization of reader counts (mean normalized reader score, MNRS), which is an adaptation of the cited-side normalization. Since the calculation of the MNRS needs further data besides data from Mendeley (a field-classification scheme, such as the Web of Science subject categories), we introduce here the reader-side normalization of reader counts, which is an adaptation of the citing-side normalization and does not need further data from other sources, because self-assigned Mendeley disciplines are used. In this study, all articles and reviews of the Web of Science core collection with publication year 2012 (and a DOI) are used to normalize their Mendeley reader counts. The newly proposed indicator (mean discipline normalized reader score, MDNRS) is introduced, compared with the MNRS and bare reader counts, and studied theoretically and empirically. We find that: (i) normalization of Mendeley reader counts is necessary, (ii) the MDNRS is able to normalize Mendeley reader counts in several disciplines, and (iii) the MNRS is able to normalize Mendeley reader counts in all disciplines. This generally favorable result for the MNRS in all disciplines leads to the recommendation to prefer the MNRS over the MDNRS, provided that the user has an external field-classification scheme at hand.


Introduction
Normalization of citation counts regarding subject category and publication year of publications started in the mid-1980s (Schubert & Braun, 1986). The comparison of units in research (e.g. researchers, research groups, institutions, or countries) publishing in different disciplines and time periods is only possible with normalized citation scores. Basically, one can distinguish between two levels of normalization: (1) In the case of normalization on the cited side, the total number of citations of the paper i to be evaluated is counted (see e.g. the times cited information in Web of Science, WoS). This number of times cited is compared with other publications published in the same year and subject category (the reference set). Since one can expect different citation rates for papers with different document types, the document type is also frequently considered in the constitution of the reference set. The mean citation rate over the papers in the reference set determines the expected value. The comparison of times cited of paper i with the expected value results in the normalized citation score (NCS) for i. This procedure is repeated for all papers in a paper set (e.g. of a researcher, research group, institution, or country) and a mean NCS (MNCS) on a higher aggregation level is calculated (Waltman, van Eck, van Leeuwen, Visser, & van Raan, 2011).
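The cited-side procedure described above can be sketched in Python. The following is a minimal sketch under stated assumptions: the function name, dictionary keys, and toy citation data are all hypothetical, not taken from the cited studies.

```python
def mncs(papers, reference_sets):
    """Cited-side normalization sketch: each paper's citation count is
    divided by the mean citation rate of its reference set (same year,
    subject category, and document type); the MNCS is the mean NCS."""
    ncs_values = []
    for p in papers:
        ref = reference_sets[(p["year"], p["category"], p["doctype"])]
        expected = sum(ref) / len(ref)  # mean citation rate of the reference set
        ncs_values.append(p["citations"] / expected)
    return sum(ncs_values) / len(ncs_values)

# Hypothetical toy data: two papers from fields with different citation rates.
reference_sets = {
    (2012, "Chemistry", "article"): [2, 4, 6],   # mean citation rate 4
    (2012, "Humanities", "article"): [0, 1, 2],  # mean citation rate 1
}
papers = [
    {"year": 2012, "category": "Chemistry", "doctype": "article", "citations": 8},
    {"year": 2012, "category": "Humanities", "doctype": "article", "citations": 1},
]
print(mncs(papers, reference_sets))  # → 1.5, i.e. (8/4 + 1/1) / 2
```

The sketch shows why the reference set matters: eight citations in chemistry and one citation in the humanities yield comparable normalized scores (2.0 and 1.0), because each is judged against its own field's expected value.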
(2) In the case of normalization on the citing side, each citation of a paper is multiplied with a weighting factor (Zitt & Small, 2008). This weighting factor reflects the citation density of the discipline: Since it is assumed that the number of references in publications reflects the field-specific citation density, the inverse of the field-specific citation density is usually used as a weighting factor. The sum of all weighted citations is the normalized citation impact of a publication. The results of Waltman and van Eck (2013b) offer considerable support for the use of citing-side bibliometric indicators. In section 3.2, the citing-side concept of citation impact normalization is described in more detail, since it is central to the paper.
In recent years, scientometrics started to explore alternative metrics (altmetrics) to study the impact of publications (Priem, 2014; Priem, Piwowar, & Hemminger, 2012; Priem, Taraborelli, Groth, & Neylon, 2010). Here, notes, saves, tweets, shares, likes, recommends, tags, posts, trackbacks, discussions, bookmarks, comments etc. are counted (Bornmann, 2014a). Altmetrics seem to have two advantages over citation counts: They allow (1) an impact measurement within a shorter time period after the appearance of a paper than citation counts (Lin & Fenner, 2013) and (2) a broader impact measurement, which is not restricted to the area of science but extends to the rest of society (Altmetric and Scholastica, 2015). The possibility of a broad impact measurement using altmetric counts is part of the current scientometric research (Bornmann, 2014b, 2015). Data from Mendeley (Elsevier), which reflect the readership of papers, are one of the most important sources for altmetrics. Haunschild and Bornmann (2016a) introduced the paper-side normalization of "reader counts" (see also Fairclough & Thelwall, 2015), because several studies have shown that Mendeley reader impact, similar to citation impact, varies across scientific disciplines (Haustein & Larivière, 2014a; Jeng, He, & Jiang, 2015; Maflahi & Thelwall, 2015; Zahedi, Costas, & Wouters, 2014; Zahedi & van Eck, 2014): In one discipline papers are read more often on average than in other disciplines. The new indicator on the paper side was named mean normalized reader score (MNRS). The normalization also considers the document type of publications, because it has an influence on reader counts, too (Haunschild & Bornmann, 2016a; Haustein & Larivière, 2014b). Since citing-side normalization is a promising alternative in scientometrics to cited-side normalization and the MNRS needs further data besides data from Mendeley, we introduce the mean discipline normalized reader score (MDNRS) in this study.
The MDNRS is an adaptation of the citing-side normalization to reader data and is solely based on Mendeley data.
The manuscript is organized as follows: Subsequent to the Methods section (see section 2), we demonstrate differences in reader impact between disciplines (see section 3.1) and introduce the MDNRS after explaining the citing-side normalization methods in bibliometrics (see section 3.2). In sections 3.3 and 3.4, we test the MDNRS in its ability to field-normalize reader counts (theoretically and empirically) and compare the MDNRS with the MNRS and bare reader counts. Finally, MDNRS values are presented for the best performing scientific journals and correlations between MDNRS and MNRS are calculated.

Mendeley
Mendeley is both a citation management tool and a social network for academics (Gunn, 2013). Mendeley data is very attractive for scientometric use because (1) the coverage of the worldwide publications is high (Haustein & Larivière, 2014b; Priem, 2014), (2) the data can be downloaded relatively easily via an API (Bornmann & Haunschild, 2015; Mohammadi & Thelwall, 2013), and (3) the use of Mendeley as a citation manager is part of the research process of many scientists: Literature is added to the library for reading or citing later on.
However, the use of Mendeley is not restricted to researchers; other people reading scientific publications (e.g. students, journalists, or research managers) use it as well (Bornmann & Haunschild, 2015; Mohammadi, Thelwall, Haustein, & Larivière, 2014). One basic assumption for using Mendeley data as altmetrics is that Mendeley users who add publications to their libraries can be counted as readers of the publications. Although Mendeley users also have other reasons for adding publications to their library, the results of Mohammadi, Thelwall, and Kousha (in press) show that "82% of the Mendeley users had read or intended to read at least half of the bookmarked publications in their personal libraries". Thus, adding publications to a Mendeley library should reflect reading of the publications (in most cases). In the following sections, we generally treat the Mendeley data as readership data.

Data
It is common practice in bibliometrics to include only articles and reviews in a study.
Other document types are usually discarded, although letters are sometimes also considered (e.g. with a weight factor smaller than one) (Moed, 2005).


Results

Differences in reader impact between disciplines
The Mendeley reader counts broken down by discipline are shown in Table 1. The arithmetic average number of Mendeley readers per paper is also shown in Table 1 because this is the basic variable used to construct the new indicator. In cases where publications have readers from multiple disciplines (X and various other disciplines), we calculated the average number of readers for discipline X by taking all publications with at least one reader from discipline X and by calculating an (unweighted) average over these publications of the number of readers from discipline X.
The results in Table 1 reveal that the average number of readers varies significantly across the Mendeley disciplines. The reader spread across the disciplines is somewhat smaller than the reader spread across WoS subject categories (see the results in Haunschild & Bornmann, 2016a). Obviously, one reader count has a different value in biological sciences compared to arts and literature: The average reader count is 1.14 for reviews (1.15 for articles) in the discipline arts and literature, while in the discipline biological sciences an average reader count of 14.30 for reviews (6.85 for articles) is observed. The average number of Mendeley readers per paper is larger for reviews than for articles in 15 of the 25 disciplines. In the case of business administration, education, and social sciences, articles have a higher reader count on average than reviews.
For the discipline biological sciences, an article has to have 15 readers to be among the 10% most frequently read papers (top-10%) and 62 readers to be among the 1% most frequently read papers (top-1%). Eight readers are needed to be among the top-10% and 23 readers to be among the top-1% in medicine. Seven readers are necessary for an article to be among the top-10% and 24 readers to be among the top-1% in chemistry.

Reader-side normalization of reader impact
The method of citing-side normalization was formulated for the first time in a paper by Zitt and Small (2008). They used the Journal Impact Factor (JIF) modified by fractional citation weighting. The citing-side normalization, which is also referred to as fractional counting of citations or a priori normalization (Waltman & van Eck, 2013a), is not only used for journals but also in other contexts. The normalization tries to account for the different citation densities of the disciplines from which a citation originates (Haustein & Larivière, 2014b; Leydesdorff & Bornmann, 2011; Leydesdorff, Radicchi, Bornmann, Castellano, & de Nooy, 2013; Radicchi, Fortunato, & Castellano, 2008; van Leeuwen, 2006; Zitt, Ramanana-Rahary, & Bassecoulard, 2005). Each citation is weighted with regard to its citation density: A citation from a discipline where many papers are cited has a lower weight than a citation from a discipline where fewer papers are cited.
The number of cited references plays an important role in citing-side normalization: "Citing-side normalisation is based on the idea that differences among fields in citation density are to a large extent caused by the fact that in some fields publications tend to have much longer reference lists than in other fields. Citing-side normalisation aims to normalise citation impact indicators by correcting for the effect of reference list length" (Wouters et al., 2015, p. 19). In one of the proposed methods of citing-side normalization, the number of references is used to weight the citations from a paper (Leydesdorff & Bornmann, 2011; Leydesdorff & Opthof, 2010; Leydesdorff, Ping, & Bornmann, 2013; Leydesdorff, Radicchi, et al., 2013; Waltman & van Eck, 2013b). That means a citation from a paper with fewer references has a higher weight than a citation from a paper with more references. This method assumes that the number of references in a particular paper reflects the typical citation density of its field. This variant of the citing-side normalization has been named SNCS(2) by Waltman and van Eck (2013b) and is defined as follows:

\mathrm{SNCS}^{(2)} = \sum_{i=1}^{c} \frac{1}{r_i}    (1)

Thus, each citation i of a publication is divided by the number of references r_i in the citing publication, whereby c is the total number of citations the publication has received.
In another previously proposed method of citing-side normalization (SNCS(1)), the average number of references of the papers which appeared in the same journal (and publication year) as the citing paper is used in the calculation of the weighting factor:

\mathrm{SNCS}^{(1)} = \sum_{i=1}^{c} \frac{1}{a_i}    (2)

Here, a_i is the average number of references in those publications which appeared in the same journal and in the same publication year as the citing publication i (c is the total number of citations the publication has received). The definition of SNCS(1) is based on the audience factor proposed by Zitt and Small (2008).
Since this variant considers an average number of references (instead of the references in one publication), it is likely to estimate typical citation densities more accurately than SNCS(2) (Bornmann & Marx, 2014). However, this holds only as long as the citation does not originate from a multi-disciplinary journal, such as PLoS ONE.
A combination of both variants is also possible, as described by Waltman and van Eck (2013b):

\mathrm{SNCS}^{(3)} = \sum_{i=1}^{c} \frac{1}{p_i r_i}    (3)

In SNCS(3), r_i is the number of references in the citing publication (analogously to SNCS(2)). p_i is the share of publications which contain at least one linked reference among those publications which appeared in the same journal and in the same publication year as the citing publication i. c is the total number of citations the cited publication has received.
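Under the definitions above, the three SNCS variants can be sketched as simple weighted sums. This is a minimal illustration with hypothetical reference-list lengths and linked-reference shares, not an implementation from the cited papers.

```python
def sncs2(citing_refs):
    """SNCS(2): each citation is weighted by the inverse of the citing
    paper's own reference-list length r_i."""
    return sum(1.0 / r for r in citing_refs)

def sncs1(journal_avg_refs):
    """SNCS(1): each citation is weighted by the inverse of the average
    reference-list length a_i of the citing journal and year."""
    return sum(1.0 / a for a in journal_avg_refs)

def sncs3(citing_refs, linked_shares):
    """SNCS(3): weight 1/(p_i * r_i), where p_i is the share of papers in
    the citing journal/year with at least one linked reference."""
    return sum(1.0 / (p * r) for r, p in zip(citing_refs, linked_shares))

# Hypothetical example: three citations whose citing papers have
# reference lists of length 10, 20, and 40.
print(round(sncs2([10, 20, 40]), 3))                     # → 0.175
print(round(sncs3([10, 20, 40], [1.0, 0.5, 1.0]), 3))    # → 0.225
```

The example makes the weighting visible: the citation from the paper with 40 references contributes only 0.025, while the citation from the paper with 10 references contributes 0.1.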
Following the citing-side normalization variant SNCS(2), we propose the reader-side normalization of reader impact. Since the journal in which a citing paper was published has no analogue on the reader side (readers are independent of journals), the two other citing-side normalization variants, SNCS(1) and SNCS(3), are not transferable to the normalization of reader impact.
In SNCS (2) , the paper's citation impact is normalized with respect to the disciplines where the paper is cited from. Cited references of citing publications are used as proxies reflecting citation densities in disciplines: the longer the reference lists of publications, the more intensive the citation activity in a discipline. Analogously, in the reader-side normalization, the paper's reader impact is normalized with respect to the (Mendeley) discipline in which it has been read. Whereas in the citing-side normalization the number of references is used as a proxy for citation density, the reader-side normalization can directly focus on disciplinary differences on the reader side by using the average reader counts in a discipline as the discipline-specific reader density.
Since Table 1 shows that average reader counts differ between the document types "article" and "review" for most Mendeley disciplines, the normalization procedure is carried out separately for both document types.
The procedure for normalizing reader counts on the reader side is as follows. First, the average number of reader counts in each Mendeley discipline (ρ_d) is determined (see Table 1):

\rho_d = \frac{1}{I_d} \sum_{i=1}^{I_d} R_{id}    (4)

Here, I_d is the number of papers in Mendeley discipline d and R_{id} is the raw Mendeley reader count of paper i in Mendeley discipline d. A paper is in discipline d if at least one of its readers is in this discipline. The average reader count (ρ_d) should reflect differences between disciplines in reading papers.
Second, the Mendeley reader count (R_{id}) of paper i in Mendeley discipline d is divided by the average reader count (ρ_d) in discipline d:

\beta_{id} = \frac{R_{id}}{\rho_d}    (5)

Third, the sum over the normalized reader counts (β_{id}) of the disciplines in which a paper i was read is calculated:

\mathrm{DNRS}_i = \sum_{d=1}^{D} \beta_{id}    (6)

In Eq. (6), D is the number of Mendeley disciplines. Currently, D = 25.
We obtain a normalized reader score (discipline normalized reader score for each paper i, DNRS i ). Similar to the citing-side normalization of citation counts, where each citation is weighted by the citation density in a discipline (reflected by the number of references), each reader of a publication is weighted by the corresponding reader density (reflected by the average readers in a discipline). Since reader counts are dependent on time (the longer the time window between the publication of a paper and its impact measurement, the more readers can be expected), the DNRS i should be calculated separately for papers published in different years (Lin & Fenner, 2013).
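The three steps of the reader-side normalization can be sketched in Python. The paper identifiers, discipline names, and reader counts below are hypothetical toy data; the function follows Eq. (4)-(6) as described in the text.

```python
from collections import defaultdict

def dnrs(readers_by_discipline_per_paper):
    """Reader-side normalization sketch following Eq. (4)-(6).
    Input maps each paper to {Mendeley discipline: raw reader count}."""
    # Step 1, Eq. (4): average reader count rho_d per discipline, taken
    # over the papers with at least one reader in that discipline.
    totals, counts = defaultdict(float), defaultdict(int)
    for paper_readers in readers_by_discipline_per_paper.values():
        for d, r in paper_readers.items():
            totals[d] += r
            counts[d] += 1
    rho = {d: totals[d] / counts[d] for d in totals}
    # Steps 2 and 3, Eq. (5) and (6): beta_id = R_id / rho_d, then
    # DNRS_i is the sum of beta_id over the paper's reading disciplines.
    return {paper: sum(r / rho[d] for d, r in pr.items())
            for paper, pr in readers_by_discipline_per_paper.items()}

# Hypothetical toy data: three papers read in two Mendeley disciplines.
papers = {
    "p1": {"biology": 10, "arts": 1},
    "p2": {"biology": 20},
    "p3": {"arts": 3},
}
scores = dnrs(papers)
# rho_biology = 15, rho_arts = 2, so DNRS(p1) = 10/15 + 1/2
print(round(scores["p1"], 4))  # → 1.1667
```

The toy data also illustrates the discipline weighting: one reader in the low-density "arts" discipline (0.5) counts for more than five readers would in the high-density "biology" discipline.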
The reader-side normalization does not lead to a score like the MNRS, which has a mean value of one and can be interpreted as an "average" impact. It only leads to reduced reader counts, whereby the reduction is dependent on the reader density in the corresponding reading disciplines. This makes the interpretation of the DNRS somewhat more difficult than that of the MNRS, but this is a characteristic which the DNRS shares with citing-side indicators.
However, we provide threshold values in Table 2 which a paper has to reach at least in order to belong to the 1% (top-1%), 10% (top-10%), and 50% (top-50%) most frequently read papers in 2012, independent of the discipline. The overall reader impact for aggregation levels (e.g. single researchers, research groups, institutions, countries, or journals) can be analyzed in terms of the average over a set of N papers:

\mathrm{MDNRS} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{DNRS}_i    (7)

Theoretical analysis of the discipline normalized reader score
In the theoretical analysis of the MDNRS, we study whether the indicator fulfils several properties that are desirable conditions for proper impact measurement (Waltman, et al., 2011).
First, we analyse the MDNRS regarding the property of consistency. The property of consistency has been defined for citation-based indicators of average performance by van Eck (2009) and Waltman, et al. (2011). Here, we reformulate this definition for average performance indicators based on reader counts: A performance indicator is said to be consistent if the ranks of two publication sets of equal size (e.g., of two different researchers) do not change if both publication sets are expanded by an additional publication with the same number of reader counts in the same disciplines. It can be seen when Eq. (4)-(7) are reformulated that the MDNRS has this property of consistency. Here, S_{3d} is the reader count of the additional publication S_3 for discipline d. S_1 and S_2 consist of N papers each: i = 1, 2, ..., N for S_1 and j = 1, 2, ..., N for S_2. Now, we start with the given condition that MDNRS(S_1) ≥ MDNRS(S_2) and analyse whether MDNRS(S_1 ∪ S_3) ≥ MDNRS(S_2 ∪ S_3) holds true. Reformulating Eq. (7) for the expanded set gives

\mathrm{MDNRS}(S_1 \cup S_3) = \frac{N \cdot \mathrm{MDNRS}(S_1) + \sum_{d=1}^{D} S_{3d}/\rho_d}{N + 1}

and analogously for S_2 ∪ S_3. Since both expanded sets receive the same additional term and the same denominator, MDNRS(S_1 ∪ S_3) ≥ MDNRS(S_2 ∪ S_3) follows whenever MDNRS(S_1) ≥ MDNRS(S_2). Therefore, the MDNRS has the property of consistency. We provide an illustrative example in Table 3 and Table 4 for clarification. Second, a performance indicator should not decrease when additional impact occurs: Eq. (5) and (6) clearly show that the performance indicator DNRS_i increases when additional reader counts occur. Therefore, the (M)DNRS has the desired property that the indicator value does not decrease (worsen) when additional reader counts occur for a publication set.
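Because the MDNRS is the mean of the per-paper DNRS values, the consistency property can be checked numerically. The DNRS values below are hypothetical, and, as in the definition of consistency, ρ_d is assumed to be unchanged when the additional publication is added to both sets.

```python
def mdnrs(dnrs_scores):
    """MDNRS sketch: the mean of the per-paper DNRS values, Eq. (7)."""
    return sum(dnrs_scores) / len(dnrs_scores)

# Hypothetical per-paper DNRS values for two equally sized paper sets.
s1 = [1.4, 0.8, 1.1]
s2 = [1.0, 0.7, 0.9]
extra = 2.0  # DNRS of the additional publication added to both sets
assert mdnrs(s1) >= mdnrs(s2)
# Consistency: the ranking of the two sets is unchanged after the same
# additional publication is appended to each of them.
assert mdnrs(s1 + [extra]) >= mdnrs(s2 + [extra])
print("consistent")
```

This numeric check is of course no proof, only an illustration of the property for one pair of toy sets.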
The third property we would like to study is the mean value over all papers. It is desirable to obtain a mean value of 1 over all papers (published within one year), so that it is easier to judge whether a paper has an above- or below-average impact. Eq. (6) and the examples in Table 3 and Table 4 show that the average MDNRS value over all papers differs from 1. It follows from Eq. (4) and (5) that the basic components on which the MDNRS is based (β_{id}) do have the property that the mean value over all papers equals 1. Therefore, the MDNRS has this property, too, under the assumption that all reader counts for a paper are from the same discipline, e.g. d = 1:

\mathrm{MDNRS} = \frac{1}{I_1} \sum_{i=1}^{I_1} \mathrm{DNRS}_i = \frac{1}{I_1} \sum_{i=1}^{I_1} \frac{R_{i1}}{\rho_1} = 1    (17)

Due to the definition of ρ_d, Eq. (17) shows that in this extreme case the MDNRS over a full publication year equals 1.
The β id values are added in Eq. (6) to obtain the DNRS values. Therefore, the mean value of all DNRS values (the MDNRS of a full publication year) is greater than or equal to 1.
There is another extreme case: each publication has at least one reader count from each discipline. Then I_d = N for all disciplines, and

\mathrm{MDNRS} = \frac{1}{N} \sum_{i=1}^{N} \sum_{d=1}^{D} \beta_{id} = \sum_{d=1}^{D} \frac{1}{I_d} \sum_{i=1}^{I_d} \frac{R_{id}}{\rho_d}    (18)

The last term in Eq. (18), \frac{1}{I_d} \sum_{i=1}^{I_d} \frac{R_{id}}{\rho_d}, equals 1 for each discipline d. Therefore, the MDNRS over a full publication year equals the number of disciplines in this extreme case.
However, as the reader distribution is skewed across papers and disciplines, the MDNRS for a full publication year lies between these two extreme cases and varies between 1 and the number of disciplines: 1 ≤ MDNRS ≤ D. The deviation from 1 increases with increasing reader counts from different disciplines per paper. Therefore, the deviation from 1 can be seen as an indicator of the multidisciplinarity of the readership of a paper set. To ease interpretation, the DNRS values can be rescaled by their mean:

\widetilde{\mathrm{DNRS}}_i = \frac{\mathrm{DNRS}_i}{\overline{\mathrm{DNRS}}}    (19)

In Eq. (19), \overline{\mathrm{DNRS}} is the average value over all DNRS_i values. This scaling allows an easier interpretation of the MDNRS values (average, above- or below-average impact of a paper set). However, this property has no bearing on the question whether a performance indicator has the property of field-normalization. This more important property is studied empirically in section 3.4.

Empirical analysis of the discipline normalized reader score
Although the definition of the MDNRS follows proposals of citing-side normalization in bibliometrics and fulfils certain desirable properties (see section 3.3), it is not clear whether the MDNRS reaches the desired goal of field-normalizing reader impact. Bornmann, de Moya Anegón, and Mutz (2013) introduced a statistical procedure which can be used to study the proposed ability of the MDNRS to field-normalize (see also Radicchi, et al., 2008). In order to compare the results for the MDNRS with other reader indicators, the procedure is also applied to the MNRS and bare reader counts.
In the first step of the procedure (done for each indicator separately), all papers from 2012 are sorted in descending order by an indicator. Then, the 10% most frequently read papers are identified (a new binary variable is generated).
In the second step, the papers are assigned to main disciplines using the OECD field classification scheme. The main OECD disciplines aggregate WoS subject categories, which consist of sets of disciplinary journals, into the following broad disciplines: (1) natural sciences, (2) engineering and technology, (3) medical and health sciences, (4) agricultural sciences, (5) social sciences, and (6) humanities. Since many journals are assigned to more than one WoS category, many papers in our dataset are correspondingly assigned to more than one OECD main discipline. Thus, we used a dataset in which several papers are included multiple times. This multiplicative approach does not affect the test for disciplinary biases, because the 10% most frequently read papers are selected on the basis of the dataset with multiply included papers. The category "not categorized" is also used by the OECD scheme, but is not considered in this study.
In the third step, the proportion of papers which belong to the 10% most frequently read papers from the first step is determined for each broad discipline. The expectation is that this proportion equals 10% if the indicator values are independent of disciplines or are properly field-normalized, respectively. Thus, bare reader counts should show greater deviations from 10% than MNRS and MDNRS.
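The three-step procedure can be sketched compactly in code. The indicator values and discipline labels below are hypothetical; the function mirrors the described steps, including the multiple discipline assignments per paper.

```python
def top10_proportions(values, disciplines):
    """Three-step test sketch: values[i] is paper i's indicator score;
    disciplines[i] is its list of main disciplines (a paper may carry
    several assignments, as with the OECD scheme)."""
    n = len(values)
    # Step 1: flag the 10% most frequently read papers by the indicator.
    order = sorted(range(n), key=lambda i: values[i], reverse=True)
    cutoff = max(1, round(0.10 * n))
    top = set(order[:cutoff])
    # Steps 2 and 3: per-discipline share of top-10% papers.
    totals, hits = {}, {}
    for i in range(n):
        for d in disciplines[i]:
            totals[d] = totals.get(d, 0) + 1
            hits[d] = hits.get(d, 0) + (i in top)
    return {d: hits[d] / totals[d] for d in totals}

# Hypothetical example: 20 papers alternating between disciplines A and B,
# with indicator values independent of the discipline.
values = list(range(20, 0, -1))
disciplines = [["A"] if i % 2 == 0 else ["B"] for i in range(20)]
props = top10_proportions(values, disciplines)
print(props)  # → {'A': 0.1, 'B': 0.1}
```

Because the toy indicator is independent of the discipline, each discipline receives exactly the expected 10% share; a field-biased indicator would push one proportion above 10% and the other below.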
The results of the three-step procedure are shown in Table 5. The MDNRS is compared with the MNRS and bare reader counts. The table shows the total number of papers within the main disciplines and the proportion of papers within a main discipline which belong to the 10% most frequently read papers. As the numbers of papers for bare reader counts, MDNRS, and MNRS show, the paper numbers for the MNRS are significantly higher in all main disciplines. This is due to the fact that papers with zero readers and papers from 2012 which could not be found on Mendeley cannot be considered in the analyses of the bare reader counts and the MDNRS. We expect the most and largest deviations of the proportions from 10% for bare reader counts, because they are not field-normalized. In the interpretation of the proportions in Table 5, we deem deviations of less than 2 percentage points acceptable (i.e. proportions greater than 8% and less than 12%). Here, we follow rules of thumb given by Belter and Sen (2014) for the interpretation of top-10% proportions. In other words, if the deviation of the proportion is within this range of tolerance, the normalization seems to be approximately valid. Since 2 percentage points is a large error margin, we also report the mean absolute deviation from the expected value of 10%. All proportions in Table 5 with a deviation outside this range of tolerance are printed in bold. As the results in the table show, the bare reader counts fail to reach the range of tolerance in five out of six disciplines. The largest deviation is visible for the social sciences (with 17.9%). Although the social sciences have an even larger deviation for the MDNRS (with 25%), more disciplines are within the range of tolerance for the MDNRS (three out of six) than for the bare reader counts.
The MNRS shows the best results in Table 5: All main disciplines have deviations of less than 1 percentage point from 10%. The indicator seems to field-normalize reader counts properly in all disciplines. Following the arguments of Sirtes (2012) and Waltman and van Eck (2013b), the comparatively best results for the MNRS could have a simple reason: The indicator uses the same scheme of subject categories for the field-normalization on which the tests in Table 5 are also based. In other words, the use of the OECD main disciplines in the analyses of this study gives the MNRS an advantage over the MDNRS. We therefore repeated the analyses, following Waltman and van Eck (2013b), using another scheme of field categorization: an algorithmically constructed classification system (ACCS). Waltman and van Eck (2012) proposed the ACCS as an alternative to the frequently used field categorization scheme based on journal sets. The ACCS is based on direct citation relations between publications. In contrast to the WoS category scheme, each publication is assigned to only one field category.
We downloaded the ACCS for the papers at the homepage of the Leiden ranking. The results of the comparison between bare reader counts, MDNRS, and MNRS based on the ACCS (applied on the highest field-aggregation level) are shown in Table 6; the mean absolute deviations from the expected value of 10% are 3.8 (bare reader counts), 3.4 (MDNRS), and 0.8 (MNRS). The results are similar to those in Table 5. The MNRS reveals the best result: The proportions of papers belonging to the 10% most frequently read papers fall within the range of tolerance in all disciplines. The MDNRS follows, with two out of five disciplines having larger deviations from the expected value. However, the MDNRS improves its results compared to the main OECD disciplines (see Table 5): the proportion for biomedical and health sciences is close to the range of tolerance and the proportions for three disciplines (out of five) are within the range (instead of three out of six in Table 5). However, there is a large deviation of 22.2% for the social sciences and humanities (similar to Table 5). This large deviation for social sciences and humanities is also visible for bare reader counts (with 16.2%), whereby social sciences and humanities is one of four disciplines with larger deviations from 10% for bare reader counts.
Taken as a whole, the proportions in Table 5 and Table 6 for bare reader counts reveal that field-normalization is generally necessary for Mendeley reader counts: larger deviations from the expected value of 10% are found in most of the disciplines. The MNRS should be preferred for the field-normalization, because it seems to reach the desired goal in all disciplines. The mean absolute deviation from the expected value of 10% is significantly smaller for the MNRS than for bare reader counts and the MDNRS. The results of the study are ambivalent for the MDNRS: In the natural sciences, humanities, and engineering the reader counts seem to be properly field-normalized, but in the social sciences they are definitely not.

Normalized reader impact of journals
Papers from 9,563 journals out of the 12,334 WoS journals in 2012 are covered among the papers found on Mendeley. We calculated the MNRS and MDNRS for all 9,563 journals. Table 7 shows for each OECD main discipline the 10 journals with at least 10 papers in 2012 and the highest MDNRS. The minimum of 10 papers published in 2012 ensures that the calculation of the journal indicator values is based on a considerable number of papers (a higher threshold would have excluded too many journals in the humanities from the analysis).
Also, we calculated within each discipline the Spearman rank-order correlation between MNRS and MDNRS. We used the guidelines of Cohen (1988) and Kraemer et al. (2003) to interpret the coefficients. Much larger than typical coefficients (r_s ≥ 0.7) would reveal that both indicators lead to similar journal rankings. Table 7 shows two paper numbers for the journals: the number of papers and the number of unique papers. Many papers are assigned to more than one WoS subject category, and many journals are also assigned to more than one WoS subject category. Since these papers are considered multiple times in the calculation of the journal MNRS, this indicator is based on higher case numbers than the MDNRS (where each paper is considered only once). The results in Table 7 reveal that the best journals in natural sciences, engineering and technology, medical and health sciences, and social sciences receive MDNRS values of around 30. In other words, an excellent reader performance (on the journal level) is reached at this level in these disciplines. In agricultural sciences and humanities, the best MDNRS values are significantly lower: The best journal in agricultural sciences (Fish and Fisheries) reaches a MDNRS of 9.4 and the best journal in humanities (New German Critique) a MDNRS of 14.9. Similar deviations, especially for the humanities, are also visible for citing-side indicators (see Figure 1, F, in Bornmann & Marx, 2015).
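The rank-order comparison of the two indicators can be sketched as follows. The implementation is a minimal no-ties version of Spearman's r_s, and the journal indicator values are hypothetical toy data.

```python
def spearman(x, y):
    """Spearman rank-order correlation sketch (assumes no tied values)
    between two journal indicator lists, e.g. MNRS and MDNRS values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    # Classical formula for untied data: r_s = 1 - 6*sum(d^2)/(n(n^2-1)).
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical MNRS and MDNRS values for five journals.
mnrs  = [13.4, 5.2, 3.1, 2.0, 0.9]
mdnrs = [30.1, 12.5, 9.4, 6.2, 1.1]
print(spearman(mnrs, mdnrs))  # → 1.0: identical journal rankings
```

In the toy data both indicators rank the journals identically, so r_s = 1.0; in practice, tie handling (e.g. as in scipy.stats.spearmanr) would be needed for journals with equal indicator values.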
In all disciplines, the MNRS values are expectedly lower than the MDNRS values (see Table 7). Very high MNRS values of larger than 10 can be found in natural sciences (e.g. Nature Photonics has a MNRS value of 13.4). A MNRS value of 10 means that the papers in the journal have been read ten times more frequently on average than an "average" paper in the corresponding subject category and with the same document type.
The correlation coefficients in Table 7 show much larger than typical correlations (r s ≥0.7) in five out of six disciplines: natural sciences, engineering and technology, medical and health sciences, agricultural sciences, and social sciences. Only in the humanities is the coefficient between typical and larger than typical (r s =0.4).
Taken as a whole, the coefficients in Table 7 show that one receives similar rankings (of journals), if MNRS or MDNRS are used in five disciplines. However, one can expect (larger) differences between the indicators in the humanities: Here, the choice of the indicators makes a larger difference.

Discussion
Here, we have proposed the field-normalized indicator MDNRS based on Mendeley data which might complement the paper-side normalization of reader counts (MNRS) proposed by Haunschild and Bornmann (2016a). Since the calculation of the MNRS needs further data besides data from Mendeley (in order to have a field-classification scheme for normalization), the reader-side normalization of reader counts is an attractive alternative: The MDNRS does not need further data and can be exclusively calculated with data from Mendeley, because the MDNRS is normalized with respect to Mendeley disciplines which are reported with virtually all reader counts. The MDNRS is an adaptation of the citing-side normalization used in bibliometrics to altmetrics.
In this study, we tested whether the MDNRS is able to field-normalize reader counts.
For comparison, we also included bare reader counts and the MNRS in the analyses. In a first, theoretical analysis (see section 3.3), we studied whether the indicator fulfils several desirable properties (e.g., the property of consistency). This analysis revealed mixed results: The MDNRS has the property of consistency.
However, the analysis has also shown that the MDNRS does not perform a proper field normalization (except under idealized circumstances, for instance when each publication is read in only a single discipline). In a second analysis (see section 3.4), we identified the papers which belong to the 10% most frequently read papers over all disciplines. Then, we calculated the proportion of these top-10% papers within several main disciplines. Deviations from the expected value of 10% in a discipline should reveal problems in field normalization or the field-dependence of an indicator. The MNRS shows the best results overall and can be recommended as a properly field-normalized indicator in all disciplines. The results for the MDNRS are ambivalent, with the social sciences being the most problematic discipline, showing large deviations from the expected value. However, both field-normalization methods performed significantly better than bare reader counts. In a third analysis (see section 3.5), correlations between the MDNRS and the MNRS have been calculated on the journal level.
Large coefficients indicated for most of the disciplines that both indicators led to similar journal rankings. The only exception in this analysis is a smaller coefficient in the humanities.
Taken as a whole, the MNRS should be preferred as the field-normalization method for Mendeley reader counts. The indicator shows the best ability to field-normalize. However, if the user does not have an external field-classification scheme at hand (e.g. the WoS subject categories), which is necessary for the calculation of the MNRS, the MDNRS can be calculated instead, albeit with caution. This means, for example, that the MDNRS should not be used with publication sets in the social sciences (and only with care in the humanities).
Although MDNRS values are more difficult to interpret than MNRS values, our analyses on the journal level reveal one important benchmark: An MDNRS of around 30 seems to be an exceptionally high value in several disciplines.
What are the limitations of our study? Our strategy of retrieving reader counts from the Mendeley API via the DOI could be seen as a limitation, as it excludes articles and reviews from 2012 which do not have a DOI. Limiting the data set to publications with a DOI reduces the number of publications from 1,390,504 to 1,198,184. Thus, 86.2% of the articles and reviews of the WoS core collection of 2012 are considered in this study. It might be the case that papers without a DOI are disciplinarily (towards the humanities and social sciences) and geographically (towards developing countries) biased. There might be better options to identify the papers in the Mendeley API.
However, this study is not intended as an evaluation study which compares reader impacts of journals. The main aim of the study is to propose and investigate possibilities to normalize Mendeley reader counts.
A further limitation concerns the test which we used to assess the ability of the MDNRS and the MNRS to field-normalize reader counts. In the first step of the test, all papers in the dataset are sorted by the indicator under study. In the second step, all papers which belong to the 10% best-performing papers are marked (with the value 1 in a new binary variable x), and all papers are assigned to a subject category scheme (e.g. the OECD scheme). In the third step, the percentage of papers within each subject category which are marked with x=1 is determined. The more the percentages within the subject categories deviate from the expected value of 10%, the less the indicator is able to normalize reader counts. Although the test is able to assess the lack of disciplinary biases in the results, it is not able to assess the value of different metrics. The value of different metrics can be assessed by using other methods, for example, by correlating indicators with other criteria of quality, especially expert judgements (Bornmann, 2014b). We have proposed the paper-side normalization (MNRS) in Haunschild and Bornmann (2016a) and the reader-side normalization (MDNRS) in this study. The reader-side normalization is only possible due to the availability of Mendeley reader disciplines. The paper-side normalization can also be applied to other altmetrics data, such as tweets or blog posts, as it relies on an external classification system assigned to individual papers (or the journals where the papers were published). The transferability of the MNRS points to a further limitation of the MDNRS which does not apply to the MNRS: The MDNRS was obtained through normalization with regard to self-assigned Mendeley disciplines. Fortunately, the share of Mendeley users who report and share their discipline is very high for papers from 2012.
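The three steps of this test can be sketched in a few lines of code; the data structures, discipline labels, and scores below are hypothetical, and ties at the 10% cutoff are handled naively here:

```python
def top10_test(papers, score_key):
    """Three-step test described above: (1) sort all papers by the
    indicator, (2) mark the top 10% as x=1, (3) compute the share of
    marked papers per discipline. Shares near 10% in every discipline
    indicate a properly field-normalized indicator.
    Note: ties at the cutoff are broken arbitrarily in this sketch."""
    ranked = sorted(papers, key=lambda p: p[score_key], reverse=True)
    cutoff = max(1, round(0.10 * len(ranked)))
    top = {id(p) for p in ranked[:cutoff]}
    shares = {}
    for disc in {p["discipline"] for p in papers}:
        group = [p for p in papers if p["discipline"] == disc]
        shares[disc] = 100.0 * sum(id(p) in top for p in group) / len(group)
    return shares

# Toy data: 20 papers in two hypothetical disciplines "A" and "B".
# The top 10% (2 papers) contains one paper from each discipline,
# so both shares equal the expected 10%.
papers = ([{"discipline": "A", "score": 100}]
          + [{"discipline": "A", "score": s} for s in range(9)]
          + [{"discipline": "B", "score": 99}]
          + [{"discipline": "B", "score": s + 10} for s in range(9)])
print(top10_test(papers, "score"))  # both disciplines at 10.0
```

An indicator that systematically favors one discipline would push that discipline's share above 10% and the others' below it, which is exactly the pattern the deviations in section 3.4 quantify.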
However, we do not see a viable way to check whether Mendeley users report their discipline information properly and update it regularly.
This study is a first attempt to develop field-normalized indicators on the reader side.
We would appreciate it if this paper were used as a starting point to develop more complex indicator variants with reduced disciplinary biases. In possible variants, the readers of a publication who come from multiple disciplines could be assigned not fully (as in the MDNRS), but fractionally to each of the disciplines. However, such approaches should be investigated theoretically and empirically in detail.
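One possible reading of such a fractional variant can be sketched as follows; the discipline names and reader numbers are hypothetical, and this is only an illustration of the counting idea, not a worked-out indicator:

```python
def fractional_shares(discipline_readers):
    """Fractional counting idea sketched above: instead of assigning a
    paper fully to every discipline with at least one reader, assign it
    to each discipline in proportion to that discipline's share of the
    paper's readers. The shares sum to 1 per paper."""
    total = sum(discipline_readers.values())
    return {d: n / total for d, n in discipline_readers.items()}

# Hypothetical paper with 10 Mendeley readers from three disciplines
print(fractional_shares({"Physics": 6, "Engineering": 2, "Chemistry": 2}))
# {'Physics': 0.6, 'Engineering': 0.2, 'Chemistry': 0.2}
```

A normalization built on such weights would dampen the influence of disciplines that contribute only a small minority of a paper's readers, which is one route towards the reduced disciplinary biases mentioned above.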