Incorporating Perceived Importance of Service Elements Into Client Satisfaction Measures

Objective: The purpose of this study was to assess the need for incorporating perceived importance of service elements into client satisfaction measures. Method: A secondary analysis of client satisfaction data from 112 clients of an elderly case management setting was conducted. Results: This study found that the relationship between global client satisfaction and the composite of satisfaction with service elements differed significantly, depending on perceived importance of service elements. Conclusions: These results call into question the practice of simply adding or averaging scores from satisfaction items to produce global satisfaction scores without considering perceived importance of the service elements.


Introduction
Although client satisfaction has consistently received attention in social work and social services (e.g., Eckert, 1994; Kane, Bartlett, & Potthoff, 1995; Rossi, Freeman, & Lipsey, 2004; Royse, Thyer, Padgett, & Logan, 2009), the practical utility of client satisfaction studies has been restricted due, in part, to two interrelated issues (Hsieh, 2009). First, many client satisfaction studies used measures that were not context-specific (Schneider, 1991). Popular generic satisfaction instruments, such as the Client Satisfaction Questionnaire (CSQ-8; Nguyen, Attkisson, & Stegner, 1983) and the Reid-Gundlach Social Service Satisfaction Scale (R-GSSSS; Reid & Gundlach, 1983), often cannot provide detailed information for service providers because different settings have different service elements (Chou, Boldy, & Lee, 2001). Second, the concept (or construct) of client satisfaction is multidimensional (e.g., Ruggeri & Greenfield, 1995). That is, clients can be satisfied with a service overall but dissatisfied with some of its elements, or the other way around. Although many published client satisfaction instruments provide information on various domains or dimensions, these domains or dimensions (often derived through exploratory statistical analysis such as factor analysis) are typically too abstract to support direct inferences about service provision. Users of popular generic instruments generally end up examining only global or overall satisfaction scores. Such global scores from measures that are not context-specific are of limited use: they cannot pinpoint the sources of satisfaction and dissatisfaction and therefore do not give service providers feedback detailed enough to guide service improvement.
In order to obtain client satisfaction data that have direct relevance, it is not uncommon for researchers, evaluators, or service providers to develop their own client satisfaction measures that are context-specific to their service settings. These measures are often constructed from a set of satisfaction rating items (most likely Likert-type) covering each of the service elements specific to the setting (e.g., Hsieh & Essex, 2006). Scores from these element satisfaction items are then either summed or averaged to produce global satisfaction scores. By summing or averaging satisfaction scores across all element satisfaction items, one implicitly assumes that all survey items representing the various service elements carry equal weight (e.g., Chou et al., 2001; Kruzich, Clinton, & Kelber, 1992).
Given that individual clients may perceive certain survey items or service elements to be more important, or to carry more weight, than others, the assumption of equal weight seems counterintuitive (Chou et al., 2001; Hsieh, 2006). In fact, there has been evidence against the equal weight assumption (Hsieh, 2009). One approach to addressing potentially unequal weights among survey items is to incorporate perceived importance of service elements into the scoring, a practice known as importance weighting (Hsieh & Essex, 2006). The use of importance weighting is not uncommon and has received wide attention in the life satisfaction literature (e.g., Hsieh, 2003, 2004; Rojas, 2006; Russell & Hubley, 2005; Russell, Hubley, Palepu, & Zumbo, 2006). However, little is known about the adequacy of importance weighting in the context of client satisfaction. The purpose of this article is to assess the adequacy of importance weighting in the client satisfaction context. By exploring the relationships among global/overall satisfaction, element-specific satisfaction, and importance, based on data obtained in an elderly case management service setting, this article empirically assessed the effect of importance weighting in the client satisfaction context.

Lessons From the Life Satisfaction Literature
As observed in previous studies (Hsieh, 2006, 2009; Hsieh & Essex, 2006), there are a number of similarities between client satisfaction and life satisfaction regarding their measurement and conceptualization. First, both client satisfaction and life satisfaction involve subjective evaluations of objective conditions (e.g., Diener, Lucas, Oishi, & Suh, 2002; Reid & Gundlach, 1983). Second, client satisfaction (Chou et al., 2001; Ruggeri & Greenfield, 1995) and life satisfaction (e.g., Cummins, 1995, 1996; Diener, 1984) are both multidimensional constructs. Third, client satisfaction can be measured either by a single-item global satisfaction rating or by a composite of satisfactions with various domains (e.g., Nguyen et al., 1983); the same applies to life satisfaction (e.g., Cummins, 1995, 1996). Given these similarities, research addressing measurement and conceptualization issues in the life satisfaction literature can serve as a foundation for studying client satisfaction.
The life satisfaction literature provides both conceptual and empirical evidence for understanding the role of perceived importance of various life domains in life satisfaction measures constructed through the so-called bottom-up approach (e.g., Hsieh, 2003, 2004, 2006, 2009). Despite the use of different terms (e.g., value priority by Inglehart in 1978 and psychological centrality by Ryff and Essex in 1992), researchers generally agree that perceived importance could act as a mechanism linking global life satisfaction and domain satisfactions. In other words, domains that are more important could have more influence on global satisfaction than domains that are less important (Hsieh, 2003, 2004). Given the empirical findings in the life satisfaction literature, it is not unreasonable to infer that clients' perceived relative importance of service elements may likewise be the mechanism linking satisfaction with various service elements to global satisfaction. However, it would be presumptuous to assume, without actual empirical evidence, that client satisfaction studies will generate findings similar to those in the life satisfaction literature. Although many researchers of life satisfaction support the concept of importance weighting (e.g., Campbell, Converse, & Rogers, 1976; Inglehart, 1978), there is no consensus on how this weighting functions. It is not unreasonable to assume that more important life domains carry more weight than less important life domains in predicting overall life satisfaction. However, the range of ways in which importance weighting might function is large. For example, do changes in domain importance follow a straightforward linear (constant) function or some other type of function? Should domains rated as not important be conceptualized as carrying no weight at all in predicting overall life satisfaction?
Campbell et al. (1976) discussed a number of possible approaches, such as ''hierarchy of needs,'' ''threshold,'' and ''ceiling.'' The ''hierarchy of needs'' approach suggests that certain kinds of needs are more essential than others; unless these most essential domains are reasonably satisfied, what happens in the less essential domains probably does not matter much for overall life satisfaction. The ''threshold'' approach suggests that overall life satisfaction depends on the presence of some threshold number of satisfactions: if the number of domains with which a person is satisfied does not meet this threshold, the person would not feel satisfied with life as a whole. Finally, the ''ceiling'' approach suggests there is a top limit, or ceiling, to the number of domain satisfactions experienced by an individual, and satisfaction with domains beyond that limit would not produce increased satisfaction with life as a whole. These are, unfortunately, only some of the possible weighting processes; the actual weighting process remains unclear. Given the lack of a commonly accepted conceptualization of a weighting process, many researchers, Campbell et al. (1976) included, decided to use a simple sum of domain satisfactions (the summed-domains approach), without taking potential interperson differences into account, to represent one's global life satisfaction (e.g., Beatty & Tuch, 1997; Mookherjee, 1992).
Although it is not the only method, a common way to achieve importance weighting in the life satisfaction literature is the multiplicative score approach (e.g., Ferrans & Powers, 1985), which multiplies satisfaction and importance ratings (Trauer & Mackinnon, 2001). To examine the effect of importance weighting, correlation analysis is often used, correlating the weighted (i.e., multiplicative) scores with a criterion variable (e.g., a global life satisfaction measure). The effect of importance weighting is generally assessed by comparing the correlations of the criterion variable with the weighted score and with the unweighted score (e.g., Hsieh, 2003, 2004; Russell & Hubley, 2005; Wu, 2008).
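The weighted-versus-unweighted comparison described above can be sketched as follows. This is a minimal illustration with simulated ratings; the sample size, variable names, and all data values are hypothetical and not drawn from any study:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 112, 5  # hypothetical: n respondents, k satisfaction domains

# Simulated Likert-type ratings (illustrative only)
satisfaction = rng.integers(1, 8, size=(n, k)).astype(float)  # 1-7 scale
importance = rng.integers(1, 6, size=(n, k)).astype(float)    # 1-5 scale
criterion = satisfaction.mean(axis=1) + rng.normal(0, 0.5, n)  # toy global measure

# Unweighted composite: simple sum of domain satisfactions
unweighted = satisfaction.sum(axis=1)
# Weighted composite: sum of satisfaction-by-importance products
weighted = (satisfaction * importance).sum(axis=1)

# Effect of weighting assessed by comparing the two correlations
r_unweighted = np.corrcoef(unweighted, criterion)[0, 1]
r_weighted = np.corrcoef(weighted, criterion)[0, 1]
print(f"r(unweighted, criterion) = {r_unweighted:.3f}")
print(f"r(weighted, criterion)   = {r_weighted:.3f}")
```

The comparison of the two correlation coefficients is the customary, though contested, test of the weighting effect discussed next.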
However, as many have pointed out (e.g., Russell & Hubley, 2005; Trauer & Mackinnon, 2001), the practice of using multiplicative scores to measure life satisfaction can be problematic. Domain-specific satisfaction, as well as importance, is generally measured with items using Likert-type scale responses, and the numeric values assigned to these responses are largely arbitrary. It is difficult to justify why the values of a 5-point scale should run from one to five rather than from two to six, or some other range of values. Aside from the conceptual ambiguity of a score that is a product of satisfaction and importance (e.g., Hsieh, 2004; Trauer & Mackinnon, 2001), a major issue with multiplicative scores is the potential problem of multiplying two ordinal-level scores (Trauer & Mackinnon, 2001). Further complicating the issue, a number of researchers (e.g., Arnold & Evans, 1979; Evans, 1991) argued that correlation is not an appropriate method for testing the effect of multiplicative scores. Because the correlation between a product variable (here, the multiplicative score) and a third variable depends on the scales of the two original variables, the effect of importance weighting cannot be reliably captured by correlation analysis. It has been suggested that the appropriate method for examining the effect of a product variable (such as the weighted score) is moderated regression analysis (Arnold & Evans, 1979; Cohen, 1978; Evans, 1991). That is, the effect of importance weighting should be examined by assessing whether the relationship between the global satisfaction measure and domain satisfaction scores varies significantly across domain importance.
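The scale-dependence problem raised by Arnold and Evans can be demonstrated numerically: recoding the same ordinal importance responses from 1-5 to 2-6, an equally arbitrary assignment, changes the correlation between the product score and a criterion. A small sketch with simulated (hypothetical) data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # hypothetical sample size
satisfaction = rng.integers(1, 8, size=n).astype(float)  # 1-7 Likert codes
importance = rng.integers(1, 6, size=n).astype(float)    # 1-5 Likert codes
criterion = satisfaction + rng.normal(0, 1, n)           # toy global score

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

# Same ordinal responses, two equally arbitrary numeric codings (1-5 vs. 2-6)
r_coded_1to5 = corr(satisfaction * importance, criterion)
r_coded_2to6 = corr(satisfaction * (importance + 1), criterion)
print(r_coded_1to5, r_coded_2to6)  # the two correlations generally differ
```

Because an innocuous rescaling of one factor alters the result, the correlation of a product score with a criterion cannot serve as a stable test of the weighting effect, which is the motivation for moderated regression.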
Given the similarities between client satisfaction and life satisfaction in their measurement and conceptualization, it is not unreasonable to make the same assumption that more important service elements carry more weight than less important service elements in contributing to overall client satisfaction. However, as in the life satisfaction literature, the ways in which importance weighting should function, and how it should be incorporated into client satisfaction measures, remain unclear. Also, in the client satisfaction context, the effect of importance weighting should probably be examined by moderated regression analysis, assessing whether the relationship between the global client satisfaction measure and the composite of element-specific satisfaction scores varies significantly across the perceived importance of various service elements.
The purpose of this study was to empirically investigate the following research question: Should perceived importance of service elements be incorporated into client satisfaction measures? More specifically, this study examined whether the relationship between global/overall client satisfaction and the composite of element-specific satisfaction varied significantly across perceived importance of service elements among a group of clients receiving elderly case management services.

Sample and Setting
Empirical results presented here came from a study that aimed to develop a client satisfaction measure with practical utility to improve case management services for the elderly (see Hsieh, 2006 for details). A client satisfaction survey was conducted with a group of clients of an elderly case management service unit located in a large city in the Midwest region of the United States. The unit provides case management to persons aged 60 or older who are in need of in-home services, including intake screening, assessment, development of a plan of care, referrals or linkages with service providers, monitoring, and reassessment. This state-funded program serves about 4,000 clients. Due to concerns about client privacy, the case management unit decided that only clients who gave (face-to-face) written consent to be contacted could be approached for research. Since obtaining written consent from all clients would require work beyond the limited staff capacity of the unit, consent to be contacted was obtained only from clients who were scheduled for the unit's follow-up or reassessment visits (so the unit's staff could obtain face-to-face written consent) during the months of January through July 2005. Clients who could not speak English were excluded from the study. Clients who scored lower than 21 on the Mini-Mental State Examination (MMSE; Folstein, Folstein, & McHugh, 1975) were also excluded to avoid potential problems due to cognitive impairment. Participants of this study, therefore, could not be considered representative of all clients of the unit. Upon receipt of a client's consent, a trained graduate research assistant set up interview appointments, and face-to-face interviews were conducted by the research assistant at the participants' homes. We were unable to reach 14 of the 141 clients who gave consent to be contacted (after at least five attempts).
In all, 15 of the remaining 127 clients refused to participate in the study. Interviews lasted, on average, 20 min, and participants received $10 for the interview. The study was approved by the University of Illinois at Chicago's Institutional Review Board. A total of 112 interviews were completed. Most of the study participants were female (81%) and African American (92%). The mean age of the study participants was 76.4 (SD = 7.3), ranging from 62 to 94. The mean years of schooling completed were 9.8 (SD = 3.0), ranging from 2 to 16. Most of them were retired (96%) and had an annual household income below $15,000 (90%).

Measures
Service element satisfaction and importance. Based on the literature on elderly case management services (e.g., Geron et al., 2000; Robinson, 2000; White, 1986) and discussions with case managers and clients, five major elements of service provision were identified: assessment of clients' needs, plan of care development, case manager's knowledge regarding available services, case manager's ability to get services for clients, and the availability of the case manager (see Hsieh, 2006 for details). Participants were asked to rate their satisfaction with each of the five major service elements using 7-point Likert-type rating items. The statement used for the satisfaction rating was: ''Please use a number from 1 to 7 to indicate your satisfaction, where 7 means completely satisfied and 1 means completely dissatisfied. If you are neither completely satisfied nor completely dissatisfied, you would put yourself somewhere from 2 to 6; for example, 4 means neutral, or just as satisfied as dissatisfied.'' Similarly, participants were asked to rate the importance of each of the five service elements, using the question: ''Some people may feel some areas of the case management services are more important than others. What areas of case management services do you consider extremely important or not at all important to you? Please use a number to indicate the importance of the services from 1 through 5, where 5 means 'extremely important' and 1 means 'not at all important.''' As discussed previously (Hsieh, 2006), the reliability of the measure should be based on the test-retest approach. The test-retest reliability for the current measure in this study was r = .81.
Global satisfaction. The CSQ-8 (Nguyen et al., 1983) was used as a global measure of client satisfaction. Following the assigned score values provided by Nguyen et al. (1983), each item could range from 1 to 4, with higher values indicating higher satisfaction. The reliability coefficient (Cronbach's α) was .85 for this 8-item measure for the current study sample. The mean score of the CSQ-8 was 28.58 (SD = 3.55), ranging from 18 to 32 for the current study sample.

Analysis
Both satisfaction and importance scores in this study were obtained from Likert-type items yielding ordinal data, so using the multiplicative score approach to incorporate perceived importance could be problematic (Trauer & Mackinnon, 2001). To avoid potential issues related to assigning arbitrary score values to data from the importance items, the importance score for each service element was dichotomized into ''extremely important'' (most important) versus not as important. That is, a dichotomous dummy variable was created for each (service element) importance item. For each dummy variable, a value of 1 was assigned to indicate ''yes'' to being most important if the importance item had a score of 5 (extremely important); otherwise, a value of 0 was assigned to indicate ''no.'' Although dichotomizing the responses to the service element importance items does not take full advantage of the ordinal data, comparisons could still be made between service elements that clients perceived to be extremely (most) important and those perceived to be not as important.
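The dichotomization just described amounts to simple dummy coding. A minimal sketch, with hypothetical ratings:

```python
import numpy as np

# Hypothetical 1-5 importance ratings for one service element
importance = np.array([5, 3, 5, 4, 1, 5, 2])

# 1 = rated ''extremely important'' (a score of 5); 0 = not as important
most_important = (importance == 5).astype(int)
print(most_important.tolist())  # -> [1, 0, 1, 0, 0, 1, 0]
```

In the analysis below, one such dummy variable per service element enters the regression in place of the raw 1-5 importance codes.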
In order to assess the adequacy of incorporating relative domain importance in client satisfaction measures, moderated regression analysis was used (Arnold & Evans, 1979; Cohen, 1978; Evans, 1991). Specifically, the purpose of the analysis was to determine whether the relationship between global/overall client satisfaction and the composite of element-specific satisfaction differed significantly for service elements that were considered extremely important versus not as important. If this relationship did not differ significantly between service elements considered extremely important and those not as important, then it would be concluded that there was no evidence to support incorporating perceived importance of service elements in client satisfaction measures.
The moderated regression analysis was conducted according to the three-step hierarchical regression procedure suggested by Evans (1991). The analysis, similar to what was proposed by Mastekaasa (1984), began by estimating a regression model with global client satisfaction as the dependent variable and satisfaction with all the service elements together as independent variables. The second step added the (dichotomized) perceived importance of all service elements as independent variables. The third step added to the second step the interaction/product terms of satisfaction by importance for all service elements as independent variables. Since the focus was to determine the contribution of the importance-by-satisfaction interaction terms, coefficients on specific service element satisfaction, importance, and satisfaction-by-importance terms were not of interest. Rather, the change in R² from Step 2 to Step 3 was the focus, indicating the need for the inclusion of the interaction (i.e., importance weighting) terms (see Evans, 1991 for details).


Results
Table 1 shows clients' perceived importance of and satisfaction with each service element. Based on the mean ratings shown in the upper panel of Table 1, the most important element was the case manager's ability to get services, followed, in order, by the case manager's assessment of needs, the case manager's availability, the plan of care, and the case manager's knowledge regarding available services. The service element with the highest percentage of ''extremely important'' ratings was the case manager's ability to get services, followed by the case manager's assessment of needs, the case manager's availability, the plan of care, and the case manager's knowledge regarding available services.
The lower panel of Table 1 shows clients' satisfaction with each service element. Based on these results, clients were most satisfied with case managers' assessments of their needs and were not as satisfied with the plan of care they received and case managers' ability to get services for them. Table 2 shows the R² and change in R² of the moderated regression analysis. It should be noted that some respondents did not answer all the questions. Since the portion of missing data was quite small (about 5%), list-wise deletion was used to handle missing data. The final sample size for the regression analysis was 106. As shown in Table 2, the regression model with global client satisfaction (CSQ-8) as the dependent variable and satisfaction with all five service elements together as independent variables had an R² of .55. When the block of dichotomized (extremely important or not) perceived importance of service elements was added to the model as the second step, the change in R² was .02; the corresponding incremental F test, F(5, 95) = 0.89, p = .49, f² = .05, was not statistically significant at the .05 level. When an additional block of satisfaction-by-importance interaction/product terms for the service elements was added as the third step, the change in R² (from Step 2 to Step 3) was .06; the corresponding incremental F test, F(5, 90) = 3.06, p = .01, f² = .17, was statistically significant at the .05 level. These results indicated that the relationship between global client satisfaction (as measured by the CSQ-8) and the composite of client satisfaction with various service elements differed significantly across clients' perceived importance of service elements.
More specifically, the relationship between global client satisfaction (as measured by the CSQ-8) and the composite of client satisfaction with various service elements differed between service elements that were most important to clients and those that were not as important.
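The three-step hierarchical procedure and the incremental F test for the interaction block can be sketched as follows. This sketch uses only simulated data; the sample size, the built-in interaction, and all numeric values are assumptions for illustration, so the resulting R² values will not match the study's:

```python
import numpy as np

def r_squared(X, y):
    """R-squared of an OLS fit with an intercept, via least squares."""
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(2)
n, k = 106, 5  # hypothetical: analysis sample size, five service elements
S = rng.integers(1, 8, size=(n, k)).astype(float)  # element satisfaction, 1-7
D = rng.integers(0, 2, size=(n, k)).astype(float)  # dichotomized importance
# Toy criterion with a built-in satisfaction-by-importance interaction
y = (S * (1 + 0.5 * D)).sum(axis=1) + rng.normal(0, 2, n)

r2_step1 = r_squared(S, y)                          # Step 1: satisfaction only
r2_step2 = r_squared(np.hstack([S, D]), y)          # Step 2: + importance dummies
r2_step3 = r_squared(np.hstack([S, D, S * D]), y)   # Step 3: + interaction terms

# Incremental F test for the block of k interaction terms (Step 2 -> Step 3)
delta_r2 = r2_step3 - r2_step2
df2 = n - 1 - 3 * k  # 106 - 16 = 90, matching the reported F(5, 90)
F = (delta_r2 / k) / ((1 - r2_step3) / df2)
print(f"delta R^2 = {delta_r2:.3f}, F({k}, {df2}) = {F:.2f}")
```

A significant incremental F for the interaction block, as in the study's Step 3, is what indicates that the satisfaction-criterion relationship varies with perceived importance.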

Discussion
The purpose of this article was to assess the adequacy of importance weighting in the client satisfaction context. More specifically, this study empirically examined whether the relationship between global/overall client satisfaction and the composite of client satisfaction with various service elements varied significantly across perceived importance of the service elements, using data collected in an elderly case management setting. As reported in the Results section, the relationship between global client satisfaction and the composite of satisfaction with various service elements differed between service elements perceived to be extremely (most) important and those perceived to be not as important. It should be noted that the results presented here pertain only to a specific client satisfaction measure developed for the elderly case management setting, and the data came from clients of only one elderly case management service agency. Generalizability of these results may, therefore, be somewhat limited. Nevertheless, these findings are worth noting and have two applications to social work research and practice. First, the empirical results indicated that the relationship between global client satisfaction and the composite of satisfaction with various service elements was dependent upon perceived importance of service elements. These findings support incorporating perceived importance of service elements in client satisfaction measures. In other words, the common practice of obtaining a global satisfaction score by summing or averaging client satisfaction scores across various service elements, without considering the importance of those elements, should be revisited.
More specifically, since this study found that the relationship between global client satisfaction and the composite of satisfaction with various service elements differed significantly in relation to perceived importance of the service elements, summing or averaging satisfaction scores across service elements without considering their importance would not capture the actual relationship between the composite of element satisfactions and global client satisfaction. Researchers, evaluators, or service providers who develop their own client satisfaction measures should not dismiss the possibility that a client's global satisfaction may be a composite of satisfaction with various service elements weighted by the perceived importance of those elements. It is, therefore, reasonable to take perceived importance of service elements into account when measuring client satisfaction.
Second, although findings from this study provide preliminary support for importance weighting, the ways in which importance functions (i.e., how to weight) remain unclear. It is not unreasonable to assume that service elements that are more important carry more weight in determining global client satisfaction, but exactly how importance should be incorporated into client satisfaction measures to produce global satisfaction scores remains an area that needs further investigation on both theoretical/conceptual and empirical grounds. It is suggested that the use of multiplicative scores (multiplying satisfaction scores by importance scores) as a weighting method be avoided due to the issue of conceptual ambiguity (for a detailed discussion, see Hsieh, 2004; Hsieh & Essex, 2006; Trauer & Mackinnon, 2001). The straightforward importance weighting scoring method proposed by Hsieh and Essex (2006) to produce global client satisfaction scores, using satisfactions with various service elements and perceived importance of those elements, should be adopted with the understanding that it assumes the function of importance is approximately linear, which may not reflect the actual function. The actual function of importance could be curvilinear or nonlinear, such as the ''hierarchy of needs,'' ''threshold,'' or ''ceiling'' functions discussed earlier, or a combination of these.
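As one hedged illustration of a linear weighting rule (this is not a reproduction of the Hsieh and Essex method, and the ratings below are hypothetical), an importance-weighted average normalizes the weights so the global score stays on the original satisfaction metric:

```python
import numpy as np

# Hypothetical ratings for one client across five service elements
satisfaction = np.array([6, 7, 5, 4, 6], dtype=float)  # 1-7 scale
importance = np.array([5, 3, 5, 4, 2], dtype=float)    # 1-5 scale

# Normalize importance so the weights sum to 1; the weighted global
# score then remains on the 1-7 satisfaction metric
weights = importance / importance.sum()
weighted_global = float(weights @ satisfaction)
print(round(weighted_global, 3))  # -> 5.474
```

Any such linear rule inherits the approximation noted above: it treats each unit of importance as contributing equally, which a threshold or ceiling process would not.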
In sum, empirical evidence presented in this article points to the importance of considering perceived importance of various service elements in client satisfaction measures. Given the potential practical utility of client satisfaction measures developed through the so-called bottom-up approach (Hsieh, 2006), researchers, evaluators, and service providers who use this approach should not dismiss the possible role that perceived importance of service elements could play in linking global satisfaction and satisfaction with various service elements. Although global client satisfaction, service-element satisfaction, and service-element importance are distinct constructs in a bottom-up client satisfaction measure, the possibility of a halo effect should not be discounted (e.g., Nisbett & Wilson, 1977). That is, the global satisfaction judgment might unconsciously affect the service-element satisfaction and/or importance judgments, and the consequences of such a halo effect on client satisfaction measures are an area that needs further investigation. Of course, further research is also needed to assess whether perceived importance of service elements plays a role in client satisfaction measures designed for settings other than elderly case management services. Future research that offers conceptual frameworks for linking global client satisfaction and satisfaction with various service elements will also be valuable.
In addition, empirical research that compares various potential weighting methods linking global client satisfaction and satisfaction with various service elements can facilitate a better understanding of the role perceived importance of service elements plays in client satisfaction measures, and can help identify reasonable weighting or scoring mechanisms for producing accurate global client satisfaction scores from element satisfaction scores and importance scores.

Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding
The author received no financial support for the research, authorship, and/or publication of this article.