Usefulness Lost: Aggregating Information with Differing Levels of Verifiability

In this paper, we study information asymmetries about verifiability between a principal and an agent. Our main result is that an information asymmetry about verifiability not only reduces the usefulness of a given performance measure for stewardship purposes but can completely destroy that performance measure's usefulness.


Introduction
Many of the Financial Accounting Standards Board's (FASB's) recent standards and ongoing projects emphasize fair value measurements in an attempt to increase the relevance of financial reports. These same standards and projects have reduced, or have the potential to reduce, the reliability of accounting.[1] Critics of fair value and other subjective measurements often focus on the limited verifiability of such measurements and the concern that limited verifiability facilitates manipulation.[2] An expanded use of fair value measurements and the incorporation of those measurements in determining managerial rewards seem to have played a significant role in Enron's spectacular fraud and collapse.[3] One might have expected the fiction of Enron's fair value measurements to result in the FASB and the Securities and Exchange Commission (which pre-cleared Enron's expanded use of fair value measurements) reversing course, but such a reversal does not appear to have occurred or to be forthcoming anytime soon.
Limited verifiability produces numbers that are less reliable and, holding all other attributes constant, less useful. From a measurement perspective, aggregating reliable line items with unreliable line items to arrive at an overall measure of performance such as net income pollutes that measure. Arguably, an even worse problem in financial reporting today is that financial statement users cannot tell how verifiable a given line item is. Highly verifiable amounts from realized transactions are often included in the same financial statement line items as fair value remeasurements that are, at least sometimes, much less verifiable. The preparer knows which types of measurements are included in any given line item in any given reporting period, but the user does not.

[1] In CON 2, the FASB defines reliability as consisting of verifiability, representational faithfulness, and neutrality.
[2] See, for example, Carmichael (2004) and Watts (2003). Watts writes: "[m]anagers' limited tenures and limited liability give them incentives to introduce bias and noise into value estimates. The lack of verifiability of many valuation estimates gives managers the ability to do so."
In this paper, we use a principal-agent model to study information asymmetries about verifiability between an agent (manager) and a principal (owner). Our focus is not on measurement per se, but instead on information content and, in particular, on the stewardship role of accounting (Ijiri, 1975), which we treat as one of the contracting roles of accounting (Sunder, 1996; Watts and Zimmerman, 1986). Our main result is that an information asymmetry about verifiability not only reduces the usefulness of a given performance measure (e.g., a financial statement line item) for contracting but can completely destroy that performance measure's usefulness.
In contrast, if the principal and the agent have symmetric information about the level of verifiability of a performance measure, a less verifiable (but still informative) performance measure always has a positive value.
Our contribution lies in distinguishing between limited verifiability and information asymmetries about verifiability. Under existing financial reporting, limited verifiability and information asymmetries about verifiability seem to go hand in hand. If we instead disaggregated measures based on their level of verifiability, limited verifiability would, by design, be less associated with information asymmetries about verifiability. Glover, Ijiri, Levine, and Liang (2005) suggest one such verifiability-based disaggregation, focusing on the distinction between facts and forecasts. Barker (2004) proposes disaggregating initial measurements from remeasurements because remeasurements have less predictive value, but such a disaggregation can also be viewed as a verifiability-based disaggregation. In their final chapter, Paton and Littleton (1940) discuss the possibility of reporting historical cost measurements in a first column and market values in a second supplementary column.
This paper can be viewed as providing a stewardship/information economics foundation for such verifiability-based separations within financial statements. While it seems natural that the separation would better facilitate multiple uses of accounting information (e.g., valuation and stewardship), we find it interesting that a demand for such a separation arises in our model for stewardship purposes alone.
The FASB's Exposure Draft on Fair Value Measurements (FASB, 2004) suggests yet another approach, relegating information about verifiability to the footnotes.
Arguably, the most important information about verifiability is whether the measurement comes from a realized transaction or an unrealized remeasurement. The current working draft of the standard calls for a disaggregation of unrealized and realized amounts for only the worst level (Level 5) fair value measurements, which are fair value measurements arrived at from models that use entity-specific rather than market inputs.
We disagree with what seems to be the implicit view of the FASB that the distinction between realized and unrealized amounts is important only as a second-order consideration and only in the most extreme case (Level 5 measurements).
We now elaborate on our model and results. The model starts with underlying transactions that stochastically depend on the agent's effort. The transactions are then converted into accounting numbers via a mechanistic auditing/verification process. The agent's reward depends on only the final accounting measurements. Limited verifiability is modeled as a garbling of accounting measurements. The garbling reduces informativeness (in the sense of Holmstrom, 1979), which increases the expected cost of compensation but does not affect the form of the optimal contract.
Our equating of verifiability with a garbling of measurements differs from the way the term verifiability is typically used in contract theory, where verifiability means that a report can be contracted upon because it can be verified by a court of law.
Instead, we attempt to stay close to the FASB's definition of verifiability, which is "the dispersion of independent measurements of some particular phenomenon" (FASB, CON 2). The FASB's definition of verifiability corresponds to Ijiri and Jaedicke's (1966) definition of objectivity.
To our basic structure, we then add an information asymmetry about the verifiability of the accounting system. Specifically, the agent knows more about the verifiability of the second of two performance measures than does the principal. The contract now takes on a different form. If the information asymmetry about the verifiability of the second performance measure is large enough, that performance measure is rendered useless and is excluded from the contract.
The omitted performance measure satisfies Holmstrom's (1979) informativeness condition, which would lead to the measure being included in the optimal contract in the standard principal-agent model. Our model differs from the standard one in assuming (i) there is an information asymmetry about verifiability and (ii) the contract cannot depend on an (unaudited) report by the agent on the level of verifiability. (More on this second assumption later.) The intuition for the optimality of ignoring the second performance measure is that the agent's informational advantage requires the principal to design a contract that is robust to a variety of possible levels of verifiability. When the information asymmetry is large, the least expensive means of satisfying the robustness requirement is to exclude from performance evaluation the measure for which there is an information asymmetry about verifiability.
We then add the possibility of performance measure manipulation by the agent.
As it turns out, under symmetric information about verifiability, the addition of manipulation actually benefits the principal. In our simple one-period model, the principal is better able to anticipate the agent's (equilibrium) attempt to overstate a performance measure than she can the two-sided noise introduced by limited verifiability alone.[4] Manipulation is facilitated by limited verifiability but results in overstatements only, since the agent's compensation is increasing in the reported performance measures.
Here we have a case of "the devil that you know is better than the devil that you don't." If instead there is asymmetric information about verifiability, the principal can prefer that the agent not have the manipulation option. The reward needed to motivate the agent is determined by the lowest level of manipulation. If the manager can manipulate more extensively, the contract can become extremely costly. When the information asymmetry about manipulability (induced by the underlying information asymmetry about verifiability) is large enough, it can be optimal to exclude from the contract the performance measure for which there is an information asymmetry about verifiability.

[4] Importantly, our comparison is between manipulation and random noise. In Demski (1998), earnings management is better than no noise from the principal's perspective when the agent's ability to manipulate (smooth) is dependent on his productive effort. See also Arya, Glover, and Sunder (1998), Christensen, Demski, and Frimor (2002), and Sunder (1996) for related models and discussions of earnings management.
The remainder of the paper is organized as follows. Section 2 presents the basic model and its solution as a benchmark. Section 3 contrasts verifiability with information asymmetries about verifiability. Section 4 extends the model to include manipulability.
Section 5 incorporates verifiability-dependent effort. Section 6 concludes the paper.

The Basic Model
A risk-neutral principal contracts with a risk-neutral agent subject to moral hazard on the agent's action supply and limited liability on the payment to the agent. The principal makes a payment to the agent, not the other way around. This limited liability constraint gives rise to the possibility of the agent earning rents, which the principal tries to minimize. (See Laffont and Martimort, 2002, p. 155.) Agent risk neutrality makes the analysis particularly tractable, but the qualitative results of the paper continue to hold when the agent is risk averse (see Example 1B).
The principal would like to motivate the agent to supply an unobservable and personally costly action a = a H rather than a L < a H . The implicit assumption is that the parameters are such that the high action generates a large enough marginal benefit to the principal that she always finds it optimal to motivate the high action instead of the low action. (See Section 5 for additional details.) The agent's action gives rise to two transactions t 1 and t 2 , each of which can take on realizations of t L or t H . Transactions t 1 and t 2 are then converted into accounting reports r 1 and r 2 through a combination of reporting and verification. We assume throughout the paper that the first accounting report, r 1 , is a perfectly verifiable performance measure: when t 1 is t j , it will be reported as r j , j ∈ {L, H}. The second accounting report, r 2 , is less verifiable: with probability θ, when t 2 is t j , it will be reported as r k , k ≠ j; j, k ∈ {L, H}, θ ∈ [0, 1/2]. θ = 0 means the second transaction, t 2 , is perfectly verifiable: there is no accounting-induced noise in its measurement. θ = 1/2 means the report is perfectly unverifiable. Intermediate values of θ allow for differing levels of verifiability.
Throughout the paper, the agent knows the verifiability of the reporting system, θ. In some cases, the principal will also know θ. In others, the principal will not know θ, i.e., there will be an information asymmetry about verifiability. In these cases, the principal knows only that θ is uniformly distributed over the interval [0, θ max ], θ max ≤ 1/2.[5] If the agent chooses a H , the probabilities over the set of possible (t 1 ,t 2 ) transactions are {p LL , p LH , p HL , p HH }. If the agent chooses a L , the corresponding probabilities are {q LL , q LH , q HL , q HH }.
The contract is allowed to depend on only the final reports r 1 and r 2 . The contract is simply a report-contingent payment s(r 1 ,r 2 ). Importantly, we have deliberately ruled out a report on θ. Ruling out a report on θ allows us to study the impact of information asymmetries about verifiability that persist (are not resolved by unaudited direct communication). We are interested in studying the implications of such a restriction because it seems to be characteristic of the current financial reporting environment. As we discussed in the introduction, we think current financial reporting would be well served by introducing ways of (partially) resolving the information asymmetry about verifiability.[6]

[5] The uniform distribution is important only in that, if the distribution is too concentrated around the mean, the information asymmetry can never be large enough to drive the second report out of the contract.
[6] Ruling out a report on θ can also be viewed as a way of studying robust contracts vs. the standard approach of (highly) fine-tuning the contract to the environment (e.g., the exact probability distribution).
The agent (who knows θ) has the induced probability distribution over (r 1 ,r 2 ) realizations presented in Table 1.
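Table 1 is not reproduced here, but the induced distribution it describes follows directly from the garbling process above. The sketch below (with purely hypothetical transaction probabilities) maps a distribution over (t 1, t 2) into the distribution over (r 1, r 2):

```python
def induced(dist, theta):
    """Map a distribution over (t1, t2) into one over (r1, r2).
    r1 reports t1 perfectly; r2 misreports t2 with probability theta."""
    r = {}
    for (t1, t2), prob in dist.items():
        r[(t1, t2)] = r.get((t1, t2), 0.0) + prob * (1 - theta)  # r2 correct
        flip = (t1, 'H' if t2 == 'L' else 'L')
        r[flip] = r.get(flip, 0.0) + prob * theta                # r2 garbled
    return r

# Hypothetical transaction probabilities under the high action a_H
p = {('L', 'L'): 0.10, ('L', 'H'): 0.20, ('H', 'L'): 0.20, ('H', 'H'): 0.50}
print(induced(p, 0.1))
```

At θ = 0 the report distribution coincides with the transaction distribution; at θ = 1/2 the second report carries no information (the (H, H) and (H, L) cells become equal).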
The ordering of the payments is determined by the standard likelihood ratios. For tractability, we assume the likelihood ratios are ordered as follows.
The agent's payoff is the payment he receives less the cost of effort, also denoted by a H or a L . The principal's objective is to minimize the expected payment to the agent E[s(r 1 ,r 2 )] (Equation 1) subject to the constraints that choosing a H is incentive compatible for the agent (Equation 2) and the payments s(r 1 ,r 2 ) be nonnegative (Equation 3). The usual individual rationality constraint is omitted. Alternatively, the individual rationality constraint is dominated by the other constraints if we assume a L and the agent's reservation utility are both zero.
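As a sketch, the program just described can be written as follows in our notation, with (1)-(3) labeling the objective, the incentive compatibility constraint, and limited liability (a reconstruction; the text's equations are referenced but not reproduced verbatim):

```latex
\begin{align}
\min_{s(r_1,r_2)} \quad & \sum_{r_1,r_2} \Pr(r_1,r_2 \mid a_H)\, s(r_1,r_2) && (1)\\
\text{s.t.} \quad & \sum_{r_1,r_2} \Pr(r_1,r_2 \mid a_H)\, s(r_1,r_2) - a_H
  \;\ge\; \sum_{r_1,r_2} \Pr(r_1,r_2 \mid a_L)\, s(r_1,r_2) - a_L && (2)\\
& s(r_1,r_2) \ge 0 \quad \text{for all } (r_1,r_2) && (3)
\end{align}
```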
As a benchmark, we formulate and solve the principal's program assuming the problem is a standard moral hazard problem in which both performance measures are objectively determined (θ = 0).

[6, cont.] Standard contracts typically perform poorly if the environment is even slightly different from that assumed and (thus) do not bear a close resemblance to real-world contracts. It seems inevitable, even desirable, that robust contracts would exclude (informative) performance measures that are not well understood by both parties to the contract. The study of robust mechanisms is known as the (Robert) Wilson Program in game theory. See Bergemann and Morris (2005).

The following result is well known (e.g., Laffont and Martimort, 2002, p. 164).
Observation. If both performance measures are perfectly verifiable (θ = 0), the solution is to make a single bonus payment of (a H -a L )/(p HH -q HH ) for the report (r H ,r H ) with the highest associated likelihood ratio p HH /q HH . All other payments are zero.
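With hypothetical numbers, the Observation is easy to verify: a single bonus on (r H, r H) makes the incentive constraint bind exactly.

```python
# Benchmark (theta = 0) bonus from the Observation; all numbers hypothetical.
a_H_cost, a_L_cost = 1.0, 0.0        # effort costs
p_HH, q_HH = 0.50, 0.10              # Pr(t_H, t_H) under a_H and a_L
bonus = (a_H_cost - a_L_cost) / (p_HH - q_HH)

# Incentive compatibility binds: expected pay net of effort cost is equal
# across the two actions when only (r_H, r_H) is rewarded.
assert abs((p_HH * bonus - a_H_cost) - (q_HH * bonus - a_L_cost)) < 1e-9
print(bonus, p_HH * bonus)  # bonus = 2.5; principal's expected cost = 1.25
```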

Verifiability vs. Information Asymmetries about Verifiability
In the case of limited verifiability (θ > 0), the solution is modified in a straightforward way to incorporate θ. The compensation contract still makes a single payment in the state when both of the reports are high, although the actual amount of compensation changes.
Proposition 1. If the second performance measure is subject to limited verifiability but there is no information asymmetry about θ, then the optimal contract uses both performance measures. The solution is to make a single bonus payment of (a H - a L )/((1 - θ)(p HH - q HH ) + θ(p HL - q HL )) for the report (r H ,r H ).

We now study the case in which there is an information asymmetry about verifiability (θ is not known by the principal). Here, we have in mind that the information asymmetry is created by exogenous accounting standards that allow for the aggregation of more and less verifiable information into a single line item. The preparer will know how verifiable the given line item (r 2 in our model) is, but the financial statement user will not.
See Banker and Datar (1989) for a model in which there is a direct connection between the statistical notions of precision and sensitivity and the optimal weight placed on a performance measure in aggregating it with other performance measures. In our model, the garbling produced by limited verifiability decreases both the second performance measure's precision and its sensitivity. In their model, unlike ours, the principal and the agent share symmetric beliefs about the stochastic properties of possible performance measures.
The principal's objective function (Equation 4) uses the expected level of verifiability (for the uniform case, θ max /2). Since the agent knows the actual level of verifiability when making his effort decision, the principal must find a contract that satisfies the agent's incentive compatibility constraint for every possible level of verifiability. That is, the incentive compatibility constraint (Equation 5) is actually a family of constraints, one for each θ . The form of the optimal contract is characterized in Proposition 2.
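To see numerically how the family of incentive constraints can drive the second measure out of the contract, consider the following sketch. All parameters are hypothetical, and attention is restricted to the single-bonus contracts characterized in the propositions (the full program is a linear program; this comparison only illustrates the trade-off):

```python
# Contract A pays a bonus only on (r_H, r_H); it must satisfy the agent's
# incentive constraint for EVERY theta in [0, theta_max], so the bonus is
# sized at the worst case. Contract B ignores r_2 and pays whenever r_1 = r_H.
p = {'LL': 0.10, 'LH': 0.20, 'HL': 0.20, 'HH': 0.50}   # under a_H (hypothetical)
q = {'LL': 0.40, 'LH': 0.25, 'HL': 0.25, 'HH': 0.10}   # under a_L (hypothetical)
effort_cost = 1.0                                       # a_H - a_L

def cost_both(theta_max):
    # Incentive margin of the (r_H, r_H) bonus at a given theta (linear in theta)
    D = lambda th: (1 - th) * (p['HH'] - q['HH']) + th * (p['HL'] - q['HL'])
    bonus = effort_cost / min(D(0.0), D(theta_max))     # robust to worst theta
    tbar = theta_max / 2                                # principal's expectation
    return bonus * ((1 - tbar) * p['HH'] + tbar * p['HL'])

def cost_r1_only():
    margin = (p['HL'] + p['HH']) - (q['HL'] + q['HH'])  # theta-independent
    return effort_cost / margin * (p['HL'] + p['HH'])

for tm in (0.1, 0.3, 0.5):
    print(tm, round(cost_both(tm), 3), round(cost_r1_only(), 3))
```

With these numbers, using both measures is cheaper for small θ max, but once θ max is large enough, dropping the second measure dominates, which is the pattern Proposition 2 formalizes.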

Proposition 2.
If there is an information asymmetry about the verifiability of the second performance measure, then the optimal contract uses only the first performance measure if and only if the information asymmetry (as indexed by θ max ) is sufficiently large.

In the case where the contract "drops" the second measure, the bonus state occurs with much higher probability (the sum of the top two states). This more muted contract seems roughly consistent with common managerial accounting wisdom that bonus targets should be difficult but achievable. Instead of a behavioral explanation for achievable targets, our paper suggests that rewarding agents for extreme outcomes can be hazardous because extreme outcomes are likely to be less well understood than more probable outcomes.

The two cases correspond to the agent's marginal productivity in the performance measures, which determines the size of the required bonus if both performance measures are to be used. If p HH - q HH ≤ p HL - q HL , the size of the bonus payment is based on a θ of 0. If p HH - q HH > p HL - q HL , the size of the bonus payment is based on a θ of θ max . Example 1A illustrates the underlying intuition for the first case.

Manipulability
We now generalize the model to allow for both limited verifiability and manipulation by introducing a parameter µ ∈ {0,1}. µ = 0 represents the model in the previous section. µ = 1 denotes the presence of a manipulation option for the agent, which interacts with the accounting/auditing process. We offer two equivalent descriptions. One, the agent observes the underlying transaction, t 2 , and then prepares a report for a mechanistic auditor to verify. The auditor discovers any manipulation with probability (1-θ ). Two, a mechanistic accounting system first produces a preliminary report that correctly identifies t 2 with probability (1-θ ). The agent then selectively corrects any mistake the accounting system makes. Under this second interpretation, the agent is assumed to be able to provide further evidence to make corrections but not to be able to make false corrections. (As in the previous sections of the paper, we assume t 1 is perfectly verifiable.) The only way the principal can motivate the agent to choose the high action is to offer an incentive scheme that is non-decreasing in each report (and strictly increasing in at least one report). As a result, the agent will manipulate the report only when the underlying transaction is t L . Given t L , the final report will be r L with probability (1-θ ) and r H with probability θ. Given t H , the final report will be r H with probability (1-θ +µθ ) and r L with probability (1-µ)θ. The following table presents the induced probabilities under our more general model, taking the agent's equilibrium reporting behavior as given.
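The table of induced probabilities is not reproduced here, but it can be computed from the reporting process just described (a sketch with the same hypothetical numbers as before):

```python
def induced_mu(dist, theta, mu):
    """r1 reports t1 perfectly. Given t_L, r2 is misreported high with
    probability theta (manipulation succeeds when verification fails).
    Given t_H, r2 is reported low with probability (1 - mu) * theta,
    i.e., never when the agent can correct mistakes (mu = 1)."""
    r = {}
    def add(key, pr):
        r[key] = r.get(key, 0.0) + pr
    for (t1, t2), prob in dist.items():
        if t2 == 'L':
            add((t1, 'H'), prob * theta)             # upward misstatement
            add((t1, 'L'), prob * (1 - theta))
        else:
            add((t1, 'H'), prob * (1 - (1 - mu) * theta))
            add((t1, 'L'), prob * (1 - mu) * theta)  # zero when mu = 1
    return r

p = {('L', 'L'): 0.10, ('L', 'H'): 0.20, ('H', 'L'): 0.20, ('H', 'H'): 0.50}
print(induced_mu(p, 0.2, 1))  # with mu = 1, no downward misstatements
```

Setting mu = 0 recovers the pure-garbling distribution of the previous section; mu = 1 shifts probability mass only upward in r 2.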
Suppose the agent has the ability to manipulate (µ = 1). When the verifiability parameter θ is common knowledge, the principal also knows the expected amount of manipulation. When the agent knows more about the verifiability parameter than does the principal, there is an induced information asymmetry about the level of manipulability, since limited verifiability facilitates manipulability. Assuming µ = 1 and that there is an induced information asymmetry about manipulability, the principal's program is the analogue of the earlier program, with the induced probabilities above.

First, consider the case in which manipulation is possible but the probability of successful manipulation is common knowledge (because θ is common knowledge).
Proposition 3. If the second performance measure is subject to manipulation but there is no information asymmetry about the extent of manipulability, the optimal contract uses both performance measures. The solution is to make a single bonus payment of (a H - a L )/(p HH + θ p HL - q HH - θ q HL ) for the report (r H ,r H ).
The intuition for Proposition 3 is the same as for Proposition 1. The known amount of manipulation is costly for the principal (relative to perfect verifiability) but can be incorporated into the contract in a straightforward manner. An important distinction between the optimal contract in Proposition 3 and the optimal contract in Proposition 1 is that the principal can take the agent's equilibrium manipulation into account and rule out the possibility of downward misstatements. Now, consider the case of asymmetric information about verifiability, which induces asymmetric information about manipulability.
Proposition 4. If the second performance measure is subject to manipulation and there is an information asymmetry about the extent of manipulability, then the optimal contract ignores the second performance measure.
A difference between Propositions 2 and 4 is that the two cases in Proposition 2 (corresponding to the agent's marginal productivity in the performance measures) become one case in Proposition 4. The reason is that manipulation eliminates downward misstatements. The following corollaries follow from Propositions 1-4.
Corollary 1 (to Propositions 1 and 3). Limited verifiability alone is worse for the principal than manipulability.
The intuition for Corollary 1 is the principal can better anticipate the agent's attempt to manipulate a performance measure than she can the noise introduced by a verifiability problem alone. This is a comparison of two standard models in which there is no information asymmetry about verifiability.
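Corollary 1 can be checked numerically with the two bonus formulas (hypothetical parameters; the Proposition 1 bonus as we reconstruct it divides the effort-cost difference by the garbled incentive margin, the Proposition 3 bonus by the manipulation-adjusted margin):

```python
# Compare the principal's expected cost under two-sided noise (Proposition 1)
# and one-sided manipulation (Proposition 3). All parameters hypothetical.
p_HH, p_HL, q_HH, q_HL = 0.50, 0.20, 0.10, 0.25
effort = 1.0   # a_H - a_L

def cost_noise(theta):        # garbling: r2 flips with probability theta
    margin = (1 - theta) * (p_HH - q_HH) + theta * (p_HL - q_HL)
    prob_HH = (1 - theta) * p_HH + theta * p_HL
    return effort / margin * prob_HH

def cost_manip(theta):        # manipulation: only upward misstatements
    margin = (p_HH - q_HH) + theta * (p_HL - q_HL)
    prob_HH = p_HH + theta * p_HL
    return effort / margin * prob_HH

for th in (0.1, 0.2, 0.4):
    print(th, round(cost_noise(th), 3), round(cost_manip(th), 3))
```

For every θ > 0 in this example, the manipulation case is cheaper for the principal than the pure-noise case, consistent with Corollary 1; at θ = 0 the two coincide with the benchmark.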
Corollary 2 (to Propositions 2 and 4). (i) If p HH - q HH < p HL - q HL , the principal prefers an information asymmetry about verifiability to an information asymmetry about manipulability, and the latter drives a performance measure out of the optimal contract sooner than the former.
(ii) If p HH -q HH > p HL -q HL , the principal prefers an information asymmetry about manipulability to an information asymmetry about verifiability, and the latter drives a performance measure out of the optimal contract sooner than the former.
The intuition for Corollary 2 is the principal's informational disadvantage regarding the accounting treatment makes it more difficult to customize the contract to anticipate the agent's manipulation, since the principal does not know how much manipulation is possible. If p HH -q HH is small (Case (ii)), the problem is particularly acute since using both performance measures means the s(r H ,r H ) payment must be set equal to (a H -a L )/(p HH -q HH ), which is large. Arguably, Case (ii) is more "typical" than Case (i) in that extreme outcomes are associated with small probabilities.
In our model, opportunistic manipulation is costly to the principal (relative to limited verifiability alone) only when the agent knows more about the underlying accounting system, as shown in Corollaries 1 and 2. Examples 2 and 3 are illustrative.

Verifiability-Dependent Effort
Throughout the earlier sections of the paper, we assumed the marginal productivity of the agent's effort was sufficiently large that hiring the agent and motivating him to choose high effort was always optimal. In this section of the paper, we relax this assumption. Given that the principal maximizes the expected value of the firm less the payment to the agent, we derive conditions under which our earlier main result, Proposition 2, holds. Throughout this section, we assume there is an information asymmetry about verifiability.
Assume t 1 and t 2 are each cash inflows. The principal now maximizes E[t 1 + t 2 - s(r 1 ,r 2 )]. To keep the notation to a minimum, assume a L = t L = 0. a L = 0 also ensures that hiring the agent and motivating him to shirk is always preferred to not hiring the agent at all. To rule out the case that always motivating a L is optimal, assume condition (10) holds. Denote the value of t H that satisfies (10) with equality by t*.
If p HH - q HH < p HL - q HL , (10) by itself ensures that high effort is always optimal, whether one or both performance measures are used. If p HH - q HH > p HL - q HL , (10) leaves room for it to be optimal to use both performance measures and to motivate the agent to choose a H when θ is below some cutoff θ C and a L when θ is between θ C and θ max . That is, the agent is motivated to work if and only if the actual verifiability of the second performance measure is high. The principal's objective function under this new contract is given by (11). The optimal cutoff θ C is determined by setting the derivative of (11) with respect to θ C equal to zero. (The second derivative is negative, guaranteeing a maximum.) To ensure that motivating high effort is always optimal, we need to find a value of t H under which θ C = θ max . Denote this cutoff level by t**. A closed-form expression for t** is given in the proof of the following proposition (a restatement of Proposition 2).

Proposition 5.
If there is an information asymmetry about the verifiability of the second performance measure, then the optimal contract uses only the objective performance measure if p HH - q HH ≤ p HL - q HL , t H ≥ t*, and θ max /2 exceeds the bound given in Proposition 2.

Concluding Remarks
In this paper, we study the impact of information asymmetries about verifiability and find they can lead to performance measures becoming not only less valuable but valueless for stewardship purposes. An implication seems to be that, if the stewardship role of accounting is to be improved, verifiability-based disaggregations within the financial statements are likely to help, as they will reduce information asymmetries about verifiability. Of course, limited verifiability itself raises many concerns (some of which we discussed in the introduction). We view our paper as complementing existing concerns about limited verifiability, which have become central in accounting as the FASB introduces more and more difficult-to-verify fair value measurements.
To be fair (no pun intended), our model is of contracting, while the FASB's focus is on capital market decisions. We suspect similar forces can be developed in a market model (for example, if information asymmetries about verifiability distort the firm's investment decisions) but also view the contracting and stewardship roles of accounting as important and often under-emphasized.
Proof of Proposition 1. The principal's problem is a linear program. Let λ be the multiplier on the incentive compatibility constraint and the choice variable in the dual program. Consider the following proposed solution to the primal and the dual. Under each proposed solution, the objective functions of the dual and the primal are equal. The condition given in the statement of the proposition determines which of these solutions is feasible.
Proof of Proposition 5. For the p HH -q HH < p HL -q HL case, the new possibly optimal contract has the agent working when θ is above a cutoff level θ C and shirking for all smaller θs. t H > t * , the solution to equation (10) in the text, ensures θ C = 0, i.e., the agent always works. So, the additional condition given in Proposition 2 (and restated in Proposition 5) is sufficient to ensure it is optimal to ignore the second performance measure.
For the p HH - q HH > p HL - q HL case, the new candidate contract has the agent working when θ is below a cutoff level θ C (verifiability is high) and shirking for all larger θ (verifiability is low). Differentiating the principal's objective function, given as equation (11) in the text, with respect to θ C and setting it equal to zero yields the first-order condition to be solved. Evaluating that first-order condition at θ C = θ max yields a cutoff value of t H , denoted by t**, above which setting θ C = θ max is always optimal.