Updating the peak-to-valley (PV) irregularity specification for the modern world

Communicating fabrication tolerances is a vital part of the optical manufacturing process. The most common surface form tolerance is peak-to-valley (PV) irregularity, and its specification and evaluation have largely remained unchanged for decades. Fabrication, testing, and computation capabilities, however, have evolved considerably over that time, exposing PV’s extreme sensitivity to outlier data points. When everyone was using the same measurement and analysis technique (visual inspection of test plate interferograms), this sensitivity was of secondary concern to ease of computation. Today, however, numerous measurement techniques are viable for evaluating surface form, computation power is cheap, and different measurements of the same surface can easily result in wildly different PV results. This creates confusion as to whether a surface conforms to tolerance or not. To address this issue, we propose standardized methods for evaluating a PV tolerance value that are resistant to outliers. We first provide an example of the problem on an actual surface measurement and demonstrate how trimmed PV estimators can mitigate it. We review two such estimators, robust peak-to-valley (PVr) and clipped peak-to-valley (PV%). We then review the conceptual trade-offs involved with choosing an appropriate estimator and demonstrate estimator behavior on a variety of simulated surface profiles. Finally, we explore the challenges in adopting a more reliable PV metric and outline the plans for updating the ISO 10110-5 surface form standard to achieve this.


INTRODUCTION
The manufacture of optical surfaces has many facets, but is often divided into three general disciplines: design, fabrication, and testing. Communication between these disciplines is essential: the designer must precisely characterize the desired surface, the fabricator devises a process capable of achieving it, and the tester verifies the fabricated surface sufficiently matches the designed surface. Furthermore, the designer or customer of the optic may want to check the optic supplier's test data, employing their own incoming quality check to ensure the surface conforms to the design.
Precise specification of an optical component is a vital part of this communication, and standards for tolerances and their indication on a drawing facilitate this. There are numerous specifications associated with an optical component, but we will focus on surface form tolerances, in particular those based on the peak-to-valley metric (abbreviated PV). PV is simply the range of the distribution of measured points inside the evaluation area (maximum value minus minimum value). PV is commonly applied to surface and wavefront measurements with thousands of data points to reduce them into a single quality value, which is then easily evaluated for conformance to specification. The international standards covering surface form (ISO 10110-5 and ISO 14999-4) provide for tolerance indications and computation of PV power deviation, irregularity, and total deviation [1,2]. These metrics typically correlate directly to blur in an optical system assembled from such components, hence the need for tolerances to ensure the optical blur does not rise to an unacceptable level.
The simplicity of PV makes it an ideal metric for visual interference fringe analysis. The two most extreme points of an interferogram are generally straightforward to identify, and minimal computation is necessary (a single subtraction). Contrast this to a metric like RMS (root-mean-square), where every measurement point contributes (rather than the most extreme two), and numerous multiplications and additions must be performed, followed by a square root (not a calculation one would want to perform by hand).
The same simplicity, however, makes it very sensitive to measurement noise and extreme data points (outliers). In principle, a single problematic data point that would barely move an RMS metric can change a PV estimate by an order of magnitude. This is a terrible property for a metric to have, since a corresponding degradation in optical system performance will not be observed (i.e., the optical blur resulting from such a component will not worsen by the amount the PV did). Note that power deviation, Zernike terms, and invariant irregularity (rotational or translational) have significant aggregation/fitting of data prior to the actual PV calculation, which significantly reduces the effect of outlier data points. Thus, the issue with PV as a metric is primarily confined to total deviation and irregularity.
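To make this concrete, here is a minimal numpy sketch (using synthetic data, not the measurements discussed in this paper) showing how a single bad pixel moves the PV by a large factor while leaving the RMS essentially unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
surface = rng.normal(0.0, 9.0, 1_000_000)   # heights in nm; ~9 nm rms of benign error

pv = lambda h: np.ptp(h)                    # peak-to-valley: max - min
rms = lambda h: np.sqrt(np.mean(h**2))      # root-mean-square about zero

print(f"clean:       PV = {pv(surface):6.1f} nm   RMS = {rms(surface):.2f} nm")
surface[0] = 500.0                          # a single outlier pixel (dust, spike, dropout)
print(f"one outlier: PV = {pv(surface):6.1f} nm   RMS = {rms(surface):.2f} nm")
```

The PV jumps several-fold from the one bad pixel, while the RMS changes only in the third significant digit.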
Furthermore, there are a few trends in optical manufacturing and testing that have increased our sensitivity to the PV calculation's shortcomings. The first is that test plates / low coherence Newton interferometers and fringe counting analyses have largely been supplanted by phase-shifting laser Fizeau interferometers. Other metrology instruments have also seen adoption (e.g., other interferometer types, 3-D profilometry, deflectometry, etc.). These newer techniques tend to have much higher data density (millions of data points in some cases) and be more prone to outlier data points than the legacy test plate data that was prevalent when the specification was first devised. Secondly, fabrication technology has evolved beyond just full aperture techniques. Computer-controlled small tool generating, figuring, and polishing techniques that contact only a subaperture (or even nearly a single point) of a surface at a time are increasingly prevalent. These techniques often enable unprecedented surface precision, but the underlying statistics of the surface height are often quite different than those of "conventional" full aperture techniques. Thus, the correlation between the raw PV metric and optical performance can be quite different than it was when the PV irregularity specification was first devised.

Example of PV evaluation problems
Let us consider a concrete example of peak-valley analysis performed visually versus on a phase-shifting laser interferometer. The surface under test is spherical with a 45 mm circular evaluation aperture. Figure 1 shows an interference pattern from the test surface with no power deviation and 4 tilt fringes added to facilitate visual analysis. Annotations added to the figure facilitate the visual fringe analysis process. It is easy for the analyst to ignore cosmetic defects and fiducials on the surface that should not be considered surface irregularity (e.g., regions 'A' and 'B' in the example figure). In this example, visual fringe evaluation results in approximately 53 nm of PV irregularity (with an estimate uncertainty of something like a twentieth of a fringe, or ±8 nm).

Figure 1. Example surface analyzed visually via fringe pattern. The tilt carrier fringe spacing is 's', and the maximum deviation observed along a fringe is 'δh'. Measurement artifacts include a cosmetic defect 'A' and a cross-shaped fiducial 'B'.

Now let us consider the same surface measured with a phase-shifting laser interferometer with a 1-megapixel sensor (hereafter referred to as PSI), shown in Figure 2. The cross-shaped fiducial dominates the PV (shown in figure 2a), resulting in a PV of 573 nm, so clearly it must be masked out. Yet even with the fiducials masked out, the PSI measurement has a PV of 115 nm (shown in figure 2b), more than double the 53 nm observed in the visual analysis. This discrepancy between the visual and PSI PV estimates is substantial (a metric that varies by 2x is not a good one). The RMS statistics provide a clue: masking the fiducials reduces the RMS only from 9.4 nm to 9.0 nm (a ~5% change, compared to the ~5x change in PV). Outliers unsurprisingly have an outsize effect on PV as compared to RMS. To analyze this behavior more closely, let us examine the histogram of the data in figure 3a. The distribution clearly has long "tails", but it is difficult to discern the details on a linear scale. Figure 3b displays the histogram frequency on a logarithmic scale to highlight the distribution behavior at the tails.
The tails are very sparse: for example, removing just 5 data points (3 from the 'low' tail and 2 from the 'high', 0.0008% or 8 ppm of the total) changes the valley by 20 nm and the peak by 5 nm (reducing the PV from 115 to 90 nm). Removing a very modest 0.01% of the points reduces the PV to 69 nm. If we further increase the points removed to 0.2% (keeping 99.8% of the points, roughly equivalent to ±3 standard deviations in a normal distribution), the PV is reduced to 50 nm. Increasing the excluded points to 1% reduces the PV further to just 43 nm. These 'trimmed PV' values are summarized in figure 4 (which shows the histogram of figure 3a with lines superimposed to indicate the included points). Note that the change in the RMS statistic is negligible in these cases due to the small number of points involved (e.g., from 8.97 to about 8.93 nm, a ~0.5% change).
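For reference, the symmetric version of this trimming takes only a few lines of numpy. This is a minimal sketch with an illustrative function name; the PV% discussion later in this paper describes the standardized computation, which additionally optimizes how the discards are split between the two tails:

```python
import numpy as np

def trimmed_pv(heights, keep=0.998):
    """PV after discarding the most extreme (1 - keep) fraction of points,
    split evenly between the low and high tails (symmetric clip)."""
    tail = (1.0 - keep) / 2.0
    lo, hi = np.quantile(heights, [tail, 1.0 - tail])
    return hi - lo

# keep=1.0 recovers the raw PV; keep=0.9999, 0.998, and 0.99 correspond to
# the 0.01%, 0.2%, and 1% removal cases discussed above.
```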
Note that the 99.8% inclusion obtains approximately the same PV as the visual analysis, at least for this example. Furthermore, if we hadn't first masked the fiducials, we would still get approximately the same answer: the PV of 99.8% of the points in the entire evaluation aperture (including fiducials) is 53 nm in this example, compared to 50 nm when the fiducials are excluded prior to the histogram trimming (a difference comparable to the ~5% change in the RMS statistic). We can thus surmise that the visual analysis has some implicit filtering of outlier points associated with it, as compared to the PSI test.

USING TRIMMED PV ESTIMATORS
The above example illustrates that applying a PV metric directly to a PSI measurement of a surface does not obtain an equivalent result to a visual analysis of the same surface. And while not shown here, the problem gets worse as measurement lateral resolution is increased. Specifically, the "raw" (unfiltered/untrimmed) PV irregularity metric (maximum point minus minimum point, which we will abbreviate PVmax-min) has several undesirable characteristics:
• it strongly depends on the number of data points / lateral resolution of the measuring instrument,
• it is very sensitive to post-processing analyses such as masking and filtering,
• it is not weighted by the size of the defect (a 1-pixel peak has equal significance to a peak that runs around the entire edge of the part), and
• it biases especially high in the presence of noise/measurement artifacts.
As a result, somewhat arbitrary processing operations are often performed on measurement data to make the PV correspond to what it "should" be. Such processing commonly includes masking, filtering, and spike removal. It can even include deliberately measuring a surface on a low-resolution instrument (which practically just means filtering in hardware instead of software). Unfortunately, these issues make the estimation of PV very sensitive to the measuring instrument, the conditions of test, and especially the analysis options chosen. This sensitivity makes it challenging for suppliers and customers of an optical component to compare PV quality results (i.e., it results in disagreement as to whether an optic meets the specification).
To mitigate these issues, it is recommended to perform systematic, reproducible outlier trimming when estimating PV irregularity or total wavefront deviation rather than a "raw" (max minus min) PV computation. The designer/drawing creator could add a NOTE describing allowable processing steps prior to PV evaluation (e.g., allowable filtering, masking limitations, etc.). This solution, however, makes more work for both the creator and reader of the optical drawing. It also doesn't help with legacy drawings, which will likely lack such a note. What is really needed is a standardized method of data trimming that mitigates the undesirable PV characteristics while still providing a good estimate of the expected optical performance of the test surface.

Robust peak-to-valley deviation (PVr)
Chris Evans recognized the problems with PV over a decade ago and proposed a new parameter, PVr ("robust" PV) [3]. The PVr calculation is relatively simple: perform a 36-term Zernike fit to the measurement, and add the PV of that fit to 3 times the RMS of the fit residual, which can be expressed as:

PVr = PV(Z36(S)) + 3 · RMS(S − Z36(S)),  (1)

where S is the measurement of the surface (typically a matrix due to discrete sampling in the x and y dimensions), and the Z36 function performs a 36-term Zernike polynomial fit to the measurement. We can abbreviate it even further as:

PVr = PVZ36 + 3 · RMSZ36-,  (2)

with the subscripts Z36 and Z36- indicating the 36-term Zernike fit and the residual of the surface, respectively.
The PVr calculations above (equations 1 and 2), however, exhibit some "edge cases" that merit additional treatment. The first is when the calculated PVr exceeds PVmax-min. In this case, PVr should simply be set to PVmax-min, i.e.:

if {PVr > PVmax-min} then {PVr = PVmax-min}.  (3)

Another edge case is when the Zernike fit to the measured surface has a very low magnitude. In this case, three times the residual RMS tends to underestimate the "expected" PV by too much, and a larger RMS multiplier is more appropriate, i.e.:

if {PVr < 6 · RMSZ36-} then {PVr = 6 · RMSZ36-}.  (4)
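A sketch of this logic, including both edge cases, is shown below. It assumes the caller supplies the polynomial design matrix (the `basis` argument), since 36-term Zernike generation is instrument- and software-specific; the stand-in basis in the demo is illustrative only and is not a true Zernike set:

```python
import numpy as np

def pvr(heights, basis):
    """PVr per equations (1)-(4): PV of a polynomial fit plus 3x the residual RMS.

    heights : 1-D array of valid surface heights (nm)
    basis   : (n_points, n_terms) design matrix; for the standard PVr this
              should be a 36-term Zernike basis evaluated at the same points
    """
    coeffs, *_ = np.linalg.lstsq(basis, heights, rcond=None)
    fit = basis @ coeffs
    rms_resid = np.sqrt(np.mean((heights - fit) ** 2))
    val = np.ptp(fit) + 3.0 * rms_resid        # equations (1)/(2)
    val = min(val, np.ptp(heights))            # equation (3): never exceed PVmax-min
    val = max(val, 6.0 * rms_resid)            # equation (4): floor for near-null fits
    return val

# Demo with a stand-in low-order basis (constant, tilts, astigmatism, defocus):
n = 256
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
mask = x**2 + y**2 <= 1.0
surf = 40.0 * (2 * (x**2 + y**2) - 1) + np.random.default_rng(0).normal(0, 3.0, (n, n))
B = np.column_stack([t[mask] for t in (np.ones_like(x), x, y, x * y,
                                       x**2 - y**2, 2 * (x**2 + y**2) - 1)])
print(f"PVmax-min = {np.ptp(surf[mask]):.1f} nm, PVr = {pvr(surf[mask], B):.1f} nm")
```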
Finally, if the measurement aperture deviates significantly from a disc, the 36-term Zernike polynomial fit may not capture the low-order form error appropriately. In such cases, a different polynomial order or type would need to be used. See the Evans reference [3] for additional examples and a more thorough treatment of PV, PVr, expected ratios of PV to RMS metrics, and other practical issues.
The PVr of the measurement example in section 1.1, figure 4 is 55 nm (PVZ36 = 43 nm, 3 RMSZ36- = 12 nm), in very favorable agreement with the visual PV of 53 ±8 nm. Note that the PVr calculation implicitly filters/trims outlier points because the full PV of the Zernike residual is not used, and it largely exhibits the characteristics we'd like in a PV estimator. Since PVr addresses the key issues of PV for optical surface form specifications, we should just use it and be done, right? In fact, PVr has been standardized for optical drawings since 2015 (ISO 10110-5:2015), but unfortunately it has not achieved widespread adoption. While it is difficult to know the exact reasons, likely the most significant one is that the indication is "opt in" and not backward compatible, meaning you cannot use PVr to evaluate PV unless the drawing specifically has a PVr callout. To achieve better adoption, we need to allow it to be applied to existing drawings without special indication (i.e., to be "opt out": you can use PVr in place of PV for an irregularity specification unless the drawing specifically disallows it). But in discussions with the standards committee (ISO/TC 172/SC 1/WG1+2), it became clear that this strategy would not be viable without a universal solution, and PVr is not generally effective for non-round apertures.

Clipped peak-to-valley deviation (PV%)
An alternative to PVr for trimming outliers is directly clipping the data histogram. This was demonstrated in the example given in section 1.1 (and shown in figure 4). Basically, we just discard the most extreme points in the measured data prior to evaluating the PV. This type of data clipping is sometimes referred to as "alpha trimming". One parameter we need to decide on, however, is the amount of clipping to allow (or the percentage of points to keep, thus the "%" indication in "PV%"). Table 1 summarizes the PV% results for different percentages using the example shown in figure 4. In this example, keeping 99.8% of the points provides the best PV% agreement with the visual fringe analysis. This fraction of points is also roughly equivalent to ±3 standard deviations in a normal distribution (~99.73%), so 99.8% seems a sensible default. It also obtains excellent agreement with PVr. This is of course just an anecdotal example, but the choice turns out to behave well on a variety of other examples; we'll demonstrate this in section 3.1 on some simulated measurements. If for some reason a different percentage were desired, it is easy enough to indicate that % value in a drawing note.
The calculations for trimming points are simple enough: after sorting the points by value, drop the most extreme N points (where N is the fraction of points to discard times the total number of points in the measurement). For a symmetric distribution, an equal number of points would be clipped from both ends (e.g., 0.1% from each "tail" for PV99.8%). For asymmetric distributions, however, selecting the optimal points to discard is a little more involved. The goal is to select the points to drop that will minimize the value of the clipped PV. This is readily achieved by iterating through the number of excluded points until the minimal PV is found. Put another way, we can calculate the PV when all the points are clipped from one tail, the other tail, and every case in between, selecting the smallest value (see the sketch below). For most cases, this extra optimization does not change the result much. For example, performing the "tail clipping" optimization process on the measurement in section 1.1 changes the PV% value by less than 0.1 nm (from 50.26 nm to 50.18 nm).
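A minimal sketch of this optimized clipping follows (function name is ours, for illustration). After sorting, every allowed split of the discards between the two tails corresponds to a window of consecutive sorted points, so minimizing the window range examines all of the cases described above in one vectorized pass:

```python
import numpy as np

def pv_percent(heights, keep=0.998):
    """Clipped PV (PV%): discard the most extreme (1 - keep) fraction of points,
    choosing the low-tail/high-tail split that minimizes the resulting PV."""
    h = np.sort(np.ravel(heights))
    k = int(np.ceil(keep * h.size))     # number of points to keep
    # A window of k consecutive sorted points clips i points from the low tail
    # and (h.size - k - i) from the high tail; take the smallest window range.
    return np.min(h[k - 1:] - h[:h.size - k + 1])
```

For a symmetric distribution the optimum is, as expected, an even split between the tails, which is why the optimization rarely changes the value much.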

DISCUSSION OF ESTIMATOR ERROR
When calculating the PV of an optical function such as the wavefront irregularity, it is typically compared to a tolerance, e.g., 3/(B) for PV irregularity on an ISO 10110 compliant drawing. For the measured optic in question to conform to specification, it must achieve a value lower than the tolerance; if greater, it fails the specification. That said, this is merely an estimate of conformance; it does not guarantee it. In a binary decision such as this, there are two ways in which the estimate can fail:
• overestimating the 'true' PV due to measurement uncertainty/estimator error (falsely rejecting a 'good' part), or
• underestimating the 'true' PV due to an inaccurate estimator function (falsely accepting a 'bad' part).
These false positives and negatives are analogous to type I and type II errors in statistical hypothesis testing. There is, however, another more subtle way in which the tolerance conformance evaluation process can fail: a potential lack of correlation with optical performance. A designer will often make assumptions about the shape and statistics of the surface/wavefront deviation when modeling and setting tolerances. If these assumptions are violated, there are two more potential failure modes:
• if the actual deviation shape behaves 'better' optically than modeled, this can result in rejecting a part that will meet the desired optical performance; and
• if the actual deviation shape behaves 'worse' optically than modeled, this can result in accepting a part that will not meet the desired optical performance.
For example, a designer might perform a Monte-Carlo simulation with numerous combinations of low-order polynomials and monitor the rms spot size degradation at a few different field points. But if an actual manufactured surface form error is not substantially a low-order polynomial, then the simulation will not be a good match to the real system (which could be either better or worse, as noted above).
Let's consider some examples to illustrate these two failure modes. For simplicity, for all examples let the aperture be 50 mm, the prescribed shape be flat (plano), the PV irregularity tolerance be 63 nm ("tenth wave"), and assume no measurement uncertainty (we know exactly what the surface looks like). For our first example, consider a surface with a 100 nm high local artifact on an otherwise perfect part, as shown in figure 5. This surface's rms value is just 1.1 nm, but the PV irregularity exceeds the tolerance by over 50% (100 nm vs. 63 nm). It is obviously not well represented by a low-order polynomial fit, and thus violates that tolerancing assumption. This surface will very likely meet the imaging performance requirements of the optical system, despite failing the indicated 63 nm PV tolerance.

Figure 5. Map of a surface with a single localized defect that is otherwise perfectly flat, with 2 different views: (a) an overhead map with a color scale going from purple to red (from 0 to 63 nm; points exceeding 63 nm are colored pink) and (b) an oblique 3-D style plot. The PV of the surface is 100 nm, while the rms is just 1.1 nm.
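The flavor of this example is easy to reproduce. The sketch below builds a Gaussian-shaped bump on an otherwise flat circular aperture (the bump width and position are illustrative guesses, not the parameters behind figure 5) and shows the characteristic combination of a large PV with a tiny rms:

```python
import numpy as np

n = 512
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]   # normalized coordinates for a 50 mm aperture
mask = x**2 + y**2 <= 1.0

# 100 nm tall Gaussian bump a few percent of the aperture across (illustrative)
sigma = 0.02
surface = 100.0 * np.exp(-((x - 0.3) ** 2 + (y - 0.3) ** 2) / (2 * sigma**2))

pts = surface[mask]
print(f"PV = {np.ptp(pts):.0f} nm, rms = {np.sqrt(np.mean(pts**2)):.1f} nm")
```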
For our second example, consider a surface height map that exhibits a spoke/pinwheel pattern, shown in figure 6. This surface, like the previous example, is not well represented by a low-order polynomial fit. Unlike that example, however, this pattern is not localized; it covers most of the surface. The PV of this surface just passes the 63 nm tolerance, but its rms slope is some 5 times larger than that of a low-order polynomial shape with an equivalent PV. It is thus likely to fail the imaging performance requirements, despite conforming to the indicated 63 nm PV irregularity tolerance. Strictly speaking, these tolerance modeling issues (the connection of surface form errors to optical performance) are a failure of the design and tolerancing process rather than the evaluation process. Additional tolerances should be employed if PV irregularity is not adequate, but the challenge in defining those tolerances should not be underestimated. Often inadequate tolerances are discovered "the hard way"; that is, after all the manufactured optical components are assembled, the completed optical system fails to perform to its overall optical performance specifications. Some published examples include a 300x zoom system that needed a tolerance on local slope for key aspheric elements [4], optics for the National Ignition Facility (NIF) laser system that required power-spectral density (PSD) specifications [5], and a wide-field aspheric camera lens suffering from localized artifacts in the center of the lens [6]. Surface form errors of the sort shown in figures 5 and 6 are often termed "mid-spatial frequency" (MSF) errors, and PV irregularity will often be inadequate for tolerancing them. Researchers have made progress in modeling the impact of MSF errors more efficiently [7,8], but the optical performance predicted by those models must still be converted to tolerance indications on a drawing. Standardized tolerance indication options include local slope, band-passed rms, and PSD. Further details on these specifications can be found in ISO 10110-5 and ISO 10110-8.
With all that said, PV irregularity is still the "default" tolerance of choice for designers, especially for less demanding optical systems and applications. Therefore, PV estimators that are more resistant to these potential tolerancing failures are desirable, as they are more likely to represent the designer's intent when tolerancing. This is especially true of a metric like PV irregularity, which has a long legacy of use in visual fringe analysis.
These points are important to consider because, while PVmax-min reliably overestimates the 'true' PV of an optic (it is biased high), trimmed PV estimators can underestimate it. It is thus important to understand the potential shortcomings of each estimator, to better recognize when tighter specifications, specific analysis rules, and/or additional tolerances might be appropriate.

Examples of PV estimator calculation results
We illustrated the shortcomings of using a direct maximum-minus-minimum calculation to estimate the PV in the introduction (section 1.1). Now let us consider a variety of simulated data to see how PVmax-min, PVr, and PV% respond to various surface shapes and statistics. These examples were generated on a 512x512 pixel grid (for 205982 total pixels over the circular area) and are shown in tables 2 and 3.

The spherical and astigmatism cases demonstrate that all three metrics behave well on low-order error. PVr and PV match perfectly (as expected), and PV99.8% only slightly underestimates these forms of deviation (within 1-2%). The local artifact is ~4% the width of the evaluation aperture (too small to be captured by a 36-term Zernike fit), and the trimmed estimators significantly underestimate the actual PV in such cases. Note that the PV/RMS ratio of this local artifact is about 36, compared to the value of ~5 typical for low-order error. The trimmed metrics reduce this ratio considerably, to ~6 for PVr and ~10 for PV99.8%. This demonstrates that the trimmed estimators do trim "real" data as well as noise/artifacts. That said, this behavior can be desirable even on "real" data due to tolerancing assumptions (PV irregularity is typically intended to capture low-order error, as discussed in section 3 and illustrated in figure 5). Also notice that the PV and RMS metrics are virtually insensitive to the location of a local artifact. Other specifications are needed if the location of an artifact is important; e.g., if the optical application is more sensitive to artifacts in the center of the optic, a specification should be defined over a central subaperture to tolerance that.

Next let us observe how the PV metrics behave on the more complicated surface profiles shown in table 3. The first two maps in table 3 are very similar: the low-order spherical profile has the center artifact either added or subtracted. These two distributions are essentially identical from the point of view of optical performance (the exact same amount of spherical and local artifact size; only the sign of the artifact has changed). Despite this, one has double the PVmax-min of the other due to the center artifact height being canceled or amplified by the valley in the spherical. This further highlights how PVmax-min behavior can correlate poorly with optical performance, complicating tolerancing as discussed in section 3. The trimmed PV metrics perform 'better' in this case because, although they underestimate the 'real' PV, their values are more representative of actual optical performance. We can also see that raw PV cannot be relied upon to reliably detect the presence of such local artifacts: the spherical + center artifact case has the exact same PV as spherical only, despite the addition of a center artifact. Therefore, if such localized artifacts are important to the overall optical performance, additional tolerance specifications (such as slope or Zernike RMS residual) should be employed. And as mentioned earlier, if the location of the artifact is important (center versus offset), a subaperture specification should be defined.
The random distributions (uniform and normal) demonstrate how the shape of the histogram can affect PV metrics. For this example, the uniform distribution was scaled to have a PV of 100 nm, and the normal distribution was scaled to have the same RMS value as the uniform distribution (28.9 nm). The normal distribution has a higher PV value (264 nm), and this PV would change if we simulated with a different number of data points (since the normal distribution has infinite "tails"); the uniform distribution lacks this sensitivity. The trimmed PV estimators (PVr and PV%) have essentially no effect on the uniform distribution (giving the same result as PVmax-min). They do, however, have a modest impact on the normal distribution, reducing the PV from 264 to ~175 nm, or approximately 6 times the RMS value. This is by design for both PVr and PV99.8%: for PVr, the 36-term Zernike fit is nearly zero, so it defaults to 6 RMSZ36-; for PV99.8%, a normal distribution includes approximately 99.8% of the points within ±3 standard deviations, equivalent to a span of 6 RMS. The last example map takes 80 nm PV of spherical and adds 20% of the normal distribution to it as noise. The noise increases the PVmax-min by about 40 nm (to 119 nm); the trimmed PV estimators reduce the noise impact by about half (to a PV of ~100 nm, versus 80 nm for the spherical without any noise).
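This behavior is easy to verify numerically. The quick Monte Carlo check below (synthetic data; a symmetric quantile clip suffices for these symmetric distributions) shows the uniform distribution nearly untouched by clipping, while the clipped PV of the normal distribution settles near 6 times its RMS. Note also that with a point count different from the paper's grid, the raw PV of the normal sample comes out somewhat different from 264 nm, which is precisely the sample-size sensitivity described above:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
uniform = rng.uniform(-50.0, 50.0, n)         # PVmax-min = 100 nm, rms ~ 28.9 nm
normal = rng.normal(0.0, uniform.std(), n)    # same rms as the uniform case

for name, d in (("uniform", uniform), ("normal", normal)):
    lo, hi = np.quantile(d, [0.001, 0.999])   # keep the central 99.8% of points
    print(f"{name:7s}: rms = {d.std():5.1f}  "
          f"PVmax-min = {np.ptp(d):6.1f}  PV99.8% = {hi - lo:6.1f}")
```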

CONCLUSIONS
Surface form error is commonly toleranced with a PV irregularity specification, whose computational simplicity was ideal for visual fringe analysis. The sensitivity of PV to outliers, however, poses problems when it is employed on modern high-resolution digital measurements. PVr and PV99.8% (trimmed PV estimators) mitigate the outlier sensitivity, allowing digital measurements to behave similarly to visual fringe analysis and with greater consistency across measurements. This eliminates the need for unstandardized, ad hoc masking, filtering, and spike removal to get the PV value "correct". Given these benefits of trimmed PV estimators, the plan is to allow their use by default in the upcoming ISO 10110-5 standard (tentatively scheduled for release in 2025). Put another way, PV99.8% or PVr can be used instead of PVmax-min to evaluate a PV irregularity or PV total deviation tolerance unless the drawing specifically indicates otherwise.
Also, there are a few key cautionary lessons learned in Section 3 that are worth repeating here:
• PV (trimmed or not) is not a reliable metric for tolerancing mid-spatial frequencies (MSF) or localized surface errors (such as center artifacts).
o See Figure 5, Figure 6, and Table 2.
o Use additional specifications such as local slope (peak or RMS), band-passed RMS, and power-spectral density (PSD) if the optical application is sensitive to such errors.
o None of the metrics are location sensitive; if that is needed (e.g., if a center artifact is less tolerable than an off-center one), then an evaluation subaperture will need to be specified as well.
▪ E.g., note the lack of difference between Local (center) and Local (offset) in Table 2.
• Trimmed PV can underestimate the "true" PV of a surface, especially if the PV/RMS ratio is very high.
o This typically only occurs when the surface error is highly localized (small laterally).
o This isn't necessarily "bad", depending on how tolerances were modeled; see Section 3.

Figure 2. The example surface analyzed via phase-shifting Fizeau interferometer. (a) includes all data in the 45 mm evaluation aperture, while (b) has the fiducials masked out.

Figure 3. Histogram of the example surface with the fiducials masked (figure 2b), with the frequency shown on (a) linear and (b) logarithmic scales (to make the tails of the distribution visible). The histogram bin size is 1 nm, and the most extreme bins are at -54 nm and +60 nm.

Figure 6. Map of a surface with a spiral pattern with 2 views: (a) an overhead map with a purple-to-red color scale and (b) an oblique 3-D style plot. The PV of the surface is 62.5 nm, while the rms is 14 nm.

Table 1. PV% values from the example in section 1.1, figure 4, for different amounts of clipping. The results from the visual fringe analysis and the PVr of this example are also shown on the right side of the table (shaded).

Table 2. Surface statistics (histogram, rms, PVmax-min, PVr, PV%) of several different simulated surface error maps (low-order: spherical, astigmatism; localized: center artifact, offset artifact, annular zone). The scales of the maps and histograms span the full PV (100 nm for most maps), and all numbers have units of nm. The vertical scale of the histograms (counts) is shown on both linear and logarithmic scales (to make outliers visible). The shaded cells indicate intermediate calculations for PVr, with the highlighted and underlined number indicating which was used for PVr (or none if PVr = PVmax-min).

Table 3. Continuation of table 2, but with different simulated surface error maps that include noise (uniform and normal distributions) and combinations of errors (spherical ± center artifact, spherical + noise).