Resampling Estimation Based RPC Metadata Verification in Satellite Imagery

Recent advances in machine learning and computer vision have made it simple to manipulate a variety of media, including satellite images. Most commercially available satellite images go through the process of orthorectification to remove potential distortions due to terrain variations. This orthorectification process typically involves the use of rational polynomial coefficients (RPC) that geometrically remap the pixels in the original image to the rectified image. This paper proposes the first method to verify the authenticity of RPC metadata in an orthorectified satellite image. The steps include calculating the Residual Discrete Fourier Transform (DFT) pattern from the image using a linear predictor based residual spectral analysis and comparing it with the Expected Residual DFT pattern obtained from the RPC metadata associated with the image. If the metadata associated with the orthorectified image is correct, then the Residual-DFT pattern (which represents image data) and the Expected-Residual-DFT pattern (which represents metadata) should be similar. We use SSIM (Structural Similarity Index Metric) to quantify the similarity and thereby verify whether the data has been tampered with. Detailed experimental results demonstrate that our method achieves over 97% accuracy in the majority of binary tampering detection tests.


Chandrakanth Gudavalli, Michael Goebel, Tejaswi Nanjundaswamy, Member, IEEE, Lakshmanan Nataraj, Shivkumar Chandrasekaran, and B. S. Manjunath, Fellow, IEEE
Index Terms-Resampling estimation, digital image forensics, metadata tampering detection, signal processing.

I. INTRODUCTION
There has been an exponential increase in the number of commercial, public, and defense-oriented satellites, and with it, concerns over potential manipulation or misuse of such data. Unlike consumer-oriented images, satellite images include significant additional metadata, which may hold information about camera position, orientation, and capture time. In addition, these images are generally orthorectified to remove perspective distortion. Orthorectification produces a map-like image, with lines of latitude and longitude being perpendicular and equally spaced. While forensic analysis of digital images has been well studied [1], [2], [3], there are fewer studies on detecting manipulations in satellite images [4], and fewer still on detecting tampering of metadata. Satellite images, for the most part, are orthorectified using rational polynomial coefficients (RPC) and Digital Elevation Models (DEM). The RPC coefficients define a best-fit mapping from latitude, longitude, and elevation to pixel coordinates. These coefficients are determined from the camera location, orientation, and parameters intrinsic to the imaging system, and are provided in the image metadata. The DEMs contain a dense grid of elevation measures at different points around the globe, and are freely available online. The combination of the RPC coefficients with the DEMs allows for the transformation of the captured image into an orthorectified view.
RPC metadata associated with an image is essential in obtaining the pixel coordinates of a given object or a given lat-long in the unrectified image. Tampering could be used by adversaries to mask the true locations of objects or other geo-features of interest. Tampering of RPC metadata associated with orthorectified satellite imagery raises questions and suspicions on the authenticity of image content as well. This paper addresses the problem of verifying RPC metadata that is attached to orthorectified satellite imagery.
Our approach to detecting the tampering is based on resampling estimation. Every captured image contains noise, which here can be treated as the deviation from an ideal pinhole camera model. Once an image is resampled using an affine transformation, the noise variance fluctuates periodically across the image. The DFT of the noise variance in the resampled image can be calculated using the method proposed by Kirchner [5], which we refer to as the "Residual DFT Pattern" (a sample is shown in Figure 1).
The RPC metadata, together with the DEMs, defines a mapping from sensor pixel space to orthorectified image space. DEMs for a given location define a mapping from orthorectified image space to elevation. The combination of RPC and DEMs thus creates a mapping from lat-long coordinates to sensor pixel location. This mapping therefore defines an expected resampling pattern due to orthorectification. Using the RPC + DEM metadata associated with the image, we resample a predefined checkerboard pattern and estimate the DFT of the noise variance this resampling induces. We refer to this estimated DFT as the "Expected Residual DFT Pattern." Any modification to the RPC metadata will alter this expected DFT pattern. If the metadata used to resample/orthorectify the checkerboard pattern is the same as the metadata used to resample the satellite image, then the two DFTs show structural similarity, as shown in Figure 1. Therefore, we use the structural similarity index metric (SSIM) between the two DFT patterns to verify whether the associated RPC metadata has been tampered with.
The main contributions of the paper are as follows: 1) We propose an algorithm for calculating the "Expected Residual DFT pattern" based on the RPC metadata. The details of this method are presented in Section III-A. This algorithm is new and, to the best of our knowledge, has not been addressed anywhere in the published literature.

2) We propose a method that uses the Structural Similarity Index Metric (SSIM) to compare the expected DFT pattern computed from the metadata with the DFT pattern derived from the orthorectified image. The approach presented in Section III-B details the use of SSIM for this step.

The rest of this paper is organized as follows. Section II summarizes related work in resampling detection and goes into greater detail about RPC metadata, the orthorectification procedure, and the calculation of the Residual DFT pattern. Section III describes the proposed method to verify the authenticity of the associated metadata. In Section IV, we provide detailed experimental results to validate the proposed RPC authentication. We conclude in Section V by stating the pros and cons of our technique.

II. BACKGROUND
Since the process of orthorectification creates a new, warped set of image pixel locations, resampling must be used to produce the orthorectified image. Altering the RPC coefficients will affect the warping pattern used, and therefore the resampling. There are several methods for resampling detection and/or estimation [3], [5], [6], [7], [8], [9]. In our proposed approach, we selected a fixed linear predictor based residual spectral analysis as described by Kirchner [5]. This method offers fast prediction for large images, which makes the technique practical for satellite images, as they tend to have large dimensions; the images we work with are typically of size 20,000 × 8,000 pixels. This method also produces relatively unique features for a variety of scaling, rotation, and shear factors.

A. Orthorectification and RPC Metadata
Orthorectification is the process of transforming an image onto its upright planimetric map by removing the perspective angle. Orthorectification is done using Rational Polynomial Coefficients (RPCs) based on empirical models that relate the geographic location (latitude/longitude, denoted by X, Y) and the surface elevation data (denoted by Z) to the row and column positions (denoted by r, c) through two rational polynomials [10]. Satellite sensor models are empirical mathematical models that relate image coordinates (row and column position) to latitude and longitude using the terrain surface elevation. The name Rational Polynomial derives from the fact that the model is expressed as the ratio of two cubic polynomials. A single image involves two rational polynomials, one each for computing the row and column position:

r = P1(X, Y, Z) / P2(X, Y, Z),  c = P3(X, Y, Z) / P4(X, Y, Z)    (1)

where P1, P2, P3, and P4 are cubic polynomials, each with 20 coefficients (which are referred to as RPC metadata):

P*(X, Y, Z) = a1 + a2 X + a3 Y + a4 Z + a5 XY + a6 XZ + a7 YZ + a8 X² + a9 Y² + a10 Z² + a11 XYZ + a12 X³ + a13 XY² + a14 XZ² + a15 X²Y + a16 Y³ + a17 YZ² + a18 X²Z + a19 Y²Z + a20 Z³    (2)

where * is 1, 2, 3, or 4. The coefficients of the two rational polynomials in (1) are computed as the best-fit mapping from spatial location (X, Y, Z) to pixel coordinates (r, c), by considering the camera's orbital position, orientation, and corresponding physical sensor model.
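As a concrete sketch, the projection in (1) can be implemented as a ratio of two 20-coefficient cubic polynomials. The monomial ordering below follows the common RPC00B convention and should be treated as an assumption for illustration; real RPC metadata also carries offset/scale normalization terms that are omitted here.

```python
import numpy as np

def cubic_terms(X, Y, Z):
    # The 20 monomials of a cubic polynomial in (X, Y, Z).
    # Ordering follows the common RPC00B convention (an assumption here).
    return np.array([
        1.0, X, Y, Z, X*Y, X*Z, Y*Z, X*X, Y*Y, Z*Z,
        X*Y*Z, X**3, X*Y*Y, X*Z*Z, X*X*Y, Y**3, Y*Z*Z, X*X*Z, Y*Y*Z, Z**3,
    ])

def rpc_project(row_num, row_den, col_num, col_den, X, Y, Z):
    # Eq. (1): r = P1/P2, c = P3/P4, each P* a 20-coefficient cubic.
    t = cubic_terms(X, Y, Z)
    r = row_num @ t / (row_den @ t)
    c = col_num @ t / (col_den @ t)
    return r, c
```

For example, with a numerator that selects the X monomial and a denominator equal to the constant 1, the projection reduces to r = X.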
Using the unrectified satellite image, its RPC metadata, and a Digital Elevation Map (DEM) to provide the elevation values, the unrectified image is resampled to generate an orthorectified image. Figure 2 shows a visualization of this transformation, obtained by passing a grid of vertical and horizontal lines through the warping function. DEMs with 30-meter resolution are used for this purpose.

B. Calculation of Residual DFT Pattern
While there are several methods for resampling detection and/or estimation, we selected a fixed linear predictor based residual spectral analysis as described by Kirchner [5]. This method offers faster prediction, which is essential for satellite images with sizes close to 20,000 × 8,000 pixels. The method first estimates an image noise signal by applying a fixed linear filter; here, noise refers to any deviation of the sample from an ideal pinhole camera model. It has been shown that different resampling patterns create unique, periodic artifacts in the noise variance, which can be analyzed through the DFT of the noise variance, referred to here as the Residual DFT Pattern. We now briefly describe this resampling estimation method as it is used to calculate the Residual DFT pattern.
Kirchner [5] assumes that the pixel noise in the image captured by the camera is zero-mean with constant variance (σ²). Resampling causes the noise variance to fluctuate depending on the location. For points where resampled pixels map directly onto one of the original pixel positions, the variance remains σ². Points that lie equidistant from their 4 nearest neighbors have a noise variance of only 0.25σ². Visual representations of both are shown in Figure 3. All other points in the resampling pattern have a noise variance lying between these two extremes.
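These two extremes can be checked numerically: bilinear resampling at a point equidistant from its four neighbors averages four i.i.d. noise samples with weight 1/4 each, giving variance 4 · (1/4)² σ² = 0.25 σ². A small Monte Carlo sketch (illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Four i.i.d. zero-mean, unit-variance noise samples per trial,
# standing in for the 4 nearest original pixels.
n = rng.normal(0.0, 1.0, size=(4, 1_000_000))

on_grid = n[0]             # resampled pixel lands exactly on an original pixel
midpoint = n.mean(axis=0)  # equidistant point: bilinear weights are all 1/4

print(on_grid.var())   # close to sigma^2 = 1
print(midpoint.var())  # close to 0.25 * sigma^2
```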
It is shown in [5] that affine resampling methods introduce periodic patterns in the pixel noise variance of the resampled image. An example is shown in Figure 4, which represents the pixel displacements that occur when the image is scaled by a factor of 0.9. The new pixel locations come in and out of phase with the original pixel locations, causing periodic patterns in the noise variance of the resampled image. Given this model of the resampling process, we only need a method to estimate the pixel noise variance. Unique periodic patterns then become visible in the DFT of the variance estimate, which is referred to as the Residual DFT pattern.
To estimate the noise variance in an image, Kirchner [5] uses the following procedure. First, a high-pass filter is applied to remove a sufficient amount of image content; the fixed linear predictor of [5] uses the 3 × 3 kernel

α = [−0.25  0.5  −0.25;  0.5  0  0.5;  −0.25  0.5  −0.25]

and the prediction residual is e = image − α * image (where * denotes convolution). This filtered image (denoted by e) is treated as an estimate of the noise value at each pixel. The method then estimates the noise variance (denoted by p) similarly to Popescu and Farid's Gaussian distribution based calculation [6]:

p = λ exp(−e² / σ²)    (3)
The DFT of the estimated noise variance (p) is referred to as the Residual DFT pattern. Toy examples showing image residual DFT patterns under different transformations are shown in Figure 5.
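The steps above can be sketched in a few lines; treat the predictor kernel and the unit λ, σ below as assumptions for illustration (see [5] for the definitive choices):

```python
import numpy as np
from scipy.signal import convolve2d

# Fixed linear predictor weights (assumed; see Kirchner [5]).
PRED = np.array([[-0.25, 0.5, -0.25],
                 [ 0.50, 0.0,  0.50],
                 [-0.25, 0.5, -0.25]])

def residual_dft_pattern(img, lam=1.0, sigma=1.0):
    # Prediction residual e: image minus its fixed linear prediction.
    e = img - convolve2d(img, PRED, mode="same", boundary="symm")
    # Local variance proxy, Gaussian-shaped in the residual (eq. (3) style).
    p = lam * np.exp(-e**2 / sigma**2)
    # Residual DFT pattern: magnitude spectrum of the variance estimate.
    return np.abs(np.fft.fftshift(np.fft.fft2(p - p.mean())))
```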
A distinction from previous works in resampling detection is that the resampling pattern for these satellite images is not affine. While an affine transformation produces periodic artifacts and discrete points in the expected DFT spectrum, the RPC + DEM resampling patterns do not. However, these patterns are very close to locally affine over small patches, and instead form a cloud of points in the DFT pattern.

III. PROPOSED METHOD

A. Calculation of Expected Residual DFT Pattern
We calculate the DFT of "expected noise variance" at each point in the orthorectified image, which is referred to as Expected Residual DFT Pattern.
The expected noise variance is estimated at each point in the orthorectified image using the RPC metadata associated with the image. To estimate the variance at a given point, we exploit the fact that the variance is inversely related to the distance between the new orthorectified pixel location and its nearest neighbor in the unrectified image. We chose the L1 distance for this purpose, as we found no discernible difference between the L1 and L2 metrics. We compute the DFT of the calculated distances, which serves as the Expected DFT pattern, since the distance is inversely related to the variance.
We obtain these distances using only the forward warp function, via the following procedure. An array Y of the same height and width as the image (referred to as the synthetic grid) is initialized with 4 channels. It is filled so that every 2 × 2 block contains 4 orthogonal (one-hot) vectors:

[ (1, 0, 0, 0)  (0, 1, 0, 0);  (0, 0, 1, 0)  (0, 0, 0, 1) ]    (4)

This array is passed through the same transformation pipeline as the image. Assuming the transformation can be approximated by bilinear interpolation, the bilinear interpolation formula can be reversed on the transformed version of Y (as described in Appendix A) to obtain the distance matrix, which holds the distance between each new orthorectified pixel location and its nearest neighbor in the unrectified image. This way, only the forward warping function needs to be provided.
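For intuition, when the warp's inverse is available in closed form (a plain scaling below, standing in for the RPC + DEM mapping), the same distance map can be written down directly, without the synthetic-grid trick. This is a sketch using the L1 distance to the nearest unrectified sample:

```python
import numpy as np

def expected_distance_map(h, w, scale=0.9):
    # Map each rectified pixel back to (sub-pixel) source coordinates.
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    sx, sy = xs / scale, ys / scale
    fx, fy = sx % 1.0, sy % 1.0   # fractional offsets within the source grid
    # L1 distance to the nearest original sample location.
    return np.minimum(fx, 1 - fx) + np.minimum(fy, 1 - fy)

def expected_dft_pattern(d):
    # Expected DFT pattern: magnitude spectrum of the distance map.
    return np.abs(np.fft.fftshift(np.fft.fft2(d - d.mean())))
```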
We calculate the DFT magnitude of this distance matrix, which we refer to as the Expected DFT pattern; it will later be compared with the Residual DFT pattern to measure their structural similarity. Before comparing the two DFT patterns, we high-pass filter the Residual DFT pattern through multiplication by a cone C,

C(i, j) = √((i − h/2)² + (j − w/2)²)    (5)

where h, w are the height and width of the DFT matrix. This suppresses the strong but less informative low-frequency components and levels out the noise floor. The entire process is summarized in the flow chart shown in Figure 6. All processes in the flow chart have now been defined except computing the mismatch between the two DFT patterns; Section III-B investigates different methods of computing the mismatch score.
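The cone multiplication in (5) can be sketched as a radial ramp over the DFT grid; the normalization to a maximum of 1 is an illustrative choice:

```python
import numpy as np

def cone_filter(h, w):
    # Radial "cone" weight: zero at the DFT center, growing with frequency,
    # so low-frequency energy is suppressed when multiplied in.
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(ys - h / 2.0, xs - w / 2.0)
    return r / r.max()
```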

B. Similarity Score Calculation Between DFT Patterns
Given two DFT patterns, a metric is required to quantify the similarity between them. One option is the mean squared error (MSE) between the two DFT arrays (say x, y):

MSE(x, y) = (1 / MN) Σᵢ Σⱼ (x(i, j) − y(i, j))²    (6)

where M, N are the number of rows and columns of the DFT arrays, respectively.
Alternatively, both two-dimensional arrays can be flattened into one dimension and the cosine similarity score computed between them:

cos(α, β) = (α · β) / (‖α‖ ‖β‖)    (7)

where α, β are the flattened DFT arrays. Equations (6) and (7) show that both MSE and cosine similarity take every pixel into account and perform a one-to-one comparison when computing the similarity score.
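The two baseline metrics of (6) and (7) are a few lines each:

```python
import numpy as np

def mse(x, y):
    # Eq. (6): mean of squared per-pixel differences.
    return float(np.mean((x - y) ** 2))

def cosine_similarity(x, y):
    # Eq. (7): cosine of the angle between the flattened DFT arrays.
    a, b = x.ravel(), y.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```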
Since we observed in our experiments that the intensity and position of peaks in the DFT patterns can vary slightly around a specific area, image similarity measures that capture "structural" similarity are of interest. We use the Structural Similarity Index Metric (SSIM) [11] to calculate the similarity score between two DFT arrays. The SSIM score between two discrete signals is calculated as follows. Let x, y be the two DFT arrays corresponding to a given image patch. Let μx, σ²x, and σxy be the mean of x, the variance of x, and the covariance of x and y, respectively. Roughly, μx and σx can be viewed as estimates of the luminance and contrast of x, and σxy measures the tendency of x and y to vary together, thus an indication of structural similarity. SSIM compares the luminance (l), contrast (c), and structure (s) of x and y (using (8), (9), and (10), respectively), and the overall similarity score is computed using (13).
l(x, y) = (2 μx μy + C1) / (μ²x + μ²y + C1)    (8)

c(x, y) = (2 σx σy + C2) / (σ²x + σ²y + C2)    (9)

s(x, y) = (σxy + C3) / (σx σy + C3)    (10)

where C1, C2, and C3 are small constants given by

C1 = (K1 L)²    (11)

C2 = (K2 L)²,  C3 = C2 / 2    (12)

Here L is the dynamic range of the pixel values (L = 255 for 8 bits/pixel grayscale images), and K1 ≪ 1 and K2 ≪ 1 are two scalar constants. The general form of SSIM between signals x and y is defined as

SSIM(x, y) = [l(x, y)]^α [c(x, y)]^β [s(x, y)]^γ    (13)

where α, β, and γ are parameters defining the relative importance of the three components. We use α = β = γ = 1 to give equal importance to luminance (l), contrast (c), and structure (s). Hence the resulting SSIM index is given by

SSIM(x, y) = (2 μx μy + C1)(2 σxy + C2) / ((μ²x + μ²y + C1)(σ²x + σ²y + C2))    (14)

Since SSIM measures similarity (i.e., 1 implies matched and 0 implies mismatched), we use 1 − SSIM(x, y) as the metric for the distance (or dissimilarity) between the two DFT patterns in our experiments. Figure 7 and Figure 8 show sample scenarios explaining the choice of SSIM over the MSE and cosine similarity metrics, respectively.
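A single-window (global) SSIM sketch with α = β = γ = 1 and C3 = C2/2 follows; note the paper applies SSIM over local windows (window size 63), which library implementations such as scikit-image provide:

```python
import numpy as np

def ssim_global(x, y, L=255.0, K1=0.01, K2=0.03):
    # Resulting SSIM index with equal luminance/contrast/structure weighting.
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))
```

By construction, ssim_global(a, a) is exactly 1 for any array a, and the score drops as the two arrays decorrelate.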
The SSIM-based scores were concentrated in a very small range, so we normalize them using a sigmoid function

f(x) = 1 / (1 + exp(−λe (x − μe)))    (15)

so that the scores are well spread between 0 (matched) and 1 (mismatched), making it easy to differentiate matched and mismatched pairs.
Fig. 7. Sample demonstration showing that SSIM is more apt than MSE for our use case: even though both DFT patterns look alike, MSE is unable to capture the similarity, unlike SSIM.
Fig. 8. Sample demonstration showing that SSIM is more apt than cosine similarity for our use case: even though both DFT patterns look alike, cosine similarity is unable to capture the similarity, unlike SSIM.
We empirically estimated λe and μe (which determine the shape of the sigmoid curve) from the outputs of experiments on larger datasets. Details regarding the datasets and the experiments carried out on them are discussed in Section IV.
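The sigmoid spreading of (15) is then one line; the λe and μe values below are placeholders, since the paper fits them empirically:

```python
import numpy as np

def normalize_score(s, lam_e=50.0, mu_e=0.5):
    # Sigmoid mapping of the raw (1 - SSIM) score into (0, 1);
    # lam_e, mu_e are placeholders for the empirically fitted parameters.
    return 1.0 / (1.0 + np.exp(-lam_e * (np.asarray(s, dtype=float) - mu_e)))
```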
Given a large satellite image and its associated RPC metadata, we divide the image into non-overlapping patches and compute both DFT patterns for each patch. We calculate the SSIM score for each patch and generate a heatmap of SSIM scores for the entire image. The median of the patch-wise SSIM scores is treated as the overall tampering score of the image, while the heatmap can be used to localize where matches and mismatches occur.
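The patch-wise aggregation can be sketched as follows, with `score_fn` standing in for the full per-patch pipeline (both DFT patterns plus the normalized SSIM comparison):

```python
import numpy as np

def patchwise_scores(img, patch, score_fn):
    # Tile the image into non-overlapping patches, score each one,
    # and return the heatmap plus the median overall tampering score.
    h, w = img.shape
    heat = np.array([
        [score_fn(img[i:i + patch, j:j + patch])
         for j in range(0, w - patch + 1, patch)]
        for i in range(0, h - patch + 1, patch)
    ])
    return heat, float(np.median(heat))
```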
For example, heatmaps for a given pair of images are shown in Figure 9. We can now clearly see the smaller (blue) score values for the matched case (top-left and bottom-right images) and larger (red) score values for the mismatched case (top-right and bottom-left).

IV. EXPERIMENTS
This section details the experiments carried out to verify the authenticity of satellite images using the proposed technique.

A. Dataset
Level 1B data (in GeoTIFF format, with associated RPC metadata) from the Orbview-3 satellite [12] was collected from the United States Geological Survey (USGS) Earth Explorer [13]. The elevation maps required for orthorectifying the images are obtained from [14].
Samples from different regions of the globe, with both flat and hilly terrains, are used in the experiments. Table I shows the number of samples collected from each region. Each sample in the dataset has pixel information and corresponding metadata associated with it. We created a dataset of tampered samples by replacing the metadata of each sample with the metadata associated with one of the other samples in the dataset, using two different approaches. In the first approach, we randomly exchange metadata (see Section IV-B). In the second approach, we select images that overlap spatially but were captured at different times, and exchange their metadata; we consider 85% and 98% overlap thresholds (see Section IV-C).

B. Random Exchange of Metadata
Experiments are carried out on the Japan dataset by considering the collected image-metadata pairs as samples of the untampered dataset. We created a dataset of tampered samples by replacing the metadata of each sample with the metadata associated with another, randomly selected sample from the dataset. We calculated both DFT patterns for each sample by dividing images into non-overlapping patches of size 1024 × 1024. As described in Section III-B, the normalized SSIM score between the DFT patterns of each patch is calculated, with the SSIM window size parameter set to 63.

Table III. Experimental results on the datasets from different parts of the globe. Each image stems two samples, one in the tampered dataset and the other in the untampered dataset.
The median of the scores of all patches in a given image is taken as the overall tampering score of the image, and this score is used to detect tampering in the metadata. Binary classification of these scores resulted in:
• Area under the ROC curve (AUC) of 0.9969
• Accuracy of 99.15%
• Tampered detection accuracy of 99.65% (percentage of tampered samples detected as tampered)
• Untampered detection accuracy of 98.65% (percentage of untampered samples detected as untampered)
We repeated the above experiment for various patch sizes; the corresponding results are shown in Table II. Since the highest accuracy was obtained for a patch size of 1024 × 1024, we settled on breaking the images into 1024 × 1024 patches.

[Figure: Sample images where the RPC metadata is untampered, but the DFT patterns do not have structural similarity. The first example shows more significant distortion in the resampling grid than the other; the second appears to have a typical resampling pattern, but the image may have been subjected to additional post-processing. (a) Orthorectified satellite image; (b) Residual DFT pattern; (c) Rectangular grid of horizontal and vertical lines; (d) Expected DFT pattern.]

C. Spatially Overlapping Image Metadata Exchange
The experiments presented in Section IV-B are carried out by randomly exchanging metadata between different samples in a dataset. A more purposeful manipulation, however, is to exchange metadata between samples that correspond to the same GPS coordinates but were captured at different timestamps. To recreate this, for each image we search the dataset for the maximally overlapping pair captured at a different timestamp. We exchange metadata between image pairs whose overlap percentage (overlap area / original area) is above a threshold.
With an 85% overlap-area threshold, only 164 images (328 samples) remain from the 1708 images of the Japan region. Tampered-sample detection experiments with a patch size of 1024 × 1024 on this small dataset resulted in:
• Area under the ROC curve (AUC) of 0.9848
• Maximum accuracy of 97.85%
• Tampered detection accuracy of 98.16%
• Untampered detection accuracy of 97.55%
With a 98% overlap-area threshold, only 47 images (94 samples) remain from the full dataset of 1708 images. Classification on this dataset resulted in:
• Area under the ROC curve (AUC) of 0.9941
• Accuracy of 96.81%
• Tampered detection accuracy of 100%
• Untampered detection accuracy of 93.62%
For the Japan dataset, the results for purposeful temporal metadata exchange are slightly worse than for random metadata exchange; this is not the case for the datasets from other regions.

D. Testing on Flat Regions
Orbview-3 satellite data is also collected from other, flat regions of the globe (< 500 feet elevation variation). Tampering detection experiments are conducted on the Northern Europe, South America, and West Africa datasets, with results reported in Table III. The comprehensive results for these three flat regions, compared with the Japan dataset, suggest that the performance of our proposed algorithm does not suffer even when there are no significant terrain features to add structure to the DFT pattern.

E. Visual Examples
A few examples visualizing the Residual DFT patterns and Expected DFT patterns, where our method successfully detected tampered and untampered metadata, are shown in Figure 10 and Figure 11, respectively. Although our method achieves high accuracy for RPC tampering detection, we also show the rare examples where it fails, in Figures 12 and 13.

V. CONCLUSION
This paper proposes a novel approach to verify the authenticity of orthorectified satellite images with respect to the associated RPC metadata used to generate them. We calculate the Expected DFT pattern from the metadata and the Residual DFT pattern from the pixel content of the image. These two DFT patterns tend to have structural similarity if the RPC metadata is untampered. We use a sigmoid-normalized SSIM score to classify tampered and untampered samples. Extensive experimental results demonstrate the effectiveness of the approach in detecting RPC metadata manipulations.

APPENDIX A
CALCULATION OF EXPECTED DFT
This section describes the math behind the calculation of the Expected DFT pattern from the synthetic input Y defined in (4).
Let Ŷ be a 2 × 2 sub-array of Y. By construction, Ŷ is one of the following four arrangements of the one-hot vectors e0 = (1, 0, 0, 0), e1 = (0, 1, 0, 0), e2 = (0, 0, 1, 0), e3 = (0, 0, 0, 1):

[e0 e1; e2 e3], [e1 e0; e3 e2], [e2 e3; e0 e1], [e3 e2; e1 e0]    (16)

For illustration, we consider

Ŷ = [e0 e1; e2 e3]    (17)

Let X be the interpolated pixel in the transformed image generated by bilinear interpolation of the four pixels of Ŷ shown in (17). Then, for some 0 ≤ x, y ≤ 1, we can write

X = (1 − x)(1 − y) e0 + x (1 − y) e1 + (1 − x) y e2 + x y e3    (18)

From (17) and (18), writing the four channels of the pixel X as X0, X1, X2, X3, we obtain

X0 = (1 − x)(1 − y),  X1 = x (1 − y),  X2 = (1 − x) y,  X3 = x y    (20)

from which the fractional offsets can be recovered as

x = X1 + X3,  y = X2 + X3    (21)

Ŷ will not always be as shown in (17); it can be any of the four 2 × 2 arrays in (16). Suppose, for instance, that

Ŷ = [e1 e0; e3 e2]    (24)

Solving the corresponding bilinear system then yields the offsets (1 − x, y) instead of (x, y), i.e., we obtain point B in Figure 14 instead of point A. In every case, however, the distance d to the nearest neighbor in the unrectified domain is the same, and can be calculated as

d = min(x, 1 − x) + min(y, 1 − y)    (27)

The Expected DFT pattern is therefore calculated by computing the 2-D DFT of this expected prediction residue d over all pixels in the rectified domain, which is found to have structural similarity with the Residual DFT pattern calculated from the rectified satellite image.
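The bilinear reversal and nearest-neighbor distance of Appendix A can be checked numerically; the construction of X below assumes the bilinear model of (18):

```python
import numpy as np

def forward_bilinear(x, y):
    # Bilinear weights of the four one-hot corner vectors (eq. (18)).
    return np.array([(1 - x) * (1 - y), x * (1 - y), (1 - x) * y, x * y])

def reverse_bilinear(X):
    # Recover the fractional offsets from the 4-channel interpolated pixel.
    return X[1] + X[3], X[2] + X[3]

def l1_nearest(x, y):
    # L1 distance to the nearest unrectified sample.
    return min(x, 1 - x) + min(y, 1 - y)
```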
Chandrakanth Gudavalli received the master's degree in electrical engineering from the University of Colorado Denver. He is currently an Assistant Specialist with the Vision Research Laboratory, University of California, Santa Barbara. He is a Research Engineer with Mayachitra, Inc. His research interests include media forensics, computer vision, and MLOps.
Michael Goebel received the B.S. and M.S. degrees in electrical engineering from Binghamton University in 2016 and 2017, respectively. He is currently pursuing the Ph.D. degree in electrical engineering with the University of California, Santa Barbara. His research interests include media forensics, deep learning, and computer vision.
Tejaswi Nanjundaswamy (Member, IEEE) received the B.E. degree in electronics and communications engineering from the National Institute of Technology Karnataka, India, in 2004 and the M.S. and Ph.D. degrees in electrical and computer engineering from the University of California, Santa Barbara (UCSB) in 2009 and 2013, respectively. He was with the Signal Compression Laboratory, UCSB, Infocoding Labs Inc., and with Ittiam Systems, where he focused on audio/video compression, processing and related technologies. The work presented in this paper was carried out while he was a Research Staff Member with Mayachitra, Inc. He is currently an Audio Codec Engineer with Apple Inc. He won the Student Technical Paper Award from the AES 129th Convention.
Lakshmanan Nataraj received the B.E. degree from the Sri Venkateswara College of Engineering, Anna University, in 2007 and the Ph.D. degree in electrical and computer engineering from the University of California, Santa Barbara, in 2015. The work presented in this paper was carried out while he was a Research Staff Member with Mayachitra, Inc. He is currently a Principal Research Scientist with Trimble Inc., Chennai, India. His research interests include multimedia security, malware detection, and image forensics.
Shivkumar Chandrasekaran received the Ph.D. degree in computer science from Yale University, New Haven, CT, USA, in 1994. He is currently a Professor with the Electrical and Computer Engineering Department, University of California, Santa Barbara. He is one of the Co-Founders of Mayachitra Inc., Santa Barbara, CA, USA. His research interest includes computational mathematics.
B. S. Manjunath (Fellow, IEEE) received the Ph.D. degree in electrical engineering from the University of Southern California, Santa Barbara, CA, USA, in 1991. He is currently a Distinguished Professor of electrical and computer engineering with the University of California, Santa Barbara. He has coauthored more than 300 peer-reviewed articles in image processing, computer vision, cyber-security, and media forensics. He received the 2020 Edward J. McCluskey Technical Achievement Award from the IEEE Computer Society. He is a fellow of ACM, AIMBE, and NAI.