A control chart for monitoring images using jump location curves

Abstract Image monitoring is a relatively new research area in statistics and machine learning with wide applications in various fields, including medical diagnostics and disease monitoring, satellite imaging, security systems, and so forth. In the literature, the vast majority of methods use image intensity changes to detect out-of-control images. However, this approach is often unreasonable in many real-life applications, where a change in contrast between the background and foreground of an image should not indicate an out-of-control image as long as the boundaries of the image objects remain unchanged. In this article, we propose a Shewhart-type control chart to monitor grayscale images using detected edges of the images. The central idea is to monitor the Hausdorff distance between the point set of detected edge pixels in each image and the corresponding point set of the estimated true in-control image. The proposed monitoring procedure is easy to execute in many real-life applications. Numerical studies show that it performs well in various types of situations in comparison with a number of competing methods.


Introduction
Traditionally, quality control techniques have been widely used in manufacturing industries. The fundamental task of quality control is to inspect a process over time to detect any nonrandom or special cause of variation in the observed data, which is important for maintaining the quality of manufactured items (Qiu 2013; Montgomery 2007). Due to rapid progress in image acquisition techniques, images have become a very popular data format, not only in manufacturing industries but also in many other fields of application such as medical science and satellite imaging. In the field of medical imaging, X-ray, CT scan, MRI, and fMRI have been widely used for medical diagnosis. Monitoring such images for any significant change has many important applications. Nowadays, satellite images have become a basic tool in surveillance of the earth's surface. They have been widely used in research on agriculture, forest science, ecology and ecosystems, coastal resources, the environment, and so forth. A security-related example involves airspace image monitoring performed by the military to detect foreign aircraft. See Qiu (2020) and Qiu (2018) for more applications. Sometimes, images are obtained in the form of data streams as a longitudinal process over time, and we are interested in monitoring new image sequences coming from this longitudinal process. Since 1972, the USGS, in collaboration with NASA, has launched nine satellites to obtain images of the earth's surface, a program well known as the Landsat Project. Data from this project are easily available to researchers all around the world. Monitoring such sequences of images has received attention in different scientific disciplines. The aim of this paper is to develop a control chart for monitoring images that are observed sequentially over time, so that we can detect any nonrandom change in the images early (Figure 1).
There exist a number of SPC (statistical process control) methods for monitoring images. Because images have traditionally been used as a form of data in chemical and manufacturing processes, most of the existing methods are discussed in connection with chemical and industrial engineering applications (Megahed, Woodall, and Camelio 2011). Most of the existing methods consist of the following two steps: first, extract important image features from the observed images; second, use a conventional univariate or multivariate control chart to signal process shifts. A number of existing methods select a set of pre-specified regions, known as regions of interest (ROIs), in individual images and construct control charts based on an appropriate summary statistic (e.g., average intensity) for the ROIs over time, such as the conventional GLR chart by Megahed et al. (2012) or the MGLR chart by He et al. (2016). The MGLR control chart mentioned above is used mainly to detect multiple faults, but its charting statistic involves the variance-covariance matrix, which is not invertible in high-dimensional applications. Okhrin, Schmid, and Semeniuk (2021) address this issue and describe a more sophisticated version of the GLR method to get rid of this problem. Amirkhani and Amiri (2020) partition the whole image into a set of specific regions, compare them with the nominal image using a test based on one-way ANOVA, and then construct a p-value-based control chart to detect an out-of-control signal in an image sequence as soon as possible. Koosha, Noorossana, and Megahed (2017) propose a profile monitoring approach: they take the wavelet coefficients as frequency domain features and monitor changes in the coefficients to keep track of the images. See Okhrin, Schmid, and Semeniuk (2019) for more discussion of different methods based on frequency domain features.
None of the methods mentioned above uses detected edges for image monitoring. The novelty of our approach lies in the fact that it explicitly uses detected edges, one of the most important features of an image. Almost all existing methods are based on image intensity values, and therefore they are overly sensitive to very small changes in noise level and to slight misalignments of images. In that sense, our method is more robust in the presence of such insignificant changes. To the best of our knowledge, there is no existing study on monitoring image data that considers edges as the main feature for image surveillance.
Under jump regression analysis (JRA) (Qiu 2005), a 2-D grayscale image intensity function can be considered a discontinuous regression surface, where the discontinuities are often located on the boundaries of the image objects. The locations of the discontinuity points usually form curves, known as jump location curves (JLCs). For more details, readers are referred to Section 3. The main purpose of this paper is to propose a Shewhart-type (Shewhart 1931) control chart based on JLCs for monitoring images. Essentially, we aim to monitor the JLCs in the regression surface over time. For this purpose, we first detect the edge pixels of the images using an appropriate local smoothing procedure and then choose a quantitative similarity measure for comparison. In the literature, commonly used similarity measures are the mean square deviation (MSD) between two images and the Pearson correlation between the intensities of two image intensity matrices. However, these measures have a major drawback: they cannot take into account the spatial structures and other complicated details of an image. In our proposed methodology, we choose the Hausdorff distance between the point sets of detected edge pixels of two images, so that all the issues mentioned above are addressed properly.
The paper is structured as follows. Section 2 discusses the image registration problem and some of its state-of-the-art solutions as a pre-processing step. In Section 3, we provide an edge detection procedure based on the local linear kernel (LLK) smoothing technique. In Sections 4 and 5, we describe our proposed method step by step for Phase-I and Phase-II, respectively. Section 6 describes the selection of procedure parameters. A statistical property of the detected edge pixels is discussed in Section 7. The performance of the proposed method is evaluated by a simulation study in Section 8. A real data example is discussed in Section 9. Section 10 provides a set of guidelines for practitioners, and a number of concluding remarks are provided in the final section.

Image pre-processing
In most applications of image monitoring or image comparison, geometric misalignment of images is a very common issue. This is due to the fact that in an MVS (Machine Vision System), or in any other image capturing procedure, the position of the camera taking the images is different at different time points. So, to ensure that the subsequent image analysis is more reliable, it is important to geometrically align the images first. This is called image registration in the literature (Xing and Qiu 2011; Tustison, Avants, and Gee 2009). Since this kind of geometric misalignment in manufacturing industries is due to different positions of the camera, it is reasonable to assume that an appropriate rigid-body transformation would align two images. In such cases, the transformation is defined as T(x, y) = (T_1(x, y), T_2(x, y)), where T is the rigid-body transformation that maps the (x, y)-th coordinate of one image to the T(x, y)-th coordinate of another image. Then T can be expressed as

T(x, y) = (x cos θ − y sin θ + h, x sin θ + y cos θ + k),

where θ is the rotation parameter and (h, k) are the translation parameters corresponding to the x-axis and y-axis, respectively. In the image registration problem, our aim is to estimate the parameters (θ, h, k) from the observed images from the production line. For further details regarding the estimation and registration methodology, see the recent paper by Feng and Qiu (2018) on image comparison. Besides image registration, image denoising (e.g., Mukherjee 2017) and image deblurring (e.g., Kang, Mukherjee, and Qiu 2018) are sometimes also important, in cases where the observed images contain noise and blur.
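As a concrete illustration, the rigid-body map above can be applied to pixel coordinates with a few lines of code. This is only a sketch of the forward transformation with the function name and example parameter values chosen by us; estimating (θ, h, k) from observed images, as discussed above, is a separate problem.

```python
import numpy as np

def rigid_body_transform(coords, theta, h, k):
    """Apply T(x, y) = (x cos(theta) - y sin(theta) + h,
    x sin(theta) + y cos(theta) + k) to an (n, 2) array of coordinates."""
    coords = np.asarray(coords, dtype=float)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return coords @ R.T + np.array([h, k])

# A point on the x-axis, rotated by 90 degrees and shifted along x:
pt = rigid_body_transform([[1.0, 0.0]], theta=np.pi / 2, h=0.5, k=0.0)
# pt is approximately [[0.5, 1.0]]
```

In practice the estimated (θ, h, k) would be plugged into this map (or its inverse) to resample one image onto the coordinate grid of the other.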

Edge detection by LLK smoothing
As discussed in the previous section, a very important step in our proposed image monitoring control chart is to identify the edge pixels of the images using a reasonable edge detector. Any suitable edge detector can be used for this purpose. Many existing edge detection methods are based on first-order derivatives (e.g., Canny 1986; Qiu and Bhandarkar 1996) or second-order derivatives (e.g., Sun and Qiu 2007; Joo and Qiu 2009; Torre and Poggio 1986; Clark 1988) of the image intensity function. In this paper, we use an edge detection method based on local linear kernel (LLK) smoothing.
The literature on jump regression analysis considers estimating jump location curves (JLCs) in the design space. However, the presence of multiple JLCs, or of JLCs that cross each other, makes it very difficult to describe the JLCs mathematically, due to the global nature of the curves (Qiu 2005). In the digital image setup, a more flexible and appropriate method of jump detection is to treat the JLCs as a point set in the design space. Qiu (1998) and Qiu and Yandell (1997) estimate the JLCs as a point set by a local linear estimator. We consider a similar approach using a kernel function in a circular neighborhood, known as jump detection by the local linear kernel (LLK). Under the jump regression analysis framework, a 2-D image can be expressed by the regression model

Z_ij = f(x_i, y_j) + ε_ij,  i, j = 1, 2, …, n,

where {(x_i, y_j) : i, j = 1, 2, …, n} are equally spaced pixel coordinates in the design space. At each point (x, y), a local linear plane is fitted by kernel-weighted least squares in a circular neighborhood of radius h_n, where K is a radially symmetric bivariate density kernel with support {(x, y) : x² + y² ≤ 1}. The weighted least squares solution involves the quantities m_pq = Σ_{i,j} (x_i − x)^p (y_j − y)^q K_ij for p, q = 0, 1, 2, and the fitted slope coefficients (b̂(x, y), ĉ(x, y)) estimate the gradient of f at (x, y). Two neighboring points P_1 and P_2 are chosen along the direction of the estimated gradient at (x, y). Using these, the jump detection criterion is defined as

κ(x, y) = min{ ||Ĝ(x, y) − Ĝ_{P_1}(x, y)||, ||Ĝ(x, y) − Ĝ_{P_2}(x, y)|| },

where Ĝ(x, y) = (b̂(x, y), ĉ(x, y)) is the estimated gradient vector at (x, y), Ĝ_{P_1}(x, y) and Ĝ_{P_2}(x, y) are the estimated gradient vectors at the neighboring points (x_{P_1}, y_{P_1}) and (x_{P_2}, y_{P_2}), respectively, and ||·|| is the Euclidean norm. We decide that there is no jump at the point (x, y) if the value of κ(x, y) is relatively small. Therefore, (x, y) is on a JLC, i.e., it is a detected jump point, if κ(x, y) > v_n, where v_n is a threshold parameter.
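To make the criterion concrete, the following sketch flags jump points by comparing the gradient at each pixel with the gradients at two neighbors taken along the estimated gradient direction. It is a simplification of the LLK procedure: np.gradient stands in for the kernel-weighted local linear gradient estimates (b̂, ĉ), and the step size and threshold are illustrative choices of ours, not values from the paper.

```python
import numpy as np

def detect_jump_points(Z, v_n, step=2):
    """Flag pixels where the gradient differs from the gradients at two
    neighbors P1, P2 located `step` pixels away along the estimated
    gradient direction (a simplified stand-in for LLK jump detection)."""
    gy, gx = np.gradient(Z.astype(float))
    norm = np.hypot(gx, gy) + 1e-12          # avoid division by zero
    ux, uy = gx / norm, gy / norm            # unit gradient direction
    n1, n2 = Z.shape
    ii, jj = np.meshgrid(np.arange(n1), np.arange(n2), indexing="ij")
    kappa = None
    for s in (step, -step):                  # the two neighbors P1 and P2
        i2 = np.clip((ii + s * uy).round().astype(int), 0, n1 - 1)
        j2 = np.clip((jj + s * ux).round().astype(int), 0, n2 - 1)
        d = np.hypot(gx - gx[i2, j2], gy - gy[i2, j2])
        kappa = d if kappa is None else np.minimum(kappa, d)
    return kappa > v_n                       # flagged as jump points

# A vertical step edge: flagged pixels concentrate along the edge.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
edges = detect_jump_points(img, v_n=0.2)
```

In flat regions the gradient (and hence κ) is essentially zero, so only pixels adjacent to the intensity jump are flagged.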

Model description
In this paper, we mainly consider 2-D grayscale images. A sequence of m images, each of dimension n × n, can be expressed by the following 2-D JRA model:

Z_ijk = f_k(x_i, y_j) + ε_ijk,  i, j = 1, 2, …, n;  k = 1, 2, …, m,

where (x_i, y_j) is the (i, j)-th pixel coordinate of the k-th image, Z_ijk is the observed intensity, and f_k(x_i, y_j) is the true intensity value of the k-th image at the pixel (x_i, y_j). The ε_ijk's are random errors with mean 0 and variance σ² > 0. Further, we assume that the images are independent of each other and that they are geometrically aligned. In case they are not geometrically aligned, they have to be registered appropriately.

Edge detection and Hausdorff distance
In the previous section, we described a method to detect the JLCs as a point set. We assume that in Phase-I, we have images of batch size one from m different time points, and our primary goal is to detect whether there is any change in the image objects. In Phase-I, our main objective is to construct a charting statistic, estimate the corresponding parameters, and update them as data from different time points arrive.
Let I_1, I_2, …, I_m be the m sample images, each of dimension n × n, from an image process at m different time points. We assume that the images contain noise of various levels. We define the estimated true in-control image as Î_IC = (1/m) Σ_{i=1}^{m} I_i. Now, by LLK smoothing as described previously, the estimated point sets of edge pixels of the sample images are denoted as {Ê_k : k = 1, 2, …, m}. The estimated point set of edge pixels of the estimated true image Î_IC is denoted as Ê_0. Since we are considering the edge pixels as a point set, a very well-known measure of the dissimilarity between two point sets is the Hausdorff distance. In the image processing literature, it is very popular, with many applications in image matching, object tracking, image comparison, and so on. It measures how close two subsets of the same metric space are to each other. Being a distance measure on a metric space, it has the following properties: the Hausdorff distance between two sets is bounded if the sets are bounded; the Hausdorff distance between two sets having the same closure is zero; and the triangle inequality holds for the Hausdorff distance.
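For two finite point sets A and B, the Hausdorff distance is D_H(A, B) = max{ max_{a∈A} min_{b∈B} ||a − b||, max_{b∈B} min_{a∈A} ||a − b|| }, and it can be computed directly. A minimal sketch (the function name is ours; for large point sets a KD-tree-based implementation such as scipy's directed_hausdorff would be preferable):

```python
import numpy as np

def hausdorff_distance(A, B):
    """Hausdorff distance between two finite point sets, given as
    (n, 2) arrays of pixel coordinates."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    # D[i, j] = Euclidean distance between A[i] and B[j]
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    # maximum of the two directed distances h(A, B) and h(B, A)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = [(0, 0), (1, 0)]
B = [(0, 0), (4, 0)]
# h(A, B) = 1, h(B, A) = 3, so D_H(A, B) = 3
d = hausdorff_distance(A, B)
```

The asymmetry of the two directed distances is exactly what makes D_H sensitive to individual outlying points, a property discussed later in the numerical studies.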
In our context, images are defined on the compact design space [0, 1] × [0, 1] with equally spaced discrete pixel coordinates. Hence the distance is bounded, and the smaller the value of the Hausdorff distance between the point sets of detected edge pixels of two images, the more similar the images are. Therefore, the Hausdorff distance between point sets of detected edge pixels is a natural dissimilarity measure for forming an edge-based control chart to monitor an image process over time.
To construct the control chart, we first have to define the charting statistic that will be used to construct the upper control limit (UCL) for the process and to help practitioners detect changes in the online monitoring stage. From the previous section, we know that the estimated point set of the edge pixels of an in-control (IC) image is defined as Ê_0, and {Ê_i : i = 1, 2, …, m} are the corresponding estimated point sets of edge pixels of the sample images. A natural charting statistic can be defined as

T_i = D_H(Ê_i, Ê_0),  i = 1, 2, …, m,

where D_H denotes the Hausdorff distance. Then, the chart gives a signal if T_i > t*, where t* > 0 is a control limit. The value of t* can be obtained from the (1 − α)-quantile of the empirical distribution of the charting statistic, where α is a given nominal IC false alarm rate (FAR). To obtain the empirical distribution of the charting statistic, a bootstrap technique can be used.

Phase-II monitoring of image data
Let us denote the true in-control (IC) image as I_IC. In reality, it is usually unknown and has to be estimated based on an observed Phase-I image sample {I_1, I_2, …, I_m} of size m. As discussed before, the estimated true image is defined as Î_IC = (1/m) Σ_{i=1}^{m} I_i, and the point set of detected edge pixels of this image is denoted as Ê_0. In the Phase-II step, the l-th observed image can be expressed by the 2-D JRA model

Z_ijl = f_l(x_i, y_j) + ε_ijl,  i, j = 1, 2, …, n,

where Z_ijl is the observed image intensity and the other related variables are as described for the model in Section 4.2. Image registration is a necessary step in Phase-II monitoring as well. To register the images, we use any IC image as a baseline image. There will not be substantial variation if different IC images are used as the baseline image.
After registration, we estimate the point sets corresponding to the edge pixels of the images in Phase-II using the LLK method, and these are denoted as {Ê_l : l = 1, 2, …}. Then, the Hausdorff distance between the edge point set of the l-th Phase-II image and the edge point set of the mean image is calculated as

T_l = D_H(Ê_l, Ê_0).

The chart gives a signal of change if T_l > t_0, where t_0 is chosen in such a way that the chart attains the prefixed ARL_0 value.

Construction of the control limit with a pre-fixed ARL_0
Without any loss of generality, we take the first IC image as the reference image. We apply a local piecewise smoothing procedure for image surface estimation and calculate the residuals {ê*(x_i, y_j)} and σ̂ from the reference image. Traditionally, to generate bootstrapped images, one draws a re-sample of size n² from the set of residuals {ê*(x_i, y_j) : i, j = 1, 2, …, n}. However, in our example, the residuals around the estimated JLCs could be large, and as a result, there would be a few falsely detected edge pixels at the time of edge detection in the bootstrapped images. To overcome this problem, we generate the l-th simulated image by adding Gaussian noise N(0, σ̂²) to each pixel, and calculate the statistic T_l based on this generated bootstrapped sample image. For a given value of t_0, we continue the above process until we get a signal of change. Thus, one run is obtained, and the corresponding run length is calculated. Finally, we repeat the above two steps B times and average those B run lengths to estimate the actual value of the ARL_0. If the estimated value based on B replications is smaller than the pre-fixed ARL_0, then we increase the specified value of t_0; if the estimated value of the ARL_0 is larger than the pre-fixed ARL_0, then we decrease the specified value of t_0.
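The trial-and-error search described above can be organized as a bisection over t_0, with the in-control ARL estimated by Monte Carlo at each candidate limit. This is a schematic sketch: sample_statistic is a placeholder of ours for drawing one in-control charting statistic T_l (e.g., the Hausdorff statistic computed from a noise-perturbed image), and the function names and default values are illustrative.

```python
import numpy as np

def estimate_arl(t0, sample_statistic, B=200, max_run=10_000, rng=None):
    """Monte Carlo estimate of the in-control ARL for control limit t0.
    `sample_statistic(rng)` draws one in-control charting statistic T_l."""
    rng = rng if rng is not None else np.random.default_rng()
    run_lengths = []
    for _ in range(B):
        rl = 1
        while sample_statistic(rng) <= t0 and rl < max_run:
            rl += 1                      # no signal yet: extend the run
        run_lengths.append(rl)
    return float(np.mean(run_lengths))   # average of B run lengths

def calibrate_t0(target_arl, sample_statistic, lo, hi, iters=20, **kw):
    """Bisection form of the trial-and-error search: raise t0 when the
    estimated ARL is too small, lower it when too large."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if estimate_arl(mid, sample_statistic, **kw) < target_arl:
            lo = mid                     # estimated ARL too small -> increase t0
        else:
            hi = mid                     # estimated ARL too large -> decrease t0
    return (lo + hi) / 2
```

For instance, if T_l were Uniform(0, 1), the exact in-control ARL at limit t_0 would be 1/(1 − t_0), so calibrating to ARL_0 = 10 should return t_0 near 0.9.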

Selection of procedure parameters
In the proposed image monitoring procedure, the threshold parameter v_n in Section 3 should be chosen carefully to achieve good edge detection performance.
If no jump exists in a neighborhood of (x, y), then for fixed b̂(x, y) and ĉ(x, y), the quantity

[(b̂(x, y) − b̂_{P_1}(x, y))² + (ĉ(x, y) − ĉ_{P_1}(x, y))²] / σ²_{P_1}

approximately follows a χ²_2 distribution, where σ²_{P_1} = Var(b̂_{P_1}(x, y)). Then, from expression (2), the threshold v_n can be determined using χ²_{2,1−α_n}, the (1 − α_n) quantile of the χ²_2 distribution, together with σ̂, a consistent estimator of σ. A natural choice for σ̂ is based on the mean of the squared residuals of the LLK (local linear kernel) estimator of the true intensity function f using the circular neighborhood of radius h_n. Moreover, α_n can be specified beforehand to be a small number, say α_n = 0.01.

Selection of the bandwidth parameter h n
In this paper, we consider choosing h_n using a bootstrap procedure. A larger value of h_n detects several pixels around the JLCs as edge pixels, while a smaller value of h_n fails to detect several true edge pixels. Therefore, it is safer to take a relatively large bandwidth. However, in such cases, the proposed monitoring method will fail to detect a small change near the JLCs. Based on our numerical experience, we suggest choosing h_n ∈ [1.6/n, 2.5/n]. In the simulation studies presented in Section 8, we choose h_n = 2/n. Readers are referred to Section 10.3 for more details about the bootstrap algorithm for the selection of h_n.

Statistical property
In this section, we discuss some statistical properties of the edge detection procedure described in Section 3. The following proposition shows that the set of estimated jump points converges almost surely, in Hausdorff distance, to the set of true jump points. In our description, a point (x, y) is called a singular point if one of the following conditions is satisfied: (i) there exists some ρ* > 0 such that, for any 0 < ρ < ρ*, the circular neighborhood of (x, y) with radius ρ contains more than two connected regions; (ii) the jump size at (x, y) is zero.
We have defined earlier Ê_n = {(x_i, y_j) : κ(x_i, y_j) > v_n} as the set of detected edge points. The proposition states that D_H(Ê_n, E) → 0 almost surely as n → ∞, where E denotes the set of nonsingular true jump points and D_H(A, B) is the Hausdorff distance between two point sets A and B. This proposition establishes the strong consistency of the set of edge pixels detected by the aforesaid procedure. The proof of the proposition is given in the supplementary file.

Numerical studies
In this section, we numerically assess the performance of our proposed method in comparison with a number of methods from the literature. We perform comparative studies on both artificially created toy images and real images.
In the manufacturing industry, any change in intensity values in a region is very common and often considered a fault. However, in many applications, such as satellite imaging and medical imaging, such changes should not be taken as meaningful faults or changes in the underlying image process. See Figure 4 for this type of situation. In the literature, the vast majority of methods are intensity based, and therefore they are very sensitive to small changes in intensity in a region. Consequently, such methods are very sensitive to noise, and the proportion of false detections by these methods is large. The proposed method primarily focuses on major image features such as edges and monitors changes in those features, rather than monitoring changes in intensities alone. In this regard, the proposed method is particularly useful for monitoring satellite images and medical images.
We discuss the performance for Phase-I and Phase-II separately. Here we consider the following methods for numerical comparison: the wavelet-based image monitoring method proposed by Koosha, Noorossana, and Megahed (2017), and the Phase-I and Phase-II monitoring of spatial surface data from 3D printing proposed by Zang and Qiu (2018a, 2018b).
The above methods are comparable in the sense that they do not take into account the correlation structure of the images, and they also assume independence among the images at different time points.

Brief description of the competing methods
The wavelet-based method for image monitoring uses a nonparametric profile monitoring approach in which each image is decomposed into a set of 1-D profiles that are monitored as a whole using a GLR control chart. To extract the frequency domain features, the method applies a wavelet transformation and monitors the extracted features over time. In Koosha, Noorossana, and Megahed (2017), the wavelet coefficients are termed frequency domain features, and the "Haar" basis function is used to obtain the wavelet coefficients. Suppose we have m 2-D grayscale images of size n × n. Each row of an image is considered a 1-D profile, and suppose there are d estimated features from each profile. Therefore, each image feature is an nd × 1 dimensional vector. In Phase-I, there are m images, so we get m coefficient vectors, each of dimension nd × 1. Suppose the coefficient vector of an image is denoted as C⊤ = (c_11, …, c_1d, …, c_n1, …, c_nd). Then, we are interested in detecting changes in the coefficients over time. To monitor the images in Phase-II, the GLR control chart statistic R_s of Koosha, Noorossana, and Megahed (2017) for the s-th Phase-II image is defined in terms of ĉ_{1,s}(k) = s⁻¹ Σ_{t=1}^{s} ĉ_{t,k}, where s = 1, 2, … is the image number in the online monitoring step, ĉ_{t,k} is the k-th element of the C vector for the t-th sample, c_0(k) is the k-th element of the C vector corresponding to the nominal image, and σ̂²_k is the variance of the k-th coefficient computed from the m Phase-I images. In simulation studies, the nominal images are known to us, but in a real-life scenario we have to estimate the nominal image from the m Phase-I images. Koosha, Noorossana, and Megahed (2017) introduce another parameter and also maximize the statistic over that parameter to estimate the change point. However, since we are not interested in estimating the change point, we modify the statistic as above. The chart gives a signal at the s-th image if R_s > c_R, where c_R
is a threshold value calculated by a trial-and-error method. Now, we discuss another existing method, based on Phase-I and Phase-II monitoring of 3D printing (Zang and Qiu 2018a, 2018b). This is a potential competing method in the sense that the image intensity can always be thought of as a 3-D surface, and the method also assumes spatial independence on the object surface. Here, we assume that the images are already geometrically aligned; if not, we can use an appropriate image registration technique to register them. Consider the model in the Phase-I step as in Eq. [5], where the variables are the same as defined above. Then, the control chart statistic Q_ART is defined in terms of f̂_k, a jump-preserving surface estimate of f_k, and the estimated intensity values at the (i, j)-th positions from the Phase-I images. For more details about jump-preserving surface estimation, readers are referred to Qiu (2009). Zang and Qiu (2018b) use a CUSUM statistic in the Phase-II stage. However, to make it comparable with our proposed method, we use a Shewhart-type chart based on the statistic in (9). The chart gives an out-of-control signal if Q_ART > c_ART, where c_ART is the (1 − α)-quantile of the empirical distribution of Q_ART. If the L_2 norm is used instead of the L_1 norm in the chart statistic, the statistic is denoted by Q_SRT.

Performance evaluation measures
To evaluate the performance of the proposed method in comparison with the existing methods mentioned above, we use several measures for the different phases of monitoring. In the Phase-I stage, to compare the out-of-control (OC) performances, we use two measures: the fraction correctly classified (FCC) and the false positive proportion (FPP) (Chen, Birch, and Woodall 2015). The first criterion, the fraction correctly classified (FCC), is defined as the proportion of samples in repeated simulations that are correctly classified. For example, if there are 20 in-control (IC) images and 10 out-of-control (OC) images in a single replication, and a specific chart gives an out-of-control signal for 5 IC images and 8 OC images, then its FCC is calculated as (15 + 8)/(20 + 10) ≈ 0.77. The second criterion for Phase-I OC performance, the false positive proportion (FPP), is the proportion of false signals among all signals.
In the above example, the FPP is calculated as 5/(5 + 8) ≈ 0.38. In Phase-II, performance evaluation is done using three different criteria: the average run length (ARL), the median run length (MRL), and the standard deviation of the run length, which quantify how quickly the proposed method and the competing methods detect faults. Note that, in the Phase-II stage, we use only the zero-state calculation, i.e., in each simulated process monitoring run, the OC images start at the first observation time. As the proposed method is a Shewhart-type control procedure, in Phase-II the charting statistic depends only on the current image and not on any image in the past. Moreover, we assume that the Phase-I samples are independent across different time points. Also, no temporal aggregation is needed, since at each time point we have only a single image (Zwetsloot and Woodall 2021). Therefore, instead of the steady-state calculation, the zero-state calculation is a reasonable choice in our context.
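For reference, the two Phase-I measures can be computed from the signal counts as follows. This is a small helper of our own; FPP here follows the stated definition, the proportion of false signals among all signals.

```python
def fcc_fpp(n_ic, n_oc, ic_signals, oc_signals):
    """FCC and FPP from Phase-I signal counts: `ic_signals` of the n_ic
    in-control images and `oc_signals` of the n_oc out-of-control
    images received an out-of-control signal."""
    correctly_classified = (n_ic - ic_signals) + oc_signals
    fcc = correctly_classified / (n_ic + n_oc)
    fpp = ic_signals / (ic_signals + oc_signals)  # false among all signals
    return fcc, fpp

# The worked example: 20 IC and 10 OC images, 5 false and 8 true signals.
fcc, fpp = fcc_fpp(20, 10, 5, 8)   # fcc = 23/30, fpp = 5/13
```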

Simulations
In this section, we simulate various out-of-control images and compare the performance of our proposed method with the competing methods mentioned above. In our simulation, the true image intensity function of the in-control (IC) image has the expression

f_0(x, y) = I(0.31 < x < 0.58, 0.31 < y < 0.54),  (x, y) ∈ [0, 1] × [0, 1],

where I(·) is the indicator function. The true IC image is shown in Figure 2a. We consider four out-of-control (OC) images, denoted OC Image-1, OC Image-2, OC Image-3, and OC Image-4, respectively. To generate the Phase-I and Phase-II images, various types of noise, such as Poisson, salt-and-pepper, or Gaussian noise, can be added as white noise to the nominal image (f_0). Here we added zero-mean Gaussian noise with standard deviation σ = 0.05. For determining an appropriate noise level, the signal-to-noise ratio (SNR) is an important measure (cf. Koosha, Noorossana, and Megahed 2017). Figure 3 shows the simulated OC images. The size of all of these simulated images is 128 × 128 (i.e., n = 128). When the noise level is high, our edge detection method falsely detects several edge pixels in the background of the images. To remove those falsely detected edge pixels, we run the following adjustment procedure:

1: a ← BinaryEdgeMatrix
2: for i := 1 to 128 do
3:   for j := 1 to 128 do
4:     if a[i, j] = 1 and Σ_{k=i−1}^{i+1} Σ_{l=j−1}^{j+1} a[k, l] ≤ 2 then
5:       a[i, j] ← 0
6:     end if
7:   end for
8: end for

Note that for elements near the boundaries of the image matrix, we use the mirroring technique, so that no subscript falls out of bounds in the above algorithm. Table 1 shows the control limits for the methods to achieve the in-control ARL, i.e., ARL_0 ≈ 50, when the size of the Phase-I data is m = 10 or m = 20. Note that the values of m considered in this study are relatively small compared to the conventional SPC literature, because in most applications in medical diagnostics, satellite imaging, etc., the number of images of the same object
is typically small, on the order of tens or even fewer in some cases. Therefore, it is reasonable to construct the control chart based on a small number of Phase-I samples and a correspondingly small pre-fixed ARL_0. To find the empirical distribution, a bootstrap technique can be used. However, to set the control limit at the pre-fixed ARL_0 value, we use a trial-and-error method. Readers are referred to Section 5 for more details on determining the control limit with a pre-fixed ARL_0 value. Here, D_new corresponds to the proposed method, R_k is the statistic (8) based on the wavelet-based technique, and Q_ART corresponds to the surface monitoring method defined in (9). From the definitions of FCC and FPP, we expect a large FCC value and a small FPP value if a monitoring method works well.
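The edge-pixel adjustment procedure described above can also be written in vectorized form. One caveat of ours: the pseudocode updates the matrix in place (so earlier removals can affect later neighborhood sums), whereas this sketch applies the rule simultaneously to the original matrix; borders are handled by mirroring via np.pad.

```python
import numpy as np

def remove_isolated_edges(a):
    """Zero out any edge pixel whose 3x3 neighborhood (with mirrored
    borders) contains at most two edge pixels, i.e., the pixel itself
    plus at most one neighbor. Applied in one simultaneous pass."""
    p = np.pad(a, 1, mode="symmetric")       # mirroring at the borders
    # s[r, c] = sum of the 3x3 neighborhood of a[r, c]
    s = sum(p[i:i + a.shape[0], j:j + a.shape[1]]
            for i in range(3) for j in range(3))
    return np.where((a == 1) & (s <= 2), 0, a)

# An isolated edge pixel is removed.
a = np.zeros((8, 8), dtype=int)
a[4, 4] = 1
cleaned = remove_isolated_edges(a)   # all zeros
```

Interior pixels of a connected edge segment have at least two neighbors and therefore survive the cleanup.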
FCC and FPP measure the performance of the methods in the Phase-I step. In this study, to calculate the FCC and FPP values, we consider 500 simulated in-control images and 500 simulated out-of-control images. From Table 2, we observe that for each type of image, D_new has large FCC and small FPP values. OC Image-3 deviates only a little from the in-control image, and still our proposed method performs well. There is a substantial difference in the FCC and FPP values relative to the other competing methods, which confirms the effectiveness of our proposed method. Moreover, for OC Image-3, neither of the competing methods works well in the Phase-II stage, but our proposed method still gives an excellent result. An advantage of the proposed method is that if there is a prominent change, even in a very small region of an image, it is capable of detecting it. Now, we discuss various scenarios that demonstrate the effectiveness of our proposed method over the competing methods and show why edge-based image surveillance is important. The IC image is shown in Figure 2b. Case A: Here, the image object is the same, but the contrast is changed; see Figure 4a for this scenario. This type of situation is very common in real life. For instance, we capture an image of an object with no shadow on it, and then capture another image from the same position in the presence of a shadow in the background or on the image object itself. Both images will show the same object, but there will be changes in the intensity values. Case B: In this situation, the image object remains the same, but there is a smooth change in the background.
For instance, we capture an image in the presence of diffused light in the background. Figure 4b presents one such example. Case C: Here, the noise level of the images is different from the noise level of the IC image. This is very relevant in real-life scenarios, because it is not always possible to capture images at a fixed noise level. In our example, the IC image in Figure 2b has noise level σ = 0.05, and the image in Figure 4c has noise level 0.06.
Since the image object is the same in all these cases, the images are in control, and ARL_1 should be the same as ARL_0. From Table 3, it is clear that in all the cases mentioned above, the proposed method performs well, as indicated by its large FCC and ARL_1 values. The wavelet-based method performs well in the last case, but the surface-based method is not appropriate for these types of situations. Therefore, despite good performance in certain situations, one should be careful when using the competing methods in practical applications.
Next, we perform a comparison study on real images from a production process. We consider images of a smooth steel surface as IC images and insert various types of simulated faults. See Figure 5a for an IC image. This is a grayscale image, and we resize it to 128 × 128. To artificially create Phase-I data, we add Gaussian noise with mean zero and standard deviation 0.001. Table 4 shows the control limits for different Phase-I sample sizes.
For the comparison study in Phase II, we incorporate various types of shifts to create OC images. In Figure 5b and c, we consider scratch and circular faults, respectively, on the steel surface; these are among the most common defects in the manufacturing industry. Table 5 shows the performance measures for this example, where the in-control ARL is 20. For the steel surface image with a scratch, all methods perform well based on the out-of-control ARL measures, but based on FCC and FPP, the surface-based method and the proposed method perform relatively better than the wavelet-based method. However, for the image with the circular fault, the proposed method is not up to the mark compared with the competing methods. This is mainly due to false edge detection. A major limitation of the method is that it is highly affected by false jumps in the background, because the Hausdorff distance is very sensitive to individual points in the related point sets.
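This sensitivity is easy to see directly: the Hausdorff distance between two point sets is driven by the single worst-matched point, so one falsely detected edge pixel can inflate the chart statistic. A minimal sketch (the point sets and the `hausdorff` helper are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets A and B,
    each a (k, 2) array of pixel coordinates."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Edge pixels of the nominal image: a short horizontal boundary.
nominal = np.array([[i, 10.0] for i in range(20)])
print(hausdorff(nominal, nominal.copy()))   # 0.0 for identical edge sets

# One spurious edge pixel far from the true boundary dominates the
# distance: the chart statistic jumps even though the object's
# boundary itself is unchanged.
with_false_edge = np.vstack([nominal, [[50.0, 50.0]]])
print(hausdorff(nominal, with_false_edge))
```

This is why false-edge removal in the background matters for the circular-fault example above.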

Real data example
In this section, we consider a real image example where we are interested in monitoring brain tumor growth (Stamatakos and Giatili 2017). The panels of Figure 6 show tumor growth at day 0, day 60, day 120, and day 180, from left to right. As a pre-processing step, we convert the original color images to grayscale. Moreover, we resize each image to a resolution of 128 × 128 and rescale the image intensities to the range 0 to 1. In this study, we consider the day-0 brain image as the IC image, and we examine three different cases. In the first case, our objective is to check the ability to detect the change when the day-180 image is the OC image. Similarly, in the other two cases we consider the day-120 or the day-60 image as the OC image. For each case, we calculate the FPP and FCC values and the corresponding out-of-control ARL value, denoted ARL_OC^(k), k = 180, 120, 60. To generate Phase-I and Phase-II images, we add zero-mean Gaussian noise with standard deviation 0.004. Table 6 presents the control limits for the competing methods when the number of in-control Phase-I images is 10 and 20, respectively. In this example, we fix the in-control ARL value at 10, and to calculate the control limits, we replicate each set of simulations 200 times.
From Table 6, it is clear that for our proposed method the control limit changes negligibly with the value of m, whereas for the other methods significant changes are evident. Table 7 shows the different performance metrics. Since the noise level is not high, we do not apply any algorithm for false-edge removal in the background. Because there is no substantial change in the control limit, we use samples of size m = 10 for all the calculations. Since the amount of change in the images over time is relatively large, all the methods perform very well and give perfect detection. However, looking at the FCC and FPP values, the proposed method gives a high FCC and a low FPP value, which is very important in medical science. The wavelet-based method performs comparatively well relative to the surface-based method. To calculate the FCC and FPP, we take 1000 in-control images and 1000 out-of-control images. In the Phase-II stage, we calculate the out-of-control ARL for each set of Phase-II samples, and to obtain the mean, median, and standard deviation of the out-of-control ARL, we replicate the process for 100 sets of Phase-II samples. Based on the Phase-I metrics, the proposed method outperforms its competitors.

Change point estimation
Post-signal diagnosis is an important task in any process monitoring procedure. A major step in post-signal diagnosis is to accurately estimate the time point(s) at which the process becomes OC, i.e., to estimate the change point of the process carefully. Since the proposed image monitoring method is a Shewhart-type procedure, ŝ, an estimate of the change point, is given by the first time point at which the chart signals. The proposed method is capable of estimating the change point almost exactly, as indicated in Table 2. Table 8 presents the performance of the change point estimation. In Phase-II monitoring, we set the change point by adding a sustained shift from time point s = 6 onward; that is, an OC image is incorporated in the process at time point 6.

Step 3: To find the estimated jump points, apply the proposed jump detection algorithm of Section 3 for a fixed value of h_n. Step 4: Repeat Steps 2 and 3 B times, where B is a sufficiently large number. The sets of detected jump points from the bootstrap samples are denoted Ê*_1n, Ê*_2n, ..., Ê*_Bn, respectively. The bootstrap estimator of the distance between the estimated and the true jump points is then defined as D_H^BT(Ê_n, E; h_n) = (1/B) Σ_{b=1}^B D_H(Ê*_bn, Ê_n). The optimal bandwidth is approximated by minimizing D̂_H^BT(Ê_n, E; h_n) with respect to h_n. Figure 8 shows a plot of the distance between the estimated point set and the true jump points for different values of h_n, for the reference image in Figure 2b. Based on this plot, we suggest choosing h_n ∈ [1.6/n, 2.5/n]; in particular, h_n = 2/n works well in most applications. Readers are referred to Mukherjee and Qiu (2011) for selecting h_n by a cross-validation method.
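The bootstrap bandwidth-selection steps above can be sketched as follows. This is a minimal sketch with a toy gradient-threshold detector standing in for the paper's jump detection algorithm; `detect_jumps`, `bootstrap_distance`, the noise-perturbed bootstrap copies, and all parameter values are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def hausdorff(A, B):
    """Hausdorff distance between two (k, 2) point sets."""
    if len(A) == 0 or len(B) == 0:
        return np.inf
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def detect_jumps(img, h, thresh=0.1):
    """Toy stand-in for the jump detector: box-smooth the image
    with half-width h, then threshold the gradient magnitude."""
    n0, n1 = img.shape
    pad = np.pad(img, h, mode="edge")
    sm = np.zeros_like(img)
    for dx in range(-h, h + 1):
        for dy in range(-h, h + 1):
            sm += pad[h + dx:h + dx + n0, h + dy:h + dy + n1]
    sm /= (2 * h + 1) ** 2
    g0, g1 = np.gradient(sm)
    return np.argwhere(np.hypot(g0, g1) > thresh).astype(float)

def bootstrap_distance(img, h, B=20, sigma=0.05):
    """Average Hausdorff distance between the jump set detected on
    the observed image and those detected on B noise-perturbed
    copies, for a candidate bandwidth h (Steps 3-4 above)."""
    E_hat = detect_jumps(img, h)
    dists = [hausdorff(E_hat,
                       detect_jumps(img + rng.normal(0, sigma, img.shape), h))
             for _ in range(B)]
    return float(np.mean(dists))

# Toy example: a vertical step edge in a 32 x 32 image.
truth = np.zeros((32, 32)); truth[:, 16:] = 1.0
obs = truth + rng.normal(0, 0.05, truth.shape)

# Minimize the bootstrap distance over a grid of candidate bandwidths.
scores = {h: bootstrap_distance(obs, h) for h in (1, 2, 3)}
h_opt = min(scores, key=scores.get)
print(h_opt, scores[h_opt])
```

The candidate grid here plays the role of the interval [1.6/n, 2.5/n] recommended in the text.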

Conclusion
In this article, we propose a new procedure for monitoring image data: a Shewhart-type control chart based on detected edges. The method is very simple to construct and convenient to use even in the high-dimensional setting of image monitoring. Like the Shewhart chart, the proposed control chart monitors the process based on the observed image at a single time point. Therefore, it is useful for detecting relatively large and transient shifts in the images, but it is not designed for detecting small persistent shifts. In our method, image registration is used as a pre-processing step. However, a method that handles image registration by its own construction and design may be more suitable in real applications.

About the authors
Anik Roy is currently a research fellow at Indian Statistical Institute, Kolkata, and he is pursuing his doctoral studies in statistics.His research interests include control charts for image processing.
Partha Sarathi Mukherjee is currently an associate professor at Indian Statistical Institute, Kolkata.His research interests include image processing and statistical process control.

Figure 1 .
Figure 1. Three Landsat images (left to right) taken during winter from 2014 to 2016, showing the change in the depth and breadth of the seasonal snowpack in the Sierra Nevada. In 2015, prolonged hot and dry weather in the California and Nevada region reduced the snowpack to a very low level.
4.3. Construction of the control chart for image monitoring

We now construct an edge-based control chart based on sample images of size m. As mentioned earlier, most image monitoring methods in the literature are intensity based, and a vast majority of them use a GLR or MGLR chart to construct the control chart.
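A minimal sketch of an edge-based Shewhart-type chart of this kind: estimate the in-control distribution of the Hausdorff distance between detected edges and the nominal edge set in Phase I, set the control limit at an upper quantile, and signal in Phase II whenever the distance exceeds it. All helpers (`square_edges`, `detected_edges`) and parameter values below are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def hausdorff(A, B):
    """Hausdorff distance between two (k, 2) point sets."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def square_edges(lo, hi):
    """Boundary pixels of an axis-aligned square: a stand-in for
    the edge set of an image object."""
    side = range(lo, hi + 1)
    return np.array([(x, y) for x in side for y in side
                     if x in (lo, hi) or y in (lo, hi)], float)

def detected_edges(E, sigma):
    """Stand-in for edge detection on a noisy image: the true edge
    set perturbed by small pixel-level localization errors."""
    return E + rng.normal(0, sigma, E.shape)

nominal = square_edges(10, 20)   # edge set of the estimated IC image

# Phase I: empirical in-control distribution of D_H; the upper
# control limit is its (1 - alpha) quantile, here alpha = 0.05.
phase1 = [hausdorff(nominal, detected_edges(nominal, 0.2))
          for _ in range(200)]
UCL = np.quantile(phase1, 0.95)

# Phase II: the chart signals whenever the distance exceeds UCL.
shifted = square_edges(10, 24)   # the object boundary has moved
d_oc = hausdorff(nominal, detected_edges(shifted, 0.2))
print(d_oc > UCL)
```

Because only the edge sets enter the statistic, contrast or background changes that leave the boundary intact (Cases A and B above) do not trigger a signal.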

Figure 2 .
Figure 2. (a) True simulated image, (b) observed Phase-I simulated image, (c) edge-detected image of the estimated nominal image.

Figure 3 .
Figure 3. Different out-of-control images (without noise).

Figure 4 .
Figure 4. Different in-control images. (a) Case A: the IC image captured in the presence of a light shadow; (b) Case B: the image object remains the same but the background changes smoothly; (c) Case C: the same image with a slightly higher noise level.

Figure 6 .
Figure 6. Visualization of virtual glioblastoma tumor growth in vivo on a coronal slice at various time points.

Figure 7 .
Figure 7. (a) Actual change for OC Image-1, (b) Estimated change regions for OC Image-1, (c) Actual change for OC Image-3, and (d) Estimated change region for OC Image-3.

Figure 8 .
Figure 8. Hausdorff distance between the point set of estimated jump points and the set of true jump points, for different values of h_n.

The local gradient estimate at (x, y) is given by b(x, y) = (b̂(x, y), ĉ(x, y))′, which carries information about a jump at the point (x, y). A large value of b(x, y) indicates that the underlying regression function f is steep around (x, y), so (x, y) is a possible jump point, i.e., an edge pixel. However, when f is steep but continuous in the circular neighborhood B_{h_n}(x, y), the value of b(x, y) would still be large. Therefore, a large value of b(x, y) does not guarantee that (x, y) is a jump point. To remove this slope effect, we consider two neighboring points (x_{P1}, y_{P1}) and (x_{P2}, y_{P2}) whose distance from (x, y) is 2h_n, so that there is no overlap among the circular neighborhoods of the three points (x_{P1}, y_{P1}), (x, y), and (x_{P2}, y_{P2}). Assume that f has continuous first-order partial derivatives over (0, 1) × (0, 1) except on the JLCs, at which it has first-order right and left partial derivatives; h_n = o(1), 1/(n h_n) = o(1), and log^2(n). Then we have Proposition 1.
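The slope-effect check described above can be illustrated in one dimension: a genuine jump produces a large local slope estimate at the candidate point but not in the two non-overlapping flanking windows at distance 2h_n, whereas a steep but continuous ramp produces comparable slopes in all three windows. The decision rule and all constants below are illustrative, not the paper's exact criterion:

```python
import numpy as np

def local_slope(f, i, h):
    """Least-squares slope of f over the window [i - h, i + h]:
    a 1-D analogue of the gradient magnitude b(x, y)."""
    x = np.arange(-h, h + 1)
    return abs(np.polyfit(x, f[i - h:i + h + 1], 1)[0])

n, h = 200, 5
x = np.arange(n)
profiles = {
    "step": (x >= 100).astype(float),  # a genuine jump at i = 100
    "ramp": x / n,                     # steep but continuous trend
}

results = {}
for name, f in profiles.items():
    b0 = local_slope(f, 100, h)
    # flanking windows centered 2h away, so the three windows do not overlap
    b1 = local_slope(f, 100 - 2 * h, h)
    b2 = local_slope(f, 100 + 2 * h, h)
    results[name] = bool(b0 > 2 * max(b1, b2))  # hypothetical decision rule
    print(name, results[name])
```

For the step, the flanking slopes are essentially zero while the center slope is large, so the point is flagged; for the ramp, all three slopes are equal and the slope effect is correctly discounted.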

Table 1 .
Control limits of the charts when m = 10 or 20 and ARL_0 ≈ 50.

Table 2 .
Phase-I and Phase-II performances for OC images.

Table 3 .
Phase-I and Phase-II performances for cases A and B.

Table 4 .
Control limits of the charts when m = 10 and 20 for α = 0.1.

Table 5 .
Phase-I and Phase-II performance for the steel surface.

Table 7 .
Phase-I and Phase-II performance for the competing methods.