Introduction
The redundant wavelet transform (RWT) is widely used to denoise signals and images. Here we consider two denoising methods from the literature and apply them to astronomical images, with the aim of obtaining images in which very faint objects can be distinguished from noise.
The paper is organized as follows. In "Redundant Wavelet Transform", we introduce algorithms used to compute the RWT. In "Denoising Algorithms based on the RWT", we discuss denoising methods based on the RWT. In "Denoising Simulation", we describe the simulation and present the results from the implemented methods, which are further discussed in "Conclusions".
Redundant Wavelet Transform
Undecimated algorithm
The redundant discrete wavelet transform, similar in nature to the discrete wavelet transform, decomposes data into low-pass scaling (trend) and high-pass wavelet (detail) coefficients to obtain a projective decomposition of the data into different scales. More specifically, at each level the transform uses the scaling coefficients to compute the next level of scaling and wavelet coefficients. The difference is that none of these coefficients are discarded through decimation, as they are in the discrete wavelet transform; all are retained, introducing redundancy. This makes the transform well suited to denoising images, where the noise is usually spread over a small number of neighboring pixels. The Rice Wavelet Toolbox used to compute the transform in the simulation implements the redundant wavelet transform through the undecimated algorithm, which, as its name suggests, is identical to the discrete wavelet transform except that it omits downsampling (decimation) in the forward transform and upsampling in the inverse transform [1].
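The sketch below is not the Rice Wavelet Toolbox code used in the simulation; it uses PyWavelets' stationary wavelet transform, which follows the same decimation-free recursion, with a placeholder image and an assumed level count.

```python
import numpy as np
import pywt

# Toy 256x256 image standing in for Moon.tif / Cameraman.tif.
image = np.random.rand(256, 256)

# Undecimated (stationary) 2-D wavelet transform: every level keeps a full
# 256x256 set of scaling and wavelet coefficients -- nothing is decimated.
# 'db4' is PyWavelets' 8-tap Daubechies filter, as used later in the simulation.
coeffs = pywt.swt2(image, wavelet="db4", level=3)

# The inverse transform omits upsampling and reconstructs the image exactly.
print(np.allclose(pywt.iswt2(coeffs, wavelet="db4"), image))
```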
À trous algorithm
Another method of computing the redundant wavelet transform, the à trous ("with holes") algorithm, differs from the undecimated algorithm by modifying the low-pass and high-pass filters at each level. The algorithm upsamples the low-pass filter at each level by inserting zeros between the filter's coefficients. The high-pass coefficients are then computed as the difference between the low-pass images of two consecutive levels. To compute the inverse transform, the detail coefficients from all levels are added to the final low-resolution image [1]. While inefficient in implementation, the à trous algorithm provides additional insight into the redundant discrete wavelet transform.
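A minimal 1-D sketch of this recursion follows, assuming a simple placeholder smoothing filter rather than the length-8 Daubechies filters used later; the function names are illustrative only.

```python
import numpy as np

def atrous_dwt(signal, h=(0.25, 0.5, 0.25), levels=3):
    """Illustrative 1-D a trous decomposition: the low-pass filter is dilated
    by inserting zeros between its taps at each level, and each detail band
    is the difference between two consecutive smoothed signals."""
    approx = np.asarray(signal, dtype=float)
    details = []
    for j in range(levels):
        # Insert 2**j - 1 zeros between the filter taps (the "holes").
        dilated = np.zeros((len(h) - 1) * 2**j + 1)
        dilated[:: 2**j] = h
        smooth = np.convolve(approx, dilated, mode="same")
        details.append(approx - smooth)   # high-pass = difference of levels
        approx = smooth
    return details, approx

def atrous_idwt(details, approx):
    """Inverse: add every detail band back to the coarsest approximation."""
    return approx + np.sum(details, axis=0)

x = np.random.randn(64)
d, a = atrous_dwt(x)
print(np.allclose(atrous_idwt(d, a), x))   # exact reconstruction
```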
Denoising Algorithms based on the RWT
Soft-Thresholding
In the traditional method of soft-thresholding with the universal threshold, coefficients whose magnitude falls below the threshold λ = σ√(2 log N) are set to zero, while those above it are shrunk toward zero by λ [2]; here N is the number of data points and σ is the noise standard deviation. On orthogonal wavelet transforms, the universal threshold has been shown to exhibit the following property:
Theorem 1. For a sequence of i.i.d. random variables z_i ∼ N(0, 1), P( max_{i=1,…,N} |z_i| ≤ √(2 log N) ) → 1 as N → ∞.
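As an illustration, here is a minimal sketch of universal-threshold soft-thresholding, assuming the noise level σ is known (in practice it would be estimated, for example with the MAD estimator discussed below); the function name is illustrative.

```python
import numpy as np

def soft_threshold(coeffs, sigma):
    """Universal-threshold soft-thresholding: zero coefficients below
    lambda = sigma * sqrt(2 log N) and pull the rest toward zero by lambda."""
    lam = sigma * np.sqrt(2.0 * np.log(coeffs.size))
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)

# Example: noisy detail coefficients with unit-variance Gaussian noise.
rng = np.random.default_rng(0)
noisy = rng.normal(size=1024)
denoised = soft_threshold(noisy, sigma=1.0)
```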
Bivariate Shrinkage
Sendur and Selesnick [5] proposed a bivariate shrinkage estimator in which the marginal variance of the wavelet coefficients is estimated from small neighborhoods around each coefficient as well as from the corresponding neighborhoods of the parent coefficients. The resulting method maintains the simplicity and intuition of soft-thresholding.
We can write
y_k = w_k + n_k,  (1)

where w_k are the parent and child wavelet coefficients of the true, noise-free image and n_k is the noise. For the variances we then have

σ_y² = σ_k² + σ_n².  (2)

Since we always work with one coefficient at a time, we suppress the index k in what follows.
In [4], Sendur and Selesnick proposed the bivariate pdf for a wavelet coefficient w_1 and its parent w_2

p_w(w) = (3 / (2πσ²)) exp( −(√3 / σ) √(w_1² + w_2²) ),  (3)
where the marginal variance σ² depends on the coefficient index k. From this model they derived the MAP estimator

w_1 = [ ( √(y_1² + y_2²) − √3 σ_n² / σ )₊ / √(y_1² + y_2²) ] · y_1,  (4)

where (g)₊ denotes max(g, 0).
To estimate the noise variance σ_n² from the noisy wavelet coefficients, they used the median absolute deviation (MAD) estimator

σ_n² = ( median(|y_i|) / 0.6745 )²,   y_i ∈ subband HH,  (5)

where the median is taken over the wavelet coefficients of the finest-scale HH subband.
The marginal variance σ_y² was estimated using a neighborhood around each wavelet coefficient together with the corresponding neighborhood of its parent coefficient. For example, with a 7×7 window, the neighborhood of the coefficient y_1 at position (4,4) consists of the wavelet coefficients inside the square with corners (1,1), (1,7), (7,7), and (7,1), together with the parent-level coefficients at the same positions; this neighborhood is denoted N(k). The estimate used for σ_y² is
σ_y² = (1/M) Σ_{y_i ∈ N(k)} y_i²,  (6)
where M is the size of the neighborhood N(k). We can then estimate the standard deviation of the true wavelet coefficients through Equation 2:
σ = √( (σ_y² − σ_n²)₊ ).  (7)
We then have all the quantities needed to apply Equation 4.
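Putting Equations (4)-(7) together, the sketch below applies the bivariate shrinkage rule to one detail subband. It assumes child and parent coefficient arrays of the same size (natural for a redundant transform), uses SciPy's uniform_filter for the 7×7 neighborhood average, and its function names and small stabilizing constants are placeholders rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def bishrink(child, parent, sigma_n):
    """Bivariate shrinkage for one detail subband.
    child, parent: same-sized arrays of noisy wavelet coefficients."""
    # Equation (6): local marginal variance from a 7x7 neighborhood of the
    # child band together with the co-located parent coefficients.
    local_energy = uniform_filter(child**2 + parent**2, size=7) / 2.0
    # Equation (7): signal standard deviation, with a tiny constant added
    # to avoid division by zero in the shrinkage factor.
    sigma = np.sqrt(np.maximum(local_energy - sigma_n**2, 0)) + 1e-12
    # Equation (4): MAP shrinkage of the child coefficient.
    r = np.sqrt(child**2 + parent**2) + 1e-12
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n**2 / sigma, 0) / r
    return gain * child

def mad_sigma(hh_band):
    """Noise standard deviation (square root of Equation (5)),
    estimated from the finest-scale HH subband."""
    return np.median(np.abs(hh_band)) / 0.6745
```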
BLS-GSM
Portilla et al. [3] propose the BLS-GSM method for denoising digital images, which may be used with orthogonal and redundant wavelet transforms as well as with pyramidal schemes. They model neighborhoods of coefficients at adjacent positions and scales as the product of a Gaussian vector and a hidden positive scalar multiplier, with the neighborhoods defined similarly to those in the BiShrink algorithm. The coefficients within each neighborhood around a reference coefficient of a subband are thus modeled with a Gaussian scale mixture (GSM). The chosen prior distribution for the multiplier is the Jeffreys prior, p_z(z) ∝ 1/z.
They assume the image has additive white Gaussian noise, although the algorithm also allows for nonwhite Gaussian noise. For a vector y corresponding to a neighborhood of N observed coefficients, we have
y = x + w = √z · u + w.  (8)
The BLS-GSM algorithm is as follows (a sketch of the per-neighborhood step is given after the list):
- Decompose the image into subbands.
- For the HH, HL, and LH subbands:
  - Compute the noise covariance C_w from the image-domain noise covariance.
  - Estimate C_y, the covariance of the noisy neighborhoods.
  - Estimate C_u using C_u = C_y − C_w, setting any negative eigenvalues to zero.
  - Compute Λ and M, where S is the symmetric square root of the positive definite matrix C_w, {Q, Λ} is the eigenvector/eigenvalue decomposition of S⁻¹ C_u S⁻ᵀ, and M = S Q.
  - For each neighborhood, with v = M⁻¹ y:
    - For each value z in the integration range:
      - Compute E[x_c | y, z] = Σ_{n=1}^{N} z m_{cn} λ_n v_n / (z λ_n + 1).
      - Compute the conditional density p(y | z).
    - Compute the posterior p(z | y).
    - Compute E[x_c | y] by averaging E[x_c | y, z] over the posterior p(z | y).
- Reconstruct the denoised image from the processed subbands and the lowpass residual.
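Under the diagonalization above (with v = M⁻¹ y), the per-neighborhood step can be sketched as follows; the function, its arguments, and the treatment of the Jeffreys prior over a log-spaced grid of z values are assumptions made for illustration, not the reference implementation of [3].

```python
import numpy as np

def bls_gsm_neighborhood(y, M, lam, z_values, center):
    """Illustrative per-neighborhood BLS-GSM estimate of the reference
    coefficient x_c, given M = S Q, the eigenvalues lam of S^{-1} C_u S^{-T},
    and a log-spaced integration range for z (e.g. np.logspace(-4, 4, 17))."""
    v = np.linalg.solve(M, y)        # v = M^{-1} y
    m_c = M[center, :]               # row of M for the reference coefficient

    # E[x_c | y, z] for every z in the range (vectorized over z).
    zl = np.outer(z_values, lam) + 1.0                     # z*lambda_n + 1
    x_given_z = (np.outer(z_values, m_c * lam * v) / zl).sum(axis=1)

    # log p(y | z) up to a constant, using C_{y|z} = M (z Lambda + I) M^T.
    log_py_z = -0.5 * (v**2 / zl).sum(axis=1) - 0.5 * np.log(zl).sum(axis=1)

    # Posterior over the log-spaced z samples (the Jeffreys prior is absorbed
    # by the log spacing); shift before exponentiating for numerical stability.
    w = np.exp(log_py_z - log_py_z.max())
    w /= w.sum()

    # Bayes least squares estimate E[x_c | y].
    return (w * x_given_z).sum()
```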
Denoising Simulation
Simulation description
To compare and evaluate the efficacy of the Bishrink and BLS-GSM algorithms for denoising image data, a simulation was developed to quantitatively examine their performance after the addition of random noise to otherwise approximately noiseless images with a variety of features representative of those found in astronomical images. Specifically, we considered the images in the widely available files Moon.tif, which primarily exhibits smoothly curving features, and Cameraman.tif, which exhibits a range of both smooth and coarse features; both are distributed with the MATLAB Image Processing Toolbox.
As preparation for the simulation, each image was preprocessed into a grayscale pixel matrix taking values in the interval [0, 1], with square dimensions equal to a convenient power of two. Noisy versions of each image were generated by superimposing a random matrix of Gaussian-distributed pixel values on the image matrix, using noise variances {0.01, 0.1, 1}. For each noise variance and original image, 100 contaminated images were created in this way using a set of 100 different random-generator seeds, the same set for each noise level and original image. A redundant discrete wavelet transform of each contaminated image was computed using the length-8 Daubechies filters, and the denoised wavelet coefficients were estimated with both the Bishrink and the BLS-GSM algorithms as described above. Computing the inverse redundant discrete wavelet transform of the denoised coefficients then yielded 100 images denoised with the Bishrink algorithm and 100 images denoised with the BLS-GSM algorithm for each original image and noise variance.
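A compact sketch of the contamination step just described, with a random array standing in for the preprocessed images; the seed handling is an assumed way of producing the 100 seeded copies, and the transform and denoising steps are omitted (see the earlier sketches).

```python
import numpy as np

noise_variances = [0.01, 0.1, 1.0]
clean = np.random.rand(256, 256)   # stand-in for the preprocessed Moon/Cameraman image

# 100 seeded noisy copies per noise-variance level; the same seeds are
# reused for every noise level and original image.
noisy = {
    var: [clean + np.random.default_rng(seed).normal(scale=np.sqrt(var), size=clean.shape)
          for seed in range(100)]
    for var in noise_variances
}
```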
Using this simulated data, the performance of the two denoising methods on each image at each noise level was evaluated with the five statistical measures described here. The first was the mean square error (MSE), computed as the average over all 100 denoisings of

(1/n) Σ_{i=1}^{n} ( f̂(x_i) − f(x_i) )²,  (9)

where f(x_i) is the true image, f̂(x_i) is a denoised estimate, and n is the number of pixels. Related to this is the root mean square error (RMSE), the square root of the mean square error. A third was the root mean square bias (RMSB), calculated as

√( (1/n) Σ_{i=1}^{n} ( f̄(x_i) − f(x_i) )² ),  (10)

where f̄(x_i) is the average of f̂(x_i) over all 100 denoisings. Two more measures were also examined: the maximum deviation (MXDV), the average over all 100 denoisings of

max_i | f̂(x_i) − f(x_i) |,  (11)

and the L1 error, the average over all 100 denoisings of

Σ_{i=1}^{n} | f̂(x_i) − f(x_i) |.
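For concreteness, the sketch below shows how these five measures might be computed from a stack of denoised estimates; the array layout (runs × pixels) and the helper name are assumptions, not code from the simulation.

```python
import numpy as np

def error_measures(estimates, truth):
    """Compute the five summary measures from a stack of denoised estimates
    (shape: runs x pixels) and the flattened noise-free image."""
    diff = estimates - truth                        # f_hat(x_i) - f(x_i) per run
    mse  = np.mean(np.mean(diff**2, axis=1))        # Eq. (9), averaged over runs
    rmse = np.sqrt(mse)                             # square root of the MSE
    rmsb = np.sqrt(np.mean((estimates.mean(axis=0) - truth) ** 2))   # Eq. (10)
    mxdv = np.mean(np.max(np.abs(diff), axis=1))    # Eq. (11), averaged over runs
    l1   = np.mean(np.sum(np.abs(diff), axis=1))    # L1 error, averaged over runs
    return dict(MSE=mse, RMSE=rmse, RMSB=rmsb, MXDV=mxdv, L1=l1)
```

The results of this simulation now follow.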
Bishrink results
Noise variance 0.01:
Measure | Cameraman | Moon
MSE | 0.0019 | 0.0004 |
RMSE | 0.0442 | 0.0188 |
L1 | 2019.9 | 3160.4 |
RMSB | 0.0274 | 0.0117 |
MXDV | 0.3309 | 0.2634 |
Noise variance 0.1:
Measure | Cameraman | Moon
MSE | 0.0063 | 0.0012 |
RMSE | 0.0296 | 0.0345 |
L1 | 3612.4 | 5880.7 |
RMSB | 0.0568 | 0.0213 |
MXDV | 0.6147 | 0.4116 |
Noise variance 1:
Measure | Cameraman | Moon
MSE | 0.0173 | 0.0052 |
RMSE | 0.1315 | 0.0722 |
L1 | 6183.7 | 11839 |
RMSB | 0.0934 | 0.0389 |
MXDV | 0.8991 | 0.9774 |
BLS-GSM results
Noise variance 0.01:
Measure | Cameraman | Moon
MSE | 0.0015 | 0.0003 |
RMSE | 0.0390 | 0.0165 |
L1 | 1711.0 | 2718.6 |
RMSB | 0.0283 | 0.0141 |
MXDV | 0.3192 | 0.2635 |
Noise variance 0.1:
Measure | Cameraman | Moon
MSE | 0.0052 | 0.0008 |
RMSE | 0.0718 | 0.0288 |
L1 | 3111.5 | 4786.5 |
RMSB | 0.0583 | 0.0224 |
MXDV | 0.5862 | 0.3337 |
Noise variance 1:
Measure | Cameraman | Moon
MSE | 0.0136 | 0.0017 |
RMSE | 0.1167 | 0.0410 |
L1 | 5283.5 | 1500.2 |
RMSB | 0.0970 | 0.0346 |
MXDV | 0.7750 | 0.4614 |
Conclusions
The results obtained from this simulation allow us to evaluate the suitability of each of the two methods for the analysis of astronomical image data. As the quantitative results show, the BLS-GSM algorithm performed more accurately than the Bishrink algorithm in nearly every measure across both images and all noise levels. That does not, however, make it the method of choice in all circumstances. While BLS-GSM outperformed the Bishrink algorithm in the denoising simulation, the measures calculated for the Bishrink algorithm indicate that it also produced a reasonably accurate image estimate. Moreover, the denoised images produced by the Bishrink simulation exhibit less qualitative smoothing of fine features such as the craters of the moon and the grass of the field. The smoothing observed with the BLS-GSM algorithm could make classification of fine, dim objects difficult, as they are blended into the background. Thus, the ability of the Bishrink algorithm to preserve fine signal details while computing an accurate image estimate is likely to outweigh overall accuracy in applications that search for small, faint objects such as extrasolar planets, while the overall accuracy of the BLS-GSM algorithm recommends it for images with coarse, bright features.
References
- Gyaourova, A., Kamath, C., and Fodor, I. K. (2002). Undecimated wavelet transforms for image denoising. Technical Report UCRL-ID-150931, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory.
- Donoho, D. L. and Johnstone, I. M. (1994). Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3), 425-455.
- Portilla, J., Strela, V., Wainwright, M. J., and Simoncelli, E. P. (2003). Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Transactions on Image Processing, 12(11), 1338-1351.
- Sendur, L. and Selesnick, I. W. (2002). A bivariate shrinkage function for wavelet based denoising. In Proceedings of IEEE ICASSP 2002.
- Sendur, L. and Selesnick, I. W. (2002). Bivariate shrinkage with local variance estimation. IEEE Signal Processing Letters, 9(12), 438-441.