    IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 22, NO. 6, JUNE 2013 2101

Unified Blind Method for Multi-Image Super-Resolution and Single/Multi-Image Blur Deconvolution

Esmaeil Faramarzi, Member, IEEE, Dinesh Rajan, Senior Member, IEEE, and Marc P. Christensen, Senior Member, IEEE

Abstract—This paper presents, for the first time, a unified blind method for multi-image super-resolution (MISR or SR), single-image blur deconvolution (SIBD), and multi-image blur deconvolution (MIBD) of low-resolution (LR) images degraded by linear space-invariant (LSI) blur, aliasing, and additive white Gaussian noise (AWGN). The proposed approach is based on alternating minimization (AM) of a new cost function with respect to the unknown high-resolution (HR) image and blurs. The regularization term for the HR image is based upon the Huber-Markov random field (HMRF) model, which is a type of variational integral that exploits the piecewise smooth nature of the HR image. The blur estimation process is supported by an edge-emphasizing smoothing operation, which improves the quality of blur estimates by enhancing strong soft edges toward step edges, while filtering out weak structures. The parameters are updated gradually so that the number of salient edges used for blur estimation increases at each iteration. For better performance, the blur estimation is done in the filter domain rather than the pixel domain, i.e., using the gradients of the LR and HR images. The regularization term for the blur is Gaussian (L2 norm), which allows for fast noniterative optimization in the frequency domain. We accelerate the processing time of SR reconstruction by separating the upsampling and registration processes from the optimization procedure. Simulation results on both synthetic and real-life images (from a novel computational imager) confirm the robustness and effectiveness of the proposed method.

Index Terms—Blur deconvolution, blind estimation, Huber-Markov random field (HMRF) prior, image restoration, super-resolution.

    I. INTRODUCTION

CAPTURING high-quality images and videos is critical in many applications such as medical imaging, astronomy, surveillance, and remote sensing. Traditional high-resolution (HR) imaging systems require high-cost and bulky optical elements whose physical sizes dictate the light-gathering capability and the resolving power of the imaging system, a

Manuscript received August 12, 2011; revised September 12, 2012; accepted December 4, 2012. Date of publication January 9, 2013; date of current version March 29, 2013. This work was supported in part by a Collaborative Technology Agreement with the U.S. Army Research Laboratory under Award W911NF-06-2-0035. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Ramin Samadani.

E. Faramarzi is with Samsung Telecommunications America, Richardson, TX 75082 USA (e-mail: [email protected]).

D. Rajan and M. P. Christensen are with Southern Methodist University, Dallas, TX 75205 USA (e-mail: [email protected]; [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

    Digital Object Identifier 10.1109/TIP.2013.2237915

constraint that has persisted since their invention [1], [2]. In contrast, computational imaging systems combine the power of digital processing with data gathered from optical elements to generate HR images. Artifacts such as aliasing, blurring, and noise may affect the spatial resolution of an imaging system, which is defined as the finest detail that can be visually resolved in the captured images.

Blur deconvolution (BD) and super-resolution (SR) are two groups of techniques to increase the apparent resolution of the imaging system. One major difference between these two groups is that the goal in a BD problem is just to undo blurring and noise, whereas SR also removes or reduces the effect of aliasing. As a result, the input and output images in BD are of the same size, while in SR the output image is larger than the input image(s). The other difference is that, since severe blurs eliminate or attenuate aliasing in the underlying low-resolution (LR) images, the blur in an SR problem may not be as extensive as in a BD problem.

For both BD and SR, techniques have been proposed in the literature for reconstruction from a single image or from multiple images. Multi-image super-resolution (MISR, or shortly SR in this paper) methods reconstruct one HR image by fusing multiple LR images [3]-[7]. By contrast, single-image super-resolution (SISR) methods, also known as learning-based, patch-based, or example-based SR techniques [8]-[10], replace small spatial patches within the input LR image by similar higher-resolution patches previously extracted from a number of HR images. In comparison with MISR methods, SISR methods do not need motion and blur estimation processes, but have lower performance instead.

In the case of BD, most of the proposed methods address reconstruction from a single image (SIBD) [11]-[14]. However, multi-image BD (MIBD) methods [15]-[18] have also been developed to boost the reconstruction performance. LR images given to an SR (MISR) system mostly have sub-pixel displacements between their fields of view (FOV). Also, both SR and MIBD systems may either use LR images that have differences in their point spread functions (PSFs) due to variations in the parameters of the lens (such as aperture, focal length, and focus), or use LR images with variations in their illumination conditions (photometric variations) due to dissimilar camera parameters (such as exposure time and aperture size).

1057-7149/$31.00 © 2013 IEEE


In an SR problem, in order for the sub-pixel displacements to bring new information to the LR images, there should be an adequate amount of aliasing in these images. In an imaging system, when the sensor array has insufficient density, the detector Nyquist cutoff frequency is lower than the optical cutoff frequency, resulting in aliasing of the captured images. Aliased components contain valuable high-frequency information, and their existence ensures the feasibility of enhancement by SR. In theory, by removing all artifacts including aliasing, blurring, and noise, the resolution can be increased by digital SR up to the diffraction limit of the imaging system [19]. In contrast to digital SR, optical SR (OSR) techniques, which illuminate the object with controlled patterns of light, are able to transcend the diffraction limit of the system.

Most publications on BD/SR are non-blind, i.e., they do not explicitly consider blur identification during the reconstruction procedure. Many studies assume that the PSFs are fully known a priori because the images are captured under controlled environmental/imaging conditions. Other works assume that the amount of blur is negligible and can be omitted from the reconstruction. While these simplifications are valid under certain circumstances, they are impractical for many real-world applications in which varying blurring effects may accompany the imaging process. In parallel to these works, others have studied blur identification along with BD/SR. These papers can be classified into two main categories: methods that consider blur identification and image restoration as two disjoint processes [11], [13], [20], [21], and methods that combine these two processes into a unified procedure, e.g., alternating minimization (AM) [12], [22]-[27].

In this paper, for the first time, we propose a unified approach for blind SR, SIBD, and MIBD reconstructions. The cost function for the output HR image includes a prior based on the Huber-Markov random field (HMRF) model. This variational-type prior suppresses noise while preserving edges and fine structures effectively, without causing noticeable ringing. The HMRF prior is convex but not quadratic; however, using the lagged diffusivity fixed-point (FP) scheme, it can be replaced with a quadratic form at each iteration of the optimization process. This strategy allows efficient iterative optimization methods like conjugate gradient (CG) to be employed to solve the optimization problem.

The proposed kernel (blur) estimation procedure is based on three important findings: 1) edges and their neighboring regions are more useful in blur estimation; 2) it is more accurate to start the blur estimation with just a few salient edges and progressively allow more and more edges to contribute; and 3) blur estimation in the filter domain is more efficient than in the pixel domain. The first finding is exploited by preprocessing the reconstructed image using an edge-emphasizing smoothing operation, which aims to enhance soft edges toward step edges while smoothing out weak structures. The second is achieved by setting large values for the regularization coefficients of both the HR image and the smoothing function in the initial iterations, and decreasing these values gradually at every iteration. The last suggests the use of the gradients of the LR and preprocessed HR images instead of their pixel values for the blur estimation. By the use of a Gaussian (L2-norm) prior for the blur(s), the blur estimation procedure is based solely on convolution and multiplication-by-constant operations, so the blur(s) can be updated quickly by pixel-wise multiplications and divisions in the frequency domain.

While in MIBD reconstruction the input LR images have different blur parameters, in reality all inputs (LR images) to motion-based SR systems (in contrast to motion-free SR systems) have equal blurs and noise levels, since they are mostly continuous shots from a still camera, successive frames of a video sequence, or images simultaneously captured by several cameras of equal type and settings. Even works that do not explicitly consider this assumption in their models almost always test their proposed algorithms on such images. This assumption enables us to separate the registration and upsampling processes from the optimization procedure. We found that in this way, on the one hand, the quality of the estimated blur (and hence of the estimated HR image) is improved, and on the other hand, the optimization speed is increased several times.

This paper is organized as follows: Section II explains the forward model of our method and the blind estimation problem. Our proposed method is presented in Section III. Experimental results are shown in Section IV, and finally Section V discusses conclusions and future research directions.

II. PROBLEM FORMULATION

    A. Forward Model

The linear forward imaging model in the spatial domain, which describes the process of generating the kth LR image g_k(x, y; c) of size N_{gx} x N_{gy} x C from an HR image f(x, y; c) of size N_{fx} x N_{fy} x C, is defined in (1) [19]:

g_k(x, y; c) = [w(f(x, y; c); \theta_k) * h_k(x, y; c)]_{\downarrow L} + n_k(x, y; c),  k = 1, ..., N,  c = 1, ..., C   (1)

Here, (x, y) denotes the position of pixels within the LR and HR image planes, respectively; N_x and N_y are the numbers of pixels in the x and y spatial directions, respectively; N is the total number of LR images; C is the number of color channels; and * is the 2D convolution operator. In (1), w(.) is the warping function according to a global/local parametric/nonparametric motion model, h_k is the kth PSF caused by the overall blur of the system originating from different sources (such as optical lens, sensor, motion, depth of scene, etc.), and n_k is the noise, which is commonly modeled as AWGN. Also, [.]_{\downarrow L} is the downsampling operator by L, called the SR downsampling factor or SR scale ratio (depending on the point of view), so that N_{fx} = L N_{gx} and N_{fy} = L N_{gy}. According to (1), the original HR image is warped, convolved with the overall system PSF, downsampled by the factor L, and finally corrupted by noise to generate each LR image. The blur function is in general different for each color channel, but it can be considered identical for all channels (i.e., h(x, y, c) = h(x, y)) by ignoring the chromatic aberration of the lens.
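As an illustration, the degradation pipeline of (1) — warp, blur, downsample, add noise — can be sketched in Python (a minimal single-channel version; the integer translational warp, the Gaussian PSF, and all parameter values are illustrative assumptions, not the settings used in the paper):

```python
import numpy as np

def psf_to_otf(psf, shape):
    """Embed a small PSF in a zero image and center it at the origin,
    so circular (BCCB) convolution becomes a product of 2D FFTs."""
    pad = np.zeros(shape)
    kh, kw = psf.shape
    pad[:kh, :kw] = psf
    return np.fft.fft2(np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1)))

def degrade(f, shift, psf, L, noise_std, rng):
    """Generate one LR image g_k per (1): warp -> blur -> downsample -> noise."""
    warped = np.roll(f, shift, axis=(0, 1))          # w(f; theta_k), integer shift
    otf = psf_to_otf(psf, f.shape)                   # h_k under the LSI assumption
    blurred = np.real(np.fft.ifft2(np.fft.fft2(warped) * otf))
    lr = blurred[::L, ::L]                           # downsampling by L
    return lr + rng.normal(0.0, noise_std, lr.shape)  # AWGN n_k

# Toy run: a 64x64 "HR image" reduced to a 32x32 LR observation.
rng = np.random.default_rng(0)
ax = np.arange(7) - 3
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx**2 + yy**2) / (2 * 1.5**2))
psf /= psf.sum()                                     # PSF normalized to unit sum
f = rng.random((64, 64))
g1 = degrade(f, (1, 2), psf, L=2, noise_std=0.01, rng=rng)
print(g1.shape)  # (32, 32)
```

Note how the LR size is exactly N_f / L in each dimension, matching N_{fx} = L N_{gx}.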


FARAMARZI et al.: UNIFIED BLIND METHOD FOR MULTI-IMAGE SUPER-RESOLUTION AND SINGLE/MIBD

In matrix notation, (1) is rewritten as:

g_k = D_k H_k S_k f + n_k = W_k f + n_k,  k = 1, ..., N   (2)

where f is the input image in lexicographic notation, indicating a vector of size C N_{fx} N_{fy} x 1; S_k and H_k are the kth warping (motion) and blurring operators of size C N_{fx} N_{fy} x C N_{fx} N_{fy}; D_k is the downsampling matrix of size C N_{gx} N_{gy} x C N_{fx} N_{fy}; g_k and n_k are the vectors of the kth LR image and noise, respectively, both of size C N_{gx} N_{gy} x 1; and W_k is the overall system function. In the BD problem, D_k and S_k are identity matrices I. In this paper, we assume that for the SR problem the blurs in all LR images are the same (i.e., H_k = H). Alternatively, the model in (2) can be expressed in terms of the entire set of LR images as:

g = W f + n   (3)

where g = [g_1^T, ..., g_N^T]^T, W = [W_1^T, ..., W_N^T]^T, and n = [n_1^T, ..., n_N^T]^T. The notation in this paper is as follows: non-bold letters for images/blurs/filters/noise in the pixel domain, scalars, and functionals (e.g., f, h, R(.), n, N); bold lowercase letters for vectors and vector functions (e.g., f, h, rho(.)); and bold uppercase letters for matrices (e.g., W).

    B. Blind Estimation

Although the number of publications on blind BD is quite extensive (e.g., [11]-[13], [28]), the literature on blind multi-image SR (MISR, or shortly SR) is very limited, due to the difficulty of coping with the downsampling operation in blur estimation. Šroubek et al. [26], [27] proposed a unified method for blind MIBD and SR in which the well-known TV regularization is used as the image prior, together with the following regularization for the blurs:

R_h = \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \| H_i z_j - H_j z_i \|^2   (4)

where z_i is the LR image g_i upsampled to the size of the HR image f, i.e., z_i = D^T g_i. A TV regularization term is also added to R_h in (4) with a very small coefficient, the role of which is just to diminish noise in the estimated PSFs. The form of regularization in (4) (but without pre-upsampling of the LR images) was first proposed in [16] for the blind MIBD problem. By rearranging R_h, it can be written as R_h = ||N h||_2^2, where h = [h_1^T, ..., h_N^T]^T and N is a matrix solely based on the LR images which contains the correct PSFs in its null space, i.e., N h = 0 only if h contains the correct PSFs. While this prior is able to properly estimate the HR image even when the LR images have different blurs, the estimated PSFs carry some inevitable ambiguity. An example of this ambiguity is shown in [26], in which for an 8x8 PSF, 16 PSFs are shown that span the null space of N. Even if the PSFs are the same, this ambiguity will still exist in the estimated PSFs. By contrast, in our proposed method, introduced in Section III, there is no such ambiguity in estimating the blurs, since our method is intrinsically designed for the case that all PSFs are equal.

    Fig. 1. Nonuniform interpolation of LR images.

Another work on blind SR using the MAP framework is suggested in [29], in which the cost function for estimating the HR image includes a TV prior. The blur identification process consists of three optimization steps: first, initial estimates of the blurs h^0 are obtained using the GMRF prior ||E h||^2, where E is the convolution kernel of the Laplacian mask. Second, parametric blur functions b that best fit the initial blur estimates are calculated. Third, final estimates of the blurs h are obtained by reinforcement learning toward the parametric estimates using the quadratic prior ||h - b||^2. The most important limitation of this work is that considering just a few parametric models does not cope with the diversity of blur functions in reality. Also, in this work the estimated blurs are not demonstrated, and the reported PSNR values for the estimated HR images are low (less than 18 dB).

III. PROPOSED METHOD

    A. Nonuniform Interpolation

Since for the SR problem we assume that all LR images have identical PSFs and noise levels, (2) is written as:

g_k = D_k H S_k f + n_k   (5)

When the PSF is linear space-invariant (LSI) and a periodic boundary condition is assumed, H is a block-circulant matrix with circulant blocks (BCCB) [18], [30]. In the following situations, the matrices H and S_k commute (H S_k = S_k H) for all pixels:

1) The PSF is LSI and the perceived motions are global and purely translational (so the matrices S_k are block-circulant as well).
2) The PSF is LSI and isotropic, and the perceived motions are global and rigid (translation and rotation).

Also, when the PSF is LSI and isotropic, and the perceived motions are rigid but non-global (local), the commutability of the warping and blurring operations holds for all pixels except those on and near (within the PSF support) the boundaries of moving objects. In these situations, the imaging model in (2) can be rewritten as:

g_k = D_k S_k H f + n_k = D_k S_k z + n_k   (6)

where z is the upsampled (antialiased) but still blurry image. When the noise is the same for all images, an appropriate way to reconstruct z is multi-frame nonuniform interpolation, which directly reverses the generation process of (6). The concept of this method is shown in Fig. 1, in which, first, the pixel values of all LR images are registered and projected onto the HR image grid, and then the HR image pixel



Fig. 2. Reconstruction from the nonuniform interpolation method. (a) Ground-truth image. (b) One of 16 LR images scaled up to the size of the original image for better comparison. (c) Reconstructed image using nonuniform interpolation, which is free of aliasing but still blurry. (d) LR image obtained by applying the same blur function and noise level as (b) but without downsampling. Images in (c) and (d) have a relative PSNR of 39 dB.

values are computed through an interpolation scheme such as linear, cubic, or spline. Another method that utilizes the equality of all blurs and noise levels is proposed in [30].

In most non-blind SR reconstruction methods, the registration and upsampling operations are performed within the image estimation process. However, our experiments show that for a blind SR problem with identical blurs and noise levels, separating the upsampling and registration operations from the reconstruction process increases both the speed and the precision of the blind estimation. Fig. 2(a) and (b) respectively show the ground-truth Cameraman image and one of 16 LR images generated by shifting, blurring, downsampling, and corrupting with noise. The upsampled image obtained using nonuniform interpolation is shown in Fig. 2(c), which, as seen, is still blurry and noisy but free of aliasing. This image is very similar to the image in Fig. 2(d), obtained by applying the same blur function and noise level as in Fig. 2(b) but without downsampling. The high PSNR (defined in Section IV) value of 39 dB confirms this similarity.
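As a sketch of the fusion step of Fig. 1 under the equal-blur assumption, the following toy implementation assumes known integer shifts on the HR grid and averages the projected samples (an illustrative simplification; real registration would estimate sub-pixel motions and fill the remaining holes with linear, cubic, or spline interpolation):

```python
import numpy as np

def fuse_nonuniform(lr_images, shifts, L):
    """Project registered LR pixels onto the HR grid and average overlaps.

    lr_images : list of (m, n) arrays g_k
    shifts    : list of (dy, dx) integer offsets of each g_k on the HR grid
                (assumed known here; normally obtained by registration)
    L         : SR factor, so the HR grid is (L*m, L*n)
    """
    m, n = lr_images[0].shape
    acc = np.zeros((L * m, L * n))
    cnt = np.zeros((L * m, L * n))
    for g, (dy, dx) in zip(lr_images, shifts):
        # Pixel (i, j) of g_k lands at HR position (L*i + dy, L*j + dx).
        acc[dy::L, dx::L] += g
        cnt[dy::L, dx::L] += 1
    # Average where samples landed; holes (cnt == 0) stay zero here and
    # would be filled by interpolation in practice.
    z = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
    return z, cnt  # z is the blurry-but-antialiased image of (6)

# Toy check: for L = 2, the four quarter-pixel phases tile the HR grid exactly.
rng = np.random.default_rng(1)
hr = rng.random((16, 16))
lrs = [hr[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
z, cnt = fuse_nonuniform(lrs, [(0, 0), (0, 1), (1, 0), (1, 1)], L=2)
print(np.allclose(z, hr))  # True: all phases present, no holes
```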

    B. Image Optimization

We use the following cost function for estimating the HR image f:

J(f) = \sum_{k=1}^{N} \| z_k - H_k f \|^2 + \lambda \, \mathbf{1}^T \rho\left( \sqrt{ \sum_{j=1}^{4} (B_j f)^2 } \right)   (7)

Fig. 3. (a) Huber (solid line) and quadratic (dashed line) functions. (b) Their corresponding Gibbs PDFs.

where N is equal to 1 for the SR and SIBD problems, but greater than 1 for the MIBD case. The blurry image z_k for SR (k = 1) is obtained from the nonuniform interpolation of g_1, ..., g_N (Section III-A), and for BD equals g_k. The regularization term is an isotropic HMRF prior. The root and square operators in (7) are element-wise, i.e., they are applied separately to the elements of their underlying vectors. For any vector x = [x_1, ..., x_N]^T, each element of the vector function rho(x) = [rho(x_1), ..., rho(x_N)]^T is the (scalar) Huber function defined as:

rho(x_i) = { x_i^2,              |x_i| <= T
           { 2 T |x_i| - T^2,    |x_i| > T,    i = 1, ..., N   (8)

This function is sketched in Fig. 3(a). It has a quadratic form for values less than or equal to a threshold T, and less-than-quadratic growth for values greater than T. Consequently, edge regions in the image are penalized less with this prior than with a quadratic prior. Fig. 3(b) shows the Gibbs PDF of the Huber function, which is heavier in the tails than a Gaussian; in other words, it is Gaussian near the origin, while becoming Laplacian (double-tailed exponential) in the tails [31].
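For concreteness, the Huber function of (8) can be evaluated element-wise as follows (a small sketch; the threshold T = 1 is an arbitrary illustrative choice):

```python
import numpy as np

def huber(x, T):
    """Element-wise Huber function rho of (8): quadratic for |x| <= T,
    linear growth 2*T*|x| - T**2 beyond the threshold T.
    The two branches meet at |x| = T (both equal T**2), so rho is continuous."""
    a = np.abs(x)
    return np.where(a <= T, x**2, 2 * T * a - T**2)

x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(huber(x, T=1.0))  # 5.0, 0.25, 0.0, 0.25, 5.0 -- quadratic inside, linear outside
```

An edge of gradient magnitude 3 is penalized 5 here instead of 9 under a quadratic prior, which is the reduced edge penalty described above.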

The matrices B_k, k = 1, ..., 4 are the convolution matrices of the following directional first-order derivative (FOD) masks (horizontal, diagonal, vertical, and anti-diagonal, with the diagonal masks scaled by 1/sqrt(2)):

b_1 = [0  -1  1],    b_2 = (1/\sqrt{2}) [ 0 0 0 ; 0 1 0 ; -1 0 0 ],
b_3 = [0 ; -1 ; 1],   b_4 = (1/\sqrt{2}) [ 0 0 0 ; 0 1 0 ; 0 0 -1 ]   (9)

The use of the HMRF as an image prior in non-blind SR was first proposed in [7], with the non-isotropic form \lambda \mathbf{1}^T \sum_{j=1}^{4} \rho(B_j f), where the B_j are the convolution matrices of second-order derivative (SOD) masks. However, our evaluations show that the isotropic form in (7) produces higher PSNR (defined in (24)) values.

To optimize a cost function with the HMRF prior, the gradient projection method is employed in [7], and the damped Newton method in [31]. For the reasons discussed later in this section, our preferred iterative method for optimizing the cost function is conjugate gradient (CG) [32]. However, since the cost function in (7) is nonquadratic, its gradient is nonlinear.


Thus it is not possible to use CG directly, as it can only operate on linear systems.

We use the lagged diffusivity fixed-point (FP) scheme [33] to linearize the gradient of the HMRF prior by lagging the diffusive term by one iteration [23]. Using this approach, the following quadratic form of the prior is derived:

Q(f^n; f^{n-1}) = \sum_{i=1}^{4} (B_i f^n)^T V^n (B_i f^n) = \sum_{i=1}^{4} \| B_i f^n \|^2_{V^n}   (10)

The diagonal matrix V^n is given by:

V^n = diag{ 1,                    \delta^{n-1} <= T
          { T / \delta^{n-1},     \delta^{n-1} > T   (11)

where:

\delta^{n-1} = \sqrt{ \sum_{j=1}^{4} (B_j f^{n-1})^2 }   (12)

With this new form of the prior, the cost function at iteration n is represented as:

J^n(f^n) = \sum_{k=1}^{N} \| z_k - H_k^n f^n \|^2 + \lambda^n \sum_{i=1}^{4} \| B_i f^n \|^2_{V^n}   (13)

The gradient of the cost function in (13) is:

\nabla J^n_f = \partial J(f^n) / \partial f^n = - \sum_{k=1}^{N} H_k^{nT} (z_k - H_k^n f^n) + \lambda^n \sum_{i=1}^{4} B_i^T V^n B_i f^n   (14)

Hence the optimum value of f^n is computed by solving the following linear equation:

f^n = \left( \sum_{k=1}^{N} H_k^{nT} H_k^n + \lambda^n \sum_{i=1}^{4} B_i^T V^n B_i \right)^{-1} \sum_{k=1}^{N} H_k^{nT} z_k = (A^n)^{-1} b^n   (15)

We adopt the CG iterative method to solve (15) for the following reasons: it converges faster than the steepest descent method and its variations, which are widely used to solve such optimization problems. While Newton-type methods typically converge in fewer iterations, each iteration requires the computation of the Hessian (the matrix of second derivatives) along with the gradient [34]; consequently, Newton-type methods are computationally more expensive than CG. While the complexities of the Newton and quasi-Newton methods are of orders O(n^3) and O(n^2), respectively, the complexity of CG is O(n). This lower complexity is important in large-scale optimization problems like image analysis. Moreover, many efficient methods such as Newton, Gauss-Seidel (GS), SOR, BiCG, etc., are based on matrix calculation. In contrast, CG is a vector-based method and can be implemented entirely as concatenations of filtering and weighting operations, so it requires less storage and its implementation is simpler. Thus, in terms of efficiency, speed, and simplicity, CG is a suitable choice.
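As an illustration of this matrix-free use of CG, the following sketch solves A f = b where A is applied only through FFT-based filtering, never formed as a matrix (the specific operator A f = H^T H f + lam * f, the circular Gaussian blur, and lam = 0.05 are illustrative assumptions standing in for the full operator of (15)):

```python
import numpy as np

def cg(apply_A, b, x0, iters=200, tol=1e-10):
    """Conjugate gradient for A x = b, with the SPD operator A given only
    as a function (matrix-free), as used to solve the linear system (15)."""
    x = x0.copy()
    r = b - apply_A(x)
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(iters):
        Ap = apply_A(p)
        alpha = rs / np.vdot(p, Ap).real
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Illustrative SPD operator: A f = H^T H f + lam * f, H a circular Gaussian blur.
n = 32
ax = np.arange(n) - n // 2
k2d = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * 2.0 ** 2))
k2d /= k2d.sum()
otf = np.fft.fft2(np.fft.ifftshift(k2d))   # blur as a Fourier multiplier
lam = 0.05

def apply_A(f):
    Hf = np.real(np.fft.ifft2(np.fft.fft2(f) * otf))
    HtHf = np.real(np.fft.ifft2(np.fft.fft2(Hf) * np.conj(otf)))
    return HtHf + lam * f                  # filtering + weighting only

rng = np.random.default_rng(2)
f_true = rng.random((n, n))
b = apply_A(f_true)
f_hat = cg(apply_A, b, np.zeros((n, n)))
print(np.max(np.abs(f_hat - f_true)) < 1e-4)  # True: CG recovers f
```

The lam * f term keeps the operator positive definite, mirroring the role of the regularization term in A^n.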

We use the following scheme to update the regularization coefficient \lambda^n:

\lambda^n = max(\lambda^{n-1} / r, \lambda_{min})   (16)

where \lambda_{min} is the minimum value of \lambda, relevant to the noise level, and the constant r is fixed to 1.5. According to this scheme, the value of \lambda^n decreases at each AM iteration until it reaches \lambda_{min}. While the optimality of the \lambda^n obtained from (16) is not known, it follows the variation trend of the (unknown) optimal value of this parameter over the AM iterations, according to the following proposition:

Proposition 1: With the AM approach, the optimal value (unknown in our work) of the regularization coefficient \lambda decreases at each AM iteration until it reaches its minimum value, which is proportional to the noise variance in the LR images.

The proof is given in the Appendix. The impact of the scheme of (16) on the quality of the blur estimates is explained in Section III-C.

    C. Blur Optimization

It was shown for the first time in [35] that, in a blind image deconvolution problem, a more accurate estimate of the blur (and subsequently of the image) can be obtained if, in the blur estimation process, the estimated image f is preprocessed by an edge-emphasizing smoothing operation. This has a positive effect on the quality of the blur estimate for the following reasons: 1) Blur(s) can be estimated best from salient edges and their adjacent pixels, so by smoothing f, weak edges and also false edges caused by ringing are smoothed out and do not contribute to the blur estimation; 2) Noise has a stronger adverse effect in non-edge regions than in edge regions, so smoothing the non-edge regions helps improve the blur estimation; and 3) Ground-truth images are assumed to have sharp, binary-like edges; therefore, replacing soft edges with step edges in f provides a closer estimate of the ground-truth image and assists in obtaining a better blur estimate, especially in the initial steps of the blind optimization, where the estimated image f is still quite blurry.

To smooth out non-edge regions and improve the sharpness of edges, [35] uses the shock filtering operation of [36], by which a ramp edge gradually approaches a step edge through a few iterations. Since the performance of shock filtering and of some other edge-emphasizing smoothing techniques is influenced by noise, in some works the image is pre-smoothed. For example, the bilateral filtering method of [37] is used in [13], and a lowpass Gaussian filter in [14], [38], before applying the shock filter. In the above works, the pre-smoothed (and sharpened) image is computed from f^n (or f^{n-1}, depending on which of the image or the blur is estimated first) at each AM iteration of the blind optimization to improve the blur estimation, but this image has no direct effect on the estimate of f. However, the SIBD method in [39] uses a different approach, where the image f is directly estimated by redistributing the pixels of the blurry image g along its edge profiles. This estimation is performed in such a way that antialiased step edges are produced. Subsequently, the blur is estimated from f and g using a MAP framework.

In our work, we use the edge-emphasizing smoothing method of [28], which is applied at each AM iteration to f after it is converted to grayscale. This method manageably and


Fig. 4. (a) Blurred and noisy image g. (b) Reconstructed image f^1. (c) Smoothed image s^1. (d) Reconstructed image f^20. (e) Smoothed image s^20.

Fig. 5. Blind SIBD for the Cameraman image. (a) Ground-truth image. (b) Blurred image. (c) Reconstructed image of [14] with PSNR of 26.1 dB. (d) Reconstructed image of our method with PSNR of 29.5 dB. (e) Original motion PSF. (f) Estimated PSF of [14] with NMSE of 0.24. (g) Estimated PSF of our method with NMSE of 0.008.

globally removes a significant amount of low-amplitude structures via L0 gradient minimization. It penalizes the number of non-zero gradients by minimizing the following cost function:

J(s^n) = \| s^n - f^n \|^2 + \beta^n \, \#\{ p : |(\partial_x s^n)_p| + |(\partial_y s^n)_p| \neq 0 \}   (17)

where s^n is the output of the edge-emphasizing smoothing algorithm at the nth AM iteration, \nabla = [\partial_x, \partial_y] is the gradient operator, #{.} is the counting operator, and (.)_p indicates the value of the underlying vector at pixel location p. No pre-smoothing is required before using this smoothing operation. We found this method more effective than shock filtering for smoothing and edge sharpening.
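A sketch of the L0 gradient minimization of (17) in the standard half-quadratic form: auxiliary gradient variables (h, v) are hard-thresholded per pixel, then s is solved in closed form via the FFT. The parameter values (beta, mu0, kappa) and circular boundary handling are illustrative assumptions, not the settings of [28]:

```python
import numpy as np

def l0_smooth(f, beta=0.02, mu0=2 * 0.02, kappa=2.0, mu_max=1e5):
    """Approximate minimizer of ||s - f||^2 + beta * #{nonzero gradients},
    by alternating a per-pixel hard threshold on auxiliary gradients (h, v)
    with an exact FFT solve for s, while the penalty weight mu grows."""
    H, W = f.shape
    # Frequency responses of circular forward differences D_x, D_y.
    wx = np.exp(2j * np.pi * np.fft.fftfreq(W))[None, :] - 1.0
    wy = np.exp(2j * np.pi * np.fft.fftfreq(H))[:, None] - 1.0
    denom_grad = np.abs(wx) ** 2 + np.abs(wy) ** 2
    F_f = np.fft.fft2(f)
    s, mu = f.copy(), mu0
    while mu < mu_max:
        # (h, v) step: keep a gradient only where it is worth its L0 cost.
        gx = np.roll(s, -1, axis=1) - s
        gy = np.roll(s, -1, axis=0) - s
        keep = gx ** 2 + gy ** 2 >= beta / mu
        h, v = gx * keep, gy * keep
        # s step: quadratic in s, solved exactly in the Fourier domain.
        num = F_f + mu * (np.conj(wx) * np.fft.fft2(h) +
                          np.conj(wy) * np.fft.fft2(v))
        s = np.real(np.fft.ifft2(num / (1.0 + mu * denom_grad)))
        mu *= kappa
    return s

# Toy run: a noisy vertical step keeps its edge but loses the noise texture.
rng = np.random.default_rng(3)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
out = l0_smooth(noisy)

def n_grad(a, eps=1e-3):
    return np.count_nonzero(np.abs(np.roll(a, -1, 1) - a) +
                            np.abs(np.roll(a, -1, 0) - a) > eps)

print(n_grad(out) < n_grad(noisy))  # True: far fewer nonzero gradients survive
```

Early iterations use a large threshold beta/mu, so only the most salient edges survive, matching the coarse-to-fine behavior described in the text.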

It is shown experimentally in [40] that estimating the blur in the filter domain, i.e., using the gradients of the HR and LR images, produces more accurate results. That is because, in the typical linear equation set A h = b derived for the estimation of h, the matrix A will have a better condition number [13].


Fig. 6. Blind SIBD for the Satellite image. (a) Ground-truth image. (b) Blurred image. (c) Reconstructed image of [14] with PSNR of 28.8 dB. (d) Reconstructed image of our method with PSNR of 31.1 dB. (e) Original Gaussian PSF. (f) Estimated PSF of [14] with NMSE of 0.11. (g) Estimated PSF of our method with NMSE of 0.01.

Fig. 7. Blind MIBD for the Lena image. (a) Ground-truth image. (b) One of the four blurred images. (c) Four motion blurs in the directions of 0, 45, 90, and 135 degrees. (d) and (e) Reconstruction result of [17] and [18] with image PSNR of 31.9 dB and blur NMSEs of 0.2, 0.2, 0.18, and 0.19. (f) and (g) Reconstruction result of our method with image PSNR of 32.5 dB and blur NMSEs of 0.09, 0.04, 0.07, and 0.04.

Hence, the cost function we use for the blur estimation is:

J(h^n) = \sum_{k=1}^{N} \sum_{i=1}^{2} \| C_i z_k - C_i S^n h_k^n \|^2 + \gamma \sum_{k=1}^{N} \sum_{j=1}^{4} \| B_j h_k^n \|^2   (18)

where C_1 and C_2 are the convolution matrices of the gradient filters c_1 and c_2 in the horizontal and vertical directions, and S^n is the convolution matrix of s^n. Unlike \lambda^n in (15), which is updated at each AM iteration according to the scheme of (16), \gamma in (18) is fixed (we discuss the range of the parameters in Section III-D). Because (18) is a quadratic functional and contains only convolution and multiplication-by-scalar operations, it can be computed directly in the Fourier domain using Parseval's theorem:


Fig. 8. Blind MIBD for the Mandrill image. (a) Ground-truth image. (b) One of the four blurred images. (c) Four average blurs of size 3x3, 5x5, 7x7, and 9x9, respectively, all with 15x15 support. (d) and (e) Reconstruction result of [17], [18] with image PSNR of 27.2 dB and blur NMSEs of 0.51, 0.30, 0.30, and 0.20. (f) and (g) Reconstruction result of our method with image PSNR of 28.3 dB and blur NMSEs of 0.00095, 0.0029, 0.0040, and 0.0045.

    = F1

    2i=1

    [F(ci ) F(sn )]F(ci ) F(z)

    / (19) 2

    i=1|F(ci ) F(sn )|2 +

    4j=1

    |F(bj )|2

    where k = 1, . . . ,N, F(), and F1() denote FFT andinverse-FFT operations, and()is the complex conjugate oper-ator. We use the MATLAB function edgetaper()to avoidboundary artifacts as a result of performing deconvolution inthe Fourier domain.
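The closed-form update (19) amounts to a Wiener-type division in the frequency domain. A minimal NumPy sketch (not the authors' MATLAB code; the filter choices, circular boundary handling, and the small denominator guard are assumptions made for illustration):

```python
import numpy as np

def estimate_blur_fft(z, s, cs, bs, gamma=1e-6):
    """Frequency-domain blur update in the spirit of (19): a Wiener-type
    ratio built from the gradients of the observed image z and the
    smoothed sharp estimate s (circular boundaries assumed)."""
    shape = z.shape
    F = lambda x: np.fft.fft2(x, s=shape)   # zero-pad filters to image size
    Fs, Fz = F(s), F(z)
    num = np.zeros(shape, dtype=complex)
    den = np.zeros(shape)
    for c in cs:                            # the two gradient filters c_i
        Fc = F(c)
        num += np.conj(Fc * Fs) * (Fc * Fz)
        den += np.abs(Fc * Fs) ** 2
    for b in bs:                            # the four regularization filters b_j
        den += gamma * np.abs(F(b)) ** 2
    den = np.maximum(den, 1e-12)            # guard bins where all filters vanish
    return np.real(np.fft.ifft2(num / den))
```

For z equal to s (i.e., no blur), the estimate is approximately a delta kernel at the origin, as expected.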

In the initial iterations of the algorithm, when the estimated image is still blurry, we use large values for both α_n in (15) and λ_n in (17) to impose more smoothness on the estimated image and allow fewer salient edges to contribute to the blur estimation. This strategy (controlling the number of contributing salient edges at each iteration) was previously employed in [13] and [14], but in different ways. In [13], it is done by decreasing the parameters of the bilateral and shock filtering processes, and also by decreasing a threshold so that only the largest gradient magnitudes of the image intensities are kept. In [14], it is performed by steadily decreasing some thresholds so that the gradient magnitudes that either have small values or belong to structures finer than the blur support are removed. By contrast, our strategy for this purpose is simpler and faster than those in [13] and [14] while providing the same or better performance. Gradually decreasing α at the AM iterations is also done in [41], but without considering the lower limit α_min in (16), so one must visually check the estimation results at each AM iteration in order to terminate the optimization before the point where the quality of the estimates deteriorates.
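The schedule just described can be sketched as a geometric decrease with a floor, in the spirit of (16); the ratio r = 1.5 matches the setting reported in Section III.D, while the floor value below is a hypothetical placeholder:

```python
def update_weight(alpha, r=1.5, alpha_min=1e-4):
    """One step of a decrease-with-floor schedule in the spirit of (16);
    alpha_min is an illustrative value, not taken from the paper."""
    return max(alpha / r, alpha_min)
```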

Fig. 4(a) shows a blurry and noisy image g. The reconstructed image f_1 (f at the first AM iteration) is shown in Fig. 4(b). Because of the large values of α_1 and λ_1, this image is even more blurry than g. The smoothed image s_1 is illustrated in Fig. 4(c); it contains the most salient sharpened edges of f_1 and is used to estimate h_1. The images f_20 and s_20 are shown in Fig. 4(d) and (e), respectively. More salient edges contribute to the estimation of h_20 than to that of h_1.

    D. Overall Optimization

We use a coarse-to-fine scheme to perform initial estimates of the image and blur kernels at lower scales using downsampled versions of the observed LR images. After a few AM iterations at each scale, the estimation results are upsampled using bilinear interpolation and used as the inputs of the next level. This scheme not only increases the processing speed, but also helps to avoid local minima. The kernel size at the coarsest level is 3×3 and the upscaling factor is √2.

Fig. 9. Blind SR for the Shepp-Logan Phantom image. (a) Ground-truth image. (b) One of 16 LR images. (c) Reconstructed image of [26] and [27] with PSNR of 24.5 dB. (d) Reconstructed image of our method with PSNR of 35.7 dB. (e) Original average blur. (f) Estimated PSF of [26] and [27] with NMSE of 0.65. (g) Estimated PSF of our method with NMSE of 0.0003.

Fig. 10. Diagrams for the experiment of Fig. 9. (a) and (b) Variations of image-PSNR and blur-NMSE over iterations. (c) Variation of image-PSNR versus SNR.

The pseudo-code for the proposed blind estimation method is summarized in Algorithm 1, where the AM criterion and the CG stopping criterion for updating f_n are given by (20) and (21), respectively:

n_AM > N_AM  or  ( ‖f_n − f_{n−1}‖/‖f_{n−1}‖ < T_tol  and  ‖h_n − h_{n−1}‖/‖h_{n−1}‖ < T_tol )   (20)

n_CG > N_CG  or  ‖b_n − A_n f_n‖/‖b_n‖ < T_tol   (21)

where n_AM (= n in Algorithm 1) and n_CG count the AM and CG loops, respectively, N_AM and N_CG are the maximum numbers of iterations for the AM and CG loops, and T_tol is the tolerance threshold. We set the parameters of the method as follows: N_AM to 20 for the top scale and to 10 for the lower scales, N_CG to 10, T_tol to 10^-8, α_0 to 0.1, γ to 10^-6, r in (16) to 1.5, and T in (8) to 1. The PSFs are normalized to sum to one, and the range of the images is [0, 255]. In some works the image range is set to [0, 1]; in that case, the value of α_0 is not changed, but γ and T should be divided by 255² and 255, respectively.
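The stopping tests (20) and (21) reduce to relative-change checks; a hedged sketch (the array representations and the tiny denominator guard are assumptions):

```python
import numpy as np

def rel_change(x_new, x_old):
    """Relative change ||x_new - x_old|| / ||x_old|| used in (20) and (21)."""
    return np.linalg.norm(x_new - x_old) / max(np.linalg.norm(x_old), 1e-12)

def am_stop(n_am, f_n, f_prev, h_n, h_prev, N_AM=20, T_tol=1e-8):
    """AM termination rule in the spirit of (20): stop on the iteration
    cap or when both the image and the blurs have stabilized."""
    return n_am > N_AM or (rel_change(f_n, f_prev) < T_tol and
                           rel_change(h_n, h_prev) < T_tol)
```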

After updating f and h at each AM iteration, we apply the following constraints: the pixel values of the estimated HR image are clipped to the expected range, i.e., 0 to 255; the PSFs are shifted to the location of their centroids, their negative values are set to zero, and they are then rescaled to sum to one:

f_n = min(max(f_n, 0), 255)   (22)


Fig. 11. Blind SR for the Barbara image. (a) Ground-truth image. (b) One of 16 LR images. (c) Reconstructed image of [26] and [27] with PSNR of 23.6 dB. (d) Reconstructed image of our method with PSNR of 29.5 dB. (e) Original blur. (f) Estimated blur of [26] and [27] with NMSE of 0.78. (g) Estimated blur of our method with NMSE of 0.119.

    Fig. 12. Blind SR estimation results for some real LR images captured by our prototype of the PANOPTES computational imaging architecture.

h_nk = centralize(h_nk),  h_nk = max(h_nk, 0),  h_nk = h_nk / ‖h_nk‖_1   (23)

where in (23), the centroid of the PSF h_k(x, y) is the point (c_x, c_y) = (Σ_x Σ_y x h_k, Σ_x Σ_y y h_k). For better performance, after applying the above constraints on f_n and h_n at each AM iteration, we apply a TV denoising algorithm to both.
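The projections (22) and (23) can be sketched as follows (an illustration only: the integer centroid shift below stands in for the centralize step, and odd-sized kernels are assumed):

```python
import numpy as np

def project_image(f):
    """Constraint (22): clip the HR estimate to the valid range."""
    return np.clip(f, 0.0, 255.0)

def project_psf(h):
    """Constraints (23): recenter the kernel on its centroid, clip
    negatives, and renormalize to unit sum (sketch)."""
    ys, xs = np.indices(h.shape)
    m = h.sum()
    cy, cx = (ys * h).sum() / m, (xs * h).sum() / m
    # integer shift moving the centroid to the geometric center
    shift = (int(round(h.shape[0] // 2 - cy)), int(round(h.shape[1] // 2 - cx)))
    h = np.roll(h, shift, axis=(0, 1))
    h = np.maximum(h, 0.0)
    return h / h.sum()
```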

IV. EXPERIMENTAL RESULTS

In this section, we illustrate the performance of the proposed algorithm on both synthetic and real-life image sets. For real


Algorithm 1: Proposed Adaptive BSR Method

images, the registration method of [42] is used to estimate the global shifts and rotations between the LR images. The synthetic blurs are all generated by the MATLAB function fspecial(). Our algorithm is implemented purely in MATLAB and tested on a laptop with an Intel Core i7-2675QM processor. The severity of the noise in the kth LR image g_k is represented by the signal-to-noise ratio, defined as:

SNR_k [dB] = 10 log10( σ²_{g_k} / σ²_{n_k} )   (24)

where σ²_{g_k} and σ²_{n_k} are the variances of the kth LR image and the noise, respectively. The performance of the image restoration is measured by the peak signal-to-noise ratio (PSNR) between

the ground-truth and reconstructed HR images. For an image with a maximum intensity level of 255, this metric is defined as:

PSNR(f̂) = 10 log10( 255² / ( (1/N_f) ‖f − f̂‖² ) )   (25)

where f̂ is the estimated HR image and N_f is the total number of pixels in f.

Also, the quality of the PSF restoration is evaluated by the normalized mean squared error (NMSE) between the true and estimated blurs:

NMSE(ĥ) = ‖h − ĥ‖² / ‖h‖²   (26)
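The three metrics (24)-(26) are straightforward to compute; a sketch assuming floating-point images with a peak intensity of 255:

```python
import numpy as np

def snr_db(g, noise):
    """SNR of (24) from an LR image and its noise realization."""
    return 10 * np.log10(np.var(g) / np.var(noise))

def psnr_db(f_true, f_est, peak=255.0):
    """PSNR of (25) for images with maximum intensity `peak`."""
    mse = np.mean((f_true - f_est) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def nmse(h_true, h_est):
    """NMSE of (26) between the true and estimated blur kernels."""
    return np.sum((h_true - h_est) ** 2) / np.sum(h_true ** 2)
```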

A few important points should be considered when using these metrics. First, because ground-truth images are only available in synthetic experiments, the above two metrics cannot be computed for real blurry images. Second, some fine structures are permanently removed by the blur function and cannot be recovered by any means, even if the exact blur function is known. As a result, the PSNR for images with many fine structures is lower than for images with only smooth regions. Third, in general, a good algorithm is characterized by high PSNR and low NMSE. Still, the best way to assess the performance of an algorithm remains visual inspection of its reconstructed HR images.

In the following sections, the performance of our proposed unified method for SR, SIBD, and MIBD reconstruction is compared to well-recognized state-of-the-art methods in the literature. The combinations of ground-truth images and blurs are chosen at random. In all synthetic experiments of this paper, the SNR is set to 30 dB.

A. Results for Blind SIBD

We compare the performance of our proposed blind SIBD method with the method of Xu et al. [14], which is one of the best available algorithms for blind deconvolution from a single image. For the first experiment, the Cameraman image of Fig. 5(a) is blurred by applying the motion blur of size 9×9 shown in Fig. 5(e), resulting in the image shown in Fig. 5(b). Since for blind restoration the real PSF size is unknown, we assume that the PSF support is 19×19. The reconstruction result obtained by [14] is illustrated in Fig. 5(c), (f) with an image-PSNR of 26.1 dB and a blur-NMSE of 0.24. The reconstruction result of our proposed method is shown in Fig. 5(d), (g) with an image-PSNR of 29.5 dB and a blur-NMSE of 0.008. The estimated image of our method has sharp edges without noticeable ringing, whereas the ringing artifact is clearly visible in the reconstructed image of [14]. One zoomed portion of each image is also shown for better comparison.

As another example, the Satellite image in Fig. 6(a) is blurred by applying a Gaussian blur with a standard deviation of 1.5 pixels. The blurred image and the original blur are shown in Fig. 6(b) and Fig. 6(e), respectively. The reconstruction result of [14] is shown in Fig. 6(c), (f) with an image-PSNR of 28.8 dB and a PSF-NMSE of 0.11; the reconstructed image appears slightly oversharpened. Finally, Fig. 6(d), (g) present the result of our proposed method with an image-PSNR of 30.4 dB and a PSF-NMSE of 0.01.

    B. Results for Blind MIBD

In this section, our MIBD method is compared to the MIBD algorithm of Šroubek et al. [17], [18].

In the experiment of Fig. 7, the Lena image displayed in Fig. 7(a) is selected as the test image. Four motion blurs in the directions of 0, 45, 90, and 135 degrees, as shown in Fig. 7(c), are used to generate four blurred images, one of which is shown in Fig. 7(b). Motion blur is known as the worst-case blur type in terms of the severity and distribution of ringing effects [43]. For performance comparison, the reconstruction


result using [17], [18] is illustrated in Fig. 7(d), (e), with an image-PSNR of 31.9 dB and PSF-NMSEs of 0.2, 0.2, 0.18, and 0.19. The use of our proposed method results in an image-PSNR of 32.5 dB and PSF-NMSEs of 0.09, 0.04, 0.07, and 0.04, as seen in Fig. 7(f), (g).

As the next experiment, the 512×512 ground-truth Mandrill image in Fig. 8(a) is blurred by four average (out-of-focus) blurs of sizes 3×3, 5×5, 7×7, and 9×9, respectively, all with 15×15 support. One of the four LR images is shown in Fig. 8(b) and the four PSFs are shown in Fig. 8(c). The reconstruction result obtained by [17], [18] is presented in Fig. 8(d), (e) with an image-PSNR of 27.2 dB and blur-NMSEs of 0.51, 0.30, 0.30, and 0.20. Fig. 8(f), (g) demonstrate the reconstruction results of our MIBD method with an image-PSNR of 28.3 dB and blur-NMSEs of 0.00095, 0.0029, 0.0040, and 0.0045. Our code is four times faster than the implementation of [17], [18].

    C. Results for Blind SR

In this section, the performance of the proposed blind SR method is demonstrated and compared to the SR method of Šroubek et al. [26], [27].

For the first test, the Shepp-Logan Phantom image in Fig. 9(a) is used as the ground-truth image. A total of 16 LR images, one of which is shown in Fig. 9(b), are generated by applying an average (out-of-focus) PSF of size 15×15 with a main support of 5×5 (Fig. 9(e)), downsampling by an SR factor of 4, and corrupting with 30 dB Gaussian noise. The reconstruction result of the blind SR method in [26], [27] is shown in Fig. 9(c), (f) with an image-PSNR of 24.5 dB and a blur-NMSE of 0.65. The ringing artifact is clearly visible in the zoomed portion of the image. Fig. 9(d), (g) demonstrate the reconstruction result of our proposed blind SR method with an image-PSNR of 35.7 dB and a PSF-NMSE of 0.0003. Our method takes only 8 seconds to run, while the method in [26], [27] takes about 10 minutes. Fig. 10(a) and (b) show the variations of image-PSNR and blur-NMSE over the iterations for the experiment of Fig. 9, and Fig. 10(c) depicts the variation of image-PSNR versus SNR for the same experiment.

The next experiment shows the performance of our blind SR method for the Barbara image (Fig. 11(a)) blurred by an asymmetric PSF (Fig. 11(e)). This PSF represents the actual measured blur of a novel computational imaging system with a flat form factor [1], [2]. One of the 16 LR images is shown in Fig. 11(b). The reconstruction result of [26], [27] is shown in Fig. 11(c), (f) with an image-PSNR of 23.6 dB and a blur-NMSE of 0.78. Fig. 11(d), (g) present the reconstruction of our proposed blind SR method with an image-PSNR of 29.5 dB and a blur-NMSE of 0.119. The algorithm of [26], [27] takes more than an hour to run, whereas our method takes only 40 seconds to compute the results.

We now show the capability of the proposed method for blind SR reconstruction of real images captured by a new working prototype [44] of a multi-aperture, adaptive, flat, computational imaging sensor based on the PANOPTES (Processing Arrays of Nyquist-limited Observations to Produce a Thin Electro-optic Sensor) concept [1], [2]. This architecture is adaptive in the sense that it allows each subimager's FOV to be steered toward the regions of interest to use the imaging resources in an optimal way; the resulting data are then digitally processed by the SR operation to extract HR detail.

Fig. 12 shows some of the blind SR estimation results for the imaging system field-tested at the White Sands Missile Test Range (WSMR) in White Sands, New Mexico. For each field-of-view (FOV), 25 LR images were captured during day and night, with one-quarter sub-pixel shifts along both the horizontal and vertical directions. The increase in resolution is evident from the reconstructed results obtained with an SR factor of 4. Since we do not have information about the actual HR images and blurs, the image-PSNR and PSF-NMSE cannot be calculated.

    V. CONCLUSION

A blind unified method for multi-image super-resolution (MISR), single-image blur deconvolution (SIBD), and multi-image blur deconvolution (MIBD) is presented. The optimization procedure is based on alternating minimization of a well-defined cost function which consists of an L2 fidelity term, an HMRF prior for the HR image, and an L2 prior for each of the blur functions. The inputs to the MIBD algorithm are several LR images with different blurs, but without spatial displacements. By contrast, our MISR (or SR) method accepts a number of LR images with subpixel displacements but the same blur function and noise parameters. The assumption of blur and noise similarity in SR allows us to separate the registration and upsampling processes from the reconstruction procedure. The proposed blur estimation procedure preprocesses the estimated HR image by applying an edge-emphasizing smoothing operation which enhances soft edges toward step edges while smoothing out weak structures of the image. The parameters are altered so that more and more salient edges contribute to the blur reconstruction at every iteration. For better performance, the blur estimation is performed in the filter domain using the derivatives of the preprocessed HR image and the LR image(s). Experiments, including comparisons with the state of the art, confirm the performance of the proposed method. This work can be extended in several directions, for instance to space-variant blur identification, joint image registration and restoration procedures, or consideration of compression errors in the forward model.

    APPENDIX

    A. Proof of the Proposition in Section III.B

In the Bayesian framework, the Gibbs PDF of the noise n in (3) is expressed as:

p(g|f) = (β/2π)^{N_g/2} exp( −(β/2) ‖g − Wf‖² )   (27)

where β is the inverse of the noise variance and N_g is the total number of pixels in g. The optimal value of β is obtained by:

β̂ = argmax_β p(g|f) = N_g / ‖g − Wf‖²   (28)


Also, the Gibbs PDF of the HR image is given by:

p(f) = (1/Z_f) exp( −Σ_{c∈C_f} V_c(f) )   (29)

where Z_f is the partition function, V_c(f) denotes the potential function associated with each clique c, and C_f indicates the set of all cliques [45]. In the MAP framework, a cost function is obtained from the negative logarithm of the a posteriori probability p(f|g), which according to the Bayes rule is equal to:

p(f|g) = p(g|f) p(f) / p(g)   (30)

p(g) is constant; so:

J(f, h) ∝ −log p(g|f, h) − log p(f)   (31)

which results in a cost function of the following form:

J(f, h) = (β/2) ‖g − Wf‖² + Σ_{c∈C_f} V_c(f)   (32)

The comparison of the cost functions in (7) and (32) shows that α in (7) is proportional to the inverse of β in (32). By assuming that the AM approach converges globally, the noise variance, and consequently the optimal value of α, decreases as the iterations continue. Further, when f_n approaches its ground-truth value, the residual also approaches the noise n in (3), so α reaches its minimum value, proportional to the noise variance in the LR images.

    REFERENCES

    [1] M. P. Christensen, V. Bhakta, D. Rajan, T. Mirani, S. C. Douglas, S.L. Wood, and M. W. Haney, Adaptive flat multiresolution multiplexed

    computational imaging architecture utilizing micromirror arrays to steersubimager fields of view, Appl. Opt., vol. 45, no. 13, pp. 28842892,May 2006.

    [2] P. Milojkovic, J. Gill, D. Frattin, K. Coyle, K. Haack, S. Myhr, D. Rajan,S. Douglas, P. Papamichalis, M. Somayaji, M. P. Christensen, and K.Krapels, Multichannel, agile, computationally enhanced camera basedon PANOPTES architecture, in Proc. Comput. Opt. Sens. Imag. Opt.Soc. Amer., Oct. 2009, no. CTuB4, pp. 13.

    [3] M. Irani and S. Peleg, Improving resolution by image registration,Graph. Models Image Process., vol. 53, pp. 231239, Apr. 1991.

    [4] R. C. Hardie, K. J. Barnard, J. G. Bognar, E. E. Armstrong, and E.A. Watson, High-resolution image reconstruction from a sequence ofrotated and translated frames and its application to an infrared imagingsystem, Opt. Eng., vol. 37, no. 1, pp. 247260, 1998.

    [5] R. C. Hardie, K. J. Barnard, and E. E. Armstrong, Joint map registrationand high-resolution image estimation using a sequence of undersampledimages, IEEE Trans. Image Process., vol. 6, no. 12, pp. 16211633,

    Dec. 1997.[6] S. Farsiu, M. Robinson, M. Elad, and P. Milanfar, Fast and robustmultiframe super resolution, IEEE Trans. Image Process., vol. 13,no. 10, pp. 13271344, Oct. 2004.

    [7] R. Schultz and R. Stevenson, Extraction of high-resolution framesfrom video sequences, IEEE Trans. Image Process., vol. 5, no. 6,pp. 9961011, Jun. 1996.

    [8] W. Freeman, T. Jones, and E. Pasztor, Example-based super-resolution,IEEE Comput. Graph. Appl., vol. 22, no. 2, pp. 5665, Mar.Apr. 2002.

    [9] D. Glasner, S. Bagon, and M. Irani, Super-resolution from a sin-gle image, in Proc. IEEE 12th Int. Conf. Comput. Vis., Sep. 2009,pp. 349356.

    [10] W. Dong, L. Zhang, G. Shi, and X. Wu, Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regular-ization, IEEE Trans. Image Process., vol. 20, no. 7, pp. 18381857,Jul. 2011.

    [11] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman,Removing camera shake from a single photograph, ACM Trans.Graph., vol. 25, pp. 787794, Jul. 2006.

    [12] Q. Shan, J. Jia, and A. Agarwala, High-quality motion deblurring froma single image, ACM Trans. Graph. (SIGGRAPH), vol. 27, no. 3,pp. 73:173:10, Aug. 2008.

    [13] S. Cho and S. Lee, Fast motion deblurring, ACM Trans. Graph.(SIGGRAPH ASIA), vol. 28, no. 5, p. 145, 2009.

    [14] L. Xu and J. Jia, Two-phase kernel estimation for robust motiondeblurring, in Proc. 11th Eur. Conf. Comput. Vis. I, Ser., 2010,

pp. 157-170.
[15] F. Šroubek and P. Milanfar, Robust multichannel blind deconvolution

    via fast alternating minimization, IEEE Trans. Image Process., vol. 21,no. 4, pp. 16871700, Apr. 2012.

    [16] G. Harikumar and Y. Bresler, Perfect blind restoration of images blurredby multiple filters: Theory and efficient algorithms, IEEE Trans. ImageProcess., vol. 8, no. 2, pp. 202219, Feb. 1999.

    [17] F. Sroubek and J. Flusser, Multichannel blind deconvolution of spa-tially misaligned images, IEEE Trans. Image Process., vol. 14, no. 7,pp. 874883, Jul. 2005.

    [18] F. Sroubek and J. Flusser, Multichannel blind iterative image restora-tion, IEEE Trans. Image Process., vol. 12, no. 9, pp. 10941106,Sep. 2003.

    [19] E. Faramarzi, V. R. Bhakta, D. Rajan, and M. P. Christensen, Superresolution results in PANOPTES, an adaptive multi-aperture foldedarchitecture, in Proc. 17th IEEE Int. Conf. Image Process., Sep. 2010,pp. 28332836.

    [20] M. Chang, A. Tekalp, and A. Erdem, Blur identification usingthe bispectrum, IEEE Trans. Signal Process., vol. 39, no. 10,pp. 23232325, Oct. 1991.

    [21] N. Nguyen, P. Milanfar, and G. Golub, Efficient generalized cross-validation with applications to parametric image restoration and res-olution enhancement, IEEE Trans. Image Process, vol. 10, no. 9,pp. 12991308, Sep. 2001.

    [22] Y. Guo, H. Lee, and C. Teo, Blind restoration of images degraded byspace-variant blurs using iterative algorithms for both blur identificationand image restoration, Image Vis. Comput., vol. 15, no. 5, pp. 399410,May 1997.

    [23] T. Chan and C.-K. Wong, Total variation blind deconvolution, IEEETrans. Image Process., vol. 7, no. 3, pp. 370375, Mar. 1998.

    [24] Y.-L. You and M. Kaveh, A regularization approach to joint bluridentification and image restoration, IEEE Trans. Image Process., vol. 5,no. 3, pp. 416428, Mar. 1996.

    [25] Y.-L. You and M. Kaveh, Blind image restoration by anisotropicregularization,IEEE Trans. Image Process., vol. 8, no. 3, pp. 396407,Mar. 1999.

[26] F. Šroubek, G. Cristóbal, and J. Flusser, A unified approach to super-resolution and multichannel blind deconvolution, IEEE Trans. Image Process., vol. 16, no. 9, pp. 2322-2332, Sep. 2007.

[27] F. Šroubek, J. Flusser, and G. Cristóbal, Super-resolution and blind deconvolution for rational factors with an application to color images, Comput. J., vol. 52, no. 1, pp. 142-152, Jan. 2009.

    [28] L. Xu, C. Lu, Y. Xu, and J. Jia, Image smoothing via L0 gradientminimization, ACM Trans. Graph. (SIGGRAPH Asia), vol. 30, no. 6,p. 174, 2011.

    [29] Y. He, K.-H. Yap, L. Chen, and L.-P. Chau, A soft map frameworkfor blind super-resolution image reconstruction, Image Vis. Comput.,vol. 27, no. 4, pp. 364373, Mar. 2009.

    [30] M. Elad and Y. Hel-Or, A fast super-resolution reconstruction algorithmfor pure translational motion and common space-invariant blur, IEEE

    Trans. Image Process., vol. 10, no. 8, pp. 11871193, Aug. 2001.[31] D. Capel,Image Mosaicing and Super-Resolution. New York: Springer-Verlag, 2004.

    [32] J. R. Shewchuk, An introduction to the conjugate gradient methodwithout the agonizing pain, Carnegie Inst. Technol., Carnegie-MellonUniv., Pittsburgh, PA, Tech. Rep. CMU-CS-94-125, 1994.

    [33] C. R. Vogel and M. E. Oman, Iterative methods for total variationdenoising, SIAM J. Sci. Comput., vol. 17, no. 1, pp. 227238, 1996.

    [34] Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd ed. Philadel-phia, PA: SIAM, 2003.

    [35] J. H. Money and S. H. Kang, Total variation minimizing blind decon-volution with shock filter reference, Image Vis. Comput., vol. 26, no. 2,pp. 302314, Feb. 2008.

    [36] S. Osher and L. I. Rudin, Feature-oriented image enhancement usingshock filters, SIAM J. Numer. Anal., vol. 27, no. 4, pp. 919940,Aug. 1990.


    [37] C. Tomasi and R. Manduchi, Bilateral filtering for gray and colorimages, in Proc. 6th Int. Conf. Compu. Vis., Ser., 1998, pp. 839846.

    [38] C. Wang, L. Sun, P. Cui, J. Zhang, and S. Yang, Analyzing imagedeblurring through three paradigms, IEEE Trans. Image Process.,vol. 21, no. 1, pp. 115129, Jan. 2012.

    [39] N. Joshi, R. Szeliski, and D. Kriegman, PSF estimation using sharpedge prediction, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit.,Jun. 2008, pp. 18.

    [40] A. Levin, Y. Weiss, F. Durand, and W. Freeman, Efficient marginallikelihood optimization in blind deconvolution, in Proc. IEEE Comput.

    Soc. Conf. Comput. Vis. Pattern Recognit., Jun. 2011, pp. 26572664.[41] M. Almeida and L. Almeida, Blind and semi-blind deblurringof natural images, IEEE Trans. Image Process., vol. 19, no. 1,pp. 3652, Jan. 2010.

    [42] D. Keren, S. Peleg, and R. Brada, Image sequence enhancement usingsub-pixel displacements, in Proc. IEEE Comput. Soc. Conf. Comput.Vis. Pattern Recognit., Jun. 1988, pp. 742746.

    [43] R. Lagendijk, J. Biemond, and D. Boekee, Regularized iterative imagerestoration with ringing reduction, IEEE Trans. Acoustics, SpeechSignal Process., vol. 36, no. 12, pp. 18741888, Dec. 1988.

    [44] M. Somayaji, M. P. Christensen, E. Faramarzi, D. Rajan, J.-P. Laine, P.Sebelius, A. Zachai, M. Chaparala, G. Blasche, K. Baldwin, B. Ogun-femi, and D. Granquist-Fraser, Prototype development and field-testresults of an adaptive multiresolution PANOPTES imaging architecture,

    Appl. Opt., vol. 51, no. 4, pp. A48A58, Feb. 2012.[45] S. Z. Li, Markov Random Field Modeling in Image Analysis, 3rd ed.

    New York: Springer-Verlag, 2009.

Esmaeil Faramarzi (S'10-M'12) received the B.S. and M.S. degrees from Amirkabir University of Technology, Tehran, Iran, in 2000 and 2003, respectively, and the Ph.D. degree from Southern Methodist University, Dallas, TX, USA, in 2012.

He is currently a Senior Engineer with Samsung Telecommunications America, Richardson, TX, USA. He was a Research Faculty Member with the Iranian Research Institute for Information Science and Technology, Tehran, from 2003 to 2008. His research interests include image and video deblurring and super-resolution, image registration, video coding, document image analysis, optical character recognition, and image indexing and retrieval.

Dinesh Rajan (S'99-M'02-SM'07) received the B.Tech. degree in electrical engineering from the Indian Institute of Technology Madras, Chennai, India, in 1997, and the M.S. and Ph.D. degrees in electrical and computer engineering from Rice University, Houston, TX, USA, in 1999 and 2002, respectively.

He is currently the Department Chair and an Associate Professor with the Electrical Engineering Department, Southern Methodist University (SMU), Dallas, TX, USA, which he joined in 2002 as an Assistant Professor. His current research interests include communications theory, wireless networks, information theory, and computational imaging.

Dr. Rajan was a recipient of the NSF CAREER Award in 2006 for his research on applying information theory to the design of mobile wireless networks, the Golden Mustang Outstanding Faculty Award, and the Senior Ford Research Fellowship from SMU.

Marc P. Christensen (SM'11) received the B.S. degree in engineering physics from Cornell University, Ithaca, NY, USA, in 1993, and the M.S. degree in electrical engineering and the Ph.D. degree in electrical and computer engineering from George Mason University, Fairfax, VA, USA, in 1998 and 2001, respectively.

He was a Staff Member and the Technical Leader with BDM's Sensors and Photonics Group (now part of Northrop Grumman Mission Systems) from 1991 to 1998, where he was involved in research on developing optical signal processing and vertical-cavity surface-emitting laser-based optical interconnection architectures, as well as infrared sensor modeling, simulation, and analysis. In 1997, he co-founded Applied Photonics, a free-space optical interconnection module company, where he was involved in the demonstrations for the DARPA MTO FAST-Net, VIVACE, and ACTIVE-EYES programs, each of which incorporated precision optics, micro-optoelectronic arrays, and micromechanical arrays into large system-level demonstrations. In 2002, he joined Southern Methodist University, Dallas, TX, USA, where he was selected as the inaugural Bobby B. Lyle Professor of Engineering Innovation in 2010 and is currently the Dean ad interim of the Lyle School of Engineering. He has authored or co-authored over 100 papers in journals and conferences. He holds two patents on free-space optical interconnections, one pending patent on integrated photonics, and four pending patents on computational imaging.

    Dr. Christensen was a recipient of the Gerald J. Ford Research Fellowshipin 2008 for outstanding research. He has contributed to several industry anduniversity consortia on integrated photonics, such as the DARPA PhASERand CIPhER programs.