Hardware Implementation of Real-Time Multiple Frame Super-Resolution

Kerem Seyid, Sebastien Blanc, Yusuf Leblebici
Ecole Polytechnique Fédérale de Lausanne (EPFL) Lausanne, Switzerland
Email: {kerem.seyid, sebastien.blanc, yusuf.leblebici}@epfl.ch

Abstract—Super-resolution reconstruction is a method for reconstructing higher-resolution images from a set of low-resolution observations. The sub-pixel differences among different observations of the same scene allow higher-resolution images of better quality to be created. In the last thirty years, many methods for creating high-resolution images have been proposed; however, hardware implementations of such methods are limited. In this work, a highly parallel and pipelined implementation of the iterative back-projection super-resolution algorithm is presented. The proposed hardware implementation is capable of reconstructing 512 × 512 images from a set of 20 lower-resolution observations, with real-time capability of up to 25 frames per second (fps). The system has been synthesized and verified on a Xilinx VC707 FPGA. To the best of our knowledge, the system is currently the fastest FPGA-based super-resolution implementation.

I. INTRODUCTION

From the early 1970s on, the use of charge-coupled device (CCD) and CMOS sensors has increased widely, as opposed to exposure on photographic film. Back in the 70s, these sensors were sufficient for most applications. Today, however, people want a digital handheld camera with high resolution (HR) at an affordable price. Furthermore, scientists often need an HR level close to that of analog 35 mm film, with no visible artifacts when an image is magnified [1]. This creates a need to find a way to increase the resolution level.

The most common method is to increase the pixel density, or in other words, to reduce the pixel size (spatial resolution) through fabrication techniques. Spatial resolution refers to the pixel density in an image and is measured in pixels per unit area [2]. It is common knowledge that scaling effects in CMOS technology allow the semiconductor industry to make smaller devices [3]. This rule holds for CMOS imaging applications as well; in [3] it is indicated that CMOS image sensor technology is lagging behind the technology nodes of the ITRS roadmap. The reason behind this lag is simple: the current CMOS process is not imaging-friendly. Reducing the pixel size means less light is available per pixel. Furthermore, a smaller pixel size increases shot noise, which reduces image quality. There is a limit to how far the pixel pitch can be reduced without suffering the effects of shot noise; for a 0.35 µm CMOS process, the estimated minimum pixel area is around 40 µm² [1].

Another approach to enhancing the resolution is to increase the chip size. However, that leads to an increase in capacitance, which makes it difficult to speed up the charge transfer rate [4]. Furthermore, the cost of high-precision optics and image sensors also plays an important role in commercial high-resolution imaging.

As explained, consumer applications need the high resolution that smaller pixel sizes can provide, and current CMOS technology has the means to satisfy this need. However, a smaller pixel size degrades performance. To keep up with the current demands of imaging technology, a new approach is required.

One approach is to use signal processing techniques to obtain an HR image when multiple low-resolution images of the same scene can be obtained. Recently, this approach has become one of the most active research areas. It is called super-resolution (SR) image reconstruction or, in other terms, image resolution enhancement. The term SR image reconstruction refers to a signal processing approach that aims to overcome the limitations imposed by the obtained low-resolution images.

The major advantage of the signal processing approach is that it may cost less and existing imaging systems can be utilized [1]. After obtaining low-resolution images with an inexpensive video recorder or handheld camera, a higher-resolution output image can be reconstructed in post-processing.

Another approach related to resolution enhancement is single-image interpolation. However, since no additional information or detail is provided in this approach, the quality of single-image interpolation is very limited due to the ill-posed nature of the problem. The lost frequency components cannot be recovered; therefore, image interpolation methods are not considered SR techniques.

In example-based SR methods, correspondences between low- and high-resolution image patches are learned from a database of low- and high-resolution image pairs. The corresponding patches are applied to a new low-resolution image to recover its most likely high-resolution version [5]. Furthermore, Glasner et al. [6] proposed a patch-based, database-free single-image super-resolution method. These methods are computationally expensive, and real-time implementations are not feasible.

In the SR setting, multiple low-resolution observations are available for reconstruction, making the problem better constrained. The nonredundant information contained in these LR images is typically introduced by sub-pixel shifts between them. These sub-pixel shifts may occur due to uncontrolled motions between the imaging system and the scene, e.g., movements of objects, or due to controlled motions

978-1-4673-9140-5/15/$31.00 © 2015 IEEE

[Figure 1 block diagram: a continuous scene is sampled (continuous to discrete, without aliasing) into the desired HR image X; warping (translation, rotation, etc.) produces the kth warped HR image Xk; blur (optical blur, motion blur, sensor PSF, etc.) and undersampling by (L1, L2), with additive noise, produce the kth observed LR image yk.]

Figure 1. Observation model relating LR images to HR images

[2]. CCTV systems are now replacing DVRs, and they often need object magnification, such as extracting a car license plate or focusing on the face of a suspect or a region of interest (ROI) [1]. The techniques can also be utilized in every area of imaging technology where multiple images of the same scene can be obtained, such as magnetic resonance imaging (MRI), computed tomography (CT), satellite imaging, remote sensing and video standard conversion.

Multiple scenes can be obtained either from one camera with several captures that have relative motion between frames, or from multiple cameras located at different positions looking at the same scene. After obtaining the motion estimates with sub-pixel accuracy, the SR image can be reconstructed.

In this work, a real-time super-resolution algorithm capable of creating 512 × 512 images from a set of low-resolution (LR) observations is presented. The paper is organized as follows: the theoretical background of the imaging process and previously implemented FPGA-based super-resolution algorithms are given in Section II. The super-resolution algorithm implemented in this work is presented in Section III. The real-time hardware implementation is presented in Section IV. Implementation results are given in Section V, and a discussion of the real-time implementation and future work is given in Section VI.

II. THEORETICAL BACKGROUND

SR reconstruction has been one of the most active research areas since it was first mentioned by Tsai and Huang [7] in 1984. In the past 30 years, numerous approaches and techniques have been presented, from the frequency domain to the spatial domain, and from signal processing to machine learning techniques. To discuss these techniques, one first needs to understand the image observation model, which represents the low-resolution image capture process.

A. Observation Model

The image capturing process is not perfect due to hardware limitations. During the process, there is a natural loss of spatial resolution caused by optical distortions (such as defocus and the diffraction limit), optical blur that can be modelled by a point spread function (PSF), motion blur due to limited shutter speed, and noise that occurs within the sensor or during transmission [1]. Thus, the recorded image usually suffers from blur, noise and aliasing effects. These degradations are modelled fully or partially in different SR techniques. The imaging process is presented in Fig. 1.

Assume X is the desired HR image, sampled above the Nyquist rate and band-limited, and yk is the kth subsampled, warped and blurred LR observation of X. Since each image is furthermore corrupted by noise, the capturing process can be expressed as

    yk = S Bk Wk X + nk    (1)

where Wk is the warping matrix, Bk is the blur matrix, S is the subsampling matrix, and nk represents the noise vector. A block diagram can be seen in Fig. 1.

The warp matrix Wk represents all the motion among the captured images. It may contain global and local translation, rotation, etc. It varies from scene to scene and needs to be recalculated for each particular frame. Since the motion does not occur in integer-pixel shifts, sub-pixel calculations and interpolations are necessary.

Blurring might be caused by the optical system, by relative motion between the imaging system and the original scene, and by the point spread function of the camera lens; these effects are represented by the matrix Bk. It is assumed to be known during SR reconstruction; however, this information is not easy to obtain. The subsampling matrix S generates aliased images from the warped and blurred HR image.

B. Related Work

Although super-resolution reconstruction has been an attractive field for over thirty years, not much research has been conducted on hardware implementations. Bowen and Bouganis [8] proposed a hardware architecture for FPGA implementation; however, the implementation needs 20 iteration stages for satisfactory results, the device capabilities were not sufficient, and it was limited by on-chip memory. Another hardware implementation of the iterative back-projection algorithm was introduced in [9], which was designed for an adaptive sensor and capable of outputting 25 frames per second (fps) at VGA resolution. The bottleneck of the system was reported to be the triple-buffering memory access scheme. One other implementation was presented by Szydzik et al. [10]. The proposed algorithm was a non-iterative approach; however, the results were limited to QCIF-to-CIF super-resolution at 25 fps. Furthermore, the authors
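The capture model of Eq. (1) in Section II-A can be sketched in NumPy. This is a simplified illustration, not the paper's implementation: the warp is reduced to an integer circular shift (no rotation), the PSF is assumed to be a 3 × 3 averaging kernel, and the noise level is arbitrary.

```python
import numpy as np

def observe(X, dx, dy, rng, sigma=1.0):
    """Simulate one LR observation y_k = S B_k W_k X + n_k (Eq. 1)."""
    # W_k: warp -- here only an integer circular shift; the paper also
    # applies sub-pixel shifts and rotation.
    W = np.roll(np.roll(X, dy, axis=0), dx, axis=1)
    # B_k: blur with a normalized 3x3 kernel standing in for the camera PSF.
    pad = np.pad(W, 1, mode="edge")
    B = np.zeros_like(W)
    for i in range(3):
        for j in range(3):
            B += pad[i:i + W.shape[0], j:j + W.shape[1]] / 9.0
    # S: subsampling by 2 in both directions.
    S = B[::2, ::2]
    # n_k: additive sensor/transmission noise.
    return S + rng.normal(0.0, sigma, S.shape)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 255.0, (512, 512))   # stand-in for the HR scene
y0 = observe(X, 1, 0, rng)
print(y0.shape)                            # (256, 256)
```

Each LR observation is half the HR size in both dimensions, matching the decimation-by-2 used throughout the paper.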

[Figure 2: example of three low-resolution observations registered to a single coordinate system]

Figure 2. Sub-pixel shifts among the low resolution observations

stated that their implementation is not scalable and is limited to working with only two reference frames.

III. SUPER-RESOLUTION

Super-resolution is a computationally complex and ill-posed problem. The motion between the frames, the blur kernel(s) and the high-resolution image of interest are three interwoven unknowns that should ideally be estimated together rather than sequentially [11]. Many methods have been proposed in the literature to solve the ill-posed SR problem when multiple low-resolution observations can be obtained. These methods are mainly divided into two domains: the spatial domain and the frequency domain [1], [12], [13]. In this work, spatial domain approaches are investigated, since they are more suitable for hardware implementation and can be easily parallelized. Additionally, this avoids the computational complexity of the frequency domain approaches.

Several algorithms have been investigated for real-time implementation. Farsiu et al. [14] proposed a fast and robust implementation whose aim is to reduce the modelling errors. Zomet et al. [15] proposed another method, utilizing a pixel-wise median. The iterative back-projection (IBP) algorithm proposed by Irani and Peleg [16] aims to back-project the error between the observed and simulated images. Among these methods, IBP is suitable for hardware implementation, since it can be highly parallelized and its computational complexity is suitable for real-time operation. In the iterative back-projection algorithm, an initial guess is first created by interpolating one of the observed low-resolution images. Afterwards, the algorithm aims to find the high-resolution image X by simulating the image capture process and creating the low-resolution observations y'k. The difference between the observed low-resolution images yk and the created low-resolution observations y'k is iteratively back-projected onto the initial guess. The algorithm iteratively converges to an HR estimate until the error between the simulated LR observations and the obtained LR observations is negligible. A high-quality image registration is needed in order to start the super-resolution process.

A. Image Registration

Image registration is the first and crucial part of any super-resolution algorithm. Registration transforms different sets of images into a single coordinate system; an example of three images registered to a single coordinate system can be seen in Fig. 2. Sub-pixel-level registration allows super-resolution algorithms to create the HR image content. The registration methods can be divided into two groups, namely intensity-based and feature-based methods. Among the intensity-based methods, the majority of algorithms use either block-matching-based methods or optical flow to calculate the motion vectors that determine the motion field. Optical-flow-based algorithms [17] provide superior motion vector quality over block-matching-based algorithms [18]. There are many well-known works focusing on sub-pixel image registration [19], [20]. Furthermore, there are works specifically focusing on motion estimation and image registration for super-resolution [21], [22].

In this work, images are assumed to be registered with respect to the reference frame using the method explained in [20]. The horizontal shifts a, vertical shifts b and the rotation angle θ are assumed to be known prior to starting the iterative image super-resolution process.

B. Iterative Reconstruction

Iterative algorithms start with an initial high-resolution approximation, such as a linear interpolation of the reference low-resolution frame. In each iteration, the observation model explained in Fig. 1 is applied to simulate the low-resolution observation results y'k.

The aim of IBP is to minimize, at every step (n), the error e between the simulated results y'k and the observed image set yk. If Xn is the correct high-resolution image, then the simulated images y'k and the observed images yk should be identical to one another:

    e(n) = sqrt( Σk Σ(x,y) ( yk(x, y) − y'k(x, y) )² )    (2)

In each iteration, every pixel in the high-resolution image X is updated according to the error of all the low-resolution pixels y'k that it affects. The difference error (yk − y'k) is multiplied by a factor and added to the initial high-resolution estimate.

It is important to note that the original high-resolution frequencies may not be fully restored. For example, the blurring operation may filter out the high-frequency components and make them impossible to restore. In such cases, more than one high-resolution image results in the same low-resolution images after the imaging process. It is stated in [16] that the initial guess does not influence the performance of the algorithm, but it does influence the HR estimate to which the algorithm converges.
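The iterative loop of Section III-B can be sketched as follows. To keep the sketch self-contained, the imaging model is reduced to plain decimation by 2 (no warp, blur or noise), and the update step `lam` is an illustrative constant rather than a value from the paper.

```python
import numpy as np

def observe(X):
    """Simplified imaging model: decimation by 2 only."""
    return X[::2, ::2]

def back_project(err):
    """Inverse of the model above: nearest-neighbour upsampling by 2."""
    return np.repeat(np.repeat(err, 2, axis=0), 2, axis=1)

def ibp(lr_set, n_iter=10, lam=0.1):
    """Iterative back projection: refine an initial HR guess by pushing the
    error between observed and simulated LR images back onto it."""
    X = back_project(lr_set[0])              # initial guess from one frame
    for _ in range(n_iter):
        err = np.zeros_like(X)
        for yk in lr_set:                    # accumulate per-image error
            err += back_project(yk - observe(X))
        X = X + lam * err / len(lr_set)      # update the HR estimate
    return X
```

When the simulated and observed LR images agree, the accumulated error of Eq. (2) is zero and the estimate stops changing, which is the fixed point the hardware iterates towards.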

[Figure 3 panels: (a) Original Image, (b) First shearing in the x axis, (c) Shearing in the y axis, (d) Second shearing in the x axis]

Figure 3. Shearing based image rotation

For color images, the super-resolution algorithm is applied in the YCbCr domain. All images in the RGB domain are initially converted to the YCbCr domain, the proposed iterative back-projection algorithm is applied to the Y component, and the Cb and Cr components are interpolated for the HR image. Finally, the YCbCr components are converted back to the RGB domain.

IV. PROPOSED HARDWARE IMPLEMENTATION

As stated previously, the iterative back-projection algorithm is suitable for hardware implementation. The iterative scheme is suitable for pipelining, and the pipelined architectures can be implemented in parallel for each set of observed images. In this work, a pipelined hardware is designed to mimic the camera image acquisition process. The observation of the low-resolution images from the high-resolution estimate is designed as block-based operations; each block mimics the capture process for 2 × 2 pixels. During the simulations, it was observed that using a block size bigger than 2 × 2 causes significant degradation in the output image quality.

Another important aspect is choosing which portion of the estimated high-resolution image should be applied to the imaging process. The best option is to use the whole image to create the simulated low-resolution observations. However, this method would drastically increase the memory bandwidth of the system: for every iteration, a new LR observation would have to be created, saved to the external memory and read back for the error calculations. Therefore, instead of saving the simulated results in the external memory, the LR observations are calculated block by block in a pipelined architecture, and only the final results obtained for the HR sequence are saved in external memory.

For calculating each yk in the pipelined stage, simulations have been conducted to find the N × N block size that minimizes the difference between operating on the whole image and on the 2 × 2 pixel block. Throughout the simulations, the block size N = 9 gave the best results in the trade-off between block size and output image quality. For each 2 × 2 block, using a 9 × 9 block for the image acquisition process is sufficient for small horizontal shifts a, vertical shifts b and rotation angles θ.

The block diagram of the imaging process designed for hardware implementation can be seen in Fig. 4. In the first part of the diagram, the capture process is mimicked. First, the warping operation between the HR image and the LR image is conducted. After the HR image is shifted and rotated, it is blurred according to the model of the camera PSF. Once the blur operation is conducted, the resulting HR image is decimated by 2 in both directions to obtain the LR image. Once the process is finished, the warped, blurred and decimated image forms y'k. Once the difference between y'k and yk is calculated, the back projection of the error starts. Back projection is the inverse operation of the imaging process: the error block is back-projected onto the HR image with inverse functions. The final result is saved to the external memory after adding the error to the current HR estimate X(n).

A. Image Warping

In the image warping process, the shifting and rotating operations are conducted. First, the sub-pixel shifts, which correspond to integer-pixel shifts in the HR image, are applied. The sub-pixel shifts are calculated with half-pixel precision and are then applied to the HR image at the integer-pixel level.

After the images are shifted, the rotation operation with the angle θ is conducted. Many methods have been proposed for the image rotation operation, among which Unser et al. [23] proposed a convolution-based image rotation algorithm. The algorithm can be applied to the operating blocks with angular motion. In [23] it is stated that rotation can be defined by the rotation matrix

    R(θ) = [ cos(θ)  −sin(θ) ]
           [ sin(θ)   cos(θ) ]                                    (3)

which is also equivalent to

    [ 1  −tan(θ/2) ]   [   1     0 ]   [ 1  −tan(θ/2) ]
    [ 0      1     ] × [ sin(θ)  1 ] × [ 0      1     ]           (4)

The whole rotational transformation can thus be decomposed into an appropriate sequence of 1-D signal translations that can all be implemented via simple convolutions. It is a three-pass implementation of rotation, illustrated in Fig. 3: the original image (Fig. 3(a)) is first sheared in the x dimension (Fig. 3(b)), then in the y dimension (Fig. 3(c)), and finally again in the x dimension (Fig. 3(d)).

The algorithm is implemented by first shearing along the x axis, using tan(θ/2) and the current macro-block position.
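The three-pass decomposition of Eq. (4) can be sketched with whole-line shifts, mirroring how the hardware delays or forwards lines; a faithful software version would use the convolution-based sub-pixel interpolation of [23]. The shear amounts follow Eq. (4): tan(θ/2) for the two x passes and sin(θ) for the y pass. Image size and angle here are illustrative.

```python
import numpy as np

def shear_x(img, t):
    """First/third pass: shift each row by round(t * y) pixels."""
    out = np.empty_like(img)
    cy = img.shape[0] // 2
    for y in range(img.shape[0]):
        out[y] = np.roll(img[y], int(round(t * (y - cy))))
    return out

def shear_y(img, s):
    """Second pass: shift each column by round(s * x) pixels."""
    out = np.empty_like(img)
    cx = img.shape[1] // 2
    for x in range(img.shape[1]):
        out[:, x] = np.roll(img[:, x], int(round(s * (x - cx))))
    return out

def rotate_three_pass(img, theta):
    """Rotation via shear_x(-tan(theta/2)) . shear_y(sin(theta)) . shear_x(...)."""
    t = -np.tan(theta / 2.0)
    return shear_x(shear_y(shear_x(img, t), np.sin(theta)), t)
```

For θ = 0 all three shears reduce to zero-pixel shifts, so the image is returned unchanged.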

[Figure 4 pipeline: HR(n) lines 1–9 and LR lines 1–5 feed the stages Warp → Blur → Dec → UpSample → Sharpen → Warp, producing HR(n+1)]

Figure 4. System Architecture Block Diagram of IBP Super-Resolution
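The per-macro-block flow of Fig. 4 can be summarized in a short sketch. The block sizes follow the paper (9 × 9 macro block in, 5 × 5 simulated LR, 2 × 2 corrected output); the warp stages are omitted (zero shift and rotation), and the filter kernels and the update constant are illustrative assumptions.

```python
import numpy as np

def box3(img):
    """3x3 averaging filter, standing in for both the PSF and the sharpen stage."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            out += p[i:i + img.shape[0], j:j + img.shape[1]] / 9.0
    return out

def ibp_block(hr_block9, lr_block5, lam=0.5):
    """One pass of the Fig. 4 pipeline on a single macro block
    (warp stages omitted: zero shift and rotation)."""
    sim = box3(hr_block9)[::2, ::2]                # Blur + Dec -> 5x5 simulated LR
    err = lr_block5 - sim                          # difference with observed LR
    up = np.repeat(np.repeat(err, 2, axis=0), 2, axis=1)[:9, :9]  # UpSample
    corr = box3(up)                                # Sharpen/deblur stand-in
    hr_new = hr_block9 + lam * corr                # add error to HR estimate
    return hr_new[4:6, 4:6]                        # central 2x2 output block
```

In the hardware, this flow runs once per 2 × 2 output block, with the forward and backward warp stages around it.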

The shearing values at the edges and corners will not be equal to the shearing values at the center of the image. To solve this problem, appropriate values of tan(θ/2) are stored in block RAMs. Depending on the value of tan(θ/2) and the current position (X, Y) of the macro block, the shearing operation can be calculated. Line values are delayed or forwarded depending on the calculated shearing operation.

Afterwards, the output image can be sheared along the y axis using sin(θ). Similarly, the sin(θ) values are stored in block RAMs. Utilizing the (X, Y) positions and the sin(θ) values, the block is sheared along the y axis. The output of the y-axis shearing is extended in order for the image to be properly sheared. Finally, the macro block is sheared along the x axis with tan(θ/2). The image dimensions are carefully calculated and increased with respect to the shearing operations in the y-dimensional shearing and the second x-dimensional shearing. Thanks to the global motion, blocks do not coincide with each other. The final image is cropped to fit the 9 × 9 macro block. The obtained macro block is the shifted and rotated version of the original HR macro block.

B. Blur and Decimate

After the image warping, the 9 line values coming from the image warping block are synchronized for the image blurring operation. The blur operation is applied to recreate the camera point spread function. The camera PSF is estimated as a 3 × 3 matrix, and the blurring operation is applied accordingly to mimic both atmospheric and lens blur.

The output of the image filter block is subsampled in order to create the simulated observation y'k. After the subsampling process, the size of the macro block is reduced to 5 × 5. The obtained block Mobt(x, y), corresponding to a portion of the simulated observation y'k, is subtracted from the corresponding block Mobs(x, y) of the observed LR image yk. The result of the subtraction, Mdiff(x, y), corresponds to the error function that needs to be back-projected into the initial HR image X(n).

C. Back Projection

Back projection is the inverse operation of the image observation process. The imaging process applied up to the subtraction block is now applied in reverse order. First, the Mdiff(x, y) block is interpolated in order to obtain M'diff(x, y), the 9 × 9 block. Simulations of the SR process showed no significant difference between the tested interpolation methods; therefore, an averaging filter is applied to Mdiff(x, y) to obtain the M'diff(x, y) block.

The upsampled block M'diff(x, y) is fed into the deblurring filter. The deblurring filter is different from the PSF function: it can be chosen arbitrarily, whereas the camera PSF represents the camera blur parameters. The 9 lines coming from the image interpolation blocks are filtered in parallel in order to reduce the noise and obtain M'deblur(x, y).

In the final stage of the pipeline, the rotation of M'deblur(x, y) with the angle θ' = −θ is conducted. The backwards warping block is the same as the forward warping block, except for the θ value, which is the negative of the forward warping angle. The same shearing process is applied to the deblurred block to obtain M'warped(x, y). From the M'warped(x, y) block, the middle 2 × 2 subset is taken as the back-projected error.

D. Parallelization

The system overview of the iterative back-projection algorithm is shown in Fig. 4. The process can run in parallel for each image i, and the same HR macro block can be used for each operation. Finally, the calculated errors are summed with an adder tree whose size depends on the number of LR observations, as seen in Fig. 5. The final summed error E(n)(X,Y) is multiplied by a constant factor. The observed error is then added to the current HR estimate X(n).

[Figure 5: parallel IBP blocks feeding an adder tree that produces E(n)(X,Y)]

Figure 5. Parallelisation of iterative back projection blocks

E. Simulation Results

A fixed-point version of the proposed algorithm was implemented in MATLAB with different input image sets. Several shifted, rotated, blurred and subsampled sets of LR observations were created from an HR image, with the HR image taken as ground truth. Across the different data sets, the output of the algorithm had an average PSNR value of 30.34 dB.

V. IMPLEMENTATION RESULTS

The proposed iterative back-projection hardware is implemented using VHDL. The models are mapped to a Virtex-7

XC7VX485T FPGA. A single super-resolution module consumes 0.96% of the LUTs and 0.98% of the DFFs. The FPGA utilization scales proportionally with the number of LR observations that need to work in parallel. The proposed hardware operates at 265 MHz after place & route. The pipelined process flow can process a 2 × 2 block every Ncyc = 9 cycles. At this operating frequency, the system can construct a 512 × 512 HR image at 25 fps using up to Nit = 17 iterations; apart from the adder tree, the cycle count does not depend on the number of images. The required number of cycles per second for the SR implementation can be calculated as

    (M/2) × (N/2) × Nit × fps × Ncyc + Lattree    (5)

where Lattree corresponds to the adder tree delay in the final stage.

The number of iterations that can be conducted for different HR image sizes can be seen in Table I. During the simulations, most of the LR sets converged below the minimum error threshold in fewer than 10 iterations. As few as 5 iterations can produce good results with a sufficient number of LR observations.

Table I
NUMBER OF ITERATIONS FOR DIFFERENT HR IMAGE RESOLUTIONS

HR          256 × 256   512 × 512   720 × 1280   1024 × 1024
Iterations      71          17           5             4

VI. DISCUSSION AND FUTURE WORK

In this work, a novel hardware implementation for multiple-frame super-resolution is presented. The presented hardware is easily scalable in terms of the number of low-resolution observations and the final high-resolution output size. The presented system is implemented in the hardware description language VHDL, and synthesized, placed and routed for Xilinx Virtex-7 FPGAs using the Vivado synthesis tool. The implemented hardware operates in real time at 25 fps and reconstructs a 512 × 512 HR image from 256 × 256 LR observations with 20 iterations. To the best of our knowledge, this is currently the fastest FPGA implementation of a real-time super-resolution algorithm. Currently, an optical-flow-based real-time image registration algorithm is under development in order to create a complete real-time multiple-image SR system.

REFERENCES

[1] S. C. Park, M. K. Park, and M. G. Kang, "Super-resolution image reconstruction: a technical overview," IEEE Signal Processing Magazine, vol. 20, no. 3, pp. 21–36, 2003.
[2] P. Milanfar, Super-Resolution Imaging. CRC Press, 2010, vol. 1.
[3] A. J. Theuwissen, "CMOS image sensors: state-of-the-art," Solid-State Electronics, vol. 52, no. 9, pp. 1401–1406, 2008, papers selected from the 37th European Solid-State Device Research Conference (ESSDERC). [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0038110108001317
[4] T. Komatsu, K. Aizawa, T. Igarashi, and T. Saito, "Signal-processing based method for acquiring very high resolution images with multiple cameras and its theoretical analysis," IEE Proceedings I (Communications, Speech and Vision), vol. 140, no. 1, pp. 19–24, 1993.
[5] W. T. Freeman, T. R. Jones, and E. C. Pasztor, "Example-based super-resolution," IEEE Computer Graphics and Applications, vol. 22, no. 2, pp. 56–65, 2002.
[6] D. Glasner, S. Bagon, and M. Irani, "Super-resolution from a single image," in ICCV, 2009. [Online]. Available: http://www.wisdom.weizmann.ac.il/ vision/SingleImageSR.html
[7] R. Tsai and T. S. Huang, "Multiframe image restoration and registration," Advances in Computer Vision and Image Processing, vol. 1, no. 2, pp. 317–339, 1984.
[8] O. Bowen and C.-S. Bouganis, "Real-time image super resolution using an FPGA," in International Conference on Field Programmable Logic and Applications (FPL 2008). IEEE, 2008, pp. 89–94.
[9] M. E. Angelopoulou, C.-S. Bouganis, P. Y. Cheung, and G. A. Constantinides, "FPGA-based real-time super-resolution on an adaptive image sensor," in Reconfigurable Computing: Architectures, Tools and Applications. Springer, 2008, pp. 125–136.
[10] T. Szydzik, G. Callico, and A. Nunez, "Efficient FPGA implementation of a high-quality super-resolution algorithm with real-time performance," IEEE Transactions on Consumer Electronics, vol. 57, no. 2, pp. 664–672, 2011.
[11] V. Bannore, "Iterative-interpolation super-resolution (IISR)," in Iterative-Interpolation Super-Resolution Image Reconstruction. Springer, 2009, pp. 19–50.
[12] K. Nasrollahi and T. B. Moeslund, "Super-resolution: a comprehensive survey," Machine Vision and Applications, vol. 25, no. 6, pp. 1423–1468, 2014.
[13] S. Farsiu, D. Robinson, M. Elad, and P. Milanfar, "Advances and challenges in super-resolution," International Journal of Imaging Systems and Technology, vol. 14, no. 2, pp. 47–57, 2004.
[14] S. Farsiu, M. Robinson, M. Elad, and P. Milanfar, "Fast and robust multiframe super resolution," IEEE Transactions on Image Processing, vol. 13, no. 10, pp. 1327–1344, 2004.
[15] A. Zomet, A. Rav-Acha, and S. Peleg, "Robust super-resolution," in CVPR 2001, vol. 1. IEEE, 2001, pp. I-645.
[16] M. Irani and S. Peleg, "Improving resolution by image registration," CVGIP: Graphical Models and Image Processing, vol. 53, no. 3, pp. 231–239, 1991. [Online]. Available: http://www.sciencedirect.com/science/article/pii/104996529190045L
[17] B. K. Horn and B. G. Schunck, "Determining optical flow," Artificial Intelligence, vol. 17, no. 1–3, pp. 185–203, 1981. [Online]. Available: http://www.sciencedirect.com/science/article/pii/0004370281900242
[18] S. Baker, D. Scharstein, J. P. Lewis, S. Roth, M. J. Black, and R. Szeliski, "A database and evaluation methodology for optical flow," International Journal of Computer Vision, vol. 92, no. 1, pp. 1–31, Mar. 2011. [Online]. Available: http://dx.doi.org/10.1007/s11263-010-0390-2
[19] B. D. Lucas, T. Kanade et al., "An iterative image registration technique with an application to stereo vision," in Proceedings of the 7th International Joint Conference on Artificial Intelligence, 1981.
[20] D. Keren, S. Peleg, and R. Brada, "Image sequence enhancement using sub-pixel displacements," in Proceedings CVPR'88. IEEE, 1988, pp. 742–746.
[21] D. Barreto, L. Alvarez, and J. Abad, "Motion estimation techniques in super-resolution image reconstruction: a performance evaluation," in Proceedings of the International Workshop Virtual Observatory: Plate Content Digitalization, Archive Mining and Image Sequence Processing. Citeseer, 2005, pp. 254–268.
[22] G. Callico, S. Lopez, O. Sosa, J. Lopez, and R. Sarmiento, "Analysis of fast block matching motion estimation algorithms for video super-resolution systems," IEEE Transactions on Consumer Electronics, vol. 54, no. 3, pp. 1430–1438, 2008.
[23] M. Unser, P. Thevenaz, and L. Yaroslavsky, "Convolution-based interpolation for fast, high-quality rotation of images," IEEE Transactions on Image Processing, vol. 4, no. 10, pp. 1371–1381, Oct. 1995.
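As a numerical cross-check of Eq. (5), the following snippet computes the largest iteration count Nit that fits in the 265 MHz cycle budget at 25 fps with Ncyc = 9, neglecting the adder-tree latency Lattree; it reproduces the iteration counts of Table I.

```python
def max_iterations(m, n, f_clk=265e6, fps=25, n_cyc=9):
    """Largest Nit with (M/2)(N/2) * Nit * fps * Ncyc <= f_clk (Lattree ~ 0)."""
    cycles_per_iteration = (m // 2) * (n // 2) * n_cyc * fps
    return int(f_clk // cycles_per_iteration)

for m, n in [(256, 256), (512, 512), (720, 1280), (1024, 1024)]:
    print(f"{m} x {n}: {max_iterations(m, n)} iterations")
# 256 x 256: 71, 512 x 512: 17, 720 x 1280: 5, 1024 x 1024: 4
```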
