7060 Image Sensors & Processing / 4061 Image Processing
Assignment 2
August 2018
(Total Marks: 12%)
Overview
This assignment is intended to give you some hands-on experience with manipulating image data in
MATLAB and the basic image manipulation concepts covered in the course during weeks 4-6.
In this assignment you will be required to modify several functions which implement spatial
transformations and Fourier filtering.
IMPORTANT: No routines from the image processing toolbox (such as histeq, medfilt2, imfilter,
etc.) may be used in your part of the assignment solutions unless specifically advised. Using such
routines will result in zero marks for that component of the assignment work.
Assignment Submission
Assignments are to be submitted as a standard ZIP archive file (please do NOT use other formats such
as 7z, RAR, PKZIP) containing the key MATLAB functions and any test code you have written, together
with a Word (MS Office) or PDF document summarising your results, with comments on the performance
of the processing steps employed (in 1A-1D) and written answers to the questions (1E).
• All submitted materials MUST be your own individual original work.
• Marks will be deducted for late submission or incorrect submission format (please check
myUni for the due date)
As part of the submission, you are expected to write up the results of experimenting with your code
for different settings and images. Do not simply submit your code with screen shots of the basic test
script output.
Source Materials
Source code, test functions and example imagery for this assignment are located on the MyUni website
(https://www.myuni.adelaide.edu.au). You are welcome to add your own example imagery for testing
and reporting purposes.
DO YOU NEED HELP? - The coding required to complete this assignment is not intended to be
complicated but it does assume familiarity with MATLAB programming. If you are having problems
getting your code working, then please ask questions during the tutorial session or come and see me
during my consulting times.
Exercise 2A – ( 5.5% ) – Image Transformations and Resampling
You have been assigned the implementation of some image transformation functions, including
partly correcting for lens distortion. To do this, however, you will first need to complete a
resampling function and implement bi-linear and bi-cubic interpolation.
STEP 1: Resampling – modify the supplied function iminterp() such that it implements both bi-linear
and bi-cubic interpolation.
This function creates a resampled image based on the relationship between each output pixel and its
corresponding position within the original image data. In other words, this function uses information
about the inverse transformation from each output pixel back into the original image data to perform
the resampling.
The function takes four inputs: the image I, two arrays x and y, and a string value (one of
‘nearest’, ‘bilinear’ or ‘bicubic’). The two arrays define each corresponding output pixel’s x and y
position in terms of the original image data. For example, if x(2,2)=1.6 and y(2,2)=3.1 then pixel (2,2)
in the output image corresponds to location (1.6,3.1) in the input image.
Currently the function only supports nearest neighbour interpolation, which would mean that pixel
(2,2) in the above example would be given the intensity of the original image data at location (2,3).
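To make the nearest-neighbour behaviour concrete, here is a small sketch (written in Python/NumPy purely for illustration — the assignment itself must be done in MATLAB, and note that Python uses 0-based indexing while MATLAB is 1-based; the function name iminterp_nearest is hypothetical):

```python
import numpy as np

def iminterp_nearest(I, x, y):
    """Nearest-neighbour resampling: output pixel (i, j) takes the value of
    I at the rounded position given by x[i, j] and y[i, j].
    Note: indices here are 0-based (Python), unlike MATLAB's 1-based indexing."""
    rows = np.clip(np.round(y).astype(int), 0, I.shape[0] - 1)  # y -> row index
    cols = np.clip(np.round(x).astype(int), 0, I.shape[1] - 1)  # x -> column index
    return I[rows, cols]
```

The clipping simply keeps rounded positions inside the image; your MATLAB version may handle out-of-range samples differently.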
Your task here is to complete this function and implement bi-linear and bi-cubic interpolation.
1.1: Bi-Linear Interpolation
As described in lectures, bi-linear interpolation is based on the 2x2 neighbourhood around the sample
point (Px, Py) of interest. The bi-linear estimate from this patch is calculated by:

    I(p) = (1-a)(1-b)*I(x,y) + a*(1-b)*I(x+1,y) + (1-a)*b*I(x,y+1) + a*b*I(x+1,y+1)

    where a = Px - x and b = Py - y.

Here (x,y) is the nearest pixel to (Px, Py) rounded down and (a,b) are the rounding-off errors. In
MATLAB use the floor() function to round down (Px, Py). Remember MATLAB uses (row,column)
indexing not (x,y) coordinates. It is strongly suggested you test your bi-linear code before
implementing bi-cubic interpolation.
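A direct transcription of this expression (again a Python/NumPy sketch for illustration only; bilinear_sample is a name chosen here, not part of the supplied code) might look like:

```python
import numpy as np

def bilinear_sample(I, px, py):
    """Bi-linear estimate of image I at fractional location (px, py).
    I is indexed I[y, x], i.e. (row, column), mirroring MATLAB's convention."""
    x, y = int(np.floor(px)), int(np.floor(py))  # nearest pixel, rounded down
    a, b = px - x, py - y                        # rounding-off errors
    return ((1 - a) * (1 - b) * I[y, x]
            + a * (1 - b) * I[y, x + 1]
            + (1 - a) * b * I[y + 1, x]
            + a * b * I[y + 1, x + 1])
```

A useful sanity check is that the estimate at the centre of a 2x2 patch is the mean of the four pixel values.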
1.2: Bi-Cubic Interpolation
In the case of bi-cubic interpolation, a 4x4 neighbourhood around the point of interest is used for the
estimation (centred on element (2,2)). To do this we will start with the 1D bi-cubic interpolation
expression presented in lectures as follows:
    h(t) = 1/2 * [1  t  t^2  t^3] * M * [x_-1; x_0; x_1; x_2]

    where M = [  0   2   0   0
                -1   0   1   0
                 2  -5   4  -1
                -1   3  -3   1 ]

where x_-1, x_0, x_1 and x_2 are the four samples around the point of interest and t is the fractional
offset of interest between samples x_0 and x_1.
To use this expression for bi-cubic interpolation, we firstly calculate h(t) for each row of the 4x4 region
of interest where t is the offset in the x-direction (same as a in the bi-linear expression earlier) and
then use these four values to calculate h(t) where t is the offset in the y-direction. This is visually
illustrated below where the blue represents the four interpolations in the x-direction and the red the
final one in the y-direction.
For points near the boundary of the image, where there is no full 4x4 neighbourhood, simply use the
nearest value. A test script iminterp_test.m has been supplied so you can check your calculations. An example
output from this is shown on the next page.
Remember that MATLAB natively handles array multiplication, which significantly simplifies the
calculation of h(t) above.
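Because the row vector, the 4x4 matrix and the sample column are just one matrix product, the whole of h(t) is a single expression. A sketch in Python/NumPy (illustration only; the names h and M are chosen here):

```python
import numpy as np

# The 4x4 interpolation matrix from the 1D bi-cubic expression above.
M = 0.5 * np.array([[ 0,  2,  0,  0],
                    [-1,  0,  1,  0],
                    [ 2, -5,  4, -1],
                    [-1,  3, -3,  1]], dtype=float)

def h(t, samples):
    """Interpolate between samples[1] (x_0) and samples[2] (x_1) at
    fractional offset t, where samples = [x_-1, x_0, x_1, x_2]."""
    tv = np.array([1.0, t, t**2, t**3])  # row vector [1 t t^2 t^3]
    return tv @ M @ np.asarray(samples, dtype=float)
```

A quick check of the matrix: at t = 0 the result is exactly x_0, and at t = 1 it is exactly x_1, so the interpolant passes through the middle two samples.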
Test your code on other examples and write up your observations in your short report. Did the bi-
cubic interpolation improve the estimation in your examples?
Above: Example output from iminterp_test.m which applies 2 x scaling and a small rotation to the
input data. Note the differences in estimation between nearest, bilinear and bicubic interpolation.
(Figure panels: Input, Output - nearest, Output - bilinear, Output - bicubic.)
STEP 2: Simple Lens Distortion Correction – modify the function lensdistortion() such that it
implements the simple form of lens distortion correction described below. In class the following was
described as a simple approximation to lens distortion:
    r_d = r (1 + k_1 r^2 + k_2 r^4 + k_3 r^6)

where r is the radius (distance) of the pixel from the image centre in the undistorted image, r_d is the
radius of the pixel in the distorted image and k_1 etc. are constants determining the level of distortion.
Unfortunately, inverting this expression to estimate r based on r_d is non-trivial; however, a simple
approximate solution for undoing this distortion, which estimates r based on r_d, is as follows:

    r = r_d (1 + k_1 r_d^2 + k_2 r_d^4 + k_3 r_d^6)

where k_1 etc. are again constants (albeit slightly different ones).
Given a set of correction factors k1, k2 and k3, to implement this solution, you will need to:
1. Determine the row and col offset for each pixel in the output relative to the image centre and
then estimate the distorted radius r_d for each pixel from the row and col offset. Assume the
image centre (row_c, col_c) is the midpoint (ie. centre) of the image (rounded down).
2. Estimate r for each pixel based on the expression shown earlier and multiply each row and
col offset by r to get the row and col offsets into the distorted image.
3. Offset these by the image centre (row_c, col_c) to get them into normal image coordinates.
4. Use these values and call iminterp() to construct the undistorted image (if you have step 1
working you may use iminterp with ‘bilinear’ or ‘bicubic’, otherwise simply use ‘nearest’).
To keep things simple the output image will be the same size as the input image. If you choose to use
(x,y) coordinates rather than (row,col) be careful not to mix them up when indexing arrays.
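The mapping in the numbered steps can be sketched as follows (Python/NumPy for illustration only; the assignment itself uses MATLAB. Treating the offset multiplier as the polynomial factor 1 + k1*r_d^2 + k2*r_d^4 + k3*r_d^6 is an interpretation made here, as is the function name undistort_coords):

```python
import numpy as np

def undistort_coords(rows, cols, k1, k2, k3):
    """For each output pixel, compute the (row, col) position at which to
    sample the distorted input image, per the approximate inverse model.
    ASSUMPTION: offsets are scaled by 1 + k1*rd^2 + k2*rd^4 + k3*rd^6."""
    rc, cc = rows // 2, cols // 2                    # image centre, rounded down
    r_idx, c_idx = np.meshgrid(np.arange(rows), np.arange(cols), indexing='ij')
    dr, dc = r_idx - rc, c_idx - cc                  # offsets from the centre
    rd = np.sqrt(dr**2 + dc**2)                      # distorted radius per pixel
    scale = 1 + k1 * rd**2 + k2 * rd**4 + k3 * rd**6
    return dr * scale + rc, dc * scale + cc          # back to image coordinates
```

These coordinate maps are exactly the x and y arrays that iminterp() expects, so the final step is a single call to it.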
Use the test script lens_test.m to check your solution (see example output on the next page).
Experiment and write up your results. An example output generated using this code is shown on the
next page. Note that the result will still contain small imperfections.
Above: An example output from lens_test.m showing the results before and after distortion correction.
(Figure panels: Input 1 (pincushion), Input 1 (fixed), Input 2 (barrel), Input 2 (fixed).)
Exercise 2B – (4.5%) – De-Noising using a Butterworth and a Wiener Filter
In another part of the company you work for, they are experiencing problems with excessive Gaussian
noise in imagery coming out of a video capture system that is causing delays in their development
work. You have been asked to help them out by implementing two de-noising functions for them that
will be used to clean up the imagery. These involve a simple Butterworth filter and a Wiener de-noising
filter both of which need to be implemented in the frequency domain.
Above: An example noise-free and noisy image (see noisy_image.m).
STEP 1: Butterworth filter – Firstly modify the function fft_filter() such that given an image I and a
set of FFT filter coefficients B (where the coefficients are shifted such that the array centre represents
zero frequency) the function returns the FFT filtered version of image I. That is:
I2 = fft_filter(I,B);
Use the functions fft2(), ifft2(), real(), fftshift(), ifftshift() and meshgrid() to achieve this.
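One possible shape for this function, sketched in Python/NumPy for illustration (the MATLAB version uses the listed functions in the same arrangement):

```python
import numpy as np

def fft_filter(I, B):
    """Apply centred frequency-domain coefficients B to image I.
    B is shifted so the array centre is zero frequency, as described above."""
    F = np.fft.fft2(I)                        # spectrum with DC at the corner
    F_filtered = F * np.fft.ifftshift(B)      # move B's centre back to the corner
    return np.real(np.fft.ifft2(F_filtered))  # return to the spatial domain
```

Note that ifftshift (not fftshift) is used on B so that odd-sized filters are un-shifted correctly; with B set to all ones the function should return the input image unchanged.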
Once completed, modify the function butterworth() such that this function returns a set of FFT filter
coefficients representing the Butterworth filter discussed in class. The function butterworth() is called
as follows:
B = butterworth(width, height, order, cutoff, x0,y0)
Where width and height refer to the size of the filter, order and cutoff are the filter order and
frequency cut-off. The values x0 and y0 are offsets of the Butterworth filter relative to zero frequency.
If not supplied x0 and y0 both default to zero.
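The handout's exact expression is given below; as a sketch, assuming the standard low-pass form H = 1 / (1 + (D/cutoff)^(2*order)) with D the distance from the (offset) centre, the coefficient array could be built like this (Python/NumPy, illustration only — check the form against the expression actually given):

```python
import numpy as np

def butterworth(width, height, order, cutoff, x0=0, y0=0):
    """Centred Butterworth low-pass coefficients.
    ASSUMPTION: standard form H = 1 / (1 + (D/cutoff)^(2*order)),
    where D is the distance from the array centre offset by (x0, y0)."""
    cx, cy = width // 2 + x0, height // 2 + y0        # zero-frequency position
    x, y = np.meshgrid(np.arange(width), np.arange(height))
    D = np.sqrt((x - cx)**2 + (y - cy)**2)            # distance from centre
    return 1.0 / (1.0 + (D / cutoff)**(2 * order))
```

At the centre D = 0, so H = 1 there, falling off towards the array edges; higher orders give a sharper transition around the cut-off frequency.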
The expression for the Butterworth filter weights (in the Fourier domain) is as follows:
???????(?,