Abstract
With the growth of multimedia technology over the past decades, the demand for digital information has increased dramatically. This enormous demand is difficult for current technology to handle. One approach to overcoming this problem is to compress the information by removing the redundancies present in it. This is the lossy compression scheme that is often used to compress information such as digital images.
The main objective of this thesis is to study and implement the operations used in a lossy compression scheme to compress two-dimensional images. Basically, this scheme consists of three operations: transform, quantization and entropy encoding. First, the wavelet transform is used to analyze the non-stationary phenomena that are often found in two-dimensional digital images. Then, scalar quantization is applied to remove the redundancies in the images with no visible loss under normal viewing. Finally, an entropy encoder is used to compress the image efficiently, producing a large reduction in image size. To reconstruct the compressed image, the operations are reversed.
Software simulating the lossy compression scheme is developed using Matlab 6. This software provides basic image analysis as well as the compression and decompression operations. The results obtained from the simulation are discussed and conclusions are drawn for future studies.
1.1 The Need for Compression
With the rapid development of Internet and multimedia technologies, the amount of information handled by computers has grown exponentially over the past decades. This information requires a large amount of storage space and transmission bandwidth that current technology cannot handle technically and economically. One possible solution to this problem is to compress the information so that the storage space and transmission time can be reduced. In this thesis, we shall focus on the compression of still images. A common characteristic found in most images is that they contain redundant information. This redundant information can be classified as [9] (a short illustration follows the list below):
· Spatial redundancy - Correlation between neighboring pixel values
· Spectral redundancy - Correlation between different spectral bands
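As a rough illustration of spatial redundancy, the following sketch (not part of the thesis listings) estimates the correlation between horizontally neighboring pixels of a gray-scale image; natural images typically show a correlation close to 1. The image file name is only a placeholder.
% Estimate spatial redundancy as the correlation between
% horizontally adjacent pixels of a gray-scale image.
img = double(imread('lena.bmp'));      % placeholder file name
left  = img(:, 1:end-1);               % each pixel
right = img(:, 2:end);                 % its right-hand neighbor
c = corrcoef(left(:), right(:));
disp(c(1,2));                          % close to 1 for natural images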
1.2 Introduction to Image Compression
In general, image compression techniques can be broadly classified into:
· Lossless compression
· Lossy compression
In lossless compression, every bit of information is preserved during the compression process. The reconstructed image after decompression is an exact replica of the original one. Such a scheme achieves only a modest compression ratio. It is used in applications where no loss of image data can be tolerated. In lossy compression, a perfect reconstruction of the image is sacrificed by eliminating some amount of redundancy in the image to achieve a higher compression ratio. However, no visible loss of information is perceived under normal viewing conditions. The type of image compression scheme that we will be focusing on is the lossy compression scheme.
2 Objective
This thesis aims to study and understand the general operations used to compress two-dimensional gray-scale images and to develop an application that allows compression and reconstruction to be carried out on such images. The application developed aims to achieve:
· Minimum distortion
· High compression ratio
· Fast computation time
To compress an image, the operations include a linear transform, quantization and entropy encoding. This thesis will study the wavelet-based transform and discuss the superior features that it has over the standard Fourier transform. It will also look into quantization and discuss how this operation helps to reduce the volume of the image data before packing it efficiently in the entropy coding operation. To reconstruct the image, an inverse operation is performed at every stage of the system in the reverse order of the image decomposition. A sketch of how these stages chain together is given below.
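The appendices do not include a top-level driver, so the following is only a minimal sketch of how the listed functions might be chained together end-to-end; the image file name, the filter option fopt (passed to getFilter, which is not reproduced in the appendices) and the number of quantization levels are illustrative assumptions.
% Illustrative end-to-end run of the compression scheme (assumed parameters).
img   = double(imread('lena.bmp'));                % placeholder test image
fopt  = 1;                                         % assumed filter option for getFilter
level = 2;                                         % number of decomposition levels
% Forward path: transform -> quantize -> entropy encode
wc = decompose(img, fopt, level);                  % 2D forward DWT (Appendix 1)
[q, minv, maxv, step] = uniformQuantizer(wc(:)', 256);     % scalar quantization (Appendix 2)
[sym, cwbits, tlen, bits] = huffman(q);            % entropy encoding (Appendix 3)
% Reverse path: decode -> dequantize -> inverse transform
qd  = dehuffman(sym, bits, cwbits(:)', tlen);      % codewordbits flattened to 1-D as dehuffman expects
dq  = uniformDequantizer(qd, minv, maxv, step);
rec = reconstruct(reshape(dq, size(wc)), fopt, level);     % 2D inverse DWT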
1.3 Overview
This thesis is organized as follows:
Chapter 1 - Introduction to Image Compression.
Chapter 2 - Introduction to Linear Transforms. Types of transforms to be discussed include the Fourier Transform, Short-Time Fourier Transform, Continuous Wavelet Transform and Discrete Wavelet Transform.
Chapter 3 - Introduction to Quantization. Types of scalar quantization that we will be discussing are Uniform Quantization, Subband Uniform Quantization, Uniform Dead-zone Quantization and Non-uniform Quantization.
Chapter 4 - Introduction to Entropy Encoding. Types of encoders that we will discuss are the Huffman and Run-length encoders.
Chapter 5 - Discussion of the implementation of the algorithms that we have studied.
Chapter 6 - Discussion of the types of metrics used to measure the compression system.
Chapter 7 - Analysis and discussion of the results obtained from the simulations.
Chapter 8 - Summary and conclusions of the results obtained.
Chapter 9 - Future improvements.
Appendix 1
Wavelet Transformation (Source Listing)
%**************************************************************
% Filename : decompose.m
% Purpose : Decompose the image using 2d forward discrete wavelet transform
%**************************************************************
function wc = decompose(image,fopt,level)
[n_row,n_col] = size(image);
if n_row ~= n_col,
disp('row and col not the same')
%do padding
end
% For Decomposition
% lpfilter : C1 C2 C3 C4
% hpfilter : C4 -C3 C2 -C1
lpfilter = getFilter(fopt);
le = length(lpfilter);
for i=1:le
hpfilter(i) = ((-1)^i) * lpfilter(i); %set hpfilter = -C1 C2 -C3 C4
end;
hpfilter = hpfilter(le:-1:1); %set hpfilter = C4 -C3 C2 -C1
wc = image;
for i=1:level
subimage = wc(1:n_row,1:n_col);
sub_wc = analysisFB(subimage,lpfilter,hpfilter);
wc(1:n_row,1:n_col) = sub_wc;
n_row = n_row/2;
n_col = n_col/2;
end;
%**************************************************************
% Filename : analysisFB.m
% Purpose : Analysis filter bank for one level of the 2D forward discrete wavelet transform
%**************************************************************
function wc = analysisFB(data,lpfilter,hpfilter)
[n_row,n_col] = size(data);
n = n_row;
le = length(lpfilter);
%###################### operation on the rows #############################
pad_data = [data data(:,1:le)];
lpcoeff = downsampleConv(pad_data,lpfilter,le,n);
hpcoeff = downsampleConv(pad_data,hpfilter,le,n);
%transpose the matrix
data = [lpcoeff hpcoeff]';
%###################### operation on the columns ###########################
pad_data = [data data(:,1:le)];
lpcoeff = downsampleConv(pad_data,lpfilter,le,n);
hpcoeff = downsampleConv(pad_data,hpfilter,le,n);
%transpose the matrix again
wc = [lpcoeff,hpcoeff]';
return;
%local function
function downsample_coeff = downsampleConv(coeff,filter,le,n)
downsample_coeff = conv2(coeff,filter);
downsample_coeff = downsample_coeff(:, le:2:(le+n-1));
return;
%**************************************************************
% Filename : reconstruct.m
% Purpose : Reconstruct the image using 2D inverse discrete wavelet transform
%**************************************************************
function wc = reconstruct(image,fopt,level)
[n_row,n_col] = size(image);
if n_row ~= n_col,
disp('row and col not the same')
%do padding
end
% For Reconstruction
% lpfilter : C4 C3 C2 C1
% hpfilter : -C1 C2 -C3 C4
lpfilter = getFilter(fopt);
le = length(lpfilter);
for i=1:le
hpfilter(i) = ((-1)^i) * lpfilter(i); %set hpfilter = -C1 C2 -C3 C4
end;
lpfilter = lpfilter(le:-1:1); %set lpfilter = C4 C3 C2 C1
wc = image;
n_row = n_row/(2^level);
n_col = n_col/(2^level);
for i=1:level,
n_row = n_row*2;
n_col = n_col*2;
subimage = wc(1:n_row,1:n_col);
sub_wc = synthesisFB(subimage,lpfilter,hpfilter);
wc(1:n_row,1:n_col) = sub_wc;
end;
return;
%**************************************************************
% Filename : synthesisFB.m
% Purpose : Synthesis filter bank for one level of the 2D inverse discrete wavelet transform
%**************************************************************
function wc = synthesisFB(data,lpfilter,hpfilter)
[n_row,n_col] = size(data);
le = length(lpfilter);
index = (le/2);
%############## operation on the columns #############################
range = n_row/2;
lpcoeff = data(1:range,:);
lpcoeff = lpcoeff'; %transpose
lpcoeff = [lpcoeff(:, range-index+1:range) lpcoeff];
hpcoeff = data(range+1:n_row,:);
hpcoeff = hpcoeff'; %transpose
hpcoeff = [hpcoeff(:,range-index+1:range) hpcoeff];
x = upsampleConv(lpcoeff,lpfilter);
y = upsampleConv(hpcoeff,hpfilter);
data = x + y; %matrix addition
data = data(:,le+1:le+n_row)'; %transpose the matrix for row operations
%###################### operation on the rows ###########################
range = n_col/2;
lpcoeff = data(:,1:range);
lpcoeff = [lpcoeff(:,range-index+1:range) lpcoeff];
hpcoeff = data(:,range+1:n_col);
hpcoeff = [hpcoeff(:,range-index+1:range) hpcoeff];
x = upsampleConv(lpcoeff,lpfilter);
y = upsampleConv(hpcoeff,hpfilter);
wc = x + y; %matrix addition
wc = wc(:,le+1:le+n_col);
return;
%local function
function upsample_coeff = upsampleConv(coeff,filter)
[row,col] = size(coeff);
upsample_coeff = zeros(row,2*col);
upsample_coeff(:,1:2:2*col) = coeff;
upsample_coeff = conv2(upsample_coeff,filter);
return;
Appendix 2
Quantization (Source Listing)
%**************************************************************
% Filename : quantization.m
% Purpose : Quantization; getting the index
%**************************************************************
function index = quantization(indata,partition)
indata = indata(:);
nsize = length(indata);
index = zeros(nsize, 1);
n = length(partition);
for i = 1 : n
index = index + (indata >= partition(i));
end;
return;
%**************************************************************
% Filename : uniformQuantizer.m
% Purpose : Quantize input data uniformly
%**************************************************************
function [outdata,minvalue,maxvalue,stepsize] = uniformQuantizer(indata,quantlevels)
%indata - 1 D
%outdata - 1 D
minvalue = min(indata);
maxvalue = max(indata);
stepsize = abs((maxvalue-minvalue)/quantlevels);
partition = [minvalue:stepsize:maxvalue];
index = quantization(indata,partition); %get the indicies
outdata = reshape(index,1,length(index));
return;
%**************************************************************
% Filename : uniformDequantizer.m
% Purpose : Dequantize input data uniformly
%**************************************************************
function outdata = uniformDequantizer(indata,minvalue,maxvalue,stepsize)
%indata - 1 D
%outdata - 1 D
partition = [minvalue:stepsize:maxvalue];
outdata = partition(indata) + (stepsize/2);
return;
%**************************************************************
% Filename : subbanduniformQuantizer.m
% Purpose : Quantize subband data uniformly
%**************************************************************
function[outdata,minvalue,maxvalue,stepsize]=subbanduniformQuantizer(indata,level,subbandquantlevel,nsize)
% indata - 1 D data
% outdata - 1 D data
n_row = nsize;
n_col = nsize;
indata = reshape(indata,n_row,n_col);
outdata = indata;
index = 1;
for i=1:level,
HLimg = indata(1:n_row/2,(n_col/2)+1:n_col);
LHimg = indata((n_row/2)+1:n_row,1:n_col/2);
HHimg = indata((n_row/2)+1:n_row,(n_col/2)+1:n_col);
tmp_img = [HLimg(:) LHimg(:) HHimg(:)];
tmp_img = tmp_img(:);
maxvalue(index) = max(tmp_img);
minvalue(index) = min(tmp_img);
stepsize(index) = (maxvalue(index) - minvalue(index))/subbandquantlevel(i);
partition = [minvalue(index):stepsize(index):maxvalue(index)];
%########### HL sector #######
HLimg = quantization(HLimg,partition); %return indices
HLimg = reshape(HLimg,(n_row/2),(n_col/2));
%########### LH sector #######
LHimg = quantization(LHimg,partition); %return indices
LHimg = reshape(LHimg,(n_row/2),(n_col/2));
%########### HH sector #######
HHimg = quantization(HHimg,partition); %return indices
HHimg = reshape(HHimg,(n_row/2),(n_col/2));
outdata(1:n_row/2,(n_col/2)+1:n_col) = HLimg;
outdata((n_row/2)+1:n_row,1:n_col/2) = LHimg;
outdata((n_row/2)+1:n_row,(n_col/2)+1:n_col) = HHimg;
n_row = n_row/2;
n_col = n_col/2;
index = index + 1;
end;
%finally LL sub-band
LLimg = indata(1:n_row,1:n_col);
maxvalue(index) = max(max(LLimg));
minvalue(index) = min(min(LLimg));
stepsize(index) = (maxvalue(index) - minvalue(index))/subbandquantlevel(i);
partition = [minvalue(index):stepsize(index):maxvalue(index)];
%########### LL sector #######
LLimg = quantization(LLimg,partition);
LLimg = reshape(LLimg,n_row,n_col);
outdata(1:n_row,1:n_col) = LLimg;
outdata = reshape(outdata,1,nsize*nsize);
return;
%**************************************************************
% Filename : subbanduniformDequantizer.m
% Purpose : Dequantize subband data uniformly
%**************************************************************
function outdata=subbanduniformDequantizer(indata,level,minvalue,maxvalue,stepsize,nsize)
% indata - 1 D data - contains indices
% outdata - 1 D data
indata = reshape(indata,nsize,nsize);
[n_row,n_col] = size(indata);
outdata = indata;
index = 1;
for i=1:level,
partition = [minvalue(index):stepsize(index):maxvalue(index)];
%########### HL #############
HLimg = indata(1:n_row/2,(n_col/2)+1:n_col);
HLimg = reshape(HLimg,1,(n_row/2)*(n_col/2));
HLimg = partition(HLimg)+(stepsize(index)/2);
HLimg = reshape(HLimg,(n_row/2),(n_col/2));
%######### LH #########
LHimg = indata((n_row/2)+1:n_row,1:n_col/2);
LHimg = reshape(LHimg,1,(n_row/2)*(n_col/2));
LHimg = partition(LHimg)+(stepsize(index)/2);
LHimg = reshape(LHimg,(n_row/2),(n_col/2));
%############ HH ############
HHimg = indata((n_row/2)+1:n_row,(n_col/2)+1:n_col);
HHimg = reshape(HHimg,1,(n_row/2)*(n_col/2));
HHimg = partition(HHimg)+ (stepsize(index)/2);
HHimg = reshape(HHimg,(n_row/2),(n_col/2));
outdata(1:n_row/2,(n_col/2)+1:n_col) = HLimg;
outdata((n_row/2)+1:n_row,1:n_col/2) = LHimg;
outdata((n_row/2)+1:n_row,(n_col/2)+1:n_col) = HHimg;
n_row = n_row/2;
n_col = n_col/2;
index = index + 1;
end;
%finally LL sub-band
partition = [minvalue(index):stepsize(index):maxvalue(index)];
%############ LL ############
LLimg = indata(1:n_row,1:n_col);
LLimg = reshape(LLimg,1,n_row*n_col);
LLimg = partition(LLimg)+ (stepsize(index)/2);
LLimg = reshape(LLimg,n_row,n_col);
outdata(1:n_row,1:n_col) = LLimg;
outdata = reshape(outdata,1,nsize*nsize);
return;
%**************************************************************
% Filename : uniformDeadZoneQuantizer.m
% Purpose : Dead Zone Quantization
%**************************************************************
function [outdata,minvalue,maxvalue,stepsize] = uniformDeadZoneQuantizer(wdata,quantlevel,threshold,option)
if option == 1
%-ive value
index = find (wdata <= threshold);
else
%+ive values
index = find (wdata >= threshold);
end
n = length(wdata);
outdata = zeros(1,n);
quantsOne=wdata(index);
[quants,minvalue,maxvalue,stepsize] = uniformQuantizer(quantsOne,quantlevel);
outdata(index)=quants;
return
%**************************************************************
% Filename : uniformDeadZoneDequantizer.m
% Purpose : Dead Zone Dequantization
%**************************************************************
function outdata = uniformDeadZoneDequantizer(indata,minvalue,maxvalue,stepsize,qlevel,option)
n = length(indata);
outdata = zeros(1,n);
d_indata = indata;
if option == 1
d_index = find((indata <= (qlevel + 1)) & (indata ~= 0));
quants = indata(d_index);
else
d_index = find(indata > (qlevel + 1));
quants = indata(d_index);
quants =quants - qlevel - 1;
end
partition = [minvalue:stepsize:maxvalue];
quants = partition(quants) + (stepsize/2);
outdata(d_index)=quants;
return
%**************************************************************
% Filename : nonuniformQuantizer.m
% Purpose : Compressor
%**************************************************************
function outdata = nonuniformQuantizer(indata,par,option,maxvalue,nsize)
%indata - 1d data
%outdata - 1d data
indata = reshape(indata,1,nsize*nsize);
if option == 1 %mu_law
mu = par;
outdata = u_law_quant(indata,mu,maxvalue);
end
return
function [quantdata,stepsize] = u_law_quant(indata,mu,maxvalue)
quantdata = (maxvalue/log(1 + mu)) * log(1 + ((mu*abs(indata))/maxvalue)) .* sign(indata);
return
%**************************************************************
% Filename : nonuniformDequantizer.m
% Purpose : Expander
%**************************************************************
function outdata = nonuniformDequantizer(indata,par,option,maxvalue)
% indata - 1 D
if option == 1
%u_law
mu = par;
outdata = u_law_dequant(indata,mu,maxvalue);
end
return
function outdata = u_law_dequant(indata,mu,maxvalue);
outdata = maxvalue / mu * (exp(abs(indata) * log(1 + mu) / maxvalue) - 1) .* sign(indata);
return
Appendix 3
Entropy Encoder (Source Listing)
%**************************************************************
% Filename : huffman.m
% Purpose : Encode input data using huffman encoder
%**************************************************************
function [symbol,codewordbits,tbitlen,compressedbits] = huffman(indata)
%indata = 1-D data
%get hist of the input data
[freq,sym] = histo(indata);
prob = freq / sum(freq);
%sort prob in descending order
[prob,index] = sort(-prob);
prob = abs(prob);
n = length(prob);
symbol = sym(index);
%create binarytree
binarytree = createBinaryTree(prob,n);
%create codeword
[codeword, codewordbits] = computeCodeWord(binarytree,n);
[bitstring, tbitlen] = encoder(codeword,symbol,indata);
compressedbits = compress(bitstring);
return;
function [freq,udata] = histo(indata)
%indata is 1D vector
count = length(indata);
freqcount = 1; prev = 1;
sortdata = sort(indata);
udata(prev) = sortdata(1);
freq(prev) = freqcount;
for curr=2:count,
if udata(prev) == sortdata(curr),
freqcount = freqcount + 1;
freq(prev) = freqcount;
else
prev = prev + 1;
freqcount = 1;
udata(prev) = sortdata(curr);
freq(prev) = freqcount;
end
end
return
function binarytree=createBinaryTree(prob,n)
prob = [prob zeros(1,n - 1) + 2];
binarytree = zeros(n-1, 3);
for i = 1:n-1,
[index1, index2] = find2minvalues(prob);
binarytree(i, 1) = prob(index1) + prob(index2);
prob(n + i) = binarytree(i, 1);
binarytree(i, 2) = index1;
binarytree(i, 3) = index2;
prob(index1) = 2; %reset
prob(index2) = 2; %reset
end
return
function [index1, index2] = find2minvalues(prob)
[minvalue index1] = min(prob);
prob(index1) = 2;
[minvalue index2] = min(prob);
return
function [codeword, codewordbits]=computeCodeWord(binarytree,n)
codeword = zeros((n * 2) - 2, 1) - 1;
x1 = binarytree(n-1,2);
codeword(x1,1) = 0;
x2 = binarytree(n-1,3);
codeword(x2,1) = 1;
codelen = 1;
for i = n-2:-1:1,
tcode = find(codeword(i + n,:) > -1);
tempcode = codeword(i + n,tcode);
tlen = length(tempcode);
%increase code length as required
[row, col] = size(codeword);
while col < (tlen + 1),
codeword = [codeword (zeros((n * 2) - 2, 1) - 1)];
col = col + 1;
codelen = codelen + 1;
end
x1 = binarytree(i,2);
codeword(x1, 1:tlen) = tempcode;
codeword(x1, tlen + 1) = 0;
x2 = binarytree(i,3);
codeword(x2, 1:tlen) = tempcode;
codeword(x2, tlen + 1) = 1;
end
codeword = codeword(1:n, :);
for i=1:n
nlen = length(find(codeword(i,:)> -1));
codewordbits(i,1) = nlen;
tbits = codeword(i,1:nlen); % store the number of bits for each codeword
binstr = sprintf('%d',tbits);
codewordbits(i,2) = bin2dec(binstr); % store the codeword
end
return
function [bitstring, tbitlen] = encoder(codeword,udata,wdata)
tbitlen = 0;
n = length(wdata);
loc = find(udata == wdata(1));
tmpbitstring = codeword(loc,:);
loci = find(tmpbitstring == -1);
if isempty(loci)
bitstring = tmpbitstring;
else
bitstring = tmpbitstring(1:loci-1);
end
for i=2:n,
loc = find(udata==wdata(i));
tmpbitstring = codeword(loc,:);
loci = find(tmpbitstring==-1);
if isempty(loci)
bitstring = [bitstring tmpbitstring];
else
bitstring = [bitstring tmpbitstring(1:loci-1)];
end
end
tbitlen = length(bitstring);
return
function compressedbits = compress(bitstring)
n = length(bitstring);
whole = floor(n/8);
remainder = mod(n,8);
base = 1;
for i=1:whole
tbits = bitstring(base:base+7); %8 bits
binstr = sprintf('%d',tbits);
compressedbits(i) = bin2dec(binstr);
base = base + 8;
end
if remainder > 0
i = i+1;
tbits = bitstring(base:base+remainder-1);
binstr = sprintf('%d',tbits);
compressedbits(i) = bin2dec(binstr);
end
return
%**************************************************************
% Filename : dehuffman.m
% Purpose : Decode input data using huffman decoder
%**************************************************************
function wdata = dehuffman(ssymbol,compressedbits1,codewordbits1,tbitlen1)
%wdata- 1D data
[codeword1,code_row] = decompresscodeword(codewordbits1);
bitstring1 = decompress(compressedbits1,tbitlen1);
wdata = decoder(ssymbol,codeword1,code_row,bitstring1);
[row col] = size(wdata);
return;
function [codeword1,code_row]=decompresscodeword(codewordbits1)
nsize = length(codewordbits1);
code_row = nsize/2;
max_col = codewordbits1(1:code_row);
ncol = max(max_col);
z = zeros(1,ncol);
zs = sprintf('%d',z);
base = code_row;
for i=1:code_row
nbits = codewordbits1(i);
codeword1{i,1} = nbits; %store number of bits rep the codeword
strbits = dec2bin(codewordbits1(i+base));
left = nbits-length(strbits);
if left > 0
%need to concat
codeword1{i,2} = {strcat(zs(1:left),strbits)}; %store the codeword
else
%dun need to concat
codeword1{i,2} = {strbits}; %store the codeword
end
end
return
function bitstring = decompress(compressedbits,tbitlen)
n = length(compressedbits);
nbits = 8;
z = zeros(1,nbits);
zs = sprintf('%d',z);
bitstring = '';
for i=1:n-1
strbits = dec2bin(compressedbits(i));
left = nbits - length(strbits);
if left > 0
strbits = strcat(zs(1:left),strbits);
end
bitstring = strcat(bitstring,strbits);
end
i=i+1;
strbits = dec2bin(compressedbits(i));
left = tbitlen - length(bitstring) - length(strbits);
if left > 0
strbits = strcat(zs(1:left),strbits);
end
bitstring = strcat(bitstring,strbits);
return
function output=decoder(ssymbol,codeword1,code_row,bitstring1)
n = length(bitstring1);
startpos = 1;
k = 1;
while startpos <= n
for j=1:code_row
len = codeword1{j,1};
cword = codeword1{j,2};
endpos= startpos+len-1;
if (endpos <= n)
status=strcmp(cword,bitstring1(startpos:endpos));
if status == 1, %same
%get symbol corresponding to codeword
output(k) = ssymbol(j);
k = k + 1;
break;
end
end
end
startpos = endpos+1;
end
return
%**************************************************************
% Filename : RLE_compress.m
% Purpose : Encode the file using RLE
%**************************************************************
function outdata=RLE_compress(indata,option)
%indata - 1 D
%outdata - 1 D
if option == 1
outdata=RLE_compress_1(indata);
else
%when input contents more zeros
outdata=RLE_compress_2(indata);
end;
return;
function outdata=RLE_compress_1(indata)
lensize=length(indata);
index = 1; count = 1;
prev = indata(1);
i = 2;
while i <= lensize,
if indata(i) == prev,
count = count + 1;
%reset counter, in case number of zero is larger.
if count >=255,
outdata(index) = 0; %to indicate the next number is a count
outdata(index + 1) = prev;
outdata(index + 2) = count;
index = index + 3;
count = 1;
i = i + 1;
prev = indata(i);
end
else
if (count <= 1)
outdata(index) = prev;
index = index + 1;
else
%previous number is a zero
if count >= 3
outdata(index) = 0; %to indicate the next number is a count
outdata(index + 1) = prev;
outdata(index + 2) = count;
index = index + 3;
count = 1;
else
%if count is 1 or 2
outdata(index) = prev;
outdata(index+1) = prev;
index = index + 2;
count = 1;
end
end;
prev = indata(i);
end;
i = i + 1;
end;
%last byte
if i>= lensize
if count == 1
outdata(index) = indata(i-1);
else
outdata(index) = 0; %to indicate the next number is a count
outdata(index + 1) = prev;
outdata(index + 2) = count;
end
end
return;
function outdata=RLE_compress_2(indata)
lensize=length(indata);
index = 1;
count = 0;
i = 1;
while i <= lensize,
if indata(i) == 0,
count = count + 1;
outdata(index) = 0;
outdata(index + 1) = count;
%reset counter, in case number of zero is larger.
if count >=255,
count = 0;
index = index + 2;
end
else
%current value is not zero
if (count > 0)
index = index + 2;
count = 0;
end;
outdata(index) = indata(i);
index = index + 1;
end;
i = i + 1;
end;
return;
%**************************************************************
% Filename : RLE_decompress.m
% Purpose : Decode the file using RLE
%**************************************************************
function outdata=RLE_decompress(indata,option)
%indata - 1 D
%outdata - 1 D
if option == 1
outdata=RLE_decompress_1(indata);
else
outdata=RLE_decompress_2(indata);
end;
return;
function outdata=RLE_decompress_1(indata)
lensize = length(indata);
compressdata = reshape(indata,1,lensize);
index = 1;
i = 1;
while i <= lensize
if compressdata(i) == 0,
value = compressdata(i+1);
count = compressdata(i+2);
for j=1:count,
outdata(index+j-1) = value;
end;
index = index + count;
i = i + 2;
else
outdata(index) = compressdata(i);
index = index + 1;
end;
i = i + 1;
end;
return;
function outdata=RLE_decompress_2(compdata)
lensize = length(compdata);
compressdata = reshape(compdata,1,lensize);
index = 1;
i = 1;
while i <= lensize
if compressdata(i) == 0,
count = compressdata(i+1);
if count == 0
outdata(index) = 0;
count = 1;
else
for j=1:count,
outdata(index+j-1) = 0;
end
end
index = index + count;
i = i + 1;
else
outdata(index) = compressdata(i);
index = index + 1;
end;
i = i + 1;
end;
return;
Results and Discussion
7.1 Introduction
The most important reason why the wavelet transform is so powerful is its Multi-Resolution Analysis (MRA) capability, which allows it to analyze a signal at various scales and resolutions. With the help of the simulation program, we are able to capture how wavelets transform a two-dimensional image. In this section, we shall discuss the wavelet transformation technique and the effect of performing multiple decompositions. Then we will proceed with the discussion of the results obtained for different combinations of quantizers and encoders on the image.
7.2 Analysis of image using Wavelet Transform
The wavelet transform is powerful because of its multi-resolution decomposition technique. This technique allows wavelets to decorrelate an image and concentrate its energy in a few coefficients. In this section, we shall see how wavelets transform an image and distribute the energy at different decomposition levels.
Wavelets analyze the original Lena image in Figure 7.1(a) by decorrelating the high frequencies in the image from the low frequencies. In doing so, the wavelet transform is able to concentrate most of the information about the image into a few coefficients. Figure 7.1(b) shows the output of the image after the first-level decomposition. This image is divided into 4 sub-images termed the LL (top-left), HL (top-right), LH (bottom-left) and HH (bottom-right) sub-images. The approximated version of the image is found in the LL sub-image, while the details of the image are found in the remaining sub-images. Table 7.1 shows that, for the first-level decomposition, the LL region contains most of the information about the image, holding 99% of the energy, while the remaining 1% is distributed among the LH, HL and HH sub-images. These energy distributions suggest that the values of the coefficients in the LL region are significant while the values of the coefficients in the rest of the sub-images are insignificant. A sketch of how this energy distribution can be computed is given below.
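As an illustration, the per-sub-band energy percentages reported in Table 7.1 could be computed from a first-level coefficient matrix wc (as returned by decompose in Appendix 1) along the following lines; this is an assumed sketch, not part of the thesis listings.
% Energy share of each sub-band after a one-level decomposition.
% wc is the n-by-n coefficient matrix returned by decompose(img,fopt,1).
[n, m] = size(wc);
h = n/2;
LL = wc(1:h, 1:h);        HL = wc(1:h, h+1:n);
LH = wc(h+1:n, 1:h);      HH = wc(h+1:n, h+1:n);
total = sum(wc(:).^2);
pct = [sum(LL(:).^2) sum(HL(:).^2) sum(LH(:).^2) sum(HH(:).^2)] / total * 100;
fprintf('LL %.2f%%  HL %.2f%%  LH %.2f%%  HH %.2f%%\n', pct);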
In order to contain so much information in a small region (LL), the magnitudes of the wavelet coefficients increase. This is shown by the increase in the range of the wavelet coefficients in Figure 7.1(e). In addition, a large number of insignificant coefficients in the detail sub-images are found at or near zero.
As the decomposition level increases, the range of the wavelet coefficients increases, and the number of insignificant coefficients near zero also increases, as shown in Figure 7.1(f). However, there is a drop in the energy level of the LL sub-images, as shown in Table 7.1, while for the HL, LH and HH sub-images the energy level increases steadily. The total energy for each decomposition decreases as the decomposition level increases; this is significant especially from level 4 onwards. The reason for the drop in the total energy is the Heisenberg inequality, which says that it is impossible to localize a fixed amount of energy in an arbitrarily small time interval. That explains the leakage in the energy as the decomposition level increases. We therefore choose a 2-level decomposition for our compression scheme, the reason being that we want the image to be sufficiently decorrelated while preserving as much energy as possible.
7.3 Simulations
In this section, we shall discuss the performance of various combinations of encoders and quantizers on the Lena image. These operations act on the Lena image that has already been decorrelated by the wavelet transform. The discussion will be based on the compression ratio, computation time and image quality of the compressed images; a sketch of how these metrics could be computed is given below.
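For reference, the quality and size metrics used in the discussion could be computed as follows, assuming 8-bit gray-scale images and the usual definitions of mean squared error, PSNR and compression ratio; the variable names and the way the compressed size is obtained are illustrative assumptions.
% img : original 8-bit gray-scale image (double), rec : reconstructed image
% tbitlen : total number of bits produced by the entropy encoder (see huffman.m)
mse     = mean((img(:) - rec(:)).^2);       % mean squared error
psnr_db = 10 * log10(255^2 / mse);          % peak signal-to-noise ratio in dB
originalbits = prod(size(img)) * 8;         % 8 bits per pixel
cr      = originalbits / tbitlen;           % compression ratio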