CHAPTER 1
INTRODUCTION
1.1 General Introduction
The digital representation of images and videos allows processing and archiving tasks to be integrated in multimedia platforms, computing and communications. The increasing demand for multimedia content such as digital images and video has led to great interest in research into compression techniques. The development of higher quality and cheaper image acquisition devices has resulted in a steady increase in image size and resolution, which translates into further development of efficient compression systems. Although storage capacity and transfer bandwidth have increased accordingly in recent years, many applications still require compression.
In general, this thesis investigates still image compression in the transform domain. Multidimensional, multispectral and volumetric digital images are the main topics of analysis.
The main objective is to design a compression system suitable for processing, storage and transmission, as well as providing acceptable computational complexity suitable for practical implementation. The basic rule of compression is to reduce the number of bits needed to represent an image.
The motivation for this study is discussed in detail in the next section, followed by a short introduction to image compression. Following this, the detailed objectives of the thesis are presented. An overview of this thesis is given in the final section of this chapter.
1.2 Motivation for the study
Digital image compression algorithms exploit redundancy in an image so that it can be represented using fewer bits while maintaining acceptable visual quality. Factors related to the need for image compression include:
The large storage requirements for multimedia data
Low-power devices, such as mobile phones, have reduced storage capacity
Network bandwidths currently available for transmission
The effect of computational complexity on practical implementation.
Recent progress in image compression research offers various solutions to reduce storage requirements based on a wide range of techniques, including predictive coding, transform coding, block truncation coding, subband coding and hierarchical coding. Wavelet-based compression allows the compression parameters to be changed at the time of decoding. This is also known as rate scalability, which refers to the capability of decoding a compressed sequence at different rates. Rate scalability is useful for the transmission of compressed data between different devices. However, the challenges faced in image and video compression research are not limited to absolute storage and bandwidth concerns. An acceptable level of computational complexity in coding and decoding is also important for practical applications. As image resolution grows, the memory requirements of the compression algorithms might also grow in a way that would limit the use of compression techniques in embedded applications such as high resolution printers and scanners. The memory needed is higher still when composite techniques are used for multispectral images. Thus, lower and fixed memory requirements during coding would be a good solution if the rate scalable properties of multispectral images could be maintained. This thesis investigates these problems, considering multispectral images such as colour and satellite images, as well as three-dimensional medical images.
There have already been many successful works on image compression, and a large variety of algorithms have been proposed. A standard compression algorithm, JPEG, is available and gives good results on most images except when the compression ratio is high. More recently, the wavelet transform was proposed, and it can achieve a better compression ratio without increasing computational complexity.
The efficient representation of visual information is at the centre of image compression systems. The efficiency of a representation refers to the capture of significant information about an object of interest in a smaller description. Recent research suggests that the separable wavelet transform is not efficient in representing singularities such as contours and edges in images. However, traditional separable wavelet transforms benefit from available coding techniques that are able to capture the significant coefficients efficiently. This thesis investigates available solutions for directional coding in the wavelet domain and their implementation for colour image compression.
1.3 Overview of image compression systems
This section provides a brief overview of digital images and compression systems. The basic framework of image compression is discussed, although further details regarding image transforms/representations and coding are presented in Chapter 3. A digital image represents a two-dimensional array of samples, where each sample is called a pixel. Precision is determined by how many levels of intensity can be represented, and this is expressed as the number of bits per pixel (bpp). The value of bpp reflects the different components of the colour system used. For example, in grayscale images the values represent brightness or luminance resolution and range over 1, 2, 4, 8, 12 or 16 bpp. For RGB colour images, the values represent the intensity of each colour component, and the resolution is usually 24 bpp. An ideal image compression system would remove redundant and irrelevant information before the coding process. Redundancy in images can be classified as statistical redundancy or psycho-visual redundancy [1]. Statistical redundancy can be classified into three types [2]:

Spatial, due to the correlation between neighboring pixels in an image;
Spectral, from correlation between colour planes or spectral bands;
Temporal, in terms of correlation between neighboring frames in a sequence of images.
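The spatial redundancy described above can be made concrete by measuring the correlation between neighbouring pixels. The short sketch below uses a smooth synthetic "image" as an illustrative stand-in for real data (the values are assumptions, not from the thesis):

```python
# Illustration of spatial redundancy: neighbouring pixels in natural
# images tend to be highly correlated, which compression exploits.

def correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A smooth 8x8 gradient image: intensity varies slowly across columns.
image = [[row + 4 * col for col in range(8)] for row in range(8)]

# Pair each pixel with its right-hand neighbour.
left  = [image[r][c]     for r in range(8) for c in range(7)]
right = [image[r][c + 1] for r in range(8) for c in range(7)]

r = correlation(left, right)
print(round(r, 3))  # close to 1.0 for smooth image content
```

For real photographic content the coefficient is typically high but below 1; a transform such as the wavelet transform decorrelates these samples before coding.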
Figure 1.1: A general image compression framework [2]
Irrelevant information or psycho-visual redundancy refers to the limitations of, or variations in, the human visual system (HVS) in responding to certain stimuli under certain viewing conditions. In image compression systems, different colour components are often compressed separately as different grayscale images, and they can be represented with different spatial resolutions. However, the implementation of composite methods can further exploit spectral redundancy and introduce rate scalable colour image coding. Image compression can be classified as lossless and lossy compression. The basic framework of both types of compression system is shown in Figure 1.1 [2]. In lossless compression, the rebuilt image is numerically identical to the original image. This type of compression is important for applications such as medical and satellite imaging, where distortion or loss of information is unacceptable. However, lossless compression can only achieve a modest degree of compression, at ratios of around 2:1 to 5:1, with a completely reversible process. Lossless algorithms usually compress the source to bit-rates close to its entropy.
The quantization process is removed in lossless compression so that the image can be recovered exactly. However, this thesis investigates lossless compression with progressive capability, where an image preview is available during transmission or download, and thus a quantization process is applied. On the other hand, lossy compression allows information loss so that images can be represented with reduced bit-rates, which enables higher compression ratios. This type of compression is suitable for transmission over limited bandwidths to different platforms. In general, a suitable transform is required to represent the image with a reduced dynamic range and to remove redundant information. After that, the coefficients generated can be efficiently coded using quantization techniques to reduce the bits needed to represent the images. Further entropy coding will compress the generated bits before transmission or storage.
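The transform-quantize-code pipeline just described can be sketched in a few lines. A one-level Haar transform and a uniform quantiser stand in here for the wavelet transform and coder of a real system; the signal values and quantisation step are illustrative assumptions:

```python
# Minimal sketch of the lossy pipeline: transform, then quantisation;
# the zeros produced are what a subsequent entropy coder exploits.

def haar_1d(signal):
    """One-level Haar transform: averages (low-pass) then differences."""
    lo = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    hi = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return lo + hi

def quantise(coeffs, step):
    """Uniform scalar quantisation: small detail coefficients map to 0."""
    return [round(c / step) for c in coeffs]

signal = [100, 102, 101, 103, 150, 152, 151, 149]  # smooth-ish source
coeffs = haar_1d(signal)
q = quantise(coeffs, step=4)

# After quantisation the high-pass half is mostly zero.
zeros = q[len(q) // 2:].count(0)
print(zeros)  # all 4 detail coefficients quantise to zero here
```

A real coder would follow this with entropy coding of the quantised coefficients; in the lossless, progressive setting discussed above the transform would be an integer transform and the quantisation would be reversible bit-plane coding.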
In the era of multimedia (document, audio, video and image) technology, the amount of information being transmitted and received has increased. This information requires a large amount of storage space and transmission bandwidth, which has been challenging to provide. An image requires more storage capacity than a document.
Image compression therefore plays a vital role in multimedia computer services as well as telecommunications. Driven by the need for high quality images, fast transmission and less storage space, the demand for image compression is increasing. Over the last decade, various methods have been implemented to perform image compression. The common goal of all image compression techniques is to alter the representation of the information contained in the image so that the image can be represented sufficiently well with less information. Said and Pearlman introduced such a technique in their SPIHT (Set Partitioning In Hierarchical Trees) algorithm, in a successful effort to extend and improve Shapiro's EZW (Embedded Zerotree Wavelet) algorithm. The SPIHT algorithm has become a standard reference point in image compression. SPECK coding retains all the desirable features of these algorithms and is entirely competitive with them in compression efficiency.
In the original SPECK coding technique, the memory requirement and transmission time are large, because transmitting the pixel values, the LIS and the matrix/vector values can take considerable memory and time. The fast SPECK coding technique addresses these problems, reducing the memory and time needed to transmit an image by dividing the coefficient matrix into blocks. From the results it is clear that the time and memory required for the original image are greater than for the compressed image; thus the time and memory requirements are reduced by applying SPECK coding [3].
Although contemporary image coding methods such as JPEG2000 [5], set partitioning in hierarchical trees (SPIHT) [5] and set partitioned embedded block coding (SPECK) [6] support a wide range of functionality, they do so either with increased computational complexity (e.g. JPEG2000) or increased memory requirements (e.g. SPIHT and SPECK [6]). The SPIHT [5] and SPECK [6] algorithms are fast and effective techniques for the compression of wavelet transformed images. They achieve compression by aggregating a large number of insignificant coefficients in spatial trees (zero-trees) and spatial blocks (zero-blocks) respectively, and use linked lists to keep track of which coefficients/blocks/sets are to be tested for their significance. The use of linked lists requires variable and data dependent memory, and also necessitates memory management, adding complexity due to multiple accesses of memory.
These problems become more severe for large resolution images, or when images are encoded/decoded at higher data rates using portable devices with limited resources. Over the years, a number of researchers have suggested listless implementations of zero-tree codes [7, 8] to reduce the dynamic memory requirement. These codecs avoid the use of variable and data dependent linked lists by using fixed size state tables and markers to keep track of set partitioning. Motivated by listless implementations of SPIHT algorithms [8], the proposed version of the SPECK algorithm uses a state table with 2 bits per coefficient to keep track of the blocks and coefficients to be tested for their significance.
1.4 Objective of the work
In this research work, a set-partitioning, block-based embedded image coding scheme, the SPECK image coder, has been used. The key to the algorithm is to sort sets of wavelet coefficients properly and efficiently according to their maximum magnitude with respect to a sequence of declining thresholds. The sorting mechanism is partitioning by quadrisection, guided by threshold significance tests. This allows different regions of the transformed image to be processed according to their energy content. The proposed coder can be viewed as an efficient low-complexity block entropy coding scheme with the desirable properties of embedded, progressive transmission and fast encoding and decoding. The platform for simulation is a computer system with Windows 7, and both SPECK and the proposed method are coded in MATLAB. The simulations are carried out on various images, showing the better performance of the proposed technique.
1.5 Organization of the thesis
The present thesis is organized in six chapters, as discussed below:
Chapter 1: This chapter includes a general introduction, an overview of image compression systems, and the motivation and objective of the work. Finally, it explains the organization of the thesis.
Chapter 2: This chapter reviews the literature from various journal and conference publications.
Chapter 3: This chapter includes a block overview and the coding methodology, covering the Integer Wavelet Transform (IWT) and the SPIHT algorithm in detail.
Chapter 4: This chapter presents the objective and subjective performance evaluation parameters.
Chapter 5: This chapter presents the synthesis and simulation results of the thesis work.
Chapter 6: This chapter includes the conclusion of the work with limitations and future scope.
1.6 Conclusion
This chapter has introduced the thesis and presented its outline, including an overview of image compression systems and the objective of the work.
CHAPTER 2
LITERATURE REVIEW
2.1 Introduction
This section summarizes the pioneering works on image compression. The main objective is to design a compression system suitable for processing, storage and transmission, as well as providing acceptable computational complexity suitable for practical implementation.
2.2 Review
The digital representation of images and videos allows processing and archiving tasks to be integrated in multimedia platforms, computing and communications. The increasing demand for multimedia content such as digital images and video has led to great interest in research into compression techniques. The development of higher quality and cheaper image acquisition devices has resulted in a steady increase in image size and resolution, which translates into further development of efficient compression systems. Although storage capacity and transfer bandwidth have increased accordingly in recent years, many applications still require compression.
Driven by the need for high quality images, fast transmission and less storage space, the demand for image compression is increasing. Differential pulse code modulation, transform coding, subband coding, and many other image compression techniques have been developed [9-12]. State-of-the-art techniques can compress typical images by a factor ranging from 10 to 50 with acceptable quality [13]. The Joint Photographic Experts Group (JPEG) image standard [14], the most widely used transform-coding based algorithm, shows good performance at moderate compression ratios. More recently, the wavelet based multiresolution representation has received a lot of attention for compression applications, as manifested in the JPEG2000 standard [15, 16]. Many wavelet based image coding algorithms, such as embedded zero-tree wavelets (EZW) [17], set partitioning in hierarchical trees (SPIHT) [18], morphological representation of wavelet data (MRWD) [19], group testing for wavelets (GTW) [20], and the set-partitioning embedded block coder (SPECK) [21, 22], have been proposed with great success. In the wavelet domain, the higher detailed components of an image are projected onto the shorter basis functions with higher spatial resolutions, and the lower detailed components are projected onto the larger basis functions with narrower bandwidths; this matches the characteristics of the human visual system [23].
Image compression involves the removal of redundant data: the number of bits required to represent an image is minimized in the compression process. Generally, all current methods can be classified as either lossless or lossy compression. Lossless compression techniques achieve a considerable compression ratio while the original data is retained, by means of coding or interpixel redundancy removal [2]. At present, both lossy (JPEG, JPEG2000) and lossless (JPEG-LS, lossless JPEG) compression techniques are adopted in different communication and computer applications [3].
New technologies such as artificial neural networks and genetic algorithms are being developed to achieve high compression rates, close to those of lossy techniques, with image quality as good as that of lossless techniques. For this purpose, a number of neural network approaches have been developed for image compression: direct compression by neural networks, neural network implementations of existing techniques, and neural network based techniques which provide improvements over traditional algorithms.
Back-propagation networks are a type of neural network that can be used directly for image compression [4-7]. The network structure consists of three layers: one input layer, one hidden layer and one output layer. The input layer and the output layer are fully connected to the hidden layer. Compression is achieved through the design of the number of neurons at the hidden layer (K); this number should always be smaller than the number of neurons at the input layer to obtain compression.
The input image is split up into blocks or vectors of 4×4, 8×8, or 16×16 pixels. With the input block regarded as N-dimensional (an equal number of pixels in each block), the connection weights to the hidden layer can be represented by {wji, j = 1, 2, …, K and i = 1, 2, …, N}, so the hidden layer can be described by a weight matrix of order K×N. The connection weights from the hidden layer to the output layer can be denoted by {w′ij, 1 ≤ i ≤ N, 1 ≤ j ≤ K}, which is also a weight matrix, of order N×K. Compression of the image is obtained by training the network in such a way that the weights scale the N-dimensional input vector into a narrow channel of dimension K (K < N) in the hidden layer, and generate the output that minimizes the quadratic error between input and output. Here xi ∈ [0, 1] denotes the normalized pixel values for monochromatic images with gray levels 0-255. Normalized pixel values are used because neural networks operate more effectively when both their inputs and outputs are limited to the range [0, 1]. The above linear network can be transformed into a non-linear network if a transfer function, for example a sigmoid, is added to the hidden layer and the output layer.
Experiments carried out on various samples report that linear networks actually outperform non-linear ones in terms of both training speed and compression performance. Most back-propagation neural networks developed for image compression are therefore designed to be linear. The back-propagation neural network has been further extended to build a hierarchical neural network by adding two more hidden layers into the existing network [8]. In hierarchical neural networks, the three hidden layers are termed the combiner layer, the compressor layer and the decombiner (or decompressor) layer.
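The N-to-K-to-N linear network described above can be sketched as a tiny autoencoder trained by gradient descent. All sizes, data and learning settings below are illustrative assumptions, not values from the literature reviewed:

```python
# A minimal sketch of the linear back-propagation compressor: N input
# pixels -> K hidden units (K < N) -> N outputs, trained to reproduce
# the input. The hidden activations are the compressed representation.
import numpy as np

rng = np.random.default_rng(0)
N, K = 16, 4                          # 4x4 pixel blocks, 4 hidden neurons

# Normalised pixel blocks in [0, 1], as recommended in the text.
blocks = rng.random((200, N))

W = rng.normal(0.0, 0.1, (N, K))      # input -> hidden weights (K x N role)
Wp = rng.normal(0.0, 0.1, (K, N))     # hidden -> output weights (N x K role)

lr = 0.05
for _ in range(2000):                 # plain gradient descent on squared error
    h = blocks @ W                    # compressed representation (K values)
    out = h @ Wp                      # reconstruction (N values)
    err = out - blocks
    Wp -= lr * (h.T @ err) / len(blocks)
    W -= lr * (blocks.T @ (err @ Wp.T)) / len(blocks)

mse = float(((blocks @ W @ Wp - blocks) ** 2).mean())
print(round(mse, 3))                  # well below the raw input variance
```

With no non-linearity this network learns a subspace projection, which is consistent with the reported observation that linear networks train faster than non-linear ones for this task.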
2.3 Conclusion
This chapter has reviewed the pioneering works on image compression, discussing proposed methods and algorithms such as IWT, SPIHT and JPEG. All of these techniques focus on either lossless or lossy image compression.
CHAPTER 3
SPIHT / IWT
ALGORITHM
3.1 Introduction
The Set Partitioning in Hierarchical Trees (SPIHT) algorithm is based on the embedded zero-tree wavelet (EZW) coding method; it employs spatial orientation trees and uses a set partitioning sorting algorithm [5, 7]. Coefficients corresponding to the same spatial location in different sub-bands in the pyramid structure display self-similarity characteristics. SPIHT defines parent-child relationships between these self-similar sub-bands to establish spatially oriented trees. The Integer Wavelet Transform (IWT) [11] provides lossless compression [13], for example for medical images, whereas the traditional wavelet transform generally produces floating point coefficients for the given signals or pixels. This section describes the Set Partitioning in Hierarchical Trees (SPIHT) and Integer Wavelet Transform (IWT) algorithms.
3.2 Set Partitioning In Hierarchical Trees (SPIHT)
The SPIHT [1] image coding algorithm was developed in 1996 by Said and Pearlman and is another, more efficient implementation of the embedded zerotree wavelet (EZW) [28] algorithm by Shapiro. SPIHT partially orders the transformed coefficients of the image by magnitude, with the ordering information transmitted by a set partitioning algorithm that is duplicated in the decoder. SPIHT exploits the self-similarity of the image wavelet transform across different scales.

3.2.1 Steps in SPIHT Algorithm
Step 1: In the sorting pass, the List of Insignificant Pixels (LIP) is scanned to determine whether an entry is significant at the current threshold. If an entry is found to be significant, a bit '1' is output, followed by another bit for the sign of the coefficient, marked '1' for positive or '0' for negative. The significant entry is then moved to the List of Significant Pixels (LSP). If an entry in the LIP is insignificant, a bit '0' is output.
Step 2: Entries in the List of Insignificant Sets (LIS) are processed. When an entry is the set of all descendants of a coefficient, named 'type A', magnitude tests for all descendants of the current entry are carried out to decide whether they are significant or not. If the entry is found to be significant, the direct offspring of the entry undergo magnitude tests. If a direct offspring is significant, it is moved into the LSP; otherwise it is moved into the LIP. If the entry is deemed to be insignificant, the spatial orientation tree rooted at the current entry is a zero-tree, so a bit '0' is output and no further processing is needed.
Finally, this entry is moved to the end of the LIS as 'type B', which is the set of all descendants except for the immediate offspring of a coefficient. If the entry in the LIS is of type B, the significance test is performed on the descendants of its direct offspring. If the significance test is true, the spatial orientation tree with the type B entry as its root is split into four sub-trees rooted at the direct offspring, and these direct offspring are added to the end of the LIS as 'type A' entries. The important point in LIS sorting is that entire sets of insignificant coefficients, zero-trees, are represented with a single zero. The purpose of defining spatial parent-children relationships is to increase the possibility of finding these zero-trees.

Step 3: Finally, the refinement pass is used to output the refinement bits (the nth bit) of the coefficients in the LSP at the current threshold. Before the algorithm proceeds to the next round, the current threshold is halved.
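The significance test and the LIP scan of Step 1 can be sketched directly. The coefficient values and threshold level below are illustrative; a real coder would also process the LIS (Step 2), refine the LSP (Step 3), and emit the bits to a bitstream:

```python
# A sketch of Step 1 of SPIHT: scan the LIP against threshold 2**n.

def significant(coeff, n):
    """A coefficient is significant at level n if |coeff| >= 2**n."""
    return abs(coeff) >= 2 ** n

def sorting_pass_lip(lip, coeffs, n):
    """Move significant pixels from the LIP to the LSP, emitting a '1'
    plus a sign bit for each; emit '0' for the insignificant ones."""
    bits, lsp, still_insignificant = [], [], []
    for (i, j) in lip:
        if significant(coeffs[i][j], n):
            bits.append(1)
            bits.append(1 if coeffs[i][j] >= 0 else 0)  # sign bit
            lsp.append((i, j))
        else:
            bits.append(0)
            still_insignificant.append((i, j))
    return bits, lsp, still_insignificant

coeffs = [[34, -20], [5, -2]]
lip = [(0, 0), (0, 1), (1, 0), (1, 1)]
bits, lsp, lip = sorting_pass_lip(lip, coeffs, n=4)  # threshold 2**4 = 16
print(bits)  # [1, 1, 1, 0, 0, 0] -> 34 and -20 are significant
print(lsp)   # [(0, 0), (0, 1)]
```

Note how the decoder, running the same scan order with the same threshold, can reconstruct which coefficients became significant purely from the emitted bits.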
3.2.2 SPIHT Algorithm
1) Partition Rules of Algorithm
The main idea of the SPIHT algorithm [12, 13] is to progressively reduce the threshold, generating a series of significance maps (or bitmaps) that gradually approximate each wavelet coefficient.
For any node (i, j), four sets are introduced as follows:
O (i, j): the set of coordinates of the four direct descendants of node (i, j);
D (i, j): the set of coordinates of all descendants of node (i, j);
L (i, j): the set of coordinates of all descendants of node (i, j) except its direct descendants, namely:
L (i, j) = D (i, j) – O (i, j)
H: the set of coordinates of the tree roots at the top level of the wavelet pyramid decomposition.
For all nodes (i, j) having descendants, the direct descendants are given by:
O (i, j) = {(2i, 2j), (2i, 2j+1), (2i+1, 2j), (2i+1, 2j+1)}
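The offspring rule O(i, j) above can be coded directly, and D(i, j) then follows recursively; L(i, j) is their set difference. Transform size and the example node are illustrative, and boundary clipping is handled by the size check:

```python
# The SPIHT set definitions O(i, j), D(i, j) and L(i, j) in code.

def offspring(i, j):
    """O(i, j): the four direct descendants of node (i, j)."""
    return [(2 * i, 2 * j), (2 * i, 2 * j + 1),
            (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]

def descendants(i, j, size):
    """D(i, j): all descendants of (i, j) inside a size x size transform."""
    result = []
    for (a, b) in offspring(i, j):
        if a < size and b < size:
            result.append((a, b))
            result.extend(descendants(a, b, size))
    return result

print(offspring(1, 1))   # [(2, 2), (2, 3), (3, 2), (3, 3)]

# L(i, j) = D(i, j) - O(i, j): grandchildren and beyond.
d = descendants(1, 1, 8)
l = [p for p in d if p not in offspring(1, 1)]
print(len(d), len(l))    # 20 descendants, 16 of them in L(1, 1)
```

The quadrupling of the set at each level is what lets a single '0' bit stand for a whole zero-tree of coefficients.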
Figure 3.1: Decomposition of the lifting algorithm
Figure 3.2: 9/7 integer lifting wavelet scheme
a) The initial partition consists of the sets {(i, j)} and D(i, j) for all (i, j) ∈ H.
b) If D(i, j) is significant, it is split into L(i, j) and the four single-element sets {(k, l)} with (k, l) ∈ O(i, j).
c) If L(i, j) is significant, it is split into the four sets D(k, l) with (k, l) ∈ O(i, j).
The basic operation of the SPIHT algorithm is the set significance test, described as follows:
2) Algorithm Description
The SPIHT algorithm maintains three linked lists to record coding information:
LIS: List of Insignificant Sets;
LIP: List of Insignificant Pixels;
LSP: List of Significant Pixels;
a) Initialization
Suppose Ci,j are the wavelet coefficients. The initial LSP is empty, and the initial quantization level is n = floor(log2(max |Ci,j|)), giving the threshold Tn = 2^n. The initial lists are
LIP = {(i, j) | (i, j) ∈ H}
LIS = {D(i, j) | (i, j) ∈ H}, the descendant sets of all the roots.
b) Fine Scanning (Refinement Pass)
This pass scans only the LSP. For each entry (i, j) in the LSP that was not added during the immediately preceding sorting pass, the nth most significant bit of |Ci,j| is output in binary, according to the current quantization threshold Tn.
c) Update the Threshold
The pass counter is incremented and the quantization threshold is halved, Tn+1 = Tn / 2, for the next scan. The LIP, LSP and LIS are updated by the same rules.
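The threshold schedule above, starting from the largest coefficient magnitude and halving after every pass, amounts to a bit-plane loop. The coefficient values below are illustrative:

```python
# Sketch of the SPIHT threshold schedule: T0 = 2**floor(log2(max|c|)),
# then halve after each sorting + refinement pass.
import math

coeffs = [63, -34, 49, 10, 7, 13, -12, 7]    # illustrative values

n = int(math.floor(math.log2(max(abs(c) for c in coeffs))))  # initial level
thresholds = []
while n >= 0:
    thresholds.append(2 ** n)   # one sorting + refinement pass per level
    n -= 1                      # halving the threshold lowers n by one

print(thresholds)  # [32, 16, 8, 4, 2, 1]
```

Stopping this loop early at any point yields a coarser but valid reconstruction, which is exactly the embedded, progressive property of the coder.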
3.3 Optimization of SPIHT Algorithm
The SPIHT algorithm is a fast and efficient technique for image compression and transmission at lower bit rates over any network [6]. The SPIHT algorithm mainly depends upon its three lists, viz. the LIP, LIS and LSP.
List of Insignificant Pixels (LIP): contains individual coefficients with a magnitude less than the threshold. This list keeps track of the pixels still to be evaluated.
List of Insignificant Sets (LIS): contains sets of wavelet coefficients defined by tree structures that are found to have magnitudes less than the threshold (insignificant). The sets exclude the coefficient corresponding to the tree or subtree root, and have at least four elements. This list saves work by recording whole sets rather than enumerating all of their coordinates.
List of Significant Pixels (LSP): a list of the pixels with magnitude greater than the threshold (significant). This list keeps track of the pixels already evaluated, which need not be evaluated again.
It is a characteristic of the SPIHT algorithm that it generally operates on one complete image at a time. Because of this, the three lists are often quite large and take a lot of memory. The whole image is loaded and transformed, and the algorithm then requires repeated access to all the lists. This consumes much time in encoding and compressing the image [8, 12]. This is a disadvantage of the SPIHT algorithm for small mobile devices with limited memory and processing capability when compressing an image for transmission over a network. Secondly, at lower bit rates, maintaining precise rate control is an essential condition for both encoding and decoding of the image. A precise rate control mechanism is one of the features of the proposed work. Therefore, in this work, the SPIHT algorithm has been modified with the following two constraints:
• Fast Coding Time
• Precise Rate Control
3.3.1 Fast Coding Time
The computation time mainly consists of two components:
• The time taken in applying the wavelet transform to the image.
• The time consumed in compressing the image.
In this thesis, the focus is on the time consumed in compressing the image. The modification lies in the working and utility of the LSP. As per the modifications made to the algorithm, the LSP list is completely eliminated. This means that there is no longer any need to access the list in the refinement pass. The LSP list contains thousands of elements, and each element of the list had to be accessed every pass to output its most significant bit, which consumes much time. With the LSP list eliminated, the time required to access each of its elements is saved. This reduces the encoding time considerably without much affecting the quality of the reproduced image.
3.3.2 Precise Rate Control
Due to the modifications in the algorithm, a number of bits are output at once instead of one bit per significant pixel value. This modification disturbs the precise rate characteristics of the SPIHT algorithm. Maintaining precise rate control is necessary at low bit rates [23]. Precise rate control means a mechanism to truncate the encoding and decoding as soon as the allocated bit budget is fully utilized. Therefore an effective precise rate mechanism is also implemented along with the optimized SPIHT.
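The precise rate control just described can be sketched as a bit-budget check applied on every emitted bit, so that output stops the moment the budget is exhausted, even in the middle of a pass. The budget and per-pass bit sequences below are illustrative:

```python
# Sketch of precise rate control: truncate the bitstream exactly at the
# allocated bit budget, regardless of where in a pass the coder is.

def emit(bitstream, bits, budget):
    """Append bits until the budget is reached; return True if truncated."""
    for b in bits:
        if len(bitstream) >= budget:
            return True          # budget used up: stop encoding here
        bitstream.append(b)
    return False

budget = 10
stream = []
passes = [[1, 0, 1, 1], [0, 0, 1, 0, 1], [1, 1, 0, 0]]  # per-pass output
for p in passes:
    if emit(stream, p, budget):
        break

print(len(stream))  # exactly 10 bits: truncated inside the third pass
```

Because the stream is embedded, the decoder can stop at the same point and still reconstruct an image at the quality the budget allows.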
3.4 Integer Wavelet Transform (IWT) Algorithm
The integer wavelet transform was developed in [15] according to the lifting scheme presented in [17]. Although this transform does not make use of dilation and translation of a mother wavelet like the regular wavelet transform, it keeps the multi-resolution properties of the wavelet transform. To obtain the low-pass coefficients si-1 and the high-pass coefficients di-1 that the wavelet transform produces from a signal si, the integer wavelet transform operates in three steps: split, predict, and update.
Split Step
This step is also called the "lazy wavelet" transform [20]. The split operation simply separates the signal si into an even subset si-1 and an odd subset di-1:
si-1,k = si,2k,   di-1,k = si,2k+1
Prediction Step
Due to the high correlation between the odd and even samples in an image, the subset di-1 can be predicted efficiently from the subset si-1. Once the prediction is made, the signal si can be replaced by the subset si-1 together with the prediction error, the difference between the predicted values and the real values of di-1 obtained from the split.
Update Step
The update step is performed in order to enhance the subset si-1 after the prediction step. It is needed because some of the properties of the data set si-1 do not match those of the original data set. After executing the three steps on the signal, the result is the low-pass coefficients si-1 and the high-pass coefficients di-1.
Different transforms can be obtained by changing the prediction algorithm used in the transform. The simplest integer wavelet transform is built on the Haar wavelet. This Haar integer transform, also known as the "S transform", is described in [17] as:
di-1,k = si,2k+1 - si,2k
si-1,k = si,2k + floor(di-1,k / 2)
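A minimal sketch of the lifting S transform (integer Haar) following the split / predict / update steps above. Because every step uses integer arithmetic, the transform is exactly invertible, which is what makes it suitable for lossless coding; the sample signal is illustrative:

```python
# One level of the lifting S transform and its exact integer inverse.

def s_transform(signal):
    """Split / predict / update; returns (low-pass s, high-pass d)."""
    even = signal[0::2]                            # split: "lazy wavelet"
    odd = signal[1::2]
    d = [o - e for o, e in zip(odd, even)]         # predict odd from even
    s = [e + (di >> 1) for e, di in zip(even, d)]  # update: e + floor(d/2)
    return s, d

def inverse_s_transform(s, d):
    """Undo the lifting steps in reverse order."""
    even = [si - (di >> 1) for si, di in zip(s, d)]
    odd = [di + e for di, e in zip(d, even)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

x = [10, 12, 14, 200, 7, 7, 3, 1]
s, d = s_transform(x)
print(s, d)
print(inverse_s_transform(s, d) == x)  # True: perfect reconstruction
```

Note that s equals floor((even + odd) / 2), so the low-pass output stays in the same integer range as the input, unlike the floating-point Haar average.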
3.5 Conclusion
This chapter has described the Set Partitioning in Hierarchical Trees (SPIHT) image coding scheme. The key to the algorithm is to sort sets of wavelet coefficients properly and efficiently according to their maximum magnitude with respect to a sequence of declining thresholds. The chapter covered the complete SPIHT algorithm and gave an overview of the IWT.
CHAPTER 4
PERFORMANCE
EVALUATION PARAMETERS
4.1 Introduction
The image compression technique introduces a certain amount of distortion in the reconstructed image; therefore, the evaluation of image quality is an important issue. The quality of the reconstructed images can be evaluated in terms of objective measurement and subjective measurement. In objective evaluation, statistical properties are considered, while in subjective evaluation viewers see and directly inspect the image to determine its quality [33]. In this chapter, both objective and subjective performance evaluation parameters for the image compression algorithm are presented.
The objective evaluation measures include the Peak Signal to Noise Ratio (PSNR), the compression ratio (CR), the variance and the structural similarity index (SSIM). Images with the same PSNR value can have different perceptual quality [34]. By adjusting the parameters, it is possible to trade the degree of compression against the quality of the reconstructed image over a wide range. To assess visual perception, a subjective evaluation of the images/frames was also carried out for this research work and is also described in this chapter.
4.2 Objective Evaluation Parameters
The objective quality metric includes statistical analysis of input data. There are several objective evaluation parameters. The most commonly used parameters are the Peak Signal to Noise Ratio (PSNR), the compression ratio (CR), the variance and the structural similarity index (SSIM) described in the following subsections.
4.2.1. Peak Signal to Noise Ratio (PSNR)
The Peak Signal to Noise Ratio (PSNR) has been the most popular tool for the objective quality measurement of compressed images and video, and it is simple to compute. The PSNR, in decibels, is evaluated as follows [35]:
PSNR = 10 log10(I² / MSE)      (4.1)
where I is the maximum allowable image pixel intensity. For an 8-bit-per-pixel image,
I = 2⁸ − 1 = 255      (4.2)
and MSE is the mean square error.
Mean Square Error (MSE):
MSE is another important evaluation parameter for measuring the quality of a compressed image, generally used along with the PSNR analysis. It compares the original data and the reconstructed data, and the result indicates the level of distortion. The MSE between the original data and the reconstructed data is:
MSE = (1/(M·N)) Σ_{i=1..M} Σ_{j=1..N} [A(i,j) − B(i,j)]²      (4.3)
where A = original image of size M × N,
B = reconstructed image of size M × N.
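Equations (4.1) to (4.3) can be computed directly. A small sketch, assuming 8-bit images so that the peak value I is 255 (the function names are illustrative):

```python
import numpy as np

def mse(a, b):
    """Mean square error between original A and reconstruction B (Eq. 4.3)."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return np.mean((a - b) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (Eq. 4.1); peak = 255 for 8-bit images."""
    e = mse(a, b)
    return float('inf') if e == 0 else 10.0 * np.log10(peak ** 2 / e)

orig = np.array([[100, 110], [120, 130]])
recon = np.array([[101, 110], [120, 129]])
# MSE = (1 + 0 + 0 + 1) / 4 = 0.5; PSNR = 10 log10(255^2 / 0.5) ≈ 51.14 dB
```

Identical images give MSE = 0, which is why the sketch returns infinity instead of dividing by zero.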
4.2.2. Compression Ratio (CR)
In the image compression process, it is important to know how much detail can be eliminated from the input data while retaining the critical information of the original data. The compression ratio (CR) is a measure of the reduction of the detail coefficients. It can be defined as:
CR = Discarded Data / Original Data      (4.4)
The CR can be varied to obtain different image qualities. The more detail coefficients are discarded, the higher the CR will be; a higher compression ratio means lower reconstruction quality. The quantization table (Q) and the scale factor (SF) are the main control parameters of the compression ratio. Each transformed data element is divided by the corresponding element in the quantization table (Q) and rounded to the nearest integer value. This process drives some coefficients to zero, so they can be discarded.
In addition, to achieve a higher compression ratio, the quantization output is divided by a scalar constant (SF) and rounded to the nearest integer value. This produces further zero coefficients that can be discarded during compression.
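The two-stage rounding described above (quantization table Q, then scale factor SF) can be sketched as follows. The table values and the 2x2 coefficient block are arbitrary illustrative numbers, not those used in the thesis:

```python
import numpy as np

def quantize(coeffs, Q, SF=1.0):
    """Divide transform coefficients by the quantization table Q, round,
    then divide by the scale factor SF and round again (illustrative sketch)."""
    q = np.round(coeffs / Q)
    return np.round(q / SF)

def dequantize(qcoeffs, Q, SF=1.0):
    """Approximate inverse: information lost to rounding stays lost."""
    return qcoeffs * SF * Q

coeffs = np.array([[-415.0, 12.0], [3.0, -2.0]])
Q = np.array([[16.0, 11.0], [12.0, 10.0]])
q = quantize(coeffs, Q, SF=2.0)
# only the large coefficient survives; the small ones collapse to zero
# and can be discarded, raising the compression ratio
```

Increasing SF zeroes out more coefficients, which is exactly the CR control knob the text describes.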
4.2.3. Variance
Variance is another tool for evaluating the performance of an image compression algorithm. The variance describes how far values deviate from the mean; it is a measure of variability about an average. In signal processing, the variances σk², for k = 1, 2, …, N, are given by the eigenvalues of the transformed correlation matrix of the coefficients [36], where N is the size of the input data block. The more the variance is concentrated in a few coefficients, the higher the energy compaction. The energy compaction property states that most of the signal information tends to concentrate in the low-frequency components; the greater the energy compaction, the better the compression algorithm.
Example 4.3: Variance calculation.
A random sequence u(n) is called Markov-p, or pth-order Markov, if the conditional probability of u(n) given the entire past is equal to the conditional probability of u(n) given only u(n−1), …, u(n−p) [37]:
Prob[u(n) | u(n−1), u(n−2), …] = Prob[u(n) | u(n−1), u(n−2), …, u(n−p)]      (4.5)
For a first-order stationary Markov sequence u(n) with correlation coefficient ρ, the N × N correlation matrix has entries R(i, j) = ρ^|i−j|. The variances of the transformed coefficients can then be obtained as the eigenvalues of the transformed correlation matrix.
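The variance calculation of Example 4.3 can be reproduced numerically. The sketch below builds the first-order Markov correlation matrix R(i, j) = ρ^|i−j|, takes its eigenvalues (the KLT variances), and also computes the variances under an orthonormal DCT-II, which is known to approximate the KLT for Markov-1 sources. N = 8 and ρ = 0.95 are illustrative values:

```python
import numpy as np

# Correlation matrix of a first-order stationary Markov sequence
N, rho = 8, 0.95
idx = np.arange(N)
R = rho ** np.abs(idx[:, None] - idx[None, :])

# Exact transform-domain variances: eigenvalues of R (the KLT case)
eigvals = np.linalg.eigvalsh(R)

# For a fixed transform such as the orthonormal DCT-II, the coefficient
# variances are the diagonal of A R A^T
k, n = idx[:, None], idx[None, :]
A = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
A[0, :] = 1.0 / np.sqrt(N)
variances = np.diag(A @ R @ A.T)

# Both sets of variances sum to trace(R) = N, and for high rho most of the
# energy sits in the first (low-frequency) coefficient: energy compaction.
```

Printing `variances` shows the DC coefficient carrying nearly all of the total variance, which is the energy compaction property the text describes.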
4.3 Image Compression Techniques
4.3.1 Lossless Compression
Lossless compression plays an important role in fields such as medical imaging, where, due to the sensitivity of the information and legal requirements, no information may be removed from the image once it has been digitized. However, the compression ratios attained by lossless compression are small, normally around 2:1, as shown in [17]. Image compression usually consists of three steps: transformation, quantization, and codeword assignment. Since quantization introduces errors that prevent perfect reconstruction, lossless compression omits the quantization step. Quantization is normally used to turn the transform coefficients from floating-point into integer format; to dispense with it, a lossless algorithm must use a transform that yields only integer coefficients and allows perfect reconstruction. Lossless compression algorithms therefore use prediction-based transforms that produce only integer values. The prediction algorithm is known to both the encoder and the decoder, which makes the transform perfectly reconstructible. Regardless of the prediction algorithm used, the coefficients passed to codeword assignment represent the difference between the predicted and actual pixel values. Good predictions keep the errors small, which in turn yields shorter codewords. Common prediction algorithms for lossless compression are differential pulse code modulation (DPCM) and the median edge detector (MED) [1], which is used in JPEG-LS [19].
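The MED predictor mentioned above has a simple closed form: it picks min(a, b) or max(a, b) when the causal neighbours suggest an edge, and the planar prediction a + b − c otherwise. A sketch (padding the borders with 0 is a simplifying assumption here, not the exact JPEG-LS rule):

```python
import numpy as np

def med_predict(a, b, c):
    """Median edge detector (MED) predictor used in JPEG-LS:
    a = left, b = above, c = upper-left neighbour of the current pixel."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def residuals(img):
    """Prediction residuals for a grayscale image; neighbours outside the
    image are taken as 0. Encoder and decoder share the same predictor,
    so the residuals allow exact (lossless) reconstruction."""
    img = np.asarray(img, dtype=np.int64)
    h, w = img.shape
    res = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            a = img[i, j - 1] if j > 0 else 0
            b = img[i - 1, j] if i > 0 else 0
            c = img[i - 1, j - 1] if i > 0 and j > 0 else 0
            res[i, j] = img[i, j] - med_predict(a, b, c)
    return res
```

On a flat image every pixel after the first is predicted exactly, so the residuals are almost all zero, which is what makes the subsequent codeword assignment efficient.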
4.3.2 Lossy Compression
In lossy compression, the reconstructed image is not identical to the original but reasonably close to it [20]. Lossy compression degrades the image because it discards redundancy from the signal. It also loses information through the quantization process, which sorts the data into bins, each represented by a single value, but in return provides a much higher compression ratio.
4.4 Limitation of Lossy and Lossless Compression Technique
Region-of-interest (ROI) coding allocates a given amount of information to the region of an image in which the viewer is most interested (the ROI) [1], in preference to the remaining, non-ROI region. This approach results in higher restored image quality for the ROI than for the non-ROI, and prevents unacceptable loss of information in the most important regions, such as a brain tumour in medical imaging [2]. ROI image compression is an interesting topic in the field of image compression; in most image compression standards, the entire image is compressed either losslessly or lossily.
4.4.1 REGION OF INTEREST (ROI)
ROI coding was developed owing to the limitations of lossy and lossless compression techniques. With most lossless techniques the compressed size is near 80% of the original, while lossy coding achieves much smaller sizes (down to 5-30%) [6, 8], but with possibly major loss of data. ROI coding is therefore presented primarily for medical images, which do not permit loss of information in the diagnostically important parts. The medical image is divided into two parts: the ROI, the diagnostically important part (also called the foreground), and the non-ROI part (also called the background). A lossless compression technique is applied to preserve the quality of the diagnostic part (ROI), while still providing a high overall compression ratio [6, 22]. During transmission for telemedicine purposes, the ROI must be transmitted first or at a higher priority, so the coefficients associated with the ROI are transferred before those associated with the non-ROI part.
4.4.2 Region Growing
Region growing is the simplest region-based method; it is also known as a pixel-based method. It examines the neighbouring pixels of initial seed points and checks whether each neighbouring pixel should be added to the region, a procedure similar to data clustering. First, a set of seeds is chosen; seed selection depends on user criteria such as pixel intensity, grey-level texture or colour. The region starts at the exact location of each seed point, and adjacent pixels satisfying the criterion are added to the region. The image information is also important in region growing.
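The procedure described above can be sketched with a breadth-first search from the seed. The 4-connectivity and the fixed intensity tolerance against the seed value are simplifying assumptions; practical criteria may use the running region mean, texture or colour:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from a seed pixel: a 4-connected neighbour joins the
    region when its intensity is within `tol` of the seed intensity."""
    img = np.asarray(img, dtype=np.int64)
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = img[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < h and 0 <= nj < w and not mask[ni, nj]
                    and abs(int(img[ni, nj]) - int(seed_val)) <= tol):
                mask[ni, nj] = True
                queue.append((ni, nj))
    return mask
```

The returned boolean mask is exactly the ROI/non-ROI split the thesis needs: the grown region is the foreground and everything else is background.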
4.5 Methodology
Earlier studies have shown that medical images suffer from different types of problems due to the various factors discussed in the previous chapter. The proposed algorithm uses the Haar wavelet technique and provides better PSNR results than previous techniques.
Figure 4.1 Block diagram of image compression
In this thesis two individual techniques are combined: one IWT based and the other SPIHT based. The steps are as follows:
Start the process and browse to the medical image.
Apply the region growing algorithm to the browsed image.
Classify the resulting image into three segments:
ROI (Region of Interest): an important region, so it must be compressed losslessly.
Non-ROI: compressed via a lossy method.
Background: compressed via a lossy method.
Apply the compression algorithms: IWT for the ROI (lossless) and SPIHT for the non-ROI (lossy), and obtain the required parameters (CR, PSNR, MSE, SSIM).
Combine both compressed images to obtain the resulting image.
Apply decompression to reconstruct the original image.
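The recombination logic of the steps above can be illustrated end to end with a toy stand-in: exact storage plays the role of the lossless IWT branch for the ROI, and coarse uniform quantization stands in for low-rate SPIHT on the non-ROI region. This is only a sketch of the masking and recombination, not the thesis implementation:

```python
import numpy as np

def hybrid_compress(img, roi_mask, step=16):
    """Sketch of the hybrid scheme: ROI pixels are kept lossless (standing
    in for IWT coding), non-ROI pixels are coarsely quantized (standing in
    for SPIHT at a low rate), and the two parts are recombined."""
    img = np.asarray(img, dtype=np.int64)
    lossy = (np.round(img / step) * step).astype(np.int64)  # non-ROI branch
    recon = np.where(roi_mask, img, lossy)                  # ROI stays exact
    return recon

img = np.arange(16).reshape(4, 4) * 10
roi = np.zeros((4, 4), dtype=bool)
roi[1:3, 1:3] = True                       # central block is the ROI
recon = hybrid_compress(img, roi, step=16)
# ROI pixels reconstruct exactly; non-ROI pixels show quantization error
```

The same mask drives both branches, which is why the decoder can stitch the two reconstructions back together without side information beyond the mask itself.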
4.6 Conclusion
In this chapter, the objective and subjective evaluation parameters generally used for image compression analysis have been presented. For the objective evaluation, PSNR, CR and variance were discussed. All these metrics are simple to calculate and reflect the statistical properties of the reconstructed data. However, in some cases these parameters may not exactly capture the visual reconstruction quality; thus, subjective evaluation was also discussed for reconstruction quality analysis.
CHAPTER 5
RESULTS AND DISCUSSION
5.1 Introduction
This section evaluates the performance of the proposed algorithm, which is applied to several types of image: natural images, tools images and captured paintings, so that its performance can be verified for various applications.
5.2 Simulation tool
The algorithm was implemented in MATLAB. The evaluation parameters (PSNR, CR and variance) and the sub-sampling, quantization and scaling routines were programmed manually in MATLAB.
For a better assessment, the following approach has been adopted:
The PSNR is evaluated at a constant CR. When evaluating the PSNR, we aim to obtain the maximum CR, using the scale factor, without visibly distorting the reconstructed image.
The average PSNR is determined and the CR is computed for that particular PSNR.
The SSIM is computed for the initial constant average CR.
The proposed algorithm is compared with conventional DWT-based coding on grayscale images.
5.3 Result and Analysis
The background parts of the input images are separated by applying a threshold segmentation algorithm. The ROI part of the image is then separated, and the compression algorithm is applied to that ROI part. The inverse of the compression algorithm is then used to obtain the original image. The input image is shown in Figure 5.1, the ROI selection in Figure 5.2, and the Lena image with the selected ROI in Figure 5.3.
Figure 5.1Grayscale Image
Figure 5.2 Select ROI
Figure 5.3 Selected ROI
5.3.1 Region of Interest Selected Image
Image compression was applied to the Lena image at different compression rates. When the ROI method is selected, the ROI must be compressed losslessly, so the IWT algorithm is applied. Figures 5.4 to 5.10 show the original image and the ROI-compressed image at increasing rates; the measured PSNR and compression values are given in each figure caption.
Figure 5.4Original Image and With ROI PSNR=50.897606 and Comprn=75.320435
Figure 5.5 Original Image and With ROI PSNR=52.749981 and Comprn=77.853394
Figure 5.6 Original Image and With ROI PSNR=54.541440and Comprn=80.229187
Figure 5.7 Original Image and With ROI PSNR=55.876681 and Comprn=82.743835
Figure 5.8 Original Image and With ROI PSNR=57.489429 and Comprn=85.209656
Figure 5.9 Original Image and With ROI PSNR=68.111906 and Comprn=110.360718
Figure 5.10 Original Image and With ROI PSNR=69.105514 and Comprn=112.847900
5.3.2 Non-Region of Interest (Non-ROI) Selected Image
Image compression was applied to the Lena image at different compression rates. When the non-ROI method is selected, the region may be compressed lossily, so the SPIHT algorithm is applied. Figures 5.11 to 5.17 show the original image and the non-ROI compressed image at increasing rates; the measured PSNR and compression values are given in each figure caption.
Figure 5.11 Original Image and Without ROI PSNR=37.202724 and Comprn=16.850281
Figure 5.12 Original Image and Without ROI PSNR=39.275405 and Comprn=20.124817
Figure 5.13 Original Image and Without ROI PSNR=41.471323 and Comprn=23.481750
Figure 5.14 Original Image and Without ROI PSNR=43.232214 and Comprn=26.730347
Figure 5.15 Original Image and Without ROI PSNR=44.837925 and Comprn=30.055237
Figure 5.16 Original Image and Without ROI PSNR=57.932713 and Comprn=63.237000
Figure 5.17 Original Image and Without ROI PSNR=58.996235 and Comprn=66.534424
Figure 5.18 PSNR vs BPP
5.4 Conclusion
In this work, we have analyzed an efficient compression scheme that obtains better quality and a higher compression ratio using the wavelet transform with SPIHT and IWT. The performance of SPIHT is compared with JPEG2000. The SPIHT algorithm has several important features: low complexity, embedded progressive coding, and exploitation of the clustering of energy, locating high-energy areas within a region (block) and encoding them with significance maps, giving better visual perception.
CHAPTER 6
CONCLUSION AND FUTURE WORK
6.1 Introduction
In wireless networks the use of handheld multimedia devices such as mobile phones is increasing day by day, and emerging wireless multimedia sensor networks need real-time image transmission among their nodes. Due to the limited memory, battery life and processing power of these portable devices and sensor nodes, real-time transmission of images requires an image coding algorithm that compresses images efficiently, using the minimum possible dynamic memory and with reduced computational complexity. We introduced a set-partitioning, block-based embedded image coding scheme, the SPIHT image coder. The key to the algorithm is to properly and efficiently sort sets of wavelet coefficients according to their maximum magnitude with respect to a sequence of declining thresholds. The sorting mechanism partitions sets by quadrisection, guided by threshold significance tests.
This allows different regions of the transformed image to be processed according to their energy content. We explained the operation of SPIHT in various modes of computational and memory complexity and presented extensive coding simulation results for monochrome images.
6.2 Applications
Image compression techniques have numerous applications in multimedia broadcasting channels, video conferencing, medical imaging and neural networks. In TV programme broadcasting, image compression using SPIHT coding has become an essential tool. A full colour image is usually made up of three primary colours: red, green and blue, and in most cases there is a strong correlation between the red, green and blue components. However, this colour space is not optimal for lossless compression, since the energy is distributed over all three planes. Reversible RGB to luminance-chrominance (LC) transformations reduce the correlation between the R, G and B components and pack the energy of the colour image into one component; combined with SPIHT coding, this saves the cost of bulky storage devices. SPIHT coding also reduces the extra bandwidth that would be used if uncompressed images were transmitted to a satellite for broadcasting, and it can also be used to compress live video streams in video conferencing.
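One well-known reversible luminance-chrominance transform is the RCT of JPEG 2000, which uses only integer operations and is exactly invertible; a sketch (the function names are illustrative):

```python
import numpy as np

def rct_forward(r, g, b):
    """Reversible colour transform (RCT, as in JPEG 2000): packs most of the
    energy into the luminance component Y while staying integer-to-integer."""
    r, g, b = (np.asarray(x, dtype=np.int64) for x in (r, g, b))
    y = (r + 2 * g + b) >> 2    # floor((R + 2G + B) / 4)
    u = b - g                   # chrominance differences
    v = r - g
    return y, u, v

def rct_inverse(y, u, v):
    """Exact inverse: recovers R, G, B with no rounding loss."""
    g = y - ((u + v) >> 2)
    r = v + g
    b = u + g
    return r, g, b

r, g, b = np.array([200]), np.array([100]), np.array([50])
y, u, v = rct_forward(r, g, b)
# the inverse recovers r, g, b exactly (integer-to-integer reversibility)
```

Because the rounding in the forward luminance cancels in the inverse, the round trip is lossless, which is what makes the transform usable ahead of a lossless coder.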
6.3 Future scope
This thesis has contributed to the development of low-memory wavelet-based embedded coders; the significant contribution is the SPIHT algorithm. Following the results and analysis described in this thesis, a number of future works could be taken up, as follows.
The concept behind the low memory coders can be extended for color images, and volumetric images and videos.
The encoding efficiency of the low-memory encoder is slightly reduced in some places compared with SPIHT. The coding efficiency could be improved through context formation.
All of the analysis presented in this thesis involved exhaustive simulations. The algorithm could be realized in hardware as future work.
The proposed algorithm can also be a good option for the image processor of the wireless capsule endoscopy system.
REFERENCES
[1] F. Khelifi, Image Compression and Watermarking in the Wavelet Transform Domain, Ph.D. thesis, School of Electronics, Electrical Engineering and Computer Science, Queen's University of Belfast, 2007.
[2] A. Islam, Set-Partitioned Image Coding, Ph.D. thesis, Rensselaer Polytechnic Institute, Troy, New York, 1999.
[3] H. Kim and M. Rahimi, "Energy efficient image compression platforms," IEEE Transactions on Image Processing, pp. 2100-2113, Sept. 2009.
[4] D. Santa-Cruz, R. Grosbois and T. Ebrahimi, "JPEG 2000 performance evaluation and assessment," Signal Processing: Image Communication, vol. 17, pp. 113-130, Jan. 2002.
[5] A. Said and W. A. Pearlman, "A new, fast and efficient image codec based on set partitioning in hierarchical trees," IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, pp. 243-250, June 1996.
[6] W. A. Pearlman, A. Islam, N. Nagaraj and A. Said, "Efficient, low-complexity image coding with a set-partitioning embedded block coder," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, pp. 1219-1235, 2004.
[7] W.-K. Lin and N. Burgess, "Listless zerotree coding for color images," in Proc. 32nd Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 231-235, Nov. 1998.
[8] F. W. Wheeler and W. A. Pearlman, "SPIHT image compression without lists," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2047-2050, May 2000.
[9] H. G. Musmann, P. Pirsch and H. J. Grallert, "Advances in picture coding," Proceedings of the IEEE, vol. 73, pp. 523-548, 1985.
[10] R. J. Clarke, Transform Coding of Images, Academic Press, New York, 1985.
[11] O. J. Kwon and R. Chellappa, "Region adaptive subband image coding," IEEE Transactions on Image Processing, vol. 7, pp. 632-648, 1998.
[12] M. Antonini et al., "Image coding using wavelet transform," IEEE Transactions on Image Processing, vol. 1, pp. 205-220, 1992.
[13] A. Said and W. Pearlman, "A new fast and efficient image codec based on set partitioning in hierarchical trees," IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, pp. 243-250, 1996.
[14] JPEG2000 Core Coding System (Part 1), ISO/IEC 15444-1, Dec. 2000.
[15] B. E. Usevitch, "A tutorial on modern lossy wavelet image compression: Foundations of JPEG2000," IEEE Signal Processing Magazine, vol. 18, pp. 22-35, 2001.
[16] J. M. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients," IEEE Transactions on Signal Processing, vol. 41, pp. 3445-3462, 1993.
[17] A. Said and W. A. Pearlman, "A new, fast, and efficient image codec based on set partitioning in hierarchical trees," IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, pp. 243-250, 1996.
[18] S. D. Servetto, K. Ramchandran and M. T. Orchard, "Image coding based on a morphological representation of wavelet data," IEEE Transactions on Image Processing, vol. 8, pp. 1161-1174, 1999.
[19] E. S. Hong and R. E. Ladner, "Group testing for image compression," IEEE Transactions on Image Processing, vol. 11, pp. 901-911, 2002.
[20] A. Islam and W. A. Pearlman, "An embedded and efficient low-complexity hierarchical image coder," in Proc. SPIE Visual Communications and Image Processing, vol. 3653, pp. 294-305, 1999.
[21] W. A. Pearlman, A. Islam, N. Nagaraj and A. Said, "Efficient, low-complexity image coding with a set-partitioning embedded block coder," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, pp. 1219-1235, 2004.
[22] G. Strang and T. Nguyen, Wavelets and Filter Banks, Wellesley-Cambridge Press, Wellesley, MA, 1996.
[23] C. Chrysafis, A. Said, A. Drukarev and W. A. Pearlman, "SBHP - a low complexity wavelet coder," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 2035-2038, 2000.
[24] S. T. Hsiang and J. W. Woods, "Embedded image coding using zeroblocks of subband/wavelet coefficients and context modeling," in Proc. IEEE International Symposium on Circuits and Systems, pp. 662-665, 2000.
[25] C. K. Su, H. C. Hsin and S. F. Lin, "Wavelet tree classification and hybrid coding for image compression," IEE Proceedings - Vision, Image, and Signal Processing, vol. 152, pp. 752-756, 2005.
[26] A. Islam and W. A. Pearlman, "Efficient, low-complexity image coding with a set-partitioning embedded block coder," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, pp. 1219-1235, Nov. 2004.
[27] W. A. Pearlman et al., "Efficient, low-complexity image coding with a set-partitioning embedded block coder," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 11, pp. 1219-1235, 2004.
[28] A. Said and W. A. Pearlman, "A new, fast and efficient image codec based on set partitioning in hierarchical trees," IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, pp. 243-250, June 1996.
[29] A. Islam and W. A. Pearlman, "An embedded and efficient low-complexity hierarchical image coder," in Proc. SPIE Visual Communications and Image Processing, vol. 3653, pp. 294-305, Jan. 1999.
[30] S. Grgic, M. Grgic and B. Zovko-Cihlar, "Performance analysis of image compression using wavelets," IEEE Transactions on Industrial Electronics, vol. 48, pp. 682-695, 2001.
[31] N. Sprljan, S. Grgic and M. Grgic, "Modified SPIHT algorithm for wavelet packet image coding," Real-Time Imaging, vol. 11, pp. 378-388, 2005.
[32] A. Said and W. A. Pearlman, "A new, fast and efficient image codec based on set partitioning in hierarchical trees," IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, no. 3, pp. 243-250, June 1996.
[33] S. E. Ghrare, M. A. M. Ali, M. Ismail and K. Jumari, "Diagnostic quality of compressed medical images: Objective and subjective evaluation," in Proc. Second Asia International Conference on Modeling & Simulation (AICMS), pp. 923-927, 2008.
[34] D. S. Hands, Q. Huynh-Thu, A. W. Rix, A. G. Davis and R. M. Voelcker, "Objective perceptual quality measurement of 3G video services," in Proc. Fifth IEE International Conference on 3G Mobile Communication Technologies, pp. 437-441, 2004.
[35] R. C. Gonzalez, R. E. Woods and S. L. Eddins, Digital Image Processing Using MATLAB, Pearson Prentice Hall, 2004.
[36] N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital Signal Processing, Springer, 1975.
[37] A. K. Jain, Fundamentals of Digital Image Processing, Prentice Hall, 1989.
APPENDIX
(A) SAMPLE IMAGE USED FOR PERFORMANCE ANALYSIS
Lena Image
(B) MATLAB PROGRAMMING
Test ALL
% Test.m
clc; clear; close all;
[namefile,pathname] = uigetfile({'*.tiff;*.pgm;*.bmp;*.tif;*.jpg;*.jpeg;*.gif;*.png','IMAGE Files (*.pgm,*.bmp,*.tiff,*.jpg,*.jpeg,*.gif)'},'Choose Image');
if namefile~=0
im_name = strcat(pathname,namefile);
im = imread(im_name);
[h,w,d] = size(im);
if d==3
im = rgb2gray(im);
end
im = imresize(im,[256,256]);
level = graythresh(im);
bwim= im2bw(im,level*0.5);
%bwim=adaptivethreshold(im,8,0.0001,1);
% thresh = multithresh(im);
% bwim = imquantize(im,thresh)
figure(101),imshow(bwim);
[ROI,Non_ROI,labels] = select_ROI(im,bwim);
type = 'bior4.4';
ii=0;
for rate=1:0.2:4
ii=ii+1;
[compr,rate,PSNR] = with_ROI(im,bwim,type,rate,ROI,Non_ROI,labels);
res(1:3,ii) = [compr; rate; PSNR];
[compr,rate,PSNR] = without_ROI(im,bwim,type,rate);
res(4:6,ii) = [compr; rate; PSNR];
res
end
save('Result_table','res');
figure,
set(gcf,'DefaultLineLineWidth',2);
set(gca,'FontSize',12);
plot(res(2,:),res(3,:),'r');
hold on;
plot(res(5,:),res(6,:),'b');
grid on
title('PSNR vs Bpp');
xlabel('BPP')
ylabel('PSNR')
end

% with_ROI.m
function [compr,rate,PSNR] = with_ROI(im,bwim,type,rate,ROI,Non_ROI,labels)
Non_ROI=im;
[Lo_D,Hi_D,Lo_R,Hi_R] = wfilters(type);
[h,w] = size(Non_ROI);
level = log2(h)-1;
max_bits = floor(rate * h*w);
[I_W, S] = my_DWT(Non_ROI, level, Lo_D, Hi_D);
%———– Coding —————-
NROI_enc = (SPIHT_Enc(I_W, max_bits, h*w, level));
header=NROI_enc(1:3);
NROI_enc=(NROI_enc(4:end));
h0 w0=size(NROI_enc);
xC=cell(2,1); xC{1}=reshape(NROI_enc,1,h0*w0);
NROI_enc = (Huff(xC));
Ms = 8; Ns = Ms;
[M,N] = size(ROI);
LS = liftwave('haar','Int2Int');
ROI_enc = lwt2(double(ROI), LS, 3);
ROI_enc = im2col(ROI_enc, [Ms,Ns], 'distinct');
[h1,w1] = size(ROI_enc);
xC = cell(2,1); xC{1} = reshape(ROI_enc,1,h1*w1);
ROI_enc = int16(Huff(xC));
save('temp.dat','ROI_enc','NROI_enc','header','S','level','h1','w1');
hh = dir(‘temp.dat’);
compr = (hh.bytes)*100.0/(256.0*256.0); % compression
%———– Decoding —————-
size(NROI_enc)
size(header)
xC = Huff(NROI_enc);
xc=xC{1};
NROI_enc=reshape(xc,h0,w0);
NROI_enc1 = [header, double(NROI_enc)];
NROI_dec = SPIHT_Dec((double(NROI_enc1)));
%———– Wavelet Reconstruction —————-
Non_ROI2 = my_InvDWT(double(NROI_dec), S, Lo_R, Hi_R, level);
%Non_ROI2=reshape(Non_ROI2,size(Non_ROI2,1)*size(Non_ROI2,2),1);
xC = Huff(double(ROI_enc));
xc=xC{1};
ROI2=reshape(xc,h1,w1);
ROI2 = col2im(ROI2, [Ms,Ns], [M,N], 'distinct');
ROI2 = ilwt2(ROI2, LS, 3);
out=Non_ROI2;%zeros(labels(1),labels(2));
out(labels(3):labels(5),labels(4):labels(6))=ROI2;
% k=1;
% for i=1:size(im,1)
% for j=1:size(im,2)
% if(~(j>=labels(4) && j<=labels(6) && i>=labels(3) && i<=labels(5)))
% out(i,j)=Non_ROI2(k);
% k=k+1;
% end
%
%
% end
% end
R = double(im)-out;
PSNR = 10*log10(((numel(im)*255^2)/sum(sum(R.*R))));
'compression  BPP  PSNR'
[compr, rate, PSNR]
display = [100 100 900 400];
handle = figure('name','panel','position',display);
subplot(1,2,1), imshow(im); title('Original Image');
subplot(1,2,2), imshow(uint8(out));
title(sprintf('with ROI PSNR=%f and Comprn=%f',PSNR,compr));
end

% without_ROI.m
function [compr,rate,PSNR] = without_ROI(im,bwim,type,rate)
im = im.*uint8(bwim);
[Lo_D,Hi_D,Lo_R,Hi_R] = wfilters(type);
[h,w] = size(im);
level = log2(h);
max_bits = floor(rate * h*w);
[I_W, S] = my_DWT(im, level, Lo_D, Hi_D);
%———– Coding —————-
im_enc = (SPIHT_Enc(I_W, max_bits, h*w, level));
header=im_enc(1:3);
im_enc=uint8(im_enc(4:end));
save('temp.dat','im_enc','header','S','level');
hh = dir(‘temp.dat’);
compr = (hh.bytes)*100.0/(256.0*256.0); % compression
%———– Decoding —————-
im_enc1 = [header, double(im_enc)];
im_dec = SPIHT_Dec((double(im_enc1)));
%———– Wavelet Reconstruction —————-
out = my_InvDWT(double(im_dec), S, Lo_R, Hi_R, level);
R = double(im)-out;
PSNR = 10*log10(((numel(im)*255^2)/sum(sum(R.*R))));
'compression  BPP  PSNR'
[compr, rate, PSNR]
display = [100 100 900 400];
handle = figure('name','panel','position',display);
subplot(1,2,1), imshow(im); title('Original Image');
subplot(1,2,2), imshow(uint8(out));
title(sprintf('without ROI PSNR=%f and Comprn=%f',PSNR,compr));
end

% SPIHT_Enc.m
function out = SPIHT_Enc(m, max_bits, block_size, level)
%----------- Initialization -----------
bitctr = 0;
out = 2*ones(1,max_bits – 14);
n_max = floor(log2(abs(max(max(m)’))));
Bits_Header = 0;
Bits_LSP = 0;
Bits_LIP = 0;
Bits_LIS = 0;
%———– output bit stream header —————-
% image size, number of bit plane, wavelet decomposition level should be
% written as bit stream header.
out(1,[1 2 3]) = [size(m,1) n_max level]; bitctr = bitctr + 24;
index = 4;
Bits_Header = Bits_Header + 24;
%———– Initialize LIP, LSP, LIS —————-
temp = [];
bandsize = 2.^(log2(size(m, 1)) – level + 1);
temp1 = 1 :bandsize;
for i = 1 : bandsize
temp = [temp; temp1];
end
LIP(:, 1) = temp(:);
temp = temp’;
LIP(:, 2) = temp(:);
LIS(:, 1) = LIP(:, 1);
LIS(:, 2) = LIP(:, 2);
LIS(:, 3) = zeros(length(LIP(:, 1)), 1);
pstart = 1;
pend = bandsize / 2;
for i = 1 : bandsize / 2
LIS(pstart : pend, :) = [];
pdel = pend – pstart + 1;
pstart = pstart + bandsize – pdel;
pend = pend + bandsize – pdel;
end
LSP = [];
n = n_max;
%———– coding —————-
while(bitctr<max_bits)
% Sorting Pass
LIPtemp = LIP; temp = 0;
for i = 1:size(LIPtemp,1)
temp = temp+1;
if (bitctr + 1) >= max_bits
    if (bitctr < max_bits)
        out(length(out)) = [];
    end
    return
end
if abs(m(LIPtemp(i,1),LIPtemp(i,2))) >= 2^n % 1: positive; 0: negative
out(index) = 1; bitctr = bitctr + 1;
index = index +1; Bits_LIP = Bits_LIP + 1;
sgn = m(LIPtemp(i,1),LIPtemp(i,2))>=0;
out(index) = sgn; bitctr = bitctr + 1;
index = index +1; Bits_LIP = Bits_LIP + 1;
LSP = [LSP; LIPtemp(i,:)];
LIP(temp,:) = []; temp = temp - 1;
else
out(index) = 0; bitctr = bitctr + 1;
index = index +1;
Bits_LIP = Bits_LIP + 1;
endendLIStemp = LIS; temp = 0; i = 1;
    while (i <= size(LIStemp,1))
        temp = temp + 1;
        if LIStemp(i,3) == 0
            if bitctr >= max_bits
                return
            end
            max_d = my_Descendant(LIStemp(i,1), LIStemp(i,2), LIStemp(i,3), m);
            if max_d >= 2^n
                out(index) = 1; bitctr = bitctr + 1;
                index = index + 1; Bits_LIS = Bits_LIS + 1;
                x = LIStemp(i,1); y = LIStemp(i,2);
                if (bitctr + 1) >= max_bits
                    if (bitctr < max_bits)
                        out(length(out)) = [];
                    end
                    return
                end
                if abs(m(2*x-1,2*y-1)) >= 2^n
                    LSP = [LSP; 2*x-1 2*y-1];
                    out(index) = 1; bitctr = bitctr + 1;
                    index = index + 1; Bits_LIS = Bits_LIS + 1;
                    sgn = m(2*x-1,2*y-1) >= 0;
                    out(index) = sgn; bitctr = bitctr + 1;
                    index = index + 1; Bits_LIS = Bits_LIS + 1;
                else
                    out(index) = 0; bitctr = bitctr + 1;
                    index = index + 1; Bits_LIS = Bits_LIS + 1;
                    LIP = [LIP; 2*x-1 2*y-1];
                end
                if (bitctr + 1) >= max_bits
                    if (bitctr < max_bits)
                        out(length(out)) = [];
                    end
                    return
                end
                if abs(m(2*x-1,2*y)) >= 2^n
                    LSP = [LSP; 2*x-1 2*y];
                    out(index) = 1; bitctr = bitctr + 1;
                    index = index + 1; Bits_LIS = Bits_LIS + 1;
                    sgn = m(2*x-1,2*y) >= 0;
                    out(index) = sgn; bitctr = bitctr + 1;
                    index = index + 1; Bits_LIS = Bits_LIS + 1;
                else
                    out(index) = 0; bitctr = bitctr + 1;
                    index = index + 1; Bits_LIS = Bits_LIS + 1;
                    LIP = [LIP; 2*x-1 2*y];
                end
                if (bitctr + 1) >= max_bits
                    if (bitctr < max_bits)
                        out(length(out)) = [];
                    end
                    return
                end
                if abs(m(2*x,2*y-1)) >= 2^n
                    LSP = [LSP; 2*x 2*y-1];
                    out(index) = 1; bitctr = bitctr + 1;
                    index = index + 1; Bits_LIS = Bits_LIS + 1;
                    sgn = m(2*x,2*y-1) >= 0;
                    out(index) = sgn; bitctr = bitctr + 1;
                    index = index + 1; Bits_LIS = Bits_LIS + 1;
                else
                    out(index) = 0; bitctr = bitctr + 1;
                    index = index + 1; Bits_LIS = Bits_LIS + 1;
                    LIP = [LIP; 2*x 2*y-1];
                end
                if (bitctr + 1) >= max_bits
                    if (bitctr < max_bits)
                        out(length(out)) = [];
                    end
                    return
                end
                if abs(m(2*x,2*y)) >= 2^n
                    LSP = [LSP; 2*x 2*y];
                    out(index) = 1; bitctr = bitctr + 1;
                    index = index + 1; Bits_LIS = Bits_LIS + 1;
                    sgn = m(2*x,2*y) >= 0;
                    out(index) = sgn; bitctr = bitctr + 1;
                    index = index + 1; Bits_LIS = Bits_LIS + 1;
                else
                    out(index) = 0; bitctr = bitctr + 1;
                    index = index + 1; Bits_LIS = Bits_LIS + 1;
                    LIP = [LIP; 2*x 2*y];
                end
                if ((2*(2*x)-1) < size(m) & (2*(2*y)-1) < size(m))
                    LIS = [LIS; LIStemp(i,1) LIStemp(i,2) 1];
                    LIStemp = [LIStemp; LIStemp(i,1) LIStemp(i,2) 1];
                end
                LIS(temp,:) = []; temp = temp - 1;
            else
                out(index) = 0; bitctr = bitctr + 1;
                index = index + 1; Bits_LIS = Bits_LIS + 1;
            end
        else
            if bitctr >= max_bits
                return
            end
            max_d = my_Descendant(LIStemp(i,1), LIStemp(i,2), LIStemp(i,3), m);
            if max_d >= 2^n
                out(index) = 1; bitctr = bitctr + 1;
                index = index + 1;
                x = LIStemp(i,1); y = LIStemp(i,2);
                LIS = [LIS; 2*x-1 2*y-1 0; 2*x-1 2*y 0; 2*x 2*y-1 0; 2*x 2*y 0];
                LIStemp = [LIStemp; 2*x-1 2*y-1 0; 2*x-1 2*y 0; 2*x 2*y-1 0; 2*x 2*y 0];
                LIS(temp,:) = []; temp = temp - 1;
            else
                out(index) = 0; bitctr = bitctr + 1;
                index = index + 1; Bits_LIS = Bits_LIS + 1;
            end
        end
        i = i + 1;
    end
    % Refinement Pass
    temp = 1;
    value = floor(abs(2^(n_max-n+1)*m(LSP(temp,1),LSP(temp,2))));
    while (value >= 2^(n_max+2) & (temp <= size(LSP,1)))
        if bitctr >= max_bits
            return
        end
        s = bitget(value, n_max+2);
        out(index) = s; bitctr = bitctr + 1;
        index = index + 1; Bits_LSP = Bits_LSP + 1;
        temp = temp + 1;
        if temp <= size(LSP,1)
            value = floor(abs(2^(n_max-n+1)*m(LSP(temp,1),LSP(temp,2))));
        end
    end
    n = n - 1;
end

function value = my_Descendant(i, j, type, m)
s = size(m,1);
S = [];
index = 0; a = 0; b = 0;
while ((2*i-1) < s & (2*j-1) < s)
    a = i - 1; b = j - 1;
    mind = 2*(a+1)-1 : 2*(a+2^index);
    nind = 2*(b+1)-1 : 2*(b+2^index);
    chk = mind <= s;
    len = sum(chk);
    if len < length(mind)
        mind(len+1:length(mind)) = [];
    end
    chk = nind <= s;
    len = sum(chk);
    if len < length(nind)
        nind(len+1:length(nind)) = [];
    end
    S = [S reshape(m(mind,nind), 1, [])];
    index = index + 1;
    i = 2*a + 1; j = 2*b + 1;
end
if type == 1
    S(:,1:4) = [];
end
value = max(abs(S));
SPIHT_dec.m

function m = SPIHT_Dec(in)
m = zeros(in(1,1));
n_max = in(1,2);
level = in(1,3);
ctr = 4;
%----------- Initialize LIP, LSP, LIS ----------------
temp = [];
bandsize = 2.^(log2(in(1,1)) - level + 1);
temp1 = 1 : bandsize;
for i = 1 : bandsize
    temp = [temp; temp1];
end
LIP(:, 1) = temp(:);
temp = temp';
LIP(:, 2) = temp(:);
LIS(:, 1) = LIP(:, 1);
LIS(:, 2) = LIP(:, 2);
LIS(:, 3) = zeros(length(LIP(:, 1)), 1);
pstart = 1;
pend = bandsize / 2;
for i = 1 : bandsize / 2
    LIS(pstart : pend, :) = [];
    pdel = pend - pstart + 1;
    pstart = pstart + bandsize - pdel;
    pend = pend + bandsize - pdel;
end
LSP = [];
%----------- Decoding ----------------
n = n_max;
while (ctr <= size(in,2))
    % Sorting Pass
    LIPtemp = LIP; temp = 0;
    for i = 1:size(LIPtemp,1)
        temp = temp + 1;
        if ctr > size(in,2)
            return
        end
        if in(1,ctr) == 1
            ctr = ctr + 1;
            if in(1,ctr) > 0
                m(LIPtemp(i,1),LIPtemp(i,2)) = 2^n + 2^(n-1);
            else
                m(LIPtemp(i,1),LIPtemp(i,2)) = -2^n - 2^(n-1);
            end
            LSP = [LSP; LIPtemp(i,:)];
            LIP(temp,:) = []; temp = temp - 1;
        end
        ctr = ctr + 1;
    end
    LIStemp = LIS; temp = 0; i = 1;
    while (i <= size(LIStemp,1))
        temp = temp + 1;
        if ctr > size(in,2)
            return
        end
        if LIStemp(i,3) == 0
            if in(1,ctr) == 1
                ctr = ctr + 1;
                x = LIStemp(i,1); y = LIStemp(i,2);
                if ctr > size(in,2)
                    return
                end
                if in(1,ctr) == 1
                    LSP = [LSP; 2*x-1 2*y-1];
                    ctr = ctr + 1;
                    if in(1,ctr) == 1
                        m(2*x-1,2*y-1) = 2^n + 2^(n-1);
                    else
                        m(2*x-1,2*y-1) = -2^n - 2^(n-1);
                    end
                    ctr = ctr + 1;
                else
                    LIP = [LIP; 2*x-1 2*y-1];
                    ctr = ctr + 1;
                end
                if ctr > size(in,2)
                    return
                end
                if in(1,ctr) == 1
                    ctr = ctr + 1;
                    LSP = [LSP; 2*x-1 2*y];
                    if in(1,ctr) == 1
                        m(2*x-1,2*y) = 2^n + 2^(n-1);
                    else
                        m(2*x-1,2*y) = -2^n - 2^(n-1);
                    end
                    ctr = ctr + 1;
                else
                    LIP = [LIP; 2*x-1 2*y];
                    ctr = ctr + 1;
                end
                if ctr > size(in,2)
                    return
                end
                if in(1,ctr) == 1
                    ctr = ctr + 1;
                    LSP = [LSP; 2*x 2*y-1];
                    if in(1,ctr) == 1
                        m(2*x,2*y-1) = 2^n + 2^(n-1);
                    else
                        m(2*x,2*y-1) = -2^n - 2^(n-1);
                    end
                    ctr = ctr + 1;
                else
                    LIP = [LIP; 2*x 2*y-1];
                    ctr = ctr + 1;
                end
                if ctr > size(in,2)
                    return
                end
                if in(1,ctr) == 1
                    ctr = ctr + 1;
                    LSP = [LSP; 2*x 2*y];
                    if in(1,ctr) == 1
                        m(2*x,2*y) = 2^n + 2^(n-1);
                    else
                        m(2*x,2*y) = -2^n - 2^(n-1);
                    end
                    ctr = ctr + 1;
                else
                    LIP = [LIP; 2*x 2*y];
                    ctr = ctr + 1;
                end
                if ((2*(2*x)-1) < size(m) & (2*(2*y)-1) < size(m))
                    LIS = [LIS; LIStemp(i,1) LIStemp(i,2) 1];
                    LIStemp = [LIStemp; LIStemp(i,1) LIStemp(i,2) 1];
                end
                LIS(temp,:) = []; temp = temp - 1;
            else
                ctr = ctr + 1;
            end
        else
            if in(1,ctr) == 1
                x = LIStemp(i,1); y = LIStemp(i,2);
                LIS = [LIS; 2*x-1 2*y-1 0; 2*x-1 2*y 0; 2*x 2*y-1 0; 2*x 2*y 0];
                LIStemp = [LIStemp; 2*x-1 2*y-1 0; 2*x-1 2*y 0; 2*x 2*y-1 0; 2*x 2*y 0];
                LIS(temp,:) = []; temp = temp - 1;
            end
            ctr = ctr + 1;
        end
        i = i + 1;
    end
    % Refinement Pass
    temp = 1;
    value = m(LSP(temp,1), LSP(temp,2));
    while (abs(value) >= 2^(n+1) & (temp <= size(LSP,1)))
        if ctr > size(in,2)
            return
        end
        value = value + ((-1)^(in(1,ctr) + 1)) * (2^(n-1)) * sign(m(LSP(temp,1),LSP(temp,2)));
        m(LSP(temp,1),LSP(temp,2)) = value;
        ctr = ctr + 1;
        temp = temp + 1;
        if temp <= size(LSP,1)
            value = m(LSP(temp,1),LSP(temp,2));
        end
    end
    n = n - 1;
end
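Both routines above scan bit planes from n_max down to 0: the encoder marks a coefficient significant once |c| >= 2^n, and the decoder reconstructs it at the mid-point of the quantisation interval, +/-(2^n + 2^(n-1)). A minimal Python sketch of just that significance/reconstruction rule (illustrative only, not the MATLAB listing above):

```python
import math

def n_max(coeffs):
    """Top bit plane: floor(log2(max |c|)), as in the encoder initialisation."""
    return int(math.floor(math.log2(max(abs(c) for c in coeffs))))

def first_significant_plane(c, nmax):
    """Scan planes nmax..0; at the first plane n with |c| >= 2^n, return
    (n, initial reconstruction +/-(2^n + 2^(n-1))), or None if never significant."""
    for n in range(nmax, -1, -1):
        if abs(c) >= 2 ** n:
            recon = (2 ** n + 2 ** (n - 1)) * (1 if c >= 0 else -1)
            return n, recon
    return None

coeffs = [63, -200, 5, 0]
nm = n_max(coeffs)  # max |c| = 200, so the scan starts at plane 7
print(nm, first_significant_plane(-200, nm), first_significant_plane(63, nm))
# → 7 (7, -192) (5, 48)
```

Subsequent refinement bits then halve the uncertainty interval each pass, which is exactly what the `2^(n-1)` correction term in the decoder's refinement loop implements.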