PROJECTS


In the course of my Bachelor's degree in Computer Engineering, I have done the following projects:



DATA MINING

Data Mining is a technology that helps you identify and extract high-value business intelligence from data assets. It lets analysts and business technologists discover patterns that might otherwise go unnoticed, across volumes of data they simply could not penetrate with other types of analytical tools. Data Mining provides the fundamental technology and tools to support the mining process, as well as application services that enable the development of customised applications.

Innovative technology

Central to this technology are its data mining algorithms. Applicable to a wide range of business problems, the algorithms can be employed to facilitate decision-making in business areas such as campaign planning, customer relationship management, process reengineering, product planning and fulfilment, and fraud and abuse management. Capabilities include support for the following methods:
  • Creation of classification and prediction models
  • Discovery of associations and sequential patterns in large databases
  • Automatic segmentation of databases into groups of related records (see the sketch after this list)
  • Discovery of similar patterns of behaviour within special time sequences
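
As a concrete illustration of the third capability above, the sketch below segments numeric records into groups of related records using a minimal k-means routine. This is a hypothetical Python/NumPy stand-in, not the project's actual segmentation algorithm; the records and their fields are invented for the example.

    import numpy as np

    def kmeans(records, k, iterations=20, seed=0):
        """Partition the rows of `records` into k groups of related records."""
        rng = np.random.default_rng(seed)
        # Start from k randomly chosen records as the initial group centroids.
        centroids = records[rng.choice(len(records), size=k, replace=False)]
        for _ in range(iterations):
            # Assign each record to its nearest centroid (Euclidean distance).
            dists = np.linalg.norm(records[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Move each centroid to the mean of the records assigned to it.
            for j in range(k):
                members = records[labels == j]
                if len(members):
                    centroids[j] = members.mean(axis=0)
        return labels, centroids

    # Example: segment 200 synthetic customer records (age, annual spend).
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(loc, 5.0, size=(100, 2))
                      for loc in ([25.0, 200.0], [55.0, 800.0])])
    labels, centres = kmeans(data, k=2)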

Data analysis and preparation

Automation of some of the most typical data preparation tasks aims to improve the analyst's productivity by eliminating the need to program specialised routines. Depending on the data mining technique, the analyst can select, sample, aggregate, filter, cleanse and transform data in preparation for mining. Statistical functions facilitate the analysis and preparation of data and also provide forecasting capabilities; the supported functions include factor analysis, linear regression, principal component analysis, univariate curve fitting, logistic regression, and univariate and bivariate statistics.
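
The statistical functions can be pictured with an equally small sketch. The snippet below performs linear regression and univariate curve fitting on a made-up series using NumPy's least-squares routines; it stands in for the project's own statistical toolkit, which is not shown here.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.1, 4.3, 5.9, 8.2, 9.8, 12.1])

    # Linear regression: fit y = a*x + b by least squares.
    a, b = np.polyfit(x, y, deg=1)

    # Univariate curve fitting: a quadratic fit, usable for simple forecasting.
    coeffs = np.polyfit(x, y, deg=2)
    forecast = np.polyval(coeffs, 7.0)   # extrapolate one step ahead

    print(f"slope={a:.2f}, intercept={b:.2f}, forecast(7)={forecast:.1f}")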

Visualisation

Visualisation functions bring out unusual features that might otherwise be missed. Customised visualisations interpret data mining results, depict model quality, and present the results of the various statistical functions.

Image Compression Using Vector Quantization

Introduction

Digital storage of photo ID pictures, along with other personal ID information, is required in various personal identification and security systems. Such systems can be either centralized (the image information is stored in a central database) or decentralized (the image of each individual is stored on his own ID card). In the latter case the available memory is very limited. One example of the storage media on an ID card is the two-dimensional barcode known as the PDF417 label, which has a capacity of 1024 bytes. An 8-bit monochrome facial image at a spatial resolution of 128x128 pixels occupies 16,384 bytes, so fitting it into such a label requires a reduction of more than 90% (16,384 bytes down to 1,024 is in fact a 93.75% reduction). At high compression ratios, and when starting from low resolution, the Vector Quantization (VQ) algorithm can yield a good image at an average compression of more than 90%.

Image data compression techniques fall into two broad categories. The first category, called predictive coding, consists of methods that exploit redundancy in the data. Redundancy is related to such factors as predictability, randomness and smoothness in the data: an image of constant gray level, for example, is fully predictable once the gray level of the first pixel is known, whereas a white-noise random field is totally unpredictable and every pixel has to be stored to reproduce the image. Techniques such as Delta Modulation and Differential Pulse Code Modulation fall in this category. In the second category, called transform coding, compression is achieved by transforming the given image into another array such that a large amount of information is packed into a small number of samples. Other image data compression algorithms exist that are generalisations or combinations of these two methods.

This project deals with the storage and compression of images (photo IDs) using Vector Quantization.
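
To make the predictive-coding category concrete, the sketch below encodes a row of pixels in the spirit of DPCM: it stores the first sample plus the successive pixel-to-pixel differences, which are small for smooth image data and therefore cheap to encode. This is an illustrative example with made-up pixel values, not part of the project (which uses VQ instead).

    import numpy as np

    def dpcm_encode(row):
        """Return the first sample plus the successive differences."""
        row = row.astype(np.int16)            # widen so differences can go negative
        return row[0], np.diff(row)

    def dpcm_decode(first, diffs):
        """Rebuild the original samples from the first value and a running sum."""
        return np.concatenate([[first], first + np.cumsum(diffs)]).astype(np.uint8)

    row = np.array([100, 101, 101, 103, 104, 104, 105], dtype=np.uint8)
    first, diffs = dpcm_encode(row)
    assert np.array_equal(dpcm_decode(first, diffs), row)
    print(diffs)   # [1 0 2 1 0 1]: small, highly compressible values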

Vector Quantization

Vector Quantization (VQ) can be defined as a form of pattern recognition in which an input pattern is "approximated" by one of a stored set of patterns. It is mostly applied to image compression. An input image is first divided into meshes, and each mesh is then approximated by one of the stored patterns. We call a mesh an input vector and the set of patterns a codebook; a vector in the codebook is referred to as a code vector.

For VQ to be put into practical use, two issues need to be addressed. The first is acceleration of the nearest neighbor search (NNS), which looks for the code vector nearest to a given input vector among a large number of code vectors; on conventional serial processors this takes a substantial amount of time, growing with both the dimension and the number of code vectors. The second is the design of an optimal codebook. The quality of images reconstructed from a common set of images using DCT-based compression algorithms is largely independent of the particular algorithm used; in VQ, however, the quality of the compressed image depends on the codebook design.
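
The sketch below puts these pieces together for a 2x2 mesh size: it divides a grayscale image into meshes, trains a codebook with a few rounds of k-means (a simple stand-in for a full codebook-design procedure such as LBG), and encodes each mesh by brute-force nearest neighbor search. The input image is random noise purely so the example runs; everything in it is illustrative rather than the project's actual implementation.

    import numpy as np

    def to_blocks(img, bs=2):
        """Split an image into (N, bs*bs) row-major mesh vectors."""
        h, w = img.shape
        blocks = img.reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
        return blocks.reshape(-1, bs * bs).astype(np.float64)

    def nearest(codebook, vectors):
        """Brute-force NNS: index of the code vector closest to each input."""
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        return d.argmin(axis=1)

    def train_codebook(vectors, size=128, iterations=10, seed=0):
        """Crude k-means codebook design over the training vectors."""
        rng = np.random.default_rng(seed)
        codebook = vectors[rng.choice(len(vectors), size, replace=False)].copy()
        for _ in range(iterations):
            idx = nearest(codebook, vectors)
            for j in range(size):
                members = vectors[idx == j]
                if len(members):
                    codebook[j] = members.mean(axis=0)
        return codebook

    img = np.random.default_rng(0).integers(0, 256, (128, 128), dtype=np.uint8)
    vecs = to_blocks(img)                     # 4096 meshes of 4 pixels each
    codebook = train_codebook(vecs, size=128)
    indices = nearest(codebook, vecs)         # the compressed representation
    # Decode: look up each index and reassemble the 2x2 meshes into an image.
    restored = (codebook[indices].reshape(64, 64, 2, 2)
                .swapaxes(1, 2).reshape(128, 128).round().astype(np.uint8))

With 128 code vectors, each 2x2 mesh (32 bits of pixel data) is replaced by a single 7-bit index, which is where the compression comes from; a larger mesh with a modestly larger codebook (such as the 4x4 blocks and 256 entries used below) trades higher compression for more approximation error.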

Results


The results are shown below:

  [Screenshot: the options menu presented to the user]

  [Screenshot: the screen shown when the user selects option 3]

  [Screenshot: the compressed image (90% compression) using 2x2 blocks and 128 entries in the codebook; 94.1% compression was achieved with 4x4 blocks and 256 codebook entries]

  [Screenshot: the mapping of 256 colors to 16 colors for another quantization scheme implemented]

  [Screenshot: the result of the above quantization scheme]
