THE LOWER THE MARKS, THE GREATER THE HUNGER TO DO WELL AND TO EXPLORE NEW WAYS OF DOING THINGS BETTER. SO DON'T WORRY ABOUT MARKS


Saturday, August 30, 2014

ANOTHER LIST OF PROJECTS


Topic
 1-Computer Aided Detection of Solid Breast Nodules: Performance Evaluation of Support Vector Machine and K- Nearest Neighbor Classifiers


Abstract—Breast cancer is one of the major health concerns of women all over the world. Computer Aided Detection (CAD) aids radiologists in the early detection of abnormalities in breast masses. Abnormalities in the breast may be cancerous or non-cancerous. This work proposes an effective CAD system that considerably reduces the misclassification rates of these abnormalities. 60 mammogram images were taken and subjected to segmentation and feature extraction. The K-means clustering algorithm is employed for segmentation, and the Fast Fourier Transform for feature extraction. The resulting set of feature vectors is given to the classification module. The classification of solid breast nodule masses is done using the supervised classifiers Support Vector Machine (SVM) and K-Nearest Neighbor (K-NN). The investigation reveals that SVM outperforms K-NN in terms of sensitivity, specificity and accuracy.
Index Terms—Mammogram, Segmentation, K-means clustering, Feature Extraction, Fast Fourier Transform, Support Vector Machine, K-Nearest Neighbor Classifier.
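As an illustration of the simpler of the two classifiers compared above, here is a minimal K-Nearest Neighbor sketch in NumPy; the feature vectors and labels are made-up toy data, not the paper's FFT-derived mammogram features:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    # Euclidean distance from the query to every training feature vector
    d = np.linalg.norm(train_X - query, axis=1)
    # majority vote among the k nearest labels
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# toy two-class data standing in for extracted feature vectors
X = np.array([[0, 0], [0, 1], [5, 5], [6, 5]], dtype=float)
y = np.array([0, 0, 1, 1])
label = knn_predict(X, y, np.array([0.1, 0.2]))
```

An SVM would instead learn a maximum-margin decision boundary from the same feature vectors; in practice both classifiers are available off the shelf in libraries such as scikit-learn.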
2-

 Textural Features Based Computer Aided Diagnostic System for Mammogram Mass Classification


Abstract— Computer Aided Diagnosis (CAD) can reduce the chance of human error and help medical practitioners correctly classify breast masses. This paper presents an algorithm for the early detection of breast masses. Textural analysis is one of the efficient methods for the early detection of abnormalities. The paper describes an efficient Discrete Wavelet Transform (DWT) algorithm and a modified Grey-Level Co-Occurrence Matrix (GLCM) method for textural feature extraction from segmented mammogram images. After classification, each tissue pattern is characterized as a benign or malignant mass. A total of 148 mammogram images were taken from the Mini-MIAS database, and solid breast nodules were classified into benign and malignant masses using supervised classifiers. The classifier used is a Radial Basis Function Neural Network (RBFNN). The proposed system has a high potential for cancer detection from digitized screening mammograms.
Index Terms—Mammogram, Pre-processing, Feature Extraction, Grey Level Co-occurrence Matrix, Discrete Wavelet Transform, Radial Basis Function Neural Networks.
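A minimal sketch of GLCM-based texture features; contrast, energy and homogeneity are standard Haralick-style measures, and the offset, number of grey levels and toy image are illustrative choices, not the paper's modified GLCM:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    # count co-occurring grey-level pairs at offset (dy, dx)
    g = np.zeros((levels, levels))
    h, w = img.shape
    for yy in range(h - dy):
        for xx in range(w - dx):
            g[img[yy, xx], img[yy + dy, xx + dx]] += 1
    return g / g.sum()            # normalize to joint probabilities

def haralick_features(g):
    i, j = np.indices(g.shape)
    contrast = np.sum(g * (i - j) ** 2)
    energy = np.sum(g ** 2)
    homogeneity = np.sum(g / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity
```

A perfectly uniform image gives zero contrast and maximal energy and homogeneity; textured regions spread probability off the diagonal.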

3-
A non-extensive entropy feature and its application to texture classification
Abstract—This paper proposes a new probabilistic non-extensive entropy feature for texture characterization, based on a Gaussian information measure. The highlights of the new entropy are that it is bounded by finite limits and that it is non-additive in nature. The non-additive property of the proposed entropy makes it useful for the representation of information content in non-extensive systems containing some degree of regularity or correlation. The effectiveness of the proposed entropy in representing correlated random variables is demonstrated by applying it to the texture classification problem, since textures found in nature are random and at the same time contain some degree of correlation or regularity at some scale. The gray-level co-occurrence probabilities (GLCP) are used for computing the entropy function. The experimental results indicate a high degree of classification accuracy. On comparison, the performance of the new entropy function is found superior to other forms of entropy such as the Shannon, Renyi, Tsallis and Pal and Pal entropies. Using the feature-based polar interaction maps (FBIM), the proposed entropy is shown to be the best measure among the entropies compared for representing correlated textures.
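The paper's Gaussian-based entropy is not reproduced here, but the non-extensive (non-additive) idea it builds on can be illustrated with the Tsallis entropy, one of the comparison entropies named above, computed over any probability distribution such as normalized co-occurrence probabilities:

```python
import numpy as np

def tsallis_entropy(p, q=2.0):
    # non-extensive entropy of a probability distribution;
    # reduces to the Shannon entropy in the limit q -> 1
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)
```

Unlike the Shannon entropy, this quantity is non-additive for independent subsystems, which is what makes such measures candidates for textures with correlation or regularity.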
4-
Content-based Image Retrieval by Information Theoretic Measure
ABSTRACT
Content-based image retrieval focuses on intuitive and efficient methods for retrieving images from databases
based on the content of the images. A new entropy function that serves as a measure of information content in an
image, termed ‘an information theoretic measure’, is devised in this paper. Among the various query paradigms,
query by example (QBE) is adopted to set a query image for retrieval from a large image database. In this paper,
colour and texture features are extracted using the new entropy function and the dominant colour is considered as a
visual feature for a particular set of images. Thus colour and texture features constitute the two-dimensional feature
vector for indexing the images. The low dimensionality of the feature vector speeds up the atomic query. Indices
in a large database system help retrieve the images relevant to the query image without looking at every image
in the database. The entropy values of colour and texture and the dominant colour are considered for measuring
the similarity. The utility of the proposed image retrieval system based on the information theoretic measures is
demonstrated on a benchmark dataset.
Keywords: Image retrieval, fuzzy features, descriptors, entropy, indexing
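The two-dimensional indexing described above can be sketched as follows; the Shannon entropy stands in for the paper's new entropy function, and the histograms and database vectors are toy data:

```python
import numpy as np

def entropy(p):
    # Shannon entropy of a normalized histogram (a stand-in for the
    # paper's information theoretic measure)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def index_image(colour_hist, texture_hist):
    # the low-dimensional feature vector used for indexing
    return np.array([entropy(colour_hist), entropy(texture_hist)])

def retrieve(query_vec, db_vecs, top=3):
    # rank database images by distance to the query's feature vector
    d = np.linalg.norm(db_vecs - query_vec, axis=1)
    return np.argsort(d)[:top]
```

Because each image is reduced to just two numbers, comparing the query against the whole index is cheap, which is the "atomic query" speed-up the abstract refers to.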
5-
A practical design of high-volume steganography
in digital video files
Abstract In this research, we consider exploiting the large volume of audio/video
data streams in compressed video clips/files for effective steganography. By observing
that most of the distributed video files employ H.264 Advanced Video Coding
(AVC) and MPEG Advanced Audio Coding (AAC) for video/audio compression,
we examine the coding features in these data streams to determine appropriate data
for modification so that reliable high-volume information hiding can be achieved.
Such issues as the perceptual quality, compressed bit-stream length, payload of
embedding, effectiveness of extraction and efficiency of execution will be taken into
consideration. First, the effects of using different coding features are investigated
separately and three embedding profiles, i.e. High, Medium and Low, which indicate
the amount of payload, will then be presented. The High profile is used to embed the
maximum amount of hidden information when the high payload is the only major
concern in the target application. The Medium profile is recommended since it is
designed to achieve a good balance among several requirements. The Low profile is
an efficient implementation for faster information embedding. The performances of
these three profiles are reported and the suggested Medium profile can hide more
than 10% of the compressed video file size in common Flash Video (FLV) files.
Keywords Steganography · H.264/AVC · MPEG AAC · Information hiding
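The paper modifies H.264/AAC coding features directly; as a much simpler stand-in for the embed/extract cycle, here is least-significant-bit embedding into an arbitrary list of integer coefficients (purely illustrative, not the paper's method):

```python
def embed_bits(coeffs, bits):
    # overwrite the least-significant bit of each coefficient with a payload bit
    out = list(coeffs)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_bits(coeffs, n):
    # read the payload back from the first n coefficients
    return [c & 1 for c in coeffs[:n]]
```

The real design question the paper addresses is *which* coefficients can be modified without hurting perceptual quality or bit-stream length, which is what the High/Medium/Low profiles trade off.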

6-
Block Matching Algorithms
For Motion Estimation
Abstract—This paper is a review of the block matching
algorithms used for motion estimation in video compression. It
implements and compares 7 different types of block matching
algorithms that range from the very basic Exhaustive Search to
the recent fast adaptive algorithms like Adaptive Rood Pattern
Search. The algorithms that are evaluated in this paper are
widely accepted by the video compressing community and have
been used in implementing various standards, ranging from
MPEG1 / H.261 to MPEG4 / H.263. The paper also presents a
very brief introduction to the entire flow of video compression.
Index Terms— Block matching, motion estimation, video
compression, MPEG, H.261, H.263
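The baseline of the compared algorithms, Exhaustive Search, can be sketched directly: every candidate displacement within a search window is scored with the sum of absolute differences (the block size of 8 and range of ±7 are common defaults, chosen here for illustration):

```python
import numpy as np

def sad(a, b):
    # sum of absolute differences: the usual block-matching cost
    return np.sum(np.abs(a.astype(int) - b.astype(int)))

def exhaustive_search(ref, cur, y, x, bs=8, p=7):
    # find the motion vector for the bs x bs block at (y, x) in `cur`
    # by checking every candidate within +/- p pixels in `ref`
    block = cur[y:y + bs, x:x + bs]
    best, mv = None, (0, 0)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + bs > ref.shape[0] or rx + bs > ref.shape[1]:
                continue
            cost = sad(ref[ry:ry + bs, rx:rx + bs], block)
            if best is None or cost < best:
                best, mv = cost, (dy, dx)
    return mv, best
```

Fast algorithms such as Adaptive Rood Pattern Search visit only a small subset of these (2p+1)² candidates, trading a little accuracy for a large speed-up.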

7-
Visual Cryptography Scheme for Color Image Using Random Number
with Enveloping by Digital Watermarking
Abstract
Visual Cryptography is a special type of encryption technique to
obscure image-based secret information, which can be decrypted
by the Human Visual System (HVS). This cryptographic system
encrypts the secret image by dividing it into n number of shares
and decryption is done by superimposing a certain number of
shares (k) or more. Simple visual cryptography is insecure
because decryption is performed by the human visual system.
The secret information can be retrieved by anyone if the person
gets at least k number of shares. Watermarking is a technique to
put a signature of the owner within the creation.
In this work we propose a visual cryptographic
scheme for color images where the divided shares are enveloped
in other images using invisible digital watermarking. The shares
are generated using random numbers.
Keywords: Visual Cryptography, Digital Watermarking,
Random Number.
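A minimal sketch of random-number share generation; this uses the XOR variant of a (2, 2) scheme for compactness, whereas classic OR-stacked visual cryptography expands each pixel into subpixels, and the watermarking envelope is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_shares(secret):
    # secret: binary image (0 = white, 1 = black)
    share1 = rng.integers(0, 2, size=secret.shape)  # pure random noise
    share2 = share1 ^ secret                        # second share hides the secret
    return share1, share2

# superimposing (XOR-ing) the two shares recovers the secret exactly;
# either share alone is statistically indistinguishable from noise
```

Enveloping each share inside an innocuous cover image via invisible watermarking, as the abstract proposes, addresses the weakness that bare shares invite suspicion and can be stacked by anyone who collects k of them.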
8-      

 Image Compression Using Discrete Wavelet Transform

Abstract: This project presents an approach to the MATLAB implementation of the Discrete Wavelet Transform (DWT) for image compression. The design follows the JPEG2000 standard and can be used for both lossy and lossless compression. To reduce the complexity of the design, a linear-algebra view of the DWT is used. With the growing use of digital still and moving images, a huge amount of disk space is required for storage and manipulation. For example, a standard 35-mm photograph digitized at 12 μm per pixel requires about 18 MB of storage, and one second of NTSC-quality color video requires 23 MB. JPEG is the most commonly used image compression standard today, but researchers have found that it has many limitations. To overcome those limitations and add new, improved features, ISO and ITU-T have come up with a new image compression standard, JPEG2000.
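The linear-algebra view of the DWT mentioned above can be illustrated with one level of the Haar transform, the simplest wavelet; note that JPEG2000 actually uses the 5/3 and 9/7 filters, so this is a sketch of the idea, not of the standard:

```python
import numpy as np

def haar_1d(v):
    # one level of the 1-D Haar transform: pairwise averages, then differences
    v = v.reshape(-1, 2)
    avg = (v[:, 0] + v[:, 1]) / 2.0
    diff = (v[:, 0] - v[:, 1]) / 2.0
    return np.concatenate([avg, diff])

def haar_2d(img):
    # separable 2-D transform: filter every row, then every column
    rows = np.array([haar_1d(r) for r in img])
    return np.array([haar_1d(c) for c in rows.T]).T
```

Compression then comes from quantizing or discarding the small detail coefficients while keeping the low-frequency sub-band, which concentrates most of the image energy.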
9-
Artificial Bee Colony Data Miner (ABC-Miner)

Abstract—Data mining aims to discover interesting, non-trivial,
and meaningful information from large datasets. One of the data
mining tasks is classification, which aims to assign the given
datasets to the most suitable classes. Classification rules are used
in many domains such as medical sciences, banking, and
meteorology. However, discovering classification rules is
challenging due to large size and noisy structure of the datasets,
and the difficulty of discovering general and meaningful rules. In
the literature, there are several classical and heuristic algorithms
proposed to mine classification rules out of large datasets. In this
paper, a novel heuristic classification data mining approach
based on the artificial bee colony (ABC) algorithm, named
ABC-Miner, was proposed. The proposed approach was compared
with Particle Swarm Optimization (PSO) rule classification
algorithm and C4.5 algorithm using benchmark datasets. The
experimental results show the efficiency of the proposed method.
Keywords: Artificial bee colony, Classification, Rule learning, Data
mining, ABC-Miner.
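ABC-Miner builds on the artificial bee colony optimizer; a bare-bones version of that underlying optimizer (employed-bee and scout phases only, with the onlooker phase omitted, minimizing a toy sphere function rather than mining rules) looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)

def abc_minimize(f, dim=2, n_food=10, limit=20, iters=200, lo=-5.0, hi=5.0):
    food = rng.uniform(lo, hi, (n_food, dim))      # candidate solutions
    fit = np.array([f(x) for x in food])
    trials = np.zeros(n_food)
    best_x, best_f = food[fit.argmin()].copy(), fit.min()
    for _ in range(iters):
        for i in range(n_food):
            k = rng.integers(n_food)               # random partner source
            j = rng.integers(dim)                  # random dimension to perturb
            cand = food[i].copy()
            cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
            fc = f(cand)
            if fc < fit[i]:                        # greedy selection
                food[i], fit[i], trials[i] = cand, fc, 0
                if fc < best_f:
                    best_x, best_f = cand.copy(), fc
            else:
                trials[i] += 1
            if trials[i] > limit:                  # scout: abandon a stale source
                food[i] = rng.uniform(lo, hi, dim)
                fit[i] = f(food[i])
                trials[i] = 0
    return best_x, best_f
```

In a rule-mining setting, each food source would encode a candidate classification rule and f would score its predictive quality on the training data.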

10-
FACIAL EXPRESSION RECOGNITION USING PRINCIPAL
COMPONENT ANALYSIS
ABSTRACT
Facial expressions play an important role in human
communication. The contours of the mouth, eyes and
eyebrows play an important role in classification. Eigenfaces
are used to classify facial expressions. It is assumed
that facial expressions can be classified into some discrete
classes (like happiness, sadness, disgust, fear, anger and
surprise), whereas the absence of any expression is the
“Neutral” expression. The intensity of a particular expression
can be identified by the level of its “dissimilarity” from the
Neutral expression.
Keywords- Principal component, edge detection, feature
extraction, segmentation
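The eigenface idea reduces each face image to a few principal-component coefficients; here is a minimal PCA sketch via the SVD, with toy vectors standing in for flattened face images:

```python
import numpy as np

def pca(X, k):
    # X: rows are flattened face images; returns the mean face and
    # the top-k principal axes (the "eigenfaces")
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:k]

def project(x, mean, comps):
    # coordinates of one face in eigenface space
    return comps @ (x - mean)
```

Classification then compares these low-dimensional coordinates, e.g. by distance to per-expression class means, and the distance from the Neutral projection can serve as an intensity measure.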
11-
Tracking Tetrahymena pyriformis Cells Using Decision Trees
Abstract
Matching cells over time has long been the most difficult
part of cell tracking. We address this problem by recasting
it as a classification problem.
We construct a feature set for each cell, and compute a
feature difference vector between a cell in the current
frame and a cell in a previous frame. Then we determine
whether the two cells represent the same cell over
time by training decision trees as our binary classifiers.
With the output of decision trees, we are able to formulate
an assignment problem for our cell association task
and solve it using a modified version of the Hungarian algorithm.
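The final assignment step pairs cells between frames by minimizing the total matching cost; here is a brute-force solver over permutations, shown only to make explicit what the (much faster, polynomial-time) Hungarian algorithm computes:

```python
from itertools import permutations

def assign(cost):
    # minimum-cost one-to-one matching between cells in consecutive
    # frames; cost[i][j] is the dissimilarity between cell i in the
    # previous frame and cell j in the current frame
    n = len(cost)
    best, best_perm = None, None
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if best is None or c < best:
            best, best_perm = c, perm
    return best_perm, best
```

In the paper's pipeline, the decision-tree outputs supply these pairwise costs; for realistic numbers of cells the factorial enumeration above must be replaced by the Hungarian algorithm.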
