Please use this identifier to cite or link to this item: http://www.lib.ncsu.edu/resolver/1840.16/4989

Title: Dimensionality Reduction and Feature Selection using a Mixed-norm Penalty Function
Authors: Zeng, Huiwen
Advisors: Carl T. Kelley, Committee Member
H. Joel Trussell, Committee Chair
Wesley Snyder, Committee Member
Arne A. Nilsson, Committee Member
Keywords: dimensionality reduction
feature selection
neural networks
machine learning
penalty function
mixed-norm penalty function
Issue Date: 13-Mar-2006
Degree: PhD
Discipline: Electrical Engineering
Abstract: Dimensionality reduction, the process of mapping high-dimensional patterns to lower-dimensional subspaces, is a key issue in improving the processing efficiency of high-dimensional data such as hyperspectral images. It has been widely discussed in data mining, image processing, pattern recognition, and related areas. Because, in most situations, many of the dimensions are redundant or unnecessary for the task of interest, removing them yields more efficient computation while maintaining the original performance. Dimensionality reduction also reduces measurement and storage requirements, shortens training and utilization times, and counters the curse of dimensionality to improve classification performance. Feature selection, the process of constructing and selecting subsets of features that are useful for building a good predictor, has been of interest for many years. Before Kohavi and John published a special issue on feature selection in 1997, studies usually considered no more than 40 features; since then, researchers have tackled problems with hundreds to tens of thousands of features. Like dimensionality reduction, feature selection reduces measurement and storage requirements, shortens training and utilization times, and facilitates data visualization and understanding. In this work, popular methods for dimensionality reduction and feature selection, such as the vector space method, penalty functions, and support vector machines (SVMs), are reviewed and compared. A novel penalty function, the mixed-norm penalty function, is proposed: it minimizes the 1-norm of the weight vector while keeping the 2-norm constant. Both dimensionality reduction and feature selection in this work are realized via artificial neural networks (ANNs). Combined with a bi-level optimization (BLO) technique, the mixed-norm penalty achieves strong performance on both synthetic data and hyperspectral images.
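The abstract does not give the exact functional form of the mixed-norm penalty, so the following is only a minimal sketch of one common way to realize the idea of penalizing the 1-norm of a weight vector while holding its 2-norm fixed, namely the scale-invariant ratio ||w||_1 / ||w||_2; the function name and constants below are illustrative, not the dissertation's implementation.

# Minimal sketch (assumed form, not the author's code): the ratio
# ||w||_1 / ||w||_2 is smallest for sparse weight vectors and largest
# when all entries share the same magnitude, so minimizing it encourages
# sparsity without changing the vector's 2-norm scale.
import numpy as np

def mixed_norm_penalty(w, eps=1e-12):
    """Illustrative mixed-norm penalty: 1-norm normalized by the 2-norm."""
    w = np.asarray(w, dtype=float)
    return np.abs(w).sum() / (np.sqrt((w ** 2).sum()) + eps)

# Two vectors with the same 2-norm: the sparse one incurs a smaller penalty.
sparse = np.array([1.0, 0.0, 0.0, 0.0])   # ||w||_2 = 1, penalty = 1.0
dense  = np.array([0.5, 0.5, 0.5, 0.5])   # ||w||_2 = 1, penalty = 2.0
print(mixed_norm_penalty(sparse), mixed_norm_penalty(dense))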
URI: http://www.lib.ncsu.edu/resolver/1840.16/4989
Appears in Collections: Dissertations

Files in This Item:

File      Size     Format
etd.pdf   1.5 MB   Adobe PDF
