Feature Selection using Associative Memory Paradigm and Parallel Computing
Abstract
The performance of most pattern classifiers improves when redundant or irrelevant features are removed. However, this is typically achieved through computationally demanding methods or the successive construction of classifiers. This paper shows how the associative memory paradigm and parallel computing can be used to perform Feature Selection tasks. The approach uses associative memories to obtain a mask value representing a subset of features, thereby identifying irrelevant or redundant information for classification purposes. The performance of the proposed associative memory algorithm is validated by comparing the classification accuracy of the proposed model against that achieved by other well-known algorithms. Experimental results show that associative memories can be implemented on parallel computing infrastructure, reducing the computational cost of finding an optimal subset of features that maximizes classification performance.
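As a rough illustration of the feature-mask idea summarized above, the sketch below evaluates hypothetical binary feature masks in parallel using a standard classifier. It is not the paper's algorithm: the associative-memory stage that produces the masks is omitted, and the dataset, classifier, function name `evaluate_mask`, and candidate masks are assumptions introduced only for demonstration.

```python
# Illustrative sketch only: scores hypothetical binary feature masks in parallel.
# The associative-memory step that generates masks in the paper is not shown;
# candidate masks are listed by hand here for demonstration purposes.
from concurrent.futures import ProcessPoolExecutor

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)  # placeholder dataset (4 features)

def evaluate_mask(mask):
    """Score one binary feature mask by cross-validated accuracy."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():  # an empty feature subset cannot be classified
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

if __name__ == "__main__":
    # Hypothetical candidate masks; in the paper these would come from the
    # associative-memory stage rather than being enumerated manually.
    candidate_masks = [
        [1, 1, 1, 1],
        [0, 0, 1, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
    ]
    # Independent masks can be scored concurrently, mirroring the idea that
    # feature-subset evaluation is distributable across parallel workers.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(evaluate_mask, candidate_masks))
    best = int(np.argmax(scores))
    print("best mask:", candidate_masks[best], "accuracy:", round(scores[best], 3))
```

The point of the sketch is only that each mask's evaluation is independent, so the search over feature subsets parallelizes naturally, which is the source of the computational savings the abstract claims.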