Random subspace method
Random subspace method is an ensemble classifier: it consists of several individual classifiers and outputs a class based on the outputs of those individual classifiers. The random subspace method is a generalization of the random forest algorithm. Whereas random forests are composed of decision trees, a random subspace classifier can be built from any underlying classifiers; the method has been used with linear classifiers, support vector machines, and other types of classifiers. It is also applicable to one-class classification.
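As a concrete illustration of the point above (assuming scikit-learn is available), the random subspace method can be applied to a linear SVM base learner via scikit-learn's `BaggingClassifier`: setting `bootstrap=False` keeps every training object, while `max_features` draws a feature subset for each individual classifier. The dataset here is synthetic and purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import LinearSVC

# Toy dataset: 200 training objects, D = 20 features.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# bootstrap=False keeps all N training objects for each base classifier;
# max_features=0.5 selects d = 10 of the D = 20 features (without
# replacement, since bootstrap_features=False) per individual classifier.
clf = BaggingClassifier(LinearSVC(), n_estimators=10,
                        max_features=0.5, bootstrap=False,
                        bootstrap_features=False, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```

Predictions of the ten subspace SVMs are combined by voting, matching the ensemble scheme described in the article.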

Algorithm

The ensemble classifier is constructed using the following algorithm:
  1. Let the number of training objects be N and the number of features in the training data be D.
  2. Choose d, the number of input variables to be used in each individual classifier, with d < D. The value of d may also differ between the individual classifiers.
  3. Choose L to be the number of individual classifiers in the ensemble.
  4. For each individual classifier, create a training set by choosing d of the D features without replacement, and train the classifier on it.
  5. To classify a new object, combine the outputs of the L individual classifiers by majority voting or by combining their posterior probabilities.
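The steps above can be sketched in plain Python. The base learner here is a simple nearest-centroid classifier, an arbitrary choice for illustration; as the article notes, any classifier could be substituted. All function names and the toy data are invented for this sketch.

```python
import random
from collections import Counter

def train_centroid(X, y):
    # Nearest-centroid base learner: store the mean feature vector per class.
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    return {c: [sum(col) / len(rows) for col in zip(*rows)]
            for c, rows in by_class.items()}

def predict_centroid(model, x):
    # Assign x to the class whose centroid is closest (squared Euclidean distance).
    return min(model, key=lambda c: sum((a - b) ** 2 for a, b in zip(model[c], x)))

def train_subspace_ensemble(X, y, d, L, seed=0):
    # Step 4: for each of the L classifiers, sample d of the D features
    # without replacement and train the base learner on that subspace.
    rng = random.Random(seed)
    D = len(X[0])
    ensemble = []
    for _ in range(L):
        feats = rng.sample(range(D), d)
        Xd = [[x[j] for j in feats] for x in X]
        ensemble.append((feats, train_centroid(Xd, y)))
    return ensemble

def predict_subspace_ensemble(ensemble, x):
    # Step 5: combine the individual outputs by majority voting.
    votes = [predict_centroid(model, [x[j] for j in feats])
             for feats, model in ensemble]
    return Counter(votes).most_common(1)[0][0]

# Toy data: class 0 clusters near the origin, class 1 near (5, 5, 5, 5).
X = [[0, 0, 1, 0], [1, 0, 0, 1], [0, 1, 0, 0],
     [5, 5, 4, 5], [4, 5, 5, 4], [5, 4, 5, 5]]
y = [0, 0, 0, 1, 1, 1]

ens = train_subspace_ensemble(X, y, d=2, L=5)
print(predict_subspace_ensemble(ens, [0, 1, 1, 0]))  # class 0
print(predict_subspace_ensemble(ens, [5, 5, 5, 4]))  # class 1
```

Here N = 6, D = 4, d = 2, and L = 5; each of the five classifiers sees only two of the four features, and the final label is the majority vote.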
The source of this article is Wikipedia, the free encyclopedia. The text of this article is licensed under the GFDL.