Probabilistic latent semantic analysis
Probabilistic latent semantic analysis (PLSA), also known as probabilistic latent semantic indexing (PLSI, especially in information retrieval circles), is a statistical technique for the analysis of two-mode and co-occurrence data. PLSA evolved from latent semantic analysis, adding a sounder probabilistic model. PLSA has applications in information retrieval and filtering, natural language processing, machine learning from text, and related areas. It was introduced in 1999 by Jan Puzicha and Thomas Hofmann, and it is related to non-negative matrix factorization.

Compared to standard latent semantic analysis, which stems from linear algebra and downsizes the occurrence tables (usually via a singular value decomposition), probabilistic latent semantic analysis is based on a mixture decomposition derived from a latent class model. This results in a more principled approach which has a solid foundation in statistics.

Considering observations in the form of co-occurrences (w, d) of words and documents, PLSA models the probability of each co-occurrence as a mixture of conditionally independent multinomial distributions:

    P(w, d) = \sum_c P(c) P(d|c) P(w|c) = P(d) \sum_c P(c|d) P(w|c)

where c denotes the latent class (topic). The first formulation is the symmetric formulation, where w and d are both generated from the latent class c in similar ways (using the conditional probabilities P(d|c) and P(w|c)), whereas the second formulation is the asymmetric formulation, where, for each document d, a latent class c is chosen conditionally to the document according to P(c|d), and a word w is then generated from that class according to P(w|c). Although we have used words and documents in this example, the co-occurrence of any couple of discrete variables may be modelled in exactly the same way.
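
In practice the asymmetric formulation is typically fitted with the expectation-maximization (EM) algorithm. The following is a minimal NumPy sketch under that assumption, not the reference implementation from the original papers; the function name plsa_em, the toy count matrix, and all hyperparameters are illustrative.

    import numpy as np

    def plsa_em(counts, n_topics, n_iter=100, seed=0):
        """Fit the asymmetric PLSA model P(w,d) = P(d) sum_c P(c|d) P(w|c)
        by expectation-maximization on a document-word count matrix."""
        rng = np.random.default_rng(seed)
        n_docs, n_words = counts.shape

        # Random initialization of the two conditional distributions.
        p_c_given_d = rng.random((n_docs, n_topics))
        p_c_given_d /= p_c_given_d.sum(axis=1, keepdims=True)
        p_w_given_c = rng.random((n_topics, n_words))
        p_w_given_c /= p_w_given_c.sum(axis=1, keepdims=True)

        for _ in range(n_iter):
            # E-step: posterior P(c|d,w) proportional to P(c|d) P(w|c),
            # stored as an array of shape (n_docs, n_words, n_topics).
            post = p_c_given_d[:, None, :] * p_w_given_c.T[None, :, :]
            post /= post.sum(axis=2, keepdims=True) + 1e-12

            # M-step: re-estimate both distributions from the expected
            # counts n(d,w) P(c|d,w), then renormalize each distribution.
            weighted = counts[:, :, None] * post
            p_c_given_d = weighted.sum(axis=1)      # sum over words
            p_c_given_d /= p_c_given_d.sum(axis=1, keepdims=True) + 1e-12
            p_w_given_c = weighted.sum(axis=0).T    # sum over documents
            p_w_given_c /= p_w_given_c.sum(axis=1, keepdims=True) + 1e-12

        return p_c_given_d, p_w_given_c

    # Toy usage: 4 documents over a 6-word vocabulary, 2 latent topics.
    counts = np.array([[3, 2, 1, 0, 0, 0],
                       [2, 3, 0, 1, 0, 0],
                       [0, 0, 1, 0, 3, 2],
                       [0, 1, 0, 0, 2, 3]], dtype=float)
    p_c_given_d, p_w_given_c = plsa_em(counts, n_topics=2)

Each EM iteration cannot decrease the training log-likelihood. Note that the returned P(c|d) table has one row per training document; this is the linear parameter growth criticized below.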

It is reported that the aspect model used in probabilistic latent semantic analysis has severe overfitting problems: the number of parameters grows linearly with the number of documents. In addition, although PLSA is a generative model of the documents in the collection it is estimated on, it is not a generative model of new documents.
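
Concretely (a back-of-the-envelope count, not taken from the original papers): with K topics, a vocabulary of V words and D training documents, the asymmetric parameterization stores K distributions P(w|c) over V words and D distributions P(c|d) over K topics, i.e.

    K (V - 1) + D (K - 1)

free parameters, which grows linearly in D. Moreover, the mixing weights P(c|d) are tied to the training documents, so there is no principled way to assign topic proportions to an unseen document.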

PLSA may be used in a discriminative setting, via Fisher kernels.
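
For reference, the Fisher kernel compares two items x and y through the gradients of the log-likelihood with respect to the model parameters \theta,

    K(x, y) = U_x^T I^{-1} U_y,  with  U_x = \nabla_\theta \log P(x | \theta),

where I is the Fisher information matrix; this is the general definition of the Fisher kernel rather than a PLSA-specific derivation.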

Evolutions of PLSA

  • Hierarchical extensions:
    • Asymmetric: MASHA ("Multinomial ASymmetric Hierarchical Analysis")
    • Symmetric: HPLSA ("Hierarchical Probabilistic Latent Semantic Analysis")

  • Generative models: The following models have been developed to address an often-criticized shortcoming of PLSA, namely that it is not a proper generative model for new documents.
    • Latent Dirichlet allocation - adds a Dirichlet prior on the per-document topic distribution

  • Higher-order data: Although this is rarely discussed in the scientific literature, PLSA extends naturally to higher-order data (three modes and higher), i.e. it can model co-occurrences over three or more variables. In the symmetric formulation above, this is done simply by adding conditional probability distributions for these additional variables, as sketched below. This is the probabilistic analogue to non-negative tensor factorisation.
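
As an illustration of the higher-order case (the third variable x is hypothetical), the symmetric formulation for three-mode data adds one conditional distribution per extra mode:

    P(w, d, x) = \sum_c P(c) P(w|c) P(d|c) P(x|c)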

See also

  • Compound term processing
  • Latent Dirichlet allocation
  • Latent semantic analysis
  • Pachinko allocation
  • Vector space model
