Latent semantic analysis
Latent semantic analysis (LSA) is a technique in natural language processing, in particular in vectorial semantics, for analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA assumes that words that are close in meaning will occur close together in text. A matrix containing word counts per paragraph (rows represent unique words and columns represent each paragraph) is constructed from a large piece of text, and a mathematical technique called singular value decomposition (SVD) is used to reduce the number of columns while preserving the similarity structure among rows. Words are then compared by taking the cosine of the angle between the two vectors formed by any two rows. Values close to 1 represent very similar words, while values close to 0 represent very dissimilar words.
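
The following minimal sketch in Python illustrates that pipeline end to end; the toy corpus, the choice of two retained dimensions, and the helper names are illustrative assumptions, not part of LSA itself.

import numpy as np

# Toy "paragraphs"; a real application would use a large piece of text.
paragraphs = [
    "the car is driven on the road",
    "the truck is driven on the highway",
    "a flower grows in the garden",
]

# Word-count matrix: rows are unique words, columns are paragraphs.
vocab = sorted({w for p in paragraphs for w in p.split()})
X = np.array([[p.split().count(w) for p in paragraphs] for w in vocab], dtype=float)

# Singular value decomposition, then keep only k dimensions (column reduction).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
word_vectors = U[:, :k] * s[:k]   # one k-dimensional vector per word

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

car, truck, flower = (vocab.index(w) for w in ("car", "truck", "flower"))
print(cosine(word_vectors[car], word_vectors[truck]))    # near 1: similar contexts
print(cosine(word_vectors[car], word_vectors[flower]))   # near 0: dissimilar contexts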

LSA was patented in 1988 (US Patent 4,839,853) by Scott Deerwester, Susan Dumais, George Furnas, Richard Harshman, Thomas Landauer, Karen Lochbaum and Lynn Streeter. In the context of its application to information retrieval, it is sometimes called Latent Semantic Indexing (LSI).

Occurrence matrix

LSA can use a term-document matrix which describes the occurrences of terms in documents; it is a sparse matrix whose rows correspond to terms and whose columns correspond to documents. A typical example of the weighting of the elements of the matrix is tf-idf (term frequency–inverse document frequency): the element of the matrix is proportional to the number of times the term appears in each document, with rare terms upweighted to reflect their relative importance.
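
As a small illustration, one common form of this weighting can be computed as follows; the log-based idf shown here is just one of several variants, and the toy counts are assumed.

import numpy as np

# Toy term-document count matrix: rows = terms, columns = documents.
counts = np.array([
    [2.0, 0.0, 1.0],   # a rare, informative term
    [0.0, 3.0, 0.0],   # another rare term
    [1.0, 1.0, 1.0],   # a term occurring everywhere (idf drives it toward 0)
])

tf = counts / counts.sum(axis=0, keepdims=True)   # term frequency within each document
df = (counts > 0).sum(axis=1)                     # number of documents containing each term
idf = np.log(counts.shape[1] / df)                # inverse document frequency
tfidf = tf * idf[:, None]                         # rare terms end up upweighted
print(tfidf)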

This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used.

Rank lowering

After the construction of the occurrence matrix, LSA finds a low-rank approximation to the term-document matrix. There could be various reasons for these approximations:
  • The original term-document matrix is presumed too large for the computing resources; in this case, the approximated low rank matrix is interpreted as an approximation (a "least and necessary evil").
  • The original term-document matrix is presumed noisy: for example, anecdotal instances of terms are to be eliminated. From this point of view, the approximated matrix is interpreted as a de-noisified matrix (a better matrix than the original).
  • The original term-document matrix is presumed overly sparse relative to the "true" term-document matrix. That is, the original matrix lists only the words actually in each document, whereas we might be interested in all words related to each document, generally a much larger set due to synonymy.


The consequence of the rank lowering is that some dimensions are combined and depend on more than one term:
{(car), (truck), (flower)} --> {(1.3452 * car + 0.2828 * truck), (flower)}


This mitigates the problem of identifying synonymy, as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings. It also mitigates the problem with polysemy, since components of polysemous words that point in the "right" direction are added to the components of words that share a similar meaning. Conversely, components that point in other directions tend to either simply cancel out, or, at worst, to be smaller than components in the directions corresponding to the intended sense.
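
A short sketch of the rank lowering itself, using a truncated SVD; the matrix and the value k = 2 are assumed for illustration, and the mixing coefficients in the example above are likewise only illustrative.

import numpy as np

def low_rank(X, k):
    # Best rank-k approximation of X in the Frobenius norm (Eckart-Young theorem).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Rows: car, truck, flower; columns: documents.
X = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
print(low_rank(X, 2))   # the car and truck rows collapse onto a single shared direction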

Derivation

Let $X$ be a matrix where element $(i, j)$ describes the occurrence of term $i$ in document $j$ (this can be, for example, the frequency). $X$ will look like this:

$$
X =
\begin{bmatrix}
x_{1,1} & \dots & x_{1,n} \\
\vdots & \ddots & \vdots \\
x_{m,1} & \dots & x_{m,n}
\end{bmatrix}
$$
Now a row in this matrix will be a vector corresponding to a term, giving its relation to each document:

$$\mathbf{t}_i^T = \begin{bmatrix} x_{i,1} & \dots & x_{i,n} \end{bmatrix}$$
Likewise, a column in this matrix will be a vector corresponding to a document, giving its relation to each term:

$$\mathbf{d}_j = \begin{bmatrix} x_{1,j} \\ \vdots \\ x_{m,j} \end{bmatrix}$$
Now the dot product $\mathbf{t}_i^T \mathbf{t}_p$ between two term vectors gives the correlation between the terms over the documents. The matrix product $X X^T$ contains all these dot products. Element $(i, p)$ (which is equal to element $(p, i)$) contains the dot product $\mathbf{t}_i^T \mathbf{t}_p$ ($= \mathbf{t}_p^T \mathbf{t}_i$). Likewise, the matrix $X^T X$ contains the dot products between all the document vectors, giving their correlation over the terms: $\mathbf{d}_j^T \mathbf{d}_q = \mathbf{d}_q^T \mathbf{d}_j$.

Now assume that there exists a decomposition of $X$ such that $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix. This is called a singular value decomposition (SVD):

$$X = U \Sigma V^T$$
The matrix products giving us the term and document correlations then become

$$
\begin{aligned}
X X^T &= (U \Sigma V^T)(U \Sigma V^T)^T = U \Sigma V^T V \Sigma^T U^T = U \Sigma \Sigma^T U^T \\
X^T X &= (U \Sigma V^T)^T (U \Sigma V^T) = V \Sigma^T U^T U \Sigma V^T = V \Sigma^T \Sigma V^T
\end{aligned}
$$

Since $\Sigma \Sigma^T$ and $\Sigma^T \Sigma$ are diagonal, we see that $U$ must contain the eigenvectors of $X X^T$, while $V$ must contain the eigenvectors of $X^T X$. Both products have the same non-zero eigenvalues, given by the non-zero entries of $\Sigma \Sigma^T$, or equally, by the non-zero entries of $\Sigma^T \Sigma$. Now the decomposition looks like this:

$$
X = U \Sigma V^T =
\begin{bmatrix} \mathbf{u}_1 & \dots & \mathbf{u}_l \end{bmatrix}
\begin{bmatrix} \sigma_1 & & 0 \\ & \ddots & \\ 0 & & \sigma_l \end{bmatrix}
\begin{bmatrix} \mathbf{v}_1^T \\ \vdots \\ \mathbf{v}_l^T \end{bmatrix}
$$
The values $\sigma_1, \dots, \sigma_l$ are called the singular values, and $\mathbf{u}_1, \dots, \mathbf{u}_l$ and $\mathbf{v}_1, \dots, \mathbf{v}_l$ the left and right singular vectors.
Notice that the only part of $U$ that contributes to $\mathbf{t}_i$ is the $i$-th row.
Let this row vector be called $\hat{\mathbf{t}}_i$.
Likewise, the only part of $V^T$ that contributes to $\mathbf{d}_j$ is the $j$-th column, $\hat{\mathbf{d}}_j$.
These are not the eigenvectors, but they depend on all the eigenvectors.

It turns out that when you select the $k$ largest singular values, together with their corresponding singular vectors from $U$ and $V$, you get the rank-$k$ approximation to $X$ with the smallest error in the Frobenius norm. More importantly, we can now treat the term and document vectors as a "concept space". The vector $\hat{\mathbf{t}}_i$ then has $k$ entries, each giving the occurrence of term $i$ in one of the $k$ concepts. Likewise, the vector $\hat{\mathbf{d}}_j$ gives the relation between document $j$ and each concept. We write this approximation as

$$X_k = U_k \Sigma_k V_k^T$$
You can now do the following:
  • See how related documents $j$ and $q$ are in the concept space by comparing the vectors $\Sigma_k \hat{\mathbf{d}}_j$ and $\Sigma_k \hat{\mathbf{d}}_q$ (typically by cosine similarity). This gives you a clustering of the documents.
  • Compare terms $i$ and $p$ by comparing the vectors $\Sigma_k \hat{\mathbf{t}}_i$ and $\Sigma_k \hat{\mathbf{t}}_p$, giving you a clustering of the terms in the concept space.
  • Given a query, view it as a mini document, and compare it to your documents in the concept space.


To do the latter, you must first translate your query into the concept space. It is then intuitive that you must use the same transformation that you use on your documents:

$$\hat{\mathbf{d}}_j = \Sigma_k^{-1} U_k^T \mathbf{d}_j$$

This means that if you have a query vector $\mathbf{q}$, you must do the translation $\hat{\mathbf{q}} = \Sigma_k^{-1} U_k^T \mathbf{q}$ before you compare it with the document vectors in the concept space. You can do the same for pseudo term vectors:

$$\hat{\mathbf{t}}_i = \Sigma_k^{-1} V_k^T \mathbf{t}_i$$
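
A compact numerical sketch of this derivation is given below; the toy matrix, the choice k = 2, and the variable names are assumptions made for illustration.

import numpy as np

# Term-document matrix X: rows = terms, columns = documents.
X = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# U holds eigenvectors of X X^T and V holds eigenvectors of X^T X,
# with eigenvalues equal to the squared singular values.
assert np.allclose(X @ X.T @ U, U @ np.diag(s ** 2))
assert np.allclose(X.T @ X @ Vt.T, Vt.T @ np.diag(s ** 2))

# Rank-k truncation X_k = U_k Sigma_k V_k^T.
k = 2
U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Compare documents in the concept space via Sigma_k d_hat_j, as in the list above.
d_hat = Vt_k                        # column j is document j in the concept space
print(cosine(S_k @ d_hat[:, 0], S_k @ d_hat[:, 1]))

# Fold a query (a "mini document" of raw term counts) into the concept space:
# q_hat = Sigma_k^{-1} U_k^T q, then compare it with the document vectors.
q = np.array([1.0, 0.0, 1.0, 0.0])                 # hypothetical query over the four terms
q_hat = np.linalg.inv(S_k) @ U_k.T @ q
print([cosine(S_k @ q_hat, S_k @ d_hat[:, j]) for j in range(X.shape[1])])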



Applications

The new concept space typically can be used to:
  • Compare the documents in the concept space (data clustering, document classification).
  • Find similar documents across languages, after analyzing a base set of translated documents (cross-language retrieval).
  • Find relations between terms (synonymy and polysemy).
  • Given a query of terms, translate it into the concept space, and find matching documents (information retrieval).
  • Find the best similarity between small groups of terms, in a semantic way (i.e. in the context of a knowledge corpus), as for example in models for answering multiple-choice questions (MCQ).


Synonymy and polysemy are fundamental problems in natural language processing:
  • Synonymy is the phenomenon where different words describe the same idea. Thus, a query in a search engine may fail to retrieve a relevant document that does not contain the words which appeared in the query. For example, a search for "doctors" may not return a document containing the word "physicians", even though the words have the same meaning; see the sketch after this list.
  • Polysemy is the phenomenon where the same word has multiple meanings. So a search may retrieve irrelevant documents containing the desired words in the wrong meaning. For example, a botanist and a computer scientist looking for the word "tree" probably desire different sets of documents.
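
The sketch below illustrates the synonymy point: a raw keyword match for "doctor" misses the document that only mentions "physician", while the two end up close in LSA's reduced concept space. The toy corpus and the choice k = 2 are assumptions for illustration.

import numpy as np

vocab = ["doctor", "physician", "hospital", "patient", "flower", "garden"]
docs = [["doctor", "hospital", "patient"],
        ["physician", "hospital", "patient"],
        ["flower", "garden"]]

# Term-document count matrix.
X = np.array([[d.count(w) for d in docs] for w in vocab], dtype=float)

query = np.array([1.0, 0, 0, 0, 0, 0])   # the single term "doctor"
print(X.T @ query)                       # raw keyword match: the "physician" document scores 0

# Rank-2 LSA concept space and query fold-in (q_hat = Sigma_k^{-1} U_k^T q).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]
q_hat = np.linalg.inv(S_k) @ U_k.T @ query

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# In the concept space the "physician" document now matches the "doctor" query.
print([cosine(S_k @ q_hat, S_k @ Vt_k[:, j]) for j in range(len(docs))])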

Applications in human memory

The use of Latent Semantic Analysis has been prevalent in the study of human memory, especially in areas of free recall and memory search. Howard and Kahana found a positive correlation between the semantic similarity of two words (as measured by LSA) and the probability that the words would be recalled one after another in free recall tasks using study lists of random common nouns. They also noted that in these situations, the inter-response time between the similar words was much quicker than between dissimilar words. These findings are referred to as the Semantic Proximity Effect.

Expanding on these semantic proximity effects, in 2006 Zaromb et al. found that when participants made mistakes in recalling studied items, these mistakes tended to be items that were more semantically related to the desired item and found in a previously studied list. These prior-list intrusions, as they have come to be called, seem to compete with items on the current list for recall. Zaromb et al. found that the PLIs had larger cosine values to the just-recalled word in LSA space than the correct item did.

Another model, termed Word Association Spaces (WAS), is also used in memory studies. It was developed by Douglas Nelson at the University of South Florida by collecting free association data from a series of experiments, and it includes measures of word relatedness for over 72,000 distinct word pairs.

Implementation

The SVD is typically computed using large matrix methods (for example, Lanczos methods) but may also be computed incrementally and with greatly reduced resources via a neural network-like approach, which does not require the large, full-rank matrix to be held in memory.
A fast, incremental, low-memory, large-matrix SVD algorithm has recently been developed. MATLAB and Python implementations of these fast algorithms are available. Unlike Gorrell and Webb's (2005) stochastic approximation, Brand's algorithm (2003) provides an exact solution.
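
As an illustration of this kind of incremental computation, the sketch below uses the Gensim package listed under Implementations; the toy corpus is assumed, and the API shown (Dictionary, doc2bow, LsiModel, add_documents) is Gensim's rather than anything prescribed by LSA itself.

from gensim import corpora, models

texts = [["car", "driven", "road"],
         ["truck", "driven", "highway"],
         ["flower", "garden"]]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

# Build a 2-dimensional LSA (LSI) model without forming a dense term-document matrix.
lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)

# Project an unseen document into the existing concept space...
print(lsi[dictionary.doc2bow(["car", "highway"])])

# ...or update the factorization itself with new documents, incrementally.
lsi.add_documents([dictionary.doc2bow(["truck", "road"])])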

Limitations

Some of LSA's drawbacks include:
  • The resulting dimensions might be difficult to interpret. For instance, in
{(car), (truck), (flower)} --> {(1.3452 * car + 0.2828 * truck), (flower)}
the (1.3452 * car + 0.2828 * truck) component could be interpreted as "vehicle". However, it is very likely that cases close to
{(car), (bottle), (flower)} --> {(1.3452 * car + 0.2828 * bottle), (flower)}
will occur. This leads to results which can be justified on the mathematical level, but have no interpretable meaning in natural language.

  • LSA cannot capture polysemy (i.e., multiple meanings of a word). Each occurrence of a word is treated as having the same meaning because the word is represented as a single point in space. For example, the occurrence of "chair" in a document containing "The Chair of the Board" and in a separate document containing "the chair maker" is treated identically. The result is that the vector representation becomes an average of all the word's different meanings in the corpus, which can make comparisons difficult. However, the effect is often lessened because words tend to have a predominant sense throughout a corpus (i.e. not all meanings are equally likely).

  • Limitations of the bag-of-words model (BOW), in which a text is represented as an unordered collection of words.

  • The probabilistic model of LSA does not match observed data: LSA assumes that words and documents form a joint Gaussian model (ergodic hypothesis), while a Poisson distribution has been observed. Thus, a newer alternative is probabilistic latent semantic analysis, based on a multinomial model, which is reported to give better results than standard LSA.

Commercial Applications

LSA has been used to assist in performing prior art searches for patents.

See also

  • Compound term processing
  • Latent Dirichlet allocation
  • Latent semantic mapping
  • Latent Semantic Structure Indexing
  • Principal components analysis
  • Probabilistic latent semantic analysis
  • Spamdexing
  • Vectorial semantics
  • Coh-Metrix



Talks and Demonstrations

  • LSA Overview, talk by Prof. Thomas Hofmann describing LSA, its applications in information retrieval, and its connections to probabilistic latent semantic analysis.
  • Complete LSA demo in C# for Windows. The demo includes enumeration of text files, stop-word filtering, stemming, construction of the document-term matrix, and SVD. The project can be used as a template. A C# version of SVDLIBC is used.

Implementations

Due to its cross-domain applications in Information Retrieval, Natural Language Processing (NLP), Cognitive Science and Computational Linguistics, LSA has been implemented to support many different kinds of applications.
  • Sense Clusters, an Information Retrieval-oriented perl implementation of LSA
  • S-Space Package, a Computational Linguistics and Cognitive Science-oriented Java implementation of LSA
  • Semantic Vectors applies Random Projection, LSA, and Reflective Random Indexing to Lucene term-document matrices
  • Infomap Project, an NLP-oriented C implementation of LSA (superseded by semanticvectors project)
  • Text to Matrix Generator, A MATLAB Toolbox for generating term-document matrices from text collections, with support for LSA
  • Gensim contains a fast, online Python implementation of LSA for matrices larger than RAM.
The source of this article is wikipedia, the free encyclopedia.  The text of this article is licensed under the GFDL.
 