Storage (memory)

Storage in human memory is one of three core processes of memory, along with recall and encoding. It refers to the retention of information, achieved through the encoding process, in the brain for a prolonged period of time until it is accessed by the recall process. Modern memory psychology differentiates between two distinct types of memory storage: short-term memory and long-term memory. In addition, different memory models have suggested variations of short-term and long-term memory to account for the different ways of storing memory.

Short Term Memory

Short-term memory refers to the ability to hold information from the immediate past for a short duration of time. According to the Atkinson-Shiffrin model of memory, during encoding, perceived information first resides in short-term memory before moving to long-term memory through rehearsal. Memory in the short-term store is readily accessible, but the memory itself is fragile, and its duration and capacity are much smaller than those of the more permanent, seemingly indefinite long-term store. Baddeley suggested that memory held in short-term memory is continuously subject to a decay process, which in the absence of rehearsal eventually leads to forgetting. George A. Miller suggested in his paper that the capacity of short-term memory is approximately seven items, plus or minus two, but modern research shows that this figure is subject to considerable variability, depending among other things on the phonological properties of the stored items.
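Baddeley's decay account can be illustrated with a minimal sketch; the exponential form, the time constant, and the rehearsal-as-reset behavior are illustrative assumptions, not fitted values:

```python
import math

def trace_strength(t, last_rehearsal, tau=5.0):
    """Exponentially decaying strength of a short-term trace.

    t: seconds since the item was encoded
    last_rehearsal: time (seconds) at which the item was last rehearsed
    tau: assumed decay time constant (hypothetical value)
    """
    # Rehearsal is modeled as resetting the trace to full strength.
    return math.exp(-(t - last_rehearsal) / tau)

# Without rehearsal the trace fades steadily toward forgetting...
unrehearsed = [trace_strength(t, last_rehearsal=0) for t in range(0, 16, 5)]
# ...while a rehearsal at t=10 restores the trace.
rehearsed = trace_strength(12, last_rehearsal=10)
```
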

Long term memory

In contrast to short-term memory, long-term memory refers to the ability to hold information for a prolonged period of time. The Atkinson-Shiffrin model of memory (Atkinson 1968) suggests that items stored in short-term memory move to long-term memory through repeated practice and rehearsal. Miller (1956), while proposing a limited capacity for short-term memory, suggested that the capacity of long-term memory is much greater, if not limitless; this has led to the development of models that assume long-term memory is capable of housing an ever-growing matrix of stored memories. The duration of long-term memory, on the other hand, is not permanent; unless a memory is occasionally recalled, which according to the dual-store memory search model strengthens it, the memory may fail to be recalled on later occasions.


Models of Memory Storage

A variety of memory models have been proposed to account for different types of recall processes, including cued recall, free recall, and serial recall. To explain the recall process, however, a memory model must identify how an encoded memory can reside in memory storage for a prolonged period of time until it is accessed again during recall. Not all models use the terminology of short-term and long-term memory to explain memory storage; the dual-store theory and the refined version of the Atkinson-Shiffrin model of memory (Atkinson 1968) use both short-term and long-term memory stores, but others do not.


Multi-Trace Distributed Memory Model

The multi-trace distributed memory model suggests that each memory being encoded is converted to a vector of values, with each scalar quantity of the vector representing a different attribute of the item to be encoded. Such a notion was first suggested by the early theories of Hooke (1969) and Semon (1923). A single memory is distributed across multiple attributes, or features, so that each attribute represents one aspect of the memory being encoded. The resulting vector of values is then added to a memory array, or matrix, composed of the different traces, or vectors, of memory.


Therefore, every time a new memory is encoded, it is converted to a trace: a vector of scalar quantities representing a variety of attributes. This trace is then added to the pre-existing and ever-growing memory matrix composed of multiple different traces; hence the name of the model.
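A minimal sketch of this encoding scheme, with arbitrary attribute values and dimensions:

```python
import numpy as np

# Each encoded memory is a trace: a vector of scalar attribute values
# (the attribute meanings and numbers here are purely illustrative).
trace_a = np.array([0.8, -0.2, 0.5, 0.1])
trace_b = np.array([0.1, 0.9, -0.4, 0.3])

memory_matrix = np.empty((0, 4))   # the ever-growing memory matrix
for trace in (trace_a, trace_b):
    # Encoding a new memory appends its trace as a new row of the matrix.
    memory_matrix = np.vstack([memory_matrix, trace])
```
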


Once memory traces corresponding to a specific memory are stored in the matrix, retrieving that memory for the recall process requires cueing the memory matrix with a specific probe, which is used to calculate the similarity between the test vector and each of the vectors stored in the memory matrix. Because the memory matrix constantly grows as new traces are added, one would have to perform a parallel search through all the traces within the matrix to calculate these similarities, whose result can be used to perform either associative recognition or, with a probabilistic choice rule, cued recall.
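Probing the matrix can be sketched as computing a similarity for every stored trace at once; cosine similarity and a softmax-style choice rule are used here as illustrative assumptions, since the exact functions vary between model variants:

```python
import numpy as np

def probe(memory_matrix, cue):
    """Compare a cue vector against every stored trace in parallel."""
    norms = np.linalg.norm(memory_matrix, axis=1) * np.linalg.norm(cue)
    sims = memory_matrix @ cue / norms   # cosine similarity per trace
    # Associative recognition would compare these similarities to a criterion;
    # cued recall applies a probabilistic choice rule over them (softmax assumed).
    p_choice = np.exp(sims) / np.exp(sims).sum()
    return sims, p_choice

traces = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.7, 0.7, 0.0]])
sims, p = probe(traces, np.array([1.0, 0.1, 0.0]))
```
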


While it has been said that human memory seems capable of storing a great amount of information, to the extent that some have thought it infinite, the presence of such an ever-growing matrix within human memory sounds implausible. In addition, the model suggests that the recall process requires a parallel search through every single trace residing within the ever-growing matrix, which raises doubt about whether such computations can be done in a short amount of time. Such doubts, however, have been challenged by the findings of Gallistel and King, who present evidence of the brain's enormous computational abilities that could support such parallel processing.

Neural Network Models

The multi-trace model has two key limitations: first, the notion of an ever-growing matrix within human memory sounds implausible; second, computational searches for similarity against the millions of traces that would be present within the memory matrix sound far beyond the scope of the human recall process. The neural network model overcomes these limitations while maintaining the useful features of the multi-trace model.


The neural network model assumes that 'neurons' form a highly interconnected network, where each neuron is characterized by an activation value and the connection between two neurons is characterized by a weight value. Interaction between neurons is governed by the McCulloch-Pitts dynamical rule, and the change of weights and connections between neurons resulting from learning is represented by the Hebbian learning rule.


Anderson shows that the combination of the Hebbian learning rule and the McCulloch-Pitts dynamical rule allows a network to generate a weight matrix that can store associations between different memory patterns; such a matrix is the form of memory storage in the neural network model. The major difference from the multiple-traces hypothesis is that, while a new memory extends the existing matrix in the multiple-traces hypothesis, the weight matrix of the neural network model does not extend; rather, its weights are said to be updated with the introduction of each new association between neurons.


Using the weight matrix and the learning/dynamical rule, neurons cued with one value can retrieve a different value that is ideally a close approximation of the desired target memory vector.
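A minimal sketch of such a Hebbian associator, with arbitrary patterns: the weight matrix is built from outer products of cue-target pairs, and cueing multiplies the cue through the matrix. With orthonormal cues, as here, retrieval happens to be exact; correlated cues would yield only an approximation.

```python
import numpy as np

# Two arbitrary pattern pairs to associate: cue f_i -> target g_i.
f1, g1 = np.array([1., 0., 0.]), np.array([0., 1.])
f2, g2 = np.array([0., 1., 0.]), np.array([1., 0.])

# Hebbian learning: the weight matrix is a sum of outer products.
# A new association updates W in place; the matrix never grows.
W = np.outer(g1, f1) + np.outer(g2, f2)

# Cueing with f1 retrieves (an approximation of) the target g1.
retrieved = W @ f1
```
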

Main Article: Hopfield Net



Because Anderson's weight matrix will only retrieve an approximation of the target item when cued, a modified version of the model was sought that could recall the exact target memory when cued. The Hopfield net is currently the simplest and most popular neural network model of associative memory; the model allows the recall of a clear target vector when cued with a partial or 'noisy' version of that vector.


The weight matrix of the Hopfield net, which stores the memory, closely resembles the weight matrix proposed by Anderson. Again, when a new association is introduced, the weight matrix is said to be 'updated' to accommodate it; the association is stored until the matrix is cued by a different vector.
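A minimal Hopfield-net sketch with two arbitrary stored patterns: weights are a sum of outer products with the self-connections zeroed, and recall repeats a binary threshold update. Synchronous updates are used here for brevity; asynchronous updates are what guarantee convergence to a local minimum.

```python
import numpy as np

# Two arbitrary (here orthogonal) binary patterns to store.
patterns = np.array([[ 1,  1,  1,  1, -1, -1, -1, -1],
                     [ 1, -1,  1, -1,  1, -1,  1, -1]])

# Hebbian storage: sum of outer products, with no self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(W, cue, steps=10):
    """Iterate the binary threshold update from a (possibly noisy) cue."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

noisy = patterns[0].copy()
noisy[0] = -noisy[0]          # flip one bit of the stored pattern
restored = recall(W, noisy)   # settles back to the clean stored pattern
```
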


Dual-Store Memory Search Model


Main Article: Search of Associative Memory

First developed by Atkinson and Shiffrin (1968), and refined by others, including Raaijmakers and Shiffrin, the dual-store memory search model, whose modern version is referred to as SAM, or Search of Associative Memory, remains one of the most influential computational models of memory [8]. The model utilizes both short-term memory, termed the short-term store (STS), and long-term memory, termed the long-term store (LTS) or episodic matrix, in its mechanism.


When an item is first encoded, it is introduced into the short-term store. While the item stays in the short-term store, its vector representation in the long-term store goes through a variety of associations. An item introduced into the short-term store undergoes three different types of association: autoassociation, the self-association in the long-term store; heteroassociation, the inter-item association in the long-term store; and context association, the association between the item and its encoded context. For each item in the short-term store, the longer it resides there, the greater its association with itself, with other items that co-reside in the short-term store, and with its encoded context.


The size of the short-term store is defined by a parameter, r. When an item is introduced into the short-term store and the store is already occupied by the maximal number of items, one item will probabilistically drop out of the short-term store.

As items co-reside in the short-term store, their associations are constantly updated in the long-term store matrix. Since the strength of association between two items depends on the amount of time the two items spend together within the short-term store, the model easily explains the contiguity effect, the prominent finding that recall of items studied close to a just-recalled item is favored: the contiguous item would have formed the greatest associative strength with the recalled item in the long-term store.
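The buffer mechanism and the resulting contiguity gradient can be sketched as follows; the buffer size, drop-out rule, and association increment are hypothetical choices rather than fitted SAM parameters:

```python
import random
import itertools

random.seed(0)
r = 4                      # capacity of the short-term store (parameter r)
items = list(range(10))    # a study list of 10 items
n = len(items)
buffer = []
assoc = [[0.0] * n for _ in range(n)]   # episodic heteroassociation strengths

for item in items:
    if len(buffer) == r:
        # A full buffer probabilistically displaces one resident item.
        buffer.remove(random.choice(buffer))
    buffer.append(item)
    # Each time-step of co-residence strengthens pairwise associations,
    # so list neighbors, which co-reside longest, end up most strongly linked.
    for a, b in itertools.combinations(buffer, 2):
        assoc[a][b] += 1.0
        assoc[b][a] += 1.0
```
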


Furthermore, the primacy effect, the finding in memory recall paradigms that the first few items have a greater chance of being recalled than the others, can be explained by early-list items residing for a longer time in the short-term store. Although older items have a greater chance of dropping out of the STS, drop-out is probabilistic, so an early item may stay for an extended duration; an item that manages to stay in the STS for an extended time will have formed stronger autoassociations, heteroassociations, and context associations than the others, ultimately leading to greater associative strength and a higher chance of being recalled.


The recency effect in recall experiments, where the last few items are recalled exceptionally well, is explained by the short-term store. When the study of a given list has finished, what resides in the short-term store are the last few items that were introduced; because the short-term store is readily accessible, such items are recalled before any item stored within the long-term store. This also explains the fragile nature of the recency effect, which is removed by even the simplest distractors: the last items would not have had enough time to form any meaningful associations within the long-term store, so if they are displaced from the short-term store by distractors, the probability of the last items being recalled would be expected to be lower than even that of the pre-recency items in the middle of the list.


The dual-store SAM model also utilizes a further memory store, which itself can be classified as a type of long-term store: the semantic matrix. The long-term store in SAM represents episodic memory, which deals only with new associations formed during the study of an experimental list; pre-existing associations between items of the list, then, need to be represented on a different matrix, the semantic matrix. The semantic matrix remains a separate source of information that is not modified by the episodic associations formed during the experiment.


Thus, two memory stores, the short-term store and the long-term store, are utilized in the SAM model. In the recall process, items residing in the short-term store are recalled first, followed by items residing in the long-term store, with the probability of recall proportional to the strength of association present within the long-term store. Another memory store, the semantic matrix, is used to explain the semantic effects associated with memory recall.
The source of this article is wikipedia, the free encyclopedia.  The text of this article is licensed under the GFDL.
 