Perceptual Computing
Perceptual Computing is an application of Zadeh's theory of computing with words (CWW) to the field of assisting people to make subjective judgments.

Perceptual Computer

The Perceptual Computer (Per-C), an instantiation of perceptual computing, has the architecture that is depicted in Fig. 1 [2]-[6]. It consists of three components: encoder, CWW engine and decoder. Perceptions (words) activate the Per-C and are also its output (along with data); so, it is possible for a human to interact with the Per-C using just a vocabulary.
A vocabulary is application (context) dependent, and must be large enough to let the end-user interact with the Per-C in a user-friendly manner. The encoder transforms words into fuzzy sets (FSs) and leads to a codebook: words with their associated FS models. The outputs of the encoder activate a Computing With Words (CWW) engine, whose output is one or more other FSs, which are then mapped by the decoder into a recommendation (subjective judgment) with supporting data. The recommendation may be in the form of a word, a group of similar words, a rank or a class.
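The encoder's codebook can be illustrated with a minimal sketch. Note that the actual Per-C uses interval type-2 FS models obtained from interval end-point data; the type-1 trapezoids, words and parameter values below are hypothetical, chosen only to show the word-to-FS mapping:

```python
def trap_mf(x, a, b, c, d):
    """Trapezoidal membership function: support [a, d], core [b, c]."""
    if b <= x <= c:
        return 1.0
    if x < a or x > d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

# Hypothetical codebook on a 0-10 scale: word -> trapezoid parameters.
codebook = {
    "low":    (0.0, 0.0, 2.0, 4.0),
    "medium": (3.0, 4.5, 5.5, 7.0),
    "high":   (6.0, 8.0, 10.0, 10.0),
}

def encode(word):
    """Encoder: map a word to its fuzzy-set model from the codebook."""
    a, b, c, d = codebook[word]
    return lambda x: trap_mf(x, a, b, c, d)
```

For example, encode("low") returns a membership function that evaluates to 1.0 at 1.0 and to 0.5 at 3.0, halfway down the trapezoid's falling edge.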

Although many details are needed to implement the Per-C's three components (encoder, decoder and CWW engine), and they are covered in [5], the methodology only comes into focus when the Per-C is applied to specific applications. Stepping back from those details, the methodology of perceptual computing is:
  1. Focus on an application (A).
  2. Establish a vocabulary (or vocabularies) for A.
  3. Collect interval end-point data from a group of subjects (representative of the subjects who will use the Per-C) for all of the words in the vocabulary.
  4. Map the collected word data into word-FOUs by using the Interval Approach [1], [5, Ch. 3]. The result of doing this is the codebook (or codebooks) for A, and completes the design of the encoder of the Per-C.
  5. Choose an appropriate CWW engine for A. It will map IT2 FSs into one or more IT2 FSs. Examples of CWW engines are: IF-THEN rules [5, Ch. 6] and Linguistic Weighted Averages [6], [5, Ch. 5].
  6. If an existing CWW engine is available for A, then use its available mathematics to compute its output(s). Otherwise, develop such mathematics for the new kind of CWW engine. The new CWW engine should be constrained so that its output(s) resemble the FOUs in the codebook(s) for A.
  7. Map the IT2 FS outputs from the CWW engine into a recommendation at the output of the decoder. If the recommendation is a word, rank or class, then use existing mathematics to accomplish this mapping [5, Ch. 4]. Otherwise, develop such mathematics for the new kind of decoder.
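The decoder's word-mapping in step 7 can be sketched by choosing the codebook word whose FS is most similar to the CWW-engine output, as measured by Jaccard similarity [5, Ch. 4]. The sketch below is simplified to type-1 FSs on a sampled domain (the actual decoder compares IT2 FS FOUs); the triangular words and parameters are hypothetical:

```python
def tri(a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)
    return mu

def jaccard(mu_a, mu_b, xs):
    """Jaccard similarity of two fuzzy sets sampled on grid xs."""
    num = sum(min(mu_a(x), mu_b(x)) for x in xs)
    den = sum(max(mu_a(x), mu_b(x)) for x in xs)
    return num / den if den else 0.0

def decode_to_word(mu_out, codebook, xs):
    """Decoder: return the codebook word most similar to the engine output."""
    return max(codebook, key=lambda w: jaccard(mu_out, codebook[w], xs))

xs = [i / 10 for i in range(101)]          # grid on [0, 10]
cb = {"bad": tri(0, 2, 4), "good": tri(3, 5, 7), "excellent": tri(6, 8, 10)}
```

With this hypothetical codebook, an engine output such as tri(6.2, 8.1, 9.9) decodes to "excellent", since it overlaps that word's FS far more than the others.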

Applications of Per-C

To date, a Per-C has been implemented for the following four applications: (1) investment decision-making, (2) social judgment making, (3) distributed decision making, and (4) hierarchical and distributed decision-making. A specific example of the fourth application is the so-called Journal Publication Judgment Advisor [5, Ch. 10], in which, for the first time, only words are used at every level of the following hierarchical and distributed decision making process:


n reviewers have to provide a subjective recommendation about a journal article that has been sent to them by the Associate Editor, who then has to aggregate the independent recommendations into a final recommendation that is sent to the Editor-in-Chief of the journal. Because it is very problematic to ask reviewers to provide numerical scores for paper-evaluation sub-categories (the two major categories are Technical Merit and Presentation), such as importance, content, depth, style, organization, clarity, references, etc., each reviewer is only asked to provide a linguistic score for each of these categories. Reviewers are not asked for an overall recommendation about the paper, because in the past it has been quite common for reviewers who provide the same numerical scores for such categories to give very different publishing recommendations. By leaving the specific recommendation to the Associate Editor, one can hope to eliminate such inconsistencies.


Words are aggregated to reflect each reviewer's recommendation, as well as the expertise of each reviewer about the paper's subject matter, using a linguistic weighted average. Although the Journal Publication Judgment Advisor uses reviewers and an associate editor, the word "reviewer" could be replaced by judge, expert, low-level manager, commander, referee, etc., and the term "associate editor" could be replaced by control center, command center, higher-level manager, etc. So, this architecture is potentially applicable to many other situations.
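At each alpha-cut, a linguistic weighted average reduces to an interval weighted average: scores and weights become intervals, and one seeks the smallest and largest values of the weighted average over those intervals. A brute-force sketch is shown below (the actual LWA [6], [5, Ch. 5] operates on IT2 FS alpha-cuts and uses the more efficient Karnik-Mendel algorithms; the interval inputs in the example are hypothetical). Because the weighted average is monotone in each score and in each weight separately, the extrema occur at interval endpoints, so endpoint enumeration is exact for small n:

```python
from itertools import product

def interval_weighted_average(x_intervals, w_intervals):
    """Return [y_L, y_R], the range of sum(x_i*w_i)/sum(w_i) as each
    score x_i and weight w_i varies over its interval.
    Exact via endpoint enumeration; cost is 2^n, fine for small n."""
    xs_lo = [a for a, _ in x_intervals]   # minimum uses lower score endpoints
    xs_hi = [b for _, b in x_intervals]   # maximum uses upper score endpoints
    y_L = min(sum(x * w for x, w in zip(xs_lo, ws)) / sum(ws)
              for ws in product(*w_intervals) if sum(ws) > 0)
    y_R = max(sum(x * w for x, w in zip(xs_hi, ws)) / sum(ws)
              for ws in product(*w_intervals) if sum(ws) > 0)
    return y_L, y_R
```

For instance, two hypothetical reviewer scores in [1, 2] and [3, 4] with crisp unit weights average to the interval [2, 3]; uncertain weights widen the result further.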

In summary, the Per-C (whose development has taken more than a decade) is the first complete implementation of Zadeh’s CWW paradigm, as applied to assisting people to make subjective judgments.

Software

Freeware MATLAB implementations of Per-C are available at: http://sipi.usc.edu/~mendel/software.

See also

  • Computing with words and perceptions
  • Computational intelligence
  • Expert system
  • Fuzzy control system
  • Fuzzy logic
  • Fuzzy set
  • Granular computing
  • Rough set
  • Soft computing
  • Type-2 fuzzy sets and systems
  • Vagueness

The source of this article is Wikipedia, the free encyclopedia. The text of this article is licensed under the GFDL.