Fluid Concepts and Creative Analogies
Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought is a 1995 book by Douglas Hofstadter and other members of the Fluid Analogies Research Group exploring the mechanisms of intelligence through computer modeling. It contends that the notions of analogy and fluidity are fundamental both to explaining how the human mind solves problems and to creating computer programs that show intelligent behavior. It analyzes several computer programs that members of the group have created over the years to solve problems that require intelligence.

The book was the first to be sold on Amazon.com, in July 1995.

Origin of the book

The book is a collection of revised versions of previously published articles, each preceded by an introduction by Hofstadter.
They describe the scientific work by him and his collaborators in the 1980s and 1990s.
The project started in the late 1970s at Indiana University, where Hofstadter collaborated with Marsha Meredith and Gray Clossman.
In 1983 he took a sabbatical year at MIT, working in Marvin Minsky's Artificial Intelligence Lab.
There he met and collaborated with Melanie Mitchell, who subsequently did her Ph.D. with him, David Rogers, and Marek Lugowski.
Subsequently Hofstadter moved to the University of Michigan, where the FARG (Fluid Analogies Research Group) was founded.
Eventually he returned to Indiana University in 1988, continuing the FARG research there.
The book was written during a sabbatical year at the Istituto per la Ricerca Scientifica e Tecnologica in Trento, Italy.

Chapters and authors

  • To Seek Whence Cometh a Sequence (D.H.)
  • The Architecture of Jumbo (D.H.)
  • Numbo: A Study in Cognition and Recognition (Daniel Defays)
  • High-level Perception, Representation, and Analogy: A Critique of Artificial-intelligence Methodology (David Chalmers, Robert French, and D.H.)
  • The Copycat Project: A Model of Mental Fluidity and Analogy-making (D.H. and Melanie Mitchell)
  • Perspectives on Copycat: Comparisons with Recent Work (Melanie Mitchell and D.H.)
  • Prolegomena to Any Future Metacat (D.H.)
  • Tabletop, BattleOp, Ob-Platte, Potelbat, Belpatto, Platobet (D.H. and Robert French)
  • The Emergent Personality of Tabletop, a Perception-based Model of Analogy-making (D.H. and Robert French)
  • Letter Spirit: Esthetic Perception and Creative Play in the Rich Microcosm of the Roman Alphabet (D.H. and Gary McGraw)

To Seek Whence Cometh a Sequence

The first AI project by Hofstadter stemmed from his teenage fascination with number sequences.
When he was 17, he studied the way that triangular and square numbers interleave, and eventually found a recursive relation describing it.
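The interleaving he studied can be made concrete with a short sketch (the code below merely merges the two families in increasing order to display the pattern; it does not reproduce the recursive relation Hofstadter found):

  from itertools import islice

  # Illustrative sketch: merge triangular and square numbers in increasing order
  # and mark which family each term belongs to. This only displays the
  # interleaving pattern; it does not reproduce Hofstadter's recursive relation.
  def triangulars():
      n, t = 1, 1
      while True:
          yield t
          n += 1
          t += n

  def squares():
      n = 1
      while True:
          yield n * n
          n += 1

  tri = list(islice(triangulars(), 12))   # 1, 3, 6, 10, 15, ...
  sqr = list(islice(squares(), 12))       # 1, 4, 9, 16, 25, ...
  merged = sorted([(t, 'T') for t in tri] + [(s, 'S') for s in sqr])
  print(' '.join(f'{v}{tag}' for v, tag in merged))
  # 1S 1T 3T 4S 6T 9S 10T 15T 16S 21T 25S 28T 36S 36T ...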
In his first course on AI, he set himself and his students the task of writing a program that could extrapolate the rule by which a numeric sequence is generated.
He discusses breadth-first and depth-first search techniques, but eventually concludes that the results amount to expert systems that embody a lot of technical knowledge yet shed little light on the mental processes that humans use to solve such puzzles.

Instead he devised a simplified version of the problem, called SeekWhence, in which sequences are based on simple rules that require no advanced mathematical knowledge.
He argues that pattern recognition, analogy, and fluid working hypotheses are fundamental to understanding how humans tackle such problems.
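For illustration, a sequence such as 1 1 2 1 2 3 1 2 3 4 ... is generated by a purely structural rule (this particular sequence is a hypothetical example in the spirit of the domain, not necessarily one taken from the book):

  # Hypothetical SeekWhence-style sequence: successive initial segments of the
  # counting numbers (1; 1 2; 1 2 3; ...). The rule is structural, not arithmetic.
  def counting_prefixes(n_groups):
      seq = []
      for k in range(1, n_groups + 1):
          seq.extend(range(1, k + 1))
      return seq

  print(counting_prefixes(4))   # [1, 1, 2, 1, 2, 3, 1, 2, 3, 4]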

The Architecture of Jumbo

Jumbo is a program to solve jumbles, word puzzles consisting of five or six scrambled letters that need to be anagrammed to form an English word.
The resulting word does not need to be a real one, just a plausible one, that is, a sequence of letters that looks normal in English.

The constituent elements of Jumbo are the following:
  • The "chunkabet": a database of chunks, small sequences of letters, with a numeric value giving their strength as possible components of a word.
  • The "cytoplasm
    Cytoplasm
    The cytoplasm is a small gel-like substance residing between the cell membrane holding all the cell's internal sub-structures , except for the nucleus. All the contents of the cells of prokaryote organisms are contained within the cytoplasm...

    ": a loose data structure containing partial associations of letter, modeling a form of working memory
    Working memory
    Working memory has been defined as the system which actively holds information in the mind to do verbal and nonverbal tasks such as reasoning and comprehension, and to make it available for further information processing...

    . The name is inspired by the place in a cell where molecular fragments are assembled into proteins.
  • The "Coderack": a structure containing "codelets", small pieces of programs that are waiting to be executed in the cytoplasm; the codelet that is executed next is chosen non-deterministically, based on urgencies attached to them; a codelet may form new associations, break down old ones, or generate more codelets.

A "temperature" is associated to the present state of the cytoplasm; it determines how probable it is that a destructive codelet is executed.
There is a "freezing" temperature at which no destruction can occur anymore: a solution has been found.

Numbo: A Study in Cognition and Recognition

Numbo is a program by Daniel Defays that tries to solve numerical problems similar to those used in the French game "Le compte est bon". The game consists of combining some numbers called "bricks", using the operations of multiplication, addition, and subtraction, to obtain a given result.

The program is modeled on Jumbo and Copycat and uses a permanent network of known mathematical facts, a working memory in the form of a cytoplasm, and a coderack containing codelets to produce free associations of bricks in order to arrive at the result.
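The task can be stated compactly; the toy exhaustive search below, which combines all the bricks left to right, is only meant to illustrate the puzzle, not Numbo's stochastic, codelet-based architecture (the bricks and target are made up):

  from itertools import permutations, product
  from operator import add, mul, sub

  # Toy brute-force solver for a "Le compte est bon"-style puzzle.
  # Numbo itself works stochastically with codelets, not by exhaustive search.
  def solve(bricks, target):
      ops = {'+': add, '*': mul, '-': sub}
      for perm in permutations(bricks):
          for chosen in product(ops, repeat=len(bricks) - 1):
              value, expr = perm[0], str(perm[0])
              for op, brick in zip(chosen, perm[1:]):
                  value = ops[op](value, brick)
                  expr = f'({expr} {op} {brick})'
              if value == target:
                  return expr
      return None

  print(solve([4, 7, 3, 2], 25))   # prints one expression equal to 25, e.g. (((4 + 7) * 2) + 3)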

High-level Perception, Representation, and Analogy

The subtitle A Critique of Artificial-intelligence Methodology indicates that this is a polemical article, in which David Chalmers, Robert French, and Hofstadter criticize most of the research going on at that time (the early '80s) as exaggerating results and missing the central features of human intelligence.

Some of these AI projects, like the structure mapping engine (SME), claimed to model high faculties of the human mind, to understand literary analogies, and to rediscover important scientific breakthroughs.
In the introduction, Hofstadter warns about the Eliza effect, which leads people to attribute understanding to a computer program that only uses a few stock phrases.
The authors claim that the input data for such impressive results are already heavily structured in the direction of the intended discovery and only a simple matching task is left to the computer.

Their main claim is that it is impossible to model high-level cognition without at the same time modeling low-level perception.
While cognition is necessarily based on perception, they argue that cognition in turn influences perception itself.
Therefore, a sound AI project should try to model the two together.
In a slogan repeated several times throughout the book: cognition is recognition.

Since human perception is too complex to be modeled by the available technology, they favor the restriction of AI projects to limited domains like the one used for the Copycat project.

The Copycat Project

This chapter presents, as stated in the full title, A Model of Mental Fluidity and Analogy-making.
It is a description of the architecture of the Copycat program, developed by Hofstadter and Melanie Mitchell.
The field of application of the program is a domain of short alphabetic sequences.
A typical puzzle is: If abc were changed to abd, how would you change ijk in the same way?
The program tries to find an answer using a strategy supposedly similar to the way the human mind tackles the question.

Copycat has three major components:
  • The Slipnet, a model of long-term memory in humans. It contains concepts of various degrees of abstraction, from the letter types to the notion of opposite. Concepts are connected by links indicating their similarity. The activation of a node may cause the activation of a neighbor, with a probability proportional to the inverse of the length of their link. The lengths of the links are not static: they have initial values but may change elastically during computation, according to the partial results achieved (see the sketch after this list).
  • The Workspace, a model of short-term memory. Here partial structures are constructed and dismantled. The temporary results may cause the activation of concepts in the Slipnet. A temperature measures the satisfaction of the program with the structure obtained at each moment. A high temperature means dissatisfaction and leads to the adoption of a different strategy; a low temperature means satisfaction and the continuation of the present strategy.
  • The Coderack, a collection of codelets, that is, small fragments of code that wait to be selected and executed in the workspace. Each has a weight associated with it that determines its probability of being selected for execution.
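A minimal sketch of the Slipnet's spreading activation (the assumption here, made only for illustration, is that a neighbor is activated with probability inversely proportional to the link length; Copycat's real formulas are more elaborate):

  import random

  # Illustrative Slipnet fragment: nodes are concepts, links have lengths, and
  # activation spreads to a neighbor with probability 1/length (an assumption
  # of this sketch, not Copycat's actual activation rule).
  links = {
      'a': [('b', 2.0), ('first', 3.0)],
      'b': [('a', 2.0), ('c', 2.0)],
      'c': [('b', 2.0), ('last', 3.0)],
      'first': [('last', 4.0)],   # a longer link: the concepts are less similar
      'last': [('first', 4.0)],
  }

  def spread(active):
      newly_active = set()
      for node in active:
          for neighbor, length in links.get(node, []):
              if random.random() < 1.0 / length:
                  newly_active.add(neighbor)
      return active | newly_active

  print(spread({'c'}))   # may activate 'b' and/or 'last', depending on chance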

The resulting software displays emergent properties.
It works according to a parallel terraced scan that runs several possible processes at the same time.
It shows mental fluidity in that concepts may slip into similar ones.
It emulates human behavior in tending to find the most obvious solutions most of the time, but being more satisfied (as witnessed by a low temperature) with the cleverer and deeper answers that it finds more rarely.

Perspectives on Copycat

This chapter compares Copycat with other recent (at the time) work in artificial intelligence.
Specifically, it matches it against the claimed results of the structure mapping engine (SME) and the Analogical Constraint Mapping Engine (ACME).
The authors' judgment is that those programs suffer from two defects: Their input is pre-structured by the developers to highlight the analogies that the software is supposed to find; and the general architecture of the programs is serial and deterministic rather than parallel and stochastic like Copycat's, which they consider psychologically more plausible.

Severe criticism is directed at the claim that these tools can solve "real-life" problems.
In fact, only the terms used in the examples suggest that the input to the programs comes from a concrete situation.
The logical structures themselves do not carry any meaning for those terms.
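To see the point, compare a hand-structured input of the kind such matchers receive with the same structure after its evocative names have been replaced by arbitrary symbols; the matcher treats the two identically (the representation below is written in the spirit of such systems' published examples, not quoted from them):

  # Hand-structured input: the evocative names suggest a concrete situation,
  # but the matcher only sees nested tuples.
  base = ('cause',
          ('greater', ('pressure', 'beaker'), ('pressure', 'vial')),
          ('flow', 'water', 'beaker', 'vial'))

  # The same structure with meaningless symbols is structurally identical,
  # so whatever analogy is found depends entirely on how the input was prepared.
  scrambled = ('p1',
               ('p2', ('p3', 'x'), ('p3', 'y')),
               ('p4', 'z', 'x', 'y'))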

Finally, a more positive assessment is given to two other projects: Indurkhya's PAN model and Kokinov's AMBR system.

Prolegomena to Any Future Metacat

This chapter looks at those aspects of human creativity that are not yet modeled by Copycat and lays down a research plan for a future extension of the software.
The main missing element is the mind's ability to observe itself and reflect on its own thinking process.
Also important is the ability to learn and to remember the results of the mental activity.

The creativity displayed in finding analogies should be applicable at ever higher levels: making analogies between analogies (an expression inspired by the title of a book by Stanisław Ulam), analogies between these second-order analogies, and so on.

Tabletop, BattleOp, Ob-Platte, Potelbat, Belpatto, Platobet

Another of Hofstadter's students, Robert French, was assigned the task of applying the architecture of Copycat to a different domain, consisting of analogies between objects lying on a table in a coffeehouse.
The resulting program was named Tabletop.

To justify the relevance of attacking such a trivial-seeming project, the authors present a different and vaster domain.
The alternative domain is called Ob-Platte and consists of discovering analogies between geographical locations in different regions or countries.

Once again arguments are offered against a brute-force approach, which would work on the small Tabletop domain but would become unfeasible on the larger Ob-Platte domain.
Instead a parallel non-deterministic architecture is used, similar to the one adopted by the Copycat project.

The Emergent Personality of Tabletop, a Perception-based Model of Analogy-making

In the introduction to the chapter, titled The Knotty Problem of Evaluating Research, Hofstadter considers the question of how research in AI should be assessed.
He argues against a strict adherence to a match between the results of an AI program with the average answer of human test subjects.
He gives two reasons for his rejection: the AI program is supposed to emulate creativity, while an average of human responses erases any original insight by individual subjects; and the architecture of the program should matter more than its mere functional description.

In the main article, the architecture of Tabletop is described: it is strongly inspired by that of Copycat and consists of a Slipnet, a Workspace, and a Coderack.

Letter Spirit

This last chapter is about a more ambitious project that Hofstadter started with student Gary McGraw.
The microdomain used is that of grid fonts: typographic alphabets whose letters are built from a small set of components placed on a rigid grid.
The goal is to construct a program that, given only a few or just one letter from the grid font, can generate the whole alphabet in the same style.
The difficulty lies in the ambiguity and undefinability of style.
The projected program would have a structure very similar to that of Jumbo, Numbo, Copycat, and Tabletop.
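A minimal sketch of the kind of representation such a microdomain suggests (the grid size, the segment encoding, and the sample letter are assumptions made for illustration, not Letter Spirit's actual grid):

  # Hypothetical grid-font representation: a letter is a set of short segments,
  # each joining two points of a small fixed grid. Dimensions and the sample
  # letter are illustrative assumptions, not Letter Spirit's own conventions.
  GRID_COLS, GRID_ROWS = 3, 7

  def point(col, row):
      assert 0 <= col < GRID_COLS and 0 <= row < GRID_ROWS
      return (col, row)

  # A crude lowercase 'l': a vertical stroke down the left column.
  letter_l = frozenset({
      (point(0, 0), point(0, 1)),
      (point(0, 1), point(0, 2)),
      (point(0, 2), point(0, 3)),
      (point(0, 3), point(0, 4)),
  })

  # A whole grid font maps each letter of the alphabet to such a set of
  # segments; the hard problem Letter Spirit addresses is capturing the style
  # shared across those sets.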

Epilogue

In the concluding part of the book, Hofstadter analyses some AI projects with a critical eye.
He finds that the AI of the day misses the gist of human creativity and makes exaggerated claims.
The projects under scrutiny are the following.

AARON, a computer artist that can draw images of people in outdoor settings in a distinctive style reminiscent of that of a human artist. Criticism: the program does not have any understanding of the objects it draws; it just uses some graphical algorithms, with some randomness thrown in, to generate a different scene at every run and to give the style a more natural feel.

Racter, a computer author that wrote a book entitled The Policeman's Beard Is Half Constructed.
Although some of the prose generated by the program is quite impressive, due in part to the Eliza effect, the computer does not have any notion of plot or of the meaning of the words it uses. Furthermore, the book is made up of texts selected from thousands produced by the computer over several years.

AM, a computer mathematician that generates new mathematical concepts. It managed to produce by itself the notion of prime number and the Goldbach conjecture. As with Racter, the question is how much the programmer filtered the output of the program, keeping only the occasional interesting result.
Also, mathematics being a very specialized domain, it is doubtful whether the techniques used can be abstracted to general cognition.

Another mathematical program, called Geometry, was celebrated for discovering an insightful and original proof that an isosceles triangle has equal base angles. The proof is based on seeing the triangle in two different ways. However, the program simply generates all possible ways of seeing the triangle, without even knowing that it is the same triangle.
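The proof in question is the classical one obtained by viewing the triangle in two orders of its own vertices (restated here for clarity, not quoted from the book):

  \begin{proof}
  Let $ABC$ be a triangle with $AB = AC$. Consider the correspondence
  $ABC \leftrightarrow ACB$ of the triangle with itself: it sends $AB$ to $AC$,
  $AC$ to $AB$, and the angle at $A$ to itself. By the side-angle-side
  criterion, $\triangle ABC \cong \triangle ACB$, and therefore
  $\angle B = \angle C$: the base angles are equal.
  \end{proof}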

Hofstadter concludes with some methodological remarks on the Turing Test.
In his opinion it is still a good definition, and he argues that by interacting with a program a human may gain insight not only into its behaviour but also into its structure.
However, he criticises the use that is made of it at present: it encourages the development of fancy natural-language interfaces instead of the investigation of deep cognitive faculties.

External links

  • Review in Byte, March 1995