List of important publications in theoretical computer science

Computability: An Introduction to Recursive Function Theory

Description: A very popular textbook.

"Decidability of second order theories and automata on infinite trees"

  • Michael O. Rabin
  • Transactions of the American Mathematical Society, vol. 141, pp. 1–35, 1969


Description: The paper presented the tree automaton, an extension of finite automata. The tree automaton had numerous applications to proofs of correctness of programs.

"Finite automata and their decision problems"

  • Michael O. Rabin and Dana S. Scott
  • IBM Journal of Research and Development, vol. 3, pp. 114–125, 1959
  • Online version (Not Free)


Description: Mathematical treatment of finite automata, proofs of their core properties, and the definition of the non-deterministic finite automaton.
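
The non-deterministic automaton of Rabin and Scott can be simulated by tracking the set of states reachable so far — the idea behind their subset construction. A minimal Python sketch; the example automaton (accepting binary strings whose second-to-last symbol is 1) is illustrative, not taken from the paper:

```python
def nfa_accepts(delta, start, accepting, word):
    """delta maps (state, symbol) -> set of successor states."""
    current = {start}
    for symbol in word:
        # Take the union of all successor sets: the subset construction.
        current = set().union(*(delta.get((q, symbol), set()) for q in current))
    return bool(current & accepting)

# Example NFA: accepts binary strings whose second-to-last symbol is '1'.
delta = {
    ('a', '0'): {'a'},
    ('a', '1'): {'a', 'b'},
    ('b', '0'): {'c'},
    ('b', '1'): {'c'},
}
print(nfa_accepts(delta, 'a', {'c'}, "0110"))  # True
print(nfa_accepts(delta, 'a', {'c'}, "0101"))  # False
```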

Introduction to Automata Theory, Languages, and Computation

  • John E. Hopcroft, Jeffrey D. Ullman, and Rajeev Motwani
  • Addison-Wesley, 2001, ISBN 0-201-02988-X


Description: A popular textbook.

"On certain formal properties of grammars"

Description: This article introduced what is now known as the Chomsky hierarchy, a containment hierarchy of classes of formal grammars that generate formal languages.

"On computable numbers, with an application to the Entscheidungsproblem"

  • Alan Turing
  • Proceedings of the London Mathematical Society, Series 2, vol. 42, pp. 230–265, 1937.
    Errata appeared in vol. 43, pp. 544–546, 1938.
  • HTML version, PDF version


Description: This article set the limits of computer science. It defined the Turing machine, a model for all computations. It also proved the undecidability of the halting problem and of the Entscheidungsproblem, and by doing so found the limits of possible computation.

"A machine-independent theory of the complexity of recursive functions"

Description: The Blum axioms.

"Algebraic methods for interactive proof systems"

Description: This paper showed that PH is contained in IP.

"The complexity of theorem proving procedures"

Description: This paper introduced the concept of NP-completeness and proved that the Boolean satisfiability problem (SAT) is NP-complete. Note that similar ideas were developed independently slightly later by Leonid Levin in "Levin, Universal Search Problems. Problemy Peredachi Informatsii 9(3):265–266, 1973".

Computers and Intractability: A Guide to the Theory of NP-Completeness

Description: The main importance of this book is its extensive list of more than 300 NP-complete problems, which became a common reference and definition. It is remarkable that such an extensive list could be compiled only a few years after the concept was defined.

"Degree of difficulty of computing a function and a partial ordering of recursive sets"

Description: This technical report was the first publication to discuss what was later renamed computational complexity.

"How good is the simplex method?"

  • Victor Klee and George J. Minty


Description: Constructed the "Klee–Minty cube" in dimension D, whose 2^D corners are each visited by Dantzig's simplex algorithm for linear optimization.

"How to construct random functions"

Description: This paper showed that the existence of one-way functions leads to computational randomness.

"IP = PSPACE"

Description: IP is a complexity class whose characterization (based on interactive proof systems) is quite different from the usual time- or space-bounded computational classes. In this paper, Shamir extended the technique of the previous paper by Lund et al. to show that PSPACE is contained in IP, and hence IP = PSPACE, so that each problem in one complexity class is solvable in the other.

"Reducibility among combinatorial problems"

  • R. M. Karp
  • In R. E. Miller and J. W. Thatcher, editors, Complexity of Computer Computations, Plenum Press, New York, NY, 1972, pp. 85–103


Description: This paper showed that 21 different problems are NP-complete, demonstrating the importance of the concept.

"The Knowledge Complexity of Interactive Proof Systems"

Description: This paper introduced the concept of zero-knowledge proofs.

A letter from Gödel to von Neumann

  • Kurt Gödel
  • A letter from Gödel to John von Neumann, March 20, 1956
  • Online version


Description: Gödel discusses the idea of an efficient universal theorem prover.

"On the computational complexity of algorithms"

Description: This paper gave computational complexity its name and planted its seed.

"Paths, trees, and flowers"

Description: There is a polynomial time algorithm to find a maximum matching in a graph that is not bipartite; the paper was another step toward the idea of computational complexity. For more information see http://nvl.nist.gov/pub/nistpubs/sp958-lide/140-144.pdf.

"Theory and applications of trapdoor functions"

Description: This paper creates a theoretical framework for trapdoor functions and describes some of their applications, such as in cryptography. Note that the concept of trapdoor functions was introduced in "New directions in cryptography" six years earlier (see section V, "Problem Interrelationships and Trap Doors").

Computational Complexity

  • C. H. Papadimitriou
  • Addison-Wesley, 1994, ISBN 0-201-53082-1


Description: An introduction to computational complexity theory; the book explains its author's characterization of PSPACE and other results.

"Proof verification and the hardness of approximation problems"

Description: These three papers established the surprising fact that certain problems in NP remain hard even when only an approximate solution is required. See the PCP theorem.

"A machine program for theorem proving"

Description: The DPLL algorithm, the basic algorithm for SAT and other NP-complete problems.
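
As a sketch of the idea (not the paper's original presentation), a minimal DPLL solver in Python over CNF formulas given as lists of integer literals (negative = negated variable), with unit propagation and branching; pure-literal elimination is omitted for brevity:

```python
def dpll(clauses):
    clauses = [list(c) for c in clauses]
    if any(not c for c in clauses):
        return False                          # an empty clause is unsatisfiable
    # Unit propagation: repeatedly assign the literal of any unit clause.
    while True:
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        new = []
        for c in clauses:
            if unit in c:
                continue                      # clause satisfied, drop it
            reduced = [l for l in c if l != -unit]
            if not reduced:
                return False                  # empty clause: conflict
            new.append(reduced)
        clauses = new
    if not clauses:
        return True                           # every clause satisfied
    lit = clauses[0][0]                       # branch on some literal
    return dpll(clauses + [[lit]]) or dpll(clauses + [[-lit]])

print(dpll([[1, 2], [-1, 2], [-2]]))          # False (unsatisfiable)
print(dpll([[1, -2], [2, 3], [-1, -3]]))      # True  (satisfiable)
```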

"A machine-oriented logic based on the resolution principle"

Description: First description of resolution and unification as used in automated theorem proving; used in Prolog and logic programming.

"The traveling-salesman problem and minimum spanning trees"

Description: The use of a minimum spanning tree algorithm as an approximation algorithm for the NP-complete travelling salesman problem. Approximation algorithms became a common method for coping with NP-complete problems.
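
A hedged sketch of the technique in Python: build a minimum spanning tree with Prim's algorithm, then output its preorder walk as the tour. For distances obeying the triangle inequality this yields a tour of at most twice the optimal length; the distance matrix below is a made-up example:

```python
def mst_tsp_tour(dist):
    n = len(dist)
    # Prim's algorithm: grow the MST from vertex 0, recording each parent.
    in_tree = [False] * n
    parent = [0] * n
    best = [float('inf')] * n
    best[0] = 0
    children = {v: [] for v in range(n)}
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v], parent[v] = dist[u][v], u
    # A preorder walk of the MST visits every vertex once; by the triangle
    # inequality its cost is at most twice the MST cost, hence at most 2*OPT.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour

# Made-up symmetric distance matrix satisfying the triangle inequality.
dist = [[0, 1, 1.5, 1],
        [1, 0, 1, 1.5],
        [1.5, 1, 0, 1],
        [1, 1.5, 1, 0]]
tour = mst_tsp_tour(dist)
cost = sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))
print(tour, cost)
```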

"A polynomial algorithm in linear programming"

  • L. G. Khachiyan
  • Soviet Mathematics Doklady, vol. 20, pp. 191–194, 1979


Description: For a long time there was no provably polynomial-time algorithm for the linear programming problem. Khachiyan was the first to provide an algorithm that was polynomial (and not merely fast enough most of the time, as previous algorithms were). Later, Narendra Karmarkar presented a faster algorithm in: Narendra Karmarkar, "A new polynomial time algorithm for linear programming", Combinatorica, vol. 4, no. 4, pp. 373–395, 1984.

"Probabilistic algorithm for testing primality"

Description: The paper presented the Miller–Rabin primality test and outlined the program of randomized algorithms.
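
A compact Python sketch of the test in its modern formulation (not the paper's notation): write n − 1 = d·2^s with d odd and check randomly chosen bases a; a composite n is caught by a random base with probability at least 3/4 per round.

```python
import random

def is_probable_prime(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):           # quick trial division
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                      # a is a witness: n is composite
    return True                               # probably prime

print(is_probable_prime(2**61 - 1))           # True: a Mersenne prime
print(is_probable_prime(561))                 # False: Carmichael number (3*11*17)
```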

"Optimization by simulated annealing"

Description: This article described simulated annealing, which is now a very common heuristic for NP-complete problems.
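
A generic simulated-annealing sketch in Python; the toy cost function, neighbourhood, and geometric cooling schedule are illustrative choices, not prescriptions from the paper:

```python
import math
import random

def simulated_annealing(initial, neighbor, cost, t0=10.0, cooling=0.995, steps=5000):
    current, best = initial, initial
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept uphill moves with prob. exp(-delta/t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling                           # geometric cooling schedule
    return best

# Toy example: minimize (x - 3)^2 over the integers.
random.seed(0)
result = simulated_annealing(
    initial=50,
    neighbor=lambda x: x + random.choice([-1, 1]),
    cost=lambda x: (x - 3) ** 2,
)
print(result)
```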

The Art of Computer Programming

  • Donald Knuth


Description: This monograph has three popular algorithms volumes and a number of fascicles. The algorithms are written in both English and MIX assembly language (or MMIX assembly language in more recent fascicles). This makes the algorithms both understandable and precise. However, the use of a low-level programming language frustrates some programmers more familiar with modern structured programming languages.

Algorithms + Data Structures = Programs

  • Niklaus Wirth
  • Prentice Hall, 1976, ISBN 0-13-022418-9


Description: An early, influential book on algorithms and data structures, with implementations in Pascal.

The Design and Analysis of Computer Algorithms

  • Alfred V. Aho, John E. Hopcroft, and Jeffrey D. Ullman
  • Addison-Wesley, 1974, ISBN 0-201-00029-6


Description: One of the standard texts on algorithms for the period of approximately 1975–1985.

How to Solve It By Computer

Description: Explains the whys of algorithms and data structures: the creative process, the line of reasoning, and the design factors behind innovative solutions.

Algorithms

  • Robert Sedgewick
  • Addison-Wesley, 1983, ISBN 0-201-06672-6


Description: A very popular text on algorithms in the late 1980s. It was more accessible and readable (but more elementary) than Aho, Hopcroft, and Ullman. There are more recent editions.

Introduction to Algorithms

  • Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
  • MIT Press and McGraw-Hill. 2nd Edition, 2001. 1st Edition (with first three authors) published in 1990.


Description: As its name indicates, this textbook is a very good introduction to algorithms. The book became so popular that it is almost the de facto standard for teaching basic algorithms.

"On Tables of Random Numbers"

Description: Proposed a computational and combinatorial approach to probability.

"A formal theory of inductive inference"

  • Ray Solomonoff
  • Information and Control, vol. 7, pp. 1–22 and 224–254, 1964
  • Online copy: part I, part II


Description: This was the beginning of algorithmic information theory and Kolmogorov complexity. Note that though Kolmogorov complexity is named after Andrey Kolmogorov, he said that the seeds of the idea are due to Ray Solomonoff. Kolmogorov contributed a great deal to this area, but in later articles.

"Algorithmic information theory"

Description: A good introduction to algorithmic information theory by one of the important people in the area.

"A mathematical theory of communication"

Description: This paper created the field of information theory.

"Error detecting and error correcting codes"

Description: In this paper, Hamming introduced the idea of error-correcting codes. He created the Hamming code and the Hamming distance and developed methods for code optimality proofs.
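Both notions can be made concrete in a short sketch (the function names are ours, not Hamming's); this encodes 4 data bits as a Hamming(7,4) codeword with parity bits at positions 1, 2 and 4, and corrects any single flipped bit:

```python
def hamming_distance(a, b):
    """Number of positions at which two equal-length strings differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def hamming_encode(d1, d2, d3, d4):
    """Encode 4 data bits as a Hamming(7,4) codeword [p1, p2, d1, p4, d2, d3, d4]."""
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming_decode(codeword):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4   # 1-based position of the error; 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Any two distinct Hamming(7,4) codewords differ in at least 3 positions (minimum Hamming distance 3), which is exactly why one error can be corrected and two detected.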

"A method for the construction of minimum redundancy codes"

Description: This paper introduced Huffman coding.
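Huffman's construction repeatedly merges the two least-frequent subtrees; a minimal sketch (not Huffman's own presentation, and assuming the input has at least two distinct symbols):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free code in which frequent symbols get short codewords."""
    # Each heap entry: [total weight, [symbol, code], [symbol, code], ...]
    heap = [[w, [sym, ""]] for sym, w in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)   # two lightest subtrees
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]   # descend left
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]   # descend right
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])
```

The resulting code minimises the total encoded length over all prefix-free codes for the given symbol frequencies.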

"A universal algorithm for sequential data compression"

Description: The LZ77 compression algorithm.
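The scheme replaces repeated substrings with back-references into a sliding window. A naive sketch of the idea (quadratic search, not the paper's formulation; token format is the common (offset, length, next-char) triple):

```python
def lz77_compress(data, window=4096):
    """Emit (offset, match_length, next_char) triples, LZ77-style."""
    i, out = 0, []
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            # A match may run past i: self-referential copies are allowed.
            while i + length < len(data) - 1 and data[j + length] == data[i + length]:
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decompress(triples):
    out = []
    for off, length, nxt in triples:
        for _ in range(length):
            out.append(out[-off])   # copy one char at a time so overlapping refs work
        out.append(nxt)
    return "".join(out)
```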

Elements of Information Theory

Description: A good and popular introduction to information theory.

Assigning Meaning to Programs

Description: Robert Floyd's landmark paper Assigning Meanings to Programs introduces the method of inductive assertions and describes how a program annotated with first-order assertions may be shown to satisfy a pre- and post-condition specification; the paper also introduces the concepts of loop invariant and verification condition.
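Floyd's method can be illustrated on a toy loop. Here the inductive assertion is checked at run time with `assert` statements (a sketch in modern notation, not Floyd's flowchart presentation):

```python
def sum_to(n):
    # Precondition
    assert n >= 0
    total, i = 0, 0
    while i < n:
        # Inductive assertion (loop invariant): total == 0 + 1 + ... + i
        assert total == i * (i + 1) // 2
        i += 1
        total += i
    # Postcondition: the invariant plus the negated guard (i == n)
    assert total == n * (n + 1) // 2
    return total
```

The verification conditions are exactly that the invariant holds on entry, is preserved by each loop body, and together with the negated guard implies the postcondition.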

An Axiomatic Basis for Computer Programming

Description: Tony Hoare's paper An Axiomatic Basis for Computer Programming describes a set of inference (i.e. formal proof) rules for fragments of an Algol-like programming language described in terms of (what are now called) Hoare-triples.
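Two of the paper's rules, written in modern Hoare-triple notation (the assignment axiom and the iteration rule):

```latex
\frac{}{\{P[E/x]\}\; x := E\; \{P\}}
\qquad
\frac{\{P \land B\}\; S\; \{P\}}{\{P\}\; \mathbf{while}\ B\ \mathbf{do}\ S\; \{P \land \lnot B\}}
```

Reading the iteration rule: if the loop body S preserves the invariant P whenever the guard B holds, then the whole loop, started in P, terminates (if at all) in P with B false.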

Guarded Commands, Nondeterminacy and Formal Derivation of Programs

Description: Edsger Dijkstra's paper Guarded Commands, Nondeterminacy and Formal Derivation of Programs (expanded by his 1976 postgraduate-level textbook A Discipline of Programming) proposes that, instead of formally verifying a program after it has been written (i.e. post facto), programs and their formal proofs should be developed hand-in-hand (using predicate transformers to progressively refine weakest pre-conditions), a method known as program (or formal) refinement (or derivation), or sometimes "correctness-by-construction".
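A small weakest-precondition calculation in the style the paper proposes: for assignment, wp is substitution, so the weakest precondition is computed backwards from the desired postcondition.

```latex
wp(x := E,\ Q) \;=\; Q[x := E]
\qquad\text{e.g.}\qquad
wp(x := x + 1,\ x \le 10) \;=\; (x + 1 \le 10) \;=\; (x \le 9)
```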

Proving Assertions about Parallel Programs

  • Edward A. Ashcroft
  • J. Comput. Syst. Sci. 10(1): 110-135 (1975)


Description: The paper that introduced invariance proofs of concurrent programs.

An Axiomatic Proof Technique for Parallel Programs I

  • Susan S. Owicki, David Gries
  • Acta Inf. 6: 319-340 (1976)


Description: In this paper, along with the same authors' paper "Verifying Properties of Parallel Programs: An Axiomatic Approach" (Commun. ACM 19(5): 279-285, 1976), the axiomatic approach to the verification of parallel programs was presented.

A Discipline of Programming

  • Edsger W. Dijkstra
  • 1976


Description: Edsger Dijkstra's classic postgraduate-level textbook A Discipline of Programming extends his earlier paper Guarded Commands, Nondeterminacy and Formal Derivation of Programs and firmly establishes the principle of formally deriving programs (and their proofs) from their specification.

Denotational Semantics

  • Joe Stoy
  • 1977


Description: Joe Stoy's Denotational Semantics is the first (postgraduate level) book-length exposition of the mathematical (or functional) approach to the formal semantics of programming languages (in contrast to the operational and algebraic approaches).

The Temporal Logic of Programs


Description: The use of temporal logic was suggested as a method for formal verification.
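Typical specifications in this style, written with the temporal operators □ ("always") and ◇ ("eventually"); the propositions req, grant, cs₁ and cs₂ are illustrative names, not from the paper:

```latex
\Box\,(\mathit{req} \rightarrow \Diamond\,\mathit{grant})
\quad\text{(every request is eventually granted)}
\qquad
\Box\,\lnot(\mathit{cs}_1 \land \mathit{cs}_2)
\quad\text{(mutual exclusion)}
```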

Characterizing correctness properties of parallel programs using fixpoints (1980)

  • E. Allen Emerson, Edmund Clarke
  • In Proc. 7th International Colloquium on Automata, Languages and Programming, pages 169-181, 1980


Description: Model checking was introduced as a procedure to check correctness of concurrent programs.
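The fixpoint characterisation at the heart of the paper can be sketched for one property: the set of states from which p is reachable (CTL's EF p) is the least fixpoint of Z = p ∪ pre(Z). A minimal illustration over an explicit transition relation (the state space in the test is invented for the example):

```python
def ef(states, trans, p):
    """States satisfying EF p: least fixpoint of Z = p ∪ pre(Z),
    computed by iterating upward from p until the set is stable."""
    z = set(p)
    while True:
        # pre(Z): states with at least one successor already in Z
        pre = {s for s in states if trans.get(s, set()) & z}
        new = z | pre
        if new == z:
            return z
        z = new
```

Because the state space is finite and each iteration only grows Z, the iteration is guaranteed to terminate, which is what makes this a decision procedure.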

Communicating Sequential Processes (1978)

  • C.A.R. Hoare
  • 1978


Description: Tony Hoare's (original) communicating sequential processes (CSP) paper introduces the idea of concurrent processes (i.e. programs) that do not share variables but instead cooperate solely by exchanging synchronous messages.
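The discipline can be approximated in ordinary Python: two threads that share no variables and interact only through a channel. (Python's `queue.Queue` is buffered, so this is asynchronous message passing, a looser model than CSP's synchronous rendezvous; the sketch only illustrates the no-shared-state style.)

```python
import threading
import queue

def run_pipeline(values):
    chan = queue.Queue()      # the only link between the two processes
    result = queue.Queue()    # carries the consumer's answer back

    def producer():
        for v in values:
            chan.put(v)       # "output" v on the channel
        chan.put(None)        # sentinel: end of stream

    def consumer():
        total = 0
        while (v := chan.get()) is not None:   # "input" from the channel
            total += v
        result.put(total)

    threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result.get()
```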

A Calculus of Communicating Systems

  • Robin Milner
  • 1980


Description: Robin Milner's A Calculus of Communicating Systems (CCS) paper describes a process algebra permitting systems of concurrent processes to be reasoned about formally, something which had not been possible for earlier models of concurrency (semaphores, critical sections, original CSP).

Software Development: A Rigorous Approach

  • Cliff Jones
  • 1980


Description: Cliff Jones' textbook Software Development: A Rigorous Approach is the first full-length exposition of the Vienna Development Method (VDM), which had evolved (principally) at IBM's Vienna research lab over the previous decade and which combines the idea of program refinement as per Dijkstra with that of data refinement (or reification) whereby algebraically-defined abstract data types are formally transformed into progressively more "concrete" representations.

The Science of Programming

  • David Gries
  • 1981


Description: David Gries' textbook The Science of Programming describes Dijkstra's weakest precondition method of formal program derivation in a much more accessible manner than Dijkstra's earlier A Discipline of Programming.

Communicating Sequential Processes (1985)

  • C.A.R. Hoare
  • 1985


Description: Tony Hoare's Communicating Sequential Processes (CSP) textbook (currently the third most cited computer science reference of all time) presents an updated CSP model in which cooperating processes do not even have program variables and which, like CCS, permits systems of processes to be reasoned about formally.

Linear logic (1987)

Description: Girard's linear logic was a breakthrough in designing typing systems for sequential and concurrent computation, especially for resource-conscious typing systems.

A Calculus of Mobile Processes (1989)

  • R. Milner, J. Parrow, D. Walker
  • 1989
  • Online version: Part 1 and Part 2


Description: This paper introduces the Pi-calculus, a generalisation of CCS which allows process mobility. The calculus is extremely simple and has become the dominant paradigm in the theoretical study of programming languages, typing systems and program logics.

The Z Notation: A Reference Manual

Description: Mike Spivey's classic textbook The Z Notation: A Reference Manual summarises the formal specification language Z notation which, although originated by Jean-Raymond Abrial, had evolved (principally) at Oxford University over the previous decade.

Communication and Concurrency

  • Robin Milner
  • Prentice-Hall International, 1989


Description: Robin Milner's textbook Communication and Concurrency is a more accessible, although still technically advanced, exposition of his earlier CCS work.

The source of this article is wikipedia, the free encyclopedia. The text of this article is licensed under the GFDL.