ISBN-13: 9783642082177 / English / Paperback / 2010 / 602 pp.
The foundations of computational complexity theory go back to Alan Turing in the 1930s, who was concerned with the existence of automatic procedures deciding the validity of mathematical statements. The first example of such a problem was the undecidability of the Halting Problem, which is essentially the question of debugging a computer program: Will a given program eventually halt? Computational complexity today addresses the quantitative aspects of the solutions obtained: Is the problem to be solved tractable? But how does one measure the intractability of computation? Several ideas were proposed: A. Cobham [Cob65] raised the question of what is the right model in order to measure a "computation step," M. Rabin [Rab60] proposed the introduction of axioms that a complexity measure should satisfy, and C. Shannon [Sha49] suggested the Boolean circuit that computes a Boolean function.

However, an important question remains: What is the nature of computation? In 1957, John von Neumann [vN58] wrote in his notes for the Silliman Lectures concerning the nature of computation and the human brain that

... logics and statistics should be primarily, although not exclusively, viewed as the basic tools of 'information theory'. Also, that body of experience which has grown up around the planning, evaluating, and coding of complicated logical and mathematical automata will be the focus of much of this information theory. The most typical, but not the only, such automata are, of course, the large electronic computing machines.