
Archive for the ‘Jacques Lacan’ Category

Gödel's incompleteness theorems:

The halting problem

From Wikipedia, the free encyclopedia

The halting problem for Turing machines is the best-known example of an unsolvable problem. It consists of determining whether a Turing machine will halt on a given input or will instead loop forever. It was the first problem formally proved to have no solution.

The concept of an undecidable (or unsolvable) problem applies to decision problems, that is, problems whose answer is either yes or no. Among these problems there is a class to which we cannot assign an answer, affirmative or negative: no algorithm exists that determines the answer for every instance.

One reason it is important to know that the halting problem has no solution is that it lets us decide whether other problems are solvable. The reasoning goes as follows: if, by assuming that some problem is decidable, we can show that the halting problem would then have a solution, we may conclude, by reductio ad absurdum, that the problem in question is in fact undecidable.

Definition

Let M be an arbitrary Turing machine with input alphabet Σ, and let w ∈ Σ*. Can it be decided whether the machine M will halt on input w?

Solution

The answer to this question is negative: it cannot be determined whether a Turing machine halts on an arbitrary input.

Proof

To prove this, suppose the halting problem has a solution; that is, suppose there exists a Turing machine capable of determining whether another Turing machine halts on a given input.

Consider a Turing machine P that receives as input a Turing machine M and a string w, encoded on the tape one after the other (Mw), and that runs M on the string w. The machine P halts and accepts its input if M halts on w, and halts and rejects its input if M does not halt on w.

We modify P to obtain an equivalent machine P′. This machine does not halt if M halts on w, and halts if M does not halt on w. Note that this modification is trivial in terms of Turing machines.

Now we build a machine D that works as follows. It receives a machine M and passes it through a copying machine, so that afterwards the tape contains MM (the encoding of the machine, duplicated). D then feeds this result into P′. With this we try to decide whether the machine M halts on the input M: if M halts on input M, then D does not halt, and if M does not halt on input M, then D halts. Note that the copying machine is not difficult to implement.

Finally, consider running D with the machine D itself as input (call this SD). By construction, SD halts if D does not halt on input D, and does not halt if D halts on input D. But SD is precisely D run on input D, so D halts on input D if and only if it does not halt on input D: a contradiction. Therefore, the halting problem has no solution.

All the machines built in this proof are, except for P, relatively easy to construct, so the key to the proof lies, by reductio ad absurdum, in P itself, which embodied the hypothesis that the problem is solvable.
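The diagonal argument above can be sketched in Python. The function `halts` below is purely hypothetical (the proof shows no such decider can exist), so here it only raises an error; the point is the structure of D, which inverts the assumed decider's answer when fed its own description.

```python
def halts(program, data):
    """Hypothetical decider (the machine P): would return True iff
    program(data) halts.  The proof shows no such total function exists."""
    raise NotImplementedError("no halting decider can exist")

def d(program):
    """The machine D (with P' built in): loops forever exactly when the
    assumed decider says `program` halts on its own description."""
    if halts(program, program):
        while True:          # P' replaces 'accept' with an infinite loop
            pass
    return "halted"

# Feeding D its own description (the machine SD) is the contradiction:
# d(d) halts  if and only if  halts(d, d) is False,
# i.e. if and only if d(d) does not halt.
```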


Computability theory (computer science)

From Wikipedia, the free encyclopedia


In computer science, computability theory is the branch of the theory of computation that studies which problems are computationally solvable using different models of computation.

Computability theory differs from the related discipline of computational complexity theory, which deals with the question of how efficiently a problem can be solved, rather than whether it is solvable at all.


Introduction

A central question of computer science is to address the limits of computing devices. One approach to addressing this question is understanding the problems we can use computers to solve. Modern computing devices often seem to possess infinite capacity for calculation, and it’s easy to imagine that, given enough time, we might use computers to solve any problem. However, it is possible to show clear limits to the ability of computers, even given arbitrarily vast computational resources, to solve even seemingly simple problems. Problems are formally expressed as decision problems: the task of constructing a mathematical function that returns either 0 or 1 for each input. If the value of the function on the input is 0 then the answer is “no”, and otherwise the answer is “yes”.

To explore this area, computer scientists invented automata theory which addresses problems such as the following: Given a formal language, and a string, is the string a member of that language? This is a somewhat esoteric way of asking this question, so an example is illuminating. We might define our language as the set of all strings of digits which represent a prime number. To ask whether an input string is a member of this language is equivalent to asking whether the number represented by that input string is prime. Similarly, we define a language as the set of all palindromes, or the set of all strings consisting only of the letter ‘a’. In these examples, it is easy to see that constructing a computer to solve one problem is easier in some cases than in others.
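Each of these example languages corresponds directly to a membership test. A minimal sketch in Python (the function names are ours, chosen for illustration):

```python
def in_prime_language(s):
    """Is s a string of digits representing a prime number?"""
    if not s.isdigit():
        return False
    n = int(s)
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def in_palindrome_language(s):
    """Is s equal to its own reversal?"""
    return s == s[::-1]

def in_all_a_language(s):
    """Does s consist only of the letter 'a'?"""
    return set(s) <= {"a"}

in_prime_language("13")          # True
in_palindrome_language("abcba")  # True
in_all_a_language("aab")         # False
```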

But in what real sense is this observation true? Can we define a formal sense in which we can understand how hard a particular problem is to solve on a computer? It is the goal of the computability theory of automata to answer just this question.

Formal models of computation

In order to begin to answer the central question of automata theory, it is necessary to define in a formal way what an automaton is. There are a number of useful models of automata. Some widely known models are:

Deterministic finite state machine
Also called a deterministic finite automaton (DFA), or simply a finite state machine. All real computing devices in existence today can be modeled as a finite state machine, as all real computers operate on finite resources. Such a machine has a set of states, and a set of state transitions which are affected by the input stream. Certain states are defined to be accepting states. An input stream is fed into the machine one character at a time, and the state transitions for the current state are compared to the input stream, and if there is a matching transition the machine may enter a new state. If at the end of the input stream the machine is in an accepting state, then the whole input stream is accepted.
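The acceptance procedure just described can be written down directly. A minimal sketch (the even-ones machine is our own illustrative example):

```python
def run_dfa(transitions, start, accepting, input_string):
    """Deterministic finite automaton: exactly one transition per
    (state, symbol) pair; accept iff the final state is accepting."""
    state = start
    for symbol in input_string:
        state = transitions[(state, symbol)]
    return state in accepting

# Example: binary strings containing an even number of '1's.
even_ones = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}
run_dfa(even_ones, "even", {"even"}, "1001")   # True: two '1's
run_dfa(even_ones, "even", {"even"}, "1011")   # False: three '1's
```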
Nondeterministic finite state machine
Similarly called a nondeterministic finite automaton (NFA), it is another simple model of computation, although its processing sequence is not uniquely determined. It can be interpreted as taking multiple paths of computation simultaneously through a finite number of states. However, it can be proved that any NFA can be converted into an equivalent DFA, so the two models accept the same languages.
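One way to see the NFA-to-DFA reduction is to simulate the NFA by tracking the whole set of states it could be in; that set plays the role of a single DFA state. A sketch (the "ends in ab" machine is our illustrative example; ε-transitions are omitted for brevity):

```python
def run_nfa(transitions, start_states, accepting, input_string):
    """Nondeterministic finite automaton: follow every possible path at
    once by keeping the set of currently reachable states."""
    current = set(start_states)
    for symbol in input_string:
        current = {t for s in current
                     for t in transitions.get((s, symbol), ())}
    return bool(current & set(accepting))

# Example: strings over {a, b} that end in "ab".
ends_in_ab = {
    ("q0", "a"): ("q0", "q1"),   # guess: this 'a' starts the final "ab"
    ("q0", "b"): ("q0",),
    ("q1", "b"): ("q2",),
}
run_nfa(ends_in_ab, {"q0"}, {"q2"}, "abab")   # True
run_nfa(ends_in_ab, {"q0"}, {"q2"}, "abba")   # False
```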
Pushdown automaton
Similar to the finite state machine, except that it has available an execution stack, which is allowed to grow to arbitrary size. The state transitions additionally specify whether to add a symbol to the stack, or to remove a symbol from the stack. It is more powerful than a DFA due to its infinite-memory stack, although only some information in the stack is ever freely accessible.
Turing machine
Also similar to the finite state machine, except that the input is provided on an execution “tape”, which the Turing machine can read from, write to, or move back and forth past its read/write “head”. The tape is allowed to grow to arbitrary size. The Turing machine is capable of performing complex calculations which can have arbitrary duration. This model is perhaps the most important model of computation in computer science, as it simulates computation in the absence of predefined resource limits.
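A one-tape Turing machine takes only a few lines to simulate. The bit-flipping machine below is our own toy example, and the step limit is a practical guard for the simulation, not part of the model:

```python
def run_tm(rules, tape_input, state="start", max_steps=10_000):
    """Simulate a one-tape Turing machine.  rules maps (state, symbol) to
    (next_state, symbol_to_write, head_move); '_' is the blank symbol."""
    tape = dict(enumerate(tape_input))   # dict so the tape can grow
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape[i] for i in sorted(tape)).strip("_")
        state, write, move = rules[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    raise RuntimeError("step limit reached; the machine may not halt")

# Toy machine: flip every bit, then halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt",  "_",  0),
}
run_tm(flip, "1010")   # -> "0101"
```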
Multi-tape Turing machine
Here, there may be more than one tape; moreover there may be multiple heads per tape. Surprisingly, any computation that can be performed by this sort of machine can also be performed by an ordinary Turing machine, although the latter may be slower or require a larger total region of its tape.

Power of automata

With these computational models in hand, we can determine what their limits are. That is, what classes of languages can they accept?

Power of finite state machines

Computer scientists call any language that can be accepted by a finite state machine a regular language. Because of the restriction that the number of possible states in a finite state machine is finite, we can see that to find a language that is not regular, we must construct a language that would require an infinite number of states.

An example of such a language is the set of all strings consisting of the letters ‘a’ and ‘b’ which contain an equal number of the letter ‘a’ and ‘b’. To see why this language cannot be correctly recognized by a finite state machine, assume first that such a machine M exists. M must have some number of states n. Now consider the string x consisting of (n + 1) ‘a’s followed by (n + 1) ‘b’s.

As M reads in x, there must be some state in the machine that is repeated as it reads in the first series of ‘a’s, since there are (n + 1) ‘a’s and only n states by the pigeonhole principle. Call this state S, and further let d be the number of ‘a’s that our machine read in order to get from the first occurrence of S to some subsequent occurrence during the ‘a’ sequence. We know, then, that at that second occurrence of S, we can add in an additional d (where d > 0) ‘a’s and we will be again at state S. This means that we know that a string of (n + d + 1) ‘a’s must end up in the same state as the string of (n + 1) ‘a’s. This implies that if our machine accepts x, it must also accept the string of (n + d + 1) ‘a’s followed by (n + 1) ‘b’s, which is not in the language of strings containing an equal number of ‘a’s and ‘b’s.

We know, therefore, that this language cannot be accepted correctly by any finite state machine, and is thus not a regular language. A more general form of this result is called the Pumping lemma for regular languages, which can be used to show that broad classes of languages cannot be recognized by a finite state machine.
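The pigeonhole step of this argument is mechanical enough to run. The sketch below feeds a single letter repeatedly to a DFA's transition function and finds the guaranteed repeated state (the three-state cycle is our illustrative example):

```python
def find_repeated_state(transitions, start, symbol, n_states):
    """Read `symbol` up to n_states times; by the pigeonhole principle
    some state must repeat, giving the loop the pumping argument uses."""
    seen = {start: 0}
    state = start
    for step in range(1, n_states + 1):
        state = transitions[(state, symbol)]
        if state in seen:
            return seen[state], step   # loop runs from seen[state] to step
        seen[state] = step
    raise AssertionError("unreachable: pigeonhole guarantees a repeat")

# A 3-state machine that cycles on 'a': the repeat closes after 3 steps.
cycle = {(0, "a"): 1, (1, "a"): 2, (2, "a"): 0}
find_repeated_state(cycle, 0, "a", 3)   # -> (0, 3)
```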

Power of pushdown automata

Computer scientists define a language that can be accepted by a pushdown automaton as a Context-free language, which can be specified as a Context-free grammar. The language consisting of strings with equal numbers of ‘a’s and ‘b’s, which we showed was not a regular language, can be decided by a push-down automaton. Also, in general, a push-down automaton can behave just like a finite-state machine, so it can decide any language which is regular. This model of computation is thus strictly more powerful than finite state machines.
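The stack discipline a push-down automaton uses for this language can be sketched directly: push a letter when the stack is empty or holds the same letter, pop when the top holds the other letter, and accept on an empty stack. (This is our own minimal rendering of the idea, not a formal PDA.)

```python
def equal_as_and_bs(s):
    """Decide {w in {a,b}* : w has equally many 'a's and 'b's} with a
    stack: unmatched letters accumulate, opposite letters cancel."""
    stack = []
    for c in s:
        if c not in "ab":
            return False
        if stack and stack[-1] != c:
            stack.pop()        # c cancels one stored opposite letter
        else:
            stack.append(c)    # remember the surplus letter
    return not stack           # accept iff nothing is left unmatched

equal_as_and_bs("aabbab")   # True
equal_as_and_bs("aabab")    # False
```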

However, it turns out there are languages that cannot be decided by a push-down automaton either. The result is similar to that for regular languages and won’t be detailed here; there is a corresponding Pumping lemma for context-free languages. An example of such a language is the set of prime numbers.

Power of Turing machines

Turing machines can decide any context-free language, in addition to languages not decidable by a push-down automaton, such as the language consisting of prime numbers. The Turing machine is therefore a strictly more powerful model of computation.

Because Turing machines have the ability to “back up” in their input tape, it is possible for a Turing machine to run for a long time in a way that is not possible with the other computation models previously described. It is possible to construct a Turing machine that will never finish running (halt) on some inputs. We say that a Turing machine can decide a language if it eventually will halt on all inputs and give an answer. A language that can be so decided is called a recursive language. We can further describe Turing machines that will eventually halt and give an answer for any input in a language, but which may run forever for input strings which are not in the language. Such Turing machines could tell us that a given string is in the language, but we may never be sure based on its behavior that a given string is not in a language, since it may run forever in such a case. A language which is accepted by such a Turing machine is called a recursively enumerable language.

The Turing machine, it turns out, is an exceedingly powerful model of automata. Attempts to amend the definition of a Turing machine to produce a more powerful machine have surprisingly met with failure. For example, adding an extra tape to the Turing machine, or giving it a 2-dimensional (or 3- or any-dimensional) infinite surface to work with, can be simulated by a Turing machine with the basic 1-dimensional tape. These models are thus not more powerful. In fact, a consequence of the Church-Turing thesis is that there is no reasonable model of computation which can decide languages that cannot be decided by a Turing machine.

The question to ask then is: do there exist languages which are recursively enumerable, but not recursive? And, furthermore, are there languages which are not even recursively enumerable?

The halting problem

Main article: Halting problem

The halting problem is one of the most famous problems in computer science, because it has profound implications on the theory of computability and on how we use computers in everyday practice. The problem can be phrased:

Given a description of a Turing machine and its initial input, determine whether the program, when executed on this input, ever halts (completes). The alternative is that it runs forever without halting.

Here we are asking not a simple question about a prime number or a palindrome, but we are instead turning the tables and asking a Turing machine to answer a question about another Turing machine. It can be shown (See main article: Halting problem) that it is not possible to construct a Turing machine that can answer this question in all cases.

That is, the only general way to know for sure whether a given program will halt on a particular input is simply to run it and see if it halts. If it does halt, then you know it halts. If it doesn’t halt, however, you may never know whether it will eventually halt. The language consisting of all Turing machine descriptions, paired with the input streams on which those Turing machines eventually halt, is not recursive. The halting problem is therefore called non-computable or undecidable.
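“Run it and see” is exactly a semi-decision procedure: it halts with an answer on the “yes” instances and runs forever otherwise. A sketch over an abstract step function (the countdown machine is our toy example):

```python
import itertools

def recognize_halting(step, state):
    """Semi-decide halting: run the machine one step at a time and return
    the step count when it halts.  Loops forever if it never halts."""
    for steps in itertools.count():
        if state is None:          # None marks a halted machine
            return steps
        state = step(state)

# Toy machine: count down to zero, then halt.
countdown = lambda n: None if n == 0 else n - 1
recognize_halting(countdown, 5)   # -> 6 (five decrements plus the halt step)
```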

An extension of the halting problem is called Rice’s Theorem, which states that it is undecidable (in general) whether a given language possesses any specific nontrivial property.

Beyond recursive languages

The halting problem is easy to recognize, however, if we allow the Turing machine that answers it to run forever when given an input representing a Turing machine that does not itself halt. The halting language is therefore recursively enumerable. It is possible, however, to construct languages which are not even recursively enumerable.

A simple example of such a language is the complement of the halting language: the language consisting of all Turing machines paired with input strings on which those machines do not halt. To see that this language is not recursively enumerable, suppose we could construct a Turing machine M that gives a definite answer for all such machine–input pairs, but that may run forever on any pair whose machine does eventually halt. We can then construct another Turing machine N that simulates the operation of M while also directly simulating the execution of the machine given in the input, interleaving the two computations. Since the direct simulation eventually halts if the simulated program halts, and since by assumption the simulation of M eventually halts if the input program never halts, we know that one of the two interleaved runs of N eventually halts. N is thus a decider for the halting problem. We have previously shown, however, that the halting problem is undecidable. This contradiction shows that our assumption that M exists is incorrect. The complement of the halting language is therefore not recursively enumerable.
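The interleaving trick used in this argument can be sketched with Python generators, which make “one step of each machine in turn” explicit. The two toy computations are ours:

```python
def dovetail(gen_a, gen_b):
    """Run two possibly non-terminating computations in parallel, one
    step each per round, and report which one finishes first."""
    while True:
        try:
            next(gen_a)
        except StopIteration as done:
            return "a", done.value
        try:
            next(gen_b)
        except StopIteration as done:
            return "b", done.value

def runs_forever():
    while True:
        yield              # one 'step' per yield; never returns

def halts_after(n):
    for _ in range(n):
        yield
    return n               # a generator's return value is its 'answer'

dovetail(runs_forever(), halts_after(3))   # -> ("b", 3)
```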

Concurrency-based models

A number of computational models based on concurrency have been developed, including the Parallel Random Access Machine and the Petri net. These models of concurrent computation still do not implement any mathematical functions that cannot be implemented by Turing machines.

Unreasonable models of computation

The Church-Turing thesis conjectures that there is no reasonable model of computing that can compute more mathematical functions than a Turing machine. In this section we will explore some of the “unreasonable” ideas for computational models which violate this conjecture. Computer scientists have imagined many varieties of hypercomputers.

Infinite execution

Imagine a machine where each step of the computation requires half the time of the previous step. If we normalize the time required for the first step to 1 time unit, the execution would require

1 + 1/2 + 1/4 + 1/8 + ⋯

time to run. This infinite series converges to 2 time units, which means that this machine can perform an infinite execution in 2 time units. Such a machine would be capable of deciding the halting problem by directly simulating the execution of the machine in question. By extension, any convergent series would work: assuming the series converges to a value n, the machine would complete an infinite execution in n time units.
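The partial sums of this series are easy to check numerically; the sketch below simply evaluates 1 + 1/2 + 1/4 + ⋯ for a finite number of accelerating steps:

```python
def total_time(steps):
    """Time used by `steps` accelerating steps, each half as long as
    the previous one: 1 + 1/2 + 1/4 + ... (first `steps` terms)."""
    return sum(0.5 ** k for k in range(steps))

total_time(10)   # -> 1.998046875
total_time(60)   # very close to the limit of 2 time units
```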

Oracle machines

Main article: Oracle machine

So-called Oracle machines have access to various “oracles” which provide the solution to specific undecidable problems. For example, the Turing machine may have a “halting oracle” which answers immediately whether a given Turing machine will ever halt on a given input. These machines are a central topic of study in recursion theory.

Limits of hypercomputation

Even these machines, which seemingly represent the limit of automata that we could imagine, run into their own limitations. While each of them can solve the halting problem for a Turing machine, they cannot solve their own version of the halting problem. For example, an Oracle machine cannot answer the question of whether a given Oracle machine will ever halt.

History of computability theory

The lambda calculus, an important precursor to formal computability theory, was developed by Alonzo Church and Stephen Cole Kleene. Alan Turing is most often considered the father of modern computer science, and laid many of the important foundations of computability and complexity theory, including the first description of the Turing machine (in [1], 1936) as well as many of the important early results.



Gödel’s incompleteness theorems

From Wikipedia, the free encyclopedia

Relationship with computability

As early as 1943, Kleene gave a proof of Gödel’s incompleteness theorem using basic results of computability theory.[8] A basic result of computability shows that the halting problem is unsolvable: there is no computer program that can correctly determine, given a program P as input, whether P eventually halts when run with no input. Kleene showed that the existence of a complete effective theory of arithmetic with certain consistency properties would force the halting problem to be decidable, a contradiction. An exposition of this proof at the undergraduate level was given by Charlesworth (1980).[9]

By enumerating all possible proofs, it is possible to enumerate all the provable consequences of any effective first-order theory. This makes it possible to search for proofs of a certain form. Moreover, the method of arithmetization introduced by Gödel can be used to show that any sufficiently strong theory of arithmetic can represent the workings of computer programs. In particular, for each program P there is a formula Q such that Q expresses the idea that P halts when run with no input. The formula Q says, essentially, that there is a natural number that encodes the entire computation history of P and this history ends with P halting.

If, for every such formula Q, either Q or the negation of Q was a logical consequence of the axiom system, then it would be possible, by enumerating enough theorems, to determine which of these is the case. In particular, for each program P, the axiom system would either prove “P halts when run with no input,” or “P doesn’t halt when run with no input.”
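The search procedure this paragraph describes can be sketched abstractly. Proofs are modeled as natural numbers, and the two checkers are stand-ins for “is n a proof that P halts” and “is n a proof that P does not halt”; in a complete theory one of them must eventually answer, which is what would (impossibly) decide halting:

```python
import itertools

def decide_by_proof_search(proves_halts, proves_not_halts):
    """Enumerate all candidate proofs; if the theory were complete, one
    side would always eventually be found, so the search would halt."""
    for candidate in itertools.count():
        if proves_halts(candidate):
            return True
        if proves_not_halts(candidate):
            return False

# Toy stand-ins: 'candidate 7 proves halting', nothing proves non-halting.
decide_by_proof_search(lambda n: n == 7, lambda n: False)   # -> True
```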

Consistency assumptions imply that the axiom system is correct about these theorems. If the axioms prove that a program P doesn’t halt when the program P actually does halt, then the axiom system is inconsistent, because it is possible to use the complete computation history of P to make a proof that P does halt. This proof would just follow the computation of P step-by-step until P halts after a finite number of steps.

The mere consistency of the axiom system is not enough to obtain a contradiction, however, because a consistent axiom system could still prove the ω-inconsistent theorem that a program halts, when it actually doesn’t halt. The assumption of ω-consistency implies, however, that if the axiom system proves a program doesn’t halt then the program actually does not halt. Thus if the axiom system was consistent and ω-consistent, its proofs about which programs halt would correctly reflect reality. Thus it would be possible to effectively decide which programs halt by merely enumerating proofs in the system; this contradiction shows that no effective, consistent, ω-consistent formal theory of arithmetic that is strong enough to represent the workings of a computer can be complete.


http://es.wikipedia.org/wiki/Teor%C3%ADa_de_la_computabilidad  

Computability theory

From Wikipedia, the free encyclopedia

Computability theory is the part of the theory of computation that studies the decision problems that can be solved by an algorithm, or equivalently by a Turing machine. Computability theory is concerned with four questions:

  • What problems can a Turing machine solve?
  • What other formalisms are equivalent to Turing machines?
  • What problems require more powerful machines?
  • What problems require less powerful machines?

Computational complexity theory classifies computable functions according to how they use various resources on various types of machine.


Background

The origin of abstract models of computation lies in the 1930s (before modern computers existed), in the work of the logicians Alonzo Church, Kurt Gödel, Stephen Kleene, Emil Leon Post, and Alan Turing. These early works have had a profound influence both on theoretical development and on many aspects of computing practice, foreseeing even the existence of general-purpose computers, the possibility of interpreting programs, the duality between software and hardware, and the representation of languages by formal structures based on production rules.

The starting point of these early works was the set of fundamental questions that David Hilbert formulated in 1900 during an international congress.

What Hilbert sought was a complete and consistent formal mathematical system in which every assertion could be stated with precision. His intention was to find an algorithm that would determine the truth or falsity of any proposition in the formal system. The problem in question became known as the Entscheidungsproblem. Had Hilbert achieved his goal, any well-defined problem could be solved simply by running that algorithm.

But others showed, through a series of investigations, that this was not possible. Against this idea, K. Gödel published his famous first incompleteness theorem, which states that any consistent first-order system that contains the theorems of arithmetic and whose set of axioms is recursive is incomplete. Gödel constructed a formula that is true but cannot be proved within the system. As a consequence, the formal system Hilbert wanted cannot be found within first-order logic, unless a non-recursive set of axioms is taken.

A later, more general version of Gödel’s incompleteness theorem states that no deductive system containing the theorems of arithmetic, with a recursively enumerable set of axioms, can be both consistent and complete. This suggests, at an intuitive level, that no such formal system can be defined.

What problems can a Turing machine solve?

Not every problem can be solved. An undecidable problem is one that cannot be solved by an algorithm even if unlimited space and time are available. Many undecidable problems are known today, for example:

  • The Entscheidungsproblem (German for “decision problem”), defined as: given a sentence of first-order predicate calculus, decide whether it is a theorem. Church and Turing independently proved that this problem is undecidable.
  • The halting problem, defined as: given a program and its input, decide whether the program will terminate on that input or run forever. Turing proved that this is an undecidable problem.
  • A computable number is a real number that can be approximated by an algorithm to any desired degree of accuracy. Turing proved that almost all real numbers are not computable. For example, Chaitin’s constant is not computable, even though it is well defined.

What other formalisms are equivalent to Turing machines?

The formal languages accepted by a Turing machine are exactly those that can be generated by a formal grammar. The lambda calculus is a way of defining functions; the functions computable in the lambda calculus are exactly those computable by a Turing machine. These three formalisms (Turing machines, formal grammars, and the lambda calculus) are very different and were developed by different people, yet they are all equivalent and have the same expressive power. This remarkable coincidence is generally taken as evidence for the Church-Turing thesis: the claim that the intuitive notion of an algorithm, or effective procedure of computation, corresponds to the notion of computation on a Turing machine.
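The equivalence with the lambda calculus can be glimpsed even inside Python, whose lambdas are expressive enough to encode Church numerals (a standard construction; the helper `to_int` is ours, used only to decode the result):

```python
# Church numerals: a number n is the function that applies f n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(church):
    """Decode a Church numeral by counting applications of successor."""
    return church(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
four  = succ(three)
to_int(add(three)(four))   # -> 7
```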

Electronic computers based on the Eckert-Mauchly architecture, as well as quantum machines, would have exactly the same expressive power as a Turing machine if they had unlimited time and space. As a consequence, programming languages have at most the expressive power of programs for a Turing machine, and in practice not all of them reach it. Languages with expressive power equivalent to that of a Turing machine are called Turing complete.

Formalisms equivalent to a Turing machine include:

The last three examples use a slightly different definition of acceptance of a language: they accept a word if any computation accepts (in the case of nondeterminism), or if a majority of the computations accept (for the probabilistic and quantum versions). With these definitions, these machines have the same expressive power as a Turing machine.

What problems require more powerful machines?

Some machines are considered to have greater power than Turing machines. For example, an oracle machine uses a black box that can compute a particular function that is not computable by a Turing machine. The computational strength of an oracle machine is described by its Turing degree. The theory of real computation studies machines with absolute precision on the real numbers. Within this theory it is possible to prove interesting statements, such as “the complement of the Mandelbrot set is only partially decidable”.


Consistency proof

From Wikipedia, the free encyclopedia


In mathematical logic, a logical system is consistent if it does not contain a contradiction, or, more precisely, for no proposition φ is it the case that both φ and ¬φ are theorems of that system.

A consistency proof is a mathematical proof that a logical system is consistent. The early development of mathematical proof theory was driven by the desire to provide finitary consistency proofs for all of mathematics as part of Hilbert’s program. Hilbert’s program fell to Gödel’s insight, as expressed in his two incompleteness theorems, that sufficiently strong proof theories cannot prove their own consistency.

Although consistency can be proved by means of model theory, it is often done in a purely syntactical way, without any need to reference a model of the logic. Cut-elimination (or equivalently the normalization of the underlying calculus, if there is one) implies the consistency of the calculus: since there is obviously no cut-free proof of falsity, there is no contradiction in general.


Consistency and completeness

The fundamental results relating consistency and completeness were proven by Kurt Gödel:

By applying these ideas, we see that we can find first-order theories of the following four kinds:

  1. Inconsistent theories, which have no models;
  2. Theories which cannot talk about their own provability relation, such as Tarski’s axiomatisation of point and line geometry, and Presburger arithmetic. Since these theories are satisfactorily described by the model we obtain from the completeness theorem, such systems are complete;
  3. Theories which can talk about their own consistency, and which include the negation of the sentence asserting their own consistency. Such theories are complete with respect to the model one obtains from the completeness theorem, but contain as a theorem the derivability of a contradiction, in contradiction to the fact that they are consistent;
  4. Essentially incomplete theories.

In addition, it has recently been discovered that there is a fifth class of theory, the self-verifying theories, which are strong enough to talk about their own provability relation, but are too weak to carry out Gödelian diagonalisation, and so which can consistently prove their own consistency. However as with any theory, a theory proving its own consistency provides us with no interesting information, since inconsistent theories also prove their own consistency.

Formulas

A set of formulas Φ in first-order logic is consistent (written Con Φ) if and only if there is no formula φ such that Φ ⊢ φ and Φ ⊢ ¬φ. Otherwise Φ is inconsistent, written Inc Φ.

Φ is said to be simply consistent iff for no formula φ of Φ are both φ and the negation of φ theorems of Φ.

Φ is said to be absolutely consistent or Post consistent iff at least one formula of Φ is not a theorem of Φ.

Φ is said to be maximally consistent if and only if for every formula φ, if Con (Φ ∪ {φ}) then φ ∈ Φ.

Φ is said to contain witnesses if and only if for every formula of the form ∃x φ there exists a term t such that Φ ⊢ (∃x φ → φ(t/x)), where φ(t/x) denotes the substitution of t for x in φ. See First-order logic.

[edit] Basic Results

1. The following are equivalent:

(a) Inc Φ

(b) For all φ, Φ ⊢ φ.

2. Every satisfiable set of formulas is consistent, where a set of formulas Φ is satisfiable if and only if there exists an interpretation I such that I ⊨ Φ.

3. For all Φ and φ:

(a) if not Φ ⊢ φ, then Con (Φ ∪ {¬φ});

(b) if Con Φ and Φ ⊢ φ, then Con (Φ ∪ {φ});

(c) if Con Φ, then Con (Φ ∪ {φ}) or Con (Φ ∪ {¬φ}).

4. Let Φ be a maximally consistent set of formulas and contain witnesses. For all φ and ψ:

(a) if Φ ⊢ φ, then φ ∈ Φ,

(b) either φ ∈ Φ or ¬φ ∈ Φ,

(c) (φ ∨ ψ) ∈ Φ if and only if φ ∈ Φ or ψ ∈ Φ,

(d) if (φ → ψ) ∈ Φ and φ ∈ Φ, then ψ ∈ Φ,

(e) ∃x φ ∈ Φ if and only if there is a term t such that φ(t/x) ∈ Φ.

[edit] Henkin’s Theorem

Let Φ be a maximally consistent set of formulas containing witnesses.

Define a binary relation ~ on the set of S-terms by t_0 ~ t_1 if and only if Φ ⊢ t_0 ≡ t_1; let \bar{t} denote the equivalence class of terms containing t; and let T^Φ := { \bar{t} : t ∈ T^S }, where T^S is the set of terms over the symbol set S.

Define the S-structure T^Φ, the term structure corresponding to Φ, by:

(1) for n-ary R ∈ S, R^{T^Φ} \bar{t_0} … \bar{t_{n-1}} if and only if Φ ⊢ R t_0 … t_{n-1},

(2) for n-ary f ∈ S, f^{T^Φ}(\bar{t_0}, …, \bar{t_{n-1}}) := \bar{f t_0 … t_{n-1}},

(3) for c ∈ S, c^{T^Φ} := \bar{c}.

Let I^Φ := (T^Φ, β^Φ) be the term interpretation associated with Φ, where β^Φ(x) := \bar{x}.

Then, for all φ, I^Φ ⊨ φ if and only if Φ ⊢ φ.

[edit] Sketch of Proof

There are several things to verify. First, that ~ is indeed an equivalence relation. Then it must be verified that (1), (2), and (3) are well defined; this relies on ~ being an equivalence relation, and also requires a proof that (1) and (2) are independent of the choice of class representatives. Finally, I^Φ ⊨ φ if and only if Φ ⊢ φ can be verified by induction on formulas.
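The quotient construction at the heart of the term model can be illustrated with a small Python sketch. Everything here (the term list, the proved equations, the unary symbol f) is an invented toy example, not part of the theorem itself: equivalence classes of terms are computed with union-find, and the interpretation of f on classes mirrors clause (2) above.

```python
# Toy sketch of the term-model quotient: given ground terms and equations the
# theory proves, build equivalence classes and interpret a unary symbol f.

class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

terms = ["c", "d", "f(c)", "f(d)", "f(f(c))"]
proved_equations = [("c", "d"), ("f(c)", "f(d)")]  # assume Phi proves c = d and f(c) = f(d)

uf = UnionFind()
for lhs, rhs in proved_equations:
    uf.union(lhs, rhs)

# Each term's class representative plays the role of its equivalence class \bar{t}.
classes = {t: uf.find(t) for t in terms}

def f_interp(rep):
    """Interpretation of f on classes: f(\bar{t}) := class of f(t).
    Well-definedness is exactly the proof obligation mentioned in the sketch."""
    witness = next(t for t in terms if uf.find(t) == rep)
    return uf.find(f"f({witness})")

print(classes["c"] == classes["d"])               # the classes of c and d coincide
print(f_interp(classes["c"]) == classes["f(d)"])  # f respects the quotient
```

The union-find structure is incidental; any way of computing the partition induced by the proved equations would do.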


[edit] References

H.-D. Ebbinghaus, J. Flum, W. Thomas, Mathematical Logic, Springer.


http://es.wikipedia.org/wiki/Consistencia_l%C3%B3gica  

  

Logical consistency

From Wikipedia, the free encyclopedia

Logical consistency is a property of a set of axioms. A set of axioms is said to be consistent if no proposition (p) and its negation (¬p, not-p) can both be deduced from it simultaneously. By Gödel's incompleteness theorem we know that, for systems of a certain complexity, this property is related to completeness.

Applied to an argument, consistency is the requirement that all the premises can be true at the same time; only then does the question of whether the argument is valid or invalid arise.

Applied to discourse, consistency means that the logical implications of the discourse are not self-contradictory.



 http://es.wikipedia.org/wiki/Teoremas_de_la_incompletitud_de_G%C3%B6del 

  

Gödel's incompleteness theorems

From Wikipedia, the free encyclopedia

In mathematical logic, Gödel's incompleteness theorems are two celebrated theorems proved by Kurt Gödel in 1931. Simplifying, the first theorem states:

In any consistent formalization of mathematics strong enough to define the concept of the natural numbers, one can construct a statement that can neither be proved nor refuted within that system.

This theorem is one of the most famous theorems outside mathematics, and one of the most poorly understood. It is a theorem of formal logic, and as such it is easy to misinterpret. There are many statements that sound similar to Gödel's first incompleteness theorem but are in fact not true. These are discussed under Misunderstandings surrounding Gödel's theorems.

Gödel's second incompleteness theorem, which is proved by formalizing part of the proof of the first theorem within the system itself, states:

No consistent system can be used to prove its own consistency.

This result was devastating for the philosophical approach to mathematics known as Hilbert's formalization program. David Hilbert had proposed that the consistency of more complex systems, such as real analysis, could be proved in terms of simpler systems. Ultimately, the consistency of all of mathematics would be reduced to basic arithmetic. Gödel's second incompleteness theorem shows that basic arithmetic cannot be used to prove its own consistency, and therefore cannot prove the consistency of anything stronger.


Meaning of Gödel's theorems [edit]

Gödel's theorems are theorems of first-order logic and must be understood in that context. In formal logic, both mathematical statements and proofs are written in a symbolic language in which the validity of proofs can be checked mechanically. In this way there can be no doubt that a theorem follows from our initial list of axioms. In theory, such proofs can be verified by computer, and in fact there are programs that do so (this is called automated reasoning).

To carry out this process one needs to know what the axioms are. One may start from a finite set of axioms, as in Euclidean geometry, or, more generally, one may allow an infinite number of axioms, with the requirement that, given a statement, it can be mechanically verified whether it is one of the axioms. Although using an infinite number of axioms may sound strange, this is precisely what is done routinely for the natural numbers: the Peano axioms.

Gödel's first incompleteness theorem shows that any system that allows one to define the natural numbers is necessarily incomplete: it contains statements that can neither be proved nor refuted.

The existence of an incomplete system is not in itself particularly surprising. For example, removing the parallel postulate from Euclidean geometry yields an incomplete system. An incomplete system may simply mean that not all the necessary axioms have been discovered.

What Gödel showed is that in most cases, as in number theory or real analysis, the complete set of axioms can never be discovered. Each time a new axiom is added, there will always be another statement that remains out of reach.

One can also add an infinite set of axioms, for example all true statements about the natural numbers, but such a list will not be a recursive set: given an arbitrary statement, there will be no way to tell whether it is an axiom of the system. Given a proof, there will in general be no way to check that the proof is valid.

Gödel's theorem has another interpretation in the context of computer science. In first-order logic, the theorems are recursively enumerable: one can write a computer program that will eventually produce any valid proof. However, the theorems do not satisfy the stronger property of being a recursive set: there is no program that, given an arbitrary statement, determines whether or not it is a theorem.

Many logicians believe that Gödel's incompleteness theorems dealt a fatal blow to Hilbert's formalization program, which aimed at a universal mathematical formalism. The generally accepted position is that it was the second theorem that struck this blow; some, however, think it was the first, and still others think that neither of them did.

Examples of undecidable statements [edit]

The existence of an undecidable statement within a formal system is not in itself a surprising phenomenon.

The subsequent combined work of Gödel and Paul Cohen produced concrete examples of undecidable statements: both the axiom of choice and the continuum hypothesis are undecidable in the standard axiomatization of set theory. These results do not require the incompleteness theorem.

In 1936, Alan Turing proved that the halting problem (the question of whether a Turing machine halts on a given input) is undecidable. This result was later generalized, in the field of recursive functions, to Rice's theorem, which shows that all non-trivial decision problems are undecidable in a system that is Turing-complete.

In 1973, the Whitehead problem in group theory was shown to be undecidable in standard set theory. In 1977, Kirby, Paris and Harrington proved that a statement in combinatorics, a version of Ramsey's theorem, is undecidable in the axiomatization of arithmetic given by the Peano axioms but provable in the larger system of set theory. Kruskal's tree theorem, which has implications in computer science, is likewise undecidable from the Peano axioms but provable in set theory. Similarly, Goodstein's theorem is a relatively simple statement about the natural numbers that is undecidable in Peano arithmetic.

Gregory Chaitin produced undecidable statements in algorithmic information theory, and in fact proved his own incompleteness theorem in that context.

One of the first problems suspected to be undecidable was the word problem for groups, first posed by Max Dehn in 1911: it asks whether there is a finitely presented group for which no algorithm can decide whether two words over its generators denote the same element. The undecidability of this problem was not proved until 1952.

Misunderstandings surrounding Gödel's theorems [edit]

Since Gödel's first incompleteness theorem is so famous, it has given rise to many misunderstandings. Here we summarize some:

  1. The theorem does not imply that every interesting axiomatic system is incomplete. For example, Euclidean geometry can be axiomatized so as to form a complete system. (In fact, Euclid's original axioms are very nearly a complete axiomatization. The missing axioms express properties that seem so obvious that it took the emergence of the idea of formal proof before their absence was noticed.) However, even in a complete system such as geometry there are impossible constructions (trisecting the angle, squaring the circle).
  2. The theorem only applies to systems that allow one to define the natural numbers as a set. It is not enough for the system to contain the natural numbers: it must also be able to express the concept "x is a natural number" using the axioms and first-order logic. There are plenty of systems that contain the natural numbers and are complete. For example, both the real numbers and the complex numbers have complete axiomatizations.

Discussion and implications [edit]

The incompleteness results affect the philosophy of mathematics, particularly viewpoints such as formalism, which uses formal logic to define its principles. The first theorem can be paraphrased as: "an axiomatic system capable of proving all mathematical truths and no falsehoods can never be found."

On the other hand, from a strictly formalist perspective this paraphrase would be considered meaningless, because it presupposes that mathematical "truth" and "falsehood" are well defined in an absolute sense, rather than relative to each formal system.

The following rephrasing of the second theorem is even more unsettling for the foundations of mathematics:

If an axiomatic system can be proved consistent from within itself, then it is inconsistent.

Therefore, to establish the consistency of a system S one must use another system T, but a proof in T is not fully convincing unless the consistency of T has already been established without using S. The consistency of the Peano axioms for the natural numbers, for example, can be proved in set theory, but not in the theory of the natural numbers alone. This gives a negative answer to problem number two on David Hilbert's famous list of important open questions in mathematics (known as Hilbert's problems).

In principle, Gödel's theorems still leave some hope: it might be possible to produce a general algorithm that, for a given statement, determines whether it is undecidable, allowing mathematicians to avoid undecidable problems entirely. However, the negative answer to the Entscheidungsproblem shows that no such algorithm exists.

Note that Gödel's theorems only apply to sufficiently strong axiomatic systems. This term means that the theory contains enough arithmetic to carry out the coding instructions required by the proof of the first incompleteness theorem. Essentially, all that is required are some basic facts about addition and multiplication, as formalized, for example, in Robinson arithmetic Q. There are even weaker axiomatic systems that are consistent and complete, for example Presburger arithmetic, which proves all true first-order statements involving only addition.

The axiomatic system may consist of infinitely many axioms (as first-order Peano arithmetic does), but for Gödel's theorem to apply there must be an effective algorithm capable of checking the correctness of proofs. For example, the set of all first-order statements that are true in the standard model of the natural numbers is complete. Gödel's theorem cannot be applied, because there is no effective procedure that decides whether a given statement is an axiom. Indeed, that this is so is a consequence of Gödel's first incompleteness theorem.

Another example of a theory to which Gödel's first theorem does not apply can be constructed as follows: order all possible statements about the natural numbers first by length and then lexicographically; start with an axiom system initially equal to the Peano axioms; go through the list of statements one by one, and, if the current statement can neither be proved nor refuted from the current axiom system, add it to the system. This creates a system that is complete, consistent, and sufficiently powerful, but not recursively enumerable.
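The completion process just described can be imitated in a decidable setting. The following Python sketch is a toy propositional analogue (the two variables and the fixed enumeration of formulas are assumptions of this sketch, and semantic consistency stands in for provability): since propositional consistency is decidable by truth tables, the walk through the enumeration really is effective here, unlike in the arithmetic case.

```python
# Toy Lindenbaum-style completion: walk through an enumeration of formulas and
# keep each one that stays consistent with the set built so far.
from itertools import product

VARS = ("p", "q")  # assumed toy alphabet; formulas are Python boolean expressions

def satisfiable(formulas):
    """Brute-force truth-table check: some assignment makes every formula true."""
    for values in product([True, False], repeat=len(VARS)):
        env = dict(zip(VARS, values))
        if all(eval(f, {}, env) for f in formulas):
            return True
    return False

enumeration = ["p or not p", "p and q", "not p", "q", "not q"]

theory = []
for phi in enumeration:
    if satisfiable(theory + [phi]):  # adding phi keeps the set consistent
        theory.append(phi)

print(theory)  # 'not p' and 'not q' are rejected; the rest survive
```

The resulting set decides every formula in the enumeration: for each one, either it or its negation follows from the set, mirroring the completeness of the construction in the text.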

Gödel himself only proved a version of the above theorems that is technically slightly weaker; the first proof of the versions described above was given by J. Barkley Rosser in 1936.

In essence, the proof of the first theorem consists of constructing a statement p within a formal axiomatic system that can be given the following metamathematical interpretation:

p = "This statement cannot be proved."

As such, it can be seen as a modern version of the liar paradox. Unlike the liar's statement, p does not refer directly to itself; the above interpretation can only be "seen" from outside the formal system.

In a paper published in 1957 in the Journal of Symbolic Logic, Raymond Smullyan showed that Gödel's incompleteness results can be obtained for systems much more elementary than those considered by Gödel. Smullyan has also championed simpler proofs with the same scope, based on Alfred Tarski's work on the concept of truth in formal systems. Simpler, but no less philosophically disturbing. Smullyan has not confined his reflections on incompleteness to technical works; they have also inspired celebrated popular books such as What Is the Name of This Book?.

If the axiomatic system is consistent, Gödel's proof shows that p (and its negation) cannot be proved in the system. Therefore p is true (p asserts that it is not provable, and indeed it is not) and yet it cannot be formally proved in the system. Note that adding p to the axioms of the system would not solve the problem: there would be another Gödel sentence for the enlarged theory.

Roger Penrose claims that this (alleged) difference between what can be proved mechanically and what humans can see to be true shows that human intelligence is not mechanical in nature. J. R. Lucas has also defended this claim in Minds, Machines and Gödel.

This view is not widely accepted, because, as Marvin Minsky puts it, human intelligence is capable of erring and of understanding statements that are in fact inconsistent or false. However, Minsky has reported that Kurt Gödel told him personally that he believed human beings have an intuitive, not merely computational, way of arriving at truth, and that his theorem therefore does not limit what can come to be known as true by humans.

The position that the theorem shows humans to have an ability that transcends formal logic can also be criticized as follows: we do not know whether the sentence p is true or not, because we do not (and cannot) know whether the system is consistent. So we do not actually know any truth outside the system. All we know is the following:

Either p is unprovable within the system, or the system is inconsistent.

This statement is easily provable within the system.

Another implication is that Gödel's work motivated Alan Turing (1912-1954) to study which functions can be computed and which cannot. For this he used his Turing machine, a general-purpose machine with which he formalized functions and computation procedures, showing that there exist functions that cannot be computed by any Turing machine. The paradigm of this set of functions is the function that determines "whether a given Turing machine produces a result or instead keeps computing indefinitely". This function, known as the halting problem, became a fundamental tool for proving the uncomputability of other functions.
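Turing's diagonal argument can be sketched in Python. The two candidate deciders below are hypothetical stand-ins; the point of the argument is precisely that no correct `halts` can exist, because the diagonal program defeats every candidate.

```python
# Sketch of the diagonal argument against a halting decider.

def make_diagonal(halts):
    """Given a purported decider halts(f, x) -> bool, build its counterexample."""
    def diagonal():
        if halts(diagonal, None):
            while True:      # the decider said "halts", so loop forever
                pass
        return "halted"      # the decider said "loops", so halt immediately
    return diagonal

# Candidate 1: claims every program loops. Its diagonal program halts at once,
# so the candidate is wrong about it, and we can actually run it to check.
says_loops = lambda f, x: False
d1 = make_diagonal(says_loops)
print(d1())                  # returns immediately, refuting says_loops

# Candidate 2: claims every program halts. Its diagonal program really would
# loop forever, so we reason about it instead of running it.
says_halts = lambda f, x: True
d2 = make_diagonal(says_halts)
print(says_halts(d2, None))  # the claim -- which d2 itself falsifies
```

Any other candidate decider would be defeated the same way: whatever it answers on its own diagonal program, the program does the opposite.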

Proof sketch for the first theorem [edit]

The main difficulty in fleshing out the proof idea just described is the following: to construct a statement p equivalent to "p cannot be proved", p would somehow have to contain a reference to p, which could give rise to an infinite regress. Below we describe Gödel's ingenious trick, which Alan Turing would later use to solve the Entscheidungsproblem.

To begin with, every formula or statement that can be formulated in our system receives a unique identifier, called its Gödel number. This is done in such a way that it is easy to convert mechanically between formulas and Gödel numbers. Since our system is strong enough to reason about numbers, it is now also possible to reason about formulas.
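One standard numbering scheme is prime-power coding, sketched below with an invented toy alphabet (the alphabet and symbol codes are assumptions of this sketch, not Gödel's originals). Unique prime factorization is what makes the encoding mechanically invertible, as the text requires.

```python
# Prime-power Gödel numbering: a formula c1 c2 ... ck over a fixed alphabet is
# encoded as 2^n1 * 3^n2 * 5^n3 * ..., where ni is the code of symbol ci.

SYMBOLS = "0Sx=+*()"  # toy alphabet; the code of a symbol is its 1-based position

def primes():
    """Generate 2, 3, 5, 7, ... by trial division (slow but simple)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(formula):
    g = 1
    for p, ch in zip(primes(), formula):
        g *= p ** (SYMBOLS.index(ch) + 1)
    return g

def decode(g):
    """Recover the formula by reading off prime exponents."""
    out = []
    for p in primes():
        if g == 1:
            return "".join(out)
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(SYMBOLS[e - 1])

n = godel_number("0=0")
print(n)          # 2^1 * 3^4 * 5^1 = 810
print(decode(n))  # round-trips back to "0=0"
```

Both directions are purely mechanical arithmetic, which is exactly what lets a theory of numbers talk about its own formulas.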

A formula F(x) containing exactly one free variable x is called a statement form. As soon as x is replaced by a specific number, the statement form turns into a bona fide statement, which is then either provable in the system or not. Statement forms themselves are not statements and therefore cannot be proved or refuted. However, every statement form F(x) has a Gödel number, which we denote by G(F). The choice of the free variable used in the form F(x) is not relevant to the assignment of the Gödel number G(F).

By carefully analyzing the axioms and rules of the system, one can write down a statement form P(x) that embodies the idea that x is the Gödel number of a statement provable in our system. Formally: P(x) can be proved if x is the Gödel number of a provable statement, and its negation \bar P(x) can be proved if it is not. (Although this is adequate for this proof sketch, it is not technically completely accurate. See Gödel's paper for the problem and Rosser's paper for the resolution; the key phrase is omega-consistency.)

Now comes the trick: a statement form F(x) is called self-unprovable if the form F, applied to its own Gödel number, is not provable. This concept can be defined formally, and one can construct a statement form SU(z) whose interpretation is that z is the Gödel number of a self-unprovable statement form. Formally, SU(z) is defined as: z = G(F) for some particular form F(x), y is the Gödel number of the statement F(G(F)), and \bar P(y) holds. Now the desired statement p, mentioned earlier, can be defined as:

p = SU(G(SU)).

Intuitively, when we ask whether p is true, we are asking: "Is the property of being self-unprovable itself self-unprovable?" This is reminiscent of the barber paradox about a barber who shaves exactly those people in town who do not shave themselves: does he shave himself?
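The same trick of applying a form to its own code, which avoids the infinite regress, is what makes a quine possible: a program that reproduces its own text without ever literally containing itself. A minimal Python sketch:

```python
# A quine built by the diagonal trick: a template applied to the code of that
# very template, echoing p = SU(G(SU)).

template = 'template = {t!r}\nprint(template.format(t=template))'
program = template.format(t=template)

# Running `program` prints an exact copy of `program` itself.
import io, contextlib
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(program)
print(buf.getvalue() == program + "\n")  # the program reproduced itself
```

The template plays the role of the statement form F(x), and `template.format(t=template)` plays the role of applying F to its own Gödel number G(F).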

We now assume that our axiomatic system is consistent.

If p were provable, then SU(G(SU)) would be true, and by the definition of SU, z = G(SU) would be the Gödel number of a self-unprovable statement form. Hence SU would be self-unprovable, which by the definition of that term implies that SU(G(SU)) is not provable; but that was our p: p is not provable. This contradiction shows that p cannot be provable.

If the negation of p = SU(G(SU)) were provable, then by the definition of SU this would mean that z = G(SU) is not the Gödel number of a self-unprovable form, which implies that SU is not self-unprovable. By the definition of self-unprovable, we conclude that SU(G(SU)) is provable, and hence p is provable. Again a contradiction. This shows that the negation of p cannot be provable either.

Hence the statement p can neither be proved nor refuted within our system.

Proof sketch for the second theorem [edit]

Let p be the undecidable sentence constructed above, and assume that the consistency of the system can be proved within the system itself. We have seen above that if the system is consistent, then p is not provable. The proof of this implication can be formalized within the system, and therefore the statement "p is not provable", i.e. "not P(p)", can be proved in the system.

But this last statement is equivalent to p itself (and this equivalence can be proved in the system), so p can be proved in the system. This contradiction shows that the system must be inconsistent.


External links and references [edit]

Raymond Smullyan, Gödel's Incompleteness Theorems, Oxford University Press, 1992. ISBN 0195046722

Links in English.

Translated into Spanish in: Kurt Gödel: Obras completas. Jesús Mosterín et al. (trans.), Alianza Editorial, Madrid (1981). ISBN 8420622869

Links in Spanish:

Ignacio Jané, La obra de Gödel en lógica matemática y teoría de conjuntos. A synthetic and historical introduction that respects the original concepts and avoids misunderstandings.

Review (in Spanish) of Torkel Franzén, Gödel's Theorem: An Incomplete Guide to Its Use and Abuse. Franzén's 2005 book is widely cited as a useful introduction to the true meaning of Gödel's theorems and a caution against their unjustified application in non-mathematical fields.


Gödel’s incompleteness theorems

From Wikipedia, the free encyclopedia


In mathematical logic, Gödel’s incompleteness theorems, proved by Kurt Gödel in 1931, are two theorems stating inherent limitations of all but the most trivial formal systems for arithmetic of mathematical interest.

The theorems are also of considerable importance to the philosophy of mathematics. They are widely regarded as showing that Hilbert’s program to find a complete and consistent set of axioms for all of mathematics is impossible, thus giving a negative answer to Hilbert’s second problem. Authors such as J. R. Lucas have argued that the theorems have implications in wider areas of philosophy and even cognitive science, and that they prevent any complete Theory of Everything from being found in physics, but these claims are less generally accepted.


[edit] First incompleteness theorem

Gödel’s first incompleteness theorem, perhaps the single most celebrated result in mathematical logic, states that:

For any consistent formal, computably enumerable theory that proves basic arithmetical truths, an arithmetical statement that is true, but not provable in the theory, can be constructed.[1] That is, any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete.

Here, “theory” refers to an infinite set of statements, some of which are taken as true without proof (these are called axioms), and others (the theorems) that are taken as true because they are implied by the axioms. “Provable in the theory” means “derivable from the axioms and primitive notions of the theory, using standard first-order logic“. A theory is “consistent” if it never proves a contradiction. “Can be constructed” means that some mechanical procedure exists which can construct the statement, given the axioms, primitives, and first order logic. “Elementary arithmetic” consists merely of addition and multiplication over the natural numbers. The resulting true but unprovable statement is often referred to as “the Gödel sentence” for the theory, although there are infinitely many other statements in the theory that share with the Gödel sentence the property of being true but not provable from the theory.

The hypothesis that the theory is computably enumerable means that it is possible in principle to write a computer program that (if allowed to run forever) would list all the theorems of the theory and no other statements. In fact, it is enough to enumerate the axioms in this manner since the theorems can then be effectively generated from them.
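As a toy illustration of such effective generation (Hofstadter's MIU string-rewriting system, chosen here purely for brevity rather than anything arithmetical), a breadth-first search lists the theorems one by one, even though no rule tells us in advance whether an arbitrary string such as "MU" will ever appear.

```python
# Enumerating the theorems of an effectively generated toy theory: Hofstadter's
# MIU system, with the single axiom "MI" and four rewriting rules.
from collections import deque

def successors(s):
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                    # rule 1: xI  -> xIU
    out.add(s + s[1:])                      # rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i+3] == "III":
            out.add(s[:i] + "U" + s[i+3:])  # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i+2] == "UU":
            out.add(s[:i] + s[i+2:])        # rule 4: UU  -> (deleted)
    return out

def enumerate_theorems(limit):
    """Breadth-first closure of the axiom under the rules, up to `limit` strings."""
    seen, queue = {"MI"}, deque(["MI"])
    while queue and len(seen) < limit:
        for t in successors(queue.popleft()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

theorems = enumerate_theorems(50)
print("MIU" in theorems)  # derivable from MI in one step
print("MU" in theorems)   # absent here; in fact never derivable
```

Listing theorems is mechanical, but deciding non-theoremhood is another matter: that "MU" never appears follows only from an invariant argument (the number of I's is never divisible by 3), not from the enumeration itself, which mirrors the gap between recursively enumerable and recursive.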

The first incompleteness theorem first appeared as “Theorem VI” in Gödel’s 1931 paper On Formally Undecidable Propositions in Principia Mathematica and Related Systems I. In Gödel’s original notation, it states:

“The general result about the existence of undecidable propositions reads as follows:

“Theorem VI. For every ω-consistent recursive class κ of FORMULAS there are recursive CLASS SIGNS r, such that neither v Gen r nor Neg(v Gen r) belongs to Flg(κ) (where v is the FREE VARIABLE of r).”[2] (van Heijenoort translation and typesetting 1967:607. “Flg” is from “Folgerungsmenge = set of consequences” and “Gen” is from “Generalisation = generalization” (cf. Meltzer and Braithwaite 1962, 1992 edition:33-34))

Roughly speaking, the Gödel statement, G, asserts: “G cannot be proven true”. If G were able to be proven true under the theory’s axioms, then the theory would have a theorem, G, which contradicts itself, and thus the theory would be inconsistent. But if G were not provable, then it would be true (for G expresses this very fact) and thus the theory would be incomplete.

The argument just given is in ordinary English and thus not mathematically rigorous. In order to provide a rigorous proof, Gödel represented statements by numbers; then the theory, which is already about numbers, also pertains to statements, including its own. Questions about the provability of statements are represented as questions about the properties of numbers, which would be decidable by the theory if it were complete. In these terms, the Gödel sentence is a claim that there does not exist a natural number with a certain property. A number with that property would be a proof of inconsistency of the theory. If there were such a number then the theory would be inconsistent, contrary to hypothesis. So, assuming the theory is consistent (as done in the theorem’s hypothesis) there is no such number, and the Gödel statement is true, but the theory cannot prove it. An important conceptual point is that we must assume that the theory is consistent in order to state that this statement is true.

[edit] Extensions of Gödel’s original result

Gödel demonstrated the incompleteness of the theory of Principia Mathematica, a particular theory of arithmetic, but a parallel demonstration could be given for any effective theory of a certain expressiveness. Gödel commented on this fact in the introduction to his paper, but restricted the proof to one system for concreteness. In modern statements of the theorem, it is common to state the effectiveness and expressiveness conditions as hypotheses for the incompleteness theorem, so that it is not limited to any particular formal theory.

Gödel’s original statement and proof of the incompleteness theorem requires the assumption that the theory is not just consistent but ω-consistent. A theory is ω-consistent if it is not ω-inconsistent, and is ω-inconsistent if there is a predicate P such that for every specific natural number n the theory proves ~P(n), and yet the theory also proves that there exists a natural number n such that P(n). That is, the theory says that a number with property P exists while denying that it has any specific value. The ω-consistency of a theory implies its consistency, but consistency does not imply ω-consistency. J. Barkley Rosser later strengthened the incompleteness theorem by finding a variation of the proof that does not require the theory to be ω-consistent, merely consistent. This is mostly of technical interest, since all true formal theories of arithmetic, that is, theories with only axioms that are true statements about natural numbers, are ω-consistent and thus Gödel’s theorem as originally stated applies to them. The stronger version of the incompleteness theorem that only assumes consistency, not ω-consistency, is now commonly known as Gödel’s incompleteness theorem.

[edit] Second incompleteness theorem

Gödel’s second incompleteness theorem can be stated as follows:

For any formal recursively enumerable (i.e. effectively generated) theory T including basic arithmetical truths and also certain truths about formal provability, T includes a statement of its own consistency if and only if T is inconsistent.

(Proof of the “if” part:) If T is inconsistent then anything can be proved, including that T is consistent. (Proof of the “only if” part:) If T is consistent then T does not include the statement of its own consistency. This follows from the first theorem.

There is a technical subtlety involved in the second incompleteness theorem, namely how exactly are we to express the consistency of T in the language of T. There are many ways to do this, and not all of them lead to the same result. In particular, different formalizations of the claim that T is consistent may be inequivalent in T, and some may even be provable. For example, first order arithmetic (Peano arithmetic or PA for short) can prove that the largest consistent subset of PA is consistent. But since PA is consistent, the largest consistent subset of PA is just PA, so in this sense PA “proves that it is consistent”. What PA does not prove is that the largest consistent subset of PA is, in fact, the whole of PA. (The term “largest consistent subset of PA” is rather vague, but what is meant here is the largest consistent initial segment of the axioms of PA ordered according to some criteria, e.g. by “Gödel numbers”, the numbers encoding the axioms as per the scheme used by Gödel mentioned above).

In the case of Peano arithmetic or any familiar explicitly axiomatized theory T, it is possible to define the consistency “Con(T)” of T in terms of the non-existence of a number with a certain property, as follows: “there does not exist an integer coding a sequence of sentences, such that each sentence is either one of the (canonical) axioms of T, a logical axiom, or an immediate consequence of preceding sentences according to the rules of inference of first order logic, and such that the last sentence is a contradiction”. However, for arbitrary T there is no canonical choice for Con(T).

The formalization of Con(T) depends on two factors: formalizing the notion of a sentence being derivable from a set of sentences and formalizing the notion of being an axiom of T. Formalizing derivability can be done in canonical fashion, so given an arithmetical formula A(x) defining a set of axioms, we can canonically form the predicate ProvA(P) which expresses that P is provable from the set of axioms defined by A(x). Using this predicate we can express Con(T) as “not ProvA(‘P and not-P’)”. Solomon Feferman showed that Gödel’s second incompleteness theorem goes through when the formula A(x) is chosen so that it has the form “there exists a number n satisfying the decidable predicate P” for some P. In addition, ProvA(P) must satisfy the so-called Hilbert–Bernays provability conditions:

1. If T proves P, then T proves ProvA(P)

2. T proves 1., i.e. T proves that if T proves P, then T proves ProvA(P)

3. T proves that if T proves (P implies Q), then provability of P implies provability of Q; i.e., T proves that ProvA(P implies Q) implies (ProvA(P) implies ProvA(Q))

Gödel’s second incompleteness theorem also implies that a theory T1 satisfying the technical conditions outlined above can’t prove the consistency of any theory T2 which proves the consistency of T1. This is because then T1 can prove that if T2 proves the consistency of T1, then T1 is in fact consistent. For the claim that T1 is consistent has the form “for all numbers n, n has the decidable property of not being a code for a proof of contradiction in T1”. If T1 were in fact inconsistent, then T2 would prove for some n that n is the code of a contradiction in T1. But if T2 also proved that T1 is consistent, i.e. that there is no such n, it would itself be inconsistent. We can carry out this reasoning in T1 and conclude that if T2 is consistent, then T1 is consistent. Since, by the second incompleteness theorem, T1 does not prove its consistency, it can’t prove the consistency of T2 either.

This easy corollary of the second incompleteness theorem shows that there is no hope of proving e.g. the consistency of first order arithmetic using finitistic means provided we accept that finitistic means are correctly formalized in a theory the consistency of which is provable in PA. It’s generally accepted that the theory of primitive recursive arithmetic (PRA) is an accurate formalization of finitistic mathematics, and PRA is provably consistent in PA. Thus PRA can’t prove the consistency of PA. This is generally seen to show that Hilbert’s program, which is to use “ideal” mathematical principles to prove “real” (finitistic) mathematical statements by showing that the “ideal” principles are consistent by finitistically acceptable principles, can’t be carried out.

This corollary is actually what makes the second incompleteness theorem epistemically relevant. As Georg Kreisel remarked, it would actually provide no interesting information if a theory T proved its consistency. This is because inconsistent theories prove everything, including their consistency. Thus a consistency proof of T in T would give us no clue as to whether T really is consistent; no doubts about T’s consistency would be resolved by such a consistency proof. The interest in consistency proofs lies in the possibility of proving the consistency of a theory T in some theory T’ which is in some sense less doubtful than T itself, e.g. weaker than T. For most naturally occurring T and T’, such as T = Zermelo-Fraenkel set theory and T’ = primitive recursive arithmetic, the consistency of T’ is provable in T, and thus T’ can’t prove the consistency of T by the above corollary of the second incompleteness theorem.

The consistency of first-order arithmetic has been proved assuming that a certain ordinal called ε0 is well-founded. See Gentzen’s consistency proof.

[edit] Original statement of Gödel’s Theorem XI

While contemporary usage calls it the “Second incompleteness Theorem”, in the original Gödel presented it as his “Theorem XI”. It is stated thus (in the following, “Section 2” is where his Theorem VI appears, and P is Gödel’s abbreviation for Peano Arithmetic):

”The results of Section 2 have a surprising consequence concerning a consistency proof for the system P (and its extensions), which can be stated as follows:

”Theorem XI. Let κ be any recursive consistent63 class of FORMULAS; then the SENTENTIAL FORMULA stating that κ is consistent is not κ-PROVABLE; in particular, the consistency of P is not provable in P,64 provided P is consistent (in the opposite case, of course, every proposition is provable [in P])”. (Brackets in original added by Gödel “to help the reader”, translation and typography in van Heijenoort 1967:614)

63 “κ is consistent” (abbreviated by “Wid(κ)”) is defined as thus: Wid(κ)≡ (Ex)(Form(x) & ~Bewκ(x)).”

(Note: In the original “Bew” has a negation-“bar” written over it, indicated here by ~. “Wid” abbreviates “Widerspruchfreiheit = consistency”, “Form” abbreviates “Formel = formula”, “Bew” abbreviates “Beweisbar = provable” (translations from Meltzer and Braithwaite 1962, 1996 edition:33-34) )
64 This follows if we substitute the empty class of FORMULAS for κ.”

[edit] Meaning of Gödel’s theorems

Gödel’s theorems are theorems about first-order logic, and must ultimately be understood in that context. In formal logic, both mathematical statements and proofs are written in a symbolic language, one where we can mechanically check the validity of proofs so that there can be no doubt that a theorem follows from our starting list of axioms. In theory, such a proof can be checked by a computer, and in fact there are computer programs that will check the validity of proofs. (Automatic proof verification is closely related to automated theorem proving, though proving and checking the proof are usually different tasks.)
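The mechanical proof checking described above can be illustrated with a toy checker. The following sketch is an illustrative assumption, not any standard verifier: a "proof" is a list of formulas, each of which must be an axiom or follow from two earlier lines by modus ponens.

```python
# Toy proof checker: each line must be an axiom or follow from two
# earlier lines by modus ponens. Formulas are nested tuples, where
# ('->', p, q) represents "p implies q". All names are illustrative.

def follows_by_mp(line, earlier):
    # line follows by modus ponens if some earlier line is (p -> line)
    # and p itself also appears among the earlier lines
    for f in earlier:
        if isinstance(f, tuple) and f[0] == '->' and f[2] == line and f[1] in earlier:
            return True
    return False

def check_proof(axioms, proof):
    for i, line in enumerate(proof):
        if line not in axioms and not follows_by_mp(line, proof[:i]):
            return False
    return True

# Demo: from axioms A and (A -> B), the list [A, (A -> B), B] proves B.
axioms = {'A', ('->', 'A', 'B')}
print(check_proof(axioms, ['A', ('->', 'A', 'B'), 'B']))  # True
print(check_proof(axioms, ['B']))                          # False
```

The essential point is that `check_proof` is a purely mechanical procedure: it never needs to understand what the formulas mean, only to match their shapes.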

To be able to perform this process, we need to know what our axioms are. We could start with a finite set of axioms, such as in Euclidean geometry, or more generally we could allow an infinite list of axioms, with the requirement that we can mechanically check for any given statement whether it is an axiom from that set or not (an axiom schema). In computer science, this is known as having a recursive set of axioms. While an infinite list of axioms may sound strange, this is exactly what’s used in the usual axioms for the natural numbers, the Peano axioms: the inductive axiom is in fact an axiom schema — it states that if zero has any property and whenever any natural number has that property, its successor also has that property, then all natural numbers have that property — it does not specify which property and the only way to say in first-order logic that this is true of all definable properties is to have infinitely many statements, one for each property.
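The requirement that an infinite axiom set be mechanically checkable can be sketched as a membership test for instances of a schema. The simplified induction pattern below is an illustrative assumption (real formalizations allow arbitrary formulas with parameters, not just a single predicate name):

```python
import re

# Toy membership test for an infinite axiom set given by a schema.
# We accept any string of the shape
#   (P(0) & forall n (P(n) -> P(S(n)))) -> forall n P(n)
# with the same predicate name P in every position.
SCHEMA = re.compile(
    r"^\((\w+)\(0\) & forall n \(\1\(n\) -> \1\(S\(n\)\)\)\) -> forall n \1\(n\)$"
)

def is_induction_instance(s):
    # Decides, mechanically, whether s belongs to the infinite axiom set
    return SCHEMA.match(s) is not None

print(is_induction_instance(
    "(Even(0) & forall n (Even(n) -> Even(S(n)))) -> forall n Even(n)"))  # True
print(is_induction_instance("forall n Even(n)"))                          # False
```

Even though the axiom set is infinite, membership in it is decidable, which is all the theorem's hypotheses require.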

Gödel’s first incompleteness theorem shows that any formal system that includes enough of the theory of the natural numbers is incomplete: it contains statements that are neither provably true nor provably false. Or one might say, no formal system which aims to define the natural numbers can actually do so, as there will be true number-theoretical statements which that system cannot prove. This has severe consequences for the program of logicism proposed by Gottlob Frege and Bertrand Russell, which aimed to define the natural numbers in terms of logic.[1]

The existence of an incomplete system is in itself not particularly surprising. For example, if you take Euclidean geometry and you drop the parallel postulate, you get an incomplete system (in the sense that the system does not contain all the true statements about Euclidean space). A system can be incomplete simply because you haven’t discovered all the necessary axioms.

What Gödel showed is that in most cases, such as in number theory or real analysis, you can never create a complete and consistent finite list of axioms, or even an infinite list that can be produced by a computer program. Each time you add a statement as an axiom, there will always be other true statements that still cannot be proved, even with the new axiom. Furthermore, if the system can prove that it is consistent, then it is inconsistent.

It is possible to have a complete and consistent list of axioms that cannot be produced by a computer program (that is, the list is not computably enumerable). For example, one might take all true statements about the natural numbers to be axioms (and no false statements). But then there is no mechanical way to decide, given a statement about the natural numbers, whether it is an axiom or not.

Gödel’s theorem has another interpretation in the language of computer science. In first-order logic, theorems are computably enumerable: you can write a computer program that will eventually generate any valid proof. You can ask if they have the stronger property of being recursive: can you write a computer program to definitively determine if a statement is true or false? Gödel’s theorem says that in general you cannot.
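The gap between "computably enumerable" and "recursive" can be sketched with a trivial toy theory (axiom "A", rule: append "B"), which is our own illustrative assumption:

```python
from itertools import islice

# Toy theory: axiom "A"; rule: from s infer s + "B".
# Its theorems are exactly A, AB, ABB, ... and can be enumerated:
def theorems():
    s = "A"
    while True:
        yield s
        s += "B"

# Semi-decision by enumeration: this halts with True when the statement
# is a theorem, but a true semi-decider would search forever on a
# non-theorem. max_steps truncates the search so the demo terminates.
# (For this toy theory membership happens to be decidable anyway; for
# full first-order arithmetic, Gödel's theorem says it is not.)
def prove(statement, max_steps=1000):
    for t in islice(theorems(), max_steps):
        if t == statement:
            return True
    return None  # unknown within the bound

print(prove("ABB"))  # True
print(prove("BAB"))  # None
```

Enumerating theorems gives a one-sided test: a "yes" eventually appears, but no amount of waiting certifies a "no". That is exactly the stronger, recursive property that fails in general.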

Many logicians believe that Gödel’s incompleteness theorems struck a fatal blow to David Hilbert‘s program towards a universal mathematical formalism which was based on Principia Mathematica. The generally agreed-upon stance is that the second theorem is what specifically dealt this blow. However some believe it was the first, and others believe that neither did.

[edit] Examples of undecidable statements

There are two distinct senses of the word “undecidable” in contemporary use. The first of these is the sense used in relation to Gödel’s theorems, that of a statement being neither provable nor refutable in a specified deductive system. The second sense is used in relation to computability theory and applies not to statements but to decision problems, which are countably infinite sets of questions each requiring a yes or no answer. Such a problem is said to be undecidable if there is no computable function that correctly answers every question in the problem set. The connection between these two is that if a decision problem is undecidable (in the recursion theoretical sense) then there is no consistent, effective formal system which proves for every question A in the problem either “the answer to A is yes” or “the answer to A is no”.

Because of the two meanings of the word undecidable, the term independent is sometimes used instead of undecidable for the “neither provable nor refutable” sense. The usage of “independent” is also ambiguous, however. Some use it to mean just “not provable”, leaving open whether an independent statement might be refuted.

Undecidability of a statement in a particular deductive system does not, in and of itself, address the question of whether the truth value of the statement is well-defined, or whether it can be determined by other means. Undecidability only implies that the particular deductive system being considered does not prove the truth or falsity of the statement. Whether there exist so-called “absolutely undecidable” statements, whose truth value can never be known or is ill-specified, is a controversial point among various philosophical schools.

One of the first problems suspected to be undecidable, in the second sense of the term, was the word problem for groups, first posed by Max Dehn in 1911, which asks if there is a finitely presented group for which no algorithm exists to determine whether two words are equivalent. This was shown to be the case in 1952.

The combined work of Gödel and Paul Cohen has given two concrete examples of undecidable statements (in the first sense of the term): The continuum hypothesis can neither be proved nor refuted in ZFC (the standard axiomatization of set theory), and the axiom of choice can neither be proved nor refuted in ZF (which is all the ZFC axioms except the axiom of choice). These results do not require the incompleteness theorem. Gödel proved in 1940 that neither of these statements could be disproved in ZF or ZFC set theory. In the 1960s, Cohen proved that neither is provable from ZF, and the continuum hypothesis cannot be proven from ZFC.

In 1970, Soviet mathematician Yuri Matiyasevich showed that Hilbert’s Tenth Problem, posed in 1900 as a challenge to the next century of mathematicians, cannot be solved. Hilbert’s challenge sought an algorithm to decide whether an arbitrary Diophantine equation, i.e. a polynomial equation in any number of variables with integer coefficients, has integer solutions (the equation of Fermat’s Last Theorem is one famous example). Over the complex numbers a single equation in several variables typically has infinitely many solutions, which are easy to find; the problem becomes difficult, and in fact impossible, when solutions are constrained to integer values. Matiyasevich showed this problem to be unsolvable by mapping Diophantine equations to recursively enumerable sets and invoking Gödel’s Incompleteness Theorem.[2]

In 1936, Alan Turing proved that the halting problem—the question of whether or not a Turing machine halts on a given program—is undecidable, in the second sense of the term. This result was later generalized to Rice’s theorem.

In 1973, the Whitehead problem in group theory was shown to be undecidable, in the first sense of the term, in standard set theory.

In 1977, Paris and Harrington proved that the Paris-Harrington principle, a version of the Ramsey theorem, is undecidable in the axiomatization of arithmetic given by the Peano axioms but can be proven to be true in the larger system of second-order arithmetic.

Kruskal’s tree theorem, which has applications in computer science, is also undecidable from the Peano axioms but provable in set theory. In fact Kruskal’s tree theorem (or its finite form) is undecidable in a much stronger system codifying the principles acceptable on basis of a philosophy of mathematics called predicativism.

Goodstein’s theorem is a statement about the Ramsey theory of the natural numbers that Kirby and Paris showed is undecidable in Peano arithmetic.

Gregory Chaitin produced undecidable statements in algorithmic information theory and proved another incompleteness theorem in that setting. Chaitin’s theorem states that for any theory that can represent enough arithmetic, there is an upper bound c such that no specific number can be proven in that theory to have Kolmogorov complexity greater than c. While Gödel’s theorem is related to the liar paradox, Chaitin’s result is related to Berry’s paradox.

Douglas Hofstadter gives a notable alternative proof of incompleteness, inspired by Gödel, in his book Gödel, Escher, Bach.

[edit] Limitations of Gödel’s theorems

The conclusions of Gödel’s theorems only hold for the formal systems that satisfy the necessary hypotheses (which have not been fully described in this article). Not all axiom systems satisfy these hypotheses, even when these systems have models that include the natural numbers as a subset. For example, there are first-order axiomatizations of Euclidean geometry and real closed fields that do not meet the hypotheses of Gödel’s theorems. The key fact is that these axiomatizations are not expressive enough to define the set of natural numbers or develop basic properties of the natural numbers.

A second limitation is that Gödel’s theorems only apply to systems that are used as their own proof systems. For example, the consistency of the Peano arithmetic can be proved in set theory if set theory is consistent (however, one cannot prove that the latter is consistent in that framework). In 1936, Gerhard Gentzen proved the consistency of Peano arithmetic using a formal system which was more powerful in certain aspects than arithmetic, but less powerful than standard set theory.

[edit] Discussion and implications

The incompleteness results affect the philosophy of mathematics, particularly versions of formalism, which use a single system of formal logic to define their principles. One can paraphrase the first theorem as saying, “we can never find an all-encompassing axiomatic system which is able to prove all mathematical truths, but no falsehoods.”

On the other hand, from a strict formalist perspective this paraphrase would be considered meaningless because it presupposes that mathematical “truth” and “falsehood” are well-defined in an absolute sense, rather than relative to each formal system.


The following rephrasing of the second theorem is even more unsettling to the foundations of mathematics:

If an axiomatic system can be proven to be consistent and complete from within itself, then it is inconsistent.

Therefore, in order to establish the consistency of a system S, one needs to use some other more powerful system T, but a proof in T is not completely convincing unless T’s consistency has already been established without using S.

At first, Gödel’s theorems seemed to leave some hope—it was thought that it might be possible to produce a general algorithm that indicates whether a given statement is undecidable or not, thus allowing mathematicians to bypass the undecidable statements altogether. However, the negative answer to the Entscheidungsproblem, obtained in 1936, showed that no such algorithm exists.

There are some who hold that a statement that is unprovable within a deductive system may be quite provable in a metalanguage. And what cannot be proven in that metalanguage can likely be proven in a meta-metalanguage, recursively, ad infinitum, in principle. By invoking such a system of typed metalanguages, along with an axiom of Reducibility — which by an inductive assumption applies to the entire stack of languages — one may, for all practical purposes, overcome the obstacle of incompleteness.

Note that Gödel’s theorems only apply to sufficiently strong axiomatic systems. “Sufficiently strong” means that the theory contains enough arithmetic to carry out the coding constructions needed for the proof of the first incompleteness theorem. Essentially, all that is required are some basic facts about addition and multiplication as formalized, e.g., in Robinson arithmetic Q. There are even weaker axiomatic systems that are consistent and complete, for instance Presburger arithmetic which proves every true first-order statement involving only addition.

The axiomatic system may consist of infinitely many axioms (as first-order Peano arithmetic does), but for Gödel’s theorem to apply, there has to be an effective algorithm which is able to check proofs for correctness. For instance, one might take the set of all first-order sentences which are true in the standard model of the natural numbers. This system is complete; Gödel’s theorem does not apply because there is no effective procedure that decides if a given sentence is an axiom. In fact, that this is so is a consequence of Gödel’s first incompleteness theorem.

Another example of a specification of a theory to which Gödel’s first theorem does not apply can be constructed as follows: order all possible statements about natural numbers first by length and then lexicographically, start with an axiomatic system initially equal to the Peano axioms, go through your list of statements one by one, and, if the current statement can be neither proven nor disproven from the current axiom system, add it to that system. This creates a system which is complete, consistent, and sufficiently powerful, but not computably enumerable.

Gödel himself only proved a technically slightly weaker version of the above theorems; the first proof for the versions stated above was given by J. Barkley Rosser in 1936.

In essence, the proof of the first theorem consists of constructing a statement p within a formal axiomatic system that can be given a meta-mathematical interpretation of:

p = “This statement cannot be proven in the given formal theory”

As such, it can be seen as a modern variant of the Liar paradox, although unlike the classical paradoxes it’s not really paradoxical.

If the axiomatic system is consistent, Gödel’s proof shows that p (and its negation) cannot be proven in the system. Therefore p is true (p claims to be not provable, and it is not provable) yet it cannot be formally proved in the system. If the axiomatic system is ω-consistent, then the negation of p cannot be proven either, and so p is undecidable. In a system which is not ω-consistent (but consistent), either we have the same situation, or we have a false statement which can be proven (namely, the negation of p).

Adding p to the axioms of the system would not solve the problem: there would be another Gödel sentence for the enlarged theory. Theories such as Peano arithmetic, for which any computably enumerable consistent extension is incomplete, are called essentially incomplete.

[edit] Minds and machines

Authors including J. R. Lucas have debated what, if anything, Gödel’s incompleteness theorems imply about human intelligence. Much of the debate centers on whether the human mind is equivalent to a Turing machine, or by the Church-Turing thesis, any finite machine at all. If it is, and if the machine is consistent, then Gödel’s incompleteness theorems would apply to it.

Hilary Putnam (1960) suggested that while Gödel’s theorems cannot be applied to humans, since humans make mistakes and are therefore inconsistent, they may be applied to the human faculty of science or mathematics in general. If we are to believe that this faculty is consistent, then either we cannot prove its consistency, or it cannot be represented by a Turing machine.

[edit] Postmodernism and continental philosophy

Appeals are sometimes made to the incompleteness theorems to support by analogy ideas which go beyond mathematics and logic. For instance, Régis Debray applies it to politics.[3] A number of authors have commented, mostly negatively, on such extensions and interpretations, including Torkel Franzen, Alan Sokal and Jean Bricmont, Ophelia Benson and Jeremy Stangroom. The last two quote[4] biographer Rebecca Goldstein[5] commenting on the disparity between Gödel’s avowed Platonism and the anti-realist uses to which his ideas are put by humanist intellectuals.

[edit] Theories of everything and physics

Stanley Jaki, followed much later by Stephen Hawking and others, argued that (an analogous argument to) Gödel’s theorem implies that even the most sophisticated formulation of physics will be incomplete, and that therefore there can never be an ultimate theory that can be formulated as a finite number of principles and known for certain to be “final”.[6][7]

[edit] Relationship with computability

As early as 1943, Kleene gave a proof of Gödel’s incompleteness theorem using basic results of computability theory.[8] A basic result of computability shows that the halting problem is unsolvable: there is no computer program that can correctly determine, given a program P as input, whether P eventually halts when run with no input. Kleene showed that the existence of a complete effective theory of arithmetic with certain consistency properties would force the halting problem to be decidable, a contradiction. An exposition of this proof at the undergraduate level was given by Charlesworth (1980).[9]

By enumerating all possible proofs, it is possible to enumerate all the provable consequences of any effective first-order theory. This makes it possible to search for proofs of a certain form. Moreover, the method of arithmetization introduced by Gödel can be used to show that any sufficiently strong theory of arithmetic can represent the workings of computer programs. In particular, for each program P there is a formula Q such that Q expresses the idea that P halts when run with no input. The formula Q says, essentially, that there is a natural number that encodes the entire computation history of P and this history ends with P halting.

If, for every such formula Q, either Q or the negation of Q was a logical consequence of the axiom system, then it would be possible, by enumerating enough theorems, to determine which of these is the case. In particular, for each program P, the axiom system would either prove “P halts when run with no input,” or “P doesn’t halt when run with no input.”

Consistency assumptions imply that the axiom system is correct about these theorems. If the axioms prove that a program P doesn’t halt when the program P actually does halt, then the axiom system is inconsistent, because it is possible to use the complete computation history of P to make a proof that P does halt. This proof would just follow the computation of P step-by-step until P halts after a finite number of steps.

The mere consistency of the axiom system is not enough to obtain a contradiction, however, because a consistent axiom system could still prove the ω-inconsistent theorem that a program halts, when it actually doesn’t halt. The assumption of ω-consistency implies, however, that if the axiom system proves a program doesn’t halt then the program actually does not halt. Thus if the axiom system was consistent and ω-consistent, its proofs about which programs halt would correctly reflect reality. Thus it would be possible to effectively decide which programs halt by merely enumerating proofs in the system; this contradiction shows that no effective, consistent, ω-consistent formal theory of arithmetic that is strong enough to represent the workings of a computer can be complete.
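The argument above can be sketched as code. Everything here is a toy stand-in, not a real theory: "programs" are integers, program n "halts" iff n is even, and the mock theory's theorems are exactly the true statements about halting. The point is only the shape of the argument: given a complete, sound, effective enumeration of theorems, halting becomes decidable.

```python
from itertools import count

# Mock theorem enumerator for a hypothetical complete, sound theory.
# Toy semantics: program n "halts" iff n is even, so the theory proves
# "halts(n)" for even n and "not halts(n)" for odd n.
def mock_theorems():
    for n in count():
        yield f"halts({n})" if n % 2 == 0 else f"not halts({n})"

def decide_halting(n):
    # Search the enumeration until one of the two statements appears.
    # Completeness guarantees the search terminates; soundness
    # guarantees the answer is correct.
    for thm in mock_theorems():
        if thm == f"halts({n})":
            return True
        if thm == f"not halts({n})":
            return False

print(decide_halting(4))  # True
print(decide_halting(7))  # False
```

Since no such decision procedure for halting can exist, no real theory of arithmetic can supply an enumeration with these properties; that is the contradiction the paragraph above derives.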

[edit] Proof sketch for the first theorem

Throughout the proof we assume a formal system is fixed and satisfies the necessary hypotheses. The proof has three essential parts. The first part is to show that statements can be represented by natural numbers, known as Gödel numbers, and that properties of the statements can be detected by examining their Gödel numbers. This part culminates in the construction of a formula expressing the idea that a statement is provable in the system. The second part of the proof is to construct a particular statement that, essentially, says that it is unprovable. The third part of the proof is to analyze this statement to show that it is neither provable nor disprovable in the system.

[edit] Arithmetization of syntax

The main problem in fleshing out the above mentioned proof idea is the following: in order to construct a statement p that is equivalent to “p cannot be proved”, p would have to somehow contain a reference to p, which could easily give rise to an infinite regress. Gödel’s ingenious trick, which was later used by Alan Turing in his work on the Entscheidungsproblem, is to represent statements as numbers, which is often called the arithmetization of syntax.

To begin with, every formula or statement that can be formulated in our system gets a unique number, called its Gödel number. This is done in such a way that it is easy to mechanically convert back and forth between formulas and Gödel numbers. It is similar, for example, to the way English sentences are encoded as sequences (or “strings”) of numbers using ASCII: such a sequence is considered as a single (if potentially very large) number. Because our system is strong enough to reason about numbers, it is now also possible to reason about formulas within the system.
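One concrete numbering in the spirit of the ASCII analogy above: read a formula's bytes as the digits of a single base-256 integer. The helper names are our own, not Gödel's scheme.

```python
# Encode a formula as a single natural number by reading its UTF-8
# bytes as one big base-256 integer, and decode by reversing the step.
def godel_number(formula: str) -> int:
    return int.from_bytes(formula.encode("utf-8"), "big")

def decode(n: int) -> str:
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

g = godel_number("2*3=6")
print(g)           # a single (large) natural number
print(decode(g))   # the original formula: 2*3=6
```

The crucial properties are exactly the ones the text names: the map is injective, and conversion in both directions is mechanical, so statements about formulas become statements about numbers.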

A formula F(x) that contains exactly one free variable x is called a statement form or class-sign. As soon as x is replaced by a specific number, the statement form turns into a bona fide statement, and it is then either provable in the system, or not. For certain formulas one can show that for every natural number n, F(n) is true if and only if it can be proven (the precise requirement in the original proof is weaker, but for the proof sketch this will suffice). In particular, this is true for every specific arithmetic operation between a finite number of natural numbers, such as “2*3=6”.

Statement forms themselves are not statements and therefore cannot be proved or disproved. But every statement form F(x) can be assigned a Gödel number, which we will denote by G(F). The choice of the free variable used in the form F(x) is not relevant to the assignment of the Gödel number G(F).

Now comes the trick: The notion of provability itself can also be encoded by Gödel numbers, in the following way. Since a proof is a list of statements which obey certain rules, we can define the Gödel number of a proof. Now, for every statement p, we may ask whether a number x is the Gödel number of its proof. The relation between the Gödel number of p and x, the Gödel number of its proof, is an arithmetical relation between two numbers. Therefore there is a statement form Bew(x) that uses this arithmetical relation to state that a Gödel number of a proof of x exists:

Bew(y) = ∃ x ( y is the Gödel number of a formula and x is the Gödel number of a proof of the formula encoded by y).

The name Bew is short for beweisbar, the German word for “provable”. An important feature of Bew is that if a statement p is provable in the system then Bew(G(p)) is also provable. This is because any proof of p would have a corresponding Gödel number, the existence of which causes Bew(G(p)) to be satisfied.
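The existential-search character of Bew can be sketched in a toy system (axiom "A", rule: append "B"), where a proof of s is the derivation ["A", "AB", ..., s] and we code a candidate proof by its number of steps. All names and the coding are illustrative assumptions, not Gödel's actual construction.

```python
# Toy Bew: in the system with axiom "A" and rule "append B", the k-step
# derivation is ["A", "AB", ..., "A" + "B"*k]. We code that derivation
# simply by the integer k and check whether it ends in the target.
def is_proof(k, statement):
    return "A" + "B" * k == statement

def bew(statement, bound=10_000):
    # Bew(y) = "there exists x coding a proof of y": an unbounded
    # existential search, truncated here so the demo terminates.
    return any(is_proof(k, statement) for k in range(bound))

print(bew("ABBB"))  # True: the 3-step derivation ends in ABBB
print(bew("BBA"))   # False within the search bound
```

Note the asymmetry the text relies on: if a proof exists, the search finds it and Bew is verified; if none exists, the unbounded search never returns, which is why Bew expresses provability without deciding it.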

[edit] Diagonalization

The next step in the proof is to obtain a statement that says it is unprovable. Although Gödel constructed this statement directly, the existence of at least one such statement follows from the diagonal lemma, which says that for any sufficiently strong formal system and any statement form F there is a statement p such that the system proves

p ↔ F(G(p)).

We obtain p by letting F be the negation of Bew(x); thus p roughly states that its own Gödel number is the Gödel number of an unprovable formula.

The statement p is not literally equal to ~Bew(G(p)); rather, p states that if a certain calculation is performed, the resulting Gödel number will be that of an unprovable statement. But when this calculation is performed, the resulting Gödel number turns out to be the Gödel number of p itself. This is similar to the following sentence in English:

“, when preceded by itself in quotes, is unprovable.”, when preceded by itself in quotes, is unprovable.

This sentence does not directly refer to itself, but when the stated transformation is made the original sentence is obtained as a result, and thus this sentence asserts its own unprovability. The proof of the diagonal lemma employs a similar method.
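The transformation "precede by itself in quotes" can be made explicit in a few lines; the function name is our own.

```python
# The self-reference trick of the English sentence above, spelled out:
def preceded_by_itself_in_quotes(fragment: str) -> str:
    return '"' + fragment + '"' + fragment

FRAGMENT = ", when preceded by itself in quotes, is unprovable."
SENTENCE = preceded_by_itself_in_quotes(FRAGMENT)
print(SENTENCE)

# The sentence describes the result of applying the transformation to
# its own quoted part -- and that result is the sentence itself:
quoted_part = SENTENCE[1:].split('"')[0]  # the text inside the quotes
assert preceded_by_itself_in_quotes(quoted_part) == SENTENCE
```

No line of the sentence mentions itself directly; self-reference appears only when the described transformation is carried out, just as the Gödel sentence refers to itself only via a computation on Gödel numbers.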

[edit] Proof of independence

We will now assume that our axiomatic system is ω-consistent. We let p be the statement obtained in the previous section.

If p were provable, then Bew(G(p)) would be provable, as argued above. But p asserts the negation of Bew(G(p)). Thus our system would be inconsistent, proving both a statement and its negation. This contradiction shows that p cannot be provable.

If the negation of p were provable, then Bew(G(p)) would be provable (because p was constructed to be equivalent to the negation of Bew(G(p))). However, for each specific number x, x cannot be the Gödel number of the proof of p, because p is not provable (from the previous paragraph). Thus on one hand the system proves there is a number with a certain property (that it is the Gödel number of the proof of p), but on the other hand, for every specific number x, we can prove that it does not have this property. This is impossible in an ω-consistent system. Thus the negation of p is not provable.

So the statement p is undecidable: it can neither be proved nor disproved within our system. ∎

It should be noted that p is not provable (and thus true) in every consistent system. The assumption of ω-consistency is only required for the negation of p to be not provable. Thus:

  • In an ω-consistent formal system, we may prove neither p nor its negation, and so p is undecidable.
  • In a consistent formal system we may either have the same situation, or we may prove the negation of p; in the latter case, we have a statement (“not p”) which is false but provable.

Note that if one tries to “add the missing axioms” in order to avoid the undecidability of the system, then one has to add either p or “not p” as an axiom. But then the definition of “being a Gödel number of a proof” of a statement changes, which means that the statement form Bew(x) is now different. Thus when we apply the diagonal lemma to this new form Bew, we obtain a new statement p, different from the previous one, which will be undecidable in the new system if it is ω-consistent.

Rosser (1936) showed, by employing a Gödel sentence more complicated than p, that ordinary consistency sufficed for this proof.

[edit] Boolos’s short proof

George Boolos (1998) vastly simplified the proof of the First Theorem, if one agrees that that theorem is equivalent to:

“There is no algorithm M whose output contains all true sentences of arithmetic and no false ones.”

“Arithmetic” refers to Peano or Robinson arithmetic, but the proof invokes no specifics of either. It is tacitly assumed that these systems allow ‘<’ and ‘×’ to have their usual meanings (these are also the only defined arithmetical notions the proof requires). The Gödel sentence draws on Berry’s paradox, except that “fewer than n symbols of the language of arithmetic” replaces “fewer than n natural language syllables.” Boolos proves the theorem in about two pages, employing the language of first-order logic but invoking no facts about the connectives or quantifiers. The domain is the natural numbers, but the proof is innocent of infinity in any form.

Let [n] abbreviate the numeral for the natural number n, i.e. n successive applications of the successor function, starting from 0. Boolos then defines several related predicates, starting with Cxz, which comes out true iff an arithmetic formula containing z symbols “names” (see below) the number x. The construction of C is only sketched; the sketch assumes that every formula has a Gödel number, which is the only mention of Gödel numbering in the entire proof. The other predicates are:

Bxy ↔ ∃z(z < y ∧ Cxz),
Axy ↔ ¬Bxy ∧ ∀a(a < x → Bay),
Fx ↔ ∃y((y = [10]×[k]) ∧ Axy), where k is the number of symbols appearing in Axy.

Fx “names” n if the output of M includes the sentence ∀x(Fx ↔ (x = [n])). Thus Berry’s paradox is formalized. The balance of the proof, requiring but 12 lines of text, shows that this sentence is true in a semantic sense, but no algorithm M will identify it as true. Thus arithmetic truth outruns proof. QED.

The proof is intuitionistically valid, and requires but two existential quantifiers. The proof nowhere mentions recursive functions or any facts from number theory; Boolos even claims that the proof dispenses with diagonalization. For more on this proof, see Berry’s paradox.
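The Berry-paradox mechanism can be mimicked in a toy, fully computable setting (a sketch; the restricted expression language and Python’s eval are our illustrative choices, not Boolos’s construction). Here an expression “names” the number it evaluates to, and because truth for this tiny language is decidable, the least unnamed number can actually be computed — precisely what the paradox rules out for arithmetic truth.

```python
from itertools import product

ALPHABET = "0123456789+*"  # a deliberately tiny expression language

def named_numbers(max_len):
    """Numbers 'named' by some expression of fewer than max_len symbols."""
    named = set()
    for length in range(1, max_len):
        for combo in product(ALPHABET, repeat=length):
            expr = "".join(combo)
            try:
                value = eval(expr, {"__builtins__": {}})
            except SyntaxError:
                continue  # not a well-formed expression
            if isinstance(value, int) and value >= 0:
                named.add(value)
    return named

def least_unnamed(max_len):
    """Least number not named by any expression of fewer than max_len symbols."""
    named = named_numbers(max_len)
    k = 0
    while k in named:
        k += 1
    return k

print(least_unnamed(3))  # prints 100: 1- and 2-symbol expressions name 0..99
```

In arithmetic the analogous predicate Axy is expressible inside the language being quantified over, which is what turns this harmless computation into an undecidable sentence.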

[edit] Proof sketch for the second theorem

The main difficulty in proving the second incompleteness theorem is to show that various facts about provability used in the proof of the first incompleteness theorem can be formalized within the system using a formal predicate for provability. Once this is done, the second incompleteness theorem essentially follows by formalizing the entire proof of the first incompleteness theorem within the system itself.

Let p stand for the undecidable sentence constructed above, and assume that the consistency of the system can be proven from within the system itself. We have seen above that if the system is consistent, then p is not provable. The proof of this implication can be formalized within the system, and therefore the statement “p is not provable”, or “not Bew(G(p))”, can be proven in the system.

But this last statement is equivalent to p itself (and this equivalence can be proven in the system), so p can be proven in the system. This contradiction shows that the system must be inconsistent.
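Schematically, writing S for the system and Con(S) for a formalization of “S is consistent”, the argument above compresses to a few lines (with Bew, G and p as in the first-theorem proof):

```
(1) S ⊢ p ↔ ¬Bew(G(p))         (diagonal lemma)
(2) S ⊢ Con(S) → ¬Bew(G(p))    (first-theorem proof, formalized inside S)
(3) S ⊢ Con(S) → p             (from 1 and 2)
(4) If S ⊢ Con(S), then S ⊢ p  (from 3 by modus ponens)
```

Since the first theorem shows that a consistent S does not prove p, line (4) means a consistent S cannot prove Con(S).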

[edit] See also

[edit] Footnotes

1 The word “true” here is being used disquotationally; that is, the statement “GT is true” means the same thing as GT itself. Thus a formalist might reinterpret the claim

for every theory T satisfying the hypotheses, if T is consistent, then GT is true

to mean

for every theory T satisfying the hypotheses, it is a theorem of Peano Arithmetic that Con(T)→GT

where Con(T) is the natural formalization of the claim “T is consistent”, and GT is the Gödel sentence for T.

2 Here Flg(κ) represents the theory generated by κ and “v Gen r” is a particular formula in the language of arithmetic.

[edit] References

[edit] In-text references

  1. ^ Geoffrey Hellman, How to Gödel a Frege-Russell: Gödel’s Incompleteness Theorems and Logicism. Noûs, Vol. 15, No. 4, Special Issue on Philosophy of Mathematics. (Nov., 1981), pp. 451-468.
  2. ^ Yuri Matiyasevich, 1970. “Enumerable sets are Diophantine,” Doklady Akademii Nauk SSSR, 279-82.
  3. ^ “The secret takes the form of a logical law, an extension of Gödel’s theorem: There can be no organised system without closure and no system can be closed by elements internal to that system alone.” Debray, R., Critique of Political Reason, quoted in Sokal and Bricmont’s Fashionable Nonsense.
  4. ^ In their Why Truth Matters
  5. ^ The Proof and Paradox of Kurt Gödel
  6. ^ Stanley Jaki, “A Late Awakening to Gödel in Physics”
  7. ^ Stephen Hawking, “Gödel and the end of physics”
  8. ^ Kleene 1943, Theorem VIII.
  9. ^ A more rigorous proof-sketch can be found on pages 354 and 371 of John Hopcroft and Jeffrey Ullman 1979, Introduction to Automata Theory, Addison-Wesley, ISBN 0-201-02988-X. More insight into the notion of “proofs as strings of symbols on Turing machines” can be found on pp. 221–226ff of Marvin Minsky 1967, Computation: Finite and Infinite Machines, Prentice-Hall, NJ, no ISBN. Minsky’s argument turns on the question of whether, if one were to start with a “theorem” (i.e. a symbol-string that represents the last line of a possible proof) and a machine that generates well-formed proofs as strings of symbols, the machine would ever match the theorem to a proof and then halt. It may make a match, but then again it may not, and must continue searching forever. In general, this halting problem is undecidable. See more at tag system and Post correspondence problem.

[edit] Articles by Gödel

[edit] Translations, during his lifetime, of Gödel’s paper into English

None of the following agree in all translated words and in typography. The typography is a serious matter, because Gödel expressly wished to emphasize “those metamathematical notions that had been defined in their usual sense before . . .” (van Heijenoort 1967:595). Three translations exist. Of the first, John Dawson states: “The Meltzer translation was seriously deficient and received a devastating review in the Journal of Symbolic Logic”; Gödel also complained about Braithwaite’s commentary (Dawson 1997:216). “Fortunately, the Meltzer translation was soon supplanted by a better one prepared by Elliott Mendelson for Martin Davis’s anthology The Undecidable . . . he found the translation ‘not quite so good’ as he had expected . . . [but due to time constraints he] agreed to its publication” (ibid). (In a footnote Dawson states that “he would regret his compliance, for the published volume was marred throughout by sloppy typography and numerous misprints” (ibid).) Dawson states that “The translation that Gödel favored was that by Jean van Heijenoort” (ibid). For the serious student another version exists as a set of lecture notes recorded by Stephen Kleene and J. B. Rosser “during lectures given by Gödel at the Institute for Advanced Study during the spring of 1934” (cf. commentary by Davis 1965:39 and beginning on p. 41); this version is titled “On Undecidable Propositions of Formal Mathematical Systems”. In their order of publication:

  • B. Meltzer (translation) and R. B. Braithwaite (introduction), 1962. On Formally Undecidable Propositions of Principia Mathematica and Related Systems, Dover Publications, New York (Dover edition 1992), ISBN 0-486-66980-7 (pbk.) This contains a useful translation of Gödel’s German abbreviations on pp. 33-34. As noted above, typography, translation and commentary are suspect. Unfortunately, this translation was reprinted with all its suspect content by
  • Stephen Hawking editor, 2005. God Created the Integers: The Mathematical Breakthroughs That Changed History, Running Press, Philadelphia, ISBN-10: 0-7624-1922-9. Gödel’s paper appears starting on p. 1097, with Hawking’s commentary starting on p. 1089.
  • Martin Davis editor, 1965. The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions, Raven Press, New York, no ISBN. Gödel’s paper begins on page 5, preceded by one page of commentary.
  • Jean van Heijenoort editor, 1967, 3rd edition 1967. From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931, Harvard University Press, Cambridge Mass., ISBN 0-674-32449-8 (pbk). van Heijenoort did the translation. He states that “Professor Gödel approved the translation, which in many places was accommodated to his wishes.” (p. 595). Gödel’s paper begins on p. 595; van Heijenoort’s commentary begins on p. 592.
  • Martin Davis editor, 1965, ibid. “On Undecidable Propositions of Formal Mathematical Systems.” A copy with Gödel’s corrections of errata and Gödel’s added notes begins on page 41, preceded by two pages of Davis’s commentary. Until Davis included this in his volume this lecture existed only as mimeographed notes.

[edit] Articles by others

  • George Boolos, 1998, “A New Proof of the Gödel Incompleteness Theorem” in Boolos, G., Logic, Logic, and Logic. Harvard Univ. Press.
  • Arthur Charlesworth, 1980, “A Proof of Gödel’s Theorem in Terms of Computer Programs,” Mathematics Magazine, v. 54 n. 3, pp. 109-121. JStor
  • David Hilbert, 1900, “Mathematical Problems.” English translation of a lecture delivered before the International Congress of Mathematicians at Paris, containing Hilbert’s statement of his Second Problem.
  • Putnam, Hilary, 1960, Minds and Machines in Sidney Hook, ed., Dimensions of Mind: A Symposium. New York University Press. Reprinted in Anderson, A. R., ed., 1964. Minds and Machines. Prentice-Hall: 77.
  • Stephen Kleene, 1943, “Recursive predicates and quantifiers,” reprinted from Transactions of the American Mathematical Society, v. 53 n. 1, pp. 41–73 in Martin Davis 1965, The Undecidable (loc. cit.) pp. 255–287.
  • John Barkley Rosser, 1936, “Extensions of some theorems of Gödel and Church,” reprinted from the Journal of Symbolic Logic vol. 1 (1936) pp. 87-91, in Martin Davis 1965, The Undecidable (loc. cit.) pp. 230-235.
  • John Barkley Rosser, 1939, “An Informal Exposition of Proofs of Gödel’s Theorem and Church’s Theorem,” reprinted from the Journal of Symbolic Logic, vol. 4 (1939) pp. 53-60, in Martin Davis 1965, The Undecidable (loc. cit.) pp. 223-230
  • Jean van Heijenoort, 1963. “Gödel’s Theorem” in Edwards, Paul, ed., Encyclopedia of Philosophy, Vol. 3. Macmillan: 348-57.

[edit] Books about the theorems

[edit] Miscellaneous References

  • John W. Dawson, Jr., 1997. Logical Dilemmas: The Life and Work of Kurt Gödel, A.K. Peters, Wellesley Mass, ISBN 1-56881-256-6.
  • Goldstein, Rebecca, 2005, Incompleteness: The Proof and Paradox of Kurt Gödel.

[edit] External links


Logical argument

From Wikipedia, the free encyclopedia


This article is about arguments in logic. For other uses, see argument.

In logic, an argument is a set of declarative sentences (statements), known as the premises, together with another declarative sentence, known as the conclusion, which is asserted to follow from (be entailed by) the premises. Such an argument may or may not be valid. Note that in logic declarative sentences are either true or false (not valid or invalid), while arguments are valid or invalid (not true or false). Many authors now use the term ‘sentence’ to mean a declarative sentence, rather than ‘statement’ or ‘proposition’, to avoid certain philosophical implications of the latter two terms.


[edit] Validity

A valid argument is one whose form guarantees that if all the premises are true, the conclusion must also be true. An invalid argument is one whose form provides no such guarantee.

The validity of an argument does not guarantee the truth of its conclusion, since a valid argument may have false premises; only a valid argument with true premises must have a true conclusion.
The validity of an argument depends on its form, not on the truth or falsity of its premises and conclusion. Logic seeks to discover the forms of valid arguments. Since a valid argument is one in which, if the premises are true, the conclusion must be true, it follows that a valid argument cannot have true premises and a false conclusion. Because validity depends only on form, an argument can be shown to be invalid by exhibiting another argument of the same form with true premises and a false conclusion. In informal logic this is called a counterargument.

[edit] Proof

A proof is a demonstration that an argument is valid (see Proof procedure).

[edit] Validity, soundness and effectiveness

Some authors define a sound argument as a valid argument with true premises (see also Validity, Soundness, Truth).

Arguments can be invalid for a variety of reasons. There are well-established patterns of reasoning that arguments may follow which render them invalid; these patterns are known as logical fallacies.

Even if an argument is sound (and hence also valid), an argument may still fail in its primary task of persuading us of the truth of its conclusion. Such an argument is then sound, but ineffective. An argument may fail to be effective because it is not scrutinizable, in the sense that it is not open to public examination. This may be because the argument is too long or too complex, because the terms occurring in it are obscure, or because the reasoning it employs is not well understood. The validity and soundness of an argument are logical properties of it, known as semantic properties. Effectiveness, on the other hand, is not a logical notion but a practical concern.

[edit] Formal arguments and mathematical arguments

In mathematics, an argument can often be formalized by writing each of its statements in a formal language such as first-order Peano Arithmetic. A formalized argument should have the following properties:

  • its premises are clearly identified as such
  • each of the inferences is justified by appeal to a specific rule of reasoning of the formal language in which the argument is written
  • the conclusion of the argument appears as the final inference

Checking the validity of a formal argument is thus a straightforward matter, since the presence of these three properties is easily verified.
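A toy checker makes the three properties concrete (a sketch with a made-up proof format and modus ponens as the only rule; real proof assistants are far richer): each line is either a declared premise or is justified by a rule applied to earlier lines, and the conclusion must be the final line.

```python
# Formulas: atoms are strings; ("->", p, q) represents "p implies q".
# A proof is a list of (formula, justification) pairs; a justification is
# either "premise" or ("mp", i, j): modus ponens on earlier lines i and j.

def check_proof(proof, premises, conclusion):
    derived = []
    for formula, just in proof:
        if just == "premise":
            if formula not in premises:        # premises clearly identified
                return False
        else:
            rule, i, j = just
            if rule != "mp" or i >= len(derived) or j >= len(derived):
                return False                   # must cite earlier lines
            # modus ponens: from (p -> q) and p, infer q
            if derived[i] != ("->", derived[j], formula):
                return False
        derived.append(formula)
    # the conclusion appears as the final inference
    return bool(derived) and derived[-1] == conclusion

premises = ["p", ("->", "p", "q")]
proof = [
    (("->", "p", "q"), "premise"),
    ("p", "premise"),
    ("q", ("mp", 0, 1)),
]
print(check_proof(proof, premises, "q"))  # True
```

Verification is a simple pass over the lines; finding a proof, by contrast, is the hard (and in general undecidable) problem discussed in the Gödel article above.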

Most arguments used in mathematics are not formal in quite so strict a sense. Strictly formal proofs of all but the most trivial assertions are extremely tedious to construct and often so long as to be hard to follow without assistance from a computer. Automated theorem proving is sometimes used to overcome these problems.

In general mathematical practice arguments are formal insofar as they are formalizable in theory; this is sometimes expressed by saying that mathematical arguments are rigorous. Mathematicians are happy to make a single inference that would, if formalized, amount to a long chain of inferences, because they are confident that the formal chain could be constructed if required.

Nevertheless, one advantage of formalizing arguments is the possibility of constructing a theory of valid mathematical arguments such as proof theory. Proof theory investigates the class of valid arguments in mathematics as a whole, and hence elucidates what kinds of statements can occur as conclusions to sound mathematical arguments. Gödel’s incompleteness theorems are proof-theoretic results which show the surprising fact that not all true mathematical statements can occur as the conclusion of formalized, sound mathematical arguments. In effect, not all true statements of mathematics are provable.

[edit] Logical arguments in science

In ordinary, philosophical and scientific argumentation abductive arguments and arguments by analogy are also commonly used. Arguments can be valid or invalid, although how arguments are determined to be in either of these two categories can often itself be an object of much discussion. Informally one should expect that a valid argument should be compelling in the sense that it is capable of convincing someone about the truth of the conclusion. However, such a criterion for validity is inadequate or even misleading since it depends more on the skill of the person constructing the argument to manipulate the person who is being convinced and less on the objective truth or undeniability of the argument itself.

Less subjective criteria for validity of arguments are often clearly desirable, and in some cases we should even expect an argument to be rigorous, that is, to adhere to precise rules of validity. This is the case for arguments used in mathematical proofs. Note that a rigorous proof does not have to be a formal proof.

In ordinary language, people refer to the “logic” of an argument, or use terminology suggesting that an argument is based on the inference rules of formal logic. Though arguments do use some inferences that are indisputably purely logical (such as syllogisms), practical arguments almost always use other kinds of inference as well: they commonly deal with causality, probability and statistics, or even specialized areas such as economics. In such cases, “logic” in everyday usage rarely refers to the well-defined principles of pure logic as explicitly set out and agreed upon in an academic, professional or other strictly understood context. It refers instead to something the reader or audience of the argument believes it perceives in the argument, something that drives it inexorably toward some conclusion, which may remain ill-defined in its own mind, with little attention to the criteria by which that apparently compelling force is accepted as correct (which is how the formal rules of pure logic are constructed). Yet this feeling of inexorable conviction also underlies our admittedly somewhat unsatisfying definitions of “logic”: those who construct the most conscientious, circumspect and clear definitions were initially drawn to do so by a similar belief that they recognized some intrinsic logic or compelling rational force in even the most everyday arguments, although such a view may have been naive, and is in any case incapable of being tested in any objective or universally satisfying fashion.

[edit] Theories of arguments

Theories of arguments are closely related to theories of informal logic. Ideally, a theory of argument should provide some mechanism for explaining validity of arguments.

One natural approach would follow the mathematical paradigm and attempt to define validity in terms of the semantics of the assertions in the argument. Though such an approach is appealing in its simplicity, the obstacles to proceeding this way are very difficult for anything other than purely logical arguments. Among other problems, we need to interpret not only entire sentences but also components of sentences, for example noun phrases such as “the present value of government revenue for the next twelve years”.

One major difficulty of pursuing this approach is that determining an appropriate semantic domain is not an easy task, raising numerous thorny ontological issues. It also raises the discouraging prospect of having to work out acceptable semantic theories before being able to say anything useful about understanding and evaluating arguments. For this reason the purely semantic approach is usually replaced with other approaches that are more easily applicable to practical discourse.

For arguments regarding topics such as probability, economics or physics, some of the semantic problems can be conveniently shoved under the rug if we can avail ourselves of a model of the phenomenon under discussion. In this case, we can establish a limited semantic interpretation using the terms of the model and the validity of the argument is reduced to that of the abstract model. This kind of reduction is used in the natural sciences generally, and would be particularly helpful in arguing about social issues if the parties can agree on a model. Unfortunately, this prior reduction seldom occurs, with the result that arguments about social policy rarely have a satisfactory resolution.

Another approach is to develop a theory of argument pragmatics, at least in certain cases where argument and social interaction are closely related. This is most useful when the goal of logical argument is to establish a mutually satisfactory resolution of a difference of opinion between individuals.

[edit] Argumentative dialogue

Arguments as discussed in the preceding paragraphs are static, such as one might find in a textbook or research article. They serve as a published record of justification for an assertion. Arguments can also be interactive, in which the proposer and the interlocutor have a more symmetrical relationship. The premises are discussed, as well as the validity of the intermediate inferences. For example, consider the following exchange, illustrating the No true Scotsman fallacy:

Argument: “No Scotsman puts sugar on his porridge.”
Reply: “But my friend Angus likes sugar with his porridge.”
Rebuttal: “Ah yes, but no true Scotsman puts sugar on his porridge.”

In this dialogue, the proposer first offers a premise, the premise is challenged by the interlocutor, and finally the proposer offers a modification of the premise. This exchange could be part of a larger discussion, for example a murder trial, in which the defendant is a Scotsman, and it had been established earlier that the murderer was eating sugared porridge when he or she committed the murder.

In argumentative dialogue, the rules of interaction may be negotiated by the parties to the dialogue, although in many cases the rules are already determined by social mores. In the most symmetrical case, argumentative dialogue can be regarded as a process of discovery more than one of justification of a conclusion. Ideally, the goal of argumentative dialogue is for participants to arrive jointly at a conclusion by mutually accepted inferences. In some cases, however, the validity of the conclusion is secondary. For example, emotional outlet, scoring points with an audience, wearing down an opponent, or lowering the sale price of an item may instead be the actual goals of the dialogue. Walton distinguishes several types of argumentative dialogue which illustrate these various goals:

  • Personal quarrel.
  • Forensic debate.
  • Persuasion dialogue.
  • Bargaining dialogue.
  • Action seeking dialogue.
  • Educational dialogue.

Van Eemeren and Grootendorst identify various stages of argumentative dialogue. These stages can be regarded as an argument protocol. In a somewhat loose interpretation, the stages are as follows:

  • Confrontation: Presentation of the problem, such as a debate question or a political disagreement
  • Opening: Agreement on rules, such as for example, how evidence is to be presented, which sources of facts are to be used, how to handle divergent interpretations, determination of closing conditions.
  • Argumentation: Application of logical principles according to the agreed-upon rules
  • Closing: This occurs when the termination conditions are met. Among these could be for example, a time limitation or the determination of an arbiter.

Van Eemeren and Grootendorst provide a detailed list of rules that must be applied at each stage of the protocol. Moreover, in the account of argumentation given by these authors, there are specified roles of protagonist and antagonist in the protocol which are determined by the conditions which set up the need for argument.

Many cases of argument are highly unsymmetrical, although in some sense they are dialogues. A particularly important case of this is political argument.

Much of the recent work on argument theory has considered argumentation as an integral part of language and perhaps the most important function of language (Grice, Searle, Austin, Popper). This tendency has moved argumentation theory away from the realm of pure formal logic.

One of the original contributors to this trend is the philosopher Chaim Perelman, who together with Lucie Olbrechts-Tyteca, introduced the French term La nouvelle rhetorique in 1958 to describe an approach to argument which is not reduced to application of formal rules of inference. Perelman’s view of argumentation is much closer to a juridical one, in which rules for presenting evidence and rebuttals play an important role. Though this would apparently invalidate semantic concepts of truth, this approach seems useful in situations in which the possibility of reasoning within some commonly accepted model does not exist or this possibility has broken down because of ideological conflict. Retaining the notion enunciated in the introduction to this article that logic usually refers to the structure of argument, we can regard the logic of rhetoric as a set of protocols for argumentation.

[edit] Other theories

In recent decades one of the more influential discussions of philosophical arguments is that by Nicholas Rescher in his book The Strife of Systems. Rescher models philosophical problems on what he calls aporia or an aporetic cluster: a set of statements, each of which has initial plausibility but which are jointly inconsistent. The only way to solve the problem, then, is to reject one of the statements. If this is correct, it constrains how philosophical arguments are formulated.

[edit] References

  • Robert Audi, Epistemology, Routledge, 1998. Particularly relevant is Chapter 6, which explores the relationship between knowledge, inference and argument.
  • J. L. Austin How to Do Things With Words, Oxford University Press, 1976.
  • H. P. Grice, Logic and Conversation in The Logic of Grammar, Dickenson, 1975.
  • Vincent F. Hendricks, Thought 2 Talk: A Crash Course in Reflection and Expression, New York: Automatic Press / VIP, 2005, ISBN 87-991013-7-8
  • R. A. DeMillo, R. J. Lipton and A. J. Perlis, Social Processes and Proofs of Theorems and Programs, Communications of the ACM, Vol. 22, No. 5, 1979. A classic article on the social process of acceptance of proofs in mathematics.
  • Yu. Manin, A Course in Mathematical Logic, Springer Verlag, 1977. A mathematical view of logic. This book is different from most books on mathematical logic in that it emphasizes the mathematics of logic, as opposed to the formal structure of logic.
  • Ch. Perelman and L. Olbrechts-Tyteca, The New Rhetoric, Notre Dame, 1970. This classic was originally published in French in 1958.
  • Henri Poincaré, Science and Hypothesis, Dover Publications, 1952
  • Frans van Eemeren and Rob Grootendorst, Speech Acts in Argumentative Discussions, Foris Publications, 1984.
  • K. R. Popper Objective Knowledge; An Evolutionary Approach, Oxford: Clarendon Press, 1972.
  • L. S. Stebbing, A Modern Introduction to Logic, Methuen and Co., 1948. An account of logic that covers the classic topics of logic and argument while carefully considering modern developments in logic.
  • Douglas Walton, Informal Logic: A Handbook for Critical Argumentation, Cambridge, 1998
  • Carlos Chesñevar, Ana Maguitman and Ronald Loui, Logical Models of Argument, ACM Computing Surveys, vol. 32, num. 4, pp. 337-383, 2000.
  • T. Edward Damer. Attacking Faulty Reasoning, 5th Edition, Wadsworth, 2005. ISBN 0-534-60516-8

[edit] See also

