
Archive for the ‘Jacques Lacan’ Category

Gödel's incompleteness theorems:

Halting problem

From Wikipedia, the free encyclopedia

The halting problem for Turing machines is the best-known example of an undecidable problem. It consists of determining whether a Turing machine will halt on a given input or instead run forever. It was the first problem formally proved to have no solution.

The notion of an undecidable or unsolvable problem applies to decision problems, that is, problems whose answer for each instance is either yes or no. Within this class there is a set of problems to which no answer can be assigned, neither affirmative nor negative: no algorithm exists that determines, for every instance, what the answer is.

One reason it is important to know that the halting problem has no solution is that it lets us decide whether other problems are solvable. The reasoning goes as follows: if, by assuming that some problem is decidable, we can show that the halting problem would then have a solution, we may conclude, by reductio ad absurdum, that the problem in question is in fact undecidable. (A sketch of this reduction pattern follows.)
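As an illustration of this reduction pattern (added here, not part of the article), suppose we had a decider `prints_hello` for the question "does this program ever print 'hello'?". The sketch below, with invented names and programs represented as Python source strings, shows that such a decider would let us solve the halting problem, so it cannot exist.

```python
def prints_hello(program_source: str) -> bool:
    """Hypothetical decider: 'does this program ever print "hello"?'.
    It cannot actually exist; the stub only marks the assumption."""
    raise NotImplementedError("assumed only for the sake of argument")

def halts(program_source: str) -> bool:
    """If `prints_hello` existed, this would decide the halting problem
    for programs that take no input -- a contradiction."""
    wrapper = (
        "import io, contextlib\n"
        "with contextlib.redirect_stdout(io.StringIO()):\n"
        f"    exec({program_source!r})  # run the original program, silenced\n"
        "print('hello')                 # reached only if the program halts\n"
    )
    return prints_hello(wrapper)
```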

Definition

Let M be an arbitrary Turing machine with input alphabet Σ, and let w ∈ Σ*. Can it be decided whether the machine M halts on input w?

Solution

The answer to this question is negative: it cannot be determined whether a Turing machine halts on an arbitrary input.

Proof

To prove this, suppose that the halting problem had a solution; that is, suppose there exists a Turing machine capable of determining whether another Turing machine halts on a given input.

Consider a Turing machine P that receives as input a Turing machine M and a string w, encoded on the tape one after the other (Mw), and that runs M on the string w. P halts and accepts its input if M halts on w, and halts and rejects it if M does not halt on w.

We modify P to obtain an equivalent machine P′. This machine does not halt if M halts on w, and halts if M does not halt on w. Note that this modification is trivial in terms of Turing machines.

Now we build a machine D that works as follows. It receives a machine M and passes it through a copying machine that writes a second copy of M after the first, so that at the output of the copier the tape contains MM (the machine's encoding repeated). D then takes this result and feeds it to P′. In this way we are trying to decide whether the machine M halts on input M: if M halts on input M, then D does not halt, and if M does not halt on input M, then D halts. Note that the copying machine is not hard to implement.

Finally, take the machine D (call this copy SD) and give it the machine D itself as input. SD feeds the machine it receives to that same machine as input. So, in principle, SD halts if D does not halt on input D, and does not halt if D halts on input D. But since D = SD, this leads to a contradiction: applying D to SD must give the same result as applying D to D, and likewise in the other case. Therefore the halting problem has no solution.

All the machines built along the way in this proof are, with the exception of P, relatively easy to construct, so the key to the proof lies, by reductio ad absurdum, precisely in P, the machine that carried the assumption that the problem is solvable. (The same diagonal argument is sketched in code below.)
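The following Python sketch is not part of the original article; it mirrors the construction above with program source strings standing in for Turing-machine encodings. The function `halts` plays the role of P (the assumed decider), and `d` plays the role of the copier composed with P′, i.e. the machine D. All names are invented for illustration.

```python
def halts(machine_source: str, data: str) -> bool:
    """Role of P: assumed decider for 'does this machine halt on this input?'.
    No such function can exist; the stub only marks the assumption."""
    raise NotImplementedError("assumed only for the sake of contradiction")

def d(machine_source: str) -> None:
    """Role of D: duplicate the input (M becomes the pair (M, M)), ask P,
    then behave like P' -- loop forever if M halts on M, halt otherwise."""
    if halts(machine_source, machine_source):
        while True:          # P': do not halt when M halts on M
            pass
    return                   # P': halt when M does not halt on M

# SD is D applied to its own description: d(<source of d>) would halt
# exactly when it does not halt, so the assumed `halts` cannot exist.
```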


Computability theory (computer science)

From Wikipedia, the free encyclopedia


In computer science, computability theory is the branch of the theory of computation that studies which problems are computationally solvable using different models of computation.

Computability theory differs from the related discipline of computational complexity theory, which deals with the question of how efficiently a problem can be solved, rather than whether it is solvable at all.


Introduction

A central question of computer science concerns the limits of computing devices. One approach to this question is to understand which problems we can use computers to solve. Modern computing devices often seem to possess infinite capacity for calculation, and it is easy to imagine that, given enough time, we might use computers to solve any problem. However, it is possible to show clear limits on the ability of computers, even given arbitrarily vast computational resources, to solve even seemingly simple problems. Problems are formally expressed as decision problems: the task of computing a mathematical function that, for each input, returns either 0 or 1. If the value of the function on an input is 0, the answer is "no"; otherwise the answer is "yes".

To explore this area, computer scientists invented automata theory, which addresses problems such as the following: given a formal language and a string, is the string a member of that language? This is a somewhat esoteric way of asking the question, so an example is illuminating. We might define our language as the set of all strings of digits that represent a prime number; asking whether an input string is a member of this language is then equivalent to asking whether the number it represents is prime. Similarly, we might define a language as the set of all palindromes, or as the set of all strings consisting only of the letter 'a'. In these examples it is easy to see that constructing a computer to solve one problem is easier in some cases than in others (the membership tests sketched below make this concrete).
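As a concrete illustration (added here, not in the original), languages can be represented as membership predicates, i.e. the 0/1 decision functions mentioned above:

```python
def is_palindrome(s: str) -> bool:
    """Membership test for the language of palindromes."""
    return s == s[::-1]

def is_prime_string(s: str) -> bool:
    """Membership test for the language of decimal strings
    that represent a prime number."""
    if not s.isdigit():
        return False
    n = int(s)
    if n < 2:
        return False
    k = 2
    while k * k <= n:
        if n % k == 0:
            return False
        k += 1
    return True

print(is_palindrome("abba"), is_prime_string("97"))  # True True
```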

But in what real sense is this observation true? Can we define formally how hard a particular problem is to solve on a computer? Answering exactly this question is the goal of the computability theory of automata.

Formal models of computation

In order to begin to answer the central question of automata theory, it is necessary to define in a formal way what an automaton is. There are a number of useful models of automata. Some widely known models are:

Deterministic finite state machine
Also called a deterministic finite automaton (DFA), or simply a finite state machine. All real computing devices in existence today can be modeled as finite state machines, since all real computers operate on finite resources. Such a machine has a set of states and a set of state transitions driven by the input stream; certain states are designated as accepting states. The input stream is fed into the machine one symbol at a time, and the transition defined for the current state and the current input symbol determines the next state. If the machine is in an accepting state when the end of the input stream is reached, the whole input stream is accepted. (A minimal simulator sketch appears after this list.)
Nondeterministic finite state machine
Similarly called a nondeterministic finite automaton (NFA), this is another simple model of computation, although its processing sequence is not uniquely determined. It can be interpreted as taking multiple paths of computation simultaneously through a finite number of states. However, it can be proved that every NFA is reducible to an equivalent DFA.
Pushdown automaton
Similar to the finite state machine, except that it has available an execution stack, which is allowed to grow to arbitrary size. The state transitions additionally specify whether to add a symbol to the stack, or to remove a symbol from the stack. It is more powerful than a DFA due to its infinite-memory stack, although only some information in the stack is ever freely accessible.
Turing machine
Also similar to the finite state machine, except that the input is provided on an execution “tape”, which the Turing machine can read from, write to, or move back and forth past its read/write “head”. The tape is allowed to grow to arbitrary size. The Turing machine is capable of performing complex calculations which can have arbitrary duration. This model is perhaps the most important model of computation in computer science, as it simulates computation in the absence of predefined resource limits.
Multi-tape Turing machine
Here, there may be more than one tape; moreover there may be multiple heads per tape. Surprisingly, any computation that can be performed by this sort of machine can also be performed by an ordinary Turing machine, although the latter may be slower or require a larger total region of its tape.
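The simulator referred to above is a minimal sketch added for illustration; the transition table encodes an example DFA, assumed for the purpose of the demonstration, that accepts binary strings containing an even number of '1's.

```python
def run_dfa(transitions, start, accepting, input_string):
    """Simulate a deterministic finite automaton: follow one transition
    per input symbol and accept iff the final state is accepting."""
    state = start
    for symbol in input_string:
        state = transitions[(state, symbol)]
    return state in accepting

# Example DFA over {0, 1} accepting strings with an even number of '1's.
transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}
print(run_dfa(transitions, "even", {"even"}, "1011"))  # False: three '1's
print(run_dfa(transitions, "even", {"even"}, "1001"))  # True: two '1's
```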

Power of automata

With these computational models in hand, we can determine what their limits are. That is, what classes of languages can they accept?

Power of finite state machines

Computer scientists call any language that can be accepted by a finite state machine a regular language. Because of the restriction that the number of possible states in a finite state machine is finite, we can see that to find a language that is not regular, we must construct a language that would require an infinite number of states.

An example of such a language is the set of all strings consisting of the letters ‘a’ and ‘b’ which contain an equal number of the letter ‘a’ and ‘b’. To see why this language cannot be correctly recognized by a finite state machine, assume first that such a machine M exists. M must have some number of states n. Now consider the string x consisting of (n + 1) ‘a’s followed by (n + 1) ‘b’s.

As M reads in x, there must be some state in the machine that is repeated as it reads in the first series of 'a's, since there are (n + 1) 'a's and only n states, by the pigeonhole principle. Call this state S, and let d be the number of 'a's that the machine read in order to get from the first occurrence of S to some subsequent occurrence during the 'a' sequence. At that second occurrence of S, we can add a further d 'a's (where d > 0) and be at state S again, so a string of (n + d + 1) 'a's must leave the machine in the same state as a string of (n + 1) 'a's. Since x has equal numbers of 'a's and 'b's, M must accept x; but then M must also accept the string of (n + d + 1) 'a's followed by (n + 1) 'b's, which is not in the language of strings containing an equal number of 'a's and 'b's.

We know, therefore, that this language cannot be accepted correctly by any finite state machine, and is thus not a regular language. A more general form of this result is called the Pumping lemma for regular languages, which can be used to show that broad classes of languages cannot be recognized by a finite state machine.

Power of pushdown automata

Computer scientists call a language that can be accepted by a pushdown automaton a context-free language, which can be specified as a context-free grammar. The language consisting of strings with equal numbers of 'a's and 'b's, which we showed was not regular, can be decided by a pushdown automaton (a counter-style sketch follows). Also, in general, a pushdown automaton can behave just like a finite-state machine, so it can decide any regular language. This model of computation is thus strictly more powerful than finite state machines.
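A minimal sketch (added for illustration) of how a pushdown automaton can decide the equal-numbers language: the stack serves as an unbounded counter, which no finite state machine can maintain.

```python
def equal_as_and_bs(s: str) -> bool:
    """PDA-style recognizer: accept strings over {a, b} with equally
    many 'a's and 'b's, using a stack as unbounded memory."""
    stack = []
    for ch in s:
        if ch not in "ab":
            return False
        if stack and stack[-1] != ch:
            stack.pop()          # an 'a' cancels a pending 'b', and vice versa
        else:
            stack.append(ch)     # remember the unmatched symbol
    return not stack             # accept iff nothing is left unmatched

print(equal_as_and_bs("aabbba"))  # True: three 'a's, three 'b's
print(equal_as_and_bs("aab"))     # False
```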

However, it turns out there are languages that cannot be decided by a pushdown automaton either. The result is similar to the one for regular languages and won't be detailed here; there is a pumping lemma for context-free languages. An example of such a language is the set of prime numbers.

Power of Turing machines

Turing machines can decide any context-free language, in addition to languages not decidable by a pushdown automaton, such as the language consisting of prime numbers. The Turing machine is therefore a strictly more powerful model of computation.

Because Turing machines have the ability to “back up” in their input tape, it is possible for a Turing machine to run for a long time in a way that is not possible with the other computation models previously described. It is possible to construct a Turing machine that will never finish running (halt) on some inputs. We say that a Turing machine can decide a language if it eventually will halt on all inputs and give an answer. A language that can be so decided is called a recursive language. We can further describe Turing machines that will eventually halt and give an answer for any input in a language, but which may run forever for input strings which are not in the language. Such Turing machines could tell us that a given string is in the language, but we may never be sure based on its behavior that a given string is not in a language, since it may run forever in such a case. A language which is accepted by such a Turing machine is called a recursively enumerable language.
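For illustration (not in the original), here is a recognizer in the recursively-enumerable style: it halts and accepts when a witness is found, but may search forever when none exists. The specific problem (solvability of an equation over the natural numbers) is chosen only as an example.

```python
from itertools import count

def accepts_solution(poly) -> bool:
    """Semi-decider: halt and return True if poly(x, y) == 0 for some
    natural numbers x, y; otherwise search forever (never return False)."""
    for total in count():              # enumerate pairs by increasing x + y
        for x in range(total + 1):
            y = total - x
            if poly(x, y) == 0:
                return True            # witness found: halt and accept

# Halts: x**2 - y == 0 has the solution x = 0, y = 0.
print(accepts_solution(lambda x, y: x**2 - y))
# Would run forever (do not call): no naturals satisfy x + y + 1 == 0.
# accepts_solution(lambda x, y: x + y + 1)
```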

The Turing machine, it turns out, is an exceedingly powerful model of automata. Attempts to amend the definition of a Turing machine to produce a more powerful machine have surprisingly met with failure. For example, adding an extra tape to the Turing machine, or giving it a 2-dimensional (or 3- or higher-dimensional) infinite surface to work with, can be simulated by a Turing machine with the basic 1-dimensional tape. These models are thus not more powerful. In fact, a consequence of the Church-Turing thesis is that there is no reasonable model of computation which can decide languages that cannot be decided by a Turing machine.

The question to ask then is: do there exist languages which are recursively enumerable, but not recursive? And, furthermore, are there languages which are not even recursively enumerable?

The halting problem

Main article: Halting problem

The halting problem is one of the most famous problems in computer science, because it has profound implications on the theory of computability and on how we use computers in everyday practice. The problem can be phrased:

Given a description of a Turing machine and its initial input, determine whether the program, when executed on this input, ever halts (completes). The alternative is that it runs forever without halting.

Here we are asking not a simple question about a prime number or a palindrome, but we are instead turning the tables and asking a Turing machine to answer a question about another Turing machine. It can be shown (See main article: Halting problem) that it is not possible to construct a Turing machine that can answer this question in all cases.

That is, the only general way to know for sure whether a given program will halt on a particular input is simply to run it and see whether it halts. If it does halt, then you know it halts. If it has not halted yet, however, you may never know whether it eventually will. The language consisting of all Turing machine descriptions, paired with the input streams on which those Turing machines eventually halt, is not recursive. The halting problem is therefore called non-computable or undecidable.

A generalization of the halting problem is Rice's theorem, which states that it is undecidable (in general) whether a given language possesses any specific nontrivial property.

Beyond recursive languages

The halting problem can, however, easily be recognized (accepted) if we allow the Turing machine that answers it to run forever when given input representing a Turing machine that does not itself halt. The halting language is therefore recursively enumerable. It is possible, however, to construct languages which are not even recursively enumerable.

A simple example of such a language is the complement of the halting language: the language consisting of all pairs of a Turing machine and an input string such that the Turing machine does not halt on that input. To see that this language is not recursively enumerable, suppose there were a Turing machine M that halts and gives a definite answer whenever the given machine does not halt on its input, but that may run forever when the given machine does halt. We could then construct another Turing machine M′ that interleaves two computations: a direct simulation of the machine given in the input, run on its input, and a simulation of M on that same pair. The direct simulation eventually halts if the input machine halts, and by assumption the simulation of M eventually halts if the input machine never halts, so one of the two interleaved branches always halts, and which one halts tells us the answer. M′ would thus be a decider for the halting problem. We have previously shown, however, that the halting problem is undecidable. This contradiction shows that our assumption that M exists is incorrect: the complement of the halting language is not recursively enumerable.
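A minimal sketch (added here, not part of the article) of the interleaving technique, using Python generators as step-by-step computations; `run_interleaved` alternates one step of each until either finishes. The two generators stand in for the two simulations in the argument above.

```python
def run_interleaved(gen_a, gen_b):
    """Advance two step-by-step computations alternately and report
    which one finishes first (dovetailing two simulations)."""
    while True:
        try:
            next(gen_a)
        except StopIteration:
            return "A halted"
        try:
            next(gen_b)
        except StopIteration:
            return "B halted"

def finite_computation(steps):
    for _ in range(steps):
        yield                    # one 'step' of work

def endless_computation():
    while True:
        yield                    # never finishes

print(run_interleaved(endless_computation(), finite_computation(5)))
# -> "B halted": the interleaving terminates even though A never would.
```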

Concurrency-based models

A number of computational models based on concurrency have been developed, including the Parallel Random Access Machine and the Petri net. These models of concurrent computation still do not implement any mathematical functions that cannot be implemented by Turing machines.

Unreasonable models of computation

The Church-Turing thesis conjectures that there is no reasonable model of computing that can compute more mathematical functions than a Turing machine. In this section we will explore some of the “unreasonable” ideas for computational models which violate this conjecture. Computer scientists have imagined many varieties of hypercomputers.

Infinite execution

Imagine a machine where each step of the computation requires half the time of the previous step. If we normalize the time required for the first step to 1 time unit, the execution would require

1 + 1/2 + 1/4 + 1/8 + ⋯

time to run. This infinite series converges to 2 time units, which means that this machine can carry out an infinite execution in 2 time units. Such a machine is capable of deciding the halting problem by directly simulating the execution of the machine in question. By extension, any convergent series would work: if the series converges to a value n, the machine completes an infinite execution in n time units.
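A quick numerical check of the convergence (added for illustration only):

```python
# Partial sums of 1 + 1/2 + 1/4 + ... approach the limit 2.
total = 0.0
for n in range(20):
    total += 0.5 ** n
print(total)   # 1.9999980926513672, approaching 2
```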

Oracle machines

Main article: Oracle machine

So-called Oracle machines have access to various “oracles” which provide the solution to specific undecidable problems. For example, the Turing machine may have a “halting oracle” which answers immediately whether a given Turing machine will ever halt on a given input. These machines are a central topic of study in recursion theory.
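A sketch (added for illustration, with invented names) of how an oracle is used as a black box: given a halting oracle, finitely many queries suffice to decide questions such as "does this machine halt on every binary input of length at most k?", which no ordinary Turing machine can decide.

```python
from itertools import product
from typing import Callable

def halting_oracle(machine_source: str, data: str) -> bool:
    """Black-box halting oracle; its answers are given, not computed.
    No implementation is possible -- the stub only marks the assumption."""
    raise NotImplementedError("oracle answers are postulated")

def halts_on_all_short_inputs(machine_source: str, k: int,
                              oracle: Callable[[str, str], bool]) -> bool:
    """Decide, with finitely many oracle queries, whether the machine
    halts on every binary input of length at most k."""
    for length in range(k + 1):
        for bits in product("01", repeat=length):
            if not oracle(machine_source, "".join(bits)):
                return False
    return True
```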

Limits of hyper-computation

Even these machines, which seemingly represent the limit of automata that we could imagine, run into their own limitations. While each of them can solve the halting problem for a Turing machine, they cannot solve their own version of the halting problem. For example, an Oracle machine cannot answer the question of whether a given Oracle machine will ever halt.

History of computability theory

The lambda calculus, an important precursor to formal computability theory, was developed by Alonzo Church and Stephen Cole Kleene. Alan Turing is most often considered the father of modern computer science, and laid many of the important foundations of computability and complexity theory, including the first description of the Turing machine (in [1], 1936) as well as many of the important early results.


Gödel’s incompleteness theorems

From Wikipedia, the free encyclopedia

Relationship with computability

As early as 1943, Kleene gave a proof of Gödel's incompleteness theorem using basic results of computability theory.[8] A basic result of computability shows that the halting problem is unsolvable: there is no computer program that can correctly determine, given a program P as input, whether P eventually halts when run with no input. Kleene showed that the existence of a complete effective theory of arithmetic with certain consistency properties would force the halting problem to be decidable, a contradiction. An exposition of this proof at the undergraduate level was given by Charlesworth (1980).[9]

By enumerating all possible proofs, it is possible to enumerate all the provable consequences of any effective first-order theory. This makes it possible to search for proofs of a certain form. Moreover, the method of arithmetization introduced by Gödel can be used to show that any sufficiently strong theory of arithmetic can represent the workings of computer programs. In particular, for each program P there is a formula Q such that Q expresses the idea that P halts when run with no input. The formula Q says, essentially, that there is a natural number that encodes the entire computation history of P and that this history ends with P halting.

If, for every such formula Q, either Q or the negation of Q was a logical consequence of the axiom system, then it would be possible, by enumerating enough theorems, to determine which of these is the case. In particular, for each program P, the axiom system would either prove “P halts when run with no input,” or “P doesn’t halt when run with no input.”
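A sketch (added for illustration, with invented helper names) of the decision procedure that this argument shows cannot exist: if the theory were complete, enumerating all proofs would eventually settle, for any program P, which of the two halting statements is a theorem.

```python
def decide_halting_by_proof_search(program_p, proofs_of_theory):
    """Hypothetical: if every statement 'P halts' or 'P does not halt'
    were provable in a complete effective theory, enumerating proofs
    would decide the halting problem -- which is impossible.

    `proofs_of_theory` is an (assumed) generator of pairs
    (proved_statement, proof); both helpers are fictional."""
    halts_stmt = f"program {program_p} halts when run with no input"
    loops_stmt = f"program {program_p} does not halt when run with no input"
    for statement, _proof in proofs_of_theory:
        if statement == halts_stmt:
            return True
        if statement == loops_stmt:
            return False
    # Unreachable for a complete theory: one of the two is always proved.
```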

Consistency assumptions imply that the axiom system is correct about these theorems. If the axioms prove that a program P doesn’t halt when the program P actually does halt, then the axiom system is inconsistent, because it is possible to use the complete computation history of P to make a proof that P does halt. This proof would just follow the computation of P step-by-step until P halts after a finite number of steps.

The mere consistency of the axiom system is not enough to obtain a contradiction, however, because a consistent axiom system could still prove the ω-inconsistent theorem that a program halts, when it actually doesn’t halt. The assumption of ω-consistency implies, however, that if the axiom system proves a program doesn’t halt then the program actually does not halt. Thus if the axiom system was consistent and ω-consistent, its proofs about which programs halt would correctly reflect reality. Thus it would be possible to effectively decide which programs halt by merely enumerating proofs in the system; this contradiction shows that no effective, consistent, ω-consistent formal theory of arithmetic that is strong enough to represent the workings of a computer can be complete.


http://es.wikipedia.org/wiki/Teor%C3%ADa_de_la_computabilidad  

Computability theory

From Wikipedia, the free encyclopedia

Computability theory is the part of the theory of computation that studies which decision problems can be solved by an algorithm, or, equivalently, by a Turing machine. Computability theory is concerned with four questions:

  • What problems can a Turing machine solve?
  • What other formalisms are equivalent to Turing machines?
  • What problems require more powerful machines?
  • What problems require less powerful machines?

Computational complexity theory, by contrast, classifies the computable functions according to the use they make of various resources on various types of machine.


Background

The origin of abstract models of computation lies in the 1930s (before modern computers existed), in the work of the logicians Alonzo Church, Kurt Gödel, Stephen Kleene, Emil Leon Post and Alan Turing. This early work has had a profound influence both on theoretical development and on many aspects of computing practice, anticipating, among other things, the existence of general-purpose computers, the possibility of interpreting programs, the duality between software and hardware, and the representation of languages by formal structures based on production rules.

The starting point of this early work was the set of fundamental questions that David Hilbert posed in 1900 at an international congress.

What Hilbert sought was to create a complete and consistent formal mathematical system in which every assertion could be stated precisely. His intention was to find an algorithm that would determine the truth or falsity of any proposition in the formal system. This problem became known as the Entscheidungsproblem. Had Hilbert achieved his objective, any well-defined problem could have been solved simply by running that algorithm.

But others showed, through a series of investigations, that this was not possible. Against this idea, K. Gödel published his famous first incompleteness theorem. It states that any consistent first-order system that contains the theorems of arithmetic and whose set of axioms is recursive is not complete. Gödel constructed a formula that is true but cannot be proved within the system. As a consequence, the formal system Hilbert wanted cannot be found within first-order logic, unless a non-recursive set of axioms is taken.

A later, more general version of Gödel's incompleteness theorem states that no deductive system that contains the theorems of arithmetic, and whose axioms are recursively enumerable, can be both consistent and complete. This suggests, at an intuitive level, that a formal system of the kind Hilbert envisaged cannot be defined.

What problems can a Turing machine solve?

Not every problem can be solved. An undecidable problem is one that cannot be solved by an algorithm even if unlimited time and space are available. Many undecidable problems are now known, for example:

  • The Entscheidungsproblem (German for "decision problem"), defined as: given a sentence of the first-order predicate calculus, decide whether it is a theorem. Church and Turing proved independently that this problem is undecidable.
  • The halting problem, defined as: given a program and its input, decide whether the program will terminate on that input or run forever. Turing proved that it is an undecidable problem.
  • A computable number is a real number that can be approximated by an algorithm to any desired degree of accuracy. Turing proved that almost all real numbers are not computable. For example, Chaitin's constant is not computable even though it is well defined. (A sketch of a computable number appears below.)
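For contrast, here is a sketch (added for illustration) of a computable real: an algorithm that approximates √2 to within any requested accuracy 10⁻ᵏ, which is exactly what the definition asks for.

```python
from fractions import Fraction

def sqrt2_approx(k: int) -> Fraction:
    """Return a rational within 10**-k of sqrt(2): the interval-halving
    search witnesses that sqrt(2) is a computable number."""
    eps = Fraction(1, 10 ** k)
    lo, hi = Fraction(1), Fraction(2)          # sqrt(2) lies in [1, 2]
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

print(float(sqrt2_approx(6)))   # 1.414213..., within 10**-6 of sqrt(2)
```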

What other formalisms are equivalent to Turing machines?

The formal languages that are accepted by a Turing machine are exactly those that can be generated by a formal grammar. The lambda calculus is a way of defining functions, and the functions that can be computed with the lambda calculus are exactly those that can be computed with a Turing machine. These three formalisms, Turing machines, formal languages and the lambda calculus, are very different and were developed by different people, yet they are all equivalent and have the same expressive power. This remarkable coincidence is usually taken as evidence for the Church-Turing thesis: the claim that the intuitive notion of an algorithm, or effective procedure of computation, corresponds to the notion of computation on a Turing machine. (See the sketch below.)
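As a small illustration (not from the article), lambda-calculus-style computation can be expressed directly with anonymous functions; here Church numerals encode natural numbers as iterated function application, and addition is definable purely with lambdas.

```python
# Church numerals: n is encoded as the function f -> (x -> f applied n times).
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(church) -> int:
    """Decode a Church numeral by counting function applications."""
    return church(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(plus(two)(three)))   # 5
```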

Electronic computers based on the Eckert-Mauchly architecture, as well as quantum machines, would have exactly the same expressive power as a Turing machine if they had unlimited resources of time and space. As a consequence, programming languages have at most the expressive power of programs for a Turing machine, and in practice not all of them reach it. Languages with expressive power equivalent to that of a Turing machine are called Turing complete.

Among the formalisms equivalent to a Turing machine are, for example, multi-tape Turing machines, the lambda calculus, the partial recursive functions, register machines, unrestricted formal grammars, and nondeterministic, probabilistic and quantum Turing machines.

The last three examples use a slightly different definition of acceptance of a language: they accept a word if any computation accepts it (in the case of nondeterminism), or if the majority of computations accept it (for the probabilistic and quantum versions). With these definitions, these machines have the same expressive power as a Turing machine.

What problems require more powerful machines?

Some machines are considered more powerful than Turing machines. For example, an oracle machine uses a black box that can compute a particular function that is not computable with a Turing machine. The computational strength of an oracle machine is described by its Turing degree. The theory of real computation studies machines with absolute precision on the real numbers. Within this theory it is possible to prove interesting statements, such as "the complement of the Mandelbrot set is only partially decidable".


Consistency proof

From Wikipedia, the free encyclopedia


In mathematical logic, a logical system is consistent if it does not contain a contradiction, or, more precisely, for no proposition φ is it the case that both φ and ¬φ are theorems of that system.

A consistency proof is a mathematical proof that a logical system is consistent. The early development of mathematical proof theory was driven by the desire to provide finitary consistency proofs for all of mathematics as part of Hilbert’s program. Hilbert’s program fell to Gödel’s insight, as expressed in his two incompleteness theorems, that sufficiently strong proof theories cannot prove their own consistency.

Although consistency can be proved by means of model theory, it is often done in a purely syntactical way, without any need to refer to a model of the logic. Cut-elimination (or, equivalently, normalization of the underlying calculus, if there is one) implies the consistency of the calculus: since there is obviously no cut-free proof of falsity, there is no contradiction at all.


Consistency and completeness

The fundamental results relating consistency and completeness were proven by Kurt Gödel:

  • Gödel's completeness theorem: a set of first-order sentences is consistent if and only if it has a model.
  • Gödel's incompleteness theorems: a consistent, effectively axiomatized theory strong enough to express elementary arithmetic is incomplete and, in particular, cannot prove its own consistency.

By applying these ideas, we see that we can find first-order theories of the following four kinds:

  1. Inconsistent theories, which have no models;
  2. Theories which cannot talk about their own provability relation, such as Tarski’s axiomatisation of point and line geometry, and Presburger arithmetic. Since these theories are satisfactorily described by the model we obtain from the completeness theorem, such systems are complete;
  3. Theories which can talk about their own consistency, and which include the negation of the sentence asserting their own consistency. Such theories are consistent, and therefore have a model obtained from the completeness theorem, yet they contain as a theorem the assertion that a contradiction is derivable from them, in apparent tension with the fact that they are consistent;
  4. Essentially incomplete theories.

In addition, it has recently been discovered that there is a fifth class of theory, the self-verifying theories, which are strong enough to talk about their own provability relation, but are too weak to carry out Gödelian diagonalisation, and so which can consistently prove their own consistency. However as with any theory, a theory proving its own consistency provides us with no interesting information, since inconsistent theories also prove their own consistency.

Formulas

A set of formulas Φ in first-order logic is consistent (written Con Φ) if and only if there is no formula φ such that Φ ⊢ φ and Φ ⊢ ¬φ. Otherwise Φ is inconsistent and is written Inc Φ.

Φ is said to be simply consistent iff for no formula φ of Φ are both φ and the negation of φ theorems of Φ.

Φ is said to be absolutely consistent or Post consistent iff at least one formula of Φ is not a theorem of Φ.

Φ is said to be maximally consistent if and only if for every formula φ, if Con(Φ ∪ {φ}) then φ ∈ Φ.

Φ is said to contain witnesses if and only if for every formula of the form ∃x φ there exists a term t such that (∃x φ → φ[t/x]) ∈ Φ, where φ[t/x] denotes φ with the term t substituted for x. See First-order logic.

Basic Results

1. The following are equivalent:

(a) Inc Φ

(b) For all φ, Φ ⊢ φ.

2. Every satisfiable set of formulas is consistent, where a set of formulas Φ is satisfiable if and only if there exists a model 𝔍 such that 𝔍 ⊨ Φ.

3. For all Φ and φ:

(a) if not Φ ⊢ φ, then Con(Φ ∪ {¬φ});

(b) if Con Φ and Φ ⊢ φ, then Con(Φ ∪ {φ});

(c) if Con Φ, then Con(Φ ∪ {φ}) or Con(Φ ∪ {¬φ}).

4. Let Φ be a maximally consistent set of formulas and suppose it contains witnesses. For all φ and ψ:

(a) if Φ ⊢ φ, then φ ∈ Φ,

(b) either φ ∈ Φ or ¬φ ∈ Φ,

(c) (φ ∨ ψ) ∈ Φ if and only if φ ∈ Φ or ψ ∈ Φ,

(d) if (φ → ψ) ∈ Φ and φ ∈ Φ, then ψ ∈ Φ,

(e) ∃x φ ∈ Φ if and only if there is a term t such that φ[t/x] ∈ Φ.

Henkin's Theorem

Let Φ be a maximally consistent set of formulas containing witnesses.

Define a binary relation ~ on the set of S-terms by: t0 ~ t1 if and only if the equation t0 ≡ t1 belongs to Φ. Let [t] denote the equivalence class of terms containing t, and let T(Φ) := { [t] : t an S-term }, where the S-terms are the terms built from the symbol set S.

Define the S-structure T(Φ), the term structure corresponding to Φ, by:

(1) for an n-ary relation symbol R ∈ S, R holds of ([t0], …, [t(n−1)]) in T(Φ) if and only if (R t0 … t(n−1)) ∈ Φ;

(2) for an n-ary function symbol f ∈ S, f applied to ([t0], …, [t(n−1)]) in T(Φ) is defined as [f t0 … t(n−1)];

(3) for a constant symbol c ∈ S, its interpretation in T(Φ) is [c].

Let I(Φ) := (T(Φ), β) be the term interpretation associated with Φ, where β(x) := [x] for each variable x.

Then, for all φ: I(Φ) ⊨ φ if and only if φ ∈ Φ.

Sketch of Proof

There are several things to verify. First, that ~ is indeed an equivalence relation. Then it must be verified that (1), (2) and (3) are well defined. This falls out of the fact that ~ is an equivalence relation, and it also requires a proof that (1) and (2) are independent of the choice of the class representatives t0, …, t(n−1). Finally, the equivalence "I(Φ) ⊨ φ if and only if φ ∈ Φ" can be verified by induction on formulas.


http://es.wikipedia.org/wiki/Consistencia_l%C3%B3gica  

  

Logical consistency

From Wikipedia, the free encyclopedia

Logical consistency is a property of a set of axioms. A set of axioms is said to be consistent if a proposition (p) and its negation (¬p, not-p) cannot both be deduced from it. By Gödel's incompleteness theorem we know that, for systems of a certain complexity, this property is related to completeness.

Applied to an argument, consistency is the requirement that all the premises can be true together; only then can the argument, if it is consistent, be assessed as valid or invalid.

Applied to discourse, consistency means that the logical implications of the discourse are not self-contradictory.
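As a small illustration (added here), in the propositional case a finite set of formulas is consistent whenever it is satisfiable, and satisfiability can be checked by brute force over truth assignments; the example formulas below are invented for the demonstration.

```python
from itertools import product

def satisfiable(formulas, variables):
    """Brute-force check: is there a truth assignment making every
    formula true?  Each formula is a Python function of the assignment.
    Satisfiable implies consistent: no assignment satisfies both p and not-p."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(f(assignment) for f in formulas):
            return True
    return False

axioms = [
    lambda v: v["p"] or v["q"],          # p or q
    lambda v: not v["p"] or v["r"],      # p implies r
]
print(satisfiable(axioms, ["p", "q", "r"]))                     # True: consistent
print(satisfiable(axioms + [lambda v: v["p"] and not v["p"]],   # add p and not-p
                  ["p", "q", "r"]))                             # False
```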



