Archive for June 19th, 2009

Dear All,

I am not spamming the Wikipedia pages, nor Britannica.

I realize that all of you already know how to read the Wikipedia pages and, moreover, all the scientific literature about the “semantic web”…

I am researching how to build the semantic web in a new and accurate way, from foundations in linguistics, mathematics, logic, computing, philosophy, psychology, psychoanalysis, and many other areas of knowledge, and a lot of people ask me:

Where did you read this or that? So it is useful for them, as feedback, that I post step by step all the research that I am doing!

Do you understand this new way of working and researching?

Please take a look at my theorem, The Limit of the “Semantic Web”, and you will realize what I am saying…

If you take a look at the ninety-two posts below, you will come to understand the entire frame and background that is needed to build the semantic web…

My next step is to review the original paper “Minds, Machines and Gödel” (“Minds, Machines and Gödel: A Retrospect”) and introduce the concept of “Time” into logic…

All of the semantic web research on the Internet is founded on “description logic”, which is a family of “knowledge representation” languages, but as my theorem has established, this is the wrong way!

The “right way”, then, rests on foundations in linguistics, mathematics, logic, computing, philosophy, psychology, psychoanalysis, and many other areas of knowledge, where I am researching hard…


Best Regards,

Francisco Antonio Ceron Garcia

fcerong @ gmail.com


Computability theory (computer science)

From Wikipedia, the free encyclopedia


For the branch of mathematical logic called computability theory, see Recursion theory.

In computer science, computability theory is the branch of the theory of computation that studies which problems are computationally solvable using different models of computation.

Computability theory differs from the related discipline of computational complexity theory, which deals with the question of how efficiently a problem can be solved, rather than whether it is solvable at all.



Introduction

A central question of computer science is to address the limits of computing devices. One approach to addressing this question is understanding the problems we can use computers to solve. Modern computing devices often seem to possess infinite capacity for calculation, and it’s easy to imagine that, given enough time, we might use computers to solve any problem. However, it is possible to show clear limits to the ability of computers, even given arbitrarily vast computational resources, to solve even seemingly simple problems. Problems are formally expressed as decision problems: a decision problem asks us to construct a mathematical function that, for each input, returns either 0 or 1. If the value of the function on the input is 0, the answer is “no”; otherwise the answer is “yes”.

To explore this area, computer scientists invented automata theory which addresses problems such as the following: Given a formal language, and a string, is the string a member of that language? This is a somewhat esoteric way of asking this question, so an example is illuminating. We might define our language as the set of all strings of digits which represent a prime number. To ask whether an input string is a member of this language is equivalent to asking whether the number represented by that input string is prime. Similarly, we define a language as the set of all palindromes, or the set of all strings consisting only of the letter ‘a’. In these examples, it is easy to see that constructing a computer to solve one problem is easier in some cases than in others.
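These membership questions are easy to state as predicates. A runnable sketch in Python (the function names are illustrative, not from any particular library):

```python
# Membership in a formal language as a decision problem. Each predicate
# answers: is the string a member of the language?

def is_prime_string(s: str) -> bool:
    """Is s the decimal representation of a prime number?"""
    if not s.isdigit():
        return False
    n = int(s)
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

def is_palindrome(s: str) -> bool:
    """Does s read the same forwards and backwards?"""
    return s == s[::-1]

def is_all_a(s: str) -> bool:
    """Does s consist only of the letter 'a'?"""
    return set(s) <= {"a"}

print(is_prime_string("97"))  # True
print(is_palindrome("abba"))  # True
print(is_all_a("aaab"))       # False
```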

But in what real sense is this observation true? Can we define a formal sense in which we can understand how hard a particular problem is to solve on a computer? It is the goal of the computability theory of automata to answer precisely this question.

Formal models of computation

In order to begin to answer the central question of automata theory, it is necessary to define in a formal way what an automaton is. There are a number of useful models of automata. Some widely known models are:

Deterministic finite state machine
Also called a deterministic finite automaton (DFA), or simply a finite state machine. All real computing devices in existence today can be modeled as a finite state machine, as all real computers operate on finite resources. Such a machine has a set of states, and a set of state transitions which are affected by the input stream. Certain states are defined to be accepting states. An input stream is fed into the machine one character at a time, and the state transitions for the current state are compared to the input stream, and if there is a matching transition the machine may enter a new state. If at the end of the input stream the machine is in an accepting state, then the whole input stream is accepted.
Nondeterministic finite state machine
Similarly called a nondeterministic finite automaton (NFA), it is another simple model of computation, although its processing sequence is not uniquely determined. It can be interpreted as taking multiple paths of computation simultaneously through a finite number of states. However, it is possible to prove that any NFA is reducible to an equivalent DFA.
Pushdown automaton
Similar to the finite state machine, except that it has available an execution stack, which is allowed to grow to arbitrary size. The state transitions additionally specify whether to add a symbol to the stack, or to remove a symbol from the stack. It is more powerful than a DFA due to its infinite-memory stack, although only some information in the stack is ever freely accessible.
Turing machine
Also similar to the finite state machine, except that the input is provided on an execution “tape”, which the Turing machine can read from, write to, or move back and forth past its read/write “head”. The tape is allowed to grow to arbitrary size. The Turing machine is capable of performing complex calculations which can have arbitrary duration. This model is perhaps the most important model of computation in computer science, as it simulates computation in the absence of predefined resource limits.
Multi-tape Turing machine
Here, there may be more than one tape; moreover there may be multiple heads per tape. Surprisingly, any computation that can be performed by this sort of machine can also be performed by an ordinary Turing machine, although the latter may be slower or require a larger total region of its tape.
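The single-tape Turing machine described above can be sketched as a small simulator. This is an illustrative toy, not a standard API: the table maps (state, symbol) to (new state, symbol to write, head move), and the sample table decides the language aⁿbⁿ, which no finite state machine can recognize.

```python
# A tiny Turing-machine simulator. "accept" and "reject" are the two
# halting states; missing table entries default to rejection.

def run_tm(table, tape, state="q0", blank="_"):
    tape = dict(enumerate(tape))   # sparse tape, growable in both directions
    head = 0
    while state not in ("accept", "reject"):
        symbol = tape.get(head, blank)
        state, write, move = table.get((state, symbol), ("reject", symbol, +1))
        tape[head] = write
        head += move               # +1 moves right, -1 moves left
    return state == "accept"

# Decide a^n b^n by repeatedly matching the leftmost 'a' with the
# leftmost 'b', marking them X and Y.
anbn = {
    ("q0", "a"): ("q1", "X", +1),   # mark an 'a', go find its 'b'
    ("q0", "Y"): ("q3", "Y", +1),   # all 'a's matched; verify the rest
    ("q0", "_"): ("accept", "_", +1),
    ("q1", "a"): ("q1", "a", +1),
    ("q1", "Y"): ("q1", "Y", +1),
    ("q1", "b"): ("q2", "Y", -1),   # mark the matching 'b'
    ("q2", "a"): ("q2", "a", -1),
    ("q2", "Y"): ("q2", "Y", -1),
    ("q2", "X"): ("q0", "X", +1),   # return to the next unmarked 'a'
    ("q3", "Y"): ("q3", "Y", +1),
    ("q3", "_"): ("accept", "_", +1),
}

print(run_tm(anbn, "aabb"))  # True
print(run_tm(anbn, "aab"))   # False
```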

Power of automata

With these computational models in hand, we can determine what their limits are. That is, what classes of languages can they accept?

Power of finite state machines


Computer scientists call any language that can be accepted by a finite state machine a regular language. Because of the restriction that the number of possible states in a finite state machine is finite, we can see that to find a language that is not regular, we must construct a language that would require an infinite number of states.
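As a concrete illustration, here is a minimal sketch of both finite machines in Python (names are illustrative): a DFA accepting the regular language of strings over {'a', 'b'} with an even number of 'a's, and an NFA simulated by tracking the set of reachable states, which is precisely the idea behind the subset construction that reduces any NFA to a DFA.

```python
# run_dfa follows exactly one transition per input symbol; run_nfa
# tracks the *set* of currently possible states.

def run_dfa(transitions, start, accepting, stream):
    state = start
    for ch in stream:
        state = transitions[(state, ch)]
    return state in accepting

def run_nfa(transitions, start, accepting, stream):
    states = {start}
    for ch in stream:
        states = {t for s in states for t in transitions.get((s, ch), ())}
    return bool(states & accepting)

# DFA: strings over {'a','b'} with an even number of 'a's.
even_a = {
    ("even", "a"): "odd",  ("even", "b"): "even",
    ("odd",  "a"): "even", ("odd",  "b"): "odd",
}
print(run_dfa(even_a, "even", {"even"}, "abba"))  # True: two 'a's

# NFA: strings whose second-to-last symbol is 'a' (it "guesses" when
# to commit to the final two symbols).
second_last_a = {
    ("q0", "a"): {"q0", "q1"}, ("q0", "b"): {"q0"},
    ("q1", "a"): {"q2"},       ("q1", "b"): {"q2"},
}
print(run_nfa(second_last_a, "q0", {"q2"}, "bab"))  # True
```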

An example of such a language is the set of all strings consisting of the letters ‘a’ and ‘b’ which contain an equal number of the letter ‘a’ and ‘b’. To see why this language cannot be correctly recognized by a finite state machine, assume first that such a machine M exists. M must have some number of states n. Now consider the string x consisting of (n + 1) ‘a’s followed by (n + 1) ‘b’s.

As M reads in x, there must be some state in the machine that is repeated as it reads in the first series of ‘a’s, since there are (n + 1) ‘a’s and only n states by the pigeonhole principle. Call this state S, and further let d be the number of ‘a’s that our machine read in order to get from the first occurrence of S to some subsequent occurrence during the ‘a’ sequence. We know, then, that at that second occurrence of S, we can add in an additional d (where d > 0) ‘a’s and we will be again at state S. This means that a string of (n + d + 1) ‘a’s must end up in the same state as a string of (n + 1) ‘a’s. This implies that if our machine accepts x, it must also accept the string of (n + d + 1) ‘a’s followed by (n + 1) ‘b’s, which is not in the language of strings containing an equal number of ‘a’s and ‘b’s. In other words, M cannot correctly distinguish between a string with an equal number of ‘a’s and ‘b’s and a string with (n + d + 1) ‘a’s and (n + 1) ‘b’s.

We know, therefore, that this language cannot be accepted correctly by any finite state machine, and is thus not a regular language. A more general form of this result is called the Pumping lemma for regular languages, which can be used to show that broad classes of languages cannot be recognized by a finite state machine.
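The pigeonhole step of this argument can even be executed: given any candidate DFA, find two prefixes of ‘a’s that land in the same state, then append the same run of ‘b’s to both. A sketch (names and the demo DFA are illustrative; the demo machine counts ‘a’s modulo 3, a doomed attempt at counting with finitely many states):

```python
# Reading only 'a's, any DFA with n states must repeat a state within
# n+1 steps. Pumping that loop yields two strings the DFA cannot tell
# apart, though only one has equal numbers of 'a's and 'b's.

def find_indistinguishable_pair(transitions, start):
    n_states = len({s for s, _ in transitions})
    seen, state = {start: 0}, start
    for k in range(1, n_states + 2):
        state = transitions[(state, "a")]
        if state in seen:                    # pigeonhole: a repeated state
            i, j = seen[state], k
            return "a" * i + "b" * i, "a" * j + "b" * i
        seen[state] = k

# Demo DFA: three states tracking the count of 'a's modulo 3.
mod3 = {(s, c): ((s + 1) % 3 if c == "a" else s)
        for s in range(3) for c in "ab"}

in_lang, not_in_lang = find_indistinguishable_pair(mod3, 0)
print(repr(in_lang), repr(not_in_lang))  # '' and 'aaa'
```

The two returned strings reach the same DFA state, so the machine must give them the same verdict, yet only the first has equal numbers of ‘a’s and ‘b’s.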

Power of pushdown automata

Computer scientists define a language that can be accepted by a pushdown automaton as a Context-free language, which can be specified as a Context-free grammar. The language consisting of strings with equal numbers of ‘a’s and ‘b’s, which we showed was not a regular language, can be decided by a push-down automaton. Also, in general, a push-down automaton can behave just like a finite-state machine, so it can decide any language which is regular. This model of computation is thus strictly more powerful than finite state machines.
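A runnable sketch of the push-down idea, with a Python list standing in for the stack: the stack holds the current surplus of whichever letter is ahead, so the recognizer accepts exactly the strings with equal numbers of ‘a’s and ‘b’s (the function name is illustrative).

```python
# The stack as an unbounded counter: a pushdown recognizer for the
# language of strings with equally many 'a's and 'b's.

def equal_ab(stream):
    stack = []
    for ch in stream:
        if stack and stack[-1] != ch:
            stack.pop()        # one 'a' cancels one 'b'
        else:
            stack.append(ch)   # record the surplus symbol
    return not stack           # accept iff nothing is left over

print(equal_ab("aabbab"))  # True
print(equal_ab("aab"))     # False
```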

However, it turns out there are languages that cannot be decided by a push-down automaton either. The result is similar to that for regular languages, and won’t be detailed here; there is a corresponding Pumping lemma for context-free languages. An example of such a language is the set of prime numbers.

Power of Turing machines

Turing machines can decide any context-free language, in addition to languages not decidable by a push-down automaton, such as the language consisting of prime numbers. It is therefore a strictly more powerful model of computation.

Because Turing machines have the ability to “back up” in their input tape, it is possible for a Turing machine to run for a long time in a way that is not possible with the other computation models previously described. It is possible to construct a Turing machine that will never finish running (halt) on some inputs. We say that a Turing machine can decide a language if it eventually will halt on all inputs and give an answer. A language that can be so decided is called a recursive language. We can further describe Turing machines that will eventually halt and give an answer for any input in a language, but which may run forever for input strings which are not in the language. Such Turing machines could tell us that a given string is in the language, but we may never be sure based on its behavior that a given string is not in a language, since it may run forever in such a case. A language which is accepted by such a Turing machine is called a recursively enumerable language.

The Turing machine, it turns out, is an exceedingly powerful model of automata. Attempts to amend the definition of a Turing machine to produce a more powerful machine have surprisingly met with failure. For example, adding an extra tape to the Turing machine, or giving it a 2-dimensional (or 3- or any-dimensional) infinite surface to work with, can be simulated by a Turing machine with the basic 1-dimensional tape. These models are thus not more powerful. In fact, a consequence of the Church-Turing thesis is that there is no reasonable model of computation that can decide languages that cannot be decided by a Turing machine.

The question to ask then is: do there exist languages which are recursively enumerable, but not recursive? And, furthermore, are there languages which are not even recursively enumerable?

The halting problem

Main article: Halting problem

The halting problem is one of the most famous problems in computer science, because it has profound implications on the theory of computability and on how we use computers in everyday practice. The problem can be phrased:

Given a description of a Turing machine and its initial input, determine whether the program, when executed on this input, ever halts (completes). The alternative is that it runs forever without halting.

Here we are asking not a simple question about a prime number or a palindrome, but we are instead turning the tables and asking a Turing machine to answer a question about another Turing machine. It can be shown (See main article: Halting problem) that it is not possible to construct a Turing machine that can answer this question in all cases.

That is, the only general way to know for sure whether a given program will halt on a particular input is simply to run it and see if it halts. If it does halt, then you know it halts. If it doesn’t halt, however, you may never know whether it will eventually halt. The language consisting of all Turing machine descriptions, paired with all possible input streams on which those Turing machines eventually halt, is not recursive. The halting problem is therefore called non-computable or undecidable.
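The impossibility proof has a compact shape that can be sketched in code. Here `halts` is the assumed, impossible decider (a hypothetical name, not a real function); the construction builds a program that does the opposite of whatever `halts` predicts about it when run on itself.

```python
# A sketch of the diagonal argument against a halting decider.

def make_contrary(halts):
    def contrary(program):
        if halts(program, program):   # predicted to halt on itself...
            while True:               # ...then loop forever instead
                pass
        return "halted"               # predicted to loop: halt at once
    return contrary

# Whatever `halts` we plug in is wrong about `contrary` run on itself.
# With a decider that always predicts "loops forever":
contrary = make_contrary(lambda prog, inp: False)
print(contrary(contrary))  # "halted": the opposite of the prediction
```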

An extension of the halting problem is called Rice’s Theorem, which states that it is undecidable (in general) whether a given language possesses any specific nontrivial property.

Beyond recursive languages

The halting problem is easy to solve, however, if we allow that the Turing machine that decides it may run forever when given an input which is a representation of a Turing machine that does not itself halt. The halting language is therefore recursively enumerable. It is possible, however, to construct languages which are not even recursively enumerable.

A simple example of such a language is the complement of the halting language; that is, the language consisting of all Turing machines paired with input strings on which those Turing machines do not halt. To see that this language is not recursively enumerable, imagine that we construct a Turing machine M which is able to give a definite answer for all such pairs, but which may run forever on any Turing machine that does eventually halt on its input. We can then construct another Turing machine M′ that simulates the operation of M, along with directly simulating the execution of the machine given in the input, by interleaving the execution of the two. Since the direct simulation will eventually halt if the input machine halts, and since by assumption the simulation of M will eventually halt if the input machine would never halt, we know that M′ will eventually have one of its two parallel simulations halt. M′ is thus a decider for the halting problem. We have previously shown, however, that the halting problem is undecidable. We have a contradiction, and so our assumption that M exists must be incorrect. The complement of the halting language is therefore not recursively enumerable.
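The interleaving trick used in this argument (advancing two computations one step at a time, so that whichever halts is noticed even if the other runs forever) can be sketched with Python generators standing in for step-wise Turing-machine simulation. Names here are illustrative:

```python
# Interleaved (dovetailed) simulation: one step of each computation
# per round; return the name of the first to halt.

def interleave(computations):
    while True:
        for name, gen in computations.items():
            try:
                next(gen)             # one step of this computation
            except StopIteration:
                return name           # it halted first

def takes_steps(n):                   # toy computation halting after n steps
    for _ in range(n):
        yield

def runs_forever():                   # a computation that never halts
    while True:
        yield

winner = interleave({"direct simulation": takes_steps(3),
                     "simulation of M": runs_forever()})
print(winner)  # "direct simulation"
```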

Concurrency-based models

A number of computational models based on concurrency have been developed, including the Parallel Random Access Machine and the Petri net. These models of concurrent computation still do not implement any mathematical functions that cannot be implemented by Turing machines.

Unreasonable models of computation

The Church-Turing thesis conjectures that there is no reasonable model of computing that can compute more mathematical functions than a Turing machine. In this section we will explore some of the “unreasonable” ideas for computational models which violate this conjecture. Computer scientists have imagined many varieties of hypercomputers.

Infinite execution

Main article: Zeno machine

Imagine a machine where each step of the computation requires half the time of the previous step. If we normalize to 1 time unit the amount of time required for the first step, the execution would require

1 + 1/2 + 1/4 + ⋯

time to run. This infinite series converges to 2 time units, which means that this machine can complete an infinite execution in 2 time units. Such a machine is capable of deciding the halting problem by directly simulating the execution of the machine in question. By extension, any convergent series would work: assuming the series converges to a value t, the machine would complete an infinite execution in t time units.
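The partial sums can be checked numerically:

```python
# Partial sums of 1 + 1/2 + 1/4 + ... approach 2, the total time in
# which the sped-up machine completes infinitely many steps.

total = sum(1 / 2**k for k in range(50))
print(total)  # ≈ 2 (short of 2 by exactly 2**-49)
```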

Oracle machines

Main article: Oracle machine

So-called Oracle machines have access to various “oracles” which provide the solution to specific undecidable problems. For example, the Turing machine may have a “halting oracle” which answers immediately whether a given Turing machine will ever halt on a given input. These machines are a central topic of study in recursion theory.

Limits of hyper-computation

Even these machines, which seemingly represent the limit of automata that we could imagine, run into their own limitations. While each of them can solve the halting problem for a Turing machine, they cannot solve their own version of the halting problem. For example, an Oracle machine cannot answer the question of whether a given Oracle machine will ever halt.

History of computability theory

The lambda calculus, an important precursor to formal computability theory, was developed by Alonzo Church and Stephen Cole Kleene. Alan Turing is most often considered the father of modern computer science; he laid many of the important foundations of computability and complexity theory, including the first description of the Turing machine (in his 1936 paper “On Computable Numbers, with an Application to the Entscheidungsproblem”) as well as many of the important early results.





Computer science

From Wikipedia, the free encyclopedia


Computer science (or computing science) is the study of the theoretical foundations of information and computation, and of practical techniques for their implementation and application in computer systems.[1][2][3] It is frequently described as the systematic study of algorithmic processes that describe and transform information; the fundamental question underlying computer science is, “What can be (efficiently) automated?”[4] Computer science has many sub-fields; some, such as computer graphics, emphasize the computation of specific results, while others, such as computational complexity theory, study the properties of computational problems. Still others focus on the challenges in implementing computations. For example, programming language theory studies approaches to describing computations, while computer programming applies specific programming languages to solve specific computational problems, and human-computer interaction focuses on the challenges in making computers and computations useful, usable, and universally accessible to people.

The general public sometimes confuses computer science with vocational areas that deal with computers (such as information technology), or thinks that it relates to their own experience of computers, which typically involves activities such as gaming, web-browsing, and word-processing. However, the focus of computer science is more on understanding the properties of the programs used to implement software such as games and web-browsers, and using that understanding to create new programs or improve existing ones.[5]



History

The early foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity. Wilhelm Schickard built the first mechanical calculator in 1623.[6] Charles Babbage designed a difference engine in Victorian times[7] helped by Ada Lovelace.[8] Around 1900, punch-card machines[9] were introduced. However, all of these machines were constrained to perform a single task, or at best some subset of all possible tasks.

During the 1940s, as newer and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors. As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s, with the creation of the first computer science departments and degree programs.[4][10] Since practical computers became available, many applications of computing have become distinct areas of study in their own right.

Although many initially believed it impossible that computers themselves could actually be a scientific field of study, in the late fifties it gradually became accepted among the greater academic population.[11] The now well-known IBM brand formed part of the computer science revolution during this time. IBM (short for International Business Machines) released the IBM 704 and later the IBM 709 computers, which were widely used during the exploration period of such devices. “Still, working with the IBM [computer] was frustrating…if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again”.[11] During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace.

Time has seen significant improvements in the usability and effectiveness of computer science technology. Modern society has seen a significant shift from computers being used solely by experts or professionals to a more widespread user base.

Major achievements


The German military used the Enigma machine during World War II for communication they thought to be secret. The large-scale decryption of Enigma traffic at Bletchley Park was an important factor that contributed to Allied victory in WWII.[12]

Despite its relatively short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society.

Fields of computer science

As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software.[17][18] The Computer Sciences Accreditation Board (CSAB) – which is made up of representatives of the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers Computer Society, and the Association for Information Systems – identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, computer-human interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science.[17]

Theory of computation

The study of the theory of computation is focused on answering fundamental questions about what can be computed, and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a computational problem.

The famous “P=NP?” problem, one of the Millennium Prize Problems,[19] is an open problem in the theory of computation.

Computability theory · Computational complexity theory

Theoretical computer science

The broader field of theoretical computer science encompasses both the classical theory of computation and a wide range of other topics that focus on the more abstract, logical, and mathematical aspects of computing.

Mathematical logic · Automata theory · Number theory · Graph theory · Type theory · Category theory · Computational geometry · Quantum computing theory

Algorithms and data structures

Analysis of algorithms · Algorithms · Data structures

Programming methodology and languages

Compilers · Programming languages

Computer elements and architecture

Digital logic · Microarchitecture · Multiprocessing

Numerical and symbolic computation

Bioinformatics · Cognitive science · Computational chemistry · Computational neuroscience · Computational physics · Numerical algorithms · Symbolic mathematics

Applications

The following disciplines are often studied from a more theoretical, computer science viewpoint, as well as from a more practical, engineering perspective.

Operating systems · Computer networks · Computer graphics · Computer vision · Databases · Computer security · Artificial intelligence · Robotics · Human-computer interaction · Ubiquitous computing

Relationship with other fields

Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. Also, in the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM: turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist.[20] Three months later in the same journal, comptologist was suggested, followed next year by hypologist.[21] The term computics has also been suggested.[22] The term Informatik is used with more frequency in Europe.

The renowned computer scientist Edsger Dijkstra stated, “Computer science is no more about computers than astronomy is about telescopes.” The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been much cross-fertilization of ideas between the various computer-related disciplines. Computer science research has also often crossed into other disciplines, such as cognitive science, economics, mathematics, physics (see quantum computing), and linguistics.

Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science.[4] Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel and Alan Turing, and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.

The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term “software engineering” means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.[23]

The academic, political, and funding aspects of computer science tend to depend on whether a department formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research.

Computer science education

Some universities teach computer science as a theoretical study of computation and algorithmic reasoning. These programs often feature the theory of computation, analysis of algorithms, formal methods, concurrency theory, databases, computer graphics and systems analysis, among others. They typically also teach computer programming, but treat it as a vessel for the support of other fields of computer science rather than a central focus of high-level study.

Other colleges and universities, as well as secondary schools and vocational programs that teach computer science, emphasize the practice of advanced programming rather than the theory of algorithms and computation in their computer science curricula. Such curricula tend to focus on those skills that are important to workers entering the software industry. The practical aspects of computer programming are often referred to as software engineering. However, there is a lot of disagreement over the meaning of the term, and whether or not it is the same thing as programming.


References

  1. ^ “Computer science is the study of information”. New Jersey Institute of Technology, Gutenberg Information Technologies.
  2. ^ “Computer science is the study of computation.” Computer Science Department, College of Saint Benedict, Saint John’s University.
  3. ^ “Computer Science is the study of all aspects of computer systems, from the theoretical foundations to the very practical aspects of managing large software projects.” Massey University.
  4. ^ a b c Denning, P.J. (2000). “Computer Science: The Discipline” (PDF). Encyclopedia of Computer Science. http://web.archive.org/web/20060525195404/http://www.idi.ntnu.no/emner/dif8916/denning.pdf. 
  5. ^ “Common myths and preconceptions about Cambridge Computer Science”. Computer Science Department, University of Cambridge.
  6. ^ Nigel Tout (2006). “Calculator Timeline”. Vintage Calculator Web Museum. http://www.vintagecalculators.com/html/calculator_time-line.html. 
  7. ^ “Science Museum – Introduction to Babbage”. http://www.sciencemuseum.org.uk/on-line/babbage/index.asp. Retrieved on 2006-09-24. 
  8. ^ “A Selection and Adaptation From Ada’s Notes found in “Ada, The Enchantress of Numbers,” by Betty Alexandra Toole Ed.D. Strawberry Press, Mill Valley, CA”. http://www.scottlan.edu/Lriddle/women/ada-love.htm. Retrieved on 2006-05-04. 
  9. ^ “IBM Punch Cards in the U.S. Army”. http://www.pattonhq.com/ibm.html. Retrieved on 2006-09-24. 
  10. ^ http://www.cl.cam.ac.uk/conference/EDSAC99/statistics.html
  11. ^ a b Levy, Steven (1984). Hackers: Heroes of the Computer Revolution. Doubleday. ISBN 0-385-19195-2. 
  12. ^ a b David Kahn, The Codebreakers, 1967, ISBN 0-684-83130-9.
  13. ^ a b http://www.cis.cornell.edu/Dean/Presentations/Slides/bgu.pdf
  14. ^ Constable, R.L. (March 2000) (PDF). Computer Science: Achievements and Challenges circa 2000. http://www.cs.cornell.edu/cis-dean/bgu.pdf. 
  15. ^ Abelson, H.; G.J. Sussman with J.Sussman (1996). Structure and Interpretation of Computer Programs (2nd ed.). MIT Press. ISBN 0-262-01153-0. “The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology — the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects.” 
  16. ^ Black box traders are on the march The Telegraph, August 26, 2006
  17. ^ a b Computing Sciences Accreditation Board (28 May 1997). “Computer Science as a Profession”. http://www.csab.org/comp_sci_profession.html. Retrieved on 2008-09-01. 
  18. ^ Committee on the Fundamentals of Computer Science: Challenges and Opportunities, National Research Council (2004). Computer Science: Reflections on the Field, Reflections from the Field. National Academies Press. ISBN 978-0-309-09301-9. http://www.nap.edu/catalog.php?record_id=11106#toc. 
  19. ^ Clay Mathematics Institute P=NP
  20. ^ Communications of the ACM 1(4):p.6
  21. ^ Communications of the ACM 2(1):p.4
  22. ^ IEEE Computer 28(12):p.136
  23. ^ Parnas, David L. (1998). “Software Engineering Programmes are not Computer Science Programmes“. Annals of Software Engineering 6: 19–37. doi:10.1023/A:1018949113292. , p. 19: “Rather than treat software engineering as a subfield of computer science, I treat it as an element of the set, Civil Engineering, Mechanical Engineering, Chemical Engineering, Electrical Engineering, ..”

Read Full Post »

Computational complexity theory

From Wikipedia, the free encyclopedia


Computational complexity theory is a branch of the theory of computation in computer science that studies the resources required to run algorithms, and the inherent difficulty of providing efficient algorithms, both for general and for specific computational problems.



Definitions

An engineer might ask, “As the size of the input to an algorithm increases, by what factor do the running time and memory requirements increase, and what are the implications of that?” In other words, complexity theory, among other things, investigates the scalability of computational problems and algorithms. In particular, it places practical limits on what computers can and cannot do.

A problem is a collection of related questions, where each question is a finite string. The questions are not enumerated; instead, the resources a problem requires are expressed in Big O notation, which records only orders of magnitude rather than the exact resources used by a particular machine.

Analysis

Big O notation

Main article: Big O notation

The time complexity of a problem is the number of steps that it takes to solve an instance of the problem as a function of the size of the input, using the most efficient known algorithm. If an instance has length n and can be solved in n² steps, we say the problem has a time complexity of n².

But of course, the exact amount of resources will depend on what machine or language is being used. To avoid that difficulty, Big O notation is generally used (the O stands for the “order” of the calculation). If a problem runs in O(n²) on one computer, then (barring perverse effects) it will also run in O(n²) on all others, even though it may run faster or slower depending on their resources. This notation allows us to generalize away from the details of a particular computer.

For example, searching an unsorted list of words for a particular word takes time proportional to the size of the list: starting from either end, one must on average inspect half the words before finding the target. If the word is not in the list at all, one must inspect the whole list to discover that fact, so the worst case is worse than the average, depending on how likely it is that the queried word is actually in the list.

A binary search algorithm, by contrast, is logarithmic in the size of the list (it is O(log n)). By analogy, humans roughly know where an entry will be in a phone book or dictionary and use strategies to reach their target quickly, such as headwords or a thumb index to get roughly to the right place, followed by a linear search once they are close.
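
The contrast can be sketched in Python by counting comparisons rather than wall-clock time (the word list and its size here are illustrative):

```python
# Counting comparisons instead of wall-clock time; the word list is
# illustrative. Linear search is O(n): inspect elements one by one.
def linear_search(words, target):
    steps = 0
    for word in words:
        steps += 1
        if word == target:
            break
    return steps

# Binary search is O(log n), but requires the list to be sorted.
def binary_search(words, target):
    steps, lo, hi = 0, 0, len(words) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if words[mid] == target:
            break
        if words[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

words = [f"word{i:05d}" for i in range(100000)]   # already sorted
print(linear_search(words, "word99999"))  # 100000 comparisons
print(binary_search(words, "word99999"))  # at most 17 comparisons
```

For 100,000 entries, doubling the list adds one comparison to the binary search but 100,000 to the linear one.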

Much of computational complexity theory is useful in other engineering fields, and it has made contributions to mathematics that would be of interest even if computers did not exist.

The master theorem can be useful to calculate the time complexity of a recursive algorithm.
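
As a standard illustration (not from the original text), merge sort obeys the recurrence T(n) = 2·T(n/2) + O(n), which the master theorem resolves to O(n log n):

```python
# Merge sort obeys the recurrence T(n) = 2*T(n/2) + O(n): two half-size
# subproblems plus a linear-time merge. By the master theorem (a = 2,
# b = 2, f(n) = n, and n^log_b(a) = n has the same order as f),
# T(n) = O(n log n).
def merge_sort(xs):
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # the O(n) merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```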

Worst-case analysis

A common approach to computational complexity is worst-case analysis: estimating the largest amount of computational resources (time or space) needed to run an algorithm. In many cases the worst-case estimates are too pessimistic to explain the good performance some algorithms achieve in real use on real inputs. To address this, other approaches have been developed, such as average-case analysis and smoothed analysis.

Axiomatic analysis

Main article: Blum axioms

An axiomatic approach to computational complexity was developed by Manuel Blum. He introduced measures of complexity such as the size of a machine and measures of the computational overhead of recursive functions. The complexity of particular algorithms is a special case of the axiomatic complexity measure. The Blum axioms help in the study of computational complexity in the most general cases.

An important aspect of the theory is to categorize computational problems and algorithms into complexity classes. The most important open question of complexity theory is whether the complexity class P is the same as the complexity class NP, or whether it is a strict subset as is generally believed.

Shortly after the question was first posed, it was realised that many important industry problems in the field of operations research are of an NP subclass called NP-complete. NP-complete problems have solutions that are quick to check, yet current methods to find those solutions are not “efficiently scalable” (as described more formally below). If the NP class is larger than P, then NP-complete problems admit no polynomial-time algorithm and any all-purpose solution to the problem is not efficiently scalable.

The fact that the P vs. NP problem has not been solved prompts and justifies various research areas in the theory, such as:

  • spotting and solving special cases of common computational problems
  • the study of the computational complexity itself
  • finding heuristic solutions
  • researching into the hierarchies of complexity classes
  • exploring multivariate algorithmic approaches such as parameterized complexity.

History

The first publication on computational complexity appeared in 1956.[1] But this publication, written in Russian, was for a long time unknown in the Western world.

The beginning of studies in computational complexity can be attributed not only to Trahtenbrot,[2] but also to Rabin.[3][4][5][6][7]

Hartmanis and Stearns’ paper[7] became the most popular in this area. As a result, some researchers attribute the beginning of computational complexity theory only to Hartmanis, Lewis and Stearns.[8]

Andrey Kolmogorov‘s research in the theory of algorithms influenced the development of computational complexity theory. A notable early discovery was the Karatsuba algorithm in 1960, for the multiplication of two numbers. This algorithm disproved Kolmogorov’s 1956 conjecture that the fastest multiplication algorithm must require on the order of n² operations, and thus helped launch the study of algorithms in earnest. In 1967, Manuel Blum developed an axiomatic complexity theory based on his axioms and proved an important result, the so-called speed-up theorem.

The field subsequently expanded with contributions from many researchers.

On August 6, 2002, the AKS primality test was published in the paper “PRIMES is in P”[9] by three Indian computer scientists, who presented a deterministic primality-proving algorithm. The algorithm determines whether a number is prime or composite in polynomial time, and was soon improved by others. The key significance of AKS is that it was the first published primality-proving algorithm to be simultaneously general, polynomial, deterministic, and unconditional. The authors received the 2006 Gödel Prize and the 2006 Fulkerson Prize for their work.

Computational complexity theory topics

Time and space complexity

Complexity theory attempts to describe how difficult it is for an algorithm to find a solution to a problem. This differs from computability theory, which describes whether a problem can be solved at all: a branch of science probably made most famous by Alan Turing‘s essay On Computable Numbers, with an Application to the Entscheidungsproblem.

Decision problems

Much of complexity theory deals with decision problems. A decision problem is one where the answer is always “yes” or “no”. Some problems are undecidable, or at least seem so, and complexity theory distinguishes the cases in which a procedure can be relied upon to give a correct “yes” or a correct “no” (not necessarily both). The problem obtained by swapping the “yes” and “no” answers is called the complement of that problem.

For example, the problem IS-PRIME says “yes” if its input is a prime and “no” otherwise, while the problem IS-COMPOSITE says “yes” exactly when the number is not prime. A procedure for either problem may be unreliable when the correct result would be “no”: it may incorrectly say “yes”, or it may simply never finish, even if it would eventually have produced the right answer.
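
A minimal sketch of such a complementary pair in Python (trial division is used purely for illustration; unlike the hypothetical procedures above, these always halt and always answer correctly):

```python
# IS-PRIME and its complement IS-COMPOSITE as total decision procedures.
# Trial division is illustrative only, not an efficient algorithm.
def is_prime(n):
    return n >= 2 and all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def is_composite(n):
    return not is_prime(n)   # the complement: every answer is reversed

print([n for n in range(2, 20) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```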

Decision problems are often interesting to researchers because a problem can always be reduced to a decision problem. For instance, a problem HAS-FACTOR may be:

Given integers n and k, determine whether n has any prime factor less than k.

If we can solve HAS-FACTOR with known maximum resources, we can use that solution to solve the further problem FACTORIZE without many more: binary-search on k to find the smallest prime factor of n, divide out that factor, and repeat until all the factors are found (i.e. until n is reduced to 1).
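
The reduction can be sketched in Python; `has_factor` stands in for the hypothetical HAS-FACTOR solver (simulated here by trial division, but treated as a black box by `factorize`):

```python
# Sketch of the reduction from FACTORIZE to the decision problem
# HAS-FACTOR ("does n have a prime factor less than k?").
def has_factor(n, k):
    return any(n % d == 0 for d in range(2, min(k, n)))

def factorize(n):
    factors = []
    while n > 1:
        if not has_factor(n, n):      # no proper divisor: n itself is prime
            factors.append(n)
            break
        lo, hi = 2, n                  # binary search for the smallest k
        while lo < hi:                 # with has_factor(n, k) == True;
            mid = (lo + hi) // 2       # the smallest prime factor is k - 1
            if has_factor(n, mid):
                hi = mid
            else:
                lo = mid + 1
        p = lo - 1
        factors.append(p)
        n //= p                        # divide out the factor and repeat
    return factors

print(factorize(360))  # [2, 2, 2, 3, 3, 5]
```

Each round of the loop makes only O(log n) oracle queries, so solving FACTORIZE costs little more than solving HAS-FACTOR.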

Resources

Complexity theory analyzes the difficulty of computational problems in terms of many different computational resources. A problem can be described in terms of the many requirements it makes on resources: time, space, randomness, alternation, and other less-intuitive measures. A complexity class is the class of all problems which can be solved using a certain amount of a certain computational resource.

There are other measures of computational complexity. For instance, communication complexity is a measure of complexity for distributed computations.

A different measure of problem complexity, which is useful in some cases, is circuit complexity. This is a measure of the size of a boolean circuit needed to compute the answer to a problem, in terms of the number of logic gates required to build the circuit. Such a measure is useful, for example, when designing hardware microchips to compute the function instead of software.

Perhaps the most-studied computational resources are deterministic time and space: the amount of time or space needed by a deterministic computer. These resources are of great practical interest and are well studied.

Some computational problems are easier to analyze in terms of more unusual resources. For example, a nondeterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The nondeterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so that nondeterministic time is a very important resource in analyzing computational problems.

Many more unusual computational resources have been used in complexity theory. Technically, any complexity measure can be viewed as a computational resource, and complexity measures are very broadly defined by the Blum complexity axioms.

Complexity classes

A complexity class is the class of all of the computational problems which can be solved using a certain amount of a certain computational resource.

The complexity class P is the class of decision problems that can be solved by a deterministic machine in polynomial time. This class corresponds to an intuitive idea of the problems which can be effectively solved in the worst cases.[10]

The complexity class NP is the set of decision problems that can be solved by a non-deterministic Turing machine in polynomial time. This class contains many problems that people would like to be able to solve effectively, including the Boolean satisfiability problem, the Hamiltonian path problem and the vertex cover problem. All the problems in this class have the property that their solutions can be checked efficiently.[10]
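
The “solutions can be checked efficiently” property can be illustrated with the vertex cover problem mentioned above (the graph and candidate covers below are illustrative):

```python
# Verifying a proposed vertex cover takes time linear in the number of
# edges, even though finding a minimum cover is not known to be possible
# in polynomial time. This checkability is what puts the problem in NP.
def is_vertex_cover(edges, cover):
    cover = set(cover)
    return all(u in cover or v in cover for u, v in edges)

edges = [(0, 1), (1, 2), (2, 3)]       # a path on four vertices
print(is_vertex_cover(edges, {1, 2}))  # True: every edge touches 1 or 2
print(is_vertex_cover(edges, {0, 3}))  # False: edge (1, 2) is uncovered
```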

Many complexity classes can be characterized in terms of the mathematical logic needed to express them – this field is called descriptive complexity.

NP completeness and other open questions

P = NP

Main article: Complexity classes P and NP
See also: Oracle machine

The question of whether P = NP (can problems that can be solved in non-deterministic polynomial time also always be solved in deterministic polynomial time?) is one of the most important open questions in theoretical computer science because of the wide implications of a solution.[10] If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types of integer programming in operations research, many problems in logistics, protein structure prediction in biology, and the ability to find formal proofs of pure mathematics theorems.[11][12]

The P = NP question is one of the Millennium Prize Problems proposed by the Clay Mathematics Institute; there is a US$1,000,000 prize for the first person to provide a solution.[13]

Questions like this motivate the concepts of hard and complete problems.

Hard

A class of problems X is hard for a class of problems Y if every problem in Y can be transformed “easily” (that is, it may take a lot of effort, but the person solving the problem knows a solution exists) into some problem in X that produces the same solution. The definition of “hard” (or rather “easy”) differs by context. For P = NP, “hard” means NP-hard: a class of problems that are not necessarily in NP themselves, but to which any NP problem can be reduced in polynomial time.

Complete

The class X is complete for Y if it is hard for Y and is also a subset of it.

Thus, the class of NP-complete problems contains the most “difficult” problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP remains unsolved, being able to reduce a known NP-complete problem, Π2, to another problem, Π1, would indicate that there is no known polynomial-time solution for Π1: a polynomial-time solution to Π1 would yield a polynomial-time solution to Π2. Similarly, because all NP problems can be reduced to any NP-complete problem, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP.[10]

Incomplete

Diagram of complexity classes provided that P ≠ NP. The existence of problems in NP outside both P and NP-complete in this case was established by Ladner.[14]

Incomplete problems are those in NP that are neither NP-complete nor in P. In other words, incomplete problems can neither be solved in polynomial time nor are they hard enough to be NP-complete.

It has been shown that if P ≠ NP then there exist NP-incomplete problems.[14][15]

Graph isomorphism

An important unsolved problem in this context is whether the graph isomorphism problem, which asks whether two given graphs are isomorphic, is in P, NP-complete, or NP-incomplete. The answer is not known, but there are strong hints that the problem is at least not NP-complete.[16]

The NP = co-NP problem

Co-NP is the class containing the complement problems (i.e. problems with the yes/no answers reversed) of NP problems. It is believed that the two classes are not equal, though this has not yet been proven. It has been shown[15] that if the two classes are not equal, then no NP-complete problem can be in co-NP and no co-NP-complete problem can be in NP.

Every problem in P is also in both NP and co-NP. Integer factorization is an example of a problem in both NP and co-NP that is not known to be in P.

Intractability

Problems that can be solved but not fast enough for the solution to be useful are called intractable.[17] Naive complexity theory assumes that problems lacking polynomial-time solutions are intractable for all but the smallest inputs. Problems known to be intractable in this sense include those that are EXPTIME-complete. If NP is not the same as P, then the NP-complete problems are also intractable in this sense. To see why exponential-time algorithms might be unusable in practice, consider a program that makes 2ⁿ operations before halting. For even a small n, say 100, and assuming for the sake of example that the computer does 10¹² operations each second, the program would run for about 4 × 10¹⁰ years, which is roughly the age of the universe. Even with a much faster computer, the program would only be useful for very small instances, and in that sense the intractability of a problem is somewhat independent of technological progress. Nevertheless, a polynomial-time algorithm is not always practical: if its running time is, say, n¹⁵, it is unreasonable to consider it efficient, and it is still useless except on small instances.
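
The arithmetic in that example can be checked directly:

```python
# The estimate from the paragraph: 2**100 operations at 10**12
# operations per second, converted to years.
ops = 2 ** 100
seconds = ops / 10 ** 12
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1e} years")  # about 4.0e+10 years
```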

The graph shows time (average of 100 instances in msec using a 933 MHz Pentium III) vs. problem size for knapsack problems, for a state-of-the-art specialized algorithm. A quadratic fit suggests that the empirical algorithmic complexity for instances with 50–10,000 variables is O((log n)²). The data come from [18].

Theory vs. practice

What intractability means in practice is open to debate. Saying that a problem is not in P does not imply that all large cases of the problem are hard, or even that most of them are. For example, the decision problem in Presburger arithmetic has been shown not to be in P, yet algorithms have been written that solve the problem in reasonable time in most cases. Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time (see graph), and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem.

References

  1. ^ Trahtenbrot (1956). 
  2. ^ Trahtenbrot (1967). 
  3. ^ Rabin, M. O. (1959). “Speed of Computation of Functions and Classification of Recursive Sets”.: 1–2, Proc. 3rd Conference of Scientific Societies. 
  4. ^ Rabin, M. O. (1960), Tech. Report 2, Degree of Difficulty of Computing a Function and a Partial Ordering of Recursive Sets, Jerusalem: Hebrew University, Jerusalem 
  5. ^ Rabin, M. O. (1963). Real-time computation. Israel Journal of Math. pp. 203–211. 
  6. ^ Hartmanis and Stearns (1964) 
  7. ^ a b Hartmanis and Stearns (1965) 
  8. ^ Aho, Hopcroft and Ullman (1976) 
  9. ^ Agrawal, Manindra; Kayal, Neeraj; Saxena, Nitin, “PRIMES is in P”, Annals of Mathematics, 160 (2004), 781–793
  10. ^ a b c d Sipser, Michael (2006). “Time Complexity”. Introduction to the Theory of Computation (2nd ed.). USA: Thomson Course Technology. ISBN 0534950973. 
  11. ^ Berger, Bonnie A.; Leighton, Terrance (1998). “Protein folding in the hydrophobic-hydrophilic (HP) model is NP-complete”. Journal of Computational Biology 5 (1): p27–40. doi:10.1089/cmb.1998.5.27. PMID 9541869. 
  12. ^ Cook, Stephen (April 2000). The P versus NP Problem. Clay Mathematics Institute. http://www.claymath.org/millennium/P_vs_NP/Official_Problem_Description.pdf. Retrieved on 2006-10-18. 
  13. ^ Jaffe, Arthur M. (2006). “The Millennium Grand Challenge in Mathematics“. Notices of the AMS 53 (6). http://www.ams.org/notices/200606/fea-jaffe.pdf. Retrieved on 2006-10-18. 
  14. ^ a b Ladner, Richard E. (1975). “On the structure of polynomial time reducibility” (PDF). Journal of the ACM (JACM) 22 (1): 151–171. doi:10.1145/321864.321877. http://delivery.acm.org/10.1145/330000/321877/p155-ladner.pdf?key1=321877&key2=7146531911&coll=&dl=ACM&CFID=15151515&CFTOKEN=6184618. 
  15. ^ a b Du, Ding-Zhu; Ko, Ker-I (2000). Theory of Computational Complexity. John Wiley & Sons. ISBN 978-0-471-34506-0. 
  16. ^ Arvind, Vikraman; Kurur, Piyush P. (2006), “Graph isomorphism is in SPP”, Information and Computation 204 (5): 835–852, doi:10.1016/j.ic.2006.02.002 
  17. ^ Hopcroft, et al. (2007), p. 368 
  18. ^ Pisinger, D. 2003. “Where are the hard knapsack problems?” Technical Report 2003/08, Department of Computer Science, University of Copenhagen, Copenhagen, Denmark

Further reading

  • Blum, Manuel (1967) On the Size of Machines, Information and Control, v. 11, pp. 257–265
  • Blum M. (1967) A Machine-independent Theory of Complexity of Recursive Functions, Journal of the ACM, v. 14, No.2, pp. 322–336
  • Downey R. and M. Fellows (1999) Parameterized Complexity, Springer-Verlag.
  • Hartmanis, J., Lewis, P.M., and Stearns, R.E. (1965) Classification of computations by time and memory requirements, Proc. IfiP Congress 65, pp. 31–35
  • Hartmanis, J., and Stearns, R.E. (1964) Computational Complexity of Recursive Sequences, IEEE Proc. 5th Annual Symp. on Switching Circuit Theory and Logical Design, pp. 82–90
  • Hartmanis, J., and Stearns, R.E. (1965) On the Computational Complexity of Algorithms, Transactions American Mathematical Society, No. 5, pp. 285–306
  • Hopcroft, J.E., Motwani, R. and Ullman, J.D. (2007) Introduction to Automata Theory, Languages, and Computation, Addison Wesley, Boston/San Francisco/New York
  • van Leeuwen, Jan, (ed.) Handbook of Theoretical Computer Science, Volume A: Algorithms and Complexity, The MIT Press/Elsevier, 1990. ISBN 978-0-444-88071-0 (Volume A). QA 76.H279 1990. Huge compendium of information, 1000s of references in the various articles.
  • Trahtenbrot, B.A. (1956) Signalizing Functions and Table Operators, Research Notices of the Pensa Pedagogical Institute, v. 4, pp. 75–87 (in Russian)
  • Trahtenbrot, B.A. Complexity of Algorithms and Computations, NGU, Novosibirsk, 1967 (in Russian)


Read Full Post »

semantic technology | semantic Web | semantic technologies | Web technology | semantic search engine | search engine | semantic web technology | Web technologies | semantic web technologies | semantic search | Web Consortium | Web Services | search engines | query technology | natural language processing | semantic search technology | Web service | database technology | semantic technology products | Web standards | semantic software | semantic applications | semantic web applications | web applications | e – commerce | search technology | semantic analysis technology | semantic solutions | social networking | social networks | software | application infrastructure software | information technology | search results | semantic systems | social networking sites | social software | technology | technology fields | Web component enabling people | Web search services | clinical trials management software system | corporate semantic technology crown jewels | data mining techniques | healthcare | internet evolution | learning tool | semantic technology applications | semantic technology kicks | software giant | technology development | technology fits | text mining | venture capital | Web development | Web extension | web search | data mining | biotechnology | automated semantic technology-based search engine | distributed Internet applications | educational technology field | eService software | food | integrated solutions | Internet Applications | knowledge discovery software | knowledge management solutions | media technologies | media technology | real-world applications | search | search engine technology | search platform | search queries | search query | search software | semantic advertising platform | semantic search engines | social media | software agents | software technology | start-up | web application | Web Conference | Web content | Web data | web developers | Web research | application data management software solutions | ad network | automotive 
applications | bank | communications industry | computational linguistic technology | content filtering software | conventional keyword technology | computing | cool technology | data aware& applications


Google | Microsoft | IBM | Yahoo! | Oracle | Reuters | Semantic Universe | Sprylogics International Corp. | Cognition Technologies | Gilbane Group | Industrial Software Technology S.A. | Metatomix Inc. | AOL | Cerebra | Progress Software Corporation | Velocity Interactive Group | Vulcan Capital | Cisco | Franz Inc. | nexTier Networks | PercipEnz Technologies Inc. | Semantic Web Technologies | Telstra | Thetus Corporation | TopQuadrant Inc | W3C | Xerox | YouTube | Elsevier | eBay | Dow Jones | Dow Jones Enterprise Media Group | Collexis Holdings Inc. | ClearForest | Business Information Systems | BBC | Adobe | Gridstone Research | Hurwitz & Associates | IBM Corp. | Joost | Linguamatics Ltd | Morgan Stanley | Powerset | Radar Networks | SAP AG | Semantic Social Networks | Semantic Technology Solutions | Semantic Web Company | Sonic | Sun Microsystems Inc. | TextWise | the New York Times | Transversal | Vantage Learning | Wipro Limited | Wipro Technologies | Aberdeen Group | Ars Technica | AskMeNow Inc. | Atmel Corporation | Bayer | Battery Ventures | Bizo Inc. | Capgemini | Canada NewsWire | AVR32 AP7 Series Atmel Corporation | Charles River Ventures | Citigroup | Cerebra Inc. | dbMotion | Devesys Technologies Inc. | Devestil | Dream Ventures | Comcast | Editions Profil | Emerging Technologies | Evri | eWeek Chief Technology | Fast Search & Transfer | Gamma Enterprise Technologies | Health Language Inc. | Helion Venture Partners | Hewlett Packard Research Labs | HP Labs | Interactive Supercomputing Inc. | IONA Technologies | Jarg | Kosmix | Life Sciences (HCLS) Interest Group | Matrixware Information Services | Maverick Capital | Medical Knowledge Group | Merrill Lynch | Microsoft Corp. | MITRE Corporation | nexTier Networks Inc. | Pakistan Telecom | PowerOne Capital Markets Limited | Project10X


Tim Berners-Lee | John Goodwin | Dave McComb | Paul Miller | Amit Sheth | Nova Spivack | Christine Connors | Alex Iskold | Janie Davies | Jeremy Carroll | Jim Rapoza | Paul Allen | Richard MacManus | Steven Pinker | Ben Adida | Andy Plesser | Bill Gates | Chris Morrison | Chris Augustin | Dan Brickley | David Crystal | David Barnes | Gabor Karsai | Holger Knublauch(TopQuadrant) | David Michael | Devin Wenig | Jason Gaedtke | Jim Hendler | Johann Johannson | John Turato | Josh Dilworth | Julia Kent | Karel Teige | Laura Bennett | Lee Feigenbaum | Marc Horowitz | Mark Greaves | Matthew West | Michael Singer | Neil Roseman | Nikita Ogievetsky | Paolo Dellapiana | Phil Muncaster | Philippe Le Hegaret | Roberto Maria Clemente | Stefan Decker | Stephen Conroy | Stephen Solocof | Steve Ballmer | Steve Ganz | Taylor Cowan | Thomas Tague | Tim O | Tom Morris | Venky Harinarayan | Walter Perry | Yossi Vardi | Dean Whitney | David Provost | David J. Neff | Hiroshi Yasukawa | Hideki Isozaki | Henry Blodget | Harry Chen | Harry Barfoot | Hanson Dodge | Hamish Cunningham | Greg Farr | Gordon M | Gord Hotchkiss | Golda Velez | Gilberto Gil | Geraldine Peters | George Lucas | George Box | Geoff Brown | G. Daniel Martich | Frank Gilbane | Fay Foley-McDunnough | Evans Hadoop | Eva Oliveira | Erika Preuss | Erik Collier | Erick Schonfeld | Eric Hoffer | Eran Shir | Emma Tonkin | Eddy Choi | Draper Fisher Jurvetson | Douglas Turnbull | Douglas Reeves | Dominic White | Dirk-Willem van Gulik | Diane Mueller | Dewey Decimal | Dave Roberts | Dave Raggett | Clay Shirky | Christopher Bancroft | Christian Dupuis


MIT | European Union | City University London | Securities and Exchange Commission | SEMANTIC TECHNOLOGY INSTITUTE | National Council for the Blind | Ireland Centre For Inclusive Technology | Digital Enterprise Research Institute | Carnegie Mellon University | Center for Semantic Excellence | Collective Intelligence | European Commission | German Federal Institute for Risk Assessment | Information Integration Intelligence | Institute of Electrical and Electronics Engineers | Missouri Council for History Education | Mozilla Foundation | Northeastern University | ABAP | Palm Springs Unified School District | Pittsburgh Medical Center | SAP Integration and Certification Center | Silicon Valley | sixth Arab Computer Society | Stanford | University of Pittsburgh | UN’s Food and Agricultural Organisation | U.S. Government | VocaLink’s Euro Payment Service | NASD | New York University’s Stern School of Business | New School for Design | National University of Ireland | National Institute of Health | Moscow Institute of Physics and Technology | Mohawk College | Mohawk Applied Research Centre for Health Informatics | MIT Media Lab | Massachusetts Institute of Technology | ManTech MBI’s government | LSDIS Lab . 
| Lake Washington School District | LA school district | Knowledge Society | Knowledge Organization | Knowledge Media Institute | Knewco’s mission | Ken North Computing LLC Panel | Keio University in Japan | Kansas State University | Joint Information Systems Committee | Jeff Gray Institute | Japan Branch Office | International WIC Institute | International Telecommunication Union | International Software & Productivity Engineering Institute | International Society | international child protection and development charity | Institute of Traffic Safety | Institute of Technology | Institute for Traffic Safety and Automation Engineering | Institute for Traffic Safety | Institute for Technology Assessment | Information Systems Department | Information Society | Imperial College | IEEE Industry Standards and Technology Organization | Highland Consumer Fund | Global | Gene B. Glick Junior Achievement Education Center | Founders Fund | Federal Systems Division | federal government’s sports administration | Federal Department of Innovation, Industry, Science and Research | Euro Payment Service | Environmental Systems Research Institute | Division of Research & Education Systems | Cloud Computing Interoperability Forum | Center for Coordination Science | Churchill Club | Cerebra University | Center Party | Department of Information Technology | Department of Defense | Department of Control and Transport Automation | Department of Control | Department of Computing | Department of Computer Science and Engineering | Defense Logistics Information Research Program’s Technical Solutions Council | Defense Advanced Research Projects Agency | Dalian University of Technology | Cyc Foundation | Cortex Intelligence | Control and Transport Automation Budapest University of Technology and Economics | Consortium | Computer Writing and Research Lab | Columbia University in New York | Columbia University | California Institute of Technology | Business Intelligence for HR




Information

From Wikipedia, the free encyclopedia




The ASCII codes for the word “Wikipedia” represented in binary, the numeral system most commonly used for encoding computer information
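The encoding described in the caption is easy to reproduce. A small illustrative Python sketch (not part of the original article):

```python
# Encode each character of "Wikipedia" as its 8-bit ASCII code,
# the binary representation mentioned in the caption.
word = "Wikipedia"
bits = [format(ord(ch), "08b") for ch in word]
print(" ".join(bits))  # first byte: 01010111, the code for 'W' (87)
```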

Information as a concept has a diversity of meanings, from everyday usage to technical settings. Generally speaking, the concept of information is closely related to notions of constraint, communication, control, data, form, instruction, knowledge, meaning, mental stimulus, pattern, perception, and representation.

Many people speak of the Information Age as the advent of the Knowledge Age or knowledge society, the information society, the information revolution, and information technologies; and even though informatics, information science and computer science are often in the spotlight, the word “information” is often used without careful consideration of the various meanings it has acquired.

Etymology

According to the Oxford English Dictionary, the first known historical meaning of the word information in English was the act of informing, or giving form or shape to the mind, as in education, instruction, or training. A quotation from a 1387 English text reads: “Five books come down from heaven for information of mankind.” The word was also used for an item of training, e.g. a particular instruction: “Melibee had heard the great skills and reasons of Dame Prudence, and her wise information and techniques.” (1386)

The English word was apparently derived from the Latin accusative form (informationem) of the nominative (informatio): this noun is in turn derived from the verb “informare” (to inform) in the sense of “to give form to the mind”, “to discipline”, “instruct”, “teach”: “Men so wise should go and inform their kings.” (1330) Inform itself comes (via French) from the Latin verb informare, to give form to, to form an idea of. Furthermore, Latin itself already contained the word informatio meaning concept or idea, but the extent to which this may have influenced the development of the word information in English is unclear.

As a final note, the ancient Greek word for form was “μορφή” (morphē) and also “είδος” (eidos: kind, idea, shape, set); the latter word was famously used in a technical philosophical sense by Plato (and later Aristotle) to denote the ideal identity or essence of something (see Theory of forms). “Eidos” can also be associated with thought, proposition or even concept.

As a message

Information is a term with many meanings depending on context, but it is as a rule closely related to such concepts as meaning, knowledge, instruction, communication, representation, and mental stimulus. Simply stated, information is a message received and understood. In terms of data, it can be defined as a collection of facts from which conclusions may be drawn. Information has many other aspects as well, since it is also the knowledge acquired through study, experience, or instruction. Overall, however, information is the result of processing, manipulating and organizing data in a way that adds to the knowledge of the person receiving it.

Information is the state of a system of interest; a message is that information materialized.

Information is a quality of a message from a sender to one or more receivers. Information is always about something (the size of a parameter, the occurrence of an event, a value, ethics, etc.). Viewed in this manner, information does not have to be accurate; it may be a truth or a lie, or just the sound of a falling tree. Even a disruptive noise used to inhibit the flow of communication and create misunderstanding would in this view be a form of information. However, generally speaking, the more information the received message contains, the more accurate the message is.

This model assumes there is a definite sender and at least one receiver. Many refinements of the model assume the existence of a common language understood by the sender and at least one of the receivers. An important variation identifies information as that which would be communicated by a message if it were sent from a sender to a receiver capable of understanding the message. In another variation, it is not required that the sender be capable of understanding the message, or even cognizant that there is a message, making information something that can be extracted from an environment, e.g., through observation, reading or measurement.

Communication theory provides a numerical measure of the uncertainty of an outcome. For example, we can say that “the signal contained thousands of bits of information”. Communication theory tends to use the concept of information entropy, generally attributed to Claude Shannon, see below.

Another form of information is Fisher information, a concept of R.A. Fisher. This is used in application of statistics to estimation theory and to science in general. Fisher information is thought of as the amount of information that a message carries about an unobservable parameter. It can be computed from knowledge of the likelihood function defining the system. For example, with a normal likelihood function, the Fisher information is the reciprocal of the variance of the law. In the absence of knowledge of the likelihood law, the Fisher information may be computed from normally distributed score data as the reciprocal of their second moment.
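The normal-likelihood case described above can be checked numerically: the second moment of the score should recover the reciprocal of the variance. A rough Python sketch (the sample size and seed are arbitrary choices for illustration):

```python
import random
import statistics

# Fisher information of the mean of a normal law equals 1 / variance.
# Estimate it empirically as the second moment of the score (x - mu) / sigma^2.
random.seed(0)
mu, sigma = 0.0, 2.0
samples = [random.gauss(mu, sigma) for _ in range(200_000)]
scores = [(x - mu) / sigma**2 for x in samples]
empirical_info = statistics.fmean(s * s for s in scores)
print(round(empirical_info, 2))  # close to 1 / sigma**2 == 0.25
```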

Even though information and data are often used interchangeably, they are actually very different. Data is a set of unrelated facts, and as such is of no use until it is properly evaluated. Upon evaluation, once there is some significant relation between data, and they show some relevance, they are converted into information. The same data can then be used for different purposes. Thus, until data convey some information, they are not useful and therefore not information.

Measuring information entropy

The view of information as a message came into prominence with the publication in 1948 of an influential paper by Claude Shannon, “A Mathematical Theory of Communication.” This paper provides the foundations of information theory and endows the word information not only with a technical meaning but also a measure. If the sending device is equally likely to send any one of a set of N messages, then the preferred measure of “the information produced when one message is chosen from the set” is the base-two logarithm of N (this measure is called self-information). In this paper, Shannon continues:

The choice of a logarithmic base corresponds to the choice of a unit for measuring information. If the base 2 is used the resulting units may be called binary digits, or more briefly bits, a word suggested by J. W. Tukey. A device with two stable positions, such as a relay or a flip-flop circuit, can store one bit of information. N such devices can store N bits…[1]
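The measure quoted above is simply log₂ N. A direct transcription in Python:

```python
import math

# Self-information of choosing one message from N equally likely messages,
# measured in bits (base-2 logarithm, per Shannon 1948).
def self_information_bits(n_messages: int) -> float:
    return math.log2(n_messages)

print(self_information_bits(2))    # one binary choice -> 1.0 bit
print(self_information_bits(256))  # one of 256 messages -> 8.0 bits (one byte)
```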

A complementary way of measuring information is provided by algorithmic information theory. In brief, this measures the information content of a list of symbols based on how predictable they are, or more specifically how easy it is to compute the list through a program: the information content of a sequence is the number of bits of the shortest program that computes it. A sequence such as 0101010101…, for example, would have a very low algorithmic information measurement, since it is a very predictable pattern, and as the pattern continues the measurement would not change. Shannon information would give the same information measurement for each symbol, since they are statistically random, and each new symbol would increase the measurement.
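The contrast can be made concrete with a general-purpose compressor, which gives a crude upper bound on algorithmic information content (the byte counts depend on the zlib implementation and are only indicative):

```python
import random
import zlib

# A predictable pattern compresses to almost nothing; random bytes barely compress.
predictable = b"01" * 5_000                          # 10,000 bytes of a repeating pattern
random.seed(42)
noise = bytes(random.randrange(256) for _ in range(10_000))

print(len(zlib.compress(predictable)))  # small: the pattern is described by a short program
print(len(zlib.compress(noise)))        # close to 10,000: no shorter description exists
```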


It is important to recognize the limitations of traditional information theory and algorithmic information theory from the perspective of human meaning. For example, when referring to the meaning content of a message Shannon noted “Frequently the messages have meaning… these semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages” (emphasis in original).

In information theory signals are part of a process, not a substance; they do something, they do not contain any specific meaning. Combining algorithmic information theory and information theory we can conclude that the most random signal contains the most information, as it can be interpreted in any way and cannot be compressed.

Michael Reddy noted that “‘signals’ of the mathematical theory are ‘patterns that can be exchanged’. There is no message contained in the signal, the signals convey the ability to select from a set of possible messages.” In information theory “the system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design”.

As a pattern

Information is any represented pattern. This view assumes neither accuracy nor directly communicating parties, but instead assumes a separation between an object and its representation. Consider the following example: economic statistics represent an economy, however inaccurately. What are commonly referred to as data in computing, statistics, and other fields, are forms of information in this sense. The electro-magnetic patterns in a computer network and connected devices are related to something other than the pattern itself, such as text characters to be displayed and keyboard input. Signals, signs, and symbols are also in this category. On the other hand, according to semiotics, data is symbols with certain syntax and information is data with a certain semantic. Painting and drawing contain information to the extent that they represent something such as an assortment of objects on a table, a profile, or a landscape. In other words, when a pattern of something is transposed to a pattern of something else, the latter is information. This would be the case whether or not there was anyone to perceive it.

But if information can be defined merely as a pattern, does that mean that neither utility nor meaning are necessary components of information? Arguably a distinction must be made between raw unprocessed data and information which possesses utility, value or some quantum of meaning. On this view, information may indeed be characterized as a pattern; but this is a necessary condition, not a sufficient one.

An individual entry in a telephone book, which follows a specific pattern formed by name, address and telephone number, does not become “informative” in some sense unless and until it possesses some degree of utility, value or meaning: someone might look up a girlfriend’s number or order a takeaway, for example. The vast majority of numbers will never be construed as “information” in any meaningful sense. The gap between data and information is only closed by a behavioral bridge whereby some value, utility or meaning is added to transform mere data or pattern into information.

When one constructs a representation of an object, one can selectively extract from the object (sampling) or use a system of signs to replace (encoding), or both. The sampling and encoding result in representation. An example of the former is a “sample” of a product; an example of the latter is a “verbal description” of a product. Both contain information about the product, however inaccurate. When one interprets a representation, one can predict a broader pattern from a limited number of observations (inference) or understand the relation between patterns of two different things (decoding). An example of the former is sipping a soup to know whether it is spoiled; an example of the latter is examining footprints to determine the animal and its condition. In both cases, information sources are not constructed or presented by some “sender” of information. Regardless, information is dependent upon, but usually unrelated to and separate from, the medium or media used to express it. In other words, the position of a theoretical series of bits, or even the output once interpreted by a computer or similar device, is unimportant, except when someone or something is present to interpret the information. Therefore, a quantity of information is totally distinct from its medium.

As sensory input

Often information is viewed as a type of input to an organism or designed device. Inputs are of two kinds. Some inputs are important to the function of the organism (for example, food) or device (energy) by themselves. In his book Sensory Ecology, Dusenbery called these causal inputs. Other inputs (information) are important only because they are associated with causal inputs and can be used to predict the occurrence of a causal input at a later time (and perhaps another place). Some information is important because of association with other information but eventually there must be a connection to a causal input. In practice, information is usually carried by weak stimuli that must be detected by specialized sensory systems and amplified by energy inputs before they can be functional to the organism or device. For example, light is often a causal input to plants but provides information to animals. The colored light reflected from a flower is too weak to do much photosynthetic work but the visual system of the bee detects it and the bee’s nervous system uses the information to guide the bee to the flower, where the bee often finds nectar or pollen, which are causal inputs, serving a nutritional function.

Information is any type of sensory input. When an organism with a nervous system receives an input, it transforms the input into an electrical signal. This is regarded as information by some. The idea of representation is still relevant, but in a slightly different manner. That is, while abstract painting does not represent anything concretely, when the viewer sees the painting, it is nevertheless transformed into electrical signals that create a representation of the painting. Defined this way, information does not have to be related to truth, communication, or representation of an object. Entertainment in general is not intended to be informative. Music, the performing arts, amusement parks, works of fiction and so on are thus forms of information in this sense, but they are not necessarily forms of information according to some definitions given above. Consider another example: food supplies both nutrition and taste for those who eat it. If information is equated to sensory input, then nutrition is not information but taste is.

As an influence which leads to a transformation

Information is any type of pattern that influences the formation or transformation of other patterns. In this sense, there is no need for a conscious mind to perceive, much less appreciate, the pattern. Consider, for example, DNA. The sequence of nucleotides is a pattern that influences the formation and development of an organism without any need for a conscious mind. Systems theory at times seems to refer to information in this sense, assuming information does not necessarily involve any conscious mind, and patterns circulating (due to feedback) in the system can be called information. In other words, it can be said that information in this sense is something potentially perceived as representation, though not created or presented for that purpose.

If, however, the premise of “influence” implies that information has been perceived by a conscious mind and also interpreted by it, the specific context associated with this interpretation may cause the transformation of the information into knowledge. Complex definitions of both “information” and “knowledge” make such semantic and logical analysis difficult, but the condition of “transformation” is an important point in the study of information as it relates to knowledge, especially in the business discipline of knowledge management. In this practice, tools and processes are used to assist a knowledge worker in performing research and making decisions, including steps such as:

  • reviewing information in order to effectively derive value and meaning
  • referencing metadata if any is available
  • establishing a relevant context, often selecting from many possible contexts
  • deriving new knowledge from the information
  • making decisions or recommendations from the resulting knowledge

Stewart (2001) argues that the transformation of information into knowledge is a critical one, lying at the core of value creation and competitive advantage for the modern enterprise.

When Marshall McLuhan speaks of media and their effects on human cultures, he refers to the structure of artifacts that in turn shape our behaviors and mindsets. Also, pheromones are often said to be “information” in this sense.

As a property in physics

Main article: Physical information

In 2003, J. D. Bekenstein claimed there is a growing trend in physics to define the physical world as being made of information itself (see Digital physics). Information has a well-defined meaning in physics. Examples include the phenomenon of quantum entanglement, where particles can interact without reference to their separation or to the speed of light. Information itself cannot travel faster than light, even if it is transmitted indirectly. This means that all attempts to physically observe a particle with an “entangled” relationship to another are slowed down, even though the particles are not connected in any way other than by the information they carry.

Another link is demonstrated by the Maxwell’s demon thought experiment. In this experiment, a direct relationship between information and another physical property, entropy, is demonstrated. A consequence is that it is impossible to destroy information without increasing the entropy of a system; in practical terms this often means generating heat. Another, more philosophical, outcome is that information could be thought of as interchangeable with energy. Thus, in the study of logic gates, the theoretical lower bound of thermal energy released by an AND gate is higher than for the NOT gate (because information is destroyed in an AND gate and simply converted in a NOT gate). Physical information is of particular importance in the theory of quantum computers.

As records

Records are a specialized form of information. Essentially, records are information produced consciously or as by-products of business activities or transactions and retained because of their value. Primarily their value is as evidence of the activities of the organization but they may also be retained for their informational value. Sound records management ensures that the integrity of records is preserved for as long as they are required.

The international standard on records management, ISO 15489, defines records as “information created, received, and maintained as evidence and information by an organization or person, in pursuance of legal obligations or in the transaction of business”. The International Council on Archives (ICA) Committee on Electronic Records defined a record as “a specific piece of recorded information generated, collected or received in the initiation, conduct or completion of an activity and that comprises sufficient content, context and structure to provide proof or evidence of that activity”.

Records may be retained because of their business value, as part of the corporate memory of the organization or to meet legal, fiscal or accountability requirements imposed on the organization. Willis (2005) expressed the view that sound management of business records and information delivered “…six key requirements for good corporate governance…transparency; accountability; due process; compliance; meeting statutory and common law requirements; and security of personal and corporate information.”

Information and semiotics

Beynon-Davies [2] explains the multi-faceted concept of information in terms of that of signs and sign-systems. Signs themselves can be considered in terms of four inter-dependent levels, layers or branches of semiotics: pragmatics, semantics, syntax, and empirics. These four layers serve to connect the social world on the one hand with the physical or technical world on the other.

Pragmatics is concerned with the purpose of communication. Pragmatics links the issue of signs with that of intention. The focus of pragmatics is on the intentions of human agents underlying communicative behaviour. In other words, intentions link language to action.

Semantics is concerned with the meaning of a message conveyed in a communicative act. Semantics considers the content of communication. Semantics is the study of the meaning of signs – the association between signs and behaviour. Semantics can be considered as the study of the link between symbols and their referents or concepts; particularly the way in which signs relate to human behaviour.

Syntax is concerned with the formalism used to represent a message. Syntax as an area studies the form of communication in terms of the logic and grammar of sign systems. Syntax is devoted to the study of the form rather than the content of signs and sign-systems.

Empirics is the study of the signals used to carry a message; the physical characteristics of the medium of communication. Empirics is devoted to the study of communication channels and their characteristics, e.g., sound, light, electronic transmission etc.

Communication normally exists within the context of some social situation. The social situation sets the context for the intentions conveyed (pragmatics) and the form in which communication takes place. In a communicative situation intentions are expressed through messages which comprise collections of inter-related signs taken from a language which is mutually understood by the agents involved in the communication. Mutual understanding implies that agents involved understand the chosen language in terms of its agreed syntax (syntactics) and semantics. The sender codes the message in the language and sends the message as signals along some communication channel (empirics). The chosen communication channel will have inherent properties which determine outcomes such as the speed with which communication can take place and over what distance.


References

  1. Shannon, C. E. (1948). “A Mathematical Theory of Communication”. The Bell System Technical Journal, Vol. 27, p. 379 (July 1948).
  2. Beynon-Davies, P. (2002). Information Systems: an Introduction to Informatics in Organisations. Palgrave, Basingstoke, UK. ISBN 0-333-96390-3.

Further reading

  • Liu, Alan (2004). The Laws of Cool: Knowledge Work and the Culture of Information. University of Chicago Press.
  • Bekenstein, Jacob D. (August 2003). “Information in the holographic universe”. Scientific American.
  • Floridi, Luciano (2005). “Is Information Meaningful Data?”. Philosophy and Phenomenological Research, 70 (2), pp. 351–370.
  • Floridi, Luciano (2005). “Semantic Conceptions of Information”. The Stanford Encyclopedia of Philosophy (Winter 2005 Edition), Edward N. Zalta (ed.).
  • Stewart, Thomas (2001). Wealth of Knowledge. Doubleday, New York, NY.
  • Young, Paul (1987). The Nature of Information. Greenwood Publishing Group, Westport, CT.



From Logic to Ontology: The limit of “The Semantic Web”
Francisco Antonio Cerón García
Spanish Royal Society of Physics

The limits of the Semantic Web (http://en.wikipedia.org/wiki/Semantic_Web) are not set by the machines themselves; biological systems could even be used to reach the goal. They are set by the logic (http://en.wikipedia.org/wiki/Logic) used to construct it, which does not contemplate the concept of time: it is purely formal, metonymic logic that lacks metaphor. That is what Gödel’s theorems (http://en.wikipedia.org/wiki/Gödel’s_incompleteness_theorems) point out: the final tautology of each metonymic construction in mathematical language (http://en.wikipedia.org/wiki/Mathematical_logic) leads to inconsistencies. The construction of the Semantic Web is an undecidable problem (http://en.wikipedia.org/wiki/Undecidable_problem).
This consistent logic is the complete opposite of the logic that makes inconsistent use of time (http://en.wikipedia.org/wiki/Jacques_Lacan), which is inherent to the human unconscious. The use of time is built on lack rather than on positive things; it is based on denials and absences, and that is impossible to reflect in a machine, because the required self-awareness is acquired through absence.
The problem is that we are trying to build an intelligent system to replace our way of thinking, at least in information search; but what is special about the human mind is the use of time, which lets human beings reach a conclusion. The halting problem (http://en.wikipedia.org/wiki/Halting_problem), or stopping of calculation, therefore does not exist in the human mind.
So all efforts toward the Semantic Web are doomed to failure a priori if the aim is to extend our human way of thinking into machines: machines lack metaphorical speech, because they are only a mathematical construction, which will always be tautological and metonymic, and they lack the use of time, which is what leads to a conclusion or “stop”.
As a demonstration: if we suppose it is possible to construct the Semantic Web as a language with capabilities similar to human language, which has the use of time, we should treat the claim as a theorem, and we can prove it false with a counterexample, given in the particular case of the Turing machine (http://en.wikipedia.org/wiki/Turing_machine) and “the halting problem”.
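A Turing machine itself is straightforward to simulate; what the halting problem rules out is a general procedure for predicting whether any given simulation ever stops. A minimal illustrative sketch in Python (the sample machine and its rule table are invented for this example):

```python
# Minimal Turing machine simulator: rules map (state, symbol) to
# (symbol to write, head move L/R, next state); "_" is the blank symbol.
def run(tape, rules, state="start", max_steps=1000):
    tape = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):           # step bound: halting is undecidable in general
        symbol = tape.get(pos, "_")
        if (state, symbol) not in rules:
            break                        # no applicable rule: the machine halts
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Sample machine: flip every bit, moving right, and halt at the first blank.
flip = {("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start")}
print(run("0110", flip))  # prints 1001
```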
One solution to the problem of building the Semantic Web could be the use of non-formal or inconsistency logic (https://methainternet.wordpress.com/2008/01/27/non-formal-or-inconsistency-logic-lacans-logic-godels-incompleteness-theorems).

