
Archive for June, 2009

Dear All,

I am not spamming the Wikipedia pages, nor the Britannica ones.

I realize that all of you already know how to read the Wikipedia pages and, beyond that, all the scientific literature about the “semantic web”…

I am researching how to build the semantic web in a new and rigorous way, from foundations in linguistics, mathematics, logic, computing, philosophy, psychology, psychoanalysis, and many other areas of knowledge, and many people ask me:

“Where did you read this or that?” So it is useful for them, as feedback, that I post step by step all the research that I am doing!

Do you understand this new way of working and researching?

Please take a look at my theorem, The Limit of the “Semantic Web”, and you will see what I am saying…

If you take a look at the ninety-two posts below, then you will see the entire frame and background needed to build the semantic web…

My next step is to review the original paper “Minds, Machines and Gödel” (“Minds, Machines and Gödel: A Retrospect”), and to introduce the concept of “Time” into logic…

All of the semantic web research on the Internet is founded on “Description logic”, which is a family of “knowledge representation” languages; but, as my theorem has established, this is the wrong way!

Then “the right way” rests on foundations in linguistics, mathematics, logic, computing, philosophy, psychology, psychoanalysis, and many other areas of knowledge, where I am researching hard…

 

Best Regards,

Francisco Antonio Ceron Garcia

fcerong @ gmail.com


Computability theory (computer science)

From Wikipedia, the free encyclopedia


For the branch of mathematical logic called computability theory, see Recursion theory.

In computer science, computability theory is the branch of the theory of computation that studies which problems are computationally solvable using different models of computation.

Computability theory differs from the related discipline of computational complexity theory, which deals with the question of how efficiently a problem can be solved, rather than whether it is solvable at all.


Introduction

A central question of computer science is to address the limits of computing devices. One approach to addressing this question is understanding the problems we can use computers to solve. Modern computing devices often seem to possess infinite capacity for calculation, and it’s easy to imagine that, given enough time, we might use computers to solve any problem. However, it is possible to show clear limits to the ability of computers, even given arbitrarily vast computational resources, to solve even seemingly simple problems. A problem is formally expressed as a decision problem: construct a mathematical function that, for each input, returns either 0 or 1. If the value of the function on the input is 0 then the answer is “no”; otherwise the answer is “yes”.

To explore this area, computer scientists invented automata theory which addresses problems such as the following: Given a formal language, and a string, is the string a member of that language? This is a somewhat esoteric way of asking this question, so an example is illuminating. We might define our language as the set of all strings of digits which represent a prime number. To ask whether an input string is a member of this language is equivalent to asking whether the number represented by that input string is prime. Similarly, we define a language as the set of all palindromes, or the set of all strings consisting only of the letter ‘a’. In these examples, it is easy to see that constructing a computer to solve one problem is easier in some cases than in others.
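
Concretely, “deciding membership in a language” is just a yes/no test on strings. The following sketch is illustrative only and not part of the original article: ordinary Python functions standing in for membership tests for the three example languages above; the function names are our own.

```python
# Illustrative only: membership tests for the example languages in the text.
# A "language" here is just a set of strings; a membership test answers yes/no.

def is_prime_string(s: str) -> bool:
    """Is s a string of digits representing a prime number?"""
    if not s.isdigit():
        return False
    n = int(s)
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_palindrome(s: str) -> bool:
    """Is s equal to its own reversal?"""
    return s == s[::-1]

def is_all_a(s: str) -> bool:
    """Does s consist only of the letter 'a'?"""
    return len(s) > 0 and set(s) == {"a"}

if __name__ == "__main__":
    print(is_prime_string("7919"))   # True
    print(is_palindrome("racecar"))  # True
    print(is_all_a("aaab"))          # False
```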

But in what real sense is this observation true? Can we define a formal sense in which we can understand how hard a particular problem is to solve on a computer? Answering just this question is the goal of the computability theory of automata.

Formal models of computation

In order to begin to answer the central question of automata theory, it is necessary to define in a formal way what an automaton is. There are a number of useful models of automata. Some widely known models are:

Deterministic finite state machine
Also called a deterministic finite automaton (DFA), or simply a finite state machine. All real computing devices in existence today can be modeled as a finite state machine, as all real computers operate on finite resources. Such a machine has a set of states and a set of state transitions driven by the input stream. Certain states are designated as accepting states. The input stream is fed into the machine one character at a time; the transitions available from the current state are compared against the next input character, and if there is a matching transition the machine enters a new state. If, at the end of the input stream, the machine is in an accepting state, then the whole input stream is accepted. (A minimal simulation sketch appears after this list.)
Nondeterministic finite state machine
Also called a nondeterministic finite automaton (NFA), this is another simple model of computation, although its processing sequence is not uniquely determined. It can be interpreted as taking multiple paths of computation simultaneously through a finite number of states. However, it is possible to prove that any NFA is reducible to an equivalent DFA.
Pushdown automaton
Similar to the finite state machine, except that it has available an execution stack, which is allowed to grow to arbitrary size. The state transitions additionally specify whether to add a symbol to the stack, or to remove a symbol from the stack. It is more powerful than a DFA due to its infinite-memory stack, although only some information in the stack is ever freely accessible.
Turing machine
Also similar to the finite state machine, except that the input is provided on an execution “tape”, which the Turing machine can read from, write to, or move back and forth past its read/write “head”. The tape is allowed to grow to arbitrary size. The Turing machine is capable of performing complex calculations which can have arbitrary duration. This model is perhaps the most important model of computation in computer science, as it simulates computation in the absence of predefined resource limits.
Multi-tape Turing machine
Here, there may be more than one tape; moreover there may be multiple heads per tape. Surprisingly, any computation that can be performed by this sort of machine can also be performed by an ordinary Turing machine, although the latter may be slower or require a larger total region of its tape.
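
To make the first model above concrete, here is a minimal, illustrative sketch of a deterministic finite automaton simulator. The representation (a dictionary of transitions), the function name dfa_accepts, and the even-number-of-ones example are our own choices, not taken from the article.

```python
# A minimal sketch (not from the article) of the deterministic finite automaton
# described above: states, input-driven transitions, and accepting states.

def dfa_accepts(transitions, start, accepting, string):
    """Run a DFA given as {(state, symbol): next_state} over the string."""
    state = start
    for symbol in string:
        key = (state, symbol)
        if key not in transitions:      # no matching transition: reject
            return False
        state = transitions[key]
    return state in accepting

# Example DFA: accepts binary strings containing an even number of '1's.
even_ones = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

if __name__ == "__main__":
    print(dfa_accepts(even_ones, "even", {"even"}, "1011"))  # False (three '1's)
    print(dfa_accepts(even_ones, "even", {"even"}, "1001"))  # True  (two '1's)
```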

Power of automata

With these computational models in hand, we can determine what their limits are. That is, what classes of languages can they accept?

Power of finite state machines


Computer scientists call any language that can be accepted by a finite state machine a regular language. Because of the restriction that the number of possible states in a finite state machine is finite, we can see that to find a language that is not regular, we must construct a language that would require an infinite number of states.

An example of such a language is the set of all strings consisting of the letters ‘a’ and ‘b’ which contain an equal number of the letter ‘a’ and ‘b’. To see why this language cannot be correctly recognized by a finite state machine, assume first that such a machine M exists. M must have some number of states n. Now consider the string x consisting of (n + 1) ‘a’s followed by (n + 1) ‘b’s.

As M reads in x, there must be some state in the machine that is repeated as it reads in the first series of ‘a’s, since there are (n + 1) ‘a’s and only n states by the pigeonhole principle. Call this state S, and further let d be the number of ‘a’s that our machine read in order to get from the first occurrence of S to some subsequent occurrence during the ‘a’ sequence. We know, then, that at that second occurrence of S, we can add in an additional d (where d > 0) ‘a’s and we will be again at state S. This means that we know that a string of (n + d + 1) ‘a’s must end up in the same state as the string of (n + 1) ‘a’s. This implies that if our machine accepts x, it must also accept the string of (n + d + 1) ‘a’s followed by (n + 1) ‘b’s, which is not in the language of strings containing an equal number of ‘a’s and ‘b’s. In other words, M cannot correctly distinguish between a string with equal numbers of ‘a’s and ‘b’s and a string with (n + d + 1) ‘a’s and (n + 1) ‘b’s.

We know, therefore, that this language cannot be accepted correctly by any finite state machine, and is thus not a regular language. A more general form of this result is called the Pumping lemma for regular languages, which can be used to show that broad classes of languages cannot be recognized by a finite state machine.
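
The pumping argument above can be made mechanical. The sketch below is illustrative only: given any complete DFA over {a, b} in the same dictionary form as the earlier sketch, it finds a repeated state while reading the leading ‘a’s and constructs a second, “pumped” string that the DFA necessarily treats the same way as the first.

```python
# Illustrative only: the pumping argument, made mechanical.

def run(transitions, start, string):
    """Return the list of states visited; assumes a complete transition table."""
    states = [start]
    for symbol in string:
        states.append(transitions[(states[-1], symbol)])
    return states

def pump(transitions, start, n):
    """Build a^(n+1) b^(n+1) and a pumped variant the DFA cannot tell apart."""
    s = "a" * (n + 1) + "b" * (n + 1)
    prefix_states = run(transitions, start, "a" * (n + 1))
    seen = {}
    for i, st in enumerate(prefix_states):
        if st in seen:                      # pigeonhole: some state repeats
            d = i - seen[st]
            pumped = "a" * (n + 1 + d) + "b" * (n + 1)
            return s, pumped
        seen[st] = i
    raise AssertionError("an n-state DFA must repeat a state within n+1 steps")

# Demo with a 2-state DFA (n = 2) that accepts strings of even length:
toggle = {("e", "a"): "o", ("e", "b"): "o", ("o", "a"): "e", ("o", "b"): "e"}
print(pump(toggle, "e", 2))   # ('aaabbb', 'aaaaabbb'): both end in state 'e'
```

Both returned strings drive the DFA to the same final state, so it accepts both or rejects both, yet only the first has equal numbers of ‘a’s and ‘b’s.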

Power of pushdown automata

Computer scientists define a language that can be accepted by a pushdown automaton as a Context-free language, which can be specified as a Context-free grammar. The language consisting of strings with equal numbers of ‘a’s and ‘b’s, which we showed was not a regular language, can be decided by a push-down automaton. Also, in general, a push-down automaton can behave just like a finite-state machine, so it can decide any language which is regular. This model of computation is thus strictly more powerful than finite state machines.
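
As a hedged illustration of the stack idea (not taken from the article): the unbounded stack is exactly the memory a finite automaton lacks, and a single stack suffices to decide the equal-numbers-of-‘a’s-and-‘b’s language. The function name and representation below are our own.

```python
# Illustrative only: the stack idea behind a pushdown automaton that decides
# the language of strings with equally many 'a's and 'b's.

def equal_as_and_bs(string: str) -> bool:
    stack = []
    for ch in string:
        if ch not in "ab":
            return False
        if stack and stack[-1] != ch:
            stack.pop()          # an 'a' cancels a pending 'b' and vice versa
        else:
            stack.append(ch)     # remember the unmatched symbol
    return not stack             # accept iff nothing is left unmatched

print(equal_as_and_bs("aabbba"))  # True: three of each letter
print(equal_as_and_bs("aab"))     # False
```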

However, it turns out there are languages that cannot be decided by a push-down automaton either. The result is similar to that for regular languages and won’t be detailed here; there exists a Pumping lemma for context-free languages. An example of such a language is the set of prime numbers.

Power of Turing machines

Turing machines can decide any context-free language, in addition to languages not decidable by a push-down automaton, such as the language consisting of prime numbers. It is therefore a strictly more powerful model of computation.

Because Turing machines have the ability to “back up” in their input tape, it is possible for a Turing machine to run for a long time in a way that is not possible with the other computation models previously described. It is possible to construct a Turing machine that will never finish running (halt) on some inputs. We say that a Turing machine can decide a language if it eventually will halt on all inputs and give an answer. A language that can be so decided is called a recursive language. We can further describe Turing machines that will eventually halt and give an answer for any input in a language, but which may run forever for input strings which are not in the language. Such Turing machines could tell us that a given string is in the language, but we may never be sure based on its behavior that a given string is not in a language, since it may run forever in such a case. A language which is accepted by such a Turing machine is called a recursively enumerable language.

The Turing machine, it turns out, is an exceedingly powerful model of automata. Attempts to amend the definition of a Turing machine to produce a more powerful machine have surprisingly met with failure. For example, adding an extra tape to the Turing machine, or giving it a 2-dimensional (or 3- or any-dimensional) infinite surface to work with, can be simulated by a Turing machine with the basic 1-dimensional tape. These models are thus not more powerful. In fact, a consequence of the Church-Turing thesis is that there is no reasonable model of computation which can decide languages that cannot be decided by a Turing machine.

The question to ask then is: do there exist languages which are recursively enumerable, but not recursive? And, furthermore, are there languages which are not even recursively enumerable?

The halting problem

Main article: Halting problem

The halting problem is one of the most famous problems in computer science, because it has profound implications on the theory of computability and on how we use computers in everyday practice. The problem can be phrased:

Given a description of a Turing machine and its initial input, determine whether the program, when executed on this input, ever halts (completes). The alternative is that it runs forever without halting.

Here we are asking not a simple question about a prime number or a palindrome, but we are instead turning the tables and asking a Turing machine to answer a question about another Turing machine. It can be shown (See main article: Halting problem) that it is not possible to construct a Turing machine that can answer this question in all cases.

That is, in general the only way to know for sure whether a given program will halt on a particular input is simply to run it and see if it halts. If it does halt, then you know it halts. If it doesn’t halt, however, you may never know whether it will eventually halt. The language consisting of all Turing machine descriptions, paired with all possible input streams on which those Turing machines eventually halt, is not recursive. The halting problem is therefore called non-computable or undecidable.
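
The standard self-reference argument behind this result can be sketched in program form. The following is a hypothetical sketch, not a runnable decider: halts() is assumed, for contradiction, to exist, and paradox() derives the contradiction; both names are our own.

```python
# Hypothetical sketch (not runnable as a real decider): the standard
# self-reference argument for why no general halts() can exist.

def halts(program_source: str, input_data: str) -> bool:
    """Assumed, for contradiction, to return True iff the given program
    halts on the given input. No such function can actually exist."""
    raise NotImplementedError

def paradox(program_source: str) -> None:
    # Ask the claimed decider about a program run on its own source.
    if halts(program_source, program_source):
        while True:      # if it would halt, loop forever instead
            pass
    # otherwise, halt immediately

# Feeding paradox its own source is contradictory either way: if it halts,
# halts() said so, so paradox loops; if it loops, halts() said it does not
# halt, so paradox returns. Hence halts() cannot exist.
```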

An extension of the halting problem is called Rice’s Theorem, which states that it is undecidable (in general) whether a given language possesses any specific nontrivial property.

Beyond recursive languages

The halting problem is easy to solve, however, if we allow that the Turing machine that answers it may run forever when given input which is a representation of a Turing machine that does not itself halt. The halting language is therefore recursively enumerable. It is possible to construct languages which are not even recursively enumerable, however.

A simple example of such a language is the complement of the halting language; that is, the language consisting of all Turing machines paired with input strings on which those Turing machines do not halt. To see that this language is not recursively enumerable, imagine that we construct a Turing machine M which is able to give a definite answer for all such Turing machines, but which may run forever on any Turing machine that does eventually halt on its input. We can then construct another Turing machine M′ that simulates the operation of M, while also directly simulating the execution of the machine given in the input, by interleaving the execution of the two. Since the direct simulation will eventually halt if the input machine halts, and since by assumption the simulation by M will eventually halt if the input machine would never halt, we know that M′ will eventually have one of its two parallel branches halt. M′ is thus a decider for the halting problem. We have previously shown, however, that the halting problem is undecidable. We have a contradiction, and so our assumption that M exists must be incorrect. The complement of the halting language is therefore not recursively enumerable.

Concurrency-based models

A number of computational models based on concurrency have been developed, including the Parallel Random Access Machine and the Petri net. These models of concurrent computation still do not implement any mathematical functions that cannot be implemented by Turing machines.

Unreasonable models of computation

The Church-Turing thesis conjectures that there is no reasonable model of computing that can compute more mathematical functions than a Turing machine. In this section we will explore some of the “unreasonable” ideas for computational models which violate this conjecture. Computer scientists have imagined many varieties of hypercomputers.

Infinite execution

Main article: Zeno machine

Imagine a machine where each step of the computation requires half the time of the previous step. If we normalize to 1 time unit the amount of time required for the first step, the execution would require

1 + 1/2 + 1/4 + ⋯

time to run. This infinite series converges to 2 time units, which means that this Turing machine can run an infinite execution in 2 time units. This machine is capable of deciding the halting problem by directly simulating the execution of the machine in question. By extension, any convergent series would work. Assuming that the series converges to a value n, the Turing machine would complete an infinite execution in n time units.

Oracle machines

Main article: Oracle machine

So-called Oracle machines have access to various “oracles” which provide the solution to specific undecidable problems. For example, the Turing machine may have a “halting oracle” which answers immediately whether a given Turing machine will ever halt on a given input. These machines are a central topic of study in recursion theory.

Limits of hyper-computation

Even these machines, which seemingly represent the limit of automata that we could imagine, run into their own limitations. While each of them can solve the halting problem for a Turing machine, they cannot solve their own version of the halting problem. For example, an Oracle machine cannot answer the question of whether a given Oracle machine will ever halt.

History of computability theory

The lambda calculus, an important precursor to formal computability theory, was developed by Alonzo Church and Stephen Cole Kleene. Alan Turing is most often considered the father of modern computer science, and laid many of the important foundations of computability and complexity theory, including the first description of the Turing machine (in [1], 1936) as well as many of the important early results.


Computer science

From Wikipedia, the free encyclopedia


Computer science (or computing science) is the study of the theoretical foundations of information and computation, and of practical techniques for their implementation and application in computer systems.[1][2][3] It is frequently described as the systematic study of algorithmic processes that describe and transform information; the fundamental question underlying computer science is, “What can be (efficiently) automated?”[4] Computer science has many sub-fields; some, such as computer graphics, emphasize the computation of specific results, while others, such as computational complexity theory, study the properties of computational problems. Still others focus on the challenges in implementing computations. For example, programming language theory studies approaches to describing computations, while computer programming applies specific programming languages to solve specific computational problems, and human-computer interaction focuses on the challenges in making computers and computations useful, usable, and universally accessible to people.

The general public sometimes confuses computer science with vocational areas that deal with computers (such as information technology), or thinks that it relates to their own experience of computers, which typically involves activities such as gaming, web-browsing, and word-processing. However, the focus of computer science is more on understanding the properties of the programs used to implement software such as games and web-browsers, and using that understanding to create new programs or improve existing ones.[5]


History

The early foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity. Wilhelm Schickard built the first mechanical calculator in 1623.[6] Charles Babbage designed a difference engine in Victorian times[7] helped by Ada Lovelace.[8] Around 1900, punch-card machines[9] were introduced. However, all of these machines were constrained to perform a single task, or at best some subset of all possible tasks.

During the 1940s, as newer and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors. As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s, with the creation of the first computer science departments and degree programs.[4][10] Since practical computers became available, many applications of computing have become distinct areas of study in their own right.

Although many initially believed it impossible that computers themselves could actually be a scientific field of study, in the late fifties it gradually became accepted among the greater academic population.[11] It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM (short for International Business Machines) released the IBM 704 and later the IBM 709 computers, which were widely used during the exploration period of such devices. “Still, working with the IBM [computer] was frustrating…if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again”.[11] During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace.

Time has seen significant improvements in the usability and effectiveness of computer science technology. Modern society has seen a significant shift from computers being used solely by experts or professionals to a more widespread user base.

Major achievements


The German military used the Enigma machine during World War II for communication they thought to be secret. The large-scale decryption of Enigma traffic at Bletchley Park was an important factor that contributed to Allied victory in WWII.[12]

Despite its relatively short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society. These include:

Fields of computer science

As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software.[17][18] The Computer Sciences Accreditation Board (CSAB) – which is made up of representatives of the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers Computer Society, and the Association for Information Systems – identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, computer-human interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science.[17]

Theory of computation

The study of the theory of computation is focused on answering fundamental questions about what can be computed, and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a computational problem.

The famous “P=NP?” problem, one of the Millennium Prize Problems,[19] is an open problem in the theory of computation.

Related topics: Computability theory · Computational complexity theory

Theoretical computer science

The broader field of theoretical computer science encompasses both the classical theory of computation and a wide range of other topics that focus on the more abstract, logical, and mathematical aspects of computing.

Related topics: Mathematical logic · Automata theory · Number theory · Graph theory · Type theory · Category theory · Computational geometry · Quantum computing theory

Algorithms and data structures

Related topics: Analysis of algorithms · Algorithms · Data structures

Programming methodology and languages

Related topics: Compilers · Programming languages

Computer elements and architecture

Related topics: Digital logic · Microarchitecture · Multiprocessing

Numerical and symbolic computation

Related topics: Bioinformatics · Cognitive science · Computational chemistry · Computational neuroscience · Computational physics · Numerical algorithms · Symbolic mathematics

Applications

The following disciplines are often studied from a more theoretical, computer science viewpoint, as well as from a more practical, engineering perspective.

Operating systems · Computer networks · Computer graphics · Computer vision · Databases · Computer security · Artificial intelligence · Robotics · Human-computer interaction · Ubiquitous computing

Relationship with other fields

Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. Also, in the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM: turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist.[20] Three months later in the same journal, comptologist was suggested, followed the next year by hypologist.[21] The term computics has also been suggested.[22] In Europe, the term Informatik is used with greater frequency.

The renowned computer scientist Edsger Dijkstra stated, “Computer science is no more about computers than astronomy is about telescopes.” The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been much cross-fertilization of ideas between the various computer-related disciplines. Computer science research has also often crossed into other disciplines, such as cognitive science, economics, mathematics, physics (see quantum computing), and linguistics.

Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science.[4] Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel and Alan Turing, and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.

The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term “software engineering” means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.[23]

The academic, political, and funding aspects of computer science tend to depend on whether a department is formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research.

Computer science education

Some universities teach computer science as a theoretical study of computation and algorithmic reasoning. These programs often feature the theory of computation, analysis of algorithms, formal methods, concurrency theory, databases, computer graphics and systems analysis, among others. They typically also teach computer programming, but treat it as a vessel for the support of other fields of computer science rather than a central focus of high-level study.

Other colleges and universities, as well as secondary schools and vocational programs that teach computer science, emphasize the practice of advanced programming rather than the theory of algorithms and computation in their computer science curricula. Such curricula tend to focus on those skills that are important to workers entering the software industry. The practical aspects of computer programming are often referred to as software engineering. However, there is a lot of disagreement over the meaning of the term, and whether or not it is the same thing as programming.


References

  1. ^ “Computer science is the study of information”. New Jersey Institute of Technology, Gutenberg Information Technologies
  2. ^ “Computer science is the study of computation.” Computer Science Department, College of Saint Benedict, Saint John’s University
  3. ^ “Computer Science is the study of all aspects of computer systems, from the theoretical foundations to the very practical aspects of managing large software projects.” Massey University
  4. ^ a b c Denning, P.J. (2000). “Computer Science: The Discipline” (PDF). Encyclopedia of Computer Science. http://web.archive.org/web/20060525195404/http://www.idi.ntnu.no/emner/dif8916/denning.pdf. 
  5. ^ “Common myths and preconceptions about Cambridge Computer Science”. Computer Science Department, University of Cambridge
  6. ^ Nigel Tout (2006). “Calculator Timeline”. Vintage Calculator Web Museum. http://www.vintagecalculators.com/html/calculator_time-line.html. 
  7. ^ “Science Museum – Introduction to Babbage”. http://www.sciencemuseum.org.uk/on-line/babbage/index.asp. Retrieved on 2006-09-24. 
  8. ^ “A Selection and Adaptation From Ada’s Notes found in “Ada, The Enchantress of Numbers,” by Betty Alexandra Toole Ed.D. Strawberry Press, Mill Valley, CA”. http://www.scottlan.edu/Lriddle/women/ada-love.htm. Retrieved on 2006-05-04. 
  9. ^ “IBM Punch Cards in the U.S. Army”. http://www.pattonhq.com/ibm.html. Retrieved on 2006-09-24. 
  10. ^ http://www.cl.cam.ac.uk/conference/EDSAC99/statistics.html
  11. ^ a b Levy, Steven (1984). Hackers: Heroes of the Computer Revolution. Doubleday. ISBN 0-385-19195-2. 
  12. ^ a b David Kahn, The Codebreakers, 1967, ISBN 0-684-83130-9.
  13. ^ a b http://www.cis.cornell.edu/Dean/Presentations/Slides/bgu.pdf
  14. ^ Constable, R.L. (March 2000) (PDF). Computer Science: Achievements and Challenges circa 2000. http://www.cs.cornell.edu/cis-dean/bgu.pdf. 
  15. ^ Abelson, H.; G.J. Sussman with J.Sussman (1996). Structure and Interpretation of Computer Programs (2nd ed.). MIT Press. ISBN 0-262-01153-0. “The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology — the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects.” 
  16. ^ Black box traders are on the march The Telegraph, August 26, 2006
  17. ^ a b Computing Sciences Accreditation Board (28 May 1997). “Computer Science as a Profession”. http://www.csab.org/comp_sci_profession.html. Retrieved on 2008-09-01. 
  18. ^ Committee on the Fundamentals of Computer Science: Challenges and Opportunities, National Research Council (2004). Computer Science: Reflections on the Field, Reflections from the Field. National Academies Press. ISBN 978-0-309-09301-9. http://www.nap.edu/catalog.php?record_id=11106#toc. 
  19. ^ Clay Mathematics Institute P=NP
  20. ^ Communications of the ACM 1(4):p.6
  21. ^ Communications of the ACM 2(1):p.4
  22. ^ IEEE Computer 28(12):p.136
  23. ^ Parnas, David L. (1998). “Software Engineering Programmes are not Computer Science Programmes”. Annals of Software Engineering 6: 19–37. doi:10.1023/A:1018949113292, p. 19: “Rather than treat software engineering as a subfield of computer science, I treat it as an element of the set, Civil Engineering, Mechanical Engineering, Chemical Engineering, Electrical Engineering, …”


Computational complexity theory

From Wikipedia, the free encyclopedia



Computational complexity theory is a branch of the theory of computation in computer science that investigates the problems related to the resources required to run algorithms, and the inherent difficulty in providing algorithms that are efficient for both general and specific computational problems.


Definitions

An engineer might ask “As the size of the input to an algorithm increases, by what factor do the running time and memory requirements increase? And what are the implications of that?” In other words, complexity theory, among other things, investigates the scalability of computational problems and algorithms. In particular, it places practical limits on what computers can and cannot do.

A problem is a collection of related questions, where each question is a finite string. The resources required to answer a question are not enumerated exactly but are described in an algebra called Big O notation, which uses numbers only to represent orders of magnitude rather than the exact resources used by a particular machine.

Analysis

Big O notation

Main article: Big O notation

The time complexity of a problem is the number of steps that it takes to solve an instance of the problem as a function of the size of the input, using the most efficient known algorithm. If an instance has length n and can be solved in n² steps, we can say the problem has a time complexity of n².

But of course, the exact amount of resources will depend on what machine or language is being used. To avoid that difficulty, the Big O notation is generally used (the O stands for the “order” of the calculation). If a problem runs in O(n²) on one computer, then (barring perverse effects) it will also run in O(n²) on all others, even though it may take more or less time depending on what resources they have. So this notation allows us to generalize away from the details of a particular computer.

For example, searching an unsorted list of words for a particular word will take, on average, time proportional to half the length of the list, because starting from either end one must (on average) inspect half the words in the list before finding it. If the word does not exist, one must inspect the whole list to discover that fact, so it could actually be worse, depending on how likely it is that a word in the input is in the list.

But a binary search algorithm is logarithmic in time over size (it is O(log n)). By analogy, humans roughly know where an entry would be in a phone book or dictionary, and use strategies to get to their target quickly, such as using headwords or a thumb index to get roughly to the right place, then switching to a linear search once they are close to the target.
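
As an illustrative aside (not from the article), here is the binary search strategy in Python; each comparison halves the remaining range, which is where the O(log n) bound comes from. The function name and word list are our own.

```python
# Illustrative only: binary search over a sorted list, the O(log n) strategy
# described above, versus scanning the whole list.

def binary_search(sorted_words, target):
    """Return an index of target in sorted_words, or -1 if absent."""
    lo, hi = 0, len(sorted_words) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_words[mid] == target:
            return mid
        if sorted_words[mid] < target:
            lo = mid + 1           # target can only be in the upper half
        else:
            hi = mid - 1           # target can only be in the lower half
    return -1

words = sorted(["ant", "bee", "cat", "dog", "eel", "fox"])
print(binary_search(words, "dog"))   # 3
print(binary_search(words, "yak"))   # -1
```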

It is important to note that much of computational complexity theory is useful in other engineering fields and has made contributions to mathematics that would be of interest even if computers did not exist.

The master theorem can be useful to calculate the time complexity of a recursive algorithm.
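
For illustration only, here is the standard “balanced” case of the master theorem together with one common instance (a mergesort-style recurrence); this worked example is ours, not the article’s.

```latex
% Master theorem (standard balanced case), for illustration only.
T(n) = a\,T\!\left(\tfrac{n}{b}\right) + f(n),
\qquad f(n) = \Theta\!\left(n^{\log_b a}\right)
\;\Longrightarrow\;
T(n) = \Theta\!\left(n^{\log_b a}\,\log n\right).
% Example: a mergesort-style recurrence.
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + \Theta(n),
\qquad \log_2 2 = 1
\;\Longrightarrow\;
T(n) = \Theta(n \log n).
```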

Worst case analysis

A common approach to computational complexity is the worst-case analysis, the estimation of the largest amount of computational resources (time or space) needed to run an algorithm. In many cases, the worst-case estimates are rather too pessimistic to explain the good performance of some algorithms in real use on real inputs. To address this, other approaches have been sought, such as average-case analysis or smoothed analysis.

Axiomatic analysis

Main article: NP-complete

An axiomatic approach to computational complexity was developed by Manuel Blum. He introduced measures of complexity such as the size of a machine and measures of the computational overhead of recursive functions. The complexity of algorithms is a special case of the axiomatic complexity measure. The Blum axioms help in the study of computational complexity in the most general cases.

An important aspect of the theory is to categorize computational problems and algorithms into complexity classes. The most important open question of complexity theory is whether the complexity class P is the same as the complexity class NP, or whether it is a strict subset as is generally believed.

Shortly after the question was first posed, it was realised that many important industry problems in the field of operations research belong to a subclass of NP called NP-complete. NP-complete problems have solutions that are quick to check, yet current methods to find those solutions are not “efficiently scalable” (as described more formally below). If the NP class is larger than P, then NP-complete problems admit no polynomial-time algorithm and any all-purpose solution to the problem is not efficiently scalable.

The fact that the P vs. NP problem has not been solved prompts and justifies various research areas in the theory, such as:

  • spotting and solving special cases of common computational problems
  • the study of the computational complexity itself
  • finding heuristic solutions
  • researching into the hierarchies of complexity classes
  • exploring multivariate algorithmic approaches such as Parameterized complexity.

History

The first publication on computational complexity appeared in 1956.[1] But this publication in Russian was for a long time unknown in the Western world.

The beginning of studies in computational complexity can be attributed not only to Trahtenbrot,[2] but also to Rabin.[3][4][5][6][7]

Hartmanis and Stearns’ paper[7] became the most popular in this area. As a result, some researchers attribute the beginning of computational complexity theory only to Hartmanis, Lewis and Stearns.[8]

Andrey Kolmogorov’s research in the theory of algorithms influenced the development of computational complexity theory. A notable early discovery was the Karatsuba algorithm in 1960, for the multiplication of two numbers. This algorithm disproved Kolmogorov’s 1956 conjecture that the fastest multiplication algorithm must take on the order of n² steps, and thus helped launch the study of algorithms in earnest. In 1967, Manuel Blum developed an axiomatic complexity theory based on his axioms and proved an important result, the so-called speed-up theorem.

The field was subsequently expanded by many researchers, including:

On August 6, 2002, the AKS primality test was published in a paper “PRIMES is in P”[9] by three Indian computer scientists. They presented a deterministic primality-proving algorithm they had devised. The algorithm determines whether a number is prime or composite in polynomial time, and was soon improved by others. The key significance of AKS is that it was the first published primality-proving algorithm to be simultaneously general, polynomial-time, deterministic, and unconditional. The authors received the 2006 Gödel Prize and the 2006 Fulkerson Prize for their work.

Computational complexity theory topics

Time and space complexity

Complexity theory attempts to describe how difficult it is for an algorithm to find a solution to a problem. This differs from computability theory, which describes whether a problem can be solved at all: a branch of science probably made most famous by Alan Turing‘s essay On Computable Numbers, with an Application to the Entscheidungsproblem.

Decision problems

Much of complexity theory deals with decision problems. A decision problem is one where the answer is always “yes” or “no”. Some problems are undecidable, or at least seem so, so complexity theory can be used to distinguish problems where it is certain to get a correct “yes” or “no” (not necessarily both). A problem in which the reliable answer is reversed is called the complement of that problem.

For example, the problem IS-PRIME says “yes” if its input is a prime and “no” otherwise, while the problem IS-COMPOSITE says “yes” if the number is not prime. Either may be unreliable when the correct result would be “no”: it may say “yes”, or perhaps never finish, even if it would eventually have produced the right result.

Decision problems are often interesting to researchers because a problem can always be reduced to a decision problem. For instance, a problem HAS-FACTOR may be:

Given integers n and k find whether n has any prime factors less than k.

If we can solve this problem with known maximum resources, we can use that solution to solve the next problem, FACTORIZE, without many more: use a binary search on k until we find the smallest factor of n, divide out that factor, and repeat until all the factors are found (i.e. until we are left with 1).
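
A sketch of that reduction, for illustration only: has_factor stands in for the assumed HAS-FACTOR decision procedure (implemented naively here just so the sketch runs), and FACTORIZE is built on top of it with a binary search on k. All names are our own.

```python
# Illustrative sketch of the reduction described above: if HAS-FACTOR(n, k)
# ("does n have a prime factor less than k?") can be answered, FACTORIZE needs
# only about log2(n) such calls per factor, via binary search on k.

def has_factor(n: int, k: int) -> bool:
    """Stand-in for the assumed HAS-FACTOR decision procedure."""
    return any(n % d == 0 for d in range(2, min(k, n)))

def smallest_factor(n: int) -> int:
    """Binary-search the least k such that n has a factor <= k; that k divides n."""
    lo, hi = 2, n
    while lo < hi:
        mid = (lo + hi) // 2
        if has_factor(n, mid + 1):   # is there a factor <= mid?
            hi = mid
        else:
            lo = mid + 1
    return lo

def factorize(n: int) -> list:
    factors = []
    while n > 1:
        p = smallest_factor(n)       # the smallest factor is always prime
        factors.append(p)
        n //= p
    return factors

print(factorize(360))   # [2, 2, 2, 3, 3, 5]
```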

Resources

Complexity theory analyzes the difficulty of computational problems in terms of many different computational resources. A problem can be described in terms of many requirements it makes on resources: time, space, randomness, alternation, and other less-intuitive measures. A complexity class is the class of all problems which can be solved using a certain amount of a certain computational resource.

There are other measures of computational complexity. For instance, communication complexity is a measure of complexity for distributed computations.

A different measure of problem complexity, which is useful in some cases, is circuit complexity. This is a measure of the size of a boolean circuit needed to compute the answer to a problem, in terms of the number of logic gates required to build the circuit. Such a measure is useful, for example, when designing hardware microchips to compute the function instead of software.

Perhaps the most studied computational resources are for determinism in time or space. These resources represent the amount of time or space needed by a deterministic computer. These resources are of great practical interest, and are well-studied.

Some computational problems are easier to analyze in terms of more unusual resources. For example, a nondeterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The nondeterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so that nondeterministic time is a very important resource in analyzing computational problems.

Many more unusual computational resources have been used in complexity theory. Technically, any complexity measure can be viewed as a computational resource, and complexity measures are very broadly defined by the Blum complexity axioms.

Complexity classes

A complexity class is the class of all of the computational problems which can be solved using a certain amount of a certain computational resource.

The complexity class P is the class of decision problems that can be solved by a deterministic machine in polynomial time. This class corresponds to an intuitive idea of the problems which can be effectively solved in the worst cases.[10]

The complexity class NP is the set of decision problems that can be solved by a non-deterministic Turing machine in polynomial time. This class contains many problems that people would like to be able to solve effectively, including the Boolean satisfiability problem, the Hamiltonian path problem and the vertex cover problem. All the problems in this class have the property that their solutions can be checked efficiently.[10]
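
What “solutions can be checked efficiently” means can be illustrated with the vertex cover problem mentioned above: finding a small cover may be hard, but verifying a proposed cover is a simple polynomial-time loop. The sketch and its names are ours, not the article’s.

```python
# Illustrative only: a polynomial-time verifier for the vertex cover problem.
# Finding a cover of size k may be hard; checking a proposed one is easy.

def is_vertex_cover(edges, cover, k) -> bool:
    """Does `cover` have at most k vertices and touch every edge?"""
    if len(cover) > k:
        return False
    return all(u in cover or v in cover for (u, v) in edges)

edges = [(1, 2), (2, 3), (3, 4), (4, 1)]          # a 4-cycle
print(is_vertex_cover(edges, {2, 4}, 2))   # True: every edge touches 2 or 4
print(is_vertex_cover(edges, {1}, 2))      # False: edge (2, 3) is uncovered
```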

Many complexity classes can be characterized in terms of the mathematical logic needed to express them – this field is called descriptive complexity.

NP completeness and other open questions

P = NP

Main article: Complexity classes P and NP
See also: Oracle machine

The question of whether P = NP (can problems that can be solved in non-deterministic polynomial time also always be solved in deterministic polynomial time?) is one of the most important open questions in theoretical computer science because of the wide implications of a solution.[10] If the answer is yes, many important problems can be shown to have more efficient solutions that are now used with reluctance because of unknown edge cases. These include various types of integer programming in operations research, many problems in logistics, protein structure prediction in biology, and the ability to find formal proofs of pure mathematics theorems.[11][12]

The P = NP problem is one of the Millennium Prize Problems proposed by the Clay Mathematics Institute; a US$1,000,000 prize awaits the first person to provide a solution.[13]

Questions like this motivate the concepts of hardness and completeness.

Hard

A class of problems X is hard for a class of problems Y if every problem in Y can be transformed “easily” (that is to say, it may take a lot of effort but the person solving the problem knows a solution exists) into some problem in X that produces the same solution. The definition of “hard” (or rather “easy”) problems differs by context. For P = NP, “hard” means NP-hard: a class of problems that are not necessarily in NP themselves, but to which any NP problem can be reduced in polynomial time.

Complete

The class X is complete for Y if it is hard for Y and is also a subset of it.

Thus, the class of NP-complete problems contains the most “difficult” problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is not solved, being able to reduce a known NP-complete problem, Π2, to another problem, Π1, would indicate that there is no known polynomial-time solution for Π1. This is due to the fact that a polynomial-time solution to Π1 would yield a polynomial-time solution to Π2. Similarly, because all NP problems can be reduced to the NP-complete problems, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP.[10]

Incomplete

Diagram of complexity classes provided that P ≠ NP. The existence of problems in NP outside both P and NP-complete in this case was established by Ladner.[14]

Incomplete problems are those in NP that are neither NP-complete nor in P. In other words, incomplete problems can neither be solved in polynomial time nor are they hard problems.

It has been shown that if P ≠ NP then there exist NP-incomplete problems.[14][15]

Graph isomorphism

An important unsolved problem in this context is whether the graph isomorphism problem is in P, NP-complete, or NP-incomplete. The answer is not known, but there are strong hints that the problem is at least not NP-complete.[16] The graph isomorphism problem asks whether two given graphs are isomorphic.

The NP = co-NP problem

Co-NP is the class containing the complement problems (i.e. problems with the yes/no answers reversed) of NP problems. It is believed that the two classes are not equal; however, it has not yet been proven. It has been shown[15] that if these two complexity classes are not equal then no NP-complete problem can be in co-NP and no co-NP-complete problem can be in NP.

Every problem in P is also in both NP and co-NP. Integer factorization is an example of a problem in both NP and co-NP that is not known to be in P.

Intractability

Problems that can be solved but not fast enough for the solution to be useful are called intractable.[17] Naive complexity theory assumes that problems lacking polynomial-time solutions are intractable for all but the smallest inputs. Problems that are known to be intractable in this sense include those that are EXPTIME-complete. If NP is not the same as P, then the NP-complete problems are also intractable in this sense. To see why exponential-time algorithms might be unusable in practice, consider a program that makes 2ⁿ operations before halting. For small n, say 100, and assuming for the sake of example that the computer does 10¹² operations each second, the program would run for about 4 × 10¹⁰ years, several times the current age of the universe. Even with a much faster computer, the program would only be useful for very small instances, and in that sense the intractability of a problem is somewhat independent of technological progress. Nevertheless, a polynomial-time algorithm is not always practical. If its running time is, say, n¹⁵, it is unreasonable to consider it efficient, and it is still useless except on small instances.
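
The arithmetic in that example can be checked directly; this tiny, illustrative computation is ours, not the article’s.

```python
# Illustrative arithmetic for the example above: 2**100 operations at
# 10**12 operations per second, converted to years.

operations = 2 ** 100
ops_per_second = 10 ** 12
seconds_per_year = 60 * 60 * 24 * 365

years = operations / ops_per_second / seconds_per_year
print(f"{years:.2e} years")   # about 4.0e10 years, several times the
                              # current age of the universe
```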

(Figure caption) The graph shows time (average of 100 instances in msec, using a 933 MHz Pentium III) vs. problem size for knapsack problems, for a state-of-the-art specialized algorithm. A quadratic fit suggests that the empirical algorithmic complexity for instances with 50–10,000 variables is O((log n)²). The data comes from [18].

Theory vs. practice

What intractability means in practice is open to debate. Saying that a problem is not in P does not imply that all large cases of the problem are hard or even that most of them are. For example the decision problem in Presburger arithmetic has been shown not to be in P, yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time (see graph) and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem.


References

  1. ^ Trahtenbrot (1956). 
  2. ^ Trahtenbrot (1967). 
  3. ^ Rabin, M. O. (1959). “Speed of Computation of Functions and Classification of Recursive Sets”. Proc. 3rd Conference of Scientific Societies: 1–2. 
  4. ^ Rabin, M. O. (1960), Tech. Report 2, Degree of Difficulty of Computing a Function and a Partial Ordering of Recursive Sets, Jerusalem: Hebrew University, Jerusalem 
  5. ^ Rabin, M. O. (1963). Real-time computation. Israel Journal of Math. pp. 203–211. 
  6. ^ Hartmanis and Stearns (1964) 
  7. ^ a b Hartmanis and Stearns (1965) 
  8. ^ Aho, Hopcroft and Ullman (1976) 
  9. ^ Agrawal, Manindra; Kayal, Neeraj; Saxena, Nitin, “PRIMES is in P”, Annals of Mathematics, 160 (2004), 781–793
  10. ^ a b c d Sipser, Michael (2006). “Time Complexity”. Introduction to the Theory of Computation (2nd ed.). USA: Thomson Course Technology. ISBN 0534950973. 
  11. ^ Berger, Bonnie A.; Leighton, Terrance (1998). “Protein folding in the hydrophobic-hydrophilic (HP) model is NP-complete”. Journal of Computational Biology 5 (1): p27–40. doi:10.1089/cmb.1998.5.27. PMID 9541869. 
  12. ^ Cook, Stephen (April 2000). The P versus NP Problem. Clay Mathematics Institute. http://www.claymath.org/millennium/P_vs_NP/Official_Problem_Description.pdf. Retrieved on 2006-10-18. 
  13. ^ Jaffe, Arthur M. (2006). “The Millennium Grand Challenge in Mathematics“. Notices of the AMS 53 (6). http://www.ams.org/notices/200606/fea-jaffe.pdf. Retrieved on 2006-10-18. 
  14. ^ a b Ladner, Richard E. (1975). “On the structure of polynomial time reducibility” (PDF). Journal of the ACM (JACM) 22 (1): 151–171. doi:10.1145/321864.321877. http://delivery.acm.org/10.1145/330000/321877/p155-ladner.pdf?key1=321877&key2=7146531911&coll=&dl=ACM&CFID=15151515&CFTOKEN=6184618. 
  15. ^ a b Du, Ding-Zhu; Ko, Ker-I (2000). Theory of Computational Complexity. John Wiley & Sons. ISBN 978-0-471-34506-0. 
  16. ^ Arvind, Vikraman; Kurur, Piyush P. (2006), “Graph isomorphism is in SPP”, Information and Computation 204 (5): 835–852, doi:10.1016/j.ic.2006.02.002 
  17. ^ Hopcroft, et al. (2007), p. 368 
  18. ^ Pisinger, D. 2003. “Where are the hard knapsack problems?” Technical Report 2003/08, Department of Computer Science, University of Copenhagen, Copenhagen, Denmark

Further reading

  • Blum, Manuel (1967) On the Size of Machines, Information and Control, v. 11, pp. 257–265
  • Blum M. (1967) A Machine-independent Theory of Complexity of Recursive Functions, Journal of the ACM, v. 14, No.2, pp. 322–336
  • Downey R. and M. Fellows (1999) Parameterized Complexity, Springer-Verlag.
  • Hartmanis, J., Lewis, P.M., and Stearns, R.E. (1965) Classification of computations by time and memory requirements, Proc. IFIP Congress 65, pp. 31–35
  • Hartmanis, J., and Stearns, R.E. (1964) Computational Complexity of Recursive Sequences, IEEE Proc. 5th Annual Symp. on Switching Circuit Theory and Logical Design, pp. 82–90
  • Hartmanis, J., and Stearns, R.E. (1965) On the Computational Complexity of Algorithms, Transactions American Mathematical Society, No. 5, pp. 285–306
  • Hopcroft, J.E., Motwani, R. and Ullman, J.D. (2007) Introduction to Automata Theory, Languages, and Computation, Addison Wesley, Boston/San Francisco/New York
  • van Leeuwen, Jan, (ed.) Handbook of Theoretical Computer Science, Volume A: Algorithms and Complexity, The MIT Press/Elsevier, 1990. ISBN 978-0-444-88071-0 (Volume A). QA 76.H279 1990. Huge compendium of information, 1000s of references in the various articles.
  • Trahtenbrot, B.A. (1956) Signalizing Functions and Table Operators, Research Notices of the Pensa Pedagogical Institute, v. 4, pp. 75–87 (in Russian)
  • Trahtenbrot, B.A. Complexity of Algorithms and Computations, NGU, Novosibirsk, 1967 (in Russian)


semantic technology | semantic Web | semantic technologies | Web technology | semantic search engine | search engine | semantic web technology | Web technologies | semantic web technologies | semantic search | Web Consortium | Web Services | search engines | query technology | natural language processing | semantic search technology | Web service | database technology | semantic technology products | Web standards | semantic software | semantic applications | semantic web applications | web applications | e – commerce | search technology | semantic analysis technology | semantic solutions | social networking | social networks | software | application infrastructure software | information technology | search results | semantic systems | social networking sites | social software | technology | technology fields | Web component enabling people | Web search services | clinical trials management software system | corporate semantic technology crown jewels | data mining techniques | healthcare | internet evolution | learning tool | semantic technology applications | semantic technology kicks | software giant | technology development | technology fits | text mining | venture capital | Web development | Web extension | web search | data mining | biotechnology | automated semantic technology-based search engine | distributed Internet applications | educational technology field | eService software | food | integrated solutions | Internet Applications | knowledge discovery software | knowledge management solutions | media technologies | media technology | real-world applications | search | search engine technology | search platform | search queries | search query | search software | semantic advertising platform | semantic search engines | social media | software agents | software technology | start-up | web application | Web Conference | Web content | Web data | web developers | Web research | application data management software solutions | ad network | automotive applications | bank | communications industry | computational linguistic technology | content filtering software | conventional keyword technology | computing | cool technology | data aware& applications

 

Google | Microsoft | IBM | Yahoo! | Oracle | Reuters | Semantic Universe | Sprylogics International Corp. | Cognition Technologies | Gilbane Group | Industrial Software Technology S.A. | Metatomix Inc. | AOL | Cerebra | Progress Software Corporation | Velocity Interactive Group | Vulcan Capital | Cisco | Franz Inc. | nexTier Networks | PercipEnz Technologies Inc. | Semantic Web Technologies | Telstra | Thetus Corporation | TopQuadrant Inc | W3C | Xerox | YouTube | Elsevier | eBay | Dow Jones | Dow Jones Enterprise Media Group | Collexis Holdings Inc. | ClearForest | Business Information Systems | BBC | Adobe | Gridstone Research | Hurwitz & Associates | IBM Corp. | Joost | Linguamatics Ltd | Morgan Stanley | Powerset | Radar Networks | SAP AG | Semantic Social Networks | Semantic Technology Solutions | Semantic Web Company | Sonic | Sun Microsystems Inc. | TextWise | the New York Times | Transversal | Vantage Learning | Wipro Limited | Wipro Technologies | Aberdeen Group | Ars Technica | AskMeNow Inc. | Atmel Corporation | Bayer | Battery Ventures | Bizo Inc. | Capgemini | Canada NewsWire | AVR32 AP7 Series Atmel Corporation | Charles River Ventures | Citigroup | Cerebra Inc. | dbMotion | Devesys Technologies Inc. | Devestil | Dream Ventures | Comcast | Editions Profil | Emerging Technologies | Evri | eWeek Chief Technology | Fast Search & Transfer | Gamma Enterprise Technologies | Health Language Inc. | Helion Venture Partners | Hewlett Packard Research Labs | HP Labs | Interactive Supercomputing Inc. | IONA Technologies | Jarg | Kosmix | Life Sciences (HCLS) Interest Group | Matrixware Information Services | Maverick Capital | Medical Knowledge Group | Merrill Lynch | Microsoft Corp. | MITRE Corporation | nexTier Networks Inc. | Pakistan Telecom | PowerOne Capital Markets Limited | Project10X

 

Tim Berners-Lee | John Goodwin | Dave McComb | Paul Miller | Amit Sheth | Nova Spivack | Christine Connors | Alex Iskold | Janie Davies | Jeremy Carroll | Jim Rapoza | Paul Allen | Richard MacManus | Steven Pinker | Ben Adida | Andy Plesser | Bill Gates | Chris Morrison | Chris Augustin | Dan Brickley | David Crystal | David Barnes | Gabor Karsai | Holger Knublauch(TopQuadrant) | David Michael | Devin Wenig | Jason Gaedtke | Jim Hendler | Johann Johannson | John Turato | Josh Dilworth | Julia Kent | Karel Teige | Laura Bennett | Lee Feigenbaum | Marc Horowitz | Mark Greaves | Matthew West | Michael Singer | Neil Roseman | Nikita Ogievetsky | Paolo Dellapiana | Phil Muncaster | Philippe Le Hegaret | Roberto Maria Clemente | Stefan Decker | Stephen Conroy | Stephen Solocof | Steve Ballmer | Steve Ganz | Taylor Cowan | Thomas Tague | Tim O | Tom Morris | Venky Harinarayan | Walter Perry | Yossi Vardi | Dean Whitney | David Provost | David J. Neff | Hiroshi Yasukawa | Hideki Isozaki | Henry Blodget | Harry Chen | Harry Barfoot | Hanson Dodge | Hamish Cunningham | Greg Farr | Gordon M | Gord Hotchkiss | Golda Velez | Gilberto Gil | Geraldine Peters | George Lucas | George Box | Geoff Brown | G. Daniel Martich | Frank Gilbane | Fay Foley-McDunnough | Evans Hadoop | Eva Oliveira | Erika Preuss | Erik Collier | Erick Schonfeld | Eric Hoffer | Eran Shir | Emma Tonkin | Eddy Choi | Draper Fisher Jurvetson | Douglas Turnbull | Douglas Reeves | Dominic White | Dirk-Willem van Gulik | Diane Mueller | Dewey Decimal | Dave Roberts | Dave Raggett | Clay Shirky | Christopher Bancroft | Christian Dupuis

 

MIT | European Union | City University London | Securities and Exchange Commission | SEMANTIC TECHNOLOGY INSTITUTE | National Council for the Blind | Ireland Centre For Inclusive Technology | Digital Enterprise Research Institute | Carnegie Mellon University | Center for Semantic Excellence | Collective Intelligence | European Commission | German Federal Institute for Risk Assessment | Information Integration Intelligence | Institute of Electrical and Electronics Engineers | Missouri Council for History Education | Mozilla Foundation | Northeastern University | ABAP | Palm Springs Unified School District | Pittsburgh Medical Center | SAP Integration and Certification Center | Silicon Valley | sixth Arab Computer Society | Stanford | University of Pittsburgh | UN’s Food and Agricultural Organisation | U.S. Government | VocaLink’s Euro Payment Service | NASD | New York University’s Stern School of Business | New School for Design | National University of Ireland | National Institute of Health | Moscow Institute of Physics and Technology | Mohawk College | Mohawk Applied Research Centre for Health Informatics | MIT Media Lab | Massachusetts Institute of Technology | ManTech MBI’s government | LSDIS Lab . | Lake Washington School District | LA school district | Knowledge Society | Knowledge Organization | Knowledge Media Institute | Knewco’s mission | Ken North Computing LLC Panel | Keio University in Japan | Kansas State University | Joint Information Systems Committee | Jeff Gray Institute | Japan Branch Office | International WIC Institute | International Telecommunication Union | International Software & Productivity Engineering Institute | International Society | international child protection and development charity | Institute of Traffic Safety | Institute of Technology | Institute for Traffic Safety and Automation Engineering | Institute for Traffic Safety | Institute for Technology Assessment | Information Systems Department | Information Society | Imperial College | IEEE Industry Standards and Technology Organization | Highland Consumer Fund | Global | Gene B. Glick Junior Achievement Education Center | Founders Fund | Federal Systems Division | federal government’s sports administration | Federal Department of Innovation, Industry, Science and Research | Euro Payment Service | Environmental Systems Research Institute | Division of Research & Education Systems | Cloud Computing Interoperability Forum | Center for Coordination Science | Churchill Club | Cerebra University | Center Party | Department of Information Technology | Department of Defense | Department of Control and Transport Automation | Department of Control | Department of Computing | Department of Computer Science and Engineering | Defense Logistics Information Research Program’s Technical Solutions Council | Defense Advanced Research Projects Agency | Dalian University of Technology | Cyc Foundation | Cortex Intelligence | Control and Transport Automation Budapest University of Technology and Economics | Consortium | Computer Writing and Research Lab | Columbia University in New York | Columbia University | California Institute of Technology | Business Intelligence for HR

Read Full Post »

Information

From Wikipedia, the free encyclopedia



For other uses, see Information (disambiguation).

The ASCII codes for the word “Wikipedia” represented in binary, the numeral system most commonly used for encoding computer information

Information as a concept has a diversity of meanings, from everyday usage to technical settings. Generally speaking, the concept of information is closely related to notions of constraint, communication, control, data, form, instruction, knowledge, meaning, mental stimulus, pattern, perception, and representation.

Many people speak about the Information Age as the advent of the Knowledge Age or knowledge society, the information society, the information revolution, and information technologies; and even though informatics, information science and computer science are often in the spotlight, the word “information” is often used without careful consideration of the various meanings it has acquired.

[edit] Etymology

According to the Oxford English Dictionary, the first known historical meaning of the word information in English was the act of informing, or giving form or shape to the mind, as in education, instruction, or training. A quotation from a 1387 English text reads: “Five books come down from heaven for information of mankind.” It was also used for an item of training, e.g. a particular instruction: “Melibee had heard the great skills and reasons of Dame Prudence, and her wise information and techniques.” (1386)

The English word was apparently derived from the Latin accusative form (informationem) of the nominative (informatio): this noun is in turn derived from the verb “informare” (to inform) in the sense of “to give form to the mind”, “to discipline”, “instruct”, “teach”: “Men so wise should go and inform their kings.” (1330) Inform itself comes (via French) from the Latin verb informare, to give form to, to form an idea of. Furthermore, Latin itself already contained the word informatio meaning concept or idea, but the extent to which this may have influenced the development of the word information in English is unclear.

As a final note, the ancient Greek word for form was “μορφή” (morphē) and also είδος (eidos: kind, idea, shape, set); the latter word was famously used in a technical philosophical sense by Plato (and later Aristotle) to denote the ideal identity or essence of something (see Theory of forms). “Eidos” can also be associated with thought, proposition or even concept.

[edit] As a message

Information is a term with many meanings depending on context, but is as a rule closely related to such concepts as meaning, knowledge, instruction, communication, representation, and mental stimulus. Simply stated, information is a message received and understood. In terms of data, it can be defined as a collection of facts from which conclusions may be drawn. There are many other aspects of information since it is the knowledge acquired through study or experience or instruction. But overall, information is the result of processing, manipulating and organizing data in a way that adds to the knowledge of the person receiving it.

Information is the state of a system of interest. Message is the information materialized.

Information is a quality of a message from a sender to one or more receivers. Information is always about something (size of a parameter, occurrence of an event, value, ethics, etc). Viewed in this manner, information does not have to be accurate; it may be a truth or a lie, or just the sound of a falling tree. Even a disruptive noise used to inhibit the flow of communication and create misunderstanding would in this view be a form of information. However, generally speaking, if the amount of information in the received message increases, the message is more accurate.

This model assumes there is a definite sender and at least one receiver. Many refinements of the model assume the existence of a common language understood by the sender and at least one of the receivers. An important variation identifies information as that which would be communicated by a message if it were sent from a sender to a receiver capable of understanding the message. In another variation, it is not required that the sender be capable of understanding the message, or even cognizant that there is a message, making information something that can be extracted from an environment, e.g., through observation, reading or measurement.

Communication theory provides a numerical measure of the uncertainty of an outcome. For example, we can say that “the signal contained thousands of bits of information”. Communication theory tends to use the concept of information entropy, generally attributed to Claude Shannon, see below.

Another form of information is Fisher information, a concept of R.A. Fisher. This is used in application of statistics to estimation theory and to science in general. Fisher information is thought of as the amount of information that a message carries about an unobservable parameter. It can be computed from knowledge of the likelihood function defining the system. For example, with a normal likelihood function, the Fisher information is the reciprocal of the variance of the law. In the absence of knowledge of the likelihood law, the Fisher information may be computed from normally distributed score data as the reciprocal of their second moment.
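To make the normal-likelihood case concrete, here is a minimal numerical sketch in Python (an illustration added here, not part of the original article): the score is the derivative of the log-likelihood with respect to the mean, and the average of its square over simulated data recovers the reciprocal of the variance.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 3.0
x = rng.normal(mu, sigma, size=100_000)      # simulated observations

score = (x - mu) / sigma**2                   # derivative of the normal log-likelihood with respect to mu
fisher_estimate = np.mean(score**2)           # empirical second moment of the score
print(fisher_estimate, 1 / sigma**2)          # both are approximately 0.111 = 1/variance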

Even though information and data are often used interchangeably, they are actually very different. Data is a set of unrelated facts and, as such, is of little use until it is properly evaluated. Once evaluation establishes a significant and relevant relation between items of data, they are converted into information, and this information can then be used for different purposes. Thus, until data convey some such relation, they are not useful and are not yet information.

[edit] Measuring information entropy

The view of information as a message came into prominence with the publication in 1948 of an influential paper by Claude Shannon, “A Mathematical Theory of Communication.” This paper provides the foundations of information theory and endows the word information not only with a technical meaning but also a measure. If the sending device is equally likely to send any one of a set of N messages, then the preferred measure of “the information produced when one message is chosen from the set” is the base two logarithm of N (This measure is called self-information). In this paper, Shannon continues:

The choice of a logarithmic base corresponds to the choice of a unit for measuring information. If the base 2 is used the resulting units may be called binary digits, or more briefly bits, a word suggested by J. W. Tukey. A device with two stable positions, such as a relay or a flip-flop circuit, can store one bit of information. N such devices can store N bits…[1]

A complementary way of measuring information is provided by algorithmic information theory. In brief, this measures the information content of a list of symbols based on how predictable they are, or more specifically how easy it is to compute the list through a program: the information content of a sequence is the number of bits of the shortest program that computes it. The sequence below would have a very low algorithmic information measurement since it is a very predictable pattern, and as the pattern continues the measurement would not change. Shannon information would give the same information measurement for each symbol, since they are statistically random, and each new symbol would increase the measurement.

123456789101112131415161718192021
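A rough numerical illustration of the contrast, added here as a Python sketch (the names are illustrative; zlib compression is only a crude stand-in for algorithmic information, and the per-symbol Shannon figure treats characters as statistically independent): the counting pattern above, extended, and a random digit string of the same length have similar per-symbol entropy, but the pattern compresses far better.

import math, random, zlib
from collections import Counter

def shannon_bits_per_symbol(s):
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

pattern = "".join(str(i) for i in range(1, 1001))                      # the predictable counting pattern, extended
random_digits = "".join(random.choice("0123456789") for _ in pattern)  # same length, but unpredictable

for s in (pattern, random_digits):
    print(len(s), shannon_bits_per_symbol(s), len(zlib.compress(s.encode())))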

It is important to recognize the limitations of traditional information theory and algorithmic information theory from the perspective of human meaning. For example, when referring to the meaning content of a message Shannon noted “Frequently the messages have meaning… these semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages” (emphasis in original).

In information theory signals are part of a process, not a substance; they do something, they do not contain any specific meaning. Combining algorithmic information theory and information theory we can conclude that the most random signal contains the most information as it can be interpreted in any way and cannot be compressed.

Michael Reddy noted that “‘signals’ of the mathematical theory are ‘patterns that can be exchanged’. There is no message contained in the signal, the signals convey the ability to select from a set of possible messages.” In information theory “the system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design”.

[edit] As a pattern

Information is any represented pattern. This view assumes neither accuracy nor directly communicating parties, but instead assumes a separation between an object and its representation. Consider the following example: economic statistics represent an economy, however inaccurately. What are commonly referred to as data in computing, statistics, and other fields, are forms of information in this sense. The electro-magnetic patterns in a computer network and connected devices are related to something other than the pattern itself, such as text characters to be displayed and keyboard input. Signals, signs, and symbols are also in this category. On the other hand, according to semiotics, data is symbols with certain syntax and information is data with a certain semantic. Painting and drawing contain information to the extent that they represent something such as an assortment of objects on a table, a profile, or a landscape. In other words, when a pattern of something is transposed to a pattern of something else, the latter is information. This would be the case whether or not there was anyone to perceive it.

But if information can be defined merely as a pattern, does that mean that neither utility nor meaning are necessary components of information? Arguably a distinction must be made between raw unprocessed data and information which possesses utility, value or some quantum of meaning. On this view, information may indeed be characterized as a pattern; but this is a necessary condition, not a sufficient one.

An individual entry in a telephone book, which follows a specific pattern formed by name, address and telephone number, does not become “informative” in some sense unless and until it possesses some degree of utility, value or meaning. For example, someone might look up a girlfriend’s number, might order a take away etc. The vast majority of numbers will never be construed as “information” in any meaningful sense. The gap between data and information is only closed by a behavioral bridge whereby some value, utility or meaning is added to transform mere data or pattern into information.

When one constructs a representation of an object, one can selectively extract from the object (sampling) or use a system of signs to replace (encoding), or both. The sampling and encoding result in representation. An example of the former is a “sample” of a product; an example of the latter is “verbal description” of a product. Both contain information of the product, however inaccurate. When one interprets representation, one can predict a broader pattern from a limited number of observations (inference) or understand the relation between patterns of two different things (decoding). One example of the former is to sip a soup to know if it is spoiled; an example of the latter is examining footprints to determine the animal and its condition. In both cases, information sources are not constructed or presented by some “sender” of information. Regardless, information is dependent upon, but usually unrelated to and separate from, the medium or media used to express it. In other words, the position of a theoretical series of bits, or even the output once interpreted by a computer or similar device, is unimportant, except when someone or something is present to interpret the information. Therefore, a quantity of information is totally distinct from its medium.

[edit] As sensory input

Often information is viewed as a type of input to an organism or designed device. Inputs are of two kinds. Some inputs are important to the function of the organism (for example, food) or device (energy) by themselves. In his book Sensory Ecology, Dusenbery called these causal inputs. Other inputs (information) are important only because they are associated with causal inputs and can be used to predict the occurrence of a causal input at a later time (and perhaps another place). Some information is important because of association with other information but eventually there must be a connection to a causal input. In practice, information is usually carried by weak stimuli that must be detected by specialized sensory systems and amplified by energy inputs before they can be functional to the organism or device. For example, light is often a causal input to plants but provides information to animals. The colored light reflected from a flower is too weak to do much photosynthetic work but the visual system of the bee detects it and the bee’s nervous system uses the information to guide the bee to the flower, where the bee often finds nectar or pollen, which are causal inputs, serving a nutritional function.

Information is any type of sensory input. When an organism with a nervous system receives an input, it transforms the input into an electrical signal. This is regarded as information by some. The idea of representation is still relevant, but in a slightly different manner. That is, while abstract painting does not represent anything concretely, when the viewer sees the painting, it is nevertheless transformed into electrical signals that create a representation of the painting. Defined this way, information does not have to be related to truth, communication, or representation of an object. Entertainment in general is not intended to be informative. Music, the performing arts, amusement parks, works of fiction and so on are thus forms of information in this sense, but they are not necessarily forms of information according to some definitions given above. Consider another example: food supplies both nutrition and taste for those who eat it. If information is equated to sensory input, then nutrition is not information but taste is.

[edit] As an influence which leads to a transformation

Information is any type of pattern that influences the formation or transformation of other patterns. In this sense, there is no need for a conscious mind to perceive, much less appreciate, the pattern. Consider, for example, DNA. The sequence of nucleotides is a pattern that influences the formation and development of an organism without any need for a conscious mind. Systems theory at times seems to refer to information in this sense, assuming information does not necessarily involve any conscious mind, and patterns circulating (due to feedback) in the system can be called information. In other words, it can be said that information in this sense is something potentially perceived as representation, though not created or presented for that purpose.

If, however, the premise of “influence” implies that information has been perceived by a conscious mind and also interpreted by it, the specific context associated with this interpretation may cause the transformation of the information into knowledge. Complex definitions of both “information” and “knowledge” make such semantic and logical analysis difficult, but the condition of “transformation” is an important point in the study of information as it relates to knowledge, especially in the business discipline of knowledge management. In this practice, tools and processes are used to assist a knowledge worker in performing research and making decisions, including steps such as:

  • reviewing information in order to effectively derive value and meaning
  • referencing metadata if any is available
  • establishing a relevant context, often selecting from many possible contexts
  • deriving new knowledge from the information
  • making decisions or recommendations from the resulting knowledge

Stewart (2001) argues that the transformation of information into knowledge is a critical one, lying at the core of value creation and competitive advantage for the modern enterprise.

When Marshall McLuhan speaks of media and their effects on human cultures, he refers to the structure of artifacts that in turn shape our behaviors and mindsets. Also, pheromones are often said to be “information” in this sense.

[edit] As a property in physics

Main article: Physical information

In 2003, J. D. Bekenstein claimed that a growing trend in physics was to define the physical world as being made of information itself (and thus information is defined in this way) (see Digital physics). Information has a well-defined meaning in physics. Examples of this include the phenomenon of quantum entanglement, where particles can interact without reference to their separation or the speed of light. Information itself cannot travel faster than light even if the information is transmitted indirectly. This could mean that all attempts at physically observing a particle with an “entangled” relationship to another are slowed down, even though the particles are not connected in any way other than by the information they carry.

Another link is demonstrated by the Maxwell’s demon thought experiment. In this experiment, a direct relationship between information and another physical property, entropy, is demonstrated. A consequence is that it is impossible to destroy information without increasing the entropy of a system; in practical terms this often means generating heat. Another, more philosophical, outcome is that information could be thought of as interchangeable with energy. Thus, in the study of logic gates, the theoretical lower bound of thermal energy released by an AND gate is higher than for the NOT gate (because information is destroyed in an AND gate and simply converted in a NOT gate). Physical information is of particular importance in the theory of quantum computers.
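The thermodynamic cost mentioned here is usually quantified by Landauer's principle: erasing one bit of information dissipates at least kT·ln 2 of heat. A quick back-of-the-envelope computation in Python, added here as an illustration:

import math

k_B = 1.380649e-23                 # Boltzmann constant, in joules per kelvin
T = 300.0                          # roughly room temperature, in kelvin
landauer_limit = k_B * T * math.log(2)
print(landauer_limit)              # about 2.9e-21 joules, the minimum heat per erased bit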

[edit] As records

Records are a specialized form of information. Essentially, records are information produced consciously or as by-products of business activities or transactions and retained because of their value. Primarily their value is as evidence of the activities of the organization but they may also be retained for their informational value. Sound records management ensures that the integrity of records is preserved for as long as they are required.

The international standard on records management, ISO 15489, defines records as “information created, received, and maintained as evidence and information by an organization or person, in pursuance of legal obligations or in the transaction of business”. The International Council on Archives (ICA) Committee on Electronic Records defined a record as “a specific piece of recorded information generated, collected or received in the initiation, conduct or completion of an activity and that comprises sufficient content, context and structure to provide proof or evidence of that activity”.

Records may be retained because of their business value, as part of the corporate memory of the organization or to meet legal, fiscal or accountability requirements imposed on the organization. Willis (2005) expressed the view that sound management of business records and information delivered “…six key requirements for good corporate governance…transparency; accountability; due process; compliance; meeting statutory and common law requirements; and security of personal and corporate information.”

[edit] Information and semiotics

Beynon-Davies [2] explains the multi-faceted concept of information in terms of that of signs and sign-systems. Signs themselves can be considered in terms of four inter-dependent levels, layers or branches of semiotics: pragmatics, semantics, syntax, and empirics. These four layers serve to connect the social world on the one hand with the physical or technical world on the other.

Pragmatics is concerned with the purpose of communication. Pragmatics links the issue of signs with that of intention. The focus of pragmatics is on the intentions of human agents underlying communicative behaviour. In other words, intentions link language to action.

Semantics is concerned with the meaning of a message conveyed in a communicative act. Semantics considers the content of communication. Semantics is the study of the meaning of signs – the association between signs and behaviour. Semantics can be considered as the study of the link between symbols and their referents or concepts; particularly the way in which signs relate to human behaviour.

Syntax is concerned with the formalism used to represent a message. Syntax as an area studies the form of communication in terms of the logic and grammar of sign systems. Syntax is devoted to the study of the form rather than the content of signs and sign-systems.

Empirics is the study of the signals used to carry a message; the physical characteristics of the medium of communication. Empirics is devoted to the study of communication channels and their characteristics, e.g., sound, light, electronic transmission etc.

Communication normally exists within the context of some social situation. The social situation sets the context for the intentions conveyed (pragmatics) and the form in which communication takes place. In a communicative situation intentions are expressed through messages which comprise collections of inter-related signs taken from a language which is mutually understood by the agents involved in the communication. Mutual understanding implies that agents involved understand the chosen language in terms of its agreed syntax (syntactics) and semantics. The sender codes the message in the language and sends the message as signals along some communication channel (empirics). The chosen communication channel will have inherent properties which determine outcomes such as the speed with which communication can take place and over what distance.

[edit] See also

[edit] References

  1. ^ The Bell System Technical Journal, Vol. 27, p. 379, (July 1948).
  2. ^ Beynon-Davies P. (2002). Information Systems: an introduction to informatics in Organisations. Palgrave, Basingstoke, UK. ISBN 0-333-96390-3

[edit] Further reading

  • Liu, Alan (2004). The Laws of Cool: Knowledge Work and the Culture of Information. University of Chicago Press.
  • Bekenstein, Jacob D. (August 2003). “Information in the holographic universe”. Scientific American.
  • Floridi, Luciano (2005). “Is Information Meaningful Data?”, Philosophy and Phenomenological Research, 70 (2), pp. 351–370. Available online at Oxford University.
  • Floridi, Luciano (2005). “Semantic Conceptions of Information”, The Stanford Encyclopedia of Philosophy (Winter 2005 Edition), Edward N. Zalta (ed.). Available online at Stanford University.
  • Stewart, Thomas (2001). Wealth of Knowledge. Doubleday, New York, NY, 379 pp.
  • Young, Paul (1987). The Nature of Information. Greenwood Publishing Group, Westport, CT.

[edit] External links


Read Full Post »

From Logic to Ontology: The limit of “The Semantic Web”
Francisco Antonio Cerón García
Spanish Royal Society of Physics
fcerong@gmail.com

The limits of the semantic web (http://en.wikipedia.org/wiki/Semantic_Web) are not set by the use of machines themselves, and biological systems could also be used to reach this goal; they are set by the logic (http://en.wikipedia.org/wiki/Logic) being used to construct it, which does not contemplate the concept of time, since it is purely formal, metonymic logic lacking metaphor. That is what Gödel’s theorems (http://en.wikipedia.org/wiki/Gödel’s_incompleteness_theorems) point out: the final tautology of every metonymic construction or mathematical language (http://en.wikipedia.org/wiki/Mathematical_logic), which leads to inconsistencies. The construction of the Semantic Web is an undecidable problem (http://en.wikipedia.org/wiki/Undecidable_problem).
This consistent logic is completely opposite to the logic that makes inconsistent use of time (http://en.wikipedia.org/wiki/Jacques_Lacan), which is inherent to the human unconscious. The use of time is built on lack, not on positive things; it is based on denials and absences, and that is impossible to reflect in a machine, because the self-awareness required to perceive lack is acquired through absence.
The problem is that we are trying to build an intelligent system to replace our way of thinking, at least in information search, but the special nature of the human mind is the use of time, which lets human beings reach a conclusion; therefore the halting problem (http://en.wikipedia.org/wiki/Halting_problem), the stopping of a calculation, does not exist in the human mind.
So all efforts directed toward the semantic web are doomed to failure a priori if the aim is to extend our human way of thinking into machines, because machines lack metaphorical speech: they have only a mathematical construction, which will always be tautological and metonymic, and they lack the use of time, which is what leads to a conclusion or “stop”.
As a demonstration, suppose it is possible to construct the semantic web as a language with capabilities similar to human language, which has the use of time; we should then face this as a theorem, and we can prove it false with a counterexample, given in the particular case of the Turing machine (http://en.wikipedia.org/wiki/Turing_machine) and the halting problem.
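For readers who want that counterexample spelled out, here is the standard diagonal argument behind the halting problem, written as a Python sketch (this is the textbook construction, not the author's own, and it is not a program that can actually run to completion, since the assumed oracle cannot exist):

def halts(program_source, argument):
    # Assumed oracle: a total, always-correct procedure deciding whether
    # program_source halts on argument. The construction below shows it cannot exist.
    raise NotImplementedError

def diagonal(program_source):
    if halts(program_source, program_source):
        while True:       # loop forever exactly when the oracle answers "halts"
            pass
    return "halted"       # halt exactly when the oracle answers "loops forever"

# Running diagonal on its own source contradicts whatever halts() answers,
# so no such halts() can exist: the halting problem is undecidable.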
One solution to the problem of building the semantic web could be the use of non-formal or inconsistency logic (https://methainternet.wordpress.com/2008/01/27/non-formal-or-inconsistency-logic-lacans-logic-godels-incompleteness-theorems).

Read Full Post »

Computation

From Wikipedia, the free encyclopedia


Computation is a general term for any type of information processing. This includes phenomena ranging from human thinking to calculations with a more narrow meaning. Computation is a process following a well-defined model that is understood and can be expressed in an algorithm, protocol, network topology, etc. Computation is also a major subject matter of computer science: it investigates what can or cannot be done in a computational manner.



[edit] Classes of computation

Computation can be classified by at least three orthogonal criteria: digital vs analog, sequential vs parallel vs concurrent, batch vs interactive.

In practice, digital computation is often used to simulate natural processes (for example, Evolutionary computation), including those that are more naturally described by analog models of computation (for example, Artificial neural network). In this situation, it is important to distinguish between the mechanism of computation and the simulated model.
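As a toy illustration of that distinction (an assumed example in Python, not drawn from the article), the following digitally simulates a single artificial neuron, a unit more naturally described in analog terms as a weighted sum passed through a smooth activation:

import math

def neuron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias   # continuous quantity, simulated with floats
    return 1 / (1 + math.exp(-z))                            # smooth (sigmoid) activation

print(neuron([0.5, -1.0], [2.0, 0.75], 0.1))                 # about 0.586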

[edit] Computations as a physical phenomenon

A computation can be seen as a purely physical phenomenon occurring inside a closed physical system called a computer. Examples of such physical systems include digital computers, quantum computers, DNA computers, molecular computers, analog computers or wetware computers. This point of view is the one adopted by the branch of theoretical physics called the physics of computation.

An even more radical point of view is the postulate of digital physics that the evolution of the universe itself is a computation – Pancomputationalism.

[edit] Mathematical models of computation

In the theory of computation, a diversity of mathematical models of computers have been developed. Typical mathematical models of computers are the following:

  • State models including Turing Machine, Push-down automaton, Finite state automaton, and PRAM
  • Functional models including lambda calculus
  • Logical models including logic programming
  • Concurrent models including Actor model and process calculi
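The first of the state models in the list above can be made concrete in a few lines. Below is a minimal Turing machine simulator in Python (an illustrative sketch with assumed names, not a standard library): the transition table maps (state, symbol) to (new state, symbol to write, head movement).

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))
        symbol = cells.get(head, blank)
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    raise RuntimeError("no halt within the step limit")

# Example machine: flip every bit, then halt at the first blank cell.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine(flip_bits, "10110"))   # prints 01001_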

[edit] History

The word computation has an archaic meaning (from its Latin etymological roots), but it has come back into use with the rise of a new scientific discipline: computer science.

[edit] See also


Read Full Post »

Computational problem

From Wikipedia, the free encyclopedia


In theoretical computer science, a computational problem is a mathematical object representing a collection of questions that computers might want to solve. For example, the problem of factoring

“Given a positive integer n, find a nontrivial prime factor of n.”

is a computational problem. Computational problems are one of the main objects of study in theoretical computer science. The field of algorithms studies methods of solving computational problems efficiently. The complementary field of computational complexity attempts to explain why certain computational problems are intractable for computers.

A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. For example, in the factoring problem, the instances are the integers n, and the solutions are the prime numbers p that are nontrivial prime factors of n.

It is conventional to represent both instances and solutions by binary strings, namely elements of {0, 1}*. For example, numbers can be represented as binary strings using the binary encoding. (For readability, we identify numbers with their binary encodings in the examples below.)
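As a small Python sketch of this convention (the names are illustrative, not from the article), the factoring problem above can be written as a function that takes a binary-encoded instance and returns a binary-encoded solution, or None when no nontrivial prime factor exists:

def nontrivial_prime_factor(instance_bits):
    n = int(instance_bits, 2)          # decode the instance from {0, 1}*
    d = 2
    while d * d <= n:
        if n % d == 0:
            return bin(d)[2:]          # the smallest prime factor, encoded back into {0, 1}*
        d += 1
    return None                        # n is prime (or too small): no nontrivial prime factor

print(nontrivial_prime_factor(bin(91)[2:]))   # 91 = 7 × 13, so this prints "111"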

[edit] Types of computational problems

A decision problem is a computational problem where the answer for every instance is either yes or no. An example of a decision problem is primality testing:

“Given a positive integer n, determine if n is prime.”

A decision problem is typically represented as the set of all instances for which the answer is yes. For example, primality testing can be represented as the infinite set

L = {2, 3, 5, 7, 11, …}

In a search problem, the answers can be arbitrary strings. For example, factoring is a search problem where the instances are (string representations of) positive integers and the solutions are (string representations of) collections of primes.

A search problem is represented as a relation over {0, 1}* × {0, 1}* consisting of all the instance-solution pairs, called a search relation. For example, factoring can be represented as the relation

R = {(4, 2), (6, 2), (6, 3), (8, 2), (8, 4), (9, 3), …}

which consists of all pairs of numbers (n, p), where p is a nontrivial factor of n.

A counting problem asks for the number of solutions to a given search problem. For example, the counting problem associated with primality is

“Given a positive integer n, count the number of nontrivial prime factors of n.”

A counting problem can be represented by a function f from {0, 1}* to the nonnegative integers. For a search relation R, the counting problem associated to R is the function

fR(x) = |{y: (x, y) ∈ R}|.
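A brute-force Python sketch of the counting problem stated above (count the nontrivial prime factors of n); the names are illustrative and the method checks every candidate directly, so it is only meant for small inputs:

def count_nontrivial_prime_factors(n):
    is_prime = lambda p: p > 1 and all(p % q for q in range(2, p))
    return sum(1 for p in range(2, n) if n % p == 0 and is_prime(p))

print(count_nontrivial_prime_factors(12))   # prime factors 2 and 3, so this prints 2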

An optimization problem asks for finding the “best possible” solution among the set of all possible solutions to a search problem. One example is the maximum independent set problem:

“Given a graph G, find an independent set of G of maximum size.”

Optimization problems can be represented by their search relations.

[edit] Promise problems

In computational complexity theory, it is usually implicitly assumed that any string in {0, 1}* represents an instance of the computational problem in question. However, sometimes not all strings in {0, 1}* represent valid instances, and one specifies a proper subset of {0, 1}* as the set of “valid instances”. Computational problems of this type are called promise problems.

The following is an example of a (decision) promise problem:

“Given a graph G, determine whether every independent set in G has size at most 5, or G has an independent set of size at least 10.”

Here, the valid instances are those graphs whose maximum independent set size is either at most 5 or at least 10.

Decision promise problems are usually represented as pairs of disjoint subsets (Lyes, Lno) of {0, 1}*. The valid instances are those in Lyes ∪ Lno. Lyes and Lno represent the instances whose answer is yes and no, respectively.
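A tiny brute-force Python sketch of this representation (assumed names, only workable for very small graphs): compute the maximum independent set size, answer for instances inside Lyes or Lno, and leave behaviour unspecified outside the promise.

from itertools import combinations

def max_independent_set_size(vertices, edges):
    for k in range(len(vertices), 0, -1):                     # try sizes from largest to smallest
        for subset in combinations(vertices, k):
            chosen = set(subset)
            if all(not (u in chosen and v in chosen) for u, v in edges):
                return k
    return 0

def promise_answer(vertices, edges):
    m = max_independent_set_size(vertices, edges)
    if m >= 10:
        return "yes"     # instance lies in Lyes
    if m <= 5:
        return "no"      # instance lies in Lno
    return None          # outside the promise: any behaviour is acceptable

print(promise_answer(vertices=[1, 2, 3], edges=[(1, 2)]))     # maximum independent set has size 2, so "no"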

Promise problems play an important role in several areas of computational complexity, including hardness of approximation, property testing, and interactive proof systems.

[edit] References

Read Full Post »

Computer

From Wikipedia, the free encyclopedia


This article is about the machine. For other uses, see Computer (disambiguation).
“Computer technology” redirects here. For the company, see Computer Technology Limited.


A computer is a machine that manipulates data according to a set of instructions.

Although mechanical examples of computers have existed through much of recorded human history, the first devices resembling modern computers were developed in the mid-20th century (1940–1945). The first electronic computers were the size of a large room and consumed as much power as several hundred modern personal computers (PCs).[1] Modern computers based on tiny integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space.[2] Simple computers are small enough to fit into a wristwatch and can be powered by a watch battery. Personal computers in their various forms are icons of the Information Age and are what most people think of as a “computer”, but the embedded computers found in devices ranging from fighter aircraft to industrial robots, digital cameras, and toys are the most numerous.

The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore computers ranging from a personal digital assistant to a supercomputer are all able to perform the same computational tasks, given enough time and storage capacity.


[edit] History of computing

Main article: History of computer hardware

The Jacquard loom was one of the first programmable devices.

The first use of the word “computer” was recorded in 1613, referring to a person who carried out calculations, or computations, and the word continued to be used in that sense until the middle of the 20th century. From the end of the 19th century onwards though, the word began to take on its more familiar meaning, describing a machine that carries out computations.[3]

The history of the modern computer begins with two separate technologies—automated calculation and programmability—but no single device can be identified as the earliest computer, partly because of the inconsistent application of that term. Examples of early mechanical calculating devices include the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150–100 BC). Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered to be a means of deciding which parts of the mechanism performed which actions and when.[4] This is the essence of programmability.

The “castle clock”, an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer.[5] It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway causing automatic doors to open every hour,[6][7] and five robotic musicians who played music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed to compensate for the changing lengths of day and night throughout the year.[5]

The end of the Middle Ages saw a re-invigoration of European mathematics and engineering. Wilhelm Schickard‘s 1623 device was the first of a number of mechanical calculators constructed by European engineers, but none fit the modern definition of a computer, because they could not be programmed.

In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series of punched paper cards as a template which allowed his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer, his analytical engine.[8] Limited finances and Babbage’s inability to resist tinkering with the design meant that the device was never completed.

In the late 1880s Herman Hollerith invented the recording of data on a machine readable medium. Prior uses of machine readable media, above, had been for control, not data. “After some initial trials with paper tape, he settled on punched cards …”[9] To process these punched cards he invented the tabulator, and the key punch machines. These three inventions were the foundation of the modern information processing industry. Large-scale automated data processing of punched cards was performed for the 1890 United States Census by Hollerith’s company, which later became the core of IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.

Alan Turing is widely regarded to be the father of modern computer science. In 1936 Turing provided an influential formalisation of the concept of the algorithm and computation with the Turing machine. Of his role in the modern computer, Time Magazine in naming Turing one of the 100 most influential people of the 20th century, states: “The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine.” [10]

George Stibitz is internationally recognized as a father of the modern digital computer. While working at Bell Labs in November of 1937, Stibitz invented and built a relay-based calculator he dubbed the “Model K” (for “kitchen table”, on which he had assembled it), which was the first to use binary circuits to perform an arithmetic operation. Later models added greater sophistication including complex arithmetic and programmability.[11]

Defining characteristics of some early digital computers of the 1940s (in the history of computing hardware)

Name | First operational | Numeral system | Computing mechanism | Programming | Turing complete
Zuse Z3 (Germany) | May 1941 | Binary | Electro-mechanical | Program-controlled by punched film stock (but no conditional branch) | Yes (1998)
Atanasoff–Berry Computer (US) | 1942 | Binary | Electronic | Not programmable; single purpose | No
Colossus Mark 1 (UK) | February 1944 | Binary | Electronic | Program-controlled by patch cables and switches | No
Harvard Mark I – IBM ASCC (US) | May 1944 | Decimal | Electro-mechanical | Program-controlled by 24-channel punched paper tape (but no conditional branch) | No
Colossus Mark 2 (UK) | June 1944 | Binary | Electronic | Program-controlled by patch cables and switches | No
ENIAC (US) | July 1946 | Decimal | Electronic | Program-controlled by patch cables and switches | Yes
Manchester Small-Scale Experimental Machine (UK) | June 1948 | Binary | Electronic | Stored-program in Williams cathode ray tube memory | Yes
Modified ENIAC (US) | September 1948 | Decimal | Electronic | Program-controlled by patch cables and switches plus a primitive read-only stored programming mechanism using the Function Tables as program ROM | Yes
EDSAC (UK) | May 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes
Manchester Mark 1 (UK) | October 1949 | Binary | Electronic | Stored-program in Williams cathode ray tube memory and magnetic drum memory | Yes
CSIRAC (Australia) | November 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes

A succession of steadily more powerful and flexible computing devices were constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as “the first digital electronic computer” is difficult (Shannon 1940). Notable achievements include:

EDSAC was one of the first computers to implement the stored program (von Neumann) architecture.

  • Konrad Zuse‘s electromechanical “Z machines”. The Z3 (1941) was the first working machine featuring binary arithmetic, including floating point arithmetic and a measure of programmability. In 1998 the Z3 was proved to be Turing complete, therefore being the world’s first operational computer.
  • The non-programmable Atanasoff–Berry Computer (1941) which used vacuum tube based computation, binary numbers, and regenerative capacitor memory. The use of regenerative memory allowed it to be much more compact than its peers (being approximately the size of a large desk or workbench), since intermediate results could be stored and then fed back into the same set of computation elements.
  • The secret British Colossus computers (1943),[12] which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically reprogrammable. It was used for breaking German wartime codes.
  • The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.
  • The U.S. Army’s Ballistics Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes called the first general purpose electronic computer (since Konrad Zuse‘s Z3 of 1941 used electromagnets instead of electronics). Initially, however, ENIAC had an inflexible architecture which essentially required rewiring to change its programming.

Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the “stored program architecture” or von Neumann architecture. This design was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in 1945. A number of projects to develop computers based on the stored-program architecture commenced around this time, the first of these being completed in Great Britain. The first to be demonstrated working was the Manchester Small-Scale Experimental Machine (SSEM or “Baby”), while the EDSAC, completed a year after SSEM, was the first practical implementation of the stored program design. Shortly thereafter, the machine originally described by von Neumann’s paper—EDVAC—was completed but did not see full-time use for an additional two years.

Nearly all modern computers implement some form of the stored-program architecture, making it the single trait by which the word “computer” is now defined. While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture.

Microprocessors are miniaturized devices that often implement stored program CPUs.

Computers using vacuum tubes as their electronic elements were in use throughout the 1950s, but by the 1960s had been largely replaced by transistor-based machines, which were smaller, faster, cheaper to produce, required less power, and were more reliable. The first transistorised computer was demonstrated at the University of Manchester in 1953.[13] In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased speed and reliability of computers. By the late 1970s, many products such as video recorders contained dedicated computers called microcontrollers, and they started to appear as a replacement to mechanical controls in domestic appliances such as washing machines. The 1980s witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household.

Modern smartphones are fully-programmable computers in their own right, and as of 2009 may well be the most common form of such computers in existence.

[edit] Stored program architecture

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that a list of instructions (the program) can be given to the computer and it will store them and carry them out at some time in the future.

In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer’s memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called “jump” instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that “remembers” the location it jumped from and another instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time—with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. For example:

mov #0,sum ; set sum to 0
mov #1,num ; set num to 1
loop: add num,sum ; add num to sum
add #1,num ; add 1 to num
cmp num,#1000 ; compare num to 1000
ble loop ; if num <= 1000, go back to 'loop'
halt ; end of program. stop running

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in about a millionth of a second.[14]

However, computers cannot “think” for themselves in the sense that they only solve problems in exactly the way they are programmed to. An intelligent human faced with the above addition task might soon realize that instead of actually adding up all the numbers one can simply use the equation

1 + 2 + 3 + ... + n = n(n + 1) / 2

and arrive at the correct answer (500,500) with little work.[15] In other words, a computer programmed to add up the numbers one by one as in the example above would do exactly that without regard to efficiency or alternative solutions.
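The same comparison in a few lines of Python, added here as an illustration: the loop mirrors the repetitive-addition program above, while the closed form gives the answer directly.

n = 1000
total = 0
for num in range(1, n + 1):      # the repetitive addition the machine-level program performs
    total += num
print(total, n * (n + 1) // 2)   # both print 500500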

[edit] Programs

A 1970s punched card containing one line from a FORTRAN program. The card reads: “Z(1) = Y + W(1)” and is labelled “PROJ039” for identification purposes.

In practical terms, a computer program may run from just a few instructions to many millions of instructions, as in a program for a word processor or a web browser. A typical modern computer can execute billions of instructions per second (gigahertz or GHz) and rarely make a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.

Errors in computer programs are called “bugs”. Bugs may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases they may cause the program to “hang” (become unresponsive to input such as mouse clicks or keystrokes), or to completely fail or “crash”. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an “exploit”, code designed to take advantage of a bug and disrupt a program’s proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program’s design.[16]

In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode, the command to multiply them would have a different opcode and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from—each with a unique numerical code. Since the computer’s memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer just as if they were numeric data. The fundamental concept of storing programs in the computer’s memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.

While it is possible to write computer programs as long lists of numbers (machine language) and this technique was used with many early computers,[17] it is extremely tedious to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember—a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer’s assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or the AMD Athlon 64 computer that might be in a PC.[18]
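As a rough illustration of what an assembler does, the sketch below (in Python, with invented mnemonics and opcode numbers; real instruction sets are far more involved) simply maps each mnemonic to its numeric code so that the whole program becomes a list of numbers:

# Hypothetical opcode numbers, for illustration only.
OPCODES = {"ADD": 0x01, "SUB": 0x02, "MULT": 0x03, "JUMP": 0x04}

def assemble(lines):
    # Translate a list of "MNEMONIC operand" strings into a list of numbers.
    machine_code = []
    for line in lines:
        mnemonic, operand = line.split()
        machine_code.append(OPCODES[mnemonic])
        machine_code.append(int(operand))
    return machine_code

print(assemble(["ADD 5", "SUB 3", "JUMP 0"]))   # [1, 5, 2, 3, 4, 0]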

Though considerably easier than in machine language, writing long programs in assembly language is often difficult and error prone. Therefore, most complicated programs are written in more abstract high-level programming languages that are able to express the needs of the computer programmer more conveniently (and thereby help reduce programmer error). High level languages are usually “compiled” into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[19] Since high level languages are more abstract than assembly language, it is possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.

The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this problem.

[edit] Example

A traffic light showing red

Suppose a computer is being employed to drive a traffic signal at an intersection between two streets. The computer has the following five basic instructions.

  1. ON(Streetname, Color) Turns on the light of the specified Color on Streetname.
  2. OFF(Streetname, Color) Turns off the light of the specified Color on Streetname.
  3. WAIT(Seconds) Waits a specified number of seconds.
  4. START Starts the program.
  5. REPEAT Tells the computer to repeat a specified part of the program in a loop.

Comments are marked with a // on the left margin. Assume the street names are Broadway and Main.

START

//Let Broadway traffic go
OFF(Broadway, Red)
ON(Broadway, Green)
WAIT(60 seconds)

//Stop Broadway traffic
OFF(Broadway, Green)
ON(Broadway, Yellow)
WAIT(3 seconds)
OFF(Broadway, Yellow)
ON(Broadway, Red)

//Let Main traffic go
OFF(Main, Red)
ON(Main, Green)
WAIT(60 seconds)

//Stop Main traffic
OFF(Main, Green)
ON(Main, Yellow)
WAIT(3 seconds)
OFF(Main, Yellow)
ON(Main, Red)

//Tell computer to continuously repeat the program.
REPEAT ALL

With this set of instructions, the computer would cycle the light continually through red, green, yellow and back to red again on both streets.

However, suppose there is a simple on/off switch connected to the computer that is intended to be used to make the light flash red while some maintenance operation is being performed. The program might then instruct the computer to:

START

IF Switch == OFF THEN: //Normal traffic signal operation
{
//Let Broadway traffic go
OFF(Broadway, Red)
ON(Broadway, Green)
WAIT(60 seconds)

//Stop Broadway traffic
OFF(Broadway, Green)
ON(Broadway, Yellow)
WAIT(3 seconds)
OFF(Broadway, Yellow)
ON(Broadway, Red)

//Let Main traffic go
OFF(Main, Red)
ON(Main, Green)
WAIT(60 seconds)

//Stop Main traffic
OFF(Main, Green)
ON(Main, Yellow)
WAIT(3 seconds)
OFF(Main, Yellow)
ON(Main, Red)

//Tell the computer to repeat this section continuously.
REPEAT THIS SECTION
}

IF Switch == ON THEN: //Maintenance Mode
{
//Turn the red lights on and wait 1 second.
ON(Broadway, Red)
ON(Main, Red)
WAIT(1 second)

//Turn the red lights off and wait 1 second.
OFF(Broadway, Red)
OFF(Main, Red)
WAIT(1 second)

//Tell the computer to repeat the statements in this section.
REPEAT THIS SECTION
}

In this manner, the traffic signal will run a flash-red program when the switch is on, and will run the normal program when the switch is off. Both of these program examples show the basic layout of a computer program in the simple, familiar context of a traffic signal. Any experienced programmer can spot many software bugs in the program; for instance, it does not make sure that the green light is off when the switch is set to flash red. However, removing all possible bugs would make this program much longer and more complicated, and would be confusing to nontechnical readers: the aim of this example is a simple demonstration of how computer instructions are laid out.
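For comparison, here is a minimal sketch of the same controller in Python. The functions read_switch() and set_light() are hypothetical stand-ins for whatever hardware interface a real controller would provide; unlike the pseudocode above, this version re-checks the maintenance switch at the start of every cycle, but it deliberately leaves the other bugs mentioned above unfixed:

import time

def run_signal(read_switch, set_light):
    # read_switch() and set_light(street, color, state) are hypothetical hooks.
    while True:                                    # REPEAT ALL
        if read_switch():                          # maintenance mode: flash red
            for state in (True, False):
                set_light("Broadway", "Red", state)
                set_light("Main", "Red", state)
                time.sleep(1)
        else:                                      # normal operation
            for street in ("Broadway", "Main"):
                set_light(street, "Red", False)    # let this street's traffic go
                set_light(street, "Green", True)
                time.sleep(60)
                set_light(street, "Green", False)  # then stop it again
                set_light(street, "Yellow", True)
                time.sleep(3)
                set_light(street, "Yellow", False)
                set_light(street, "Red", True)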

[edit] How computers work

A general purpose computer has four main components: the arithmetic and logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by busses, often made of groups of wires.

The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.

[edit] Control unit

Main articles: CPU design and Control unit

The control unit (often called a control system or central controller) manages the computer’s various components; it reads and interprets (decodes) the program instructions, transforming them into a series of control signals which activate other parts of the computer.[20] Control systems in advanced computers may change the order of some instructions so as to improve performance.

A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[21]

Diagram showing how a particular MIPS architecture instruction would be decoded by the control system.

The control system’s function is as follows—note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU:

  1. Read the code for the next instruction from the cell indicated by the program counter.
  2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
  3. Increment the program counter so it points to the next instruction.
  4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
  5. Provide the necessary data to an ALU or register.
  6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
  7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
  8. Jump back to step (1).

Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as “jumps” and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
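A toy stored-program machine can make this cycle and the role of the program counter concrete. The following Python sketch uses an invented instruction format (an opcode plus one operand) rather than any real machine code; it fetches each instruction from the cell named by the program counter, increments the counter, decodes the opcode, and executes it, with a conditional jump simply overwriting the counter:

# A toy stored-program machine with an invented instruction format.
def run(program, memory):
    pc = 0                                     # program counter
    acc = 0                                    # a single accumulator register
    while True:
        opcode, operand = program[pc]          # fetch the next instruction
        pc = pc + 1                            # point at the following instruction
        if opcode == "LOAD":                   # decode and execute it
            acc = memory[operand]
        elif opcode == "ADD":
            acc = acc + memory[operand]
        elif opcode == "JUMP_IF_POSITIVE":     # a jump rewrites the program counter
            if acc > 0:
                pc = operand
        elif opcode == "HALT":
            return acc

# Repeatedly adds memory cell 1 (here -2) until the accumulator is no longer positive.
print(run([("LOAD", 0), ("ADD", 1), ("JUMP_IF_POSITIVE", 1), ("HALT", 0)],
          {0: 5, 1: -2}))                      # prints -1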

It is noticeable that the sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program—and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer that runs a microcode program that causes all of these events to happen.

[edit] Arithmetic/logic unit (ALU)

Main article: Arithmetic logic unit

The ALU is capable of performing two classes of operations: arithmetic and logic.[22]

The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting or might include multiplying or dividing, trigonometry functions (sine, cosine, etc.) and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers—albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other (“is 64 greater than 65?”).

Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for creating complicated conditional statements and processing boolean logic.
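As a simplified picture (a sketch only; a real ALU is combinational hardware, not a function call), the ALU can be thought of as selecting one operation from a small fixed menu according to a code supplied by the control unit:

# A toy ALU: selects one arithmetic or logic operation based on an operation code.
def alu(op, a, b):
    if op == "ADD":
        return a + b
    if op == "SUB":
        return a - b
    if op == "AND":
        return a & b
    if op == "XOR":
        return a ^ b
    if op == "GT":                # comparisons return Boolean truth values
        return a > b
    raise ValueError("unknown operation")

print(alu("ADD", 2, 3))           # 5
print(alu("GT", 64, 65))          # False ("is 64 greater than 65?")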

Superscalar computers may contain multiple ALUs so that they can process several instructions at the same time.[23] Graphics processors and computers with SIMD and MIMD features often provide ALUs that can perform arithmetic on vectors and matrices.

[edit] Memory

Main article: Computer storage

Magnetic core memory was the computer memory of choice throughout the 1960s, until it was replaced by semiconductor memory.

A computer’s memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered “address” and can store a single number. The computer can be instructed to “put the number 123 into the cell numbered 1357” or to “add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595”. The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software’s responsibility to give significance to what the memory sees as nothing but a series of numbers.

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two’s complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
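The two readings of a single byte can be seen directly. A short Python sketch (using the standard int.to_bytes and int.from_bytes conversions) shows the same eight-bit pattern interpreted once as an unsigned value and once as a two's complement value:

# One byte holds 2**8 = 256 patterns: 0..255 unsigned, or -128..127 in two's complement.
value = -5
raw = value.to_bytes(1, "big", signed=True)       # the stored eight-bit pattern
print(raw[0])                                     # 251: the same byte read as unsigned
print(int.from_bytes(raw, "big", signed=True))    # -5: read back as two's complement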

The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer’s speed.

Computer main memory comes in two principal varieties: random access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer’s initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer’s operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.[24]

In more sophisticated computers there may be one or more RAM cache memories which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer’s part.

[edit] Input/output (I/O)

Main article: Input/output

Hard disks are common I/O devices used with computers.

I/O is the means by which a computer exchanges information with the outside world.[25] Devices that provide input or output to the computer are called peripherals.[26] On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.

Often, I/O devices are complex computers in their own right with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics[citation needed]. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.

[edit] Multitasking

Main article: Computer multitasking

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e., having the computer switch rapidly between running each program in turn.[27]

One means by which this is done is with a special signal called an interrupt which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running “at the same time”, then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed “time-sharing” since each program is allocated a “slice” of time in turn.[28]
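The effect of time-sharing can be sketched in a few lines of Python, with generators standing in for programs and each yield standing in for the point at which an interrupt hands control back to the scheduler (a deliberately simplified picture; real operating systems preempt programs rather than waiting for them to yield):

# Round-robin "time slicing" with generators standing in for programs.
def worker(name, steps):
    for i in range(steps):
        yield f"{name} runs time slice {i}"

ready = [worker("Program A", 3), worker("Program B", 2)]
while ready:
    program = ready.pop(0)            # give the next program in the queue a slice
    try:
        print(next(program))          # run it until its slice ends
        ready.append(program)         # put it at the back of the queue
    except StopIteration:
        pass                          # that program has finished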

Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.

It might seem that multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running. However, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a “time slice” until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run at the same time without unacceptable speed loss.

[edit] Multiprocessing

Main article: Multiprocessing

Cray designed many supercomputers that used multiprocessing heavily.

Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.
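In high-level code, spreading independent work across several CPUs can be as simple as handing it to a pool of worker processes. The sketch below uses Python's standard multiprocessing module, which by default starts one worker process per available core:

# Distributing independent work across several CPU cores.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool() as pool:                       # one worker process per core by default
        print(pool.map(square, range(10)))     # the work is divided among the workers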

Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[29] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called “embarrassingly parallel” tasks.

[edit] Networking and the Internet

Main articles: Computer networking and Internet

Visualization of a portion of the routes on the Internet.

Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military’s SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems like Sabre.[30]

In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network that it produced was called the ARPANET.[31] The technologies that made the Arpanet possible spread and evolved.

In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. “Wireless” networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.

[edit] Further topics

[edit] Hardware

Main article: Computer hardware

The term hardware covers all of those parts of a computer that are tangible objects. Circuits, displays, power supplies, cables, keyboards, printers and mice are all hardware.

History of computing hardware

First Generation (Mechanical/Electromechanical)
  Calculators: Antikythera mechanism, Difference Engine, Norden bombsight
  Programmable Devices: Jacquard loom, Analytical Engine, Harvard Mark I, Z3
Second Generation (Vacuum Tubes)
  Calculators: Atanasoff–Berry Computer, IBM 604, UNIVAC 60, UNIVAC 120
  Programmable Devices: Colossus, ENIAC, Manchester Small-Scale Experimental Machine, EDSAC, Manchester Mark 1, Ferranti Pegasus, Ferranti Mercury, CSIRAC, EDVAC, UNIVAC I, IBM 701, IBM 702, IBM 650, Z22
Third Generation (Discrete transistors and SSI, MSI, LSI integrated circuits)
  Mainframes: IBM 7090, IBM 7080, System/360, BUNCH
  Minicomputer: PDP-8, PDP-11, System/32, System/36
Fourth Generation (VLSI integrated circuits)
  Minicomputer: VAX, IBM System i
  4-bit microcomputer: Intel 4004, Intel 4040
  8-bit microcomputer: Intel 8008, Intel 8080, Motorola 6800, Motorola 6809, MOS Technology 6502, Zilog Z80
  16-bit microcomputer: Intel 8088, Zilog Z8000, WDC 65816/65802
  32-bit microcomputer: Intel 80386, Pentium, Motorola 68000, ARM architecture
  64-bit microcomputer[32]: Alpha, MIPS, PA-RISC, PowerPC, SPARC, x86-64
  Embedded computer: Intel 8048, Intel 8051
  Personal computer: Desktop computer, Home computer, Laptop computer, Personal digital assistant (PDA), Portable computer, Tablet computer, Wearable computer
Theoretical/experimental: Quantum computer, Chemical computer, DNA computing, Optical computer, Spintronics based computer

Other Hardware Topics

Peripheral device (Input/output)
  Input: Mouse, Keyboard, Joystick, Image scanner
  Output: Monitor, Printer
  Both: Floppy disk drive, Hard disk, Optical disc drive, Teleprinter
Computer busses
  Short range: RS-232, SCSI, PCI, USB
  Long range (Computer networking): Ethernet, ATM, FDDI

[edit] Software

Main article: Computer software

Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. When software is stored in hardware that cannot easily be modified (such as BIOS ROM in an IBM PC compatible), it is sometimes called “firmware” to indicate that it falls into an uncertain area somewhere between hardware and software.

Computer software

Operating system
  Unix and BSD: UNIX System V, AIX, HP-UX, Solaris (SunOS), IRIX, List of BSD operating systems
  GNU/Linux: List of Linux distributions, Comparison of Linux distributions
  Microsoft Windows: Windows 95, Windows 98, Windows NT, Windows 2000, Windows XP, Windows Vista, Windows CE
  DOS: 86-DOS (QDOS), PC-DOS, MS-DOS, FreeDOS
  Mac OS: Mac OS classic, Mac OS X
  Embedded and real-time: List of embedded operating systems
  Experimental: Amoeba, Oberon/Bluebottle, Plan 9 from Bell Labs
Library
  Multimedia: DirectX, OpenGL, OpenAL
  Programming library: C standard library, Standard template library
Data
  Protocol: TCP/IP, Kermit, FTP, HTTP, SMTP
  File format: HTML, XML, JPEG, MPEG, PNG
User interface
  Graphical user interface (WIMP): Microsoft Windows, GNOME, KDE, QNX Photon, CDE, GEM
  Text-based user interface: Command-line interface, Text user interface
Application
  Office suite: Word processing, Desktop publishing, Presentation program, Database management system, Scheduling & Time management, Spreadsheet, Accounting software
  Internet Access: Browser, E-mail client, Web server, Mail transfer agent, Instant messaging
  Design and manufacturing: Computer-aided design, Computer-aided manufacturing, Plant management, Robotic manufacturing, Supply chain management
  Graphics: Raster graphics editor, Vector graphics editor, 3D modeler, Animation editor, 3D computer graphics, Video editing, Image processing
  Audio: Digital audio editor, Audio playback, Mixing, Audio synthesis, Computer music
  Software Engineering: Compiler, Assembler, Interpreter, Debugger, Text Editor, Integrated development environment, Performance analysis, Revision control, Software configuration management
  Educational: Edutainment, Educational game, Serious game, Flight simulator
  Games: Strategy, Arcade, Puzzle, Simulation, First-person shooter, Platform, Massively multiplayer, Interactive fiction
  Misc: Artificial intelligence, Antivirus software, Malware scanner, Installer/Package management systems, File manager

[edit] Programming languages

Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine language by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of different programming languages—some intended to be general purpose, others useful only for highly specialized applications.

Programming Languages

  Lists of programming languages: Timeline of programming languages, Categorical list of programming languages, Generational list of programming languages, Alphabetical list of programming languages, Non-English-based programming languages
  Commonly used Assembly languages: ARM, MIPS, x86
  Commonly used High level languages: Ada, BASIC, C, C++, C#, COBOL, Fortran, Java, Lisp, Pascal, Object Pascal
  Commonly used Scripting languages: Bourne script, JavaScript, Python, Ruby, PHP, Perl

[edit] Professions and organizations

As the use of computers has spread throughout society, an increasing number of careers involve computers.

Computer-related professions
  Hardware-related: Electrical engineering, Electronics engineering, Computer engineering, Telecommunications engineering, Optical engineering, Nanoscale engineering
  Software-related: Computer science, Desktop publishing, Human-computer interaction, Information technology, Scientific computing, Software engineering, Video game industry, Web design

The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.

Organizations
  Standards groups: ANSI, IEC, IEEE, IETF, ISO, W3C
  Professional Societies: ACM, ACM Special Interest Groups, IET, IFIP
  Free/Open source software groups: Free Software Foundation, Mozilla Foundation, Apache Software Foundation

[edit] See also


[edit] Notes

  1. ^ In 1946, ENIAC consumed an estimated 174 kW. By comparison, a typical personal computer may use around 400 W; over four hundred times less. (Kempf 1961)
  2. ^ Early computers such as Colossus and ENIAC were able to process between 5 and 100 operations per second. A modern “commodity” microprocessor (as of 2007) can process billions of operations per second, and many of these operations are more complicated and useful than early computer operations.
  3. ^ computer, n., Oxford English Dictionary (2 ed.), Oxford University Press, 1989, http://dictionary.oed.com/, retrieved on 2009-04-10 
  4. ^ “Heron of Alexandria”. http://www.mlahanas.de/Greeks/HeronAlexandria2.htm. Retrieved on 2008-01-15. 
  5. ^ a b Ancient Discoveries, Episode 11: Ancient Robots, History Channel, http://www.youtube.com/watch?v=rxjbaQl0ad8, retrieved on 2008-09-06 
  6. ^ Howard R. Turner (1997), Science in Medieval Islam: An Illustrated Introduction, p. 184, University of Texas Press, ISBN 0-292-78149-0
  7. ^ Donald Routledge Hill, “Mechanical Engineering in the Medieval Near East”, Scientific American, May 1991, pp. 64-9 (cf. Donald Routledge Hill, Mechanical Engineering)
  8. ^ The analytical engine should not be confused with Babbage’s difference engine which was a non-programmable mechanical calculator.
  9. ^ Columbia University Computing History: Herman Hollerith
  10. ^ http://www.time.com/time/time100/scientist/profile/turing.html
  11. ^ “Inventor Profile: George R. Stibitz”. National Inventors Hall of Fame Foundation, Inc.. http://www.invent.org/hall_of_fame/140.html. 
  12. ^ B. Jack Copeland, ed., Colossus: The Secrets of Bletchley Park’s Codebreaking Computers, Oxford University Press, 2006
  13. ^ Lavington 1998, p. 37
  14. ^ This program was written similarly to those for the PDP-11 minicomputer and shows some typical things a computer can do. All of the text after a semicolon is a comment for the benefit of human readers. Comments have no significance to the computer and are ignored. (Digital Equipment Corporation 1972)
  15. ^ Attempts are often made to create programs that can overcome this fundamental limitation of computers. Software that mimics learning and adaptation is part of artificial intelligence.
  16. ^ It is not universally true that bugs are solely due to programmer oversight. Computer hardware may fail or may itself have a fundamental problem that produces unexpected results in certain situations. For instance, the Pentium FDIV bug caused some Intel microprocessors in the early 1990s to produce inaccurate results for certain floating point division operations. This was caused by a flaw in the microprocessor design and resulted in a partial recall of the affected devices.
  17. ^ Even some later computers were commonly programmed directly in machine code. Some minicomputers like the DEC PDP-8 could be programmed directly from a panel of switches. However, this method was usually used only as part of the booting process. Most modern computers boot entirely automatically by reading a boot program from some non-volatile memory.
  18. ^ However, there is sometimes some form of machine language compatibility between different computers. An x86-64 compatible microprocessor like the AMD Athlon 64 is able to run most of the same programs that an Intel Core 2 microprocessor can, as well as programs designed for earlier microprocessors like the Intel Pentiums and Intel 80486. This contrasts with very early commercial computers, which were often one-of-a-kind and totally incompatible with other computers.
  19. ^ High level languages are also often interpreted rather than compiled. Interpreted languages are translated into machine code on the fly by another program called an interpreter.
  20. ^ The control unit’s role in interpreting instructions has varied somewhat in the past. Although the control unit is solely responsible for instruction interpretation in most modern computers, this is not always the case. Many computers include some instructions that may only be partially interpreted by the control system and partially interpreted by another device. This is especially the case with specialized computing hardware that may be partially self-contained. For example, EDVAC, one of the earliest stored-program computers, used a central control unit that only interpreted four instructions. All of the arithmetic-related instructions were passed on to its arithmetic unit and further decoded there.
  21. ^ Instructions often occupy more than one memory address, so the program counter usually increases by the number of memory locations required to store one instruction.
  22. ^ David J. Eck (2000). The Most Complex Machine: A Survey of Computers and Computing. A K Peters, Ltd.. p. 54. ISBN 9781568811284. 
  23. ^ Erricos John Kontoghiorghes (2006). Handbook of Parallel Computing and Statistics. CRC Press. p. 45. ISBN 9780824740672. 
  24. ^ Flash memory also may only be rewritten a limited number of times before wearing out, making it less useful for heavy random access usage. (Verma 1988)
  25. ^ Donald Eadie (1968). Introduction to the Basic Computer. Prentice-Hall. p. 12. 
  26. ^ Arpad Barna; Dan I. Porat (1976). Introduction to Microcomputers and the Microprocessors. Wiley. p. 85. ISBN 9780471050513. 
  27. ^ Jerry Peek; Grace Todino, John Strang (2002). Learning the UNIX Operating System: A Concise Guide for the New User. O’Reilly. p. 130. ISBN 9780596002619. 
  28. ^ Gillian M. Davis (2002). Noise Reduction in Speech Applications. CRC Press. p. 111. ISBN 9780849309496. 
  29. ^ However, it is also very common to construct supercomputers out of many pieces of cheap commodity hardware; usually individual computers connected by networks. These so-called computer clusters can often provide supercomputer performance at a much lower cost than customized designs. While custom architectures are still used for most of the most powerful supercomputers, there has been a proliferation of cluster computers in recent years. (TOP500 2006)
  30. ^ Agatha C. Hughes (2000). Systems, Experts, and Computers. MIT Press. p. 161. ISBN 9780262082853. “The experience of SAGE helped make possible the first truly large-scale commercial real-time network: the SABRE computerized airline reservations system…” 
  31. ^ “A Brief History of the Internet”. Internet Society. http://www.isoc.org/internet/history/brief.shtml. Retrieved on 2008-09-20. 
  32. ^ Most major 64-bit instruction set architectures are extensions of earlier designs. All of the architectures listed in this table, except for Alpha, existed in 32-bit forms before their 64-bit incarnations were introduced.

[edit] References

[edit] External links

  

Read Full Post »

Older Posts »