
Archive for January 21st, 2008

Economist.com

Sharing what matters

Jun 7th 2007
From The Economist print edition

Software: A computing maverick hopes to upgrade the web, transforming it from a document collection into a data commons

[Illustration by Belle Mellor]

MOST people find it difficult to keep up with Danny Hillis’s imaginative leaps. In the 1980s he dreamt of building intelligent computers and co-founded Thinking Machines, a firm with a mission to make machines “that will be proud of us”, as he used to put it—with tongue only half in cheek. That did not quite happen, but Mr Hillis did, in the process, pioneer the field of massively parallel supercomputing. After a stint at Disney, where he proposed building a theme-park full of free-roaming robot dinosaurs, he turned his attention to building a mechanical clock that will run for 10,000 years, a task that arguably requires genius in its justification as well as its execution. Now this maverick of the technology industry has a new idea that could have a big impact rather sooner than that.

It concerns the web, a creation that, though impressive, is pedestrian compared with what Mr Hillis has in mind. Today’s web allows easy and universal sharing of documents. Before the web, internet users could share documents only by making bilateral arrangements—requesting a document from someone else by e-mail, for example—which incurred transactional “friction”, so that relatively few people did so. The web eliminated that friction. Today it is obvious that this was world-changing, but Mr Hillis still remembers “how hard it was to explain” before it happened.

Déjà vu. The next step, he says, is to let the web do for data what it has already done for documents. Just as there used to be lots of people with interesting but unshared documents, today there are innumerable people and organisations with useful but locked-up databases. These range from topics of life-and-death importance—the World Health Organisation’s data on bird-flu outbreaks, say—to things that are deceptively banal but potentially useful—a foodie’s private spreadsheet listing the best wines at his local restaurants, say. For data to change the world as documents have changed it, the web must again eliminate all friction involved in sharing.

Metaweb Technologies, a firm set up by Mr Hillis and his co-founders, Robert Cook and John Giannandrea, aims to do exactly that with Freebase, a website that sits on top of a new kind of database. The name is not a pun on cocaine but a contraction of “free” and “database”, since the database shares the spirit of Wikipedia, the free and collaborative encyclopedia. (Mr Hillis is on the advisory board of Wikipedia’s parent organisation.) Just as Wikipedia lets people contribute information to its articles, Freebase, which is in a test phase, will let anybody contribute, correct or recombine data. The difference is that information on Wikipedia tends to be “unstructured”—ie, buried in text—whereas on Freebase it will be structured, so that each item can be re-used in any context.

It is an open question whether enough people will contribute their data to generate the momentum of Wikipedia, but Mr Hillis is optimistic. “Most people with data want others to have and use it,” he reckons. A boffin who collects data on butterflies, say, might want to upload it so that others with the same fascination can add their own information. Another researcher might then add data on lizards, and yet others might then combine the data on butterflies and lizards with existing geographical data to create maps or analyse patterns. The fact that users will not know in advance how their data might be used is precisely the point.

This requires a new level of flexibility in the database. When building most databases today, programmers decide in advance what sort of questions users might wish to ask of the data, by defining what are known as the “schema”—the types of records in the database and the relationships between them. Metaweb’s 35 programmers, by contrast, have built a new sort of database, based on a more flexible structure known to programmers as a “graph”, which allows users to contribute and use not just data, but schemas as well. They can, in short, ask any sort of question of the database.
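
To make this concrete, here is a minimal, illustrative sketch in Python (in no way Metaweb’s actual code) of a graph store in which ordinary facts and schema statements are just edges of the same kind, so contributors can add new kinds of records, and ask new kinds of questions, without a predefined schema:

```python
# Minimal sketch of a schema-flexible graph store (illustrative only; not
# Metaweb's actual implementation). Facts and schema definitions are stored
# as the same kind of edge, so contributors can extend either at any time.
from collections import defaultdict

class GraphStore:
    def __init__(self):
        # edges[subject][predicate] -> set of objects
        self.edges = defaultdict(lambda: defaultdict(set))

    def add(self, subject, predicate, obj):
        """Assert a fact; predicates need no advance declaration."""
        self.edges[subject][predicate].add(obj)

    def ask(self, subject=None, predicate=None, obj=None):
        """Yield all (s, p, o) triples matching the given pattern."""
        for s, preds in self.edges.items():
            if subject not in (None, s):
                continue
            for p, objs in preds.items():
                if predicate not in (None, p):
                    continue
                for o in objs:
                    if obj in (None, o):
                        yield (s, p, o)

g = GraphStore()
# One user contributes butterfly sightings...
g.add("monarch", "is_a", "butterfly")
g.add("monarch", "observed_in", "Mexico")
# ...another later extends the "schema" itself with a new kind of link.
g.add("observed_in", "is_a", "geographic_relation")

print(list(g.ask(predicate="observed_in")))  # [('monarch', 'observed_in', 'Mexico')]
```

Real systems add identifiers, provenance and reconciliation on top of this, but the essential move is the same: the schema itself is just more data in the graph.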

Metaweb is thus very different from commercial database software, such as that made by Oracle, and from Google Base, which might superficially appear similar because it too allows anybody to upload data. Google Base, says Mr Cook, consists of many independent data sets that are not stored in a coherent way. This means that many records are duplicates—if several people upload the details of the same digital camera, say—and may even contradict one another. Metaweb, by contrast, reconciles conflicting data and ensures that each object exists only once in the database. But each object can be tied to every other object, so that the resulting web of associations looks rather like the neural networks in a brain.

There is one similarity to Google, however. The search giant’s founders, Sergey Brin and Larry Page, initially concentrated on perfecting a technology (search) that could change the world, without worrying about a business model, which came much later (in the form of advertising). Mr Hillis plans to do the same. For now, he is much too excited about the technology to worry about the money. “Everything else I’ve worked on, if it succeeds, only helps one thing,” he says. “This has the potential to make everything better.”


http://www.freebase.com/


Shared database MetaWeb gets $42M boost

By Matt Marshall 01.14.08

Metaweb Technologies, the San Francisco company developing an open shared database called Freebase to store and edit the world’s information, has just gotten a big boost from Benchmark Capital and Goldman Sachs. The two firms have invested in a $42.4 million second round of capital for the company, VentureBeat has learned. The company could not be reached for comment. A partner at Benchmark was reached, but he declined comment. [Update: Benchmark followed up Tuesday confirming the news.] This follows a $15 million investment two years ago. Besides Benchmark, the earlier investors also include Millennium Technology Ventures and Omidyar Network.

The investment is considerable, and comes at a time when a number of experts are betting that a more powerful, “semantic” web is about to emerge, in which data about information is much more structured than it is today. People are still waiting for the “killer app” that will exploit this new sort of web, but it’s generally believed that a database such as Freebase or Twine will be needed for this to happen.

[Update: I should clarify: Twine is not so much a database as it is an application. But the linking of data — through relationships — is similar, and thus the easy confusion. Conceivably, Twine could use the Freebase database, and is thus complementary. See comments below. Twine’s Nova Spivack says it best: “Twine is more like a semantic Facebook, and Metaweb is more like a semantic Wikipedia.” Metaweb is a content repository and Twine is an app that uses content for specific purposes.]

VentureBeat writer Chris Morrison once described Twine:

Let’s dumb this down to a very concrete example. In Twine, I might be identified as “Chris Morrison,” and then labeled with the markers “writer,” “venturebeat,” “male,” “technology,” “charming” and “good-looking” (all true, of course). Twine would set me apart from the many other Chris Morrisons running around.

Both Freebase and Twine have drawn considerable hype (see coverage when Freebase was announced last year). Freebase is essentially building a Wikipedia-like database, but with much more power. While volunteers are madly writing up entries on Wikipedia — and very good ones at that — there’s no system on Wikipedia that can tell you the functional relationship between two related pages — i.e., nothing you can ask for “all the entries about males who are good-looking and who write about technology.” Freebase has the ability to do that.

Here’s a video tour of how it works. Freebase categorizes knowledge according to thousands of “types” of information, such as film, director or city. Those are the highest order of categorization. Then underneath those types you have “topics,” which are individual examples of the types — such as Annie Hall and Woody Allen. It boasts two million topics to date. This lets Freebase represent information in a structured way, to support queries from web developers wanting to build applications around it. Freebase also solicits people to contribute their knowledge to the database, governed by a community of editors, and offers its data under a Creative Commons license, through an open API, so that it can be used to power applications.

Search is an example of an application it can make more powerful. See the screenshots below, which show you the example of a search at Freebase for the word Manhattan. Freebase lets you further specify that you’re looking for a film, as opposed to a location, and it rearranges the results accordingly — something you can’t do with Google.
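
For developers, such type-constrained lookups were expressed in the Metaweb Query Language (MQL) and sent to Freebase’s web API. The sketch below is a rough reconstruction rather than a verified client: the endpoint URL, JSON envelope and property names are recalled from the service as it existed around this time (it has long since been retired), so treat every identifier in it as an assumption.

```python
# Rough reconstruction of a Freebase MQL (Metaweb Query Language) read request.
# The endpoint URL, envelope format and property names are from memory of the
# service as it ran circa 2008 and may be inexact; the API has been retired.
import json
import urllib.parse

# "Things named Manhattan that are films (not locations); also return the
#  directors and the release date."
mql_query = [{
    "type": "/film/film",
    "name": "Manhattan",
    "directed_by": [],             # [] asks for a list of values
    "initial_release_date": None,  # None (JSON null) asks for a single value
}]

envelope = json.dumps({"query": mql_query})
request_url = ("http://api.freebase.com/api/service/mqlread?query="
               + urllib.parse.quote(envelope))  # assumed endpoint of the era

print(request_url)
# At the time, an HTTP GET on such a URL returned a JSON envelope whose
# "result" field listed the matching film topics.
```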

Freebase “knows” Woody Allen is a director, and it knows other directors. It also knows Allen’s place of birth, and it can take you to that city, where you can find other people born there, and so on. Everything is connected.

MetaWeb is run by Thomas Layton, who became CEO last year after he left another Benchmark company, OpenTable.

[Screenshots: Freebase search results for “Manhattan”, refined by type]


Freebase Developer Metaweb Technologies Gets $42.4 Million

Authored by Mark Hefflinger on January 15, 2008 – 8:19am.
San Francisco – Metaweb Technologies, the developer of an expansive online database that aims to house all the world’s data, has raised $42.4 million in its second round of venture capital financing, led by Benchmark Capital and Goldman Sachs, VentureBeat reports.

San Francisco-based Metaweb is developing Freebase, a database that allows any user to add or edit content, build applications on top of it, or integrate it into their websites.

Content is drawn from sources including Wikipedia, MusicBrainz and the SEC archives.

Unlike Wikipedia, Freebase lists facts and statistics, instead of arranging information within encyclopedia-like articles.

The company has now raised a total of $57 million since 2006.

Related Links:
http://snipurl.com/1xido (VentureBeat)

http://www.freebase.com

Published in DMW Daily, January 15, 2008


Semantic Web

From Wikipedia, the free encyclopedia

The Semantic Web is an evolving extension of the World Wide Web in which web content can be expressed not only in natural language, but also in a format that can be read and used by software agents, thus permitting them to find, share and integrate information more easily.[1] It derives from W3C director Tim Berners-Lee’s vision of the Web as a universal medium for data, information, and knowledge exchange.

At its core, the semantic web comprises a philosophy,[2] a set of design principles,[3] collaborative working groups, and a variety of enabling technologies. Some elements of the semantic web are expressed as prospective future possibilities that have yet to be implemented or realized.[4] Other elements are expressed in formal specifications.[5] These include the Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, N3, Turtle, N-Triples), and notations such as RDF Schema (RDFS) and the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain.


Purpose

Humans are capable of using the Web to carry out tasks such as finding the Finnish word for “car”, reserving a library book, or searching for the cheapest DVD and buying it. However, a computer cannot accomplish the same tasks without human direction, because web pages are designed to be read by people, not machines. The semantic web is a vision of information that is understandable by computers, so that they can perform more of the tedious work involved in finding, sharing and combining information on the web.

For example, a computer might be instructed to list the prices of flat screen HDTVs larger than 40 inches (1,000 mm) with 1080p resolution at shops in the nearest town that are open until 8pm on Tuesday evenings. Today, this task requires search engines that are individually tailored to every website being searched. The semantic web provides a common standard (RDF) for websites to publish the relevant information in a more readily machine-processable and integratable form.

Tim Berners-Lee originally expressed the vision of the semantic web as follows[6]:

I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.

Tim Berners-Lee, 1999

Semantic publishing will benefit greatly from the semantic web. In particular, the semantic web is expected to revolutionize scientific publishing, for instance through the real-time publishing and sharing of experimental data on the Internet. This simple but radical idea is now being explored by the W3C HCLS group’s Scientific Publishing Task Force.

Tim Berners-Lee has further stated[7]:

People keep asking what Web 3.0 is. I think maybe when you’ve got an overlay of scalable vector graphics – everything rippling and folding and looking misty – on Web 2.0 and access to a semantic Web integrated across a huge space of data, you’ll have access to an unbelievable data resource.

Tim Berners-Lee, A ‘more revolutionary’ Web

Relationship to the Hypertext Web

Markup

Many files on a typical computer can be loosely divided into documents and data. Documents, like mail messages, reports and brochures, are read by humans. Data, like calendars, address books, playlists and spreadsheets, are presented using an application program which lets them be viewed, searched and combined in many ways.

Currently, the World Wide Web is based mainly on documents written in Hypertext Markup Language (HTML), a markup convention that is used for coding a body of text interspersed with multimedia objects such as images and interactive forms. Metadata tags, for example <meta name="keywords" content="computing, computer studies, computer">, <meta name="description" content="xxxx... "> and <meta name="author" content="xxxx">, provide a method by which computers can categorise the content of web pages.

The semantic web takes the concept further: it involves publishing the data in a language designed specifically for data, the Resource Description Framework (RDF), so that it can be categorized according to human perception and be “understood” by computers. The data are then not just stored, but filed in a way that lets machines find and handle them properly.

HTML describes documents and the links between them. RDF, by contrast, describes arbitrary things such as people, meetings, or airplane parts.

For example, with HTML and a tool to render it (perhaps Web browser software, perhaps another user agent), one can create and present a page that lists items for sale. The HTML of this catalog page can make simple, document-level assertions such as “this document’s title is ‘Widget Superstore'”. But there is no capability within the HTML itself to assert unambiguously that, for example, item number X586172 is an Acme Gizmo with a retail price of €199, or that it is a consumer product. Rather, HTML can only say that the span of text “X586172” is something that should be positioned near “Acme Gizmo” and “€ 199”, etc. There is no way to say “this is a catalog” or even to establish that “Acme Gizmo” is a kind of title or that “€ 199” is a price. There is also no way to express that these pieces of information are bound together in describing a discrete item, distinct from other items perhaps listed on the page.
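
By way of contrast, the same catalog facts can be stated explicitly in RDF. The sketch below uses the Python rdflib library and an invented example.org vocabulary, neither of which this article specifies; it is meant only to suggest what machine-readable assertions about item X586172 might look like.

```python
# The catalog facts from the paragraph above, stated as RDF triples with the
# Python rdflib library. The http://example.org/ vocabulary is invented for
# illustration; the article names no particular vocabulary.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

STORE = Namespace("http://example.org/store/")
g = Graph()
g.bind("store", STORE)

item = URIRef("http://example.org/catalog/X586172")
g.add((item, RDF.type, STORE.ConsumerProduct))                    # it is a consumer product
g.add((item, STORE.name, Literal("Acme Gizmo")))                  # its name, not just nearby text
g.add((item, STORE.price, Literal("199", datatype=XSD.decimal)))  # a price...
g.add((item, STORE.currency, Literal("EUR")))                     # ...in euros

print(g.serialize(format="turtle"))
```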

See also: Semantic HTML, Linked Data.

Descriptive and extensible

The semantic web addresses this shortcoming, using the descriptive technologies Resource Description Framework (RDF) and Web Ontology Language (OWL), and the data-centric, customizable Extensible Markup Language (XML). These technologies are combined in order to provide descriptions that supplement or replace the content of Web documents. Thus, content may manifest as descriptive data stored in Web-accessible databases, or as markup within documents (particularly, in Extensible HTML (XHTML) interspersed with XML, or, more often, purely in XML, with layout/rendering cues stored separately). The machine-readable descriptions enable content managers to add meaning to the content, i.e. to describe the structure of the knowledge we have about that content. In this way, a machine can process knowledge itself, instead of text, using processes similar to human deductive reasoning and inference, thereby obtaining more meaningful results and facilitating automated information gathering and research by computers.

Skeptical reactions

Practical feasibility

Some critics question the basic feasibility of a complete or even partial fulfillment of the semantic web. Some develop their critique from the perspective of human behavior and personal preferences, which ostensibly diminish the likelihood of its fulfillment (see, e.g., metacrap). Other commentators object that there are limitations that stem from the current state of software engineering itself (see, e.g., leaky abstraction).

Where semantic web technologies have found a greater degree of practical adoption, it has tended to be among core specialized communities and organizations for intra-company projects.[8] The practical constraints on adoption have appeared less challenging where the domain and scope are more limited than those of the general public and the World Wide Web.[8]

An unrealized idea

The original 2001 Scientific American article (from Berners-Lee) described an expected evolution of the existing Web to a Semantic Web. Such an evolution has yet to occur. Indeed, a more recent article from Berners-Lee and colleagues stated that: “This simple idea, however, remains largely unrealized.” [9] Nonetheless, the recognized authorities in the Semantic Web keep asserting the feasibility of the original idea, and sometimes they even claim that many of the components of the initial vision have already been deployed.[citation needed]

Censorship and privacy

Enthusiasm about the semantic web could be tempered by concerns regarding censorship and privacy. For instance, text-analyzing techniques can now be easily bypassed by using other words (metaphors, for instance) or by using images in place of words. An advanced implementation of the semantic web would make it much easier for governments to control the viewing and creation of online information, as this information would be much easier for an automated content-blocking machine to understand. In addition, the issue has been raised that, with the use of FOAF files and geolocation metadata, there would be very little anonymity associated with the authorship of articles on things such as a personal blog.

Doubling output formats

Another criticism of the semantic web is that it would be much more time-consuming to create and publish content because there would need to be two formats for one piece of data: one for human viewing and one for machines. However, many web applications in development are addressing this issue by creating a machine-readable format upon the publishing of data or the request of a machine for such data. The development of microformats has been one reaction to this kind of criticism.

Specifications such as eRDF and RDFa allow arbitrary RDF data to be embedded in HTML pages. The GRDDL (Gleaning Resource Descriptions from Dialects of Languages) mechanism allows existing material (including microformats) to be automatically interpreted as RDF, so publishers only need to use a single format, such as HTML.

Components

XML, XML Schema, RDF, OWL, SPARQL

The semantic web comprises the standards and tools of XML, XML Schema, RDF, RDF Schema and OWL. The OWL Web Ontology Language Overview describes the function and relationship of each of these components of the semantic web:

  • XML provides an elemental syntax for content structure within documents, yet associates no semantics with the meaning of the content contained within.
  • XML Schema is a language for providing and restricting the structure and content of elements contained within XML documents.
  • RDF is a simple language for expressing data models, which refer to objects (“resources”) and their relationships. An RDF-based model can be represented in XML syntax.
  • RDF Schema is a vocabulary for describing properties and classes of RDF-based resources, with semantics for generalized-hierarchies of such properties and classes.
  • OWL adds more vocabulary for describing properties and classes: among others, relations between classes (e.g. disjointness), cardinality (e.g. “exactly one”), equality, richer typing of properties, characteristics of properties (e.g. symmetry), and enumerated classes.
  • SPARQL is a protocol and query language for semantic web data sources.
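
As a small, concrete illustration of the last two items, the following sketch loads a few invented RDF triples and runs a SPARQL query over them using the Python rdflib library (an implementation choice made for this example, not something prescribed above):

```python
# A minimal SPARQL query over a few invented RDF triples, using the Python
# rdflib library. The data and the ex: vocabulary are made up for this example.
from rdflib import Graph

turtle_data = """
@prefix ex: <http://example.org/> .
ex:AnnieHall ex:type ex:Film ; ex:directedBy ex:WoodyAllen .
ex:Manhattan ex:type ex:Film ; ex:directedBy ex:WoodyAllen .
ex:Manhattan_NY ex:type ex:Location .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

# "Which films were directed by Woody Allen?"
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?film WHERE {
        ?film ex:type ex:Film ;
              ex:directedBy ex:WoodyAllen .
    }
""")
for row in results:
    print(row.film)
```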


The intent is to enhance the usability and usefulness of the Web and its interconnected resources through:

  • servers which expose existing data systems using the RDF and SPARQL standards. Many converters to RDF exist from different applications. Relational databases are an important source. The semantic web server attaches to the existing system without affecting its operation.
  • documents “marked up” with semantic information (an extension of the HTML <meta> tags used in today’s Web pages to supply information for Web search engines using web crawlers). This could be machine-understandable information about the human-understandable content of the document (such as the creator, title, description, etc., of the document) or it could be purely metadata representing a set of facts (such as resources and services elsewhere in the site). (Note that anything that can be identified with a Uniform Resource Identifier (URI) can be described, so the semantic web can reason about animals, people, places, ideas, etc.) Semantic markup is often generated automatically, rather than manually.
  • common metadata vocabularies (ontologies) and maps between vocabularies that allow document creators to know how to mark up their documents so that agents can use the information in the supplied metadata (so that Author in the sense of ‘the Author of the page’ won’t be confused with Author in the sense of a book that is the subject of a book review).
  • automated agents to perform tasks for users of the semantic web using this data
  • web-based services (often with agents of their own) to supply information specifically to agents (for example, a Trust service that an agent could ask if some online store has a history of poor service or spamming).

Projects

Neurocommons

The Neurocommons is an open RDF database developed by Science Commons. It was compiled from major life sciences databases with a focus on neuroscience. It is accessible via a web-based front end using the SPARQL query language, at its original location and at the DERI mirror location.

FOAF

A popular application of the semantic web is Friend of a Friend (FOAF), which describes relationships among people and other agents in terms of RDF.
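
A tiny illustration of what FOAF data looks like in practice, built with the Python rdflib library: the people and URIs below are made up, and only the FOAF namespace itself is the real, published vocabulary.

```python
# A tiny FOAF ("Friend of a Friend") description built with the Python rdflib
# library. The people are invented; http://xmlns.com/foaf/0.1/ is the real
# FOAF vocabulary namespace.
from rdflib import BNode, Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

FOAF = Namespace("http://xmlns.com/foaf/0.1/")
g = Graph()
g.bind("foaf", FOAF)

alice = URIRef("http://example.org/people/alice#me")   # made-up identifier
bob = BNode()                                          # an unnamed node

g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice Example")))
g.add((alice, FOAF.homepage, URIRef("http://example.org/~alice")))
g.add((alice, FOAF.knows, bob))                        # the relationship FOAF is known for
g.add((bob, FOAF.name, Literal("Bob Example")))

print(g.serialize(format="turtle"))
```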

SIOC

The SIOC Project – Semantically-Interlinked Online Communities provides a vocabulary of terms and relationships that model web data spaces. Examples of such data spaces include, among others: discussion forums, weblogs, blogrolls / feed subscriptions, mailing lists, shared bookmarks, image galleries.

SIMILE

SIMILE (Semantic Interoperability of Metadata and Information in unLike Environments) is a joint project, conducted by the MIT Libraries and MIT CSAIL, which seeks to enhance interoperability among digital assets, schemata/vocabularies/ontologies, metadata, and services.

Linking Open Data

[Diagram: datasets in the Linking Open Data project, as of September 2007]

The Linking Open Data project is a community-led effort to create openly accessible and interlinked RDF data on the Web. The data in question takes the form of RDF data sets drawn from a broad collection of data sources. There is a focus on the Linked Data style of publishing RDF on the Web.

The project is one of several sponsored by the W3C’s Semantic Web Education & Outreach Interest Group (SWEO).

Tools

Browsers

A semantic web browser is a form of web user agent that expressly requests RDF data from web servers using the best practice known as “content negotiation”. These tools provide a user interface that enables data-link-oriented navigation of RDF data by dereferencing the data links (URIs) in the RDF data sets returned by web servers.
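
The content-negotiation step itself is ordinary HTTP: the client asks for RDF rather than HTML via the Accept header, then parses and follows the links it finds. A hedged sketch, using an illustrative URI rather than a real data source:

```python
# Sketch of the content-negotiation request a semantic web browser makes:
# dereference a URI while asking for RDF rather than HTML. The URI is purely
# illustrative; any linked-data URI that honours Accept headers would do.
import urllib.request

uri = "http://example.org/people/alice#me"          # a data link found in some RDF
document_url = uri.split("#")[0]                    # dereference the document part

req = urllib.request.Request(
    document_url,
    headers={"Accept": "application/rdf+xml"},      # "give me data, not a web page"
)
with urllib.request.urlopen(req) as resp:           # example.org will not return RDF;
    print(resp.headers.get("Content-Type"))         # a real linked-data endpoint would
    rdf_bytes = resp.read()                         # RDF for the browser to parse and follow
```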


Services

Notification Services

Semantic Web Ping Service

The Semantic Web Ping Service is a notification service for the semantic web that tracks the creation and modification of RDF-based data sources on the Web. It provides web services for loosely coupled monitoring of RDF data. In addition, it provides a breakdown of the RDF data sources it tracks by vocabulary, including SIOC, FOAF, DOAP, RDFS, and OWL.

Piggy Bank

Another freely downloadable tool is Piggy Bank, a plug-in for Firefox. Piggy Bank works by extracting or translating web scripts into RDF information and storing this information on the user’s computer. This information can then be retrieved independently of the original context and used in other contexts, for example by using Google Maps to display it. Piggy Bank works with a new service, Semantic Bank, which combines the idea of tagging information with the new web languages. Piggy Bank was developed by the SIMILE project, which also provides RDFizers, tools that can be used to translate specific types of information, for example weather reports for US zip codes, into RDF. Efforts like these could ease a potentially troublesome transition between the web of today and its semantic successor.



Notes

  1. ^ http://www.w3.org/2001/sw/SW-FAQ#What1
  2. ^ http://www.w3.org/2001/sw/Activity
  3. ^ http://www.w3.org/DesignIssues/
  4. ^ http://www.w3.org/2001/sw/SW-FAQ#What3
  5. ^ http://www.w3.org/2001/sw/#spec
  6. ^ Berners-Lee, Tim; Fischetti, Mark (1999). Weaving the Web. HarperSanFrancisco, chapter 12. ISBN 9780062515872. 
  7. ^ Victoria Shannon (2006-06-26). A ‘more revolutionary’ Web. International Herald Tribune. Retrieved on 2006-05-24.
  8. ^ a b Ivan Herman (2007). State of the Semantic Web. Semantic Days 2007. Retrieved on 2007-07-26.
  9. ^ Nigel Shadbolt, Wendy Hall, Tim Berners-Lee (2006). The Semantic Web Revisited. IEEE Intelligent Systems. Retrieved on 2007-04-13.



Collective intelligence

From Wikipedia, the free encyclopedia


Collective intelligence is a form of intelligence that emerges from the collaboration and competition of many individuals. Collective intelligence appears in a wide variety of forms of consensus decision making in bacteria, animals, humans, and computers. The study of collective intelligence may properly be considered a subfield of sociology, of business, of computer science, and of mass behavior — a field that studies collective behavior from the level of quarks to the level of bacterial, plant, animal, and human societies.

The above definition has emerged from the writings of Peter Russell (1983), Tom Atlee (1993), Pierre Lévy (1994), Howard Bloom (1995), Francis Heylighen (1995), Douglas Engelbart, Cliff Joslyn, Ron Dembo, Gottfried Mayer-Kress (2003) and other theorists. Collective intelligence is referred to as Symbiotic intelligence by Norman L. Johnson.

Some figures like Tom Atlee prefer to focus on collective intelligence primarily in humans and actively work to upgrade what Howard Bloom calls “the group IQ”. Atlee feels that collective intelligence can be encouraged “to overcome ‘groupthink’ and individual cognitive bias in order to allow a collective to cooperate on one process—while achieving enhanced intellectual performance.”

One CI pioneer, George Pór, defined the collective intelligence phenomenon as “the capacity of human communities to evolve towards higher order complexity and harmony, through such innovation mechanisms as differentiation and integration, competition and collaboration.”[1] Tom Atlee and George Pór state that “collective intelligence also involves achieving a single focus of attention and standard of metrics which provide an appropriate threshold of action”. Their approach is rooted in Scientific Community Metaphor.


General concepts

Howard Bloom traces the evolution of collective intelligence from the days of our bacterial ancestors 3.5 billion years ago to the present and demonstrates how a multi-species intelligence has worked since the beginning of life. [2]

Tom Atlee and George Pór, on the other hand, feel that while group theory and artificial intelligence have something to offer, the field of collective intelligence should be seen as primarily a human enterprise, in which mind-sets, a willingness to share, and an openness to the value of distributed intelligence for the common good are paramount. Individuals who respect collective intelligence, say Atlee and Pór, are confident of their own abilities and recognize that the whole is indeed greater than the sum of any individual parts.[citation needed]

From Pór and Atlee’s point of view, maximizing collective intelligence relies on the ability of an organization to accept and develop “The Golden Suggestion”, which is any potentially useful input from any member. Groupthink often hampers collective intelligence by limiting input to a select few individuals or filtering potential Golden Suggestions without fully developing them to implementation.

Knowledge focusing through various voting methods allows many unique perspectives to converge, on the assumption that uninformed voting is to some degree random and can be filtered out of the decision process, leaving only a residue of informed consensus. Critics point out that bad ideas, misunderstandings, and misconceptions are often widely held, and that structuring the decision process must favor experts, who are presumably less prone to random or misinformed voting in a given context.

While these are the views of experts like Atlee and Pór, other founding fathers of collective intelligence see the field differently. Francis Heylighen, Valerie Turchin, and Gottfried Mayer-Kress view collective intelligence through the lens of computer science and cybernetics. Howard Bloom stresses the biological adaptations that have turned most of this earth’s living beings into components of what he calls “a learning machine”. And Peter Russell, Elisabet Sahtouris, and Barbara Marx Hubbard (originator of the term “conscious evolution”) are inspired by the visions of a noosphere–a transcendent, rapidly evolving collective intelligence–an informational cortex of the planet.

History

An early precursor of the concept of collective intelligence was entomologist William Morton Wheeler’s observation that seemingly independent individuals can cooperate so closely as to become indistinguishable from a single organism. In 1911 Wheeler saw this collaborative process at work in ants, which acted like the cells of a single beast with a collective mind. He called the larger creature that the colony seemed to form a “superorganism”.

In 1912, Émile Durkheim identified society as the sole source of human logical thought. He argues in The Elementary Forms of Religious Life that society constitutes a higher intelligence because it transcends the individual over space and time. [3]

Collective intelligence, which has antecedents in Pierre Teilhard de Chardin’s concept of “noosphere” as well as H.G. Wells’s concept of “world brain,” has more recently been examined in depth by Pierre Lévy in a book by the same name, by Howard Bloom in Global Brain (see also the term global brain), by Howard Rheingold in Smart Mobs, and by Robert David Steele Vivas in The New Craft of Intelligence. The latter introduces the concept of all citizens as “intelligence minutemen,” drawing only on legal and ethical sources of information, as able to create a “public intelligence” that keeps public officials and corporate managers honest, turning the concept of “national intelligence” on its head (previously concerned about spies and secrecy).

In 1986, Howard Bloom combined the concepts of apoptosis, parallel distributed processing, group selection, and the superorganism to produce a theory of how a collective intelligence works [4]. Later, he went further and showed how collective intelligences like those of competing bacterial colonies and of competing human societies can be explained in terms of computer-generated “complex adaptive systems” and the “genetic algorithms”, concepts pioneered by John Holland. [2]

David Skrbina [5] cites the concept of a ‘group mind’ as being derived from Plato’s concept of panpsychism (that mind or consciousness is omnipresent and exists in all matter). He follows the development of the concept of a ‘group mind’ as articulated by Hobbes in relation to his Leviathan, which functioned as a coherent entity, and by Fechner’s arguments for a collective consciousness of mankind. He cites Durkheim as the most notable advocate of a ‘collective consciousness’ and Teilhard as the thinker who has developed the philosophical implications of the group mind more than any other.

Collective intelligence is an amplification of the precepts of the Founding Fathers, as represented by Thomas Jefferson in his statement, “A Nation’s best defense is an educated citizenry.” During the industrial era, schools and corporations took a turn toward separating elites from the people they expected to follow them. Both government and private sector organizations glorified bureaucracy and, with bureaucracy, secrecy and compartmentalized knowledge. In the past twenty years, a body of knowledge has emerged which demonstrates that secrecy is actually pathological, and enables selfish decisions against the public interest. Collective intelligence restores the power of the people over their society, and neutralizes the power of vested interests that manipulate information to concentrate wealth.

Types of collective intelligence

[Diagram: types of collective intelligence]

Examples of collective intelligence

The best-known collective intelligence projects are political parties, which mobilize large numbers of people to form policy, select candidates, and finance and run election campaigns. Military units, trade unions, and corporations are focused on narrower concerns but would satisfy some definitions of a genuine “C.I.”—the most rigorous definition would require a capacity to respond to very arbitrary conditions without orders or guidance from “law” or “customers” that tightly constrain actions. Another example is the way online advertising companies such as BootB use collective intelligence to bypass marketing agencies.

Improvisational actors also experience a type of collective intelligence, which they term ‘Group Mind’.

Mathematical techniques

One measure sometimes applied, especially by theorists with a focus on artificial intelligence, is a “collective intelligence quotient” (or “cooperation quotient”), which presumably can be measured like the “individual” intelligence quotient (IQ). This would make it possible to determine the marginal extra intelligence added by each new individual participating in the collective, using metrics to avoid the hazards of groupthink and stupidity.

In 2001, Tadeusz (Ted) Szuba from AGH University in Poland proposed a formal model for the phenomenon of Collective Intelligence. It is assumed to be an unconscious, random, parallel, and distributed computational process, run in mathematical logic by the social structure.[6]

In this model, beings and information are modeled as abstract information molecules carrying expressions of mathematical logic. They are displaced quasi-randomly as a result of their interactions with their environments and their intended movements. Their interaction in abstract computational space creates a multi-threaded inference process which we perceive as Collective Intelligence; thus a non-Turing model of computation is used. This theory allows a simple formal definition of Collective Intelligence as a property of a social structure, and it seems to work well for a wide spectrum of beings, from bacterial colonies up to human social structures. Collective Intelligence, considered as a specific computational process, provides a straightforward explanation of several social phenomena. For this model of Collective Intelligence, a formal definition of IQS (IQ Social) was proposed: “the probability function over the time and domain of N-element inferences which are reflecting inference activity of the social structure.” While IQS seems to be computationally hard, modeling the social structure in terms of a computational process as described above gives a chance for approximation. Prospective applications are the optimization of companies through the maximization of their IQS, and the analysis of drug resistance against the Collective Intelligence of bacterial colonies.[6]

Opposing views

Skeptics, especially those critical of artificial intelligence and more inclined to believe that risk of bodily harm and bodily action are the basis of all unity between people, are more likely to emphasize the capacity of a group to take action and withstand harm as one fluid mass mobilization, shrugging off harms the way a body shrugs off the loss of a few cells. This strain of thought is most obvious in the anti-globalization movement and characterized by the works of John Zerzan, Carol Moore, and Starhawk, who typically shun academics. These theorists are more likely to refer to ecological and collective wisdom and to the role of consensus process in making ontological distinctions than to any form of “intelligence” as such, which they often argue does not exist, or is mere “cleverness”.

Harsh critics of artificial intelligence on ethical grounds are likely to promote collective wisdom-building methods, such as the new tribalists and the Gaians. Whether these can be said to be collective intelligence systems is an open question. Some, e.g. Bill Joy, simply wish to avoid any form of autonomous artificial intelligence and seem willing to work on rigorous collective intelligence in order to remove any possible niche for AI.

Recent developments

Growth of the Internet and mobile telecom has also highlighted “swarming” or “rendezvous” technologies that enable meetings or even dates on demand. The full impact of such technology on collective intelligence and political effort has yet to be felt, but the anti-globalization movement relies heavily on e-mail, cell phones, pagers, SMS, and other means of organizing before, during, and after events. One theorist involved in both political and theoretical activity, Tom Atlee, codifies on a disciplined basis the connections between these events and the political imperatives that drive them. The Indymedia organization does this in a more journalistic way, and there is some coverage of such current events even here at Wikipedia.

It seems likely that such resources could combine in future into a form of collective intelligence accountable only to the current participants but with some strong moral or linguistic guidance from generations of contributors – or even take on a more obviously political form, to advance some shared goals.


Notes

  1. ^ George Pór, Blog of Collective Intelligence
  2. ^ a b Howard Bloom, Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century, 2000
  3. ^ Émile Durkheim, The Elementary Forms of Religious Life, 1912.
  4. ^ Howard Bloom, The Lucifer Principle: A Scientific Expedition Into the Forces of History, 1995
  5. ^ Skrbina, D., 2001, Participation, Organization, and Mind: Toward a Participatory Worldview [1], ch. 8, Doctoral Thesis, Centre for Action Research in Professional Practice, School of Management, University of Bath: England
  6. ^ a b Szuba T., Computational Collective Intelligence, 420 pages, Wiley NY, 2001




Thursday Sep 20, 2007

hyperdata

I just came across a recent post by Nova Spivack, “The Semantic Web, Collective Intelligence and Hyperdata”, where he defines a couple of very useful words.

First, the very useful concept of hyperdata:

One might respond […] by noting that there is already a lot of data on the Web, in XML and other formats — how is the Semantic Web different from that? What is the difference between “Data on the Web” and the idea of “The Data Web?” The best answer to this question that I have heard was something that Dean Allemang said at a recent Semantic Web SIG in Palo Alto. Dean said, “Sure there is data on the Web, but it’s not actually a web of data.” The difference is that in the Semantic Web paradigm, the data can be linked to other data in other places, it’s a web of data, not just data on the Web.

I call this concept of interconnected data, “Hyperdata.” It does for data what hypertext did for text. I’m probably not the originator of this term, but I think it is a very useful term and analogy for explaining the value of the Semantic Web.

Then in the context of the ongoing discussion on tagging and folksonomies, he defines folktologies:

For example, take Metaweb’s Freebase. Freebase is what I call a “folktology” — it’s an emergent, community generated ontology. Users collaborate to add to the ontology and the knowledge base that is populated within it.

Nova’s Hyperdata article was written in response to Tim O’Reilly’s recent post Economist Confused about the Semantic Web. Tim correctly points out that the word “semantic” is often used to cover technologies that are closer to Web 2.0 data-silo technologies. But to get out of the data silos, we need hyperdata, which is, like the web itself, and contra Tim, a social/folk/community enterprise.

The original Economist article was published on August 28, 2007: The web: some antics

Btw. I have some open invitations to Freebase for those who want to help edit it. It can now be browsed by anyone.
