

Evolving Trends

July 11, 2006

P2P 3.0: The People’s Google

/*

This is a more extensive version of the Web 3.0 article with extra sections about the implications of Web 3.0 for Google.

See this follow-up article for the more disruptive ‘decentralized knowledgebase’ version of the model discussed in this article.

Also see this non-Web3.0 version: P2P to Destroy Google, Yahoo, eBay et al 

Web 3.0 Developers:

Feb 5, ‘07: The following reference should provide some context regarding the use of rule-based inference engines and ontologies in implementing the Semantic Web + AI vision (aka Web 3.0) but there are better, simpler ways of doing it. 

  1. Description Logic Programs: Combining Logic Programs with Description Logic

*/

In Web 3.0 (aka Semantic Web), P2P Inference Engines running on millions of users’ PCs and working with standardized domain-specific ontologies (created by Wikipedia, Ontoworld, other organizations or individuals) using Semantic Web tools, including Semantic MediaWiki, will produce an information infrastructure far more powerful than Google (or any current search engine.)

The availability of standardized ontologies that are being created by people, organizations, swarms, smart mobs, e-societies, etc, and the near-future availability of P2P Semantic Web Inference Engines that work with those ontologies means that we will be able to build an intelligent, decentralized, “P2P” version of Google.

Thus, the emergence of P2P Inference Engines and domain-specific ontologies in Web 3.0 (aka Semantic Web) will present a major threat to the central “search” engine model.

Basic Web 3.0 Concepts

Knowledge domains

A knowledge domain is something like Physics, Chemistry, Biology, Politics, the Web, Sociology, Psychology, History, etc. There can be many sub-domains under each domain, each having its own sub-domains, and so on.

Information vs Knowledge

To a machine, knowledge is comprehended information (i.e. new information produced through the application of deductive reasoning to existing information). To a machine, information is only data until it is processed and comprehended.

Ontologies

For each domain of human knowledge, an ontology must be constructed, partly by hand [or rather by brain] and partly with the aid of automation tools.

Ontologies are neither knowledge nor information. They are meta-information. In other words, ontologies are information about information. In the context of the Semantic Web, they encode, using an ontology language, the relationships between the various terms within the information. Those relationships, which may be thought of as the axioms (basic assumptions), together with the rules governing the inference process, both enable and constrain the interpretation (and well-formed use) of those terms by the Info Agents, allowing them to reason out new conclusions based on existing information, i.e. to think. In other words, theorems (formal deductive propositions that are provable based on the axioms and the rules of inference) may be generated by the software, thus allowing formal deductive reasoning at the machine level. And given that an ontology, as described here, is a statement of Logic Theory, two or more independent Info Agents processing the same domain-specific ontology will be able to collaborate and deduce an answer to a query, without being driven by the same software.
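
To make that concrete, here is a minimal sketch (in Python, not any particular Semantic Web toolkit) of how a few axioms plus two inference rules let software derive “theorems” that were never stated explicitly. The vocabulary, the facts and the rules are illustrative assumptions, not a standard ontology.

    # Minimal illustration: axioms are triples, and two inference rules
    # (transitivity of "subClassOf" and propagation of "isA" across it)
    # let the program derive statements that were never written down.

    axioms = {
        ("PizzaRestaurant", "subClassOf", "ItalianRestaurant"),
        ("ItalianRestaurant", "subClassOf", "Restaurant"),
        ("LuigisPlace", "isA", "PizzaRestaurant"),
    }

    def infer(facts):
        """Apply the rules repeatedly until no new statements appear (a fixpoint)."""
        facts = set(facts)
        while True:
            new = set()
            for (a, p1, b) in facts:
                for (c, p2, d) in facts:
                    if p1 == p2 == "subClassOf" and b == c:
                        new.add((a, "subClassOf", d))   # rule 1: transitivity
                    if p1 == "isA" and p2 == "subClassOf" and b == c:
                        new.add((a, "isA", d))          # rule 2: inheritance
            if new <= facts:
                return facts
            facts |= new

    theorems = infer(axioms) - axioms
    print(theorems)
    # Includes ("LuigisPlace", "isA", "Restaurant"), a conclusion no one stated.

Any other program that reads the same three axioms and applies the same two rules will reach the same conclusions, which is the sense in which independently built agents can agree without sharing code.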

Inference Engines

In the context of Web 3.0, Inference Engines will combine the latest innovations from the artificial intelligence (AI) field with domain-specific ontologies (created as formal or informal ontologies by, say, Wikipedia, as well as others), domain inference rules, and query structures to enable deductive reasoning at the machine level.

Info Agents

Info Agents are instances of an Inference Engine, each working with a domain-specific ontology. Two or more agents working with a shared ontology may collaborate to deduce answers to questions, even if they are built on differently designed Inference Engines.
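
As a toy sketch of that collaboration, assume two agents that share only the ontology’s vocabulary (the property names “servesCuisine” and “locatedIn”) and nothing about each other’s implementation; the facts and names are made up for illustration.

    # Two independently written agents that share only the ontology's vocabulary.

    class RestaurantAgent:
        facts = {("LuigisPlace", "servesCuisine", "Italian")}
        def query(self, pattern):
            s, p, o = pattern
            return {f for f in self.facts if f[1] == p and (o is None or f[2] == o)}

    class GeoAgent:
        facts = {("LuigisPlace", "locatedIn", "Boston")}
        def query(self, pattern):
            s, p, o = pattern
            return {f for f in self.facts if f[1] == p and (s is None or f[0] == s)}

    # "Which Italian restaurants are in Boston?" -- join the partial answers
    # from each agent on the shared subject term.
    italian = RestaurantAgent().query((None, "servesCuisine", "Italian"))
    answers = {s for (s, _, _) in italian
               if GeoAgent().query((s, "locatedIn", "Boston"))}
    print(answers)  # {'LuigisPlace'}

Neither agent understands the other’s domain; the shared ontology terms are the only contract between them.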

Proofs and Answers

The interesting thing about Info Agents that I did not clarify in the original post is that they will be capable not only of deducing answers from existing information (i.e. generating new information [and gaining knowledge in the process, for those agents with a learning function]) but also of formally testing propositions (represented in some query logic) that are stated directly or implied by the user. For example, instead of the example I gave previously (in the Wikipedia 3.0 article) where the user asks “Where is the nearest restaurant that serves Italian cuisine?” and the machine deduces that a pizza restaurant serves Italian cuisine, the user may ask “Is the moon blue?” or assert that “the moon is blue” to get a true or false answer from the machine. In this case, a simple Info Agent may answer with “No,” but a more sophisticated one may say “the moon is not blue but some humans are fond of saying ‘once in a blue moon’, which seems illogical to me.”

This test-of-truth feature assumes the use of an ontology language (as a formal logic system) and an ontology where all propositions (or formal statements) that can be made can be computed (i.e. proved true or false) and where all such computations are decidable in finite time. The language may be OWL-DL or any language that, together with the ontology in question, satisfies the completeness and decidability conditions.
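
A minimal sketch of such a truth test, assuming a tiny, finite vocabulary and a deliberately simplified reading in which anything not stated or derivable is left undecided (this is my simplification for illustration, not a property of OWL-DL):

    # Toy truth test over a tiny knowledge base. The "functional property"
    # axiom (a thing has exactly one color) is an assumption of this sketch.

    facts = {("moon", "hasColor", "grey")}

    def prove(subject, predicate, value):
        """Return True if provable, False if refutable, None if undecided."""
        if (subject, predicate, value) in facts:
            return True
        # hasColor is functional here, so a different recorded color refutes the claim.
        if predicate == "hasColor" and any(
                s == subject and p == predicate for (s, p, o) in facts):
            return False
        return None  # not decidable from these facts and axioms

    print(prove("moon", "hasColor", "blue"))  # False -> the simple agent says "No"
    print(prove("moon", "hasColor", "grey"))  # True
    print(prove("mars", "hasColor", "blue"))  # None -> outside this ontology's scope

The completeness condition mentioned above amounts to never hitting the None case for propositions within the ontology’s scope.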

P2P 3.0 vs Google

If you think of how many processes currently run on all the computers and devices connected to the Internet, then that should give you an idea of how many Info Agents could be running at once (as of today), all reasoning collaboratively across the different domains of human knowledge, processing and reasoning about heaps of information, deducing answers, and deciding the truthfulness or falsehood of user-stated or system-generated propositions.

Web 3.0 will bring with it a shift from centralized search engines to P2P Semantic Web Inference Engines, which will collectively have vastly more deductive power, in both quality and quantity, than Google can ever have (this includes any future AI-enabled version of Google, which will not be able to keep up with the distributed P2P AI matrix enabled by millions of users running free P2P Semantic Web Inference Engine software on their home PCs.)

Thus, P2P Semantic Web Inference Engines will pose a huge and escalating threat to Google and other search engines, and can be expected to do to them what P2P file sharing and BitTorrent did to FTP (central-server file transfer) and centralized file hosting in general (e.g. Amazon’s S3 use of BitTorrent.)

In other words, the coming of P2P Semantic Web Inference Engines, as an integral part of the still-emerging Web 3.0, will threaten to wipe out Google and other existing search engines. It’s hard to imagine how any one company could compete with 2 billion Web users (and counting), all of whom are potential users of the disruptive P2P model described here.

“The Future Has Arrived But It’s Not Evenly Distributed”

Currently, Semantic Web (aka Web 3.0) researchers are working out the technology and human resource issues, and folks like Tim Berners-Lee, the father of the Web, are battling critics and enlightening minds about the coming human-machine revolution.

The Semantic Web (aka Web 3.0) has already arrived, and Inference Engines are already working with prototypical ontologies. But the effort required is massive, which is why I have suggested that its most likely enabler will be a social, collaborative movement such as Wikipedia, which has the human resources (thousands of knowledgeable volunteers) to help create the ontologies (most likely as informal ontologies based on semantic annotations) that, when combined with inference rules for each domain of knowledge and the query structures for the particular schema, enable deductive reasoning at the machine level.

Addendum

On AI and Natural Language Processing

I believe that the first generation of AI that will be used by Web 3.0 (aka Semantic Web) will be based on relatively simple inference engines (employing both algorithmic and heuristic approaches) that will not attempt to perform natural language processing. However, they will still have the formal deductive reasoning capabilities described earlier in this article.

Related

  1. Wikipedia 3.0: The End of Google?
  2. Intelligence (Not Content) is King in Web 3.0
  3. Get Your DBin
  4. All About Web 3.0

Posted by Marc Fawzi


Evolving Trends

June 11, 2006

P2P Semantic Web Engines


    Evolving Trends

    June 30, 2006

    Web 3.0: Basic Concepts

    /* (this post was last updated at 1:20pm EST, July 19, ‘06)

    You may also wish to see Wikipedia 3.0: The End of Google? (the original ‘Web 3.0/Semantic Web’ article) and P2P 3.0: The People’s Google (a more extensive version of this article showing the implications of P2P Semantic Web Engines for Google.)

    Web 3.0 Developers:

    Feb 5, ‘07: The following reference should provide some context regarding the use of rule-based inference engines and ontologies in implementing the Semantic Web + AI vision (aka Web 3.0) but there are better, simpler ways of doing it. 

    1. Description Logic Programs: Combining Logic Programs with Description Logic

    */

    Basic Web 3.0 Concepts

    Knowledge domains

    A knowledge domain is something like Physics, Chemistry, Biology, Politics, the Web, Sociology, Psychology, History, etc. There can be many sub-domains under each domain, each having its own sub-domains, and so on.

    Information vs Knowledge

    To a machine, knowledge is comprehended information (i.e. new information produced through the application of deductive reasoning to existing information). To a machine, information is only data until it is processed and comprehended.

    Ontologies

    For each domain of human knowledge, an ontology must be constructed, partly by hand [or rather by brain] and partly with the aid of automation tools.

    Ontologies are neither knowledge nor information. They are meta-information. In other words, ontologies are information about information. In the context of the Semantic Web, they encode, using an ontology language, the relationships between the various terms within the information. Those relationships, which may be thought of as the axioms (basic assumptions), together with the rules governing the inference process, both enable and constrain the interpretation (and well-formed use) of those terms by the Info Agents, allowing them to reason out new conclusions based on existing information, i.e. to think. In other words, theorems (formal deductive propositions that are provable based on the axioms and the rules of inference) may be generated by the software, thus allowing formal deductive reasoning at the machine level. And given that an ontology, as described here, is a statement of Logic Theory, two or more independent Info Agents processing the same domain-specific ontology will be able to collaborate and deduce an answer to a query, without being driven by the same software.

    Inference Engines

    In the context of Web 3.0, Inference Engines will combine the latest innovations from the artificial intelligence (AI) field with domain-specific ontologies (created as formal or informal ontologies by, say, Wikipedia, as well as others), domain inference rules, and query structures to enable deductive reasoning at the machine level.

    Info Agents

    Info Agents are instances of an Inference Engine, each working with a domain-specific ontology. Two or more agents working with a shared ontology may collaborate to deduce answers to questions, even if they are built on differently designed Inference Engines.

    Proofs and Answers

    The interesting thing about Info Agents that I did not clarify in the original post is that they will be capable not only of deducing answers from existing information (i.e. generating new information [and gaining knowledge in the process, for those agents with a learning function]) but also of formally testing propositions (represented in some query logic) that are stated directly or implied by the user. For example, instead of the example I gave previously (in the Wikipedia 3.0 article) where the user asks “Where is the nearest restaurant that serves Italian cuisine?” and the machine deduces that a pizza restaurant serves Italian cuisine, the user may ask “Is the moon blue?” or assert that “the moon is blue” to get a true or false answer from the machine. In this case, a simple Info Agent may answer with “No,” but a more sophisticated one may say “the moon is not blue but some humans are fond of saying ‘once in a blue moon’, which seems illogical to me.”

    This test-of-truth feature assumes the use of an ontology language (as a formal logic system) and an ontology where all propositions (or formal statements) that can be made can be computed (i.e. proved true or false) and where all such computations are decidable in finite time. The language may be OWL-DL or any language that, together with the ontology in question, satisfies the completeness and decidability conditions.

    “The Future Has Arrived But It’s Not Evenly Distributed”

    Currently, Semantic Web (aka Web 3.0) researchers are working out the technology and human resource issues, and folks like Tim Berners-Lee, the father of the Web, are battling critics and enlightening minds about the coming human-machine revolution.

    The Semantic Web (aka Web 3.0) has already arrived, and Inference Engines are already working with prototypical ontologies. But the effort required is massive, which is why I have suggested that its most likely enabler will be a social, collaborative movement such as Wikipedia, which has the human resources (thousands of knowledgeable volunteers) to help create the ontologies (most likely as informal ontologies based on semantic annotations) that, when combined with inference rules for each domain of knowledge and the query structures for the particular schema, enable deductive reasoning at the machine level.

    Addendum

    On AI and Natural Language Processing

    I believe that the first generation of artificial intelligence (AI) that will be used by Web 3.0 (aka Semantic Web) will be based on relatively simple inference engines (employing both algorithmic and heuristic approaches) that will not attempt to perform natural language processing. However, they will still have the formal deductive reasoning capabilities described earlier in this article.

    Related

    1. Wikipedia 3.0: The End of Google?
    2. P2P 3.0: The People’s Google
    3. All About Web 3.0
    4. Semantic MediaWiki
    5. Get Your DBin

    Posted by Marc Fawzi


    Evolving Trends

    July 17, 2006

    Intelligence (Not Content) is King in Web 3.0

    Observations

    1. There’s an enormous amount of free content on the Web.
    2. Pirates will always find ways to share copyrighted content, i.e. get content for free.
    3. There’s an exponential growth in the amount of free, user-generated content.
    4. Net Neutrality (or the lack of a two-tier Internet) will only help ensure the continuance of this trend.
    5. Content is becoming so commoditized that it costs only the monthly ISP fee to access.

    Conclusions (or Hypotheses)

    The next value paradigm in the content business is going to be about embedding “intelligent findability” into the content layer, by using a semantic CMS (like Semantic MediaWiki, which enables domain experts to build informal ontologies [or semantic annotations] on top of the information) and by adding inferencing capabilities to existing search engines. I know this represents less than the full vision for Web 3.0 as I’ve outlined it in the Wikipedia 3.0 and Web 3.0 articles, but it’s a quantum leap above and beyond the level of intelligence that exists today within the content layer. Also, a semantic CMS can be part of P2P Semantic Web Inference Engine applications that would push central search models like Google’s a step closer to being a “utility” like transport, unless Google builds its own AI, which would then have to compete with the people’s P2P version (see: P2P 3.0: The People’s Google and Get Your DBin.)

    In other words, “intelligent findability,” not content in itself, will be King in Web 3.0.

    Related

    1. Towards Intelligent Findability
    2. Wikipedia 3.0: The End of Google?
    3. Web 3.0: Basic Concepts
    4. P2P 3.0: The People’s Google
    5. Why Net Neutrality is Good for Web 3.0
    6. Semantic MediaWiki
    7. Get Your DBin

    Posted by Marc Fawzi


    Evolving Trends

    July 17, 2006

    Web 3.0 Blog Application

    (this post was reblogged at 7:31am EST, Jan 6, ‘07)

    Background

    As concluded in my previous post, there’s exponential growth in the amount of user-generated content (videos, blogs, photos, P2P content, etc).

    The enormous amount of free content available today is just too much for the current “dumb search” technology that is used to access it.

    I believe that content is now a commodity and the next layer of value is all about “Intelligent Findability.”

    Take my blog, for example: it’s less than 60 days old, and I’ve never blogged before, but as of today it already has ~500 daily RSS subscribers (and growing), with a noticeable increase after the iPod post I made 3 days ago, 6,281 incoming links (according to MSN), and ~70,000 page views in total so far (mostly due to the Wikipedia 3.0 post, which according to Alexa.com reached an estimated ~2M people.) That demonstrates the potential of blogs to generate and spread lots of content.

    So there is a lot of blog-generated content (if you consider how many bloggers are out there) and that doesn’t even include the hundreds of thousands (or millions?) of videos and photos uploaded daily to YouTube, Google Video, Flickr and all those other video and photo sharing sites. It also doesn’t include the 30% of total Internet bandwidth being sucked up by BitTorrent clients.

    There’s just too much content and no seriously effective way to find what you need. Google is our only hope for now but Google is rudimentary compared to the vision of Semantic-Web Info Agents expressed in the Wikipedia 3.0 and Web 3.0 articles.

    Idea

    We’d like to embed “Intelligent Findability” into a blogging application so that others will be able to get the most out of the information, ideas and analyses we generate.

    If you do a search right now for “cool consumer idea” you will not get the iPod post. Instead you will get this post, but that is only because I’m specifically making the association between “cool consumer idea” and “iPod” in this post.

    Google tries to get around the debilitating limitation of keyword-based search engine technology in the same way, by letting people associate phrases or words with a given link. If enough people linked to the iPod post and put the words “cool consumer idea” in the link, then when searching Google for “cool consumer idea” you would see the iPod post. However, unless people band together and decide to call it a “cool consumer idea” it won’t show up in the search results. You would have to enter something like “portable music application” (which is actually one of the search terms that showed up on my WordPress dashboard today.)

    Using Semantic MediaWiki (which allows domain experts to embed semantic annotations into the information), I could insert annotations that semantically link the concepts on this blog, building an ontology that defines the semantic relationships between terms in the information (i.e. meaning). “iPod” would be semantically related to “product,” which would be semantically related to “consumer electronics”; the phrase “Portable Music Studio” would be semantically related (through annotations) to “vision,” “idea,” “concept,” “entertainment,” “music,” “consumer electronics,” “mp3 player” and so on; and “iPod” would also be semantically related to “cool” (as in, what is “cool”?) Thus, using rules of inference for my domain of knowledge, I should be able to deliver an intelligent search capability that deductively reasons out the best match to a search query, based on matching the meanings (represented as semantic graphs) deduced from the user’s query and from the information.
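
    As a rough sketch of what those annotations buy you, here is how inline annotations (written in the [[property::value]] style that Semantic MediaWiki uses) could be pulled out of a post into a small semantic graph; the snippet of post text, the property names and the fixed “iPod” subject are all illustrative assumptions.

        import re

        # Sketch: turn inline [[property::value]] annotations into triples.
        # The subject is fixed to the post's topic ("iPod") for simplicity.
        post = """The iPod [[is a::mp3 player]] and [[is a::consumer electronics]] product.
        It could become a [[related to::Portable Music Studio]], a [[related to::cool]]
        [[related to::idea]] for [[related to::entertainment]]."""

        graph = [("iPod", prop, value)
                 for prop, value in re.findall(r"\[\[(.+?)::(.+?)\]\]", post)]

        for triple in graph:
            print(triple)
        # ('iPod', 'is a', 'mp3 player'), ('iPod', 'related to', 'cool'), ...

    A search for “cool consumer idea” can then be answered by walking these relationships (“cool” and “idea” both point at the iPod) instead of requiring the literal keywords to appear together in one sentence.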

    The quality of the deductive capability would depend on the consistency and completeness of the semantic annotations and of the pan-domain or EvolvingTrends-domain ontology that I would build, among other factors. But generally speaking, since the ontology and the semantic annotations would be built by me, if we think alike (or have a fairly similar semantic model of the world) then you will not only be able to read my blog, you will be able to read my mind. The idea is that, with my help in supplying the semantic annotations, such a system will be able to deduce a possible meaning (as a graph of semantic relationships) out of each sentence in the post and respond to search queries by reasoning about meaning rather than matching keywords.

    This is possible with Semantic MediaWiki (which is under development). However, in this particular instance, I don’t want a Semantic Wiki. I want a Semantic Blog. But that should be just a simple step away.

    Related

    1. Wikipedia 3.0: The End of Google?
    2. Towards Intelligent Findability
    3. Web 3.0: Basic Concepts
    4. Intelligence (Not Content) is King in Web 3.0
    5. Semantic MediaWiki
    6. Open Source Your Mind
    7. iPod as a Portable Music Studio

    Posted by Marc Fawzi


    Evolving Trends

    July 12, 2006

    Wikipedia 3.0: El fin de Google (traducción)


    Translation kindly provided by Eric Rodriguez

    /*

    Developers: Here is the new open source Semantic MediaWiki project.

    Bloggers: This post tells the curious story of how this article reached 33,000 readers in just the first 24 hours after publication, via digg. This post explains what the problem is with digg and Web 2.0 and how to fix it.

    Related:

    1. All About Web 3.0
    2. P2P 3.0: The People’s Google
    3. Google Dont Like Web 3.0 [sic]
    4. For Great Justice, Take Off Every Digg
    5. Reality as a Service (RaaS): The Case for GWorld
    6. From Mediocre to Visionary

    */

    by Marc Fawzi of Evolving Trends

    Spanish version (by Eric Rodriguez of Toxicafunk)

    The Semantic Web (or Web 3.0) promises to “organize the world’s information” in a dramatically more logical way than Google could ever achieve with its current engine design. This is true from the standpoint of machine comprehension as opposed to human comprehension. The Semantic Web requires the use of a declarative ontological language, such as OWL, to produce domain-specific ontologies that machines can use to reason about information and thereby reach new conclusions, rather than simply searching for / matching keywords.

    However, the Semantic Web, which is still in a development stage in which researchers are trying to define which model is best and which has the greatest usability, would require the participation of thousands of experts in different fields for an indefinite period of time in order to produce the domain-specific ontologies necessary for it to work.

    Machines (or rather machine-based reasoning, also known as AI software or “info agents”) could then use those laboriously (though not entirely manually) constructed ontologies to build a view (or formal model) of how the individual terms in a given body of information relate to one another. Those relationships can be thought of as axioms (basic premises), which, together with the rules governing the inference process, both enable and constrain the interpretation (and well-formed use) of those terms by the info agents, allowing them to reason out new conclusions based on existing information, i.e. to think. In other words, software could be used to generate theorems (formal propositions provable from the axioms and the rules of inference), thus allowing formal deductive reasoning at the machine level. And given that an ontology, as described here, is a statement of logic theory, two or more info agents processing the same domain-specific ontology will be able to collaborate and deduce the answer to a query, without being driven by the same software.

    Thus, and as has been stated, on the Semantic Web machine-based agents (or a collaborating group of agents) will be able to understand and use information by translating concepts and deducing new information rather than merely matching keywords.

    Once machines can understand and use information, using a standard ontology language, the world will never be the same. It will be possible to have an info agent (or several) among your AI-enhanced virtual “workforce”, each with access to different domain-specific spaces of comprehension and all communicating with one another to form a collective consciousness.

    You will be able to ask your info agent(s) to find you the nearest restaurant serving Italian cuisine, even if the restaurant nearest you advertises itself as a pizza place rather than as an Italian restaurant. But that is only a very simple example of the deductive reasoning machines will be able to perform on existing information.

    Far more astonishing implications will be seen when you consider that every area of human knowledge will automatically be within the space of comprehension of your info agents. That is because each agent can communicate with other info agents specialized in different domains of knowledge to produce a collective consciousness (to use the Borg metaphor) that spans all of human knowledge. The collective “mind” of those agents-as-Borg will form the Ultimate Answer Machine, easily displacing Google from that position, which it does not fully occupy anyway.

    The problem with the Semantic Web, aside from the fact that researchers are still debating which ontology language design and implementation (and which associated technologies) is best and most usable, is that it would take thousands, or even thousands of thousands, of knowledgeable people many years to reduce human knowledge to domain-specific ontologies.

    However, if at some point we were to take the Wikipedia community and give it the proper tools and standards to work with (whether existing ones or ones yet to be developed), so that reasonably capable individuals could reduce human knowledge to domain-specific ontologies, then the time needed to do so would shrink to just a few years, or possibly two.

    The emergence of a Wikipedia 3.0 (in reference to Web 3.0, the name given to the Semantic Web) based on the Semantic Web model would herald the end of Google as the Ultimate Answer Machine. It would be replaced by “WikiMind”, which would not be a mere search engine like Google but a true Global Brain: a powerful domain inference engine, with a vast set of ontologies (à la Wikipedia 3.0) covering all domains of human knowledge, capable of reasoning out and deducing answers instead of simply throwing raw information at you through the outdated search-engine concept.

    Notes
    After writing the original post I discovered that the Wikipedia application, also known as MediaWiki (not to be confused with Wikipedia.org), has already been used to implement ontologies. The name they have chosen is Ontoworld. I think WikiMind or WikiBorg would have been a catchier name, but I like Ontoworld too, as in “and it descended onto the world,” (1) since it can be taken as a reference to the global mind that a Semantic-Web-enabled Ontoworld would give rise to.

    In just a few years the search engine technology that provides Google with nearly all of its revenue/capital would become obsolete… unless Google had an agreement with Ontoworld allowing it to connect to Ontoworld’s database of ontologies, thereby adding inference-engine capability to Google search.

    But the same is true for Ask.com, MSN and Yahoo.

    I would love to see more competition in this space, rather than seeing Google or any other company establish itself as the leader over the others.

    The question, in Churchillian terms, is whether the combination of Wikipedia with the Semantic Web means the beginning of the end for Google or the end of the beginning. Obviously, with many billions of dollars of investors’ money at stake, I would say it is the latter. However, I would love to see someone overtake them (which I believe is possible).

    (1) Translator’s note: the author is playing on the prefix “Onto-” in “ontology,” which sounds like the English word “unto.” The original phrase is “and it descended onto the world.”

    Clarification
    Please note that Ontoworld, which currently implements the ontologies, is based on the “Wikipedia” application (also known as MediaWiki), which is not the same thing as Wikipedia.org.

    Likewise, I hope that Wikipedia.org will use its volunteer workforce to reduce the sum of human knowledge that has been entered into its database to domain-specific ontologies for the Semantic Web (Web 3.0), and hence “Wikipedia 3.0”.

    Response to Readers’ Comments
    My argument is that Wikipedia already has the volunteer resources to produce the ontologies for each of the domains of knowledge it currently covers, which the Semantic Web so badly needs, whereas Google does not have those resources, so it would end up depending on Wikipedia.

    Those ontologies, together with all the information on the Web, will be accessible to Google and the others, but it will be Wikipedia that is in charge of those ontologies, because Wikipedia already covers an enormous number of knowledge domains, and that is where I see the shift in power.

    Neither Google nor the other companies have the human resources (the thousands of volunteers Wikipedia has) needed to create the ontologies for all the knowledge domains Wikipedia already covers. Wikipedia does have those resources, and it is positioned to do the job better and more effectively than anyone else. It is hard to see how Google would manage to create those ontologies (which are constantly growing in both number and size) given the amount of work required. Wikipedia, in contrast, can move much faster thanks to its massive and dedicated force of expert volunteers.

    I believe the competitive advantage will go to whoever controls the creation of ontologies for the largest number of knowledge domains (i.e. Wikipedia), not to whoever simply accesses them (i.e. Google).

    There are many knowledge domains that Wikipedia does not yet cover. There Google would have an opportunity, but only if the people and organizations producing the information also built their own ontologies, so that Google could access them through its future Semantic Web engine. My opinion is that this will happen in the future, but little by little, and that Wikipedia can have the ontologies ready for all the domains of knowledge it already covers much sooner, with the enormous added advantage of being in charge of those ontologies (the basic layer for enabling AI).

    It is still not clear, of course, whether the combination of Wikipedia with the Semantic Web heralds the end of Google or the end of the beginning. As I said in the original article, I believe it is the latter, and that the question posed in this post’s title is, in the present context, merely rhetorical. However, I could be wrong in my judgment, and Google may give way to Wikipedia as the world’s ultimate answer machine.

    After all, Wikipedia has “us”. Google doesn’t. Wikipedia derives its power from “us”. Google derives its power from its technology and its inflated market price. Whom would you count on to change the world?

    Response to Readers’ Basic Questions
    The reader divotdave asked a few questions that seem basic (i.e. important) in nature. I believe more people will be wondering about the same things, so I’m including them here with their respective answers.

    Question:
    How do you distinguish between good information and bad information? How do you determine which parts of human knowledge to accept and which to reject?

    Answer:
    There is no need to distinguish between good and bad information (not to be confused with well-formed vs. malformed) if a trusted source of information (with trusted ontologies attached) is used. That is, if the information or knowledge being sought can be derived from Wikipedia 3.0, then the information is assumed to be trustworthy.

    However, when it comes to connecting the dots when returning information or deducing answers from the vast sea of information that lies beyond Wikipedia, the question becomes very relevant: how would you distinguish good information from bad information so as to produce good knowledge (i.e. comprehended information, or new information produced through deductive reasoning based on existing information)?

    Question:
    Who, or what as the case may be, determines what information is irrelevant to me as the end user?

    Answer:
    That is a good question, and one that must be answered by the researchers working on the AI engines for Web 3.0.

    Certain assumptions will have to be made about what is being asked. Just as I had to assume certain things about what you were really asking when I read your question, the AI engines will have to do the same, based on a cognitive process very similar to ours, which is a subject for another post, but one that has been studied by many AI researchers.

    Question:
    Does this ultimately mean that an all-powerful standard will emerge to which all of humanity will have to adhere (for lack of alternative information)?

    Answer:
    There is no need for a standard, except with respect to the language in which the ontologies will be written (i.e. OWL, OWL-DL, OWL Full, etc.). Semantic Web researchers are trying to determine the best and most usable option, taking into account both human and machine performance in building and (in the latter case only) interpreting those ontologies.

    Two or more info agents working with the same domain-specific ontology but with different software (different AI engines) can collaborate with each other. The only standard needed is the ontology language and the associated production tools.

    Addendum

    On AI and Natural Language Processing

    I believe that the first generation of AI that will be used by Web 3.0 (aka the Semantic Web) will be based on relatively simple inference engines (employing both algorithmic and heuristic approaches) that will not attempt any kind of natural language processing. However, they will retain the formal deductive reasoning capabilities described in this article.

    On the Debate about the Nature and Definition of AI

    The introduction of AI into cyberspace will happen first through inference engines (using algorithms and heuristics) that collaborate in P2P fashion and use standardized ontologies. The parallel interaction of hundreds of millions of AI Agents running inside P2P AI engines on users’ PCs will give rise to the complex behavior of the future global brain.

    2 Comments »

    1. […] Here is a direct excerpt from the translation of the original article. (I lost a lot of time trying to understand it, can you tell?) by Marc Fawzi of Evolving Trends […] Pingback by DxZone 2.0 (beta) – DxBlog » Blog Archive » Web 3.0? — August 7, 2006 @ 9:03 pm
    2. Very interesting. I think the Wikipedia article on Web 2.0 complements this work very well:

      One could well speak of Web 3.0 for the Semantic Web. But a fundamental difference between the two versions of the web (2.0 and 3.0) is the type of participant. Web 2.0’s main protagonist is the human user who writes articles on a blog or collaborates on a wiki. The requirement is that, besides publishing in HTML, part of the contributions be emitted in XML/RDF (RSS, ATOM, etc.). Web 3.0, however, is oriented toward the leading role of machine processors that understand description logic in OWL. Web 3.0 is conceived so that machines do the work of people when it comes to processing the avalanche of information published on the Web.

      The key is right here at the end: Web 3.0 will be led by intelligent robots and ubiquitous devices. O’Reilly has already said something about this.

      Of course I agree with the author, the semantic Wikipedia will be the bomb, but I’m afraid it will be a subset of the social or folksonomic one, because semantics has its limitations. I should explain this in an article somewhere. Maybe I’ll do it on the pages of our Wikiesfera project, since for that a wiki is sexier than a blog. 😉

      Thanks for the translation.

      Comment by Joseba — November 30, 2006 @ 1:19 am


    Evolving Trends

    July 29, 2006

    Search By Meaning

    I’ve been working on a pretty detailed technical scheme for a “search by meaning” search engine (as opposed to [dumb] Google-like search by keyword), and I have to say that, in conquering the workability challenge within my limited scope, I can see the huge problem facing Google and other Web search engines in transitioning to a “search by meaning” model.

    However, I also do see the solution!
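
    Not to give away the scheme, but a bare-bones way to see the difference between matching keywords and matching meaning is to expand the query’s terms through ontology relations and score documents by concept overlap rather than by keyword frequency. The ontology, the documents and the scoring below are all invented for illustration.

        # Sketch of "search by meaning": expand the query through ontology
        # relations, then score documents by overlap with the expanded concepts.

        ontology = {
            "italian cuisine": {"pizza", "pasta", "risotto"},
            "restaurant": {"trattoria", "pizzeria"},
        }

        documents = {
            "doc1": "best pizzeria in town, wood-fired pizza every night",
            "doc2": "keyword stuffing: italian italian italian cuisine cuisine",
        }

        def expand(terms):
            expanded = set(terms)
            for term in terms:
                expanded |= ontology.get(term, set())
            return expanded

        def search(query):
            concepts = expand(query.lower().split(" and "))
            scores = {doc: sum(1 for c in concepts if c in text)
                      for doc, text in documents.items()}
            return max(scores, key=scores.get)

        print(search("italian cuisine and restaurant"))
        # doc1 wins on meaning, even though doc2 repeats the literal keywords.

    A real engine would match graphs of deduced meaning rather than flat concept sets, but even this crude expansion changes which document wins.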

    Related

    1. Wikipedia 3.0: The End of Google?
    2. P2P 3.0: The People’s Google
    3. Intelligence (Not Content) is King in Web 3.0
    4. Web 3.0 Blog Application
    5. Towards Intelligent Findability
    6. All About Web 3.0


    Posted by Marc Fawzi

    Tags:

    Semantic Web, Web standards, Trends, OWL, innovation, Startup, Evolution, Google, GData, inference, inference engine, AI, ontology, Semanticweb, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, collective consciousness, Ontoworld, Wikipedia AI, Info Agent, Semantic MediaWiki, DBin, P2P 3.0, P2P AI, AI Matrix, P2P Semantic Web Inference Engine, Global Brain, semantic blog, intelligent findability, search by meaning

    5 Comments »

    1. context is a kind of meaning, innit?

      Comment by qxcvz — July 30, 2006 @ 3:24 am

    2. You’re one piece short of Lego Land.

      I have to make the trek down to San Diego and see what it’s all about.

      How do you like that for context!? 🙂

      Yesterday I got burnt real bad at Crane beach in Ipswich (not to be confused with Cisco’s IP Switch.) The water was freezing. Anyway, on the way there I was told about the one time when the kids (my nieces) asked their dad (who is a Cisco engineer) why Ipswich is called Ipswich. He said he didn’t know. They said “just make up a reason!!!!!!” (since they can’t take “I don’t know” for an answer). So he said they initially wanted to call it PI (pie) but decided to switch the letters so it became IPSWICH. The kids loved that answer and kept asking him whenever they had their friends on a beach trip to explain why Ipswich is called Ipswich. I don’t get the humor. My logic circuits are not that sensitive. Somehow they see the illogic of it and they think it’s hilarious.

      Engineers and scientists tend to approach the problem through the most complex path possible because that’s dictated by the context of their thinking, but genetic algorithms could do a better job at that, yet that’s absolutely not what I’m hinting is the answer.

      The answer is a lot more simple (but the way simple answers are derived is often thru deep thought that abstracts/hides all the complexity)

      I’ll stop one piece short cuz that will get people to take a shot at it and thereby create more discussion around the subject, in general, which will inevitably get more people to coalesce around the Web 3.0 idea.

      [badly sun burnt face] + ] … It’s time to experiment with a digi cam … i.e. towards a photo + audio + web 3.0 blog!

      An 8-mega pixel camera phone will do just fine! (see my post on tagging people in the real world.. it is another very simple idea but I like this one much much better.)

      Marc

      p.s. my neurons are still in perfectly good order but I can’t wear my socks!!!

      Comment by evolvingtrends — July 30, 2006 @ 10:19 am

    3. Hey there, Marc.
      Have talked to people about semantic web a bit more now, and will get my thoughts together on the subject before too long. The big issue, basically, is buy-in from the gazillions of content producers we have now. My impression is the big business will lead on semantic web, because it’s more useful to them right now, rather than you or I as ‘opinion/journalist’ types.

      Comment by Ian — August 7, 2006 @ 5:06 pm

    4. Luckily, I’m not an opinion journalist although I could easily pass for one.

      You’ll see a lot of ‘doing’ from us now that we’re talking less 🙂

      BTW, just started as Chief Architect with a VC-funded Silicon Valley startup so that’s keeping me busy, but I’m recruiting developers and orchestrating a P2P 3.0 / Web 3.0 / Semantic Web (AI-enabled) open source project consistent with the vision we’ve outlined.

      :] … dzzt.

      Marc

      Comment by evolvingtrends — August 7, 2006 @ 5:10 pm

    5. Congratulations on the job, Marc. I know you’re a big thinker and I’m delighted to hear about that.

      Hope we’ll still be able to do a little “fencing” around this subject!

      Comment by Ian — August 7, 2006 @ 7:01 pm
