
Posts Tagged ‘evolution’

Evolving Trends

    June 30, 2006

    Web 3.0: Basic Concepts

    /*(this post was last updated at 1:20pm EST, July 19, ‘06)

    You may also wish to see Wikipedia 3.0: The End of Google? (The original ‘Web 3.0/Semantic Web’ article) and P2P 3.0: The People’s Google (a more extensive version of this article showing the implication of P2P Semantic Web Engines to Google.)

    Web 3.0 Developers:

    Feb 5, ‘07: The following reference should provide some context regarding the use of rule-based inference engines and ontologies in implementing the Semantic Web + AI vision (aka Web 3.0) but there are better, simpler ways of doing it. 

    1. Description Logic Programs: Combining Logic Programs with Description Logic

    */

    Basic Web 3.0 Concepts

    Knowledge domains

    A knowledge domain is something like Physics, Chemistry, Biology, Politics, the Web, Sociology, Psychology, History, etc. Each domain can contain many sub-domains, each with its own sub-domains, and so on.

    Information vs Knowledge

    To a machine, knowledge is comprehended information (aka new information produced through the application of deductive reasoning to existing information). To a machine, information is only data until it is processed and comprehended.

    Ontologies

    For each domain of human knowledge, an ontology must be constructed, partly by hand [or rather by brain] and partly with the aid of automation tools.

    Ontologies are neither knowledge nor information. They are meta-information: information about information. In the context of the Semantic Web, they encode, using an ontology language, the relationships between the various terms within the information. Those relationships, which may be thought of as the axioms (basic assumptions), together with the rules governing the inference process, both enable and constrain the interpretation (and well-formed use) of those terms by the Info Agents, allowing them to reason new conclusions from existing information, i.e. to think. In other words, theorems (formal deductive propositions that are provable from the axioms and the rules of inference) may be generated by the software, thus allowing formal deductive reasoning at the machine level. And given that an ontology, as described here, is a statement of Logic Theory, two or more independent Info Agents processing the same domain-specific ontology will be able to collaborate and deduce an answer to a query without being driven by the same software.
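    To make this concrete, here is a minimal sketch (in Python, with a toy food-service domain and invented relation names, not any real ontology language) of how domain axioms plus simple inference rules can generate theorems, i.e. statements that were never asserted explicitly:

```python
# Toy ontology: axioms as (subject, relation, object) triples, plus two inference
# rules (transitivity of "is_a" and inheritance of "serves" along "is_a").
axioms = {
    ("PizzaRestaurant", "is_a", "ItalianRestaurant"),
    ("ItalianRestaurant", "is_a", "Restaurant"),
    ("ItalianRestaurant", "serves", "ItalianCuisine"),
}

def infer(facts):
    """Forward-chain the two rules until no new statements appear."""
    facts = set(facts)
    while True:
        new = set()
        for (a, r1, b) in facts:
            for (c, r2, d) in facts:
                if b == c and r1 == "is_a" and r2 == "is_a":
                    new.add((a, "is_a", d))      # is_a is transitive
                if b == c and r1 == "is_a" and r2 == "serves":
                    new.add((a, "serves", d))    # subclasses inherit "serves"
        if new <= facts:
            return facts
        facts |= new

theorems = infer(axioms) - axioms
print(theorems)  # includes ("PizzaRestaurant", "serves", "ItalianCuisine"), deduced rather than stated
```

    Any two agents that load the same axioms and rules derive the same theorems, which is the sense in which independently written Info Agents can still agree on an answer.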

    Inference Engines

    In the context of Web 3.0, inference engines will combine the latest innovations from the artificial intelligence (AI) field with domain-specific ontologies (created as formal or informal ontologies by, say, Wikipedia as well as others), domain inference rules, and query structures to enable deductive reasoning at the machine level.

    Info Agents

    Info Agents are instances of an Inference Engine, each working with a domain-specific ontology. Two or more agents working with a shared ontology may collaborate to deduce answers to questions, even if they are built on differently designed Inference Engines.

    Proofs and Answers

    The interesting thing about Info Agents that I did not clarify in the original post is that they will be capable not only of deducing answers from existing information (i.e. generating new information [and gaining knowledge in the process, for those agents with a learning function]) but also of formally testing propositions (represented in some query logic) that the user states directly or implies. For example, instead of the example I gave previously (in the Wikipedia 3.0 article), where the user asks “Where is the nearest restaurant that serves Italian cuisine?” and the machine deduces that a pizza restaurant serves Italian cuisine, the user may ask “Is the moon blue?” or state that “the moon is blue” to get a true or false answer from the machine. In this case, a simple Info Agent may answer with “No,” but a more sophisticated one may say “the moon is not blue, but some humans are fond of saying ‘once in a blue moon,’ which seems illogical to me.”

    This test-of-truth feature assumes the use of an ontology language (as a formal logic system) and an ontology where all propositions (or formal statements) that can be made can be computed (i.e. proved true or false) and where all such computations are decidable in finite time. The language may be OWL-DL or any language that, together with the ontology in question, satisfies the completeness and decidability conditions.
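    As a toy illustration of the test-of-truth idea (this is not OWL-DL, just a closed, finite fact base over which every query trivially terminates with a true/false answer; the facts and predicate names are invented):

```python
# Toy proposition tester over a closed, finite fact base: every query terminates,
# loosely mirroring the decidability condition described above.
facts = {
    ("moon", "has_color", "grey"),
    ("sky", "has_color", "blue"),
}

def holds(subject, predicate, value):
    """A proposition is true here iff it is stated in (or derivable from) the fact base."""
    return (subject, predicate, value) in facts

print(holds("moon", "has_color", "blue"))  # False -> the simple agent answers "No"
print(holds("sky", "has_color", "blue"))   # True
```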

    “The Future Has Arrived But It’s Not Evenly Distributed”

    Currently, Semantic Web (aka Web 3.0) researchers are working out the technology and human resource issues, and folks like Tim Berners-Lee, the father of the Web, are battling critics and enlightening minds about the coming human-machine revolution.

    The Semantic Web (aka Web 3.0) has already arrived, and Inference Engines are working with prototypical ontologies. But the effort is a massive one, which is why I have been suggesting that its most likely enabler will be a social, collaborative movement such as Wikipedia, which has the human resources (in the form of thousands of knowledgeable volunteers) to help create the ontologies (most likely as informal ontologies based on semantic annotations) that, when combined with inference rules for each domain of knowledge and the query structures for the particular schema, enable deductive reasoning at the machine level.

    Addendum

    On AI and Natural Language Processing

    I believe that the first generation of artificial intelligence (AI) used by Web 3.0 (aka the Semantic Web) will be based on relatively simple inference engines (employing both algorithmic and heuristic approaches) that will not attempt to perform natural language processing. However, they will still have the formal deductive reasoning capabilities described earlier in this article.

    Related

    1. Wikipedia 3.0: The End of Google?
    2. P2P 3.0: The People’s Google
    3. All About Web 3.0
    4. Semantic MediaWiki
    5. Get Your DBin

    Posted by Marc Fawzi



    From Logic to Ontology: The limit of “The Semantic Web”

     

     

    (Some posts on this blog are written in both English and Spanish.)

    http://www.linkedin.com/answers/technology/web-development/TCH_WDD/165684-18926951 

    From Logic to Ontology: The limit of “The Semantic Web” 

     http://en.wikipedia.org/wiki/Undecidable_problem#Other_problems

    If you read the following posts linked on this blog:

    Semantic Web

    The Semantic Web

    What is the Semantic Web, Actually?

    The Metaweb: Beyond Weblogs. From the Metaweb to the Semantic Web: A Roadmap

    Semantics to the people! ontoworld

    What’s next for the Internet

    Web 3.0: Update

    How the Wikipedia 3.0: The End of Google? article reached 2 million people in 4 days!

    Google vs Web 3.0

    Google dont like Web 3.0 [sic] Why am I not surprised?

    Designing a better Web 3.0 search engine

    From semantic Web (3.0) to the WebOS (4.0)

    Search By Meaning

    A Web That Thinks Like You

    MINDING THE PLANET: THE MEANING AND FUTURE OF THE SEMANTIC WEB

    The long-promised “semantic” web is starting to take shape

    Start-Up Aims for Database to Automate Web Searching

    Metaweb: a semantic wiki startup

    http://www.freebase.com/

    The Semantic Web, Collective Intelligence and Hyperdata.

    Informal logic 

    Logical argument

    Consistency proof 

    Consistency proof and completeness: Gödel’s incompleteness theorems

    Computability theory (computer science): The halting problem

    Gödel’s incompleteness theorems: Relationship with computability

    Non-formal or Inconsistency Logic: LACAN’s LOGIC and Gödel’s incompleteness theorems

    You will notice the internal relationships among them; the connecting thread is the title of this post: from logic to ontology.

    I am now writing an article about the existence of the Semantic Web. I will argue that it does not exist at all, that it cannot be built from machines such as computers, and that this does not depend on the software or hardware used to build it.

    More precisely, the limit of the Semantic Web is set not by the machines themselves (biological systems could equally be used to pursue the goal), but by the fact that the logic being used to construct it does not contemplate time: it is purely formal, metonymic logic that lacks metaphor. That is what Gödel’s theorems point to, the final tautology of every metonymic (mathematical) construction or language, which leads to inconsistencies.

    This consistent logic is the complete opposite of the inconsistent logic that makes use of time, which is inherent to the human unconscious; but the use of time is built on lack rather than on positive things, on denials and absences, and that is impossible to reflect in a machine, because perceiving lack requires a self-awareness that is acquired through absence.

    The problem is that we are trying to build an intelligent system to replace our way of thinking, at least in information search; but what is special about the human mind is its use of time, which is what lets a human being reach a conclusion. That is why the halting problem, the stopping of calculation, does not exist in the human mind.

    So all efforts directed toward the Semantic Web are doomed to failure a priori if the aim is to extend our human way of thinking into machines: machines lack metaphorical speech, because they are only a mathematical construction, which will always be tautological and metonymic, and they lack the use of time, which is what leads to a conclusion or “stop.”

    As a demonstration, suppose it were possible to construct the Semantic Web as a language with capabilities similar to human language, which has the use of time. If we treat that as a theorem, we can refute it with a counterexample, and one is given by the particular case of the Turing machine and the halting problem.

    Since the necessary and sufficient condition of the theorem is not fulfilled, we are left with the necessary condition: if a language uses time, it lacks formal logic; the logic it uses is inconsistent and therefore has no halting problem.

    That is a necessary condition for the Semantic Web, but it is not sufficient, and therefore no machine, whether a Turing machine, a computer, or a device as random as a black body in physics, can handle any language other than the language of mathematics, which implies that such a language is bound to run into the halting problem, a consequence of Gödel’s theorem.

    From Logic to Ontology: The Limit of the “Semantic Web” (Spanish version)

    If you read the following articles on this blog:

    http://es.wikipedia.org/wiki/Web_sem%C3%A1ntica

    Wikipedia 3.0: El fin de Google (Spanish translation)

    Logic (Spanish)

    Consistent logic and completeness: Gödel’s incompleteness theorems (Spanish)

    Logical consistency (Spanish)

    Computability theory (computer science) (Spanish)

    Gödel’s incompleteness theorems and computability theory: the halting problem (Spanish)

    Inconsistent logic and incompleteness: LACANIAN LOGICS and Gödel’s incompleteness theorems (Spanish)

    Jacques Lacan (Encyclopædia Britannica Online)

    You will notice the internal relationships among them, and the connecting thread is the title of this very post: “from logic to ontology.”

    I will prove that no such construction exists at all, that it cannot be built from machines, and that this does not depend on the hardware or software used.

    To put it more precisely, the limit of the Semantic Web is set not by the machines and/or biological systems that might be used, but by the fact that the logic with which it is being built lacks the use of time, since formal logic is purely metonymic and lacks metaphor; that is what Gödel’s theorems mark, the final tautology of every metonymic (mathematical) construction and/or language, which leads to contradictions.

    This consistent logic is the opposite of the inconsistent logic that makes use of time, which belongs to the human unconscious; but the use of time is built on lack, not on the positive but on negations and absences, and that is impossible to reflect in a machine, because the perception of lack requires the self-awareness that is acquired through absence.

    The problem is that we intend to build an intelligent system to replace our thinking, at least in information search; but the peculiarity of human thought is the use of time, which is what allows us to conclude. That is why the halting problem, the stopping of calculation (in other words, the absence of a moment of concluding), does not exist in the human mind.

    So all efforts directed at the Semantic Web are doomed to failure a priori if what is intended is to extend our human thinking into machines: they lack metaphorical discourse, since they are only a mathematical construction, which will always be tautological and metonymic, and they also lack the use of time, which is what leads to the cut, the conclusion, or the “halt.”

    As a demonstration, a counterexample suffices: if we suppose that it is possible to build the Semantic Web as a language with capabilities similar to human language, which has the use of time, then if that is a general theorem, a single counterexample brings it down, and the counterexample is given by the particular case of the Turing machine and the “halting problem.”

    Since the necessary and sufficient condition of the theorem is not fulfilled, we are left with the necessary condition: if a language has the use of time, it lacks formal logic, it uses inconsistent logic, and therefore it has no halting problem. That is a necessary condition for the Semantic Web, but not a sufficient one, and therefore no machine, whether a Turing machine, a computer, or a device as random as a black body in physics, can attain the use of any language other than the mathematical one, with the halting paradox, a consequence of Gödel’s theorem.

    Jacques Lacan (Encyclopædia Britannica Online)

    Read Full Post »

    Evolving Trends

    July 29, 2006

    Search By Meaning

    I’ve been working on a pretty detailed technical scheme for a “search by meaning” search engine (as opposed to [dumb] Google-like search by keyword), and I have to say that, in conquering the workability challenge within my limited scope, I can see the huge problem facing Google and other Web search engines in transitioning to a “search by meaning” model.

    However, I also do see the solution!

    Related

    1. Wikipedia 3.0: The End of Google?
    2. P2P 3.0: The People’s Google
    3. Intelligence (Not Content) is King in Web 3.0
    4. Web 3.0 Blog Application
    5. Towards Intelligent Findability
    6. All About Web 3.0

    Beats

    42. Grey Cell Green

    Posted by Marc Fawzi

    Tags:

    Semantic Web, Web standards, Trends, OWL, innovation, Startup, Evolution, Google, GData, inference, inference engine, AI, ontology, Semanticweb, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, collective consciousness, Ontoworld, Wikipedia AI, Info Agent, Semantic MediaWiki, DBin, P2P 3.0, P2P AI, AI Matrix, P2P Semantic Web inference Engine, Global Brain, semantic blog, intelligent findability, search by meaning

    5 Comments »

    1. context is a kind of meaning, innit?

      Comment by qxcvz — July 30, 2006 @ 3:24 am

    2. You’re one piece short of Lego Land.

      I have to make the trek down to San Diego and see what it’s all about.

      How do you like that for context!? 🙂

      Yesterday I got burnt real bad at Crane beach in Ipswich (not to be confused with Cisco’s IP Switch). The water was freezing. Anyway, on the way there I was told about the one time when the kids (my nieces) asked their dad (who is a Cisco engineer) why Ipswich is called Ipswich. He said he didn’t know. They said “just make up a reason!!!!!!” (since they can’t take “I don’t know” for an answer). So he said they initially wanted to call it PI (pie) but decided to switch the letters so it became IPSWICH. The kids loved that answer and kept asking him to explain why Ipswich is called Ipswich whenever they had their friends along on a beach trip. I don’t get the humor. My logic circuits are not that sensitive. Somehow they see the illogic of it and they think it’s hilarious.

      Engineers and scientists tend to approach the problem through the most complex path possible because that’s dictated by the context of their thinking, but genetic algorithms could do a better job at that, yet that’s absolutely not what I’m hinting is the answer.

      The answer is a lot more simple (but the way simple answers are derived is often thru deep thought that abstracts/hides all the complexity)

      I’ll stop one piece short cuz that will get people to take a shot at it and thereby create more discussion around the subject, in general, which will inevitably get more people to coalesce around the Web 3.0 idea.

      [badly sun burnt face] + ] … It’s time to experiment with a digi cam … i.e. towards a photo + audio + web 3.0 blog!

      An 8-mega pixel camera phone will do just fine! (see my post on tagging people in the real world.. it is another very simple idea but I like this one much much better.)

      Marc

      p.s. my neurons are still in perfectly good order but I can’t wear my socks!!!

      Comment by evolvingtrends — July 30, 2006 @ 10:19 am

    3. Hey there, Marc.
      Have talked to people about the semantic web a bit more now, and will get my thoughts together on the subject before too long. The big issue, basically, is buy-in from the gazillions of content producers we have now. My impression is that big business will lead on the semantic web, because it’s more useful to them right now, rather than you or me as ‘opinion/journalist’ types.

      Comment by Ian — August 7, 2006 @ 5:06 pm

    4. Luckily, I’m not an opinion journalist although I could easily pass for one.

      You’ll see a lot of ‘doing’ from us now that we’re talking less 🙂

      BTW, just started as Chief Architect with a VC funded Silicon Valley startup, so that’s keeping me busy, but I’m recruiting developers and orchestrating a P2P 3.0 / Web 3.0 / Semantic Web (AI-enabled) open source project consistent with the vision we’ve outlined.

      :] … dzzt.

      Marc

      Comment by evolvingtrends — August 7, 2006 @ 5:10 pm

    5. Congratulations on the job, Marc. I know you’re a big thinker and I’m delighted to hear about that.

      Hope we’ll still be able to do a little “fencing” around this subject!

      Comment by Ian — August 7, 2006 @ 7:01 pm



Evolving Trends

    July 20, 2006

    Google dont like Web 3.0 [sic]

    (this post was last updated at 9:50am EST, July 24, ‘06)

    Why am I not surprised?

    Google exec challenges Berners-Lee

    The idea is that the Semantic Web will allow people to run AI-enabled P2P Search Engines that will collectively be more powerful than Google can ever be, which will relegate Google to just another source of information, especially as Wikipedia [not Google] is positioned to lead the creation of domain-specific ontologies, which are the foundation for machine-reasoning [about information] in the Semantic Web.

    Additionally, we could see content producers (including bloggers) creating informal ontologies on top of the information they produce, using a standard language like RDF. This would have the same effect as far as enabling P2P AI search engines and hastening Google’s anticipated slide into the commodity layer (unless, of course, Google develops something like GWorld).
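    As a rough sketch of what such producer-side, informal annotation might look like (the post URI, the “ex:” vocabulary, and the statements are invented for illustration; real RDF would use full URIs and a serialization such as RDF/XML or Turtle):

```python
# Hypothetical blogger-side annotations: RDF-style (subject, predicate, object)
# statements about a post, which P2P info agents could later reason over.
post_uri = "http://example.org/blog/wikipedia-3-0"  # placeholder URI

annotations = [
    (post_uri, "dc:title", "Wikipedia 3.0: The End of Google?"),
    (post_uri, "dc:subject", "Semantic Web"),
    (post_uri, "ex:discusses", "domain-specific ontologies"),
    (post_uri, "ex:claims", "Wikipedia can crowdsource ontology creation"),
]

for subject, predicate, obj in annotations:
    print(subject, predicate, obj, sep="  ")
```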

    In summary, any attempt to arrive at widely adopted Semantic Web standards would significantly lower the value of Google’s investment in the current, non-semantic Web by commoditizing “findability.” It would allow intelligent info agents to be built that collaborate with each other to find answers more effectively than the current version of Google (using “search by meaning” as opposed to “search by keyword”) and more cost-efficiently than any future AI-enabled version of Google (using disruptive P2P AI technology).

    For more information, see the articles below.

    Related

    1. Wikipedia 3.0: The End of Google?
    2. Wikipedia 3.0: El fin de Google (traducción)
    3. All About Web 3.0
    4. Web 3.0: Basic Concepts
    5. P2P 3.0: The People’s Google
    6. Intelligence (Not Content) is King in Web 3.0
    7. Web 3.0 Blog Application
    8. Towards Intelligent Findability
    9. Why Net Neutrality is Good for Web 3.0
    10. Semantic MediaWiki
    11. Get Your DBin

    Somewhat Related

    1. Unwisdom of Crowds
    2. Reality as a Service (RaaS): The Case for GWorld
    3. Google 2.71828: Analysis of Google’s Web 2.0 Strategy
    4. Is Google a Monopoly?
    5. Self-Aware e-Society

    Beats

    1. In the Hearts of the Wildmen

    Posted by Marc Fawzi


    Tags:

    Semantic Web, Web standards, Trends, OWL, innovation, Startup, Evolution, Google, GData, inference, inference engine, AI, ontology, Semanticweb, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, collective consciousness, Ontoworld, Wikipedia AI, Info Agent, Semantic MediaWiki, DBin, P2P 3.0, P2P AI, AI Matrix, P2P Semantic Web inference Engine, semantic blog, intelligent findability, RDF



    Evolving Trends

    June 9, 2006

    Google 2.71828: Analysis of Google’s Web 2.0 Strategy

    The title of this post should read Google 2.0, but in the tradition of Google’s college campus ads I decided to use 2.71828, which is the value of e (the base of the natural system of logarithms), in place of just plain 2.0.

    So what’s all the fuss about? Google throws a bucket of AJAX (not the Colgate-Palmolive variety) on some empty Web page and people go “they must be doing something genius!” and then on and on goes the blogosphere in its analysis of Google’s latest move.

    The Problem with Google’s Spreadsheets and Writely

    First of all, for those who are not acquainted with the decades-old concept of CVS (Concurrent Versions System), being able to have multiple users edit the same document and enjoy all the benefits of version control is nothing new. Programmers have been relying on such systems for several decades now, and Wikipedia’s multi-user document editing and version control capabilities are just one popular example of the CVS model being applied to Web documents.

    What I find unusual is the fact that Google’s Spreadsheets and Writely do not follow the established CVS model, in which a given document is protected from being edited at the same time by more than one user.

    In CVS-like applications, like Wikipedia, many users may edit a given document, but not all at the same time. Having multiple users edit a document, a program’s source file, or an Excel spreadsheet at the same time (where there are many logical and semantic inter-dependencies across the document) can result in conflicting changes that create a mess. Changing an input value or formula in one cell, or changing a global variable or a method in a program file (that’s called by other methods elsewhere in the program), will have effects elsewhere in the spreadsheet or program. If one person makes the changes, they’ll be able to track all the places in the code that would be affected and make sure their changes do not create bugs in the program or the spreadsheet. They would also comment the change so that others understand it. But if two people are making changes at the same time to the same spreadsheet or program (or document), and one person makes changes in one area that affect another area that the other person happens to be changing directly or indirectly, then you can see how that would lead to bugs being created due to conflicting changes.

    For example, let’s say I change A from 1 to 2 in area AA, which I expect to change the value of B (defined as A + C, with an acceptable range of 0-4) from 3 to 4 in area BB. At the same time you (the second person editing the document) change C from 2 to 3 in area CC, which you expect to change the value of B from 3 to 4 in area BB. We would end up with B = 5, which neither of us expected and which is outside the acceptable range of B; any value of B larger than 4 would break the calculation/program, thus creating a bug. This is the simplest case; in practice, “multiple simultaneous changes” scenarios would be much more complicated and involve more logical and semantic inter-connections. In a scenario where two or more people can edit the same spreadsheet, document, or program at the same time, if one cannot see the changes the other person is making and understand their effect on all related parts, then one should not be making changes of one’s own at the same time, or else havoc will result, especially as the number of simultaneous editors grows beyond two. With two people it is conceivable that a process can be worked out where they synchronize their editing. With three or more people the interaction grows in complexity so much that it would require a strict synchronization process (or a more evolved model) in the editing application to coordinate the work of the simultaneous editors so that a mess can be avoided.
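    A minimal sketch of exactly this conflict (the cell names, the 0-4 range, and the two “editors” are the invented ones from the example above):

```python
# Two editors change different cells of the same spreadsheet concurrently.
# Each change is safe in isolation, but merged together they push B = A + C out of range.
cells = {"A": 1, "C": 2}          # B is defined as A + C, acceptable range 0..4

def b_value(state):
    return state["A"] + state["C"]

edit_1 = dict(cells, A=2)         # editor 1 expects B to go from 3 to 4
edit_2 = dict(cells, C=3)         # editor 2 also expects B to go from 3 to 4
merged = dict(cells, A=edit_1["A"], C=edit_2["C"])   # both edits land at once

print(b_value(edit_1), b_value(edit_2))   # 4 4  -> each edit alone stays in range
print(b_value(merged))                    # 5    -> the merged result breaks the 0..4 constraint
```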

    The CVS model, which is used by Wikipedia and other multi-user editing applications, avoids the mess that would be created by “too many chefs in the kitchen” by letting each document (or each “dish” in this case) be worked on by only one chef at a time, with each chef leaving notes for the others about what he did, and with rollback and labeling capabilities. I find it unbelievable that a company like Google, with such a concentration of rocket scientists, cannot see the silliness in letting “too many chefs work on the same dish” (not just in the same kitchen) without implementing the synchronization process needed to avoid a certain, predictable mess.
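    Here is a sketch of that “one chef per dish” discipline as a simple check-out/check-in lock (the function names, documents, and editors are invented; real CVS-style systems add rollback, labeling, and merge facilities on top of this):

```python
# One-editor-at-a-time locking in the spirit of the editing model described above:
# a document must be checked out before editing and checked in to release the lock.
locks = {}   # document id -> current editor

def check_out(doc, editor):
    if doc in locks:
        raise RuntimeError(f"{doc} is being edited by {locks[doc]}; try again later")
    locks[doc] = editor

def check_in(doc, editor, note):
    assert locks.get(doc) == editor, "only the editor holding the lock can check in"
    del locks[doc]
    print(f"{editor} saved {doc}: {note}")   # the note plays the role of a commit message

check_out("budget.xls", "alice")
# check_out("budget.xls", "bob")   # would raise: alice holds the lock
check_in("budget.xls", "alice", "raised A from 1 to 2; B is now 4")
```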

    What people really want for multi-user document editing is what Wikipedia already offers, which is consistent with the CVS scheme that has been so successful in the context of software development teams.

    What people really want for multi-user spreadsheet editing is what DabbleDB already offers, i.e. to be able to create multiple views of the same dataset, and I would say also to be able to leverage the CVS scheme for multi-user editing where users may edit the same document but not at the same time!

    The Problem with Google’s Strategy

    In trying to understand Google’s strategy, I can think of three possible theories:

    1. The “short range chaos vs. long range order” theory. This implies that Google is striving toward more and more order, i.e. decreasing entropy, as a long-range trend, just like the process of evolution in nature. This means that Google leverages evolving strategies (as in self-organization, complexity and chaos) to generate long-range order. This is a plausible way to explain Google’s moves so far. Knowing what we know about the creative people they’ve hired, I’m tempted to conclude that this is what they’re doing.

    2. The “complicated but orderly” theory. Think of this as a parametric function vs. a chaotic pattern or a linear or cyclical one. This type of strategy has the property of being both confusing and constructive. Several bloggers have pointed to this possibility. But who are they distracting? Microsoft? Do they really think Microsoft is so naive as to fall for it? I doubt it. So I don’t understand why they would prefer complicated over simple when it comes to their strategy.

    3. The “total uninhibited decline” theory. This implies chaos in both the short and the long ranges of their strategy. Few people would consider this a possibility at this time.

    It would seem that Google’s strategy falls under theory number one, i.e. they work with a large set of evolving strategies, trying to mimic nature itself. But whether or not they recognize this about themselves I have no clue.

    So what if I were to convince a bunch of kids to take the Firefox source code and ideas from the GreaseMonkey and Platypus plugins and produce a version of Firefox that automatically removes all AdSense ads from Web pages and reformats the page so that there would be no empty white areas (where the ads were removed)? What would that do to Google’s ad-supported business model? I think it could do a lot of damage, as most users hate having their view of the Web cluttered with mostly useless ads.

    However, some say that Google Base is the Next Big Thing, and that’s an ad-supported model where people actually want to see the ads. In that case, it would seem that those bloggers who say Google’s strategy fits under theory number two (complicated but predictable) are correct.

    Personally, I believe that Google’s strategy is a mix of both complex as well as complicated behaviors (i.e. theories number one and two), which is a sure way to get lost in the game.

    Beating Google in Software as a Service (SaaS)

    As far as I can tell, Google 2.0 (or Google 2.71828 as I like to call it) has been mostly about SaaS.

    Google is now to the SaaS industry what Microsoft has been to the desktop software industry. VCs are afraid to invest in something that Google may copy and co-opt.

    However, just as Microsoft failed to beat Quicken with MS Money and failed to beat Adobe and Macromedia (which are now one company), Google will fail to beat those who know how to run circles around it. One exit strategy for such companies may be a sale to Yahoo (just kidding, but why not!)

    In general, here is what SaaS companies can do to run circles around the likes of Google and Yahoo:

    1. Let Google and Yahoo spend billions to expand their share of the general consumer market. You can’t beat them there. Focus on a market niche.

    2. Provide unique value through product differentiation.

    3. Provide higher value through integration with partners’ products and services.

    4. Cater to the needs of your niche market on a much more personal basis than Google or Yahoo can ever hope to accomplish.

    5. Offer vertical applications that Google and Yahoo would not be interested in offering (too small a market for them) to enhance your offering.

    Posted by Marc Fawzi


    Tags:

    Web 2.0, Ajax, Flex 2, Web standards, Trends, RIA, Rich Internet applications, product management, innovation, tagging, social networking, user generated content, Software as a Service (SaaS), chaos theory, Startup, Evolution, Google Writely, Google Spreadsheets, Google, DabbleDB, Google Base

    14 Comments »


    Article

    Wikipedia 3.0: The End of Google?

    The Semantic Web (or Web 3.0) promises to “organize the world’s information” in a dramatically more logical way than Google can ever achieve with their current engine design. This is especially true from the point of view of machine comprehension as opposed to human comprehension. The Semantic Web requires the use of a declarative ontological language like OWL to produce domain-specific ontologies that machines can use to reason about information and draw new conclusions, not simply match keywords.

    However, the Semantic Web, which is still in a development phase where researchers are trying to define the best and most usable design models, would require the participation of thousands of knowledgeable people over time to produce those domain-specific ontologies necessary for its functioning.

    Machines (or machine-based reasoning, aka AI software or ‘info agents’) would then be able to use those laboriously (but not entirely manually) constructed ontologies to build a view (or formal model) of how the individual terms within the information relate to each other. Those relationships can be thought of as the axioms (basic assumptions), which together with the rules governing the inference process both enable and constrain the interpretation (and well-formed use) of those terms by the info agents, allowing them to reason new conclusions from existing information, i.e. to think. In other words, theorems (formal deductive propositions that are provable from the axioms and the rules of inference) may be generated by the software, thus allowing formal deductive reasoning at the machine level. And given that an ontology, as described here, is a statement of Logic Theory, two or more independent info agents processing the same domain-specific ontology will be able to collaborate and deduce an answer to a query without being driven by the same software.

    Thus, and as stated, in the Semantic Web individual machine-based agents (or a collaborating group of agents) will be able to understand and use information by translating concepts and deducing new information rather than just matching keywords.

    Once machines can understand and use information, using a standard ontology language, the world will never be the same. It will be possible to have an info agent (or many info agents) among your virtual, AI-enhanced workforce, each having access to a different domain-specific comprehension space and all communicating with each other to build a collective consciousness.

    You’ll be able to ask your info agent or agents to find you the nearest restaurant that serves Italian cuisine, even if the restaurant nearest you advertises itself as a Pizza joint as opposed to an Italian restaurant. But that is just a very simple example of the deductive reasoning machines will be able to perform on information they have.
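    A toy contrast between keyword matching and this kind of ontology-backed matching (the restaurants, their self-descriptions, and the tiny cuisine hierarchy are invented for illustration):

```python
# Keyword search vs. ontology-backed search for "restaurant serving Italian cuisine".
restaurants = {
    "Luigi's":      {"advertises": "pizza joint"},
    "Trattoria Al": {"advertises": "italian restaurant"},
}

# Toy cuisine ontology: a pizza joint is a kind of Italian restaurant.
subclass_of = {"pizza joint": "italian restaurant"}

def keyword_match(query_word):
    return [name for name, r in restaurants.items() if query_word in r["advertises"]]

def semantic_match(concept):
    hits = []
    for name, r in restaurants.items():
        category = r["advertises"]
        while category:                       # walk up the is-a hierarchy
            if category == concept:
                hits.append(name)
                break
            category = subclass_of.get(category)
    return hits

print(keyword_match("italian"))               # ['Trattoria Al'] -- misses the pizza joint
print(semantic_match("italian restaurant"))   # ['Luigi's', 'Trattoria Al'] -- deduces that Luigi's qualifies
```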

    Far more awesome implications can be seen when you consider that every area of human knowledge will be automatically within the comprehension space of your info agents. That is because each info agent can communicate with other info agents who are specialized in different domains of knowledge to produce a collective consciousness (using the Borg metaphor) that encompasses all human knowledge. The collective “mind” of those agents-as-the-Borg will be the Ultimate Answer Machine, easily displacing Google from this position, which it does not truly fulfill.

    The problem with the Semantic Web, besides the fact that researchers are still debating which design and implementation of the ontology language model (and associated technologies) is the best and most usable, is that it would take thousands or tens of thousands of knowledgeable people many years to boil down human knowledge to domain-specific ontologies.

    However, if we were at some point to take the Wikipedia community and give them the right tools and standards to work with (whether existing or to be developed in the future), which would make it possible for reasonably skilled individuals to help reduce human knowledge to domain-specific ontologies, then that time can be shortened to just a few years, and possibly to as little as two years.

    The emergence of a Wikipedia 3.0 (as in Web 3.0, aka Semantic Web) that is built on the Semantic Web model will herald the end of Google as the Ultimate Answer Machine. It will be replaced with “WikiMind” which will not be a mere search engine like Google is but a true Global Brain: a powerful pan-domain inference engine, with a vast set of ontologies (a la Wikipedia 3.0) covering all domains of human knowledge, that can reason and deduce answers instead of just throwing raw information at you using the outdated concept of a search engine.

    Notes

    After writing the original post I found out that the Wikipedia application, also known as MediaWiki and not to be confused with Wikipedia.org, has already been used to implement ontologies. The name that they’ve chosen is Ontoworld. I think WikiMind or WikiBorg would have been a cooler name, but I like ontoworld, too, as in “and it descended onto the world,” since that may be a reference to the global mind a Semantic-Web-enabled OntoWorld would lead to.

    In just a few years Google’s search engine technology, which provides almost all of their revenue, could be made obsolete… That is unless they have a deal with Ontoworld where they will tap into their database of ontologies and add an inference engine capability to Google search.

    But so can Ask.com and MSN and Yahoo.

    I would really love to see more competition in this arena, not to see Google or any one company establish a huge lead over others.

    The question, to rephrase it in Churchillian terms, is whether the combination of the Semantic Web and Wikipedia signals the beginning of the end for Google or the end of the beginning. Obviously, with tens of billions of dollars of investors’ money at stake, I would think that it is the latter. No one wants to see Google fail; there’s too much vested interest. However, I do want to see somebody outmaneuver them (which can be done, in my opinion).

    Clarification

    Please note that Ontoworld, which currently implements the ontologies, is based on the “Wikipedia” application (also known as MediaWiki), but it is not the same as Wikipedia.org.

    Likewise, I expect Wikipedia.org will use their volunteer workforce to reduce the sum of human knowledge that has been entered into their database to domain-specific ontologies for the Semantic Web (aka Web 3.0). Hence, “Wikipedia 3.0.”

    Response to Readers’ Comments

    The argument I’ve made here is that Wikipedia has the volunteer resources to produce the needed Semantic Web ontologies for the domains of knowledge that it currently covers, while Google does not have those volunteer resources, which will make it reliant on Wikipedia.

    Those ontologies, together with all the information on the Web, can be accessed by Google and others, but Wikipedia will be in charge of the ontologies for the large set of knowledge domains it currently covers, and that is where I see the power shift.

    Google and other companies do not have the resources in manpower (i.e. the thousands of volunteers Wikipedia has) to help create those ontologies for the large set of knowledge domains that Wikipedia covers. Wikipedia does, and is positioned to do that better and more effectively than anyone else. It’s hard to see how Google would be able to create the ontologies for all domains of human knowledge (which are continuously growing in size and number) given how much work that would require. Wikipedia can cover more ground faster with their massive, dedicated force of knowledgeable volunteers.

    I believe that the party that will control the creation of the ontologies (i.e. Wikipedia) for the largest number of domains of human knowledge, and not the organization that simply accesses those ontologies (i.e. Google), will have a competitive advantage.

    There are many knowledge domains that Wikipedia does not cover. Google will have the edge there, but only if the people and organizations that produce the information also produce the ontologies on their own, so that Google can access them from its future Semantic Web engine. My belief is that this will happen, but very slowly, and that Wikipedia can have the ontologies done for all the domains of knowledge that it currently covers much faster, and would then have leverage by virtue of being in charge of those ontologies (aka the basic layer for AI enablement).

    It still remains unclear, of course, whether the combination of Wikipedia and the Semantic Web heralds the beginning of the end for Google or the end of the beginning. As I said in the original part of the post, I believe that it is the latter, and the question I pose in the title of this post, in this context, is no more than rhetorical. However, I could be wrong in my judgment, and Google could fall behind Wikipedia as the world’s ultimate answer machine.

    After all, Wikipedia makes “us” count. Google doesn’t. Wikipedia derives its power from “us.” Google derives its power from its technology and inflated stock price. Who would you count on to change the world?

    Response to Basic Questions Raised by the Readers

    Reader divotdave asked a few questions which I thought were very basic in nature (i.e. important). I believe more people will be pondering the same issues, so I’m including them here with my replies.

    Question:
    How does it distinguish between good information and bad? How does it determine which parts of the sum of human knowledge to accept and which to reject?

    Reply:
    It wouldn’t have to distinguish between good and bad information (not to be confused with well-formed vs. badly formed) if it used a reliable source of information (with associated, reliable ontologies). That is, if the information or knowledge to be sought can be derived from Wikipedia 3.0, then the information is assumed to be reliable.

    However, when it comes to connecting the dots to return information or deduce answers from the sea of information that lies beyond Wikipedia, your question becomes very relevant: how would it distinguish good information from bad information so that it can produce good knowledge (aka comprehended information, aka new information produced through deductive reasoning based on existing information)?

    Question:
    Who, or what as the case may be, will determine what information is irrelevant to me as the inquiring end user?

    Reply:
    That is a good question, and one which would have to be answered by the researchers working on AI engines for Web 3.0.

    There will be assumptions made as to what you are inquiring about. Just as I had to make an assumption about what you really meant to ask me when I saw your question, AI engines would have to make an assumption, pretty much based on the same cognitive process humans use, which is the topic of a separate post, but which has been covered by many AI researchers.

    Question:
    Is this to say that ultimately some over-arching standard will emerge that all humanity will be forced (by lack of alternative information) to conform to?

    Reply:
    There is no need for one standard, except when it comes to the language the ontologies are written in (e.g. OWL, OWL-DL, OWL Full, etc.). Semantic Web researchers are trying to determine the best and most usable choice, taking into consideration human and machine performance in constructing (and, exclusively in the latter case, interpreting) those ontologies.

    Two or more info agents working with the same domain-specific ontology but having different software (different AI engines) can collaborate with each other.
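    A minimal sketch of that point: two “agents” whose reasoning code is written completely independently, but which load the same shared ontology, reach the same conclusion (the ontology, terms, and query are invented for illustration):

```python
# Two differently implemented reasoners over the same shared is-a ontology.
shared_ontology = {"dolphin": "mammal", "mammal": "animal"}

def agent_a_entails(term, target):
    """Agent A: recursive subsumption check."""
    if term == target:
        return True
    parent = shared_ontology.get(term)
    return parent is not None and agent_a_entails(parent, target)

def agent_b_entails(term, target):
    """Agent B: iterative subsumption check, written independently of Agent A."""
    while term is not None:
        if term == target:
            return True
        term = shared_ontology.get(term)
    return False

# Asked "is a dolphin an animal?", both agents agree, because the ontology (not the code) carries the knowledge.
print(agent_a_entails("dolphin", "animal"), agent_b_entails("dolphin", "animal"))  # True True
```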

    The only standard required is that of the ontology language and associated production tools.

    Addendum

    On AI and Natural Language Processing

    I believe that the first generation of AI that will be used by Web 3.0 (aka Semantic Web) will be based on relatively simple inference engines (employing both algorithmic and heuristic approaches) that will not attempt to perform natural language processing. However, they will still have the formal deductive reasoning capabilities described earlier in this article.

    On the Debate about the Nature and Definition of AI

    The embedding of AI into cyberspace will be done at first with relatively simple inference engines (that use algorithms and heuristics) that work collaboratively in P2P fashion and use standardized ontologies. The massively parallel interactions between the hundreds of millions of AI Agents that will run within the millions of P2P AI Engines on users’ PCs will give rise to the very complex behavior that is the future global brain.

    Related:

    1. Web 3.0 Update
    2. All About Web 3.0 <– list of all Web 3.0 articles on this site
    3. P2P 3.0: The People’s Google
    4. Reality as a Service (RaaS): The Case for GWorld <– 3D Web + Semantic Web + AI
    5. For Great Justice, Take Off Every Digg
    6. Google vs Web 3.0

    Posted by Marc Fawzi


     


    Update on how the Web 3.0 vision is spreading:




    Update on how Google is co-opting the Wikipedia 3.0 vision:




    Web 3D fans:

    Here is the original Web 3D + Semantic Web + AI article:

    Web 3D + Semantic Web + AI *

    The above-mentioned Web 3D + Semantic Web + AI vision, which preceded the Wikipedia 3.0 vision, received much less attention because it was not presented in a controversial manner. This fact was noted as the biggest flaw of the social bookmarking site digg, which was used to promote this article.

    Developers:

    Feb 5, ‘07: The following external reference concerns the use of rule-based inference engines and ontologies in implementing the Semantic Web + AI vision (aka Web 3.0):

    1. Description Logic Programs: Combining Logic Programs with Description Logic (note: there are better, simpler ways of achieving the same purpose.)

    Jan 7, ‘07: The following Evolving Trends post discusses the current state of semantic search engines and ways to improve their design:

    1. Designing a Better Web 3.0 Search Engine

    The idea described in this article was adopted by Hakia after it was published here, so this article may be considered prior art.

    June 27, ‘06: Semantic MediaWiki project

    1. http://wiki.ontoworld.org/wiki/Semantic_MediaWiki

    Bloggers:

    This post provides the history behind the use of the term Web 3.0 in the context of the Semantic Web and AI.

    This post explains the accidental way in which this article reached 2 million people in 4 days.

    Futurists:

    For a broader context, please see this post.


    Tags:

    Semantic Web, Web standards, Trends, OWL, innovation, Startup, Evolution, Google, GData, inference, inference engine, AI, ontology, Semanticweb, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, collective consciousness, Ontoworld, Wikipedia AI

    101 Comments »

    1. […] La verdad es que cuando leí el título de este artículo me llamó la atención aunque fui un poco escéptico y decidí echarle un vistazo. No tengo mucha idea de Inglés pero lo que pude entender y los enlaces a los que me llevó me ayudó a entender la idea del proyecto. […]

      Pingback by Semantic Wiki, ¿Google en peligro? » aNieto2K — June 26, 2006 @ 8:06 am

    2. English Translation of the above linked comment/trackback (courtesy of Google):

      The truth is that when I read the title of this article it called my attention to it although I was a little skeptical and I decided to have a look at it. I do not have much idea of English but what I could understand and the connections to which it took me helped to understand the idea of the project.

      Comment by evolvingtrends — June 26, 2006 @ 8:15 am

    3. […] I wouldn’t be surprised if the strategy would shift from talking to a call center agent to searching an online FAQ created by a “semantic web,” possibly even utilizing voice-recognition software coupled with text-to-voice technologies. It may sound like science fiction, but so was video-conferencing a couple of years ago. […]

      Pingback by Techno Pinoy » Archives » BPO and Call Centers — June 26, 2006 @ 10:20 am

      Whatever happens, you can bet Google could easily buy the technology in Wikipedia to extend its reach beyond its current search-engine-ishness.

      Jim Sym

      Comment by magicmarketing — June 26, 2006 @ 12:30 pm

    5. Hmmmmmmmmm.

      I think I will think on this for a few days.

      Comment by farlane — June 26, 2006 @ 12:45 pm

    6. Jim,

      It’s not the technology. It’s the thousands of knowledgeable people that have to work for years to boil down human knowledge to a set of ontologies.

      I guess OntoWorld would open up to companies like Google and let them plug their inference engines into it.

      Marc

      Comment by evolvingtrends — June 26, 2006 @ 1:35 pm

    7. […] Will we ever ask Google a question again?read more | digg story […]

      Pingback by Stroke My Blog… » Wikipedia 3.0: The End of Google? — June 26, 2006 @ 1:37 pm

      My first thought on this was that “the semantic web needs robots” (in order to be created) and that I’m not sure the AI described is ready yet. We have companies like Semantica which enable us to create small-scale semantic webs and networks and knowledge-management platforms, but it still requires a great deal of manual labor to input the ontological terms properly. Ontoworld would do a lot of that, yes, but tags are still tags. You can manipulate them and draw patterns, to a point. Machines still need to process it effectively, efficiently, and then communicate what they have made to us, the humans. Are we there yet?

      Comment by Sam Jackson — June 26, 2006 @ 1:49 pm

    9. […] Read the full article here. […]

      Pingback by Herold Internet Marketing & Consulting » Blog Archive » Will Wikipedia 3.0 the new Google? » Will Wikipedia 3.0 the new Google?- San Francisco California — June 26, 2006 @ 1:50 pm

    10. […] Evolving Trends » Wikipedia’s OntoWorld: The End of Google?   […]

      Pingback by The Geek Gene » Wikipedia’s OntoWorld: The End of Google? — June 26, 2006 @ 1:54 pm

    11. Google will figure out a way to start their own form of a similar system. They may have already.. who knows.. maybe it’s in the testing phase and when it’s ready, it may be simply turned on like a light switch… ?

      Comment by Eugene F — June 26, 2006 @ 1:56 pm

    12. Sam,

      I’ve revised the premise of my argument to clarify that the right tools and standards have to be ready first, but the Ontoworld project is already in progress… Technology evolves based on our needs, so we have to take those early awkward steps in order to get there.


      “However, if we were at some point to take the Wikipedia community and give them the right tools and standards (whether existing or to be developed in the future) to work with, which would make it possible for reasonably skilled individuals to help reduce human knowledge to domain-specific ontologies, then that time can be shortened to just a few years, and possibly to as little as two years.”

      Marc

      Comment by evolvingtrends — June 26, 2006 @ 1:59 pm

    13. […] read more | digg story […]

      Pingback by Tech Meat » Wikipedia 3.0: The End of Google? — June 26, 2006 @ 2:08 pm

    14. Eugene,

      Wikipedia has a cult of users who number in the thousands, who are knowledgeable in their own domains, and who have proved they can do decent quality work. They would be needed to create the ontologies. It’s no small job. Google would have to go out and hire the world? Wikipedia has the educated/knowledgeable resources needed for the job, and all they need are better, more user-friendly tools (automation, IDE, etc.) and more usable standards.

      It’s not there yet..

      Again, it’s about the workforce not the technology. Google just doesn’t have enough people to do it.

      Marc

      Comment by evolvingtrends — June 26, 2006 @ 2:13 pm

    15. I can’t read an article like this without remembering Clay Shirky’s article on “The Semantic Web, Syllogism, and Worldview” [1]. I remain a skeptic on the Semantic Web, just as I remain a skeptic on AI. I’ll believe it when I see it.

      [1] http://www.shirky.com/writings/semantic_syllogism.html

      Comment by Karl G — June 26, 2006 @ 2:15 pm

    16. I know not with what weapons Web 3.0 will be fought, but Web 4.0 will be fought with sticks and stones.

      Comment by Albert Einstein — June 26, 2006 @ 2:16 pm

    17. […] Uhhhh leiam este artigo: http://evolvingtrends.wordpress.com/2006/06/26/wikipedia-30-the-end-of-google/ no início deste ano saiu na newsweek umas 4 páginas falando sobre semantic web.. é a coisa mais revolucionária desde o monitor! […]

      Pingback by Batutinhas » Semantic Web — June 26, 2006 @ 3:36 pm

    18. this will happen. except it will happen by google doing it. that’s the future of google.. the ability to ask it a question in plain english and get back an intelligable answer. i hate to break it, but this won’t kill google. it will seal the deal for google as king.

      Comment by nick podges — June 26, 2006 @ 3:50 pm

    19. Sounds like Hal might come back from the dead!

      Comment by elgoodpaco — June 26, 2006 @ 3:59 pm

    20. The AI community (no dis) once felt this problem was easy to solve. Mere ontology is not enough, nor is a large workforce. When it comes to natural language, disambiguation is a major problem, and that requires a patterned database. I believe we could get off to a good start, but without some new concepts – or the location of some old, neglected ones – it’ll never be satisfying to use. I don’t think Google’s worried; they may even kick in a few megabucks.

      Comment by Tony — June 26, 2006 @ 4:06 pm

    21. Tony,

      I was not saying anything about natural language queries, though it may have looked like it since I did not specify the query mechanism, for the sake of brevity and to keep it within grasp.

      No need to go into natural language.

      Ontologies + Inference Engines + Query Logic = Ultimate Answer Machine (for the next 5-8 years.. after that the definition of ultimate may include natural language.)
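
      To make that equation concrete, here is a minimal sketch in plain Python (the names and facts are hypothetical, echoing the pizza/Italian cuisine example; it’s an illustration, not a spec). The “ontology” is a few facts plus two inference rules, the inference engine forward-chains until nothing new can be deduced, and the query logic is a simple lookup over the deduced facts:

      facts = {
          ("PizzaRestaurant", "subClassOf", "ItalianRestaurant"),
          ("ItalianRestaurant", "serves", "ItalianCuisine"),
          ("LuigisPizza", "isA", "PizzaRestaurant"),
      }

      def apply_rules(facts):
          """One pass of the domain inference rules over the current fact base."""
          new = set()
          for (a, p1, b) in facts:
              for (c, p2, d) in facts:
                  if b != c:
                      continue
                  # Rule 1: x isA B, B subClassOf C  =>  x isA C
                  if p1 == "isA" and p2 == "subClassOf":
                      new.add((a, "isA", d))
                  # Rule 2: x isA B, B serves D  =>  x serves D
                  if p1 == "isA" and p2 == "serves":
                      new.add((a, "serves", d))
          return new

      # Forward-chain until a fixed point: no rule yields anything new.
      while True:
          inferred = apply_rules(facts) - facts
          if not inferred:
              break
          facts |= inferred

      # Query logic: "what serves Italian cuisine?" -- answered by deduction,
      # since the fact about LuigisPizza was never stated explicitly.
      print(sorted(s for (s, p, o) in facts
                   if p == "serves" and o == "ItalianCuisine"))
      # ['ItalianRestaurant', 'LuigisPizza']
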

      Marc

      Comment by evolvingtrends — June 26, 2006 @ 4:13 pm

    22. […] read more | digg story […]

      Pingback by Motion Personified » Blog Archive » Wikipedia 3.0: The End of Google? — June 26, 2006 @ 4:55 pm

    23. Folks,

      Clarification:

      It’s not Wikipedia itself that is operating Ontoworld. They are simply using the same wiki application that powers Wikipedia, known as MediaWiki. But I envision “Wikipedia 3.0” to follow from Ontoworld.

      Marc

      Comment by evolvingtrends — June 26, 2006 @ 4:59 pm

    24. As good as Google is, something better will come. All dynasties must come to an end.

      Comment by realestateceo — June 26, 2006 @ 5:44 pm

    25. I’m a huge fan of both Wikipedia and Google.
      I’m just really interested to see how this all develops!

      Great article

      Comment by web design uk — June 26, 2006 @ 5:52 pm

    26. Sounds like a fantastic concept but I can see, as you say, that “there is too much vested interest” in the Google-type search engine. The idea of such massive change is too much to come in smoothly; it’s like moving from fossil fuels to solar. Though it would be for our collective best, most of us who have the money (Western world) would be somewhat inconvenienced and the big players would be greatly inconvenienced. So who will invest so massively if the return will be years away? How do you pay several thousand people if there is no foreseeable end in sight? Would need the support of a government or something.

      Comment by rockwatching — June 26, 2006 @ 6:32 pm

    27. Interesting theory. I have found that when I’m looking for facts, the wikipedia article is in the top search results. Google gets me there quickly. Not sure if wikipedia could ever kill Google entirely, though.

      Comment by Sarah — June 26, 2006 @ 7:00 pm

    28. If anything, this will only enable Google to do their job more efficiently. It may slightly alter the way they accomplish the end goal, but it will help them much more than hinder. However, several good points are made here. My personal opinion is that click fraud and similar problems pose much more of a threat to Google’s revenue model than a better organization of the world’s largest form of media.

      Comment by Donnie — June 26, 2006 @ 8:04 pm

    29. Google has been the reference for the last few years. However, the development of web 2.0 technologies has somewhat shifted the focus away from Google.

      Wikipedia and its technologies are definitely a force to be reckoned with in web tech. I find myself using Wikipedia first for specific knowledge based queries, validating them through other sources. When I explore the blogosphere, I don’t use Google at all.

      Comment by range — June 26, 2006 @ 8:32 pm

    30. Interesting article!

      Comment by drhaisook — June 26, 2006 @ 8:43 pm

    31. This article has been noted by the Antiwikipedia (http://www.antiwikipedia.com). We will now integrate ontologies into our wisdombase.

      Comment by Antiwikipedia — June 26, 2006 @ 9:17 pm

    32. Alright! Good for you, Antiwikipedia!

      🙂

      Marc

      Comment by evolvingtrends — June 26, 2006 @ 9:21 pm

    33. http://www.goertzel.org/papers/webmind.html

      Comment by Not Webmind — June 26, 2006 @ 10:09 pm

    34. Each human being is unique. Each of us has a unique genetic makeup, unique values, and unique circumstances. There is no universally applicable algorithm that applies to something as individualized as the pursuit of enlightenment. I cannot imagine such a highly structured project that does not disenfranchise the majority of people in one way or another; the more pervasive the attempted structuring, the more universal and profound the disenfranchisement will surely become. Eventually we will all be fighting with each other, dining on dog food, and living in mud huts, while we pound away on our keyboards, or whatever.

      Comment by thomas — June 26, 2006 @ 11:18 pm

    35. Ian:

      Thank you for bringing this forward. That’s encompassed by what I was implying regarding giving the ontology makers the right tools.

      Update:

      However, when I said thousands of people needed to produce the ontologies, I did not mean to say that they would produce them manually. Yet, my assessment is that given the vastness of human knowledge, which for the most part exists in plain form, we would need thousands of people working over time, with automation tools and whatever tools are available (now or in the future), to produce the ontologies (neither manually nor with persistent intervention.) The tools should make the job faster and more realistic but it would still take a lot of time and many people.

      Think about how long it took to build Wikipedia’s content. Many years and thousands of people. How can the conversion process from Wikipedia’s current format to ontologies (even with next year’s tools) take much less than a few years and thousands of people? (I said two years, optimistically.) The conversion from Wikipedia to formal (computationally complete and decidable) ontologies cannot be entirely automated, at least not yet.

      I’ll look further into it and try to get a better estimate for the cost in time and labor, but has there been any final standard? Are we going to be using OWL, OWL-DL or OWL Full, or somewhere in between OWL and OWL-DL?

      Marc

      Comment by evolvingtrends — June 26, 2006 @ 11:37 pm

    36. Nicely written but a few gaps I take issue with: First, the concept of the Semantic Web predates the concept of Web 2.0 so it’s a little disingenuous to call it Web 3.0. It’s been struggling for many years while making very little progress that I’ve seen. The Wikipedia analogy is an interesting one. Wikipedia achieves some very significant things through the tireless efforts of thousands. A lot of successes with Wikipedia and some very significant shortcomings. And that’s one website. By extension, the Semantic Web requires the participation of practically everybody to work.

      It’s a good theory and a noble goal with some rather serious hurdles to overcome. Not least of which is the requirement to engage so many people when so many people are so very lazy. And people are not inclined to agree while the Semantic Web requires significant (if not universal) agreement to work. Look to Wikipedia talk pages to see what happens when people disagree. I think the focus on some huge, utopian and probably unachievable goal is likely to be nothing more than a distraction from what the next big thing will really be.

      And I’m not about to let you in on that secret. Not before the IPO anyway.

      Comment by Mr Angry — June 27, 2006 @ 1:06 am

    37. Web 1.5?

      I just doubled it then for double the fun.

      If it hasn’t happened yet then it must come after 2.0, which is happening now. Thus, it must be “3.0” 🙂
      I didn’t know the IPO market in Australia was going strong! Take us there with you!

      The bet your company made on your product makes you financially biased against the Wikipedia 3.0 vision, as described here. 🙂

      Marc

      Comment by evolvingtrends — June 27, 2006 @ 1:16 am

    38. Marc,

      I harbor a serious doubt that people will embrace query logic. I agree that application of ontology will eventually lead to better resolution in database searches. But natural language is what Google gets from *most* people, who are and will remain disinclined to learn more “query logic”.

      More problematic, mere ontology will remain unable to resolve the infinite array of conceptual ambiguities and relationships that the human brain is patterned by experience to resolve almost effortlessly. So -short of a new heuristic algorithm- while it’d be great to see results *today*, I just don’t see ontology leading to such superior results that people will abandon Google. But I’d enjoy being wrong.

      Comment by Tony — June 27, 2006 @ 4:41 am

    39. Tony,

      If you call Google’s boolean search query a “natural language query” then I call my writing Shakespearean. Obviously, neither comes anywhere within 200,000 miles of its claim. But I’d like to think I’m much closer to mine than Google is to being a natural language query system. 😉

      It’s not mere ontology. The inference rules can be intelligent and that’s where ‘improved’ heuristics can be relied on to “kind of, sort of” mimic the human brain until we fully get there. We have to take those early awkward steps or the future will not arrive, not even in an unevenly distributed way. 🙂

      Marc

      Comment by evolvingtrends — June 27, 2006 @ 5:13 am

    40. I’m struggling to see how the likes of google will fall from grace because of standard XML templating. There will still be information to be searched, or indeed machines to be interfaced with, but how different is this from how we work with google today? Google can simply integrate web semantics into their search; if they do it better and first, they will continue to dominate the search market. Perhaps only a large organisation such as google can succeed with this technology since, as Mr Angry argues, you have to get everyone to work together with standards. Hell, could this be the way that Microsoft or AOL topple Google’s search dominance?

      It is, though, unlikely that Wikipedia or a similar voluntary organisation would produce a system that could topple Google.

      Comment by smiggs — June 27, 2006 @ 2:18 pm

    41. smiggs,

      It’s not about the next generation search engine (in this case the inference engine and the whole AI enablement in general) or the tools to produce the ontologies.

      The argument I’ve made here is that Wikipedia has the volunteer resources to produce the needed Semantic Web ontologies for the domains of knowledge that it currently covers, while Google does not have those volunteer resources, which will make it reliant on Wikipedia.

      Those ontologies together with all the information on the Web, can be accessed by Google and others but Wikipedia will be in charge of the ontologies for the large set of knowledge domains they currently cover, and that is where I see the power shift.

      Google and other companies do not have the resources in man power (i.e. the thousands of volunteers Wikipedia has) who would help create those ontologies for the large set of knowledge domains that Wikipedia covers. Wikipedia does, and is positioned to do that better and more effectively than anyone else. It’s hard to see how Google would be able to create the ontologies for all domains of human knowledge (which are continuously growing in size and number) given how much work that would require. Wikipedia can cover more ground faster with their massive, dedicated force of knowledgeable volunteers.

      I believe that the party that will control the creation of the ontologies (i.e. Wikipedia) for the largest number of domains of human knowledge, and not the organization that simply accesses those ontologies (i.e. Google), will have a competitive advantage.

      There are many knowledge domains that Wikipedia does not cover. Google will have the edge there but only if the people and organizations that produce the information would also produce the ontologies on their own, so that Google can access them from its future Semantic Web engine. My belief is that it would happen but very slowly, and that Wikipedia can have the ontologies done for all the domains of knowledge that it currently covers much faster, and then they would have leverage by the fact that they would be in charge of those ontologies (aka the basic layer for AI enablement.)

      It still remains unclear, of course, whether the combination of Wikipedia and the Semantic Web herald the beginning of the end for Google or the end of the beginning. As I said in the original part of the post, I believe that it is the latter, and the question I pose in the title of this post, in this context, is not more than rhetorical. However, I could be wrong in my judgment and Google could fall behind Wikipedia as the world’s ultimate answer machine.

      After all, Wikipedia makes “us” count. Google doesn’t. Wikipedia derives its power from “us.” Google derives its power from its technology and inflated stock price. Who would you count on to change the world?

      Marc

      Comment by evolvingtrends — June 27, 2006 @ 7:26 pm

    42. Ultimately I don’t think Google will see Wikipedia as a threat as such. They are already a highly trusted source for Google. As more ontologies are created in fields other than those covered by Wikipedia, or indeed as more granular and detailed sources for Wikipedia themselves, Wikipedia will become one of very many trusted sources, the data from which will be aggregated and delivered by Google. Surely Google actually welcomes more well-crafted, semantically correct sources than the grey goo of today’s generally poorly structured non-semantic web sites?

      The real challenge is going to lie in being able to create a plural Internet where there are hundreds / thousands of trusted RDF aggregates or similar, where Wikipedia will be only one voice. There is also a presumption that we must somehow manage to make sense of the grey goo we have (and are still producing more of daily.) At this point in time the water / grey goo is still pouring over the dam at a considerable rate. The worst possible scenario is that we must start again, learning from the mistakes of the pioneer days.

      The current google interface is far from a natural language processor, but who would bet against them getting that right if they were first given a higher quality input?

      Comment by Rob Kirton — June 28, 2006 @ 8:59 am

    43. I personally like having Google as a market leader. Yahoo has become a bunch of greedy thugs – their apps invariably have ads on the left, the right, above, and below, all blinking and moving, and $199 a year for a link in their now-useless directory has always been incredibly nasty – and MSN, well, greedy thugs does pretty much cover it! Google on the other hand has shared the wealth quite nicely with content creators via the AdSense program, which allows normal people to earn a good living as webmasters if they have a particular area of expertise and the ability to share it. Yes, it’s also inspired a number of sleazebags to have Made-For-Adsense sites, but that’s inevitable and those tend to get bounced sooner or later.

      Wikipedia.org is a threat to a lot of us who have worked hard to develop good web sites only to find that increasingly people are just going to wikipedia, which I guess is fair but often the depth and variety is greater out there in the wild. I’ll also point out that a single ontology directed source might stifle a lot of the independent voices who add variety as all traffic goes to them, if I’m understanding it right. I do understand that wikipedia.org is “open” but I’ve experienced openness in the Open Directory that ended up being authoritarian dictatorship when editors get out of hand, and don’t see wikipedia.org as being non-susceptible to that.

      Comment by analysis — June 28, 2006 @ 11:16 am

    44. “I would really love to see more competition”

      And open standards, data, source code, access….all the good stuff.

      http://www.techcrunch.com/2006/06/06/google-to-add-albums-to-picassa/#comment-66429

      Comment by Dave — June 28, 2006 @ 6:48 pm

    45. I enjoyed your commentary on “The Semantic Web” and find your observations right on. It was the perfect follow-up to watching a “Royal Society” presentation by Professor Sir Tim Berners-Lee FRS (video on demand), The future of the world wide web, this past evening. Although somewhat dated (as am I), from 9/03, it helped put everything in perspective.

      http://www.royalsoc.ac.uk/portals/bernerslee/rnh.htm

      Thanks.

      Comment by Tom K — June 29, 2006 @ 5:36 am

    46. I don’t think Google will allow themselves to become obsolete. Google just may get as many people to get the job done as Wikipedia has. It’s not impossible. We’ll just have to wait and see on that one.

      Comment by Panda — June 29, 2006 @ 4:34 pm

    47. Nothing can kill google now 🙂

      Comment by Ivan Minic — June 29, 2006 @ 6:43 pm

    48. Good Comment

      Comment by manuandycole — June 29, 2006 @ 7:39 pm

    49. Google will have to compete with the likes of Yahoo and MSN new search technology.

      I’m thankful at least there is some competition left in the U.S. (oil, telecom, banking – almost no competition any more)

      Comment by Krag — June 29, 2006 @ 7:42 pm

    50. I would have to agree that competition is getting less and less here in the USA.

      Comment by onecoolsoul — June 29, 2006 @ 9:58 pm

    51. So…I am trying to wrap my brain around this concept a bit (regardless of the threat to Google or whatever). The collective mind of this Semantic web can, in theory, intuitively connect bits of information together due to their relational association beyond a “keyword” association (i.e. your Pizza/Italian example). However, how does it distinguish between good information and bad? How does it determine which parts of the sum of human knowledge to accept and which to reject? Who, or what as the case may be, will determine what information is irrelevant to me as the inquiring end user? And what of ranking or order – Google’s search results are driven by many factors…but they may not be the best results for what I am specifically looking for…even on the level of personal preference. Is this to say that ultimately some over-arching standard will emerge that all humanity will be forced (by lack of alternative information) to conform to? Hmmm…

      Comment by divotdave — June 30, 2006 @ 3:11 am

    52. Question:
      How does it distinguish between good information and bad? How does it determine which parts of the sum of human knowledge to accept and which to reject?

      Reply:
      It wouldn’t have to distinguish between good vs bad information (not to be confused with well-formed vs badly formed) if it were to use a reliable source of information (with associated, reliable ontologies.) That is, if the information or knowledge to be sought can be derived from Wikipedia 3.0, then it assumes that the information is reliable.

      However, with respect to connecting the dots when it comes to returning information or deducing answers from the sea of information that lies beyond Wikipedia, your question becomes very relevant. How would it distinguish good information from bad information so that it can produce good knowledge (aka comprehended information, aka new information produced through deductive reasoning based on existing information)?

      Question:
      Who, or what as the case may be, will determine what information is irrelevant to me as the inquiring end user?

      Reply:
      That is a good question and one which would have to be answered by the researchers working on AI engines for Web 3.0

      There will be assumptions made as to what you are inquiring about. Just as when I saw your question I had to make an assumption about what you really meant to ask me, AI engines would have to make an assumption, pretty much based on the same cognitive process humans use, which is the topic of a separate post, but which has been covered by many AI researchers.

      Question:
      Is this to say that ultimately some over-arching standard will emerge that all humanity will be forced (by lack of alternative information) to conform to?

      Reply:
      There is no need for one standard, except when it comes to the language the ontologies are written in (e.g. OWL, OWL-DL, OWL Full, etc.) Semantic Web researchers are trying to determine the best and most usable choice, taking into consideration human and machine performance in constructing and, exclusively in the latter case, interpreting those ontologies.

      Two or more info agents working with the same domain-specific ontology but having different software (different AI engines) can collaborate with each other.

      The only standard required is that of the ontology language and associated production tools.
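
      To illustrate that last point, here is a rough sketch in plain Python (hypothetical names, not tied to OWL or any particular ontology language): the two agents below share nothing but the ontology itself, one reasons forward and the other backward, and both deduce the same answer:

      # The shared, domain-specific ontology: nothing but subclass assertions.
      ontology = {
          ("PizzaRestaurant", "ItalianRestaurant"),
          ("ItalianRestaurant", "Restaurant"),
          ("Restaurant", "Business"),
      }

      def agent_a(onto, sub, sup):
          """Agent A: forward-chaining -- materialize the transitive closure, then look up."""
          closure, changed = set(onto), True
          while changed:
              changed = False
              for (a, b) in list(closure):
                  for (c, d) in list(closure):
                      if b == c and (a, d) not in closure:
                          closure.add((a, d))
                          changed = True
          return (sub, sup) in closure

      def agent_b(onto, sub, sup):
          """Agent B: backward-chaining -- recursively search for a derivation of the query."""
          if (sub, sup) in onto:
              return True
          return any(agent_b(onto, mid, sup) for (s, mid) in onto if s == sub)

      # Different engines, same ontology, same deduced answer:
      print(agent_a(ontology, "PizzaRestaurant", "Business"))  # True
      print(agent_b(ontology, "PizzaRestaurant", "Business"))  # True
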

      Marc

      P.S. This set of responses was refined at 10:25pm EST, June 30, ‘06.

      Comment by evolvingtrends — June 30, 2006 @ 3:30 am

    53. I’m in utter awe and there is no way I can digest all this information tonight, so I’ve saved it in its own “favorites” folder. The internet is changing and growing every day and its difficult to keep up. Thanks for the information.

      Comment by Gregory Name — July 1, 2006 @ 2:39 am

    54. seems like some hardcore knowledge

      Comment by unblock myspace — July 1, 2006 @ 4:57 pm

    55. Trying to get machines to understand the English language? People have a tough enough time doing that in everyday life, with all the misunderstandings born of incorrect interpretations of the spoken or written word.

      In reference to the first basic question raised by readers: How would a machine distinguish good information from bad? Given that the concept of good vs bad is purely subjective, the answer is machines can’t make that determination because machines can’t make value judgements. Any choice made by a machine that would appear to be a value judgement is really that of the developing programmer.

      Comment by NorthOfTheCity — July 2, 2006 @ 1:58 pm

    56. One big issue I have to wonder about is how to keep out the legions of spammers with their MFA (made for AdSense) sites. They have done an incredible job of staying one step ahead of everyone except perhaps Akismet, spamming Google quite effectively, lodging their turds in forums and blogs, and generally being quite ingenious in their ability to spread filth. Given that we cannot expect our government to crack down on their criminal activities – not necessarily the spamming but the crime that generates the funding for it – how can we insulate Wiki3/Web3 from all that rubbish?

      Another issue is the tendency I’ve noticed for any authority to become incredibly insular and snotty, no doubt due to the massive fight against spam. The Open Directory is famous for its arbitrary, permanent decisions and lack of any ability to take criticism; but Yahoo was hardly fun in its prime, and Google seems to be getting rather aloof as well. The v3 web/wiki may need to confront that head-on since it makes for bad decisions (since self-examination disappears in a mass of self-righteousness.)

      Comment by analysis — July 2, 2006 @ 2:37 pm

    57. On AI and Natural Language Processing

      I believe that the first generation of AI that will be used by Web 3.0 (aka Semantic Web) will be based on relatively simple inference engines (employing both algorithmic and heuristic approaches) that will not attempt to perform natural language processing. However, they will still have the formal deductive reasoning capabilities described earlier in this article.

      On the Debate about the Nature and Definition of AI

      Please note that I could not get into moral, philosophical or religious debates such as what is AI vs human consciousness or what is evolution vs intelligent design or any other such debate that cannot be resolved by computable (read – provable) and decidable arguments in the context of formal theory.

      Non-formal and semi-formal (incomplete, evolving) theories can be found everywhere in every branch of science. For example, in the branch of physics, the so-called “String Theory” is not a formal theory: it has no theoretically provable arguments (or predictions.) Likewise, the field of AI is littered with non-formal theories that some folks in the field (but not all) may embrace as so-called ’seminal’ works even though those non-formal theories have not made any theoretically provable predictions (not to mention experimentally proven ones.) They are no more than a philosophy, an ideology or even a personal religion and they could not be debated with decidable, provable arguments.

      The embedding of AI into cyberspace will be done at first with relatively simple inference engines (that use algorithms and heuristics) that work collaboratively in P2P fashion and use standardized ontologies. The massively parallel interactions between the hundreds of millions of AI Agents that run within the millions of P2P AI Engines on users’ PCs will give rise to the very complex behavior that is the future global brain.
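
      Here is a toy sketch of that P2P idea in plain Python (a hypothetical design, not an implementation plan): each peer agent holds only a slice of the ontology and forwards a query to its peers when it cannot answer or deduce the answer locally:

      class PeerAgent:
          def __init__(self, name, facts):
              self.name = name
              self.facts = set(facts)   # this peer's slice of the shared ontology
              self.peers = []           # agents reachable over the (simulated) P2P network

          def serves(self, place, cuisine, asked=None):
              """Answer 'does `place` serve `cuisine`?', consulting peers if needed."""
              if asked is None:
                  asked = set()
              asked.add(self.name)
              if (place, "serves", cuisine) in self.facts:
                  return True
              # Local deduction step: place isA X and (somewhere) X serves cuisine.
              for (s, p, o) in self.facts:
                  if s == place and p == "isA" and self.serves(o, cuisine, asked):
                      return True
              # Otherwise forward the query to peers we have not asked yet.
              return any(peer.serves(place, cuisine, asked)
                         for peer in self.peers if peer.name not in asked)

      a = PeerAgent("A", {("LuigisPizza", "isA", "PizzaRestaurant")})
      b = PeerAgent("B", {("PizzaRestaurant", "serves", "ItalianCuisine")})
      a.peers, b.peers = [b], [a]

      print(a.serves("LuigisPizza", "ItalianCuisine"))  # True, deduced across two peers
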

      Comment by evolvingtrends — July 2, 2006 @ 5:27 pm

    58. […] This article explains the background for how the Wikipedia 3.0/Semantic Web vs Google meme was launched. […]

      Pingback by Evolving Trends » For Great Justice, Take Off Every Digg — July 3, 2006 @ 10:32 am

    59. […] So I read this blog entry on A list apart a ways back, but never made an entry here. It’s titled web 3.0 by Jeffrey Zeldman. And then there’s this blog entry that came out today on the Evolving Trends blog called Wikipedia 3.0: The End of Google?. Jeffrey Zeldman makes the point that Web 2.0 is really about the collaboration and community. Not only for the end-user (i.e. flickr, ma.gnolia, etc…), but on the development side. It allows small teams to work more efficiently and to focus on things like usability. AJAX technologies (PHP, Ruby on Rails, XML, CSS, JavaScript, XHTML, and sometimes Microsoft widgets) allow applications to be elegant and simple. The end result is products that do what they do really well vs. an overload of features and complexity. Zeldman concludes by saying he’s going to let all the hype over Web 2.0 pass and get on to Web 3.0. […]

      Pingback by Douglas Reynolds : Experience. Organized. » Blog Archive » Web 2.0 (web 3.0?) — July 3, 2006 @ 10:47 am

    60. […] http://evolvingtrends.wordpress.com/2006/06/26/wikipedia-30-the-end-of-google/ […]

      Pingback by Copper » I can’t wait…. — July 3, 2006 @ 12:35 pm

    61. […] Wikipedia 3.0 the end of Google? […]

      Pingback by Marcus Cake » Web 3.0, the global brain and the impact on financial markets — July 5, 2006 @ 7:25 am

    62. […] With respect to the previous entry about Google, there is also a thesis that predicts the “fall” of Google. It is based on the Semantic Web (Web 3.0) and on its power to deduce (or induce) answers, instead of simply searching for keywords. The problem (apart from the technical obstacles) is that collecting and classifying (categorizing) that information is an effort that requires thousands of people working for a long time. The post Wikipedia 3.0: The End of Google proposes that Wikipedia’s immense database together with its thousands of volunteer contributors are the answer to the problem just stated, provided they are given the right tools. […]

      Pingback by Dime Jaguar » Todo tiene una antitesis — July 6, 2006 @ 4:04 am

    63. […] Evolving Trends says that a “Wikipedia 3.0″ could make Google obsolete. […]

      Pingback by appletree » Blog Archive » Thursday Links: Bruce Smith Edition — July 6, 2006 @ 2:30 pm

    64. […] But there it is. That was then. Now, it seems, the rage is Web 3.0. It all started with this article here addressing the Semantic Web, the idea that a new organizational structure for the web ought to be based on concepts that can be interpreted. The idea is to help computers become learning machines, not just pattern matchers and calculators. […]

      Pingback by SHM Project » Blog Archive » Web 3.0 – The Tyranny of X Point Oh — July 7, 2006 @ 1:30 pm

    65. […] Despite that I had painted a picture (not so foreign to many in concept) of a future ‘intelligent collective’ (aka Global Brain [a term which had been in use in somewhat similar context for years now]) in the articles on Wikipedia 3.0 and Web 3.0, I believe that the solution to Web 2.0 is not to make the crowd more intelligent so that ‘it’ (not the people) can control ‘itself’ (and ‘us’ in the process) but to allow us to retain control over it, using true and tried structures and processes. […]

      Pingback by Evolving Trends » Microcrowds: Towards a Self-Aware, Self-Organizing Society? — July 8, 2006 @ 2:31 am

    66. […] Marc introduces the concept of Wikipedia 3.0, heralding the end of Google […]

      Pingback by Wisiwip » Blog Archive » Digg 08/07/06 — July 9, 2006 @ 7:13 am

    67. […] It doesn’t matter who thought of it first. So it’s better to put these ideas out there in the open, be them good ideas like the Wikipedia 3.0, Web 3.0, ‘Unwisdom of Crowds’ or Google GoodSense or “potentially” bad ones like the Tagging People in the Real World or the e-Society ideas. […]

      Pingback by Evolving Trends » Good and Bad Ideas — July 9, 2006 @ 3:03 pm

    68. […] web 3.0 and google. Is web 3.0 the end of google? … no, 5 reasons why not. Reason 4: google can listen to your surroundings and in 5 seconds know what program you are watching on TV! […]

      Pingback by miniPLUG » web 3.0 y google — July 9, 2006 @ 4:33 pm

    69. […] Wikipedia 3.0: The End of Google? […]

      Pingback by Evolving Trends » From Web 2.0 to Web 2.5 — July 9, 2006 @ 6:34 pm

    70. […] Wikipedia 3.0: The End of Google? […]

      Pingback by Evolving Trends » Web 2.0: Back to the “Hunter Gatherer” Society — July 9, 2006 @ 6:41 pm

    71. […] Since writing the article on Wikipedia 3.0: The End of Google? I’ve received over 65,000 page-views from people in almost every Internet-savvy population around the world, and all that traffic happened in less than two weeks, with 85% of it in the first 4 or 5 days. […]

      Pingback by Evolving Trends » Google is a Monopoly — July 10, 2006 @ 6:14 am

    72. […] Wikipedia 3.0: The End of Google? […]

      Pingback by Evolving Trends » VirAL POSts — July 12, 2006 @ 11:11 am

    73. Wikipedia 3.0: The End of Google?

      The author tells us about the “info agents” that will revolutionize the way information is searched for on the web, and the way the Wikipedia community could help speed up the implementation of the “Ultimate Answer Machine”

      Note: e…

      Trackback by tecnoticias — July 16, 2006 @ 9:24 pm

    74. […] Evolving Trends » Wikipedia 3.0: The End of Google? Caught up with Web 2.0 yet? If not, just skip over and check out Web 3.0! Evolving Trends walks through a very intellectual analysis of the next big thing. (tags: blog future search SemanticWeb wiki Wikipedia) […]

      Pingback by links for 2006-07-19 » My blog of HR, and technology stuff — July 19, 2006 @ 4:39 pm

    75. […] Wikipedia 3.0: The End of Google? […]

      Pingback by Evolving Trends » It’s Easier to Lead Than Follow — August 9, 2006 @ 11:41 am

    76. […] Wikipedia 3.0: The End of Google? […]

      Pingback by I predict … « Evolving Trends — September 3, 2006 @ 12:27 pm

    77. People actually have to take the time to write if this is going to work. And you can look at all the wikis on the net that have failed because they didn’t reach a critical mass of volunteers. People have jobs and blogs to take care of, therefore, the for-profit model is going to be the way it has to work.

      Comment by Astralis — September 13, 2006 @ 3:48 am

    78. “The emergence of a Wikipedia 3.0 (as in Web 3.0, aka Semantic Web) that is built on the Semantic Web model will herald the end of Google as the Ultimate Answer Machine. It will be replaced with “WikiMind” which will not be a mere search engine like Google is but a true Global Brain: a powerful pan-domain inference engine, with a vast set of ontologies (a la Wikipedia 3.0) covering all domains of human knowledge, that can reason and deduce answers instead of just throwing raw information at you using the outdated concept of a search engine.”

      This captures the emerging power of wikis that we are all moving to.

      Thanks!

      Comment by Howard Oliver — September 13, 2006 @ 7:54 pm

    79. […] There is no Web 3.0 page on WikiPedia (a block has in fact been set up against creating such a page), but the page on the Semantic Web may be of interest. I will also read the article Wikipedia 3.0: The End of Google? when I get a little time. […]

      Pingback by kjempekjekt.com » Blog Archive » Web 3.0 — November 15, 2006 @ 3:53 am

    80. Update:
      Companies and researchers are developing tools and processes to let domain experts with no knowledge of ontology construction build a formal ontology in a manner that is transparent to them, i.e. without them even realizing that they’re building one. Such tools/processes are emerging from research organizations and Web 3.0 ventures.

      The article “Google Co-Op: The End of Wikipedia?” is linked from this article. It provides a plausible counter argument with respect to how the Semantic Web will emerge. But it’s only one potential future scenario among many even more likely future scenarios, some of which are already taking shape.
      Marc

      Comment by evolvingtrends — November 15, 2006 @ 5:57 pm

    81. […] Go here for latest update on ontology creation […]

      Pingback by Web 3.0 Update « Evolving Trends — November 19, 2006 @ 2:21 pm

    82. […] Wikipedia 3.0: The End of Google? […]

      Pingback by Google Answers: The End of Google Co-Op? « Evolving Trends — November 30, 2006 @ 9:38 am

    83. […] Here is a direct excerpt from the translation of the original article. (I spent a lot of time trying to understand it, can you tell?) by Marc Fawzi of Evolving Trends […]

      Pingback by DxZone » DxBlog » Blog Archive » Web 3.0? — December 3, 2006 @ 4:29 pm

    84. Right now, a few months before we intended to, we have the beginning of such a semantic application in our modest martial arts wiki.

      The main intention is to discover causative and other relations between techniques in our field of expertise.

      We have only a few of these articles ready; here is one example:
      http://www.ninjutsu.co.il/wiki/index.php/Harai_goshi
      The process of describing ontological relations is painstakingly slow, involving many human hours.

      Comment by yossi — December 27, 2006 @ 11:26 am

    85. Hi Yossi,

      Good stuff.

      The creation of domain-specific ontologies (including those that enable machine reasoning about a given subject) has to be a transparent process from the domain expert’s perspective. Whether that’s done through technology or well-designed processes depends on what needs to be accomplished.

      See latest update on Web 3.0 technologies: http://evolvingtrends.wordpress.com/2006/11/19/web-30-update/

      Thanks for your comment.

      Marc

      Comment by evolvingtrends — December 28, 2006 @ 1:07 am

    86. […] by Wikipedia’s founder), future semantic version of Wikipedia (aka Wikipedia 3.0), and Google’s Pagerank algorithm to shed some light on how to design a better semantic […]

      Pingback by Designing a better semantic search engine « Evolving Trends — January 7, 2007 @ 9:04 pm

    87. […] Interestingly, my friend Marc Fawzi described exactly this idea in a piece he posted on the subject last […]

      Pingback by PCs Powered by the Wisdom of Crowds | twopointouch: web 2.0, blogs and social media — January 10, 2007 @ 3:27 pm

    88. 20 years ago or so, there was great hype about Japan’s “5th generation computer” project. The idea was to use AI tools (specifically, Prolog) to make computers understand people. The project failed with no result. Maybe Web 3.0 will be more successful, and maybe not.
      Maybe, for a start, create a standalone semantic operating system? Or will we have to access the semantic web with dumb windoze machines?

      Comment by Alexei Kaigorodov — February 3, 2007 @ 12:50 pm

    89. There is at least one Semantic Desktop project out there already.

      Marc

      Comment by evolvingtrends — February 6, 2007 @ 7:22 am

    90. Alexei has touched on the subject of needing more powerful computers and operating systems to handle the semantic explosion of synonyms and language rules. The brain is still king. Until quantum computers.

      Comment by Geoff Dodd — February 24, 2007 @ 8:15 am

    91. Much is being done to define and tackle the problems, so you should see some exciting paradigms emerge over the next few years.

      There are many ways to define the problem, so there are many ways to solve it.

      :]

      Comment by evolvingtrends — February 24, 2007 @ 8:58 am

    92. […] personas consideran que esta Web 3.0 será el fin de empresas como Google, pero otras  ya alucinan el surgimiento de una Inteligencia Artificial al estilo de Ghost in the […]

      Pingback by ¿Web 3.0 o Skynet? « El metaverso de JL — March 9, 2007 @ 6:38 pm

    93. […] « Evolving Trends Posted in Semantic web, Web 2.0 by Aman Shakya on the March 23rd, 2007 Wikipedia 3.0: The End of Google? « Evolving Trends The Semantic Web or Web 3.0 promises to “organize the world’s information” in a dramatically […]

      Pingback by Wikipedia 3.0: The End of Google? « Evolving Trends « Aman’s Blog — March 23, 2007 @ 4:52 am

    94. Google may reap most of the benefits by hosting Wikipedia.

      Comment by Nat — June 1, 2007 @ 4:23 am

    95. People are not as stupid as you may think.

      Any such decision involving collusion between a publicly created knowledge base like Wikipedia and any company like Google that may try to control it will be challenged and opposed by the majority of people.

      Marc

      Comment by evolvingtrends — July 30, 2007 @ 6:24 pm

    96. […] interesting article, written some time ago now, suggests that the Web 3.0 vision might be the way to end Google’s monopoly. It is touted as being a new way to organise […]

      Pingback by Will Google rule the Digital Realm? | Simon Emery — November 5, 2007 @ 4:20 pm

    97. […] can read the whole article here Wikipedia3.0 – The End of Google A more trimmed down version of the same article can be found here P2P ai will kill […]

      Pingback by Web 2.0 or 3.0? — November 8, 2007 @ 1:16 am

    98. […] Wikipedia 3.0: The End of Google? […]

      Pingback by Google tries again to co-opt the Wikipedia 3.0 vision « Evolving Trends — December 17, 2007 @ 4:13 pm

    99. […] The Law of The Observer… […]

      Pingback by I Observe. Therefore I Am. « Evolving Trends — December 18, 2007 @ 6:28 pm

    100. […] blog (http://evolvingtrends.wordpress.com/2006/06/26/wikipedia-30-the-end-of-google/) that was talking about Web. 3.0. The argument it was making about Wikipedia, ontological knowledge […]

      Pingback by musings on participation and digital technologies « snapshot… — January 9, 2008 @ 3:51 pm

    101. […] And here’s the Evolving Trends article that gave both Google and Wikia the impetus to get into user-enhanced (and /semantic) search: here […]

      Pingback by Hakia, Google, Wikia « Evolving Trends — January 16, 2008 @ 4:43 pm

     
