
Evolving Trends

Web 3.0

Historically speaking, the term Web 3.0 was first coined in conjunction with the Semantic Web and/or AI agents, and in conjunction with Wikipedia and/or Google, in the Wikipedia 3.0: The End of Google? article, which was published on Evolving Trends (this blog) on June 26, ‘06.

June 28, ‘06: Here’s what a fellow blogger, who had reviewed the Wikipedia 3.0 article, had to say:

“[…] But there it is. That was then. Now, it seems, the rage is Web 3.0. It all started with this article here addressing the Semantic Web, the idea that a new organizational structure for the web ought to be based on concepts that can be interpreted. The idea is to help computers become learning machines, not just pattern matchers and calculators. […]”

June 28, ‘06: A fellow blogger wrote:

“This is the first non-sarcastic reference to Web 3.0 I’ve seen in the wild”

As of Jan 25, ‘07, there are 11,000 links to Evolving Trends from blogs, forums and news sites pointing to the Wikipedia 3.0 article.

Jan 25, ‘07: A fellow blogger wrote:

“In 2004 my friend Aleem and I worked on the idea of the Semantic Web (as our senior project), and now I have been hearing news of Web 3.0. I decided to work on the idea further in 2005, and maybe we could have made a very small-scale 4th-generation search engine. Though this never became a reality, it now seems it is a hot time for putting semantics and AI into the web. Reading about Web 3.0 again thrilled me with the idea. [Wikia] has decided to jump into search engines and give Google a tough time :). So I hope maybe I get a chance to become part of this Web 3.0 and make information retrieval better.”

Alexa graph

According to Alexa, the estimated penetration of the Wikipedia 3.0: The End of Google? article peaked on June 28 at a ratio of 650 per 1,000,000 people. Based on an estimated 1,000,000,000 Web users, this means that it reached about 650,000 people on June 28, and hundreds of thousands more on June 26, 27, 29 and 30. This includes people who read the article at the roughly 6,000 sites (according to MSN) that had linked to Evolving Trends. Based on the Alexa graph, we can estimate that the article reached close to 2 million people in the first 4.5 days after its release.
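To spell out the arithmetic behind the peak-day estimate (a back-of-the-envelope check, using only the figures above):

$$\frac{650}{1{,}000{,}000} \times 1{,}000{,}000{,}000 = 650{,}000 \text{ readers on June 28}$$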

Update on Alexa Statistics (Sep. 18, 2008): some people have pointed out (independently, with respect to their own experience) that Alexa’s statistics are skewed and not very reliable. As for the direct hits to the article on this blog, they are in the 200,000 range as of this writing.


Note: the term “Web 3.0” is the dictionary word “Web” followed by the number “3”, a decimal point and the number “0”. As such, the term itself cannot and should not have any commercial significance in any context.

Update on how the Wikipedia 3.0 vision is spreading:


Update on how Google is hopelessly attempting to co-opt the Wikipedia 3.0 vision:  

Web 3D + Semantic Web + AI as Web 3.0:  

Here is the original article that gave birth to the Web 3.0 vision:

3D Web + Semantic Web + AI *

The above-mentioned Web 3D + Semantic Web + AI vision, which preceded the Wikipedia 3.0 vision, received much less attention because it was not presented in a controversial manner. This was noted as the biggest flaw of the social bookmarking site digg, which was used to promote this article.

Developers:

Feb 5, ‘07: The following external reference concerns the use of rule-based inference engines and ontologies in implementing the Semantic Web + AI vision (aka Web 3.0):

  1. Description Logic Programs: Combining Logic Programs with Description Logic (note: there are better, simpler ways of achieving the same purpose.)

Jan 7, ‘07: The following Evolving Trends post discusses the current state of semantic search engines and ways to improve their design:

  1. Designing a Better Web 3.0 Search Engine

The idea described in this article was adopted by Hakia after it was published here, so this article may be considered prior art.

June 27, ‘06: Semantic MediaWiki project, enabling the insertion of semantic annotations (or metadata) into Wikipedia content. (This project is now hosted by Wikia, Wikipedia founder Jimmy Wales’ private venture, and may benefit Wikia instead of Wikipedia, which is why I see it as a conflict of interest.)

Bloggers:

This post provides the history behind the use of the term Web 3.0 in the context of the Semantic Web and AI.

This post explains the accidental way in which this article reached 2 million people in 4 days.


Web 3.0 Articles on Evolving Trends

Noteworthy mentions of the Wikipedia 3.0 article:

Tags:

Semantic Web, Web standards, Trends, OWL, Google, inference, inference engine, AI, ontology, Web 2.0, Web 3.0, Wikipedia, Wikipedia 3.0, Wikipedia AI, P2P 3.0, P2P AI, P2P Semantic Web inference Engine, intelligent findability



Evolving Trends

July 29, 2006

Search By Meaning

I’ve been working on a pretty detailed technical scheme for a “search by meaning” search engine (as opposed to [dumb] Google-like search by keyword). Having worked through the workability challenge within my limited scope, I can see the huge problem facing Google and other Web search engines in transitioning to a “search by meaning” model.

However, I also do see the solution!

Related

  1. Wikipedia 3.0: The End of Google?
  2. P2P 3.0: The People’s Google
  3. Intelligence (Not Content) is King in Web 3.0
  4. Web 3.0 Blog Application
  5. Towards Intelligent Findability
  6. All About Web 3.0

Beats

42. Grey Cell Green

Posted by Marc Fawzi

Tags:

Semantic Web, Web standards, Trends, OWL, innovation, Startup, Evolution, Google, GData, inference, inference engine, AI, ontology, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, collective consciousness, Ontoworld, Wikipedia AI, Info Agent, Semantic MediaWiki, DBin, P2P 3.0, P2P AI, AI Matrix, P2P Semantic Web inference Engine, Global Brain, semantic blog, intelligent findability, search by meaning

5 Comments »

  1. context is a kind of meaning, innit?

    Comment by qxcvz — July 30, 2006 @ 3:24 am

  2. You’re one piece short of Lego Land.

    I have to make the trek down to San Diego and see what it’s all about.

    How do you like that for context!? 🙂

    Yesterday I got burnt real bad at Crane beach in Ipswich (not to be confused with Cisco’s IP Switch.) The water was freezing. Anyway, on the way there I was told about the one time the kids (my nieces) asked their dad (who is a Cisco engineer) why Ipswich is called Ipswich. He said he didn’t know. They said “just make up a reason!!!!!!” (since they can’t take “I don’t know” for an answer), so he said they initially wanted to call it PI (pie) but decided to switch the letters, so it became IPSWICH. The kids loved that answer and kept asking him, whenever they had their friends along on a beach trip, to explain why Ipswich is called Ipswich. I don’t get the humor. My logic circuits are not that sensitive. Somehow they see the illogic of it and they think it’s hilarious.

    Engineers and scientists tend to approach the problem through the most complex path possible because that’s dictated by the context of their thinking. Genetic algorithms could do a better job at that, yet that’s absolutely not what I’m hinting is the answer.

    The answer is a lot simpler (but the way simple answers are derived is often through deep thought that abstracts/hides all the complexity).

    I’ll stop one piece short cuz that will get people to take a shot at it and thereby create more discussion around the subject, in general, which will inevitably get more people to coalesce around the Web 3.0 idea.

    [badly sun burnt face] + ] … It’s time to experiment with a digi cam … i.e. towards a photo + audio + web 3.0 blog!

    An 8-megapixel camera phone will do just fine! (See my post on tagging people in the real world.. it is another very simple idea, but I like this one much, much better.)

    Marc

    p.s. my neurons are still in perfectly good order but I can’t wear my socks!!!

    Comment by evolvingtrends — July 30, 2006 @ 10:19 am

  3. Hey there, Marc.
    Have talked to people about the semantic web a bit more now, and will get my thoughts together on the subject before too long. The big issue, basically, is buy-in from the gazillions of content producers we have now. My impression is that big business will lead on the semantic web, because it’s more useful to them right now than to you or me as ‘opinion/journalist’ types.

    Comment by Ian — August 7, 2006 @ 5:06 pm

  4. Luckily, I’m not an opinion journalist although I could easily pass for one.

    You’ll see a lot of ‘doing’ from us now that we’re talking less 🙂

    BTW, I just started as Chief Architect with a VC-funded Silicon Valley startup, so that’s keeping me busy, but I’m recruiting developers and orchestrating a P2P 3.0 / Web 3.0 / Semantic Web (AI-enabled) open source project consistent with the vision we’ve outlined.

    :] … dzzt.

    Marc

    Comment by evolvingtrends — August 7, 2006 @ 5:10 pm

  5. Congratulations on the job, Marc. I know you’re a big thinker and I’m delighted to hear about that.

    Hope we’ll still be able to do a little “fencing” around this subject!

    Comment by Ian — August 7, 2006 @ 7:01 pm



Evolving Trends

    July 20, 2006

    Google dont like Web 3.0 [sic]

    (this post was last updated at 9:50am EST, July 24, ‘06)

    Why am I not surprised?

    Google exec challenges Berners-Lee

    The idea is that the Semantic Web will allow people to run AI-enabled P2P Search Engines that will collectively be more powerful than Google can ever be, which will relegate Google to just another source of information, especially as Wikipedia [not Google] is positioned to lead the creation of domain-specific ontologies, which are the foundation for machine-reasoning [about information] in the Semantic Web.

    Additionally, we could see content producers (including bloggers) creating informal ontologies on top of the information they produce, using a standard language like RDF. This would have the same effect with respect to P2P AI search engines, hastening Google’s anticipated slide into the commodity layer (unless, of course, they develop something like GWorld).

    In summary, any attempt to arrive at widely adopted Semantic Web standards would significantly lower the value of Google’s investment in the current non-semantic Web by commoditizing “findability.” It would allow intelligent info agents to be built that could collaborate with each other to find answers more effectively than the current version of Google, using “search by meaning” as opposed to “search by keyword,” and more cost-efficiently than any future AI-enabled version of Google, using disruptive P2P AI technology.

    For more information, see the articles below.

    Related

    1. Wikipedia 3.0: The End of Google?
    2. Wikipedia 3.0: El fin de Google (traducción)
    3. All About Web 3.0
    4. Web 3.0: Basic Concepts
    5. P2P 3.0: The People’s Google
    6. Intelligence (Not Content) is King in Web 3.0
    7. Web 3.0 Blog Application
    8. Towards Intelligent Findability
    9. Why Net Neutrality is Good for Web 3.0
    10. Semantic MediaWiki
    11. Get Your DBin

    Somewhat Related

    1. Unwisdom of Crowds
    2. Reality as a Service (RaaS): The Case for GWorld
    3. Google 2.71828: Analysis of Google’s Web 2.0 Strategy
    4. Is Google a Monopoly?
    5. Self-Aware e-Society

    Beats

    1. In the Hearts of the Wildmen

    Posted by Marc Fawzi


    Tags:

    Semantic Web, Web standards, Trends, OWL, innovation, Startup, Evolution, Google, GData, inference, inference engine, AI, ontology, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, collective consciousness, Ontoworld, Wikipedia AI, Info Agent, Semantic MediaWiki, DBin, P2P 3.0, P2P AI, AI Matrix, P2P Semantic Web inference Engine, semantic blog, intelligent findability, RDF

    Read Full Post »

    Evolving Trends

    July 19, 2006

    Towards Intelligent Findability

    (This post was last updated at 12:45pm EST, July 22, ‘06)

    By Eric Noam Rodriguez (original version in Spanish: CMS Semántico)

    Editing and Addendum by Marc Fawzi

    A lot of buzz about Web 3.0 and Wikipedia 3.0 has been generated lately by Marc Fawzi through this blog, so I’ve decided that for my first post here I’d like to dive into this idea and take a look at how to build a Semantic Content Management System (CMS). I know this blog has had more of a visionary, psychological and sociological theme (i.e., the vision for the future and the Web’s effect on society, human relationships and the individual himself), but I’d like to show the feasibility of this vision by providing some technical details.

    Objective

    We want a CMS capable of building a knowledge base (that is, a set of domain-specific ontologies) with formal deductive reasoning capabilities.

    Requirements

    1. A semantic CMS framework.
    2. An ontology API.
    3. An inference engine.
    4. A framework for building info-agents.

    HOW-TO

    The general idea would be something like this:

    1. Users use a semantic CMS like Semantic MediaWiki to enter information as well as semantic annotations (to establish semantic links between concepts in the given domain, on top of the content). This typically produces an informal ontology on top of the information, which, when combined with domain inference rules and the query structures (for the particular schema) that are implemented in an independent info agent or built into the CMS, gives us a Domain Knowledge Database. (Alternatively, we can have users enter information into a non-semantic CMS to create content based on a given doctype or content schema and then front-end it with an info agent that works with a formal ontology of the given domain. But we would then need to perform natural language processing, including the use of statistical semantic models, since we would lose the certainty normally provided by semantic annotations, which, in a semantic CMS, break the natural language in the information down into a definite semantic structure.)
    2. Another set of info agents adds inference-based query services to our knowledge base, for information on the Web or in other domain-specific databases. User-entered information plus information obtained from the Web makes up our Global Knowledge Database.
    3. We provide a Web-based interface for querying the inference engine.

    Each doctype or schema (depending on the CMS of your choice) will have a more or less direct correspondence with our ontologies (i.e., one schema or doctype maps to one ontology). The sum of all the content of a particular schema makes up a knowledge domain which, when transformed into a semantic language (RDF or, more specifically, OWL) and combined with the domain inference rules and the query structures for the particular schema, constitutes our knowledge database. The choice of CMS is not relevant as long as you can query its contents while being able to define schemas. What is important is the need for an API to access the ontology. Luckily, projects like JENA fill this void perfectly, providing both an RDF and an OWL API for Java.
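    To make this concrete, here is a minimal, hypothetical sketch of that setup in Java using the JENA API (package names are from the current Apache Jena distribution; the 2006-era releases used the com.hp.hpl.jena prefix instead). The file names are illustrative assumptions, not part of any real project.

```java
// Sketch: one knowledge model per doctype/schema, mirroring the
// one-schema-to-one-ontology mapping described above.
import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.rdf.model.ModelFactory;

public class DomainKnowledgeBase {
    public static void main(String[] args) {
        // An in-memory OWL-DL model for a single knowledge domain.
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_DL_MEM);
        model.read("file:restaurants.owl");      // hypothetical domain ontology
        model.read("file:restaurants-data.rdf"); // hypothetical CMS-exported content
        System.out.println("Loaded " + model.size() + " RDF statements.");
    }
}
```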

    In addition, we may want an agent to add to or complete our knowledge base using available Web Services (WS). I’ll assume you’re familiar with WS, so I won’t go into details.

    Now, the inference engine would seem like a very hard part. It is, but not for lack of existing technology: the W3C already has a recommended language for querying RDF (viz., a semantic query language) known as SPARQL (http://www.w3.org/TR/rdf-sparql-query/), and JENA already has a SPARQL query engine.
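    As a hedged sketch of what step 3 (the Web-based query interface) could sit on top of, here is a SPARQL SELECT run through JENA’s query engine. The food: namespace and the ItalianRestaurant class are invented stand-ins for whatever a real domain ontology would define.

```java
// Sketch: answering a domain query against the knowledge model via SPARQL.
import org.apache.jena.query.Query;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;

public class MeaningQuery {
    public static void listItalianRestaurants(Model model) {
        String sparql =
            "PREFIX food: <http://example.org/food#> " +   // hypothetical namespace
            "SELECT ?r WHERE { ?r a food:ItalianRestaurant }";
        Query query = QueryFactory.create(sparql);
        try (QueryExecution qe = QueryExecutionFactory.create(query, model)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                System.out.println(results.next().get("r")); // each matching resource
            }
        }
    }
}
```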

    The difficulty lies in the construction of the ontologies, which would have to be formal (i.e., consistent, complete, and thoroughly studied by experts in each knowledge domain) in order to obtain powerful deductive capabilities (i.e., reasoning).

    Conclusion

    We already have technology powerful enough to build projects such as this: solid CMSs, standards such as RDF, OWL and SPARQL, as well as a stable framework for using them, such as JENA. There are also many frameworks for building info agents, but you don’t necessarily need a specialized framework; a general software framework like J2EE is good enough for the tasks described in this post.

    All we need to move forward with delivering on the Web 3.0 vision (see 1, 2, 3) is the will of the people and your imagination.

    Addendum

    In the diagram below, the domain-specific ontologies (OWL 1 … N) could all be built by Wikipedia (see Wikipedia 3.0), since they already have the largest online database of human knowledge and, among their volunteers, the domain experts to build the ontologies for each domain of human knowledge. One possible path is for Wikipedia to build informal ontologies using Semantic MediaWiki (as Ontoworld is doing for the Semantic Web domain of knowledge), but Wikipedia may wish to wait until it has the ability to build formal ontologies, which would enable more powerful machine-reasoning capabilities.

    [Note: The ontologies simply allow machines to reason about information. They are not information but meta-information. They have to be formally consistent and complete for best results as far as machine-based reasoning is concerned.]

    However, individuals, teams, organizations and corporations do not have to wait for Wikipedia to build the ontologies. They can start building their own domain-specific ontologies (for their own domains of knowledge) and use Google, Wikipedia, MySpace, etc. as sources of information. But as stated in my latest edit to Eric’s post, we would then have to use natural language processing, including statistical semantic models, as the information won’t be pre-semanticized (or semantically annotated), which makes the task more difficult (for us and for the machine …)

    What was envisioned in the Wikipedia 3.0: The End of Google? article was that, since Wikipedia has the volunteer resources and the world’s largest database of human knowledge, it will be in the powerful position of being the developer and maintainer of the ontologies (including the semantic annotations/statements embedded in each page), which will become the foundation for intelligence (and “Intelligent Findability”) in Web 3.0.

    This vision is also compatible with the vision for P2P AI (or P2P 3.0), where people will run P2P inference engines on their PCs that communicate and collaborate with each other and that tap into information from Google, Wikipedia, etc., which will ultimately push Google and central search engines down to the commodity layer (eventually making them a utility business, just like ISPs.)

    Diagram

    Related

    1. Wikipedia 3.0: The End of Google? June 26, 2006
    2. Wikipedia 3.0: El fin de Google (traducción) July 12, 2006
    3. Web 3.0: Basic Concepts June 30, 2006
    4. P2P 3.0: The People’s Google July 11, 2006
    5. Why Net Neutrality is Good for Web 3.0 July 15, 2006
    6. Intelligence (Not Content) is King in Web 3.0 July 17, 2006
    7. Web 3.0 Blog Application July 18, 2006
    8. Semantic MediaWiki July 12, 2006
    9. Get Your DBin July 12, 2006


    Tags:

    Semantic Web, Web standards, Trends, OWL, innovation, Startup, Google, GData, inference engine, AI, ontology, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, Ontoworld, Wikipedia AI, Info Agent, Semantic MediaWiki, DBin, P2P 3.0, P2P AI, AI Matrix, P2P Semantic Web inference Engine, semantic blog, intelligent findability, JENA, SPARQL, RDF

    Article

    Wikipedia 3.0: The End of Google?

    The Semantic Web (or Web 3.0) promises to “organize the world’s information” in a dramatically more logical way than Google can ever achieve with their current engine design. This is especially true from the point of view of machine comprehension as opposed to human comprehension. The Semantic Web requires the use of a declarative ontological language like OWL to produce domain-specific ontologies that machines can use to reason about information and make new conclusions, not simply match keywords.

    However, the Semantic Web, which is still in a development phase where researchers are trying to define the best and most usable design models, would require the participation of thousands of knowledgeable people over time to produce those domain-specific ontologies necessary for its functioning.

    Machines (or machine-based reasoning, aka AI software or ‘info agents’) would then be able to use those laboriously (but not entirely manually) constructed ontologies to build a view (or formal model) of how the individual terms within the information relate to each other. Those relationships can be thought of as the axioms (basic assumptions), which, together with the rules governing the inference process, both enable and constrain the interpretation (and well-formed use) of those terms by the info agents, so that they can reason new conclusions based on existing information, i.e., think. In other words, theorems (formal deductive propositions that are provable based on the axioms and the rules of inference) may be generated by the software, thus allowing formal deductive reasoning at the machine level. And given that an ontology, as described here, is a statement of Logic Theory, two or more independent info agents processing the same domain-specific ontology will be able to collaborate and deduce an answer to a query, without being driven by the same software.

    Thus, and as stated, in the Semantic Web individual machine-based agents (or a collaborating group of agents) will be able to understand and use information by translating concepts and deducing new information rather than just matching keywords.

    Once machines can understand and use information, using a standard ontology language, the world will never be the same. It will be possible to have an info agent (or many info agents) among your virtual, AI-enhanced workforce, each having access to a different domain-specific comprehension space, and all communicating with each other to build a collective consciousness.

    You’ll be able to ask your info agent or agents to find you the nearest restaurant that serves Italian cuisine, even if the restaurant nearest you advertises itself as a Pizza joint as opposed to an Italian restaurant. But that is just a very simple example of the deductive reasoning machines will be able to perform on information they have.
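    To ground the restaurant example, here is a hedged, minimal sketch of that deduction in Java using the JENA API and its built-in rule reasoner (the namespace, class and individual names are invented for illustration). Only two facts are asserted: that PizzaJoint is a subclass of ItalianRestaurant, and that :marios is a PizzaJoint; the class membership printed at the end is deduced, never stated.

```java
// Sketch: a place advertised only as a pizza joint is deduced to be an Italian restaurant.
import org.apache.jena.ontology.Individual;
import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.rdf.model.ModelFactory;

public class PizzaDeduction {
    public static void main(String[] args) {
        String ns = "http://example.org/food#"; // hypothetical namespace
        // An ontology model backed by a simple built-in rule reasoner.
        OntModel m = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_MICRO_RULE_INF);
        OntClass italian = m.createClass(ns + "ItalianRestaurant");
        OntClass pizzaJoint = m.createClass(ns + "PizzaJoint");
        pizzaJoint.addSuperClass(italian);               // axiom: every pizza joint serves Italian cuisine
        Individual marios = pizzaJoint.createIndividual(ns + "marios"); // asserted only as a PizzaJoint
        System.out.println(marios.hasOntClass(italian)); // deduced, not asserted: prints "true"
    }
}
```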

    Far more awesome implications can be seen when you consider that every area of human knowledge will be automatically within the comprehension space of your info agents. That is because each info agent can communicate with other info agents who are specialized in different domains of knowledge to produce a collective consciousness (using the Borg metaphor) that encompasses all human knowledge. The collective “mind” of those agents-as-the-Borg will be the Ultimate Answer Machine, easily displacing Google from this position, which it does not truly fulfill.

    The problem with the Semantic Web, besides the fact that researchers are still debating which design and implementation of the ontology language model (and associated technologies) is the best and most usable, is that it would take thousands or tens of thousands of knowledgeable people many years to boil down human knowledge to domain-specific ontologies.

    However, if we were at some point to take the Wikipedia community and give them the right tools and standards to work with (whether existing or to be developed in the future), which would make it possible for reasonably skilled individuals to help reduce human knowledge to domain-specific ontologies, then that time can be shortened to just a few years, and possibly to as little as two years.

    The emergence of a Wikipedia 3.0 (as in Web 3.0, aka Semantic Web) that is built on the Semantic Web model will herald the end of Google as the Ultimate Answer Machine. It will be replaced with “WikiMind,” which will not be a mere search engine like Google is, but a true Global Brain: a powerful pan-domain inference engine, with a vast set of ontologies (a la Wikipedia 3.0) covering all domains of human knowledge, that can reason and deduce answers instead of just throwing raw information at you using the outdated concept of a search engine.

    Notes

    After writing the original post I found out that the Wikipedia application, also known as MediaWiki and not to be confused with Wikipedia.org, has already been used to implement ontologies. The name they’ve chosen is Ontoworld. I think WikiMind or WikiBorg would have been a cooler name, but I like Ontoworld, too, as in “and it descended onto the world,” since that may be a reference to the global mind that a Semantic-Web-enabled Ontoworld would lead to.

    In just a few years Google’s search engine technology, which provides almost all of their revenue, could be made obsolete… That is unless they have a deal with Ontoworld where they will tap into their database of ontologies and add an inference engine capability to Google search.

    But so can Ask.com and MSN and Yahoo.

    I would really love to see more competition in this arena, not to see Google or any one company establish a huge lead over others.

    The question, to rephrase it in Churchillian terms, is whether the combination of the Semantic Web and Wikipedia signals the beginning of the end for Google or the end of the beginning. Obviously, with tens of billions of dollars of investors’ money at stake, I would think that it is the latter. No one wants to see Google fail. There’s too much vested interest. However, I do want to see somebody out-maneuver them (which, in my opinion, can be done.)

    Clarification

    Please note that Ontoworld, which currently implements the ontologies, is based on the “Wikipedia” application (also known as MediaWiki), but it is not the same as Wikipedia.org.

    Likewise, I expect Wikipedia.org will use their volunteer workforce to reduce the sum of human knowledge that has been entered into their database to domain-specific ontologies for the Semantic Web (aka Web 3.0). Hence, “Wikipedia 3.0.”

    Response to Readers’ Comments

    The argument I’ve made here is that Wikipedia has the volunteer resources to produce the needed Semantic Web ontologies for the domains of knowledge that it currently covers, while Google does not have those volunteer resources, which will make it reliant on Wikipedia.

    Those ontologies, together with all the information on the Web, can be accessed by Google and others, but Wikipedia will be in charge of the ontologies for the large set of knowledge domains they currently cover, and that is where I see the power shift.

    Google and other companies do not have the resources in manpower (i.e., the thousands of volunteers Wikipedia has) to help create those ontologies for the large set of knowledge domains that Wikipedia covers. Wikipedia does, and is positioned to do that better and more effectively than anyone else. It’s hard to see how Google would be able to create the ontologies for all domains of human knowledge (which are continuously growing in size and number) given how much work that would require. Wikipedia can cover more ground faster with their massive, dedicated force of knowledgeable volunteers.

    I believe that the party that will control the creation of the ontologies (i.e. Wikipedia) for the largest number of domains of human knowledge, and not the organization that simply accesses those ontologies (i.e. Google), will have a competitive advantage.

    There are many knowledge domains that Wikipedia does not cover. Google will have the edge there, but only if the people and organizations that produce the information also produce the ontologies on their own, so that Google can access them from its future Semantic Web engine. My belief is that this will happen, but very slowly, and that Wikipedia can have the ontologies done for all the domains of knowledge that it currently covers much faster, and then it would have leverage from being in charge of those ontologies (aka the basic layer for AI enablement.)

    It still remains unclear, of course, whether the combination of Wikipedia and the Semantic Web heralds the beginning of the end for Google or the end of the beginning. As I said in the original part of the post, I believe that it is the latter, and the question I pose in the title of this post, in this context, is no more than rhetorical. However, I could be wrong in my judgment, and Google could fall behind Wikipedia as the world’s ultimate answer machine.

    After all, Wikipedia makes “us” count. Google doesn’t. Wikipedia derives its power from “us.” Google derives its power from its technology and inflated stock price. Who would you count on to change the world?

    Response to Basic Questions Raised by the Readers

    Reader divotdave asked a few questions which I thought were very basic in nature (i.e., important.) I believe more people will be pondering the same issues, so I’m including them here with my replies.

    Question:
    How does it distinguish between good information and bad? How does it determine which parts of the sum of human knowledge to accept and which to reject?

    Reply:
    It wouldn’t have to distinguish between good and bad information (not to be confused with well-formed vs. badly formed) if it used a reliable source of information (with associated, reliable ontologies.) That is, if the information or knowledge to be sought can be derived from Wikipedia 3.0, then it assumes that the information is reliable.

    However, with respect to connecting the dots when returning information or deducing answers from the sea of information that lies beyond Wikipedia, your question becomes very relevant. How would it distinguish good information from bad so that it can produce good knowledge (aka comprehended information, aka new information produced through deductive reasoning based on existing information)?

    Question:
    Who, or what as the case may be, will determine what information is irrelevant to me as the inquiring end user?

    Reply:
    That is a good question, and one that will have to be answered by the researchers working on AI engines for Web 3.0.

    There will be assumptions made as to what you are inquiring about. Just as I had to make an assumption about what you really meant to ask me when I saw your question, AI engines would have to make assumptions, based on pretty much the same cognitive process humans use, which is the topic of a separate post, and one that has been covered by many AI researchers.

    Question:
    Is this to say that ultimately some over-arching standard will emerge that all humanity will be forced (by lack of alternative information) to conform to?

    Reply:
    There is no need for one standard, except when it comes to the language the ontologies are written in (e.g., OWL, OWL-DL, OWL Full, etc.) Semantic Web researchers are trying to determine the best and most usable choice, taking into consideration human and machine performance in constructing (and, exclusively in the latter case, interpreting) those ontologies.

    Two or more info agents working with the same domain-specific ontology but having different software (different AI engines) can collaborate with each other.

    The only standard required is that of the ontology language and associated production tools.

    Addendum

    On AI and Natural Language Processing

    I believe that the first generation of AI that will be used by Web 3.0 (aka Semantic Web) will be based on relatively simple inference engines (employing both algorithmic and heuristic approaches) that will not attempt to perform natural language processing. However, they will still have the formal deductive reasoning capabilities described earlier in this article.

    On the Debate about the Nature and Definition of AI

    The embedding of AI into cyberspace will be done at first with relatively simple inference engines (that use algorithms and heuristics) that work collaboratively in P2P fashion and use standardized ontologies. The massively parallel interactions between the hundreds of millions of AI Agents that will run within the millions of P2P AI Engines on users’ PCs will give rise to the very complex behavior that is the future global brain.

    Related:

    1. Web 3.0 Update
    2. All About Web 3.0 <– list of all Web 3.0 articles on this site
    3. P2P 3.0: The People’s Google
    4. Reality as a Service (RaaS): The Case for GWorld <– 3D Web + Semantic Web + AI
    5. For Great Justice, Take Off Every Digg
    6. Google vs Web 3.0

    Posted by Marc Fawzi



    Update on how the Web 3.0 vision is spreading:




    Update on how Google is co-opting the Wikipedia 3.0 vision:




    Web 3D fans:

    Here is the original Web 3D + Semantic Web + AI article:

    Web 3D + Semantic Web + AI *

    The above-mentioned Web 3D + Semantic Web + AI vision, which preceded the Wikipedia 3.0 vision, received much less attention because it was not presented in a controversial manner. This was noted as the biggest flaw of the social bookmarking site digg, which was used to promote this article.

    Developers:

    Feb 5, ‘07: The following external reference concerns the use of rule-based inference engines and ontologies in implementing the Semantic Web + AI vision (aka Web 3.0):

    1. Description Logic Programs: Combining Logic Programs with Description Logic (note: there are better, simpler ways of achieving the same purpose.)

    Jan 7, ‘07: The following Evolving Trends post discusses the current state of semantic search engines and ways to improve their design:

    1. Designing a Better Web 3.0 Search Engine

    The idea described in this article was adopted by Hakia after it was published here, so this article may be considered prior art.

    June 27, ‘06: Semantic MediaWiki project

    1. http://wiki.ontoworld.org/wiki/Semantic_MediaWiki

    Bloggers:

    This post provides the history behind the use of the term Web 3.0 in the context of the Semantic Web and AI.

    This post explains the accidental way in which this article reached 2 million people in 4 days.

    Futurists:

    For a broader context, please see this post.


    Tags:

    Semantic Web, Web standards, Trends, OWL, innovation, Startup, Evolution, Google, GData, inference, inference engine, AI, ontology, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, collective consciousness, Ontoworld, Wikipedia AI

    101 Comments »

    1. […] La verdad es que cuando leí el título de este artículo me llamó la atención aunque fui un poco escéptico y decidí echarle un vistazo. No tengo mucha idea de Inglés pero lo que pude entender y los enlaces a los que me llevó me ayudó a entender la idea del proyecto. […]

      Pingback by Semantic Wiki, ¿Google en peligro? » aNieto2K — June 26, 2006 @ 8:06 am

    2. English Translation of the above linked comment/trackback (courtesy of Google):

      The truth is that when I read the title of this article it called my attention to it although I was a little skeptical and I decided to have a look at it. I do not have much idea of English but what I could understand and the connections to which it took me helped to understand the idea of the project.

      Comment by evolvingtrends — June 26, 2006 @ 8:15 am

    3. […] I wouldn’t be surprised if the strategy would shift from talking to a call center agent to searching an online FAQ created by a “semantic web,” possibly even utilizing voice-recognition software coupled with text-to-voice technologies. It may sound like science fiction, but so was video-conferencing a couple of years ago. […]

      Pingback by Techno Pinoy » Archives » BPO and Call Centers — June 26, 2006 @ 10:20 am

    4. Whatever happens, you can bet Google could easily buy the technology in Wikipedia to extend its reach beyond its current search engine-ishness.

      Jim Sym

      Comment by magicmarketing — June 26, 2006 @ 12:30 pm

    5. Hmmmmmmmmm.

      I think I will think on this for a few days.

      Comment by farlane — June 26, 2006 @ 12:45 pm

    6. Jim,

      It’s not the technology. It’s the thousands of knowledgeable people that have to work for years to boil down human knowledge to a set of ontologies.

      I guess Ontoworld would open up to companies like Google and let them plug their inference engines into it.

      Marc

      Comment by evolvingtrends — June 26, 2006 @ 1:35 pm

    7. […] Will we ever ask Google a question again?read more | digg story […]

      Pingback by Stroke My Blog… » Wikipedia 3.0: The End of Google? — June 26, 2006 @ 1:37 pm

    8. My first thought on this was that “the semantic web needs robots” (in order to be created) and that I’m not sure if the AI described is ready yet. We have companies like Semantica which enable us to create small-scale semantic webs and networks and knowledge-management platforms, but it still requires a great deal of manual labor to input the ontological terms properly. Ontoworld would do a lot of that, yes, but tags are still tags. You can manipulate them and draw patterns, to a point. Machines still need to process it effectively, efficiently, and then communicate what they have made to us, the humans. Are we there yet?

      Comment by Sam Jackson — June 26, 2006 @ 1:49 pm

    9. […] Read the full article here. […]

      Pingback by Herold Internet Marketing & Consulting » Blog Archive » Will Wikipedia 3.0 the new Google? » Will Wikipedia 3.0 the new Google?- San Francisco California — June 26, 2006 @ 1:50 pm

    10. […] Evolving Trends » Wikipedia’s OntoWorld: The End of Google?   […]

      Pingback by The Geek Gene » Wikipedia’s OntoWorld: The End of Google? — June 26, 2006 @ 1:54 pm

    11. Google will figure out a way to start their own form of a similar system. They may have already.. who knows.. maybe it’s in the testing phase, and when it’s ready, it may simply be turned on like a light switch… ?

      Comment by Eugene F — June 26, 2006 @ 1:56 pm

    12. Sam,

      I’ve updated the premise of my argument to clarify that the right tools and standards have to be ready first, but the Ontoworld project is already in progress… Technology evolves based on our needs, so we have to take those early awkward steps in order to get there.


      “However, if we were at some point to take the Wikipedia community and give them the right tools and standards (whether existing or to be developed in the future) to work with, which would make it possible for reasonably skilled individuals to help reduce human knowledge to domain-specific ontologies, then that time can be shortened to just a few years, and possibly to as little as two years.”

      Marc

      Comment by evolvingtrends — June 26, 2006 @ 1:59 pm

    13. […] read more | digg story […]

      Pingback by Tech Meat » Wikipedia 3.0: The End of Google? — June 26, 2006 @ 2:08 pm

    14. Eugene,

      Wikipedia has a cult of users who number in the thousands, who are knowledgeable in their own domains, and who have proved they can do decent-quality work. They would be needed to create the ontologies. It’s no small job. Google would have to go out and hire the world? Wikipedia has the educated, knowledgeable resources needed for the job, and all they need is better, more user-friendly tools (automation, IDE, etc.) and more usable standards.

      It’s not there yet..

      Again, it’s about the workforce not the technology. Google just doesn’t have enough people to do it.

      Marc

      Comment by evolvingtrends — June 26, 2006 @ 2:13 pm

    15. I can’t read an article like this without remembering Clay Shirky’s article on “The Semantic Web, Syllogism, and Worldview” [1]. I remain a skeptic on the Semantic Web, just as I remain a skeptic on AI. I’ll believe it when I see it.

      [1] http://www.shirky.com/writings/semantic_syllogism.html

      Comment by Karl G — June 26, 2006 @ 2:15 pm

    16. I know not with what weapons Web 3.0 will be fought, but Web 4.0 will be fought with sticks and stones.

      Comment by Albert Einstein — June 26, 2006 @ 2:16 pm

    17. […] Uhhhh, read this article: http://evolvingtrends.wordpress.com/2006/06/26/wikipedia-30-the-end-of-google/ . At the beginning of this year Newsweek ran about 4 pages on the semantic web.. it’s the most revolutionary thing since the monitor! […] [translated from the Portuguese]

      Pingback by Batutinhas » Semantic Web — June 26, 2006 @ 3:36 pm

    18. this will happen. except it will happen by google doing it. that’s the future of google.. the ability to ask it a question in plain english and get back an intelligible answer. i hate to break it, but this won’t kill google. it will seal the deal for google as king.

      Comment by nick podges — June 26, 2006 @ 3:50 pm

    19. Sounds like Hal might come back from the dead!

      Comment by elgoodpaco — June 26, 2006 @ 3:59 pm

    20. The AI community (no dis) once felt this problem was easy to solve. Mere ontology is not enough, nor is a large workforce. When it comes to natural language, disambiguation is a major problem, and that requires a patterned database. I believe we could get off to a good start, but without some new concepts – or the location of some old, neglected ones – it’ll never be satisfying to use. I don’t think Google’s worried; they may even kick in a few megabucks.

      Comment by Tony — June 26, 2006 @ 4:06 pm

    21. Tony,

      I was not saying anything about natural language queries, though it may have looked like it since I did not specify the query mechanism, for the sake of brevity and keeping it within grasp.

      No need to go into natural language.

      Ontologies + Inference Engines + Query Logic = Ultimate Answer Machine (for the next 5-8 years.. after that the definition of ultimate may include natural language.)

      Marc

      Comment by evolvingtrends — June 26, 2006 @ 4:13 pm

    22. […] read more | digg story […]

      Pingback by Motion Personified » Blog Archive » Wikipedia 3.0: The End of Google? — June 26, 2006 @ 4:55 pm

    23. Folks,

      Clarification:

      It’s not Wikipedia itself that is operating Ontoworld. They are simply using the Wikipedia application, also known as MediaWiki. But I envision “Wikipedia 3.0” to follow from Ontoworld.

      Marc

      Comment by evolvingtrends — June 26, 2006 @ 4:59 pm

    24. As good as Google is, something better will come. All dynasties must come to an end.

      Comment by realestateceo — June 26, 2006 @ 5:44 pm

    25. I’m a huge fan of both Wikipedia and Google.
      I’m just really interested to see how this all develops!

      Great article

      Comment by web design uk — June 26, 2006 @ 5:52 pm

    26. Sounds like a fantastic concept, but I can see, as you say, “there is too much vested interest” in the Google-type search engine. The idea of such massive change is too much to come in smoothly; it’s like moving from fossil fuels to solar. Though it would be for our collective best, most of us who have the money (Western world) would be somewhat inconvenienced, and the big players would be greatly inconvenienced. So who will invest so massively if the return will be years away? How do you pay several thousand people if there is no foreseeable end in sight? You would need the support of a government or something.

      Comment by rockwatching — June 26, 2006 @ 6:32 pm

    27. Interesting theory. I have found that when I’m looking for facts, the wikipedia article is in the top search results. Google gets me there quickly. Not sure if wikipedia could ever kill Google entirely, though.

      Comment by Sarah — June 26, 2006 @ 7:00 pm

    28. If anything, this will only enable Google to do their job more efficiently. It may slightly alter the way they accomplish the end goal, but it will help them much more than hinder them. However, several good points are made here. My personal opinion is that click fraud and similar problems pose much more of a threat to Google’s revenue model than a better organization of the world’s largest form of media.

      Comment by Donnie — June 26, 2006 @ 8:04 pm

    29. Google has been the reference for the last few years. However, the development of Web 2.0 technologies has somewhat shifted the focus away from Google.

      Wikipedia and its technologies are definitely a force to be reckoned with in web tech. I find myself using Wikipedia first for specific knowledge based queries, validating them through other sources. When I explore the blogosphere, I don’t use Google at all.

      Comment by range — June 26, 2006 @ 8:32 pm

    30. Interesting article!

      Comment by drhaisook — June 26, 2006 @ 8:43 pm

    31. This article has been noted by the Antiwikipedia (http://www.antiwikipedia.com). We will now integrate ontologies into our wisdombase.

      Comment by Antiwikipedia — June 26, 2006 @ 9:17 pm

    32. Alright! Good for you, Antiwikipedia!

      🙂

      Marc

      Comment by evolvingtrends — June 26, 2006 @ 9:21 pm

    33. http://www.goertzel.org/papers/webmind.html

      Comment by Not Webmind — June 26, 2006 @ 10:09 pm

    34. Each human being is unique. Each of us has a unique genetic makeup, unique values, and unique circumstances. There is no universally applicable algorithm that applies to something as individualized as the pursuit of enlightenment. I cannot imagine such a highly structured project that does not disenfranchise the majority of people in one way or another; the more pervasive the attempted structuring, the more universal and profound the disenfranchisement will surely become. Eventually we will all be fighting with each other, dining on dog food, and living in mud huts, while we pound away on our keyboards, or whatever.

      Comment by thomas — June 26, 2006 @ 11:18 pm

    35. Ian:

      Thank you for bringing this forward. That’s encompassed by what I was implying regarding giving the ontology makers the right tools.

      Update:

      However, when I said thousands of people are needed to produce the ontologies, I did not mean to say that they would produce them manually. Yet my assessment is that, given the vastness of human knowledge, which for the most part exists in plain form, we would need thousands of people working over time, with automation tools and whatever other tools are available (now or in the future), to produce the ontologies (not manually, but not without human intervention either.) The tools should make the job faster and more realistic, but it would still take a lot of time and many people.

      Think about how long it took to build Wikipedia’s content: many years and thousands of people. How can the conversion process from Wikipedia’s current format to ontologies (even with next year’s tools) take much less than a few years and thousands of people? (I said two years, optimistically.) The conversion from Wikipedia to formal (computationally complete and decidable) ontologies cannot be entirely automated, at least not yet.

      I’ll look further into it and try to get a better estimate for the cost in time and labor, but has there been any final standard? Are we going to be using OWL, OWL-DL or OWL Full, or somewhere in between OWL and OWL-DL?

      Marc

      Comment by evolvingtrends — June 26, 2006 @ 11:37 pm

    36. Nicely written, but there are a few gaps I take issue with. First, the concept of the Semantic Web predates the concept of Web 2.0, so it’s a little disingenuous to call it Web 3.0. It’s been struggling for many years while making very little progress that I’ve seen. The Wikipedia analogy is an interesting one. Wikipedia achieves some very significant things through the tireless efforts of thousands: a lot of successes and some very significant shortcomings. And that’s one website. By extension, the Semantic Web requires the participation of practically everybody to work.

      It’s a good theory and a noble goal with some rather serious hurdles to overcome. Not least of which is the requirement to engage so many people when so many people are so very lazy. And people are not inclined to agree while the Semantic Web requires significant (if not universal) agreement to work. Look to Wikipedia talk pages to see what happens when people disagree. I think the focus on some huge, utopian and probably unachievable goal is likely to be nothing more than a distraction from what the next big thing will really be.

      And I’m not about to let you in on that secret. Not before the IPO anyway.

      Comment by Mr Angry — June 27, 2006 @ 1:06 am

    37. Web 1.5?

      I just doubled it then for double the fun.

      If it hasn’t happened yet then it must come after 2.0, which is happening now. Thus, it must be “3.0” 🙂
      I didn’t know the IPO market in Australia was going strong! Take us there with you!

      The bet your company made on your product makes you financially biased against the Wikipedia 3.0 vision, as described here. 🙂

      Marc

      Comment by evolvingtrends — June 27, 2006 @ 1:16 am

    38. Marc,

      I harbor a serious doubt that people will embrace query logic. I agree that application of ontology will eventually lead to better resolution in database searches. But natural language is what Google gets from *most* people, who are and will remain disinclined to learn more “query logic”.

      More problematic, mere ontology will remain unable to resolve the infinite array of conceptual ambiguities and relationships that the human brain is patterned by experience to resolve almost effortlessly. So -short of a new heuristic algorithm- while it’d be great to see results *today*, I just don’t see ontology leading to such superior results that people will abandon Google. But I’d enjoy being wrong.

      Comment by Tony — June 27, 2006 @ 4:41 am

    39. Tony,

      If you call Google’s boolean search query a “natural language query,” then I call my writing Shakespearean. Obviously, neither comes anywhere within 200,000 miles of its claim. But I’d like to think I’m much closer to mine than Google is to being a natural language query system. 😉

      It’s not mere ontology. The inference rules can be intelligent, and that’s where ‘improved’ heuristics can be relied on to “kind of, sort of” mimic the human brain until we fully get there. We have to take those early awkward steps or the future will not arrive, not even in an unevenly distributed way. 🙂

      Marc

      Comment by evolvingtrends — June 27, 2006 @ 5:13 am

    40. I’m struggling to see how the likes of Google will fall from grace because of standard XML templating. There will still be information to be searched, and indeed machines to be interfaced with, but how different is this from how we work with Google today? Google can simply integrate web semantics into their search; if they do it better and first, they will continue to dominate the search market. Perhaps only a large organisation such as Google can succeed with this technology since, as Mr Angry argues, you have to get everyone to work together with standards. Hell, could this be the way that Microsoft or AOL topple Google’s search dominance?

      It is, though, unlikely that Wikipedia or a similar voluntary organisation would produce a system that could topple Google.

      Comment by smiggs — June 27, 2006 @ 2:18 pm

    41. smiggs,

      It’s not about the next generation search engine (in this case the inference engine and the whole AI enablement in general) or the tools to produce the ontologies.

      The argument I’ve made here is that Wikipedia has the volunteer resources to produce the needed Semantic Web ontologies for the domains of knowledge that it currently covers, while Google does not have those volunteer resources, which will make it reliant on Wikipedia.

      Those ontologies, together with all the information on the Web, can be accessed by Google and others, but Wikipedia will be in charge of the ontologies for the large set of knowledge domains they currently cover, and that is where I see the power shift.

      Google and other companies do not have the resources in manpower (i.e., the thousands of volunteers Wikipedia has) to help create those ontologies for the large set of knowledge domains that Wikipedia covers. Wikipedia does, and is positioned to do that better and more effectively than anyone else. It’s hard to see how Google would be able to create the ontologies for all domains of human knowledge (which are continuously growing in size and number) given how much work that would require. Wikipedia can cover more ground faster with their massive, dedicated force of knowledgeable volunteers.

      I believe that the party that will control the creation of the ontologies (i.e. Wikipedia) for the largest number of domains of human knowledge, and not the organization that simply accesses those ontologies (i.e. Google), will have a competitive advantage.

      There are many knowledge domains that Wikipedia does not cover. Google will have the edge there, but only if the people and organizations that produce the information also produce the ontologies on their own, so that Google can access them from its future Semantic Web engine. My belief is that this will happen, but very slowly, and that Wikipedia can have the ontologies done for all the domains of knowledge that it currently covers much faster, and then it would have leverage from being in charge of those ontologies (aka the basic layer for AI enablement.)

      It still remains unclear, of course, whether the combination of Wikipedia and the Semantic Web heralds the beginning of the end for Google or the end of the beginning. As I said in the original part of the post, I believe that it is the latter, and the question I pose in the title of this post, in this context, is no more than rhetorical. However, I could be wrong in my judgment, and Google could fall behind Wikipedia as the world’s ultimate answer machine.

      After all, Wikipedia makes “us” count. Google doesn’t. Wikipedia derives its power from “us.” Google derives its power from its technology and inflated stock price. Who would you count on to change the world?

      Marc

      Comment by evolvingtrends — June 27, 2006 @ 7:26 pm

    42. Ultimately I don’t think Google will see Wikipedia as a threat as such. They are already a highly trusted source for Google. As more ontologies are created in fields other than those covered by Wikipedia, or indeed as more granular and detailed sources for Wikipedia itself, Wikipedia will become one of very many trusted sources, the data from which will be aggregated and delivered by Google. Surely Google actually welcomes more well-crafted, semantically correct sources over the grey goo of today’s generally poorly structured, non-semantic web sites?

      The real challenge is going to lie in being able to create a plural Internet where there are hundreds or thousands of trusted RDF aggregates or similar, of which Wikipedia will be only one voice. There is also a presumption that we must somehow manage to make sense of the grey goo we have (and we are still producing more daily.) At this point in time the water / grey goo is still pouring over the dam at a considerable rate. The worst possible scenario is that we must start again, learning from the mistakes of the pioneer days.

      The current google interface is far from a natural language processor, but who would bet against them getting that right if they were first given a higher quality input?

      Comment by Rob Kirton — June 28, 2006 @ 8:59 am

    43. I personally like having Google as a market leader. Yahoo has become a bunch of greedy thugs – their apps invariably have ads on the left, the right, above, and below, all blinking and moving, and $199 a year for a link in their now-useless directory has always been incredibly nasty – and MSN, well, greedy thugs does pretty much cover it! Google on the other hand has shared the wealth quite nicely with content creators via the AdSense program, which allows normal people to earn a good living as webmasters if they have a particular area of expertise and the ability to share it. Yes, it’s also inspired a number of sleazebags to have Made-For-Adsense sites, but that’s inevitable and those tend to get bounced sooner or later.

      Wikipedia.org is a threat to a lot of us who have worked hard to develop good web sites, only to find that increasingly people just go to Wikipedia, which I guess is fair, but often the depth and variety are greater out there in the wild. I’ll also point out that a single ontology-directed source might stifle a lot of the independent voices who add variety, as all traffic goes to it, if I’m understanding this right. I do understand that wikipedia.org is “open,” but I’ve experienced openness in the Open Directory that ended up as authoritarian dictatorship when editors got out of hand, and I don’t see wikipedia.org as immune to that.

      Comment by analysis — June 28, 2006 @ 11:16 am

    44. “I would really love to see more competition”

      And open standards, data, source code, access….all the good stuff.

      http://www.techcrunch.com/2006/06/06/google-to-add-albums-to-picassa/#comment-66429

      Comment by Dave — June 28, 2006 @ 6:48 pm

      I enjoyed your commentary on “The Semantic Web” and find your observations right on. It was the perfect follow-up to watching a “Royal Society” presentation by Professor Sir Tim Berners-Lee FRS (video on demand), “The future of the world wide web,” this past evening. Although somewhat dated (as am I), from 9/03, it helped put everything in perspective.

      http://www.royalsoc.ac.uk/portals/bernerslee/rnh.htm

      Thanks.

      Comment by Tom K — June 29, 2006 @ 5:36 am

    46. I don’t think Google will allow themselves to become obsolete. Google may just get as many people working on the job as Wikipedia has. It’s not impossible. We’ll just have to wait and see on that one.

      Comment by Panda — June 29, 2006 @ 4:34 pm

    47. Nothing can kill google now 🙂

      Comment by Ivan Minic — June 29, 2006 @ 6:43 pm

    48. Good Comment

      Comment by manuandycole — June 29, 2006 @ 7:39 pm

    49. Google will have to compete with the likes of Yahoo’s and MSN’s new search technology.

      I’m thankful at least there is some competition left in the U.S. (oil, telecom, banking – almost no competition any more)

      Comment by Krag — June 29, 2006 @ 7:42 pm

    50. I would have to agree that competition is getting less and less here in the USA.

      Comment by onecoolsoul — June 29, 2006 @ 9:58 pm

    51. So…I am trying to wrap my brain around this concept a bit (regardless of the threat to Google or whatever). The collective mind of this Semantic Web can, in theory, intuitively connect bits of information together due to their relational association beyond a “keyword” association (i.e., your Pizza/Italian example). However, how does it distinguish between good information and bad? How does it determine which parts of the sum of human knowledge to accept and which to reject? Who, or what as the case may be, will determine what information is irrelevant to me as the inquiring end user? And what of ranking or order – Google’s search results are driven by many factors…but they may not be the best results for what I am specifically looking for…even on the level of personal preference. Is this to say that ultimately some over-arching standard will emerge that all humanity will be forced (by lack of alternative information) to conform to? Hmmm…

      Comment by divotdave — June 30, 2006 @ 3:11 am

    52. Question:
      How does it distinguish between good information and bad? How does it determine which parts of the sum of human knowledge to accept and which to reject?

      Reply:
      It wouldn’t have to distinguish between good and bad information (not to be confused with well-formed vs. badly formed) if it were to use a reliable source of information (with associated, reliable ontologies). That is, if the information or knowledge sought can be derived from Wikipedia 3.0, then it is assumed to be reliable.

      However, when it comes to connecting the dots, i.e., returning information or deducing answers from the sea of information that lies beyond Wikipedia, your question becomes very relevant: how would it distinguish good information from bad so that it can produce good knowledge (aka comprehended information, aka new information produced through deductive reasoning based on existing information)?

      Question:
      Who, or what as the case may be, will determine what information is irrelevant to me as the inquiring end user?

      Reply:
      That is a good question, and one which would have to be answered by the researchers working on AI engines for Web 3.0.

      There will be assumptions made as to what you are inquiring about. Just as I had to make an assumption about what you really meant to ask me when I saw your question, AI engines would have to make an assumption, based pretty much on the same cognitive process humans use, which is the topic of a separate post but one that has been covered by many AI researchers.

      Question:
      Is this to say that ultimately some over-arching standard will emerge that all humanity will be forced (by lack of alternative information) to conform to?

      Reply:
      There is no need for one standard, except when it comes to the language the ontologies are written in (e.g., OWL, OWL-DL, OWL Full, etc.). Semantic Web researchers are trying to determine the best and most usable choice, taking into consideration both human and machine performance in constructing and (exclusively in the latter case) interpreting those ontologies.

      Two or more info agents working with the same domain-specific ontology but having different software (different AI engines) can collaborate with each other.

      The only standard required is that of the ontology language and associated production tools.
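      To make that concrete, here is a minimal sketch (the namespace, class and instance names are hypothetical, echoing the Pizza/Italian example above) of a shared ontology loaded and queried in Python with rdflib. Any two agents that agree on this vocabulary can issue the same query, regardless of which engine runs behind them:

          from rdflib import Graph

          # A tiny shared domain ontology (hypothetical names), in Turtle.
          ONTOLOGY = """
          @prefix :     <http://example.org/food#> .
          @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
          @prefix owl:  <http://www.w3.org/2002/07/owl#> .

          :Pizza       a owl:Class .
          :ItalianDish a owl:Class .
          :Pizza       rdfs:subClassOf :ItalianDish .
          :Margherita  a :Pizza .
          """

          g = Graph()
          g.parse(data=ONTOLOGY, format="turtle")

          # Any agent that understands the shared vocabulary can ask:
          # "which things are Italian dishes?" -- directly or via subclasses.
          results = g.query("""
              PREFIX :     <http://example.org/food#>
              PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
              SELECT ?dish WHERE {
                  ?dish a ?cls .
                  ?cls rdfs:subClassOf* :ItalianDish .
              }
          """)
          for row in results:
              print(row.dish)  # -> http://example.org/food#Margherita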

      Marc

      P.S. This set of responses was refined at 10:25pm EST, June 30, ‘06.

      Comment by evolvingtrends — June 30, 2006 @ 3:30 am

    53. I’m in utter awe and there is no way I can digest all this information tonight, so I’ve saved it in its own “favorites” folder. The internet is changing and growing every day and its difficult to keep up. Thanks for the information.

      Comment by Gregory Name — July 1, 2006 @ 2:39 am

    54. seems like some hardcore knowledge

      Comment by unblock myspace — July 1, 2006 @ 4:57 pm

    55. Trying to get machines to understand the English language? People have a tough enough time doing that in everyday life, with all the misunderstandings born of incorrect interpretations of the spoken or written word.

      In reference to the first basic question raised by readers: How would a machine distinguish good information from bad? Given that the concept of good vs bad is purely subjective, the answer is machines can’t make that determination because machines can’t make value judgements. Any choice made by a machine that would appear to be a value judgement is really that of the developing programmer.

      Comment by NorthOfTheCity — July 2, 2006 @ 1:58 pm

    56. One big issue I have to wonder about is how to keep out the legions of spammers with their MFA (made for AdSense) sites. They have done an incredible job of staying one step ahead of everyone except perhaps Akismet, spamming Google quite effectively, lodging their turds in forums and blogs, and generally being quite ingenious in their ability to spread filth. Given that we cannot expect our government to crack down on their criminal activities – not necessarily the spamming but the crime that generates the funding for it – how can we insulate Wiki3/Web3 from all that rubbish?

      Another issue is the tendency I’ve noticed for any authority to become incredibly insular and snotty, no doubt due to the massive fight against spam. The Open Directory is famous for its arbitrary, permanent decisions and lack of any ability to take criticism; but Yahoo was hardly fun in its prime, and Google seems to be getting rather aloof as well. The v3 web/wiki may need to confront that head-on, since it makes for bad decisions (self-examination disappears in a mass of self-righteousness).

      Comment by analysis — July 2, 2006 @ 2:37 pm

    57. On AI and Natural Language Processing

      I believe that the first generation of AI that will be used by Web 3.0 (aka Semantic Web) will be based on relatively simple inference engines (employing both algorithmic and heuristic approaches) that will not attempt to perform natural language processing. However, they will still have the formal deductive reasoning capabilities described earlier in this article.

      On the Debate about the Nature and Definition of AI

      Please note that I cannot get into moral, philosophical or religious debates, such as what is AI vs. human consciousness, or what is evolution vs. intelligent design, or any other such debate that cannot be resolved by computable (read: provable) and decidable arguments in the context of formal theory.

      Non-formal and semi-formal (incomplete, evolving) theories can be found everywhere in every branch of science. For example, in physics, the so-called “String Theory” is not a formal theory: it has no theoretically provable arguments (or predictions). Likewise, the field of AI is littered with non-formal theories that some folks in the field (but not all) may embrace as so-called ‘seminal’ works, even though those non-formal theories have not made any theoretically provable predictions (not to mention experimentally proven ones). They are no more than a philosophy, an ideology or even a personal religion, and they cannot be debated with decidable, provable arguments.

      The embedding of AI into cyberspace will be done at first with relatively simple inference engines (that use algorithms and heuristics) that work collaboratively in P2P fashion and use standardized ontologies. The massively parallel interactions between the hundreds of millions of AI Agents that run within the millions of P2P AI Engines on users’ PCs will give rise to the very complex behavior that is the future global brain.
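      To illustrate what a “relatively simple inference engine” means here, the following is a toy sketch in Python (the facts and the single transitivity rule are made-up examples for illustration, not a real engine): it performs forward chaining, deriving new facts from explicit ones until a fixed point is reached, with no natural language processing anywhere.

          # Facts are (predicate, subject, object) triples; the names are illustrative.
          facts = {
              ("isA", "Margherita", "Pizza"),
              ("isA", "Pizza", "ItalianDish"),
          }

          def forward_chain(facts):
              """Apply the transitivity rule until no new facts appear."""
              derived = set(facts)
              changed = True
              while changed:
                  changed = False
                  # Rule: X isA Y and Y isA Z  =>  X isA Z
                  for (p1, x, y1) in list(derived):
                      for (p2, y2, z) in list(derived):
                          if p1 == "isA" and p2 == "isA" and y1 == y2:
                              if ("isA", x, z) not in derived:
                                  derived.add(("isA", x, z))
                                  changed = True
              return derived

          # The engine deduces a fact that was never stated explicitly.
          print(("isA", "Margherita", "ItalianDish") in forward_chain(facts))  # True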

      Comment by evolvingtrends — July 2, 2006 @ 5:27 pm

    58. […] This article explains the background for how the Wikipedia 3.0/Semantic Web vs Google meme was launched. […]

      Pingback by Evolving Trends » For Great Justice, Take Off Every Digg — July 3, 2006 @ 10:32 am

    59. […] So I read this blog entry on A List Apart a ways back, but never made an entry here. It’s titled Web 3.0, by Jeffrey Zeldman. And then there’s this blog entry that came out today on the Evolving Trends blog called Wikipedia 3.0: The End of Google?. Jeffrey Zeldman makes the point that Web 2.0 is really about the collaboration and community. Not only for the end-user (i.e. flickr, ma.gnolia, etc…), but on the development side. It allows small teams to work more efficiently and to focus on things like usability. AJAX technologies (PHP, Ruby on Rails, XML, CSS, JavaScript, XHTML, and sometimes Microsoft widgets) allow applications to be elegant and simple. The end result is products that do what they do really well vs. an overload of features and complexity. Zeldman concludes by saying he’s going to let all the hype over Web 2.0 pass and get on to Web 3.0. […]

      Pingback by Douglas Reynolds : Experience. Organized. » Blog Archive » Web 2.0 (web 3.0?) — July 3, 2006 @ 10:47 am

    60. […] http://evolvingtrends.wordpress.com/2006/06/26/wikipedia-30-the-end-of-google/ […]

      Pingback by Copper » I can’t wait…. — July 3, 2006 @ 12:35 pm

    61. […] Wikipedia 3.0 the end of Google? […]

      Pingback by Marcus Cake » Web 3.0, the global brain and the impact on financial markets — July 5, 2006 @ 7:25 am

    62. […] With respect to the previous entry about Google, there is also a thesis that predicts the “fall” of Google. It is based on the Semantic Web (Web 3.0) and its power to deduce (or induce) answers, instead of simply searching for keywords. The problem (apart from the technical obstacles) is that collecting and classifying (categorizing) that information is an effort that requires thousands of people working for a long time. The post Wikipedia 3.0: The End of Google proposes that Wikipedia’s immense database, together with its thousands of volunteer collaborators, is the answer to the problem just described, provided they are given the right tools. […]

      Pingback by Dime Jaguar » Todo tiene una antitesis — July 6, 2006 @ 4:04 am

    63. […] Evolving Trends says that a “Wikipedia 3.0″ could make Google obsolete. […]

      Pingback by appletree » Blog Archive » Thursday Links: Bruce Smith Edition — July 6, 2006 @ 2:30 pm

    64. […] But there it is. That was then. Now, it seems, the rage is Web 3.0. It all started with this article here addressing the Semantic Web, the idea that a new organizational structure for the web ought to be based on concepts that can be interpreted. The idea is to help computers become learning machines, not just pattern matchers and calculators. […]

      Pingback by SHM Project » Blog Archive » Web 3.0 – The Tyranny of X Point Oh — July 7, 2006 @ 1:30 pm

    65. […] Despite that I had painted a picture (not so foreign to many in concept) of a future ‘intelligent collective’ (aka Global Brain [a term which had been in use in somewhat similar context for years now]) in the articles on Wikipedia 3.0 and Web 3.0, I believe that the solution to Web 2.0 is not to make the crowd more intelligent so that ‘it’ (not the people) can control ‘itself’ (and ‘us’ in the process) but to allow us to retain control over it, using true and tried structures and processes. […]

      Pingback by Evolving Trends » Microcrowds: Towards a Self-Aware, Self-Organizing Society? — July 8, 2006 @ 2:31 am

    66. […] Marc introduced the concept of Wikipedia 3.0, signaling the end of Google […]

      Pingback by Wisiwip » Blog Archive » Digg 08/07/06 — July 9, 2006 @ 7:13 am

    67. […] It doesn’t matter who thought of it first. So it’s better to put these ideas out there in the open, be them good ideas like the Wikipedia 3.0, Web 3.0, ‘Unwisdom of Crowds’ or Google GoodSense or “potentially” bad ones like the Tagging People in the Real World or the e-Society ideas. […]

      Pingback by Evolving Trends » Good and Bad Ideas — July 9, 2006 @ 3:03 pm

    68. […] Web 3.0 and Google: Is Web 3.0 the end of Google? … no. Five reasons why not. Reason 4: Google can listen to your surroundings and within 5 seconds know which program you’re watching on TV! […]

      Pingback by miniPLUG » web 3.0 y google — July 9, 2006 @ 4:33 pm

    69. […] Wikipedia 3.0: The End of Google? […]

      Pingback by Evolving Trends » From Web 2.0 to Web 2.5 — July 9, 2006 @ 6:34 pm

    70. […] Wikipedia 3.0: The End of Google? […]

      Pingback by Evolving Trends » Web 2.0: Back to the “Hunter Gatherer” Society — July 9, 2006 @ 6:41 pm

    71. […] Since writing the article on Wikipedia 3.0: The End of Google? I’ve received over 65,000 page-views from people in almost every Internet-savvy population around the world, and all that traffic happened in less than two weeks, with 85% of it in the first 4 or 5 days. […]

      Pingback by Evolving Trends » Google is a Monopoly — July 10, 2006 @ 6:14 am

    72. […] Wikipedia 3.0: The End of Google? […]

      Pingback by Evolving Trends » VirAL POSts — July 12, 2006 @ 11:11 am

    73. Wikipedia 3.0: The End of Google?

      The author tells us about the “info agents” that will revolutionize the way information is searched for on the web, and how the Wikipedia community could help accelerate the implementation of the “Ultimate Answer Machine”

      Note: e…

      Trackback by tecnoticias — July 16, 2006 @ 9:24 pm

    74. […] Evolving Trends » Wikipedia 3.0: The End of Google? Caught up with Web 2.0 yet? If not, just skip over and check out Web 3.0! Evolving Trends walks through a very intellectual analysis of the next big thing. (tags: blog future search SemanticWeb wiki Wikipedia) […]

      Pingback by links for 2006-07-19 » My blog of HR, and technology stuff — July 19, 2006 @ 4:39 pm

    75. […] Wikipedia 3.0: The End of Google? […]

      Pingback by Evolving Trends » It’s Easier to Lead Than Follow — August 9, 2006 @ 11:41 am

    76. […] Wikipedia 3.0: The End of Google? […]

      Pingback by I predict … « Evolving Trends — September 3, 2006 @ 12:27 pm

    77. People actually have to take the time to write if this is going to work. And you can look at all the wikis on the net that have failed because they didn’t reach a critical mass of volunteers. People have jobs and blogs to take care of; therefore, the for-profit model is going to be the way it has to work.

      Comment by Astralis — September 13, 2006 @ 3:48 am

    78. “The emergence of a Wikipedia 3.0 (as in Web 3.0, aka Semantic Web) that is built on the Semantic Web model will herald the end of Google as the Ultimate Answer Machine. It will be replaced with “WikiMind” which will not be a mere search engine like Google is but a true Global Brain: a powerful pan-domain inference engine, with a vast set of ontologies (a la Wikipedia 3.0) covering all domains of human knowledge, that can reason and deduce answers instead of just throwing raw information at you using the outdated concept of a search engine.”

      This captures the emerging power of wikis that we are all moving to.

      Thanks!

      Comment by Howard Oliver — September 13, 2006 @ 7:54 pm

    79. […] There is no Web 3.0 page on Wikipedia (in fact, a block has been set up against creating such a page), but the page on the Semantic Web may be of interest. I will also read the article Wikipedia 3.0: The End of Google? when I get some time. […]

      Pingback by kjempekjekt.com » Blog Archive » Web 3.0 — November 15, 2006 @ 3:53 am

    80. Update:
      Companies and researchers are developing tools and processes to let domain experts with no knowledge of ontology construction build a formal ontology in a manner that is transparent to them, i.e. without them even realizing that they’re building one. Such tools/processes are emerging from research organizations and Web 3.0 ventures.

      The article “Google Co-Op: The End of Wikipedia?” is linked from this article. It provides a plausible counterargument with respect to how the Semantic Web will emerge, but it’s only one potential future scenario among many even more likely scenarios, some of which are already taking shape.
      Marc

      Comment by evolvingtrends — November 15, 2006 @ 5:57 pm

    81. […] Go here for latest update on ontology creation […]

      Pingback by Web 3.0 Update « Evolving Trends — November 19, 2006 @ 2:21 pm

    82. […] Wikipedia 3.0: The End of Google? […]

      Pingback by Google Answers: The End of Google Co-Op? « Evolving Trends — November 30, 2006 @ 9:38 am

    83. […] Here is a direct excerpt from the translation of the original article. (I lost a lot of time trying to understand it, can you tell?) by Marc Fawzi of Evolving Trends […]

      Pingback by DxZone » DxBlog » Blog Archive » Web 3.0? — December 3, 2006 @ 4:29 pm

    84. Right now, a few months earlier than we intended, we have the beginning of such a semantic application in our modest martial arts wiki.

      The main intention is to discover causative and other relations between techniques in our field of expertise.

      We have only a few of these articles ready; here is one example:
      http://www.ninjutsu.co.il/wiki/index.php/Harai_goshi
      The process of describing ontological relations is painstakingly slow, involving many human hours.

      Comment by yossi — December 27, 2006 @ 11:26 am

    85. Hi Yossi,

      Good stuff.

      The creation of domain-specific ontologies (including ones that enable machine reasoning about a given subject) has to be a transparent process from the domain expert’s perspective. Whether that’s done through technology or well-designed processes depends on what needs to be accomplished.
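      As a toy illustration of that transparency (the vocabulary and the helper function below are hypothetical, not an existing tool): the domain expert fills in a plain template (“technique X is a kind of Y and is countered by Z”) and the tool quietly emits formal RDF behind the scenes, e.g. in Python with rdflib:

          from rdflib import Graph, Namespace, RDF

          # Hypothetical namespace for a martial arts domain ontology.
          MA = Namespace("http://example.org/martialarts#")

          def record_technique(name, kind, countered_by):
              """Turn a plain-language template into RDF triples,
              without ever showing the expert an ontology."""
              g = Graph()
              technique = MA[name]
              g.add((technique, RDF.type, MA[kind]))
              for counter in countered_by:
                  g.add((technique, MA.counteredBy, MA[counter]))
              return g

          # Illustrative entries only; the relation names are assumptions.
          g = record_technique("Harai_goshi", "HipThrow", ["Utsuri_goshi"])
          print(g.serialize(format="turtle"))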

      See latest update on Web 3.0 technologies: http://evolvingtrends.wordpress.com/2006/11/19/web-30-update/

      Thanks for your comment.

      Marc

      Comment by evolvingtrends — December 28, 2006 @ 1:07 am

    86. […] by Wikipedia’s founder), future semantic version of Wikipedia (aka Wikipedia 3.0), and Google’s Pagerank algorithm to shed some light on how to design a better semantic […]

      Pingback by Designing a better semantic search engine « Evolving Trends — January 7, 2007 @ 9:04 pm

    87. […] Interestingly, my friend Marc Fawzi described exactly this idea in a piece he posted on the subject last […]

      Pingback by PCs Powered by the Wisdom of Crowds | twopointouch: web 2.0, blogs and social media — January 10, 2007 @ 3:27 pm

    88. Twenty years ago or so, there was great hype about Japan’s “fifth generation computer” project. The idea was to use AI tools (specifically, Prolog) to make computers understand people. The project failed with no result. Perhaps Web 3.0 will be more successful, and perhaps not.
      Maybe, for a start, someone should create a standalone semantic operating system? Or will we have to access the semantic web with dumb windoze machines?

      Comment by Alexei Kaigorodov — February 3, 2007 @ 12:50 pm

    89. There is at least one Semantic Desktop project out there already.

      Marc

      Comment by evolvingtrends — February 6, 2007 @ 7:22 am

    90. Alexei has touched on the subject of needing more powerful computers and operating systems to handle the semantic explosion of synonyms and language rules. The brain is still king. Until quantum computers.

      Comment by Geoff Dodd — February 24, 2007 @ 8:15 am

    91. Much is being done to define and tackle the problems, so you should see some exciting paradigms emerge over the next few years.

      There are many ways to define the problem, so there are many ways to solve it.

      :]

      Comment by evolvingtrends — February 24, 2007 @ 8:58 am

    92. […] personas consideran que esta Web 3.0 será el fin de empresas como Google, pero otras  ya alucinan el surgimiento de una Inteligencia Artificial al estilo de Ghost in the […]

      Pingback by ¿Web 3.0 o Skynet? « El metaverso de JL — March 9, 2007 @ 6:38 pm

    93. […] « Evolving Trends Posted in Semantic web, Web 2.0 by Aman Shakya on the March 23rd, 2007 Wikipedia 3.0: The End of Google? « Evolving Trends The Semantic Web or Web 3.0 promises to “organize the world’s information” in a dramatically […]

      Pingback by Wikipedia 3.0: The End of Google? « Evolving Trends « Aman’s Blog — March 23, 2007 @ 4:52 am

    94. Google may reap most of the benefits by hosting Wikipedia.

      Comment by Nat — June 1, 2007 @ 4:23 am

    95. People are not as stupid as you may think.

      Any such decision involving collusion between a publicly created knowledge base like Wikipedia and a company like Google that may try to control it will be challenged and opposed by the majority of people.

      Marc

      Comment by evolvingtrends — July 30, 2007 @ 6:24 pm

    96. […] interesting article, written some time ago now, suggests that the Web 3.0 vision might be the way to end Google’s monopoly. It is touted as being a new way to organise […]

      Pingback by Will Google rule the Digital Realm? | Simon Emery — November 5, 2007 @ 4:20 pm

    97. […] can read the whole article here Wikipedia3.0 – The End of Google A more trimmed down version of the same article can be found here P2P ai will kill […]

      Pingback by Web 2.0 or 3.0? — November 8, 2007 @ 1:16 am

    98. […] Wikipedia 3.0: The End of Google? […]

      Pingback by Google tries again to co-opt the Wikipedia 3.0 vision « Evolving Trends — December 17, 2007 @ 4:13 pm

    99. […] The Law of The Observer… […]

      Pingback by I Observe. Therefore I Am. « Evolving Trends — December 18, 2007 @ 6:28 pm

    100. […] blog (http://evolvingtrends.wordpress.com/2006/06/26/wikipedia-30-the-end-of-google/) that was talking about Web. 3.0. The argument it was making about Wikipedia, ontological knowledge […]

      Pingback by musings on participation and digital technologies « snapshot… — January 9, 2008 @ 3:51 pm

    101. […] And here’s the Evolving Trends article that gave both Google and Wikia the impetus to get into user-enhanced (and /semantic) search: here […]

      Pingback by Hakia, Google, Wikia « Evolving Trends — January 16, 2008 @ 4:43 pm

     
