
Evolving Trends

Web 3.0

Historically speaking, the first coining of “Web 3.0” in conjunction with the Semantic Web and/or AI agents, and the first coining of “Web 3.0” in conjunction with Wikipedia and/or Google, was made in the Wikipedia 3.0: The End of Google? article, which was published on Evolving Trends (this blog) on June 26, ‘06.

June 28, ‘06: Here’s what a fellow blogger, who had reviewed the Wikipedia 3.0 article, had to say:

“[…] But there it is. That was then. Now, it seems, the rage is Web 3.0. It all started
with this article here addressing the Semantic Web, the idea that a new organizational
structure for the web ought to be based on concepts that can be interpreted. The idea is
to help computers become learning machines, not just pattern matchers and calculators. […]”

June 28, ‘06: A fellow blogger wrote:

“This is the first non-sarcastic reference to Web 3.0 I’ve seen in the wild”

As of Jan 25, ‘07, there are 11,000 links to Evolving Trends from blogs, forums and news sites pointing to the Wikipedia 3.0 article.

Jan 25, ‘07: A fellow blogger wrote:

“In 2004 I with my friend Aleem worked on idea of Semantic Web (as our senior project), and now I have been hearing news of Web 3.0. I decided to work on the idea further in 2005, and may be we could have made a very small scaled 4th generation search engine. Though this has never become reality but now it seems it’s hot time for putting Semantics and AI into web. Reading about Web 3.0 again thrilled me with the idea. [Wikia] has decided to jump into search engines and give Google a tough time :). So I hope may be I get a chance to become part of this Web 3.0 and make information retreival better.”

Alexa graph

According to Alexa, the Wikipedia 3.0: The End of Google? article’s estimated penetration peaked on June 28 at a ratio of 650 per 1,000,000 people. Based on an estimated 1,000,000,000 Web users, this means it reached 650,000 people on June 28, and hundreds of thousands more on June 26, 27, 29 and 30. This includes people who read the article at the roughly 6,000 sites (according to MSN) that had linked to Evolving Trends. Based on the Alexa graph, we could estimate that the article reached close to 2 million people in the first 4.5 days after its release.

Update on Alexa Statistics (Sep. 18, 2008): some people have pointed out (independently, with respect to their own experience) that Alexa’s statistics are skewed and not very reliable. As for direct hits to the article on this blog, they’re in the 200,000 range as of this writing.


Note: the term “Web 3.0” is the dictionary word “Web” followed by the numeral “3”, a decimal point and the numeral “0”. As such, the term itself cannot and should not have any commercial significance in any context.

Update on how the Wikipedia 3.0 vision is spreading:


Update on how Google is hopelessly attempting to co-opt the Wikipedia 3.0 vision:  

Web 3D + Semantic Web + AI as Web 3.0:  

Here is the original article that gave birth to the Web 3.0 vision:

3D Web + Semantic Web + AI *

The above-mentioned Web 3D + Semantic Web + AI vision, which preceded the Wikipedia 3.0 vision, received much less attention because it was not presented in a controversial manner. This fact was noted as the biggest flaw of the social bookmarking site Digg, which was used to promote this article.

Developers:

Feb 5, ‘07: The following external reference concerns the use of rule-based inference engines and ontologies in implementing the Semantic Web + AI vision (aka Web 3.0):

  1. Description Logic Programs: Combining Logic Programs with Description Logic (note: there are better, simpler ways of achieving the same purpose.)

Jan 7, ‘07: The following Evolving Trends post discusses the current state of semantic search engines and ways to improve their design:

  1. Designing a Better Web 3.0 Search Engine

The idea described in this article was adopted by Hakia after it was published here, so this article may be considered prior art.

June 27, ‘06: Semantic MediaWiki project, enabling the insertion of semantic annotations (or metadata) into Wikipedia content. (This project is now hosted by Wikia, Wikipedia founder Jimmy Wales’ private venture, and may benefit Wikia instead of Wikipedia, which is why I see it as a conflict of interest.)

Bloggers:

This post provides the history behind the use of the term Web 3.0 in the context of the Semantic Web and AI.

This post explains the accidental way in which this article reached 2 million people in 4 days.


Web 3.0 Articles on Evolving Trends

Noteworthy mentions of the Wikipedia 3.0 article:

Tags:

Semantic Web, Web standards, Trends, OWL, Google, inference, inference engine, AI, ontology, Web 2.0, Web 3.0, Wikipedia, Wikipedia 3.0, Wikipedia AI, P2P 3.0, P2P AI, P2P Semantic Web inference Engine, intelligent findability



July 17, 2006

Intelligence (Not Content) is King in Web 3.0

Observation

  1. There’s an enormous amount of free content on the Web.
  2. Pirates will always find ways to share copyrighted content, i.e. get content for free.
  3. There’s an exponential growth in the amount of free, user-generated content.
  4. Net Neutrality (or the lack of a two-tier Internet) will only help ensure the continuance of this trend.
  5. Content is becoming so commoditized that it only costs us the monthly ISP fee to access.

Conclusions (or Hypotheses)

The next value paradigm in the content business is going to be about embedding “intelligent findability” into the content layer. This means using a semantic CMS (like Semantic MediaWiki, which enables domain experts to build informal ontologies [or semantic annotations] on top of the information) and adding inferencing capabilities to existing search engines. I know this represents less than the full vision for Web 3.0 as I’ve outlined it in the Wikipedia 3.0 and Web 3.0 articles, but it’s a quantum leap above and beyond the level of intelligence that exists today within the content layer. Also, a semantic CMS can be part of P2P Semantic Web Inference Engine applications that would push central search models like Google’s a step closer to being a “utility” like transport, unless Google builds its own AI, which would then have to compete with the people’s P2P version (see: P2P 3.0: The People’s Google and Get Your DBin.)

In other words, “intelligent findability” NOT content in itself will be King in Web 3.0.
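As a rough sketch of what embedding machine-readable meaning into the content layer could look like, the snippet below extracts Semantic MediaWiki-style [[property::value]] annotations from wiki text into (subject, property, value) triples. The page name, properties and values here are invented for illustration; this is not Semantic MediaWiki’s actual implementation, just a minimal approximation of the idea.

```python
import re

# Semantic MediaWiki-style inline annotation: [[property::value]]
ANNOTATION = re.compile(r"\[\[([^:\]]+)::([^\]]+)\]\]")

def extract_triples(page, text):
    """Turn inline [[property::value]] annotations into (subject, property, value) triples."""
    return [(page, prop.strip(), val.strip()) for prop, val in ANNOTATION.findall(text)]

# Hypothetical annotated wiki text for an "iPod" page.
wiki_text = (
    "The [[is a::mp3 player]] iPod is made by [[made by::Apple]] "
    "and belongs to [[category::consumer electronics]]."
)

triples = extract_triples("iPod", wiki_text)
for t in triples:
    print(t)
```

Once content carries triples like these, a search engine no longer has to guess from keywords; it can look up what a page is actually about.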

Related

  1. Towards Intelligent Findability
  2. Wikipedia 3.0: The End of Google?
  3. Web 3.0: Basic Concepts
  4. P2P 3.0: The People’s Google
  5. Why Net Neutrality is Good for Web 3.0
  6. Semantic MediaWiki
  7. Get Your DBin

Posted by Marc Fawzi



July 17, 2006

Web 3.0 Blog Application

(this post was reblogged at 7:31am EST, Jan 6, ‘07)

Background

As concluded in my previous post, there’s an exponential growth in the amount of user-generated content (videos, blogs, photos, P2P content, etc).

The enormous amount of free content available today is just too much for the current “dumb search” technology that is used to access it.

I believe that content is now a commodity and the next layer of value is all about “Intelligent Findability.”

Take my blog, for example: it’s less than 60 days old, and I’ve never blogged before, but as of today it already has ~500 daily RSS subscribers (and growing), with a noticeable increase after the iPod post I made 3 days ago; 6,281 incoming links (according to MSN); and ~70,000 page views in total so far (mostly due to the Wikipedia 3.0 post, which according to Alexa.com reached an estimated ~2M people). That demonstrates the potential of blogs to generate and spread lots of content.

So there is a lot of blog-generated content (if you consider how many bloggers are out there) and that doesn’t even include the hundreds of thousands (or millions?) of videos and photos uploaded daily to YouTube, Google Video, Flickr and all those other video and photo sharing sites. It also doesn’t include the 30% of total Internet bandwidth being sucked up by BitTorrent clients.

There’s just too much content and no seriously effective way to find what you need. Google is our only hope for now but Google is rudimentary compared to the vision of Semantic-Web Info Agents expressed in the Wikipedia 3.0 and Web 3.0 articles.

Idea

We’d like to embed “Intelligent Findability” into a blogging application so that others will be able to get the most out of the information, ideas and analyses we generate.

If you do a search right now for “cool consumer idea” you will not get the iPod post. Instead you will get this post, but that is only because I’m specifically making the association between “cool consumer idea” and “iPod” in this post.

Google tries to get around the debilitating limitation of keyword-based search engine technology in the same way by letting people associate phrases or words with a given link. If enough people linked to the iPod post and put the words “cool consumer idea” in the link then when searching Google for “cool consumer idea” you will see the iPod post. However, unless people band together and decide to call it a “cool consumer idea” it won’t show up in the search results. You would have to enter something like “portable music application” (which is actually one of the search results that showed up on my WordPress dashboard today.)

Using Semantic MediaWiki (which allows domain experts to embed semantic annotations into the information), I could insert semantic annotations that semantically link concepts in the information on this blog, building an ontology that defines the semantic relationships between terms (i.e. meaning). In that ontology, “iPod” would be semantically related to “product”, which would be semantically related to “consumer electronics”; the phrase “Portable Music Studio” would be semantically related (through annotations) to “vision”, “idea”, “concept”, “entertainment”, “music”, “consumer electronics”, “mp3 player” and so on; and “iPod” would also be semantically related to “cool” (as in: what is “cool”?). Thus, using rules of inference for my domain of knowledge, I should be able to deliver an intelligent search capability that deductively reasons out the best match to a search query, by matching the meanings (represented as semantic graphs) deduced from the user’s query and from the information.

The quality of the deductive capability would depend on the consistency and completeness of the semantic annotations and of the pan-domain or EvolvingTrends-domain ontology that I would build, among other factors. But generally speaking, since the ontology and the semantic annotations would be built by me, then if we think alike (or have a fairly similar semantic model of the world) you will not only be able to read my blog, you will be able to read my mind. The idea is that, with my help in supplying the semantic annotations, such a system will be able to deduce the possible meaning (as a graph of semantic relationships) of each sentence in a post and respond to search queries by reasoning about meaning rather than matching keywords.

This is possible with Semantic MediaWiki (which is under development). However, in this particular instance, I don’t want a Semantic Wiki. I want a Semantic Blog. But that should be just a simple step away.
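The deduction described above can be caricatured in a few lines of code: a tiny graph of semantic relations plus a traversal that matches a query to a post whose text never mentions the query terms. The concepts and relation pairs are made up for the example and stand in for the ontology; a real one would be far richer and directional per relation type.

```python
from collections import defaultdict, deque

# Hypothetical ontology fragment: each pair says "left concept is semantically
# related to right concept" (e.g. "cool" -> "iPod" -> the post about it).
edges = defaultdict(set)
for a, b in [
    ("cool", "iPod"),
    ("consumer electronics", "iPod"),
    ("mp3 player", "iPod"),
    ("iPod", "Portable Music Studio post"),
]:
    edges[a].add(b)

def deduce(query_terms):
    """Return every concept reachable from the query terms via semantic relations."""
    seen, queue = set(), deque(query_terms)
    while queue:
        term = queue.popleft()
        for nxt in edges.get(term, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A keyword engine would miss this: the post never contains "cool consumer idea".
results = deduce(["cool", "consumer electronics"])
print(results)
```

The point is that the match is derived from the relation graph, not from the words in the document, which is the essence of reasoning about meaning rather than matching keywords.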

Related

  1. Wikipedia 3.0: The End of Google?
  2. Towards Intelligent Findability
  3. Web 3.0: Basic Concepts
  4. Intelligence (Not Content) is King in Web 3.0
  5. Semantic MediaWiki
  6. Open Source Your Mind
  7. iPod as a Portable Music Studio

Posted by Marc Fawzi



July 29, 2006

Search By Meaning

I’ve been working on a pretty detailed technical scheme for a “search by meaning” search engine (as opposed to [dumb] Google-like search by keyword), and I have to say that in conquering the workability challenge within my limited scope, I can see the huge problem facing Google and other Web search engines in transitioning to a “search by meaning” model.

However, I also do see the solution!

Related

  1. Wikipedia 3.0: The End of Google?
  2. P2P 3.0: The People’s Google
  3. Intelligence (Not Content) is King in Web 3.0
  4. Web 3.0 Blog Application
  5. Towards Intelligent Findability
  6. All About Web 3.0

Beats

42. Grey Cell Green

Posted by Marc Fawzi

Tags:

Semantic Web, Web standards, Trends, OWL, innovation, Startup, Evolution, Google, GData, inference, inference engine, AI, ontology, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, collective consciousness, Ontoworld, Wikipedia AI, Info Agent, Semantic MediaWiki, DBin, P2P 3.0, P2P AI, AI Matrix, P2P Semantic Web inference Engine, Global Brain, semantic blog, intelligent findability, search by meaning

5 Comments »

  1. context is a kind of meaning, innit?

    Comment by qxcvz — July 30, 2006 @ 3:24 am

  2. You’re one piece short of Lego Land.

    I have to make the trek down to San Diego and see what it’s all about.

    How do you like that for context!? 🙂

    Yesterday I got burnt real bad at Crane beach in Ipswich (not to be confused with Cisco’s IP Switch.) The water was freezing. Anyway, on the way there I was told about the one time when the kids (my nieces) asked their dad (who is a Cisco engineer) why Ipswich is called Ipswich. He said he didn’t know. They said “just make up a reason!!!!!!” (since they can’t take “I don’t know” for an answer) So he said they initially wanted to call it PI (pie) but decided it to switch the letters so it became IPSWICH. The kids loved that answer and kept asking him whenever they had their friends on a beach trip to explain why Ipswich is called Ipswich. I don’t get the humor. My logic circuits are not that sensitive. Somehow they see the illogic of it and they think it’s hilarious.

    Engineers and scientists tend to approach the problem through the most complex path possible because that’s dictated by the context of their thinking, but genetic algorithms could do a better job at that, yet that’s absolutely not what I’m hinting is the answer.

    The answer is a lot more simple (but the way simple answers are derived is often thru deep thought that abstracts/hides all the complexity)

    I’ll stop one piece short cuz that will get people to take a shot at it and thereby create more discussion around the subject, in general, which will inevitably get more people to coalesce around the Web 3.0 idea.

    [badly sun burnt face] + ] … It’s time to experiment with a digi cam … i.e. towards a photo + audio + web 3.0 blog!

    An 8-mega pixel camera phone will do just fine! (see my post on tagging people in the real world.. it is another very simple idea but I like this one much much better.)

    Marc

    p.s. my neurons are still in perfectly good order but I can’t wear my socks!!!

    Comment by evolvingtrends — July 30, 2006 @ 10:19 am

  3. Hey there, Marc.
    Have talked to people about semantic web a bit more now, and will get my thoughts together on the subject before too long. The big issue, basically, is buy-in from the gazillions of content producers we have now. My impression is the big business will lead on semantic web, because it’s more useful to them right now, rather than you or I as ‘opinion/journalist’ types.

    Comment by Ian — August 7, 2006 @ 5:06 pm

  4. Luckily, I’m not an opinion journalist although I could easily pass for one.

    You’ll see a lot of ‘doing’ from us now that we’re talking less 🙂

    BTW, I just started as Chief Architect with a VC-funded Silicon Valley startup so that’s keeping me busy, but I’m recruiting developers and orchestrating a P2P 3.0 / Web 3.0 / Semantic Web (AI-enabled) open source project consistent with the vision we’ve outlined.

    :] … dzzt.

    Marc

    Comment by evolvingtrends — August 7, 2006 @ 5:10 pm

  5. Congratulations on the job, Marc. I know you’re a big thinker and I’m delighted to hear about that.

    Hope we’ll still be able to do a little “fencing” around this subject!

    Comment by Ian — August 7, 2006 @ 7:01 pm


    July 20, 2006

    Google dont like Web 3.0 [sic]

    (this post was last updated at 9:50am EST, July 24, ‘06)

    Why am I not surprised?

    Google exec challenges Berners-Lee

    The idea is that the Semantic Web will allow people to run AI-enabled P2P Search Engines that will collectively be more powerful than Google can ever be, which will relegate Google to just another source of information, especially as Wikipedia [not Google] is positioned to lead the creation of domain-specific ontologies, which are the foundation for machine-reasoning [about information] in the Semantic Web.

    Additionally, we could see content producers (including bloggers) creating informal ontologies on top of the information they produce, using a standard language like RDF. This would have the same effect as far as P2P AI search engines and Google’s anticipated slide into the commodity layer are concerned (unless, of course, Google develops something like GWorld).

    In summary, any attempt to arrive at widely adopted Semantic Web standards would significantly lower the value of Google’s investment in the current non-semantic Web by commoditizing “findability” and allowing for intelligent info agents to be built that could collaborate with each other to find answers more effectively than the current version of Google, using “search by meaning” as opposed to “search by keyword”, as well as more cost-efficiently than any future AI-enabled version of Google, using disruptive P2P AI technology.

    For more information, see the articles below.

    Related

    1. Wikipedia 3.0: The End of Google?
    2. Wikipedia 3.0: El fin de Google (traducción)
    3. All About Web 3.0
    4. Web 3.0: Basic Concepts
    5. P2P 3.0: The People’s Google
    6. Intelligence (Not Content) is King in Web 3.0
    7. Web 3.0 Blog Application
    8. Towards Intelligent Findability
    9. Why Net Neutrality is Good for Web 3.0
    10. Semantic MediaWiki
    11. Get Your DBin

    Somewhat Related

    1. Unwisdom of Crowds
    2. Reality as a Service (RaaS): The Case for GWorld
    3. Google 2.71828: Analysis of Google’s Web 2.0 Strategy
    4. Is Google a Monopoly?
    5. Self-Aware e-Society

    Beats

    1. In the Hearts of the Wildmen

    Posted by Marc Fawzi


    Tags:

    Semantic Web, Web standards, Trends, OWL, innovation, Startup, Evolution, Google, GData, inference, inference engine, AI, ontology, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, collective consciousness, Ontoworld, Wikipedia AI, Info Agent, Semantic MediaWiki, DBin, P2P 3.0, P2P AI, AI Matrix, P2P Semantic Web inference Engine, semantic blog, intelligent findability, RDF


    July 19, 2006

    Towards Intelligent Findability

    (This post was last updated at 12:45pm EST, July 22, ‘06)

    By Eric Noam Rodriguez (original Spanish version: CMS Semántico)

    Editing and Addendum by Marc Fawzi

    A lot of buzz about Web 3.0 and Wikipedia 3.0 has been generated lately by Marc Fawzi through this blog, so I’ve decided that for my first post here I’d like to dive into this idea and take a look at how to build a Semantic Content Management System (CMS). I know this blog has had more of a visionary, psychological and sociological theme (i.e., the vision for the future and the Web’s effect on society, human relationships and the individual himself), but I’d like to show the feasibility of this vision by providing some technical details.

    Objective

     

    We want a CMS capable of building a knowledge base (that is a set of domain-specific ontologies) with formal deductive reasoning capabilities.

    Requirements

     

    1. A semantic CMS framework.
    2. An ontology API.
    3. An inference engine.
    4. A framework for building info-agents.

    HOW-TO

     

    The general idea would be something like this:

    1. Users use a semantic CMS like Semantic MediaWiki to enter information as well as semantic annotations (to establish semantic links between concepts in the given domain, on top of the content). This typically produces an informal ontology on top of the information, which, when combined with domain inference rules and the query structures (for the particular schema) that are implemented in an independent info agent or built into the CMS, would give us a Domain Knowledge Database. (Alternatively, we can have users enter information into a non-semantic CMS to create content based on a given doctype or content schema and then front-end it with an info agent that works with a formal ontology of the given domain; but we would then need to perform natural language processing, including using statistical semantic models, since we would lose the certainty normally provided by semantic annotations, which, in a semantic CMS, break the natural language in the information down into a definite semantic structure.)
    2. Another set of info agents adds to our knowledge base inferencing-based query services over information on the Web or in other domain-specific databases. User-entered information plus information obtained from the Web makes up our Global Knowledge Database.
    3. We provide a Web-based interface for querying the inference engine.

    Each doctype or schema (depending on the CMS of your choice) will have a more or less direct correspondence with our ontologies (i.e. one schema or doctype maps to one ontology). The sum of all the content of a particular schema makes up a knowledge domain which, when transformed into a semantic language (like RDF or, more specifically, OWL) and combined with the domain inference rules and the query structures for the particular schema, constitutes our knowledge database. The choice of CMS is not relevant as long as you can query its contents while being able to define schemas. What is important is the need for an API to access the ontology. Luckily, projects like Jena fill this void perfectly, providing both an RDF and an OWL API for Java.

    In addition, we may want an agent to add to or complete our knowledge base using available Web Services (WS). I’ll assume you’re familiar with WS, so I won’t go into details.

     

    Now, the inference engine would seem like a very hard part. It is. But not for lack of existing technology: the W3C already has a recommendation language for querying RDF (viz. a semantic language), known as SPARQL (http://www.w3.org/TR/rdf-sparql-query/), and Jena already has a SPARQL query engine.

    The difficulty lies in the construction of ontologies which would have to be formal (i.e. consistent, complete, and thoroughly studied by experts in each knowledge-domain) in order to obtain powerful deductive capabilities (i.e. reasoning).
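To make the querying side concrete, here is a toy illustration of the kind of basic pattern matching a SPARQL engine performs over RDF triples: variables (written ?x) are bound against a store of (subject, predicate, object) statements. This is not Jena or a real SPARQL implementation, and the triples are invented for the example; it only sketches the core idea of matching a graph pattern against statements.

```python
# Toy triple store: (subject, predicate, object) stand-ins for RDF statements.
triples = [
    ("iPod", "rdf:type", "Mp3Player"),
    ("Mp3Player", "rdfs:subClassOf", "ConsumerElectronics"),
    ("iPod", "madeBy", "Apple"),
]

def match(pattern, store):
    """Yield variable bindings for one triple pattern, SPARQL-style (?x marks a variable)."""
    for triple in store:
        binding = {}
        for p, t in zip(pattern, triple):
            if p.startswith("?"):
                if binding.get(p, t) != t:  # a repeated variable must rebind consistently
                    break
                binding[p] = t
            elif p != t:  # a constant must match the triple exactly
                break
        else:
            yield binding

# Rough analogue of: SELECT ?thing WHERE { ?thing rdf:type Mp3Player }
hits = [b["?thing"] for b in match(("?thing", "rdf:type", "Mp3Player"), triples)]
print(hits)  # ['iPod']
```

A real engine joins many such patterns and optimizes their evaluation order, but the binding step above is the conceptual core; the hard part, as noted, is getting formally consistent ontologies to query in the first place.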

    Conclusion

    We already have technology powerful enough to build projects such as this: solid CMSs, standards such as RDF, OWL and SPARQL, and a stable framework for using them, such as Jena. There are also many frameworks for building info-agents, but you don’t necessarily need a specialized one; a general software framework like J2EE is good enough for the tasks described in this post.

    All we need to move forward with delivering on the Web 3.0 vision (see 1, 2, 3) is the will of the people and your imagination.

    Addendum

    In the diagram below, the domain-specific ontologies (OWL 1 … N) could all be built by Wikipedia (see Wikipedia 3.0), since they already have the largest online database of human knowledge and, among their volunteers, the domain experts to build the ontologies for each domain of human knowledge. One possible path is for Wikipedia to build informal ontologies using Semantic MediaWiki (as Ontoworld is doing for the Semantic Web domain of knowledge), but Wikipedia may wish to wait until they have the ability to build formal ontologies, which would enable more powerful machine-reasoning capabilities.

    [Note: The ontologies simply allow machines to reason about information. They are not information but meta-information. They have to be formally consistent and complete for best results as far as machine-based reasoning is concerned.]

    However, individuals, teams, organizations and corporations do not have to wait for Wikipedia to build the ontologies. They can start building their own domain-specific ontologies (for their own domains of knowledge) and use Google, Wikipedia, MySpace, etc. as sources of information. But as stated in my latest edit to Eric’s post, we would have to use natural language processing in that case, including statistical semantic models, as the information won’t be pre-semanticized (or semantically annotated), which makes the task more difficult (for us and for the machine …)

    What was envisioned in the Wikipedia 3.0: The End of Google? article was that since Wikipedia has the volunteer resources and the world’s largest database of human knowledge then it will be in the powerful position of being the developer and maintainer of the ontologies (including the semantic annotations/statements embedded in each page) which will become the foundation for intelligence (and “Intelligent Findability”) in Web 3.0.

    This vision is also compatible with the vision for P2P AI (or P2P 3.0), where people will run P2P inference engines on their PCs that communicate and collaborate with each other and that tap into information from Google, Wikipedia, etc., which will ultimately push Google and central search engines down to the commodity layer (eventually making them a utility business, just like ISPs.)

    Diagram

    Related

    1. Wikipedia 3.0: The End of Google? June 26, 2006
    2. Wikipedia 3.0: El fin de Google (traducción) July 12, 2006
    3. Web 3.0: Basic Concepts June 30, 2006
    4. P2P 3.0: The People’s Google July 11, 2006
    5. Why Net Neutrality is Good for Web 3.0 July 15, 2006
    6. Intelligence (Not Content) is King in Web 3.0 July 17, 2006
    7. Web 3.0 Blog Application July 18, 2006
    8. Semantic MediaWiki July 12, 2006
    9. Get Your DBin July 12, 2006


    Tags:

    Semantic Web, Web standards, Trends, OWL, innovation, Startup, Google, GData, inference engine, AI, ontology, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, Ontoworld, Wikipedia AI, Info Agent, Semantic MediaWiki, DBin, P2P 3.0, P2P AI, AI Matrix, P2P Semantic Web inference Engine, semantic blog, intelligent findability, Jena, SPARQL, RDF, OWL

     
