

Evolving Trends

July 29, 2006

Search By Meaning

I’ve been working on a fairly detailed technical scheme for a “search by meaning” search engine (as opposed to [dumb] Google-like search by keyword), and I have to say that, in conquering the workability challenge within my limited scope, I can see the huge problem facing Google and other Web search engines in transitioning to a “search by meaning” model.

However, I also do see the solution!

Related

  1. Wikipedia 3.0: The End of Google?
  2. P2P 3.0: The People’s Google
  3. Intelligence (Not Content) is King in Web 3.0
  4. Web 3.0 Blog Application
  5. Towards Intelligent Findability
  6. All About Web 3.0

Beats

42. Grey Cell Green

Posted by Marc Fawzi

Tags:

Semantic Web, Web standards, Trends, OWL, innovation, Startup, Evolution, Google, GData, inference, inference engine, AI, ontology, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, collective consciousness, Ontoworld, Wikipedia AI, Info Agent, Semantic MediaWiki, DBin, P2P 3.0, P2P AI, AI Matrix, P2P Semantic Web inference Engine, Global Brain, semantic blog, intelligent findability, search by meaning

5 Comments »

  1. context is a kind of meaning, innit?

    Comment by qxcvz — July 30, 2006 @ 3:24 am

  2. You’re one piece short of Lego Land.

    I have to make the trek down to San Diego and see what it’s all about.

    How do you like that for context!? 🙂

    Yesterday I got burnt real bad at Crane Beach in Ipswich (not to be confused with Cisco’s IP switch). The water was freezing. Anyway, on the way there I was told about the one time the kids (my nieces) asked their dad (who is a Cisco engineer) why Ipswich is called Ipswich. He said he didn’t know. They said “just make up a reason!!!!!!” (since they can’t take “I don’t know” for an answer). So he said they initially wanted to call it PI (pie) but decided to switch the letters, so it became IPSWICH. The kids loved that answer and kept asking him, whenever they had their friends along on a beach trip, to explain why Ipswich is called Ipswich. I don’t get the humor. My logic circuits are not that sensitive. Somehow they see the illogic of it and they think it’s hilarious.

    Engineers and scientists tend to approach the problem through the most complex path possible because that’s dictated by the context of their thinking, but genetic algorithms could do a better job at that, yet that’s absolutely not what I’m hinting is the answer.

    The answer is a lot more simple (but the way simple answers are derived is often thru deep thought that abstracts/hides all the complexity)

    I’ll stop one piece short cuz that will get people to take a shot at it and thereby create more discussion around the subject, in general, which will inevitably get more people to coalesce around the Web 3.0 idea.

    [badly sun burnt face] + ] … It’s time to experiment with a digi cam … i.e. towards a photo + audio + web 3.0 blog!

    An 8-mega pixel camera phone will do just fine! (see my post on tagging people in the real world.. it is another very simple idea but I like this one much much better.)

    Marc

    p.s. my neurons are still in perfectly good order but I can’t wear my socks!!!

    Comment by evolvingtrends — July 30, 2006 @ 10:19 am

  3. Hey there, Marc.
    Have talked to people about the semantic web a bit more now, and will get my thoughts together on the subject before too long. The big issue, basically, is buy-in from the gazillions of content producers we have now. My impression is that big business will lead on the semantic web, because it’s more useful to them right now, rather than you or I as ‘opinion/journalist’ types.

    Comment by Ian — August 7, 2006 @ 5:06 pm

  4. Luckily, I’m not an opinion journalist although I could easily pass for one.

    You’ll see a lot of ‘doing’ from us now that we’re talking less 🙂

    BTW, just started as Chief Architect with a VC-funded Silicon Valley startup, so that’s keeping me busy, but I’m recruiting developers and orchestrating a P2P 3.0 / Web 3.0 / Semantic Web (AI-enabled) open source project consistent with the vision we’ve outlined.

    :] … dzzt.

    Marc

    Comment by evolvingtrends — August 7, 2006 @ 5:10 pm

  5. Congratulations on the job, Marc. I know you’re a big thinker and I’m delighted to hear about that.

    Hope we’ll still be able to do a little “fencing” around this subject!

    Comment by Ian — August 7, 2006 @ 7:01 pm



Evolving Trends

    July 20, 2006

    Google dont like Web 3.0 [sic]

    (this post was last updated at 9:50am EST, July 24, ‘06)

    Why am I not surprised?

    Google exec challenges Berners-Lee

    The idea is that the Semantic Web will allow people to run AI-enabled P2P Search Engines that will collectively be more powerful than Google can ever be, which will relegate Google to just another source of information, especially as Wikipedia [not Google] is positioned to lead the creation of domain-specific ontologies, which are the foundation for machine-reasoning [about information] in the Semantic Web.

    Additionally, we could see content producers (including bloggers) creating informal ontologies on top of the information they produce using a standard language like RDF. This would have the same effect as far as the P2P AI search engines and Google’s anticipated slide into the commodity layer are concerned (unless, of course, Google develops something like GWorld).

    In summary, any attempt to arrive at widely adopted Semantic Web standards would significantly lower the value of Google’s investment in the current non-semantic Web by commoditizing “findability” and allowing intelligent info agents to be built that could collaborate with each other to find answers more effectively than the current version of Google (using “search by meaning” as opposed to “search by keyword”) and more cost-efficiently than any future AI-enabled version of Google (using disruptive P2P AI technology).
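
    To make the “informal ontology” idea concrete, here is a minimal sketch of how a blogger might attach a few RDF statements to a post using the JENA toolkit for Java. This is only an illustration: the package names assume the Jena 2.x releases of the day, and the URIs and property names are made up for the example, not taken from any standard vocabulary.

        import com.hp.hpl.jena.rdf.model.Model;
        import com.hp.hpl.jena.rdf.model.ModelFactory;
        import com.hp.hpl.jena.rdf.model.Property;
        import com.hp.hpl.jena.rdf.model.Resource;

        public class InformalAnnotations {
            public static void main(String[] args) {
                Model model = ModelFactory.createDefaultModel();

                // Hypothetical vocabulary a blogger might define for their own posts
                String ns = "http://example.org/blog-terms#";
                Property isAbout = model.createProperty(ns, "isAbout");
                Property relatedTo = model.createProperty(ns, "relatedTo");

                // A couple of informal statements layered on top of the post's content
                Resource post = model.createResource("http://example.org/posts/web-3-0");
                post.addProperty(isAbout, model.createResource(ns + "SemanticWeb"));
                post.addProperty(relatedTo, model.createResource(ns + "P2P_Search"));

                // Publish the statements so info agents can pick them up alongside the post
                model.write(System.out, "RDF/XML");
            }
        }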

    For more information, see the articles below.

    Related

    1. Wikipedia 3.0: The End of Google?
    2. Wikipedia 3.0: El fin de Google (traducción)
    3. All About Web 3.0
    4. Web 3.0: Basic Concepts
    5. P2P 3.0: The People’s Google
    6. Intelligence (Not Content) is King in Web 3.0
    7. Web 3.0 Blog Application
    8. Towards Intelligent Findability
    9. Why Net Neutrality is Good for Web 3.0
    10. Semantic MediaWiki
    11. Get Your DBin

    Somewhat Related

    1. Unwisdom of Crowds
    2. Reality as a Service (RaaS): The Case for GWorld
    3. Google 2.71828: Analysis of Google’s Web 2.0 Strategy
    4. Is Google a Monopoly?
    5. Self-Aware e-Society

    Beats

    1. In the Hearts of the Wildmen

    Posted by Marc Fawzi


    Tags:

    Semantic Web, Web standards, Trends, OWL, innovation, Startup, Evolution, Google, GData, inference, inference engine, AI, ontology, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, collective consciousness, Ontoworld, Wikipedia AI, Info Agent, Semantic MediaWiki, DBin, P2P 3.0, P2P AI, AI Matrix, P2P Semantic Web inference Engine, semantic blog, intelligent findability, RDF


    Evolving Trends

    July 19, 2006

    Towards Intelligent Findability

    (This post was last updated at 12:45pm EST, July 22, ‘06)

    By Eric Noam Rodriguez (original Spanish version: CMS Semántico)

    Editing and Addendum by Marc Fawzi

    A lot of buzz about Web 3.0 and Wikipedia 3.0 has been generated lately by Marc Fawzi through this blog, so I’ve decided that for my first post here I’d like to dive into this idea and take a look at how to build a Semantic Content Management System (CMS). I know this blog has had more of a visionary, psychological, and sociological theme (i.e., the vision for the future and the Web’s effect on society, human relationships, and the individual), but I’d like to show the feasibility of this vision by providing some technical details.

    Objective


    We want a CMS capable of building a knowledge base (that is, a set of domain-specific ontologies) with formal deductive reasoning capabilities.

    Requirements


    1. A semantic CMS framework.
    2. An ontology API.
    3. An inference engine.
    4. A framework for building info-agents.

    HOW-TO


    The general idea would be something like this:

    1. Users use a semantic CMS such as Semantic MediaWiki to enter information as well as semantic annotations (which establish semantic links between concepts in the given domain, on top of the content). This typically produces an informal ontology over the information, which, when combined with domain inference rules and the query structures (for the particular schema) implemented in an independent info agent or built into the CMS, gives us a Domain Knowledge Database (see the sketch after this list). Alternatively, we can have users enter information into a non-semantic CMS, creating content based on a given doctype or content schema, and then front-end it with an info agent that works with a formal ontology of the given domain. In that case, however, we would need to perform natural language processing, including the use of statistical semantic models, since we would lose the certainty normally provided by semantic annotations, which, in a semantic CMS, break the natural language in the information down to a definite semantic structure.
    2. Another set of info agents adds inference-based querying services to our knowledge base, covering information on the Web or in other domain-specific databases. User-entered information plus information obtained from the Web makes up our Global Knowledge Database.
    3. We provide a Web-based interface for querying the inference engine.
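
    As a rough illustration of step 1, the sketch below loads the RDF that a semantic CMS such as Semantic MediaWiki can export for its pages into a JENA model; that model, combined with the domain inference rules and query structures, is what we are calling the Domain Knowledge Database. This is a minimal sketch, not a finished agent: the package names assume the Jena 2.x releases of the day, and the export URL is a placeholder rather than a real endpoint.

        import com.hp.hpl.jena.rdf.model.Model;
        import com.hp.hpl.jena.rdf.model.ModelFactory;

        public class DomainKnowledgeBase {
            public static void main(String[] args) {
                // Holds the annotated content + informal ontology exported by the semantic CMS
                Model domainModel = ModelFactory.createDefaultModel();

                // Placeholder URL: wherever the CMS exposes its RDF export
                domainModel.read("http://wiki.example.org/Special:ExportRDF/SomePage");

                System.out.println("Statements loaded: " + domainModel.size());
            }
        }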

    Each doctype or schema (depending on the CMS of your choice) will have a more or less direct correspondence with our ontologies (i.e., one schema or doctype maps to one ontology). The sum of all the content of a particular schema makes up a knowledge domain, which, when transformed into a semantic language like RDF (or, more specifically, OWL) and combined with the domain inference rules and the query structures (for the particular schema), constitutes our knowledge database. The choice of CMS is not relevant as long as you can query its contents while being able to define schemas. What is important is the need for an API to access the ontology. Luckily, projects like JENA fill this void perfectly, providing both an RDF and an OWL API for Java.
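
    For the ontology API itself, a minimal sketch using JENA’s OWL support might look like the following. The ontology file name is hypothetical, and again the package names assume the Jena 2.x releases of the day.

        import com.hp.hpl.jena.ontology.OntClass;
        import com.hp.hpl.jena.ontology.OntModel;
        import com.hp.hpl.jena.ontology.OntModelSpec;
        import com.hp.hpl.jena.rdf.model.ModelFactory;
        import com.hp.hpl.jena.util.iterator.ExtendedIterator;

        public class OntologyApiExample {
            public static void main(String[] args) {
                // An OWL-aware model corresponding to one doctype/schema in the CMS
                OntModel ontModel = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM, null);
                ontModel.read("file:movies-ontology.owl"); // hypothetical domain ontology

                // Walk the classes defined by the ontology
                ExtendedIterator classes = ontModel.listClasses();
                while (classes.hasNext()) {
                    OntClass cls = (OntClass) classes.next();
                    System.out.println("Class: " + cls.getURI());
                }
            }
        }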

    In addition, we may want an agent to add to or complete our knowledge base using available Web Services (WS). I’ll assume you’re familiar with WS, so I won’t go into details.
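
    As one hedged example of such an agent, the fragment below simply pulls additional RDF published at some remote location into the knowledge base; a real info agent would call whatever WS the domain offers and map the response into RDF, but the merge step would look much the same. The URL parameter is, again, just a placeholder.

        import com.hp.hpl.jena.rdf.model.Model;
        import com.hp.hpl.jena.rdf.model.ModelFactory;

        public class EnrichmentAgent {
            // Merge remotely published RDF into the existing knowledge base model
            public static Model enrich(Model knowledgeBase, String rdfUrl) {
                Model remote = ModelFactory.createDefaultModel();
                remote.read(rdfUrl);                // fetch and parse the remote RDF document
                return knowledgeBase.union(remote); // combined "Global Knowledge Database"
            }
        }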


    Now, the inference engine would seem like a very hard part. It is. But not for lack of existing technology: the W3C already has a recommendation for querying RDF (viz., a semantic language) known as SPARQL (http://www.w3.org/TR/rdf-sparql-query/), and JENA already has a SPARQL query engine.
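
    A minimal sketch of querying the model through JENA’s SPARQL engine follows. The prefix, property, and data are assumed for illustration rather than taken from any real ontology.

        import com.hp.hpl.jena.query.Query;
        import com.hp.hpl.jena.query.QueryExecution;
        import com.hp.hpl.jena.query.QueryExecutionFactory;
        import com.hp.hpl.jena.query.QueryFactory;
        import com.hp.hpl.jena.query.QuerySolution;
        import com.hp.hpl.jena.query.ResultSet;
        import com.hp.hpl.jena.rdf.model.Model;

        public class SparqlExample {
            // Ask the knowledge base which posts are about the Semantic Web
            public static void listPosts(Model model) {
                String queryString =
                    "PREFIX blog: <http://example.org/blog-terms#> " +
                    "SELECT ?post WHERE { ?post blog:isAbout blog:SemanticWeb }";

                Query query = QueryFactory.create(queryString);
                QueryExecution qexec = QueryExecutionFactory.create(query, model);
                try {
                    ResultSet results = qexec.execSelect();
                    while (results.hasNext()) {
                        QuerySolution solution = results.nextSolution();
                        System.out.println(solution.get("post"));
                    }
                } finally {
                    qexec.close();
                }
            }
        }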

    The difficulty lies in the construction of the ontologies, which would have to be formal (i.e., consistent, complete, and thoroughly studied by experts in each knowledge domain) in order to obtain powerful deductive capabilities (i.e., reasoning).
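
    To give a feel for what “formal” buys us, the sketch below attaches JENA’s built-in OWL reasoner to a model and checks it for consistency before trusting any derived statements. The model is assumed to have been assembled as in the earlier fragments.

        import com.hp.hpl.jena.rdf.model.InfModel;
        import com.hp.hpl.jena.rdf.model.Model;
        import com.hp.hpl.jena.rdf.model.ModelFactory;
        import com.hp.hpl.jena.reasoner.Reasoner;
        import com.hp.hpl.jena.reasoner.ReasonerRegistry;
        import com.hp.hpl.jena.reasoner.ValidityReport;

        public class ReasoningExample {
            public static void check(Model ontologyAndData) {
                // Wrap the ontology + instance data with JENA's rule-based OWL reasoner
                Reasoner reasoner = ReasonerRegistry.getOWLReasoner();
                InfModel inf = ModelFactory.createInfModel(reasoner, ontologyAndData);

                // A consistent ontology is what makes the inferred statements trustworthy
                ValidityReport report = inf.validate();
                if (report.isValid()) {
                    System.out.println("Consistent; statements (asserted + inferred): " + inf.size());
                } else {
                    System.out.println("Inconsistency detected; reasoning results are unreliable.");
                }
            }
        }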

    Conclusion

    We already have technology powerful enough to build projects such as this: solid CMSs; standards such as RDF, OWL, and SPARQL; and a stable framework for using them, such as JENA. There are also many frameworks for building info-agents, but you don’t necessarily need a specialized one; a general software framework like J2EE is good enough for the tasks described in this post.

    All we need to move forward with delivering on the Web 3.0 vision (see 1, 2, 3) is the will of the people and your imagination.

    Addendum

    In the diagram below, the domain-specific ontologies (OWL 1 … N) could all be built by Wikipedia (see Wikipedia 3.0), since it already has the largest online database of human knowledge and, among its volunteers, the domain experts needed to build the ontologies for each domain of human knowledge. One possible way is for Wikipedia to build informal ontologies using Semantic MediaWiki (as Ontoworld is doing for the Semantic Web domain of knowledge), but Wikipedia may wish to wait until it has the ability to build formal ontologies, which would enable more powerful machine-reasoning capabilities.

    [Note: The ontologies simply allow machines to reason about information. They are not information but meta-information. They have to be formally consistent and complete for best results as far as machine-based reasoning is concerned.]

    However, individuals, teams, organizations, and corporations do not have to wait for Wikipedia to build the ontologies. They can start building their own domain-specific ontologies (for their own domains of knowledge) and use Google, Wikipedia, MySpace, etc., as sources of information. But as stated in my latest edit to Eric’s post, we would then have to use natural language processing, including statistical semantic models, as the information won’t be pre-semanticized (i.e., semantically annotated), which makes the task more difficult (for us and for the machine).

    What was envisioned in the Wikipedia 3.0: The End of Google? article was that, since Wikipedia has the volunteer resources and the world’s largest database of human knowledge, it will be in the powerful position of being the developer and maintainer of the ontologies (including the semantic annotations/statements embedded in each page), which will become the foundation for intelligence (and “Intelligent Findability”) in Web 3.0.

    This vision is also compatible with the vision for P2P AI (or P2P 3.0), where people will run P2P inference engines on their PCs that communicate and collaborate with each other and tap into information from Google, Wikipedia, etc., ultimately pushing Google and other centralized search engines down to the commodity layer (eventually making them a utility business, just like ISPs).

    Diagram

    Related

    1. Wikipedia 3.0: The End of Google? June 26, 2006
    2. Wikipedia 3.0: El fin de Google (traducción) July 12, 2006
    3. Web 3.0: Basic Concepts June 30, 2006
    4. P2P 3.0: The People’s Google July 11, 2006
    5. Why Net Neutrality is Good for Web 3.0 July 15, 2006
    6. Intelligence (Not Content) is King in Web 3.0 July 17, 2006
    7. Web 3.0 Blog Application July 18, 2006
    8. Semantic MediaWiki July 12, 2006
    9. Get Your DBin July 12, 2006


    Tags:

    Semantic Web, Web standards, Trends, OWL, innovation, Startup, Google, GData, inference engine, AI, ontology, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, Ontoworld, Wikipedia AI, Info Agent, Semantic MediaWiki, DBin, P2P 3.0, P2P AI, AI Matrix, P2P Semantic Web inference Engine, semantic blog, intelligent findability, JENA, SPARQL, RDF, OWL


