
Posts Tagged ‘innovation’

Evolving Trends

July 10, 2006

Is Google a Monopoly?

(This post was last updated on Jan 11, ‘07.)

Given the growing feeling that Google holds too much power over our future, without any proof that they can handle such responsibility wisely and with plenty of proof to the contrary [1], it is clear why people find themselves breathing a sigh of relief at the prospect of a new Web order, one in which Google is not as powerful and dominant.

In the software industry, economies of scale derive not from production capacity but from the size of the installed user base: software is just bits that users can download at a negligible cost to the producer (or at virtually no cost, if the P2P model of the Web is used). In classical economic terms, the size of the installed user base thus takes the place of production capacity.

Just as Microsoft used its economies of scale (i.e. its installed user base) as part of a copy-and-co-opt strategy to dominate the desktop, Google has shifted from a strategy of genuine innovation, which is expensive and risky, to a lower-risk copy-and-co-opt strategy in which it uses its economies of scale (i.e. its installed user base) to eliminate competition and dominate the Web.

The combination of the ability to copy and co-opt innovations across broad segments of the market together with existing and growing economies of scale is what makes Google a monopoly.

Consider the following example: DabbleDB (among other companies) beat Google to market with an online, collaborative spreadsheet application, but Google acquired a competing product and produced a similar (yet inferior) offering that now threatens to kill DabbleDB’s chances for growth.

One way to think of what’s happening is in terms of the first law of thermodynamics (aka conservation of energy): the market’s energy is finite, so if Google grows, many smaller companies must shrink or die. And as Google grows, many smaller companies are dying.

For companies that have to compete with Google, it is no better or worse than it was under the Microsoft monopoly. But it is much worse for us, the people, because what is at stake now is much bigger: it is no longer about our PCs and LANs but about the future of the entire Web.

You could argue that the patent system protects smaller companies from having their products and innovations copied and co-opted by bigger competitors like Google. However, during the Microsoft-dominated era, very few companies succeeded in suing Microsoft for patent infringement. I happen to know of one former PC software company whose ex-CEO succeeded in suing Microsoft for $120M. But that is the rare exception to a common rule: the party with the deeper pockets always has the advantage in court (they can drag a lawsuit out for years and make it too costly for others to sue them).

Therefore, given that Google is perceived as a growing monopoly that many see as having acquired too much power, too fast, without the wisdom to use that power responsibly, I’m not too surprised that many people have welcomed the Wikipedia 3.0 vision.


1. What leaps to mind as far as Google’s lack of wisdom is how they sold the world on their “Don’t Be Evil” mantra, only to do evil when it came to oppressing the already-oppressed (see: Google’s censorship of search results in China).

Related

  1. Wikipedia 3.0: The End of Google?
  2. P2P 3.0: The People’s Google
  3. Google 2.71828: Analysis of Google’s Web 2.0 Strategy

Posted by Marc Fawzi

Tags:

Web 2.0, Google, Adam Smith, Monopoly, Trends, imperialism, Anti-Americanism, economies of scale, innovation, Startup, Google Writely, Google spreadsheets, DabbleDB, Google Base, Web 3.0

8 Comments »

  1. Google is large and influential. That doesn’t make it a monopoly.

    They have 39% market share in Search in the US – http://searchenginewatch.com/reports/article.php/3099931 – a lot more than their closest competitor, but it’s wrong to describe them as a monopoly. A monopoly has a legal entitlement to be the only provider of a product or service. More loosely, the term can describe a company with such dominance in the market that it makes no sense to try to compete with it. Neither applies to Google. I think your correspondents are simply reacting against the biggest player because they are the biggest, the same way people knock Microsoft, Symantec, Adobe, etc.

    Certainly, Yahoo!, MSN, Ask Jeeves, etc. aren’t ready to throw in the towel yet. Arguably, if DabbleDB were struggling (and I don’t know whether they are), they would need to differentiate a little more from Google to make their model work as a business. I am not sure they need to.

    One last point, there isn’t a finite number of people looking for spreadsheets, etc., online. It’s a growing market with enormous untapped potential. The winners will be those best able to overcome the serious objections people have towards online apps – security & stability. Spreadsheets and databases are business apps – it will not be good enough to throw up something that is marked beta and sometimes works and might be secure. I think people dealing with business data *want* to pay for such products, because it guarantees them levels of service and the likelihood that the company will still be around in a year.

    Comment by Ian — July 11, 2006 @ 4:52 am

  2. I don’t think any company can compete against Google, especially not small companies. If MS and Interactive Corp. are struggling against Google, then how can any small company compete? Google has economies of scale that cannot be undone easily, except through P2P subversion of the central search model (see my Web 3.0 article), which is going to happen on its own (I don’t need to advocate it).

    Having said that, I did specify ways to compete with Google in SaaS in the post titled Google 2.71828: Analysis of Google’s Web 2.0 Strategy.

    But in general, it’s getting tough out there because of Google’s economies of scale and their ability/willingness to copy and co-opt innovations across a broad segment of the market.

    Ian wrote:

    “A monopoly has a legal entitlement to be the only provider of a product or service.”

    Response:
    The definition of monopoly in the US does not equate to state-run companies or any such concept from the EU domain. It simply equates to economies of scale and the ability to copy and co-opt innovations in a broad sector of the market. Monopolies that exist in market niches are a natural result of free markets, but ones that exist in broad segments are problematic for free markets.

    Marc

    Comment by evolvingtrends — July 11, 2006 @ 5:21 am

  3. Don’t forget that Google prevents AdSense publishers from using other context-based advertising services on the same pages that have AdSense ads.

    Comment by drew — July 12, 2006 @ 12:23 am
  4. That’s a sure sign that they’re a monopoly. Just like MS used to force PC makers to do the same.

    Marc

    Comment by evolvingtrends — July 12, 2006 @ 5:05 am

  5. […] impact it has over a worldwide, super-connected tool like the Internet. An article by Marc Fawzi on Evolving Trends expressed this effectively […]

    Pingback by What Evil Lurks in the Heart of Google | Phil Butler Unplugged — November 6, 2007 @ 11:46 pm
  6. […] with existing and growing economies of scale is what makes Google a monopoly,” states Evolving Trends. As Google grows, many smaller companies will die. In order to set up its monopoly, Google is used […]

    Pingback by Google: pro’s and con’s « E-culture & communication: open your mind — November 14, 2007 @ 6:42 am
  7. Contrary to what the Google fan club and the Google propaganda machine would have you believe, here are some real facts:

    – People do have a choice with operating systems. They can buy a Mac or use Linux.

    – Google has a terrible track record of abusing its power:
    – the click fraud lawsuit, where they used a grubby lawyer and tricks to pay almost nothing.
    – they pass on a very small share to AdSense publishers and make them sign a confidentiality agreement.
    – they tried to prevent publishers from showing other ads.

    – Google AdSense is responsible for the majority of spam on the Internet.

    – Google has a PR machine, which includes Matt Cutts and others, that suppresses criticism and even makes personal attacks on people who are critical of them. They are also constantly releasing a barrage of press releases with gimmicks to improve their image with the public.

    Wake up people. Excessive power leads to abuse.

    Comment by Pete — January 26, 2008 @ 1:25 pm

  8. I think this is a really important discussion that has been started here; thank you, Marc.
    I got suspicious today when I heard about MS wanting to take over Yahoo! – or will it be Google…

    Anyway, this is really crucial stuff: what is at stake is the much-praised freedom of the information age, and hence the real hope for a truly open world. I hope there’s some degree of acknowledgment of this.

    So to feed the discussion more,

    – what can we do as users?
    – are there alternative independent search engines out there?
    – should we think of starting new strategies in information retrieval?
    – what ideas are around?

    Is there a good, active community somewhere discussing these issues? I would be interested in participating…

    thank you
    fabio

    Comment by Fabio — February 4, 2008 @ 11:42 am


Evolving Trends

July 11, 2006

P2P 3.0: The People’s Google

/*

This is a more extensive version of the Web 3.0 article, with extra sections about the implications of Web 3.0 for Google.

See this follow-up article for the more disruptive ‘decentralized knowledgebase’ version of the model discussed in this article.

Also see this non-Web3.0 version: P2P to Destroy Google, Yahoo, eBay et al 

Web 3.0 Developers:

Feb 5, ‘07: The following reference should provide some context regarding the use of rule-based inference engines and ontologies in implementing the Semantic Web + AI vision (aka Web 3.0) but there are better, simpler ways of doing it. 

  1. Description Logic Programs: Combining Logic Programs with Description Logic

*/

In Web 3.0 (aka Semantic Web), P2P Inference Engines running on millions of users’ PCs and working with standardized domain-specific ontologies (created by Wikipedia, Ontoworld, or other organizations or individuals) using Semantic Web tools, including Semantic MediaWiki, will produce an information infrastructure far more powerful than Google (or any current search engine).

The availability of standardized ontologies that are being created by people, organizations, swarms, smart mobs, e-societies, etc, and the near-future availability of P2P Semantic Web Inference Engines that work with those ontologies means that we will be able to build an intelligent, decentralized, “P2P” version of Google.

Thus, the emergence of P2P Inference Engines and domain-specific ontologies in Web 3.0 (aka Semantic Web) will present a major threat to the central “search” engine model.

Basic Web 3.0 Concepts

Knowledge domains

A knowledge domain is something like Physics, Chemistry, Biology, Politics, the Web, Sociology, Psychology, or History. There can be many sub-domains under each domain, each with its own sub-domains, and so on.

Information vs Knowledge

To a machine, knowledge is comprehended information (i.e. new information produced through the application of deductive reasoning to existing information). To a machine, information is only data until it is processed and comprehended.

Ontologies

For each domain of human knowledge, an ontology must be constructed, partly by hand [or rather by brain] and partly with the aid of automation tools.

Ontologies are neither knowledge nor information. They are meta-information: in other words, ontologies are information about information. In the context of the Semantic Web, they encode, using an ontology language, the relationships between the various terms within the information. Those relationships, which may be thought of as the axioms (basic assumptions), together with the rules governing the inference process, both enable and constrain the interpretation (and well-formed use) of those terms by the Info Agents, allowing them to reason out new conclusions based on existing information, i.e. to think. In other words, theorems (formal deductive propositions that are provable from the axioms and the rules of inference) may be generated by the software, thus allowing formal deductive reasoning at the machine level. And given that an ontology, as described here, is a statement of logic theory, two or more independent Info Agents processing the same domain-specific ontology will be able to collaborate and deduce an answer to a query without being driven by the same software.
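To make the distinction between axioms and ground facts concrete, here is a minimal sketch using the rdflib Python library. The tiny “cuisine” ontology, its URIs, and the restaurant name are all invented for illustration; the point is that the subclass axioms are meta-information that lets the machine derive a conclusion (that a pizza restaurant is an Italian restaurant) which is stated nowhere in the ground facts.

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/cuisine#")  # hypothetical namespace
    g = Graph()

    # Axioms (meta-information): a tiny class hierarchy for the domain
    g.add((EX.PizzaRestaurant, RDFS.subClassOf, EX.ItalianRestaurant))
    g.add((EX.ItalianRestaurant, RDFS.subClassOf, EX.Restaurant))

    # Information (a ground fact): Luigi's is a pizza restaurant
    g.add((EX.LuigisPizza, RDF.type, EX.PizzaRestaurant))

    # Deduction: find Italian restaurants by walking the subclass axioms
    q = """
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?r WHERE {
          ?r a/rdfs:subClassOf* <http://example.org/cuisine#ItalianRestaurant> .
        }
    """
    for row in g.query(q):
        print(row.r)  # -> http://example.org/cuisine#LuigisPizza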

Inference Engines

In the context of Web 3.0, inference engines will combine the latest innovations from the artificial intelligence (AI) field with domain-specific ontologies (created as formal or informal ontologies by, say, Wikipedia, as well as others), domain inference rules, and query structures to enable deductive reasoning at the machine level.
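As a rough illustration of what “domain inference rules” means here, the following is a toy forward-chaining inference engine in plain Python. The facts and the single rule are invented for illustration; a real Web 3.0 engine would operate on a standard ontology language rather than hand-written tuples, but the fixpoint loop below is the essence of forward-chaining deduction.

    # Toy forward-chaining engine: apply domain inference rules to the
    # known facts repeatedly until no new facts emerge (a fixpoint).
    facts = {
        ("PizzaPlace", "serves", "Pizza"),
        ("Pizza", "is_a", "ItalianCuisine"),
    }

    # Domain inference rule: if X serves Y and Y is_a Z, then X serves Z.
    def rule_serves_category(fs):
        derived = set()
        for (x, p1, y) in fs:
            for (y2, p2, z) in fs:
                if p1 == "serves" and p2 == "is_a" and y == y2:
                    derived.add((x, "serves", z))
        return derived

    def infer(fs, rules):
        fs = set(fs)
        while True:
            new = set().union(*(rule(fs) for rule in rules)) - fs
            if not new:
                return fs
            fs |= new

    kb = infer(facts, [rule_serves_category])
    print(("PizzaPlace", "serves", "ItalianCuisine") in kb)  # True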

Info Agents

Info Agents are instances of an Inference Engine, each working with a domain-specific ontology. Two or more agents working with a shared ontology may collaborate to deduce answers to questions. Such collaborating agents may be based on differently designed Inference Engines and still be able to collaborate.
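Continuing the toy sketch above (a rough picture, not a protocol design): two hypothetical agents can share the rule set, which stands in for the common domain ontology, while holding different facts. Neither can answer the query alone, but pooling what they know can:

    # Each Info Agent holds a partial fact base; the shared rules play
    # the role of the common domain-specific ontology.
    agent_a_facts = {("PizzaPlace", "serves", "Pizza")}
    agent_b_facts = {("Pizza", "is_a", "ItalianCuisine")}
    shared_rules = [rule_serves_category]  # from the sketch above

    query = ("PizzaPlace", "serves", "ItalianCuisine")
    print(query in infer(agent_a_facts, shared_rules))  # False: A alone cannot
    print(query in infer(agent_b_facts, shared_rules))  # False: B alone cannot
    # Pooling the agents' facts (e.g., over a P2P exchange) answers it:
    print(query in infer(agent_a_facts | agent_b_facts, shared_rules))  # True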

Proofs and Answers

The interesting thing about Info Agents that I did not clarify in the original post is that they will be capable not only of deducing answers from existing information (i.e. generating new information [and gaining knowledge in the process, for those agents with a learning function]) but also of formally testing propositions (represented in some query logic) that are stated directly or implied by the user. For example, instead of the example I gave previously (in the Wikipedia 3.0 article), where the user asks “Where is the nearest restaurant that serves Italian cuisine?” and the machine deduces that a pizza restaurant serves Italian cuisine, the user may ask “Is the moon blue?” or assert that “the moon is blue” to get a true or false answer from the machine. In this case, a simple Info Agent may answer with “No,” but a more sophisticated one may say “the moon is not blue, but some humans are fond of saying ‘once in a blue moon,’ which seems illogical to me.”

This test-of-truth feature assumes the use of an ontology language (as a formal logic system) and an ontology in which all propositions (or formal statements) that can be made can be computed (i.e. proved true or false) and where all such computations are decidable in finite time. The language may be OWL-DL or any language that, together with the ontology in question, satisfies the completeness and decidability conditions.
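Under those completeness and decidability assumptions, the test-of-truth reduces to checking membership in the deductive closure. A minimal sketch, reusing infer() and shared_rules from the sketches above (the moon fact is, of course, invented):

    # Sound only under the completeness/decidability assumptions above:
    # whatever is not derivable is treated as false (negation as failure).
    def prove(proposition, facts, rules):
        return proposition in infer(facts, rules)

    moon_facts = {("Moon", "has_color", "Grey")}
    print(prove(("Moon", "has_color", "Blue"), moon_facts, shared_rules))  # False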

P2P 3.0 vs Google

If you think of how many processes currently run on all the computers and devices connected to the Internet then that should give you an idea of how many Info Agents can be running at once (as of today), all reasoning collaboratively across the different domains of human knowledge, processing and reasoning about heaps of information, deducing answers and deciding truthfulness or falsehood of user-stated or system-generated propositions.

Web 3.0 will bring with it a shift from centralized search engines to P2P Semantic Web Inference Engines, which will collectively have vastly more deductive power, in both quality and quantity, than Google can ever have (including any future AI-enabled version of Google, which would not be able to keep up with the distributed P2P AI matrix enabled by millions of users running free P2P Semantic Web Inference Engine software on their home PCs).

Thus, P2P Semantic Web Inference Engines will pose a huge and escalating threat to Google and other search engines, and may be expected to do to them what P2P file sharing and BitTorrent did to FTP (central-server file transfer) and centralized file hosting in general (e.g. Amazon’s S3 use of BitTorrent).

In other words, the coming of P2P Semantic Web Inference Engines, as an integral part of the still-emerging Web 3.0, will threaten to wipe out Google and other existing search engines. It’s hard to imagine how any one company could compete with 2 billion Web users (and counting), all of whom are potential users of the disruptive P2P model described here.

“The Future Has Arrived But It’s Not Evenly Distributed”

Currently, Semantic Web (aka Web 3.0) researchers are working out the technology and human-resource issues, and folks like Tim Berners-Lee, the father of the Web, are battling critics and enlightening minds about the coming human-machine revolution.

The Semantic Web (aka Web 3.0) has already arrived, and Inference Engines are already working with prototypical ontologies. But this is a massive effort, which is why I have suggested that its most likely enabler will be a social, collaborative movement such as Wikipedia, which has the human resources (in the form of thousands of knowledgeable volunteers) to help create the ontologies (most likely as informal ontologies based on semantic annotations) that, when combined with inference rules for each domain of knowledge and the query structures for the particular schema, would enable deductive reasoning at the machine level.

Addendum

On AI and Natural Language Processing

I believe that the first generation of AI that will be used by Web 3.0 (aka Semantic Web) will be based on relatively simple inference engines (employing both algorithmic and heuristic approaches) that will not attempt to perform natural language processing. However, they will still have the formal deductive reasoning capabilities described earlier in this article.

Related

  1. Wikipedia 3.0: The End of Google?
  2. Intelligence (Not Content) is King in Web 3.0
  3. Get Your DBin
  4. All About Web 3.0

Posted by Marc Fawzi



Evolving Trends

June 11, 2006

P2P Semantic Web Engines

No Comments »


Evolving Trends

    June 30, 2006

    Web 3.0: Basic Concepts

    /* (this post was last updated at 1:20pm EST, July 19, ‘06)

    You may also wish to see Wikipedia 3.0: The End of Google? (the original ‘Web 3.0/Semantic Web’ article) and P2P 3.0: The People’s Google (a more extensive version of this article showing the implications of P2P Semantic Web Engines for Google).

    Web 3.0 Developers:

    Feb 5, ‘07: The following reference should provide some context regarding the use of rule-based inference engines and ontologies in implementing the Semantic Web + AI vision (aka Web 3.0) but there are better, simpler ways of doing it. 

    1. Description Logic Programs: Combining Logic Programs with Description Logic

    */

    Basic Web 3.0 Concepts

    Knowledge domains

    A knowledge domain is something like Physics, Chemistry, Biology, Politics, the Web, Sociology, Psychology, or History. There can be many sub-domains under each domain, each with its own sub-domains, and so on.

    Information vs Knowledge

    To a machine, knowledge is comprehended information (i.e. new information produced through the application of deductive reasoning to existing information). To a machine, information is only data until it is processed and comprehended.

    Ontologies

    For each domain of human knowledge, an ontology must be constructed, partly by hand [or rather by brain] and partly with the aid of automation tools.

    Ontologies are neither knowledge nor information. They are meta-information: in other words, ontologies are information about information. In the context of the Semantic Web, they encode, using an ontology language, the relationships between the various terms within the information. Those relationships, which may be thought of as the axioms (basic assumptions), together with the rules governing the inference process, both enable and constrain the interpretation (and well-formed use) of those terms by the Info Agents, allowing them to reason out new conclusions based on existing information, i.e. to think. In other words, theorems (formal deductive propositions that are provable from the axioms and the rules of inference) may be generated by the software, thus allowing formal deductive reasoning at the machine level. And given that an ontology, as described here, is a statement of logic theory, two or more independent Info Agents processing the same domain-specific ontology will be able to collaborate and deduce an answer to a query without being driven by the same software.

    Inference Engines

    In the context of Web 3.0, inference engines will combine the latest innovations from the artificial intelligence (AI) field with domain-specific ontologies (created as formal or informal ontologies by, say, Wikipedia, as well as others), domain inference rules, and query structures to enable deductive reasoning at the machine level.

    Info Agents

    Info Agents are instances of an Inference Engine, each working with a domain-specific ontology. Two or more agents working with a shared ontology may collaborate to deduce answers to questions. Such collaborating agents may be based on differently designed Inference Engines and still be able to collaborate.

    Proofs and Answers

    The interesting thing about Info Agents that I did not clarify in the original post is that they will be capable not only of deducing answers from existing information (i.e. generating new information [and gaining knowledge in the process, for those agents with a learning function]) but also of formally testing propositions (represented in some query logic) that are stated directly or implied by the user. For example, instead of the example I gave previously (in the Wikipedia 3.0 article), where the user asks “Where is the nearest restaurant that serves Italian cuisine?” and the machine deduces that a pizza restaurant serves Italian cuisine, the user may ask “Is the moon blue?” or assert that “the moon is blue” to get a true or false answer from the machine. In this case, a simple Info Agent may answer with “No,” but a more sophisticated one may say “the moon is not blue, but some humans are fond of saying ‘once in a blue moon,’ which seems illogical to me.”

    This test-of-truth feature assumes the use of an ontology language (as a formal logic system) and an ontology in which all propositions (or formal statements) that can be made can be computed (i.e. proved true or false) and where all such computations are decidable in finite time. The language may be OWL-DL or any language that, together with the ontology in question, satisfies the completeness and decidability conditions.

    “The Future Has Arrived But It’s Not Evenly Distributed”

    Currently, Semantic Web (aka Web 3.0) researchers are working out the technology and human-resource issues, and folks like Tim Berners-Lee, the father of the Web, are battling critics and enlightening minds about the coming human-machine revolution.

    The Semantic Web (aka Web 3.0) has already arrived, and Inference Engines are already working with prototypical ontologies. But this is a massive effort, which is why I have suggested that its most likely enabler will be a social, collaborative movement such as Wikipedia, which has the human resources (in the form of thousands of knowledgeable volunteers) to help create the ontologies (most likely as informal ontologies based on semantic annotations) that, when combined with inference rules for each domain of knowledge and the query structures for the particular schema, would enable deductive reasoning at the machine level.

    Addendum

    On AI and Natural Language Processing

    I believe that the first generation of artificial intelligence (AI) that will be used by Web 3.0 (aka Semantic Web) will be based on relatively simple inference engines (employing both algorithmic and heuristic approaches) that will not attempt to perform natural language processing. However, they will still have the formal deductive reasoning capabilities described earlier in this article.

    Related

    1. Wikipedia 3.0: The End of Google?
    2. P2P 3.0: The People’s Google
    3. All About Web 3.0
    4. Semantic MediaWiki
    5. Get Your DBin

    Posted by Marc Fawzi



    Evolving Trends

    July 9, 2006

    Open Source Your Mind

    Any idea that you come up with that can bring a lot of power to someone, and that is realistic enough to attempt, will inevitably get built by someone. It doesn’t matter that you thought of it first. So it’s better to put your ideas out there in the open, be they good ideas like Wikipedia 3.0, P2P 3.0 (The People’s Google) and Google GoodSense, or “potentially” concern-causing ones like the Tagging People in the Real World and e-Society ideas.

    In today’s world, if anyone can think of a powerful idea that is realistic enough to attempt, then chances are someone is already working on it, or someone will be within months.

    Therefore, it is wise to get both the good and the potentially concern-causing ideas out there and let people be aware of them, so that the good ones, like the vision for Wikipedia 3.0 and the debate about the ‘Unwisdom of Crowds‘, can benefit all, and so that the potentially concern-causing ones, like the Tagging People in the Real World and e-Society ideas, can be debated in the open.

    It is similar, in one respect, to the patent system: if someone comes up with the cure for cancer or with an important new technology, then we, as a society, would want them to describe how it is made or how it works, so we can be sure we have access to it. However, given the availability of blogs and the connectivity we have today, wise innovators, including those in the open source movement, are putting their ideas out there in the open so that society as a whole may learn about them, debate them, and decide whether to embrace them, fight them, or do something in between (i.e. moderate their effect).

    For some, it can be a lot of fun, especially the unpredictability element.

    So open source your blue-sky vision and let the world hear about it.

    And for the potentially concern-causing ideas, it’s better to bring them out in the open than to work on them (or risk others working on them) in the dark.

    In other words, open source your mind.

    Posted by Marc Fawzi



    Evolving Trends

    July 29, 2006

    Search By Meaning

    I’ve been working on a pretty detailed technical scheme for a “search by meaning” search engine (as opposed to [dumb] Google-like search by keyword), and I have to say that, in conquering the workability challenge within my limited scope, I can see the huge problem facing Google and other Web search engines in transitioning to a “search by meaning” model.

    However, I also do see the solution!

    Related

    1. Wikipedia 3.0: The End of Google?
    2. P2P 3.0: The People’s Google
    3. Intelligence (Not Content) is King in Web 3.0
    4. Web 3.0 Blog Application
    5. Towards Intelligent Findability
    6. All About Web 3.0

    Beats

    42. Grey Cell Green

    Posted by Marc Fawzi

    Tags:

    Semantic Web, Web standards, Trends, OWL, innovation, Startup, Evolution, Google, GData, inference, inference engine, AI, ontology, Semanticweb, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, collective consciousness, Ontoworld, Wikipedia AI, Info Agent, Semantic MediaWiki, DBin, P2P 3.0, P2P AI, AI Matrix, P2P Semantic Web inference Engine, Global Brain, semantic blog, intelligent findability, search by meaning

    5 Comments »

    1. context is a kind of meaning, innit?

      Comment by qxcvz — July 30, 2006 @ 3:24 am

    2. You’re one piece short of Lego Land.

      I have to make the trek down to San Diego and see what it’s all about.

      How do you like that for context!? 🙂

      Yesterday I got burnt real bad at Crane Beach in Ipswich (not to be confused with Cisco’s IP Switch). The water was freezing. Anyway, on the way there I was told about the one time when the kids (my nieces) asked their dad (who is a Cisco engineer) why Ipswich is called Ipswich. He said he didn’t know. They said “just make up a reason!!!!!!” (since they can’t take “I don’t know” for an answer). So he said they initially wanted to call it PI (pie) but decided to switch the letters, so it became IPSWICH. The kids loved that answer and kept asking him, whenever they had their friends on a beach trip, to explain why Ipswich is called Ipswich. I don’t get the humor. My logic circuits are not that sensitive. Somehow they see the illogic of it and they think it’s hilarious.

      Engineers and scientists tend to approach the problem through the most complex path possible, because that’s dictated by the context of their thinking. Genetic algorithms could do a better job at that, yet that’s absolutely not what I’m hinting is the answer.

      The answer is a lot more simple (but the way simple answers are derived is often thru deep thought that abstracts/hides all the complexity)

      I’ll stop one piece short cuz that will get people to take a shot at it and thereby create more discussion around the subject, in general, which will inevitably get more people to coalesce around the Web 3.0 idea.

      [badly sun burnt face] + ] … It’s time to experiment with a digi cam … i.e. towards a photo + audio + web 3.0 blog!

      An 8-mega pixel camera phone will do just fine! (see my post on tagging people in the real world.. it is another very simple idea but I like this one much much better.)

      Marc

      p.s. my neurons are still in perfectly good order but I can’t wear my socks!!!

      Comment by evolvingtrends — July 30, 2006 @ 10:19 am

    3. Hey there, Marc.
      Have talked to people about semantic web a bit more now, and will get my thoughts together on the subject before too long. The big issue, basically, is buy-in from the gazillions of content producers we have now. My impression is the big business will lead on semantic web, because it’s more useful to them right now, rather than you or I as ‘opinion/journalist’ types.

      Comment by Ian — August 7, 2006 @ 5:06 pm

    4. Luckily, I’m not an opinion journalist although I could easily pass for one.

      You’ll see a lot of ‘doing’ from us now that we’re talking less 🙂

      BTW, I just started as Chief Architect with a VC-funded Silicon Valley startup, so that’s keeping me busy, but I’m recruiting developers and orchestrating a P2P 3.0 / Web 3.0 / Semantic Web (AI-enabled) open source project consistent with the vision we’ve outlined.

      :] … dzzt.

      Marc

      Comment by evolvingtrends — August 7, 2006 @ 5:10 pm

    5. Congratulations on the job, Marc. I know you’re a big thinker and I’m delighted to hear about that.

      Hope we’ll still be able to do a little “fencing” around this subject!

      Comment by Ian — August 7, 2006 @ 7:01 pm



Evolving Trends

    July 20, 2006

    Google dont like Web 3.0 [sic]

    (this post was last updated at 9:50am EST, July 24, ‘06)

    Why am I not surprised?

    Google exec challenges Berners-Lee

    The idea is that the Semantic Web will allow people to run AI-enabled P2P Search Engines that will collectively be more powerful than Google can ever be, relegating Google to just another source of information, especially as Wikipedia [not Google] is positioned to lead the creation of the domain-specific ontologies that are the foundation for machine reasoning [about information] in the Semantic Web.

    Additionally, we could see content producers (including bloggers) creating informal ontologies on top of the information they produce, using a standard language like RDF. This would have the same effect as far as P2P AI Search Engines and Google’s anticipated slide into the commodity layer are concerned (unless, of course, Google develops something like GWorld).
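    As a rough sketch of what such an informal, blogger-level annotation could look like, here is a minimal Python example using the rdflib library; the blog URL, the namespace, and the terms are all invented for illustration:

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import DC, RDF

        # Hypothetical vocabulary a blogger might publish with their posts
        ET = Namespace("http://example.org/evolvingtrends/terms#")
        post = URIRef("http://example.org/evolvingtrends/2006/07/11/p2p-3-0")

        g = Graph()
        g.bind("dc", DC)
        g.bind("et", ET)
        g.add((post, RDF.type, ET.BlogPost))
        g.add((post, DC.title, Literal("P2P 3.0: The People's Google")))
        g.add((post, DC.subject, Literal("Semantic Web")))
        g.add((post, ET.discusses, ET.InferenceEngine))  # informal annotation

        # Machine-readable metadata that an Info Agent could consume:
        print(g.serialize(format="turtle"))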

    In summary, any attempt to arrive at widely adopted Semantic Web standards would significantly lower the value of Google’s investment in the current, non-semantic Web by commoditizing “findability” and by allowing intelligent Info Agents to be built that could collaborate with each other to find answers more effectively than the current version of Google (using “search by meaning” as opposed to “search by keyword”) and more cost-efficiently than any future AI-enabled version of Google (using disruptive P2P AI technology).

    For more information, see the articles below.

    Related

    1. Wikipedia 3.0: The End of Google?
    2. Wikipedia 3.0: El fin de Google (traducción)
    3. All About Web 3.0
    4. Web 3.0: Basic Concepts
    5. P2P 3.0: The People’s Google
    6. Intelligence (Not Content) is King in Web 3.0
    7. Web 3.0 Blog Application
    8. Towards Intelligent Findability
    9. Why Net Neutrality is Good for Web 3.0
    10. Semantic MediaWiki
    11. Get Your DBin

    Somewhat Related

    1. Unwisdom of Crowds
    2. Reality as a Service (RaaS): The Case for GWorld
    3. Google 2.71828: Analysis of Google’s Web 2.0 Strategy
    4. Is Google a Monopoly?
    5. Self-Aware e-Society

    Beats

    1. In the Hearts of the Wildmen

    Posted by Marc Fawzi


    Tags:

    Semantic Web, Web standards, Trends, OWL, innovation, Startup, Evolution, Google, GData, inference, inference engine, AI, ontology, Semanticweb, Web 2.0, Web 3.0, Google Base, artificial intelligence, Wikipedia, Wikipedia 3.0, collective consciousness, Ontoworld, Wikipedia AI, Info Agent, Semantic MediaWiki, DBin, P2P 3.0, P2P AI, AI Matrix, P2P Semantic Web inference Engine, semantic blog, intelligent findability, RDF

