
Spock – Vertical Search Done Right

Written by Alex Iskold / June 26, 2007 6:10 AM / 11 Comments

There has been quite a lot of buzz lately around a vertical search engine for people, called Spock. While still in private beta, the engine has already impressed users with its rich feature set and social aspects. Yet, there is something that has gone almost unnoticed – Spock is one of the best vertical semantic search engines built so far. There are four things that make their approach special:

  • The person-centric perspective of a query
  • Rich set of attributes that characterize people (geography, birthday, occupation, etc.)
  • Usage of tags as links or relationships between people
  • Self-correcting mechanism via user feedback loop

Spock’s focus on people

The only kind of search result you get from Spock is a list of people; the engine interprets every query as a query about people. So whether you search for democrats or ruby on rails or new york, the results will be lists of people associated with the query. In that sense, the algorithm is probably a flavor of the PageRank or frequency-analysis algorithms used by Google – but tailored to people.
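Conceptually, person-centric retrieval can be sketched as an inverted index from tags to people, where any query term resolves to a ranked list of people. Everything below – the index contents, the names, the ranking rule – is an illustrative assumption, not Spock's actual data or algorithm:

```python
from collections import Counter

# Hypothetical inverted index: tag/term -> people associated with it.
TAG_INDEX = {
    "democrats": ["Bill Clinton", "Barack Obama"],
    "new york": ["Rudy Giuliani", "Hillary Clinton"],
    "ruby on rails": ["David Heinemeier Hansson"],
}

def people_search(query: str) -> list[str]:
    """Interpret any query as a query about people: match query terms
    against tags and rank people by how many terms they match."""
    scores = Counter()
    for term, people in TAG_INDEX.items():
        if term in query.lower():
            for person in people:
                scores[person] += 1
    return [person for person, _ in scores.most_common()]

print(people_search("democrats from new york"))
```

Whatever the real ranking function is, the key property is the one shown here: the result type is always a list of people, never a list of pages.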

Rich semantics, tags and relationships

As a vertical engine, Spock knows important attributes that people have. Even in the beta stage, the set is quite rich: name, gender, age, occupation and location just to name a few. Perhaps the most interesting aspect of Spock is its usage of tags. Firstly, all frequent phrases that Spock extracts via its crawler become tags. In addition, users can also add tags. So Spock leverages a combination of automated tags and people power for tagging.

A special kind of tag in Spock is called ‘relationships’ – and it’s the secret sauce that glues people together. For example, Chelsea is related to Clinton because she is his daughter, but Bush is related to Clinton because he is the successor to the title of President. The key thing here is that relationships are explicit in Spock. These relationships taken together weave a complex web of connections between people that is completely realistic. Spock gives us a glimpse of how semantics emerge out of the simple mechanism of tagging.
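The explicit relationships described above amount to a labeled graph over people. A minimal sketch, with made-up data and structure (not Spock's implementation):

```python
# Edges carry a relationship label, so the link itself has meaning --
# the article's Chelsea/Clinton and Bush/Clinton examples become:

relationships = {}  # (person_a, person_b) -> label

def relate(a: str, b: str, label: str) -> None:
    relationships[(a, b)] = label

relate("Chelsea Clinton", "Bill Clinton", "daughter")
relate("George W. Bush", "Bill Clinton", "successor as President")

def related_to(person: str) -> dict:
    """All explicit relationships touching a person."""
    return {
        (a if b == person else b): label
        for (a, b), label in relationships.items()
        if person in (a, b)
    }

print(related_to("Bill Clinton"))
```

Because every edge is labeled, the graph can answer not just "who is connected to whom" but "how" – which is what makes the web of connections semantic rather than merely social.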

Feedback loops

The voting aspect of Spock also harnesses the power of automation and people. It is a simple, yet very interesting way to get feedback into the system. Spock is experimenting with letting people vote on the existing “facts” (tags/relationships) and it re-arranges information to reflect the votes. To be fair, the system is not yet tuned to do this correctly all the time – it’s hard to know right from wrong. However, it is clear that a flavor of this approach in the near future will ‘teach’ computers what the right answer is.
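One hedged way to picture this feedback loop: each extracted "fact" (a tag or relationship) accumulates up- and down-votes, and the system re-ranks facts by net score. The `Fact` structure, the sample facts, and the scoring rule are all assumptions for illustration, not Spock's internals:

```python
from dataclasses import dataclass

@dataclass
class Fact:
    text: str
    up: int = 0
    down: int = 0

    @property
    def score(self) -> int:
        # Simplest possible aggregation: net votes. A production system
        # would need abuse resistance, which this sketch ignores.
        return self.up - self.down

facts = [Fact("Chelsea is Bill Clinton's daughter"),
         Fact("Bill Clinton plays the guitar")]

facts[0].up += 5      # users confirm the first fact
facts[1].down += 3    # users vote the dubious one down

ranked = sorted(facts, key=lambda f: f.score, reverse=True)
print([f.text for f in ranked])
```

The re-ranking step is the whole mechanism: wrong extractions sink, confirmed ones rise, and the system "learns" without ever understanding the facts themselves.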

Limitations of Spock’s approach

The techniques that we’ve discussed are very impressive, but they have limitations. The main problem is that Spock is likely to have much more complete information about celebrities and well-known people than about ordinary people. The reason is the amount of data: more people are going to be tagging and voting on the president of the United States than on ordinary people. Unless, of course, Spock breaks out and becomes so viral that a lot of local communities form – much like on Facebook. While that’s possible, at this point it does not seem too likely. But even if Spock just becomes a search engine that works best for famous people, it is still very useful and powerful.


Spock is fascinating because of its focus and its leverage of semantics. The use of tags as relationships and the feedback loop strike me as having great potential to grow a learning system organically, in the manner that learning systems evolve in nature. Most importantly, it is pragmatic and instantly useful.

Leave a comment or trackback on ReadWriteWeb and be in to win a $30 Amazon voucher – courtesy of our competition sponsors AdaptiveBlue and their Netflix Queue Widget.

2 TrackBacks

Listed below are links to blogs that reference this entry: Spock – Vertical Search Done Right. TrackBack URL for this entry: http://www.readwriteweb.com/cgi-bin/mt/mt-tb.cgi/2309
» Weekly Wrapup, 25-29 June 2007 from Read/WriteWeb
The Weekly Wrapups have been a feature of Read/WriteWeb since the beginning of January 2005 (when they were called Web 2.0 Weekly Wrapups). Nowadays the Wrapup is designed for those of you who can’t keep up with a daily dose… Read More
» The Web’s Top Takeover Targets from Read/WriteWeb
This past year has been a very eventful one in the M&A arena, with many of web 2.0’s biggest names being snapped up. A few stand-outs include the likes of YouTube, Photobucket, Feedburner, Last.fm, and StumbleUpon. Yet, there still remains… Read More



  • Spock will also create a huge database of “ordinary” people, too. They’re aggregating Facebook, MySpace and LinkedIn. They have less known people, too. I was known to the system – there was not much detail, but it included my name, age, country and MySpace profile.

    If they start to index more resources, like domains (who owns which domains), blogs (there are millions of them…), more social networks or best: the web in general, they’re on the best way to actually become a search engine for _everybody_.

    Also, don’t underestimate the fact that everybody will at least tag himself. That’s our ego! 🙂

    Posted by: Sebastian | June 26, 2007 6:46 AM

  • I agree that there’s huge potential for Spock, and that it is very well done. Potential downside? If Spock does hit I can envision employers and recruiters making extensive use of it to check up on/get background on employees/prospects – which might not be such a good thing for some.

    Posted by: Chris | June 26, 2007 7:26 AM

  • spock is gonna take alot of money to market that domain. the name is terrible. spook is better. spoke is better. you would think they would at least common sense vertical web address like mylocator.com or something. the world does not need another website that you have to explain what it does. vertical done right needs no explanation to location. change the name. I like spoke better.

    Posted by: steven emery | June 26, 2007 9:11 AM

  • What the fuk is this?!?
    semenatic who? Dont they make antivirus ?
    Why would they want to do search engine.they cant tell me who stole my screwdriver but I know it was claxton before he left that POS.

    Posted by: Mike Hulubinka | June 26, 2007 10:50 AM

  • Pretty interesting technology. One of the default queries behind the log-in is “people killed by handguns.” I think the feedback loop feature is a great quality control mechanism, assuming it’s not terribly prone to abuse; it’s also a lot of fun to play with! I think I still have a couple invitations if anyone is interested in trying it out.

    Posted by: Cortland Coleman | June 26, 2007 8:23 PM

  • I am not excited by spock because its business objective is meaningless. it is a good tool to kill time. however, google is a great tool to save time.

    Posted by: keanu | June 26, 2007 8:59 PM

  • Well, I would like to make an interesting comment, but when I went to their site it was down for maintenance. A portent?

    Posted by: Alan Marks | June 27, 2007 6:15 AM

  • I had the same experience as Alan but now Spock’s back up it appears that it’s invitation only. As current users are able to invite others, it would be great if some generous person could send me an invitation! jason (at) talktoshiba.com

    Posted by: Jason | June 28, 2007 2:20 AM

  • hai all spocker

    Posted by: rmpal | July 3, 2007 5:22 AM

  • If you want free spock invites go to http://www.swapinvites.com/

    Posted by: Nathan | July 11, 2007 10:55 AM

  • Crawling the web does not always lead to good results…search on spock.com for “Christian” and just wonder about the results…

    Posted by: wayne | August 14, 2007 3:19 AM


Top-Down: A New Approach to the Semantic Web

Written by Alex Iskold / September 20, 2007 4:22 PM / 17 Comments

Earlier this week we wrote about the classic approach to the semantic web and the difficulties with that approach. While the original vision of a layer on top of the current web, which annotates information in a way that is “understandable” by computers, is compelling, there are technical, scientific and business issues that have been difficult to address. One of the technical difficulties that we outlined was the bottom-up nature of the classic semantic web approach. Specifically, each web site needs to annotate information in RDF, OWL, etc. in order for computers to be able to “understand” it.

As things stand today, there is little reason for web site owners to do that. The tools that would leverage the annotated information do not exist and there has not been any clearly articulated business and consumer value. Which means that there is no incentive for the sites to invest money into being compatible with the semantic web of the future.

But there are alternative approaches. We will argue that a more pragmatic, top-down approach to the semantic web not only makes sense, but is already well on the way toward becoming a reality. Many companies have been leveraging existing, unstructured information to build vertical, semantic services. Unlike the original vision, which is rather academic, these emergent solutions are driven by business and market potential.

In this post, we will look at the solution that we call the top-down approach to the semantic web, because instead of requiring developers to change or augment the web, this approach leverages and builds on top of current web as-is.

Why Do We Need The Semantic Web?

The complexity of the original vision of the semantic web and the lack of clear consumer benefits make the whole project unrealistic. The simple question – why do we need computers to understand semantics? – remains largely unanswered.

While some of us think that building AI is cool, the majority of people think that AI is a little bit silly, or perhaps even unsettling. And they are right. AI for the sake of AI does not make any sense. If we are talking about building intelligent machines, and if we need to spend money and energy annotating all the information in the world for them, then there needs to be a very clear benefit.

Stated the way it is, the semantic web becomes a vision in search of a reason. What if the problem was restated from the consumer point of view? Here is what we are really looking forward to with the semantic web:

  • Spend less time searching
  • Spend less time looking at things that do not matter
  • Spend less time explaining what we want to computers

A consumer focus and clear benefit for businesses needs to be there in order for the semantic web vision to be embraced by the marketplace.

What If The Problem Is Not That Hard?

If all we are trying to do is help people improve their online experiences, perhaps full “understanding” of semantics by computers is not even necessary. The best online search tool today is Google, which is based, essentially, on statistical frequency analysis and not semantics. Solutions that attempt to improve on Google by focusing on generalized semantics have so far struggled to do so.

The truth is that the understanding of natural language by computers is a really hard problem. We have the language ingrained in our genes. We learn language as we grow up. We learn things iteratively. We have the chance to clarify things when we do not understand them. None of this is easily replicated with computers.

But what if it is not even necessary to build the first generation of semantic tools? What if instead of trying to teach computers natural language, we hard-wired into computers the concepts of everyday things like books, music, movies, restaurants, stocks and even people. Would that help us be more productive and find things faster?

Simple Semantics: Nouns And Verbs

When we think about a book we think about a handful of things – title and author, maybe genre and the year it was published. Typically, though, we couldn’t care less about the publisher, edition and number of pages. Similarly, recipes provoke thoughts about cuisine and ingredients, while movies make us think about the plot, director, and stars.

When we think of people, we also think about a handful of things: birthday, where they live, how we’re related to them, etc. The profiles found on popular social networks are great examples of simple semantics based around people:

Books, people, recipes and movies are all examples of nouns. The things that we do on the web around these nouns – looking up similar books, finding more people who work for the same company, getting more recipes from the same chef, looking up pictures of movie stars – are similar to verbs in everyday language. These are contextual actions based on an understanding of the noun.

What if semantic applications hard-wired the understanding and recognition of these nouns, and then also hard-wired the verbs that make sense for them? We are actually well on our way to doing just that. Vertical search engines like Spock, Retrevo and ZoomInfo, the page-annotating technology from ClearForest, Dapper, and the Map+ extension for Firefox are just a few examples of top-down semantic web services.
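The idea of hard-wiring nouns and verbs can be sketched as a small schema: each noun type declares the handful of attributes people actually care about, plus the contextual actions (verbs) that make sense for it. All names below are illustrative assumptions, not any product's real schema:

```python
# Hard-wired "simple semantics": noun types with their salient
# attributes and the verbs (contextual actions) they support.
NOUNS = {
    "book":   {"attributes": ["title", "author", "genre", "year"],
               "verbs": ["find similar books", "more by this author"]},
    "person": {"attributes": ["name", "birthday", "location"],
               "verbs": ["find colleagues", "show relationships"]},
    "recipe": {"attributes": ["cuisine", "ingredients"],
               "verbs": ["more from this chef"]},
}

def verbs_for(noun_type: str) -> list[str]:
    """Contextual actions available once an entity's type is recognized."""
    return NOUNS.get(noun_type, {}).get("verbs", [])

print(verbs_for("book"))
```

The point of the sketch is that nothing here requires natural-language understanding: once an application recognizes that something is a book, the sensible actions follow from the schema.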

The Top-Down Semantic Web Service

The essence of a top-down semantic web service is simple – leverage existing web information, apply specific, vertical semantic knowledge and then redeliver the results via a consumer-centric application. Consider the vertical search engine Spock, which scans the web for information about people. It knows how to recognize names in HTML pages, and it also looks for attributes that all people have – birthdays, locations, marital status, etc. In addition, Spock “understands” that people relate to each other. If you look up Bush, then Clinton will show up as a predecessor. If you look up Steve Jobs, then Bill Gates will come up as a rival.

In other words, Spock takes simple, everyday semantics about people and applies them to information that already exists online. The result? A unique and useful vertical search engine for people. Further, note that Spock does not require the information to be re-annotated in RDF and OWL. Instead, the company builds adapters that use heuristics to extract the data. The engine does not have a full understanding of the semantics of people, however. For example, it does not know that people like different kinds of ice cream – but it doesn’t need to. The point is that by focusing on simple semantics, Spock is able to deliver a useful end-user service.

Another, much simpler, example is the Map+ add-on for Firefox. This application recognizes addresses and provides a map popup using Yahoo! Maps. It is precisely the simplicity of this application that conveys the power of simple semantics. The add-on “knows” what addresses look like. Sure, sometimes it makes mistakes, but most of the time it tags addresses in online documents properly. So it leverages existing information and delivers direct end-user utility by mashing it up with Yahoo! Maps.
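As a toy illustration of this kind of heuristic recognition (not Map+'s actual code), a deliberately crude pattern for things that "look like" street addresses shows both the power and the fallibility of the approach – a real recognizer would need far more cases, and would still occasionally be wrong:

```python
import re

# Crude heuristic: a number, some capitalized words, then a street-type
# suffix. This is an assumption for illustration, not a robust parser.
ADDRESS_RE = re.compile(
    r"\b\d{1,5}\s+[A-Z][a-z]+(?:\s[A-Z][a-z]+)*\s+"
    r"(?:St|Street|Ave|Avenue|Rd|Road|Blvd|Boulevard)\b"
)

def find_addresses(text: str) -> list[str]:
    """Tag address-like spans in free text so they could be handed
    to a mapping service."""
    return ADDRESS_RE.findall(text)

doc = "Visit us at 1600 Pennsylvania Avenue or 10 Downing Street for tea."
print(find_addresses(doc))
```

A few dozen lines of pattern matching deliver real utility without any deep "understanding" – which is exactly the trade the top-down approach makes.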

The Challenges Facing The Top-Down Approach

Despite being effective, the somewhat simplistic top-down approach has several problems. First, it is not really the semantic web as originally defined; instead it’s a group of semantic web services and applications that create utility by leveraging simple semantics. So proponents of the classic approach would protest, and they would be right. Another issue is that these services do not always get the semantics right, because of ambiguities. Because the recognition is algorithmic and not based on an underlying RDF representation, it is not perfect.

It seems to me that it is better to have simpler solutions that work 90% of the time than complex ones that never arrive. The key questions here are: How exactly are mistakes handled? And, is there a way for the user to correct the problem? The answers will be left up to the individual application. In life we are used to other people being unpredictable, but with computers, at least in theory, we expect things to work the same every time.

Yet another issue is that these simple solutions may not scale well. If the underlying unstructured data changes can the algorithms be changed quickly enough? This is always an issue with things that sit on top of other things without an API. Of course, if more web sites had APIs, as we have previously suggested, the top-down semantic web would be much easier and more certain.


While the original vision of the semantic web is grandiose and inspiring, in practice it has been difficult to achieve because of the engineering, scientific and business challenges. The lack of a specific and simple consumer focus makes it mostly an academic exercise. In the meantime, existing data is being leveraged by applying simple heuristics and making assumptions about particular verticals. What we have dubbed top-down semantic web applications have been appearing online, improving end-user experiences by leveraging semantics to deliver real, tangible services.

Will the bottom-up semantic web ever happen? Possibly. But at the moment the precise path to get there is not quite clear. In the meantime, we can all enjoy a better online experience and get where we need to go faster thanks to simple top-down semantic web services.


5 TrackBacks

Listed below are links to blogs that reference this entry: Top-Down: A New Approach to the Semantic Web. TrackBack URL for this entry: http://www.readwriteweb.com/cgi-bin/mt/mt-tb.cgi/1638
Summary: The original vision of the semantic web as a layer on top of the current web, annotated in a way that computers can “understand,” is certainly grandiose and intriguing. Yet, for the past decade it has been a kind… Read More
Alex Iskold’s ‘Semantic Web: Difficulties with the Classic Approach’ for Read/Write Web was one of the posts rolled up into yesterday’s outpouring here on Nodalities. He’s been busy during the (my) night, and I woke this morning to ‘Top-Down:… Read More
Yesterday brought an enlightening post by Alex Iskold, entitled “Top-Down: A New Approach to the Semantic Web“: “While the original vision of the semantic web is grandiose and inspiring in practice it has been difficult to achieve bec… Read More
Here is a summary of the week’s Web Tech action on Read/WriteWeb. Note that you can subscribe to the Weekly Wrapups, either via the special RSS feed or by email. Web News Yahoo! Drops $350m on Zimbra; an Open Source,… Read More
In theory the semantic web is fantastic: redescribing all the information that already exists on the web in an attempt to make computers understand the meaning of things. In short, it would be an extra layer on the web with meta-informat… Read More



  • Hi Alex. The top-down approach alone is not enough to reach the Semantic Web. It’s not even enough to reach the half-hearted attempt at the Semantic Web that you describe. I believe both the bottom-up and top-down approaches will be needed to reach the goal. At this time we’re faced with far too few people attempting either approach. Top-down isn’t even fully feasible yet, whereas a bottom-up approach can at least be done with currently available technology. I fully disagree with your statement that the complexity of the original vision of the semantic web and lack of clear consumer benefits makes the whole project unrealistic.

    Posted by: James | September 20, 2007 4:58 PM

  • I think the use of Microformats does provide some actual practical usage of a form of machine-readable semantic formatting for content. Ok, it’s maybe not quite “the semantic web” people envision but it does have some usefulness. I also read a blog post yesterday by Peter Krantz entitled “RDFa – Implications for Accessibility” which talks about the W3C’s RDFa HTML extensions as opposed to Microformats as a means to include machine readable data.

    I wasn’t sure if I should be writing ‘semantic web’ with a capital S as some people seem to use that as suggesting something more than just the concept of ‘semantic standards compliant HTML / XHTML’!

    Posted by: Rick Curran | September 20, 2007 5:10 PM

  • If you’re talking about the Semantic Web (the W3C attempt, and the actual Semantic Web) then you use caps for the S and the W. If you’re talking about types of webs (reactive web, proactive web, semantic web) you would not use caps (proper noun vs noun).

    Posted by: James | September 20, 2007 5:29 PM

  • My bullshit meter went off after the first paragraph. You are so far off dude. You should take some time to “understand” it before you try to write about it. Otherwise you are just making noise. Your article is just noise.

    Where do I check that this article is not useful?

    Posted by: Ken Ewell | September 20, 2007 6:07 PM

  • Good article, especially about the top down vs. bottom up. I am working on a very specific problem – make it easy for teachers to create lessons – and search is not an answer! We are working on an overlay – a top down semantic web, which not only includes normal metadata but also more domain specific, contextual information. Some thoughts at http://doubleclix.wordpress.com/

    Posted by: Krishna Sankar | September 20, 2007 6:08 PM

  • The bottom-up and top-down approaches are not mutually exclusive, so there is no point in trying to pit one against the other. And indeed, why wait until the perfect vision is implemented? If some value can be delivered now by cutting corners, and more value later by investing in a more formal approach in parallel, then surely everyone wins. If a top-down service is able to cheaply extract facts from the Web now, then surely, it should be able to easily translate these facts into predicates (and map them to ontologies) so as to plug into bottom-up machinery (rules, proofs, etc.) as it becomes available. It’s all good.

    Posted by: Jean-Michel Decombe | September 20, 2007 6:21 PM

  • I must admit I was very disappointed to see an article on this topic with such a wide audience not take the opportunity to increase the visibility of microformats. In the perfect world, publishing platforms (wordpress, cms’s in general, etc.) would allow the publisher to easily mark certain parts of their content with semantic value, using microformats.

    Then, all modern browsers should be able to recognize them and provide the users with some useful actions. Add hCards to your address book, events to your calendar of choice, etc. The list goes on and on.

    From where I stand, we’re not that far. Firefox 3, MS IE8 and Apple have all shown interest in this matter. Let’s all hold hands and see what they have in store for us.

    Sir Tim Berners-Lee is much more than a dreamer. He is, as we all know, a visionary. Thank you, Sir.

    I hate spamming, but if you’re interested in these matters visit microformats dot org for more info and/or click my name for a fresh screencast showing how this works for the users.


    Posted by: André Luís | September 20, 2007 6:24 PM

  • How many stacked straw men does it take to reach the Moon? Apparently, both not too many, and quite a few. This week’s dust-up about “what is the semantic Web?” is but a mote in the eye of history, and even within very recent history (say, 1-2 years) at that.

    The real story behind everyone desiring to state the obvious about easy things and hard things relating to information federation is that commercial prospects must be near at hand. I take this as good news.

    It will be interesting to see whose silks get dirtied as this jockeying continues out of the gate.

    Posted by: Mike Bergman | September 20, 2007 7:17 PM

  • I get the same sense of things Mike. The heated debating back and forth shows not only that the Semantic Web is taking in larger numbers of followers, but that we’re nearing a time when we can put what we’ve researched to practical use. The interesting thing to me will be the kind of products and services that will emerge. To me it’s not entirely clear yet what markets will be most profitable for Semantic Web technology (and semantic technologies in general).

    I hear that there is a lot of money in the market for a system that radically simplifies data exchange in the enterprise, but consumer products and services I’m not sure about. I don’t think there will be a market for “Semantic Web browsers.” I’m sure Firefox 4 will accommodate any such needs, and I would hope that becomes the case.

    I need to do my research on what the current “Semantic Web companies” are up to.

    Posted by: James | September 20, 2007 8:53 PM

  • I lost my comment on your last post somewhere, so I blogged it. All I’d add here is that most of the systems you describe are effectively domain-specific data silos. Unless there’s Web-based interop, these things are merely on the Web, not of the Web. Semantic Web technologies are designed for truly Web-based data integration, they are essentially an evolution of the link.

    Mike’s comment above creased me up – especially since you only have to look at his collection of Sweet Tools to see the “bottom-up Semantic Web” is coming along just fine, thank you 🙂 He does have a point – all this is really about is moving from a Web of Documents to a more general Web of Data.

    For a continual update, subscribe to Planet RDF, or even This Week’s Semantic Web. Coders might also be interested in the Developers Guide to Semantic Web Toolkits for different Programming Languages. As well as tools and applications, there’s more and more linked data appearing on the Web all the time…

    Posted by: Danny | September 21, 2007 3:38 AM

  • some good points though I think you might use fewer words. e.g. the goals

    Spend less time searching
    Spend less time looking at things that do not matter
    Spend less time explaining what we want to computers

    are in short: “spend less time on things you do not like”.

    still, excellent post. as always 😉

    Posted by: Peter P | September 21, 2007 5:02 AM

  • Alex,
    You make good points. We need both top-down and bottom-up approaches. Isn’t GRDDL (http://www.w3.org/2004/01/rdxh/spec) a generic approach to gather information from documents?

    Simile projects and RDFizers are worth a look (http://simile.mit.edu/wiki/RDFizers)

    I think semantic web components – a way to describe the components that make up web applications, may be another approach to build bottom-up web.

    We do need a general framework of resource description as a common vocabulary whether our approach is top-down or bottom-up.

    We do need more dialog and I am glad that you started it with this post.


    Posted by: Dorai Thodla | September 21, 2007 5:23 AM

  • Peter P’s last comment pretty much sums up what everyone wants from new web technology (regardless of whether or not it falls under the semantic web umbrella), doesn’t it? And I quote:

    “Spend less time searching
    Spend less time looking at things that do not matter
    Spend less time explaining what we want to computers”

    In general, people want to spend less time doing the boring stuff and get right to the good/relevant/interesting stuff (if I could add a picture, I’d totally post the “This is relevant to my interests” lolcat right here, because really, what topic couldn’t benefit from a little lolcat levity?).

    *by the way, since I know that many RWW readers are of the entrepreneurial type, if anyone is working on a project or has an idea that accomplishes the above missions, check out the Knight News Challenge – http://newschallenge.org.

    Posted by: Jackie | September 21, 2007 11:04 AM

  • I have just retired after spending years trying to play a small part in controlling a corporate intranet with rules as basic as “use HTML”. It degenerated into a collection of thousands of PDFs (with internal links) and even Word documents posted straight to the Intranet. In spite of supplied templates and document management tools the information suppliers saw the Intranet as if it were a paper filing cabinet. HTML combined with proper use of CSS goes a long way towards basic structure but even when given the tools information suppliers will not see the reason to use them.

    Posted by: Albert Mispel | September 21, 2007 1:23 PM

  • Good effort, but there is very little new here. Lots of work has been done in the area of semantic integration, which understands that an inference architecture will always result in false associations that typically require lots of manual refinements (customizations) of ontologies. Semantic integration (where mission critical systems are involved) is a case of the perfect being the enemy of the good. If only we could return lots of choices and let the user pick. Google has it easy.

    Posted by: Pano | September 21, 2007 6:57 PM

  • yea google… they will get this …

    Posted by: Nature Wallpaper | September 21, 2007 11:34 PM

  • Thanks a lot for this post and the previous one on semantic web. Really interesting. I was wondering whether you will address what you said about computers not being able to understand human language, later on. I think this is one of the fundamental problems with semantic web. Although I do agree with you that we should do what we can, even if that means we cannot get any further than the “simple semantic web”. More comments on my blog.

    Posted by: Samuel Driessen | December 18, 2007 12:36 PM




Free! Why $0.00 Is the Future of Business

By Chris Anderson | 02.25.08 | 12:00 AM

At the age of 40, King Gillette was a frustrated inventor, a bitter anticapitalist, and a salesman of cork-lined bottle caps. It was 1895, and despite ideas, energy, and wealthy parents, he had little to show for his work. He blamed the evils of market competition. Indeed, the previous year he had published a book, The Human Drift, which argued that all industry should be taken over by a single corporation owned by the public and that millions of Americans should live in a giant city called Metropolis powered by Niagara Falls. His boss at the bottle cap company, meanwhile, had just one piece of advice: Invent something people use and throw away.

One day, while he was shaving with a straight razor that was so worn it could no longer be sharpened, the idea came to him. What if the blade could be made of a thin metal strip? Rather than spending time maintaining the blades, men could simply discard them when they became dull. A few years of metallurgy experimentation later, the disposable-blade safety razor was born. But it didn’t take off immediately. In its first year, 1903, Gillette sold a total of 51 razors and 168 blades. Over the next two decades, he tried every marketing gimmick he could think of. He put his own face on the package, making him both legendary and, some people believed, fictional. He sold millions of razors to the Army at a steep discount, hoping the habits soldiers developed at war would carry over to peacetime. He sold razors in bulk to banks so they could give them away with new deposits (“shave and save” campaigns). Razors were bundled with everything from Wrigley’s gum to packets of coffee, tea, spices, and marshmallows. The freebies helped to sell those products, but the tactic helped Gillette even more. By giving away the razors, which were useless by themselves, he was creating demand for disposable blades.
A few billion blades later, this business model is now the foundation of entire industries: Give away the cell phone, sell the monthly plan; make the videogame console cheap and sell expensive games; install fancy coffeemakers in offices at no charge so you can sell managers expensive coffee sachets.

Thanks to Gillette, the idea that you can make money by giving something away is no longer radical. But until recently, practically everything “free” was really just the result of what economists would call a cross-subsidy: You’d get one thing free if you bought another, or you’d get a product free only if you paid for a service.

Over the past decade, however, a different sort of free has emerged. The new model is based not on cross-subsidies — the shifting of costs from one product to another — but on the fact that the cost of products themselves is falling fast. It’s as if the price of steel had dropped so close to zero that King Gillette could give away both razor and blade, and make his money on something else entirely. (Shaving cream?)

You know this freaky land of free as the Web. A decade and a half into the great online experiment, the last debates over free versus pay online are ending. In 2007 The New York Times went free; this year, so will much of The Wall Street Journal. (The remaining fee-based parts, new owner Rupert Murdoch announced, will be “really special … and, sorry to tell you, probably more expensive.” This calls to mind one version of Stewart Brand’s original aphorism from 1984: “Information wants to be free. Information also wants to be expensive … That tension will not go away.”)

Once a marketing gimmick, free has emerged as a full-fledged economy. Offering free music proved successful for Radiohead, Trent Reznor of Nine Inch Nails, and a swarm of other bands on MySpace that grasped the audience-building merits of zero. The fastest-growing parts of the gaming industry are ad-supported casual games online and free-to-try massively multiplayer online games. Virtually everything Google does is free to consumers, from Gmail to Picasa to GOOG-411.

The rise of “freeconomics” is being driven by the underlying technologies that power the Web. Just as Moore’s law dictates that a unit of processing power halves in price every 18 months, the price of bandwidth and storage is dropping even faster. Which is to say, the trend lines that determine the cost of doing business online all point the same way: to zero.

But tell that to the poor CIO who just shelled out six figures to buy another rack of servers. Technology sure doesn’t feel free when you’re buying it by the gross. Yet if you look at it from the other side of the fat pipe, the economics change. That expensive bank of hard drives (fixed costs) can serve tens of thousands of users (marginal costs). The Web is all about scale, finding ways to attract the most users for centralized resources, spreading those costs over larger and larger audiences as the technology gets more and more capable. It’s not about the cost of the equipment in the racks at the data center; it’s about what that equipment can do. And every year, like some sort of magic clockwork, it does more and more for less and less, bringing the marginal costs of technology in the units that we individuals consume closer to zero.
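The fixed-versus-marginal arithmetic behind this can be sketched in a few lines. A toy model with invented figures (the six-figure rack, the user counts), not data from the article:

```python
# Toy model: a fixed bank of hardware amortized over a growing audience.
# All figures here are illustrative assumptions.

FIXED_COST = 100_000  # a six-figure rack of servers, in dollars
USER_COUNTS = [1_000, 10_000, 100_000]

for users in USER_COUNTS:
    per_user = FIXED_COST / users  # each user's share of the fixed cost
    print(f"{users:>7} users -> ${per_user:,.2f} of hardware cost each")
```

The fixed cost never changes; only the denominator grows, which is why the per-user share heads toward zero as the audience scales.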

Photo Illustration: Jeff Mermelstein

As much as we complain about how expensive things are getting, we’re surrounded by forces that are making them cheaper. Forty years ago, the principal nutritional problem in America was hunger; now it’s obesity, for which we have the Green Revolution to thank. Forty years ago, charity was dominated by clothing drives for the poor. Now you can get a T-shirt for less than the price of a cup of coffee, thanks to China and global sourcing. So too for toys, gadgets, and commodities of every sort. Even cocaine has pretty much never been cheaper (globalization works in mysterious ways).

Digital technology benefits from these dynamics and from something else even more powerful: the 20th-century shift from Newtonian to quantum machines. We’re still just beginning to exploit atomic-scale effects in revolutionary new materials — semiconductors (processing power), ferromagnetic compounds (storage), and fiber optics (bandwidth). In the arc of history, all three substances are still new, and we have a lot to learn about them. We are just a few decades into the discovery of a new world.

What does this mean for the notion of free? Well, just take one example. Last year, Yahoo announced that Yahoo Mail, its free webmail service, would provide unlimited storage. Just in case that wasn’t totally clear, that’s “unlimited” as in “infinite.” So the market price of online storage, at least for email, has now fallen to zero (see “Webmail Windfall”). And the stunning thing is that nobody was surprised; many had assumed infinite free storage was already the case.

For good reason: It’s now clear that practically everything Web technology touches starts down the path to gratis, at least as far as we consumers are concerned. Storage now joins bandwidth (YouTube: free) and processing power (Google: free) in the race to the bottom. Basic economics tells us that in a competitive market, price falls to the marginal cost. There’s never been a more competitive market than the Internet, and every day the marginal cost of digital information comes closer to nothing.

One of the old jokes from the late-’90s bubble was that there are only two numbers on the Internet: infinity and zero. The first, at least as it applied to stock market valuations, proved false. But the second is alive and well. The Web has become the land of the free.

The result is that we now have not one but two trends driving the spread of free business models across the economy. The first is the extension of King Gillette’s cross-subsidy to more and more industries. Technology is giving companies greater flexibility in how broadly they can define their markets, allowing them more freedom to give away products or services to one set of customers while selling to another set. Ryanair, for instance, has disrupted its industry by defining itself more as a full-service travel agency than a seller of airline seats (see “How Can Air Travel Be Free?”).

The second trend is simply that anything that touches digital networks quickly feels the effect of falling costs. There’s nothing new about technology’s deflationary force, but what is new is the speed at which industries of all sorts are becoming digital businesses and thus able to exploit those economics. When Google turned advertising into a software application, a classic services business formerly based on human economics (things get more expensive each year) switched to software economics (things get cheaper). So, too, for everything from banking to gambling. The moment a company’s primary expenses become things based in silicon, free becomes not just an option but the inevitable destination.

Forty years ago, Caltech professor Carver Mead identified the corollary to Moore’s law of ever-increasing computing power. Every 18 months, Mead observed, the price of a transistor would halve. And so it did, going from tens of dollars in the 1960s to approximately 0.000001 cent today for each of the transistors in Intel’s latest quad-core. This, Mead realized, meant that we should start to “waste” transistors.
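Mead’s trend line can be sanity-checked with back-of-the-envelope arithmetic: falling from tens of dollars to a millionth of a cent is roughly a factor of a billion, or about thirty halvings. A sketch, assuming a $10 starting price (the article says only “tens of dollars”):

```python
import math

# Price per transistor halves every 18 months (1.5 years).
start_price = 10.0  # dollars per transistor in the 1960s (rough assumption)
end_price = 1e-8    # 0.000001 cent = $0.00000001 today

halvings = math.log2(start_price / end_price)  # ~factor of a billion
years = halvings * 1.5
print(f"{halvings:.1f} halvings over about {years:.0f} years")
# -> 29.9 halvings over about 45 years
```

Thirty halvings at eighteen months apiece is about forty-five years, which squares with the article’s “forty years ago” framing.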

Waste is a dirty word, and that was especially true in the IT world of the 1970s. An entire generation of computer professionals had been taught that their job was to dole out expensive computer resources sparingly. In the glass-walled facilities of the mainframe era, these systems operators exercised their power by choosing whose programs should be allowed to run on the costly computing machines. Their role was to conserve transistors, and they not only decided what was worthy but also encouraged programmers to make the most economical use of their computer time. As a result, early developers devoted as much code as possible to running their core algorithms efficiently and gave little thought to user interface. This was the era of the command line, and the only conceivable reason someone might have wanted to use a computer at home was to organize recipe files. In fact, the world’s first personal computer, a stylish kitchen appliance offered by Honeywell in 1969, came with integrated counter space.

And here was Mead, telling programmers to embrace waste. They scratched their heads — how do you waste computer power? It took Alan Kay, an engineer working at Xerox’s Palo Alto Research Center, to show them. Rather than conserve transistors for core processing functions, he developed a computer concept — the Dynabook — that would frivolously deploy silicon to do silly things: draw icons, windows, pointers, and even animations on the screen. The purpose of this profligate eye candy? Ease of use for regular folks, including children. Kay’s work on the graphical user interface became the inspiration for the Xerox Alto, and then the Apple Macintosh, which changed the world by opening computing to the rest of us. (We, in turn, found no shortage of things to do with it; tellingly, organizing recipes was not high on the list.)

Of course, computers were not free then, and they are not free today. But what Mead and Kay understood was that the transistors in them — the atomic units of computation — would become so numerous that on an individual basis, they’d be close enough to costless that they might as well be free. That meant software writers, liberated from worrying about scarce computational resources like memory and CPU cycles, could become more and more ambitious, focusing on higher-order functions such as user interfaces and new markets such as entertainment. And that meant software of broader appeal, which brought in more users, who in turn found even more uses for computers. Thanks to that wasteful throwing of transistors against the wall, the world was changed.

What’s interesting is that transistors (or storage, or bandwidth) don’t have to be completely free to invoke this effect. At a certain point, they’re cheap enough to be safely disregarded. The Greek philosopher Zeno wrestled with this concept in a slightly different context. In Zeno’s dichotomy paradox, you run toward a wall. As you run, you halve the distance to the wall, then halve it again, and so on. But if you continue to subdivide space forever, how can you ever actually reach the wall? (The answer is that you can’t: Once you’re within a few nanometers, atomic repulsion forces become too strong for you to get any closer.)

In economics, the parallel is this: If the unitary cost of technology (“per megabyte” or “per megabit per second” or “per thousand floating-point operations per second”) is halving every 18 months, when does it come close enough to zero to say that you’ve arrived and can safely round down to nothing? The answer: almost always sooner than you think.
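“Sooner than you think” is easy to make concrete: with a fixed halving period, the wait to cross any “cheap enough to ignore” threshold grows only with the logarithm of how far the price has to fall. A hedged sketch; the function name, starting price, and threshold are all invented for illustration:

```python
import math

def years_until_negligible(price, threshold, halving_years=1.5):
    """Years until a price that halves every `halving_years` drops below `threshold`."""
    if price <= threshold:
        return 0.0
    # Number of whole halvings needed, times the length of each halving period.
    return math.ceil(math.log2(price / threshold)) * halving_years

# Storage at $1.00 per gigabyte heading for a tenth of a cent:
print(years_until_negligible(1.00, 0.001))  # -> 15.0 (ten halvings)
```

A thousandfold drop takes only ten halvings; a millionfold drop takes just twenty, not a thousand times longer.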

What Mead understood is that a psychological switch should flip as things head toward zero. Even though they may never become entirely free, as the price drops there is great advantage to be had in treating them as if they were free. Not too cheap to meter, as Atomic Energy Commission chief Lewis Strauss said in a different context, but too cheap to matter. Indeed, the history of technological innovation has been marked by people spotting such price and performance trends and getting ahead of them.

From the consumer’s perspective, though, there is a huge difference between cheap and free. Give a product away and it can go viral. Charge a single cent for it and you’re in an entirely different business, one of clawing and scratching for every customer. The psychology of “free” is powerful indeed, as any marketer will tell you.

This difference between cheap and free is what venture capitalist Josh Kopelman calls the “penny gap.” People think demand is elastic and that volume falls in a straight line as price rises, but the truth is that zero is one market and any other price is another. In many cases, that’s the difference between a great market and none at all.

The huge psychological gap between “almost zero” and “zero” is why micropayments failed. It’s why Google doesn’t show up on your credit card. It’s why modern Web companies don’t charge their users anything. And it’s why Yahoo gives away disk drive space. The question of infinite storage was not if but when. The winners made their stuff free first.

Traditionalists wring their hands about the “vaporization of value” and “demonetization” of entire industries. The success of craigslist’s free listings, for instance, has hurt the newspaper classified ad business. But that lost newspaper revenue is certainly not ending up in the craigslist coffers. In 2006, the site earned an estimated $40 million from the few things it charges for. That’s about 12 percent of the $326 million by which classified ad revenue declined that year.

But free is not quite as simple — or as stupid — as it sounds. Just because products are free doesn’t mean that someone, somewhere, isn’t making huge gobs of money. Google is the prime example of this. The monetary benefits of craigslist are enormous as well, but they’re distributed among its tens of thousands of users rather than funneled straight to Craig Newmark Inc. To follow the money, you have to shift from a basic view of a market as a matching of two parties — buyers and sellers — to a broader sense of an ecosystem with many parties, only some of which exchange cash.

The most common of the economies built around free is the three-party system. Here a third party pays to participate in a market created by a free exchange between the first two parties. Sound complicated? You’re probably experiencing it right now. It’s the basis of virtually all media.

In the traditional media model, a publisher provides a product free (or nearly free) to consumers, and advertisers pay to ride along. Radio is “free to air,” and so is much of television. Likewise, newspaper and magazine publishers don’t charge readers anything close to the actual cost of creating, printing, and distributing their products. They’re not selling papers and magazines to readers, they’re selling readers to advertisers. It’s a three-way market.

In a sense, what the Web represents is the extension of the media business model to industries of all sorts. This is not simply the notion that advertising will pay for everything. There are dozens of ways that media companies make money around free content, from selling information about consumers to brand licensing, “value-added” subscriptions, and direct ecommerce (see How-To Wiki for a complete list). Now an entire ecosystem of Web companies is growing up around the same set of models.

Between new ways companies have found to subsidize products and the falling cost of doing business in a digital age, the opportunities to adopt a free business model of some sort have never been greater. But which one? And how many are there? Probably hundreds, but the priceless economy can be broken down into six broad categories:

· “Freemium”
What’s free: Web software and services, some content. Free to whom: users of the basic version.

This term, coined by venture capitalist Fred Wilson, is the basis of the subscription model of media and is one of the most common Web business models. It can take a range of forms: varying tiers of content, from free to expensive, or a premium “pro” version of some site or software with more features than the free version (think Flickr and the $25-a-year Flickr Pro).

Again, this sounds familiar. Isn’t it just the free sample model found everywhere from perfume counters to street corners? Yes, but with a pretty significant twist. The traditional free sample is the promotional candy bar handout or the diapers mailed to a new mother. Since these samples have real costs, the manufacturer gives away only a tiny quantity — hoping to hook consumers and stimulate demand for many more.

But for digital products, this ratio of free to paid is reversed. A typical online site follows the 1 Percent Rule — 1 percent of users support all the rest. In the freemium model, that means for every user who pays for the premium version of the site, 99 others get the basic free version. The reason this works is that the cost of serving the 99 percent is close enough to zero to call it nothing.
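The 1 Percent Rule only works because serving a free user costs almost nothing, and a toy profit calculation shows how thin that margin is. The numbers are invented for illustration; the $25-a-year price echoes the Flickr Pro example above:

```python
# Freemium toy model: 1 paying user subsidizes 99 free ones.
# All numbers are illustrative assumptions.

def profit_per_100_users(price, cost_per_user):
    """One paid subscription, minus the cost of serving all 100 users."""
    return price - 100 * cost_per_user

# At $25/year with ~$0.02/year of bandwidth and storage per user:
print(profit_per_100_users(25.00, 0.02))  # -> 23.0

# But if serving each user cost $0.50/year, the same $25 price loses money:
print(profit_per_100_users(25.00, 0.50))  # -> -25.0
```

The model is exquisitely sensitive to the per-user cost: multiply it by 25 and the business flips from profitable to underwater, which is why freemium had to wait for near-zero marginal costs.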

· Advertising
What’s free: content, services, software, and more. Free to whom: everyone.

Broadcast commercials and print display ads have given way to a blizzard of new Web-based ad formats: Yahoo’s pay-per-pageview banners, Google’s pay-per-click text ads, Amazon’s pay-per-transaction “affiliate ads,” and site sponsorships were just the start. Then came the next wave: paid inclusion in search results, paid listing in information services, and lead generation, where a third party pays for the names of people interested in a certain subject. Now companies are trying everything from product placement (PayPerPost) to pay-per-connection on social networks like Facebook. All of these approaches are based on the principle that free offerings build audiences with distinct interests and expressed needs that advertisers will pay to reach.

· Cross-subsidies
What’s free: any product that entices you to pay for something else. Free to whom: everyone willing to pay eventually, one way or another.

When Wal-Mart charges $15 for a new hit DVD, it’s a loss leader. The company is offering the DVD below cost to lure you into the store, where it hopes to sell you a washing machine at a profit. Expensive wine subsidizes food in a restaurant, and the original “free lunch” was a gratis meal for anyone who ordered at least one beer in San Francisco saloons in the late 1800s. In any package of products and services, from banking to mobile calling plans, the price of each individual component is often determined by psychology, not cost. Your cell phone company may not make money on your monthly minutes — it keeps that fee low because it knows that’s the first thing you look at when picking a carrier — but your monthly voicemail fee is pure profit.

On a busy corner in São Paulo, Brazil, street vendors pitch the latest “tecnobrega” CDs, including one by a hot band called Banda Calypso. Like CDs from most street vendors, these did not come from a record label. But neither are they illicit. They came directly from the band. Calypso distributes masters of its CDs and CD liner art to street vendor networks in towns it plans to tour, with full agreement that the vendors will copy the CDs, sell them, and keep all the money. That’s OK, because selling discs isn’t Calypso’s main source of income. The band is really in the performance business — and business is good. Traveling from town to town this way, preceded by a wave of supercheap CDs, Calypso has filled its shows and paid for a private jet.

The vendors generate literal street cred in each town Calypso visits, and its omnipresence in the urban soundscape means that it gets huge crowds to its rave/dj/concert events. Free music is just publicity for a far more lucrative tour business. Nobody thinks of this as piracy.

· Zero marginal cost
What’s free: things that can be distributed without an appreciable cost to anyone. Free to whom: everyone.

This describes nothing so well as online music. Between digital reproduction and peer-to-peer distribution, the real cost of distributing music has truly hit bottom. This is a case where the product has become free because of sheer economic gravity, with or without a business model. That force is so powerful that laws, guilt trips, DRM, and every other barrier to piracy the labels can think of have failed. Some artists give away their music online as a way of marketing concerts, merchandise, licensing, and other paid fare. But others have simply accepted that, for them, music is not a moneymaking business. It’s something they do for other reasons, from fun to creative expression. Which, of course, has always been true for most musicians anyway.

· Labor exchange
What’s free: Web sites and services. Free to whom: all users, since the act of using these sites and services actually creates something of value.

You can get free porn if you solve a few captchas, those scrambled text boxes used to block bots. What you’re actually doing is giving answers to a bot used by spammers to gain access to other sites — which is worth more to them than the bandwidth you’ll consume browsing images. Likewise for rating stories on Digg, voting on Yahoo Answers, or using Google’s 411 service (see “How Can Directory Assistance Be Free?”). In each case, the act of using the service creates something of value, either improving the service itself or creating information that can be useful somewhere else.

· Gift economy
What’s free: the whole enchilada, be it open source software or user-generated content. Free to whom: everyone.

From Freecycle (free secondhand goods for anyone who will take them away) to Wikipedia, we are discovering that money isn’t the only motivator. Altruism has always existed, but the Web gives it a platform where the actions of individuals can have global impact. In a sense, zero-cost distribution has turned sharing into an industry. In the monetary economy it all looks free — indeed, in the monetary economy it looks like unfair competition — but that says more about our shortsighted ways of measuring value than it does about the worth of what’s created.

Enabled by the miracle of abundance, digital economics has turned traditional economics upside down. Read your college textbook and it’s likely to define economics as “the social science of choice under scarcity.” The entire field is built on studying trade-offs and how they’re made. Milton Friedman himself reminded us time and time again that “there’s no such thing as a free lunch.”

But Friedman was wrong in two ways. First, a free lunch doesn’t necessarily mean the food is being given away or that you’ll pay for it later — it could just mean someone else is picking up the tab. Second, in the digital realm, as we’ve seen, the main feedstocks of the information economy — storage, processing power, and bandwidth — are getting cheaper by the day. Two of the main scarcity functions of traditional economics — the marginal costs of manufacturing and distribution — are rushing headlong to zip. It’s as if the restaurant suddenly didn’t have to pay any food or labor costs for that lunch.

Surely economics has something to say about that?

It does. The word is externalities, a concept that holds that money is not the only scarcity in the world. Chief among the others are your time and respect, two factors that we’ve always known about but have only recently been able to measure properly. The “attention economy” and “reputation economy” are too fuzzy to merit an academic department, but there’s something real at the heart of both. Thanks to Google, we now have a handy way to convert from reputation (PageRank) to attention (traffic) to money (ads). Anything you can consistently convert to cash is a form of currency itself, and Google plays the role of central banker for these new economies.

There is, presumably, a limited supply of reputation and attention in the world at any point in time. These are the new scarcities — and the world of free exists mostly to acquire these valuable assets for the sake of a business model to be identified later. Free shifts the economy from a focus on only that which can be quantified in dollars and cents to a more realistic accounting of all the things we truly value today.

Between digital economics and the wholesale embrace of King Gillette’s experiment in price shifting, we are entering an era when free will be seen as the norm, not an anomaly. How big a deal is that? Well, consider this analogy: In 1954, at the dawn of nuclear power, Lewis Strauss, head of the Atomic Energy Commission, promised that we were entering an age when electricity would be “too cheap to meter.” Needless to say, that didn’t happen, mostly because the risks of nuclear energy hugely increased its costs. But what if he’d been right? What if electricity had in fact become virtually free?

The answer is that everything electricity touched — which is to say just about everything — would have been transformed. Rather than balance electricity against other energy sources, we’d use electricity for as many things as we could — we’d waste it, in fact, because it would be too cheap to worry about.

All buildings would be electrically heated, never mind the thermal conversion rate. We’d all be driving electric cars (free electricity would be incentive enough to develop the efficient battery technology to store it). Massive desalination plants would turn seawater into all the freshwater anyone could want, irrigating vast inland swaths and turning deserts into fertile acres, many of them making biofuels as a cheaper store of energy than batteries. Relative to free electrons, fossil fuels would be seen as ludicrously expensive and dirty, and so carbon emissions would plummet. The phrase “global warming” would have never entered the language.

Today it’s digital technologies, not electricity, that have become too cheap to meter. It took decades to shake off the assumption that computing was supposed to be rationed for the few, and we’re only now starting to liberate bandwidth and storage from the same poverty of imagination. But a generation raised on the free Web is coming of age, and they will find entirely new ways to embrace waste, transforming the world in the process. Because free is what you want — and free, increasingly, is what you’re going to get.

Chris Anderson (canderson@wired.com) is the editor in chief of Wired and author of The Long Tail. His next book, FREE, will be published in 2009 by Hyperion.

Comments (63)

Posted by: danielu2, 3 hours ago, 1 Point
Your “Scenario 1” implies you know absolutely NOTHING about the movie business. Distributors and Studios make the money on ticket sales based on a percentage split with the projection houses. The bulk of ticket sales money goes to the Distributors a…
Posted by: tom203, 2 days ago, 1 Point
The information is not free, it is being paid for (in cash) mostly by advertisers trying to gain the attention of the website visitors. It is also paid for (in time wasted) by the people who are constantly distracted by the ads. Micro-payments were…
Posted by: mfouts, 2 days ago, 1 Point
That article is absolutley amazing!!! I am currently into buying real estate and I am slowly transitioning into the great world wide web. @ of my partners and I are trying to take advantage of the the www world via http://www.choiceisfreedom.com still under…
Posted by: foofah, 2 days ago, 1 Point
Great article…but give poor Zeno a break. “The answer is that you can’t [reach the wall]: Once you’re within a few nanometers, atomic repulsion forces become too strong for you to get any closer.” You’ve either missed Zeno’s point entirely, or you’…
Posted by: RainerGamer, 2 days ago, 1 Point
Sign me up.
Posted by: Lord_Jim, 2 days ago, 1 Point
Is something really free only because you don’t pay in dollars? What about being bombarded with advertising? What about giving away personal data to dubious parties? What about costly ‘upgrade options’ hidden behind every second button of allegedly …
Posted by: gdavis95129, 3 days ago, 1 Point
Please Mr. Anderson, buy yourself a dictionary. You write: …Yahoo announced that Yahoo Mail… would provide unlimited storage. Just in case that wasn’t totally clear, that’s “unlimited” as in “infinite”. ‘Unlimited’ means that Yahoo will not cap t…
Posted by: MikeG, 3 days ago, 1 Point
A few months ago I began researching free training & education. To be honest, I didn’t expect to find many good, free items, since I know that it takes time and effort (and time is money) to develop training. But I hoped my efforts would unearth …
Posted by: RAGZ, 3 days ago, 1 Point
You know, I subscribe to Wired, and I like the content, but please answer this question; why am I paying Wired’s comparatively high subscription cost if you’re going to stuff it so full of little ad inserts, that when I open it during my bathroom rit…
Posted by: jdwright10, 3 days ago, 1 Point
This definitely true. It’s a pretty good strategy if you think about it. I just bought a $7 Gillette razor and the refill blades cost me $15!

Read Full Post »

Dave McComb : What will it take to build the Semantic Technology industry?

Dave McComb, CEO of Semantic Arts and co-chair of this year’s Semantic Technology Conference in San Jose, USA, does not believe in killer applications when it comes to capitalizing on semantic technologies. Read his comment on what it takes to build a Semantic Technology industry.

What will it take to build the Semantic Technology industry?

I get asked this question a lot. And I’d like to get your help in answering it please.

As co-chair of the Semantic Technology Conference program, I see lots of customer organizations experimenting and adopting semantic technologies – especially ontology-driven development projects and semantic search tools – and seemingly as many start-ups and new products emerging to address their requirements. It’s an exciting time to be in this space and I’m glad to have a part to play.

But back to the question of “what will it take?” I don’t think anyone has all the answers, though it seems there’s a growing consensus about how semantics will eventually take hold:


1. A Little Semantics Goes a Long Way

I think it was Jim Hendler who first used the expression, and I find myself in wholehearted agreement. Much of the criticism of the semantic web vision focuses on the folly of trying to boil the ocean, yet many of the successful early adopters are getting nice results by taking small incremental steps. There’s a nice little exchange at Dave Beckett’s blog on this point.

2. Realistic Expectations

I guess this relates to my first point, but I remain concerned about the hype and expectations being set around the semantic web, and now the term Web 3.0. As much as anyone, I’d love to see semantics grow exponentially. But this market is going to be driven by customers, not vendors, and the corporate clients I see have been burned by hype too many times, so they’re taking a more cautious approach. I’m confident they’ll catch on eventually, but let’s not try to push them too far, too fast.

3. We Don’t Need a Killer App

Personally I think we need to look at semantic capabilities as an increasing component of the web and computing infrastructure, rather than trying to identify a killer app that’s going to kickstart a buying frenzy. If a killer app emerges, that’s great, but don’t hold your breath. There’s plenty of value to be gained in the meantime. More than anything, we need to demonstrate speedy, cheap ways to get started with semantics. This will be far more useful in the long run.

4. We Need to Get Business Mindshare

It’s so obvious that I’m almost embarrassed to say it, but the main point is that we need to improve how we’re currently demonstrating the business value of semantic technology. I see a few key ways we can improve, starting with a greater willingness to talk about the projects already taking place. Second, I think we can leverage existing technology trends — especially SOA and mashups — to show how semantic technology can add value to these efforts. Third, and I risk offending a few people with this, but in the short term we should be emphasizing cost savings and reduced time to deployment over and above the extra intelligence and functionality that semantics can provide, especially for corporate customers. Semantic SOA can save hugely over conventional approaches to data integration and interface projects, and this is where most businesses are really feeling the pain right now.

OK, so this is a short and incomplete list of ideas, and I’ll admit that part of my motivation is just to get the conversation started. But I hope you’ll join in. There are two places where you can start:

  1. My Blog
  2. The Semantic Technology Conference

This year’s SemTech conference in particular will have numerous discussions around the theme of how to grow the semantic technology industry, including Mills Davis’ Semantic Wave 2007 tutorial, and the Keynote panel on Building the Semantic Technology Industry: A Conversation with Entrepreneurs and Investors.

I hope to see you in one place or the other, and to get your input to the conversation.

Cheers,
Dave McComb
CEO, Semantic Arts, Inc.
Co-Chair, Semantic Technology Conference

Read Full Post »

Nova Spivack : “Web 3.0 will combine the Semantic Web with social media, enabling a new generation of richer, more shareable, mashable content.”

Nova Spivack, CEO of Radar Networks and inventor of the term Web 3.0, gives a micro-interview to Tassilo Pellegrini on the logic of versioning the internet, popularizing the Semantic Web and the secrets of the Radar Networks Laboratories.

People have hardly gotten used to the concept of Web 2.0 when suddenly you came up with a term called Web 3.0, and shortly after, Web 4.0. What is the logic behind versioning the Internet?

My proposal is that we use these terms to index decades of the Web. Web 1.0 was 1990 – 2000 and the focus was mainly about the backend of the Web (HTML, HTTP). Web 2.0 is 2000 – 2010 and has been mainly about the front-end of the Web (usability, AJAX, tagging, etc.). Web 3.0 will be 2010 – 2020 and will be about the backend again (RDF, SPARQL and the Semantic Web) – it will upgrade the content of the Web. Web 4.0 will be from 2020 – 2030 and will be about the front-end again – a smarter, more proactive and productive Web in which apps will be able to reason and help users intelligently.

To me Web 3.0 seems like a mixture of social software and the Semantic Web, combining the principles of socially generated content and semantic interoperability. Would you agree with this? What is your explanation?

Yes I agree that Web 3.0 will combine the Semantic Web with social media, enabling a new generation of richer, more shareable, mashable content.

So where does the Semantic Web come in? Could it be that the W3C’s concept of a Semantic Web (despite its greatness) is too purist, too technology centred, which finally makes it difficult when it comes to outreach?

I think that the W3C’s original vision of the Semantic Web focused mainly on the value to software. But the Semantic Web will also be valuable to end-users, publishers, advertisers, buyers & sellers. The end-user benefits were not emphasized or even illustrated very much in the original vision. But that makes sense – the W3C is mainly focused on open standards for software. Today those of us who are promoting Web 3.0 are really focusing more on the benefits of semantics to end-users – regular non-technical end-users. That is ultimately the most important story to tell in order to bring about mainstream adoption.

Your company Radar Networks is still operating in stealth mode but a lot of people are really curious what it will reveal later this year. So what is behind the curtains? How will you apply the Web 3.0 principles in your products?

Well we’re in stealth as you point out so I can’t say so much yet. But we’re trying to bring the Semantic Web to ordinary non-technical end-users. Our application is a hosted Web-based service that will enable anyone to build and share their own Semantic website.

About Nova Spivack

Mr. Spivack has a BA in Philosophy, with a focus on cognitive science and artificial intelligence, from Oberlin College, and a CSS degree from the International Space University, a NASA-funded graduate professional business school for the space industry. In 1999 Mr. Spivack’s interest in space gave him the opportunity to help pioneer the early days of space tourism when he flew to the edge of space with Space Adventures and did micro-gravity parabolic flight training with the Russian air force.

Mr. Spivack’s weblog, Minding the Planet, focuses on Radar Networks and emerging technologies and can be read at http://www.mindingtheplanet.net.

A full version of his biography can be found here.

Read Full Post »

10 Semantic Apps to Watch

Written by Richard MacManus / November 29, 2007 12:30 AM / 39 Comments

One of the highlights of October’s Web 2.0 Summit in San Francisco was the emergence of ‘Semantic Apps’ as a force. Note that we’re not necessarily talking about the Semantic Web, which is the W3C initiative led by Tim Berners-Lee that touts technologies like RDF, OWL and other standards for metadata. Semantic Apps may use those technologies, but not necessarily. This was a point made by the founder of one of the Semantic Apps listed below, Danny Hillis of Freebase (who is as much a tech legend as Berners-Lee).

The purpose of this post is to highlight 10 Semantic Apps. We’re not touting this as a ‘Top 10’, because there is no way to rank these apps at this point – many are still non-public, e.g. in private beta. This reflects the nascent status of the sector, even though people like Hillis and Spivack have been working on their apps for years now.

What is a Semantic App?

Firstly let’s define “Semantic App”. A key element is that the apps below all try to determine the meaning of text and other data, and then create connections for users. Another of the founders mentioned below, Nova Spivack of Twine, noted at the Summit that data portability and connectibility are keys to these new semantic apps – i.e. using the Web as platform.

In September Alex Iskold wrote a great primer on this topic, called Top-Down: A New Approach to the Semantic Web. In that post, Alex Iskold explained that there are two main approaches to Semantic Apps:

1) Bottom Up – involves embedding semantic annotations (meta-data) right into the data.
2) Top Down – relies on analyzing existing information; the ultimate top-down solution would be a full-blown natural language processor, able to understand text like people do.
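As a toy illustration of the two approaches (all names and data here are invented, not from any particular product): a bottom-up document carries its semantics as explicit annotations, while a top-down system has to recover the same facts by analyzing plain text.

```python
import re

# Bottom-up: the semantics travel with the data as explicit annotations.
annotated_doc = {
    "text": "Tim Berners-Lee founded the W3C in 1994.",
    "annotations": [
        {"span": "Tim Berners-Lee", "type": "Person"},
        {"span": "W3C", "type": "Organization"},
        {"span": "1994", "type": "Date"},
    ],
}

# Top-down: the same facts must be recovered from unannotated text.
def extract_dates(text):
    """A crude stand-in for a natural language processor: find year mentions."""
    return [{"span": m, "type": "Date"}
            for m in re.findall(r"\b(1[89]\d{2}|20\d{2})\b", text)]

print(extract_dates("Tim Berners-Lee founded the W3C in 1994."))
# → [{'span': '1994', 'type': 'Date'}]
```

The gap between the two is exactly the gap the post describes: the regex recovers only what it was written to find, whereas the annotated document states its meaning outright.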

Now that we know what Semantic Apps are, let’s take a look at some of the current leading (or promising) products…


Freebase aims to “open up the silos of data and the connections between them”, according to founder Danny Hillis at the Web 2.0 Summit. Freebase is a database that has all kinds of data in it, plus an API. Because it’s an open database, anyone can enter new data in Freebase. An example page in the Freebase db looks pretty similar to a Wikipedia page. When you enter new data, the app can make suggestions about content. The topics in Freebase are organized by type, and you can connect pages with links and semantic tags. So in summary, Freebase is all about shared data and what you can do with it.
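To make the “topics organized by type, connected by links” idea concrete, here is a minimal sketch of typed, interlinked topic records. The record shapes, IDs and type names are invented for illustration and are not Freebase’s actual schema or API.

```python
# Hypothetical topic records: each has a type and named links to other topics.
topics = {
    "/topic/danny_hillis": {
        "type": "/people/person",
        "links": {"founded": "/topic/freebase"},
    },
    "/topic/freebase": {
        "type": "/business/company",
        "links": {"founder": "/topic/danny_hillis"},
    },
}

def related(topic_id):
    """Follow every outgoing link from a topic to its connected topics."""
    return sorted(topics[topic_id]["links"].values())

print(related("/topic/danny_hillis"))  # → ['/topic/freebase']
```

The point of the sketch is that once every entry is typed and every connection is named, the database itself can answer questions that a pile of wiki pages cannot.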


Powerset (see our coverage here and here) is a natural language search engine. The system relies on semantic technologies that have only become available in the last few years. It can make “semantic connections”, which help build out the semantic database. The idea is that meaning and knowledge get extracted automatically by Powerset. The product isn’t yet public, but it has been riding a wave of publicity over 2007.


Twine claims to be the first mainstream Semantic Web app, although it is still in private beta. See our in-depth review. Twine automatically learns about you and your interests as you populate it with content – a “Semantic Graph”. When you put in new data, Twine picks out and tags certain content with semantic tags – e.g. the name of a person. An important point is that Twine creates new semantic and rich data. But it’s not all user-generated. They’ve also done machine learning against Wikipedia to ‘learn’ about new concepts. And they will eventually tie into services like Freebase. At the Web 2.0 Summit, founder Nova Spivack compared Twine to Google, saying it is a “bottom-up, user generated crawl of the Web”.


AdaptiveBlue are makers of the Firefox plugin, BlueOrganizer. They also recently launched a new version of their SmartLinks product, which allows web site publishers to add semantically charged links to their site. SmartLinks are browser ‘in-page overlays’ (similar to popups) that add additional contextual information to certain types of links, including links to books, movies, music, stocks, and wine. AdaptiveBlue supports a large list of top web sites, automatically recognizing and augmenting links to those properties.

SmartLinks works by understanding specific types of information (in this case links) and wrapping them with additional data. SmartLinks takes unstructured information and turns it into structured information by understanding a basic item on the web and adding semantics to it.

[Disclosure: AdaptiveBlue founder and CEO Alex Iskold is a regular RWW writer]


Hakia is one of the more promising Alt Search Engines around, with a focus on natural language processing methods to try and deliver ‘meaningful’ search results. Hakia attempts to analyze the concept of a search query, in particular by doing sentence analysis. Most other major search engines, including Google, analyze keywords. The company told us in a March interview that the future of search engines will go beyond keyword analysis – search engines will talk back to you and in effect become your search assistant. One point worth noting here is that, currently, Hakia has limited post-editing/human interaction for the editing of hakia Galleries, but the rest of the engine is 100% computer powered.

Hakia has two main technologies:

1) QDEX Infrastructure (which stands for Query Detection and Extraction) – this does the heavy lifting of analyzing search queries at a sentence level.

2) SemanticRank Algorithm – this is essentially the science they use, made up of ontological semantics that relate concepts to each other.
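The actual QDEX and SemanticRank internals are proprietary, but the stated idea of “ontological semantics that relate concepts to each other” can be sketched with a toy ontology mapping terms to concept sets and scoring terms by concept overlap. Everything below (the ontology entries, the scoring function) is invented for illustration, not Hakia’s algorithm.

```python
# Toy ontology: each term maps to the set of concepts it can denote.
ontology = {
    "jaguar": {"animal", "car"},
    "cheetah": {"animal"},
    "sedan": {"car"},
}

def concept_overlap(a, b):
    """Score two terms by how many ontological concepts they share."""
    return len(ontology.get(a, set()) & ontology.get(b, set()))

print(concept_overlap("jaguar", "cheetah"))  # → 1 (shared concept: animal)
print(concept_overlap("cheetah", "sedan"))   # → 0 (nothing shared)
```

Even this crude version shows why concept-level matching differs from keyword matching: “jaguar” relates to both “cheetah” and “sedan”, but for different reasons, something a bag of keywords cannot express.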


Talis is a 40-year-old UK software company which has created a semantic web application platform. They are a bit different from the other 9 companies profiled here, as Talis has released a platform and not a single product. The Talis platform is kind of a mix between Web 2.0 and the Semantic Web, in that it enables developers to create apps that allow for sharing, remixing and re-using data. Talis believes that Open Data is a crucial component of the Web, yet there is also a need to license data in order to ensure its openness. Talis has developed its own content license, called the Talis Community License, and recently they funded some legal work around the Open Data Commons License.

According to Dr Paul Miller, Technology Evangelist at Talis, the company’s platform emphasizes “the importance of context, role, intention and attention in meaningfully tracking behaviour across the web.” To find out more about Talis, check out their regular podcasts – the most recent one features Kaila Colbin (an occasional AltSearchEngines correspondent) and Branton Kenton-Dau of VortexDNA.

UPDATE: Marshall Kirkpatrick published an interview with Dr Miller the day after this post. Check it out here.


Venture-funded UK semantic search engine TrueKnowledge unveiled a demo of its private beta earlier this month. It reminded Marshall Kirkpatrick of the still-unlaunched Powerset, but it’s also reminiscent of the very real Ask.com “smart answers”. TrueKnowledge combines natural language analysis, an internal knowledge base and external databases to offer immediate answers to various questions. Instead of just pointing you to web pages where the search engine believes it can find your answer, it will offer you an explicit answer and explain the reasoning path by which that answer was arrived at. There’s also an interesting looking API at the center of the product. “Direct answers to human and machine questions” is the company’s tagline.

Founder William Tunstall-Pedoe said he’s been working on the software for the past 10 years, really putting time into it since coming into initial funding in early 2005.


TripIt is an app that manages your travel planning. Emre Sokullu reviewed it when it presented at TechCrunch40 in September. With TripIt, you forward incoming bookings to plans@tripit.com and the system manages the rest. Their patent-pending “itinerator” technology is a baby step in the semantic web – it extracts useful information from these mails and makes a well-structured and organized presentation of your travel plan. It pulls in information from Wikipedia for the places that you visit. It uses microformats – the iCal format, which is well integrated into Google Calendar and other calendar software.
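The iCal format mentioned above is plain text, which is part of why it integrates so easily with calendar software. Below is a minimal sketch that emits a single-event iCalendar document; the event details are invented, and TripIt’s actual output would carry far more fields.

```python
from datetime import datetime

def make_ics(summary, start, end):
    """Build a minimal iCalendar document with one VEVENT (hypothetical data)."""
    fmt = "%Y%m%dT%H%M%SZ"  # iCalendar UTC timestamp format
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "BEGIN:VEVENT",
        f"SUMMARY:{summary}",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

ics = make_ics("Flight to SFO",
               datetime(2007, 11, 29, 8, 0),
               datetime(2007, 11, 29, 11, 30))
print(ics)
```

A file like this, saved with an .ics extension, can be imported by most calendar applications, which is the interoperability win the post is pointing at.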

The company claimed at TC40 that “instead of dealing with 20 pages of planning, you just print out 3 pages and everything is done for you”. Their future plans include a recommendation engine which will tell you where to go and who to meet.

ClearForest

ClearForest is one of the companies in the top-down camp. We profiled the product in December ’06 and at that point ClearForest was applying its core natural language processing technology to facilitate next generation semantic applications. In April 2007 the company was acquired by Reuters. The company has both a Web Service and a Firefox extension that leverages an API to deliver the end-user application.

The Firefox extension is called Gnosis and it enables you to “identify the people, companies, organizations, geographies and products on the page you are viewing.” With one click from the menu, a webpage you view via Gnosis is filled with various types of annotations. For example it recognizes Companies, Countries, Industry Terms, Organizations, People, Products and Technologies. Each word that Gnosis recognizes gets colored according to its category.
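The color-per-category annotation described above can be sketched with a toy dictionary-based tagger. The gazetteer, the category names and the colors here are invented for illustration; ClearForest’s actual recognition uses natural language processing, not a lookup table.

```python
# Invented gazetteer mapping known terms to categories.
GAZETTEER = {
    "Reuters": "Company",
    "Belgium": "Country",
    "Firefox": "Product",
}
COLORS = {"Company": "blue", "Country": "green", "Product": "orange"}

def annotate(text):
    """Wrap each recognized term in a color-coded HTML span."""
    for term, category in GAZETTEER.items():
        text = text.replace(
            term, f'<span style="color:{COLORS[category]}">{term}</span>')
    return text

print(annotate("Reuters acquired a company headquartered in Belgium."))
```

The real system's value lies in recognizing entities it has never been told about; a static lookup like this only conveys what the in-page annotation looks like to the user.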

Also, ClearForest’s Semantic Web Service offers a SOAP interface for analyzing text, documents and web pages.


Spock is a people search engine that got a lot of buzz when it launched. Alex Iskold went so far as to call it “one of the best vertical semantic search engines built so far.” According to Alex there are four things that makes their approach special:

  • The person-centric perspective of a query
  • Rich set of attributes that characterize people (geography, birthday, occupation, etc.)
  • Usage of tags as links or relationships between people
  • Self-correcting mechanism via user feedback loop

As a vertical engine, Spock knows important attributes that people have: name, gender, age, occupation and location just to name a few. Perhaps the most interesting aspect of Spock is its usage of tags – all frequent phrases that Spock extracts via its crawler become tags; and also users can add tags. So Spock leverages a combination of automated tags and people power for tagging.
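The “frequent phrases become tags” mechanism can be sketched as a bigram counter over crawled text. This is a toy stand-in, the documents and threshold are invented, and Spock’s real extraction pipeline is surely more sophisticated.

```python
from collections import Counter
import re

def frequent_phrase_tags(documents, min_count=2):
    """Promote word bigrams that recur across documents to candidate tags."""
    counts = Counter()
    for doc in documents:
        words = re.findall(r"[a-z]+", doc.lower())
        counts.update(zip(words, words[1:]))  # count adjacent word pairs
    return {" ".join(pair) for pair, n in counts.items() if n >= min_count}

docs = ["Ruby on Rails developer in New York",
        "New York fan of Ruby on Rails"]
print(frequent_phrase_tags(docs))
# → {'ruby on', 'on rails', 'new york'}
```

Tags mined this way would then sit alongside user-supplied tags, which is the “automated tags plus people power” combination the post describes.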


What have we missed? 😉 Please use the comments to list other Semantic Apps you know of. It’s an exciting sector right now, because Semantic Web and Web 2.0 technologies alike are being used to create new semantic applications. One gets the feeling we’re only at the beginning of this trend.

Leave a comment or trackback on ReadWriteWeb and be in to win a daily $30 Amazon gift voucher, courtesy of AdaptiveBlue and their Amazon Wishlist Widget.


Read Full Post »

Kurzweil Technologies

In this excerpt from The Age of Spiritual Machines (Viking, 1999), Ray Kurzweil describes his work in speech recognition.

I also started Kurzweil Applied Intelligence, Inc. in 1982 with the goal of creating a voice-activated word processor. This is a technology that is hungry for MIPs (i.e., computer speed) and Megabytes (i.e., memory), so early systems limited the size of the vocabulary that users could employ. (Ray Kurzweil introduced the first large-vocabulary speech recognition system in 1987.) These early systems also required users to pause briefly between words… so…. you…. had…. to…. speak….. like…. this. We combined this “discrete word” speech recognition technology with a medical knowledge base to create a system that enabled doctors to create their medical reports by simply talking to their computers. Our product, called Kurzweil VoiceMed (now Kurzweil Clinical Reporter), actually guides the doctors through the reporting process. We also introduced a general purpose dictation product called Kurzweil Voice, which enabled users to create written documents by speaking one word at a time to their personal computer. This product became particularly popular with people who have a disability in the use of their hands.

Just this year, courtesy of Moore’s Law, personal computers became fast enough to recognize fully continuous speech, so I am able to dictate the rest of this book by talking to our latest product, called Voice Xpress Plus, at speeds around a hundred words per minute. Of course, I don’t get a hundred words written every minute since I change my mind a lot, but Voice Xpress doesn’t seem to mind.

We sold this company as well, to Lernout & Hauspie (L&H), a large speech and language technology company headquartered in Belgium. Shortly after the acquisition by L&H in 1997, we arranged a strategic alliance between the dictation division of L&H (formerly Kurzweil Applied Intelligence) and Microsoft, so our speech technology is likely to be used by Microsoft in future products.


Read Full Post »

