
Evolving Trends

Wikipedia 3.0: The End of Google?

In Uncategorized on June 26, 2006 at 5:18 am

Author: Marc Fawzi

License: Attribution-NonCommercial-ShareAlike 3.0

Announcements:

Semantic Web Developers:

Feb 5, ‘07: The following external reference concerns the use of rule-based inference engines and ontologies in implementing the Semantic Web + AI vision (aka Web 3.0):

  1. Description Logic Programs: Combining Logic Programs with Description Logic (note: there are better, simpler ways of achieving the same purpose.)

Click here for more info and a list of related articles…

Foreword

Two years after I published this article, it has received over 200,000 hits, and we now have several startups attempting to apply Semantic Web technology to Wikipedia and knowledge wikis in general, including the Wikipedia founder’s own commercial startup as well as a startup that was recently purchased by Microsoft.

Recently, after seeing how flawed Wikipedia’s governance is, I decided to write about a way to decentralize and democratize Wikipedia.

Spanish version

Article

(Article was last updated at 10:15am EST, July 3, 2006)

Wikipedia 3.0: The End of Google?

 

The Semantic Web (or Web 3.0) promises to “organize the world’s information” in a dramatically more logical way than Google can ever achieve with its current engine design. This is especially true from the point of view of machine comprehension as opposed to human comprehension. The Semantic Web requires the use of a declarative ontological language like OWL to produce domain-specific ontologies that machines can use to reason about information and draw new conclusions, not simply match keywords.

However, the Semantic Web, which is still in a development phase where researchers are trying to define the best and most usable design models, would require the participation of thousands of knowledgeable people over time to produce those domain-specific ontologies necessary for its functioning.

Machines (or machine-based reasoning, aka AI software or ‘info agents’) would then be able to use those laboriously, but not entirely manually, constructed ontologies to build a view (or formal model) of how the individual terms within the information relate to each other. Those relationships can be thought of as the axioms (assumed starting truths), which, together with the rules governing the inference process, both enable and constrain the interpretation (and well-formed use) of those terms by the info agents, allowing them to reason new conclusions from existing information, i.e., to think. In other words, theorems (formal deductive propositions that are provable from the axioms and the rules of inference) may be generated by the software, thus allowing formal deductive reasoning at the machine level. And given that an ontology, as described here, is a statement of logic theory, two or more independent info agents processing the same domain-specific ontology will be able to collaborate and deduce an answer to a query without being driven by the same software.

Thus, in the Semantic Web, individual machine-based agents (or collaborating groups of agents) will be able to understand and use information by translating concepts and deducing new information rather than just matching keywords.

Once machines can understand and use information, using a standard ontology language, the world will never be the same. It will be possible to have an info agent (or many info agents) among your virtual, AI-enhanced workforce, each with access to a different domain-specific comprehension space, all communicating with each other to build a collective consciousness.

You’ll be able to ask your info agent or agents to find you the nearest restaurant that serves Italian cuisine, even if the restaurant nearest you advertises itself as a Pizza joint as opposed to an Italian restaurant. But that is just a very simple example of the deductive reasoning machines will be able to perform on information they have.
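To make the restaurant example concrete, here is a minimal sketch of the kind of deduction involved, written in plain Python rather than OWL, with made-up class and property names (PizzaJoint, serves, ItalianCuisine, and so on). A real Semantic Web agent would use a standard ontology language and a proper inference engine; this only illustrates how axioms plus a rule let software conclude something that was never stated explicitly:

```python
# Minimal sketch of rule-based deduction over an ontology expressed as
# (subject, predicate, object) triples. All names are illustrative only.

facts = {
    ("LuigisPizza", "type", "PizzaJoint"),
    ("LuigisPizza", "serves", "ItalianCuisine"),
    ("PizzaJoint", "subClassOf", "Restaurant"),
}

def apply_rules(triples):
    """Return the triples deduced from one pass over the current facts."""
    new = set()
    for (s, p, o) in triples:
        # Rule 1: class membership propagates up the subclass hierarchy.
        if p == "type":
            for (c, p2, parent) in triples:
                if p2 == "subClassOf" and c == o:
                    new.add((s, "type", parent))
        # Rule 2: a restaurant that serves Italian cuisine is an Italian restaurant.
        if p == "serves" and o == "ItalianCuisine" and (s, "type", "Restaurant") in triples:
            new.add((s, "type", "ItalianRestaurant"))
    return new

# Forward-chain until no new triples can be deduced (a fixed point).
while True:
    derived = apply_rules(facts) - facts
    if not derived:
        break
    facts |= derived

# Query: what does the agent now believe is an Italian restaurant?
print([s for (s, p, o) in facts if p == "type" and o == "ItalianRestaurant"])
# -> ['LuigisPizza'], even though Luigi's never advertises itself as an Italian restaurant.
```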

Far more awesome implications can be seen when you consider that every area of human knowledge will be automatically within the comprehension space of your info agents. That is because each info agent can communicate with other info agents that are specialized in different domains of knowledge to produce a collective consciousness (to use the Borg metaphor) that encompasses all human knowledge. The collective “mind” of those agents-as-the-Borg will be the Ultimate Answer Machine, easily displacing Google from that position, which it does not truly fill today.

The problem with the Semantic Web, besides that researchers are still debating which design and implementation of the ontology language model (and associated technologies) is the best and most usable, is that it would take thousands or tens of thousands of knowledgeable people many years to boil down human knowledge to domain specific ontologies.

However, if we were at some point to take the Wikipedia community and give them the right tools and standards to work with (whether existing or to be developed in the future), which would make it possible for reasonably skilled individuals to help reduce human knowledge to domain-specific ontologies, then that time can be shortened to just a few years, and possibly to as little as two years.

The emergence of a Wikipedia 3.0 (as in Web 3.0, aka Semantic Web) that is built on the Semantic Web model will herald the end of Google as the Ultimate Answer Machine. It will be replaced with “WikiMind” which will not be a mere search engine like Google is but a true Global Brain: a powerful pan-domain inference engine, with a vast set of ontologies (a la Wikipedia 3.0) covering all domains of human knowledge, that can reason and deduce answers instead of just throwing raw information at you using the outdated concept of a search engine.

Notes

After writing the original post I found out that a modified version of the Wikipedia application, known as “Semantic” MediaWiki has already been used to implement ontologies. The name that they’ve chosen is Ontoworld. I think WikiMind would have been a cooler name, but I like ontoworld, too, as in “it descended onto the world,” since that may be seen as a reference to the global mind a Semantic-Web-enabled version of Wikipedia could lead to.

Google’s search engine technology, which provides almost all of its revenue, could be made obsolete in the near future. That is, unless Google gains access to Ontoworld or some similar pan-domain semantic knowledge repository, taps into its ontologies, and adds inference capability to Google search, building formal deductive intelligence into Google.

But so can Ask.com and MSN and Yahoo…

I would really love to see more competition in this arena, not to see Google or any one company establish a huge lead over others.

The question, to rephrase in Churchillian terms, is whether the combination of the Semantic Web and Wikipedia signals the beginning of the end for Google or the end of the beginning. Obviously, with tens of billions of dollars of investors’ money at stake, I would think that it is the latter. No one wants to see Google fail. There’s too much vested interest. However, I do want to see somebody outmaneuver them (which can be done, in my opinion).

Clarification

Please note that Ontoworld, which currently implements the ontologies, is based on the “Wikipedia” application (also known as MediaWiki), but it is not the same as Wikipedia.org.

Likewise, I expect Wikipedia.org will use its volunteer workforce to reduce the sum of human knowledge that has been entered into its database to domain-specific ontologies for the Semantic Web (aka Web 3.0). Hence, “Wikipedia 3.0.”

Response to Readers’ Comments

The argument I’ve made here is that Wikipedia has the volunteer resources to produce the needed Semantic Web ontologies for the domains of knowledge that it currently covers, while Google does not have those volunteer resources, which will make it reliant on Wikipedia.

Those ontologies together with all the information on the Web, can be accessed by Google and others but Wikipedia will be in charge of the ontologies for the large set of knowledge domains they currently cover, and that is where I see the power shift.

Google and other companies do not have the manpower (i.e., the thousands of volunteers Wikipedia has) to help create those ontologies for the large set of knowledge domains that Wikipedia covers. Wikipedia does, and it is positioned to do that better and more effectively than anyone else. It’s hard to see how Google would be able to create the ontologies for all domains of human knowledge (which are continuously growing in size and number) given how much work that would require. Wikipedia can cover more ground faster with its massive, dedicated force of knowledgeable volunteers.

I believe that the party that will control the creation of the ontologies (i.e. Wikipedia) for the largest number of domains of human knowledge, and not the organization that simply accesses those ontologies (i.e. Google), will have a competitive advantage.

There are many knowledge domains that Wikipedia does not cover. Google will have the edge there, but only if the people and organizations that produce the information also produce the ontologies on their own, so that Google can access them from its future Semantic Web engine. My belief is that this will happen, but very slowly, and that Wikipedia can have the ontologies done for all the domains of knowledge that it currently covers much faster, and would then have leverage from being in charge of those ontologies (aka the basic layer for AI enablement).

It still remains unclear, of course, whether the combination of Wikipedia and the Semantic Web heralds the beginning of the end for Google or the end of the beginning. As I said in the original part of the post, I believe that it is the latter, and the question I pose in the title of this post, in this context, is no more than rhetorical. However, I could be wrong in my judgment, and Google could fall behind Wikipedia as the world’s ultimate answer machine.

After all, Wikipedia makes “us” count. Google doesn’t. Wikipedia derives its power from “us.” Google derives its power from its technology and inflated stock price. Who would you count on to change the world?

Response to Basic Questions Raised by the Readers

Reader divotdave asked a few questions, which I thought to be very basic in nature (i.e., important). I believe more people will be pondering the same issues, so I’m including them here with the replies.

Question:
How does it distinguish between good information and bad? How does it determine which parts of the sum of human knowledge to accept and which to reject?

Reply:
It wouldn’t have to distinguish between good and bad information (not to be confused with well-formed vs. badly formed) if it used a reliable source of information (with associated, reliable ontologies). That is, if the information or knowledge being sought can be derived from Wikipedia 3.0, then it assumes the information is reliable.

However, when it comes to connecting the dots to return information or deduce answers from the sea of information that lies beyond Wikipedia, your question becomes very relevant. How would it distinguish good information from bad so that it can produce good knowledge (aka comprehended information, aka new information produced through deductive reasoning based on existing information)?

Question:
Who, or what as the case may be, will determine what information is irrelevant to me as the inquiring end user?

Reply:
That is a good question, and one which would have to be answered by the researchers working on AI engines for Web 3.0.

There will be assumptions made as to what you are inquiring about. Just as I had to make an assumption about what you really meant to ask when I saw your question, AI engines would have to make assumptions, based on pretty much the same cognitive process humans use (a topic for a separate post, but one that has been covered by many AI researchers).

Question:
Is this to say that ultimately some over-arching standard will emerge that all humanity will be forced (by lack of alternative information) to conform to?

Reply:
There is no need for one standard, except when it comes to the language the ontologies are written in (e.g., OWL, OWL-DL, OWL Full, etc.). Semantic Web researchers are trying to determine the best and most usable choice, taking into consideration human and machine performance in constructing those ontologies and, exclusively in the latter case, interpreting them.

Two or more info agents working with the same domain-specific ontology but having different software (different AI engines) can collaborate with each other.

The only standard required is that of the ontology language and associated production tools.
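As a sketch of that point, the toy example below (plain Python, with an invented line-based ontology format and made-up names) has two independently written “engines” consume the same serialized ontology. Because they agree only on the ontology, not on the software, they still reach the same conclusion:

```python
# Two independently implemented "info agents" share only the ontology text,
# here a toy format: "Subject predicate Object" per line. Names are illustrative.

ONTOLOGY = """
Espresso subClassOf Coffee
Coffee subClassOf Beverage
"""

def parse(text):
    """Turn the shared ontology text into (subject, predicate, object) triples."""
    return [tuple(line.split()) for line in text.strip().splitlines()]

# Agent A: a recursive engine.
def is_subclass_a(triples, child, parent):
    if child == parent:
        return True
    return any(is_subclass_a(triples, o, parent)
               for (s, p, o) in triples if s == child and p == "subClassOf")

# Agent B: an iterative engine written by a different party.
def is_subclass_b(triples, child, parent):
    frontier, seen = {child}, set()
    while frontier:
        node = frontier.pop()
        if node == parent:
            return True
        seen.add(node)
        frontier |= {o for (s, p, o) in triples
                     if s == node and p == "subClassOf" and o not in seen}
    return False

triples = parse(ONTOLOGY)
# Different software, same ontology, same answer:
print(is_subclass_a(triples, "Espresso", "Beverage"),
      is_subclass_b(triples, "Espresso", "Beverage"))   # True True
```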

Addendum

On AI and Natural Language Processing

I believe that the first generation of AI that will be used by Web 3.0 (aka Semantic Web) will be based on relatively simple inference engines that will NOT attempt to perform natural language processing, where current approaches still face too many serious challenges. However, they will still have the formal deductive reasoning capabilities described earlier in this article, and users would interact with these systems through some query language.

On the Debate about the Nature and Definition of AI

The embedding of AI into cyberspace will be done at first with relatively simple inference engines (that use algorithms and heuristics) that work collaboratively in P2P fashion and use standardized ontologies. The massively parallel interactions between the hundreds of millions of AI Agents that will run within the millions of P2P AI Engines on users’ PCs will give rise to the very complex behavior that is the future global brain.

Related:

  1. Web 3.0 Update
  2. All About Web 3.0 <– list of all Web 3.0 articles on this site
  3. P2P 3.0: The People’s Google
  4. Reality as a Service (RaaS): The Case for GWorld <– 3D Web + Semantic Web + AI
  5. For Great Justice, Take Off Every Digg
  6. Google vs Web 3.0
  7. People-Hosted “P2P” Version of Wikipedia
  8. Beyond Google: The Road to a P2P Economy



Web 3D Fans:

Here is the original Web 3D + Semantic Web + AI article:

Web 3D + Semantic Web + AI *

The above-mentioned Web 3D + Semantic Web + AI vision, which preceded the Wikipedia 3.0 vision, received much less attention because it was not presented in a controversial manner. This was noted as the biggest flaw of the social bookmarking site Digg, which was used to promote this article.

Web 3.0 Developers:

Feb 5, ‘07: The following external reference concerns the use of rule-based inference engines and ontologies in implementing the Semantic Web + AI vision (aka Web 3.0):

  1. Description Logic Programs: Combining Logic Programs with Description Logic (note: there are better, simpler ways of achieving the same purpose.)

Jan 7, ‘07: The following Evolving Trends post discusses the current state of semantic search engines and ways to improve the paradigm:

  1. Designing a Better Web 3.0 Search Engine

June 27, ‘06: Semantic MediaWiki project, enabling the insertion of semantic annotations (or metadata) into the content:

  1. http://semantic-mediawiki.org/wiki/Semantic_MediaWiki (see note on Wikia below)

Wikipedia’s Founder and Web 3.0




Murdoch Calls Google, Yahoo Copyright Thieves — Is He Right?

By David Kravets | April 03, 2009, 5:00 PM | Categories: Intellectual Property

Rupert Murdoch, the owner of News Corp. and The Wall Street Journal, says Google and Yahoo are giant copyright scofflaws that steal the news.

“The question is, should we be allowing Google to steal all our copyright … not steal, but take,” Murdoch says. “Not just them, but Yahoo.”

But whether search-engine news aggregation is theft or a protected fair use under copyright law is unclear, even as Google and Yahoo profit tremendously from linking to news. So maybe Murdoch is right.

Murdoch made his comments late Thursday during an address at the Cable Show, an industry event held in Washington. He seemingly was blaming the web, and search engines, for the news media’s ills.

“People reading news for free on the web, that’s got to change,” he said.

Real estate magnate Sam Zell made similar comments in 2007 when he took over the Tribune Company and ran it into bankruptcy.

We suspect Zell and Murdoch are just blowing smoke. If they were not, perhaps they could demand Google and Yahoo remove their news content. The search engines would kindly oblige.

Better yet, if Murdoch and Zell are so set on monetizing their web content, they should sue the search engines and claim copyright violations in a bid to get the engines to pay for the content.

The outcome of such a lawsuit is far from clear.

It’s unsettled whether search engines have a valid fair use claim under the Digital Millennium Copyright Act. The news headlines are copied verbatim, as are some of the snippets that go along.

Fred von Lohmann of the Electronic Frontier Foundation points out that “There’s not a rock-solid ruling on the question.”

Should the search engines pay up for the content? Tell us what you think.


WIRED MAGAZINE: 16.03


Free! Why $0.00 Is the Future of Business

By Chris Anderson | 02.25.08 | 12:00 AM

At the age of 40, King Gillette was a frustrated inventor, a bitter anticapitalist, and a salesman of cork-lined bottle caps. It was 1895, and despite ideas, energy, and wealthy parents, he had little to show for his work. He blamed the evils of market competition. Indeed, the previous year he had published a book, The Human Drift, which argued that all industry should be taken over by a single corporation owned by the public and that millions of Americans should live in a giant city called Metropolis powered by Niagara Falls. His boss at the bottle cap company, meanwhile, had just one piece of advice: Invent something people use and throw away.

One day, while he was shaving with a straight razor that was so worn it could no longer be sharpened, the idea came to him. What if the blade could be made of a thin metal strip? Rather than spending time maintaining the blades, men could simply discard them when they became dull. A few years of metallurgy experimentation later, the disposable-blade safety razor was born. But it didn’t take off immediately. In its first year, 1903, Gillette sold a total of 51 razors and 168 blades. Over the next two decades, he tried every marketing gimmick he could think of. He put his own face on the package, making him both legendary and, some people believed, fictional. He sold millions of razors to the Army at a steep discount, hoping the habits soldiers developed at war would carry over to peacetime. He sold razors in bulk to banks so they could give them away with new deposits (“shave and save” campaigns). Razors were bundled with everything from Wrigley’s gum to packets of coffee, tea, spices, and marshmallows. The freebies helped to sell those products, but the tactic helped Gillette even more. By giving away the razors, which were useless by themselves, he was creating demand for disposable blades. A few billion blades later, this business model is now the foundation of entire industries: Give away the cell phone, sell the monthly plan; make the videogame console cheap and sell expensive games; install fancy coffeemakers in offices at no charge so you can sell managers expensive coffee sachets.

Chris Anderson discusses “Free.”

Video produced by Annaliza Savage and edited by Michael Lennon.

Thanks to Gillette, the idea that you can make money by giving something away is no longer radical. But until recently, practically everything “free” was really just the result of what economists would call a cross-subsidy: You’d get one thing free if you bought another, or you’d get a product free only if you paid for a service.

Over the past decade, however, a different sort of free has emerged. The new model is based not on cross-subsidies — the shifting of costs from one product to another — but on the fact that the cost of products themselves is falling fast. It’s as if the price of steel had dropped so close to zero that King Gillette could give away both razor and blade, and make his money on something else entirely. (Shaving cream?)

You know this freaky land of free as the Web. A decade and a half into the great online experiment, the last debates over free versus pay online are ending. In 2007 The New York Times went free; this year, so will much of The Wall Street Journal. (The remaining fee-based parts, new owner Rupert Murdoch announced, will be “really special … and, sorry to tell you, probably more expensive.” This calls to mind one version of Stewart Brand’s original aphorism from 1984: “Information wants to be free. Information also wants to be expensive … That tension will not go away.”)

Once a marketing gimmick, free has emerged as a full-fledged economy. Offering free music proved successful for Radiohead, Trent Reznor of Nine Inch Nails, and a swarm of other bands on MySpace that grasped the audience-building merits of zero. The fastest-growing parts of the gaming industry are ad-supported casual games online and free-to-try massively multiplayer online games. Virtually everything Google does is free to consumers, from Gmail to Picasa to GOOG-411.

The rise of “freeconomics” is being driven by the underlying technologies that power the Web. Just as Moore’s law dictates that a unit of processing power halves in price every 18 months, the price of bandwidth and storage is dropping even faster. Which is to say, the trend lines that determine the cost of doing business online all point the same way: to zero.

But tell that to the poor CIO who just shelled out six figures to buy another rack of servers. Technology sure doesn’t feel free when you’re buying it by the gross. Yet if you look at it from the other side of the fat pipe, the economics change. That expensive bank of hard drives (fixed costs) can serve tens of thousands of users (marginal costs). The Web is all about scale, finding ways to attract the most users for centralized resources, spreading those costs over larger and larger audiences as the technology gets more and more capable. It’s not about the cost of the equipment in the racks at the data center; it’s about what that equipment can do. And every year, like some sort of magic clockwork, it does more and more for less and less, bringing the marginal costs of technology in the units that we individuals consume closer to zero.
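A back-of-the-envelope version of that arithmetic, with made-up numbers rather than any real data center’s costs:

```python
# Hypothetical figures: a six-figure storage rack amortized over its users.
rack_cost = 150_000          # assumed fixed cost of a bank of hard drives, in dollars
lifetime_years = 3           # assumed useful life before replacement
users = 50_000               # assumed number of users it serves

cost_per_user_per_year = rack_cost / lifetime_years / users
print(f"${cost_per_user_per_year:.2f} per user per year")   # $1.00
# Double the users (or wait for the next hardware generation) and the
# per-user cost halves again -- the marginal cost trends toward zero.
```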


As much as we complain about how expensive things are getting, we’re surrounded by forces that are making them cheaper. Forty years ago, the principal nutritional problem in America was hunger; now it’s obesity, for which we have the Green Revolution to thank. Forty years ago, charity was dominated by clothing drives for the poor. Now you can get a T-shirt for less than the price of a cup of coffee, thanks to China and global sourcing. So too for toys, gadgets, and commodities of every sort. Even cocaine has pretty much never been cheaper (globalization works in mysterious ways).

Digital technology benefits from these dynamics and from something else even more powerful: the 20th-century shift from Newtonian to quantum machines. We’re still just beginning to exploit atomic-scale effects in revolutionary new materials — semiconductors (processing power), ferromagnetic compounds (storage), and fiber optics (bandwidth). In the arc of history, all three substances are still new, and we have a lot to learn about them. We are just a few decades into the discovery of a new world.

What does this mean for the notion of free? Well, just take one example. Last year, Yahoo announced that Yahoo Mail, its free webmail service, would provide unlimited storage. Just in case that wasn’t totally clear, that’s “unlimited” as in “infinite.” So the market price of online storage, at least for email, has now fallen to zero (see “Webmail Windfall“). And the stunning thing is that nobody was surprised; many had assumed infinite free storage was already the case.

For good reason: It’s now clear that practically everything Web technology touches starts down the path to gratis, at least as far as we consumers are concerned. Storage now joins bandwidth (YouTube: free) and processing power (Google: free) in the race to the bottom. Basic economics tells us that in a competitive market, price falls to the marginal cost. There’s never been a more competitive market than the Internet, and every day the marginal cost of digital information comes closer to nothing.

One of the old jokes from the late-’90s bubble was that there are only two numbers on the Internet: infinity and zero. The first, at least as it applied to stock market valuations, proved false. But the second is alive and well. The Web has become the land of the free.

The result is that we now have not one but two trends driving the spread of free business models across the economy. The first is the extension of King Gillette’s cross-subsidy to more and more industries. Technology is giving companies greater flexibility in how broadly they can define their markets, allowing them more freedom to give away products or services to one set of customers while selling to another set. Ryanair, for instance, has disrupted its industry by defining itself more as a full-service travel agency than a seller of airline seats (see “How Can Air Travel Be Free?”).

The second trend is simply that anything that touches digital networks quickly feels the effect of falling costs. There’s nothing new about technology’s deflationary force, but what is new is the speed at which industries of all sorts are becoming digital businesses and thus able to exploit those economics. When Google turned advertising into a software application, a classic services business formerly based on human economics (things get more expensive each year) switched to software economics (things get cheaper). So, too, for everything from banking to gambling. The moment a company’s primary expenses become things based in silicon, free becomes not just an option but the inevitable destination.

WASTE AND WASTE AGAIN
Forty years ago, Caltech professor Carver Mead identified the corollary to Moore’s law of ever-increasing computing power. Every 18 months, Mead observed, the price of a transistor would halve. And so it did, going from tens of dollars in the 1960s to approximately 0.000001 cent today for each of the transistors in Intel’s latest quad-core. This, Mead realized, meant that we should start to “waste” transistors.

Waste is a dirty word, and that was especially true in the IT world of the 1970s. An entire generation of computer professionals had been taught that their job was to dole out expensive computer resources sparingly. In the glass-walled facilities of the mainframe era, these systems operators exercised their power by choosing whose programs should be allowed to run on the costly computing machines. Their role was to conserve transistors, and they not only decided what was worthy but also encouraged programmers to make the most economical use of their computer time. As a result, early developers devoted as much code as possible to running their core algorithms efficiently and gave little thought to user interface. This was the era of the command line, and the only conceivable reason someone might have wanted to use a computer at home was to organize recipe files. In fact, the world’s first personal computer, a stylish kitchen appliance offered by Honeywell in 1969, came with integrated counter space.


And here was Mead, telling programmers to embrace waste. They scratched their heads — how do you waste computer power? It took Alan Kay, an engineer working at Xerox’s Palo Alto Research Center, to show them. Rather than conserve transistors for core processing functions, he developed a computer concept — the Dynabook — that would frivolously deploy silicon to do silly things: draw icons, windows, pointers, and even animations on the screen. The purpose of this profligate eye candy? Ease of use for regular folks, including children. Kay’s work on the graphical user interface became the inspiration for the Xerox Alto, and then the Apple Macintosh, which changed the world by opening computing to the rest of us. (We, in turn, found no shortage of things to do with it; tellingly, organizing recipes was not high on the list.)

Of course, computers were not free then, and they are not free today. But what Mead and Kay understood was that the transistors in them — the atomic units of computation — would become so numerous that on an individual basis, they’d be close enough to costless that they might as well be free. That meant software writers, liberated from worrying about scarce computational resources like memory and CPU cycles, could become more and more ambitious, focusing on higher-order functions such as user interfaces and new markets such as entertainment. And that meant software of broader appeal, which brought in more users, who in turn found even more uses for computers. Thanks to that wasteful throwing of transistors against the wall, the world was changed.

What’s interesting is that transistors (or storage, or bandwidth) don’t have to be completely free to invoke this effect. At a certain point, they’re cheap enough to be safely disregarded. The Greek philosopher Zeno wrestled with this concept in a slightly different context. In Zeno’s dichotomy paradox, you run toward a wall. As you run, you halve the distance to the wall, then halve it again, and so on. But if you continue to subdivide space forever, how can you ever actually reach the wall? (The answer is that you can’t: Once you’re within a few nanometers, atomic repulsion forces become too strong for you to get any closer.)

In economics, the parallel is this: If the unitary cost of technology (“per megabyte” or “per megabit per second” or “per thousand floating-point operations per second”) is halving every 18 months, when does it come close enough to zero to say that you’ve arrived and can safely round down to nothing? The answer: almost always sooner than you think.
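The “sooner than you think” effect is just repeated halving; a quick sketch, with an assumed starting price and an assumed “round down to nothing” threshold:

```python
# How many 18-month halvings until a unit cost drops below a "treat it as free" threshold?
# The starting cost and threshold are assumptions chosen only to illustrate the curve.
cost = 1.0          # e.g., $1 per unit today (assumed)
threshold = 0.001   # the point at which we decide to round down to nothing (assumed)

periods = 0
while cost >= threshold:
    cost /= 2       # Mead/Moore-style halving every 18 months
    periods += 1

print(periods, "halvings =", periods * 1.5, "years")   # 10 halvings = 15.0 years
```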

What Mead understood is that a psychological switch should flip as things head toward zero. Even though they may never become entirely free, as the price drops there is great advantage to be had in treating them as if they were free. Not too cheap to meter, as Atomic Energy Commission chief Lewis Strauss said in a different context, but too cheap to matter. Indeed, the history of technological innovation has been marked by people spotting such price and performance trends and getting ahead of them.

From the consumer’s perspective, though, there is a huge difference between cheap and free. Give a product away and it can go viral. Charge a single cent for it and you’re in an entirely different business, one of clawing and scratching for every customer. The psychology of “free” is powerful indeed, as any marketer will tell you.

This difference between cheap and free is what venture capitalist Josh Kopelman calls the “penny gap.” People think demand is elastic and that volume falls in a straight line as price rises, but the truth is that zero is one market and any other price is another. In many cases, that’s the difference between a great market and none at all.

The huge psychological gap between “almost zero” and “zero” is why micropayments failed. It’s why Google doesn’t show up on your credit card. It’s why modern Web companies don’t charge their users anything. And it’s why Yahoo gives away disk drive space. The question of infinite storage was not if but when. The winners made their stuff free first.

Traditionalists wring their hands about the “vaporization of value” and “demonetization” of entire industries. The success of craigslist’s free listings, for instance, has hurt the newspaper classified ad business. But that lost newspaper revenue is certainly not ending up in the craigslist coffers. In 2006, the site earned an estimated $40 million from the few things it charges for. That’s about 12 percent of the $326 million by which classified ad revenue declined that year.

But free is not quite as simple — or as stupid — as it sounds. Just because products are free doesn’t mean that someone, somewhere, isn’t making huge gobs of money. Google is the prime example of this. The monetary benefits of craigslist are enormous as well, but they’re distributed among its tens of thousands of users rather than funneled straight to Craig Newmark Inc. To follow the money, you have to shift from a basic view of a market as a matching of two parties — buyers and sellers — to a broader sense of an ecosystem with many parties, only some of which exchange cash.

The most common of the economies built around free is the three-party system. Here a third party pays to participate in a market created by a free exchange between the first two parties. Sound complicated? You’re probably experiencing it right now. It’s the basis of virtually all media.

In the traditional media model, a publisher provides a product free (or nearly free) to consumers, and advertisers pay to ride along. Radio is “free to air,” and so is much of television. Likewise, newspaper and magazine publishers don’t charge readers anything close to the actual cost of creating, printing, and distributing their products. They’re not selling papers and magazines to readers, they’re selling readers to advertisers. It’s a three-way market.

In a sense, what the Web represents is the extension of the media business model to industries of all sorts. This is not simply the notion that advertising will pay for everything. There are dozens of ways that media companies make money around free content, from selling information about consumers to brand licensing, “value-added” subscriptions, and direct ecommerce (see How-To Wiki for a complete list). Now an entire ecosystem of Web companies is growing up around the same set of models.

A TAXONOMY OF FREE
Between new ways companies have found to subsidize products and the falling cost of doing business in a digital age, the opportunities to adopt a free business model of some sort have never been greater. But which one? And how many are there? Probably hundreds, but the priceless economy can be broken down into six broad categories:

· “Freemium”
What’s free: Web software and services, some content. Free to whom: users of the basic version.

This term, coined by venture capitalist Fred Wilson, is the basis of the subscription model of media and is one of the most common Web business models. It can take a range of forms: varying tiers of content, from free to expensive, or a premium “pro” version of some site or software with more features than the free version (think Flickr and the $25-a-year Flickr Pro).

Again, this sounds familiar. Isn’t it just the free sample model found everywhere from perfume counters to street corners? Yes, but with a pretty significant twist. The traditional free sample is the promotional candy bar handout or the diapers mailed to a new mother. Since these samples have real costs, the manufacturer gives away only a tiny quantity — hoping to hook consumers and stimulate demand for many more.


But for digital products, this ratio of free to paid is reversed. A typical online site follows the 1 Percent Rule — 1 percent of users support all the rest. In the freemium model, that means for every user who pays for the premium version of the site, 99 others get the basic free version. The reason this works is that the cost of serving the 99 percent is close enough to zero to call it nothing.
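Here is that freemium arithmetic as a sketch, using assumed figures (the $25 price echoes the Flickr Pro example above; the per-user serving cost is invented):

```python
# Freemium back-of-the-envelope: 1 paying user subsidizes 99 free ones.
# All figures below are assumptions for illustration, not any real site's economics.
users = 100_000
paying_share = 0.01               # the "1 Percent Rule"
price_per_year = 25.00            # e.g., a $25-a-year "pro" tier
cost_per_free_user = 0.02         # near-zero annual cost to serve a free account

revenue = users * paying_share * price_per_year
cost = users * (1 - paying_share) * cost_per_free_user
print(f"revenue ${revenue:,.0f} vs. cost of the free tier ${cost:,.0f}")
# revenue $25,000 vs. cost of the free tier $1,980 -- the model works only
# because serving the 99 percent costs close to nothing.
```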

· Advertising
What’s free: content, services, software, and more. Free to whom: everyone.

Broadcast commercials and print display ads have given way to a blizzard of new Web-based ad formats: Yahoo’s pay-per-pageview banners, Google’s pay-per-click text ads, Amazon’s pay-per-transaction “affiliate ads,” and site sponsorships were just the start. Then came the next wave: paid inclusion in search results, paid listing in information services, and lead generation, where a third party pays for the names of people interested in a certain subject. Now companies are trying everything from product placement (PayPerPost) to pay-per-connection on social networks like Facebook. All of these approaches are based on the principle that free offerings build audiences with distinct interests and expressed needs that advertisers will pay to reach.

· Cross-subsidies
What’s free: any product that entices you to pay for something else. Free to whom: everyone willing to pay eventually, one way or another.

When Wal-Mart charges $15 for a new hit DVD, it’s a loss leader. The company is offering the DVD below cost to lure you into the store, where it hopes to sell you a washing machine at a profit. Expensive wine subsidizes food in a restaurant, and the original “free lunch” was a gratis meal for anyone who ordered at least one beer in San Francisco saloons in the late 1800s. In any package of products and services, from banking to mobile calling plans, the price of each individual component is often determined by psychology, not cost. Your cell phone company may not make money on your monthly minutes — it keeps that fee low because it knows that’s the first thing you look at when picking a carrier — but your monthly voicemail fee is pure profit.

On a busy corner in São Paulo, Brazil, street vendors pitch the latest “tecnobrega” CDs, including one by a hot band called Banda Calypso. Like CDs from most street vendors, these did not come from a record label. But neither are they illicit. They came directly from the band. Calypso distributes masters of its CDs and CD liner art to street vendor networks in towns it plans to tour, with full agreement that the vendors will copy the CDs, sell them, and keep all the money. That’s OK, because selling discs isn’t Calypso’s main source of income. The band is really in the performance business — and business is good. Traveling from town to town this way, preceded by a wave of supercheap CDs, Calypso has filled its shows and paid for a private jet.

The vendors generate literal street cred in each town Calypso visits, and its omnipresence in the urban soundscape means that it gets huge crowds to its rave/dj/concert events. Free music is just publicity for a far more lucrative tour business. Nobody thinks of this as piracy.

· Zero marginal cost
What’s free: things that can be distributed without an appreciable cost to anyone. Free to whom: everyone.

This describes nothing so well as online music. Between digital reproduction and peer-to-peer distribution, the real cost of distributing music has truly hit bottom. This is a case where the product has become free because of sheer economic gravity, with or without a business model. That force is so powerful that laws, guilt trips, DRM, and every other barrier to piracy the labels can think of have failed. Some artists give away their music online as a way of marketing concerts, merchandise, licensing, and other paid fare. But others have simply accepted that, for them, music is not a moneymaking business. It’s something they do for other reasons, from fun to creative expression. Which, of course, has always been true for most musicians anyway.

· Labor exchange
What’s free: Web sites and services. Free to whom: all users, since the act of using these sites and services actually creates something of value.

You can get free porn if you solve a few captchas, those scrambled text boxes used to block bots. What you’re actually doing is giving answers to a bot used by spammers to gain access to other sites — which is worth more to them than the bandwidth you’ll consume browsing images. Likewise for rating stories on Digg, voting on Yahoo Answers, or using Google’s 411 service (see “How Can Directory Assistance Be Free?”). In each case, the act of using the service creates something of value, either improving the service itself or creating information that can be useful somewhere else.

· Gift economy
What’s free: the whole enchilada, be it open source software or user-generated content. Free to whom: everyone.

From Freecycle (free secondhand goods for anyone who will take them away) to Wikipedia, we are discovering that money isn’t the only motivator. Altruism has always existed, but the Web gives it a platform where the actions of individuals can have global impact. In a sense, zero-cost distribution has turned sharing into an industry. In the monetary economy it all looks free — indeed, in the monetary economy it looks like unfair competition — but that says more about our shortsighted ways of measuring value than it does about the worth of what’s created.

THE ECONOMICS OF ABUNDANCE
Enabled by the miracle of abundance, digital economics has turned traditional economics upside down. Read your college textbook and it’s likely to define economics as “the social science of choice under scarcity.” The entire field is built on studying trade-offs and how they’re made. Milton Friedman himself reminded us time and time again that “there’s no such thing as a free lunch.”

But Friedman was wrong in two ways. First, a free lunch doesn’t necessarily mean the food is being given away or that you’ll pay for it later — it could just mean someone else is picking up the tab. Second, in the digital realm, as we’ve seen, the main feedstocks of the information economy — storage, processing power, and bandwidth — are getting cheaper by the day. Two of the main scarcity functions of traditional economics — the marginal costs of manufacturing and distribution — are rushing headlong to zip. It’s as if the restaurant suddenly didn’t have to pay any food or labor costs for that lunch.

Surely economics has something to say about that?

It does. The word is externalities, a concept that holds that money is not the only scarcity in the world. Chief among the others are your time and respect, two factors that we’ve always known about but have only recently been able to measure properly. The “attention economy” and “reputation economy” are too fuzzy to merit an academic department, but there’s something real at the heart of both. Thanks to Google, we now have a handy way to convert from reputation (PageRank) to attention (traffic) to money (ads). Anything you can consistently convert to cash is a form of currency itself, and Google plays the role of central banker for these new economies.

There is, presumably, a limited supply of reputation and attention in the world at any point in time. These are the new scarcities — and the world of free exists mostly to acquire these valuable assets for the sake of a business model to be identified later. Free shifts the economy from a focus on only that which can be quantified in dollars and cents to a more realistic accounting of all the things we truly value today.

FREE CHANGES EVERYTHING
Between digital economics and the wholesale embrace of King Gillette’s experiment in price shifting, we are entering an era when free will be seen as the norm, not an anomaly. How big a deal is that? Well, consider this analogy: In 1954, at the dawn of nuclear power, Lewis Strauss, head of the Atomic Energy Commission, promised that we were entering an age when electricity would be “too cheap to meter.” Needless to say, that didn’t happen, mostly because the risks of nuclear energy hugely increased its costs. But what if he’d been right? What if electricity had in fact become virtually free?

The answer is that everything electricity touched — which is to say just about everything — would have been transformed. Rather than balance electricity against other energy sources, we’d use electricity for as many things as we could — we’d waste it, in fact, because it would be too cheap to worry about.

All buildings would be electrically heated, never mind the thermal conversion rate. We’d all be driving electric cars (free electricity would be incentive enough to develop the efficient battery technology to store it). Massive desalination plants would turn seawater into all the freshwater anyone could want, irrigating vast inland swaths and turning deserts into fertile acres, many of them making biofuels as a cheaper store of energy than batteries. Relative to free electrons, fossil fuels would be seen as ludicrously expensive and dirty, and so carbon emissions would plummet. The phrase “global warming” would have never entered the language.

Today it’s digital technologies, not electricity, that have become too cheap to meter. It took decades to shake off the assumption that computing was supposed to be rationed for the few, and we’re only now starting to liberate bandwidth and storage from the same poverty of imagination. But a generation raised on the free Web is coming of age, and they will find entirely new ways to embrace waste, transforming the world in the process. Because free is what you want — and free, increasingly, is what you’re going to get.

Chris Anderson (canderson@wired.com) is the editor in chief of Wired and author of The Long Tail. His next book, FREE, will be published in 2009 by Hyperion.


Kurzweil Technologies

In this excerpt from The Age of Spiritual Machines (Viking, 1999), Ray Kurzweil describes his work in speech recognition.

Ray Kurzweil introduced the first large-vocabulary speech recognition system in 1987.

I also started Kurzweil Applied Intelligence, Inc. in 1982 with the goal of creating a voice-activated word processor. This is a technology that is hungry for MIPS (i.e., computer speed) and megabytes (i.e., memory), so early systems limited the size of the vocabulary that users could employ. These early systems also required users to pause briefly between words… so…. you…. had…. to…. speak….. like…. this. We combined this “discrete word” speech recognition technology with a medical knowledge base to create a system that enabled doctors to create their medical reports by simply talking to their computers. Our product, called Kurzweil VoiceMed (now Kurzweil Clinical Reporter), actually guides the doctors through the reporting process. We also introduced a general purpose dictation product called Kurzweil Voice, which enabled users to create written documents by speaking one word at a time to their personal computer. This product became particularly popular with people who have a disability in the use of their hands.

Just this year, courtesy of Moore’s Law, personal computers became fast enough to recognize fully continuous speech, so I am able to dictate the rest of this book by talking to our latest product, called Voice Xpress Plus, at speeds around a hundred words per minute. Of course, I don’t get a hundred words written every minute since I change my mind a lot, but Voice Xpress doesn’t seem to mind.

We sold this company as well, to Lernout & Hauspie (L&H), a large speech and language technology company headquartered in Belgium. Shortly after the acquisition by L&H in 1997, we arranged a strategic alliance between the dictation division of L&H (formerly Kurzweil Applied Intelligence) and Microsoft, so our speech technology is likely to be used by Microsoft in future products.



BusinessWeek

AUGUST 20, 2007

 

THE FUTURE OF WORK — MANDEL ON ECONOMICS

Which Way To The Future?
Globalization and technology are drastically changing how we do our jobs—and that’s both a promise and a problem


The “Future of Work” is hardly a new topic. In fact, over the past quarter century, at least 20 books have used that phrase as part or all of their title.

So with all the words spilled on this question already, why is BusinessWeek addressing it now? The answer is simple: The U.S. and the global economies are coming to a crossroads that no one could have anticipated just a few years ago. Globalization and technology together are creating the potential for startling changes in how we do our jobs and the offices we do them in. Offshoring, for one, means work can be broken into smaller tasks and redistributed around the world. And the rapid growth of broader, richer channels of communication—including virtual worlds—is transforming what it means to be “at work.”


Yet despite the technological and organizational progress, it’s not clear whether we should look ahead to the future of work with enthusiasm or fear. Are Americans’ jobs going to become more interesting and complex as rote tasks are moved offshore or eliminated by technology? Or will managers and workers be ground down by competitive pressures that leave little time or room for creativity and innovation?

Truth is, the trends prevailing in today’s workplace provide ammunition for optimists and pessimists alike.

On the positive side, employers are hiring workers with higher and higher levels of education, and jobs are demanding ever more sophistication. According to the Bureau of Labor Statistics, 34% of adult workers in the U.S. now have a bachelor’s degree or better, up from 29% 10 years ago. What’s more, the modern workplace no longer resembles the factory assembly line but rather the design studio, where the core values are collaboration and innovation, not mindless repetition. Talented people are still in high demand, and there’s no evidence yet that work has become less interesting because of outsourcing. “On balance, I don’t think that jobs are being fragmented,” says Paul Osterman, a labor economist at the Massachusetts Institute of Technology.

Fully 60% of respondents to a BusinessWeek poll expect working conditions for the average person to be better in 10 years than they are now. That’s according to an online survey of 2,000 U.S. executives and managers done in late June and early July. And in the same poll, 82% of respondents said that self-fulfillment will be a more powerful motivator than fear if we look 10 years out.

Then again, there are persistent signs that the gloomier outlook is gaining traction as well. Job satisfaction in the U.S. plummeted in 2006 to a record low. That’s according to a survey of 5,000 households done for the Conference Board. Only 47% of workers were satisfied with their jobs in 2006, down from 59% in 1995. “The demands in the workplace have increased tremendously,” says Lynn Franco, director of consumer research for the Conference Board, especially as technology has made it ever harder to get away from the job.

Even more disturbing, two decades of rising incomes for educated workers seem to have come to a halt, at least temporarily. When adjusted for inflation, the real wages and salaries of U.S. workers with at least a bachelor’s degree are barely higher than they were in 2000, an unpleasant surprise in a world in which education is seen as the route to success.

The wage stagnation, combined with the 60% rise in college tuitions since 2000, seems to be discouraging many young Americans from getting a college education. The percentage of 25- to 29-year-olds with at least a bachelor’s degree has actually fallen during this decade. This raises the real possibility that this generation of young Americans may actually be less educated than the previous one, creating a growing gap between the kinds of people companies need and the workers who are actually available.

What can you do? Whether you are a manager or worker, this Special Report provides the intellectual tools and information you need to move toward the more optimistic vision. We’ll look at the future of work—both in the short run and much farther out—from the best way to manage a global virtual team to the pros and cons of branding yourself, to the seemingly farfetched use of brain chips—yes, brain chips—to enhance your capabilities.

The first section examines work from the perspective of managers, focusing in particular on how to get an organization full of people from different cultures and backgrounds to collaborate efficiently and effectively. That’s not an easy task, but we’ll see how global giants, such as IBM (IBM), Nokia (NOK), and Dow Chemical (DOW), are able to accomplish it. Meanwhile, successful Indian companies—among them Infosys Technologies Ltd. (INFY) and Satyam Computer Services (SAY)—demonstrate how they recruit, train, and retain workers in a hypercompetitive environment.

The next section peeks into the future from the perspective of workers. We’ll explain how to avoid being “Bangalored” or “Shanghaied”—that is, having pieces of your job sent overseas. Our report’s reassuring message: “The offshoring trend is moving with the speed of a road paver rather than a hot rod, so there’s time for alert Americans and Europeans to scramble out of the way.” That means moving up the value chain to take advantage of new opportunities. It also can mean literally moving from one country to another, as we describe how Europe’s mobile labor force easily crosses national borders, perhaps giving a glimpse of where the rest of the world is heading.

Finally, the third section of the Special Report considers the impact of technology on the workplace, ranging from improved telecommuting to new techniques that help sleep-deprived workers, a serious problem in many occupations. In the future, advances in communication could enable new forms of workplace organization and mass collaboration of an unprecedented sort.

Beyond that, we ask: Will this be an invigorating “new world of empowered individuals encased in a bubble of time-saving technologies? Or will it be a brave new world of virtual sweatshops…?” For example, Wikipedia, the tremendously successful online encyclopedia, harnesses the efforts of thousands of volunteers to create something of great utility to society. But using a similar innovation in a profit-making corporation carries both enormous promise and problems.

In fact, the emerging ways that the workplace is being restructured have not yet been stress-tested. They have evolved in a period of rapid global growth, and no one knows how they will react if the world economy hits a rocky patch. We have entered uncharted territory—and that’s why this special report offers guideposts rather than a Google-esque road map.

Still, when the future of work comes to pass, will it be a bright or bleak one for most people? “I’ll be optimistic,” says MIT’s Osterman. We are, too.



BusinessWeek

OCTOBER 22, 2007

 


What in the Web Are They Thinking?
Believe it or not, the crazy sums tech and media giants are paying for startups may ultimately make sense

In Silicon Valley, they love to say it’s not about the money. Yet in late September privately held online social network Facebook, with an expected $150 million in 2007 sales, sought new investment based on a stunning $10 billion-plus valuation. A few days later, a financial opinion Web site, 24/7 Wall St., speculated that TechCrunch, a blog that grosses about $200,000 a month, might fetch $100 million or more from an acquirer such as CNET Networks (CNET ). Now there’s talk that RockYou!, the second-largest maker of software “widgets” that add features to social networks such as Facebook, might seek up to $500 million to sell out. No matter that RockYou denies it. Or that, by several estimates, the sum total of monthly Facebook widget advertising revenue is less than $1 million.

None of those deals has come to pass, and maybe none will. Google (GOOG ) is real enough, though, and its stock price shot past 600 on Oct. 9, giving the search giant even more power to buy up companies. These lofty valuations have a lot of people in Silicon Valley and beyond squirming. They worry about a replay of the dot-com boom, which peaked early in 2000, only to crash later that year. “Companies like Facebook are driving everybody bananas,” says Sumant Mandal, managing director at Clearstone Venture Partners.

But the bubble chatter misses the point. Buyouts by established companies, from Google, Microsoft (MSFT ), Yahoo! (YHOO ), and eBay to News Corp. (NWS ) and CBS (CBS ), bubbly as they may appear, serve a valid strategic purpose. Marketers and media companies alike fervently believe there are lucrative opportunities to get people engaged with their brands, products, and ads in ways Madison Avenue could never dream of.

Fantastical valuations signal an important transformation in the Web economy, one that will shake things up even more than the dot-com bust did. The Web is changing from a place where people find information, entertainment, and products into a social medium where they share videos on YouTube and communicate with friends on Facebook.

ACQUISITION BINGE
Consider Microsoft, which was said to be interested in buying a 5% stake in Facebook for up to $500 million. That’s 2% of Microsoft’s $23 billion stash of cash and short-term investments–chump change for pole position on the emerging new Web. And the downside of betting too much is minimal. As if to prove the point, when eBay recently took a $1.4 billion writedown on its $2.6 billion acquisition of Internet phone service Skype in 2005, its stock actually rose slightly. Investors had already written it off themselves.

And so the race continues to heat up. In August, Microsoft closed on its $6 billion acquisition of online ad firm aQuantive. Yahoo recently bought online office productivity software maker Zimbra for $350 million and ad network BlueLithium for $300 million. Google alone has bought 11 Web outfits so far this year, about double last year’s pace, including a “microblogging” service called Jaiku on Oct. 9.

Media companies are accelerating their activity, too, making News Corp.’s 2005 purchase of MySpace for $580 million look reasonable. CBS, for instance, bought a far smaller, more targeted music site called Last.fm in May for $280 million, a price that one venture capitalist says was more than five times what he expected. Notes Reid Hoffman, chairman of professional networking site LinkedIn and an angel investor in Facebook and other Web startups: “Strategy is a lot more important than cash to these companies.”

GRANDIOSE EXPECTATIONS
That said, rationales for some valuations can get rather creative. Some members of the fast-growing Facebook ecosystem pin their analyses on the idea that the company could become the next Google. Lee Lorenzen, CEO of Altura Ventures, which recently launched a fund for companies building applications for Facebook, even makes the case that Facebook is worth $100 billion. He’s assuming that various new kinds of ads and e-commerce will help Facebook produce $2.2 billion in profits by the end of 2008.
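To make the arithmetic behind these figures concrete, here is a minimal sketch in Python. All of the dollar amounts come from this article; the helper functions and their names are purely illustrative, not anything the companies or analysts actually use.

```python
# Minimal sketch of the implied-valuation arithmetic discussed above.
# Dollar figures are taken from this article; the helpers are illustrative only.

def implied_valuation(stake_price: float, stake_fraction: float) -> float:
    """Company valuation implied by paying stake_price for a fractional stake."""
    return stake_price / stake_fraction

def earnings_multiple(valuation: float, projected_profits: float) -> float:
    """How many times projected profits a given valuation represents."""
    return valuation / projected_profits

# Microsoft's rumored deal: up to $500 million for a 5% stake of Facebook.
print(implied_valuation(500e6, 0.05))             # 10,000,000,000 -> a $10 billion valuation

# Lorenzen's case: a $100 billion Facebook against $2.2 billion in projected 2008 profits.
print(round(earnings_multiple(100e9, 2.2e9), 1))  # ~45.5x projected earnings
```

The sketch only restates the article’s numbers; whether those projected profits ever materialize is exactly the question raised next.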

Not surprisingly, others are aghast at this kind of analysis. Anant K. Sundaram, a finance professor at Dartmouth College’s Tuck School of Business, notes that the only way Facebook’s value can get to $10 billion is by assuming much higher, sustained growth in users and revenue than the fabulously profitable Google. “I look at this valuation and go, ‘This is silly,’” says Sundaram. Even if Facebook somehow fulfills these projections, he warns, “the mistake would be for the rest of the world to make decisions based on that valuation.” Indeed, what happens if the economy cools further, taking online advertising with it, and Google reports a quarter that misses analysts’ expectations? A stock swoon might slow Google’s and other companies’ buying binge.

But for now, Google can still spend big, and tech and media executives feel they need to keep pace in building the new Web. In a sense, startups themselves, rather than silicon chips and disk drives, are becoming the raw materials of Silicon Valley. Each provides a small, modular piece of the end product; each gets acquired and assembled by the likes of Google and News Corp.–which, after all, are the ones that really know how to make money.


The New York Times

Internet-Era Magazine Is Revived to Look at the Future

Published: February 4, 2008
SAN FRANCISCO — One of the signature publications of the first dot-com boom is being reincarnated.

Photo: Derek Butcher, vice president and general manager of the new Industry Standard. (Thor Swift for The New York Times)

On Monday, International Data Group, the trade magazine publisher based in Framingham, Mass., will restart The Industry Standard as an online-only technology news site with a twist: readers will be asked to wager virtual money on whether various anticipated news developments and business deals in high-tech are likely to happen.

So-called prediction markets are seen as a way to use the wisdom of crowds to forecast future events, such as who will win the presidential election or the Super Bowl. On sites like the Hollywood Stock Exchange (www.hsx.com), NewsFutures (www.newsfutures.com) and TradeSports (www.tradesports.com), people can bet on the outcomes of films, news and sporting events. The new Industry Standard will extend that model to news about the Internet at www.thestandard.com.

The Industry Standard is a familiar brand to most people in high-tech. Ten years ago, I.D.G. began it as a weekly magazine based in San Francisco.

It became known as much for embodying the initial wave of dot-com hyperbole as for reporting on it: the magazine grew thick with ads from Internet companies and held industry conferences and rooftop networking parties before ad pages and subscriptions withered in the Internet bust.

Its parent company, Standard Media International, sought bankruptcy protection from creditors in August 2001. I.D.G., a major investor in the company, acquired many of its remaining assets, including the Web site and brand name.

Then for six years, I.D.G. did nothing with it.

“It was pretty dormant for the most part,” said Derek Butcher, vice president and general manager of the new Industry Standard. “But recently, we started thinking that there seemed to be a lot of equity left in the brand. People seemed to have a lot of love and respect for it,” he said.

The site, which goes live on Monday, will feature short contributions on a variety of high-tech topics from freelance writers, who will be paid around $300 a post.

But the centerpiece of the new site is the predictive market. When people register for the site, they will receive 100,000 “Standard Dollars,” which they can use to wager on such propositions as “another company will emerge as a suitor for Yahoo” or “TiVo will be bought by the end of the year.”

The listings, which can be suggested by readers but must be approved by The Standard’s editors, will each have associated odds. If the chances are 50 percent that “Google and Dell will team up to make a mobile phone” and it actually happens, a reader who wagered on it would double his money.
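As a rough illustration of how that payout arithmetic could work, here is a minimal sketch in Python, assuming the simple binary pricing implied by the article’s example. The proposition, odds, and Standard Dollar figures come from the article; the function and variable names are hypothetical, not The Standard’s actual mechanics.

```python
# Minimal sketch of a binary prediction-market payout, assuming a wager priced
# at the listed probability: a 50% proposition that comes true doubles the money
# wagered. Names here are illustrative, not The Standard's API.

def payout(stake: float, probability: float, occurred: bool) -> float:
    """Net change to a bettor's balance for a yes/no proposition."""
    if occurred:
        return stake / probability - stake  # winnings beyond the returned stake
    return -stake                           # the stake is lost

starting_balance = 100_000   # Standard Dollars granted at registration
wager = 10_000

# "Google and Dell will team up to make a mobile phone," priced at 50% odds:
print(starting_balance + payout(wager, 0.50, occurred=True))   # 110,000 (wager doubled)
print(starting_balance + payout(wager, 0.50, occurred=False))  # 90,000  (wager lost)
```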

Standard Dollars can be exchanged for prizes but will probably be used on the site as a symbol of a person’s reputation for success, Mr. Butcher said.

Prediction markets have been evangelized in the writings of Thomas W. Malone, a professor at the Sloan School of Management at the Massachusetts Institute of Technology, who is advising I.D.G.

Professor Malone says prediction markets can tap into the hidden wisdom of crowds, drawing out expertise and insider knowledge.

Google and Microsoft use prediction markets to bet not only on industry events but also on whether a certain product might be shipped on time, Professor Malone said. The results of the betting, he said, often reveal internal problems that managers may not know about.

“Prediction markets provide a way of integrating information from many different people very quickly and effectively,” Professor Malone said. “That is one reason to believe the kind of thing The Industry Standard is doing is a harbinger of something that is going to be much more common in the future.”


