
August 02, 2008

The Marginal Utility of Internet Companies

While I have been starting work at a new company (it is still in the setting-up phase), a lot of interesting news has emerged. Among the plethora of stories, I have decided to look at the following: (i) Facebook suing studiVZ, a German clone of the social network; (ii) the public relations disaster of Cuil, the new search engine that claimed to rival Google in indexing and search; and (iii) Google's launch of several new apps and initiatives, Knol and Lively (and I have a gripe with Google over Picasa as well: there is no Mac OS X version). All of this gives the feeling that Internet companies are reaching marginal utility, a phase in which everyone is seeking the next big thing. It reminded me of a question posed to Bill Gates in a BBC interview before his departure from an active role in Microsoft: whether two people in a Starbucks café working on a new Internet innovation have now taken over from two people innovating in their parents' garage. Here are some thoughts on why I believe everyone is in the marginal-utility phase of the web's evolution.

  • Legal action as a gesture to stop regional or geography-specific Facebook clones from arising: Why did Facebook choose to sue studiVZ and not Xiaonei, the Chinese Facebook clone? Looking at studiVZ's interface, I realized that they have at least changed parts of it; at least they bothered to paint it red instead of blue. Xiaonei, by contrast, is a blatant word-for-word, box-for-box imitation of Facebook. Xiaonei has even cloned Facebook's approach of opening the platform to developers, and, having raised US$430M in funding, it is better funded than Facebook. I am pretty sure Facebook's executives realized that it is harder to deal with the perfect clone, and easier to take on a German clone in a jurisdiction with clear legislation favouring intellectual property. One trend that is emerging, if you have been studying online social networks, is that different countries prefer different networks: in India and Brazil, people like Google's Orkut, while people in Thailand like Hi5. There is now an added dimension as well, in that the generic social network can be broken into niche groups. For example, MyYearBook.com, a social network for teenagers (who may well grow up onto other social networks), just raised US$13M. Of course, Facebook has started to internationalize its portal into other languages as a response to these region-specific issues, but the generational turnover of social-network users will remain a challenge for them.
  • The Clone Wars return, and now the Europeans and Americans are cloning each other: We used to talk about how Chinese clones of US web 2.0 start-ups make a killing. Seriously, the Europeans are no better: think of studiVZ, a Facebook clone in Germany. The Europeans are beginning to realize that they can do it just as well, with their natural language advantage. In fact, the Americans are doing it among themselves. Google has not been especially innovative recently: Knol directly competes with Wikipedia, except that it does not harness the wisdom of crowds, and Lively is Google's first foray into virtual worlds, which echoes what I have said before. Once bloggers get tools to propagate themselves into the virtual world as avatars, the virtual-world market may trigger another web evolution along this line of thinking. Even Google faces a clone in its own backyard with Cuil, but Cuil did not make sure its product was ready and ended up in a PR disaster. Google, for its part, has wasted no time in talking about search quality on its blogs again.
  • Marginal utility, everyone?: My guess is that every company is hitting marginal utility, including Google, which has started a venture-capital arm to find new ideas it would not otherwise have thought of. Of course, unlike the last web bubble, we no longer see astronomical valuations, and given that we are moving through a credit crisis in the US, people will be conservative, which means investment will flow slowly.

Of course, while Internet web services are competing out there, it might be better to look for a blue ocean where mobile meets the web, or to explore new emerging markets where the clones might succeed.




Real People Don’t Have Time for Social Media

Written by Sarah Perez / April 16, 2008 2:00 PM / 50 Comments


Let’s be honest here: we’re all a bunch of social media addicts. We’re junkies. Whether it’s a new Twitter app, a new Facebook feature, or a new social anything service, we’re all over it. But we may not be the norm. The truth is, being involved in social media takes time, something that most people don’t have a lot of. So how can regular folk get involved with social media? And how much time does it really take?

The Time It Takes To Be Involved

It was this post on a blog called Museum 2.0 that caught our eye.

[Side Note: Museum 2.0 is a blog whose niche is exploring the technologies and philosophies of web 2.0 and then applying them to museums, yes, like brick-and-mortar museums. The site’s owner, Nina Simon, works at the Tech Museum of Innovation as curator of the new Tech Virtual Museum Workshop and previously, worked at the Electric Sheep Company and International Spy Museum in Washington, D.C. Fascinating read, by the way.]

The post was “How Much Time Does Web 2.0 Take?” and it looks at all the different types of activities and levels of participation on a sliding scale depending on how much time you have to invest.

The point of the scale is to show regular folk, albeit those in a particular industry, how they can fit getting involved with social media into their day-to-day routine.

Let’s call it a “real person” scale.

Although she was specifically writing for the museum crowd, there is some good information here that we can all benefit from. To summarize, here are her findings:

1-5 Hours per Week = Participant

A participant is at the lower end of the scale. Participants can set up MySpace or Facebook pages and groups, run a Twitter feed, comment on blogs, and/or upload images to a site like flickr. She notes that the most time-consuming aspect of Twitter is not the broadcasting aspect but finding followers who will read your content.

5-10 Hours per Week = Content Provider

A content provider can start a blog or a podcast. Both activities require slightly more advanced technical skills and a larger time commitment. Bloggers should aim for a minimum of one post per week, but two or three would be better, she says. Podcasts can be as infrequent as once per month.

10-20 Hours per Week = Community Director

A community director is much more involved with social media. Here, her advice is aimed more narrowly at museum staff, but the overall suggestions still hold up. Community directors can get involved in community web sites, work comment boards, and create projects in Second Life. Basically, this category covers larger-scale activities that, once launched and running, don't require full-time management.

Time Spent on Social Media, image via Museum 2.0

How Much Time Do You Spend on Social Media?

For a comparison between what time commitments are recommended for “regular” folk versus how much time our community spends on social media, I took a completely unscientific Twitter poll where I asked that question.

Here’s how you responded to the question: “How much time do you spend on social media per day?”

A Couple Hours per Day:

A Bit More Involved:

There Should Be a Support Group for This:

What Can We Learn From This?

Looking at all the various web-based activities and projects, what we can tell is that not everyone is going to have the time to be as heavily involved in social media as we are.

Even those of us at the lower end of the range, offering up only a few hours per day, are still heavily involved with social media when we’re placed on this “real person” scale that Nina provides.

If we’re going to recommend a service or activity to a friend whose alarm goes off at 6 AM and doesn’t return home from the office until 6 PM, then we need to respect that their “spare” time is precious. Whatever new app or service we’re trying to push on them should have real value.

Where do you rate on this scale? Have you over-committed or under-committed your time?


Mon, Apr 14, 2008

Andy Oram

Book review: “The Future of the Internet (And How to Stop It)”

 

Most of us in the computer field have heard more than our fill about the free software movement, the copyright wars, the scourge of spyware and SQL injection attacks, the Great Firewall of China, and other battles for the control of our computers and networks. But your education is stifled until you have absorbed the insights offered by comprehensive thinkers such as Jonathan Zittrain, who presents in this brand new book some critical and welcome anchor points for discussions of Internet policy. Now we have a definitive statement from a leading law professor at Harvard and Oxford, who combines a scholar’s insight into legal doctrines with a nitty-gritty knowledge of life on the Internet.

You can read Zittrain for cogent discussions of key issues in copyright, filtering, licensing, censorship, and other pressing issues in computing and networking. But you’re rewarded even more if you read this book to grasp fundamental questions of law and society, such as:

  • What determines the legitimacy of laws and those who make and enforce them?
  • What relationship does the law on the books bear to the law as enforced, and how does the gray area between them affect the evolution of society?
  • What is the proper attitude of citizens toward law-makers and regulators, and how much power is healthy for either side to have?
  • How can community self-organization stave off the need for heavy-handed legislation–and how, in contrast, can premature legislation preclude constructive solutions by self-organized communities?

Core questions such as these power Zittrain’s tour of technology and law on today’s networks. “The Future of the Internet” takes us briskly down familiar paths, offering valuable summaries of current debates, but Zittrain also tries always to hack away at the brambles that block the end of each path. Thanks to his unusually informed perspective, he usually–although not always–succeeds in pushing us forward a few meticulously footnoted footsteps.

 

Zittrain has summarized the points in this book in an online article, but reading the whole book pays off because of its depth of legal reasoning.

Informed recommendations

One of Zittrain’s most applicable suggestions–and one that exemplifies the positive philosophy he brings to his subject–is his solution for handling computer viruses. Currently, non-expert computer users are either helpless in the face of viruses or employ inadequate firewall products that block useful programs along with infections. When Internet service providers scramble to block malware at the router, proponents of network neutrality complain that they’re violating the end-to-end principle. The dilemma seems unsolvable.

Zittrain cuts the Gordian knot by suggesting user empowerment. Experts who know how to track and identify viruses or spyware can label them as such, and less expert users can check ratings on every download. Tools are urgently needed that aggregate widely distributed ratings and present them to users in a very simple screen of information whenever they initiate something potentially dangerous. (Zittrain cites, as a model, the partnership between Google and the StopBadware project run by his colleagues at the Berkman Center.)
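The aggregation tool Zittrain envisions could be sketched very roughly as follows. This is a toy illustration, not anything from the book or from StopBadware: the function name, the reputation weighting, and the 0.8/0.2 thresholds are all my own assumptions about how such a tool might condense many expert ratings into one simple signal.

```python
# Toy sketch: condense widely distributed expert ratings on a download
# into one simple verdict a non-expert user could act on.
# All names and thresholds here are illustrative assumptions.

from collections import namedtuple

# Each rating carries the rater's reputation (0.0-1.0) and their judgment.
Rating = namedtuple("Rating", ["rater_reputation", "is_safe"])

def verdict(ratings, min_ratings=3):
    """Return 'safe', 'dangerous', or 'unknown' for a download,
    weighting each rating by the rater's reputation."""
    if len(ratings) < min_ratings:
        return "unknown"  # too little community data to advise the user
    total = sum(r.rater_reputation for r in ratings)
    safe = sum(r.rater_reputation for r in ratings if r.is_safe)
    score = safe / total if total else 0.0
    if score >= 0.8:
        return "safe"
    if score <= 0.2:
        return "dangerous"
    return "unknown"  # experts disagree; surface that honestly
```

The interesting design choice is the explicit "unknown" outcome: rather than forcing a binary allow/block decision the way a firewall does, the tool would tell the user when the community genuinely has no consensus, preserving their freedom to run the program anyway.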

Users could have a choice of proxies to help them decide what to put on their computers. Additionally, instead of politely hiding network activity from users, mass-market operating systems could show that information in a manner that is easy to grasp, so that the user has a clue when the computer is at risk of turning into a zombie. Zittrain would probably be gratified by a simple security enhancement recommended in the February issue of Communications of the ACM: a suggestion that a wireless router notify each host using it how many hosts are currently connected, so that users could immediately detect wardriving.

Other people have suggested distributed self-defending security systems, but Zittrain links the whole endeavor to the hope provided by the Internet’s ability to bring together people who shared positive goals. If software vendors and Internet security researchers gathered around this vision, a self-interested and self-organized community could protect itself, with more able members educating the less able ones.

As an alternative to restrictive software that sinks roots deep into the operating system and locks down computers, such tools could actually improve Internet users’ knowledge and sense of community while putting a dent in identity theft, spam, and distributed denial of service attacks.

Throughout the wide range of topics described in his book, Zittrain looks first to technically powered solutions that unite people of good will and encourage potential malefactors to renounce anti-social behavior. But his tone lies far from that of cocky cyberpunk hackers who boast that their technological solutions can protect them from all cyberharm (and damned be less savvy cybercitizens). Zittrain is too good a lawyer to dismiss the power of governments, or to assume that such power can only be oppressive. Thus:

  • He calls for a new Manhattan Project that would draw in government, research institutions, and individual programmers to solve the aforementioned malware problem.
  • He allows that the government should be permitted a lower threshold for access to financial data than for access to other personal data.
  • He suggests regulation to enforce data portability, so that user data stored by online services could be retrieved by the owners when they wanted to switch services or when the services failed. (This is the online equivalent to the historic endorsement of open office standards that has been passed by governments in several countries and was nearly hatched in the state of Massachusetts, before a careless legislature ran an off-road vehicle over it.)

Zittrain is not a fan of network neutrality as most proponents describe it, but he sympathizes with the end-to-end principle and would like the principle of neutrality applied to APIs offered by web services such as Google’s. If web service providers claim that their data is available for creative uses by outsiders, they should not be allowed to arbitrarily cut off those outsiders that happen to be competitively successful or disruptive to their business models.

I find this recommendation particularly intriguing, because the promising area of web services is currently fraught with uncertainty that’s clearly holding back socially beneficial uses. Traditional PCs seem a rock of stability in comparison to the services exploited by modern web services, which vendors can whisk away like apparitions in the night.

You probably know, from such scandals as Yahoo!’s cooperation with the Chinese government in tracking down dissidents and Microsoft’s release of search data for a “research project” at the Department of Justice, that data stored at an online service is intrinsically less secure than data stored on your computer. But did you know that the law itself in the U.S. grants substantially less protection against search and seizure to your data when it’s stored at a service? Zittrain’s elucidation of this legal limbo, although it demands close reading, is a valuable window into the issues of technology and policy for lay readers.

Concerning medical privacy, in particular, the World Privacy Forum noted in a February report (PDF) that personal health records stored by generic organizations such as Microsoft or Google are not protected by the Health Insurance Portability and Accountability Act (HIPAA). Therefore, the records will probably be fair game for subpoenas in divorce cases, lawsuits, etc. The individual also has fewer rights when trying to correct entries.

Well, I’ve given you the quick tour of Zittrain’s book, which is like doing the Smithsonian National Museum of Natural History in an hour. Now we’ll meet back in the lobby by the elephant statue, as it were, and examine the key concept that runs through his book.

Generativity: the new battle cry

We’ve all heard so much in the past decade about “innovation” that I’m in danger of having my readers snap the browser tab shut on this web page when they see the word. (I remember when the fingers-down-the-throat word in the business world was “synergy.” That word finally disappeared along with the businesses that invoked it to justify their mergers.)

Zittrain has coined a term that captures with more richness and potential what’s happening in our economy: generativity, a measure of how many new, unexpected, and (occasionally) useful things can be developed thanks to an available platform. He lists a number of famous generative technologies, ranging from duct tape and Lego bricks to the all-time heavyweight champion of generativity, the core Internet protocols. But the effects of the Internet are predicated on many other generative technologies that have contributed to the wave of innovation over the past fifteen years or so:

  • Personal computer hardware, which accepts an unlimited variety of devices
  • Personal computer operating systems, which let ordinary consumers load any program that’s compiled to run on them
  • Free software, which encourages infinite extensions

The boon of generativity is threatened in two major ways: network restrictions and locked-down devices such as the Xbox, TiVo, and iPhone, which Zittrain calls tethered appliances. The network and the endpoint are symbiotically linked in their power: freedom in one can help keep the flame of freedom burning on the other, while correspondingly, dousing the embers on one can dim generativity on the other.

Appliances are not bad. The Xbox, TiVo, and iPhone have their place, and Zittrain points out that even the trenchantly open One Laptop Per Child system embeds a trusted computing substrate called Bitfrost that combines digital signatures, sandboxing, and mandatory access controls to prevent downloads from harming the system. Unlike trusted computing platforms in proprietary products, Bitfrost can be overridden by a sophisticated user, but requires a BIOS reflash.

The degree to which a system is “appliancized” is inversely related to its generativity. We need to make sure that at least some of the population can preserve generativity in order to create technology at new levels. Furthermore, everyone needs generative systems in order to prevent vendors from choking off mass adoption of innovations.

Many of the Internet’s dangers stem from the attributes of a good generative system. Zittrain, in addition to highlighting ease of mastery and accessibility, points out that a highly generative system makes it easy to transfer capabilities from highly sophisticated developers to untrained users. This is not entirely sweet. For instance, security guru Bruce Schneier has repeatedly pointed out that easy transferability is the bane of Internet security.

It’s bad enough, Schneier says, that systems inevitably contain bugs that can be fatally exploited by top-notch coders and cryptography experts. What really threatens the Internet is that these experts can bundle the exploits into kits that script kiddies can download and use with minimal education. Sharing tools that perform intrusions is not in itself malicious; these tools are important for system administrators, programmers who reverse engineer applications (another skill with both good and evil applications), and other users. But the practice definitely swells the number of malicious programs foraging the Internet for victims.

Once we accept the value of generativity, technical solutions can allow us to preserve it while protecting ourselves from the bugs and intrusions it leaves us so vulnerable to. For instance, instead of adopting a fortress mentality, public libraries and other institutions could run virtual operating systems on computers they want to protect. In our homes, our computers could have one operating system open to experimental applications (and instantly reloadable if compromised), side by side with another that is locked down. This would allow ordinary people the same generative freedom as programmers, who typically maintain separate work and development platforms.

Value at the fringe

Among Zittrain’s most alarming insights is how calls for a safer Internet, and for one more friendly to copyright and trademark holders, can feed into general governmental control over its population in an age where more and more activity moves online. This danger–also prophesied by Swedish Pirate Party leader Rickard Falkvinge–makes generativity a concern to an immensely larger citizenry than the usual suspects consisting of free software developers and remix musicians. Zittrain’s exploration of technology’s “regulability” rises far beyond the book’s opening subject toward an expansive contribution to our understanding of the relations among citizens, governments, and the commonwealth.

Every business has suffered from the hammerlock of a new computer system that turns out to prevent employees from making the tiny exceptions to rules that previously allowed smooth operations. Perfect control on operating systems or the Internet could cause similar disasters, which range from the added costs of DRM in schools to clamp-downs by repressive regimes. Zittrain lays out several interesting legal considerations that aren’t usually raised, overtly in defense of deliberately leaky enforcement regimes.

Concurring and dissenting opinions

 

I should mention before going further that Zittrain showed me an early paper on the subject underlying his book, and cited me in his acknowledgments as one of the people whose conversations with him influenced the book. Had I the chance to discuss the following issues with him, I would have advised a few changes to the text.

 

The intractability of privacy violations

Zittrain’s last chapter focuses on privacy, which is widely understood to have passed a threshold in the past few years. Given cell phone cameras, the complex data-sharing services on popular social networks, and other tools in the hands of ordinary computer users, privacy can now be violated by irresponsible crowds in addition to large companies and governments.

First, I think Zittrain exaggerates the shift. If he believes that government and corporate abuses are now only a tiny sliver of a larger problem created by peer production on the Internet, I wonder whether he’s ever been barred from an airplane by the TSA or denied coverage by an insurance company.

But the problems he points to in privacy-violating activities that have suddenly become everyday behaviors–such as tagging photos on Flickr with people’s names–are real. He tries to apply lessons from an earlier chapter focusing on the checks and balances that make Wikipedia successful. Unfortunately, I think the analogy is weak.

Wikipedia, as Zittrain points out, remains a centralized institution under the ultimate control of one man. Authority fans out from creator Jimbo Wales in an admirably broad and flexible spread, but creativity and control at each level depend on the backstop provided at the next higher level. I agree with Zittrain that some of the solutions found here can be translated to the wider and wilder Internet, but in the area of privacy I don’t find the analogy persuasive.

Even appliances depend on generative systems

The forward thrust created by generative technologies is so powerful that one finds them in even supposedly non-generative appliances. Most embedded devices with non-trivial capabilities (devices that need more than a while-loop for an operating system) use general-purpose operating systems, often Linux or the reduced-fat version of Windows known as Windows CE.

Zittrain contrasts generative PCs and free software to appliances such as the TiVo, Xbox, and iPhone. The irony is that these are all based on generative technologies. The manufacturers could not resist the opportunity to cut development costs by using robust and freely available platforms.

TiVo uses Linux as its operating system, the Xbox runs on general-purpose hardware that has been successfully hacked to run Linux, and the iPhone–which epitomizes to Zittrain the supreme tethered appliance–has BSD inside. Because of its innately generative qualities (including the relatively transparent language of its API, Objective-C), the iPhone was opened up just a few months after its release in a textbook kind of collaboration among self-organized hackers, leading to a free software toolkit that lets any programmer create new applications using all the features of the iPhone.

These examples underline the challenge Tim O’Reilly used to pose to Microsoft: without open platforms, where will its next wave of technology come from? It looks like Microsoft listened, considering its current tentative support for a few free software projects. An industry of appliances would be poorer without generative technology.

The tether chafes

One of the central points of Zittrain’s book is that embattled computer users, worn down by the onslaught of malware, tend to retreat and give up control to centers of authority, whether by installing restrictive firewalls or buying tethered appliances that were built from the ground up to be closed.

Zittrain has several wonderful sections laying out the long-term detriment of this choice, not only for obvious topics such as technological innovation and fair use of copyrighted material, but for the balance between government and individual rights. He’s on top of all the abuses caused by manufacturers who keep control of their devices and send them automated updates–sometimes updates that deliberately disable previously available features. Tethered appliances respond to their vendors with the same flexible slavishness as computers taken over by roving bots.

But Zittrain does not use available evidence to rebut the seductive claim that choosing appliances over applications leads to more safety for the user and the overall community. Does it?

I think we have plenty of evidence to resist the tethering of previously open computers. For instance, what would most computer users trust more than a CD from Sony? And to ward off the dangers of the open Internet, should we turn to telephone companies to protect our privacy and personal data? I need say no more.

Among web services, the same worries apply. The dominant Internet appliance is Google, and every service it unveils seems to raise such fears about privacy that it has to perennially trot out its “don’t be evil” motto.

But nowhere has the trust in appliances been more dangerous than the calamitous rush to electronic voting machines without paper output, which cannot be adequately audited after deployment. We need to say loudly: closing down open systems is no solution to security risks. (Richard M. Stallman made similar points in response to Zittrain’s article, and Susan Crawford in her response.)

Web 2.0 extends generativity

The wide-area-network equivalent of a tethered appliance is “software as a service,” also known as an Application Service Provider. Here, I have to insist that Zittrain gets his terminology wrong. In place of these common industry terms, he refers to the phenomenon as Web 2.0.

Controversy has always surrounded the term Web 2.0, to be sure, despite attempts to define the phrase by Tim O’Reilly, who is credited with inventing it. Although everybody reads his own biases into the term, I don’t see any meaningful definition of Web 2.0 that includes web sites where users just log in to run an application remotely. I did see one other speaker misunderstand the term this way, but we have to resist the trend to “mash up” useful terms to the point where they lose their value and all come out in some bland uniformity.

Web 2.0 features–such as simple APIs and ways to incorporate user-submitted content–extend generativity as much as blogs and wikis do. They’re a critical stage in the ongoing evolution of the Internet. But Zittrain does offer some important critiques. Google Maps can discourage competition by co-opting it through its powerful API. And this ultimately means more control for Google–control it could leverage to artificially set the direction for mapping applications.

Thus, Web 2.0 technologies can be seen as enablers that open up the data and applications controlled by corporations, but also as the soft glove that allows the corporate fist to push itself further and further into their clients’ lives.

My glosses and musings on “The Future of the Internet” show how much meat it provides for analysis and discussion. Anyone who can make it through this long review would get a lot from the book. In addition to drawing links among useful recommendations for preserving our freedom, Zittrain proves that the legal frameworks for making such decisions are more complex than most technologists and policy makers give them credit for.


The Grid: The Next-Gen Internet?

Douglas Heingartner Email 03.08.01 | 2:00 AM

AMSTERDAM, Netherlands — The Matrix may be the future of virtual reality, but researchers say the Grid is the future of collaborative problem-solving.

More than 400 scientists gathered at the Global Grid Forum this week to discuss what may be the Internet’s next evolutionary step.

Though distributed computing evokes associations with populist initiatives like SETI@home, where individuals donate their spare computing power to worthy projects, the Grid will link PCs to each other and the scientific community like never before.

 

The Grid will not only enable sharing of documents and MP3 files, but also connect PCs with sensors, telescopes and tidal-wave simulators.

IBM’s Brian Carpenter suggested “computing will become a utility just like any other utility.”

Carpenter said, “The Grid will open up … storage and transaction power in the same way that the Web opened up content.” And just as the Internet connects various public and private networks, Cisco Systems’ Bob Aiken said, “you’re going to have multiple grids, multiple sets of middleware that people are going to choose from to satisfy their applications.”

As conference moderator Walter Hoogland suggested, “The World Wide Web gave us a taste, but the Grid gives a vision of an ICT (Information and Communication Technology)-enabled world.”

Though the task of standardizing everything from system templates to the definitions of various resources is a mammoth one, the GGF can look to the early days of the Web for guidance. The Grid that organizers are building is a new kind of Internet, only this time with the creators having a better knowledge of where the bottlenecks and teething problems will be.

The general consensus at the event was that although technical issues abound, the thorniest issues will involve social and political dimensions, for example how to facilitate sharing between strangers where there is no history of trust.

Amsterdam seemed a logical choice for the first Global Grid Forum because not only is it the world’s most densely cabled city, it was also home to the Internet Engineering Task Force’s first international gathering in 1993. The IETF has served as a model for many of the GGF’s activities: protocols, policy issues, and exchanging experiences.

The Grid Forum, a U.S.-based organization, combined with eGrid (the European Grid Forum) and Asian counterparts to create the Global Grid Forum (GGF) in November 2000.

The Global Grid Forum organizers said grid communities in the United States and Europe will now run in synch.

The Grid evolved from the early desire to connect supercomputers into “metacomputers” that could be remotely controlled. The word “grid” was borrowed from the electricity grid, to imply that any compatible device could be plugged in anywhere on the Grid and be guaranteed a certain level of resources, regardless of where those resources might come from.

Scientific communities at the conference discussed what the compatibility standards should be, and how extensive the protocols need to be.

As the number of connected devices runs from the thousands into the millions, the policy issues become exponentially more complex. So far, only draft consensus has been reached on most topics, but participants say these are the early days.

As with the Web, the initial impetus for a grid came from the scientific community, specifically high-energy physics, which needed extra resources to manage and analyze the huge amounts of data being collected.

The most nettlesome issues for industry are security and accounting. But unlike the Web, which had security measures tacked on as an afterthought, the Grid is being designed from the ground up as a secure system.


Conference participants debated which types of services (known in distributed-computing circles as resource units) provided through the Grid will be charged for, and whether administrative authority will be centralized.

Corporations have been slow to cotton to this new technology’s potential, but the suits are in evidence at this year’s Grid event. As GGF chairman Charlie Catlett noted, “This is the first time I’ve seen this many ties at a Grid forum.”

In addition to IBM, firms such as Boeing, Philips and Unilever are already taking baby steps toward the Grid.

Though commercial needs tend to be more transaction-focused than those of scientific pursuits, most of the technical requirements are common. Furthermore, both science and industry participants say they require a level of reliability that’s not offered by current peer-to-peer initiatives: Downloading from Napster, for example, can take seconds or minutes, or might not work at all.

Garnering commercial interest is critical to the Grid’s future. Cisco’s Aiken explained that “if grids are really going to take off and become the major impetus for the next level of evolution in the Internet, we have to have something that allows (them) to easily transfer to industry.”

Other potential Grid applications include a virtual observatory and simulations of blood flow for doctors. While some of these applications have existed for years, the Grid will make them routine rather than exceptional.

The California Institute of Technology’s Paul Messina said that by sharing computing resources, “you get more science from the same investment.”

Ian Foster of the University of Chicago said that Web precursor Arpanet was initially intended to be a distributed computing network that would share CPU-intensive tasks but instead wound up giving birth to e-mail and FTP.

The Grid may give birth to a global file-swapping network or a members-only citadel for moneyed institutions. But just as no one ten years ago would have conceived of Napster — not to mention AmIHotOrNot.com — the future of the Grid is unknown.

An associated DataGrid conference continues until Friday, focusing on a project in which resources from Pan-European research institutions will analyze data generated by a new particle collider being built at Swiss particle-physics lab CERN.

Read Full Post »

WIRED MAGAZINE: 16.03


Free! Why $0.00 Is the Future of Business

By Chris Anderson | 02.25.08 | 12:00 AM

At the age of 40, King Gillette was a frustrated inventor, a bitter anticapitalist, and a salesman of cork-lined bottle caps. It was 1895, and despite ideas, energy, and wealthy parents, he had little to show for his work. He blamed the evils of market competition. Indeed, the previous year he had published a book, The Human Drift, which argued that all industry should be taken over by a single corporation owned by the public and that millions of Americans should live in a giant city called Metropolis powered by Niagara Falls. His boss at the bottle cap company, meanwhile, had just one piece of advice: Invent something people use and throw away.

One day, while he was shaving with a straight razor that was so worn it could no longer be sharpened, the idea came to him. What if the blade could be made of a thin metal strip? Rather than spending time maintaining the blades, men could simply discard them when they became dull. A few years of metallurgy experimentation later, the disposable-blade safety razor was born. But it didn’t take off immediately. In its first year, 1903, Gillette sold a total of 51 razors and 168 blades. Over the next two decades, he tried every marketing gimmick he could think of. He put his own face on the package, making him both legendary and, some people believed, fictional. He sold millions of razors to the Army at a steep discount, hoping the habits soldiers developed at war would carry over to peacetime. He sold razors in bulk to banks so they could give them away with new deposits (“shave and save” campaigns). Razors were bundled with everything from Wrigley’s gum to packets of coffee, tea, spices, and marshmallows. The freebies helped to sell those products, but the tactic helped Gillette even more. By giving away the razors, which were useless by themselves, he was creating demand for disposable blades.

A few billion blades later, this business model is now the foundation of entire industries: Give away the cell phone, sell the monthly plan; make the videogame console cheap and sell expensive games; install fancy coffeemakers in offices at no charge so you can sell managers expensive coffee sachets.


Thanks to Gillette, the idea that you can make money by giving something away is no longer radical. But until recently, practically everything “free” was really just the result of what economists would call a cross-subsidy: You’d get one thing free if you bought another, or you’d get a product free only if you paid for a service.

Over the past decade, however, a different sort of free has emerged. The new model is based not on cross-subsidies — the shifting of costs from one product to another — but on the fact that the cost of products themselves is falling fast. It’s as if the price of steel had dropped so close to zero that King Gillette could give away both razor and blade, and make his money on something else entirely. (Shaving cream?)

You know this freaky land of free as the Web. A decade and a half into the great online experiment, the last debates over free versus pay online are ending. In 2007 The New York Times went free; this year, so will much of The Wall Street Journal. (The remaining fee-based parts, new owner Rupert Murdoch announced, will be “really special … and, sorry to tell you, probably more expensive.” This calls to mind one version of Stewart Brand’s original aphorism from 1984: “Information wants to be free. Information also wants to be expensive … That tension will not go away.”)

Once a marketing gimmick, free has emerged as a full-fledged economy. Offering free music proved successful for Radiohead, Trent Reznor of Nine Inch Nails, and a swarm of other bands on MySpace that grasped the audience-building merits of zero. The fastest-growing parts of the gaming industry are ad-supported casual games online and free-to-try massively multiplayer online games. Virtually everything Google does is free to consumers, from Gmail to Picasa to GOOG-411.

The rise of “freeconomics” is being driven by the underlying technologies that power the Web. Just as Moore’s law dictates that a unit of processing power halves in price every 18 months, the price of bandwidth and storage is dropping even faster. Which is to say, the trend lines that determine the cost of doing business online all point the same way: to zero.

But tell that to the poor CIO who just shelled out six figures to buy another rack of servers. Technology sure doesn’t feel free when you’re buying it by the gross. Yet if you look at it from the other side of the fat pipe, the economics change. That expensive bank of hard drives (fixed costs) can serve tens of thousands of users (marginal costs). The Web is all about scale, finding ways to attract the most users for centralized resources, spreading those costs over larger and larger audiences as the technology gets more and more capable. It’s not about the cost of the equipment in the racks at the data center; it’s about what that equipment can do. And every year, like some sort of magic clockwork, it does more and more for less and less, bringing the marginal costs of technology in the units that we individuals consume closer to zero.

Photo Illustration: Jeff Mermelstein

As much as we complain about how expensive things are getting, we’re surrounded by forces that are making them cheaper. Forty years ago, the principal nutritional problem in America was hunger; now it’s obesity, for which we have the Green Revolution to thank. Forty years ago, charity was dominated by clothing drives for the poor. Now you can get a T-shirt for less than the price of a cup of coffee, thanks to China and global sourcing. So too for toys, gadgets, and commodities of every sort. Even cocaine has pretty much never been cheaper (globalization works in mysterious ways).

Digital technology benefits from these dynamics and from something else even more powerful: the 20th-century shift from Newtonian to quantum machines. We’re still just beginning to exploit atomic-scale effects in revolutionary new materials — semiconductors (processing power), ferromagnetic compounds (storage), and fiber optics (bandwidth). In the arc of history, all three substances are still new, and we have a lot to learn about them. We are just a few decades into the discovery of a new world.

What does this mean for the notion of free? Well, just take one example. Last year, Yahoo announced that Yahoo Mail, its free webmail service, would provide unlimited storage. Just in case that wasn’t totally clear, that’s “unlimited” as in “infinite.” So the market price of online storage, at least for email, has now fallen to zero (see “Webmail Windfall”). And the stunning thing is that nobody was surprised; many had assumed infinite free storage was already the case.

For good reason: It’s now clear that practically everything Web technology touches starts down the path to gratis, at least as far as we consumers are concerned. Storage now joins bandwidth (YouTube: free) and processing power (Google: free) in the race to the bottom. Basic economics tells us that in a competitive market, price falls to the marginal cost. There’s never been a more competitive market than the Internet, and every day the marginal cost of digital information comes closer to nothing.

One of the old jokes from the late-’90s bubble was that there are only two numbers on the Internet: infinity and zero. The first, at least as it applied to stock market valuations, proved false. But the second is alive and well. The Web has become the land of the free.

The result is that we now have not one but two trends driving the spread of free business models across the economy. The first is the extension of King Gillette’s cross-subsidy to more and more industries. Technology is giving companies greater flexibility in how broadly they can define their markets, allowing them more freedom to give away products or services to one set of customers while selling to another set. Ryanair, for instance, has disrupted its industry by defining itself more as a full-service travel agency than a seller of airline seats (see “How Can Air Travel Be Free?”).

The second trend is simply that anything that touches digital networks quickly feels the effect of falling costs. There’s nothing new about technology’s deflationary force, but what is new is the speed at which industries of all sorts are becoming digital businesses and thus able to exploit those economics. When Google turned advertising into a software application, a classic services business formerly based on human economics (things get more expensive each year) switched to software economics (things get cheaper). So, too, for everything from banking to gambling. The moment a company’s primary expenses become things based in silicon, free becomes not just an option but the inevitable destination.

WASTE AND WASTE AGAIN
Forty years ago, Caltech professor Carver Mead identified the corollary to Moore’s law of ever-increasing computing power. Every 18 months, Mead observed, the price of a transistor would halve. And so it did, going from tens of dollars in the 1960s to approximately 0.000001 cent today for each of the transistors in Intel’s latest quad-core. This, Mead realized, meant that we should start to “waste” transistors.
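Mead's corollary is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below is illustrative only: it assumes a roughly $10 transistor in the mid-1960s and a clean 18-month halving ever since, neither of which is a precise historical figure.

```python
# Back-of-the-envelope check of the transistor-price arithmetic above.
# Assumptions (illustrative, not historical data): a transistor cost
# about $10 in the mid-1960s, and its price halved every 18 months.

def price_after(start_price: float, years: float, halving_months: float = 18.0) -> float:
    """Price after `years` of halving every `halving_months` months."""
    halvings = years * 12.0 / halving_months
    return start_price / (2.0 ** halvings)

# Four decades of halvings turn $10 into a tiny fraction of a cent:
cents = price_after(10.0, 42) * 100
print(f"{cents:.7f} cents per transistor")  # → 0.0000037 cents per transistor
```

With these assumed inputs the price lands near a millionth of a cent, the same order of magnitude the article cites.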

Waste is a dirty word, and that was especially true in the IT world of the 1970s. An entire generation of computer professionals had been taught that their job was to dole out expensive computer resources sparingly. In the glass-walled facilities of the mainframe era, these systems operators exercised their power by choosing whose programs should be allowed to run on the costly computing machines. Their role was to conserve transistors, and they not only decided what was worthy but also encouraged programmers to make the most economical use of their computer time. As a result, early developers devoted as much code as possible to running their core algorithms efficiently and gave little thought to user interface. This was the era of the command line, and the only conceivable reason someone might have wanted to use a computer at home was to organize recipe files. In fact, the world’s first personal computer, a stylish kitchen appliance offered by Honeywell in 1969, came with integrated counter space.



And here was Mead, telling programmers to embrace waste. They scratched their heads — how do you waste computer power? It took Alan Kay, an engineer working at Xerox’s Palo Alto Research Center, to show them. Rather than conserve transistors for core processing functions, he developed a computer concept — the Dynabook — that would frivolously deploy silicon to do silly things: draw icons, windows, pointers, and even animations on the screen. The purpose of this profligate eye candy? Ease of use for regular folks, including children. Kay’s work on the graphical user interface became the inspiration for the Xerox Alto, and then the Apple Macintosh, which changed the world by opening computing to the rest of us. (We, in turn, found no shortage of things to do with it; tellingly, organizing recipes was not high on the list.)

Of course, computers were not free then, and they are not free today. But what Mead and Kay understood was that the transistors in them — the atomic units of computation — would become so numerous that on an individual basis, they’d be close enough to costless that they might as well be free. That meant software writers, liberated from worrying about scarce computational resources like memory and CPU cycles, could become more and more ambitious, focusing on higher-order functions such as user interfaces and new markets such as entertainment. And that meant software of broader appeal, which brought in more users, who in turn found even more uses for computers. Thanks to that wasteful throwing of transistors against the wall, the world was changed.

What’s interesting is that transistors (or storage, or bandwidth) don’t have to be completely free to invoke this effect. At a certain point, they’re cheap enough to be safely disregarded. The Greek philosopher Zeno wrestled with this concept in a slightly different context. In Zeno’s dichotomy paradox, you run toward a wall. As you run, you halve the distance to the wall, then halve it again, and so on. But if you continue to subdivide space forever, how can you ever actually reach the wall? (The answer is that you can’t: Once you’re within a few nanometers, atomic repulsion forces become too strong for you to get any closer.)

In economics, the parallel is this: If the unitary cost of technology (“per megabyte” or “per megabit per second” or “per thousand floating-point operations per second”) is halving every 18 months, when does it come close enough to zero to say that you’ve arrived and can safely round down to nothing? The answer: almost always sooner than you think.
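"Sooner than you think" can be made concrete. As a minimal sketch (assuming a starting price of $1.00 per unit and the same 18-month halving; both numbers are hypothetical), the time to fall below any threshold grows only logarithmically, so even a ten-thousand-fold price drop takes about two decades:

```python
import math

# How long until a price that halves every 18 months falls below a
# given threshold? (Starting price and threshold are assumed values.)

def years_until(price_now: float, threshold: float, halving_months: float = 18.0) -> float:
    """Years for a price halving every `halving_months` months to reach `threshold`."""
    halvings = math.log2(price_now / threshold)
    return halvings * halving_months / 12.0

# From $1.00 per unit to a hundredth of a cent per unit:
print(round(years_until(1.00, 0.0001), 1))  # → 19.9
```

Each extra factor of ten costs only about five more years, which is why "round down to nothing" arrives on a human timescale.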

What Mead understood is that a psychological switch should flip as things head toward zero. Even though they may never become entirely free, as the price drops there is great advantage to be had in treating them as if they were free. Not too cheap to meter, as Atomic Energy Commission chief Lewis Strauss said in a different context, but too cheap to matter. Indeed, the history of technological innovation has been marked by people spotting such price and performance trends and getting ahead of them.

From the consumer’s perspective, though, there is a huge difference between cheap and free. Give a product away and it can go viral. Charge a single cent for it and you’re in an entirely different business, one of clawing and scratching for every customer. The psychology of “free” is powerful indeed, as any marketer will tell you.

This difference between cheap and free is what venture capitalist Josh Kopelman calls the “penny gap.” People think demand is elastic and that volume falls in a straight line as price rises, but the truth is that zero is one market and any other price is another. In many cases, that’s the difference between a great market and none at all.

The huge psychological gap between “almost zero” and “zero” is why micropayments failed. It’s why Google doesn’t show up on your credit card. It’s why modern Web companies don’t charge their users anything. And it’s why Yahoo gives away disk drive space. The question of infinite storage was not if but when. The winners made their stuff free first.

Traditionalists wring their hands about the “vaporization of value” and “demonetization” of entire industries. The success of craigslist’s free listings, for instance, has hurt the newspaper classified ad business. But that lost newspaper revenue is certainly not ending up in the craigslist coffers. In 2006, the site earned an estimated $40 million from the few things it charges for. That’s about 12 percent of the $326 million by which classified ad revenue declined that year.

But free is not quite as simple — or as stupid — as it sounds. Just because products are free doesn’t mean that someone, somewhere, isn’t making huge gobs of money. Google is the prime example of this. The monetary benefits of craigslist are enormous as well, but they’re distributed among its tens of thousands of users rather than funneled straight to Craig Newmark Inc. To follow the money, you have to shift from a basic view of a market as a matching of two parties — buyers and sellers — to a broader sense of an ecosystem with many parties, only some of which exchange cash.

The most common of the economies built around free is the three-party system. Here a third party pays to participate in a market created by a free exchange between the first two parties. Sound complicated? You’re probably experiencing it right now. It’s the basis of virtually all media.

In the traditional media model, a publisher provides a product free (or nearly free) to consumers, and advertisers pay to ride along. Radio is “free to air,” and so is much of television. Likewise, newspaper and magazine publishers don’t charge readers anything close to the actual cost of creating, printing, and distributing their products. They’re not selling papers and magazines to readers, they’re selling readers to advertisers. It’s a three-way market.

In a sense, what the Web represents is the extension of the media business model to industries of all sorts. This is not simply the notion that advertising will pay for everything. There are dozens of ways that media companies make money around free content, from selling information about consumers to brand licensing, “value-added” subscriptions, and direct ecommerce (see How-To Wiki for a complete list). Now an entire ecosystem of Web companies is growing up around the same set of models.

A TAXONOMY OF FREE
Between new ways companies have found to subsidize products and the falling cost of doing business in a digital age, the opportunities to adopt a free business model of some sort have never been greater. But which one? And how many are there? Probably hundreds, but the priceless economy can be broken down into six broad categories:

· “Freemium”
What’s free: Web software and services, some content. Free to whom: users of the basic version.

This term, coined by venture capitalist Fred Wilson, is the basis of the subscription model of media and is one of the most common Web business models. It can take a range of forms: varying tiers of content, from free to expensive, or a premium “pro” version of some site or software with more features than the free version (think Flickr and the $25-a-year Flickr Pro).

Again, this sounds familiar. Isn’t it just the free sample model found everywhere from perfume counters to street corners? Yes, but with a pretty significant twist. The traditional free sample is the promotional candy bar handout or the diapers mailed to a new mother. Since these samples have real costs, the manufacturer gives away only a tiny quantity — hoping to hook consumers and stimulate demand for many more.


But for digital products, this ratio of free to paid is reversed. A typical online site follows the 1 Percent Rule — 1 percent of users support all the rest. In the freemium model, that means for every user who pays for the premium version of the site, 99 others get the basic free version. The reason this works is that the cost of serving the 99 percent is close enough to zero to call it nothing.
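The arithmetic of the 1 Percent Rule is worth spelling out. The toy model below uses made-up numbers (a million users, a $25-a-year premium tier matching the Flickr Pro price mentioned earlier, and a guessed few cents of serving cost per user); only the 1 percent conversion rate comes from the text.

```python
# Toy freemium model: 1 percent of users pay, and everyone costs a
# near-zero amount to serve. All inputs except the 1 percent rate
# are illustrative assumptions.

def freemium_profit(users: int, conversion: float, price: float, cost_per_user: float) -> float:
    """Annual profit: paying users' revenue minus the cost of serving everyone."""
    revenue = users * conversion * price
    serving_cost = users * cost_per_user
    return revenue - serving_cost

# One million users, the 1 Percent Rule, a $25/year premium tier,
# and two cents a year to serve each user:
print(freemium_profit(1_000_000, 0.01, 25.0, 0.02))  # → 230000.0
```

The model stays profitable only because the serving-cost term is nearly zero; set `cost_per_user` to that of a physical free sample and the 99-free-to-1-paid ratio collapses.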

· Advertising
What’s free: content, services, software, and more. Free to whom: everyone.

Broadcast commercials and print display ads have given way to a blizzard of new Web-based ad formats: Yahoo’s pay-per-pageview banners, Google’s pay-per-click text ads, Amazon’s pay-per-transaction “affiliate ads,” and site sponsorships were just the start. Then came the next wave: paid inclusion in search results, paid listing in information services, and lead generation, where a third party pays for the names of people interested in a certain subject. Now companies are trying everything from product placement (PayPerPost) to pay-per-connection on social networks like Facebook. All of these approaches are based on the principle that free offerings build audiences with distinct interests and expressed needs that advertisers will pay to reach.

· Cross-subsidies
What’s free: any product that entices you to pay for something else. Free to whom: everyone willing to pay eventually, one way or another.

When Wal-Mart charges $15 for a new hit DVD, it’s a loss leader. The company is offering the DVD below cost to lure you into the store, where it hopes to sell you a washing machine at a profit. Expensive wine subsidizes food in a restaurant, and the original “free lunch” was a gratis meal for anyone who ordered at least one beer in San Francisco saloons in the late 1800s. In any package of products and services, from banking to mobile calling plans, the price of each individual component is often determined by psychology, not cost. Your cell phone company may not make money on your monthly minutes — it keeps that fee low because it knows that’s the first thing you look at when picking a carrier — but your monthly voicemail fee is pure profit.

On a busy corner in São Paulo, Brazil, street vendors pitch the latest “tecnobrega” CDs, including one by a hot band called Banda Calypso. Like CDs from most street vendors, these did not come from a record label. But neither are they illicit. They came directly from the band. Calypso distributes masters of its CDs and CD liner art to street vendor networks in towns it plans to tour, with full agreement that the vendors will copy the CDs, sell them, and keep all the money. That’s OK, because selling discs isn’t Calypso’s main source of income. The band is really in the performance business — and business is good. Traveling from town to town this way, preceded by a wave of supercheap CDs, Calypso has filled its shows and paid for a private jet.

The vendors generate literal street cred in each town Calypso visits, and its omnipresence in the urban soundscape means that it gets huge crowds to its rave/dj/concert events. Free music is just publicity for a far more lucrative tour business. Nobody thinks of this as piracy.

· Zero marginal cost
What’s free: things that can be distributed without an appreciable cost to anyone. Free to whom: everyone.

This describes nothing so well as online music. Between digital reproduction and peer-to-peer distribution, the real cost of distributing music has truly hit bottom. This is a case where the product has become free because of sheer economic gravity, with or without a business model. That force is so powerful that laws, guilt trips, DRM, and every other barrier to piracy the labels can think of have failed. Some artists give away their music online as a way of marketing concerts, merchandise, licensing, and other paid fare. But others have simply accepted that, for them, music is not a moneymaking business. It’s something they do for other reasons, from fun to creative expression. Which, of course, has always been true for most musicians anyway.

· Labor exchange
What’s free: Web sites and services. Free to whom: all users, since the act of using these sites and services actually creates something of value.

You can get free porn if you solve a few captchas, those scrambled text boxes used to block bots. What you’re actually doing is giving answers to a bot used by spammers to gain access to other sites — which is worth more to them than the bandwidth you’ll consume browsing images. Likewise for rating stories on Digg, voting on Yahoo Answers, or using Google’s 411 service (see “How Can Directory Assistance Be Free?”). In each case, the act of using the service creates something of value, either improving the service itself or creating information that can be useful somewhere else.

· Gift economy
What’s free: the whole enchilada, be it open source software or user-generated content. Free to whom: everyone.

From Freecycle (free secondhand goods for anyone who will take them away) to Wikipedia, we are discovering that money isn’t the only motivator. Altruism has always existed, but the Web gives it a platform where the actions of individuals can have global impact. In a sense, zero-cost distribution has turned sharing into an industry. In the monetary economy it all looks free — indeed, in the monetary economy it looks like unfair competition — but that says more about our shortsighted ways of measuring value than it does about the worth of what’s created.

THE ECONOMICS OF ABUNDANCE
Enabled by the miracle of abundance, digital economics has turned traditional economics upside down. Read your college textbook and it’s likely to define economics as “the social science of choice under scarcity.” The entire field is built on studying trade-offs and how they’re made. Milton Friedman himself reminded us time and time again that “there’s no such thing as a free lunch.”

But Friedman was wrong in two ways. First, a free lunch doesn’t necessarily mean the food is being given away or that you’ll pay for it later — it could just mean someone else is picking up the tab. Second, in the digital realm, as we’ve seen, the main feedstocks of the information economy — storage, processing power, and bandwidth — are getting cheaper by the day. Two of the main scarcity functions of traditional economics — the marginal costs of manufacturing and distribution — are rushing headlong to zip. It’s as if the restaurant suddenly didn’t have to pay any food or labor costs for that lunch.

Surely economics has something to say about that?

It does. The word is externalities, a concept that holds that money is not the only scarcity in the world. Chief among the others are your time and respect, two factors that we’ve always known about but have only recently been able to measure properly. The “attention economy” and “reputation economy” are too fuzzy to merit an academic department, but there’s something real at the heart of both. Thanks to Google, we now have a handy way to convert from reputation (PageRank) to attention (traffic) to money (ads). Anything you can consistently convert to cash is a form of currency itself, and Google plays the role of central banker for these new economies.

There is, presumably, a limited supply of reputation and attention in the world at any point in time. These are the new scarcities — and the world of free exists mostly to acquire these valuable assets for the sake of a business model to be identified later. Free shifts the economy from a focus on only that which can be quantified in dollars and cents to a more realistic accounting of all the things we truly value today.

FREE CHANGES EVERYTHING
Between digital economics and the wholesale embrace of King Gillette’s experiment in price shifting, we are entering an era when free will be seen as the norm, not an anomaly. How big a deal is that? Well, consider this analogy: In 1954, at the dawn of nuclear power, Lewis Strauss, head of the Atomic Energy Commission, promised that we were entering an age when electricity would be “too cheap to meter.” Needless to say, that didn’t happen, mostly because the risks of nuclear energy hugely increased its costs. But what if he’d been right? What if electricity had in fact become virtually free?

The answer is that everything electricity touched — which is to say just about everything — would have been transformed. Rather than balance electricity against other energy sources, we’d use electricity for as many things as we could — we’d waste it, in fact, because it would be too cheap to worry about.

All buildings would be electrically heated, never mind the thermal conversion rate. We’d all be driving electric cars (free electricity would be incentive enough to develop the efficient battery technology to store it). Massive desalination plants would turn seawater into all the freshwater anyone could want, irrigating vast inland swaths and turning deserts into fertile acres, many of them making biofuels as a cheaper store of energy than batteries. Relative to free electrons, fossil fuels would be seen as ludicrously expensive and dirty, and so carbon emissions would plummet. The phrase “global warming” would have never entered the language.

Today it’s digital technologies, not electricity, that have become too cheap to meter. It took decades to shake off the assumption that computing was supposed to be rationed for the few, and we’re only now starting to liberate bandwidth and storage from the same poverty of imagination. But a generation raised on the free Web is coming of age, and they will find entirely new ways to embrace waste, transforming the world in the process. Because free is what you want — and free, increasingly, is what you’re going to get.

Chris Anderson (canderson@wired.com) is the editor in chief of Wired and author of The Long Tail. His next book, FREE, will be published in 2009 by Hyperion.


The Long Tail

Issue 12.10 – October 2004

The Long Tail 

Forget squeezing millions from a few megahits at the top of the charts. The future of entertainment is in the millions of niche markets at the shallow end of the bitstream.
By Chris Anderson

Chris is expanding this article into a book, due out in May 2006. Follow his continuing coverage of the subject on The Long Tail blog.


In 1988, a British mountain climber named Joe Simpson wrote a book called Touching the Void, a harrowing account of near death in the Peruvian Andes. It got good reviews but was only a modest success, and it was soon forgotten. Then, a decade later, a strange thing happened. Jon Krakauer wrote Into Thin Air, another book about a mountain-climbing tragedy, which became a publishing sensation. Suddenly Touching the Void started to sell again.

Random House rushed out a new edition to keep up with demand. Booksellers began to promote it next to their Into Thin Air displays, and sales rose further. A revised paperback edition, which came out in January, spent 14 weeks on the New York Times bestseller list. That same month, IFC Films released a docudrama of the story to critical acclaim. Now Touching the Void outsells Into Thin Air more than two to one.

What happened? In short, Amazon.com recommendations. The online bookseller’s software noted patterns in buying behavior and suggested that readers who liked Into Thin Air would also like Touching the Void. People took the suggestion, agreed wholeheartedly, and wrote rhapsodic reviews. More sales, more algorithm-fueled recommendations, and the positive feedback loop kicked in.

Particularly notable is that when Krakauer’s book hit shelves, Simpson’s was nearly out of print. A few years ago, readers of Krakauer would never even have learned about Simpson’s book – and if they had, they wouldn’t have been able to find it. Amazon changed that. It created the Touching the Void phenomenon by combining infinite shelf space with real-time information about buying trends and public opinion. The result: rising demand for an obscure book.
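The engine behind this story is, at its core, item-to-item collaborative filtering: recommend whatever co-occurs most often with what you just bought. A minimal sketch in Python, using made-up purchase baskets rather than Amazon’s actual data or algorithm:

```python
from collections import defaultdict
from itertools import permutations

# Hypothetical purchase baskets (illustrative titles, not real sales data).
orders = [
    {"Into Thin Air", "Touching the Void"},
    {"Into Thin Air", "Touching the Void", "Annapurna"},
    {"Into Thin Air", "Eiger Dreams"},
    {"Touching the Void", "Annapurna"},
]

# Count how often each ordered pair of titles shares a basket.
co_counts = defaultdict(lambda: defaultdict(int))
for basket in orders:
    for a, b in permutations(basket, 2):
        co_counts[a][b] += 1

def also_bought(title, n=3):
    """Titles most frequently bought alongside `title`."""
    ranked = sorted(co_counts[title].items(), key=lambda kv: -kv[1])
    return [t for t, _ in ranked[:n]]

print(also_bought("Into Thin Air"))  # "Touching the Void" ranks first
```

Real systems add normalization so blockbusters don’t dominate every list, but the feedback loop is exactly this: each purchase strengthens a co-occurrence link, which drives more recommendations, which drive more purchases.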

This is not just a virtue of online booksellers; it is an example of an entirely new economic model for the media and entertainment industries, one that is just beginning to show its power. Unlimited selection is revealing truths about what consumers want and how they want to get it in service after service, from DVDs at Netflix to music videos on Yahoo! Launch to songs in the iTunes Music Store and Rhapsody. People are going deep into the catalog, down the long, long list of available titles, far past what’s available at Blockbuster Video, Tower Records, and Barnes & Noble. And the more they find, the more they like. As they wander further from the beaten path, they discover their taste is not as mainstream as they thought (or as they had been led to believe by marketing, a lack of alternatives, and a hit-driven culture).

An analysis of the sales data and trends from these services and others like them shows that the emerging digital entertainment economy is going to be radically different from today’s mass market. If the 20th-century entertainment industry was about hits, the 21st will be equally about misses.

For too long we’ve been suffering the tyranny of lowest-common-denominator fare, subjected to brain-dead summer blockbusters and manufactured pop. Why? Economics. Many of our assumptions about popular taste are actually artifacts of poor supply-and-demand matching – a market response to inefficient distribution.

The main problem, if that’s the word, is that we live in the physical world and, until recently, most of our entertainment media did, too. But that world puts two dramatic limitations on our entertainment.

The first is the need to find local audiences. An average movie theater will not show a film unless it can attract at least 1,500 people over a two-week run; that’s essentially the rent for a screen. An average record store needs to sell at least two copies of a CD per year to make it worth carrying; that’s the rent for a half inch of shelf space. And so on for DVD rental shops, videogame stores, booksellers, and newsstands.

In each case, retailers will carry only content that can generate sufficient demand to earn its keep. But each can pull only from a limited local population – perhaps a 10-mile radius for a typical movie theater, less than that for music and bookstores, and even less (just a mile or two) for video rental shops. It’s not enough for a great documentary to have a potential national audience of half a million; what matters is how many it has in the northern part of Rockville, Maryland, and among the mall shoppers of Walnut Creek, California.

There is plenty of great entertainment with potentially large, even rapturous, national audiences that cannot clear that bar. For instance, The Triplets of Belleville, a critically acclaimed film that was nominated for the best animated feature Oscar this year, opened on just six screens nationwide. An even more striking example is the plight of Bollywood in America. Each year, India’s film industry puts out more than 800 feature films. There are an estimated 1.7 million Indians in the US. Yet the top-rated (according to Amazon’s Internet Movie Database) Hindi-language film, Lagaan: Once Upon a Time in India, opened on just two screens, and it was one of only a handful of Indian films to get any US distribution at all. In the tyranny of physical space, an audience too thinly spread is the same as no audience at all.

The other constraint of the physical world is physics itself. The radio spectrum can carry only so many stations, and a coaxial cable so many TV channels. And, of course, there are only 24 hours a day of programming. The curse of broadcast technologies is that they are profligate users of limited resources. The result is yet another instance of having to aggregate large audiences in one geographic area – another high bar, above which only a fraction of potential content rises.

The past century of entertainment has offered an easy solution to these constraints. Hits fill theaters, fly off shelves, and keep listeners and viewers from touching their dials and remotes. Nothing wrong with that; indeed, sociologists will tell you that hits are hardwired into human psychology, the combinatorial effect of conformity and word of mouth. And to be sure, a healthy share of hits earn their place: Great songs, movies, and books attract big, broad audiences.

But most of us want more than just hits. Everyone’s taste departs from the mainstream somewhere, and the more we explore alternatives, the more we’re drawn to them. Unfortunately, in recent decades such alternatives have been pushed to the fringes by pumped-up marketing vehicles built to order by industries that desperately need them.


Hit-driven economics is a creation of an age without enough room to carry everything for everybody. Not enough shelf space for all the CDs, DVDs, and games produced. Not enough screens to show all the available movies. Not enough channels to broadcast all the TV programs, not enough radio waves to play all the music created, and not enough hours in the day to squeeze everything out through either of those sets of slots.

This is the world of scarcity. Now, with online distribution and retail, we are entering a world of abundance. And the differences are profound.

To see how, meet Robbie Vann-Adibé, the CEO of Ecast, a digital jukebox company whose barroom players offer more than 150,000 tracks – and some surprising usage statistics. He hints at them with a question that visitors invariably get wrong: “What percentage of the top 10,000 titles in any online media store (Netflix, iTunes, Amazon, or any other) will rent or sell at least once a month?”

Most people guess 20 percent, and for good reason: We’ve been trained to think that way. The 80-20 rule, also known as Pareto’s principle (after Vilfredo Pareto, an Italian economist who devised the concept in 1906), is all around us. Only 20 percent of major studio films will be hits. Same for TV shows, games, and mass-market books – 20 percent across the board. The odds are even worse for major-label CDs, where fewer than 10 percent are profitable, according to the Recording Industry Association of America.

But the right answer, says Vann-Adibé, is 99 percent. There is demand for nearly every one of those top 10,000 tracks. He sees it in his own jukebox statistics; each month, thousands of people put in their dollars for songs that no traditional jukebox anywhere has ever carried.

People get Vann-Adibé’s question wrong because the answer is counterintuitive in two ways. The first is that we forget that the 20 percent rule in the entertainment industry is about hits, not sales of any sort. We’re stuck in a hit-driven mindset – we think that if something isn’t a hit, it won’t make money and so won’t return the cost of its production. We assume, in other words, that only hits deserve to exist. But Vann-Adibé, like executives at iTunes, Amazon, and Netflix, has discovered that the “misses” usually make money, too. And because there are so many more of them, that money can add up quickly to a huge new market.

With no shelf space to pay for and, in the case of purely digital services like iTunes, no manufacturing costs and hardly any distribution fees, a miss sold is just another sale, with the same margins as a hit. A hit and a miss are on equal economic footing, both just entries in a database called up on demand, both equally worthy of being carried. Suddenly, popularity no longer has a monopoly on profitability.

The second reason for the wrong answer is that the industry has a poor sense of what people want. Indeed, we have a poor sense of what we want. We assume, for instance, that there is little demand for the stuff that isn’t carried by Wal-Mart and other major retailers; if people wanted it, surely it would be sold. The rest, the bottom 80 percent, must be subcommercial at best.

But as egalitarian as Wal-Mart may seem, it is actually extraordinarily elitist. Wal-Mart must sell at least 100,000 copies of a CD to cover its retail overhead and make a sufficient profit; less than 1 percent of CDs do that kind of volume. What about the 60,000 people who would like to buy the latest Fountains of Wayne or Crystal Method album, or any other nonmainstream fare? They have to go somewhere else. Bookstores, the megaplex, radio, and network TV can be equally demanding. We equate mass market with quality and demand, when in fact it often just represents familiarity, savvy advertising, and broad if somewhat shallow appeal. What do we really want? We’re only just discovering, but it clearly starts with more.

To get a sense of our true taste, unfiltered by the economics of scarcity, look at Rhapsody, a subscription-based streaming music service (owned by RealNetworks) that currently offers more than 735,000 tracks.

Chart Rhapsody’s monthly statistics and you get a “power law” demand curve that looks much like any record store’s, with huge appeal for the top tracks, tailing off quickly for less popular ones. But a really interesting thing happens once you dig below the top 40,000 tracks, which is about the amount of the fluid inventory (the albums carried that will eventually be sold) of the average real-world record store. Here, the Wal-Marts of the world go to zero – either they don’t carry any more CDs, or the few potential local takers for such fringy fare never find it or never even enter the store.

The Rhapsody demand, however, keeps going. Not only is every one of Rhapsody’s top 100,000 tracks streamed at least once each month, the same is true for its top 200,000, top 300,000, and top 400,000. As fast as Rhapsody adds tracks to its library, those songs find an audience, even if it’s just a few people a month, somewhere in the country.
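That shape can be made concrete. Assuming demand at popularity rank r falls off as a power law (demand proportional to 1/r^s; the exponent s = 1 below is an illustrative assumption, not Rhapsody’s actual curve), a few lines of Python estimate how much of total demand lives beyond a physical store’s shelf:

```python
def tail_share(n_titles, top_k, s=1.0):
    """Fraction of total demand falling outside the top_k titles,
    for a power-law demand curve demand(r) = 1 / r**s."""
    demand = [1 / r**s for r in range(1, n_titles + 1)]
    return sum(demand[top_k:]) / sum(demand)

# A 735,000-track catalog vs. a store stocking only the top 40,000:
share = tail_share(735_000, 40_000)
print(f"{share:.0%} of demand lies beyond the store's shelf")
```

Under these assumptions roughly a fifth of all demand sits in the tail the store never carries; a heavier tail (smaller s) pushes that share higher still.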

This is the Long Tail.

You can find everything out there on the Long Tail. There’s the back catalog, older albums still fondly remembered by longtime fans or rediscovered by new ones. There are live tracks, B-sides, remixes, even (gasp) covers. There are niches by the thousands, genre within genre within genre: Imagine an entire Tower Records devoted to ’80s hair bands or ambient dub. There are foreign bands, once priced out of reach in the Import aisle, and obscure bands on even more obscure labels, many of which don’t have the distribution clout to get into Tower at all.

Oh sure, there’s also a lot of crap. But there’s a lot of crap hiding between the radio tracks on hit albums, too. People have to skip over it on CDs, but they can more easily avoid it online, since the collaborative filters typically won’t steer you to it. Unlike the CD, where each crap track costs perhaps one-twelfth of a $15 album price, online it just sits harmlessly on some server, ignored in a market that sells by the song and evaluates tracks on their own merit.


What’s really amazing about the Long Tail is the sheer size of it. Combine enough nonhits on the Long Tail and you’ve got a market bigger than the hits. Take books: The average Barnes & Noble carries 130,000 titles. Yet more than half of Amazon’s book sales come from outside its top 130,000 titles. Consider the implication: If the Amazon statistics are any guide, the market for books that are not even sold in the average bookstore is larger than the market for those that are (see “Anatomy of the Long Tail“). In other words, the potential book market may be twice as big as it appears to be, if only we can get over the economics of scarcity. Venture capitalist and former music industry consultant Kevin Laws puts it this way: “The biggest money is in the smallest sales.”

The same is true for all other aspects of the entertainment business, to one degree or another. Just compare online and offline businesses: The average Blockbuster carries fewer than 3,000 DVDs. Yet a fifth of Netflix rentals are outside its top 3,000 titles. Rhapsody streams more songs each month beyond its top 10,000 than it does its top 10,000. In each case, the market that lies outside the reach of the physical retailer is big and getting bigger.

When you think about it, most successful businesses on the Internet are about aggregating the Long Tail in one way or another. Google, for instance, makes most of its money off small advertisers (the long tail of advertising), and eBay is mostly tail as well – niche and one-off products. By overcoming the limitations of geography and scale, just as Rhapsody and Amazon have, Google and eBay have discovered new markets and expanded existing ones.

This is the power of the Long Tail. The companies at the vanguard of it are showing the way with three big lessons. Call them the new rules for the new entertainment economy.

Rule 1: Make everything available

If you love documentaries, Blockbuster is not for you. Nor is any other video store – there are too many documentaries, and they sell too poorly to justify stocking more than a few dozen of them on physical shelves. Instead, you’ll want to join Netflix, which offers more than a thousand documentaries – because it can. Such profligacy is giving a boost to the documentary business; last year, Netflix accounted for half of all US rental revenue for Capturing the Friedmans, a documentary about a family destroyed by allegations of pedophilia.

Netflix CEO Reed Hastings, who’s something of a documentary buff, took this newfound clout to PBS, which had produced Daughter From Danang, a documentary about the children of US soldiers and Vietnamese women. In 2002, the film was nominated for an Oscar and was named best documentary at Sundance, but PBS had no plans to release it on DVD. Hastings offered to handle the manufacturing and distribution if PBS would make it available as a Netflix exclusive. Now Daughter From Danang consistently ranks in the top 15 on Netflix documentary charts. That amounts to a market of tens of thousands of documentary renters that did not otherwise exist.

There are any number of equally attractive genres and subgenres neglected by the traditional DVD channels: foreign films, anime, independent movies, British television dramas, old American TV sitcoms. These underserved markets make up a big chunk of Netflix rentals. Bollywood alone accounts for nearly 100,000 rentals each month. The availability of offbeat content drives new customers to Netflix – and anything that cuts the cost of customer acquisition is gold for a subscription business. Thus the company’s first lesson: Embrace niches.

Netflix has made a good business out of what’s unprofitable fare in movie theaters and video rental shops because it can aggregate dispersed audiences. It doesn’t matter if the several thousand people who rent Doctor Who episodes each month are in one city or spread, one per town, across the country – the economics are the same to Netflix. It has, in short, broken the tyranny of physical space. What matters is not where customers are, or even how many of them are seeking a particular title, but only that some number of them exist, anywhere.

As a result, almost anything is worth offering on the off chance it will find a buyer. This is the opposite of the way the entertainment industry now thinks. Today, the decision about whether or when to release an old film on DVD is based on estimates of demand, availability of extras such as commentary and additional material, and marketing opportunities such as anniversaries, awards, and generational windows (Disney briefly rereleases its classics every 10 years or so as a new wave of kids come of age). It’s a high bar, which is why only a fraction of movies ever made are available on DVD.

That model may make sense for the true classics, but it’s way too much fuss for everything else. The Long Tail approach, by contrast, is to simply dump huge chunks of the archive onto bare-bones DVDs, without any extras or marketing. Call it the Silver Series and charge half the price. Same for independent films. This year, nearly 6,000 movies were submitted to the Sundance Film Festival. Of those, 255 were accepted, and just two dozen have been picked up for distribution; to see the others, you had to be there. Why not release all 255 on DVD each year as part of a discount Sundance Series? In a Long Tail economy, it’s more expensive to evaluate than to release. Just do it!

The same is true for the music industry. It should be securing the rights to release all the titles in all the back catalogs as quickly as it can – thoughtlessly, automatically, and at industrial scale. (This is one of those rare moments where the world needs more lawyers, not fewer.) So too for videogames. Retro gaming, including simulators of classic game consoles that run on modern PCs, is a growing phenomenon driven by the nostalgia of the first joystick generation. Game publishers could release every title as a 99-cent download three years after its release – no support, no guarantees, no packaging.


All this, of course, applies equally to books. Already, we’re seeing a blurring of the line between in and out of print. Amazon and other networks of used booksellers have made it almost as easy to find and buy a second-hand book as it is a new one. By divorcing bookselling from geography, these networks create a liquid market at low volume, dramatically increasing both their own business and the overall demand for used books. Combine that with the rapidly dropping costs of print-on-demand technologies and it’s clear why any book should always be available. Indeed, it is a fair bet that children today will grow up never knowing the meaning of out of print.

Rule 2: Cut the price in half. Now lower it.

Thanks to the success of Apple’s iTunes, we now have a standard price for a downloaded track: 99 cents. But is it the right one?

Ask the labels and they’ll tell you it’s too low: Even though 99 cents per track works out to about the same price as a CD, most consumers just buy a track or two from an album online, rather than the full CD. In effect, online music has seen a return to the singles-driven business of the 1950s. So from a label perspective, consumers should pay more for the privilege of purchasing à la carte to compensate for the lost album revenue.

Ask consumers, on the other hand, and they’ll tell you that 99 cents is too high. It is, for starters, 99 cents more than Kazaa. But piracy aside, 99 cents violates our innate sense of economic justice: If it clearly costs less for a record label to deliver a song online, with no packaging, manufacturing, distribution, or shelf space overheads, why shouldn’t the price be less, too?

Surprisingly enough, there’s been little good economic analysis on what the right price for online music should be. The main reason for this is that pricing isn’t set by the market today but by the record label demi-cartel. Record companies charge a wholesale price of around 65 cents per track, leaving little room for price experimentation by the retailers.

That wholesale price is set to roughly match the price of CDs, to avoid dreaded “channel conflict.” The labels fear that if they price online music lower, their CD retailers (still the vast majority of the business) will revolt or, more likely, go out of business even more quickly than they already are. In either case, it would be a serious disruption of the status quo, which terrifies the already spooked record companies. No wonder they’re doing price calculations with an eye on the downsides in their traditional CD business rather than the upside in their new online business.

But what if the record labels stopped playing defense? A brave new look at the economics of music would calculate what it really costs to simply put a song on an iTunes server and adjust pricing accordingly. The results are surprising.

Take away the unnecessary costs of the retail channel – CD manufacturing, distribution, and retail overheads. That leaves the costs of finding, making, and marketing music. Keep them as they are, to ensure that the people on the creative and label side of the business make as much as they currently do. For a popular album that sells 300,000 copies, the creative costs work out to about $7.50 per disc, or around 60 cents a track. Add to that the actual cost of delivering music online, which is mostly the cost of building and maintaining the online service rather than the negligible storage and bandwidth costs. Current price tag: around 17 cents a track. By this calculation, hit music is overpriced by 25 percent online – it should cost just 79 cents a track, reflecting the savings of digital delivery.
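That back-of-the-envelope math can be checked directly. The album cost and delivery figures are the article’s; the 12.5 tracks per album is our inference from $7.50 per disc at 60 cents per track:

```python
# Per-track economics of a hit album, using the article's figures.
creative_cost_per_disc = 7.50   # finding, making, marketing (300,000-copy album)
tracks_per_album = 12.5         # inferred: $7.50 per disc / $0.60 per track
delivery_per_track = 0.17       # building and running the online service

creative_per_track = creative_cost_per_disc / tracks_per_album
cost_per_track = creative_per_track + delivery_per_track
print(f"full cost per track: ${cost_per_track:.2f}")  # ≈ $0.77
```

The sum lands at about 77 cents, in line with the roughly 79-cent figure in the text (99 cents is 25 percent above it).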

Putting channel conflict aside for the moment, if the incremental cost of making content that was originally produced for physical distribution available online is low, the price should be, too. Price according to digital costs, not physical ones.

All this good news for consumers doesn’t have to hurt the industry. When you lower prices, people tend to buy more. Last year, Rhapsody did an experiment in elastic demand that suggested it could be a lot more. For a brief period, the service offered tracks at 99 cents, 79 cents, and 49 cents. Although the 49-cent tracks were only half the price of the 99-cent tracks, Rhapsody sold three times as many of them.

Since the record companies still charged 65 cents a track – and Rhapsody paid another 8 cents per track to the copyright-holding publishers – Rhapsody lost money on that experiment (but, as the old joke goes, made it up in volume). Yet much of the content on the Long Tail is older material that has already made back its money (or been written off for failing to do so): music from bands that had little record company investment and was thus cheap to make, or live recordings, remixes, and other material that came at low cost.
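A few lines of arithmetic show why the experiment lost money despite tripled volume. Volumes are indexed to the 99-cent baseline; the wholesale and publishing costs are the article’s:

```python
wholesale = 0.65      # per-track payment to the record company
publishing = 0.08     # per-track payment to copyright-holding publishers
unit_cost = wholesale + publishing  # 73 cents paid out per track sold

def total_margin(price, relative_volume):
    """Margin per baseline sale at a given price point and volume multiplier."""
    return (price - unit_cost) * relative_volume

print(f"99-cent tracks: {total_margin(0.99, 1.0):+.2f}")  # a modest profit
print(f"49-cent tracks: {total_margin(0.49, 3.0):+.2f}")  # a loss, even at 3x volume
```

With a 73-cent cost floor, no retail price below it can be rescued by volume – which is why any tiered-pricing scheme has to start with the labels lowering the wholesale price, not with the stores discounting.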

Such “misses” cost less to make available than hits, so why not charge even less for them? Imagine if prices declined the further you went down the Tail, with popularity (the market) effectively dictating pricing. All it would take is for the labels to lower the wholesale price for the vast majority of their content not in heavy rotation; even a two- or three-tiered pricing structure could work wonders. And because so much of that content is not available in record stores, the risk of channel conflict is greatly diminished. The lesson: Pull consumers down the tail with lower prices.

How low should the labels go? The answer comes by examining the psychology of the music consumer. The choice facing fans is not how many songs to buy from iTunes and Rhapsody, but how many songs to buy rather than download for free from Kazaa and other peer-to-peer networks. Intuitively, consumers know that free music is not really free: Aside from any legal risks, it’s a time-consuming hassle to build a collection that way. Labeling is inconsistent, quality varies, and an estimated 30 percent of tracks are defective in one way or another. As Steve Jobs put it at the iTunes Music Store launch, you may save a little money downloading from Kazaa, but “you’re working for under minimum wage.” And what’s true for music is doubly true for movies and games, where the quality of pirated products can be even more dismal, viruses are a risk, and downloads take so much longer.


So free has a cost: the psychological value of convenience. This is the “not worth it” moment where the wallet opens. The exact amount is an impossible calculus involving the bank balance of the average college student multiplied by their available free time. But imagine that for music, at least, it’s around 20 cents a track. That, in effect, is the dividing line between the commercial world of the Long Tail and the underground. Both worlds will continue to exist in parallel, but it’s crucial for Long Tail thinkers to exploit the opportunities between 20 and 99 cents to maximize their share. By offering fair pricing, ease of use, and consistent quality, you can compete with free.

Perhaps the best way to do that is to stop charging for individual tracks at all. Danny Stein, whose private equity firm owns eMusic, thinks the future of the business is to move away from the ownership model entirely. With ubiquitous broadband, both wired and wireless, more consumers will turn to the celestial jukebox of music services that offer every track ever made, playable on demand. Some of those tracks will be free to listeners and advertising-supported, like radio. Others, like eMusic and Rhapsody, will be subscription services. Today, digital music economics are dominated by the iPod, with its notion of a paid-up library of personal tracks. But as the networks improve, the comparative economic advantages of unlimited streamed music, either financed by advertising or a flat fee (infinite choice for $9.99 a month), may shift the market that way. And drive another nail in the coffin of the retail music model.

Rule 3: Help me find it

In 1997, an entrepreneur named Michael Robertson started what looked like a classic Long Tail business. Called MP3.com, it let anyone upload music files that would be available to all. The idea was the service would bypass the record labels, allowing artists to connect directly to listeners. MP3.com would make its money in fees paid by bands to have their music promoted on the site. The tyranny of the labels would be broken, and a thousand flowers would bloom.

Putting aside the fact that many people actually used the service to illegally upload and share commercial tracks, leading the labels to sue MP3.com, the model failed at its intended purpose, too. Struggling bands did not, as a rule, find new audiences, and independent music was not transformed. Indeed, MP3.com got a reputation for being exactly what it was: an undifferentiated mass of mostly bad music that deserved its obscurity.

The problem with MP3.com was that it was only Long Tail. It didn’t have license agreements with the labels to offer mainstream fare or much popular commercial music at all. Therefore, there was no familiar point of entry for consumers, no known quantity from which further exploring could begin.

Offering only hits is no better. Think of the struggling video-on-demand services of the cable companies. Or think of Movielink, the feeble video download service run by the studios. Due to overcontrolling providers and high costs, they suffer from limited content: in most cases just a few hundred recent releases. There’s not enough choice to change consumer behavior, to become a real force in the entertainment economy.

By contrast, the success of Netflix, Amazon, and the commercial music services shows that you need both ends of the curve. Their huge libraries of less-mainstream fare set them apart, but hits still matter in attracting consumers in the first place. Great Long Tail businesses can then guide consumers further afield by following the contours of their likes and dislikes, easing their exploration of the unknown.

For instance, the front screen of Rhapsody features Britney Spears, unsurprisingly. Next to the listings of her work is a box of “similar artists.” Among them is Pink. If you click on that and are pleased with what you hear, you may do the same for Pink’s similar artists, which include No Doubt. And on No Doubt’s page, the list includes a few “followers” and “influencers,” the last of which includes the Selecter, a 1980s ska band from Coventry, England. In three clicks, Rhapsody may have enticed a Britney Spears fan to try an album that can hardly be found in a record store.

Rhapsody does this with a combination of human editors and genre guides. But Netflix, where 60 percent of rentals come from recommendations, and Amazon do this with collaborative filtering, which uses the browsing and purchasing patterns of users to guide those who follow them (“Customers who bought this also bought …”). In each, the aim is the same: Use recommendations to drive demand down the Long Tail.
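The "customers who bought this also bought" mechanic is, at its simplest, co-occurrence counting over purchase histories. A toy sketch of that idea (the data and function are hypothetical, and real systems add normalization and scale tricks on top):

```python
from collections import defaultdict

def also_bought(purchases, item, top_n=3):
    """Rank items most often co-purchased with `item`.

    purchases: list of per-customer baskets (lists of item names).
    Returns up to top_n (item, count) pairs, most frequent first.
    """
    counts = defaultdict(int)
    for basket in purchases:
        if item in basket:
            for other in basket:
                if other != item:
                    counts[other] += 1
    return sorted(counts.items(), key=lambda kv: -kv[1])[:top_n]

# Toy purchase histories echoing the Rhapsody example above:
histories = [
    ["Britney Spears", "Pink"],
    ["Britney Spears", "Pink", "No Doubt"],
    ["Pink", "No Doubt", "The Selecter"],
    ["Britney Spears", "No Doubt"],
]
print(also_bought(histories, "Britney Spears"))
# -> [('Pink', 2), ('No Doubt', 2)]
```

Each recommendation step then starts a new query from the item just clicked, which is exactly how a hit like Britney Spears can lead, basket by basket, down the tail to an obscure ska band.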

This is the difference between push and pull, between broadcast and personalized taste. Long Tail businesses can treat consumers as individuals, offering mass customization as an alternative to mass-market fare.

The advantages are spread widely. For the entertainment industry itself, recommendations are a remarkably efficient form of marketing, allowing smaller films and less-mainstream music to find an audience. For consumers, the improved signal-to-noise ratio that comes from following a good recommendation encourages exploration and can reawaken a passion for music and film, potentially creating a far larger entertainment market overall. (The average Netflix customer rents seven DVDs a month, three times the rate at brick-and-mortar stores.) And the cultural benefit of all of this is much more diversity, reversing the blanding effects of a century of distribution scarcity and ending the tyranny of the hit.

Such is the power of the Long Tail. Its time has come.

Chris Anderson (canderson@wiredmag.com) is Wired's editor in chief and writes the blog The Long Tail.
