
Posts Tagged ‘SOCIAL’

Bjorn Martinoff

Senior Executive Coach, Global Trainer & Consultant (bjorn@fortune100coach.com)


What kind of skills does one need to build a Web 2.0 site?

My dear friends and colleagues,

I am looking to build a website using Web 2.0 features, and I am looking to outsource the work. What is the least expensive way of doing this, and what skills does the person (or people) I outsource to need for the project to be successful?

I was thinking of hiring students in the Philippines, and I look forward to your thoughts on this subject.

regards,

Bjorn Martinoff
Managing Consultant & Senior Global Executive Master Coach
http://www.fortune100coach.com

Please feel free to connect with me at bjorn@fortune100coach.com

posted 7 months ago in Web Development | Closed


Good Answers (5)


Frank Guerino

TraverseIT: Chairman, CEO and Founder



This was selected as Best Answer

Hello Bjorn,

Web 2.0 is not a set of features, tools, or technologies. Web 2.0 is a set of “traits” that all successful web-based applications share.

The definition of Web 2.0, by its originators (O’Reilly & MediaLive International), can be found at: http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html

There are 7 fundamental traits that define Web 2.0:

1) Using the web as a backbone infrastructure for enterprise-class solutions
2) Replacing static publishing and page views by individuals with dynamic collaboration by groups
3) Harnessing collective intelligence
4) Replacing traditional HTML with dynamic web pages and dynamic links
5) Replacing static content with transactional databases
6) Replacing traditional software development, deployment, and maintenance with managed solutions
7) Rich user experiences

Examples of Companies/Products that are Web 2.0 compliant include but are not limited to:

– (Any B2C site you shop from, on the web)
– Amazon
– Business Engine Networks (BEN)
– CollabNet
– eBay
– Google
– Salesforce.com
– TraverseIT (our own company)
– Yahoo

To make your application Web 2.0 compliant, it doesn’t matter what tools, technologies, languages, frameworks, or architecture you use. It also doesn’t matter what operational features your application has. What does matter is that you meet the above 7 requirements.

Now, this being said, you can build a fully Web 2.0 compliant application but that doesn’t mean your application will be valuable/useful to its end users. Building something of value is a totally different issue.

Anyhow, I hope this helps.

My Best,

Frank Guerino, CEO
TraverseIT
Frank.Guerino@TraverseIT.com
http://www.TraverseIT.com


posted 7 months ago


New York Times 

Entrepreneurs See a Web Guided by Common Sense

Published: November 12, 2006
SAN FRANCISCO, Nov. 11 — From the billions of documents that form the World Wide Web and the links that weave them together, computer scientists and a growing collection of start-up companies are finding new ways to mine human intelligence.

Their goal is to add a layer of meaning on top of the existing Web that would make it less of a catalog and more of a guide — and even provide the foundation for systems that can reason in a human fashion. That level of artificial intelligence, with machines doing the thinking instead of simply following commands, has eluded researchers for more than half a century.

Referred to as Web 3.0, the effort is in its infancy, and the very idea has given rise to skeptics who have called it an unobtainable vision. But the underlying technologies are rapidly gaining adherents, at big companies like I.B.M. and Google as well as small ones. Their projects often center on simple, practical uses, from producing vacation recommendations to predicting the next hit song.

But in the future, more powerful systems could act as personal advisers in areas as diverse as financial planning, with an intelligent system mapping out a retirement plan for a couple, for instance, or educational consulting, with the Web helping a high school student identify the right college.

The projects aimed at creating Web 3.0 all take advantage of increasingly powerful computers that can quickly and completely scour the Web.

“I call it the World Wide Database,” said Nova Spivack, the founder of a start-up firm whose technology detects relationships between nuggets of information by mining the World Wide Web. “We are going from a Web of connected documents to a Web of connected data.”

Web 2.0, which describes the ability to seamlessly connect applications (like geographic mapping) and services (like photo-sharing) over the Internet, has in recent months become the focus of dot-com-style hype in Silicon Valley. But commercial interest in Web 3.0 — or the “semantic Web,” for the idea of adding meaning — is only now emerging.

The classic example of the Web 2.0 era is the “mash-up” — for example, connecting a rental-housing Web site with Google Maps to create a new, more useful service that automatically shows the location of each rental listing.
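
In code, a mash-up of this kind is often little more than a join between two independent data sources. Here is a minimal sketch of the rental-listings idea, using hypothetical stand-in data in place of a real listings feed and a real mapping/geocoding API:

```python
# Toy mash-up: join rental listings with a geocoder so a map widget
# could plot each listing. Both data sources are hypothetical stand-ins
# for real services (a listings API and a mapping/geocoding API).

listings = [
    {"address": "123 Oak St", "rent": 1200},
    {"address": "456 Pine Ave", "rent": 950},
]

def geocode(address):
    # Stand-in for a real geocoding request to a mapping service.
    fake_coords = {
        "123 Oak St": (37.77, -122.42),
        "456 Pine Ave": (37.80, -122.27),
    }
    return fake_coords[address]

def mashup(listings):
    """Annotate each listing with coordinates for display on a map."""
    return [dict(entry, coords=geocode(entry["address"])) for entry in listings]

for entry in mashup(listings):
    print(entry["address"], entry["coords"])
```

The point of the sketch is that neither data source knows about the other; the new service exists entirely in the combination.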

In contrast, the Holy Grail for developers of the semantic Web is to build a system that can give a reasonable and complete response to a simple question like: “I’m looking for a warm place to vacation and I have a budget of $3,000. Oh, and I have an 11-year-old child.”

Under today’s system, such a query can lead to hours of sifting — through lists of flights, hotel, car rentals — and the options are often at odds with one another. Under Web 3.0, the same search would ideally call up a complete vacation package that was planned as meticulously as if it had been assembled by a human travel agent.

How such systems will be built, and how soon they will begin providing meaningful answers, is now a matter of vigorous debate both among academic researchers and commercial technologists. Some are focused on creating a vast new structure to supplant the existing Web; others are developing pragmatic tools that extract meaning from the existing Web.

But all agree that if such systems emerge, they will instantly become more commercially valuable than today’s search engines, which return thousands or even millions of documents but as a rule do not answer questions directly.

Underscoring the potential of mining human knowledge is an extraordinarily profitable example: the basic technology that made Google possible, known as “Page Rank,” systematically exploits human knowledge and decisions about what is significant to order search results. (It interprets a link from one page to another as a “vote,” but votes cast by pages considered popular are weighted more heavily.)
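
The weighted-vote idea can be sketched as a simple power iteration over a toy link graph. This is a simplified illustration only; the production algorithm also handles dangling pages, personalization, and web scale:

```python
# Minimal PageRank-style power iteration: each page's rank is shared
# out over its outgoing links, so a link is a "vote" weighted by the
# voter's own rank. Toy graph; not the production algorithm.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share  # vote weighted by the voter's rank
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
# "c" collects votes from both "a" and "b", so it ends up ranked highest.
```

Running this on the toy graph shows the mechanism described above: popularity compounds, because votes from highly ranked pages count for more.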

Today researchers are pushing further. Mr. Spivack’s company, Radar Networks, for example, is one of several working to exploit the content of social computing sites, which allow users to collaborate in gathering and adding their thoughts to a wide array of content, from travel to movies. 

Radar’s technology is based on a next-generation database system that stores associations, such as one person’s relationship to another (colleague, friend, brother), rather than specific items like text or numbers.
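
An association store of this kind can be sketched as a set of subject-predicate-object triples queried by pattern. This is a toy model of the general idea, not a description of Radar's actual system:

```python
# Toy triple store: facts are (subject, predicate, object) associations,
# and queries are patterns where None acts as a wildcard. Illustrative
# data; a sketch of the general idea only.

triples = {
    ("alice", "colleague", "bob"),
    ("alice", "friend", "carol"),
    ("bob", "brother", "dave"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern."""
    return {
        (s, p, o) for (s, p, o) in triples
        if subject in (None, s) and predicate in (None, p) and obj in (None, o)
    }

# All of alice's relationships, regardless of kind:
print(query(subject="alice"))
```

Storing relationships rather than raw text or numbers is what lets such a system answer "how is X related to Y?" directly instead of by keyword search.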

One example that hints at the potential of such systems is KnowItAll, a project by a group of University of Washington faculty members and students that has been financed by Google. One sample system created using the technology is Opine, which is designed to extract and aggregate user-posted information from product and review sites.

One demonstration project focusing on hotels “understands” concepts like room temperature, bed comfort and hotel price, and can distinguish between concepts like “great,” “almost great” and “mostly O.K.” to provide useful direct answers. Whereas today’s travel recommendation sites force people to weed through long lists of comments and observations left by others, the Web 3.0 system would weigh and rank all of the comments and find, by cognitive deduction, just the right hotel for a particular user.

“The system will know that spotless is better than clean,” said Oren Etzioni, an artificial-intelligence researcher at the University of Washington who is a leader of the project. “There is the growing realization that text on the Web is a tremendous resource.”

In its current state, the Web is often described as being in the Lego phase, with all of its different parts capable of connecting to one another. Those who envision the next phase, Web 3.0, see it as an era when machines will start to do seemingly intelligent things.

Researchers and entrepreneurs say that while it is unlikely that there will be complete artificial-intelligence systems any time soon, if ever, the content of the Web is already growing more intelligent. Smart Webcams watch for intruders, while Web-based e-mail programs recognize dates and locations. Such programs, the researchers say, may signal the impending birth of Web 3.0.

“It’s a hot topic, and people haven’t realized this spooky thing about how much they are depending on A.I.,” said W. Daniel Hillis, a veteran artificial-intelligence researcher who founded Metaweb Technologies here last year.

Like Radar Networks, Metaweb is still not publicly describing what its service or product will be, though the company’s Web site states that Metaweb intends to “build a better infrastructure for the Web.”

“It is pretty clear that human knowledge is out there and more exposed to machines than it ever was before,” Mr. Hillis said.

Both Radar Networks and Metaweb have their roots in part in technology development done originally for the military and intelligence agencies. Early research financed by the National Security Agency, the Central Intelligence Agency and the Defense Advanced Research Projects Agency predated a pioneering call for a semantic Web made in 1999 by Tim Berners-Lee, the creator of the World Wide Web a decade earlier.

Intelligence agencies also helped underwrite the work of Doug Lenat, a computer scientist whose company, Cycorp of Austin, Tex., sells systems and services to the government and large corporations. For the last quarter-century Mr. Lenat has labored on an artificial-intelligence system named Cyc that he claimed would some day be able to answer questions posed in spoken or written language — and to reason.

Cyc was originally built by entering millions of common-sense facts that the computer system would “learn.” But in a lecture given at Google earlier this year, Mr. Lenat said, Cyc is now learning by mining the World Wide Web — a process that is part of how Web 3.0 is being built.

During his talk, he implied that Cyc is now capable of answering a sophisticated natural-language query like: “Which American city would be most vulnerable to an anthrax attack during summer?”

Separately, I.B.M. researchers say they are now routinely using a digital snapshot of the six billion documents that make up the non-pornographic World Wide Web to do survey research and answer questions for corporate customers on diverse topics, such as market research and corporate branding.

Daniel Gruhl, a staff scientist at I.B.M.’s Almaden Research Center in San Jose, Calif., said the data mining system, known as Web Fountain, has been used to determine the attitudes of young people on death for an insurance company, and to choose between the terms “utility computing” and “grid computing” for an I.B.M. branding effort.

“It turned out that only geeks liked the term ‘grid computing,’ ” he said.

I.B.M. has used the system to do market research for television networks on the popularity of shows by mining a popular online community site, he said. Additionally, by mining the “buzz” on college music Web sites, the researchers were able to predict songs that would hit the top of the pop charts in the next two weeks — a capability more impressive than today’s market research predictions.

There is debate over whether systems like Cyc will be the driving force behind Web 3.0 or whether intelligence will emerge in a more organic fashion, from technologies that systematically extract meaning from the existing Web. Those in the latter camp say they see early examples in services like del.icio.us and Flickr, the bookmarking and photo-sharing systems acquired by Yahoo, and Digg, a news service that relies on aggregating the opinions of readers to find stories of interest.

In Flickr, for example, users “tag” photos, making it simple to identify images in ways that have eluded scientists in the past.

“With Flickr you can find images that a computer could never find,” said Prabhakar Raghavan, head of research at Yahoo. “Something that defied us for 50 years suddenly became trivial. It wouldn’t have become trivial without the Web.”


What Is Web 2.0

Design Patterns and Business Models for the Next Generation of Software

by Tim O’Reilly
09/30/2005

The bursting of the dot-com bubble in the fall of 2001 marked a turning point for the web. Many people concluded that the web was overhyped, when in fact bubbles and consequent shakeouts appear to be a common feature of all technological revolutions. Shakeouts typically mark the point at which an ascendant technology is ready to take its place at center stage. The pretenders are given the bum’s rush, the real success stories show their strength, and there begins to be an understanding of what separates one from the other.

The concept of “Web 2.0” began with a conference brainstorming session between O’Reilly and MediaLive International. Dale Dougherty, web pioneer and O’Reilly VP, noted that far from having “crashed”, the web was more important than ever, with exciting new applications and sites popping up with surprising regularity. What’s more, the companies that had survived the collapse seemed to have some things in common. Could it be that the dot-com collapse marked some kind of turning point for the web, such that a call to action such as “Web 2.0” might make sense? We agreed that it did, and so the Web 2.0 Conference was born.

In the year and a half since, the term “Web 2.0” has clearly taken hold, with more than 9.5 million citations in Google. But there’s still a huge amount of disagreement about just what Web 2.0 means, with some people decrying it as a meaningless marketing buzzword, and others accepting it as the new conventional wisdom.

This article is an attempt to clarify just what we mean by Web 2.0.

In our initial brainstorming, we formulated our sense of Web 2.0 by example:

Web 1.0 -> Web 2.0
DoubleClick -> Google AdSense
Ofoto -> Flickr
Akamai -> BitTorrent
mp3.com -> Napster
Britannica Online -> Wikipedia
personal websites -> blogging
evite -> upcoming.org and EVDB
domain name speculation -> search engine optimization
page views -> cost per click
screen scraping -> web services
publishing -> participation
content management systems -> wikis
directories (taxonomy) -> tagging (“folksonomy”)
stickiness -> syndication

The list went on and on. But what was it that made us identify one application or approach as “Web 1.0” and another as “Web 2.0”? (The question is particularly urgent because the Web 2.0 meme has become so widespread that companies are now pasting it on as a marketing buzzword, with no real understanding of just what it means. The question is particularly difficult because many of those buzzword-addicted startups are definitely not Web 2.0, while some of the applications we identified as Web 2.0, like Napster and BitTorrent, are not even properly web applications!) We began trying to tease out the principles that are demonstrated in one way or another by the success stories of web 1.0 and by the most interesting of the new applications.

1. The Web As Platform

Like many important concepts, Web 2.0 doesn’t have a hard boundary, but rather, a gravitational core. You can visualize Web 2.0 as a set of principles and practices that tie together a veritable solar system of sites that demonstrate some or all of those principles, at a varying distance from that core.

(Figure 1: Web 2.0 meme map)

Figure 1 shows a “meme map” of Web 2.0 that was developed at a brainstorming session during FOO Camp, a conference at O’Reilly Media. It’s very much a work in progress, but shows the many ideas that radiate out from the Web 2.0 core.

For example, at the first Web 2.0 conference, in October 2004, John Battelle and I listed a preliminary set of principles in our opening talk. The first of those principles was “The web as platform.” Yet that was also a rallying cry of Web 1.0 darling Netscape, which went down in flames after a heated battle with Microsoft. What’s more, two of our initial Web 1.0 exemplars, DoubleClick and Akamai, were both pioneers in treating the web as a platform. People don’t often think of it as “web services”, but in fact, ad serving was the first widely deployed web service, and the first widely deployed “mashup” (to use another term that has gained currency of late). Every banner ad is served as a seamless cooperation between two websites, delivering an integrated page to a reader on yet another computer. Akamai also treats the network as the platform, and at a deeper level of the stack, building a transparent caching and content delivery network that eases bandwidth congestion.

Nonetheless, these pioneers provided useful contrasts because later entrants have taken their solution to the same problem even further, understanding something deeper about the nature of the new platform. Both DoubleClick and Akamai were Web 2.0 pioneers, yet we can also see how it’s possible to realize more of the possibilities by embracing additional Web 2.0 design patterns.

Let’s drill down for a moment into each of these three cases, teasing out some of the essential elements of difference.

Netscape vs. Google

If Netscape was the standard bearer for Web 1.0, Google is most certainly the standard bearer for Web 2.0, if only because their respective IPOs were defining events for each era. So let’s start with a comparison of these two companies and their positioning.

Netscape framed “the web as platform” in terms of the old software paradigm: their flagship product was the web browser, a desktop application, and their strategy was to use their dominance in the browser market to establish a market for high-priced server products. Control over standards for displaying content and applications in the browser would, in theory, give Netscape the kind of market power enjoyed by Microsoft in the PC market. Much like the “horseless carriage” framed the automobile as an extension of the familiar, Netscape promoted a “webtop” to replace the desktop, and planned to populate that webtop with information updates and applets pushed to the webtop by information providers who would purchase Netscape servers.

In the end, both web browsers and web servers turned out to be commodities, and value moved “up the stack” to services delivered over the web platform.

Google, by contrast, began its life as a native web application, never sold or packaged, but delivered as a service, with customers paying, directly or indirectly, for the use of that service. None of the trappings of the old software industry are present. No scheduled software releases, just continuous improvement. No licensing or sale, just usage. No porting to different platforms so that customers can run the software on their own equipment, just a massively scalable collection of commodity PCs running open source operating systems plus homegrown applications and utilities that no one outside the company ever gets to see.

At bottom, Google requires a competency that Netscape never needed: database management. Google isn’t just a collection of software tools, it’s a specialized database. Without the data, the tools are useless; without the software, the data is unmanageable. Software licensing and control over APIs–the lever of power in the previous era–is irrelevant because the software never need be distributed but only performed, and also because without the ability to collect and manage the data, the software is of little use. In fact, the value of the software is proportional to the scale and dynamism of the data it helps to manage.

Google’s service is not a server–though it is delivered by a massive collection of internet servers–nor a browser–though it is experienced by the user within the browser. Nor does its flagship search service even host the content that it enables users to find. Much like a phone call, which happens not just on the phones at either end of the call, but on the network in between, Google happens in the space between browser and search engine and destination content server, as an enabler or middleman between the user and his or her online experience.

While both Netscape and Google could be described as software companies, it’s clear that Netscape belonged to the same software world as Lotus, Microsoft, Oracle, SAP, and other companies that got their start in the 1980’s software revolution, while Google’s fellows are other internet applications like eBay, Amazon, Napster, and yes, DoubleClick and Akamai.

DoubleClick vs. Overture and AdSense

Like Google, DoubleClick is a true child of the internet era. It harnesses software as a service, has a core competency in data management, and, as noted above, was a pioneer in web services long before web services even had a name. However, DoubleClick was ultimately limited by its business model. It bought into the ’90s notion that the web was about publishing, not participation; that advertisers, not consumers, ought to call the shots; that size mattered, and that the internet was increasingly being dominated by the top websites as measured by MediaMetrix and other web ad scoring companies.

As a result, DoubleClick proudly cites on its website “over 2000 successful implementations” of its software. Yahoo! Search Marketing (formerly Overture) and Google AdSense, by contrast, already serve hundreds of thousands of advertisers apiece.

Overture and Google’s success came from an understanding of what Chris Anderson refers to as “the long tail,” the collective power of the small sites that make up the bulk of the web’s content. DoubleClick’s offerings require a formal sales contract, limiting their market to the few thousand largest websites. Overture and Google figured out how to enable ad placement on virtually any web page. What’s more, they eschewed publisher/ad-agency friendly advertising formats such as banner ads and popups in favor of minimally intrusive, context-sensitive, consumer-friendly text advertising.

The Web 2.0 lesson: leverage customer self-service and algorithmic data management to reach out to the entire web, to the edges and not just the center, to the long tail and not just the head.

A Platform Beats an Application Every Time

In each of its past confrontations with rivals, Microsoft has successfully played the platform card, trumping even the most dominant applications. Windows allowed Microsoft to displace Lotus 1-2-3 with Excel, WordPerfect with Word, and Netscape Navigator with Internet Explorer.

This time, though, the clash isn’t between a platform and an application, but between two platforms, each with a radically different business model: On the one side, a single software provider, whose massive installed base and tightly integrated operating system and APIs give control over the programming paradigm; on the other, a system without an owner, tied together by a set of protocols, open standards and agreements for cooperation.

Windows represents the pinnacle of proprietary control via software APIs. Netscape tried to wrest control from Microsoft using the same techniques that Microsoft itself had used against other rivals, and failed. But Apache, which held to the open standards of the web, has prospered. The battle is no longer unequal, a platform versus a single application, but platform versus platform, with the question being which platform, and more profoundly, which architecture, and which business model, is better suited to the opportunity ahead.

Windows was a brilliant solution to the problems of the early PC era. It leveled the playing field for application developers, solving a host of problems that had previously bedeviled the industry. But a single monolithic approach, controlled by a single vendor, is no longer a solution, it’s a problem. Communications-oriented systems, as the internet-as-platform most certainly is, require interoperability. Unless a vendor can control both ends of every interaction, the possibilities of user lock-in via software APIs are limited.

Any Web 2.0 vendor that seeks to lock in its application gains by controlling the platform will, by definition, no longer be playing to the strengths of the platform.

This is not to say that there are not opportunities for lock-in and competitive advantage, but we believe they are not to be found via control over software APIs and protocols. There is a new game afoot. The companies that succeed in the Web 2.0 era will be those that understand the rules of that game, rather than trying to go back to the rules of the PC software era.

Not surprisingly, other web 2.0 success stories demonstrate this same behavior. eBay enables occasional transactions of only a few dollars between single individuals, acting as an automated intermediary. Napster (though shut down for legal reasons) built its network not by building a centralized song database, but by architecting a system in such a way that every downloader also became a server, and thus grew the network.

Akamai vs. BitTorrent

Like DoubleClick, Akamai is optimized to do business with the head, not the tail, with the center, not the edges. While it serves the benefit of the individuals at the edge of the web by smoothing their access to the high-demand sites at the center, it collects its revenue from those central sites.

BitTorrent, like other pioneers in the P2P movement, takes a radical approach to internet decentralization. Every client is also a server; files are broken up into fragments that can be served from multiple locations, transparently harnessing the network of downloaders to provide both bandwidth and data to other users. The more popular the file, in fact, the faster it can be served, as there are more users providing bandwidth and fragments of the complete file.

BitTorrent thus demonstrates a key Web 2.0 principle: the service automatically gets better the more people use it. While Akamai must add servers to improve service, every BitTorrent consumer brings his own resources to the party. There’s an implicit “architecture of participation”, a built-in ethic of cooperation, in which the service acts primarily as an intelligent broker, connecting the edges to each other and harnessing the power of the users themselves.
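
The "every client is also a server" idea can be sketched as chunked distribution: a file is split into fragments, and each completed download adds a new source for every fragment. This is a simplified model that ignores trackers, piece selection, and hash verification:

```python
# Simplified swarm-distribution model: a file is split into chunks, and
# every peer holding a chunk can serve it to others, so serving capacity
# grows with popularity. Real BitTorrent adds trackers, rarest-first
# piece selection, and cryptographic verification of each piece.

def split_into_chunks(data, chunk_size):
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

class Peer:
    def __init__(self, name):
        self.name = name
        self.chunks = {}  # chunk index -> bytes

    def sources_for(self, index, swarm):
        """All peers in the swarm that can serve this chunk."""
        return [p for p in swarm if index in p.chunks]

# The seeder starts with the whole file; the leecher joins empty.
file_data = b"hello, distributed world!"
chunks = split_into_chunks(file_data, 5)
seeder = Peer("seeder")
seeder.chunks = dict(enumerate(chunks))
leecher = Peer("leecher")
swarm = [seeder, leecher]

# The leecher fetches each chunk from any available source; once it
# holds a chunk, it becomes an additional source for that chunk.
for i in range(len(chunks)):
    source = leecher.sources_for(i, swarm)[0]
    leecher.chunks[i] = source.chunks[i]
```

After the download completes, every chunk has two sources instead of one, which is the sense in which the service gets better the more people use it.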

2. Harnessing Collective Intelligence

The central principle behind the success of the giants born in the Web 1.0 era who have survived to lead the Web 2.0 era appears to be this: they have embraced the power of the web to harness collective intelligence:

  • Hyperlinking is the foundation of the web. As users add new content, and new sites, it is bound in to the structure of the web by other users discovering the content and linking to it. Much as synapses form in the brain, with associations becoming stronger through repetition or intensity, the web of connections grows organically as an output of the collective activity of all web users.
  • Yahoo!, the first great internet success story, was born as a catalog, or directory of links, an aggregation of the best work of thousands, then millions of web users. While Yahoo! has since moved into the business of creating many types of content, its role as a portal to the collective work of the net’s users remains the core of its value.
  • Google’s breakthrough in search, which quickly made it the undisputed search market leader, was PageRank, a method of using the link structure of the web rather than just the characteristics of documents to provide better search results.
  • eBay’s product is the collective activity of all its users; like the web itself, eBay grows organically in response to user activity, and the company’s role is as an enabler of a context in which that user activity can happen. What’s more, eBay’s competitive advantage comes almost entirely from the critical mass of buyers and sellers, which makes any new entrant offering similar services significantly less attractive.
  • Amazon sells the same products as competitors such as Barnesandnoble.com, and they receive the same product descriptions, cover images, and editorial content from their vendors. But Amazon has made a science of user engagement. They have an order of magnitude more user reviews, invitations to participate in varied ways on virtually every page–and even more importantly, they use user activity to produce better search results. While a Barnesandnoble.com search is likely to lead with the company’s own products, or sponsored results, Amazon always leads with “most popular”, a real-time computation based not only on sales but other factors that Amazon insiders call the “flow” around products. With an order of magnitude more user participation, it’s no surprise that Amazon’s sales also outpace competitors.

Now, innovative companies that pick up on this insight and perhaps extend it even further, are making their mark on the web:

  • Wikipedia, an online encyclopedia based on the unlikely notion that an entry can be added by any web user, and edited by any other, is a radical experiment in trust, applying Eric Raymond’s dictum (originally coined in the context of open source software) that “with enough eyeballs, all bugs are shallow,” to content creation. Wikipedia is already in the top 100 websites, and many think it will be in the top ten before long. This is a profound change in the dynamics of content creation!
  • Sites like del.icio.us and Flickr, two companies that have received a great deal of attention of late, have pioneered a concept that some people call “folksonomy” (in contrast to taxonomy), a style of collaborative categorization of sites using freely chosen keywords, often referred to as tags. Tagging allows for the kind of multiple, overlapping associations that the brain itself uses, rather than rigid categories. In the canonical example, a Flickr photo of a puppy might be tagged both “puppy” and “cute”–allowing for retrieval along natural axes generated by user activity.
  • Collaborative spam filtering products like Cloudmark aggregate the individual decisions of email users about what is and is not spam, outperforming systems that rely on analysis of the messages themselves.
  • It is a truism that the greatest internet success stories don’t advertise their products. Their adoption is driven by “viral marketing”–that is, recommendations propagating directly from one user to another. You can almost make the case that if a site or product relies on advertising to get the word out, it isn’t Web 2.0.
  • Even much of the infrastructure of the web–including the Linux, Apache, MySQL, and Perl, PHP, or Python code involved in most web servers–relies on the peer-production methods of open source, in themselves an instance of collective, net-enabled intelligence. There are more than 100,000 open source software projects listed on SourceForge.net. Anyone can add a project, anyone can download and use the code, and new projects migrate from the edges to the center as a result of users putting them to work, an organic software adoption process relying almost entirely on viral marketing.
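
The tagging idea in the del.icio.us/Flickr bullet above is easy to model: items carry freely chosen, overlapping tags, and retrieval is just set intersection over a tag index. A minimal sketch with illustrative data:

```python
# Folksonomy in miniature: items carry multiple overlapping tags, and
# retrieval intersects a tag index rather than walking a rigid category
# tree. The photos and tags here are illustrative.

from collections import defaultdict

tag_index = defaultdict(set)  # tag -> set of item ids

def tag_item(item_id, *tags):
    for t in tags:
        tag_index[t].add(item_id)

def find(*tags):
    """Items carrying every requested tag."""
    sets = [tag_index[t] for t in tags]
    return set.intersection(*sets) if sets else set()

# The canonical example: one photo tagged along several natural axes.
tag_item("photo1", "puppy", "cute")
tag_item("photo2", "puppy", "muddy")
tag_item("photo3", "cute", "kitten")
```

With this structure, `find("puppy", "cute")` narrows to the photos that sit at the intersection of both axes, something a single rigid taxonomy cannot express.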

The lesson: Network effects from user contributions are the key to market dominance in the Web 2.0 era.

Blogging and the Wisdom of Crowds

One of the most highly touted features of the Web 2.0 era is the rise of blogging. Personal home pages have been around since the early days of the web, and the personal diary and daily opinion column around much longer than that, so just what is the fuss all about?

At its most basic, a blog is just a personal home page in diary format. But as Rich Skrenta notes, the chronological organization of a blog “seems like a trivial difference, but it drives an entirely different delivery, advertising and value chain.”

One of the things that has made a difference is a technology called RSS. RSS is the most significant advance in the fundamental architecture of the web since early hackers realized that CGI could be used to create database-backed websites. RSS allows someone to link not just to a page, but to subscribe to it, with notification every time that page changes. Skrenta calls this “the incremental web.” Others call it the “live web”.

Now, of course, “dynamic websites” (i.e., database-backed sites with dynamically generated content) replaced static web pages well over ten years ago. What’s dynamic about the live web are not just the pages, but the links. A link to a weblog is expected to point to a perennially changing page, with “permalinks” for any individual entry, and notification for each change. An RSS feed is thus a much stronger link than, say, a bookmark or a link to a single page.
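Mechanically, the “subscription” RSS enables is simple: a reader periodically fetches a small XML document and compares it with what it saw last time. The sketch below parses a minimal RSS 2.0 fragment with Python’s standard library; the feed, its URLs, and its entries are all hypothetical, but the element names (`item`, `title`, `link`, `pubDate`) follow the RSS 2.0 format.

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS 2.0 feed: each <item> carries a permalink and a date,
# which is what lets a reader poll the feed and notice that the
# "perennially changing page" has changed since the last fetch.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Weblog</title>
    <item>
      <title>Second post</title>
      <link>http://example.com/2005/10/second-post</link>
      <pubDate>Sat, 01 Oct 2005 12:00:00 GMT</pubDate>
    </item>
    <item>
      <title>First post</title>
      <link>http://example.com/2005/09/first-post</link>
      <pubDate>Fri, 30 Sep 2005 12:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def entries(feed_xml):
    """Yield (title, permalink) pairs for each item, in published order."""
    root = ET.fromstring(feed_xml)
    for item in root.iter("item"):
        yield item.findtext("title"), item.findtext("link")

posts = list(entries(FEED))
```

Note that each entry’s `<link>` is a permalink: it is the stable, per-entry address that makes the feed a “stronger link” than a bookmark to the ever-changing front page.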

The Architecture of Participation

Some systems are designed to encourage participation. In his paper, The Cornucopia of the Commons, Dan Bricklin noted that there are three ways to build a large database. The first, demonstrated by Yahoo!, is to pay people to do it. The second, inspired by lessons from the open source community, is to get volunteers to perform the same task. The Open Directory Project, an open source Yahoo competitor, is the result. But Napster demonstrated a third way. Because Napster set its defaults to automatically serve any music that was downloaded, every user automatically helped to build the value of the shared database. This same approach has been followed by all other P2P file sharing services.

One of the key lessons of the Web 2.0 era is this: Users add value. But only a small percentage of users will go to the trouble of adding value to your application via explicit means. Therefore, Web 2.0 companies set inclusive defaults for aggregating user data and building value as a side-effect of ordinary use of the application. As noted above, they build systems that get better the more people use them.

Mitch Kapor once noted that “architecture is politics.” Participation is intrinsic to Napster, part of its fundamental architecture.

This architectural insight may also be more central to the success of open source software than the more frequently cited appeal to volunteerism. The architecture of the internet, and the World Wide Web, as well as of open source software projects like Linux, Apache, and Perl, is such that users pursuing their own “selfish” interests build collective value as an automatic byproduct. Each of these projects has a small core, well-defined extension mechanisms, and an approach that lets any well-behaved component be added by anyone, growing the outer layers of what Larry Wall, the creator of Perl, refers to as “the onion.” In other words, these technologies demonstrate network effects, simply through the way that they have been designed.

These projects can be seen to have a natural architecture of participation. But as Amazon demonstrates, by consistent effort (as well as economic incentives such as the Associates program), it is possible to overlay such an architecture on a system that would not normally seem to possess it.

RSS also means that the web browser is not the only means of viewing a web page. While some RSS aggregators, such as Bloglines, are web-based, others are desktop clients, and still others allow users of portable devices to subscribe to constantly updated content.

RSS is now being used to push not just notices of new blog entries, but also all kinds of data updates, including stock quotes, weather data, and photo availability. This use is actually a return to one of its roots: RSS was born in 1997 out of the confluence of Dave Winer’s “Really Simple Syndication” technology, used to push out blog updates, and Netscape’s “Rich Site Summary”, which allowed users to create custom Netscape home pages with regularly updated data flows. Netscape lost interest, and the technology was carried forward by blogging pioneer Userland, Winer’s company. In the current crop of applications, we see, though, the heritage of both parents.

But RSS is only part of what makes a weblog different from an ordinary web page. Tom Coates remarks on the significance of the permalink:

It may seem like a trivial piece of functionality now, but it was effectively the device that turned weblogs from an ease-of-publishing phenomenon into a conversational mess of overlapping communities. For the first time it became relatively easy to gesture directly at a highly specific post on someone else’s site and talk about it. Discussion emerged. Chat emerged. And – as a result – friendships emerged or became more entrenched. The permalink was the first – and most successful – attempt to build bridges between weblogs.

In many ways, the combination of RSS and permalinks adds many of the features of NNTP, the Network News Transfer Protocol of Usenet, onto HTTP, the web protocol. The “blogosphere” can be thought of as a new, peer-to-peer equivalent to Usenet and bulletin-boards, the conversational watering holes of the early internet. Not only can people subscribe to each others’ sites, and easily link to individual comments on a page, but also, via a mechanism known as trackbacks, they can see when anyone else links to their pages, and can respond, either with reciprocal links, or by adding comments.
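The trackback mechanism itself is a deliberately lightweight protocol: a single form-encoded HTTP POST to the target entry’s trackback URL, following the specification published by Six Apart. The sketch below only constructs such a ping without sending it; the URLs, titles, and blog names are hypothetical.

```python
from urllib import parse, request

def build_trackback_ping(ping_url, title, excerpt, url, blog_name):
    """Construct (but do not send) a TrackBack ping: a form-encoded POST."""
    payload = parse.urlencode({
        "title": title,
        "excerpt": excerpt,
        "url": url,             # permalink of the post doing the linking
        "blog_name": blog_name,
    }).encode("utf-8")
    return request.Request(
        ping_url,
        data=payload,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

# Hypothetical example: notifying example.com that we linked to its entry 42.
req = build_trackback_ping(
    "http://example.com/trackback/42",
    "Re: The permalink",
    "I disagree with one point...",
    "http://myblog.example.org/2005/10/reply",
    "My Weblog",
)
```

Because the ping travels from linker to linkee, it is exactly the “(potentially) symmetrical one-way link” discussed below: the recipient learns of the link but never has to approve it.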

Interestingly, two-way links were the goal of early hypertext systems like Xanadu. Hypertext purists have celebrated trackbacks as a step towards two-way links. But note that trackbacks are not properly two-way–rather, they are really (potentially) symmetrical one-way links that create the effect of two-way links. The difference may seem subtle, but in practice it is enormous. Social networking systems like Friendster, Orkut, and LinkedIn, which require acknowledgment by the recipient in order to establish a connection, lack the same scalability as the web. As noted by Caterina Fake, co-founder of the Flickr photo sharing service, attention is only coincidentally reciprocal. (Flickr thus allows users to set watch lists–any user can subscribe to any other user’s photostream via RSS. The object of attention is notified, but does not have to approve the connection.)

If an essential part of Web 2.0 is harnessing collective intelligence, turning the web into a kind of global brain, the blogosphere is the equivalent of constant mental chatter in the forebrain, the voice we hear in all of our heads. It may not reflect the deep structure of the brain, which is often unconscious, but is instead the equivalent of conscious thought. And as a reflection of conscious thought and attention, the blogosphere has begun to have a powerful effect.

First, because search engines use link structure to help predict useful pages, bloggers, as the most prolific and timely linkers, have a disproportionate role in shaping search engine results. Second, because the blogging community is so highly self-referential, bloggers paying attention to other bloggers magnifies their visibility and power. The “echo chamber” that critics decry is also an amplifier.

If it were merely an amplifier, blogging would be uninteresting. But like Wikipedia, blogging harnesses collective intelligence as a kind of filter. What James Surowiecki calls “the wisdom of crowds” comes into play, and much as PageRank produces better results than analysis of any individual document, the collective attention of the blogosphere selects for value.
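The PageRank idea invoked here can be shown with a toy power-iteration sketch: a page’s score is a weighted sum of the scores of the pages linking to it, which is precisely why prolific, well-linked bloggers end up with outsized influence on search results. The three-page link graph below is invented for illustration, and this is a simplified model, not Google’s actual implementation.

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy PageRank by power iteration. links maps page -> set of outlinks."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {}
        for p in pages:
            # Each page q that links to p passes along an equal share
            # of its own rank to every page it links to.
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new
    return rank

# Hypothetical link graph: "a" is linked from both "b" and "c",
# so it ends up with the highest rank.
links = {"a": {"b"}, "b": {"a", "c"}, "c": {"a"}}
ranks = pagerank(links)
```

The filtering effect described above falls out of the same arithmetic: a link from the blogosphere is a vote, and densely interlinked conversations concentrate rank on whatever the crowd is attending to.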

While mainstream media may see individual blogs as competitors, what is really unnerving is that the competition is with the blogosphere as a whole. This is not just a competition between sites, but a competition between business models. The world of Web 2.0 is also the world of what Dan Gillmor calls “we, the media,” a world in which “the former audience”, not a few people in a back room, decides what’s important.

3. Data is the Next Intel Inside

Every significant internet application to date has been backed by a specialized database: Google’s web crawl, Yahoo!’s directory (and web crawl), Amazon’s database of products, eBay’s database of products and sellers, MapQuest’s map databases, Napster’s distributed song database. As Hal Varian remarked in a personal conversation last year, “SQL is the new HTML.” Database management is a core competency of Web 2.0 companies, so much so that we have sometimes referred to these applications as “infoware” rather than merely software.

This fact leads to a key question: Who owns the data?

In the internet era, one can already see a number of cases where control over the database has led to market control and outsized financial returns. The monopoly on domain name registry initially granted by government fiat to Network Solutions (later purchased by Verisign) was one of the first great moneymakers of the internet. While we’ve argued that business advantage via controlling software APIs is much more difficult in the age of the internet, control of key data sources is not, especially if those data sources are expensive to create or amenable to increasing returns via network effects.

Look at the copyright notices at the base of every map served by MapQuest, maps.yahoo.com, maps.msn.com, or maps.google.com, and you’ll see the line “Maps copyright NavTeq, TeleAtlas,” or with the new satellite imagery services, “Images copyright Digital Globe.” These companies made substantial investments in their databases (NavTeq alone reportedly invested $750 million to build their database of street addresses and directions. Digital Globe spent $500 million to launch their own satellite to improve on government-supplied imagery.) NavTeq has gone so far as to imitate Intel’s familiar Intel Inside logo: Cars with navigation systems bear the imprint, “NavTeq Onboard.” Data is indeed the Intel Inside of these applications, a sole source component in systems whose software infrastructure is largely open source or otherwise commodified.

The now hotly contested web mapping arena demonstrates how a failure to understand the importance of owning an application’s core data will eventually undercut its competitive position. MapQuest pioneered the web mapping category in 1995, yet when Yahoo!, and then Microsoft, and most recently Google, decided to enter the market, they were easily able to offer a competing application simply by licensing the same data.

Contrast, however, the position of Amazon.com. Like competitors such as Barnesandnoble.com, its original database came from ISBN registry provider R.R. Bowker. But unlike MapQuest, Amazon relentlessly enhanced the data, adding publisher-supplied data such as cover images, table of contents, index, and sample material. Even more importantly, they harnessed their users to annotate the data, such that after ten years, Amazon, not Bowker, is the primary source for bibliographic data on books, a reference source for scholars and librarians as well as consumers. Amazon also introduced their own proprietary identifier, the ASIN, which corresponds to the ISBN where one is present, and creates an equivalent namespace for products without one. Effectively, Amazon “embraced and extended” their data suppliers.
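The ASIN move described above is, at bottom, a namespace design decision: reuse the supplier’s identifier where it exists, and mint your own where it doesn’t. The sketch below illustrates that pattern with an invented ASIN-like scheme; the ID format and the catalog entries are hypothetical, not Amazon’s actual implementation.

```python
import itertools

_counter = itertools.count(1)

def assign_id(product):
    """Embrace and extend: return the ISBN when present, else mint an ID."""
    isbn = product.get("isbn")
    if isbn:
        return isbn                      # embrace the supplier's namespace
    return "X%09d" % next(_counter)      # extend it for non-book products

book = assign_id({"title": "A Pattern Language", "isbn": "0195019199"})
gadget = assign_id({"title": "Digital camera"})   # no ISBN, so an ID is minted
```

The design means every product has exactly one identifier, books stay interoperable with the industry’s ISBN infrastructure, and the extended namespace belongs to the company that minted it.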

Imagine if MapQuest had done the same thing, harnessing their users to annotate maps and directions, adding layers of value. It would have been much more difficult for competitors to enter the market just by licensing the base data.

The recent introduction of Google Maps provides a living laboratory for the competition between application vendors and their data suppliers. Google’s lightweight programming model has led to the creation of numerous value-added services in the form of mashups that link Google Maps with other internet-accessible data sources. Paul Rademacher’s housingmaps.com, which combines Google Maps with Craigslist apartment rental and home purchase data to create an interactive housing search tool, is the pre-eminent example of such a mashup.

At present, these mashups are mostly innovative experiments, done by hackers. But entrepreneurial activity follows close behind. And already, one can see that for at least one class of developer, Google has taken the role of data source away from NavTeq and inserted themselves as a favored intermediary. We expect to see battles between data suppliers and application vendors in the next few years, as both realize just how important certain classes of data will become as building blocks for Web 2.0 applications.

The race is on to own certain classes of core data: location, identity, calendaring of public events, product identifiers and namespaces. In many cases, where there is significant cost to create the data, there may be an opportunity for an Intel Inside style play, with a single source for the data. In others, the winner will be the company that first reaches critical mass via user aggregation, and turns that aggregated data into a system service.

For example, in the area of identity, PayPal, Amazon’s 1-click, and the millions of users of communications systems, may all be legitimate contenders to build a network-wide identity database. (In this regard, Google’s recent attempt to use cell phone numbers as an identifier for Gmail accounts may be a step towards embracing and extending the phone system.) Meanwhile, startups like Sxip are exploring the potential of federated identity, in quest of a kind of “distributed 1-click” that will provide a seamless Web 2.0 identity subsystem. In the area of calendaring, EVDB is an attempt to build the world’s largest shared calendar via a wiki-style architecture of participation. While the jury’s still out on the success of any particular startup or approach, it’s clear that standards and solutions in these areas, effectively turning certain classes of data into reliable subsystems of the “internet operating system”, will enable the next generation of applications.

A further point must be noted with regard to data, and that is user concerns about privacy and their rights to their own data. In many of the early web applications, copyright is only loosely enforced. For example, Amazon lays claim to any reviews submitted to the site, but in the absence of enforcement, people may repost the same review elsewhere. However, as companies begin to realize that control over data may be their chief source of competitive advantage, we may see heightened attempts at control.

Much as the rise of proprietary software led to the Free Software movement, we expect the rise of proprietary databases to result in a Free Data movement within the next decade. One can see early signs of this countervailing trend in open data projects such as Wikipedia, the Creative Commons, and in software projects like Greasemonkey, which allow users to take control of how data is displayed on their computer.

4. End of the Software Release Cycle

As noted above in the discussion of Google vs. Netscape, one of the defining characteristics of internet era software is that it is delivered as a service, not as a product. This fact leads to a number of fundamental changes in the business model of such a company:

  1. Operations must become a core competency. Google’s or Yahoo!’s expertise in product development must be matched by an expertise in daily operations. So fundamental is the shift from software as artifact to software as service that the software will cease to perform unless it is maintained on a daily basis. Google must continuously crawl the web and update its indices, continuously filter out link spam and other attempts to influence its results, continuously and dynamically respond to hundreds of millions of asynchronous user queries, simultaneously matching them with context-appropriate advertisements. It’s no accident that Google’s system administration, networking, and load balancing techniques are perhaps even more closely guarded secrets than their search algorithms. Google’s success at automating these processes is a key part of their cost advantage over competitors.

    It’s also no accident that scripting languages such as Perl, Python, PHP, and now Ruby, play such a large role at web 2.0 companies. Perl was famously described by Hassan Schroeder, Sun’s first webmaster, as “the duct tape of the internet.” Dynamic languages (often called scripting languages and looked down on by the software engineers of the era of software artifacts) are the tool of choice for system and network administrators, as well as application developers building dynamic systems that require constant change.

  2. Users must be treated as co-developers, in a reflection of open source development practices (even if the software in question is unlikely to be released under an open source license.) The open source dictum, “release early and release often” in fact has morphed into an even more radical position, “the perpetual beta,” in which the product is developed in the open, with new features slipstreamed in on a monthly, weekly, or even daily basis. It’s no accident that services such as Gmail, Google Maps, Flickr, del.icio.us, and the like may be expected to bear a “Beta” logo for years at a time. Real-time monitoring of user behavior to see just which new features are used, and how they are used, thus becomes another required core competency. A web developer at a major online service remarked: “We put up two or three new features on some part of the site every day, and if users don’t adopt them, we take them down. If they like them, we roll them out to the entire site.”

    Cal Henderson, the lead developer of Flickr, recently revealed that they deploy new builds up to every half hour. This is clearly a radically different development model! While not all web applications are developed in as extreme a style as Flickr, almost all web applications have a development cycle that is radically unlike anything from the PC or client-server era. It is for this reason that a recent ZDnet editorial concluded that Microsoft won’t be able to beat Google: “Microsoft’s business model depends on everyone upgrading their computing environment every two to three years. Google’s depends on everyone exploring what’s new in their computing environment every day.”

While Microsoft has demonstrated enormous ability to learn from and ultimately best its competition, there’s no question that this time, the competition will require Microsoft (and by extension, every other existing software company) to become a deeply different kind of company. Native Web 2.0 companies enjoy a natural advantage, as they don’t have old patterns (and corresponding business models and revenue sources) to shed.

A Web 2.0 Investment Thesis

Venture capitalist Paul Kedrosky writes: “The key is to find the actionable investments where you disagree with the consensus”. It’s interesting to see how each Web 2.0 facet involves disagreeing with the consensus: everyone was emphasizing keeping data private, Flickr/Napster/et al. make it public. It’s not just disagreeing to be disagreeable (pet food! online!), it’s disagreeing where you can build something out of the differences. Flickr builds communities, Napster built breadth of collection.

Another way to look at it is that the successful companies all give up something expensive but considered critical to get something valuable for free that was once expensive. For example, Wikipedia gives up central editorial control in return for speed and breadth. Napster gave up on the idea of “the catalog” (all the songs the vendor was selling) and got breadth. Amazon gave up on the idea of having a physical storefront but got to serve the entire world. Google gave up on the big customers (initially) and got the 80% whose needs weren’t being met. There’s something very aikido (using your opponent’s force against them) in saying “you know, you’re right–absolutely anyone in the whole world CAN update this article. And guess what, that’s bad news for you.”

Nat Torkington

5. Lightweight Programming Models

Once the idea of web services became au courant, large companies jumped into the fray with a complex web services stack designed to create highly reliable programming environments for distributed applications.

But much as the web succeeded precisely because it overthrew much of hypertext theory, substituting a simple pragmatism for ideal design, RSS has become perhaps the single most widely deployed web service because of its simplicity, while the complex corporate web services stacks have yet to achieve wide deployment.

Similarly, Amazon.com’s web services are provided in two forms: one adhering to the formalisms of the SOAP (Simple Object Access Protocol) web services stack, the other simply providing XML data over HTTP, in a lightweight approach sometimes referred to as REST (Representational State Transfer). While high value B2B connections (like those between Amazon and retail partners like ToysRUs) use the SOAP stack, Amazon reports that 95% of the usage is of the lightweight REST service.
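The REST style mentioned here really is as lightweight as described: the request is just a URL with query parameters, and the response is plain XML that any client can parse with standard tools. The sketch below builds such a request URL and parses a canned response; the endpoint, parameter names, and response document are all hypothetical stand-ins (not Amazon’s actual API), and nothing is fetched over the network.

```python
import xml.etree.ElementTree as ET
from urllib import parse

def rest_url(base, **params):
    """Build a REST-style request: just a base URL plus query parameters."""
    return base + "?" + parse.urlencode(sorted(params.items()))

# Hypothetical endpoint and parameters, loosely in the shape of a
# lightweight product-search service.
url = rest_url("http://webservices.example.com/onca/xml",
               Operation="ItemSearch", Keywords="web 2.0")

# A canned, hypothetical XML response; a real client would fetch the URL.
RESPONSE = """<ItemSearchResponse>
  <Item><ASIN>B000EXAMPLE</ASIN><Title>Example Product</Title></Item>
</ItemSearchResponse>"""

title = ET.fromstring(RESPONSE).findtext("./Item/Title")
```

Contrast this with a SOAP call, which would wrap the same question in an envelope, a body, and a typed schema; the 95% figure cited above suggests most developers prefer the plain URL.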

This same quest for simplicity can be seen in other “organic” web services. Google’s recent release of Google Maps is a case in point. Google Maps’ simple AJAX (Javascript and XML) interface was quickly decrypted by hackers, who then proceeded to remix the data into new services.

Mapping-related web services had been available for some time from GIS vendors such as ESRI as well as from MapQuest and Microsoft MapPoint. But Google Maps set the world on fire because of its simplicity. While experimenting with any of the formal vendor-supported web services required a formal contract between the parties, the way Google Maps was implemented left the data for the taking, and hackers soon found ways to creatively re-use that data.

There are several significant lessons here:

  1. Support lightweight programming models that allow for loosely coupled systems. The complexity of the corporate-sponsored web services stack is designed to enable tight coupling. While this is necessary in many cases, many of the most interesting applications can indeed remain loosely coupled, and even fragile. The Web 2.0 mindset is very different from the traditional IT mindset!
  2. Think syndication, not coordination. Simple web services, like RSS and REST-based web services, are about syndicating data outwards, not controlling what happens when it gets to the other end of the connection. This idea is fundamental to the internet itself, a reflection of what is known as the end-to-end principle.
  3. Design for “hackability” and remixability. Systems like the original web, RSS, and AJAX all have this in common: the barriers to re-use are extremely low. Much of the useful software is actually open source, but even when it isn’t, there is little in the way of intellectual property protection. The web browser’s “View Source” option made it possible for any user to copy any other user’s web page; RSS was designed to empower the user to view the content he or she wants, when it’s wanted, not at the behest of the information provider; the most successful web services are those that have been easiest to take in new directions unimagined by their creators. The phrase “some rights reserved,” which was popularized by the Creative Commons to contrast with the more typical “all rights reserved,” is a useful guidepost.

Innovation in Assembly

Lightweight business models are a natural concomitant of lightweight programming and lightweight connections. The Web 2.0 mindset is good at re-use. A new service like housingmaps.com was built simply by snapping together two existing services. Housingmaps.com doesn’t have a business model (yet)–but for many small-scale services, Google AdSense (or perhaps Amazon associates fees, or both) provides the snap-in equivalent of a revenue model.

These examples provide an insight into another key web 2.0 principle, which we call “innovation in assembly.” When commodity components are abundant, you can create value simply by assembling them in novel or effective ways. Much as the PC revolution provided many opportunities for innovation in assembly of commodity hardware, with companies like Dell making a science out of such assembly, thereby defeating companies whose business model required innovation in product development, we believe that Web 2.0 will provide opportunities for companies to beat the competition by getting better at harnessing and integrating services provided by others.

6. Software Above the Level of a Single Device

One other feature of Web 2.0 that deserves mention is the fact that it’s no longer limited to the PC platform. In his parting advice to Microsoft, long time Microsoft developer Dave Stutz pointed out that “Useful software written above the level of the single device will command high margins for a long time to come.”

Of course, any web application can be seen as software above the level of a single device. After all, even the simplest web application involves at least two computers: the one hosting the web server and the one hosting the browser. And as we’ve discussed, the development of the web as platform extends this idea to synthetic applications composed of services provided by multiple computers.

But as with many areas of Web 2.0, where the “2.0-ness” is not something new, but rather a fuller realization of the true potential of the web platform, this phrase gives us a key insight into how to design applications and services for the new platform.

To date, iTunes is the best exemplar of this principle. This application seamlessly reaches from the handheld device to a massive web back-end, with the PC acting as a local cache and control station. There have been many previous attempts to bring web content to portable devices, but the iPod/iTunes combination is one of the first such applications designed from the ground up to span multiple devices. TiVo is another good example.

iTunes and TiVo also demonstrate many of the other core principles of Web 2.0. They are not web applications per se, but they leverage the power of the web platform, making it a seamless, almost invisible part of their infrastructure. Data management is most clearly the heart of their offering. They are services, not packaged applications (although in the case of iTunes, it can be used as a packaged application, managing only the user’s local data.) What’s more, both TiVo and iTunes show some budding use of collective intelligence, although in each case, their experiments are at war with the IP lobby’s. There’s only a limited architecture of participation in iTunes, though the recent addition of podcasting changes that equation substantially.

This is one of the areas of Web 2.0 where we expect to see some of the greatest change, as more and more devices are connected to the new platform. What applications become possible when our phones and our cars are not consuming data but reporting it? Real time traffic monitoring, flash mobs, and citizen journalism are only a few of the early warning signs of the capabilities of the new platform.

7. Rich User Experiences

As early as Pei Wei’s Viola browser in 1992, the web was being used to deliver “applets” and other kinds of active content within the web browser. Java’s introduction in 1995 was framed around the delivery of such applets. JavaScript and then DHTML were introduced as lightweight ways to provide client side programmability and richer user experiences. Several years ago, Macromedia coined the term “Rich Internet Applications” (which has also been picked up by open source Flash competitor Laszlo Systems) to highlight the capabilities of Flash to deliver not just multimedia content but also GUI-style application experiences.

However, the potential of the web to deliver full scale applications didn’t hit the mainstream till Google introduced Gmail, quickly followed by Google Maps, web based applications with rich user interfaces and PC-equivalent interactivity. The collection of technologies used by Google was christened AJAX, in a seminal essay by Jesse James Garrett of web design firm Adaptive Path. He wrote:

“Ajax isn’t a technology. It’s really several technologies, each flourishing in its own right, coming together in powerful new ways. Ajax incorporates:

Web 2.0 Design Patterns

In his book, A Pattern Language, Christopher Alexander prescribes a format for the concise description of the solution to architectural problems. He writes: “Each pattern describes a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice.”

  1. The Long Tail
    Small sites make up the bulk of the internet’s content; narrow niches make up the bulk of the internet’s possible applications. Therefore: Leverage customer self-service and algorithmic data management to reach out to the entire web, to the edges and not just the center, to the long tail and not just the head.
  2. Data is the Next Intel Inside
    Applications are increasingly data-driven. Therefore: For competitive advantage, seek to own a unique, hard-to-recreate source of data.
  3. Users Add Value
    The key to competitive advantage in internet applications is the extent to which users add their own data to that which you provide. Therefore: Don’t restrict your “architecture of participation” to software development. Involve your users both implicitly and explicitly in adding value to your application.
  4. Network Effects by Default
    Only a small percentage of users will go to the trouble of adding value to your application. Therefore: Set inclusive defaults for aggregating user data as a side-effect of their use of the application.
  5. Some Rights Reserved
    Intellectual property protection limits re-use and prevents experimentation. Therefore: When benefits come from collective adoption, not private restriction, make sure that barriers to adoption are low. Follow existing standards, and use licenses with as few restrictions as possible. Design for “hackability” and “remixability.”
  6. The Perpetual Beta
    When devices and programs are connected to the internet, applications are no longer software artifacts, they are ongoing services. Therefore: Don’t package up new features into monolithic releases, but instead add them on a regular basis as part of the normal user experience. Engage your users as real-time testers, and instrument the service so that you know how people use the new features.
  7. Cooperate, Don’t Control
    Web 2.0 applications are built of a network of cooperating data services. Therefore: Offer web services interfaces and content syndication, and re-use the data services of others. Support lightweight programming models that allow for loosely-coupled systems.
  8. Software Above the Level of a Single Device
    The PC is no longer the only access device for internet applications, and applications that are limited to a single device are less valuable than those that are connected. Therefore: Design your application from the get-go to integrate services across handheld devices, PCs, and internet servers.

AJAX is also a key component of Web 2.0 applications such as Flickr, now part of Yahoo!, 37signals’ applications Basecamp and Backpack, as well as other Google applications such as Gmail and Orkut. We’re entering an unprecedented period of user interface innovation, as web developers are finally able to build web applications as rich as local PC-based applications.

Interestingly, many of the capabilities now being explored have been around for many years. In the late ’90s, both Microsoft and Netscape had a vision of the kind of capabilities that are now finally being realized, but their battle over the standards to be used made cross-browser applications difficult. It was only when Microsoft definitively won the browser wars, and there was a single de-facto browser standard to write to, that this kind of application became possible. And while Firefox has reintroduced competition to the browser market, at least so far we haven’t seen the destructive competition over web standards that held back progress in the ’90s.

We expect to see many new web applications over the next few years, both truly novel applications, and rich web reimplementations of PC applications. Every platform change to date has also created opportunities for a leadership change in the dominant applications of the previous platform.

Gmail has already provided some interesting innovations in email, combining the strengths of the web (accessible from anywhere, deep database competencies, searchability) with user interfaces that approach PC interfaces in usability. Meanwhile, other mail clients on the PC platform are nibbling away at the problem from the other end, adding IM and presence capabilities. How far are we from an integrated communications client combining the best of email, IM, and the cell phone, using VoIP to add voice capabilities to the rich capabilities of web applications? The race is on.

It’s easy to see how Web 2.0 will also remake the address book. A Web 2.0-style address book would treat the local address book on the PC or phone merely as a cache of the contacts you’ve explicitly asked the system to remember. Meanwhile, a web-based synchronization agent, Gmail-style, would remember every message sent or received, every email address and every phone number used, and build social networking heuristics to decide which ones to offer up as alternatives when an answer wasn’t found in the local cache. Lacking an answer there, the system would query the broader social network.
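The address-book design described above amounts to a tiered lookup: check the local cache first, then the web-side message history, then the wider social network. A minimal sketch in Python (all names, data, and the function itself are hypothetical, for illustration only):

```python
def make_tiered_lookup(local_cache, web_history, social_network):
    """Resolve a contact by falling through three tiers -- the local
    address book, a web-side history of past messages, then the wider
    social network -- mirroring the cache-plus-agent design above."""
    def lookup(name):
        for tier in (local_cache, web_history, social_network):
            if name in tier:
                return tier[name]
        return None  # not known to any tier
    return lookup

lookup = make_tiered_lookup(
    {"alice": "alice@example.com"},   # contacts explicitly saved locally
    {"bob": "bob@example.com"},       # addresses seen in past mail
    {"carol": "carol@example.com"},   # friends-of-friends on the network
)
print(lookup("bob"))   # bob@example.com (found in web history, not locally)
```

The point of the design is that the local store is just a cache: misses degrade gracefully into progressively broader, server-side searches.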

A Web 2.0 word processor would support wiki-style collaborative editing, not just standalone documents. But it would also support the rich formatting we’ve come to expect in PC-based word processors. Writely is a good example of such an application, although it hasn’t yet gained wide traction.

Nor will the Web 2.0 revolution be limited to PC applications. Salesforce.com demonstrates how the web can be used to deliver software as a service, in enterprise scale applications such as CRM.

The competitive opportunity for new entrants is to fully embrace the potential of Web 2.0. Companies that succeed will create applications that learn from their users, using an architecture of participation to build a commanding advantage not just in the software interface, but in the richness of the shared data.

Core Competencies of Web 2.0 Companies

In exploring the seven principles above, we’ve highlighted some of the principal features of Web 2.0. Each of the examples we’ve explored demonstrates one or more of those key principles, but may miss others. Let’s close, therefore, by summarizing what we believe to be the core competencies of Web 2.0 companies:

  • Services, not packaged software, with cost-effective scalability
  • Control over unique, hard-to-recreate data sources that get richer as more people use them
  • Trusting users as co-developers
  • Harnessing collective intelligence
  • Leveraging the long tail through customer self-service
  • Software above the level of a single device
  • Lightweight user interfaces, development models, AND business models

The next time a company claims that it’s “Web 2.0,” test their features against the list above. The more points they score, the more they are worthy of the name. Remember, though, that excellence in one area may be more telling than some small steps in all seven.

Tim O’Reilly
O’Reilly Media, Inc., tim@oreilly.com
President and CEO


Read Full Post »

http://www.openeducation.net/2008/01/19/student-shortcomings-anything-but-masters-of-technology/

Student Shortcomings – Anything but Masters of Technology

When it comes to today’s kids and their use of technology, a new report sponsored by the British Library and the Joint Information Systems Committee reveals some very interesting results. The biggest shock to many will be one that is actually quite obvious to those who work in education: today’s students are anything but masters of the technology universe. In fact the report casts major aspersions on the view that teens are better with technology than older adults.

The study sought to determine just how good young people are with information technology, and thereby what schools and libraries should in turn focus on when teaching students. To make their determinations, the researchers analyzed logs of British Library web sites and search tools, along with a “virtual” longitudinal study based on literature reviews from the past 30 years.

Higher Order Application Skills
To absolutely no one’s surprise, youngsters prefer interactive systems to passive ones. Therefore they love technology and yes, they do exhibit fairly strong basic technology skills.

However, the report indicated that these users are anything but “expert searchers.” In fact, the report indicates that younger users have real difficulty choosing good search terms.

The report also revealed the weaknesses created by the desire for interactive devices. Students in fact really, really like activity and therefore like to cut-and-paste. The report notes, “There is a lot of anecdotal evidence and plagiarism is a serious issue.”

A major surprise of the study was that it debunked one oft-heard complaint about young people: the growing belief that technology has made students more impatient and added to their need for instant gratification. The report indicates there is no hard evidence that young people are any more impatient than adults.

Higher Order Thinking Skills
Another key area of deficiency is the ability to evaluate information that they obtain through electronic media. In fact the study indicates that students often fail to evaluate such information at all.

Notes the study, “There is little evidence that this (information evaluation) has improved over the last 10 to 15 years. Early research suggested nearly fifteen years ago (and pre-dating the Internet) that teenagers did not review information retrieved from online databases for relevance (e.g. from online databases) and, consequently, undertook unnecessary supplementary searches when they had already obtained the information required.”

To make determinations, the researchers examined the speed at which young people web search. They found that “the speed of young people’s web searching indicates that little time is spent in evaluating information, either for relevance, accuracy or authority and children have been observed printing-off and using Internet pages with no more than a perfunctory glance at them. Researchers have similarly found young people give a consistent lack of attention to the issue of authority.”

Students also show a real preference for visual information over text. Multimedia is often the preference, though text of course remains incredibly important. On the other hand, one real positive: students are indeed able to multi-task. Notes the report, “It is likely that being exposed to online media early in life may help to develop good parallel processing skills.”

Implications for Education
The report offers a rationale and a hint of where education could come into play. The researchers offer:

Students “need not only a broad understanding of how retrieval systems work and how information is represented within bibliographic or full text databases, but also some appreciation of the nature of the information space, and of how spelling, grammar and sentence structure contribute to effective searches.”

The study suggests that students need “a mental map of how search engines work” as well as a greater vocabulary, giving them the ability to move “from natural language” in search queries so as to “consider synonyms or other alternatives.”

In addition, there must be a focus on information skills. “Clearly people are having great difficulties navigating and profiting from the virtual scholarly environment.” To facilitate this move, we should “start with effecting the shift from a content-orientation to a user-facing perspective and then on to an outcome focus.”

“Perhaps the greatest issue and the one that is the most difficult is to actually get these youngsters to understand their current shortcomings.

There is a big gap between their actual performance in information literacy tests and their self-estimates of information skill and library anxiety. The findings of these studies raise questions about the ability of schools and colleges to develop the search capabilities of the Google Generation to a level appropriate to the demands of higher education and research.”

And as with other learning issues, remediation tends to be far more difficult. “The key point is that information skills have to be developed during formative school years and that remedial information literacy programs at university level are likely to be ineffective.”

Read Full Post »


Launching the Web 2.0 Framework

Alongside our corporate strategy consulting and research work in the media and technology space, Future Exploration Network has created a Web 2.0 Framework to share openly. Click here or on any of the images below to download the Framework as a pdf (713KB).

The intention of the Web 2.0 Framework is to provide a clear, concise view of the nature of Web 2.0, particularly for senior executives or other non-technical people who are trying to grasp the scope of Web 2.0, and the implications and opportunities for their organizations.

There are three key parts to the Web 2.0 Framework, as shown below:

Web 2.0 Framework
* Web 2.0 is founded on seven key Characteristics: Participation, Standards, Decentralization, Openness, Modularity, User Control, and Identity.
* Web 2.0 is expressed in two key Domains: the Open web, and the Enterprise.
* The heart of Web 2.0 is how it converts Inputs (User Generated Content, Opinions, Applications), through a series of Mechanisms (Technologies, Recombination, Collaborative Filtering, Structures, Syndication) to Emergent Outcomes that are of value to the entire community.
Web 2.0 Definitions
* We define the Web 2.0 Characteristics, Domains, and Technologies referred to in the Framework.
* Ten definitions for Web 2.0 are provided, including the one I use to pull together the ideas in the Framework: “Distributed technologies built to integrate, that collectively transform mass participation into valuable emergent outcomes.”
Web 2.0 Landscape
* Sixty-two prominent Web 2.0 companies and applications are mapped out across two major dimensions: Content Sharing to Recommendations/Filtering; and Web Application to Social Network. The four spaces that emerge at the junctions of these dimensions are Widget/component; Rating/tagging; Aggregation/recombination; and Collaborative filtering. Collectively these cover the primary landscape of Web 2.0.

As with all our frameworks, the Web 2.0 Framework is released under a Creative Commons license, which allows anyone to use it and build on it as they please, as long as there is attribution with a link to this blog post and/or Future Exploration Network. The framework is intended to be a stimulus to conversation and further thinking, so if you disagree on any aspect, or think you can improve on it, please take what is useful, leave the rest, and create something better.

In the Framework document we also mention our forthcoming Future of Media Summit 2007, which will be held simultaneously in Sydney and San Francisco this July 18/17. In the same spirit as this Web 2.0 Framework, we will be releasing substantial research, framework, and other content on the Future of Media in the lead-up to our event, continuing the tradition from the Future of Media Strategic Framework and Future of Media Report 2006 that we released last year. Hope this is all useful!


Read Full Post »

Marty Secada (linkedinmarty at yahoo dot com)

Managing Director Broad and Wall Advisors (4,800+) Alternative Investments

see all my questions

What is the future of social and business networks?

What is the future of social and business networking?

It seems that social networks are popping out of the woodwork at a faster pace than ever. New specialty business networks appear weekly, and on Facebook alone groups are created daily, the successful ones offering tremendous value to members. I just came across this article comparing LinkedIn to Facebook and admiring Facebook for its richer environment: http://www.businessweek.com/technology/content/aug2007/tc2007085_238273.htm

Many of LinkedIn’s biggest users have complained about its lack of customer service and barren platform, and many of its power users take LinkedIn to the next level for business development purposes. Is Facebook the future? Has LinkedIn been left in the dust? How can LinkedIn catch up? What are the obstacles that keep a social- or business-community-minded individual from building their own community and competing with LinkedIn and Facebook, or just floating their own alternative without a profit motive? Is there a substantial cost to building a state-of-the-art social network, or is it just a rush for large membership numbers?

Please share your views, we’d like to know and if you are on Facebook, feel free to connect with the many linkedin users there as well.

posted 5 months ago in Business Development, Web Development | Closed | Flag question as…

Answers (43)

Kristian Melhuus Brandser is a 2nd-degree contact

Kristian Melhuus Brandser

Computer professional & entrepreneur

see all my answers

Hi,

We at Community Reborn (a company developing specialized community software for the entertainment industry) believe we will see a shift towards fewer, larger “general” communities, like Facebook for college socializing and LinkedIn for business networking, complemented by a lot of smaller specialized communities with specific content and functionality, e.g. a “fly-fishing community” with a custom fly-fishing-bait-design application.

We have created a common platform for such specialized communities. This gives economies of scale in developing common community functionality and opens the possibility of user and content collaboration between communities.

Facebook’s problem, in my opinion, is that it tries to do both things at the same time. By being a generalist community it will not appeal to specialist community users, and vice versa.

Links:

posted 5 months ago | Flag answer as…

Stephen Bailey is a 2nd-degree contact

Stephen Bailey

Senior Executive Outsourcing Industry

see all my answers

Stephen Bailey suggests this expert on this topic:

Hello Marty,
Thomas Power is one of the world’s foremost experts in online communities, if not the foremost. He is easy to find on Ecademy (which he founded) and easy to contact (just Google him and his telephone number will appear).

Thanks
Stephen

posted 5 months ago | Flag answer as…

Matt Genovese is a 2nd-degree contact

Matt Genovese

Social network builder in Austin, Texas; Hardware verification engineer, software consultant.

see all my answers

Best Answers in: Using LinkedIn (7)see more, Professional Networking (1), Mergers and Acquisitions (1), Government Policy (1), Staffing and Recruiting (1), Viral Marketing (1), Business Development (1), Public Relations (1), Planning (1), Starting Up (1), Wireless (1) see less

Hi Marty,

I understand where you’re coming from. I think LinkedIn does a tremendous job at allowing people to network on a global scale, keep up to date with their contacts, and tap into a large online knowledge base of users.

However, where it falls short is in facilitating local networking. For example, I recently started a LinkedIn group and associated website just for high-tech professionals in my home town of Austin, Texas. The goal is to network within our own geographic region, which has its own benefits (for instance, the ability to physically meet the people you interact with online, and to discuss issues relevant to our locality and professional “scene”). In my mind, that type of networking is very beneficial and much more tangible, yet outside the more global scope of LinkedIn as it stands today. I think of the regional LI group as an extension of LinkedIn, and in turn LI would do well to help bootstrap such initiatives.

Cheers,

Matt

Links:

posted 5 months ago | Flag answer as…

Alastair Bathgate is a 2nd-degree contact

Alastair Bathgate

Managing Director at Blue Prism Limited

see all my answers

I envisage a future where everyone has their own website (currently mostly blogs but this will probably be only one feature of a personal website).
Social and business networking sites will then become little more than URL exchanges, though they will still be able to offer added value in connecting groups, events, etc. The key difference is that they will need to become non-proprietary. Many people have already complained about the walled gardens of networking sites and asked for them to be opened up. The easy way to achieve this is to separate the personal-information layer from the connecting layer. I wonder which networking site will be first to recognise that this is an opportunity, not a threat?

Links:

posted 5 months ago | Flag answer as…

Ravi Shekhar Pandey is a 2nd-degree contact

Ravi Shekhar Pandey

Manager, Syndicated Research, Springboard Research

see all my answers

While there are many aspects to the answer you are seeking, I will just focus on one: the ability of social and business networks to foster a more dynamic and vibrant culture of knowledge sharing and innovation. My understanding is that these networks will play a key role in all future innovations the world will see. For instance, imagine this situation: Company A has just launched a new computer, which is being discussed threadbare by millions of its prospective customers from around the world on a networking site that brings together people with a deep interest in computers. A million customers discussing a new product means a million new ideas for that company.

posted 5 months ago | Flag answer as…

Matthew Gallagher is a 2nd-degree contact

Matthew Gallagher

Vice President, Interactive Creative Director

see all my answers

Best Answers in: Mentoring (1)see more, Computers and Software (1), Web Development (1) see less

LinkedIn is an interesting community, but it is also a closed community. One of the issues with any of these social networks is its detachment from the other networks a member may participate in. For example, one of the technology boards I frequent had a post asking “How web 2.0 are you?”, meaning: which networks do you participate in?

I am active in over a dozen sites (linkedin, flickr, delicious, twitter, etc.) and a member of nearly twice that many. Some are business experiments, while others are social experiments.

Facebook has an open API, as well as competitions inviting programmers to develop applications for Facebook, integrating it with the way people use the whole web, not just social connections. Facebook also has some legal troubles on the horizon. If it can weather them, its platform is far superior to competitors such as MySpace.

Netvibes is another site that has gained in popularity because it allows users to customize their experience to how they wish to receive data.

In my opinion, success will hinge not only on expansion of the services offered, but on integration with the visitor’s work-flow.

posted 5 months ago | Flag answer as…

Matthew Zachary is a 2nd-degree contact

Matthew Zachary

Founder & Executive Director, I’m Too Young For This! + Advisor, Google Health

see all my answers

Best Answers in: Viral Marketing (1)

The next two big things in social and business networking, in my belief, will first be the convergence of consumer health (with various disease verticals), followed by aggregators such as early startup SocialURL.com. User profiles will become mini-wikis with branches into all sectors and sociological components of “me-generation” metrics. Google Health, for which I am an advisor, along with other emerging enterprises such as Steve Case’s Revolution Health, demonstrates a clear direction of where the next big thing is coming from.


Matthew Zachary
11-Year Young Adult Survivor
Founder, Executive Director
i[2]y, I’m Too Young For This!
Advisor, Google Health
w: 877-735-4673 x701
f: 718-745-1928
e: MZachary@ImTooYoungForThis.org

I’m Too Young For This! is a global support community for young adults
affected by cancer who get busy living and rock on. We use music to make it
hip to be a survivor and talk about stupid cancer by providing ‘one-stop’
access to hard to find resources, peer support and social networks.

Got cancer? Under 40? Sucks, huh? Get busy living!

Website: http://ImTooYoungForThis.org
*TIME MAGAZINE TOP 50 WEBSITES, 2007*

Links:

posted 5 months ago | Flag answer as…

Bart Suichies is a 3rd-degree contact

Bart Suichies

New Media Strategist

see all my answers

I believe that the next step in social media will be ‘distributed social networks’, where there’s not one site or platform that will become the winner, but instead all individuals will have their own networks with them at all times.

Users are going to decide which network they need (adhocracy) at the moment they need it. Current networks like Facebook and LinkedIn will become obsolete or turn into open storage facilities for contacts. A standard identity protocol (like OpenID) will arise for authentication in any given network, and XML, microformats, etc. will do the rest.

Adhoc networks will be created on demand on any device (we’ll see a strong rise in mobile – contextual – social networks) that communicates an open/standard language. This will give users an unprecedented level of privacy, flexibility and value.
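One of the building blocks mentioned above, microformats, is simply agreed-upon class names embedded in ordinary HTML, so contact data can move between networks without a proprietary API. As a hedged sketch using only Python's standard library, here is how a parser might lift the "fn" (formatted name) property out of hCard-marked HTML; the snippet follows the hCard convention, but the parser itself is illustrative:

```python
from html.parser import HTMLParser

class HCardParser(HTMLParser):
    """Collect 'fn' (formatted name) values from hCard-marked HTML.
    hCard uses agreed class names in plain HTML, so any parser can
    lift contacts out of any page that adopts the convention."""
    def __init__(self):
        super().__init__()
        self._in_fn = False
        self.names = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; look for class="... fn ..."
        classes = dict(attrs).get("class", "").split()
        if "fn" in classes:
            self._in_fn = True

    def handle_data(self, data):
        if self._in_fn:
            self.names.append(data.strip())
            self._in_fn = False

SNIPPET = '<div class="vcard"><span class="fn">Jane Doe</span></div>'
p = HCardParser()
p.feed(SNIPPET)
print(p.names)   # ['Jane Doe']
```

The same open-markup idea underlies the "distributed social networks" vision: identity via a shared protocol, data via shared formats.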

posted 5 months ago | Flag answer as…

Danny Small is your connection (1st-degree)

Danny Small

Motivational Change Consultancy – Business & Personal Support [danny@kelta-associates.co.uk]-LION

see all my answers

Best Answers in: Using LinkedIn (5)see more, Mentoring (2), Career Development (1), Professional Networking (1), Personal Debt Management (1) see less

Hi Marty,

The future is looking good for networks; it’s all about the people and their expectations. Like most developments we humans have created, we all get on board and, as time goes by, we make more demands and our needs and desires grow.

My son uses a social network; he is in touch with friends he can actually meet as well as virtual ones. It depends on what you need to get out of the network; it is, after all, just a tool for connecting and communicating, or simply a way to work out how to make money.

If people find that the ‘tool’ does not serve a purpose, they just discard it and find something more useful.

It is much the same with a TV: it started off small and has developed along the way, but as it changes, so do the things around it, and we get comparisons, variables and alternatives.

I think the future is, as you have said, “in the distance,” and we will change as the future changes, or we will get left behind.

I have not been involved with LinkedIn as long as some people, but I’m learning fast. How long have you been on board, and do you think there is a point where it just stops satisfying expectations?

Good question.

Danny

posted 5 months ago | Flag answer as…

Michael Stephen Ruiz is a 2nd-degree contact

Michael Stephen Ruiz

Entrepreneurial, Bottom-line Visionary with Multiple Talents & Resources in High Technology & Security, CIPP

see all my answers

Best Answers in: Government Policy (1)see more, Personnel Policies (1) see less

Marty, that is an excellent, timely question. This is the answer:

http://www.cio.com/documents/webcasts/socialtext/wiki_workplace/

Mass collaboration inside and outside the present corporate structure to create, develop and facilitate products and services for the 80M 13-29-year-old individuals who are the “masses” at this point in time.

The development of wikis is unsatisfactory to me at this point: no security protocols, no flexibility, and no dynamic liquidity. I want to move, shape, and absorb my resources in real time instead of directing them in a two-dimensional manner. The company that creates the new advanced wiki will surely create a paradigm-shifting event.

Michael Stephen Ruiz also suggests this expert on this topic:

posted 5 months ago | Flag answer as…

Diane Danielson is a 3rd-degree contact

Diane Danielson

CEO, downtownwomensclub.com

see all my answers

Best Answers in: Mentoring (2)

Very good question. My experience with social networking for business is that it’s more “task-oriented” and less “social” than the name connotes. Hence, I LOVE this Answers feature (I use the Harvard Start-ups Yahoo group similarly), and that is really my main use of LinkedIn, other than running a LinkedIn group for my business. Per an earlier answer, I have found that I have better relationships with my “blogging buddies” than with individuals on any “social network.” But often those relationships have involved introductory phone calls or face-to-face meetings. However, I confess to being an “older Gen X’er,” so many of my peer and boomer contacts (even if they are bloggers) are less likely to be on social networks, and still prefer the phone.

posted 5 months ago | Flag answer as…

Ido Goldberg is a 2nd-degree contact

Ido Goldberg

IT Manager at Kidaro

see all my answers

Hi,
I think that is a very interesting question, and I’ll try to keep my answer simple because it is a very philosophical one.
I think that Web 2.0 and the future Web 3.0 (and their vision) made us, the internet users and companies, realize that networking, both social and business, is the main reason we actually use the internet. In the “old days” before 2000, we logged on to get information on a service or product at a certain time, e.g. movie tickets or train schedules.
With Web 2.0 and 3.0 it has become clear that the internet has grown into communities; it is no longer just a big shopping mall or information counter, but a place where people do a lot of socializing and business in an infinite ocean of information, and the smart thing to do is work out what YOU want and when, so useful information can be provided.
I think the real answer is how the internet will allow us to move beyond the “cyber” way of doing things, on Web 4 maybe :).
But for now networking is the real way we do business, so there is a great future in it.

posted 5 months ago | Flag answer as…

Eric Mariacher is your connection (1st-degree)

Eric Mariacher

Embedded Software Manager ▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄▀▄ [eric.mariacher@gmail.com] LION/MyLink500.com

see all my answers

Best Answers in: Using LinkedIn (9)see more, Web Development (2), Offshoring and Outsourcing (1), Project Management (1) see less

The future is Ning, where you can create your own social network.

Links:

posted 5 months ago | Flag answer as…

Mike Myatt is a 2nd-degree contact

Mike Myatt

Managing Director at N2growth, America’s Top CEO Coach and author of Leadership Matters…The CEO Survival Manual

see all my answers

Best Answers in: Starting Up (2)see more, Career Development (1), Job Search (1), Ethics (1), Professional Networking (1), Staffing and Recruiting (1), Advertising (1), Events Marketing (1), Internet Marketing (1), Business Development (1), Sales Techniques (1), Organizational Development (1), Project Management (1), Retirement and Estate Planning (1), Business Plans (1), Enterprise Software (1), Computers and Software (1) see less

Hi Marty:

I hope all is well…Okay, here’s my take…I believe social networking will follow the same macro and micro economic trends consistent with new technology/market genres. The first movers will be pushed to adapt and evolve by the fast followers, and frothy capital markets interest will fuel high-velocity growth until the reality of business sets in…

This vertical will go through a harsh consolidation phase where flawed business models are weeded out, and the vertical as a whole will be strengthened as the strongest brands survive and prosper. Sites like LinkedIn cannot rest on their laurels; they must begin to pay attention to member needs and use a focus on member centricity to drive innovation. Your question addressed the cost side of developing social media, and part of the problem is that there is not a significant cost barrier to entry: a simple mash-up social community can be launched in a matter of weeks. The challenge is in creating value and in attracting and retaining members. Even a large member base can erode or churn if members don’t perceive a commitment on the part of the principal owners to continue adding value.

Links:

posted 5 months ago | Flag answer as…

Zygmunt Lozinski is a 3rd-degree contact

Zygmunt Lozinski

Telecom Industry Technical Leader at IBM

see all my answers

I believe social networks are here to stay, but that we will see changes in how they are used and how they are designed.

Three trends:

1. Social networks will become platforms, with open APIs which allow new services. The value of these networks will then be driven by the new applications and services they support. We have seen this already in Facebook, and Second Life, and MySpace has also announced it will create open APIs.

2. The opening up of the data that underpins the social network. In effect enabling people to create data-mashups.

3. Changes in usage patterns. The question here is whether we get convergence on a small number of massive platforms, or divergence onto multiple platforms. If individual social networks allow cross-network linking, there is no reason for convergence on a single platform. (There are over 700 phone networks worldwide, but you can call from any one to any other.)

posted 5 months ago | Flag answer as…

Mark Wayman is a 2nd-degree contact

Mark Wayman

Co-founder – Social Gears (www.socialgears.com)

see all my answers

Hello Marty,

A recent Bear Stearns report values Facebook at an astonishing $4.5 billion to $7 billion. MySpace, Facebook, LinkedIn and many more have proven social networks to be a viable business model, and have also educated their respective user bases on the concept.

As to the future, I have to agree with Kristian that the new opportunities exist for more focused communities that include the core “general” functionality and extend it with features for their specific niche.

We are currently working on exactly this for the beauty industry around the http://www.salons.com domain name. Compared to a few years ago the startup cost for this venture is substantially less. The real challenge for us is awareness and creating the “spark” that keeps our customers coming back.

Cheers, Mark.

Links:

posted 5 months ago | Flag answer as…

Michael Cayley is a 3rd-degree contact

Michael Cayley

Actualizer: strategy, brands, new media

see all my answers

In the future all businesses will employ social network platforms to help them leverage these networks in the development of corporate social capital. Social capital is the aggregate benefit of an individual’s social networks (and a corporation is a kind of individual). For corporations this means employing platforms that empower and integrate constituencies such as investors, analysts and media, suppliers (think WalMart), employees and, above all, customers (particularly as volunteer product development, marketing and sales forces). Each individual within the social network can now, in theory, be as powerful as NBC.

Corporate valuation of many companies is already far more attributable to corporate social capital than the traditional notion of brand. Take Google, Amazon, YouTube and MySpace as a few examples. As corporate social capital becomes more apparent as an authentic source of corporate valuation, in the same way that brand has since the late 1980’s (when the Barbarians were at the Gate), more companies will invest in the development of their social networks (first through tech platforms, then through initiatives that mobilize the constituents enabled by the platforms).

Attention Getting Bold Prediction: Within the next 10 years corporate social capital, an authentic, tangible asset, will account for more corporate valuation than brand (intangible, conceived to be manipulative) in more than 50% of companies.

Once the motive for one of mankind’s most efficient forms of organisation is firmly established (i.e., social capital as a source of valuation for corporations), lots of cool things are going to start to happen. Companies will reinvest in employee loyalty in new and exciting ways, and corps may find that entering markets with low social capital (no democracy, little transparency, corruption) and being a positive force for change is a source of high corporate valuation. McLuhan’s tribal beat and global village (frightening notions, according to him) are becoming reality as traditional broadcast media is replaced by a totally interactive, high-bandwidth format that we are only beginning to discover and do not yet understand.

posted 5 months ago | Flag answer as…

Marty,

a few months ago I posted some thoughts under the title: “Corporations, networks … what next?” Please use the link below:

Links:

posted 5 months ago | Flag answer as…

Carrie Bedingfield is a 3rd-degree contact

Carrie Bedingfield

Owner of B2B Marketing and Internal Comms agency, Onefish Twofish

see all my answers

Hi Marty – it’s a really great question.

I think the big issue is around revenue streams. Will social/business networks remain free, for the most part, or will they start to generate their primary income from subscriptions and services? I can’t be the only person who feels that it’s now becoming quite expensive to do simple things on LinkedIn! The business model seems to be ‘get them hooked while it’s free, then start to add in the charges’. I’m a member of a couple of other networks which have moved from free to far-from-free too quickly for my liking.

I’m not sure what the price elasticity of online networks is – at what point do profits start to fall as prices increase? At what point is loyalty significantly eroded?

My prediction is that one major player in each field (social and business) will continue to offer a free, fully functional service and really start to monopolise the market. They will make their money on alternative services and clever diversification which permeates other online and offline industries. This is how the search engine market evolved – perhaps a good indicator of what’s to come for online networking?

Carrie Bedingfield
http://www.onefishtwofish.co.uk

posted 5 months ago | Flag answer as…

Laban Johnson is a 2nd-degree contact

Laban Johnson

Founder, the Laban Johnson Group -“Improving the Quality of Life”

see all my answers

I’d say the future of social and business networking is what we make of it!

posted 5 months ago | Flag answer as…

Sandra Voss is a 2nd-degree contact

Sandra Voss

Realtor at Michael Saunders & Co and Owner, Sandra Voss, Realtor

see all my answers

Best Answers in: Facilities Management (1)see more, Economics (1), Compensation and Benefits (1), Internationalization and Localization (1), Commodity Markets (1), Equity Markets (1) see less

I believe social and business networks are going to continue to grow and meld. Every salesperson knows the value of referred leads. How else will we grow them except through our personal contacts and social and business networking? However, very few salespeople know how to generate a consistent supply of referred leads, and it seems to me that the IT sector is taking the lead. In my business, real estate, as in other service industries, relationships are the cornerstone of a successful business. The methods employed to generate leads, however, are often non-relational: cold calling, door knocking, direct mail, advertising, etc. It is amazing how many folks out there are still stuck in that rut, doing ONLY those things. If you are in a service business, building relationships through excellent personal service plus growing your social and business network is the number one way to increase business. But you must have a system in place for balancing these. For me, I have to intentionally generate those referrals. A continuous stream of referrals doesn’t just happen; it is created and cultivated, including through social and business networking.

Sandra Voss also suggests these experts on this topic:

posted 5 months ago | Flag answer as…

Sinnary Sam [LION] is a 2nd-degree contact

Sinnary Sam [LION]

Founder & CEO

see all my answers

Best Answers in: Starting Up (1)see more, Using LinkedIn (1) see less

Marty, thank you for asking this question. The answers give me a wealth of information that I can now research as well. I am still not sure what the true benefits of LinkedIn will be for me. I have found some past co-workers and contacts. Beyond that, I would like something that works with my organization so that my members are profiled and can be filtered as such, yet still be connected to the entire network.

Sinnary Sam
http://www.FOREnetworking.com

Links:

posted 5 months ago | Flag answer as…

John Inman Ed.M. PHR is a 2nd-degree contact

John Inman Ed.M. PHR

{LION} Expert in Human & Organizational Development {jinman@wetherhaven.com} {MyLink500.com}

see all my answers

Best Answers in: Career Development (1)

I seem to be invited to an ever-increasing number of obscure networking sites. My advice is to focus. Unless one has endless amounts of time, I do not see how to keep up with too many sites. I am on at least 6 main sites but am only really active on LinkedIn. My profiles are complete on each site I join. And I am not sure the question is whether LinkedIn can catch up to Facebook. I do have a Facebook account, but it does not feel streamlined for business use to me. Maybe it is just me. In business I do not want too much cute stuff out there. My LinkedIn profile is my calling card, my online resume, my vita. I think there is a risk in making yourself look too cute, at least for the current mainstream.

This could certainly change within a handful of years, but by then LinkedIn will have continued to innovate and meet the needs of a changing market, at least for the core business community.

Just thinking out loud. Great questions.

John

posted 5 months ago | Flag answer as…

Peter Nguyen is a 2nd-degree contact

Peter Nguyen

Editor in Chief, CareerKnowledge.net (omnidigitalbrain@yahoo.com)

see all my answers

Best Answers in: Career Development (15)see more, Using LinkedIn (8), Business Development (6), Starting Up (6), Professional Networking (4), Staffing and Recruiting (4), Organizational Development (4), Small Business (4), Mentoring (3), Ethics (3), Planning (3), Education and Schools (2), Advertising (2), Business Analytics (2), Change Management (2), Customer Service (1), Job Search (1), Government Policy (1), Intellectual Property (1), Internet Marketing (1), Public Relations (1), Project Management (1), Product Design (1), Business Plans (1), Information Storage (1), Telecommunications (1), Web Development (1) see less

Einstein said our age is characterized by “profusion of means and confusion of ends.”

I think it applies to new technologies and applications, including social networking sites.

The key question is, Why do you need to connect to other people? What is the message you carry, or what is the value you offer?

Technology cannot save people or make any person successful. Only clarity of purpose and constancy of aim can.

posted 5 months ago | Flag answer as…

Hans Sluijter is a 3rd-degree contact

Hans Sluijter

Vice President at ABN AMRO- Business Manager Services Western Europe

see all my answers

Hans Sluijter suggests this expert on this topic:

posted 5 months ago | Flag answer as…

Glenn Dhooghe is a 2nd-degree contact

Glenn Dhooghe

CTO at Emmis Belgium Broadcasting. Expert in PC hardware and PC-audio solutions.

see all my answers

Hi Marty,

I’ve been giving this topic some thought too, and it was the root for many interesting discussions. Thank you for bringing it up here!

There are many insightful and viable answers here already.

I like Zygmunt Lozinski’s answer, and would like to add my 5 cents to his answer:

APIs will allow integration with other resources. As informatics finds its way into every place (media stations at lifestyle locations: bars, hotels, restaurants, shops; the home; the car; …), I believe integration with these other resources will be a huge step forward. You can track people who have the same interests as you – people you could have met in real life but just didn’t bump into.
In these locations, it could also be possible to use your network. It could well happen one day that your PDA warns you that a contact is nearby, so you could finally meet that guy whose interesting blogs you’ve been reading for weeks – because he’s sitting 10 feet from you, sipping his cocktail.
As Matthew Zachary said, connections with health administration could allow you to contact people with similar medical conditions. You are no longer facing situations alone!

Stores could remember you and your profile, so when you pass by the store, it could display advertisements that are specifically suited to your unique taste – based on previous purchases, or with which lifestyle groups you’re affiliated.

Cross-networking will eliminate the need to keep track of 20+ sites. If you like the more localized layout of another community – use that with the crosslinked databases. If you’re looking for worldwide contacts, come here. The benefits will be more emphasized, and the rough edges flattened.

Clarification added 5 months ago:

I have completed some realizations in this sector. If anyone is interested in starting business ventures, or exchanging ideas, feel free to contact me!

posted 5 months ago | Flag answer as…

Paul Pajo is a 2nd-degree contact

Paul Pajo

Regional Sales Manager for Emerging Markets at Asia Payment Technology Corporation

see all my answers

Best Answers in: Change Management (1)see more, Organizational Development (1) see less

It will be integrated into secure online-payment gateways as well as SMS. I think that’s the “last mile” for social media optimization.

posted 5 months ago | Flag answer as…

David Burta is a 3rd-degree contact

David Burta

Owner, ProVent Associate

see all my answers

One core component of the future of social and business networks is the addition of CONTENT and interaction around that content. An emerging example is LatitudeU (http://www.latitudeu.com), a public learning forum where anyone can post learning materials and anyone can take advantage of them.

Links:

posted 5 months ago | Flag answer as…

Marc Rapp is a 2nd-degree contact

Marc Rapp

Creative/New Business Development at Renaissance Creative

see all my answers

Best Answers in: Advertising (1)see more, Public Relations (1) see less

The future, in my opinion, is a desktop widget with a video avatar, a messaging system with a drop-down menu of contacts, public notifications, updates, etc. All existing networks that I belong to can be easily accessed through icon-driven navigation. I can also drag and drop links, files and text documents onto other users’ avatars and have the information sent. I may choose to visit the ‘hub’ (main website) for more information and functionality throughout the day. However, I should not have to. ‘Getting online’ will become a secondary step to ‘connecting’ as re-skinnable widgets become our main working environment, mainly because they offer companies an opportunity to brand themselves and customize the windows we navigate the web in.

This, of course, will replace the browser and its triple-interface experience. Skitch is on its way to something similar.

In the immediate term, I would suggest that we bear a few things in mind. LinkedIn is not a social network. It is a business network. Start there.
Freelancing services.
Invoicing.
Transactions.
Company profiles.
Portfolios.
Audio/Video resumes.
Plaxo style contacts and connections.
Ziki style feeds and updates.

Social networks are capable of becoming the new homepage for users.
Treat it that way. One interface to rule them all.

Just a few thoughts.

Clarification added 5 months ago:

Also, let us not forget that we live and breathe in this environment. A great deal of education is needed on the consumer/user side. The more layered these systems become, the more likely they will be ignored by prospects. Some social networks are dangerously close to losing their context in spite of their content.

posted 5 months ago | Flag answer as…

Robert Hahn is a 3rd-degree contact

Robert Hahn

VP, Marketing at OnBoard

see all my answers

Interesting question.

Since I’m implementing a social network for my enterprise as I write this, I’m somewhat biased in the direction of private social networks. For a variety of reasons, corporations simply cannot use a fully public social network for their internal networks – which, I argue, are far more important for day-to-day productivity than a bunch of people out there on the web.

The next big thing, I hope, is a common set of data standards that will allow all social networking sites/tools to share data with each other. Single point of data entry is an absolute requirement if this social networking thing is going to expand.

So for example, we will have some 7,000 members within the Coldwell Banker Commercial network who are busy doing deals with each other, networking within the company, etc. If we could interface directly with LinkedIn or Facebook or whatever, from a single data-entry source, that would elevate the entire industry space to the next level. Without data sharing, we’ll all be stuck in our individual silos. That’s just a fact.

Think of something like Trillian that aggregates multiple IM services, but works in the social networking arena. That is what this industry space needs.
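The Trillian-style aggregation described above can be sketched as an adapter pattern: one client-side hub talks to several networks through a shared interface, so contact data is entered once and fanned out everywhere. This is purely illustrative – the network classes below are stand-ins, not real LinkedIn or Facebook APIs.

```python
# Sketch of a "Trillian for social networks": a single aggregator pushes
# one contact record to every network through a common adapter interface.
# All class names and behaviors here are hypothetical.
from abc import ABC, abstractmethod


class NetworkAdapter(ABC):
    @abstractmethod
    def push_contact(self, contact: dict) -> str:
        """Send one contact record to this network; return its remote id."""


class FakeLinkedIn(NetworkAdapter):
    def push_contact(self, contact):
        return f"linkedin:{contact['email']}"


class FakeFacebook(NetworkAdapter):
    def push_contact(self, contact):
        return f"facebook:{contact['email']}"


class Aggregator:
    def __init__(self, adapters):
        self.adapters = adapters

    def add_contact(self, contact):
        # Single point of data entry: fan the record out to every network.
        return [a.push_contact(contact) for a in self.adapters]


hub = Aggregator([FakeLinkedIn(), FakeFacebook()])
print(hub.add_contact({"name": "Ada", "email": "ada@example.com"}))
# ['linkedin:ada@example.com', 'facebook:ada@example.com']
```

The hard part, as the answer notes, is not the client code but agreeing on the shared data standard each adapter would map to.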

-rsh

posted 5 months ago | Flag answer as…

Ofer Vilenko is a 2nd-degree contact

Ofer Vilenko

Acquisitions Manager for a Manhattan investments firm

see all my answers

Hi Marty,

I think Matt Genivese was right on the money. The need is out there and someone has got to satisfy it. Whether it will be LinkedIn or another service I really can’t tell, but speaking from a business/professional point of view, people would like to have both their global and local networks to work with. A simple matter of convenience.

posted 5 months ago | Flag answer as…

Alex Kent is a 2nd-degree contact

Alex Kent

Corporate and Investment Real Estate Strategic Planning and Transaction Management at JULIEN J. STUDLEY

see all my answers

eHarmony-like psych interviews combined with Google Desktop search helps people to find business partners, new hires, and even customers that share their values and mindset.

Who’s in the best position to make this happen? Google, of course.

posted 5 months ago | Flag answer as…

Andy Lopata is a 2nd-degree contact

Andy Lopata

Business Networking Strategist

see all my answers

At the moment, ‘social networking’ is a catch-all term that covers a multiplicity of approaches. That is why LinkedIn and Facebook get mentioned in the same sentence despite performing completely different functions. A host of ‘business network dwellers’ are going over to investigate how to make the most of Facebook and finding it a strange environment, because it’s designed for social interaction rather than referral generation and profile building.

That’s not to say that you can’t do business on Facebook; but it does take time to work out how to leverage it most effectively.

I think that social media will begin to separate into distinct camps:

1 – Truly ‘social’ networks, like Facebook, Bebo and Friendster. The prime users of these sites will be a younger demographic using them to keep in touch with friends, arrange parties and share photos and videos.

2 – ‘Social Business’ networks, like Ecademy and LinkedIn. Although Ecademy has a social element, it is still designed for and populated by business people, predominantly small business owners. Earlier responses to this question mentioned the need for locally based social networks and referral generation; we are currently in the process of launching a new ‘social business network’ at http://www.wordofmousenetwork.com. The model will be much more locally based, bringing together businesses for referral generation in the way BNI does offline.

3 – ‘Private Internal Social Networks’ – as IBM already runs with the ‘Blue Pages’, other large organisations will slowly recognise the need to find an effective way to share expertise across a large, global workforce. Social networks will provide the best solution, but overcoming both security and efficiency fears will be the key to the speed of this development.

4 – Brand Networks. Bigger brands are starting to recognise the need not only to engage with their consumers but to involve them. The Guardian newspaper has just launched a social network in the UK, and other British brands are looking at the medium. In both the UK and US (and, I am sure, elsewhere), politicians have launched their own social networks.

I am sure there are a number of further developments ahead for social networking, including niche networks and consumer-oriented networks. It’s a question of when people will start to treat these as distinct platforms rather than meshing everything together.

Links:

posted 5 months ago | Flag answer as…

Gregg Butler is a 3rd-degree contact

Gregg Butler

Vice President at n-tara, inc.

see all my answers

Perhaps you will find this study interesting reading, Marty. I did. It is generously made available to all of us by Heidi Browning, a senior executive at Fox Interactive. http://www.myspace.com/neverendingfriending

posted 5 months ago | Flag answer as…

Johan Vermij is a 2nd-degree contact

Johan Vermij

Networked Virtual Environments & Innovative Projects

see all my answers

Best Answers in: Professional Networking (1)see more, Business Development (1) see less

The future of the social and business networks is integration.

The web is used in 3 basic areas of life:
1. Private
2. Social
3. Professional

Each of these areas has different needs, as well as a large overlap: the same functionality, but preferably in separate streams.

More and more, the web is used as an environment for sharing and non-localised access, moving applications and files (documents, videos, images) from your desktop to the web.

Most web 2.0 sites now focus either on social or on professional networking. None take into account that people nowadays may have different ‘identities’ on the web. The web 2.0 killer app should offer a single point of entry to the web and should be able to deal with multiple identities.

Alongside your real You, it will offer the option to add various a.k.a. profiles. From your FriendFactory address book you can select who (individually or group-wise) can see which profile.

Aside from managing your individual contacts, your friends need to be categorised. Your basic networks are:
1. Family
2. School Friends
3. Professional Contacts
These can be subdivided into primary school, secondary school, college, etc., as well as colleagues and clients on the professional side. For each of these networks you will be able to set permissions as to who can see which part of you.

Aside from the basic layout of your network, it’s time to get in touch with them. Import your email addresses from IE, Thunderbird, Hotmail, Gmail, etc. and invite them to join your network.

The ultimate web 2.0 integration site will also have room for sharing media, documents, feeds and tags.

(needs a bit more thought, but the best killer app design I could come up with in 5 minutes)
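The per-group visibility idea above can be sketched in a few lines: one underlying profile, with each network category granted access to only some fields. This is a toy sketch under that design, and every name in it (Profile, IdentityManager, the field names) is hypothetical.

```python
# Sketch: one person, several network groups, and each group sees only
# the profile fields it has been granted. All names are illustrative.

class Profile:
    def __init__(self, **fields):
        self.fields = fields  # e.g. name, email, employer, hobbies


class IdentityManager:
    def __init__(self, profile):
        self.profile = profile
        self.grants = {}  # group name -> set of visible field names

    def grant(self, group, *field_names):
        """Allow a network group to see the given fields."""
        self.grants.setdefault(group, set()).update(field_names)

    def view_for(self, group):
        """Return only the fields this group is allowed to see."""
        allowed = self.grants.get(group, set())
        return {k: v for k, v in self.profile.fields.items() if k in allowed}


me = Profile(name="Jo", email="jo@example.com", employer="Acme", hobbies="golf")
ids = IdentityManager(me)
ids.grant("family", "name", "email", "hobbies")
ids.grant("professional", "name", "employer")

print(ids.view_for("professional"))  # {'name': 'Jo', 'employer': 'Acme'}
print(ids.view_for("strangers"))     # {} – no grant, no visibility
```

A real implementation would also need the reverse direction – per-group default-deny rules and audit of who viewed what – but the core is just this mapping from group to allowed fields.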

posted 5 months ago | Flag answer as…

Brent Williams is a 2nd-degree contact

Brent Williams

Chief Technology Officer at Anakam

see all my answers

Best Answers in: Enterprise Software (1)see more, Information Security (1) see less

I am seeing more and more concern over the security and vulnerabilities of identities within social networking services. As these services expand and overlap, more and more hackers are finding ways to exploit them for personal gain, and these exploits tend to come through the falsification of credentials or identity. We are seeing greater and greater interest in mass-scale, low-cost authentication solutions that can counter these vulnerabilities and dramatically improve your confidence that you are doing business with the people you intend to interact with.

Links:

posted 5 months ago | Flag answer as…

Seref Turkmenoglu, CMA is a 2nd-degree contact

Seref Turkmenoglu, CMA

Finance Professional (oil & gas)

see all my answers

It seems to me that LinkedIn, with its direct focus on business, is much more viable than Facebook. Facebook is richer in features, but its broad target crowd and its effort to cover everything from dating to business is not working for me.

posted 5 months ago | Flag answer as…

Vikram. Singh2 is a 3rd-degree contact

Vikram. Singh2

Commercial Director – Business Jets at KLM Royal Dutch Airlines

see all my answers

Last year we created 2 business communities and 1 lifestyle community in the form of Club Africa, Club China and FB Golf Club. It’s been exciting to see the response and challenging to implement the learnings. As Diane and several others mentioned below, the web meeting point is just the start. These clubs/networks and communities need to be supported and nurtured by offline platforms.
So I can see that in the future there will be some melding of these communities, not only online but also offline.
The guys who can pull that off will in fact take it to the next level!

Links:

posted 5 months ago | Flag answer as…

Scott Steimle is a 2nd-degree contact

Scott Steimle

Manager, Lotus Flower Trading, LLC

see all my answers

Best Answers in: Databases (1)see more, Information Storage (1), Software Development (1) see less

Networking is all about building opportunities. Even the most altruistic of us will receive opportunities in response to generosity, whether prompted or not. In line with other responses, I see two main categories of networks – personal and professional. Just like dating and career sites, members of these communities will seek to be matched with, or referred to, the people, services or businesses they want relationships with. Social networking sites that facilitate this will take the lead.

Links:

Clarification added 5 months ago:

From a technological standpoint, using a rule-based reasoner that matches RDF-based profiles is one approach.
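To make the rule-based idea concrete: profiles can be represented as RDF-style (subject, predicate, object) triples, and a matching rule then pairs people who share a property. The sketch below is pure Python for illustration – a production system would use an actual RDF store and reasoner (e.g. SPARQL queries over FOAF data), and all the names and predicates here are made up.

```python
# Toy rule-based matcher over RDF-style triples. The rule implemented:
#   match(X, Y) if interest(X, I) and interest(Y, I) and X != Y.
# Data and predicate names are hypothetical.

triples = [
    ("alice", "interest", "sailing"),
    ("alice", "seeks", "partner"),
    ("bob",   "interest", "sailing"),
    ("carol", "interest", "chess"),
]


def shared_interest_matches(triples):
    """Return unordered pairs of subjects that share an 'interest' object."""
    # Index: interest -> set of people who declared it.
    interests = {}
    for s, p, o in triples:
        if p == "interest":
            interests.setdefault(o, set()).add(s)
    # Fire the rule: every pair within one interest group is a match.
    return {(a, b)
            for people in interests.values()
            for a in people for b in people if a < b}


print(shared_interest_matches(triples))  # {('alice', 'bob')}
```

The same shape generalises: each additional rule is another indexing pass over the triples, which is essentially what an RDFS/OWL reasoner automates.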

posted 5 months ago | Flag answer as…

Rory Murray is your connection (1st-degree)

Rory Murray

Consultant specialising in Strategic Transformation for Telecoms and other complex environments

see all my answers

It really depends on which features best suit the individual’s purposes in achieving their objectives…

Some people like the personal interaction provided by sites like Facebook and Ecademy (especially important for freelancers, who may be sitting at home working without the social interaction of an office environment).

Others like the database aspects of LinkedIn – the ability to build a searchable network of verifiable individuals, in order to find the knowledge, skills and experience you need to further your business, but without the same level of distracting chit-chat. Xing, on the other hand, seems to have found a balance between the two and is succeeding for these reasons.

Other networks are springing up to serve niche markets, and many are jumping on the bandwagon to try to make money with no obvious unique features to attract a large enough audience; they are likely to collapse relatively quickly as a result.

The real power is in combining the most valuable attributes of these platforms to create a more 3-D approach – I use Ecademy, LinkedIn, Xing and Facebook together and manage my contacts using Plaxo. This gives me a much richer perspective on my contacts and allows me to build “trusted relationships” with real people (whom I may never be fortunate enough to meet in person), because we get to know each other to an extent that means we can create referrals and recommendations for each other.

I have written about this concept, which I call Return on Relationships (“ROR”); there’s a link to my blog below if you’re interested in reading further.

Links:

posted 5 months ago | Flag answer as…

Saurabh Oberoi is a 2nd-degree contact

Saurabh Oberoi

Sales & Marketing – North India

see all my answers

Best Answers in: Business Development (1)see more, Lead Generation (1) see less

The future of any networking site lies in the benefit it offers the end user. The benefit could be in terms of knowledge, money, etc.

As long as the networking site addresses this and is free of charge, supported by a good revenue model, it will be successful.

posted 5 months ago | Flag answer as…

Brian Ehrlich is a 2nd-degree contact

Brian Ehrlich

Co-Founder, Honeydo.com

see all my answers

Some great responses have been posted to a very timely question.

In my opinion the next evolution will be a blending of both social and business networks: communities in which we increasingly rely on our social connections to accomplish tasks of commerce. Our society inherently trusts the opinions of fellow consumers much more than “expert” advice. We’re taking neighborly advice and expanding it exponentially across the nation and the globe. Brands of all sizes and types of business will begin to live and die on the support of these communities. The viral power these networks represent can only increase.

The evolution will eventually lead, as others here have stated, to “my profile” becoming my portal to the web, and in the coming years that profile will follow us everywhere via the mobile platforms being created.

These are some very interesting times in the evolution of communication!

posted 5 months ago | Flag answer as…

Sadiq Baig is a 2nd-degree contact

Sadiq Baig

Marketing; International Trade; Virtual Assistance; Representative Services

see all my answers

Best Answers in: Customer Service (1)see more, Foreign Investment (1), Computers and Software (1) see less

Though business networking is not a new phenomenon, online networking itself is still emerging and has rightly been called a ‘social technology’; it can work wonders in various realms, including business.

Technically, it depends on good websites with the right functionality, search-engine friendliness, and keen, knowledgeable users.

Practically, though almost all the available social forums provide for various activities, including business, forums focused on a particular business field – say, worldwide importers of a certain product, or consulting services in certain activities – could serve business people more directly within broad classifications.

Generally, networking as a social technology is capital-intensive, and expertise and concentrated hard work also matter. Hence it is prone to be controlled by money, which can buy almost anything in the materialistic environment of our human society.

Thus, it remains to be seen whether it will help form or break cartels and vested interests, or whether it can be used for wider benefit through developing genuine relationships, people-to-people and producer-to-consumer, provided technical curbs are not imposed by the powers that be in various realms, always making room for the middle-persons.

Is it an irony of fate? Never in the world has a specialist, such as a scientist, ruled over a country, though many statesmen have. Perhaps this way nature provides for those who are not endowed with specialties but are best at making use of others’ capabilities.

Anyway, networking seems to have far-reaching consequences for human society; the question is how to harness it for optimum benefit, so as to make it ‘totally war-free’ – or at least free from unnecessary wars, arms buildups and other conflicts.

Kindest regards. Sadiq

Clarification added 5 months ago:

Recently, we in Pakistan witnessed something unusual: our dictator president, General Musharraf, was forced NOT to impose emergency rule, even though he had made the decision based on the counsel of his cronies. In a meeting (networking?), his top crony let the word out; it reached the media, and within minutes almost all notable world leaders were alarmed. This prompted Dr. Condi to call Musharraf at 2 a.m., and the next morning he announced, “No Emergency”.
It means that networking and media work hand in glove and get fast results when blended together – but beware, it is a double-edged sword!

Clarification added 5 months ago:

Now issues raised in the question:

Facebook: has it been taken to court? See the link at my profile there.

LinkedIn: I think the management is mindful of its future and does necessary R&D.

Obstacles to social/business …: chiefly, it seems, money; businesses/groups must solicit and finance innovative projects.

Existing social networks still have many free users; these deserve good ROI. Ways and means should be found to cover operational costs through advertising.

My experience shows that members of a large network can rarely interact amongst themselves.

Suggestion: instead of one very big network, an Umbrella Network should contain a cluster of networks, enabling members of any cluster network to interact freely with others, and ‘evolved relationships’ must replace referrals so as to produce transactions – RoR. Yes, it will take time, but once any two or more people know each other well through interaction – asking questions tantamount to a sort of ‘due diligence’ – transactions will be self-facilitated and follow.

Here Ecademy’s launch in other regions/countries can be replicated as applicable.

posted 5 months ago | Flag answer as…

