
Posts Tagged ‘WebOS’

Science 2.0: Great New Tool, or Great Risk?

Wikis, blogs and other collaborative web technologies could usher in a new era of science. Or not.

By M. Mitchell Waldrop




Welcome to a Scientific American experiment in “networked journalism,” in which readers—you—get to collaborate with the author to give a story its final form.

The article, below, is a particularly apt candidate for such an experiment: it’s my feature story on “Science 2.0,” which describes how researchers are beginning to harness wikis, blogs and other Web 2.0 technologies as a potentially transformative way of doing science. The draft article appears here, several months in advance of its print publication, and we are inviting you to comment on it. Your inputs will influence the article’s content, reporting, perhaps even its point of view.

So consider yourself invited. Please share your thoughts about the promise and peril of Science 2.0—just post your inputs in the Comment section below. To help get you started, here are some questions to mull over:

  • What do you think of the article itself? Are there errors? Oversimplifications? Gaps?
  • What do you think of the notion of “Science 2.0”? Will Web 2.0 tools really make science much more productive? Will wikis, blogs and the like be transformative, or will they be just a minor convenience?
  • Science 2.0 is one aspect of a broader Open Science movement, which also includes Open-Access scientific publishing and Open Data practices. How do you think this bigger movement will evolve?
  • Looking at your own scientific field, how real is the suspicion and mistrust mentioned in the article? How much do you and your colleagues worry about getting “scooped”? Do you have first-hand knowledge of a case in which that has actually happened?
  • When young scientists speak out on an open blog or wiki, do they risk hurting their careers?
  • Is “open notebook” science always a good idea? Are there certain aspects of a project that researchers should keep quiet, at least until the paper is published?

–M. Mitchell Waldrop

The explosively growing World Wide Web has rapidly transformed retailing, publishing, personal communication and much more. Innovations such as e-commerce, blogging, downloading and open-source software have forced old-line institutions to adopt whole new ways of thinking, working and doing business.

Science could be next. A small but growing number of researchers–and not just the younger ones–have begun to carry out their work via the wide-open blogs, wikis and social networks of Web 2.0. And although their efforts are still too scattered to be called a movement–yet–their experiences to date suggest that this kind of Web-based “Science 2.0” is not only more collegial than the traditional variety, but considerably more productive.

“Science happens not just because of people doing experiments, but because they’re discussing those experiments,” explains Christopher Surridge, editor of the Web-based journal, Public Library of Science On-Line Edition (PLoS ONE). Critiquing, suggesting, sharing ideas and data–communication is the heart of science, the most powerful tool ever invented for correcting mistakes, building on colleagues’ work and creating new knowledge. And not just communication in peer-reviewed papers; as important as those papers are, says Surridge, who publishes a lot of them, “they’re effectively just snapshots of what the authors have done and thought at this moment in time. They are not collaborative beyond that, except for rudimentary mechanisms such as citations and letters to the editor.”

The technologies of Web 2.0 open up a much richer dialog, says Bill Hooker, a postdoctoral cancer researcher at the Shriners Hospital for Children in Portland, Ore., and the author of a three-part survey of open-science efforts in the group blog, 3 Quarks Daily. “To me, opening up my lab notebook means giving people a window into what I’m doing every day. That’s an immense leap forward in clarity. In a paper, I can see what you’ve done. But I don’t know how many things you tried that didn’t work. It’s those little details that become clear with open notebook, but are obscured by every other communication mechanism we have. It makes science more efficient.” That jump in efficiency, in turn, could have huge payoffs for society, in everything from faster drug development to greater national competitiveness.

Of course, many scientists remain highly skeptical of such openness–especially in the hyper-competitive biomedical fields, where patents, promotion and tenure can hinge on being the first to publish a new discovery. From that perspective, Science 2.0 seems dangerous: using blogs and social networks for your serious work feels like an open invitation to have your online lab notebooks vandalized–or worse, have your best ideas stolen and published by a rival.

To Science 2.0 advocates, however, that atmosphere of suspicion and mistrust is misplaced. “When you do your work online, out in the open,” Hooker says, “you quickly find that you’re not competing with other scientists anymore, but cooperating with them.”

Rousing Success
In principle, says PLoS ONE’s Surridge, scientists should find the transition to Web 2.0 perfectly natural. After all, since the time of Galileo and Newton, scientists have built up their knowledge about the world by “crowd-sourcing” the contributions of many researchers and then refining that knowledge through open debate. “Web 2.0 fits so perfectly with the way science works, it’s not whether the transition will happen but how fast,” he says.

The OpenWetWare project at MIT is an early success. Launched in the spring of 2005 by graduate students working for MIT biological engineers Drew Endy and Thomas Knight, who collaborate on synthetic biology, the project was originally seen as just a better way to keep the two labs’ Web sites up to date. OpenWetWare is a wiki–a collaborative Web site that can be edited by anyone who has access to it; it even uses the same software that underlies the online encyclopedia Wikipedia. Students happily started posting pages introducing themselves and their research, without having to wait for a Webmaster to do it for them.

But then, users discovered that the wiki was also a convenient place to post what they were learning about lab techniques: manipulating and analyzing DNA, getting cell cultures to grow. “A lot of the ‘how-to’ gets passed around as lore in biology labs, and never makes it into the protocol manuals,” says Jason Kelly, a graduate student of Endy’s who now sits on the OpenWetWare steering committee. “But we didn’t have that.” Most of the students came from a background in engineering; theirs was a young lab with almost no mentors. So whenever a student or postdoc managed to stumble through a new protocol, he or she would write it all down on a wiki page before the lessons were forgotten. Others would then add whatever new tricks they had learned. This was not altruism, notes steering-committee member Reshma Shetty. “The information was actually useful to me.” But by helping herself, she adds, “that information also became available around the world.”

Indeed, Kelly points out, “Most of our new users came to us because they’d been searching Google for information on a protocol, found it posted on our site, and said ‘Hey!’ As more and more labs got on, it became pretty apparent that there were lots of other interesting things they could do.”

Classes, for example. Instead of making do with a static Web page posted by a professor, users began to create dynamically evolving class sites where they could post lab results, ask questions, discuss the answers and even write collaborative essays. “And all stayed on the site, where it made the class better for next year,” says Shetty, who has created an OpenWetWare template for creating such class sites.

Laboratory management benefited too. “I didn’t even know what a wiki was,” recalls Maureen Hoatlin of the Oregon Health & Science University in Portland, where she runs a lab studying the genetic disorder Fanconi anemia. But she did know that the frenetic pace of research in her field was making it harder to keep up with what her own team members were doing, much less Fanconi researchers elsewhere. “I was looking for a tool that would help me organize all that information,” Hoatlin says. “I wanted it to be Web-based, because I travel a lot and needed to access it from wherever I was. And I wanted something my collaborators and group members could add to dynamically, so that whatever I saw on that Web page would be the most recently updated version.”

OpenWetWare, which Hoatlin saw in the spring of 2006, fit the bill perfectly. “The transparency turned out to be very powerful,” she says. “I came to love the interaction, the fact that people in other labs could comment on what we do and vice versa. When I see how fast that is, and its power to move science forward–there is nothing like it.”

Numerous others now work through OpenWetWare to coordinate research. SyntheticBiology.org, one of the site’s most active interest groups, currently comprises six laboratories in three states, and includes postings about jobs, meetings, discussions of ethics, and much more.

In short, OpenWetWare has quickly grown into a social network catering to a wide cross-section of biologists and biological engineers. It currently encompasses laboratories on five continents, dozens of courses and interest groups, and hundreds of protocol discussions–more than 6,100 Web pages edited by 3,000 registered users. A May 2007 grant from the National Science Foundation launched the OpenWetWare team on a five-year effort to transform OpenWetWare into a self-sustaining community independent of its current base at MIT. The grant will also support development of many new practical tools, such as ways to interface biological databases with the wiki, along with a generic version of OpenWetWare that can be used by other research communities, such as neuroscience, and by individual investigators.

Skepticism Persists
For all the participants’ enthusiasm, however, this wide-open approach to science still faces intense skepticism. Even Hoatlin found the openness unnerving at first. “Now I’m converted to open wikis for everything possible,” she says. “But when I originally joined I wanted to keep everything private”–not least to keep her lab pages from getting trashed by some random hacker. She did not relax until she began to understand the system’s built-in safeguards.

First and foremost, says MIT’s Kelly, “you can’t hide behind anonymity.” By default, OpenWetWare pages are visible to anyone (although researchers have the option to make pages private). But unlike the oft-defaced Wikipedia, the system will let users make changes only after they have registered and established that they belong to a legitimate research organization. “We’ve never yet had a case of vandalism,” Kelly says. And even if there were one, the wiki automatically maintains a copy of every version of every page posted: “You could always just roll back the damage with a click of your mouse.”
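The rollback safeguard Kelly describes is simple to picture: a wiki never overwrites anything, so undoing vandalism just means re-posting an earlier revision as the newest one. Here is a minimal sketch in Python; it is a hypothetical illustration of the principle, not OpenWetWare’s actual MediaWiki internals:

```python
# Toy model of wiki-style page history with one-click rollback.
# Hypothetical illustration; real wikis store revisions in a database,
# but the principle is the same: every edit appends, nothing is lost.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    author: str
    text: str
    timestamp: datetime

@dataclass
class WikiPage:
    title: str
    revisions: list = field(default_factory=list)

    def edit(self, author, text):
        """Every edit appends a new revision; nothing is overwritten."""
        self.revisions.append(
            Revision(author, text, datetime.now(timezone.utc)))

    @property
    def current(self):
        return self.revisions[-1].text

    def rollback(self, n):
        """Undo damage by re-posting revision n as the newest revision."""
        self.edit("rollback", self.revisions[n].text)

page = WikiPage("Protocol:PCR")
page.edit("shetty", "Anneal at 55 C for 30 s.")
page.edit("vandal", "garbage")
page.rollback(0)          # restore the first revision
print(page.current)       # -> Anneal at 55 C for 30 s.
```

Because the vandal’s edit stays in the history, the full audit trail of who changed what, and when, survives the rollback.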

Unfortunately, this kind of technical safeguard does little to address a second concern: Getting scooped and losing the credit. “That’s the first argument people bring to the table,” says Drexel University chemist Jean-Claude Bradley, who created his independent laboratory wiki, UsefulChem, in December 2005. Even if incidents are rare in reality, Bradley says, everyone has heard a story, which is enough to keep most scientists from even discussing their unpublished work too freely, much less posting it on the Internet.

However, the Web provides better protection than the traditional journal system, Bradley maintains. Every change on a wiki gets a time-stamp, he notes, “so if someone actually did try to scoop you, it would be very easy to prove your priority–and to embarrass them. I think that’s really what is going to drive open science: the fear factor. If you wait for the journals, your work won’t appear for another six to nine months. But with open science, your claim to priority is out there right away.”

Under Bradley’s radically transparent “open notebook” approach, as he calls it, everything goes online: experimental protocols, successful outcomes, failed attempts, even discussions of papers being prepared for publication. “A simple wiki makes an almost perfect lab notebook,” he declares. The time-stamps on every entry not only establish priority, but allow anyone to track the contributions of every person, even in a large collaboration.
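Bradley’s priority argument reduces to a timestamp comparison: whoever has the earliest public, time-stamped record of a result can prove they disclosed it first. A minimal sketch, with researchers and dates invented purely for illustration:

```python
# Sketch of priority-by-timestamp: each disclosure of the same result
# carries an ISO-8601 timestamp (as wiki revisions do), and priority
# goes to the earliest one. Names and dates here are invented.
from datetime import datetime

def first_to_disclose(entries):
    """entries: (researcher, ISO-8601 timestamp) pairs for one result.
    Returns the researcher whose public record is earliest."""
    return min(entries, key=lambda e: datetime.fromisoformat(e[1]))[0]

entries = [
    ("lab_A_wiki",    "2007-03-02T14:10:00"),  # open-notebook entry
    ("lab_B_journal", "2007-11-20T00:00:00"),  # print paper, months later
]
print(first_to_disclose(entries))  # -> lab_A_wiki
```

The open-notebook entry wins even though the journal paper might be the first “official” publication, which is precisely Bradley’s point.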

Bradley concedes that there are sometimes legitimate reasons for researchers to think twice about being so open. If work involves patients or other human subjects, for example, privacy is obviously a concern. And if you think your work might lead to a patent, it is still not clear that the patent office will accept a wiki posting as proof of your priority. Until that is sorted out, he says, “the typical legal advice is: do not disclose your ideas before you file.”

Still, Bradley says the more open scientists are, the better. When he started UsefulChem, for example, his lab was investigating the synthesis of drugs to fight diseases such as malaria. But because search engines could index what his team was doing without needing a bunch of passwords, “we suddenly found people discovering us on Google and wanting to work together. The National Cancer Institute contacted me wanting to test our compounds as anti-tumor agents. Rajarshi Guha at Indiana University offered to help us do calculations about docking–figuring out which molecules will be reactive. And there were others. So now we’re not just one lab doing research, but a network of labs collaborating.”

Blogophobia
Although wikis are gaining, scientists have been strikingly slow to embrace one of the most popular Web 2.0 applications: Web logging, or blogging.

“It’s so antithetical to the way scientists are trained,” Duke University geneticist Huntington F. Willard said at the April 2007 North Carolina Science Blogging Conference, one of the first national gatherings devoted to this topic. The whole point of blogging is spontaneity–getting your ideas out there quickly, even at the risk of being wrong or incomplete. “But to a scientist, that’s a tough jump to make,” says Willard, head of Duke’s Institute for Genome Sciences & Policy. “When we publish things, by and large, we’ve gone through a very long process of drafting a paper and getting it peer reviewed. Every word is carefully chosen, because it’s going to stay there for all time. No one wants to read, ‘Contrary to the result of Willard and his colleagues…’.”

Still, Willard favors blogging. As a frequent author of newspaper op-ed pieces, he feels that scientists should make their voices heard in every responsible way possible. Blogging is slowly beginning to catch on; because most blogs allow outsiders to comment on the individual posts, they have proved to be a good medium for brainstorming and discussions of all kinds. Bradley’s UsefulChem blog is an example. Paul Bracher’s Chembark is another. “Chembark has morphed into the water cooler of chemistry,” says Bracher, who is pursuing his Ph.D. in that field at Harvard University. “The conversations are: What should the research agencies be funding? What is the proper way to manage a lab? What types of behavior do you admire in a boss? But instead of having five people around a single water cooler you have hundreds of people around the world.”

Of course, for many members of Bracher’s primary audience–young scientists still struggling to get tenure–those discussions can look like a minefield. A fair number of the participants use pseudonyms, out of fear that a comment might offend some professor’s sensibilities, hurting a student’s chances of getting a job later. Other potential participants never get involved because they feel that time spent with the online community is time not spent on cranking out that next publication. “The peer-reviewed paper is the cornerstone of jobs and promotion,” says PLoS ONE’s Surridge. “Scientists don’t blog because they get no credit.”

The credit-assignment problem is one of the biggest barriers to the widespread adoption of blogging or any other aspect of Science 2.0, agrees Timo Hannay, head of Web publishing at the Nature Publishing Group in London. (That group’s parent company, Macmillan, also owns Scientific American.) Once again, however, the technology itself may help. “Nobody believes that a scientist’s only contribution is from the papers he or she publishes,” Hannay says. “People understand that a good scientist also gives talks at conferences, shares ideas, takes a leadership role in the community. It’s just that publications were always the one thing you could measure. Now, however, as more of this informal communication goes on line, that will get easier to measure too.”

Collaboration Is the Payoff
The acceptance of any such measure would require a big change in the culture of academic science. But for Science 2.0 advocates, the real significance of Web technologies is their potential to move researchers away from an obsessive focus on priority and publication, toward the kind of openness and community that were supposed to be the hallmark of science in the first place. “I don’t see the disappearance of the formal research paper anytime soon,” Surridge says. “But I do see the growth of lots more collaborative activity building up to publication.” And afterwards as well: PLoS ONE not only allows users to annotate and comment on the papers it publishes online, but to rate the papers’ quality on a scale of 1 to 5.

Meanwhile, Hannay has been taking the Nature group into the Web 2.0 world aggressively. “Our real mission isn’t to publish journals, but to facilitate scientific communication,” he says. “We’ve recognized that the Web can completely change the way that communication happens.” Among the efforts are Nature Network, a social network designed for scientists; Connotea, a social bookmarking site patterned on the popular site del.icio.us, but optimized for the management of research references; and even an experiment in open peer review, with pre-publication manuscripts made available for public comment.

Indeed, says Bora Zivkovic, a circadian rhythm expert who writes at Blog Around the Clock, and who is the Online Community Manager for PLoS ONE, the various experiments in Science 2.0 are now proliferating so rapidly that it is almost impossible to keep track of them. “It’s a Darwinian process,” he says. “About 99 percent of these ideas are going to die. But some will emerge and spread.”

“I wouldn’t like to predict where all this is going to go,” Hooker adds. “But I’d be happy to bet that we’re going to like it when we get there.”

Read Full Post »


Web 3.0 and The Virtual Generation – Marketing Take Note!

January 30, 2008

Web 3.0? What’s that now? We’re still dealing with Web 2.0 and now I’m already talking about Web 3.0?! Well, if you’ve been reading my earlier posts, in essence, I’ve already talked about Web 3.0 – the Semantic Web. So I won’t go into the details of that in this post. My focus for this post is where I think Web 3.0 is heading and what one of the trends in that space will be and what it means for marketing agencies in the future.

As the concepts around Web 3.0 become more and more of a reality, one of the trends is going to be the semantic user experience. What I mean by this is how we can explore the notion of adding artificial intelligence and context around the user experience to make it a much more compelling and customized activity for each user. Any marketing person worth their salt knows that in this day and age, you should be devising your marketing around creating an exceptional user experience. The semantic web will take this to a new level. As we gain more context around the data a user wants to work with and parlay that into the user experience, we need to remember that context will be king. Mobility, audio, video and presence will be the new desktop. Users will move from being consumers to prosumers–they will be the ones controlling the content: creating it, adding to it, enriching it, sharing it with their peers.

So what does the virtual generation (or Gen V, as Gartner calls it) mean? It means a new generation that is tech savvy and carries out most of its networking and interactions via the digital medium. A generation that is immersed in virtual worlds, and whose identity in this world is its virtual avatar. Marketing folks will need to shift their thinking from collecting demographic information to collecting the information these avatars leave behind, because the avatars will grow personalities, and users’ likes and dislikes will be reflected in their avatars. What does this mean for product companies? Think about how to let Gen V explore your product in virtual worlds. The semantic web will lead to augmented reality–the ability to mix real-world and computer-based data to arrive at decisions. That will become more of a reality once cloud computing and associated technologies such as virtualization become mainstream, making optimal use of bandwidth, machine computing power, expanded reasoning and dynamic visualization to create semantically rich applications serving the next generation–Generation V.

Entry Filed under: Marketing, Semantic Web.


Read Full Post »

Web 3.0: Update

Evolving Trends

November 19, 2006

Web 3.0: Update

Filed under: AI, Semantic Web, Web 3.0 — evolvingtrends @ 2:21 pm
As a key step in enabling the Web 3.0 vision (which is not to be exclusively associated with Wikipedia), startups and researchers are developing tools and processes that let domain experts with no knowledge of ontology construction build formal ontologies in a manner that is transparent to them, i.e., without them realizing that they’re building one. Such tools and processes are emerging from research organizations and Web 3.0 ventures.

This cripples the argument that domain specific ontologies can only be created by Semantic Web experts. Expert knowledge of your particular domain (or profession) is the only thing you’ll need to be part of the revolution.

Related

  1. Wikipedia 3.0: The End of Google?

Posted by Marc Fawzi


Tags:

Web 3.0, Semantic Web, AI

3 Comments »

  1. […] Web 3.0 Update […]

    Pingback by Wikipedia 3.0: The End of Google? « Evolving Trends — November 19, 2006 @ 2:51 pm

  2. Definitely yes. While looking for some info on semantic web I’ve stumbled upon an interesting article describing methodology for clearing-up taxonomies. It’s all about answering a set of proper questions e.g: “If an instance of animal ceases to be an instance of animal, does it cease to be an instance of physical-object? (Y,N)”. Here’s the link: http://www.cs.vassar.edu/faculty/welty/papers/er2000/LADSEB05-2000.pdf

    Comment by Marcin Olak — January 4, 2007 @ 2:37 pm

  3. Thanks for the link 🙂

    Comment by evolvingtrends — January 14, 2007 @ 4:20 am


Read Full Post »

Google vs Web 3.0


    March 24, 2007

    Google vs Web 3.0

    Filed under: AI, Google, Semantic Web, User Enhanced Data, User Enhanced Search, Web 3.0, Wikipedia, ontology — evolvingtrends @ 11:35 pm
    In Web 3.0, he who owns the metadata owns the Web.

    User Enhanced Search

    With Google Co-op, Google tried to leverage user-supplied metadata to enhance the accuracy and relevance of Google searches.

    Now they’re trying it again with Image Labeler.

    But this time they want users to actually use it so they’re making it into a Squirrel Wheel kind of game where you get to play the squirrel.

    From their description:

    “You’ll be randomly paired with a partner who’s online and using the feature. Over a 90-second period, you and your partner will be shown the same set of images and asked to provide as many labels as possible to describe each image you see. When your label matches your partner’s label, you’ll earn some points and move on to the next image until time runs out. After time expires, you can explore the images you’ve seen and the websites where those images were found. And we’ll show you the points you’ve earned throughout the session.”
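The scoring rule in that description (you earn points only when your label matches your partner’s) can be pictured with a short sketch. This is a toy illustration of the matching mechanic, not Google’s actual implementation:

```python
# Toy model of the Image Labeler matching rule quoted above: two players
# label the same image independently and score for each label they agree
# on. Matching here is case-insensitive with whitespace trimmed; the
# real game's normalization rules are not public, so this is a guess.
def score_round(labels_a, labels_b):
    """Return the set of agreed labels for one image."""
    return ({l.strip().lower() for l in labels_a}
            & {l.strip().lower() for l in labels_b})

a = ["Squirrel", "wheel", "cage"]
b = ["squirrel", "rodent", "Wheel "]
matches = score_round(a, b)
print(sorted(matches))   # -> ['squirrel', 'wheel']
print(len(matches))      # points earned on this image
```

The side effect Google is after is that every agreed label is a human-verified annotation of the image, i.e. free metadata.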

    You’re better off annotating Wikipedia (using Semantic MediaWiki), applying your knowledge of a given subject (or domain) to build intelligence into Wikipedia, which is owned by the people (as a non-profit, people-funded, people-powered encyclopedia). Why be a squirrel in Google’s squirrel wheel only to have Google abuse your good will?

    Related

    1. Wikipedia 3.0: The End of Google?
    2. Google Co-Op: The End of Wikipedia?
    3. Web 3.0 Update
    4. Is Google a monopoly?
    5. Designing a Better Semantic Search Engine*
    6. Web 3.0 (Definition)
    7. Web 3.0 (Wikipedia entry)

    Posted by Marc Fawzi


    —-

    *  Also see this BoingBoing post re: Wikipedia’s founder jumping on the “User Enhanced Search” bandwagon with Wikiasari. But is it for the people, by the people, or a for-profit venture?

    1 Comment »

    1. […] March 2007 Update: Google vs Web 3.0 […]

      Pingback by Google tries again to co-opt the Wikipedia 3.0 vision « Evolving Trends — December 17, 2007 @ 3:50 pm


    Read Full Post »


    LIFEBOAT FOUNDATION SPECIAL REPORT

    THE THIRD GENERATION WEB IS COMING

    By Lifeboat Foundation Scientific Advisory Board member Nova Spivack.

    Also read Minding the Planet: The Meaning and Future of the Semantic Web.

    OVERVIEW

    The Web is entering a new phase of evolution. There has been much debate recently about what to call this new phase. Some would prefer not to name it at all, while others suggest continuing to call it “Web 2.0”. However, this new phase of evolution has quite a different focus from what Web 2.0 has come to mean.
     

     
     
    WEB 3.0
     
    John Markoff of the New York Times recently suggested naming this third generation of the Web “Web 3.0”. This suggestion has led to quite a bit of debate within the industry. Those who are attached to the Web 2.0 moniker have reacted by claiming that such a term is not warranted, while others have responded positively to the term, noting that there is indeed a characteristic difference between the coming new stage of the Web and what Web 2.0 has come to represent.
     
    The term Web 2.0 was never clearly defined and even today if one asks ten people what it means one will likely get ten different definitions. However, most people in the Web industry would agree that Web 2.0 focuses on several major themes, including AJAX, social networking, folksonomies, lightweight collaboration, social bookmarking, and media sharing. While the innovations and practices of Web 2.0 will continue to develop, they are not the final step in the evolution of the Web.
     
    In fact, there is a lot more in store for the Web. We are starting to witness the convergence of several growing technology trends that are outside the scope of what Web 2.0 has come to mean. These trends have been gestating for a decade and will soon reach a tipping point. At this juncture the third-generation of the Web will start.
     
     
    MORE INTELLIGENT WEB
     
    The threshold to the third-generation Web will be crossed in 2007. At this juncture the focus of innovation will start to shift back from front-end improvements towards back-end, infrastructure-level upgrades to the Web. This cycle will continue for five to ten years, and will result in making the Web more connected, more open, and more intelligent. It will transform the Web from a network of separately siloed applications and content repositories into a more seamless and interoperable whole.
     
    Because the focus of the third-generation Web is quite different from that of Web 2.0, this new generation of the Web probably does deserve its own name. In keeping with the naming convention established by labeling the second generation of the Web as Web 2.0, I agree with John Markoff that this third generation of the Web could be called Web 3.0.
     
     
    TIMELINE AND DEFINITION
     
    Web 1.0. Web 1.0 was the first generation of the Web. During this phase the focus was primarily on building the Web, making it accessible, and commercializing it for the first time. Key areas of interest centered on protocols such as HTTP, open standard markup languages such as HTML and XML, Internet access through ISPs, the first Web browsers, Web development platforms and tools, Web-centric software languages such as Java and Javascript, the creation of Web sites, the commercialization of the Web and Web business models, and the growth of key portals on the Web.
     
    Web 2.0. According to Wikipedia, “Web 2.0, a phrase coined by O’Reilly Media in 2004, refers to a supposed second generation of Internet-based services — such as social networking sites, wikis, communication tools, and folksonomies — that emphasize online collaboration and sharing among users.”
     
    I would also add to this definition another trend that has been a major factor in Web 2.0 — the emergence of the mobile Internet and mobile devices (including camera phones) as a major new platform driving the adoption and growth of the Web, particularly outside of the United States.
     
    Web 3.0. Using the same pattern as the above Wikipedia definition, Web 3.0 could be defined as: “Web 3.0, a phrase coined by John Markoff of the New York Times in 2006, refers to a supposed third generation of Internet-based services that collectively comprise what might be called ‘the intelligent Web’ — such as those using semantic web, microformats, natural language search, data-mining, machine learning, recommendation agents, and artificial intelligence technologies — which emphasize machine-facilitated understanding of information in order to provide a more productive and intuitive user experience.”
     
    Web 3.0 Expanded Definition. I propose expanding the above definition of Web 3.0 to be a bit more inclusive. There are actually several major technology trends that are about to reach a new level of maturity at the same time. The simultaneous maturity of these trends is mutually reinforcing, and collectively they will drive the third-generation Web. From this broader perspective, Web 3.0 might be defined as a third-generation of the Web enabled by the convergence of several key emerging technology trends:
     
    Ubiquitous Connectivity

    • Broadband adoption
    • Mobile Internet access
    • Mobile devices

    Network Computing

    • Software-as-a-service business models
    • Web services interoperability
    • Distributed computing (P2P, grid computing, hosted “cloud computing” server farms such as Amazon S3)

    Open Technologies

    • Open APIs and protocols
    • Open data formats
    • Open-source software platforms
    • Open data (Creative Commons, Open Data License, etc.)

    Open Identity

    • Open identity (OpenID)
    • Open reputation
    • Portable identity and personal data (for example, the ability to port your user account and search history from one service to another)

    The Intelligent Web

    • Semantic Web technologies (RDF, OWL, SWRL, SPARQL, Semantic application platforms, and statement-based datastores such as triplestores, tuplestores and associative databases)
    • Distributed databases — or what I call “The World Wide Database” (wide-area distributed database interoperability enabled by Semantic Web technologies)
    • Intelligent applications (natural language processing, machine learning, machine reasoning, autonomous agents)
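The “statement-based datastores” in the list above can be pictured with a toy triplestore: every fact is a (subject, predicate, object) triple, and a query is a pattern in which any position may be a wildcard. This is an illustrative sketch only; production triplestores add persistence, indexing, SPARQL query engines and inference:

```python
# Toy in-memory triplestore. Facts are (subject, predicate, object)
# tuples; queries are patterns where None matches anything. The data
# below is invented for illustration.
class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Return all triples matching the pattern (None = wildcard)."""
        return [t for t in self.triples
                if all(q is None or q == v
                       for q, v in zip((s, p, o), t))]

store = TripleStore()
store.add("Paris", "isCapitalOf", "France")
store.add("Paris", "locatedIn", "Europe")
store.add("Lyon",  "locatedIn",  "France")

print(store.query(p="isCapitalOf"))   # every capital-of fact
print(store.query(s="Paris"))         # everything known about Paris
```

Because data and schema live in the same uniform triple format, stores from different sites can be merged by simply unioning their triple sets, which is what makes the “World Wide Database” idea plausible.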

     
    CONCLUSION
     
    Web 3.0 will be more connected, open, and intelligent, with semantic Web technologies, distributed databases, natural language processing, machine learning, machine reasoning, and autonomous agents.

    Read Full Post »


    February 09, 2007

    How the WebOS Evolves?

    Here is my timeline of the past, present and future of the Web. Feel free to put this meme on your own site, but please link back to the master image at this site (the URL that the thumbnail below points to) because I’ll be updating the image from time to time.
    graphic image: http://novaspivack.typepad.com/RadarNetworksTowardsAWebOS.jpg


    This slide illustrates my current thinking here at Radar Networks about where the Web (and we) are heading. It shows a timeline of technology leading from the prehistoric desktop era to the possible future of the WebOS…

    Note that as well as mapping a possible future of the Web, here I am also proposing that the Web x.0 terminology be used to index the decades of the Web since 1990. Thus we are now in the tail end of Web 2.0 and are starting to lay the groundwork for Web 3.0, which fully arrives in 2010.

    This makes sense to me. Web 2.0 was really about upgrading the “front-end” and user-experience of the Web. Much of the innovation taking place today is about starting to upgrade the “backend” of the Web and I think that will be the focus of Web 3.0 (the front-end will probably not be that different from Web 2.0, but the underlying technologies will advance significantly enabling new capabilities and features).

    See also: This article I wrote redefining what the term “Web 3.0” means.

    See also: A Visual Graph of the Future of Productivity

    Please note: This is a work in progress and is not perfect yet. I’ve been tweaking the positions to get the technologies and dates right. Part of the challenge is fitting the text into the available spaces. If anyone out there has suggestions regarding where I’ve placed things on the timeline, or if I’ve left anything out that should be there, please let me know in the comments on this post and I’ll try to readjust and update the image from time to time. If you would like to produce a better version of this image, please do so and send it to me for inclusion here, with the same Creative Commons license, ideally.

    Read Full Post »
