
Posts Tagged ‘metaweb’

Google: “We’re Not Doing a Good Job with Structured Data”

Written by Sarah Perez / February 2, 2009 7:32 AM / 9 Comments


During a talk at the New England Database Day conference at the Massachusetts Institute of Technology, Google’s Alon Halevy admitted that the search giant has “not been doing a good job” presenting the structured data found on the web to its users. By “structured data,” Halevy was referring to the databases of the “deep web” – those internet resources that sit behind forms and site-specific search boxes, unable to be indexed through passive means.

Google’s Deep Web Search

Halevy, who heads the “Deep Web” search initiative at Google, described the “Shallow Web” as containing about 5 million web pages while the “Deep Web” is estimated to be 500 times the size. This hidden web is currently being indexed in part by Google’s automated systems that submit queries to various databases, retrieving the content found for indexing. In addition to that aspect of the Deep Web – dubbed “vertical searching” – Halevy also referenced two other types of Deep Web Search: semantic search and product search.
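
The basic mechanics of this kind of “surfacing” are easy to sketch, if only as a toy: pick a site-specific search form, probe it with seed queries, and hand whatever comes back to the indexer. The target URL and its "q" parameter below are hypothetical, and Google’s real system is far more careful about which queries it submits and how often:

```python
# Toy sketch of "surfacing" deep-web content by probing a site-specific search
# form with seed queries. The URL and the "q" parameter are hypothetical, and
# this is not how Google's production system works.
import requests
from bs4 import BeautifulSoup

SEARCH_URL = "https://example.com/search"                 # hypothetical search form
SEED_QUERIES = ["presidents", "population", "travel"]     # terms used to probe it

def surface(query):
    """Submit one query to the form and return the result page's text and links."""
    resp = requests.get(SEARCH_URL, params={"q": query}, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    links = [a["href"] for a in soup.find_all("a", href=True)]
    return soup.get_text(" ", strip=True), links

index = {}
for q in SEED_QUERIES:
    text, links = surface(q)
    index[q] = {"snippet": text[:200], "links": links}    # hand the results to an indexer
```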

Google also wants to be able to retrieve the data found in structured tables on the web, said Halevy, citing a table on a page listing the U.S. presidents as an example. There are 14 billion such tables on the web, and, after filtering, about 154 million of them are interesting enough to be worth indexing.
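
As a rough, hypothetical illustration of that table-harvesting idea (not Google’s actual pipeline), one can pull every HTML table from a page and keep only those that look like genuine relational data; the URL and the filtering thresholds below are arbitrary choices:

```python
# Rough sketch of table harvesting: pull every HTML table from a page and keep
# only those that look like real relational data. The URL and the thresholds
# are arbitrary illustrative choices, not Google's filtering criteria.
import pandas as pd

url = "https://en.wikipedia.org/wiki/List_of_presidents_of_the_United_States"
tables = pd.read_html(url)        # one DataFrame per <table> element on the page

def looks_interesting(df):
    """Crude stand-in for the 'after filtering' step described in the talk."""
    enough_rows = df.shape[0] >= 5
    enough_cols = df.shape[1] >= 2
    mostly_filled = df.notna().mean().mean() > 0.8
    return enough_rows and enough_cols and mostly_filled

interesting = [t for t in tables if looks_interesting(t)]
print(f"kept {len(interesting)} of {len(tables)} tables")
```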

Can Google Dig into the Deep Web?

The remaining question is whether Google’s current search engine technology will be adept at all these different types of Deep Web indexing, or whether the company will need to come up with something new. As of now, Google uses the Bigtable database and MapReduce framework for everything search related, notes Alex Esterkin, Chief Architect at Infobright, Inc., a company delivering open source data warehousing solutions. During the talk, Halevy listed a number of analytical database application challenges that Google is currently dealing with: schema auto-complete, synonym discovery, creating entity lists, association between instances and aspects, and data-level synonym discovery. These challenges are addressed by Infobright’s technology, said Esterkin, but “Google will have to solve these problems the hard way.”
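
To make one of those challenges concrete, here is a toy version of data-level synonym discovery: treat two column names as candidate synonyms when their value sets overlap heavily. This is only an illustrative heuristic, not how Google or Infobright actually approach the problem:

```python
# Toy data-level synonym discovery: two attributes whose value sets overlap
# strongly are likely to mean the same thing (e.g. "tel" vs "phone").
# Purely illustrative; not Google's or Infobright's actual method.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

columns = {
    "tel":   ["555-0100", "555-0101", "555-0102"],
    "phone": ["555-0100", "555-0101", "555-0199"],
    "city":  ["Boston", "Cambridge", "Somerville"],
}

candidates = [
    (x, y, round(jaccard(vx, vy), 2))
    for x, vx in columns.items()
    for y, vy in columns.items()
    if x < y and jaccard(vx, vy) > 0.3    # 0.3 is an arbitrary threshold
]
print(candidates)                          # [('phone', 'tel', 0.5)]
```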

Also mentioned during the speech was how Google plans to organize “aspects” of search queries. The company wants to be able to separate exploratory queries (e.g., “Vietnam travel”) from ones where a user is in search of a particular fact (“Vietnam population”). The former query should deliver information about visa requirements, weather and tour packages, etc. In a way, this is like what the search service offered by Kosmix is doing. But Google wants to go further, said Halevy. “Kosmix will give you an ‘aspect,’ but it’s attached to an information source. In our case, all the aspects might be just Web search results, but we’d organize them differently.”

Yahoo Working on Similar Structured Data Retrieval

The challenges facing Google today are also being addressed by their nearest competitor in search, Yahoo. In December, Yahoo announced that they were taking their SearchMonkey technology in-house to automate the extraction of structured information from large classes of web sites. The results of that in-house extraction technique will allow Yahoo to augment their Yahoo Search results with key information returned alongside the URLs.

In this aspect of web search, it’s clear that no single company has yet managed to dominate. However, even if a non-Google company surges ahead, it may not be enough to get people to switch engines. Today, “Google” has become synonymous with web search, just like “Kleenex” is a tissue, “Band-Aid” is an adhesive bandage, and “Xerox” is a way to make photocopies. Once that psychological mark has been made on our collective psyche and the habit formed, people tend to stick with what they know, regardless of who does it better. That’s a bit troubling: if better search technology for indexing the Deep Web comes into existence outside of Google, the world may not end up using it until Google either duplicates or acquires the invention.

Still, it’s far too soon to write Google off. They clearly have a lead when it comes to search, and that lead came from hard work, incredibly smart people, and innovative technical achievements. No doubt they can figure out this Deep Web thing, too. (We hope.)

Read Full Post »

2009 Predictions and Recommendations for Web 2.0 and Social Networks

Christopher Rollyson

Volatility, Uncertainty and Opportunity: Move Crisply while Competitors Are in Disarray

Now that the Year in Review 2008 has summarized the key trends, we are in an excellent position for 2009 prognostications, so welcome to Part II. As all experienced executives know, risk and reward are inseparable twins, and periods of disruption elevate both, so you will have far more opportunity than usual to produce uncommon value.

This is a high-stakes year in which we can expect surprises. Web 2.0 and social networks can help because they increase flexibility and adaptiveness. Alas, those who succeed will have to challenge conventional thinking considerably, which is not a trivial exercise even in normal times. The volatility that many businesses face will make it more difficult because many of their clients and/or employees will be distracted. It will also make it easier because some of them will perceive that extensive change is afoot, and Web 2.0 will blend in with the cacophony. Disruption produces unusual changes in markets, and the people who perceive the new patterns and react appropriately emerge as the new leaders.

2009 Predictions

These are too diverse to be ranked in any particular order. Please share your reactions and contribute those that I have missed.

  1. The global financial crisis will continue to add significant uncertainty to the global economy in 2009 and probably beyond. I have no scientific basis for this, but there are excellent experts of every flavor on the subject, so take your pick. I believe that we are off the map, and anyone who says he’s sure of a certain outcome should be regarded with healthy skepticism.
    • All I can say is that my friends, clients and sources in investment and commercial banking tell me it’s not over yet, and that uncertainty is the only certainty until further notice. The bad debt has not yet been fully leached out of the system.
    • Western governments, led by the U.S., are probably prolonging the pain because governments usually get bailouts wrong. However, voters don’t have the stomach for hardship, so we are probably trading short-term “feel good” efforts for a prolonged adjustment period.
  2. Expect widespread social media success stories in 2009 in the most easily measurable areas, such as talent management, business development, R&D and marketing.
    • 2008 saw a significant increase in enterprise executives’ experimentation with LinkedIn, Facebook, YouTube and enterprise (internal) social networks. These will begin to bear fruit in 2009, after which a “mad rush of adoption” will ensue.
    • People who delay adoption will pay dearly in terms of consulting fees, delayed staff training and lagging results.
  3. Internal social networks will largely disappoint. Similar to intranets, they will produce value, but few enterprises are viable long-term without seamlessly engaging the burgeoning external world of experts. In general, the larger and more disparate an organization’s audience is, the more value it can create, but culture must encourage emergent, cross-boundary connections, which is where many organizations fall down.


    • If you’re a CIO who’s banking heavily on your behind-the-firewall implementation, just be aware that you need to engage externally as well.
    • Do it fast because education takes longer than you think.
    • There are always more smart people outside than inside any organization.
  4. Expect significant consolidation among white label social network vendors, so use your customary caution when signing up partners.
    • Due diligence and skill portability will help you to mitigate risks. Any vendor worth their salt will use standardized SOA-friendly architecture and feature sets. As I wrote last year, Web 2.0 is not your father’s software, so focus on people and process more than technology.
    • If your vendor hopeful imposes process on your people, run.
  5. No extensive M&A among big branded sites like Facebook, LinkedIn and Twitter, although there will probably be some deals. The concept of the social ecosystem holds that nodes on pervasive networks can add value individually. LinkedIn and Facebook have completely different social contexts. “Traditional” executives tend to view disruptions as “the new thing” that they want to put into a bucket (“let them all buy each other, so I only have to learn one!”). Wrong. This is the new human nervous system, and online social venues, like their offline counterparts, want specificity because they add more value that way. People hack together the networks to which they belong based on their goals and interests.
    • LinkedIn is very focused on the executive environment, and they will not buy Facebook or Twitter. They might buy a smaller company. They are focused on building an executive collaboration platform, and a large acquisition would threaten their focus. LinkedIn is in the initial part of its value curve, they have significant cash, and they’re profitable. Their VCs can smell big money down the road, so they won’t sell this year.
    • Twitter already turned down Facebook, and my conversations with them lead me to believe that they love their company and that its value is largely undiscovered as yet. They will hold out as long as they can.
    • Facebook has staying power past 2009. They don’t need to buy anyone of import; they are gaining global market share at a fast clip. They already enable customers to build a large part of the Facebook experience, and they have significant room to innovate. Yes, there is a backlash in some quarters against their size. I don’t know Mark Zuckerberg personally, and I don’t have a feeling for his personal goals.
    • I was sad to see that Dow Jones sold out to NewsCorp and, as a long-time Wall Street Journal subscriber, I am even more dismayed now. This will prove a quintessential example of value destruction. The Financial Times currently fields a much better offering. The WSJ is beginning to look like MySpace! As for MySpace itself, I don’t have a firm bead on it but surmise that it has a higher probability of major M&A than the aforementioned: its growth has stalled, Facebook continues to gain, and Facebook uses more Web 2.0 processes, so I believe it will surpass MySpace in terms of global audience.
    • In being completely dominant, Google is the Wal-Mart of Web 2.0, and I don’t have much visibility into their plans, but I think they could make significant waves in 2009. They are very focused on applying search innovation to video, which is still in the initial stages of adoption, so YouTube is not going anywhere.
    • I am less familiar with Digg, Xing, Bebo, Cyworld. Of course, Orkut is part of the Googleverse.
  6. Significant social media use by the Obama Administration. It has the knowledge, experience and support base to pursue fairly radical change. Moreover, the degree of change will be in sync with the economy: if there is a significant worsening, expect the government to engage people to do uncommon things.
    • Change.gov is the first phase, in which supporters or any interested person is invited to contribute thoughts, stories and documents to the transition team. It aims to keep people engaged and to serve the government on a volunteer basis.
    • The old way of doing things was to hand out form letters that you would mail to your representative. Using Web 2.0, people can organize almost instantly, and results are visible in real-time. Since people are increasingly online somewhere, the Administration will invite them from within their favorite venue (MySpace, Facebook…).
    • Obama has learned that volunteering provides people with a sense of meaning and importance. Many volunteers become evangelists.
  7. Increasing citizen activism against companies and agencies, a disquieting prospect but one that I would not omit from your scenario planning (ask yourself, “How could people come together and magnify some of our blemishes?”; more here). To wit:
    • In 2007, an electronic petition opposing pay-per-use road tolls in the UK reached 1.8 million signatories, stalling a major government initiative. Although this did not primarily employ social media, it is indicative of the phenomenon.
    • In Q4 2008, numerous citizen groups organized Facebook groups (25,000 signatures in a very short time) to oppose television and radio taxes, alarming the Swiss government. Citizens are organizing to stop paying obligatory taxes—and to abolish the agency that administers the tax system. Another citizen initiative recently launched on the Internet collected 60,000 signatures to oppose biometric passports. German links. French links.
    • In the most audacious case, Ahmed Maher is using Facebook to try to topple the government of Egypt. According to Wired’s Cairo Activists Use Facebook to Rattle Regime, activists have organized several large demonstrations and have a Facebook group of 70,000 that’s growing fast.
  8. Executive employment will continue to feel pressure, and job searches will get increasingly difficult for many, especially those with “traditional” jobs that depend on Industrial Economy organization.
    • In tandem with this, there will be more opportunities for people who can “free-agent” themselves in some form.
    • In 2009, an increasing portion of executives will have success at using social networks to diminish their business development costs, and their lead will subsequently accelerate the leeching of enterprises’ best and brightest, many of whom could have more flexibility and better pay as independents. This is already manifest as displaced executives choose never to go back.
    • The enterprise will continue to unbundle. I have covered this extensively on the Transourcing website.
  9. Enterprise clients will start asking for “strategy” to synchronize social media initiatives. Web 2.0 is following the classic adoption pattern: thus far, most enterprises have been using a skunk works approach to their social media initiatives, or they’ve been paying their agencies to learn while delivering services.
    • In the next phase, beginning in 2009, CMOs, CTOs and CIOs will sponsor enterprise level initiatives, which will kick off executive learning and begin enterprise development of social media native skills. After 1-2 years of this, social media will be spearheaded by VPs and directors.
    • Professional services firms (PwC, KPMG, Deloitte…) will begin scrambling to pull together advisory practices after several of their clients ask for strategy help. These firms’ high costs do not permit them to build significantly ahead of demand.
    • Marketing and ad agencies (Leo Burnett, Digitas…) will also be asked for strategy help, but they will be hampered by their desires to maintain the outsourced model; social media is not marketing, even though it will displace certain types of marketing.
    • Strategy houses (McKinsey, BCG, Booz Allen…) will also be confronted by clients asking for social media strategy; their issue will be that it is difficult to quantify, and the implementation piece is not in their comfort zone, reducing revenue per client.
    • Boutiques will emerge to develop seamless strategy and implementation for social networks. This is needed because Web 2.0 and social networks programs involve strategy, but implementation involves little technology when compared to Web 1.0. As I’ll discuss in an imminent article, it will involve much more interpersonal mentoring and program development.
  10. Corporate spending on Enterprise 2.0 will be very conservative, and pureplay and white label vendors (and consultants) will need to have strong business cases.
    • CIOs have better things to spend money on, and they are usually reacting to business unit executives who are still getting their arms around the value of Web 2.0, social networks and social media.
    • Enterprise software vendors will release significant Web 2.0 bolt-on improvements to their platforms in 2009. IBM is arguably out in front with Lotus Connections, with Microsoft Sharepoint fielding a solid solution. SAP and Oracle will field more robust solutions this year.
  11. The financial crunch will accelerate social network adoption among those focused on substance rather than flash; this is akin to the dot-bomb period of 2001-2004, when no one wanted to do the Web as an end in itself anymore and the shakeout flushed out the fluffy offers (as well as some really good ones).
    • Social media can save money: how much did it cost the Obama campaign, in time and money, to raise $500 million? Extremely little.
    • People like to get involved and contribute when you frame the activity as important and provide the tools to facilitate meaningful action. Engagement raises profits and can decrease costs. Engaged customers, for example, tend to leave less often than apathetic customers.
    • Social media is usually about engaging volunteer contributors; if you get it right, you will get a lot of help for little cash outlay.
    • Social media presents many new possibilities for revenue, but to see them, look outside existing product silos. Focus on customer experience by engaging customers, not with your organization, but with each other. Customer-customer communication is excellent for learning about experience.
  12. Microblogging will go completely mainstream, even though Twitter is still quite emergent and few solid business cases exist.
    • Twitter and its kin (Plurk, Jaiku, Pownce [just bought by Six Apart and closed], Kwippy, Tumblr) are unique for two reasons: they incorporate mobility seamlessly, and they keep communications in small chunks; this leads to a great diversity of usage contexts.
    • Note that Dell sold $1 million on Twitter in 2008, using it as a channel for existing business.
    • In many businesses, customers will begin expecting your organization to be on Twitter; this year it will rapidly cease to be a novelty.

    2009 Recommendations

    Web 2.0 will affect business and culture far more than Web 1.0 (the internet), which was about real-time information access and transactions via a standards-based network and interface. Web 2.0 enables real-time knowledge and relationships, so it will profoundly affect most organizations’ stakeholders (clients, customers, regulators, employees, directors, investors, the public…). It will change how all types of buying decisions are made.

    As an individual and/or an organization leader, you have the opportunity to adopt more quickly than your peers and increase your relevance to stakeholders as their Web 2.0 expectations of you increase. 2009 will be a year of significant adoption, so I have kept this list short, general and actionable. I have assumed that your organization has been experimenting with various aspects of Web 2.0 and that some people have moderate experience. Please feel free to contact me if you would like more specific or advanced information or suggestions. Recommendations are ranked by importance, with the most critical at the top.

    1. What: Audit your organization’s Web 2.0 ecosystem, and conduct your readiness assessment. Why: Do this to act with purpose, mature your efforts past experimentation and increase your returns on investment.
      • The ecosystem audit will tell you what stakeholders are doing, and in what venues. Moreover, a good one will tell you trends, not just numbers. In times of rapid adoption, knowing trends is critical, so you can predict the future. Here’s more about audits.
      • The readiness assessment will help you to understand how your value proposition and resources align with creating and maintaining online relationships. The audit has told you what stakeholders are doing, now you need to assess what you can do to engage them on an ongoing basis. Here’s more about readiness assessments.
    2. What: Select a top executive to lead your organization’s adoption of Web 2.0 and social networks. Why: Web 2.0 is changing how people interact, and your organizational competence will be affected considerably, so applying it to your career and business is very important.
      • This CxO should be someone with a track record for innovation and a commitment to leading discontinuous change, and should be philosophically in sync with the idea of emergent organization and cross-boundary collaboration.
      • S/He will coordinate your creation of strategy and programs (part-time). This includes formalizing your Web 2.0 policy and your legal and security due diligence.
    3. What: Use an iterative portfolio approach to pursue social media initiatives in several areas of your business, and chunk investments small.
      Why: Both iteration and portfolio approaches help you to manage risk and increase returns.
    • Use the results of the audit and the readiness assessment to help you to select the stakeholders you want to engage.
    • Engage a critical mass of stakeholders about things that inspire or irritate them and that you can help them with.
    • All else equal, pilots should include several types of Web 2.0 venues and modes like blogs, big branded networks (Facebook, MySpace), microblogs (Twitter), video and audio.
    • As a general rule, extensive opportunity exists where you can use social media to cross boundaries, which usually impose high costs and prevent collaboration. One of the most interesting opportunities in 2009 will be encouraging alumni, employees and recruits to connect and collaborate according to their specific business interests. This can significantly reduce your organization’s business development, sales and talent acquisition costs. For more insight into this, see Alumni 2.0.
    • Don’t overlook pilots with multiple returns, like profile management programs, which can reduce your talent acquisition and business development costs. Here’s more on profile management.


  4. What: Create a Web 2.0 community with numerous roles, to give employees flexibility.
    Why: You want to keep investments small and let the most motivated employees step forward.

    • Roles should include volunteers for pilots, mentors (resident bloggers, video producers and others), community builders (who rapidly codify the knowledge you are gathering from pilots) and some more formal part-time roles. Perhaps a full-time person to coordinate would make sense. Roles can be progressive and intermittent. Think of this as open source.
    • To stimulate involvement, the program must be meaningful, and it must be structured to minimize conflicts with other responsibilities.
  5. What: Avoid the proclivity to treat Web 2.0 as a technology initiative. Why: Web 1.0 (the Internet) involved IT more than Web 2.0 does, and many people are conditioned to think that IT drives innovation; they fall into the tech trap, select tools first and impose process. This is old school and unnecessary, because the tools are far more flexible than the last-generation software with which many are still familiar.
    • People create the value when they get involved, and technology often gets in the way when investments in tools impose process on people and turn them off. Web 2.0 tools impose far less process on people.
    • More important than what brand you invest in is your focus on social network processes and how they add value to existing business processes. If you adopt smartly, you will be able to transfer assets and processes elsewhere while minimizing disruption. More likely is that some brands will disappear (Pownce closed its doors 15 December). When you focus your organization on mastering process and you distribute learning, you will be more flexible with the tools.
    • Focus on process and people, and incent people to gather and share knowledge and help each other. This will increase your flexibility with tools.
  6. What: Manage consulting, marketing and technology partners with a portfolio strategy. Why: Maximize flexibility and minimize risk.
    • From the technology point of view, there are three main vendor flavors: enterprise bolt-on (i.e. Lotus Connections), pureplay white label vendors (SmallWorldLabs) and open (Facebook, LinkedIn). As a group, pureplays have the most diversity in terms of business models, and the most uncertainty. Enterprise bolt-ons’ biggest risk is that they lag significantly behind. More comparisons here.
    • Fight the urge to go with one. If you’re serious about getting business value, you need to be in the open cross-boundary networks. If you have a Lotus or Microsoft relationship, compare Connections and Sharepoint with some pureplays to address private social network needs. An excellent way to start could be with Yammer.
    • Be careful when working with consulting- and marketing-oriented partners who are accustomed to an outsourced model. Web 2.0 is not marketing; it is communicating to form relationships and collaborate online. It does have extensive marketing applications; make sure partners have demonstrated processes for mentoring because Web 2.0 will be a core capability for knowledge-based organizations, and you need to build your resident knowledge.
    Parting Shots

    I hope you find these thoughts useful, and I encourage you to add your insights and reactions as comments. If you have additional questions about how to use Web 2.0, please feel free to contact me. I wish all the best to you in 2009.

    Read Full Post »

    Evolving Trends

    Wikipedia 3.0: The End of Google?

    In Uncategorized on June 26, 2006 at 5:18 am

    Author: Marc Fawzi

    License: Attribution-NonCommercial-ShareAlike 3.0

    Announcements:

    Semantic Web Developers:

    Feb 5, ‘07: The following external reference concerns the use of rule-based inference engines and ontologies in implementing the Semantic Web + AI vision (aka Web 3.0):

    1. Description Logic Programs: Combining Logic Programs with Description Logic (note: there are better, simpler ways of achieving the same purpose.)

    Click here for more info and a list of related articles…

    Foreword

    In the two years since I published this article, it has received over 200,000 hits, and several startups are now attempting to apply Semantic Web technology to Wikipedia and knowledge wikis in general, including the Wikipedia founder’s own commercial startup as well as a startup that was recently purchased by Microsoft.

    Recently, after seeing how Wikipedia’s governance is so flawed, I decided to write about a way to decentralize and democratize Wikipedia.

    Versión española (Spanish version)

    Article

    (Article was last updated at 10:15am EST, July 3, 2006)

    Wikipedia 3.0: The End of Google?


    The Semantic Web (or Web 3.0) promises to “organize the world’s information” in a dramatically more logical way than Google can ever achieve with their current engine design. This is especially true from the point of view of machine comprehension as opposed to human comprehension. The Semantic Web requires the use of a declarative ontological language like OWL to produce domain-specific ontologies that machines can use to reason about information and make new conclusions, not simply match keywords.

    However, the Semantic Web, which is still in a development phase where researchers are trying to define the best and most usable design models, would require the participation of thousands of knowledgeable people over time to produce those domain-specific ontologies necessary for its functioning.

    Machines (or machine-based reasoning, aka AI software or ‘info agents’) would then be able to use those laboriously (but not entirely manually) constructed ontologies to build a view (or formal model) of how the individual terms within the information relate to each other. Those relationships can be thought of as the axioms (assumed starting truths), which together with the rules governing the inference process both enable and constrain the interpretation (and well-formed use) of those terms, allowing the info agents to reason out new conclusions based on existing information, i.e. to think. In other words, theorems (formal deductive propositions that are provable based on the axioms and the rules of inference) may be generated by the software, thus allowing formal deductive reasoning at the machine level. And given that an ontology, as described here, is a statement of Logic Theory, two or more independent info agents processing the same domain-specific ontology will be able to collaborate and deduce an answer to a query, without being driven by the same software.

    Thus, and as stated, in the Semantic Web individual machine-based agents (or a collaborating group of agents) will be able to understand and use information by translating concepts and deducing new information rather than just matching keywords.

    Once machines can understand and use information, using a standard ontology language, the world will never be the same. It will be possible to have an info agent (or many info agents) among your virtual AI-enhanced workforce, each having access to a different domain-specific comprehension space, and all communicating with each other to build a collective consciousness.

    You’ll be able to ask your info agent or agents to find you the nearest restaurant that serves Italian cuisine, even if the restaurant nearest you advertises itself as a Pizza joint as opposed to an Italian restaurant. But that is just a very simple example of the deductive reasoning machines will be able to perform on information they have.
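
    A minimal sketch of that restaurant example, using Python’s rdflib library: a tiny hand-written ontology asserts that every pizza place is a kind of Italian restaurant, and a SPARQL query that follows subclass links returns the pizza joint even though it never describes itself as “Italian.” The class and instance names are invented for illustration:

```python
# Minimal illustration of ontology-backed deduction with rdflib.
# The tiny "ontology" and the instance data are invented for this example.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/food#")
g = Graph()

# Ontology: every PizzaPlace is a kind of ItalianRestaurant.
g.add((EX.PizzaPlace, RDFS.subClassOf, EX.ItalianRestaurant))

# Instance data: Luigi's only describes itself as a pizza joint.
g.add((EX.Luigis, RDF.type, EX.PizzaPlace))

# Ask for Italian restaurants; the subClassOf* path performs the simple deduction.
query = """
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ex:   <http://example.org/food#>
SELECT ?r WHERE { ?r rdf:type/rdfs:subClassOf* ex:ItalianRestaurant . }
"""
for row in g.query(query):
    print(row.r)    # -> http://example.org/food#Luigis
```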

    Far more awesome implications can be seen when you consider that every area of human knowledge will be automatically within the comprehension space of your info agents. That is because each info agent can communicate with other info agents who are specialized in different domains of knowledge to produce a collective consciousness (using the Borg metaphor) that encompasses all human knowledge. The collective “mind” of those agents-as-the-Borg will be the Ultimate Answer Machine, easily displacing Google from this position, which it does not truly fulfill.

    The problem with the Semantic Web, besides that researchers are still debating which design and implementation of the ontology language model (and associated technologies) is the best and most usable, is that it would take thousands or tens of thousands of knowledgeable people many years to boil down human knowledge to domain specific ontologies.

    However, if we were at some point to take the Wikipedia community and give them the right tools and standards to work with (whether existing or to be developed in the future), which would make it possible for reasonably skilled individuals to help reduce human knowledge to domain-specific ontologies, then that time can be shortened to just a few years, and possibly to as little as two years.

    The emergence of a Wikipedia 3.0 (as in Web 3.0, aka Semantic Web) that is built on the Semantic Web model will herald the end of Google as the Ultimate Answer Machine. It will be replaced with “WikiMind” which will not be a mere search engine like Google is but a true Global Brain: a powerful pan-domain inference engine, with a vast set of ontologies (a la Wikipedia 3.0) covering all domains of human knowledge, that can reason and deduce answers instead of just throwing raw information at you using the outdated concept of a search engine.

    Notes

    After writing the original post I found out that a modified version of the Wikipedia application, known as “Semantic” MediaWiki has already been used to implement ontologies. The name that they’ve chosen is Ontoworld. I think WikiMind would have been a cooler name, but I like ontoworld, too, as in “it descended onto the world,” since that may be seen as a reference to the global mind a Semantic-Web-enabled version of Wikipedia could lead to.

    Google’s search engine technology, which provides almost all of their revenue, could be made obsolete in the near future. That is, unless they gain access to Ontoworld or some similar pan-domain semantic knowledge repository, tap into its ontologies, and add inference capability to Google search to build formal deductive intelligence into Google.

    But so can Ask.com and MSN and Yahoo…

    I would really love to see more competition in this arena, not to see Google or any one company establish a huge lead over others.

    The question, to rephrase it in Churchillian terms, is whether the combination of the Semantic Web and Wikipedia signals the beginning of the end for Google or the end of the beginning. Obviously, with tens of billions of dollars of investors’ money at stake, I would think that it is the latter. No one wants to see Google fail. There’s too much vested interest. However, I do want to see somebody outmaneuver them (which, in my opinion, can be done).

    Clarification

    Please note that Ontoworld, which currently implements the ontologies, is based on the “Wikipedia” application (also known as MediaWiki), but it is not the same as Wikipedia.org.

    Likewise, I expect Wikipedia.org will use their volunteer workforce to reduce the sum of human knowledge that has been entered into their database to domain-specific ontologies for the Semantic Web (aka Web 3.0). Hence, “Wikipedia 3.0.”

    Response to Readers’ Comments

    The argument I’ve made here is that Wikipedia has the volunteer resources to produce the needed Semantic Web ontologies for the domains of knowledge that it currently covers, while Google does not have those volunteer resources, which will make it reliant on Wikipedia.

    Those ontologies, together with all the information on the Web, can be accessed by Google and others, but Wikipedia will be in charge of the ontologies for the large set of knowledge domains it currently covers, and that is where I see the power shift.

    Google and other companies do not have the resources in manpower (i.e., the thousands of volunteers Wikipedia has) to help create those ontologies for the large set of knowledge domains that Wikipedia covers. Wikipedia does, and it is positioned to do that better and more effectively than anyone else. It’s hard to see how Google would be able to create the ontologies for all domains of human knowledge (which are continuously growing in size and number) given how much work that would require. Wikipedia can cover more ground faster with its massive, dedicated force of knowledgeable volunteers.

    I believe that the party that will control the creation of the ontologies (i.e. Wikipedia) for the largest number of domains of human knowledge, and not the organization that simply accesses those ontologies (i.e. Google), will have a competitive advantage.

    There are many knowledge domains that Wikipedia does not cover. Google will have the edge there, but only if the people and organizations that produce the information also produce the ontologies on their own, so that Google can access them from its future Semantic Web engine. My belief is that this will happen, but very slowly, and that Wikipedia can have the ontologies done for all the domains of knowledge it currently covers much faster; it would then have leverage from the fact that it would be in charge of those ontologies (aka the basic layer for AI enablement).

    It still remains unclear, of course, whether the combination of Wikipedia and the Semantic Web heralds the beginning of the end for Google or the end of the beginning. As I said in the original part of the post, I believe that it is the latter, and the question I pose in the title of this post, in this context, is no more than rhetorical. However, I could be wrong in my judgment, and Google could fall behind Wikipedia as the world’s ultimate answer machine.

    After all, Wikipedia makes “us” count. Google doesn’t. Wikipedia derives its power from “us.” Google derives its power from its technology and inflated stock price. Who would you count on to change the world?

    Response to Basic Questions Raised by the Readers

    Reader divotdave asked a few questions which I thought to be very basic in nature (i.e., important). I believe more people will be pondering the same issues, so I’m including them here with my replies.

    Question:
    How does it distinguish between good information and bad? How does it determine which parts of the sum of human knowledge to accept and which to reject?

    Reply:
    It wouldn’t have to distinguish between good and bad information (not to be confused with well-formed versus badly formed) if it used a reliable source of information (with associated, reliable ontologies). That is, if the information or knowledge being sought can be derived from Wikipedia 3.0, then it assumes that the information is reliable.

    However, when it comes to connecting the dots, that is, returning information or deducing answers from the sea of information that lies beyond Wikipedia, your question becomes very relevant. How would it distinguish good information from bad so that it can produce good knowledge (aka comprehended information, aka new information produced through deductive reasoning based on existing information)?

    Question:
    Who, or what as the case may be, will determine what information is irrelevant to me as the inquiring end user?

    Reply:
    That is a good question, and one which would have to be answered by the researchers working on AI engines for Web 3.0.

    There will be assumptions made as to what you are inquiring about. Just as I had to make an assumption about what you really meant to ask me when I saw your question, AI engines would have to make an assumption, pretty much based on the same cognitive process humans use. That process is the topic of a separate post, but it has been covered by many AI researchers.

    Question:
    Is this to say that ultimately some over-arching standard will emerge that all humanity will be forced (by lack of alternative information) to conform to?

    Reply:
    There is no need for one standard, except when it comes to the language the ontologies are written in (e.g., OWL, OWL-DL, OWL Full, etc.). Semantic Web researchers are trying to determine the best and most usable choice, taking into consideration human and machine performance in constructing those ontologies and, exclusively in the machine’s case, interpreting them.

    Two or more info agents working with the same domain-specific ontology but having different software (different AI engines) can collaborate with each other.

    The only standard required is that of the ontology language and associated production tools.

    Addendum

    On AI and Natural Language Processing

    I believe that the first generation of AI that will be used by Web 3.0 (aka Semantic Web) will be based on relatively simple inference engines that will NOT attempt to perform natural language processing, where current approaches still face too many serious challenges. However, they will still have the formal deductive reasoning capabilities described earlier in this article, and users would interact with these systems through some query language.

    On the Debate about the Nature and Definition of AI

    The embedding of AI into cyberspace will be done at first with relatively simple inference engines (that use algorithms and heuristics) that work collaboratively in P2P fashion and use standardized ontologies. The massively parallel interactions between the hundreds of millions of AI Agents that will run within the millions of P2P AI Engines on users’ PCs will give rise to the very complex behavior that is the future global brain.

    Related:

    1. Web 3.0 Update
    2. All About Web 3.0 <– list of all Web 3.0 articles on this site
    3. P2P 3.0: The People’s Google
    4. Reality as a Service (RaaS): The Case for GWorld <– 3D Web + Semantic Web + AI
    5. For Great Justice, Take Off Every Digg
    6. Google vs Web 3.0
    7. People-Hosted “P2P” Version of Wikipedia
    8. Beyond Google: The Road to a P2P Economy


    Update on how the Wikipedia 3.0 vision is spreading:


    Update on how Google is co-opting the Wikipedia 3.0 vision:



    Web 3D Fans:

    Here is the original Web 3D + Semantic Web + AI article:

    Web 3D + Semantic Web + AI *

    The above-mentioned Web 3D + Semantic Web + AI vision, which preceded the Wikipedia 3.0 vision, received much less attention because it was not presented in a controversial manner. That fact was noted as the biggest flaw of the social bookmarking site digg, which was used to promote this article.

    Web 3.0 Developers:

    Feb 5, ‘07: The following external reference concerns the use of rule-based inference engines and ontologies in implementing the Semantic Web + AI vision (aka Web 3.0):

    1. Description Logic Programs: Combining Logic Programs with Description Logic (note: there are better, simpler ways of achieving the same purpose.)

    Jan 7, ‘07: The following Evolving Trends post discusses the current state of semantic search engines and ways to improve the paradigm:

    1. Designing a Better Web 3.0 Search Engine

    June 27, ‘06: Semantic MediaWiki project, enabling the insertion of semantic annotations (or metadata) into the content:

    1. http://semantic-mediawiki.org/wiki/Semantic_MediaWiki (see note on Wikia below)

    Wikipedia’s Founder and Web 3.0

    (more…)

    Read Full Post »

    Evolving Trends

    Google Warming Up to the Wikipedia 3.0 vision?

    In Uncategorized on December 14, 2007 at 8:09 pm

    [source: slashdot.org]

    Google’s “Knol” Reinvents Wikipedia

    Posted by CmdrTaco on Friday December 14, @08:31AM
    from the only-a-matter-of-time dept.


    teslatug writes “Google appears to be reinventing Wikipedia with their new product that they call knol (not yet publicly available). In an attempt to gather human knowledge, Google will accept articles from users who will be credited with the article by name. If they want, they can allow ads to appear alongside the content and they will be getting a share of the profits if that’s the case. Other users will be allowed to rate, edit or comment on the articles. The content does not have to be exclusive to Google but no mention is made on any license for it. Is this a better model for free information gathering?”

    This article, Wikipedia 3.0: The End of Google?, which gives you an idea why Google would want its own Wikipedia, was on the Google Finance page for at least 3 months whenever anyone looked up the Google stock symbol, so Google employees, investors and executives must have seen it.

    Is it a coincidence that Google is building its own Wikipedia now?

    The only problem is a flaw in Google’s thinking. People who author those articles on Wikipedia actually have brains. People with brains tend to have principles. Getting paid pennies to build the Google empire is rarely one of those principles.

    Related

    Read Full Post »

    The Top 100 Alternative Search Engines

    Written by Charles Knight, AltSearchEngines editor / January 29, 2007 2:34 AM / 104 Comments


    Written by Charles S. Knight, SEO, and edited by Richard MacManus. The Top 100 is listed at the end of the analysis.

    Ask anyone which search engine they use to find information on the Internet and they will almost certainly reply: “Google.” Look a little further, and market research shows that people actually use four main search engines for 99.99% of their searches: Google, Yahoo!, MSN, and Ask.com (in that order). But in my travels as a Search Engine Optimizer (SEO), I have discovered that in that .01% lies a vast multitude of the most innovative and creative search engines you have never seen. So many, in fact, that I have had to limit my list of the very best ones to a mere 100.

    But it’s not just the sheer number of them that makes them worthy of attention; each one of these search engines has that standard “About Us” link at the bottom of the homepage. I call it the “why we’re better than Google” page. And after reading dozens and dozens of these pages, I have come to the conclusion that, taken as a whole, they are right!

    The Search Homepage

    In order to address their claims systematically, it helps to group them into categories and then compare them to their Google counterparts. For example, let’s look at the first thing that almost everyone sees when they go to search the Internet – the ubiquitous Google homepage. That famously sparse, clean sheet of paper with the colorful Google logo is the most popular Web page in the entire World Wide Web. For millions and millions of Internet users, that Spartan white page IS the Internet.

    Google has successfully made their site the front door through which everyone passes in order to access the Internet. But staring at an almost blank sheet of paper has become, well, boring. Take Ms. Dewey for example. While some may object to her sultry demeanor, it’s pretty hard to deny that interfacing with her is far more visually appealing than with an inert white screen.

    A second example comes from Simply Google. Instead of squeezing through the keyhole in order to reach Google’s 37 search options, Simply Google places all of those choices and many, many more all on the very first page; neatly arranged in columns.

    Artificial Intelligence

    A second arena is sometimes referred to as Natural Language Processing (NLP), or Artificial Intelligence (AI). It is the desire we all have to ask a search engine questions in everyday sentences and receive a human-like answer (remember “Good Morning, HAL”?). Many of us remember Ask Jeeves, the famous butler, an early attempt in this direction that unfortunately failed.

    Google’s approach, Google Answers, was to enlist a cadre of “experts.” The concept was that you would pose a question to one of these experts, negotiate a price for an answer, and then pay up when it was found and delivered. It was such a failure that Google had to cancel the whole program. Enter ChaCha. With ChaCha, you can pose any question that you wish, click on the “Search With Guide” button, and a ChaCha Guide appears in a chat box and dialogues with you until you find what you are looking for. There’s no time limit, and no fee.

    Clustering Engines

    Perhaps Google’s most glaring and egregious shortcoming is their insistence on displaying the outcome of a search in an impossibly long, one-dimensional list of results. We all intuitively know that the World Wide Web is just that, a three-dimensional (“3-D”) web of interconnected Web pages. Several search engines, known as clustering engines, routinely present their search results on a two-dimensional map that one can navigate through in search of the best answer. Search engines like KartOO and Quintura are excellent examples.
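
    Behind the scenes, a clustering engine is doing something like the following toy sketch: vectorize the result snippets, group them, and lay the groups out instead of presenting a flat ranked list. This generic TF-IDF and k-means example is an assumption-laden illustration, not KartOO’s or Quintura’s actual algorithm:

```python
# Toy version of what a clustering engine does: vectorize result snippets,
# group them, and present groups instead of one long ranked list.
# Generic TF-IDF + k-means sketch; not any particular engine's algorithm.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

snippets = [
    "Jaguar speed and habitat in South American rainforests",
    "Jaguar XK engine specifications and maintenance tips",
    "Big cats of the Amazon: jaguar conservation efforts",
    "Used Jaguar cars for sale, prices and dealer reviews",
]

X = TfidfVectorizer(stop_words="english").fit_transform(snippets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for cluster in sorted(set(labels)):
    print(f"cluster {cluster}:")
    for text, label in zip(snippets, labels):
        if label == cluster:
            print("  ", text)
```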

    Recommendation Search Engines

    Another promising category is the recommendation search engines. While Google essentially helps you to find what you already know (you just can’t find it), recommendation engines show you a whole world of things that you didn’t even know existed. Check out What to Rent, Music Map, or the stunning Live Plasma display. When you input a favorite movie, book, or artist, they recommend to you a world of titles or similar artists that you may never have heard of, but would most likely enjoy.

    Metasearch Engines

    Next we come to the metasearch engines. When you perform a search on Google, the results that you get are all from, well, Google! But metasearch engines have been around for years. They allow you to search not only Google but a variety of other search engines too, in one fell swoop. There are many search engines that can do this. Dogpile, for instance, searches all of the “big four” mentioned above (Google, Yahoo!, MSN, and Ask) simultaneously. You could also try Zuula or PlanetSearch, which plows through 16 search engines at a time for you. A very interesting site to watch is GoshMe. Instead of searching an incredible number of Web pages, like conventional search engines, GoshMe searches for search engines (or databases) that each tap into an incredible number of Web pages. As I perceive it, GoshMe is a meta-metasearch engine (still in Beta)!
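
    The fan-out mechanics of a metasearch engine are simple to sketch: send the same query to several underlying engines in parallel and merge what comes back. The per-engine fetchers below are hypothetical stubs; a real metasearch engine has to parse each provider’s results, rank and deduplicate them, and respect each provider’s terms of service:

```python
# Skeleton of metasearch fan-out: send one query to several engines at once and
# merge what comes back. The per-engine fetchers are hypothetical stubs; a real
# metasearch engine must parse each provider's results and respect its terms.
from concurrent.futures import ThreadPoolExecutor

def search_engine_a(query):    # stub standing in for one underlying engine
    return [f"engine-a result for {query!r} #{i}" for i in range(3)]

def search_engine_b(query):    # stub standing in for another engine
    return [f"engine-b result for {query!r} #{i}" for i in range(3)]

ENGINES = [search_engine_a, search_engine_b]

def metasearch(query):
    with ThreadPoolExecutor(max_workers=len(ENGINES)) as pool:
        result_lists = list(pool.map(lambda engine: engine(query), ENGINES))
    merged, seen = [], set()
    for results in result_lists:           # naive merge with deduplication
        for r in results:
            if r not in seen:
                seen.add(r)
                merged.append(r)
    return merged

print(metasearch("alternative search engines"))
```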

    Other Alt Search Engines

    And so it goes, feature after feature after feature. TheFind is a better shopping experience than Google’s Froogle, IMHO. Like is a true visual search engine, unlike Google’s Images, which just matches your keywords to images that have been tagged with those same keywords. Coming soon is Mobot (see the Demo at http://www.mobot.com). Google Mobile does let you perform a search on your mobile phone, but check out the Slifter Mobile Demo when you get a chance!

    Finally, almost prophetically, Google is silent. Silent! At least Speeglebot talks to you, and Nayio listens! But of course, why should Google worry about these upstarts (all 100 of them)? Aren’t they just like flies buzzing around an elephant? Can’t Google just ignore them, as their share of the search market continues to creep upwards towards 100%, or perhaps just buy them? Perhaps.

    The Last Question

    Isaac Asimov, the preeminent science fiction writer of our time, once said that his favorite story, by far, was The Last Question. The question, for those who have not read it, is “Can Entropy Be Reversed?” That is, can the ultimate running down of all things, the burning out of all stars (or their collapse), be stopped, or is it hopelessly inevitable?

    The question for this age, I submit, is… “Can Google Be Defeated?” Or is Google’s mission “to organize the world’s information and make it universally accessible and useful” a fait accompli?

    Perhaps the place to start is by reading (or re-reading) Asimov’s “The Last Question.” I won’t give it away, but it does suggest The Answer…

    Charles Knight is the Principal of Charles Knight SEO, a Search Engine Optimization company in Charlottesville, VA.

    The Top 100

    For an Excel spreadsheet of the entire Top 100 Alternative Search Engines, go to: http://charlesknightseo.com/list.aspx or email the author at Charles@CharlesKnightSEO.com.

    This list is in alphabetical order. Feel free to share this list, but please retain Charles’ name and email.

    Update: Thanks Sanjeev Narang for providing a hyperlinked version of the list.

    Update, 5 February 2007: Charles Knight has left a detailed comment (#94) in response to all the great feedback in the comments to this post. He also notes:

    “…while it looks like a very simple, almost crude list of 100 names, it has taken countless hours to try and do it properly and fairly. The list will be updated all year long, and the Top 100 can only get better and better until the Best of 2007 are announced on 12/31/07.”

    Read Full Post »

    Science 2.0: Great New Tool, or Great Risk?

    Wikis, blogs and other collaborative web technologies could usher in a new era of science. Or not.

    By M. Mitchell Waldrop




    Welcome to a Scientific American experiment in “networked journalism,” in which readers—you—get to collaborate with the author to give a story its final form.

    The article, below, is a particularly apt candidate for such an experiment: it’s my feature story on “Science 2.0,” which describes how researchers are beginning to harness wikis, blogs and other Web 2.0 technologies as a potentially transformative way of doing science. The draft article appears here, several months in advance of its print publication, and we are inviting you to comment on it. Your inputs will influence the article’s content, reporting, perhaps even its point of view.

    So consider yourself invited. Please share your thoughts about the promise and peril of Science 2.0; just post your inputs in the Comment section below. To help get you started, here are some questions to mull over:

    • What do you think of the article itself? Are there errors? Oversimplifications? Gaps?
    • What do you think of the notion of “Science 2.0?” Will Web 2.0 tools really make science much more productive? Will wikis, blogs and the like be transformative, or will they be just a minor convenience?
    • Science 2.0 is one aspect of a broader Open Science movement, which also includes Open-Access scientific publishing and Open Data practices. How do you think this bigger movement will evolve?
    • Looking at your own scientific field, how real is the suspicion and mistrust mentioned in the article? How much do you and your colleagues worry about getting “scooped”? Do you have first-hand knowledge of a case in which that has actually happened?
    • When young scientists speak out on an open blog or wiki, do they risk hurting their careers?
    • Is “open notebook” science always a good idea? Are there certain aspects of a project that researchers should keep quiet, at least until the paper is published?

    –M. Mitchell Waldrop

    The explosively growing World Wide Web has rapidly transformed retailing, publishing, personal communication and much more. Innovations such as e-commerce, blogging, downloading and open-source software have forced old-line institutions to adopt whole new ways of thinking, working and doing business.

    Science could be next. A small but growing number of researchers–and not just the younger ones–have begun to carry out their work via the wide-open blogs, wikis and social networks of Web 2.0. And although their efforts are still too scattered to be called a movement–yet–their experiences to date suggest that this kind of Web-based “Science 2.0” is not only more collegial than the traditional variety, but considerably more productive.

    “Science happens not just because of people doing experiments, but because they’re discussing those experiments,” explains Christopher Surridge, editor of the Web-based journal, Public Library of Science On-Line Edition (PLoS ONE). Critiquing, suggesting, sharing ideas and data–communication is the heart of science, the most powerful tool ever invented for correcting mistakes, building on colleagues’ work and creating new knowledge. And not just communication in peer-reviewed papers; as important as those papers are, says Surridge, who publishes a lot of them, “they’re effectively just snapshots of what the authors have done and thought at this moment in time. They are not collaborative beyond that, except for rudimentary mechanisms such as citations and letters to the editor.”

    The technologies of Web 2.0 open up a much richer dialog, says Bill Hooker, a postdoctoral cancer researcher at the Shriners Hospital for Children in Portland, Ore., and the author of a three-part survey of open-science efforts in the group blog, 3 Quarks Daily. “To me, opening up my lab notebook means giving people a window into what I’m doing every day. That’s an immense leap forward in clarity. In a paper, I can see what you’ve done. But I don’t know how many things you tried that didn’t work. It’s those little details that become clear with open notebook, but are obscured by every other communication mechanism we have. It makes science more efficient.” That jump in efficiency, in turn, could have huge payoffs for society, in everything from faster drug development to greater national competitiveness.

    Of course, many scientists remain highly skeptical of such openness–especially in the hyper-competitive biomedical fields, where patents, promotion and tenure can hinge on being the first to publish a new discovery. From that perspective, Science 2.0 seems dangerous: using blogs and social networks for your serious work feels like an open invitation to have your online lab notebooks vandalized–or worse, have your best ideas stolen and published by a rival.

    To Science 2.0 advocates, however, that atmosphere of suspicion and mistrust is an ally. “When you do your work online, out in the open,” Hooker says, “you quickly find that you’re not competing with other scientists anymore, but cooperating with them.”

    Rousing Success
    In principle, says PLoS ONE’s Surridge, scientists should find the transition to Web 2.0 perfectly natural. After all, since the time of Galileo and Newton, scientists have built up their knowledge about the world by “crowd-sourcing” the contributions of many researchers and then refining that knowledge through open debate. “Web 2.0 fits so perfectly with the way science works, it’s not whether the transition will happen but how fast,” he says.

    The OpenWetWare project at MIT is an early success. Launched in the spring of 2005 by graduate students working for MIT biological engineers Drew Endy and Thomas Knight, who collaborate on synthetic biology, the project was originally seen as just a better way to keep the two labs’ Web sites up to date. OpenWetWare is a wiki–a collaborative Web site that can be edited by anyone who has access to it; it even uses the same software that underlies the online encyclopedia Wikipedia. Students happily started posting pages introducing themselves and their research, without having to wait for a Webmaster to do it for them.

    But then, users discovered that the wiki was also a convenient place to post what they were learning about lab techniques: manipulating and analyzing DNA, getting cell cultures to grow. “A lot of the ‘how-to’ gets passed around as lore in biology labs, and never makes it into the protocol manuals,” says Jason Kelly, a graduate student of Endy’s who now sits on the OpenWetWare steering committee. “But we didn’t have that.” Most of the students came from a background in engineering; theirs was a young lab with almost no mentors. So whenever a student or postdoc managed to stumble through a new protocol, he or she would write it all down on a wiki page before the lessons were forgotten. Others would then add whatever new tricks they had learned. This was not altruism, notes steering-committee member Reshma Shetty. “The information was actually useful to me.” But by helping herself, she adds, “that information also became available around the world.”

    Indeed, Kelly points out, “Most of our new users came to us because they’d been searching Google for information on a protocol, found it posted on our site, and said ‘Hey!’ As more and more labs got on, it became pretty apparent that there were lots of other interesting things they could do.”

    Classes, for example. Instead of making do with a static Web page posted by a professor, users began to create dynamically evolving class sites where they could post lab results, ask questions, discuss the answers and even write collaborative essays. “And it all stayed on the site, where it made the class better for next year,” says Shetty, who has created an OpenWetWare template for creating such class sites.

    Laboratory management benefited too. “I didn’t even know what a wiki was,” recalls Maureen Hoatlin of the Oregon Health & Science University in Portland, where she runs a lab studying the genetic disorder Fanconi anemia. But she did know that the frenetic pace of research in her field was making it harder to keep up with what her own team members were doing, much less Fanconi researchers elsewhere. “I was looking for a tool that would help me organize all that information,” Hoatlin says. “I wanted it to be Web-based, because I travel a lot and needed to access it from wherever I was. And I wanted something my collaborators and group members could add to dynamically, so that whatever I saw on that Web page would be the most recently updated version.”

    OpenWetWare, which Hoatlin saw in the spring of 2006, fit the bill perfectly. “The transparency turned out to be very powerful,” she says. “I came to love the interaction, the fact that people in other labs could comment on what we do and vice versa. When I see how fast that is, and its power to move science forward–there is nothing like it.”

    Numerous others now work through OpenWetWare to coordinate research. SyntheticBiology.org, one of the site’s most active interest groups, currently comprises six laboratories in three states, and includes postings about jobs, meetings, discussions of ethics, and much more.

    In short, OpenWetWare has quickly grown into a social network catering to a wide cross-section of biologists and biological engineers. It currently encompasses laboratories on five continents, dozens of courses and interest groups, and hundreds of protocol discussions–more than 6,100 Web pages edited by 3,000 registered users. A May 2007 grant from the National Science Foundation launched the OpenWetWare team on a five-year effort to transform OpenWetWare into a self-sustaining community independent of its current base at MIT. The grant will also support development of many new practical tools, such as ways to interface biological databases with the wiki, along with a generic version of OpenWetWare that can be used by other research communities, such as neuroscience, and by individual investigators.

    Skepticism Persists
    For all the participants’ enthusiasm, however, this wide-open approach to science still faces intense skepticism. Even Hoatlin found the openness unnerving at first. “Now I’m converted to open wikis for everything possible,” she says. “But when I originally joined I wanted to keep everything private”–not least to keep her lab pages from getting trashed by some random hacker. She did not relax until she began to understand the system’s built-in safeguards.

    First and foremost, says MIT’s Kelly, “you can’t hide behind anonymity.” By default, OpenWetWare pages are visible to anyone (although researchers have the option to make pages private). But unlike the oft-defaced Wikipedia, the system will let users make changes only after they have registered and established that they belong to a legitimate research organization. “We’ve never yet had a case of vandalism,” Kelly says. Even if they did, the wiki automatically maintains a copy of every version of every page posted: “You could always just roll back the damage with a click of your mouse.”

    Unfortunately, this kind of technical safeguard does little to address a second concern: Getting scooped and losing the credit. “That’s the first argument people bring to the table,” says Drexel University chemist Jean-Claude Bradley, who created his independent laboratory wiki, UsefulChem, in December 2005. Even if incidents are rare in reality, Bradley says, everyone has heard a story, which is enough to keep most scientists from even discussing their unpublished work too freely, much less posting it on the Internet.

    However, the Web provides better protection than the traditional journal system, Bradley maintains. Every change on a wiki gets a time-stamp, he notes, “so if someone actually did try to scoop you, it would be very easy to prove your priority–and to embarrass them. I think that’s really what is going to drive open science: the fear factor. If you wait for the journals, your work won’t appear for another six to nine months. But with open science, your claim to priority is out there right away.”

    Under Bradley’s radically transparent “open notebook” approach, as he calls it, everything goes online: experimental protocols, successful outcomes, failed attempts, even discussions of papers being prepared for publication. “A simple wiki makes an almost perfect lab notebook,” he declares. The time-stamps on every entry not only establish priority, but allow anyone to track the contributions of every person, even in a large collaboration.
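
    Bradley’s time-stamp argument is easy to see in practice. Wikis such as OpenWetWare run on MediaWiki, the same software behind Wikipedia, and every page carries a machine-readable revision log. The minimal sketch below assumes a hypothetical lab wiki exposing the standard MediaWiki api.php endpoint (the URL and page title are placeholders, not real sites); it pulls a page’s revision history and prints the kind of dated, attributed record of contributions that an open notebook provides.

```python
import requests

API_URL = "https://example-lab-wiki.org/api.php"   # hypothetical endpoint, for illustration only
PAGE_TITLE = "Protocols/DNA_extraction"            # hypothetical page title

def revision_history(api_url, title, limit=20):
    """Return (timestamp, user, comment) tuples for a page's most recent edits."""
    params = {
        "action": "query",                  # standard MediaWiki query module
        "prop": "revisions",                # ask for the page's revision log
        "titles": title,
        "rvprop": "timestamp|user|comment", # when, who, and what they said they changed
        "rvlimit": limit,
        "format": "json",
    }
    data = requests.get(api_url, params=params, timeout=10).json()
    history = []
    for page in data["query"]["pages"].values():
        for rev in page.get("revisions", []):
            history.append((rev["timestamp"], rev["user"], rev.get("comment", "")))
    return history

if __name__ == "__main__":
    # Each line is a dated, attributed record of who contributed what and when,
    # the sort of evidence Bradley says makes it easy to establish priority.
    for timestamp, user, comment in revision_history(API_URL, PAGE_TITLE):
        print(f"{timestamp}  {user}: {comment}")
```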

    Bradley concedes that there are sometimes legitimate reasons for researchers to think twice about being so open. If work involves patients or other human subjects, for example, privacy is obviously a concern. And if you think your work might lead to a patent, it is still not clear that the patent office will accept a wiki posting as proof of your priority. Until that is sorted out, he says, “the typical legal advice is: do not disclose your ideas before you file.”

    Still, Bradley says the more open scientists are, the better. When he started UsefulChem, for example, his lab was investigating the synthesis of drugs to fight diseases such as malaria. But because search engines could index what his team was doing without needing a bunch of passwords, “we suddenly found people discovering us on Google and wanting to work together. The National Cancer Institute contacted me wanting to test our compounds as anti-tumor agents. Rajarshi Guha at Indiana University offered to help us do calculations about docking–figuring out which molecules will be reactive. And there were others. So now we’re not just one lab doing research, but a network of labs collaborating.”

    Blogophobia
    Although wikis are gaining ground, scientists have been strikingly slow to embrace one of the most popular Web 2.0 applications: Web logging, or blogging.

    “It’s so antithetical to the way scientists are trained,” Duke University geneticist Huntington F. Willard said at the April 2007 North Carolina Science Blogging Conference, one of the first national gatherings devoted to this topic. The whole point of blogging is spontaneity–getting your ideas out there quickly, even at the risk of being wrong or incomplete. “But to a scientist, that’s a tough jump to make,” says Willard, head of Duke’s Institute for Genome Sciences & Policy. “When we publish things, by and large, we’ve gone through a very long process of drafting a paper and getting it peer reviewed. Every word is carefully chosen, because it’s going to stay there for all time. No one wants to read, ‘Contrary to the result of Willard and his colleagues…’.”

    Still, Willard favors blogging. As a frequent author of newspaper op-ed pieces, he feels that scientists should make their voices heard in every responsible way possible. Blogging is slowly beginning to catch on; because most blogs allow outsiders to comment on the individual posts, they have proved to be a good medium for brainstorming and discussions of all kinds. Bradley’s UsefulChem blog is an example. Paul Bracher’s Chembark is another. “Chembark has morphed into the water cooler of chemistry,” says Bracher, who is pursuing his Ph.D. in that field at Harvard University. “The conversations are: What should the research agencies be funding? What is the proper way to manage a lab? What types of behavior do you admire in a boss? But instead of having five people around a single water cooler you have hundreds of people around the world.”

    Of course, for many members of Bracher’s primary audience–young scientists still struggling to get tenure–those discussions can look like a minefield. A fair number of the participants use pseudonyms, out of fear that a comment might offend some professor’s sensibilities, hurting a student’s chances of getting a job later. Other potential participants never get involved because they feel that time spent with the online community is time not spent on cranking out that next publication. “The peer-reviewed paper is the cornerstone of jobs and promotion,” says PLoS ONE’s Surridge. “Scientists don’t blog because they get no credit.”

    The credit-assignment problem is one of the biggest barriers to the widespread adoption of blogging or any other aspect of Science 2.0, agrees Timo Hannay, head of Web publishing at the Nature Publishing Group in London. (That group’s parent company, Macmillan, also owns Scientific American.) Once again, however, the technology itself may help. “Nobody believes that a scientist’s only contribution is from the papers he or she publishes,” Hannay says. “People understand that a good scientist also gives talks at conferences, shares ideas, takes a leadership role in the community. It’s just that publications were always the one thing you could measure. Now, however, as more of this informal communication goes online, that will get easier to measure too.”

    Collaboration the Payoff
    The acceptance of any such measure would require a big change in the culture of academic science. But for Science 2.0 advocates, the real significance of Web technologies is their potential to move researchers away from an obsessive focus on priority and publication, toward the kind of openness and community that were supposed to be the hallmark of science in the first place. “I don’t see the disappearance of the formal research paper anytime soon,” Surridge says. “But I do see the growth of lots more collaborative activity building up to publication.” And afterwards as well: PLoS ONE allows users not only to annotate and comment on the papers it publishes online but also to rate the papers’ quality on a scale of 1 to 5.

    Meanwhile, Hannay has been taking the Nature group into the Web 2.0 world aggressively. “Our real mission isn’t to publish journals, but to facilitate scientific communication,” he says. “We’ve recognized that the Web can completely change the way that communication happens.” Among the efforts are Nature Network, a social network designed for scientists; Connotea, a social bookmarking site patterned on the popular site del.icio.us, but optimized for the management of research references; and even an experiment in open peer review, with pre-publication manuscripts made available for public comment.

    Indeed, says Bora Zivkovic, a circadian rhythm expert who writes at A Blog Around the Clock and who is the Online Community Manager for PLoS ONE, the various experiments in Science 2.0 are now proliferating so rapidly that it is almost impossible to keep track of them. “It’s a Darwinian process,” he says. “About 99 percent of these ideas are going to die. But some will emerge and spread.”

    “I wouldn’t like to predict where all this is going to go,” Hooker adds. “But I’d be happy to bet that we’re going to like it when we get there.”
