
Archive for the ‘Future’ Category

April 3, 2009 11:27 AM PDT

Google shows off Gmail mobile Web app

 

Google’s HTML 5-based Web version of Gmail shown on an Android phone

(Credit: Stephen Shankland/CNET)

SAN FRANCISCO–What Google did with Gmail in conventional browsers five years ago, it expects to do again with a new mobile version of its Web-based e-mail service.

Vic Gundotra, who leads Google’s mobile software and developer relations efforts, showed off the Web application “technical prototype” Friday in an onstage interview here at the Web 2.0 Expo. Google offers Gmail applications that run natively on BlackBerry and Android mobile phones, but the company clearly has high hopes for a Web-based version as well.

Building a Web interface means Google can reach more phones more easily, Gundotra said, as phone browsers get more sophisticated and their Internet connectivity gets better. “Imagine if you could build apps that ran across all these phones,” Gundotra said.

As he did in a similar demonstration in February, Gundotra showed a version running on an iPhone and on a phone using Google’s Android operating system–apparently the HTC Magic.

The software relied on features in HTML 5, the still-under-development version of the technology that underpins Web site design. Specifically, it used offline data access so the application could read e-mail even while there was no Internet connection.
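To make this concrete, here is a minimal sketch, not Google’s actual implementation (which was not disclosed), of how a webmail client can use HTML 5-style client-side storage to keep mail readable offline. The Message shape and the /api/inbox endpoint are hypothetical:

```typescript
// Minimal sketch of offline caching for a webmail client using
// HTML 5-era client-side storage. Hypothetical endpoint and data shape;
// Google's prototype used the offline-storage APIs of the day, not this code.
interface Message {
  id: string;
  subject: string;
  body: string;
}

const CACHE_KEY = "inbox-cache";

async function loadInbox(): Promise<Message[]> {
  try {
    // Online: fetch fresh mail and refresh the local cache.
    const response = await fetch("/api/inbox");
    const messages: Message[] = await response.json();
    localStorage.setItem(CACHE_KEY, JSON.stringify(messages));
    return messages;
  } catch {
    // Offline or the request failed: fall back to the last cached copy,
    // so e-mail stays readable with no Internet connection.
    const cached = localStorage.getItem(CACHE_KEY);
    return cached ? (JSON.parse(cached) as Message[]) : [];
  }
}
```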

“When we make it broadly available, people are going to see this as the first HTML 5 mobile application,” Gundotra said, declining to say when it would become available. “It’ll be like Gmail in 2004. It was a great watershed moment for Ajax apps,” which employ JavaScript for relatively sophisticated browser-based interfaces.

 

Vic Gundotra, head of Google’s mobile software and developer work, speaking at the Web 2.0 Expo.

(Credit: Stephen Shankland/CNET)

The mobile Gmail application also featured a floating toolbar that stayed perched at the top of the inbox, offering constant access to delete and archive buttons and a menu of further options.

Mobile is central to Google’s work. The company already offers a search application for the iPhone and some other models that lets people issue queries by speaking rather than just typing. The accuracy of the speech recognition has improved 15 percent in the last quarter, Gundotra said, and usage of the service is growing fast.

Gundotra previously worked at Microsoft, but it was a few words from his then 4-year-old daughter that led him to Google. He’d told a friend he didn’t know the answer to a question, and his daughter, overhearing, asked him, “Daddy, where’s your phone?”

“In her brief four years of life, she assumed any time you didn’t know the answer to a question, you brought out your phone. For her the phone was the ultimate answering machine,” something that answered questions. That helped him realize that Google’s mission of organizing the world’s information and presenting it to people would happen in mobile phones, too.

Google likes HTML 5, but it will take time for the standard to be broadly adopted. In the meantime, alternatives exist for richer Internet applications, notably Adobe Systems’ Flash. Also up and coming are AIR, a browserless relative of Flash from Adobe, and Silverlight, a Flash rival from Microsoft.


Google showed off a better browser version of Gmail on the iPhone.

(Credit: Stephen Shankland/CNET)

Asked about AIR, Gundotra said, “I think Adobe has got some great products,” mentioning Google’s use of Flash to power video streaming at YouTube. “There’s also Silverlight from Microsoft. I am biased toward open Web standards,” Gundotra said.

And he touted another HTML 5 feature: “I predict we will see video tag become broadly adopted,” a technology that could enable video streaming without a Flash player, similar to the way Web browsers can show graphics without requiring separate plug-ins.
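The feature behind that prediction is HTML 5’s video element, which a page can use directly, with no plug-in required. A minimal sketch, with a placeholder file URL:

```typescript
// Play a clip with the HTML 5 <video> element instead of a Flash player.
// The URL is a placeholder; the browser must support the codec natively.
const player = document.createElement("video");
player.src = "/media/clip.mp4";
player.controls = true; // built-in play/pause/seek controls
document.body.appendChild(player);
```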

Gundotra also had words of praise for Google App Engine, a year-old service that can be used to run Web-based applications. One such application hosted on Google App Engine is Google Moderator, which lets people submit questions and rank which ones they want to hear answered. Moderator originated as a way for Google employees to ask questions of co-founders Larry Page and Sergey Brin during weekly employee meetings, Gundotra said.

Google was excited but scared when the White House said it planned to use Google Moderator for an online town hall meeting with President Barack Obama, Gundotra said.

But it held up under the load, and “the 45,000 other apps (on Google App Engine) were totally unaffected by this much scale,” Gundotra said.

The town hall moderator system handled nearly 700 queries per second at its peak, with 3.6 million people voting on the questions they wanted to hear answered, he said.

 


Traffic spiked at Google Moderator when the White House used it to handle questions.

(Credit: Stephen Shankland/CNET)


Read Full Post »

The Cognitive Age

Op-Ed Columnist


 

Published: May 2, 2008
If you go into a good library, you will find thousands of books on globalization. Some will laud it. Some will warn about its dangers. But they’ll agree that globalization is the chief process driving our age. Our lives are being transformed by the increasing movement of goods, people and capital across borders.

The globalization paradigm has led, in the political arena, to a certain historical narrative: There were once nation-states like the U.S. and the European powers, whose economies could be secured within borders. But now capital flows freely. Technology has leveled the playing field. Competition is global and fierce.

New dynamos like India and China threaten American dominance thanks to their cheap labor and manipulated currencies. Now, everything is made abroad. American manufacturing is in decline. The rest of the economy is threatened.

Hillary Clinton summarized the narrative this week: “They came for the steel companies and nobody said anything. They came for the auto companies and nobody said anything. They came for the office companies, people who did white-collar service jobs, and no one said anything. And they came for the professional jobs that could be outsourced, and nobody said anything.”

The globalization paradigm has turned out to be very convenient for politicians. It allows them to blame foreigners for economic woes. It allows them to pretend that by rewriting trade deals, they can assuage economic anxiety. It allows them to treat economic and social change as a great mercantilist competition, with various teams competing for global supremacy, and with politicians starring as the commanding generals.

But there’s a problem with the way the globalization paradigm has evolved. It doesn’t really explain most of what is happening in the world.

Globalization is real and important. It’s just not the central force driving economic change. Some Americans have seen their jobs shipped overseas, but global competition has accounted for a small share of job creation and destruction over the past few decades. Capital does indeed flow around the world. But as Pankaj Ghemawat of the Harvard Business School has observed, 90 percent of fixed investment around the world is domestic. Companies open plants overseas, but that’s mainly so their production facilities can be close to local markets.

Nor is the globalization paradigm even accurate when applied to manufacturing. Instead of fleeing to Asia, U.S. manufacturing output is up over recent decades. As Thomas Duesterberg of Manufacturers Alliance/MAPI, a research firm, has pointed out, the U.S.’s share of global manufacturing output has actually increased slightly since 1980.

The chief force reshaping manufacturing is technological change (hastened by competition with other companies in Canada, Germany or down the street). Thanks to innovation, manufacturing productivity has doubled over two decades. Employers now require fewer but more highly skilled workers. Technological change affects China just as it does America. William Overholt of the RAND Corporation has noted that between 1994 and 2004 the Chinese shed 25 million manufacturing jobs, 10 times more than the U.S.

The central process driving this is not globalization. It’s the skills revolution. We’re moving into a more demanding cognitive age. In order to thrive, people are compelled to become better at absorbing, processing and combining information. This is happening in localized and globalized sectors, and it would be happening even if you tore up every free trade deal ever inked.

The globalization paradigm emphasizes the fact that information can now travel 15,000 miles in an instant. But the most important part of information’s journey is the last few inches — the space between a person’s eyes or ears and the various regions of the brain. Does the individual have the capacity to understand the information? Does he or she have the training to exploit it? Are there cultural assumptions that distort the way it is perceived?

The globalization paradigm leads people to see economic development as a form of foreign policy, as a grand competition between nations and civilizations. These abstractions, called “the Chinese” or “the Indians,” are doing this or that. But the cognitive age paradigm emphasizes psychology, culture and pedagogy — the specific processes that foster learning. It emphasizes that different societies are being stressed in similar ways by increased demands on human capital. If you understand that you are living at the beginning of a cognitive age, you’re focusing on the real source of prosperity and understand that your anxiety is not being caused by a foreigner.

It’s not that globalization and the skills revolution are contradictory processes. But which paradigm you embrace determines which facts and remedies you emphasize. Politicians, especially Democratic ones, have fallen in love with the globalization paradigm. It’s time to move beyond it.

Read Full Post »

Pursuing the Next Level of Artificial Intelligence

Jim Wilson/The New York Times

Daphne Koller’s award-winning work in artificial intelligence has had commercial impact.

 

Published: May 3, 2008

PALO ALTO, Calif. — Like a good gambler, Daphne Koller, a researcher at Stanford whose work has led to advances in artificial intelligence, sees the world as a web of probabilities.

There is, however, nothing uncertain about her impact.

A mathematical theoretician, she has made contributions in areas like robotics and biology. Her biggest accomplishment — and at age 39, she is expected to make more — is creating a set of computational tools for artificial intelligence that can be used by scientists and engineers to do things like predict traffic jams, improve machine vision and understand the way cancer spreads.

Ms. Koller’s work, building on an 18th-century theorem about probability, has already had an important commercial impact, and her colleagues say that will grow in the coming decade. Her techniques have been used to improve computer vision systems and in understanding natural language, and in the future they are expected to lead to an improved generation of Web search.

“She’s on the bleeding edge of the leading edge,” said Gary Bradski, a machine vision researcher at Willow Garage, a robotics start-up firm in Menlo Park, Calif.

Ms. Koller was honored last week with a new computer sciences award sponsored by the Association for Computing Machinery and the Infosys Foundation, the philanthropic arm of the Indian computer services firm Infosys.

The award to Ms. Koller, with a prize of $150,000, is viewed by scientists and industry executives as validating her research, which has helped transform artificial intelligence from science fiction and speculation into an engineering discipline that is creating an array of intelligent machines and systems. It is not the first such recognition; in 2004, Ms. Koller received a $500,000 MacArthur Fellowship.

Ms. Koller is part of a revival of interest in artificial intelligence. After three decades of disappointments, artificial intelligence researchers are making progress. Recent developments have made possible spam filters, Microsoft’s new ClearFlow traffic maps and the driverless robotic cars that Stanford teams have built for competitions sponsored by the Defense Advanced Research Projects Agency.

Since arriving at Stanford as a professor in 1995, Ms. Koller has led a group of researchers who have reinvented the discipline of artificial intelligence. Pioneered during the 1960s, the field was originally dominated by efforts to build reasoning systems from logic and rules. Judea Pearl, a computer scientist at the University of California, Los Angeles, had a decade earlier advanced statistical techniques that relied on repeated measurements of real-world phenomena.

Called the Bayesian approach, it centers on a formula for updating the probabilities of events based on repeated observations. The Bayes rule, named for the 18th-century mathematician Thomas Bayes, describes how to transform a current assumption about an event into a revised, more accurate assumption after observing further evidence.
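In symbols, with H the current hypothesis and E the newly observed evidence, the rule reads:

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
```

The prior P(H) is the assumption going in; the posterior P(H | E) is the revised assumption after the observation; and P(E | H) measures how well the hypothesis predicts what was actually seen.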

Ms. Koller has led research that has greatly increased the scope of existing Bayesian-related software. “When I started in the mid- to late 1980s, there was a sense that numbers didn’t belong in A.I.,” she said in a recent interview. “People didn’t think in numbers, so why should computers use numbers?”

Ms. Koller is beginning to apply her algorithms more generally to help scientists discern patterns in vast collections of data.

“The world is noisy and messy,” Ms. Koller said. “You need to deal with the noise and uncertainty.”

That philosophy has led her to do research in game theory and artificial intelligence, and more recently in molecular biology.

Her tools led to a new type of cancer gene map based on examining the behavior of a large number of genes that are active in a variety of tumors. From the research, scientists were able to develop a new explanation of how breast tumors spread into bone.

One potentially promising area to apply Ms. Koller’s theoretical work will be the emerging field of information extraction, which could be applied to Web searches. Web pages would be read by software systems that could organize the information and effectively understand unstructured text.

“Daphne is one of the most passionate researchers in the A.I. community,” said Eric Horvitz, a Microsoft researcher and president of the Association for the Advancement of Artificial Intelligence. “After being immersed for a few years with the computational challenges of decoding regulatory genomics, she confided her excitement to me, saying something like, ‘I think I’ve become a biologist — I mean a real biologist — and it’s fabulous.’ ”

To that end, Ms. Koller is spending a sabbatical doing research with biologists at the University of California, San Francisco. Because biology is increasingly computational, her expertise is vital in gaining deeper understanding of cellular processes.

Ms. Koller grew up in an academic family in Israel, the daughter of a botanist and an English professor. While her father was spending a year at Stanford in 1981, when she was 12, she began programming on a Radio Shack PC that she shared with another student.

When her family returned to Israel the next year, she told her father, the botanist, that she was bored with high school and wanted to pursue something more stimulating in college. After half a year, she persuaded him to let her enter Hebrew University, where she studied computer science and mathematics.

By 17, she was teaching a database course at the university. The next year she received her master’s degree and then joined the Israeli Army before coming to the United States to study for a Ph.D. at Stanford.

She didn’t spend her time looking at a computer monitor. “I find it distressing that the view of the field is that you sit in your office by yourself surrounded by old pizza boxes and cans of Coke, hacking away at the bowels of the Windows operating system,” she said. “I spend most of my time thinking about things like how does a cell work or how do we understand images in the world around us?”

In recent years, many of her graduate students have gone to work at Google. However, she tries to persuade undergraduates to stay in academia and not rush off to become software engineers at start-up companies.

She acknowledges that the allure of Silicon Valley riches can be seductive. “My husband still berates me for not having jumped on the Google bandwagon at the beginning,” she said. Still, she insists she does not regret her decision to stay in academia. “I like the freedom to explore the things I care about,” she said.

Read Full Post »

April 19, 2008

The Wikipedia, Knowledge Preservation and DNA

I had an interesting thought today about the long-term preservation and transmission of human knowledge.

The Wikipedia may be on its way to becoming one of the best places in which to preserve knowledge for future generations. But this is just the beginning. What if we could encode the Wikipedia into the junk DNA portion of our own genome? It appears that something like this may actually be possible — at least according to some recent studies of the non-coding regions of the human genome.

If we could actually encode knowledge, like the Wikipedia for example, into our genome, the next logical step would be to find a way to access it directly.

At first we might only be able to access and read the knowledge stored in our DNA through a computationally intensive genetic analysis of an individual’s DNA. To correct any errors introduced by mutation, we would also need to cross-reference this individual data with similar analyses of the DNA of other people who carry the same data. But this is just the beginning. There are ways to store data with enough redundancy to protect against degradation; if we could do that, we might be able to eliminate the need for cross-referencing as a form of error correction — the data itself would be self-correcting, so to speak. If we could accomplish this, the next step would be to find a way for an individual to access the knowledge stored in their DNA in real time, directly. That is a long way off, but there may be a way to do it using some future nano-scale genomic-brain interface. This opens up some fascinating areas of speculation, to say the least.

 

Why The Wikipedia?

The Wikipedia has certain qualities that make it better than other forms of knowledge preservation and transmission:

  • The Wikipedia exists primarily in electronic form. It is not subject to age or decay like a physical encyclopedia or document. This means it can persist forever, and will not be lost to time, if it continues to be maintained electronically in the future.
  • The Wikipedia is replicated in multiple locations around the world. The fact that it is so easy to replicate, and is so widely replicated means that it is less at risk of being lost due to a local disaster at any given storage location. It also means it is more likely to continue, somewhere, as a living document that goes on to reflect majority consensus reality into the distant future. It is highly improbable that it will ever suffer the same fate as certain ancient documents which only existed in one place and were subsequently lost in floods, fires, or wars, etc. At this point only a planet-wide extinction level event could erase the Wikipedia and/or prevent future generations from finding it.
  • The Wikipedia is highly viral: its content is increasingly cited, and it is far ahead of any competing system in terms of coverage and brand recognition. Because so many other pieces of content on the Web and in other media treat the Wikipedia as the world’s global authority for knowledge, it is considered increasingly authoritative and grows ever more visible and widely cited. The Law of Increasing Returns suggests that this will continue to self-amplify, making the Wikipedia the best candidate for an authoritative global repository of knowledge.

What this means is that if you have any knowledge that you want to preserve for future generations, a good place to put it is in the Wikipedia. Putting it there almost guarantees that it will propagate around the world and throughout the human-explored universe (in the future, if we become a spacefaring civilization), and into the distant future of human civilizations.

The Potential For Storing Knowledge in DNA

Is it possible to store knowledge — such as the Wikipedia — in human DNA? It would certainly be useful if we could do this. By storing knowledge in the DNA of living humans, or of common bacteria for that matter, it could then potentially be passed down and spread through generations into the far future. However, the mutability of DNA might gradually introduce errors that would degrade the information carried within any particular line of DNA over long periods of time.

Perhaps this could be mitigated by comparing DNA samples from a large cross-section of the descendants of the original holders of a DNA knowledge archive, which would effectively enable statistical error cancellation. The farther in the future from the date at which the knowledge was “written” to the DNA of some number of humans, the more people’s DNA would be needed to eliminate the errors statistically. In principle, though, this would counteract mutations and enable the reliable recovery of messages in DNA even very far in the future.
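As a rough sketch of that statistical error cancellation, assuming the archive region can be read back from each sampled descendant as an aligned, fixed-length string of bases: a per-position majority vote recovers the original so long as no position has mutated in most of the sample.

```typescript
// Recover an archived DNA sequence from many mutated copies by
// per-position majority vote. Assumes copies are aligned and equal length.
function recoverArchive(copies: string[]): string {
  const length = copies[0].length;
  let consensus = "";
  for (let i = 0; i < length; i++) {
    // Count how often each base appears at position i across all copies.
    const counts = new Map<string, number>();
    for (const copy of copies) {
      counts.set(copy[i], (counts.get(copy[i]) ?? 0) + 1);
    }
    // Keep the most frequently observed base at this position.
    let best = "";
    let bestCount = 0;
    for (const [base, count] of counts) {
      if (count > bestCount) {
        best = base;
        bestCount = count;
      }
    }
    consensus += best;
  }
  return consensus;
}

// Example: three copies, each with a different single mutation.
console.log(recoverArchive(["ACGTA", "ACCTA", "ACGTT"])); // "ACGTA"
```

The more copies sampled, the less likely any given position is corrupted in a majority of them, which is why more people’s DNA is needed the further the reader is from the original writing.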

The fact that it is in principle possible to encode knowledge into human (or other) DNA raises the question of whether there is already knowledge stored there. It’s certainly worth a look! Maybe there is already a message there for us. One can only wonder whether an ancient “Wikipedia” of sorts is already written there.

Interestingly enough, when certain statistical tests are run against human DNA, it does seem to have properties that are indicative of written language, but only in the “junk” regions of the genome. Maybe it’s not “junk” after all. Below is an article that discusses a recent discovery related to this:

Language in junk DNA

You’ve probably heard of a molecule called DNA, otherwise known as “The Blueprint Of Life”. Molecular biologists have been examining and mapping the DNA for a few decades now. But as they’ve looked more closely at the DNA, they’ve been getting increasingly bothered by one inconvenient little fact: 97% of the DNA is junk, with no known use or function! But an unusual collaboration between molecular biologists, cryptanalysts (people who break secret codes), linguists (people who study languages) and physicists has found strange hints of a hidden language in this so-called “junk DNA”.

Only about 3% of the DNA actually codes for amino acids, which in turn make proteins, and eventually, little babies. The remaining 97% of the DNA is, according to conventional wisdom, not gems, but junk.

The molecular biologists call this junk DNA, introns. Introns are like enormous commercial breaks or advertisements that interrupt the real program – except in the DNA, they take up 97% of the broadcast time. Introns are so important that Richard Roberts and Phillip Sharp, who did much of the early work on introns back in 1977, won a Nobel Prize for their work in 1993. But even today, we still don’t know what introns are really for.

Simon Shepherd, who lectures in cryptography and computer security at the University of Bradford in the United Kingdom, took an approach based on his line of work. He looked at the junk DNA as just another secret code to be broken. He analysed it, and he now reckons that one probable function of introns is that they are some sort of error-correction code – there to fix up the occasional mistakes that happen as the DNA replicates itself. But even if he’s right, introns could have lots of other uses.

The next big breakthrough came from a really unusual collaboration between medical doctors, physicists and linguists. They found even more evidence that there was a sort-of language buried in the introns.

According to the linguists, all human languages obey Zipf’s Law. It’s a really weird law, but it’s not that hard to understand. Start off by getting a big fat book. Then, count the number of times each word appears in that book. You might find that the number one most popular word is “the” (which appears 2,000 times), followed by the second most popular word “a” (which appears 1,800 times), and so on. Right down at the bottom of the list, you have the least popular word, which might be “elephant”, and which appears just once.

Set up two columns of numbers. One column is the order of popularity of the words, running from “1” for “the” and “2” for “a” right down to “1,000” for “elephant”. The other column counts how many times each word appeared, starting off with 2,000 appearances of “the”, then 1,800 appearances of “a”, down to one appearance of “elephant”.

If you then plot, on the right kind of graph paper, the order of popularity of the words against the number of times each word appears, you get a straight line! Even more amazingly, this straight line appears for every human language – whether it’s English or Egyptian, Eskimo or Chinese! Now the DNA is just one continuous ladder of squillions of rungs, and is not neatly broken up into individual words (like a book).

So the scientists looked at a very long bit of DNA, and made artificial words by breaking it up into “words” each 3 rungs long. And then they tried it again for “words” 4 rungs long, 5 rungs long, and so on up to 8 rungs long. They then analysed all these words, and to their surprise, they got the same sort of Zipf Law straight-line graph for the human DNA (which is mostly introns) as they did for the human languages!

There seems to be some sort of language buried in the so-called junk DNA! Certainly, the next few years will be a very good time to make a career change into the field of genetics.

So now, around the edge of the new millennium, we have a reasonable understanding of the 3% of the DNA that makes amino acids, proteins and babies. And the remaining 97% – well, we’re pretty sure that there is some language buried there, even if we don’t yet know what it says. It might say “It’s all a joke”, or it might say “Don’t worry, be happy”, or it might say “Have a nice day, lots of love, from your friendly local DNA”.   (source)
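The rank-frequency test the quoted article describes is easy to reproduce. A minimal sketch, assuming the sequence is available as a plain string of bases: break it into non-overlapping fixed-length “words”, count each word, and sort by count; Zipf’s Law predicts that log(count) falls roughly linearly with log(rank).

```typescript
// Build a rank-frequency table for fixed-length "words" in a DNA string,
// the same procedure the article describes for word lengths 3 through 8.
function rankFrequency(dna: string, wordLength: number): [string, number][] {
  const counts = new Map<string, number>();
  for (let i = 0; i + wordLength <= dna.length; i += wordLength) {
    const word = dna.slice(i, i + wordLength);
    counts.set(word, (counts.get(word) ?? 0) + 1);
  }
  // Sort most frequent first, so index + 1 is the word's rank.
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

// Toy usage; a real test would use a long stretch of non-coding DNA.
const dna = "ACGTTGACAACGTGGTACGTTGCA";
for (const [word, count] of rankFrequency(dna, 3)) {
  console.log(word, count); // plot log(rank) vs. log(count) to check the line
}
```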

Now to complete this thought: what if the information-carrying capacity of the so-called junk DNA of the human genome is sufficient to hold the content of the Wikipedia? Then all we would need is some way of writing to it — perhaps gene therapy using a virus that carries a copy of the Wikipedia.
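A back-of-the-envelope capacity check, under the standard assumption that one base can carry two bits (four possible symbols): roughly three billion bases works out to under a gigabyte of raw storage before any error-correcting redundancy, so whether the full Wikipedia would fit is a genuinely open question. The encoding itself is simple in principle; a sketch using an arbitrary bit-to-base mapping:

```typescript
// Encode arbitrary bytes as a DNA string, two bits per base:
// 00 -> A, 01 -> C, 10 -> G, 11 -> T (an arbitrary choice of mapping).
const BASES = ["A", "C", "G", "T"];

function bytesToDna(data: Uint8Array): string {
  let dna = "";
  for (const byte of data) {
    // Emit the four 2-bit chunks of the byte, high bits first.
    for (let shift = 6; shift >= 0; shift -= 2) {
      dna += BASES[(byte >> shift) & 0b11];
    }
  }
  return dna;
}

function dnaToBytes(dna: string): Uint8Array {
  const out = new Uint8Array(dna.length / 4);
  for (let i = 0; i < dna.length; i++) {
    out[i >> 2] = (out[i >> 2] << 2) | BASES.indexOf(dna[i]);
  }
  return out;
}

// Round trip: "Hi" -> "CAGACGGC" -> "Hi"
const encoded = bytesToDna(new TextEncoder().encode("Hi"));
console.log(encoded, new TextDecoder().decode(dnaToBytes(encoded)));
```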

This would enable volunteers to accept copies of the Wikipedia into their DNA and become vectors for it. They and their descendants would become walking encyclopedias, preserving human knowledge for future generations. If only some people had this done, then they and their lineages would be a sort of priesthood with particular importance for the future of humanity. It sounds like the basis for a really great science-fiction thriller!

By copying the Wikipedia into our own DNA we might be able to ensure that wherever human beings end up in the universe, the Wikipedia will go with them. Even if in some distant world humans destroy their civilization in a nuclear holocaust or are almost wiped out by an asteroid and have to rebuild from the stone-age again, they will eventually rediscover genomics and soon after that they will find the Wikipedia in their genome.

This is a kind of “backup strategy” for our civilization and all the knowledge we consider to be most important. Of course it is not clear yet whether the Junk DNA could carry enough information to encode the entire Wikipedia, nor is it clear that the Junk DNA is actually “junk” — perhaps there is already something there that should not be overwritten? Or perhaps it serves some other purpose in human development and evolution that we shouldn’t mess around with. It remains to be seen.

Read Full Post »

Tuesday, April 29, 2008


Microsoft device helps police pluck evidence from cyberscene of crime

Seattle Times technology reporter

Microsoft has developed a small plug-in device that investigators can use to quickly extract forensic data from computers that may have been used in crimes.

The COFEE, which stands for Computer Online Forensic Evidence Extractor, is a USB “thumb drive” that was quietly distributed to a handful of law-enforcement agencies last June. Microsoft General Counsel Brad Smith described its use to the 350 law-enforcement experts attending a company conference Monday.

The device contains 150 commands that can dramatically cut the time it takes to gather digital evidence, which is becoming more important in real-world crime, as well as cybercrime. It can decrypt passwords and analyze a computer’s Internet activity, as well as data stored in the computer.

It also eliminates the need to seize a computer itself, which typically involves disconnecting from a network, turning off the power and potentially losing data. Instead, the investigator can scan for evidence on site.

More than 2,000 officers in 15 countries, including Poland, the Philippines, Germany, New Zealand and the United States, are using the device, which Microsoft provides free.

“These are things that we invest substantial resources in, but not from the perspective of selling to make money,” Smith said in an interview. “We’re doing this to help ensure that the Internet stays safe.”

Law-enforcement officials from agencies in 35 countries are in Redmond this week to talk about how technology can help fight crime. Microsoft held a similar event in 2006. Discussions there led to the creation of COFEE.

Smith compared the Internet of today to London and other Industrial Revolution cities in the early 1800s. As people flocked from small communities where everyone knew each other, an anonymity emerged in the cities and a rise in crime followed.

The social aspects of Web 2.0 are like “new digital cities,” Smith said. Publishers, interested in creating huge audiences to sell advertising, let people participate anonymously.

That’s allowing “criminals to infiltrate the community, become part of the conversation and persuade people to part with personal information,” Smith said.

Children are particularly at risk to anonymous predators or those with false identities. “Criminals seek to win a child’s confidence in cyberspace and meet in real space,” Smith cautioned.

Expertise and technology like COFEE are needed to investigate cybercrime, and, increasingly, real-world crimes.


“So many of our crimes today, just as our lives, involve the Internet and other digital evidence,” said Lisa Johnson, who heads the Special Assault Unit in the King County Prosecuting Attorney’s Office.

A suspect’s online activities can corroborate a crime or dispel an alibi, she said.

The 35 individual law-enforcement agencies in King County, for example, don’t have the resources to investigate the explosion of digital evidence they seize, said Johnson, who attended the conference.

“They might even choose not to seize it because they don’t know what to do with it,” she said. “… We’ve kind of equated it to asking specific law-enforcement agencies to do their own DNA analysis. You can’t possibly do that.”

Johnson said the prosecutor’s office, the Washington Attorney General’s Office and Microsoft are working on a proposal to the Legislature to fund computer forensic crime labs.

Microsoft also got credit for other public-private partnerships around law enforcement.

Jean-Michel Louboutin, Interpol’s executive director of police services, said only 10 of 50 African countries have dedicated cybercrime investigative units.

“The digital divide is no exaggeration,” he told the conference. “Even in countries with dedicated cybercrime units, expertise is often too scarce.”

He credited Microsoft for helping Interpol develop training materials and international databases used to prevent child abuse.

Smith acknowledged Microsoft’s efforts are not purely altruistic. It benefits from selling collaboration software and other technology to law-enforcement agencies, just like everybody else, he said.

Benjamin J. Romano: 206-464-2149 or bromano@seattletimes.com

Copyright © 2008 The Seattle Times Company

Read Full Post »

Enterprise 2.0 To Become a $4.6 Billion Industry By 2013

Written by Sarah Perez / April 20, 2008 9:01 PM


A new report released today by Forrester Research is predicting that enterprise spending on Web 2.0 technologies is going to increase dramatically over the next five years. This increase will include more spending on social networking tools, mashups, and RSS, with the end result being a global enterprise market of $4.6 billion by the year 2013.

This change is not without its challenges. Although there is money to be made in the industry by vendors, Web 2.0 tools are by their very nature prone to commoditization, as is much of the new social media industry, a topic we touched on briefly here, when discussing how content has become a commodity.

For vendors specifically, there are three main challenges to success in this new industry:

  1. I.T. shops are wary of what they perceive as “consumer-grade” technology
  2. Ad-supported web tools generally have “free” as their starting point
  3. Web 2.0 tools will now have to compete in a space currently dominated by legacy enterprise software investments

What is Enterprise Web 2.0?

Most technologists segment the Web 2.0 market between “consumer” Web 2.0 technologies and “business” Web 2.0 technologies. So what does Enterprise 2.0 include then?

Well, what it doesn’t include is consumer services like Blogger, Facebook, Netvibes, and Twitter, says Forrester. These types of services are aimed at consumers and are often supported by ads, so they do not qualify as Enterprise 2.0 tools.

Instead, collaboration and productivity tools that are based on the concepts of Web 2.0 but designed for the enterprise worker will count as Enterprise 2.0. In addition, for-pay services, like those from BEA Systems, IBM, Microsoft, Awareness, NewsGator Technologies, and Six Apart, will factor in.

Enterprise marketing tools have also expanded to include Web 2.0 technologies. For example, money spent on the creation and syndication of a Facebook app or a web site/social network widget could be considered Enterprise 2.0. However, pure ad spending dollars, including those spent on consumer Web 2.0 sites, will not count as Enterprise 2.0.

Getting Past the I.T. Gatekeeper

One of the main challenges of getting Web 2.0 into the enterprise will be getting past the gatekeepers of traditional I.T. Businesses have been showing interest in these new technologies, but, ironically, the interest comes from departments outside of I.T. Instead, it’s the marketing department, R&D, and corporate communications pushing for the adoption of more Web 2.0-like tools.

Unfortunately, as is often the case, the business owners themselves don’t have the knowledge or expertise to make technology purchasing decisions for their company. They rely on I.T. to do so – a department that currently spends 70% of its budget maintaining past investments.

Despite the absolute mission-critical nature of I.T. in today’s business, the department is often provided with slim budgets, which tends to only allow for maintaining current infrastructure, not experimenting with new, unproven technologies.

To make matters worse, I.T. tends to view Web 2.0 tools as being insecure at best, or, at worst, a security threat to the business. They also don’t trust what they perceive to be “consumer-grade” technologies, which they don’t believe have the power to scale to the size that an enterprise demands.

In addition, I.T. departments currently work with a host of legacy applications. The new tools, in order to compete with these, will have to be able to integrate with existing technology, at least for the time being, in order to be fully effective.

Finally, given the tight budgets, there is still a chance that even if a particular tool does meet all the requirements to get in the door at a particular company, I.T. or other company personnel utilizing the service may try to exploit the free version of the service if the price point for the “enterprise” version gets to be too high. They may also choose to look for a free, open source alternative.

[Chart: Enterprise 2.0 Adoption]

How Web 2.0 Will Reach $4.6 Billion

All that being said, the Web 2.0 market, as small as it is now, is in fact growing. In 2008, firms with 1,000 employees or more will spend $764 million on Web 2.0 tools and technologies. Over the next five years, that expenditure will grow at a compound annual rate of 43%; compounding $764 million at 43% for five years gives roughly $4.6 billion, which is where the headline figure comes from.

The top spending category will be social networking tools. In 2008, for example, companies will spend $258 million on tools like those from Awareness, Communispace, and Jive Software. After social networking, the next-largest category is RSS, followed by blogs and wikis, and then mashups.

The vendors expected to do the best in this new marketplace will be those that bundle their offerings, providing the complete package of tools to the businesses they serve.

However, newer, “pure” Web 2.0 companies hoping to capitalize on this trend will still have to fight for a foothold with traditional I.T. software vendors, specifically the likes of Microsoft and IBM. Many I.T. shops will choose to stick with their existing software from these large, well-known vendors, especially now that both are integrating Web 2.0 into their offerings.

Microsoft’s SharePoint, for example, now includes wikis, blogs, and RSS technologies in its collaboration suite. IBM offers social networking and mashup tools via its Lotus Connections and Lotus Mashups products, and SAP Business Suite includes social networking and widgets.

What this means is that much of the Web 2.0 tool kit will simply “fade into the fabric of enterprise collaboration suites,” says Forrester. By 2013, few buyers will seek out and purchase Web 2.0 tools specifically. Web 2.0 will become a feature, not a product.

[Chart: Enterprise 2.0 Spending]

Other Trends

Other trends will also have an impact on this new marketplace, including the following:

External Spending Will Beat Internal Spending: External Web 2.0 expenditure will surpass internal expenditure in 2009, and, by 2013, will dwarf internal spending by a billion dollars. Internally, companies will spend money on internal social networking, blogs, wikis, and RSS; externally, the spending patterns will be very similar. Social networking tools that provide customer interaction, allowing customers the ability to create profiles, join discussion boards, and read company blogs, for example, will receive more investment and development over the next five years.

Europe & Asia Pacific Markets Grow: Europe and Asia Pacific will become more substantial markets in 2009. Fewer European companies have embraced Web 2.0 tools so far, leaving much room for growth. Asia Pacific will also grow in 2009.

Web 2.0 Graduates from “Kids’ Stuff”: Right now, it’s people between the ages of 12 and 17 who are the most avid consumers of social computing technology, with one-third of them acting as content creators. Meanwhile, only 7% of those aged 51 to 61 do the same. However, this is another trend that is going to change over the next few years. By 2011, Forrester believes, users of Web 2.0 tools will mirror users of the web at large.

Retirement of Baby Boomers: As with many things, it takes the passing of the older generation from executive status into retirement before a true shift can occur. Over the next three years, millions of baby boomers will retire, and the younger workers brought in to fill the void will not only want but will expect the same kinds of tools in the office as those they use at home in their personal lives.

What It All Means

For vendors wanting to play in the Enterprise 2.0 space, there are a few key takeaways to be learned from this research. For one, they can help ensure their success in this niche by selling across deployment types. That is, plan to grow beyond just selling to either the internal or external market.

Another option is to segment the enterprise marketplace by industry and then by company size. Some industries are more customer-focused than others when it comes to the external market, so developing customized solutions for a particular industry could be a key to success. For internal tools, focusing efforts on deploying enterprise-grade capabilities such as integration and security will help sell products to larger customers. Other levels of service can be designed specifically for SMBs, featuring simple, self-provisioning products to help cut down on costs.

Finally, vendors looking to grow should consider making a name for themselves in the Europe or Asia Pacific markets, where the opportunity comes from the expected increased investment rates for Web 2.0/Enterprise 2.0 in those geographic regions.

However, the most valuable aspect of this change for vendors is the knowledge they obtain about how to run a successful SaaS business – something that will help propel them into the next decade and beyond and, ultimately, will provide more value than any single Web 2.0 offering alone ever will.

Read Full Post »

Older Posts »
