
Archive for the ‘Future’ Category

April 3, 2009 11:27 AM PDT

Google shows off Gmail mobile Web app

 

Google’s HTML 5-based Web version of Gmail shown on an Android phone

(Credit: Stephen Shankland/CNET)

SAN FRANCISCO–What Google did with Gmail in conventional browsers five years ago, it expects to do again with a new mobile version of its Web-based e-mail service.

Vic Gundotra, who leads Google’s mobile software and developer relations efforts, showed off the Web application “technical prototype” Friday in an onstage interview here at the Web 2.0 Expo. Google offers Gmail applications that run natively on BlackBerry and Android mobile phones, but the company clearly has high hopes for a Web-based version as well.

Building a Web interface means Google can reach more phones more easily, Gundotra said, as phone browsers get more sophisticated and their Internet connectivity gets better. “Imagine if you could build apps that ran across all these phones,” Gundotra said.

As he did in a similar demonstration in February, Gundotra showed a version running on an iPhone and on a phone using Google’s Android operating system–apparently the HTC Magic.

The software relied on features in HTML 5, the still-under-development version of the technology that underpins Web site design. Specifically, it used offline data access so the application could read e-mail even while there was no Internet connection.

“When we make it broadly available, people are going to see this as the first HTML 5 mobile application,” Gundotra said, declining to say when it would become available. “It’ll be like Gmail in 2004. It was a great watershed moment for Ajax apps,” which employ JavaScript for relatively sophisticated browser-based interfaces.

 

Vic Gundotra, head of Google’s mobile software and developer work, speaking at Web 2.0 Expo.

(Credit: Stephen Shankland/CNET)

The mobile Gmail application also featured a floating toolbar that stayed perched at the top of the inbox, offering constant access to delete and archive buttons and a menu of further options.

Mobile is central to Google’s work. The company already offers a search application for the iPhone and some other models that lets people issue queries by speaking rather than just typing. The accuracy of the speech recognition has improved 15 percent in the last quarter, Gundotra said, and usage of the service is growing fast.

Gundotra previously worked at Microsoft, but it was a few words from his then 4-year-old daughter that led him to Google. He’d told a friend he didn’t know the answer to a question, and his daughter, overhearing, asked him, “Daddy, where’s your phone?”

“In her brief four years of life, she assumed any time you didn’t know the answer to a question, you brought out your phone. For her the phone was the ultimate answering machine,” something that answered questions. That helped him realize that Google’s mission of organizing the world’s information and presenting it to people would happen in mobile phones, too.

Google likes HTML 5, but it’ll take time for it to become adopted broadly. In the meantime, other alternatives exist for richer Internet applications, notably Adobe Systems’ Flash. Also up and coming are a browserless relative of Flash from Adobe called AIR and a Flash rival from Microsoft called Silverlight.


Google showed off a better browser version of Gmail on the iPhone.

(Credit: Stephen Shankland/CNET)

Asked about AIR, Gundotra said, “I think Adobe has got some great products,” mentioning Google’s use of Flash to power video streaming at YouTube. “There’s also Silverlight from Microsoft. I am biased toward open Web standards,” Gundotra said.

And he touted another HTML 5 feature: “I predict we will see video tag become broadly adopted,” a technology that could enable video streaming without a Flash player, similar to the way Web browsers can show graphics without requiring separate plug-ins.

Gundotra also had words of praise for Google App Engine, a year-old service that can be used to run Web-based applications. One such application hosted on Google App Engine is Google Moderator, which lets people submit questions and rank which ones they want to hear answered. Moderator originated as a way for Google employees to ask questions of co-founders Larry Page and Sergey Brin during weekly employee meetings, Gundotra said.

Google was excited but scared when the White House said it planned to use Google Moderator for an online town hall meeting with President Barack Obama, Gundotra said.

But it held up under the load, and “the 45,000 other apps (on Google App Engine) were totally unaffected by this much scale,” Gundotra said.

The town hall moderator system handled nearly 700 queries per second at its peak, with 3.6 million people voting on the questions they wanted to hear answered, he said.

 


Traffic spiked at Google Moderator when the White House used it to handle questions.

(Credit: Stephen Shankland/CNET)

Stephen Shankland covers Google, Yahoo, search, online advertising, portals, digital photography, and related subjects. He joined CNET News in 1998 and since then also has covered servers, supercomputing, open-source software, and science.


Op-Ed Columnist

The Cognitive Age

 

Published: May 2, 2008

If you go into a good library, you will find thousands of books on globalization. Some will laud it. Some will warn about its dangers. But they’ll agree that globalization is the chief process driving our age. Our lives are being transformed by the increasing movement of goods, people and capital across borders.

The globalization paradigm has led, in the political arena, to a certain historical narrative: There were once nation-states like the U.S. and the European powers, whose economies could be secured within borders. But now capital flows freely. Technology has leveled the playing field. Competition is global and fierce.

New dynamos like India and China threaten American dominance thanks to their cheap labor and manipulated currencies. Now, everything is made abroad. American manufacturing is in decline. The rest of the economy is threatened.

Hillary Clinton summarized the narrative this week: “They came for the steel companies and nobody said anything. They came for the auto companies and nobody said anything. They came for the office companies, people who did white-collar service jobs, and no one said anything. And they came for the professional jobs that could be outsourced, and nobody said anything.”

The globalization paradigm has turned out to be very convenient for politicians. It allows them to blame foreigners for economic woes. It allows them to pretend that by rewriting trade deals, they can assuage economic anxiety. It allows them to treat economic and social change as a great mercantilist competition, with various teams competing for global supremacy, and with politicians starring as the commanding generals.

But there’s a problem with the way the globalization paradigm has evolved. It doesn’t really explain most of what is happening in the world.

Globalization is real and important. It’s just not the central force driving economic change. Some Americans have seen their jobs shipped overseas, but global competition has accounted for a small share of job creation and destruction over the past few decades. Capital does indeed flow around the world. But as Pankaj Ghemawat of the Harvard Business School has observed, 90 percent of fixed investment around the world is domestic. Companies open plants overseas, but that’s mainly so their production facilities can be close to local markets.

Nor is the globalization paradigm even accurate when applied to manufacturing. U.S. manufacturing has not fled to Asia; its output is up over recent decades. As Thomas Duesterberg of Manufacturers Alliance/MAPI, a research firm, has pointed out, the U.S.’s share of global manufacturing output has actually increased slightly since 1980.

The chief force reshaping manufacturing is technological change (hastened by competition with other companies in Canada, Germany or down the street). Thanks to innovation, manufacturing productivity has doubled over two decades. Employers now require fewer but more highly skilled workers. Technological change affects China just as it does America. William Overholt of the RAND Corporation has noted that between 1994 and 2004 the Chinese shed 25 million manufacturing jobs, 10 times more than the U.S.

The central process driving this is not globalization. It’s the skills revolution. We’re moving into a more demanding cognitive age. In order to thrive, people are compelled to become better at absorbing, processing and combining information. This is happening in localized and globalized sectors, and it would be happening even if you tore up every free trade deal ever inked.

The globalization paradigm emphasizes the fact that information can now travel 15,000 miles in an instant. But the most important part of information’s journey is the last few inches — the space between a person’s eyes or ears and the various regions of the brain. Does the individual have the capacity to understand the information? Does he or she have the training to exploit it? Are there cultural assumptions that distort the way it is perceived?

The globalization paradigm leads people to see economic development as a form of foreign policy, as a grand competition between nations and civilizations. These abstractions, called “the Chinese” or “the Indians,” are doing this or that. But the cognitive age paradigm emphasizes psychology, culture and pedagogy — the specific processes that foster learning. It emphasizes that different societies are being stressed in similar ways by increased demands on human capital. If you understand that you are living at the beginning of a cognitive age, you can focus on the real source of prosperity and understand that your anxiety is not being caused by a foreigner.

It’s not that globalization and the skills revolution are contradictory processes. But which paradigm you embrace determines which facts and remedies you emphasize. Politicians, especially Democratic ones, have fallen in love with the globalization paradigm. It’s time to move beyond it.


Pursuing the Next Level of Artificial Intelligence

Jim Wilson/The New York Times

Daphne Koller’s award-winning work in artificial intelligence has had commercial impact.

 

Published: May 3, 2008

PALO ALTO, Calif. — Like a good gambler, Daphne Koller, a researcher at Stanford whose work has led to advances in artificial intelligence, sees the world as a web of probabilities.

There is, however, nothing uncertain about her impact.

A mathematical theoretician, she has made contributions in areas like robotics and biology. Her biggest accomplishment — and at age 39, she is expected to make more — is creating a set of computational tools for artificial intelligence that can be used by scientists and engineers to do things like predict traffic jams, improve machine vision and understand the way cancer spreads.

Ms. Koller’s work, building on an 18th-century theorem about probability, has already had an important commercial impact, and her colleagues say that will grow in the coming decade. Her techniques have been used to improve computer vision systems and in understanding natural language, and in the future they are expected to lead to an improved generation of Web search.

“She’s on the bleeding edge of the leading edge,” said Gary Bradski, a machine vision researcher at Willow Garage, a robotics start-up firm in Menlo Park, Calif.

Ms. Koller was honored last week with a new computer sciences award sponsored by the Association for Computing Machinery and the Infosys Foundation, the philanthropic arm of the Indian computer services firm Infosys.

The award to Ms. Koller, with a prize of $150,000, is viewed by scientists and industry executives as validating her research, which has helped transform artificial intelligence from science fiction and speculation into an engineering discipline that is creating an array of intelligent machines and systems. It is not the first such recognition; in 2004, Ms. Koller received a $500,000 MacArthur Fellowship.

Ms. Koller is part of a revival of interest in artificial intelligence. After three decades of disappointments, artificial intelligence researchers are making progress. Recent developments made possible spam filters, Microsoft’s new ClearFlow traffic maps and the driverless robotic cars that Stanford teams have built for competitions sponsored by the Defense Advanced Research Projects Agency.

Since arriving at Stanford as a professor in 1995, Ms. Koller has led a group of researchers who have reinvented the discipline of artificial intelligence. Pioneered during the 1960s, the field was originally dominated by efforts to build reasoning systems from logic and rules. Judea Pearl, a computer scientist at the University of California, Los Angeles, had a decade earlier advanced statistical techniques that relied on repeated measurements of real-world phenomena.

Called the Bayesian approach, it centers on a formula for updating the probabilities of events based on repeated observations. The Bayes rule, named for the 18th-century mathematician Thomas Bayes, describes how to transform a current assumption about an event into a revised, more accurate assumption after observing further evidence.
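As a toy illustration of that updating process, the sketch below applies the Bayes rule to repeated observations; the hypothesis, likelihoods and observations are all invented for the example.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | evidence) via Bayes' rule."""
    p_evidence = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_evidence

# Hypothesis H: the coin is biased, landing heads 80% of the time.
# Alternative: the coin is fair (50% heads). Start undecided.
belief = 0.5
for flip in ["H", "H", "H", "T", "H"]:
    if flip == "H":
        belief = bayes_update(belief, 0.8, 0.5)  # heads: likelier under H
    else:
        belief = bayes_update(belief, 0.2, 0.5)  # tails: less likely under H
    print(f"after {flip}: P(biased) = {belief:.3f}")
```

Each observation transforms the current assumption into a revised one, exactly the loop that Bayesian software runs over real-world measurements.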

Ms. Koller has led research that has greatly increased the scope of existing Bayesian-related software. “When I started in the mid- to late 1980s, there was a sense that numbers didn’t belong in A.I.,” she said in a recent interview. “People didn’t think in numbers, so why should computers use numbers?”

Ms. Koller is beginning to apply her algorithms more generally to help scientists discern patterns in vast collections of data.

“The world is noisy and messy,” Ms. Koller said. “You need to deal with the noise and uncertainty.”

That philosophy has led her to do research in game theory and artificial intelligence, and more recently in molecular biology.

Her tools led to a new type of cancer gene map based on examining the behavior of a large number of genes that are active in a variety of tumors. From the research, scientists were able to develop a new explanation of how breast tumors spread into bone.

One potentially promising area to apply Ms. Koller’s theoretical work will be the emerging field of information extraction, which could be applied to Web searches. Web pages would be read by software systems that could organize the information and effectively understand unstructured text.

“Daphne is one of the most passionate researchers in the A.I. community,” said Eric Horvitz, a Microsoft researcher and president of the Association for the Advancement of Artificial Intelligence. “After being immersed for a few years with the computational challenges of decoding regulatory genomics, she confided her excitement to me, saying something like, ‘I think I’ve become a biologist — I mean a real biologist — and it’s fabulous.’ ”

To that end, Ms. Koller is spending a sabbatical doing research with biologists at the University of California, San Francisco. Because biology is increasingly computational, her expertise is vital in gaining deeper understanding of cellular processes.

Ms. Koller grew up in an academic family in Israel, the daughter of a botanist and an English professor. Her father spent a year at Stanford in 1981, when she was 12; there she began programming on a Radio Shack PC that she shared with another student.

When her family returned to Israel the next year, she told her father, the botanist, that she was bored with high school and wanted to pursue something more stimulating in college. After half a year, she persuaded him to let her enter Hebrew University, where she studied computer science and mathematics.

By 17, she was teaching a database course at the university. The next year she received her master’s degree and then joined the Israeli Army before coming to the United States to study for a Ph.D. at Stanford.

She didn’t spend her time looking at a computer monitor. “I find it distressing that the view of the field is that you sit in your office by yourself surrounded by old pizza boxes and cans of Coke, hacking away at the bowels of the Windows operating system,” she said. “I spend most of my time thinking about things like how does a cell work or how do we understand images in the world around us?”

In recent years, many of her graduate students have gone to work at Google. However, she tries to persuade undergraduates to stay in academia rather than rush off to become software engineers at start-up companies.

She acknowledges that the allure of Silicon Valley riches can be seductive. “My husband still berates me for not having jumped on the Google bandwagon at the beginning,” she said. Still, she insists she does not regret her decision to stay in academia. “I like the freedom to explore the things I care about,” she said.


April 19, 2008

The Wikipedia, Knowledge Preservation and DNA

I had an interesting thought today about the long-term preservation and transmission of human knowledge.

The Wikipedia may be on its way to becoming one of the best places in which to preserve knowledge for future generations. But this is just the beginning. What if we could encode the Wikipedia into the junk DNA portion of our own genome? It appears that something like this may actually be possible, at least according to some recent studies of the non-coding regions of the human genome.

If we could actually encode knowledge, like the Wikipedia for example, into our genome, the next logical step would be to find a way to access it directly.

At first we might only be able to access and read the knowledge stored in our DNA through a computationally intensive genetic analysis of an individual’s DNA. To correct any errors introduced by mutation, we would also need to cross-reference this individual data with similar analyses of the DNA of other people who carry the same data.

But this is just the beginning. There are ways to store data with enough redundancy to protect against degradation. If we could do this, we might be able to eliminate the need for cross-referencing as a form of error correction; the data itself would be self-correcting, so to speak. The next step would then be to find a way for an individual to access the knowledge stored in their DNA directly, in real time. That is a long way off, but it might someday be possible through some future nano-scale genomic-brain interface. This opens up some fascinating areas of speculation, to say the least.

 

Why The Wikipedia?

The Wikipedia has certain qualities that make it better than other forms of knowledge preservation and transmission:

  • The Wikipedia exists primarily in electronic form. It is not subject to age or decay like a physical encyclopedia or document. This means it can persist indefinitely, and will not be lost to time, so long as it continues to be maintained electronically.
  • The Wikipedia is replicated in multiple locations around the world. The fact that it is so easy to replicate, and is so widely replicated means that it is less at risk of being lost due to a local disaster at any given storage location. It also means it is more likely to continue, somewhere, as a living document that goes on to reflect majority consensus reality into the distant future. It is highly improbable that it will ever suffer the same fate as certain ancient documents which only existed in one place and were subsequently lost in floods, fires, or wars, etc. At this point only a planet-wide extinction level event could erase the Wikipedia and/or prevent future generations from finding it.
  • The Wikipedia is highly viral: its content is increasingly cited, and it is far ahead of any competing system in terms of coverage and brand recognition. Because so much other content on the Web and in other media refers to the Wikipedia as the world’s global authority for knowledge, it is considered increasingly authoritative and increasingly visible. The Law of Increasing Returns indicates that this will continue to self-amplify, making the Wikipedia the best candidate for an authoritative global repository of knowledge.

What this means is that if you have any knowledge that you want to preserve for future generations, a good place to put it is in the Wikipedia. Putting it there almost guarantees that it will propagate around the world, into the distant future of human civilization, and (should we become a spacefaring species) throughout the human-explored universe.

The Potential For Storing Knowledge in DNA

Is it possible to store knowledge — such as the Wikipedia — in human DNA? It would certainly be useful if we could do this. By storing knowledge in the DNA of living humans, or of common bacteria for that matter, it could then potentially be passed down and spread through generations into the far future. However, the mutability of DNA might gradually introduce errors that would degrade the information within particular lines of DNA over long periods of time.

Perhaps this could be mitigated, however, by comparing DNA samples from a large cross-section of descendants of the original holders of the DNA knowledge archive; this would effectively enable statistical error cancellation. The farther in the future from the date at which the knowledge was “written” to the DNA of some number of humans, the more people’s DNA would be needed to eliminate the errors statistically. In principle this would counteract mutations and enable the reliable recovery of messages in DNA even very far in the future.
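The statistical error cancellation described above amounts to a per-position majority vote across many independently mutated copies. Here is a sketch; the message, the number of carriers and the mutation model are all invented for illustration.

```python
import random
from collections import Counter

random.seed(42)  # deterministic mutations for the demo
BASES = "ACGT"
original = "ACGTACGGTCA" * 4  # stand-in for knowledge written into DNA

def mutate(seq, rate):
    """Flip each base to a random *other* base with probability `rate`."""
    return "".join(
        random.choice(BASES.replace(b, "")) if random.random() < rate else b
        for b in seq
    )

# Each descendant carries an independently mutated copy of the message.
copies = [mutate(original, rate=0.1) for _ in range(51)]

# A per-position majority vote cancels most independent errors: the
# correct base dominates each column as long as mutations are rare
# and uncorrelated across carriers.
recovered = "".join(Counter(col).most_common(1)[0][0] for col in zip(*copies))
print(recovered == original)
```

With a 10% per-base error rate, any single copy is almost certainly corrupted, yet 51 copies vote their way back to the exact original; more elapsed time (a higher effective error rate) would demand proportionally more carriers, as the paragraph above suggests.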

The fact that it is in principle possible to encode knowledge into human (or other) DNA raises the question of whether knowledge is already stored there. It’s certainly worth a look! Maybe there is already a message there for us. One can only wonder whether an ancient “Wikipedia” of sorts is already written there.

Interestingly enough, when certain statistical tests are run against human DNA, it does seem to have properties that are indicative of written language, but only in the “junk” regions of the genome. Maybe it’s not “junk” after all. Below is an article that discusses a recent discovery related to this:

Language in junk DNA

You’ve probably heard of a molecule called DNA, otherwise known as “The Blueprint Of Life”. Molecular biologists have been examining and mapping the DNA for a few decades now. But as they’ve looked more closely at the DNA, they’ve been getting increasingly bothered by one inconvenient little fact – the fact that 97% of the DNA is junk, with no known use or function! But an unusual collaboration between molecular biologists, cryptanalysts (people who break secret codes), linguists (people who study languages) and physicists has found strange hints of a hidden language in this so-called “junk DNA”.

Only about 3% of the DNA actually codes for amino acids, which in turn make proteins, and eventually, little babies. The remaining 97% of the DNA is, according to conventional wisdom, not gems, but junk.

The molecular biologists call this junk DNA “introns”. Introns are like enormous commercial breaks or advertisements that interrupt the real program – except that in the DNA, they take up 97% of the broadcast time. Introns are considered so important that Richard Roberts and Phillip Sharp, who did much of the early work on them back in 1977, won a Nobel Prize for their work in 1993. But even today, we still don’t know what introns are really for.

Simon Shepherd, who lectures in cryptography and computer security at the University of Bradford in the United Kingdom, took an approach based on his line of work: he looked at the junk DNA as just another secret code to be broken. He analysed it, and he now reckons that one probable function of introns is that they are some sort of error-correction code – to fix up the occasional mistakes that happen as the DNA replicates itself. But even if he’s right, introns could have lots of other uses.

The next big breakthrough came from a really unusual collaboration between medical doctors, physicists and linguists. They found even more evidence that there was a sort-of language buried in the introns.

According to the linguists, all human languages obey Zipf’s Law. It’s a really weird law, but it’s not that hard to understand. Start off by getting a big fat book. Then, count the number of times each word appears in that book. You might find that the number one most popular word is “the” (which appears 2,000 times), followed by the second most popular word “a” (which appears 1,800 times), and so on. Right down at the bottom of the list, you have the least popular word, which might be “elephant”, and which appears just once.

Set up two columns of numbers. One column is the order of popularity of the words, running from “1” for “the”, and “2” for “a”, right down to “1,000” for “elephant”. The other column counts how many times each word appeared, starting off with 2,000 appearances of “the”, then 1,800 appearances of “a”, down to one appearance of “elephant”.

If you then plot, on the right kind of graph paper, the order of popularity of the words against the number of times each word appears, you get a straight line! Even more amazingly, this straight line appears for every human language – whether it’s English or Egyptian, Eskimo or Chinese! Now the DNA is just one continuous ladder of squillions of rungs, and is not neatly broken up into individual words (like a book).

So the scientists looked at a very long bit of DNA, and made artificial words by breaking up the DNA into “words” each 3 rungs long. And then they tried it again for “words” 4 rungs long, 5 rungs long, and so on up to 8 rungs long. They then analysed all these words, and to their surprise, they got the same sort of Zipf Law/straight-line-graph for the human DNA (which is mostly introns), as they did for the human languages!

There seems to be some sort of language buried in the so-called junk DNA! Certainly, the next few years will be a very good time to make a career change into the field of genetics.

So now, around the edge of the new millennium, we have a reasonable understanding of the 3% of the DNA that makes amino acids, proteins and babies. And the remaining 97% – well, we’re pretty sure that there is some language buried there, even if we don’t yet know what it says. It might say “It’s all a joke”, or it might say “Don’t worry, be happy”, or it might say “Have a nice day, lots of love, from your friendly local DNA”. (source)
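The rank/frequency procedure the quoted article describes is easy to sketch. The sample text and DNA string below are invented placeholders, far too short to show a real Zipf curve, but they make the mechanics concrete:

```python
from collections import Counter

def rank_frequency(words):
    """Return (rank, count) pairs, most frequent word first."""
    return [(rank, count)
            for rank, (_, count) in enumerate(Counter(words).most_common(), 1)]

# Ordinary text: natural word boundaries.
text = "the cat and the dog and the elephant"
print(rank_frequency(text.split()))

# DNA: no word boundaries, so chop it into artificial 3-letter "words",
# as the researchers did (repeating for word lengths 4 through 8).
dna = "ACGTACGTTGCAACGT"
kmers = [dna[i:i + 3] for i in range(0, len(dna) - 2, 3)]
print(rank_frequency(kmers))
```

Plotted on log-log axes, the (rank, count) pairs from a real corpus fall near a straight line; the claim in the article is that intron-rich DNA, chopped into such artificial words, shows the same shape.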

Now to complete this thought: what if the information-carrying capacity of the so-called junk DNA of the human genome is sufficient to hold the content of the Wikipedia? Then all we would need is some way of writing to it, perhaps through gene therapy using a virus that carries a copy of the Wikipedia.
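For a back-of-the-envelope sense of that capacity: each DNA base can take four values, so it can carry 2 bits, meaning 4 bases per byte (at that density, a genome of roughly 3 billion bases could hold on the order of 750 megabytes before any redundancy). The mapping below is an arbitrary illustrative choice, not an established encoding standard:

```python
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}

def text_to_dna(text):
    """Encode UTF-8 text as a DNA string, 2 bits per base."""
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_text(dna):
    """Decode a DNA string produced by text_to_dna back into text."""
    bits = "".join(BITS_FOR_BASE[b] for b in dna)
    return bytes(int(bits[i:i + 8], 2)
                 for i in range(0, len(bits), 8)).decode("utf-8")

message = "Wikipedia"
encoded = text_to_dna(message)
assert dna_to_text(encoded) == message  # the round trip is lossless
print(len(encoded), "bases for", len(message.encode("utf-8")), "bytes")
```

A real scheme would of course need the redundancy discussed earlier layered on top, since raw 2-bit packing has no protection against mutation.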

This would enable volunteers to accept copies of the Wikipedia into their DNA and become vectors for it. They and their descendants would become walking encyclopedias, preserving human knowledge for future generations. If only some people had this done, then they and their lineages would be a sort of priesthood with particular importance for the future of humanity. It sounds like the basis for a really great science-fiction thriller!

By copying the Wikipedia into our own DNA we might be able to ensure that wherever human beings end up in the universe, the Wikipedia will go with them. Even if in some distant world humans destroy their civilization in a nuclear holocaust or are almost wiped out by an asteroid and have to rebuild from the stone-age again, they will eventually rediscover genomics and soon after that they will find the Wikipedia in their genome.

This is a kind of “backup strategy” for our civilization and all the knowledge we consider to be most important. Of course it is not clear yet whether the Junk DNA could carry enough information to encode the entire Wikipedia, nor is it clear that the Junk DNA is actually “junk” — perhaps there is already something there that should not be overwritten? Or perhaps it serves some other purpose in human development and evolution that we shouldn’t mess around with. It remains to be seen.


Tuesday, April 29, 2008 – Page updated at 03:56 PM


Microsoft device helps police pluck evidence from cyberscene of crime

Seattle Times technology reporter

Microsoft has developed a small plug-in device that investigators can use to quickly extract forensic data from computers that may have been used in crimes.

The COFEE, which stands for Computer Online Forensic Evidence Extractor, is a USB “thumb drive” that was quietly distributed to a handful of law-enforcement agencies last June. Microsoft General Counsel Brad Smith described its use to the 350 law-enforcement experts attending a company conference Monday.

The device contains 150 commands that can dramatically cut the time it takes to gather digital evidence, which is becoming more important in real-world crime, as well as cybercrime. It can decrypt passwords and analyze a computer’s Internet activity, as well as data stored in the computer.

It also eliminates the need to seize a computer itself, which typically involves disconnecting from a network, turning off the power and potentially losing data. Instead, the investigator can scan for evidence on site.

More than 2,000 officers in 15 countries, including Poland, the Philippines, Germany, New Zealand and the United States, are using the device, which Microsoft provides free.

“These are things that we invest substantial resources in, but not from the perspective of selling to make money,” Smith said in an interview. “We’re doing this to help ensure that the Internet stays safe.”

Law-enforcement officials from agencies in 35 countries are in Redmond this week to talk about how technology can help fight crime. Microsoft held a similar event in 2006. Discussions there led to the creation of COFEE.

Smith compared the Internet of today to London and other Industrial Revolution cities in the early 1800s. As people flocked from small communities where everyone knew each other, an anonymity emerged in the cities and a rise in crime followed.

The social aspects of Web 2.0 are like “new digital cities,” Smith said. Publishers, interested in creating huge audiences to sell advertising, let people participate anonymously.

That’s allowing “criminals to infiltrate the community, become part of the conversation and persuade people to part with personal information,” Smith said.

Children are particularly at risk to anonymous predators or those with false identities. “Criminals seek to win a child’s confidence in cyberspace and meet in real space,” Smith cautioned.

Expertise and technology like COFEE are needed to investigate cybercrime, and, increasingly, real-world crimes.


“So many of our crimes today, just as our lives, involve the Internet and other digital evidence,” said Lisa Johnson, who heads the Special Assault Unit in the King County Prosecuting Attorney’s Office.

A suspect’s online activities can corroborate a crime or dispel an alibi, she said.

The 35 individual law-enforcement agencies in King County, for example, don’t have the resources to investigate the explosion of digital evidence they seize, said Johnson, who attended the conference.

“They might even choose not to seize it because they don’t know what to do with it,” she said. “… We’ve kind of equated it to asking specific law-enforcement agencies to do their own DNA analysis. You can’t possibly do that.”

Johnson said the prosecutor’s office, the Washington Attorney General’s Office and Microsoft are working on a proposal to the Legislature to fund computer forensic crime labs.

Microsoft also got credit for other public-private partnerships around law enforcement.

Jean-Michel Louboutin, Interpol’s executive director of police services, said only 10 of 50 African countries have dedicated cybercrime investigative units.

“The digital divide is no exaggeration,” he told the conference. “Even in countries with dedicated cybercrime units, expertise is often too scarce.”

He credited Microsoft for helping Interpol develop training materials and international databases used to prevent child abuse.

Smith acknowledged Microsoft’s efforts are not purely altruistic. It benefits from selling collaboration software and other technology to law-enforcement agencies, just like everybody else, he said.

Benjamin J. Romano: 206-464-2149 or bromano@seattletimes.com

Copyright © 2008 The Seattle Times Company


Enterprise 2.0 To Become a $4.6 Billion Industry By 2013

Written by Sarah Perez / April 20, 2008 9:01 PM / 25 Comments


A new report released today by Forrester Research is predicting that enterprise spending on Web 2.0 technologies is going to increase dramatically over the next five years. This increase will include more spending on social networking tools, mashups, and RSS, with the end result being a global enterprise market of $4.6 billion by the year 2013.

This change is not without its challenges. Although there is money to be made in the industry by vendors, Web 2.0 tools are by their very nature defined by commoditization, as is much of the new social media industry, a topic we touched on briefly here when discussing how content has become a commodity.

For vendors specifically, there are three main challenges to success in this new industry:

  1. I.T. shops are wary of what they perceive as “consumer-grade” technology
  2. Ad-supported web tools set “free” as the expected starting price
  3. Web 2.0 tools must compete in a space currently dominated by legacy enterprise software investments

What is Enterprise Web 2.0?

Most technologists divide the Web 2.0 market into “consumer” Web 2.0 technologies and “business” Web 2.0 technologies. So what does Enterprise 2.0 include, then?

Well, what it doesn’t include is consumer services like Blogger, Facebook, Netvibes, and Twitter, says Forrester. These types of services are aimed at consumers and are often supported by ads, so they do not qualify as Enterprise 2.0 tools.

Instead, collaboration and productivity tools based on the concepts of Web 2.0 but designed for the enterprise worker count as Enterprise 2.0. In addition, for-pay services, like those from BEA Systems, IBM, Microsoft, Awareness, NewsGator Technologies, and Six Apart, will factor in.

Enterprise marketing tools have also expanded to include Web 2.0 technologies. For example, money spent on the creation and syndication of a Facebook app or a web site/social network widget could be considered Enterprise 2.0. However, pure ad spending dollars, including those spent on consumer Web 2.0 sites, will not count as Enterprise 2.0.

Getting Past the I.T. Gatekeeper

One of the main challenges of getting Web 2.0 into the enterprise will be getting past the gatekeepers of traditional I.T. Businesses have been showing interest in these new technologies, but, ironically, the interest comes from departments outside of I.T. Instead, it’s the marketing department, R&D, and corporate communications pushing for the adoption of more Web 2.0-like tools.

Unfortunately, as is often the case, the business owners themselves don’t have the knowledge or expertise to make technology purchasing decisions for their company. They rely on I.T. to do so – a department that currently spends 70% of its budget maintaining past investments.

Despite the mission-critical nature of I.T. in today’s business, the department is often given a slim budget, which tends to allow only for maintaining current infrastructure, not experimenting with new, unproven technologies.

To make matters worse, I.T. tends to view Web 2.0 tools as being insecure at best, or, at worst, a security threat to the business. They also don’t trust what they perceive to be “consumer-grade” technologies, which they don’t believe have the power to scale to the size that an enterprise demands.

In addition, I.T. departments currently work with a host of legacy applications. The new tools, in order to compete with these, will have to be able to integrate with existing technology, at least for the time being, in order to be fully effective.

Finally, given the tight budgets, even a tool that meets all the requirements to get in the door at a particular company may lose out: if the price of the “enterprise” version climbs too high, I.T. or other company personnel may fall back on the free version of the service, or look for a free, open-source alternative.

Enterprise 2.0 Adoption

How Web 2.0 Will Reach $4.6 Billion

All that being said, the Web 2.0 market, small as it is now, is in fact growing. In 2008, firms with 1,000 employees or more will spend $764 million on Web 2.0 tools and technologies. Over the next five years, that expenditure will grow at a compound annual rate of 43%.
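Those numbers are easy to sanity-check: compounding $764 million at 43% for the five years from 2008 to 2013 lands almost exactly on Forrester's $4.6 billion headline figure. A quick sketch (the function name here is ours, purely for illustration):

```python
# Sanity-check the forecast: $764M in 2008 growing at a 43%
# compound annual rate over the five years to 2013.
def project(base_millions: float, cagr: float, years: int) -> float:
    """Return the value of base_millions after compounding cagr for years."""
    return base_millions * (1 + cagr) ** years

projected = project(764, 0.43, 5)
print(f"2013 projection: ${projected / 1000:.1f}B")  # -> 2013 projection: $4.6B
```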

The top spending category will be social networking tools. In 2008, for example, companies will spend $258 million on tools like those from Awareness, Communispace, and Jive Software. After social networking, the next-largest category is RSS, followed by blogs and wikis, and then mashups.

The vendors expected to do best in this new marketplace will be those that bundle their tools into a complete package for the businesses they serve.

However, newer, “pure” Web 2.0 companies hoping to capitalize on this trend will still have to fight with traditional I.T. software for a foothold, specifically fighting with the likes of Microsoft and IBM. Many I.T. shops will choose to stick with their existing software from these large, well-known vendors, especially now that both are integrating Web 2.0 into their offerings.

Microsoft’s SharePoint, for example, now includes wiki, blog, and RSS technologies in its collaboration suite. IBM offers social networking and mashup tools via its Lotus Connections and Lotus Mashups products, and SAP Business Suite includes social networking and widgets.

What this means is that much of the Web 2.0 tool kit will simply “fade into the fabric of enterprise collaboration suites,” says Forrester. By 2013, few buyers will seek out and purchase Web 2.0 tools specifically. Web 2.0 will become a feature, not a product.

Enterprise 2.0 Spending

Other Trends

Other trends will also have an impact on this new marketplace, including the following:

External Spending Will Beat Internal Spending: External Web 2.0 expenditure will surpass internal expenditure in 2009, and, by 2013, will dwarf internal spending by a billion dollars. Internally, companies will spend money on internal social networking, blogs, wikis, and RSS; externally, the spending patterns will be very similar. Social networking tools that provide customer interaction, allowing customers the ability to create profiles, join discussion boards, and read company blogs, for example, will receive more investment and development over the next five years.

Europe & Asia Pacific Markets Grow: Europe and Asia Pacific will become more substantial markets in 2009. Fewer European companies have embraced Web 2.0 tools, leaving much room for growth. Asia Pacific will also grow in 2009.

Web 2.0 Graduates from “Kids’ Stuff”: Right now, people between the ages of 12 and 17 are the most avid consumers of social computing technology, with one-third of them acting as content creators. Meanwhile, only 7% of those aged 51 to 61 do the same. However, this is another trend that is going to change over the next few years. By 2011, Forrester believes that users of Web 2.0 tools will mirror users of the web at large.

Retirement of Baby Boomers: As with many things, it takes the passing of the older generation from executive status into retirement before a true shift can occur. Over the next three years, millions of baby boomers will retire, and the younger workers brought in to fill the void will not only want but expect the same kinds of tools in the office that they use at home in their personal lives.

What It All Means

For vendors wanting to play in the Enterprise 2.0 space, there are a few key takeaways to be learned from this research. For one, they can help ensure their success in this niche by selling across deployment types. That is, plan to grow beyond just selling to either the internal or external market.

Another option is to segment the enterprise marketplace by industry and then by company size. Some industries are more customer-focused than others when it comes to the external market, so developing customized solutions for a particular industry could be a key to success. For internal tools, focusing efforts on deploying enterprise-grade tools that include things like integration or security will help sell products to larger customers. Other levels of service can be designed specifically for SMBs, featuring simple, self-provisioning products to help cut down on costs.

Finally, vendors looking to grow should consider making a name for themselves in the Europe or Asia Pacific markets, where the opportunity comes from the expected increased investment rates for Web 2.0/Enterprise 2.0 in those geographic regions.

However, the most valuable aspect of this change for vendors is the knowledge they will gain about how to run a successful SaaS business – knowledge that will propel them into the next decade and beyond and, ultimately, provide more value than any single Web 2.0 offering alone ever could.


Mon Apr 14, 2008

Andy Oram

Book review: “The Future of the Internet (And How to Stop It)”

 

Most of us in the computer field have heard more than our fill about the free software movement, the copyright wars, the scourge of spyware and SQL injection attacks, the Great Firewall of China, and other battles for the control of our computers and networks. But your education is stifled until you have absorbed the insights offered by comprehensive thinkers such as Jonathan Zittrain, who presents in this brand new book some critical and welcome anchor points for discussions of Internet policy. Now we have a definitive statement from a leading law professor at Harvard and Oxford, who combines a scholar’s insight into legal doctrines with a nitty-gritty knowledge of life on the Internet.

You can read Zittrain for cogent discussions of key issues in copyright, filtering, licensing, censorship, and other pressing issues in computing and networking. But you’re rewarded even more if you read this book to grasp fundamental questions of law and society, such as:

  • What determines the legitimacy of laws and those who make and enforce them?
  • What relationship does the law on the books bear to the law as enforced, and how does the gray area between them affect the evolution of society?
  • What is the proper attitude of citizens toward law-makers and regulators, and how much power is healthy for either side to have?
  • How can community self-organization stave off the need for heavy-handed legislation–and how, in contrast, can premature legislation preclude constructive solutions by self-organized communities?

Core questions such as these power Zittrain’s tour of technology and law on today’s networks. “The Future of the Internet” takes us briskly down familiar paths, offering valuable summaries of current debates, but Zittrain also tries always to hack away at the brambles that block the end of each path. Thanks to his unusually informed perspective, he usually–although not always–succeeds in pushing us forward a few meticulously footnoted footsteps.

 

Zittrain has summarized the points in this book in an online article, but reading the whole book pays off because of its depth of legal reasoning.

Informed recommendations

One of Zittrain’s most applicable suggestions–and one that exemplifies the positive philosophy he brings to his subject–is his solution for handling computer viruses. Currently, non-expert computer users are either helpless in the face of viruses or employ inadequate firewall products that block useful programs along with infections. When Internet service providers scramble to block malware at the router, proponents of network neutrality complain that they’re violating the end-to-end principle. The dilemma seems unsolvable.

Zittrain cuts the Gordian knot by suggesting user empowerment. Experts who know how to track and identify viruses or spyware can label them as such, and less expert users can check ratings on every download. Tools are urgently needed that aggregate widely distributed ratings and present them to users in a very simple screen of information whenever they initiate something potentially dangerous. (Zittrain cites, as a model, the partnership between Google and the StopBadware project run by his colleagues at the Berkman Center.)
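In code, the kind of aggregation Zittrain describes could be as simple as a majority summary over distributed expert ratings, surfaced to the user as a single verdict before a risky download runs. The sketch below is purely illustrative; the function name, labels, and thresholds are our own assumptions, not part of StopBadware or any real tool:

```python
from collections import Counter

def verdict(ratings: list[str], min_ratings: int = 5) -> str:
    """Summarize distributed expert ratings ("safe"/"malware") into one
    user-facing label shown before a potentially dangerous download runs."""
    if len(ratings) < min_ratings:
        return "unknown: too few ratings to judge"
    # Counter returns 0 for labels no rater used.
    malware_share = Counter(ratings)["malware"] / len(ratings)
    if malware_share >= 0.5:
        return "warning: most raters flagged this as malware"
    if malware_share > 0.1:
        return "caution: some raters flagged this as malware"
    return "ok: raters consider this safe"

# Six of eight expert raters flagged the download.
print(verdict(["malware"] * 6 + ["safe"] * 2))
# -> warning: most raters flagged this as malware
```

A real tool would also have to decide whose ratings to trust, which is where Zittrain's idea of user-chosen proxies comes in.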

Users could have a choice of proxies to help them decide what to put on their computers. Additionally, instead of politely hiding network activity from users, mass-market operating systems can show the information in a manner that is easy to grasp, so that the user has a clue when the computer is at risk of turning into a zombie. Zittrain would probably be gratified by a simple security enhancement recommended in the February issue of Communications of the ACM: a suggestion that a wireless router notify each host using the router of how many hosts are currently using it, so that wardriving could immediately be detected by users.
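The CACM suggestion amounts to a one-line comparison on the client side: the router reports how many hosts it serves, and each owner compares that count against the devices they know about. A toy illustration (the names are hypothetical; no real router exposes such an API):

```python
def unexpected_host_present(reported_host_count: int,
                            known_devices: set[str]) -> bool:
    """True when the router reports more connected hosts than the
    owner's known devices, hinting at an uninvited user (a wardriver)."""
    return reported_host_count > len(known_devices)

known = {"laptop", "phone", "printer"}
print(unexpected_host_present(4, known))  # -> True (a fourth, unknown host)
print(unexpected_host_present(3, known))  # -> False (everything accounted for)
```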

Other people have suggested distributed self-defending security systems, but Zittrain links the whole endeavor to the hope provided by the Internet’s ability to bring together people who share positive goals. If software vendors and Internet security researchers gathered around this vision, a self-interested and self-organized community could protect itself, with more able members educating the less able ones.

As an alternative to restrictive software that sinks roots deep into the operating system and locks down computers, such tools could actually improve Internet users’ knowledge and sense of community while putting a dent in identity theft, spam, and distributed denial of service attacks.

Throughout the wide range of topics described in his book, Zittrain looks first to technically powered solutions that unite people of good will and encourage potential malefactors to renounce anti-social behavior. But his tone lies far from that of cocky cyberpunk hackers who boast that their technological solutions can protect them from all cyberharm (and damned be less savvy cybercitizens). Zittrain is too good a lawyer to dismiss the power of governments, or to assume that such power can only be oppressive. Thus:

  • He calls for a new Manhattan Project that would draw in government, research institutions, and individual programmers to solve the aforementioned malware problem.
  • He allows that the government should be allowed a lower threshold for access to financial data than access to other personal data.
  • He suggests regulation to enforce data portability, so that user data stored by online services could be retrieved by the owners when they wanted to switch services or when the services failed. (This is the online equivalent to the historic endorsement of open office standards that has been passed by governments in several countries and was nearly hatched in the state of Massachusetts, before a careless legislature ran an off-road vehicle over it.)

Zittrain is not a fan of network neutrality as most proponents describe it, but he sympathizes with the end-to-end principle and would like the principle of neutrality applied to APIs offered by web services such as Google’s. If web service providers claim that their data is available for creative uses by outsiders, they should not be allowed to arbitrarily cut off those outsiders that happen to be competitively successful or disruptive to their business models.

I find this recommendation particularly intriguing, because the promising area of web services is currently fraught with uncertainty that’s clearly holding back socially beneficial uses. Traditional PCs seem a rock of stability compared to the platforms that modern web services depend on, which vendors can whisk away like apparitions in the night.

You probably know, from such scandals as Yahoo!’s cooperation with the Chinese government in tracking down dissidents and Microsoft’s release of search data for a “research project” at the Department of Justice, that data stored at an online service is intrinsically less secure than data stored on your computer. But did you know that the law itself in the U.S. grants substantially less protection against search and seizure to your data when it’s stored at a service? Zittrain’s elucidation of this legal limbo, although it demands close reading, is a valuable window into the issues of technology and policy for lay readers.

Concerning medical privacy, in particular, the World Privacy Forum noted in a February report (PDF) that personal health records stored by generic organizations such as Microsoft or Google are not protected by the Health Insurance Portability and Accountability Act (HIPAA). Therefore, the records will probably be fair game for subpoenas in divorce cases, lawsuits, etc. The individual also has fewer rights when trying to correct entries.

Well, I’ve given you the quick tour of Zittrain’s book, which is like doing the Smithsonian National Museum of Natural History in an hour. Now we’ll meet back in the lobby by the elephant statue, as it were, and examine the key concept that runs through his book.

Generativity: the new battle cry

We’ve all heard so much in the past decade about “innovation” that I’m in danger of having my readers snap the browser tab shut on this web page when they see the word. (I remember when the fingers-down-the-throat word in the business world was “synergy.” That word finally disappeared along with the businesses that invoked it to justify their mergers.)

Zittrain has coined a term that captures with more richness and potential what’s happening in our economy: generativity, a measure of how many new, unexpected, and (occasionally) useful things can be developed thanks to an available platform. He lists a number of famous generative technologies, ranging from duct tape and Lego bricks to the all-time heavyweight champion of generativity, the core Internet protocols. But the effects of the Internet are predicated on many other generative technologies that have contributed to the wave of innovation over the past fifteen years or so:

  • Personal computer hardware, which accepts an unlimited variety of devices
  • Personal computer operating systems, which let ordinary consumers load any program that’s compiled to run on them
  • Free software, which encourages infinite extensions

The boon of generativity is threatened in two major ways: network restrictions and locked-down devices such as the Xbox, TiVo, and iPhone, which Zittrain calls tethered appliances. The network and the endpoint are symbiotically linked in their power: freedom in one can help keep the flame of freedom burning on the other, while correspondingly, dousing the embers on one can dim generativity on the other.

Appliances are not bad. The Xbox, TiVo, and iPhone have their place, and Zittrain points out that even the trenchantly open One Laptop Per Child system embeds a trusted computing substrate called Bitfrost that combines digital signatures, sandboxing, and mandatory access controls to prevent downloads from harming the system. Unlike trusted computing platforms in proprietary products, Bitfrost can be overridden by a sophisticated user, though doing so requires a BIOS reflash.

The degree to which a system is “appliancized” is inversely related to its generativity. We need to make sure that at least some of the population can preserve generativity in order to create technology at new levels. Furthermore, everyone needs generative systems in order to prevent vendors from choking off mass adoption of innovations.

Many of the Internet’s dangers stem from the attributes of a good generative system. Zittrain, in addition to highlighting ease of mastery and accessibility, points out that a highly generative system makes it easy to transfer capabilities from highly sophisticated developers to untrained users. This is not entirely sweet. For instance, security guru Bruce Schneier has repeatedly pointed out that easy transferability is the bane of Internet security.

It’s bad enough, Schneier says, that systems inevitably contain bugs that can be fatally exploited by top-notch coders and cryptography experts. What really threatens the Internet is that these experts can bundle the exploits into kits that script kiddies can download and use with minimal education. Sharing tools that perform intrusions is not in itself malicious; these tools are important for system administrators, programmers who reverse engineer applications (another skill with both good and evil applications), and other users. But the practice definitely swells the number of malicious programs foraging the Internet for victims.

Once we accept the value of generativity, technical solutions can allow us to preserve it while protecting ourselves from the bugs and intrusions it makes so easy to succumb to. For instance, instead of adopting a fortress mentality, public libraries and other institutions could run virtual operating systems on computers they want to protect. In our homes, our computers could have one operating system open to experimental applications (and instantly reloadable if compromised), side by side with another that is locked down. This would allow ordinary people the same generative freedom as programmers, who typically maintain work platforms and development platforms.

Value at the fringe

Among Zittrain’s most alarming insights is how calls for a safer Internet, and for one more friendly to copyright and trademark holders, can feed into general governmental control over its population in an age where more and more activity moves online. This danger–also prophesied by Swedish Pirate Party leader Rickard Falkvinge–makes generativity a concern to an immensely larger citizenry than the usual suspects consisting of free software developers and remix musicians. Zittrain’s exploration of technology’s “regulability” rises far beyond the book’s opening subject toward an expansive contribution to our understanding of the relations among citizens, governments, and the commonwealth.

Every business has suffered from the hammerlock of a new computer system that turns out to prevent employees from making the tiny exceptions to rules that previously allowed smooth operations. Perfect control on operating systems or the Internet could cause similar disasters, which range from the added costs of DRM in schools to clamp-downs by repressive regimes. Zittrain lays out several interesting legal considerations that aren’t usually raised, overtly in defense of deliberately leaky enforcement regimes.

Concurring and dissenting opinions

 

I should mention before going further that Zittrain showed me an early paper on the subject underlying his book, and cited me in his acknowledgments as one of the people whose conversations with him influenced the book. Had I the chance to discuss the following issues with him, I would have advised a few changes to the text.

 

The intractability of privacy violations

Zittrain’s last chapter focuses on privacy, which is widely understood to have passed a threshold in the past few years. Given cell phone cameras, the complex data-sharing services on popular social networks, and other tools in the hands of ordinary computer users, privacy can now be violated by irresponsible crowds in addition to large companies and governments.

First, I think Zittrain exaggerates the shift. If he believes that government and corporate abuses are now only a tiny sliver of a larger problem created by peer production on the Internet, I wonder whether he’s ever been barred from an airplane by the TSA or denied coverage by an insurance company.

But the problems he points to in privacy-violating activities that have suddenly become everyday behaviors–such as tagging photos on Flickr with people’s names–are real. He tries to apply lessons from an earlier chapter focusing on the checks and balances that make Wikipedia successful. Unfortunately, I think the analogy is weak.

Wikipedia, as Zittrain points out, remains a centralized institution under the ultimate control of one man. Authority fans out from creator Jimbo Wales in an admirably broad and flexible spread, but creativity and control at each level depend on the backstop provided at the next higher level. I agree with Zittrain that some of the solutions found here can be translated to the wider and wilder Internet, but in the area of privacy I don’t find the analogy persuasive.

Even appliances depend on generative systems

The forward thrust created by generative technologies is so powerful that one finds them in even supposedly non-generative appliances. Most embedded devices with non-trivial capabilities (devices that need more than a while-loop for an operating system) use general-purpose operating systems, often Linux or the reduced-fat version of Windows known as Windows CE.

Zittrain contrasts generative PCs and free software to appliances such as the TiVo, Xbox, and iPhone. The irony is that these are all based on generative technologies. The manufacturers could not resist the opportunity to cut development costs by using robust and freely available platforms.

TiVo uses Linux as its operating system, the Xbox runs on general-purpose hardware that has been successfully hacked to run Linux, and the iPhone–which epitomizes to Zittrain the supreme tethered appliance–has BSD inside. Because of its innately generative qualities (including the relatively transparent language of its API, Objective-C), the iPhone was opened up just a few months after its release in a textbook kind of collaboration among self-organized hackers, leading to a free software toolkit that lets any programmer create new applications using all the features of the iPhone.

These examples underline the challenge Tim O’Reilly used to pose to Microsoft: without open platforms, where will its next wave of technology come from? It looks like Microsoft listened, considering its current tentative support for a few free software projects. An industry of appliances would be poorer without generative technology.

The tether chafes

One of the central points of Zittrain’s book is that embattled computer users, worn down by the onslaught of malware, tend to retreat and give up control to centers of authority, whether by installing restrictive firewalls or buying tethered appliances that were built from the ground up to be closed.

Zittrain has several wonderful sections laying out the long-term detriment of this choice, not only for obvious topics such as technological innovation and fair use of copyrighted material, but for the balance between government and individual rights. He’s on top of all the abuses caused by manufacturers who keep control of their devices and send them automated updates–sometimes updates that deliberately disable previously available features. Tethered appliances respond to their vendors with the same flexible slavishness as computers taken over by roving bots.

But Zittrain does not use available evidence to rebut the seductive claim that choosing appliances over applications leads to more safety for the user and the overall community. Does it?

I think we have plenty of evidence to resist the tethering of previously open computers. For instance, what would most computer users trust more than a CD from Sony? And to ward off the dangers of the open Internet, should we turn to telephone companies to protect our privacy and personal data? I need say no more.

Among web services, the same worries apply. The dominant Internet appliance is Google, and every service it unveils seems to raise such fears about privacy that it has to perennially trot out its “don’t be evil” motto.

But nowhere has the trust in appliances been more dangerous than the calamitous rush to electronic voting machines without paper output, which cannot be adequately audited after deployment. We need to say loudly: closing down open systems is no solution to security risks. (Richard M. Stallman made similar points in response to Zittrain’s article, and Susan Crawford in her response.)

Web 2.0 extends generativity

The wide-area-network equivalent of a tethered appliance is “software as a service,” also known as an Application Service Provider. Here, I have to insist that Zittrain gets his terminology wrong. In place of these common industry terms, he refers to the phenomenon as Web 2.0.

Controversy has always surrounded the term Web 2.0, to be sure, despite attempts to define the phrase by Tim O’Reilly, who is credited with inventing it. Although everybody reads his own biases into the term, I don’t see any meaningful definition of Web 2.0 that includes web sites where users just log in to run an application remotely. I did see one other speaker misunderstand the term this way, but we have to resist the trend to “mash up” useful terms to the point where they lose their value and all come out in some bland uniformity.

Web 2.0 features–such as simple APIs and ways to incorporate user-submitted content–extend generativity as much as blogs and wikis do. They’re a critical stage in the ongoing evolution of the Internet. But Zittrain does offer some important critiques. Google Maps can discourage competition by co-opting it through its powerful API. And this ultimately means more control for Google–control it could leverage to artificially set the direction for mapping applications.

Thus, Web 2.0 technologies can be seen as enablers that open up the data and applications controlled by corporations, but also as the soft glove that allows the corporate fist to push itself further and further into clients’ lives.

My glosses and musings on “The Future of the Internet” show how much meat it provides for analysis and discussion. Anyone who can make it through this long review would get a lot from the book. In addition to drawing links among useful recommendations for preserving our freedom, Zittrain proves that the legal frameworks for making such decisions are more complex than most technologists and policy makers credit them for.


Mon Apr 14 2008

Andy Oram

Book review: “The Future of the Internet (And How to Stop It)”

 

Most of us in the computer field have heard more than our fill about the free software movement, the copyright wars, the scourge of spyware and SQL injection attacks, the Great Firewall of China, and other battles for the control of our computers and networks. But your education is stifled until you have absorbed the insights offered by comprehensive thinkers such as Jonathan Zittrain, who presents in this brand new book some critical and welcome anchor points for discussions of Internet policy. Now we have a definitive statement from a leading law professor at Harvard and Oxford, who combines a scholar’s insight into legal doctrines with a nitty-gritty knowledge of life on the Internet.

You can read Zittrain for cogent discussions of key issues in copyright, filtering, licensing, censorship, and other pressing issues in computing and networking. But you’re rewarded even more if you read this book to grasp fundamental questions of law and society, such as:

  • What determines the legitimacy of laws and those who make and enforce them?
  • What relationship does the law on the books bear to the law as enforced, and how does the gray area between them affect the evolution of society?
  • What is the proper attitude of citizens toward law-makers and regulators, and how much power is healthy for either side to have?
  • How can community self-organization stave off the need for heavy-handed legislation–and how, in contrast, can premature legislation preclude constructive solutions by self-organized communities?

Core questions such as these power Zittrain’s tour of technology and law on today’s networks. “The Future of the Internet” takes us briskly down familiar paths, offering valuable summaries of current debates, but Zittrain also tries always to hack away at the brambles that block the end of each path. Thanks to his unusually informed perspective, he usually–although not always–succeeds in pushing us forward a few meticulously footnoted footsteps.

 

Zittrain has summarized the points in this book in an online article, but reading the whole book pays off because of its depth of legal reasoning.

Informed recommendations

One of Zittrain’s most applicable suggestions–and one that exemplifies the positive philosophy he brings to his subject–is his solution for handling computer viruses. Currently, non-expert computer users are either helpless in the face of viruses or employ inadequate firewall products that block useful programs along with infections. When Internet service providers scramble to block malware at the router, proponents of network neutrality complain that they’re violating the end-to-end principle. The dilemma seems unsolvable.

Zittrain cuts the Gordian knot by suggesting user empowerment. Experts who know how to track and identify viruses or spyware can label them as such, and less expert users can check ratings on every download. Tools are urgently needed that aggregate widely distributed ratings and present them to users in a very simple screen of information whenever they initiate something potentially dangerous. (Zittrain cites, as a model, the partnership between Google and the StopBadware project run by his colleagues at the Berkman Center.)
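The aggregation tool described above can be sketched in a few lines; the rater names and the majority threshold here are invented for illustration, not drawn from StopBadware or any real rating service.

```python
from collections import Counter

# Hypothetical verdicts on one download, collected from widely
# distributed expert raters (these service names are invented).
ratings = {
    "stopbadware.example": "malicious",
    "av-lab.example": "malicious",
    "community-scan.example": "clean",
}

def summarize(ratings, quorum=0.5):
    """Condense many expert verdicts into one line a user can act on."""
    tally = Counter(ratings.values())
    share_bad = tally["malicious"] / len(ratings)
    if share_bad >= quorum:
        return f"WARNING: {tally['malicious']} of {len(ratings)} raters flag this as malware"
    return f"OK: {tally['clean']} of {len(ratings)} raters call this clean"

print(summarize(ratings))  # WARNING: 2 of 3 raters flag this as malware
```

A real tool would also weigh raters by reputation rather than counting them equally, which is where the hard design work lies.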

Users could have a choice of proxies to help them decide what to put on their computers. Additionally, instead of politely hiding network activity from users, mass-market operating systems can show the information in a manner that is easy to grasp, so that the user has a clue when the computer is at risk of turning into a zombie. Zittrain would probably be gratified by a simple security enhancement recommended in the February issue of Communications of the ACM: a suggestion that a wireless router notify each host using the router how many hosts are currently using it, so that wardriving could immediately be detected by users.
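That router suggestion is simple enough to sketch. The lease list and expected count below are illustrative; a real implementation would read the router's actual association table and push the message to each connected host.

```python
# Sketch of the ACM suggestion: the router tells each host how many
# hosts it is currently serving, so an unexpected extra one stands out.

def host_count_alert(active_leases, expected):
    """Build the message a desktop client could display to the user."""
    n = len(set(active_leases))  # count distinct MAC addresses seen
    if n > expected:
        return f"Alert: {n} hosts on this router, you expected {expected}"
    return f"{n} host(s) on this router"

# Two machines of our own, plus an unknown MAC: a possible wardriver.
leases = ["aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02", "de:ad:be:ef:00:99"]
print(host_count_alert(leases, expected=2))
# Alert: 3 hosts on this router, you expected 2
```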

Other people have suggested distributed self-defending security systems, but Zittrain links the whole endeavor to the hope provided by the Internet’s ability to bring together people who share positive goals. If software vendors and Internet security researchers gathered around this vision, a self-interested and self-organized community could protect itself, with more able members educating the less able ones.

As an alternative to restrictive software that sinks roots deep into the operating system and locks down computers, such tools could actually improve Internet users’ knowledge and sense of community while putting a dent in identity theft, spam, and distributed denial of service attacks.

Throughout the wide range of topics described in his book, Zittrain looks first to technically powered solutions that unite people of good will and encourage potential malefactors to renounce anti-social behavior. But his tone lies far from that of cocky cyberpunk hackers who boast that their technological solutions can protect them from all cyberharm (and damned be less savvy cybercitizens). Zittrain is too good a lawyer to dismiss the power of governments, or to assume that such power can only be oppressive. Thus:

  • He calls for a new Manhattan Project that would draw in government, research institutions, and individual programmers to solve the aforementioned malware problem.
  • He allows that the government may legitimately have a lower threshold for access to financial data than for access to other personal data.
  • He suggests regulation to enforce data portability, so that user data stored by online services could be retrieved by the owners when they wanted to switch services or when the services failed. (This is the online equivalent to the historic endorsement of open office standards that has been passed by governments in several countries and was nearly hatched in the state of Massachusetts, before a careless legislature ran an off-road vehicle over it.)

Zittrain is not a fan of network neutrality as most proponents describe it, but he sympathizes with the end-to-end principle and would like the principle of neutrality applied to APIs offered by web services such as Google’s. If web service providers claim that their data is available for creative uses by outsiders, they should not be allowed to arbitrarily cut off those outsiders that happen to be competitively successful or disruptive to their business models.

I find this recommendation particularly intriguing, because the promising area of web services is currently fraught with uncertainty that’s clearly holding back socially beneficial uses. Traditional PCs seem a rock of stability in comparison to the back-end services that modern web applications depend on, which vendors can whisk away like apparitions in the night.

You probably know, from such scandals as Yahoo!’s cooperation with the Chinese government in tracking down dissidents and Microsoft’s release of search data for a “research project” at the Department of Justice, that data stored at an online service is intrinsically less secure than data stored on your computer. But did you know that the law itself in the U.S. grants substantially less protection against search and seizure to your data when it’s stored at a service? Zittrain’s elucidation of this legal limbo, although it demands close reading, is a valuable window into the issues of technology and policy for lay readers.

Concerning medical privacy, in particular, the World Privacy Forum noted in a February report (PDF) that personal health records stored by generic organizations such as Microsoft or Google are not protected by the Health Insurance Portability and Accountability Act (HIPAA). Therefore, the records will probably be fair game for subpoenas in divorce cases, lawsuits, etc. The individual also has fewer rights when trying to correct entries.

Well, I’ve given you the quick tour of Zittrain’s book, which is like doing the Smithsonian National Museum of Natural History in an hour. Now we’ll meet back in the lobby by the elephant statue, as it were, and examine the key concept that runs through his book.

Generativity: the new battle cry

We’ve all heard so much in the past decade about “innovation” that I’m in danger of having my readers snap the browser tab shut on this web page when they see the word. (I remember when the fingers-down-the-throat word in the business world was “synergy.” That word finally disappeared along with the businesses that invoked it to justify their mergers.)

Zittrain has coined a term that captures with more richness and potential what’s happening in our economy: generativity, a measure of how many new, unexpected, and (occasionally) useful things can be developed thanks to an available platform. He lists a number of famous generative technologies, ranging from duct tape and Lego bricks to the all-time heavyweight champion of generativity, the core Internet protocols. But the effects of the Internet are predicated on many other generative technologies that have contributed to the wave of innovation over the past fifteen years or so:

  • Personal computer hardware, which accepts an unlimited variety of devices
  • Personal computer operating systems, which let ordinary consumers load any program that’s compiled to run on them
  • Free software, which encourages infinite extensions

The boon of generativity is threatened in two major ways: network restrictions and locked-down devices such as the Xbox, TiVo, and iPhone, which Zittrain calls tethered appliances. The network and the endpoint are symbiotically linked in their power: freedom in one can help keep the flame of freedom burning on the other, while correspondingly, dousing the embers on one can dim generativity on the other.

Appliances are not bad. The Xbox, TiVo, and iPhone have their place, and Zittrain points out that even the trenchantly open One Laptop Per Child system embeds a trusted computing substrate called Bitfrost that combines digital signatures, sandboxing, and mandatory access controls to prevent downloads from harming the system. Unlike trusted computing platforms in proprietary products, Bitfrost can be overridden by a sophisticated user, but requires a BIOS reflash.

The degree to which a system is “appliancized” is inversely related to its generativity. We need to make sure that at least some of the population can preserve generativity in order to create technology at new levels. Furthermore, everyone needs generative systems in order to prevent vendors from choking off mass adoption of innovations.

Many of the Internet’s dangers stem from the attributes of a good generative system. Zittrain, in addition to highlighting ease of mastery and accessibility, points out that a highly generative system makes it easy to transfer capabilities from highly sophisticated developers to untrained users. This is not entirely sweet. For instance, security guru Bruce Schneier has repeatedly pointed out that easy transferability is the bane of Internet security.

It’s bad enough, Schneier says, that systems inevitably contain bugs that can be fatally exploited by top-notch coders and cryptography experts. What really threatens the Internet is that these experts can bundle the exploits into kits that script kiddies can download and use with minimal education. Sharing tools that perform intrusions is not in itself malicious; these tools are important for system administrators, programmers who reverse engineer applications (another skill with both good and evil applications), and other users. But the practice definitely swells the number of malicious programs foraging the Internet for victims.

Once we accept the value of generativity, technical solutions can allow us to preserve it while protecting ourselves from the bugs and intrusions that it makes so easy to succumb to. For instance, instead of adopting a fortress mentality, public libraries and other institutions could run virtual operating systems on computers they want to protect. In our homes, our computers could have one operating system open to experimental applications (and instantly reloadable if compromised), side by side with another that is locked down. This would allow ordinary people the same generative freedom as programmers, who typically maintain work platforms and development platforms.

Value at the fringe

Among Zittrain’s most alarming insights is how calls for a safer Internet, and for one more friendly to copyright and trademark holders, can feed into general governmental control over its population in an age where more and more activity moves online. This danger–also prophesied by Swedish Pirate Party leader Rickard Falkvinge–makes generativity a concern to an immensely larger citizenry than the usual suspects consisting of free software developers and remix musicians. Zittrain’s exploration of technology’s “regulability” rises far beyond the book’s opening subject toward an expansive contribution to our understanding of the relations among citizens, governments, and the commonwealth.

Every business has suffered from the hammerlock of a new computer system that turns out to prevent employees from making the tiny exceptions to rules that previously allowed smooth operations. Perfect control on operating systems or the Internet could cause similar disasters, which range from the added costs of DRM in schools to clamp-downs by repressive regimes. Zittrain lays out several interesting legal considerations that aren’t usually raised, in overt defense of deliberately leaky enforcement regimes.

Concurring and dissenting opinions

 

I should mention before going further that Zittrain showed me an early paper on the subject underlying his book, and cited me in his acknowledgments as one of the people whose conversations with him influenced the book. Had I the chance to discuss the following issues with him, I would have advised a few changes to the text.

 

The intractability of privacy violations

Zittrain’s last chapter focuses on privacy, which is widely understood to have passed a threshold in the past few years. Given cell phone cameras, the complex data-sharing services on popular social networks, and other tools in the hands of ordinary computer users, privacy can now be violated by irresponsible crowds in addition to large companies and governments.

First, I think Zittrain exaggerates the shift. If he believes that government and corporate abuses are now only a tiny sliver of a larger problem created by peer production on the Internet, I wonder whether he’s ever been barred from an airplane by the TSA or denied coverage by an insurance company.

But the problems he points to in privacy-violating activities that have suddenly become everyday behaviors–such as tagging photos on Flickr with people’s names–are real. He tries to apply lessons from an earlier chapter focusing on the checks and balances that make Wikipedia successful. Unfortunately, I think the analogy is weak.

Wikipedia, as Zittrain points out, remains a centralized institution under the ultimate control of one man. Authority fans out from creator Jimbo Wales in an admirably broad and flexible spread, but creativity and control at each level depend on the backstop provided at the next higher level. I agree with Zittrain that some of the solutions found here can be translated to the wider and wilder Internet, but in the area of privacy I don’t find the analogy persuasive.

Even appliances depend on generative systems

The forward thrust created by generative technologies is so powerful that one finds them in even supposedly non-generative appliances. Most embedded devices with non-trivial capabilities (devices that need more than a while-loop for an operating system) use general-purpose operating systems, often Linux or the reduced-fat version of Windows known as Windows CE.

Zittrain contrasts generative PCs and free software to appliances such as the TiVo, Xbox, and iPhone. The irony is that these are all based on generative technologies. The manufacturers could not resist the opportunity to cut development costs by using robust and freely available platforms.

TiVo uses Linux as its operating system, the Xbox runs on general-purpose hardware that has been successfully hacked to run Linux, and the iPhone–which epitomizes to Zittrain the supreme tethered appliance–has BSD inside. Because of its innately generative qualities (including the relatively transparent language of its API, Objective-C), the iPhone was opened up just a few months after its release in a textbook kind of collaboration among self-organized hackers, leading to a free software toolkit that lets any programmer create new applications using all the features of the iPhone.

These examples underline the challenge Tim O’Reilly used to pose to Microsoft: without open platforms, where will its next wave of technology come from? It looks like Microsoft listened, considering its current tentative support for a few free software projects. An industry of appliances would be poorer without generative technology.

The tether chafes

One of the central points of Zittrain’s book is that embattled computer users, worn down by the onslaught of malware, tend to retreat and give up control to centers of authority, whether by installing restrictive firewalls or buying tethered appliances that were built from the ground up to be closed.

Zittrain has several wonderful sections laying out the long-term detriment of this choice, not only for obvious topics such as technological innovation and fair use of copyrighted material, but for the balance between government and individual rights. He’s on top of all the abuses caused by manufacturers who keep control of their devices and send them automated updates–sometimes updates that deliberately disable previously available features. Tethered appliances respond to their vendors with the same flexible slavishness as computers taken over by roving bots.

But Zittrain does not use available evidence to rebut the seductive claim that choosing appliances over applications leads to more safety for the user and the overall community. Does it?

I think we have plenty of evidence to resist the tethering of previously open computers. For instance, what would most computer users trust more than a CD from Sony? And to ward off the dangers of the open Internet, should we turn to telephone companies to protect our privacy and personal data? I need say no more.

Among web services, the same worries apply. The dominant Internet appliance is Google, and every service it unveils seems to raise such fears about privacy that it has to perennially trot out its “don’t be evil” motto.

But nowhere has the trust in appliances been more dangerous than the calamitous rush to electronic voting machines without paper output, which cannot be adequately audited after deployment. We need to say loudly: closing down open systems is no solution to security risks. (Richard M. Stallman made similar points in response to Zittrain’s article, and Susan Crawford in her response.)

Web 2.0 extends generativity

The wide-area-network equivalent of a tethered appliance is “software as a service,” also known as an Application Service Provider. Here, I have to insist that Zittrain gets his terminology wrong. In place of these common industry terms, he refers to the phenomenon as Web 2.0.

Controversy has always surrounded the term Web 2.0, to be sure, despite attempts to define the phrase by Tim O’Reilly, who is credited with inventing it. Although everybody reads his own biases into the term, I don’t see any meaningful definition of Web 2.0 that includes web sites where users just log in to run an application remotely. I did see one other speaker misunderstand the term this way, but we have to resist the trend to “mash up” useful terms to the point where they lose their value and all come out in some bland uniformity.

Web 2.0 features–such as simple APIs and ways to incorporate user-submitted content–extend generativity as much as blogs and wikis do. They’re a critical stage in the ongoing evolution of the Internet. But Zittrain does offer some important critiques. Google Maps can discourage competition by co-opting it through its powerful API. And this ultimately means more control for Google–control it could leverage to artificially set the direction for mapping applications.

Thus, Web 2.0 technologies can be seen as enablers that open up the data and applications controlled by corporations, but also as the soft glove that allows the corporate fist to push itself further and further into their clients’ lives.

My glosses and musings on “The Future of the Internet” show how much meat it provides for analysis and discussion. Anyone who can make it through this long review would get a lot from the book. In addition to drawing links among useful recommendations for preserving our freedom, Zittrain proves that the legal frameworks for making such decisions are more complex than most technologists and policy makers credit them for.

 

tags: free software, law, Open Source  | comments: 0

Read Full Post »

WIRED MAGAZINE: 16.03

Tech Biz  :  IT   

How Can Directory Assistance Be Free?

By Chris Anderson  02.25.08 | 12:00 AM

AT&T and its competitors rake in $7 billion a year from directory assistance, charging 50 cents to $1.75 per call. Google, on the other hand, offers its automated GOOG-411 service gratis. How can the search juggernaut afford not to charge?

A) Get free data. Each time callers to GOOG-411 request a phone number, they’re giving Google valuable information. Each call provides voice data representing unique variations in accent, phrasing, and business names that Google uses to improve its service. Estimated market value of that data since the service launched last spring: $14 million.

B) Invest in the next big thing. Still, the value of that information hardly compares with potential earnings if Google were to charge $1 per call. Why give away the store? Honcho Peter Norvig has said that GOOG-411 is a test bed for a voice-driven search engine for mobile phones. If it serves ads to those phones, Google’s share of that market could be measured in billions.
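The pricing figures above imply an enormous call volume. A quick back-of-envelope sketch, using only the numbers quoted in the article:

```python
# Implied industry call volume from the article's figures.
industry_revenue = 7_000_000_000      # $7 billion per year
low_price, high_price = 0.50, 1.75    # per-call charges cited

calls_high = industry_revenue / low_price    # if every call cost $0.50
calls_low = industry_revenue / high_price    # if every call cost $1.75

print(f"Implied calls per year: {calls_low/1e9:.1f} to {calls_high/1e9:.1f} billion")
# Implied calls per year: 4.0 to 14.0 billion
```

Even the low end of that range means billions of calls a year that Google can divert toward its free service and mine for voice data.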

 
Chart: Steven Leckart; Chart design: Nicholas Felton; Sources: Jingle Networks, Linguistic Data Consortium, Opus Research

Read Full Post »

