
Archive for January 21st, 2008

Q&A with John Doerr




By Matt Marshall 11.21.07

Here’s an interview with venture capitalist John Doerr at the Web 2.0 Summit a few weeks ago. Doerr, the backer of Google, Netscape, Sun, Amazon and others, is in full form, talking about the need for clean tech, as well as his view that a “radically immersive” web will emerge over the next year or two, and that his firm is looking to invest in it.

8 comments on this story


Tom
I enjoyed this piece. My one issue was Doerr’s comment on climate change (paraphrasing here): “The debate is over. Al Gore winning the Nobel Prize tells us that everyone is in agreement.” Yassir Arafat won the Nobel Peace Prize years ago, and I would argue the “debate” as to whether he was mainly a terrorist is alive and well today. A “subjective” Nobel prize does not mean consensus on anything; it’s not hard science, it’s marketing. The climate change crowd loves the “debate is over” phrase mainly because they are afraid of the “debate”. Scientific problems should always be open to scrutiny and debate.


Thumbster
Tom, let’s put aside that pesky lack of scientific support for anthropogenic global warming: there’s a consensus. Sure, consensus is a political term and not a scientific one, but embrace the hype. Just as there was consensus among scientists that the earth was flat, the sun revolved around the earth, and DDT was thinning egg shells (resulting in a ban that kills 2M people a year in sub-Saharan Africa). Embrace the politics of global warming… and the financial gain. The green investment money and Al Gore’s $100M annual income from hype, speaking fees and carbon offsets are really a good thing, just as Yassir being the father of modern terrorism is a good thing. Debate Over!


Well, as far as I know, it’s the oil companies who are afraid of the debates… they are so afraid that they have to “buy” scientists to publish reports saying that there’s no such thing as global warming, or that the oil wells won’t run dry for another 100 years… Speaking of the consensus thing, please tell me if you open up any newspaper or magazine these days and don’t find a green/cleantech-related article there. Now, is that not “consensus” enough?


Naane
Consensus is a great word. Doerr made the mistake of using the word “everyone” – all you need is for one person to disagree and he’s immediately wrong. “Consensus”, though, allows the possibility of some nuts who aren’t engaging in Correct Thought. But not only can there be consensus with dissent, the dissent is made irrelevant by the existence of the consensus. It’s an even better word than ‘majority’, which requires you to prove that more than 50% agree. To prove a consensus you just have to say things like ‘open up any newspaper or magazine’.

To put it simply, a consensus is defined by media, business, politicians, and generally those who ‘matter’. A majority is defined by everyone, even the people who don’t matter at all, which is why it’s so annoying. For some reason people resolutely refuse to believe what their self-appointed thought leaders tell them to believe. They ask awkward questions, like how this century’s belief that natural disasters are the direct result of mankind’s misdeeds is any different from when the Biblical 40-day flood was blamed on human sin, as was virtually every natural disaster since we invented religion.

Consensus doesn’t ask awkward questions. Consensus just knows, and is self-reinforcing. Why should people pay to stop global warming? Because of the consensus that global warming exists. Why is there a consensus that global warming exists? Well, if there wasn’t a consensus that it exists, why would people be paying to stop it?

I’m sure Doerr just made a slip of the tongue and will use Approved Vocabulary in future in order to ensure continuing Correct Thought, not to mention money.


Al Gorbachev
Consensus sucks.


Harvey
Global warming may or may not be stopped or reversed or actually be a problem. BUT energy independence is a necessity!! For geopolitical reasons!!


[…] investment is considerable, and comes at a time when a number of experts are betting that a more powerful, “semantic” Web is about to emerge, where data about information is much […]

Read Full Post »

Connections #018 – Improving Performance & Profitability

Connections #018 – Improving Performance & Profitability [20:06m]: Download MP3 (http://connections.thepodcastnetwork.com/audio/tpn_connections_20080120_018_Ross_Dawson.mp3)

Ross Dawson is a highly intelligent, articulate & insightful Strategy Consultant, Author, Keynote Speaker and leading authority on Web 2.0, Social Networking and their applications in the business environment.

In one of the most informative & stimulating conversations I’ve had the privilege of being involved in, Ross examines some of the themes that have arisen in recent discussions – such as Collaboration in the Workplace & Corporate Mash-Ups.

Ross discusses in fascinating detail how leading corporations are using Social & Business Networking tools to improve their performance – and more importantly, how these applications can be used to create real competitive differentiation & profitability.

For any Corporate Manager, Technology Enthusiast or Social Entrepreneur interested in the impact of these elements on society over the next 5-10 years – this is NOT an Episode to miss!

DOWNLOADS & RESOURCES:
Web 2.0 Framework
Future of Media Report 2007

WEB SITES:
Future Exploration Network
Advanced Human Technologies
‘Trends in the Living Networks’ (blog)
RossDawson.com

Like this PodCast?

Then share it with your network – and show your support by ‘casting a vote’ at Digg.com, or writing a Review on iTunes.

And don’t be shy – please leave a Comment. Your feedback & suggestions are greatly appreciated.

If you’re a Blogger or Webmaster – you can download Embed Code that makes it easy to include this Episode of ‘Connections’ in your web page.

Thank you

Stan Relihan

My Company: www.expertsearch.com.au

Read Full Post »

Trends in the Living Networks

Ross Dawson’s Trends in the Living Networks blog offers high-level commentary on developments in our intensely networked world, and how it is coming to life. The blog is primarily intended for a general business audience, identifying critical technology, social, and business trends and their implications.

January 21, 2008

See our latest Trend Map! What to expect in 2008 and beyond….

Nowandnext.com and Future Exploration Network have once again collaborated to create a trend map for 2008 and beyond.

Our Trend Map for 2007+ had a major impact: over 40,000 downloads, fantastic feedback (“The World’s Best Trend Map. Ever.” “I got shivers” “Amazing” “Fascinating” “Magnifique” etc. etc.), and it inspired several other trend maps, including Information Architects’ first map of web trends.

While last year’s map was based on the London tube map, the 2008 map is derived from Shanghai’s underground routes. Limited to just five lines, the map uncovers key trends across Society, Politics, Demographics, Economy, and Technology.

Click on the map below to get the full pdf.

[Trend map image: trendblend2008.jpg]

Trends mentioned in the map include:

Continue reading “See our latest Trend Map! What to expect in 2008 and beyond….”

Posted by Ross Dawson at 2:06 AM | Permalink | Comments (1)

Download the 2008 map (pdf): http://www.rossdawsonblog.com/TrendBlend08_map.pdf (trendblend08_map.pdf)

Last year’s map: trend_blend_2007_map.pdf

Read Full Post »

Downloads: web2_framework.pdf | future_of_media_report2007.pdf

Launching the Web 2.0 Framework

Alongside our corporate strategy consulting and research work in the media and technology space, Future Exploration Network has created a Web 2.0 Framework to share openly. Click here or on any of the images below to download the Framework as a pdf (713KB).

The intention of the Web 2.0 Framework is to provide a clear, concise view of the nature of Web 2.0, particularly for senior executives or other non-technical people who are trying to grasp the scope of Web 2.0, and the implications and opportunities for their organizations.

There are three key parts to the Web 2.0 Framework, as shown below:

Web 2.0 Framework
* Web 2.0 is founded on seven key Characteristics: Participation, Standards, Decentralization, Openness, Modularity, User Control, and Identity.
* Web 2.0 is expressed in two key Domains: the Open web, and the Enterprise.
* The heart of Web 2.0 is how it converts Inputs (User Generated Content, Opinions, Applications), through a series of Mechanisms (Technologies, Recombination, Collaborative Filtering, Structures, Syndication) to Emergent Outcomes that are of value to the entire community.
Web 2.0 Definitions
* We define the Web 2.0 Characteristics, Domains, and Technologies referred to in the Framework.
* Ten definitions for Web 2.0 are provided, including the one I use to pull together the ideas in the Framework: “Distributed technologies built to integrate, that collectively transform mass participation into valuable emergent outcomes.”
Web 2.0 Landscape
* Sixty-two prominent Web 2.0 companies and applications are mapped out across two major dimensions: Content Sharing to Recommendations/ Filtering; and Web Application to Social Network. The four spaces that emerge at the junctions of these dimensions are Widget/ component; Rating/ tagging; Aggregation/ Recombination; and Collaborative filtering. Collectively these cover the primary landscape of Web 2.0 (a minimal sketch of the collaborative-filtering mechanism follows below).
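
To make the “Collaborative filtering” mechanism concrete, here is a minimal user-based sketch in Python. The ratings data, similarity measure, and scoring rule are all invented for illustration; none of this comes from the Framework document itself.

```python
from math import sqrt

# Minimal user-based collaborative filtering: recommend items liked by
# users whose past ratings resemble yours. Ratings are invented examples.
ratings = {
    "ann":  {"flickr": 5, "digg": 3, "del.icio.us": 4},
    "ben":  {"flickr": 4, "digg": 1, "youtube": 5},
    "cara": {"digg": 4, "del.icio.us": 5, "youtube": 2},
}

def similarity(a, b):
    """Cosine similarity over the items two users have both rated."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    dot = sum(ratings[a][i] * ratings[b][i] for i in shared)
    norm_a = sqrt(sum(ratings[a][i] ** 2 for i in shared))
    norm_b = sqrt(sum(ratings[b][i] ** 2 for i in shared))
    return dot / (norm_a * norm_b)

def recommend(user):
    """Score unseen items by similarity-weighted votes from other users."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        w = similarity(user, other)
        for item, r in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + w * r
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(recommend("ann"))  # e.g. [('youtube', 6.78...)]
```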

As with all our frameworks, the Web 2.0 Framework is released on a Creative Commons license, which allows anyone to use it and build on it as they please, as long as there is attribution with a link to this blog post and/ or Future Exploration Network. The framework is intended to be a stimulus to conversation and further thinking, so if you disagree on any aspect, or think you can improve on it, please take what is useful, leave the rest, and create something better.

In the Framework document we also mention our forthcoming Future of Media Summit 2007, which will be held simultaneously in Sydney and San Francisco this July 18/17 (July 18 in Sydney, July 17 in San Francisco). In the same spirit as this Web 2.0 Framework, we will be releasing substantial research, frameworks, and other content on the Future of Media in the lead-up to our event, continuing the tradition from the Future of Media Strategic Framework and Future of Media Report 2006 that we released last year. Hope this is all useful!


Read Full Post »

New York Times

Entrepreneurs See a Web Guided by Common Sense

Published: November 12, 2006
SAN FRANCISCO, Nov. 11 — From the billions of documents that form the World Wide Web and the links that weave them together, computer scientists and a growing collection of start-up companies are finding new ways to mine human intelligence.

Their goal is to add a layer of meaning on top of the existing Web that would make it less of a catalog and more of a guide — and even provide the foundation for systems that can reason in a human fashion. That level of artificial intelligence, with machines doing the thinking instead of simply following commands, has eluded researchers for more than half a century.

Referred to as Web 3.0, the effort is in its infancy, and the very idea has given rise to skeptics who have called it an unobtainable vision. But the underlying technologies are rapidly gaining adherents, at big companies like I.B.M. and Google as well as small ones. Their projects often center on simple, practical uses, from producing vacation recommendations to predicting the next hit song.

But in the future, more powerful systems could act as personal advisers in areas as diverse as financial planning, with an intelligent system mapping out a retirement plan for a couple, for instance, or educational consulting, with the Web helping a high school student identify the right college.

The projects aimed at creating Web 3.0 all take advantage of increasingly powerful computers that can quickly and completely scour the Web.

“I call it the World Wide Database,” said Nova Spivack, the founder of a start-up firm whose technology detects relationships between nuggets of information by mining the World Wide Web. “We are going from a Web of connected documents to a Web of connected data.”

Web 2.0, which describes the ability to seamlessly connect applications (like geographic mapping) and services (like photo-sharing) over the Internet, has in recent months become the focus of dot-com-style hype in Silicon Valley. But commercial interest in Web 3.0 — or the “semantic Web,” for the idea of adding meaning — is only now emerging.

The classic example of the Web 2.0 era is the “mash-up” — for example, connecting a rental-housing Web site with Google Maps to create a new, more useful service that automatically shows the location of each rental listing.

In contrast, the Holy Grail for developers of the semantic Web is to build a system that can give a reasonable and complete response to a simple question like: “I’m looking for a warm place to vacation and I have a budget of $3,000. Oh, and I have an 11-year-old child.”

Under today’s system, such a query can lead to hours of sifting — through lists of flights, hotels and car rentals — and the options are often at odds with one another. Under Web 3.0, the same search would ideally call up a complete vacation package that was planned as meticulously as if it had been assembled by a human travel agent.

How such systems will be built, and how soon they will begin providing meaningful answers, is now a matter of vigorous debate both among academic researchers and commercial technologists. Some are focused on creating a vast new structure to supplant the existing Web; others are developing pragmatic tools that extract meaning from the existing Web.

But all agree that if such systems emerge, they will instantly become more commercially valuable than today’s search engines, which return thousands or even millions of documents but as a rule do not answer questions directly.

Underscoring the potential of mining human knowledge is an extraordinarily profitable example: the basic technology that made Google possible, known as “Page Rank,” systematically exploits human knowledge and decisions about what is significant to order search results. (It interprets a link from one page to another as a “vote,” but votes cast by pages considered popular are weighted more heavily.)
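
The voting scheme described above maps naturally onto the power-iteration form of PageRank. Here is a minimal sketch in Python; the toy graph, damping factor, and iteration count are illustrative choices, not details from the article:

```python
# Minimal PageRank power iteration: a link is a "vote", and votes from
# pages that are themselves well-linked carry more weight.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:          # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:   # each outlink is a weighted "vote"
                new_rank[target] += share
        rank = new_rank
    return rank

# Toy web: pages that collect many votes pass on more influential votes.
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(sorted(pagerank(web).items(), key=lambda kv: -kv[1]))
```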

Today researchers are pushing further. Mr. Spivack’s company, Radar Networks, for example, is one of several working to exploit the content of social computing sites, which allow users to collaborate in gathering and adding their thoughts to a wide array of content, from travel to movies.

Radar’s technology is based on a next-generation database system that stores associations, such as one person’s relationship to another (colleague, friend, brother), rather than specific items like text or numbers.
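
A database of associations rather than values can be pictured as a set of subject–relationship–object triples with pattern-matching queries. The following is a toy sketch under that assumption; the people, relationships, and API are invented and do not describe Radar Networks’ actual system:

```python
# A toy association store: facts are (subject, relationship, object)
# triples, and queries pattern-match with None as a wildcard.
triples = {
    ("alice", "colleague", "bob"),
    ("alice", "friend", "carol"),
    ("bob", "brother", "dave"),
}

def query(subject=None, relation=None, obj=None):
    """Return every triple matching the non-None fields."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (relation is None or t[1] == relation)
            and (obj is None or t[2] == obj)]

print(query(subject="alice"))        # everything we know about alice
print(query(relation="colleague"))   # every colleague relationship
```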

One example that hints at the potential of such systems is KnowItAll, a project by a group of University of Washington faculty members and students that has been financed by Google. One sample system created using the technology is Opine, which is designed to extract and aggregate user-posted information from product and review sites.

One demonstration project focusing on hotels “understands” concepts like room temperature, bed comfort and hotel price, and can distinguish between concepts like “great,” “almost great” and “mostly O.K.” to provide useful direct answers. Whereas today’s travel recommendation sites force people to weed through long lists of comments and observations left by others, the Web 3.0 system would weigh and rank all of the comments and find, by cognitive deduction, just the right hotel for a particular user.
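
The “spotless is better than clean” ranking amounts to a graded opinion lexicon. Here is a minimal sketch of that idea; the strength scores and review text are invented for the example and are not Opine’s actual model:

```python
# Toy graded-opinion scoring: stronger words about the same concept
# outrank weaker ones. Strength values are invented for illustration.
strength = {"spotless": 1.0, "great": 0.9, "clean": 0.6,
            "almost great": 0.5, "mostly o.k.": 0.3, "dirty": -0.8}

def score_review(text):
    """Sum the strengths of known opinion phrases found in a review."""
    text = text.lower()
    return sum(s for phrase, s in strength.items() if phrase in text)

reviews = {
    "Hotel A": "Spotless rooms and a great pool.",
    "Hotel B": "Clean enough, mostly O.K. overall.",
}
ranked = sorted(reviews, key=lambda h: score_review(reviews[h]), reverse=True)
print(ranked)  # Hotel A outranks Hotel B: "spotless" beats "clean"
```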

“The system will know that spotless is better than clean,” said Oren Etzioni, an artificial-intelligence researcher at the University of Washington who is a leader of the project. “There is the growing realization that text on the Web is a tremendous resource.”

In its current state, the Web is often described as being in the Lego phase, with all of its different parts capable of connecting to one another. Those who envision the next phase, Web 3.0, see it as an era when machines will start to do seemingly intelligent things.

Researchers and entrepreneurs say that while it is unlikely that there will be complete artificial-intelligence systems any time soon, if ever, the content of the Web is already growing more intelligent. Smart Webcams watch for intruders, while Web-based e-mail programs recognize dates and locations. Such programs, the researchers say, may signal the impending birth of Web 3.0.

“It’s a hot topic, and people haven’t realized this spooky thing about how much they are depending on A.I.,” said W. Daniel Hillis, a veteran artificial-intelligence researcher who founded Metaweb Technologies here last year.

Like Radar Networks, Metaweb is still not publicly describing what its service or product will be, though the company’s Web site states that Metaweb intends to “build a better infrastructure for the Web.”

“It is pretty clear that human knowledge is out there and more exposed to machines than it ever was before,” Mr. Hillis said.

Both Radar Networks and Metaweb have their roots in part in technology development done originally for the military and intelligence agencies. Early research financed by the National Security Agency, the Central Intelligence Agency and the Defense Advanced Research Projects Agency predated a pioneering call for a semantic Web made in 1999 by Tim Berners-Lee, the creator of the World Wide Web a decade earlier.

Intelligence agencies also helped underwrite the work of Doug Lenat, a computer scientist whose company, Cycorp of Austin, Tex., sells systems and services to the government and large corporations. For the last quarter-century Mr. Lenat has labored on an artificial-intelligence system named Cyc that he claimed would some day be able to answer questions posed in spoken or written language — and to reason.

Cyc was originally built by entering millions of common-sense facts that the computer system would “learn.” But in a lecture given at Google earlier this year, Mr. Lenat said, Cyc is now learning by mining the World Wide Web — a process that is part of how Web 3.0 is being built.

During his talk, he implied that Cyc is now capable of answering a sophisticated natural-language query like: “Which American city would be most vulnerable to an anthrax attack during summer?”

Separately, I.B.M. researchers say they are now routinely using a digital snapshot of the six billion documents that make up the non-pornographic World Wide Web to do survey research and answer questions for corporate customers on diverse topics, such as market research and corporate branding.

Daniel Gruhl, a staff scientist at I.B.M.’s Almaden Research Center in San Jose, Calif., said the data mining system, known as Web Fountain, has been used to determine the attitudes of young people on death for an insurance company, and was able to choose between the terms “utility computing” and “grid computing” for an I.B.M. branding effort.

“It turned out that only geeks liked the term ‘grid computing,’ ” he said.

I.B.M. has used the system to do market research for television networks on the popularity of shows by mining a popular online community site, he said. Additionally, by mining the “buzz” on college music Web sites, the researchers were able to predict songs that would hit the top of the pop charts in the next two weeks — a capability more impressive than today’s market research predictions.

There is debate over whether systems like Cyc will be the driving force behind Web 3.0 or whether intelligence will emerge in a more organic fashion, from technologies that systematically extract meaning from the existing Web. Those in the latter camp say they see early examples in services like del.icio.us and Flickr, the bookmarking and photo-sharing systems acquired by Yahoo, and Digg, a news service that relies on aggregating the opinions of readers to find stories of interest.

In Flickr, for example, users “tag” photos, making it simple to identify images in ways that have eluded scientists in the past.

“With Flickr you can find images that a computer could never find,” said Prabhakar Raghavan, head of research at Yahoo. “Something that defied us for 50 years suddenly became trivial. It wouldn’t have become trivial without the Web.”

Read Full Post »

New York Times

Technology

Start-Up Aims for Database to Automate Web Searching

Darcy Padilla for The New York Times

Danny Hillis, left, is a founder of Metaweb Technologies, and Robert Cook is the executive vice president for product development.

Published: March 9, 2007
SAN FRANCISCO, March 8 — A new company founded by a longtime technologist is setting out to create a vast public database intended to be read by computers rather than people, paving the way for a more automated Internet in which machines will routinely share information.

The company, Metaweb Technologies, is led by Danny Hillis, whose background includes a stint at Walt Disney Imagineering and who has long championed the idea of intelligent machines.

He says his latest effort, to be announced Friday, will help develop a realm frequently described as the “semantic Web” — a set of services that will give rise to software agents that automate many functions now performed manually in front of a Web browser.

The idea of a centralized database storing all of the world’s digital information is a fundamental shift away from today’s World Wide Web, which is akin to a library of linked digital documents stored separately on millions of computers, where search engines serve as the equivalent of a card catalog.

In contrast, Mr. Hillis envisions a centralized repository that is more like a digital almanac. The new system can be extended freely by those wishing to share their information widely.

On the Web, there are few rules governing how information should be organized. But in the Metaweb database, to be named Freebase, information will be structured to make it possible for software programs to discern relationships and even meaning.

For example, an entry for California’s governor, Arnold Schwarzenegger, would be entered as a topic that would include a variety of attributes or “views” describing him as an actor, athlete and politician — listing them in a highly structured way in the database.

That would make it possible for programmers and Web developers to write programs allowing Internet users to pose queries that might produce a simple, useful answer rather than a long list of documents.

Since it could offer an understanding of relationships like geographic location and occupational specialties, Freebase might be able to field a query about a child-friendly dentist within 10 miles of one’s home and yield a single result.
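
A hedged sketch of how such structured topics could support that one-result query: each topic carries typed attributes that a program can filter on directly, instead of matching keywords in documents. The schema, records, and distance field below are invented for illustration, not Freebase’s actual design:

```python
# Toy structured database: each topic carries typed attributes ("views"),
# so a program can filter on meaning rather than match keywords.
topics = [
    {"name": "Dr. Lee", "type": "dentist", "child_friendly": True,
     "miles_from_home": 4.0},
    {"name": "Dr. Shaw", "type": "dentist", "child_friendly": False,
     "miles_from_home": 2.0},
    {"name": "Dr. Cho", "type": "dentist", "child_friendly": True,
     "miles_from_home": 18.0},
]

def child_friendly_dentists(max_miles):
    """Answer the article's sample query with a single structured filter."""
    return [t["name"] for t in topics
            if t["type"] == "dentist"
            and t["child_friendly"]
            and t["miles_from_home"] <= max_miles]

print(child_friendly_dentists(10))  # ['Dr. Lee'] -- one answer, not a page list
```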

The system will also make it possible to transform the way electronic devices communicate with one another, Mr. Hillis said. An Internet-enabled remote control could reconfigure itself automatically to be compatible with a new television set by tapping into data from Freebase. Or the video recorder of the future might stop blinking and program itself without confounding its owner.

In its ambitions, Freebase has some similarities to Google — which has asserted that its mission is to organize the world’s information and make it universally accessible and useful. But its approach sets it apart.

“As wonderful as Google is, there is still much to do,” said Esther Dyson, a computer and Internet industry analyst and investor at EDventure, based in New York.

Most search engines are about algorithms and statistics without structure, while databases have been solely about structure until now, she said.

“In the middle there is something that represents things as they are,” she said. “Something that captures the relationships between things.”

That addition has long been a vision of researchers in artificial intelligence. The Freebase system will offer a set of controls that will allow both programmers and Web designers to extract information easily from the system.

“It’s like a system for building the synapses for the global brain,” said Tim O’Reilly, chief executive of O’Reilly Media, a technology publishing firm based in Sebastopol, Calif.

Mr. Hillis received his Ph.D. in computer science while studying artificial intelligence at the Massachusetts Institute of Technology.

In 1985 he founded one of the first companies focused on massively parallel computing, Thinking Machines. When the company failed commercially at the end of the cold war, he became vice president for research and development at Walt Disney Imagineering. More recently he was a founder of Applied Minds, a research and consulting firm based in Glendale, Calif. Metaweb, founded in 2005 with venture capital backing, is a spinoff of that company.

Mr. Hillis first described his idea for creating a knowledge web he called Aristotle in a paper in 2000. But he said he did not try to build the system until he had recruited two technical experts as co-founders. Robert Cook, an expert in parallel computing and database design, is Metaweb’s executive vice president for product development. John Giannandrea, formerly chief technologist at Tellme Networks and chief technologist of the Web browser group at Netscape/AOL, is the company’s chief technology officer.

“We’re trying to create the world’s database, with all of the world’s information,” Mr. Hillis said.

All of the information in Freebase will be available under a license that makes it freely shareable, Mr. Hillis said. In the future, he said, the company plans to create a business by organizing proprietary information in a similar fashion.

Contributions already added into the Freebase system include descriptive information about four million songs from Musicbrainz, a user-maintained database; details on 100,000 restaurants supplied by Chemoz; extensive information from Wikipedia; and census data and location information.

A number of private companies, including Encyclopaedia Britannica, have indicated that they are willing to add some of their existing databases to the system, Mr. Hillis said.


Read Full Post »

The Sun BabelFish Blog

Don’t panic!

Friday Mar 09, 2007

Metaweb: a semantic wiki startup

O’Reilly groks the Semantic Web in the latest article, “Freebase will prove addictive”. From his article:

But hopefully, this narrative will give you a sense of what Metaweb is reaching for: a wikipedia like system for building the semantic web. But unlike the W3C approach to the semantic web, which starts with controlled ontologies, Metaweb adopts a folksonomy approach, in which people can add new categories (much like tags), in a messy sprawl of potentially overlapping assertions.

Now that’s a very partial simplification. The Semantic Web has always been designed to be grown, though there has been a lot of misunderstanding on this issue as I reported in UFO’s seen growing on the web.

The idea of using semantic wikis to grow ontologies is an excellent idea. Seed with a few tags, nourish with plain text, add a little structure with simple ontologies; water; repeat with a little more complexity at each iteration. With love and attention and a few lullabies the Semantic Web will be born (see Search, tagging and wikis).
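
One way to picture that iteration: free-form tags get promoted into typed statements as a small ontology grows around them. A minimal sketch, with an invented vocabulary and promotion rule:

```python
# Toy "seed with tags, add structure later": free-form tags are promoted
# into subject-predicate-object statements once a simple ontology names
# the predicate. Vocabulary and data are invented for illustration.
tags = {"photo42": ["paris", "eiffel tower", "2007"]}

# Iteration 1: a tiny ontology maps some raw tags to typed predicates.
ontology = {"paris": ("located_in", "Paris"),
            "2007": ("taken_in_year", "2007")}

def promote(tagged):
    """Turn known tags into triples; keep the rest as plain tags."""
    triples, leftover = [], []
    for item, item_tags in tagged.items():
        for tag in item_tags:
            if tag in ontology:
                predicate, value = ontology[tag]
                triples.append((item, predicate, value))
            else:
                leftover.append((item, tag))
    return triples, leftover

structured, still_loose = promote(tags)
print(structured)   # [('photo42', 'located_in', 'Paris'), ...]
print(still_loose)  # [('photo42', 'eiffel tower')] -- structure comes later
```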

A little further he says:

Metaweb still has a long way to go, but it seems to me that they are pointing the way to a fascinating new chapter in the evolution of Web 2.0.

Soon O’Reilly is going to use the word Web 3.0, just you wait and see!


Comments:

Apparently Wikipedia is coming out with their own search engine to challenge Google and Yahoo! Newslink: http://fly2.ws/wikipedia_Google

Posted by louisa on March 09, 2007 at 06:55 PM CET #

And of course one should mention Nova Spivack’s Radar as the other semantic web startup to watch.

Posted by Henry Story on March 09, 2007 at 11:12 PM CET #

[Trackback] Henry Story, Danny Ayers and Shelley Powers wrote some astute criticisms of Tim O’Reilly’s write-up on freebase. Why do Tim O’Reilly, Cory Doctorow and others continue to mis-cast a means to agree on how to communicate as a centrally-controlled system?…

Posted by Semantic Wave on March 10, 2007 at 07:41 PM CET #

An article in the Economist, “Sharing what matters”, in the June 7th quarterly tech review speaks of Danny Hillis and how Freebase is based on a graph database.

Posted by Henry Story on June 23, 2007 at 06:10 AM CEST #

Note on comments:

  • Comments are moderated, so they will take a little time to appear. Currently moderation means I have to read them personally. Hopefully with OpenId deployment, this will become more automated.
  • HTML markup no longer works here, due to some decision made somewhere. Sorry about that.
  • Check your comments by using the preview button…

Read Full Post »

