Posts Tagged ‘Money’

Tuesday, April 29, 2008


Microsoft device helps police pluck evidence from cyberscene of crime

By Benjamin J. Romano, Seattle Times technology reporter

Microsoft has developed a small plug-in device that investigators can use to quickly extract forensic data from computers that may have been used in crimes.

The COFEE, which stands for Computer Online Forensic Evidence Extractor, is a USB “thumb drive” that was quietly distributed to a handful of law-enforcement agencies last June. Microsoft General Counsel Brad Smith described its use to the 350 law-enforcement experts attending a company conference Monday.

The device contains 150 commands that can dramatically cut the time it takes to gather digital evidence, which is becoming more important in real-world crime, as well as cybercrime. It can decrypt passwords and analyze a computer’s Internet activity, as well as data stored in the computer.

It also eliminates the need to seize the computer itself, which typically involves disconnecting it from a network, turning off the power and potentially losing data. Instead, the investigator can scan for evidence on site.
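The article does not spell out what COFEE's 150 commands actually do, but the general shape of automated on-site ("live response") evidence collection can be sketched in a few lines. The Python fragment below is purely illustrative, not a description of COFEE itself; it assumes a Windows machine with the standard tasklist, netstat, and ipconfig utilities on the path, and simply runs a fixed list of volatile-data commands and saves their output for later review.

```python
import subprocess
from datetime import datetime, timezone
from pathlib import Path

# Illustrative only: built-in Windows commands that capture volatile state
# (running processes, open network connections, network configuration).
COMMANDS = {
    "processes": ["tasklist", "/v"],
    "connections": ["netstat", "-ano"],
    "network_config": ["ipconfig", "/all"],
}

def collect(output_dir: str = "evidence") -> None:
    """Run each command and write its output to a timestamped text file."""
    out = Path(output_dir)
    out.mkdir(exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    for name, cmd in COMMANDS.items():
        result = subprocess.run(cmd, capture_output=True, text=True)
        (out / f"{stamp}_{name}.txt").write_text(result.stdout)

if __name__ == "__main__":
    collect()
```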

More than 2,000 officers in 15 countries, including Poland, the Philippines, Germany, New Zealand and the United States, are using the device, which Microsoft provides free.

“These are things that we invest substantial resources in, but not from the perspective of selling to make money,” Smith said in an interview. “We’re doing this to help ensure that the Internet stays safe.”

Law-enforcement officials from agencies in 35 countries are in Redmond this week to talk about how technology can help fight crime. Microsoft held a similar event in 2006. Discussions there led to the creation of COFEE.

Smith compared the Internet of today to London and other Industrial Revolution cities in the early 1800s. As people flocked from small communities where everyone knew each other, an anonymity emerged in the cities and a rise in crime followed.

The social aspects of Web 2.0 are like “new digital cities,” Smith said. Publishers, interested in creating huge audiences to sell advertising, let people participate anonymously.

That’s allowing “criminals to infiltrate the community, become part of the conversation and persuade people to part with personal information,” Smith said.

Children are particularly at risk to anonymous predators or those with false identities. “Criminals seek to win a child’s confidence in cyberspace and meet in real space,” Smith cautioned.

Expertise and technology like COFEE are needed to investigate cybercrime, and, increasingly, real-world crimes.


“So many of our crimes today, just as our lives, involve the Internet and other digital evidence,” said Lisa Johnson, who heads the Special Assault Unit in the King County Prosecuting Attorney’s Office.

A suspect’s online activities can corroborate a crime or dispel an alibi, she said.

The 35 individual law-enforcement agencies in King County, for example, don’t have the resources to investigate the explosion of digital evidence they seize, said Johnson, who attended the conference.

“They might even choose not to seize it because they don’t know what to do with it,” she said. “… We’ve kind of equated it to asking specific law-enforcement agencies to do their own DNA analysis. You can’t possibly do that.”

Johnson said the prosecutor’s office, the Washington Attorney General’s Office and Microsoft are working on a proposal to the Legislature to fund computer forensic crime labs.

Microsoft also got credit for other public-private partnerships around law enforcement.

Jean-Michel Louboutin, Interpol’s executive director of police services, said only 10 of 50 African countries have dedicated cybercrime investigative units.

“The digital divide is no exaggeration,” he told the conference. “Even in countries with dedicated cybercrime units, expertise is often too scarce.”

He credited Microsoft for helping Interpol develop training materials and international databases used to prevent child abuse.

Smith acknowledged Microsoft’s efforts are not purely altruistic. It benefits from selling collaboration software and other technology to law-enforcement agencies, just like everybody else, he said.

Benjamin J. Romano: 206-464-2149 or bromano@seattletimes.com

Copyright © 2008 The Seattle Times Company


Enterprise 2.0 To Become a $4.6 Billion Industry By 2013

Written by Sarah Perez / April 20, 2008 9:01 PM


A new report released today by Forrester Research is predicting that enterprise spending on Web 2.0 technologies is going to increase dramatically over the next five years. This increase will include more spending on social networking tools, mashups, and RSS, with the end result being a global enterprise market of $4.6 billion by the year 2013.

This change is not without its challenges. Although there is money to be made by vendors, Web 2.0 tools are by their very nature prone to commoditization, as is much of the new social media industry, a topic we touched on briefly when discussing how content has become a commodity.

For vendors specifically, there are three main challenges to becoming successful in this new industry:

  1. I.T. shops are wary of what they perceive as “consumer-grade” technology.
  2. Ad-supported web tools generally have “free” as the starting point.
  3. Web 2.0 tools will now have to compete in a space currently dominated by legacy enterprise software investments.

What is Enterprise Web 2.0?

Most technologists segment the Web 2.0 market between “consumer” Web 2.0 technologies and “business” Web 2.0 technologies. So what does Enterprise 2.0 include then?

Well, what it doesn’t include is consumer services like Blogger, Facebook, Netvibes, and Twitter, says Forrester. These types of services are aimed at consumers and are often supported by ads, so they do not qualify as Enterprise 2.0 tools.

Instead, collaboration and productivity tools that are based on Web 2.0 concepts but designed for the enterprise worker count as Enterprise 2.0. In addition, for-pay services, like those from BEA Systems, IBM, Microsoft, Awareness, NewsGator Technologies, and Six Apart, will factor in.

Enterprise marketing tools have also expanded to include Web 2.0 technologies. For example, money spent on the creation and syndication of a Facebook app or a web site/social network widget could be considered Enterprise 2.0. However, pure ad spending dollars, including those spent on consumer Web 2.0 sites, will not count as Enterprise 2.0.

Getting Past the I.T. Gatekeeper

One of the main challenges of getting Web 2.0 into the enterprise will be getting past the gatekeepers of traditional I.T. Businesses have been showing interest in these new technologies, but, ironically, the interest comes from departments outside of I.T. Instead, it’s the marketing department, R&D, and corporate communications pushing for the adoption of more Web 2.0-like tools.

Unfortunately, as is often the case, the business owners themselves don’t have the knowledge or expertise to make technology purchasing decisions for their company. They rely on I.T. to do so – a department that currently spends 70% of its budget maintaining past investments.

Despite the mission-critical nature of I.T. in today’s business, the department is often given slim budgets, which tend to allow only for maintaining current infrastructure, not for experimenting with new, unproven technologies.

To make matters worse, I.T. tends to view Web 2.0 tools as being insecure at best, or, at worst, a security threat to the business. They also don’t trust what they perceive to be “consumer-grade” technologies, which they don’t believe have the power to scale to the size that an enterprise demands.

In addition, I.T. departments currently work with a host of legacy applications. The new tools, in order to compete with these, will have to be able to integrate with existing technology, at least for the time being, in order to be fully effective.

Finally, given the tight budgets, there is still a chance that even if a particular tool meets all the requirements to get in the door at a company, I.T. or other personnel using the service may fall back on the free version if the price of the “enterprise” edition climbs too high. They may also look for a free, open-source alternative.

(Chart: Enterprise 2.0 adoption)

How Web 2.0 Will Reach $4.6 Billion

All that being said, the Web 2.0 market, small as it is now, is in fact growing. In 2008, firms with 1,000 employees or more will spend $764 million on Web 2.0 tools and technologies. Over the next five years, that expenditure will grow at a compound annual rate of 43%.
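As a quick sanity check on those figures (a back-of-the-envelope calculation, not part of Forrester’s report), compounding the 2008 base at 43% for five years does land close to the headline number:

```python
base_2008_millions = 764   # 2008 spending by firms with 1,000+ employees
cagr = 0.43                # compound annual growth rate cited by Forrester
years = 5                  # 2008 -> 2013

projection = base_2008_millions * (1 + cagr) ** years
print(f"Projected 2013 spending: ${projection / 1000:.1f} billion")
# prints roughly $4.6 billion, matching the report's headline figure
```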

The top spending category will be social networking tools. In 2008, for example, companies will spend $258 million on tools like those from Awareness, Communispace, and Jive Software. After social networking, the next-largest category is RSS, followed by blogs and wikis, and then mashups.

The vendors expected to do best in this new marketplace will be those that bundle their offerings, delivering a complete package of tools to the businesses they serve.

However, newer, “pure” Web 2.0 companies hoping to capitalize on this trend will still have to fight for a foothold against traditional I.T. software, specifically the likes of Microsoft and IBM. Many I.T. shops will choose to stick with their existing software from these large, well-known vendors, especially now that both are integrating Web 2.0 into their offerings.

Microsoft’s SharePoint, for example, now includes wikis, blogs, and RSS in its collaboration suite. IBM offers social networking and mashup tools via its Lotus Connections and Lotus Mashups products, and SAP Business Suite includes social networking and widgets.

What this means is that much of the Web 2.0 tool kit will simply “fade into the fabric of enterprise collaboration suites,” says Forrester. By 2013, few buyers will seek out and purchase Web 2.0 tools specifically. Web 2.0 will become a feature, not a product.

(Chart: Enterprise 2.0 spending)

Other Trends

Other trends will also have an impact on this new marketplace, including the following:

External Spending Will Beat Internal Spending: External Web 2.0 expenditure will surpass internal expenditure in 2009 and, by 2013, will dwarf internal spending by a billion dollars. Internally, companies will spend money on social networking, blogs, wikis, and RSS; externally, the spending patterns will be very similar. Social networking tools that support customer interaction, for example by letting customers create profiles, join discussion boards, and read company blogs, will receive more investment and development over the next five years.

Europe & Asia Pacific Markets Grow: Europe and Asia Pacific will become more substantial markets in 2009. Fewer European companies have embraced Web 2.0 tools, leaving much room for growth. Asia Pacific will also grow in 2009.

Web 2.0 Graduates from “Kids’ Stuff”: Right now, people between the ages of 12 and 17 are the most avid consumers of social computing technology, with one-third of them acting as content creators; only 7% of those aged 51 to 61 do the same. This, too, is going to change over the next few years: by 2011, Forrester believes, users of Web 2.0 tools will mirror users of the web at large.

Retirement of Baby Boomers: As with many things, a true shift can only occur once the older generation moves from executive status into retirement. Over the next three years, millions of baby boomers will retire, and the younger workers brought in to fill the void will not only want but will expect tools in the office similar to those they use at home in their personal lives.

What It All Means

For vendors wanting to play in the Enterprise 2.0 space, there are a few key takeaways from this research. For one, they can help ensure their success in this niche by selling across deployment types; that is, they should plan to grow beyond selling only to the internal market or only to the external one.

Another option is to segment the enterprise marketplace by industry and then by company size. Some industries are more customer-focused than others when it comes to the external market, so developing customized solutions for a particular industry could be a key to success. For internal tools, focusing on enterprise-grade features such as integration and security will help sell products to larger customers, while other levels of service can be designed specifically for SMBs, featuring simple, self-provisioning products that help cut costs.

Finally, vendors looking to grow should consider making a name for themselves in the European or Asia Pacific markets, where investment in Web 2.0/Enterprise 2.0 is expected to grow the fastest.

However, the most valuable aspect of this change for vendors is the knowledge they obtain about how to run a successful SaaS business – something that will help propel them into the next decade and beyond and, ultimately, will provide more value than any single Web 2.0 offering alone ever will.


The Grid: The Next-Gen Internet?

Douglas Heingartner | 03.08.01 | 2:00 AM

AMSTERDAM, Netherlands — The Matrix may be the future of virtual reality, but researchers say the Grid is the future of collaborative problem-solving.

More than 400 scientists gathered at the Global Grid Forum this week to discuss what may be the Internet’s next evolutionary step.

Though distributed computing evokes associations with populist initiatives like SETI@home, where individuals donate their spare computing power to worthy projects, the Grid will link PCs to each other and the scientific community like never before.

 

The Grid will not only enable sharing of documents and MP3 files, but also connect PCs with sensors, telescopes and tidal-wave simulators.
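For readers who have not run into the model that SETI@home popularized and the Grid generalizes, the core idea is simply to split one large job into independent work units that many machines can process in parallel. Below is a minimal, purely local sketch in Python; no real grid middleware is assumed, and a grid scheduler would distribute the chunks across networked machines rather than local processes.

```python
from multiprocessing import Pool

def work_unit(chunk: range) -> int:
    """Stand-in for a CPU-heavy task, e.g. scanning one slice of telescope data."""
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    # Split one big job into independent chunks, as a grid scheduler would,
    # then farm them out to whatever workers are available.
    chunks = [range(i, i + 1_000_000) for i in range(0, 8_000_000, 1_000_000)]
    with Pool() as pool:
        partial_results = pool.map(work_unit, chunks)
    print("combined result:", sum(partial_results))
```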

IBM’s Brian Carpenter suggested “computing will become a utility just like any other utility.”

Carpenter said, “The Grid will open up … storage and transaction power in the same way that the Web opened up content.” And just as the Internet connects various public and private networks, Cisco Systems’ Bob Aiken said, “you’re going to have multiple grids, multiple sets of middleware that people are going to choose from to satisfy their applications.”

As conference moderator Walter Hoogland suggested, “The World Wide Web gave us a taste, but the Grid gives a vision of an ICT (Information and Communication Technology)-enabled world.”

Though the task of standardizing everything from system templates to the definitions of various resources is a mammoth one, the GGF can look to the early days of the Web for guidance. The Grid that organizers are building is a new kind of Internet, only this time its creators have a better idea of where the bottlenecks and teething problems will be.

The general consensus at the event was that although technical issues abound, the thorniest issues will involve social and political dimensions, for example how to facilitate sharing between strangers where there is no history of trust.

Amsterdam seemed a logical choice for the first Global Grid Forum because not only is it the world’s most densely cabled city, it was also home to the Internet Engineering Task Force’s first international gathering in 1993. The IETF has served as a model for many of the GGF’s activities: protocols, policy issues, and exchanging experiences.

The Grid Forum, a U.S.-based organization, combined with eGrid (the European Grid Forum) and Asian counterparts to create the Global Grid Forum (GGF) in November 2000.

The Global Grid Forum organizers said grid communities in the United States and Europe will now run in synch.

The Grid evolved from the early desire to connect supercomputers into “metacomputers” that could be remotely controlled. The word “grid” was borrowed from the electricity grid, to imply that any compatible device could be plugged in anywhere on the Grid and be guaranteed a certain level of resources, regardless of where those resources might come from.

Scientific communities at the conference discussed what the compatibility standards should be, and how extensive the protocols need to be.

As the number of connected devices runs from the thousands into the millions, the policy issues become exponentially more complex. So far, only draft consensus has been reached on most topics, but participants say these are the early days.

As with the Web, the initial impetus for a grid came from the scientific community, specifically high-energy physics, which needed extra resources to manage and analyze the huge amounts of data being collected.

The most nettlesome issues for industry are security and accounting. But unlike the Web, which had security measures tacked on as an afterthought, the Grid is being designed from the ground up as a secure system.


Conference participants debated which of the services provided through the Grid (known in distributed-computing circles as resource units) will be charged for, and how administrative authority will be centralized.

Corporations have been slow to cotton to this new technology’s potential, but the suits are in evidence at this year’s Grid event. As GGF chairman Charlie Catlett noted, “This is the first time I’ve seen this many ties at a Grid forum.”

In addition to IBM, firms such as Boeing, Philips and Unilever are already taking baby steps toward the Grid.

Though commercial needs tend to be more transaction-focused than those of scientific pursuits, most of the technical requirements are common. Furthermore, both science and industry participants say they require a level of reliability that’s not offered by current peer-to-peer initiatives: Downloading from Napster, for example, can take seconds or minutes, or might not work at all.

Garnering commercial interest is critical to the Grid’s future. Cisco’s Aiken explained that “if grids are really going to take off and become the major impetus for the next level of evolution in the Internet, we have to have something that allows (them) to easily transfer to industry.”

Other potential Grid applications include a virtual observatory and doctors running simulations of blood flow. While some of these applications have existed for years, the Grid will make them routine rather than exceptional.

The California Institute of Technology’s Paul Messina said that by sharing computing resources, “you get more science from the same investment.”

Ian Foster of the University of Chicago said that Web precursor Arpanet was initially intended to be a distributed computing network that would share CPU-intensive tasks but instead wound up giving birth to e-mail and FTP.

The Grid may give birth to a global file-swapping network or a members-only citadel for moneyed institutions. But just as no one ten years ago would have conceived of Napster — not to mention AmIHotOrNot.com — the future of the Grid is unknown.

An associated DataGrid conference continues until Friday, focusing on a project in which resources from Pan-European research institutions will analyze data generated by a new particle collider being built at Swiss particle-physics lab CERN.


Tuesday, March 18, 2008

Is venture capital’s love affair with Web 2.0 over? | Tech news blog – CNET News.com

“Silicon Valley remains the hotbed of Web 2.0 activity, but the hipness of start-ups with goofy names is starting to cool in the face of economic reality.
Dow Jones VentureSource on Tuesday released numbers of venture capital activity in Web 2.0 companies and declared that the ‘investment boom may be peaking.’
Venture capitalists put $1.34 billion into 178 deals in 2007, an 88 percent jump over 2006. But once you strip out the $300 million that Facebook raised from Microsoft and others, the numbers don’t look as bullish.
The pace of deal flow, or the number of fundings, has slowed, particularly in the San Francisco Bay Area. Deal flow in 2007 went up 25 percent to 178 deals, but nearly all of those occurred outside the Bay Area, where the number of deals slipped downward.
‘Web 2.0 deals in the Bay Area actually dropped from 74 deals in 2006 to 69 last year and investments were down 3 percent from the $431 million invested in 2006. It’s clear that the real growth in the Web 2.0 sector is happening outside of the Bay Area,’ Jessica Canning, director of global research at Dow Jones VentureSource, said in a statement.”


From Logic to Ontology: The limit of “The Semantic Web”

 

 

(Some posts are written in both English and Spanish.)

http://www.linkedin.com/answers/technology/web-development/TCH_WDD/165684-18926951 

From Logic to Ontology: The limit of “The Semantic Web” 

 http://en.wikipedia.org/wiki/Undecidable_problem#Other_problems

If you read the following posts on this blog:

Semantic Web

The Semantic Web

What is the Semantic Web, Actually?

The Metaweb: Beyond Weblogs. From the Metaweb to the Semantic Web: A Roadmap

Semantics to the people! ontoworld

What’s next for the Internet

Web 3.0: Update

How the Wikipedia 3.0: The End of Google? article reached 2 million people in 4 days!

Google vs Web 3.0

Google dont like Web 3.0 [sic] Why am I not surprised?

Designing a better Web 3.0 search engine

From semantic Web (3.0) to the WebOS (4.0)

Search By Meaning

A Web That Thinks Like You

MINDING THE PLANET: THE MEANING AND FUTURE OF THE SEMANTIC WEB

The long-promised “semantic” web is starting to take shape

Start-Up Aims for Database to Automate Web Searching

Metaweb: a semantic wiki startup

http://www.freebase.com/

The Semantic Web, Collective Intelligence and Hyperdata.

Informal logic 

Logical argument

Consistency proof 

Consistency proof and completeness: Gödel’s incompleteness theorems

Computability theory (computer science): The halting problem

Gödel’s incompleteness theorems: Relationship with computability

Non-formal or Inconsistency Logic: LACAN’s LOGIC. Gödel’s incompleteness theorems,

You will see the internal relationship that links them; the connecting thread is the title of this post: from logic to ontology.

I am now writing an article about the existence of the semantic web.

I will argue that it does not exist at all and that it cannot be built from machines such as computers. This does not depend on the software or hardware used to build it: it cannot be done at all.

More precisely, the limit of the semantic web is set not by the machines themselves (even biological systems could be used to pursue the goal), but by the logic used to construct it, which makes no use of time: it is purely formal, metonymic logic that lacks metaphor. That is what Gödel’s theorems point to, the final tautology of every metonymic (mathematical) construction or language, which leads to inconsistencies.

This consistent logic is the opposite of the inconsistent logic that makes use of time, which is inherent to the human unconscious. The use of time, however, is built on lack: not on positive things but on denials and absences, and that is impossible to reflect in a machine, because perceiving lack requires the self-awareness that is acquired through absence.

The problem is that we are trying to build an intelligent system to replace our way of thinking, at least in information search; but what is particular to the human mind is the use of time, which is what allows it to reach a conclusion. That is why the halting problem, a calculation that never stops, does not exist in the human mind.

So all efforts directed toward the semantic web are doomed to failure a priori if the aim is to extend our human way of thinking into machines. Machines lack metaphorical speech, because they are only a mathematical construction, which will always be tautological and metonymic, and because they lack the use of time, which is what leads to a conclusion, or “stop.”

As a demonstration, a counterexample suffices: suppose it is possible to construct the semantic web as a language with capabilities similar to human language, which has the use of time. Treated as a general theorem, this claim falls to a single counterexample, and that counterexample is given by the particular case of the Turing machine and the halting problem.
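The halting-problem counterexample appealed to here is Turing’s classic diagonal argument: if a total, always-correct procedure halts(program, argument) existed, one could write a program that contradicts it on its own source. A minimal sketch in Python (halts is hypothetical; the point is precisely that no correct implementation of it can exist):

```python
def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) eventually halts.
    Turing's argument shows no total, correct version of this can be written."""
    raise NotImplementedError

def contrary(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:
            pass        # loop forever
    return              # halt immediately

# Feeding `contrary` to itself yields the contradiction:
# if halts(contrary, contrary) is True, then contrary(contrary) loops forever;
# if it is False, then contrary(contrary) halts. Either way the oracle is wrong.
```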

Since the necessary and sufficient condition of the theorem is not fulfilled, we are left with the necessary condition: if a language makes use of time, it lacks formal logic, it uses inconsistent logic, and therefore it has no halting problem.

This is a necessary condition for the semantic web, but it is not sufficient, and therefore no machine, whether a Turing machine, a computer, or a device as random as a black body in physics, can handle any language other than the language of mathematics, which is thereby bound to the halting problem, a consequence of Gödel’s theorem.

From logic to ontology: The limit of the “semantic web” (Spanish version)

If you read the following articles on this blog (in Spanish):

http://es.wikipedia.org/wiki/Web_sem%C3%A1ntica  

Wikipedia 3.0: El fin de Google (traducción Spanish)

Lógica 

Lógica Consistente y completitud: Teoremas de la incompletitud de Gödel (Spanish)

Consistencia lógica (Spanish)

Teoría de la computabilidad. Ciencia de la computación.

Teoremas de la incompletitud de Gödel y teoría de la computación: Problema de la parada 

Lógica inconsistente e incompletitud: LOGICAS LACANIANAS y Teoremas de la incompletitud de Gödel (Spanish)  

Jacques Lacan (Encyclopædia Britannica Online)

You will see the internal relationships among them; the connecting thread is the title of this very post: “from logic to ontology.”




New York Times


Start-Up Aims for Database to Automate Web Searching

(Photo: Danny Hillis, left, a founder of Metaweb Technologies, with Robert Cook, executive vice president for product development. Darcy Padilla for The New York Times)

Published: March 9, 2007
SAN FRANCISCO, March 8 — A new company founded by a longtime technologist is setting out to create a vast public database intended to be read by computers rather than people, paving the way for a more automated Internet in which machines will routinely share information.

The company, Metaweb Technologies, is led by Danny Hillis, whose background includes a stint at Walt Disney Imagineering and who has long championed the idea of intelligent machines.

He says his latest effort, to be announced Friday, will help develop a realm frequently described as the “semantic Web” — a set of services that will give rise to software agents that automate many functions now performed manually in front of a Web browser.

The idea of a centralized database storing all of the world’s digital information is a fundamental shift away from today’s World Wide Web, which is akin to a library of linked digital documents stored separately on millions of computers where search engines serve as the equivalent of a card catalog.

In contrast, Mr. Hillis envisions a centralized repository that is more like a digital almanac. The new system can be extended freely by those wishing to share their information widely.

On the Web, there are few rules governing how information should be organized. But in the Metaweb database, to be named Freebase, information will be structured to make it possible for software programs to discern relationships and even meaning.

For example, California’s governor, Arnold Schwarzenegger, would be entered as a topic that includes a variety of attributes, or “views,” describing him as an actor, athlete and politician — all listed in a highly structured way in the database.

That would make it possible for programmers and Web developers to write programs allowing Internet users to pose queries that might produce a simple, useful answer rather than a long list of documents.

Since it could offer an understanding of relationships like geographic location and occupational specialties, Freebase might be able to field a query about a child-friendly dentist within 10 miles of one’s home and yield a single result.
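The article only gestures at what “structured” means here, but the idea can be shown with a toy example. The schema and query function below are hypothetical (they are not Freebase’s actual data model or API); they simply illustrate how storing typed attributes for each topic lets a program answer the dentist question with a single result instead of a list of documents.

```python
# Hypothetical, simplified stand-in for a structured topic database.
topics = [
    {
        "name": "Dr. Ann Example",            # fictional entry for illustration
        "types": ["dentist"],
        "child_friendly": True,
        "miles_from_home": 6.5,
    },
    {
        "name": "Arnold Schwarzenegger",
        "types": ["actor", "athlete", "politician"],  # the article's example "views"
    },
]

def child_friendly_dentists_within(radius_miles: float) -> list[str]:
    """Answer the article's sample query directly from the structured data."""
    return [
        t["name"]
        for t in topics
        if "dentist" in t["types"]
        and t.get("child_friendly")
        and t.get("miles_from_home", float("inf")) <= radius_miles
    ]

print(child_friendly_dentists_within(10))   # -> ['Dr. Ann Example']
```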

The system will also make it possible to transform the way electronic devices communicate with one another, Mr. Hillis said. An Internet-enabled remote control could reconfigure itself automatically to be compatible with a new television set by tapping into data from Freebase. Or the video recorder of the future might stop blinking and program itself without confounding its owner.

In its ambitions, Freebase has some similarities to Google — which has asserted that its mission is to organize the world’s information and make it universally accessible and useful. But its approach sets it apart.

“As wonderful as Google is, there is still much to do,” said Esther Dyson, a computer and Internet industry analyst and investor at EDventure, based in New York.

Most search engines are about algorithms and statistics without structure, while databases have been solely about structure until now, she said.

“In the middle there is something that represents things as they are,” she said. “Something that captures the relationships between things.”

That addition has long been a vision of researchers in artificial intelligence. The Freebase system will offer a set of controls that will allow both programmers and Web designers to extract information easily from the system.

“It’s like a system for building the synapses for the global brain,” said Tim O’Reilly, chief executive of O’Reilly Media, a technology publishing firm based in Sebastopol, Calif.

Mr. Hillis received his Ph.D. in computer science while studying artificial intelligence at the Massachusetts Institute of Technology.

In 1985 he founded one of the first companies focused on massively parallel computing, Thinking Machines. When the company failed commercially at the end of the cold war, he became vice president for research and development at Walt Disney Imagineering. More recently he was a founder of Applied Minds, a research and consulting firm based in Glendale, Calif. Metaweb, founded in 2005 with venture capital backing, is a spinoff of that company.

Mr. Hillis first described his idea for creating a knowledge web he called Aristotle in a paper in 2000. But he said he did not try to build the system until he had recruited two technical experts as co-founders. Robert Cook, an expert in parallel computing and database design, is Metaweb’s executive vice president for product development. John Giannandrea, formerly chief technologist at Tellme Networks and chief technologist of the Web browser group at Netscape/AOL, is the company’s chief technology officer.

“We’re trying to create the world’s database, with all of the world’s information,” Mr. Hillis said.

All of the information in Freebase will be available under a license that makes it freely shareable, Mr. Hillis said. In the future, he said, the company plans to create a business by organizing proprietary information in a similar fashion.

Contributions already added into the Freebase system include descriptive information about four million songs from Musicbrainz, a user-maintained database; details on 100,000 restaurants supplied by Chemoz; extensive information from Wikipedia; and census data and location information.

A number of private companies, including Encyclopaedia Britannica, have indicated that they are willing to add some of their existing databases to the system, Mr. Hillis said.



