
Archive for the ‘database’ Category

Google Goes Its Own Way In The Data Center

With patents on cooling baffles and fan mounts, its next big step is hydropowered electricity.



How many servers does Google (NSDQ: GOOG) have, and in how many data centers? Google’s mum on that. But counting servers reveals as much about Google’s infrastructure as tallying teeth tells you about a bear; better to worry about what the bear will eat next.

VP of engineering Douglas Merrill acknowledges Google uses a standard server setup–what he calls an “office in a box”–for provisioning IT services as it expands. It’s possible, even likely, that Google has adopted a common design for its data centers, too.

George Mason University professor Paul Strassmann suggested in a lecture last December that Google’s Linux-based infrastructure is considerably cheaper to buy and maintain than a comparable setup of Sun Microsystems or Windows servers. For IT shops that spend half their budgets just keeping machines up and patched, the implications are significant. Strassmann said IT pros know where they need to go: toward a Google-style architecture.


Some of the new servers going into Google’s data centers are probably equipped with AMD Opteron multicore processors. Google won’t confirm that, but one of the reasons AMD chips are selling so well to other companies is that they don’t throw off as much heat as older alternatives. Google engineers, who pay close attention to microprocessor efficiency and heat dissipation, must find AMD chips hard to ignore. Intel is racing to improve the performance-per-watt of its own processor line.

Google has tackled some of the tough issues of data center management with internally developed technology. The company has patented homegrown designs for a better cooling baffle and fan mount for its rack servers. The U.S. Patent and Trademark Office lists 23 patents granted to Google, with 14 patent applications in the pipeline. That doesn’t include patents obtained through acquisitions or filed outside the United States.

Though Google doesn’t build its own fans or power supplies, it cares about such components. The company requests specific high-efficiency designs from manufacturing partners, says Urs Holzle, senior VP of operations.

Google’s newest data center, under construction in The Dalles, Ore., has become a focus of media attention. An aerial photo of the data center appeared on the front page of The New York Times on June 14. Why Oregon? “We’re always looking for candidate sites to host our infrastructure–selecting a site always means balancing a number of factors, including the availability of land and power, as well as a suitable workforce,” Holzle explains in an e-mail interview. “The Dalles was one of the sites we found that met our needs, plus it’s a beautiful area to live.”

The availability of cheap power–widely cited as the main reason Google is building on a river in Oregon–isn’t an issue for most companies. Edward Koplin, a principal in the Baltimore engineering firm of Jack Dale Associates, says that among his firm’s clients, which include the Army Corps of Engineers, Citigroup, and Wells Fargo, “not one of them brings up as an issue, ‘How cheap can my power be?'” But then, their data centers aren’t growing the way Google’s are.

Google declines to discuss how much it spends on electricity, but its financial documents note that data center costs have been rising. In a June filing, Google attributes an increase in the cost of revenues to, among other things, increasing data center costs, which include depreciation, labor, energy, and bandwidth costs.

Yet, even more than the availability of power and local subsidies, beauty may explain Google’s decision to locate in Oregon. Rob Enderle, principal analyst at the Enderle Group, suggests that locating in an affordable, attractive, relatively undeveloped community represents an established strategy in high tech to attract and retain talent. “That was the old Microsoft model,” he explains, noting that Redmond, Wash., was a nice place to live without much industry at the time Microsoft got started. “So once you got there it wasn’t like you were going to go anyplace else. And they were able to keep turnover to an absolute minimum as a result.”

Electric power may be expensive, but it’s cheap compared to brainpower. “For as long as I can remember, organizational growth has been the No. 1 issue,” says Holzle. “If you need to grow quickly but don’t want to fall apart as a company, you need to focus a lot more on hiring, training, and nurturing the right culture.”

Return to the story: Google Revealed: The IT Strategy That Makes It Work
Continue to the sidebars: Google’s Brew Of Open Source And Custom Code and Profile: Google Technologist Knows Problem Solving Firsthand




    Read Full Post »

    From Logic to Ontology: The limit of “The Semantic Web”

     

     

    (Some posts are written in English and Spanish.)

    http://www.linkedin.com/answers/technology/web-development/TCH_WDD/165684-18926951 


     http://en.wikipedia.org/wiki/Undecidable_problem#Other_problems

    If you read the following posts on this blog:

    Semantic Web

    The Semantic Web

    What is the Semantic Web, Actually?

    The Metaweb: Beyond Weblogs. From the Metaweb to the Semantic Web: A Roadmap

    Semantics to the people! ontoworld

    What’s next for the Internet

    Web 3.0: Update

    How the Wikipedia 3.0: The End of Google? article reached 2 million people in 4 days!

    Google vs Web 3.0

    Google dont like Web 3.0 [sic] Why am I not surprised?

    Designing a better Web 3.0 search engine

    From semantic Web (3.0) to the WebOS (4.0)

    Search By Meaning

    A Web That Thinks Like You

    MINDING THE PLANET: THE MEANING AND FUTURE OF THE SEMANTIC WEB

    The long-promised “semantic” web is starting to take shape

    Start-Up Aims for Database to Automate Web Searching

    Metaweb: a semantic wiki startup

    http://www.freebase.com/

    The Semantic Web, Collective Intelligence and Hyperdata.

    Informal logic 

    Logical argument

    Consistency proof 

    Consistency proof and completeness: Gödel’s incompleteness theorems

    Computability theory (computer science): The halting problem

    Gödel’s incompleteness theorems: Relationship with computability

    Non-formal or Inconsistency Logic: LACAN’s LOGIC and Gödel’s incompleteness theorems

    You will see the internal relationships among them; the connecting thread is the title of this post: from logic to ontology.

    I am now writing an article about the existence of the Semantic Web. I will prove that it does not exist at all, that it cannot be built from machines such as computers, and that this does not depend on the software or hardware used.

    More precisely, the limit of the Semantic Web is set not by the machines themselves (biological systems could equally be used to pursue the goal), but by the logic used to construct it, which does not contemplate the concept of time: it is purely formal, metonymic logic that lacks metaphor. That is what Gödel’s theorems point to, the final tautology of every metonymic (mathematical) construction or language, which leads to inconsistencies.
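
    For reference, the first incompleteness theorem invoked here is standardly stated as follows (a textbook formulation in the Gödel–Rosser form, not taken from this post):

    \[
    \text{If } T \text{ is a consistent, effectively axiomatized theory containing basic arithmetic, then there is a sentence } G_T \text{ such that } T \nvdash G_T \text{ and } T \nvdash \lnot G_T .
    \]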

    This consistent logic is the complete opposite of the inconsistent logic that makes use of time, which is inherent to the human unconscious. The use of time is built on lack rather than on positive things, on denials and absences, and that is impossible to reflect in a machine, because perceiving lack requires a self-awareness that is acquired through absence.

    The problem is that we are trying to build an intelligent system to replace our way of thinking, at least in information search; but what is special about the human mind is the use of time, which is what lets human beings reach a conclusion. That is why the halting problem, the stopping of calculation, does not exist in the human mind.

    So all efforts directed toward the Semantic Web are doomed to failure a priori if the aim is to extend our human way of thinking into machines. Machines lack metaphorical speech, because they are only a mathematical construction, which will always be tautological and metonymic, and which lacks the use of time that leads to the conclusion, or “stop.”

    As a demonstration, suppose it is possible to construct the Semantic Web as a language with capabilities similar to human language, which has the use of time. If we treat that claim as a theorem, we can prove it false with a counterexample, and the counterexample is given by the particular case of the Turing machine and the halting problem.

    Since the necessary and sufficient condition of the theorem is not fulfilled, we are left with the necessary condition: if a language uses time, it lacks formal logic, the logic it uses is inconsistent, and therefore it has no halting problem.

    This is a necessary condition for the Semantic Web, but it is not sufficient; therefore no machine, whether a Turing machine, a computer, or a device as random as a black body in physics, can handle any language other than the mathematical one, and that language is forced to run into the halting problem, a consequence of Gödel’s theorem.
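
    The counterexample appealed to above can be made concrete with the standard diagonalization argument. The following minimal Python sketch (an illustration added for clarity; the names `halts` and `diagonal` are hypothetical) shows why no total halting decider can exist:

    ```python
    # Sketch of the classic halting-problem diagonalization (after Turing, 1936).
    # Assume, for contradiction, that a total decider `halts` exists.

    def halts(program, argument) -> bool:
        """Hypothetical oracle: True iff program(argument) eventually halts."""
        raise NotImplementedError("No such total decider can exist.")

    def diagonal(program):
        """Do the opposite of whatever `halts` predicts about program(program)."""
        if halts(program, program):
            while True:      # predicted to halt -> loop forever
                pass
        return "halted"      # predicted to loop -> halt immediately

    # Consider diagonal(diagonal):
    # - If halts(diagonal, diagonal) is True, then diagonal(diagonal) loops forever.
    # - If it is False, then diagonal(diagonal) halts.
    # Either way the oracle is wrong, so `halts` cannot exist.
    ```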

    From Logic to Ontology: The Limit of “The Semantic Web” (Spanish version)

    If you read the following articles on this blog:

    http://es.wikipedia.org/wiki/Web_sem%C3%A1ntica

    Wikipedia 3.0: The End of Google (Spanish translation)

    Logic

    Consistent logic and completeness: Gödel’s incompleteness theorems (Spanish)

    Logical consistency (Spanish)

    Computability theory (computer science)

    Gödel’s incompleteness theorems and computability theory: The halting problem

    Inconsistent logic and incompleteness: LACANIAN LOGICS and Gödel’s incompleteness theorems (Spanish)

    Jacques Lacan (Encyclopædia Britannica Online)

    You will notice the internal relationships among them, and the connecting thread is the title of this very post: “from logic to ontology.”

    I will prove that no such construction exists at all, that it cannot be built from machines, and that this does not depend on the hardware or software used.

    To refine the point: the limit of the Semantic Web is set not by the machines and/or biological systems that might be used, but by the fact that the logic with which one tries to build it lacks the use of time. Formal logic is purely metonymic and lacks metaphor, and that is what Gödel’s theorems point to: the final tautology of every metonymic (mathematical) construction and/or language, which leads to contradictions.

    This consistent logic is the opposite of the inconsistent logic that makes use of time, which is proper to the human unconscious; but the use of time is built on lack, not on the positive but on negations and absences, and that is impossible to reflect in a machine, because perceiving lack requires the self-awareness that is acquired through absence.

    The problem is that we intend to build an intelligent system that replaces our thinking, at least in information searches; but what is particular to human thought is the use of time, which is what allows us to conclude. That is why the halting problem, the stopping of calculation, or in other words the absence of a moment of concluding, does not exist in the human mind.

    So all efforts directed at the Semantic Web are doomed to failure a priori if the intention is to extend our human thinking into machines. Machines lack metaphorical discourse, for they are only a mathematical construction, which will always be tautological and metonymic, and which moreover lacks the use of time, which is what leads to the cut, the conclusion, or the “halt.”

    As a demonstration, a counterexample suffices: if we suppose it is possible to build the Semantic Web as a language with capabilities similar to human language, which has the use of time, then if that is a general theorem, a single counterexample brings it down, and the counterexample is given by the particular case of the Turing machine and the “halting problem.”

    Since the necessary and sufficient condition of the theorem is not fulfilled, we are left with the necessary condition: if a language has the use of time, it lacks formal logic, it uses inconsistent logic, and therefore it has no halting problem. That is a necessary condition for the Semantic Web, but not a sufficient one, and so no machine, whether a Turing machine, a computer, or a random device such as a black body in physics, can attain the use of any language other than the mathematical one, with the halting paradox, a consequence of Gödel’s theorem.


    Read Full Post »

    New York Times

    Technology

    Start-Up Aims for Database to Automate Web Searching

    Darcy Padilla for The New York Times

    Danny Hillis, left, is a founder of Metaweb Technologies and Robert Cook is the executive vice president for product development.

    Published: March 9, 2007
    SAN FRANCISCO, March 8 — A new company founded by a longtime technologist is setting out to create a vast public database intended to be read by computers rather than people, paving the way for a more automated Internet in which machines will routinely share information.

    The company, Metaweb Technologies, is led by Danny Hillis, whose background includes a stint at Walt Disney Imagineering and who has long championed the idea of intelligent machines.

    He says his latest effort, to be announced Friday, will help develop a realm frequently described as the “semantic Web” — a set of services that will give rise to software agents that automate many functions now performed manually in front of a Web browser.

    The idea of a centralized database storing all of the world’s digital information is a fundamental shift away from today’s World Wide Web, which is akin to a library of linked digital documents stored separately on millions of computers, where search engines serve as the equivalent of a card catalog.

    In contrast, Mr. Hillis envisions a centralized repository that is more like a digital almanac. The new system can be extended freely by those wishing to share their information widely.

    On the Web, there are few rules governing how information should be organized. But in the Metaweb database, to be named Freebase, information will be structured to make it possible for software programs to discern relationships and even meaning.

    For example, an entry for California’s governor, Arnold Schwarzenegger, would be entered as a topic that would include a variety of attributes or “views” describing him as an actor, athlete and politician — listing them in a highly structured way in the database.

    That would make it possible for programmers and Web developers to write programs allowing Internet users to pose queries that might produce a simple, useful answer rather than a long list of documents.

    Since it could offer an understanding of relationships like geographic location and occupational specialties, Freebase might be able to field a query about a child-friendly dentist within 10 miles of one’s home and yield a single result.
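
    The kind of structure the article describes can be pictured with a small sketch. The record and query below are purely illustrative (the `Topic` class, the field names, and the dentist entries are hypothetical, not Metaweb’s actual schema or query API); they show how typed attributes let a program return one answer instead of a list of documents:

    ```python
    # Illustrative sketch of a structured "topic" with attribute "views",
    # in the spirit of the article's description -- not Metaweb's real schema.
    from dataclasses import dataclass, field

    @dataclass
    class Topic:
        name: str
        views: dict = field(default_factory=dict)  # facet -> attributes

    governor = Topic(
        name="Arnold Schwarzenegger",
        views={
            "politician": {"office": "Governor of California"},
            "actor":      {"notable_film": "The Terminator"},
            "athlete":    {"sport": "bodybuilding"},
        },
    )

    # Hypothetical dentist topics, structured the same way:
    dentists = [
        Topic("Dr. Example", views={"dentist": {"child_friendly": True,  "miles_from_home": 6}}),
        Topic("Dr. Sample",  views={"dentist": {"child_friendly": False, "miles_from_home": 3}}),
    ]

    # Because the data is structured, a query like "child-friendly dentist
    # within 10 miles" can yield a single useful answer:
    answer = next(t.name for t in dentists
                  if t.views["dentist"]["child_friendly"]
                  and t.views["dentist"]["miles_from_home"] <= 10)
    print(answer)  # -> Dr. Example
    ```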

    The system will also make it possible to transform the way electronic devices communicate with one another, Mr. Hillis said. An Internet-enabled remote control could reconfigure itself automatically to be compatible with a new television set by tapping into data from Freebase. Or the video recorder of the future might stop blinking and program itself without confounding its owner.

    In its ambitions, Freebase has some similarities to Google — which has asserted that its mission is to organize the world’s information and make it universally accessible and useful. But its approach sets it apart.

    “As wonderful as Google is, there is still much to do,” said Esther Dyson, a computer and Internet industry analyst and investor at EDventure, based in New York.

    Most search engines are about algorithms and statistics without structure, while databases have been solely about structure until now, she said.

    “In the middle there is something that represents things as they are,” she said. “Something that captures the relationships between things.”

    That addition has long been a vision of researchers in artificial intelligence. The Freebase system will offer a set of controls that will allow both programmers and Web designers to extract information easily from the system.

    “It’s like a system for building the synapses for the global brain,” said Tim O’Reilly, chief executive of O’Reilly Media, a technology publishing firm based in Sebastopol, Calif.

    Mr. Hillis received his Ph.D. in computer science while studying artificial intelligence at the Massachusetts Institute of Technology.

    In 1985 he founded one of the first companies focused on massively parallel computing, Thinking Machines. When the company failed commercially at the end of the cold war, he became vice president for research and development at Walt Disney Imagineering. More recently he was a founder of Applied Minds, a research and consulting firm based in Glendale, Calif. Metaweb, founded in 2005 with venture capital backing, is a spinoff of that company.

    Mr. Hillis first described his idea for creating a knowledge web he called Aristotle in a paper in 2000. But he said he did not try to build the system until he had recruited two technical experts as co-founders. Robert Cook, an expert in parallel computing and database design, is Metaweb’s executive vice president for product development. John Giannandrea, formerly chief technologist at Tellme Networks and chief technologist of the Web browser group at Netscape/AOL, is the company’s chief technology officer.

    “We’re trying to create the world’s database, with all of the world’s information,” Mr. Hillis said.

    All of the information in Freebase will be available under a license that makes it freely shareable, Mr. Hillis said. In the future, he said, the company plans to create a business by organizing proprietary information in a similar fashion.

    Contributions already added into the Freebase system include descriptive information about four million songs from Musicbrainz, a user-maintained database; details on 100,000 restaurants supplied by Chemoz; extensive information from Wikipedia; and census data and location information.

    A number of private companies, including Encyclopaedia Britannica, have indicated that they are willing to add some of their existing databases to the system, Mr. Hillis said.


    Read Full Post »

