Archive for May 3rd, 2008

Google AdWords phishing threat
May 2, 2008
Web User

Google AdWords customers are being warned of a phishing scam currently in circulation.

Researchers at security firm Trend Micro have discovered the trick which seeks to get AdWords users to hand over credit card details.
Google AdWords is a service used by advertisers to get their products seen online and is employed by companies of all sizes, large and small.
Phishing gangs send AdWords customers an email telling them that their last payment has not been successful and asking them to update their payment information for Google AdWords.
The link displayed in the email body looks legitimate, but the site it actually takes you to is a compromised one, where the submitted details are stolen.
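This mismatch between the link text shown and the real destination is something you can check for mechanically. Below is a minimal Python sketch of the idea (my own illustration, not Trend Micro’s tooling): it flags any HTML anchor whose visible text looks like a URL on a different host than the href it really points to.

    from html.parser import HTMLParser
    from urllib.parse import urlparse

    class LinkMismatchFinder(HTMLParser):
        """Flag <a> tags whose visible text looks like a URL on a
        different host than the real href -- a classic phishing tell."""

        def __init__(self):
            super().__init__()
            self._href = None
            self._text = []
            self.suspicious = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href = dict(attrs).get("href", "")
                self._text = []

        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._href is not None:
                shown = "".join(self._text).strip()
                if shown.startswith(("http://", "https://", "www.")):
                    shown_url = shown if "://" in shown else "http://" + shown
                    shown_host = urlparse(shown_url).hostname
                    real_host = urlparse(self._href).hostname
                    if shown_host and real_host and shown_host != real_host:
                        self.suspicious.append((shown, self._href))
                self._href = None

    # The visible text claims to be Google; the href goes elsewhere.
    finder = LinkMismatchFinder()
    finder.feed('<a href="http://compromised.example.net/login">'
                'https://adwords.google.com/select</a>')
    print(finder.suspicious)
    # [('https://adwords.google.com/select', 'http://compromised.example.net/login')]

Real mail filters weigh many more signals than this, but this one heuristic is enough to catch the exact trick described above.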
Rik Ferguson at Trend Micro said: “In many ways Google can be seen as a victim of its own success, as its market share has increased along with the variety of products and services it offers, so its value to the cybercriminal as a platform to exploit has grown alongside it.”
www.trendmicro.com


China ‘plans to spy’ on Olympics
May 2, 2008
Web User

The Chinese government has been accused of ordering US-owned hotels to install web filters that can spy on international visitors ahead of this summer’s Olympics.

Republican Senator Sam Brownback made the charge at a news conference in Washington.
Along with several other lawmakers, he denounced China’s human rights record and urged President Bush not to attend the Olympic opening ceremonies in Beijing.
The Senator said he had seen memos received by at least two American-owned hotels in China requesting them to install the contentious filters. He declined to reveal his sources.
The filters would allow third-party monitoring of websites being visited by hotel guests and also restrict information coming in and out of China, according to Senator Brownback.
“This is wrong, it’s against international conventions, it’s certainly against the Olympic spirit,” Brownback said. “The Chinese government should remove that request and that order.”
China has repeatedly blocked access to websites including Yahoo, YouTube, the BBC and Google News in the past.
Another Republican lawmaker, Chris Smith, compared the Beijing Olympics to the 1936 games in Nazi Germany.
“When Berlin happened, a lot of people didn’t know what the Nazis were all about. But we’ve had year after year of credible reporting of [the] Chinese government’s human rights abuses,” Smith said.

http://en.beijing2008.cn


The IBM-Google connection

May 1, 2008 1:55 PM PDT

LOS ANGELES–Google Chief Executive Eric Schmidt gave a speech and chatted with IBM’s CEO Sam Palmisano onstage Thursday at IBM’s Business Partner Leadership Conference here. The two talked up their relationship, which primarily involves a joint research project. In October, Google and IBM announced a cloud computing initiative, based on Google’s expertise in distributed, parallel computing and IBM’s industrial enterprise management technologies, for public use by universities.

IBM is taking some of the lessons from the project and plans to operate a cloud that will allow partners to house their Web-based applications and sell them to customers, Palmisano said. “It is the first time we have taken something from the consumer arena and applied it to the enterprise,” he said.

[Photo: Google CEO Eric Schmidt joins hands with IBM CEO Sam Palmisano. Credit: Dan Farber/CNET News.com]

Schmidt said that over time there won’t be much differentiation between consumer and enterprise architectures. The major difference is that enterprise customers will pay for software and services, with required security and other features, and consumers won’t.

Schmidt gave IBM lots of credit for pioneering many of the technologies that underlie today’s computing architectures. He noted that IBM, which has about 87 years on Google, has figured out that the underlying platform is a server and Web services.

“Cloud computing is the story of our lifetime,” Schmidt said. “Eventually all devices will be on the network.” Both IBM and Google, and a host of competitors, have the same idea, which was actually first promoted by Sun with its “the network is the computer” slogan. Google figured out how to monetize the fruits of the pages its massively parallel servers manage.

IBM wants to provide the infrastructure and support services to the planet, and Google wants to provide the world’s information, and some applications, on its platform. “The two companies are great and have lots of innovation in their gene pool,” Palmisano said. “There isn’t a lot of overlap in the strategies.” Both are committed to open standards and an open Internet, and they are both going in the same direction, he added.

Google’s YouTube captures 10 hours of video every 60 seconds, and IBM might like that business if it could figure out how to make money at it. But eventually, IBM, Microsoft, Sun, Google, and other big players will look more similar in their technical architectures and business models.

Google and IBM have more in common than a shared view of the world and an academic research project. It turns out that Google outsources its accounting to IBM and that Schmidt considers IBM’s sales organization important to Google’s enterprise software efforts.

As more companies look for Web-based tools, mashups, and standard applications, such as word processors, Google stands to benefit. “IBM is one of the key planks of our strategy–otherwise we couldn’t reach enterprise customers,” Schmidt said.

While IBM isn’t selling directly for Google in the enterprise, IBM’s software division and business partners are integrating Google applications and widgets into custom software solutions based on IBM’s development framework. The “business context” is the secret of the Google and IBM collaboration, Schmidt said. Embedding Google Gadgets in business applications, that can work on any device, is a common theme for both Google and IBM.

Currently, Salesforce.com is selling Google Apps as an integrated part of its platform. It’s not far-fetched to think that Google would seek out IBM’s help with its business partners to spread the Google word in the enterprise.



May 2, 2008 7:25 PM PDT

The Silicon Valley triangle: Google, Yahoo, and Microsoft

It’s Friday night and still no word from the Microsoft or Yahoo bunkers. The headlines for today tell the story (see Techmeme).

The Wall Street Journal, which appears to be a conduit for the negotiations, has a story, “Microsoft, Yahoo Talks Intensify In Push to Reach a Friendly Deal,” and another one, “Yahoo-Google Pact May Be Close.” It doesn’t seem that a Yahoo-Google mating on advertising would lead to a friendly Microsoft-Yahoo discussion this weekend. It’s an interesting game of chicken, with many issues, such as regulatory approval, up in the air for any permutation of a deal within the triangle.

It’s clear that Yahoo and Google are trying to check Microsoft by hooking up, but the move would only be strategic for Yahoo if Microsoft ends up paying a higher price, which means motivating Steve Ballmer to come up with more cash.

Of course, Ballmer has other options. He could take the hostile takeover route, or walk away for now and perhaps come back later if Yahoo goes into freefall, even with Google filling its coffers.

Whatever the outcome, it doesn’t alter the reality that Microsoft senior management strongly believes that it has to do something dramatic to compete with Google. The fact that Google has inserted itself into the process has to be galling to Microsoft, which could lead to the hostile route.

For a reminder, here is the cast of characters and social graph.

The Cognitive Age

Op-Ed Columnist


Published: May 2, 2008
If you go into a good library, you will find thousands of books on globalization. Some will laud it. Some will warn about its dangers. But they’ll agree that globalization is the chief process driving our age. Our lives are being transformed by the increasing movement of goods, people and capital across borders.

The globalization paradigm has led, in the political arena, to a certain historical narrative: There were once nation-states like the U.S. and the European powers, whose economies could be secured within borders. But now capital flows freely. Technology has leveled the playing field. Competition is global and fierce.

New dynamos like India and China threaten American dominance thanks to their cheap labor and manipulated currencies. Now, everything is made abroad. American manufacturing is in decline. The rest of the economy is threatened.

Hillary Clinton summarized the narrative this week: “They came for the steel companies and nobody said anything. They came for the auto companies and nobody said anything. They came for the office companies, people who did white-collar service jobs, and no one said anything. And they came for the professional jobs that could be outsourced, and nobody said anything.”

The globalization paradigm has turned out to be very convenient for politicians. It allows them to blame foreigners for economic woes. It allows them to pretend that by rewriting trade deals, they can assuage economic anxiety. It allows them to treat economic and social change as a great mercantilist competition, with various teams competing for global supremacy, and with politicians starring as the commanding generals.

But there’s a problem with the way the globalization paradigm has evolved. It doesn’t really explain most of what is happening in the world.

Globalization is real and important. It’s just not the central force driving economic change. Some Americans have seen their jobs shipped overseas, but global competition has accounted for a small share of job creation and destruction over the past few decades. Capital does indeed flow around the world. But as Pankaj Ghemawat of the Harvard Business School has observed, 90 percent of fixed investment around the world is domestic. Companies open plants overseas, but that’s mainly so their production facilities can be close to local markets.

Nor is the globalization paradigm even accurate when applied to manufacturing. Manufacturing has not simply fled to Asia: U.S. manufacturing output is up over recent decades. As Thomas Duesterberg of Manufacturers Alliance/MAPI, a research firm, has pointed out, the U.S.’s share of global manufacturing output has actually increased slightly since 1980.

The chief force reshaping manufacturing is technological change (hastened by competition with other companies in Canada, Germany or down the street). Thanks to innovation, manufacturing productivity has doubled over two decades. Employers now require fewer but more highly skilled workers. Technological change affects China just as it does America. William Overholt of the RAND Corporation has noted that between 1994 and 2004 the Chinese shed 25 million manufacturing jobs, 10 times more than the U.S.

The central process driving this is not globalization. It’s the skills revolution. We’re moving into a more demanding cognitive age. In order to thrive, people are compelled to become better at absorbing, processing and combining information. This is happening in localized and globalized sectors, and it would be happening even if you tore up every free trade deal ever inked.

The globalization paradigm emphasizes the fact that information can now travel 15,000 miles in an instant. But the most important part of information’s journey is the last few inches — the space between a person’s eyes or ears and the various regions of the brain. Does the individual have the capacity to understand the information? Does he or she have the training to exploit it? Are there cultural assumptions that distort the way it is perceived?

The globalization paradigm leads people to see economic development as a form of foreign policy, as a grand competition between nations and civilizations. These abstractions, called “the Chinese” or “the Indians,” are doing this or that. But the cognitive age paradigm emphasizes psychology, culture and pedagogy — the specific processes that foster learning. It emphasizes that different societies are being stressed in similar ways by increased demands on human capital. If you understand that you are living at the beginning of a cognitive age, you’re focusing on the real source of prosperity and understand that your anxiety is not being caused by a foreigner.

It’s not that globalization and the skills revolution are contradictory processes. But which paradigm you embrace determines which facts and remedies you emphasize. Politicians, especially Democratic ones, have fallen in love with the globalization paradigm. It’s time to move beyond it.


Pursuing the Next Level of Artificial Intelligence

[Photo: Daphne Koller, whose award-winning work in artificial intelligence has had commercial impact. Credit: Jim Wilson/The New York Times]

Published: May 3, 2008

PALO ALTO, Calif. — Like a good gambler, Daphne Koller, a researcher at Stanford whose work has led to advances in artificial intelligence, sees the world as a web of probabilities.

There is, however, nothing uncertain about her impact.

A mathematical theoretician, she has made contributions in areas like robotics and biology. Her biggest accomplishment — and at age 39, she is expected to make more — is creating a set of computational tools for artificial intelligence that can be used by scientists and engineers to do things like predict traffic jams, improve machine vision and understand the way cancer spreads.

Ms. Koller’s work, building on an 18th-century theorem about probability, has already had an important commercial impact, and her colleagues say that will grow in the coming decade. Her techniques have been used to improve computer vision systems and in understanding natural language, and in the future they are expected to lead to an improved generation of Web search.

“She’s on the bleeding edge of the leading edge,” said Gary Bradski, a machine vision researcher at Willow Garage, a robotics start-up firm in Menlo Park, Calif.

Ms. Koller was honored last week with a new computer sciences award sponsored by the Association for Computing Machinery and the Infosys Foundation, the philanthropic arm of the Indian computer services firm Infosys.

The award to Ms. Koller, with a prize of $150,000, is viewed by scientists and industry executives as validating her research, which has helped transform artificial intelligence from science fiction and speculation into an engineering discipline that is creating an array of intelligent machines and systems. It is not the first such recognition; in 2004, Ms. Koller received a $500,000 MacArthur Fellowship.

Ms. Koller is part of a revival of interest in artificial intelligence. After three decades of disappointments, artificial intelligence researchers are making progress. Recent developments have made possible spam filters, Microsoft’s new ClearFlow traffic maps and the driverless robotic cars that Stanford teams have built for competitions sponsored by the Defense Advanced Research Projects Agency.

Since arriving at Stanford as a professor in 1995, Ms. Koller has led a group of researchers who have reinvented the discipline of artificial intelligence. Pioneered during the 1960s, the field was originally dominated by efforts to build reasoning systems from logic and rules. A decade before her arrival, Judea Pearl, a computer scientist at the University of California, Los Angeles, had advanced statistical techniques that relied on repeated measurements of real-world phenomena.

Called the Bayesian approach, it centers on a formula for updating the probabilities of events based on repeated observations. The Bayes rule, named for the 18th-century mathematician Thomas Bayes, describes how to transform a current assumption about an event into a revised, more accurate assumption after observing further evidence.
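To make the rule concrete, here is a toy Bayesian update in Python. The numbers are invented purely for illustration, but the arithmetic is the same kind that drives the spam filters mentioned above.

    # Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
    # Toy example: how likely is a message to be spam once we observe
    # that it contains the word "lottery"?

    p_spam = 0.2                # prior: 20% of all mail is spam
    p_word_given_spam = 0.30    # "lottery" appears in 30% of spam
    p_word_given_ham = 0.01     # ...and in 1% of legitimate mail

    # Total probability of seeing the word at all (law of total probability).
    p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

    p_spam_given_word = p_word_given_spam * p_spam / p_word
    print(round(p_spam_given_word, 3))   # 0.882

    # Repeated observation is just iteration: today's posterior becomes
    # tomorrow's prior, giving the "revised, more accurate assumption"
    # the rule describes.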

Ms. Koller has led research that has greatly increased the scope of existing Bayesian-related software. “When I started in the mid- to late 1980s, there was a sense that numbers didn’t belong in A.I.,” she said in a recent interview. “People didn’t think in numbers, so why should computers use numbers?”

Ms. Koller is beginning to apply her algorithms more generally to help scientists discern patterns in vast collections of data.

“The world is noisy and messy,” Ms. Koller said. “You need to deal with the noise and uncertainty.”

That philosophy has led her to do research in game theory and artificial intelligence, and more recently in molecular biology.

Her tools led to a new type of cancer gene map based on examining the behavior of a large number of genes that are active in a variety of tumors. From the research, scientists were able to develop a new explanation of how breast tumors spread into bone.

One potentially promising area to apply Ms. Koller’s theoretical work will be the emerging field of information extraction, which could be applied to Web searches. Web pages would be read by software systems that could organize the information and effectively understand unstructured text.

“Daphne is one of the most passionate researchers in the A.I. community,” said Eric Horvitz, a Microsoft researcher and president of the Association for the Advancement of Artificial Intelligence. “After being immersed for a few years with the computational challenges of decoding regulatory genomics, she confided her excitement to me, saying something like, ‘I think I’ve become a biologist — I mean a real biologist — and it’s fabulous.’ ”

To that end, Ms. Koller is spending a sabbatical doing research with biologists at the University of California, San Francisco. Because biology is increasingly computational, her expertise is vital in gaining deeper understanding of cellular processes.

Ms. Koller grew up in an academic family in Israel, the daughter of a botanist and an English professor. When her father spent a year at Stanford in 1981, she was 12, and she began programming on a Radio Shack PC that she shared with another student.

When her family returned to Israel the next year, she told her father, the botanist, that she was bored with high school and wanted to pursue something more stimulating in college. After half a year, she persuaded him to let her enter Hebrew University, where she studied computer science and mathematics.

By 17, she was teaching a database course at the university. The next year she received her master’s degree and then joined the Israeli Army before coming to the United States to study for a Ph.D. at Stanford.

She didn’t spend her time looking at a computer monitor. “I find it distressing that the view of the field is that you sit in your office by yourself surrounded by old pizza boxes and cans of Coke, hacking away at the bowels of the Windows operating system,” she said. “I spend most of my time thinking about things like how does a cell work or how do we understand images in the world around us?”

In recent years, many of her graduate students have gone to work at Google. However, she tries to persuade undergraduates to stay in academia rather than rush off to become software engineers at start-up companies.

She acknowledges that the allure of Silicon Valley riches can be seductive. “My husband still berates me for not having jumped on the Google bandwagon at the beginning,” she said. Still, she insists she does not regret her decision to stay in academia. “I like the freedom to explore the things I care about,” she said.


April 19, 2008

The Wikipedia, Knowledge Preservation and DNA

I had an interesting thought today about the long-term preservation and transmission of human knowledge.

The Wikipedia may be on its way to becoming one of the best places in which to preserve knowledge for future generations. But this is just the beginning. What if we could encode the Wikipedia into the Junk DNA portion of our own genome? It appears that something like this may actually be possible, at least according to some recent studies of the non-coding regions of the human genome.

If we could actually encode knowledge, like the Wikipedia for example, into our genome, the next logical step would be to find a way to access it directly.

At first we might only be able to access and read the knowledge stored in our DNA through a computationally intensive genetic analysis of an individual’s DNA. To correct any errors introduced by mutation, we would also need to cross-reference this individual data with similar analyses of the DNA of other people who carry the same data. There are, however, ways to store data with enough redundancy to protect against degradation. If we could do that, we might eliminate the need for cross-referencing as a form of error correction: the data itself would be self-correcting, so to speak. The next step would then be to find a way for an individual to access the knowledge stored in their DNA directly, in real time. That is a long way off, but it might one day be possible through some future nano-scale genomic-brain interface. This opens up some fascinating areas of speculation, to say the least.
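As a rough sketch of what “writing” and “reading” might mean here, consider that each DNA base can carry two bits, so any text can be mapped onto the four bases and back. The scheme below is entirely hypothetical (no such encoding is established); it is only meant to make the idea tangible.

    # Hypothetical encoding: two bits per base, A=00, C=01, G=10, T=11.
    BASES = "ACGT"

    def text_to_dna(text: str) -> str:
        """Encode text as a DNA base string, four bases per byte."""
        bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
        return "".join(BASES[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2))

    def dna_to_text(dna: str) -> str:
        """Decode a DNA base string produced by text_to_dna."""
        bits = "".join(f"{BASES.index(base):02b}" for base in dna)
        data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
        return data.decode("utf-8")

    seq = text_to_dna("Wikipedia")
    print(seq)               # 36 bases for the 9-byte string
    print(dna_to_text(seq))  # 'Wikipedia'

At two bits per base, the roughly three billion bases of the human genome work out to something like 750 megabytes, which gives a sense of scale for the capacity question raised at the end of this post.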

 

Why The Wikipedia?

The Wikipedia has certain qualities that make it better than other forms of knowledge preservation and transmission:

  • The Wikipedia exists primarily in electronic form. It is not subject to age or decay like a physical encyclopedia or document, so it can persist indefinitely, as long as it continues to be maintained electronically.
  • The Wikipedia is replicated in multiple locations around the world. Because it is so easy to replicate, and so widely replicated, it is at little risk of being lost to a local disaster at any given storage location. It is also more likely to continue, somewhere, as a living document that reflects majority consensus reality into the distant future. It is highly improbable that it will suffer the fate of certain ancient documents that existed in only one place and were lost in floods, fires or wars. At this point only a planet-wide extinction-level event could erase the Wikipedia or prevent future generations from finding it.
  • The Wikipedia is highly viral: its content is widely cited, and it is far ahead of any competing system in coverage and brand recognition. Because so much other content on the Web and in other media treats the Wikipedia as the world’s global authority for knowledge, it becomes ever more authoritative, visible and cited. The Law of Increasing Returns suggests that this will continue to self-amplify, making the Wikipedia the best candidate for an authoritative global repository of knowledge.

What this means is that if you have any knowledge you want to preserve for future generations, a good place to put it is in the Wikipedia. Putting it there almost guarantees that it will propagate around the world, travel with humanity if we become a spacefaring civilization, and persist into the distant future of human civilizations.

The Potential For Storing Knowledge in DNA

Is it possible to store knowledge, such as the Wikipedia, in human DNA? It would certainly be useful if we could. By storing knowledge in the DNA of living humans, or of common bacteria for that matter, it could be passed down and spread through the generations into the far future. However, the mutability of DNA means that mutations would gradually introduce errors, degrading the information carried by any particular line of descent over long periods of time.

Perhaps this could be mitigated, however, by comparing DNA samples from a large cross-section of the descendants of the original carriers of a DNA knowledge archive: this would effectively enable statistical error cancellation. The farther in the future we are from the date at which the knowledge was “written” to the DNA of some number of humans, the more people’s DNA would be needed to eliminate the errors statistically. But in principle this would counteract mutations and enable the reliable recovery of messages in DNA even very far in the future.
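At bottom, this statistical error cancellation is a majority vote at each position across many independently mutated copies. Here is a toy demonstration, assuming aligned copies and independent point mutations, and reusing the hypothetical text_to_dna encoder from the earlier sketch.

    import random
    from collections import Counter

    def mutate(seq: str, rate: float) -> str:
        """Flip each base to a different one with probability `rate`."""
        return "".join(
            random.choice("ACGT".replace(base, "")) if random.random() < rate else base
            for base in seq
        )

    def consensus(copies):
        """Majority vote at every position across aligned copies."""
        return "".join(Counter(column).most_common(1)[0][0] for column in zip(*copies))

    original = text_to_dna("Wikipedia")   # encoder from the sketch above
    copies = [mutate(original, rate=0.05) for _ in range(25)]
    recovered = consensus(copies)
    print(recovered == original)          # True, with very high probability
    print(dna_to_text(recovered))         # 'Wikipedia'

The more mutation the copies have accumulated, the more copies the vote needs to stay reliable, which is exactly the trade-off described above.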

The fact that it is in principle possible to encode knowledge into human (or other) DNA raises the question of whether there is already knowledge stored there. It’s certainly worth a look! Maybe there is already a message waiting for us; one can only wonder whether an ancient “Wikipedia” of sorts is already written there.

Interestingly enough, when certain statistical tests are run against human DNA, it does seem to have properties indicative of written language, but only in the “junk” regions of the genome. Maybe it’s not “junk” after all. Below is an article that discusses a recent discovery related to this:

Language in junk DNA

You’ve probably heard of a molecule called DNA, otherwise known as “The Blueprint Of Life”. Molecular biologists have been examining and mapping the DNA for a few decades now. But as they’ve looked more closely at the DNA, they’ve been getting increasingly bothered by one inconvenient little fact: that 97% of the DNA is junk, with no known use or function! But an unusual collaboration between molecular biologists, cryptanalysts (people who break secret codes), linguists (people who study languages) and physicists has found strange hints of a hidden language in this so-called “junk DNA”.

Only about 3% of the DNA actually codes for amino acids, which in turn make proteins, and eventually, little babies. The remaining 97% of the DNA is, according to conventional wisdom, not gems, but junk.

Molecular biologists call this junk DNA “introns”. Introns are like enormous commercial breaks or advertisements that interrupt the real program, except that in the DNA they take up 97% of the broadcast time. Introns are so important that Richard Roberts and Phillip Sharp, who did much of the early work on introns back in 1977, won a Nobel Prize for their work in 1993. But even today, we still don’t know what introns are really for.

Simon Shepherd, who lectures in cryptography and computer security at the University of Bradford in the United Kingdom, took an approach based on his line of work: he looked at the junk DNA as just another secret code to be broken. He analysed it, and he now reckons that one probable function of introns is to act as some sort of error-correction code, fixing the occasional mistakes that happen as the DNA replicates itself. But even if he’s right, introns could have lots of other uses.

The next big breakthrough came from a really unusual collaboration between medical doctors, physicists and linguists. They found even more evidence that there was a sort-of language buried in the introns.

According to the linguists, all human languages obey Zipf’s Law. It’s a really weird law, but it’s not that hard to understand. Start off by getting a big fat book. Then, count the number of times each word appears in that book. You might find that the number one most popular word is “the” (which appears 2,000 times), followed by the second most popular word “a” (which appears 1,800 times), and so on. Right down at the bottom of the list, you have the least popular word, which might be “elephant”, and which appears just once.

Set up two columns of numbers. One column is the order of popularity of the words, running from “1” for “the”, and “2” for “a”, right down to “1,000” for “elephant”. The other column counts how many times each word appeared, starting off with 2,000 appearances of “the”, then 1,800 appearances of “a”, down to one appearance of “elephant”.

If you then plot, on the right kind of graph paper (with logarithmic scales on both axes), the order of popularity of the words against the number of times each word appears, you get a straight line! Even more amazingly, this straight line appears for every human language, whether it’s English or Egyptian, Eskimo or Chinese! Now the DNA is just one continuous ladder of squillions of rungs, and is not neatly broken up into individual words (like a book).

So the scientists looked at a very long stretch of DNA and made artificial words by breaking it up into “words” each 3 rungs long. Then they tried it again for “words” 4 rungs long, 5 rungs long, and so on, up to 8 rungs long. They then analysed all these words, and to their surprise they got the same sort of Zipf’s-law straight-line graph for the human DNA (which is mostly introns) as they did for human languages!

There seems to be some sort of language buried in the so-called junk DNA! Certainly, the next few years will be a very good time to make a career change into the field of genetics.

So now, around the edge of the new millennium, we have a reasonable understanding of the 3% of the DNA that makes amino acids, proteins and babies. And the remaining 97% – well, we’re pretty sure that there is some language buried there, even if we don’t yet know what it says. It might say “It’s all a joke”, or it might say “Don’t worry, be happy”, or it might say “Have a nice day, lots of love, from your friendly local DNA”.   (source)
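Before completing the thought, it is worth noting that the rank-frequency test the quoted article describes is easy to reproduce. The sketch below runs it on a synthetic, uniformly random sequence (I make no claim about real intron data): it chops the sequence into fixed-length “words”, ranks them by frequency, and fits the slope of the log-log rank-frequency line. Zipf’s law predicts a slope near -1 for natural language, while a uniformly random sequence comes out much shallower.

    import math
    import random
    from collections import Counter

    def zipf_slope(seq: str, k: int) -> float:
        """Chop seq into non-overlapping k-letter "words", rank them by
        frequency, and return the least-squares slope of log(frequency)
        versus log(rank)."""
        words = [seq[i:i + k] for i in range(0, len(seq) - k + 1, k)]
        counts = sorted(Counter(words).values(), reverse=True)
        xs = [math.log(rank) for rank in range(1, len(counts) + 1)]
        ys = [math.log(count) for count in counts]
        mean_x = sum(xs) / len(xs)
        mean_y = sum(ys) / len(ys)
        return (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
                / sum((x - mean_x) ** 2 for x in xs))

    random.seed(1)
    dna = "".join(random.choice("ACGT") for _ in range(200_000))
    for k in range(3, 9):   # "words" 3 to 8 bases long, as in the article
        print(k, round(zipf_slope(dna, k), 2))

Applied to the word list of an ordinary book (taking each word as a unit rather than chopping by length), the same fit recovers the familiar Zipf line; the article’s claim is that intron-rich DNA behaves more like the book than like the random string.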

Now to complete the thought: what if the information-carrying capacity of the so-called Junk DNA of the human genome is sufficient to hold the content of the Wikipedia? Then all we would need is some way of writing to it, perhaps through gene therapy, using a virus engineered to carry a copy of the Wikipedia.

This would enable volunteers to accept copies of the Wikipedia into their DNA and become vectors for it. They and their descendants would become walking encyclopedias, preserving human knowledge for future generations. If only a few people had this done, they and their lineages would be a sort of priesthood with particular importance for the future of humanity. It sounds like the basis for a really great science-fiction thriller!

By copying the Wikipedia into our own DNA we might be able to ensure that wherever human beings end up in the universe, the Wikipedia will go with them. Even if in some distant world humans destroy their civilization in a nuclear holocaust or are almost wiped out by an asteroid and have to rebuild from the stone-age again, they will eventually rediscover genomics and soon after that they will find the Wikipedia in their genome.

This is a kind of “backup strategy” for our civilization and all the knowledge we consider most important. Of course, it is not yet clear whether the Junk DNA could carry enough information to encode the entire Wikipedia, nor is it clear that the Junk DNA is actually “junk”: perhaps there is already something there that should not be overwritten, or perhaps it serves some other purpose in human development and evolution that we shouldn’t interfere with. It remains to be seen.

