Feeds:
Posts
Comments

Archive for the ‘business’ Category

Four Bad Bears Markets

Four Bears and Inflation
April 17, 2009
Earlier this week the Bureau of Labor Statistics (BLS) announced a Consumer Price Index number for March that showed an annualized negative number. It was tiny, a mere -0.38%. But it was the first negative annualized rate since August of 1955. Is it a hint of more to come?
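
For readers who want to check the arithmetic, here is a minimal sketch of how a year-over-year CPI change is computed; the index values below are assumed for illustration and are not the actual BLS figures.

    # Year-over-year CPI change (annualized by construction, since it spans twelve months).
    # Index values are assumed for illustration only, not actual BLS data.
    cpi_prior_year = 213.5   # hypothetical March 2008 index level
    cpi_current    = 212.7   # hypothetical March 2009 index level
    yoy_change_pct = (cpi_current / cpi_prior_year - 1) * 100
    print(f"{yoy_change_pct:.2f}%")   # roughly -0.37% with these made-up inputs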

Deflation has been a chronic problem in the Japanese economy since the Nikkei 225 topped out in 1989, and it was a debilitating problem during the Great Depression. There are a few economists who see deflation as a threat to the U.S. economy on the expectation of continuing unemployment and consumer deleveraging.

But the consensus seems to believe that monetary easing by the Federal Reserve is more likely to trigger the reverse problem — higher inflation. Some even foresee a return to the sustained inflation of the seventies and early eighties. This is yet another topic that demonstrates the heightened “Uncertainty Factor” in today’s economy and its imperfect reflection in the markets.

And speaking of the markets, most people think only in terms of nominal price values with little consideration of real (inflation-adjusted) performance. But over longer periods inflation and deflation are major factors. The thumbnails to the right offer a quick comparison of our Four Bad Bears chart in nominal, real, and alternate-real formats. In nominal prices, our current bear has begun to pull away from the treacherous slope that led to the Great Depression. That’s not the case in real prices, mostly because (ironically) the deflation of the earlier period makes the 1929-32 decline seem less grave. If the ShadowStats Alternate CPI adjustment has any credence, the real comparison is even more bizarre. The ShadowStats claim of understated inflation since 1982 makes the Tech and current declines significantly more severe.
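
For the mechanics behind the real (inflation-adjusted) versions of these charts, here is a minimal sketch of the standard adjustment: each nominal price is rescaled by the ratio of a reference CPI to the CPI of its own date. All numbers are invented for illustration and are not the series used in the actual charts.

    # Convert nominal index levels to real (inflation-adjusted) levels.
    # All values are invented for illustration.
    nominal_prices = [100.0, 95.0, 60.0]    # market index at three points in time
    cpi_levels     = [210.0, 214.0, 212.0]  # CPI for the same three dates
    cpi_reference  = cpi_levels[-1]         # express everything in latest-period dollars

    real_prices = [price * cpi_reference / cpi
                   for price, cpi in zip(nominal_prices, cpi_levels)]
    # When the CPI itself is falling (deflation), this rescaling shrinks the measured
    # real decline, which is why the 1929-32 slide looks less grave in real terms.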

Click on the small charts for a series of larger versions. Use the blue links at the top to navigate among them.

[Chart series: the Four Bad Bears in nominal, real, and alternate-real versions; click the thumbnails for larger images]

Bear Turns to Bull?
April 17, 2009 (updated daily)
The S&P 500 closed the week by rallying to a new high 28.5% above the March 9th low. Are we in a new bull market, or is this just another bear rally? Click here to review the previous rallies during the current bear market, and here’s a table showing the 1929-1932 Dow rallies.

We continue to be fascinated with the saga of the Four Bad Bears. In nominal terms, the latest rally puts the S&P 500 just slightly higher than the Dow stood at the equivalent point after the Crash of 1929. In real (inflation-adjusted) terms, the Dow fares better.

The accompanying charts are intended not as a forecast but rather as a way to study the current decline in relation to three familiar bears from history.

For a better sense of how these declines figure into a larger historical context, here’s a long-term view of secular bull and bear markets in the S&P Composite since 1871.

For a bit of international flavor, here’s a chart series that includes the so-called L-shaped “recovery” of the Nikkei 225. I update these weekly.

Since inflation is a favorite topic on this website, I now regularly update a pair of charts to facilitate a comparison of the nominal and real declines. See also my logarithmic scale view of the “Four Bad Bears” comparison.

For a visual analysis of bear market recoveries, be sure to see my Bear Bottoming charts introduced in the next section.

[Chart: the Mega-Bear Quartet]

The Mega-Bear Quartet and L-Shaped Recoveries
April 17, 2009 (updated weekly)
Here’s an update of the Mega-Bear Quartet. It’s especially relevant these days because of the frequent mention of L-shaped recoveries and references to the Japanese market after the 1989 bubble.

To see the mega-bear comparison more clearly, here’s a musical analogy that allows you to view the similarities incrementally. Use the blue links to add the parts.

This latest update now includes an inflation-adjusted chart, which gives us a fascinating visualization of the impact of inflation on long-term market prices. The higher the rate of inflation during a bear market, the greater the real decline. Compare the peak of the Dow rally in year seven with its position in the nominal chart; the difference is the result of deflation during the Great Depression.

It’s rather stunning to see the real (inflation-adjusted) decline of the Nikkei, 19 years after its crash. The current lows rival the traumatic Dow bottom in 1932, less than 3 years after its peak.

Over the past few decades, equity markets in the U.S. have had an extended bull run. These charts remind us that bear markets can last a long time. And it’s not necessary to go back to the Great Depression for an example.

Read Full Post »

As Google’s Growth Falters, Microsoft Could Regain Momentum
By 24/7 Wall St. | Wednesday, Apr. 01, 2009
[Photo: people sit under a Google logo. Credit: JOHN MACDOUGALL / AFP / Getty]
Most of the recent news about Google (GOOG) has been bad. Online advertising posted a slow fourth quarter, and the slowdown unexpectedly included both display ads and search marketing, the business that has made Google one of the fastest-growing large companies in America. Several Wall St. analysts have commented that the rate of increase in Google’s search revenue flattened out in January and February. Since the consensus among experts who cover the company is that revenue will rise 11% in the first quarter, a flat quarter would be devastating.

One of the things that Wall St. hates about Google is that it does one thing better than any other company in the world, but that is all it does. The Google Chrome browser, Google Earth, Google Maps, and YouTube have never really made much money. Some of the features have not produced any revenue at all. If its search operation falters, Google’s run as the hottest tech company in the world could be over. (See pictures of Google Earth.)

At this point, Google is a $22 billion company. If the search business drops to a growth rate of 10% a year, it will take three years for Google’s sales to get to $30 billion. From the time Microsoft (MSFT) hit $22 billion in sales in 2000, it took the company less than three years to get to the $30 billion plateau. Then from 2002 to 2008, Microsoft’s sales doubled. The software business not only grew. Until recently, it grew quickly. (See pictures of Bill Gates.)
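
The growth comparison above is simple compounding; a rough check of the article's 10% scenario looks like this.

    # Rough check of the growth arithmetic: $22 billion compounding at 10% a year.
    sales = 22.0  # billions of dollars
    for year in range(1, 4):
        sales *= 1.10
        print(f"Year {year}: ${sales:.1f}B")
    # Year 3 comes out near $29.3B, i.e. roughly three years to approach the $30 billion mark.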

The assumption about Google’s prospects is that the search company is the next Microsoft. Twenty years ago, Microsoft had the hot hand. Sales of Windows and the company’s business and server software were stunning. The margins on some of Microsoft’s software franchises were over 70%. Then the hyper-growth stopped as the company’s market penetration of PCs and servers reached a saturation point. Microsoft’s stock never again saw the level it hit in 2000. Without lucrative stock options, employees who wanted to strike it rich moved to start-ups. The people who had been at the company thirty years were already rich. Many of them retired.

About seven years after Microsoft’s stock hit an all-time high, Google traded at $747, its peak. It now changes hands at $348, and if the company’s sales can only grow at 10% or 15%, the stock is not going back above $700, ever. The myth about companies like Microsoft and Google is that what they do is so important to business and consumers and so pervasive that the growth curve never flattens out. It does flatten at every company. No exceptions.

The press coverage of Google this week included a few pathetic announcements. Disney (DIS) will put some of its premium content on Google’s YouTube. That should be good for $10 million in revenue a year. Google is starting a $100 million venture capital arm which will make it the 1,000th largest venture operation in the world. In other words, it will not be managing enough venture money to matter. Then word came out that Hewlett-Packard (HPQ) might use Google’s operating system in some of its netbooks instead of Microsoft Windows. The important word in that report is “might.” The news that Google is adding thousands of employees a quarter and that the founders have bought a 747 or an aircraft carrier probably hit a high point two years ago.

Saying that Google is doing poorly is not the same as saying that Microsoft is doing well. What matters to Microsoft is that Google becomes less of a threat each day as it fails in its diversification attempts. Google’s cash flow no longer gives it an almost limitless capital arsenal. Google has to consider cutting people in areas which will never be profitable. The entire ethos at Google is in the process of changing. Microsoft may be in third place in the search business, but it is in first place in software, which is still the larger industry.

Investors still ask Microsoft why it is in the video game business. There is not any reasonable answer. It is an awful business with poor margins. It has nothing to do with selling Windows. There may have been some idea that being in the hardware business would help the software business, but, if so, that idea didn’t work out a long time ago.

With the playing field that Microsoft and Google operate on now perceived as a bit more level, they can race after the one market that could be substantial for either or both of them: providing software and search on mobile devices. The smartphone, which is really a PC for the pocket, is part of a handset industry that sells roughly a billion units per year. Providing the operating software and other key components for wireless devices is almost certainly the next big thing for tech companies from Google to Yahoo (YHOO) to Microsoft to Adobe (ADBE). Trying to milk more money out of the PC gets harder and harder. For the largest companies in the industry, it has become a zero-sum game. (See pictures of the 50 best websites of 2008.)

For Google and Microsoft, the best days are over, unless one can dominate the handset world the way it did the universe of computers.

— Douglas A. McIntyre

Read Full Post »

How Loomia Aims to Drive Revenue for Media Websites in 2009

Written by Richard MacManus / March 3, 2009 8:00 AM / 6 Comments


Loomia is a content recommendations service, used on sites such as the Wall Street Journal and PC World. We’ve profiled Loomia’s Facebook app before, which tracks what you and your Facebook friends are reading on Loomia-supported sites and then shows you what content is most popular among your social circle. Loomia has recently started to focus on revenue-driving recommendations for its media clients, as well as getting more active in the video industry. In this post we take a look at what Loomia is focusing on in 2009, which is an indicator of what media websites must do to ramp up this year.

On media websites, Loomia is most commonly seen as a widget that recommends content. For example, in the WSJ screenshot to the right, the contents of the widget are derived from the popularity of the content, user behavior, and data about the content itself (for example, its topic). For some of the publishers which use Loomia, there is a social element too.

Loomia is similar to Sphere and another app we reviewed recently, Apture. These services all aim to serve up more clickable content options on media websites – which means more user engagement and time spent on site for publishers.

We spoke to Loomia CEO David Marks and asked him how Loomia compares to Sphere, which at first glance appears to have much in common with Loomia. Marks said that Sphere is trying to do “semantic classification”, i.e. analyzing the content of an article and recommending further content based on the findings. Loomia, however, focuses more on the user and so does behavioral-type recommendations. This can result in a more diverse set of topics, because users typically have a range of content preferences. It depends on the article, though, said Marks.
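
To make that distinction concrete, here is a toy sketch contrasting topic-based matching with behavior-based co-reading recommendations. It is only an illustration of the general idea, not Loomia's or Sphere's actual algorithm, and all of the data is invented.

    # Toy contrast between topic-based and behavioral recommendations.
    # Articles, topics, and reading sessions are all invented for illustration.
    article_topics = {
        "fed-rate-cut":  {"economy", "fed"},
        "bank-earnings": {"economy", "banks"},
        "campaign-2009": {"politics"},
    }
    reading_sessions = [
        {"fed-rate-cut", "campaign-2009"},   # readers often pair these two
        {"fed-rate-cut", "campaign-2009"},
        {"bank-earnings"},
    ]

    def topic_based(article):
        """Recommend articles that share a topic (roughly, semantic classification)."""
        topics = article_topics[article]
        return [a for a, t in article_topics.items() if a != article and topics & t]

    def behavioral(article):
        """Recommend articles that co-occur in the same reading sessions."""
        counts = {}
        for session in reading_sessions:
            if article in session:
                for other in session - {article}:
                    counts[other] = counts.get(other, 0) + 1
        return sorted(counts, key=counts.get, reverse=True)

    print(topic_based("fed-rate-cut"))   # ['bank-earnings']: same topic
    print(behavioral("fed-rate-cut"))    # ['campaign-2009']: different topic, shared readers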

Loomia currently has 2 types of deployment:

  • Content (e.g. WSJ)
  • Video (e.g. Brightcove)

Marks told ReadWriteWeb that video advertising is currently selling well for big media publishers. Accordingly these publishers typically now want to drive users to their videos – and Loomia has a widget to do that.

Marks told us that a lot of their publishers are “dollar focused” this year, so recommendations have become more than just an interesting feature on a website: they can drive more advertising dollars. As an example, Marks told us that a media website’s Finance section may sell out its ad inventory while its Politics section may not (fairly common on big media websites). But the Politics section tends to get more page views, so to address the imbalance Loomia’s recommendation widgets can drive users from Politics to Finance.
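
One way to picture that revenue-driven weighting is a re-ranking step that blends editorial relevance with the ad demand of the destination section. This is a purely illustrative sketch with invented numbers, not Loomia's actual scoring.

    # Illustrative re-ranking that favors recommendations into sections with strong ad demand.
    # All numbers are invented.
    section_ad_demand = {"finance": 0.95, "politics": 0.40}   # share of ad inventory that sells

    candidates = [
        {"id": "earnings-preview", "section": "finance",  "relevance": 0.6},
        {"id": "primary-recap",    "section": "politics", "relevance": 0.8},
    ]

    def score(article):
        # Blend editorial relevance with the commercial value of the destination section.
        return 0.5 * article["relevance"] + 0.5 * section_ad_demand[article["section"]]

    ranked = sorted(candidates, key=score, reverse=True)
    print([a["id"] for a in ranked])   # the Finance article wins despite lower relevance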

We’ve been looking at how recommendations are being used in the retail sector a lot, and Loomia is a neat example of how the same technology can have real value for the media segment. Let us know in the comments what other recommendation technologies have caught your eye in publishing.

Read Full Post »

Google’s US Search Market Dominance Hits All Time High

Written by Marshall Kirkpatrick / April 7, 2008 1:51 PM / 15 Comments


Traffic analysis firm Hitwise released new numbers today finding that Google’s share of US searches rose last month to an all-time high of 67% of searches performed. Yahoo! Search (20%), MSN Search (5.25%) and Ask.com (4%) trail far behind but aren’t insignificant either.

At this time last year Google was at 64% and MSN was at 9%. Momentum remains with Google, but is that momentum inevitable? Could things change? We’ve written about three ways that it could.

Innovation

Some have argued that Google’s approach to search is outdated and slow to change. Apparently it’s working just fine for them today, but there’s a world of opportunities for other innovators to come up with a better search experience. We wrote about this situation in our recent post titled “How Vulnerable is Google in Search?”

Hitwise tracks 46 other search engines as well, which added up to a combined 1.7% of searches last month. 46 alternative search engines is about a week’s work for our network blog AltSearchEngines; check it out if you’d like to learn about the rest of the industry, including some that may become the challengers of the future.

Semantic Web

Yahoo! is #2 today, but is taking the lead in support for standards-based microformats and semantic web indexing. Three weeks ago Yahoo! announced that it would index semantic markup. Since semantic markup could enable improvements in search quality by orders of magnitude, this could be a turning point for Google and Yahoo!

As we explained when that announcement was made:

Today, a web service might work very hard to scour the internet to discover all the book reviews written on various sites, by friends of mine, who live in Europe. That would be so hard that no one would probably try it. The suite of technologies Yahoo! is moving to support will make such searches trivial. Once publishers start including things like hReview, FOAF and geoRSS in their content then Yahoo!, and other sites leveraging Yahoo! search results, will be able to ask easily what it is we want to do with those book reviews. Say hello to a new level of innovation.
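
Once reviews carry structured markup, that "book reviews by friends in Europe" query really does reduce to a simple filter. The sketch below runs over made-up records; the field names are illustrative stand-ins for the kind of data hReview, FOAF and geoRSS markup could expose, not a real schema.

    # Illustrative filter over structured review records; the field names are invented,
    # loosely modeled on what hReview/FOAF/geoRSS markup could expose.
    reviews = [
        {"type": "book",   "item": "Book A", "reviewer": "alice", "region": "Europe"},
        {"type": "gadget", "item": "Phone",  "reviewer": "bob",   "region": "Europe"},
        {"type": "book",   "item": "Book B", "reviewer": "carol", "region": "North America"},
    ]
    my_friends = {"alice", "carol"}

    matches = [r for r in reviews
               if r["type"] == "book"
               and r["reviewer"] in my_friends
               and r["region"] == "Europe"]
    print(matches)   # only Alice's European book review survives the filter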

 

We’d like to get an update on the Yahoo! semantic indexing announcement, though, and presumably this is the kind of thing that Google will do soon as well.

Privacy Backlash

As Google grows continually stronger and more knowledgeable, the social contract between the company and its customers becomes increasingly important. Google has not been as good as it needs to be about taking clear steps to guarantee security and prevent misuse of user data – including its own misuse of that data!

We wrote in February about how Microsoft’s new levels of engagement with openness and data portability could offer an avenue to challenge Google, but few of our readers agreed in comments. You know what they say, though – if your mouth gets washed out with soap, you may be saying something important!

It may not be Microsoft that challenges Google, but it certainly seems possible that users will draw the line somewhere and look to limit Google’s omniscience.

Perhaps not, though. Perhaps Google’s search dominance will continue to grow and grow, month over month, year over year. Someday, if you want to know about your genetic propensity for a particular disease, you’ll just ask the Google. If you want to know what your kids are doing at home while you’re away, you’ll just ask the Google. Certainly today when we want to know what’s on the web, a clear majority of us just ask the Google.

[Chart: US search market share]

Read Full Post »

Hakia Takes On Google With Semantic Technologies

Written by Richard MacManus / March 23, 2007 12:14 PM / 17 Comments


This week I spoke to Hakia founder and CEO Dr. Riza C. Berkan and COO Melek Pulatkonak. Hakia is one of the more promising Alt Search Engines around, with a focus on natural language processing methods to try to deliver ‘meaningful’ search results. Alex Iskold profiled Hakia for R/WW at the beginning of December and he concluded, after a number of search experiments, that Hakia was intriguing – but not yet at a level to compete with Google. It is important to note that Hakia is a relatively early beta product and is still in development. But given the speed of Internet time, 3.5 months is probably a good time to check back and see how Hakia is progressing…

What is Hakia?

Riza and Melek first told me what makes Hakia different from Google. Hakia attempts to analyze the concept of a search query, in particular by doing sentence analysis. Most other major search engines, including Google, analyze keywords. Riza and Melek told me that the future of search engines will go beyond keyword analysis – search engines will talk back to you and in effect become your search assistant.

One point worth noting here is that, currently, Hakia still has some human post-editing going on – so it isn’t 100% computer powered at this point.

Hakia has two main technologies:

1) QDEX Infrastructure (which stands for Query Detection and Extraction)  – this does the heavy lifting of analyzing search queries at a sentence level.

2) SemanticRank Algorithm – this is essentially the science they use, made up of ontological semantics that relate concepts to each other.

If you’re interested in the tech aspects, also check out hakia-Lab – which features their latest technology R&D.

How is Hakia different from Ask.com?

Hakia most reminds me of Ask.com, which uses more of a natural language approach than the other big search engines (‘ask’ a question, get an answer) – and Ask.com also uses human editing, as does Hakia. [I interviewed Ask.com back in November]. So I asked Riza and Melek: what is the difference between Hakia and Ask.com?

Riza told me that Ask.com is an indexing search engine and it has no semantic analysis. Going one step below, he says to look at the basis of their results. Ask.com bolds keywords (i.e. it works at a keywords level), whereas Riza said that Hakia understands the sentence. He also said that Ask.com categories are not meaning-based – they are “canned or prefixed”. Hakia, he said, understands the semantic relationships.
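
As a toy illustration of that keyword-versus-meaning distinction (and only that; this is not Hakia's QDEX or SemanticRank), consider a query that shares no literal keywords with a relevant document but matches once synonyms are taken into account.

    # Toy contrast between literal keyword matching and a crude synonym-aware match.
    # Illustration of the general idea only; not Hakia's QDEX or SemanticRank.
    synonyms = {
        "cure": {"cure", "treatment", "therapy"},
        "flu":  {"flu", "influenza"},
    }

    def expand(words):
        expanded = set()
        for w in words:
            expanded |= synonyms.get(w, {w})
        return expanded

    query    = {"cure", "flu"}
    document = {"new", "influenza", "therapy", "approved"}

    print(bool(query & document))           # False: no literal keyword overlap
    print(bool(expand(query) & document))   # True: synonyms bridge the gap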

Hakia vs Google

I next referred Riza and Melek to Read/WriteWeb’s interview with Matt Cutts of Google, in which Matt told me that Google is essentially already using semantic technologies, because the sheer amount of data that Google has “really does help us understand the meanings of words and synonyms”. Riza’s view on that is that Google works with popularity algorithms and so it can “never have enough statistical material to handle the Long Tail”. He says a search engine has to understand the language, in order to properly serve the Long Tail.

Moreover, Hakia’s view is that the vastness of data that Google has doesn’t solve the semantic problem – Riza and Melek think there needs to be that semantic connection present.

Their bigger claim though is that the big search companies are still thinking within an indexing framework (personalization etc). Hakia thinks that indexing has plateaued and that semantic technologies will take over for the next generation of search. They say that semantic technologies allow you to analyze content, which they think is ‘outside the box’ of what the big search companies are doing. Riza admitted that it was possible Google was investigating semantic technologies, behind closed doors. Nevertheless, he was adamant that the future is understanding info, not merely finding it – which he said is a very difficult problem to solve, but it’s Hakia’s mission.

Semantic web and Tim Berners-Lee

Throughout the interview, I noticed the word “semantic” was being used a lot – but their interpretation seemed to be different to that of Tim Berners-Lee, whose notion of a Semantic Web is generally what Web people think about when uttering the ‘S’ word. Riza confirmed that their concept of semantic technology is indeed different. He said that Tim Berners-Lee is banking on certain standards being accepted by web authors and writers – which Riza said is “such a big assumption to start this technology”. He said that it forces people to be linguists, which is not a common skill.

Furthermore, Riza told me that Berners-Lee’s Semantic Web is about “imposing a structure that assumes people will obey [and] follow”. He said that the “entire Semantic Web concept relies on utilizing semantic tagging, or labeling, which requires people to know it.” Hakia, he said, doesn’t depend on such structures. Hakia is all about analyzing the normal language of people – so a web author “doesn’t need to mess with that”.

Competitors

Apart from Google and the other big ‘indexing’ search engines, Hakia is competing against other semantic search engines like Powerset and hybrids like Wikia. Perhaps also Freebase – although Riza thinks the latter may be “old semantic web” (but he says there’s not enough information about it to say for sure).

Conclusion

Hakia plans to launch its version 1.0 (i.e. get out of beta) by the end of 2007. As of now my assessment is the same as Alex’s was in December – it’s a very promising, but as yet largely unproven, technology.

I also suspect that Google is much more advanced in search technology than Mountain View is letting on. We know that Google’s scale is a huge advantage, but their experiments with things like personalization and structured data (Google Base) show me that Google is also well aware of the need to implement next-generation search technologies. Also, as Riza noted during the interview, who knows what Google is doing behind closed doors.

Will semantic technologies and ‘sentence analysis’ be the next wave of search? It seems very plausible. So with a bit more development, Hakia could well become compelling to a mass market. Therefore how and when Google responds to Hakia will be something to watch carefully.

Read Full Post »


Report: Semantic Web Companies Are, or Will Soon Begin, Making Money

Written by Marshall Kirkpatrick / October 3, 2008 5:13 PM / 14 Comments


Semantic Web entrepreneur David Provost has published a report about the state of business in the Semantic Web and it’s a good read for anyone interested in the sector. It’s titled On the Cusp: A Global Review of the Semantic Web Industry. We also mentioned it in our post Where Are All The RDF-based Semantic Web Apps?.

The Semantic Web is a collection of technologies that makes the meaning of content online understandable by machines. After surveying 17 Semantic Web companies, Provost concludes that Semantic science is being productized, differentiated, invested in by mainstream players and increasingly sought after in the business world.

Provost aims to use real-world examples to articulate the value proposition of the Semantic Web in accessible, non-technical language. That there are enough examples available for him to do this is great. His conclusions don’t always seem as well supported by his evidence as he’d like – but the profiles he writes of 17 Semantic Web companies are very interesting to read.

What are these companies doing? Provost writes:

“..some companies are beginning to focus on specific uses of Semantic technology to create solutions in areas like knowledge management, risk management, content management and more. This is a key development in the Semantic Web industry because until fairly recently, most vendors simply sold development tools.”

 

The report surveys companies ranging from the innovative but unlaunched Anzo for Excel from Cambridge Semantics, to well-known big players like Dow Jones Client Solutions and RWW sponsor Reuters Calais Initiative, to relatively unknown big players like the already very commercialized Expert System. 10 of the companies were from the US, 6 from Europe and 1 from South Korea.

[Above: chart from Provost’s report.] We’ve been wanting to learn more about “under the radar” but commercialized semantic web companies ever since doing a briefing with Expert System a few months ago. We had never heard of the Italian company before, but they believe they already have a richer, deeper semantic index than anyone else online. They told us their database at the time contained 350k English words and 2.8m relationships between them, including geographic representations. They power Microsoft’s spell checker and the Natural Language Processing (NLP) in the Blackberry. They also sell NLP software to the US military and Department of Homeland Security, which didn’t seem like anything to brag about to us but presumably makes up a significant part of the $12 million+ in revenue they told Provost they made last year.
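
For a sense of what a "words plus relationships" index looks like structurally, here is a minimal sketch of a semantic network as a set of typed edges. The data is a toy example, not Expert System's actual database.

    # Minimal semantic-network sketch: words as nodes, typed relationships as edges.
    # Toy data; a production index like the one described holds hundreds of thousands
    # of words and millions of relationships.
    relations = [
        ("dog",    "is_a",    "animal"),
        ("poodle", "is_a",    "dog"),
        ("park",   "related", "dog"),
    ]

    def neighbors(word):
        """Return (relationship, other-word) pairs touching a given word."""
        out  = [(rel, b) for a, rel, b in relations if a == word]
        out += [(rel, a) for a, rel, b in relations if b == word]
        return out

    print(neighbors("dog"))   # [('is_a', 'animal'), ('is_a', 'poodle'), ('related', 'park')]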

And some people say the Semantic Web only exists inside the laboratories of Web 3.0 eggheads!

Shortcomings of the Report

Provost writes that “the vendors [in] this report have all the appearances of thriving, emerging technology companies and they have shown their readiness to cross borders, continents, and oceans to reach customers.” You’d think they turned water into wine. Those are strong words for a study in which only 4 of 17 companies were willing to report their revenue and several hadn’t launched products yet.

The logic here is sometimes pretty amazing.

The above examples [there were two discussed – RWW] are just a brief sampling of the commercial success that the Semantic Web has been experiencing. In broad terms, it’s easy to point out the longevity of many companies in this industry and use that as a proxy for commercial success [wow – RWW]. With more time (and space in this report), additional examples could be described but the most interesting prospect pertains to what the industry landscape will look like in twelve months. [hmmm…-RWW]

 

In fact, while Provost has glowingly positive things to say about all the companies he surveyed, the absence of engagement with any of their shortcomings makes the report read more like marketing material than an objective take on what’s supposed to be world-changing technology.

This is a Fun Read

The fact is, though, that Provost writes a great introduction to many companies working to sell software in a field still too widely believed to be ephemeral. The stories of each of the 17 companies profiled are fun to read and many of Provost’s points of analysis are both intuitive and thought provoking.

He says the sector is “on the cusp” of major penetration into existing markets currently served by non-semantic software. Provost argues that the Semantic Web struggles to explain itself because the World Wide Web is so intensely visual and semantics are not. He says that reselling business partners in specific distribution channels are combining their domain knowledge with the science of the software developers to bring these tools to market. He tells a great, if unattributed, story about what Linked Data could mean to the banking industry.

We hadn’t heard of several of the companies profiled in the report, and a handful of them had never been mentioned by the 34 semantic web specialist blogs we track, either.

There’s something here for everyone. You can read the full report here.

Read Full Post »

Older Posts »
