
Archive for February 28th, 2008

About the social semantic web

Web 2.0 – what’s next?

Yahoo Researcher Declares Semantic Web Dead – and reborn again…

When Mor Naaman from Yahoo said in a special track on Web 3.0 at WWW2007 that the “Semantic Web” is dead, he was obviously trying to attract attention. Nevertheless, in my opinion he is absolutely right – there is no way to “teach” people to annotate web content in a more sophisticated way than “social tagging” (and I’m pretty sure that in the future, too, it will only ever be a small community that tags its content).

But on one point Mor Naaman missed the mark: the “Semantic Web” was always there, more or less under cover – living in a tin with a lousy HTML lid. And inside the tin there has always been enough semantics. There is no need to re-invent the data models, the namespaces, or the ontologies (at least for most of the basic “things”), as Naaman proposes in his talk (slide 13). How easily all the existing semantics can be released and mapped onto the “Semantic Web” (and suddenly it is born again ;-) ) is demonstrated by projects like [1] or [2].

May 17, 2007 | Posted by ablvienna | semantic web, tagging, web 3.0 | No Comments


Page View Metric Dying – But What Will Replace It?

Written by Guest Author / February 28, 2008 1:13 PM / 1 Comment


We’ve all seen the signs. Ding dong, the page view is dead… well, dying. First Compete announced that it would be using attention-based web metrics, or Attention Metrics for short. Then Facebook announced that it will move to a similar metric. Perhaps most importantly, Nielsen NetRatings announced last July that it would stop using page views to compare popularity on the web and move towards more attention-based metrics. And this week Microsoft announced the release of a new ROI measurement tool called “engagement mapping”.
This is a guest post by Muhammad Saleem, a social media consultant and a top-ranked community member on multiple social news sites.

The reasoning is simple enough: while unique visits and page views are useful for measuring how much incoming traffic a site has, they are not a good or accurate way of measuring impact or even engagement. You could have high incoming traffic (for example, any site that is hugely successful on social sites), but if there is an incredibly high exit rate and visitors spend only 30 seconds to a minute on the site, the traffic numbers don’t mean much (i.e. not all traffic is created equal). Furthermore, the rise of new web technologies – such as AJAX, which refreshes elements or modules on a page without a reload, and video embeds (such as from YouTube) that let you watch a video and then browse related videos without ever refreshing the page – is making the page view a mostly inaccurate measure and rendering it largely irrelevant.

While most people agree that page views are becoming irrelevant, the same people are uncertain about what comes next. Many agree that attention-based metrics are the future. Attention metrics calculate the total time spent on a site or interacting with a page (or an element on a page, in the case of Facebook applications) as a percentage of the total time people spend online, to measure a site’s relative importance on the web. However, many others – like the Tel Aviv-based Nuconomy Studio, and even Yahoo’s Buzz – believe that factors like comments on posts, ratings from users, the number of times something is shared, and clicks on ads make a better, more accurate measure of how popular something is.
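As a rough illustration of the attention idea, here is a minimal sketch of computing a site’s share of total time online; the session data and site names are hypothetical, and real metrics firms of course aggregate panel data at a vastly larger scale.

```python
# Toy attention metric: a site's share of all time users spend online.
# The session data and site names are hypothetical.
sessions = [
    {"site": "example-social.com", "seconds": 1200},
    {"site": "example-news.com", "seconds": 300},
    {"site": "example-social.com", "seconds": 600},
]

total_time_online = sum(s["seconds"] for s in sessions)

def attention_share(site: str) -> float:
    """Time spent on `site` as a fraction of total time spent online."""
    site_time = sum(s["seconds"] for s in sessions if s["site"] == site)
    return site_time / total_time_online

print(f"{attention_share('example-social.com'):.1%}")  # 85.7%
```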

The problem, it seems, arises because there is a disconnect between the advertising industry and the publishing industry. The reason for the eternal quest for traffic – not only unique visitors but also maximum page views per visitor – is that advertising networks admit you on the basis of how much traffic you generate, and your eventual income is based on the number of impressions (and clicks). While it is true that the page view as a metric is on its way out, that won’t happen unless a new metric comes from within the advertising industry, which, with over $20 billion at stake, has the most to gain from a more accurate way of determining where to spend its money.

But it’s not that simple either. As Scott Ross explains, different web technologies and applications have unique effects on different sites. Which technologies you use, and how they affect engagement and interaction on your site, may depend on the size of your site, the niche you operate in, and a host of other factors. In fact, the most applicable metric could even change from page to page depending on the content of those pages. That being the case, perhaps a single metric applied to everyone is just not enough and just not practical or efficient. As web technologies evolve, the page view is bound to die as a metric, but unless the advertising industry can get its act together and work alongside the publishing industry, a good set of new, widely adopted metrics is not imminent.



Comments


  • I think the future lies in producing a dynamic metric that represents a sort of “unified theory” addressing and accounting for all the different types of engagement a website can receive. I’m imagining something resembling a quadratic formula. Engagement = (Unique Users x Events (i.e. clicks, video views, ad interactions, comments posted) x Time On Site)/(Number of Different Content Pieces Per Page + Number of Unique Advertisers Per Page).

    The result is something like an engagement density, directly increasing every time a unique user interacts with a specific element of a site, whether it’s an ad or a piece of content, and decreasing the shorter the time spent on the site or the more advertisers displayed on the page. With Microsoft moving towards “Engagement Mapping” and the other big players following, I’m sure some standard metric like this will emerge.

    Evan Moore

    Posted by: evohollywood | February 28, 2008 3:05 PM
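Expressed directly in code, the commenter’s formula would look something like the sketch below; the weighting and the example numbers are exactly as speculative as the comment itself.

```python
def engagement_density(unique_users: int,
                       events: int,
                       time_on_site: float,
                       content_pieces_per_page: int,
                       advertisers_per_page: int) -> float:
    """The commenter's proposed metric: (users x events x time) divided
    by (content pieces per page + unique advertisers per page)."""
    return (unique_users * events * time_on_site) / (
        content_pieces_per_page + advertisers_per_page
    )

# Hypothetical page: 1,000 unique users, 5,000 events (clicks, video
# views, comments), 3 minutes average time on site, 10 content modules
# and 4 advertisers on the page.
print(engagement_density(1000, 5000, 3.0, 10, 4))  # ~1071428.57
```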


Is Web Technology Making Your Life Better?

Written by Josh Catone / February 28, 2008 10:15 AM / 5 Comments


Technology, broadly, is a tool or set of tools aimed at making some aspect of life better, easier, or more efficient. On the web, that could mean scripting languages that make it easier for developers to create applications, or applications that make it easier for us to accomplish a task. Let’s not debate the definition of the word technology, but rather ask: is web technology working for you? Are so-called Web 2.0 applications making your life easier, or overloading you with too much information?
“It is no secret that we live in an information overload age,” is how Alex Iskold began his must-read Attention Economy overview that was published on ReadWriteWeb about one year ago. We’re constantly bombarded with information these days — news, blogs, photos, videos, Twitter, emails, text messages, phone calls, etc. All of these things are vying for and tugging at our attention.

So the question becomes: is the technology that is supposed to make our lives easier actually overwhelming us and making our lives more difficult? And if so, how do we escape the negative effects of technology overload?

The latest in the compelling series of Oxford 2.0 debates over at the Economist web site (which we covered in December) deals with the proposition: If the promise of technology is to simplify our lives, it is failing.

Arguing on the pro side (that technology is complicating our lives) is Richard Szafranski, Partner at Toffler Associates. On the con side (that technology is simplifying our lives) is John Maeda, President-elect of the Rhode Island School of Design. The debate runs until March 6, and spectators are currently split 64%-34% in favor of the con side.

The Economist debate is speaking broadly to technology as a whole (which might include everything from the hammer and nail to the Large Hadron Collider), but the relevance to our problem of information overload is undeniable.

From Szafranski’s opening statement:

“We – hundreds of millions of us and growing – embrace the very technologies that make our lives and our relationships more difficult and fill many of our waking moments with activity. We love – to the point of gluttony – to communicate, play, invent, learn, imagine and acquire. Information technology has given us tools to do all of those anywhere and round the clock. We are awash in the benefits that high-bandwidth fixed and mobile wireless communications, email, text messages, pictures, games, data and information give us, including instant access to thousands of products. The seductive ease with which we can engage in any and all of those activities, or quests or endeavours makes it difficult and stressful to not be overwhelmed by choices. Choosing takes time and our time is not unlimited. Devices and applications that save us labour in one area may merely allow us, and sometimes seem to compel us, to invest labour in other areas.

We say or hear, “I must do my email tonight, or by tomorrow I’ll have over 600 to read.” We want to buy a pot. Search on “pottery” and get 254,000,000 results. We want to find the John Li we met at a conference. Search on “John Li” and get 8,600,000 results. Do I do email, narrow the searches, eat dinner, pick up my laundry or call a friend? Because technology has spawned numerous complex variations I must repeatedly go through the act of evaluating and choosing — a labour of deciding. Technology has imposed the encumbrance of over-choice on us.”

And from Maeda’s first parry:

“Recognize simplicity as being about two goals realized simultaneously: the saving of time to realize efficiencies, and later wasting the time that you have gained on some humanly pursuit. Thus true simplicity in life is one part technology, and the other part away from technology.

We voluntarily let technology enter our lives in the infantile state that it currently exists, and the challenge is to wait for it to mature to something we can all be proud of. Patience is a virtue I am told, and I await the many improvements that lie ahead. To say that technology is failing to simplify our lives misses the point that in the past decade we have lived in an era of breakneck innovation. This pace is fortunately slowing and industries are retrenching so that design-led approaches can take command to give root to more meaningful technology experiences.”

Szafranski is arguing that the benefit of technology has been overwhelmed by its sheer complexity and enormity. Technology may have solved some problems, but it has created others that are just as bad, or perhaps worse. For example, Google gives us access to so much information that finding what we’re looking for becomes a complex task in itself, leaving us worse off. Maeda’s argument, on the other hand, is that information technology is so new that we’re only now beginning to refine it in ways that make it simpler. It can be a tad overwhelming when a Google search returns 4 million results, but give it a few years and it is bound to get better.

This is an intensely interesting debate, and we thought it would be fun to try to continue it here with a focus on web technologies. Is the information overload that we’re all acutely experiencing worth the utility we’re getting out of it? Has technology on the web failed us or has it made our lives easier? What do you think? The floor is open for debate, let us know your thoughts in the comments.

Image via a Geico ad.



10 Semantic Apps to Watch

Written by Richard MacManus / November 29, 2007 12:30 AM / 39 Comments


One of the highlights of October’s Web 2.0 Summit in San Francisco was the emergence of ‘Semantic Apps’ as a force. Note that we’re not necessarily talking about the Semantic Web – the W3C initiative led by Tim Berners-Lee that touts technologies like RDF, OWL and other metadata standards. Semantic Apps may use those technologies, but not necessarily. This was a point made by the founder of one of the Semantic Apps listed below, Danny Hillis of Freebase (who is as much a tech legend as Berners-Lee).

The purpose of this post is to highlight 10 Semantic Apps. We’re not touting this as a ‘Top 10’, because there is no way to rank these apps at this point – many are still non-public, e.g. in private beta. The list reflects the nascent status of this sector, even though people like Hillis and Spivack have been working on their apps for years.

What is a Semantic App?

First, let’s define “Semantic App”. A key element is that the apps below all try to determine the meaning of text and other data, and then create connections for users. Another of the founders mentioned below, Nova Spivack of Twine, noted at the Summit that data portability and connectibility are key to these new semantic apps – i.e. using the Web as a platform.

In September Alex Iskold wrote a great primer on this topic, called Top-Down: A New Approach to the Semantic Web. In it, he explained that there are two main approaches to Semantic Apps:

1) Bottom-up – embedding semantic annotations (metadata) directly into the data itself; a short code sketch of this style follows after this list.
2) Top-down – analyzing existing information; the ultimate top-down solution would be a full-blown natural language processor, able to understand text the way people do.
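To make the bottom-up style concrete, here is a minimal sketch using the Python rdflib library to publish machine-readable statements about a blog post. The FOAF and Dublin Core vocabularies are real standards, but the post URL and the choice of properties are illustrative assumptions, not anything prescribed by the apps below.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC, FOAF, RDF

g = Graph()
# Hypothetical URL standing in for a real, dereferenceable resource.
post = URIRef("http://example.com/blog/10-semantic-apps")

# Bottom-up: attach explicit, machine-readable metadata to the data.
g.add((post, RDF.type, FOAF.Document))
g.add((post, DC.title, Literal("10 Semantic Apps to Watch")))
g.add((post, DC.creator, Literal("Richard MacManus")))

# Serialize the triples as Turtle, ready to be mapped and remixed.
print(g.serialize(format="turtle"))
```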

Now that we know what Semantic Apps are, let’s take a look at some of the current leading (or promising) products…

Freebase

Freebase aims to “open up the silos of data and the connections between them”, according to founder Danny Hillis at the Web 2.0 Summit. Freebase is a database holding all kinds of data, with an API. Because it’s an open database, anyone can enter new data into Freebase. An example page in the Freebase db looks quite similar to a Wikipedia page. When you enter new data, the app can make suggestions about content. The topics in Freebase are organized by type, and you can connect pages with links and semantic tags. So, in summary, Freebase is all about shared data and what you can do with it.
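For flavor, a read against Freebase’s MQL query API looked roughly like the sketch below. The endpoint and query envelope are reconstructed from memory of the 2007-era service (which has long since been shut down), so treat them as assumptions rather than documentation; the query itself is the canonical “albums by The Beatles” MQL example.

```python
import json
import urllib.parse
import urllib.request

# MQL queries are JSON "fill in the blanks" templates: empty values
# ([] here) ask the database to fill them in from its graph.
query = {"query": {"type": "/music/artist",
                   "name": "The Beatles",
                   "album": []}}

# Endpoint as best remembered from the pre-Google era of the service;
# an assumption, not working documentation.
url = ("https://api.freebase.com/api/service/mqlread?query="
       + urllib.parse.quote(json.dumps(query)))

with urllib.request.urlopen(url) as resp:
    print(json.load(resp)["result"])
```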

Powerset

Powerset (see our coverage here and here) is a natural language search engine. The system relies on semantic technologies that have only become available in the last few years. It can make “semantic connections”, which help build its semantic database. The idea is that meaning and knowledge get extracted automatically. The product isn’t yet public, but it has been riding a wave of publicity throughout 2007.

Twine

Twine claims to be the first mainstream Semantic Web app, although it is still in private beta. See our in-depth review. Twine automatically learns about you and your interests as you populate it with content, building a “Semantic Graph”. When you put in new data, Twine picks out and tags certain content with semantic tags – e.g. the name of a person. An important point is that Twine creates new semantic, rich data. But it’s not all user-generated: they’ve also done machine learning against Wikipedia to ‘learn’ about new concepts, and they will eventually tie into services like Freebase. At the Web 2.0 Summit, founder Nova Spivack compared Twine to Google, calling it a “bottom-up, user generated crawl of the Web”.

AdaptiveBlue

AdaptiveBlue is the maker of the Firefox plugin BlueOrganizer. They also recently launched a new version of their SmartLinks product, which allows website publishers to add semantically charged links to their sites. SmartLinks are browser ‘in-page overlays’ (similar to popups) that add contextual information to certain types of links, including links to books, movies, music, stocks, and wine. AdaptiveBlue supports a large list of top websites, automatically recognizing and augmenting links to those properties.

SmartLinks works by understanding specific types of information (in this case links) and wrapping them with additional data. SmartLinks takes unstructured information and turns it into structured information by understanding a basic item on the web and adding semantics to it.
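The general recipe – recognize what a link points to, then wrap it with structured data – can be sketched in a few lines. The URL patterns below are hypothetical stand-ins for AdaptiveBlue’s much larger catalog of supported sites; the point is the mapping from unstructured links to typed items.

```python
import re

# Hypothetical patterns mapping URLs to item types, in the spirit of
# SmartLinks (not AdaptiveBlue's actual rules).
LINK_PATTERNS = {
    "book": re.compile(r"amazon\.com/.*/dp/(\w+)"),
    "movie": re.compile(r"imdb\.com/title/(tt\d+)"),
}

def classify_link(url: str):
    """Return (item_type, item_id) for a recognized URL, else None."""
    for item_type, pattern in LINK_PATTERNS.items():
        match = pattern.search(url)
        if match:
            return item_type, match.group(1)
    return None

print(classify_link("http://www.imdb.com/title/tt0088763/"))
# ('movie', 'tt0088763')
```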

[Disclosure: AdaptiveBlue founder and CEO Alex Iskold is a regular RWW writer]

Hakia

Hakia is one of the more promising Alt Search Engines around, focused on natural language processing methods to try to deliver ‘meaningful’ search results. Hakia attempts to analyze the concept of a search query, in particular by doing sentence analysis; most other major search engines, including Google, analyze keywords. The company told us in a March interview that the future of search engines will go beyond keyword analysis – search engines will talk back to you and in effect become your search assistant. One point worth noting: currently Hakia uses limited post-editing/human interaction for the editing of hakia Galleries, but the rest of the engine is 100% computer-powered.

Hakia has two main technologies:

1) QDEX Infrastructure (which stands for Query Detection and Extraction) – this does the heavy lifting of analyzing search queries at a sentence level.

2) SemanticRank Algorithm – this is essentially the science they use, made up of ontological semantics that relate concepts to each other.
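As a toy contrast between keyword matching and sentence-level query analysis, consider the sketch below. It is purely illustrative: a hard-coded stopword list and a one-rule intent guesser, bearing no resemblance to the internals of QDEX or SemanticRank.

```python
# Toy contrast: what a keyword engine sees vs. a crude sentence-level
# reading of the same query. Illustrative only.
STOPWORDS = {"what", "is", "the", "of", "a", "in", "for"}

def tokens(query: str) -> list:
    return query.lower().rstrip("?").split()

def keywords(query: str) -> set:
    """A keyword engine sees an unordered bag of terms."""
    return {w for w in tokens(query) if w not in STOPWORDS}

def crude_parse(query: str) -> dict:
    """A sentence-level engine also extracts the question's intent."""
    words = tokens(query)
    intent = "definition" if words[:2] == ["what", "is"] else "unknown"
    return {"intent": intent, "focus": [w for w in words if w not in STOPWORDS]}

print(keywords("What is the capital of France?"))   # {'capital', 'france'}
print(crude_parse("What is the capital of France?"))
```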

Talis

Talis is a 40-year-old UK software company which has created a semantic web application platform. It is a bit different from the other nine companies profiled here, in that Talis has released a platform rather than a single product. The Talis platform is something of a mix between Web 2.0 and the Semantic Web, in that it enables developers to create apps that allow for sharing, remixing and re-using data. Talis believes that Open Data is a crucial component of the Web, yet there is also a need to license data in order to ensure its openness. Talis has developed its own content license, called the Talis Community License, and recently funded some legal work around the Open Data Commons License.

According to Dr Paul Miller, Technology Evangelist at Talis, the company’s platform emphasizes “the importance of context, role, intention and attention in meaningfully tracking behaviour across the web.” To find out more about Talis, check out their regular podcasts – the most recent one features Kaila Colbin (an occasional AltSearchEngines correspondent) and Branton Kenton-Dau of VortexDNA.

UPDATE: Marshall Kirkpatrick published an interview with Dr Miller the day after this post. Check it out here.

TrueKnowledge

Venture-funded UK semantic search engine TrueKnowledge unveiled a demo of its private beta earlier this month. It reminded Marshall Kirkpatrick of the still-unlaunched Powerset, but it’s also reminiscent of the very real Ask.com “smart answers”. TrueKnowledge combines natural language analysis, an internal knowledge base and external databases to offer immediate answers to various questions. Instead of just pointing you to web pages where it believes your answer can be found, the engine offers an explicit answer and explains the reasoning path by which that answer was reached. There’s also an interesting-looking API at the center of the product. “Direct answers to human and machine questions” is the company’s tagline.

Founder William Tunstall-Pedoe said he’s been working on the software for the past 10 years, really putting time into it since coming into initial funding in early 2005.
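A drastically simplified sketch of the “answer plus reasoning path” idea: a toy fact base with a single transitivity rule, returning the chain of facts that justifies the answer. This illustrates the concept only and is in no way TrueKnowledge’s actual engine.

```python
# Toy "answer with reasoning path": facts plus one transitivity rule.
FACTS = {
    ("London", "is_in", "England"),
    ("England", "is_in", "the UK"),
}

def located_in(place: str, region: str, path=None):
    """Return the chain of facts proving `place` is in `region`, if any."""
    path = path or []
    if (place, "is_in", region) in FACTS:
        return path + [(place, "is_in", region)]
    for (a, _, b) in FACTS:
        if a == place:
            proof = located_in(b, region, path + [(a, "is_in", b)])
            if proof:
                return proof
    return None

print(located_in("London", "the UK"))
# [('London', 'is_in', 'England'), ('England', 'is_in', 'the UK')]
```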

TripIt

TripIt is an app that manages your travel planning. Emre Sokullu reviewed it when it presented at TechCrunch40 in September. With TripIt, you forward incoming booking confirmations to plans@tripit.com and the system manages the rest. Their patent-pending “itinerator” technology is a baby step toward the semantic web – it extracts the useful information from these emails and builds a well-structured, organized presentation of your travel plan. It pulls in information from Wikipedia for the places you visit, and it uses microformats – specifically the iCal format, which is well integrated into Google Calendar and other calendar software.
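The output end of that pipeline is easy to picture: once the itinerary details have been parsed out of a confirmation email, they can be emitted as a standard iCalendar event. Below is a minimal hand-rolled sketch; the booking details are hypothetical, and a production system would use a proper iCalendar library and the actual parsed fields.

```python
# Emit a minimal iCalendar (.ics) event for one parsed booking.
# The flight details are hypothetical placeholders for fields a real
# parser would pull out of the forwarded confirmation email.
def booking_to_ics(summary: str, start: str, end: str, location: str) -> str:
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//itinerary-sketch//EN",
        "BEGIN:VEVENT",
        "UID:sketch-0001@example.com",
        "DTSTAMP:20080228T120000Z",
        f"DTSTART:{start}",
        f"DTEND:{end}",
        f"SUMMARY:{summary}",
        f"LOCATION:{location}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

print(booking_to_ics("Flight SFO to JFK", "20080301T083000Z",
                     "20080301T170000Z", "San Francisco Intl"))
```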

The company claimed at TC40 that “instead of dealing with 20 pages of planning, you just print out 3 pages and everything is done for you”. Their future plans include a recommendation engine which will tell you where to go and who to meet.

ClearForest

ClearForest is one of the companies in the top-down camp. We profiled the product in December ’06, when ClearForest was applying its core natural language processing technology to facilitate next-generation semantic applications. In April 2007 the company was acquired by Reuters. The company offers both a Web Service and a Firefox extension that leverages an API to deliver the end-user application.

The Firefox extension is called Gnosis, and it enables you to “identify the people, companies, organizations, geographies and products on the page you are viewing.” With one click from the menu, a webpage you view via Gnosis is filled with various types of annotations: for example, it recognizes Companies, Countries, Industry Terms, Organizations, People, Products and Technologies. Each word that Gnosis recognizes gets colored according to its category.

Also, ClearForest’s Semantic Web Service offers a SOAP interface for analyzing text, documents and web pages.
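In spirit, Gnosis-style annotation is a recognize-and-markup pass over the page text. A dictionary-based toy version is sketched below; ClearForest’s real system uses genuine NLP rather than a hard-coded entity list.

```python
# Toy entity tagger in the spirit of Gnosis: dictionary lookup, then
# wrap each recognized entity in a category-keyed <span> for styling.
ENTITIES = {
    "Reuters": "Company",
    "Firefox": "Product",
    "London": "Geography",
}

def annotate(text: str) -> str:
    """Wrap known entities in <span> tags keyed by category."""
    for name, category in ENTITIES.items():
        text = text.replace(name, f'<span class="{category}">{name}</span>')
    return text

print(annotate("Reuters acquired the company; Gnosis runs in Firefox."))
```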

Spock

Spock is a people search engine that got a lot of buzz when it launched. Alex Iskold went so far as to call it “one of the best vertical semantic search engines built so far.” According to Alex, there are four things that make their approach special:

  • The person-centric perspective of a query
  • Rich set of attributes that characterize people (geography, birthday, occupation, etc.)
  • Usage of tags as links or relationships between people
  • Self-correcting mechanism via user feedback loop

As a vertical engine, Spock knows the important attributes that people have: name, gender, age, occupation and location, just to name a few. Perhaps the most interesting aspect of Spock is its usage of tags: all frequent phrases that Spock extracts via its crawler become tags, and users can add tags as well. So Spock leverages a combination of automated tagging and people power.
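The automated half of that tagging – frequent phrases promoted to tags – can be sketched with a simple frequency count over crawled snippets. Real phrase extraction is statistical and far more careful; the snippets here are invented.

```python
from collections import Counter

# Toy version of "frequent phrases become tags": count bigrams across
# crawled snippets and keep the ones that recur. Purely illustrative.
snippets = [
    "software engineer in san francisco",
    "senior software engineer",
    "software engineer and open source contributor",
]

def bigrams(text: str):
    words = text.split()
    return zip(words, words[1:])

counts = Counter(b for s in snippets for b in bigrams(s))
tags = [" ".join(b) for b, n in counts.most_common() if n >= 2]
print(tags)  # ['software engineer']
```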

Conclusion

What have we missed? 😉 Please use the comments to list other Semantic Apps you know of. It’s an exciting sector right now, because Semantic Web and Web 2.0 technologies alike are being used to create new semantic applications. One gets the feeling we’re only at the beginning of this trend.




Why Yahoo bought del.icio.us …

An outstanding talk at this year’s European Semantic Web Conference was Ron Brachman’s “Emerging Sciences of the Internet: Some New Opportunities”. Yahoo’s Vice President of Worldwide Research (and somehow this company has more and more to do with the Semantic Web…), with his strong background in AI and description logics, pointed out the need to rethink traditional approaches of computer science: “…another emerging element in what we might call a new Science of Search is a social one… Will ontologies matter or do folksonomies rule? Others have begun to address the substantial differences between the social Web world and the Semantic Web world. While sometimes portrayed as diametrically opposed, the sides may benefit from each other if we look more deeply. My intuition is that there is room for synergy, and it would behoove us to investigate.”

That’s it! The Social Semantic Web has finally become a huge business opportunity… (And again, I think it’s actually more accurate to say “Semantic Social Web”, because it’s the Social Web which will be enhanced by Semantic Web technologies, and not vice versa.)

June 9, 2007 | Posted by ablvienna | semantic social web | No Comments
