Posts Tagged ‘net neutrality’

From Logic to Ontology: The limit of “The Semantic Web”

(Some posts are written in both English and Spanish.)


If you read the following posts on this blog:

Semantic Web

The Semantic Web

What is the Semantic Web, Actually?

The Metaweb: Beyond Weblogs. From the Metaweb to the Semantic Web: A Roadmap

Semantics to the people! ontoworld

What’s next for the Internet

Web 3.0: Update

How the Wikipedia 3.0: The End of Google? article reached 2 million people in 4 days!

Google vs Web 3.0

Google dont like Web 3.0 [sic] Why am I not surprised?

Designing a better Web 3.0 search engine

From semantic Web (3.0) to the WebOS (4.0)

Search By Meaning

A Web That Thinks Like You


The long-promised “semantic” web is starting to take shape

Start-Up Aims for Database to Automate Web Searching

Metaweb: a semantic wiki startup


The Semantic Web, Collective Intelligence and Hyperdata.

Informal logic 

Logical argument

Consistency proof 

Consistency proof and completeness: Gödel’s incompleteness theorems

Computability theory (computer science): The halting problem

Gödel’s incompleteness theorems: Relationship with computability

Non-formal or Inconsistency Logic: LACAN’s LOGIC. Gödel’s incompleteness theorems,

you will notice the internal relationship that links these posts, from logic to ontology; that connecting thread is the title of this post: “From Logic to Ontology.”

I am now writing an article about the existence of the Semantic Web. I will argue that no such construction exists at all and that it cannot be built from machines such as computers. This does not depend on the software or hardware you use: you cannot do it at all!

More precisely, the limits of the Semantic Web are not set by the use of machines themselves (biological systems could equally be used to reach this goal), but by the logic being used to construct it, which does not contemplate the concept of time. Purely formal logic is metonymic and lacks metaphor, and that is what Gödel’s theorems point out: the final tautology of every metonymic (mathematical) construction or language, which leads to inconsistencies.

This consistent logic is the complete opposite of the logic that makes inconsistent use of time, inherent in the human unconscious. But the use of time is built on lack, not on positive things; it is based on denials and absences, and that is impossible to reflect in a machine, because perceiving lack requires a self-awareness that is acquired through absence.

The problem is that we are trying to build an intelligent system to replace our way of thinking, at least in information search. But what is special about the human mind is its use of time, which lets human beings reach a conclusion; therefore the halting problem, the stopping of calculation, does not exist in the human mind.

So all efforts toward the Semantic Web are doomed to failure a priori if the aim is to extend our human way of thinking into machines. Machines lack metaphorical speech, because they are only a mathematical construction, which will always be tautological and metonymic, and they lack the use of time, which is what leads to the conclusion, or “stop.”

As a demonstration, suppose it were possible to construct the Semantic Web as a language with capabilities similar to human language, which has the use of time. If we state this as a theorem, we can prove it false with a counterexample, and one is given by the particular case of the Turing machine and the “halting problem.”

Since the necessary and sufficient condition of the theorem is not fulfilled, we are left with the necessary condition: if a language uses time, it lacks formal logic; the logic used is inconsistent, and therefore it has no halting problem.

This is a necessary condition for the Semantic Web, but not a sufficient one. Therefore no machine, whether a Turing machine, a computer, or a device as random as a black body in physics, can deal with any language other than the language of mathematics, and that language is forced to meet the halting problem, a result of Gödel’s theorem.
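The halting-problem argument above can be made concrete with a small sketch (the generator model and the names below are my own illustration, not part of the original argument): a machine can confirm that a program halts by running it, but it can never confirm non-halting; past any step budget it can only give up.

```python
# We model a "program" as a Python generator: each yield is one computation step.
# run_for(prog, budget) can confirm halting, but can never confirm non-halting;
# it can only report "unknown" when the step budget is exhausted.

def counts_down(n):
    while n > 0:
        yield
        n -= 1          # halts after n steps

def loops_forever():
    while True:
        yield           # never halts

def run_for(make_prog, budget):
    """Return 'halts' if the program finishes within `budget` steps,
    otherwise 'unknown' -- we cannot conclude that it loops forever."""
    prog = make_prog()
    for _ in range(budget):
        try:
            next(prog)
        except StopIteration:
            return "halts"
    return "unknown"

print(run_for(lambda: counts_down(10), 1000))   # halts
print(run_for(loops_forever, 1000))             # unknown
```

No finite budget turns "unknown" into "loops forever," which is the semi-decidability the post leans on.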

From Logic to Ontology: The limit of the “Semantic Web” (originally in Spanish)

If you read the following articles on this blog:


Wikipedia 3.0: The End of Google (Spanish translation)


Consistent logic and completeness: Gödel’s incompleteness theorems (Spanish)

Logical consistency (Spanish)

Computability theory (computer science)

Gödel’s incompleteness theorems and computability theory: The halting problem

Inconsistent logic and incompleteness: LACANIAN LOGICS and Gödel’s incompleteness theorems (Spanish)

Jacques Lacan (Encyclopædia Britannica Online)

You will notice the internal relationships among them; the connecting thread is the title of this very post: “from logic to ontology.”

I will prove that no such construction exists at all, that it cannot be built from machines, and that this does not depend on the hardware or software used.

To refine the point: the limit of the Semantic Web is set not by the machines and/or biological systems that might be used to build it, but by the fact that the logic with which one attempts to construct it lacks the use of time. Formal logic is purely metonymic and lacks metaphor, and that is what Gödel’s theorems point out: the final tautology of every metonymic (mathematical) construction and/or language, which leads to contradictions.

This consistent logic is the opposite of the inconsistent logic that makes use of time, which is proper to the human unconscious. But the use of time is built on lack, not on the positive; it is based on negations and absences, and that is impossible to reflect in a machine, because perceiving lack requires the self-awareness that is acquired through absence.

The problem is that we intend to build an intelligent system to replace our thinking, at least in information searches. But the peculiarity of human thought is the use of time, which is what allows us to conclude; that is why the halting problem, the stopping of calculation, or in other words the absence of a moment of concluding, does not exist in the human mind.

So all efforts directed at the Semantic Web are doomed to failure a priori if the aim is to extend our human thinking into machines. Machines lack metaphorical discourse, since they are only a mathematical construction, which will always be tautological and metonymic, and they also lack the use of time, which is what leads to the cut, the conclusion, or the “stop.”

As a demonstration, a counterexample suffices: if we suppose it is possible to build the Semantic Web as a language with capabilities similar to human language, which has the use of time, then if that is stated as a general theorem, a single counterexample brings it down, and the counterexample is given by the particular case of the Turing machine and the “halting problem.”

Since the necessary and sufficient condition of the theorem is not met, we are left with the necessary condition: if a language has the use of time, it lacks formal logic, uses inconsistent logic, and therefore has no halting problem. That is a necessary condition for the Semantic Web, but not a sufficient one, and so no machine, whether a Turing machine, a computer, or a random device such as a black body in physics, can attain the use of any language other than the mathematical one, with its halting paradox, a consequence of Gödel’s theorem.



Evolving Trends

July 15, 2006

Why Net Neutrality is Good for Web 3.0

(this post was last updated at 10:00am EST, July 22, ‘06)


1. Telcos and Cable companies in the US are legally disallowed from blocking other carriers’ VoIP traffic. Last year, the FCC fined a North Carolina CLEC for doing that to Vonage.

2. Telcos and Cable companies have been in a turf war ever since cable companies started offering Internet access. This turf war escalated after cable companies started offering VoIP phone service, thus cutting deeply into the telcos’ main revenue stream.

3. The telcos’ response to the Cable companies’ entry into the phone market is to roll out their own TV services, based on IPTV (TV over IP), which are being rolled out at the speed of local and state government bureaucracies. IPTV would be carried on DSL lines, FTTC or FTTH.

4. The telcos’ response to Skype, Vonage, and Yahoo IM (with VoIP), as well as to YouTube and Google Video, which together threaten the telcos’ business model in both phone service and video delivery, was to push for a two-tiered Internet, in which the telcos, who happen to own the Internet backbones, would de-prioritize VoIP and video traffic from Skype, Vonage, YouTube, Google, Yahoo and others.
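To make point 4 concrete, here is a toy sketch (my own illustration; the packet names and the one-packet-per-tick scheduler are invented) of how a strict two-tier network de-prioritizes traffic: priority-tier packets always transmit first, so a best-effort VoIP packet that arrived first can still leave last, which shows up to the user as jitter and delay.

```python
import heapq

def schedule(packets):
    """packets: list of (arrival_time, tier, name); tier 0 = paid/priority,
    tier 1 = best effort. One packet is transmitted per time unit.
    Returns {name: departure_time}."""
    queue, departures = [], {}
    pending = sorted(packets)          # order by arrival time
    t, i = 0, 0
    while i < len(pending) or queue:
        # enqueue everything that has arrived by time t
        while i < len(pending) and pending[i][0] <= t:
            arrival, tier, name = pending[i]
            heapq.heappush(queue, (tier, arrival, name))  # lower tier first
            i += 1
        if queue:
            _, _, name = heapq.heappop(queue)
            departures[name] = t + 1   # transmitted during [t, t+1)
        t += 1
    return departures

packets = [(0, 1, "voip-1"), (0, 0, "paid-1"), (0, 0, "paid-2"), (1, 0, "paid-3")]
print(schedule(packets))  # voip-1 arrived at t=0 but departs after every paid packet
```

Even a later-arriving paid packet jumps ahead of the waiting VoIP packet; that queueing behavior is the whole point of the two-tier proposal.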

Net Neutrality

The telcos already charge the end user (when they serve the end user directly) and the cable companies (for use of their backbone when traffic has to travel outside the cable company’s own network).

So I just don’t see why the telcos would have to charge the cable companies, Google, YouTube, Yahoo, Vonage, Skype, MSN, etc., one more time.

The telcos’ backbones are not being used for free. They are paid for either by the telco’s own users (when the telco is the ISP) or by the cable companies and CLECs using those backbones, who pass the cost on to their users. So it’s us, the end users, who are paying for those backbones, not the telcos, as the telcos make it sound.

But it seems the telcos are saying that they aren’t charging enough for those backbones to ensure continued investment in growing backbone capacity. Instead of increasing what they charge for traffic, which would increase our monthly access fees, they’re suggesting charging the heavy content providers (e.g. YouTube, Google) for high-priority traffic (e.g. VoIP, video streams) and doing the same to the VoIP transport providers (e.g. Skype, Vonage).

Google, Skype, Yahoo, MSN and others, seeing how that would hurt their business interests and those of their users by forcing them to charge users for content and VoIP transport, have sponsored a Net Neutrality bill, which, to the best of my knowledge, has had a hard time making it through Congress.

Two Tier Internet

The telcos are struggling against the inevitable: that they will become a commodity industry, like the railroad or trucking industries. The telcos, who understand all of the above, do not want to be confined to the transport of traffic, because the transport business has become a commodity.

The same argument applies to VoIP transport providers. VoIP transport has become (or is becoming) a commodity business.

And if you ask me, “content” is also becoming a commodity business: the huge and ever-growing number of news, analysis, and entertainment blogs; the millions of people who contribute their home videos; the pirates who can always figure out ways to share copyrighted content; and the tons of yet-to-be-explored opportunities for user-generated content all mean that content is now officially commoditized. In fact, content is so commoditized that all it costs now is the small monthly fee users pay their ISP to access the net.

The Two-Tier Internet is an attempt by the telcos to attach artificially enhanced value to content once again by making content producers pay them for delivering content without jitter and delays. It is also an attempt to attach artificially enhanced value to transport by forcing VoIP transport providers like Skype, Vonage, and Yahoo to pay them to have their VoIP traffic transported without jitter and delays.

The Two-Tier Internet, i.e., the telcos’ attempt to attach artificially enhanced value to content and transport, seems anti-progress and is simply going nowhere.

However, the question is: who will pay to invest in new backbone capacity? Part of the answer is that content providers like Google are investing in building their own networks (between their data centers), and such efforts could conceivably grow into new backbone investments, with Google, Yahoo, AOL et al. investing in new network capacity.

If Content Has Become a Commodity, How Will Content and Transport Providers Deliver Genuine Enhanced Value?

The answer I propose: by embedding intelligent findability (forget keyword- and tag-indexed information; think Web 3.0!) into their ad-supported content layer.

So instead of “dumb search” (which gives us “dumb content”), we would embrace the Web 3.0 model of intelligent findability, i.e., allowing machines to use information intelligently to find what we’re looking for.
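As a rough sketch of the difference (the facts and property names below are invented for illustration, not a real Web 3.0 engine), a keyword match for “PhoneService” finds nothing, while a tiny inference over subject-predicate-object facts finds both VoIP providers:

```python
# "Dumb search" matches strings; "intelligent findability" walks facts.
facts = [
    ("Vonage",       "is_a",        "VoIPProvider"),
    ("Skype",        "is_a",        "VoIPProvider"),
    ("VoIPProvider", "subclass_of", "PhoneService"),
]

def instances_of(cls):
    """Find every subject that is_a `cls`, directly or via subclass_of."""
    subclasses = {cls}
    changed = True
    while changed:                      # transitive closure of subclass_of
        changed = False
        for s, p, o in facts:
            if p == "subclass_of" and o in subclasses and s not in subclasses:
                subclasses.add(s)
                changed = True
    return {s for s, p, o in facts if p == "is_a" and o in subclasses}

# Keyword search for "PhoneService" finds no provider at all...
keyword_hits = [s for s, p, o in facts if "PhoneService" in s]
# ...but the inference finds both:
print(keyword_hits, instances_of("PhoneService"))
```

The keyword engine never sees that a VoIP provider *is* a phone service; the inference step is exactly the “use information in an intelligent manner” part.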

No wonder Tim Berners-Lee (the father of the Web and the originator of the “Semantic Web,” which I popularized as Web 3.0 in the Wikipedia 3.0 article) has come out strongly in favor of net neutrality. That said, I’m not sure whether he would agree that the natural commoditization of “dumb content,” which would be assured under Net Neutrality, would get us to the Web 3.0 model of intelligent findability sooner than a two-tier Internet would. The latter, in my opinion, would slow the commoditization of “dumb content,” giving value-driven innovators less reason to explore the next layer of value in the content business, which I’m proposing is the Web 3.0 model of intelligent findability.


  1. Towards Intelligent Findability
  2. Wikipedia 3.0: The End of Google?
  3. Intelligence (Not Content) is King in Web 3.0

Posted by Marc Fawzi



net neutrality, two-tier internet, content, Web 3.0, VoIP transport, VoIP, IPTV, Semantic Web



July 17, 2006

Intelligence (Not Content) is King in Web 3.0


  1. There’s an enormous amount of free content on the Web.
  2. Pirates will always find ways to share copyrighted content, i.e. get content for free.
  3. There’s an exponential growth in the amount of free, user-generated content.
  4. Net Neutrality (or the lack of a two-tier Internet) will only help ensure the continuance of this trend.
  5. Content is becoming so commoditized that it only costs us the monthly ISP fee to access.

Conclusions (or Hypotheses)

The next value paradigm in the content business will be embedding “intelligent findability” into the content layer: by using a semantic CMS (like Semantic MediaWiki, which enables domain experts to build informal ontologies [or semantic annotations] on top of the information) and by adding inferencing capabilities to existing search engines. I know this represents less than the full vision for Web 3.0 as I’ve outlined it in the Wikipedia 3.0 and Web 3.0 articles, but it’s a quantum leap beyond the level of intelligence that exists today within the content layer. A semantic CMS can also be part of P2P Semantic Web inference-engine applications that would push centralized search models like Google’s a step closer to being a “utility” like transport, unless Google builds its own AI, which would then have to compete with the people’s P2P version (see: P2P 3.0: The People’s Google and Get Your DBin).
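As a minimal sketch of the semantic-CMS idea (modeled loosely on Semantic MediaWiki’s [[Property::Value]] annotation syntax; the page text and property names here are invented), the CMS extracts machine-usable triples from annotated wikitext that a search engine could later reason over:

```python
import re

# Domain experts annotate wikitext with property::value links; the CMS
# turns each [[prop::value]] annotation into a (page, prop, value) triple.
ANNOTATION = re.compile(r"\[\[(\w+)::([^\]]+)\]\]")

def extract_triples(page_title, wikitext):
    """Each [[prop::value]] annotation becomes (page, prop, value)."""
    return [(page_title, prop, value)
            for prop, value in ANNOTATION.findall(wikitext)]

page = "Berlin is the [[capital_of::Germany]] and has [[population::3500000]] people."
print(extract_triples("Berlin", page))
```

The payoff is that the page still reads as normal prose, while a query engine can now answer structured questions (e.g. “cities with population above 1M”) instead of matching keywords.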

In other words, “intelligent findability,” not content in itself, will be King in Web 3.0.


  1. Towards Intelligent Findability
  2. Wikipedia 3.0: The End of Google?
  3. Web 3.0: Basic Concepts
  4. P2P 3.0: The People’s Google
  5. Why Net Neutrality is Good for Web 3.0
  6. Semantic MediaWiki
  7. Get Your DBin

Posted by Marc Fawzi



net neutrality, two-tier internet, content, Web 3.0, inference engine, semantic-web, artificial intelligence, ai

