From Wikipedia, the free encyclopedia
Jerry A. Fodor's language of thought (LOT) hypothesis, or LOTH, states that cognition and cognitive processes are only 'remotely plausible' when expressed as computations over representational systems. Fodor uses empirical data drawn from linguistics and cognitive science to argue for internal representations from a philosophical vantage point. Like other languages, the LOT contains its own syntax as well as semantics, which have a causal effect on the properties of mental representations. Fodor believed that there was merit in characterizing the representational system that one was 'provisionally committed' to, and that such research strategies, if not necessarily true, at least carry an 'air of prima facie plausibility.' The hypothesis holds that thoughts are represented in a "language" (sometimes known as mentalese) which allows complex thoughts to be built up by combining simpler thoughts in various ways. It is clear from the biology of the brain that these mental representations are not present in the same way as symbols written on paper; rather, the LOT is supposed to exist at the cognitive level, the level of thoughts and concepts.
The language of thought hypothesis applies to thoughts that have propositional content, and as such is not meant to describe everything that goes on in the mind. The aim of the theory is to describe accurately how our thoughts relate to one another by providing a semantic structure for them. In its most basic form, the theory states that thought follows the same rules as language: thought has a syntax. For the theory to accomplish this, it must claim that the linguistic tokens used in the mental language are simple concepts; these simple concepts, taken together with logical rules, can then be manipulated to form significantly more complex concepts.
Once the primary claim has been made, namely that thought has tokens which follow linguistic rules, LOTH appeals to the representational theory of thought to explain what those tokens actually are and how they behave. The representational theory of thought claims that each propositional attitude stands in a unique relationship with the subject or subjects of the attitude. In other words, there must be a mental representation which stands in some unique relationship with the subject of the representation and has specific content. The theory also contends that these mental representations can be connected causally to allow for complex thought.
Once this has been established, the theory needs to hold that these representations are part of a system of representation which follows combinatorial syntax rules. This means that complex thoughts are built from basic thoughts, and that complex thoughts get their semantic content from the content of the basic thoughts and the relations they hold to each other. The theory must also hold that these thoughts can be combined only in ways that do not violate the syntax of thought. The theory relies on a version of functionalist materialism, which holds that mental representations are actualized and modified by the individual holding the propositional attitude. In the case of human beings, this means that the brain alters our thoughts in certain ways depending on the brain's chemistry.
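The combinatorial picture above can be made concrete with a minimal sketch. This is an illustration only (all names and the toy semantics are hypothetical, not Fodor's): complex "thoughts" are built from atomic concept symbols by a syntax rule, and the content of the complex thought is determined compositionally from the content of its parts.

```python
# A toy combinatorial system of representation (illustrative names only).
# Atomic concepts are mapped to their semantic content in a toy world.
CONCEPTS = {
    "john": "John",             # an individual
    "tall": {"John", "Mary"},   # a property: the set of tall things
}

def predicate(prop, individual):
    """Syntax rule: combine a property concept with an individual concept."""
    return ("PRED", prop, individual)

def evaluate(thought):
    """Semantics: the content of a complex thought is a function of the
    content of its constituents and the way they are combined."""
    tag, prop, individual = thought
    assert tag == "PRED"
    return CONCEPTS[individual] in CONCEPTS[prop]

# "John is tall" is built from the simpler concepts JOHN and TALL:
john_is_tall = predicate("tall", "john")
print(evaluate(john_is_tall))  # True in this toy world
```

The point of the sketch is only that the same small stock of atoms and rules yields an open-ended set of evaluable complex representations.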
For example, the thought that "John is tall" is clearly composed of at least two sub-parts: the concept of John (the person) and the concept of tallness. The manner in which these two sub-parts are combined could be expressed in first-order predicate calculus as T(j). This expression states that the predicate 'T' ("is tall") holds of the entity 'j' (John). A fully articulated proposal for what a language of thought might look like would have to be more complex than a simple extensional logical representation such as this, since it would have to take into account complex aspects of human thought such as quantification and propositional attitudes (the various attitudes people can have toward statements; for example, I might believe that John is tall, but on the other hand I could merely suspect that John is tall).
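The contrast between the bare extensional representation and an embedded propositional attitude can be written out; the following rendering is illustrative only (the symbols T, j, i, and the operator Bel are notational choices, not Fodor's own notation):

```latex
% Illustrative notation: T = "is tall", j = John, Bel = belief operator.
\underbrace{T(j)}_{\text{``John is tall''}}
\qquad
\underbrace{\mathrm{Bel}\bigl(i,\, T(j)\bigr)}_{\text{agent } i \text{ believes that John is tall}}
```

A merely-suspecting attitude would use a different operator over the same embedded content, which is one reason a full language of thought must go beyond extensional logic.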
The LOT hypothesis has wide-ranging significance for a number of domains in cognitive science. It implies a strongly rationalist model of cognition in which many of the fundamentals of cognition are innate, and challenges eliminative materialism and connectionism.
 Reasons for Supporting Language of Thought
Looking at Fodor's works as well as others', there appear to be four main reasons why the language of thought forms a compelling and well-structured argument:
1. There can be no higher cognitive processes without mental representation.
2. The argument from internal representation.
3. The methodological argument.
4. Supporting arguments from other authors.
1. Fodor presents his first argument, that there can be no higher cognitive processes without mental representation, in his book The Language of Thought. There Fodor draws arguments and evidence from contemporary cognitive psychology. He states that the only psychological models that seem plausible represent these processes as representational, and that computational thought needs a medium in which to compute: a representational system. He goes on to argue that even if this view is only remotely plausible, it is still better than the other explanations that have been put forth.
2. Fodor's argument from internal representation continues on from his theory of cognitive processes, which is well rooted in cognitive psychology. Fodor states that, for the reasons above, we must attribute a representational system to organisms if cognition and thought are to occur. This applies to ourselves as well, as we communicate with each other and with our environment. Internal representation is how we see ourselves moving about and being involved in the world around us, and for our higher cognitive processes to function there is no other plausible theory to counter these arguments.
3. The methodological argument states that there is a causal relationship between our intentions and what we do. This supports the language of thought because we all seem to think in a seemingly logical order that fits the theory. Because mental states are structured in a way that causes our intentions to manifest themselves in what we do, there is a connection between how we view the world and ourselves and what is done by our actions. We can all recognize actions occurring in a sequence, whether they involve multiple actions or just one, and this too is an argument for the language of thought.
4. One author who argues in support of the language of thought is Tim Crane in his book The Mechanical Mind. Crane returns to the question of why we should believe that the vehicle of mental representation is a language, and states that while he agrees with Fodor, his method of reaching that conclusion is very different. Crane appeals to reason: our ability as humans to reach a rational decision from the information given. Association of ideas can lead from one idea to others that have apparently no connection except to the thinker. Fodor agrees that such free association goes on, but holds that it proceeds in a systematic, rational way that can be shown to work with the language of thought. On Fodor's view, one must look at free association in a computational manner; seen in this light, it follows a certain manner that can be broken down and explained with the language of thought.
Other philosophers, following Ludwig Wittgenstein, have argued that our public language is used as our mental language; that a person who speaks English thinks in English. Others contend that people who do not know a public language (e.g. babies, aphasics) can think, and that therefore some form of mentalese must be present innately.
LOTH accepts folk psychology as a correct way of explaining the mind, on which mental states are causally efficacious, representational, computational, and rule-governed. It therefore makes sense to break down objections to LOTH by the specific theoretical points in contention.
 Refutations of Representational Theory of Mind
Philosophers have objected to some of the theoretical points that are necessary for accepting Representational theory of Mind. These points begin with the refutation of folk psychology. Some who accept folk psychology object to the idea that mental states are causal. Some who accept both folk psychology and the causal efficacy of mental states deny that mental states are representational.
- The first theoretical point is whether to accept folk psychology as a tenable guide to the way the mind works. Folk psychology, or the theory-theory, explains human behavior in terms of mental states like beliefs, desires, and hopes.[1] If LOTH is correct, then folk psychology is also correct, because LOT explains how and why these mental states cause behaviors. Those who do not accept folk psychology deny that there are any such mental states at all. These views include eliminative materialism, held by Paul Churchland,[2] and the intentional stance theory, held by Daniel Dennett.[3] Eliminative materialism is the view that psychological states correspond 1:1 with neurophysiological states in the brain. Intentional stance theory posits that behavior is explained by the mental states that a rational agent ought to have, given the behavior of that agent. For example, if a bird flies away when a cat is coming, the intentional stance explanation is that the bird believes the cat will eat it, and desires not to be eaten. The idea is that the intentional stance is able to predict patterns of behavior in ways that neurophysiology cannot.[4]
- The second theoretical point needed to accept LOTH is the notion that mental states are causally efficacious. This is where behaviorists like Gilbert Ryle object to any representational theory of mind. Ryle held that there is no break between the cause of a mental state and the effect of behavior. Rather, Ryle proposed that people act in some way because they are disposed to act in that way. For example, some person X is disposed to march just in case he marches when he is not tied up; he has the propensity to act in the way that is called "marching."[5]
- The third theoretical point is that these causal mental states are representational. One objection to this point comes from John Searle in the form of biological naturalism, a nonrepresentational theory of mind that accepts the causal efficacy of mental states. Searle is a monist, but divides intentional states into low-level brain activity and high-level mental activity. On his view, it is these lower-level, nonrepresentational neurophysiological processes that have causal power in intention and behavior, rather than some higher-level mental representation.[6]
 Objections to LOTH and the Contemporary Debate within the Representational Theory of Mind
- The first objection challenges LOTH's explanation of how sentences in natural languages get their meaning. On that view, "Snow is white" is TRUE if and only if P is TRUE in the LOT, where P means the same thing in the LOT as "Snow is white" means in the natural language. But if the meaning of natural-language sentences is explained in terms of sentences in the LOT, then the sentences of the LOT must get their meaning from somewhere else. There thus seems to be an infinite regress of explanations of how sentences get their meaning. This regress is often called the homunculus regress[7] (see homunculus argument).
- Homunculus regress: This objection starts by assuming that LOT sentences get their meaning in the same way that natural-language sentences do. Sentences in natural languages get their meaning from their users (speakers, writers).[8] For example, the sentence S, "The cat is black," gets its meaning from the speaker of S, who may well have meant that some unique, particular cat is a particular color: black. If sentences in natural languages get their meaning from the way in which they are used in writing or speaking, then sentences in the LOT get their meaning from the way in which they are used by thinkers. Perhaps thinkers employ something called a homunculus: a brain function that uses sentences in the LOT much as speakers use natural languages, so that LOT sentences get their meaning from the way the homunculus uses them. The problem is that either the sentences used by the homunculus get their meaning from how they are used, or they get their meaning in some other way. If the former, then there must be some smaller homunculus to interpret the sentences of each larger homunculus; and since the smaller homunculus must be interpreted in the same way, this reasoning leads to an infinite regress of homunculi.[9]
- Daniel Dennett accepts that homunculi may be explained by other homunculi, but denies that this yields an infinite regress. Rather, he posits that each explanatory homunculus is "stupider," or more basic, than the homunculus it explains. Dennett says that this regress of homunculi is not infinite but bottoms out at a level so simple that it does not need interpretation.[10]
- As John Searle points out, even if the homunculus regress were blocked by simpler homunculi giving meaning to more complicated ones, it still follows that the bottom-level homunculi are manipulating some sort of symbols, and any symbol manipulation needs some way of deriving what those symbols mean.[11]
- The second objection within the representational theory of mind concerns acting in accordance with a rule. LOTH posits that the mind has tacit knowledge of certain rules: knowledge that conscious beings have but are not necessarily aware of. The rules in question are the logical rules of inference and the linguistic rules of syntax (sentence structure) and semantics (concept or word meaning).[12]
- Proponents of this objection argue that the mind may not actually be following these rules volitionally, but may merely conform to them in virtue of some physical law. For example, the planets conform to the laws of gravity, yet they do not know those laws. If LOTH cannot show that the mind knows that it is following the particular set of rules in question, then the mind is not computational, because it is not governed by computational rules.[13]
- To hold this view, many deny that there is any way to interpret tacit knowledge of these rules. Tacit knowledge seems to imply a homunculus-like regress problem: how is this tacit knowledge interpreted? Perhaps it is interpreted by a homunculus; but we have already seen the problem with that explanation.[14]
- This objection also points to the apparent incompleteness of this set of rules in explaining behavior. After all, many conscious beings behave in ways that are contrary to the rules of logic. Those who contest the role these rules play in thinking point out that instead of updating the rules to explain this outlying behavior, LOTH resorts to explaining the behavior as irrational. Yet this irrational behavior is not accounted for by any rules, showing that there is at least some behavior that does not act in accordance with the rule set.[15]
- The third objection within the representational theory of mind has to do with the relationship between propositional attitudes and representation. The idea is that a representation must be directed at something, and some other thing, a propositional attitude, must be directed at it.
- This objection holds that propositional attitudes are possible without explicit representation, making the causal role of mental states tenuous, because representations of rules and objects must be representational in order to be computational, as stated earlier. One example of how propositional attitudes can exist without representation comes from Daniel Dennett, who points out that a chess program can have the attitude of "wanting to get its queen out early" without having a representation or rule that explicitly states this.
- The reverse also holds: representations without propositional attitudes occur, again calling into question the causal role of mental states. For example, Dennett points out that a multiplication program on a computer computes in the machine language of 1s and 0s, yielding representations that do not correspond 1:1 with any propositional attitude.[16]
 The Connectionism/Classicism Debate
- Connectionism is a more recent, applied approach to artificial intelligence than LOTH. Connectionist modelers very often accept much of the same theoretical framework that LOTH accepts: that mental states are computational and causally efficacious, and often also folk psychology and the view that mental states are representational. However, connectionism is an alternative to LOT in the context of building thinking machines. Connectionism touts an ability to make thinking machines, most often realized as neural networks. A neural network consists of an interconnected set of nodes (units), describes mental states in numeric terms as activation values over those units, and is able to create memory by modifying the strength of the connections between them over time. Two notable features of neural networks are the interpretation of units and the learning algorithm: units can be interpreted as neurons or groups of neurons, and a learning algorithm changes the connection weights over time, allowing networks to modify their connections.[17]
- Connectionist neural networks are able to change over time via their activations. An activation is a numerical value, representing some aspect of a unit, that the unit has at any given time. Activation spreading is the propagation, over time, of activation to the other units connected to the activated unit.[18]
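The units, weights, activation spreading, and learning just described can be sketched in a few lines. This is a minimal illustration under common textbook assumptions (logistic squashing, a simple Hebbian weight update); the class and method names are hypothetical, not taken from any particular connectionist system.

```python
import math

class Network:
    """A toy connectionist network: units with numeric activations,
    weighted connections, spreading activation, and Hebbian learning."""
    def __init__(self, n_units):
        self.activation = [0.0] * n_units
        # weights[i][j]: strength of the connection from unit i to unit j
        self.weights = [[0.0] * n_units for _ in range(n_units)]

    def spread(self):
        """Activation spreading: each unit's new activation is a squashed
        weighted sum of the activations of the units connected to it."""
        n = len(self.activation)
        new = []
        for j in range(n):
            net = sum(self.activation[i] * self.weights[i][j] for i in range(n))
            new.append(1.0 / (1.0 + math.exp(-net)))  # logistic squashing
        self.activation = new

    def hebbian_step(self, rate=0.1):
        """Learning: strengthen connections between co-active units,
        so 'memory' is stored in the modified connection strengths."""
        n = len(self.activation)
        for i in range(n):
            for j in range(n):
                if i != j:
                    self.weights[i][j] += rate * self.activation[i] * self.activation[j]

net = Network(3)
net.activation = [1.0, 1.0, 0.0]
net.hebbian_step()        # units 0 and 1 were co-active, so the
print(net.weights[0][1])  # connection between them is strengthened
```

Note that no symbolic rule is stored anywhere: all the network "knows" is distributed across its weights, which is exactly the feature the classicism debate below turns on.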
- Since connectionist models can change over time, supporters of connectionism claim that it can solve the problems that LOTH brings to classical AI. These are the problems showing that machines with an LOT-style syntactic framework are often far better than human minds at solving certain problems and storing data, yet far worse at things the human mind is quite adept at, such as recognizing facial expressions and objects in photographs and understanding nuanced gestures.[19]
- Concerned that connectionism is leading AI down the wrong path, Fodor defends LOTH against connectionism by arguing that any connectionist model is just a realization or implementation of the classical computational theory of mind and therefore necessarily employs a symbol-manipulating LOT.
- Fodor and Zenon Pylyshyn use the notion of cognitive architecture in their defense. A cognitive architecture is the set of basic functions of an organism with representational input and output. They argue that, in essence, cognition is a set of functions with representational input and output, and that it is a law of nature that cognitive capacities are productive, systematic, and inferentially coherent: a cognizer can produce and understand any sentence of a certain structure if it can understand one sentence of that structure. For example, by systematicity, a cognizer should be able to understand and produce the sentence "John loves Mary" if it is able to understand and produce the sentence "Mary loves John."[20] A cognitive model must have a cognitive architecture that explains these laws and properties in a way compatible with the scientific method. Fodor and Pylyshyn say that a cognitive architecture can explain the property of systematicity in particular only by appealing to a system of representations. They argue that either connectionism employs a cognitive architecture of representations or it does not; if it does, then connectionism uses an LOT, and if it does not, then it is empirically false. This brings them to the conclusion that either connectionism is false, or it uses an LOT.[21]
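The systematicity point can be illustrated with a toy combinatorial grammar (the rule and names are hypothetical, chosen only to mirror the "John loves Mary" example): because sentences are generated by a rule over interchangeable constituents, any system that can produce one arrangement can automatically produce the others.

```python
from itertools import permutations

# One syntactic rule, NAME VERB NAME, over a small lexicon.
names = ["John", "Mary"]
verb = "loves"

def sentences():
    """Everything the rule can generate from the lexicon."""
    return {f"{a} {verb} {b}" for a, b in permutations(names, 2)}

print(sentences())
# The set contains both "John loves Mary" and "Mary loves John":
# the capacity to produce one guarantees the capacity to produce the other.
```

Fodor and Pylyshyn's challenge is that a network like the one sketched earlier gets no such guarantee for free, whereas any combinatorial (LOT-style) architecture does.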
Connectionists have responded to Fodor and Pylyshyn by:
- Denying that cognition is essentially a function that uses representational input and output in favor of eliminative connectionism.
- Accepting that Connectionism is merely an implementation of LOT.
- Denying that systematicity is a law of nature that rests on representation.
- Denying that connectionism uses an LOT.[22]
 Empirical Testing of Mental Imagery and LOTH
Since LOTH came to be, it has been empirically tested. Not all experiments, however, have confirmed the hypothesis. The following experiments add to the debate over how exactly the mind manipulates mental images.
- In 1971, Roger Shepard and Jacqueline Metzler tested Pylyshyn's hypothesis that all symbols are understood by the mind in virtue of their fundamental mathematical descriptions. Shepard and Metzler's experiment consisted of showing subjects a 2-D line drawing of a 3-D object, and then the same object at some rotation. According to Shepard and Metzler, if Pylyshyn were correct, then the time it took to identify the object as the same object would not depend on the degree of rotation. Their finding was that the time it took subjects to recognize the object changed in proportion to its degree of rotation, disconfirming the hypothesis.[23]
- There also seems to be a connection between prior knowledge of what relations hold between objects in the world and the time it takes subjects to recognize the same objects. For example, subjects are more likely to fail to recognize a hand that is rotated in a way that would be physically impossible for an actual hand. It has since been empirically tested and supported that the mind may better manipulate mathematical descriptions as topographical wholes. These findings have illuminated what the mind is not doing in terms of how it manipulates symbols.[24]
 References
- Ravenscroft, Ian. Philosophy of Mind. Oxford University Press, 2005. p. 91.
- Fodor, Jerry A. The Language of Thought. Crowell Press, 1975. p. 214.
1. Crane, T. The Mechanical Mind: A Philosophical Introduction to Minds, Machines and Mental Representation, 2nd ed. New York: Routledge, 2003.
2. "Eliminative materialism," July 7, 2008.
3. "Intentional stance," http://en.wikipedia.org/wiki/Intentional_stance, July 7, 2008.
4. "Intentional stance," http://en.wikipedia.org/wiki/Intentional_stance, July 7, 2008.
5. "Gilbert Ryle," July 7, 2008.
6. "Biological naturalism," July 7, 2008.
7. Crane, The Mechanical Mind.
8. Crane, The Mechanical Mind.
9. Crane, The Mechanical Mind.
10. Crane, The Mechanical Mind.
11. Crane, The Mechanical Mind.
12. Crane, The Mechanical Mind.
13. Crane, The Mechanical Mind.
14. Aydede, Murat. "Language of Thought Hypothesis," 2004.
15. Crane, The Mechanical Mind.
16. Aydede, "Language of Thought Hypothesis," 2004.
17. "Connectionism," July 7, 2008.
18. "Connectionism," July 7, 2008.
19. Crane, The Mechanical Mind.
20. Garson, James. "Connectionism," 2007.
21. Aydede, "Language of Thought Hypothesis," 2004.
22. Aydede, "Language of Thought Hypothesis," 2004.
23. "Mental Image," July 7, 2008.
24. "Mental Image," July 7, 2008.
 External links
- The Language of Thought Hypothesis at The Stanford Encyclopedia of Philosophy.
- Language of Thought – By Larry Kaye.
- Revealing The Language Of Thought – By Brent Silby
- Jerry Fodor Homepage
- The Language Of Thought Hypothesis: State Of The Art – By Murat Aydede