A guest post by Hannah Little
Introduction
I realised when writing this essay that it could be read in a couple of ways. Firstly, it could be read as a (by now highly clichéd) anti-generative grammar rant, but that is not what I set out to achieve. The point is to argue for a much more scientific approach to syntax than has been usual in recent years. I am not disputing that many syntacticians are already stringent in this respect, nor am I claiming that a scientific approach is impossible when working as a generativist (far from it), but certain considerations have to be made before continuing down the syntactic path.
I would also like to take the opportunity to say that this is aimed mostly at undergraduates doing linguistics. This is partly because I don't want to patronise postgrads or postdocs, who know all of this already, but also because I have yet to find a bachelor's degree in linguistics which isn't a BA, and as a result scientific methodology and philosophy of science are pretty thin on the ground. My point here is that they shouldn't be.
I'd also like to mention that, whilst I put syntax in the title and all of the examples are syntactic, there is no reason why the methods and phenomena I've addressed here cannot be applied to other areas of linguistics.
Physics Envy
A lot of linguists, Chomsky among them, when trying to explain the phenomenon of principles and parameters, do so using the metaphor of a switchboard: there is a set of parameters in the head which are switched on or off depending on the language input. The most cited example is whether a language is left-headed or right-headed. The argument is that the switch could go one of two ways and, depending on the language which is input, will be set to either right-headed (if you speak a language like Hungarian) or left-headed (if you speak a language like English). Now, anyone who has ever studied either English or Hungarian will know that it isn't that simple; there are hundreds of counter-examples within a language with regard to head direction, which is why the switchboard has always seemed to me a pretty oversimplified analogy. For ages I was trying to come up with a better one, and it was only when reading Richard Feynman's 'The Pleasure of Finding Things Out' (1999) that it struck me that a much better analogy is the game of chess, which Feynman uses in the book to explain our understanding of nature. He explains:
One way, that's kind of a fun analogy in trying to get some idea of what we're doing in trying to understand nature, is to imagine that the gods are playing some great game like chess, let's say, and you don't know the rules of the game, but you're allowed to look at the board, at least from time to time, in a little corner, perhaps, and from those observations you try to figure out what the rules of the game are. You might discover after a bit for example, that when there's only one bishop around on the board that the bishop maintains its square colour. Later on you might discover the law for the bishop as it moves on the diagonal which would explain the law that you understood before - that it maintained square colour - and that would be analogous to discovering one law and then finding a deeper understanding of it. Then things can happen, everything is going good, you've got all the laws, it looks very good, and then all of a sudden some strange phenomenon occurs in some corner, so you begin to investigate that - it's castling, something you didn't expect. We're always, by the way, in fundamental physics, always trying to investigate those things in which we don't understand the conclusions. After we've checked them enough, we're ok.
-Richard Feynman (1999:13-14)
This, to me, reads very much like the search for the rules of Universal Grammar, the set of internal rules which explains all phenomena in language. This is, for the most part, why syntacticians have 'physics envy'. (I stole this term from Simon Kirby, who claimed that whilst syntacticians had 'physics envy', he and other evolutionary linguists had 'biology envy'.)
The main point I took from this is that it should be the anomalies, the things which don't fit the theory, that we focus on. It seems to me that in current syntax papers the anomalies are vaguely explained away using external factors at best, or completely ignored at worst.
Andrew Carstairs-McCarthy wrote of this seeming parallel between the study of physics and the study of language in his book 'The Evolution of Morphology' (2010). He makes the point that:
It would help cosmologists tremendously if there were other universes that they could compare with this one. In the absence of such universes (or, at least, in the absence of any access to them), cosmologists have to adopt a different research strategy. They have to devise thought experiments, asking themselves: 'In order for as many as possible of the currently observed characteristics of the universe to fall neatly into place, what assumptions do we need to make about its origin and about fundamental laws governing it?'
-Andrew Carstairs-McCarthy (2010:3)
This again makes the point, perhaps more directly this time as it comes from a book written by a linguist about linguistics, about the difficulties which linguists (and physicists) face: there are no alien languages out there with which we can compare human language, and because of that, strategies of abductive reasoning must be implemented as opposed to the traditional scientific approach of deductive reasoning.
Deductive reasoning
The most productive scientific approach in most fields, and the one most people would assume you mean when the scientific method is mentioned, is deductive reasoning. It runs as follows:
The hypothesis to be tested is p. The proposition p has as a consequence the claim that if q is true, then r must be true too. Therefore in appropriate experimental conditions we arrange that q holds, and then check whether r holds also. If we observe r, then the experiment tends to confirm the hypothesis p, whereas if we observe not-r, the experiment disconfirms p.
(From Carstairs-McCarthy, 2010:3-4)
This is the method most often used in syntax, and its use can be summarized as follows: p is a hypothesized principle of universal grammar, q is the proposition that a set of items instantiating that principle are candidates for sentencehood, and r is a claim which can be subjected to the grammaticality judgments of native speakers. It is then possible to claim that if q and r hold then p is confirmed, whereas if q holds and r does not then p is disconfirmed. Note the asymmetry here: confirming instances only ever corroborate p, whereas a single genuine counter-example refutes it. This form of reasoning is, for the most part, used well within syntax. What isn't done well is the willingness to drop the initial hypothesis when it is confirmed in one language and then, later, disconfirmed in another. Taking my example from earlier, it may be the case that in a verb-final language adpositions are always realized to the right of the noun (as postpositions), and after confirming this with native speakers one could hypothesize that if verbs take a right-headed position in a language then adpositions will take the same right-headed position. This is a very neat explanation for languages such as Hungarian, which is verb-final and for the most part uses postpositions. The hypothesis runs into difficulties, however, when one considers postpositions such as 'együtt', which can appear to the left or the right of the noun it occurs with (see the sketch below the example):
(1) a. Boris-sal együtt
    b. együtt Boris-sal
       Boris-with together
       “together with Boris”
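To make the deductive schema concrete, here is a minimal sketch in Python of how the hypothesis fares. The grammaticality judgments are illustrative stand-ins rather than a real elicitation, and 'után' and 'előtt' are extra examples of my own choosing, not data from the discussion above; the point is simply that a single grammatical adposition-before-noun order (q and not-r) is enough to refute p.

# Hypothesis p: in this verb-final language, every adposition follows its noun.
# Each entry: (adposition, noun+adposition order OK?, adposition+noun order OK?)
# The judgments below are illustrative, not the result of a real survey.
judgments = [
    ("után",   True, False),  # 'after':    'Boris után' fine, '*után Boris' out
    ("előtt",  True, False),  # 'before':   'Boris előtt' fine, '*előtt Boris' out
    ("együtt", True, True),   # 'together': both 'Borissal együtt' and 'együtt Borissal' occur
]

def hypothesis_survives(judgments):
    """Deductive test: any grammatical adposition+noun order refutes p."""
    for adposition, post_ok, pre_ok in judgments:
        if pre_ok:
            return False, adposition
    return True, None

survives, counterexample = hypothesis_survives(judgments)
print("p corroborated so far" if survives else f"p refuted by '{counterexample}'")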
Anomalies like this are then attributed to external factors such as emphasis, or put down to unexplained syntactic phenomena. I'm guilty of this myself. Why is it such a stretch to pronounce a hypothesis disproved and come up with a better one? Is it because humans are haunted by the phenomenon known as confirmation bias (http://en.wikipedia.org/wiki/Confirmation_bias)? Or is it because we're too hung up on how neat and beautiful generative grammar seemed when it was first devised to want to bin it? I don't think considering other options means forgetting the roots of linguistics or denying that Chomsky is pretty good.
Even worse than the pitfall of confirmation bias is that of making the hypothesis fit the results. I refer you here to Ben Goldacre's Bad Science:
Here is an analogy. Imagine I am standing near a large wooden barn with an enormous machine gun. I place a blindfold over my eyes and, laughing maniacally, I fire off many thousands and thousands of bullets into the side of the barn. I then drop the gun, walk over to the wall, examine it closely for some time, all over, pacing up and down: I find one spot where there are three bullet holes close to each other, and then I draw a target around them, announcing proudly that I am an excellent marksman. You would, I think, disagree with both my methods and conclusions for that deduction.
- Ben Goldacre (2009:258)
The linguistic equivalent of this would be to find 2 postpositions in English (see below) and proclaim that all adpositions in English were right-headed.
- 2 miles away
- 2 years ago
This is a very transparent example, so no one would actually try to make this claim, but the warning is there.
Abductive reasoning
Let us now turn to 'abductive reasoning', or 'inference to the best explanation', as first put forward by the logician Charles Sanders Peirce:
The hypothesis to be tested is p. If p is true then, on the basis of the other well-established assumptions, we will expect to observe q, r, s, t ... as well. If p is false, there is no obvious connection between q, r, s, t ... Yet q, r, s, t ... are all true. The likelihood that p is true is therefore increased, inasmuch as it explains the otherwise apparently random coexistence of q, r, s, t ...
This is the 'different research strategy' which Carstairs-McCarthy was referring to above, and it is one which I believe is very underused with regard to syntax, perhaps because it works on the probability of a hypothesis being true rather than simply declaring it true or false on an experiment-by-experiment basis. In order to implement this method, though, some assumptions need to be confirmed first using statistical analysis; for example, the connections between syntactic phenomena such as the headedness of adpositions and word order are currently, from what I have seen, the product of conjecture and anecdotal evidence.
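As a sketch of what this could look like in practice, abductive reasoning can be cashed out as a simple Bayesian update: each prediction that comes true, and that would not be particularly expected if p were false, raises the probability of p. The hypothesis, the observations q, r, s, t and all of the numbers below are invented purely for illustration; nothing here is a real typological analysis.

def update(prior, p_obs_given_p, p_obs_given_not_p):
    """One Bayesian update: return P(p | observation)."""
    evidence = p_obs_given_p * prior + p_obs_given_not_p * (1 - prior)
    return (p_obs_given_p * prior) / evidence

# p: a hypothesised connection, e.g. 'verb-final languages have postpositions'.
# q, r, s, t: independent predictions that turn out true, e.g. the pattern
# holding in successive samples of languages. Likelihoods are made up.
prob_p = 0.5  # start agnostic
for observation in ["q", "r", "s", "t"]:
    prob_p = update(prob_p, p_obs_given_p=0.9, p_obs_given_not_p=0.5)
    print(f"after observing {observation}: P(p) = {prob_p:.2f}")

The particular numbers don't matter; the point is the shape of the reasoning. Evidence accumulates gradually rather than a single experiment delivering a yes/no verdict, and this is exactly where statistical work on headedness and word order across a proper language sample would slot in.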
Conclusion
We can't see the internal structure of language, just as we can't see the rules of the game in Feynman's chess example, but through observation of what is 'true', statistical analysis, and logically sound methods, progress can be made and our 'physics envy' satisfied.
References
For a good example of abductive reasoning being used in linguistics see:
Carstairs-McCarthy, Andrew (2010) The Evolution of Morphology. OUP, Oxford.
For funny insightful philosophy of science stuff:
Feynman, Richard P. (1999) The Pleasure of Finding Things Out. Perseus Books, New York.
For an easy to read book about statistics, experiment design and the media's portrayal of science:
Goldacre, Ben (2009) Bad Science. Fourth Estate, London.

About the author

Hannah spent her formative years in Stockton-on-Tees in the North-East of England. She was a student at the University of York from 2007-2010, studying for a BA in English Language and Linguistics, and whilst there she completed a dissertation on postpositions in Hungarian. She is starting an MSc in The Evolution of Language and Cognition at the University of Edinburgh in September (2010). She is 21 and a geek of the highest order.