
Syntax Matters

Jeremy W. Sherman

5 min read

Dec 14, 2008


One of my coworkers, Scott, recently pointed me towards Jacob Kaplan-Moss’s comments on programming languages and thought. In “Syntactic Sugar”, Jacob addresses the canard that “all Turing complete languages differ solely on syntactic sugar.” He first concedes that this is technically true, in terms of reduction to machine instructions and register manipulation. At the same time, he says, this view ignores the important effect that qualitative differences in the syntactic structure of different programming languages have on the way we as programmers solve problems. In support of this, he introduces the Sapir-Whorf hypothesis from linguistics, which states that, rather than simply being a vehicle for thought, language in fact determines the limits of what is thinkable. He argues that this applies equally to programming languages, and concludes that “we’ll always be more productive in a language that promotes a type of thought with which we’re already familiar.”

Jacob’s initial concession is meant to let him get straight to the point: syntax matters, regardless of whether that syntax is purely “sugar” from the compiler writer’s point of view. You could also look at vocabulary (standard libraries) and semantics, but I’d like to look a bit more closely at Jacob’s argument and conclusions.

First off, let me say that I agree with what I see as the most important point of Jacob’s article: Syntax does matter. Unfortunately, I have to disagree with how he gets to that point and with the conclusion he draws from it.

Hidden throughout the article is the assumption that a programming language must be a Turing complete language. I believe this definition is overly restrictive. Pretty much any programming language you’ll pick up is Turing complete, but that’s not a necessity. TeX was originally not intended to be Turing complete; that it ended up so was due partly to lobbying by Guy L. Steele and partly to necessity (typesetting is not an easy problem!). Because it is Turing complete, you can (very painfully) abuse it to do things it was never meant to do. But even were it not, it would still have been a useful language for typesetting. I haven’t looked at other typesetting languages, such as eqn, roff, and pic, but it’s likely they were not Turing complete. These languages are used to program how a device should lay out and style text. They might also allow you to define and apply function-macros. Maybe you’d want to call these markup languages rather than programming languages, but that’s perhaps only because you’ve defined “programming language” to require Turing completeness.

I also don’t think I can accept his initial concession that all Turing complete languages are the same when reduced to the level of the machine. Even Turing complete languages differ much more fundamentally than in syntactic sugar alone. The very ideas of computation that Prolog, Haskell, and C (or assembler) bring to the table are fundamentally different. That these ideas are basically equivalent comes down to the Church-Turing thesis.
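
To make the contrast concrete, here’s a minimal sketch in C – the function names are mine, invented for illustration. A Haskell programmer naturally reaches for a definition by cases; a C programmer for a loop over mutable state; a Prolog programmer would instead state the relation between a number and its sum and let the engine search for it. By the Church-Turing thesis all of these compute the same thing, but each shapes the way you think about the problem:

```c
#include <stdio.h>

/* Definition-by-cases recursion: the shape Haskell encourages. */
long sum_recursive(long n) {
    return n <= 0 ? 0 : n + sum_recursive(n - 1);
}

/* Mutate-an-accumulator loop: C's native, von Neumann-style idiom. */
long sum_iterative(long n) {
    long acc = 0;
    for (long i = 1; i <= n; i++)
        acc += i;
    return acc;
}

int main(void) {
    /* Both compute the sum 1 + 2 + ... + n. */
    printf("%ld %ld\n", sum_recursive(10), sum_iterative(10)); /* 55 55 */
    return 0;
}
```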

At the same time, reduction to CPU or machine instructions is an insufficient reduction to establish equivalence. Even for the same machine, different compilers can produce different machine code given the same source. Even a single compiler can produce different machine code for the same source – this is why GCC has switches like -O0 through -O3 and -Os. Further, not all machine codes are alike. Look at how x86 was extended to accommodate the move from 16-bit to 32-bit computing (one very visible example: in the assembly language, a bunch of registers go from names like AX to EAX, where E stands for “extended”). There’s also the difference between RISC and CISC instruction sets.
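
You can see this for yourself with a small experiment. The sketch below is mine – the file and function names are invented – but the flags are GCC’s real optimization and assembly-output switches:

```c
/* sum.c -- one source file, several machine-code translations:
 *
 *   gcc -O0 -S sum.c   # sum.s contains a straightforward loop
 *   gcc -O2 -S sum.c   # GCC may eliminate the loop entirely,
 *                      # e.g. in favor of the closed form n*(n+1)/2
 *   gcc -Os -S sum.c   # optimizes for size rather than speed
 *
 * Same machine, same compiler, same source: different machine code. */
long sum_to(long n) {
    long acc = 0;
    for (long i = 1; i <= n; i++)
        acc += i;
    return acc;
}
```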

So we can extend Jacob’s claim: not only does syntax matter, but differences in syntax can even be caused by technically significant differences between programming languages. For the same reasons, though, I can’t embrace Jacob’s ultimate conclusion that, in light of the Sapir-Whorf hypothesis, “we’ll always be more productive in a language that promotes a type of thought with which we’re already familiar.”

Jacob sees programming languages as influencing programmer productivity. I would like to suggest a further programming-language parallel to Sapir-Whorf. If you look at the similarities between the von Neumann machine and “von Neumann languages” like FORTRAN, C, and the bulk of languages widely deployed in computing’s brief history, you’ll start to think that hardware and languages influence each other, and possibly not for the best. (See “Can Programming Be Liberated from the von Neumann Style?”, John Backus’s Turing Award lecture – Backus was one of the creators of FORTRAN – for more on the limitations introduced by this style.)

Hardware influences the programming languages available to us as programmers. The type of algorithmic thought we’re accustomed to is then determined by the programming languages we’re most familiar with, so hardware transitively determines our problem-solving approach. When we add this hardware-language parallel to the mix, hardware becomes, through historical accident, the driving factor behind programmer productivity (or the lack thereof).

Jacob’s conclusion that familiarity with a language’s “type of thought” guarantees the highest programmer productivity relies on the assumption that languages are fundamentally equal, with productivity skewed only by each language’s similarity to our existing problem-solving approach. But some languages are rooted in types of thought that might provide increases in productivity large enough to be worth the trouble of learning to think a bit differently. That’s what Jane Street Capital is betting on in its use of Objective Caml in preference to any other language. (See Wadler’s brief remarks on a paper discussing this, “Caml Trading: Experiences with Functional Programming on Wall Street.” I don’t know where I read this, but I also recall something about it taking a company about 1-2 weeks to retrain Java/C++ programmers as (here my memory grows fuzzier) OCaml/Haskell/Erlang programmers.) That’s also what the computer science community as a whole bet on when it made the move from navigational databases to relational databases. People still have trouble grokking the relational approach, but we made the move nevertheless, because working with relational databases is in many cases far more productive than working with navigational databases.

Being most comfortable working in one language shouldn’t stop you from taking a long, hard look at what else is waiting in the wings and at whether you might be much more productive in an utterly different sort of language. If nothing else, the experience will be broadening. Jacob’s been looking into Scheme – what have you been doing?

