The difference between syntactic analysis and code generation

I can parse the text of the paragraph below very quickly. The author uses simple grammar. However, it is taking me hours of study (following footnotes) to make any sense of it:

The bin_rec function is an example of a hylomorphism (see citeseer.ist.psu.edu/meijer91functional.html)—the composition of an anamorphism (an unfolding function) and a catamorphism (a folding function). Hylomorphisms are interesting because they can be used to eliminate the construction of intermediate data structures (citeseer.ist.psu.edu/launchbury95warm.html).

From the article Cat: A Functional Stack-Based Little Language in the April issue of DDJ.
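For what it’s worth, here’s the rough shape of what that quote is describing, as a sketch in Python rather than Cat (the function names are mine, and this is not the article’s bin_rec). An anamorphism unfolds a seed value into a data structure, a catamorphism folds a structure down to a value, and a hylomorphism is their composition. The “eliminate intermediate data structures” part means the fused version never builds the list at all:

```python
def unfold_countdown(n):
    """Anamorphism: unfold the seed n into the list [n, n-1, ..., 1]."""
    out = []
    while n > 0:
        out.append(n)
        n -= 1
    return out

def fold_product(xs):
    """Catamorphism: fold a list down to the product of its elements."""
    result = 1
    for x in xs:
        result *= x
    return result

def factorial_hylo(n):
    """Hylomorphism: the composition fold-after-unfold.
    Note the intermediate list that unfold_countdown builds."""
    return fold_product(unfold_countdown(n))

def factorial_fused(n):
    """The same hylomorphism with the intermediate list fused away:
    each element is consumed the moment it is produced."""
    result = 1
    while n > 0:
        result *= n
        n -= 1
    return result

print(factorial_hylo(5))   # 120
print(factorial_fused(5))  # 120
```

Both functions compute the same thing; the second just never materializes the `[5, 4, 3, 2, 1]` list in between.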

This experience matches how I imagine programming-language interpreters and compilers work. In the first pass, the interpreter reads in all the text and breaks it down grammatically, mapping chunks of text into nodes that have labels like “IDENTIFIER” or “FUNCTION DEFINITION” or whatever.
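That first pass might look something like this toy sketch (the token labels echo the post; the grammar and regular expressions are invented for illustration):

```python
import re

# Each pattern maps a chunk of text to a labeled node.
TOKEN_SPEC = [
    ("KEYWORD",    r"\bdef\b"),
    ("IDENTIFIER", r"[A-Za-z_]\w*"),
    ("NUMBER",     r"\d+"),
    ("OP",         r"[()*+=:,-]"),
    ("SKIP",       r"\s+"),       # whitespace: matched but discarded
]

def tokenize(text):
    """Break text down into (label, chunk) pairs."""
    tokens = []
    pos = 0
    while pos < len(text):
        for label, pattern in TOKEN_SPEC:
            m = re.match(pattern, text[pos:])
            if m:
                if label != "SKIP":
                    tokens.append((label, m.group()))
                pos += m.end()
                break
        else:
            raise SyntaxError(f"unexpected character {text[pos]!r}")
    return tokens

print(tokenize("def square(x): x * x"))
```

A real scanner and parser would go further and arrange these tokens into a tree, but the “chunks of text become labeled nodes” idea is the same.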

Then in the second pass, the system walks through the nodes and gets down to the business of writing out the ones and zeros that tell the little men inside my computer what to do.
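And the second pass might look like this: walk a tree of labeled nodes and emit instructions. The tree shape and instruction names here are invented; real compilers emit machine code or bytecode rather than strings, but the walk is the same idea:

```python
def emit(node):
    """Walk an expression tree of nested tuples and emit stack-machine ops."""
    kind = node[0]
    if kind == "NUMBER":
        return [f"PUSH {node[1]}"]
    if kind == "ADD":
        # Emit code for both children, then the operation itself.
        return emit(node[1]) + emit(node[2]) + ["ADD"]
    if kind == "MUL":
        return emit(node[1]) + emit(node[2]) + ["MUL"]
    raise ValueError(f"unknown node kind {kind!r}")

# (1 + 2) * 3 as a tree of labeled nodes:
tree = ("MUL", ("ADD", ("NUMBER", 1), ("NUMBER", 2)), ("NUMBER", 3))
print(emit(tree))
# ['PUSH 1', 'PUSH 2', 'ADD', 'PUSH 3', 'MUL']
```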

I haven’t studied compilers formally (hey, my degree is in economics!) so please let me know how far off base I am. I’m aware that in reality, the first and second passes may not be separate from each other, or may be interleaved.

3 thoughts on “The difference between syntactic analysis and code generation”

  1. That’s roughly how things usually work – though in practice things are broken down into far more phases than that 🙂

    Mind you, resolving cross-references is generally a job for the linking phase, which comes at the very end, after code generation…
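    The linking idea in the comment above can be sketched in miniature too: code generation leaves cross-references by name, and a final pass lays the units out and patches the names into addresses. The unit names and the “object code” format here are invented for illustration:

    ```python
    objects = {
        "main": ["CALL helper", "HALT"],
        "helper": ["RET"],
    }

    def link(objects):
        """Lay out each unit, then resolve CALL <name> to CALL <address>."""
        image, addresses = [], {}
        for name, code in objects.items():
            addresses[name] = len(image)   # record where each unit lands
            image.extend(code)
        resolved = []
        for instr in image:
            op, _, target = instr.partition(" ")
            if op == "CALL":
                resolved.append(f"CALL {addresses[target]}")
            else:
                resolved.append(instr)
        return resolved

    print(link(objects))
    # ['CALL 2', 'HALT', 'RET']
    ```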

  2. @bd_: Thanks for the comment! I think my metaphor probably doesn’t withstand too much scrutiny.

    The point I want to make is that for each sentence, I can identify the subjects and verbs, but I don’t understand the meaning of it (not entirely, anyway).

Comments are closed.