Saturday, September 01, 2007

When is a self-interpreter not a "real" self-interpreter?

I started this blog about a year ago, after participating in the 2006 ICFP Programming Contest. The 2007 contest was run over a 72 hour period on 20 - 23 July, and so I tried my hand once again, with a little assistance from my regular teammate. Thanks Shaun, and Happy Birthday blog!

Unfortunately, like many others in this year's contest, we didn't progress beyond a fairly minimal level. A share of 50th place out of several hundred entries might not sound too bad, but it was really the start of a very long, flat tail on the scoreboard. And in my case this was due in large part to a very stupid bug and my inability to find and fix it fast enough. (I have since honed my skills with Python code testing and checking tools, set up a directory ready for next year, and sworn to do it right in 2008!)

But putting all that aside, the contest task was quite similar to last year's in many ways. However, instead of 2006's UM language, this time we had to interpret "Fuun DNA", a kind of string rewriting system. Sometime after the contest was over (and also after I had spent huge amounts of time trying to progress further, with only limited success!) I suggested that perhaps someone should write a self-interpreter for Fuun DNA. Jochen Hoenicke promptly did so - he's the smart guy who was also the first person acknowledged by the organisers to have generated the exact output image required by this year's contest, albeit about a month after the 72 hour contest closed! I should also say that he knows this first shot at a self-interpreter has a couple of bugs/limitations. You can find the "source" and "DNA" for it here (in selfinterpreter.*).

But there was also some discussion (see the comments for this post on Max Rabkin's blog) about what a self-interpreter for Fuun DNA actually needs to do! The language doesn't support input directly, but appending the "input" onto the end of the "DNA" representing your self-interpreter seems a reasonable way to achieve this. Because the next "pattern/template" instruction executed is always read from the start of the current DNA string, this also means you could put any kind of NOP program code in front of the DNA to be interpreted and claim to have a self-interpreter! This doesn't gel with my notion of a self-interpreter at all, and so I've ended up revisiting various definitions in an attempt to find a clear-cut definition of a self-interpreter that would disallow "tricks" such as this.
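To see why that loophole feels unsatisfying, here is a minimal sketch in Python using a made-up toy language (hypothetical instruction set, not the real Fuun DNA encoding) that keeps only the property at issue: the next instruction is always read from the front of the string.

```python
def run(dna):
    """Toy executor: read instructions from the front of the string.
    'E' emits the symbol that follows it; anything else halts.
    (Invented semantics, purely for illustration.)"""
    out = []
    while dna:
        op, dna = dna[0], dna[1:]
        if op == 'E' and dna:
            out.append(dna[0])
            dna = dna[1:]
        else:
            break
    return ''.join(out)

# The trick: a prefix that rewrites to nothing (here, simply the empty
# string) trivially "interprets" whatever program is appended after it.
trivial_self_interpreter = ''
program = 'EaEbEc'
assert run(trivial_self_interpreter + program) == run(program) == 'abc'
```

Under a purely observational definition, `trivial_self_interpreter` passes - which is exactly the kind of "trick" a better definition ought to rule out.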

Most definitions seem pretty basic. Something like this (from the Wikipedia entry):

A self-interpreter is a programming language interpreter written in the language it interprets.

So then we need to understand what a "programming language interpreter" is. From the relevant Wikipedia entry for "interpreter" we are told: "an interpreter is a computer program that executes, or performs, instructions written in a computer programming language".

It then goes on to talk about the structure of an interpreter:

An interpreter needs to be able to analyze, or parse, instructions written in the source language. It also needs to represent any program state, such as variables or data structures, that a program may create. It needs to be able to move around in the source code when instructed to do so by control flow constructs such as loops or conditionals. Finally, it usually needs to interact with an environment, such as by doing input/output with a terminal or other user interface device.

This is all well and good, and sits pretty comfortably with my understanding of what a self-interpreter should normally do. It would also seem to exclude empty strings and other variations on that theme, and I'm happy about that too! However, it's all still rather informal.
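As a concrete (and entirely hypothetical) illustration of the components that quote lists - parsing, program state, control flow, and output - here is a minimal toy interpreter in Python for an invented little language:

```python
def interpret(source):
    """Toy interpreter. One instruction per line:
    'set x n', 'add x n', 'print x', and 'loop n ... end'
    (repeat the enclosed block n times). Invented for illustration."""
    lines = [l.split() for l in source.strip().splitlines()]  # parsing
    out, env = [], {}                                         # program state

    def run(i, end):
        while i < end:
            op = lines[i]
            if op[0] == 'set':
                env[op[1]] = int(op[2])
            elif op[0] == 'add':
                env[op[1]] += int(op[2])
            elif op[0] == 'print':                            # output
                out.append(env[op[1]])
            elif op[0] == 'loop':                             # control flow
                depth, j = 1, i + 1                           # find matching 'end'
                while depth:
                    if lines[j][0] == 'loop': depth += 1
                    if lines[j][0] == 'end':  depth -= 1
                    j += 1
                for _ in range(int(op[1])):
                    run(i + 1, j - 1)
                i = j - 1
            i += 1

    run(0, len(lines))
    return out

print(interpret("set x 0\nloop 3\nadd x 2\nend\nprint x"))  # [6]
```

Even this throwaway example has to parse, keep variable state, and move around the source under control-flow instructions - which is the informal bar the Wikipedia description sets.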

In their paper "A Very Short Self-Interpreter" (describing their brainfuck self-interpreter), Oleg Mazonka and Daniel B. Cristofani add a few more conditions. (Note that this extended definition may not be widely accepted as being correct.)

Here we define self-interpreter as an interpreter which is written in the same language it interprets, and which also meets these three requirements:

1. the language must be Turing-complete;
2. the behavior of programs when interpreted by a self-interpreter must be the same as their behavior when interpreted by any other interpreter, i.e. the two must produce identical observable output for any legitimate input, though not necessarily at the same speed; and
3. the self-interpreter must not use language constructs which are designed to facilitate recognition and interpretation of these and other language constructs (self-interpretation), e.g. LISP's eval.

The second requirement looks like a tautology saying that a self-interpreter is an interpreter. Nevertheless this requirement rules out languages whose pragmatics are not sufficient to permit a self-interpreter. Later we show that not every Turing-complete language can have a self-interpreter in this sense. This property forms a distinction between languages: some languages are environmentally complete, while others are not. Thus, Turing-completeness comes from the semantics of the language and environmental completeness from its pragmatics.

Item 3 seems the least well defined to me, particularly when you look at languages such as the one defined by this year's ICFP Contest, the "Fuun DNA". Being a string rewriting system, almost every aspect of that language seems to fit item 3! Now, I must admit I haven't checked this very carefully, but I think Jochen's self-interpreter basically works by parsing the code to be interpreted just enough so that the next "instruction" can be wrapped within another instruction, which is then placed at the front of the current "DNA" where it will be the next thing executed. The wrapper skips past the rest of the self-interpreter before executing the wrapped instruction more or less in the original environment it was copied from. Repeat until done. Is that "cheating"? I suspect a panel of judges would say it was, but perhaps only by a majority decision, not unanimously. And just to be clear, this is not meant to be a criticism of what Jochen did! In fact I think he wrote it that way mainly to demonstrate how to circumvent a different idea, put forward earlier by someone else, to stop "cheating".

It seems to me that most languages could be described in terms of some number of atomic units of computation. For Fuun DNA, these include the ability to match specific bases, to skip past a fixed number of bases, to search for a particular sequence, and so on. The language is defined in terms of these basic things it is able to do. In my opinion, a self-interpreter needs to be emulating the target language/program at least down to that level, even if it is technically possible to take short-cuts. I guess this is really what the last item in Oleg and Daniel's requirements is trying to get at.
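The three atomic units just named can be sketched in Python (the function names and string encoding here are my own invention, not the real Fuun DNA operations):

```python
def match_base(dna, i, base):
    """Atomic unit: does position i hold this specific base?"""
    return i < len(dna) and dna[i] == base

def skip(i, n):
    """Atomic unit: advance past a fixed number of bases."""
    return i + n

def search(dna, i, needle):
    """Atomic unit: advance to just past the next occurrence of a
    sequence (or to the end if it never occurs)."""
    j = dna.find(needle, i)
    return (j + len(needle)) if j >= 0 else len(dna)

dna = "ICFPICFP"
i = 0
assert match_base(dna, i, 'I')
i = skip(i, 2)             # now at the first 'F'
i = search(dna, i, "IC")   # lands just past the second "IC"
assert i == 6
```

The claim in the paragraph above is that a "real" self-interpreter should be seen to perform each of these atomic steps on behalf of the target program, rather than finding a way to avoid them.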

Even when you get down to that level of detail, there are choices to be made. Once again the Wikipedia article talks about this in some detail so I'll just quote it again:

There is an important design dimension in the implementation of a self-interpreter, namely whether a language feature in the interpreted language is implemented by using the same feature in the implementation language of the interpreter (the host language). A typical example is whether a closure in a Lisp-like language is implemented using closures in the interpreter language or implemented "manually" using a data structure that stores the environment explicitly. The more features are implemented by the same feature in the interpreter language, the less control the programmer of the interpreter has. For example, a different behavior for dealing with number overflows cannot be realized if the arithmetic operations are just delegated to the corresponding operations in the host language.
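The design dimension the quote describes can be shown in a few lines of Python, using closures as the example feature (a hypothetical sketch, not taken from any particular interpreter):

```python
# Option 1: delegate to the host language. The environment is captured
# implicitly by Python's own closures - simple, but the interpreter
# writer gives up control over the details.
def make_adder_host(n):
    return lambda x: x + n

# Option 2: build closures "manually" with an explicit data structure
# that stores the environment. Now the interpreter writer controls
# every detail (e.g. could impose different overflow behaviour).
class Closure:
    def __init__(self, param, body, env):
        self.param, self.body, self.env = param, body, env

    def call(self, arg):
        env = dict(self.env)       # extend the stored environment
        env[self.param] = arg
        return self.body(env)

def make_adder_manual(n):
    return Closure('x', lambda env: env['x'] + env['n'], {'n': n})

assert make_adder_host(3)(4) == 7
assert make_adder_manual(3).call(4) == 7
```

Both behave identically from the outside, which is exactly why the choice only matters to whoever wants to change the semantics underneath.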

Of course, this blog is supposedly about "eigenratios" of self-interpreters, and up until now eigenratios haven't been mentioned in this post. In my earlier experiments, the eigenratios I've managed to determine exactly have been derived from a matrix whose elements correspond to counts of how many times particular "instructions" (or "variations" of them) are executed in the self-interpreter when it interprets one of the same "instructions" from the target program. Although this is a very one-sided view of the world, obviously I would like all "self-interpreters" to emulate the target language in enough detail to allow a corresponding matrix to be constructed! Perhaps something like this could be another requirement for a "real" self-interpreter? And in a sense I think it is another way of "interpreting" item 3 from the paper mentioned earlier.
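To make the matrix idea concrete, here is a small sketch with entirely made-up numbers (not counts from any real self-interpreter): if M[i][j] counts how many times the self-interpreter executes instruction i while interpreting one instruction j of the target, then the eigenratio is the dominant eigenvalue of M, which power iteration recovers:

```python
def dominant_eigenvalue(M, iters=200):
    """Power iteration: repeatedly apply M to a vector and renormalise;
    the scaling factor converges to the dominant eigenvalue."""
    n = len(M)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# Hypothetical 2x2 count matrix for a two-instruction language:
# e.g. interpreting one instruction of type 0 costs 3 executions of
# type 0 and 2 of type 1 in the interpreter, and so on.
M = [[3.0, 1.0],
     [2.0, 4.0]]
print(round(dominant_eigenvalue(M), 4))  # 5.0
```

With these toy counts each extra level of self-interpretation multiplies the running time by about 5 in the limit - that factor is the eigenratio.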

Finally, if you want more (although not exactly on the same topic), there's a lot of other interesting reading about self-interpreters and meta-circular interpreters (in particular) in a post by Reg Braithwaite on his blog (raganwald).