I've had some tertiary-level training in pure maths (not that you can probably tell!) but I'm pretty sure these beautiful creatures never popped up in any serious way in any of my courses, and ditto for my earlier schooling. They aren't complete strangers to me, but I don't think I'd ever thought much beyond "wow, that's a strange and beautiful thing" whenever I'd bumped into one previously. At least up until a few days ago, when I started looking a bit harder and discovered all sorts of curious and interesting facts about them. Here are two examples of infinite continued fractions:
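(The examples shown here are a substitution of my own - assuming the classic textbook pair - since I can't reproduce the original images: the expansions of the square root of two and of e.)

```latex
\sqrt{2} = 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cdots}}}
\qquad
e = 2 + \cfrac{1}{1 + \cfrac{1}{2 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{4 + \cdots}}}}}
```

The first repeats the same term forever; the second follows the pattern 1, 2, 1, 1, 4, 1, 1, 6, and so on.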
In my previous post I talked a bit about Alex Smith's proof that the Wolfram Prize 2-state, 3-symbol Turing machine was universal. His proof relies on an infinite non-periodic initial pattern on the tape. Since then Alex has come up with a language called 1cnis, with the idea that this could in some sense formalise the kind of patterns that he thinks should be allowed on the initial tape. Perhaps we can even think of it as a kind of compiler, taking a finite source description and turning that into an infinite amount of machine language for the 2,3 machine. Talk about code bloat!
Anyway, Alex's email about 1cnis reminded me of some idle thoughts I'd had a few days earlier about a possible hierarchy of classes of initial tape patterns. We could view the traditional all-blank Turing machine background as the infinite decimal expansion of the value zero, and other non-blank periodic backgrounds as the decimal expansions of simple fractions. When I'd been thinking along these lines I'd thought that perhaps a continued fraction of some kind might be the next most general class of possible background pattern.
Well, you can't realistically use any old continued fraction because every real number can be represented as a continued fraction and many of those are non-computable. But some real numbers have regular periodic representations as continued fractions - for example, the positive square root of any natural number that isn't a perfect square. These kinds of infinite continued fractions (with some kind of regular or repeating pattern) are clearly computable real numbers. In some vague sense the idea of continued fractions made me think of the kind of pattern used by Alex Smith's proof although viewing it as a sum of an infinite sequence is probably far more natural.
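To make that concrete, here's a small Python sketch of my own (the helper name `convergent` is just for illustration) showing how a truncation of a periodic continued fraction can be evaluated exactly - in this case the expansion of the square root of two, which repeats the term 2 forever:

```python
from fractions import Fraction

def convergent(terms):
    """Evaluate a finite continued fraction [a0; a1, a2, ...] exactly,
    working backwards from the last term."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

# sqrt(2) has the periodic continued fraction [1; 2, 2, 2, ...];
# truncating after 20 repeats is already plenty for double precision.
terms = [1] + [2] * 20
approx = convergent(terms)
print(approx, float(approx))
```

Each extra term roughly squares the accuracy of the denominator, which is why only a handful of terms are needed - exactly the kind of "regular repeating pattern" that makes these numbers obviously computable.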
Changing course slightly, I also exchanged a couple of emails with "r.e.s." recently after spotting a message from him about some small universal tag systems and details on how to create them. As Alex Smith's proof also went via Cyclic Tag Systems I'd got this crazy idea into my head that it might be fun to try to write a self-interpreter for one of the variants of tag systems directly, and so the information posted by r.e.s. caught my eye.
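For readers who haven't met them, a cyclic tag system is a very simple rewriting machine, and a simulator fits in a few lines. Here's a minimal Python sketch (the example productions are my own, chosen for illustration - not a universal system):

```python
def run_cyclic_tag(productions, word, steps):
    """Run a cyclic tag system: at each step, delete the first symbol of
    the word; if that symbol was '1', append the current production.
    Productions are consulted in a fixed cyclic order regardless of
    which symbol was deleted. Halts if the word becomes empty."""
    history = [word]
    for i in range(steps):
        if not word:
            break
        head, word = word[0], word[1:]
        if head == "1":
            word += productions[i % len(productions)]
        history.append(word)
    return history

# A tiny example system with three productions, started on the word "1".
for w in run_cyclic_tag(["011", "10", "101"], "1", 6):
    print(w)
```

The appeal for self-interpretation is that the whole "machine" is just the list of productions, so encoding one tag system as input to another is at least conceptually straightforward.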
You can view a specific encoding scheme for some particular universal machine (e.g. a universal cyclic tag system with its set of productions, or a particular universal Turing machine) as a specification for a programming language. We can use that language to write a specific program to compute some function. That program can then be understood as a description of another machine (along with appropriate data) that is purpose-built to do the required computation. So, while some "languages" already exist for some universal tag systems (and corresponding "self-interpreters" could also be realised), I'm pretty sure they are not the most efficient, because they go via an emulation of some Turing machine, which in turn emulates a tag system. I'm interested in finding the most efficient self-interpreter (in the sense that it has the lowest eigenratio), and it seems likely to me that this would use a more direct encoding system. In other words, I want to find the encoding scheme that allows the most efficient implementation of a universal tag system - one that can "directly" emulate any other tag system - measuring efficiency by the asymptotic limit of the increase in runtime as the system emulates additional copies of itself, with some other fixed program/data "on top".
To slightly paraphrase a comment from "r.e.s.", my goal is "to find a Golden Implementation in a Golden Language, for which the lower bound is the Golden Ratio". Ha ha! I had mentioned this blog to "r.e.s." - he had a look and eventually found "Clive's Conjecture". That was a suggestion that the lower bound on the eigenratio of any language/system self-interpreter is the Golden Ratio. I really just plucked that number out of thin air, more "for fun" than based on any serious reasoning or belief. I did consider a few other candidates at the time - e and 2, for example - but some of the properties of the Golden Ratio that I knew about seemed vaguely "right", and that eventually swung it for me.
That was all a long time ago, so the funny thing (to me) is that it was only a month or two ago that I finally stumbled over the fact that there is a very nice small matrix that has precisely the Golden Ratio as its dominant eigenvalue. So if someone can find a computational system with a self-interpreter that is "described" by this matrix then my conjecture might actually start to look slightly more serious!
Anyway, for now I think I'll call this the Golden Matrix. It can be understood as describing how to get the next term of the Fibonacci sequence from the previous two. Just use it to multiply a column vector containing F(k+1) as the first element and F(k) as the second, to get a new vector with F(k+2) and F(k+1).
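The matrix in question (my assumption, since the original image isn't reproduced here) is [[1, 1], [1, 0]]. A quick Python sketch shows both properties at once - repeated multiplication churns out Fibonacci numbers, and the ratio of consecutive entries closes in on the Golden Ratio:

```python
import math

def mat_mul(A, B):
    """2x2 integer matrix product."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

# The (presumed) Golden Matrix: applied to the column vector (F(k+1), F(k))
# it yields (F(k+2), F(k+1)).
G = [[1, 1], [1, 0]]

# Repeated squaring isn't needed for a demo; G^n = [[F(n+1), F(n)], [F(n), F(n-1)]].
P = G
for _ in range(20):
    P = mat_mul(P, G)

print(P[0][0] / P[0][1])        # ratio of consecutive Fibonacci numbers
print((1 + math.sqrt(5)) / 2)   # the dominant eigenvalue, Phi, solving x^2 = x + 1
```

The characteristic polynomial of G is x^2 - x - 1, whose positive root is exactly Phi - which is the "dominant eigenvalue" connection.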
As k goes to infinity, the limit of F(k+1)/F(k) is the Golden Ratio. This was one of the things I knew when I made my "conjecture" but I didn't find out about the connection to the matching Golden Matrix representation until much more recently!
So let's have a look at the continued fraction representation of the Golden Ratio (Phi):
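(Reproduced here from the standard identity, as the original image isn't available - it's the all-ones continued fraction:)

```latex
\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1 + \cdots}}} = [1; 1, 1, 1, \ldots]
```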
Isn't that the clincher?! How can anything so beautiful not be the answer?