June 14, 2005

Fie on the register allocator

One of the things that was plaguing me with $WORK_PROJECT was the interaction of parrot's register allocator with some of my... degenerate code. (Assuming you consider a single subroutine with 1.6M of source text and 20K+ temps degenerate. I certainly do.) On the fast machine at the office it topped out at around a gig of memory consumed and somewhere around 360 minutes of CPU time. Needless to say... ick. Far from acceptable, and nearly all the time's spent in the register allocator.

I'd taken a shot at patching that up a while back, but ran into some issues. (Entirely internal to my compiler) I left the infrastructure in place, though, and this week I dove into it again. Took all of a day and a half to patch up properly. Now I use no virtual registers at all.

Now my big program takes 32 minutes to run through parrot to generate bytecode, and I think most of that time is due to bugs in the current register allocator (since that's where the time's spent, even though there are no registers that need allocating at all). I may well toss the PIR entirely and generate PASM directly. Right now the only PIR features the compiler's using are function pre/postamble generation, function call generation, and easy keyed access, and all of that's generated by single subroutines in the compiler. (I abstracted it all out, so changing, say, the function call code emission is simple -- change it in one spot in the compiler and every function call is fixed up) Switching to PASM generation's no big deal, and ought to get me a damned significant compilation time speedup, since PASM is just bytecode turned to text.
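To make the "bytecode turned to text" point concrete, here's a rough sketch of one of those PIR conveniences -- keyed access -- next to the PASM it boils down to. This is illustrative only; exact syntax shifted across 2005-era parrot versions, and the register numbers here are my own choice, not anything the compiler mandates:

```pir
# PIR sugar: infix assignment with a keyed subscript,
# using virtual registers the allocator must map
$P0[5] = $I1

# Roughly the PASM underneath: one opcode per line,
# physical registers named directly, no allocator pass
set P16[5], I16
```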

What're the takeaways here?

  1. Parrot's register allocator as it currently stands gets degenerate pretty quickly
  2. Completely bypassing parrot's register allocator isn't that big a deal if you're writing a compiler
  3. Parrot's register count (32 of each type, with 16 of those not touched by the calling conventions) is sufficient for all my needs with space to spare (I never needed more than 13, and the Evil Program has some twisted code in it)
  4. PIR isn't actually all that useful to a compiler, though it is tremendously useful for hand-written code
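Since point 2 may sound odd, here's roughly what bypassing the allocator means in practice. In PIR, $P0-style names are virtual registers the allocator has to map onto the 32 physical PMC registers; writing P16 and up directly sidesteps that mapping entirely, and the upper 16 registers are exactly the ones the calling conventions leave alone. (A hedged sketch -- the type name and exact syntax are illustrative, not gospel for any particular parrot build.)

```pir
.sub 'virtual_style'
    # Virtual registers: with 20K+ of these in one sub,
    # the allocator's mapping work gets degenerate fast
    $P0 = new 'Integer'
    $P0 = 42
    print $P0
.end

.sub 'direct_style'
    # Physical registers P16-P31: nothing for the
    # allocator to do, and safe from the calling
    # conventions, which only claim the lower half
    new P16, 'Integer'
    set P16, 42
    print P16
.end
```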

Point 4 was the most surprising of the lot. I really expected to get more of a win from PIR for the compiler, but the only advantage it offered, register spilling, turned out to be both not much of an advantage (because of how quickly the code turned the spiller degenerate) and not at all troublesome to completely ignore.

I should sit down and write up how parrot looks as a compiler target, as I'm the only person who's put any real time into targeting it with a significant compiler. (And let me tell ya -- DecisionPlus ate nearly two years of my life and I wouldn't mind having them back... :) Some of the design changes that were proposed for parrot around the time I cut myself off were a bit off-target, at least compared to my experience. I'll probably discuss that as well with Chip come YAPC::NA.

Posted by Dan at June 14, 2005 10:16 PM | TrackBack (0)
Comments

"a single subroutine with 1.6M of source text and 20K+ temps"

OK, I think nobody can top this. This has to be the worst code ever!

Posted by: Sjoerd Visscher at June 15, 2005 04:04 AM

:) Well, in its defense, this is compiler-emitted code, the language the compiler's for has no subroutines or functions, and the program in general is the single most complex screen in the system, with 11K lines of code and 366K of source text. (Not including the #include files, since this language runs its source through the C preprocessor first)

I did lie, I suppose -- it doesn't have 20K temps. It's got 20K PMC temps. (And another 8-9K integer temps, and 15K or so string temps... :) Close enough, though.

Yeah, it's nasty. Debugging this thing is an exercise in interesting pain, especially since some of the more esoteric language features are only used in the really large programs, so debugging them in production code is tricky. (Hell, finding the undocumented esoteric edge cases is tricky)

Posted by: Dan at June 15, 2005 08:07 AM

Back in the 80's, people realised that compilers weren't using all the features of the processors of the day. And thus RISC was born - stripped down minimal instruction sets with just the features that compilers targeted. 20 years later, CISC is still king (x86), though today's x86 implementations are in many ways more analogous to PIR than PASM. I'm not sure what the lesson for Parrot is, but it's interesting how history repeats itself.

Posted by: Dave Whipp at June 15, 2005 11:00 AM

Yeah, the whole "who will use this thing?" concern drove all the features that went into parrot. Things got looked at under these criteria:

  1. Would the perl/python/ruby compiler use it?
  2. Is it development scaffolding?
  3. Is it needed for the runtime library?
  4. Is it needed in support for a declared feature?

If it met one of those criteria it went in; otherwise not. Keeping an eye on the end result was a bit difficult sometimes, though I think it was generally managed OK.

Posted by: Dan at June 15, 2005 11:45 AM