September 17, 2004

VEE workshop, day 3

A short day, and one I talk at, so less than meets the eye.

I got to talk with Doug Lea at breakfast (which was really cool) about lightweight threading primitives. The more heavyweight stuff's fine, but there's a lot you can do with lightweight stuff. We'd need a way to make a thread wait until woken (with queued wakeups), wake up a thread, and do an atomic compare-and-set. It's interesting, and I think we can do it cheaply enough to make it better than pthreads synchronization.
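To make that concrete, the three primitives sketch out something like the following Python. This illustrates the semantics only -- a real VM would use a CPU compare-and-swap instruction and kernel-assisted parking rather than locks and semaphores, and the class names here are mine, not anything Doug proposed:

```python
import threading

class Parker:
    """'Wait until woken, with queued wakeups': an unpark that arrives
    before the park isn't lost -- which is just semaphore semantics."""
    def __init__(self):
        self._permits = threading.Semaphore(0)

    def park(self):
        self._permits.acquire()   # blocks unless a wakeup is already queued

    def unpark(self):
        self._permits.release()   # queues a wakeup

class AtomicRef:
    """Compare-and-set semantics. The whole point is that hardware does
    this in one instruction; the lock here is purely for illustration."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def compare_and_set(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

    def get(self):
        with self._lock:
            return self._value
```

With just those three you can build mutexes, condition variables, and lock-free queues on top, which is where the "cheaper than pthreads synchronization" argument comes from.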

The morning talks were interesting, if a bit specialized (the Metronome project for guaranteed collector behavior for hard realtime systems was cool, even if I think using Java in a hard realtime environment's a bit... odd). Still, neat. The 3500 byte Java JAR file that triggers degenerate behavior in Java's bytecode verifier (15 minute verify times on a 3GHz machine) was something whose URL I really wish I'd copied. Hopefully the slides for the talk'll make it online.

I was up last (other than the wrapup) and it went... well, it went as well as expected, given the room was filled almost entirely with JVM guys, and I wasn't sure what exactly I was going to talk on anyway. I more or less went through a laundry list of things that VM implementors could (and, indeed, I argue would have to) provide to languages running on top of them. The raw presentation'll probably be on the workshop website in the next few days, and I'll try to get an annotated version together.

I'd have to say, on the whole, the workshop was interesting, and well worth doing. There's a bunch of stuff that came up -- mistakes made by the JVM, or things people really want but don't have -- that we should address in Parrot. I took some paper notes, which I'll have to pull together and get addressed. It was also interesting to see the spots where people were talking past one another -- I'm sure I did it too and didn't notice, but that's the nature of that sort of thing.

Posted by Dan at 03:55 PM | Comments (0) | TrackBack

September 16, 2004

VEE workshop, day 2

mprotect, which has nothing to do with the VEE workshop, would be a lot more useful if it could be done per-thread. (Or, more specifically, imposed on other threads) No joy there, though. Pity. And yes, the VEE notes here really are notes, and a bit raw. (Woo, stream of unconsciousness note taking)

Second talk was on security and exploitability. Interesting talk -- on provability of correctness using type systems. The interesting bit here is that large systems are effectively unprovable, so the sensible thing is to shrink down the base that you actually need to trust, and find a way to make that trustworthy without checking the whole system, since it's still too big to sanely check. The solution they used was to have a small correctness checker (~2K lines of C code, which generated ~5K lines of assembly that they hand-checked) that checked the very large rules base for correctness, and then was used to check the JIT for a JVM. Which was really cool.

Next talk's on optimizations. Monitoring systems, adaptive tuning, tweaking of various things. Interesting. Now I'm certain that we're getting it wrong with optimizing. Or, rather, we've gotten it about as right as we can and it's time for a big Plan B. And comparisons to hardware speedups aren't valid here -- the hardware guys have both massive parallelism to exploit and started with extraordinary overhead which they've been chipping away at (so to speak). There isn't that massive parallelism for the software to exploit, and we started off sucking much less. Not that optimization's a bad thing, nor unnecessary, since it does allow for finding inefficiencies (well, a few classes of inefficiencies, at least) in the existing code. Still, we don't start with inefficiencies on the factor-of-a-billion scale like the hardware guys did. So today I'm wondering what'd happen if someone did a 6502 processor in a 90nm process at 2+GHz. (No L1 cache even, since the whole memory would fit within a modern chip's L1 area anyway) Would we get a factor of 2 billion speedup? (And if so, would it matter?) Kinda makes you wonder what would happen if the next CPU shrink didn't add any features but instead took up all the extra transistor space with a couple thousand really fast Z80s... (Yeah, I know, what'd happen is we'd lose most of the win in interconnect overhead)

Mmmm, dynamic systems. (Yeah, next talk) I saw a version of this at OOPSLA a few years ago. Looking at it now, I'm lusting after some of the stuff that's being done here -- everything is mutable, dynamic, and compiles to native code. I really would love to have the resources to do this. (Hell, the resources to take the time to understand it fully) Nice to have a reminder of things that we really ought to do when doing the design of the compiler and parser modules.

Java on phones. People want it. I admit I think it's a bad idea. I like my phone to be... a phone. And that's it. But then I can put on the Luddite hat sometimes.

It's interesting to see the sorts of things the business folks want and need to get their jobs done. The J2EE middleware layers look to be almost insanely big. (The 'almost' there might be unnecessary) Lots of crud, lots of crap, and, it turns out, lots of need for semi-isolation, protection mechanisms, and lots of what really are OS types of control and protection. Nobody seems to really want to bite the bullet and do it right, or what seems like right to me. (Hell, the OS folks worked a lot of this out decades ago) A fair amount of handwavey, over-engineered stuff, though that might be my cynicism talking.

(Good grief -- 1.5-2G address spaces are too small for some Java apps. That's... insane. At least the live data set's only a few hundred megabytes, so it's not too bad)

External monitoring schemes are definitely useful, and definitely lacking in JVMs. Gotta remember to add some routines to the embedding interface to allow for external querying of state, and external runtime monitoring. Event posting too. Feh. Should specify events.

Oh, and for some reason the phrase "World's largest microscope" really amuses me. Call it a character flaw, I guess.

The Azul talk was both interesting and really frustrating. Looks like they're doing all sorts of clever things with hardware and software in some sort of combo Java Big Iron thing, but there wasn't enough info to actually get detail. Dammit. Cool looking, and again I find I'm lusting after both CPU support and MMU access, neither of which I'm ever going to get, unfortunately.

Now Intel's got some interesting stuff here. Hardware level monitoring of VM activity with feedback. They've got things wedged in some places to do speculative cache loads based on feedback data and get an impressive (14% on this slide) speedup.

Pypy is kind of cool, though it's late enough that my brain's too fried to write about it coherently. And somehow I really like that Parrot's unsafe, horribly dangerous outside-world interface is about as easy to use as any VM's on the planet. Woohoo us! One-instruction library loading, and one-instruction 'make a parrot sub for this function with this signature in that library' wrapper sub creation. Mmmm, unsafety. :)
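For the curious, the nearest thing in common circulation is Python's ctypes, which does morally the same two-step dance -- load a library, then wrap a function by declaring its signature. A sketch, with the caveat that `ctypes.CDLL(None)` handing back the already-loaded C runtime is a Unix-ism:

```python
import ctypes

# Step one: load a library. CDLL(None) gives the process's own symbols
# (including the C runtime, on Unix), so no library name is needed here.
libc = ctypes.CDLL(None)

# Step two: make a callable for this function with this signature.
strlen = libc.strlen
strlen.argtypes = [ctypes.c_char_p]
strlen.restype = ctypes.c_size_t

print(strlen(b"parrot"))  # → 6
```

Same unsafety, too -- get the signature wrong and you get to keep both pieces.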

Posted by Dan at 06:33 PM | Comments (3) | TrackBack

An interesting set of GC papers

Courtesy, indirectly, of the VEE workshop:

Looks like maybe read barriers aren't as bad as I thought they might be. May well be worth investigating further and getting some infrastructure set up.
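(The flavor I have in mind is the Brooks-style forwarding-pointer barrier. A toy Python sketch of the idea, names all mine:)

```python
class Cell:
    """A heap object with a Brooks forwarding pointer: every read
    indirects through 'forward', so when the collector moves the
    object it just repoints 'forward' and stale references keep
    working without any explicit fixup pass."""
    def __init__(self, value):
        self.value = value
        self.forward = self   # initially points at itself

    def read(self):
        return self.forward.value   # the read barrier: one extra load

    def evacuate(self):
        """Collector's side: copy the object, install forwarding."""
        copy = Cell(self.value)
        self.forward = copy
        return copy
```

The cost is that one extra load on every read, which is exactly what the papers argue is cheaper in practice than the folk wisdom says.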

Posted by Dan at 03:56 PM | Comments (1) | TrackBack

September 15, 2004

VEE workshop, day 1

Breakfast at these things is always somewhat disheartening -- everyone in the conversations seems significantly more clever than I am. Granted, it may be that everyone who's not -- me included -- just doesn't talk, but still... Anyway, interesting chats on some of the internals of the hotspot JVM, along with a bunch of other Java talk. (Everyone who got here early seems to be a Java guy)

One thing that's just occurred to me is that we're going about optimization all wrong. There are a lot of people putting a lot of work into optimizing code. That's cool, and I do that too. I think we've gotten about to the point where we can't go any further. Optimizers in general try to order and fiddle with code to make it run as fast as possible. Woohoo! But... At some point you just can't run code any faster, and the only way to go faster is to not run code. That's something that isn't well managed right now.

Sure, optimizers do what they can to find dead code, or find code that only needs to be run once rather than multiple times, but by the time most optimizers get code it's far too late to do this with any real effectiveness. What really needs to happen to make this work is to introduce more HLL constructs and annotations that allow for it. Lazy execution and memoization are two biggies, but there are others. This desperate lack of information (and it's not type information, really) makes things more difficult, which is a shame. It's a lot faster to not do something than it is to actually do something.
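The two biggies can be sketched in a few lines of Python. The point isn't the implementations -- it's that an optimizer can't discover "this code never needs to run" unless the language gives the programmer a way to say so:

```python
from functools import lru_cache

# Memoization: the second call with the same argument runs no code at all.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Lazy execution: the work doesn't happen until (unless) someone looks.
class Lazy:
    def __init__(self, thunk):
        self._thunk, self._done, self._value = thunk, False, None

    def force(self):
        if not self._done:
            self._value, self._done = self._thunk(), True
        return self._value

expensive = Lazy(lambda: fib(300))   # nothing has been computed yet
```

Both of these are annotations a compiler could exploit aggressively -- but only if the HLL has a way to express them in the first place.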

Interesting -- looks like there's a big blind spot in the JVM community about embedding and extending the VM with native code. Seems like a big problem, but I dunno, since it's something that Perl/Python/Ruby have been doing for ages, and doing well. Go figure. Blind spots are what this thing's mostly about, though.

One of the talks was basically "We're thinking about solving the problem that Inline solves" only without knowing about Inline. So my being here hasn't been completely one-sided -- the perl/python/ruby folks have definitely solved some of the problems the Java folks haven't thought of, or have only started to think about.

Paolo's talk on mono's interesting. One thing he just said was the reason for mono was so they could write one binding to GNOME, rather than one per language. The implication there is that it's easier to write a compiler for any language to mono than it is to write the GNOME bindings. I'm not sure if that's what he meant, or, if it is, whether it's what he intended to imply. Wonder what that says about the difficulty of binding to GNOME. (Or what it implies about the difficulty of writing language compilers) Not that having a common back end's a bad thing, I just find the implications amusing.

Now this is interesting. On the Need for Data Management Primitives in a VEE makes the point that SQL engines are really just bytecode engines. Specialized ones, but still... And this gives me another Evil Idea. I do like those. I really need to talk to the Postgres folks. (I think this comes after becoming independently wealthy so I have actual free time, but...)

As this goes on, one thing keeps coming to mind: Dammit, I want access to the MMU on the system! Even in a limited way! Because I do. Being able to put some memory blocks in place with trap-on-access code would make Parrot's life ever so much easier.

Oh, yeah, and IBM puts on a good spread. Which is a nice thing. :)

Posted by Dan at 05:53 PM | Comments (6) | TrackBack

September 10, 2004

Suboptimal optimizing

We use Postgres at work, as the back end database for the New Project. (The same project I'm writing the compiler for) It's nice, has the features we need (need those views and triggers), and is snappy. Well... mostly snappy.

I've been finding that my SQL is just... not fast. Slow, in fact. Really, really slow. And the weird thing is that it isn't slow when I EXPLAIN the stuff -- as in, six orders of magnitude less slow. Which, you've got to admit, is getting into the darned significant range. I did the sensible thing here--I asked someone else what the heck's going on.

It turns out that my SQL (nearly all of it, dammit!) trips over an interesting quirk of Postgres' query optimizer. Y'see, Postgres' optimizer only optimizes LIKE conditions if it gets handed a string constant, so:

foo LIKE 'F%'

gets optimized, while

foo LIKE $1

where $1 is 'F%' doesn't. Now, all my SQL uses LIKE, and all of it uses placeholders, so I can use the PQexecParams call rather than PQexec. Doing it this way means I don't have to do any string escaping or suchlike stuff, which I'm all for not doing. The speed hit, though... Yow.

So, be aware, LIKE doesn't get optimized when you've a non-string-constant you're LIKE-ing against. (Those of you who switched over to PQexecParams to avoid SQL injection attacks for your apps may now groan if you like :)

Posted by Dan at 06:49 PM | Comments (3) | TrackBack

Sometimes things are just too surreal

For reasons I sometimes just don't understand, people seem to assume I know what I'm talking about. (Go figure) This is often a disconcerting thing, but it does have its upside--I get invited places to talk to folks, which is cool. (I like to travel :)

Next week's a workshop at IBM, "Future of Virtual Execution Engines", filled with a bunch of folks significantly more clever than I am. They say there may be video available of it. (Dunno if it's in-house only, or will be public. I'll find out, and I plan on taking a lot of notes)

Saturday October 2nd I'm giving a full day workshop in Cambridge MA, hosted by the Greater Boston ACM chapter. That should be fun -- six hours of mixed-mode parrot and perl 6 talks. Woohoo! Who knows, if I do my job right there you may well be able to write a compiler to target parrot by the time I'm done. That'd be quite keen. Who knows, maybe one of the local MIT denizens can be conned/coerced/enticed into putting together a working Scheme compiler...

Update: Found a link to the workshop.

Posted by Dan at 11:30 AM | Comments (2) | TrackBack

September 05, 2004

I scream, you scream, we all scream for ice cream

For reasons that seemed to make sense at the time, I'm the proud owner of three ice cream makers. (Two Cuisinart and a Krups. The Cuisinart ones are nicer, FWIW) And we've got a freezer that keeps things nicely sub-arctic and'll drop the chilling containers to a useable temperature in about six hours. And a bunch of us are getting together on Monday for a picnic. That means... ice cream!

So, besides the standards (vanilla, chocolate, mint chocolate chip, coconut, and lemon sorbet (don't use straight lemon juice for that. It's a bit... tart)) I've been fiddling some. I mean, why not? The ingredients are cheap, so it's only a matter of time, and with three chillers time's no big deal either. What I've found is that key lime pie comes through again.

The filling does, at least. Not a big surprise, since the filling for a key lime pie's just a lime custard, and a dead-simple one at that. I've used this stuff out-of-pie before (it makes a really nice filling between layers of a white chocolate cake) but never frozen. It's about time--frozen custard's darned good. Mix it up, throw it into a 300 degree oven for fifteen or twenty minutes (use a shallow dish and make sure the edges are clean or you'll have burned custard. Ick), cool it down, then freeze. Mmmm! Frozen lime custard.

It works well with lemon and orange as well as lime, too. Dunno how grapefruit works, though I'm not inclined to try. I think I'm going to give it a shot with pineapple and raspberry, at some point. The nice thing about this is that even if you might not get a really good custard texture in a pie (because something in the fruit retards gelling, or you need so much fruit juice that the mix is too watery to gel when refrigerated) it doesn't make that much difference, since you're going to be freezing the stuff anyway, and cold generally wins.

The basic custard recipe, for the interested, is simple. One 14 oz can of sweetened condensed milk, two egg yolks, and 3/4 cup of lime, lemon, or orange juice. Mix up, bake in a shallow pan at 300 degrees for 15-20 minutes, and you're set. (The pan should be shallow, like a pie or cake pan, otherwise the custard in the center won't be properly cooked by the time the outside starts to burn) From there it's good for all sorts of stuff.

Might even make a good mix-in for a buttercream frosting. I'll have to try that at some point...

Posted by Dan at 03:33 PM | Comments (4) | TrackBack