Saturday, November 28, 2009

Lazy Programming, too much, too little?

Another Haskell blogger wrote an interesting article on how optional laziness doesn't always work out so well.

However, I'm finding there are a lot of functions that, to my imperative, strictly-trained programmer's mind, don't make much sense as lazy by default.

foldl is a good example. Without GHC's strictness analysis kicking in (for instance, when compiling without optimization), it builds up a chain of unevaluated thunks and turns into a space leak right away.

I find myself kind of hard pressed to come up with a good use for a lazy foldl where it isn't obvious that I'd rather have used foldl'.
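A minimal illustration of the difference (the function names here are just for the example):

```haskell
import Data.List (foldl')

-- With plain foldl, the accumulator is never forced, so summing a list
-- builds a chain of thunks (((0+1)+2)+3)... that only collapses at the
-- very end: a space leak proportional to the list length.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- foldl' forces the accumulator at each step, so it runs in constant space.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0
```

Both produce the same answer; only the space behavior differs, which is exactly why the lazy version is so rarely what you want.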

Wednesday, November 11, 2009

Long running Haskell applications

I've been using Haskell in a serious way for about two years, and professionally for about a year and a half now: yes, I am one of the lucky ones who gets to use Haskell at work.

The ride has been pretty smooth most of the time. The type system helps me rule out certain classes of bugs, and since rather large chunks of the programs are pure, they're easy to test. The interpreter allows experimentation and iteration on ideas that can then be composed into the final compiled programs. All of this gives me a good deal of confidence, up front, that the code I'm writing is correct to some degree, something I've come to expect from functional programming languages over the years and greatly appreciate.

However, I feel compelled to point out that things aren't always so smooth either. I spent the better part of a weekend and a Monday tracking down a space leak in a program that simply was not allowed to leak space. I have a monad stack of ReaderT/StateT/IO that I use to communicate with a device through a passthrough program that speaks CAN, for the purpose of creating a serial console connection where there is only a CAN bus. The Haskell program is responsible for managing the data found at the other end of the serial connection, and supports operations on the device through the serial channel via a simple text protocol while "forever" polling data on the serial endpoint.

What I had done was the equivalent of

pollerLoop :: Poller ()
pollerLoop = forever pollOnce

Where Poller is my monad stack.
pollOnce is defined as,

pollOnce :: Poller ()
pollOnce = do
  checkCommandChannel -- see if there's a pending command to run
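For reference, the stack looks roughly like this (simplified, with stand-in environment and state types; the real ones hold the CAN connection details and per-device polling data):

```haskell
import Control.Monad.Reader
import Control.Monad.State
import qualified Data.Map as Map

-- Stand-in configuration type, not the real one.
data PollerEnv = PollerEnv { pollIntervalMs :: Int }

-- Stand-in for the real polling state, which includes a Data.Map.
type PollerState = Map.Map String String

-- ReaderT carries the fixed configuration, StateT the mutable polling
-- state, with IO at the bottom for the actual device communication.
type Poller = ReaderT PollerEnv (StateT PollerState IO)
```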

Yes, this application is multi-threaded. I have a logger thread, a thread watching standard input of the program for queries and issuing commands to the endpoint. I have a thread per Poller, and the ability to poll devices simultaneously.

The poller includes an implementation of my little "expect" syntax, based on a naive use of hGetChar, checking for a desired result or eventually timing out. The data-inefficient version is a real screamer, beating the heck out of my Parsec or even ReadP versions, but because of the way I wrote it, with a lot of reversing, prefix checking, and substring slicing into temporary areas, it's not useful for large input blocks over a long time frame.

Still it's so fast that it's appropriate for certain sections of code. I measured and verified this via space profiling to see the real runtime heap utilization (nice feature btw, I'd be dead without it right now I think).

So what's the problem, you're probably thinking? Well, it turns out that part of my state in StateT is a Data.Map, and all my polling and parsing of expect-passing blocks caused updates to that Map. Coupled with the language's default laziness, this caused a bit of a "bomb" of PAPs (partial applications of functions).

Project-wise, it was an ill-timed discovery of a space leak.

I tried sprinkling $! and seq all over the place and rewriting big chunks of code that used Text.Regex.Posix to use Parsec, but only got incremental improvements. The growth problem still existed and would eventually exhaust memory. This was a real problem, as this application was not supposed to stop as long as the rest of the management system was ok. It could run for months or even years!
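Part of why seq and $! helped so little is that they only evaluate to weak head normal form: applied to a Map, they force the outermost constructor, not the values stored inside, so thunks keep piling up in the values. A sketch of the distinction (a hypothetical helper, not code from the actual program):

```haskell
import qualified Data.Map as Map

-- Map.insert k v is lazy in v: the thunk for v goes into the map
-- unevaluated, and seq-ing the whole map afterwards doesn't help,
-- because that only forces the map's root constructor.
-- Forcing the value *before* it goes in is what stops the buildup.
insertStrict :: Ord k => k -> v -> Map.Map k v -> Map.Map k v
insertStrict k v m = v `seq` Map.insert k v m
```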

I went through a whirlwind of emotions, and considered that perhaps I should be working a different job. Perhaps I could be a lion tamer? It turned out what I thought was a lion was really an anteater though... but that's literally a different story.

It turns out that by looping not inside the monad, but over the execStateT/runReaderT expression, I could pull all the state out and re-inject it into another execStateT/runReaderT on each poll. This forced the strictness I needed on all the state data, made the system respond faster, and, best of all, not crash!
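The shape of the fix looks roughly like this (simplified stand-in types and a bounded loop for illustration; the real program loops forever):

```haskell
import Control.Monad.Reader
import Control.Monad.State
import qualified Data.Map as Map

type PollerEnv   = Int                -- stand-in for the real config
type PollerState = Map.Map String Int -- stand-in for the real state
type Poller      = ReaderT PollerEnv (StateT PollerState IO)

-- Stand-in for the real poll body: just bumps a counter in the Map.
pollOnce :: Poller ()
pollOnce = modify (Map.insertWith (+) "polls" 1)

-- The loop now lives *outside* the monad stack. Each pass runs
-- execStateT/runReaderT to completion, pulls the final state back out
-- into IO, forces it, and threads it into the next pass, so thunks
-- can't accumulate inside the state across iterations.
pollerDriver :: Int -> PollerEnv -> PollerState -> IO PollerState
pollerDriver 0 _   st = return st
pollerDriver n env st = do
  st' <- execStateT (runReaderT pollOnce env) st
  Map.size st' `seq` pollerDriver (n - 1) env st'  -- forces the map's spine
```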

Diagrams below:

This first one is the "before" picture. It shows the data growth by cost center in my code. As you can see things are getting worse almost linearly as I poll.

This picture illustrates the result of pulling the state out of the Monad, and re-injecting it, forcing it to be evaluated. As you can see, I've got much more manageable memory utilization.

This final one shows the new algorithm running with 2 threads, one spawned a few seconds into the run. You can see the initial burst of the fast but inefficient "expect" algorithm, followed by a much more regular memory utilization pattern.

I'd like to thank all the folks in #haskell on Freenode who gave me suggestions and hints about my vague descriptions of data growth and laziness versus strictness. Haskell has a great user community, probably one of the most helpful out there. One can learn a lot just by asking questions of the right mentors, as well as by reading haskell-cafe and the various blogs that are out there.

I'm hoping that this anecdote is useful to someone.