Tuesday, December 14, 2010

The D2 Programming Language

Just began looking at this one last night. It's available, with source, from Digital Mars (http://www.digitalmars.com/d/2.0/), and it's quite interesting.

It is definitely a general purpose programming language, and addresses many areas where people want to pull out their hair with C++. It seems it can compete directly with Go (Google's programming language) in the area of concurrency, has a system of generics, and is what I would call a "more than complete" language :-).

It's actually pretty big, and many people might claim that you don't have to understand what you don't use. That's only true if you're the only one authoring code in this language, or if you follow a set of restrictions limiting you to a subset of the language.

Go, on the other hand, is pretty small, and fairly simple. One can understand the entire language fairly quickly by just reading the specification. It's quick to learn, the tools are quick, and the code runs reasonably fast.

D2, at least with the Digital Mars compiler, is pretty fast too. It can be executed as a "script" of sorts, making it nice for system administrative tasks where you often need the source to be right there.

Both languages seem to do quite well at achieving their stated goals and philosophies, and I expect them both to become more important to know as time goes on.

I hope to see both grow in popularity in the not too far off future.


Sunday, November 28, 2010

Goroutines vs function literals (closures)

Goroutines are a kind of heavyweight way to deal with a situation where you just want some kind of lazy evaluation. Say I would like to process a file line by line; the basic guts of it look like this with a goroutine:

func lineStreamer(out chan<- string) {
	file, err := os.Open("/usr/share/dict/words", os.O_RDONLY, 0666)
	if err != nil {
		panic("Failed to open file for reading")
	}
	defer file.Close()

	reader := bufio.NewReader(file)
	for {
		line, err := reader.ReadString(byte('\n'))
		if err != nil {
			// Do something interesting here perhaps other than returning a line.
			// On EOF (or any error) close the channel so the consumer's loop ends.
			close(out)
			return
		}
		out <- line
	}
}

This greatly simplifies the act of opening the file and dealing with bufio, and gives me an interface I can just read lines (or processed lines) from on a channel. But it seems kind of slow, running at about 2.04 to 2.07 seconds on my MacBook Pro with no runtime tuning. If I raise GOMAXPROCS to 2 I'm getting between 1.836 and 1.929 seconds. GOMAXPROCS at 3 is getting me a fairly regular 1.83 seconds.

This got me thinking about how I'd do something like this in other languages. I don't think I'd need coroutines to do it in Scheme for example, as I could do some delay/force thing to get stuff evaluated in chunks.

This led me to the following, possibly non-idiomatic version of a Go program using function literals.

type Cont func() (string, os.Error, Cont) // string, error, and a function returning the next

func lineStreamer(file *os.File, reader *bufio.Reader) (string, os.Error, Cont) {
	line, err := reader.ReadString(byte('\n'))
	return line, err, func() (string, os.Error, Cont) {
		return lineStreamer(file, reader)
	}
}

To evaluate all the lines I can do something like the following:

s, err, next := lineStreamer(file, reader)

for err == nil {
	fmt.Printf("%s", s)
	s, err, next = next()
}

And my run times are down to about 1.2 seconds.

I guess my question is: is this idiomatic or not?

Tuesday, June 1, 2010

OMG C++?

I had an interesting problem to solve involving some code that was essentially driven this way:

string line;
while (getline(cin, line)) {
    // process line
}

A coworker of mine suggested that this would only process a line at a time, which it will, but I was wondering if it was reading a line at a time as well as just serializing activity based on lines read. In essence I wondered if the input was buffered for cin.

On my platform I'm testing with, Mac OS X Snow Leopard, it appears that no buffering is really going on.

Here's some code to show what I mean:

void show_stats () {
    if (!cin) {
        cout << "Stream is broken or closed" << endl;
    } else {
        cout << "Available bytes buffered: " << cin.rdbuf()->in_avail() << endl;
    }
}

This looks at cin's underlying streambuf implementation and checks whether there are any available bytes in its buffer. When there are no bytes in the buffer, the istream calls the internal streambuf's "underflow" function to go get more data, and adjusts the buffer to reserve some number of "put back bytes".

What I found was that at no point was I seeing any buffered input coming in for cin, so I decided to write my own streambuf and subsequent istream classes to deal with both buffering and any file descriptor (unix pipe, socket, file etc).

#include <cstdio>
#include <cstring>
#include <streambuf>
#include <unistd.h>
#include <iostream>
#include <errno.h>

class fd_inbuf_buffered : public std::streambuf {
    int fd;
    const int bSize;
    char * buffer;

public:
    fd_inbuf_buffered (int _fd, int _bSize=10) : fd(_fd), bSize(_bSize) {
        buffer = new char [bSize];
        // The get pointer should not be at the beginning of the buffer, because
        // it limits the ability to do put back into the input stream should
        // there be a need to. Ideally that situation does not come up, but we
        // leave room for 4 bytes, by pointing all 3 locations to 4 beyond the
        // beginning of the buffer.
        // 4 was the size used in an implementation in Josuttis' "The C++ Standard Library"
        setg( buffer + 4,  // beginning of putback area
              buffer + 4,  // read position
              buffer + 4); // end position
    }

    ~fd_inbuf_buffered () {
        delete [] buffer;
    }

protected:
    // Underflow is what fills our buffer from the fd.
    // If we don't override this, we get the parent, which just returns EOF.
    virtual int_type underflow () {
        // read position before end of buffer
        if (gptr() < egptr())
            return traits_type::to_int_type(*gptr());

        // must limit the number of characters previously read into the putback
        // buffer... 4 maximum
        int numPutback = gptr() - eback();
        if (numPutback > 4)
            numPutback = 4;

        // Copy up to the putback buffer size characters back into the putback
        // area of our buffer.
        std::memcpy (buffer + (4 - numPutback), gptr() - numPutback, numPutback);

        // read new characters, retrying if interrupted by a signal
        int num;
    retry:
        num = read(fd, buffer + 4, bSize - 4);
        if (num == 0)
            return EOF;
        else if (num == -1) {
            switch (errno) {
            case EAGAIN:
            case EINTR:
                goto retry;
            }
            return EOF;
        }

        // reset buffer pointers
        setg(buffer + (4 - numPutback), buffer + 4, buffer + 4 + num);

        return traits_type::to_int_type(*gptr());
    }
};

struct fd_istream : public std::istream {
    fd_inbuf_buffered buf;
    // The base class is initialized first and only stores the pointer to buf,
    // so handing it &buf before buf is constructed is safe here.
    explicit fd_istream (int fd, int bufsz) : std::istream(&buf), buf(fd, bufsz) {}
};

Now I can declare an istream like so:

fd_istream my_cin(0, 1000);

Where 0 is the numeric file descriptor for stdin and 1000 is the buffer size in bytes.

Because I went with the standard IOStream library, as opposed to just writing C style IO directly, I can use it in the same way I'd use any istream. I can use it with iterators or algorithms from the standard library, and I can even use it with getline as you can see below.

void show_stats () {
    if (!my_cin) {
        cout << "Stream is broken or closed" << endl;
    } else {
        cout << "Available bytes buffered: " << my_cin.rdbuf()->in_avail() << endl;
    }
}

int main () {
    string line;
    while (getline(my_cin, line)) {
        cout << line << endl;
    }
}

In an example run, such as "cat /usr/share/dict/words | ./a.out" I see something like the following:

Available bytes buffered: 12
Available bytes buffered: 0
Available bytes buffered: 980
Available bytes buffered: 968
Available bytes buffered: 957
Available bytes buffered: 948
Available bytes buffered: 937
Available bytes buffered: 929
Available bytes buffered: 918

showing how the buffer fills each time underflow reads a chunk of input, and drains back down to 0 as lines are consumed. At 0 the stream calls underflow again, and I either get more data if it's available or, when I hit EOF, return that from underflow, causing the stream to terminate.

This stream will work for pipes, sockets and files, as long as the file descriptor is provided to the constructor. Now, because I reserve a putback area of 4 bytes, I have to allocate at least 4 bytes in my streambuf for the pointers to work properly. There are possibly better ways to deal with that, but for demonstration purposes, this works nicely.

C++ isn't always so bad after all. It just depends on how it's written.

Monday, May 31, 2010

Not usually a fan of IDEs... but

I'm thinking of trying to use Leksah as my primary Haskell development environment on the Mac. I like that they seem to be willing to incorporate Yi as their editing environment to some extent, and I'd like to see where that goes.

Wednesday, May 26, 2010

PLT Scheme is easy

Lots of nice frameworks too. A friend showed me some code he was working on that uses the Twitter APIs over HTTP to look at people's tweets (if they're not protected).

I thought this was cool, it was 7 lines of code. So I thought I'd wrap it up in a GUI.

Keep in mind I'm NOT a GUI programmer by trade, and that this was my very first venture into PLT GUI programming. It's easy to pick up, and now I've got something hideous that works.

#lang scheme/gui
(require net/url xml)
(define (u screenname) (string->url (string-append "http://api.twitter.com/1/statuses/\
user_timeline.xml?screen_name=" screenname)))
(define (f v) (match v (`(text ,_ . ,v) `(,(string-append* v)))
(`(,_ ,_ . ,v) (append-map f v)) (else '())))
(define g (compose f xml->xexpr document-element read-xml))
;(call/input-url (u "omgjkh") get-pure-port g)

(define dialog (new dialog%
[label "Twitter Screen Name Activity Grabulatrixatronulator"]
[width 600]
[height 100]))

(define textfield (new text-field% [parent dialog] [label "Enter a Screen Name"]))
(send textfield set-value "omgjkh")
(display (send textfield get-value))

(define newframe (new frame%
[label "Results"]
[width 1000]
[height 600]))

(define tf (new text-field% [parent newframe] [label ""] [min-height 500]))

(define (appender los)
(cond ((null? los) "")
(else (string-append (car los) "\n" (appender (cdr los))))))

(new button% [parent dialog]
[label "GITERDUN"]
[callback (lambda (button event)
(let ((text (appender (call/input-url (u (send textfield get-value)) get-pure-port g))))
(display text)
(send tf set-value text)
(send dialog show #f)
(display "here")(newline)
(send newframe show #t)
(display "here2")(newline)))])

(send dialog show #t)

Current Plan 9 Environment... and loving it!

I've got a Plan 9 CPU server running in VMWare Fusion on Mac OS X Snow Leopard. I ran into a few problems with the setup of this as VMWare Fusion's emulation of IDE disks didn't agree much with Plan 9. Changing to SCSI disks made all the difference in the world.

I followed and updated a little the Plan 9 wiki's instructions on setting up a CPU/Auth server, and then used it with drawterm, a unix program that works like a little Plan 9 terminal to connect to CPU servers, and all was good.

There's a project out there called vx32, which implements a userspace sandboxing/virtualization library that has been used for a port of the Plan 9 kernel. I grabbed the latest Mercurial snapshot of this code base and compiled it (after patching it up so Snow Leopard didn't complain about the deprecated ucontext.h stuff), and now I have a Plan 9 kernel (almost; it's not 100% the same) running as a terminal to connect to my Fusion CPU server.

So, now what? Well I may take a crack at the port of the Go language to Plan 9... when I get time to do this again.

Wednesday, May 19, 2010

Developing for the iPhone

The last time I used Objective-C, it wasn't 2.0. As such, I'm needing to brush up quite a bit on my skills. I'm not used to having the compiler generate my properties for me, or the dot syntax etc.

I'm also not a big fan of mixing garbage collection with no garbage collection, I feel that's a recipe for disaster, however I'm going to cautiously proceed anyway.

I will say that the Xcode tools have shown immense improvement since the last release. I wasn't a fan of them at all, and I was surprised that my muscle memory for emacs keybindings isn't wasted completely in the Xcode editor.

If I can keep my interest level high in this area, I may just splurge for that 99 dollar license to deploy applications.

Monday, May 17, 2010

Monday, May 10, 2010

Thoughts on Lazy Evaluation...

... I'll do it later.


So I got an iPad. I've had it about one month. I've been paying a lot of attention to the talk about how it doesn't do Flash, why it doesn't do Flash, how Apple is committing war crimes against humanity by disallowing applications authored in 3rd party tools etc etc.

As a developer who's spent a good bit of time working on different projects that scale from tiny little machines, to medium sized computers to giant supercomputing clusters (yes, I've been on several of the top 10 of the top500 list, writing software to squeeze performance out of them) I can tell you that flexible tool chains, great documentation, and great support do not always go hand in hand.

My opinion on that is that it's a bit sad that I won't be writing and running Haskell code on a non-jailbroken i(Phone|Pod|Pad) but that that's not a deal breaker for most people. Cocoa is a nice framework, with many years behind it making it great. Objective-C is a pretty cool language, (though I feel they should have kept it simpler, no garbage collection, all this automatic atomic update stuff can be confusing etc). Grand Central Dispatch and the libdispatch stuff is powerful, even in a raw C programming context, though some folks I know don't think it's well served to use it outside the realm of Objective-C. Having suffered programming with threads and locks, (even implementing my own locks on certain platforms) I'd say that this is a big step forward in thinking about concurrency and parallelism by means of organizing program code at a low level.

Yes, when you buy Apple's stuff, it's a bit more about doing things "their way" than doing things "your way". The limits Apple places on the hardware it supports with its operating systems, or the limits placed on programmers via the tool chains, are all really there for 2 reasons (in my opinion).

1. It keeps Apple from losing control of its own platform.
2. Apple can focus their engineering and support efforts on making a product that seemingly "just works" with all supported stuff, because the space of stuff to support is a lot smaller!

To me, neither of these things is inherently "evil", as some folks might like to convince you.

I should note that I've used my iPad at least once a day, every day, since I got it. To pay for it I began selling some of the stuff I figure I won't need anymore, such as my old iPod touch and my old laptop. So far it's been a great trade!

Wednesday, April 14, 2010

Just a thought or two on strictness

I've noticed that a lot of times when it comes to profiling programs in any language, it is often surprising where the big cost centers are. Maybe it's just due to experience, but in imperative languages I at least feel like I'm rarely surprised when I, or my tool chain, identify a cost center. With functional languages, I'm often surprised that certain implementations of functions are as costly as they are, while others are not. This is even more true in "lazy" functional languages like Haskell.

When you're dealing with strict evaluation by default, you can basically just read the program top down and get a good idea of what happens when, and how much data a particular chunk of code ought to be using, and for the most part you'll get this right with experience.

With a language like Haskell, sometimes all bets are off unless you know very well how a particular library is implemented. There are both strict and non-strict versions of Control.Monad.State, for example. The non-strict version almost requires you to read the code, think about how much work can be deferred to the last minute, and realize that those little deferred computations will have to be evaluated later. This can not only change the order of evaluation, and thus the data size of your program, but can also change its behavior a great deal.
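As a sketch of the kind of deferral I mean (the names here are mine, and this assumes mtl's lazy Control.Monad.State):

```haskell
import Control.Monad.State  -- the lazy version

-- Each modify just wraps another (+1) thunk around the state; none of
-- the additions actually happen until execState's result is demanded,
-- at which point the whole chain is forced at once.
countLazily :: Int
countLazily = execState (mapM_ (const (modify (+1))) [1 .. 100000 :: Int]) 0
```

Note that even Control.Monad.State.Strict only forces the result pair at each step, not the state itself, so a seq on the new state (or a strict modify, where available) is what actually keeps that accumulator evaluated.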

Data.Map has some update operations that are deferred (non-strictly evaluated) and some that are strict. It's important to read the documentation.
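For instance, and hedging on the version of containers in play, the classic pairing was insertWith and its strict sibling insertWith' (in today's containers the strict operations live in Data.Map.Strict instead):

```haskell
import qualified Data.Map as M

-- insertWith defers the combining function: after many updates the map
-- can hold a chain of (old + new) thunks for a hot key.
lazily :: M.Map String Int
lazily = M.insertWith (+) "k" 1 (M.fromList [("k", 1)])

-- insertWith' forces the combined value as it is stored, so repeated
-- updates don't pile up deferred additions.
strictly :: M.Map String Int
strictly = M.insertWith' (+) "k" 1 (M.fromList [("k", 1)])
```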

My advice to beginners is to try to understand strictness vs non-strictness (often called laziness) in Haskell as early as possible. It will give you great leverage to expressing some really beautiful solutions to problems, and save you from pulling all of your hair out or suffering any existential crises when trying to figure out why a program doesn't do what you thought.

All of that said, it's pretty easy for me to see why some people will decide that non-strict functional programming languages are not for them. It certainly takes a little mind bending to see the value in it if all you've ever done is imperative or strict functional programming, but I promise you the value is truly there. I also find that by challenging myself to learn how to think in these new paradigms I am simultaneously strengthening my understanding of the older paradigms a good deal more, so give it a shot and don't give up! There's help out there for you.

Saturday, February 20, 2010

NineP is on hackage!

I'm excited about it. There's still work to be done, but now that the module is up there in the experimental section I think there's a good chance to get a lot more feedback about it, and hopefully we can get some of these folks interested in distributed systems to use this nice, simple protocol.

Friday, February 19, 2010

9ph package for cabal/hackage

The 9ph code repository now has a fully cabalized module, and I'm waiting on a response to get permission to upload stuff to Hackage. This will be my first shared contribution to the Haskell community, though all I've done is create a simple test program and some of the administrative work around wrapping this code up. If anyone likes this, the credit should go to Tim.

I've got some ideas for enhancements, and ways to make a nice server/client API on top of this lovely encoding/decoding library that Tim created on top of the very excellent binary package for Haskell and the lovely Applicative module.

Monday, February 8, 2010

9ph works!

Finally, since last September, I've gotten around to messing with 9ph again. That's the 9P2000 implementation in Haskell that Tim Newsham basically wrote and sent to me to play around with.

I had started such a project on my own at one point, but Tim totally beat me to it, and rather than just re-implement the wheel, I figured I'd check out what he'd done and checked it in.

Today, I wrote a function that runs an IO program yielding a socket connected to a local 9P2000 server running on port 6872 (or that has errored out, actually). From there I was able to create messages for Tattach, Topen, and Tread, and could explore the errors and successes of the server I was talking to.

For my experiments I just used the Inferno operating system with

"styxlisten -A 'tcp!*!6872' export /"

This just exports the local / namespace that the process running the styxlisten command can see to the world, listening over TCP on port 6872.

This library should be enough to write scripts for what is quickly becoming my favorite X11 window manager (if you must use one): wmii. wmii actually exposes a bit of functionality via a 9P server that you can mount from v9fs, Inferno, or any other 9P client. I figured it'd be nice to be able to write some Haskell to configure wmii, so this might be my first use for this library.

I'm hoping to get back to this again before long. This was good progress. I'm going to check in on Tim and see if he's got any more updates he feels he'd like to push. If not I'll just work from this fork.

Wednesday, February 3, 2010


Iteratee really looks promising on paper, and people using it seem to think it's really great. I've been put off a bit by what looks like a rather complex interface, but decided last night to take a crack at it.

What I've written below is an iteratee wrapper around "cat /etc/passwd" output using runInteractiveCommand.

module Main where

import Control.Monad.Trans
import Data.Iteratee.Base
import Data.Iteratee.Base.StreamChunk (ReadableChunk (..))
import Data.Iteratee.IO.Handle
import System.Process
import System.IO

-- For some reason this signature is wrong, but I'm not sure why...
--handleDriver :: (MonadIO m, ReadableChunk s el) => IterateeG s el m a -> Handle -> m a
handleDriver iter h = do
  result <- enumHandle h iter >>= run
  liftIO $ hClose h
  return result

main :: IO ()
main = do
  (_, outp, _, _) <- runInteractiveCommand "/bin/cat /etc/passwd"
  handleDriver (stream2list :: IterateeG [] Char IO String) outp >>= putStrLn

handleDriver just runs enumHandle with an iteratee (in my case stream2list) over the Handle, in blocks (not character by character) that are specified by the implementation, returning the result. That result is then printed to stdout by putStrLn.

This is a little bit like interact for Handle in that I could have used something more advanced than stream2list to process the result of "/bin/cat /etc/passwd".

I'm not too excited about the fact that I got the type signature on handleDriver wrong, and I'm also a little bit put off by the type signature on stream2list.

So what's the difference between this approach and an interact-like styled approach? For one, iteratees work like the function that's supplied to a fold operation over a collection of data. In this case the collection is being produced dynamically via the enumerator. The iteratees themselves work like little parsers that can be composed in a monadic sense. Errors in IO and termination of a stream get propagated automatically up through the system nicely.

Why is lazy IO then not so great? Look at the signature for interact:

interact :: (String -> String) -> IO ()

Interact is a function that takes a function of String to String and produces IO. This means it ostensibly consumes all the input on stdin, applies the provided function to convert the whole String to a String, then prints that string to stdout. It doesn't have to read the whole input in one shot, though, because of lazy evaluation: Strings are [Char], and the list structure in Haskell is non-strict in its construction. It's like pausing the construction of that list to do some processing on it, and going back to it, as with coroutines, except that the system is doing it behind the scenes.
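A tiny example of the style, nothing more than an upper-casing filter:

```haskell
import Data.Char (toUpper)

-- Because String is a lazy list, input is consumed and output is
-- produced incrementally, rather than all of stdin being read up front.
main :: IO ()
main = interact (map toUpper)
```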

What happens in interact when an error occurs? How does the pure function of type (String -> String) even know about exceptions in the processing? This is where iteratee is an improvement on traditionally lazy IO.

Let's assume we wanted to write a lazy version of interact for a handle called hInteract.

hInteract :: Handle -> (String -> String) -> IO ()

I believe this function could be used safely as follows:

withFile "/etc/passwd" ReadMode ((flip hInteract) id)

withFile uses bracket internally to ensure that hClose is called on the handle, and all seems well; still, I don't think we necessarily understand how resources get used. Oleg, the father of Iteratee, posted this message a few years back explaining more of the benefits of Iteratee.

However, it seems that there is now a new lazy IO mechanism available that is safe. I've not had any time to check into this, but I plan to in the coming days.

Having written an Expect-like Monad, I'm interested in the aspects of error handling and precise resource control, because the code I'm writing really needs to be able to run to as close to forever as I can get.

Friday, January 15, 2010

When types and definitions aren't enough.

Was in #haskell on freenode a bit this morning, and someone mentioned something about how they were not exactly excited about the new rules for code formatting on if-then-else expressions.

I mentioned that I try to avoid if-then-else and case as much as possible by using types like Maybe that have 2 kinds of constructors, namely Nothing and "Just a" (for Maybe a).

I said that I can use the MonadPlus instance for Maybe a to get a lot of what is available in if-then-else clauses.

let x = someExpression
in if x == Nothing
     then 9
     else fromJust x

could be written as

let x = someExpression
in fromJust $ x `mplus` Just 9

mplus is defined for Maybe as evaluating the first parameter, and if it is not Nothing, it returns it, otherwise it will return the second parameter. It's essentially an "or" operator.
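For reference, the instance in GHC's base library amounts to this left-biased definition:

```haskell
instance MonadPlus Maybe where
    mzero = Nothing

    Nothing `mplus` ys = ys
    xs      `mplus` _  = xs
```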

However, someone pointed out that there's absolutely no requirement for mplus to be written this way. It can still live up to all the rules and restrictions of MonadPlus by short-circuiting evaluation on the second argument instead of the first. Sure, it's sort of a de-facto first then second sequencing of evaluation, but it is not as safe as say "if-then-else".

I wonder now about the Applicative module as well, and specifically the Alternative instance for Maybe.

I could just as easily write

let x = someExpression
in fromJust $ x <|> Just 9

But do we fall into the same trap of no guarantees? Is there a rule in Applicative enforcing the short-circuit of the first argument before the second?

Much code is written in the Applicative style for Parsec, so I really hope this is well defined.
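For what it's worth, GHC's Alternative instance for Maybe is left-biased in exactly the same way as its MonadPlus instance, though that is an implementation choice rather than a law of the class:

```haskell
instance Alternative Maybe where
    empty = Nothing

    Nothing <|> r = r
    l       <|> _ = l
```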

Monday, January 11, 2010

What's Missing in the Haskell Community?

Documentation is often cited as possibly the #1 item that needs to be improved with respect to Haskell. It depends on which modules you use, but I have to agree. It's quite difficult to uphold the claim that you don't need to understand Category Theory in order to employ a Monoid or Monad when you run into Monoid instances like Endo and don't know what to make of them because the documentation doesn't really describe how to use them. That implementation of Monoid is likely useful for some kind of programming you want to do, and you'll end up struggling with a solved problem. Some people have been stepping up to improve the documentation, and that's really wonderful, but I think there's still some work to be done there.

The best way to learn Haskell in general, for me, has been to get the great books that are available out there. Real World Haskell is freely available online (but please support the authors and get a copy if you're finding it useful). Search for Haskell on Amazon.com and you'll find that the reviews are a really good guide to picking which ones might be right for you. If you're really new to the language, Dr. Graham Hutton's book is outstanding. There's even been a series on MSDN's Channel 9 walking through the chapters of this book, explaining how to solve some problems and think like a functional programmer.

To keep up to date with Haskell developments, reddit has been invaluable. You'll find blog posts, updates about new Haskell packages, and general community news and related discussion topics there.

So what's still missing?

I can tell you that over the years I've spent messing around with Haskell, trying to understand how it works, why it's appropriate for certain kinds of problem solving, and why people seem to really like it so much, the community has been pretty amazing with respect to fueling the flames of curiosity.

Where I think we might be needing a little more help is in the following areas of Haskell.

Explaining where laziness, or non-strict evaluation, is an advantage over strict evaluation. Perhaps this requires learning to think differently about the code we write; in much the same way that it can be a leap to get to recursive programming, I feel this might be a slightly wider gap to cross mentally. (But then again maybe I'm just getting old...)

Showing more examples of unintended data growth, or space leakage, due to the lack of strict evaluation. In languages like C, you're in direct control of when memory is allocated or deallocated. This is generally considered a "bad thing" for a lot of tasks, including systems programming if you are signed up with the Go camp. A side effect of non-strict-by-default seems to be that you have to understand how the code you're writing will be evaluated from a wider view than you need in order to use malloc and free, or new and delete. It seems that unless you've somehow been taught how to recognize the patterns that can cause a space leak, you're basically doomed to run into some sharp corners that others already seem to understand how to avoid.
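The canonical example of that kind of unintended growth is foldl:

```haskell
import Data.List (foldl')

-- foldl defers every (+), building a chain of a million thunks before
-- anything is forced; foldl' forces the accumulator at each step and
-- runs in constant space.
leaky, strict :: Int
leaky  = foldl  (+) 0 [1 .. 1000000]
strict = foldl' (+) 0 [1 .. 1000000]
```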

Real World Haskell has a great chapter on optimization, but perhaps it's time for an "Optimizing Haskell" book too? There's lots of good advice scattered all over the web, and the experts are not shy to offer you help should you ask. Sometimes I think it's difficult to even ask the right questions when you're confused though, and I suspect this may turn some folks off to Haskell.

Wednesday, January 6, 2010

L4 and Plan 9 or Inferno or both?

Some folks already started in on a Plan 9 port to L4 it seems (PDF), but I'm not sure how far they've gotten with it. I've been peeping at L4 on and off for a long time now, and OKL4 has shipped successfully in a few ARM phones, where it's used to host Linux alongside a Qualcomm OS that drives the phone's radios and such. Neat stuff.

Been wondering if Plan 9 or Inferno and L4 are really a good marriage and what benefits could be added by leveraging Plan 9's namespace based resource management and L4's powerful IPC mechanisms.

As I've had a little more spare time lately, I've been digging around looking into L4 again, and I'm interested in exploring some ideas a little more deeply.

There's a few different implementations of L4 to look at...

OKL4 is now basically an ARM-only platform in its latest releases. Fiasco is a Pentium-targeted L4 implementation that has a userland implementation that might be fun to work with (Fiasco-UX), and Pistachio is still being worked on by at least a few people, with the latest changes coming in as of yesterday. Pistachio also supports a lot of architectures.

I'm tempted to play with each of these, but my problem has always been one of focus when it comes to these spare time projects as free time for me is usually at a premium.

Tim Newsham has been following the seL4 stuff, and OKL4 is migrating towards those APIs. His vote is I shoot for OKL4, so I believe that's where I'm going to start. There's a good community around that implementation, and a commercial pressure to keep things working nicely.

But as they say, talk is cheap... Let's see what I learn.


Looks like a lot of progress was made by others who have already started this work!