I drive the boys to Birmingham and back at least once a week for practice, and the idea of building a personal cluster came to me three weeks ago, somewhere around Alexander City. I listen to podcasts while driving because the boys don’t usually talk; they mostly sleep or read or earbud out the world, and when they do talk it’s mostly to each other, not to me. Not that I take it personally; they’re teenage boys.
One of the podcasts I listen to is Software Engineering Daily, partly because most of the episodes are interesting and partly because there are a lot of them: every two or three months I can load up my phone with downloaded episodes, hit play while I’m driving, and have something fresh, and no matter how much I drive I probably won’t keep up, because of the ‘daily’ part of it. And so, a couple of weeks ago, I was hitting Alexander City and listening to the episode about KubeCloud.
KubeCloud is an academic project that put Kubernetes on Raspberry Pis, and for one reason or another that resonated with me and sounded doable. I suppose if I wasn’t already a bit predisposed toward Kubernetes, I’d probably have skipped the episode, like I do when Jeff wades in as an expert on education. After a bit more googling and reading Hacker News, the idea that building a cheap local cluster was doable despite my probable ineptitudes seemed reasonably confirmed.
The first order of business was two decisions:
- Kubernetes, really?
- Raspberry Pi, really?
For what it does, Kubernetes looks amazingly easy to use. But it makes sense to consider that what it does is facilitate running data centers at Google’s scale, and that means that making the lives of systems engineers with a few years of data center experience easier within three months is plausible evidence supporting the ‘easy to use’ claim. The Kubernetes documentation goes along with that view. That’s not a knock on the project or Google or anything, just an observation that the software and community and ecosystem reflect the structure of the business behind them, and that business is more toward the cathedral end of the organizational spectrum.
So the initial attempt will be Docker Swarm. Yes, there are probably technical tradeoffs, including Swarm being less mature and possibly more likely to experience breaking changes. The advantage for a first pass is that Swarm is more of a scaling up from Docker rather than a scaling down from a data center, and up is clearly the direction I’m looking to scale. The second factor that puts Docker Swarm in my plan is that Raspberry Pi officially supports Docker, or vice versa, or something like that.
Once I started researching clusters and pricing out hardware, it seemed like there were alternatives to Raspberry Pi. I mean damn, those NanoPis look good and cheap, and I don’t really need WiFi or four USB ports [or even video for that matter], and gigabit ethernet would be cool. I went with Raspberry Pis anyway due to that whole what-does-easy-to-use-mean? thing. In this case I’m scaling down to an SOC [system on a chip], not up from microcontrollers, and there were suggestions in my research that an arbitrary SOC board may not receive long-lived, robust, consumer-grade support. Specifically, the AllWinner SOCs are just another embedded system component, and Linux support is via a community-run BBS. Raspberry Pi has its own site on StackExchange. So does Ubuntu.
If I wasn’t already over the tipping point of spending more money to potentially make my life easier, the availability of Ubuntu images for the Raspberry Pi did it. System administration for Linux is, in my opinion, why the year of the Linux desktop is always next year, and though I’m enough of a masochist to run Linux on the desktop, I’m not enough of a masochist right now to run something other than Ubuntu if I can help it. Never mind trying to run Ubuntu on a piece of hardware with unknown proprietary drivers. The project looks hard enough already.
The plan is looking like Docker Swarm on Raspberry Pi’s.
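For what it’s worth, the reason Swarm reads as “scaling up from Docker” is that forming the cluster is just a couple of commands on top of a plain Docker install. A rough sketch of what that first session on the Pis might look like; the IP address is a placeholder for whichever Pi ends up as the manager, and the join token is printed by the init step:

```shell
# On the Pi chosen as the manager (192.168.1.100 is an assumed address):
docker swarm init --advertise-addr 192.168.1.100

# The init command prints a join command with a token; run something like
# this on each worker Pi (<worker-token> comes from the manager's output):
docker swarm join --token <worker-token> 192.168.1.100:2377

# Back on the manager, confirm all the nodes showed up:
docker node ls
```

This is a sketch under the assumption that Docker is already installed on each Pi; the actual addresses and token come from your own network and the init output.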