Ben Rudgers

Plan is Raspberry Pi, what should I buy?


I considered going with Raspberry Pi Zeros at $5.00 a pop, but the purchase limit seems to be one per order. What about Raspberry Pi 2s? They’re not readily available, and where they are, they’re not discounted. So the only game in town if you’re playing Raspberry Pi is the Model 3. They run about $35.00, which is not exactly cheap except in the amazing historical sense that it’s a 64-bit quad-core computer with graphics and WiFi for less than the cost of a decent mouse.

However, that computer doesn’t come with a power supply. The Raspberry Pi 3 is rated at a maximum of 2.5A @ 5 volts, though it’s worth keeping in mind that the 12.5 watt rating is conservative and allows for significant additional hardware running on the GPIO bus. Since multiple Raspberry Pis were in the plan (it’s a cluster after all), the idea is to use a USB phone charging station for power rather than purchasing multiple independent supplies.
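As a sanity check on the shared-supply idea, here is a rough power budget (a sketch only; real draw varies with load and attached peripherals, and the 60 watt figure is the charging station from the bill of materials below):

```python
# Rough power budget for running several Raspberry Pi 3s off one
# multi-port USB charging station. 12.5 W is the Pi 3's maximum rated
# draw (2.5 A at 5 V); typical draw is lower.
volts = 5.0
amps_max = 2.5
watts_per_pi = volts * amps_max           # 12.5 W worst case per board

num_pis = 4
cluster_watts = num_pis * watts_per_pi    # 50.0 W for four boards

charger_watts = 60                        # the 60 W charging station in the BOM
headroom = charger_watts - cluster_watts  # 10.0 W to spare

print(f"{cluster_watts} W peak draw, {headroom} W headroom on a {charger_watts} W supply")
```

Even at the conservative worst-case rating, four boards fit on one 60 watt station with a little room left over.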

SD Cards

One of the ideas I’ve been considering is hardware as a useful abstraction, and part of what intrigues me about building a local personal cluster [LPC, anyone? yes, I just made that up] is in terms of hardware abstractions — beginning with: what does ‘software as a service’ really mean?

Deeper down, I suppose one starting point is the psychology of cattle versus pets. I don’t name plastic forks or each gallon of gasoline, and I want to get away from the idea of ‘running it on my laptop.’ Part of the reason is that my laptop accumulates more state each time I run into a problem or read about an improvement that apt install addresses. Along with all the utility comes a lot of cruft, or at least complexity, to the point where I’ve gone to install something and found I already had it. I’ve only been running this SSD for about six months, but the thought of backing up, wiping, and restoring to some base state looks painful and short lived.

One of the ideas I played with over the past few months was implementing a bit of the old floppy drive paradigm with thumb drives. Basically, keeping a task/project context on removable media with the goal of organizing high levels of the file tree in hardware. It didn’t really work so well except at the level of operating systems, where I swap SSDs to go back and forth between Windows 10 and Ubuntu. That’s a level of granularity that seems to work for me: generally, I don’t need access to my Windows 10 state from Linux or vice versa.

Though I’m going with Docker Swarm, my LPC plan is to make switching to a Kubernetes installation as simple as swapping floppies (or in this case SD cards), and so the first hardware decision was to bulk purchase more than two micro-SD cards per Raspberry Pi. As a practical matter, the choice of Raspbian/Ubuntu is far more likely to be an early card-swap scenario than Swarm/Kubernetes (or RancherOS, after listening to that SE-Daily episode last weekend).

Speed Class

Something I realized about SD cards is that the primary use cases are photography and video, and hence the rating classes are based on write performance. Two SD cards rated Class 10 can have substantially different read performance. For the workloads I’m imagining (and I realize I’m imagining), read performance is a more probable bottleneck than write performance.

It’s also worth keeping in mind the available network bandwidth of a Raspberry Pi. The ethernet is 10/100Mb, and 100Mb is 12.5 megabytes per second. That’s not much more than the minimum possible write speed of a Class 10 SD card (Class 10 guarantees 10MB per second), and my gut says that out in the real world, it’s unlikely that the Raspberry Pi’s built-in WiFi would ever reach the 80Mb (that’s 10MB) per second necessary to saturate a Class 10 card’s write speed.
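The arithmetic above is just bits-to-bytes conversion, which is easy to get backwards, so here it is spelled out:

```python
# Network bandwidth vs. Class 10 SD card write speed, using the
# figures from the post: 100 Mb/s Ethernet, ~80 Mb/s as an optimistic
# ceiling for the Pi 3's built-in WiFi, and a guaranteed minimum of
# 10 MB/s sequential writes for Class 10 cards.
BITS_PER_BYTE = 8

def mbits_to_mbytes(mbits_per_s):
    """Convert megabits per second to megabytes per second."""
    return mbits_per_s / BITS_PER_BYTE

ethernet_MBps = mbits_to_mbytes(100)  # 12.5 MB/s
wifi_MBps = mbits_to_mbytes(80)       # 10.0 MB/s
class10_write_MBps = 10               # the guaranteed floor, not the ceiling

# The network can barely outrun the card's guaranteed write floor.
print(ethernet_MBps, wifi_MBps, class10_write_MBps)
```

So even wired, the network delivers at most 2.5 MB/s more than the slowest card it is legal to stamp “Class 10” on.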


I really wanted an excuse to buy a gigabit ethernet switch. I mean, I really, really wanted one. But it didn’t make sense, and at 8 ports for $7.99, a 10/100 switch did. I’ll get gigabit when I need it.


The stackable ‘lego’ cases looked kind of cool, but I read something that implied there are some struggles with wiring when they’re stacked. A typical case adds about 20% to the cost of a unit. I’m going to skip cases until I know what I want.


I’m going to splurge on some new 1.5′ ethernet cables and some 3′ USB Type A to micro A cables.


It looks like the total cost of putting a Raspberry Pi 3 online with power supply, case, etc. is about $50. Aggregating the power supply, adding a switch, skipping the case, and ‘over provisioning’ SD cards moves it up to about $55.

Bill of Materials
1. (4) Raspberry Pi 3 Amazon
2. (10) Class 10 16GB Micro SD cards Amazon
3. (1) 8 port 10/100 ethernet switch Amazon
4. (1) 60w USB Charging Station Amazon
5. (1) 5 pack of 1.5′ ethernet cables Amazon
6. (5) 3′ USB A to Micro A cables MonoPrice

Since I currently have Prime, I tried to order everything from Amazon to save on shipping. The dark patterns of Amazon’s search eventually pissed me off to the point that I ordered the USB cables from MonoPrice. They were fulfilled by Amazon.

Update and Opinion

Doing a little more research on Raspberry Pi clusters suggested that real-time clocks make sense. I ordered some from China off eBay; they fit on the GPIO bus rather than the expansion bus. I tried ordering them from Amazon, but searching for a reasonable price was a pain in the ass, and the shipping window for a similar product was six to ten weeks out. They were easy to find at a good price on eBay, with a much shorter lead time and much lower shipping cost.

That’s been the trend: Amazon gets worse, eBay gets better. I think that’s because Amazon is a merchant and eBay is a market maker. The incentives are different to the point that Amazon will A/B test implementations of dark patterns, while eBay is unlikely to do anything that reduces the likelihood of a [legitimate] transaction.

A Nitwit’s Plan

Rambling Introduction

I drive the boys up and back from Birmingham at least once a week for practice, and the idea of building a personal cluster came to me three weeks ago around Alexander City. I listen to podcasts while driving because the boys don’t usually talk; they mostly sleep or read or earbud out the world, and when they do talk it’s mostly to each other and not to me. Not that I take it personally: they’re teenage boys.

One of the podcasts I listen to is Software Engineering Daily, partly because most of the episodes are interesting and partly because there are a lot of them: I can load up my phone with downloaded episodes every two or three months, just hit play while I’m driving, and have something fresh; and no matter how much I drive, I probably won’t keep up because of the ‘daily’ part. And so, a couple of weeks ago, I was hitting Alexander City and listening to the episode about KubeCloud.

KubeCloud is an academic project that put Kubernetes on Raspberry Pis, and for one reason or another it resonated with me and sounded doable: I suppose if I weren’t already predisposed toward a positive attitude about Kubernetes, I’d probably have skipped the episode like I do when Jeff wades in as an expert on education. After a bit more googling and reading Hacker News, the idea that building a cheap local cluster was doable despite my probable ineptitude seemed reasonably confirmed.

Early Decisions

The first order of business was two decisions:

  1. Kubernetes, really?
  2. Raspberry Pi, really?

For what it does, Kubernetes looks amazingly easy to use. But it makes sense to consider that what it does is facilitate running data centers at Google’s scale, and that means that making the lives of systems engineers with a few years of data center experience easier within three months is plausible evidence supporting the ‘easy to use’ claim. The Kubernetes documentation goes along with that view. That’s not a knock on the project or Google or anything, just an observation that the software and community and ecosystem reflect the structure of the businesses behind them, and that business is more toward the cathedral end of the organizational spectrum.

So the initial attempt will be Docker Swarm. Yes, there are probably technical tradeoffs, including Swarm being less mature and possibly more likely to experience breaking changes. The advantage for a first pass is that Swarm is more of a scaling up from Docker rather than a scaling down from a data center, and up is clearly the direction I’m looking to scale. The second factor that puts Docker Swarm in my plan is that Raspberry Pi officially supports Docker, or vice versa, or something like that.

Once I started researching clusters and pricing out hardware, it seemed like there were alternatives to Raspberry Pi. I mean damn, those NanoPi‘s look good and cheap, and I don’t really need WiFi or four USB ports [or even video for that matter], and gigabit ethernet would be cool. I went with Raspberry Pis anyway due to that whole what-does-easy-to-use-mean? thing. In this case, I’m scaling down to an SOC [system on a chip], not up from microcontrollers, and there were suggestions in my research that an arbitrary SOC board may not receive long-lived, robust, consumer-grade support…specifically, the AllWinner SOCs are just another embedded system component, and Linux support is via a community-run BBS. Raspberry Pi has its own site on StackExchange. So does Ubuntu.

If I wasn’t already over the tipping point to spending more money to potentially make my life easier, the availability of Ubuntu images for the Raspberry Pi did it. System administration is, in my opinion, why the year of the Linux desktop is always next year, and though I’m enough of a masochist to run Linux on the desktop, I’m not enough of a masochist right now to run something other than Ubuntu if I can help it. Never mind trying to run Ubuntu on a piece of hardware with unknown proprietary drivers. The project looks hard enough already.


The plan is looking like Docker Swarm on Raspberry Pi’s.

Option Stacks: Solving the Horowitz-Altman Conundrum


Early stage investment constitutes greater risk and should offer commensurately larger rewards. Using the option stack, an option accrues additional value in accordance with the degree to which it reflects a long-term investment by the employee.

The problems with traditional practices surrounding employee options and the mechanics for exercising employee options are discussed by Ben Horowitz in How to Start a Startup: Lecture 15. There is also a transcript.

Altman: The idea is to grant options that are exercisable for 10 years from the grant date.

Horowitz: 10 years on a Startup stock, that’s a valuable thing. Remember the employee who stays doesn’t get that. The employee who stays just gets a stock. They don’t get the new job and the new stock. They get one thing but they don’t get both things. You have to weigh that in.


  • Provide ample time for former employees to exercise their options utilizing arms-length financing or other means.
  • Align company’s interests in retention with value of employee options by correlating the duration of an option to the duration of the employee’s tenure.


As options vest, they are pushed onto a stack. After an employee leaves, options periodically pop off the stack and expire. Vesting and expiration operate off the same master clock.


  c = a constant period of time that is the minimum amount of time
      former employees have to exercise their vested options.
  etd = employee's termination date
  ovd = option vesting date of a specific option.
  oed = option expiration date of a specific option.
  For Each option
    oed  = etd + (etd - ovd) + c



  1. 100 share options.
  2. Four year vesting @ 25 share options per year.
  3. c = one year.


  1. Employee start date = First day of year 1 = 0 vested share options.
  2. First day of year 2 => 25 vested share options.
  3. First day of year 3 => 50 vested share options.
  4. First day of year 4 => 75 vested share options.
  5. First day of year 5 => 100 vested share options.
  6. First day of year 6 => 100 vested share options.
  7. First day of year 6 => employee resigns with 100 vested share options.
  8. First day of year 7 => 100 vested share options.
  9. Second day of year 8 => the 25 options that vested at year end of year 4 expire => 75 vested share options remain.
  10. Second day of year 9 => the 25 options that vested at year end of year 3 expire => 50 vested share options remain.
  11. Second day of year 10 => the 25 options that vested at year end of year 2 expire => 25 vested share options remain.
  12. Second day of year 11 => the 25 options that vested at year end of year 1 expire => 0 vested share options remain.
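The expiration rule above is mechanical enough to sketch in a few lines. This is a minimal model, not anything from Horowitz or Altman; the function and variable names are mine, and it measures time in whole years from the start date to mirror the worked example:

```python
def option_expiration(etd, ovd, c=1):
    """oed = etd + (etd - ovd) + c, all in years since the employee's
    start date. The longer an option was held before termination, the
    longer it stays exercisable afterwards."""
    return etd + (etd - ovd) + c

# Worked example: 25-option tranches vest at the start of years 2-5,
# i.e. 1, 2, 3, and 4 years in; the employee resigns at the start of
# year 6 (etd = 5); the constant c is one year.
etd = 5
vesting_dates = [1, 2, 3, 4]  # ovd for each 25-option tranche
expirations = [option_expiration(etd, ovd) for ovd in vesting_dates]

# Note the stack behavior: the most recently vested tranche (ovd = 4)
# expires first, 7 years in (start of year 8); the earliest-vested
# tranche (ovd = 1) expires last, 10 years in (start of year 11).
print(expirations)  # [10, 9, 8, 7]
```

Popping the stack in last-vested-first-expired order is exactly what makes early tenure worth more than late tenure under this scheme.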


The one year constant for exercising options after leaving is symmetrical with the one year initial vesting cycle. This is probably easier for an employee to understand upfront and for a manager to clearly explain. A two year cycle might better smooth out variation in larger economic cycles. The example is intended to be illustrative rather than realistically nuanced.

Writing Strongly Typed Procedures in Typed Racket

In #lang typed/racket, as in many Lisps, functions, or more properly procedures, are first-class datatypes. By default, #lang racket types procedures by arity, and any additional specificity in argument types must be enforced by contract. In #lang typed/racket, procedures are typed both by arity and by the types of their arguments and return values, due to the language’s “baked-in contracts.”

The Problem

The Typed Racket Guide provides an example using define-type to define a procedure type:

 (define-type NN (-> Number Number))

This allows specifying a procedure more succinctly:

 ;; Takes two numbers, returns a number
 (define-type 2NN (-> Number Number Number))

 (: trigFunction1 2NN)
 (define (trigFunction1 x s)
   (* s (cos x)))

 (: quadraticFunction1 2NN)
 (define (quadraticFunction1 x b)
   (let ((x1 x))
     (+ b (* x1 x1))))

Math as an example

In a domain like mathematics, it would be nice to work with more abstract procedure types because knowing that a function is cyclical between upper and lower bounds (like cos) versus having only one bound (e.g. our quadratic function) versus asymptotic (e.g. a hyperbolic function) provides for clearer reasoning about the problem domain. I’d like access to useful abstractions something like:

 (define-type Cyclic2NN (-> Number Number Number))
 (define-type SingleBound2NN (-> Number Number Number))

 (: trigFunction1 Cyclic2NN)
 (define (trigFunction1 x s)
   (* s (cos x)))

 (: quadraticFunction1 SingleBound2NN)
 (define (quadraticFunction1 x b)
   (let ((x1 x))
     (+ b (* x1 x1))))

 (: playTone (-> Cyclic2NN Void))
 (define (playTone waveform)
   ...)

 (: rabbitsOnFarmGraph (-> SingleBound2NN Void))
 (define (rabbitsOnFarmGraph populationSize)
   ...)

Alas, define-type does not deliver this level of granularity when it comes to procedures. Moreover, the brief false hope that we might easily wring such type differentiation for procedures manually using define-predicate is dashed by the documentation:

Evaluates to a predicate for the type t, with the type (Any -> Boolean : t). t may not contain function types, or types that may refer to mutable data such as (Vectorof Integer).

Fundamentally, types have uses beyond static checking and contracts. As first class members of the language, we want to be able to dispatch our finer grained procedure types. Conceptually, what is needed are predicates along the lines of Cyclic2NN? and SingleBound2NN?. Having only arity for dispatch using case-lambda just isn’t enough.

Guidance from Untyped Racket

Fortunately, Lisps are domain specific languages for writing Lisps once we peel back the curtain to reveal the wizard, and in the end we can get what we want. The key is to come at the issue the other way and ask “How can we use the predicates typed/racket gives us for procedures?”

Structures are Racket’s user-defined data types and are the basis for extending its type system. Structures are so powerful that even in the class-based object system, “classes and objects are implemented in terms of structure types.”

In #lang racket, structures can be applied as procedures by giving the #:property keyword prop:procedure followed by a procedure for its value. The documentation provides two examples:

The first example specifies a field of the structure to be applied as a procedure. Obviously, at least once it has been pointed out, that field must hold a value that evaluates to a procedure.

> ;; #lang racket
> (struct annotated-proc (base note)
    #:property prop:procedure
    (struct-field-index base))
> (define plus1 (annotated-proc
    (lambda (x) (+ x 1))
    "adds 1 to its argument"))
> (procedure? plus1)
#t
> (annotated-proc? plus1)
#t
> (plus1 10)
11
> (annotated-proc-note plus1)
"adds 1 to its argument"

In the second example an anonymous procedure [lambda] is provided directly as part of the property value. The lambda takes an operand in the first position which is resolved to the value of the structure being used as a procedure. This allows accessing any value stored in any field of the structure including those which evaluate to procedures.

> ;; #lang racket
> (struct greeter (name)
    #:property prop:procedure
    (lambda (self other)
      (string-append
       "Hi " other
       ", I'm " (greeter-name self))))
> (define joe-greet (greeter "Joe"))
> (greeter-name joe-greet)
"Joe"
> (joe-greet "Mary")
"Hi Mary, I'm Joe"
> (joe-greet "John")
"Hi John, I'm Joe"

Applying it to typed/racket

Alas, neither syntax works with struct as implemented in typed/racket. The problem, it seems, is that the static type checker as currently implemented cannot both define the structure and resolve its signature as a procedure at the same time. The right information does not appear to be available at the right phase when using typed/racket’s struct special form.

To get around this, typed/racket provides define-struct/exec which roughly corresponds to the second syntactic form from #lang racket less the keyword argument and property definition:

    (define-struct/exec name-spec ([f : t] ...) [e : proc-t])

      name-spec     =       name
                    |       (name parent)

Like define-struct, but defines a procedural structure. The procedure e is used as the value for prop:procedure, and must have type proc-t.

Not only does it give us strongly typed procedural forms, it’s a bit more elegant than the keyword syntax found in #lang racket. Example code to resolve the question as restated here in this answer is:

#lang typed/racket

(define-type 2NN (-> Number Number Number))

(define-struct/exec Cyclic2NN
  ((f : 2NN))
  ((lambda (self x s)
     ((Cyclic2NN-f self) x s))
   : (-> Cyclic2NN Number Number Number)))

(define-struct/exec SingleBound2NN
  ((f : 2NN))
  ((lambda (self x s)
     ((SingleBound2NN-f self) x s))
   : (-> SingleBound2NN Number Number Number)))

(define trigFunction1
  (lambda (x s)
    (* s (cos x))))

(define quadraticFunction1
  (lambda (x b)
    (let ((x1 x))
      (+ b (* x1 x1)))))

The defined procedures are strongly typed in the sense that:

> (SingleBound2NN? trigFunction1)
- : Boolean
>  (SingleBound2NN? quadraticFunction1)
- : Boolean

All that remains is writing a macro to simplify specification.

Mini Tutorial: Bashing hello.py

This post is related to Coursera’s Programming for Everyone course.

“Why fire up a text editor when I have a command line?” is the sort of question that I never really asked when I was running Windows as my primary operating system. My default approach was to look for a button to click on. But it’s the sort of question that naturally arises as I spend more time with Linux.

After navigating to the directory where I want hello.py to live:

[prog4everyone]$ touch hello.py  
[prog4everyone]$ echo print "'hello world'" >> hello.py 
[prog4everyone]$ cat hello.py 
print 'hello world'
[prog4everyone]$ python hello.py  
hello world

touch hello.py – touch creates an empty file named ‘hello.py’ if it does not already exist. Otherwise it updates the timestamp of the existing file.

echo print "'hello world'" >> hello.py has two parts.

  1. The first part is echo print "'hello world'". echo simply repeats what is given as input. In order to pass the single quotation marks ' around ‘hello world’ through the echo command, they need to be nested in double quotes ". Alternatively, to pass double quotation marks " around “hello world” through echo, they must be wrapped in single quotes '. In other words, echo print '"hello world"' and echo print "'hello world'" will both pass good Python syntax.

  2. The second part is >> hello.py. It takes the output of the first part and appends it to the end of the file “hello.py”.

cat hello.py – The cat command concatenates the contents of the files it is given and outputs the result. In this case the output is sent to the screen, and since only one file is provided as input to cat, we just get the contents of that file [i.e. print 'hello world'].

python hello.py – This calls the python interpreter with “hello.py” as input. This prints hello world.

Now I know some people are probably upset, since using touch to create hello.py is completely unnecessary because redirection will create a new file if one doesn’t already exist. For example:

[prog4everyone]$ echo print "'goodbye world'" >> goodbye.py
[prog4everyone]$ python goodbye.py
goodbye world

is even more efficient. Please accept an apology. I know that being enamoured with touch to create empty files is unhealthy and compulsive.

Racket: Windows COM-object Example

This morning’s lesson from investigating a StackOverflow question (which I have not answered):

#lang racket
(require ffi/com)
(define ie (com-create-instance "InternetExplorer.Application"))
(com-invoke ie "Navigate" "http://stackoverflow.com/questions/21038482/how-to-pass-box-unsigned-int-to-com-invoke")
(com-set-property! ie "Visible" #t)

It is taken from an extended example in the Racket email archives.

How I am learning Git on Windows


My interest in version control arises in no small part from years of using informal systems and loose in-house standards for managing documents for architectural projects. If it were just the case that every office archived their projects differently, the world would be a better place. It would be better even if each individual did so, but the delta between the number of architectural projects and the number of ways in which they are archived and managed is rather small…at best. Some projects have different methods for each type of document, and sometimes these change with a project’s phase.


As I started to get more serious about learning to program, I started using version control out of three motivations: for access to code from both laptop and desktop; as a way to backup my work; and because of my interest in the general problem. What I have found is that using version control not only solves the technical issues and resolves my curiosity, it also structures my thinking as I develop projects. The implicit symmetry between the local and remote repositories is something worth seeking [Perlis, Epigram 6].

Feature Presentation

I have a free account on Github and use the Windows app. The interface has a Metro look, but it’s not a Metro app. The downside of Github is that all free account repositories on Github are public, and I have a use case, coursework for Coursera classes, that makes public repositories inappropriate.

For that I have a free account on Bitbucket because it allows private repositories. I access it using Atlassian’s free version of SourceTree. What I like about SourceTree is that it exposes more of the guts of version control. If command-line Git is C, SourceTree is Java, and the Github app is…well, it’s an app.

The mention of the command line is not accidental. Once I start wanting to do something a little advanced, the step-by-step descriptions for implementation will invariably use the command line, because at its core Git is not an app; it is a system with a command language, and the command language maps directly onto Git’s underlying concepts in a specific way, versus the generic language of click here, select that, type this, click there…etc. of app usage.

As a twenty odd year user of Windows, it feels very unnatural to say this, but learning the command line is something to be embraced. Then again Emacs feels unnatural from that perspective too.


My thanks to Eric Sink for the free! [hard!] copy of his book Version Control by Example which he graciously provided as an “Offer HN:” a couple of years ago. It provides consistent examples which cover the general issues across different types of version control systems. When I finally got around to using version control, I reread it more thoroughly than when I was merely interested in the topic.


Amazon Affiliate Link to Eric’s book.

Non-Affiliate Link to Eric’s book.

Expanded from this HN comment.

Documentation is a Feature

The problem

Like Alice in Wonderland {1}, I think about all the times I have chased around documentation structured so that eggs links to bacon and sausage; bacon links to sausage and eggs; and sausage links to bacon and eggs, when spam is what I needed; and eventually, on the one-hundred-and-seventeenth round trip, I notice that there is a link to spam right there at the bottom of the pages for eggs and sausage and bacon, hot-linked in the phrase “Lobster Thermidor aux crevettes with a Mornay sauce, garnished with truffle pâté, brandy and a fried egg on top of spam.”

I have started thinking that part of the reason this happens is that hyperlinks are nothing more than GOTO, and sure, since GOTO is just JMP and JMP is fundamental to computing, there’s nothing inherently wrong with hyperlinks [or GOTO]. It’s just that circular GOTO constructs don’t fit well with my brain’s bias toward pattern matching upon the obvious.

The other factor is not a product of recent thinking: open source documentation tends to be a ball of mud.

It has this tendency because writing good documentation for a project of any significance is hard work, and, for the people who write the code that requires all that documentation, a lot less fun than writing code. The fun part is self-explanatory, but it is hard work because good documentation looks more like the 2000 pages of GNU Emacs manuals or Knuth’s 1962 book about compilers than Javadoc. Good documentation is built around a narrative, or several narratives when there are multiple manuals, and each of these narratives is structured to have a beginning, middle, and end.

When the foundation document in a documentation project doesn’t have an appendix or two, it should be heeded as a warning, not celebrated as a feather in the cap. Good documentation recognizes that there’s always something more to say, and tries to say it.

Documentation as a competitive advantage

Four or five years ago, when my father, whose early professional career entailed Algol on paper tape and ended using Wolfram’s SMP on Vaxes, decided he wanted to learn word processing, spreadsheets, and databases, I recommended he just purchase a copy of Office. Being Dad, this of course meant he chose LibreOffice because it did everything he wanted to do. Technically, he was correct. The reason he went out and bought Office last year was that his local Barnes & Noble and even Amazon had a poor selection of books about LibreOffice, and more books about Office than there are trolls on the Internet [approximately].

This wasn’t surprising. He is a consumer of documentation, and being a similar consumer myself, going back to the Amiga ROM Kernel Manuals, the right answer for him was pretty obvious to me. The quality of documentation is why I’ve decided Emacs is worth the effort of learning. It reminds me of the old hard-bound reference manuals that used to ship with AutoCAD. It was the quality of Microsoft’s documentation and its availability in hard copy that led me to stay on the Microsoft stack until making a baby step to Racket and its above-average documentation. The quality of Racket’s documentation is a byproduct of the writing culture it inherits from the academic world.

Apple is easy to write about

It is easier to write documentation for Microsoft products than for open source projects because there are easy assumptions about the context in which the reader operates. SQL Server, unlike MySQL, is only relevant on a narrow range of operating systems and under a few very similar licensing agreements and business models. MySQL could be running anywhere.

But nothing is easier to write about than Apple; it’s perhaps the reason why their devices are so popular with journalists. Apple has established a ubiquitous language, and this allows it to control the narrative. We talk about “App stores for Android,” and despite the protestations, P. T. Barnum’s adage {2} that all that matters is whether your name is spelled correctly holds true.

The Mozilla Conundrum

My list of great open source documentation projects is unfair. Mozilla’s documentation of all things browser related is phenomenal. It educates developers about the entire domain of which Mozilla’s projects form just a part. Unfortunately, it does not provide any momentum toward the adoption of Firefox. The decision to use Firefox is not made by the users of their developer portal. Decisions about Firefox are largely made one at a time by non-developers and usually based upon very few meaningful technical criteria.

For some software companies, however, “going Mozilla” and setting out to comprehensively document an industry segment, competitors and all, might provide strategic advantage. For a B2B oriented company based upon open source, documenting the larger domain demonstrates larger interest in helping potential customers find solutions to their problem irrespective of an established customer-vendor relationship.

A company providing a primary research resource for a larger market, above and beyond simply providing documentation for their own products, is able to honestly express the open-source ideals for a sound business purpose.

The fundamental ethic of the open-source movement is goodwill. The user must trust that the author is trying to solve the problem they say they are trying to solve. More interestingly, the author must trust that use of the software, in the worst case, causes the author no harm, and in better cases might just provide the author with some benefit. It is the ethic: we are not dividing a pie, your profit is not my loss.

{1} see Alan J. Perlis, Epigram 48

{2} Barnum allegedly said, “I don’t care what they say about me, just make sure they spell my name right.”
