You might remember I skipped cases because I wasn’t sure what I wanted. A couple of weeks ago I read about ResinIO’s Beast Pi design. It’s a nice piece of aesthetics, and at a high level of abstraction it’s the sort of thing I want. On the other hand it seems to have a lot of fiddly bits, to be at the pointy end of a long chain of maker gear that I don’t have, and to smell a bit like yak shaving.
I’d recently got all four Pi’s wired up onto an SSH’able network…my own little universe to rule! So it seemed time to have something a bit more structured than wires running across my desk. Running across my desk the wires were because those cheap Cat5 cables have a nice stiff curve along their length and an arbitrary twist between the orientation of the RJ45’s and this imposes torque on the loose lying Pi’s.
The first stop was the fastener aisle at the local Ace Hardware with a Pi in my pocket. Nothing was small enough to fit, which by implication means that nothing at Home Depot or Lowe’s was likely to fit either, because they all sell pretty much the same stuff at different prices. Time to get serious enough to know what I need. So a bit of googling:
and I know I need M2.5 bolts (the hardware stores carry a few M3’s and a lot of M4’s). So I head over to Opelika Bolt. They have some M2.5’s but not enough. They can order more.
I don’t want to wait, so I load up on more 4-40 nuts, bolts, and washers than I need. The 4-40’s are oversize, but I realize it’s a PCB and the holes can be bigger without hurting anything. I get home, grab a suitable drill bit and make the 4-40’s fit.
Why didn’t I want to wait? Inspired to build I was. And what the Beast Pi inspired me to build with was cardboard. I had a lot of it lying around in the form of the Amazon boxes that held the Pi’s and such, and it would be kind of cool to build a ‘server rack’ out of the box the Pi’s came in. An afternoon of cutting and fitting and I had a concept, and a clear understanding that the shipping box the Pi’s came in was not going to hold all of them in a server configuration.
By which I mean that I couldn’t mount all of them in a sensible way and still plug a bunch of USB thumb drives in for additional storage. I probably could have mentioned that going entirely solid state for storage has been added to the road map and that using thumbdrives is in keeping with my character flaw of doing it on the cheap.
Stymied, I stopped, switched gears and wrote out bills. That meant going to the post office to buy more stamps. And that meant I became unstymied.
Like all great plans, my plan to run Ubuntu on my Raspberry Pi’s has not survived first contact with reality.
Ubuntu Snappy Core
I’d thought about Ubuntu and there were two options on the Raspberry Pi download page: Mate and Snappy Core. Mate is a desktop installation and Snappy Core is…well, it’s sort of the container friendly future. Ubuntu Snappy is lightweight and modular; updates are atomic and transactional and hence can be rolled back; and the only framework snap [Snappy calls the modules ‘snaps’] is Docker. It is Docker because container workloads are mostly what Snappy was designed for.
So I click the link to download Snappy — because I don’t really want Mate because I don’t really want a desktop and because I really want containers — and because Snappy is not officially supported by Raspberrypi.org it takes me to the Ubuntu Snappy site [where I’d been already to read all about Snappy’s goodness]. I follow the instructions to put it on a micro-SD card; stick the card in a Pi; wire everything up and up comes the video test and…then there’s no console or presence on the network.
Maybe I did it wrong, so I rebuild the card (that means firing up gparted to make sure I’ve got the formatting right before I reinstall). I stick the new copy in a Pi, wire it up, etc. and get the same result. Clearly time for more research.
Turns out I am in the uncanny valley of microcontrollers. The Raspberry Pi 3 came out in March of 2016 and Ubuntu 16.04 LTS was already just about finalized, so support for the Pi 3 was not included in the release…the images I’d been loading were for the Pi 2, and there’s enough of a difference between a Pi 2 and a Pi 3 that Ubuntu needs tweaking to run on the Pi 3.
Maybe the Pi 3 will be supported in 16.10, and that’s just around the corner.
Raspbian it Is Then
Since that looked like it for now, I loaded up Raspbian Jessie Lite and started fooling around with it…I’ll skip describing making the card because I’m getting proficient enough that there was no drama. I just made the card, wired things up and it all worked. Exactly what one would expect when we’re just one year away from the year of the Linux desktop. The first thing I tried was good old sudo apt install docker and it installed. But it turned out to be version 1.5 rather than the latest 1.12. That was disappointing enough that I didn’t even want to know how old 1.5 is.
Installing Docker on Raspbian Jessie
According to this blog and as of right now, that seems to be correct.
My tl;dr instructions for installing Docker 1.12 on the Raspberry Pi 3 are:
ssh pi@<name of sd card>
sudo nano /boot/config.txt    # edit for Docker: add the line gpu_mem=16
sudo curl -sSL get.docker.com | sh
sudo systemctl enable docker
sudo systemctl start docker
sudo usermod -aG docker pi
The story about Snappy wasn’t quite done
One of the apps I am interested in containerizing is Rocket Chat, and so I was on their Github page about Pi’s https://github.com/RocketChat/Rocket.Chat.RaspberryPi and it mentioned that there were Snappy Core images at http://cdimage.ubuntu.com/ubuntu-snappy/16.04/current/. So I downloaded a copy, imaged a microSD, wired it all up and got four raspberries on the screen but no terminal and no connection to the network. Maybe I did something wrong…like forgot to type sync after copying the image. So I make the image again and, just to prove my insanity, get exactly the same results. That’s it, Lucy, I’m not going to kick your damn football.
Then Charlie Brown decides to check out the Snappy mailing list archive (before I hit IRC) and lo and behold this email thread tells me that I’ve been using the special development version, and because I want a non-serial terminal I should use the daily build from here.
OK Lucy, hold the ball for me.
I have Amazon Prime, which comes with free two day shipping, and technically my order arrived in two days…if just the five pack of ethernet cables arriving counts as technically. The Raspberry Pi 3’s arrived a day later along with the 60w USB recharging station. The sdcards and the switch are ‘delayed’ according to the Amazon status page. Who the hell knows where Monoprice’s fulfilled-by-Amazon USB cables are.
Anyway, I fished a Lexar 8GB micro-SD card out of my old Nokia E71. I’ve backed up the photos and videos off it just to be safe [I think I’ve already got all of them, but backing up is faster than verifying…if it were all cryptographically hashed and living in git that’d be easy…maybe one day?]. The Nokia E71 came with a 6″ USB A to micro-A cable for tethering a computer. Rest assured that it was not nearly long enough to be convenient in any of the eight USB ports on my Dell Precision tower, and the 180 degree twist just made it that much worse and even made hooking it into a laptop less than ergonomically ideal. But it was in the bag with the E71 and it’s not assigned to anything else…by which I mean that if I forget to throw it in a bag when I’m travelling I won’t be sorry.
So now that I’ve got an SD card, getting one of the Raspberry Pi’s up and running should be…well, Raspberry Pi’s are easy to use, right? The Raspberry Pi is easy for an embedded system and not especially difficult for a Linux system. Which is to say that compared to a Windows machine or a Mac or a smartphone, it’s not all that easy unless you buy the NOOBS microSD along with your Raspberry Pi.
After the backup, I reformatted the Lexar micro-SD and it being Linux and having run Linux as my primary OS for several years I used ext4 and then followed the instructions for installing the Raspbian image; stuck the card in a Pi; hooked up monitor, keyboard and power and…nothing. So I waited. Still nothing. Maybe I need to be on the network [I remember something about needing an NTP (network time protocol) service]…nope.
Pull the card, go back to the big machine and start googling. The card needs to be FAT32, and I’m going to use it to start up the Raspberry Pi, so I’ll just use Ubuntu’s Make Startup Disk tool and before I know it, I’ll be up and running. So I make the micro-SD ‘startup-able’ and then follow the instructions for installing Raspbian; stick it back in the Pi; wire everything up and…nothing.
More googling, hoping the third time is the charm, and reading a bit deeper into the Raspberry Pi boot process. It turns out that it’s extra particular relative to GRUB or GRUB2 (which I had to learn a bit about in order to dual boot Windows and Linux…and on top of that, wanting to boot both Windows XP x64 Professional and Windows 8 meant two different bootloaders, while on the Linux side CentOS 6, my first distro, uses GRUB and Bodhi Linux uses GRUB2…all in all a good example of why the year of the Linux desktop will always be ‘next year’).
So, under Raspbian, the Raspberry Pi boot sequence starts on the GPU and then chains to read a FAT32 ‘boot’ partition on the micro-SD card and then chains to read ‘/boot’ on a regular ext4 system partition.
+----------+      +-------------+      +--------------+
|          |      |             |      |              |
|   GPU    +-----o+ FAT32 boot  +-----o+  ext4 /boot  |
|          |      |  partition  |      |              |
+----------+      +-------------+      +--------------+
A properly installed Raspbian sd-card image looks something like this diagram in gparted:
+--------------+------------+--------------+------------------------------------+
|              |            |              |                                    |
|  unallocated | FAT32 Boot | ext4 System  |            unallocated             |
|              |            |              |                                    |
+--------------+------------+--------------+------------------------------------+
Since I mentioned gparted, that’s what I used next to delete the existing partitions and reformat the entire card to FAT32. Then I followed the instructions for installing Raspbian; stuck the card in a Pi; wired it all up; and, third time’s a charm, it all worked.
For a Linux exercise it was not an atypical number of hours of pain and frustration on the road to the year of the Linux desktop. For an embedded system, it was probably an ‘easy to use’ experience. And now that one Pi is up and running, it’s almost reasonable to assume I can rest on my laurels and smoothly sail to a Swarm cluster…I mean how hard could it be?
I considered going with Raspberry Pi Zero’s for $5.00 a pop. The limit on purchasing seems to be one per order. What about Raspberry Pi 2’s? Well they’re not readily available and where they are they’re not discounted. So the only game in town if you’re playing Raspberry Pi is the model 3. They run about $35.00 which is not exactly cheap except in the amazing historical sense that it’s a quad-core computer at 64-bits with graphics and wifi for less than the cost of a decent mouse.
However, that computer doesn’t come with a power supply. The Raspberry Pi 3 is rated at 2.5A @ 5 volts max, though it’s worth keeping in mind that the 12.5 watt rating is conservative and allows for significant additional hardware running on the GPIO bus. Since multiple Raspberry Pi’s were in the plan (it’s a cluster after all), the idea is to use a USB phone recharging station for power rather than purchasing multiple independent supplies.
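As a sanity check on the shared-supply idea, here’s a back-of-the-envelope power budget as a Python sketch. The 60w figure is the charging station from the bill of materials and 12.5w per Pi is the conservative maximum rating above; the variable names are mine.

```python
# Back-of-the-envelope power budget: four Pi 3's off one 60w USB charging station.
VOLTS = 5.0
AMPS_MAX = 2.5                      # Raspberry Pi 3 maximum rated draw
WATTS_PER_PI = VOLTS * AMPS_MAX     # 12.5w, conservative (includes GPIO headroom)
PI_COUNT = 4
STATION_WATTS = 60.0

budget = PI_COUNT * WATTS_PER_PI    # worst-case draw for the whole cluster
headroom = STATION_WATTS - budget   # what's left over on the charging station

print(budget, headroom)  # 50.0 10.0
```

Even at the conservative maximum rating, four Pi’s fit under the 60w station with 10w to spare.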
One of the ideas I’ve been considering is hardware as a useful abstraction, and part of what intrigues me about building a local personal cluster [LPC, anyone? yes I just made that up] is in terms of hardware abstractions — beginning with, what does ‘software as a service’ really mean.
Deeper down, I suppose one starting point is the psychology of cattle versus pets. I don’t name plastic forks or each gallon of gasoline, and I want to get away from the idea of ‘running it on my laptop.’ Part of the reason is that my laptop gets more state each time I run into a problem or read about an improvement that apt install addresses. Along with all the utility comes a lot of cruft, or at least complexity to the point where I’ve gone to install something and found I already had it. I’ve only been running this SSD for about six months, but the thought of backing up, wiping and restoring to some base state looks painful and short lived.
One of the ideas I played with over the past few months was implementing a bit of the old floppy drive paradigm in thumb drives. Basically, keeping a task/project context on removable media with the goal of organizing high levels of the file tree in hardware. It didn’t really work so well except at the level of operating systems where I swap SSD’s to go back and forth between Windows 10 and Ubuntu. That’s a level of granularity that seems to work for me: generally, I don’t need access to my Windows 10 state from Linux or vice versa.
Though I’m going with Docker Swarm, my LPC plan is to allow switching to a Kubernetes installation to be as simple as swapping floppies (or in this case SD cards) and so the first hardware decision was to bulk purchase more than two micro-SD’s per Raspberry Pi. As a practical matter, the choice of Raspbian/Ubuntu is far more likely to be an early card swap scenario than Swarm/Kubernetes (or Rancher/OS after listening to that SE-Daily episode last weekend).
Something I realized about SD cards is that the primary use cases are in photography and video, and hence the rating classes are based on write performance. Two SD cards rated Class 10 can have substantially different read performance. For the workloads I’m imagining (and I realize I’m imagining), read performance is a more probable bottleneck than write performance.
It’s also worth keeping in mind the available network bandwidth of a Raspberry Pi. The ethernet is 10/100Mb, and 100Mb is 12.5 megabytes per second. That’s not much more than the minimum write speed of a Class 10 SD card (Class 10 guarantees 10MB per second), and my gut says that out in the real world the Raspberry Pi’s built-in wifi is unlikely to ever reach the 80Mb (that’s 10MB) per second necessary to saturate a Class 10 card’s writes.
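The unit conversions above, as a quick sketch (8 bits per byte; ‘Class 10’ means a guaranteed minimum of 10 megabytes per second sequential write):

```python
def mbit_to_mbyte(mbps):
    """Convert megabits per second to megabytes per second (8 bits per byte)."""
    return mbps / 8.0

ethernet_MBps = mbit_to_mbyte(100)         # 100Mb ethernet = 12.5 MB/s
class10_write_MBps = 10.0                  # Class 10 guaranteed minimum write
wifi_needed_mbps = class10_write_MBps * 8  # 80Mb/s to saturate Class 10 writes

print(ethernet_MBps, wifi_needed_mbps)  # 12.5 80.0
```

So wired ethernet barely outruns the card’s guaranteed write floor, and wifi probably never will.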
I really wanted an excuse to buy a gigabit ethernet switch. I mean, I really really wanted one. But it didn’t make sense and at 8 ports and for $7.99 a 10/100 switch did. I’ll get gigabit when I need it.
The stackable ‘lego’ cases looked kind of cool, but I read something that implied there are some struggles with wiring when they’re stacked. A typical case adds about 20% to the cost of a unit. I’m going to skip cases until I know what I want.
I’m going to splurge on some new 1.5′ ethernet cables and some 3′ USB Type A to micro A cables.
Looks like the total cost of putting a Raspberry Pi 3 online with power supply, case, etc. is about $50. Aggregating the power supply, adding a switch, skipping the case, and ‘over provisioning’ sdcards moves it up to about $55.
Bill of Materials
(4) Raspberry Pi 3 Amazon
(10) Class 10 16GB Micro SD cards Amazon
(1) 8 port 10/100 ethernet switch Amazon
(1) 60w USB Charging Station Amazon
(1) 5 pack of 1.5′ ethernet cables Amazon
(5) 3′ USB A to Micro A cables MonoPrice
Since I currently have Prime, I tried to order everything off of Amazon to save on shipping. The dark patterns of Amazon’s search eventually pissed me off to the point that I ordered the USB cables from MonoPrice. They were fulfilled by Amazon.
Update and Opinion
Doing a little more research on Raspberry Pi clusters suggested that real time clocks make sense. I ordered some from China off eBay; they fit on the GPIO bus rather than the expansion bus. I tried ordering them from Amazon, but searching for a reasonable price was a pain in the ass and the shipping window for a similar product was six to ten weeks out. On eBay they were easy to find at a good price, with a much shorter lead time and much lower shipping cost.
That’s been the trend: Amazon gets worse, eBay gets better. I think that’s because Amazon is a merchant and eBay is a market maker. The incentives are different to the point that Amazon will A/B test the implementation of dark patterns, while eBay is unlikely to do anything that reduces the likelihood of a [legitimate] transaction.
I drive the boys up and back from Birmingham at least once a week for practice, and the idea of building a personal cluster came to me three weeks ago around Alexander City. I listen to podcasts while driving because the boys don’t usually talk; they mostly sleep or read or earbud out the world, and when they do talk it’s mostly to each other and not to me. Not that I take it personally, they’re teenage boys.
One of the podcasts I listen to is Software Engineering Daily, partly because most of the episodes are interesting and partly because there are a lot of episodes: I can load up my phone with downloaded episodes every two to three months, just hit play while I’m driving, and have something fresh, and no matter how much I drive I probably won’t keep up because of the ‘daily’ part of it. And so, a couple of weeks ago I was hitting Alexander City and listening to the episode about KubeCloud.
KubeCloud is an academic project that put Kubernetes on Raspberry Pi’s, and for one reason or another it resonated with me and sounded doable. I suppose if I wasn’t already predisposed with a positive attitude toward Kubernetes, I’d probably have skipped the episode like I do when Jeff wades in as an expert on education. After a bit more googling and reading Hacker News, the idea that building a cheap local cluster was doable despite my probable ineptitudes seemed reasonably confirmed.
The first order of business was two decisions:
- Kubernetes, really?
- Raspberry Pi, really?
For what it does, Kubernetes looks amazingly easy to use. But it makes sense to consider that what it does is facilitate running data centers at Google’s scale, which means that making the lives of systems engineers with a few years of data center experience easier within three months is plausible evidence supporting the ‘easy to use’ claim. The Kubernetes documentation goes along with that view. That’s not a knock on the project or Google or anything, just an observation that the software and community and ecosystem reflect the structure of the businesses behind it, and that business is more toward the cathedral end of the organizational spectrum.
So the initial attempt will be Docker Swarm. Yes, there are probably technical tradeoffs, including Swarm being less mature and possibly more likely to experience breaking changes. The advantage for a first pass is that Swarm is more of a scaling up from Docker rather than a scaling down from a data center, and up is clearly the direction I’m looking to scale. The second factor that puts Docker Swarm in my plan is that Raspberry Pi officially supports Docker, or vice versa, or something like that.
Once I started researching clusters and pricing out hardware, it seemed like there were alternatives to Raspberry Pi. I mean damn, those NanoPi’s look good and cheap, and I don’t really need WiFi or four USB ports [or even video for that matter], and gigabit ethernet would be cool. I went with Raspberry Pi’s anyway due to that whole what-does-easy-to-use-mean? thing. In this case I’m scaling down to an SOC [system on a chip], not up from microcontrollers, and there were suggestions in my research that an arbitrary SOC board may not receive long-lived, robust, consumer grade support…specifically, the AllWinner SOC’s are just another embedded system component and Linux support is via a community run BBS. Raspberry Pi has its own site on StackExchange. So does Ubuntu.
If I wasn’t already over the tipping point to spending more money to potentially make my life easier, the availability of Ubuntu images for the Raspberry Pi did it. System administration for Linux is in my opinion why the year of the Linux desktop is always next year and though I’m enough of a masochist to run Linux on the desktop, I’m not enough of a masochist right now to run something other than Ubuntu if I can help it. Never mind trying to run Ubuntu on a piece of hardware with unknown proprietary drivers. The project looks hard enough already.
The plan is looking like Docker Swarm on Raspberry Pi’s.
Early stage investment constitutes greater risk and should offer commensurately larger rewards. Using the option stack, an option accrues additional value in accordance with the degree to which it reflects a long term investment by the employee.
The problems with traditional practices surrounding employee options and the mechanics for exercising employee options are discussed by Ben Horowitz in How to Start a Startup: Lecture 15. There is also a transcript.
Altman: The idea is to grant options that are exercisable for 10 years from the grant date.
Horowitz: 10 years on a Startup stock, that’s a valuable thing. Remember the employee who stays doesn’t get that. The employee who stays just gets a stock. They don’t get the new job and the new stock. They get one thing but they don’t get both things. You have to weigh that in.
- Provide ample time for former employees to exercise their options utilizing arms-length financing or other means.
- Align company’s interests in retention with value of employee options by correlating the duration of an option to the duration of the employee’s tenure.
As options vest, they are pushed onto a stack. After an employee leaves, options periodically pop off the stack and expire. Vesting and expiration operate off the same master clock.
Let:
  c   = a constant period of time that is the minimum amount of time
        former employees have to exercise their vested options
  etd = employee's termination date
  ovd = option vesting date of a specific option
  oed = option expiration date of a specific option

In:
  For Each option
    oed = etd + (etd - ovd) + c
  End
- 100 share options.
- Four year vesting @ 25 share options per year.
- c = one year.
- Employee start date = First day of year 1 = 0 vested share options.
- First day of year 2 => 25 vested share options.
- First day of year 3 => 50 vested share options.
- First day of year 4 => 75 vested share options.
- First day of year 5 => 100 vested share options.
- First day of year 6 => employee resigns with 100 vested share options.
- First day of year 7 => 100 vested share options.
- Second day of year 8 => the 25 options that vested at year end of year 4 expire => 75 vested share options remain.
- Second day of year 9 => the 25 options that vested at year end of year 3 expire => 50 vested share options remain.
- Second day of year 10 => the 25 options that vested at year end of year 2 expire => 25 vested share options remain.
- Second day of year 11 => the 25 options that vested at year end of year 1 expire => 0 vested share options remain.
The one year constant for exercising options after leaving is symmetrical with the one year initial vesting cycle. This is probably easier for an employee to understand upfront and for a manager to clearly explain. A two year cycle might better smooth out variation in larger economic cycles. The example is intended to be illustrative rather than realistically nuanced.
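The expiration rule above is small enough to sketch in Python. This is a minimal illustration using whole years as the time unit, with t = 0 as the employee’s start date to match the example; the function and variable names are mine, not from any standard library:

```python
def option_expiration(etd, ovd, c=1):
    """oed = etd + (etd - ovd) + c: the longer an option has been vested
    (the deeper it sits in the stack), the later it expires."""
    return etd + (etd - ovd) + c

# The worked example: employee starts at t=0, resigns at the start of
# year 6 (t=5); tranches of 25 options vest at t=1, 2, 3, 4.
etd = 5
expirations = {ovd: option_expiration(etd, ovd) for ovd in (1, 2, 3, 4)}
print(expirations)  # {1: 10, 2: 9, 3: 8, 4: 7}
```

The t=4 tranche expiring at t=7 corresponds to “second day of year 8” in the list above, and the t=1 tranche expiring at t=10 corresponds to “second day of year 11,” so the sketch agrees with the worked example.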
In #lang typed/racket, as in many Lisps, functions (or more properly, procedures) are first class datatypes. By default, #lang racket types procedures by arity, and any additional specificity in argument types must be done by contract. In #lang typed/racket, procedures are typed both by arity and by the types of their arguments and return values, due to the language’s “baked-in contracts”.
(define-type NN (-> Number Number))
This allows specifying a procedure more succinctly:
;; Takes two numbers, returns a number
(define-type 2NN (-> Number Number Number))

(: trigFunction1 2NN)
(define (trigFunction1 x s)
  (* s (cos x)))

(: quadraticFunction1 2NN)
(define (quadraticFunction1 x b)
  (let ((x1 x))
    (+ b (* x1 x1))))
Math as an example
In a domain like mathematics, it would be nice to work with more abstract procedure types, because knowing that a function is cyclical between upper and lower bounds (like cos), versus having only one bound (e.g. our quadratic function), versus asymptotic (e.g. a hyperbolic function), provides for clearer reasoning about the problem domain. I’d like access to useful abstractions, something like:
(define-type Cyclic2NN (-> Number Number Number))
(define-type SingleBound2NN (-> Number Number Number))

(: trigFunction1 Cyclic2NN)
(define (trigFunction1 x s)
  (* s (cos x)))

(: quadraticFunction1 SingleBound2NN)
(define (quadraticFunction1 x b)
  (let ((x1 x))
    (+ b (* x1 x1))))

(: playTone (-> Cyclic2NN))
(define (playTone waveform) ...)

(: rabbitsOnFarmGraph (-> SingleBound2NN))
(define (rabbitsOnFarmGraph populationSize) ...)
define-type does not deliver this level of granularity when it comes to procedures. Moreover, the brief false hope that we might easily wring such type differentiation for procedures manually using define-predicate is dashed by:
Evaluates to a predicate for the type t, with the type (Any -> Boolean : t). t may not contain function types, or types that may refer to mutable data such as (Vectorof Integer).
Fundamentally, types have uses beyond static checking and contracts. As first class members of the language, we want to be able to dispatch on our finer grained procedure types. Conceptually, what is needed are predicates along the lines of SingleBound2NN?. Having only arity for dispatch using case-lambda just isn’t enough.
Guidance from Untyped Racket
Fortunately, Lisps are domain specific languages for writing Lisps once we peel back the curtain to reveal the wizard, and in the end we can get what we want. The key is to come at the issue the other way and ask “How can we use the predicates typed/racket gives us for procedures?”
Structures are Racket’s user defined data types and are the basis for extending its type system. Structures are so powerful that even in the class based object system, “classes and objects are implemented in terms of structure types.”
In #lang racket, structures can be applied as procedures by giving the #:property keyword the value prop:procedure followed by a procedure. The documentation provides two examples:
The first example specifies a field of the structure to be applied as a procedure. Obviously, at least once it has been pointed out, that field must hold a value that evaluates to a procedure.
> ;; #lang racket
> (struct annotated-proc (base note)
    #:property prop:procedure
    (struct-field-index base))
> (define plus1
    (annotated-proc
     (lambda (x) (+ x 1))
     "adds 1 to its argument"))
> (procedure? plus1)
#t
> (annotated-proc? plus1)
#t
> (plus1 10)
11
> (annotated-proc-note plus1)
"adds 1 to its argument"
In the second example an anonymous procedure [lambda] is provided directly as part of the property value. The lambda takes an operand in the first position which is resolved to the value of the structure being used as a procedure. This allows accessing any value stored in any field of the structure including those which evaluate to procedures.
> ;; #lang racket
> (struct greeter (name)
    #:property prop:procedure
    (lambda (self other)
      (string-append "Hi " other ", I'm " (greeter-name self))))
> (define joe-greet (greeter "Joe"))
> (greeter-name joe-greet)
"Joe"
> (joe-greet "Mary")
"Hi Mary, I'm Joe"
> (joe-greet "John")
"Hi John, I'm Joe"
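For readers who don’t speak Racket, the second example has a close analogue in Python, where a class’s __call__ method plays the role of prop:procedure. This is a rough analogy of mine, not something from the Racket documentation:

```python
class Greeter:
    """A structure holding data (name) that can also be applied as a
    procedure via __call__ -- roughly the prop:procedure idea."""
    def __init__(self, name):
        self.name = name

    def __call__(self, other):
        # 'self' plays the same role as the first lambda argument
        # in the Racket example.
        return "Hi " + other + ", I'm " + self.name

joe_greet = Greeter("Joe")
print(joe_greet("Mary"))  # Hi Mary, I'm Joe
```

As in Racket, the instance is both inspectable data (joe_greet.name) and callable.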
Applying it to typed/racket
Alas, neither syntax works with struct as implemented in typed/racket. The problem, it seems, is that the static type checker as currently implemented cannot both define the structure and resolve its signature as a procedure at the same time. The right information does not appear to be available at the right phase when using the struct special form.
To get around this, typed/racket provides define-struct/exec, which roughly corresponds to the second syntactic form from #lang racket, less the keyword argument and property definition:
(define-struct/exec name-spec ([f : t] ...) [e : proc-t])

  name-spec = name
            | (name parent)
Like define-struct, but defines a procedural structure. The procedure e is used as the value for prop:procedure, and must have type proc-t.
Not only does it give us strongly typed procedural forms, it’s a bit more elegant than the keyword syntax found in #lang racket. Example code to resolve the question, as restated here in this answer, is:
#lang typed/racket

(define-type 2NN (-> Number Number Number))

(define-struct/exec Cyclic2NN
  ((f : 2NN))
  ((lambda (self x s)
     ((Cyclic2NN-f self) x s))
   : (-> Cyclic2NN Number Number Number)))

(define-struct/exec SingleBound2NN
  ((f : 2NN))
  ((lambda (self x s)
     ((SingleBound2NN-f self) x s))
   : (-> SingleBound2NN Number Number Number)))

(define trigFunction1
  (Cyclic2NN
   (lambda (x s)
     (* s (cos x)))))

(define quadraticFunction1
  (SingleBound2NN
   (lambda (x b)
     (let ((x1 x))
       (+ b (* x1 x1))))))
The defined procedures are strongly typed in the sense that:
> (SingleBound2NN? trigFunction1)
- : Boolean
#f
> (SingleBound2NN? quadraticFunction1)
- : Boolean
#t
All that remains is writing a macro to simplify specification.
This post is related to Coursera’s Programming for Everyone course.
“Why fire up a text editor when I have a command line?” is the sort of question that I never really asked when I was running Windows as my primary operating system. My default approach was to look for a button to click on. But it’s the sort of question that naturally arises as I spend more time with Linux.
After navigating to the directory where I want hello.py to live:
[prog4everyone]$ touch hello.py
[prog4everyone]$ echo print "'hello world'" >> hello.py
[prog4everyone]$ cat hello.py
print 'hello world'
[prog4everyone]$ python hello.py
hello world
touch hello.py – touch creates an empty file named ‘hello.py’ if it does not already exist. Otherwise it changes the timestamp of the existing file.
echo print "'hello world'" >> hello.py has two parts.
The first part is echo print "'hello world'". echo simply repeats what is given as input. In order to pass the double quotation marks " around “hello world” through the echo command, the quotation marks need to be nested in single quotes '. Alternatively, to pass single quotes around ‘hello world’ through echo, they must be wrapped in double quotations. In other words, echo print '"hello world"' and echo print "'hello world'" will both pass good Python syntax.
The second part is >> hello.py. It takes the output of the first part and appends it to the end of the file “hello.py”.
cat hello.py – The cat command concatenates the contents of various files and outputs the result. In this case the output is sent to the screen, and since only one file is provided as input to cat, we just get the contents of that file [i.e. print 'hello world'].
python hello.py – This calls the Python interpreter with “hello.py” as input. This returns hello world.
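Since this is a Python course, it’s worth noting that shell append redirection has a direct Python analogue: opening a file in append mode. This is a side note of mine, not part of the course material, and the filename demo.txt is arbitrary:

```python
# The Python equivalent of: echo print "'hello world'" >> demo.txt
# Mode 'a' appends, creating the file if it doesn't already exist.
with open('demo.txt', 'a') as f:
    f.write("print 'hello world'\n")

# Read it back, like: cat demo.txt
with open('demo.txt') as f:
    print(f.read(), end='')
```

Like >>, running this twice appends a second copy of the line rather than overwriting the file.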
Now, I know some people are probably upset, since using touch to create hello.py is completely unnecessary: redirection will create a new file if one doesn’t already exist. For example:
[prog4everyone]$ echo print "'goodbye world'" >> goodbye.py
[prog4everyone]$ python goodbye.py
goodbye world
[prog4everyone]$
is even more efficient. Please accept my apology; I know that being enamoured with touch to create empty files is unhealthy and compulsive.