KludgeCode

is Ben Rudgers

Where Do I Want to Send a Check?

This morning I read Fred Wilson’s You Are Working Too Hard And Not Getting Anywhere via HN.

With any business idea, I start by thinking, “How can I get people to give me money?” No doubt that is because of my background in a family of people who worked for paychecks. I suspect most people think that way.

TACODA scaled its business when it answered the question, “How can I give money to other people?” As I noted in one of my comments, it’s easy to sell mailbox checks* to a business person. It makes them look and feel clever.

What I take from Wilson’s article is that deciding whom to mailbox-check may be a good starting point for developing a business model. Compared to getting Lamborghini to send me a check, sending mailbox checks to Lamborghini is trivial. Monetizing the process of mailbox-checking Lamborghini is non-trivial, but it may be several orders of magnitude easier than getting them to send me a check.

It’s a matter of recognizing that this is a cognitive shift, not a process shift. I could open a bike shop, but unless I want to write checks to bicycle distributors, I should look for something else.

*“Mailbox checks” is a term from my time with real-estate developers. They were the relatively small recurring payments from deals done years ago; the rent from a twenty-unit, small-market, subsidized apartment complex might only be a few thousand a month, but it was every month for many years.

It seems to me that figuring out the right people to mailbox-check is the essence of investing and taking investment: investors give money, investments give money back.

Remarks: Epigram 7

This is part of a writing exercise around Alan Perlis’s Epigrams on Programming.

It is easier to write an incorrect program than to understand a correct one.

The first week of my first semester as a philosophy major, I spent 80 minutes reading Descartes’ Meditations and progressed six pages. Understanding is generally hard work. When the material is unfamiliar, progress is slow.

It’s easier to do a lot of things incorrectly.

Dick Cavett wrote for Groucho Marx. “Well, you certainly could have fooled me” is the correct punchline; every other variation is wrong. Groucho expected the syllables to be perfect. Writing a sitcom is easier: make it funny enough to work with a laugh track. Vaudevillians like Groucho spent decades telling the same joke day after day until the timing was perfect, but try as you might, you can’t put that on TV and get people to watch.

Correct varies. There is always a context.

What can be measured easily gets done. Writing lines of code is easy to measure; developing an understanding of another person’s program is not. There is friction against sitting and thinking. The keyboard makes a noise, which some will take for progress. My friend Mike the Chemist had a pithy saying in his last undergraduate years: “Six hours in the lab will save you an hour in the library.”


Remarks: Epigram 6

This is part of a writing exercise around Alan Perlis’s Epigrams on Programming.

Symmetry is a complexity-reducing concept (co-routines include subroutines); seek it everywhere.

[An obviously Hickian interpretation] Data aggregating to data is even more symmetric than co-routines including subroutines. Co-routines and subroutines implement different logic: a subroutine can be functional, but a co-routine cannot, because a co-routine is a way not only of maintaining state but of making one state dependent on another. A co-routine is a continuation which is inherently stateful rather than seeking to be a datum.
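
A rough sketch of the distinction in Common Lisp, entirely my own rather than anything from Perlis: a subroutine that keeps no state between calls, next to a closure that behaves like the stateful half of a co-routine. (A full co-routine also passes control back and forth; this only shows the retained state.)

;; A subroutine can be functional: same inputs, same output, nothing
;; carried over between calls.
(defun square (x)
  (* x x))

;; A co-routine-like generator built from a closure: each call resumes
;; with the state left behind by the previous one, so one result
;; depends on another.
(defun make-counter (&optional (n 0))
  (lambda () (incf n)))

;; (square 4)                           ; => 16, every time
;; (defparameter *next* (make-counter)) ; *next* is a made-up name
;; (funcall *next*)                     ; => 1
;; (funcall *next*)                     ; => 2, depends on the prior call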

[A caveat] My formal understanding of co-routines is Wikipedian, and the 70 points of intelligence Wikipedia gives me may not be sufficient to overcome my ignorance regarding them and to a lesser degree continuations.


Remarks: Epigram 5

This is part of a writing exercise around Alan Perlis’s Epigrams on Programming.

If a program manipulates a large amount of data, it does so in a small number of ways.

Contemporary Example: YouTube.

YouTube involves huge amounts of data, yet its publicly observable behaviors have evolved slowly over the past few years. Upload, play, pause, rewind, search, leave a comment, make a playlist: that’s pretty much always been the extent to which users can manipulate the data.

What Google does behind the scenes is undoubtedly more sophisticated, but the ways in which it manipulates the dataset still seem limited relative to its scale. Which points out that what is large today is larger than what was large yesterday; the data sets that counted as large when Perlis wrote the epigram would hardly register as large today.

The World Wide Web illustrates the effects of scale on data manipulation. At its massive scale, there are only a few simple manipulations, GET, POST, and so on, while at the small scale of the DOM there are all sorts of standard manipulations one can do.
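
A rough illustration, mine rather than the post’s: the Web-scale vocabulary fits in one short list, while even a partial sample of the DOM-scale vocabulary runs much longer.

;; The protocol-scale vocabulary: a handful of verbs.
(defparameter *http-methods*
  '(:get :head :post :put :delete :options :patch))

;; A small sample of the DOM-scale vocabulary; the real list is far longer.
(defparameter *some-dom-operations*
  '("appendChild" "removeChild" "insertBefore" "replaceChild"
    "setAttribute" "removeAttribute" "createElement" "querySelector"
    "addEventListener" "cloneNode"))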

Perhaps the small number of manipulations goes hand in hand with large sets of data because the only way for a human to deal with large amounts of data is via abstractions. Our purpose in examining data is to find patterns (or their absence), and a pattern is an abstraction.

Remarks: Epigram 4

This is part of a writing exercise around Alan Perlis’s Epigrams on Programming.

Every program is part of some other program and rarely fits.

Great Chain of Being Interpretation – The operating system may be considered part of the program because the program makes calls to services provided by the OS. The CPU’s microcode is likewise part of the OS. But below that, it stops. It is programs all the way down to the hardware but no further. Hardware is the last turtle. [Interestingly, causal chains to an Unmoved Mover feel as if they are going down rather than up. We descend a causal tree. We climb to angels.]

Web of Life Interpretation – Let’s treat “program” and “algorithm” as interchangeable (we can argue about that later) and get ourselves into the land of processes. Now the line between algorithms held in volatile memory and those implemented by logic gates is arbitrary. If we consider human processes – call them “habits” out of love for Dewey – akin to algorithms, then we can close the web up nicely. [I recognize it’s a stretch, so consider it a thought experiment.] Now we’ve got something along the lines of “users are part of programs and rarely fit.”

Everything a user tries to do suffers from the incidental complexity of translation into ones and zeros. Whatever goals the user has, or whatever actions they take, which cannot be translated do not exist for the homunculus living inside our programs. The degree to which our program translates actions and goals into ones and zeros is a measure of its simplicity for the user.

The easiest thing for software is failing to translate the user’s actions and goals into ones and zeros. This means that not doing something the user needs to do introduces complexity. Now they have to look for another tool or abandon their goal. Thus the sense in which something is simple for the user correlates to powerful software. This is the allure of Excel. It can make charts and calculate tips and maintain a grocery list and even let the user write letters (OK, it was actually Lotus 1-2-3 that my boss Greg used). Lotus 1-2-3 was simple to use, but as a whole, not easy.

The same goes for GNU Emacs – what could be simpler than typing in commands? The hard part is remembering all the commands and learning what they do. Notepad doesn’t have that problem because it doesn’t allow the user to fetch their email or maintain a calendar. Microsoft Office has the same problems as Emacs – and shares some with Notepad for the sake of Easy.

Determining the value of waterboarding logic to treat the user as software is left as an exercise for the reader.


S-Expression Isomorphism Between Lisp and Markup

Expressive Data in Lisp Lists, based on Programming in an Interactive Environment [Erik Sandewall, 1978].

I came across this little snippet and was impressed by the degree of isomorphism between s-expressions and markup languages such as XML.


((  (JAN 12 2014)
    ((9 15) (10 00) (SEE ANDERSON))
    ((10 45) (11 00) (SEE LUNDSTROM))
    ((13 15) (16 00) (ATTEND Y COMMITTEE MEETING))))

((  (JAN 13 2014)
    ((9 30) (10 00) (ATTEND NEW PRODUCTS PRESENTATION))))
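
For comparison, here is roughly the same calendar data as markup. This rendering is mine, not Sandewall’s, and the element and attribute names are invented for illustration; the point is that the nesting does the same structural work in both notations.

<calendar>
  <day date="JAN 12 2014">
    <appointment start="9:15" end="10:00">SEE ANDERSON</appointment>
    <appointment start="10:45" end="11:00">SEE LUNDSTROM</appointment>
    <appointment start="13:15" end="16:00">ATTEND Y COMMITTEE MEETING</appointment>
  </day>
  <day date="JAN 13 2014">
    <appointment start="9:30" end="10:00">ATTEND NEW PRODUCTS PRESENTATION</appointment>
  </day>
</calendar>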

I was primed for the idea by Slava Akhmechet’s article “The Nature of Lisp,” which I found via Hacker News item 10000, a comment on the article.

Paul Graham’s Recipe for Pasta

I’ve been reading Paul Graham’s book – or rather reading from it – for several months since picking up a used copy on Amazon for about $30. When reading his notes and going back to the text to see what was so important it needed a note, I came across this gem on object-oriented programming:

Let’s play that back one more time: we can make all these types of modifications without even looking at the rest of the code. This idea may sound alarmingly familiar to some readers. It is the recipe for spaghetti code.

The object-oriented model makes it easy to build up programs by accretion. What this often means, in practice, is that it provides a structured way to write spaghetti code. This is not necessarily bad, but it is not entirely good either. [ANSI Common Lisp, Paul Graham, 1996, p. 408]

I don’t know how far Graham’s views have changed over the last two decades, but there is something there which seems consistent with Rich Hickey’s views. But perhaps only if we ignore “not necessarily bad.”

Remarks: Epigram 3

This is part of a writing exercise around Alan Perlis’s Epigrams on Programming.

Syntactic sugar causes cancer of the semicolon.

Better to start, like Rich Hickey, with a definition. In this case, from Wikipedia:

syntactic sugar is syntax within a programming language that is designed to make things easier to read or to express. It makes the language “sweeter” for humans to use: things can be expressed more clearly, more concisely, or in an alternative style that some may prefer…it can be removed from the language without any effect on what the language can do: functionality and expressive power will remain the same.

There are two ways I can go with this.

  1. Perl-esque concision can make a significant fraction of a program consist of semicolons; that is, it becomes enlarged.
  2. The ultimate syntactic sugar accessible through Lisp macros means that the diseased semicolon has been removed (see the sketch below).
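
A minimal sketch of that second point, using a toy macro of my own rather than anything from the post: a WHEN-style form is sugar over IF and PROGN, and macroexpansion shows the sugar being removed without any loss of expressive power.

;; A toy macro, purely for illustration: sugar over IF and PROGN.
(defmacro my-when (test &body body)
  `(if ,test
       (progn ,@body)
       nil))

;; Expanding it removes the sugar and leaves the plain forms behind:
;; (macroexpand-1 '(my-when (ready-p) (launch) (log-it)))
;; => (IF (READY-P) (PROGN (LAUNCH) (LOG-IT)) NIL)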