2015 Year in Review

It’s that time of year again — time for a retrospective on how I did on my goals for the year. I had 5 main goals for 2015:

  • Job Hunting
  • Conferences
  • Blogging
  • Programming Language Design
  • Writing an Agile Book

Job Hunting

I got pretty lucky on this one. My main contract with Mercy got extended several times. Amos and I must have been doing a good job of keeping the customer happy. We even made it through a couple rounds of layoffs. I’m wrapping up the gig at Mercy now. I’m working one day a week there, as the project winds down.

I also started a new gig this month at CenturyLink. I’m working on a cloud development team. Our current project involves selling WordPress as a service. The manager had been courting me for most of the year. I’m excited about my new role; I’ll be writing about it in a blog post soon.


Conferences

I set a goal in 2014 to give my first conference talk. I accomplished that, giving an ambitious talk at RubyConf. I enjoyed the experience, and vowed to do more conference speaking.

I gave 3 conference talks in 2015. I gave a workshop on HTTP at RailsConf. I talked about immutable infrastructure at Madison+ Ruby. At RubyConf, I gave a talk on a micro-ORM I wrote. I also gave a lightning talk about Agile estimation (#noestimates).

I was an alternate speaker at Windy City Rails, but did not give my talk on Alternatives to ActiveRecord. I also went to Strange Loop, mainly to see several friends and acquaintances speak.


Blogging

I wrote 24 blog articles this year. That’s about one every other week. What really kept me going was participating in a writing pact. While the pact was active, I kept up a 75% weekly posting rate. That’s pretty good.

I’m not so sure about the quality of my blog writing though. I know that practicing writing is supposed to make you better. I know I wrote some really good articles over the past year, but I think I also wrote some articles that weren’t very good. I think sometimes the deadline has caused more harm than good. I’m not really sure what to do about that; perhaps just pushing on is the right answer.

Programming Language Design

I’ve taken a lot of notes on the design of my programming language. Any time I learn something interesting about another language, or come up with another idea, I write it down.

But I haven’t worked on the implementation. (I last worked on the implementation in 2014.) I should be experimenting with some ideas, implementing them to see how they work out. I’ve even kicked around the idea of starting with a Forth variant, just to get something working quickly.

I haven’t written any articles on my ideas this year either. My notes are pretty extensive, and it would be good to write some articles to help get my thoughts straight.

Writing an Agile Book

I’ve got some things to say about Agile, and want to write a book to express those ideas. I’ve made a start: I’ve got the chapters outlined, and have started on a few of them. But I haven’t made as much progress as I’d like. I shared what I’ve got with Amos, and he showed some interest in pairing with me on the writing. Hopefully we’ll work on it together in 2016 and publish it.


Other Accomplishments

There were a few other accomplishments that weren’t explicitly on my list, but that I’d like to call attention to.

I’ve continued participating on the This Agile Life podcast. I was in 12 of the 33 episodes that were recorded in 2015. I hope to participate in more in 2016. We’re considering scheduling a standard recording night each week, which might help us record more regularly.

I recently took over as maintainer of Virtus, a library to declare attributes for Ruby model classes. I haven’t done a lot yet, since I’ve been busy with travel, vacation, and holidays. But I hope to catch up with all the pending pull requests and issues in the next month or so.

The accomplishment I’m most proud of is mentoring for the Roy Clay Sr. Tech Impact program. This is a program begun as a result of the Ferguson protest movement. We’re helping teach kids (ages 14 to 25) web design and development. My personal goal was to give these kids an opportunity that they would not otherwise have had. But it turns out that some of them have actually started a business building web sites for small companies. It’s a challenging program, and I’m so proud of the progress they’ve made in such a short time.


I’m pretty happy with my accomplishments this year. I made at least some progress on each of the goals I set. I’ve been thinking about my goals for next year; I’ll write that as a separate blog article next week.

The Ultimate Optimization

I got my first computer in 1984. It was a Commodore 64. I had to do extra chores and save up my allowance to buy it. I was 13.

Back then, Sears and other retail stores had Commodore 64 computers out on display. Whenever I went to the store, I’d write a short BASIC program and leave it running on the display computers. It was something like this, with line 10 printing whatever message I wanted to show off:

10 PRINT "HELLO! ";
20 GOTO 10

Hey, I was a 13-year-old kid. Later, I got a little more sophisticated. I’d change the background color instead. I still remember the POKE address:

10 FOR I=0 TO 255
20 POKE 53281, I
30 NEXT
40 GOTO 10

This was fast enough to change the background color every few scan lines, creating a flashing scrolling effect.

Later, I learned 6502 assembly language. I translated the BASIC program into assembler, and memorized the bytes to type in at the store. In assembly language, the background color would change several times per scan line. The effect was kind of psychedelic.

All that happened in the mid-1980s.

Fast-forward to about 2000 or so. I was telling the above story after a St. Louis LUG meeting. I explained how I had memorized the 10 or 12 bytes of machine code, and would leave the program running with its psychedelic effect.

After thinking about it for a bit, I thought that 10 or 12 bytes seemed too much. It actually bothered me — I couldn’t fall asleep when I got home. I got up and found my old 6502 manuals. I figured out how to write the code in 7 bytes. I installed the Vice C64 emulator on my Linux desktop, and tested my code. It worked as expected. (The emulator was already clock-cycle perfect by then.) Here’s the assembly code:

INX         ; $E8           ; 232
STX $D021   ; $8E $21 $D0   ; 142 33 208   ; $D021 = 53281
JMP $C000   ; $4C $00 $C0   ; 76 0 192     ; $C000 = 49152

Here’s the BASIC program to store that program in memory and run it:

10 FOR N=49152 TO 49152+6: READ Q : POKE N, Q : NEXT
20 DATA 232, 142, 33, 208, 76, 0, 192
30 SYS 49152

The moral of the story is that you can optimize even a 10-byte program, 15 years after the last time it was used. So don’t tell me that your program can’t be improved, no matter how small it is.

PS. I rewrote the code above while writing this article in 2015, about 15 years after the last time I rewrote it. And I again downloaded Vice to test it, this time on Mac OS X.

Not Quite Callbacks

I’ve been working on application architectures based on Uncle Bob’s Ruby Midwest talk, following the hexagonal architectural pattern. I posted an article a couple months ago showing a way that works fairly well in Rails, and some accompanying Rails example code. But there was one thing I wasn’t quite happy with.

The problem is that we used callbacks (actually, a publish/subscribe mechanism) in a situation where they don’t seem to quite fit:

  def show
    interactor.on(:display) { |order| render order }
    interactor.on(:not_found) { |order_id| render status: 404 }
    interactor.get(params[:id])
  end

What we really want is to respond in different ways, depending on the result of the call to interactor.get(). There’s no good reason to define the responses before the call. It makes a lot more sense to define the responses after the call, because they’ll happen after the call. I’d much prefer that the code be written in the order that it will be run.

I discussed this problem with my friend and colleague, Amos King. We came up with a better solution, which puts things back in the right order:

  def show
    interactor.get(params[:id]) do |on|
      on.display { |order| render order }
      on.not_found { |order_id| render status: 404 }
    end
  end

He even wrote a small library to do this, which he called Riposte. I’m not sure what to call this pattern, but it seems to work pretty well in this situation. I suppose these are still technically callbacks, since they’re defined in the block passed to the call to interactor.get(). But due to the magic of Ruby blocks, we get to write them in the order they’ll run.

Riposte also gives you the option of using the response object directly, instead of passing a block:

  def show
    on = interactor.get(params[:id])
    on.display { |order| render order }
    on.not_found { |order_id| render status: 404 }
  end

This shows that it’s just returning an object, with the twist that the response object has methods that take blocks. The nested blocks variant is really the same thing, except that it’s yielding to the response object instead of returning it.
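To make that concrete, here’s a minimal sketch of such a response object (the Response class and its method names are my own illustration, not Riposte’s actual API): each method runs its block only when it matches the result.

```ruby
# Sketch of a response object whose methods take blocks.
# Only the block matching the actual result gets called.
# (Response and its methods are illustrative, not Riposte's real API.)
class Response
  def initialize(result, payload)
    @result = result
    @payload = payload
  end

  def display(&block)
    block.call(@payload) if @result == :display
  end

  def not_found(&block)
    block.call(@payload) if @result == :not_found
  end
end

on = Response.new(:not_found, 42)
on.display   { |order| puts "rendering order #{order}" }  # skipped
on.not_found { |id| puts "order #{id} not found" }        # runs
```

An interactor can return such an object directly (the variant above), or yield it to a block (the nested variant).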

I’ve decided that this is the pattern I’d like to use for interactors and their callers within Ruby hexagonal architecture.

Architectural Thoughts

I’ve started working on my own framework in Ruby in the past couple days. It’s built upon my recent work at understanding Uncle Bob’s Ruby Midwest 2011 talk, and his article on Clean Architecture, and the resulting hexagonal architecture (AKA ports and adapters).

Somehow my research in that vein led me to Gary Bernhardt’s Boundaries talk. I’ve read a lot about the talk, and knew about the idea of “functional core / imperative shell”. And I’ve worked with a lot of similar ideas lately. But I believe this is the first time that I actually watched the whole video.

Even after having read a lot about similar ideas, it was pretty mind-expanding. Gary’s really good at presenting these kinds of ideas in a simple way.

OOP as usually taught includes encapsulation of data together with behavior, with mutable objects. Functional programming separates data and behavior, with mostly immutable data. From experience, encapsulating data and behavior together seems helpful. But experience also shows that immutability is useful. So it would be good to have both of those together. This is something I’ve been thinking for a few years — how best do we get both?

Gary calls the combination “FauxO”. Logic and data are still combined, but there’s no mutation. Anywhere OOP would normally have mutation would just generate a new object. There’s no language restriction involved in enforcing immutability — just discipline.
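A tiny sketch of what that looks like in Ruby (Account is my own toy example, not one of Gary’s):

```ruby
# FauxO sketch: data and behavior live together, but nothing mutates.
# A "mutating" operation returns a new object instead.
Account = Struct.new(:balance) do
  def deposit(amount)
    Account.new(balance + amount)  # new object; the receiver is untouched
  end
end

a = Account.new(100)
b = a.deposit(50)
a.balance  # still 100
b.balance  # 150
```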

But without mutability, it’s hard to do IO and maintain state. So Gary’s solution is to encapsulate as much as possible into an immutable (functional or FauxO) core, and around that, use an imperative (traditional OOP) shell. The functional core contains the bulk of the logic, and the imperative shell is a glue layer that handles the real world, including disk, network, and other I/O.

The result of this is that the shell has fewer paths, but more dependencies. The core contains no dependencies, but encapsulates the different logic paths. So we’re encapsulating dependencies on one side, and business logic on the other side. Or put another way, the way to figure out the separation is by doing as much as you can without mutation, and then encapsulating the mutation separately.

I love how this naturally breaks things up, so that the core is all testable with unit tests, and the imperative shell is tested with integration tests. And since the shell has few or no logic paths, you get the testing pyramid, with more unit tests and fewer integration tests. The whole thing ends up being quite beautiful. Tests end up being very fast without any extra effort — not even stubbing or mocking. This tells us that things have been decomposed very well — an elegant design.
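As a rough sketch of the split (again a toy example of my own, not Gary’s): the core is pure and needs only unit tests, while the shell does the I/O and gets a few integration tests.

```ruby
# Functional core: pure logic, no dependencies, trivial to unit-test.
module Pricing
  def self.total(quantities, unit_price)
    quantities.sum * unit_price
  end
end

# Imperative shell: all I/O lives here, as a thin glue layer.
def print_invoice_total(path, unit_price)
  quantities = File.readlines(path).map(&:to_i)  # disk I/O
  puts Pricing.total(quantities, unit_price)     # console I/O
end
```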

Gary makes the case that immutable objects can be treated as values, and passed across boundaries. Even process boundaries. This is something I’ve noticed as I’ve been working on my own Uncle Bob style hexagonal framework, but nobody in that camp ever mentioned that — they prefer DTOs or something more like hashes. I’m completely against hashes, because of the “stringly-typed” problem. And I don’t see much advantage in a DTO if I’ve got an immutable object; I’d be basically copying the object to an almost identical object. And I’d be losing any laziness possible for derived values within the original immutable object.

It’s striking to me how Gary’s image of an imperative shell around a functional core, with Net, Disk, and State outside the shell, mirrors Uncle Bob’s concentric circles. Uncle Bob has entities in the middle, surrounded by use cases, surrounded by Web, DB, and UI.

Another advantage that Gary shows is that breaking things up this way allows easy concurrency. In his example, he shows using the actor model — either just using threads and queues, or an actor library (or language feature).

After several years of thinking about the architectural issues seen in most large Rails apps, I’m starting to come to an understanding of how to combine all these ideas and come up with an architecture that will work better.


Hexagonal Rails Controllers

I’ve had a long love-hate relationship with Rails. I love the MVC framework and how it’s improved our speed of writing web apps. But I’ve never really been completely happy with it. I don’t generally agree with most of its opinions. I prefer models that follow the Data Mapper pattern, not the Active Record pattern. This includes separating the persistence layer from the models’ business logic. I prefer Slim or HAML to ERB. I prefer RSpec to Test::Unit or MiniTest. When Merb hit the scene, I was ready to make the jump, until Merb merged with Rails.

So inspired by PJ Hagerty’s recent article on alternative Ruby web frameworks, I started thinking about how I’d write a replacement for Rails. I’d definitely keep the basic MVC framework. But I’d also want to implement a more hexagonal architecture.

I started sketching out what this would look like, but I ended up starting with a Rails controller and finding the simplest way to make it hexagonal. I really don’t like callbacks, because they make tracing program execution difficult. But I didn’t see any other alternative. I found a simple pub/sub Ruby library called Wisper. It literally has only publish, subscribe, and on methods. (You use on to register single callbacks via blocks, and subscribe to register an object with method names corresponding to the callback names.)

The trick was figuring out how to break the controller into 2 pieces. What finally helped me was to find the single responsibilities of the 2 pieces. The Rails controller would remain in charge of managing the web interface, but would delegate to the other piece to handle any application-specific business logic. I decided to re-watch Uncle Bob Martin’s “Architecture The Lost Years” talk, which was the first time I was introduced to the ideas of Hexagonal Architecture. (He doesn’t name the architecture in the talk, but later calls it Clean Architecture.) He does a decent job of explaining how to break these 2 pieces apart. He used the term “interactor” in that talk, so I decided to go with that. He said that Jacobson calls it a Control Object in Object-Oriented Software Engineering, but that’s too close to Rails’s “controller”.

So here’s an example of what I ended up with:

class OrderController < ApplicationController
  def index
    interactor.on(:display) { |orders| render orders }
    interactor.list
  end

  def show
    interactor.on(:display) { |order| render order }
    interactor.on(:not_found) { |order_id| render status: 404 }
    interactor.get(params[:id])
  end

  private

  def interactor
    @interactor ||= OrderInteractor.new
  end
end

And the interactor it delegates to:

require "wisper"
require "order"

class OrderInteractor
  include Wisper::Publisher

  def list
    orders = Order.all
    publish(:display, orders)
  end

  def get(id)
    order = Order.find(id)
    publish(:display, order)
  rescue ActiveRecord::RecordNotFound
    publish(:not_found, id)
  end
end
I do have a few problems with this solution though. I’m not a fan of the name “interactor” for the business logic. I thought about calling it OrderOperator, or maybe OrderOperations, because it’s really a collection of operations. Perhaps it would be better to separate each operation into its own class. Trailblazer does it that way. And for more complicated business logic, I would do that too, using the Method Object pattern. But like a Rails controller, there’s a lot in common among all the operations. I feel like a separate class for each operation would create too many coupled classes.

I’m also uncomfortable with the fact that the controller is delegating almost everything to the interactor. I guess this is OK, but it feels like there’s too little left when every line starts with interactor. I suppose extracting things some more would help mitigate this concern. I’ll likely write a small gem to perform that extraction. I expect that will allow a typical controller to be written in only a few lines, and maybe the same for the interactor side.

With the business logic extracted out of the controller, it was really easy for me to write a command-line version of the app. As Uncle Bob says, “the web is not particularly important to your application.”

I’ve put the code for this example on GitHub: https://github.com/boochtek/hexagonal-rails. I’ll likely experiment with it some more over the next few weeks and months.


Goals for 2015

January kept me pretty busy, so I’m a little late to this. But better late than never. And as an Agile practitioner, I don’t think personal retrospectives should be limited to one time of year.

Review of 2014

Last year I wrote a blog entry listing my goals for 2014. As far as New Year’s resolutions go, I was relatively successful — about 50% of my goals accomplished. Unfortunately, my Open Source contributions weren’t as strong as I had hoped; while I released some of my own work, I didn’t do much else. I did increase my blogging; getting in on a weekly blogging pact helped immensely. I also increased my participation on the This Agile Life podcast to a level that I’m happy with. But the accomplishment I’m most proud of was giving a presentation at RubyConf.

Plans for 2015

I’d like to keep things rolling from last year, but crank up a few things. My plans are quite ambitious, so I don’t expect to get everything done by any means. But I think by setting the bar high, I’ll end up with a lot I can be proud of.

Job Hunting

Late last year, I took the jump into independent consulting. So far, I’ve really enjoyed it, and I’m booked up through April. My wife graduates in May, so we’ve got the possibility of moving if that makes sense. So I’ll be looking for consulting projects in town, but I’ll also be looking at jobs in San Francisco and Chicago. The possibilities are exciting, and I’ll be taking my time to find something just right.


Conference Speaking

I was incredibly nervous leading up to my RubyConf presentation. Part of that was just the common fear of public speaking. For me, that only kicks in at around 100 people, and this audience was around 250. I think another reason was that I chose a really ambitious topic, and I kept finding more that I wanted to talk about, but wasn’t prepared for. But I think I did a pretty good job presenting an advanced topic. And I was so pumped by the sense of accomplishment as soon as I finished. So I’m hoping to do it more. I’ve already submitted a couple proposals, and plan to submit several more.


Blogging

I believe that blogging is important for me to get my thoughts down — for myself and to share with others. I was really successful last year when I had a partner to keep me honest, via a pact. So I’ve started up another pact this year, which will hopefully ensure I’ll keep things going. I’ve got a really long backlog of topics, so as long as I keep at it, I’ll have plenty to write about.

I also want to move away from WordPress to a static system — probably Middleman. I’ve got 2 major problems with WordPress. First, I no longer trust its security, nor the security of any application written in PHP. Second, it generates HTML every time someone requests a page, instead of when the content is updated. I find that to be a waste of resources, and problematic from a security standpoint. The main problem with moving to a static blogging system is that I really want to allow comments, pingbacks, and tweetbacks. So I’ll have to find a way to integrate those.

Programming Language Design

Last year I started thinking about programming language design, and started implementing a language tentatively called Brilliant. I’ve done a lot more thinking on the topic, and have a lot of notes. But I haven’t implemented much more yet. This year, I’d like to get my thoughts more organized, and write a series of blog posts on various aspects of language design. The most interesting part seems to be the trade-offs involved in the ways that various language features interact. So I’d like to make some progress on the language implementation, but more importantly, I’d like to get a lot of my design ideas written down.

I’m also going to spend a lot of time learning a bunch more programming languages, so I have a better understanding of possible features, combinations of features, and their interactions. I’ve already started with Elixir, Clojure, and Racket. I’m hoping to also look at OCaml, Factor, and Haskell. I’ll probably also take a look at the 2 “Seven Languages in Seven Weeks” books.

Agile Book

I think people often have trouble getting started with Agile. I started on a book last year, and got down quite a lot of good ideas. But I realized that I’m going to have a hard time organizing all those ideas into something coherent. Still, I’d like to try to get something out there that lets people get started with Agile. My idea is to present a toolbox of practices to get started with and build on that foundation over time with additional practices. Sort of a playbook on how to get started over the first 6 to 12 months and be successful. I want to make some progress on the book, at least enough to decide whether it’s worth the effort to finish it and self-publish it.


Ruby Pattern: Parameterized Module Inclusion

I’ve run across a pattern in Ruby lately that I really like. It solves some problems that I’ve struggled with for several years. Let me start with the problems.

Let’s say you want to include an ORM in a model class, and want to tell it what database table to use. Typically, you’d do this:

class User
  include MyORM::Model
  table 'people'
end

But that table call is more like an option to the module inclusion than anything else. So what we’d really like is something like this:

class User
  include MyORM::Model, table: 'people'
end

But that’s not valid Ruby; include doesn’t let you pass anything other than a module.

So when I was learning about Virtus, I noticed that its example of how to include it is a bit different than the standard Ruby idiomatic include:

class User
  include Virtus.model
end

At first glance, it reads like the first example. But on closer inspection and consideration, it’s quite a bit different. Where MyORM::Model is a constant that refers to a module, Virtus.model is a method call. So there’s a method named model in the Virtus module. That method returns another module — which is exactly what’s needed in order to include it into our model class.

The easiest way to implement Virtus.model would be this:

module Virtus
  def self.model
    Model
  end

  module Model
    # ...
  end
end

If Virtus.model doesn’t need to take any arguments, that’s perfectly fine. In fact, I’ve started to use this implementation of the pattern for modules that don’t need parameters.

Because Virtus.model is a method, we can also call it with options:

class User
  include Virtus.model(constructor: false, mass_assignment: false)
end

We could even pass a block. But how do we process those options? There are a few different ways. However we do it, we have to be sure to return a module. And we can create modules in a few different ways.

Virtus uses the builder pattern. It takes the parameters passed in and builds a module dynamically. By that, I mean that it calls Module.new and then adds methods to that module. It does this by mixing in other modules, but it could do it by dynamically defining methods as well.
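Here’s a stripped-down sketch of that builder approach (MyORM and its table option are hypothetical, and this is far simpler than what Virtus actually does):

```ruby
# Build a module on the fly from the options passed to the method.
# (MyORM is a made-up example, not Virtus's implementation.)
module MyORM
  def self.model(table: nil)
    chosen_table = table
    Module.new do
      # Anything defined here becomes an instance method of the includer.
      define_method(:table_name) { chosen_table }
    end
  end
end

class User
  include MyORM.model(table: 'people')
end

User.new.table_name  # => "people"
```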

I’ve never seen this pattern in any other language. It’s obviously only possible because we can dynamically create modules.

The use of this idiom seems to be catching on a bit in the Ruby community. I’ve started using it myself, and will be adding it to my Includable::ActiveRecord gem soon.



Brilliant – My Very Own Programming Language

I’ve decided to design and implement my own programming language, and call it Brilliant.

I’ve been interested in programming languages and linguistics almost as long as I’ve been using computers. I’ve long thought that if I ever go back to college, it’s likely that I’ll concentrate on programming languages as a specialty.

My recent discovery and involvement with the Crystal programming language has gotten me excited about new language ideas. It’s also helped me realize that implementing a language myself is feasible, and that there’s no better time to start than now.


For now, the Brilliant implementation doesn’t let you do much more than “Hello World”. But you gotta start somewhere. So this article isn’t really “Introducing Brilliant” so much as the start of a series of articles on its design and implementation.

I wanted to use a PEG (parsing expression grammar) parser for several reasons. For one, they seem to have gained a lot of popularity in the past few years. Also, PEGs cannot be ambiguous, which solves a few difficult problems, such as the “dangling else”. Perhaps my favorite feature of PEG grammars is that you don’t need a separate tokenizer (lexer). This provides a nice advantage: we can use keywords like “class” as variable names, as long as they’re not used in a place where the keyword would make sense.

So knowing that I wanted to use a PEG, I had to find a PEG parser. I kind of wanted to use ANTLR, which has been a leading parser generator for many years. But its PEG support seems to be new in version 4, and I couldn’t find any Ruby bindings for version 4. Treetop seems to be the most popular PEG parser for Ruby, but I found the EBNF format that Rattler uses more to my taste. I think the fact that Rattler is newer also gives it a few advantages, having had a chance to learn some lessons from Treetop.

I thought about using the Rubinius VM, but decided to go with LLVM, mainly since it has slightly better docs for Ruby, and because it’s what Crystal uses. Also, it’s pretty easy to get it to compile to a binary executable or run in a JIT. In the future, I might consider switching to the Rubinius VM, the Erlang VM, or the Perl 6 VM (Parrot). But for now, I like the idea of being able to compile to a binary and easily interface with C, just like Crystal.


Goals

My main goal is to have fun playing around with language ideas.

I’ve found a really great language in Ruby, so I’ll be using it as a starting point. But Ruby does have its faults. In some ways, I want to answer the question “what would Ruby look like if we designed it today?”.

But I also want to explore other ideas. What if objects defaulted to immutable? What if functions and methods were assumed to be pure by default? Might it be possible to infer the purity of a function or method? (If so, we could automatically memoize them.) Can we make creating an actor as easy as creating an object?

I’ll also be looking at ideas from other programming languages. Could we get some of the benefits of Haskell’s purity without having to do somersaults to do IO? Could we mix Python’s indentation style scoping with brace style or begin/end style? Could we integrate Icon’s ideas about success and failure (goal-directed execution)? What interesting features can we pull from Ada, Io, CoffeeScript, Crystal, Rubinius, Perl 6, etc.?

I’m not so much interested in cutting-edge features as in features that can be easily used by the average programmer. More importantly, I’m interested in how features interact with each other, so they fit well together to create a whole that’s greater than the sum of its parts.


Naming

I wanted to name my programming language “Fermat”. I’ve long been intrigued by Fermat’s Last Theorem, and Fermat’s Little Theorem is important in number theory. Unfortunately, there’s already a computer algebra system with that name.

So I decided to find a name in the same vein as “Ruby” and “Crystal”. I looked at the Wikipedia page for “gemstones” for some inspiration. A lot of the names of gemstones are already taken. I considered some obscure gemstones, but saw the word “brilliant” and thought it was decent. It’s not the name of another gemstone, but still evokes some similarity to Ruby and Crystal.

So that’s the name for now. Perhaps I’ll decide to change it at some point in the future, but I needed a name for the project, as well as a file extension for source code files. I chose “bril” for that. I suppose “br” would be a better choice. Perhaps I’ll change that before the next article in this series.


I hope to work on Brilliant every once in a while. I expect it’ll take a couple years before it’s really very useful.

When I do add a major feature, I’ll be certain to blog about it. I’ve got tons of ideas strewn about in various files. It would be great to get them organized and published — and even better to get them implemented.

Slow Down!

There’s a tweet that I saw recently, with some simple advice for novice programmers:

Slow down.

This is probably good advice for most programmers. Our team recently noticed that every time we try to rush things, we make mistakes. And the mistakes end up costing us more time than if we had just done things at our normal pace. Slowing down ensures that you do things right, and when you do things right, you end up with a higher-quality product.

Speed and Code Quality

There are 2 types of code quality: internal and external. External code quality can be measured by how many bugs have been reported by customers. Internal code quality is harder to measure, but it mainly deals with the ability to change the code. When your internal quality is low, you’ve got lots of technical debt, and it’s harder to make changes.

So when you try to write code quickly, code quality decreases, leading to a code base that takes more time to make changes to. Conversely, when you slow down, your code quality improves, and it becomes easier to make changes more quickly. So when writing code, slowing down in the short run leads to a speed-up in the long run.

Speed and Process Improvement

But writing code isn’t the only place where we try to speed up. On an Agile team, we’re always trying to improve the way we work — especially at the beginning stages of an Agile transformation. So we’re eager to make changes in our processes. But I’d urge you to slow down here as well.

My colleague Amos and I frequently argue over pair switching. It’s funny, because we agree on everything except for 1 small detail. We both think pair switching is very important: to ensure that team members see more of what’s going on, to bring more ideas to each story, to prevent knowledge silos, and to encourage team ownership. Where we disagree is on how long an ideal pairing session should last. I think pairs should switch every 2 hours; he thinks 1 hour is ideal. I’ve seen teams reach 1-hour pairing sessions successfully, but usually not without some pain, and often not without failing on the first attempt.

There’s nothing inherently wrong with failing. But if you fail at something, you’re not likely to try again. After all, you should learn from your failures, right?

So if you want your team to do something, you probably don’t want them to fail at it. If they fail, they won’t want to try a second time. That’s just human nature, and learning from failure. While you might think that they failed because they weren’t ready for the change yet, they’ll most likely think that they failed because this particular change won’t work for their situation. And they probably won’t know what to change when trying again, so they won’t try again.

I’ve seen this over and over. Back when Linux was up-and-coming, if a consultant pushed a company into using Linux before they were ready, and it didn’t work out, that company became cautious about trying again. So instead of being on the leading edge of Linux adoption, or even in the middle of the pack, they ended up toward the trailing edge. Had they not been pushed, they would have gotten more benefit in the long run.

So my advice in process improvement is the same as in programming: slow down. Take small steps toward what you think is the ideal. Make a small change, see how it works out, and adjust. As long as you’re still moving in the right direction, I believe you’ll move faster by taking small steps than by trying to make big leaps.

Burying the Lede

Most of us don’t write very readable shell scripts. There are plenty of things we could do better, but today I want to talk about one in particular — burying the lede.

The term “burying the lede” comes from the field of journalism. Here’s the Wiktionary definition:

To begin a story with details of secondary importance to the reader while postponing more essential points or facts.

Like a good news article, code should tell a story. And the story should start with what’s most important. In the case of code, the most important information is the high-level functionality — a succinct summary of what the program does. In other words, write (and organize) the code top-down, as opposed to bottom-up.

Unfortunately, shell script doesn’t make this easy. Due to the way shell scripts are interpreted, you can’t call a function until after you’ve defined it. This leads to most of us structuring our code like this:

function do_something { ... }
function do_something_else { ... }

do_something
do_something_else

The problem with this is that the function definitions will likely take quite a few lines, and we won’t see what the top-level functionality is until we reach the end of the script.

I’d like to propose a standard way to structure shell scripts to mitigate this issue. (I’m really only talking about shell scripts that have function definitions within them.) I’m sure I’ve seen a few scripts do this, but it’s not very common at all.

My proposal is simple:

function main {
    do_something
    do_something_else
}

function do_something { ... }
function do_something_else { ... }

main "$@"

This structure lets us start with the lede. We describe the top-level functionality right away. Only then do we get to the secondary details. The name main makes it pretty clear that it contains the top-level functionality.

I’ve recently started writing my shell code like this, and I’m happy with the results. I’ve also started to use some other programming techniques in my shell scripts to improve readability: better naming, extracting more methods, and moving helper methods into separate files. It feels good to treat shell scripts like real code instead of just some stuff I’ve hacked together.

PS. The WordPress theme I’m currently using (Twenty Eleven) also buries the lede — I can barely even see the title of the blog post on my screen without scrolling. I’m going to have to change that soon.