Impromptu Retrospective

I’m surprised that I haven’t gotten this story down in print before. I’ve told it many times — including a few times on the podcast. It’s a great story about the power of retrospectives, and about the power of a blameless post-mortem.

I don’t recall all the specifics at this point. It was about 5 years ago. I’d just noticed that Arun had made some sort of mistake. That’s fine, people make mistakes. The thing that was different about his mistake was that I had made the same mistake about a week prior. And Amos had made the same mistake about a week before that.

Noticing a pattern of mistakes, Amos and I called an impromptu retrospective. We gathered all the developers into a conference room. We explained the problem that we were running into. At first, Arun was defensive. That’s understandable; he thought we were there to come down on him, to lay blame. But we made it clear that we weren’t focusing on him. We admitted that we had also made the same mistake recently. We weren’t there to lay blame; we were there to figure out how our team could stop making the mistake. It took Arun a few minutes to get over the defensiveness.

With the defensiveness out of the way, we could focus on the issue at hand. We were able to figure out the root cause of us all making the mistake. (I don’t know if we played the “5 whys” game, but I’m sure we effectively did something similar.) And with that, we were able to change our process, so that nobody else would make the same mistake again.

There are 2 important points to this story. First, you don’t have to wait until a scheduled retrospective to hold a retrospective. This one was impromptu, and it’s the best one we ever had. We saw a problem, addressed it, and found a solution in less than an hour. Had we waited until the end of the week, we would have forgotten some of the details, and wouldn’t have been as effective at solving the problem. Second, when addressing problems, take your ego out of the equation. If you’re in a position of authority, take the blame — but never place blame. Focus on what’s important — solving the problem.

And don’t forget the Retrospective Prime Directive:

Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.


The Problem With Estimates

I’m a big proponent of Agile (mostly XP; I’m mostly anti-Scrum), and I’ve contributed a bit to the #noestimates “movement”.

I don’t really mean that nobody should ever estimate anything. I mean that I’ve never seen useful (fine-grained) estimates anywhere. Here are some of the problems with estimates that I’ve seen frequently:

  1. We’re not good at estimating how long things will take. We’re usually optimistic about how quickly we can get things done, and we almost always fail to account for the things that will take extra time. I’ve never seen a project completed more quickly than estimated, and I’ve only rarely seen fine-grained (story-level) tasks completed more quickly than estimated.
  2. Management asks for estimates and then treats them as deadlines. The team then learns to inflate their estimates. Then management learns to reduce the estimates they’re given. Given fudge factors in each direction, the estimate no longer has much reliability. Even if you’re using story points, the point inflation/deflation leads to less consistency and therefore reduced reliability.
  3. Estimates that are given are negotiated down, or simply reduced. This raises the question of why you’d ask for an estimate and not take the answer provided. If you’re not going to listen to the answer, why ask the question? This is probably the craziest one on the list — given my first point, increasing an estimate would make sense. Reducing it is just magical wishful thinking.
  4. Plans change and work is added, but the deadline (presumably based on the estimates) is not changed to correspond with the extra work involved. So again, you’re not actually even using the estimates that were given.
  5. Management dictates deadlines arbitrarily, without speaking to the people who will be doing the work. Spending time estimating how long each task will take when the deadline is already set is completely pointless.
  6. Almost every deadline is complete bullshit, based on nothing. Often the excuse is that marketing needs to know when something will come out, so that they can let people know about it. Why they need to know the exact release date way in advance, I’ve never been able to figure out. Many people intuitively know that the deadlines are bullshit, and will likely be allowed to slip. The only exception to bullshit deadlines I’ve come across are regulatory deadlines. (I know there are a few other exceptions out there.)
  7. Estimation at a fine-grained level isn’t necessary. Many Agile teams estimate using story points, and determine a conversion from story points to time based on previous empirical data. This is fine, except that the time spent estimating each story is wasted time — counting the number of stories almost always gives the same predictive power. And since teams tend to get better at breaking stories up into consistently sized pieces, counting stories only becomes more reliable over time.
  8. The ultimate purpose of an estimate is to evaluate whether the proposed work will be profitable, and therefore worth doing. Or to compare the ROI (return on investment) between alternative projects. But to know that, you’ll have to know what value that work will provide. I don’t believe I’ve ever seen that done — at least not at a fine-grained level. Usually by the time you’re asked to estimate, the project has already gotten approval to proceed.

I’ll note that most of these practices pit management against the team, instead of having everyone work together toward a common cause. Most of them also seriously demoralize the team. And most of the time, the estimates aren’t really taken into account much anyway.

My advice is to first understand the value of a project before you consider estimating the costs. Any estimate at this point will be very rough, so make sure there’s a very wide margin between the expected value and the rough estimate of the cost. Even if you’re pretty certain of the expected value, I’d want to be sure the project would still be profitable if it took 3 or 4 times as long to complete as the rough estimate. And if there’s uncertainty in the expected value, allow an even wider margin.

Another way to mitigate the risk of throwing money at something that’s not going to have positive ROI is to shorten the feedback loop. Rank the work in order of value to the customer. (Realistically, you’ll have task dependencies to worry about, and should consider the effort involved too.) Work on the most valuable feature first, and get it into production as soon as possible. Once that’s done, you can assess whether your ROI is positive. Keep iterating in this fashion, always working on the features that will provide the most value, and stop when the ROI is no longer worth it compared to other projects the team could be working on.
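
To make this concrete, here’s a toy Ruby sketch (all names, numbers, and the ROI threshold are invented) of ranking by value per unit of effort and stopping when the next feature isn’t worth building:

  # Made-up backlog: expected value and rough effort for each feature.
  features = [
    { name: "checkout", value: 100_000, effort: 5 },
    { name: "search",   value:  40_000, effort: 2 },
    { name: "themes",   value:   5_000, effort: 3 },
  ]

  # Stand-ins for the real steps: deploying to production and measuring returns.
  def ship(feature)
    puts "Shipped #{feature[:name]}"
  end

  def roi_worth_continuing?(feature)
    feature[:value].to_f / feature[:effort] > 10_000  # made-up threshold
  end

  # Most valuable (per unit of effort) first; stop when the ROI drops off.
  features.sort_by { |f| -f[:value].to_f / f[:effort] }.each do |f|
    break unless roi_worth_continuing?(f)
    ship(f)
  end

In real life, the “measure returns” step is the hard part; the point here is only that the loop stops as soon as the next feature isn’t worth its cost.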

At a fine-grained level, if you’re using story points, I’d ask you to do the math to see whether just counting stories predicts how much will get done over time as well as the story points do. If so, you can save the time the team spends estimating stories. I’d still recommend spending time talking about stories so that everyone has a shared understanding of what needs to be done, and breaking stories up into smaller, more manageable sizes — with one acceptance criterion per story. Also take a look at whether empirical average cycle time (how long it takes a single story to move from start to finish) provides the predictive power just as well as estimates. (That is, is it bandwidth or latency that really provides the predictive power you’re looking for?)
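
Here’s a minimal sketch of that math in Ruby, with made-up iteration data, comparing a forecast from plain story counts against one from story points:

  # Invented history: stories and points completed in each past iteration.
  past_iterations = [
    { stories: 7, points: 16 },
    { stories: 6, points: 14 },
    { stories: 8, points: 15 },
    { stories: 7, points: 17 },
  ]

  avg_stories = past_iterations.sum { |i| i[:stories] }.to_f / past_iterations.size
  avg_points  = past_iterations.sum { |i| i[:points] }.to_f / past_iterations.size

  remaining_stories = 70   # what's left in the backlog (also invented)
  remaining_points  = 160

  puts "Forecast by count:  #{(remaining_stories / avg_stories).ceil} iterations"
  puts "Forecast by points: #{(remaining_points / avg_points).ceil} iterations"

If the two forecasts track each other across your team’s real history, the time spent assigning points isn’t buying any additional predictive power.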

And don’t forget Hofstadter’s Law: It always takes longer than you expect, even when you take into account Hofstadter’s Law.

Website Checklist

While creating a web page isn’t too difficult, there are a lot of moving parts involved in creating an effective website. We’ve written up this checklist as a guide for anyone who has decided that they need a website.

This list is for a basic static website. A web application will require significantly more effort. We’re working on a separate checklist for web apps.

Overview

  • Determine your goals
  • Develop a business plan
  • Register a domain name
  • Set up DNS hosting
  • Set up web hosting
  • Design and develop the site
  • Deploy and test the site
  • Market your site
  • Set up additional services
  • Maintenance

Goals

  • What do you want to achieve from having the site?
  • What “call to action” do you want visitors to take?
  • How will you measure success?
  • What will be the focus of the site?
    • Info (“brochure”) for an existing business
    • Blogging
    • Sales
    • Community
    • Web app
    • Mobile app

Business Plan

  • Who is your target audience?
  • Marketing
    • How will you get them to come to your site?
    • How will you get them to buy something?
  • Who is your competition?
    • How do you compare to them?
    • How do you differentiate from them?
    • What is your niche?
    • What makes your site better than theirs?
  • How will you make money?
    • Bringing people into brick and mortar business
    • Advertising
    • Periodic billing/subscriptions
    • Selling goods/services
    • Get bought out
  • Pricing
    • Aim high — it’s easier to lower prices than raise them
    • Tiered pricing often makes sense; most people pick the middle tier
  • What will it take to have a positive ROI?

Domain Name

You’ll probably want your own domain name.

  • Think of a name
    • Stick with a .com name if possible; don’t use .biz
    • Some other top-level domains come into fashion on occasion
      • .io is pretty popular right now
  • Check availability of the name
  • Keep checking names, until you find a good name that is available
  • Register the name with a respectable registrar
    • Domain registration typically runs $10-$35/yr
    • DO NOT allow your web host provider to own/control your name
    • You may want to grab the .net, .org, and other versions of the same name
    • Multiple-year registration is cheaper, but it makes it easier to forget to renew
  • Your registrar will likely point your domain to an “under construction” page initially
  • DO NOT LOSE YOUR NAME!
    • Spammers and pornographers will take it over if your registration lapses
    • Make sure your contact info (especially email address) is up-to-date
  • Beware scams (usually by US mail) trying to get you to renew with a different registrar
  • Note that the email address you register with will get spammed
    • Some registrars provide some protection for a fee

DNS Hosting

You’ll need someone to run the name servers that tell the world which server addresses your domain name corresponds to.

  • Find a DNS hosting provider
    • We like DNSimple; they also do domain registration
  • Provide the name servers to your domain registrar

Web Hosting

You’ll need servers to host your website. There are a lot of options available, from virtual hosts (a Linux server where you control everything) to application-specific services.

  • Talk to your developer or designer first!
    • The web host can significantly constrain the development environment
  • Cost
    • $10 to $500 / month is typical
  • Bandwidth
    • Number of users × how often they use it × average “page” size (see the worked example after this list)
    • What happens if you go over?
  • Up-time
    • What kind of down-time can the site sustain?
    • Higher guaranteed uptime costs more
    • What if the specified uptime is not met?
  • Development environment
    • What programming languages are installed?
    • What databases are installed?
    • What libraries are installed?
    • What if other libraries are required?
  • Shared/dedicated/virtual hosting
    • Shared means others are using the same machine, with security implications
    • Dedicated is expensive, but you “own” the whole machine
    • Virtual is somewhere in between
      • You have a “slice” of a machine dedicated to your site
  • How responsive is the host to problems and requests?
  • Backups
    • What do they back up?
    • How often do they back up?
    • How can files be restored?
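
As a rough illustration of the bandwidth line above, here’s a back-of-the-envelope calculation in Ruby (every number is invented):

  # The list's formula: number of users × how often they use it × average page size.
  users_per_month  = 10_000
  pages_per_user   = 20    # pages each user views per month
  avg_page_size_mb = 2.0

  monthly_gb = users_per_month * pages_per_user * avg_page_size_mb / 1024
  puts "~#{monthly_gb.round} GB/month"  # => ~391 GB/month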

Design and Development

Designing and developing the site can vary from picking an existing template and adding content, to developing a full web application.

  • Cost ($30 – $300 / hr)
  • Project management
    • Story/task management
  • Revision control
    • Ensures changes can be rolled back quickly
  • Functionality
    • What does the site need to do?
  • Usability
    • How easy is it to use each page?
    • Is it easy to navigate the site to find what you’re looking for?
    • Check for broken links
    • Check for ADA/508 compliance
    • Spell checking and grammar checking

Deploying and Testing

  • Can updates be deployed quickly?
    • Deploy early and often, so it’s not such a big deal, and it becomes routine
  • Consider a staging and/or beta site
    • Test everything thoroughly on staging site before deploying to production
    • Staging site should (mostly) use the same code as production
    • Staging/test site should not process credit cards, etc.
  • Automated testing
    • Prevents regressions
  • Exploratory testing
    • See how things work
    • See if you can break things
  • Security testing
    • Penetration testing
  • Performance
    • Load testing
  • Beta testers

Marketing

  • Search engine “optimization” (SEO)
    • Good design, good URLs, and well-written HTML should cover most of this
    • Submit site to search engines
    • Use robots.txt and site maps
  • Directories
  • PR sites
  • Targeted groups
  • DO NOT send out spam

Additional Services

What other services do you need, besides the website itself?

  • Email
  • Blog
  • Wiki
  • File storage
  • Forums
  • Authentication
  • Customer interaction
    • CRM
    • Feedback
    • Bug tracking
    • Customer Q&A (StackExchange)
    • Follow-up emails to customers to offer assistance

Maintenance

Over the lifetime of the site, you’ll likely pay more in maintenance costs than the upfront costs.

  • Responding to user emails
  • Requests for info
  • Feedback about the site
  • Password resets?
  • Tracking bug reports and feature requests
  • Site improvements
  • Additional content
  • Moderation of user content
    • Spam removal
  • Log analysis
    • Google Analytics
  • Assessing advertising effectiveness
  • Analysis of revenues/profitability
  • Upgrades
    • New/improved functionality
    • Bug fixes
    • Upgraded infrastructure
  • Down-time
    • Web host
    • Upgrades
    • Accidents
    • Bugs
  • Backups
    • Testing restoring from backups
  • Payments for services
    • Domain name registration – DO NOT LOSE YOUR NAME!
    • Web hosting
    • Marketing/advertising


Potential F5 Vulnerability

It all started with an email about a WebInspect report. It listed a buffer overflow, which we had marked as a false positive. I read the WebInspect report carefully, and found a note at the bottom saying that you could test manually to confirm whether or not it was a false positive. Unfortunately, the manual test listed had a few problems. First, it jammed the lines together, without the proper line breaks. Second, it assumed the site was using HTTP rather than HTTPS, so it used telnet. Third, it tested against a page that didn’t exist, which returned a 404. Keeping those problems in mind, I tried the manual test using the openssl s_client command:

openssl s_client -quiet -connect mysite.example.com:443
POST /login HTTP/1.1
Host: mysite.example.com
Transfer-Encoding: chunked

c00000000

The connection terminated immediately. According to the report, this meant that the server was vulnerable to a buffer overflow and arbitrary code execution.
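
For anyone who wants to reproduce the check, here’s the same manual test as a Ruby sketch. The host name is a placeholder, and the interpretation of a dropped connection as a vulnerability indicator is the report’s, not mine:

  require "socket"
  require "openssl"

  host = "mysite.example.com"  # placeholder: the site under test

  ssl = OpenSSL::SSL::SSLSocket.new(TCPSocket.new(host, 443))
  ssl.hostname = host  # SNI
  ssl.connect

  # The same chunked POST as above, with the oversized chunk-size line.
  ssl.write("POST /login HTTP/1.1\r\nHost: #{host}\r\n" \
            "Transfer-Encoding: chunked\r\n\r\nc00000000\r\n")

  begin
    puts ssl.readpartial(1024).lines.first  # a healthy server answers with an HTTP status line
  rescue EOFError, Errno::ECONNRESET
    puts "Connection terminated immediately"  # the behavior we saw through the F5 VIP
  end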

At first, we thought this was caused by Apache or Tomcat, or possibly the application code. But the reported vulnerability was an Apache CVE from 2002 (CVE-2002-0392), fixed by vendors long ago. After a while, we realized that if we hit the servers directly, we did not get the indication of a vulnerability. If we hit the site through the F5 VIP, we saw the immediate termination of the connection. The issue is with handling of HTTP chunked encoding. Nginx had a similar issue in 2013 (CVE-2013-2028).

So we turned our attention to the F5 load balancers. We were able to confirm that other sites using F5 load balancers were exhibiting the same behavior. We also confirmed that WebInspect run against the servers directly did not show the issue (even as a false positive). We reported the issue to F5, and they are looking into it.

I’m disclosing this publicly now for a few reasons. First, I’m not a security researcher, and almost nobody follows my blog — especially not people looking for security issues to exploit. Second, I’ve not developed a payload, so I have no idea whether this is actually exploitable; at this point, it’s merely a potential vulnerability. I’m not sure I’ll even spend the time to research and create a payload to prove it. If I do, I’ll be more careful with the disclosure.


Not Quite Callbacks

I’ve been working on application architectures based on Uncle Bob’s Ruby Midwest talk, following the hexagonal architectural pattern. I posted an article a couple months ago showing a way that works fairly well in Rails, and some accompanying Rails example code. But there was one thing I wasn’t quite happy with.

The problem is that we used callbacks (actually, a publish/subscribe mechanism) in a situation where they don’t seem to quite fit:

  def show
    interactor.on(:display) { |order| render order }
    interactor.on(:not_found) { |order_id| render status: 404 }
    interactor.get(params[:id])
  end

What we really want is to respond in different ways, depending on the result of the call to interactor.get(). There’s no good reason to define the responses before the call. It makes a lot more sense to define the responses after the call, because they’ll happen after the call. I’d much prefer that the code be written in the order that it will be run.

I discussed this problem with my friend and colleague, Amos King. We came up with a better solution, which puts things back in the right order:

  def show
    interactor.get(params[:id]) do |on|
      on.display { |order| render order }
      on.not_found { |order_id| render status: 404 }
    end
  end

He even wrote a small library to do this, which he called Riposte. I’m not sure what to call this pattern, but it seems to work pretty well in this situation. I suppose they’re still technically callbacks, since they’re passed in via the block given to interactor.get(). But thanks to the magic of Ruby blocks, we get to write them in the order they’ll run.

Riposte also gives you the option of using the response object directly, instead of passing a block:

  def show
    on = interactor.get(params[:id])
    on.display { |order| render order }
    on.not_found { |order_id| render status: 404 }
  end

This shows that it’s just returning an object, with the twist that the response object has methods that take blocks. The nested blocks variant is really the same thing, except that it’s yielding to the response object instead of returning it.
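
To illustrate the mechanics, here’s my own minimal sketch (not Riposte’s actual implementation) of a response object that supports both forms:

  # The response object knows its outcome. Each handler method runs its
  # block only when the method name matches the outcome; all other
  # handlers are no-ops.
  class Response
    def initialize(outcome, *args)
      @outcome = outcome
      @args = args
    end

    def method_missing(name, *_args, &block)
      block.call(*@args) if name == @outcome
      self
    end

    def respond_to_missing?(*_args)
      true
    end
  end

  # A stand-in interactor: yields the response if given a block, and
  # returns it otherwise -- which is why the two variants look the same.
  ORDERS = { 1 => "Order #1" }  # invented data store

  def get(id)
    order = ORDERS[id]
    response = order ? Response.new(:display, order) : Response.new(:not_found, id)
    block_given? ? yield(response) : response
  end

  get(1) do |on|
    on.display   { |order| puts "rendering #{order}" }
    on.not_found { |id| puts "404 for order #{id}" }
  end

Riposte’s real API may differ; this is just enough to show why the block form and the returned-object form behave identically.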

I’ve decided that this is the pattern I’d like to use for interactors and their callers within my Ruby hexagonal architecture.

Architectural Thoughts

I’ve started working on my own framework in Ruby over the past couple of days. It builds on my recent work to understand Uncle Bob’s Ruby Midwest 2011 talk, his article on Clean Architecture, and the resulting hexagonal architecture (AKA ports and adapters).

Somehow my research in that vein led me to Gary Bernhardt’s Boundaries talk. I’ve read a lot about the talk, and knew about the idea of “functional core / imperative shell”. And I’ve worked with a lot of similar ideas lately. But I believe this is the first time that I actually watched the whole video.

Even after having read a lot about similar ideas, it was pretty mind-expanding. Gary’s really good at presenting these kinds of ideas in a simple way.

OOP as usually taught includes encapsulation of data together with behavior, using mutable objects. Functional programming separates data from behavior, with mostly immutable data. Experience suggests that encapsulating data and behavior together is helpful, but also that immutability is useful. So it would be good to have both together. This is something I’ve been thinking about for a few years — how best do we get both?

Gary calls the combination “FauxO”. Logic and data are still combined, but there’s no mutation. Anywhere OOP would normally have mutation would just generate a new object. There’s no language restriction involved in enforcing immutability — just discipline.

But without mutability, it’s hard to do IO and maintain state. So Gary’s solution is to encapsulate as much as possible into an immutable (functional or FauxO) core, and around that, use an imperative (traditional OOP) shell. The functional core contains the bulk of the logic, and the imperative shell is a glue layer that handles the real world, including disk, network, and other I/O.
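
Here’s a tiny sketch of the split in Ruby (my own illustration, not Gary’s code):

  # Functional core: immutable; a would-be mutation returns a new object.
  class Counter
    attr_reader :count

    def initialize(count = 0)
      @count = count
      freeze  # helps enforce the immutability discipline
    end

    def increment
      Counter.new(count + 1)  # a new value, not a change to this one
    end
  end

  # Imperative shell: holds the current state and does the I/O.
  counter = Counter.new
  3.times { counter = counter.increment }
  puts counter.count  # => 3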

The result of this is that the shell has fewer paths, but more dependencies. The core contains no dependencies, but encapsulates the different logic paths. So we’re encapsulating dependencies on one side, and business logic on the other side. Or put another way, the way to figure out the separation is by doing as much as you can without mutation, and then encapsulating the mutation separately.

I love how this naturally breaks things up, so that the core is all testable with unit tests, and the imperative shell is tested with integration tests. And since the shell has few or no logic paths, you get the testing pyramid, with more unit tests and fewer integration tests. The whole thing ends up being quite beautiful. Tests end up being very fast without any extra effort — not even stubbing or mocking. This tells us that things have been decomposed very well — an elegant design.

Gary makes the case that immutable objects can be treated as values, and passed across boundaries. Even process boundaries. This is something I’ve noticed as I’ve been working on my own Uncle Bob style hexagonal framework, but nobody in that camp ever mentioned that — they prefer DTOs or something more like hashes. I’m completely against hashes, because of the “stringly-typed” problem. And I don’t see much advantage in a DTO if I’ve got an immutable object; I’d be basically copying the object to an almost identical object. And I’d be losing any laziness possible for derived values within the original immutable object.

It’s striking to me how Gary’s image of an imperative shell around a functional core, plus Net, Disk, and State outside of the shell, mirrors Uncle Bob’s concentric circles. Uncle Bob has entities in the middle, surrounded by use cases, surrounded by Web, DB, and UI.

Another advantage Gary demonstrates is that breaking things up this way makes concurrency easy. In his example, he uses the actor model — either with plain threads and queues, or with an actor library (or language feature).
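
The threads-and-queues flavor can be sketched like this (again, my own illustration):

  # Each "actor" is a thread with a mailbox; messages go in a Queue.
  mailbox = Queue.new

  actor = Thread.new do
    count = 0  # state confined to this single thread
    while (message = mailbox.pop) != :stop
      count += 1 if message == :increment
    end
    puts count
  end

  3.times { mailbox << :increment }
  mailbox << :stop
  actor.join  # prints 3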

After several years of thinking about the architectural issues seen in most large Rails apps, I’m starting to come to an understanding of how to combine all these ideas and come up with an architecture that will work better.


From Agile To Happiness

The Agile Manifesto was written in 2001, as a way to explain the common values amongst several “light-weight” software development methodologies that had come about. The term “Agile” was chosen as a shorthand for those commonalities.

Once “Agile” started to show success, we started to see many people use the term to market their products and services, whether or not they really believed in the values and principles of the Agile Manifesto. It’s gotten to the point where some of us don’t see much value in using the term “Agile” any more. Even some of those involved in creating the manifesto have suggested new terms. Dave Thomas suggests “Agility” and Andy Hunt has started working on something called GROWS. Personally, I’m considering going back to the term “Extreme Programming”, even though I’ve incorporated ideas from other Agile methodologies.

It recently occurred to me that Agile, when done “right”, is closely aligned with the happiness of the team members. This is really interesting, because it aligns the interests of the employees and the business — a win-win situation. My next thought was that maybe the next step after “Agile” will be a focus on happiness and motivation.

I’ve been thinking about personal motivation lately, in the context of team practices. According to Daniel Pink’s book Drive, people are motivated by autonomy, mastery, and purpose. I personally add a fourth that can sometimes trump the other three: identity. And of course, happiness can also be motivating — both in the pursuit of happiness and in just being happy. (I suspect that happiness is more of a parallel to motivation than a cause, though.)

There are a couple different ways that happiness can be achieved at work. The traditional way is for work to be a means to an end. In this view, the purpose of your job is to provide the money to live your life (outside of work) the way that you want to live it. There’s nothing wrong with this way of thinking. But for the lucky few, we can work on something that makes us happy in and of itself. That’s generally done by finding a job doing something that we enjoy.

But perhaps that’s thinking about things the wrong way. For one, plenty of people who have gone that route are still unhappy at work. I think a lot of that has to do more with the context surrounding the work than the work itself. Maybe you’ve got a lousy manager. Maybe you don’t like the people you work with. Maybe the tools you have to work with are bad. Maybe the processes add a lot of unnecessary tedium.

So maybe we need to find ways to be happier at work. Most of the Agile practices seem to make team members happy. For example, replacing a heavy process with a lighter-weight one always makes me happy, and I typically leave retrospectives in a good mood. So that’s a good starting point. But we should see if we can take the idea further. If we take employee happiness as a core value, where can we go? What kinds of practices would we want to add? Please share any ideas in the comments below.

When Should We Do TDD?

On a recent episode (78) of This Agile Life, my fellow hosts talked about when to do Test-Driven Development (TDD). They all said that you should always do TDD — at least for anything that will go into production; there’s an exception for experimenting or “spiking”.

I wasn’t on that episode, but later commented on the topic. (Episode 83 — which wasn’t really all that great.) My take was slightly different. I said that you should do TDD only when the benefits outweigh the costs. Unfortunately, we usually greatly underestimate the benefits. And the costs often seem high at first, because it takes some time to get good at writing automated tests. Not to mention that both the costs and the benefits are usually hard to measure.

What is the cost of writing automated tests? I’ve asked this question before and recorded the answers (such as we have them) in a previous blog entry. The studies found that TDD costs about 10-30% in short-term productivity, while reducing bugs by 30-90% and decreasing code complexity by about 30%.

But what about when the costs are higher than the typical 10 to 30 percent? One good example is when there’s no test framework for the situation you’re testing. That might mean a new language or framework you’re working with, or, more likely, a complex API that you have to mock out. The extra cost of automated testing could then outweigh the benefits — especially on a short project. I can imagine situations where writing the mocks would cost more than the project itself.

Another case where we might consider skipping testing is when we’re more concerned about time to market than quality. This is almost always a mistake. Your code will almost always last longer than you expect. (Remember Y2K?) And if the code lasts longer than you expect, that means you’ll have to deal with bugs that whole time. But we have to work with the information we have at the time we make our decisions. And sometimes that might tell us that time to market is more important than anything.

One final case I can imagine is when a true expert is coding something that they’re very familiar with. I could picture someone like Uncle Bob writing code (in a language that he’s familiar with) without tests just as effectively as I could write code with tests.

But these situations should not be common; they’re edge cases. In almost all real-world cases, TDD is the right thing to do. Don’t forget, TDD is also a design discipline — it helps you design a better API. So keep doing TDD. But as with any practice, don’t do it blindly; make sure you understand the costs, and whether the benefits outweigh them.

Good Enough

I ran into some former colleagues recently, from a company where I had worked to help transform the team to be more Agile. They’ve gone through some reorganization and management changes recently. One of the guys said that their team culture has helped them maintain quality in the face of those changes. This struck me as odd, since I had considered the work I had done there as somewhat of a disappointment. While I felt I had made a small dent, I didn’t feel like I’d made a true Agile transformation. Much of what I had taught didn’t seem to “stick”.

Later that night I thought about why my opinion of the team and his opinion were so different. There are a lot of reasons why an Agile “transformation” could fail. It could be due to lack of support from management. Or even worse, not getting the team members to buy into the ideas — usually due to fear of change. But while those had some effect in this case, they weren’t really the main issues. Now that I’ve had some time and some distance from that team, I’ve gained some clarity on what the real issues were.

I think the reason I view this transformation as a failure is the continued pace of change that true Agile requires. The team basically experienced “change fatigue”, where everything keeps changing and you feel like you can never catch your breath. But the more interesting thing is how our perceptions differed. He seemed to think the transformation was a success — they’re doing things better than they used to. I view it as more of a failure, because they essentially stopped improving — at least at the pace that I was hoping for.

I think this difference in mindset is pretty fundamental. My view of Agile involves continuous improvement — always inspecting and adapting our processes. I suppose that means that my preferred flavor of Agile is “Kaizen”. But other people don’t see improvement and change as a constant — they see each change as a means to an end. I’m starting to realize that neither viewpoint is objectively correct. Maybe they’re both fine viewpoints to have. Maybe I shouldn’t be so hard on myself and view that engagement as a failure, especially if my teammates view it as a success. Maybe perfection is the enemy of the good. And maybe I need to learn to be happy with “good enough”.


The Power of 1%

I frequently participate in a podcast called This Agile Life. Recently, a listener asked how much time Agile teams should spend on self-improvement. I said 10% to 25%, leaning towards 15% to 20% for most teams. That comes to at least one hour per day, and maybe even more than one day per week.

I’m including personal self improvement and team retrospectives in this self-improvement time. This can be as simple as configuring your IDE to make you more efficient, learning to use a tool better, or following up on action items from a retro.

That may seem like a lot of time “wasted”. But I think I can justify the cost of all that time.

The purpose of spending time on team or self-improvement — the whole point — is to increase our performance and our efficiency. How much improvement can we expect? Can we improve by 1% each week? That doesn’t sound too unreasonable. I think that’s an achievable goal for almost any team, at least on average.

Spending 20% of your time to gain 1% doesn’t seem like it’s worth it — until you consider the long term. With compound interest, you’ll be 67% more efficient by the end of a year.[1] At that point, you’ll be able to get things done in 59% of the time — saving 41% of the time required at the beginning of the year.[2] The following years will show even more progress, as compared to when you started. If 10x programmers exist, continuous improvement is apparently the way to get there.
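
The footnote arithmetic is easy to verify in Ruby:

  weeks = 52
  efficiency_gain = (1.01**weeks - 1).round(2)  # => 0.68, the ~67% gain cited above
  time_needed     = (0.99**weeks).round(2)      # => 0.59, i.e. 59% of the time
  puts efficiency_gain, time_needed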

So there’s a pretty good return on investment, even with a small amount of improvement each week. You’ll be significantly more efficient.

But efficiency isn’t really what you should aim for. You should aim for effectiveness. You can be efficient in creating the wrong thing. Part of improving should be ensuring that you’re not just building things right, but that you’re building the right things. Build what the customer really needs. Find ways to ask the right questions.

Most importantly, find ways to keep improving. It would be a waste of time not to.

[1]: (1.01 ^ 52) – 1 ≈ 0.67
[2]: (0.99 ^ 52) ≈ 0.59