You Don’t Have to Be Right

Not too long ago, I was asked during a job interview, “How do you convince your teammates that you’re right?” I gave what was probably the most surprising answer I could have: “I don’t.”

It’s taken me years to realize that being right isn’t terribly important. Especially when there’s more than one right answer — which there usually is. It’s more important to work as a team. It’s more important to be respected and to have respect for others.

A few weeks after that interview, I was pair programming with the person who had asked me the question. We’re both senior developers, so we both have a lot of experience and opinions based on that experience. We were writing a shell script, and we both had an idea in mind about how to write the code. I let him proceed with his idea. It was a pretty good idea; probably better than mine. But we paired very effectively. I’d let him finish some code, and I’d find a way to make it better. Then he’d find a way to make my code better. What we ended up with was so much better than my original idea, and his original idea as well. I commented on this, and he agreed. We left that pairing session feeling the high of having accomplished something rare — two very experienced programmers coming up with better code than we could have imagined going in.

I like to think of writing code as a path towards a destination. The destination is good code that does what it’s supposed to, is readable (intention revealing), is concise, and is well-factored so as to be easy to change in the future. But there are many paths to that destination. Especially when pair programming, the code gets refined as ideas are shared, getting you closer and closer to that destination. Sometimes the different paths end up with the same code; sometimes they end up with different code. But if the code does its job well (according to those criteria), it doesn’t really matter what code you ended up with.

Thinking back to the original question, I think the answer is a little more nuanced. (I’m pretty sure I followed up with an explanation, but I don’t recall the details.) It truly doesn’t matter in a lot of cases. If there are different paths that lead to the same destination, and they’re all roughly equal, then my rule works. It can also work even if the first path you try (a “wrong” path) doesn’t work out, as long as it’s easy enough to try a different path. I find that the majority of day-to-day coding fits these conditions.

But there are some places where it’s costly to get things wrong the first time. In these cases, it’s worth spending some time thinking about and discussing the alternatives before getting started. Software architecture is the big one that comes to mind. In general, weighing the pros and cons of each option and applying previous experience works best.

So next time you think about trying to convince someone you’re right, fight the urge. Instead, let them show you what they’re thinking. They just might surprise you. And more importantly, you might surprise yourself.

My First Open Space

I recently attended an Open Space hosted at work. I’d never been to an Open Space, and didn’t know what to expect. We’d been told that this was a workshop to help Engineering Managers (my role), Product Owners, and Product Analysts find better ways of working together. But due to the way an Open Space works, it evolved into something completely different — and better.

We’d brought in Diana Larsen to facilitate the Open Space. Diana is a stalwart of the Agile community, focusing on how people and teams interact. She literally (co-)wrote the book on Agile retrospectives. Diana was also kind enough to be our guest at an Agile LINC meetup later in the evening.

The morning started off with all the participants sitting in chairs arranged in a circle. Diana rang a chime, a nice soothing tone that literally set the tone for the day. (Most people didn’t seem to like it, but I thought it had its purpose.) It also acted as a call to gather in the meeting space. She then walked around the inside of the circle as she explained what was going to happen.

The idea behind Open Spaces came with the realization that at a conference, the “hallway track” (ad hoc discussions in the hallways) was often more valuable than the scheduled talks. So they figured out a way to capture that experience. There are only a few rules:

  • Whoever shows up is the right group
  • Whatever happens is the only thing that could have
  • Be prepared to be surprised
  • Whenever it starts is the right time
  • When it’s over, it’s over
  • The Law of 2 Feet: If you’re not learning or contributing, go somewhere else

Once Diana set the scene and explained the rules, people came up and presented ideas for topics. If you proposed a topic, you chose a time and location on the topic board. At that time, you’d facilitate a discussion on that topic. The format was really conducive to discussions. There was no preparation, so discussions were from the heart — lots of people felt that they could contribute.

Being new to the company (only 2 months), I got a lot out of the workshop, beyond the content itself. I got to know several more people; I’d even say I made a few friends. Having open discussions, we found that many of the teams were having the same issues. This made it easy for me to talk to other people.

All-in-all, I found the Open Space to be extremely conducive to discussions aimed at identifying issues, brainstorming solutions, and planning action items. I felt like we made great strides in addressing some of the biggest problems our organization is currently facing.

Team Values

I held a retrospective with my new team last week. The team includes 2 senior developers, 2 junior developers, a product owner, and a product analyst. I’ve joined the team as an engineering manager, which I think of more as a team lead with an elevated title. Being new to this group, I wanted a way to understand their values. What motivates them? What common values do we share that we can leverage to move forward in the same direction?

I started out with a pretty simple question: “What do you value (in regards to what we’re building); what are you willing to fight for?” I asked them each to write down several values and then put them on the board before looking at everyone else’s answers. I also asked each person to rank their values in order of importance.

It turned out that my question was a little vague. Some people thought about the question in terms of the end result, and some in terms of the process of creating the software. In some ways that was a bit of a problem, because the different interpretations led people in different directions. But in other ways, not pushing them in any particular direction got more varied answers, exposing how they think about the project and the product.

My list (in order) was Effectiveness (doing the right thing), Quality (doing things right the first time), Happiness, Purpose, and Teamwork. In retrospect, I should have put Happiness first. If I’m not happy at work, I don’t really want to be there, and need to move on. (Sometimes I can trade some happiness at work for more happiness at home, but I’m definitely a person that needs to be happy at work.) The other values serve to improve my happiness, but the happiness is more important.

My own trouble ranking my values turned out to point at a problem with the ranking in general. Should the values be ranked by how important each one is as an outcome, or by how much we need to focus on it? I think the former is what I was looking for, but even I wasn’t clear on that when I began the exercise.

After everyone put their values up on the board, we read them off. Then I asked the team to choose several values that we share as a team. We pulled them off the individual members’ lists, and put them in the team list. Then I asked each person to come up and rank those values, then explain why they had ordered them that way, especially when they ordered them significantly different than the last person. I was hoping to come to some convergence of the rank over time, so we could document our values in rank order. That didn’t happen. But the discussion was illuminating to me and to everyone on the team.

I think the most interesting part about the lack of convergence was the difference between the developers and the product guys. The product guys definitely viewed the values more in terms of outcomes than the process. That makes sense — they’re not as intimately involved in the process of building the product.

We were able to converge on the top priority though: Will the user buy the product? This was a combination of a couple different values that we merged together. This included the end user experience as well as making sure the team would continue to have a reason for existing. The rest of the values we left unordered: Teamwork (cohesiveness), Data driven decisions, Team ownership, Simplicity, Effectiveness, Maintainability / Supportability, Quality, Automation, and Performance. I think that’s a pretty decent list.

As a couple teammates pointed out, those values are probably in part a reflection of this current point in time. If I asked the same question some other time, under different conditions and team dynamics, the answers would probably change a bit — and some of that variation would just come down to the randomness of the way we think about these things.

But I don’t think I’d do this activity a second time with the team. It was really about understanding our motivations — both our own, and those of our teammates. I found it effective in that way, and also in helping the team to think about our culture and how we can work to shape it to help us all push in the same direction.

There are a few caveats. When I asked for feedback on the exercise, one teammate pointed out that it wouldn’t work if people weren’t honest about their values, and they answered with what they thought management or their teammates wanted to hear. I don’t think that was an issue with this group, but it’s something to keep in mind.

The other major thing I’d do is to make it clear up front that I’m not looking for any action items from this activity; it’s more about understanding each other and ourselves. And next time, I’ll work to clarify how to rank the values.

I would recommend this activity for a new team, or when the makeup of the team is changing in some significant way. I wish there was a way to help a team converge on the ranking of their values, but I suppose I should be happy that agreeing on the set of important values went pretty quickly. And the diversity of ideas and opinions is probably a blessing that I should be embracing — the more ideas we have, the wider the variety of solutions we can imagine.

2015 Year in Review

It’s that time of year again — time for a retrospective on how I did on my goals for the year. I had 5 main goals for 2015:

  • Job Hunting
  • Conferences
  • Blogging
  • Programming Language Design
  • Writing an Agile Book

Job Hunting

I got pretty lucky on this one. My main contract with Mercy got extended several times. Amos and I must have been doing a good job of keeping the customer happy. We even made it through a couple rounds of layoffs. I’m wrapping up the gig at Mercy now. I’m working one day a week there, as the project winds down.

I also started a new gig this month at CenturyLink. I’m working on a cloud development team. Our current project involves selling WordPress as a service. The manager had been courting me for most of the year. I’m excited about my new role; I’ll be writing about it in a blog post soon.

Conferences

I set a goal in 2014 to give my first conference talk. I accomplished that, giving an ambitious talk at RubyConf. I enjoyed having done that, and vowed to do more conference speaking.

I gave 3 conference talks in 2015. I gave a workshop on HTTP at RailsConf. I talked about immutable infrastructure at Madison+ Ruby. At RubyConf, I gave a talk on a micro-ORM I wrote. I also gave a lightning talk about Agile estimation (#noestimates).

I was an alternate speaker at Windy City Rails, but did not give my talk on Alternatives to ActiveRecord. I also went to Strange Loop, mainly to see several friends and acquaintances speak.

Blogging

I wrote 24 blog articles this year. That’s about one every other week. What really kept me going was participating in a writing pact. When the pact was going, I had a 75% blogging rate. That’s pretty good.

I’m not so sure about the quality of my blog writing though. I know that practicing writing is supposed to make you better. I know I wrote some really good articles over the past year, but I think I also wrote some articles that weren’t very good. I think sometimes the deadline has caused more harm than good. I’m not really sure what to do about that; perhaps just pushing on is the right answer.

Programming Language Design

I’ve taken a lot of notes on the design of my programming language. Any time I learn something interesting about another language, or come up with another idea, I write it down.

But I haven’t worked on the implementation. (I last worked on the implementation in 2014.) I should be experimenting with some ideas, implementing them to see how they work out. I’ve even kicked around the idea of starting with a Forth variant, just to get something working quickly.

I haven’t written any articles on my ideas this year either. My notes are pretty extensive, and it would be good to write some articles to help get my thoughts straight.

Writing an Agile Book

I’ve got some things to say about Agile, and want to write a book to express those ideas. I’ve made a start — I’ve got the chapters outlined, and have started on a few of them. But I haven’t made as much progress as I’d like to. I shared what I’ve got with Amos, and he showed some interest in pairing with me on the writing. Hopefully we’ll work on it together in 2016 and publish it.


Other Accomplishments

There were a few other accomplishments that weren’t explicitly on my list, but that I’d like to call attention to.

I’ve continued participating on the This Agile Life podcast. I was in 12 of the 33 episodes that were recorded in 2015. I hope to participate in more in 2016. We’re considering scheduling a standard recording night each week, which might help us record more regularly.

I recently took over as maintainer of Virtus, a library to declare attributes for Ruby model classes. I haven’t done a lot yet, since I’ve been busy with travel, vacation, and holidays. But I hope to catch up with all the pending pull requests and issues in the next month or so.

The accomplishment I’m most proud of is mentoring for the Roy Clay Sr. Tech Impact program. This is a program begun as a result of the Ferguson protest movement. We’re helping teach kids (from 14 to 25) web design and development. My personal goal was to give these kids an opportunity that they would not have otherwise had. But it turned out that some of them have actually started a business building web sites for small companies. I’m so proud of the progress they’ve made in such a short time; it’s a challenging program.


I’m pretty happy with my accomplishments this year. I made at least some progress on each of the goals I set. I’ve been thinking about my goals for next year; I’ll write that as a separate blog article next week.

Face Your Fears

I’ve always been someone who faces my fears.

I have a moderate case of arachnophobia. I don’t run away when I see a spider, but it creeps me out when one is crawling on me. When I was in college, I decided to buy a tarantula to attempt to get over my irrational fear of spiders. I thought I’d be able to get more comfortable with the tarantula over time, eventually to the point of letting it crawl on my arm. It didn’t work. Although I did find that my fear of tarantulas is rational — I got a terrible case of hives just from touching the urticating hairs that fell off into its water sponge.

Last week, I was on vacation in Mexico. One of the excursions we took involved jumping in the water a lot. I’m not a strong swimmer — mostly because I have a hard time closing my nose; I hold my nose when I jump in. We zip-lined into the water a lot. At one point, there was a cliff to dive into the water from. It was about 15 feet above the water. It didn’t look so far down before jumping. But it felt like a really long way down the first time I jumped from it. It was pretty scary for me. So I did it a second time. There wasn’t any peer pressure to jump a second time. I literally jumped a second time specifically because I was scared.

Fear is a weird thing. Fear is there to protect us. But it’s there to protect us from the dangers of the African savannah. Most of the things our fears protect us from don’t exist in our everyday modern lives. So maybe we should work to gain a better understanding of how our fears work, to figure out when to pay attention to them and when to ignore them.

Fortunately, our brain has a good mechanism to help us do this. Our brains basically have 2 main processing systems. The first one is for quick reactions. This one involves things that are nearly reflexes. Fear is in this system. The second system is our analytical reasoning system. This system takes longer to process, but is able to take on more information.

Whenever the situation allows us time for both systems to work, we need to listen to them both. We need to listen to our fears, because they’re there for a reason. But that reason might not pertain to our situation. So we need to realize that, and let the slow analytical system determine if we should ignore our fears.

If we don’t allow both systems to work, we’re not taking full advantage of our brains; we’re not taking full advantage of the situations that life is presenting to us.


I’ve been on vacation the past week, in Cozumel, Mexico. One day, we went on an excursion called Xenotes. A cenote (say-NO-tay) is a sinkhole filled with fresh water. (The “X” is to give it a more Mayan-sounding trademarkable name.) We had a lot of fun swimming, kayaking, and zip-lining. A bilingual tour guide led our group, which consisted of people from across the US and South America, of various ages and physical abilities.

Our tour guide was a lot of fun. He made jokes, told us about the cenotes, and led us in the activities. But he also encouraged us. When someone was scared to do something, he was supportive. He told us that it was okay, and we could do it. But we also felt like it was OK to fail, if we really couldn’t. It really felt like his encouragement was literally creating courage.

What was really neat was that despite the language barrier, everyone else was also supportive and encouraging. Everyone cheered with encouragement before someone would attempt something difficult. And we’d cheer especially loud once someone accomplished something that was difficult for their abilities.

It was an awesome feeling to feel so supported. It made me feel like I was in a safe place, where I could try new things that were a little past my comfort zone. I was able to do the zip-line upside-down. I jumped off a 15-foot cliff into the water. I even jumped off the cliff a second time, even though I was a little scared.

Back at the resort, we played some volleyball in the pool. It was a similar situation, with players of varying ages and abilities. Again, we tried to help the weaker players feel comfortable so they could improve without feeling judged or self-conscious. It helped everyone have a good time. It made everything more fun to be in such a supportive environment, whether I was in a position as one of the more skilled (volleyball), or one of the less skilled (jumping or zip-lining into the water). We were all able to accomplish more, with less effort.

These experiences provide a good lesson that can be applied in a lot of places. Such a supportive environment would help any relationship, and any team. I’ve been on a couple really good software development teams, but I don’t think any of them have been as supportive as these two groups of strangers.

I’ve decided that this should be one of my goals as a team leader. I want to create an encouraging environment for the whole team. I want to make sure that everyone is comfortable enough that they feel like they can try things that are difficult, even if they might fail (as long as nobody gets hurt).

If a group of strangers can do this, so can your team. So can you and your significant others. We need to work every day to make sure that we’re supporting each other. It’s the best way to get everyone to achieve more.

Show and Tell

I’m wrapping up my current consulting gig at Mercy in December, and starting a new gig at CenturyLink. I’m a software developer, but that’s only a part of what I do. What I really do is join a team and help them improve the way they work — both their processes and their technical skills.

I think this is a key differentiator for me as a consultant. Most consultants (and Agile coaches) come in and tell people what to do. I don’t like to just tell people what to do. I’d much prefer to work side-by-side with them, getting a better understanding of what their challenges are. Once I have a better understanding of the challenges, I’m able to better brainstorm some ideas to try. Then we can experiment to see what will work and what won’t.

Instead of telling people what to do, I show them how. Most people learn better from seeing than from hearing. They also learn better if you explain how and why, not just what. So showing them how to do something is more effective than telling them. By showing and doing, you can also set a good example. This is especially important when collaboration is a large part of what needs to be improved.

I’ve found that this style of consulting is more highly respected by everyone. I build trust with developers by working closely with them. Managers like to keep me around once they see how effective these methods can be, so the gigs I take on tend to last relatively long.

The biggest problem I have is explaining how this works. I don’t really know what to put on my résumé. Sometimes I call myself an Agile practitioner, and sometimes an Agile player/coach. But those aren’t terribly satisfying descriptions. I’m considering actually putting “I help teams improve the way they work — both their processes and their technical skills” on the résumé. But that seems awkward, and misses the show versus tell part. I’d be open to any suggestions.


Happiness Retrospective

I facilitated a retrospective today; it was one of the best retros I’ve ever been involved with. I figured out what activities I wanted to do earlier in the morning. They were really quite simple. I wanted to focus on happiness.

How happy are you at work?

I started with two questions that I’ve used with teams before, to some success (although not so successful for one particular team). The first question I asked was “How happy are you at work?” I had them put a rating from 0 to 10, with 0 meaning they should have quit last week, and 10 meaning they couldn’t imagine being happier at work.

The answers were mostly 7s and 8s, with a 5 and a 9. The average came to 7.5, which is pretty good. The 5 concerns me a bit, especially that it’s 2 points lower than anyone else’s answer.

How effective do you think the team is?

The next question I asked was “How effective do you think the team is?”. Again, from 0 to 10, with 0 meaning the team can’t accomplish anything, and 10 meaning you couldn’t imagine a more effective team.

When I ask both of these questions, the scores are always highly correlated. If your team isn’t doing good work, it’ll make you unhappy. And if you are unhappy, you’re less likely to do your best work. This team was no different; the 5 and 9 became a 6 and a 10, and most of the 7s became 8s, for an average of just under 8.

What makes you happy at work?

The next question I asked was “What makes you happy at work?”. The answers were mostly about the teamwork and teammates. This went quicker than I expected. At this point, I was worried the retro would only last a little more than 30 minutes.

What would make you happier at work?

The final question I asked was “What would make you happier at work?”. This was the real pay-off. We spent about half an hour just talking about the things that would make us happier, and what we could do to improve our happiness. We came up with 10 potential action items. I usually limit teams to trying 3 or 4 action items, but most of the items are quite small, so we’re going to try 6 of them. One is just observing another team’s standup meetings, to see how they’re using their time effectively.

Everyone went away feeling that this was a really good retro. It felt good to focus on happiness. Happiness is something I’ve been talking a lot about on the This Agile Life podcast, and it felt good to take some action on it. I’ve done “positive-only” retros before, but this one felt even better than that, by specifically targeting happiness and how we can achieve it.

Impromptu Retrospective

I’m surprised that I haven’t gotten this story down in print before. It’s something I’ve mentioned many times — including a few times on the podcast. It’s a great story about the power of retrospectives, and it’s a great story about the power of a blameless post-mortem.

I don’t recall all the specifics at this point. It was about 5 years ago. I’d just noticed that Arun had made some sort of mistake. That’s fine, people make mistakes. The thing that was different about his mistake was that I had made the same mistake about a week prior. And Amos had made the same mistake about a week before that.

Noticing a pattern of mistakes, Amos and I called an impromptu retrospective. We gathered all the developers into a conference room. We explained the problem that we were running into. At first, Arun was defensive. That’s understandable; he thought we were there to come down on him, to lay blame. But we made it clear that we weren’t focusing on him. We admitted that we had also made the same mistake recently. We weren’t there to lay blame; we were there to figure out how our team could stop making the mistake. It took Arun a few minutes to get over the defensiveness.

With the defensiveness out of the way, we could focus on the issue at hand. We were able to figure out the root cause of us all making the mistake. (I don’t know if we played the “5 whys” game, but I’m sure we effectively did something similar.) And with that, we were able to change our process, so that nobody else would make the same mistake again.

There are 2 important points to this story. First, you don’t have to wait until a scheduled retrospective to hold a retrospective. This one was impromptu, and it’s the best one we ever had. We saw a problem, addressed it, and found a solution in less than an hour. Had we waited until the end of the week, we would have forgotten some of the details, and wouldn’t have been as effective at solving the problem. Second, when addressing problems, take your ego out of the equation. If you’re in a position of authority, take the blame — but never place blame. Focus on what’s important — solving the problem.

And don’t forget the Retrospective Prime Directive:

Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.


The Problem With Estimates

I’m a big proponent of Agile (mostly XP; I’m mostly anti-Scrum) and I’ve contributed some to the #noestimates “movement”.

I don’t really mean that nobody should ever estimate anything. I mean that I’ve never seen useful (fine-grained) estimates anywhere. Here are some of the problems with estimates that I’ve seen frequently:

  1. We’re not good at estimating how long things will take. We’re usually optimistic about how quickly we can get things done, and almost always miss thinking about things that will take more time. I’ve never seen a case where a project is completed more quickly than estimated. I’ve only rarely seen fine-grained (story-level) tasks completed more quickly than estimated.
  2. Management asks for estimates and then treats them as deadlines. The team then learns to inflate their estimates. Then management learns to reduce the estimates they’re given. Given fudge factors in each direction, the estimate no longer has much reliability. Even if you’re using story points, the point inflation/deflation leads to less consistency and therefore reduced reliability.
  3. Estimates that are given are negotiated down, or simply reduced. This leads to the question why you’d ask for an estimate and not take the answer provided. If you’re not going to listen to the answer, why are you asking the question? This is probably the craziest one on the list — given my first point, increasing an estimate would make sense. Reducing the estimates is just magical wishful thinking.
  4. Plans change and work is added, but the deadline (presumably based on the estimates) is not changed to correspond with the extra work involved. So again, you’re not actually even using the estimates that were given.
  5. Management dictates deadlines arbitrarily, without speaking to the people who will be doing the work. Spending time estimating how long each task will take when the deadline is already set is completely pointless.
  6. Almost every deadline is complete bullshit, based on nothing. Often the excuse is that marketing needs to know when something will come out, so that they can let people know about it. Why they need to know the exact release date way in advance, I’ve never been able to figure out. Many people intuitively know that the deadlines are bullshit, and will likely be allowed to slip. The only exception to bullshit deadlines I’ve come across are regulatory deadlines. (I know there are a few other exceptions out there.)
  7. Estimation at a fine-grained level isn’t necessary. Many Agile teams estimate using story points, and determine a conversion from story points to time based on previous empirical data. This is fine, except that the time spent estimating the stories is wasted — counting the number of stories almost always gives the same predictive power. And since teams tend to get better at breaking up stories into consistent sizes over time, story counts only become more predictive.
  8. The ultimate purpose of an estimate is to evaluate whether the proposed work will be profitable, and therefore worth doing. Or to compare the ROI (return on investment) between alternative projects. But to know that, you’ll have to know what value that work will provide. I don’t believe I’ve ever seen that done — at least not at a fine-grained level. Usually by the time you’re asked to estimate, the project has already gotten approval to proceed.

I’ll note that most of these pit management against the team, instead of working together toward a common cause. Most of the practices also lead to seriously demoralizing the team. And most of the time, the estimates aren’t really even taken into account very much.

My advice is to first understand the value of a project before you consider estimating the costs. The estimate at this point will be very rough, so make sure there’s a very wide margin between the expected value and the rough estimate of the cost. If you’re pretty certain of the expected value, I’d want to make sure the project could still be profitable even if it took 3 or 4 times as long to complete as the rough estimate. And if there’s uncertainty in the expected value, leave an even wider margin.

Another way to mitigate the risk of throwing money at something that’s not going to have positive ROI is to reduce the feedback loop. Order the work so that the tasks are ranked in order of value to the customer. (Realistically, you’ll have dependencies of tasks to worry about, and should consider effort involved too.) So work on the most valuable feature first — get that out into production as soon as possible. Once that’s done, you can assess if your ROI is positive or not. Keep iterating in this fashion, working on the features that will provide the most value first. Keep assessing your ROI, and stop when the ROI is no longer worth it, compared to other projects the team could be working on.
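The ordering idea above can be sketched in a few lines of Ruby. This is a minimal illustration with an invented backlog — the feature names, values, costs, and ROI threshold are all hypothetical, and a real backlog would also have to account for dependencies between tasks:

```ruby
# Hypothetical backlog: each feature has an expected value and a rough cost.
Feature = Struct.new(:name, :value, :rough_cost)

backlog = [
  Feature.new("checkout",  100_000, 20_000),
  Feature.new("reporting",  30_000, 25_000),
  Feature.new("search",     60_000, 15_000),
  Feature.new("theming",     5_000, 10_000),
]

# Opportunity cost: the ROI bar set by other projects the team could do
# instead (a made-up number for this sketch).
roi_threshold = 1.0

# Work on the highest value-per-cost features first, and stop once the
# next feature's expected ROI drops below the threshold.
ordered = backlog.sort_by { |f| -(f.value.to_f / f.rough_cost) }
worth_doing = ordered.take_while { |f| f.value.to_f / f.rough_cost > roi_threshold }

worth_doing.each { |f| puts "#{f.name}: ROI #{(f.value.to_f / f.rough_cost).round(2)}" }
```

With these numbers, "theming" (ROI 0.5) falls below the bar and gets dropped — the point being that the decision to stop comes from reassessing ROI after each delivery, not from an up-front fine-grained estimate.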

At a fine-grained level, if you’re using story points, I’d ask you to do the math to see if just counting the stories predicts how much will be done over time as well as the story points do. If so, you can save the time the team spends estimating stories. I’d still recommend spending time talking about stories so that everyone has a shared understanding of what needs to be done, and breaking stories up into smaller, more manageable sizes — with one acceptance criterion per story. Also take a look at whether empirical average cycle time (how long a single story takes to move from start to finish) provides the same predictive power as estimates do. (That is, is it bandwidth or latency that really provides the predictive power you’re looking for?)
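The “do the math” suggestion can be made concrete with a small sketch. All the numbers here are invented — four iterations of history and a backlog already estimated in points — but the comparison is the one described above: forecast the remaining work by points velocity and by plain story count, and see whether the answers differ:

```ruby
# Invented iteration history: points and stories completed per iteration.
history = [
  { points: 21, stories: 8 },
  { points: 18, stories: 7 },
  { points: 23, stories: 9 },
  { points: 19, stories: 8 },
]

velocity_points  = history.sum { |h| h[:points] }.to_f / history.size   # 20.25
velocity_stories = history.sum { |h| h[:stories] }.to_f / history.size  # 8.0

# Invented remaining backlog, already estimated in story points.
backlog_points = [3, 2, 2, 5, 1, 3, 2, 3, 2, 1, 3, 2, 2, 3, 2, 5]

forecast_by_points = backlog_points.sum / velocity_points     # iterations left
forecast_by_count  = backlog_points.size / velocity_stories   # iterations left

puts format("points: %.1f iterations; count: %.1f iterations",
            forecast_by_points, forecast_by_count)
# prints: points: 2.0 iterations; count: 2.0 iterations

# The latency question: average cycle time per story (invented, in days).
cycle_times = [2, 3, 1, 4, 2, 3, 2, 3]
avg_cycle_days = cycle_times.sum.to_f / cycle_times.size      # 2.5
```

When the two forecasts track each other this closely, the time spent assigning points isn’t buying any extra predictive power — which is exactly the case for dropping the estimation step.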

And don’t forget Hofstadter’s Law: It always takes longer than you expect, even when you take into account Hofstadter’s Law.