From Agile To Happiness

The Agile Manifesto was written in 2001, as a way to explain the common values amongst several “light-weight” software development methodologies that had come about. The term “Agile” was chosen as a shorthand for those commonalities.

Once “Agile” started to show success, we started to see many people use the term to market their products and services, whether or not they really believed in the values and principles of the Agile Manifesto. It’s gotten to the point where some of us don’t see much value in using the term “Agile” any more. Even some of those involved in creating the manifesto have suggested new terms. Dave Thomas suggests “Agility” and Andy Hunt has started working on something called GROWS. Personally, I’m considering going back to the term “Extreme Programming”, even though I’ve incorporated ideas from other Agile methodologies.

It recently occurred to me that Agile, when done “right”, is closely aligned with the happiness of the team members. This is really interesting, because it aligns the interests of the employees and the business — a win-win situation. My next thought was that maybe the next step after “Agile” will be a focus on happiness and motivation.

I’ve been thinking about personal motivation lately, in the context of team practices. According to Daniel Pink’s book Drive, people are motivated by autonomy, mastery, and purpose. I personally add a fourth that can sometimes trump the other three: identity. And of course, happiness can also be motivating — both in the attempt to achieve happiness and in just being happy. (I suspect that happiness is more of a parallel to motivation than a cause though.)

There are a couple of different ways that happiness can be achieved at work. The traditional way is for work to be a means to an end. In this view, the purpose of your job is to provide the money to live your life (outside of work) the way that you want to live it. There’s nothing wrong with this way of thinking. But a lucky few of us can work on something that makes us happy in and of itself. That’s generally done by finding a job doing something that we enjoy.

But perhaps that’s thinking about things the wrong way. For one, plenty of people who have gone that route are still unhappy at work. I think a lot of that has to do more with the context surrounding the work than the work itself. Maybe you’ve got a lousy manager. Maybe you don’t like the people you work with. Maybe the tools you have to work with are bad. Maybe the processes add a lot of unnecessary tedium.

So maybe we need to find ways to be happier at work. Most of the Agile practices seem to make team members happy. For example, replacing a heavy process with a lighter-weight one always makes me happy. And I typically leave retrospectives in a good mood. So that’s a good starting point. But we should see if we can take the idea further. If we take employee happiness as a core value, where can we go? What kind of practices would we want to add? Please share any ideas in the comments below.

When Should We Do TDD?

On a recent episode (78) of This Agile Life, my fellow hosts talked about when to do Test-Driven Development (TDD). They all said that you should always do TDD — at least for anything that will go into production; there’s an exception for experimenting or “spiking”.

I wasn’t on that episode, but later commented on the topic. (Episode 83 — which wasn’t really all that great.) My take was slightly different. I said that you should do TDD only when the benefits outweigh the costs. Unfortunately, we usually greatly underestimate the benefits. And the costs often seem high at first, because it takes some time to get good at writing automated tests. Not to mention that both the costs and the benefits are usually hard to measure.

What is the cost of writing automated tests? I’ve asked this question before and recorded the answers (inasmuch as we have them) in a previous blog entry. The studies found that TDD costs about 10-30% in short-term productivity, while reducing bugs by 30-90% and decreasing code complexity by about 30%.

But what about when the costs are higher than the normal 10 to 30 percent? One good example of this is when there’s no test framework for the situation you’re testing. This might be a new language or framework that you’re working with. More likely, it’s a complex API that you have to mock out. That can increase the cost of automated testing enough to outweigh the benefits — especially on a short project. I can imagine situations where the cost of writing the mocks would eat up more than the project itself.

Another case where we might consider skipping testing is when we’re more concerned about time to market than quality. This is almost always a mistake. Your code will almost always last longer than you expect. (Remember Y2K?) And if the code lasts longer than you expect, that means you’ll have to deal with bugs that whole time. But we have to work with the information we have at the time we make our decisions. And sometimes that might tell us that time to market is more important than anything.

One final case I can imagine is when a true expert is coding something that they’re very familiar with. I could picture someone like Uncle Bob writing code (in a language that he’s familiar with) without tests just as effectively as I could write code with tests.

But these situations should not be common; they’re edge cases. In almost all real-world cases, TDD is the right thing to do. Don’t forget, TDD is also a design discipline — it helps you design a better API. So keep doing TDD. But as with any practice, don’t do it blindly without considering why you’re doing it. Make sure you understand the costs, and whether the benefits outweigh them.

Good Enough

I ran into some former colleagues recently, from a company where I had worked to help transform the team to be more Agile. They’ve gone through some reorganization and management changes recently. One of the guys said that their team culture has helped them maintain quality in the face of those changes. This struck me as odd, since I had considered the work I had done there as somewhat of a disappointment. While I felt I had made a small dent, I didn’t feel like I’d made a true Agile transformation. Much of what I had taught didn’t seem to “stick”.

Later that night I thought about why my opinion of the team and his opinion were so different. There are a lot of reasons why an Agile “transformation” could fail. It could be due to lack of support from management. Or even worse, not getting the team members to buy into the ideas — usually due to fear of change. But while those had some effect in this case, they weren’t really the main issues. Now that I’ve had some time and some distance from that team, I’ve gained some clarity on what the real issues were.

I think the reason I consider this transformation a failure is the continued pace of change that true Agile requires. Basically, the team experienced “change fatigue”, where everything keeps changing and you feel like you can never catch your breath. But the more interesting thing is how our perceptions differed. He seemed to think that the transformation was a success — they’re doing things better than they used to. I view it as more of a failure, because they basically stopped improving — at least at the pace that I was hoping for.

I think this difference in mindset is pretty fundamental. My view of Agile involves continuous improvement — always inspecting and adapting our processes. I suppose that means that my preferred flavor of Agile is “Kaizen”. But other people don’t see improvement and change as a constant — they see each change as a means to an end. I’m starting to realize that neither viewpoint is objectively correct. Maybe they’re both fine viewpoints to have. Maybe I shouldn’t be so hard on myself and view that engagement as a failure, especially if my teammates view it as a success. Maybe perfection is the enemy of the good. And maybe I need to learn to be happy with “good enough”.


The Power of 1%

I frequently participate in a podcast called This Agile Life. Recently, a listener asked how much time Agile teams should spend on self improvement. I said 10% to 25%, leaning towards 15% to 20% for most teams. That comes to at least one hour per day, and maybe even more than one day per week.

I’m including personal self improvement and team retrospectives in this self-improvement time. This can be as simple as configuring your IDE to make you more efficient, learning to use a tool better, or following up on action items from a retro.

That may seem like a lot of time “wasted”. But I think I can justify the cost of all that time.

The purpose of spending time on team or self-improvement — the whole point — is to increase our performance and our efficiency. How much improvement can we expect? Can we improve by 1% each week? That doesn’t sound too unreasonable. I think that’s an achievable goal for almost any team, at least on average.

Spending 20% of your time to gain 1% doesn’t seem like it’s worth it — until you consider the long term. With compound interest, you’ll be 67% more efficient by the end of a year.[1] At that point, you’ll be able to get things done in 59% of the time — saving 41% of the time required at the beginning of the year.[2] The following years will show even more progress, as compared to when you started. If 10x programmers exist, continuous improvement is apparently the way to get there.

So there’s a pretty good return on investment, even with a small amount of improvement each week. You’ll be significantly more efficient.

But efficiency isn’t really what you should aim for. You should aim for effectiveness. You can be efficient in creating the wrong thing. Part of improving should be ensuring that you’re not just building things right, but that you’re building the right things. Build what the customer really needs. Find ways to ask the right questions.

Most importantly, find ways to keep improving. It would be a waste of time not to.

[1]: (1.01 ^ 52) – 1
[2]: (0.99 ^ 52)
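The footnote arithmetic is easy to sanity-check with a few lines of Ruby (the 1% weekly figure is, of course, this article's assumption, not a measured number):

```ruby
weeks = 52

# Improving 1% per week compounds multiplicatively over a year.
improvement = (1.01 ** weeks) - 1   # ≈ 0.68: roughly two-thirds more efficient
time_ratio  = 0.99 ** weeks         # ≈ 0.59: work takes about 59% of the time

puts "efficiency multiplier after a year: #{(1 + improvement).round(3)}"
puts "time required: #{(time_ratio * 100).round}% of what it was"
```

Run it for any other weekly improvement rate to see how sensitive the result is — at 2% per week the multiplier after a year is roughly 2.8.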

Resolutions

January kept me pretty busy, so I’m a little late to this. But better late than never. And as an Agile practitioner, I don’t think personal retrospectives should be limited to one time of year.

Review of 2014

Last year I wrote a blog entry listing my goals for 2014. As far as New Year’s resolutions go, I was relatively successful — about 50% of my goals accomplished. Unfortunately, my Open Source contributions weren’t as strong as I had hoped; while I released some of my own work, I didn’t do much else. I did increase my blogging; getting in on a weekly blogging pact helped immensely. I also increased my participation on the This Agile Life podcast to a level that I’m happy with. But the accomplishment I’m most proud of was giving a presentation at RubyConf.

Plans for 2015

I’d like to keep things rolling from last year, but crank up a few things. My plans are quite ambitious, so I don’t expect to get everything done by any means. But I think by setting the bar high, I’ll end up with a lot I can be proud of.

Job Hunting

Late last year, I took the jump into independent consulting. So far, I’ve really enjoyed it, and I’m booked up through April. My wife graduates in May, so we’ve got the possibility of moving if that makes sense. So I’ll be looking for consulting projects in town, but I’ll also be looking at jobs in San Francisco and Chicago. The possibilities are exciting, and I’ll be taking my time to find something just right.

Conferences

I was incredibly nervous leading up to my RubyConf presentation. Part of that was just the common fear of public speaking. For me, that only kicks in at around 100 people, and this audience was around 250. I think another reason was that I chose a really ambitious topic, and I kept finding more that I wanted to talk about, but wasn’t prepared for. But I think I did a pretty good job presenting an advanced topic. And I was so pumped by the sense of accomplishment as soon as I finished. So I’m hoping to do it more. I’ve already submitted a couple proposals, and plan to submit several more.

Blogging

I believe that blogging is important for me to get my thoughts down — for myself and to share with others. I was really successful last year when I had a partner to keep me honest, via a pact. So I’ve started up another pact this year, which will hopefully ensure I’ll keep things going. I’ve got a really long backlog of topics, so as long as I keep at it, I’ll have plenty to write about.

I also want to move away from WordPress to a static system — probably Middleman. I’ve got 2 major problems with WordPress. First, I no longer trust its security, nor the security of any application written in PHP. Second, it generates HTML every time someone requests a page, instead of when the content is updated. I find that to be a waste of resources, and problematic from a security standpoint. The main problem with moving to a static blogging system is that I really want to allow comments, pingbacks, and tweetbacks. So I’ll have to find a way to integrate those.

Programming Language Design

Last year I started thinking about programming language design, and started implementing a language tentatively called Brilliant. I’ve done a lot more thinking on the topic, and have a lot of notes. But I haven’t implemented much more yet. This year, I’d like to get my thoughts more organized, and write a series of blog posts on various aspects of language design. The most interesting part seems to be the trade-offs involved in the ways that various language features interact. So I’d like to make some progress on the language implementation, but more importantly, I’d like to get a lot of my design ideas written down.

I’m also going to spend a lot of time learning a bunch more programming languages, so I have a better understanding of possible features, combinations of features, and their interactions. I’ve already started with Elixir, Clojure, and Racket. I’m hoping to also look at OCaml, Factor, and Haskell. I’ll probably also take a look at the 2 “Seven Languages in Seven Weeks” books.

Agile Book

I think people often have trouble getting started with Agile. I started on a book last year, and got down quite a lot of good ideas. But I realized that I’m going to have a hard time organizing all those ideas into something coherent. Still, I’d like to try to get something out there that lets people get started with Agile. My idea is to present a toolbox of practices to get started with and build on that foundation over time with additional practices. Sort of a playbook on how to get started over the first 6 to 12 months and be successful. I want to make some progress on the book, at least enough to decide whether it’s worth the effort to finish it and self-publish it.


TDD Is Alive And Well

I went to RailsConf this year, and the very first talk was a keynote by David Heinemeier Hansson (DHH), the creator of Ruby on Rails. The TL;DR of his talk was “TDD rarely has value”. He followed up with a blog post the next day, titled “TDD is dead. Long live testing.”, and 2 more posts. I think this line of thought is terribly misguided, and causing more harm than good. This article is my response.

First, I would like to address the good points of the talk. He said that programming is pseudoscience, and that people want to tell us that there’s a secret to being a better programmer. But what it really takes is working hard — reading a lot of code, writing a lot of code, and rewriting a lot of code. He’s right. And I also agree with him that you should forget about patterns for a while when learning to code. Beginners try to throw patterns at a problem instead of letting the patterns emerge where they’re supposed to.

I don’t completely agree that programming is a pseudoscience. In some ways it is, but I think it’s more of a craft. It’s a craft because there’s a lot of science involved, but also an art to doing it well. And like any craft, you’re always working to get better. So to respond to DHH’s stance that “software is more like poetry than physics”, I think it falls somewhere in between.

With regard to the software engineering practices we use, there really isn’t much science available, mostly because it’s a soft science. That is, it’s really hard to isolate a single variable when comparing code between projects. And nobody has the time or money to write the same code so many times that the differences would be statistically significant.

So we don’t have much science on TDD. But we do have some. Here’s a collection of several studies: StudiesOfTestDrivenDevelopment. And here’s one that explicitly looks at the difference between test-first and test-last: Does Test-Driven Development Really Improve Software Design Quality? What do these tell us? They tell us that TDD costs about 10-30% in short-term productivity, reduces bugs by 30-90%, and decreases code complexity by about 30%. As Code Complete tells us (in section 20.5, with studies to back it up), improving quality reduces development costs. So, like most Agile practices, this is a case where spending a bit more time in the short term leads to time savings in the long term.

The more important lesson in the talk was that you have to do what works best for you and your situation. If TDD doesn’t give better results, then either find out how to make it give better results, or stop using it. As we often say in the Agile world, Agile doesn’t mean that you can stop using your brain. While I think TDD is appropriate in most situations, there are cases where it’s not worth the additional up-front cost. If the most important thing for your project is time-to-market, then not testing might be the right decision for you.

To me, TDD provides a bunch of benefits. First and foremost, TDD is a design discipline. It ensures that I think about how my code will be used before I think about how to implement it. This is very powerful in ensuring that the code is well-written from the perspective of other code using it.

Tested code provides confidence to be able to make changes without breaking things. If we write tests after the code, we’re less likely to write them. Tests written after the code also tend to test the implementation instead of the desired functionality. What we really want is tests written as a specification. With tests as a specification, we can come back later and understand why code was written. Without tests, or with poor tests, we can’t understand why the code is there; if we want to rewrite it, we don’t have the confidence that we’re not missing something. Writing tests first also ensures that we only write the code that is needed to implement the required functionality.

I’m not sure why DHH hasn’t “gotten” TDD. I’m not sure if it’s because he’s a better coder than average, or if he just thinks in a different way than most of us. I think it’s partly because he doesn’t understand TDD, which he admitted might be the case. And I think he’s conflating TDD and unit testing.

DHH is influential in the developer community, especially those newer to Ruby and Rails. People listen to what he has to say. I was happy to see almost every other speaker made fun of DHH’s ideas, and most of the crowd knew better. But there will be a lot of others who will hear DHH, respect his opinions, and not give TDD the try that it deserves. And that’s sad, because it will lead to an overall reduction in code quality in the world.

Several other people have shared their thoughts on the matter as well.

Estimation Isn’t Agile

I don’t believe that estimation should be part of any Agile practice.

One of our managers recently mentioned that we hadn’t met the “contract” that we had “committed” to in our last iteration. This was complete nonsense, because A) we hadn’t made any such commitments, and B) we completed many more story points than the previous iterations (and without inflating story points).

(Image: estimates-as-deadlines)

But her language made me come to several realizations. First and foremost, estimates are contracts. Sure, they’re not supposed to be treated as commitments, but they almost always are. And what does the Agile Manifesto say about this? It says that we should value customer collaboration over contract negotiation, and responding to change over following a plan. So it’s pretty clear that treating estimates as commitments is completely counter to the Agile values.

Why does this matter? What benefits do the Agile values bring us? I think the biggest benefit they bring is changing the way that we work, so that we can better deliver value to our customers. Without Agile, we’d just keep working the way we’ve always done things. And that didn’t seem to be working out so well. If we follow the Agile values and principles, at least we’ll have a fighting chance of improving our ability to deliver value.

Ask yourself — have you ever seen a software development project that was on time and on budget? Where the estimates were spot-on? Of course not. For one, we’re terrible at estimating. For another, our plans change — either from external factors, or from what we learn as we go.

Improved Estimation

To me, Agile is also about facing reality — and embracing it. It realizes that we’re terrible at estimating. It realizes that plans change. Most Agile methodologies have some tricks to counteract Hofstadter’s law. Generally, we use relative story points instead of hours, and then use an empirical factor to convert points to hours.
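As a sketch of what that empirical conversion looks like in practice (the numbers here are invented for illustration, not from a real team):

```ruby
# Velocity is an empirical factor derived from recent iterations.
points_completed = [23, 19, 27]                              # last three iterations
velocity = points_completed.sum.to_f / points_completed.size # 23.0 points per iteration

backlog_points = 115
iterations_remaining = (backlog_points / velocity).ceil

puts "velocity: about #{velocity} points per iteration"
puts "a #{backlog_points}-point backlog is roughly #{iterations_remaining} iterations of work"
```

Note that the points themselves are only relative sizes; the calendar forecast comes entirely from the measured velocity, which is exactly why it drifts when the team or the work changes.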

When this works, it is better than any other estimation I’ve ever seen. But it doesn’t work very often. People have trouble with relative estimation. How do you set the basis for what a point means without relating it to actual hours? Affinity estimation could work, but then you have to remember what the basis was. We’ve got a large distributed team, and when we tried this, we couldn’t all remember what the basis was.

Since we couldn’t get affinity estimation to work, we tried changing to perfect hours (only powers of 2). But then people thought of them as time. When we took longer than the estimate on an individual story, managers and team members thought we were taking longer than we should have. So our estimates ended up causing problems.

What Can We Do Instead?

Managers want estimates so that they can have predictability. They want to know when new features will be available. Is there a better way to get what we need?

I believe there’s a better way — prioritization. If you work on the most important thing first, then the most important thing will get done first. We should always be working on the next most important thing.

What if there’s more than 1 thing that’s most important? Then you’ve failed. You’ve failed at logic if you can’t understand that only 1 thing can be most important. You’ve failed at prioritizing the customers’ needs. You’ve failed at project management.

Arguments

1. Why can’t you just tell us how long it will really take?

Because we don’t know. Because we can’t know. This is the first time we’ve ever implemented the functionality you’ve asked for. If we’d done it before, we’d just use that existing code. As Glenn Vanderburg pointed out in his excellent talk on Software Engineering, we’re not building software, we’re architecting it.

2. But we have to tell our customers what to expect.

Why? Is the product so bad that you can’t keep customers around without leading them on with future enhancements? And why do customers need exact dates? A general roadmap telling them what the priorities are for upcoming features should be sufficient.

3. But we have to have messaging about new features.

OK. Then send out that messaging once the feature has made it to Staging. Or even after it’s been rolled out to Production.

4. But we’ve promised these new features to the customers by this date.

Ah, so you’ve made promises to the customer that you don’t have control over. Have you ever heard of “under-promise and over-deliver”? That’s how you create happy customers. Yet you’ve done just the opposite, haven’t you? And then you want to blame someone else.

Risk

Estimates are risk. But the risk doesn’t come at the end, when the estimates are shown to be incorrect. The risk was in asking for the estimates in the first place, and placing trust in them. Don’t do it. Don’t promise things that you can’t be sure of.

Embrace this reality. Embrace this uncertainty. Always focus on what’s most important. That’s how you make customers happy.

Slow Down!

There’s a tweet that I saw recently, with some simple advice for novice programmers:

Slow down.

This is probably good advice for most programmers. Our team recently noticed that every time we try to rush things, we make mistakes. And the mistakes end up costing us more time than if we had just done things at our normal pace. Slowing down ensures that you do things right, and when you do things right, you end up with a higher-quality product.

Speed and Code Quality

There are 2 types of code quality: internal and external. External code quality can be measured by how many bugs have been reported by customers. Internal code quality is harder to measure, but it mainly deals with the ability to change the code. When your internal quality is low, you’ve got lots of technical debt, and it’s harder to make changes.

So when you try to write code quickly, code quality decreases, leading to a code base that takes more time to make changes to. Conversely, when you slow down, your code quality improves, and it becomes easier to make changes more quickly. So when writing code, slowing down in the short run leads to a speed-up in the long run.

Speed and Process Improvement

But writing code isn’t the only place where we try to speed up. On an Agile team, we’re always trying to improve the way we work — especially at the beginning stages of an Agile transformation. So we’re eager to make changes in our processes. But I’d urge you to slow down here as well.

My colleague Amos and I frequently argue over pair switching. It’s funny, because we agree on everything except for 1 small detail. We both think pair switching is very important, to ensure that team members see more of what’s going on, to bring more ideas to each story, to prevent knowledge silos, and to encourage team ownership. Where we disagree is how long an ideal pairing session should last. I think pairs should switch every 2 hours, and he thinks 1 hour is ideal. I’ve seen teams reach the 1 hour pairing sessions successfully. But usually not without some pain, and often not without failing on the first attempt.

There’s nothing inherently wrong with failing. But if you fail at something, you’re not likely to try again. After all, you should learn from your failures, right?

So if you want your team to do something, you probably don’t want them to fail at it. If they fail, they won’t want to try a second time. That’s just human nature, and learning from failure. While you might think that they failed because they weren’t ready for the change yet, they’ll most likely think that they failed because this particular change won’t work for their situation. And they probably won’t know what to change when trying again, so they won’t try again.

I’ve seen this over and over. Back when Linux was up-and-coming, when a consultant pushed a company into using Linux before they were ready for it, and it didn’t work out, that company was cautious about trying again. So instead of being on the leading edge of using Linux, or even the middle of the pack, they ended up more toward the trailing edge. Had they not been pushed, they would have gotten more benefit in the long run.

So my advice in process improvement is the same as in programming: slow down. Take small steps toward what you think is the ideal. Make a small change, see how it works out, and adjust. As long as you’re still moving in the right direction, I believe you’ll move faster by taking small steps than by trying to make big leaps.

Empathy

I facilitated our team retrospective this morning. I felt like we made a little forward progress, but not as much as I would have liked. But it really brought one thing to the forefront of my thoughts today — empathy gained through communication.

We have a pretty large team by Agile standards — we had 20 people in our retro: 16 developers, 3 QA folks, and 1 manager. Out of those, only about 5 or 6 speak up regularly. I recently sent out a survey to the team, trying to get feedback on how we could improve our retros. A couple of the questions tried to get a feel for why people aren’t speaking up more. Only about half the people responded, and the answers didn’t really answer my question as well as I had hoped.

So on Amos‘s suggestion, we did the Safety Check exercise. We got a good set of answers to why people don’t feel safe. About half of the answers were about the fear of looking stupid in front of other people. About half of those mentioned the manager — they’re worried they might get in trouble for what they say. We talked some about fear and how it’s more often than not misplaced. And that the worst consequences are usually not as bad as you might think. But then we got to the crux of the problem — there’s not enough trust amongst the team, and especially between the team members and the manager.

About half of our team is new (within the past 6 months) — including the manager. While the developers have made some good progress building trust amongst each other, we haven’t had as much time with the manager to build trust between him and the rest of the team. So the lack of trust isn’t at all surprising.

Honestly, I already knew we had trust issues, and wanted to address them, but needed a way to lead the team to that same realization. With this exercise driving out the issue, we were then able to have a conversation about trust. The conversation was pretty good. We got more voices to contribute than probably any other topic we’d previously covered. (I was disappointed that the manager kept quiet though. I later found that he was trying to mitigate people’s fears by keeping quiet, but I urged him to contribute more in the future.)

But one point really stood out in my mind — a point of view that I hadn’t previously thought much about. Lauren, one of our QA analysts, pointed out that most human communication is non-verbal. We give tons of cues via body language, facial expressions, eye contact, tone of voice, even posture. I don’t recall if Lauren said it explicitly, but she pointed out that these cues build empathy between the speakers. She encouraged us to use more voice chat and video chat, as opposed to IRC text chat, because it would create more empathy between the people communicating, which would lead to more trust.

I spent most of the rest of the day talking to people on the phone or via Google Hangouts voice. And every single time, I consciously noticed that I was gaining empathy for the person I was speaking with. I assume (and hope) that that’s working both ways. I suppose that it’s always been happening, but I never really noticed it.

I’ve heard a lot of talk about empathy among Agile practitioners lately. It’s been mentioned on the Ruby Rogues podcast, and Angela Harms has been preaching it for years. I already understood how important it is. But until today, I didn’t really feel it happening.

So if you’re looking to build trust with someone, spend some time talking with them. Preferably in person, but if that’s not possible, seriously consider video or voice modes of communication, instead of sending an email or an instant message.


Testing Rails Validators

It’s challenging to test Rails custom validators.

I recently had to write a validator to require that an entered date is before or after a specified date.

It didn’t seem like writing the validator would be too difficult – I’ve written custom validators before, and date comparisons aren’t all that tricky. But when it came time to write the tests, I ran into several issues. And since I always try to follow TDD / test-first, I was blocked before I even began.

The biggest issue was the ActiveModel::EachValidator#validate_each API. It’s definitely not a well-designed API. You write your validator as a subclass of EachValidator, overriding validate_each. The method takes the model object, the name of the attribute of the model being tested, and the value of that attribute. You can also get the options passed to the custom validator via the options method. To report a validation failure, you have to update the model’s errors hash.

The big flaw in the API is that instead of returning a result, you have to update the model. This needlessly couples the model and the validator. And it violates the Single Responsibility Principle — it has to determine validity of the field, and it has to update the errors hash of the model. This is not conducive to testing. Testing this method requires testing that the side-effect has taken place in the collaborator (model), which means it’s not really a unit test any more.

So to make it easier to unit test the validator, I broke the coupling by splitting it into 2 pieces, one for each responsibility. I moved the responsibility for determining validity to a separate method, which I called errors_for. It returns an array of the errors found. This simplified the validate_each method to simply take the result of errors_for and update the errors hash of the model:

def validate_each(record, attribute_name, attribute_value)
  record.errors[attribute_name].concat(errors_for(attribute_value, options))
end

This made it much easier to unit test the errors_for method. This method doesn’t even need to know about the model — only about the value of the attribute we’re trying to validate. We simply pass in the attribute’s value and the options.

So we could write the tests without even pulling in ActiveRecord or any models:

describe DateValidator do
  let(:validator) { DateValidator.new(attributes: :attribute_name) }
  let(:validation_options) { {} }
  let(:errors) { validator.errors_for(attribute_value, validation_options) }

  describe 'when attribute value is NOT a valid date' do
    let(:attribute_value) { 'not a valid date' }
    it { errors.must_include 'is not a valid date' }
  end

  describe 'when attribute value IS a valid date' do
    let(:attribute_value) { Date.parse('2013-12-11') }
    it { errors.must_be :empty? }
  end
end

And the errors_for method looked something like this:

def errors_for(attribute_value, options)
  unless attribute_value.is_a?(Date)
    return [options.fetch(:message, "is not a valid date")]
  end
  []
end
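For completeness, here’s roughly how the pieces assemble into a whole validator. The `:before`/`:after` checks are my own sketch of the date-comparison logic described above (not the exact production code), and the stub base class merely stands in for ActiveModel::EachValidator so the example runs outside Rails:

```ruby
require 'date'

# Stand-in for ActiveModel::EachValidator, so this sketch runs outside Rails.
# In the real app, the framework supplies `options` and calls validate_each.
class EachValidatorStub
  attr_reader :options

  def initialize(options)
    @options = options
  end
end

class DateValidator < EachValidatorStub
  def validate_each(record, attribute_name, attribute_value)
    record.errors[attribute_name].concat(errors_for(attribute_value, options))
  end

  # Pure function: returns an array of error messages for the given value.
  def errors_for(attribute_value, options)
    return [options.fetch(:message, 'is not a valid date')] unless attribute_value.is_a?(Date)

    errors = []
    errors << "must be before #{options[:before]}" if options[:before] && attribute_value >= options[:before]
    errors << "must be after #{options[:after]}" if options[:after] && attribute_value <= options[:after]
    errors
  end
end

validator = DateValidator.new(before: Date.new(2014, 1, 1))
p validator.errors_for('nope', validator.options)               # => ["is not a valid date"]
p validator.errors_for(Date.new(2014, 6, 1), validator.options) # => ["must be before 2014-01-01"]
p validator.errors_for(Date.new(2013, 6, 1), validator.options) # => []
```

Because errors_for never touches the model, each branch can be exercised with plain values and hashes, which is exactly what makes the unit tests above so simple.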

Integration testing can also be a bit of a challenge. I recommend following the example from this Stack Overflow answer. Basically, create a minimal model object that contains the field and the validation. Then test that the model behaves like you expect with different values and validations.