
Accepting Acceptance Testing

I recently reflected on my experiences with letting TDD steer the development path. There, TDD was employed for unit testing, primarily testing validations of fields, associations between models, and the presence of database fields.
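Those unit tests looked something like this minimal sketch; the Article model, its fields, and the use of the shoulda-matchers gem are assumptions for illustration:

```ruby
# spec/models/article_spec.rb
require "rails_helper"

RSpec.describe Article, type: :model do
  # Field validations
  it { is_expected.to validate_presence_of(:title) }

  # Associations between models
  it { is_expected.to belong_to(:author) }
  it { is_expected.to have_many(:comments) }

  # Presence of database fields
  it { is_expected.to have_db_column(:published_at) }
end
```

Acceptance testing is the next layer outward in the outside-in software development methodology.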

Acceptance testing is meant to ensure that the business needs of the stakeholder are met. Acceptance tests are generally readable, so you can see at a glance whether a feature satisfies the requirements of each scenario. For example, if your application’s acceptance criteria include “images are permanently deleted after a user-specified duration”, thorough tests let you quickly determine whether that criterion passes or fails.
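A criterion like that might translate into a feature spec along these lines. This is a rough sketch: the paths, labels, and expired-image message are hypothetical, and it assumes the app checks expiration when the image is requested and that ActiveSupport’s time helpers are available:

```ruby
# spec/features/image_expiration_spec.rb
require "rails_helper"

RSpec.feature "Image expiration" do
  include ActiveSupport::Testing::TimeHelpers

  scenario "images are permanently deleted after a user-specified duration" do
    visit new_image_path
    attach_file "Image", Rails.root.join("spec/fixtures/sunset.png").to_s
    select "24 hours", from: "Delete after"
    click_button "Upload"
    image_url = current_path

    # Jump past the chosen duration and confirm the image is gone
    travel 25.hours do
      visit image_url
      expect(page).to have_content "This image is no longer available"
    end
  end
end
```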

The tests mimic the behavior that a user will exhibit on your site, so it’s up to you to ensure that the various use cases are covered. Common scenarios include the “happy path”, where the user interacts as expected and everything goes smoothly, as well as the inverse, like a user submitting a form with no fields filled out.
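As a sketch, here is how those two cases might read for a hypothetical sign-up form (the paths, labels, and messages are invented for illustration):

```ruby
# spec/features/user_signs_up_spec.rb
require "rails_helper"

RSpec.feature "User signs up" do
  # The happy path: everything filled out correctly
  scenario "with valid details" do
    visit sign_up_path
    fill_in "Email", with: "dave@example.com"
    fill_in "Password", with: "secret123"
    click_button "Sign up"

    expect(page).to have_content "Welcome!"
  end

  # The inverse: submitting the form with nothing filled out
  scenario "with blank fields" do
    visit sign_up_path
    click_button "Sign up"

    expect(page).to have_content "Email can't be blank"
  end
end
```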

Good acceptance tests provide solid coverage and give me confidence that my application is functioning as expected. While they do a good job of simulating user behavior, they are not infallible. I still think that manually walking through all of the scenarios can be beneficial, if there is time to do so.

During a recent test, I was checking for the presence of an error message on blank fields. Trying it myself in the browser, I ran into an interesting issue: a pop-up message appeared instead of the expected error text, yet my tests were still passing. I later determined that the pop-up came from the HTML5 `required` input attribute. My RSpec test’s driver doesn’t emulate that pop-up, so it saw the plain-text message as a fallback. It was a good reminder that cross-browser testing is still important to ensure the UX is as intended.
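Roughly reconstructed, it looked like this (the form, paths, and messages here are hypothetical). A field rendered with `required: true` gets the HTML5 pop-up in a real browser, but Capybara’s default rack_test driver doesn’t emulate browser-native validation, so the form submits anyway and the test sees the server-rendered error text:

```ruby
# app/views/posts/_form.html.erb renders something like:
#   <%= form.text_field :title, required: true %>

# spec/features/create_post_spec.rb
require "rails_helper"

RSpec.feature "Creating a post" do
  # Under rack_test the browser pop-up never fires: the blank form
  # submits, the model validation fails, and the error text renders.
  scenario "with a blank title" do
    visit new_post_path
    click_button "Create Post"

    expect(page).to have_content "Title can't be blank"
  end
end
```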

Acceptance testing feels like a natural progression from test-driven development, since it’s really just an extension of the same idea: let your tests (or errors) determine the next action you take. As I practice it more, I can typically anticipate what the failure for a given test will be before running it. It serves as useful guidance, and I’d liken it to using a GPS for directions on a route that you’re already familiar with; it’s there to make sure you stay on course and confirm that you’re taking the right steps.

Starting out with good user stories and acceptance criteria, I find writing the tests to be like a planning stage, which makes the actual implementation that much easier. Making a test pass is very satisfying, especially considering that without tests, you’d have to manually re-verify every scenario after every change (assuming you want to be thorough).

I’m reminded of the “render tax” from my time working in video production: at some point, the computer has to render your video project in order for it to play back smoothly. That can happen before editing begins, throughout editing, or at the end during export.

Relating this concept to software development, you pay the “testing tax” at the beginning of and throughout your development cycle. Since the tests are written upfront and as you go, any additional work can be verified immediately. Implement a new feature for a client? Run your test suite, and you’ll know immediately what, if anything, broke, and where. If you save your testing until the end, that process starts over every time there is a change.

To round out the taxation metaphor, I’ll close by saying that when it comes to testing, I’d rather pay as I go than max out my coding credit cards. I can’t afford to accumulate any technical debt.
