
When you’re trying to get into testing, often you may hear the advice “don’t test the framework.” That is, when you’re using well-established third-party code (whether it’s called a framework, library, or package), you don’t need to write tests to confirm that it works the way it’s documented to work, especially since it should already have tests for those features. This made sense to me at first, but over time I started to see situations where this universal statement doesn’t quite apply. Here are a few situations where a kind of “testing the framework” may be the right thing to do.
One concern you might have about “don’t test the framework” is: “I know the framework works, but will my whole app (built on top of the framework) work for users?” That’s a great point, and it’s the purpose of end-to-end testing.
End-to-end tests run against as much of your application as possible: all the code you wrote and all the framework code you use. Ideally, they interact with your user interface, clicking links in a web app or tapping buttons in a mobile app. And they really hit your data store (a standalone testing data store, that is). An end-to-end test helps avoid the situation where your unit tests all pass, but the units don’t fit together quite right.
End-to-end testing is very common, and in a sense it’s a kind of “testing the framework.” You’re testing that your overall application works as expected, including all of its dependencies on the framework.
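To make the idea concrete, here is a minimal sketch of an end-to-end test: it drives the app from the outside, over real HTTP, the way a user agent would. The tiny TCPServer "app" below is a hypothetical stand-in for a framework-backed web app; in a real project you would point a browser-driving tool such as Capybara at your actual application instead.

```ruby
# End-to-end testing sketch: exercise the whole stack from the outside,
# rather than calling individual units directly.
require "socket"
require "net/http"

server = TCPServer.new("127.0.0.1", 0) # ephemeral port
port = server.addr[1]

# A stand-in "application": one request, one response.
app_thread = Thread.new do
  client = server.accept
  # Drain the request line and headers so the client's write completes cleanly.
  loop do
    line = client.gets
    break if line.nil? || line == "\r\n"
  end
  body = "Welcome!"
  client.write("HTTP/1.1 200 OK\r\nContent-Length: #{body.bytesize}\r\n\r\n#{body}")
  client.close
end

# The end-to-end assertion: everything between the HTTP request and the
# response had to work together for this to pass.
response = Net::HTTP.get_response(URI("http://127.0.0.1:#{port}/"))
app_thread.join
raise "end-to-end check failed" unless response.body == "Welcome!"
```

The point isn't the toy server; it's that the assertion only passes if every layer between the request and the response cooperates, framework included.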
Exploratory and characterization testing are two closely related approaches to testing existing code. Both are focused on understanding the current behavior of a system, bugs and all. The terms are often used interchangeably: the slight difference is that characterization tests tend to be more comprehensive and permanent, whereas exploratory tests tend to be less comprehensive and more disposable.
As you’re getting to know a new framework, you may not want to jump right into coding your app. You may want to try out the framework’s features to see if you understand them correctly and if they really work the way they’re documented. One way to do this is by building a throwaway sandbox app, but an alternative would be to write automated tests that exercise features of the framework. If you keep these tests until you’re satisfied you understand the framework and then delete them, they functioned as exploratory tests. If, instead, you flesh them out until they thoroughly cover all the features of the framework you use, and then you rerun them against each new version of the framework that’s released, they’re functioning more as characterization tests.
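A sketch of what such tests can look like, using plain Ruby assertions (a real suite would use Minitest or RSpec) and Ruby's standard JSON library standing in for the third-party framework under study:

```ruby
# Exploratory/characterization checks: pin down what the framework
# actually does, so you can confirm your understanding (and rerun the
# checks against future framework versions).
require "json"

# Question: does a hash round-trip through JSON unchanged?
# Characterized answer: no -- symbol keys come back as strings.
round_tripped = JSON.parse(JSON.generate({ count: 1 }))
raise unless round_tripped == { "count" => 1 }

# Question: what happens with values JSON can't represent?
# Characterized answer: JSON.generate raises, since JSON has no NaN literal.
begin
  JSON.generate([Float::NAN])
  raise "expected a GeneratorError"
rescue JSON::GeneratorError
  # expected -- the test now documents this behavior
end
```

Delete these once you're confident in your understanding and they were exploratory tests; keep and expand them, rerunning against each framework release, and they become characterization tests.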
In practice, I don’t see a lot of exploratory or characterization testing. Big Nerd Ranch tends to choose frameworks that are well-documented, well-established and well-supported. Even when we’re using newer libraries, we tend to write a sandbox app rather than automated tests. Still, if you find you need them, exploratory and characterization testing are completely valid reasons to “test the framework.”
A common argument I hear against “testing the framework” is simply that “the framework is already tested!” However, taken literally, that view may be a bit optimistic. All nontrivial software has bugs, and frameworks tend to be fairly complex. For frameworks that are open-source, if you find a bug, there is an implicit invitation to contribute to help.
There are a number of ways you could add value by contributing tests to the framework: for example, by submitting a test that reproduces a bug you’ve found.
Just like any other code, frameworks do need to be tested, and open-source frameworks thrive on contributions from the community.
This last case is controversial, but worth mentioning. Consider the example of a model that extends a framework class, such as Rails’ ActiveRecord::Base or Core Data’s NSManagedObject. Some aspects of models are certainly worth testing: business logic, behavior based on different states, and custom validations, for example. But are stored attributes and simple validations? They aren’t really logic per se: they’re declarations. And end-to-end tests will usually confirm at least their basic functioning.
Whether these tests are valuable depends on how you see your model tests. Some see them primarily as a design tool: they help you create objects with a simple API and few dependencies. Tests of model attributes and validations aren’t really a design activity because they just assert that the right declaration is in place.
But there’s another way to see these tests: as documenting the object’s API. If you have tests that document the object’s attributes and methods, you can change out the implementation with confidence. For example, you might start with a boolean active field, but later find that you have more statuses than just “active” and “inactive.” You could add a new status field, then change the active attribute to a computed field based on the value of status. The tests that documented the active field would confirm whether it’s still working the same way.
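In plain Ruby, the refactoring might look like this (the User class and field names are illustrative, not from any real framework):

```ruby
# A model whose `active` attribute started life as a stored boolean and
# is now computed from a richer `status` field.
class User
  attr_accessor :status # e.g. "active", "inactive", "suspended"

  def initialize(status:)
    @status = status
  end

  # Formerly a stored boolean attribute; now computed from status.
  # The documented API is preserved, so existing tests keep passing.
  def active
    status == "active"
  end
end

# API-documenting assertions that survive the change of implementation.
user = User.new(status: "active")
raise unless user.active == true

user.status = "suspended"
raise unless user.active == false
```

Because the tests assert on the public attribute rather than the storage mechanism, swapping a stored boolean for a computed field doesn't break them.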
This approach isn’t one I hear advocated very frequently. For every field that changes implementation, there are usually dozens that stay trivial—and that’s a lot of model tests to maintain for little benefit. Often end-to-end tests provide enough safety for these kinds of changes.
Testing is all about choosing the kinds of test that provide the most value for the effort they require. “Don’t test the framework” by repeating tests that a framework already has—they’re unlikely to be valuable. But you definitely want to end-to-end test your entire app including the framework. And you may want to consider exploratory or characterization testing, documenting your models’ API, and even contributing tests to the framework itself.
Does this discussion give you ideas for new types of test to write? Or types of test to stop writing? Keep thinking about the why behind the tests you write, and your tests will get better and better at serving your needs.