The Unreasonable Effectiveness of TDD
2 min read
Oct 4, 2011
Highgroove’s “bias towards action” rallying cry is no secret, and we try to abide by it whenever we can, whether we’re deploying code or choosing where to go for lunch. An important corollary is that we also try to bias towards making mistakes earlier rather than later. A fellow Highgroover captured it well: “If you aren’t getting burned, you need to play with fire more.”
Learning what breaks a new system helps developers get a feel for when heavier infrastructure might be required (e.g., moving work into background jobs when server-side code starts becoming too intensive). It’s also part of why we encourage new developers to deploy code on their first day: it’s better to learn how to fix production while you’re getting situated than later, when no one may be around to help (especially in a ROWE).
But finding out when your system doesn’t break can be just as important. I’ve discovered security issues in the past by using unsafe code I “knew” shouldn’t work as part of an initial naïve solution or experiment.
Well-developed test suites, of course, are what make all of this possible, by telling you exactly what has broken and where. They’re also what made me come around to TDD in the first place: writing tests first means you don’t bias your thinking toward the code you’ve already written, making it easier to consider what “breaking” a model or controller (for example) should look like in a test.
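To make the test-first idea concrete, here is a minimal sketch in plain Ruby with Minitest. The `Post` model and its title validation are hypothetical examples, not anything from an actual Highgroove app; the point is only the ordering: the test describing what “broken” looks like is written before the implementation exists.

```ruby
require "minitest/autorun"

# Written FIRST: this pins down what "breaking" the model should look
# like before any implementation can bias our thinking about it.
class PostTest < Minitest::Test
  def test_post_without_title_is_invalid
    assert_equal false, Post.new(title: nil).valid?
  end

  def test_post_with_title_is_valid
    assert Post.new(title: "Fail fast").valid?
  end
end

# Written SECOND: just enough code to make the tests above pass.
class Post
  def initialize(title:)
    @title = title
  end

  def valid?
    !@title.nil? && !@title.empty?
  end
end
```

Because the failing tests exist before the model does, the first run tells you exactly what has broken and where, which is the feedback loop that makes fast experimentation safe.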
Do you encourage experimentation and “failing fast”? How?