This guy has some good points, some of which I hadn't been able to articulate myself:
He makes good points, but I still disagree.
But I can also tell you that using all the latest and greatest tools and techniques, and attempting to “test all the fucking time” on large-scale, long-running projects, has actually hurt rather than helped me at times.
I fail to see how. Tools like RSpec, Steak (I met the creators of Steak.fm, actually, nice guys), and Cucumber exist to make testing more logical and read like a requirements document. While there is a certain level of testing that is unacceptable (like testing database connectivity or the creation of cookies), tests should focus on the important parts of the logic. The problem with the argument we're having is that testing is left to the developer. We can all agree on testing; it's the methodology, and even the granularity, of our tests that we can't seem to agree on. But hey, it's the differences in opinion that make this country great.
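To illustrate the granularity point, here's a minimal sketch (a hypothetical `Cart` class, in plain Ruby rather than RSpec so it stands on its own): the discount logic is worth testing, while the database plumbing is not.

```ruby
# Hypothetical Cart: the pricing logic is the important part;
# the fact that it could talk to a database is not worth a test.
class Cart
  def initialize(prices)
    @prices = prices
  end

  # Pure logic: orders over 100 get a 10% discount.
  def total
    sum = @prices.sum
    sum > 100 ? (sum * 0.9).round(2) : sum
  end
end

# A check that reads like a requirement:
# "a cart over 100 gets a 10% discount"
cart = Cart.new([60, 60])
raise "discount not applied" unless cart.total == 108.0
```

In RSpec this would read even closer to a requirements document (`it "applies a 10% discount over 100"`), which is exactly the point of those tools.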
Personally, I’ve found testing to get in the way when I’m first exploring a new project. I almost always spend a couple hours writing absolutely crappy code without any tests at all. Test-first development does help drive interfaces and forces you to think about API design continuously, but it can only really be used to attain a local maximum.
So does the inverse. When I explore new things, I don't write tests either; I begin to write tests when I want to understand the thing. It also depends on what I'm working on. If it's a new product or something for fun, I generally skip tests. If it's algorithmic in nature, I write tests. If I'm serious about putting together a real project, then I write tests. His argument about a local maximum is kind of off, IMO, and it contradicts his first statement. The question is what kind of development you're doing. If your goal is to putz around and try new things, sure, go right ahead and skip tests. But if I'm evaluating a library, you can bet your bottom dollar that I will test it.
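That kind of evaluation test is just a characterization check: pin down what a library actually does before depending on it. A sketch, using Ruby's stdlib `Date` purely as a stand-in for any third-party dependency:

```ruby
# Characterization checks: assert the behavior you think the
# library has, so surprises show up before they ship.
require "date"

# Does adding a month to Jan 31 clamp or raise? Pin it down:
# Ruby's Date#>> clamps to the last day of the target month.
raise "unexpected" unless (Date.new(2024, 1, 31) >> 1) == Date.new(2024, 2, 29)

# Leap-year handling is exactly the edge case worth checking.
raise "unexpected" unless Date.new(2024, 2, 29).leap?
```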
I don’t test complex interactions with users within a system, unless I begin to frequently write code that has system-wide effects. I’ve definitely been in situations in which integration tests have been vital, but they’ve been few and far between. Part of this is because the projects I work on tend to be deeper than they are wide, but it’s also because I just trust my design capabilities enough to not introduce too many changes that could break more than one part of my application at a time. I feel like the majority of integration testing goes into way too much detail about the expected paths through a system, and as a result, forces a bunch of false negatives as minor changes that shouldn’t affect users end up breaking tests.
Honestly, he may not need to rethink his tests; he may need to rethink his architecture. Judging by what he's describing, those complex interactions should have been abstracted as messages. The problem I have with this statement is that he re-tailors his tests when he should have looked at the architecture. When you get into complex object interactions, message passing is sometimes easier because it decouples objects from one another. Your integration tests become simple: test the message pass and the resulting behavior, rather than testing the interaction itself. Those false negatives stem from the fact that his objects are tightly coupled, and false negatives are awful because they mislead the developer. It's almost a trade-off: testing is a kind of development effort, so you might as well architect well.
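A minimal sketch of what I mean (all names hypothetical): with a simple message bus, publisher and subscriber only share a topic name, so the integration test asserts "the message was delivered and handled" instead of walking every path through the UI.

```ruby
# Hypothetical message bus: objects communicate by topic, not by
# holding direct references to each other.
class Bus
  def initialize
    @handlers = Hash.new { |hash, topic| hash[topic] = [] }
  end

  def subscribe(topic, &handler)
    @handlers[topic] << handler
  end

  def publish(topic, payload)
    @handlers[topic].each { |handler| handler.call(payload) }
  end
end

# The "integration" test: verify the message pass and the behavior,
# not the coupling between concrete objects.
bus = Bus.new
received = []
bus.subscribe(:order_placed) { |order| received << order[:id] }
bus.publish(:order_placed, id: 42)
raise "message not delivered" unless received == [42]
```

Because the publisher never knows who is listening, a minor change on one side can't produce the false negatives he's complaining about.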
Note that in the above, I didn’t imply that testing results in writing better code. I also specifically avoided claiming that tests will help you avoid defects in the first place. While I think that occasionally testing contributes to accomplishing these two things, it really depends on the project as well as the individual developer’s skill level and coding style. I also didn’t claim that writing tests saves time or money. I don’t think it actually does, and I wouldn’t trust that claim until I saw some concrete evidence.
The one part I definitely agree on! However, writing tests creates a contract. I argued this with another guy on Twitter who didn't agree with the open/closed principle. The point is that objects invariably have a contract: their method signatures and behaviors are the interface, and the contract is that the object will behave as expected. Tests enforce this behavior, so they solidify the contract between developer and object.
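Here's a sketch of that idea (hypothetical `ArrayStack` example): a behavioral contract any implementation must satisfy, which is what RSpec formalizes as shared examples.

```ruby
# A contract as executable checks: any stack-like object must
# satisfy "push then pop returns the pushed value, leaving it empty".
STACK_CONTRACT = lambda do |stack|
  stack.push(1)
  raise "contract violated: pop" unless stack.pop == 1
  raise "contract violated: empty?" unless stack.empty?
end

# One implementation honoring the contract.
class ArrayStack
  def initialize
    @items = []
  end

  def push(item)
    @items.push(item)
  end

  def pop
    @items.pop
  end

  def empty?
    @items.empty?
  end
end

STACK_CONTRACT.call(ArrayStack.new) # raises nothing: contract honored
```

Swap in any other implementation and the same checks apply, which is exactly the developer-object contract the tests are enforcing.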