So, over the last two days I sat down to do a testing/deployment-related code kata.
It was a bit of hit and miss, but I'll at least document it, to give a sense of what to think about.
The coding task was a generic, simple "count to a hundred" task. In bash
it would look like this:
for i in `seq 1 100`; do echo $i; done
And not much worse in Python.
Except that it had to be both a package (library) and a standalone script at the same time. It needed documentation & test-cases.
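The dual package-and-script requirement is the standard `if __name__ == "__main__"` pattern. A minimal sketch (the module and function names here are mine, not the kata's):

```python
# counter.py -- hypothetical module name: importable as a library,
# runnable as a standalone script.

def count_to(n=100):
    """Return the numbers 1..n as a list (the library side)."""
    return list(range(1, n + 1))

def main():
    # Script side: one number per line, like the bash one-liner.
    for i in count_to():
        print(i)

if __name__ == "__main__":
    main()
```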
Even this isn't much to go on, so the code was purposefully implemented as "crappily" as possible. Functions named "print1" and "print2" were made part of the public API. (Test-cases were written to make sure I didn't break this API.)
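The post doesn't say what "print1" and "print2" actually did, so this is one plausible shape of the deliberately bad API, together with the kind of stdout capture an API-freezing test would need:

```python
import io
import contextlib

def print1(i):
    # Prints a single number. Terrible name, but it's "public" now.
    print(i)

def print2(i, j):
    # Prints an inclusive range -- a different calling convention, also "public".
    for n in range(i, j + 1):
        print(n)

# The API-freezing test captures stdout and pins the exact output.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    print2(1, 3)
captured = buf.getvalue()
```

Tests like this go in the untouchable suite: they exist to freeze the accidental API, not to bless it.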
Not bad enough. We needed code to test the output to screen as well. (Mocking "print" would have been too pretty, so I went with an output pipeline.) And then we needed to object-orient it.
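An "output pipeline" can be as simple as running the code in a child process and reading its stdout through a pipe. A sketch of that approach (the kata's exact mechanism isn't shown in the post):

```python
import subprocess
import sys

# Run the counting code as a child process and capture its stdout via a pipe.
result = subprocess.run(
    [sys.executable, "-c", "for i in range(1, 101): print(i)"],
    capture_output=True, text=True, check=True,
)
lines = result.stdout.splitlines()
```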
At this point you should have a sordid mess on your hands: different object-classes for different numbers, various inconsistent names and calling conventions. Declare a couple of them as "public interfaces" and make sure you write test-cases for them in their own suite. This set of test-cases is not allowed to be refactored, and should capture any bugs you make.
On the other hand, you also write test-cases for the things that _aren't_ part of your API, but put these in a different suite.
Soon we had a nice list of objects, one for each number, with dynamic re-creation of its own class on increment. At this point, don't hold back. Adopt strange conventions. Write temporary files and pass them around. Toss in a lambda for no other reason than that you've heard about them. Make a mess.
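A class per number, each one dynamically creating its successor's class on increment, could look something like this (a reconstruction of the mess, not the kata's actual code):

```python
def make_number_class(n):
    # Build a brand-new class for the number n, at runtime.
    def increment(self):
        # Dynamically create the *next* number's class and instantiate it.
        return make_number_class(self.value + 1)()
    return type(f"Number{n}", (object,), {"value": n, "increment": increment})

one = make_number_class(1)()
two = one.increment()
```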
The step from there is adding a text representation of itself (implementing __str__ for the class). This means that you need to refactor your mess. Toss out your "ugly" code while retaining the public API. If your API exposes some objects, you still need to support them while adding this functionality elsewhere. Revamp your functions to use the new, organized class.
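The refactored core might collapse all those per-number classes into a single one, with __str__ providing the text representation (again a sketch, not the kata's code):

```python
class Number:
    """One class for all numbers, replacing the class-per-number mess."""

    def __init__(self, value):
        self.value = value

    def increment(self):
        return Number(self.value + 1)

    def __str__(self):
        # The text representation: print(Number(7)) shows "7".
        return str(self.value)
```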
After this, split out the new functionality into its own package. Copy the tests over, and update your old package to include & depend on the new one. Make sure you don't change your tests: no changes in include paths or anything else.
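Keeping the old import paths working usually comes down to a re-export shim in the old package. The snippet below fakes the two packages with in-memory modules just to show the pattern (the package names are made up):

```python
import sys
import types

# "newpkg": the functionality after the split.
newpkg = types.ModuleType("newpkg")
def count_to(n):
    return list(range(1, n + 1))
newpkg.count_to = count_to
sys.modules["newpkg"] = newpkg

# "oldpkg": re-exports from newpkg, so the old tests and import
# paths keep working without any changes.
oldpkg = types.ModuleType("oldpkg")
oldpkg.count_to = sys.modules["newpkg"].count_to
sys.modules["oldpkg"] = oldpkg

from oldpkg import count_to as old_count_to
```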
After this, it's time to go online. Make a simple HTTP frontend for it. Make sure it's RESTful: stable URIs for each number. Do not include English words in your URIs; you want to be translatable. Once it's all working in the web browser, add some basic HTML templating to get it rendering centered.
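A stdlib-only sketch of such a frontend: numeric-only URIs (/1 through /100) and centered HTML. This is my illustration, not the kata's server:

```python
import re
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class NumberHandler(BaseHTTPRequestHandler):
    # Stable, language-neutral URIs: /1 .. /100, no English words.
    def do_GET(self):
        m = re.fullmatch(r"/(\d+)", self.path)
        if m and 1 <= int(m.group(1)) <= 100:
            body = f"<html><body><center>{m.group(1)}</center></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

# Serve on an ephemeral port and fetch one number.
server = HTTPServer(("127.0.0.1", 0), NumberHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
page = urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_address[1]}/42").read().decode()
server.shutdown()
```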
This is where I went wrong. I should have spent the time to implement web-based test-cases, with Selenium or something similar. My bad, and I'll redo this part of the kata.
However, at this point, change your mind and add an AJAX version of the interface. Since AJAX is all about JSON these days, bolt JSON on top of it: Accept: headers, and keep backwards compatibility with the web-site version.
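Content negotiation boils down to inspecting the Accept header and choosing a representation for the same URI. A toy version of the idea (not the kata's code):

```python
import json

def render(number, accept="text/html"):
    # Same resource, two representations, chosen by the Accept header.
    if "application/json" in accept:
        return "application/json", json.dumps({"value": number})
    # Default to HTML, staying backwards-compatible with the web-site version.
    return "text/html", f"<center>{number}</center>"

ctype, body = render(42, accept="application/json")
```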
Are your URIs stable? Is the interface cacheable? Are you stateless?
And if you think the above is messy? Yeah, it is. It's supposed to be; the exercise is to wade deep into the mess, get your head above the surface, and bring it together. Hold yourself to high standards: run Lint at all times, make sure your documentation is there, that your functions aren't too long, and that you avoid duplication.
But not to start with. In the beginning, you need to make a mess. The better your mess, the more fruitful it is.