
Thursday, February 25, 2010

Code coverage & cyclomatic complexity calculations coming to ColdFusion

We've heard it over and over: ColdFusion 9 has made excellent headway in giving engineers proper support for object orientation without all the bloat. Between CF9 and awesome unit testing tools like MXUnit and MockBox, unit testing is really becoming a mainstay for a lot of ColdFusion applications, and I hope it stays that way.


However, one critical and often overlooked aspect of unit testing is code coverage and cyclomatic complexity calculation. Too often, developers feel that because they've got a good number of tests it's "good enough," without taking a step back and asking themselves questions like:

  • Are my unit tests covering every line of code?
  • Am I testing for every distinct path of execution inside a method?
  • Is my code written efficiently or is there room for improvement?
Perhaps you've asked yourself these questions but didn't know how to answer them. Well, the answers lie in code coverage and cyclomatic complexity...

Code Coverage

Code coverage metrics show you the percentage of your code that is and is not exercised by your unit tests, which is extremely useful for identifying the "holes" in your test suite. Uncovered lines mean no unit tests exist for them, which is an indirect measurement of code and test quality. If you haven't tested it, it's at risk for error and should be addressed ASAP before rolling that beaut into production.
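
To make that concrete, here's a rough sketch (the component, test, and numbers are made up purely for illustration) of a suite that would show a coverage hole:

    // Discount.cfc - a hypothetical component under test
    component {
        public numeric function discountFor(required numeric orderTotal) {
            if (arguments.orderTotal >= 100) {
                return arguments.orderTotal * 0.10; // exercised by the test below
            }
            return 0; // never exercised by any test - a coverage report would flag this line
        }
    }

    // DiscountTest.cfc - an MXUnit-style test case that only covers one branch
    component extends="mxunit.framework.TestCase" {
        public void function testBigOrderGetsDiscount() {
            assertEquals(15, new Discount().discountFor(150));
            // no test ever passes a total under 100, so the "return 0" path is a hole
        }
    }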

Cyclomatic Complexity

Also known as CCN (the cyclomatic complexity number), it answers the question "is my code written efficiently?" To boil it down, take a method and count all the ways it can branch into simple or complex conditionals: ifs, switches, nested ifs, loops, calls out to other methods, and so on.

The basic idea is that each of these conditional execution paths adds to code complexity. You always want to keep complexity to a minimum: the more complex conditional code riddled throughout your code base, the less maintainable it is, and extensibility becomes almost impossible. Your software gets riskier as more complex paths are added, resulting in unhappy customers and stressed-out engineers - and I think you'd agree that stress sucks, and with volatile software, release days become nail biters...

So you run CCN metrics on your classes and the tool spits out a number for each method. The common standard is to keep that number under 10; anything over that should be flagged, reviewed, and more often than not rewritten.
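
For a rough illustration (the method below is made up, and the count follows the usual "one plus one per decision point" rule, though individual tools may count slightly differently):

    // shippingCost() has a cyclomatic complexity of 4:
    // the single straight-line path counts as 1, and each "if" adds 1.
    component {
        public numeric function shippingCost(
            required numeric weight,
            required string region,
            boolean rush = false
        ) {
            var cost = 5;                                // base path   -> 1
            if (arguments.weight > 20) {                 // decision #1 -> 2
                cost += 10;
            }
            if (arguments.region == "international") {   // decision #2 -> 3
                cost *= 2;
            }
            if (arguments.rush) {                        // decision #3 -> 4
                cost += 25;
            }
            return cost;
        }
    }

Every switch case, loop condition, and && / || inside a condition bumps the number the same way, which is how an innocent-looking method can quietly drift past 10.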

Code Coverage and CCN coming to ColdFusion


Languages like Java and C# have tools that provide this level of support, but ColdFusion hasn't. That is, until now!


Written in Java and ColdFusion, cfcommon's Chimera project will provide that critical line of defense, helping engineers identify the parts of their code that need unit tests and/or need to be refactored because they're too complex.


Attached is a preliminary screenshot of Chimera playing nicely with MXUnit tests.


Chimera ColdFusion Code Coverage Analysis


Although this image shows ColdFusion code written in tags, we will be providing script support as well ;)


Please stay tuned as we will be releasing Chimera BETA very soon and you can check out cfCommons in the meantime!


PEACE.


Mick

Friday, December 4, 2009

Mocking - An essential unit testing technique

Oh, the joys of unit testing. Brian C. and I are creating a new platform from the ground up, and from the get-go unit tests were a standard. You need to write 'em; if you don't, it's timeout in the corner for 10 minutes. We started this project about a year ago on CF8 but recently got approval to move over to CF9. Very cool. So we started our script conversion A-fricking-SAP, since the sooner we could rid the codebase of cftags, the sooner my wrists and fingers would like me again.

Within that year we wrote unit tests for every single method in our codebase and tried our best to get near 100% code coverage. Life was good; we felt protected. That is, until we started hearing about mocking during our script conversion. Refactoring the code to script made us take a deeper look at the quality of our unit tests - and ehhh, it was mediocre. A majority of tests felt clunky: doing too much, containing assertions that weren't needed in the unit test in question but rather belonged somewhere else. We decided to dedicate a few days to understanding mocking and seeing if it solved our unit test shadiness.

I first heard the term "mocking" from a good friend of mine, Shakti S., who is spearheading the JUnit testing effort over at his shop. I'd say within the first few minutes of the explanation my eyes glazed over and my brain was sizzling. One point (out of many, of course) that resonated with me was when he said, "You can see the true power of mocking when writing tests against objects that leverage DAOs." So I slept on it, shot a few emails back to Shaks, and eventually brought it up with Brian as well.

Brian and I spent a good day researching mocking (an EXCELLENT resource by Martin Fowler:
http://martinfowler.com/articles/mocksArentStubs.html) and exchanging conversations that resulted in many rounds of "I think I get it. Let's run some tests." About the 10th "I think I get it" later, we came to an epiphany. Unit tests need to be as simple as possible, right? At the end of the day, software functions are no different than mathematical functions - you define the inputs, therefore you can assert the output.

OK, so where does mocking come in? Well, the object/method under test (coined the System Under Test, or SUT) may be delegating to composite objects to get its ultimate result. Those composite objects often have their own composition, so you can imagine what the setUp() of that test case would look like: you'd need to instantiate the SUT, its composite relationships, and THEIR composite relationships just to get the SUT into a testable state - ugh, no thanks. Herein lies the problem that mocking solves. The objects a SUT delegates to are termed collaborators. You, as the developer with access to the source code, can mock/fake the results of a collaborator WITHIN that SUT. This is desirable because you should already know what a valid expectation for that collaborator method is. Instead of manually injecting/constructing the proper dependencies every single time, simply mock them up. Manually constructing dependencies takes your focus off the SUT, and you end up with a pile of extra code you don't need, compositions that don't need to happen, and assertions that belong somewhere else or, worse, don't add value to the unit test at all.
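
To put some shape to that, here's a minimal sketch (the component names are hypothetical, not from our platform) of a SUT that leans on a collaborator:

    // OrderService.cfc - the SUT; it delegates persistence to a collaborator
    component {
        public any function init(required any orderDAO) {
            variables.orderDAO = arguments.orderDAO;
            return this;
        }

        public boolean function placeOrder(required struct order) {
            // ...the validation/business logic we actually want to unit test...
            return variables.orderDAO.save(arguments.order);
        }
    }

To test placeOrder() the "manual" way, you'd have to stand up a real OrderDAO - plus whatever it depends on (datasource, gateways, and so on) - just to exercise logic that has nothing to do with persistence.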


Back to how this mocking concept clicked as it pertains to the DAO problem. In our new platform we leveraged an IoC container, so we didn't have to manually construct dependencies in every test case setUp(). However, we did have to inject the bean factory into each test case - same problem, different dress. So how did this epiphany apply to DAOs? Well, whenever a test exercised a method that delegated to a DAO method like dao.save(), it actually persisted a test record into the database. That should make you cringe... a lot. You heard it right: on every run of the build (which fires the unit tests) we were persisting test data to the dev database everywhere something.save() was called - on local developer builds as well as in continuous integration. We needed a way to make dao.save() return true without actually going in there and persisting records to the database; that's what integration tests are for, not unit tests.

After reaching mocking nirvana, the solution was simple: mock the SUT's DAO collaborator, specifically the dao.save() method, to return true. Whatever dao.save() used to do is now overridden. If that save() went on to collaborate with 100 more objects, it no longer has any relevance here. We've mocked it. We've told you, dao.save(), what to return. We don't care how you really use 100 other objects to return true. You, dao.save(), are not the SUT. So I'll make you return true and you'll like it, since I'm your daddy and my actual SUT execution path expects you to return true in order for me to assert properly.
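
Here's roughly what that looks like, sketched with MockBox against the hypothetical OrderService/OrderDAO from above (the component paths are illustrative and the MockBox calls are from memory, so treat the details as assumptions and adjust to your own app):

    // OrderServiceTest.cfc - MXUnit test case; the DAO collaborator is mocked,
    // so nothing ever touches the database
    component extends="mxunit.framework.TestCase" {

        public void function setUp() {
            variables.mockBox = new mockbox.system.testing.MockBox();
            // fake the collaborator: an empty mock with only the behavior we define
            variables.mockDAO = mockBox.createEmptyMock("model.OrderDAO");
            // tell the mock what save() should return - no record is ever persisted
            mockDAO.$("save", true);
            variables.sut = new model.OrderService(mockDAO);
        }

        public void function testPlaceOrderReturnsTrueWhenSaveSucceeds() {
            // the SUT's execution path expects save() to return true, and now it always does
            assertTrue(sut.placeOrder({ id = 1, total = 99 }));
        }
    }

The mock stands in for the real DAO, so the build can run this test a thousand times without ever writing to the dev database.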

So to conclude, just remember that whenever you run across a SUT that has collaborators (composition), chances are you'll want to mock them. This keeps your unit tests controlled, light, and focused. Keep up with your unit tests, too - they've tremendously helped us with this tag-to-script conversion. We'd convert to script, run the unit tests, and sometimes they'd blow up in our face because we had written some logic incorrectly. Gotta love it.

Hope this helped. I wanted to focus on the concepts and keep the code light, but if you would like to see more examples, just let me know.


Good resources:

http://www.mockobjects.com/
Martin Fowler- http://martinfowler.com/articles/mocksArentStubs.html
Ask @marcesher on twitter - http://twitter.com/marcesher (MXUnit contributor)