Monday, September 29, 2008

Stack + Emma + JUnit

Premise:
In this experiment I'll take a look at testing techniques and code coverage using both JUnit and Emma. All programs in this experiment will be edited and tested in Eclipse, which has JUnit support built in. Emma is an open-source Java coverage tool that integrates well with Ant. I will run Emma and build my subsequent tests from the command prompt (using Ant), while making the coverage-improving changes in Eclipse. The program below is a remnant of my previous "Stack" post, and I will be using that Stack program as the basis for my unit tests.
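Since the full listing lived in the earlier post, here is only a minimal sketch of the Stack class assumed throughout; the method names, the public elements field, and the toString() format are inferred from the coverage report and the test cases quoted below, and the actual source is in the zip linked at the end:

  import java.util.ArrayList;
  import java.util.List;

  // Sketch only: the real implementation lives in the linked zip.
  public class Stack {
    // Exposed so the tests below can inspect the backing list directly.
    public List<Object> elements = new ArrayList<Object>();

    public void push(Object item) {
      elements.add(item);
    }

    public Object pop() {
      return elements.remove(elements.size() - 1);
    }

    public Object top() {
      // Peek at the last item pushed without removing it.
      return elements.get(elements.size() - 1);
    }

    public boolean isEmpty() {
      return elements.isEmpty();
    }

    public String toString() {
      // Yields "[Stack []]" for an empty stack, matching the assertion below.
      return "[Stack " + elements + "]";
    }
  }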

Improving Coverage:
The goal of this experiment is to improve line coverage. Emma makes this easy to accomplish with a few commands on top of my previous Ant setup. After executing the build command "ant -f emma.build.xml", Emma conveniently reports quantified metrics for class, method, block, and line coverage. Emma is also nice enough to produce an HTML summary (emma/coverage.html) describing the covered, partially covered, and uncovered lines. It identifies each file and highlights each line with a neat color scheme: green for fully covered, yellow for partially covered, and red for uncovered.
Initial Emma Coverage Metrics:
  • class: 100% (3/3)
  • method: 73% (8/11)
  • block: 69% (65/94)
  • line: 75% (15/20)
Emma's analysis of the uncovered lines is reported by filename, coverage percentage, and the methods containing non-executed lines:
  1. ClearStack.java, (50%), getTop() & isEmpty()
  2. Stack.java, (79%), toString() & top()
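To make these concrete, here is a sketch of the kind of JUnit tests that exercise the flagged methods. It targets the Stack sketch above (in the real project, getTop() and isEmpty() live on ClearStack, as listed); the pushed values and messages are illustrative, not the actual tests from the zip:

  import junit.framework.TestCase;

  // Sketch of coverage-driven tests; values are illustrative.
  public class TestStack extends TestCase {

    public void testTop() {
      Stack stack = new Stack();
      stack.push("a");
      assertEquals("Testing top of stack", "a", stack.top());
    }

    // Emma only reports isEmpty() as fully covered once both the
    // true and false outcomes have been exercised:
    public void testIsEmptyTrue() {
      Stack stack = new Stack();
      assertTrue("A new stack should be empty", stack.isEmpty());
    }

    public void testIsEmptyFalse() {
      Stack stack = new Stack();
      stack.push("a");
      assertFalse("A stack with one element is not empty", stack.isEmpty());
    }
  }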
I first improved the most trivial test cases indicated by Emma for Stack.java and ClearStack.java, and was able to raise the line-level coverage to 99.8%. However, I was stuck on a partially covered line in the isEmpty() method of the ClearStack class. Here Emma has no further input other than to say that the line needs more tests. It took me a while to notice that isEmpty() had to be tested for both the true and false cases (as in the sketch above), when initially I had only one. This minimal unit test boosted my line-level coverage to 100%, thus achieving what looks to be good code, right?
Unfortunately, I then noticed a couple of flaws that arise from my tests and from treating Emma as a fully trustworthy tool. Although my first intention in this experiment was to improve code coverage, I've also come to realize a vital flaw in Emma that could separate good testers from bad ones: Emma tricks the programmer into thinking only about achieving coverage, rather than helping the coder gain greater insight into the code. Even at 100% coverage, my tests do not fully exercise the code in context. For example, one unit test required that I test the toString() method of the Stack class. In my TestStack class, I tested toString() by simply asserting the expected output, relying on prior knowledge of what toString() must produce.
Here's my test case: assertEquals("Testing stack toString", "[Stack []]", stack.toString());
I found that this test builds upon the assumption that stack.elements works. I needed to convince myself that this statement truly executes without flaw, so I tested the List's toString() method as well.
2nd test case: assertEquals("Testing list toString", "[]", stack.elements.toString());
It turns out that my curiosity only led me to remember that Collections have a default format for outputting elements: an array-like string enclosed in square brackets. The point of this assertion was to distinguish the mindsets of good and bad testers: good ones take the extra step of objectively analyzing the hidden assumptions their program makes, while bad ones walk away at 100% coverage, which may or may not be truly 100% valid.
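For what it's worth, the two assertions can sit together in a single test method (again written against the sketched Stack above), so the test states its own assumption before relying on it:

  // Pins down the List formatting assumption before asserting on top of it.
  public void testToStringLayers() {
    Stack stack = new Stack();
    assertEquals("Testing list toString", "[]", stack.elements.toString());
    assertEquals("Testing stack toString", "[Stack []]", stack.toString());
  }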
Overall, Emma is still a great tool that should be used continuously from pre-development through production. However, I would not recommend introducing Emma in the middle of a project, unless the task at hand is minimally affected by the program as a whole.

Here's a link to the final Stack code: stack-anthony.m.du-6.0.930.zip

Conclusion:
I learned the importance of building my programs with some sort of coverage tool. Although coverage tools evidently determine whether our code is tested under certain conditions, they stop short of telling us whether our program truly does what we mean in context. Upon questioning the effectiveness of coverage tools such as Emma, I've come to realize that there is no better tool than human analysis. Sure, we can ease the pain of manually checking whether our code executes as it should, but we must also account for a tool's lack of human ingenuity and insight. Since we wrote the program using our own patterns and processes, we should also rely on our instincts when assessing our own work. I feel that Emma should go hand in hand with the beginning stages of developing test cases. Again, the concept of incremental programming should be exercised to assure the quality of your code. For future use, I would recommend the integrated Eclipse plugin EclEmma. It works very much like Emma: it visually highlights which code has been tested, or "covered," and provides the usual quantified metrics after every build.
