In this experiment I'll take a look at testing techniques and code coverage using both JUnit and Emma. All programs in this experiment will be edited and tested in Eclipse, using its built-in JUnit integration. Emma is an open-source Java coverage tool that integrates well with Ant. I will run Emma and build my tests from the command prompt (using Ant), while making the coverage-improving changes in Eclipse. The program below is a carry-over from my previous "Stack" post, and I will use it as the basis for running my unit tests.
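Since the original listing isn't reproduced here, this is only a rough sketch of what a List-backed Stack along those lines might look like. The push() and pop() signatures are my own guesses; only the elements field, top(), isEmpty(), and the "[Stack ...]" toString() format are implied by the tests further down.

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal List-backed Stack, reconstructed from the tests in this post. */
public class Stack {

  /** Backing list; package-visible so the tests can reach it, as stack.elements suggests. */
  List<Object> elements = new ArrayList<Object>();

  /** Pushes obj onto the top of this stack. */
  public void push(Object obj) {
    this.elements.add(obj);
  }

  /** Removes and returns the top element. */
  public Object pop() {
    return this.elements.remove(this.elements.size() - 1);
  }

  /** Returns the top element without removing it. */
  public Object top() {
    return this.elements.get(this.elements.size() - 1);
  }

  /** Returns true if this stack has no elements. */
  public boolean isEmpty() {
    return this.elements.isEmpty();
  }

  /** Returns a string of the form "[Stack [a, b, c]]". */
  @Override
  public String toString() {
    return "[Stack " + this.elements + "]";
  }
}
```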
Improving Coverage:
The goal of this experiment is to improve line coverage. Emma makes this easy to measure with a few additions to my previous Ant setup. After running the build command "ant -f emma.build.xml", Emma reports quantified metrics for class, method, block, and line coverage. It also generates an HTML summary (emma/coverage.html) that breaks down the covered, partially covered, and uncovered lines for each file, highlighting every line with a neat color scheme: green for fully covered, yellow for partially covered, and red for uncovered.
Initial Emma Coverage Metrics:
- class: 100% (3/3)
- method: 73% (8/11)
- block: 69% (65/94)
- line: 75% (15/20)
- ClearStack.java (50% coverage): getTop() and isEmpty() not fully covered
- Stack.java (79% coverage): toString() and top() not fully covered
Here's my test case: assertEquals("Testing stack toString", "[Stack []]", stack.toString());
I found that this test rests on the assumption that stack.elements behaves correctly. To convince myself that this statement truly executes without flaw, I also tested the List's toString() method.
2nd test case: assertEquals("Testing list toString", "[]", stack.elements.toString());
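Put together, the two assertions could live in a JUnit 4 test class along these lines (the class and method names here are my own; the actual test class may be organized differently, and it assumes the test sits in the same package as Stack so the elements field is visible):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

/** Coverage-driven tests for Stack.toString(). */
public class TestStackToString {

  @Test
  public void testStackToString() {
    Stack stack = new Stack();
    // Exercises the previously uncovered Stack.toString().
    assertEquals("Testing stack toString", "[Stack []]", stack.toString());
  }

  @Test
  public void testBackingListToString() {
    Stack stack = new Stack();
    // Checks the assumption that the empty backing list prints as "[]".
    assertEquals("Testing list toString", "[]", stack.elements.toString());
  }
}
```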
It turns out that my curiosity only reminded me that Collections have a default toString() format: the elements are printed as an array-like string enclosed in square brackets. The point of this assertion was to distinguish the mindset of good and bad testers. Good testers take the extra step of objectively analyzing the hidden assumptions their program makes; bad testers walk away once they see 100% coverage, which may or may not be truly 100% valid.
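That default, bracket-enclosed format is easy to confirm on any java.util collection in isolation (the class name below is just for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class ListToStringDemo {
  public static void main(String[] args) {
    List<String> items = new ArrayList<String>();
    System.out.println(items);   // prints [] for an empty list
    items.add("a");
    items.add("b");
    System.out.println(items);   // prints [a, b]
  }
}
```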
Overall, Emma is still a great tool that should be used continuously from pre-development through production. However, I would not recommend introducing Emma in the middle of a project, unless the task at hand is only minimally affected by the rest of the program.
Here's a link to the final Stack code: stack-anthony.m.du-6.0.930.zip
Conclusion:
I learned the importance of building my programs with some sort of coverage tool. Although coverage tools do determine whether our code is tested under certain conditions, they stop short of telling us whether our program truly expresses what we intend in context. Having questioned the effectiveness of coverage tools such as Emma, I've come to realize that there is no better tool than human analysis. Sure, we can ease the pain of manually checking whether our code executes as it should, but we must also account for a tool's lack of human ingenuity and insight. Since we wrote the program using our own patterns and processes, we should also rely on our own instincts when assessing our work. I feel that a tool like Emma should go hand-in-hand with the earliest stages of developing test cases. Again, the concept of incremental programming should be exercised to assure the quality of your code. For future use, I would recommend the EclEmma plugin for Eclipse. It works much like Emma: it gives visual highlights of which code has been tested, or "covered," along with the usual quantified metrics after every build.