By Ali Hamoody
Software Lifecycle and Quality
Traditionally, when developers work on a software project, they write the application code or parts of it, and then test that code to verify correct implementation of the requirements. This unit testing process is usually a manual one, where the code is run interactively and tested immediately after development.
Further development or modification of the code may affect other parts of the system that have already been verified and tested, potentially introducing software bugs; therefore, regression testing is required.
Although the process seems reasonable and delivers a product of acceptable quality, the story does not end there. Application code that has become part of a production system will inevitably evolve over time: bugs need to be fixed, new business requirements arise, and superseded functionality must be removed or modified.
When developers attempt to fix the reported bugs, implement new features or augment existing ones, they modify the application code and test it again.
In extreme cases, when the application is old and complex, adding new features while keeping the existing ones intact may become cost prohibitive. Maintenance costs can climb so high that writing a new application becomes more cost effective.
Is there a way to avoid this scenario? Is there a way to enhance software quality and reduce maintenance costs?
What is TDD?
Simply put, Test Driven Development (TDD) is a development method where the developer first writes an automated test case, then writes the application code that causes the test to pass.
The test case is run first before developing the application code. The test fails as expected. Then, the developer writes just enough application code to cause the test to pass.
After the test passes, the application code is refactored to meet the established development standards. Running the test case again after refactoring will verify that the application code is still working as expected.
When that’s done, the developer moves on to write another test case for the next feature being implemented, and so on.
The cycle is repeated for each requirement of the project, so the collection of test cases grows over time. All the tests are run every time the system is modified or new features are added, to make sure they all still pass.
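Since this article contains no LANSA code, here is a minimal Python sketch of the test-first cycle described above. The requirement, the `order_total` routine, and its discount rule are all hypothetical; the point is the order of steps: the test is written first, then just enough application code to make it pass.

```python
# Step 1 (red): write the test first. It documents a hypothetical
# requirement: order_total must apply a 10% discount on orders of
# 100.00 or more.
def test_order_total():
    if order_total(100.00) != 90.00:
        return False, "discount not applied at threshold"
    if order_total(99.99) != 99.99:
        return False, "discount wrongly applied below threshold"
    return True, "order_total: passed"

# Step 2 (green): write just enough application code to make the test pass.
# A refactoring pass would follow, with the test re-run to confirm
# nothing broke.
def order_total(amount):
    if amount >= 100.00:
        return round(amount * 0.90, 2)
    return amount

ok, message = test_order_total()
print(message)
```

The same shape carries over to any language: the test encodes the requirement before the production code exists.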
Note that TDD is a development method that incorporates some form of unit testing. TDD does not replace the traditional system integration testing or user acceptance testing, which are still required.
Focus on Requirements
By writing the test case first, developers direct their attention to the project requirements and translate them into test cases, then write the application code that satisfies those requirements.
In the traditional approach, where application code is written first and tested afterwards, developers focus on verifying that their code works correctly, with the possibility of missing some requirements.
The difference is increased focus on requirements, and application code that actually satisfies those requirements.
Automating unit testing and integrating it into the development cycle results in better detection of bugs and software defects, allowing them to be fixed early in the development process, before more complex code and dependencies are built on top.
The result is a final product with fewer bugs and a more reliable system.
Reduced Maintenance Cost
When a bug is fixed or a new feature implemented, there is a risk of breaking existing features or introducing new bugs. This is especially true if the developer is not familiar with all parts of the system, or the system is too large for one developer to appreciate the full impact of a change.
TDD offers a layer of protection against such scenarios. Assuming that automated test cases are available for all parts of the system, running the full test suite will immediately expose such issues and help the developer locate and fix them, thus reducing the effort and cost needed to maintain the system.
Documentation and Knowledge Transfer
Each one of the test cases serves as up-to-date technical documentation on how the system should work and how to invoke the associated feature, including input and output parameters with examples on their expected values.
Developers will need less time to get onboard with the project and less time to implement code requiring integration with existing system functionality or features.
The benefit is reduced development and integration time, and readily available technical documentation that’s guaranteed to stay current with the system.
Adopting TDD in LANSA
At the time of writing this blog, there is no formal TDD framework shipped with LANSA development tools. However, the good news is that it’s fairly simple to adopt TDD into your LANSA project, and implement a simple framework that helps run the test code. Here are some suggestions to help you develop a framework that works for you.
Building the Test Harness
A test harness is a LANSA function or a reusable part method that is used to implement the test case for a specific part of the production code. Typically, the test harness invokes a specific part of the application code, passing in a predetermined set of parameters with specific values, then checks the result of that invocation and compares the returned values against the expected values.
Typically, all test harnesses return the same set of parameters: a Boolean flag that indicates success or failure of the test, and a message that provides details about the test being performed.
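The harness convention described above can be sketched in Python (the article does not show RDML, so this is an illustration only; the customer table, IDs, and `lookup_customer_name` routine are hypothetical stand-ins for your application code). Each harness invokes one piece of the application with known inputs and returns the conventional pair: a success flag plus a detail message.

```python
# Hypothetical application code under test.
CUSTOMERS = {"C0001": "ACME Corp"}

def lookup_customer_name(customer_id):
    return CUSTOMERS.get(customer_id, "")

def test_customer_lookup():
    """Test harness: invoke the code under test with predetermined
    inputs, compare the result against the expected value, and
    return the conventional (success_flag, message) pair."""
    expected = "ACME Corp"
    actual = lookup_customer_name("C0001")
    if actual == expected:
        return True, "Customer lookup: passed"
    return False, f"Customer lookup: expected {expected!r}, got {actual!r}"
```

In LANSA terms, the harness would be a function or reusable part method, and the pair would be its return parameters.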
Automating the Testing Process
In addition to building the test harness for each of the project requirements, you can build one or more test modules. These are functions or reusable part methods that call all the test harnesses in succession, and provide visual feedback or list the results.
To verify the integrity of the system being developed, all tests must pass. The test modules can be run as often as necessary, and should be run at least after every change or bug fix.
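A test module of the kind described above amounts to a loop over the harnesses. Here is a small Python sketch (the two example harnesses are hypothetical; in LANSA the module would be a function or reusable part method that calls each harness in succession):

```python
def test_always_passes():
    # Trivial harness used here only to demonstrate the runner.
    return True, "sanity check"

def test_tax_calculation():
    # Hypothetical check of a tax routine: 5% of 100.00 is 5.00.
    return round(100.0 * 0.05, 2) == 5.0, "tax calculation"

def run_all_tests(harnesses):
    """Run every test harness in succession, list the results,
    and report whether the whole suite passed."""
    failures = 0
    for harness in harnesses:
        ok, message = harness()
        print(f"{'PASS' if ok else 'FAIL'}: {message}")
        if not ok:
            failures += 1
    return failures == 0

all_passed = run_all_tests([test_always_passes, test_tax_calculation])
```

Because the module reports a single overall result, it can be run after every change as a quick integrity check.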
The initial effort spent writing test code may strike some developers as unnecessary overhead, prompting them to jump straight into writing the application code instead.
Consider that effort an upfront investment: you understand the requirements, document them in a test case, and then use that test case to verify the application code, not only when the production code is first finished but as many times as required, long afterwards.
It's not always easy or even possible to write test code for all parts of the application. This is particularly true for graphical user interface testing, such as Web pages or Windows forms. There are software tools capable of recording user actions into automated scripts and re-running them when required. Testing gets a bit more complex at this point, and you have a choice: draw the line here and test the user interface manually, or adopt such UI testing tools in your environment.
If you choose to test the user interface manually, I would recommend separating the user interface from the business and program logic, and from I/O, where possible. Think of the user interface as just a medium to display data, capture user input and call other functions that perform the actual business logic. Avoid building complex logic or processing steps inside your visual components where feasible.
This approach allows you to build more test harnesses to check all the non-visual code, and minimize the visual code that requires manual testing effort.
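The separation described above can be illustrated with a short Python sketch (the invoice calculation and the form function are hypothetical). The business logic is a pure function that harnesses can cover completely, while the visual layer only gathers input and displays the result:

```python
def calculate_invoice_total(line_amounts, tax_rate):
    """Pure business logic: fully coverable by automated test harnesses."""
    subtotal = sum(line_amounts)
    return round(subtotal * (1 + tax_rate), 2)

def show_invoice_form(line_amounts):
    """Thin UI layer: displays data and delegates all calculation.
    Only this small function needs manual testing."""
    total = calculate_invoice_total(line_amounts, tax_rate=0.10)
    print(f"Invoice total: {total:.2f}")
```

Keeping the visual layer this thin is what shrinks the manual-testing surface to a minimum.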
Certain testing scenarios require the database to be in a specific state. For example, a test case may require a specific database record in a specific table, with a specific status code or flag value for the test to pass.
You can facilitate this by writing some code at the beginning of the test harness to initialize the data into a state suitable for the test being performed.
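As a sketch of that initialization step, the following Python harness uses an in-memory SQLite table (the `orders` table, its status codes, and the close-order operation are all hypothetical). The harness first seeds the exact record its scenario requires, then exercises the code under test:

```python
import sqlite3

def test_close_order():
    """Harness that initializes the database state it needs
    before running the actual test."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
    # Initialization: this test requires an order in 'OPEN' status.
    conn.execute("INSERT INTO orders (id, status) VALUES (1, 'OPEN')")
    # Code under test (hypothetical stand-in): close the order.
    conn.execute("UPDATE orders SET status = 'CLOSED' WHERE id = 1")
    status = conn.execute(
        "SELECT status FROM orders WHERE id = 1"
    ).fetchone()[0]
    conn.close()
    if status == "CLOSED":
        return True, "Close order: passed"
    return False, f"Close order: expected CLOSED, got {status}"
```

Because the harness builds its own data, it can be run repeatedly and in any environment without relying on leftover records.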
In some cases, you may benefit from running related test cases in a certain sequence. Consider testing a workflow application with three functional areas: create a new workflow, advance a workflow, and end a workflow. Obviously, for the "end workflow" test to pass, a workflow must exist first. Running the "create new workflow" test case first builds the data required by the "end workflow" test.
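The workflow example can be sketched in Python as two harnesses run in dependency order (the in-memory store and status values are hypothetical simplifications of a real workflow application):

```python
# Hypothetical in-memory workflow store shared by the related harnesses.
workflows = {}

def test_create_workflow():
    """First in the sequence: creates the data later tests depend on."""
    workflows["WF1"] = "ACTIVE"
    ok = workflows.get("WF1") == "ACTIVE"
    return ok, "Create workflow: " + ("passed" if ok else "failed")

def test_end_workflow():
    """Depends on test_create_workflow having run first."""
    if "WF1" not in workflows:
        return False, "End workflow: no workflow exists to end"
    workflows["WF1"] = "ENDED"
    ok = workflows["WF1"] == "ENDED"
    return ok, "End workflow: " + ("passed" if ok else "failed")

# The test module runs related harnesses in dependency order.
for harness in (test_create_workflow, test_end_workflow):
    ok, message = harness()
    print(message)
```

The ordering is simply encoded in the sequence the test module calls the harnesses.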
Summary and Conclusion
Test Driven Development is a development approach that calls for writing automated test cases first, then writing the application code that causes the tests to pass. Test cases can be run repeatedly after implementing new features or modifying the system to confirm the integrity of all parts of the system.
This increases software quality, reduces maintenance costs and provides up-to-date technical documentation.
Implementing TDD is simple in LANSA, using functions or reusable parts that invoke the application code and verify the accuracy of the returned values.
Next time you start working on a LANSA project, give TDD a try and share your experience with us.