Wednesday 29 January 2014

E-STRATEGIES
Practical Road Map for Software Testing
-- Ranitha Ganguly
Testing is the process of checking and improving the quality of software. This article explains the testing process: how bugs are tracked and logged, the combination of testing techniques available, and the documents and formats required.
Testing is a critical quality assurance activity that requires ample time and careful planning. Its goal is to find and fix errors introduced during requirement analysis, design and coding; the primary goal is bug prevention and the secondary goal is bug discovery. In development, errors result from careless or improper communication and from the rush to get through the whole software development process. Testing is done to mitigate those errors.
The basic rationale behind software testing is to execute the software in a controlled manner and determine whether it performs to the customer's satisfaction. An understanding of the relative criticality of defects is important when planning tests, reporting status and recommending actions.
What are Bugs? What is Debugging?
Bugs are errors found during the execution of programs, introduced by logical or syntactical faults. Bugs can be of two types - software bugs and hardware bugs. Some bugs may be deferred for fixing in a subsequent release of the software. These bugs are called `Deferred Bugs'.
Debugging is the process of locating, understanding and removing the cause of a software failure. Debugging supports testing but cannot replace it. Even so, it is not possible to guarantee 100% error-free software.
Severity and Priority of Bugs
Severity indicates how serious the bug is. It reflects the adverse impact of bugs on products.
Critical Severity: System crash, data loss, and data corruption.
Major Severity: Operational errors, wrong results, and loss of functionality.
Minor Severity: Defect in user interface layout, and spelling mistakes.
Priority indicates how important it is to fix the bug and when it should be fixed.
Immediate Priority: The bug blocks further testing or is highly visible.
At the Earliest Priority: The bug must be fixed at the earliest, before the product is released.
Normal Priority: The bug should be fixed if time permits.
Later Priority: The bug may be fixed later; the product can be released as it is.
Example: Classification of bugs as per their severity and priority.
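The sketch below is one hypothetical way of recording such a classification in Python; the Bug record and the sample entries are made up purely for illustration, to show that severity and priority are assigned independently.

from dataclasses import dataclass
from enum import Enum

class Severity(Enum):          # how serious the bug is
    CRITICAL = "system crash, data loss, data corruption"
    MAJOR = "operational error, wrong result, loss of functionality"
    MINOR = "UI layout defect, spelling mistake"

class Priority(Enum):          # how soon the bug must be fixed
    IMMEDIATE = 1              # blocks further testing, very visible
    AT_THE_EARLIEST = 2        # must be fixed before release
    NORMAL = 3                 # fix if time permits
    LATER = 4                  # may be fixed; product can ship as is

@dataclass
class Bug:                     # hypothetical bug record for illustration
    summary: str
    severity: Severity
    priority: Priority

# A cosmetic bug can be low severity but high priority (e.g. a spelling
# mistake in the company name on the home page), and vice versa.
bugs = [
    Bug("Crash when saving a record", Severity.CRITICAL, Priority.IMMEDIATE),
    Bug("Company name misspelt on home page", Severity.MINOR, Priority.AT_THE_EARLIEST),
    Bug("Rarely used report misaligns columns", Severity.MAJOR, Priority.NORMAL),
]

for b in bugs:
    print(f"{b.summary}: severity={b.severity.name}, priority={b.priority.name}")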
Steps for Testing Software
There are various steps in testing software, including Smoke Testing, Ad hoc Testing, Regression Testing and Integration Testing.
In smoke testing, each time the software team gets the latest version (build) of the software, an initial test is done to check whether the build is stable enough. Testing the major functionalities helps verify that the software is stable, performs effectively and can be considered for further test efforts.
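A minimal smoke test sketch using Python's unittest is shown below; the App class is a hypothetical stand-in for the build under test, and only the major functionalities are exercised to decide whether the build should be accepted for further testing.

import unittest

class App:
    # Stand-in for the application under test; in practice these calls
    # would exercise the real build delivered by the development team.
    def start(self):
        return True
    def login(self, user, password):
        return user == "admin" and password == "secret"
    def save_record(self, record):
        return bool(record)

class SmokeTest(unittest.TestCase):
    """Exercise only the major functionalities to decide whether the
    build is stable enough to be accepted for further testing."""
    def setUp(self):
        self.app = App()
    def test_application_starts(self):
        self.assertTrue(self.app.start())
    def test_admin_can_log_in(self):
        self.assertTrue(self.app.login("admin", "secret"))
    def test_record_can_be_saved(self):
        self.assertTrue(self.app.save_record({"id": 1}))

if __name__ == "__main__":
    unittest.main()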
Ad hoc testing is creative, informal testing that is not based on formal test cases and therefore need not be documented by the testing team. Tests are random and rely on error-guessing ability and business knowledge.
Regression testing is done when the source code is changed or a new module is added. A set of predefined test cases is re-run to determine whether any other portion of the software has been affected.
Integration testing involves testing combined parts of an application, such as units, modules, interfaces, individual applications, clients and servers, to determine whether they function correctly together.
The Bug Tracking Life Cycle
The bug tracking life cycle helps in tracking bugs and has several phases.
Phase I: The tester finds bugs and enters them into a defect tracking tool such as Bugzilla, or logs them in MS Excel or MS Word.
Phase II: The project leader analyses the bugs, assigns priorities and passes them to the concerned developers.
Phase III: The developer fixes the bugs and changes their status to `Fixed', along with the details of the fix. A new version, with all the fixes, is released to the Quality Control team.
Phase IV: The Quality Control team or tester performs a regression test and checks the status of the fixed bugs.
Phase V: Verified bugs are closed; if a defect recurs, the issue is reopened and forwarded to the developers again.
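Viewed as a whole, the life cycle is a small state machine of bug statuses. The sketch below is a hypothetical Python model of the transitions described above; it does not reflect the workflow of any particular defect tracking tool.

# Hypothetical model of bug statuses and the transitions allowed between
# them in the life cycle described above.
ALLOWED_TRANSITIONS = {
    "NEW":      {"ASSIGNED"},            # Phase I/II: logged, then given to a developer
    "ASSIGNED": {"FIXED"},               # Phase III: developer fixes the bug
    "FIXED":    {"CLOSED", "REOPENED"},  # Phase IV/V: regression test passes or fails
    "REOPENED": {"ASSIGNED"},            # Phase V: sent back to the developer
    "CLOSED":   set(),
}

def change_status(current: str, new: str) -> str:
    """Return the new status if the transition is allowed, else raise."""
    if new not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Illegal transition {current} -> {new}")
    return new

status = "NEW"
for step in ["ASSIGNED", "FIXED", "REOPENED", "ASSIGNED", "FIXED", "CLOSED"]:
    status = change_status(status, step)
    print(status)

Real defect tracking tools typically add more statuses, but the principle of controlled transitions between them is the same.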
Defect Report Format
The defect report format has the following fields:
  • Defect found.
  • Defect type.
  • Classification (Critical/Major/Minor).
  • Status (new/removed).
  • Defect removal time.
  • Stage at which the defect was injected and stage at which it was removed.
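The same fields can be captured as a structured record. The sketch below is a hypothetical Python representation of one defect report entry; the field names and the sample values are illustrative only.

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class DefectReport:
    # Fields taken from the defect report format above; names are illustrative.
    defect_found: str                 # short description of the defect
    defect_type: str                  # e.g. "functional", "UI", "performance"
    classification: str               # "Critical", "Major" or "Minor"
    status: str                       # "new" or "removed"
    injected_in_stage: str            # stage at which the defect was injected
    removed_in_stage: Optional[str]   # stage at which it was removed, if removed
    removal_time_hours: Optional[float] = None  # defect removal time
    reported_on: date = field(default_factory=date.today)

report = DefectReport(
    defect_found="Total column shows wrong sum on invoice page",
    defect_type="functional",
    classification="Major",
    status="new",
    injected_in_stage="coding",
    removed_in_stage=None,
)
print(report)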
Test Plan Format
Details of the testing, such as the resources required, the testing approaches and methodologies, and the test cases to be designed, are prepared during project planning. The format has the following information (a sketch of such a plan as a simple data structure follows the list):
  • Project Name.
  • Estimated effort in person-months or person-hours.
  • Estimated start date and end date, actual start date and end date.
  • Test set-up including hardware and software environment and peripherals required, any special tool or equipment required, test personnel and their respective responsibility.
  • Types of testing to be carried out, including functional testing, structural testing, alpha testing, beta testing, gorilla testing, usability testing, performance testing, stress testing, etc.
  • For each testing technique, the test cases, test schedule and testing tools have to be specified.
  • The defect reporting format has to be specified.
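The sketch below captures such a test plan as a plain Python dictionary; the field names mirror the format listed above and the values are purely illustrative.

# Hypothetical test plan captured as a plain Python dictionary.
test_plan = {
    "project_name": "Online Library System",
    "estimated_effort_person_hours": 320,
    "estimated_start_date": "2014-02-03",
    "estimated_end_date": "2014-03-14",
    "actual_start_date": None,
    "actual_end_date": None,
    "test_setup": {
        "hardware": ["2 test PCs", "1 staging server"],
        "software": ["Windows 2000", "Linux", "MySQL"],
        "special_tools": ["Bugzilla"],
        "personnel": {"Tester A": "functional tests", "Tester B": "performance tests"},
    },
    "testing_types": ["functional", "structural", "alpha", "beta",
                      "usability", "performance", "stress"],
    "per_technique": {
        "functional": {"test_cases": "TC-001..TC-040",
                       "schedule": "weeks 1-3", "tools": ["unittest"]},
    },
    "defect_report_format": "see Defect Report Format section",
}

print(test_plan["project_name"], "-", len(test_plan["testing_types"]), "testing types planned")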
Testability
Testability is a measure of how easily a computer program can be tested. It depends on attributes such as operability, observability, controllability, simplicity and understandability. In the system development life cycle, requirements are translated into specifications, from which the code is developed. Once construction is over, the product goes through various stages of testing before the final release.
Traceability Matrix
There are several matrices for measuring testability. Requirement tracing is the process of documenting the links between the user requirements for the system we are building and the work products that implement them. This helps in requirements management, change management and defect management. A complete list of the items to be tested is prepared using the traceability matrix before any test cases are written.
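A traceability matrix can be kept as simple data. The sketch below uses a hypothetical Python mapping from requirement identifiers to test case identifiers, which makes uncovered requirements easy to spot before any test case is written.

# Hypothetical traceability matrix: each requirement is linked to the test
# cases that cover it, so untested requirements can be spotted early.
traceability = {
    "REQ-001 User can search the catalogue": ["TC-001", "TC-002"],
    "REQ-002 User can borrow a book":        ["TC-003"],
    "REQ-003 Overdue books attract a fine":  [],
}

for requirement, test_cases in traceability.items():
    if not test_cases:
        print("NOT COVERED:", requirement)
    else:
        print(requirement, "->", ", ".join(test_cases))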
Some Important Testing Techniques
Web Specific Testing
Due to more complex user interfaces, technical issues and compatibility combinations, the testing effort required for a Web application is considerably larger than that for an application without a Web interface. Web testing includes the tests already defined for applications without a Web interface, plus several Web-specific tests such as compatibility testing. Compatibility testing determines whether an application performs as expected with various combinations of hardware and software. This includes testing different browsers, such as Netscape Navigator and Internet Explorer in their various releases; different operating systems, such as Windows 95, Windows 98, Windows 2000, Windows NT, Unix and Linux; and different monitor settings, such as color resolution, font settings and display settings.
Compatibility testing can also include testing of different hardware configurations, such as PCs, laptops and other handheld devices; different Internet connections, such as proxy servers, firewalls and modems; and desktop items such as display, sound and video.
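One common way to organise compatibility testing is to run the same functional checks over every browser and operating system combination in scope. The sketch below does this with unittest's subTest; the browser and operating system lists, and the page_renders_correctly function, are hypothetical stand-ins for the real checks.

import itertools
import unittest

BROWSERS = ["Netscape Navigator 4.7", "Internet Explorer 5.5", "Internet Explorer 6.0"]
OPERATING_SYSTEMS = ["Windows 98", "Windows 2000", "Linux"]

def page_renders_correctly(browser: str, os_name: str) -> bool:
    # Stand-in for the real check, which would drive the browser on the
    # given platform and inspect the rendered page.
    return True

class CompatibilityTest(unittest.TestCase):
    def test_all_supported_combinations(self):
        # Run the same check for every browser/OS pair in scope.
        for browser, os_name in itertools.product(BROWSERS, OPERATING_SYSTEMS):
            with self.subTest(browser=browser, os=os_name):
                self.assertTrue(page_renders_correctly(browser, os_name))

if __name__ == "__main__":
    unittest.main()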
Security Testing
Security testing determines the ability of an application to resist unauthorized system entry or modification. It verifies whether proper protection mechanisms are built into the system, and is accomplished through the following.
  • Audit: Ensures that all products installed at the site are secure when checked against known vulnerabilities.
  • Attacking: Attacking the server through vulnerabilities in the network infrastructure.
  • Hacking: Hacking directly through the website and the HTML code.
  • Cookies Attack: Finding patterns in cookies and attempting to deduce the algorithms behind them.
Functional Testing for Web Applications
This includes all the types of tests listed under general testing, in addition to the following:
Link Testing: Verifies that a link takes the user to the expected destination without failure, that the link is not broken and that all parts of the site are connected. Links to be tested include embedded links (links embedded in the content, indicating that more material is available), structural links (links to a set of pages subordinate to the current page, along with reference links) and associative links (additional links that may be of interest to users).
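A minimal link-testing sketch using only Python's standard library is shown below: it fetches one page, extracts the anchor targets and reports any link that does not respond. The starting URL is a placeholder.

import urllib.request
import urllib.parse
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url: str):
    """Fetch page_url, resolve each link against it and report broken ones."""
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    collector = LinkCollector()
    collector.feed(html)
    for link in collector.links:
        target = urllib.parse.urljoin(page_url, link)
        try:
            with urllib.request.urlopen(target, timeout=10) as response:
                print("OK    ", response.status, target)
        except Exception as error:          # broken or unreachable link
            print("BROKEN", target, "-", error)

if __name__ == "__main__":
    check_links("https://example.com/")     # placeholder starting page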
Static Testing
It includes verification or review of documentation, technology and code. It is a testing technique that does not execute the code. Static testing is performed at the beginning of the development life cycle to detect and remove defects early in the development and test cycle, thus preventing defects from spreading to later phases of development. It also improves communication within the project development team. There are basically two stages of static testing - inspection and walkthrough.
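Tool-assisted static checks follow the same idea of examining code without executing it. The sketch below uses Python's standard ast module to parse an illustrative source fragment and flag bare 'except:' clauses, a simple review rule chosen only for demonstration.

import ast

# Illustrative source fragment to be reviewed without running it.
SOURCE = """
def read_config(path):
    try:
        return open(path).read()
    except:               # swallows every error, including typos
        return None
"""

# Parse the source (no execution) and walk the syntax tree.
tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.ExceptHandler) and node.type is None:
        print(f"line {node.lineno}: bare 'except:' clause found during static review")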
Dynamic Testing
At the end of development, dynamic testing validates whether the client's requirements are satisfied. It is testing that executes the code, and the bulk of the testing effort is based on it.
Concurrency Testing
This technique is followed when multiple clients share the same server; it ensures that the server can handle simultaneous requests from those clients.
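A minimal concurrency-test sketch is shown below: a thread pool fires simultaneous requests at one shared component, and the final count confirms that no request was lost. The CounterServer class is an in-process stand-in for the real server, so the example stays self-contained.

import threading
from concurrent.futures import ThreadPoolExecutor

class CounterServer:
    """Stand-in for the shared server; a lock keeps increments consistent
    when many clients call it at the same time."""
    def __init__(self):
        self._lock = threading.Lock()
        self.count = 0
    def handle_request(self):
        with self._lock:
            self.count += 1

server = CounterServer()
CLIENTS, REQUESTS_PER_CLIENT = 20, 100

def client():
    for _ in range(REQUESTS_PER_CLIENT):
        server.handle_request()

with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
    for _ in range(CLIENTS):
        pool.submit(client)

# All requests must be accounted for; a lost update would indicate the
# server cannot handle simultaneous clients correctly.
assert server.count == CLIENTS * REQUESTS_PER_CLIENT
print("handled", server.count, "simultaneous requests correctly")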
Parallel Testing
This is a testing technique in which the same data is used to run both the old and the new system, and the results are compared.
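A minimal parallel-testing sketch: the same input data is fed to an old and a new implementation of the same calculation (both hypothetical here) and the outputs are compared.

def old_discount(amount: float) -> float:
    """Hypothetical legacy implementation being replaced."""
    if amount > 1000:
        return amount * 0.10
    return 0.0

def new_discount(amount: float) -> float:
    """Hypothetical new implementation under test."""
    return amount * 0.10 if amount > 1000 else 0.0

# Run both systems on the same data and compare the results.
test_data = [0, 999.99, 1000, 1000.01, 5000]
for amount in test_data:
    old_result, new_result = old_discount(amount), new_discount(amount)
    status = "MATCH" if old_result == new_result else "MISMATCH"
    print(f"{status}: input={amount}, old={old_result}, new={new_result}")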
Client Server Testing
Application function tests, server tests, database tests and network communication tests are commonly conducted on client-server architectures at three different levels. At the first level, individual client applications are tested in `disconnected mode', where the operation of the server and the underlying network is not considered. At the second level, the client software and the server applications are tested together, though network operations are still not tested. At the third level, the entire client-server architecture, including the network, is tested.
Load Testing
It verifies that running a large number of concurrent clients does not break the connection with the server or the client software. The load is steadily increased until the system fails. This helps discover deadlocks and problems with queries.
Stress Testing
Stress testing subjects a system to an unreasonable amount of work load while denying it resources such as RAM and disk space. It checks the system for robustness, to determine whether it can continue to operate in adverse situations where resources are demanded in abnormal frequency, quantity or volume.
Gray Box Testing
It is partly white box and partly black box testing, as it relates to the code as well as to the specification.
Testing Life Cycle
The stages are as follows:
  • Corresponding to the test strategy, a test strategy review is done.
  • Corresponding to the test plan and specification, a review of the test plan and specification is done.
  • After test execution, a quality review is done.
  • After defect reporting and closure, a defect reporting review is performed.
  • On receiving the test results, a test result review is done.
  • Corresponding to the final inspection and review, a final inspection and quality review is done.
  • Finally, the product is delivered.
Test Cases Design
Test cases are systematically designed to uncover different classes of errors with a minimum amount of time and effort. Test cases are devised as sets of data, with the intent of determining whether the system processes them correctly.
A test case is defined as "a set of input parameters for which the software will be tested". Test cases are selected, the program is executed and the results are compared with the expected results. Test cases have to be designed based on two criteria, reliability and validity: a set of test cases is considered reliable if it detects all the errors, and valid if at least one test case reveals an error. Test cases can be developed using the Black Box and White Box testing techniques.
The test case design format is given below; a filled-in example follows the list:
  • Test Case No./Test No.
  • Pre-condition/Pre-requisition
  • Input Action/Test Action
  • Expected Results
  • Actual Results
  • Comments/Remark
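Below is a hypothetical filled-in test case in this format, expressed as a Python dictionary so that the actual result and remarks can be recorded during execution.

# Hypothetical test case written in the format above.
test_case = {
    "test_case_no": "TC-007",
    "pre_condition": "User is registered and the login page is displayed",
    "input_action": "Enter a valid user name with a wrong password and press Login",
    "expected_result": "An 'invalid credentials' message is shown; the user stays on the login page",
    "actual_result": None,      # filled in during execution
    "comments": "",
}

def record_result(case: dict, actual: str) -> dict:
    """Record the observed behaviour so it can be compared with the expectation."""
    case["actual_result"] = actual
    case["comments"] = "PASS" if actual == case["expected_result"] else "FAIL"
    return case

print(record_result(test_case,
                    "An 'invalid credentials' message is shown; the user stays on the login page"))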
Black Box Testing / Functional Testing / Behavioral Testing
It ensures that the functional requirements and specifications of the client are fulfilled. Test cases are generated to evaluate the correctness of the system. It is "testing by looking at the requirements to develop test cases". It simulates actual system users and makes no assumptions about the system's structure. However, it can miss logical errors and involves the possibility of redundant testing.
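A minimal black box sketch is shown below: the test cases are chosen from the stated requirement alone (here, the standard leap-year rule), without looking at how the function is written.

import unittest

def is_leap_year(year: int) -> bool:
    # Treated purely as a black box by the tests below.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearBlackBoxTest(unittest.TestCase):
    """Inputs chosen from the requirement alone, one per rule in the specification."""
    def test_divisible_by_4_is_leap(self):
        self.assertTrue(is_leap_year(2012))
    def test_century_not_divisible_by_400_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))
    def test_century_divisible_by_400_is_leap(self):
        self.assertTrue(is_leap_year(2000))
    def test_ordinary_non_leap_year(self):
        self.assertFalse(is_leap_year(2013))

if __name__ == "__main__":
    unittest.main()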
White Box Testing / Structural Testing / Glass Box Testing
It ensures that the technical and housekeeping functions of the system work. Test cases are designed to verify that the system structure is sound and can perform the intended tasks. It is `testing by looking at the program skeleton'.
We can test the structural logic of the software and its individual statements. However, white box testing does not ensure that user requirements are fulfilled, and such tests may not correspond to real-life usage.
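For contrast, a minimal white box sketch: the tests below are chosen by reading the code of a hypothetical grade function so that every branch is executed at least once.

import unittest

def grade(score: int) -> str:
    # Illustrative function whose structure drives the test selection.
    if score >= 80:        # branch 1
        return "A"
    elif score >= 50:      # branch 2
        return "B"
    else:                  # branch 3
        return "C"

class GradeWhiteBoxTest(unittest.TestCase):
    """One test per branch in the code, giving full branch coverage."""
    def test_branch_high(self):
        self.assertEqual(grade(85), "A")
    def test_branch_middle(self):
        self.assertEqual(grade(60), "B")
    def test_branch_low(self):
        self.assertEqual(grade(30), "C")

if __name__ == "__main__":
    unittest.main()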
Types of Testing Documents
There are generally three types of test documents:
  • Plan documents (project, quality and SCM plans),
  • Input/Output documents (Requirement Specification, High Level Design, Low Level Design), and
  • Deliverables (user manuals, installation and customization guides).
Testing As a Validation Strategy
Validation checks whether we are building the right product, while verification checks whether we are building the product right. The validation strategy includes unit testing, integration testing, system testing, performance testing, alpha testing, user acceptance testing, installation testing and beta testing. Generally, verification finds 20% of the total bugs, while validation testing finds 80%, in line with the `Pareto' principle.
When Does Testing Get Completed?
The criteria for determining completion of testing are as follows:
  • We run out of time or money.
  • Based on statistical criteria, the bug rate falls below a certain level.
  • The test budget is depleted or the deadline for releasing the software is reached.
  • A certain percentage of the test cases has been completed.
  • Code coverage or functional coverage reaches a specified point.
  • The alpha-testing or beta-testing period ends.
Current Trends in Software Testing
V - Model, An Effective Test Methodology in Industries:
Phase I: In the requirement gathering phase, the user acceptance test plan is prepared.
Phase II: The functional specification is prepared and the system test plan is designed.
Phase III: The design phase has a corresponding integration test plan.
Phase IV: In the program specification stage, the high-level and low-level designs are made, and the code and the unit test report are generated.
All test plans are thus prepared up front, and the corresponding phase-wise tests are carried out until project implementation.
Conclusion
Good testing techniques uncover the maximum number of errors remaining in the software. The success of testing depends on selecting an appropriate combination of testing techniques, developing test conditions and test evaluation criteria, creating the test scripts required to exercise those conditions, managing fixes and re-testing.
New bugs are found every day, along with new testing techniques, placing testing practitioners and researchers before ever steeper and tougher challenges.
About the Author
Ranitha Ganguly is a Faculty Member at INC. The author can be reached at Ranita-g80@hotmail.com.
