Wednesday, August 14, 2019

Introduction to Software Testing


1. Introduction:
Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software's reliability and quality. But testing cannot show the absence of defects; it can only show that defects are present.

2. What is Testing?

• Testing is the process of executing a program/system with the intent of finding errors (a minimal example appears after this list).
• Testing is an activity that must be performed during the software development cycle, prior to release into production.
• Testing is often described as demonstrating that defects are not present, although, as noted in the introduction, testing can never prove their absence.
• Testing is the process of showing that a program/system performs all intended functions correctly before being released into production.
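
To make the first definition concrete, here is a minimal sketch using Python's standard unittest module. The function under test, parse_age, is invented purely for illustration; note that the second test deliberately probes for a failure mode rather than confirming success.

import unittest

def parse_age(text):
    """Code under test: convert a string to an age in years."""
    value = int(text)
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

class ParseAgeTests(unittest.TestCase):
    def test_valid_age(self):
        self.assertEqual(parse_age("42"), 42)

    def test_negative_age_is_rejected(self):
        # Executing the code with the intent of finding an error,
        # not of proving that it works.
        with self.assertRaises(ValueError):
            parse_age("-1")

if __name__ == "__main__":
    unittest.main()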

3. Why testing?

The development of software systems involves a series of production activities in which the opportunities for human error are enormous. Errors may occur at the very inception of the process, where the requirements may be erroneously or imperfectly specified. Because humans cannot perform or communicate with perfection, software development must be accompanied by a quality assurance activity.


4. When do we do testing?

Testing activities can start as soon as the Software Requirements Specification (SRS) has been prepared: test planning can begin at that point and progress alongside the SDLC through the design and coding phases by developing test designs and test cases. As soon as coding is completed, the focus can shift to test execution.
Involving testing early in the SDLC in this way helps meet deadlines without compromising the testing activities. A sketch of requirements-driven test cases, written before any code exists, follows.
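
The sketch below shows test cases derived from the SRS during the design phase, before any implementation exists, represented as plain data so they can later drive test execution. The requirement IDs (SRS-12, SRS-13) and the login rules are invented for illustration.

test_cases = [
    {
        "id": "TC-001",
        "requirement": "SRS-12: lock the account after 3 failed logins",
        "steps": ["log in with a wrong password three times"],
        "expected": "account is locked and a lockout message is shown",
    },
    {
        "id": "TC-002",
        "requirement": "SRS-13: accept passwords of 8 to 64 characters",
        "steps": ["submit a 7-character password",
                  "submit an 8-character password"],
        "expected": "7 characters rejected; 8 characters accepted",
    },
]

# Traceability check: every test case names the requirement it covers.
for tc in test_cases:
    print(tc["id"], "covers", tc["requirement"].split(":")[0])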

5. When to Stop Testing?

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. "When to stop testing" is one of the most difficult questions for a test engineer. Common factors in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines)
• Test cases completed with a certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• The rate at which new bugs are found drops below a threshold
• The beta or alpha testing period ends
• The risk in the project is within an acceptable limit
Practically, the decision to stop testing is based on the level of risk acceptable to management. Since testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with a given amount of testing completed. That risk can be measured by formal risk analysis, but for a short-duration, low-budget, low-resource project it can be estimated simply by:
  • measuring test coverage;
  • counting the number of test cycles;
  • counting the number of open high-priority bugs.
A minimal sketch of such an exit check appears after this list.
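
The sketch below combines those three measures into a simple stop/continue decision. The thresholds (95% coverage, three test cycles, zero open high-priority bugs) are illustrative assumptions, not industry standards; a real project would set them during risk analysis.

def ready_to_stop(coverage_pct, cycles_run, open_high_priority_bugs,
                  min_coverage=95.0, min_cycles=3):
    """Return True when all of the simple exit criteria are met."""
    return (coverage_pct >= min_coverage
            and cycles_run >= min_cycles
            and open_high_priority_bugs == 0)

# Example: 96% coverage and three cycles, but one high-priority bug open.
print(ready_to_stop(96.0, 3, 1))  # False -- the open bug blocks release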
