Testing is the activity of checking whether a software product is defect-free and of verifying an application (or software) against the expected (client's) requirements. Watts S. Humphrey, a pioneer in software engineering, is regarded as the "Father of Software Quality", and the American computer scientist Gerald M. Weinberg was among the first to test software written for human spaceflight, in 1958.
Manual testing is performed to detect errors in an application (or software) without using scripts or automation tools. The tester plays the role of an end user, exercising most of the application's features to ensure correct behaviour. Manual testing also helps the tester build the test cases needed for automation at later stages.
Regardless of how large or small a company, application, or project is, manual testing forms the first step in the testing process. It is also a crucial complement to other testing strategies, since it gives the tester hands-on experience from the end-user perspective, such as with GUI attributes.
Black-box Testing
This kind of testing examines an AUT (Application Under Test) solely against its specified requirements and completely ignores its internal structure. Its major concern is validating inputs and outputs rather than the internal code and pathways. For this reason, it is also known as Closed-box Testing (since the tester is unaware of the internal logic) and Specification-Based Testing.
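A minimal sketch of the idea: the hypothetical `validate_age()` function below stands in for the AUT, and the test cases exercise only its specification (inputs and outputs), never its internals.

```python
# Black-box sketch: test a hypothetical validate_age() purely against its
# specification (accept whole-number ages 0-120), ignoring how it works inside.
def validate_age(age):
    """Spec: return True for integer ages 0-120, False otherwise."""
    return isinstance(age, int) and 0 <= age <= 120

# Specification-based cases: boundaries and invalid inputs only.
assert validate_age(0) is True       # lower boundary
assert validate_age(120) is True     # upper boundary
assert validate_age(-1) is False     # below range
assert validate_age(121) is False    # above range
assert validate_age("30") is False   # wrong type
```

Note that the assertions would stay exactly the same if the internal implementation were rewritten, which is the defining trait of black-box testing.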
Types of Black-box Testing
Functional Testing
Falling under the Black-box testing concept, functional testing is executed to confirm the expected behaviour of the application's functionality. Again, the initial (client's) requirements and specifications form the input to functional testing. It is concerned with the results of the processing, not with how the processing is carried out.
Non-functional Testing
As the name suggests, it examines the non-functional aspects of an application, such as performance, scalability, speed, resilience, and reliability. In contrast to functional testing, it focuses on how the processing is done rather than on what the process does.
White-box Testing
White-box testing, also called Clear-box Testing, Structural Testing, or Glass-box Testing, is performed to assess the internal structure and workings of the software. Unlike in Black-box testing, knowledge of the internal logic and of programming is crucial here, allowing the tester (usually the developer) to check which unit breaks down.
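A small sketch of the white-box approach: because the tester knows the internal branches of this hypothetical `discount()` function, one test case can be designed per code path (branch coverage).

```python
# White-box sketch: knowing the internal branches of a hypothetical
# discount() function, design one test per code path.
def discount(total, is_member):
    if total >= 100 and is_member:   # path 1: member bulk discount
        return total * 0.8
    elif total >= 100:               # path 2: non-member bulk discount
        return total * 0.9
    return total                     # path 3: no discount

assert abs(discount(200, True) - 160.0) < 1e-9    # exercises path 1
assert abs(discount(200, False) - 180.0) < 1e-9   # exercises path 2
assert discount(50, True) == 50                   # exercises path 3
```

Unlike the black-box example, these cases were chosen by reading the code: if a branch were added or removed, the test design would have to change with it.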
Grey-box Testing
In a single sentence, Grey-box testing is a combination of Black-box and White-box testing. As in White-box testing, it requires knowledge of the application's internal code to design the test cases, but the tests are run from the user level (the end-user perspective), as in Black-box testing. One difference is that Grey-box testing requires only partial programming knowledge, rather than the complete knowledge demanded by White-box testing.
Regression Testing
This testing aims to ensure that older code still works correctly once newer code has been introduced. In simple words, it confirms that a code change has no adverse effects on the existing programs or features of an application.
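As a sketch, suppose a hypothetical `slugify()` helper is changed to also strip punctuation: the pre-existing test cases are re-run unchanged to confirm the old behaviour survives, alongside one new case covering the change.

```python
# Regression sketch: after extending a hypothetical slugify() to strip
# punctuation, re-run the old tests to confirm existing behaviour still holds.
import re

def slugify(title):
    title = re.sub(r"[^\w\s-]", "", title)        # NEW change: drop punctuation
    return re.sub(r"\s+", "-", title.strip().lower())

# Pre-existing tests (must still pass after the change):
assert slugify("Hello World") == "hello-world"
assert slugify("  Trim Me  ") == "trim-me"

# New test covering the change itself:
assert slugify("Hello, World!") == "hello-world"
```

The value of the suite is the first two assertions: they existed before the change and catch any side effect it introduces.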
User Acceptance Testing
User Acceptance Testing (UAT), generally done by the user or client, determines whether the software/application can be accepted or not. For this reason, it is performed after all the necessary tests, such as functional, non-functional, and regression tests, have been carried out. It is also known as End-User Testing or Beta Testing.
Ad-hoc Testing
This is an informal way of testing software, generally with the aim of breaking it. The testing is done with no prior planning, documentation, or test cases, since its primary intention is to find errors on the go, much like an error-guessing approach. Owing to this informal trait, Ad-hoc testing is also known as Random Testing.
Compatibility Testing
This kind of testing examines the compatibility of the application. It verifies whether the application works across different versions, browsers, operating systems, applications, networks, mobile devices, hardware, and software. Compatibility testing is further divided into Backward Compatibility Testing (which checks the application against its older versions) and Forward Compatibility Testing (which checks the application against its newer versions).
Performance Testing
Performance testing inspects whether the software performs well under increased workload, stress, and other conditions. Here, 'performance' is a broad term covering factors such as speed, scalability, stability, and reliability.
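A crude sketch of the speed aspect, assuming a simulated in-memory workload (real performance suites use dedicated tools such as JMeter or Locust rather than hand-rolled timers):

```python
# Performance sketch: time repeated lookups against a simulated large
# workload and assert they stay under a chosen budget.
import time

data = {i: i * i for i in range(1_000_000)}   # simulated large dataset

start = time.perf_counter()
for key in range(0, 1_000_000, 1000):         # 1,000 lookups
    _ = data[key]
elapsed = time.perf_counter() - start

# The 1-second budget is an illustrative threshold, not a real SLA.
assert elapsed < 1.0, f"lookups too slow: {elapsed:.3f}s"
```

Scaling the dataset or the number of lookups is how such a check probes the workload dimension of performance.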
Accessibility Testing
A subset of Usability Testing, it is performed to ensure that the software/application works well for everyone, including people with disabilities. There are also accessibility laws to abide by, and its major concern is examining the usability and accessibility of the AUT.
Smoke Testing
Smoke testing is performed to determine whether the application is stable enough to proceed with further testing. It is usually done before any functional or regression testing and acts as a minimal set of tests that saves the QA team's time and resources.
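A minimal sketch: the hypothetical create/read functions below stand in for an application's critical path, and the smoke test only confirms that this path works at all before deeper suites run.

```python
# Smoke-test sketch: verify only the critical path of a hypothetical
# user store (create, then read back) before running deeper test suites.
def create_user(db, name):
    db[name] = {"name": name}
    return db[name]

def get_user(db, name):
    return db.get(name)

db = {}
user = create_user(db, "alice")       # the app accepts input at all
assert get_user(db, "alice") == user  # the core read path responds
```

If this tiny check fails, the build is rejected immediately and no functional or regression effort is spent on it.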
Sanity Testing
A subset of regression testing, Sanity testing is carried out to verify that the changes that have been made (whether to code or functionality) work as expected. It also confirms that the new changes do not affect other dependent functionalities. Hence, it can be viewed as a narrow regression test performed on a few specific areas.
Recovery Testing
This type of testing checks how quickly an application recovers from a system crash or other catastrophic failure. It examines the application's recovery under a forced failure of hardware or software, and it also ensures that the application's normal functionality works well after recovering from such a failure.
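The forced-failure idea can be sketched with a hypothetical key-value store: the test deliberately wipes its in-memory state (the simulated crash) and then verifies that recovery restores normal behaviour.

```python
# Recovery-test sketch: force a failure of a hypothetical key-value store,
# then verify it restores its state from a persisted backup.
class Store:
    def __init__(self):
        self.data, self.backup = {}, {}

    def put(self, key, value):
        self.data[key] = value
        self.backup[key] = value          # simulated persisted copy

    def crash(self):
        self.data = {}                    # in-memory state is lost

    def recover(self):
        self.data = dict(self.backup)     # restore from the backup

store = Store()
store.put("user", "alice")
store.crash()                             # forced failure
store.recover()
assert store.data["user"] == "alice"      # normal functionality restored
```

A real recovery test would kill processes or unplug resources rather than call a `crash()` method, but the assertion pattern, namely fail, recover, then re-verify behaviour, is the same.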