Testing FAQ

Q. What is Testing?

A. Testing is a process to check the correctness, completeness and quality of the software/product.

Q. What is defect?
A. A defect is a variation between the expected results (requirement, specification) and the actual results (software/product output).

Q. What is a Bug?
A. If testers find any mismatch in the application/system during the testing phase, they call it a Bug.

Q. What is an Error?
A. An error is a coding mistake that prevents a program from compiling or running. If a developer is unable to successfully compile or run a program, the cause is called an error.

Q. What is a Failure?
A. Once the product is deployed, any issue found by a customer or end user is called a failure. After release, if an end user finds an issue, that particular issue is called a failure.

Q. What is the main benefit of designing tests early in the life cycle?
A. It helps prevent defects from being introduced into the code.

Q. What is the difference between Testing Techniques and Testing Tools?
A. A Testing Technique is a process for ensuring that some aspect of the application system or unit functions properly; there may be few techniques but many tools. A Testing Tool is a vehicle for performing a test process: the tool is a resource to the tester, but by itself it is insufficient to conduct testing.

Q. What is component testing?
A. Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are separately testable. Component testing may be done in isolation from the rest of the system depending on the context of the development life cycle and the system. Most often stubs and drivers are used to replace the missing software and simulate the interface between the software components in a simple manner. A stub is called from the software component to be tested; a driver calls a component to be tested.

Q. What is functional system testing?
A. Testing the end-to-end functionality of the system as a whole is defined as functional system testing.

Q. What is random/monkey testing? When it is used?
A. Random testing is often known as monkey testing. In this type of testing, data is generated randomly, often using a tool or automated mechanism; the system is tested with this randomly generated input and the results are analysed accordingly. This kind of testing is less reliable, so it is normally used by beginners, or to see whether the system will hold up under adverse input.
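
The idea can be sketched in Python: feed randomly generated strings to an input validator and see what happens. Both `random_string` and `validate_age` are illustrative names of our own, not a real tool's API.

```python
import random
import string

def random_string(max_len=20):
    """Generate a random string of printable characters with random length."""
    length = random.randint(0, max_len)
    return "".join(random.choice(string.printable) for _ in range(length))

def validate_age(text):
    """Hypothetical system under test: parse an age field, 18-56 allowed."""
    value = int(text)          # raises ValueError on non-numeric input
    return 18 <= value <= 56

random.seed(0)                 # fixed seed so the monkey run is reproducible
crashes = 0
for _ in range(1000):
    try:
        validate_age(random_string())
    except ValueError:
        crashes += 1           # rejection of junk input; a fuzz tool would log these

print(f"{crashes} of 1000 random inputs were rejected as non-numeric")
```

A real monkey-testing tool would also watch for hangs, crashes and unexpected exception types, not just count rejections.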

Q. What is Ad-hoc testing?
A. Ad-hoc testing is a commonly used term for software testing performed without planning and documentation. The tests are intended to be run only once, unless a defect is discovered. Ad-hoc testing is the least formal test method.

Q. What are Boundary Value Analysis and Equivalence Class Partitioning Techniques?
A. Boundary value analysis and equivalence partitioning both are test case design strategies in black box testing.

Equivalence Class Partitioning:
In equivalence class partitioning, inputs to the software or system are divided into groups (partitions) that are expected to exhibit similar behavior, so they are likely to be processed in the same way. One input is then selected from each group to design the test cases.
Every condition in a particular partition behaves the same as the others: if one condition in a partition is valid, the other conditions are valid too; if one condition in a partition is invalid, the other conditions are invalid too.
This reduces the total number of test cases from a practically infinite set to a finite one, while the selected test cases still cover each class of behavior.

Example 1: Assume, we have to test a field which accepts Age 18 – 56

Valid Input: 18 – 56
Invalid Input: less than or equal to 17 (<=17), greater than or equal to 57 (>=57)

Valid Class: 18 – 56 = Pick any one input test data from 18 – 56

Invalid Class 1: <=17 = Pick any one input test data less than or equal to 17

Invalid Class 2: >=57 = Pick any one input test data greater than or equal to 57

We have one valid and two invalid conditions here.
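
As a sketch, the three equivalence classes for this age field can be checked with one representative value each. The `accepts_age` function is a hypothetical implementation of the field's 18-56 rule.

```python
def accepts_age(age):
    """Hypothetical field under test: accepts ages 18-56 inclusive."""
    return 18 <= age <= 56

# One representative value per equivalence class is enough:
partitions = {
    "invalid_low":  (10, False),   # any value <= 17
    "valid":        (30, True),    # any value in 18-56
    "invalid_high": (70, False),   # any value >= 57
}

for name, (value, expected) in partitions.items():
    assert accepts_age(value) == expected, name
print("all three equivalence classes behave as expected")
```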

Example 2: Assume we have to test a field which accepts a Mobile Number of ten digits.

Valid input: 10 digits

Invalid Input: 9 digits, 11 digits

Valid Class: Enter 10 digit mobile number = 9876543210

Invalid Class 1: Enter a mobile number which has less than 10 digits = 987654321

Invalid Class 2: Enter a mobile number which has more than 10 digits = 98765432100

Boundary Value Analysis
Boundary value analysis (BVA) is based on testing the boundary values of valid and invalid partitions. The behavior at the edge of each equivalence partition is more likely to be incorrect than the behavior within the partition, so boundaries are an area where testing is likely to yield defects.
Every partition has its maximum and minimum values, and these maximum and minimum values are the boundary values of the partition.
A boundary value for a valid partition is a valid boundary value. Similarly, a boundary value for an invalid partition is an invalid boundary value.
Tests can be designed to cover both valid and invalid boundary values. When designing test cases, a test for each boundary value is chosen.
For each boundary, we also test the values +/-1 (in the least significant digit) on either side of it.

Example 1: Assume, we have to test a field which accepts Age 18 – 56

Minimum boundary value is 18

Maximum boundary value is 56

Valid Inputs: 18,19,55,56

Invalid Inputs: 17 and 57

Test case 1: Enter the value 17 (18-1) = Invalid

Test case 2: Enter the value 18 = Valid

Test case 3: Enter the value 19 (18+1) = Valid

Test case 4: Enter the value 55 (56-1) = Valid

Test case 5: Enter the value 56 = Valid

Test case 6: Enter the value 57 (56+1) = Invalid
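
The six boundary test cases above can be written as a small Python check, again using a hypothetical `accepts_age` implementation of the 18-56 rule.

```python
def accepts_age(age):
    """Hypothetical field under test: accepts ages 18-56 inclusive."""
    return 18 <= age <= 56

MIN, MAX = 18, 56
# BVA test values: each boundary itself plus the value one unit on either side.
cases = {
    MIN - 1: False,  # 17 -> just below the minimum, invalid
    MIN:     True,   # 18
    MIN + 1: True,   # 19
    MAX - 1: True,   # 55
    MAX:     True,   # 56
    MAX + 1: False,  # 57 -> just above the maximum, invalid
}

for value, expected in cases.items():
    assert accepts_age(value) == expected, value
print("all six boundary cases pass")
```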

Example 2: Assume we have to test a text field (Name) which accepts the length between 6-12 characters.

Minimum boundary value is 6

Maximum boundary value is 12

Valid text length is 6, 7, 11, 12

Invalid text length is 5, 13

Test case 1: Text length of 5 (min-1) = Invalid

Test case 2: Text length of exactly 6 (min) = Valid

Test case 3: Text length of 7 (min+1) = Valid

Test case 4: Text length of 11 (max-1) = Valid

Test case 5: Text length of exactly 12 (max) = Valid

Test case 6: Text length of 13 (max+1) = Invalid

Q. What are Quality Assurance and Quality Control?
A. Quality Assurance involves process-oriented activities. It ensures the prevention of defects in the processes used to build the software application, so that defects do not arise while the application is being developed. It is also called Verification.
Quality Control involves product-oriented activities. It executes the program or code to identify defects in the software application. It is also called Validation.

Q. What is Verification in software testing?
A. Verification is the process of evaluating work products (documents, SRS, BRS) to check whether they conform to the requirement specification. It is done by the QA team. It ensures that we are building the product right, i.e. we verify the requirements we have and check that we are developing the product accordingly. Activities involved are Inspections, Reviews and Walk-throughs.

Q. What is Validation in software testing?
A. Validation is the process of evaluating the end product (software/product) to check whether the requirement specifications are fulfilled. It is done by the QC team. It ensures that we are building the right product, i.e. we validate that the product we have developed is the right one. The main activity involved is testing the software application.

Q. What is White Box Testing?
A. White Box Testing is also called Glass Box, Clear Box or Structural Testing. It is based on the application's internal code structure. In white-box testing, an internal perspective of the system, as well as programming skills, is used to design test cases. This testing is usually done at the unit level.

Q. What is Black Box Testing?
A. Black Box Testing is a software testing method in which testers evaluate the functionality of the software under test without looking at the internal code structure. It can be applied at every level of software testing, such as Unit, Integration, System and Acceptance Testing.

Q. What is Grey Box Testing?
A. Grey box is the combination of both White Box and Black Box Testing. The tester who works on this type of testing needs to have access to design documents. This helps to create better test cases in this process.

Q. What is Positive and Negative Testing?
A. Positive Testing determines what the system is supposed to do; it checks whether the application meets the requirements. Negative Testing determines what the system is not supposed to do; it helps find defects in the software.

Q. What is Test Strategy?
A. Test Strategy is a high-level (static) document, usually developed by the project manager. It captures the approach on how we go about testing the product and achieving the goals. It is normally derived from the Business Requirement Specification (BRS). Documents like the Test Plan are prepared with this document as a base.

Q. What is Test Plan and contents available in a Test Plan?
A. A Test Plan is a document which contains the plan for all the testing activities to be done to deliver a quality product. It is derived from the Product Description, SRS, or Use Case documents and covers all future testing activities of the project. It is usually prepared by the Test Lead or Test Manager. Contents of a Test Plan include:

  • Test Plan Identifier
  • References
  • Introduction
  • Test Items (Functions)
  • Software Risk Issues
  • Features To Be Tested
  • Features Not To Be Tested
  • Approach
  • Items Pass/Fail Criteria
  • Suspension Criteria And Resolution Requirements
  • Test Deliverables
  • Remaining Test Tasks
  • Environmental Needs
  • Staff And Training Needs
  • Responsibility
  • Schedule
  • Plan Risks And Contingencies
  • Approvals
  • Glossaries

Q. What is Test Suite?
A. A Test Suite is a collection of test cases that are intended to be used to test an application.

Q. What is Test Scenario?
A. Test Scenario gives the idea of what we have to test. Test Scenario is like a high-level test case.

Q. What is Test Case?
A. Test cases are the set of positive and negative executable steps of a test scenario which has a set of pre-conditions, test data, expected result, post-conditions and actual results.
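
As an illustration, those fields can be modelled as a simple record. All field names here are our own, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Illustrative record holding the usual test-case fields."""
    case_id: str
    precondition: str
    steps: list
    test_data: dict
    expected_result: str
    actual_result: str = ""      # filled in during execution
    postcondition: str = ""

tc = TestCase(
    case_id="TC_LOGIN_001",
    precondition="User account exists and is active",
    steps=["Open login page", "Enter credentials", "Click Submit"],
    test_data={"user_id": "alice", "password": "secret"},
    expected_result="User lands on the home page",
)
tc.actual_result = "User lands on the home page"   # recorded after the run
print("PASS" if tc.actual_result == tc.expected_result else "FAIL")
```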

Q. What is Test Bed?
A. An environment configured for testing. Test bed consists of hardware, software, network configuration, an application under test, other related software.

Q. What is Test Environment? Explain with example.
A. Test Environment is the combination of hardware and software on which the Test Team performs testing. For example:


  • Application Type: Web Application
  • OS: Windows
  • Web Server: IIS
  • Web Page Design: Dot Net
  • Client Side Validation: JavaScript
  • Server Side Scripting: ASP Dot Net
  • Database: MS SQL Server
  • Browser: IE/FireFox/Chrome

Q. What is Test Data?
A. Test data is the data that is used by the testers to run the test cases. Whilst running the test cases, testers need to enter some input data. To do so, testers prepare test data. It can be prepared manually and also by using tools.
For example, to test a basic login functionality having user id and password fields, we need to enter some data in those fields; so we need to prepare some test data.

Q. What is Test Harness?
A. A test harness is the collection of software and test data configured to test a program unit by running it under varying conditions and comparing the actual output with the expected output.
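
A minimal harness sketch: a table of inputs and expected outputs is driven through a trivial unit under test (`add` is a stand-in), and actual results are compared against expected ones.

```python
def add(a, b):
    """Hypothetical unit under test."""
    return a + b

# Test data: (inputs, expected output) pairs.
test_data = [((2, 3), 5), ((-1, 1), 0), ((0, 0), 0)]

# The harness runs the unit against every row and records the outcome.
results = []
for args, expected in test_data:
    actual = add(*args)
    results.append((args, expected, actual, actual == expected))

passed = sum(1 for *_, ok in results if ok)
print(f"{passed}/{len(results)} harness checks passed")
```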

Q. What is Test Closure? List out Test Deliverables?
A. Test Closure is the note prepared before the test team formally completes the testing process. This note contains the total number of test cases, the number of test cases executed, the number of defects found, fixed, not fixed, rejected, etc. Test Deliverables include:

  • Test Strategy
  • Test Plan
  • Effort Estimation Report
  • Test Scenarios
  • Test Cases/Scripts
  • Test Data
  • Requirement Traceability Matrix (RTM)
  • Defect Report/Bug Report
  • Test Execution Report
  • Graphs And Metrics
  • Test Summary Report
  • Test Incident Report
  • Test Closure Report
  • Release Note
  • Installation/Configuration Guide
  • User Guide
  • Test Status Report
  • Weekly Status Report (Project Manager To Client)

Q. What is Unit Testing?
A. Unit Testing is also called Module Testing or Component Testing. It is done to check whether an individual unit or module of the source code works properly. It is done by developers in the developer's environment.
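
A small sketch using Python's built-in unittest module, with an illustrative unit under test:

```python
import unittest

def is_leap_year(year):
    """Illustrative unit under test: Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestIsLeapYear(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_multiple_of_400_is_leap(self):
        self.assertTrue(is_leap_year(2000))

# Run the tests in-process (argv is pinned so unittest ignores CLI arguments).
unittest.main(argv=["unit"], exit=False, verbosity=0)
```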

Q. What is Integration Testing?
A. Integration Testing is the process of testing the interface between two software units. It is done in three ways: the Big Bang Approach, the Top-Down Approach and the Bottom-Up Approach.

Q. What is Top-Down Approach?
A. If the upper-layer modules are ready but the lower-layer modules are not, we replace the lower-layer modules with stubs for testing purposes. A stub is a dummy program that is called by the component to be tested. Testing takes place from top to bottom: high-level modules are tested first, then low-level modules, and finally the low-level modules are integrated with the high level to ensure the system works as intended. Stubs are used as temporary modules when a module is not ready for integration testing.
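
A stub can be sketched like this: the high-level `order_total` is the component under test, and `price_stub` stands in for a lower-level pricing module that is not ready yet (both names are illustrative).

```python
# High-level component under test: computes an order total using a
# lower-level pricing function that is not implemented yet.
def order_total(items, get_price):
    return sum(get_price(item) * qty for item, qty in items)

# Stub: stands in for the unfinished pricing module and is CALLED BY the
# component under test. It returns canned values, with no real logic.
def price_stub(item):
    return {"pen": 2, "book": 10}.get(item, 0)

total = order_total([("pen", 3), ("book", 1)], get_price=price_stub)
print(total)   # 2*3 + 10*1 = 16
```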

Q. What is Bottom-Up Approach?
A. If the upper-layer modules are not ready but the lower-layer modules are, we replace the upper-layer modules with dummy code, called a driver, for testing purposes. A driver is a calling program that invokes the component to be tested. It is the reverse of the Top-Down Approach: testing takes place from bottom to top. The lowest-level modules are tested first, then high-level modules, and finally the high-level modules are integrated down to the low level to ensure the system works as intended. Drivers are used as temporary modules for integration testing.
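
Conversely, a driver sketch: `apply_discount` is the ready low-level component, and `checkout_driver` is throwaway code standing in for the unfinished upper layer that calls it (both names are illustrative).

```python
# Low-level component that is ready and under test.
def apply_discount(amount, percent):
    return round(amount * (1 - percent / 100), 2)

# Driver: temporary code standing in for the unfinished upper layer;
# it CALLS the component under test with representative inputs.
def checkout_driver():
    results = []
    for amount, percent in [(100.0, 10), (59.99, 0), (20.0, 50)]:
        results.append(apply_discount(amount, percent))
    return results

print(checkout_driver())   # [90.0, 59.99, 10.0]
```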

Q. What is Big Bang Approach?
A. Combining all the modules at once and verifying the functionality after completion of individual module testing. (In top-down and bottom-up integration, by contrast, dummy modules known as stubs and drivers stand in for missing components to simulate data communication between modules.)

Q. What is System Testing?
A. Testing the fully integrated application to evaluate the system's compliance with its specified requirements is called System Testing or End-to-End Testing. It verifies the completed system to ensure that the application works as intended.

Q. What is Functional Testing?
A. In simple words, functional testing checks what the system actually does: it verifies that each function of the software application behaves as specified in the requirement document. All functionalities are tested by providing appropriate input and verifying whether the actual output matches the expected output. It falls within the scope of black box testing, and the testers need not concern themselves with the source code of the application. Taking the example of a fan: that it provides air is functional, i.e. what a software product should do is functional.

Q. What is Non-Functional Testing?
A. In simple words, non-functional testing checks how well the system performs. It covers aspects of the software such as performance, load, stress, scalability, security, compatibility, etc. The main focus is to improve the user experience, e.g. how fast the system responds to a request. Taking the example of a fan: how fast its wings rotate is non-functional, i.e. how well a software product performs is non-functional.

Q. What is Acceptance Testing?
A. It is also known as pre-production testing. It is formal testing done by the end users along with the testers to validate that the application is developed as per the requirements. It allows the customer to accept or reject the application. Types of acceptance testing are Alpha, Beta and Gamma.

Q. What is Alpha Testing?
A. Alpha testing is done by the in-house developers (who developed the software) and testers. Sometimes alpha testing is done by the client or outsourcing team with the presence of developers or testers.

Q. What is Beta Testing?
A. Beta testing is done by a limited number of end users before delivery. Usually, it is done in the client place.

Q. What is Gamma Testing?
A. Gamma testing is done when the software is ready for release with specified requirements. It is done at the client place. It is done directly by skipping all the in-house testing activities.

Q. What is Smoke Testing?
A. Smoke Testing is done to make sure the build received from the development team is testable. It is also called the "Day 0" check and is done at the build level. It avoids wasting testing time on the whole application when the key features don't work or the key bugs have not been fixed yet.

Q. What is Sanity Testing?
A. Sanity Testing is done during the release phase to check the main functionalities of the application without going deeper. It is also called a subset of Regression Testing and is done at the release level. When release time constraints don't allow rigorous regression testing of the build, sanity testing covers that part by checking the main functionalities.

Q. What is Retesting?
A. Basically, re-execution of failed test cases is called retesting. We do retesting on failed test cases to ensure that defects found and reported in an earlier build are fixed in the current build. Say Build 1.0 was released and the test team found and reported some defects (Defect IDs 1.0.1 and 1.0.2). When Build 1.1 is released, testing defects 1.0.1 and 1.0.2 in this build is retesting.

Q. What is Regression Testing?
A. Regression testing checks that the changed part of the code/software does not affect the unchanged part. We do regression testing on passed test cases. It is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes, whether in the software being tested or in other related or unrelated software components.

Usually, we do regression testing in the following cases:
  • New functionalities are added to the application
  • Change Requirement (In organizations, we call it as CR)
  • Defect Fixing
  • Performance Issue Fix
  • Environment change (e.g. Updating the DB from MySQL to Oracle)

Q. What is difference between update and upgrade?
A. An upgrade is the act of replacing your product with a newer, often superior, version or similar product, whereas an update is a patch made available after the product has been released, often to solve problems or glitches. In other words, an update modifies your current product while an upgrade totally replaces it; upgrades are distinct products and do not need the older software to function.

There could be many updates for a certain product but only a few upgrades.
Updates are often free while an upgrade usually costs money.
Updates are often necessary while upgrades are not.

Q. What is GUI Testing?
A. Graphical User Interface Testing is to test the interface between the application and the end user.

Q. What is Recovery Testing?
A. Recovery testing is performed to determine how quickly the system can recover after a system crash or hardware failure. It is a type of non-functional testing.

Q. What is Globalization/Internationalization Testing (I18N Testing)?
A. Globalization is a process of designing a software application so that it can be adapted to various languages and regions without any changes.

Q. What is Localization Testing (L10N Testing)?
A. Localization is a process of adapting globalization software for a specific region or language by adding local specific components.

Q. What is Installation Testing?
A. It is to check whether the application is successfully installed and it is working as expected after installation.

Q. What is Formal Testing?
A. It is a process where the testers test the application by having pre-planned procedures and proper documentation.

Q. What is Risk Based Testing?
A. Risk-based testing identifies the modules or functionalities which are most likely to cause failures, and then tests those functionalities first.

Q. What is Compatibility Testing?
A. It is to deploy and check whether the application is working as expected in a different combination of environmental components.

Q. What is Exploratory Testing?
A. Usually, this process will be carried out by domain experts. They perform testing just by exploring the functionalities of the application without having the knowledge of the requirements.

Q. What is Usability Testing?
A. Usability testing verifies whether the application is user-friendly and can be comfortably used by an end user. The main focus is to check whether the end user can understand and operate the application easily. An application should be self-explanatory and must not require training to operate.

Q. What is Security Testing?
A. Security testing is a process to determine whether the system protects data and maintains functionality as intended.

Q. What is Soak Testing?
A. Running a system at high load for a prolonged period of time to identify the performance problems is called Soak Testing.

Q. What is Performance Testing?
A. This type of testing determines or validates the speed, scalability, and/or stability characteristics of the system or application under test. Performance is concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the project or product.

Q. What is Load Testing?
A. It is to verify that the system/application can handle the expected number of transactions and to verify the system/application behavior under both normal and peak load conditions.

Q. What is Volume Testing?
A. It is to verify that the system/application can handle a large amount of data.

Q. What is Stress Testing?
A. It is to verify the behavior of the system once the load increases more than its design expectations.

Q. What is Scalability Testing?
A. Scalability testing is a type of non-functional testing. It is to determine how the application under test scales with increasing workload.

Q. What is Concurrency Testing?
A. Concurrency testing means accessing the application at the same time by multiple users to ensure the stability of the system. This is mainly used to identify deadlock issues.
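
A minimal concurrency check in Python: several threads update a shared counter at the same time, and the final value shows whether concurrent access was handled safely (the lock is what makes the expected total hold).

```python
import threading

class Counter:
    """Shared resource under test; the lock keeps concurrent updates safe."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self.value += 1

counter = Counter()

def user_session(n):
    # Each simulated user performs n increments against the shared resource.
    for _ in range(n):
        counter.increment()

# Simulate 10 users hitting the shared resource concurrently.
threads = [threading.Thread(target=user_session, args=(1000,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter.value)   # 10 * 1000 = 10000 when locking works correctly
```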

Q. What is Fuzz Testing?
A. Fuzz testing is used to identify coding errors and security loopholes in an application by inputting a massive amount of random data to the system in an attempt to make it crash, to identify whether anything breaks in the application.
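
A toy fuzzer sketch: random strings are thrown at a hypothetical parser to see which inputs make it raise. Both `parse_csv_line` and the fuzzing alphabet are our own illustrations, not a real fuzzing tool's API.

```python
import random

def parse_csv_line(line):
    """Hypothetical parser under test: splits 'name,age' and converts age."""
    name, age = line.split(",")       # raises unless exactly one comma
    return name.strip(), int(age)     # raises on non-numeric ages

random.seed(42)                       # reproducible fuzz run
alphabet = "abc,0123 \t"
failures = 0
for _ in range(500):
    line = "".join(random.choice(alphabet)
                   for _ in range(random.randint(0, 15)))
    try:
        parse_csv_line(line)
    except ValueError:
        failures += 1                 # each failure is a candidate robustness bug

print(f"{failures} of 500 fuzzed inputs made the parser raise")
```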

Q. What is Interface Testing?
A. Interface testing is performed to evaluate whether two intended modules pass data and communicate correctly to one another.

Q. What is Reliability Testing?
A. Perform testing on the application continuously for a long period of time in order to verify the stability of the application.

Q. What is Bucket or A/B or Split Testing?
A. Bucket testing is a method to compare two versions of an application against each other to determine which one performs better.

Q. What are the principles of Software Testing?

  • Testing shows presence of defects
  • Exhaustive testing is not possible
  • Early testing
  • Defect clustering
  • Pesticide Paradox
  • Testing is context depending
  • Absence of error fallacy

Q. What is Exhaustive Testing?
A. Testing all the functionalities using all valid and invalid inputs and preconditions is known as Exhaustive testing.

Q. What is Early Testing?
A. Defects detected in early phases of SDLC are less expensive to fix. So conducting early testing reduces the cost of fixing defects.

Q. What is Defect clustering?
A. Defect clustering in software testing means that a small module or functionality contains most of the bugs or it has the most operational failures.

Q. What is Pesticide Paradox?
A. The Pesticide Paradox in software testing says that if the same test cases are repeated again and again, they will eventually stop finding new bugs. To overcome this, it is necessary to review the test cases regularly and add or update them to find more defects.

Q. What is Walk Through?
A. A walk-through is an informal meeting conducted to learn, gain understanding, and find defects. The author leads the meeting and clarifies the queries raised by peers during it.

Q. What is Inspection?
A. Inspection is a formal meeting led by a trained moderator, never by the author. The document under inspection is prepared and checked thoroughly by the reviewers before the meeting. In the inspection meeting, the defects found are logged and shared with the author for appropriate action. After the inspection, a formal follow-up process is used to ensure timely corrective action.

Q. Who are all involved in an inspection meeting?
A. Author, Moderator, Reviewer(s), Scribe/Recorder and Manager.

Q. What is Bug Severity?
A. Bug/defect severity can be defined as the impact of the bug on the customer's business. It can be Critical, Major or Minor. In simple words, it describes how big an effect a particular defect has on the system.

Q. What is Bug Priority?
A. Defect priority can be defined as how soon the defect should be fixed. It gives the order in which a defect should be resolved. Developers decide which defect they should take up next based on the priority. It can be High, Medium or Low. Most of the times the priority status is set based on the customer requirement.

Q. Tell some examples of Bug Severity and Bug Priority?

High Priority & High Severity: Submit button is not working on a login page and customers are unable to login to the application

Low Priority & High Severity: After 100 times clicking on submit button in same session application get crashed

High Priority & Low Severity: Spelling mistake of a company name on the homepage

Low Priority & Low Severity: FAQ page takes a long time to load

Q. What is the difference between a Standalone application, Client-Server application and Web application?
A. Standalone application: Standalone applications follow one-tier architecture. Presentation, Business, and Database layer are in one system for a single user.
Client-Server Application: Client-server applications follow two-tier architecture. Presentation and Business layer are in a client system and Database layer on another server. It works majorly in Intranet.
Web Application: Web server applications follow three-tier or n-tier architecture. The presentation layer is in a client system, a Business layer is in an application server and Database layer is in a Database server. It works both in Intranet and Internet.

Q. What is Bug Life Cycle?
A. Bug life cycle is also known as Defect life cycle. In the software development process, a bug has a life cycle, and it should go through that life cycle to be closed. The bug life cycle varies depending on the tools used (QC, JIRA, etc.) and the process followed in the organization.

Q. What is Bug Leakage?
A. A bug that was missed by the testing team during testing, in a build that was then released to Production, and which is later found by the end user or customer, is called Bug Leakage.

Q. What is Bug Release?
A. Releasing the software to Production with known bugs is called a Bug Release. These known bugs should be included in the release note.

Q. What is Defect Age?
A. Defect age can be defined as the time interval between date of defect detection and date of defect closure.
Defect Age = Date of defect closure – Date of defect detection
Assume a tester found a bug and reported it on 1 Jan 2016 and it was successfully fixed on 5 Jan 2016. Applying the formula, the defect age is 4 days.
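
The formula can be checked with Python's datetime: closure minus detection for the 1 Jan to 5 Jan example yields 4 days.

```python
from datetime import date

def defect_age(detected, closed):
    """Defect Age = date of defect closure - date of defect detection (days)."""
    return (closed - detected).days

age = defect_age(date(2016, 1, 1), date(2016, 1, 5))
print(age)   # 4
```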

Q. What is Error Seeding?
A. Error seeding is the process of intentionally adding known errors to a program to measure the rate of error detection. It helps estimate the testers' skill at finding bugs, and also shows how well the application behaves when it contains errors.

Q. What is Showstopper Defect?
A. A showstopper defect is a defect which won’t allow a user to move further in the application. It’s almost like a crash.
Assume that login button is not working. Even though you have a valid username and valid password, you could not move further because the login button is not functioning.

Q. What is HotFix?
A. A hotfix is a fix for a bug that needs to be handled as high priority and fixed immediately.

Q. What is Decision Table testing?
A. A Decision Table is also known as a Cause-Effect Table. This test technique is appropriate for functionalities which have logical relationships between inputs (if-else logic). In the decision table technique, we deal with combinations of inputs: to identify the test cases, we consider conditions and actions, taking conditions as inputs and actions as outputs.
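
A sketch of a decision table for a login form: the two conditions (valid username, valid password) are the inputs, the action is the output, and each row of the table becomes one test case. The `login_action` logic is our own illustration.

```python
# Decision table: each (condition combination) -> expected action pair
# becomes one test case.
decision_table = [
    # (valid_username, valid_password) -> expected action
    ((True,  True),  "show home page"),
    ((True,  False), "show error message"),
    ((False, True),  "show error message"),
    ((False, False), "show error message"),
]

def login_action(valid_username, valid_password):
    """Hypothetical logic under test."""
    if valid_username and valid_password:
        return "show home page"
    return "show error message"

for (u, p), expected in decision_table:
    assert login_action(u, p) == expected, (u, p)
print("all 4 decision-table combinations pass")
```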

Q. What is State Transition?
A. Using state transition testing, we pick test cases from an application where we need to test different system transitions. We can apply this when an application gives a different output for the same input, depending on what has happened in the earlier state.
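
A sketch of state-dependent behavior: a PIN-entry machine where the same input (a wrong PIN) produces a different output depending on how many failures have already occurred. The three-strikes rule is our own example.

```python
class PinEntry:
    """Same input ('wrong PIN') gives different outputs depending on state:
    two retries are allowed, and the third failure locks the card."""
    def __init__(self):
        self.attempts = 0
        self.locked = False

    def enter_pin(self, correct):
        if self.locked:
            return "card locked"
        if correct:
            self.attempts = 0
            return "access granted"
        self.attempts += 1
        if self.attempts >= 3:
            self.locked = True
            return "card locked"
        return "try again"

machine = PinEntry()
print(machine.enter_pin(False))  # try again   (state: 1 failed attempt)
print(machine.enter_pin(False))  # try again   (state: 2 failed attempts)
print(machine.enter_pin(False))  # card locked (state: locked)
print(machine.enter_pin(True))   # card locked (correct PIN no longer helps)
```

State transition test cases would cover each transition above, including the locked state, where previously valid input no longer changes the output.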

Q. What is an entry criteria?
A. Entry criteria are the prerequisites that must be met before commencing the testing process. They determine when a given test activity should start, including the beginning of a level of testing and when test design or test execution is ready to start.

Verify if the Test environment is available and ready for use.
Verify if test tools installed in the environment are ready for use.
Verify if Testable code is available.
Verify if Test Data is available and validated for correctness.

Q. What is an exit criteria?
A. Exit criteria are used to determine whether a given test activity has been completed or not. They can be defined for all test activities, right from planning and specification through execution.
Exit criteria should be part of the test plan and decided during the planning stage.

Verify if All tests planned have been run.
Verify if the level of requirement coverage has been met.
Verify if there are NO Critical or high severity defects that are left outstanding.
Verify if all high risk areas are completely tested.
Verify if software development activities are completed within the projected cost.
Verify if software development activities are completed within the projected timelines.

Q. What is SDLC? What are the different available models of SDLC?
A. Software Development Life Cycle (SDLC) aims to produce a high-quality system that meets or exceeds customer expectations, works effectively and efficiently in the current and planned information technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.

  • Waterfall Model
  • Spiral Model
  • V-Model
  • Prototype
  • Agile Methodology

Phases in SDLC:

  • Requirement Gathering Analysis
  • Planning
  • System Design
  • Implementation
  • Testing
  • Deployment
  • Maintenance

Q. What is STLC?
A. STLC (Software Testing Life Cycle) identifies what test activities to carry out and when to accomplish them. Even though testing differs between organizations, there is a common testing life cycle.

Phases in STLC:

  • Requirement Analysis
  • Test Planning
  • Test Design
  • Test Implementation
  • Test Case Execution
  • Defect Reporting
  • Test Closure

Q. What is RTM?
A. The Requirements Traceability Matrix (RTM) is used to trace requirements to the tests needed to verify that the requirements are fulfilled. It is also known as the Traceability Matrix or Cross Reference Matrix.

Q. What is Test Metrics?
A. Software test metrics are used to monitor and control the testing process and the product. They help drive the project towards the planned goals without deviation. Metrics answer different questions; it's important to decide what questions you want answered.

Q. When to stop testing? (Or) How do you decide when you have tested enough?
A. There are many factors involved in the real-time projects to decide when to stop testing.

  • Testing deadlines or release deadlines
  • By reaching the decided pass percentage of test cases
  • The risk in the project is under acceptable limit
  • All the high priority bugs, blockers are fixed