Assignment1 phurwa

1. What is Software Testing?
Software testing is a crucial process in developing and maintaining software systems. It involves
evaluating a software application or system to ensure that it meets the specified requirements,
functions as intended, and is free from defects or errors. The primary goal of software testing is
to identify any discrepancies between the actual behavior of the software and its expected
behavior.
2. How did it originate?
Software testing has its roots in the 1940s and 1950s when computers became more complex.
Over time, formal methodologies and quality assurance practices were established. The adoption
of industry standards and the rise of agile development further shaped the discipline. Today,
software testing involves verifying and validating software to ensure it meets requirements and
functions as intended. Advanced technologies like test automation and AI have enhanced testing
practices. The goal remains the same: to improve software quality, reliability, and performance
while reducing the risk of failures.
3. What are the Typical Objectives of Testing?
- Identifying Defects: The primary objective is to uncover defects or errors in the software. By executing various tests, testers aim to find discrepancies between the expected and actual behavior of the software.
- Validating Requirements: Testing ensures that the software meets the specified requirements and functions as intended. It verifies that the software meets user needs and aligns with business goals.
- Improving Quality: Testing aims to improve the quality, reliability, and performance of software systems. By identifying and fixing defects early in the development cycle, the overall quality of the software can be enhanced.
- Ensuring Functionality: Testing verifies that the software performs the intended functions accurately and efficiently. It ensures that all features and functionalities work as expected.
- Enhancing Usability: Usability testing evaluates the software's user interface and user experience. The objective is to identify any usability issues, such as poor navigation, confusing layouts, or inconsistent behavior.
- Ensuring Compatibility: Testing checks the compatibility of the software with different platforms, operating systems, browsers, and hardware configurations. It ensures that the software functions correctly across various environments.
- Assessing Performance: Performance testing measures the software's responsiveness, scalability, and resource usage under different load conditions. It aims to identify performance bottlenecks, optimize system resources, and ensure efficient operation.
- Ensuring Security: Security testing focuses on identifying vulnerabilities and weaknesses in the software that could be exploited by attackers. It aims to protect the software from unauthorized access, data breaches, and other security risks.
- Complying with Standards: Testing ensures that the software adheres to industry standards, regulations, and guidelines. It verifies compliance with legal, security, accessibility, and other relevant standards.
- Building User Confidence: By thoroughly testing the software and fixing issues, the objective is to build user confidence in its reliability, stability, and functionality. Testing helps to reduce the risk of failures, customer dissatisfaction, and reputational damage.
Overall, the main objectives of software testing are to deliver high-quality software that meets user requirements, functions as expected, and provides a positive user experience.
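As a toy illustration of the "expected versus actual behaviour" idea, the sketch below uses Python; the add function and its deliberately wrong implementation are invented purely for this example.

```python
# Toy illustration (assumed example): a test exposes the gap between
# expected and actual behaviour caused by a deliberate defect.
def add(a, b):
    return a - b  # deliberate defect for the sake of the example; should be a + b

def test_add_two_and_three():
    expected = 5
    actual = add(2, 3)
    # This assertion fails (actual is -1), which is precisely how testing
    # reveals a discrepancy between expected and actual behaviour.
    assert actual == expected, f"expected {expected}, got {actual}"
```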
4. What are the requirements for testing?
The requirements of testing can vary depending on the specific software project and its context.
However, some common requirements of testing include:
- Clear and Well-Defined Requirements: Testing requires a clear understanding of the software's functional and non-functional requirements. Detailed and unambiguous requirements serve as the basis for designing test cases and determining the expected behavior of the software.
- Testable Software: The software being tested should be in a testable state, meaning it should be sufficiently developed and stable enough to undergo testing. It should have the necessary functionalities implemented to execute test cases effectively.
- Test Environment: A suitable test environment is needed to conduct testing. This includes hardware, operating systems, databases, networks, and other infrastructure components required to execute the tests. The test environment should closely mimic the production environment to ensure accurate results.
- Test Data: Relevant and realistic test data is necessary to execute test cases effectively. Test data should cover various scenarios, including both valid and invalid inputs, edge cases, and boundary conditions. It should be representative of real-world scenarios to uncover potential defects (a small sketch follows this answer).
- Test Documentation: Proper documentation of test plans, test cases, and test procedures is crucial. Documentation helps testers understand the testing scope, objectives, and specific requirements. It also facilitates communication and knowledge sharing among team members.
- Testing Tools and Frameworks: Testing often requires the use of specialized tools and frameworks to automate testing activities, manage test cases, and generate reports. Testing tools can include test management tools, automation tools, performance testing tools, and defect tracking systems.
- Skilled Testers: Skilled testers with domain knowledge and expertise in testing techniques are essential. Testers should have a good understanding of the software requirements and testing methodologies, and the ability to design effective test cases. They should also possess analytical skills to identify and diagnose defects.
- Time and Resources: Sufficient time and resources must be allocated for testing activities. Adequate time allows for comprehensive testing, including functional testing, performance testing, security testing, and regression testing. Sufficient resources, such as hardware, software, and skilled personnel, are necessary to carry out testing effectively.
- Defect Management Process: A well-defined defect management process is crucial to track, prioritize, and resolve defects identified during testing. It includes defect reporting, tracking, analysis, and resolution, ensuring that identified issues are properly addressed.
- Compliance and Standards: Depending on the software domain, there may be specific compliance requirements and industry standards that need to be considered during testing. These requirements ensure that the software meets legal, regulatory, and industry-specific guidelines.
By fulfilling these requirements, testing can be carried out effectively, leading to improved
software quality and reduced risks.
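To make the "Test Data" point above concrete, here is a minimal sketch of data-driven test cases covering valid inputs, boundary values, and invalid inputs. It assumes Python with pytest; the is_valid_age function and the 0 to 120 range are hypothetical.

```python
# Minimal sketch: one parametrised test fed by representative test data
# (valid values, boundary values, and invalid values). The is_valid_age
# function and the 0-120 range are assumptions made for this example.
import pytest

def is_valid_age(age):
    return isinstance(age, int) and 0 <= age <= 120

@pytest.mark.parametrize(
    "age, expected",
    [
        (25, True),      # typical valid input
        (0, True),       # lower boundary
        (120, True),     # upper boundary
        (-1, False),     # just outside the lower boundary
        (121, False),    # just outside the upper boundary
        ("abc", False),  # invalid type
    ],
)
def test_is_valid_age(age, expected):
    assert is_valid_age(age) == expected
```

Each (age, expected) pair is one piece of test data; covering a new scenario only requires adding another row.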
5. What are functional and non-functional requirements?
Functional requirements define the specific functions and features that a software system must
perform, such as user authentication or data storage. They describe the intended behavior and
capabilities of the software in terms of inputs, outputs, and interactions. On the other hand, non-functional requirements define the quality attributes and characteristics of the software system,
such as performance, security, or usability. These requirements focus on how the system should
perform rather than what it should do. Both types of requirements are crucial for developing a
software system that not only meets functional expectations but also fulfills the desired quality
attributes and user experience.
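A minimal sketch of the distinction, assuming Python with pytest; the login function and the 200 ms response-time budget are hypothetical examples, not requirements taken from this document.

```python
# Hypothetical illustration: a functional check verifies WHAT the system does,
# while a non-functional (performance) check verifies HOW WELL it does it.
import time

def login(username, password):
    # Stand-in for a real authentication function (assumption for this sketch).
    return username == "admin" and password == "secret123"

def test_functional_login_succeeds_with_valid_credentials():
    # Functional requirement: valid credentials must authenticate the user.
    assert login("admin", "secret123") is True

def test_non_functional_login_responds_within_200ms():
    # Non-functional (performance) requirement: the call must finish quickly.
    # The 200 ms budget is an assumed example threshold, not a standard.
    start = time.perf_counter()
    login("admin", "secret123")
    elapsed = time.perf_counter() - start
    assert elapsed < 0.2
```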
6. List the advantages and disadvantages of testing.
Advantages
- Eliminates software bugs, defects, and failures.
- Increases the efficiency of the software.
- Reduces the number of times the software needs to be repaired.
- Improves the quality of the software.
- The testing team can assist the software development team by detecting the mistakes they have made.
- Helps to optimize the code and get rid of unwanted lines of code.
- Prevents future problems, crashes, and complaints from the end users.
- Increases customer satisfaction and improves user experience.
- Saves test execution time when automation testing is used.
- Test cases and test scenarios written for an application may be reusable for other systems.
Disadvantages
- Most types of testing are time-consuming because tests must be executed repeatedly; failing tests have to be re-run until all the issues are fixed.
- Lack of experienced software and QA testers who are aware of testing techniques.
- The software testing team requires many members.
- Increases the cost of the software and the budget.
- Expands the scope and increases the duration of the software development life cycle (SDLC).
- Many test management systems are expensive.
7. Mention the characteristics of testing.
- High probability of detecting errors: To detect the maximum number of errors, the tester should understand the software thoroughly and look for the possible ways in which it can fail. For example, in a program to divide two numbers, one way the program can fail is when 2 and 0 are given as inputs and 2 is to be divided by 0. In this case, a set of tests should be developed that can demonstrate an error in the division operation (a small code sketch of this example follows the list).
- No redundancy: Resources and testing time are limited in the software development process. Thus, it is not beneficial to develop several tests that have the same intended purpose; every test should have a distinct purpose.
- Choose the most appropriate test: Different tests may have the same intent, but due to limitations such as time and resource constraints, only a few of them are used. In such a case, the tests that are likely to find more errors should be chosen.
- Moderate: A test is considered good if it is neither too simple nor too complex. Many tests can be combined into one test case; however, this can increase the complexity and leave many errors undetected. Hence, all tests should be performed separately.
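The division example in the first point could be expressed as the following sketch; Python with pytest is assumed, and the divide function's behaviour (raising ZeroDivisionError) is an assumption made for illustration.

```python
# Illustrative sketch of a test with a high probability of detecting an error:
# it deliberately targets the input combination most likely to fail (divisor 0).
import pytest

def divide(a, b):
    # Simple function under test (assumed behaviour: raises on division by zero).
    return a / b

def test_divide_normal_case():
    assert divide(10, 2) == 5

def test_divide_by_zero_raises():
    # The "likely to fail" input: 2 divided by 0 must raise, not crash silently.
    with pytest.raises(ZeroDivisionError):
        divide(2, 0)
```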
8. Difference between Testing and Debugging.
1. Testing is the process of identifying bugs in the system under test, whereas debugging is the process of identifying the root cause of those bugs.
2. Testing is executed by a group of test engineers (and sometimes by the developers), whereas debugging is done by the developer or programmer.
3. Testing starts once the coding phase is done, whereas debugging starts after a test case has been executed and a defect has been found.
4. Most of the test cases in testing can be automated, whereas debugging cannot be automated.
9. What is Test case?
A test case is a document, which has a set of test data, preconditions, expected results and
postconditions, developed for a particular test scenario in order to verify compliance against a
specific requirement.
A test case acts as the starting point for test execution; after a set of input values is applied, the application produces a definitive outcome and leaves the system at an end point, also known as the execution postcondition.
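As a rough sketch of how the elements of a test case (identifier, preconditions, test data, steps, expected result, postcondition) might be recorded, the example below uses a plain Python dictionary; the login scenario and the field names are hypothetical, not a prescribed template.

```python
# Hypothetical example of a single documented test case; the scenario and
# field names are illustrative, not taken from any specific template.
test_case = {
    "id": "TC_001",
    "title": "Login with valid credentials",
    "precondition": "User account 'admin' exists and is active",
    "test_data": {"username": "admin", "password": "secret123"},
    "steps": [
        "Open the login page",
        "Enter the username and password",
        "Click the Login button",
    ],
    "expected_result": "User is redirected to the dashboard",
    "postcondition": "An active session exists for the user",
}

if __name__ == "__main__":
    for field, value in test_case.items():
        print(f"{field}: {value}")
```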
10. What is the difference between Quality Assurance, Quality Control, and
Testing?
Quality Assurance (QA) is focused on preventing defects and ensuring adherence to defined
standards and processes throughout the software development lifecycle. It involves activities
such as establishing quality standards, defining procedures, and implementing continuous
improvement initiatives.
Quality Control (QC) is the process of testing and evaluating the software to identify defects,
deviations from standards, and ensure compliance with quality requirements. It involves activities
like test planning, execution, and defect tracking.
Testing is a subset of QC that involves executing planned activities to evaluate the software's
behavior, functionality, and performance. It aims to validate the software against specified
requirements. Together, QA, QC, and testing contribute to achieving and maintaining high
software quality.
11. When do you think QA activities should start?
QA activities should start as early as possible, ideally as soon as the requirements are being defined. The benefit of starting early is that the test team can understand the domain of the software and prepare a test plan in good time.
12. What is Negative testing? How is it different from Positive testing?
Positive Testing is a type of testing that is performed on a software application by providing valid
data sets as input. It checks whether the software application behaves as expected with positive
inputs or not. Positive testing is performed in order to check whether the software application
does exactly what it is expected to do. On the other hand, Negative Testing is a testing method
performed on the software application by providing invalid or improper data sets as input. It
checks whether the software application behaves as expected with negative or unwanted user
inputs. The purpose of negative testing is to ensure that the software application does not crash
and remains stable with invalid data inputs.
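A minimal sketch of the two approaches, assuming Python with pytest; the register_user function and its e-mail check are invented for illustration.

```python
# Positive testing: valid input, expect normal behaviour.
# Negative testing: invalid input, expect graceful rejection (no crash).
import pytest

def register_user(email):
    # Stand-in implementation (assumption): a very rough e-mail format check.
    if "@" not in email or email.startswith("@") or email.endswith("@"):
        raise ValueError("invalid e-mail address")
    return {"email": email, "status": "registered"}

def test_positive_registration_with_valid_email():
    # Positive test: the application should do exactly what is expected.
    assert register_user("user@example.com")["status"] == "registered"

def test_negative_registration_rejects_invalid_email():
    # Negative test: invalid input must be rejected cleanly, not crash the app.
    with pytest.raises(ValueError):
        register_user("not-an-email")
```

The positive test feeds valid data and expects normal behaviour; the negative test feeds invalid data and expects a clean, controlled rejection rather than a crash.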
13. What are the different artifacts you refer to when you write the test cases?
- Test Strategy: A test strategy is a document that lists the details about how the whole project will proceed.
- Test Plan: A test plan is a detailed document covering all aspects of the testing phase, whereas a test strategy is just an outline for the whole project.
- Test Scenario: A test scenario is a condition created to perform successful end-to-end testing. Several test cases come under a test scenario.
- Test Cases: Test cases are the extended part of a test scenario and help in the execution of testing.
- Test Data: To run the test cases, the QA engineer needs data against which pass or fail is decided. For this, a document is prepared that contains all the test data required to run the created test cases.
- Requirement Traceability Matrix (RTM): This is used to match the requirements of the client with the testing approach. There are two types of RTM: forward RTM and backward RTM (a small sketch follows this list).
- Bug Report: A bug (defect) report is a document that is created once all the test cases are executed and the results are recorded. It lists all the defects or bugs identified during the testing process.
- Test Summary Report: At the end of each complete testing cycle, a final report is created. This report includes the details of the whole testing process.
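To make the Requirement Traceability Matrix concrete, here is a small hypothetical sketch in Python; the requirement and test-case identifiers are invented for illustration.

```python
# Hypothetical forward RTM: each requirement maps to the test cases that cover it.
forward_rtm = {
    "REQ-001 (user login)":     ["TC_001", "TC_002"],
    "REQ-002 (password reset)": ["TC_003"],
    "REQ-003 (audit logging)":  [],  # no coverage yet -> a gap to flag
}

# A backward RTM inverts the mapping: each test case points to the requirement(s)
# it verifies, which helps spot test cases that trace to no requirement.
backward_rtm = {}
for requirement, test_cases in forward_rtm.items():
    for tc in test_cases:
        backward_rtm.setdefault(tc, []).append(requirement)

if __name__ == "__main__":
    uncovered = [req for req, tcs in forward_rtm.items() if not tcs]
    print("Requirements without tests:", uncovered)
    print("Backward RTM:", backward_rtm)
```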
14. What is meant by Verification and Validation?
1. Verification is the process of checking whether a product is developed as per the specifications, whereas validation is the process of ensuring that the product meets the needs and expectations of stakeholders.
2. Verification tests the requirements, architecture, design, and code of the software product, whereas validation tests the usability, functionality, and reliability of the end product.
3. Verification does not require executing the code, whereas validation emphasizes executing the code to test the usability and functionality of the end product.
4. Mainly developers are involved in verification, whereas the QA team is involved in validation.
5. A few verification methods are inspection, code review, desk-checking, and walkthroughs, whereas a few widely used validation methods are integration testing and acceptance testing.
15. How do you determine which piece of software requires how much testing?
Determining the required testing for software involves considering factors such as criticality,
complexity, user base, regulatory requirements, risk analysis, budget, and time constraints. The
criticality and complexity of the software influence the extent of testing needed. Factors like user
base size, usage scenarios, and compliance requirements impact testing coverage. Risk analysis
helps identify high-risk areas for focused testing. Budget and time constraints also play a role in
determining the achievable testing effort. Ultimately, a collaborative effort involving stakeholders
helps determine the appropriate level of testing to ensure software quality and mitigate risks
effectively.