
Manual Testing

Testing Interview Questions
1. Release Note and importance
2. Smoke and Sanity testing
3. Priority and Severity
4. Verification and Validation
5. Alpha and Beta testing
6. Regression testing and Confirmation testing
7. Test Efficiency and Test Effectiveness
8. Test Policy and Test Strategy
9. Functional and Non-Functional Requirements
10. Software Requirement Specification
11. Integration, Interoperability and Compatibility
12. Agile Testing
13. End of the document
1. Maintenance Testing, Maintainability
2. Traceability Matrix
3. Test Plan
4. Use case
5. Test Process
6. Test Type and Test Level
7. Error, Defect and Failure
8. Test Levels and Test Types
Release Note and importance
A document identifying test items, their configuration, current status and
other delivery information delivered by development to testing, and possibly other
stakeholders, at the start of a test execution phase. [After IEEE 829]
Release notes are documents that are distributed with software products,
often when the product is still in the development or test state (e.g., a beta release).
For products that have already been in use by clients, the release note is a
supplementary document that is delivered to the customer when a bug is fixed or an
enhancement is made to the product.
Release notes may include the following sections:
1. Header – Document name (i.e. Release Notes), product name, release
number, release date, note date, note version, etc.
2. Overview - A brief overview of the product and changes, in the absence of
other formal documentation.
3. Purpose - A brief overview of the purpose of the release note with a listing of
what is new in this release, including bug fixes and new features.
4. Issue Summary - A short description of the bug or the enhancement in the
release.
5. Steps to Reproduce - The steps that were followed when the bug was
encountered.
6. Resolution - A short description of the modification/enhancement that was
made to fix the bug.
7. End-User Impact - What different actions are needed by the end-users of
the application. This should include whether other functionality is impacted by
these changes.
8. Support Impacts - Changes required in the daily process of administering
the software.
9. Notes - Notes about software or hardware installation, upgrades and product
documentation (including documentation updates).
10. Disclaimers - Company and standard product-related messages, e.g. freeware,
anti-piracy, duplication, etc.
11. Contact - Support contact information.
A release note is usually a terse summary of recent changes, enhancements and bug
fixes in a particular software release. It is not a substitute for user guides. Release
notes are frequently written in the present tense and provide information that is
clear, correct, and complete. They may be written by a technical writer or any other
member of the development or test team. Release notes can also contain test results
and information about the test procedure. This kind of information gives readers of
the release note more confidence in the fix/change done; this information also
enables implementers of the change to conduct rudimentary acceptance tests.
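For illustration, a minimal release note following the sections above might read
as follows (the product, version, dates and issue details are all invented):

    Release Notes - ReportGen v2.4.1, released 10-Mar-2016 (note v1.0)
    Overview: Maintenance release of the ReportGen reporting engine.
    Purpose: Fixes one defect in yearly report generation; no new features.
    Issue Summary: Yearly totals were wrong for periods spanning a leap day.
    Steps to Reproduce: Generate a yearly report covering 29 February and
      compare its totals against the four quarterly reports.
    Resolution: Date-range arithmetic corrected to include leap days.
    End-User Impact: Re-run any affected yearly reports; no other
      functionality is impacted.
    Support Impacts: None.
    Notes: Standard upgrade procedure; no documentation changes.
    Disclaimers: (c) Example Corp. Unauthorized duplication prohibited.
    Contact: support@example.com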
Smoke and Sanity testing
Smoke Testing:
Smoke testing is an integration approach that is commonly used when software
products are being developed.
A smoke test should exercise the entire system from end to end. It does not have to
be exhaustive, but it should be capable of exposing major problems. The smoke test
should be thorough enough that if the build passes, you can assume that it is stable
enough to be tested more thoroughly.
Smoke testing originated in the hardware testing practice of turning on a new piece
of hardware for the first time and considering it a success if it does not catch fire and
smoke.
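As a sketch, a smoke suite for a hypothetical web application could look like the
following (pytest style; the URL and pages are assumptions, not from the text):
broad end-to-end checks, intentionally shallow.

    # smoke_test.py - minimal end-to-end smoke checks for a hypothetical
    # web application; run against a fresh build with: pytest smoke_test.py
    import requests

    BASE_URL = "http://localhost:8000"  # assumed location of the build under test

    def test_application_starts_and_responds():
        # The hardware analogy: the build "does not smoke" - it answers at all.
        response = requests.get(BASE_URL, timeout=5)
        assert response.status_code == 200

    def test_login_page_is_reachable():
        # One broad check per major area; not exhaustive, but enough to
        # expose a completely broken build before deeper testing begins.
        response = requests.get(BASE_URL + "/login", timeout=5)
        assert response.status_code == 200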
Sanity Testing:
Sanity testing is usually done at a 'build level' to test the build's stability;
hence it is also called a BVT (Build Verification Test).
This is usually done via a set of test cases that are executed to make sure that all
major functionality works and that there are no issues with either software or
hardware and that testing can officially kick off. These test cases are usually
documented.
'Sane' means healthy; sanity testing checks whether the software is healthy or not.
Priority and Severity
Severity:
 High: A major issue where a large piece of functionality or major system
component is completely broken. There is no workaround and testing cannot
continue.
 Medium: A major issue where a large piece of functionality or major system
component is not working properly. There is a workaround, however, and
testing can continue.
 Low: A minor issue that imposes some loss of functionality, but for which
there is an acceptable and easily reproducible workaround. Testing can
proceed without interruption.
Priority:
 High: This has a major impact on the customer. This must be fixed
immediately.
 Medium: This has a major impact on the customer. The problem should be
fixed before release of the current version in development, or a patch must be
issued if possible.
 Low: This has a minor impact on the customer. The flaw should be fixed if
there is time, but it can be deferred until the next release.
A Modest Proposal by Brian Beaver
I recommend eliminating the Severity and Priority fields and replacing them with a
single field that can encapsulate both types of information: call it the Customer
Impact field. Every piece of software developed for sale by any company will have
some sort of customer. Issues found when testing the software should be
categorized based on the impact to the customer or the customer’s view of the
producer of the software. In fact, the testing team is a customer of the software as
well. Having a Customer Impact field allows the testing team to combine
documentation of outside-customer impact and testing-team impact. There would no
longer be the need for Severity and Priority fields at all. The perceived impact and
urgency given by both of those fields would be encapsulated in the Customer Impact
field.
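A sketch of what that single field might look like in a defect record (the names
and levels here are illustrative, not drawn from any particular tracking tool):

    # Hypothetical defect record with one Customer Impact field in place
    # of separate Severity and Priority fields.
    from dataclasses import dataclass
    from enum import Enum

    class CustomerImpact(Enum):
        BLOCKING = 1   # the customer (or the testing team) cannot proceed
        DEGRADED = 2   # functionality or the customer's view of the producer suffers
        COSMETIC = 3   # minor; fix as time permits

    @dataclass
    class Defect:
        summary: str
        impact: CustomerImpact

    bug = Defect("Logo misspelled on the home page", CustomerImpact.DEGRADED)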
High Priority and Low Severity:
A spelling error on a user-interface screen. For example, if the logo on the
Google home page were spelled "Goggle", little functionality would be lost, but
the impact on the customer's perception of the site would be major, so the fix
cannot wait.
Low Priority and High Severity:
An anomalous application crash. Suppose an application generates banking reports
weekly, monthly, quarterly and yearly, and there is a fault in calculating the
yearly report. The defect is severe, but because the yearly report is needed only
once a year, the fix can be scheduled for a later release.
Verification and Validation
In every development life cycle, a part of testing is focused on Verification testing
and a part is focused on Validation testing.
Verification:
Verification is concerned with evaluating a work product, component, or system to
determine whether it meets the requirements set for it.
Verification focuses on the question, “Is the system correct to specification?”
Verification is a Quality control process.
Validation:
Validation is concerned with evaluating a work product, component or system to
determine whether it meets the user needs and requirements.
Validation focuses on the question, “Is this the correct specification?”
Validation is a Quality Assurance process.
Alpha and Beta testing
Alpha testing is performed at the developing organization’s site. Beta testing, or field
testing, is performed by people at their own locations. Both are performed by
potential customers, not the developers of the product.
Organizations may use other terms as well, such as factory acceptance testing
and site acceptance testing for systems that are tested before and after being
moved to a customer’s site.
Alpha and Beta testing concepts also differ between a product-based company and a
service-based company.
In a product-based company:
Alpha testing is the testing done at the developer's site by the testers.
Beta testing is the testing done by the testers by simulating the customer's
environment.
In a service-based company:
Alpha testing is the testing conducted by customers at the developer's site.
Beta testing is the process of giving the product to customers and letting them
test it in their own environment.
Regression testing and Confirmation testing
Regression testing:
Testing of a previously tested program after modification to ensure that defects have
not been introduced (or uncovered) in unchanged areas of the software [as a result
of the changes made].
These defects may be either in the software being tested, or in another related or
unrelated software component. It is performed when the software, or its
environment, is changed. The extent of regression testing is based on the risk of not
finding defects in software that was working previously.
Confirmation testing:
Testing that runs test cases that failed the last time they were run, in order to verify
the success of corrective actions.
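In an automated suite the two map naturally onto how tests are selected for a run.
Assuming a pytest-based suite (an assumption; the document does not name a tool),
pytest's failure cache supports confirmation testing directly:

    pytest --last-failed   # confirmation testing: re-run only the tests that
                           # failed last time, to verify the fixes
    pytest                 # regression testing: re-run the full suite to check
                           # that the changes broke nothing that worked before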
Test Efficiency and Test Effectiveness
Efficiency is a productivity metric, while effectiveness is a quality metric.
Test Efficiency:
Test Efficiency can be "No. of test cases executed per hour or per person day".
Test Effectiveness:
Testing effectiveness metrics can be "No. of bugs identified by a tester in a given
feature / Total no. of bugs identified in that feature".
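A small worked example of both metrics (the numbers are invented):

    # Test efficiency: a productivity measure.
    executed_cases = 120
    person_days = 4
    efficiency = executed_cases / person_days   # 30 test cases per person-day

    # Test effectiveness: a quality measure for one feature.
    bugs_found_by_tester = 18
    total_bugs_in_feature = 24
    effectiveness = bugs_found_by_tester / total_bugs_in_feature   # 0.75, i.e. 75%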
Test Policy and Test Strategy
Test Policy:
A high-level document describing the principles, approach and major objectives of
the organization regarding testing.
Testing Principles:
1. Testing shows presence of defects
2. Exhaustive testing is impossible
3. Early testing
4. Defect clustering
5. Pesticide paradox
6. Testing is context dependent
7. Absence-of-errors fallacy
Test Strategy:
A high-level description of the test levels to be performed and the testing within
those levels for an organization or program (one or more projects).
Test Approach:
The implementation of the test strategy for a specific project.
One way to classify test approaches or strategies is based on the point in time at
which the bulk of the test design work is begun:
 Preventative approaches, where tests are designed as early as possible.
 Reactive approaches, where test design comes after the software or system
has been produced.
Typical approaches or strategies include:
 Analytical approaches, such as risk-based testing where testing is directed to
areas of greatest risk.
 Model-based approaches, such as stochastic testing using statistical
information about failure rates (such as reliability growth models) or usage
(such as operational profiles).
 Methodical approaches, such as failure-based (including error guessing and
fault-attacks), experience-based, checklist-based, and quality-characteristic
based.
 Process- or standard-compliant approaches, such as those specified by
industry-specific standards or the various agile methodologies.
 Dynamic and heuristic approaches, such as exploratory testing where testing
is more reactive to events than pre-planned, and where execution and
evaluation are concurrent tasks.
 Consultative approaches, such as those where test coverage is driven
primarily by the advice and guidance of technology and/or business domain
experts outside the test team.
 Regression-averse approaches, such as those that include reuse of existing
test material, extensive automation of functional regression tests, and
standard test suites.
Functional and Non-Functional Requirements
Functional Requirements [software quality characteristics (ISO 9126)]:
 Suitability
 Accuracy
 Security
 Interoperability
 Compliance
Non-Functional Requirements [software quality characteristics (ISO 9126)]:
 Reliability (Endurance, Robustness, Recovery)
 Usability (Understandability, Learnability, Operability, Attractiveness)
 Efficiency (Time behavior, Resource utilization)
 Maintainability (Analyzability, Changeability, Stability, Testability)
 Portability (Adaptability, Installability, Co-existence, Replaceability)
Suitability: The capability of the software product to provide an appropriate set of
functions for specified tasks and user objectives.
Accuracy: The capability of the software product to provide the right or agreed
results or effects with the needed degree of precision.
Security: The ability to prevent unauthorized access, whether accidental or
deliberate, to programs and data.
Interoperability: The capability of the software product to interact with one or more
specified components or systems.
Compliance: The capability of the software product to adhere to standards,
conventions or regulations in laws and similar prescriptions.
Reliability: The ability of the software product to perform its required functions under
stated conditions for a specified period of time, or for a specified number of
operations.
Usability: The capability of the software to be understood, learned and used under
specified conditions.
Efficiency: The capability of the software product to provide appropriate
performance, relative to the amount of resources used under stated conditions.
Maintainability: The ease with which a software product can be modified to correct
defects, modified to meet new requirements, modified to make future maintenance
easier, or adapted to a changed environment.
Portability: The ease with which the software product can be transferred from one
hardware or software environment to another.
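These characteristics become testable once expressed as measurable checks. As a
sketch, a time-behavior (Efficiency) test for a hypothetical report generator
might assert a response-time budget (the function and the 2-second budget are
invented for illustration):

    import time

    def generate_report():
        # Stand-in for the real operation under test.
        time.sleep(0.1)

    def test_report_time_behavior():
        # Efficiency / time behavior: the operation must finish within budget.
        start = time.perf_counter()
        generate_report()
        elapsed = time.perf_counter() - start
        assert elapsed < 2.0, f"report took {elapsed:.2f}s, budget is 2s"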
Software Requirement Specification
A Software Requirements Specification (SRS) is a complete description of the
behavior of the system to be developed. It includes a set of use cases that describe
all the interactions the users will have with the software. Use cases are also known
as functional requirements.
In addition to use cases, the SRS also contains non-functional (or supplementary)
requirements. Non-functional requirements are requirements which impose
constraints on the design or implementation (such as performance engineering
requirements, quality standards, or design constraints).
Requirements analysis, in systems engineering and software engineering,
encompasses those tasks that go into determining the needs or conditions to be met
by a new or altered product, taking account of the possibly conflicting
requirements of the various stakeholders, such as beneficiaries or users.
Requirements analysis is critical to the success of a development project.
Requirements must be actionable, measurable, testable, related to identified
business needs or opportunities, and defined to a level of detail sufficient for system
design. Requirements can be functional and non-functional.
A corporate stakeholder is a party that affects or can be affected by the actions of
the business as a whole.
Integration, Interoperability and Compatibility
Integration is concerned with the process of combining components into an overall
system (after IEEE 610). Integration testing is all-encompassing in that it is
concerned not only with whether the interface between components is correctly
implemented, but also with whether the integrated components – now a system –
behave as specified. This behavior covers both functional and non-functional
aspects of the integrated system.
An example of integration testing would be where the aperture control component
and the shutter control component were integrated and tested to ensure that
together they performed correctly as part of the camera control system.
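A sketch of that camera example as an automated integration test (all class and
method names are invented for illustration):

    # Two components integrated into a camera control system, tested together.
    class ApertureControl:
        def open_to(self, f_stop):
            self.f_stop = f_stop

    class ShutterControl:
        def fire(self, speed):
            self.last_speed = speed
            return True

    class CameraControl:
        # The integration point: coordinates aperture and shutter for one exposure.
        def __init__(self, aperture, shutter):
            self.aperture = aperture
            self.shutter = shutter

        def expose(self, f_stop, speed):
            self.aperture.open_to(f_stop)
            return self.shutter.fire(speed)

    def test_exposure_uses_both_components_together():
        camera = CameraControl(ApertureControl(), ShutterControl())
        assert camera.expose(f_stop=8, speed=1 / 250)
        assert camera.aperture.f_stop == 8   # behavior of the combined system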
Interoperability is the ability of two or more systems (or components) to exchange
information and subsequently use that information (after IEEE 610). So interoperability is
concerned with the ability of systems to communicate – and it requires that the
communicated information can be understood by the receiving system - but it is not
concerned with whether the communicating systems do anything sensible as a
whole.
The interoperability between two systems could be fine, but whether the two
systems as a whole actually performed any useful function would be irrelevant as far
as the interoperability was concerned. Interoperability is therefore involved with the
interfaces (as is integration) but not with whether the communicating systems as a
whole behave as specified. Thus interoperability testing is a subset of integration
testing.
An example of interoperability testing would be where flight information is passed
between the (separate) booking systems for two airlines. Interoperability testing
would test whether the information reached the target system and still meant the
same thing to the target system as the sending system. It would not test whether
the target system subsequently used the booking information in a reasonable
manner.
Compatibility is concerned with the ability of two or more systems or components
to perform their required functions while sharing the same environment (after IEEE
610). The two components (or systems) do not need to communicate with each
other, but simply reside in the same environment – so compatibility is not
concerned with interoperability. Two components (or systems) can also be
compatible but perform completely separate functions – so compatibility is not
concerned with integration, which considers whether the two components together
perform a function correctly.
An example of compatibility testing would be to test whether word processor and
calculator applications (two separate functions) could both work correctly on a PC at
the same time.
Agile Testing
Agile testing is a software testing practice that follows the principles of the agile
manifesto, emphasizing testing from the perspective of customers who will utilize the
system. Agile testing does not emphasize rigidly defined testing procedures, but
rather focuses on testing iteratively against newly developed code until quality is
achieved from an end customer's perspective. In other words, the emphasis is
shifted from "testers as quality police" to something more like "entire project team
working toward demonstrable quality."
Agile testing involves testing from the customer perspective as early as possible,
testing early and often as code becomes available and stable enough from
module/unit level testing.
Since working increments of the software are released often in agile software
development, there is also a need to test often. This is commonly done by using
automated acceptance testing to minimize the amount of manual labor involved.
Doing only manual testing in agile development may result in either buggy software
or slipping schedules because it may not be possible to test the entire build manually
before each release.
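As a sketch, an automated acceptance test states a criterion from the customer's
perspective (the feature and names below are hypothetical):

    import pytest

    def transfer(balance, amount):
        # Stand-in for a hypothetical banking feature under test.
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    def test_customer_cannot_overdraw_account():
        # Acceptance criterion as the customer would state it:
        # "A transfer larger than my balance must be refused."
        with pytest.raises(ValueError):
            transfer(balance=100, amount=150)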
In agile testing, testers are no longer a form of quality police. Testing moves
the project forward, leading to a new strategy called Test-Driven Development.
Testers provide information, feedback and suggestions rather than being the last
line of defense.
Testing is no more a phase; it integrates closely with Development. Continuous
testing is the only way to ensure continuous progress.
Reduce feedback loops. Manual regression tests take longer to execute and,
because a resource must be available, may not begin immediately; feedback time can
stretch to days or weeks. Manual testing, particularly manual exploratory testing,
is still important. However, Agile teams typically find that the fast feedback
afforded by automated regression is key to detecting problems quickly, thus
reducing risk and rework.
Keep the code clean. Buggy software is hard to test, harder to modify, and slows
everything down. Keep the code clean and fix bugs fast.
Lightweight documentation. Instead of writing verbose, comprehensive test
documentation, Agile testers:
 Use reusable checklists to suggest tests
 Focus on the essence of the test rather than the incidental details
 Use lightweight documentation styles/tools
 Capture test ideas in charters for exploratory testing (see the sample charter below)
 Leverage documents for multiple purposes
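For example, one widely used charter template ("Explore ... with ... to discover
...") fits on three lines; the subject below is invented:

    Explore the yearly report generator
    With date ranges that span a leap day
    To discover calculation and formatting problems.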
End of the document