ISTQB CTFL Certified Tester Foundation Level – Test Management Part 2
April 12, 2023

3. Tasks of Test Manager and Tester

Testing tasks may be done by people in a specific testing role, or by someone in another role, such as a project manager, quality manager, developer, business or domain expert, infrastructure, or IT operations staff. The ISTQB syllabus talks in detail about only two roles: the test manager and the tester, though the same people may play both roles at various points during the project. The activities and tasks performed by these two roles depend on the project and product context, the skills of the people in the roles, and the organization. Let's take a look at the work done by these roles, starting with the test manager. The test manager is tasked with overall responsibility for the test process and successful leadership of the test activities.

Typical test manager tasks may include: write or review a test strategy for the project, and a test policy for the organization if not already in place. They plan the test activities, considering the context and understanding the test objectives and risks, including selecting test approaches, estimating the time, effort and cost of testing, acquiring resources, defining test levels and test cycles, planning defect management, and creating a high-level test schedule. Notice that I said high-level test schedule. They also write and update the test plans, and coordinate the test strategy and test plans with project managers, product owners and others. They share the testing perspective with other project activities, such as integration planning. They initiate the analysis, design, implementation and execution of tests, monitor test progress and results, and check the status of exit criteria, or the definition of done if we are talking about Agile. They prepare and deliver test progress reports and test summary reports based on the information gathered during testing.

They also adapt planning based on test results and progress, sometimes documented in test progress reports and/or in test summary reports for other testing already completed on the project, and they take action as necessary for test control. They support setting up the defect management system and adequate configuration management of testware. We will talk more about defect management and configuration management in future videos in this section. They also introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product.
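To give a feel for what such metrics might look like in practice, here is a minimal Python sketch. The field names and figures are invented for illustration only and are not part of the syllabus; it simply computes a few common progress and quality indicators such as execution rate, pass rate and defect density.

```python
# Hypothetical example: computing simple test progress / quality metrics.
# All figures below are made up for illustration only.

planned_tests = 120    # test cases planned for this cycle
executed_tests = 90    # test cases executed so far
passed_tests = 75      # executed tests that passed
defects_found = 18     # defects reported during this cycle
size_kloc = 12.5       # product size in thousands of lines of code

execution_rate = executed_tests / planned_tests   # how much of the plan is done
pass_rate = passed_tests / executed_tests         # quality of what was executed
defect_density = defects_found / size_kloc        # defects per KLOC

print(f"Execution rate: {execution_rate:.0%}")
print(f"Pass rate:      {pass_rate:.0%}")
print(f"Defect density: {defect_density:.1f} defects/KLOC")
```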

They support the selection and implementation of tools to support the test process, including recommending the budget for tool selection (and possibly purchase and/or support), allocating time and effort for pilot projects, and providing continuing support in the use of the tools. They design test environments: they make sure the test environment is put into place before test execution and managed during test execution. They promote and advocate for the testers, the test team, and the test profession within the organization. And last, they develop the skills and careers of testers through training plans, performance evaluations, coaching and so on. The test management role might be performed by a professional test manager, or by a project manager, a development manager, or a quality assurance manager. In larger projects or organizations, several test teams may report to a test manager, test coach or test coordinator, each team being headed by a test leader or lead tester. The way in which the test manager role is carried out varies depending on the software development lifecycle.

For example, in Agile development or Agile projects, some of the tasks mentioned above are handled by the whole Agile team. We have a concept in Agile called the whole-team approach, in which the whole team acts as one. So some of the tasks mentioned above are handled by the whole Agile team, especially those tasks concerned with the day-to-day testing done within the team, often by a tester working within the team. Some of the tasks that span multiple teams or the entire organization, or that have to do with personnel management, may be done by test managers outside the development team, who are sometimes called test coaches. On the other hand, typical tester tasks may include: review and contribute to test plans; analyze, review and assess user requirements, specifications and models for testability; identify and document test conditions, and capture traceability between test cases, test conditions and the test basis.

Design, set up and verify the test environment or environments, setting up the hardware and software needed for testing, often coordinating with system administration and network management. Design and implement test cases and test procedures. Prepare and acquire test data if needed. Create the detailed test execution schedule: yes, we leave it up to the testers to create their own detailed test schedule around the high-level test schedule created by the test manager, as we mentioned before. Execute tests, evaluate the results, and document deviations from expected results. Use appropriate tools to facilitate the test process. Automate tests (this may be supported by a developer or a test automation expert).

Evaluate non-functional characteristics such as performance efficiency, reliability, usability, security, compatibility and portability. Review and contribute to test plans, and also review tests developed by others. Again, depending on the risks related to the product and the project, and the software development lifecycle model selected, different people may take over the role of tester at different test levels. For example, at the component testing level and the component integration testing level, the role of tester is often done by developers. At the acceptance test level, the role of tester is often done by business analysts, subject matter experts and users.

At the system test level and the system integration test level, the role of tester is often done by an independent test team. At the operational acceptance test level, the role of tester is often done by operations and/or system administration staff. People who work on test analysis, test design, specific test types, or test automation may be specialists in these roles. The questions in this part usually ask you to differentiate between the tasks of a tester and those of the test manager. As you may have noticed, the test manager's tasks are about planning and managing how things are done, while the tester's tasks are the actual hands-on work of doing those things.

4. Test Strategy and Test Approach

A test strategy provides a generalized description of the test process, usually at the product or organizational level. The test strategy describes, at a high level and independent of any specific project, the "how" of testing for an organization. The choice of test strategy is one of the most powerful factors, if not the most powerful factor, in the success of the test effort and the accuracy of the test plans and estimates. Let's look at the major types of test strategies that are commonly found. Analytical: in analytical test approaches, our testing is based on the analysis of some factor, usually during the requirements and design stages of the project, to decide where to test first, where to test more, and when to stop testing. For example, the risk-based strategy involves performing a risk analysis using project documents and stakeholder input. Risk-based testing gives higher attention to the areas of highest risk. Another analytical test strategy is the requirements-based strategy, where an analysis of the requirement specifications forms the basis for planning, estimating and designing tests.
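To make the risk-based idea concrete, here is a minimal sketch. The test areas, likelihood and impact scores are invented for this example; the point is simply that areas are ranked by a risk score (likelihood times impact) so the highest-risk areas are tested first and most thoroughly.

```python
# Hypothetical risk-based prioritization sketch:
# rank test areas by risk score = likelihood x impact (scores are made up).

test_areas = [
    {"area": "payment processing", "likelihood": 4, "impact": 5},
    {"area": "report export",      "likelihood": 2, "impact": 2},
    {"area": "user login",         "likelihood": 3, "impact": 5},
    {"area": "profile settings",   "likelihood": 2, "impact": 3},
]

# Compute a simple risk score for each area.
for area in test_areas:
    area["risk"] = area["likelihood"] * area["impact"]

# Highest risk first: test these areas earlier and in more depth.
for area in sorted(test_areas, key=lambda a: a["risk"], reverse=True):
    print(f'{area["area"]:<20} risk score: {area["risk"]}')
```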

Model-based: in model-based approaches, we create, design or select some formal or informal model that our system must follow. The model will be based on some required aspect of the product, for example a function, a business process, an internal structure, or a non-functional characteristic like reliability. To give you an example: our software's response time should be faster than that of the competitor's software. We will keep on testing our software until the behavior of the system under test conforms to that predicted by the model. Examples of such models include business process models, state models, and reliability growth models. Methodical: this type of test strategy relies on making systematic use of some predefined set of tests or test conditions, such as a taxonomy of common or likely types of failures, a list of important quality characteristics, or company-wide look-and-feel standards for mobile applications or web pages.

Examples of methodical approaches are failure-based (including error guessing and fault attacks), checklist-based, and quality-characteristic-based. Process- or standard-compliant: in this approach, you might adopt an industry standard or a known process to test the system. For example, you might adopt the IEEE 29119 standard for your testing. Alternatively, you might adopt one of the agile methodologies such as Extreme Programming. Process- or standard-compliant strategies have in common a reliance upon external rules or standards.

Reactive: in this type of test strategy, testing is reactive to the component or system being tested, and to the events occurring during test execution, rather than being pre-planned as the preceding strategies are. Tests are designed and implemented, and may immediately be executed, in response to knowledge gained from prior test results. Exploratory testing is a common technique employed in reactive strategies. Consultative or directed: this type of strategy is driven primarily by the advice, guidance or instructions of stakeholders, business domain experts or technology experts, who may be outside the test team or outside the organization itself. For example, you might ask the users or developers of the system to tell you what to test, or even rely on them to do the testing themselves. Regression-averse: this type of test strategy is motivated by a desire to avoid regression of existing capabilities.

A regression-averse test strategy includes reuse of existing testware (especially test cases and test data), extensive automation of regression tests, and standard test suites, so that whenever anything changes you can rerun every test to ensure nothing has been broken. So which approach is the best? Again, there is no one best answer to this question. An appropriate test strategy is often created by combining several of these types of test strategies according to your own situation. For example, risk-based testing (an analytical strategy) can be combined with exploratory testing (a reactive strategy); they complement each other and may achieve more effective testing when used together. While the test strategy provides a generalized description of the test process, the test approach is the implementation of the test strategy for a specific project or release.
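As a small illustration of the automation element of the regression-averse strategy described above, the pytest tests below form a tiny regression suite that can be rerun after every change to confirm that existing behavior still holds. The discount_price function and its rules are invented purely for this example; they stand in for whatever existing capability your product already has.

```python
# Hypothetical regression suite sketch using pytest.
# discount_price and its rules are invented purely for illustration.

import pytest

def discount_price(price: float, percent: float) -> float:
    """Apply a percentage discount; invalid inputs are rejected."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or discount percentage")
    return round(price * (1 - percent / 100), 2)

# Each test pins down existing behavior, so any change that breaks it is caught.
def test_basic_discount():
    assert discount_price(100.0, 10) == 90.0

def test_zero_discount_keeps_price():
    assert discount_price(49.99, 0) == 49.99

def test_invalid_percentage_rejected():
    with pytest.raises(ValueError):
        discount_price(100.0, 150)

# Running "pytest" after every change reruns the whole suite automatically.
```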

So when you assess the risks of your project, as we mentioned in a previous lecture, and refine your project objectives and your test objectives, then you can decide on the test approach you will take to test your project, based on your organization's test strategy. The test approach will be reflected in your decisions in planning the test effort. It's the starting point for planning the test process, for selecting the test techniques, test levels and test types to be applied, and for defining the entry and exit criteria, or the definition of done. The tailoring of the test strategy is based on decisions made in relation to the complexity and goals of the project, the type of product being developed, and product risk analysis. Questions on this topic in the ISTQB exam usually describe a situation and ask you which approach or strategy is best to use. For example, if you are using Agile or Waterfall, what approach are you using? Correct: process- or standard-compliant. If you are asking the user which areas to test, what approach are you using? Directed (consultative).

If you are using test cases from an old version of the software, what approach are you using? Regression-averse. And so on. So how do you know which strategies to pick or blend for the best chance of success for your project? There are many factors to consider, but let's highlight a few of the most important ones. Risks: testing is about risk management, so consider the risks and the level of risk. For a well-established application that is evolving slowly, regression is an important risk, so regression-averse strategies make sense; for a new application, a risk analysis may reveal different risks if you pick a risk-based analytical strategy. Available resources and skills: you have to consider which skills your testers possess and which they lack. A standard-compliant strategy is a smarter choice when you lack the time and skills in your team to create your own approach. Objectives: testing must satisfy the needs of stakeholders to be successful. If the objective is to find as many defects as possible with a minimal amount of up-front time and effort invested, then a reactive strategy makes sense. Regulations: sometimes you must satisfy not only stakeholders but also regulators. In this case, a methodical test strategy may satisfy those regulations. Product: the nature of the product and the business matters. For example, a different approach is required for testing mobile phone coverage than for testing an online banking operation. Safety: of course, safety considerations promote more formal strategies. And last, technology.
