
PMI-ACP Study Notes: Domain VI Problem Detection and Resolution


According to the PMI-ACP Exam Content Outline, Domain VI Problem Detection and Resolution consists of 5 tasks.

Below is a collection of the key knowledge addressed in Domain VI Problem Detection and Resolution and the 5 tasks related to the domain:


Risk / Threat Management

Risk is uncertainty that could affect the success/failure of the project. Risks become problems or issues once they occur.

Risks can be threats or opportunities; negative project risks are considered "anti-value".

To maximize value, negative risks must be minimized while positive risks (opportunities) should be exploited. Once problems or issues arise, they must be resolved promptly to limit their effect on value creation.

Risk identification should involve the customer, project team, and all relevant stakeholders

Five Core Risks mentioned in the book "The Software Project Manager’s Bridge to Agility"

productivity variation (difference between planned and actual performance)

scope creep (considerable additional requirements beyond the initial agreement)

specification breakdown (lack of stakeholder consensus on requirements)

intrinsic schedule flaw (poor estimates of task durations)

personnel loss (the loss of human resources)


Risks are assessed by risk probability (how likely the risk is to occur) and risk impact (how severe the consequence would be):

Risk Severity = Risk Probability x Risk Impact

Risk probability can be a percentage value or a number on a relative scale (the higher, the more likely)

Risk impact can be a dollar value or a number on a relative scale (the higher, the more costly to fix)
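
To illustrate the Risk Severity formula above, here is a minimal Python sketch (the risk names and scores are invented, using relative 1-5 scales) that computes and ranks severity:

```python
# Hypothetical risks scored on relative 1-5 scales (illustrative values only)
risks = {
    "personnel loss": {"probability": 2, "impact": 5},
    "scope creep": {"probability": 4, "impact": 3},
    "intrinsic schedule flaw": {"probability": 3, "impact": 4},
}

# Risk Severity = Risk Probability x Risk Impact
for r in risks.values():
    r["severity"] = r["probability"] * r["impact"]

# Rank risks from most to least severe
for name, r in sorted(risks.items(), key=lambda kv: kv[1]["severity"], reverse=True):
    print(f"{name}: severity = {r['severity']}")
```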


As a general rule, "riskier" features (those with high severity values) should be tackled and tested in earlier sprints to allow the project to "fail fast", as failing during an earlier phase of the project is much less costly than failing during a later phase.

Risk is high at the beginning of the project (both for traditional and Agile projects), but Agile projects have higher success rates because the very nature of Agile project management tends to reduce risk, as change is inherent to such projects.

Risk can be categorized into the following:

Business – related to business value

Technical – related to technology use and/or skill sets

Logistic – related to schedule, funding, staffing, etc.

Others – Political, Environmental, Societal, Technological, Legal or Economic (PESTLE)


To tackle risks: Identify Risks -> Assess Qualitatively and Quantitatively -> Plan Response -> Carry Out Responses Should Risks Arise -> Control and Review

Risk-Adjusted Backlog

Prioritization criteria for the backlog: value, knowledge, uncertainty, risk

The backlog can be re-prioritized by customers as needed to reduce risks while still realizing value

The customer can give each backlog item a dollar value: features are valued by assessing their expected return (ROV), while risk response actions (non-functional requirements) are valued by the expected cost of the risk (the cost involved multiplied by the probability of the risk in %)


The backlog of features and risk response activities can then be prioritized based on the dollar values
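
As a minimal sketch of how such a risk-adjusted backlog could be ordered (all item names and dollar figures below are invented for illustration), features carry their estimated return while risk response actions carry the expected cost they avoid (probability x impact), and the combined list is sorted by dollar value:

```python
# Illustrative backlog: features valued by expected return,
# risk responses valued by expected cost avoided (probability x impact)
backlog = [
    {"item": "Feature A", "type": "feature", "value": 8000},
    {"item": "Feature B", "type": "feature", "value": 3000},
    {"item": "Response to risk X", "type": "risk response",
     "value": 0.5 * 10000},   # 50% probability x $10,000 impact
    {"item": "Response to risk Y", "type": "risk response",
     "value": 0.2 * 25000},   # 20% probability x $25,000 impact
]

# Prioritize the combined backlog by dollar value, highest first
for entry in sorted(backlog, key=lambda e: e["value"], reverse=True):
    print(f"${entry['value']:>7,.0f}  {entry['item']} ({entry['type']})")
```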


Risk adjustment tries to identify and mitigate the risk at an early stage of development

‘Fail fast’ allows the team to learn and adjust course


Risk Burn Down Graphs / Charts

to show the risk exposure of the project

created by plotting the sum of the agreed risk exposure values (impact x probability) against iterations

to be updated regularly (per iteration) to reflect the change in risk exposure

general recommendation: top 10 risks are included

the risk burndown chart should show the total risk exposure trending down as the project progresses
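
A rough sketch of how the data behind a risk burn down chart could be assembled and plotted (the risks, dollar exposures, and iteration count are all made up; matplotlib is assumed to be available):

```python
import matplotlib.pyplot as plt

# Hypothetical risk exposures (impact in $ x probability), re-estimated each iteration
exposure_by_risk = {
    "scope creep":    [4000, 3500, 2500, 1500, 500],
    "personnel loss": [3000, 3000, 2000, 1000, 1000],
    "schedule flaw":  [5000, 4000, 2500, 1200, 400],
}

iterations = [1, 2, 3, 4, 5]
total_exposure = [sum(vals[i] for vals in exposure_by_risk.values())
                  for i in range(len(iterations))]

# The total exposure should trend downwards as the project progresses
plt.plot(iterations, total_exposure, marker="o")
plt.xlabel("Iteration")
plt.ylabel("Total risk exposure ($)")
plt.title("Risk burn down chart")
plt.show()
```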

Risk-based Burn-Up Chart

tracks the targeted and actual product delivery progress

includes estimates of how likely the team is to achieve the targeted value, adjusted for risk, by showing the optimistic, most likely, and worst-case scenarios
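
One simple way to read such a chart is as a set of risk-adjusted completion projections; the sketch below (remaining points and velocities are invented) divides the remaining backlog by optimistic, most likely, and worst-case velocities:

```python
# Hypothetical figures: remaining backlog and risk-adjusted velocity scenarios
remaining_points = 120
velocities = {"optimistic": 30, "most likely": 24, "worst case": 15}

# Projected iterations to finish the remaining work under each scenario
for scenario, velocity in velocities.items():
    iterations_needed = -(-remaining_points // velocity)  # ceiling division
    print(f"{scenario}: ~{iterations_needed} more iterations")
```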


Risk-based Spike

Spike: a rapid, time-boxed experiment (by the developers) to learn just enough about the "unknown" of a user story for estimation / establishing realistic expectations / as a proof of concept

the "unknown" can be: new technologies, new techniques

spikes are carried out between sprints and before major epics/user stories

products of a spike are usually intended to be thrown away

types:

architectural spike: associated with an unknown area of the system, technology, or application domain

non-architectural spike: others


Risk-based spike: a spike to allow the team to eliminate or minimize some major risks

if the spike fails for every approach available, the project reaches a condition known as "fast failure"; the cost of this failure is much less than failing later


Problem Detection


Definition of Done (DoD)

Done usually means the feature is 100% complete (from analysis, design, and coding through user acceptance testing, delivery, and documentation) and ready for production (shippable)

Done for a feature: the feature/backlog item is completed

Done for a sprint: the work planned for the sprint is completed

Done for a release: the features are shippable


The exact definition of done has to be agreed upon by the whole team (developers, product owner/customer, sponsor, etc.)

The definition of done includes the acceptance criteria and acceptable risks


Frequent Validation and Verification

Early and frequent testing both within and outside the development team to reduce the cost of quality (change or failure)

validation: (usually external) the assurance that a product, service, or system meets the needs of the customer

verification: (usually internal of a team) the evaluation of whether or not a product, service, or system complies with a regulation, requirement, specification, or imposed condition


Agile measures to ensure frequent validation and verification:

testers are included in the development team from the beginning, taking part in user requirements collection

unit tests are created for continuous feedback for quality improvement and assurance

automated testing tools are used, allowing quick and robust testing

examples: peer reviews, periodic code reviews, refactoring, unit tests, automated and manual testing

feedback for various stages: team (daily) -> product owner (during sprint) -> stakeholders (each sprint) -> customers (each release)


Variance and trend analysis

variance is a measure of how far apart things are (how much the data vary from one another)

e.g. in a distribution of data points, a small variance indicates the data tend to be close to the mean (expected value)
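
As a tiny illustration of the variance idea (the cycle-time numbers are made up; only Python's standard statistics module is used), two teams can have the same mean but very different spreads:

```python
import statistics

# Hypothetical cycle times (days) for two teams with the same mean
team_a = [4, 5, 5, 6, 5]   # consistent delivery
team_b = [1, 9, 2, 8, 5]   # erratic delivery

for name, data in {"Team A": team_a, "Team B": team_b}.items():
    print(f"{name}: mean = {statistics.mean(data)}, "
          f"variance = {statistics.variance(data):.1f}")
```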


Trend analysis provides insights into the future, which is more important for problem detection

though measurements are lagging indicators, they can still provide insights if trends are spotted


variance and trend analysis is important for controlling (problem detection) and continuous improvement, e.g. the process to ensure quality

Control limits for Agile projects

by plotting the time to delivery/velocity / escaped defects / etc. as a control chart

if some data fall outside the upper / lower control limits, a root cause analysis should be performed to rectify the issue

common cause – a systematic issue, to be dealt with through trend analysis

special cause – happens only once, due to a special reason

another example is the WIP limit in Kanban boards
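
A minimal sketch of the control-limit idea (the velocity figures are invented, and the common mean +/- 3 standard deviations convention is assumed for the limits): points falling outside the limits would trigger a root cause analysis.

```python
import statistics

# Hypothetical historical velocity (story points) used as the baseline
baseline = [30, 32, 29, 31, 30, 28, 31, 30]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

# Common convention: control limits at mean +/- 3 standard deviations
ucl = mean + 3 * stdev   # upper control limit
lcl = mean - 3 * stdev   # lower control limit

# Check newer iterations against the limits
recent = {9: 31, 10: 29, 11: 45}
for iteration, velocity in recent.items():
    if velocity < lcl or velocity > ucl:
        print(f"Iteration {iteration}: velocity {velocity} is outside the control "
              f"limits ({lcl:.1f} - {ucl:.1f}) -> perform a root cause analysis")
```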


Escaped Defects

Agile performance (on quality) can also be measured by the number of escaped defects (defects found by customers)

defects should be found and fixed during coding and testing

defects found early are much less expensive to fix than defects found late


Problem Resolution


The Five WHYs

a systematic approach to identifying the root cause of a problem by analyzing the cause-and-effect chain behind the problem or issue

performed by repeatedly asking the question "Why" (at least 5 times) until the root cause has been identified

imaginary example: Looking for the root cause for failing the PMI-ACP Exam


1. Why did I fail the PMI-ACP Exam?

  • Because I got a lower mark than the passing mark

2. Why did I get a lower mark?

  • Because I was not sure about the answers to many questions.

3. Why was I not sure about the answers to many questions?

  • Because I could not remember some facts for the exam.

4. Why couldn't I remember some facts for the exam?

  • Because I was not familiar with the PMI-ACP Exam content.

5. Why was I not familiar with the PMI-ACP Exam content?

  • Because I did not spend enough time revising the PMI-ACP Exam notes.


Fishbone Diagram Analysis

another tool for carrying out cause-and-effect analysis to help discover the root cause of a problem or the bottlenecks of a process

aka cause-and-effect diagrams / Ishikawa diagrams

To use the Fishbone diagram technique:

  1. write down the problem/issue as the "fish head" and draw a horizontal line as the "spine" of the fish

  2. think of the major factors (at least four) involved in the problem/issue and draw a line branching off from the spine for each factor

  3. identify possible causes and draw lines branching off from the major factors

(your diagram will look like a fishbone now)

  4. analyze the fishbone diagram to single out the most probable causes for further investigation



The remaining domain will be published later in a separate article. ONE MORE TO GO, STAY TUNED!


