Observation Management Procedure

Purpose

Observations, also commonly referred to as findings, exceptions, or deficiencies, are Tier 3 information system risks identified as an output of compliance operations or through other mechanisms, such as a team member self-identifying a system-specific risk.

Scope

This procedure covers risks identified at the information system or business process level. The volume and granularity of these Tier 3 risks make them inappropriate to track in the Bramble Lab Risk Register; instead, this observation management process guides team members on how to track Tier 3 risks.

Roles and Responsibilities

| Role | Responsibility |
|------|----------------|
| Security Compliance | Executes Security control tests to assess the design and operating effectiveness of Security and IT general controls. |
| Internal Audit | Executes Internal Audit control tests to assess the design and operating effectiveness of all internal controls as required by the audit plan. |
| Risk and Field Security | Executes Third Party Risk Management (TPRM) risk and security assessments to determine the risk associated with third party applications and services. |
| Observation Identifier | Acts as the observation DRI through the observation lifecycle, including designing remediation plans to meet legal and regulatory requirements. |
| Remediation Owner | Validates the observation, confirms the assignee and stop date (due date), finalizes the remediation plan, and conducts remediation activity based on defined remediation SLAs. |
| Observation Program DRI | Conducts regular reviews of program health and delivers stakeholder reports. |
| Managers to Executive Leadership | Escalate as necessary and allocate resources for remediation activity. |

Observation Phases Overview


```mermaid
graph TD;
  A[Identified] --> B[Assigned];
  B --> C[Remediation in progress];
  B --> D[Ignored or Invalid];
  C --> F[Resolved];
```

Procedure

The following phases walk through the observation lifecycle.

Identifying Observations

Observations can be identified in the following ways:

  1. Internal audit activities
  2. Security control testing
  3. Third Party Risk Management (TPRM) activities
  4. Customer Assurance activities
  5. External audits
  6. Third party application scanning (BitSight)
  7. Ad-hoc issues

Observation Identifier Responsibilities Based on Observation Type:

| Identified Observation | Responsible Party |
|------------------------|-------------------|
| Internal audit activities | Security Compliance |
| Security control testing | Security Compliance |
| Third Party Risk Management (TPRM) activities | Security Compliance |
| Customer assurance activities | Security Compliance |
| External audits | Security Compliance |
| Third party application scanning (BitSight) | Security Compliance |
| Ad-hoc issues | Security Compliance |

Recording Observations

The Observation Identifier serves as the Observation DRI through the observation lifecycle. They complete all necessary observation information and remediation recommendations for the Remediation Owner and manage the observation through its lifecycle. This includes creating the observation, validating it with the Remediation Owner, tracking all remediation progress, and updating the associated ZenGRC issue with current information and status updates. Each observation has both a GitLab Issue (for Remediation Owners) and a mirrored ZenGRC Issue (for Observation Identifiers). Each observation is assigned a risk rating, which should drive the priority of remediation.

If multiple observation issues relate to the same root cause or are blocked by the same component of work, they are linked under an Epic to show more clearly how the observation issues are connected.

To ensure transparency across the organization, Security Compliance documents observations in the Assurance project.

Observation Risk Ratings

Tier 3 information system risk ratings are based on the STORM risk rating methodology.

Risk Rating = Likelihood x Impact

Determining the likelihood

At Bramble, observations will be rated based on the likelihood the observation has of recurring and/or the frequency that the control has seen observations. The criteria used to assess this likelihood can be found in the Likelihood Table below. Note that there are two different definitions for each likelihood rating level:

  • Control Observation: These criteria are used to rate observations identified as an output of control testing (e.g. where control testing performed internally by Internal Audit and/or Security Compliance has failed). The Likelihood Table assumes observations are considered individually rather than in aggregate (for example, if 2 similar observations occur against a single test of a sample of 25, the failure rate is 8% and would be scored a 3; the control does not need to be tested multiple times in the current year or prior 9 months with an observation each time to meet the requirement for a score of 3).
  • Information System Risk (Tier 3): These criteria are used to score the likelihood of an information system risk being exploited (e.g. insufficient encryption mechanisms for the storage of data within [System Name] result in the unintentional exposure/leakage of this information to the public).

Likelihood Table

| Qualitative Score | Control Observation | Information System Risk (Tier 3) |
|---|---|---|
| 1 | The observation noted is considered to be a one-off occurrence for the control as a result of extenuating circumstances. It is unlikely to occur again once remediated. | Theoretically impossible and/or requires significant technical expertise for the risk to be exploited. |
| 2 | The observation was identified as a result of management's oversight on the control and may potentially occur again. This is the only observation associated with the control in the current fiscal year or prior 9 months, whichever is longer. | Even with technical expertise, it is somewhat difficult to exploit the risk. |
| 3 | The control has had multiple observations in the current fiscal year or prior 9 months, whichever is longer. | Minimal expertise is required to exploit the risk. |
| 4 | The control has observations that have persisted and continue to occur year to year AND/OR the observation noted is associated with the design of the control. | The risk can be easily exploited and does not require any technical expertise. |
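
The control-observation criteria above can be approximated programmatically for triage purposes. The following is a minimal, illustrative sketch only; the inputs (whether the observation relates to control design, whether observations have persisted year to year, and how many observations the control has had in the current period) are assumptions about how a team might capture the table's criteria, not part of the official methodology.

```python
def likelihood_score(design_issue: bool, persists_year_to_year: bool,
                     observations_in_period: int, extenuating_circumstances: bool) -> int:
    """Approximate the Likelihood Table criteria for control observations (illustrative only)."""
    if design_issue or persists_year_to_year:
        return 4  # design-related observation, or observations recurring year to year
    if observations_in_period > 1:
        return 3  # multiple observations in the current fiscal year or prior 9 months
    if extenuating_circumstances:
        return 1  # one-off occurrence due to extenuating circumstances, unlikely to recur
    return 2      # single observation (management oversight) that may potentially occur again
```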

Determining the observation impact

In addition to applying a qualitative scoring factor for likelihood, all observations need to be evaluated for the impact they would have on Bramble at the organization level and/or the compliance impact (if applicable). The criteria and qualitative scores for assessing the impact of an observation can be found in the Impact Scoring Table below. The highest rating in any field is taken as the final impact score of the observation so that observations are approached more conservatively (for example, if all fields are rated at a value of 2 except Remediation Effort, which is scored a 3, the final impact score would be a 3).

Important Note: Team members who leverage the impact scoring criteria below may use judgment to select the impact factors most relevant to them. Internal Audit and Security Compliance use all columns when scoring observations identified as part of controls testing because there may be specific impacts to external compliance audit requirements as a result of these findings. Any information system risk identified outside of control testing may be scored using only the columns that are most relevant.

Impact Scoring Table

| Qualitative Score | External Audit Impact | Remediation Effort | Financial Impact | Legal & Regulatory Impact | Stakeholder/ICOFR (Internal Controls Over Financial Reporting) Impact |
|---|---|---|---|---|---|
| 1 | The observation would not lead to an adverse audit opinion. | The observation was related to extenuating circumstances and requires simple reinforcement of policy/process; no additional management oversight required. | Potential financial impact resulting in loss/misstatement of up to $25K. | The observation would not lead to major action by a regulator. | The observation has minimal impact on all stakeholders (internal and external). |
| 2 | The observation would likely not lead to an adverse audit opinion because it is an isolated occurrence. | Remediation requires oversight/support at the management level. | Potential financial impact resulting in loss/misstatement between $25K and $250K. | The observation could lead to minor regulatory action. | The observation impacts internal stakeholders and/or could lead to financial misstatements if not addressed on time. |
| 3 | The observation is likely to result in an adverse audit opinion if a full sample for remediation testing cannot be provided. | Remediation requires oversight/support at the director level. | Potential financial impact resulting in loss/misstatement between $250K and $500K. | The observation could lead to an investigation or regulatory action. | The observation impacts internal and external stakeholders. It requires the attention of Executives and the Board. |
| 4 | The observation will result in an adverse audit opinion. | Remediation requires oversight/support at the Executive level. | Potential financial impact resulting in loss/misstatement above $500K. | The observation could directly result in major regulatory action against Bramble. | The observation impacts internal and external stakeholders. It requires the attention of Executives and the Board and could impact management assertions in the 10-Q/10-K. |
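
As a worked illustration of the "highest rating wins" rule described above, the sketch below takes the scores assigned to whichever impact factors were assessed and returns the maximum. The factor names are hypothetical labels for the table columns.

```python
def impact_score(factor_scores: dict[str, int]) -> int:
    """Final impact score = highest score across the impact factors that were assessed."""
    if not factor_scores:
        raise ValueError("At least one impact factor must be scored")
    if not all(1 <= s <= 4 for s in factor_scores.values()):
        raise ValueError("Each impact factor must be scored between 1 and 4")
    return max(factor_scores.values())

# Example from the procedure: every factor scored 2 except Remediation Effort scored 3.
print(impact_score({
    "external_audit": 2, "remediation_effort": 3, "financial": 2,
    "legal_regulatory": 2, "stakeholder_icofr": 2,
}))  # -> 3
```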

Determining the Observation Risk Rating

To arrive at a final observation risk rating, the likelihood and impact scores of an observation are multiplied together. The final score determines whether the observation is a LOW, MODERATE, or HIGH risk observation, per the Observation Risk Rating Table.

Observation Risk Rating Table

Observation Risk Matrix

| Likelihood \ Impact Score | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| 1 | 1 | 2 | 3 | 4 |
| 2 | 2 | 4 | 6 | 8 |
| 3 | 3 | 6 | 9 | 12 |
| 4 | 4 | 8 | 12 | 16 |

Observation Risk Thresholds

| Risk Rating | Score |
|---|---|
| LOW | 1 - 3 |
| MODERATE | 4 - 9 |
| HIGH | 12 - 16 |
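
The multiplication and thresholds above translate directly into a short calculation. The sketch below is illustrative only and assumes the likelihood and impact scores have already been determined using the tables in the preceding sections.

```python
def observation_risk_rating(likelihood: int, impact: int) -> tuple[int, str]:
    """Risk Rating = Likelihood x Impact, bucketed per the Observation Risk Thresholds."""
    if not (1 <= likelihood <= 4 and 1 <= impact <= 4):
        raise ValueError("Likelihood and impact scores must be between 1 and 4")
    score = likelihood * impact
    if score <= 3:
        return score, "LOW"
    if score <= 9:
        return score, "MODERATE"
    return score, "HIGH"

print(observation_risk_rating(3, 2))  # -> (6, 'MODERATE')
```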

Additional Considerations Specific to Control Observations

The procedures outlined in the following sections are used specifically by Internal Audit and Security Compliance. Team members who use the observation management program to rate information system risks outside of control testing activities do not need to engage in these procedures.

Determining the Individual Control Health & Effectiveness Rating (CHER)

Risk rating each control observation matters when making a final determination of a control's Control Health & Effectiveness Rating (CHER). Because CHER is a sliding scale rather than the typical effective/ineffective rating used for compliance, it allows for clearer communication and prioritization with broader audiences outside of compliance functions and lets non-compliance stakeholders see how observations impact the control environment.

CHER provides a qualitative value of a control's effectiveness that is used as an input for various processes within the Risk Management Program. When reporting to management, these quantitative values are translated to qualitative terms: Fully Effective, Substantially Effective, Partially Effective, Largely Ineffective, Ineffective. Refer to the CHER Quantitative vs. Qualitative Terms and Definitions Table below for a mapping of CHER to its definition and the related qualitative term and definition. Take the risk rating determined from the likelihood and impact scores and apply it to the table below (for example, if a control has 1 LOW risk observation per the Observation Risk Rating Table, the CHER for that control would be a 2 (Substantially Effective)).

CHER Quantitative vs. Qualitative Terms and Definitions (For individual controls)

| Quantitative Value | Quantitative Definition | CHER Qualitative Term | Qualitative Definition |
|---|---|---|---|
| 1 | The control has no outstanding HIGH, MODERATE, or LOW risk observations open. | Fully Effective | Nothing more to be done except review and monitor existing controls. Controls are well designed for the risk and address the root causes. Management believes they are effective and reliable at all times. |
| 2 | There are no outstanding HIGH or MODERATE risk observations associated with the control, but there are some LOW risk observations that are open. | Substantially Effective | Most controls are designed correctly, in place, and effective. Some more work to be done to improve operating effectiveness, or there are doubts about operational effectiveness and consistent reliability. |
| 3 | There are no outstanding HIGH risk observations associated with the control, but there is a single open MODERATE (below 9 rating) risk observation and any number of LOW risk observations. | Partially Effective | The design of controls is largely correct and they treat most of the root causes of the risk; however, they are not currently operating very effectively. |
| 4 | There are no outstanding HIGH risk observations associated with the control, but there are multiple open MODERATE (below 9 rating) risk observations OR a single open MODERATE risk observation with a 9 rating. There can be any number of LOW risk observations. | Largely Ineffective | Significant control gaps. Either controls do not treat root causes or they do not operate at all effectively. |
| 5 | There are outstanding HIGH risk observations associated with the control. | Ineffective | Practically no credible control. Management has almost no confidence that any degree of control is being achieved due to poor control design or very limited operational effectiveness. |
| 0 | The control is not yet implemented. | Control Not Implemented | The control is not implemented, and this is expected. This is different from a control gap because of the awareness around the control and the intentional exclusion of the control from being a key control in the environment. There are other sufficient controls in place to secure the environment. |
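
To show how the table above could be applied mechanically, here is an illustrative sketch that derives a CHER from the risk scores of a control's open observations. It is an interpretation of the table, not an official tool, and it assumes each open observation is represented by its numeric score from the Observation Risk Rating Table.

```python
def control_cher(open_observation_scores: list[int], implemented: bool = True) -> int:
    """Derive a Control Health & Effectiveness Rating (CHER) from open observation scores."""
    if not implemented:
        return 0  # Control Not Implemented
    highs = [s for s in open_observation_scores if s >= 12]         # HIGH: 12 - 16
    moderates = [s for s in open_observation_scores if 4 <= s <= 9]  # MODERATE: 4 - 9
    if highs:
        return 5  # Ineffective: outstanding HIGH risk observations
    if len(moderates) > 1 or any(s == 9 for s in moderates):
        return 4  # Largely Ineffective: multiple MODERATEs, or a single MODERATE rated 9
    if moderates:
        return 3  # Partially Effective: a single MODERATE below a 9 rating
    if open_observation_scores:
        return 2  # Substantially Effective: only LOW risk observations open
    return 1      # Fully Effective: no open observations

print(control_cher([3]))  # one LOW risk observation -> 2 (Substantially Effective)
```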

System Health Rating - Quantitative vs. Qualitative Terms and Definitions

CHER is assigned on a control-by-control basis, but when reporting on system health, the ratio of controls with a CHER other than 1 (i.e. a CHER of 2, 3, 4, 5, or 0) to the number of applicable controls assessed against the system is determined. That ratio is used to determine the system health rating from the following table:

| Ratio of assessed controls with a CHER of 2, 3, 4, 5, or 0 | System Health Rating Value |
|---|---|
| Between 0% and 5% of controls | 1 |
| Between 5% and 35% of controls | 2 |
| Greater than 35%, up to 65% of controls | 3 |
| Greater than 65%, up to 85% of controls | 4 |
| Greater than 85% of controls | 5 |
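
A minimal sketch of the ratio-based lookup above follows, assuming a list of CHER values for the applicable controls assessed against the system. The handling of exact boundary values (for example, treating exactly 5% as the lower band) is an assumption, since the table does not specify it.

```python
def system_health_rating(cher_values: list[int]) -> int:
    """Map the share of assessed controls with a CHER other than 1 to a System Health Rating Value."""
    if not cher_values:
        raise ValueError("At least one assessed control is required")
    ratio = sum(1 for c in cher_values if c != 1) / len(cher_values)
    if ratio <= 0.05:
        return 1
    if ratio <= 0.35:
        return 2
    if ratio <= 0.65:
        return 3
    if ratio <= 0.85:
        return 4
    return 5

print(system_health_rating([1, 1, 1, 2, 3]))  # 2 of 5 controls (40%) have CHER != 1 -> 3
```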

Refer to the System Health Rating Table below for a mapping of the System Health Rating Value to the qualitative term and definition that can be used to report on system health/effectiveness. Note that when averaging CHER values, the final average should be rounded up to the nearest quantitative value to determine the CHER for the system (for example, if the average of all CHERs equals 2.3, the final CHER for the system would be rounded up to a 3).

System Health Rating Table

| System Health Rating Value | System Health Rating Qualitative Term | Qualitative Definition |
|---|---|---|
| 1 | Fully Effective | Nothing more to be done except review and monitor the existing controls. Controls are well designed for the risk, and address the root causes. Management believes they are effective and reliable at all times. |
| 2 | Substantially Effective | Most controls are designed correctly and are in place and effective. Some more work to be done to improve operating effectiveness or management has doubts about operational effectiveness and reliability. |
| 3 | Partially Effective | While the design of controls may be largely correct in that they treat most of the root causes of the risk, they are not currently operating very effectively. |
| 4 | Largely Ineffective | Significant control gaps. Either controls do not treat root causes or they do not operate at all effectively. |
| 5 | Ineffective | Virtually no credible control. Management has no confidence that any degree of control is being achieved due to poor control design or very limited operational effectiveness. |

Control Family Effectiveness Rating - Quantitative vs. Qualitative Terms and Definitions

CHER is assigned on a control-by-control basis, but when reporting on control family effectiveness, the CHER for each of the individual underlying controls in a control family can be averaged to provide a more holistic view. Refer to the Control Family Effectiveness Rating Table below for a mapping of averaged CHERs to the qualitative term and definition that can be used to report on control family health/effectiveness. Note that when using this table, the final average of CHER values should be rounded up to the nearest quantitative value to determine the CHER for the control family (for example, if the average of all CHERs equals 2.3, the final CHER for the control family would be rounded up to a 3).

Control Family Effectiveness Rating Table

| Quantitative Value | Control Family Effectiveness Rating Qualitative Term | Qualitative Definition |
|---|---|---|
| 1 | Fully Effective | Nothing more to be done except review and monitor the existing controls. Controls are well designed for the risk, and address the root causes. Management believes they are effective and reliable at all times. |
| 2 | Substantially Effective | Most controls are designed correctly and are in place and effective. Some more work to be done to improve operating effectiveness or management has doubts about operational effectiveness and reliability. |
| 3 | Partially Effective | While the design of controls may be largely correct in that they treat most of the root causes of the risk, they are not currently operating very effectively. |
| 4 | Largely Ineffective | Significant control gaps. Either controls do not treat root causes or they do not operate at all effectively. |
| 5 | Ineffective | Virtually no credible control. Management has no confidence that any degree of control is being achieved due to poor control design or very limited operational effectiveness. |
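
The averaging-and-round-up step described above (an average of 2.3 rounds up to 3) can be expressed as a ceiling of the mean. This sketch is illustrative only; excluding controls with a CHER of 0 from the average is an assumption, since the procedure does not state how not-implemented controls are treated.

```python
import math

def control_family_cher(cher_values: list[int]) -> int:
    """Average a control family's CHERs and round up to the nearest quantitative value."""
    rated = [c for c in cher_values if c != 0]  # assumption: exclude not-implemented controls
    if not rated:
        raise ValueError("No rated controls in the family")
    return math.ceil(sum(rated) / len(rated))

print(control_family_cher([2, 2, 3]))  # average of 2.33 rounds up to 3 (Partially Effective)
```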

CHER Override

To account for edge case scenarios or other extenuating circumstances that may not be modeled appropriately by the Bramble Observation Management methodology as outlined, the final CHER can be downgraded (i.e. moved from 2 to 3) at the discretion of the Security Compliance Director if it is determined that the observation's risk rating, and therefore the CHER, does not appropriately reflect the current control or control environment health. The rating cannot be upgraded (i.e. moved from 4 to 3), to ensure a conservative approach to securing the organization and managing risk.

Remediation

Once all remediation activities have been completed, the Remediation Owner is responsible for tagging the Observation Identifier in the observation issue. If no individual Observation Identifier is assigned, tag the Security Compliance Team. The Observation Identifier will then validate the remediation activity for completeness, re-test the observation as appropriate and close the observation issue.

It is the responsibility of the Observation Identifier to track the milestones, work progress and validation of the remediation activity.

The remediation workflow by observation stage can be found here (access is available only to internal Bramble team members)

Non Remediation Owner Actions To Support Observation Closure

In cases where internal stakeholders (not the Remediation Owner) provide remediation documentation to support closure of the observation, tag the Observation Identifier in the observation issue. This triggers the Observation Identifier to validate the remediation activity for completeness, re-test as appropriate, and close the observation.

Remediation SLA

Observation remediation SLAs are determined by the risk rating of the individual observation. The following table shows the SLA for each risk rating:

| Risk Rating | Remediation SLA | Remediation Goal |
|---|---|---|
| High | 6 months, or as otherwise defined by the agreed upon remediation plan | 4 weeks |
| Moderate | 6-12 months, or as otherwise defined by the agreed upon remediation plan | 6 weeks |
| Low | > 12 months, or as otherwise defined by the agreed upon remediation plan | 8 weeks |

A more detailed SLA and Remediation Goal process can be found here (access is available only to internal Bramble team members)
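
For illustration, the SLA and goal mapping above can be represented as a simple lookup. The structure below is a hypothetical convenience for tracking, not an official tool; the SLA text is carried through from the table verbatim.

```python
# Illustrative lookup of remediation SLA and Remediation Goal by observation risk rating.
REMEDIATION_SLA = {
    "HIGH": {"sla": "6 months, or as otherwise defined by the agreed upon remediation plan", "goal_weeks": 4},
    "MODERATE": {"sla": "6-12 months, or as otherwise defined by the agreed upon remediation plan", "goal_weeks": 6},
    "LOW": {"sla": "> 12 months, or as otherwise defined by the agreed upon remediation plan", "goal_weeks": 8},
}

print(REMEDIATION_SLA["MODERATE"]["goal_weeks"])  # -> 6
```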

Opportunities for Improvement (OFI)

Throughout the course of testing or general monitoring of the Bramble ecosystem, Opportunities for Improvement (OFI) may be identified and documented so that the overall control environment and Bramble’s processes can be improved.

To capture an OFI, create an issue in the Assurance project and add the RiskRating::OFI label.

OFIs do not have defined remediation SLAs, as they are process improvements or suggestions only. The Remediation Goal to communicate the OFI to the appropriate stakeholder is 10 weeks.

What is the difference between an OFI and an Observation?

  • Observations are tied to specific testing attributes and/or reflect areas where a third party compliance professional would be of the opinion that a relevant risk would not be, or has not been, mitigated.
  • OFIs are not tied to specific testing attributes and are general areas of improvement that may streamline compliance or business activities.
  • Observations will always impact control effectiveness ratings
  • OFIs will never impact control effectiveness ratings

Exceptions

Exceptions will be created for observations that breach a mutually agreed upon remediation date, breach the SLA, or that the Remediation Owner confirms will not be remediated.

Exceptions to this procedure will be tracked as per the Information Security Policy Exception Management Process.

Contact

If you have any questions or feedback about the security compliance observation management process please contact the Bramble security compliance team.