The Software Security Risk Equation

Risk is about considering what might go wrong. If the “wrong” thing is the violation of security requirements, then it is a security risk: concern about a failure of security. [NIST] defines risk as “a measure of the extent to which an entity is threatened by a potential circumstance or event, and typically a function of: (i) the adverse impacts that would arise if the circumstance or event occurs; and (ii) the likelihood of occurrence.”

Adverse impacts: The three major components of security are conventionally regarded as confidentiality, integrity, and availability. Adverse impacts therefore need to be analyzed for each of the failure modes in which these components are compromised, asking, for example, “What is the consequence of a breach of confidentiality?” Risk management would address means of mitigation (reducing the consequences when a failure occurs).

Likelihood of occurrence: The different failure modes may well have different probabilities of occurring, and each would be the result of a series of contributing factors. Just as there would be a distribution of potential consequences, the likelihood of various causal chains (initiated by different threat agents) would be distributed across a range of values. Risk management would address means of avoidance (reducing the probability of a failure occurring).

Quantifying risk is typically done in terms of risk exposure, which is calculated from these two basic factors:

Risk Exposure = (Event Probability) X (Event Consequence)

With both probabilities and consequences as distributions, one might want to consider best-case and worst-case bounds (lowest and highest values) and perhaps also an estimated most-likely value. Organizationally, risk exposures would also be summed across all potentially damaging events. Ultimately, actual security failures, or successful exploits, would contribute to actual (monetary or other) losses to the organization.
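As a concrete illustration of the exposure calculation, the sketch below multiplies probability by consequence, carries best-case and worst-case bounds for each event, and sums exposures across events. The event names and all the numbers are hypothetical assumptions, not data from the text:

```python
# Sketch of a risk-exposure calculation (illustrative names and numbers only).
# Risk Exposure = (Event Probability) X (Event Consequence), with low/high
# bounds because both factors are really distributions, not point values.

from dataclasses import dataclass

@dataclass
class RiskEstimate:
    p_low: float    # best-case (lowest) probability of the event
    p_high: float   # worst-case (highest) probability of the event
    c_low: float    # best-case consequence (e.g., dollars lost)
    c_high: float   # worst-case consequence

    def exposure_bounds(self):
        """Return (best-case, worst-case) risk exposure for this event."""
        return (self.p_low * self.c_low, self.p_high * self.c_high)

# Hypothetical damaging events for a single system, one per failure mode
events = {
    "confidentiality breach": RiskEstimate(0.05, 0.20, 10_000, 500_000),
    "integrity breach":       RiskEstimate(0.01, 0.10, 50_000, 250_000),
    "availability loss":      RiskEstimate(0.10, 0.30,  5_000, 100_000),
}

# Organizationally, exposures are summed across all potentially damaging events.
total_low = sum(e.exposure_bounds()[0] for e in events.values())
total_high = sum(e.exposure_bounds()[1] for e in events.values())
print(f"Total risk exposure: {total_low:,.0f} to {total_high:,.0f}")
```

A most-likely estimate could be carried the same way, as a third pair of fields between the bounds.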
Given the uncertainties of the various assigned values, another risk management task might be the effort to reduce these uncertainties. Data-driven decisions certainly benefit from better data.

Timing is another consideration for security risks. As discussed below, some of the risk factors are particularly time-dependent. The effectiveness of both avoidance and mitigation actions often is a function of how promptly they are performed.

Drilling Down

Further decomposition of the probability factor provides insights into the particulars of the causal chain. Risk avoidance activities can then seek to reduce each contributing factor and thus reduce the overall probability of failure.

Security failures begin with the actions of a threat agent. Security is breached because the threat agent has a means of exploiting a weakness in the system. The existence and characteristics of exploitable weaknesses, or vulnerabilities, are at the root of all security risks. As a first approximation, the number of successful exploits (security failures) depends on the number of weaknesses in a system that might be exploited:

Nsuccessful_exploits ~ Nvulnerabilities

Of course a threat agent must first know that the vulnerability exists. Given that a vulnerability does exist, what is the probability of its discovery? Here is the initial time-dependent factor -- the real issue is “How long does it take for the vulnerability to be discovered?” The most conservative approach is to assume that, with enough time, the discovery is certain (probability equals one). In the most general case, at any time the number of discovered vulnerabilities is some fraction of all that exist:

Ndiscovered_vulnerabilities = Nvulnerabilities X P(discovery|vulnerability)

Not all vulnerabilities that are discovered will be successfully exploited. Again, timing must be considered here, as some exploits require more time (in gathering information, pursuing alternative attacks, and so on) to be successful.
In general, at any time some fraction of exploits will have been successfully completed:

Nsuccessful_exploits = Ndiscovered_vulnerabilities X P(successful_exploit|discovery)

What contributes to the success of an exploit? First consider how likely an attacker is to possess the skills required to exploit a known vulnerability. Then ask how extensive are the resources (access, computing power) that the attacker might bring to bear. And finally evaluate the motivations that would keep an attacker on task to successful completion of the attack.

P(successful_exploit) = P(skills) X P(resources) X P(motivation)

Alternative Frameworks

In another setting, a target-analysis process [CARVER] has these components:

CRITICALITY … “Criticality means target value.”
ACCESSIBILITY … “A target is accessible when an operational element can reach the target with sufficient personnel and equipment to accomplish its mission.”
RECUPERABILITY … “How long will it take to replace, repair, or bypass the destruction of or damage to the target?”
VULNERABILITY … “A target is vulnerable if the operational element has the means and expertise to successfully attack the target.”
EFFECT … “a measure of possible military, political, economic, psychological, and sociological impacts at the target and beyond.”
RECOGNIZABILITY … “the degree to which it can be recognized by an operational element and/or intelligence collection and reconnaissance assets under varying conditions.”

In terms of risk exposure calculations, EFFECT and RECUPERABILITY (a time element, again) can be thought of as aspects of consequences, while the other terms enter into the calculation of probability.
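The full multiplicative chain above can be sketched as a single function. The vulnerability count and all probabilities below are illustrative assumptions chosen only to show how the factors compose:

```python
# Sketch of the decomposed probability chain from the text:
#   Nsuccessful_exploits = Nvulnerabilities
#                          X P(discovery)
#                          X P(skills) X P(resources) X P(motivation)
# All inputs here are hypothetical, not measured rates.

def expected_successful_exploits(
    n_vulnerabilities: int,
    p_discovery: float,   # P(a vulnerability is discovered), time-dependent
    p_skills: float,      # P(attacker has the required skills)
    p_resources: float,   # P(attacker can bring sufficient resources to bear)
    p_motivation: float,  # P(attacker stays on task to completion)
) -> float:
    """Expected number of successful exploits under the multiplicative model."""
    n_discovered = n_vulnerabilities * p_discovery
    p_successful_exploit = p_skills * p_resources * p_motivation
    return n_discovered * p_successful_exploit

# Hypothetical system with 200 latent vulnerabilities:
# 200 X 0.25 X (0.8 X 0.5 X 0.9) gives roughly 18 expected successful exploits.
print(expected_successful_exploits(200, 0.25, 0.8, 0.5, 0.9))
```

The conservative assumption in the text (discovery certain given enough time) corresponds to letting p_discovery tend to 1 as time grows, which makes the vulnerability count itself the dominant factor.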
Rearranging the sequence of probability factors as (RECOGNIZABILITY) X (VULNERABILITY) X (ACCESSIBILITY) X (CRITICALITY) is analogous to the previous labeling of (discovery) X (skills) X (resources) X (motivation).

Yet another alternative framework, from [Defense Science Board], portrays risk as a function of the following parameters:

Threat = Intent (desire + expectance) and Capabilities (resources + knowledge)
Vulnerabilities = Inherent and Introduced
Consequences = Fixable and Fatal

Helpfully, this characterization suggests corresponding risk management responses:

Intent … Deter
Capabilities … Disrupt
Inherent Vulnerabilities … Defend
Introduced Vulnerabilities … Detect
Fixable Consequences … Restore
Fatal Consequences … Discard

These alternative representations use different terminology and groupings, but they do not introduce any new contributing factors.

Risk Management Responses

Let’s return to:

Nsuccessful_exploits = Nvulnerabilities X P(discovery) X P(skills) X P(resources) X P(motivation)

That is, the number of successful exploits can be treated in terms of vulnerability prevalence, vulnerability discoverability, attacker skills, attacker resources, and attacker motivation. Using this terminology, what risk management actions are available?

Vulnerability Prevalence

Vulnerabilities can only be exploited if they exist. The fewer defects in a product or system, the fewer opportunities for those defects to be exploited. Over time, removing these defects reduces the attack surface, although it is not certain how much risk exposure is reduced, given uncertainty in the threat agent’s subsequent responses. This factor can be reduced by (a) injecting fewer vulnerabilities during development and (b) identifying and removing vulnerabilities as soon as possible.

Vulnerability Discoverability

“Security through obscurity” is rightly regarded pejoratively. Zero-day exploits are increasingly common. What is unconscionable is a repair delay measured in weeks and months.
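The [Defense Science Board] parameter-to-response pairing is essentially a lookup table, and can be transcribed directly as data:

```python
# Direct transcription of the Defense Science Board pairing of risk
# parameters to risk management responses, as listed in the text.
RESPONSES = {
    "Intent": "Deter",
    "Capabilities": "Disrupt",
    "Inherent Vulnerabilities": "Defend",
    "Introduced Vulnerabilities": "Detect",
    "Fixable Consequences": "Restore",
    "Fatal Consequences": "Discard",
}

def response_for(parameter: str) -> str:
    """Return the suggested risk management response for a risk parameter."""
    return RESPONSES[parameter]

print(response_for("Introduced Vulnerabilities"))  # Detect
```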
This factor probably cannot be reduced much except through the timeliness of remediation, so the emphasis remains on finding and fixing vulnerabilities as soon as possible.

Attacker Skills

As exploit kits become more sophisticated and widely available, the likelihood of an attacker having sufficient skill is another factor ramping up toward unity. Threat agents increasingly draw, not on their own skills, but on commoditized capabilities. This factor also probably cannot be reduced much, except through possible countermeasures against attacks known to be supported by available tools.

Attacker Resources

Most broadly, a security failure will occur when a threat capability exceeds the ability to resist the threat. One way the security level of a target system can be measured operationally is in terms of the time-to-compromise of the system. The longer it takes to compromise a system, the more likely the attacker is to abandon the attempt. Coupled with extensive “trip-wiring” of the system, this delay will also increase the likelihood of detection and defensive response. This factor can be reduced by (a) making time-to-compromise as large as possible and (b) making compromises as obvious as possible.

Attacker Motivation

Whether personal, corporate, criminal, state-sponsored, or ideologically driven, different threat agents will have different levels of commitment and thresholds of deterrence. One great uncertainty in the motivation component is estimating the value the target might possess in the eyes of a potential attacker. “Thinking like an adversary” is required; each adversary may have quite different incentives as well as disincentives (such as the desire to avoid detection or to maintain anonymity). Consider the classic “MICE” motivations: Money, Ideology, Compromise (or Coercion), and Ego (or Extortion). Security practitioners should expect that any of these motivations could be driving the attackers against whom they need to defend.
This factor seems the least affected by system developers or users. Their most promising response might be in terms of deterrence, lowering the expectation of success on the part of the attacker.

References

[CARVER] U.S. Army Field Manual 34-36, Special Operations Forces Intelligence and Electronic Warfare Operations, Appendix D: Target Analysis Process. U.S. Department of the Army. September 1991.

[Defense Science Board] Task Force Report: Resilient Military Systems and the Advanced Cyber Threat. U.S. Department of Defense. January 2013.

[NIST] Special Publication 800-53, Revision 4. Security and Privacy Controls for Federal Information Systems and Organizations. National Institute of Standards and Technology. April 2013.