Definitions
The following are key terms used in this document and their definitions as those terms are meant to
be understood in this document.
Abuse | Malicious misuse, usually with the objective of alteration, disruption, or destruction.
Assurance | Justifiable grounds for confidence that the required properties of the software have
been adequately exhibited. In some definitions, assurance also incorporates the activities that
enable the software to achieve a state in which its required properties can be verified or assured.
Grounds for confidence that the other four security goals (integrity, availability, confidentiality, and accountability) have been adequately met by a specific implementation. “Adequately met” includes (1) functionality that performs correctly, (2) sufficient protection against unintentional errors (by users or software), and (3)
sufficient resistance to intentional penetration or by-pass.
SOURCE: SP 800-27
The grounds for confidence that the set of intended security controls in an information system are effective in their application.
SOURCE: SP 800-37; SP 800-53A
Measure of confidence that the security features, practices, procedures, and architecture of an information system accurately mediates and enforces the security policy.
SOURCE: CNSSI-4009; SP 800-39
In the context of OMB M-04-04 and this document, assurance is
defined as 1) the degree of confidence in the vetting process used
to establish the identity of an individual to whom the credential
was issued, and 2) the degree of confidence that the individual
who uses the credential is the individual to whom the credential
was issued.
SOURCE: SP 800-63
NOTE: This and subsequent quoted SOURCE entries are drawn from NIST IR 7298 Rev. 2.
Attack | An attempt to gain unauthorized access to a system’s services or to compromise one of
the system’s required properties (integrity, availability, correctness, predictability, reliability, etc.).
When a software-intensive system or component is the target, the attack will most likely manifest
as an intentional error or fault that exploits a vulnerability or weakness in the targeted software.
An attempt to gain unauthorized access to system services, resources, or information, or an attempt to compromise system integrity.
SOURCE: SP 800-32
Any kind of malicious activity that attempts to collect, disrupt,
deny, degrade, or destroy information system resources or the information itself.
SOURCE: CNSSI-4009
Availability | The degree to which the services of a system or component are operational and
accessible when needed by their intended users. When availability is considered as a security
property, the intended users must be authorized to access the specific services they attempt to
access, and to perform the specific actions they attempt to perform. The need for availability
generates the requirements that the system or component be able to resist or withstand attempts
to delete, disconnect, or otherwise render the system or component inoperable or inaccessible,
regardless of whether those attempts are intentional or accidental. The violation of availability is
referred to as Denial of Service or sabotage.
Ensuring timely and reliable access to and use of information.
SOURCE: SP 800-53; SP 800-53A; SP 800-27; SP 800-60; SP
800-37; FIPS 200; FIPS 199; 44 U.S.C., Sec. 3542
The property of being accessible and useable upon demand by
an authorized entity.
SOURCE: CNSSI-4009
Blackhat | A person who gains unauthorized access to and/or otherwise compromises the
security of a computer system or network.
Component | A part or element within a larger system. A component may be constructed of
hardware or software and may be divisible into smaller components. In the strictest definition, a
component must have —
 A contractually specified interface (or interfaces)
 Explicit context dependencies
 The ability to be deployed independently
 The ability to be assembled or composed with other components by someone other
than its developer.
In the less restrictive definition used in this SOAR, a component may also be a code module
or code unit. A code unit is either one of the following:
 A separately testable element of a software component
 A software component that cannot be further decomposed into constituent
components
 A logically separable part of a computer program.
A code module is one of the following:
 A program unit that is discrete and identifiable with respect to compilation,
combination with other units, and loading, i.e., a code unit
 A logically separable part of a computer program, i.e., a code unit.
Compromise | A violation of the security policy of a system, or an incident in which any of the
security properties of the system are violated.
Disclosure of information to unauthorized persons, or a violation of the security policy of a system in
which unauthorized intentional or unintentional disclosure, modification, destruction, or loss of an object may have occurred.
SOURCE: SP 800-32
Compromise –
The unauthorized disclosure, modification, substitution, or use of sensitive data (including plaintext
cryptographic keys and other CSPs).
SOURCE: FIPS 140-2
Disclosure of information to unauthorized persons, or a violation of the security policy of a system in
which unauthorized intentional or unintentional disclosure, modification, destruction, or loss of an object may have occurred.
SOURCE: CNSSI-4009
Correctness | The property that ensures that software performs all of its intended functions as
specified. Correctness can be seen as the degree to which any one of the following is true:
 Software is free from faults in its specification, design, and implementation.
 Software, documentation, and other development artifacts satisfy their
specified requirements.
 Software, documentation, and other development artifacts meet user needs and
expectations, regardless of whether those needs and expectations are specified or
not.
In simple terms, software that is correct is (1) free of faults, and (2) consistent with
its specification.
Countermeasure | An action, device, procedure, technique, or other measure that reduces the
vulnerability or weakness of a component or system.
Actions, devices, procedures, or techniques that meet or oppose (i.e., counters) a threat, a vulnerability, or an attack by eliminating or preventing it, by minimizing the harm it can cause, or by
discovering and reporting it so that corrective action can be taken.
SOURCE: CNSSI-4009
Critical Software | Software the failure of which could have a negative impact on national
security or human safety, or could result in a large financial or social loss. Critical software is also
referred to as high-consequence software.
Denial of Service (DoS) | The intentional violation of the software’s availability resulting from an
action or series of actions that has one of the following outcomes:
 The system’s intended users cannot gain access to the system.
 One or more of the system’s time-critical operations is delayed.
 A critical function in the system fails.
Also referred to as sabotage.
The prevention of authorized access to resources or the delaying of time-critical operations.
(Time-critical may be milliseconds or it may be hours, depending upon the service provided.)
SOURCE: CNSSI-4009
Dependability | The ability of a system to perform its intended functionality or deliver its
intended service correctly and predictably whenever it is called upon to do so. The following
properties of software directly contribute to its dependability:
 Availability
 Integrity
 Reliability
 Survivability
 Trustworthiness
 Security
 Safety.
Error | (1) Deviation of one of the software’s states from correct to incorrect. (2) Discrepancy
between the condition or value actually computed, observed, or measured by the software, and
the true, specified, or theoretically correct value or condition. Some sources give a third meaning
for error: (3) a human action that leads to a failure. For clarity, this SOAR uses the word mistake
to convey this third meaning.
Execution Environment | The aggregation of hardware, software, and networking entities
surrounding the software that directly affects or influences its execution.
Failure | (1) Non-performance by a system or component of an intended function or service.
(2) Deviation of the system’s performance from its specified, expected parameters (such as its
timing constraints).
Fault | The adjudged or hypothesized cause of an error.
Flaw | A mistake of commission, omission, or oversight in the creation of the software’s
requirements, architecture, or design specification that results in an inadequate and often weak
design, or in one or more errors in its implementation. Some software assurance practitioners
object to the word “flaw” because it is often confused with “error,” “fault,” and “defect.” (Just as
“defect” is sometimes similarly confused with “flaw.”)
Error of commission, omission, or oversight in an information system that may allow protection
mechanisms to be bypassed.
SOURCE: CNSSI-4009
Formal | Based on mathematics. (This narrow definition is used in this SOAR to avoid the
confusion that arises when “formal” is used both to mean “mathematically based” and as a
synonym for “structured” or “disciplined.”)
Formal Method | A process by which the system architecture or design is mathematically modeled
and specified, and/or the high-level implementation of the system is verified, through use of
mathematical proofs, to be consistent with its specified requirements, architecture, design, or security
policy.
Mathematical argument which verifies that the system satisfies a mathematically-described security policy.
SOURCE: CNSSI-4009
Independent Verification and Validation (IV&V) | Verification and validation performed by a
third party, i.e., an entity that is neither the developer of the system being verified and validated,
nor the acquirer or user of that system.
A comprehensive review, analysis, and testing (software and/or hardware) performed by an objective third party to confirm (i.e., verify) that the requirements are correctly defined, and to confirm (i.e., validate) that the system correctly implements the required functionality and security
requirements.
SOURCE: CNSSI-4009
Integrity | The property of a system or component that reflects its logical correctness and
reliability, completeness, and consistency. Integrity as a security property generates the
requirement for the system or component to be protected against intentional attempts to do one of
the following:
 Alter or modify the software in an improper or unauthorized manner. (Note that
attempts to destroy the software in an improper or unauthorized manner are
considered attacks on the system’s availability, i.e., Denial of Service attacks)
 Through improper or unauthorized manipulation, cause the software either to
perform its intended function(s) in a manner inconsistent with the system’s
specifications and the intended users’ expectations, or to perform undocumented or
unexpected functions.
Guarding against improper information modification or destruction, and includes ensuring information
non-repudiation and authenticity.
SOURCE: SP 800-53; SP 800-53A; SP 800-18; SP 800-27; SP 800-37; SP 800-60; FIPS 200; FIPS 199;
44 U.S.C., Sec. 3542
Integrity –
The property that sensitive data has not been modified or deleted in an unauthorized and undetected
manner.
SOURCE: FIPS 140-2
The property whereby an entity has not been modified in an unauthorized manner.
SOURCE: CNSSI-4009
Least Privilege | The principle whereby each subject (i.e., actor) in the system is granted only
the most restrictive set of privileges needed by the subject to perform its authorized tasks, and
whereby the subject is allowed to retain those privileges for no longer than it needs them.
The security objective of granting users only those accesses they
need to perform their official duties.
SOURCE: SP 800-12
The principle that a security architecture should be designed so
that each entity is granted the minimum system resources and
authorizations that the entity needs to perform its function.
SOURCE: CNSSI-4009
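The principle above can be sketched in code. The following is a minimal, illustrative Python sketch (the class and method names are hypothetical, not from any cited source): each subject holds only the privileges explicitly granted to it, and a grant may carry a time-to-live so that privileges are not retained longer than needed.

```python
import time

class Subject:
    """A subject (actor) that holds only explicitly granted privileges.

    Illustrative sketch of least privilege: privileges default to absent,
    may carry an expiry, and can be revoked as soon as they are no longer
    needed. Names and the TTL mechanism are assumptions for illustration.
    """

    def __init__(self, name):
        self.name = name
        # privilege -> expiry timestamp, or None for no expiry
        self._grants = {}

    def grant(self, privilege, ttl_seconds=None):
        """Grant one specific privilege, optionally for a limited time."""
        expiry = time.time() + ttl_seconds if ttl_seconds is not None else None
        self._grants[privilege] = expiry

    def revoke(self, privilege):
        """Remove a privilege the moment it is no longer needed."""
        self._grants.pop(privilege, None)

    def has(self, privilege):
        """True only if the privilege was granted and has not expired."""
        if privilege not in self._grants:
            return False
        expiry = self._grants[privilege]
        if expiry is not None and time.time() > expiry:
            del self._grants[privilege]  # expired grants are dropped
            return False
        return True
```

Note that the default is denial: a subject that is never granted a privilege simply does not have it, which mirrors the "most restrictive set of privileges" wording of the definition.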
Malicious Code | Undocumented software or firmware intended to perform an unauthorized or
unanticipated process that will have adverse impact on the dependability of a component or
system. Malicious code may be self-contained (as with viruses, worms, malicious bots, and
Trojan horses), or it may be embedded in another software component (as with logic bombs, time
bombs, and some Trojan horses). Also referred to as malware.
Software or firmware intended to perform an unauthorized process that will have adverse impact
on the confidentiality, integrity, or availability of an information system. A virus, worm, Trojan
horse, or other code-based entity that infects a host. Spyware and some forms of adware are also
examples of malicious code.
SOURCE: SP 800-53; CNSSI-4009
Mistake | An error committed by a person as the result of a bad or incorrect decision or
judgment by that person. Contrast with “error,” which is used in this document to indicate the
result of a “mistake” committed by software (i.e., as the result of an incorrect calculation or
manipulation).
Misuse | Usage that deviates from what is expected (with expectation usually based on the
software’s specification).
Predictability | The properties, states, and behaviors of the system or component never deviate
from what is expected.
Problem | Used interchangeably with anomaly, although “problem” has a more negative
connotation and implies that the anomaly is, or results from, a flaw, defect, fault, error, or failure.
Quality | The degree to which a component, system, or process meets its specified requirements
and/or stated or implied user, customer, or stakeholder needs and expectations.
Reliability | The probability of failure-free (or otherwise satisfactory) software operation for a
specified or expected period or interval of time, or for a specified or expected number of
operations, in a specified or expected environment under specified or expected operating
conditions.
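One common way to make this definition concrete is the exponential reliability model, R(t) = e^(-λt), which gives the probability of failure-free operation over an interval t under a constant failure rate λ. The constant-failure-rate assumption is an illustration added here, not part of the definition above.

```python
import math

def reliability(failure_rate, t):
    """Probability of failure-free operation over interval t.

    Exponential model R(t) = exp(-lambda * t), which assumes a constant
    failure rate (an illustrative assumption; real software failure rates
    vary with usage profile and operating conditions).
    """
    return math.exp(-failure_rate * t)
```

For example, with a failure rate of 0.001 failures per hour, reliability over 1000 hours is e^(-1), about 0.368, and reliability always decreases as the interval grows.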
Risk | The likelihood that a particular threat will adversely affect a system by exploiting a
particular vulnerability.
The level of impact on organizational operations (including mission, functions, image, or reputation), organizational assets, or individuals resulting from the operation of an information system
given the potential impact of a threat and the likelihood of that
threat occurring.
SOURCE: FIPS 200
The level of impact on organizational operations (including mission, functions, image, or reputation), organizational assets, individuals, other organizations, or the Nation resulting from the operation of an information system given the potential impact of a
threat and the likelihood of that threat occurring.
SOURCE: SP 800-60
A measure of the extent to which an entity is threatened by a potential circumstance or event, and typically a function of: (i) the
adverse impacts that would arise if the circumstance or event occurs; and (ii) the likelihood of occurrence.
Note: Information system-related security risks are those risks that
arise from the loss of confidentiality, integrity, or availability of
information or information systems and consider the adverse impacts to organizational operations (including mission, functions,
image, or reputation), organizational assets, individuals, other organizations, and the Nation.
SOURCE: SP 800-53
A measure of the extent to which an entity is threatened by a potential circumstance or event, and typically a function of: (1) the
adverse impacts that would arise if the circumstance or event occurs; and (2) the likelihood of occurrence.
Note: Information system-related security risks are those risks that
arise from the loss of confidentiality, integrity, or availability of
information or information systems and reflect the potential adverse impacts to organizational operations (including mission,
functions, image, or reputation), organizational assets, individuals,
other organizations, and the Nation.
SOURCE: CNSSI-4009
A measure of the extent to which an entity is threatened by a
potential circumstance or event, and typically a function of: (i)
the adverse impacts that would arise if the circumstance or event
occurs; and (ii) the likelihood of occurrence.
[Note: Information system-related security risks are those risks that arise from the loss of
confidentiality, integrity, or availability of information or information systems and reflect the
potential adverse impacts to organizational operations (including mission, functions, image, or
reputation), organizational assets, individuals, other organizations, and the Nation. Adverse
impacts to the Nation include, for example, compromises to information systems that support
critical infrastructure applications or are paramount to government continuity of operations as
defined by the Department of Homeland Security.]
SOURCE: SP 800-37; SP 800-53A
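The SP 800-53 wording makes risk a function of (i) adverse impact and (ii) likelihood of occurrence. A simple qualitative scoring sketch follows; the 1-5 ordinal scales, the multiplicative combination, and the band thresholds are all illustrative assumptions added here, not part of the NIST definitions.

```python
def risk_score(likelihood, impact):
    """Combine likelihood and impact into a raw risk score.

    Both inputs are on an assumed 1-5 ordinal scale; the multiplicative
    combination is one common convention, not a NIST requirement.
    """
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact are expected on a 1-5 scale")
    return likelihood * impact

def risk_level(score):
    """Map a raw score to a qualitative band (thresholds are assumptions)."""
    if score <= 4:
        return "low"
    if score <= 12:
        return "moderate"
    return "high"
```

A high-likelihood, high-impact threat (5 x 5 = 25) lands in the "high" band, while a rare, low-impact one (2 x 2 = 4) stays "low"; the point of the sketch is only that both factors must be considered together.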
Robustness | The degree to which a component or system can function correctly in the
presence of invalid inputs or stressful environmental conditions, including inputs or conditions
that are malicious in origin.
The ability of an Information Assurance entity to operate correctly and reliably across a wide
range of operational conditions, and to fail gracefully outside of that operational range.
SOURCE: CNSSI-4009
Sabotage | See Denial of Service.
Safety | Persistence of dependability in the face of realized hazards (unsponsored, unplanned
events, accidents, mishaps) that result in death, injury, illness, damage to the environment, or
significant loss or destruction of property.
Sandboxing | A method of isolating application-level components into distinct execution
domains, the separation of which is enforced by software. When run in a sandbox, all of the
component’s code and data accesses are confined to memory segments within that sandbox. In
this way, sandboxes provide a greater level of isolation of executing processes than can be
achieved when processes run in the same virtual address space. The most frequent use of
sandboxing is to isolate the execution of untrusted programs (e.g., mobile code, programs written
in potentially unsafe languages such as C) so that each program is unable to directly access the
same memory and disk segments used by other programs, including trusted programs. Virtual
machines (VM) are sometimes used to implement sandboxing, with each VM providing an isolated
execution domain.
A method of isolating application modules into distinct fault domains enforced by software. The technique allows untrusted programs written in an unsafe language, such as C, to be executed
safely within the single virtual address space of an application.
Untrusted machine interpretable code modules are transformed so
that all memory accesses are confined to code and data segments
within their fault domain. Access to system resources can also be
controlled through a unique identifier associated with each domain.
SOURCE: SP 800-19
A restricted, controlled execution environment that prevents potentially malicious software, such as mobile code, from accessing
any system resources except those for which the software is authorized.
SOURCE: CNSSI-4009
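The SP 800-19 description, in which memory accesses of untrusted modules are transformed so they are confined to their fault domain, is the idea behind software fault isolation. A toy sketch of the address-confinement step follows; it is illustrative only and is not a real sandbox (a real implementation rewrites or instruments the untrusted code itself).

```python
def confine(address, base, size):
    """Force a memory access into the fault domain [base, base + size).

    Software-fault-isolation style address masking: the low bits of the
    requested address are kept, and the high bits are replaced with the
    fault domain's base. Requires size to be a power of two and base to
    be aligned to it, so the OR below cannot escape the domain.
    Illustrative sketch of the SP 800-19 idea only.
    """
    assert size & (size - 1) == 0, "size must be a power of two"
    assert base % size == 0, "base must be aligned to size"
    return base | (address & (size - 1))
```

An access that is already inside the domain is unchanged, while any address outside it is remapped to the corresponding offset within the domain, so the untrusted module can never read or write another component's segments.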
Secure State | The condition in which no subject can access another entity in an unauthorized
manner for any purpose.
Condition in which no subject can access any object in an unauthorized manner.
SOURCE: CNSSI-4009
Security | Protection against disclosure, subversion, or sabotage. To be considered secure,
software’s dependability (including all constituent properties of that dependability) must be
preserved in the face of threats. At the system level, security manifests as the ability of the
system to protect itself from sponsored faults, regardless of whether those faults are malicious.
A condition that results from the establishment and maintenance of protective measures that enable an enterprise to perform its mission or critical functions despite risks posed by threats to its
use of information systems. Protective measures may involve a combination of deterrence, avoidance, prevention, detection, recovery, and correction that should form part of the enterprise’s risk
management approach.
SOURCE: CNSSI-4009
Service | A set of one or more functions, tasks, or activities performed to achieve one or more
objectives that benefit a user (human or process).
Software Security Assurance | Justifiable grounds for confidence that software’s security
property, including all of security’s constituent properties (e.g., attack-resistance, attack-tolerance,
attack-resilience, lack of vulnerabilities, lack of malicious logic, dependability despite the
presence of sponsored faults, etc.), has been adequately exhibited. Often abbreviated to software
assurance.
Level of confidence that software is free from vulnerabilities, either intentionally designed into
the software or accidentally inserted at any time during its life cycle, and that the software functions in the intended manner.
SOURCE: CNSSI-4009
Software-Intensive System | A system in which the majority of components are implemented
in/by software, and in which the functional objectives of the system are achieved primarily by its
software components.
State | (1) A condition or mode of existence that a system or component may be in, for example
the input state of a given channel. (2) The values assumed at a given instant by the variables that
define the characteristics of a component or system.
Intermediate Cipher result that can be pictured as a rectangular array of bytes.
SOURCE: FIPS 197
Subversion | The intentional violation of the software’s integrity.
Survivability | The ability to continue correct, predictable operation despite the presence of
realized hazards and threats.
System | A collection of components organized to accomplish a specific function or set of
functions.
See Information System.
Any organized assembly of resources and procedures united
and regulated by interaction or interdependence to accomplish
a set of specific functions.
SOURCE: CNSSI-4009
Threat | Any entity, circumstance, or event with the potential to harm the software system or
component through its unauthorized access, destruction, modification, and/or denial of service.
Any circumstance or event with the potential to adversely impact organizational operations (including
mission, functions, image, or reputation), organizational assets, individuals, other organizations, or the
Nation through an information system via unauthorized access, destruction, disclosure, modification of
information, and/or denial of service.
SOURCE: SP 800-53; SP 800-53A; SP 800-27; SP 800-60; SP 800-37; CNSSI-4009
The potential source of an adverse event.
SOURCE: SP 800-61
Threat –
Any circumstance or event with the potential to
adversely impact organizational operations (including mission, functions, image, or reputation), organizational assets, or individuals through an information system via unauthorized access, destruction, disclosure, modification of information,
and/or denial of service. Also, the potential for a
threat-source to successfully exploit a particular
information system vulnerability.
SOURCE: FIPS 200
Trustworthiness | Logical basis for assurance (i.e., justifiable confidence) that the system will
perform correctly, which includes predictably behaving in conformance with all of its required
critical properties, such as security, reliability, safety, survivability, etc., in the face of wide ranges
of threats and accidents, and will contain no exploitable vulnerabilities either of malicious or
unintentional origin. Software that contains exploitable faults or malicious logic cannot justifiably
be trusted to “perform correctly” or to “predictably satisfy all of its critical requirements” because
its compromisable nature and the presence of unspecified malicious logic would make prediction
of its correct behavior impossible.
The attribute of a person or organization that provides confidence
to others of the qualifications, capabilities, and reliability of that
entity to perform specific tasks and fulfill assigned responsibilities.
SOURCE: SP 800-79
The attribute of a person or enterprise that provides confidence to
others of the qualifications, capabilities, and reliability of that
entity to perform specific tasks and fulfill assigned responsibilities.
SOURCE: CNSSI-4009; SP 800-39
Security decisions with respect to extended investigations to determine and confirm qualifications, and suitability to perform
specific tasks and responsibilities.
SOURCE: FIPS 201
User | Any person or process authorized to access an operational system.
Individual or (system) process authorized to access an information system.
SOURCE: FIPS 200
Individual, or (system) process acting on behalf of an individual, authorized to access an information system.
SOURCE: SP 800-53; SP 800-18; CNSSI-4009
User –
An individual or a process (subject) acting on behalf of the individual that accesses a cryptographic
module in order to obtain cryptographic services.
SOURCE: FIPS 140-2
Verification And Validation (V&V) | The process of confirming, by examination and provision
of objective evidence, that —
 Each step in the process of building or modifying the software yields the right
products (verification). Verification asks and answers the question “Was the
software built right?” (i.e., correctness).
 The software being developed or modified will satisfy its particular requirements
(functional and nonfunctional) for its specific intended use (validation). Validation
asks and answers the question “Was the right software built?” (i.e., suitability).
In practical terms, the differences between verification and validation are unimportant except
to the theorist. Practitioners use the term V&V to refer to all of the activities that are
undertaken to ensure that the software will function according to its specification. V&V is
intended to be a systematic and technical evaluation of software and associated products of
the development and maintenance processes. Independent V&V is a process whereby the
products of the software development life cycle are reviewed, verified, and validated by an
entity that is neither the developer nor the acquirer of the software, which is technically,
managerially, and financially independent of the developer and acquirer, and which has no
stake in the success or failure of the software.
Vulnerability | A development fault or weakness in deployed software that can be exploited with
malicious intent by a threat with the objective of subverting (violation of integrity) or sabotaging
(violation of availability) the software, often as a step toward gaining unauthorized access to the
information handled by that software. Vulnerabilities can originate from weaknesses in the
software’s design, faults in its implementation, or problems in its operation.
Weakness in an information system, system security procedures,
internal controls, or implementation that could be exploited or
triggered by a threat source.
SOURCE: SP 800-53; SP 800-53A; SP 800-37; SP 800-60; SP
800-115; FIPS 200
A weakness in a system, application, or network that is subject
to exploitation or misuse.
SOURCE: SP 800-61
Weakness in an information system, system security procedures,
internal controls, or implementation that could be exploited by a
threat source.
SOURCE: CNSSI-4009
Weakness | A flaw, defect, or anomaly in software that has the potential of being exploited as a
vulnerability when the software is operational. A weakness may originate from a flaw in the
software’s security requirements or design, a defect in its implementation, or an inadequacy in its
operational and security procedures and controls. The distinction between “weakness” and
“vulnerability” originated with the MITRE Corporation Common Weakness Enumeration
(CWE) project (http://cve.mitre.org/cwe/about/index.html).
Whitehat | A person who is ethically opposed to the abuse of computer systems. Motivated by that
opposition, the whitehat frequently uses the blackhat’s techniques and tools in order to confound
blackhat compromise attempts and to protect the systems and networks targeted by them.