
Policies for SOC 2

This is a fork of StrongDM's Comply open source documentation; the original is at
comply.strongdm
Policies govern the behavior of employees and contractors.
Access Onboarding and Termination Policy
Purpose and Scope:
1. The purpose of this policy is to define procedures to onboard and offboard users to
technical infrastructure in a manner that minimizes the risk of information loss or
exposure.
2. This policy applies to all technical infrastructure within the organization.
3. This policy applies to all full-time and part-time employees and contractors.
Background:
In order to minimize the risk of information loss or exposure (from both inside and outside the
organization), the organization is reliant on the principle of least privilege. Account creation and
permission levels are restricted to only the resources absolutely needed to perform each
person’s job duties. When a user’s role within the organization changes, those accounts and
permission levels are changed/revoked to fit the new role and disabled when the user leaves
the organization altogether.
Policy:
1. During onboarding:
1. Hiring Manager informs HR upon hire of a new employee.
2. HR emails IT to inform them of a new hire and their role.
3. IT creates a checklist of accounts and permission levels needed for that
role.
4. The owner of each resource reviews and approves account creation and
the associated permissions.
5. IT works with the owner of each resource to set up the user.
2. During offboarding:
1. Hiring Manager notifies HR when an employee has been terminated.
2. HR sends a weekly email report to IT summarizing the list of terminated users
and instructs IT to disable their access.
3. IT terminates access within five business days from receipt of notification.
3. When an employee changes roles within the organization:
1. Hiring Manager will inform HR of a change in role.
2. HR and IT will follow the same steps as outlined in the onboarding and
offboarding procedures.
4. Review of accounts and permissions:
1. Each month, IT and HR will review accounts and permission levels for
accuracy.
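As a sketch of how the five-business-day termination window in the offboarding procedure might be tracked, the snippet below computes the deadline from the notification date. The function name and the Monday-through-Friday workweek are illustrative assumptions, not part of the policy:

```python
from datetime import date, timedelta

def termination_deadline(notification_date: date, business_days: int = 5) -> date:
    """Return the date by which access must be disabled, counting only
    Monday-Friday as business days (holidays are not modeled here)."""
    current = notification_date
    remaining = business_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return current

# Notification received Wednesday 2024-01-03: deadline is the next Wednesday.
print(termination_deadline(date(2024, 1, 3)))  # 2024-01-10
```

A real implementation would also account for company holidays, which this sketch omits.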
Application Security Policy
Purpose and Scope:
1. This application security policy defines the security framework and requirements
for applications, notably web applications, within the organization’s production
environment.
2. This document also provides implementing controls and instructions for web
application security, to include periodic vulnerability scans and other types of
evaluations and assessments.
3. This policy applies to all applications within the organization's production
environment, as well as administrators and users of these applications. This
typically includes employees and contractors.
Background:
1. Application vulnerabilities typically account for the largest number of initial attack
vectors after malware infections. As a result, it is important that applications are
designed with security in mind, and that they are scanned and continuously
monitored for malicious activity that could indicate a system compromise.
Discovery and subsequent mitigation of application vulnerabilities will limit the
organization's attack surface and ensure a baseline level of security across all
systems.
2. In addition to scanning guidance, this policy also defines technical requirements
and procedures to ensure that applications are properly hardened in accordance
with security best practices.
Policy:
1. The organization must ensure that all applications it develops and/or acquires are
securely configured and managed.
2. The following security best practices must be considered and, if feasible, applied
as a matter of the application’s security design:
1. Data handled and managed by the application must be classified in
accordance with the Data Classification Policy (reference (a)).
2. If the application processes confidential information, a confidential record
banner must be prominently displayed which highlights the type of
confidential data being accessed (e.g., personally-identifiable information
(PII), protected health information (PHI), etc.)
3. Sensitive data, especially data specifically restricted by law or policy (e.g.,
social security numbers, passwords, and credit card data) should not be
displayed in plaintext.
4. Ensure that applications validate input properly and restrictively, allowing
only those types of input that are known to be correct. This guards against
flaws including, but not limited to, cross-site scripting, buffer overflow
errors, and injection.
5. Ensure that applications execute proper error handling so that errors will
not provide detailed system information to an unprivileged user, deny
service, impair security mechanisms, or crash the system.
6. Where possible, authorize access to applications by affiliation,
membership or employment, rather than by individual. Provide an
automated review of authorizations on a regular basis, where possible.
7. Ensure that applications encrypt data at rest and in transit.
8. Implement application logging to the extent practical. Retain logs of all
users and access events for at least 14 days.
9. Qualified peers conduct security reviews of code for all new or
significantly modified applications; particularly, those that affect the
collection, use, and/or display of confidential data. Document all actions
taken.
10. Implement a change management process for changes to existing
software applications.
11. Standard configuration of the application must be documented.
12. Default passwords used within the application, such as for administrative
control panels or integration with databases, must be changed
immediately upon installation.
13. Applications must require complex passwords in accordance with current
security best practices (at least 8 characters in length, combination of
alphanumeric upper/lowercase characters and symbols).
14. During development and testing, applications must not have access to
live data.
3. Where applications are acquired from a third party, such as a vendor:
1. Only applications that are supported by an approved vendor shall be
procured and used.
2. Full support contracts must be arranged with the application vendor for
full life-cycle support.
3. No custom modifications may be applied to the application without
confirmation that the vendor can continue to provide support.
4. Updates, patches and configuration changes issued by the vendor shall
be implemented as soon as possible.
5. A full review of applications and licenses shall be completed at least
annually, as part of regular software reviews.
4. Web applications must be assessed according to the following criteria:
1. New or major application releases must have a full assessment prior to
approval of the change control documentation and/or release into the
production environment.
2. Third-party or acquired applications must have a full assessment prior to
deployment.
3. Software releases must have an appropriate assessment, as determined
by the organization’s information security manager, with specific
evaluation criteria based on the security risks inherent in the changes
made to the application’s functionality and/or architecture.
4. Emergency releases may forgo security assessments and carry the
assumed risk until a proper assessment can be conducted. Emergency
releases must be approved by the Chief Information Officer or designee.
5. Vulnerabilities that are discovered during application assessments must be
mitigated based upon the following risk levels, which are based on the Open
Web Application Security Project (OWASP) Risk Rating Methodology (reference
(b)):
1. High - issues categorized as high risk must be fixed immediately,
otherwise alternate mitigation strategies must be implemented to limit
exposure before deployment. Applications with high risk issues are
subject to being taken off-line or denied release into the production
environment.
2. Medium - issues categorized as medium risk must be reviewed to
determine specific items to be mitigated. Actions to implement mitigations
must be scheduled. Applications with medium risk issues may be taken
off-line or denied release into the production environment based on the
number of issues; multiple issues may increase the risk to an
unacceptable level. Issues may be fixed in patch releases unless better
mitigation options are present.
3. Low - issues categorized as low risk must be reviewed to determine
specific items to be mitigated. Actions to implement mitigations must be
scheduled.
6. Testing is required to validate fixes and/or mitigation strategies for any security
vulnerabilities classified as Medium risk or greater.
7. The following security assessment types may be leveraged to perform an
application security assessment:
1. Full - comprised of tests for all known web application vulnerabilities using
both automated and manual tools based on the OWASP Testing Guide
(reference (c)). A full assessment must leverage manual penetration
testing techniques to validate discovered vulnerabilities to determine the
overall risk of any and all discovered issues.
2. Quick - consists of an automated scan of an application for, at a
minimum, the OWASP Top Ten web application security risks (reference
(d)).
3. Targeted - verifies vulnerability remediation changes or new application
functionality.
8. To counter the risk of unauthorized access, the organization maintains a Data
Center Security Policy (reference (e)).
9. Security requirements for the software development life cycle, including
system development, acquisition, and maintenance, are defined in the
Software Development Lifecycle Policy (reference (f)).
10. Security requirements for handling information security incidents are
defined in the Security Incident Response Policy (reference (g)).
11. Disaster recovery and business continuity management policy is defined
in the Disaster Recovery Policy (reference (h)).
12. Requirements for information system availability and redundancy are
defined in the System Availability Policy (reference (i)).
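The password complexity requirement above (at least 8 characters, with a combination of upper- and lowercase alphanumeric characters and symbols) can be checked mechanically. The sketch below is illustrative; the exact symbol set an application accepts is an assumption:

```python
import re

def meets_complexity(password: str) -> bool:
    """Check the policy's minimum complexity: length >= 8, plus at least
    one lowercase letter, one uppercase letter, one digit, and one symbol
    (here, any non-alphanumeric character counts as a symbol)."""
    return (
        len(password) >= 8
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

print(meets_complexity("Tr0ub4dor&3"))  # True
print(meets_complexity("password"))     # False: no uppercase, digit, or symbol
```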
Availability Policy
Purpose and Scope:
1. The purpose of this policy is to define requirements for proper controls to protect
the availability of the organization’s information systems.
2. This policy applies to all users of information systems within the organization.
This typically includes employees and contractors, as well as any external parties
that come into contact with systems and information controlled by the
organization (hereinafter referred to as “users”). This policy must be made readily
available to all users.
Background:
1. The intent of this policy is to minimize the amount of unexpected or unplanned
downtime (also known as outages) of information systems under the
organization’s control. This policy prescribes specific measures for the
organization that will increase system redundancy, introduce failover
mechanisms, and implement monitoring such that outages are prevented as
much as possible. Where they cannot be prevented, outages will be quickly
detected and remediated.
2. Within this policy, availability is defined as a characteristic of information or
information systems whereby such information or systems can be accessed by
authorized entities whenever needed.
Policy:
1. Information systems must be consistently available to conduct and support
business operations.
2. Information systems must have a defined availability classification, with
appropriate controls enabled and incorporated into development and production
processes based on this classification.
3. System and network failures must be reported promptly to the organization’s lead
for Information Technology (IT) or designated IT operations manager.
4. Users must be notified of scheduled outages (e.g., system maintenance) that
require periods of downtime. This notification must specify the date and time of
the system maintenance, expected duration, and anticipated system or service
resumption time.
5. Prior to production use, each new or significantly modified application must have
a completed risk assessment that includes availability risks. Risk assessments
must be completed in accordance with the Risk Assessment Policy (reference
(a)).
6. Capacity management and load balancing techniques must be used, as deemed
necessary, to help minimize the risk and impact of system failures.
7. Information systems must have an appropriate data backup plan that ensures:
1. All sensitive data can be restored within a reasonable time period.
2. Full backups of critical resources are performed on at least a weekly
basis.
3. Incremental backups for critical resources are performed on at least a
daily basis.
4. Backups and associated media are maintained for a minimum of thirty
(30) days and retained for at least one (1) year, or in accordance with
legal and regulatory requirements.
5. Backups are stored off-site with multiple points of redundancy and
protected using encryption and key management.
6. Tests of backup data must be conducted once per quarter. Tests of
configurations must be conducted twice per year.
8. Information systems must have an appropriate redundancy and failover plan that
meets the following criteria:
1. Network infrastructure that supports critical resources must have
system-level redundancy (including but not limited to a secondary power
supply, backup disk array, and secondary computing system). Critical core
components (including but not limited to routers, switches, and other
devices linked to Service Level Agreements (SLAs)) must have an
actively maintained spare. SLAs must require parts replacement within
twenty-four (24) hours.
2. Servers that support critical resources must have redundant power
supplies and network interface cards. All servers must have an actively
maintained spare. SLAs must require parts replacement within
twenty-four (24) hours.
3. Servers classified as high availability must use disk mirroring.
9. Information systems must have an appropriate business continuity plan that
meets the following criteria:
1. Recovery time and data loss limits are defined in Table 3.
2. Recovery time requirements and data loss limits must be adhered to with
specific documentation in the plan.
3. Company and/or external critical resources, personnel, and necessary
corrective actions must be specifically identified.
4. Specific responsibilities and tasks for responding to emergencies and
resuming business operations must be included in the plan.
5. All applicable legal and regulatory requirements must be satisfied.
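The backup cadence and retention rules in item 7 above can be expressed as simple scheduling checks. This is a minimal sketch; the assumption that full backups run on Sundays is illustrative and not mandated by the policy, and the one-year retention and legal holds are not modeled:

```python
from datetime import date

def backup_type(day: date) -> str:
    """Weekly full backups (assumed here to run on Sundays) with
    incremental backups on every other day, per the availability policy."""
    return "full" if day.weekday() == 6 else "incremental"

def retention_expired(backup_date: date, today: date, min_days: int = 30) -> bool:
    """Backups must be maintained for a minimum of thirty (30) days
    before they become eligible for rotation."""
    return (today - backup_date).days > min_days

print(backup_type(date(2024, 1, 7)))   # Sunday -> 'full'
print(backup_type(date(2024, 1, 8)))   # Monday -> 'incremental'
```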
Recovery Time and Data Loss Limits:
System Change Policy
Purpose and Scope:
1. This information security policy defines how changes to information systems are
planned and implemented.
2. This policy applies to the entire information security program at the organization
(i.e. to all information and communications technology, as well as related
documentation).
3. All employees, contractors, part-time and temporary workers, service providers,
and those employed by others to perform work for the organization, or who have
been granted access to the organization's information and communications
technology, must comply with this policy.
Background:
This policy defines specific requirements to ensure that changes to systems and
applications are properly planned, evaluated, reviewed, approved, communicated,
implemented, documented, and reviewed, thereby ensuring the greatest probability of
success. Where changes are not successful, this document provides mechanisms for
conducting post-implementation review such that future mistakes and errors can be
prevented.
Policy:
1. Any changes to the security architecture or customer data handling of a system
must be formally requested in writing to the organization’s Information Security
Manager (ISM), and approved by the ISM and the Chief Information Officer
(CIO).
2. All change requests must be documented.
3. All change requests must be prioritized in terms of benefits, urgency, effort
required, and potential impacts to the organization’s operations.
4. All implemented changes must be communicated to relevant users.
5. Change management must be conducted according to the following procedure:
1. Planning: plan the change, including the implementation design,
scheduling, and implementation of a communications plan, testing plan,
and roll-back plan.
2. Evaluation: evaluate the change, including priority level of the service and
risk that the proposed change introduces to the system; determine the
change type and the specific step-by-step process to implement the
change.
3. Review: review the change plan amongst the CIO, ISM, Engineering
Lead, and, if applicable, Business Unit Manager.
4. Approval: the CIO must approve the change plan.
5. Communication: communicate the change to all users of the system.
6. Implementation: test and implement the change.
7. Documentation: record the change and any post-implementation issues.
8. Post-change review: conduct a post-implementation review to determine
how the change is impacting the organization, either positively or
negatively. Discuss and document any lessons learned.
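The eight-step procedure above can be modeled as a simple state machine that rejects out-of-order transitions. The class and stage names below are illustrative, not prescribed by the policy:

```python
from enum import Enum, auto

class ChangeStage(Enum):
    """The eight change-management stages from the policy, in order."""
    PLANNING = auto()
    EVALUATION = auto()
    REVIEW = auto()
    APPROVAL = auto()
    COMMUNICATION = auto()
    IMPLEMENTATION = auto()
    DOCUMENTATION = auto()
    POST_CHANGE_REVIEW = auto()

class ChangeRequest:
    """Tracks a change request through the stages; advancing out of
    order raises an error, mirroring the required procedure."""
    def __init__(self, description: str):
        self.description = description
        self.completed: list = []

    def advance(self, stage: ChangeStage) -> None:
        expected = list(ChangeStage)[len(self.completed)]
        if stage is not expected:
            raise ValueError(f"expected {expected.name}, got {stage.name}")
        self.completed.append(stage)

cr = ChangeRequest("rotate TLS certificates")  # hypothetical change
cr.advance(ChangeStage.PLANNING)
cr.advance(ChangeStage.EVALUATION)
print([s.name for s in cr.completed])  # ['PLANNING', 'EVALUATION']
```

In practice the approval step would also record the CIO's sign-off, which this sketch leaves out.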
Data Classification Policy
Purpose and Scope:
1. This data classification policy defines the requirements to ensure that information
within the organization is protected at an appropriate level.
2. This document applies to the entire scope of the organization’s information
security program. It includes all types of information, regardless of its form, such
as paper or electronic documents, applications and databases, and knowledge or
information that is not written.
3. This policy applies to all individuals and systems that have access to information
kept by the organization.
Background:
This policy defines the high level objectives and implementation instructions for the
organization’s data classification scheme. This includes data classification levels, as well as
procedures for the classification, labeling and handling of data within the organization.
Confidentiality and non-disclosure agreements maintained by the organization must reference
this policy.
Policy:
1. If classified information is received from outside the organization, the person who
receives the information must classify it in accordance with the rules prescribed
in this policy. The person thereby will become the owner of the information.
2. If classified information is received from outside the organization and handled as
part of business operations activities (e.g., customer data on provided cloud
services), the information classification, as well as the owner of such information,
must be made in accordance with the specifications of the respective customer
service agreement and other legal requirements.
3. When classifying information, the level of confidentiality is determined by:
1. The value of the information, based on impacts identified during the risk
assessment process. More information on risk assessments is defined in
the Risk Assessment Policy (reference (a)).
2. Sensitivity and criticality of the information, based on the highest risk
calculated for each information item during the risk assessment.
3. Legal, regulatory and contractual obligations.
Table 3: Information Confidentiality Levels
4. Information must be classified based on confidentiality levels as defined above.
5. Information and information system owners should try to use the lowest
confidentiality level that ensures an adequate level of protection, thereby
avoiding unnecessary protection costs.
6. Information classified as “Restricted” or “Confidential” must be accompanied by a
list of authorized persons in which the information owner specifies the names or
job functions of persons who have the right to access that information.
7. Information classified as “Internal Use” must be accompanied by a list of
authorized persons only if individuals outside the organization will have access to
the document.
8. Information and information system owners must review the confidentiality level
of their information assets every five years and assess whether the confidentiality
level should be changed. Wherever possible, confidentiality levels should be
lowered.
9. For cloud-based software services provided to customers, system owners under
the company’s control must also review the confidentiality level of their
information systems after service agreement changes or after a customer’s
formal notification. Where allowed by service agreements, confidentiality levels
should be lowered.
10. Information must be labeled according to the following:
1. Paper documents: the confidentiality level is indicated on the top and
bottom of each document page; it is also indicated on the front of the
cover or envelope carrying such a document as well as on the filing folder
in which the document is stored. If a document is not labeled, its default
classification is Internal Use.
2. Electronic documents: the confidentiality level is indicated on the top and
bottom of each document page. If a document is not labeled, its default
classification is Internal Use.
3. Information systems: the confidentiality level in applications and
databases must be indicated on the system access screen, as well as on
the screen when displaying such information.
4. Electronic mail: the confidentiality level is indicated in the first line of the
email body. If it is not labeled, its default classification is “Internal Use”.
5. Electronic storage media (disks, memory cards, etc.): the confidentiality
level must be indicated on the top surface of the media. If it is not labeled,
its default classification is “Internal Use”.
6. Information transmitted orally: the confidentiality level should be
mentioned before discussing information during face-to-face
communication, by telephone, or any other means of oral communication.
11. All persons accessing classified information must follow the guidelines listed in
Appendix A, “Handling of Classified Information.”
12. All persons accessing classified information must complete and submit a
Confidentiality Statement to their immediate supervisor or company
point-of-contact. A sample Confidentiality Statement is in Appendix B.
13. Incidents related to the improper handling of classified information must be
reported in accordance with the Security Incident Management Policy (reference
(b)).
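The labeling defaults and access-list rules above can be captured in a short helper. This sketch models only the three confidentiality levels named in this policy; any additional levels defined in Table 3 are not represented:

```python
from typing import Optional

CONFIDENTIALITY_LEVELS = ("Internal Use", "Restricted", "Confidential")

def effective_classification(label: Optional[str]) -> str:
    """Per the labeling rules above, an unlabeled (or unrecognized)
    item defaults to 'Internal Use'."""
    return label if label in CONFIDENTIALITY_LEVELS else "Internal Use"

def requires_access_list(label: Optional[str], external_access: bool = False) -> bool:
    """'Restricted' and 'Confidential' always require a list of authorized
    persons; 'Internal Use' requires one only when individuals outside
    the organization will have access."""
    level = effective_classification(label)
    if level in ("Restricted", "Confidential"):
        return True
    return external_access

print(effective_classification(None))      # 'Internal Use'
print(requires_access_list("Restricted"))  # True
```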
Appendix A: Handling of Classified
Information
Information and information systems must be handled according to the following guidelines:
1. Paper Documents
1. Internal Use
1. Only authorized persons may have access.
2. If sent outside the organization, the document must be sent as
registered mail.
3. Documents may only be kept in rooms without public access.
4. Documents must be removed expeditiously from printers and fax
machines.
2. Restricted
1. The document must be stored in a locked cabinet.
2. Documents may be transferred within and outside the organization
only in a closed envelope.
3. If sent outside the organization, the document must be mailed with
a return receipt service.
4. Documents must immediately be removed from printers and fax
machines.
5. Only the document owner may copy the document.
6. Only the document owner may destroy the document.
3. Confidential
1. The document must be stored in a safe.
2. The document may be transferred within and outside the
organization only by a trustworthy person in a closed and sealed
envelope.
3. Faxing the document is not permitted.
4. The document may be printed only if the authorized person is
standing next to the printer.
2. Electronic Documents
1. Internal Use
1. Only authorized persons may have access.
2. When documents are exchanged via unencrypted file sharing
services such as FTP, they must be password protected.
3. Access to the information system where the document is stored
must be protected by a strong password.
4. The screen on which the document is displayed must be
automatically locked after 10 minutes of inactivity.
2. Restricted
1. Only persons with authorization for this document may access the
part of the information system where this document is stored.
2. When documents are exchanged via file sharing services of any
type, they must be encrypted.
3. Only the document owner may erase the document.
3. Confidential
1. The document must be stored in encrypted form.
2. The document may be stored only on servers which are controlled
by the organization.
3. The document may only be shared via file sharing services that use
encrypted protocols, such as HTTPS or SSH. Further, the document
must be encrypted and protected with a strong password when
transferred.
3. Information Systems
1. Internal Use
1. Only authorized persons may have access.
2. Access to the information system must be protected by a strong
password.
3. The screen must be automatically locked after 10 minutes of
inactivity.
4. The information system may be only located in rooms with
controlled physical access.
2. Restricted
1. Users must log out of the information system if they have
temporarily or permanently left the workplace.
2. Data must be erased only with an algorithm that ensures secure
deletion.
3. Confidential
1. Access to the information system must be controlled through
multi-factor authentication (MFA).
2. The information system may only be installed on servers
controlled by the organization.
3. The information system may only be located in rooms with
controlled physical access and identity control of people accessing
the room.
4. Electronic Mail
1. Internal Use
1. Only authorized persons may have access.
2. The sender must carefully check the recipient.
3. All rules stated under “information systems” apply.
2. Restricted
1. Email must be encrypted if sent outside the organization.
3. Confidential
1. Email must be encrypted.
5. Electronic Storage Media
1. Internal Use
1. Only authorized persons may have access.
2. Media or files must be password protected.
3. If sent outside the organization, the medium must be sent as
registered mail.
4. The medium may only be kept in rooms with controlled physical
access.
2. Restricted
1. Media and files must be encrypted.
2. Media must be stored in a locked cabinet.
3. If sent outside the organization, the medium must be mailed with a
return receipt service.
4. Only the medium owner may erase or destroy the medium.
3. Confidential
1. Media must be stored in a safe.
2. Media may be transferred within and outside the organization only
by a trustworthy person and in a closed and sealed envelope.
6. Information Transmitted Orally
1. Internal Use
1. Only authorized persons may have access to information.
2. Unauthorized persons must not be present in the room when the
information is communicated.
2. Restricted
1. The room must be sound-proof.
2. The conversation must not be recorded.
3. Confidential
1. Conversations conducted through electronic means must be
encrypted.
2. No transcript of the conversation may be kept.
In this document, controls are implemented cumulatively: the controls for any
confidentiality level imply the implementation of the controls defined for lower
confidentiality levels. Where stricter controls are prescribed for a higher confidentiality
level, only the stricter controls are implemented.
Code of Conduct Policy
Purpose and Scope:
1. The purpose of this policy is to define expected behavior from employees
towards their colleagues, supervisors, and the overall organization.
2. We expect all employees to follow our Code of Conduct. Offensive behavior,
disruptive behavior, and participation in serious disputes should be avoided.
Employees are expected to foster a respectful and collaborative environment.
3. This policy applies to all employees and contractors. They are bound by their
Employment Offer Letter or Independent Contractor Agreement to follow the
Code of Conduct Policy while performing their duties. The Code of Conduct is
outlined below:
Policy:
1. Compliance with law
1. Employees should understand and comply with all environmental,
safety, and fair dealing laws. When performing their job duties and
dealing with the company's products, finances, critical information, and
public image, employees are expected to be ethical and responsible. If
an employee is unsure whether a contemplated action is permitted by
law or company policy, they should seek advice from their resource
manager.
2. Respect in the workplace
1. Employees should respect their colleagues. Discriminatory behavior,
harassment, or victimization will not be tolerated.
3. Protection of company property
1. Company property, both material and intangible, should be treated with
respect and care. Employees and contractors:
1. Should not misuse company equipment.
2. Should respect all intangible property, including trademarks,
copyrights, information, reports, and other property. These
materials should be used only to complete job duties.
3. Should protect company facilities and other material property
from damage and vandalism, whenever possible.
4. Personal appearance
1. When in the workplace, employees must present themselves in an
appropriate and professional manner. They should abide by the
company dress code.
5. Corruption
1. Employees are discouraged from accepting gifts from clients or
partners. Bribery for the benefit of any external or internal party is
prohibited.
6. Job duties and authority
1. Employees should fulfill their job duties with integrity and respect
towards all individuals involved.
2. Supervisors and managers may not abuse their authority.
Competency and workload should be taken into account when
delegating duties to team members.
3. Team members are expected to follow their leaders' instructions and
complete their duties with thoughtfulness and in a timely manner.
7. Absenteeism and tardiness
1. Employees should be punctual when coming to and leaving from work
and follow the schedule determined by their hiring manager. Exceptions
can be made for occasions that prevent employees from following
standard working hours or days, with approval from their hiring manager.
8. Conflict of interest
1. Employees should avoid any personal, financial, or other interests that
might compete with their job duties.
9. Collaboration
1. Employees should be friendly with their colleagues and open to
collaboration. They should not disrupt the workplace or present obstacles
to their colleagues' work.
10. Communication
1. Colleagues, supervisors, and team members must be open to
communication amongst each other.
11. Benefits
1. We expect employees not to abuse their employment benefits. This can
refer to time off, insurance, facilities, subscriptions, or other benefits our
company offers. Refer to Human Resources for more information on
benefits.
12. Policies
1. All employees must comply with company policies. Questions should be
directed to their hiring managers and/or Human Resources.
13. Disciplinary actions
1. Repeated or intentional violation of the Code of Conduct Policy will be
met with disciplinary action. Consequences will vary depending on the
violation, but can include:
1. Demotion
2. Reprimand
3. Suspension or termination
4. Detraction of benefits for a definite or indefinite time
2. Cases of corruption, theft, embezzlement, or other unlawful behavior may
call for legal action.
Confidentiality Policy
Purpose and Scope:
1. This policy outlines expected behavior of employees to keep confidential
information about clients, partners, and our company secure.
2. This policy applies to all employees, board members, investors, and contractors
who may have access to confidential information. This policy must be made
readily available to all to whom it applies.
Background:
1. The company’s confidential information must be protected for two reasons:
1. Its protection may be legally required (e.g., sensitive customer data)
2. It may be fundamental to our business (e.g., business processes)
2. Common examples of confidential information in our company include, but are not
limited to:
1. Unpublished financial information
2. Customer/partner/vendor/external party data
3. Patents, formulas, new technologies, and other intellectual property
4. Existing and prospective customer lists
5. Undisclosed business strategies including pricing & marketing
6. Materials & processes explicitly marked as “confidential”
3. Employees will have varying levels of authorized access to confidential
information.
Policy:
1. Employee procedure for handling confidential information
1. Lock and secure confidential information at all times
2. Safely dispose of (e.g. shred) documents when no longer needed
3. View confidential information only on secure devices
4. Disclose information only when authorized and necessary
5. Do not use confidential information for personal gain, benefit, or profit
6. Do not disclose confidential information to anyone outside the company
or to anyone within the company who does not have the appropriate
privileges
7. Do not store confidential information, or copies of it, insecurely (e.g. on
unsecured devices)
8. Do not remove confidential documents from the company’s premises unless
absolutely necessary
2. Offboarding measures
1. The Hiring Manager should confirm the off-boarding procedure has been
completed by the final date of employment.
3. Confidentiality measures
1. The company will take the following measures to ensure protection of
confidential information:
1. Store and lock paper documents
2. Encrypt electronic information and implement appropriate
technical measures to safeguard databases
3. Require employees to sign non-disclosure/non-compete
agreements
4. Consult with senior management before granting employees
access to certain confidential information
4. Exceptions
1. Under certain legitimate conditions, confidential information may need to
be disclosed. Examples include:
1. If a regulatory agency requests information as part of an audit or
investigation
2. If the company requires disclosing information (within legal
bounds) as part of a venture or partnership
2. In such cases, the employee must request and receive prior written
authorization from their hiring manager before disclosing confidential
information to any third parties.
5. Disciplinary consequences
1. Employees who violate the confidentiality policy will face disciplinary and
possible legal action.
2. A suspected breach of this policy will trigger an investigation. Intentional
violations will result in termination; repeated unintentional violations may
also result in termination.
3. This policy is binding even after the termination of employment.
Business Continuity Policy
Purpose and Scope:
1. The purpose of this policy is to ensure that the organization establishes
objectives, plans, and procedures such that a major disruption to the
organization’s key business activities is minimized.
2. This policy applies to all infrastructure and data within the organization’s
information security program.
3. This policy applies to all management, employees, and suppliers that are
involved in decisions and processes affecting the organization’s business
continuity. This policy must be made readily available to all whom it applies to.
Background:
1. The success of the organization is reliant upon the preservation of critical
business operations and essential functions used to deliver key products and
services. The purpose of this policy is to define the criteria for continuing
business operations for the organization in the event of a disruption. Specifically,
this document defines:
1. The structure and authority to ensure business resilience of key
processes and systems.
2. The requirements for efforts to manage through a disaster or other
disruptive event when the need arises.
3. The criteria to efficiently and effectively resume normal business
operations after a disruption.
2. Within this document, the following definitions apply:
1. Business impact analysis/assessment - an exercise that determines the
impact of losing the support of any resource to an enterprise, establishes
the escalation of that loss over time, identifies the minimum resources
needed to return to a normal level of operation, and prioritizes recovery of
processes and the supporting system.
2. Disaster recovery plan - a set of human, physical, technical, and
procedural resources to return to a normal level of operation, within a
defined time and cost, when an activity is interrupted by an emergency or
disaster.
3. Recovery time objective - the amount of time allowed for the recovery of a
business function or resource to a normal level after a disaster or
disruption occurs.
4. Recovery point objective - determined based on the acceptable data loss
in the case of disruption of operations.
Policy:
1. Business Risk Assessment and Business Impact Analysis
1. Each manager is required to perform a business risk assessment and
business impact analysis for each key business system within their area
of responsibility.
2. The business risk assessment must identify and define the criticality of
key business systems and the repositories that contain the relevant and
necessary data for the key business system.
3. The business risk assessment must define and document the Disaster
Recovery Plan (DRP) for their area of responsibility. Each DRP shall
include:
1. Key business processes.
2. Applicable risk to availability.
3. Prioritization of recovery.
4. Recovery Time Objectives (RTOs).
5. Recovery Point Objectives (RPOs).
2. Disaster Recovery Plan
1. Each key business system must have a documented DRP to provide
guidance when hardware, software, or networks become critically
dysfunctional or cease to function (short- and long-term outages).
2. Each DRP must include an explanation of the magnitude of information or
system unavailability in the event of an outage and the process that would
be implemented to continue business operations during the outage.
Where feasible, the DRP must consider the use of alternative, off-site
computer operations (cold, warm, hot sites).
3. Each plan must be reviewed against the organization’s strategy,
objectives, culture, and ethics, as well as policy, legal, statutory and
regulatory requirements.
4. Each DRP must include:
1. An emergency mode operations plan for continuing operations in
the event of temporary hardware, software, or network outages.
2. A recovery plan for returning business functions and services to
normal on-site operations.
3. Procedures for periodic testing, review, and revisions of the DRP
for all affected business systems, as a group and/or individually.
3. Data Backup and Restoration Plans
1. Each system owner must implement a data backup and restoration plan.
2. Each data backup and restoration plan must identify:
1. The data custodian for the system.
2. The backup schedule of each system.
3. Where backup media is to be stored and secured, as well as how
access is maintained.
4. Who may remove backup media and transfer it to storage.
5. Appropriate restoration procedures to restore key business
system data from backup media to the system.
6. The restoration testing plan and frequency of testing to confirm the
effectiveness of the plan.
7. The method for restoring encrypted backup media.
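As an illustration, the required elements of a backup and restoration plan can be captured as a structured record that is easy to audit for completeness. This is a hedged sketch only; every class and field name below is an assumption for illustration, not a format mandated by this policy.

```python
from dataclasses import dataclass, field

@dataclass
class BackupRestorationPlan:
    """Illustrative record of the elements each plan must identify (names are assumptions)."""
    system_name: str
    data_custodian: str                       # who is responsible for the system's data
    backup_schedule: str                      # e.g. "daily incremental, weekly full"
    media_storage_location: str               # where media is stored/secured, and how access is maintained
    authorized_media_handlers: list[str] = field(default_factory=list)  # who may remove/transfer media
    restoration_procedure: str = ""           # how to restore key business data from backup media
    restore_test_frequency: str = ""          # e.g. "quarterly"
    encrypted_media_restore_method: str = ""  # how encrypted backup media is restored

    def is_complete(self) -> bool:
        """A plan satisfies the policy only when every required element is documented."""
        return all([
            self.data_custodian, self.backup_schedule, self.media_storage_location,
            self.authorized_media_handlers, self.restoration_procedure,
            self.restore_test_frequency, self.encrypted_media_restore_method,
        ])
```

A system owner could run `is_complete()` over all registered plans to flag any system whose backup documentation is missing a required element.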
Cyber Risk Assessment Policy
Purpose and Scope:
1. The purpose of this policy is to define the procedures to assess and treat
information security risks within the organization, and to define the acceptable
level of risk overall.
2. Risk assessment and risk treatment are applied to the entire scope of the
organization’s information security program, and to all information systems which
are used within the organization or which could have an impact on the
organization’s information security.
3. This policy applies to all management and employees that take part in the
organization’s risk assessments. This policy must be made readily available to all
whom it applies to.
Background:
1. This policy defines a step-by-step process for conducting risk assessments, as
well as to treat identified risks from an information security perspective. This
policy also describes how to prepare the Risk Assessment Report required as
part of the risk assessment process.
2. When conducting a risk assessment, the organization must identify all
organizational information systems. It must then identify all threats and
vulnerabilities having to do with such systems, and rate the severity of such
threats and vulnerabilities according to a predefined rating scale. Asset and risk
owners must be defined for each risk item.
3. Once the risk assessment is completed, the organization shall determine how to
manage risks where the overall assessed risk rating is deemed as too high. This
management is known as risk treatment. Risk treatment options include but are
not limited to applying security controls, outsourcing risk, accepting risk, or
discontinuing the activity associated with the risk.
4. A penetration test must be performed by a third party to verify the accuracy of the
risk assessment and effectiveness of deployed risk treatments.
Procedure To Execute Risk Assessment Report:
1. This report confirms that the entire risk assessment and risk treatment process
has been carried out according to the Risk Assessment Policy.
2. The purpose of the risk assessment was to identify all information systems, their
vulnerabilities, and the threats that could exploit those vulnerabilities. These
parameters were further evaluated in order to establish the criticality of individual risks.
3. The purpose of the risk treatment was to define the systematic means of
reducing or controlling the risks identified in the risk assessment.
4. All risk assessment and treatment activities were completed within the scope of
the organization’s information security program.
5. The risk assessment was implemented in the period from [day/month/year] to
[day/month/year]. The risk treatment was implemented from [day/month/year] to
[day/month/year]. Final reports were prepared during [specify period].
6. The risk assessment and risk treatment process was managed by [person
responsible for managing the risk assessment process] with expert assistance
provided by [person or company responsible for assistance].
7. During the risk assessment, information was collected through questionnaires
and interviews with responsible persons, namely the asset owners across
organizational units.
8. The process was conducted as follows:
1. All information systems and their owners were identified.
2. Threats were identified for each asset, and corresponding vulnerabilities
were identified for each threat.
3. Risk owners were identified for each risk.
4. Consequences of the loss of confidentiality, integrity and availability were
evaluated using a score from 0 to 2, with 0 being the lowest rating and 2
being the highest rating.
5. The likelihood of risk occurrence (i.e. that the threat will exploit the
vulnerability) was evaluated using a score from 0 to 2, with 0 being the
lowest rating and 2 being the highest rating.
6. The level of risk was calculated by adding up the consequence and
likelihood.
7. Risks with a score of 3 or 4 were determined to be unacceptable risks.
8. For each unacceptable risk, a risk treatment option was considered, and
appropriate information security controls were selected.
9. After controls were applied, residual risks were assessed.
9. The following documents were used or generated during the implementation of
risk assessment and risk treatment:
1. Risk Assessment Table (Appendix A): for each combination of systems,
vulnerabilities, and threats, this table shows the values for consequence
and likelihood, and calculates the risk.
2. Risk Treatment Table (Appendix B): defines the options for risk treatment,
selection of controls for each unacceptable risk, and the level of residual
risk.
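The rating arithmetic described above (consequence and likelihood each scored 0 to 2, risk calculated as their sum, and totals of 3 or 4 deemed unacceptable) can be sketched in code. The function names are illustrative assumptions, not part of the policy:

```python
def risk_level(consequence: int, likelihood: int) -> int:
    """Risk = consequence + likelihood, each rated on the 0-2 scale above."""
    for score in (consequence, likelihood):
        if score not in (0, 1, 2):
            raise ValueError("scores must be 0, 1, or 2")
    return consequence + likelihood

def is_unacceptable(consequence: int, likelihood: int) -> bool:
    """Risks scoring 3 or 4 require a treatment option and selected controls."""
    return risk_level(consequence, likelihood) >= 3
```

For example, a high-consequence (2), medium-likelihood (1) risk scores 3, so it must be treated; a medium/medium risk scores 2 and falls within the acceptable range.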
Datacenter Policy
Purpose and Scope:
1. The purpose of this policy is to define security procedures within the
organization’s data centers and secure equipment areas.
2. This policy applies to any cloud hosting providers and facilities within the
organization that are labeled as either a data center or a secure equipment area.
Such facilities are explicitly called out within this document.
3. This policy applies to all management, employees and suppliers that conduct
business operations within cloud host or data centers and secure equipment
areas.
Background:
This policy defines the policies and rules governing data centers and secure equipment
areas from both a physical and logical security perspective. The document lists all data
centers and secure equipment areas in use by the organization, prescribes how access
is controlled and enforced, and establishes procedures for any visitor or third party
access. This policy also defines prohibited activities and requirements for periodic safety
and security checks.
Policy:
1. The following locations are classified by the organization as secure areas and
are governed by this policy:
1. [list all data center locations and secure areas under the organization’s
control]
2. Each data center and secure area must have a manager assigned. The
manager’s name must be documented in the organization’s records. In the case
of any on-prem data centers, the manager’s name must also be posted in and
near the secure area.
3. Each secure area must be clearly marked. Access to the secure area must be
controlled by at least a locked door. A visitor access log must be clearly marked
and easily accessible just inside the door.
4. Persons who are not employed by the organization are considered to be visitors.
Visitors accessing secure areas shall:
1. Obtain access to secure areas in accordance with reference a.
2. Only enter and remain in secure areas when escorted by a designated
employee. The employee must stay with the visitor during their entire stay
inside the secure area.
3. Log the precise time of entry and exit in the visitor access log.
5. The following activities are prohibited inside secure areas:
1. Photography, or video or audio recordings of any kind.
2. Connection of any electrical device to a power supply, unless specifically
authorized by the responsible person.
3. Unauthorized usage of or tampering with any installed equipment.
4. Connection of any device to the network, unless specifically authorized by
the responsible person.
5. Storage or archival of large amounts of paper materials.
6. Storage of flammable materials or equipment.
7. Use of portable heating devices.
8. Smoking, eating, or drinking.
6. Secure areas must be checked for compliance with security and safety
requirements on at least a quarterly basis.
Software Development Lifecycle Policy
Purpose and Scope:
1. The purpose of this policy is to define requirements for establishing and
maintaining baseline protection standards for company software, network
devices, servers, and desktops.
2. This policy applies to all users performing software development, system
administration, and management of these activities within the organization. This
typically includes employees and contractors, as well as any relevant external
parties involved in these activities (hereinafter referred to as “users”). This policy
must be made readily available to all users.
3. This policy also applies to enterprise-wide systems and applications developed
by the organization or on behalf of the organization for production
implementation.
Background:
1. The intent of this policy is to ensure a well-defined, secure and consistent
process for managing the entire lifecycle of software and information systems,
from initial requirements analysis until system decommission. The policy defines
the procedure, roles, and responsibilities, for each stage of the software
development lifecycle.
2. Within this policy, the software development lifecycle consists of requirements
analysis, architecture and design, development, testing,
deployment/implementation, operations/maintenance, and decommission. These
processes may be followed in any form; in a waterfall model, it may be
appropriate to follow the process linearly, while in an agile development model,
the process can be repeated in an iterative fashion.
Policy:
1. The organization’s Software Development Life Cycle (SDLC) includes the
following phases:
1. Requirements Analysis
2. Architecture and Design
3. Development
4. Testing
5. Deployment/Implementation
6. Operations/Maintenance
7. Decommission
2. During all phases of the SDLC where a system is not in production, the system
must not have live data sets that contain information identifying actual people or
corporate entities, actual financial data such as account numbers, security codes,
routing information, or any other financially identifying data. Information that
would be considered sensitive must never be used outside of production
environments.
3. The following activities must be completed and/or considered during the
requirements analysis phase:
1. Analyze business requirements.
2. Perform a risk assessment. More information on risk assessments is
discussed in the Risk Assessment Policy (reference (a)).
3. Discuss aspects of security (e.g., confidentiality, integrity, availability) and
how they might apply to this requirement.
4. Review regulatory requirements and the organization’s policies,
standards, procedures and guidelines.
5. Review future business goals.
6. Review current business and information technology operations.
7. Incorporate program management items, including:
1. Analysis of current system users/customers.
2. Understand customer-partner interface requirements (e.g.,
business-level, network).
3. Discuss project timeframe.
8. Develop and prioritize security solution requirements.
9. Assess cost and budget constraints for security solutions, including
development and operations.
10. Approve security requirements and budget.
11. Make “buy vs. build” decisions for security services based on the
information above.
4. The following must be completed/considered during the architecture and design
phase:
1. Educate development teams on how to create a secure system.
2. Develop and/or refine infrastructure security architecture.
3. List technical and non-technical security controls.
4. Perform architecture walkthrough.
5. Create a system-level security design.
6. Create high-level non-technical and integrated technical security designs.
7. Perform a cost/benefit analysis for design components.
8. Document the detailed technical security design.
9. Perform a design review, which must include, at a minimum, technical
reviews of application and infrastructure, as well as a review of high-level
processes.
10. Describe detailed security processes and procedures, including:
segregation of duties and segregation of development, testing and
production environments.
11. Design initial end-user training and awareness programs.
12. Design a general security test plan.
13. Update the organization’s policies, standards, and procedures, if
appropriate.
14. Assess and document how to mitigate residual application and
infrastructure vulnerabilities.
15. Design and establish separate development and test environments.
5. The following must be completed and/or considered during the development
phase:
1. Set up a secure development environment (e.g., servers, storage).
2. Train infrastructure teams on installation and configuration of applicable
software, if required.
3. Develop code for application-level security components.
4. Install, configure and integrate the test infrastructure.
5. Set up security-related vulnerability tracking processes.
6. Develop a detailed security test plan for current and future versions (i.e.,
regression testing).
7. Conduct unit testing and integration testing.
6. The following must be completed and/or considered during the testing phase:
1. Perform a code and configuration review through both static and dynamic
analysis of code to identify vulnerabilities.
2. Test configuration procedures.
3. Perform system tests.
4. Conduct performance and load tests with security controls enabled.
5. Perform usability testing of application security controls.
6. Conduct independent vulnerability assessments of the system, including
the infrastructure and application.
7. The following must be completed and/or considered during the deployment
phase:
1. Conduct pilot deployment of the infrastructure, application and other
relevant components.
2. Conduct transition between pilot and full-scale deployment.
3. Perform integrity checking on system files to ensure authenticity.
4. Deploy training and awareness programs to train administrative personnel
and users in the system’s security functions.
5. Require participation of at least two developers in order to conduct full-scale deployment to the production environment.
8. The following must be completed and/or considered during the
operations/maintenance phase:
1. Several security tasks and activities must be routinely performed to
operate and administer the system, including but not limited to:
1. Administering users and access.
2. Tuning performance.
3. Performing backups according to requirements defined in the
System Availability Policy.
4. Performing system maintenance (i.e., testing and applying
security updates and patches).
5. Conducting training and awareness.
6. Conducting periodic system vulnerability assessments.
7. Conducting annual risk assessments.
2. Operational systems must:
1. Be reviewed to ensure that the security controls, both automated
and manual, are functioning correctly and effectively.
2. Have logs that are periodically reviewed to evaluate the security of
the system and validate audit controls.
3. Implement ongoing monitoring of systems and users to ensure
detection of security violations and unauthorized changes.
4. Validate the effectiveness of the implemented security controls
through security training as required by the Procedure For
Executing Incident Response.
5. Have a software application and/or hardware patching process
that is performed regularly in order to prevent software bugs and
security problems from being introduced into the organization’s
technology environment. Patches and updates must be applied
within ninety (90) days of release to provide for adequate testing
and propagation of software updates. Emergency, critical, break-fix,
and zero-day vulnerability patch releases must be applied as
quickly as possible.
9. The following must be completed and/or considered during the decommission
phase:
1. Conduct unit testing and integration testing on the system after
component removal.
2. Conduct operational transition for component removal/replacement.
3. Determine data retention requirements for application software and
systems data.
4. Document the detailed technical security design.
5. Update the organization’s policies, standards and procedures, if
appropriate.
6. Assess and document how to mitigate residual application and
infrastructure vulnerabilities.
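The ninety-day patch window in the operations/maintenance phase lends itself to mechanical tracking. A minimal sketch, assuming routine (non-emergency) patches; function names are assumptions for illustration:

```python
from datetime import date, timedelta

PATCH_WINDOW_DAYS = 90  # routine patches: within ninety (90) days of release

def patch_deadline(release_date: date) -> date:
    """Latest date by which a routine patch must be applied."""
    return release_date + timedelta(days=PATCH_WINDOW_DAYS)

def is_overdue(release_date: date, today: date) -> bool:
    """True when a routine patch has exceeded the 90-day window.

    Emergency, critical, break-fix, and zero-day patches are applied as
    quickly as possible and are not governed by this window.
    """
    return today > patch_deadline(release_date)
```

A patch released on 2024-01-01, for instance, must be applied by 2024-03-31 under this window.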
Disaster Recovery Policy
Purpose and Scope:
1. The purpose of this policy is to define the organization’s procedures to recover
Information Technology (IT) infrastructure and IT services within set deadlines in
the case of a disaster or other disruptive incident. The objective of this plan is to
complete the recovery of IT infrastructure and IT services within a set Recovery
Time Objective (RTO).
2. This policy includes all resources and processes necessary for service and data
recovery, and covers all information security aspects of business continuity
management.
3. This policy applies to all management, employees and suppliers that are involved
in the recovery of IT infrastructure and services within the organization. This
policy must be made readily available to all whom it applies to.
Background:
1. This policy defines the overall disaster recovery strategy for the organization. The
strategy describes the organization’s Recovery Time Objective (RTO), which is
defined as the duration of time and service level for critical business processes to
be restored after a disaster or other disruptive event, as well as the procedures,
responsibility and technical guidance required to meet the RTO. This policy also
lists the contact information for personnel and service providers that may be
needed during a disaster recovery event.
2. The following conditions must be met for this plan to be viable:
1. All equipment, software and data (or their backups/failovers) are available
in some manner.
2. If an incident takes place at the organization’s physical location, all
resources involved in recovery efforts are able to be transferred to an
alternate work site (such as their home office) to complete their duties.
3. The Information Security Officer is responsible for coordinating and
conducting a rehearsal of this continuity plan at least twice per year.
3. This plan does not cover the following types of incidents:
1. Incidents that affect customers or partners but have no effect on the
organization’s systems; in this case, the customer must employ their own
continuity processes to make sure that they can continue to interact with
the organization and its systems.
2. Incidents that affect cloud infrastructure suppliers at the core
infrastructure level, including but not limited to Google, Heroku, and
Amazon Web Services. The organization depends on such suppliers to
employ their own continuity processes.
Policy:
1. Relocation
1. If the organization’s primary work site is unavailable, an alternate work
site shall be used by designated personnel. The organization’s alternate
work site is located at [list the address of the alternate work site that the
organization will use].
2. The personnel required to report to the alternate work site during a
disaster includes [list the personnel titles responsible for reporting to the
alternate work site].
2. Critical Services, Key Tasks, and Service Level Agreements (SLAs)
1. The following services and technologies are considered to be critical for
business operations, and must immediately be restored (in priority order):
1. [list the critical services and technologies that must remain running
during a disaster]
2. The following key tasks and SLAs must be considered during a disaster
recovery event, in accordance with the organization’s objectives,
agreements, and legal, contractual or regulatory obligations:
1. [list of key tasks / SLAs that must be kept operational, with
respective deadlines]
3. The organization’s Recovery Time Objective (RTO) is [set the maximum amount
of time before critical processes must be restored, to include relocation and
getting critical services/technologies back online]. Relocation and restoration of
critical services and technologies must be completed within this time period.
4. Notification of Plan Initiation
1. The following personnel must be notified when this plan is initiated:
1. [list all personnel (including titles) that must be notified of plan initiation]
2. [person responsible for notifications, including title] is responsible for
notifying the personnel listed above.
5. Plan Deactivation
1. This plan must only be deactivated by [person or persons with authority to
deactivate the plan, including job title].
2. In order for this plan to be deactivated, all relocation activities and critical
service / technology tasks as detailed above must be fully completed
and/or restored. If the organization is still operating in an impaired
scenario, the plan may still be kept active at the discretion of [person or
persons with authority to deactivate the plan, including job title].
3. The following personnel must be notified when this plan is deactivated:
1. [list all personnel (including titles) that must be notified of plan
deactivation]
6. The organization must endeavor to restore its normal level of business
operations as soon as possible.
7. A list of relevant points of contact both internal and external to the organization is
enclosed in Appendix A.
8. During a crisis, it is vital for certain recovery tasks to be performed right away.
The following actions are pre-authorized in the event of a disaster recovery
event:
1. [job title] must take all steps specified in this disaster recovery plan in
order to recover the organization’s information technology infrastructure
and services.
2. [job title] is authorized to make urgent purchases of equipment and
services up to [amount].
3. [job title] is authorized to communicate with clients.
4. [job title] is authorized to communicate with the public.
5. [job title] is authorized to communicate with public authorities such as
state and local governments and law enforcement.
6. [job title] is authorized to cooperate with [name of supplier/outsourcing
partner].
7. [add/modify/remove authorizations in this section as necessary]
9. Specific recovery steps for information systems infrastructure and services are
provided in Appendix B.
Appendix A: Relevant Points of Contact
Internal Contacts:
● To be created by user
External Contacts:
● To be created by user
Appendix B: Recovery Steps for Information Systems Infrastructure & Services
Specific recovery procedures are described in detail below:
Encryption Policy
Purpose and Scope:
1. This policy defines organizational requirements for the use of cryptographic
controls, as well as the requirements for cryptographic keys, in order to protect
the confidentiality, integrity, authenticity, and non-repudiation of information.
2. This policy applies to all systems, equipment, facilities and information within the
scope of the organization’s information security program.
3. All employees, contractors, part-time and temporary workers, service providers,
and those employed by others to perform work on behalf of the organization
having to do with cryptographic systems, algorithms, or keying material are
subject to this policy and must comply with it.
Background:
This policy defines the high level objectives and implementation instructions for the
organization’s use of cryptographic algorithms and keys. It is vital that the organization
adopt a standard approach to cryptographic controls across all work centers in order to
ensure end-to-end security, while also promoting interoperability. This document defines
the specific algorithms approved for use, requirements for key management and
protection, and requirements for using cryptography in cloud environments.
Policy:
1. The organization must protect individual systems or information by means of
cryptographic controls as defined in Table 3:
Table 3: Cryptographic Controls
2. Except where otherwise stated, keys must be managed by their owners.
3. Cryptographic keys must be protected against loss, change or destruction by
applying appropriate access control mechanisms to prevent unauthorized use
and backing up keys on a regular basis.
4. When required, customers of the organization’s cloud-based software or platform
offering must be able to obtain information regarding:
1. The cryptographic tools used to protect their information.
2. Any capabilities that are available to allow cloud service customers to
apply their own cryptographic solutions.
3. The identity of the countries where the cryptographic tools are used to
store or transfer cloud service customers’ data.
5. The use of organizationally-approved encryption must be governed in
accordance with the laws of the country, region, or other regulating entity in
which users perform their work. Encryption must not be used to violate any laws
or regulations including import/export restrictions. The encryption used by the
Company conforms to international standards and U.S. import/export
requirements, and thus can be used across international boundaries for business
purposes.
6. All key management must be performed using software that automatically
manages access control, secure storage, backup and rotation of keys.
Specifically:
7. The key management service must provide key access to specifically-designated
users, with the ability to encrypt/decrypt information and generate data
encryption keys.
8. The key management service must provide key administration access to
specifically-designated users, with the ability to create keys, schedule key
deletion, enable/disable key rotation, and set usage policies for keys.
9. The key management service must store and backup keys for the entirety of their
operational lifetime.
10. The key management service must rotate keys at least once every 12 months.
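As an illustration of the 12-month rotation requirement, the check can be automated against key metadata. This is a minimal sketch only: the `KeyRecord` structure and its field names are assumptions for illustration, not the schema of any particular key management service.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical key record; real key management services expose similar
# metadata (e.g. creation and last-rotation timestamps).
@dataclass
class KeyRecord:
    key_id: str
    last_rotated: datetime

def rotation_overdue(key: KeyRecord, now: datetime,
                     max_age: timedelta = timedelta(days=365)) -> bool:
    """True if the key has gone longer than the policy window
    (12 months approximated here as 365 days) without rotation."""
    return now - key.last_rotated > max_age

key = KeyRecord("prod-db-key", last_rotated=datetime(2023, 1, 1))
print(rotation_overdue(key, now=datetime(2024, 6, 1)))  # True: rotated >12 months ago
```

A scheduled job running a check like this against the key inventory is one way to surface keys approaching the rotation deadline before they lapse.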
Incident Reporting Policy
Purpose and Scope:
1. This security incident response policy is intended to establish controls to ensure
detection of security vulnerabilities and incidents, as well as quick reaction and
response to security breaches.
2. This document also provides implementing instructions for security incident
response, to include definitions, procedures, responsibilities, and performance
measures (metrics and reporting mechanisms).
3. This policy applies to all users of information systems within the organization.
This typically includes employees and contractors, as well as any external parties
that come into contact with systems and information controlled by the
organization (hereinafter referred to as “users”). This policy must be made readily
available to all users.
Background
1. A key objective of the organization’s Information Security Program is to focus on
detecting information security weaknesses and vulnerabilities so that incidents
and breaches can be prevented wherever possible. The organization is
committed to protecting its employees, customers, and partners from illegal or
damaging actions taken by others, either knowingly or unknowingly. Despite this,
incidents and data breaches are likely to happen; when they do, the organization
is committed to rapidly responding to them, which may include identifying,
containing, investigating, resolving, and communicating information related to
the breach.
2. This policy requires that all users report any perceived or actual information
security vulnerability or incident as soon as possible using the contact
mechanisms prescribed in this document. In addition, the organization must
employ automated scanning and reporting mechanisms that can be used to
identify possible information security vulnerabilities and incidents. If a
vulnerability is identified, it must be resolved within a set period of time based on
its severity. If an incident is identified, it must be investigated within a set period
of time based on its severity. If an incident is confirmed as a breach, a set
procedure must be followed to contain, investigate, resolve, and communicate
information to employees, customers, partners and other stakeholders.
3. Within this document, the following definitions apply:
1. Information Security Vulnerability: a vulnerability in an information system,
information system security procedures, or administrative controls that
could be exploited to gain unauthorized access to information or to disrupt
critical processing.
2. Information Security Incident: a suspected, attempted, successful, or
imminent threat of unauthorized access, use, disclosure, breach,
modification, or destruction of information; interference with information
technology operations; or significant violation of information security
policy.
Policy:
1. All users must report any system vulnerability, incident, or event pointing to a
possible incident to the Information Security Manager (ISM) as quickly as
possible but no later than 24 hours. Incidents must be reported by sending an
email message to with details of the incident.
2. Users must be trained on the procedures for reporting information security
incidents or discovered vulnerabilities, and their responsibilities to report such
incidents. Failure to report information security incidents shall be considered to
be a security violation and will be reported to the Human Resources (HR)
Manager for disciplinary action.
3. Information and artifacts associated with security incidents (including but not
limited to files, logs, and screen captures) must be preserved in the event that
they need to be used as evidence of a crime.
4. All information security incidents must be responded to through the incident
management procedures defined below.
5. In order to appropriately plan and prepare for incidents, the organization must
review incident response procedures at least once per year for currency, and
update as required.
6. The incident response procedure must be tested at least twice per year.
7. The incident response logs must be reviewed once per month to assess
response effectiveness.
Procedure For Establishing Incident Response System:
1. Define an on-call schedule and assign an Information Security Manager (ISM)
responsible for managing the incident response procedure during each
availability window.
2. Define a notification channel to alert the on-call ISM of a potential security incident.
Establish a company resource that includes up-to-date contact information for the
on-call ISM.
3. Assign management sponsors from the Engineering, Legal, HR, Marketing, and
C-Suite teams.
4. Distribute the Procedure For Executing Incident Response to all staff and ensure
up-to-date versions are accessible in a dedicated company resource.
5. Require all staff to complete training for Procedure For Executing Incident
Response at least twice per year.
Procedure For Executing Incident Response:
1. When an information security incident is identified or detected, users must notify
their immediate manager within 24 hours. The manager must immediately notify
the ISM on call for proper response. The following information must be included
as part of the notification:
1. Description of the incident
2. Date, time, and location of the incident
3. Person who discovered the incident
4. How the incident was discovered
5. Known evidence of the incident
6. Affected system(s)
2. Within 48 hours of the incident being reported, the ISM shall conduct a
preliminary investigation and risk assessment to review and confirm the details of
the incident. If the incident is confirmed, the ISM must assess the impact to the
organization and assign a severity level, which will determine the level of
remediation effort required:
1. High: the incident is potentially catastrophic to the organization and/or
disrupts the organization’s day-to-day operations; a violation of legal,
regulatory or contractual requirements is likely.
2. Medium: the incident will cause harm to one or more business units within
the organization and/or will cause delays to a business unit’s activities.
3. Low: the incident is a clear violation of organizational security policy, but
will not substantively impact the business.
3. The ISM, in consultation with management sponsors, shall determine appropriate
incident response activities in order to contain and resolve incidents.
4. The ISM must take all necessary steps to preserve forensic evidence (e.g. log
information, files, images) for further investigation to determine if any malicious
activity has taken place. All such information must be preserved and provided to
law enforcement if the incident is determined to be malicious.
5. If the incident is deemed as High or Medium, the ISM must work with the VP
Brand/Creative, General Counsel, and HR Manager to create and execute a
communications plan that communicates the incident to users, the public, and
others affected.
6. The ISM must take all necessary steps to resolve the incident and recover
information systems, data, and connectivity. All technical steps taken during an
incident must be documented in the organization’s incident log, and must contain
the following:
1. Description of the incident
2. Incident severity level
3. Root cause (e.g. source address, website malware, vulnerability)
4. Evidence
5. Mitigations applied (e.g. patch, re-image)
6. Status (open, closed, archived)
7. Disclosures (parties to which the details of this incident were disclosed to,
such as customers, vendors, law enforcement, etc.)
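As an illustration of the log structure, the seven required elements above can be captured as one record per incident. This is a sketch only; the dataclass and field names are illustrative assumptions, not a mandated schema.

```python
from dataclasses import dataclass, field
from typing import List

# Mirrors the required contents of an incident log entry; names and
# types are illustrative assumptions, not a prescribed format.
@dataclass
class IncidentLogEntry:
    description: str
    severity: str                       # "High", "Medium", or "Low"
    root_cause: str                     # e.g. source address, malware, vulnerability
    evidence: List[str]                 # preserved artifacts (logs, files, images)
    mitigations: List[str]              # e.g. patch, re-image
    status: str = "open"                # open, closed, or archived
    disclosures: List[str] = field(default_factory=list)  # parties notified

entry = IncidentLogEntry(
    description="Malware detected on build server",
    severity="Medium",
    root_cause="Unpatched CI agent",
    evidence=["av-scan.log"],
    mitigations=["re-image", "patch"],
)
```

Keeping entries in a structured form like this makes the monthly log review and the post-mortem step straightforward to query.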
7. After an incident has been resolved, the ISM must conduct a post mortem that
includes root cause analysis and documentation of any lessons learned.
8. Depending on the severity of the incident, the Chief Executive Officer (CEO) may
elect to contact external authorities, including but not limited to law enforcement,
private investigation firms, and government organizations as part of the response
to the incident.
9. The ISM must notify all users of the incident, conduct additional training if
necessary, and present any lessons learned to prevent future occurrences.
Where necessary, the HR Manager must take disciplinary action if a user’s
activity is deemed as malicious.
Information Security Policy
Purpose and Scope:
1. This information security policy defines the purpose, principles, objectives and
basic rules for information security management.
2. This document also defines procedures to implement high level information
security protections within the organization, including definitions, procedures,
responsibilities and performance measures (metrics and reporting mechanisms).
3. This policy applies to all users of information systems within the organization.
This typically includes employees and contractors, as well as any external parties
that come into contact with systems and information controlled by the
organization (hereinafter referred to as “users”). This policy must be made readily
available to all users.
Background:
1. This policy defines the high level objectives and implementation instructions for
the organization’s information security program. It includes the organization’s
information security objectives and requirements; such objectives and
requirements are to be referenced when setting detailed information security
policy for other areas of the organization. This policy also defines management
roles and responsibilities for the organization’s Information Security Management
System (ISMS). Finally, this policy references all security controls implemented
within the organization.
2. Within this document, the following definitions apply:
1. Confidentiality: a characteristic of information or information systems in
which such information or systems are only available to authorized
entities.
2. Integrity: a characteristic of information or information systems in which
such information or systems may only be changed by authorized entities,
and in an approved manner.
3. Availability: a characteristic of information or information systems in which
such information or systems can be accessed by authorized entities
whenever needed.
4. Information Security: the act of preserving the confidentiality, integrity,
and availability of information and information systems.
5. Information Security Management System (ISMS): the overall
management process that includes the planning, implementation,
maintenance, review, and improvement of information security.
Policy:
1. Managing Information Security
1. The organization’s main objectives for information security include the
following:
2. The organization’s objectives for information security are in line with the
organization’s business objectives, strategy, and plans.
3. Objectives for individual security controls or groups of controls are
proposed by the company management team, including but not limited to
[list key roles inside the organization that will participate in information
security matters], and others as appointed by the CEO; these security
controls are approved by the CEO in accordance with the Risk
Assessment Policy (Reference (a)).
4. All objectives must be reviewed at least once per year.
5. The company will measure the fulfillment of all objectives. The
measurement will be performed at least once per year. The results must
be analyzed, evaluated, and reported to the management team.
2. Information Security Requirements
1. This policy and the entire information security program must be compliant
with legal and regulatory requirements as well as with contractual
obligations relevant to the organization.
2. All employees, contractors, and other individuals subject to the
organization’s information security policy must read and acknowledge all
information security policies.
3. The process of selecting information security controls and safeguards for
the organization is defined in Reference (a).
4. The organization prescribes guidelines for remote workers as part of the
Remote Access Policy (reference (b)).
5. To counter the risk of unauthorized access, the organization maintains a
Data Center Security Policy (reference (c)).
6. Security requirements for the software development life cycle, including
system development, acquisition and maintenance are defined in the
Software Development Lifecycle Policy (reference (d)).
7. Security requirements for handling information security incidents are
defined in the Security Incident Response Policy (reference (e)).
8. Disaster recovery and business continuity management policy is defined
in the Disaster Recovery Policy (reference (f)).
9. Requirements for information system availability and redundancy are
defined in the System Availability Policy (reference (g)).
Log Management Policy
Purpose and Scope:
1. This log management and review policy defines specific requirements for
information systems to generate, store, process, and aggregate appropriate audit
logs across the organization’s entire environment in order to provide key
information and detect indicators of potential compromise.
2. This policy applies to all information systems within the organization’s production
network.
3. This policy applies to all employees, contractors, and partners of the organization
that administer or provide maintenance on the organization’s production systems.
Throughout this policy, these individuals are referred to as system administrators.
Background:
In order to measure an information system’s level of security through confidentiality,
integrity, and availability, the system must collect audit data that provides key insights
into system performance and activities. This audit data is collected in the form of system
logs. Logging from critical systems, applications, and services provides information that
can serve as a starting point for metrics and incident investigations. This policy provides
specific requirements and instructions for how to manage such logs.
Policy:
1. All production systems within the organization shall record and retain audit-logging information that includes the following:
1. Activities performed on the system.
2. The user or entity (i.e. system account) that performed the activity,
including the system that the activity was performed from.
3. The file, application, or other object that the activity was performed on.
4. The time that the activity occurred.
5. The tool that the activity was performed with.
6. The outcome (e.g., success or failure) of the activity.
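The six elements above map naturally onto one structured record per event. A minimal sketch follows; the JSON field names are illustrative, not a required schema.

```python
import json
from datetime import datetime, timezone

def audit_record(activity, actor, source_host, target, tool, outcome):
    """Emit one audit log line covering the six required elements above."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when it occurred
        "activity": activity,          # what was performed
        "actor": actor,                # user or system account responsible
        "source_host": source_host,    # system the activity was performed from
        "target": target,              # file, application, or object acted on
        "tool": tool,                  # tool the activity was performed with
        "outcome": outcome,            # success or failure
    })

line = audit_record("file_read", "svc-backup", "10.0.1.7",
                    "/data/report.csv", "rsync", "success")
```

Emitting one self-describing JSON line per event also simplifies the central aggregation requirement below, since most log aggregation systems ingest newline-delimited JSON directly.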
2. Specific activities to be logged must include, at a minimum:
1. Creation, reading, updating, or deletion of information (including
authentication information such as usernames or passwords).
2. Accepted or initiated network connections.
3. User authentication and authorization to systems and networks.
4. Granting, modification, or revocation of access rights, including adding a
new user or group; changing user privileges, file permissions, database
object permissions, firewall rules, and passwords.
5. System, network, or services configuration changes, including software
installation, patches, updates, or other installed software changes.
6. Startup, shutdown, or restart of an application.
7. Application process abort, failure, or abnormal end, especially due to
resource exhaustion or reaching a resource limit or threshold (such as
CPU, memory, network connections, network bandwidth, disk space, or
other resources), the failure of network services such as DHCP or DNS,
or hardware fault.
8. Detection of suspicious and/or malicious activity from a security system
such as an Intrusion Detection or Prevention System (IDS/IPS), anti-virus
system, or anti-spyware system.
3. Unless technically impractical or infeasible, all logs must be aggregated in a
central system so that activities across different systems can be correlated,
analyzed, and tracked for similarities, trends, and cascading effects. Log
aggregation systems must have automatic and timely log ingest, event and
anomaly tagging and alerting, and ability for manual review.
4. Logs must be manually reviewed on a regular basis:
1. The activities of users, administrators and system operators must be
reviewed on at least a monthly basis.
2. Logs related to PII must be reviewed on at least a monthly basis in order
to identify unusual behavior.
5. When using an outsourced cloud environment, logs must be kept on cloud
environment access and use, resource allocation and utilization, and changes to
PII. Logs must be kept for all administrators and operators performing activities in
cloud environments.
6. All information systems within the organization must synchronize their clocks by
implementing Network Time Protocol (NTP) or a similar capability. All information
systems must synchronize with the same primary time source.
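As one way to satisfy the clock synchronization requirement, the fragment below assumes Linux hosts running chrony; the server hostname is a placeholder, not a real organizational asset.

```shell
# /etc/chrony/chrony.conf (fragment): point every host at the same
# primary time source so log timestamps correlate across systems.
# "ntp.internal.example.com" is a placeholder hostname.
server ntp.internal.example.com iburst

# On each host, verify synchronization status:
chronyc tracking
```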
Removable Media and Cloud Storage Policy
Purpose and Scope:
1. This removable media, cloud storage and Bring Your Own Device (BYOD) policy
defines the objectives, requirements and implementing instructions for storing
data on removable media, in cloud environments, and on personally-owned
devices, regardless of data classification level.
2. This policy applies to all information and data within the organization’s
information security program, as well as all removable media, cloud systems and
personally-owned devices either owned or controlled by the organization.
3. This policy applies to all users of information systems within the organization.
This typically includes employees and contractors, as well as any external parties
that come into contact with systems and information controlled by the
organization (hereinafter referred to as “users”). This policy must be made readily
available to all users.
Background:
1. This policy defines the procedures for safely using removable media, cloud
storage and personally-owned devices to limit data loss or exposure. Such forms
of storage must be strictly controlled because of the sensitive data that can be
stored on them. Because each of these storage types is inherently ephemeral
or portable in nature, it is possible for the organization to lose the ability to
oversee or control the information stored on them if strict security standards are
not followed.
2. This document consists of three sections pertaining to removable media, cloud
storage, and personally-owned devices. Each section contains requirements and
implementing instructions for the registration, management, maintenance, and
disposition of each type of storage.
3. Within this policy, the term sensitive information refers to information that is
classified as RESTRICTED or CONFIDENTIAL in accordance with the Data
Classification Policy (reference (a)).
Policy:
1. Removable Media
1. All removable media in active use and containing data pertinent to the
organization must be registered in the organization’s Asset Inventory
(reference (b)).
2. All removable media listed in reference (b) must be re-inventoried on a
quarterly basis to ensure that they are still within the control of the
organization.
3. The owner of the removable media must conduct all appropriate
maintenance on the item at intervals appropriate to the type of media,
such as cleaning, formatting, labeling, etc.
4. The owner of the removable media, where practical, must ensure that an
alternate or backup copy of the information located on the device exists.
5. Removable media must be stored in a safe place that has a reduced risk
of fire or flooding damage.
6. If the storage item contains sensitive information, removable media must:
7. All data on removable media devices must be erased, or the device must
be destroyed, before it is reused or disposed of.
8. When removable media devices are disposed of, the device owner must
inform the ISM so that it can be removed from reference (b).
2. Cloud Storage
1. All cloud storage systems in active use and containing data pertinent to
the organization must be registered in reference (b). Registration may be
accomplished by manual or automated means.
2. All cloud storage systems listed in reference (b) must be re-inventoried on
a quarterly basis to ensure that they are still within the control of the
organization. To re-inventory an item, the owner of the cloud storage
system must check in the item with the organization’s Information Security
Manager (ISM). Re-inventory may be accomplished by manual or
automated means.
3. The owner of the cloud storage system must conduct all appropriate
maintenance on the system at regular intervals to include system
configuration, access control, performance monitoring, etc.
4. Data on cloud storage systems must be replicated to at least one other
physical location. Depending on the cloud storage provider, this
replication may be automatically configured.
5. The organization must only use cloud storage providers that can
demonstrate, either through security accreditation, demonstration, tour, or
other means that their facilities are secured, both physically and
electronically, using best practices.
6. If the cloud storage system contains sensitive information, that
information must be encrypted in accordance with reference (d).
7. Data must be erased from cloud storage systems using a technology
and process that is approved by the ISM.
8. When use of a cloud storage system is discontinued, the system owner
must inform the ISM so that it can be removed from reference (b).
3. Personally-owned Devices
1. Organizational data that is stored, transferred or processed on personally-owned devices remains under the organization’s ownership, and the
organization retains the right to control such data even though it is not the
owner of the device.
2. The ISM is responsible for conducting overall management of personally-owned devices, to include:
3. Personally-identifiable information (PII) may not be stored, processed or
accessed at any time on a personally-owned device.
4. The following acceptable use requirements must be observed by users of
personally-owned devices:
5. The organization must reserve the right to view, edit, and/or delete any
organizational information that is stored, processed or transferred on the
device.
6. The organization must reserve the right to perform full deletion of all of its
data on the device if it considers that necessary for the protection of
company-related data, without the consent of the device owner.
7. The organization will not pay the employees (the owners of BYOD) any
fee for using the device for work purposes.
8. The organization will pay for any new software that needs to be installed
for company use.
9. All security breaches related to personally-owned devices must be
reported immediately to the ISM.
Office Security Policy
Purpose and Scope:
1. This policy establishes the rules governing controls, monitoring, and removal of
physical access to the company’s facilities.
2. This policy applies to all staff, contractors, or third parties who require access to
any physical location owned, operated, or otherwise occupied by the company. A
separate policy exists for governing access to the company data center.
Policy:
1. Management responsibilities
1. Management shall ensure:
2. Key access & card systems
1. The following policies are applied to all facility access cards/keys:
3. Staff & contractor access procedure
1. Access to physical locations is granted to employees and contractors
based on individual job function and will be granted by Human
Resources.
2. Any individual granted access to physical spaces will be issued a physical
key or access key card. Key and card issuance is tracked by Human
Resources and will be periodically reviewed.
3. In the case of termination, Human Resources should ensure immediate
revocation of access (i.e. collection of keys, access cards, and any other
asset used to enter facilities) through the offboarding procedure.
4. Visitor & guest access procedure
1. The following policies are applied to identification & authorization of
visitors and guests:
5. Audit controls & management
1. Documented procedures and evidence of practice should be in place for
this policy. Acceptable controls and procedures include:
6. Enforcement
1. Employees, contractors, or third parties found in violation of this policy
(whether intentional or accidental) may be subject to disciplinary action,
including:
Password Policy
Purpose and Scope:
1. The Password Policy describes the procedure to select and securely manage
passwords.
2. This policy applies to all employees, contractors, and any other personnel who
have an account on any system that resides at any company facility or has
access to the company network.
Policy:
1. Rotation requirements
1. All system-level passwords should be rotated on at least a quarterly
basis. All user-level passwords should be rotated at least every six
months.
2. If a credential is suspected of being compromised, the password in
question should be rotated immediately and the Engineering/Security
team should be notified.
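To make the schedule concrete, a minimal sketch follows. Mapping “quarterly” and “every six months” to 90 and 180 days is an interpretation for illustration, not a figure stated in this policy.

```python
from datetime import date, timedelta

# Assumed day-count mapping of "quarterly" and "every six months".
ROTATION_WINDOW = {
    "system": timedelta(days=90),   # system-level passwords
    "user": timedelta(days=180),    # user-level passwords
}

def rotation_due(account_type: str, last_rotated: date, today: date) -> bool:
    """True if the password has reached the end of its rotation window."""
    return today - last_rotated >= ROTATION_WINDOW[account_type]

print(rotation_due("system", date(2024, 1, 1), date(2024, 5, 1)))  # True
print(rotation_due("user", date(2024, 1, 1), date(2024, 5, 1)))    # False
```

A check like this can back a periodic report flagging accounts whose credentials are due for rotation, rather than relying on users to track the dates themselves.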
2. Password protection
1. All passwords are treated as confidential information and should not be
shared with anyone. If you receive a request to share a password, deny
the request and contact the system owner for assistance in provisioning
an individual user account.
2. Do not write down passwords, store them in emails, electronic notes, or
mobile devices, or share them over the phone. If you must store
passwords electronically, do so with a password manager that has been
approved by IT. If you truly must share a password, do so through a
designated password manager or grant access to an application through
a single sign on provider.
3. Do not use the “Remember Password” feature of applications and web
browsers.
4. If you suspect a password has been compromised, rotate the password
immediately and notify engineering/security.
3. Enforcement
1. An employee or contractor found to have violated this policy may be
subject to disciplinary action.
Policy Training Policy
Purpose and Scope:
1. This policy addresses policy education requirements for employees and
contractors.
2. This policy applies to all full-time employees, part-time employees, and
contractors. Adherence to assigned policies is binding under each individual’s
Employment Offer Letter and/or Independent Contractor Agreement.
Applicability:
1. Upon hire of a new employee or contractor, the Hiring Manager will determine
which subsets of policies will apply to that individual. The individual will have five
working days to read the assigned policies. The following will be logged in the
Policy Training Policy Ledger:
1. Assignment date
2. Completion date
3. Policy
4. Assignee
5. Assigner
6. Notes
Remote Access Policy
Purpose and Scope:
1. The purpose of this policy is to define requirements for connecting to the
organization’s systems and networks from remote hosts, including personally-owned devices, in order to minimize data loss/exposure.
2. This policy applies to all users of information systems within the organization.
This typically includes employees and contractors, as well as any external parties
that come into contact with systems and information controlled by the
organization (hereinafter referred to as “users”). This policy must be made readily
accessible to all users.
Background
1. The intent of this policy is to minimize the organization’s exposure to damages
which may result from the unauthorized remote use of resources, including but
not limited to: the loss of sensitive, company confidential data and intellectual
property; damage to the organization’s public image; damage to the
organization’s internal systems; and fines and/or other financial liabilities incurred
as a result of such losses.
2. Within this policy, the following definitions apply:
1. Mobile computing equipment: includes portable computers, mobile
phones, smart phones, memory cards and other mobile equipment used
for storage, processing and transfer of data.
2. Remote host: is defined as an information system, node or network that is
not under direct control of the organization.
3. Telework: the act of using mobile computing equipment and remote hosts
to perform work outside the organization’s physical premises.
Teleworking does not include the use of mobile phones.
Policy
1. Security Requirements for Remote Hosts and Mobile Computing Equipment
1. Caution must be exercised when mobile computing equipment is placed
or used in uncontrolled spaces such as vehicles, public spaces, hotel
rooms, meeting places, conference centers, and other unprotected areas
outside the organization’s premises.
2. When using remote hosts and mobile computing equipment, users must
take care that information on the device (e.g. displayed on the screen)
cannot be read by unauthorized persons if the device is being used to
connect to the organization’s systems or work with the organization’s
data.
3. Remote hosts must be updated and patched for the latest security
updates on at least a monthly basis.
4. Remote hosts must have endpoint protection software (e.g. malware
scanner) installed and updated at all times.
5. Persons using mobile computing equipment off-premises are responsible
for regular backups of organizational data that resides on the device.
6. Access to the organization’s systems must be done through an encrypted
and authenticated VPN connection with multi-factor authentication
enabled. All users requiring remote access must be provisioned with VPN
credentials from the organization’s information technology team. VPN
keys must be rotated at least twice per year. Revocation of VPN keys
must be included in the Offboarding Policy.
7. Information stored on mobile computing equipment must be encrypted
using hard drive full disk encryption.
2. Security Requirements for Telework
1. Employees must be specifically authorized for telework in writing by
their hiring manager.
2. Only the device’s assigned owner is permitted to use remote nodes and
mobile computing equipment. Unauthorized users (such as others living
or working at the location where telework is performed) are not permitted
to use such devices.
3. Devices must be authorized using certificates.
4. Users performing telework are responsible for the appropriate
configuration of the local network used for connecting to the Internet at
their telework location.
5. Users performing telework must protect the organization’s intellectual
property rights, either for software or other materials that are present on
remote nodes and mobile computing equipment.
Data Retention Policy
Purpose and Scope:
1. This data retention policy defines the objectives and requirements for data
retention within the organization.
2. This policy covers all data within the organization’s custody or control,
regardless of the medium the data is stored in (electronic form, paper form, etc.).
Within this policy, the medium which holds data is referred to as information, no
matter what form it is in.
3. This policy applies to all users of information systems within the organization.
This typically includes employees and contractors, as well as any external parties
that come into contact with systems and information the organization owns or
controls (hereinafter referred to as “users”). This policy must be made readily
available to all users.
Background
1. The organization is bound by multiple legal, regulatory and contractual
obligations with regard to the data it retains. These obligations stipulate how long
data can be retained, and how data must be destroyed. Examples of legal,
regulatory and contractual obligations include laws and regulations in the local
jurisdiction where the organization conducts business, and contracts made with
employees, customers, service providers, partners and others.
2. The organization may also be involved in events such as litigation or disaster
recovery scenarios that require it to have access to original information in order
to protect the organization’s interests or those of its employees, customers,
service providers, partners and others. As a result, the organization may need to
archive and store information for longer than it is needed for day-to-day
operations.
Policy
1. Information Retention
1. Retention is defined as the maintenance of information in a production or
live environment which can be accessed by an authorized user in the
ordinary course of business.
2. Information used in the development, staging, and testing of systems
shall not be retained beyond its active use period, nor copied into
production or live environments.
3. By default, the retention period of information shall be an active use
period of exactly two years from its creation unless an exception is
obtained permitting a longer or shorter retention period. The business unit
responsible for the information must request the exception.
4. After the active use period of information is over in accordance with this
policy and approved exceptions, information must be archived for a
defined period. Once the defined archive period is over, the information
must be destroyed.
5. Each business unit is responsible for the information it creates, uses,
stores, processes and destroys, according to the requirements of this
policy. The responsible business unit is considered to be the information
owner.
6. The organization’s legal counsel may issue a litigation hold to request that
information relating to potential or actual litigation, arbitration or other
claims, demands, disputes or regulatory action be retained in accordance
with instructions from the legal counsel.
7. Each employee and contractor affiliated with the company must return
information in their possession or control to the organization upon
separation and/or retirement.
8. Information owners must enforce the retention, archiving and destruction
of information, and communicate these periods to relevant parties.
2. Information Archiving
1. Archiving is defined as secured storage of information such that the
information is rendered inaccessible to authorized users in the ordinary
course of business, but can be retrieved by an administrator designated
by company management.
2. The default archiving period of information shall be seven years unless
an approved exception permits a longer or shorter period. Exceptions must
be requested by the information owner.
3. Information must be destroyed (defined below) at the end of the elapsed
archiving period.
3. Information Destruction
1. Destruction is defined as the physical or technical destruction sufficient to
render the information contained in the document irretrievable by ordinary
commercially-available means.
2. The organization must maintain and enforce a detailed list of approved
destruction methods appropriate for each type of information archived,
whether in physical storage media such as CD-ROMs, DVDs, backup
tapes, hard drives, mobile devices, portable drives or in database records
or backup files. Physical information in paper form must be shredded
using an authorized shredding device; waste must be periodically
removed by approved personnel.
4. Retention and archival periods for information that is created, processed, stored
and used by the organization are defined internally.
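The default lifecycle above (an active use period from creation, followed by an archive period, then destruction) can be sketched as a simple date calculation. This is an illustrative sketch only: the constants reflect the policy defaults of two years active use and seven years of archiving, the function names are hypothetical, and approved exceptions would override the defaults.

```python
from datetime import date

# Policy defaults; approved exceptions may shorten or lengthen these.
ACTIVE_USE_YEARS = 2   # retention in the live environment, from creation
ARCHIVE_YEARS = 7      # secured archive period after active use ends

def add_years(d: date, years: int) -> date:
    """Add whole years, clamping Feb 29 to Feb 28 in non-leap years."""
    try:
        return d.replace(year=d.year + years)
    except ValueError:
        return d.replace(year=d.year + years, day=28)

def lifecycle(created: date) -> dict:
    """Return the key dates in a record's default retention lifecycle."""
    archive_on = add_years(created, ACTIVE_USE_YEARS)
    destroy_on = add_years(archive_on, ARCHIVE_YEARS)
    return {"archive_on": archive_on, "destroy_on": destroy_on}
```

For example, information created on 2020-01-15 would, by default, move to the archive on 2022-01-15 and be destroyed on 2029-01-15.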
Risk Assessment Policy
Purpose and Scope:
1. The purpose of this policy is to define the methodology for the assessment and
treatment of information security risks within the organization, and to define the
acceptable level of risk as set by the organization’s leadership.
2. Risk assessment and risk treatment are applied to the entire scope of the
organization’s information security program, and to all assets which are used
within the organization or which could have an impact on information security
within it.
3. This policy applies to all employees of the organization who take part in risk
assessment and risk treatment.
Background
A key element of the organization’s information security program is a holistic and
systematic approach to risk management. This policy defines the requirements and
processes for the organization to identify information security risks. The process consists
of four parts: identification of the organization’s assets, as well as the threats and
vulnerabilities that apply; assessment of the likelihood and consequence (risk) of the
threats and vulnerabilities being realized; identification of treatment for each
unacceptable risk; and evaluation of the residual risk after treatment.
Policy
1. Risk Assessment
1. The risk assessment process includes the identification of threats and
vulnerabilities having to do with company assets.
2. The first step in the risk assessment is to identify all assets within the
scope of the information security program; in other words, all assets
which may affect the confidentiality, integrity, and/or availability of
information in the organization. Assets may include documents in paper
or electronic form, applications, databases, information technology
equipment, infrastructure, and external/outsourced services and
processes. For each asset, an owner must be identified.
3. The next step is to identify all threats and vulnerabilities associated with
each asset. Threats and vulnerabilities must be listed in a risk
assessment table. Each asset may be associated with multiple threats,
and each threat may be associated with multiple vulnerabilities. A sample
risk assessment table is provided as part of the Risk Assessment Report
Template (reference (a)).
4. For each risk, an owner must be identified. The risk owner and the asset
owner may be the same individual.
5. Once risk owners are identified, they must assess the consequence and
likelihood of each threat and vulnerability being realized.
6. The risk level is calculated by adding the consequence score and the
likelihood score.
Description of Consequence Levels and Criteria:
Description of Likelihood Levels and Criteria:
1. Risk Acceptance Criteria
1. Risk values 0 through 2 are considered to be acceptable risks.
2. Risk values 3 and 4 are considered to be unacceptable risks.
Unacceptable risks must be treated.
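The scoring and acceptance criteria above can be expressed as a small helper. A minimal sketch, assuming consequence and likelihood are each scored on a 0–2 scale so the summed risk level spans 0–4; the function names are hypothetical.

```python
ACCEPTABLE_MAX = 2  # risk values 0-2 are acceptable; 3 and 4 must be treated

def risk_level(consequence: int, likelihood: int) -> int:
    """Risk level is the sum of the consequence and likelihood scores."""
    for score in (consequence, likelihood):
        if not 0 <= score <= 2:
            raise ValueError("each score must be between 0 and 2")
    return consequence + likelihood

def must_treat(consequence: int, likelihood: int) -> bool:
    """Unacceptable risks (level 3 or 4) require treatment."""
    return risk_level(consequence, likelihood) > ACCEPTABLE_MAX
```

So a moderate consequence (1) with a moderate likelihood (1) yields an acceptable risk level of 2, while a high consequence (2) with a high likelihood (2) yields 4 and must be treated.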
2. Risk Treatment
1. Risk treatment is implemented through the Risk Treatment Table. All risks
from the Risk Assessment Table must be copied to the Risk Treatment
Table for disposition, along with treatment options and residual risk. A
sample Risk Treatment Table is provided in reference (a).
2. As part of this risk treatment process, the CEO and/or other company
managers shall determine objectives for mitigating or treating risks. All
unacceptable risks must be treated. For continuous improvement
purposes, company managers may also opt to treat other risks for
company assets, even if their risk score is deemed to be acceptable.
3. Treatment options for risks include: mitigating the risk by applying
appropriate security controls; transferring the risk to a third party;
avoiding the risk by discontinuing the activity that gives rise to it; or
accepting the risk.
4. After selecting a treatment option, the risk owner should estimate the new
consequence and likelihood values after the planned controls are
implemented.
3. Regular Reviews of Risk Assessment and Risk Treatment
1. The Risk Assessment Table and Risk Treatment Table must be updated
when new risks are identified. At a minimum, this update and
review shall be conducted once per year. It is highly recommended that
the Risk Assessment and Risk Treatment Table be updated when
significant changes occur to the organization, technology, business
objectives, or business environment.
4. Reporting
1. The results of risk assessment and risk treatment, and all subsequent
reviews, shall be documented in a Risk Assessment Report.
Vendor Management Policy
Purpose and Scope:
1. This policy defines the rules for relationships with the organization’s Information
Technology (IT) vendors and partners.
2. This policy applies to all IT vendors and partners who have the ability to impact
the confidentiality, integrity, and availability of the organization’s technology and
sensitive information, or who are within the scope of the organization’s
information security program.
3. This policy applies to all employees and contractors that are responsible for the
management and oversight of IT vendors and partners of the organization.
Background
The overall security of the organization is highly dependent on the security of its contractual
relationships with its IT suppliers and partners. This policy defines requirements for effective
management and oversight of such suppliers and partners from an information security
perspective. The policy prescribes minimum standards a vendor must meet from an information
security standpoint, including security clauses, risk assessments, service level agreements, and
incident management.
Policy
1. IT vendors are prohibited from accessing the organization’s information security
assets until a contract containing security controls is agreed to and signed by the
appropriate parties.
2. All IT vendors must comply with the security policies defined and derived from
the Information Security Policy (reference (a)).
3. All security incidents involving IT vendors or partners must be documented in
accordance with the organization’s Security Incident Response Policy (reference
(b)) and immediately forwarded to the Information Security Manager (ISM).
4. The organization must adhere to the terms of all Service Level Agreements
(SLAs) entered into with IT vendors. As terms are updated, and as new ones are
entered into, the organization must implement any changes or controls needed to
ensure it remains in compliance.
5. Before entering into a contract and gaining access to the organization’s
information systems, IT vendors must undergo a risk assessment.
1. Security risks related to IT vendors and partners must be identified during
the risk assessment process.
2. The risk assessment must identify risks related to information and
communication technology, as well as risks related to IT vendor supply
chains, to include sub-suppliers.
6. IT vendors and partners must ensure that organizational records are protected,
safeguarded, and disposed of securely. The organization strictly adheres to all
applicable legal, regulatory and contractual requirements regarding the
collection, processing, and transmission of sensitive data such as Personally
Identifiable Information (PII).
7. The organization may choose to audit IT vendors and partners to ensure
compliance with applicable security policies, as well as legal, regulatory and
contractual obligations.
Workstation Policy
Purpose and Scope:
1. This policy defines best practices to reduce the risk of data loss/exposure
through workstations.
2. This policy applies to all employees and contractors. A workstation is
defined as any company-owned or personal device containing company
data.
Policy:
1. Workstation devices must meet the following criteria:
1. Operating system must be no more than one generation older than
current
2. Device must be encrypted at rest
3. Device must be locked when not in use or when employee leaves the
workstation
4. Workstations must be used for authorized business purposes only
5. Loss or destruction of devices should be reported immediately
6. Laptops and desktop devices should run the latest version of antivirus
software that has been approved by IT
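The criteria above lend themselves to an automated check against a device inventory. A minimal sketch, assuming a hypothetical record shape; the field and function names are illustrative, and a real MDM or inventory export would differ.

```python
from dataclasses import dataclass

@dataclass
class Workstation:
    os_generations_behind: int       # 0 = current OS release
    encrypted_at_rest: bool
    antivirus_approved_latest: bool  # latest IT-approved antivirus version

def compliance_issues(w: Workstation) -> list:
    """Return the workstation criteria above that the device fails."""
    issues = []
    if w.os_generations_behind > 1:
        issues.append("OS more than one generation older than current")
    if not w.encrypted_at_rest:
        issues.append("device not encrypted at rest")
    if not w.antivirus_approved_latest:
        issues.append("antivirus not the latest IT-approved version")
    return issues
```

A device returning an empty list meets the automated checks; the remaining criteria (locking, authorized use, loss reporting) are behavioral and must be enforced through training and incident reporting.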
2. Desktop & laptop devices
1. Employees will be issued a desktop, laptop, or both by the company,
based on their job duties. Contractors will provide their own laptops.
2. Desktops and laptops must operate on macOS or Windows.
3. Mobile devices
1. Mobile devices must be operated as defined in the Removable Media,
Cloud Storage, and Bring Your Own Device Policy.
2. Mobile devices must operate on iOS or Android.
3. Company data may only be accessed on mobile devices through Slack
and Gmail.
4. Removable media
1. Removable media must be operated as defined in the Removable Media,
Cloud Storage, and Bring Your Own Device Policy.
2. Removable media is permitted on approved devices as long as it does
not conflict with other policies.