CSC4003: Computer and Information Security

Professor Mark Early, M.B.A., CISSP, CISM, PMP,
ITILFv3, ISO/IEC 27002, CNSS/NSA 4011
Agenda
• Chapter 9: Unix and Linux
Security
• Chapter 10: Eliminating
the Security Weakness of
Linux and Unix Operating
Systems
• Lab
Unix History
• Created at Bell Labs on a DEC PDP-7 computer
• Bell Labs transitioned Unix to standardization
efforts, which The Open Group later took over
– The standard is called “The Single Unix Specification”
– An open standard
• Some security features built-in due to its
multiuser focus
– User isolation
– Separation of kernel and user memory space
– Process security
Figure 10.1
The simplified Unix family tree presents a timeline of some of today’s most
successful Unix variants.
The phrase Historical practice in the description of the Single Unix Specification
refers to the many operating systems historically referring to themselves as Unix.
Linux History
• An operating system built mostly from GNU software plus the Linux kernel
• Linus Torvalds originally developed mainly the kernel portion of the
O/S
• A community of developers maintains Linux today (Eric S. Raymond
calls this distributed development methodology a bazaar)
• GNU/Linux implements same interfaces as most current Unix-style
operating systems
• Not derived from the Bell Labs Unix source code
• Distributed for free, which is a major reason it is so popular
• Offshoots of Linux:
– Debian – developed by Ian Murdock with a goal of using only open and
free software
– Ubuntu – based on the Debian system, with the goal of being easy
to use and giving newbies easy access to a comprehensive Linux
distribution
Figure 10.2
History of Linux distributions
UNIX Architecture
• Kernel
– Manages many of the fundamental details that an O/S needs to deal
with, including memory, disk storage, and low-level networking
– Kernel talks to the hardware thus freeing the O/S from needing to care
about that level of detail (abstraction)
• File System
– Unix was among the first operating systems with a hierarchical model of directories
• Users and Groups
– Unix was designed to be a time-sharing system (multiuser as a result)
– Users are identified by usernames which Unix translates into a UserID
(UID)
– A user can belong to one or more groups
– Group names are translated in Unix into a GroupID (GID)
UNIX Architecture
• Permissions
– Traditionally had a simple permissions architecture
• User/Group -> files in the file system
– Criticism of this approach:
• Simplicity
• Inflexibility
– Later implementations of extended file attributes and
access control lists address these criticisms
• Process
– Program execution = process in Unix
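The traditional owner/group/other permission model above can be sketched with a throwaway file; a minimal illustration assuming GNU coreutils (`stat -c` is GNU-specific):

```shell
# Sketch of the traditional Unix permission model on a scratch file
f=$(mktemp /tmp/perm-demo.XXXXXX)

chmod 640 "$f"           # owner: read+write, group: read, others: none
ls -l "$f"               # mode string shows -rw-r-----

# The same permissions expressed symbolically instead of in octal:
chmod u=rw,g=r,o= "$f"

stat -c '%a' "$f"        # prints 640 (octal mode for verification)
rm -f "$f"
```

The octal digits map directly onto the three user/group/other triads, which is why 640 and `u=rw,g=r,o=` are interchangeable.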
Figure 9.2
Kernel structure of a typical Unix process.
How is access to computing resources such as memory and I/O subsystems
safeguarded by the kernel?
Unix Security
• Remember the CIA triad of Information
Security:
– Confidentiality
– Integrity
– Availability
Unix Security
• Authentication
– Establishing and verifying the identity of the requester
• Authorization
– Determining the eligibility of an authenticated (or
anonymous) user to access or modify a resource
• Availability
– The system is accessible to me when I need it
– A compromised system can be easily made
unavailable
Unix Security
• Integrity
– There is no unauthorized change to data
– A compromise would make unauthorized data changes
possible
• Confidentiality
– Protecting resources from unauthorized access and
safeguarding the content
– Unix does this through
• Access control policies
• Process isolation – keeping processes isolated from one another
• Remember: we discussed Access Control earlier
• Unix: System Owner sets MACs and users set DACs
Unix Security
• Keep in mind…
– Unix is difficult to discuss regarding security due
to so many variants being used
Traditional Unix
• Kernel space versus user land
– Instructions are executed in 1 of 2 different contexts:
• User space (a.k.a. “user mode”) – a less privileged context
that processes execute in
– Process = associates metadata about the user as well as other
environmental factors with the execution thread and its data
– The kernel protects memory and the I/O subsystem by forcing
user-mode code through system calls; the kernel makes the
authorization decision and either grants the request or returns
an error code, which the process is expected to handle properly
» Note that this mechanism is “compute expensive” -> slower
operating system
• The kernel (a.k.a. “kernel mode”) – full access to the entire
hardware and software capabilities on the system
• Mixed mode – hybrid approach.
Mixed Mode
• Most modern Unix operating systems operate
using this methodology
• Usage of user device drivers as well as system
device drivers
User Space Security
• Everything starts with Access Controls
– Every user process structure has 2 important security
fields:
• User Identifier (UID)
• Group Identifier (GID)
– Every process started on behalf of a user inherits the
UID and GID of that user account
– UID = 0 is a superuser which can override many of the
security guards that the kernel establishes
– Default account in Unix with superuser permissions =
root
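The UID/GID fields above are visible with the standard `id` utility; a short check, assuming a conventional system where the root account carries UID 0:

```shell
# The kernel checks numeric IDs, not names; `id` shows the mapping.
id -u          # numeric UID of the current user
id -g          # primary GID of the current user
id -u root     # prints 0 on virtually every Unix system
```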
Unix File/Device Access
• Most Unix file systems use the Identity-Based
Access Control (IBAC) model to ensure
protection of files
– Most common IBAC policy of who can
read/modify files/folders is commonly referred to
as an access control list (ACL)
– Note: there are different types of ACLs depending
on what you are talking about (metadata ACLs
versus centralized storage of ACLs)
ACL Permissions
• Normal files accessed in 3 simple ways:
– Read – read the data and cannot modify or run
– Write – read and modify, but not run
– Execute – only a binary program or script can execute
if the user has been given this right
• Ownership is also important
• Permissions are also set for the group and for all
others
• One file is associated with one group and all
members of this group can access the file with
the permissions established in the ACL
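The read/write/execute triads above map directly onto the mode string that `ls -l` prints; a small decoding sketch using a literal example string (not a real file):

```shell
# Decoding a sample `ls -l` mode string into its three triads
mode='-rwxr-x---'
echo "owner: $(echo "$mode" | cut -c2-4)"    # rwx – read, write, execute
echo "group: $(echo "$mode" | cut -c5-7)"    # r-x – read and execute only
echo "other: $(echo "$mode" | cut -c8-10)"   # --- – no access at all
```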
Special Permissions
• Set-ID Bit
– Only pertains to executables and only set for a UID or GID
– If set, the process for this program is not set to the UID or
GID of the invoking user, but instead to the UID or GID of
the file
• Sticky Bit
– When set on an executable file, its data (text) is kept in
memory, even after the process exits
• Mandatory Locking
– Forcing a file’s reading and writing permissions to be
locked while a program is accessing that file
Special Permissions
• Directory Permissions
– Read – list contents of a directory
– Write – Create files
– Execute – a process can set its working directory
to this directory
– SetID – deals with default ownership of newly
created files
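The set-ID and sticky bits above show up directly in the mode string; a minimal sketch on scratch files, assuming GNU coreutils `stat`:

```shell
# Set-UID bit on a file: 4 prefixed to the octal mode
f=$(mktemp /tmp/setid-demo.XXXXXX)
chmod 4755 "$f"
stat -c '%A' "$f"        # -rwsr-xr-x : the 's' marks set-UID

# Sticky bit on a directory: 1 prefixed, as used on /tmp itself
d=$(mktemp -d /tmp/sticky-demo.XXXXXX)
chmod 1777 "$d"
stat -c '%A' "$d"        # drwxrwxrwt : the trailing 't' marks sticky
rm -rf "$f" "$d"
```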
Unix: DAC vs. MAC
• DAC controls discussed so far
• MAC can be done
– Certain Linux distros have this embedded to one
extent or another: Red Hat Linux and SELinux
based Linux Security Module (LSM)
– Generally needs additional configuration beyond a
base installation as well as additional software
modules
Achieving Unix Security
• Patch, patch, patch!
– Different vendors distribute patches differently
– Patch removal is generally a feature, but ease of removal depends on the Unix variation
• Harden the system!
– Remove or disable services and facilities that are not needed for regular operation
– Remove unused, unnecessary software
– Deny network access from unknown hosts when possible
– Limit the privileges on running services to only what is needed in order to minimize the
damage they might be subverted to cause
• Apache Web Server – most popular web server for Unix O/Ses
• Secure Shell (SSH) – secure remote access to the Unix console
– Disable passwords and require private key authentication
– Limit what accounts can run what commands
• Database software – can be used as a hopping point
– Common practice is to run sensitive or powerful services in a chroot jail and make a copy of only those
file system resources that the service needs in order to operate
– Use ulimit to limit powerful process accounts related to disk space, memory, and CPU utilization to
minimize the Denial of Service (DoS) vulnerability
– Less secure: Use of the tcpwrappers interface to limit the network hosts that are allowed to
access services provided by the server
– More secure: Firewalls would be preferred over the tcpwrappers interface
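The ulimit point above can be sketched in a subshell so the limits stay contained; a minimal illustration (option letters are the common Linux shell ones):

```shell
# ulimit caps resources for the current shell and everything it
# starts; running a service inside the limited subshell (or setting
# the same limits in its init configuration) confines only that service.
(
  ulimit -n 256      # at most 256 open file descriptors
  ulimit -n          # prints 256
  # launch the service here; it inherits the limits
)
```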
Achieving Unix Security
• Minimize user privileges & Identity Management
– Don’t make everyone an “admin”
– Use the principle of least privilege design for user access
– Use ACLs to grant permissions by group (not user)
– Create a separate “admin” account for the administrators to track their activity
– Require all users to prove their identity before making any use of a service
– Strong authentication (two-factor authentication)
• Something I know
• Something I am
• Something I have
– Services should have their own dedicated accounts (i.e. service accounts) that
only have permissions needed for that service (and that’s it!)
• Encryption
– Use SSL/TLS for secure web transmissions
– Use SSH for secure access to the shell – DON’T USE telnet or RSH!
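The SSH hardening advice above (disable passwords, restrict accounts) corresponds to `sshd_config` directives; a hypothetical fragment, written to a temp file here so the sketch is self-contained — merge into the real `/etc/ssh/sshd_config` only after testing on your own system:

```shell
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
# Private-key authentication only
PasswordAuthentication no
# Force administrators through named, audited accounts
PermitRootLogin no
EOF
grep -c 'no$' "$cfg"    # prints 2: both directives present
rm -f "$cfg"
```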
Achieving Unix Security
• Log management
– Configure logging levels to flag security issues that can be reviewed
– Capture key security events for all services: web, database, etc.
– Set up Syslog export to a separate Syslog server to maintain the
integrity of the log files that will be reviewed; using secure Syslog is
even better!
– Add a SIEM on top to correlate the logs across the enterprise (more
about this later!)
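The Syslog-export point above is a one-line forwarding rule in rsyslog syntax; a sketch with a placeholder host name:

```shell
# Hypothetical rsyslog rule forwarding all facilities/severities to a
# central collector; @@ selects TCP, a single @ would mean UDP.
rule='*.* @@loghost.example.com:514'
echo "$rule" | grep -c '@@'    # prints 1: TCP forwarding selected
```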
• Backups – when in doubt, be able to restore the system!
• Other security controls
– IDS/IPS
• Snort, McAfee NIPS and/or HIPS, etc.
– File Integrity Monitoring (FIM) to detect file changes
• Tripwire
– Application Whitelisting
• Bit9, McAfee AppControl
Achieving Unix Security
• Proactive measures
– Vulnerability Assessment (a.k.a. vulnerability scanning
or vulnerability management)
• We’ll talk much more about this later (and work a lab with a
vulnerability scanner too!)
– Perform a Penetration Test by an external third-party
• Sometimes you don’t realize something is an issue until
someone else sees it!
– Have an Incident Response Plan (IRP)
• We’ll talk much more about this later as well!
– Personnel/Organizational Considerations
• Separation of Duties (SoD)
• Forced Vacations (everybody needs one, right?)
Unix User Accounts and Strengthening
Authentication
• Reserve interactive accounts for administrators
• All other accounts should be noninteractive
• Username and (text) password is the most
common way to authenticate, but other
methods exist:
– Kerberos
– SSH
– Certificates
Unix Login Process
• Login process is a system daemon that is responsible
for coordinating the authentication and process setup
for interactive users
• Process does the following:
– Draw/display the login screen
– Collect the credential
– Present the user credential to any of the configured user
databases (files, NIS, Kerberos, LDAP, etc.) for
authentication
– Create a process with the user’s default command-line
shell, with the home directory as the working directory
– Execute system-wide, user, and shell-specific start-up
scripts
Controlling User Account Storage
• Local Files
– UID, GID, shell, home directory and GECOS
information are stored in the /etc/passwd file
– NOT A SECURE APPROACH!
– Password hashes have to be read by several
services in order to authenticate
– Alternative: shadow file that stores the password
hashes under /etc/shadow which is only available
to the system
– STILL – NOT THE GREATEST METHOD!
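The fields above live in colon-separated `/etc/passwd` entries; a decoding sketch using a literal sample entry (not read from disk):

```shell
# A sample /etc/passwd entry; fields are
# name:pw-placeholder:UID:GID:GECOS:home:shell
entry='alice:x:1001:1001:Alice Example:/home/alice:/bin/bash'

echo "$entry" | awk -F: '{print "UID=" $3, "shell=" $7}'   # UID=1001 shell=/bin/bash

# The 'x' in field 2 signals the real hash lives in /etc/shadow:
echo "$entry" | cut -d: -f2                                # prints x
```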
Controlling User Account Storage
• Network Information System (NIS)
– Master/Slave architecture
– Systems using NIS are said to be in an NIS domain
– Master holds all authoritative data and uses an
ONC-RPC based protocol to talk to slaves and
clients
– Client systems are bound to one NIS server
(master or slave) during runtime (plan ahead)
– Addresses for the NIS master and slaves must be
provided when joining a system to the NIS domain
Controlling User Account Storage
• PAM (Pluggable Authentication Modules) to modify AuthN
– Which user databases are consulted is configured
through the /etc/nsswitch.conf file
– In advanced scenarios, Kerberos or multifactor
authentication can be set up
– PAM is configured through /etc/pam.conf, but a directory
structure (/etc/pam.d/) is typically used today
– With the system-auth PAM configuration, enforcement of complex
passwords, password length, and other password policy
parameters can be set up
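The password-policy point above usually comes down to a single PAM line; a hypothetical example using the `pam_pwquality` module (module and option names vary by distribution — check yours first):

```shell
# Hypothetical pam.d system-auth line enforcing a 12-character
# minimum and allowing 3 retries before failing the prompt
line='password requisite pam_pwquality.so minlen=12 retry=3'
echo "$line" | grep -o 'minlen=[0-9]*'    # prints minlen=12
```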
• rlogin – don’t use it, your vulnerability scanner will tell
you this is an issue
• Trusted Hosts and Networks – don’t do it!
Protocol vulnerabilities
• Sniff, sniff – I just saw your credentials
– Don’t use cleartext protocols on Unix
• FTP
• rLogin
• Telnet
• Use SSH to encrypt in transit!
– Authenticates the connection
– Privacy protection of the connection
– Integrity guarantee
– Ships with UNIX or via open source (OpenSSH)
Limit Superuser Privileges
• Why is the superuser not a problem under mandatory access control?
• In MAC scenarios, configure the superuser account to not
be able to affect the data
• In DAC scenarios:
– Restrict access to privileged functions and tightly control
– Personal background checks for admins
– Have a policy where other admins must be present before
privileged functions are performed; better yet - use multifactor
authentication for admin functions
– Restrict source IP of where admin functions can be carried out
– Log and restrict use of the su utility
– Use groups and not the root account
– Use the sudo mechanism
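The sudo mechanism above is driven by rules in the sudoers file; a hypothetical fragment (always edit the real file via `visudo` so a syntax error cannot lock you out):

```shell
# Hypothetical sudoers rule: members of an "admins" group may restart
# services as root and nothing else; sudo logs every invocation.
rule='%admins ALL=(root) /usr/bin/systemctl restart *'
echo "$rule" | grep -c '^%admins'    # prints 1: group-based rule present
```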
Local/Network File System Security
• System wide binary code should only be modified
by Systems Admins
• Use read-only partitions: host frequently changed files
(user data, log files, etc.) on writable file systems and
everything else on read-only partitions
• Verify ownership/permissions on critical files:
– Use find to locate files with the SetID bit set
– Identify any suspicious files/folders
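The `find`-for-SetID check can be run against a scratch directory so the sketch is self-contained (in practice you would scan `/` as root):

```shell
# Plant a set-UID file in a scratch directory, then locate it the way
# an auditor would: -perm -4000 matches any file with set-UID set.
d=$(mktemp -d)
touch "$d/suspect"
chmod 4755 "$d/suspect"
find "$d" -type f -perm -4000    # lists only the set-UID file
rm -rf "$d"
```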
Network Configuration Security
• Files to configure the network settings that need to be
reviewed:
– /etc/hostname – set the system name
– /etc/protocols – list the protocols that need to be turned on (IP,
TCP, etc.)
– /etc/hosts and /etc/networks – define what IP hosts and
networks are locally known to the system
– /etc/nsswitch.conf - allows fine-grained setting of name
resolution
– /etc/resolv.conf – main config file for the DNS resolver libraries
and identifies the default DNS server for resolution
– /etc/services (and/or /etc/protocols) – contains a list of well-known
services and the port numbers/protocol types they are bound to –
disable/remove unneeded ports/protocols!
Other Unix Security Considerations
• Disable any standard Unix service that is not needed for business
purposes/required functionality
• Use a host-based firewall
• Restrict remote administrative access to the Unix system
• Antivirus can be run on Linux – use it!
• Application Whitelisting can be run on Linux – use it!
• Run vulnerability scans on Unix systems to identify vulnerabilities
• Consider host-based IPS
• Close down inactive network sockets
• For interactive access
– Limit use to dedicated administrative terminals
– Use restricted networks
• Provide dedicated administrative networks
Linux/Unix Security Activities Checklist
• On page 179, there is a good listing of what to
work through to secure a Unix/Linux system
Additional Resources
• NIST SP 800-53 Revision 4:
http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf
• SANS Linux Security Checklist:
https://www.sans.org/media/score/checklists/linuxchecklist.pdf
• Center for Internet Security (CIS) Linux Checklists:
– Red Hat:
https://benchmarks.cisecurity.org/tools2/linux/CIS_Red_Hat_Enterprise_Linux_7_
Benchmark_v1.1.0.pdf
– Ubuntu 12.04:
https://benchmarks.cisecurity.org/tools2/linux/CIS_Ubuntu_12.04_LTS_Se
rver_Benchmark_v1.1.0.pdf
– CentOS 7:
https://benchmarks.cisecurity.org/tools2/linux/CIS_CentOS_Linux_7_
Benchmark_v1.1.0.pdf
OWASP TOP 10 & SECURE CODING
Secure Coding Overview
• Code is made by humans, humans are flawed,
therefore your code is not perfect (and is
vulnerable!)
• Discussion: why do we not care about secure
coding?
Why don’t we care about secure code?
• The Software Development Lifecycle is already
too long – we don’t need another layer
• It slows down productivity (instead of looking
at securing our code, we need to get on to the
next project to make our boss look good)
• I never learned how to code securely in school
• That’s what the testing/QA team is here for
How can this problem be solved?
• Premise: It’s easiest to bake security in at the
beginning of any project (including code
development)
– Requirements can capture security concerns up
front
– New coding methods may require additional
training/education regarding the most secure
method
– Awareness of the top coding issues (like the
OWASP Top 10) helps build better code in general
OWASP Top 10
• A1: Injection
• A2: Broken Authentication and Session Management
• A3: Cross-Site Scripting (XSS)
• A4: Insecure Direct Object References
• A5: Security Misconfiguration
• A6: Sensitive Data Exposure
• A7: Missing Function Level Access Control
• A8: Cross-Site Request Forgery (CSRF)
• A9: Using Components with Known Vulnerabilities
• A10: Unvalidated Redirects and Forwards
Reference: https://www.owasp.org/index.php/OWASP_Top_Ten_Cheat_Sheet
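A1 (Injection) covers SQL, OS-command, and similar flaws; a shell-flavored sketch of why unvalidated input is dangerous (the path is a placeholder that deliberately does not exist):

```shell
# The payload is attacker-controlled data that smuggles in a command.
payload='/no/such/file; echo INJECTED'

# Unsafe: interpolating the data into a command string lets it run.
eval "ls $payload" 2>/dev/null    # the payload executes: prints INJECTED

# Safe: quoted expansion treats the payload purely as data.
printf '%s\n' "$payload"          # prints the literal string, runs nothing
```

The same shape — data concatenated into an interpreted string — is exactly what parameterized queries prevent in SQL.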
How do we tell if our code is secure?
• Peer Review
– This is where you have your code checked by a
peer to validate security (and code efficiency)
– Varies in value due to who might be reviewing
your code (how much do they know about secure
coding practices?)
– Most time-consuming method (generally a line-by-line approach)
How do we tell if our code is secure?
• Static Code Scanning (best approach)
– Using a tool to scan compiled or source code (automated
method)
– Quicker method than peer review due to not involving
another person
– Proactive approach that can be baked into the SDLC model
– Slows the SDLC process down because code often changes
during the project (requirements change)
– May not catch all issues once served on a web server (this
method does not test the back-end infrastructure)
– Can map best-practice (e.g. PCI-DSS, OWASP) compliance
requirements to the code itself
How do we tell if our code is secure?
• Dynamic Code Scanning
– Performed on deployed code (post-implementation
approach)
– Automated approach
• Although I feel it is reactive
– Would not slow down the SDLC process
– Would require rework after the project has
completed, but the rework may require more time
than if static code scanning were implemented
– Minimal labor compared to peer review
– Tests code on the deployed infrastructure