Cal State Fullerton
Guidelines for Developing Secure Applications for Deployment
Introduction
Applications often serve as the delivery mechanism through which personal data and
other sensitive information is transferred online. Unsecured or poorly written applications
can be exploited to bypass security measures or used to transfer information that is easily
intercepted. The following guidelines outline several steps necessary for application
developers to prevent such abuses.
Federal law, state law, CSU and campus policies require protection of personal,
confidential and sensitive data.

CSU is required by the Buckley Amendment, Family Educational Rights and
Privacy Act (FERPA) and state statute to maintain the confidentiality of student
records. For more information, refer to Student Records.

Units that deal with patient information will need to be aware of their
responsibilities under the Health Insurance Portability and Accountability Act of
1996 (HIPAA).

Financial data is protected by the Gramm-Leach-Bliley Act (GLBA).
Defense in Depth
Security should be implemented at multiple levels to prevent a breach in one level from
compromising the entire application. Developers should be aware of all the techniques
discussed in this document and use multiple techniques where possible and appropriate.
Consider access control lists and firewalls for added protection.
Methodology, Review and Testing
It's better to avoid security vulnerabilities than to fix them. Conduct internal peer reviews
or external, third-party assessments. Choosing a formal development methodology will
impart structure, reduce errors, and encourage review at each stage of development.
Developers must demonstrate compliance with OWASP coding principles at
http://www.owasp.org/index.php/Main_Page.
Automated tools can be used for review and testing, but should not replace manual
methods. Testing should include the following methods:
- Attempt to impersonate users or servers
- Attempt to perform fraudulent transactions
- Attempt to compromise data
- Attempt to send junk data
- Attempt to compromise the server
- Attempt a denial of service

General Application Security
Data Protection
- Authentication, authorization and data permissions. User and application data should be viewable and/or modifiable only by authorized applications and users. Ensure proper protections (file permissions, DB permissions, etc.) are used so that unauthorized users or applications are unable to make use of the data in question.
- Encryption. Encryption at the communications layer prevents eavesdroppers on the network from passively watching application data as it passes from client to server. Very sensitive data can be stored in encrypted form as well, further protecting against other breaches in the application.
  - SSL
  - VPN - While VPNs are becoming more popular, they are not as widespread as SSL, and in general SSL should be the user's first choice for network encryption.
  - SSH - SSH is a secure replacement for telnet, rlogin, rcp and rsh that uses strong encryption to prevent login passwords and session data from being compromised. Any application requiring remote terminal logins should use ssh instead of the older, insecure protocols. Client and server versions are available for most operating systems, including Windows.
  - Public Key (GPG/PGP/X.509) - Public key encryption allows user data to be gathered and encrypted by the collection application without the need for the private (decrypting) key. Another application can then process the data using the private key. This prevents a compromise of the collection application from allowing the decryption of collected data.
  - Symmetric Key - Symmetric key encryption can also provide added protection, but the key must be present in all applications requiring encryption or decryption, so protection of the key is paramount.
Temporary files. Temporary files are subject to attack if not handled properly. In general, an attacker compromises temp files by creating a world-writable file or symbolic link (UNIX) with the name of a file the application will open, then either capturing data written there or using operations performed on the symlink to compromise other files on the system. To ensure the application uses temp files securely, use the following guidelines:
- Use temporary files sparingly, if at all.
- Avoid publicly writable temporary directories if possible. If you must use one, create a subdirectory within it for temporary files, with read and write permissions for the application only.
- When generating temporary file names, make the names as hard to guess as possible, e.g., use a filename generated by running random data through a digest operation (MD5/SHA-1).
- Before creating the file, attempt to remove it and check the return value of the remove operation. If the return value indicates invalid permissions on an existing file, choose another filename. On UNIX-based systems, use the flags O_CREAT|O_EXCL to create the file; the open operation will fail if the file already exists.
- On systems that support it, use the mkstemp() call, which returns a handle to an already-open, securely created file.
- Ensure temp files are created with permissions that allow only the application to use or delete them.
- Always delete temporary files when finished with them.
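The mkstemp() approach above can be sketched with Python's standard library (Python is used here for brevity rather than C; the directory layout is illustrative):

```python
import os
import tempfile

# tempfile.mkstemp() creates the file atomically (O_CREAT|O_EXCL) with an
# unpredictable name and mode 0600, returning an already-open handle.
private_dir = tempfile.mkdtemp()            # private subdirectory, mode 0700
fd, path = tempfile.mkstemp(dir=private_dir)
try:
    with os.fdopen(fd, "w") as f:           # use the already-open descriptor
        f.write("scratch data")
finally:
    os.remove(path)                         # always delete when finished
    os.rmdir(private_dir)
```

Because the handle is returned already open, there is no window between name generation and file creation for an attacker to exploit.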
Auditability and Logs
The code itself should be easily auditable. This includes following proper programming
guidelines and documenting code where the intent is not immediately obvious. Most
importantly, however, usage, errors, and abnormal conditions should be tracked with logs
that are monitored in some manner. Watching application logs is one of the best ways to
detect a number of different cracking attempts, such as password brute forcing, data
injection and other forms of data input validation abuse. Proper logging will record
failures and the error conditions.
Consider archival of the logs for a reasonable length of time to allow comparison of
current logs with previous logs.
If the volume of logs generated by an application is prohibitively large, try automating
parsing of the logs so that the application developer or system administrator does not
have to review the entire log by hand. Additionally, coding different verbosity levels
within the application for different levels of log monitoring is extremely useful.
Data Input and Validation
Before working with user input, ensure that it is safe by limiting the allowed characters.
If direct input were trusted, users could abuse the application for many malicious
purposes such as compromising the host or retrieving protected information. For more
information, see the following CERT advisories and referenced articles:
1. Advisory CA-1997-25: Sanitizing User-Supplied Data in CGI Scripts
2. How To Remove Meta-characters From User-Supplied Data In CGI Scripts
(CERT Coordination Center article)
3. Advisory CA-2000-02: Malicious HTML Tags Embedded in Client Web
Requests
4. Scriptlet Security (Microsoft Developers Network article)
Never Trust Client Data
- Validate client IP addresses. Never trust the client machine to tell you its IP address or DNS name. If you are restricting a function to certain IP addresses, check the actual remote IP address of the connection. If you are restricting by DNS name, translate the numeric IP address to a name, then translate that name back to a number. Only accept the name if the number you get back matches the number you started with.
- Use server-side validation of all commands.
- Environment variables such as HTTP_REFERER and REMOTE_USER should never be used to implement restrictions, as they are easily spoofed.
- Consider replay attacks as well as spoofing. An attacker does not have to guess what a valid session ID looks like if he or she can simply reuse somebody else's.
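The forward-and-reverse lookup described above can be sketched as follows (Python is used for illustration, and the helper name is hypothetical):

```python
import socket

def verify_client_hostname(remote_ip):
    """Resolve the connection's IP to a name, then resolve that name
    back, and accept the name only if the original IP appears among
    the results. Returns the name on success, None otherwise."""
    try:
        name, _aliases, _addrs = socket.gethostbyaddr(remote_ip)
        _host, _alias2, addrs = socket.gethostbyname_ex(name)
    except OSError:
        return None          # no reverse or forward record: reject
    return name if remote_ip in addrs else None

result = verify_client_hostname("127.0.0.1")
```

Any restriction decision is then based on `result`, never on a hostname the client supplied itself.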
Special Characters
Certain characters are interpreted by the operating system (both Unix and Windows) to
perform functions, such as sending output of one program to another. The preferred
approach is to specify exactly what data is allowed. Also, do not rely on client-side
validation such as JavaScript to validate user input. Always validate on the server.
Finally, if your application development environment supports it, do taint checks, which
cause an application to fail if user input is used when executing system commands. In
Perl, for example, taint checks are enabled with the -T switch.
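The "specify exactly what data is allowed" approach can be sketched server-side like this (Python for illustration; the permitted character set is an assumption to be adapted per field):

```python
import re

# Accept only explicitly permitted characters; reject everything else,
# including shell metacharacters such as ';', '|', '&' and '`'.
USERNAME_RE = re.compile(r"[A-Za-z0-9_.-]{1,32}")

def is_valid_username(value):
    return USERNAME_RE.fullmatch(value) is not None

assert is_valid_username("johndoe")              # allowed characters pass
assert not is_valid_username("johndoe; rm -rf /")  # metacharacters rejected
assert not is_valid_username("")                 # empty input rejected
```

An allow-list like this stays safe even when attackers invent new metacharacter tricks, because anything not explicitly permitted is refused.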
Malicious HTML
HTML tags can be abused to change the display of an application or even execute code
on the client machine. If HTML tags are used by users to format text input, always be
sure to explicitly define which tags are allowed. For example, tags other than <P>, <B>,
<I>, and <FONT> could be stripped.
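A minimal sketch of that stripping rule follows (Python for illustration; a production filter should use a real HTML parser rather than a regular expression, and note that text inside a stripped tag remains behind as inert plain text):

```python
import re

ALLOWED_TAGS = {"p", "b", "i", "font"}          # tags the text above permits
TAG_RE = re.compile(r"</?([A-Za-z][A-Za-z0-9]*)[^>]*>")

def strip_disallowed_tags(html_text):
    """Keep allowed tags as-is; delete any other tag entirely."""
    def repl(match):
        tag = match.group(1).lower()
        return match.group(0) if tag in ALLOWED_TAGS else ""
    return TAG_RE.sub(repl, html_text)

print(strip_disallowed_tags("<B>hi</B><script>evil()</script>"))
# The <script> and </script> tags are removed, so nothing executes.
```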
Buffer Overflows and Memory Management
Any application written in a programming language that requires the developer to
allocate and deallocate memory (C, C++) must be written with proper memory
management in mind.
- All memory allocated with malloc(), et al., should be freed when no longer needed. Take care not to free an area of memory more than once.
- Integer variables that index memory should stay within appropriate bounds and be checked for overflow and underflow. See the Microsoft document, Reviewing Code for Integer Manipulation Vulnerabilities, for more information.
- Buffer sizes should be checked so that no attempt is made to copy data larger than the buffer itself. For string management, the strl* routines are generally preferred over the strn* routines. strcpy(), strcat(), et al. should be avoided completely, and developers should use snprintf() over sprintf(). See the paper on strlcpy and strlcat by Todd Miller and Theo de Raadt.
- Languages such as Perl and Java use garbage collection and are better at string handling than C/C++, where correct memory management is difficult to achieve. Care must still be taken when passing data to operating system calls, as most operating systems are written in C/C++.
Format bugs
Programmers should never allow the user to specify their own format string:

read(sockfd, buf, sizeof(buf));
syslog(LOG_ERR, buf); /* BAD: user-supplied data used as the format string */

This allows the user to send in data like the following:

char *garbage = "%s%s%s%s%s%s%s%s <a lot of garbage>";
write(sockfd, garbage, strlen(garbage));

which can cause the reading application to walk up and down the stack and possibly
overwrite areas of memory, similar to the way buffer overflows work.
Be sure all such routines that take a format string are given a developer-supplied
format string:

read(sockfd, buf, sizeof(buf));
syslog(LOG_ERR, "%s", buf); /* GOOD: developer-supplied format string */
Form implementation: POST vs. GET
When using forms, prefer POST to GET. The GET method transfers all information to
the application in the URL.
Example:
http://www.somedomain.fullerton.edu/cgi-bin/application.cgi?username=johndoe&password=noel
This information is much more visible to casual observers and is logged in many places.
POST, on the other hand, is hidden from the browser screen, is not logged and does not
have some of the size and content restrictions of GET.
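The exposure difference can be seen with a short sketch (Python; the URL and field values are the document's own example):

```python
from urllib.parse import urlencode

fields = {"username": "johndoe", "password": "noel"}

# GET: the encoded fields become part of the URL, which browsers,
# proxies and web servers all record.
url = ("http://www.somedomain.fullerton.edu/cgi-bin/application.cgi?"
       + urlencode(fields))
print(url)

# POST: the same urlencode(fields) string travels in the request body
# instead, so it stays out of URLs, server logs and browser history.
```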
Data Injection and Cross-site Scripting
Two examples of application abuse when proper data validation does not occur are data
injection and cross-site scripting. Data injection allows the user to specify extra
information along with legitimate input. Cross-site scripting is a relatively new form
of attack in which data embedded by one user is presented to other users and executed
in some manner, causing other clients to behave in an undesirable way. For example,
a user may post an image in a comment to a forum, which other users' browsers will
automatically fetch, potentially giving away cookie or other session information. For
more information on cross-site scripting, see the CERT documents Malicious HTML
Tags Embedded in Client Web Requests and Understanding Malicious Content
Mitigation for Web Developers.
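When user-supplied text must be redisplayed to other users, escaping it before output neutralizes embedded markup; a sketch with Python's standard html module (the sample comment is invented):

```python
import html

# A malicious forum comment trying to smuggle in active content:
comment = '<img src="http://evil.example/steal" onerror="alert(1)">'

# Escaped, the markup is displayed as text rather than interpreted,
# so the browser never executes it:
safe = html.escape(comment, quote=True)
print(safe)
```

Escaping on output complements (but does not replace) validating on input.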
Web Security
Important to note: The sample CGI applications that come installed on many Web
servers often represent a serious security risk.
Securing Web Services With Secure Socket Layer (SSL)
SSL is a commonly used Web protocol that uses strong encryption to communicate
securely; it also allows the browser to authenticate the server and to detect whether the
data has been altered or tampered with in transit. Fully compatible with the Web's most
popular browsers, SSL is used by many major e-commerce Web sites. When users
connect to a secure Web server using SSL, such as https://www.somewhere.fullerton.edu/
(note that the "s" indicates a secure server), that server authenticates itself to the Web
browser by presenting a digital certificate. Digital certificates are issued to Web sites by a
Certifying Authority (CA) once the CA is satisfied 1) as to the identity of the requester
and 2) that the requester actually owns, or has the right to use, the domain name for
which the certificate will be issued. A valid certificate gives customers confidence that
they are sending personal information securely and that it is going to the right place.
Session management
To build secure applications, particularly web applications, it is imperative that a series of
requests from a particular client be associated with each other and that the server be
aware when the client is no longer active. This set of precautions is known as session
management. Without session management, an unauthorized user could gain access to
personal data from CSUF systems by assuming an idle session.
Session management is especially important for web applications, since they do not
establish a persistent connection between client and server, but send each page as a
separate network connection. However, even applications which use persistent network
connections can benefit from the guidelines below.
Many different techniques may be employed. Your choices will be determined by your
platform, server software, application software, programming language, and your desired
level of security. Listed below are some ways that campus application developers can
use session management techniques to protect data served via campus applications.
1. Authorization - While not strictly part of session management, authorization
is typically done at the beginning of a session. Authorization is the process of
determining who may do what.
2. Authentication - You must uniquely identify the user at the creation of a
session. This means you must log them onto your system or have some way of
determining they are already logged onto another trusted application. The
recommended practice is to use existing campus authentication methods to
identify the user.
3. Preventing Session Hijacking - If an intruder can eavesdrop on the
beginning of a session, they may be able to "hijack" that session by using the
session's keys or sequence numbers to impersonate the rightful client. To
prevent this, sessions involving sensitive data should always be encrypted;
and session keys, shared secrets, and TCP sequence numbers must never be
easily guessable.
4. Tracking the user - Since these techniques are subject to spoofing and replay
attacks, use two or more of the following.
5. IP Address - The IP address technique associates a session with a client IP
address. This technique works well when all clients have fixed IP addresses. It
does not work reliably if the client does not have a fixed IP address, for
example, if the client uses Network Address Translation (NAT). This technique
also does not work for clients that use a proxy server, because all users of the
proxy report the same IP address. IP addresses can also be spoofed. For these
reasons, this technique is probably useful only for intranet applications.
6. Cookies - The web service sends an HTTP cookie to the client, and the client
returns the cookie on each subsequent request to the server. The server should
confirm the cookie is valid, since it may not have originated from the current
server. To be secure, the cookie exchanges must be protected by an SSL-encrypted
session, and cookies must be expired promptly by the server after logout. Even
then, session cookies can be compromised by cross-site scripting attacks or
spyware on the client workstation. Since cookies are easily forged, the server
should be able to distinguish the cookies it issued from bogus ones. One way to
do this is to place a timestamp, IP address and username in the cookie, make an
MD5 or SHA-1 hash of that information, and sign it with a private key; any
presented cookie can then be validated by verifying the signature. Another way
is to keep a record of all recently issued cookies, along with the time issued, the
IP address issued to, the browser version and the user, and accept a cookie only
if it matches one you know you sent to that location. In general, there are limits
on the size of a cookie and the number of cookies an application can use.
Cookies also raise privacy concerns, which cause many users to disable them in
their browser preferences, so your site should detect whether cookie support is
absent or turned off. You can do this by sending a test cookie and checking for
it as early as possible in the session. If your application detects that cookies are
not being returned, an alternate means can be employed to maintain session
data, or the user can be told how to enable cookies.
7. URL encoding - URL encoding involves rewriting URLs on the fly to include
session information. This is very useful when a client refuses to accept
cookies. The unique session key is appended to each URL on the page as a
name/value pair. When the user clicks a link or submits a form via GET, the
key is sent along with the HTTP GET request. URL encoding is easily
spoofed, so take appropriate steps to encrypt or validate URL content. As
with cookies, you must protect the session key with SSL and expire it
promptly. Unlike cookies, the URLs will also show up in web server logs and
browser histories, making them even easier to compromise.
8. Hidden HTML form fields - This involves placing the unique session key in
a hidden HTML form field, so that it is included each time a form is submitted
via POST. As before, you must protect the session key with SSL and expire it
promptly. Hidden fields are slightly more secure than URL encoding and
about as secure as cookies. They will work when a client refuses cookies, but
they are less flexible; form submission always has to be used instead of link
clicking to move to a new page. For example, you can store the IP address
and browser version of the requester on the server side.
9. Maintaining session state.
10. Maintaining all state in the browser - It is possible to save the entire state of a
web session in hidden form variables, URLs, or cookies, so that the server does
not have to remember anything between requests (Google searches work this
way). This is not recommended unless you encrypt and sign the data stored on
the client. Use strong encryption, preferably existing encryption libraries
available for most major programming languages. The best way to implement
this is with public key cryptography, which allows you not only to decrypt the
data but also to verify that your key was used to sign the data in the first place.
Symmetric cryptography (the same key encrypts and decrypts) may be used;
however, be aware that once that secret is known, the security of the application
may be compromised. Additionally, encode the data in such a way that a stolen
encrypted token is not usable by another user, whether with IP restrictions,
timeouts, or other methods described in this document. If you absolutely must
use this technique because of server limitations, you should re-validate all the
session information at each request.
11. Maintaining state in a server-side database - With this technique the state of a
web session is stored at the server; the browser only keeps enough information
to uniquely identify the session. This is the recommended method. The
server could store the information using a database, disk files, persistent
objects, or some other method.
12. Ending the session
13. Logout - To protect access to campus data records, the best practice is to ask
users to log out when they are finished.
14. Timeouts - Idle sessions must be invalidated by enforcing timeout periods.
The user could be logged out completely, or could be asked to re-authenticate
in order to reactivate the session. Keep the timeout to a minimum: 5 minutes or
even less for very sensitive data services.
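The signed-cookie idea in item 6 can be sketched with a keyed hash (Python; an HMAC stands in for the public-key signature the text mentions, and the secret, usernames and addresses are all illustrative):

```python
import hashlib
import hmac
import time

SECRET = b"server-side secret, never sent to clients"   # illustrative key

def issue_token(username, ip):
    """Bind timestamp, IP and username to a keyed hash so the server
    can later recognize tokens it issued itself."""
    payload = "%s|%s|%d" % (username, ip, int(time.time()))
    mac = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "|" + mac

def verify_token(token):
    payload, _, mac = token.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)    # constant-time comparison

token = issue_token("johndoe", "137.151.0.1")
assert verify_token(token)                                    # genuine token accepted
assert not verify_token(token.replace("johndoe", "janedoe"))  # tampering detected
```

The embedded timestamp also supports the timeout rule above: the server can reject any token whose issue time is older than the allowed idle period.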
How to Set Up a Secure Web Site at Cal State Fullerton
Deploying a secure Web service involves a few administrative steps.
First, identify which entity is hosting your Web site. If your site is hosted by
Information Technology Web Design and Internet Technologies, then they can
handle the whole process for you. Contact the Director of Web Design and
Internet Technologies to request secure Web services on the campus.
If you are hosting your Web site locally on your own machine, then you will have
to follow these steps:
1. Have your system administrator generate a public/private key pair for the
Web server you would like to secure.
2. Submit a certificate signing request (CSR) to be signed by a well-recognized
certifying authority (CA), e.g., Thawte, Verisign, etc.
3. Install the digital certificate on your Web server and create a domain name
entry for the secure (https) service.
4. Make sure the secured resources can't be accessed by the insecure (http)
methods.
5. Keep your digital certificate up to date; certificates are time-limited.
Questions about setting up secure Web services should be directed to the Director
of Web Design and Internet Technologies.
Databases and Other Data Storage
If external files are used in your application, further checks must be done to ensure that
malicious users are not exploiting the application to retrieve sensitive information. Use
the following guidelines when reading from or writing to external files:
1. Authenticate connections from the web server.
2. Consider encrypted database connections if the database server is not on
the same machine as the web server or on the same local network segment.
3. Do not put passwords in scripts. If the script needs a password, for
instance, to connect to a database, it should read that password from a
protected configuration file.
4. Allow backend database access only from the web server if possible.
5. Files that are properly protected from unauthorized web access are not
necessarily properly protected from other methods of access.
6. Use a data directory. Data files should not be placed in the cgi-bin
directory or in any directory that can be accessed from a Web browser.
7. Do not use raw path and filenames. Never accept user input as the path
information for a file. Instead, use some type of identifier for the form
which points to an actual file.
8. Use absolute paths. Do not assume that an application is being called from
a certain directory. Use a fully qualified path to the files your application
accesses.
9. Specify the mode when opening a file. Configuration files should be
specified as read-only, data files should be specified as write-only, and log
files should be specified as append-only.
10. Avoid temp files
11. Maintain current patch levels
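Guideline 3 above (no passwords in scripts) can be sketched like this (Python; the file layout, hostname and option names are invented for the example):

```python
import configparser
import os
import stat
import tempfile

# Demo setup only: in practice the protected file already exists.
conf_path = os.path.join(tempfile.mkdtemp(), "db.conf")
with open(conf_path, "w") as f:
    f.write("[database]\nhost = db.example.edu\npassword = s3cret\n")
os.chmod(conf_path, stat.S_IRUSR | stat.S_IWUSR)   # owner-only access

# The script reads the secret at run time instead of embedding it,
# so the source code can be shared or versioned without leaking it:
cfg = configparser.ConfigParser()
cfg.read(conf_path)
password = cfg["database"]["password"]
```

The configuration file lives outside any web-accessible directory and is readable only by the account the application runs as.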
Using Campus Data Resources
User supplied data is frequently misspelled, incomplete or missing. Applications
that rely on personal data supplied by the user are vulnerable to data corruption.
Applications should therefore limit the amount of personal data input from users.
Instead, it is highly recommended that campus developers require user
authentication from official CSUF systems. Personal data about authenticated
users can then be acquired from official university directory resources as
described below.
Questions about setting up secure directory service methods should be directed to
the Director of Web Design and Internet Technologies.
Appendix A
Procedures for the Deployment of Web Development Projects
Management approval of web development projects
The following questions all need to be answered before a project
is accepted and approved.
PROJECT BASICS
Global Scope and Givens
1. What is the project and what special agreements have been made?
2. What special methods must be used in the release?
3. What budget do we have for this release?
4. What is the priority of this release? And what is the priority in relation to existing
projects?
Release Goals
1. What is the goal of this release?
Release Timeline
1. What is the timeline for the pilot release?
2. What is the date for the full campus release?
3. What contingencies exist before we can hit these timelines?
RELEASE PREPARATIONS
Technical Requirements and Review – Owner TBD
1. What is required technically?
2. What hardware is needed for this release?
Functional Requirements and Review – Owner TBD
1. What is required functionally?
2. Who are the impacted units?
3. Who is the local owner/point of contact?
Pilot Group Identified – Owner TBD
1. How big is the initial pilot group?
2. How will the pilot deliverables differ from the full release?
RELEASE MANAGEMENT CHECKOFFS
Training Requirements – Owner TBD
1. Has the audience for the application been identified?
2. What are the projected release dates?
3. Have the core, secondary and tertiary features of the application and the needs
they fill been identified and documented?
4. Has a pilot group been used and their experience assessed to identify training
issues/needs or unforeseen problems?
5. Has training documentation been finalized (if needed) and tested/revised with a
pilot group?
6. Has live training (in person or online) been finalized (if needed) and
tested/revised with a pilot group?
Support Requirements – Owner TBD
1. What change in staffing may be required?
2. What admin or other tools will need to be developed to support the release?
3. A "descriptor" of the service that users will call/ask about.
 What is the process a person goes through to "sign up"?
4. Expectations of the user: what is needed to satisfy the caller?
 What are the students and faculty told about this service?
 What instructions are provided to the users for setup?
5. A list of potential tests to do/questions to ask about the service provided
 What types of questions are presently encountered?
 What are the questions and items we want to check before escalating?
6. A method of escalation in “emergencies.”
 Who is the escalation contact for issues?
 What is the turnaround time for response to escalation?
 When would an escalation be needed and where would it go?
7. A “feedback loop” for status of calls generated through Help Desk.
 Who will be calling us in return to the escalation?
 How long before we will be notified?
 What if this person does not respond?
8. Notification and training for the Help Desk on service updates
Communications Requirements – Owner TBD
1. What messages are needed, and what is the timing for them?
2. To whom are questions directed?
3. What are the key dates for communication releases?
Transition from Project to Operations – Owner TBD
1. Who is developing this release?
2. Who will operationally support this release?
3. Who has production access?
Procedures for management approval of web development projects

[Flowchart: Web development projects are submitted for management approval. If management approves, the project is assigned to a programmer, who checks the project in and out of SourceSafe and performs code changes and a project walk-through with the user. If the project does not meet user requirements, it returns to the programmer for further changes; once it does, it is deployed to the staging server and then to the production server.]
Procedures for restricting access to program source
code
1. Accessing source code on SourceSafe, test servers, and production servers is
allowed only when the individual is a member of the approved access list.
2. The lead or the director must first approve an account before it is added to the
access list.
3. Everyone and Anonymous accounts are removed from the access list.
4. Once an individual separates from the university or from the department, the lead
and/or the director will immediately remove the account from the access list.
5. Source code is not to be copied or taken away without permission from the
director.
Procedures for the handling, control and protection of
system test data
1. Only programmers and database and web server administrators can have access to
test data.
2. Protecting test data is just as important as protecting production data, since it is
an extracted portion of production data. Therefore, precautions such as encryption,
user access control, and limiting IP addresses must still be implemented.
3. Extracting test data from the production server must be approved by the project
lead and/or the director.
Procedures for the testing and acceptance of web
projects (validation checks, vulnerabilities, etc)
1. After a web project is created and run through 508 compliance checks, the
programmer is required to schedule a show-and-tell on his/her development
machine with the user.
2. Programmers are allowed to make minor and cosmetic changes, but major
changes are to be approved by the director and/or the CITO.
3. The director and/or the CITO can limit the scheduling of the show-and-tell with
the user.
4. Once the user accepts the project and acknowledges it via email or phone call,
the programmer must ask the project lead or the director to deploy the project on
the test server for final testing before deploying it to the production server.
Documentation of the formal procedures of system
acceptance criteria and approval required before going
live with changes or upgrades
Procedures for the protection of source/production
program code (and tracking of changes made)
Programmers or anyone needing to access SourceSafe must get approval from the
project lead or from the director.
Only users who have an account in the SourceSafe User Access List can check code in
and out.
Programmers are never to take code out of PLS-180 except when approved by the
director. All code in SourceSafe is the property of Cal State Fullerton. Printed code
is to be shredded.
A Windows Access List defines which users have permission to read the SourceSafe
folder.
When deploying a project on a production server, code is to be compiled into Bin
file(s). Unless otherwise approved by the project lead and/or the director, only the
project's Bin and other compiled files may be uploaded to the production server.
Only members of the CodeReaders group on the production server have read access to
the production code folders.
The diagram above illustrates the process of checking a project in and out and
deploying it to the production server.
Programmers are always to check out the latest version from SourceSafe before making
any changes to a project's code.
Web Project Status for Webmasters on Campus
Requesting Changes
URL: http://www.fullerton.edu/Project_Status/
Managed by: Edwin Bibera
When webmasters in other departments request changes to web pages, they must log in
at the URL above and enter the request. The project status updater, Edwin Bibera,
then fulfills the request and updates the web page with the status of the request.
Requests still open:
Requests completed:
References
1. Writing Secure Web Applications
2. Ovid's "Web Programming Using Perl" Course: Basic Security with CGI.pm
3. Macadamia: code review
4. Open Web Application Security Project
5. SecureProgramming.com