Applying the Security Development Lifecycle at Windows Live

May 2010

Authors:
- Windows Live—Ali Pezeshk, Deepak Manohar, and Spencer Low
- The SDL team—Bryan Sullivan

Contributors:
- The SDL team—Don Ankney, Grant Bugher, and Russ McRee
- Windows Live—Gruia Pitigoi-Aron, Andy Glover, Len Zuvela, and Shankar Rajagopalan

June 29, 2010

For the latest information, please see http://www.microsoft.com/sdl

This document is provided "as-is." Information and views expressed in this document, including URL and other Internet Web site references, may change without notice. You bear the risk of using it. Some examples depicted herein are provided for illustration only and are fictitious. No real association or connection is intended or should be inferred. This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and use this document for your internal, reference purposes.

© 2010 Microsoft Corporation. All rights reserved.

TABLE OF CONTENTS
- Introduction
- Overview of Windows Live
- Security Considerations in Windows Live
- Windows Live and the SDL Process
- Training Phase
- Requirements Phase
- Design Phase
- Implementation Phase
- Verification Phase
- Response Phase
- Conclusions and Results

INTRODUCTION

Security is of critical importance in today's software industry. According to U.S. government agencies, cybercrime is now a $100 billion a year business. Software vendors have worked hard to reduce the attack surface of their platforms. Consequently, attackers have shifted their focus to the applications that run on these platforms and have become much more targeted and stealthy in their approach. Attackers seem to be most interested in Web applications: a recent Gartner report showed that 75 percent of hacks take place at the Web application layer. Furthermore, many of these Web applications must comply with government regulations or meet other industry-specific requirements. Loss of end-user data due to security breaches can erode trust in the brand and can affect the online industry as a whole.

Microsoft developed the Security Development Lifecycle (SDL) to address general concerns about software security. The SDL was designed to reduce the number of existing software vulnerabilities and to limit the severity of any new vulnerabilities that might be introduced during development. The SDL is a process that encompasses education, technology, and executive commitment.
The first version of the SDL, which focused heavily on client/server applications, dates back to 2002. Later versions have expanded their scope to include teams who are designing and building Web applications. The Windows Live™ team adopted many of the newer Web-focused practices before they were incorporated into the SDL as requirements. This paper summarizes these new features, describes the process that the Windows Live team followed to roll out the SDL, and captures some of the lessons that the team learned along the way. This paper also describes how the use of the SDL by the Windows Live team has evolved, starting with Windows Live Wave 2, through Windows Live Wave 3, and on to the upcoming release, Windows Live Wave 4.

OVERVIEW OF WINDOWS LIVE

Windows Live provides two classes of products:
- Web applications, such as Windows Live Hotmail®, running on a set of Web servers hosted for Microsoft.
- Client applications, such as Windows Live Messenger, running on users' desktops.

The security threats and mitigations for these two classes of products are very different. The most common vulnerabilities observed in the Web applications are cross-site scripting (XSS), cross-site request forgery (XSRF), open redirects (XSRs), and JavaScript Object Notation (JSON) hijacking. In the client applications, most of the vulnerabilities are due to buffer overflows and integer overflows. Some other common security vulnerabilities, such as Structured Query Language (SQL) injection attacks, are not so prevalent in Windows Live products because of their limited use of SQL.

The tools that are used to find vulnerabilities, and the mitigations that are applied to fix them, vary significantly between the two classes of products. The SDL process for building client applications is well documented, so this paper considers only how the SDL applies to the Web applications at Windows Live.

SECURITY CONSIDERATIONS IN WINDOWS LIVE

Windows Live serves over 500 million customers, making it a very attractive target for attackers. Because this paper covers only the Web applications at Windows Live, the most impactful threats in this case are XSS, XSRs, XSRF, and JSON hijacking. An XSS or XSR vulnerability in one of the Web applications could result in users' credentials being stolen and their e-mail messages or documents being read. This kind of vulnerability could even be used to launch phishing or spamming scams. Attackers could also use such a vulnerability to distribute malicious software (also called malware) to other users. An XSRF or JSON hijacking vulnerability could result in users' accounts being deleted without their knowledge, their e-mail messages being forwarded to the attacker's e-mail addresses without their knowledge, leakage of personal information, and other information disclosure.

WINDOWS LIVE AND THE SDL PROCESS

The SDL process consists of several well-defined phases, each with its own set of tasks. Figure 1 shows the phases in the SDL process.

Figure 1: Phases of the SDL

The SDL team has released externally many of the tools that are used to support the SDL process. These include the SDL Process Template, the SDL Threat Modeling Tool, the BinScope Binary Analyzer, and the MiniFuzz File Fuzzer. Other Microsoft teams have also released externally tools that the SDL requires, including Static Code Analysis in Visual Studio 2005 (FxCop) and the Web Protection library.
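As a simple illustration of the output-encoding defense against the XSS issues described above, the following sketch shows untrusted input being HTML-encoded with the externally available Anti-XSS (Web Protection) library before it is written into markup. It assumes the Microsoft.Security.Application.Encoder API from a recent release of that library; the exact class names differ between versions, and Windows Live wraps this functionality in its own library, as described later in this paper.

```csharp
using System;
using Microsoft.Security.Application; // externally available Anti-XSS / Web Protection encoder

class OutputEncodingExample
{
    // Builds an HTML fragment from a user-supplied display name. Encoding the value
    // means that a payload such as <script>...</script> is rendered as inert text
    // instead of being interpreted as markup by the browser.
    static string RenderGreeting(string userSuppliedName)
    {
        string safeName = Encoder.HtmlEncode(userSuppliedName);
        return "<p>Hello, " + safeName + "</p>";
    }

    static void Main()
    {
        // The angle brackets and quotes come back in encoded form, so the script cannot run.
        Console.WriteLine(RenderGreeting("<script>alert('xss')</script>"));
    }
}
```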
Since 2004, the release policy for all Microsoft products has mandated the use of the SDL, and every major product must complete a Final Security Review (FSR) during the Release phase to ensure that it complies with all of the relevant security requirements. In addition to completing the security requirements in the SDL, the Windows Live team added its own security requirements. These included the use of dynamic canaries and safe redirects, which are described in more detail in the white paper "Preventing Security Development Errors: Lessons Learned at Windows Live by Using ASP.NET MVC." The SDL has now incorporated these requirements.

The SDL process is constantly evolving to take into account the changing threat landscape, the ways in which software is used, and advances in technology. It is updated regularly as the security environment and the threats that are posed to applications evolve, and these updates incorporate best practices and the tools that support secure software development. The experiences of the Windows Live Security team have contributed to the evolution of the SDL. Specifically, the Windows Live team collaborated with the SDL team and added some additional steps to the SDL that are used at Windows Live, as shown in Figure 2.

Figure 2: SDL with additional steps (custom training courses; Feature Risk Assessment tool; Security Test Plan Generator; security test plan reviews; threat model assistance; security by default with ASP.NET MVC; security reliant on clear trust boundaries; centralized security functionality; static code analysis; directed code reviews; passive and active security tools; internal penetration testing; tighter integration with incident monitoring, incident response, and engineering teams)

The following sections describe the most important phases of the SDL and provide more details about the additional steps that the Windows Live Security team introduced to the SDL process.

TRAINING PHASE

The Windows Live Security team dedicated considerable effort to training program managers, developers, and testers on the SDL process, requirements, and tools. The team created several SDL-certified courses that covered the following areas:
- The threat-modeling process for Web applications.
- The security landscape and underground business model.
- The threats that are affecting Windows Live.
- The mitigations that the Windows Live Security library offers.
- The various security-testing tools that have been customized for Windows Live.

Many Windows Live engineers have taken these courses, and the following feedback captures their success:
- "This was one of the best courses I've attended in Microsoft. It was very relevant to my product and feature."
- "The hands-on, in-class practice was very useful. I can directly apply these techniques."
- "The course was very informative and I really liked the in-depth coverage of the Windows Live–specific libraries and APIs that I need to use."

Lessons Learned

Training laid the foundation for focusing engineers on the right security problems, and it was a crucial input to the other phases of the development life cycle. By providing Windows Live engineers with custom, tailored courses about the details of the libraries and tools that are used at Windows Live, the engineers could easily apply this knowledge during product development.
The timing of the availability of these courses is also of critical importance. Windows Live releases products in waves, so it is important to have the training courses ready and delivered before embarking on work that is targeted at the next feature milestone.

REQUIREMENTS PHASE

The optimal point to specify security requirements for a software project is during the initial planning phases of a new release or a new version. This enables development teams to identify key objectives and integrate security in such a way that it is a fundamental tenet of the design. In this way, they can minimize the disruption to product plans and development schedules that can occur if security requirements are specified later in the process.

To determine the security requirements for Windows Live, the Windows Live Security team analyzed the most common errors that developers made, examined the external vulnerabilities that were reported to the Microsoft Security Response Center (MSRC), and consulted external sources on the most common threats that were affecting Web-based products. The team concluded that the biggest security problems resulting from development errors at Windows Live were:
- Cross-site scripting
- Cross-site request forgery
- Open redirects
- JSON hijacking

For a more in-depth explanation of these vulnerabilities, see the "OWASP Top 10" and our previous paper.

In the version of Windows Live that was under development, the engineering leadership team decided to move from ASP.NET to ASP.NET Model-View-Controller (MVC). The Security team quickly recognized that this was a great opportunity to build security into the framework and prevent developers from making the errors that lead to security problems. The following sections briefly describe the design choices that were made. For a more detailed study of how the Windows Live Security team prevented developers from accidentally introducing XSRF, XSR, and JSON hijacking issues into the code base, read the paper "Preventing Security Development Errors: Lessons Learned at Windows Live by Using ASP.NET MVC."

Security Risk Assessment

When program managers (PMs) specified features, some teams also opted to have their PMs fill out a Security Risk Assessment questionnaire, which helped evaluate the security risk level of the feature. The questionnaire included questions such as:
- Does this feature handle high business impact (HBI) data?
- Does this feature handle code from third parties?

The Security Risk Assessment questionnaire helped the PMs evaluate the security risk level of their features more easily. It also helped the Security team to scale better and to focus more easily on the features that carried the highest risk.
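The actual questionnaire and its scoring are internal to Windows Live, but a minimal sketch of how answers to this kind of questionnaire might be turned into a risk level is shown below. The first two fields mirror the sample questions above; the remaining fields, the weights, and the thresholds are illustrative assumptions, not the Windows Live criteria.

```csharp
using System;

// Hypothetical Security Risk Assessment answers for one feature.
class SecurityRiskAssessment
{
    public bool HandlesHighBusinessImpactData; // HBI data such as credentials or financial records
    public bool HandlesThirdPartyCode;
    public bool AcceptsUserUploadedContent;
    public bool PerformsStateChangingPosts;

    public string RiskLevel()
    {
        int score = 0;
        if (HandlesHighBusinessImpactData) score += 3;
        if (HandlesThirdPartyCode) score += 3;
        if (AcceptsUserUploadedContent) score += 2;
        if (PerformsStateChangingPosts) score += 1;

        if (score >= 4) return "High";   // e.g. candidates for security test plan review and pen-test focus
        if (score >= 2) return "Medium";
        return "Low";
    }
}

class Program
{
    static void Main()
    {
        var feature = new SecurityRiskAssessment
        {
            HandlesHighBusinessImpactData = true,
            PerformsStateChangingPosts = true
        };
        Console.WriteLine(feature.RiskLevel()); // "High" under these illustrative weights (3 + 1 = 4)
    }
}
```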
Lessons Learned

Maintaining a repository of bugs that were found internally and vulnerabilities that were found externally, which the team could easily categorize and search, facilitated the analysis of these bugs. This repository enabled the Windows Live team to tackle security at a strategic level rather than a tactical level.

DESIGN PHASE

The Design phase includes tasks such as performing threat modeling and reviewing security test plans.

Threat Modeling

Threat modeling is a systematic process that is used to identify potential threats and vulnerabilities in software. It enables the designers to incorporate the appropriate mitigations and protections into the software. A careful review of customer requirements and expectations regarding security can help to identify the portions of the software that may pose risks and can assist in conducting threat modeling. Defining a threat model requires identifying all possible threats against a given application and then documenting their assessed probability together with possible countermeasures to mitigate each threat. In some cases, the mitigation takes the form of changing the design itself, in which case the team must analyze the new or changed elements in an additional threat-modeling iteration. Performing threat modeling in a systematic manner enables the design team to identify many security issues that it should address early in the Design phase, before coding starts.

Microsoft has released the Microsoft SDL Threat Modeling Tool. This is an easy-to-use, graphical tool for creating and analyzing threat models, capturing impact assessments and their proposed mitigations, and producing actionable bug reports to help ensure that the team mitigates or eliminates all identified vulnerabilities.

Threat Modeling in Windows Live

The Windows Live Security team decided to use the threat-modeling task as an opportunity to provide some on-the-job training for the engineers at Windows Live. The Security team worked with the product teams at Windows Live to create the most critical threat models. This proved to be a great way to share and disseminate the security knowledge of the Security team to the engineers at Windows Live. For our current release (Wave 4), Windows Live engineers have identified over 300 security issues in the Design phase as a result of threat modeling.

Security Test Plan Reviews and the Security Test Plan Generator at Windows Live

Performing threat modeling is a standard practice within the product groups at Microsoft. However, the Windows Live team wanted to integrate security more tightly into the product life cycle. In addition, the philosophy within the Windows Live product group is that the Windows Live Security team does not own security. Instead, security is owned by the team members for individual features: the PM, the developer, and the tester. This approach enables the Security team to take responsibility for creating testing guidelines and to partner with the personnel who are responsible for testing to help them adopt these guidelines. To facilitate this strategy, the Security team used two tools:
- The Security Test Plan Review Service.
- The Security Test Plan Generator.

The following sections describe these tools.

Security Test Plan Review Service

At Windows Live, test engineers create test plans for their features during the Design phase of the product development life cycle, before testing begins. The last step of the Design phase is for key partners to review the engineers' test plans. The Security Test Plan Review Service makes the Windows Live Security team a key partner for the features that carry the highest risk. During this review, Security team members attend the test plan review meeting, share their security knowledge, and identify potential gaps in the test plan. The Security team reviews the test plans of the most critical features; Windows Live engineers can use the Security Risk Assessment questionnaire to identify the criticality of a feature. In addition to identifying several gaps in the test plans, this service offers several key benefits:
- Teaching many at once. By providing the review with other team members present, the Security team can pass on its knowledge to many team members simultaneously rather than on a one-to-one basis.
- On-the-job training. By providing security feedback on the individual test engineer's feature, the review focuses the engineer's attention and delivers more targeted information than a training course.

Security Test Plan Generator

In addition to the above, the Windows Live Security team recognizes that certain common feature attributes indicate the need for particular test cases. For example, if a feature performs POST operations, it needs to be tested for the presence of XSRF mitigations. The Windows Live Security team identified the most common attributes of the features developed at Windows Live and identified the appropriate test cases for them. An extract of this information is shown in Figure 3.

Figure 3: Selecting features in the Security Test Plan Generator

The test engineer selects the types of threats that apply to his or her feature. Based on this selection, the Security Test Plan Generator automatically generates test cases that are relevant to the feature. For example, if the two check boxes shown in Figure 3 are selected, the test cases in Figure 4 are generated.

Figure 4: Test cases generated by the Security Test Plan Generator

To use the Security Test Plan Generator, the test engineer selects the check boxes for the attributes of the feature (for example, that it performs POST operations). The Security Test Plan Generator then provides the test engineer with a list of test cases that the engineer should include in the test plan.

IMPLEMENTATION PHASE

The SDL process mandates that developers adhere to specific process requirements for coding software, and these requirements are enforced by using source code analysis tools. The following sections describe some of the tools and practices that the Windows Live team has implemented, including:
- Windows Live implementation principles.
- Performing static code analysis.
- The Windows Live WebSecurity library.
- Performing directed code reviews.

Implementation Principles

The Windows Live Security team took certain principles to heart:
- Be secure by default.
- Implement security that relies on well-defined trust boundaries.
- Design for quick incident response.

Secure by Default

This concept is explained in detail in the paper "Preventing Security Development Errors: Lessons Learned at Windows Live by Using ASP.NET MVC." When some teams at Windows Live moved from ASP.NET to ASP.NET MVC, the Windows Live Security team took the opportunity to build certain mechanisms into the framework that turn on, by default, some of the mitigations against the most common attacks that Windows Live faces. Building these mechanisms into the framework prevents or reduces the chance that a developer error will result in a security vulnerability in Windows Live. Technical readers should read the paper referenced above to learn more about how the framework was designed and constructed to defend against cross-site request forgeries, JavaScript and JSON hijacking, and open redirects.
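For readers who have not used this class of mitigation, the following sketch shows the stock ASP.NET MVC anti-forgery mechanism, in which the developer must remember to opt in on every state-changing action and form. The approach described in the referenced paper builds equivalent protection into the Windows Live framework so that it is on by default; the controller, action, and view fragment below are illustrative, not Windows Live code.

```csharp
using System.Web.Mvc;

// Illustrative controller showing the stock ASP.NET MVC anti-forgery mechanism.
// The developer must opt in on every state-changing action; the Windows Live
// framework described in the referenced paper applies an equivalent check by default.
public class ProfileController : Controller
{
    [HttpPost]
    [ValidateAntiForgeryToken] // rejects POSTs whose hidden form token and cookie do not match
    public ActionResult UpdateDisplayName(string displayName)
    {
        // ... persist the signed-in user's new display name ...
        return RedirectToAction("Index");
    }
}

// The corresponding view must emit the matching hidden token inside the form:
//
//   <% using (Html.BeginForm("UpdateDisplayName", "Profile")) { %>
//     <%= Html.AntiForgeryToken() %>
//     <input type="text" name="displayName" />
//     <input type="submit" value="Save" />
//   <% } %>
```

Because each action must be annotated and each form must emit the token, a single omission reintroduces the vulnerability, which is exactly the class of developer error that the secure-by-default framework work was intended to remove.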
Security That Relies on Well-Defined Trust Boundaries

The advantage of application-level programming is the flexibility that it offers to developers in implementing solutions. This advantage can become a disadvantage when a team of more than a thousand engineers implements custom security mitigations for common vulnerabilities. Each of these custom implementations must be tested for security weaknesses, and each of them must be re-verified whenever an external attacker finds a new way to exploit a component or technology that the mitigation relies on. Furthermore, it may take some time before users are no longer vulnerable to an attack, because many users are slow to upgrade to a new version of software.

Instead of letting developers invent their own custom mitigations and rely on some arcane technology to implement them, the Windows Live Security team opted to rely on well-defined trust boundaries such as the Same Origin Policy. The Same Origin Policy is critical to the functioning of the Web, so browser manufacturers must quickly fix issues that break this trust boundary. Two examples of applying this principle are:
- The provision of a separate domain for content that users have uploaded.
- The policy on lowering domains.

Same Origin Policy

Wikipedia describes the Same Origin Policy as follows: "In computing, the same origin policy is an important security concept for a number of browser-side programming languages, such as JavaScript. The policy permits scripts running on pages originating from the same site to access each other's methods and properties with no specific restrictions, but prevents access to most methods and properties across pages on different sites." (http://en.wikipedia.org/wiki/Same_origin_policy)

Separate Domain for User-Uploaded Files

Serving files that users have uploaded poses several threats to the Windows Live service. The most obvious threat is that users can upload viruses or malicious software. Another problem is that users may attempt to upload files that are interpreted as HTML. One solution to these problems is to examine the header of an uploaded file and match it with the header of the expected file format. However, this technique does not offer a clear boundary; distinguishing good file headers from bad ones is not a solid enough line. Instead, the Windows Live team opted to use a separate domain to serve files that users have uploaded. Even if an attacker injects HTML into an uploaded file, that HTML cannot run in the Document Object Model (DOM) of the primary domain, so the attacker cannot steal any important data, because of the protection that the Same Origin Policy provides.

The alert reader may be quick to point out that an attacker could upload HTML that redirects users to a different site serving malicious software or phishing ads. However, Windows Live checks user-uploaded content to ensure that it does not redirect to an arbitrary site.

Policy on Lowering Domains

The Same Origin Policy prevents different domains, or different subdomains of the same domain, from communicating with each other. Sometimes Web application developers lower document.domain to the top-level domain so that two subdomains of the same domain can communicate with each other. However, this also opens up the two subdomains to all other subdomains on the same domain, which may not be intended and which works against the security principle of least privilege. Hence, the Windows Live Security team worked with the SDL team to make it an SDL recommendation that teams should not lower their document.domain property to the top-level domain.

Design for Quick Incident Response

Many Windows Live features are built with the ability to be easily disabled by using a setting in a configuration file. In the case of a catastrophic security issue, we can turn off a specific feature without having to bring down the entire service.
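A minimal sketch of such a feature kill switch appears below. The setting name, the helper, and the fail-open behavior are illustrative assumptions rather than the actual Windows Live configuration scheme.

```csharp
using System.Configuration; // ConfigurationManager lives in System.Configuration.dll

// Illustrative feature kill switch: the feature consults a configuration flag, so
// operations can disable just that feature by editing a configuration file instead
// of taking the whole service offline.
public static class FeatureSwitches
{
    public static bool IsEnabled(string featureName)
    {
        // e.g. <appSettings><add key="Feature.PhotoUpload.Enabled" value="false" /></appSettings>
        string value = ConfigurationManager.AppSettings["Feature." + featureName + ".Enabled"];

        bool enabled;
        if (bool.TryParse(value, out enabled))
        {
            return enabled;
        }

        // Features default to enabled; flipping the flag to "false" turns one feature off.
        return true;
    }
}

// Usage in a hypothetical request handler:
//   if (!FeatureSwitches.IsEnabled("PhotoUpload"))
//   {
//       // render a "temporarily unavailable" page instead of the feature
//   }
```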
Static Code Analysis

Static code analysis tools examine the product source code and uncover patterns and idioms that are known to be problematic from a security perspective. The Windows Live team uses the Microsoft Code Analysis Tool .NET (CAT.NET), as the SDL requires. Microsoft makes CAT.NET available as a free download. The Application Consulting & Engineering (ACE) team develops and maintains CAT.NET.

Run Frequency

Windows Live teams are required to run CAT.NET twice in any development milestone: once just after Code Complete and once before zero bug bounce (ZBB). The first pass helps teams to identify issues that were introduced during coding, and the second pass helps to catch regressions.

Precompilation of Files (*.aspx, *.ascx, and so on)

Teams frequently write code that sends output to a Web page in the presentation layer. This code usually lives in *.aspx or *.ascx files. To get a more complete analysis from CAT.NET, it was determined that teams should compile the whole site as part of the build process and run CAT.NET on the precompiled binaries.

WebSecurity Library

Centralizing security functionality has several benefits:
- Fewer implementations of security-critical code mean fewer errors.
- An error that is fixed once is fixed for multiple teams.
- The cost of implementing fixes is reduced, because there is no need to search the entire code base for similar problems.

The Windows Live Security team opted to centralize security functionality. This resulted in the creation of the WebSecurity library, which contains most of the security functions that developers use. The components of the WebSecurity library are:
- A wrapper around the Anti-XSS library.
- A wrapper around the OSafeHtml library.
- A Saferedirect library.

Wrapper Around the Anti-XSS (Web Protection) Library

Windows Live uses the same Web Protection library that is available externally for performing output encoding. A different team at Microsoft, with different ship cycles, owns the Web Protection library, but the Windows Live team wanted the ability to modify its functionality. This is one of the key reasons that the Windows Live team created a wrapper around the Anti-XSS library, and this proactive approach paid off when the team was able to modify the behavior of the library for mobile phones and for the Japanese language.

Wrapper Around the OSafeHtml Library

Windows Live uses the OSafeHtml library to filter input for dangerous HTML tags. The Windows Live team opted to create a wrapper around the OSafeHtml library for the same reasons that it created the wrapper around the Anti-XSS library. The externally available Microsoft Web Protection library has a new application programming interface (API) to support input filtering without relying on OSafeHtml.

Saferedirect Library

By design, Windows Live products often need to redirect HTTP requests to different destinations, mostly within the Windows Live sites. As described earlier, accepting redirection URLs from the user may result in open redirects that spammers and phishers could exploit. To prevent this problem, the Saferedirect library checks that the destination domain is on an allowed, customizable list of domains.
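The Saferedirect library itself is internal to Windows Live, but a minimal sketch of the kind of allow-list check it is described as performing might look like the following. The helper name, the domain list, and the matching rules are illustrative assumptions.

```csharp
using System;
using System.Linq;

// Illustrative redirect validator in the spirit of the Saferedirect library:
// a redirect is followed only if the destination parses as an absolute HTTP(S) URL
// whose host is on a configurable allow list of trusted domains.
static class SafeRedirectSketch
{
    // In a real service this list would come from configuration, not a constant.
    static readonly string[] AllowedDomains = { "live.com", "hotmail.com", "microsoft.com" };

    public static bool IsAllowedRedirect(string destination)
    {
        Uri uri;
        if (!Uri.TryCreate(destination, UriKind.Absolute, out uri))
            return false; // relative or malformed URLs are rejected outright in this sketch

        if (uri.Scheme != Uri.UriSchemeHttp && uri.Scheme != Uri.UriSchemeHttps)
            return false; // block javascript:, data:, and other non-HTTP schemes

        string host = uri.Host;
        // Accept the allowed domain itself or any of its subdomains.
        return AllowedDomains.Any(d =>
            host.Equals(d, StringComparison.OrdinalIgnoreCase) ||
            host.EndsWith("." + d, StringComparison.OrdinalIgnoreCase));
    }
}

// Example: IsAllowedRedirect("http://mail.live.com/inbox") is true,
// while IsAllowedRedirect("http://evil.example.com/?live.com") is false.
```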
Note: The paper "Preventing Security Development Errors: Lessons Learned at Windows Live by Using ASP.NET MVC" describes a technique whereby the team built an open redirect mitigation directly into the framework so that developers would not need to call the Saferedirect library explicitly.

Directed Code Reviews

Two approaches were considered for performing code reviews:
1. Have the central Security team perform code reviews on the features that carry the highest risk in Windows Live.
2. Share the knowledge from the baseline security code review with the engineers and have them perform a code review of every feature that is checked in.

The Windows Live Security team opted for the latter approach. In an organization of more than one thousand people, this was the only effective way to scale. The Windows Live Security team and security leaders across Windows Live developed concise guidelines for performing security code reviews. The Security team evangelized the guidelines, passing on the knowledge through brown bag sessions and by assisting other engineers during their first few code reviews. This methodology has proven very successful: in the last two milestones, engineers with no security background have found over 75 security issues. In addition to finding issues, this approach has been very useful in raising skill levels by providing hands-on experience, and in raising engineers' confidence that they can find security issues by following the right approach.

VERIFICATION PHASE

Security testing is a critically important part of the SDL process. It addresses two primary concerns:
- To ensure that there are security features and functionality that are specifically designed to provide confidentiality, integrity, and availability of the software and of the data that the software processes.
- To verify the overall quality of the software and to help ensure that it is free from bugs that could result in security vulnerabilities. For example, attackers could exploit buffer overruns in data parsing code.

To address these concerns, the Verification phase includes several security reviews and tests, as described in the following sections.

Test Verification Tools

In any large-scale software development effort, the only feasible way to scale testing is to take advantage of automation. The Windows Live team uses automation to test many of its features, and this was determined to be the best entry point for adding automated security testing. The Windows Live team employs several tools that the SDL requires, in addition to some specific tools that are not yet publicly available. This section concentrates on an approach based on publicly available tools.

Premise for Automating Security Testing—Why Can Some Security Testing Be Automated?

The key reason that security testing for XSS, XSRF, JSON hijacking, and a few other well-understood attacks can be automated to some degree is that these issues can be spotted by looking for certain recognizable patterns in the HTTP stream. It is important to remember that Windows Live security testing examines both the HTTP requests and how the Windows Live servers respond to those requests. This enables the Windows Live team to identify whether the server implements the necessary security measures. For example, consider performing very simple XSS testing.
If an HTTP request contains a dangerous string such as "<", automated testing can determine whether the server applies the appropriate security measure by looking at the response. If the HTTP response contains the text "<" rather than the encoded version "&#60;", the server probably does not implement the required encoding. This case is a gross simplification, but the general principle is valid.

The approach of looking for patterns is feasible, as the number of tools that exist in this space demonstrates. Popular tools include ratProxy from Google and Fiddler Watcher from Casaba. The Windows Live team uses Fiddler Watcher, which ships with 38 standard checks, as part of its test automation to identify security vulnerabilities. Using these tools to automate security testing has helped with:
- Systematically finding these issues.
- Integrating security testing into normal functionality testing.
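As a concrete illustration of the pattern-matching idea described in this section, the following sketch sends a request carrying a distinctive marker and inspects the raw response to see whether the marker is reflected without encoding. It is a deliberately simplified stand-in for the many checks that tools such as Fiddler Watcher perform; the URL and query parameter are placeholders.

```csharp
using System;
using System.Net;

// Deliberately simplified reflection check: inject a marker containing "<" and ">"
// into a query parameter and flag the page if the marker comes back unencoded.
// Real tools apply many such checks to traffic captured during ordinary functional test runs.
class ReflectionCheck
{
    static void Main()
    {
        const string marker = "zz<probe>zz"; // unlikely to occur naturally in a response
        string url = "http://www.example.com/search?q=" + Uri.EscapeDataString(marker);

        using (var client = new WebClient())
        {
            string body = client.DownloadString(url);

            if (body.Contains(marker))
            {
                // The angle brackets came back verbatim, so output encoding is probably missing.
                Console.WriteLine("Potential XSS: marker reflected unencoded at " + url);
            }
            else
            {
                Console.WriteLine("Marker not reflected unencoded; no finding from this check.");
            }
        }
    }
}
```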
Internal Penetration Testing

One of the downsides of using external security companies to perform penetration testing is that it does not foster the transfer of knowledge from the security company to the in-house engineers. This means that the skill level of the in-house engineers in performing penetration testing may not progress effectively. The Windows Live Security team wanted to address this key problem.

The team offered penetration testing as a service to the Windows Live product teams. As part of the service, the Security team would organize a treasure hunt for security bugs: the Security team would find the security vulnerabilities but, rather than simply pointing out the bugs to the teams, would give clues to the engineers and encourage them to find the bugs in the product themselves. This approach has gone a long way toward:
- Raising the skill level of the engineers with respect to penetration testing.
- Raising the confidence of the engineers in finding security bugs.
- Effectively scaling the Windows Live Security team.

RESPONSE PHASE

Despite the best engineering efforts and diligent adherence to the SDL, it is inevitable that code issues will exist and that some of them may lead to security vulnerabilities. Creating a Security Response Plan (SRP) prepares for this contingency. The Windows Live team often revisits the SRP to determine what improvements it can make and to address new classes of vulnerabilities as they arise. This happens on a regular basis and is not tied to specific releases. In addition to having a well-documented SRP, many of the Windows Live features can also be turned off by changing a setting in configuration files. This enables a very quick response in the worst-case scenario, where a feature must be disabled to reduce the impact of a security vulnerability.

CONCLUSIONS AND RESULTS

In addition to meeting its commitments for Windows Live, the team has stepped up to the challenge and gone beyond the minimum bar that the SDL describes in several areas. During the development of Wave 4, more than 1,400 security bugs had been identified and addressed by April 14, 2010. This is a good indication of the interest that every Windows Live engineer takes in ensuring the security of the product.

The key things to remember after reading this paper are:
- Provide customized training to the engineers early in the development phase.
- Prioritize the security investment based on the risk of the features that are being built:
  - Consider creating a Security Risk Assessment questionnaire for the organization.
- Create a central team of security consultants to assist in raising the level of security skills among the engineers in the organization. The central team should work on disseminating its knowledge in the form of the following services:
  - Threat model services
  - Code review services
  - Penetration testing services
  Structure these services to provide on-the-job training to the engineers. The services should also target the features with the highest security risk.
- Create test case generators to distribute security testing knowledge to the engineers.
- Implement security that relies on well-established trust boundaries, such as the Same Origin Policy, and do not rely on arcane technical details to provide security. Avoid inventing custom security mechanisms, because they are hard to get right.
- Precompile *.aspx, *.ascx, and other such files so that static code analysis tools can identify more issues. Run static code analysis tools once when the code is complete, to identify bugs that were introduced during coding, and once before ZBB, to catch regressions.
- In addition to using security-testing tools such as static code analysis tools, educate the engineers to perform security code reviews and penetration tests. These activities help engineers to become familiar with the features they are building and to look at them from a different angle.