Monday, October 30, 2006

Policy Controls for Building Secure Applications

A number of recent surveys indicate that attacks are increasingly targeting applications rather than operating systems. Hackers have discovered that applications are patched far less frequently than operating systems and web servers. For example, the recent release of the SANS Top 20 vulnerabilities of 2005 points to a number of problems related to application security. The results prompted SANS Institute Research Director Allan Paller to state that "Security has been set back nearly six years in the past 18 months" because of problems with application patching.

Application security weaknesses are now under tremendous scrutiny within commercial software. For years, commercial software vendors have been under fire for not developing secure code and then not fixing flaws fast enough once they are discovered. Applications that affect large numbers of users, such as email clients and web browsers, have been the focus of much coverage in the news. While commercial software is certainly a large problem, often overlooked are the applications that are developed in-house.

Many organizations that are assessing their internal controls for Sarbanes-Oxley or other compliance efforts are discovering that many in-house applications (as well as those developed by commercial vendors) are lacking in basic security controls. To help reduce the overall corporate risk, compensating controls in the form of manual procedures will need to be implemented. As organizations are beginning to see, the cost of not building security into applications from the beginning can be many times the cost of manual compensating controls.

Policy-based controls

There are a number of internal control points that make sense to address with policies and procedures. The first is to have an overall policy that concisely establishes security as part of the application development process. For example:

Policy: For all business application systems, systems designers and developers must consider security from the beginning of the systems design process through conversion to a production system.

Of course, this policy is equally valid for any development undertaken by the company, whether using in-house or contracted staff. A similar policy should be implemented for the acquisition of new systems from third-party or commercial vendors. So what are some of the organizational standards and procedures that will support this policy?

Security Requirements Reviews - Applications usually begin with a set of requirements. The first step is to review the system requirements document for security and to build specific security controls into the application from the design phase. If your organization has a formal project development process, a formal security review checkpoint should be established.
With all of the procedures mentioned here, it is important to understand the implied personnel responsibilities. A person or team in the organization should be designated to review applications for security requirements. These can either be members of the development staff trained in information security practices, or members of the information security team with specific knowledge of application development issues.


Secure Coding Practices - Once requirements have been defined, design and coding begins. At this point, developers begin the process of turning ideas into code. Ideally, all developers should be trained in secure coding practices. A more realistic approach, however, is to have one or two senior developers or system architects who can participate in code reviews and coach other team members. For example, these lead developers can establish a set of secure coding "best practices" that get distributed to all development staff.
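
To give a flavor of what such a best-practices document might contain, here is a minimal sketch of one common guideline, avoiding string-built SQL in favor of parameterized queries. It assumes a Python application using the standard sqlite3 module and a hypothetical users table; the names are illustrative only, not taken from any particular system discussed here.

import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: attacker-controlled input is spliced directly into the SQL,
    # so a value like "' OR '1'='1" changes the meaning of the query.
    cursor = conn.execute(
        "SELECT id, username FROM users WHERE username = '%s'" % username
    )
    return cursor.fetchone()

def find_user_safe(conn, username):
    # Preferred: the driver binds the value as data, never as SQL text.
    cursor = conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    )
    return cursor.fetchone()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('alice')")
    print(find_user_safe(conn, "alice"))

The point of distributing a sketch like this is less the specific code than the habit it teaches: treat all user input as data, never as executable syntax.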

Testing for Security Features - Assuming that security features were included in the system requirements documents, these features would then generate test cases for system and integration testing. The more complicated the application, the more opportunity there is for vulnerabilities to be created by unanticipated combinations of system state, or by assumptions about secure messaging that may be compromised. Again, the testing team should have key members who are trained in developing cases that test availability, confidentiality, and data integrity, including error and recovery states. Testing may also include disaster recovery scenarios, such as how to recover the application state from a complete system failure.
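
As a rough sketch of what such test cases might look like, the example below uses Python's unittest module to exercise a hypothetical authorization function (check_access) and error handler (render_error). Both functions and their rules are assumptions made up for illustration, not part of any system described above.

import unittest

# Hypothetical application functions, shown only to illustrate the shape of
# security-focused test cases.
def check_access(role, action):
    allowed = {"admin": {"read", "write", "delete"}, "clerk": {"read"}}
    return action in allowed.get(role, set())

def render_error(exc):
    # Return a generic message; never echo internal details to the user.
    return "An internal error occurred. Please contact support."

class SecurityFeatureTests(unittest.TestCase):
    def test_clerk_cannot_delete(self):
        self.assertFalse(check_access("clerk", "delete"))

    def test_unknown_role_denied_by_default(self):
        self.assertFalse(check_access("visitor", "read"))

    def test_error_messages_do_not_leak_internals(self):
        message = render_error(RuntimeError("db password rejected for svc_acct"))
        self.assertNotIn("password", message)

if __name__ == "__main__":
    unittest.main()

Notice that the tests cover denial paths and error handling, not just the "happy path" - exactly the cases that tend to be skipped when security is not written into the requirements.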

Application Vulnerability Analysis - Finally, some organizations may consider performing "white-hat" vulnerability analysis on their own systems. In this scenario, team members or outside consultants who are familiar with system vulnerabilities try to "hack" the application systems in a test environment. This process may expose vulnerabilities associated with operating system or network configuration flaws that were impossible to anticipate during the design phase.
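
For a very small taste of this kind of probing, the sketch below sends a path-traversal style request to a hypothetical application in a test environment (the URL http://testapp.local and the check for passwd-style content are assumptions for illustration). Real vulnerability analysis covers far more ground, and probes like this should only ever be run against systems you own and have permission to test.

import urllib.request
import urllib.error

# Hypothetical test-environment URL; never run probes like this against
# production systems or systems you do not control.
PROBE_URL = "http://testapp.local/download?file=../../etc/passwd"

def probe_path_traversal(url):
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            body = response.read().decode("utf-8", errors="replace")
    except (urllib.error.URLError, OSError) as exc:
        return "request failed: %s" % exc
    # A response containing passwd-style entries suggests the requested file
    # escaped the directory the application intended to serve.
    if "root:" in body:
        return "possible path traversal: server returned system file contents"
    return "no obvious traversal; response length %d" % len(body)

if __name__ == "__main__":
    print(probe_path_traversal(PROBE_URL))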

Conclusion - Have procedures to support your policies
It is important to have policies in place that require security in the application development and acquisition process. However, if your internal procedures are not modified to support the policy, there is no way for the policy to have any impact on the organization. A little bit of homework and some targeted training for key staff members will help ensure that your applications are developed with security in mind. Secure applications will not only make your customers happy, they may keep you out of the headlines.

Related Resources and Information
Information Security Policies Made Easy, Version 10.0 contains over 1300 pre-written policies, including policies for application development and system acquisition. If you have any gaps in your application security policies, this is the most cost-effective way to fill them.
