Monday, October 30, 2006

Security Policy and Responsibility

Last month we discussed the security policy problems revealed within the Department of Veterans Affairs (VA) in the wake of its highly public data breach, including the firing of two employees responsible for information security. Over the last month, employees at both AOL and Ohio University were terminated or resigned in the aftermath of data privacy breaches. All of these cases raise interesting security policy questions for every organization to consider.

Security Scapegoats?
While termination seems to be an obvious step toward restoring customer confidence, in both recent cases serious questions were raised about the overall security and privacy practices of the entire organization. In the wake of very damaging or embarrassing data breaches, some organizations seem to focus the blame on individuals rather than on weaknesses in internal policies and procedures.


In the past, similar incidents have resulted in lawsuits for improper termination, since many organizations failed to clearly communicate their data security and privacy policies to all employees. In the case of Ohio University, lawyers for the fired employees have already made statements indicating that their clients were improperly targeted. Similar statements were made by ex-employees of the VA.

Security Policy Lessons

These incidents and their public fall-out raise some important questions for organizations concerned with policy creation, education and enforcement:

Question: Do your information security policies cover sanctions against employees? Does the language tie those sanctions to violations of specific, existing corporate policies?

In neither of these cases did the public statements mention that employees were violating any specific policy, but instead seemed to indicate that the employees should have "known better." AOL CEO Jon Miller in an internal memo stated that "This incident took place because some employees did not exercise good judgment or review their proposal with our privacy team. We are taking appropriate action with the employees who were responsible."


The fundamental question here is whether an employee should be fired for making a mistake, especially in areas where there is very little official guidance on how employees can handle sensitive data safely. While we are not attempting to judge the legality of such actions, terminating employees without proper cause or documentation is likely to create both legal and morale problems.

During a risk assessment or policy update, organizations would do well to consider what would happen in their own organization if an individual made a mistake that caused an information security or privacy breach. What should be done if the organizational policies only address violations of stated policy, and not errors in judgment?

Question: Does your organization clearly communicate information security and privacy policies to users based on their role in the organization?

Organizations that wish to terminate employees for violations of company policy should take great care to have their information security and privacy policies clearly documented and communicated.

In the case of AOL, it is not clear whether there was a corporate privacy policy that prohibited researchers from using customer data without consulting the privacy group, and public statements by AOL suggest that the company is only now taking a serious look at its internal policies. Public response to the AOL incident also included allegations that sensitive search data should have been destroyed as part of a regular data destruction policy.

In a separate statement, Ohio University announced a 20-point plan to improve information security at the school, which has about 16,640 undergraduate students and 862 full-time faculty members on its Athens campus.

Question: Are information security and privacy responsibilities clearly documented in job responsibilities?

In the cases of the VA and Ohio University, the terminated employees had direct responsibility for information security. Even so, statements from the attorneys of the fired employees raise some questions as to which systems the individuals were actually responsible for.

In the case of AOL, the employees were doing research on web searches. Company statements indicate that there were no official procedures in place for protecting customer privacy, but that the employees "were to consult the privacy team" before posting their research.

While we can only extrapolate from these public statements, the common thread in all of these cases is poor documentation of information security responsibilities. While having information security policies is critical, those policies are much more effective when they are tied to the specific responsibilities of individual job roles. Organizations that take this more structured approach will not only have better security, but will also be better prepared to support any sanctions.

Resources
Information Security Policies Made Easy, Version 10 - A complete library of information security policies, including policies for personnel security.
Information Security Roles & Responsibilities Made Easy, Version 2 - An extensive library of documented information security requirements for various organizational roles.
Privacy Management Toolkit, Version 1 - A complete resource for managing customer and employee privacy based on OECD Fair Information Principles.

Policy Controls for Building Secure Applications

A number of recent surveys indicate that an increasing number of attacks are targeting applications rather than operating systems. Hackers have discovered that applications are patched far less frequently than operating systems and web servers. For example, the recent release of the SANS Top 20 vulnerabilities of 2005 points to a number of problems related to application security. The results prompted SANS Institute Research Director Alan Paller to state that "Security has been set back nearly six years in the past 18 months" because of problems with application patching.

Application security weaknesses in commercial software are now under tremendous scrutiny. For years, commercial software vendors have been under fire for not developing secure code and then not fixing flaws fast enough once they are discovered. Applications that affect large numbers of users, such as email clients and web browsers, have been the focus of much news coverage. While commercial software is certainly a large problem, the applications that are developed in-house are often overlooked.

Many organizations that are assessing their internal controls for Sarbanes-Oxley or other compliance efforts are discovering that many in-house applications (as well as those developed by commercial vendors) are lacking in basic security controls. To help reduce overall corporate risk, compensating controls in the form of manual procedures will need to be implemented. As organizations are beginning to see, the cost of not building security into applications from the beginning, typically paid in the form of ongoing manual compensating controls, can be many times the cost of building it in up front.

Policy-based controls

There are a number of internal control points that make sense to address with policies and procedures. The first is to have an overall policy that concisely establishes security as part of the application development process. For example:

Policy: For all business application systems, systems designers and developers must consider security from the beginning of the systems design process through conversion to a production system.

Of course, this policy is equally valid for any development undertaken by the company, whether using in-house or contracted staff. A similar policy should be implemented for the acquisition of new systems from third-party or commercial vendors. So what are some of the organizational standards and procedures that will support this policy?

Security Requirements Reviews - Applications usually begin with a set of requirements. The first step is to review the system requirements document for security and to build specific security controls into the application from the design phase. If your organization has a formal project development process, a formal security review checkpoint should be established.
With all of the procedures mentioned here, it is important to understand the implied personnel responsibilities. A person or team in the organization should be designated to review applications for security requirements. These can be either members of the development staff trained in information security practices, or members of the information security team with specific knowledge of application development issues.
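
Where a formal development process exists, the review checkpoint can even be made mechanical. The sketch below, written in Python purely for illustration, shows one way a project team might gate a design phase on named sign-offs for a handful of security review items; the checklist entries and data layout are our own assumptions, not a formal standard.

    # A minimal sketch of a security review checkpoint gate. The checklist items
    # and the sign-off structure are illustrative assumptions only.
    REQUIRED_SIGNOFFS = [
        "authentication and authorization requirements reviewed",
        "sensitive data identified and handling requirements documented",
        "audit logging requirements reviewed",
        "input validation requirements reviewed",
    ]

    def checkpoint_passed(signoffs):
        # `signoffs` maps each checklist item to the reviewer who approved it,
        # e.g. {"audit logging requirements reviewed": "j.smith"}.
        # The gate passes only when every required item has a named reviewer.
        return all(signoffs.get(item) for item in REQUIRED_SIGNOFFS)

    if __name__ == "__main__":
        partial = {"audit logging requirements reviewed": "j.smith"}
        print(checkpoint_passed(partial))  # False: the review is incomplete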


Secure Coding Practices - Once requirements have been defined, design and coding begin. At this point, developers start the process of turning ideas into code. Ideally, all developers should be trained in secure coding practices. A more realistic approach, however, is to have one or two senior developers or system architects who can participate in code reviews and coach other team members. For example, these lead developers can establish a set of secure coding "best practices" that is distributed to all development staff.
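
As one illustration of the kind of "best practice" a lead developer might circulate, the sketch below contrasts a SQL query built by string concatenation with a parameterized query. Python and sqlite3 are used here purely for illustration, and the table and function names are hypothetical.

    import sqlite3

    def find_user_unsafe(conn, username):
        # Vulnerable: user input is concatenated into the SQL text, so a value
        # such as "x' OR '1'='1" changes the meaning of the query (SQL injection).
        cursor = conn.execute(
            "SELECT id, username FROM users WHERE username = '" + username + "'")
        return cursor.fetchone()

    def find_user_safe(conn, username):
        # Preferred: the value is bound as data through a placeholder, so the
        # database driver never interprets it as SQL.
        cursor = conn.execute(
            "SELECT id, username FROM users WHERE username = ?", (username,))
        return cursor.fetchone()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
        conn.execute("INSERT INTO users (username) VALUES (?)", ("alice",))
        print(find_user_safe(conn, "alice"))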

Testing for Security Features - Assuming that security features were included in the system requirements document, those features should then generate test cases for system and integration testing. The more complicated the application, the more opportunity there is for vulnerabilities to be created by unanticipated combinations of system state, or by assumptions about secure messaging that do not hold in practice. Again, the testing team should have key members who are trained in developing test cases that exercise availability, confidentiality and data integrity, including error and recovery states. Testing may also include disaster recovery scenarios, such as how to recover the application state after a complete system failure.
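
A short example of what such security test cases might look like follows, using Python's unittest module purely for illustration; the validate_account_id routine is a hypothetical stand-in for whatever input validation the requirements actually call for.

    import unittest

    def validate_account_id(value):
        # Hypothetical validation routine: accept only 6- to 10-digit account IDs.
        if not isinstance(value, str) or not value.isdigit() or not 6 <= len(value) <= 10:
            raise ValueError("invalid account id")
        return value

    class SecurityFeatureTests(unittest.TestCase):
        def test_rejects_malformed_and_hostile_input(self):
            # Integrity: malformed or hostile input must be rejected outright.
            for bad in ["", "abcdef", "1; DROP TABLE users", "12345678901234"]:
                with self.assertRaises(ValueError):
                    validate_account_id(bad)

        def test_error_state_does_not_leak_internal_details(self):
            # Confidentiality: failure messages should not expose internal data.
            with self.assertRaises(ValueError) as context:
                validate_account_id("not-a-number")
            self.assertNotIn("TABLE", str(context.exception))

        def test_accepts_well_formed_input(self):
            self.assertEqual(validate_account_id("123456"), "123456")

    if __name__ == "__main__":
        unittest.main()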

Application Vulnerability Analysis - Finally, some organizations may consider performing "white-hat" vulnerability analysis on their own systems. In this scenario, team members or outside consultants who are familiar with system vulnerabilities try to "hack" the application systems in a test environment. This process may expose vulnerabilities associated with operating system or network configuration flaws that were impossible to anticipate during the design phase.
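
The sketch below illustrates the spirit of such an exercise in miniature: a crude fuzz test that throws random input at a parser and records any failure other than a controlled rejection. Real white-hat testing would of course go far beyond this, but the loop shows the basic pattern. Python and the parse_request function are assumptions for illustration only.

    import random
    import string

    def parse_request(raw):
        # Hypothetical stand-in for an in-house request parser under test.
        name, _, value = raw.partition("=")
        if not name:
            raise ValueError("missing field name")
        return {name: value}

    def fuzz(rounds=1000, max_len=64):
        # Feed randomly generated strings to the parser and collect anything
        # other than a clean result or a controlled ValueError.
        failures = []
        for _ in range(rounds):
            raw = "".join(random.choice(string.printable)
                          for _ in range(random.randint(0, max_len)))
            try:
                parse_request(raw)
            except ValueError:
                pass  # expected, controlled rejection
            except Exception as exc:  # unexpected crash worth investigating
                failures.append((raw, repr(exc)))
        return failures

    if __name__ == "__main__":
        for raw, error in fuzz():
            print("unexpected failure for input %r: %s" % (raw, error))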

Conclusion - Have procedures to support your policies
It is important to have policies in place that require security in the application development and acquisition process. However, if your internal procedures are not modified to support the policy, the policy will have no real impact on the organization. A little homework and some targeted training for key staff members will help ensure that your applications are developed with security in mind. Secure applications will not only make your customers happy, they may keep you out of the headlines.

Related Resources and Information
Information Security Policies Made Easy, Version 10.0 contains over 1300 pre-written policies, including policies for application development and system acquisition. If you have any gaps in your application security policies, this is the most cost-effective way to fill them.