Tuesday, October 28, 2014

Week 10 - Security Protection Mechanisms

Security protection mechanisms cover a broad range of areas and technologies used to ensure the safety of both physical and logical assets.  Included in this discussion are access control methods, network and system firewalls, remote access options, intrusion detection, and data encryption.  Looking at this from a layered perspective, outside to inside, there are Internet-, network-, system-, and data-level controls that can be applied.  From this it is apparent that the last line of defense is at the data level.  A number of controls are available to protect data from theft and corruption, but one of the most interesting is encryption.

In the electronic payments industry data encryption has been in place for some time, though not consistently across the industry.  Until the last decade it was not uncommon to store some transaction information in the clear, and even the transactions themselves were commonly sent over private networks unencrypted unless there was a regulatory reason to encrypt them.  Encryption of automated teller machine (ATM) and debit card transactions, for example, has been required for over twenty years.  As the use of public networks, mainly the Internet, has expanded, the need to encrypt data both at rest and in transit has increased.  As more data storage and processing moves to cloud-based solutions, the need for data encryption has also grown.

In a standard cloud-based data storage scenario the service provider supplies the encryption controls and also maintains the encryption keys for whatever method is used to perform the encryption.  A related concern is the encryption method the cloud service provider employs: this can easily be an open source solution, and the customer has no control over the provider's choice.  More recently, though, Bring-Your-Own-Encryption (BYOE) has been proposed as a security model, and it has the potential to change the way companies control data stored on cloud-based services.

Eduard Kovacs, writing on the BYOE method in a July 15, 2014 SecurityWeek article titled "Bring Your Own Encryption: Is it the Right Choice for Your Enterprise," notes that making use of BYOE allows companies to retain control of data encryption.  A company can define what encryption methods it wants to use and also retain control of the encryption keys.  Keys can be managed within the cloud service or retained in the company's data center, which lowers the possibility of compromise further.  The latter scenario provides greater security even if the cloud provider is breached, and it has the additional advantage of protecting the data should the provider be subject to a court order to provide access to all information in the provider's systems.
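To picture the BYOE split of responsibilities, the sketch below keeps a key-encrypting key (KEK) in the company's own data center, while the cloud provider stores only a wrapped per-record data key and the ciphertext, neither of which is useful without the KEK.  The cipher here is a toy SHA-256 counter-mode keystream standing in for a real algorithm such as AES; every name and value is hypothetical and this is an illustration, not a production design.

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream -- a stand-in for AES, illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

kek = os.urandom(32)                  # key-encrypting key: never leaves the company
data_key = os.urandom(32)             # per-record data key
wrapped_key = encrypt(kek, data_key)  # stored at the cloud provider
ciphertext = encrypt(data_key, b"cardholder record")  # stored at the cloud provider

# To read, the company unwraps the data key locally, then decrypts the record.
assert decrypt(decrypt(kek, wrapped_key), ciphertext) == b"cardholder record"
```

The design point is the same one Kovacs makes: even if everything at the provider is seized or breached, the attacker holds only wrapped keys and ciphertext.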

There are some drawbacks and disadvantages to the BYOE method.  The company making use of BYOE must take responsibility for implementing and managing the selected solution.  Key management must be simple and readily available so that requests from servers can be answered quickly.  Finally, Software as a Service (SaaS) offerings do not currently support this method, and this is a key point for the electronic payments industry.


As electronic payments move to cloud-based offerings, the SaaS model is the most likely path currently available.  Data encryption of customer records and transactions is of utmost concern when creating this type of product.  Although BYOE looks like a promising solution, for the electronic payments industry it is not quite ready for deployment.  It does appear, though, that this method should definitely be considered for other applications.

Tuesday, October 21, 2014

Week 9 - Risk Management Part 2

Last week discussed Risk Management from the perspective of identifying where risk is present in an organization by cataloging assets, their vulnerabilities, and the potential threats that could take advantage of those vulnerabilities.  Once these risks are identified they must then be controlled.  Five general categories of risk control are available: defense, transferal, mitigation, acceptance, and termination.  Most often an economic feasibility study or cost-benefit analysis is conducted to review the different control options available and decide on the best course of action.  However, the best economic choice might not always meet the needs of the organization, and it is possible other factors may affect the decision of what controls to put in place.  Once the financial considerations are reviewed it is also important to understand whether the controls selected, or even the controls available, will meet the organization's risk appetite.  The risk remaining after controls are applied is the residual risk, and this is where risk appetite is measured.  When performing risk analysis it is possible that specific measures will not be available.  In that case expert opinion and group consensus can be used to estimate values so that the process can move along, and future reviews, once feedback information is available, can be used to true up these estimates.  There are a number of possible risk management approaches that an organization can apply to define its risk management practices.  Regardless of the method used, monitoring and measurement must take place periodically to ensure the effectiveness of the controls used.  One question that comes up with gathering these metrics is how best to retain this information and make use of it.
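The cost-benefit analysis mentioned above is commonly expressed with a few standard formulas: single loss expectancy (SLE) = asset value × exposure factor, annualized loss expectancy (ALE) = SLE × annualized rate of occurrence, and CBA = ALE before the control − ALE after the control − annual cost of the safeguard.  A small sketch with hypothetical numbers:

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE: expected loss from one occurrence of the threat."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE: SLE scaled by the annualized rate of occurrence."""
    return sle * aro

def cost_benefit(ale_prior: float, ale_post: float, annual_safeguard_cost: float) -> float:
    """Positive result suggests the control is economically justified."""
    return ale_prior - ale_post - annual_safeguard_cost

# Hypothetical figures: a $500,000 asset losing 20% per incident, one incident
# every two years before the control, one every ten years after, and a control
# costing $15,000 per year.
sle = single_loss_expectancy(500_000, 0.20)        # 100,000
ale_prior = annualized_loss_expectancy(sle, 0.5)   # 50,000
ale_post = annualized_loss_expectancy(sle, 0.1)    # 10,000
print(cost_benefit(ale_prior, ale_post, 15_000))   # 25000.0 -> control is justified
```

As the entry notes, the best economic answer is only one input; residual risk still has to fit the organization's risk appetite.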

Marcus Ranum wrote an article in SecurityWeek on September 19, 2014 titled "True White-Knuckled Stories of Metrics in Action: The Faculty Systems" that describes how a university security manager used gathered metrics to convince the owners of a wide array of independent systems at the university to consolidate under centralized security configuration management.  He did this by presenting his metric information so that it was relevant to each situation being discussed.  The important point in this discussion was that the metric information was stored in an unconsolidated manner so that the security manager could tailor the information for each discussion.  Using this information the security manager was able to show that systems administered independently of the information technology department were twice as likely to be compromised and took twice as long to address a compromise once it was identified.  Based on this, a policy was put in place stating that any independent system that was compromised would be isolated from the university network until corrections were put in place.  This resulted in 75% of independent systems moving to information technology management in short order.  The suggestions given for security metric information are to keep it as fine-grained as possible, think ahead of time about the information being collected, and consider what data is available when a problem arises so that the best picture of what is occurring can be presented.  As Mr. Ranum and others have pointed out, storage space is inexpensive but analysis is not.  Using this method he was able to create a policy and implement a control to address the risk arising from systems running multiple configurations.
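The fine-grained storage Mr. Ranum recommends can be pictured with a small sketch: keep one unaggregated record per system and re-slice the records for each audience.  The records, field names, and numbers below are hypothetical, chosen to mirror the "twice as likely, twice as long" comparison from the article.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical fine-grained incident records: one row per system per year,
# stored unaggregated so they can be re-sliced for each discussion.
records = [
    {"system": "lab-01", "managed_by": "independent", "compromised": True,  "days_to_fix": 14},
    {"system": "lab-02", "managed_by": "independent", "compromised": True,  "days_to_fix": 10},
    {"system": "lab-03", "managed_by": "independent", "compromised": False, "days_to_fix": 0},
    {"system": "erp-01", "managed_by": "it-dept",     "compromised": True,  "days_to_fix": 6},
    {"system": "erp-02", "managed_by": "it-dept",     "compromised": False, "days_to_fix": 0},
    {"system": "erp-03", "managed_by": "it-dept",     "compromised": False, "days_to_fix": 0},
]

by_group = defaultdict(list)
for r in records:
    by_group[r["managed_by"]].append(r)

for group, rows in sorted(by_group.items()):
    rate = sum(r["compromised"] for r in rows) / len(rows)
    fixes = [r["days_to_fix"] for r in rows if r["compromised"]]
    print(group, f"compromise rate {rate:.0%}", f"mean days to fix {mean(fixes):.1f}")
```

Because aggregation happens at presentation time, the same raw rows can answer a budget question one day and a policy question the next, which is exactly the flexibility the article describes.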

Tuesday, October 14, 2014

Week 8 - Risk Management

Risk Management is a term often heard in the information technology industry, but many in the industry do not understand the process or what is involved in managing risk.  All of us are aware of threats to information security; there seems to be a story in the news almost daily about another organization's security being compromised and customer information being stolen.  Much of this is focused on the electronic payments industry, as there is a high likelihood that stolen financial account details can be used to steal funds.  There are a number of compliance standards for this industry, and pressure from both the industry and government to implement these.  But how much thought goes into preventing this type of activity beyond standards, and what is involved in protecting an organization's information?  This is where risk management comes in.  Every organization has some amount of risk involved in its operations.  The unique situation with information technology risks is that the threats against the organization can come from anywhere.  Once there is a public connection available to an organization's information technology system these risks are present.  To manage risks, a risk assessment is conducted to understand what assets exist in the organization, what the threats to the organization are, and what efforts can be made to mitigate those threats.  There must also be an understanding of the organization's risk tolerance, as there may be situations where the risk to an asset is higher than the organization is willing to accept, and consideration will be needed of the steps to reduce this risk to an acceptable level or to discontinue use of the asset.  The National Institute of Standards and Technology (NIST), a part of the U.S. Department of Commerce, provides a number of guidelines related to information security.
On the subject of risk management, NIST Special Publication 800-30 Revision 1, available here, provides detailed instructions, information, and templates for conducting a risk assessment.  The steps described for conducting a risk assessment include:

1. Identify threat sources – Threats are subdivided into adversarial and non-adversarial categories.  Potential threats are identified as part of risk assessment preparation.  It is important to exclude threats that are not applicable to the organization, as including them wastes resources that could be spent on true threats.
2. Identify potential threat events – Threat events look at specific threat situations.  In the recommendations from NIST there are three tiers of threat events: tier 1 events could affect the entire organization, tier 2 events can cross information system boundaries between different systems but do not necessarily affect the entire organization, and tier 3 events are targeted at specific environments, technologies, or systems.
3. Identify vulnerabilities and predisposing conditions – This step uses the information from steps 1 and 2 to determine what vulnerabilities exist in the organization’s information assets.  Since the complexity of information systems is ever increasing it may be difficult to perform this step for every asset an organization has.  In this case this step can be used to understand the general nature of vulnerabilities that the organization faces.  The existence of predisposing conditions that increase vulnerability is also considered to reduce the size of the vulnerabilities to be reviewed.
4. Determine likelihood – This looks at how likely it is that the threats identified will occur.  This must take into account threat sources that could launch an attack, the vulnerabilities to organization assets that have been identified, and how effective the organization’s countermeasures are at defending against the threat.  Worded differently, how likely is the threat to occur, how well can the organization defend against the threat, and what is the overall likelihood of both of these occurring.
5. Determine impact – Here a review of the adverse effects of threat events is made.  This includes consideration of the possible threat sources, the identified vulnerabilities, and how susceptible countermeasures are to the type of threat. To some extent this step considers worst case scenarios and how the organization will react to these.
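The last two steps can be pictured as combining a likelihood rating with an impact rating to get an overall risk level.  The qualitative scales below follow the spirit of NIST SP 800-30's five-level ratings, but the scoring and cutoffs are hypothetical simplifications for illustration; real assessments tailor the table to the organization.

```python
# Five qualitative levels, lowest to highest.
LEVELS = ["very low", "low", "moderate", "high", "very high"]

def risk_level(likelihood: str, impact: str) -> str:
    """Combine likelihood and impact ratings into an overall risk level."""
    # Risk rises with both factors; the cutoff scores here are illustrative.
    score = (LEVELS.index(likelihood) + 1) * (LEVELS.index(impact) + 1)
    if score >= 16:
        return "very high"
    if score >= 10:
        return "high"
    if score >= 5:
        return "moderate"
    if score >= 2:
        return "low"
    return "very low"

print(risk_level("high", "moderate"))       # high
print(risk_level("very low", "very high"))  # moderate
```

A matrix like this makes step 4 (likelihood) and step 5 (impact) directly comparable across threat events, which is what lets an organization rank where its risk appetite is exceeded.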

Many of the ideas included in the threat assessment can be tied back to the previous discussion on incident response, incident recovery, and disaster recovery.  This is a more in-depth look at what conditions could cause an incident to occur and the preparations the organization has made to deal with threats so they do not become incidents.  The information presented here is very high level.  The NIST SP 800-30 rev 1 document is 95 pages, so the items discussed just briefly touch on this subject matter.  In addition, the appendices in the document provide a substantial amount of detailed information for the subject areas.  An excellent source for threat information is the Verizon Data Breach Investigations Report available here.  The 2014 version of this report breaks out threats by industry with statistics on the number of attacks of each type reported.  This is a great source to use when considering the type of threats an organization could be facing.

Tuesday, October 7, 2014

Week 7 - Of models and practices

     The discussion area for this week is security management models and security management practices.  Security management models describe the rules of the road, giving guidelines for designing information security for an organization.  Many security management models begin as frameworks that are used to create detailed blueprints for implementation of information security.  The framework approach allows selection of relevant items to include in a blueprint, resulting in a customized information security model for an organization.  There are also generic blueprints available as security models from third-party vendors that allow an organization to get a head start on defining an information security model.  Once the security model is defined and a plan is made for implementing it, or perhaps better yet as the plan is made, consideration must be given to how the effectiveness of information security will be measured.  This is the point at which security management practices must be considered.  Measurements, or metrics, must be taken for key processes and activities to ensure that information security is being performed in a manner that benefits the organization more than it costs.  Before measurements mean anything there must be an understanding of what a measurement is expected to be.  This can be discovered through benchmarking, by reviewing what similar organizations, or an industry, expect a measurement to be, or through baselining, where the performance of a process is measured and becomes the standard used to compare future results.  Both of these methods have some challenges but are useful when starting from scratch.  Adjustments can be made as learning takes place over time; this is in fact a standard method used with metrics in other management areas, where a range is used to define success and over time the range is narrowed to encourage continuous improvement.
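The baselining idea described above can be sketched in a few lines: measure past performance, treat it as the standard, start with a wide acceptable range, and narrow the range over time.  The weekly counts and the two-standard-deviation band below are hypothetical.

```python
from statistics import mean, stdev

# Hypothetical baseline data: weekly counts of blocked intrusion attempts.
history = [120, 132, 118, 141, 127, 135, 124, 130]

baseline = mean(history)
band = 2 * stdev(history)  # start with a wide range; narrow it as learning takes place

def within_baseline(value: float) -> bool:
    """True if a new measurement falls inside the current acceptable range."""
    return abs(value - baseline) <= band

print(within_baseline(128), within_baseline(210))  # True False
```

A benchmarking approach would look the same except that `baseline` and `band` would come from industry figures rather than the organization's own history.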
     So, your organization has its security management model in place, security management practices are defined, and best practices, industry practices, and regulations are being followed; what happens next?  Lately the answer has been a security issue that comes out of the blue, one that no one has seen or considered previously.  The latest example is the so-called "Shellshock" bug found in the widely used Bourne-again shell (Bash) included with Unix and Linux operating systems.  These operating systems essentially run the Internet, as well as newer Apple Mac computers and other devices.  In the c|net article "Bigger than Heartbleed: Bash bug could leave IT systems in shellshock" on September 24, 2014, Claire Reilly discusses the ramifications of this new bug and how widespread the effects could be.  The most concerning aspect of this bug is that it has been in place for over 25 years, passing through numerous code reviews by both the original author and subsequent reviewers as changes have been made.  There is little any organization can do to combat this type of flaw regardless of the models or management practices put in place.  At best, rapid installation of patches to correct the problem is the only way to address it.  What other options do organizations have to protect themselves in this situation?
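The widely published test for Shellshock smuggles a command after a function definition inside an environment variable; a patched Bash refuses to execute the trailing code.  A small sketch that runs that test from Python, assuming a local bash may or may not be present:

```python
import shutil
import subprocess

def is_shellshock_vulnerable() -> bool:
    """Run the widely published Shellshock probe against the local bash, if any."""
    bash = shutil.which("bash")
    if bash is None:
        return False  # no bash on this system, so not exposed to this bug
    # The variable holds a function definition followed by smuggled code.
    env = {"testvar": "() { :;}; echo vulnerable"}
    result = subprocess.run(
        [bash, "-c", "echo this is a test"],
        env=env, capture_output=True, text=True,
    )
    # A patched bash never executes the smuggled echo, so "vulnerable"
    # does not appear on stdout.
    return "vulnerable" in result.stdout

print(is_shellshock_vulnerable())
```

A check like this is the kind of rapid verification that lets an organization confirm its emergency patches actually took effect.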
     A number of years ago several authors started a program that made payments to students who found errors in their textbooks.  The result was that the accuracy of the textbooks improved over time.  Google has implemented a similar program for its Chrome browser beginning in 2010.  Seth Rosenblatt, writing for c|net on September 30, 2014 in a story titled "Chrome bug hunters, Google's giving you a raise," highlights some of the successes of this program, stating that Google has paid out $1.25 million for the identification of over 700 bugs.  The maximum bounty for bugs has now been raised to $15,000 from $5,000, with the bottom of the range remaining at $500.  In the past Google has overridden the maximum for especially significant bug finds as well.  Is this something other companies should be considering, and would putting an army of bug finders to work reduce the surprise bugs such as Heartbleed and Shellshock that have already come up this year?  There have been incidents in the past where those locating bugs have attempted to blackmail companies into paying them for identifying these.  Perhaps a better arrangement would be to have representation, an agent of sorts who could contact companies and negotiate a reasonable payment for bugs located.  Looking back to the retail credit card problems that have come to light over the last year due to malware being loaded onto point-of-sale devices, would the companies affected have been better off if the attackers had been paid to identify these problems?  They surely had security models and security management in place, but to no avail.  Offering a commission to researchers is something to consider adding to the security landscape.