The means and the methods for using the Internet for business are constantly expanding—and so are challenges to protecting information assets. Throughout, this guide has examined fundamental issues—both business and technical—entailed in the use of the Internet. This chapter examines several emerging and dynamic areas of concern for Internet security:
Each of these entails threats to Internet use that can compromise business and organizational activity if not properly addressed. It is the goal of this chapter to provide a starting point for adapting to these emerging threats.
Smart phones, personal digital assistants (PDAs), and Blackberries are becoming commonplace in today's organizations due to their ability to help increase productivity and enable more effective communications. As with any new technology, the introduction of mobile devices brings both benefits and costs. The cost of mobile devices includes security risks that did not exist in businesses before their introduction. Some of the most important are:
Mobile devices radically improve accessibility to information, and many users would be hard-pressed to live without their mobile email access devices. These tools are here to stay. We simply need to ensure that they can be used in a way that protects the confidentiality, integrity, and availability of information assets.
Emerging technologies tend to leverage and expand on existing technologies, and mobile devices are no different. Although these devices provide fundamentally new types of functionality, they build on existing platforms. For example, mobile versions of Windows operating systems (OSs) are used in mobile devices. However, software designed for mobile devices may not have the same level of security features as its more mature counterparts developed for traditional client devices. Some of the most pronounced vulnerabilities of mobile devices include
These vulnerabilities can be countered, but first they must be understood.
The same threats and vulnerabilities that afflict wired networks often apply to wireless networks. In addition, wireless networks have several of their own to contend with. For example, unencrypted communications on wireless networks can be intercepted without physical connections to a wired network. Unauthorized wireless access points can be introduced, allowing for rogue access to the wired network; some steps that can be taken to reduce these risks include:
As Figure 4.1 shows, rogue wireless devices can gain access to a network as legitimate wireless devices if the access point can be compromised in some way. In addition to preventing such access, systems administrators must address OS vulnerabilities.
Figure 4.1: Wireless access points create the potential for both legitimate and rogue users if proper countermeasures are not deployed.
Theft is a key problem for mobile devices. By definition, these devices allow a user to keep the device with him or her and use it as needed. The great convenience has the accompanying downside of making the devices vulnerable to theft. Well-publicized laptop thefts are becoming too common:
In addition to improving the behaviors of mobile device users, encrypting data on these devices is essential to reducing the risk that a stolen device will yield useful information. Additional security measures, such as security tokens and biometric authentication mechanisms, can further mitigate the risk of data loss due to theft. The problem of mobile device theft makes it clear that more than technical solutions are required to protect information assets.
Too often, security is thought of as a purely technical problem: find the right encryption algorithm or patch a buffer overflow vulnerability, and your device or application will be secure. Technical characteristics certainly matter, a great deal, but they are not the only factors. Non-technical, human factors play critical roles as well.
Consider some disturbing results from several surveys about mobile device use and security:
Some of the challenges in mobile device security should be addressed by technical countermeasures, such as encryption and multi-factor authentication; others need to be addressed by well-defined and enforced policies, such as limiting the types of data that can be downloaded to mobile devices.
Mobile devices are now, not surprisingly, the target of malware developers. Examples of malware targeting mobile devices include:
In addition to spreading via Bluetooth and memory cards, some mobile malware can be deployed from rich clients. The Cardtrp worm can infect mobile devices during synchronization.
Clearly, the same level of measures used to protect stationary clients and servers from malware should be deployed to protect mobile devices as well. Although some of the threats to mobile devices are converging with their older counterparts in stationary devices, there are some unique characteristics of mobile devices that make them especially challenging to manage.
Many IT organizations standardize on a set of applications, OSs, and hardware configurations, either by design or by unintended consequence. Either the management evaluates a set of applications and related platforms and chooses the best option for the organization or their options are limited by resources and skills. Once an IT infrastructure is in place, it tends to stick. Not many organizations will throw out a platform in one radical change and replace it with another. Changes tend to be evolutionary and might require support for multiple platforms simultaneously.
Similar patterns occur with the use and deployment of mobile devices with one significant difference: it is often not the IT professional that introduces mobile devices into an organization—it is professionals from throughout the company or agency who bring their personal devices into the technology mix. This has a number of implications.
IT no longer dictates the platform used within the IT infrastructure. Some executives may have smartphones that run Symbian; others will have Windows Mobile devices. Many will use Blackberries for remote email access; others will use Palm OS PDAs for on-the-road email access. The grassroots introduction of mobile devices may be slow, but at some point, it may reach a critical mass and become part of the expected level of support.
IT is left with the responsibility for technology it may not have planned on supporting. Consider the problem of legal issues threatening to shut down the Blackberry network in the United States. Now imagine working for an organization with executives who had become dependent on constant email access, where you are expected to formulate a "Plan B" in the event Blackberry email access is not available. That is the kind of support challenge that can creep into an organization; it often starts slowly but then builds momentum. A de facto information service, whether it was planned or not, becomes the responsibility of IT departments.
Another issue is that mobile devices are often owned by employees. This raises a number of management, service, and liability questions, such as:
These are just some of the questions that arise when mixing personal and corporate resources. Regardless of how an organization might answer each of these questions, it is important to document policies governing the use of corporate information on personal devices. At the very least, policies should describe:
Mobile devices can improve productivity and enable staff to work more effectively in a wide range of circumstances. The benefits of mobility are difficult to deny. At the same time, mobile devices introduce management challenges as well as security vulnerabilities that must be understood and addressed. Organizations should formulate policies governing the use of both corporate and personal devices, train users on basic security principles, and compensate for vulnerabilities by introducing appropriate countermeasures. Mobile devices are also part of a broader trend in fundamental changes to the network perimeter and the effectiveness of perimeter security measures.
Network perimeters are the boundaries between internal and external networks. Typically, firewalls and boundary routers are used at perimeters to control the flow of traffic; the intention is to prevent unauthorized traffic from entering while keeping internal traffic safe behind a logical barrier. This simple model was fairly representative of organizational networks at one time, with some modifications for multiple layers, as in DMZ configurations. Two trends have emerged that make the perimeter both more complex and more porous than it has been.
In some ways, network perimeters have become more complex. In addition to firewalls and boundary routers, other devices are appearing near perimeters, including:
These devices provide additional levels of protection. For example, although firewalls are adept at determining when packets should be blocked or allowed through, they are not designed for other essential perimeter defense operations, such as blocking viruses and spam before they enter trusted zones of the network or email servers. As security threats have changed, so too have the countermeasures deployed at the perimeter.
Intrusion detection and intrusion prevention systems monitor networks and hosts in an effort to identify attacks. In the case of intrusion detection, the goal is to identify an attack and notify systems administrators who can then respond to the attack. Intrusion prevention, ideally, detects and stops attacks. Detection methods are based on heuristics, generally applicable rules, statistical profiles, or comparisons with known secure states.
Heuristic detection uses a library of attack patterns to determine whether an attack is underway. For example, if a large number of TCP connection requests are made and confirmations are not received after initiating the connection, a SYN Flood attack may be in progress.
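The SYN flood heuristic just described can be sketched in a few lines. This is a minimal illustration, not an IDS implementation: the packet-record format (source IP plus TCP flag) and the half-open threshold are assumptions made for the example.

```python
from collections import Counter

HALF_OPEN_THRESHOLD = 100  # illustrative threshold; real systems tune this

def detect_syn_flood(packets, threshold=HALF_OPEN_THRESHOLD):
    """Flag sources with many half-open connections.

    packets: iterable of (src_ip, tcp_flag) tuples, where tcp_flag is
    'SYN' for a connection request or 'ACK' for a completed handshake.
    """
    half_open = Counter()
    for src, flag in packets:
        if flag == "SYN":
            half_open[src] += 1          # handshake initiated
        elif flag == "ACK" and half_open[src] > 0:
            half_open[src] -= 1          # handshake completed
    # Sources whose requests go unconfirmed past the threshold are suspects
    return {src for src, n in half_open.items() if n >= threshold}
```

A source that completes its handshakes never accumulates half-open connections, so ordinary traffic passes through unflagged.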
Heuristic rules have the advantage of targeting known attacks and can be effective against them, but there are limitations. Variations in attacks may be missed and new attacks will go undetected until the base of rules is updated. Statistical approaches can compensate, at least to some degree, for these shortcomings.
Attacks against a host or a network usually represent something out of the ordinary for a system.
They may manifest themselves as an increase in incoming network traffic, as in a Denial of Service (DoS) attack, or an increase in outgoing traffic (for example, if a database has been compromised and a large amount of data is being stolen). In the case of an attack on a host, an unusually high CPU utilization during off hours may be indicative of a Trojan horse application stealing CPU cycles.
Watching for patterns that are out of the ordinary allows for a more customized approach to intrusion detection than found in heuristic methods. This approach, too, has limitations. One of the most difficult challenges with statistical profiling is defining what constitutes "normal" behavior for a system.
For example, normal behavior for a particular set of servers may not entail large volumes of data being copied from several servers to a single server. This could indicate an attacker stealing data and consolidating it before transferring it out of the network. However, it could also be the nightly load of a data warehouse. If the profile of "normal" behavior included variations over a 24-hour period, the data load may not appear to be anomalous behavior. But what about activities that occur every 2 or 3 days, every week, or every month? What is the appropriate window of time for defining the profile? If the window becomes too large, the range of possible "normal" behaviors becomes so large that attacks can appear normal; if the window is too narrow, normal behavior can trigger the detection of a supposed attack.
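The profiling idea can be illustrated with a simple baseline comparison: measure how far a new observation falls from the mean of a window of past "normal" measurements. The window contents and the z-score threshold here are assumptions for the sketch; real systems use far richer models.

```python
import statistics

def is_anomalous(baseline, observation, max_z=3.0):
    """Return True if observation lies more than max_z standard
    deviations from the mean of the baseline window."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return observation != mean   # perfectly uniform baseline
    z = abs(observation - mean) / stdev
    return z > max_z
```

With a baseline of nightly outbound transfer volumes around 100 MB, a 5,000 MB transfer would be flagged, while 102 MB would not. The window-size dilemma in the text remains: the choice of baseline determines what counts as anomalous.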
Known good profiles are signatures used to detect changes in files; these techniques are used for host intrusion detection. The basic idea is that for any known good file, such as an OS file just installed on a cleanly formatted drive, a signature, known as a message digest, is calculated. A message digest is a string that corresponds to the contents of a file; if the contents change, so does the digest. Message digests are like fingerprints: each is effectively unique and corresponds to a single entity, a person in the case of fingerprints, a file in the case of message digests.
The calculations used to create message digests have two very important properties. First, it is computationally infeasible to find two input strings (or files) that generate the same output string. Therefore, it is highly unlikely that someone can make a change to a file and still produce the same message digest as the original. Second, given a message digest, it is computationally infeasible to determine what input was used to create it. Thus, one cannot recover the contents of a file from its message digest.
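The digest calculation can be demonstrated with Python's standard hashlib module. SHA-256 is used here for the example; algorithms common when such integrity tools first appeared (MD5, SHA-1) are now considered weak and should be avoided.

```python
import hashlib

def file_digest(path):
    """Compute the SHA-256 message digest of a file, reading in chunks
    so that large files do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Recomputing the digest later and comparing it with the stored known-good value reveals whether the file has been altered: any change to the contents produces a different digest.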
For intrusion detection purposes, the following procedure is used:
No single method for intrusion detection and prevention will work well in all circumstances, but combining multiple techniques can help to mitigate the limitations of each method while benefiting from their strengths. Not all threats first manifest themselves by direct attacks on servers or networks. Some threats move as content in and out of networks.
Information moving in and out of an organization can present a variety of threats to security and efficiency. Consider, for example:
To minimize these threats, content should be filtered at the perimeter as well as at the host. Multiple layers of countermeasures, known as defense in depth, help to ensure some level of protection if one of the countermeasures is compromised or otherwise ineffective.
A challenge of effective content filtering is maintaining up-to-date information about malicious software, phishing attacks, and inappropriate Web sites. Ideally, a content-filtering system will utilize:
Content filters supplement the work of firewalls by examining traffic at a more aggregated level. They are able to identify threats that packet-level analysis alone cannot detect. The perimeter is also the point at which many VPNs establish an endpoint for incoming connections.
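As a toy illustration of filtering above the packet level, the following hypothetical filter inspects a reassembled URL and message body against blocklists. The domain and phrase lists are placeholders invented for the example; production filters rely on continuously updated vendor feeds rather than static lists.

```python
from urllib.parse import urlparse

# Illustrative blocklists; real deployments use vendor-maintained feeds
BLOCKED_DOMAINS = {"malware.example", "phish.example"}
BLOCKED_TERMS = {"confidential - internal only"}

def allow(url, body):
    """Return True if the content passes both domain and phrase checks."""
    host = urlparse(url).hostname or ""
    # Block listed domains and any of their subdomains
    if host in BLOCKED_DOMAINS or any(host.endswith("." + d) for d in BLOCKED_DOMAINS):
        return False
    # Block outbound content containing restricted phrases
    lowered = body.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)
```

Note that both checks operate on reassembled application-level content (a hostname, a message body), information that is not visible when examining individual packets in isolation.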
In addition to traditional content filters, authenticated email frameworks, such as the Sender Policy Framework (SPF), can help block emails that contain forged sender addresses. Within the SPF model, a sender can publish information about the servers it uses to send messages. This information is stored along with DNS information. Recipients of messages can use this information to verify that a message originated from one of the servers used by the purported sender. In addition to SPF, another email authentication mechanism is Sender ID, which is supported by Microsoft.
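The verification step can be sketched as follows. The function below handles only the ip4 mechanism of a published SPF record, and the record is supplied as a string rather than fetched via DNS; full SPF evaluation covers many more mechanisms (a, mx, include, redirect, and so on), so this is an illustration of the idea, not a compliant checker.

```python
import ipaddress

def spf_permits(spf_record, sender_ip):
    """Check whether sender_ip matches an ip4 mechanism in the record.

    spf_record: the TXT record string a domain publishes,
                e.g. "v=spf1 ip4:192.0.2.0/24 -all"
    """
    ip = ipaddress.ip_address(sender_ip)
    for term in spf_record.split():
        if term.startswith("ip4:"):
            network = ipaddress.ip_network(term[4:], strict=False)
            if ip in network:
                return True   # message came from a listed server
    return False              # fall through to the record's default (e.g. -all)
```

A receiving server would look up the purported sender's SPF record in DNS, then apply a check like this to the connecting server's IP address before accepting the message.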
VPNs logically extend a secured network by providing an encrypted information tunnel. VPNs extend beyond the perimeter to remote locations and allow remote users to access network services as if those users were within the perimeter. These secure tunnels use encryption to protect the confidentiality of information as well as message digests (described earlier) to protect the integrity of messages.
The perimeter of a network is an active area. It is no longer just a single level of defense based on packet-level analysis and simple blocking strategies. Content filtering, encryption and decryption at VPN endpoints, and monitoring for intrusions are all security services that make the perimeter a much more complicated place than it used to be. Ironically, although additional security measures are making perimeters more secure, the perimeters themselves are becoming more porous.
Figure 4.2: VPNs create the equivalent of an encrypted, electronic tunnel (dark lines) through the Internet for confidential communications.
In response to growing demands for access to information resources from a variety of sources, network administrators have carefully begun to open communications channels through the perimeter. A number of methods are commonly used that essentially permeate the perimeter, including:
Trusted users can access the corporate network through VPNs. This method provides the greatest access to network services but must be used judiciously. Client devices using VPNs are, in some important respects, treated as if they were physically on the network. Once packets reach the VPN termination point and are decrypted, there may be no additional security checks on the content. This can lead to problems. For example, an employee who connects to the VPN from a home computer may find that a worm that has infected his PC is now working its way into the corporate network.
Another way in which the boundaries of the network are becoming less well defined entails access rights granted to users outside the organization. For example, auditors, consultants, suppliers, and customers may all be granted access to databases, information portals, and other enterprise applications. This introduces challenges around identity management and authorizations. How will you know if an employee of your auditing firm has left the company and their access rights should be revoked? How do you control transmission of data from your managed devices to third-party devices? Extended access controls allow organizations to adapt to the particular needs of business partners and other stakeholders, but the risks should be understood and, as much as possible, mitigated.
Access to specific applications is becoming much more common, especially using Web interfaces. Traveling executives and sales representatives can retrieve email and check calendars from unmanaged devices such as hotel and airport kiosks. Customers can run form-driven database queries to check the status of accounts, orders, and other information. Like the other techniques that make the perimeter more porous, this one has its risks. For example, checking email or downloading documents to an unmanaged device may leave copies of that information in a browser cache accessible to others.
Figure 4.3: Users should be made aware of the need to clear buffer caches to prevent the disclosure of sensitive data. Browsers, such as Mozilla Firefox, allow users to remove several types of cached information.
The network perimeter is becoming more complex. In many ways, it is better protected with the use of intrusion protection and content-filtering devices. At the same time, drivers to improve service, provide more flexible access, and reduce costs are leading to a more porous perimeter. With much attention on protecting information assets from outside threats, it is sometimes easy to forget that threats can emerge from the inside as well.
Businesses have long tried to gather information about their competitors. Often, a great deal can be learned about a business by using publicly available sources, such as government filings, press releases, articles and speeches by corporate executives, and so on. These are all legal and legitimate sources of information. Unfortunately, not everyone plays by the rules and, as the value of intellectual property increases, such data is becoming more of a target for cyberattackers. To see evidence of the rise of intellectual property theft and economic cybercrime, consider recent changes by the U.S. Department of Justice, which according to Enterprise Government (http://governmententerprise.com) include:
Intellectual property theft can come in many forms, such as copycat goods; pirated software, music, and videos; and the unauthorized use of patented or trade secret techniques. Whenever intellectual property is embodied in a digital form and stored on devices connected to a network, and even in some cases when devices are physically isolated, it is at risk of theft.
Protecting information assets is not just about keeping the bad guys out; it is also about preventing the loss of confidential and proprietary information. An important point to remember when controlling information loss is that the thieves may come from inside as well as outside the organization. For example:
What can organizations do to protect themselves? Following the basic principles of risk management, defense in depth, patch management, and backup and recovery is important. Specifically:
Another challenge that systems and network administrators must contend with is the problem of zero-day threats.
Zero-day threats are attacks that are launched before software vendors and security researchers are aware of the vulnerability exploited by the attackers. By definition, there is no specific patch for the vulnerability and no targeted information about how to minimize the risk from the vulnerability. Zero-day threats are a popular topic in security discussions, perhaps driven in part by the speed with which new attacks are emerging and the seemingly never-ending stream of vulnerabilities disclosed by vendors and researchers.
Countering a zero-day threat is like planning for a natural disaster: you don't know when it will hit, how bad it will be, or what kind it will be, and you don't even know whether you will ever experience one. But you can certainly be prepared to minimize the effects. Some general principles apply to both situations.
First, take general precautions that can mitigate the risks of a variety of attacks. A basic backup and recovery plan will allow you to restore systems to a previous state of functionality regardless of what caused the loss of data or functionality to begin with.
There is always the risk that malware or logic bombs that destroy data may be present on backups as well. If the code that caused a disruption in service can be identified, backups can be checked and the code purged before returning systems to operation.
Second, implement a patch management program. Doing so might not prevent a zero-day attack but may mitigate the consequences. A common attack method is to deploy a blended threat that exploits multiple vulnerabilities; a well-patched system is less likely to be compromised by multiple attack vectors. Also remember that a non-zero-day attack can be just as damaging as a zero-day threat if a system is not patched. For example, Microsoft had released a patch for the vulnerability exploited by the SQL Slammer worm months before the attack that shut down large segments of the Internet in 2003. As a result, customers with effective patch management programs were largely unaffected by the worm.
In some cases, intrusion detection systems may recognize anomalous behavior associated with a zero-day attack. As attacks become more sophisticated, it is less likely that a single countermeasure will adequately protect an information infrastructure. Coordinating the use of multiple countermeasures will become increasingly important.
A variety of countermeasures are now deployed in enterprises that use defense-in-depth strategies. The basic tools for protecting information assets have come a long way from basic packet-filtering firewalls at the perimeter and antivirus software on the desktop. The fundamental countermeasures are now:
The countermeasures themselves are growing in complexity. Antivirus software, for example, used to depend on scanning content for signatures of known viruses; after virus developers created techniques to mutate their code as it replicated, antivirus developers had to deploy more elaborate behavior-based detection techniques. Complexity is also increasing because of how the countermeasures are used.
Countermeasures are more likely to be used in combinations to counter attacks, especially when diagnosing unusual events. For example, if an intrusion detection system detects a change in an OS file, the audit logs may contain related events around the same time that can provide further information about the change. That information might also lead to a review of access control audit logs indicating what identity was used to gain access to the system. As the example shows, information from one countermeasure might lead to another countermeasure's log, which, in turn, leads to another, and so on. Coordinating information from multiple countermeasures, especially in real time, is a current challenge for the information security industry.
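The cross-log correlation pattern just described can be sketched as a simple time-window merge: given an initial alert, gather events from the other countermeasures' logs that occurred around the same time. The log structures here are hypothetical; in practice this is the role of log management and SIEM products.

```python
from datetime import datetime, timedelta

def correlate(alert_time, logs, window_minutes=10):
    """Collect events from multiple logs near the time of an alert.

    logs: mapping of log-source name to a list of (timestamp, message)
    tuples; returns matching events sorted chronologically.
    """
    window = timedelta(minutes=window_minutes)
    related = []
    for source, events in logs.items():
        for ts, message in events:
            if abs(ts - alert_time) <= window:
                related.append((source, ts, message))
    return sorted(related, key=lambda event: event[1])
```

In the scenario from the text, an intrusion detection alert about a changed OS file would pull in nearby audit-log and access-control events, pointing the investigator toward the identity used to gain access.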
Content on corporate and government networks can be categorized by three types:
Organizations are adapting to the changing nature of content and the requirements of protecting information assets. The future of content protection will entail several factors:
This guide has examined the nature of protecting information assets on the Internet. The business case for protecting content-based assets is clear: compliance, human resource and workplace issues, and the threat of service disruption are substantial and immediate challenges facing organizations. The threats faced are evolving, often in response to protective measures. Malware is more complex, spam and phishing messages more stealthy, and the threats from inside the organization are also becoming clear. However, with proper policies and supporting procedures and the deployment of appropriate countermeasures, organizations can mitigate the risks they face every day. Of course, as the attack and disruption methods are constantly changing, organizations must constantly adapt to emerging trends in cyber threats. Fortunately, this can be done, especially as information security practices become more and more a part of normal operating procedures.