Information Security

An enterprise’s technology infrastructure must enable the enterprise to connect with customers, interact with suppliers and partners, enhance the productivity of its workforce, and optimize its back-office activities. In addition, this infrastructure requires security measures that protect information assets and ensure system availability. Today, organizations must manage their security devices to achieve both security of exclusion (keeping intruders out) and security of inclusion (allowing authorized users in).

■ Technology Infrastructure Security Components When planning for technology infrastructure security, organizations must ensure the operational sustainability and physical security of the infrastructure. Organizations also must consider the security requirements of the network and perimeter, platforms, and applications and their associated data stores. Additionally, as more organizations deploy Web services for internal and external integration projects, they also need to understand the unique security requirements for this emerging technology.


Infrastructure security depends upon the concept of operational sustainability; that is, using a consistent approach for the deployment of infrastructure components and ensuring sufficient capacity to establish a robust, scalable, and highly available infrastructure. Organizations can use performance and capacity monitors, resource utilization controls, and calculations of expected resource usage to help them establish adequate capacity. Mechanisms such as load balancers, fail-over devices, and network management systems help to maintain high availability. Operational sustainability also provides common supporting operations for backup, disaster recovery, and replication.


Physical security refers to controlling access to physical assets such as buildings, computers, and paper documents. These assets may contain critical information or provide access to networked resources. When implementing physical security, organizations should consider the appropriate level of security for and access to locations such as the site, buildings, computer rooms, and data centers—and how to monitor the facility. In addition, organizations should define methods for protecting removable media and offline data storage, and determine how to label and protect documents that must remain confidential.


Network security focuses on the technologies that compose the enterprise network, as well as the boundary of an enterprise’s technology infrastructure—where it connects to the Internet. Organizations also must consider the security of their wireless networks. By providing controlled access to network resources and facilitating low-level prevention and detection of attacks, organizations can enable access and protect assets in their computing networks.


Platform security focuses on the security of the underlying operating systems, such as client and server versions of Windows, UNIX, Linux, and OS/400. Security at this level provides detailed user access controls, permissions, and configuration options. By establishing restrictive access control lists (ACLs) and by defining security, auditing, and logging settings, organizations can enhance the detection and prevention of attacks on operating systems. In addition, reducing operating system functionality to the minimum required can help strengthen platforms. Other techniques to improve platform security include establishing login and usage parameters, defining a restrictive set of services and service configurations, restricting system access points, and defining lockout and password policies.


Data store security focuses on the security of databases, directories, and other enterprise information repositories. Organizations can enable the appropriate level of access and permissions to data stores by using authentication mechanisms; defining the users, applications, and network services allowed to access the data; and monitoring, auditing, and logging all activities regarding data access, update, and deletion. Data store security also provides mechanisms to protect the integrity, accuracy, and completeness of data.


Application security refers to security options, settings, and configurations for mainframe, traditional client/server, and Web applications. Security at this level enables people who have the appropriate level of user access and permission to use applications. Application security also includes securing the data store used by the application. Organizations are beginning to deploy Web services for application integration projects as well as using them as a mechanism for exposing limited application functionality to both internal and external users. Because Web services are a largely untested and still evolving standards-based technology, security issues related to Web services require special attention and a multifaceted approach.

■ Network Security Network security focuses on the technologies and the security surrounding those technologies that make up the enterprise network. A common approach to network security is to surround an insecure network with a defensive perimeter that controls access to the network. A perimeter defense is good as part of an overall defense. However, once past the perimeter, a user is left unconstrained and may cause intentional or accidental damage. The network is entirely vulnerable if a hostile party gains access to a system inside the perimeter or compromises a single authorized user. A combination of security tools, including firewalls, intrusion detection systems (IDSs), intrusion prevention systems (IPSs), and virtual private networks (VPNs), helps to ensure perimeter and network security.


Firewalls

A firewall controls which traffic may pass between networks. The actions the firewall takes are based on definitions established when the firewall is configured and are determined by the corporate security policy articulated in terms of users, resource access, and acceptable Internet usage. For example, if an organization decides that communications from particular IP addresses may not access the organization’s network, configuration of the firewall will ensure that traffic from that point of origin is denied.
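As a sketch of how such policy definitions drive firewall behavior, the following shows an ordered rule list evaluated first-match-wins, ending in a default deny. The addresses, ports, and rules are illustrative assumptions, not taken from any real policy:

```python
import ipaddress

# Ordered rule list derived from the corporate security policy.
# First matching rule wins; the final rule denies everything else.
RULES = [
    {"action": "deny",  "src": "203.0.113.0/24", "port": None},  # blocked origin
    {"action": "allow", "src": "0.0.0.0/0",      "port": 443},   # HTTPS from anywhere
    {"action": "deny",  "src": "0.0.0.0/0",      "port": None},  # default deny
]

def evaluate(src_ip: str, dst_port: int) -> str:
    """Return 'allow' or 'deny' for a connection attempt."""
    addr = ipaddress.ip_address(src_ip)
    for rule in RULES:
        if addr in ipaddress.ip_network(rule["src"]) and \
           (rule["port"] is None or rule["port"] == dst_port):
            return rule["action"]
    return "deny"  # fail closed if no rule matches
```

Ordering matters: placing the blocked-origin rule first ensures that traffic from that range is denied even on an otherwise permitted port.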

Firewall Deployment

Firewall implementation and deployment have now become a common practice as most companies and organizations develop these capabilities to varying degrees, depending on their needs. Significant changes in security technologies and a trend toward appliance devices (standalone, integrated hardware and software packages that have an embedded operating system) have resulted in the addition of several capabilities to firewalls. However, the fundamental principles in deploying a firewall remain the same, as discussed in the following sections.


A good firewall deployment always starts with a complete understanding of the assets (business processes, data, and intellectual property) that an organization needs to protect. Based on the threats and the corresponding risks involved, firewall protection must be commensurate with the risk level. Also, legal and regulatory requirements must be taken into account, especially in certain industry sectors such as financial services and health care. Typically, firewalls facing a public network such as the Internet would require the maximum level of protection, especially when protecting a large corporate network.

Security policy

The firewall policy should dictate the level of protection required and the detailed technical implementation, such as applications or protocols allowed or denied, source or destination IP addresses allowed or denied, level of authentication required, and level of monitoring and logging required. In large corporations, especially, the creation of firewall policies should include input and feedback from the business IT community, because critical business process design and implementation (such as e-commerce and business-to-business activities) rely upon ubiquitous network connectivity. Once established, firewall policies should be reviewed periodically (and as the need arises) and updated based on new threats and business requirements.

Firewall architecture

Based on the established risk profile and security policy, the firewall architecture should be developed with consideration of operational requirements, assets being protected, networks to which the protected assets will be exposed, performance expectations, anticipated growth, and interoperability with existing and anticipated applications. Given the range of firewalls and vendors in the market, the selection process must be rigorous and thorough to avoid future issues.

Operations and management

Without proper operational procedures that can detect, track, and thwart attacks, companies and organizations are at great risk. A sound operational policy will incorporate and appropriately implement (based on the risk levels) at least these minimum requirements: auditing and logging of events; real-time monitoring of events using appropriate network management software; backup and recovery; periodic integrity checks using integrity software (Tripwire, for example); change and configuration management procedures for performing firewall policy changes; and periodic management reports.
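The periodic integrity check mentioned above can be illustrated in miniature: record a baseline hash for each monitored item, then re-hash later and report what changed. This is only a sketch of the idea behind tools such as Tripwire; file contents are passed in directly here, whereas a real tool would read from disk and also track permissions and ownership:

```python
import hashlib

def snapshot(files: dict) -> dict:
    """Map each path to a SHA-256 digest of its contents (the baseline)."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in files.items()}

def changed(baseline: dict, current_files: dict) -> list:
    """Return the paths whose current hash differs from the baseline."""
    now = snapshot(current_files)
    return [path for path in baseline if now.get(path) != baseline[path]]
```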

Threats and vulnerabilities

New threats and vulnerabilities that affect systems and firewalls are discovered daily and can cause great damage, effectively shutting down an organization’s business processing environment. An organization’s firewall operations should include procedures by which newly discovered threats and vulnerabilities are vetted as needed and appropriate countermeasures (such as patches, hot fixes, and virus definitions) are applied immediately. Several organizations (most notably the Computer Emergency Response Team, or CERT) and vendors provide security intelligence services when new vulnerabilities are discovered.

Security incident response

An effective incident response procedure provides a road map for actions to be performed when real-time monitoring has determined that a potential firewall compromise is under way or has occurred. By determining the types of actions required for the levels of attacks and compromises, an organization can continue to perform key business functions, minimize damages, and defend against attacks effectively.

Managed firewall services

Although considerable stigma was once attached to outsourcing security functions, significant activity has occurred recently in the outsourcing market. This trend can be attributed to the growing acceptance of outsourcing in general, the economic situation in 2003, and the maturity of security outsourcing vendors. Firewall operations and management have become well-established, routine functions that companies outsource to qualified vendors who can take advantage of economies of scale for monitoring and defending against new and complex attacks. Before outsourcing, due consideration should be given to the qualifications of the vendors, assessment by independent third parties (for example, conducting an SAS 70 audit of the vendor’s data center), and the terms of contracts and service level agreements (SLAs).

Firewall Products and Technologies

Firewalls may be packaged as system software, hardware and software combined, or dedicated hardware appliances. Enterprise firewalls typically use a combination of hardware and software that provides high availability, high throughput, and mechanisms for centralized management of the firewall system. Centralized management becomes especially important when large enterprises use many firewalls in different physical locations. Firewall appliances simplify installation and management tasks and are well suited for organizations that have limited resources. Software firewalls often are used to protect single computers in smaller environments.

Firewall technologies work at different layers of the Open Systems Interconnection (OSI) reference model for the communications process. At the network layer, a firewall can restrict packet flow based on the protocol attributes. The packet’s source address, destination address, originating Transmission Control Protocol/User Datagram Protocol (TCP/UDP) port, destination port, and protocol type are used for these control decisions.
At the application layer, a firewall may participate in the communications between the source and destination applications, and the firewall would base its control decisions on the details of the conversation (for example, rejecting all conversations that discuss a particular topic or use a restricted keyword) and other available information such as previous connectivity or user identification. Most firewall products use a combination of techniques and technologies to provide a security mechanism. Common techniques include application-level proxies, packet-filtering gateways, and stateful inspection. Packet filtering works at the network layer (Layer 3) of the OSI model, where it can monitor inbound and outbound connections. Application-level proxies and stateful inspection technologies can be applied through the application layer (Layer 7). Stateful inspection monitors the status of communication flow, and application-level proxies examine the packet header and the application data in the packet. Attacks such as the Nimda and SQL Slammer worms are increasing the need for deeper inspection of application content.
Application-level proxies. Application-level proxies are software programs that relay traffic for a specified application or service such as Telnet, File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), Hypertext Transfer Protocol (HTTP), or the Network News Transport Protocol (NNTP). In this type of firewall, a client application outside the firewall (such as Telnet or FTP) communicates with the proxy server instead of directly with the protected application servers. The application-level proxy firewall can control the application communication, for example, by intercepting the message traffic and asking for strong authentication before allowing the conversation to continue. Because no direct network connectivity occurs between external networks and the protected server, the protected system is secured from network-level attacks such as the Ping of Death denial of service attack. The connection interception inherent in application-level proxies makes this firewall technique more secure than packet filters, but it can slow network performance. Also, each network application requires a specific application proxy. Because proxy development can be time-consuming and enterprise needs might be very specific, most commercial packages include only the main Internet services.

A circuit-level gateway is a special case of a proxy-based gateway where an application-specific proxy does not exist. In this case, the gateway does not perform any control functions at the application protocol layer, but rather passes traffic transparently for a given application. A circuit-level gateway typically is used as part of a gateway that performs application-level proxy functionality; it essentially bypasses the control functions of the gateway for a particular application that is deemed not to pose a security threat and for which no application-specific proxy exists.
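The application-level control described above (as opposed to the transparent pass-through of a circuit-level gateway) can be sketched as a policy check the proxy applies to each command before relaying it to the protected server. The command names are FTP's, but the allowed set here is an illustrative assumption:

```python
# The proxy sits between the client and the protected server: it accepts
# the client connection, inspects each application-level command, and
# relays only commands the policy permits (a read-only FTP subset here).
ALLOWED_COMMANDS = {"USER", "PASS", "LIST", "RETR"}

def relay_decision(command_line: str) -> str:
    """Decide whether the proxy forwards a command to the real server."""
    verb = command_line.strip().split(" ", 1)[0].upper()
    if verb in ALLOWED_COMMANDS:
        return "forward"
    # STOR, DELE, SITE, etc. never reach the protected server.
    return "reject"
```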
One of the more widely used circuit relays is SOCKS (for SOCKetS), a circuit-level protocol developed in 1990 and supported by the Internet Engineering Task Force (IETF). SOCKS provides a secured proxy data channel for internetwork traffic and can be used in conjunction with or as a standalone circuit-level firewall. SOCKS establishes a secure and authenticated channel regardless of the application or protocol requested. The client workstation must run software that enables it to negotiate with the SOCKS server; this requirement to install client software is one reason why circuit-level gateways are not widely used in enterprises.

Packet-filtering gateways

Unlike application-level proxies, which add security by monitoring and possibly altering the application-to-application communication, a packet-filtering gateway controls traffic at the network (IP) and transport (TCP) levels (Layer 3 and Layer 4). Packet-filtering gateways examine the source and destination addresses of data packets, source and destination service ports, packet types, and packet options. Packets received by these filtering gateways are permitted or denied based on an ACL—a mechanism used to implement the site’s security policy (for example, deny all inbound timeserver messages by blocking TCP port 525). (See Figure 24.) A packet-filtering firewall cannot prevent all network-level attacks against the protected server. For example, if an ACL-based rule is not set for a particular class of traffic, then the firewall will be ineffective against the undefined IP traffic. Moreover, a packet-filtering firewall does not examine higher-level information such as user authentication or activities inside the application conversation and thus can allow a malicious user to get inside the perimeter. However, the reduced per-packet processing means that a packet-filtering gateway usually outperforms an application-level firewall.
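A minimal sketch of the ACL lookup described above, including the pitfall that traffic not covered by any rule falls through to the gateway's default. Rule values are illustrative:

```python
# Each entry is (protocol, destination port, action); the first match wins.
ACL = [
    ("tcp", 525, "deny"),   # deny inbound timeserver traffic
    ("tcp", 80,  "allow"),  # permit inbound HTTP
]

def filter_packet(protocol: str, dst_port: int, default: str = "allow") -> str:
    """Apply the ACL to a packet's protocol attributes."""
    for proto, port, action in ACL:
        if proto == protocol and port == dst_port:
            return action
    # Undefined traffic falls through to the default: with default-allow,
    # the firewall is ineffective against traffic no rule describes.
    return default
```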

Stateful inspection

Stateful inspection is an advanced packet-filtering technique that operates between the data link layer (Layer 2) and the network layer (Layer 3) of the OSI model. Filters can be applied to allow inspection through all seven layers. Stateful inspection examines header information and compares the data to a state table created for each service. This table keeps track of inbound and outbound connections and the conversation’s state. If the user attempts an action that is not registered in the table or for which permissions were not granted, the user is rejected. This technique is more efficient than using a proxy for specific applications. Stateful inspection reassembles fragmented TCP and IP packets and examines them before permitting them through the firewall. This reassembly helps ensure that fragmented packets of legitimate connections are not dropped and that attacks hidden in fragmented packets are detected and blocked. Stateful inspection works well with complex protocols, can be easily extended to support new services, and works best in environments where the security technologies’ impact on network performance is a key consideration. To fully understand the context of the communication, however, a firewall using stateful inspection may need to let a packet or two pass through before the analysis is completed.
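The state table itself can be sketched as follows: outbound connections are recorded, and an inbound packet is admitted only if it matches a tracked conversation. This is a simplification; real implementations also track TCP flags, sequence numbers, and timeouts:

```python
# Table of conversations the inside has initiated, keyed by the 4-tuple
# we expect to see on the reply: (src, sport, dst, dport) of the inbound packet.
state_table = set()

def outbound(src: str, sport: int, dst: str, dport: int) -> None:
    """Record an inside-initiated connection in the state table."""
    state_table.add((dst, dport, src, sport))

def inbound_allowed(src: str, sport: int, dst: str, dport: int) -> bool:
    """Admit an inbound packet only if it belongs to a tracked conversation."""
    return (src, sport, dst, dport) in state_table
```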

Deep packet inspection

While stateful inspection analyzes the packet at various levels of the OSI model, it does not typically analyze packet data. The newer technique of deep packet inspection works like an IDS (discussed below) and analyzes a packet’s application payload (its content) and tries to identify and block malicious code. Deep packet inspection uses such techniques as signature inspection and content examination. Only a few products commercially available as of October 2003 include deep packet inspection, but more vendors are expected to include this functionality in the future.
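At its simplest, a deep-packet-inspection check reduces to scanning the application payload for known malicious byte patterns. The signatures below are illustrative, not taken from any product:

```python
# Byte signatures that should never appear in legitimate payloads for
# this service (illustrative examples only).
SIGNATURES = [
    b"cmd.exe",            # shell invocation attempt in an HTTP request
    b"\x90\x90\x90\x90",   # NOP sled fragment often preceding shellcode
]

def inspect_payload(payload: bytes) -> bool:
    """Return True if the payload matches a known attack signature."""
    return any(sig in payload for sig in SIGNATURES)
```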

Firewall Trends

Firewall products are developing greater integration—using a combination of technologies to provide more functionality, and facilitating interoperability of security components throughout the infrastructure. Many firewalls include VPN technology, where a secure tunnel is created through the external network via an encrypted connection between the firewalls to access the internal, protected network transparently. More firewalls are expected to incorporate the technologies used in current IDSs. Application-level firewall capabilities are becoming increasingly important as business operations demand greater access to information. Additionally, the increasing use of client-side, personal firewalls in enterprises is expected.

Application firewalls

Traditional firewalls inspect traffic at the network layer. Even the application-level proxy firewalls, which inspect traffic at the application layer, typically do not perform detailed application-level checking.
A recent spate of attacks at the application level has used the HTTP standard Web protocol, which can pass through a standard firewall as long as the protocol is allowed by policy. To defend against such attacks, organizations have increasingly used application firewalls, which inspect application-level packets and intercept application-level exploits. These firewalls provide an additional and strong layer of security, especially when protecting critical Web applications and Web hosting farms.

Integrated security appliances

As companies require enhanced connection speeds to the Internet and advanced security features, and because firewalls typically reside at vantage points on the network, firewall vendors are increasingly incorporating features such as intrusion detection and prevention into firewall appliances. Before deploying such devices, administrators should consider factors such as overall security posture and performance.

Hardware-based firewalls

In hardware-based firewalls, the firewall logic is embedded into the hardware (traditional firewalls embed the logic in the software). This hardware-based approach, which has become more prevalent during the past few years, helps improve firewall performance significantly.

Blurred traditional network boundaries

Business-to-business and e-commerce applications have caused a significant increase in network activity that originates outside the corporate network boundaries. To accommodate such requests and to provide appropriate levels of protection, companies must re-architect their firewalls and provide neutral zones between public and private networks (so-called demilitarized zones or DMZs). This blurring of network boundaries has also forced a trend toward enhanced authentication, authorization, and host-level security.

Personal firewalls

Personal or client-side firewalls have become a necessity as more people connect to the Internet using broadband technologies that have a persistent connection. Personal firewalls are typically a scaled-down version of traditional firewalls. Many companies that encourage employees to connect to company networks using VPN services over the Internet will also deploy policy enforcement software. In essence, if the end-user system does not meet an acceptable level of security as defined by the company, then access to the company network through the VPN is not permitted.

Wireless firewalls

Wireless networks, if not deployed with security controls, provide unauthorized users with an easy method of access to company networks, and this access cannot be easily detected or controlled. Firewall functionality that protects corporate networks from attacks via wireless networks can be found in several of the higher-end wireless routers that are used for corporate deployment. Firewalls also can be implemented as an additional layer of security between the corporate network and the wireless router.


An intrusion detection system (IDS) is a type of security management system for computers and networks. It gathers and analyzes information from various areas within a computer or network to identify possible security breaches, including intrusions (attacks from outside the organization) and misuse (policy violations within the organization)—preferably in real time.

IDS functions include:
• Monitoring and analyzing user and system activities

• Analyzing system configurations

• Assessing system and file integrity

• Recognizing typical patterns of attacks

• Analyzing abnormal activity patterns

• Tracking user policy violations

IDSs have had some overlap with the functionality of firewalls, but the two have been considered complementary services, because both technologies are necessary for the enforcement and constant monitoring of a comprehensive security strategy. Current trends indicate that firewalls will soon incorporate more IDS functionality, enhancing firewalls and helping to overcome the shortcomings of IDSs.

IDSs can be characterized by the following key attributes: audit source location, detection method, behavior on detection, and usage frequency.
Audit Source Location

The most common approach is to categorize IDSs on the basis of the audit source of the events being monitored. The two major sources of audit data are the hosts and the network. A host-based IDS relies on system logs and audit trails, whereas a network-based IDS samples the network event stream. Some IDSs are hybrid solutions, which encompass the desirable features and functionality of both host-based and network-based systems.
Host-based IDSs. These IDSs require the installation of software on the system to be monitored. The main source of information about user activity for a host-based IDS is the set of audit records from the computer system. Host-based IDSs may be deployed as independent packages on key systems or as distributed agent-based software with a separate central monitoring station. An application-based IDS sensor is host-based and uses an application’s transaction log files to analyze events within a software application. In selecting host-based IDSs, the following points should be considered:
• Audit records may not arrive in a timely fashion, because some IDSs use a separate computer to perform the analysis.

• The audit system itself may be vulnerable because intruders may turn off the audit system or modify the audit records to hide their intrusions.

• Audit records may not contain enough information to detect certain types of intrusions.

• Privacy implications and regulatory requirements may arise from the monitoring and mining of user data.

Network-based IDSs. These IDSs monitor the traffic on a network segment. The monitoring device sensor is usually a strategically located computer or device on the network, and it can see only the packets that are carried on the network segment to which it is attached. Packets are considered to be of interest if they match a signature. There are three primary types of signatures:
• String signature—A text string that indicates a possible attack.

• Port signature—A connection attempt to a well-known, frequently attacked port.

• Header condition signature—A dangerous or illogical combination in a packet header.
An intruder may be able to evade detection by cleverly fragmenting packets or through collaborative attacks. The need to inspect packets and their contents limits the effectiveness of network-based systems in VPN environments because they cannot detect an intrusion attempt perpetrated across an encrypted connection.
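The three signature types can be sketched against a simplified packet representation. The field names, signature values, and port list are illustrative assumptions:

```python
STRING_SIGS = [b"/etc/passwd"]   # string signature: text indicating an attack
ATTACK_PORTS = {23, 135, 445}    # port signature: frequently attacked ports

def header_suspicious(pkt: dict) -> bool:
    # Header condition signature: source equal to destination is illogical
    # (the classic "land" attack pattern).
    return pkt["src"] == pkt["dst"]

def match_packet(pkt: dict) -> bool:
    """Flag a packet that matches any of the three signature types."""
    if any(sig in pkt["payload"] for sig in STRING_SIGS):
        return True
    if pkt["dport"] in ATTACK_PORTS:
        return True
    return header_suspicious(pkt)
```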
Hybrid IDSs. Hybrid IDSs combine host- and network-based intrusion detection features that monitor network packets for IP spoofing and packet flooding attacks with log analysis products that run on hosts and monitor system logs. The hybrid systems detect attacks on both networks and systems. This type of integrated solution gives IT administrators a more comprehensive view of attacks across the enterprise, and the common interface and central monitoring console increases the likelihood of detecting attacks.

Detection Methods

In addition to the audit source classification, IDSs also can be classified by the detection methods used to identify anomalous activity: anomaly (or behavior-based) detection and misuse (or knowledge-based) detection. The detection method is the core of an IDS and describes the approach used to identify problems.
Anomaly detection. Often referred to as behavior-based detection, anomaly detection consists of establishing normal behavior profiles for users and system activity and observing significant deviations from the established normal patterns. Significant deviations are flagged as anomalous and should raise suspicion. Ideally, the daily use of a computer system by a given user follows a recognizable behavior pattern that can serve as a characterization of the user’s identity and expected system activity. Because a user’s normal activity profile may change over time as he or she learns new commands and gains greater familiarity, his or her established profile must also be updated continuously to accommodate the changes in behavior.
It is possible to consider application behavior profiles instead of (or in addition to) user behavior profiles. Application behavior profiles are more reliable in detecting anomalies, especially masquerading applications such as Trojan horses, because application programs generally have more predictable behaviors than user subjects.

Techniques used for anomaly detection include:

• Statistical analysis—Considers activity intensity, audit record distribution, and categorical and ordinal measures.

• Predictive pattern generation—Takes past events into account when analyzing data to predict future events.

• Neural networks—Detect subtle patterns in data sets such as the system logs.

These techniques can discover attempts to exploit new and unforeseen vulnerabilities. They are less dependent on operating-system–specific mechanisms and can also help detect abuse-of-privileges attacks that do not actually involve the exploitation of any security vulnerability.
A high false-alarm rate is generally cited as the main drawback of anomaly techniques because the entire scope of the behavior of an information system may not be covered during the learning phase. Another drawback is the difficulty associated with the choice of an appropriate threshold to flag anomalous behavior. Lower thresholds may result in an increase in false-alarm rates, while higher thresholds may miss anomalous activity. Behavior can also change over time, introducing the need for periodic online retraining of the behavior profile, resulting in either unavailability of the IDS or additional false alarms. During the retraining, the information system can undergo attacks at the same time that the IDS is learning the behavior, and, as a result, the behavior profile could be infused with intrusive behavior that would not be detected as anomalous when encountered later.
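The profile-and-threshold idea behind behavior-based detection can be sketched in a few lines of Python. This is an illustrative toy, not a production IDS: the single activity measure, the `BehaviorProfile` name, and the three-standard-deviation threshold are all assumptions chosen for brevity.

```python
class BehaviorProfile:
    """Toy statistical profile of one activity measure (e.g., commands
    per hour), maintained incrementally via Welford's algorithm so the
    profile can be retrained continuously as behavior changes."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def update(self, x: float) -> None:
        # Learning phase / continuous retraining.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x: float, threshold: float = 3.0) -> bool:
        # Flag observations more than `threshold` standard deviations
        # from the learned mean. A lower threshold raises the
        # false-alarm rate; a higher one misses anomalous activity.
        if self.n < 2:
            return False  # not enough history to judge
        std = (self.m2 / (self.n - 1)) ** 0.5
        if std == 0:
            return x != self.mean
        return abs(x - self.mean) / std > threshold


profile = BehaviorProfile()
for sample in [10, 12, 11, 9, 10, 11, 10, 12]:  # learning phase
    profile.update(sample)

assert not profile.is_anomalous(11)  # within the normal pattern
assert profile.is_anomalous(50)      # significant deviation; suspicious
```

Note how the retraining risk described above shows up here: if intrusive samples are fed to `update()`, they are folded into the profile and similar behavior will not be flagged later.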
Misuse detection. Also referred to as knowledge-based detection, misuse detection consists of searching the audit trails for the occurrence of specific patterns of audit records generated by known misuses of the system based on past intrusions, known vulnerabilities, and security policies. In misuse detection, the knowledge base is an encoding of the specific actions, as manifested in the audit trail, that constitute misuse of the system. The pattern of audit records representing the penetration scenario is also referred to as the intrusion or attack signature. Reliance on these attack signatures limits the detection to known vulnerabilities.
The techniques used for detection of misuse include:

• Expert systems—Automate the application of heuristics or rules of thumb.

• Signature analysis—Searches for particular patterns that indicate intrusion.

• Colored Petri nets—Model the system as states and transitions that can be analyzed for intrusion patterns.

• State transition analysis—Identifies particular sequences of actions.
Knowledge-based approaches have the potential for very low false-alarm rates, and the contextual analysis proposed by the IDS is detailed, making it easier for the incident response team to take immediate preventive or corrective action.
Drawbacks include the difficulty of gathering the required information on the known attacks and keeping it up-to-date with new vulnerabilities and environments. Maintenance of the knowledge base of the IDS requires careful analysis of each vulnerability and is a time-consuming task. Knowledge-based approaches also may make it difficult to generalize the intrusion detection capability. Knowledge about attacks is very focused and is dependent upon the operating system, version, platform, and application.

The resulting IDS is closely tied to a given environment. Detection of insider attacks involving an abuse of privileges is more difficult because no vulnerability is actually exploited by the attacker.
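A minimal sketch of knowledge-based detection in Python, assuming a toy audit trail of (user, action) records: the knowledge base encodes each attack signature as an ordered sequence of audit actions, and the detector searches each user's records for that sequence. The signature name and the audit actions are hypothetical.

```python
# Hypothetical knowledge base: each signature is the ordered sequence
# of audit actions that constitutes a known misuse of the system.
SIGNATURES = {
    "buffer-overflow-to-root": ["exploit_overflow", "spawn_root_shell"],
}

def detect_misuse(audit_trail):
    """Return (user, signature) alerts where the signature's actions
    appear in order (not necessarily contiguously) in a user's records.
    The core limitation discussed above applies: an attack absent from
    SIGNATURES is invisible to this detector."""
    by_user = {}
    for user, action in audit_trail:
        by_user.setdefault(user, []).append(action)
    alerts = []
    for sig_name, pattern in SIGNATURES.items():
        for user, actions in by_user.items():
            it = iter(actions)
            # `step in it` consumes the iterator, so this tests for an
            # in-order subsequence match against the audit records.
            if all(step in it for step in pattern):
                alerts.append((user, sig_name))
    return alerts


trail = [
    ("alice",   "login"),
    ("mallory", "login"),
    ("mallory", "exploit_overflow"),
    ("mallory", "spawn_root_shell"),
]
assert detect_misuse(trail) == [("mallory", "buffer-overflow-to-root")]
```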

Behavior on Detection

Several responses to an attack are possible, particularly if the attack is in progress when detected. The most common response is to log the incident and generate an alarm to allow the administrators to evaluate the need for action. This method is referred to as a passive response. Alternatively, an intelligent autonomous IDS can decide the next course of action on the basis of the level of certainty that an attack is genuine. In this case, a session may be terminated, or the system may shut itself down, thereby preventing any further breach. This type of reaction is referred to as an active response. Companies must consider the regulatory and legal ramifications regarding both types of detection.

Passive response.

Most IDSs are passive, meaning that when an attack is detected, an alarm is generated, but no countermeasure is actively applied to thwart the attack. The incident response team is notified of a possible attack, investigates the incident, and responds accordingly. Following the incident, the security team analyzes the attacked system to determine the vulnerability that may have been used by the intruder and the extent of the damage.

Active response.

Active response systems may be divided into two classes, according to the control they exercise. Some active response systems exercise control over the attacked system: that is, they modify the state of the attacked system to thwart or mitigate the effects of the attack. This control can take the form of terminating network connections or killing errant processes. The response can include increasing the security logging to facilitate after-the-fact incident analysis. Other active response systems exercise control over the attacking system: that is, they counterattack in an attempt to nullify the attacker’s platform of operation. One possible concern with active response is that an attacker could exploit the system to trigger DoS attacks. For example, by repeatedly triggering a target IDS to terminate a firewall, an attacker could reduce the availability of the firewall.
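The passive/active distinction can be summarized as a small decision sketch, assuming a hypothetical certainty score produced by the detector. The high default threshold reflects the DoS concern above: an attacker who can trigger false positives cheaply should not be able to trigger active countermeasures cheaply.

```python
def respond(alert_certainty: float, active_threshold: float = 0.9):
    """Choose a response for a detected attack.

    Passive steps (log and alarm) always happen; the incident response
    team investigates. The active step (terminating the session) is
    taken only when certainty is high, guarding against
    attacker-induced denial of service via false positives.
    """
    actions = ["log_incident", "raise_alarm"]   # passive response
    if alert_certainty >= active_threshold:
        actions.append("terminate_session")     # active response
    return actions


assert respond(0.5) == ["log_incident", "raise_alarm"]
assert "terminate_session" in respond(0.95)
```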

Usage Frequency

IDSs can be characterized by two types of usage frequency. Continuous monitoring IDSs attempt to detect intrusions in real or near-real time. Periodic monitoring IDSs process audit data with some delay.

Continuous monitoring

This class of IDS can be either host based or network based. Continuous monitoring of audit and system logs or network traffic makes it possible to detect an intrusion and signal its presence with the briefest possible delay. Thwarting an attack before damage is done is a desirable capability that is possible only with real-time IDSs; however, it requires that a well-defined process be in place for the review of data by security personnel.

Periodic monitoring

This class processes data in batches at regular intervals. These systems are useful for detecting attack trends or patterns, but they offer limited capabilities for timely notification of intrusions. Gathering data for analysis allows the detection of possible new attack signatures but requires advanced data management tools to process the volumes of information.
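The batching step of periodic monitoring can be sketched as follows; the interval length, event names, and timestamps are illustrative assumptions.

```python
from collections import Counter

def batch_by_interval(events, interval=3600):
    """Group (timestamp, event) audit records into fixed intervals for
    after-the-fact analysis, the core of periodic monitoring."""
    batches = {}
    for ts, event in events:
        batches.setdefault(ts // interval, []).append(event)
    return batches


events = [(10, "login_fail"), (20, "login_fail"),
          (3700, "login_fail"), (3800, "login_ok")]
batches = batch_by_interval(events)

# Trend analysis across batches, e.g. failed logins per hour; useful
# for spotting attack patterns, but with no timely notification.
trend = {k: Counter(v)["login_fail"] for k, v in batches.items()}
assert trend == {0: 2, 1: 1}
```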

IDS Functionality

The functionality of IDSs can be described by their accuracy, performance, completeness, security, and timeliness.


Accuracy

Inaccuracy occurs when an IDS flags a legitimate action as anomalous or intrusive (a false positive). The occurrence of false positives can be reduced by fine-tuning an IDS to distinguish attacks from legitimate actions and by updating attack signatures on a regular basis. In addition, an IDS must be capable of integrating with technologies such as vulnerability assessment or system penetration testing tools to prevent conflicts with the various systems’ normal operations.


Performance

The performance of an IDS is the rate at which audit events are processed. If the performance is poor, then real-time detection is not possible. Performance can be improved by fine-tuning IDSs to prioritize events and to focus on detecting the most likely attacks.


Completeness

Incompleteness occurs when the IDS fails to detect an attack, creating a false-negative result. This measure is very difficult to evaluate because it is impossible to have global knowledge of all attacks or abuses of privileges. Regular updates of attack signatures, and of statistical profiles as user or system behavior changes, can help minimize oversights.
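The accuracy and completeness measures can be made concrete by comparing an IDS's alerts against labeled ground truth for the same events, as in this sketch (the event identifiers are hypothetical; in practice, complete ground truth is exactly what is hard to obtain).

```python
def ids_quality(alerts, ground_truth):
    """Compare IDS alerts to known ground truth over the same events.

    `alerts` and `ground_truth` are sets of event ids the IDS flagged
    and that were actually intrusive. False positives measure
    (in)accuracy; false negatives measure (in)completeness.
    """
    false_positives = alerts - ground_truth   # legitimate actions flagged
    false_negatives = ground_truth - alerts   # attacks that were missed
    return len(false_positives), len(false_negatives)


fp, fn = ids_quality(alerts={1, 2, 5}, ground_truth={2, 3, 5})
assert (fp, fn) == (1, 1)   # event 1 wrongly flagged; event 3 missed
```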


Security

An IDS should be resistant to attacks, particularly malicious attempts to trigger a DoS attack against the IDS itself, and should be designed with this goal in mind. IDSs run on commercially available operating systems or hardware, which are known to be vulnerable to attacks. In addition, an IDS should function correctly under extreme conditions, such as a very high level of computing activity on the host system.


Timeliness

An IDS should perform and publish its analysis as quickly as possible before much damage occurs from an attack. The ability to readily identify intrusions may allow an IDS to disable services or prevent the attacker from subverting the audit source or the IDS itself. An IDS may also communicate requests to other devices, such as routers and firewalls, to block the source or terminate the intruder’s session. Additionally, the IDS may notify the incident response team to quickly react to an intrusion. Organizations must have well-defined processes in place for reviewing and acting upon IDS data.

IDS Deployment

The strategic deployment of IDSs should significantly enhance detection capabilities. However, understanding the limitations of an IDS is essential to a deployment strategy, which might consist of locating the intrusion detection sensors to protect the network resources where the most valuable assets reside, instead of trying to protect every resource on the network. Such a deployment decision is important because it requires ranking assets by value and priority before sensor placement.
Network architectures consisting primarily of switches can pose problems with respect to locations of IDSs due to the segregation of each switched segment and the difficulty of viewing event streams across all segments. Virtual local area networks (VLANs) can also limit accessibility to event streams. Locating IDSs at the switches can provide a means of viewing all traffic passing through the switch. However, encrypted sessions within the network may prevent an IDS from inspecting the contents of the session.

Although encrypted sessions may prevent unauthorized users from accessing data and systems, they also may be used by authorized users attempting to gain unauthorized access to sensitive data.
Administration and management of distributed IDSs must also be considered with respect to the cost of deploying an IDS and the security of the data and logs as they are transferred across the network, either to a centralized repository or to a management system. The integrity of the data and the availability of the IDS are key requirements in delivering an effective solution.
Local IDSs run on only one system and monitor only the audit trails of that system, which means that they cannot detect what is happening on the network or on other systems. To provide broader coverage, copies of the tools must be installed on many platforms. The cost and complexities of maintaining this type of solution may make it impractical within larger organizations that may benefit from a more centrally managed IDS using remote agents or sensors.
Both host- and network-based IDSs can be distributed across a LAN. Sensors may reside on various network points or on multiple hosts. There are three main issues to consider when deploying a distributed IDS.

Effectiveness. Distributed detection—taking results from multiple sensors deployed throughout the enterprise and correlating them into one coherent view—can be challenging. The effectiveness of the system at detecting intrusions in a multihost heterogeneous environment hinges on its ability to interpret data from different sensors, and possibly from different vendors. Proper configuration and strategic location of the sensors are key to delivering an effective service.

Scalability. The system’s ability to scale is closely tied to the management functions. Products that can be centrally managed and integrated with a central security management system result in reduced management overhead and scale more easily. Scalability is vital for the effective deployment of an IDS in most corporate networks.

Management. A centralized management facility allows an organization to monitor and administer local and remote sensors in a cost-effective manner. It also alleviates some of the problems of detecting intrusion patterns in a multihost environment, where attack evidence may span multiple audit trails that are collected at several hosts of possibly different architectures, operating systems, and auditing facilities. Addressing the intrusion detection problem in a multihost environment involves more than analysis of all security-relevant events produced locally by each monitored host. It also requires intelligent correlation of related event data from multiple hosts.
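The correlation task described above can be sketched with a toy central manager that merges feeds from multiple sensors and flags a source seen by several distinct sensors within a short window. The sensor names, addresses, window size, and two-sensor rule are all illustrative assumptions.

```python
from collections import defaultdict

def correlate(sensor_feeds, window=60, min_sensors=2):
    """Merge (timestamp, source) events from several sensors and flag
    sources observed by at least `min_sensors` distinct sensors within
    `window` seconds: evidence of one attack spanning multiple audit
    trails collected at different hosts."""
    by_source = defaultdict(list)           # source -> [(ts, sensor)]
    for sensor, events in sensor_feeds.items():
        for ts, source in events:
            by_source[source].append((ts, sensor))
    alerts = []
    for source, hits in by_source.items():
        hits.sort()
        for ts, _ in hits:
            sensors = {s for t, s in hits if ts <= t <= ts + window}
            if len(sensors) >= min_sensors:
                alerts.append(source)
                break
    return alerts


feeds = {
    "host-A":    [(100, "10.0.0.9")],
    "net-probe": [(130, "10.0.0.9"), (500, "10.0.0.7")],
}
assert correlate(feeds) == ["10.0.0.9"]   # seen by both sensors in 60 s
```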


■ Intrusion Prevention Systems Intrusion detection systems were designed to detect unauthorized access or misuse of computing resources. Now, newer security systems called intrusion prevention systems (IPSs) emphasize the prevention of an attack before the actual damage has occurred. IPSs protect against common threats such as worms, viruses, and Trojan horses; installation and activation of back doors; modifications to system files; changes in user privilege levels (especially acquisition of root, superuser, or administrator privileges); buffer overflow attacks; and attacks on scripts and file systems. Some IPSs, however, are IDSs that have been relabeled for marketing purposes.

Like IDSs, IPSs use anomaly and misuse detection to examine the behavior of an attack. By focusing on what an attack does, IPSs learn to recognize potential attacks. IPS implementations can be host based or network based.
In addition to anomaly detection, host-based IPSs might use sandbox and kernel-based techniques to provide protection. A sandbox is a restricted area where suspicious code can be quarantined. Here, the code can run but cannot access system resources. If the code’s behavior violates a predefined policy, it will be prevented from executing outside the sandbox. Mobile code such as ActiveX controls and Java applets is frequently quarantined upon entering a system.
Organizations can guard against execution of malicious system calls by loading a software agent between the user application and the operating system kernel. The agent intercepts system calls to the kernel and checks them against an ACL before allowing or blocking access to resources.
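The kernel-agent idea can be illustrated with a toy interception layer that checks each call against a per-application ACL before allowing it through. The application names, call names, and `PolicyViolation` exception are hypothetical; a real agent would sit between user space and the kernel, not in Python.

```python
# Hypothetical per-application ACL of permitted system calls.
ACL = {
    "wordproc": {"open", "read", "write"},
    "browser":  {"open", "read", "connect"},
}

class PolicyViolation(Exception):
    pass

def intercept(app: str, syscall: str) -> str:
    """Allow the call to reach the kernel only if the application's
    ACL permits it; otherwise block and raise."""
    if syscall not in ACL.get(app, set()):
        raise PolicyViolation(f"{app} attempted {syscall}")
    return "allowed"


assert intercept("browser", "connect") == "allowed"
try:
    intercept("wordproc", "exec")   # blocked: not in the word processor's ACL
    blocked = False
except PolicyViolation:
    blocked = True
assert blocked
```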
Network-based IPSs are inline network devices that use anomaly and misuse detection as well as stateful inspection to prevent network intrusion. These IPSs can examine packets, detect an attack, and drop the traffic, thereby protecting the destination device.
The need for intrusion protection will grow as organizations strive to enable access to systems while guarding against internal and external attacks. The development of robust IPS products, however, is still in the early stages.


■ Virtual Private Networks A virtual private network (VPN) is a private data network that makes use of the public communications infrastructure, maintaining confidentiality through the use of a tunneling protocol and associated security procedures such as encryption. Essentially, a VPN allows organizations to extend their network trust perimeter over public networks without sacrificing security. Using the Internet as a backbone, a VPN can securely and cost-effectively connect all of an organization’s offices, telecommuters, mobile users, customers, and business partners.
Typical corporate IT connectivity goals are to enable the use of the Internet:

• Between corporate facilities to reduce wide area network (WAN) charges, which are frequently large.

• For remote access to corporate network assets, allowing mobile users to avoid expensive long-distance charges.

• As a communication medium between different corporate entities.
However, because the Internet is a public network, some basic security requirements must be satisfied to protect the information transmitted by means of VPNs to ensure:

• Maintenance of data confidentiality across the public network

• Maintenance of data integrity

• Mutual authentication of the endpoints

• Availability of systems

VPN Applications

VPN technologies are generally deployed in three different types of applications.

Remote access. Remote-access VPNs let remote users gain access to the corporate network by dialing in to an Internet service provider (ISP) for access to the Internet. As Figure 27 shows, software on the user’s workstation negotiates an encrypted VPN session with the corporate VPN gateway. Encrypted data moves between the user and the VPN gateway over the Internet. Once connected and authenticated, users can access a variety of services, such as e-mail, internal Web sites, databases, and corporate applications. This approach provides several benefits to the enterprise. Large ISPs typically have points of presence in most metropolitan areas, and the call to the ISP is typically a local call, so long-distance charges are minimized. Also, the provision and management of the dial-in infrastructure (phone lines and modems) is the responsibility of the ISP rather than of the corporate IT organization.

Branch office.

The networks in various corporate locations can be connected using IP-based VPNs rather than using leased lines to construct a private data network or using a carrier’s frame relay network. In this case, an IP network service provider or ISP provides IP connectivity to all sites, and VPN gateways transparently encrypt the data transferred between sites.

Business partner/supplier.

An increasingly popular application for VPNs is the corporate extranet for sharing important business information with trusted business partners and corporate joint ventures. In a typical extranet application, an organization creates a discrete network segment that is separated by a firewall from the main corporate network. Located on this network segment are servers that host the data and applications being shared between the extranet participants. The extranet VPN is configured as a controlled-access, shared network that only the organization and its designated business partners can access. In addition to the network security, the server itself typically requires authenticated access, particularly if multiple business partners are accessing the same resources. For example, with a VPN, a supplier can have global online access to a manufacturer’s inventory plans and production schedules at all times. If a frame relay service or leased line is used, it may be more expensive, and the geographic reach may be limited. VPNs provide an alternative, cost-effective approach.

How VPNs Work

VPNs are based on familiar networking technology and protocols. Rather than linking to a network across a dedicated line, however, most VPNs create a tunnel through a shared network, such as the Internet. Tunneling wraps, or encapsulates, the entire original packet in a new packet with a new header; the original packet becomes the new packet’s payload.

Tunneling involves three types of protocols: The passenger protocol is the protocol being encapsulated; the encapsulating protocol is used to create, maintain, and tear down the tunnel; and the carrier protocol is the protocol used to carry the encapsulated protocol. The most widely used method of creating industry-standard VPN tunnels is to encapsulate network protocols (passenger protocols) inside an encapsulating protocol, and then send the entire package using the carrier protocol.
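The encapsulation step can be illustrated with a toy sketch: the whole passenger packet (header included) becomes the encrypted payload of a new carrier packet addressed between the tunnel gateways. The XOR "encryption," the gateway names, and the packet layout are stand-ins chosen purely for brevity, not real VPN cryptography.

```python
KEY = 0x5A  # toy key; real tunnels use strong ciphers negotiated per session

def xor(data: bytes) -> bytes:
    """Placeholder cipher: symmetric, so it both encrypts and decrypts."""
    return bytes(b ^ KEY for b in data)

def encapsulate(passenger_packet: bytes, gw_src: str, gw_dst: str) -> dict:
    """The new header names only the tunnel endpoints; the entire
    original packet becomes the new, encrypted payload."""
    return {"src": gw_src, "dst": gw_dst, "payload": xor(passenger_packet)}

def decapsulate(carrier: dict) -> bytes:
    return xor(carrier["payload"])


original = b"\x45\x00inner-ip-header+data"
tunneled = encapsulate(original, "gw1.example.com", "gw2.example.com")
assert tunneled["payload"] != original    # inner addresses hidden in transit
assert decapsulate(tunneled) == original  # restored at the far gateway
```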

Implementing VPNs

Application and other enterprise requirements will drive the choice of protocols, tunnels, and encryption, leading to selection of the VPN products most appropriate for an organization. Enterprises also must consider how the VPN is configured relative to the network’s firewalls, whether the application will require the use of public or private keys, and what its availability and performance requirements are.
Out-of-the-box VPN products offer a username/password combination or shared-secret method of authenticating the participants in a secure channel. These same VPN products also offer integration with enhanced security solutions such as public key infrastructure (PKI).

Protocol selection.

Protocol selection determines where the VPN operates within the seven-layer OSI reference model. Protocols working at different layers of the OSI model, such as Layer 2, are used as encapsulating protocols to provide VPN tunneling and encryption services. Some of the more common ones are Internet Protocol Security (IPSec), Point-to-Point Tunneling Protocol (PPTP), Layer 2 Tunneling Protocol (L2TP), and Multiprotocol Label Switching (MPLS).
The Secure Sockets Layer (SSL) protocol has recently emerged as another method for establishing VPNs. SSL is a protocol commonly used to enhance security by encrypting the messages traveling over public networks. SSL, developed by Netscape during the early 1990s, gained tremendous popularity because it could provide a secure method for transmitting Web-based information over the Internet. As a protocol, SSL has been widely used for several years to secure Internet communications and its popularity has fueled adoption for other uses, such as VPNs.

SSL-based VPN products use SSL as the underlying protocol for secure communications and these products have been gaining market share recently. The use of SSL eliminates the need to install VPN clients on end-user systems, which significantly reduces deployment, operational, and support costs. SSL-based VPNs use standard Web browsers, such as Netscape Navigator and Microsoft Internet Explorer, as their clients.
SSL-based VPNs are not expected to replace IPSec-based VPNs for all types of user communities, but SSL-based VPNs provide a viable option for some users. IPSec-based VPNs are advantageous because they can provide greater security and ubiquitous corporate network access (they are application agnostic), especially for user communities that need continuous uninterrupted access. However, for users who travel and need quick access to e-mail and corporate intranet portals as well as the ability to update or upload files, SSL-based VPNs may provide an easier, quicker, and economical solution. Factors to consider before deploying SSL-based VPNs include access control requirements, level of security required, level of support required for corporate applications, encryption, performance, and scalability. For some of these areas, IPSec may provide a better solution depending on the organization’s needs and requirements.
Configuration.

Within the network environment, configuration is an essential factor in establishing effective perimeter security. Because a VPN carries encrypted packets, the normal filtering and checking a firewall performs to enforce network access controls are bypassed. The firewall simply passes the VPN’s encrypted packets; it does not decrypt them.
Several configuration possibilities exist. If the VPN gateway is placed outside the firewall with a single connection to an external network, the VPN is subject to compromise, because an attacker may be able to access the network segment to which the VPN gateway is attached. When the VPN gateway is behind the firewall, the firewall can perform normal screening on non-VPN traffic and pass through encrypted VPN traffic. Combining VPN and firewall processing on one server is suitable only for very small installations. The combined load of cryptographic processing and firewall checking and logging may be too heavy for larger networks.
Another VPN configuration alternative is to locate the VPN gateway on a dedicated secure network segment with a second firewall between it and the rest of the corporate network. This configuration allows the additional firewall to filter all unencrypted traffic from the VPN gateway as it enters the corporate LAN. The VPN gateway also encrypts all traffic before it leaves the corporate firewall on its way to the Internet.

Key management.

A VPN may be used to grant access to sensitive corporate assets, so it is essential that the two endpoints in a VPN, whether two firewalls or a remote user and a corporate gateway, are mutually and strongly authenticated. Multiple authentication schemes are available. For example, IPSec-compliant VPN products can provide authentication via certified public keys or private keys.
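Mutual authentication with a shared secret can be sketched as an HMAC challenge-response exchange, assuming a key provisioned out of band; this illustrates the concept only and omits the key-exchange and replay protections a real IPSec implementation provides.

```python
import hashlib
import hmac
import os

# Assumed pre-shared secret, provisioned out of band to both endpoints.
SECRET = b"shared-secret-provisioned-out-of-band"

def prove(challenge: bytes) -> bytes:
    """Response proving possession of the secret for this challenge."""
    return hmac.new(SECRET, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(prove(challenge), response)


# Each side challenges the other, so the authentication is mutual.
challenge_a, challenge_b = os.urandom(16), os.urandom(16)
assert verify(challenge_a, prove(challenge_a))  # gateway authenticates peer
assert verify(challenge_b, prove(challenge_b))  # peer authenticates gateway
assert not verify(challenge_a, b"\x00" * 32)    # impostor without the key fails
```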

High availability and performance.

Enterprise-level solutions require a high level of availability and scalability. There are many ways to provide high availability, including redundant hardware, clustering, or dynamic load balancing. The challenge is to provide active session fail-over, so that the communicating VPN devices can synchronize all the information required to successfully recover a session and the cryptographic keys involved with the security association.
Because of the varying amount of traffic trying to use the VPN, and because encryption and decryption are computationally intensive operations, designing the VPN architecture and transferring existing services to a VPN environment requires careful consideration. Although processing power is constantly improving, Internet usage is growing rapidly, and the computational workload for VPN gateways may require moving some encryption and decryption tasks to special, dedicated encryption coprocessors. In addition, the increased speeds of network accesses and the increased need for simultaneous connections may make it difficult for security-aware network devices to perform the necessary processing.

VPN Trends

VPN technology has matured significantly and is widely adopted in corporations as an economical and viable remote access solution. IPSec has emerged as a clear choice for VPN technology. However, Layer-2 VPNs like those using the MPLS protocol are gaining acceptance. SSL-based VPNs promise lower operational support and deployment costs and provide a good solution for certain types of users.
As the VPN technology has matured, more organizations are deploying network management software that seeks to improve operational efficiency. This software helps to manage client and gateway software by updating software, distributing new access phone numbers, distributing new access and security policies, and so forth. Another trend is the emergence of policy enforcement software, which helps to enhance overall security by limiting VPN access to systems and devices that have enforced an adequate and predetermined level of security.

■ WLAN Security Many organizations use wireless connectivity as a complement to traditional, wired Ethernet-based networking techniques. Based on the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of specifications, wireless fidelity (Wi-Fi) technology is a simple and cost-effective means of building wireless local area networks (WLANs). From a security perspective, however, wireless raises many new challenges in addition to those associated with traditional networking. Broadcasting data traffic over the public airwaves opens up the possibility of eavesdropping in ways not possible with landline networks.
The risks are similar whether the WLAN is based on 11-Mbps 802.11b hardware or the newer, faster 802.11a and 802.11g standards. In most cases, the risk associated with WLANs does not outweigh the reward. Organizations should be able to enjoy the benefits of wireless with high confidence, provided network administrators observe appropriate precautions in designing the corporate WLAN and exercise diligence in monitoring its use.


The security of any WLAN can be compromised. In most metropolitan areas, the practice of “war driving”—literally, driving around neighborhoods armed with a wireless-enabled laptop in search of open access points—is commonplace. Software tools are available that can scan the airwaves for any Wi-Fi access points within range and return information about them, the most prominent being an application called NetStumbler. A corporate WLAN randomly discovered in this way might be of little interest to the original intruder. Once its existence is made public through online forums or other channels, however, the chances of a subsequent, more damaging attack increase dramatically.
Therefore, regardless of the perceived level of risk, architects of corporate WLANs should always assume that their network is vulnerable to attack. As a matter of policy, they should implement as many layers of countermeasures against potential intruders as possible, provided such measures will not unduly hamper legitimate use of the WLAN within the organization.

Default Settings

When installing wireless access points, administrators should change the hardware’s default factory configurations. Preconfigured values typically include the Service Set ID (SSID, a unique identifying name that clients need to know to access the network) and the administrative password used to reconfigure the access point. Savvy intruders will know the default values for all the major hardware vendors, and using minimal effort these intruders can access a poorly configured WLAN.
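A simple configuration audit can catch the factory-default problem before an intruder does. The default SSID and password lists in this sketch are illustrative, not an actual vendor database.

```python
# Illustrative (not exhaustive) lists of well-known factory defaults.
DEFAULT_SSIDS = {"default", "linksys", "tsunami", "wireless"}
DEFAULT_PASSWORDS = {"admin", "password", ""}

def audit_access_point(ssid: str, admin_password: str) -> list:
    """Flag access points still running with factory defaults."""
    findings = []
    if ssid.lower() in DEFAULT_SSIDS:
        findings.append("factory default SSID still in use")
    if admin_password.lower() in DEFAULT_PASSWORDS:
        findings.append("factory default admin password still in use")
    return findings


assert audit_access_point("linksys", "admin") == [
    "factory default SSID still in use",
    "factory default admin password still in use",
]
assert audit_access_point("corp-eng-3f", "S7r0ng!pass") == []
```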

Signal Range

Signal amplifiers and high-power after-market antennas have become popular accessories for ambitious Wi-Fi projects, but in many cases they are unnecessary and can extend the WLAN’s area of coverage beyond a prudent range. In traditional Ethernet networks, access is limited to those areas where hard-wired network ports have been installed. In wireless networks, access can be much more difficult to control, because coverage is likely to extend beyond an organization’s physically secure areas that house infrastructure components to lobbies, cafeterias, meeting rooms, and even outside of the building.

Periodic Surveys

Installation of wireless access points should never be a haphazard process, and potential unintended results should be considered. After the WLAN is installed, administrators should conduct regular, periodic surveys of the site using the same tools that potential attackers would employ (such as NetStumbler) to verify that all access points are configured properly and their areas of coverage are as expected.

Network Bridges

Network engineers also should exercise care when bridging traffic between the wired and wireless networks. “Dumb” bridges, those that indiscriminately forward all traffic between the two, are particularly to be avoided, because they effectively make packets intended solely for the wired network just as vulnerable as those on the WLAN. Instead, network architects should specify intelligent routers that forward only the traffic that is specifically intended for the wireless network.


No amount of diligence during the installation process can guarantee that a WLAN will escape unwanted scrutiny. Armed with a network’s SSID and access to the coverage area, an intruder can use packet-sniffing software to effectively monitor every packet of traffic that crosses the network. For this reason, wireless vendors have implemented several additional technologies designed to limit access only to authorized users.
Wired Equivalent Privacy (WEP)

The most prominent of these is Wired Equivalent Privacy (WEP), a protocol that is part of the 802.11b specification. WEP encrypts all traffic on the WLAN using unique keys assigned by the access point. Although first-generation hardware used only 40-bit encryption keys, today’s devices routinely use key lengths of 128 bits or more. Longer keys are more difficult to crack using brute-force methods.
Unfortunately, not long after the WEP specification was finalized, researchers at the University of California, Berkeley discovered a flaw in the implementation of WEP that makes it possible for an attacker to predict the sequence of keys generated by the access point, regardless of the key’s length. In August 2001, the Wireless Ethernet Compatibility Alliance (WECA) announced that it had discovered yet another weakness in the protocol’s design, this one even more significant than the first. Today, it is generally accepted throughout the industry that WEP is vulnerable to attack and cannot provide an effective guarantee of security.
This is not to say that WEP encryption is valueless. Although it can be compromised, doing so takes time, especially with longer key lengths. Thus, while it cannot guarantee security, WEP still represents one more obstacle to a potential intruder. Administrators are advised to enable WEP on their Wi-Fi networks in all cases, and use the longest key length available on their hardware.
Some Wi-Fi vendors have developed additional, proprietary security extensions to compensate for WEP’s shortcomings. Examples include the Cisco Wireless Security Suite, SMC EliteConnect, and 3Com Dynamic Security Link. Such vendor-specific extensions can be problematic, however, because they are not part of the published 802.11 specifications. One reason for the popularity of Wi-Fi is that it is an open standard; competing product offerings from a variety of manufacturers are based on the same specifications and can therefore interoperate without much trouble. Adopting proprietary extensions means that not only must an organization standardize on a single vendor for its Wi-Fi hardware, but it also must limit access to its WLAN by outside customers, vendors, or other partners who may not use the same equipment.

MAC Address Filtering

Many vendors offer Media Access Control (MAC) address filtering as a means of limiting WLAN access to pre-approved clients. Every piece of networking hardware—whether wired or wireless—is assigned a unique identifying number by its manufacturer, called a MAC address. The MAC address is hard-wired into the device and is not designed to be changed by the user. When MAC address filtering is enabled, an access point will accept only traffic broadcast by authorized transmitters based on their MAC addresses, as specified by the network administrator. Such a policy places restrictions on guests and other temporary users, and in larger organizations it can impose significant administrative overhead.
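As a rough illustration, the filtering decision an access point makes can be sketched as follows; the allow-list entries and function names are hypothetical, not real device configuration:

```python
# Hypothetical sketch of MAC address filtering as an access point might apply it.
ALLOWED_MACS = {
    "00:1a:2b:3c:4d:5e",
    "00:1a:2b:3c:4d:5f",
}

def normalize(mac: str) -> str:
    """Canonicalize a MAC address: lowercase, colon-separated."""
    return mac.strip().lower().replace("-", ":")

def accept_frame(source_mac: str) -> bool:
    """Accept traffic only from transmitters on the administrator's allow-list."""
    return normalize(source_mac) in ALLOWED_MACS
```

Note that this is an administrative convenience rather than strong security: the check trusts the MAC address the transmitter claims.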


Many enterprise environments have implemented an additional authentication layer, which forces users to manually log in with a username and password before allowing them access to the network. One method of implementing such a system is the Remote Authentication Dial-In User Service (RADIUS), currently the de facto standard for remote authentication. RADIUS has long been used in terminal servers to authenticate dial-in users and is now appearing in some vendors' Wi-Fi access points, including products from Avaya, Lucent, and Orinoco. Another, more complicated authentication standard is 802.1x, which supports a variety of authentication methods. 802.1x is implemented in Windows XP and by wireless access points from such vendors as Hewlett-Packard, Intel, and Linksys.

802.11i Standard

Stopgap measures like those mentioned earlier should not be necessary for much longer. The IEEE, together with the Wi-Fi Alliance (the industry consortium that promotes and certifies 802.11 products), is developing the 802.11i standard as a replacement for WEP-based wireless security. One component of 802.11i, a security scheme called Wi-Fi Protected Access (WPA), is already being implemented by major vendors in their 54Mbps 802.11g hardware. The other component of the new specification, called Robust Security Network (RSN), will be a transparent encryption layer for wireless network traffic. Unfortunately, it remains unclear when the full 802.11i specification will be finalized and available for hardware manufacturers to implement in their products. Until it is, widespread adoption even of WPA is likely to be a slow process.


In the meantime, using an additional layer of application-level security is advisable for all WLAN traffic. SSL has long been available for secure Web-based transactions and is supported by most browsers. Traffic for other applications can be secured by encapsulating it in encrypted “tunnels” using the Secure Shell (SSH) protocol. In either case, the encrypted packets will be unintelligible, even to intruders who have otherwise compromised the WLAN.
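As a sketch of this approach, the Python standard library's `ssl` module can wrap an ordinary socket in TLS (the successor to SSL), so that everything crossing the WLAN is ciphertext; the host name and request details here are illustrative:

```python
# Sketch: application-level encryption for traffic crossing an untrusted WLAN,
# using TLS. The host and path are placeholders, not a real endpoint.
import socket
import ssl

context = ssl.create_default_context()  # verifies server certificates by default

def fetch_page(host: str, path: str = "/") -> bytes:
    """Open a TLS-protected connection; packets on the WLAN are ciphertext."""
    with socket.create_connection((host, 443), timeout=10) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
            tls_sock.sendall(request.encode("ascii"))
            chunks = []
            while True:
                data = tls_sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
    return b"".join(chunks)
```

Even an intruder who has compromised WEP sees only the encrypted TLS stream, not the application payload.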


Even these methods can allow some traffic to escape unencrypted, however. Most security experts advise that the only way to truly secure all traffic on a WLAN is to employ a VPN solution. By separating the wireless network from other networks and forcing all traffic to and from it to pass through the VPN gateway, organizations can be reasonably certain that their WLANs are well protected against attack.
It seems probable that most organizations will implement some form of VPN for wireless connectivity in the near future, even after the final 802.11i specification is ratified. Already, enterprises routinely mandate VPN clients for remote users who access their networks over modem dial-up or public networks such as the Internet. By standardizing on a single VPN package for both wired and Wi-Fi–enabled machines, network administrators can simplify their security policies considerably.

■ Application and Data Store Security While security mechanisms at the network level provide a necessary foundation for a protected infrastructure, permissions and security mechanisms within applications can help protect the application. Additional measures should be taken, however, at the operating system and data store levels, because intruders can attack a system by going around the application.


Many applications, such as enterprise resource planning (ERP) software, include controls that help ensure security. ERP packages are used to manipulate business-critical data, including human resources information, financial transactions, and supply chain data. Permissions and other security functionality within ERP software are meant to ensure appropriate access to sensitive processing functions and data, and to enforce segregation of duties (SOD).
The authorization mechanism within ERP packages can be designed to permit or prohibit specific business transactions within the system. Each transaction has a defined set of objects that support finer-grained security definitions. For example, suppose a user needs the ability to execute sales reports. This type of report is identified within the system by a unique transaction code. That transaction code, for Display Sales Report, may be further restricted by sales organization. The result is that the user can display the sales report only within his or her own sales unit or department.
Each type of transaction is assigned to a role within the ERP system. This system role differs from a position or job within the business in that a given job may consist of many roles. For example, a construction superintendent is a job, but that individual may perform the roles of project scheduler and purchase order approver. Both of these roles within the system must contain the required transactions and authorization objects to perform the activities associated with that business function.
A potential risk is that a specific individual might be assigned a combination of system roles that present a SOD violation. The construction superintendent, for example, has the capability to approve purchase orders, so the security design would need to ensure that this individual is not assigned the roles of material requestor, goods receiver, or invoice processor. The organization must define an exhaustive list of SOD violations and stringent security maintenance and approval processes to mitigate this risk.
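The SOD check itself is straightforward once the violation list is defined. A minimal sketch, with illustrative role names and a deliberately tiny violation list rather than the exhaustive matrix the text calls for:

```python
# Sketch of a segregation-of-duties (SOD) check. Role names and the violation
# list are illustrative examples, not a complete SOD matrix.
SOD_VIOLATIONS = {
    frozenset({"purchase_order_approver", "material_requestor"}),
    frozenset({"purchase_order_approver", "goods_receiver"}),
    frozenset({"purchase_order_approver", "invoice_processor"}),
}

def sod_conflicts(assigned_roles):
    """Return the forbidden role combinations present in a user's assignment."""
    roles = set(assigned_roles)
    return [pair for pair in SOD_VIOLATIONS if pair <= roles]

# The construction superintendent may approve purchase orders and schedule
# projects, but must not also receive goods.
ok = sod_conflicts({"purchase_order_approver", "project_scheduler"})
bad = sod_conflicts({"purchase_order_approver", "goods_receiver"})
```

Running such a check whenever roles are assigned (and periodically over the full user base) operationalizes the maintenance and approval processes described above.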

Another issue may arise if the construction superintendent can approve purchase orders outside of the business unit. That authorization would allow this individual to make commitments for another organization. The user ID should be restricted only to the appropriate business unit(s) for which a specific individual is authorized to perform activities. Understanding and documenting the security requirements and an individual’s organizational affiliation will aid in managing this risk.
Although ERP applications offer security at the application level (such as roles), other underlying components require security measures to prevent users from circumventing application-level controls. The use of ERP applications implies use of the following components:

• One or more data stores that store application data and user information.

• One or more operating systems to support the ERP package and associated data store(s).

• A network infrastructure to connect those components to each other, as well as to the end user community.
Securing ERP applications requires securing all of those components. If appropriate security and controls are not in place for all access points to the database, the integrity, availability, and confidentiality of the database may be compromised.

For example, an accounts payable clerk may be assigned the appropriate roles and responsibilities in the SAP application, but might somehow have obtained the ability to log in directly to the Oracle database with write permissions to all database fields. Technically, this ability provides an avenue for the accounts payable clerk to change data outside of his or her assigned role, and outside of the application controls. The clerk hypothetically could add himself or herself as payee, and create transactions to generate payment to himself or herself.

In addition to application controls, general computer controls are also used to provide application and data store security. These controls protect application code and operating system files, control database connections and access, enable more complex forms of security (such as data encryption and single sign-on), and ensure the recoverability of corporate data.


Representative general computer control categories for operating system and data store security include the following:
User administration. Processes for administering user access should be documented and address the addition, modification, and revocation of user access, as well as the necessary approval required for access. A common oversight occurs when users change responsibilities. Often, users are granted access for their new responsibilities, but access they no longer require is not removed. Administrators should periodically review user access to ensure that it is commensurate with job responsibilities. The following are general guidelines.


All user accounts should require a password that has an established minimum length. A blank password or a password shorter than the minimum length increases the risk of unauthorized access to the system. For new user accounts, administrators should set unique passwords that expire at first login and ensure that passwords are distributed securely. Other users could log in as a newly created user and immediately begin to execute transactions if they know the default passwords given at account creation.
Setting passwords for all users to expire periodically, such as every 60 to 90 days, helps reduce the risk that someone who has gained unauthorized access to the system by compromising a password will have continued access to the system. The passwords for default accounts should be changed from their default values. Otherwise, the risk increases that an unauthorized person will gain access to the system.
The system administrator should not create guest accounts or temporary accounts, because such accounts increase the risk of unauthorized users gaining access to the system and subsequently affecting the system’s integrity and confidentiality. For cases when operating systems create accounts by default and may not allow them to be deleted (such as the Windows guest account), these accounts should be disabled. A common method for obtaining unauthorized system access is trying to log in using default or known accounts.
A security policy should set a threshold of invalid login attempts allowed and lock accounts once this number of invalid attempts has been reached. Accounts should remain locked until an administrator unlocks them. A potential intruder will either manually or automatically attempt to log in to accounts with multiple passwords until successful. By enabling account lockouts after three to five attempts, the risk of unauthorized user access decreases.
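The expiry and lockout guidelines above can be sketched as simple account-state logic; the class and method names are hypothetical, and the thresholds are taken from the ranges the text suggests:

```python
# Sketch of password-expiry and account-lockout checks. Thresholds follow the
# guidance above: expiry every 60-90 days, lockout after three to five failures.
from datetime import datetime, timedelta

MAX_PASSWORD_AGE = timedelta(days=90)
LOCKOUT_THRESHOLD = 5

class Account:
    def __init__(self, password_set_on: datetime):
        self.password_set_on = password_set_on
        self.failed_attempts = 0
        self.locked = False

    def password_expired(self, now: datetime) -> bool:
        return now - self.password_set_on > MAX_PASSWORD_AGE

    def record_failed_login(self) -> None:
        self.failed_attempts += 1
        if self.failed_attempts >= LOCKOUT_THRESHOLD:
            self.locked = True  # stays locked until an administrator unlocks it

    def admin_unlock(self) -> None:
        self.failed_attempts = 0
        self.locked = False
```

The key design point is that the lock is not self-clearing: only the administrator's explicit action resets it, which is what defeats automated password-guessing.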

User groups. Users who require access to the data store files and utilities should be granted access through the use of operating system–level groups defined by role and responsibility requirements. For example, all database administrators should be members of a database administrator (DBA) group that has permission to stop and restart the database service on the server. Appropriate use of user accounts at the operating system level prevents users from obtaining access that is not commensurate with their roles and responsibilities, and thus compromising the integrity, availability, and confidentiality of the database.
Each data store system contains multiple objects (data tables, for example), each of which has certain access rights associated with it. System and object privileges should be assigned to roles rather than to individual accounts. Assigning privileges directly to accounts creates a burdensome and confusing administration process and increases the risk that an account will have unauthorized privileges.
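The role-indirection pattern can be sketched as two small mappings; the role, account, and object names are invented for illustration:

```python
# Sketch: grant object privileges to roles, then roles to accounts, so access
# review and revocation are role-level operations. All names are illustrative.
ROLE_PRIVILEGES = {
    "dba_role": {("payments", "SELECT"), ("payments", "UPDATE")},
    "report_reader": {("payments", "SELECT")},
}
ACCOUNT_ROLES = {
    "alice": {"dba_role"},
    "bob": {"report_reader"},
}

def has_privilege(account: str, obj: str, action: str) -> bool:
    """An account holds a privilege only through one of its roles."""
    return any((obj, action) in ROLE_PRIVILEGES.get(role, set())
               for role in ACCOUNT_ROLES.get(account, set()))
```

Because no privilege is attached directly to an account, auditing "who can update payments" reduces to inspecting two small tables instead of every individual grant.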
Restricting access to administrative and other privileged IDs, such as root or administrator, reduces the risk of accidental and unauthorized changes to the system.
Periodic review. Administrators also should implement a procedure to review access periodically to identify inactive accounts and to determine whether access granted is appropriate to job responsibilities. Inappropriate system-level access increases the risk of unauthorized access, accidental error, or fraud.

Change Control

Management-endorsed, formal change control procedures should be developed and implemented to restrict operating system, database, and application changes. Formal change control procedures are critical in managing changes to any data store and its infrastructure. Without a change control process, changes made to the operating system, database, or application might not be performed according to internal control standards. For example, if change control procedures are not followed, changes could be made that:

• Do not align with business needs.

• Have not been fully tested and subjected to quality assurance.

• May have an impact on other production systems.

Documentation of change control processes should be thorough and include a method of tracking change requests and change request approvals. The procedures also should have mechanisms for logging and prioritizing changes, ensuring adequate segregation of duties, testing changes, gaining management approval, handling back-out procedures, and reporting to management.


Backup Procedures

All business-critical data must be backed up on a regular basis, including operating system and database system files. Without a documented, approved, and tested backup and recovery strategy, an organization might not be able to successfully recover data within an adequate time frame if a disaster occurs. Organizations should take full operating system backups of the data store at regular intervals and determine an appropriate period of time to store them off-site. Backup media must be stored in a secure and safe location. If an organization does not have adequate off-site data storage facilities, a disaster at an organization's location could cause data loss and impede, or possibly halt, operations for an indeterminate amount of time.

Disaster Recovery and Business Continuity Procedures

Organizations should develop and document emergency procedures and test them annually. Inadequate emergency response procedures may lead to a failure to recover systems effectively and efficiently. Further, confusion could emerge about responsibilities and tasks to be performed.

At the data store level, inadequate emergency access procedures could compromise the integrity of the database and lengthen recovery time, or prevent a full understanding of the impact of the emergency. At the operating system level, appropriate procedures should be in place to ensure that operational failures (disk drive problems, for example) are identified, resolved in a timely manner, and, where appropriate, corrective actions are approved retrospectively by appropriate IT staff and users.


Each individual data store package has its own security configurations and settings to consider. However, for any data store, the following general computer controls should be reviewed.
Database Logins

All users, including DBAs, should log in to the system with a unique ID assigned only to them. Privileged users have full rights to the database. If users are allowed to log in using shared accounts (such as root or oracle), the security of the database could be compromised. Accountability for actions taken within the system can be achieved only by assigning all users their own unique account and requiring that before they use a privileged account, they first log in using their personal ID. This practice provides the ability to audit the use of privileged accounts (assuming that adequate logging is in place). The practice of sharing accounts or using the same account name for different individuals decreases the level of accountability when irregular, unauthorized, and inappropriate system actions have been taken.

Access to Security Files

Only authorized users such as the DBA and system administrators should have access to security-related data store files (init.ora and control files, for example) within the database. Restricting access to authorized personnel will minimize inappropriate changes to the files and subsequent database corruption.

Session Settings

Administrators should not allow users to open multiple sessions. Concurrent logins allow the same account to be used in more than one place at the same time, so both authorized and unauthorized use could occur simultaneously without notice.
Users should have an idle time setting of 10 to 30 minutes depending on the nature of the account. Allowing accounts to remain logged in when idle for excessive periods of time increases the risk that an unauthorized user will gain access to the system. For example, if a manager with the authority to approve transactions logs in to a session and leaves for the day without logging off the system, a user with fewer privileges could use that open session to approve unauthorized transactions.
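An idle-timeout check of this kind is a small piece of session state; a minimal sketch, with a 15-minute limit chosen from inside the 10-to-30-minute range suggested above:

```python
# Sketch of an idle-session timeout. The 15-minute limit is one point inside
# the 10-30 minute range recommended in the text.
import time

IDLE_LIMIT_SECONDS = 15 * 60

class Session:
    def __init__(self) -> None:
        self.last_activity = time.monotonic()

    def touch(self) -> None:
        """Call on every user action to reset the idle clock."""
        self.last_activity = time.monotonic()

    def is_expired(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        return now - self.last_activity > IDLE_LIMIT_SECONDS
```

A session that expires should be terminated server-side, so the manager's abandoned login in the example above could not be reused by another user at the same workstation.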

Database Shutdowns and Restarts

In environments where the database is required to be highly available, ensuring that the database is rarely, if ever, shut down and restarted is crucial. Unexpected database shutdowns and restarts may indicate a potential problem on either a physical level (hardware capacity or power outages, for example) or a logical level (users are making unauthorized shutdowns). The database administrator should review the database audit log file or table and investigate any unplanned database shutdowns. Data store instances should be shut down and subsequently restarted only by authorized personnel at scheduled times when users can be properly notified. Unplanned or unscheduled database shutdowns will deny users access to the system.

Logging and Monitoring

Auditing is critical to the confidentiality, integrity, and availability of data. If auditing is not enabled, accidental or malicious attempts to alter data could go undetected. Data store activities should be logged and monitored for any changes to business-critical data. Auditing can be accomplished using built-in tools that may be used with a log analysis tool if desired. Events such as login failures, changes in permissions, changes in critical data, code changes, and privileged ID usage should be logged and monitored. If important events in the database are not recorded, the risk increases that unauthorized system actions (such as deleting or modifying sensitive data) or access attempts may not be identified and resolved in a timely manner. Administrators should back up audit logs as needed. Auditing can be useful for gathering historical data for particular database activities. Administrators also should mirror log files, because multiple copies of log files reduce the risk that data will be lost if a system disaster occurs.
The DBA should monitor the available disk space allocated to the database. The continued functioning of all database applications requires that enough database resources are available to continue to record application transactions and system-generated log records. A potentially large risk is that once the database (or even the archive log) runs out of disk space, it will stop responding and all processes will halt. The percentage of available disk space may vary for each product. Refer to vendor recommendations, and determine whether a higher than recommended percentage is more appropriate.
In addition to auditing the events previously mentioned, the activities of privileged IDs and actions against certain critical data also should be audited (such as auditing activities completed by the oracle ID, or changes to financial transactions). Auditing the actions of users with system administrator privileges reduces the likelihood of inappropriate use. Although an organization relies greatly on the DBA, an audit trail that documents specific actions (such as deleting transactions from certain tables) should be created for the DBA user. Such an audit trail is also important when the DBA function is outsourced but the system is on-site, because the DBA effectively has complete access to all of the business records.
Organizations should audit all sensitive objects for changes made by SQL statements such as INSERT, DELETE, and UPDATE. Triggers should be used to audit changes to critical data. For example, an in-house script may be developed to write to a log file any time an UPDATE is run against a critical table. The log should include the name of the command, the original values in the table, the modified values in the table, the ID submitting the change, and a timestamp for the change. The nature of these triggers will depend on the organization and the data that resides within the database.
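A minimal illustration of trigger-based auditing, here using SQLite through Python's `sqlite3` module so the example is self-contained; the table and column names are invented, and a production trigger would also capture the submitting ID:

```python
# Sketch: an UPDATE trigger that records old and new values in an audit table.
# Table and column names are illustrative, not a real ERP schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE payments (id INTEGER PRIMARY KEY, payee TEXT, amount REAL);
CREATE TABLE audit_log (
    event      TEXT,
    old_payee  TEXT, new_payee  TEXT,
    old_amount REAL, new_amount REAL,
    changed_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TRIGGER audit_payment_update AFTER UPDATE ON payments
BEGIN
    INSERT INTO audit_log (event, old_payee, new_payee, old_amount, new_amount)
    VALUES ('UPDATE', OLD.payee, NEW.payee, OLD.amount, NEW.amount);
END;
""")

conn.execute("INSERT INTO payments (payee, amount) VALUES ('Acme Corp', 100.0)")
conn.execute("UPDATE payments SET amount = 250.0 WHERE id = 1")
row = conn.execute("SELECT event, old_amount, new_amount FROM audit_log").fetchone()
```

After the update, `row` holds `('UPDATE', 100.0, 250.0)`: the before and after values are preserved even if the payments row is later changed again.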
Periodic reviews of user profiles and failed login attempts are important. Unauthorized users may target profiles that have not been utilized recently to break into the system. The number of access attempts could indicate an attempt at hacking into the database, which, if successful, could compromise the integrity, availability, and confidentiality of the database.


Operating system controls are a subset of general computer controls that help prevent unauthorized users from obtaining access to the operating system and ultimately access to the data files and executables.

File Permissions

Administrators should restrict operating system–level access to data store executable files, services, utilities, and system commands. In addition, administrators should ensure that the Everyone group does not have read, write, or execute permissions on the directory or files. The owner of the files and directories should be the DBA or other appropriate owner, and group access should belong to the group defined for data store users. Otherwise, users not authorized to access the database could still read, change, or delete data at the operating system level using command line access to the data store directory, and the integrity, availability, and confidentiality of the data store could be compromised. A user who has command line access could bypass data store and application security controls to execute data store services and functions.

User Profiles

Similar to operating system groups, database profiles should be used to impose resource limits on a specific user or group of users. Profiles should be created and assigned to a user or group of users. If only the default profile is used, then no resource limits will be applied to a user account. Unauthorized access (that is, more access than users require to do their jobs) to the system could result, depending on the nature of the database. As with groups, profiles should be separated by job function. For example, the DBA and the SAP users should be separated. SAP users should not be allowed to interact directly with the database, because they do not fill a database administration role. Similarly, DBAs should not have access to SAP, because it is not their job to conduct financial transactions. If these users share a profile, a risk exists that unauthorized transactions may be completed outside of their job function.

Source Code Access

Administrators should allow source code access only to required parties. Giving users access to source code information provides insight into the protection mechanisms in place and increases the risk that unauthorized or inappropriate changes to source code will occur.

Developer Access

To help prevent accidental and unauthorized changes to the production environment, developer write access to the environment should be limited.

Logging and Monitoring

System activities on any data store server containing business-critical data should be logged and monitored. Logging and monitoring should include events such as login failures, changes in permissions, changes in critical data, privileged ID usage, and recycling the services.


Network controls, another type of general computer control, are also used to implement data store security. For example, ERP data stores should reside only on the internal network or be segregated within their own secured network. Firewalls can help protect data stores by limiting the accessibility of the server, and several network-related controls should be considered.

Network Connections

Internet-based connections should not be permitted to access resources located on the corporate internal network. Such connections increase the risk that a security incident may occur. The firewall should deny all connections unless they are specifically required to support a business purpose. Access to the data store server should be granted only to the ERP application, relevant service accounts, the required network administrators, and the required data store administrators. These restrictions minimize the risk that unauthorized parties have access to sensitive information.

DNS Listings

Internal Domain Name Service (DNS) listings must be segregated from external DNS databases. Listing internal systems in a publicly accessible DNS database could allow intruders to gain information about the corporate network topology and systems, thereby potentially giving them the ability to access corporate data.

Firewall Configurations

Access to view or modify firewall configurations must be strictly controlled to maintain the integrity of the rule base. Failure to implement stringent controls regarding which users and workstations are allowed to administer the firewall increases the risk that inappropriate changes to the firewall configuration may occur. Physical access to the firewall should be limited to key technical staff who require access for maintenance and emergency procedures.

Services and Protocols

Administrators should disable unnecessary services and protocols. Every service or process not explicitly required to support business functions represents a potential exposure. If that exposure were exploited either to gain access to the host or to harm its ability to operate properly, unauthorized individuals could compromise the data store.

Antivirus Software

Antivirus software decreases the risk of viruses, worms, Trojan horses, and other malicious executables running on the servers. These programs, however, may also affect the performance of the server and the network, and could potentially send internal information to external parties.

Logging and Monitoring

Monitoring the network for attempted intrusions helps to identify unauthorized or inappropriate activity. Maintaining an awareness of normal transactions will aid in identifying events that are abnormal in the environment. Monitoring activities can range from manually reviewing logs to deploying an IDS. Access to the log files should be restricted to prevent unauthorized users from gaining security information about the host and the environment.
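Manual log review can be partly automated with a scan that flags abnormal patterns, such as repeated failed logins against one account. A sketch, using an invented log format that would need to be adapted to real logs:

```python
# Sketch: flag accounts with repeated failed logins while scanning a log.
# The log format below is an invented example.
from collections import Counter

SAMPLE_LOG = """\
2003-04-01 09:12:01 LOGIN_FAIL user=jsmith src=10.0.0.8
2003-04-01 09:12:05 LOGIN_FAIL user=jsmith src=10.0.0.8
2003-04-01 09:12:09 LOGIN_FAIL user=jsmith src=10.0.0.8
2003-04-01 09:13:44 LOGIN_OK   user=mdoe   src=10.0.0.9
"""

def suspicious_users(log_text: str, threshold: int = 3):
    """Return user names whose failed-login count meets the threshold."""
    failures = Counter()
    for line in log_text.splitlines():
        if "LOGIN_FAIL" in line:
            user = line.split("user=")[1].split()[0]
            failures[user] += 1
    return {user for user, count in failures.items() if count >= threshold}
```

A baseline of normal activity determines the right threshold; the point is to surface events that are abnormal for the environment, not to replace an IDS.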

Spoofed Traffic

Administrators should implement controls to protect against spoofed traffic; that is, traffic that hides its true source behind an internal or another trusted IP address. This type of traffic is considered an attack. Failure to inspect traffic for spoofed packets may leave the internal network open for attack.
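The core anti-spoofing rule is simple to state: a packet arriving on the external interface must not claim an internal or otherwise trusted source address. A sketch of that check, with illustrative address ranges:

```python
# Sketch of a perimeter anti-spoofing check. A packet arriving from outside
# that claims an internal source address is treated as spoofed. The network
# ranges below are illustrative.
import ipaddress

INTERNAL_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_spoofed(source_ip: str, arrived_on_external_interface: bool) -> bool:
    """Flag external packets whose source claims to be an internal address."""
    addr = ipaddress.ip_address(source_ip)
    claims_internal = any(addr in net for net in INTERNAL_NETWORKS)
    return arrived_on_external_interface and claims_internal
```

Real deployments implement this as ingress filtering on the firewall or border router rather than in application code; the sketch only shows the decision being made.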

■ Web Services Security The Web follows a simple premise: Use standard methods to encode information—Hypertext Markup Language (HTML)—and to access it—Hypertext Transfer Protocol (HTTP)—from any Web browser-equipped networked computer, regardless of its operating system. A similar notion underlies Web services and Extensible Markup Language (XML): Standardize the way software components and applications communicate with each other over the Internet—or across any network—regardless of their host platforms or the software environments in which they run. Web services and XML specify a simplified way for systems to interoperate over the Internet, allowing the information resources—applications and data—to be shared and facilitating the development of software that can access and manipulate the information.
Many enterprises are delaying Web services implementations because of the following security concerns:

• The ability of conventional infrastructure security technologies such as firewalls and IDSs to protect resources accessible through Web services.

• The possibility that new Web services-specific hacker attacks may be difficult to detect and defeat.

• The inadequate protection for Web services provided by existing infrastructure security protocols such as SSL, and the fact that new Web services-specific security protocols are still in development.
While standard Web-based traffic involves HTML from server to browser, Web services traffic can involve application programming interfaces (APIs) that send data back and forth over a variety of protocols, including HTTP and SMTP. Each Web service application interface may have hundreds of operations that can be accessed, providing hackers and other unauthorized users with new opportunities to compromise systems. For example, a bank loan application for a real estate transaction may be exposed as a Web service. Once invoked, this application may have to perform a number of operations such as identity verification, credit history verification, appraisal value verification, payment calculation, or funds transfer. Each subsequent operation may involve other, more granular operations. Compromising one of the component operations may provide an attacker with an opportunity to access other applications and their related data.
These concerns are amplified by the additional architectural and implementation considerations introduced by Web services.

Message based. While traditional applications are connection oriented and support security implementations at the connection level, Web services are message oriented and lack the guarantee of a direct connection between service provider and consumer. In a direct-connection scenario, data traveling between systems can be secured by the applications or by the network.
System coordination. The Web services architecture requires significant coordination among different systems. For example, a Web services implementation may include accessing older applications, accessing external Web services, and interfacing with other enterprise applications using Web services technologies. The components of the implementation are likely to have different security mechanisms, complicating both the coordination of authentication and authorization, and the maintenance of integrity and confidentiality across the Web services interfaces. If security lapses occur in any of the components, the vulnerability could put all other participating systems at risk.

Machine-to-machine interaction. Web services operations are predominantly machine-to-machine interactions, so creating, federating, and managing digital identities and entitlements across security domains represent another challenge. Using Web services technologies for business-to-business activities implies that the interacting organizations must let external users (machines or humans) access their business applications. Existing security practices and standards that have been developed for internal use may be meaningless, inappropriate, or completely unacceptable when outside users are introduced. For example, an internal policy may dictate that only members of a particular group or department with specific job titles or grades can access a given resource; clearly, these criteria may not be applicable for external users.
Interoperability. Web services will also increase the complexity of testing, change management, and troubleshooting of application components because of the dynamic, loosely coupled nature of the Web services run-time environment. To ensure safe and effective interoperability, companies implementing Web services need to test attributes such as:

• Publish, find, and bind capabilities of a Web services environment

• Simple Object Access Protocol (SOAP) intermediary capability

• Quality of service monitoring

• Web service orchestration testing

• Web service versioning

Because some testing requirements affect the design of Web services, new testing tools need both run-time testing capabilities and design-time capabilities.


Security disciplines that are routinely used in typical environments—authentication, authorization and access control, encryption, and data integrity—also play an important role in providing basic levels of security for Web services communications.
Authentication

Web service requestors need to be authenticated by the Web service provider before the request is processed. Standard Web applications use passwords, X.509 certificates, Kerberos, Lightweight Directory Access Protocol (LDAP), and other technologies for authentication. When an organization adopts a Web services approach, it might require additional security measures to protect and authenticate otherwise exposed Web services components.
For example, an e-commerce Web application may use a third-party service to authorize a credit card purchase. Implemented as a Web service, the processing may require creation of a Web Services Description Language (WSDL) file describing the service. But this requirement can introduce a security vulnerability because the WSDL file can be tampered with, perhaps causing a service requestor to communicate with an unauthorized Web service provider that is impersonating the desired provider. Web services interactions require two kinds of authentication: In addition to authenticating the service consumer and provider to each other, principal (that is, end user) authentication is also needed. The first type of authentication—between the consumer and provider—is generally accomplished using connection-oriented security such as SSL; principal authentication is more difficult and requires some type of message-embedded authentication token.
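As a rough illustration of principal authentication via a message-embedded token, the sketch below (Python, with a hypothetical pre-shared key and field layout) binds a user identity and timestamp to the message body with an HMAC, so that a tampered body or stale token fails verification:

```python
import base64
import hashlib
import hmac
import time

SHARED_KEY = b"demo-key-shared-out-of-band"  # hypothetical pre-shared key

def make_auth_token(principal: str, soap_body: str) -> str:
    """Bind a principal identity and timestamp to this specific message body."""
    issued = str(int(time.time()))
    payload = f"{principal}|{issued}|{soap_body}".encode()
    mac = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(f"{principal}|{issued}|{mac}".encode()).decode()

def verify_auth_token(token: str, soap_body: str, max_age: int = 300) -> bool:
    """Accept only fresh tokens whose MAC matches the body actually received."""
    principal, issued, mac = base64.b64decode(token).decode().split("|")
    payload = f"{principal}|{issued}|{soap_body}".encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    fresh = time.time() - int(issued) <= max_age
    return fresh and hmac.compare_digest(mac, expected)

body = "<TransferFunds><Amount>500</Amount></TransferFunds>"
token = make_auth_token("alice@example.com", body)
assert verify_auth_token(token, body)                        # intact message passes
assert not verify_auth_token(token, body.replace("500", "9000"))  # tampering fails
```

This is a sketch only; real message-embedded tokens (for example, those later standardized in WS-Security) carry richer structure and typically use digests or public-key signatures rather than a single shared secret.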

Authorization and Access Control

Authorization is critical because Web services can introduce complex levels of access. Web services security requires the determination of what information users or applications are allowed to access, as well as the operations an application or user can perform. Because Web services are programmatic interfaces, they may be more difficult to monitor for suspicious activity than standard applications. For example, unauthorized access to an improperly protected SOAP interface can easily go undetected because it may appear as an API call by an authorized program. This scenario is similar to that of protecting Web pages and the applications that may be embedded in them; the Web page URL can be protected using Web Access Control and single sign-on (SSO) solutions, but the application within the page is not protected because it is not a Uniform Resource Locator (URL).
Because Web services allow much easier integration with third-party applications and service providers, such as suppliers, customers, and other business partners, authentication and access rights must be tightly controlled and kept up-to-date. But the involvement of multiple parties can make standardizing on one authentication and access control scheme difficult. For example, business-to-business exchanges have the added challenge of managing multiple authentication formats among their members. In a worst-case scenario, every Web service in a business-to-business application would need a separate credential for each service accessed.
The Security Assertion Markup Language (SAML) is designed to facilitate the management and interoperability of security credentials across security domains, but if the SAML payload is not itself protected, it could present a security vulnerability that could be exploited by an attacker. Web SSO and credential mapping solutions that are designed to make these environments easier to manage and easier for participants to use may present an added security risk if proper security measures are not in place.

Session Management

Because Web services are generally stateless, service components would need to authenticate the client on every access, unless the proper security measures are in place. For instance, a Web service that initiates a funds transfer transaction may consist of several operations and contain multiple steps. Each step—for example, login, choose accounts, choose amounts, confirm—would require a user to authenticate, which could double the number of required interactions.

Web services components may leverage Web SSO technologies and use cookies or SSL sessions to grant trust for a specific period of time. Alternatively, some solutions offer implementations of state machines that help alleviate this problem. However, state management itself can become a security risk. For example, if a persistent state management solution were to keep the state information exposed long enough, a malicious user could take advantage of the stored session information and impersonate the authorized user. This type of vulnerability could result from as simple a circumstance as an authorized user leaving a system unattended, thus allowing unauthorized access by another user who may—maliciously or just carelessly—perform other operations with the data.
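The trade-off described above can be sketched as follows. This illustrative Python example (hypothetical names; an HMAC-signed context stands in for a real SSO product) shows one authentication yielding a signed, expiring session context that later operations can present instead of re-authenticating; the expiry also bounds the window during which exposed state could be abused:

```python
import base64
import hashlib
import hmac
import json
import time

SERVER_KEY = b"server-side-secret"  # hypothetical key, never sent to clients

def issue_session(user: str, ttl: int = 900) -> str:
    """Issue a signed session context after one successful authentication."""
    ctx = json.dumps({"user": user, "expires": time.time() + ttl})
    sig = hmac.new(SERVER_KEY, ctx.encode(), hashlib.sha256).hexdigest()
    return base64.b64encode(f"{ctx}|{sig}".encode()).decode()

def check_session(token: str):
    """Return the user if the context is authentic and unexpired, else None."""
    ctx, sig = base64.b64decode(token).decode().rsplit("|", 1)
    good = hmac.compare_digest(
        sig, hmac.new(SERVER_KEY, ctx.encode(), hashlib.sha256).hexdigest())
    data = json.loads(ctx)
    if good and data["expires"] > time.time():
        return data["user"]
    return None  # caller must re-authenticate

# Authenticate once, then reuse the context for choose-accounts, confirm, etc.
tok = issue_session("alice")
assert check_session(tok) == "alice"
assert check_session(issue_session("bob", ttl=-1)) is None  # expired context rejected
```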

Encryption and Data Privacy

Typically, standard SSL encryption provides point-to-point data privacy between the end points of service requestors and service providers. However, given the disconnected, loosely coupled nature of Web services, the service provider may not be the ultimate destination for the transaction message, and may even act as a service requestor as part of a multistage business transaction. For example, a Web service that handles the purchase of equipment may be terminated at the buyer’s purchase order application. The purchase order application, in turn, acts as a consumer to the equipment manufacturer, which may be a service consumer for the shipping company, and so forth throughout the purchasing process. Because SSL encryption terminates at a Web or application server, relying on SSL for end-to-end protection may be insufficient; additional protection, such as using the XML Encryption standard, would permit encryption of portions of the SOAP message, offering greater security throughout the processing cycle.
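A toy sketch of element-level protection in this spirit follows. It is not the actual XML Encryption standard (the HMAC-based keystream is illustrative only; real deployments use ciphers such as AES), but it shows the key idea: encrypting only the sensitive field while leaving the rest of the SOAP body readable to intermediaries:

```python
import base64
import hashlib
import hmac
import xml.etree.ElementTree as ET

KEY = b"toy-demo-key"  # illustrative only; XML Encryption uses real ciphers

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy CTR-style keystream built from HMAC-SHA256 (pedagogical, not the spec)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:n]

def encrypt_element(root, tag, nonce=b"demo-nonce"):
    """Replace one element's text with ciphertext, leaving the rest readable."""
    el = root.find(f".//{tag}")
    pt = el.text.encode()
    ct = bytes(a ^ b for a, b in zip(pt, keystream(KEY, nonce, len(pt))))
    el.text = base64.b64encode(ct).decode()
    el.set("Encrypted", "true")  # marker attribute, stand-in for <EncryptedData>

envelope = ET.fromstring(
    "<Envelope><Body><Order><Item>Router</Item>"
    "<CardNumber>4111111111111111</CardNumber></Order></Body></Envelope>")
encrypt_element(envelope, "CardNumber")
# <Item> stays plaintext for intermediaries; <CardNumber> is now opaque.
assert envelope.find(".//Item").text == "Router"
assert envelope.find(".//CardNumber").text != "4111111111111111"
```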
Encryption, however, may hamper virus detection. As with e-mail, Web services traffic may include attachments. When content is encrypted, viruses that may be embedded in the message are also encrypted, making it difficult for an antivirus engine to detect infected data.
Data Integrity and Confidentiality

An organization that exposes an internal application as a Web service may also expose supporting data stores, such as databases, registries, or directories. This data must be protected, either by encryption or, if the performance impact of encryption is not an acceptable option, by providing resource-based access controls (access control based on evaluating resource names such as the names of databases against a list of authorized users). Data also may be in danger of interception as it is being processed, especially when a Web service uses temporary data stored locally and the local storage is not adequately secured.
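A minimal sketch of such a resource-based access control check, with hypothetical resource and principal names:

```python
# Hypothetical resource ACL: resource name -> principals allowed to access it.
ACL = {
    "db.loan_applications": {"loan-officer", "auditor"},
    "dir.employee_records": {"hr-service"},
}

def may_access(principal: str, resource: str) -> bool:
    """Resource-based check: evaluate the resource name against its ACL."""
    return principal in ACL.get(resource, set())

assert may_access("auditor", "db.loan_applications")
assert not may_access("hr-service", "db.loan_applications")
assert not may_access("auditor", "db.unknown")  # unlisted resources are denied
```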
Another data protection issue is the potential interception and modification of a Web service’s output. The XML Signature standard provides a way to sign parts of XML documents, providing end-to-end data integrity across multiple systems. A key benefit of signing is the concept of auditability and non-repudiation—the ability to prove that a particular action took place. For example, an online loan application needs to be approved by an authorized bank officer, and an audit trail of all the actions performed must be maintained for control and dispute resolution purposes. If the application is an XML-based form, certain fields (such as the applicant’s name and Social Security number, loan amount, approver’s name and signature) can be encrypted and signed by a loan officer.
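The field-level signing described above can be sketched as follows. For simplicity this uses an HMAC in place of the public-key signature that XML Signature actually specifies, and the element and key names are hypothetical:

```python
import hashlib
import hmac
import xml.etree.ElementTree as ET

SIGNING_KEY = b"loan-officer-key"  # stand-in for the officer's private key

def sign_fields(root, tags):
    """Digest the named fields and attach a detached signature element."""
    data = "|".join(root.findtext(t) for t in tags).encode()
    sig = hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()
    ET.SubElement(root, "Signature", {"Fields": " ".join(tags)}).text = sig

def verify_fields(root) -> bool:
    """Recompute the digest over the named fields and compare signatures."""
    sig_el = root.find("Signature")
    tags = sig_el.get("Fields").split()
    data = "|".join(root.findtext(t) for t in tags).encode()
    expected = hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig_el.text, expected)

form = ET.fromstring(
    "<LoanApplication><Applicant>J. Smith</Applicant>"
    "<Amount>250000</Amount><Notes>call back</Notes></LoanApplication>")
sign_fields(form, ["Applicant", "Amount"])  # Notes stays unsigned
assert verify_fields(form)
form.find("Amount").text = "950000"         # tampering with a signed field
assert not verify_fields(form)              # ...is detected
```

Because only the listed fields are covered, an unsigned field such as Notes can still change without invalidating the signature, which is exactly the selectivity XML Signature provides.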

Shared Context

Shared context refers to the information that a Web service needs to know about a service consumer to provide a customized, personalized experience; shared context data may include the identity of the consumer, the consumer’s location, and any privacy constraints associated with the consumer information. When several discrete Web services are aggregated to create a composite business service, the participating services need to share the context information.
For example, a consumer planning a vacation may use an online travel service that is actually a set of aggregated airline, hotel, and car rental Web services. The services must communicate with each other to accommodate the shopper’s schedule, budget, and other preferences. But the consumer may explicitly request not to disclose particular personal information to any one of the participating services. To adequately address the consumer’s convenience and privacy concerns, the services must employ a complex set of rules and safeguards to ensure the integrity of the user’s data and identity.


As with any componentized application, Web services applications are vulnerable to attack because they represent a chain of processing components whose security is only as strong as the weakest link. This link is the most vulnerable to attack and can compromise other components in the chain. Exploitation of vulnerabilities can occur in a variety of ways:

• Data that flows to or from the weak-link component may be intercepted, allowing an interloper to acquire sensitive, personal, or valuable information.

• The streams of data traveling among the components may be manipulated to alter the data, redirect the data, or use unsuspecting servers to mount a denial-of-service (DoS) attack.

• A component may be shut down completely, denying its functionality to the other components that depend upon it; this can effectively disrupt users’ activities from many different access points.

• If the credential-sharing mechanism between one service and another is insecure, principals of an intermediary Web service may be impersonated.

Although a weakest-link compromise is not exclusively a Web services vulnerability, Web services provide greater opportunity for compromise due to their perceived design simplicity—Web services traffic flows through ports 80 (HTTP) and 443 (HTTPS), much as standard Website traffic does. Most firewalls can recognize SOAP messages but view them as well-formed (syntactically correct) HTTP traffic. They can be configured only to allow or reject SOAP traffic; however, newer XML firewalls can inspect SOAP contents and selectively allow certain Web services to pass through. But XML and SOAP filtering may not be adequate because, as noted previously, Web services interfaces are much more complex than Website interfaces that exchange only HTML pages and forms.
A Web services–enabled application may, for example, have hundreds or thousands of critical operations exposed, all accessible through port 80. In addition, an attacker has more information available, such as WSDL files and Universal Description, Discovery, and Integration (UDDI) entries, whether they are private or public. WSDL exposure is particularly dangerous because its XML format is self-describing and clearly defines the data elements. So WSDL exposure can provide an intelligent attacker with a documented way to invoke a service—its APIs, accepted parameters, and so forth—enough information perhaps to enable the attacker to gain entry.

In addition to attacks on UDDI- and WSDL-based weak links exposed by Web services implementations, Web services components can be victimized by traditional types of attacks. But as attacks have grown more sophisticated, so have the countermeasures. In the Web services environment, these countermeasures include analyzing more information, such as the content of SOAP messages, and security systems to help detect and deter attacks. For example, information in SOAP headers enables firewalls, IDSs, and IPSs to analyze traffic and even payload for activity, pattern recognition, and auditing. Preventive measures such as this are effective, but they must be considered part of a continuous process, so that security and monitoring tools keep up with the constant advances in hacking.
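The kind of content inspection an XML firewall performs can be sketched as follows, assuming a hypothetical allowlist of externally callable operations. A port-level firewall would pass both requests below as well-formed HTTP; the content filter distinguishes them by the operation named in the SOAP Body:

```python
import xml.etree.ElementTree as ET

# Hypothetical per-service allowlist of SOAP operations permitted from outside.
ALLOWED_OPERATIONS = {"GetQuote", "CheckOrderStatus"}

def inspect_soap(envelope_xml: str) -> bool:
    """Look past 'well-formed HTTP' and filter on the requested operation."""
    ns = {"soap": "http://schemas.xmlsoap.org/soap/envelope/"}
    root = ET.fromstring(envelope_xml)
    body = root.find("soap:Body", ns)
    if body is None or len(body) == 0:
        return False
    # By convention, the first child of Body names the operation being invoked.
    operation = body[0].tag.split("}")[-1]
    return operation in ALLOWED_OPERATIONS

request = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><GetQuote><Symbol>ACME</Symbol></GetQuote></soap:Body>
</soap:Envelope>"""
assert inspect_soap(request)                                    # allowed operation
assert not inspect_soap(request.replace("GetQuote", "TransferFunds"))  # blocked
```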


For effective and acceptable business use of Web services, authentication and nonrepudiation must be applied to Web services messages at a granular level. This requirement potentially could involve many users across disparate organizations in situations where it may be necessary to encrypt and authenticate a transaction in arbitrary sequences. Also, the ability to provide digital identity management that can span multiple organizations is essential for high-level business-to-business transactions. These complexities can make it more difficult to make enterprisewide decisions and to verify compliance with enterprisewide security standards.
Therefore, a critical aspect of any Web services architecture must be a security management framework that would allow centralized organization and coordination of different security systems in an interoperable and managed fashion. Such a framework may not be exclusively positioned to support Web services, but may be focused on addressing broader security concerns. To ensure that a security management framework can deliver the desired functionality, it must be implemented as an instance of a service oriented architecture (SOA).
Such a security management framework would extend the notion of policy-based management to enable setting and enforcing security policies across all Web services present in the organization. A security management framework would include, but is not limited to, the following:
• Trusted interoperable identities:
  – Identity federation (linking identities across various security domains)
  – Authentication sharing (exchange of the authentication states)
  – Attribute sharing (exchange of information about attributes/roles)

• Interoperable credentials: Issuance, exchange, and validation of digital credentials

• Interoperable policies:
  – Operational policies
  – Resource access policies
  – Confidentiality and privacy policies

• Trust models:
  – Business trust models
  – Cryptographic trust models

• Message exchange integrity and confidentiality


The basic Web services building blocks—SOAP, WSDL, and UDDI—do not address security. Thus, new standards such as SAML and WS-Security are required to ensure that Web services architectures are capable of meeting enterprise security requirements. Additionally, standards that address the security issues related to XML, the basic language of Web services, are also required.
Security Assertion Markup Language

Ratified by the Organization for the Advancement of Structured Information Standards (OASIS) in the fourth quarter of 2002, SAML 1.0 is a specification for an XML-based security framework that will enable a federated network of identity management for interoperation across distributed hosted services and Websites.
SAML 1.0 provides the secure interchange of authentication and authorization information by using Transport Layer Security (TLS) with core Web services standards such as XML and SOAP. SAML also offers suggestions and best practices for the use of the XML Signature and XML Encryption standards with SAML messages. A key benefit of SAML is support for SSO across security domains.
SAML was developed by the OASIS Security Services Technical Committee, with contributions from BEA, HP, IBM, Netegrity, RSA Security, Sun, VeriSign, and public key infrastructure companies Baltimore Technologies and Entrust. Initially, there was concern that the SAML specification might vie with WS-Security, which is backed by IBM and Microsoft. However, the two specifications are complementary, and vendors are committing to use SAML for forthcoming Web access management solutions.
The following three scenarios demonstrate how SAML can be used for Web services access control:

• Distributed Web SSO—In this scenario, a Web user has authenticated (proven their identity) to the security system of Company A and would like to access the Website of partner Company B. Despite using different security systems, these companies have agreed that they will trust the authentications of each other’s users. This cooperation requires a linkage between the SSO systems of the two domains, which SAML provides.

• Distributed transaction—This scenario involves a transaction flowing between two organizations, such as a financial transaction between business-to-business gateways. Depending on established business relationships, different security information related to the transaction will need to be communicated via SAML between the parties of the transaction. Examples of the security information may include the identity of the transaction initiator, the type(s) of authentication that were provided, attributes about the user, and authorization decisions made by some authority.

• Trusted third party—SAML also can be useful when an organization needs to request security information from a trusted third party. For example, an employee of Company A might be logged into a supplier’s Website to order office supplies. The supplier’s system would need to determine whether to process the order, so it would use SAML to query Company A to determine whether the employee is authorized to place the order.

The SAML specification includes an XML schema that defines SAML assertions and protocol messages. The specification also describes methods for binding these assertions to other existing protocols (such as HTTP and SOAP) to enable additional security functionality. The components of SAML are:
• SAML assertions—SAML assertions are the statements made to communicate authentication and authorization information. SAML specifies three types of assertions:
  – Authentication assertions contain statements that a given system (or user) has proven its identity (authenticated) to a given authority or security domain.
  – Attribute assertions contain statements about the attributes of a system or user; these statements may be used by security systems to process authorization rules, or by applications in their business processes.
  – Authorization decision assertions contain statements about the results of an application of authorization policy.

• SAML protocol—SAML defines an XML schema for request and response messages that conform to the SOAP specification for defining Web services. The SAML protocol supports the SAML bindings, but can also be used independently as a way for organizations to communicate security information.

• SAML bindings—One of the ways SAML will be used is by binding it to existing protocols for additional security. The current SAML specification defines two bindings: the Web browser SSO binding and the SOAP binding. The Web browser SSO binding allows SSO between organizations’ Web systems by standardizing ways for Web systems to communicate user authentications. The SAML SOAP binding describes a method for attaching SAML assertions to ordinary SOAP messages to associate various security characteristics (authentication, attributes, or authorization decisions) with the messages.
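A minimal sketch of constructing an authentication assertion follows, simplified relative to the full SAML 1.0 schema (real assertions also carry conditions, audience restrictions, and typically an XML Signature); the issuer and subject names are hypothetical:

```python
import uuid
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

NS = "urn:oasis:names:tc:SAML:1.0:assertion"  # SAML 1.0 assertion namespace

def make_authentication_assertion(issuer: str, subject: str) -> str:
    """Build a minimal SAML 1.0-style authentication assertion (illustrative)."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    assertion = ET.Element(f"{{{NS}}}Assertion", {
        "MajorVersion": "1", "MinorVersion": "0",
        "AssertionID": uuid.uuid4().hex,
        "Issuer": issuer, "IssueInstant": now,
    })
    stmt = ET.SubElement(assertion, f"{{{NS}}}AuthenticationStatement", {
        "AuthenticationMethod": "urn:oasis:names:tc:SAML:1.0:am:password",
        "AuthenticationInstant": now,
    })
    subj = ET.SubElement(stmt, f"{{{NS}}}Subject")
    ET.SubElement(subj, f"{{{NS}}}NameIdentifier").text = subject
    return ET.tostring(assertion, encoding="unicode")

# Company A's identity provider asserts that "alice" authenticated by password;
# partner Company B can consume this instead of re-authenticating her.
xml_out = make_authentication_assertion("idp.company-a.example", "alice")
assert "AuthenticationStatement" in xml_out and "alice" in xml_out
```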


The WS-Security standard was originally proposed by IBM, Microsoft, RSA Security, and VeriSign in early 2002. WS-Security provides standard mechanisms to exchange secure, signed messages in a Web services environment, and provides a foundation layer that will help developers build more secure and broadly interoperable Web services.
WS-Security is designed to be flexible and modular so that it will effectively accommodate the variety of systems that may be used among collaborating organizations, which also may employ different security approaches. This interoperable approach allows both the security technology and its business use to evolve as required. Accordingly, the WS-Security road map describes how to support current and future security approaches. Organizations can choose the credentials they wish to employ—for example, user ID and password, X.509 certificates, or Kerberos tickets. The adoption and deployment process can be incremental, thus allowing an organization to start with basic user ID and password credentials and later to adopt stronger security mechanisms. WS-Security provides a general-purpose specification for associating security tokens with messages by using XML Signature and XML Encryption standards.
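A sketch of attaching a username/password token to a SOAP header in the spirit of WS-Security follows; the namespace URI and element layout are illustrative (the exact schema is version-dependent), and a real deployment would send a password digest with a nonce, not cleartext:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# WS-Security secext namespace; version-dependent, used here for illustration.
WSSE_NS = ("http://docs.oasis-open.org/wss/2004/01/"
           "oasis-200401-wss-wssecurity-secext-1.0.xsd")

def add_username_token(envelope_xml: str, user: str, password: str) -> str:
    """Attach a UsernameToken-style credential to the SOAP Header."""
    root = ET.fromstring(envelope_xml)
    header = root.find(f"{{{SOAP_NS}}}Header")
    if header is None:
        header = ET.Element(f"{{{SOAP_NS}}}Header")
        root.insert(0, header)  # Header must precede Body
    security = ET.SubElement(header, f"{{{WSSE_NS}}}Security")
    token = ET.SubElement(security, f"{{{WSSE_NS}}}UsernameToken")
    ET.SubElement(token, f"{{{WSSE_NS}}}Username").text = user
    ET.SubElement(token, f"{{{WSSE_NS}}}Password").text = password  # digest in practice
    return ET.tostring(root, encoding="unicode")

env = f'<Envelope xmlns="{SOAP_NS}"><Body><Ping/></Body></Envelope>'
secured = add_username_token(env, "svc-account", "s3cret")
assert "UsernameToken" in secured and "svc-account" in secured
```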
WS-Security is the foundation for a broader road map and additional set of proposed Web services security capabilities outlined primarily by IBM and Microsoft. The companies continue to develop the subsidiary specifications, such as WS-Trust, WS-Policy, and WS-SecureConversation. The WS-Security road map includes the following standards.


WS-Trust. The WS-Trust specification was released as a public draft in the fourth quarter of 2002. The goal of WS-Trust is to allow applications to build trust mechanisms directly into the exchange of SOAP messages between Web services providers and consumers. The specification’s extensions facilitate the request and issuance of security tokens in the management of trust relationships. The specification does not define an explicit security protocol, but is designed to support a range of existing and new protocols. WS-Trust works with other protocols in the security stack, such as WS-Security and WS-Policy, to ensure that a service requestor is properly authenticated and granted appropriate resource access.

WS-SecureConversation. WS-SecureConversation builds on WS-Security by defining methods for deriving and passing session keys, as well as for establishing and sharing security contexts. A shared security context is established when the conversation between participants is first initiated. The context can be used to derive session keys that increase the overall security while reducing the overhead of security processing for each message. The initial working draft of WS-SecureConversation 1.0 was released in the fourth quarter of 2002.
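The key-derivation idea behind WS-SecureConversation can be sketched as follows, assuming a hypothetical shared context secret established when the conversation begins; each message then gets its own key derived via HMAC, so no single key protects every message:

```python
import hashlib
import hmac

# Secret established when the security context is first negotiated (hypothetical).
CONTEXT_SECRET = b"shared-context-secret"

def derive_message_key(context_secret: bytes, message_number: int) -> bytes:
    """Derive a fresh per-message key from the shared security context."""
    label = b"session-key:" + str(message_number).encode()
    return hmac.new(context_secret, label, hashlib.sha256).digest()

k1 = derive_message_key(CONTEXT_SECRET, 1)
k2 = derive_message_key(CONTEXT_SECRET, 2)
assert k1 != k2                                      # distinct key per message
assert k1 == derive_message_key(CONTEXT_SECRET, 1)   # both parties derive the same key
```

Because both parties hold the context secret, each can derive the same per-message keys independently, which is what reduces the per-message security-processing overhead described above.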
Web Services Policy Framework (WS-Policy). WS-Policy provides a model and an XML syntax for Web services providers and consumers to define rules of information privacy and usage. WS-Policy works with the WS-Security mechanisms to enforce access and usage rights, and is designed for discovery through WSDL and the UDDI repository. The WS-Policy specification relies on additional policy-building blocks, including the Web Services Policy Assertions Language (WS-PolicyAssertions), which provides a common language for representing individual requirements and preferences; and the Web Services Policy Attachment (WS-PolicyAttachment), which defines three different attachment methods of associating policy definitions with WSDL and UDDI entities. The WS-Policy draft was released in the fourth quarter of 2002.


WS-Authorization. WS-Authorization will define how Web services manage authorization data and policies. The specification is under development and not yet publicly available as of October 2003.

WS-Privacy. WS-Privacy is currently under development as of October 2003 and will define how Web services state and implement privacy practices.

WS-Federation. Published in July 2003, WS-Federation enhances the WS-Security road map by describing how to manage and broker trust relationships in a heterogeneous federated environment, including support for federated identities. The specification lets a user access multiple applications without logging into each application separately, even if the applications are provided by different companies. WS-Federation includes the following three elements:

• The Web Services Federation Active Requestor Profile defines the means to request, exchange, and issue security tokens in the context of active requestors.

• The Web Services Federation Language is the specification document that defines how federation works in the WS-Security stack.

• The Web Services Federation Passive Requestor Profile defines a system for passive mechanisms to work seamlessly using a single or simplified sign-on to the WS-Federation system.