Information Security - part 3


Nearly every enterprise now relies on information technology as an essential tool for meeting its business objectives. In so doing, however, enterprises must also contend with the various threats and vulnerabilities associated with today’s computing environment.
A threat can be defined as any event that might prevent or inhibit an organization’s ability to meet its objectives. For example, in day-to-day business operations, an employee strike at an overnight package carrier might be considered a threat, because it could affect the company’s ability to receive or send materials necessary for particular functions. In the information security realm, threats can come in many forms:

• Brute force—An intruder uses a brute-force attack to gain an application password and then performs an unauthorized money transfer.

• Natural disaster—A tornado destroys a data center where essential servers are hosted.

• Corporate saboteur—A corporate saboteur takes advantage of weak encryption on a wireless network to gain access to confidential information.

• Malicious code—An employee unintentionally downloads a Trojan-horse program that erases all data on the local hard drive.
Each threat must take advantage of some vulnerability—a weakness in technology or process that can be exploited. For example, a burglar with a lock-pick set is a threat to many forms of perimeter security. A combination lock, however, is not vulnerable to such an attack. In the context of an enterprise IT infrastructure, any of the following examples could be considered a vulnerability:

• Software bug—A new buffer overflow bug is discovered in the password program that ships with an operating system.

• Password—The minimum password length for a domain is set to five characters rather than eight.

• Wireless access points—An organization’s wireless network has been deployed without Wired Equivalent Privacy (WEP) encryption enabled, and with the factory-default administration password for the access points intact.

• Inappropriate access—Users perform routine tasks on their computers while logged in to accounts with administrative access.
Threats and vulnerabilities are intimately related. One cannot affect an organization’s ability to meet its objectives without the presence of the other. In the case of the first example threat, an intruder would not have been able to obtain passwords by brute force had some vulnerability not existed in the authentication procedure—for example, the application failing to enforce a password policy mandating lengthy, difficult-to-guess passwords.

Threats can come from anywhere, both inside and outside the organization. A threat’s objective might be financial gain, a simple act of vandalism, or even a completely anonymous attack carried out blindly by a virus or other malicious software. Similarly, the potential sources of vulnerabilities are innumerable, ranging from software bugs and user errors to unforeseen circumstances. New trends in online threats, such as Internet worms and attacks on wireless networks, have forced organizations to address security concerns they previously never needed to consider. Complicating security efforts is the ever-shorter interval between the discovery of a new vulnerability and its first exploitation.
For all of these reasons, an effective strategy for threat and vulnerability management (TVM) will include an integrated, proactive approach to protection.

A comprehensive defense incorporates four primary activities:

• Threat detection—Actively identifying and isolating threats to minimize their impact on assets.

• Vulnerability detection—Actively identifying asset weaknesses before they can be exploited in an attack.

• Threat and vulnerability remediation—Isolating and resolving asset security issues once identified.

• Security information management—Integrating, interpreting, and presenting security-related information from disparate sources.

■ Common Sources of Threats

The news and popular entertainment media usually paint a romantic picture of the Internet hacker. He is a rogue—perhaps a rebellious teenager—someone who breaks into systems for the thrill, or the glory, or to tap into privileged information. He changes his school grades at will and erases the record of his overdue phone bill. Smart, yet disaffected, his primary motivation is to live outside the dictates of authority.

This portrait tends to downplay how great a risk today’s enterprises really face. That government records databases might be subject to attack goes without saying. But how likely is it that one of these thrill-seeking individuals would want to go after (for example) the human resources records of the shipping warehouse of a major importer of aluminum die-making equipment?
As is so often the case, however, the truth of cyberattacks does not match their Hollywood portrayal. The reality is that every enterprise is at risk, and an attack might be launched against any system, no matter how seemingly mundane. (See Figure 3-3 for a survey of common corporate attacks.) To understand why this vulnerability is so pervasive, it is important to consider every potential source of attack and every kind of attacker. Popular perception tells only part of the story.

MALICIOUS EXTERNAL THREATS

Malicious threats are characterized primarily by the desire to do harm, whether to a company’s assets, its public image, or more often to the computer system itself. Harm in this case usually means literal damage, such as defacing a Web page, erasing files, or bringing down an entire system. It can also mean any act that undermines or runs contrary to a company’s own security policies and standards, however. For example, gaining unauthorized administrative access to a server can be characterized as a harmful act.
The actions of the system hackers that have captured Hollywood’s imagination generally fall into the category of malicious external threats. In real life, the perpetrators of malicious external threats are often skilled programmers, system administrators, or simply hobbyists with a very keen understanding of the inner workings of computer systems. Their motivation for compromising corporate networks is usually the thrill of discovery or of completing a challenge. In some cases, hackers of this type have even been known to disclose any vulnerabilities they discover to the companies in question, so that they can be corrected. While this disclosure may not excuse the hackers’ actions, companies at least have some hope of bringing such incidents to a more or less amicable resolution.

Unfortunately, such hackers are by far the minority. Incidents today are more often the work of semiskilled attackers, using the equivalent of a sledgehammer rather than a scalpel. These attackers are known as script kiddies, so called because they rely on scripts or programs written by someone else to exploit known vulnerabilities, without necessarily even understanding how the scripts work.

They are seldom motivated by curiosity; rather, their goal is simple vandalism, or the digital equivalent of a joyride on someone else’s computer systems. They seek notoriety in the computing underground—in other words, they do it to be able to say they did it. For that reason, any Internet-exposed system might land in a script kiddie’s crosshairs, provided the system is easy enough to compromise.

EXTERNAL THREATS FOR FINANCIAL GAIN

Although the actions of any external attacker might be appropriately characterized as malicious, it is important to distinguish between those motivated by the desire to do harm and those whose goal is financial gain. Many high-profile Internet heists have been reported in recent years, often involving significant sums. The profile of the attackers in these cases varies widely, from rogue individuals to well-organized cartels of criminals.
Financial institutions are not the only organizations at risk for money-motivated attacks. For example, the customer records database of any company could be a valuable target, if it exposes credit card data that could be used later for fraudulent purchases. In such a case, the company itself may not experience any immediate financial losses because of the attack. The resulting loss of customer goodwill, however, could be significant—as could the potential financial liability, should the company be found negligent in its failure to protect customer data. Other targets of financially motivated attacks might include order-entry systems, billing records, shipping and receiving databases, and online product pricing information.

This type of attacker is unlikely to be deterred by the threat of prosecution under data security laws, or dissuaded by basic countermeasures. The goal of these attacks is profit. The perpetrators have likely already resigned themselves to committing a crime and, in the case of a well-organized attack, will have already identified the exploits that will allow them to achieve their goals as quickly as possible. It is imperative, therefore, that the corporate response to these attacks be swift and comprehensive, and that it involve the appropriate external authorities at the earliest stages.

INTERNAL EMPLOYEES

Not every attack originates outside of a company. In fact, according to the Computer Security Institute (CSI)/Federal Bureau of Investigation (FBI) 2003 Computer Crime and Security Survey, 77 percent of respondents believed disgruntled employees were the likely source of attacks.
Internal attacks too frequently go unnoticed until the damage is done because their activities do not match the documented patterns that characterize malicious external attacks. While external attackers must lay siege to a corporate network through either brute force or subterfuge, terminated employees whose accounts are not deleted on a timely basis will have no difficulty gaining access. The same is true if their access was only partially revoked—for example, if their access to local file servers was disabled, but their login accounts on dial-in terminal servers or wireless local area network (WLAN) access points were not.
The potential motives of these attackers are numerous. They may harbor a grudge against a former employer and wish to cause harm or perform acts of vandalism. They may be motivated by direct financial gain through embezzlement. Or, they may simply want to use company resources for their own gain—for example, access to a sales contact database. Whatever the reason, the threat disgruntled employees pose to the enterprise is no less than that of outside attacks.

CORPORATE ESPIONAGE

Enterprise computers are often known as information systems, so it should come as no surprise that information is often the true target in Internet attacks. A company’s confidential information might be of value to several parties, but none more than that company’s competitors. Corporate espionage is a very real part of today’s business landscape, and the advent of the Internet has only increased the number of avenues for spies to gain access to sensitive data.
The classic depiction of corporate espionage is an agent breaking into a secret laboratory to steal plans for a hush-hush research and development project. But real-life espionage need be nothing so dramatic to be equally harmful. An incident might be something as simple as downloading a white paper for a yet-to-be-announced product from a file server that was configured with inadequate password protection. Or, in more extreme cases, an attacker might gain access to customer contact information, sales records, financial projections, or even employee salary information—any of which could prove catastrophic in the hands of a competitor.
Espionage-motivated attacks can often be difficult to spot, because their activities generally match those of other attacks that originate externally. Additionally, the majority of espionage-related attacks are carried out by independent agents whose patterns of attack offer no clues about their actual employers. While many companies would respond to suspected corporate espionage by simply closing their compromised systems and effectively locking out the attacker, an alternative is to allow the hacker to continue the exploits and monitor the attack in an attempt to discover its origins. In such a case, the company might supply the attacker with false information or trace its attack patterns to learn from this behavior.
Whenever an attack occurs, companies should give careful consideration to exactly which assets have been compromised and what the impact could be to the organization should the information those assets contain fall into the wrong hands.

FOREIGN ESPIONAGE

Attacks resulting from foreign espionage are less common than those motivated by profit or simple malicious intent, but nonetheless they should be taken seriously. For example, in the United States, the U.S. Department of Homeland Security has expressed concern that cyberwarfare could be used as a terrorism tactic against the United States. Although theft of sensitive information may be one part of this threat, denial of service (DoS) attacks against vital government, financial, and business infrastructures are equally likely.
Organizations with close ties to government are not the only ones at risk from these threats. Many governments have close ties to industry and could potentially assist the espionage efforts of prominent companies against their competitors in other countries. Sometimes a corporate network may be compromised merely as a gateway to another network—for instance, that of a government customer or partner.
If foreign espionage is suspected, a company should first initiate the same security procedures that would be followed for any other attack. It is particularly important, however, that the appropriate authorities also be contacted in these cases, because a localized incident may in fact be part of a larger national security threat.

IDENTITY THEFT

A disturbing trend has been the marked increase in identity theft, in which criminals gain access to identifying information about an individual for purposes of posing as that person. Profit is the usual motive—for example, applying for a credit card under an assumed name. Depending on the information gathered, however, an attacker could stand to gain much more: airline tickets, for instance, access to restricted materials, or even a trusted position at a company. Damaging an individual’s or an entity’s reputation is another motivation for identity theft. For example, a disgruntled employee might impersonate a former boss to commit unlawful or unethical deeds that would damage the boss’s and the company’s reputations.
The gathering and cross-referencing of personal identifying information has become common practice, meaning the number of avenues for identity theft is increasing. Any systems that store or give access to such information might be subject to attack, including customer databases, sales contact information, or even employee records.
Assessing the actual financial risks companies face from identity theft is often difficult. When a corporate database is compromised, the company itself may not suffer direct losses. Any organization, however, can be a victim of fraud resulting from identity theft. For that reason, it is important for companies to recognize this mounting social ill and do their part to stem its spread before it becomes pandemic.

■ Malicious Code

Although all threats are ultimately the work of humans, individual incidents often occur through automated processes.

Malicious code, usually written to retrieve sensitive information, damage systems, or commit acts of vandalism, is all too widespread on the Internet today. The majority of attacks that result from malicious code are completely anonymous—only very rarely can they be traced to their sources—and they are launched without a clear target. Indeed, in many cases their aim is to affect as many random targets as possible. However, some malicious code applications are written specifically for single targets. For example, distributed denial of service (DDoS) attacks, which can be distributed in the form of a virus, can target one Internet Protocol (IP) address or network and are frequently used to attack a single system.

SYSTEM-LEVEL ATTACKS

Malicious system software can be divided roughly into three categories: Trojan horses, viruses, and Internet worms.

Trojan Horses

This type of threat is named after the famous stratagem of the Trojan War, in which the Greeks sneaked into the city of Troy concealed inside a wooden horse, which they had supposedly left as a gift earlier in the day. This scenario describes perfectly the means by which Trojan-horse programs work their way into enterprise computers. The software presents itself to the user as an innocent application such as a screen saver, a utility, or an amusing entertainment. Once run, however, it acts on its true objective, which could be to delete or overwrite files. Trojan horses introduce a so-called back door into the compromised system, enabling the attacker to remotely control a user’s PC. For example, the attacker might use the back door to access the company network undetected.
Trojan-horse programs rely on subterfuge and deception to trick the user into mistakenly running them. Unlike viruses, they do not self-replicate, cause themselves to be executed without user intervention, or move from machine to machine without being manually copied.

Viruses

Virus software may begin as a Trojan horse, but once the malicious code is executed, its potential for harm is much greater. In addition to any directly damaging effects they may have, viruses can also attempt to infect other files or disks with their own code, making viruses difficult to eradicate once they have been activated. The life cycle of a typical virus has several stages:

• Stealth—Most viruses take steps to conceal themselves from the user, such as attaching themselves to pre-existing files, encrypting themselves to disguise their signatures, or modifying messages from the host operating system that might otherwise reveal their presence. The goal is to make sure the viral code is unwittingly executed as often as possible.

• Self-replication—Like a biological virus, a primary goal of a computer virus is to infect new hosts. Each time it is run, the viral code will seek new files or disks into which it can insert copies of itself. Thus, removing a virus from an infected system is seldom as simple as deleting the first infected files discovered; others will almost always exist.

• Activation—While some viruses are opportunistic and activate whenever possible, many others wait until a certain date and time, or until certain internal system conditions are met, before the damaging portion of their code takes effect. Theoretically, a virus could remain dormant for days, months, or even years.

• Payload—Sometimes called a warhead, this code runs when a virus is activated. In many cases its effects will be similar to those of the damaging code contained in Trojan-horse software.
In one common tactic, viruses attach themselves to frequently executed programs or system code. The viruses then appear to run completely unbidden, without direct user intervention. For example, a virus might modify the host operating system so that the viral code is executed every time new disks are inserted.
Earlier viruses were operating-system-specific and infected only program files. But the advent of scriptable applications that have their own macro languages (such as Microsoft Office) has introduced a new threat: the macro virus. These viruses can infect various platforms, because they attach themselves to document files rather than executable binaries. They are also easy to create and spread, because documents are exchanged far more often than programs. Macro viruses have replaced boot sector viruses as the system infection responsible for the most damage annually.

Worms

A subcategory of viruses, worms have the additional capability of network awareness. While most virus software searches the local hard drive for files and programs to infect, worms will also probe the local network or even the Internet. They will often penetrate new machines by exploiting known vulnerabilities in network software, or by taking advantage of common configuration errors. Some of the most pernicious viral outbreaks in recent years, such as Blaster, Code Red, and Nimda, have been worms.

BROWSER-BASED ATTACKS

Viruses and worms are well-established threats. But as hostile programmers continue searching for new vulnerabilities to exploit, Web browsers have become popular targets. At first, this approach may seem counterintuitive; to most users, a Web browser is simply a way to navigate and display content, nothing more. But the role of the browser has expanded, and today’s browsers are complex engines capable of executing a diverse range of programming languages and instruction codes, any of which might provide the means for a successful attack. Examples include:

• JavaScript—Standardized as ECMAScript, this language is a primary component of dynamic Hypertext Markup Language (DHTML) and a core feature of every browser in widespread use today. Many Websites require the presence of JavaScript to properly display their complex user interfaces.

• VBScript—The Microsoft equivalent of JavaScript, VBScript is supported in Microsoft’s Internet Explorer.

• Plug-ins—Most browsers support extensibility architectures, and many plug-in extensions implement programming languages of their own. Examples include ActionScript (in Macromedia’s Flash), Lingo (in Macromedia’s Shockwave for Director), and Curl.

• Java—Although their popularity has waned somewhat since the 1990s, Java applets still provide a means of executing complex, graphical user interface (GUI)-enabled applications from inside the browser.

• ActiveX controls—This Microsoft technology lets Windows users download and install system software components and code libraries over the Internet.

• .NET components—In 2001, Microsoft introduced .NET, a development platform that attempts to resolve many criticisms of past development models, including ActiveX, Component Object Model (COM), and Distributed COM (DCOM). As of late 2003, .NET-based browser components are rare, but the bulk of new development for the Windows platform is expected to use this technology in the near future.
Each of these technologies was designed with safeguards to help prevent the spread of malicious code. For example, JavaScript simply lacks any facilities to write to the local disk, format drives, or delete files. Its ability to access content across different Internet domains or browser windows has also been intentionally limited.
The Java security model limits the destructive potential of applets with its sandbox, which carefully screens all code before it is executed. An applet that contains potentially harmful code—such as code that deletes files or writes to the local disk—will not be allowed to run. Depending on the security model of the Java version in use, however, there may be several ways to loosen the restrictions of the sandbox model and allow Java applets access to more potentially harmful features. Standalone Java-based applications (those that do not run inside a Web browser) do not have a sandbox security control and are subject to application-level vulnerabilities.
Microsoft’s ActiveX security model essentially relies on human judgment. When an ActiveX control is offered for download, the user is presented a dialog box that is used to certify the application. For example, the dialog box might state that the application the user is about to execute was developed by XYZ Corporation, prompting the user to verify the application to establish trust. If the user selects Yes, the control will be downloaded, installed, and automatically executed.
This requirement of certifying and accepting ActiveX controls can become a risk in and of itself, as users become conditioned to selecting the trust always option without reading the specifics about the control noted in the dialog box. This conditioning to automatically accept ActiveX controls without question can be exploited by an attacker.

Microsoft’s .NET platform borrows many ideas from Java, including a virtual machine environment for executing code (called the common language runtime) and a well-designed, Internet-savvy security model. Like Java applets, most .NET-compliant components have little ability to harm the host computer. Unlike Java, however, .NET gives programmers the option of declaring their code untrusted, allowing access similar to that of ActiveX controls.
Developers of both the Microsoft and Java technologies support a practice called code signing, which aims to establish the authenticity of downloaded code components through the use of digital signatures. Again, however, this feature identifies only the origins of the code; it does not guarantee that the code itself is trustworthy.
Thus, no browser-based technology should be considered inherently secure. However, it would be unrealistic to expect to shield end users completely from browser-based threats; executable content on the Web has become far too commonplace. Instead, companies often implement firewalls and content-filtering measures, along with a user education program for secure Web access.
Finally, any browser-based technology can be only as secure as the browser itself. Even if designed with security in mind, a plug-in cannot be truly secure if the host browser’s plug-in interface is not. Likewise, a buggy Java Virtual Machine or JavaScript interpreter poses equal risk. Therefore, security administrators must remain aware of current browser security notifications and standardize on a browser known to be stable and robust, upgrading only as new versions are proven to be reliable.

■ Threats in Distributed Environments

Today’s Internet-enabled computing environments increasingly favor network-based distributed computing models over traditional client/server architectures. Technologies such as Common Object Request Broker Architecture (CORBA), Enterprise JavaBeans, Microsoft .NET, grid computing, and Web services are evidence of this trend. New application categories like collaboration tools, shared storage systems, and peer-to-peer applications extend these concepts to the desktop.
The shift toward networked computing is an exciting development, but it also presents new security challenges. The same distributed computing concepts that enterprises now use to their benefit also can be turned against them, making it possible to mount attacks remotely, anonymously, and on a large scale. Because the Internet connects many of the world’s computers, attacks can be launched from locations around the globe, routed through other countries to disguise their tracks, and finally directed against any machine on the Internet.
The complexity and stealth of such attacks can greatly hinder an organization’s ability to stop them, as well as to identify their origins. Instead of focusing on a single machine or local area network (LAN), security incident response teams now must consider whether events that are distant in time or space are related, or indeed part of the same attack.

EAVESDROPPING AND IMPERSONATION

Whenever computers are working together to achieve a common task, they must coordinate their efforts by passing messages across the network. As the number of machine-to-machine communication channels increases, so does the number of opportunities for an attacker to monitor or disrupt the messaging process. The most popular forms of attack are the following:

• Replay attack—An attacker captures a legitimate inter-system message for the purpose of sending the same message again at a later time. For example, the attacker might record a message indicating that payment for a service has been verified, then replay that message to the receiving machine when no actual payment has been made.

• Session attack—An attacker waits for a legitimate user to establish a session with a network service and intercepts the resulting authentication token (for example, a cookie). The attacker then uses that token to masquerade as the legitimate user, hijacking the session. The attacker need not know any network or service passwords to complete such an attack.

• Man-in-the-middle attack—An attacker insinuates an unauthorized machine into the communications channel between two computers, first by redirecting network routing paths and then by masquerading the unauthorized machine as either the sender or recipient of a legitimate message. Once these steps are accomplished, the attacker can choose to either intercept and record messages before passing them on to their intended recipient, or generate new, fraudulent messages that will be perceived as genuine.

• Password sniffing—An attacker uses a hardware device or software utility to monitor all network traffic, identifying and logging messages that contain unencrypted passwords.

• Internet Protocol (IP) address spoofing—A technique in which attackers disguise traffic as coming from a trusted address to gain access to the protected network or resources.
The significance of these attacks in distributed computing environments is grave, as the consequences could include executing harmful code, placing unauthorized orders, or corrupting computational results. The only effective solution is to develop secure intermachine communications channels. By themselves, Web services protocols such as the Simple Object Access Protocol (SOAP) are not inherently secure. But in much the same way that user logins can be made more secure, these protocols can be augmented with additional security layers, incorporating encryption and strong authentication using digital signatures.
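
To illustrate, the following minimal Python sketch shows one way such a layer might work. It substitutes a pre-shared key and an HMAC (a keyed hash) for the public-key digital signatures mentioned above, and adds a timestamp and nonce so that tampered, stale, or replayed messages are rejected. The key and field names are illustrative assumptions, not a prescribed protocol.

    import hashlib
    import hmac
    import json
    import os
    import time

    SHARED_KEY = b"replace-with-a-strong-shared-secret"  # assumed pre-shared key

    def sign_message(payload: dict) -> dict:
        # A fresh nonce and a timestamp make every message unique,
        # blunting replay attacks; the HMAC detects any tampering.
        envelope = {"payload": payload,
                    "nonce": os.urandom(16).hex(),
                    "ts": int(time.time())}
        body = json.dumps(envelope, sort_keys=True).encode()
        envelope["sig"] = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        return envelope

    seen_nonces = set()

    def verify_message(envelope: dict, max_age: int = 300) -> bool:
        sig = envelope.pop("sig", "")
        body = json.dumps(envelope, sort_keys=True).encode()
        expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False                    # forged or tampered message
        if abs(time.time() - envelope["ts"]) > max_age:
            return False                    # stale message: possible replay
        if envelope["nonce"] in seen_nonces:
            return False                    # duplicate nonce: replayed message
        seen_nonces.add(envelope["nonce"])
        return True

A receiver that rejects duplicate nonces and stale timestamps defeats the replay scenario described above; a real deployment would add transport encryption and true public-key signatures as well.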

DENIAL OF SERVICE

In basic terms, denial of service (DoS) refers to any intentional act designed to disrupt the normal function of a system or prevent access to computing resources. DoS attacks can come in many forms and degrees of sophistication. Typical examples include:

• Cutting the cable that provides Internet access.

• Flooding a server with bogus ping packets to choke its network bandwidth.

• Writing volumes of false data to a server to fill up its disk space.

• Tricking a computer into executing code that produces an infinite loop.

• Filling system memory through repeated application requests.

• Altering router or firewall settings to disallow outside connections.

• Blocking other systems through a so-called blacklisting attack—for example, one in which Company A’s firewall is induced to block legitimate traffic from Company B.

A one-time occurrence seldom qualifies as a true DoS attack. A successful network-based DoS attack is usually unexpected, rapid, and overwhelming. For this reason, many DoS attacks are automated, using prewritten scripts designed to execute the same malicious event repeatedly (for example, flooding a mail server with hundreds of duplicate phony e-mail messages).
An organization’s response to a DoS attack should depend on the nature of the attack itself. Adjusting firewall or router settings to block traffic from the offending address may be sufficient to deflect most network-based attacks. Additionally, many common DoS attacks rely on publicized bugs or other vulnerabilities in operating system and server software. DoS incidents can often be avoided—or at least remedied—by keeping up-to-date on all the latest patches for software and network devices in use across the extended enterprise.
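
As a simple illustration of one such countermeasure, the Python sketch below implements a token-bucket throttle that drops traffic from any single source exceeding a set request rate. The rate and burst values are arbitrary assumptions, and a production deployment would enforce this at the firewall or router rather than in application code.

    import time
    from collections import defaultdict

    RATE = 20    # assumed sustained requests per second allowed per source
    BURST = 40   # assumed burst allowance before throttling begins

    buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

    def allow_request(source_ip: str) -> bool:
        # Token bucket: each source earns RATE tokens per second up to
        # BURST; a flood drains its bucket and is dropped, while normal
        # traffic from other sources is unaffected.
        bucket = buckets[source_ip]
        now = time.monotonic()
        bucket["tokens"] = min(BURST,
                               bucket["tokens"] + (now - bucket["last"]) * RATE)
        bucket["last"] = now
        if bucket["tokens"] >= 1:
            bucket["tokens"] -= 1
            return True
        return False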

DISTRIBUTED DENIAL OF SERVICE

The proliferation of high-speed, always-on broadband Internet connections has magnified the threat of DoS attacks, because countless users on both home and business networks can now become participants in a single DoS event. Often, their participation is entirely unwitting.
The typical distributed denial of service (DDoS) attack is launched as the payload of a self-replicating virus or Internet worm that is usually installed as a Trojan horse. The malicious code first self-replicates as often as possible, with the goal of spreading itself to as many points on the network as it can. Then, at some predetermined time, the virus payload will activate, initiating the DoS attack. A payload might broadcast a stream of invalid network packets to a single, specific address (the target of the attack). If the worm has propagated itself across the network successfully, the same DoS attack will be launched simultaneously from countless locations on the network, each aimed at the same target.
Owing to their distributed nature, DDoS attacks can be difficult to combat. Although adjusting firewall or proxy settings can usually block a single DoS attack launched from a single, identifiable source, an attack that originates from points across the Internet is a different matter. Also, the sheer volume of network traffic involved in such attacks makes them all the more crippling. In many cases, the owners of systems participating in the coordinated attack may not be aware of their involvement, or may lack the knowledge to remove the responsible worm from their systems.
Because of these factors, an effective response to a DDoS attack must be a coordinated effort. Security personnel from geographically dispersed locations must often work together to close any security vulnerabilities that may allow the attack to propagate. Early involvement of external entities such as antivirus vendors and law enforcement authorities is therefore essential.

■ Top Vulnerabilities

For a threat to be realized, an attacker must exploit one or more vulnerabilities. The presence of vulnerabilities need not be limited to computer systems; flaws in security procedures or business processes could just as easily become conduits for attack. Many sources of exploits lead to security violations, whether on the network, the enterprise perimeter, a computing or communication platform, or in an application or database. Examples include buggy software; poor system design, implementation, and practices; failure to enforce security policies; eavesdropping; weak authentication standards and policies; and compromised access controls.

FLAWED SOFTWARE

Experienced programmers know that it is practically impossible to completely eliminate bugs in production software. Human error, unforeseen conditions, or unexpected interactions with other software or the operating system can all turn a benign software flaw into an exploitable vulnerability. Examples of common threats related to flawed software include:

• Buffer overflows—When fed more data than expected, the application will write beyond the bounds of allotted memory, potentially overwriting program code and causing the system to execute arbitrary instructions.

• Memory leaks—As demands are placed on the application, it consumes more and more memory, eventually using up system resources and causing a system crash.

• Poor data validation—An application that expects one kind of input accepts another without verifying it. For example, a search engine that expects English-language input might mistakenly accept arbitrary structured query language (SQL) instructions, which are then executed by the database (this kind of attack is known as SQL injection; see the sketch following this list).

• Conflicting libraries—One application installs a different version of a given code library (for instance, a Windows dynamic-link library [DLL]) than the one required by another application, creating unforeseen vulnerabilities.
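
To make the data validation risk concrete, the following minimal Python sketch (using the standard sqlite3 module) contrasts the vulnerable string-concatenation pattern with a parameterized query. The table, column, and input values are invented for illustration.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_input = "x' OR '1'='1"   # hostile "search term"

    # Vulnerable: the input is pasted into the SQL text, so the attacker's
    # quoting rewrites the query and returns every row.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
    print("concatenated query returned:", rows)

    # Safe: a parameterized query treats the input strictly as data.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
    print("parameterized query returned:", rows)

Run as-is, the concatenated query returns every row in the table, while the parameterized version returns nothing for the hostile input.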

As an application grows in complexity, the likelihood that it contains exploitable flaws increases. The only solution is for IT and security staff to diligently apply the latest patches for the software in use on their systems. Many vendors now help to facilitate this process by providing semi-automated, Internet-delivered software and operating system component update systems—for example, Microsoft’s Windows Update or the Red Carpet service for Red Hat Linux.

POOR SYSTEM DESIGN, IMPLEMENTATION, AND PRACTICES

Poor system design can unintentionally introduce serious security problems that can be deliberately targeted by attackers. In this case, a system refers to any specific component of an organization’s IT infrastructure, whether a single application, a single computer, or several machines working in tandem in a distributed environment. Because of the complexity of today’s computing environments and the number of technologies at work, a wide variety of vulnerabilities can arise, such as:

• Unauthorized access points—A user on a network may install a dial-in modem or WLAN access point without the authority or knowledge of the network administrator.

• Compromised operating system—An operating system can be compromised because of poor security model design. For example, the security model in Windows NT causes many users to use an account with administrative privileges for all their daily activities. In addition, current software upgrade schemes and document delivery methods condition users to readily download and execute software with little or no validation of the update’s authenticity. Taken together, these two factors can be a hazardous combination.

• Out-of-date operating system—An installed operating system may lack crucial bug fixes, because the system recently had been restored from a backup tape that did not include the most current updates.

• Unnecessary services—Systems may be running unnecessary and unwanted services. For example, an e-mail server may be improperly configured with file sharing enabled, or its software firewall may incorrectly allow Web access when only e-mail transfer is needed.

• Inappropriate privileges—A group of users may accidentally be given privileges that should be restricted to one member of that group.
Because checking operational configurations is very labor intensive if performed manually, a good security framework includes security-aware configuration management tools that can automatically enforce desired configurations or alert administrators when variances from the known configurations are detected.
Other security incidents occur not because a company’s security technology or its standards are inadequate, but because the individuals responsible simply do not implement them. Examples include software firewalls that are installed but not enabled, or using insecure channels for administrative logins when encrypted alternatives are available. Administrators should take care to review all of the security measures available to them and to make use of as many as possible.
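
A minimal sketch of the kind of automated configuration check described above might look like the following Python fragment, which compares observed settings against an approved baseline and reports variances. The setting names and values are hypothetical; real tools collect the observed values through platform-specific agents or APIs.

    # Hypothetical approved baseline for all hosts of a given class.
    baseline = {"firewall_enabled": True,
                "telnet_service": False,
                "min_password_length": 8}

    def audit(host: str, observed: dict) -> list:
        # Report every setting that deviates from the approved baseline.
        findings = []
        for key, wanted in baseline.items():
            actual = observed.get(key)
            if actual != wanted:
                findings.append(
                    f"{host}: {key} is {actual!r}, expected {wanted!r}")
        return findings

    print(audit("mailserver01",
                {"firewall_enabled": False,
                 "telnet_service": False,
                 "min_password_length": 5}))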

FAILURE TO ENFORCE SECURITY POLICIES

Many security breaches happen because security policies are not strictly enforced or not enforced at all. Employees should be uniformly and consistently reprimanded for failure to observe security policies, and clear consequences should be established for repeat violations. A pattern of such failure is an organizational problem that almost always defeats security measures, regardless of the technologies employed.

EAVESDROPPING

The most common security measures can be compromised if an attacker gains the ability to capture and record interactions between an authorized user and a computer system. Software utilities that allow attackers to clandestinely record every keystroke or mouse movement entered by a user are widely available. Another type of software used for eavesdropping enables the microphone on a notebook computer and then transmits its recordings. Once such tools are installed, an intruder can use them to capture passwords, network logins, or other authentication tokens that may allow further access to the network. For this reason, administrators should remain vigilant against unauthorized software installed on user machines, and should monitor for unauthorized or unscheduled network activity that may indicate signals are being sent to a potential intruder.
Another eavesdropping scenario might involve an internal attacker who has physical access to the systems. In this case, the attacker can connect a small hardware device between the keyboard and the computer that records every keystroke. Because the device cannot be detected by a software utility or through network scanning, it poses a significant eavesdropping risk.

WEAK AUTHENTICATION POLICIES AND STANDARDS

Operating systems, application programs, and even hardware devices such as routers regularly come preconfigured with default account names and passwords. These defaults are vulnerable to attack, and a company’s policies should dictate that they be changed immediately upon installation. But even user-specified passwords can be vulnerable if they can be deciphered through educated guesswork, or by a brute-force attack using an electronic dictionary or other word list. Such attacks can be forestalled by password standards that provide for long passwords requiring numbers, mixed-case letters, and punctuation. Combining passwords with biometric technology such as fingerprint scanning (strong authentication) can also protect against such attacks.
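
As an illustration, a password standard like the one just described might be enforced with a small validator along these lines; the specific rules and minimum length are assumptions to be set by policy.

    import string

    def meets_policy(password: str, min_len: int = 8) -> list:
        # Returns the list of policy rules the candidate password violates.
        problems = []
        if len(password) < min_len:
            problems.append(f"shorter than {min_len} characters")
        if not any(c.islower() for c in password):
            problems.append("no lowercase letter")
        if not any(c.isupper() for c in password):
            problems.append("no uppercase letter")
        if not any(c.isdigit() for c in password):
            problems.append("no digit")
        if not any(c in string.punctuation for c in password):
            problems.append("no punctuation character")
        return problems

    print(meets_policy("trust5"))         # fails several rules
    print(meets_policy("N0t.Guessable"))  # passes: returns []
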
Care should be taken when publishing information about individual users on corporate Web pages, intranets, or directory services like UNIX finger. Often attackers will use this information to identify usernames and guess passwords. Users should be further instructed never to leave any written record of their passwords in their work areas.
When designing authentication systems for applications, care should be taken to implement the system in such a way that it will discourage password attacks. For example, because users occasionally miss a key while typing their password, two failed login attempts (with a short delay between each attempt) are often allowed. However, after the third failed login, the user may be forced to contact the network administrator for a new password.
In such an implementation, the system should not announce to users whether it was the username or password entered that was incorrect, as this message would give information to attackers. Failed access attempts should be logged so that administrators can pinpoint where potential break-ins may occur or are occurring. Finally, any authentication tokens that will be transmitted across the network should be protected with strong encryption.
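
A hedged sketch of such an authentication flow appears below: it delays and counts failed attempts, locks the account after the third failure, logs every failure, and reports only a generic error. The verify callable and attempt limits are illustrative placeholders, not a prescribed design.

    import logging
    import time

    logging.basicConfig(filename="auth.log", level=logging.INFO)
    failed = {}          # username -> consecutive failed attempts
    MAX_ATTEMPTS = 3

    def login(username: str, password: str, verify) -> bool:
        # `verify` is a hypothetical callable that checks the credential store.
        if failed.get(username, 0) >= MAX_ATTEMPTS:
            logging.warning("account locked: %s", username)
            return False                  # user must contact the administrator
        if verify(username, password):
            failed.pop(username, None)
            return True
        failed[username] = failed.get(username, 0) + 1
        logging.warning("failed login for %s (attempt %d)",
                        username, failed[username])
        time.sleep(2)    # short delay slows automated guessing
        # Deliberately generic: do not reveal whether the username
        # or the password was wrong.
        print("Login incorrect.")
        return False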

COMPROMISED ACCESS CONTROLS

Strong passwords and robust authentication systems have little effect if access to user accounts is managed improperly. Disgruntled present and former employees can easily take advantage of their ability to enter corporate premises to inflict damage on enterprise computing resources, steal, or damage sensitive data. Access for terminated employees should be promptly and thoroughly revoked, including e-mail access, network logins, access to collaborative tools, and access to dial-in resources such as modem banks and WLAN access points.
Like any other sensitive information, access control lists (ACLs) may themselves be the targets of security attacks. In many cases, the purpose of the first attack will be to establish a back door that can be used to further penetrate the enterprise at a later time. With an ACL in hand, an attacker can plan methods to circumvent the restrictions, or even to alter the lists to simplify an attack. These lists and user directories should be protected and all access to these directories (authorized or otherwise) should be logged for analysis.

■ Primary Threat Detection Activities

While the goal for IT administrators and systems security personnel should always be to eliminate vulnerabilities as soon as they are discovered, it would be unrealistic to expect that any network could remain perpetually impervious to attack. At some point in time, any network is likely to contain undiscovered vulnerabilities. Thus, a proactive approach to detecting incoming attacks and threats is necessary. The recommended plan of action involves four primary activities:

• Intrusion monitoring—Intrusion monitoring is a security activity designed to detect and sometimes prevent threats, minimizing their impact on computing systems and other information assets. Malicious attacks may be attempted against essential assets in order to compromise the confidentiality, integrity, or availability of the resource. Intrusion monitoring systems can be divided into two categories: reactive monitoring—usually the province of intrusion detection systems (IDSs)—and proactive monitoring, as provided by intrusion prevention systems (IPSs).

• Malicious program detection—This activity monitors technologies for viruses, worms, Trojan horses, and code that introduces holes into the environment (such as back-door software). This area of threat detection is the most mature in the TVM product market. Product families include antivirus, Web filtering, content filtering, and e-mail filtering applications. Many of these products include both detection and remediation capabilities.

• Log activity analysis—One way to gain a broader understanding of how a particular network functions is to gather logged information from disparate systems, normalize it, accumulate the information into a data store (often a relational database), and then correlate events based on heuristics, known behavior patterns, or stateful rules. This activity, known as log activity analysis, helps detect more than external threats. In fact, it can be particularly useful for detecting intrusion attempts launched inside an organization and often ignored by perimeter security devices (such as firewalls and IDSs). Log activity analysis is a first step toward event correlation, in which logs are collected and their contents analyzed as a whole.

• Rogue technology discovery—The goal of this activity is to detect devices and applications that appear unexpectedly in the security management domain. Such devices could be either a legitimate addition that fails to follow proper change control procedures (such as a misconfigured router), or an unauthorized device that directly compromises an information asset (for example, a WLAN access point installed by an unauthorized employee). Rogue technology discovery tools are used to identify devices, systems, and applications, and can be used both internally and externally (a minimal discovery sketch follows this list).
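
The following Python sketch gives a rough flavor of such discovery: it probes an address range for a listening service and flags any responder missing from an approved inventory. The subnet, port, and inventory are invented assumptions, and real discovery tools use far richer fingerprinting.

    import socket

    KNOWN_HOSTS = {"10.0.0.1", "10.0.0.10"}   # assumed approved asset inventory

    def discover(prefix: str = "10.0.0.", port: int = 80) -> None:
        # Probe each address for a listening service and flag any
        # responder that is absent from the approved inventory.
        for i in range(1, 255):
            addr = prefix + str(i)
            try:
                with socket.create_connection((addr, port), timeout=0.2):
                    if addr not in KNOWN_HOSTS:
                        print("unapproved device answering on", addr)
            except OSError:
                pass   # no response: nothing listening at this address

    discover()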

MALICIOUS PROGRAM DETECTION

For any organization, prevention is the first line of defense against malicious code. Users should be educated about the risks posed by viruses, worms, and so forth, and encouraged to take appropriate preventive measures. In particular, they should avoid indiscriminately opening attachments received via e-mail, downloading code or applets from untrusted sources, or installing pirated software, because all of these can be primary sources of viral infection.
Reliance on these manual methods is a poor guarantee of security, however. True prevention requires active countermeasures that react at the speed of the malicious code itself. Virus incidents are automated attacks; antivirus and content filtering software can provide enterprises with appropriately automated defenses.

Antivirus Software

Viruses and worms pose an ever-present threat to enterprises today. A 2002 study conducted by the International Computer Security Association (ICSA) showed that 100 percent of businesses surveyed had experienced at least one virus encounter within the last 12 months. In the same survey, 74 percent of respondents felt the overall virus problem had worsened since the previous year.
Because stealth is a common characteristic of virus software, the only truly effective means of detecting and defending against these attacks is to employ specific software countermeasures. Antivirus software comes in several different forms:

• Signature scanners—These devices scan system memory, disks, and files, and compare their contents against a database of known virus signatures. Because of the proliferation of new viruses, these databases must be frequently updated to remain current. To facilitate the updates, many vendors provide automatic, network-based update systems (for example, Symantec’s LiveUpdate feature). Scanning software can be either run on demand or set to operate in the background, scanning each new document, program, or e-mail attachment as soon as it is opened or written to disk. The latter method is more secure, but it affects performance on the host workstation somewhat and may be incompatible with the normal function of some applications.

• Heuristic scanners—These scanners perform statistical analysis on the actual program instructions contained in executable files to assess the probability that they might exhibit virus-like behavior. Because they do not rely on lists of known signatures, heuristic scanners can detect even hitherto unknown viruses. Their incidence of false positives (false alarms), however, is usually higher than that of signature scanners.

• Checksum scanners—Rather than relying on a static database of known virus signatures, a checksum antivirus scanner will compile its own list of signatures—called cyclic redundancy check (CRC) codes or checksums—for the files currently on the disk. On subsequent runs, it compares the known list to the current state of the drive, flagging any irregularities. Although effective, this antivirus measure is not comprehensive, because it cannot detect viruses in newly arrived files, nor can it check for viruses in memory. However, checksum scanning can provide an additional line of defense when combined with other scanning methods (see the sketch following this list).

• Behavior blockers—These tools attempt to monitor the system environment in real time, looking for program instructions that may have detrimental effects, and interrupting processing before the damage can occur.

Unfortunately, well-known methods of overriding such software cause a high failure rate, but behavior blockers can still be useful as part of a comprehensive antivirus solution.
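
As a rough illustration of the checksum approach described in the list above, the following Python sketch records a digest for every file under a directory and flags files that change between runs. It substitutes SHA-256 for the CRC codes mentioned in the text, and the command-line arguments are placeholders.

    import hashlib
    import json
    import os
    import sys

    def snapshot(root: str) -> dict:
        # Record a digest for every file under root. SHA-256 stands in
        # for the CRC checksums described in the text; either serves as
        # a fingerprint of the file's current contents.
        digests = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    digests[path] = hashlib.sha256(f.read()).hexdigest()
        return digests

    def compare(old: dict, new: dict) -> None:
        for path, digest in new.items():
            if path not in old:
                print("NEW FILE (not yet baselined):", path)
            elif old[path] != digest:
                print("MODIFIED (possible infection):", path)

    # usage: python scanner.py <directory> <baseline.json>
    if __name__ == "__main__":
        root, store = sys.argv[1], sys.argv[2]
        current = snapshot(root)
        if os.path.exists(store):
            with open(store) as f:
                compare(json.load(f), current)
        with open(store, "w") as f:
            json.dump(current, f)
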
Once infected files are identified, the software can take various measures to correct the problem, from removing viral code on a per-file basis, to more complex operating system–level repairs on the disk itself. When the appropriate remedy is unclear, most software can be set to report anomalies and suggest next steps. In extreme cases, repairing an infected file may be impossible, and the file must be quarantined in a designated safe storage area, pending deletion.
Worms and other Internet-spread viruses are special cases, and their remedy requires additional measures. Every system with which the infected computer may have been in contact must also be disinfected, including file servers and other workstations on the LAN. Best practices also suggest informing clients, vendors, and other partners whose systems may have been exposed during such an outbreak. Although embarrassing, such disclosure may be preferable to the loss of goodwill suffered due to the spread of viruses.

Content Filtering Systems

One effective means of keeping undesirable content out of enterprise networks—content that includes spam, offensive materials, viruses, worms, or other unauthorized software—is to deploy some form of content filtering system. The most basic example enables simple filtering rules in e-mail client software to block unwanted messages. This method has limited effectiveness, however, and it puts responsibility for filtering in the hands of individual end users.
More comprehensive solutions implement filtering at the server or network level, using either software or specialized hardware appliances. This form of filtering is similar to, but not the same as, that performed by a firewall. Rather than blocking network traffic outright like a firewall, filters aim to intelligently analyze incoming messages or Uniform Resource Locator (URL) requests (using antivirus software or other techniques) and block any traffic deemed contrary to company policies. This traffic can include not just e-mail, but Websites, instant messaging, chat rooms, and other services. Filters can work in a number of ways:

• Filtering by address—Blocking traffic from certain domains or URLs; for example, refusing to relay e-mail from the servers of known spammers, or blocking access to known pornographic Websites. Building and maintaining lists of such sources can be a cumbersome process. Fortunately, a number of filtering software vendors and independent organizations make available their own lists to which administrators can subscribe. However, one risk of this approach is that legitimate content could be maliciously blocked through such a list. For example, an attacker could perpetrate a DoS attack by simply forging evidence and reporting a Website or e-mail address to the list services. The list services would then block all content to and from the targeted entity.

• Filtering by content—Blocking traffic based on certain content types. For example, when an e-mail message arrives bearing an attachment containing executable code, the filter might scan the file using antivirus software before passing it along to the local mail server. Another option might be to automatically eliminate images or HTML/JavaScript code from incoming e-mail messages.

• Keyword filtering—Screening content based on a so-called hit list of words or phrases commonly associated with spam or offensive subject matter. More recently, some filtering systems have begun employing a technique called Bayesian filtering, which uses a combination of statistical analysis and human intervention to learn from blocked content, improving the filters’ chances of success and reducing the number of false positives over time.

• Advanced techniques—Some filters take advantage of still more sophisticated filtering techniques now under development. For example, a number of companies have developed filters that can analyze graphic images for their content, rejecting an image if it returns a high probability of containing pornography. Most such technologies are relatively immature, although demand for them is high, particularly in the content-hosting industry.
Suspicious content identified by a filter may be automatically blocked, but there are also other options. Most filters can be configured to report suspect traffic to an administrator for manual approval, or simply to log incidents for later review. Senders of blocked e-mail can be informed of the action taken, and blocked messages can be quarantined in a holding area rather than deleted outright. The content must then be manually vetted by administrators. To combat spam, one technique is to have the e-mail system send a bounce e-mail to the originator of the message. This message appears to be generated by the e-mail system, informing the sender that the recipient does not exist. Some spam senders will thus be fooled into removing the corresponding e-mail address from their databases. This feature is built into most e-mail clients and can also be enabled on some e-mail servers.
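
A toy Python sketch of the address and keyword techniques described above might look like the following. The blocked domains, hit-list phrases, and thresholds are invented for illustration; production filters operate at the server or gateway with far richer rules.

    BLOCKED_DOMAINS = {"known-spammer.example"}               # illustrative
    HIT_LIST = {"viagra", "free money", "click here now"}     # illustrative

    def classify(sender_domain: str, body: str) -> str:
        # Filtering by address: refuse mail from listed sources outright.
        if sender_domain.lower() in BLOCKED_DOMAINS:
            return "block"
        # Keyword filtering: count hit-list phrases in the message body.
        text = body.lower()
        hits = sum(1 for phrase in HIT_LIST if phrase in text)
        if hits >= 2:
            return "block"
        if hits == 1:
            return "quarantine"   # hold for manual review, do not delete
        return "deliver"

    print(classify("partner.example", "Quarterly figures attached."))  # deliver
    print(classify("known-spammer.example", "hello"))                  # block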

In many cases, automatic deletion of filtered content is not a suitable response. False positives are a perennial problem—for example, although screening out e-mail messages that contain the word Viagra might be advisable for most organizations, it would be wholly inappropriate for a pharmaceutical company. Or legitimate content may be inadvertently blocked—for example, content filters that do not allow users to access government and community Websites for Sussex County because the URLs contain the word sex. Similarly, Bayesian-based solutions have been known to block the Website of the document and business services company Kinko’s (kinkos.com) because they infer that the content is related to the word kinky.

Additionally, the legal ramifications of using filtering software are as yet unclear. Can a company be held negligent for failing to install filters? Or does establishing a filtering policy only increase the company’s liability when unwanted content passes through? Because of these factors, enterprises are advised to consult thoroughly with experts—including IT, business management, and legal resources—before implementing any content filtering system.

LOG ACTIVITY ANALYSIS

High-profile attacks such as buffer overflows, format string exploits, DoS, and SQL injection stem from malicious outsiders’ ability to break through an organization’s network and perimeter security.

But security devices such as firewalls and IDSs are often of little help in detecting incursions launched inside an organization, because of two factors. First, these devices are usually installed only on network perimeters (for example, the Internet and partners’ connections). Second, the technologies these devices use to detect attacks—mainly pattern matching and protocol analysis—fail to detect attacks that either do not match known signatures or else mimic a normal connection (for example, a terminated employee whose account is still active).
As a result, to detect this type of threat, companies must also focus security efforts at the asset level, including applications, workstations, servers, and so on. One of the most important tools in this process is log activity analysis.
Identification of Significant Events

Most technology assets have the ability to generate logs of significant system events. Events that might be logged include user authentication information, errors, data access, and administrative functions, as well as general system status notifications.

The specific audit settings available on a device and the normal patterns for its event logs will depend on the type of technology. For example, an authentication server would be configured to record many more authentication events than a file server. Conversely, a file server would record more file and directory access events than an authentication server. Still, in most cases audit settings should be configured similarly across all assets in an organization, because information that is pertinent to one technology will also be pertinent to others.

Authentication events captured on a file server are still pertinent, for instance, because any failed authentication events might indicate an attempted unauthorized access. Similarly, any file and directory events recorded by an authentication server should be investigated, because transferring files is not the usual function of such a device. Properly configured event logs that conform to a standardized audit policy can aid in both troubleshooting and threat detection.

Equally important though often overlooked is the accuracy of the internal clock settings of devices across the enterprise. Proper clock settings are indispensable when trying to match log entries reported by different systems and trace the sequence of events in an attack. One way to ensure consistency is to configure every device to obtain its time setting from a common network time protocol (NTP) server.
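As a minimal illustration, the following sketch checks a host’s clock offset against an NTP server. It assumes the third-party ntplib package; the server name and tolerance are placeholders for an organization’s own values:

# Requires the third-party ntplib package (pip install ntplib).
import ntplib

NTP_SERVER = "pool.ntp.org"   # placeholder: use the organization's own NTP server
MAX_OFFSET_SECONDS = 1.0      # assumed tolerance before log correlation becomes unreliable

def check_clock_offset():
    client = ntplib.NTPClient()
    response = client.request(NTP_SERVER, version=3)
    offset = response.offset  # local clock minus server clock, in seconds
    if abs(offset) > MAX_OFFSET_SECONDS:
        print("WARNING: clock offset %+.3fs exceeds tolerance" % offset)
    else:
        print("Clock offset %+.3fs is within tolerance" % offset)

check_clock_offset()

Running such a check periodically on critical assets provides early warning that a device has stopped synchronizing, before skewed timestamps undermine an investigation.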
The sheer volume of messages logged by properly configured servers can be a problem of its own. Multiple logs generated by multiple devices across the enterprise can quickly add up to terabytes of data. Without a well-designed storage strategy that allows easy access and analysis of archived logs, all the information gathered might be useless.

One approach is to maintain a single, central syslog server where all logs are periodically sent for storage. Larger organizations may prefer to maintain different storage locations for different types of logs; for example, logs generated by servers running Windows could be separated from the logs of UNIX machines.
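A central collector need not be elaborate. The sketch below receives syslog messages over UDP and appends them to one file per sending host, which keeps logs from different platforms separable. It is an illustrative sketch only; a production deployment would rely on a hardened syslog daemon with authentication, rotation, and archival:

import socket
from pathlib import Path

LISTEN_ADDR = ("0.0.0.0", 514)      # standard syslog/UDP port (requires privileges)
LOG_DIR = Path("/var/log/central")  # hypothetical archive location

def run_collector():
    LOG_DIR.mkdir(parents=True, exist_ok=True)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)
    while True:
        data, (host, _port) = sock.recvfrom(8192)
        # One file per source host keeps, for example, Windows and UNIX logs apart.
        with open(LOG_DIR / ("%s.log" % host), "ab") as f:
            f.write(data.rstrip(b"\n") + b"\n")

run_collector()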

Large volumes of log data also require organizations to consider how best to use this data to proactively detect intrusion. Regularly reviewing every log generated by every system would not be feasible. Determining which logs to analyze can be the most difficult task of log activity analysis. No standard solution exists, as the systems and logs deemed crucial will vary from company to company; each organization must assess its own environment. Once logs have been gathered into a central archive, analysis can begin. In addition to those essential systems that have been flagged for regular, proactive review, any logs that deviate from the normal log pattern—such as instances of multiple authentication events captured from a router or database server—should be reviewed.
Identifying which log entries signal an attack can be difficult. It requires knowledge of many different types of attacks (and new ones are being devised almost daily), as well as an intimate knowledge of the IT environment that generated the logs. In many cases, companies subscribe to a security intelligence service that can aid administrators in identifying intrusion events.
Whatever resources are drawn upon, the essential goal is to identify events that should create concern. Examples of these types of events include:

• Rapid failed logins

• Repeated access attempts from the same IP address

• Unusual login activity from administrative accounts

Next, organizations must define processes that allow proper escalation of identified events. The following questions and concerns must be covered:
• Who are the appropriate internal contacts?

• Do systems need to be quarantined?

• Should customers be notified?

• When should external authorities be contacted?

Without proper escalation procedures, an organization will not be able to efficiently and effectively respond to an identified event.
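To make the first example above concrete, the following sketch scans an authentication log for rapid failed logins, flagging any source address that fails too many times within a sliding window. The log format, window, and threshold are assumptions to be tuned per environment:

import re
from collections import defaultdict, deque
from datetime import datetime

WINDOW_SECONDS = 60   # assumed detection window
THRESHOLD = 5         # assumed failure count that warrants escalation

# Hypothetical log line: "2004-03-01 03:12:45 FAILED LOGIN user=alice ip=10.0.0.7"
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) FAILED LOGIN .*ip=(?P<ip>\S+)"
)

def find_bursts(log_lines):
    recent = defaultdict(deque)   # source IP -> timestamps of recent failures
    alerts = []
    for line in log_lines:
        m = LINE_RE.match(line)
        if not m:
            continue
        ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S")
        q = recent[m.group("ip")]
        q.append(ts)
        # Discard failures that have fallen out of the sliding window.
        while (ts - q[0]).total_seconds() > WINDOW_SECONDS:
            q.popleft()
        if len(q) >= THRESHOLD:
            alerts.append((m.group("ip"), ts))
    return alerts

Each alert produced by such a script would then enter the escalation process, classified under the risk categories discussed below.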
Defined risk categories can assist this process. These need be no more complicated than labels classifying high-, medium-, and low-priority threats. High-priority threats might demand immediate calls to external authorities; medium-priority threats might require immediate internal action; and low-priority threats might be resolved the following day.
Finally, at some point logs must be purged and archived. Government regulation might require that some logs be stored for a certain amount of time. However, even without regulation, logs should be archived for some period of time, because logs of past activity might aid investigation of a current incident. A backup, purging, and archiving policy for log files should be defined and tested regularly.
Log activity analysis is the first part of a much larger process: event correlation, which is the process of analyzing events across assets. Log analysis must be performed correctly for event correlation to be effective.

ROGUE TECHNOLOGY DISCOVERY

Rogue technologies are threats that usually result from the actions of internal employees. Organizations can minimize this risk by standardizing on a pre-approved set of computing devices and system configurations that can be actively monitored for vulnerabilities. Whenever employees install additional hardware and software (for business or personal use) beyond what has been formally specified, they can compromise these policies, even without meaning to.
For example, employees might install modems or deploy access points for wireless networking without prior approval—and without regard for corporate policies. Peer-to-peer networking software, browser extensions, or chat software installed by employees for their own use can prove equally dangerous. Of particular concern is the risk of rogue technology deployed by upper-level management. An executive might implement a rogue technology such as a WLAN without fearing any negative consequences for violating security policy, because the organizational structure may include no one empowered to enforce the policy at upper levels. Rogue hardware or software introduced in this way can be especially difficult to detect and remove.
Because these unapproved devices and applications are not managed or monitored by the organization’s IT or security groups, any new vulnerabilities they may introduce can be exploited with impunity, without the knowledge of either group. As the popularity of wireless networking has increased, unauthorized wireless equipment has become a particular problem in this regard.
Identifying, exposing, and controlling these unauthorized devices is essential for proper enforcement of established network security policies. Various forms of software exist that can help in this process. One kind simply port-scans a range of IP addresses for known types of network traffic. Such software is often included as part of enterprise systems management (ESM) packages or vulnerability detection systems. Agent-based asset management systems are another option. These systems install on each user workstation a small application that runs during login, creating a catalog of all software installed on the local system. Companies can also create custom scripts that enable them to compare attributes of installed systems with the attributes of approved configurations.
For example, a company wishing to detect and disable the use of chat applications might employ several techniques. First, the company could monitor its proxy server and detect connections to Websites like AOL.com, from which a user needs to download the AOL Instant Messenger (AIM) client to use the instant messaging service. Also, because software such as AIM can connect to a messaging server over secure HTTP (HTTPS), the company can set the proxy to block such traffic. Finally, because all browsers send an identification string (the User-Agent header) when they request a Website from a Web server, the company could simply allow only the approved browser’s identification string through the proxy.
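The last of these techniques amounts to an allowlist check on the browser identification string. The sketch below shows the decision logic in isolation; the integration point is hypothetical, since real proxies implement this through their own configuration rather than custom code:

# Hypothetical approved identification string for the corporate standard browser.
APPROVED_USER_AGENTS = (
    "Mozilla/4.0 (compatible; MSIE 6.0;",
)

def allow_request(headers):
    """Permit a request only if its User-Agent matches an approved prefix."""
    ua = headers.get("User-Agent", "")
    return any(ua.startswith(prefix) for prefix in APPROVED_USER_AGENTS)

# A chat client announcing its own identification string is denied:
print(allow_request({"User-Agent": "AIM/5.2"}))  # False
print(allow_request({"User-Agent":
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"}))  # True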
Although these techniques and tools can help detect rogue technologies, they are not a complete solution to the problem. Often, rogue network applications like chat clients and peer-to-peer software will transmit their traffic using the same network ports and protocols used by standard Web browsers. This approach is often used to allow the applications to work through firewalls, which usually screen out traffic on other ports. Unfortunately, this scenario is precisely what most organizations do not want. In effect, the traffic of these rogue applications is disguised as legitimate Web traffic, making it difficult to detect via automated means.

Rogue hardware may also be difficult to detect because scanning tools are limited in their ability to differentiate between legitimate devices and unauthorized ones. For example, many laptops now come with both modems and WLAN adapters built in. A software agent would detect these devices and mark them as possible rogue technologies, but would be unable to differentiate between actively used components and those that are installed but never used.
Therefore, the first line of defense against the use of rogue technologies is a clearly defined and published security policy that details what the organization considers rogue hardware and software, as well as the consequences of installing and using unauthorized devices and applications (which may range from a formal reprimand to termination, depending on the severity of the infraction). As with other company policies, employees should sign and acknowledge the network security policy in writing. When used with firewalls, application proxies, routine system audits, and secure configurations that cannot be modified by the end user, strict adherence to such a policy is the most effective way for organizations to retain control over the technologies in use on their networks.

■ Primary Vulnerability Detection Activities

Identifying threats that can put an organization at risk is only one part of a comprehensive security strategy. Companies must also actively identify asset weaknesses that could be exploited in an attack.
Every enterprise asset has attributes that make it vulnerable in some way, whether the asset is a server, a client, a Website, transactional data, or a business process. Some vulnerabilities might be the result of weaknesses in the technologies that control the asset, such as a buffer-overflow bug in the application software that runs a company’s Website. Others are simply due to the inherent nature of the asset itself, such as the need for confidential data to remain private. Table 10 provides an overview of some common enterprise vulnerabilities.
Although an enterprise cannot control the existence of vulnerabilities, it can control the way in which it chooses to deal with them. Properly implemented security policies, standards, and technologies can help to limit risk by proactively identifying weaknesses in enterprise assets. The goal of this activity, called vulnerability detection, is to allow organizations to remediate vulnerabilities before they can be exploited by an attack. Before organizations can undertake vulnerability detection, they must first develop an understanding of where their vulnerabilities might exist—whether in their technology, process, environment, or some other potential point of failure. This general understanding should inform the organization’s security policies and standards.
Organizations usually implement vulnerability detection programs in phases, starting with the most necessary assets (as identified during the risk assessment or asset classification process) and widening the scope to less essential assets as the overall environment becomes more controlled. Managing the scope of the project in this way is often essential to its success, as it limits the amount of raw data generated by vulnerability detection activities to a level that can yield usable information.
The vulnerability detection process consists of three primary activities:

• Compliance testing—The aim of this activity is to continuously review targeted assets for conformity with established security standards and policies. Having defined what they consider to be acceptable limits for compliance, enterprises can employ both automated and manual techniques to measure whether their technologies and procedures fall within these limits. If compliance is not met, necessary changes to systems (or potentially, to policies and standards) that will enable compliance must be identified.

• Vulnerability scanning—Vulnerability scanning processes identify weaknesses in technology assets and business processes, with the intent of better controlling the security of an organization’s environments. Several techniques can accomplish this identification. One method involves conducting periodic attack and penetration studies against specific assets, such as operating systems, firewalls, databases, and applications. Another approach is to perform vulnerability and configuration tests on an ongoing basis using automated scanning tools.

• Operations availability analysis—The final activity of vulnerability detection seeks to resolve asset weaknesses that are not subject to conventional attacks. Instead, it focuses on the operational resilience of assets, including tasks like maintenance of high-availability systems; backup, recovery, and tape rotation services; business continuance and disaster recovery services; and ongoing, dynamic system capacity and performance engineering.

COMPLIANCE TESTING

Compliance testing can occur at many levels of the enterprise. Its primary aim is to ensure that organizations conform to their own established security policies and standards. An organization might choose to measure compliance against any number of criteria, such as:

• Security policies against regulatory requirements

• Corporate standards against security policies

• Documented procedures against security policies

• Procedures as practiced against documented procedures

• Technical controls against security policies

• Integration of information systems against technical controls

• Organizational security policy against specific departmental or system procedures

• Risk exception inventory against documented policy exceptions

In the same way that creating and implementing corporate security policies require a well-defined method, so must companies define their approach to compliance testing. An effective compliance-testing program has five basic characteristics: independence, planning, evidence gathering, reporting, and follow-up.
Failure to include any one of these criteria when devising test procedures can undermine the results of the testing process, thus limiting its usefulness.

Independence

Compliance testing is of value only if the results of the test are impartial. To ensure that a compliance test maintains its integrity and objectivity, the person or group conducting the test must be independent of the asset being tested.
In its strictest sense, independence can be defined as lacking any direct or material indirect financial interest in the asset being tested. In simple terms, this means that the testers should not be involved in any operational or financial decisions. Furthermore, they should be responsible to a department other than the department that is conducting the test, to avoid any kind of managerial influence or pressure that may skew analysis of test results.
For example, in a typical company the security group might be responsible for maintaining the security policy and the technical controls to be tested, while the IT department maintains the information systems. In such a case, neither the security group nor the IT department should perform the compliance tests. Instead, the company should call upon another internal entity, such as an IT auditing department, or perhaps an outside specialist.

Planning

For a compliance test to be successful, an organization must first clearly specify the objectives and scope of the test, in the form of a test plan. The planning process begins by examining the organization’s established security policies and the procedures it has implemented to comply with them.
Working from this foundation, the organization must decide which specific assets to test (including policies, tools, standards, procedures, and technologies), how often to test them, and which areas and departments are to be included and excluded from testing. It must also consider how to accommodate any necessary changes in security policies that the tests may identify, as well as how to re-evaluate and refine its testing approach across successive tests. Selection priorities will vary depending on the objectives for each test.
For example, a primary objective might be to test all high-risk systems, such as those that are Internet-facing or that contain classified information. Other objectives could include testing all the remaining low-risk systems that were excluded from previous compliance tests, or testing systems that are maintained by outside vendors. In many cases, a company will choose to deploy a compliance-testing program in stages; for example, Windows servers the first year, Solaris systems the next year, and so on.
The organization must next specify the types of evidence to be collected, which will dictate the tools and techniques used to gather it. For example, corporate policy might stipulate that all passwords across the enterprise should be at least eight characters long. One compliance testing process for this policy might entail using a software tool to examine a particular asset (for example, a server) to determine whether the actual passwords stored on it comply with the minimum length requirement. Another process might involve surveying specific configuration settings to confirm that the policy was being programmatically enforced, using either manual inspection or automated scripts or programs.
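As an illustration of the second process, the sketch below verifies programmatic enforcement of a minimum password length on a UNIX-like host, where the setting commonly lives in /etc/login.defs as PASS_MIN_LEN. The policy value and file location are assumptions; Windows systems expose the equivalent setting through account policy instead:

POLICY_MIN_LENGTH = 8              # the corporate policy value
LOGIN_DEFS = "/etc/login.defs"     # assumed location on a UNIX-like host

def check_min_length(path=LOGIN_DEFS):
    configured = None
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "PASS_MIN_LEN":
                configured = int(fields[1])
    if configured is None:
        print("PASS_MIN_LEN not set: policy is not programmatically enforced")
        return False
    compliant = configured >= POLICY_MIN_LENGTH
    print("PASS_MIN_LEN=%d -> %s" % (configured,
          "compliant" if compliant else "NON-COMPLIANT"))
    return compliant

check_min_length()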
When determining the type and approach of evidence gathering, organizations also must consider the risks involved. In the course of gathering evidence for a compliance test, testers might be given administrative access to confidential or protected data residing on systems. This practice in itself creates a new vulnerability that should be clearly assessed and understood by the company.

Evidence Gathering

Compliance testers must record specific evidence to support their findings and conclusions. When gathering evidence, testers must consider the following requirements:

• Persuasiveness—Testers must exercise good professional judgment to decide whether the evidence gathered will support the reported findings in a persuasive manner or merely cause confusion. One helpful technique is to rank specific evidentiary points from the most persuasive to the least persuasive.

• Relevance—The evidence should be logical and should support the goals and objectives set during test planning.

• Objectivity—Neither the test results nor the conclusions drawn from them should be colored by the personal opinions or prejudices of the testers. Evidence that has integrity and remains objective provides much stronger support for test findings.

• Freedom from bias—Neutrality should exist in the evidence. As much as possible, the tests should favor no particular vendor, process, or method.
Evidence can be collected in many forms:

• Documentary—A permanent form of evidence, such as written procedures, flow charts, or log files.

• Analytical—Reports derived from expert comparison and scrutiny of data, questionnaires, testimonials, or documents.

• Physical—Evidence obtained through direct observation of people, property, and events.

• Testimonial—Statements made by staff can confirm test results, but this form of evidence usually requires corroboration.
For example, when a company is testing its technical controls for compliance with its security policies, it will want to gather the technical control logs and security policy statements as evidence of test results. Further evidence could be gathered using a variety of means, including physical examination, interviewing, analytical review, procedure mapping, and so on.
When gathering test data regarding IT systems, most organizations will want to maintain their findings in an electronic format such as a spreadsheet or database. When reviewing evidence in such a malleable form, however, care should be taken to ensure that the documents being examined are not changed during the review process and are in fact the documents that are used in the organization. For example, one way in which an organization could ensure data integrity would be to obtain signed evidentiary documents on CD-ROM directly from the security department’s manager. Then a copy of these documents could be used during the test, while the original was kept in a safe place.
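A complementary integrity control, not part of the procedure described above but compatible with it, is to record a cryptographic hash of each evidentiary file when it is received and to re-hash before relying on a working copy. Any change to a document during review would then be detectable:

import hashlib
from pathlib import Path

def hash_file(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(evidence_dir):
    """Map each evidence file to its digest; store the manifest with the originals."""
    return {str(p): hash_file(p)
            for p in Path(evidence_dir).rglob("*") if p.is_file()}

def verify(manifest):
    """True only if every file still matches the digest recorded at intake."""
    return all(hash_file(Path(p)) == digest for p, digest in manifest.items())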
More often than not, compliance tests of information systems will have evidence-gathering requirements that call for the use of automated tools. These tools gather all the registry settings and permissions, file system permissions, and security configuration settings for a given application. A number of commercial tools for Windows-based environments are available. Companies with UNIX environments often develop their own, in-house evidence-gathering tools, although some UNIX-based security benchmarking tools are available from security organizations and freeware vendors.

Reporting

Following the completion of the compliance test, testers should create a report summarizing and detailing their findings. To increase the likelihood that the results of the tests will be acted upon, the findings should be presented in a concise format that is easy to read and understand. The report should identify the objectives and scope, the period of coverage, and the nature and extent of the test performed. A typical summary report might consist of the following sections:

• Background—The nature, purpose, and function of the assets or policies that were tested.

• Objectives—The goals of the test and what assumptions it set out to verify.

• Test approach—The methods, tools, and procedures followed during the test, as well as the reasons why the test was designed in this way.

• Results gathered—A summary of the material evidence produced during the test.

• Conclusions—Analysis of the test results; for example: “The firewall adequately filtered inbound and outbound traffic. However, deficiencies were noted concerning IP address translation as defined by Section 1.1.1 of the Information Security Policy.”

• Acknowledgments—Who performed the test, who contributed other material evidence or background materials, and who can provide further information.
Once the report has been completed and its findings acknowledged, organizations generate an action plan to respond to the findings of the test. Ultimately, this action plan is the most important part of the test process. (See Figure 37 on page 164 for example action plans.) At a minimum, the action plan should consist of the following:

• Any new controls to be implemented and the techniques to be used—identified organizationally, physically, or logically.

• Expected dates by which the controls should be implemented.

• Who has responsibility for implementing the defined controls.

• Who was responsible for developing the implementation plans.
An action plan could also result in changes to an organization’s security policies or a documented exception to the policies. Consider, for example, a company with a technical standard dictating that the File Transfer Protocol (FTP) service should be disabled on Internet-facing servers, but which also has a business application that requires FTP. In this case, the business line manager would be required to make a case for why FTP cannot be disabled in this instance (an exception to the policy). The manager would need to understand the risk this introduces and might make an explicit statement that the group in question is accepting that risk. Likewise, the security department might conduct an assessment to determine whether the risk is an acceptable one. The company would also likely develop a project plan of how the application could be changed or replaced so that FTP can be disabled, thus adhering to the existing policy. Documenting such exceptions to policy enables companies to track these exceptions over time and to centrally manage risk.

Follow-Up

The completion of a single compliance test is not the end of the process, even after an action plan has been drafted and set into motion. Follow-up tests are necessary to evaluate the status and effectiveness of the measures recommended in the action plan. The results of the follow-up test can answer important questions, such as:

• Have the recommended corrective measures been implemented?

• To what extent have they been implemented?

• Are the new controls operating properly?

• Are the new controls effective?

• What impact have the new controls had on the asset?

• Will further compliance testing be necessary?

Follow-up testing occurs after the implementation of recommended technical controls and/or policy changes. Evidence is gathered from an asset and then the new data is checked against the outstanding issues from the original compliance test. Technical system follow-ups may require the use of automated tools to gather the requisite evidence and cross-check the results against those obtained prior to the implementation of the action plan.
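The cross-check itself can be as simple as set arithmetic over finding identifiers from the two tests, as the sketch below shows; the identifiers are hypothetical examples:

# Finding identifiers recorded during the original test and the follow-up.
original_findings = {"ftp-enabled", "weak-password-policy", "missing-os-patch"}
followup_findings = {"weak-password-policy", "open-snmp-community"}

resolved = original_findings - followup_findings      # corrective measures that worked
outstanding = original_findings & followup_findings   # still require action
introduced = followup_findings - original_findings    # new since the last test

print("Resolved:   ", sorted(resolved))
print("Outstanding:", sorted(outstanding))
print("New:        ", sorted(introduced))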

VULNERABILITY SCANNING

Vulnerability scanning is the process of identifying and assessing the weaknesses in a given enterprise environment. It takes a comprehensive view of all technology assets— including applications, servers, workstations, and network elements—and evaluates how susceptible the environment is to attack. By looking at how the individual assets fit into the larger environment, vulnerability scanning can help organizations spot weak links in their IT infrastructures.
Too often, organizations are preoccupied with preventing so-called gaping holes in their environments. They focus on the major vulnerabilities that could lead directly to unauthorized access, while failing to resolve the small vulnerabilities in less essential systems that can also be jumping-off points for attacks. Trust relationships, unsecured single sign-on privileges, and misconfigured user accounts are just a few examples of minor vulnerabilities that make it easy for an attacker to jump from machine to machine until he finds the target he seeks.
To conduct comprehensive vulnerability scanning, companies usually take a two-pronged approach. First, they scan for common vulnerabilities that can affect individual assets. Next, they analyze their unique environments to identify how vulnerable they are to highly customized attacks.

Vulnerability testing should be a proactive process. Companies should develop procedures to routinely perform vulnerability testing as part of the application development life cycle, and especially when designing and deploying new applications. For example, when deploying Web-based applications, a company might use automated scanners to identify common vulnerabilities. Several recent studies have demonstrated that the later in the application life cycle a bug is discovered, the more expensive it will be to remedy. Identifying any weaknesses during application development facilitates correcting the flaws before deployment.
Best practices dictate the use of many vulnerability scanning techniques. For instance, automated scanning tools can identify weaknesses, while penetration testing (sometimes called ethical hacking) can simulate the routes an attacker might use and demonstrate the potential for unauthorized access.

Automated Scanning Tools

Asset scanning tools poll an enterprise IT environment at regular intervals, checking configuration information for each asset against a database of known vulnerabilities. They report on inconsistencies at a given point in time and provide details on the identified area of concern, including the potential consequences of a security breach related to the vulnerability and steps to correct or minimize it.
Because IT departments worldwide discover new vulnerabilities every day, scanning-tool vendors must continually update their products for them to remain effective. Many vendors provide an automated update service for this purpose. Likewise, organizations must run scans regularly to detect changes in their environments.
There are two types of vulnerability scanning tools. Network-based tools are applications that can probe one or more assets on the local network from a centralized monitoring location. Host-based products, on the other hand, are installed and run directly on the server or other asset to be monitored. Network-based tools are the most widely used in organizations; however, host-based tools generally have the ability to provide more detailed and thorough assessments because they can probe the asset more deeply.
Other factors also influence the selection of a vulnerability scanning tool, including the quality and comprehensiveness of its vulnerability database and the frequency and ease with which it can be updated. Reporting capabilities also vary between products. Small organizations might be satisfied with Web-based reports. Larger organizations might require integration with event correlation or trouble-ticketing systems. For environments in which systems are required to be operational around the clock—in which case there will be no downtime in which to schedule scans—performance of the scanning tool is also a consideration. (In these cases, the organization may need to extensively test prospective scanners in a lab environment.)
Few organizations will find that a single tool meets all of their needs. For example, the same tool is unlikely to be appropriate for scanning both Web applications and databases. In addition, some organizations choose to scan the same technology using multiple products (for example, two network scanners) to collect more data, in hopes of obtaining a more complete analysis of their environments.
Vulnerability scanning is not foolproof. One limitation is that a tool is only as good as its database of vulnerabilities. If a company is using an older version of a tool, or if the vendor maintains the tool poorly, it might not test for all the latest vulnerabilities.

Another limitation of scanning tools is that the way in which they operate sometimes produces unintended or unwanted results—for example, the scanning process might produce a system crash.
The number of vendors, version numbers, settings, and custom configurations available to IT departments results in literally thousands of types of systems. Although most vulnerability scanning tools are designed to cause as little interruption to systems as possible, no tool is guaranteed to be completely problem-free.
Finally, the results returned by these tools are sometimes so voluminous that administrators simply cannot properly identify and resolve true vulnerabilities. The tools also may produce false positives because of poor criteria used when checking for vulnerabilities. The only way to fix these deficiencies is to customize the test criteria, disabling tests that are known to be irrelevant or inappropriate in a given environment and manually double-checking identified vulnerabilities. For instance, organizations may choose not to test their systems for vulnerability to DoS attacks because they do not want to risk their systems becoming unavailable under the load imposed by the test. Likewise, scanning for certain vulnerabilities might produce a system crash, in which case the particular scan would also need to be disabled.

Security Penetration Testing

Another way to identify more complex vulnerabilities is through the manual process known as penetration testing, or ethical hacking. In general, security penetration testing seeks to simulate an attack by using the same tools and techniques that an unauthorized user would employ. For that reason, penetration testing is more comprehensive than vulnerability scanning.
The results of a penetration test are usually more concrete than those produced by vulnerability scanning. Penetration testing not only identifies potential vulnerabilities, but it also confirms their existence and demonstrates the effect they could have on the enterprise. For example, a typical penetration test across a system might reveal any of the following exploits:

• Gained access to administrative accounts on 25 of 30 systems.

• Manipulated the Web application to change the price of medications and purchase them at a discount.

• Was able to access and read the CEO’s e-mail.

• Was able to shut down the car manufacturer’s assembly line.

Some companies set time frames for testing to determine how much damage a hacker could do in a given period, such as one week. Other companies have specific targets or attack types that they want to test. For example, penetration-testing methods are particularly effective in testing an organization’s resilience to attacks using social engineering, which is the technique of subverting technical controls through human interaction, such as tricking an employee into revealing a sensitive password.
On a basic level, the steps of a penetration test include threat assessment, survey testing, intrusion, and exposure assessment.

Threat assessment.

During this stage, organizations determine what threats exist and which ones they want to model during the course of their penetration test. For example, will the test model the threat of an outside hacker who wants to break in? Or will it model the threat of unauthorized actions performed by a rogue application developer from inside the organization?

Survey testing.

Next, the company must assess the technical environment to identify vulnerabilities that might be exploited. Vulnerability scanners play an important role here, as do other techniques, like surveying hacker Websites to gain intelligence on current exploits.

Intrusion.

Having gathered the necessary intelligence, the testers can begin the actual attempt to exploit the potential vulnerabilities identified in the previous step. Both automated tools and manual techniques might come into play as the testers attempt to achieve the threat modeled during the threat assessment process.

Exposure assessment.

Following the intrusion attempt, the testers carefully compile, analyze, and communicate the results of their tests to management. The goal should be to provide more than a pass/fail grade for each test performed. Management should be made to understand which vulnerabilities and exploit tools were used, how savvy an unauthorized user would have to be to duplicate these events, and how the company could go about resolving the flaws to prevent similar attacks. These recommendations could be focused (for example, “install a certain patch on machines running a particular operating system”) or they could be more strategic (“develop a more robust management process for software updates”).
As with all security tasks, penetration testing should not be a one-time event. Companies should perform tests periodically, especially if the technical environment is changing rapidly. Each new system or application deployed has the potential to introduce new vulnerabilities.
Finally, organizations undertaking security penetration testing need to consider the additional risks associated with it. First, because organizations often hire an outside specialist to conduct the testing, they introduce the risk of revealing network vulnerabilities to outside entities. An unscrupulous employee of the testing company, for example, could use the information gleaned during testing to later gain access to the organization’s systems. Another risk introduced by penetration testing is the possibility that testing could damage applications or affect systems availability due to unintended side effects such as a system reboot or failure.

Comprehensive Security Assessments

While a penetration-testing team may identify and exploit a single vulnerability on a given server, it may overlook still other vulnerabilities. Likewise, while vulnerability scanning software would reveal a buffer-overflow vulnerability on the server, it would ignore an inadequate password policy that allows users to select blank passwords. When a company desires a more comprehensive analysis, it can choose to initiate a full security assessment as a complement to these tools.
Rather than directly simulating an attempt at unauthorized access, a security assessment takes a holistic view of the environment, including everything from technical controls to business processes, and attempts to identify the full scope of vulnerabilities. For example, an assessment of technical security controls might compare configuration files and system settings on specified assets to industry best practices. For network assessments, organizations might also review network diagrams to enrich their understanding of the environment.
Business process assessments are similar to security and controls assessments, although in this case the focus is not on technologies. Where a security and controls assessment might cover a system’s password policy, a business process assessment would reveal the fact that a single person can both approve payments to vendors and write the checks. In a broad sense, such processes are vulnerabilities every bit as much as weaknesses in technical controls, as they represent risk for the company.
Still other vulnerabilities might be the direct result of the company’s physical environment. It is a maxim of data security procedures that if an intruder can gain physical access to a server, that server cannot be considered secure. Likewise, many other environmental factors can introduce risk, independent of any technological considerations:

• Physical access to utility resources (gas or electric power)

• Physical access to a company’s products

• Location of company employees

• Fire doors that do not meet code regulations

• Circumvented access controls (a security door propped open)

• Security guards not on duty when expected

As part of a comprehensive security assessment, procedures should be established to periodically review a company’s physical controls and determine whether any vulnerabilities are present.

Application Development Assessment

A final element of a vulnerability assessment concerns a company’s application development processes. Whenever companies design, test, and deploy applications, they should continually assess the product for vulnerabilities. During application requirements analysis—the point at which organizations determine what functionality the application should have—engineers should also identify and document any security and control requirements. For example, if the application will facilitate high-value financial transactions for institutions, end-to-end data encryption of all transmissions might be a crucial control requirement.
Once the project enters the design phase, the developers must build these security and control requirements into the application’s design and architecture. They should also specify any third-party products (for example, code libraries for data encryption) that will be necessary to complete the project. Engineers must then implement the encryption software during the development phase, as well as develop key-management procedures and any other necessary functional requirements.
Good application-coding standards are necessary. Many application vulnerabilities are the result of poor coding techniques, such as failing to properly validate user input before acting upon it. Organizations should give thought to drafting secure coding standards and enforcing developer adherence to them before undertaking any application development project.

Engineers will verify application functionality during the testing phase of development, but a security-conscious enterprise should also encourage simultaneous testing of security controls. This stage of coding is an ideal time to begin the process of vulnerability scanning, in addition to the regular code reviews to verify compliance with secure coding standards.
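The input-validation point above bears a brief illustration. In the sketch below, input is checked against an explicit allowlist pattern and then passed to the database as a bound parameter rather than spliced into the SQL string; the schema and database module are illustrative only:

import re
import sqlite3  # standard-library database module, used purely for illustration

def lookup_user(conn, username):
    # Validate against an explicit allowlist pattern first...
    if not re.fullmatch(r"[A-Za-z0-9_]{1,32}", username):
        raise ValueError("invalid username")
    # ...then use a parameterized query. Building the statement by string
    # concatenation ("... WHERE name = '" + username + "'") invites SQL injection.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()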
Before deployment, developers should be required not only to deliver the application code itself, but also to implement and document all recommended security controls and procedures. For instance, if one of the application’s security controls is a password, documenting the process for resolving forgotten passwords should be established as an engineering deliverable.
Facilitating this integration of security into the application development life cycle is not always an easy task, prompting many organizations to create the role of an application security architect. These individuals collaborate with application development teams to work through each phase of the project life cycle and ensure that applications are designed, developed, and deployed in a secure manner.
Managing Vulnerability Information

Before initiating vulnerability scanning and assessment activities, companies need to consider what happens to the information that is uncovered. The obvious concern is how to act on that information to remediate vulnerabilities. However, another important consideration is controlling access to this sensitive information.
Ideally, organizations will have already established data classification standards and associated control requirements that can be applied to data security information. For example, all data that could cause a severe detrimental effect on the company if disclosed to unauthorized individuals should be classified as confidential. Electronic copies of such reports should be encrypted, password protected, or otherwise subject to technical controls, while hard copies should be shredded before disposal, and so on.
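As a minimal sketch of that technical-control requirement, the following encrypts an electronic copy of a report before it is stored or transmitted. It assumes the third-party cryptography package; the file name is hypothetical, and key handling would be governed by the organization’s own procedures:

# Requires the third-party cryptography package (pip install cryptography).
from cryptography.fernet import Fernet

def encrypt_report(report_path, key):
    f = Fernet(key)
    with open(report_path, "rb") as src:
        ciphertext = f.encrypt(src.read())
    with open(report_path + ".enc", "wb") as dst:
        dst.write(ciphertext)

key = Fernet.generate_key()   # in practice, generate and store keys per policy
encrypt_report("vulnerability_report.pdf", key)  # hypothetical file name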

OPERATIONS AVAILABILITY ANALYSIS

Another important component of vulnerability detection is operations availability analysis: the process of maintaining the operational resilience of a company’s systems and ensuring that systems remain available and can be easily recovered if unplanned downtime occurs. Global competition and near-instantaneous communications are just two of the reasons why availability is so important to today’s enterprise. While a true 24-hour global economy might not yet be a reality, maximum availability is increasingly important for a growing number of companies. Typical candidates for high-availability system design include:

• Networks—Connecting to Internet service providers (ISPs), LANs, and WANs.

• Application servers—Deploying server farms to distribute processing across several application servers.

• Web servers—Caching or load balancing front-end Hypertext Transfer Protocol (HTTP) or HTTPS requests, or distributing them across server farms.

• Databases—Clustering, replicating, and distributing data stores.

Any number of factors can have a negative impact on the availability of enterprise systems. Poor resource management, inadequate operational procedures, and even natural disasters can bring down entire networks. Likewise, an unforeseen security incident can have disastrous consequences for today’s always-on applications.

Operations availability can be viewed as an umbrella process model that includes numerous focus areas and systems, such as:

• Workflow engines

• Business process modeling

• Networking environments

• Operating systems

• Application servers

• Web servers

• Call centers

• Enterprise resource planning (ERP)/customer relationship management (CRM)/portal environments

• Mainframe/legacy systems

• Telecommunications/phone systems

• Custom application development

• E-mail/groupware systems

Merging of Enterprise Systems Management and Security Management

In the past, management of enterprise technology security has often been seen as a unique discipline that requires its own technologies and processes. This view has begun to change as the World Wide Web and the Internet have become integrated into day-to-day business activities, and now the line between traditional ESM and security management is blurring. These two operations have begun to fall under a broader IT operations role, of which security operations is but one focus. Today’s large enterprise requires an overall ESM framework to effectively manage business systems and processes, and technology security operations have become a major component of that framework.
The focus of traditional ESM systems has been performance management and availability of primary systems, networks, and applications. But during the past three to four years, ESM has evolved from managing the environment at an infrastructure level to taking a more holistic view of the organization. Instead of managing specific technology assets, the goal is to manage business processes more effectively.
For example, suppose an important router or application server becomes unavailable. Is this a crisis scenario? Is it critical enough that IT staff members should be called in at 3:00 a.m. to repair the system? The answer depends upon the design of the IT environment. If a company’s management infrastructure cannot provide a solid and current understanding of the downstream effect of an outage, upper management might not be able to make the right decision. A view of the impact of an outage from a business process standpoint can help to better inform decisions on how to handle problems as they arise.
The same holds true for security monitoring solutions. When first conceived, this category focused on managing infrastructure components such as firewalls, IDSs, and operating system permissions, with only minimal application-level vulnerability monitoring. Today, security monitoring and enterprise systems monitoring solutions are converging to provide a single, consolidated management infrastructure.

Management of application availability to meet overall scalability or high-availability requirements is usually the domain of the ESM group, rather than the security organization. General performance or outage issues remain the province of the various infrastructure teams (network, operating system, application servers, Web servers, hosting) that support the enterprise environment. Security architects are chiefly involved in the engineering and implementation phases of application development. Security personnel should become involved, however, if an application becomes unavailable due to a DoS attack or a virus outbreak, or to identify an operating system patch that will close the door to a potentially crippling vulnerability.

Disaster Recovery

Disaster recovery is a crucial part of operations availability analysis. Disaster recovery measures are designed to guide a company’s IT operations through the recovery process following a major incident. Types of incidents include fire, natural disasters, terrorism, malicious acts, accidents, or other hazards specific to an industry. Disaster recovery plans should be part of larger business recovery and continuity plans that ensure critical business processes are operational following a disaster. Plans should be developed to address short-, medium- and long-term scenarios, ranging from a few hours of service interruption to several months of unavailability.
Organizations need to identify the risks or impacts each type of disaster presents to business continuity and operations, and prioritize their disaster recovery efforts based on those most likely to affect core business processes. Next, the company should determine acceptable levels of downtime based on the acceptable level of risk relative to its core business processes. These estimates will drive policies and procedures governing IT operations (such as frequency of backups), as well as investments in people, processes, and technologies to ensure operations are restored within the allotted timeframe. Finally, organizations should develop appropriate plans to address these risks and develop contingency plans in the event a particular incident or set of incidents should occur.

Disaster recovery plan.

Disaster recovery plans should include the relocation of people, equipment, and data to a suitable remote center of operations. The remote center must have adequate equipment, supplies, and capacity to handle the infrastructure supporting critical business processes. Books, manuals, and operating instructions should be available and staff must be trained in the tools and procedures needed to restore operations (for example, data backup and recovery, archiving, and retrieval).
The disaster recovery plan must be reviewed, updated, and tested regularly to ensure it is viable and usable when needed. Organizations must also disseminate disaster recovery policies and procedures to employees at all levels; identify and train appropriate staff to coordinate, manage, and execute the recovery plan; and coordinate with community and government organizations to ensure smooth, orderly management of the situation.

Support.

Time is a crucial component during an incident response event. If the investigative team cannot gain immediate access to essential systems or networks, the delay exacerbates the size and severity of an incident. Not only is access important, but sufficient resources are also necessary to adequately investigate an incident. An organization must have a technical support team that can provide access and resources to help sustain the overall incident response effort. The technical support team can make or break an incident response effort. If an organization uses outside support, it is not reasonable to expect the outside support to have full knowledge of the environment or to arrive with all the necessary tools to complete the job.
Incident Recovery

The main objective of the recovery stage is to return compromised machines to a secure, operational state as efficiently as possible. The recovery phase allows users to assess what damage has occurred, what information has been lost, and what the post-attack status of the system is. Those who are unsure about the level of recovery necessary should err on the side of caution.
All compromised machines require recovery. The recovery should match the level of compromise. In the case of shared accounts and sniffed passwords where further compromise never occurred, a simple password change should be adequate recovery. Some incidents may require a rebuild from CD-ROM or trusted media.
The actual recovery should occur offline whenever possible. If the machine is essential to operations, the organization should consider a temporary replacement while the host is rebuilt and secured. If several machines need to be rebuilt, they should all be taken offline simultaneously and then reconnected as they are secured. Incident response team members should provide guidance during the recovery, but information custodians should perform the actual rebuilding and securing of the systems.
After all assessments and investigative actions are completed, the incident response leader will provide guidance to the information custodian responsible for recovery. The information custodian is responsible for restoration of the affected systems.

The information custodian should forward response and recovery actions to the incident response leader for tracking. The organization should ensure, at a minimum, that all potentially compromised machines (that is, any machine that shares a network segment with a compromised machine) are identified and documented; that the recovery matches the level of compromise; that recovery actions occur offline if possible; and that all vulnerabilities are corrected.
Incident Evaluation and Reflection

A post-mortem is a meeting held after resolution of an incident. The purpose of the meeting is to discuss lessons learned and improvements that can be made to better resolve a similar event in the future. Within two weeks of the resolution of each computer security incident or simulated attack (Severity Level 1 and 2), a post-mortem should be held. During this meeting, distinct action items should be noted, and a follow-up report should be developed documenting the actions taken and the closure reached. The incident response team leader is responsible for:

• Scheduling and leading the post-mortem meeting

• Documenting the lessons learned and action items

• Tracking and closing the action items

• Developing the final report, including actions taken

The post-mortem session should focus on what worked and what did not work during the incident response process, and it should include all participants who worked on the incident. This session does not require a face-to-face meeting; it could be handled via phone or e-mail. The incident response leader should have final approval on any changes to the incident response process. If a management report on the incident is required, it should include the lessons learned and action items from this post-mortem session.
Testing the Incident Response Plan

Testing the incident response plan is a difficult task, one that requires testing a process geared for reacting to the unexpected. Nevertheless, a program for periodic testing of the SIRT process should be developed and should account for the following:

• Frequency—Testing should occur at least annually; semiannual testing is preferable.

• Infrastructure changes—Major changes affecting the organization structure or infrastructure should be considered (for example, a merger or a divisional move from Windows NT systems to Windows 2000 systems).

• Different scenarios—Each test should include different attack scenarios (for example, a DoS attack, an insider theft of intellectual property, or an insider scanning another organization’s network).

• Variety of test times—Testing should occur at different times of the day, week, and month (for example, some at night, some on weekends, some during regular business hours).

• Reporting procedures—Information reporting procedures to the SIRT should be tested.

• Communication—The testing program should also evaluate the effectiveness and timeliness of information communication and coordination between SIRT roles and internal organizations.

INCIDENT RESPONSE TECHNOLOGIES

Security tools can range from small, simple freeware utilities to comprehensive professional security suites designed to solve all types of security problems. This vast number of tools complicates the incident response professional’s selection of the right tools for each security concern.
Security tools are essential to an organization. In most cases, performing security tasks manually would be impossible. Tools allow the security administrator to assess and detect malicious activity the human eye would not be able to see. A few of the more common tools include scanners, IDSs, and forensic tools.

Scanners

Scanners are tools that automatically scan a range of Internet Protocol (IP) addresses (IP-based scanners), log files (host-based scanners), Common Gateway Interface (CGI) scripts (CGI scanners), phone numbers (phone dial scanners), or the airwaves (wireless scanners). Most scanners function as port scanners and identify open ports and running services. Some scanners also function as vulnerability scanners and have the capability to perform numerous tests aimed at detecting known vulnerabilities.
IP-based scanners. An IP-based scanner views the network from the perspective of an attacker. The scanner can be used to discover systems and services, identify vulnerabilities that could be exploited, and recommend remedial action. During incident response, these scanners are used to determine whether any rogue technologies have been introduced to the network or whether any services have been enabled on existing systems.
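To make the port-scanning behavior concrete, the following minimal Python sketch performs the simplest form of IP-based scanning: a TCP connect attempt against a short list of well-known ports. The target address and port list are illustrative placeholders; a real scanner would add parallelism, UDP probes, service fingerprinting, and vulnerability checks.

    import socket

    def scan_host(host, ports, timeout=1.0):
        """Attempt a TCP connection to each port; report those that accept."""
        open_ports = []
        for port in ports:
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.settimeout(timeout)
            try:
                if sock.connect_ex((host, port)) == 0:  # 0 indicates success
                    open_ports.append(port)
            finally:
                sock.close()
        return open_ports

    # Hypothetical target (a TEST-NET documentation address) and port list.
    print(scan_host("192.0.2.10", [21, 22, 23, 25, 80, 110, 143, 443]))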

Host-based scanners.

A host-based scanner reviews the host's operating system and applications for vulnerabilities that could be exploited and checks them against a preestablished system security policy for noncompliance. Most host-based scanners require the installation of software on the system, or of an agent (software running as a service). Some newer host-based scanner products have removed this requirement.

Incident response investigators use a host-based scanner to identify any changes in a system. Intruders commonly will modify system configuration settings and create a back door or enable an unauthorized function.

CGI scanners.

Some Websites use the CGI application programming interface to invoke programs called CGI scripts. CGI scanners assess CGI scripts for known vulnerabilities.

Phone scanners.

Phone scanners (also known as war dialers) dial a range of phone numbers, connect to telephone lines, and check to determine what is answering on the other end (for example, a standard voice line, a modem, or a fax). Once the scanner has completed its task, it can provide a report outlining the types of carriers on the other end. If the war dialer discovers a modem, a clever hacker can attempt to break through and gain access to the organization's network.

Wireless scanners.

Wireless scanner applications provide automated detection and security analyses of 802.11 wireless networks. The scanners discover access points and individual client devices. An individual easily can insert a wireless network card into a notebook computer and walk or drive around discovering access points. This technique has become common and is better known in the hacker underground as war driving.

Intrusion Detection Systems

Intrusion detection systems (IDSs) form a significant piece of the computer security infrastructure, alerting administrators to intrusions and attacks aimed at computers or networks. IDSs can be either network or host based.
A host-based IDS monitors event logs from multiple sources for suspicious activity. A host-based IDS is best placed to detect computer misuse from trusted insiders and those who have infiltrated the network. These systems are popular with security personnel because they operate in near-real time to detect unauthorized activity.

A network-based IDS monitors all network traffic passing on the segment where the IDS is installed. A network-based IDS reviews each IP packet for protocol anomalies or compares the data to a list of attack signatures, identifying the activity as threatening or nonthreatening. The latter technique is known as signature-based detection. If the IDS identifies an attack, the system will send an alert.
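As a rough sketch of how signature-based detection works, the Python fragment below checks packet payloads against a table of byte patterns. The signatures shown are invented examples; production IDS rules also match on protocol fields, ports, and connection state.

    # Invented example signatures; real IDS rules are far more constrained.
    SIGNATURES = {
        b"/etc/passwd": "possible directory traversal attempt",
        b"cmd.exe": "possible Web server command execution attempt",
    }

    def inspect_payload(payload):
        """Return an alert description for each signature found in the payload."""
        return [alert for pattern, alert in SIGNATURES.items() if pattern in payload]

    print(inspect_payload(b"GET /../../etc/passwd HTTP/1.0"))
    # -> ['possible directory traversal attempt']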

Honey Pots

Honey pots are systems designed to look interesting to a potential intruder so that security professionals can monitor who is probing or attacking the networks without exposing the real server to outsiders. Honey pots are highly flexible security tools that have different applications. The main function of a honey pot is to gather information about potential intruders. A professional gathers enough information about a would-be attacker and then implements countermeasures to prevent the unwanted activity.

FORENSIC ANALYSIS

A broad range of hardware and software tools are available to assist the forensic analysis of security incidents. Many of these tools are used to acquire information that will supplement the data produced by network devices such as IDSs, firewalls, and antivirus devices. After event analysis or correlation, these tools can help investigators establish exactly what happened during an incident.
The primary purpose of a forensic examination is to acquire evidence using a process that not only maintains the integrity of the evidence but also helps to ensure admissibility in any future legal proceedings. However, today’s forensic tools are also useful as an intelligence gathering and post-incident review device, enabling investigators to recover overwritten or deleted data and help answer the fundamental question: “How did this happen?”
Hardware and software forensic analysis devices prevent target media (usually a hard drive) from being written to and thus maintain the evidential integrity of the data. Write-blocker hardware tools allow an immediate, real-time examination of a system to be conducted, which is valuable during a live and ongoing incident.
Most tools come bundled with advanced data mining and analysis packages that enable examination of the evidence that has been acquired. Some also have advanced timeline analysis capabilities. Often, however, these tools work with a finite range of platforms and technologies. This means that for some less common platforms, although the data on any given hard drive can be forensically imaged, the forensic tool may not support useful analysis. In such instances, a copy of the forensic image would need to be used to re-create the original environment, thus enabling an admissible examination of that material.
Two types of forensic analysis are network and system. In most incident response situations, a network investigation will lead to a system forensic investigation.

Network Forensic Analysis

Network forensic analysis is the act of investigating an event where an attacker launches an attack remotely and attempts to breach a system while disguising his or her true identity. Network forensic analysis is not new to the security field, but few tools and resources are available to monitor and track activity.

Network forensic analysis relies heavily on the native logging capability of network devices. If a cyberattack or system breach occurs, security personnel first review the information supplied by the firewall and router logs. These logs can help the investigator determine where an attack was launched. Once an investigator obtains the logs, other tools—including nslookup, whois, ping, traceroute and traffic analyzers—can help obtain additional information.

nslookup.

The nslookup program lets a user enter a host name (for example, pwc.com) and the program returns the corresponding IP address. Users also can enter an IP address, and nslookup will discover the host name. The nslookup command is a native program on most UNIX and Windows systems. For example, a user can open a DOS command window, type nslookup pwc.com, and obtain information.
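The same forward and reverse lookups can be scripted with Python's standard socket module, which may be convenient when many addresses from a log file must be resolved; the addresses shown are examples only, and both calls require network access.

    import socket

    # Forward lookup: host name to IP address (what "nslookup pwc.com" returns).
    print(socket.gethostbyname("pwc.com"))

    # Reverse lookup: IP address back to a host name, where a PTR record exists.
    name, aliases, addresses = socket.gethostbyaddr("8.8.8.8")
    print(name)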

whois.

The whois command provides information about the owner of any second-level domain name. This command will query a top-level registrar, such as Network Solutions (VeriSign’s sale to Pivotal Private Equity pending, as of October 2003), which maintains domain registration information for com, net, and org domain names. This command can be executed by entering (for example) whois pwc.com, and whois will provide information about the owner of that second-level domain name.

ping.

The ping utility verifies that a particular IP address exists and that a host utilizing the IP address can accept requests. (The verb ping means the act of using the ping utility or command.) The ping utility is most often used for diagnostic purposes, but it can also be used for discovering systems and network devices. It is considered a best practice to filter out or disable the ping functionality, so the ping command is not always a reliable method for obtaining information.

traceroute.

The traceroute program records the route through the Internet between a computer and a specified destination computer. It also calculates and displays the amount of time each hop (from one system to another) consumed. The traceroute tool is especially useful for obtaining detailed information about the attacker’s destination.
The traceroute program is included in UNIX and Windows operating systems, among others. On Windows operating systems, the program has the abbreviated name of tracert.
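Both utilities are easy to drive from a script during an investigation. The sketch below shells out to the native commands, selecting the Windows or UNIX name as appropriate; the target address is a placeholder documentation address.

    import platform
    import subprocess

    WINDOWS = platform.system() == "Windows"

    def ping(host, count=3):
        """Invoke the native ping utility; True if the host answered."""
        flag = "-n" if WINDOWS else "-c"
        result = subprocess.run(["ping", flag, str(count), host],
                                capture_output=True)
        return result.returncode == 0

    def traceroute(host):
        """Invoke the native traceroute (tracert on Windows); return its output."""
        cmd = "tracert" if WINDOWS else "traceroute"
        return subprocess.run([cmd, host], capture_output=True, text=True).stdout

    if ping("192.0.2.10"):
        print(traceroute("192.0.2.10"))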
These tools help security personnel obtain information regarding the destination of the attack, but they provide little information about what type of attack is being executed. For this type of information, the investigator needs to execute tools that can more deeply analyze the attack packets.

Traffic analyzers.

Traffic analyzers (also known as sniffers) monitor and analyze network traffic, allowing the investigator to view the packet information to determine what the packet is doing. Using this information, an investigator can determine the type of attack and implement countermeasures.
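A few lines of Python can demonstrate the idea. The sketch below assumes the third-party Scapy library is installed and that the script runs with the administrative privileges that packet capture normally requires; it prints one summary line per captured TCP packet.

    # Minimal sniffer sketch; assumes Scapy is installed and the script
    # runs with sufficient privileges to capture packets.
    from scapy.all import sniff, IP, TCP

    def summarize(packet):
        if IP in packet and TCP in packet:
            print(packet[IP].src, "->", packet[IP].dst, "port", packet[TCP].dport)

    # Capture ten TCP packets from the default interface, then stop.
    sniff(filter="tcp", prn=summarize, count=10)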

Log analyzers.

System logs are a primary source of network information if logging has been properly enabled and configured. Many investigations end simply because insufficient information was available to move forward.
Assuming proper logging activation and configuration, the investigator next must sort through the large amount of information and successfully identify the attack. At this point, tools provide insights that the investigator would have difficulty detecting unassisted.
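As a simple illustration of the kind of assistance such tools provide, the sketch below tallies failed logins per source address from a syslog-style authentication log. The log path and line format are common conventions, not universal ones.

    import re
    from collections import Counter

    # Matches sshd-style failure lines; adjust the pattern for other formats.
    FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

    def failed_logins_by_source(path):
        counts = Counter()
        with open(path) as log:
            for line in log:
                match = FAILED.search(line)
                if match:
                    counts[match.group(1)] += 1
        return counts

    for source, count in failed_logins_by_source("/var/log/auth.log").most_common(10):
        print(count, source)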

System Forensic Analysis

System forensic analysis is the discovery, analysis, and reconstruction of evidence extracted from computer systems, network devices, storage media, and computer peripherals that might allow investigators to solve the crime. The two main parts of system forensic analysis are data acquisition and data analysis.

Data acquisition.

Data acquisition can be the most challenging aspect of system forensic analysis. While acquiring data from a system, an investigator must collect the information in a manner that does not alter the data in any way. Although this task sounds easy, it is extremely difficult in practice. The investigator must be prepared to work with all types of hardware and software. Systems ranging from mainframes and midrange systems to desktop computers, notebooks, and personal digital assistants (PDAs) can make the acquisition a challenge. If any of the information on the original system is altered, the entire case can be jeopardized. The main purpose for this requirement is to give a third party the opportunity to re-create an investigation step-by-step and draw the same conclusion. This approach ensures that evidence has not been altered.
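One widely used integrity control is to compute a cryptographic digest of the acquired image and compare it with a digest of the working copy; if the two match, the copy is bit-for-bit identical. The sketch below uses Python's standard hashlib module, with placeholder file paths.

    import hashlib

    def file_digest(path, algorithm="sha1", chunk_size=1 << 20):
        """Hash a file in chunks so large images need not fit in memory."""
        digest = hashlib.new(algorithm)
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                digest.update(chunk)
        return digest.hexdigest()

    # Placeholder paths; if the digests match, the copy is an exact duplicate.
    original = file_digest("/evidence/disk.img")
    working = file_digest("/evidence/disk-copy.img")
    print("verified" if original == working else "MISMATCH - do not proceed")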

Data analysis.

Data analysis can be a long, tedious process when the investigation covers a large amount of data.

After obtaining an image copy, the investigator should always make a backup copy. Modern forensic tools index the raw data and create links to files or data fragments currently present on the drive. Depending on the amount of data, this process could take several hours. After the index phase is complete, future queries will search the index rather than the raw data. Following the keyword searches, the investigator will first review each item returned and then review files or file fragments of nontext data, consisting of graphical images, video, music, or encrypted files, for example.
Although the forensic tools assist in acquiring and searching the data, the analysis phase still depends on investigators to identify any wrongdoing.
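The index-then-query approach can be illustrated with a toy inverted index: each word is mapped once to the items that contain it, so later keyword searches consult the index instead of rescanning the raw data. The item names and contents below are invented.

    from collections import defaultdict

    def build_index(documents):
        """Map each word to the set of items containing it (an inverted index)."""
        index = defaultdict(set)
        for name, text in documents.items():
            for word in text.lower().split():
                index[word].add(name)
        return index

    documents = {  # invented stand-ins for items recovered from an image
        "report.txt": "wire transfer approved by accounting",
        "mail_0042": "delete the transfer records tonight",
    }
    index = build_index(documents)
    print(index["transfer"])  # both items mention "transfer"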

INVALID INCIDENTS

After identifying and classifying an event, an organization might decide not to respond. Information about the incident from trusted sources might be lacking. Irregularities or conflicts may appear in the data (for example, a Windows server seemingly compromised by a UNIX exploit). Alternatively, the asset in question may be deemed valueless, prompting the company to choose not to initiate the response process.
Even if an organization takes no action in response to an event, however, data regarding the incident should be retained to facilitate consolidation of security activity and management reporting. In addition, data about seemingly harmless events could prove relevant in assessing other events. Seen individually, they may be ignored as invalid incidents. But when taken together, these isolated events could be indicative of a more intensive or aggressive attack. By correlating data from many sources, security personnel may be able to identify more complex security incidents that involve multiple systems over an extended time period.

SECURITY INFORMATION MANAGEMENT

Executive management teams often require a snapshot of their organization's information security health. They may want this report for strategic planning purposes, or it might be in response to highly publicized information security attacks and the desire to assess how the company would fare in similar circumstances. Alternatively, this mandate might be due to regulatory or fiduciary requirements that call for companies to provide a clear status of an organization's internal control structure.

To meet this requirement, organizations often rely on a piecemeal collection of facts, generalities, and assumptions. This incomplete approach is a striking contrast to the way companies usually manage the information-gathering and decision-making processes that influence their core business processes. For example, data in a simple manufacturing process might include production rates, material costs, and sales transactions. This raw data is then translated into information (cost per unit, sales totals, and revenue), which becomes knowledge—for example, sales forecasting and trends. Using this knowledge, an organization can measure the effectiveness and value of its processes and then make informed decisions.
However, replicating the data-information-knowledge process for an organization’s security operations is not a straightforward pursuit. An organization’s security activities encompass a wide range of technologies and processes—intrusion monitoring, malicious code detection, vulnerability scanning, log analysis, incident response, and so on. Each of these activities uses distinct technology tools that capture and analyze important security data about an organization’s assets, events, and security policy compliance. Additionally, manual processes such as incident remediation, operational management, security-related help desk activities, and security administration tasks are also sources of data. The sheer volume of security-related information generated by an organization can be overwhelming—a single firewall, for example, can generate hundreds of security events every minute.
These myriad sources of data are intended to help organizations detect and contain threats and vulnerabilities, but often little value can be derived from them. An organization's security staff usually does not have the resources to thoroughly review the voluminous reports generated by various tools. For example, most organizations struggle with simply allocating resources to review system log files. Thus, although operating system logging is considered a good practice, many organizations do not enable logging because the administrators lack the time to adequately review and manage the logs. Companies that do enable logging may incur significant costs for the disk space necessary to store the logs without ever capturing the true value of the data generated.
Furthermore, information security attacks are often so subtle that they cannot be detected by reviewing a single source of log data. To gain a full view of such an attack, an organization must use many sources and gather enough data to analyze the situation. For example, Web attacks are usually executed through port 80 or 443. If the attack uses port 443, the traffic will be encrypted, so an intrusion detection system (IDS) will not be able to identify the attack. However, the Web server logs may contain irregular entries, such as requests to files that do not exist or have unusual parameters inside Uniform Resource Locator (URL) requests. Likewise, if the Web attack is on a database, its logs might show irregular structured query language (SQL) calls. Finally, reviewing the log file from a network router might reveal Internet Protocol (IP) addresses that could be used to track connections and ultimately the origin of the attack.
Another obstacle to effective management and interpretation of security data is that the reports generated by threat and vulnerability identification tools are designed for IT administrators and focus at the operational level. Although this information is useful for managing and securing an organization’s systems, it does not aid the chief security officer (CSO), chief information officer (CIO), and other executives in assessing trends and strategies. These tools provide technical details that are usually symptoms of a greater business problem. Many times, the symptom (technical vulnerability) is fixed, but the problem (breakdown in a business process) is overlooked. Visibility across threat and vulnerability identification tools is essential for an organization to gain a full perspective of its security status.

Security Information Management Components

Security information management (SIM) resolves this problem by providing a way for organizations to integrate security-related data from disparate sources. In essence, SIM is the process and infrastructure to gather the data from the business process, consolidate it into relevant information, and then translate the information into knowledge to support business decisions on which an organization can take action. For example, an organization using a SIM system could automatically correlate a technical vulnerability warning issued by a security intelligence service with the organization’s asset inventory ranked according to its security policies. The resulting information would allow the organization to better allocate resources and prioritize remediation actions to resolve the vulnerability.
SIM provides a number of advantages to organizations. First, it can help reduce the cost of running a company’s security organization by decreasing the amount of time staff must spend monitoring many devices and then analyzing and responding to alerts. SIM also can increase the return on a company’s current security investments by turning vast amounts of data into information. Often, established security solutions are underutilized because the amount of work required to monitor and manage them exceeds the value of the information they provide. By automating the process of analyzing the information these systems generate, SIM can increase their value to the organization.
SIM also can reduce risk by permitting rapid response to attacks. The cost of a security breach, in downtime and lost revenue, as well as customer satisfaction and legal liability, can be significant. The longer an organization takes to analyze and respond to alerts, the greater the likelihood a breach will occur. SIM helps an enterprise to quickly identify and respond to intrusions. Finally, SIM can simplify compliance with security policies and government regulations, as well as agreements with suppliers.
SIM comprises the following processes and supporting technologies:

• Intelligence analysis—The collection, evaluation, and application of the security intelligence information obtained from external sources, such as the alerts issued by commercial intelligence services, hardware and software vendors, or independent entities like the Computer Emergency Response Team (CERT) Coordination Center.

• Asset classification—The process of identifying and maintaining risk information about each of an organization’s assets to determine dependencies, as well as to define which assets are the most important to the organization.

• Event correlation—The ongoing analysis and interpretation of the security information generated by disparate sources.

• Standards and policies management—The formal, ongoing maintenance of an organization's security policies and standards.

• Reporting—The process that provides a periodic, holistic view of security operations to allow organizations to evaluate the effectiveness of their threat and vulnerability management processes.

The Security Dashboard

Today, companies spend considerable time and money deploying threat and vulnerability identification tools. Companies often make these additional investments without first deriving value from the data generated by their current infrastructure or supporting their investments with the requisite people or processes. For example, a company may invest in IDSs without first making use of the available data created by system logging. If the company properly extracted, correlated, and disseminated the log data, the resulting information and knowledge might benefit the organization more than a new security tool that would generate even more data. In another example, a company might choose to invest in an event correlation engine that would help it interpret the data generated by its infrastructure and security tools. However, the system itself is not enough; the organization must also put the people and processes in place to support its security infrastructure.
In both cases, it is the intelligent processing of the data that provides real value to the organization. Thus, while each of the discrete SIM activities helps organizations to manage threats and vulnerabilities, the solutions are still relatively fragmented. Executives need an integrated view of SIM that gives them a single point of reference. This concept of the security dashboard is becoming a requirement for many organizations. Similar to the business intelligence tools that companies use to aggregate, analyze, and distribute near-real-time operations data to decision-makers, the security dashboard provides a high-level view of an organization’s security status.

DASHBOARD FUNCTIONALITY

The security dashboard is a new concept and no standard definition exists for precisely what constitutes a dashboard. However, the following are the major functional areas that the dashboard should address:

• Consolidate the information into one consistent interface—Having a single, definitive source for security information is essential. Everyone in an organization—including general employees, system administrators, and executives—will refer to the source for security information.

• Provide a common framework for handling security data—A consistent framework means that new inputs to the dashboard system should feed into the current system—not disrupt it. For instance, as companies deploy new systems and applications, they will use the standards documented in their policies and technical controls to dictate what data is needed for the dashboard’s technical view. The dashboard’s data collection mechanism, such as an event correlation system, can then collect, organize, and report on this data. Similarly, a new regulatory directive can be interpreted, applied to the current policy, and if necessary, communicated through the framework to the technical view.

• Manage a variety of information—Dashboard information should be presented in a consistent fashion. Information should be grouped by topic for easy navigation and, as applicable, cross-referenced, indexed, and made searchable. Most users of the system will be looking for the answer to a specific question. Therefore, easy navigation and search capabilities are crucial.

• Assist in managing technical documentation and technical events—An organization's security documentation can include both vendor-supplied documentation such as vulnerability alerts and application or system manuals, as well as control information, such as technical standards. This information is crucial for the technical view, as is the event-related information gathered by the data-collection mechanism. The dashboard is designed to not only gather and report on security events and current status, but also to provide guidance in the remediation of problems. The control information should be concise and explicit. System administrators, whose main function or skill may not be security, will look to this documentation for direction.

• Support the audit and compliance process—One of the common challenges of a security audit program is to test compliance consistently. IT auditors often must interpret the organization’s security policy and standards prior to testing, and this practice results in inconsistencies. The dashboard can streamline this process by providing the data necessary to measure current security controls. Additionally, if an organization’s implementation and compliance teams both use the same control library (the policies, standards, and controls documented in the dashboard) as the basis for their work, system deployments and audits will have the same point of reference.

• Support focused communication to the various roles in the enterprise—The security process depends on many different people with many different responsibilities. The system should give each role a focused view of security information. This view may show specific components of the security reporting process (for example, overall measured compliance to policies), or the view may show specific systems or platforms administered by the person. The concept is to tailor the view of the information to the role or function of the user.

• Tie together the different components into a consistent model—Just as accounting systems process information using specific models or calculations, the security information system also should be based upon a model to tie together the different elements. This model can be as sophisticated or as simple as necessary. The important concept is that the model system be flexible enough to support changes to input (source of information) or output (result or audience for information). If a new input is supplied to the system, then the resulting output should be consistent.

• Support the process—This aspect of the dashboard is perhaps the most important. The system should incorporate the basic practices of the security process— the information classification scheme, the risk assessments, the technical and nontechnical documentation, and the overall security controls environment. The system should be determined by the strategy of the organization and should be a primary infrastructure component. Envisioning the security dashboard, its operation and functions, and its place in the overall business environment is very important.

The three primary organizational components of a security dashboard are a common reporting infrastructure, an integrated view of an organization’s security policies and standards, and visibility into the technical and operational implementation of an organization’s controls.

Reporting Infrastructure

Before they can implement a security dashboard, companies must define a common reporting infrastructure. A common reporting process and consistent report structure provide the framework that the dashboard supports. When an organization determines common metrics and measurements across the enterprise, the dashboard can provide the structure to collect and present those metrics. This structure creates a single point of reference for those outside the security process, and helps executive management to better understand trends and make strategic decisions.

This step may be difficult for organizations that have not created a common vernacular and approach to security across the enterprise. Companies with a CSO or CIO to create this vision are in a better position to achieve common reporting.

Integrated View of Standards and Policies

A security dashboard provides an integrated view of an organization’s security policies and standards—essentially the executive view of an organization’s security program. Many organizations suffer technical and operational inconsistencies because they either lack policies and standards, or they lack a consistent way to measure their implementation. The integrated view provided by the dashboard helps organizations to consistently measure how their security policies have been implemented, and to take appropriate action.

Technical and Operational Visibility

Finally, the security dashboard must provide visibility into the technical and operational implementation of an organization’s security controls. This component is based on the common reporting process and the integration of the organization’s security policies. With these prior components in place to dictate requirements, the technical data for current processes and technologies can then be added to the dashboard as needed. The dashboard gathers data from the infrastructure for analysis and consolidates symptoms across the infrastructure to identify problems. This integration is a complex process, but event correlation systems can bridge the gap between the dashboard and low-level threat and vulnerability identification tools. Event correlation technologies can help collect, organize, and centralize data from disparate sources and correlate events across systems.
While this technical and operational visibility is the major component of the dashboard, it is also the most flexible because these elements can be expanded. For example, a company may first add visibility to its IDS and virus management consoles, then later add visibility to its compliance monitoring console as well as its operating system, database, device, and application logging functions.

Beyond Visibility

The flexibility of the dashboard makes it possible for organizations to add information beyond that which provides visibility into their technical and operational processes. For example, a dashboard might present technical procedures and control documentation, security intelligence information, or information created and used in supporting processes related to primary security activities.

Technical procedures and control documentation.

Technical procedures and control documentation have become increasingly important with the introduction of more sophisticated and varied technologies. Additionally, turnover of technical personnel is frequently high, and the demand for technical resources has increased dramatically. These demands result in the need for detailed documentation that is both current and effective. As technologies change or implementations vary, baselines should be developed to provide consistency. Companies also should document a process for implementing mitigating controls for exceptions or risk acceptance.

Security intelligence alerts.

The alerts issued by security intelligence services can be integrated into the dashboard. The proliferation of e-mail-based viruses and the ongoing discovery of operating system and application vulnerabilities have made monitoring this type of information overwhelming, especially for organizations running heterogeneous technical environments. The dashboard should assist security administrators in identifying problems that affect the technical environment, identifying the true scope of the impact to the environment, disseminating action items, supporting the remediation process, and tracking this trail of information for consistency and completeness.
Support process information.

The dashboard can also be used to create a single view for all security-related information. Support processes such as information classification, risk assessment, and technical security architectures include information components such as methods, data ownership matrices, data custodian matrices, and technical security architecture diagrams.

DASHBOARD VIEWS

Ultimately, the security dashboard would include several views based upon its core components. First, the dashboard would provide a view of security that is common across the highest level of the organization, the executive view. This view would measure security against security policy and the common vision of the CSO or CIO. Second, the dashboard would provide a view for the CSO and his or her management peers, the operational view. This view would show operational-level information to support decisions such as resource planning, budget allocation, and project management. Finally, a technical view would let the system and security administrators view the technology infrastructure relevant to their responsibilities. Subsequent views or combination views could be created based on the needs of the organization. For example, an organization might have a view for general users that would present its security policies and standards and other pertinent information about its security program. Figure 42 on page 198 shows examples of the various views.
In the executive view, for example, the dashboard would provide executive management an up-to-date report of the current compliance with policies and regulatory issues. The view might display compliance percentages across business units against specific policies, along with trends and prior-month results. Based on the data collected from the infrastructure, the results would be quantified into summary metrics, compared and mapped to policies, and overall compliance results would be reported.
In the operational view, the CIO might use the same policy-compliance information to allocate resources and prioritize projects. For instance, if the dashboard indicated a specific business unit had a low level of compliance on specific policies, the CIO might allocate security resources to further investigate the cause. Specific testing or process reviews may be necessary to help the business unit improve its compliance rates.

For the security administrators in that business unit, the dashboard’s technical view would provide specific information: which systems are out of compliance, technical standards or control implementation details to resolve the problems, and a tracking system to remediate them.

■ Intelligence Analysis

New threats and vulnerabilities are uncovered every day. Alerts of their discovery are reported by a variety of sources, including information technology vendors, government and security organizations, security intelligence services, and even hackers. Organizations must proactively monitor this external security information to prevent and minimize attacks. But over-burdened administrators often deem such time-consuming monitoring a lower priority than other security activities.

However, the identification and reporting of new vulnerabilities is one of the most crucial pieces of an organization’s information security framework. The identification of new vulnerabilities must come from a trusted source, and the risk must be clearly identified to begin remediation.
This process of evaluating and prioritizing external security information is called intelligence analysis. Tools used for intelligence analysis capture and consolidate information from various external sources. The data is normalized and filtered to eliminate redundant and unnecessary information. Then, based on a company’s asset classification data (which maps dependencies and determines the asset’s value to the organization), an intelligence analysis tool prioritizes which of the alerts the organization needs to act upon and how quickly it must do so.
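The filtering and prioritization step might look like the following toy sketch, in which alerts for platforms absent from the asset inventory are discarded and the remainder are ranked by alert severity weighted by the value of the most important affected asset. The alert and inventory structures are invented for illustration.

    # Invented inventory: host name -> (platform, asset value on a 1-5 scale).
    inventory = {"web01": ("windows", 3), "erp01": ("solaris", 5)}

    alerts = [
        {"id": "A-1", "platform": "solaris", "severity": 4},
        {"id": "A-2", "platform": "aix", "severity": 5},  # no AIX deployed
    ]

    def prioritize(alerts, inventory):
        deployed = {platform for platform, _ in inventory.values()}
        relevant = [a for a in alerts if a["platform"] in deployed]

        def urgency(alert):
            exposure = max(value for platform, value in inventory.values()
                           if platform == alert["platform"])
            return alert["severity"] * exposure

        return sorted(relevant, key=urgency, reverse=True)

    for alert in prioritize(alerts, inventory):
        print(alert["id"])  # A-2 is filtered out; A-1 is reported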
The crucial time period between the discovery of a vulnerability and the occurrence of a related exploit is quickly diminishing.

Security intelligence services can help improve corporate awareness and shorten an organization's response time by issuing relevant patches and workarounds for newly identified vulnerabilities. The two types of security intelligence services are community and commercial. Table 11 on page 201 shows a representative listing of these services.

COMMUNITY INTELLIGENCE SERVICES

These services are essentially moderated mailing lists to which subscribers post vulnerability alerts, information about patches and workarounds, and other vulnerability-related information. In such a community-based service, alerts are not usually verified, nor are they issued in a consistent manner—both factors that can lead to false positives.
An organization’s security personnel must spend considerable time to assess the alerts from these services to determine whether the patch or workaround is possible and whether the vulnerability presents a threat to the company. Because community-based services do not have an automated filtering process, administrators must review potential alerts to determine those relevant to the organization.

A subset of community intelligence services are those mailing lists provided by hardware and software vendors that contain alerts specific to the vendors’ products. Vendor alerts usually consist of a link to a security patch that corrects the identified problem, and these alerts usually do not include exploit information or additional analysis about the vulnerability. The format of each alert is specific to each vendor, thus a vulnerability deemed critical and its associated patch from one company may not be equivalent to a so-called high-risk vulnerability and associated patch from another.

COMMERCIAL INTELLIGENCE SERVICES

Premium security intelligence services allow organizations to customize alert and vulnerability notification. For example, if a company has deployed only Microsoft’s Windows operating systems, Sun Microsystems’ Solaris and IBM’s AIX vulnerabilities are not relevant and should not be reported to the organization. Such services also let organizations prioritize alerts according to risk and urgency levels; by applying a consistent risk rating to alerts, companies can determine the process and urgency for remediating the identified problem.
Often, commercial services announce significant security alerts in advance of vendor announcements about the vulnerability, enabling organizations to better protect themselves from threats. For example, in July 2003, Symantec’s DeepSight service issued an alert about the Cisco IOS Malformed Packet Denial of Service vulnerability approximately eight hours before Cisco released the official announcement. By the time of Cisco’s announcement, DeepSight customers had already received a summary analysis of the vulnerability, a patch, and workaround information. This early disclosure of vulnerabilities has become a controversial practice. Critics argue that it is the responsibility of hardware and software vendors to release alerts more rapidly and that it is unfair for the customers subscribing to premium services to have early or exclusive knowledge of a vulnerability.
Many commercial intelligence services provide global Internet threat analysis. This type of analysis processes millions of events from across the globe and attempts to identify the most frequent types of attacks, most exploited services, and overall attack volume. These services can help identify Internet worms or global regions that present the greatest risk to a particular platform or industry. Some community intelligence organizations also offer such services, such as the alerts issued by the SysAdmin, Audit, Network, Security (SANS) Internet Storm Center.
In addition to providing vulnerability notifications, commercial intelligence services also provide industry and geopolitical notifications. For example, if a trend is developing in which banks are being targeted by attackers, the service will send notifications to companies in the financial services industry. Or, for example, when the United States was at war with Iraq in mid-2003, some people opposed to the war made public threats about cyberattacks on U.S.-headquartered companies. In this case, several of the commercial services alerted their customers to monitor for attacks from specific countries in which the threats had been issued.
Another type of monitoring provided by premium intelligence services focuses on Internet sources in the public domain, such as newsgroups, Websites, chat rooms, and mailing lists. Intelligence is conducted specifically for a single organization and is focused on detecting potential threats like spam, malicious or spoofed Websites, negative publicity, hacking attempts, and publication of confidential information. For example, iDefense's iMonitor service monitors chat rooms, newsgroups, mailing lists, and Websites for specific references to a client company or its business.
Most security intelligence providers deliver alerts in a standard format that can be integrated into security remediation management software. By integrating alerts with a remediation management system, patches and fixes can be applied quickly across the organization, following the steps outlined by the intelligence service.

■ Asset Classification

The ability to efficiently and effectively respond to threats and vulnerabilities depends on how well an organization understands the role each of its assets plays in the overall environment. More than simply knowing the asset’s specifications, this knowledge includes determining how the asset is used in an organization’s primary business processes, its direct and indirect relationships with other assets, and the risks associated with impairment of the asset’s confidentiality, integrity, and availability. For example, if an enterprise resource planning (ERP) system is determined to be a crucial asset, an analysis must determine the impact an inventory system has on the ERP system. The confidentiality, integrity, and availability risks that affect the inventory system must be considered when determining those same attributes in the ERP system.
This information discovery and capture process is called asset classification. By identifying its most important assets, an organization can determine how to best allocate resources when managing threats and vulnerabilities. Additionally, in some countries, risk assessment of data is required by law if the organization has private and sensitive information about customers and employees. The purpose of asset classification can be thought of as protecting the right information, with the right amount of security, at the right time. During the asset classification process, organizations must consider the so-called highest common denominator; that is, when classifying assets, if one system (for example, ERP) is directly and materially dependent on another (inventory), the asset classification for both systems must be as high as the highest rating of the two systems.
Building on the previous example, a security program would focus threat and vulnerability management activities on those software and hardware systems that run the ERP system. Additionally, the ERP system architecture would have more protective resources applied to it than less essential systems. If these measures fail to guard the ERP system against an attack, the asset class that the ERP system belongs to should prescribe remediation activities that will focus on returning the system to a trusted state before those systems of a lesser classification.
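The highest-common-denominator rule can be expressed as a small recursive calculation: an asset's effective classification is the maximum of its own rating and the effective ratings of everything it materially depends on. The ratings and dependency graph below are invented examples.

    # Invented ratings (higher = more valuable) and dependency graph.
    ratings = {"erp": 3, "inventory": 4, "reporting": 2}
    depends_on = {"erp": ["inventory"], "reporting": ["erp"]}

    def effective_rating(asset, seen=None):
        """Raise an asset's rating to the highest rating among its dependencies."""
        seen = seen if seen is not None else set()
        if asset in seen:  # guard against circular dependencies
            return ratings[asset]
        seen.add(asset)
        dependency_ratings = [effective_rating(d, seen)
                              for d in depends_on.get(asset, [])]
        return max([ratings[asset]] + dependency_ratings)

    print(effective_rating("erp"))        # -> 4 (raised by inventory)
    print(effective_rating("reporting"))  # -> 4 (raised transitively)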

ASSET CLASSIFICATION VERSUS ASSET MANAGEMENT

A company’s asset management activities usually are directly related to the monetary value of physical assets and their resulting amortization. An asset classification system includes not only the value of the physical assets but the value of the information assets as well. However, an asset management program can be the basis for developing and applying an asset classification program.
Where possible, the system that manages asset management activities should also be used to manage asset classification, but only if it provides the ability to link information assets with their related physical assets. If the system does not provide this functionality, the data related to the physical assets can be used in a system that manages asset classification. Regardless of how the information is tracked, keeping it current is essential. If information security decisions are made based on how assets are classified, but the information does not reflect the appropriate classification, assets may become overprotected (causing increased costs) or, more importantly, underprotected.

ASSET CLASSIFICATION PROCESS

Organizations begin the asset classification process by reviewing the company’s business impact analysis. Where no analysis has been performed, a business continuity plan or a disaster recovery plan may be of assistance. If none of these prior activities has been performed, the organization will begin the asset classification process through a series of interviews with executive and senior level management to determine which business processes and assets are of the greatest value to the business. Although identifying the most essential business assets and processes may seem like an obvious exercise, it is not always so straightforward because an organization’s senior managers may deem different systems and processes essential. By surveying a representative group of senior executives, the security organization can ensure that value assignments are vetted across the management team.
Once organizations have identified the appropriate business processes, they must determine the individuals—the information owners, information custodians, and information users—responsible for those processes. Ultimately, the responsibility for the confidentiality, integrity, and availability of the business process will fall upon the information owner. That person is likely to delegate his or her day-to-day responsibilities to an information custodian. Knowing who uses the information relevant to given business processes will also aid in determining the direct and indirect relationships between business processes. Finally, organizations must understand which technologies support business processes so organizations can determine their susceptibility to threats and vulnerabilities.
After determining the people and technologies associated with business processes, a company can define an asset classification scheme. This scheme should incorporate the three attributes of information: confidentiality, integrity, and availability.

An asset classification scheme will vary from organization to organization—each defining a different number of classes, labeled in a way that makes the most sense to the organization. More important than the number of classifications or their specific names are the criteria used for each label and the consistent application of these criteria.

Considerations for criteria should include the impact of direct and indirect loss of confidentiality, integrity, and availability (individually), with respect to materiality, time, customer confidence, internal and external stakeholder confidence, regulatory requirements, and contractual obligations. When the security organization has established the criteria, and senior and executive management have approved them, the organization can establish a qualitative system to assign values to each of the classifications. For the asset classification system to be successful, the organization must apply it consistently throughout the asset classification process.

ASSET CLASSIFICATION SYSTEMS

Currently, no commercial systems comprehensively manage asset classification. However, companies can use their enterprise systems management (ESM) tools to aid in the asset classification management process. ESM tools can be used to constantly monitor and track changes to the organization’s environment, enabling companies to more quickly resolve vulnerabilities. The systems continually poll and scan the organization’s environment and compare the results—a snapshot of the environment at a given time—with prior scan results to determine whether changes have been made. For example, the tool would gather data from a variety of disparate sources in the environment, such as firewalls, IDS sensors, and system and application logs. The system would then process the data and compare the most recent results with the prior inventory. Following this analysis, the system would report any discrepancies between the two inventories.
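The snapshot comparison at the heart of this process is essentially a structured diff. The sketch below compares two invented inventory snapshots and reports new hosts, missing hosts, and newly opened ports.

    # Invented snapshots: host name -> set of open ports.
    previous = {"web01": {80, 443}, "db01": {1433}}
    current = {"web01": {80, 443, 8080}, "ftp01": {21}}

    def diff_inventories(old, new):
        findings = []
        for host in new.keys() - old.keys():
            findings.append("new host appeared: " + host)
        for host in old.keys() - new.keys():
            findings.append("host disappeared: " + host)
        for host in old.keys() & new.keys():
            opened = new[host] - old[host]
            if opened:
                findings.append(host + ": new open ports " + str(sorted(opened)))
        return findings

    for finding in diff_inventories(previous, current):
        print(finding)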
A more advanced ESM implementation might be able to receive and act upon security intelligence gathered from an external source if it is in a format compatible with that used by the ESM system. For example, a software vendor might issue an alert relevant to the organization’s environment. The ESM tool would then poll the environment to determine the existence of the reported vulnerability, report on any changes to the environment, and even provide an analysis of the susceptibility of the systems to the specific vulnerability. Next the system could notify the appropriate information custodian (via the organization’s trouble-ticketing system) of the need to apply a designated patch that the ESM system had automatically retrieved. If the vulnerability is recognized as breached, the system can notify incident response team members.
ESM systems usually comprise a three-tier architecture: a management console (often Web-based), a rules-based engine, and a database for storing scanning and polling results. The most effective ESM products have advanced workflow capabilities, enabling more flexibility to support an organization’s business processes and enabling them to easily integrate with help desk applications, change management systems, and internally developed systems. Another difference among ESM systems is the number of platforms they can monitor and manage. New features in ESM systems include the ability to tie the asset classification assessment and analysis into the system; the ability to consider policy exceptions; and integration with an automated change management process that allows user sign-off.

■ Event Correlation

At the core of a security information management system is an event correlation engine, which aggregates, interprets, and presents data from an organization’s many security systems and data sources. The event correlation system processes both static and near-real-time data, which enables companies to better manage security information and mitigate the potential damage resulting from threats and vulnerabilities. The near-real-time capability of reporting on security events as they are occurring distinguishes event correlation from the historical reporting provided by many other security systems.
For example, if a terminated employee ID was flagged in an event correlation system and the ID was used to log in anywhere on the monitored infrastructure, that event would be identified and appropriate personnel could be notified. If the organization did not have an event correlation system and relied only on system logging, this event would only be identified after the fact—and only if the reviewer of the log file knew the user had been terminated. Even if the company had a real-time monitoring system in place, such as an IDS, the IDS might not identify this event as an issue because it consisted of a valid login using an ID and a password; thus, the login would not look like malicious activity to the IDS.
Event correlation systems also help organizations reduce false positives by gathering additional information about events. For example, if an IDS sensor were to identify an attack, the event correlation tool could evaluate the destination of the attack to determine, first, whether the destination host could be affected by the type of attack and, second, whether it was vulnerable to that specific attack (that is, whether it had already been patched). This ability frees administrators from investigating each and every event and allows organizations to focus resources on crucial events.

HOW EVENT CORRELATION SYSTEMS WORK

An event correlation system interprets data based on a set of rules that defines what events the system should look for and what it should do when it identifies the events. Most event correlation systems have predefined correlation rules, but also allow organizations to customize the rules or define specific events to identify.
The rules for identifying events range in complexity and are based on an organization's security policies and standards. For example, a simple rule might state that if the correlation engine detects the same event originating from the same source IP address, such as a failed login attempt on numerous systems, the system would flag this collection of events as an issue.
A more complex rule might detect the following: If system A logs jsmith logging in from a particular IP address, and then 20 minutes later jdoe logs in to system B from the same IP address, the system would trigger an event that suggests that an ID has been compromised. However, if it is the company’s policy to have shared workstations, this occurrence may not constitute an issue.
The system’s rules also define how the system should manage alerts. For example, if the correlation engine identifies an event, but the system under attack is classified as a low priority, that event could be treated differently than if it had occurred on a system that was classified as containing confidential information. Likewise, the organization must define what happens once an event occurs—who receives notification, how the notification occurs, and how quickly. Alerts can be issued through e-mail, pager, operation management console messages, and so on. The event correlation system alerts should be integrated into the organization’s incident investigation and escalation procedures. If these processes have not been defined, the value of the event correlation system will be diminished.

DATA SOURCES

To effectively define the system’s rules, the organization must identify and document the events it wants the system to look for and then the information required to monitor those events. Implementing a successful event correlation solution depends on understanding the amount of data needed and making sure the quality of the information is high so the system can be relied on for decision-making.
The sources of data and the types of information they contain are numerous, including data from firewalls, IDSs, and application and operating system logs. Nearly every attribute of the data collected could be used by the event correlation system. Examples include source and destination IP addresses, user ID, request type, type of event (such as login failure, successful login, or denied access), and time of the event.
As event correlation systems mature, the number and type of sources that can be used will increase. Current systems will accept data from most of the common platforms; gathering data from nonstandard sources may require some customization of the event correlation system. Most vendors of event correlation systems support standard perimeter devices, such as Check Point's FireWall-1 and Cisco's PIX firewalls, ISS's RealSecure and Cisco's IDSs, and most major operating system platforms such as Microsoft Windows and versions of UNIX. Some systems support additional sources like vulnerability scanning data, security intelligence data, and data from trouble-ticketing systems.

If the data required to monitor specific events is not currently available, the organization must identify this data and determine how it can be captured. Finally, the organization must determine how to integrate this data into the event correlation system’s central data store.
■ Standards and Policies Management Every security process and supporting technology that an enterprise implements is informed by its security standards and policies. Thus, the ongoing maintenance, evolution, and administration of these requirements are essential. Some organizations invest significant resources in defining and implementing policies and standards, but fail to update them in response to security events. For example, if an organization’s standard specifies the use of 56-bit encryption and a recent exploit resulted in cracking a 56-bit encrypted key, the company (and others using a 56-bit encryption standard) should consider implementing a stronger key. If the company does not regularly review and update its standards in light of new threats, it may become more susceptible to such threats.
Understanding how an entity’s policies, standards, and processes relate to one another, and to its ongoing security activities, is essential to understanding the management process. The organization’s security policies are essentially formal statements that articulate requirements necessary to meet its security objectives. For example, a company’s policy may state, “A security risk assessment must be performed in the design stage of the system development life cycle.”
Based upon these policies, companies then define standards that will be used to achieve compliance. For example, in this case the standard would specify, “The security risk assessment must include, at a minimum: identification of technical and nontechnical threats; identification of vulnerabilities associated with all technologies and associated processes that will be utilized; and dependencies on input and output, to and from other systems.”
Finally, companies define processes for applying these standards to the relevant technologies. In this example, the corresponding process would explain where to obtain risk assessment criteria and any applicable tools as well as indicating the person to contact for more information.

POLICY REVIEW AND UPDATE

Security policies and standards are subject to change in the same way as other business policies and standards. Ensuring that security activities provide value to the business is important, and this goal will be accomplished only by changing the policies and standards so that they support changing business objectives. This process becomes even more crucial when regulatory requirements affect policies and standards.
To facilitate the review and update process, companies should organize a cross-business-unit team that revisits policies and standards in their entirety at least once a year. Members of this team usually include senior management from legal, human resources, accounting, financial, privacy, information technology, and information security; representatives from core business units; and an appointed policy custodian. Between formal review periods, the policy custodian should be in regular contact with members of this team to determine whether changes need to be made in the interim. Because of the commitment required to keep policies (and especially standards) current, companies may lapse in their review and update activities. If support for and participation in these important activities wane, the company’s policies and standards will fall out of alignment with its business objectives.
Currently, no tools exist that propagate senior management intent via policies into technical standards, and finally into specific configuration settings for an organization’s infrastructure. There are, however, tools that take technical standards and directly configure infrastructure components via centralized means. Often these tools refer to the technical standards as policies, but these tools do not contain formal policies as defined here. Companies can, however, use these tools to help them establish security controls in the technical environment.
■ Security Reporting Security reporting enables a company to evaluate the effectiveness of its threat and vulnerability management operations. The nature and frequency of the reports vary, depending on who in the organization is the intended recipient. For example, a CSO would be interested in high-level information, such as the status of business initiatives that affect the organization’s security posture, status of projects, and the overall effectiveness of security countermeasures.
On the other hand, a security manager overseeing daily security operations would need more detailed information—for example, the number of password resets the help desk has performed on a weekly basis, or the number of systems that needed a patch installed due to a newly published exploit. Frequently, organizations produce security reports that serve the latter group and focus on the technical details of security. While security managers need this level of detail, upper management does not and instead would derive greater value from security information that is relevant to the organization’s business.
In many organizations, security reporting is a fairly fragmented process and may not be formalized, especially in larger organizations that have grown through mergers and acquisitions. The more disparate business units there are in an organization, the more fragmented the reporting. The same holds true for the technologies used for reporting: as the number of different systems increases, so do the number and types of security reports. To manage a business effectively from a security perspective, the information and its effect on the business must be understood. Additionally, metrics need to be defined so that performance can be measured and tracked. When the company has identified what type of information is required for its reports, it may choose to centralize and consolidate this information in a security dashboard to provide a unified view of the available data.

TYPES OF SECURITY REPORTS

To set up a standard reporting mechanism, companies first identify the types of security information that are generated by their security systems. For example, at a technical level, information from operating system logs, firewalls, IDSs, and other systems can be used to identify anomalies, potential threats, and security events in the environment. A subset of this information could be useful to security management, and an even smaller summary of the information would be useful to upper management. Figure 45 on page 208 provides an overview of the reporting information commonly relevant to each group in an organization.

Operational Reports An organization’s security team will be most interested in summary operational information that can help them manage the company’s day-to-day operations, such as:
• The number of password resets in a particular month

• The number of user change/add/deletes performed

• The number of security incidents

• The number of emergency patches applied

Event correlation and other automated log analysis tools can help the administrators and operators sort through the detail and volume of logs and identify the relevant information necessary to perform their jobs. Ideally, this detailed information should be reviewed daily. If real-time or near-real-time systems are in place, such as IDS and event correlation systems, this data stream should be monitored nearly constantly.
This information can help security managers effectively assign resources to areas of concern and identify areas where efficiencies could be gained. The timing and style of reporting will vary depending on an organization’s culture, industry, and an individual’s management style. Generally, operational reporting is most useful on a monthly or biweekly basis, depending on the volume of activity.
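As a sketch of how such operational counts might be tallied, the following assumes normalized event records like those sketched earlier in this chapter; the event-type names are illustrative, not a standard taxonomy.

from collections import Counter

def monthly_summary(events, year, month):
    # Count the month's events by type; Counter returns 0 for
    # event types that did not occur.
    counts = Counter(e["event_type"] for e in events
                     if e["time"].year == year and e["time"].month == month)
    return {
        "password_resets": counts["password_reset"],
        "user_account_changes": (counts["user_add"] + counts["user_change"]
                                 + counts["user_delete"]),
        "security_incidents": counts["security_incident"],
        "emergency_patches": counts["emergency_patch"],
    }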

Executive Reports Executive-level reporting is far less frequent and should be a summary of significant items that affect the business, as well as trend information about the organization’s security status. In this case, the style of the reporting will also vary based upon organizational attributes. Companies generally use graphical representations of status and progress against goals to most effectively communicate information. In executive-level reporting, technical details and jargon should be avoided.


This chapter profiles representative information-security product and service vendors, as of October 2003. The vendors are categorized according to the type of products each provides: suites and platforms, identity management, technology infrastructure security, threat and vulnerability management, and wireless and mobile security.


For a visual overview of the vendors’ offerings in each primary security activity, see Figure 46 on page 214 and Figure 47 on page 216. In the following discussion, vendors are listed under the category that encompasses their primary product offerings, even if they provide products spanning other categories. Note that as with the general technology market, startup companies are expected to continue to develop advanced technologies and industry consolidation is likely to occur, with suite and platform vendors acquiring specialty vendors.

■ Security Suites and Platforms Because true security requires protection and enablement at several levels of the infrastructure, many security providers supply products that bundle several applications to meet an enterprise’s security needs. These may encompass identity management, technology infrastructure security, and threat and vulnerability management.
BALTIMORE TECHNOLOGIES Baltimore Technologies provides the enterprise with a variety of security products; the company also issues public key infrastructure (PKI) security certificates through its UniCert products and services. The Baltimore products are available singly and in two main suites: Trusted Business Suite and TrustedPortal for Oracle. Trusted Business Suite includes modules for secure network access (including VPN, encryption, single sign-on, user authentication, and access-rights management) for intranet users; certificate-based e-mail security for intranet and Web users; and digitally secured and authenticated document content management that integrates with Microsoft Office and Adobe Acrobat. TrustedPortal for Oracle brings certificate-based security, authentication, and access management to Oracle 9i Application Server portals.
The company also supplies PKI-based security for Windows 2000/XP desktops, as well as certificate-based security for wireless networks, with modules aimed at carriers/ISPs, banks, enterprises, and technology vendors.
Central to Baltimore’s technology is the use of PKI certificates to ensure that data is authenticated, not tampered with, and available to the intended recipient only. All Baltimore’s applications can be managed using its underlying BASE technology. BASE is a packaged certificate-issuing and life-cycle management product that offers advanced user-certificate provisioning and real-time signature and certificate validation. BASE also includes an advanced hardware security module, database, and directory. Both BASE and the solution modules offer Web-based graphical interfaces, personalized menus, automated downloads, and delegated administration.
In September 2003, managed security services company beTRUSTed announced it would acquire Baltimore Technologies’ PKI business.
BMC SOFTWARE As part of its overall enterprise systems management product line, BMC Software provides two sets of security products. One is its Control-SA suite, a package of tools to manage user identities, passwords, and access policies. The suite provides unified, automated systems for managing user identities and access rights; self-service for password reenablement and access requests; and auditing of access policies. The Control suite can integrate with SAP’s R/3. BMC’s other security-related products are modules for its Patrol enterprise systems management product. Patrol modules are available in Cisco and Check Point versions to monitor and manage firewalls.

COMPUTER ASSOCIATES

Computer Associates supplies three related suites intended to manage user identities, create and enforce access policies, and detect malicious code. The eTrust suites are part of an overall corporate identity and policy management system that covers network access, content access privileges, building key-card access, directory services, and human resources records. The suite permits centralized user accounts to which policies can be applied for both IT and non-IT resources, allowing single-account manageability.
The eTrust Identity Management suite includes tools to provide single sign-on, PKI certificates for user authentication, directory services, certificate validation, and enterprisewide user account management. The eTrust Access Management suite provides tools for network and Web access management, firewall provisioning, server security management, and database access management. The eTrust Threat Management suite furnishes tools for virus detection and elimination, intrusion detection, policy auditing, and content filtering (including peer-to-peer detection) for both enterprise systems and broadband-connected PCs. Computer Associates also sells an integrated security management system to provide unified control over the applications in all three suites.

ENTERASYS NETWORKS

The Secure Harbour suite combines intrusion detection and firewall capabilities in a set of hardware and software tools. The tools in the Secure Harbour product set can halt destabilizing traffic and activities; authenticate users at the points of network entry; detect intruders and unauthorized bandwidth usages (such as in denial of service attacks and from peer-to-peer software); manage network traffic according to security policies; and secure data through global VPNs (for WAN-to-WAN and remote-access applications) and advanced network control and monitoring features. The company’s tools operate on both wired and wireless networks, in fixed-location and mobile environments. In addition to its tools, Enterasys provides security assessments and managed security services.

EVIDIAN

A Groupe Bull subsidiary, Evidian supplies the AccessMaster suite, a set of modules for security policy management. The NetWall component enforces network protection from internal and external threats. The PortalXpert component provides policy-based single sign-on for intranet and extranet users of Web-based enterprise applications. The Single Sign-On component lets enterprises implement a sign-on security policy for existing systems, applications, and remote access devices. The PKI Manager component encrypts and provides digital signatures for Web and e-mail applications. The Security Policy component manages users’ rights centrally on distributed systems in compliance with enterprise policies.

HEWLETT-PACKARD OPENVIEW

HP’s OpenView is based on HP’s Integrated Service Assurance (ISA) architecture, which consists of a service layer, an integration layer, and an infrastructure management layer. OpenView includes more than 50 products that focus on service provisioning, integrated service assurance, and service usage. An add-on component, HP OpenView Advanced Security, provides secure communication for managing systems, databases, and applications over unsecured network infrastructures. Many third-party applications can be integrated with OpenView, including those for security, job scheduling notification, desktop administration, simulation, and Website analysis. Tying the products together is HP’s iNOC Console, a Web-based management reporting tool that provides a single access point to the products.
IBM IBM’s Tivoli division furnishes security management solutions for e-business applications. Its identity management solution manages users, access rights, and privacy preferences. The suite includes modules for identity management, directory management, and policy management. IBM’s security-event management solution helps IT staff monitor, correlate, and respond to security incidents. The suite includes the Intrusion Detector and Risk Manager modules; IBM recently expanded Risk Manager’s capabilities to manage security events from IBM DB2 Universal, Oracle, and Microsoft SQL Server databases. In 2002, IBM acquired identity management provider Access360 and is incorporating its technologies into Tivoli’s solutions.

IBM’s professional services arm, IBM Global Services, helps enterprises to assess, design, and implement technologies and policies to enforce security and privacy in both wired and wireless environments and to provision PKI certificates for use in e-business systems.

INTRUSION

Intrusion provides intrusion detection, VPN/firewall, and vulnerability-assessment tools. The company’s SecureNet Provider software for enterprises and managed-service providers delivers a customizable tree view to cross-index intrusion detection data. SecureNet uses network sensors, monitored through a central application, to detect possible intrusions. An optional Policy module provides policy management controls, and an optional Nexus module lets IT push signature controls to all network sensors. Intrusion’s VPN/firewall hardware, its PDS series of security appliances, consists of hardened Intel-based Linux platforms that use Check Point firewall-and-VPN software. Intrusion also supplies SecurityAnalyst, an assessment tool that does not require installation of software agents on target systems. It is designed to provide centralized audit data of all essential Windows security features and has built-in policy definition and reporting capabilities.

MICROMUSE

Micromuse’s Netcool suite of products collects and consolidates data from the application, system, and network layers of the IT infrastructure—giving IT staff a consolidated view of network events in real time. Applications in the suite can verify that network applications or Internet services are running. The suite includes analysis, reporting, and network autodiscovery tools as well as capabilities for automating responses and immediately informing particular individuals, systems, and processes about IT problems that will affect service. The Netcool/Firewall application works with firewalls from Check Point and Cisco Systems, providing intrusion detection, attack response, real-time capture and viewing of firewall logs, and a dashboard view of firewall status.

MICROSOFT

Since Microsoft first announced its Trustworthy Computing initiative in January 2002, the company has announced a number of products with improved security features. Microsoft’s Windows Server 2003 includes built-in security for file access, encryption, password protection, and user identity directory management. Several levels of security are possible, with different levels for older applications, current applications, and applications designed with strict security capabilities.
Windows Server 2003 also offers the common language runtime, which checks where browser-based code was downloaded from and whether it has been changed since it was signed; Internet Connection Firewall to protect computers that are connected directly to the Internet; and Internet Authentication Server for RADIUS authentication. Microsoft also provides a series of tools for assessing and managing Windows Server security settings, as well as for Internet Information Server 2003 and Exchange Server.
In April 2003, Microsoft’s Security Business Unit announced the company would introduce new security technologies and tools in four critical areas by April 2004: patch management, digital rights management, applications development, and network access. Among the patch management products included in this effort are Systems Update Server 2.0, Systems Management Server 2.0, and Baseline Security Analyzer 1.2; additionally, Microsoft plans to expand its automated update capabilities. In the applications development arena, Visual Studio .NET and .NET Framework 1.1 will provide greater control and guidance to developers who need to secure Web applications for Windows-based clients. In the network access area, Microsoft began to offer Wi-Fi Protected Access (WPA) as a download for Windows XP SP1 in May 2003.
Malicious code detection and prevention have become a much higher priority for the company after a series of viruses exploited vulnerabilities in various Microsoft products. In May 2003, Microsoft jointly founded the Virus Information Alliance (VIA) with Network Associates and Trend Micro. Computer Associates, Sybari, and Symantec then joined the alliance in July 2003. The VIA intends to share information about viruses and prevention methods for viruses that attack Microsoft products.
Microsoft announced its intention in June 2003 to acquire the intellectual property and technology assets of GeCAD Software, a Romanian provider of antivirus software. This announcement followed Microsoft’s acquisition in 2002 of Pelican Software, a maker of behavior-blocking software that checks calls to operating systems from applications to monitor adherence to policy.
Microsoft continues to be active in the identity management arena. In July 2003, the company introduced Microsoft Identity Integration Server, a metadirectory and user provisioning product. Microsoft Identity and Access Management Solution Accelerator, a related product created in collaboration with PricewaterhouseCoopers, is designed to help enterprises plan and construct an identity management infrastructure.

NETIQ

NetIQ’s acquisition of PentaSafe Security Technologies has provided the company with a suite of security tools for the enterprise under the banner VigilEnt Integrated Security Management. Its Policy and Compliance Management creates, implements, and enforces security policies for users and resources. Its Administration and Identity Management provisions users and provides self-service password management. Its Vulnerability and Configuration Management establishes and manages security configurations and also identifies vulnerabilities. Incident and Event Management prevents and detects intrusions, consolidates event logs, and analyzes security events throughout the enterprise.

NETWORK ASSOCIATES

Two Network Associates divisions provide security tools for the enterprise. McAfee supplies the Desktop Firewall utility to block unauthorized and malicious network traffic, such as that generated by peer-to-peer software, by using packet-filtering, application-monitoring, and intrusion detection signatures that McAfee has developed. The companion VirusScan Enterprise provides virus protection for Windows systems throughout the enterprise, and the ePolicy Orchestrator allows IT staff to manage security policies, signature and software updates, and other settings across the network. Other antivirus tools are available for Macintosh and UNIX systems; wireless and thin clients; mobile devices; Microsoft Exchange and IBM Lotus Domino e-mail servers; and NetApp and NetWare file servers. McAfee also supplies appliances for e-mail and Internet gateways to block viruses and other malicious code, as well as e-mail spam. The Sniffer Technologies division supplies the Sniffer set of products to analyze and report network vulnerabilities. There are options to cover voice and wireless network segments.

NOKIA

Nokia provides a range of security tools for enterprise users, including firewall/VPN appliances that use Check Point and ISS software; an e-mail appliance that examines e-mail for viruses and spam using various partners’ software; and VPN clients for several Nokia data-enabled cell phones. Nokia’s hardware uses its own IPSO secure operating system in its appliances. The company also collaborates with a variety of other companies—including F5 Networks, Check Point, Trend Micro, Permeo Technologies, SurfControl, Tripwire, FishNet Security, and ISS—to provide complete solutions for enterprise security.

PALISADE SYSTEMS

Palisade supplies a variety of network management products, including several for security management. The ScreenDoor network appliance blocks, monitors, and manages various internal network functions, such as network protocol use, IP address use, and internal system access. The PacketHound appliance detects and manages the use of network applications and protocols, such as streaming media, Web radio, and distributed peer-to-peer file sharing. The FireBlock appliance allows organizations to monitor and enforce network-level access policies on their internal networks. The SmokeDetector appliance provides electronic decoy capabilities (honeypot intrusion detection) to attract, detect, and delay attack attempts. The Windows-based FireMarshal application centrally administers Palisade network appliances.
PASSGO PassGo provides a variety of security tools, primarily for UNIX and OS/390 environments, including identity management; single sign-on; password-reset synchronization across multiple platforms; hardware- and software-based authentication for perimeter security (including a module for Web-based e-mail access); and access- and session-management control for servers and Websites. Its flagship products are the access authentication solution and the Webthority server and Web access-management solution. An LDAP version of its Defender software is under development.

RSA SECURITY

RSA provides both authentication and access-management software for enterprises, as well as professional services, security tools for software developers (including its Bsafe library of products), and security education through its conferences and publications. RSA’s primary focus has been on authentication, and its authentication options include software and hardware tokens, smart cards, and digital certificates for use in VPN, e-mail, intranet, extranet, and Web access. To manage such authentication mechanisms, RSA provides the Keon product family, with modules for SSL Web servers, secure VPN, secure e-mail, and digital signatures. For access management, RSA’s ClearTrust software helps enterprises manage and enforce a centralized authentication policy while providing single sign-on for Web-based users. RSA Mobile authentication software is designed to use mobile phones, PDAs, and the Short Message Service (SMS) infrastructure to quickly deliver one-time access codes to end users for secure entry into Web-based applications. For developers, RSA provides tools to integrate encryption and digital-signature technologies into enterprise, desktop, and mobile applications.

SECURE COMPUTING

Secure Computing provides three product lines: one for user authentication and access management, one for firewall provisioning, and one for Web filtering. The company also supplies professional services to assess and implement products in each area. The SafeWord set of tools provides authentication services to ensure that only authorized users have access to various internal and external systems. Capabilities include one-time passwords, Citrix operation, use of various token protocols (including smart cards, biometrics, USB tokens, and digital certificates), and brokered authentication. The UNIX-based Sidewinder G2 firewall offers stateful inspection, circuit-level proxies, application filtering, secured servers, and real-time intrusion detection and automated response. Secure Computing also supplies a companion VPN client. The company also provides SmartFilter for URL filtering, which can work with Cisco hardware to prevent peer-to-peer traffic.
SECUREWORKS
SecureWorks provides intrusion prevention at the network and host levels, a managed commercial firewall service, and vulnerability assessment. The SecureWorks Intrusion Prevention Service for networks can prevent malicious network attacks by blocking them via packet filtering, not merely detecting and reporting them. The SecureWorks host-based intrusion prevention service provides behavior-based attack blocking in real time; protection against attacks that bypass perimeter security; and policy management. Both the network- and host-level services provide around-the-clock monitoring and notification, as well as reporting. The SecureWorks Commercial Firewall Service provides security management for Check Point’s FireWall-1 on Nokia and Solaris platforms, as well as for some Cisco PIX firewalls. The SecureWorks vulnerability assessment service provides views from both inside and outside the subscriber’s network perimeter and secure online reports from any Web browser.
SUN MICROSYSTEMS
Sun provides operating system–level security tools for the enterprise. The Trusted Solaris operating system provides firewall capabilities for Sun SPARC and Intel x86 environments. Among its capabilities, Trusted Solaris ensures that all administrative actions are traceable to an authenticated individual; audits all transactions; requires all objects to be labeled and all users to have defined clearance levels; and ensures that data is sent only to other users (or devices) certified to be at the same access level or at a higher one. The optional Solaris Enterprise Authentication Mechanism provides a single repository for enterprise authentication information. This information repository is intended to reduce the risk of employees gaining access to unauthorized information and sharing it with outsiders—according to Sun, an occurrence more common than external break-ins. The company’s standard version of Solaris 9 provides 128-bit encryption and supports DES, 3DES, IPSec, Kerberos, the Pluggable Authentication Module interface, smart cards, SKIP, and SunScreen security protocols. Sun also provides the ONE Directory Server to manage identity data.
SYMANTEC
Symantec provides enterprise tools for security-policy assessment and virus detection and prevention. The Enterprise Security Manager discovers policy deviations and vulnerabilities and allows network administrators to correct faulty settings to bring systems back into compliance. It runs on a variety of platforms including Windows NT/2000, Novell NetWare, OpenVMS, Linux, and most varieties of UNIX, and it integrates into existing enterprise systems management packages such as HP OpenView, IBM Tivoli Enterprise, and BMC Patrol. The Enterprise Incident Manager manages alerts on security incidents from various devices and programs, to prioritize and route them to IT staff. Event Manager for Antivirus offers similar capabilities for tracking virus incidents, including viruses detected by third-party antivirus tools, using an optional module.
Symantec also sells its own antivirus software that can be managed and deployed over the network as well as through desktop installations. One enterprise version combines antivirus and intrusion detection for use within a corporate network and by remote users. Other software modules provide vulnerability assessment and intrusion detection. For smaller businesses, Symantec provides a gateway security appliance to detect intrusions and viruses, to filter Internet content, and to provide VPN and firewall capabilities.
UBIZEN
Ubizen supplies around-the-clock security management services for the enterprise, providing service-level-agreement-based management for firewalls, identity management systems, and VPNs. The company works with a variety of hardware and software, including Check Point and Cisco firewalls and Cisco and ISS network intrusion detection probes, with support for new products added regularly. In addition to monitoring and response services, Ubizen OnlineGuardian Management Services include updating firewall and network intrusion detection policies and installing new security patches and updates. At the heart of Ubizen’s monitoring service is its SEAM technology, which collects and processes data generated by other security devices, normalizing and analyzing the data and highlighting only those events that may pose a security risk. Ubizen also protects Web servers against application-level attacks with Ubizen DMZ/Shield Enterprise, which intercepts requests to ensure that they are genuine and in line with security policies. If a request does not match, it is rejected.
VERISIGN
VeriSign provides authentication, secure payment, and digital certificate services, mainly through managed services.
The company provides SSL certificates for Websites and other e-commerce applications, with 128-bit encryption the standard level of security. For increased security, VeriSign has a hardware module that conforms to the FIPS 140-2 security standard. For companies requiring ongoing certificates, VeriSign provides a tool to automatically generate needed certificates from the VeriSign system, as well as a logo program that shows users that a company’s Website has passed VeriSign’s privacy- and security-validation standards. VeriSign provides authentication services that issue an SSL certificate only after VeriSign has verified a user or site. The company can validate consumers using credit information, physicians using practitioner databases, and business-to-business customers using various business databases. VeriSign also supplies services for firewall, VPN, and intrusion detection management. In addition, it provides a secure payment processing engine for e-commerce sites.
VIGILAR
Vigilar provides consulting and management services for a range of enterprise security requirements, including identity management, access management, intrusion detection, Web application and database security, content security, WLAN security, and network-perimeter security. The company uses software and hardware tools from a variety of vendors to implement a security strategy. It also provides managed enterprise-security services.
■ Identity Management Identity management products encompass the following components: authentication, access control, directory services, and user management technologies. Identity management vendors may sell complete suites or focus only on specific components.
ACTIVCARD
ActivCard allows IT to consolidate identity credentials onto a single, secure smart card. The CAC product serves as a photo ID as well as a security device that enables secure Microsoft Windows and network login, PC locking, secure VPN, and secure e-mail with digital signatures. Existing IT infrastructures are integrated with enterprise directories and PKI services to streamline the issuance and administration of digital IDs. For remote access, ActivCard Secure Remote Access Solution uses tokens, USB keys, and smart cards (which can be deployed concurrently) to issue one-time passwords and manage the device generating them. Trinity Enterprise Secure Sign-on includes a multifactor authentication platform that allows enterprises to protect sign-on with smart cards, fingerprint biometrics, hardware tokens, passwords, or a combination of these methods. Trinity can also reduce or eliminate usernames and passwords, so users no longer need to remember or manage these credentials; it also supports several different login and sign-on tasks for access to target platforms and applications.
ALADDIN KNOWLEDGE SYSTEMS
Aladdin provides several access-control products. Its HASP line offers hardware-based software protection via dongles. Its Privilege software product provides secure e-commerce for electronic software distribution, software activation, online purchasing and downloads, antipiracy software protection, and software licensing. Aladdin’s eSafe product protects against viruses, malicious code, spam, and unwanted Web content through Web and e-mail gateway security, enforcement of an enterprise’s Internet policy, and security appliances. Its eToken software provides authentication, Web-access control, encryption, digital certificate storage, portable PKI, and remote access.
AVAYA
Avaya’s Access Security Gateway product suite provides challenge/response hardware security, in which users must enter a dynamically generated PIN to gain access. There are four-port and sixteen-port versions of the gateway hardware, as well as a credit-card-size key that generates the correct response key for end users. The gateways provide 56-bit DES key authentication.
BLOCKADE SYSTEMS
Blockade’s ManageID suite provides identity management services for the enterprise. The suite’s three components—IDentiserv, Selfserv, and Syncserv—are designed for distributed enterprises having multiple systems and up to hundreds of thousands of users. It provides identity creation and provisioning, as well as management of access control and other policies. The suite permits employee self-service and synchronization of access controls in a distributed environment, so that each user needs only one username and password for accessing enterprise resources. The suite allows IT to delegate identity management tasks while retaining central authority and management. Blockade also supplies ESaccess, which provides centralized enterprise access control and management using IBM OS/390 or z/OS Enterprise Server systems to administer the access of Web users to corporate Web resources. It provides centralized role-based access control for administration and control of user access.
BUSINESS LAYERS
Business Layers’ eProvision software employs XML-based user profiles to enable role-based access control, automated provisioning, and other user management functionality. eProvision makes user access changes by responding either to business change information flows from enterprise human resources systems such as PeopleSoft, or to direct inputs. eProvision administers the necessary changes to user profiles and then adjusts access rights using workflow task scheduling and monitoring.
CAFESOFT
Cafesoft provides Web SSO and access control products for Java-based applications, J2EE application servers, and Web servers, including Apache, BEA WebLogic, Jboss, Microsoft IIS, and Oracle 9iAS. Enterprises integrate the company’s Cam agents into their application and Web servers. The agents delegate user requests to the Cam server, which authenticates users against standard directory tools and managed security policies. The company also sells Cam programmer libraries and tools, including standard Java APIs and customization tools for building and deploying Web applications.
CALENDRA
Calendra specializes in directory content management and offers tools to aggregate, present, and manage directory information, particularly for identity management and user provisioning purposes. The company’s main product, Calendra Directory Manager, is an LDAP directory content management and development tool that can facilitate identity administration, delegated administration, and identity model transformation. Calendra Directory Manager supplements, rather than replaces, existing directory services architectures, and operates with directory servers from numerous vendors.
CRITICAL PATH
Critical Path supplies several tools to manage user identities, including access permissions and passwords. Critical Path’s Data and Directory Integration product integrates disparate user profiles into a single centralized repository, synchronizing information and ensuring consistent data across multiple systems and applications. Critical Path’s User Provisioning product streamlines the process for granting and revoking user access to resources and applications. Critical Path’s Password Management product enforces password policies throughout the enterprise, synchronizing passwords across multiple applications and systems and enabling user password self-service.
DOCUMENTUM
Document management system provider Documentum supplies the Trusted Content Services module to protect against unauthorized access, regardless of where information is stored or transmitted. Key capabilities for protecting content include repository encryption, encrypted communication, enhanced authentication, single sign-on, certificate support, and increased security features (such as digital signatures). In October 2003, storage vendor EMC announced that it would acquire Documentum.
ENTRUST
Entrust’s technology focuses on ensuring identity through authentication and digital signatures. Entrust provides secure identity management capabilities through a variety of network products for Web portals, electronic forms, Web and WAP server certificates, messaging, file and folder access, VPNs, and WLANs. These products rely on Entrust’s identity and policy management tools, which create required PKI digital signatures and other authentication mechanisms as needed. Its TruePass product provides PKI-based authentication for Web services and is validated as compliant with the federal FIPS 140-1 security standard. Entrust’s PKI also serves as a core element of the Federal Bridge Certification Authority (FBCA), used to validate security for government-to-government transactions, and is interoperable with all major FBCA vendors’ software. The company collaborates with Waveset to provide user provisioning, password management, profile management, and similar workflow tools.
MAXWARE
MaXware’s identity management products provide a set of processes and an underlying infrastructure for creating, maintaining, and removing identity data, including attributes, credentials, and entitlements. Instead of requiring a centralized directory, MaXware’s middleware-oriented products can integrate and synchronize multiple directories and repositories for distributed enterprises.
NETEGRITY Netegrity supplies authentication, identity management, and access management services for e-business systems as well as for an enterprise’s internal networks. The suite includes authentication management, single sign-on, access control, user administration, and resource provisioning across heterogeneous environments.
SiteMinder provides the fundamental policy-management and access controls; TransactionMinder provides policy-based access control to Web services and XML documents. Netegrity also furnishes two versions of IdentityMinder, one for user administration and directory integration and one for identity provisioning. Other modules provide password management, login-concurrency limits, FTP service, wireless authentication for mobile devices, and connectors to Oracle, IBM OS/390 LDAP, FileNet Panagon, PeopleSoft, SAP ITS, Citrix Enfuse, RADIUS, and Siebel applications. The company also provides professional services.
NOVELL
Novell supplies the Nsure suite to provide identity management for the enterprise. Based on Novell’s eDirectory LDAP directory service, Nsure provides a centralized system for managing identities and authorizing access against those identities. Suite components include DirXML, which uses XML to bidirectionally update directories and databases with identity information; Nsure resources, a tool to grant and revoke access authorization; iChain, a unified sign-on system for connected Websites and network components; SecureLogin, a single sign-on provider for heterogeneous networks; Novell Modular Authentication Service, for managing biometric and other authentication information; and BorderManager, which enables enterprises to track users’ network and Web activities by their identities.
NUANCE
As a leader in computer speech recognition, Nuance provides systems for biometric access control. Its Nuance Verifier software can correctly identify distinct human voices over a variety of channels and devices, including both wired and wireless phone equipment. When combined with speech recognition, this software allows users to both identify and authenticate themselves over the telephone by voice, rather than through personal identification numbers or some other method. Additional applications include network login and physical site access control. As of October 2003, the speech recognition software supports eighteen languages. Nuance also provides text-to-speech software for delivering prompts and password challenges via synthesized voice.
OBLIX
Oblix provides two enterprise identity management systems to manage digital identities and user access throughout the organization and across corporate boundaries. One is NetPoint, which provides both enterprise identity management and Web access control. The other is IDLink, which integrates Oblix’s NetPoint with BMC Software’s Control-SA product. For interoperability with other enterprise applications, NetPoint uses SAML technology to allow applications or systems to access the Oblix NetPoint authentication and authorization services. Its IdentityXML component allows other business applications to integrate with NetPoint’s identity management functionality. NetPoint includes a set of application programming interfaces for access and identity functionality, as well as connectors for BEA WebLogic and IBM WebSphere application server environments.
OPEN NETWORK
OpenNetwork’s Universal Identity Platform (IdP) is a complete identity management and access control system for networks, domains, services, and applications across the enterprise. The Universal Identity Manager console is available as either a Java or .NET application. OpenNetwork’s workflow management tools streamline creating, modifying, and deleting user accounts, as well as granting or revoking access to resources. The platform provides single sign-on capability for Web-based applications, with connectors for products from leading vendors including BEA, IBM, Microsoft, PeopleSoft, Plumtree, SAP, and Vignette. This capability can be extended to include applications across multiple organizations via federated single sign-on using SAML and Microsoft Passport.
ORACLE
Among Oracle’s many offerings are products for identity and access management. The Oracle Internet Directory is an LDAP-compliant server built on the Oracle 9i database. It acts as a central repository for storing a wide variety of user information, including credentials, profiles, application preferences, and network management and policy data. The directory itself supports secure authentication through access control lists and SSL. Internet Directory is the mandatory identity component for most Oracle e-business applications.
Building on Internet Directory, Oracle’s Identity Management offering provides an integrated central control panel for managing user identification and access control. It facilitates single sign-on for J2EE, Web, and older applications, including those from Oracle as well as third-party vendors. The suite also bundles certificate authority software for enabling public key infrastructure (PKI). Customers who wish to develop distributed or grid computing applications across discrete networks or organizations are a particular target market for this product.
PASSLOGIX
The Passlogix V-Go software enables single sign-on by taking any form of authentication—including passwords, PKI, smart cards, tokens, and biometrics—and connecting to mainframe, Windows, Web, and custom applications. It enables single sign-on from any computer inside or outside the firewall, connected to or disconnected from a network.
PING IDENTITY
The company operates the PingID Network, a federated identity management network, and provides other services related to federated IdM. PingID Network provides to its members standardized business and privacy agreements, policies and procedures for interoperability, and shared identification interchange services, among other features. The company’s Liberty Interoperability Validation Environment (LIVE) service provides a testing environment for companies deploying Liberty Alliance-based federated SSO. Its Ping Identity Confidence Assertion (PICA) service issues a common metric for measuring the confidence of authentication assertions between entities.
PHAOS TECHNOLOGY
Phaos offers a range of identity management, encryption, and XML security products. Its identity management products facilitate federated single sign-on for commerce applications and Web services. Phaos’s identity model is based on the standards of the Liberty Alliance Project (of which Phaos is a sponsor member). Phaos also provides toolkits and platforms for developing secure applications, including secure cryptography for XML, public key infrastructure (PKI), and SSL-based secure communications for Internet applications and mobile devices.
PROTOCOM
Protocom provides several secure sign-on products for Microsoft Windows and Active Directory environments, as well as for Novell NDS environments. The SecureLogin products provide single sign-on to all corporate applications, both in-house and off-the-shelf packages; users log in to the system once, after which SecureLogin manages all other system and application logins. The Advanced Authentication module integrates software- and/or hardware-based authentication for sign-on; the Self-Service Password Reset module allows end users to reset their own passwords, using a Web browser.
SIEMENS
Siemens Information and Communication Networks (ICN) provides identity management products and systems for carrier and enterprise networks. These products are part of the HiPath Security Solutions portfolio and include MetaDirectory, HiPath Voice over IP (VoIP), and the Intelligent Digital Passport smart card–based system for both physical and network security.
The HiPath MetaDirectory allows synchronization of identity information residing in multiple locations. Siemens has collaborated with Entrust Technologies to develop PKI implementations that involve the use of MetaDirectory.
HiPath VoIP includes risk assessment tools for analyzing IP vulnerabilities for voice applications, as well as eavesdropper detection and caller authentication functionality.
The Intelligent Digital Passport uses biometric authentication and stores a user’s picture, fingerprint, and voiceprint on a single smart card. Enterprises can use one or more of these forms of authentication, depending on the security level desired. Intelligent Digital Passport smart card readers have several different verification engines for each biometric.
SIGABA Sigaba provides two kinds of secure-content software. One product line provides secure delivery of electronic statements by means of a secure e-mail server, with distributed key technology, authentication, fraud checks, and an audit trail. The other creates a protected channel for exchanging information over e-mail, wireless networks, and LANs, as well as in direct desktop-to-desktop connections. The e-mail system uses a secure e-mail server and client software to manage the authentication and encryption.
THOR TECHNOLOGIES
Thor Technologies’ Xellerate provisioning system automatically grants and revokes access to enterprise applications and managed systems. A Java-based user interface lets system administrators set up user profiles, define rules and policies, and determine the adapters the provisioning system will use to integrate with target sources. Xellerate supports both role-based and rule-based access controls. Administrators also can build workflow and business processes using the Xellerate interface, or the product can use processes already established in the enterprise. Two browser-based interfaces allow system administrators to delegate functions. Individuals or groups directly responsible for specific resources can perform administrative tasks through one browser interface. Another browser interface lets users change their own passwords or request access to particular resources.
Xellerate identifies rogue accounts and responds according to process rules predefined by the enterprise, such as sending an e-mail alert, leaving the account unchanged, or deleting the account. Xellerate can generate reports on the history and the current state of the provisioning environment. The product also de-provisions an account when a user no longer needs access. Rollback and recovery capabilities allow administrators to stop execution of a provisioning transaction and return to the system state before the transaction started, or to recover the last known consistent state of the system if a failure occurs during a provisioning transaction.
WAVESET
Waveset’s Lighthouse Enterprise Edition provides a suite for secure identity management. The suite includes Provisioning Manager for identity provisioning; Password Manager, which includes self-service password management; Identity Broker for identity-profile management; and Directory Master for directory management. Waveset’s products enable delegation of tasks to managers in other departments and organizations, and to end users themselves, without sacrificing central authority over these activities. The suite integrates with authentication products from Entrust.
WIKID SYSTEMS
WikID Systems offers a two-factor authentication system with a request-response architecture. It combines strong asymmetric encryption with Internet-enabled devices to create a token system that it claims is as strong as hardware tokens, but with the flexibility of a software-based solution. The WikID Server is delivered as an appliance. Version 2.0 includes the capability of resetting NT/Active Directory passwords.
■ Technology Infrastructure Security Technology infrastructure security includes products for network, perimeter, application, and data store security.
ADOBE SYSTEMS
Adobe’s Acrobat document-collaboration and -sharing software provides password security and control over end users’ access to its Portable Document Format (PDF) files, such as preventing the copying or printing of those files. PDF authors can also assign access rights based on each recipient’s PKI security certificate information, and Acrobat serves as a platform for third-party security plug-ins.
CHECK POINT SOFTWARE
Check Point Software (CPS) provides a variety of hardware and security technologies to deploy and manage VPNs and firewalls for enterprises and for small businesses.
The FireWall-1 software, which runs on Windows, Solaris, and Linux, monitors traffic on gateway servers to detect unauthorized access across a wide range of traffic, including e-mail, instant messaging, and peer-to-peer applications. The software is preconfigured with traffic patterns for 150 common applications, so it can detect Trojan horse attempts. It is also available embedded in third-party appliances. CPS also includes the technology in its VPN-1 software, which creates secure VPNs for internal, Web-based, and remote connections. To manage VPNs and firewalls, CPS supplies the SmartCenter series of software, which lets IT create and implement VPN, firewall, and quality-of-service (QoS) policies. The Pro version integrates with LDAP directories. The company’s acceleration products help speed VPN traffic. CPS also provides industry-specific solutions for health care, government, pharmaceutical, and retail customers, as well as versions of its technology aimed at satellite and branch offices, remote users, and small businesses.
CISCO SYSTEMS
Cisco Systems provides a full suite of security products that leverage the popularity of its network equipment in the enterprise. Cisco includes firewall and VPN functionality in its routers, switches, hubs, and access points, as well as supplying dedicated firewall appliances, all of which include access control lists and stateful management. Cisco’s hardware includes both intranet and Web-based network access. Cisco’s hardware also includes multiple authentication and encryption technologies—including Internet Protocol Security (IPSec), Kerberos, SSL, Remote Authentication Dial-In User Service (RADIUS), TACACS+, 802.1x, one-time passwords, digital certificates, and smart cards.
Using a combination of Cisco network-security software and Cisco hardware, enterprises can also manage user access and detect intruders. The company also provides the Hosting Solution Engine suite for maintaining secure e-business platforms. To improve security at the network perimeter, Cisco supplies Web filtering (for both content and URLs). In spring 2003, Cisco added to several of its intrusion detection tools capabilities to detect the use of file-sharing programs that violate corporate Web-use policies and often permit malicious code into the enterprise.
By combining various Cisco hardware and software tools, enterprises can manage security for intranets (both intracampus LAN/WAN and intercampus WAN-to-WAN), Web presences, remote access to corporate networks, and wireless connections to the corporate intranet. The company also supplies security consulting and management services.
F5 NETWORKS F5, a provider of network and Web traffic-management software, also supplies dedicated hardware to speed SSL processing for e-commerce, telecommunications, and cryptographic systems. The Big-IP eCommerce Controller 540 both manages Web traffic and provides centralized processing of SSL certificates. The SSL Accelerator hardware option can interpret encrypted cookies so it can factor their processing needs for overall traffic load balancing, and the FIPS SSL Accelerator option adds FIPS 140-1 Level 3 support and physical server security, using smart cards.
GILIAN TECHNOLOGIES
Gilian’s GServer is a hardware appliance designed to protect Web-based application content and access. The GServer authenticates incoming requests to ports 80 and 443, enabling it to detect and block illegitimate traffic. The product protects against Website vandalism or the posting of unauthorized content through its ExitControl technology, which uses digital signatures to verify the authenticity of static and dynamic Web content. The product also has built-in forensics capabilities that are triggered when a security event is detected.
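The exit-control idea can be illustrated with a minimal Python sketch: outgoing pages are checked against signatures recorded when the legitimate content was published, and anything that no longer matches is blocked. The HMAC shared secret and the signature table here are hypothetical stand-ins for the PKI-based digital signatures the GServer actually uses.

```python
# A minimal sketch of exit-control-style content verification, assuming an
# HMAC shared secret stands in for the product's PKI-based signatures.
import hmac
import hashlib

SIGNING_KEY = b"site-signing-key"  # hypothetical key; a real deployment uses PKI keys

def sign_content(content: bytes) -> str:
    """Compute a signature for approved Web content at publish time."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

# Signatures recorded when legitimate content was published.
approved_signatures = {"/index.html": sign_content(b"<html>Welcome</html>")}

def allow_response(path: str, outgoing: bytes) -> bool:
    """Block the response if the outgoing page no longer matches its signature."""
    expected = approved_signatures.get(path)
    if expected is None:
        return False  # unknown resource: treat as unauthorized content
    return hmac.compare_digest(expected, sign_content(outgoing))

print(allow_response("/index.html", b"<html>Welcome</html>"))   # True
print(allow_response("/index.html", b"<html>DEFACED!</html>"))  # False: vandalism caught
```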
INGRIAN NETWORKS
Ingrian provides a suite of tools to help enterprises protect their applications and data in transit across the Internet and in storage on internal servers and databases. The company’s application security software consists of security software integrated into a family of secure transaction platforms, for end-to-end privacy of all Web-based transactions, including e-commerce, e-mail, and enterprise resource planning (ERP). Ingrian’s tools cover access control for identity management and user authorization and authentication; secure connectivity between the Internet entities involved in the transaction; application protection by inspecting, screening, and filtering transaction data before it reaches internal servers and databases; storage security to protect sensitive data processed and/or stored on back-end servers and databases; cryptographic-key management; and audit trails to ensure policies are effective and enforced. Ingrian provides these services using a combination of Security Transaction Platform rack-mounted hardware devices and Service Engine software tools for various content security management applications.
MERCURY INTERACTIVE
Mercury Interactive’s LoadRunner load testing tool predicts system behavior and performance. From a central console, IT staff can direct thousands of virtual users to perform transactions and emulate production traffic. Real-time monitors capture performance data across all tiers, servers, and network resources. This data is displayed on the central console and stored in a database repository. The LoadRunner Tuning Module provides component test libraries and information to help IT staff isolate and resolve performance bottlenecks. The LoadRunner Transaction Breakdown Module assists in identifying and resolving performance problems in J2EE applications. The LoadRunner Hosted Virtual Users tool enables organizations to load test their Web-based applications from outside the firewall at all stages of deployment.
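The virtual-user concept can be sketched in a few lines. The Python sketch below, which assumes a hypothetical target URL and is not LoadRunner’s own scripting interface, spawns concurrent simulated users and records per-transaction response times:

```python
# A minimal sketch of the virtual-user idea behind load testing; the target
# endpoint is hypothetical and real tools capture far richer metrics.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://localhost:8000/"  # hypothetical endpoint under test

def virtual_user(user_id: int) -> float:
    """Perform one transaction and return its response time in seconds."""
    start = time.perf_counter()
    try:
        urllib.request.urlopen(TARGET, timeout=5).read()
    except OSError:
        return float("inf")  # count failures as unusable response times
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=50) as pool:  # 50 concurrent virtual users
        timings = list(pool.map(virtual_user, range(200)))  # 200 transactions total
    ok = [t for t in timings if t != float("inf")]
    print(f"completed {len(ok)}/{len(timings)}, worst {max(ok, default=0):.3f}s")
```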
NCIPHER
nCipher produces hardware security modules (HSMs) designed to improve cryptographic application performance and to protect cryptographic keys inside tamper-resistant hardware. These devices can be used at different points of risk across an extended enterprise, including the Web infrastructure, applications, Web services, payments, and databases.
nCipher’s nShield, nForce, and payShield are dedicated modules that directly attach to individual servers. These HSMs can offload cryptographic processing requests from the host server, resulting in increased server processing capacity and accelerated cryptographic application performance. nShield can enhance security of applications, nForce helps secure SSL communications to Web servers and application servers, and payShield provides protection and the ability to handle the high volumes of symmetric and asymmetric cryptography required by payment systems for the authentication and verification of cardholders. nCipher’s nFast SSL accelerator cards also help enterprises to optimize their server performance.
nCipher’s netHSM can be shared by multiple servers. This network-attached device allows many applications to access hardware-based encryption, decryption, and signing functions using secure connections over IP networks. nCipher’s developer toolkits help developers integrate nShield and netHSM modules with software applications. nCipher’s product line also includes the Document Sealing Engine (DSE 200), a networked appliance that cryptographically seals documents by applying a digital signature and an independent, auditable time stamp.
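The sealing operation can be sketched in Python. This is a simplified illustration only: the HMAC key below is a hypothetical stand-in for a signing key that a real HSM generates and keeps inside tamper-resistant hardware, and a production seal would use an independent, auditable time source rather than the local clock.

```python
# A minimal sketch of document sealing: bind a digest of the document to a
# timestamp, then sign both together so neither can change undetected.
import hmac
import hashlib
import json
import time

SEALING_KEY = b"hsm-protected-key"  # hypothetical; a real HSM never exposes the key

def seal_document(document: bytes) -> dict:
    """Produce a seal record covering the document digest and a timestamp."""
    record = {
        "digest": hashlib.sha256(document).hexdigest(),
        "timestamp": int(time.time()),  # an auditable time source in practice
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SEALING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_seal(document: bytes, record: dict) -> bool:
    """Recompute the digest and signature; any tampering invalidates the seal."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SEALING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and unsigned["digest"] == hashlib.sha256(document).hexdigest())

seal = seal_document(b"quarterly results")
print(verify_seal(b"quarterly results", seal))  # True
print(verify_seal(b"altered results", seal))    # False
```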
NETSCREEN TECHNOLOGIES
NetScreen provides three types of hardware-based network security tools: firewalls, VPNs, and intrusion detection. The combined firewall and VPN appliances include stateful inspection to prevent both denial of service attacks and more-traditional hacking attempts. NetScreen supplies several models for different network needs, including models for wireless networks and GPRS cellular data networks as well as for use at network segments and remote-access points. VPN capabilities include dynamic VPNs, IPSec security, and zone-based policies for remote-access, site-to-site, and internal VPNs. For larger enterprises, NetScreen also provides a unified management system that can manage thousands of NetScreen appliances. Although the appliances can prevent many types of unauthorized access, NetScreen furnishes its Intrusion Detection and Prevention (IDP) appliance to augment their capabilities. This appliance performs multiple types of detections across the network, using sensors and in-depth packet analysis to uncover attacks disguised in otherwise legitimate traffic, including in instant messaging and peer-to-peer traffic. The IDP appliance uses IT-defined security policies to determine the way to respond to attacks.
NORTEL NETWORKS
Nortel supplies enterprises with VPN/firewall software and SSL-based encryption technology for e-commerce. The Alteon line of switches includes models that provide firewall, VPN, and accelerated SSL encryption services. Nortel also supplies the Alteon Security Manager software to manage Alteon devices in the network. For branch offices and similar medium-size environments, Nortel furnishes the Contivity line of gateways, which can act as a firewall or dedicated VPN switch, in addition to serving as an Internet Protocol router, depending on the type of license acquired. For wireless environments, Nortel provides a similar set of products. It also supplies firewall and VPN products for service providers.
OPTIMUS SOLUTIONS
Optimus provides professional services to help enterprises assess and implement solutions for their security needs. Services include perimeter security testing, wireless security assessment, policy and procedure development, enterprise design and integration, biometrics, two-factor authentication, VPN design and implementation, and intrusion detection.
PERMEO TECHNOLOGIES
Permeo provides application security so the enterprise can control access to specific TCP and UDP applications for selected users. The Application Security suite contains several modules: these include an application gateway; a connector to common authentication/authorization and policy-management engines, such as LDAP, RADIUS, and Windows Domain/Active Directory; Permeo Encrypt, which provides a layer of SSL encryption with support for all major encryption algorithms; a downloadable and remotely configurable agent for securing Web and client/server applications; a clientless portal-based solution for securing Web and client/server applications; a browser-based application gateway-management application; and filter modules for validating FTP and Telnet access as well as for access audits.
SANCTUM
Sanctum provides IT development tools so enterprises can build security into their applications. The company’s AppShield is an automatic Web application firewall that provides Web intrusion prevention at the application level. AppShield allows for application deployment in a secure environment by identifying the legitimate requests made to an e-business Website and permitting only those actions to take place, enforcing the Website’s Web and business logic. Sanctum has also collaborated with Sun Microsystems to supply Sun servers running a hardened version of the Solaris operating system and AppShield as the iForce Perimeter Security Solution. Sanctum’s AppScan provides automatic Web application-security testing. AppScan learns the unique behavior of each Web application and delivers an array of attack variants to test and validate vulnerabilities, including all application-specific and common Web vulnerabilities, with tests for Web-services technologies such as .NET. Finally, Sanctum provides the AppAudit remote audit service to determine the general security of a corporate site at the application level.
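The positive security model described above can be sketched as an allow-list of the requests a site legitimately offers, with everything else rejected. The rule table below is a hypothetical hand-written simplification; AppShield derives its policy from the application itself.

```python
# A minimal sketch of a positive security model: only requests matching the
# site's known business logic are permitted.
import re

# Legitimate actions the site exposes, keyed by (method, path-pattern).
ALLOWED_REQUESTS = [
    ("GET", re.compile(r"^/catalog/\d+$")),
    ("POST", re.compile(r"^/cart/add$")),
    ("GET", re.compile(r"^/$")),
]

def permit(method: str, path: str) -> bool:
    """Allow only requests that match an explicitly approved pattern."""
    return any(method == m and pattern.match(path)
               for m, pattern in ALLOWED_REQUESTS)

print(permit("GET", "/catalog/42"))             # True: a legitimate request
print(permit("GET", "/catalog/../etc/passwd"))  # False: path traversal blocked
print(permit("DELETE", "/cart/add"))            # False: method never offered
```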
SAP
As part of its enterprise applications, SAP provides NetWeaver, which serves as a platform for a variety of authentication mechanisms, including standard X.509 digital certificates, smart cards, ticketing, and username-and-password authentication. NetWeaver provides single sign-on and trust-center services for PKI. Users can be granted access to information, applications, and services automatically, on the basis of the user’s specific roles. Also available are authorization mechanisms based on access control lists. NetWeaver’s encryption features, which include HTTPS support, ensure that information exchanged among users remains private. Using external third-party security software, NetWeaver enables IT to protect communications links among distributed components of the mySAP Business Suite.
SONICWALL
SonicWall supplies firewall/VPN appliances for wired and wireless networks. The appliances provide stateful inspection for firewall protection and IPSec-enabled VPNs. Optional subscription-based services include virus detection, content filtering, and vulnerability-scanning capabilities. SonicWall also provides optional mobile VPN clients. The wireless appliances integrate access points, and some models contain signal boosters. To offload SSL processing from network servers, the company supplies a line of SSL-acceleration appliances.
SYGATE TECHNOLOGIES
Sygate’s technology assesses usage patterns for servers, clients, and network segments. Using that information to build a profile of proper activity, Sygate then requires each server and Windows client that connects to validate the integrity of its activity on the basis of those norms. Using agent software at the server and client level, as well as hardware at the network node level, the Sygate system automatically validates such integrity at each connection, using the policies derived from the proper activity. These profiles can be modified as activity evolves. The Sygate technology works in both wired and wireless networks and on Solaris and Windows servers, as well as with Oracle and Microsoft SQL databases and with Sun and Microsoft Web servers. It can assure integrity within a network as well as with remote users connecting via VPNs. Sygate also furnishes a Personal Firewall Pro application meant for broadband-connected users.
TRICERAT SOFTWARE
TriCerat supplies a variety of software technologies to manage applications, printing, and other network resources. For security needs, its Thor technology eliminates all user-introduced applications and scripts. It creates a locked-down application environment in which the administrator grants rights to applications, so end users can no longer perform unauthorized application installations. It also prevents virus-laden executables from running, even from within an e-mail application or Internet browser. Thor protects sessions created by TriCerat Simplify Lockdown, RDP, ICA, Citrix Nfuse, and other portal technologies.
WATCHGUARD TECHNOLOGIES
WatchGuard provides perimeter-security firewall appliances to safeguard Internet connections into the enterprise, as well as Windows-based desktop systems and Internet servers. The company also supplies firewall appliances and VPN client software for Windows-based remote and mobile users. In addition, the company provides subscription-based security support and McAfee antivirus service to customers using its appliances.
WESTBRIDGE TECHNOLOGY
Westbridge Technology is a specialist in XML Web services and offers security enhancements for Web services application flows through its XML Message Server (XMS) product. XMS includes an XML firewall that provides directory-based authentication, role-based access control, and an encryption layer for Web services, with support for XML Signature standards. In addition to screening traffic by IP address, the firewall can check XML messages for validity before accepting them. It also includes a sophisticated rule engine for message processing, with predefined rules for conditions such as DoS attacks, SQL injection, weak passwords, and dictionary-based password attacks.
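The validity-checking step can be sketched in Python. The field names and the single injection rule below are hypothetical simplifications; XMS performs full schema validation and applies a much richer rule engine.

```python
# A minimal sketch of XML message screening: reject anything that is not
# well-formed XML or that carries suspect field values.
import re
import xml.etree.ElementTree as ET

SQL_INJECTION = re.compile(r"('|--|;|\b(union|select|drop)\b)", re.IGNORECASE)

def screen_message(raw: str) -> bool:
    """Return True only for well-formed messages with no rule violations."""
    try:
        root = ET.fromstring(raw)  # well-formedness check before acceptance
    except ET.ParseError:
        return False
    for element in root.iter():
        if element.text and SQL_INJECTION.search(element.text):
            return False  # rule match: possible SQL injection payload
    return True

print(screen_message("<order><item>widget</item></order>"))        # True
print(screen_message("<order><item>x' OR 1=1 --</item></order>"))  # False
print(screen_message("<order><item>unclosed"))                     # False
```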
■ Wireless and Mobile Security Solutions
The fundamental techniques of ensuring security are the same for wireless networks and mobile devices as they are for traditional networks and desktop clients. However, many vendors providing products for wireless and mobile security do not also supply wired security products. Companies providing both include Cisco Systems, Enterasys, Entrust, IBM, ISS, Network Associates, NetScreen, RSA, SonicWall, and Sygate; they are covered in other sections in this chapter.
AIRDEFENSE
AirDefense provides security and operational support for 802.11 WLANs, with around-the-clock monitoring to identify rogue WLANs, detect intruders and attacks, enforce network security policies, and monitor the health of the WLAN.
AirDefense is designed to complement wireless VPNs, encryption, and authentication. In August 2003, AirDefense announced that its wireless LAN monitoring device would be compatible with Fortress Technologies’ encryption and authentication techniques as part of a partnering arrangement.
BLUEFIRE SECURITY TECHNOLOGIES
Bluefire’s Mobile Firewall Plus is a mobile firewall, intrusion detection, device authentication, and integrity monitoring system specifically designed for 802.11-equipped Windows-based handheld devices. The enterprise manager furnishes a console to centrally manage large numbers of handhelds, define security policies, and deploy security rules to distributed handhelds. It also provides scalable central collection, consolidation, and reporting of security logs from handhelds.
BLUESOCKET
Bluesocket provides secure wireless gateways for 802.11 and Bluetooth connections. It supports IPSec and PPTP encrypted tunnels, even as clients roam among subnets. Supported IPSec clients include Safenet, SSH, PGPNet, and those built into Windows 2000 and Windows XP. The Bluesocket Wireless Gateway also allows user authentication through a built-in database or through existing central authentication servers—LDAP, RADIUS, Windows NT Domain, Active Directory—before the user can access the network. In addition, Bluesocket provides fine-grained access controls to enforce the services and destinations available to each user. IT administrators can define application and data resources, network services, and controls to specific groups of users.
CERTICOM
Certicom provides wireless security tools for mobile users and for developers of mobile applications. The company’s user-oriented products include MovianMail, which encrypts mail sent to and from Pocket PC mobile devices; MovianVPN, which provides VPN access for mobile Palm, Symbian, and Pocket PC users; and MovianCrypt, which encrypts data stored on Palm and Pocket PC handhelds. The company also provides PKI certificate-issuing systems, which can be deployed in mobile and fixed-network environments. For developers, Certicom provides tools to implement authentication, encryption, digital signatures, PKI certificates, and secure WAP connections into mobile applications.
COLUMBITECH
The Columbitech Wireless Suite supplies a software-only solution for enterprises whose mobile staff use Bluetooth, 802.11, and cellular connections. The suite includes VPN client software installed on a Windows-based PDA or laptop; Gatekeeper software installed on a server in the DMZ for handling authentication and load balancing; and Enterprise Server installed on a server in the corporate network to handle VPN sessions, terminating both the encrypted VPN tunnel and WTLS-based WAP communications.
CRANITE SYSTEMS
Cranite’s WLAN security software provides FIPS 140-2–level security. The WirelessWall Software Suite contains three components. The WirelessWall Policy Server supports the creation of policies that control the characteristics of each wireless connection into the enterprise network, working with the enterprise’s existing directory service. The WirelessWall Access Controller enforces policies for each wireless connection, encrypting and decrypting authorized traffic even as users move across subnets throughout the network. The client software operates on Windows-based mobile devices, encrypting and decrypting data for each user’s connection.
CREDANT TECHNOLOGIES
Credant’s Mobile Guardian helps enterprises centrally manage security policy administration and provide strong on-device user authentication and policy enforcement to mobile devices. It provides a Web-based console for policy management.
Much of Credant’s focus is industry-specific, targeting the pharmaceutical, health care, finance, and distribution industries, in addition to a number of other sectors.
For sectors such as health care, where protection of patient information is critical, features such as mandatory access control and data encryption become important. In addition to these kinds of features, Mobile Guardian provides on-device security, so that devices not connected to the network remain secure, a design characteristic that is helpful in meeting HIPAA compliance requirements, for example.
DIVERSINET
Diversinet provides authentication and VPN tools for wireless and mobile devices, including cell phones, PDAs, and notebooks. Its Passport One product provides an authenticated ID for each device, to validate identity for network access. Passport VPN combines PKI certificates with VPN encryption, enabling mobile users to have secure, authenticated connections. Passport Authorization tracks issued certificates. The company’s products focus on devices connecting through cellular networks, both with and without the WAP.
FIBERLINK COMMUNICATIONS
For mobile professionals, telecommuters, and branch offices, Fiberlink supplies policy-based access and security products that combine IPSec VPNs, encryption, integrated distributed virus protection, firewalls, and intrusion detection on a single network or on multiple networks. Products include VPNs for remote users who connect via the Internet, as well as site-to-site VPNs to connect branch offices to the main campus. For broadband-based remote users, Fiberlink also supplies a VPN/firewall combination. Fiberlink products use existing RADIUS or LDAP infrastructure to provide a single point of remote user authentication.
FORTRESS TECHNOLOGIES
Fortress Technologies offers security products for 802.11 and other wireless networks. The AirFortress family of wireless LAN security gateways uses security at the data link layer and a three-factor authentication approach to provide protection at network, device, and user levels. Because the product encrypts data at this layer, it guards against network data exploits and the associated denial of service attacks.
In August 2003, the company announced a partnership with AirDefense, in which AirDefense’s wireless LAN monitoring platform would be used in conjunction with Fortress Technologies’ authentication and encryption.
NETMOTION WIRELESS
NetMotion Mobility mobile server software securely maintains application sessions as users roam across networks. It provides secure standards-based authentication, authorization, and encryption to Windows-based PDA and laptop users on private or public networks over high- or low-bandwidth connections. Users can authenticate with standard logon procedures, using Active Directory, NTLMv2, Kerberos, RADIUS, or PKI. Encryption policies can be set on a global, group, or per-user basis, using AES, Twofish, 3DES, or DES encryption. NetMotion Mobility is compatible with most common VPNs, including PPTP, L2TP/IPSec, IPSec, Nortel, and Cisco. It also allows IPSec tunnels to automatically roam with Windows 2000 and Windows XP devices and provides single sign-on compatibility with Cisco LEAP.
PERFIGO
Perfigo provides software-based WLAN security and management products. SecureSmart blends hardened WLAN security with comprehensive user and device management into a single centrally deployed system for managing and securing WLANs. The server appliance is a gatekeeper between the wireless network and the rest of the network, permitting access to authenticated and authorized traffic only, while protecting data with IPSec-plus-3DES encryption. Optional client software for laptops automates authentication, network/access-point detection, encryption setup, and rogue reporting. The company’s SmartManager software manages Perfigo appliances and clients across the enterprise.
POINTSEC MOBILE TECHNOLOGIES
PointSec provides automatic encryption and mandatory access-control software for Windows-based desktops and laptops and for Pocket PC and Palm PDAs. Distributable over the network, PointSec provides boot-level protection on devices.
RED-M
Red-M’s enterprise software provides management and security for both 802.11 WLAN and Bluetooth networks and can scale up to networks containing tens of thousands of users and thousands of access points. Modules provide authentication, intrusion detection, vulnerability assessment, and policy management services.
REEFEDGE
ReefEdge, which provides a range of wireless servers, edge controllers, and routers, also provides several security products for 802.11 networks as part of its Wireless Services Fabric suite. One such product is AirMonitor, which detects access-point failures and unauthorized changes to the WLAN configuration. AirMonitor can also detect the presence of unauthorized access points. It provides around-the-clock monitoring and can alert administrators to any wireless activity requiring immediate attention. The Fabric suite provides intrusion detection, authentication, and dynamic IPSec encryption. The authentication component permits single sign-on across multiple networks (wired and wireless) and integrates with RADIUS, NT Domain Server, and Active Directory enterprise authentication servers.
SYMBOL TECHNOLOGIES
Best known for its wireless handheld scanners and inventory/delivery management systems, Symbol Technologies also supplies MobiusGuard, a suite of wireless networking security mechanisms that provide security for the components of 802.11 wireless mobile networks.
XCELLENET
XcelleNet provides security management tools for Palm, Research in Motion, Symbian, and Windows handheld devices. Its Afaria software includes automatic data backup, delivery of antivirus updates, and enforcement of other corporate security processes. For example, if a device is lost or stolen, it can be locked down to avoid unauthorized access to corporate systems. Afaria’s intelligent monitoring alerts users if their system passwords or security time-out settings have been disabled. Afaria can be managed over the Web as a .NET console.
■ Threat and Vulnerability Management
Technologies in the TVM category include those used for compliance testing, vulnerability scanning, operations availability analysis, log activity analysis, rogue technology discovery, malicious program identification, intrusion detection, security infrastructure implementation, security remediation, and incident response. Security information management technologies for intelligence analysis, standards and policy management, asset classification, event correlation, and reporting also fall under the TVM umbrella.
ARCHER TECHNOLOGIES
The Archer SmartSuite Framework forms the foundation of a suite of targeted security management applications. Archer’s Policy Management product facilitates the creation, dissemination, and tracking of network security policies. The Threat Management product provides vulnerability scanning and remediation based on known threat signatures. The Asset Management module aids administrators in locating and cataloging devices on their networks. The Risk Management component helps identify appropriate countermeasures based on the value of a given asset to the enterprise. The Incident Management product aids log and incident data analysis. Additionally, customers can build or modify their own applications using the SmartSuite Framework without knowledge of database systems integration.
ARCSIGHT
ArcSight’s ArcSight software monitors sources of network security information—including firewalls, intrusion detection systems, and other relevant sources—and consolidates all network-wide alarms and alerts into a single relational database. The ArcSight correlation system combines the severity of potential threats and attacks with the value and vulnerability of business processes and assets to calculate and communicate the risk of a security event. A central console provides a graphical display that can be used for monitoring alerts, coordinating incident response, defining rules and reports, or assigning responsibilities and roles to a new user. Multiple consoles can be deployed, and a browser-based console increases administrative flexibility. Reporting functions in ArcSight include prepackaged reports, which can be modified and customized; a dashboard, which displays as many as 10 reports on the console; and the ability to identify anomalous network activity. ArcSight has a distributed architecture to facilitate extension of the system.
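The correlation arithmetic described above can be sketched with invented 0-to-1 scales: the risk of an event grows with the threat’s severity, the target asset’s value, and the asset’s vulnerability to that class of attack. The scoring function and sample events below are illustrative assumptions, not ArcSight’s actual model.

```python
# A minimal sketch of risk-based event ranking: combine threat severity with
# the target's value and exposure so events can be ordered on one console.
def risk_score(severity: float, asset_value: float, vulnerability: float) -> float:
    """All inputs are normalized to 0..1; the product stays in the same range."""
    return severity * asset_value * vulnerability

events = [
    {"name": "port scan on test box", "severity": 0.3, "value": 0.1, "vuln": 0.9},
    {"name": "exploit vs. payroll DB", "severity": 0.8, "value": 1.0, "vuln": 0.7},
]
# Highest-risk events first: the payroll exploit outranks the noisy scan.
for e in sorted(events, key=lambda e: -risk_score(e["severity"], e["value"], e["vuln"])):
    print(f"{risk_score(e['severity'], e['value'], e['vuln']):.2f}  {e['name']}")
```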
ATOMZ
For companies that host their Websites and applications with Atomz, the company provides VPN Solution. This software combines dedicated Cisco network and application VPN hardware at Atomz data centers with authentication and encryption mechanisms to protect sensitive data as it leaves the boundaries of the corporate firewall. Between the customer’s firewall and the servers that host the Atomz application software, sensitive data is either physically isolated or protected by Triple DES encryption and authentication mechanisms. Atomz VPN Solution can be integrated into a customer’s existing VPN infrastructure, using additional X.509 security certificates.
AUTHENTICA
Authentica furnishes content security for both messaging and document sharing. The SafeRoute secure messaging product protects enterprise e-mail and files shared internally and across company boundaries. It combines strong content security—via authentication, encryption, and data integrity checking—with a set of rights-management features. SafeRoute provides point-to-point security and continuous information protection after delivery; it requires no client software for recipients and no changes to an enterprise’s e-mail workflow. SafeRoute can work with third-party content-filtering software and multiple secure e-mail applications, and can enforce e-mail retention policies wherever e-mail is stored.
Two other products provide secure document sharing: PageRecall for documents and NetRecall for Web content. Both use Authentica’s Active Rights Management (ARM) technology to protect valuable or sensitive documents and Web files both during delivery and after delivery. ARM allows the enterprise to control who can see information, and whether the viewer can print, copy, or select text; prevent information from being forwarded; recall or abort information, even after it is accessed; and track what recipients do with information (such as read or print it) after they download it.
AUTHENTIUM
Authentium’s TotalCommand is a cross-platform patch discovery and distribution utility. The software detects network vulnerabilities by scanning for patch-related security holes and provides a report of any potential security problems. TotalCommand downloads patches from vendors; the automatic application discovery and hardware detection feature can be set to recommend only patches that are applicable, required, and recommended by vendors. The software’s Scheduled Rollout wizard allows administrators to deploy patches automatically or to test them first. Patches can be deployed remotely, and TotalCommand also supports an auto-caching option and custom parameters for patch rollouts. A software management feature tracks patch and software discovery, including date and patch version. TotalCommand includes antivirus software to protect against viruses and worms.
BIGFIX
BigFix produces a family of products that find and fix network vulnerabilities. BigFix’s Patch Manager 3.0 identifies machines that require security patches and mass-deploys the patches to those computers. Patch Manager monitors the network to ensure patches remain intact and automatically redeploys them when necessary. BigFix’s SMS Manager works with Microsoft’s SMS to monitor all managed nodes on the network and ensure that they are continuously maintained and operating optimally. BigFix also produces products for small business and individual computer users as well as for original equipment manufacturer (OEM) suppliers.
BINDVIEW
BindView’s products are designed to assist enterprises with policy compliance, vulnerability management, and directory administration and migration. The bv-Control vulnerability management products let administrators audit, secure, and manage operating systems, applications, and directories. The software suite also provides closed-loop problem identification and remediation to find and eliminate security vulnerabilities. Directory administration and migration products, including bv-Admin, help IT administrators manage directories and version migrations and centrally automate and control Microsoft Active Directory operations. Administrators can provision user profiles and automatically notify approving managers to activate adds, changes, and deletes to the system. Permissions, or tasks, can be logically grouped by job function; this delegation model can also be used for local groups, services, and server resources.
CITADEL SECURITY SOFTWARE
The company’s Hercules product is an automated vulnerability remediation tool that helps organizations keep pace with vulnerabilities in their infrastructure. Organizations define remediation policies for the Hercules tool to follow. Then, based upon infrastructure information provided by other security tools, such as vulnerability scanners, Hercules monitors security intelligence data to identify relevant vulnerability information. The system then automatically assesses the organization’s environment, applies required patches, and makes configuration changes. The Hercules console also enables administrators to directly manage remediation processes, as well as review current asset vulnerabilities and receive customized reports.
CLEARSWIFT
Clearswift provides several systems to manage policies concerning content access and the handling of sensitive data traffic.
EnterpriseSuite is a set of software for Windows and UNIX systems meant to set and enforce content access and delivery policies. It includes a policy-management component, secure messaging component, and content filtering services for internal and external e-mail. The company also supplies a Windows-only e-mail filter.
CS Bastion II is a messaging firewall that allows exchange of e-mail between networks of differing security levels or conflicting security policies. Thus, where a security policy might otherwise preclude directly connecting networks, CS Bastion II permits controlled and accountable flow of messaging traffic. CS Bastion II operates as a standalone system, providing a bidirectional messaging firewall for both X.400 and SMTP/MIME e-mail traffic.
Clearswift combines the assured network separation of its CS Bastion II with the content-security and -management features of its EnterpriseSuite software, to create the CS DeepSecure boundary-protection product. This product includes S/MIME signature and encryption, as well as access to an X.500 or LDAP directory to obtain certificates and certificate revocation lists. CS DeepSecure uses Trusted Solaris Compartmented Mode Workstation labeling to identify and keep data of different security levels and categories.
E-SECURITY
To help enterprises centrally monitor and manage information from security software and hardware throughout their organization, e-Security offers a three-module line of monitoring and analysis tools, which the company calls security event management. At the heart is the Sentinel component, which provides a central console to monitor heterogeneous data from a variety of security clients. The Wizard component lets IT create software agents that access, filter, report, and prioritize information from security clients, automating some of the analysis and alert handling. The Advisor module maps apparent intrusion attempts to known vulnerabilities and helps assess how attacks are trying to exploit those vulnerabilities. The suite also includes correlation features, to better detect intrusion and vulnerability patterns, and reporting tools.
FISHNET SECURITY
FishNet provides the FireMon firewall policy revision control and auditing product for Check Point VPN-1, FireWall-1, and Provider-1 hardware, as well as for Nokia’s firewall hardware. The company also provides security training, consulting for security assessment, and professional services for implementing security solutions.
GUARDEDNET
GuardedNet’s neuSecure is a security event-management software product that correlates event data files from disparate machines, such as firewalls, intrusion detection systems, computer systems, and routers, and automatically analyzes this data to uncover legitimate threats to the enterprise. The neuSecure product allows security analysts to prioritize their investigations and focus on the essential task of responding to threats as they are occurring.
GUIDANCE SOFTWARE
Guidance Software is a provider of digital forensics and incident response solutions. Its EnCase product line assists network administrators in confirming, documenting, and containing security incidents with a variety of tools designed to increase the efficiency of the evidence-gathering process. In addition, EnCase’s case management tools help to assure that the evidence gathered and the response initiated comply with statutory and regulatory requirements. EnCase comes in two versions: the Enterprise Edition, which is a complete, integrated incident discovery, analysis, and response solution; and the Forensic Edition, which provides only the evidence-gathering and investigative tools.
HARRIS
Harris provides both intrusion detection and vulnerability assessment. Harris’s STAT Neutralizer provides behavior-based intrusion detection for real-time defense against malicious code (like Trojan horses and e-mail viruses), hackers, intrusion attempts, and human error. According to Harris, by monitoring access behavior, its system can detect intrusion attempts that rule-based systems cannot recognize. The company also supplies a server edition for Microsoft IIS Web servers. For vulnerability assessment, STAT Scanner Professional Edition performs a security analysis of Windows, Sun Solaris, HP-UX, and Red Hat and Mandrake Linux resources, as well as of Cisco routers and HP printers. STAT Scanner also gives administrators the information required to update configurations and provides a mechanism to implement the corrective action.
IDEFENSE
iDefense’s iAlert provides security intelligence to organizations. iDefense analysts develop dozens of intelligence reports every day. iAlert Daily is a customized e-mail message sent to iAlert subscribers, who can choose the days (and the time) they would like to receive reports, the topics on which they would like to receive information, and the format: HTML-based e-mail, plaintext messages, or PDF. iAlert Web provides access to iDefense’s latest intelligence and full search capabilities on a database of more than 15,000 intelligence reports. iAlert Flash sends immediate, time-sensitive intelligence on a possible cyberthreat, such as a spreading virus or a hacker attack, and possible actions to mitigate the threat. These notifications can be delivered by e-mail, to a wireless device, or by telephone. Other iAlert services include iAlert Advisory, a notice sent when iDefense analysts determine that a vulnerability or malicious code appears to have the potential to become a major cyber threat; iAlert Focus, in-depth reports designed for managers; and white papers. iDefense’s other products are iAware, iPower, and iMonitor.
INTELLITACTICS
Intellitactics provides two enterprise security management products: Network Security Manager (NSM) and the NSM Advanced Analytics module.
Network Security Manager is a threat management platform that captures data from security devices and information sources throughout the enterprise. The software then normalizes and translates event signatures from disparate sources into consistent descriptions. The event correlation engine exposes the type of threat, the location of the threat, and how it will affect the enterprise. NSM includes preloaded event correlation rules and the capability to define enterprise-specific rules. Enterprise zoning and asset classification capabilities help prioritize threats.
Administrators can identify attack strategies through the real-time display of events. More than 75 Web-based reports facilitate analysis of security activity in the enterprise. NSM has a distributed architecture to enable scalability, and it has role-based access controls for increased protection.
NSM Advanced Analytics, an add-on module to Network Security Manager, transforms forensic and correlated security data from NSM into graphs and charts that reveal anomalies and trends. The module allows administrators to view their security infrastructure over a period of time, and to set baselines that assist the identification of unusual patterns. This visual approach helps administrators see relationships between groups of data that might not be noticed in text-only presentations.
INTERNET SECURITY SYSTEMS
Internet Security Systems (ISS) supplies hardware and software tools to detect and prevent unauthorized and malicious access to corporate intranets and Web systems. The SiteProtector application allows consolidated management of ISS’s various tools. Those tools include the Proventia automated threat-prevention appliance; several RealSecure software detectors optimized for a variety of network configurations (including gigabit segments, multisegment application platforms, connections across network segments, and servers); and a RealSecure detector for the desktop that blocks unauthorized applications and traffic (including peer-to-peer-generated traffic). ISS also provides an alert system for network administrators and a third-party module that permits SiteProtector to manage firewalls from companies such as Cisco and Check Point. In addition, ISS provides vulnerability tools to detect possible weaknesses in the intranet or in Internet connections.
IOMART
iomart’s NetIntelligence is a suite of network security and content management tools that help organizations block and filter Web access, manage content, detect security threats, and monitor employee Web, e-mail, and application usage. NetIntelligence’s IT Asset Management module can send real-time alerts if software or hardware is removed or installed, monitor license usage, and provide up-to-date inventory reports. NetIntelligence’s e-mail filter scans the content of an e-mail and its attachment, analyzes them using NetIntelligence’s proprietary database, and identifies viruses, spam, hacking, other threatening applications, and inappropriate images and content. Organizations can define filtering rules and apply them to e-mail domains, groups, or individuals.
LANCOPE
Lancope’s StealthWatch is a behavior-based IDS that monitors and categorizes network traffic to create security intelligence at the network and host levels. StealthWatch monitors and profiles network servers, workstations, and devices in real time, enabling administrators to establish baselines of typical activity. By correlating suspicious activity across the network and detecting real-time deviations from established profiles and security policies, StealthWatch identifies, prioritizes, and contains malicious network and host behavior. StealthWatch also maintains an archived database of network flow logs and creates graphical reports of network activity. This information can be used to support remediation, response, and forensic analysis.
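Baseline-and-deviation detection of this kind can be sketched in a few lines. The sketch below profiles a single invented metric (bytes per minute for one host) and flags large excursions; StealthWatch profiles many more attributes and correlates them across the network.

```python
# A minimal sketch of behavior-based anomaly detection: learn a host's
# typical traffic level, then flag observations far above that norm.
import statistics

def build_baseline(history: list) -> tuple:
    """Summarize a host's typical traffic as mean and standard deviation."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(observed: float, mean: float, stdev: float, k: float = 3.0) -> bool:
    """Flag traffic more than k standard deviations above the host's norm."""
    return observed > mean + k * stdev

history = [1200, 1100, 1300, 1250, 1150]  # bytes/min seen during profiling
mean, stdev = build_baseline(history)
print(is_anomalous(1280, mean, stdev))    # False: within the profile
print(is_anomalous(250000, mean, stdev))  # True: possible worm or exfiltration
```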
MESSAGELABS
MessageLabs is a provider of business e-mail security and management services that include antivirus, antispam, porn filtering, and content control services. MessageLabs’ antivirus service reroutes inbound and outbound e-mail through MessageLabs’ control towers, where it is scanned before being passed on to its final destination. Skeptic, MessageLabs’ rules-based and heuristic scanner, can detect known and unknown viruses. MessageLabs’ antispam service uses a combination of Skeptic technology and customer-configurable blacklists and whitelists to identify and reroute spam. MessageLabs’ porn filtering service uses composition analysis to identify pornographic images in e-mail entering and leaving an organization. MessageLabs’ content control service, expected to be available in late 2003, identifies confidential, malicious, or inappropriate content according to enterprise-defined settings. All these services allow enterprises to define how intercepted e-mail should be handled.
NCIRCLE
The company’s IP360 vulnerability management appliance is made up of two components, VnE Manager and Device Profiler. VnE Manager is an appliance that serves as a central data repository and management platform, while the Device Profiler appliance is a network scanner for IP-based devices. The scanner assesses devices for operating system, application, and services configuration and identifies vulnerabilities and policy violations. The device can be remotely managed.
NETFORENSICS
Security information management software from netForensics collects, analyzes, and correlates security device information across the enterprise. The components in netForensics can be distributed on many servers, facilitating scalability.
First, software agents gather security event records from infrastructure components, map and consolidate the events into netForensics Alarm IDs, and transform the data into XML format. During this process, information such as event ID, source, source port ID, target, and target port ID is reformatted into an XML record called the Alarm ID record. Next, the netForensics event correlation engine uses rules-based correlation to separate false positive security alarms from potentially significant security incidents. Then the engine uses statistical correlation, which relies on event categorization and threat scoring, to determine the threat potential of security-based anomalies. The netForensics Master Engine aggregates and correlates events across multiple engines.
Finally, netForensics displays correlated results on a centralized, real-time console. This console is part of the netForensics SIM Desktop, which provides a single point of access to the software and can be customized for operators, analysts, and managers. A dashboard view shows a real-time, enterprise-level view of security trends. The netForensics SilentRunner Analyzer option lets administrators create a 2-D topographic view of all security devices and events over time. SIM Reports provide both risk and threat trend analysis.
netForensics also has capabilities to formulate an overall risk score for each asset within the enterprise by combining threats, vulnerabilities, and enterprise-defined value. A Risk Assessment Report provides the details behind each asset and its associated risk.
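The agents’ normalization step described above can be sketched as a field-mapping exercise. In the Python sketch below, the two vendor log formats and their field names are invented; only the target vocabulary (event ID, source, source port, target, target port) comes from the description of the Alarm ID record.

```python
# A minimal sketch of event normalization: rename vendor-specific log fields
# onto one common vocabulary and serialize the result as an XML alarm record.
import xml.etree.ElementTree as ET

FIELD_MAPS = {  # hypothetical per-source translations into the common vocabulary
    "firewall_x": {"evt": "event_id", "src": "source", "sport": "source_port",
                   "dst": "target", "dport": "target_port"},
    "ids_y": {"sig_id": "event_id", "attacker": "source", "a_port": "source_port",
              "victim": "target", "v_port": "target_port"},
}

def to_alarm_xml(source_type: str, raw_event: dict) -> str:
    """Map one raw event's fields onto the common names and emit XML."""
    mapping = FIELD_MAPS[source_type]
    alarm = ET.Element("alarm")
    for raw_key, common_key in mapping.items():
        ET.SubElement(alarm, common_key).text = str(raw_event[raw_key])
    return ET.tostring(alarm, encoding="unicode")

print(to_alarm_xml("firewall_x",
                   {"evt": 4001, "src": "10.0.0.5", "sport": 31337,
                    "dst": "10.0.0.9", "dport": 80}))
```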
NETWORK INTELLIGENCE
Network Intelligence offers a line of logging appliances aimed at identifying and capturing data pertaining to security events. Each device is targeted to suit a different traffic volume. The EX series caters primarily to small and medium-sized businesses, while the HA series is designed for the greater demands of enterprise deployments. At the high end, the LS series is designed around a highly scalable clustered architecture and is suitable for administering even geographically dispersed networks. The entire product line is capable of monitoring a range of devices, including firewalls, VPNs, Web caches, access control systems, IDSs, routers, hubs, and switches.
PATCHLINK
PatchLink’s PatchLink Update monitors and maintains patch compliance throughout an enterprise. The product discovers and deploys missing patches, hot fixes, and service packs. PatchLink Update has vulnerability and assessment capabilities that detect software-related security breaches. New features in version 5.0 include role-based administration, graphical network assessment reporting, and an integrated .NET framework. Policy-based patch management capabilities allow system administrators to enforce security settings and minimum patch baselines according to corporate standards.
POLIVEC
Polivec offers software tools for automating the development, implementation, and enforcement of security policies. The Polivec suite aids in the creation of security policies, then translates those policies into real-world configuration settings for a range of network devices. Once policies have been established, Polivec can continually monitor the network for compliance and automatically enforce the policies based on workflow rules. Additionally, the software can generate reports to help organizations determine whether they are meeting their stated network security goals.
PRICEWATERHOUSECOOPERS
PricewaterhouseCoopers’s Enterprise Security Architecture System (ESAS) is designed to formalize the knowledge management of an organization’s security information. ESAS enables an organization to create an intranet-based Web portal that provides a centralized repository for security policies and technical control information. Security professionals can use the portal to dispatch data and updates to users, and the portal also gives auditors access to confirm that controls are on track.
Q1 LABS
Q1 Labs provides monitoring and analysis tools to assist administrators in identifying rogue network traffic, including unauthorized access, peer-to-peer file sharing, rogue servers, worms, and DoS attacks. Its QVision product aggregates data from various sources, such as networking equipment, applications, IDSs, firewalls, and antivirus software. It then presents that information using a variety of consolidated views, each designed to highlight various anomalous behaviors. By profiling typical usage patterns for each network and comparing current traffic to those patterns, QVision can even identify irregularities that do not match known behavior signatures.
QUALYS
Qualys provides security audit services delivered via the Web. Its QualysGuard offering is a comprehensive vulnerability scanner that probes a company’s network from elsewhere on the Internet, as an attacker would. Scans can be configured and launched using a Web-based interface. Weaknesses are identified and remedies proposed based on Qualys’s extensive vulnerability signature database. Results can be correlated with IDS data to reduce false positives. While the base QualysGuard product is intended only for Internet-facing systems, when combined with the QualysGuard Intranet Scanner appliance, the same tools can be used to scan systems behind firewalls.
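The probing idea behind such a scanner can be sketched with a basic TCP connect scan. The sketch below only discovers open ports on a host you are authorized to test; a real scanner such as QualysGuard goes on to fingerprint services and match them against its vulnerability signature database.

```python
# A minimal sketch of network probing: check which TCP ports accept a
# connection on a host you have permission to scan.
import socket

def open_ports(host: str, ports: list, timeout: float = 0.5) -> list:
    """Return the subset of ports that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means connected
                found.append(port)
    return found

if __name__ == "__main__":
    # Only scan systems you own or have written permission to test.
    print(open_ports("127.0.0.1", [22, 80, 443, 8080]))
```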
SANDSTORM ENTERPRISES
Sandstorm Enterprises provides a variety of network forensics and intelligence tools. Its LANWatch and NetIntercept products allow monitoring and analysis of traditional network traffic, either at the protocol or application level. A particular specialty for Sandstorm, however, is modem security. Its PhoneSweep software is a professional “war dialing” application that can be used to scan telephone lines across the enterprise for unauthorized or unsecured modems. Conversely, its Sandtrap product can be used to monitor modems and phone lines for hostile war dialing activity.
SECUNIA
Secunia provides IT security information services. Secunia’s Vulnerability Tracking Service sends alerts and advisories about vulnerabilities in an organization’s network. Secunia’s Security Manager sends to the IT manager alerts that also include the status of devices, advisories by device, and a prioritized to-do list. In addition to the information provided in the Vulnerability Tracking Service and the Security Manager, the Enterprise Security Manager gives IT executives an overview of their IT managers’ performance in responding to current security issues. All of these services can send alerts by e-mail or text messaging, include an assessment of the threat level, offer information about patches and workarounds, and provide weekly summaries.
SECURE DECISIONS
SecureScope, from Secure Decisions, is a reporting solution for security event analysis. Rather than presenting raw security data gathered from logs, IDSs, and network analysis tools as lengthy tables or lists, SecureScope consolidates the information and displays it as 3-D graphical visualizations. The software can integrate with any SQL database. Additionally, Secure Decisions offers software training, security implementation consulting, and analysis services to help customers gain the most benefit from the reporting and visualization process.
SECUREINFO
SecureInfo delivers software that simplifies the reporting and management of vulnerabilities across enterprise networks on an asset-by-asset basis while automatically identifying, tracking, and correcting vulnerability-related IT security weaknesses in real time. The company also provides tools to automate network certification and accreditation as well as to automate regulatory compliance as part of risk management. SecureInfo’s Enterprise Vulnerability Management tools let IT assess each asset on the network—including users’ individual applications—and determine whether all appropriate security patches are applied. The Enterprise Vulnerability Remediation tools let IT automatically correct any such deficiencies. SecureInfo provides similar tools to government agencies, and it furnishes professional security services—from assessment through deployment—both to enterprises and to government agencies.
SECURIFY
SecurVantage is a network compliance-monitoring product that companies can use to enforce their security policies. The product continuously monitors network traffic, validating it against the standards specified in an organization’s security policy, so that anomalies can be detected. Companies create, analyze, and apply their policies in the SecurVantage Studio module, then use the SecurVantage Monitor to oversee and evaluate network traffic. SecurVantage Enterprise is a Web-based console that can be used to monitor multiple domains. The company’s reporting product creates customized reports including information about network topology, asset vulnerabilities, and policy violations.
SHAVLIK TECHNOLOGIES
Shavlik Technologies’ HFNetChkPro patch management tool uses drag-and-drop functionality to let administrators define which groups will be scanned, by what criteria and when, and how patches will be deployed. The product’s PatchPush Tracker feature validates that patches have been applied successfully. HFNetChkPro also provides third-party threat analysis, Microsoft severity information, and annotation capabilities, as well as the ability to manage patches by criticality, view detailed information about each patch, and validate file versions and checksums. HFNetChkLT is a free version of HFNetChkPro. HFNetChk.exe is a free command-line tool that enables administrators to scan the network and check the patch status of Windows NT 4.0, Windows 2000, and Windows XP machines. EnterpriseInspector is a network security assessment tool.
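Checksum validation of patched files can be sketched as a comparison against a manifest of expected digests. The manifest and file path below are hypothetical; HFNetChkPro’s own validation logic is more involved.

```python
# A minimal sketch of checksum validation: recompute each file's SHA-256
# digest and report files that no longer match the recorded manifest.
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Hash a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest: dict) -> list:
    """Return the files whose current digest differs from the manifest."""
    return [name for name, expected in manifest.items()
            if not Path(name).exists() or file_digest(Path(name)) != expected]

# Hypothetical manifest produced when the patch was deployed.
manifest = {"C:/Windows/System32/patched.dll": "0" * 64}
print(verify_manifest(manifest))  # lists files that fail validation
```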
SOLSOFT
Solsoft furnishes policy-management products for network security. Solsoft NP automates the configuration and deployment of security rules on multivendor routers, switches, firewalls, and VPNs, using a single management framework. Solsoft NP also permits geographically dispersed groups of administrators to collaboratively define, deploy, audit, and maintain common policies. Products operating with Solsoft NP include those from Cisco Systems, Check Point, Evidian, NetScreen, Nokia, Nortel Networks, and Symantec. Solsoft also furnishes professional services for security-policy design and implementation.
SOPHOS
Sophos produces several antivirus software products. Sophos Anti-Virus protects servers, desktops, and laptops. Sophos’s MailMonitor examines e-mail traffic passing through an e-mail server to identify worms and viruses. Sophos’s PureMessage identifies spam according to enterprise-defined identification and handling policies, checks e-mail traffic through the e-mail server to identify worms and viruses, and helps enforce an enterprise’s e-mail policies at the gateway. Sophos’s Enterprise Manager is an administration tool that helps network administrators install, update, and monitor Sophos software across a network. SAV Interface allows third parties to integrate their applications and services with the Sophos virus detection and disinfection engine.
SOURCEFIRE
Sourcefire, whose principals created the open-source Snort intrusion detection software, provides an integrated security-monitoring infrastructure for identifying network threats and protecting against them. Components include network sensors for network monitoring, threat detection, and threat prevention; a management console for integrated high-performance data management and threat response; and real-time network awareness for preemptive passive network discovery and analysis. The efforts of Sourcefire’s Vulnerability Research Team (VRT) to discover, assess, and respond to the latest trends in hacking activity, intrusion attempts, and vulnerabilities are supported by the open-source Snort community.
SURFCONTROL
SurfControl furnishes message-filtering services for Web access, e-mail, instant messaging, and peer-to-peer connections to block viruses, spam, and other questionable content, using enterprise-defined content policies. The Windows-based software can also block specified Web connections to discourage employees from doing non-work-related tasks at the office. SurfControl also provides Web filtering tools for Linux, Check Point, Novell, and Nokia IPSO environments.
TREND MICRO
Trend Micro specializes in virus protection and content-security products and services with centralized management capabilities. Its Control Manager consolidates systemwide security status information to help administrators take consistent action at different levels of the network throughout the outbreak life cycle. ScanMail eManager provides real-time content filtering, spam blocking, and reporting to help companies monitor and control the type of information that enters or leaves the network, with optional modules for Microsoft Exchange, Lotus Notes, and OpenMail. The InterScan Messaging Security Suite provides virus protection, flexible policy-based content filtering, and management tools to help monitor and control SMTP and POP3 traffic at the messaging gateway. The company also has tools to block URL access by employees; support Microsoft ISA servers and the ICAP Web-caching standard; and block malicious applets, ActiveX, JavaScript, and VBScript code at the Internet gateway. Finally, ServerProtect provides comprehensive antivirus scanning for servers, detecting and removing viruses from files and compressed files in real time, with versions for Microsoft, Novell, EMC, Linux, and SharePoint servers, network appliances, and storage systems.
TRIPWIRE
Tripwire supplies intrusion detection for both network devices and servers. Tripwire for Network Devices assures the integrity and security of network devices by providing real-time notification of changes to configuration files and enabling automatic restoration. Tripwire for Servers reveals undesired changes that can affect the security and stability of servers across the enterprise. The Tripwire Manager console enables centralized management of Tripwire for Servers installations across the enterprise, using a graphical user interface that verifies system state and alerts users of changes to servers. Tripwire also furnishes an open-source version of its intrusion detection system for Linux systems.
TRUSECURE
TruSecure offers a range of products and services for security intelligence analysis. Its IntelliShield Alert Manager is a Web-based service that provides up-to-date security advisories based on customers’ technology profiles. The Risk Commander product provides a unified dashboard for security data gathered from disparate sources across the enterprise, including IDS and vulnerability-scanning systems. Through its policy compliance tracking features, Risk Commander can help organizations prove and document progress toward policy, standards, and regulatory requirements.
TUMBLEWEED COMMUNICATIONS
Tumbleweed provides three types of software to protect enterprise communications. The first is network protection software (which the company calls an e-mail firewall) that blocks spam, viruses, and hacking attempts and can integrate with LDAP directories and single sign-on services. The second is e-mail compliance software that monitors messages, enforces communications policies, identifies violations, and concretely demonstrates compliance with industry, government, and company regulations, including GLB, SEC, and NASD regulations for the financial industry; HIPAA privacy and security regulations for the medical and insurance industries; and corporate policies. The third is secure-messaging software that creates trusted relationships and protects intellectual capital with encryption, authentication, tracking, and intelligent message routing for confidential online interaction, such as customer service, secure partner networks, and portals.
WEBSECURE TECHNOLOGIES
WebSecure provides a range of content-filtering products. MailMarshal is an e-mail content-filtering gateway that enables organizations to enforce e-mail use policies; the company's MailMarshal Secure is a version that adds encryption capabilities. Through partnerships with other security vendors, the company also sells products for vulnerability management, VPN and firewall, and intrusion detection. WebSecure also provides security management consulting and services.
WEBSENSE
The five-component Websense Enterprise suite analyzes, manages, and reports on employee Internet access, network activity, software application use, and bandwidth use. Components related to content security include Websense Enterprise, in which IT establishes content-access policies, and Client Application Manager, which identifies insecure applications, such as instant messaging and peer-to-peer clients, and restricts or blocks their use according to rules set in Websense Enterprise.

■ General IDC estimates that revenues from the worldwide security technology market reached $20 billion in 2002, up from nearly $17 billion in 2001, a growth rate of 20 percent. Even though IT markets declined in general through 2002, IT security remained a top priority for many enterprises. IDC divides the overall market into three categories:

• Software—Secure content management (SCM); firewall and Virtual Private Network (VPN) software; encryption software; security 3A (authentication, authorization, and administration) software; and intrusion detection software.

• Hardware—Biometrics; tokens and smart cards; firewall and VPN appliances; cryptographic accelerators; and intrusion detection system (IDS) appliances.

• Services—Consulting, implementation, management, and training are typical service offerings in this category.

Services accounted for approximately 48 percent of revenues in 2002, software about 34 percent, and hardware 18 percent.
IDC predicts that security hardware and services will continue to grow at nearly 20 percent annually through 2006, with software growing at 12 percent. The hardware forecast reflects strong demand for appliances that combine intrusion detection, content security, and policy management functionality with firewall and VPN capability. These appliances are intended as single boxes that fulfill all primary IT security requirements for small businesses. In the services segment, IDC predicts continued strong demand for enterprise risk-management-strategy advisory services to mitigate the risks involved in including suppliers, customers, and partners in a single trusted environment.

■ Security Software Gartner Dataquest divides the market for security software into infrastructure (firewall and VPN, content filtering, intrusion detection, encryption, and security management) and administration (identity and access management, PKI suites, and vulnerability assessment). The company classifies appliances as part of its security hardware category. In January 2003, Gartner Dataquest forecast that worldwide revenues for security software would rise from $3.3 billion in 2002 to nearly $4.8 billion in 2006, a compound annual growth rate (CAGR) of 10 percent. It predicts that demand for intrusion detection systems will outpace antivirus, encryption, and other security software segments during the period. More important, Gartner Dataquest expects each of these categories to be absorbed into security software suites, network and systems management software packages, or the networking equipment market.
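For reference, the CAGR figures cited throughout this report follow the standard compound-growth formula. As a worked check against the Gartner Dataquest numbers above (growth from $3.3 billion in 2002 to $4.8 billion in 2006, so n = 4 years):

    \[
    \mathrm{CAGR} = \left(\frac{V_{\text{end}}}{V_{\text{start}}}\right)^{1/n} - 1
    = \left(\frac{4.8}{3.3}\right)^{1/4} - 1 \approx 0.098 \approx 10\%
    \]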

Overall, Gartner Dataquest expects regional shares of security software sales to change little through 2006. Because security continues to be a prominent concern for North American enterprises, sales growth for this region should keep pace with that of other regions through 2006. (See Figure 49.) Europe is likely to continue to lag behind North America in the adoption rate of identity management and content filtering products because of heightened privacy concerns in Europe, as well as Europe’s generally lower level of enterprise computer usage. European spending on enterprise computing hardware accounted for 30 percent of the $45.7 billion worldwide total in 2002, compared with 41 percent for North America, according to Gartner Dataquest.
Other world regions are comparatively less mature than Europe and North America in terms of information security adoption. Gartner Dataquest estimates that Asia-Pacific, for example, spent only $575 million on IT security services in 2002, compared with $2.8 billion in Western Europe and $4.3 billion in North America.

A revised Gartner Dataquest report on the worldwide security software market, released in July 2003, reviewed vendor market shares for calendar year 2002 and adjusted its 2002 revenue total to $3.5 billion. The leading security software vendor, Symantec, gained four percentage points from 2001 to 2002, garnering a 19 percent share. Trend Micro (ranked fourth) also gained share during the year, rising to 7 percent in 2002 from 6 percent in 2001. These vendors took advantage of continuing strong demand for antivirus products.
During the same period, Check Point (ranked fifth) lost three percentage points, and Network Associates (ranked second) lost one percentage point. Gartner Dataquest noted that Network Associates, unlike Symantec and Trend Micro, had failed to capitalize on demand for antivirus products, and that Check Point had confronted increasing competition from firewall appliance vendors. IBM (ranked third) remained level at 10 percent.

IDENTITY MANAGEMENT

IDC predicts that the worldwide market for identity management software will rise from $593 million in 2002 to $4.0 billion in 2007, a CAGR of 46 percent. This category is a subsegment of IDC’s Security 3A (authentication, authorization, and administration) software market, which itself is predicted to rise from $2.5 billion in 2002 to $5.1 billion in 2007, a CAGR of 15 percent. IDC refers to access management as provisioning and classifies it, under the 3A software umbrella, as part of security administration.
Over time, IDC expects the identity management category to become a superset of 3A, which would also encompass directory services and hardware authentication techniques. IDC identified the following other major trends in identity management and 3A software:

• With the maturing of Web services, identity management will evolve to address the new authentication and authorization requirements that arise, for example, from Web services applications constructed from components supplied by multiple providers.

• Medium-size and large enterprises will tend to shy away from 3A products, because system integration costs run five to seven times higher than license fees for 3A products.

The five top-ranked 3A software vendors in 2002 were Computer Associates, IBM, VeriSign, RSA Security, and Hewlett-Packard (HP).

INTRUSION DETECTION AND VULNERABILITY ASSESSMENT

IDC defines IDSs as products designed to constantly monitor a device or a network for malicious activity. Vulnerability assessment products, according to IDC’s definition, perform assessments to determine the configuration, structure, and attributes of a given device on the network. According to the research company, the intrusion detection and vulnerability assessment software market rose to $723 million in 2002, from $620 million in 2001, an increase of nearly 17 percent. The market is forecast to reach $1.5 billion in 2007, a CAGR of more than 17 percent. IDSs are forecast to disappear as standalone products by 2008. Additionally, the line between host- and network-based IDSs will continue to blur, and these products will merge with technologies such as antivirus and firewalls.
The market for these products is crowded, with vendors roughly divided according to whether they deliver IDS as software or as an appliance. Software accounted for more than 90 percent of revenues in 2001.
Of the more than 30 companies offering such products, the top IDS software vendors in 2002 were Internet Security Systems (ISS) (25 percent), Symantec (20 percent), and BindView (8 percent). The top appliance vendors by their 2001 (latest reported) market share were Cisco Systems (64 percent), Nokia (13 percent), and Enterasys Networks (5 percent).
Non-IDS-specific security market leaders, such as Check Point and NetScreen, are integrating technologies such as IDS and antivirus into their products. IT security management products are appearing from companies such as ArcSight, Computer Associates, and IBM Tivoli, and managed security monitoring tools are available from Activis, ISS, Symantec, and Vigilinx.
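To make IDC’s IDS definition concrete, the following is a minimal sketch of the signature-matching approach at the heart of many network IDSs. The rule names, patterns, and sample packet are hypothetical illustrations, not any vendor’s actual rule format:

    # Minimal sketch of signature-based intrusion detection.
    # Signatures and the sample packet are hypothetical examples.
    SIGNATURES = {
        "web-cgi probe": b"/cgi-bin/phf",
        "directory traversal": b"../..",
    }

    def inspect(payload):
        """Return the names of any signatures found in a packet payload."""
        return [name for name, pattern in SIGNATURES.items() if pattern in payload]

    packet = b"GET /cgi-bin/phf?Qalias=x HTTP/1.0"
    for alert in inspect(packet):
        print("ALERT:", alert)  # an IDS alerts and logs; it does not block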

INTRUSION PREVENTION

First-generation IDSs were known for generating too many false positives—alerts about intrusions that did not occur. At the end of 2002, intrusion prevention systems (IPSs) began to appear. IPS products use rules, models, and correlation engines to make it possible to prevent some attacks from executing. Because IPSs are new, market statistics on these products are not available. Vendors such as Cisco Systems, ForeScout Technologies, Network Associates, and Vsecure Technologies have announced IPS products as potential replacements for their IDSs. Firewall vendors are adding IPS features to their products, as are switch and load-balancing companies such as TopLayer Networks.
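The practical difference from an IDS is enforcement posture: an IPS sits inline and can drop offending traffic rather than only raise an alert. A minimal sketch, reusing the hypothetical inspect() matcher from the IDS example above:

    def ips_forward(payload):
        """Inline enforcement: drop the packet if any signature matches."""
        if inspect(payload):   # same hypothetical matcher as the IDS sketch
            return None        # packet is dropped before it reaches the host
        return payload         # clean traffic is forwarded unchanged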

SECURE CONTENT MANAGEMENT

IDC divides secure content management (SCM) into three segments: Web filtering; e-mail scanning and messaging security; and antivirus. The SCM market reflects corporate customers’ need for policy-based Internet management tools.
IDC expects the market for secure content management to grow from an estimated $2.7 billion in 2002 to $6.4 billion by 2007, a CAGR of nearly 19 percent.
Hundreds of vendors compete in the SCM market, and competitive positions change rapidly. Those leading the field in 2002 included the following:

• Symantec held a 31 percent share of the worldwide market. Symantec’s products appear across all three sectors of the secure content management market.

• Network Associates/McAfee held 20 percent of the secure content management market share. Its products combine antivirus and SCM software and hardware.

• Trend Micro, with 12 percent of the market in 2002, focuses on securing e-mail.

• Computer Associates, with 4 percent, offers integrated SCM and antivirus products.

• SurfControl, with 2 percent of the market, derives the majority of its content security revenue from the Web filtering and e-mail filtering segments.

• Websense, with a 2 percent share of the worldwide market in 2002, derives all its content security revenue from the Web filtering market.

• Sophos, which captured 2 percent of the market in 2002 as a result of a 47 percent increase in SCM revenues from 2001 to 2002, has antivirus and e-mail filtering products.

• Panda, which had less than 2 percent of the market in 2002, sells a range of antivirus products.

Web Filtering

As defined by IDC, Web filtering products identify, categorize, and manage Web content involving non-business-related topics, such as pornography, racist or hateful materials, profanity, sports, and travel. Enterprises use these products to better allocate network resources, improve employee productivity, and assure adherence to company policy.
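A minimal sketch of how such category-based filtering can work, assuming a hypothetical category database and policy (real products ship large, vendor-maintained URL databases):

    # Hypothetical URL-category database and company policy.
    URL_CATEGORIES = {
        "sports.example.com": "sports",
        "trips.example.net": "travel",
    }
    BLOCKED_CATEGORIES = {"sports", "travel"}  # set by company policy

    def allow(host):
        """Permit a request unless its category is blocked by policy."""
        return URL_CATEGORIES.get(host, "uncategorized") not in BLOCKED_CATEGORIES

    print(allow("sports.example.com"))    # False: blocked by policy
    print(allow("intranet.example.com"))  # True: uncategorized traffic passes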

According to IDC, Web filtering accounted for a 10 percent share of the SCM market in 2002. IDC forecasts that the Web filtering market will grow from $270 million in 2002 to $893 million in 2007, a CAGR of 27 percent.

Message Security

E-mail scanning software is used to check inbound and outbound messages for confidential information, excessive file size, and prohibited content. E-mail content scanning searches for keywords, oversize data packets, and disallowed file types. Enterprise concern with employee productivity, as well as compliance with privacy regulations, ensures the viability of this market.
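A minimal sketch of the checks just described, with hypothetical policy keywords, file types, and size limits:

    # Hypothetical e-mail content policy: keywords, file types, size limit.
    BLOCKED_KEYWORDS = {"confidential", "internal only"}
    BLOCKED_EXTENSIONS = {".exe", ".vbs", ".scr"}
    MAX_ATTACHMENT_BYTES = 10 * 1024 * 1024  # 10 MB policy limit

    def violations(body, attachments):
        """Return policy violations; attachments are (name, size) pairs."""
        found = [f"keyword: {kw}" for kw in BLOCKED_KEYWORDS if kw in body.lower()]
        for name, size in attachments:
            if any(name.lower().endswith(ext) for ext in BLOCKED_EXTENSIONS):
                found.append(f"disallowed file type: {name}")
            if size > MAX_ATTACHMENT_BYTES:
                found.append(f"oversize attachment: {name}")
        return found

    print(violations("This draft is INTERNAL ONLY.", [("setup.exe", 2048)]))
    # ['keyword: internal only', 'disallowed file type: setup.exe']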
E-mail scanning accounted for revenues of approximately $236 million in 2002. IDC forecasts that the e-mail scanning market will grow to approximately $1.1 billion in 2007, a 36 percent CAGR. The need to protect confidential information has gained increasing attention and is a primary reason e-mail scanning products are in high demand.

Antivirus

The antivirus (AV) software market includes software used to detect and remove viruses and malicious files (often referred to as malware), as well as to restore damaged computer files. Antivirus software is a sizable segment of the overall security software market, reaching $1.2 billion in 2002, according to Gartner Dataquest. A relatively mature segment, the antivirus market is showing signs of vendor consolidation. Nevertheless, analysts such as META Group forecast expansion of the market into new areas, such as Web gateways and SAN storage, with growth of 15 to 20 percent annually over the next three to five years.
A META Group survey performed in early 2003 indicated that Trend Micro, Symantec, and Network Associates led the enterprise antivirus software market in 2002, together holding more than 80 percent of the segment. Computer Associates and Sophos, a private company, are working to challenge the top three. Representative companies with less than 5 percent market share include Aladdin, F-Secure, MessageLabs, and Panda.

Microsoft entered the antivirus market in June 2003 through its purchase of the intellectual property and technology assets of GeCAD Software, a Romanian antivirus vendor. Gartner views Microsoft’s entry into the antivirus market as an important event, one that will increase the pressure on established enterprise antivirus vendors to innovate. These vendors must also address numerous new kinds of threats, including those arising in peer-to-peer and Web services environments.
Because viruses continue to be one of the greatest security challenges for enterprises, companies will seek broader third-party solutions, according to Gartner. Indeed, Gartner predicts that by 2005, vendors not traditionally associated with antivirus products will supply most e-mail antivirus functionality. Gartner also predicts that, by 2008, most enterprises will use the embedded policy controls in Microsoft’s next-generation Windows operating system (code-named Longhorn), rather than discrete desktop software, for behavior blocking and detection of malicious code.

OTHER SOFTWARE

IDC places miscellaneous security software types too small to be included elsewhere into an Other category, which includes encryption toolkits, file encryption products, database security, storage security, standalone VPN and VPN clients, wireless security, Web services security, and secure operating systems. IDC predicts this category will rise from $249 million in 2002 to $571 million in 2007, a CAGR of 18 percent.

■ Security Hardware Information security hardware primarily consists of appliances, biometrics, smart cards, and tokens, which are covered in this section. Card or token readers, often used in conjunction with these devices, are excluded from this discussion.

APPLIANCES

IDC defines a security appliance as a combination of hardware, software, and networking technologies whose primary function is to perform specific or multiple security functions. IDC divides these appliances into three functional categories: firewall/VPN, network intrusion detection system (NIDS), and other (which includes a variety of functions, such as antivirus, secure content management, and access control).

IDC expects the market for security appliances to be generally strong, rising from an estimated $1.8 billion in 2002 to $4.7 billion in 2006, a CAGR of 27 percent. (See Figure 52.) Reasons IDC noted for the rapid growth in appliances include increased broadband adoption, convenience, ease of installation and maintenance, and reduced overall complexity. Gartner points out that the inclusion of network processors in a number of these appliances is significant because it enables deep packet inspection, a key reason that demand for NetScreen’s products, for example, has grown.

Vendors of security appliances have positioned them for use in conjunction with wireless technologies and Web services. According to IDC, the top firewall/VPN security appliance vendors by revenue market share in 2001 (the most recent available information) were Cisco Systems (38 percent), Nokia (21 percent), SonicWall (8 percent), NetScreen (7 percent), and WatchGuard (5 percent). Notable emerging appliance vendors include Zyxel, Global Technology Associates, and Cyzentech.

BIOMETRICS

Biometrics technologies use natural, unique features of an individual to verify identity. Biometrics methods in use include fingerprint or hand scanning, handwriting recognition, iris scanning, voice authentication, keystroke scanning, and facial recognition technologies.
International Biometric Group (IBG) reported that the worldwide market for biometrics technology reached $601 million in 2002 and forecasts that this market will rise to $4 billion by 2007, a CAGR of 46 percent. (See Figure 53 for IBG’s breakdown of this market.)

SMART CARDS

A smart card is a credit card–size piece of plastic that, instead of a magnetic stripe, contains at least one embedded chip. The smart card industry often classifies its cards by chip type: they can be either memory cards (which store data) or microprocessor cards (which can both store and process data). Most microprocessor cards are intended for user authentication purposes.
All smart cards have an inherent security component, but only a small percentage of these are used in standalone security applications. According to Datamonitor, the vast majority of smart cards are used in telecom and financial services applications. These cards primarily consist of the following types:

• Disposable prepaid phone cards, which are memory-based and are declining in use.

• Subscriber identity module (SIM) cards for mobile phone authentication, secure roaming, and value-added services, which are microprocessor-based.

• Payment cards based on the Europay, MasterCard, and Visa (EMV) standard, which are also microprocessor-based.

Datamonitor has noted adoption of EMV cards in the United Kingdom and predicts strong growth for EMV cards in Asia-Pacific through 2006. The company predicts that revenues from financial services cards generally will rise from $494 million in 2002 to $808 million in 2006, a CAGR of 13 percent. Services that use payment cards prefer smart card technology over cards with magnetic stripes because smart cards are more difficult to exploit for fraud.

Datamonitor predicts that smart card demand in several of the smaller vertical markets will grow faster than in the more established telecom and financial services verticals. In fact, cards used in standalone security/access applications will grow most rapidly, rising from 14 million units in 2002 to 36 million in 2006, a CAGR of 27 percent. Other verticals with strong smart card demand include universities (with unit demand from 2002 through 2006 predicted to rise at a CAGR of 24 percent), government/health care (24 percent CAGR for the same period), and pay television (22 percent CAGR).

Datamonitor reported the five top smart card vendors worldwide in 2002 were Gemplus (29 percent share of revenue), Schlumberger (29 percent), Giesecke & Devrient (10 percent), Oberthur (9 percent), and Orga (5 percent).

Strong two- or three-factor authentication, which combines something possessed (a smart card, for example), something known (a PIN, for instance), and/or a personal characteristic (such as a fingerprint or voice print), is an important market driver for security smart cards. Datamonitor asserts that this kind of authentication has generally required expensive readers. The firm predicts that, during the forecast period, high infrastructure costs will slow adoption of strong authentication techniques using smart cards.

TOKENS

Hardware tokens, according to IDC, either encrypt a value to provide a one-time password or use a challenge-and-response authentication technique in conjunction with an authentication server. Tokens generally take the form of cards or key fobs, but other form factors are emerging.
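A minimal sketch of the challenge-and-response technique just described, assuming a shared per-token secret and an HMAC construction; real tokens implement vendor-specific schemes in tamper-resistant hardware:

    import hmac, hashlib, os

    SECRET = b"per-token shared secret"  # provisioned in both token and server

    def token_response(challenge):
        """What the token computes from the server's random challenge."""
        return hmac.new(SECRET, challenge, hashlib.sha1).hexdigest()[:8]

    # Server side: issue a fresh random challenge, then verify the response.
    challenge = os.urandom(8)
    response = token_response(challenge)            # computed on the token
    expected = token_response(challenge)            # recomputed on the server
    print(hmac.compare_digest(response, expected))  # True: user authenticated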
IDC divides the market for hardware tokens (excluding smart cards) into traditional tokens, USB tokens, software license authentication tokens (SLATs), and other (miscellaneous) forms of hardware authentication. USB tokens, which users can plug into USB ports on their PCs, do not require additional readers—a clear advantage. However, they are less secure than traditional one-time password tokens. SLATs are USB or parallel-port keys that authenticate users for the purposes of enforcing software licensing agreements. Other kinds of tokens include wireless varieties.
For 2001, the most recent full year IDC reported, the worldwide market for traditional tokens reached $175 million; USB tokens approached $8 million; and other hardware authentication (excluding smart cards) reached $7 million. IDC predicts a CAGR of 92 percent for USB tokens from 2001 through 2006 and a CAGR of 91 percent for other forms of hardware authentication. The remaining categories are predicted to grow more modestly during the same period, at 9 percent (traditional tokens) and 8 percent (SLATs).