Draft

 Intrusion Prevention and Detection Roadmap for Large Enterprises in 2005-2010

Softpanorama whitepaper, version 1.02

Contents

Executive Summary and Proposed IDS/IPS Roadmap

Log-based analyzers as a central part of IDS troika

Host-based integrity checkers

Network IDS


Executive Summary and Proposed IDS/IPS Roadmap

  1. Large companies should move away from the goal of intrusion detection toward the goal of monitoring policy compliance for selected activities. Additional investment should be strategically aligned with technologies designed to develop and enforce policies that prevent intrusions from occurring in the first place. The latter approach has proved more efficient and cost-effective, especially when policy monitoring is augmented by policy enforcement.
     
  2. The Intrusion Detection and Prevention architecture should be balanced, with equal or similar investments in three major types of IDS:
    • Log-based IDS. Log-based IDS should be based on custom Perl scripts and communicate directly with Tivoli Enterprise Console (TEC).
    • Integrity-based IDS. Tripwire should be used uniformly on all critical servers and Tripwire output should be integrated into Tivoli. Use of the Tripwire console should be discontinued, as it has failed to demonstrate its usefulness and duplicates the functions of TEC. Open source products should be investigated in the future, as Perl-based integrity checkers are more flexible and powerful than C-based products like Tripwire.
    • Network IDS. large company might benefit from replacing outsourced ISS services with a Tivoli-integrated alert system monitored by large company server room operators, and from wider use of firewalls as a policy enforcement tool with some network intrusion detection functions.
       
  3. Fundamentally, any large company should move away from generic signature-based IDS sensors and switch to using existing network ISS sensors as specialized monitoring devices with the explicit, limited function of monitoring appliances that cannot be integrated into the Tivoli framework. While some companies might try to preserve the existing IDS infrastructure for political reasons, they can benefit from retooling it for a more limited, narrow function of monitoring specific protocols and segments. Generic network IDS functions should probably migrate into the firewall.
     
    • The last two years have shown that if firewalls, servers and appliances are hardened and correctly configured, attacks against the DMZ become very difficult, as the attacker needs to overcome a hardened, multi-layered defense. Successful attacks are now performed at the application layer, which is above and beyond the capabilities of standalone network IDS sensors with their inherent packet orientation. In many cases low-level network protocol attacks on servers behind a firewall and load balancer are too complex to make any sense and are performed only by script kiddies who do not understand what they are doing. Even in the case of a successful attack, further exploitation is difficult or impossible due to traffic restrictions.
    • One of the few functions that still has marginal utility is finding statistical properties of some types of traffic (scanning) and the IP ranges of attacking hosts. Actually, for this purpose any TCP traffic recording application (tcpdump) can be used, though a free IDS like Snort might be a little more convenient; see the sketch after this list. In no way does this function require a proprietary and/or complex IDS.
    • Despite a shrinking time gap between vulnerabilities and exploits, generic signature-matching IDS has become obsolete. Actually it always was, but now it is safe to say so :-) The current industry consensus is that generic intrusion-detection systems fail to provide a return on investment sufficient to justify their maintenance costs. Still, large company should not count on IDS to die as Gartner predicted in a controversial report last year [Gartner2003]. With a proper custom set of signatures, IDS sensors can still perform useful specialized functions. What is really dying is universal ISS signatures and generic signature databases that currently include thousands of signatures. They produce too many false positives to be useful. This "false positives" problem proved to be fundamental and cannot be resolved by tuning or other cosmetic means. In a generic IDS any tuning is wiped out by constant upgrades and represents an unsustainable strategy.
       
  4. For servers that can be integrated into the Tivoli framework, the emphasis in the IDS area should be on host-based monitoring and log analysis. With solid system and application monitoring and log analysis we can see real problems with the system and are in a better position to uncover real security events, not an endless stream of false positives. In the near future IDS will take a back-seat role, and real-time system monitoring and log analysis will come to the forefront. What is still needed is host-based and kernel-level enforcement to make sure policies cannot be tampered with; some pieces of this technology are visible in AIX 5.3 and Solaris 10, as well as Windows Server 2003, but in general it is probably a couple of years away from maturity.
     
  5. Security advances push intrusion detection deeper into the host-based domain, essentially making most of the functionality of old generic network IDS sensors obsolete and leaving the remaining part to be integrated with firewalls. IPS from major vendors should be avoided, as it represents not a new technology but a sugar-coating of existing IDS technology and an attempt to add firewall functions to the existing framework. IDS and IPS use essentially the same detection techniques, and both are plagued by the challenge of accuracy. When you are looking at a network of dozens of important hosts, including e-commerce hosts, the last thing you want is shooting first and asking questions later. We judge that this superficial, "cosmetic-style" adaptation of an existing product in order to lure customers is doomed to failure. As Marcus Ranum, chief security officer at Tenable Network Security, noted:

     “They are the same thing. IPS is just a signature IDS with firewall block rules that sits inline. Big whoop. The ‘convergence’—if there is any—is between firewalls and IDS.”
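Returning to the one residual use of network sensors mentioned in item 3 above (scan statistics), here is a minimal Perl sketch of how recorded traffic can be distilled without any proprietary IDS. It assumes classic tcpdump text output piped in (for example, tcpdump -nn -r capture.dump | perl scan-stats.pl); the regular expression and the reporting threshold are illustrative assumptions, not a definitive implementation.

    #!/usr/bin/perl
    # Distill port-scan statistics from recorded tcpdump text output:
    # count the distinct destination ports probed by each source address.
    use strict;
    use warnings;

    my %ports_seen;    # source IP => { destination port => 1 }
    while (<>) {
        # crude match for "src.port > dst.port:" in tcpdump output
        next unless /(\d+\.\d+\.\d+\.\d+)\.\d+ > \d+\.\d+\.\d+\.\d+\.(\d+):/;
        $ports_seen{$1}{$2} = 1;
    }

    for my $src (sort keys %ports_seen) {
        my $n = keys %{ $ports_seen{$src} };    # distinct ports probed
        print "$src probed $n distinct ports\n" if $n > 20;    # arbitrary cutoff
    }

A dozen lines like these, run nightly over the traffic recorder's output, deliver the attacking IP ranges mentioned in item 3 at essentially no cost.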

The current configuration of critical DMZ systems is characterized by the absence of central logging and of systematic log monitoring. Currently we implicitly have to trust everyone who has the root password. There are other significant problems, discussed below, that diminish the potential return on investment of log-analyzing applications of any kind.

Log-based analyzers as a central part of IDS troika

Log analysis can be considered the most fundamental part of the intrusion detection troika. Log files are important building blocks of any host-based IDS because they form an audit trail, making it easier to track down intermittent problems or attacks.

Log analysis is also closely related to monitoring system performance and can be integrated into the Tivoli monitoring system, a highly complex enterprise-level monitoring framework that accepts special custom components called adapters. Standard Tivoli log-file adapters, however, do not provide advanced logic capabilities. A customer needs to write a set of custom scripts on the monitored system; they can be written in Korn shell, Perl or another scripting language. A sample script for the ITSO_ProcessNum Resource Model is provided in the Tivoli documentation (see Example 10-1). The custom script Resource Model checks the standard output from the custom script; therefore, the custom script must print its result to standard output.
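As an illustration (not the Example 10-1 script itself, which is in the Tivoli documentation), here is a minimal Perl sketch of such a check. It counts the instances of a given process and prints the number to standard output, which is all the custom script Resource Model needs; the default process name and the use of ps -ef are assumptions about the monitored platform.

    #!/usr/bin/perl
    # Count instances of a process and report the number on stdout,
    # where the Tivoli custom script Resource Model picks it up.
    use strict;
    use warnings;

    my $name = shift || 'sshd';    # process to count (example default)
    # grep in scalar context yields the number of matching ps lines;
    # crude: it may also count this script's own command line
    my $count = grep { /\b\Q$name\E\b/ } `ps -ef`;
    print "$count\n";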

In the ideal case, using log files one may be able to piece together enough information to discover the source of a break-in and the scope of the damage involved much more efficiently. This data can also be correlated with the data of a network IDS like NikSun.

Currently large company does not have the capability to examine logs systematically. They are examined only occasionally, usually in the course of after-the-fact reactive problem solving. And that is not accidental. There are several problems with log analysis. One is that logs are often big. They are also often dirty: they contain a lot of noise records due to misconfigured services, configuration errors, etc. All in all, creating a log-monitoring infrastructure requires substantial investment.

Cleaning noise out of logs and standardizing configurations to make them more useful for event analysis represents a significant project and requires a qualitatively better level of configuration and maintenance of the operating systems and applications in question, as well as more structured policies regulating sysadmin activity on the DMZ. It requires significant effort from both NTI/C and NTI/R.
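A first pass at noise removal can nevertheless be surprisingly simple. The following Perl sketch drops syslog lines that match known-benign patterns so that what remains is worth a human look; the patterns shown are invented examples, since every site accumulates its own list over time.

    #!/usr/bin/perl
    # First-pass syslog noise filter: suppress known-benign records,
    # pass everything else through for analysis.
    use strict;
    use warnings;

    my @noise = (
        qr/last message repeated \d+ times/,
        qr/sendmail\[\d+\]: .* stat=Sent/,
        qr/CRON\[\d+\]: \(root\) CMD/,
    );

    LINE: while (<>) {
        for my $pat (@noise) {
            next LINE if $_ =~ $pat;    # drop recognized noise
        }
        print;                          # the rest deserves a look
    }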

One of the most important parts of log analysis is the existence of clear, current and unambiguous policies. The following additional policies can make log analysis a little simpler and the logs themselves more consistent:

The success of a log analysis IDS is highly dependent on the quality and the level of enforcement of those policies.

Host-based integrity checkers

Integrity checkers are very useful for finding trojan programs and backdoors like rootkits. Theoretically they are also useful for maintenance, but in reality this goal is pretty difficult to achieve. Perl-written (or Python-written) integrity checkers are more flexible and thus have an edge over C-written tools like Tripwire.

The most popular integrity checker for Unix, Tripwire, never managed to outgrow its origins as a student project [Kim&Spafford1994]. Our experience suggests that the free version of Tripwire can realistically be used only in a limited way, for static servers like appliances. The commercial version is a little more flexible, but not by much. Moreover, the introduction of a central console for all Tripwire instances created an additional security risk. We should discontinue use of the Tripwire console, as TEC can provide similar or better capabilities.

The fairy tales that Tripwire can detect or prevent host intrusions as a standalone application are not credible. Theoretically it can, but the tool itself is so inflexible that without a good log analyzer it largely defeats its purpose. On Linux, in most cases you might have more success with RPM-based verification (rpm -Va) than with Tripwire.

Still, it makes sense to install Tripwire on critical servers: with all its faults, it is still the most credible commercial product. At the same time, open source products should be investigated in the future, as Perl-based integrity checkers are more flexible and powerful than C-based products like Tripwire; a sketch of the core idea follows.
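To show how little code the core of such a checker requires, here is a minimal Perl sketch built on the standard Digest::MD5 module. The baseline location and the watched file list are invented examples, and a real tool would also record ownership, permissions and other attributes, not just content digests.

    #!/usr/bin/perl
    # Minimal integrity checker: compare MD5 digests of critical files
    # against a stored baseline; -b (re)writes the baseline.
    use strict;
    use warnings;
    use Digest::MD5;

    my $baseline_file = '/var/adm/integrity.baseline';    # assumed location
    my @watched = qw(/etc/passwd /etc/inetd.conf /usr/bin/login);  # example list

    sub digest_of {
        my ($path) = @_;
        open my $fh, '<', $path or return 'UNREADABLE';
        binmode $fh;
        return Digest::MD5->new->addfile($fh)->hexdigest;
    }

    if (@ARGV and $ARGV[0] eq '-b') {
        open my $out, '>', $baseline_file or die "cannot write baseline: $!";
        print $out "$_ ", digest_of($_), "\n" for @watched;
        close $out;
        exit 0;
    }

    my %baseline;
    open my $in, '<', $baseline_file or die "no baseline, run with -b first: $!";
    while (<$in>) {
        my ($path, $md5) = split;
        $baseline{$path} = $md5;
    }

    for my $path (@watched) {
        my $now = digest_of($path);
        my $was = $baseline{$path} || 'MISSING';
        print "CHANGED: $path ($was -> $now)\n" if $now ne $was;
    }

Unlike a compiled tool, every aspect of such a script (attributes checked, report format, integration with TEC) can be adjusted in minutes, which is exactly the flexibility argument made above.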

We should avoid creating an all-encompassing rulebase for each server: this is a proven road to nowhere. Older versions of Tripwire were strictly file-oriented, and the problem of listing all the files and directories quickly made the ruleset unmaintainable. Newer versions (commercial version 4.0 and later) permit specifying all files in a directory (better late than never ;-)

Still, the best policy in using Tripwire is to limit yourself to a few critical system binaries (for example, those replaced by rootkits) plus several critical configuration files. Actually, control of configuration files is more important, and here Tripwire, while weak, can at least provide some return on investment. If you are thinking about using Tripwire for tracking changes, please think again: it is possible by writing custom scripts, but there are better tools for the purpose. One problem is that if you do not compare with the baseline, you compare with a set of attributes. Also, if you control both a directory and a file in that directory, Tripwire will complain twice for each change. The free version of Tripwire has an option, -loosedir, which prevents Tripwire from complaining about directory modification time updates and can filter out some of this noise; in the commercial version it became a configuration option.

 

Network IDS

Security advances push intrusion detection deeper into the host-based domain, essentially making most of the functionality of old generic network IDS sensors obsolete and leaving the remaining part to be integrated with firewalls. Drowning in bloated signature databases and alerts of little or no value in locating attacks, security specialists are fed up with signature-based IDS systems.

At least one research company proved brave enough to declare that "the king is naked." A Gartner Inc. report [Gartner2003] called intrusion-detection systems a failed technology that is not cost-effective. As the Gartner report correctly stated, IDS is dead:

Gartner Group, the well-known analyst firm, caused something of a stir recently with its pronouncement that Intrusion Detection Systems (IDS) and their Intrusion Prevention Systems (IPS) offspring were a market failure -- and in fact will be obsolete by the middle of the decade.

The Stamford, Conn.-based firm declared that IDS and IPS don't deliver the extra layer of security that was promised, and that many IDS implementations have been ineffective.

Gartner clearly has picked up on a massive source of end-user industry pain. IDS have long been derided as difficult to manage, creating many false positives and negatives, which is one of the reasons that security event management solutions evolved -- to make IDS both more manageable and more effective.

Some parts of ISS technology will definitely survive. Moreover, Gartner's prediction to a certain extent contradicts previous buying trends of organizations. According to the Computer Security Institute / FBI annual Computer Crime and Security Survey, only 43% of organizations bought intrusion-detection systems in 1998. That percentage climbed steadily every year to reach 73% in 2002. Nonetheless, Richard Stiennon, the author of the Gartner report, considers that investments in intrusion-detection systems have already stalled because of all of their shortcomings.

Gartner suggests that "deep packet inspection" will move into firewalls in the coming years. A more realistic strategy is retooling IDS sensors to monitor appliances that cannot be easily integrated into the existing monitoring framework (the Tivoli framework in the case of large company). But what is actually dead is the sales pitch that IDS can protect the company from intrusions. It never did that in the first place and from the beginning served largely as an insurance policy.

Despite a real threat of network exploits and a shrinking time gap between vulnerabilities and exploits, signature-matching IDS has become obsolete. Here is one relevant quote (URL: http://news.zdnet.com/2100-1009_22-997106.html):

Intrusion detection systems are dead, a panel of analysts told the RSA Conference on Monday. The question remains what should replace them, and whether the newly fashionable "intrusion prevention systems" are more than just a change of buzzword.

"IDS is dead," said Vic Wheatman of Gartner Group. "People bought it, installed it and turned it down when they had too many alerts."

Analyst Mike Rasmussen of Giga agreed: "75 percent of IDS installations were failures," he said, blaming a failure to allocate enough resources to weed out the false positives, where the IDS issues a false alarm. But intrusion prevention--where systems are designed to respond automatically to prevent an attack having any effect -- is not necessarily the panacea it is made out to be, he warned: "In many cases, it's the old vendors abusing the term."

large company should retool existing ISS sensors as specialized network monitors for appliances that cannot be integrated into the Tivoli framework and as generic traffic analyzers (NikSun sensors). large company should not count on IDS to die as Gartner predicted in a controversial report last year. Instead, effort is needed to integrate the IDS component into the larger Tivoli framework, which should be oriented more toward policy enforcement than intrusion detection and should rely primarily on host monitoring and log analysis as more reliable and cost-effective technologies.

In the near term, this relegates currently installed IDS to forensics and after-the-fact inspections. But in five years or so, new security technologies could cause the demise of signature-based IDS altogether.

Fundamental Problems with generic IDS sensors

Among the problems associated with IDSs we can mention the following:

A fog of false alerts: the "false positives" problem has proved to be fundamental and cannot be resolved by tuning or other cosmetic means.

Generic IDS sensors have proved prone to streams of false alerts. The essence of the problem stems from IDS over-reliance on signatures. As AV vendors know perfectly well, a signature-based approach is mostly reactive. But that is not suitable for the stated goals of network IDS, so IDS vendors are under constant pressure to make signatures more generic in order to catch modified variants of known threats. Unfortunately this dramatically increases the rate of false positives and, in the case of IPS, would cause legitimate traffic to be blocked. That is not acceptable in a production environment like the one large company has.
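A toy example makes the mechanism clear. The signature and both requests below are invented for illustration; the point is that a generic "grep-style" content rule has no way to tell an exploit attempt from innocent traffic that happens to contain the same string.

    #!/usr/bin/perl
    # Why generic signatures produce false positives: a naive content
    # rule fires on benign traffic as readily as on a real attack.
    use strict;
    use warnings;

    my $signature = qr/cmd\.exe/i;    # typical generic content rule

    my @requests = (
        'GET /scripts/..%255c../winnt/system32/cmd.exe?/c+dir',  # attack-style
        'GET /kb/how-to-use-cmd.exe-safely.html',                # benign
    );

    for my $req (@requests) {
        print "ALERT: $req\n" if $req =~ $signature;    # fires on both
    }

Making the rule stricter misses the next variant of the attack; making it looser floods the console. That trade-off is the fundamental problem, not an implementation detail.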

That's why many Wall Street companies are now all too happy to rid themselves of their signature-based systems altogether [Bradley2004]:

"Every time we got a report off an IDS, it was pulse-raising. There'd be two $100,000-a-year Cisco Certified Network Engineers plowing through event logs trying to figure out what's going on," says Chris Van Waters, senior director of IT for QuadraMed, a Westin, Va., healthcare technology company with 1,000 employees. "Meanwhile, we've still got the network degraded, traffic's going through the roof, and we don't know where it's coming from."

The problem with IDS and IPS systems is that they assume everything is good until proven bad. Policy monitoring defines what is acceptable, treats anything outside of that as bad, and as such is a more realistic strategy.

In fact, the management and performance drawbacks of IDS proved so notorious that a Gartner Information Security Hype Cycle report published in June 2003 declared the category a market failure [Gartner2003]. Instead, Gartner recommended that organizations hold off investing in IDS and shift resources to vulnerability scanning, server hardening, and newer deep-packet-inspection firewalls, which are more adept than standard firewalls at detecting and stopping application-level attacks.

Trying to survive, some network IDS vendors have started work on eliminating false positives. They resort to various heuristics to determine whether an attack is relevant and are trying to sell "enhanced" technologies like anomaly detection, heuristic traffic analysis, application-level protocol reconstruction and analysis, etc. For example, NFR now sells an operating system fingerprinting module, which uses a proprietary sniffer to determine what applications are running on the network and tunes the signature database accordingly.

Another heuristic is to build a baseline of common traffic patterns at the device level and, for each device, correlate only anomalies relative to that baseline. While being scanned every second of every day does not mean much, it might be useful for customers to see the type and content of packets that fall outside the baseline, the number of such packets per hour, and the corresponding port distributions. If those abnormal packets look, for example, like a specific attack on the HTTP port of a commerce server, there is a higher chance that they are relevant and deserve some action by company personnel. A sketch of this idea follows.
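The following minimal Perl sketch illustrates the baseline idea under invented assumptions: a baseline file holding per-port packet-count means and standard deviations, and the current hour's per-port counts arriving on standard input as "port count" pairs.

    #!/usr/bin/perl
    # Flag per-port packet counts that deviate sharply from a stored
    # baseline; everything inside the baseline is ignored as routine.
    use strict;
    use warnings;

    my %baseline;    # port => [mean, stddev]
    open my $bl, '<', 'baseline.txt' or die "no baseline: $!";
    while (<$bl>) {
        my ($port, $mean, $stddev) = split;
        $baseline{$port} = [$mean, $stddev];
    }
    close $bl;

    while (<STDIN>) {    # current hour: "port count" per line
        my ($port, $count) = split;
        my $b = $baseline{$port};
        if (!$b) {
            print "NEW port $port: $count packets, no baseline\n";
        } elsif ($count > $b->[0] + 3 * $b->[1]) {    # beyond mean + 3 sigma
            print "ANOMALY port $port: $count packets (baseline mean $b->[0])\n";
        }
    }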

But the market has already turned negative toward anything connected with the word IDS, because people are tired of the care and feeding of traditional, signature-based IDS implementations and see them as having a negative price/return ratio. Everybody agrees that it takes an inordinate amount of time to get meaningful IDS data from those systems; hence the investment in IDS software does not pay off. Investment in event correlation with the hosts might help to distill it into a more manageable volume, but this is a very expensive path.

Cost of monitoring false positives

The cost of monitoring false positives can be substantial, both in dollar terms and in its demoralizing effects (the "crying wolf" problem), as aptly summarized by an anonymous security specialist at an electric utility:

Our IDS was a mess, alerting us on absolutely everything. In fact, I can’t even remember a single legitimate alert. We never had the time or manpower to monitor it all.

All network and security analysts currently agree that false alerts are a fundamental problem that cannot be avoided with generic ISS sensors, and it can take up to 10 hours to investigate one false positive. In a diverse set of enterprise applications, a stream of false positives from IDS sensors essentially represents a denial-of-service attack on the security resources of the corporation. IDS architecture has proved appropriate only for detecting a very narrow band of attacks on selected hosts and is too low-level for detecting application-level exploits.

All in all, IDS-based monitoring has proved very costly to companies. According to Gartner, a big company's annual IDS costs are around a hundred thousand dollars. That does not include the cost of on-site personnel involved in analyzing (or, more correctly, distracted by) all those alerts.

IPS is not a successor to IDS

IPS should be considered not so much a technology innovation as IDS vendors' attempt to escape a financial hole, and responsibility for pushing semi-useless systems, by selling a newer and better mousetrap. In essence it represents an attempt to reduce reliance on signatures and avoid the famous flood of false positives that made IDS a dirty word in security. An IPS sits in-line at the network perimeter, scanning incoming traffic for signs of malicious code. Unlike an IDS, it can drop suspect traffic automatically or alert network security staff, who handle it manually. But IPS also falls short of its promises. We are very skeptical of IPS vendors' claims that IPS will ultimately replace IDS altogether. It is the same fundamentally flawed approach, repackaged to close the most gaping holes and sold to unsuspecting or outright naive customers. Some projections are too optimistic:

Infonetics projects a jump from $132.3 million to $425.5 million in sales for inline IDS between 2004 and 2007. Gartner, too, sees IPS sales surpassing IDS sales by the end of 2005.

Generally it is difficult to keep instigating fear for more than three years, so it is reasonable to expect that IDS vendors' sales might plummet to the level of 2002-2003 or worse no matter what they do:

Chart

That means that resources for signature updates and testing may shrink and their quality deteriorate. And that will create additional problems for enterprises that are slow to move to policy checking.

In a dream network intrusion-prevention environment, you would have some device monitoring all of your traffic and detecting or even stopping the bad guys. But that was an illusion, and it is now clear that these devices will never be capable of doing this. IDS might get slightly better with time, but it is such a compromised idea that enterprises would be better off just moving on. IPS is just IDS on steroids: in addition to the old problem of the false-positive flood, you now have a real chance of blocking the wrong traffic and thus damaging your business. It also makes it possible to create attacks that use the IPS as a zombie, feeding it a set of carefully forged packets in order to cut communication with important hosts/networks.

The Technology Hype Cycle and Dawn of Network Monitoring Mania

Security technologies remain a priority for many enterprises. Evaluating the hype and the reality is important for prudent investments and critical for properly protecting the enterprise at a reasonable cost. Like other Internet technologies, security technologies typically develop in five stages:

  1. Slow growth
  2. Exponential growth
  3. Super hype or bubble period
  4. Bubble bust
  5. Realistic assessment and usage of what was sound in it, if any

Gartner calls this the "Hype Cycle" and defines it slightly differently:

A Hype Cycle is a graphic representation of the maturity, adoption and business application of specific technologies.

Since 1995, Gartner has used Hype Cycles to characterize the over-enthusiasm or "hype" and subsequent disappointment that typically happens with the introduction of new technologies (see Understanding Gartner's Hype Cycles for an introduction to the Hype Cycle concepts). Hype Cycles also show how and when technologies move beyond the hype, offer practical benefits and become widely accepted.

  1. "Technology Trigger" The first phase of a Hype Cycle is the "technology trigger" or breakthrough, product launch or other event that generates significant press and interest.

  2. "Peak of Inflated Expectations" In the next phase, a frenzy of publicity typically generates over-enthusiasm and unrealistic expectations. There may be some successful applications of a technology, but there are typically more failures.
  3. "Trough of Disillusionment" Technologies enter the "trough of disillusionment" because they fail to meet expectations and quickly become unfashionable. Consequently, the press usually abandons the topic and the technology.
  4. "Slope of Enlightenment" Although the press may have stopped covering the technology, some businesses continue through the "slope of enlightenment" and experiment to understand the benefits and practical application of the technology.
  5. "Plateau of Productivity" A technology reaches the "plateau of productivity" as the benefits of it become widely demonstrated and accepted. The technology becomes increasingly stable and evolves in second and third generations. The final height of the plateau varies according to whether the technology is broadly applicable or benefits only a niche market.

It is reasonable to assume that IDS technology has entered stage 4 now. At the height of the hype, IDS developers were heroes leading us to a bright, secure future. Not anymore. After the Gartner report, critical papers have littered popular network and computer magazines. The real problem is mainly architectural: most packet-sniffing solutions -- whether IDS or IPS -- do not have access to the full context that is required to make a sound judgment about the level of threat. Most of this context is host-based. That is why a network IDS generally has no idea whether an attack is relevant, and the volume of events it produces tends to hide the dangerous attacks in low-risk and false-positive noise.

Frustrated IT departments are trying to work around network IDS shortcomings by correlating IDS alerts with other security and vulnerability information. But that is easier said than done and requires a pretty open IDS solution, free of a proprietary signature database or limitations on log access on the sensor (which effectively excludes managed IDS solutions). Currently this is better done by writing your own log analysis middleware in scripting languages and gradually integrating it into an enterprise monitoring framework like Tivoli, as the sketch below shows.
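As a sketch of what such middleware can look like, the following Perl fragment passes through only those IDS alerts whose source address also appears in the host's own authentication log, i.e. alerts with host-side corroboration. The alert line format, the auth log path and the address-extraction pattern are all assumptions to be adapted to the site.

    #!/usr/bin/perl
    # Pass through only IDS alerts corroborated by the host's auth log.
    use strict;
    use warnings;

    # Collect source addresses the host itself has seen.
    my %seen_on_host;
    open my $auth, '<', '/var/log/authlog' or die "authlog: $!";  # assumed path
    while (<$auth>) {
        $seen_on_host{$1} = 1 if /from (\d+\.\d+\.\d+\.\d+)/;
    }
    close $auth;

    # IDS alerts on stdin, one "timestamp src_ip signature" line each.
    while (<STDIN>) {
        my (undef, $src) = split;
        print "CORROBORATED: $_" if $src and $seen_on_host{$src};
    }

The corroborated subset is small enough to forward to TEC as ordinary events, which is the gradual integration path suggested above.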

Where IDS completely failed is the distinction between important and trivial: the sensors cried wolf so many times that most security departments simply stopped reacting to alerts, relegating them to the level of background noise. Even if an IDS detects a real attack, the whole idea is so compromised that nobody will care. So the first change you will see in intrusion management this year is the addition of network recording to IDS sensors, which at least can help to recreate the events after the fact.

The average IDS system now has several thousand signatures. Most of them are simple "grep-style" string-matching rules applied to packet headers and payloads. Such a primitive approach has two major problems:

That means that few companies can allocate staff to manage a network IDS intelligently. In most cases these systems are just "circulating air," providing an illusion of security instead of real security.

Also, in a desperate attempt to preserve their shrinking profits, IDS vendors pollute signature databases with completely unrelated stuff like virus and worm detection (which is unrelated to network IDS and represents a higher-level protocol threat). IDS companies jumped into worm/virus detection simply because it is almost the only useful thing they can show to customers. It also allows them to issue constant updates, play on fear, and more easily justify their annual maintenance fees.

Webliography

[Kim&Spafford1994] Gene H. Kim, Eugene H. Spafford. "Experiences with Tripwire: Using Integrity Checkers for Intrusion ..." Purdue Technical Report CSD-TR-94-012, 1994. URL: http://www.cs.virginia.edu/~jones/cs551S/papers/experience_with_tripwire.pdf

[Saddi1993] Allan Saddi. "Yet Another File Integrity Checker." URL: http://philosophysw.com/software/yafic/

[Gartner2003] Richard Stiennon. "Hype Cycle for Information Security, 2003." Gartner Research report, 2003. URL: http://www.gartner.com/pages/story.php.id.8789.s.8.jsp

[Hulme2003] V. Hulme. "Gartner: Intrusion Detection On The Way Out." InformationWeek, June 13, 2003. URL: http://www.informationweek.com/story/showArticle.jhtml?articleID=10300918&fb=20030618_security

[Bradley2004] Tony Bradley. "The Line Between IDS & IPS Solutions Continues To Be Blurred." Processor, November 5, 2004, Vol. 26, Issue 45, page 11. URL: http://www.processor.com/editorial/article.asp?article=articles/P2645/33p45/33p45.asp&guid=

[Radcliff2004] Deborah Radcliff. "The evolution of IDS." Network World, November 8, 2004, pages 44-46. Last accessed November 15, 2004 at URL: http://www.nwfusion.com/research/2004/110804ids.html?page=2

[Kendall2003] Sandy Kendall. "Is Intrusion Detection a Dead-End Technology?" CSO Talk Back. URL:

[Bekker2003] Scott Bekker. "Gartner: Intrusion Detection Systems a Bust." ENT News, June 11, 2003. URL: http://www.entmag.com/news/article.asp?EditorialsID=5844

[Hollows2003] Phil Hollows. "IDS is Dead -- Long Live IDS." eSecurityPlanet.com, June 27, 2003. URL: http://www.esecurityplanet.com/views/article.php/2228631

[Franklin&Wiens2004] Curtis Franklin Jr., Jordan Wiens. "Are your Web apps secure?" InfoWorld, February 6, 2004. URL: http://www.infoworld.com/article/04/02/06/06FEsecureapp_1.html

[Ferrel2003] Keith Ferrell. "Intrusion Detection: Bright Future or Dead End?" TechWeb, June 18, 2003. URL: http://www.techweb.com/tech/security/20030618_security

[Schulze2003] Jan Schulze. "No Unauthorized Access." SAP INFO, August 18, 2003. URL: http://www.sap.info/index.php4?ACTION=noframe&url=http://www.sap.info/public/en/article.php4/Article-206743f3b51321a821/en