Copyright 2004-2005, Dr. Nikolai Bezroukov. This is a copyrighted unpublished manuscript. All rights reserved.
Note: An earlier version of this paper was published in Softpanorama Bulletin Vol 16, No. 04 (December, 2004)
“They are the same thing. IPS is just a signature IDS with firewall block rules that sits inline. Big whoop. The ‘convergence’—if there is any—is between firewalls and IDS.”
The current configuration of critical DMZ systems is characterized by the absence of central logging and of systematic, intelligent log monitoring. Currently we implicitly have to trust everyone who has the root password. The best that can be done is to introduce two-factor authentication (a minimum requirement in any large corporation for administrators with root access). The second step is to direct all root sessions via a special "multiplexor" server, the only host from which root access to the servers is permitted. You can also disable direct root access completely and rely on sudo, but this is a mixed blessing.
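With a modern OpenSSH this "multiplexor" approach can be sketched directly in the sshd configuration; the jump-host address and group name below are hypothetical, and local policy will differ:

```
# /etc/ssh/sshd_config on a DMZ server (sketch; adjust to local policy)
PermitRootLogin no          # no direct root logins by default
AllowGroups wheel           # only members of the admin group may log in

# Permit root only from the dedicated "multiplexor" jump host
# (10.0.0.10 is an illustrative address)
Match Address 10.0.0.10
    PermitRootLogin yes
```

Funneling all root sessions through one host makes two-factor authentication and session logging enforceable at a single point, which is precisely what central log analysis needs.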
Investment in custom, intelligent log analysis software must be substantial; otherwise the effort will not work. Several significant problems make intelligent log analysis (even using adaptive technologies) an expensive undertaking:
One of the most important parts of log analysis is the existence of clear, current, and unambiguous policies. The following additional policies can make log analysis a little simpler and the logs themselves more consistent:
The success of log-analysis-based IDS is highly dependent on the quality and the level of enforcement of those policies.
Integrity checkers are very useful for finding Trojan programs and backdoors such as rootkits. Theoretically they are also useful for maintenance, but in reality this goal is pretty difficult to achieve. Integrity checkers written in Perl (or Python) are more flexible and thus have an edge over C-based tools like Tripwire.
The most popular integrity checker for Unix, Tripwire, never managed to outgrow its origins as a student project. Our experience suggests that the free version of Tripwire can realistically be used only in a limited way, on static servers such as appliances. The commercial version is a little more flexible, but not by much. Moreover, the introduction of a central console for all Tripwire instances created an additional security risk.
The fairy tales that Tripwire can detect or prevent host intrusions as a standalone application are not credible. Theoretically it can, but the tool itself is so inflexible that without a good log analyzer it largely defeats its purpose. On Linux, in most cases you might have more success with RPM-based checking than with Tripwire.
Still, it makes sense to install Tripwire on critical servers: with all its faults, it is still the most credible commercial product. At the same time, open source products should be investigated in the future, as Perl-based integrity checkers are more flexible and powerful than C-based products like Tripwire.
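The core of such a script-based checker fits in a few dozen lines. Here is a minimal Python sketch (the file list, baseline path, and hash choice are illustrative, not taken from any particular product):

```python
#!/usr/bin/env python
# Minimal file integrity checker sketch: hash a list of critical files
# and compare the result against a stored baseline.
import hashlib
import json
import os

def snapshot(paths):
    """Return {path: sha256-of-contents} for every existing file."""
    state = {}
    for p in paths:
        if os.path.isfile(p):
            with open(p, "rb") as f:
                state[p] = hashlib.sha256(f.read()).hexdigest()
    return state

def save_baseline(paths, baseline_file):
    """Record the current state of the monitored files."""
    with open(baseline_file, "w") as f:
        json.dump(snapshot(paths), f)

def check(paths, baseline_file):
    """Compare current state to the baseline; return a list of findings."""
    with open(baseline_file) as f:
        baseline = json.load(f)
    current = snapshot(paths)
    findings = []
    for p in paths:
        old, new = baseline.get(p), current.get(p)
        if old == new:
            continue                       # unchanged (or still absent)
        if old is None:
            findings.append(("added", p))
        elif new is None:
            findings.append(("removed", p))
        else:
            findings.append(("changed", p))
    return findings
```

Because the logic is plain Python, adding per-file policies, exclusions, or reporting into a central log stream is a matter of editing a script rather than fighting a rigid rule language.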
We should avoid creating an all-encompassing rulebase for each server: this is a proven road to nowhere. Older versions of Tripwire were strictly file oriented, and the problem of listing all files and directories quickly made the ruleset unmaintainable. Newer versions (commercial version 4.0 and later) permit specifying all files in a directory (better late than never ;-)
Still, the best policy in using Tripwire is to limit yourself to a few critical system binaries (for example, those targeted by rootkits) plus several critical configuration files. Actually, control of configuration files is more important, and here Tripwire, while weak, can at least provide some return on investment. If you are thinking about using Tripwire for tracking changes, please think again. It is possible by writing custom scripts, but there are better tools for the purpose. One problem is that if you do not compare with the baseline, you compare with a set of attributes. Also, if you monitor both a directory and a file within it, Tripwire will complain twice for each change. In the free version of Tripwire the -loosedir option prevents Tripwire from complaining about directory modification time updates, which filters out some noise; in the commercial version this became a configuration option.
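A deliberately small policy of this kind might look as follows in the open-source Tripwire 2.x policy language (rule names, severities, and the file selection are illustrative):

```
# Sketch of a minimal Tripwire policy: a handful of critical binaries
# and configuration files instead of an all-encompassing rulebase.
(
  rulename = "Critical binaries",
  severity = 100
)
{
  /bin/login           -> $(ReadOnly) ;
  /usr/sbin/sshd       -> $(ReadOnly) ;
}

(
  rulename = "Critical config files",
  severity = 90
)
{
  /etc/passwd          -> $(ReadOnly) ;
  /etc/ssh/sshd_config -> $(ReadOnly) ;
}
```

Keeping the policy this small is what makes the daily report short enough that someone will actually read it.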
Security advances push intrusion detection deeper into the host-based domain, making most of the functionality of the old generic network IDS sensors obsolete and moving the remaining part into firewalls. Drowning in bloated signature databases and alerts of little or no value in locating attacks, security specialists are fed up with signature-based IDS systems.
At least one research company proved brave enough to declare that "the king is naked." A Gartner Inc. report [Gartner2003] called intrusion-detection systems a failed technology that isn't cost-effective. As the Gartner report correctly stated, IDS is dead:
Gartner Group, the well-known analyst firm, caused something of a stir recently with its pronouncement that Intrusion Detection Systems (IDS) and their Intrusion Prevention Systems (IPS) offspring were a market failure -- and in fact will be obsolete by the middle of the decade.
The Stamford, Conn.-based firm declared that IDS and IPS don't deliver the extra layer of security that was promised, and that many IDS implementations have been ineffective.
Gartner clearly has picked up on a massive source of end-user industry pain. IDS have long been derided as difficult to manage, creating many false positives and negatives, which is one of the reasons that security event management solutions evolved -- to make IDS both more manageable and more effective.
Some parts of IDS technology will definitely survive. Moreover, Gartner's prediction to a certain extent contradicts previous buying trends of organizations. According to the Computer Security Institute - FBI annual Computer Crime and Security Survey, only 43% of organizations bought intrusion-detection systems in 1998. That percentage climbed steadily every year to reach 73% in 2002. Nonetheless, Stiennon argues that investments in intrusion-detection systems have already stalled because of all their shortcomings.
Gartner suggests that "deep packet inspection" will move into firewalls in the coming years. A more realistic strategy is retooling IDS sensors to monitor appliances that cannot easily be integrated into the existing monitoring framework (the Tivoli framework in the case of a large company). But what is actually dead is the sales pitch that IDS can protect the company from intrusions. It never did that in the first place and from the beginning served largely as an insurance policy.
Despite a real threat of network exploits and a shrinking time gap between vulnerabilities and exploits, signature-matching IDS has become obsolete. Here is one relevant quote (URL: http://news.zdnet.com/2100-1009_22-997106.html):
Intrusion detection systems are dead, a panel of analysts told the RSA Conference on Monday. The question remains what should replace them, and whether the newly fashionable "intrusion prevention systems" are more than just a change of buzzword.
"IDS is dead," said Vic Wheatman of Gartner Group. "People bought it, installed it and turned it down when they had too many alerts."
Analyst Mike Rasmussen of Giga agreed: "75 percent of IDS installations were failures," he said, blaming a failure to allocate enough resources to weed out the false positives, where the IDS issues a false alarm. But intrusion prevention--where systems are designed to respond automatically to prevent an attack having any effect -- is not necessarily the panacea it is made out to be, he warned: "In many cases, it's the old vendors abusing the term."
Large companies should retool existing IDS sensors as specialized network monitors for appliances that cannot be integrated into the Tivoli framework and as generic traffic analyzers (NikSun sensors). A large company should not count on IDS to die as Gartner predicted in a controversial report last year. Instead, effort is needed to integrate the IDS component into a larger Tivoli framework, oriented more toward policy enforcement than intrusion detection and relying primarily on host monitoring and log analysis as more reliable and cost-effective technologies.
In the near term, this relegates currently installed IDS to forensics and after-the-fact inspections. But in five years or so new security technologies could cause the demise of signature-based IDS altogether.
Among the problems associated with IDSs we can mention the following:
Generic IDS sensors have proved to be prone to streams of false alerts. The essence of the problem stems from over-reliance on signatures. As AV vendors know perfectly well, a signature-based approach is mostly reactive. That is not suitable for the stated goals of network IDS, so IDS vendors are under constant pressure to make signatures more generic in order to catch modified variants of known threats. Unfortunately this dramatically increases the rate of false positives and, in the case of IPS, causes legitimate traffic to be blocked. That is not acceptable in a production environment like a large company's.
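The trade-off is easy to demonstrate: the more generic the pattern, the more benign traffic it matches. A toy Python sketch (both patterns and both "packets" are invented for illustration):

```python
import re

# Toy illustration of the signature-generalization trade-off.
# A narrow signature for one known exploit string, and a "generic"
# variant broadened to catch modified versions of the same attack.
narrow  = re.compile(rb"GET /cgi-bin/exploit\.cgi\?cmd=/bin/sh")
generic = re.compile(rb"GET /cgi-bin/.*\?.*=/bin/")  # broadened pattern

packets = [
    b"GET /cgi-bin/exploit.cgi?cmd=/bin/sh HTTP/1.0",       # real attack
    b"GET /cgi-bin/search.cgi?path=/bin/reports HTTP/1.0",  # benign query
]

def alerts(signature, traffic):
    """Return every packet the signature fires on."""
    return [p for p in traffic if signature.search(p)]

# The narrow rule fires once, on the attack; the broadened rule also
# flags the legitimate request -- a false positive.
```

Multiply this by several thousand signatures and a busy network segment, and the alert flood the text describes follows directly.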
That's why many Wall Street companies are now all too happy to rid themselves of their signature-based systems altogether [Bradley2004]:
"Every time we got a report off an IDS, it was pulse-raising. There'd be two $100,000-a-year Cisco Certified Network Engineers plowing through event logs trying to figure out what's going on," says Chris Van Waters, senior director of IT for QuadraMed, a Westin, Va., healthcare technology company with 1,000 employees. "Meanwhile, we've still got the network degraded, traffic's going through the roof, and we don't know where it's coming from."
The problem with IDS and IPS systems is that they assume everything is good until proven bad. Policy monitoring defines what is acceptable, assumes anything outside of that is bad, and as such is a more realistic strategy.
In fact, the management and performance drawbacks of IDS proved so notorious that a Gartner Information Security Hype Cycle report published in June 2003 declared the category a market failure [Gartner2003]. Instead, Gartner recommended that organizations hold off investing in IDS and shift resources to vulnerability scanning, server hardening, and newer, deep-packet-inspection firewalls, which are more adept than standard firewalls at detecting and stopping application-level attacks.
Trying to survive, some network IDS vendors started working on eliminating false positives. They resort to various heuristics to determine whether an attack is relevant and are trying to sell "enhanced" technologies such as anomaly detection, heuristic traffic analysis, application-level protocol reconstruction and analysis, etc. For example, NFR now sells an operating system fingerprinting module, a technique that uses a proprietary sniffer to determine what applications are running on the network and tunes the signature database accordingly.
Another heuristic is to build a baseline of common traffic patterns at the device level and, for each device, correlate only anomalies relative to the baseline. While being scanned every second of every day does not mean much, it might be useful for customers to see the type and content of packets that fall outside the baseline, the number of such packets per hour, and the corresponding port distribution for those packets. If the abnormal packets look, for example, like a specific attack on the HTTP port of a commerce server, there is a higher chance that they are relevant and deserve some action by company personnel.
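The per-device baseline heuristic can be sketched in a few lines of Python; the 3x threshold, device names, and traffic counts below are all invented for illustration:

```python
from statistics import mean

# Sketch of per-device baselining: learn a typical hourly packet count
# per (device, port), then flag hours that deviate far from the mean.
def build_baseline(history):
    """history: {(device, port): [hourly packet counts]} -> mean per key."""
    return {key: mean(counts) for key, counts in history.items()}

def anomalies(baseline, current, factor=3.0):
    """Return keys whose current hourly count exceeds factor * baseline."""
    return [key for key, count in current.items()
            if count > factor * baseline.get(key, 0.0)]

history = {
    ("web01", 80):  [1000, 1200, 1100, 900],  # busy commerce server
    ("web01", 443): [300, 280, 320, 310],
}
current = {
    ("web01", 80):  1050,   # within the normal range
    ("web01", 443): 2500,   # sudden surge -- flagged as anomalous
}
```

Only the surge on port 443 would be escalated; the routine background scanning noise on port 80 stays inside the baseline and generates no alert.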
But the market has already turned negative toward anything connected with the word IDS, because people are tired of the care and feeding of traditional, signature-based IDS implementations and see them as having a negative price/return ratio. Everybody agrees that it takes an inordinate amount of time to get meaningful data from those systems, so the investment in IDS software does not pay off. Investment in event correlation with hosts might help to distill the data into a more manageable volume, but this is a very expensive path.
A typical reaction to the cost of monitoring false positives, which can be substantial both in dollar terms and in its demoralizing effects (the crying-wolf problem), was aptly summarized by an anonymous security specialist at an electric utility:
Our IDS was a mess, alerting us on absolutely everything. In fact, I can’t even remember a single legitimate alert. We never had the time or manpower to monitor it all.
Network and security analysts now agree that false alerts are a fundamental problem that cannot be avoided with generic IDS sensors, and it can take up to 10 hours to investigate a single false positive. Given the diverse set of enterprise applications, a stream of false positives from IDS sensors essentially represents a denial-of-service attack on the security resources of the corporation. The IDS architecture proved appropriate only for detecting a very narrow band of attacks on selected hosts and is too low-level to detect application-level exploits.
All in all, IDS-based monitoring proved very costly to companies. According to Gartner, a big company's annual IDS costs are around a hundred thousand dollars. That does not include the cost of on-site personnel involved in analyzing (or, more correctly, distracted by) all those alerts.
IPS should be considered not so much a technology innovation as IDS vendors' attempt to escape a financial hole, and responsibility for pushing semi-useless systems, by selling a newer and better mousetrap. In essence it represents an attempt to reduce reliance on signatures and avoid the famous flood of false positives that made IDS a dirty word in security. IPS sits inline at the network perimeter, scanning incoming traffic for signs of malicious code. Unlike IDS, it can drop suspect traffic automatically or alert network security staff, who handle it manually. But IPS also falls short of its promises. I am very skeptical about IPS vendors' claims that IPS will ultimately replace IDS altogether. It is the same fundamentally flawed approach, repackaged to close the most gaping holes and sold to unsuspecting or outright naive customers. Some projections are too optimistic:
Infonetics projects a jump from $132.3 million to $425.5 million in sales for inline IDS between 2004 and 2007. Gartner, too, sees IPS sales surpassing IDS sales by the end of 2005.
Generally, it is difficult to keep instigating fear on a sustainable basis. That means that resources for signature updates and testing may shrink and quality may deteriorate. And that will create additional problems for enterprises that are slow to move to policy checking.
In a dream network intrusion-prevention environment, you would have a device monitoring all of your traffic and detecting or even stopping the bad guys. But that was an illusion, and it is now clear that these systems will never be capable of doing this. IDS might get slightly better with time, but the idea is so compromised that enterprises would be better off just moving on. IPS is just IDS on steroids: in addition to the old problem of the false-positive flood, you now have a real chance of blocking legitimate traffic and thus damaging your business. It also makes it possible to create attacks that use the IPS as a zombie by feeding it a set of carefully forged packets in order to cut communication with important hosts/networks.
Security technologies remain a priority for many enterprises. Evaluating the hype and the reality is important for prudent investments and critical for properly protecting the enterprise at a reasonable cost. Like other Internet technologies, security technologies typically develop in five stages:
Gartner calls this the "Hype Cycle" and defines it slightly differently:
A Hype Cycle is a graphic representation of the maturity, adoption and business application of specific technologies.
Since 1995, Gartner has used Hype Cycles to characterize the over-enthusiasm or "hype" and subsequent disappointment that typically happens with the introduction of new technologies (see "Understanding Gartner's Hype Cycles" for an introduction to the Hype Cycle concepts). Hype Cycles also show how and when technologies move beyond the hype, offer practical benefits and become widely accepted.
- "Technology Trigger" The first phase of a Hype Cycle is the "technology trigger" or breakthrough, product launch or other event that generates significant press and interest.
- "Peak of Inflated Expectations" In the next phase, a frenzy of publicity typically generates over-enthusiasm and unrealistic expectations. There may be some successful applications of a technology, but there are typically more failures.
- "Trough of Disillusionment" Technologies enter the "trough of disillusionment" because they fail to meet expectations and quickly become unfashionable. Consequently, the press usually abandons the topic and the technology.
- "Slope of Enlightenment" Although the press may have stopped covering the technology, some businesses continue through the "slope of enlightenment" and experiment to understand the benefits and practical application of the technology.
- "Plateau of Productivity" A technology reaches the "plateau of productivity" as the benefits of it become widely demonstrated and accepted. The technology becomes increasingly stable and evolves in second and third generations. The final height of the plateau varies according to whether the technology is broadly applicable or benefits only a niche market.
It is reasonable to assume that IDS technology has entered stage 4 now. At the height of the hype, IDS developers were heroes leading us to a bright, secure future. Not anymore. After the Gartner report, multiple critical papers littered popular network and computer magazines. The real problem is mainly architectural: most packet-sniffing solutions, whether IDS or IPS, do not have access to the full context required to make a sound judgment about the level of threat. Most of this context is host-based. That is why a network IDS generally has no idea whether an attack is relevant, and the volume of events it produces tends to hide the dangerous attacks in low-risk and false-positive noise.
Frustrated IT departments are trying to work around network IDS shortcomings by correlating IDS alerts with other security and vulnerability information. But this is easier said than done and requires a fairly open IDS solution, free of a proprietary signature database or limitations on log access on the sensor (which effectively excludes managed IDS solutions). Currently this is better done by writing your own log analysis middleware in scripting languages and gradually integrating it into an enterprise monitoring framework like Tivoli.
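A first cut at such middleware is straightforward in a scripting language. The Python sketch below scans syslog-style sshd lines and reports sources with repeated failed root logins; the line format, addresses, and the threshold of 3 are illustrative assumptions, not taken from any specific product:

```python
import re
from collections import Counter

# Sketch of home-grown log-analysis middleware: flag source addresses
# with repeated failed root logins in syslog-style sshd records.
FAILED = re.compile(r"Failed password for root from (\d+\.\d+\.\d+\.\d+)")

def suspicious_sources(lines, threshold=3):
    """Return {source_ip: failure_count} for sources at/over threshold."""
    counts = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1
    return {src: n for src, n in counts.items() if n >= threshold}

log = [
    "Nov 15 01:02:03 dmz1 sshd[411]: Failed password for root from 10.1.2.3 port 4022",
    "Nov 15 01:02:05 dmz1 sshd[411]: Failed password for root from 10.1.2.3 port 4023",
    "Nov 15 01:02:08 dmz1 sshd[411]: Failed password for root from 10.1.2.3 port 4025",
    "Nov 15 01:03:00 dmz1 sshd[412]: Accepted password for oper from 10.9.9.9 port 512",
]
```

Because the output is a plain dictionary, feeding it into an enterprise event console (a Tivoli event adapter, for instance) is a small incremental step rather than a product purchase.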
Where IDS completely failed is in distinguishing between the important and the trivial: they cried wolf so many times that most security departments simply stopped reacting to alerts, relegating them to the level of background noise. Even if an IDS detects a real attack, the whole idea is so compromised that nobody will care. So the first change you'll see in intrusion management this year is the addition of network recording to IDS sensors, which at least can help to recreate events after the fact.
The average IDS system now has several thousand signatures. Most of them are simple "grep-style" string matching rules based on packet headers. Such a primitive approach has two major problems:
That means that few companies can allocate staff to manage network IDS intelligently. In most cases they are just "circulating air" providing an illusion of security instead of real security.
Also, in a desperate attempt to preserve their shrinking profits, IDS vendors pollute signature databases with completely unrelated stuff like virus and worm detection (which is unrelated to network IDS and represents a higher-level protocol threat). IDS companies jumped into worm/virus detection simply because it is almost the only useful thing they can show customers. It also lets them make constant updates, speculate on fear, and more easily justify their annual maintenance fees.
[Kim&Spafford1994] Gene H. Kim, Eugene H. Spafford. "Experiences with Tripwire: Using Integrity Checkers for Intrusion Detection." Purdue Technical Report CSD-TR-94-012. URL: http://www.cs.virginia.edu/~jones/cs551S/papers/experience_with_tripwire.pdf
[Saddi1993] Allan Saddi. "Yet Another File Integrity Checker." URL: http://philosophysw.com/software/yafic/
[Gartner2003] Richard Stiennon. "Hype Cycle for Information Security, 2003." Gartner Research Group Report, 2003. URL: http://www.gartner.com/pages/story.php.id.8789.s.8.jsp
[Hulme2003] V. Hulme. "Gartner: Intrusion Detection On The Way Out." InformationWeek, June 13, 2003.
[Bradley2004] Tony Bradley. "The Line Between IDS & IPS Solutions Continues To Be Blurred." Processor, November 5, 2004, Vol. 26, Issue 45, p. 11 in print issue. URL: http://www.processor.com/editorial/article.asp?article=articles/P2645/33p45/33p45.asp&guid=
[Radcliff2004] Deborah Radcliff. "The evolution of IDS." Network World, November 8, 2004, pp. 44-46. Last accessed November 15, 2004 at URL: http://www.nwfusion.com/research/2004/110804ids.html?page=2
[Kendall2003] Sandy Kendall. "Is Intrusion Detection a Dead-End Technology?" CSO Talk Back.
[Bekker2003] Scott Bekker. "Gartner: Intrusion Detection Systems a Bust." ENT News, June 11, 2003. URL: http://www.entmag.com/news/article.asp?EditorialsID=5844
[Hollows2003] Phil Hollows. "IDS is Dead -- Long Live IDS." eSecurityPlanet.com, June 27, 2003. URL: http://www.esecurityplanet.com/views/article.php/2228631/IDS-is-Dead--Long-Live-IDS.htm
[Franklin&Wiens2003] Curtis Franklin Jr., Jordan Wiens. "Are your Web apps secure?" InfoWorld, February 6, 2004. URL: http://www.infoworld.com/article/04/02/06/06FEsecureapp_1.html
[Ferrel2003] Keith Ferrell. "Intrusion Detection: Bright Future or Dead End?" TechWeb, June 18, 2003. URL: http://www.techweb.com/tech/security/20030618_security
[Schulze2003] Jan Schulze. "No Unauthorized Access." SAP INFO, August 18, 2003. URL: http://www.sap.info/public/en/article.php4/Article-206743f3b51321a821/en
Created May 1, 2004; Last modified: March 12, 2019