
An observation about corporate security departments

Version 2.1 with updates of August 24, 2015



The security department in a large corporation is often staffed with people who were useless in any other department, and who become genuinely harmful in this new role. A typical case is the enthusiastic "know-nothing" who, after moving to the security department, almost instantly turns into a script kiddie, ready to test the latest and greatest exploits downloaded from the Internet against the internal corporate infrastructure (aka production servers), seeing such activity as a sacred duty inherent in the newly minted role of "security specialist".

So let's discuss how to protect ourselves from those "institutionalized" hackers, who are often more clueless, and no less ambitious, than a typical teenage script kiddie (Wikipedia):

In a Carnegie Mellon report prepared for the U.S. Department of Defense in 2005, script kiddies are defined as

"The more immature but unfortunately often just as dangerous exploiter of security lapses on the Internet. The typical script kiddy uses existing and frequently well known and easy-to-find techniques and programs or scripts to search for and exploit weaknesses in other computers on the Internet -- often randomly and with little regard or perhaps even understanding of the potentially harmful consequences."[5]

In this case you need to think about some defenses. Enabling a firewall on an internal server is probably overkill, and might get you into hot water quicker than you can realize the consequences of such a conversion. But enabling TCP wrappers can help cut off oxygen for overzealous "know-nothings", who are typically able to operate only from a dozen or so IP addresses (those addresses are visible in logs; your friends in the networking department can also help ;-).

The simplest way is to include in the deny file those services that caused problems in previous scan attempts. For example, the ftpd daemon sometimes crashes when subjected to a sequence of packets representing an exploit; the HP-UX version of wu-ftpd is probably one of the weakest in this respect. Listing in /etc/hosts.deny the hosts from which exploits are run can help. One thing that is necessary is to ensure that the particular daemon is compiled with TCP wrappers support (which is the case for vsftpd and wu-ftpd), or is run via xinetd on Linux and Solaris, or via the tcpd wrapper on other Unixes.

Including SSH and telnet might also be helpful, as it blocks the possibility of exploiting some unpatched servers. The latest fashion in large corporations is to make a big show of any remotely exploitable bug, with spreadsheets going up to senior management. This creates a substantial and completely idiotic load on rank-and-file system administrators, as the typical corporate architecture has holes an elephant could walk through, and any particular exploit changes nothing in this picture. In this case, restricting ssh access to a handful of useful servers or local subnets can save you some time, not only with the currently "hot" exploit, but in the future too.

In this case, even after inserting a new entry into /etc/passwd or something similar, it is impossible to log in to the server from outside a small subset of IP addresses. This is good security practice in any case.
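
As a sketch, TCP-wrappers restrictions of this kind might look as follows. The subnet and host addresses are made-up examples (substitute your own management subnets), and the daemon name for ftpd varies by OS and inetd configuration:

```
# /etc/hosts.allow -- consulted first; the first matching line wins
sshd:    10.10.1.0/255.255.255.0, 192.168.5.17
in.ftpd: 10.10.1.0/255.255.255.0

# /etc/hosts.deny -- anything not matched above is refused
sshd:    ALL
in.ftpd: ALL
```

With such a pair of files, a wrapped sshd or ftpd drops connections from everywhere except the listed addresses, which is exactly the "small subset of IP addresses" effect described above.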

See entries collected in the "Old News" section for additional hints.

Another useful measure is creating a baseline of /etc and several other vital directories (/root and cron files). A simple script comparing the current state with the baseline is useful not only against those jerks, but can also provide important information about the actions of your co-workers, who sometimes make changes and forget to notify the other people who administer the server. And that means you.
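
A minimal version of such a baseline script might look like this. This is only a sketch: the paths in the usage comments are examples, and GNU md5sum is assumed:

```shell
#!/bin/sh
# Record and compare checksums of vital directories against a baseline.

make_baseline() {   # make_baseline DIR BASELINE_FILE
    find "$1" -type f -exec md5sum {} + 2>/dev/null | sort > "$2"
}

check_baseline() {  # check_baseline DIR BASELINE_FILE
    # Prints a diff of changed/added/removed files; exit status is
    # non-zero if anything differs from the baseline.
    find "$1" -type f -exec md5sum {} + 2>/dev/null | sort | diff "$2" -
}

# Typical use (as root, e.g. from cron):
#   make_baseline /etc /var/baseline/etc.md5    # once, on a known-good system
#   check_baseline /etc /var/baseline/etc.md5 \
#       || echo "WARNING: /etc differs from baseline"
```

The same pair of functions covers /root and the cron spool; one baseline file per directory keeps the reports readable.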

And last but not least, new, unique entries in the output of the last command should be mailed to you within an hour. The same goes for new, unique entries in /var/log/messages. Servers in a corporate environment mostly perform dull, repetitive tasks, and one week's worth of logs serves as an excellent baseline of what to expect from the server. Any lines that are substantially different should generate a report, which in the simplest case can be mailed, or collected via scp on a special server with a web server to view them. Or they can be converted into a pseudo-mailbox, with the ability to view those reports via a simple webmail interface.
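
The "new, unique entries" idea can be sketched as a small filter that remembers every line it has already seen and prints only the newcomers. The seen-file paths in the usage comments are hypothetical, and GNU grep is assumed (an empty pattern file matches nothing, so -v passes everything through on the first run):

```shell
# Print only lines never seen before, and remember them for the next run.
report_new() {   # usage: some_command | report_new SEEN_FILE
    seen="$1"
    touch "$seen"
    tmp=$(mktemp)
    # -F: fixed strings, -x: whole-line match, -v: keep only non-matches
    sort -u | grep -F -x -v -f "$seen" > "$tmp"
    cat "$tmp" >> "$seen"
    cat "$tmp"
    rm -f "$tmp"
}

# Hourly cron jobs might then look like:
#   last | report_new /var/log/last.seen | mail -s "new logins on $(hostname)" admin
#   grep sshd /var/log/messages | report_new /var/log/sshd.seen
```
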

Arbitrary and excessive installation of firewalls

Nowadays each Unix/Linux server has its own built-in firewall, which with careful configuration can protect the server from many "unpatched" remote exploits. But the security department, like any bureaucratic organization, has its own dynamics, and after some spectacular remote exploit, or a revelation about the activities of three-letter agencies, some wise head in this department comes up with the initiative of installing additional firewalls.
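
For reference, the built-in host firewall needs only a few rules to close a server to everything but established traffic and management ssh. This is a sketch with a made-up management subnet, not a recommended policy:

```shell
# Default deny on input; allow loopback, established connections,
# and ssh only from the management subnet (10.10.1.0/24 is an example).
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -s 10.10.1.0/24 --dport 22 -j ACCEPT
```
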

To object to such an initiative means taking responsibility for a possible breach, so usually the corporate brass approves it. At this point the real fun starts, as typically the originator of the idea has no clue about the vulnerable aspects of the corporate architecture, is guided simply by the principle "the more the better", and improvises rules in ad hoc fashion. The key problem with such an approach is that as soon as the complexity of the rulesets rises above the IQ of their creators, they become a source of denial-of-service attacks on the corporate infrastructure. In other words, it becomes an "institutionalized hackers" attack on this infrastructure, and instead of increasing security it turns into a huge, unwarranted burden for both users and system administrators. Moreover, the rules become stagnant; nobody fully understands the set of firewall rules in use, and people are simply afraid of touching this helter-skelter mess. In other words, the rules soon become non-adaptable and exist as a ritual, not as an active, adaptable protection layer.

Moreover, the people who put forward such initiatives typically never ask themselves whether there are alternative ways to log in to the server. And on a modern server there is such a way (actually an extremely vulnerable one): using ILO/DRAC or another built-in specialized computer designed for this purpose. Without putting those connections on a specially protected network segment, and without regular firmware updates (I think exploits for such firmware are a priority for three-letter agencies), this path essentially represents an open backdoor to the server OS that bypasses all those "multiplexor"-controlled logins.

And this tendency toward arbitrary, caprice-based compartmentalization using badly configured firewalls has other aspects that we will discuss in a separate paper. One of them is that it typically raises the complexity of the infrastructure far above the IQ of the staff, and the set of rules soon becomes unmanageable. Nobody fully understands it, and as such it represents a weak point in the security infrastructure, not a strong one. Unless periodically pruned, such a ruleset often makes even trivial activities, such as downloading patches, extremely cumbersome and subject to periodic "security-inspired" denial-of-service attacks.

Actually, all this additional firewall infrastructure soon magically turns into a giant denial-of-service attack on users and system administrators. An attack on a scale that hackers can only envy, with hidden losses in productivity far above the cost of typical script-kiddie games ;-).

Usage of login multiplexors

Another typical "fashion item" is to install a special server (we will call it a login multiplexor) to block direct user logins and file transfers to the servers in some datacenter or lab. From then on, everybody should use the "multiplexor" for the initial login; only from it can you connect to, and retrieve data from, any individual server in the "multiplexed" territory.

The problem with this approach is that if two-factor authentication is used, there is already a central server (the server that controls the tokens) that collects all the data about user logins. Why do we need another one? If the security folks do not have the IQ to use that already available data, what exactly does a login multiplexor change in this situation?

And if the multiplexor does not use two-factor authentication with some type of token, it is a perfect point for harvesting all corporate passwords. So in itself it is just a huge vulnerability, which should be avoided at all costs.

That suggests that while this idea has its merits, correct implementation is not trivial and, as with everything in corporate security, requires a deep understanding of the architecture -- understanding that is by definition lacking in security departments.

This move is especially tricky (and critically affects the productivity of researchers) for labs. And the first question here is: "What are we protecting?" One cynical observation is that, due to waves of downsizing, staff might have so little loyalty that if somebody wants the particular information we are trying to protect, he/she can buy it really cheaply, without the trouble of breaching particular lab computers. Moreover, breaking into the cloud-based email accounts of key researchers (emails which, BTW, are replicated on their cell phones) in the best NSA style is also a cheaper approach.

A naively put in place "login multiplexor" severely restricts the "free movement of data" that is important for research as a discipline and, as such, has a huge negative effect on researchers.

Also, while the multiplexor provides all the login data for connections to it, it does not automatically provide any means to analyze them, and this part needs to be either written or acquired elsewhere. If this is not done, and the activity of users is not analyzed to minimize negative effects, the multiplexor soon becomes a pretty useless additional roadblock for the users -- just another ritual piece of infrastructure. And nobody has the courage to shout "The king is naked!"

"Popular exploits" and patching them as a new datacenter voodoo ritual

Linux is a complex OS with a theoretically unlimited number of exploits, both in the kernel and in major filesystems. So patching a single exploit logically does not improve security much, as we never know how many "zero-day" exploits exist "in the wild" and are in the hands of hackers and three-letter agencies.

At the same time, paranoia is artificially whipped up by all those security companies, which understand that it represents a chance to sell their (often useless or harmful) wares to the stupid folk in large corporations (aka milk cows).

Typically the conditions for applicability, and the exact nature of the exploit, are not revealed; it is just proclaimed to be another "Big Bad Exploit", and everybody rushes to patch it in a pretty crazy frenzy, as if their lives depended on it. This forgets the proverb that mushrooms usually grow in packs, and that exploits which are "really exploitable" (see for example Top 5 Worst Software Exploits of 2014 -- [Added May 2, 2015, NNB]) often point to architectural flaws in the network architecture of the corporation (the OpenSSL and OpenSSH vulnerabilities and, especially, the Cryptolocker Trojan are quite interesting examples here). It goes without saying that architectural flaws can't be fixed by patching.

So patching the latest remotely exploitable vulnerability becomes much like a voodoo ritual, by which a shaman tries to sway evil demons to stay away from the corporation. Questions about what aspects of the current datacenter architecture make this vulnerability more (or less) dangerous, and how to make the architecture more "exploit resistant", are never asked.

All efforts are spent on patching the one currently "fashionable" exploit that got great (and typically incompetent) MSM coverage, often without even investigating whether it can be used in the particular environment or not. All activity is driven by a spreadsheet with the list of "vulnerable" servers; achieving a zero count is all that matters. After this noble goal is achieved, all activity stops until the next "fashionable" exploit.

Voodoo network security and NIDS security hype

"We are going to spend a large amount of money to produce a more complicated artifact, and it is not easy to quantify what we are buying for all the money and effort"

Bob Blakey, principal analyst with
Burton Group about password RFIDs

Bullshitting is not exactly lying, and bullshit remains bullshit whether it's true or false. The difference lies in the bullshitter's complete disregard for whether what he's saying corresponds to facts in the physical world: he does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.

Harry G. Frankfurt
On Bullshit

Marketing-based security has changed expectations about the products pushed by security companies in a very bad way. Like the MSM in foreign-policy coverage, vendors are manufacturing a particular "artificial reality", painting an illusion, and then showing how their products make that imaginary reality more secure.

We should be aware that "snake oil" sellers have recently moved into security -- with considerable financial success. Of course, the IT industry as a whole uses "dirty" marketing tricks to sell its wares, but here the situation is really outstanding, when a company (ISS) which produced nothing but crap was in the end bought for 1.3 billion dollars by IBM in 2006. I think the IBM brass would have been better off spending that money in the best style of former TYCO CEO Dennis Kozlowski, on wild orgies somewhere on Cyprus or another Greek island :-). At least that way the money would have been wasted in style ;-).

Let's talk about one area that I used to know really well -- network intrusion detection systems (NIDS).

For some reason unknown to me, the whole industry became pretty rotten, selling mostly hype and FUD. Still, I need to admit that FUD sells well. The total size of the world market for network IDS is probably several hundred million dollars, and this market niche is occupied by a lot of snake oil salesmen:

Synergy Research Group reported that worldwide network security market spending continued to be over $1 billion in the fourth quarter of 2005, in all segments -- hybrid solutions (firewall/VPN, appliances, and hybrid software solutions), Intrusion Detection/Prevention Systems (IDS/IPS), and SSL VPN.

IDS/IPS sales increased seven percent for the quarter and were up 30 percent over 2004.

Most money spent on IDS could be spent with a much greater return on investment on ESM software, as well as on improving the rules in existing firewalls, increasing the quality of log analysis, and host-based integrity checking.

That means that the network IDS area is a natural area where open source software is more competitive than any commercial software. Simplifying, we can even state that the acquisition of a commercial IDS by an organization can be a sign of weak or incompetent management (although reality is more complex, and sometimes such an acquisition is just a reaction to pressures outside IT, such as compliance-related pressures; moreover, some implementations were done under a "loss leader" mentality, under the motto "let those jerks who want it have this sucker").

Actually, an organization that spends money on NIDS without first creating a solid foundation by deploying ESM commits what is called "innocent fraud" ;-). It does not matter what traffic you detect if you do not understand what exactly is happening on your servers and workstations, and view your traffic as an unstructured stream -- a pond out of which the IDS magically fishes alerts. In reality, most of the time the IDS is crying wolf, and the few useful alerts are buried in the noise. Also, the "real time" that is the selling point of IDS does not really matter: most organizations have no ability to react promptly to alerts, even if we assume that there are (very rare) cases when a NIDS picks up useful signal instead of noise. A good introduction to NIDS can be found in NIST Draft Special Publication 800-94, Guide to Intrusion Detection and Prevention (IDP) Systems.

A typical network IDS (NIDS) uses network card(s) in promiscuous mode, sniffing all packets on each network segment the sensor is connected to. Installations usually consist of several sensors and a central console to aggregate and analyze data. NIDS can be classified into several types:

The second important classification of NIDS is the placement:

Organizations rarely have the resources to investigate every "security" event. Instead, they must attempt to identify and address the top issues using the tools they've been given. This is practically impossible if an IDS is listening to a large traffic stream with many different types of servers and protocols. In this case security personnel, if any, are forced to practice triage: tackle the highest-impact problems first and move on from there. Eventually even that is replaced with a simpler approach: ignore them all ;-). Of course, much depends on how well the signatures are tuned to the particular network infrastructure. Therefore another classification can be based on the type of signatures used:

Even in the case when you limit the monitored traffic to a specific segment of the internal network (for example, local sites of a national or international corporation, which is probably the best NIDS deployment strategy), the effectiveness of a network IDS is low, but definitely above zero. It can be marginally useful in this restricted environment. Moreover, it might have value for network troubleshooting, especially if the sensors are also configured to act as a blackbox recorder for traffic; the latter can easily be done using tcpdump as the first stage, with Perl scripts post-processing the tcpdump results, say, every quarter of an hour. Please note that all those talks about real-time detection are 99% pure security FUD: nothing can be done in most large organizations in less than an hour ;-)
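
The blackbox-recorder idea can be sketched in two stages. The interface and paths are examples, the rotation flags assume a reasonably recent tcpdump (4.x, which supports -G for timed rotation), and top_talkers is a hypothetical helper, not a standard tool:

```shell
# Stage 1 (root, at boot): rotate a raw capture file every 15 minutes.
#   tcpdump -i eth0 -n -s 0 -G 900 -w '/var/capture/trace-%Y%m%d-%H%M.pcap' &
#
# Stage 2: post-process each closed file, e.g. a crude "top talkers" report
# over tcpdump's one-line text output ("... IP src.port > dst.port: ...").
top_talkers() {   # usage: tcpdump -n -r trace.pcap | top_talkers
    # take the "src.port" field of each "IP" line, strip the port,
    # then count occurrences per source address, most frequent first
    awk '$2 == "IP" { print $3 }' | cut -d. -f1-4 | sort | uniq -c | sort -rn | head
}
```
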

In order to preserve their business (and revenue stream), IDS vendors started to hype intrusion prevention systems (IPS) as the next generation of IDS. But IPS is a very questionable idea that mixes the role of a firewall with the role of an IDS sensor. It is not surprising that it backfired many times for early (and/or too enthusiastic) adopters (beta addicts).

It is very symptomatic, and proves the point about "innocent fraud", that intrusion prevention is usually advertised on the basis of its ability to detect mail viruses, network worms, and spyware. For any specialist it is evident that mail viruses should actually be detected on the mail gateway, and it is benign idiotism to try to detect them at the packet-filter level. Still, idiotism might be the key to commercial success, and most IDS vendors pay a lot of attention to the rules or signatures that provide positive PR, which automatically drives them into the virus/worm detection wonderland. There are two very important points here:

Maybe things will eventually improve, but right now I do not see how a commercial IDS can justify the return on investment, and NIDS looks like a perfect area for open source solutions. In this sense, please consider this page a pretty naive attempt (it misses the organizational dynamics and power-grab issues in large organizations) to counter "innocent fraud", to borrow the catchphrase used by the famous economist John Kenneth Galbraith in the title of his last book, "The Economics of Innocent Fraud".

An important criterion for NIDS is also the level of programmability:

It is rather counterproductive to place NIDS in segments with large network traffic. Mirroring a port on the switch works in simple cases, but in complex cases with multiple virtual LANs it will not work, as usually only one port can be mirrored. Mirroring also increases the load on the switch. Taps are an additional component and are somewhat risky on high-traffic segments, unless they are designed to pass all traffic through in case of failure. Logically, a network IDS belongs in the firewall, and some commercial firewalls have rudimentary IDS functionality. A personal firewall with a NIDS component might even be more attractive for most consumers, as it provides some insight into what is happening; it can also be useful for troubleshooting. Its major market is small business, and probably people connected by DSL or cable who fear that their home computers may be invaded by crackers.

The problem is that useful signal about probes and actual intrusions is usually buried under mountains of data, and wrong signal may drive you in the wrong direction. A typical way to cope with the information overload from a network IDS is to rely more on aggregation of data (for example, detect scans, not single probes) and on "anomaly detection" (imitate a firewall detector, or use statistical criteria for traffic aggregation).
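
As an illustration of the aggregation idea ("detect scans, not single probes"), here is a sketch that flags sources touching many distinct destination ports in iptables-style kernel log lines. The `SRC=`/`DPT=` fields are the standard kernel packet-log format; the threshold value is arbitrary:

```shell
# Flag hosts that probed at least THRESHOLD distinct ports (scan, not probe).
scan_suspects() {   # usage: scan_suspects [threshold] < logfile
    tr ' ' '\n' | awk -F= -v t="${1:-10}" '
        /^SRC=/ { src = $2 }
        /^DPT=/ { key = src SUBSEP $2
                  if (!(key in seen)) { seen[key] = 1; ports[src]++ } }
        END     { for (s in ports) if (ports[s] >= t) print ports[s], s }'
}

# e.g.:  grep 'IN=' /var/log/messages | scan_suspects 20
```

A single probe never crosses the threshold, so the noise of isolated connection attempts disappears and only aggregated scan behavior is reported.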

Misuse detection is more costly and more problematic than the anomaly detection approach, with the notable exception of honeypots. It might be beneficial to use hybrid tools that combine honeypots and NIDS. Just as a sophisticated home security system might comprise both external cameras and sensors and internal monitoring equipment, to watch for suspicious activity both outside and within the house -- so should an intrusion detection system.

You may not know it, but a surprisingly large number of IDS vendors have license provisions that can prohibit you from communicating information about the quality and usability of their security software. Some vendors have used these license provisions to file, or threaten, lawsuits to silence users who criticized software quality in places such as Web sites, Usenet newsgroups, user group bulletin boards, and the technical support boards maintained by software vendors themselves. Here open source has a definite advantage: it may not be the best, but at least it is open, has reasonable quality (for example, Snort is very competitive with the most popular commercial solutions), or at least is the cheapest alternative among several equally bad choices ;-).

IDS are often (and wrongly) considered to be the key component of enterprise-level security. Often that is "achieved" by buying fashionable but mainly useless outsourced IDS services. Generally this idea has a questionable value proposition, because the level of false positives, and the problems with the internal infrastructure (often stupid misconfigurations at the web-server level, inability to apply patches in a timely manner, etc.), far outweigh any IDS-provided capabilities. If you are buying an IDS, a good starting point is to ask the vendor to show what attacks they recently detected, and to negotiate a one- to six-month trial before you pay the money ("try before you buy").

The problem of false positives for IDS is a very important problem that is rarely discussed on a sound technological level. I don't think there is a 'best' IDS.  But here are some considerations:

You probably get the idea at this point: the IQ of the network/security administrators, and the ability to adapt the solution to the organization, are of primary importance in the IDS area -- more important than in, say, virus protection (where precooked signature sets rule, despite being a huge overkill).

All in all, the architecture and the level of customization of the rulebase are more important than the capabilities of the NIDS.

Instead of conclusion

From: Anonymous

Reverse tunneling is very, very useful, but only in quite specific cases. Those cases are usually the result of extreme malice and/or incompetence of network staff.

From: Anonymous

The only difficult part here is to determine what is the most common attribute of most IT/networking departments: malice or incompetence. My last IT people certainly had both.




FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.




Copyright © 1996-2016 by Dr. Nikolai Bezroukov. This site was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.


Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.






Last modified: October, 03, 2017