May the source be with you, but remember the KISS principle ;-)

Enterprise Logs Collection and Analysis Infrastructure

News Unix System Monitoring Recommended Links Recommended Papers Rsyslog syslog-ng
Syslog daemon Remote Syslog Pipes in syslog Solaris Logs Managing AIX logs http logs
Log rotation Logrotate Log rotation in RHEL/CENTOS Log Rotation in Solaris    
Syslog analyzers Logwatch Syslog Anomaly Detection Analyzers Devialog Syslog viewers Logger
Event correlation Horror Stories Tips Random Findings Humor Etc

A system log is a recording of certain events. The kinds of events found in a system log are determined by the nature of the particular log as well as by the configuration of the daemons and applications that use the central logging facility. System logs (as exemplified by the classic Unix syslog daemon) are usually text files, each entry containing a timestamp and other information specific to the message or subsystem.

The importance of a network-wide, centralized logging infrastructure cannot be overestimated. It is a central part of any server monitoring infrastructure. Analysis of logs can also be an important part of the security infrastructure -- much more important than a fashionable network intrusion detection system, a black hole that consumes untold millions of dollars each year in most developed countries.

This page presents several approaches to collecting and monitoring system logs, first of all those based on the traditional Unix syslog facility. Several important issues are covered below.

The first step in enterprise log analysis is the creation of a central loghost server -- a server that collects logs from all servers, or from all servers of a specific type (for example, one for AIX, one for HP-UX, and one for Solaris).
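With the classic BSD-style syslogd, pointing a client at the loghost is a one-line change. The fragment below is a sketch: the hostname loghost.example.com is a placeholder, and the mechanism for enabling remote reception on the loghost side varies by implementation (e.g., the -r flag for Linux sysklogd; check your platform's syslogd man page).

```
# /etc/syslog.conf on each client: forward all facilities and priorities
# to the central loghost (hostname is a placeholder for your site's alias)
*.*    @loghost.example.com
```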


Old News ;-)

[Jun 25, 2013] Edward Snowden, Bradley Manning and the risk of the low-level, tech-savvy leaker

..."required analysts to review computer logs to identify suspicious behavior on the network." That's not a trivial task...
The Washington Post

Since the disclosures by WikiLeaks in 2010, the Pentagon has taken steps to better protect its classified networks.

It has banned the use of thumb drives unless special permission is given, mandated that users have special smart cards that authenticate their identities and required analysts to review computer logs to identify suspicious behavior on the network.

[Mar 11, 2012] Project Lumberjack to improve Linux logging

See also Petition Lennart Poettering Stop writing useless programs - systemd, Journal
Mar 1, 2012 | Bazsi's blog
In a lively discussion at the RedHat offices in Brno two weeks ago, a number of well-respected individuals discussed how logging in general, and Linux logging in particular, could be improved. As you may have guessed I was invited because of syslog-ng, but other logging-related projects were also well represented: Steve Grubb (auditd), Lennart Poettering (systemd, journald), Rainer Gerhards (rsyslog), William Heinbockel (CEE, Mitre), and a number of nice people from the RedHat team.

We discussed a couple of pain points for logging: logging is usually an afterthought during development, and computer-based processing and correlation of application logs is nearly impossible. We roughly agreed that the key to improving the situation is to involve the community at large, build momentum, and try to get application developers on board and have them create structured logs. We also agreed that this will not happen overnight and we need to take a gradual approach.

To move in that direction, the benefits of good logging need to be communicated and delivered to both application developers and their users.

We also talked about what kind of building blocks are needed to deliver a solution fast, and concluded that we basically have everything available, and even better they are open source. The key is to tie these components together, document best practices and perhaps provide better integration.

Thus project Lumberjack was born, hosted as a Fedora project at

The building blocks that need some care are:

Most of this is already possible using a combination of tools and proper configuration; however, learning how to do this is not a trivial undertaking for those who only want to develop or use applications.

Changing that is the primary aim of Project Lumberjack. If you are interested in logging, make sure to check that out.

See also

[Feb 24, 2010] Log message classification with syslog-ng by Robert Fekete

January 13, 2010 |

The nine-year-old syslog-ng project is a popular alternative syslog daemon - licensed under GPLv2 - that has established its name with reliable message transfer and flexible message filtering and sorting capabilities. Over that time it has gained many new features, including direct logging to SQL databases, TLS-encrypted message transport, and the ability to parse and modify the content of log messages. The SUSE and openSUSE distributions use syslog-ng as their default syslog daemon.

In syslog-ng 3.0 a new message-parsing and classifying feature (dubbed pattern database or patterndb) was introduced. With recent improvements in 3.1 and the increasing demand for processing and analyzing log messages, a look at the syslog-ng capabilities is warranted.

The main task of a central syslog-ng log server is to collect the messages sent by the clients and route the messages to their appropriate destinations depending on the information received in the header of the syslog message or within the log message itself. Using various filters, it is possible to build even complex, tree-like log routes. For example:

It is equally simple to modify the messages by using rewrite rules instead of filters if needed. Rewrite rules can do simple search-and-replace, but can also set a field of the message to a specific value: this comes in handy when a client does not properly format its log messages to comply with the syslog RFCs. (This is surprisingly common with routers and switches.) Version 3.1 makes it possible to rewrite the structured data elements in messages that use the latest syslog message format (RFC 5424).
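A sketch of such a log route, combining a filter and a rewrite rule, might look like the following. All names, addresses, and paths here are invented for illustration, and exact option syntax varies somewhat between syslog-ng versions:

```
# Collect messages from the network, route auth messages to their own file,
# and fix the hostname sent by a misbehaving switch (names are illustrative).
source s_net { udp(port(514)); };
filter f_auth { facility(auth, authpriv); };
rewrite r_fixhost { subst("switch-bad-name", "switch17", value("HOST")); };
destination d_auth { file("/var/log/central/auth.log"); };
log { source(s_net); rewrite(r_fixhost); filter(f_auth); destination(d_auth); };
```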

Artificial ignorance

Classifying and identifying log messages has many uses. It can be useful for reporting and compliance, but can be also important from the security and system maintenance point of view. The syslog-ng pattern database is also advantageous if you are using the "artificial ignorance" log processing method, which was described by Marcus J. Ranum (MJR):

Artificial Ignorance - a process whereby you throw away the log entries you know aren't interesting. If there's anything left after you've thrown away the stuff you know isn't interesting, then the leftovers must be interesting.

Artificial ignorance is a method to detect the anomalies in a working system. In log analysis, this means recognizing and ignoring the regular, common log messages that result from the normal operation of the system, and therefore are not too interesting. However, new messages that have not appeared in the logs before can signify important events, and should therefore be investigated.
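The mechanics of artificial ignorance can be sketched with standard tools: keep a file of patterns matching the known-boring messages, and report only what survives. Everything below -- file names and sample log lines alike -- is invented for the demonstration.

```shell
#!/bin/sh
# Artificial ignorance, minimal form: discard every log line that matches a
# known-boring pattern; whatever is left over is "interesting" by definition.
# All file names and sample messages here are synthetic.

WORK=$(mktemp -d)

# Patterns describing normal, uninteresting operation (extended regexes).
cat > "$WORK/ignore.patterns" <<'EOF'
sshd\[[0-9]+\]: Accepted publickey
CRON\[[0-9]+\]: \(root\) CMD
EOF

# A small synthetic log: two routine lines and one anomaly.
cat > "$WORK/sample.log" <<'EOF'
Feb  1 11:54:48 host1 sshd[20994]: Accepted publickey for joe
Feb  1 12:00:01 host1 CRON[21010]: (root) CMD (logrotate)
Feb  1 12:03:17 host1 kernel: EXT4-fs error (device sda1): journal abort
EOF

# Throw away the known-boring lines; only the anomaly should remain.
grep -v -E -f "$WORK/ignore.patterns" "$WORK/sample.log" > "$WORK/interesting.log"
cat "$WORK/interesting.log"
```

In a real deployment the pattern file grows over time: each run, anything that shows up in the "interesting" output but turns out to be routine gets a new pattern added for it.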

The syslog-ng pattern database

The syslog-ng application can compare the contents of the received log messages to a set of predefined message patterns. That way, syslog-ng is able to identify the exact log message and assign a class to the message that describes the event that has triggered the log message. By default, syslog-ng uses the unknown, system, security, and violation classes, but this can be customized, and further tags can be also assigned to the identified messages.

The traditional approach to identify log messages is to use regular expressions (as the logcheck project does for example). The syslog-ng pattern database uses radix trees for this task, and that has the following important advantages:

For example, compare the following:

A log message from an OpenSSH server:

    Accepted password for joe from port 42156 ssh2
A regular expression that describes this log message and its variants:
    Accepted \ 
        (gssapi(-with-mic|-keyex)?|rsa|dsa|password|publickey|keyboard-interactive/pam) \
        for [^[:space:]]+ from [^[:space:]]+ port [0-9]+( (ssh|ssh2))? 
An equivalent pattern for the syslog-ng pattern database:
    Accepted @QSTRING:auth_method: @ for @QSTRING:username: @ from \ 
        @QSTRING:client_addr: @ port @NUMBER:port:@ @QSTRING:protocol_version: @

Obviously, log messages describing the same event can be different: they can contain data that varies from message to message, like usernames, IP addresses, timestamps, and so on. This is what makes parsing log messages with regular expressions so difficult. In syslog-ng, these parts of the messages can be covered with special fields called parsers, which are the constructs between '@' in the example. Such parsers process a specific type of data like a string (@STRING@), a number (@NUMBER@ or @FLOAT@), or IP address (@IPV4@, @IPV6@, or @IPVANY@). Also, parsers can be given a name and referenced in filters or as a macro in the names of log files or database tables.

It is also possible to parse the message until a specific ending character or string using the @ESTRING@ parser, or the text between two custom characters with the @QSTRING@ parser.
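The regular-expression variant shown in the comparison above can be sanity-checked from the shell with grep -E. The sample message below is synthetic: the article's own example omits the client address, so a placeholder from the TEST-NET documentation range is used here instead.

```shell
#!/bin/sh
# Check that the regex from the comparison above really matches a typical
# OpenSSH "Accepted" message. 192.0.2.7 is a documentation-range placeholder.
pat='Accepted (gssapi(-with-mic|-keyex)?|rsa|dsa|password|publickey|keyboard-interactive/pam) for [^[:space:]]+ from [^[:space:]]+ port [0-9]+( (ssh|ssh2))?'
msg='Accepted password for joe from 192.0.2.7 port 42156 ssh2'

if printf '%s\n' "$msg" | grep -E -q "$pat"; then
    matched=yes
else
    matched=no
fi
echo "$matched"
```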

A syslog-ng pattern database is an XML file that stores patterns and various metadata about the patterns. The message patterns are sample messages that are used to identify the incoming messages; while metadata can include descriptions, custom tags, a message class - which is just a special type of tag - and name-value pairs (which are yet another type of tags).
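A minimal sketch of that XML layout, wrapping the sshd pattern from the earlier example, might look like this. The id attributes are normally UUIDs (shown here as placeholders), and element details should be checked against the syslog-ng administrator guide rather than taken from this fragment:

```
<patterndb version="3" pub_date="2010-02-24">
  <ruleset name="sshd" id="PLACEHOLDER-RULESET-ID">
    <pattern>sshd</pattern>
    <rules>
      <rule id="PLACEHOLDER-RULE-ID" class="security">
        <patterns>
          <pattern>Accepted @QSTRING:auth_method: @ for @QSTRING:username: @ from @QSTRING:client_addr: @ port @NUMBER:port:@ @QSTRING:protocol_version: @</pattern>
        </patterns>
      </rule>
    </rules>
  </ruleset>
</patterndb>
```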

The syslog-ng application has built-in macros for using the results of the classification: the .classifier.class macro contains the class assigned to the message (e.g., violation, security, or unknown) and the .classifier.rule_id macro contains the identifier of the message pattern that matched the message. It is also possible to filter on the tags assigned to a message. As with syslog, these routing rules are specified in the syslog-ng.conf file.
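Wiring the classification results into a log route could look roughly like the fragment below. This is a sketch under assumed names (the parser, filter, source, and paths are all invented), not a drop-in configuration:

```
# Hypothetical syslog-ng.conf fragment: classify incoming messages against
# the pattern database, then route only messages classified as "violation".
parser p_patterndb { db_parser(file("/etc/syslog-ng/patterndb.xml")); };
filter f_violation { match("violation" value(".classifier.class")); };
destination d_alerts { file("/var/log/central/violations.log"); };
log { source(s_net); parser(p_patterndb); filter(f_violation); destination(d_alerts); };
```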

Using syslog-ng

In order to use these features, get syslog-ng 3.1 - older versions use an earlier and less complete database format. As most distributions still package version 2.x, you will probably have to download it from the syslog-ng download page.

The syntax of the pattern database file might seem a bit intimidating at first, but most of the elements are optional. Check The syslog-ng 3.1 Administrator Guide [PDF] and the sample database files to start with, and write to the mailing list if you run into problems.

A small utility called pdbtool is available in syslog-ng 3.1 to help the testing and management of pattern databases. It allows you to quickly check if a particular log message is recognized by the database, and also to merge the XML files into a single XML for syslog-ng. See pdbtool --help for details.

Closing remarks

The syslog-ng pattern database provides a powerful framework for classifying messages, but it is powerless without the message patterns that make it work. IT systems consist of several components running many applications, which means a lot of message patterns to create. This clearly calls for community effort to create a critical mass of patterns where all this becomes usable.

To start with, BalaBit - the developer of syslog-ng - has made a number of experimental pattern databases available. Currently, these files contain over 8000 patterns for over 200 applications and devices, including Apache, Postfix, Snort, and various common firewall appliances. The syslog-ng pattern databases are freely available for use under the terms of the Creative Commons Attribution-Noncommercial-Share Alike 3.0 (CC by-NC-SA) license.

A community site for sharing pattern databases is reportedly also under construction, but until this becomes a reality, pattern database related discussions and inquiries should go to the general syslog-ng mailing list.

[Oct 17, 2009] MultiTail

MultiTail lets you view one or multiple files like the original tail program. The difference is that it creates multiple windows on your console (with ncurses). It can also monitor wildcards: if another file matching the wildcard has a more recent modification date, it will automatically switch to that file. That way you can, for example, monitor a complete directory of files. Merging of two or more logfiles is possible. It can also use colors while displaying the logfiles (through regular expressions), for faster recognition of what is important and what is not. It can also filter lines (again with regular expressions). It has interactive menus for editing the given regular expressions and for deleting and adding windows. One can also have windows with the output of shell scripts and other software.

When viewing the output of external software, MultiTail can mimic the functionality of tools like 'watch'.
For a complete list of features, look here.

[Oct 17, 2009] Monitoring logs and command output

Aug 25, 2009 | developerWorks

Summary: Monitoring system logs or the status of a command that produces file or directory output are common tasks for systems administrators. Two popular open source tools simplify these activities for modern systems administrators: the multitail and watch commands. Both are terminal-oriented commands, which means that they are easily ported to most UNIX® or UNIX-like systems because they do not depend on any specific graphical desktop environment.

[Jul 20, 2008] Project details for kazimir

Perl-based log analyzer with some interesting capabilities.
Kazimir is a log analyzer. It has a complete configuration file used to describe what kinds of logs (or non-regression tests) are to be watched or spawned and the kinds of regexps to be found in them. Interesting information found in logs may be associated with "events" in a boolean and chronological way. The occurrence of events may be associated with the execution of commands.

Release focus: Initial freshmeat announcement

[Jul 17, 2008] fsheal

Useful Perl-script

FSHeal aims to be a general filesystem tool that can scan and report vital "defective" information about the filesystem, like broken symlinks, forgotten backup files, and left-over object files, but also source files, documentation files, user documents, and so on. It will scan the filesystem without modifying anything, reporting all the data to a logfile specified by the user, which can then be reviewed and actions taken accordingly.

[Jul 16, 2008] Skulker 0.5.1 by Simon Edwards

About: Skulker is a rules-based tool for log and temporary file management. It offers a wide range of facilities to help manage disk space, including compression, deletion, rotation, archiving, and directory reorganization. It provides dry-run facilities to test new rules, as well as detailed space reclaimed reporting.

Changes: The pattern match limit functionality that allows a particular rule to limit the number of files processed in any one invocation had a bug when using negative numbers. This has now been resolved and works as per the documentation.

[Dec 5, 2007] Project details for log4sh

log4sh is a logging framework for shell scripts that works similarly to the other wonderful logging products available from the Apache Software Foundation (e.g., log4j, log4perl). Although not as powerful as the others, it can make the task of adding advanced logging to shell scripts easier, and has much more power than just using simple "echo" commands throughout. In addition, it can be configured from a properties file so that scripts in a production environment do not need to be altered to change the amount of logging they produce.

Release focus: Major feature enhancements

This release finally fleshes out nearly all of the planned features for the 1.3 development series. It will hopefully be the last release in the 1.3 series before moving to the 1.4/1.5 series. In this release, the SyslogAppender is now fully functional, several bugs have been fixed, and there are additional unit tests to verify functionality. There is also a new Advanced Usage section in the documentation.

Kate Ward [contact developer]

[Oct 27, 2007] UNIX System Administration Tools

Centralized Logging with syslog-ng and SEC - October 2007 (PDF, 309 KB)

[Feb 20, 2007] Microsoft Log Parser Toolkit by Gabriele Giuseppini, Mark Burnett, Jeremy Faircloth, Dave Kleiman

Universal query tool to text-based data (log files, XML and CSV files), Event Logs, Registry and Active Directory

Dream Book on Dream Tool, October 3, 2006
Reviewer: Joaquin Menchaca (San José, CA USA) - See all my reviews
This tool is amazing in that it supports a variety of input and output formats, including reading in syslog and outputting into databases or pretty Excel charts. The filtering uses an SQL syntax. The tool comes with a DLL that can be registered, so that scripters (VBScript, Perl, JScript, etc.) can access the power of this tool.

This book not only covers the tool (the alternative being to scrape the network for complex, incomprehensible snippets), but shows real-world practical solutions with the tool, from analyzing web logs to system events, security and network scans, etc.

This tool is just a godsend for analyzing and transforming data in a variety of formats. The book and tool go hand-in-hand, and I highly recommend incorporating this tool (and book) into your tool kit and/or scripting endeavors immediately.

[Feb 20, 2007] NIST Guide to Computer Security Log Management, September 2006 Adobe .pdf (1,909 KB)

[Dec 15, 2006] Interview syslog-ng 2.0 developer Balázs Scheidler by: Robert Fekete

December 13, 2006 -- syslog-ng is an alternative system logging tool, a replacement for the standard Unix syslogd system-event logging application. Featuring reliable logging to remote servers via the TCP network protocol, availability on many platforms and architectures, and high-level message filtering capabilities, syslog-ng is part of several Linux distributions. We discussed the highlights of last month's version 2.0 release with the developer, Balázs Scheidler.

NewsForge: How and why did you start the project?

Balázs Scheidler: Back in 1998 the main Hungarian telecommunication company was looking for someone on a local Linux mailing list to port nsyslog to Linux. nsyslog -- developed by Darren Reed -- was at that time incomplete, somewhat buggy, and available only for BSD. While at university, I had been working for an ISP and often got annoyed with syslogd: it creates too many files, it is difficult to find and move the important information, and so on. Developing a better syslog application was a fitting task for me.

NF: Why is it called syslog-ng?

BS: syslog-ng 1.0 was largely based on nsyslog, but nsyslog did not have a real license. I wanted to release the port under GPL, but Darren permitted this only if I renamed the application.

NF: What kind of support is available for the users?

BS: There is a community FAQ and an active mailing list. If you are stuck with the compiling or the configuration, the mailing list is the best place to find help. My company, BalaBit IT Security, offers commercial support for those who need quick support.

NF: Documentation?

BS: The reference guide is mostly up-to-date, but I hope to improve it someday. I am sure there are several howtos floating around on the Internet.

NF: Who uses syslog-ng?

BS: Everyone who takes logging a bit more seriously. I know about people who use it on single workstations, and about companies that manage the centralized logging of several thousand devices with syslog-ng. We have support contracts even with Fortune 500 companies.

NF: What's new in version 2.0?

BS: 1.6 did not have any big problems, only smaller nuisances. 2.0 was rewritten from scratch to create a better base for future development and to address small issues. For example, the data structures were optimized, greatly reducing the CPU usage. I have received feedback from a large log center that the new version uses 50% less CPU under the same load.

Every log message may include a time zone, and syslog-ng can convert between different time zones if needed.

It can read and forward logfiles. If an application logs to a file, syslog-ng can read this file and transfer the messages to a remote log center.

2.0 supports the IPv6 network protocol, and can also send and receive messages to multicast IP addresses.

It is also possible to include hostnames in the logs without having to use a domain name server. Using a DNS would seriously limit the processing speed in high-traffic environments and requires a network connection. Now you can create a file similar to /etc/hosts that syslog-ng uses to resolve the frequently used IP addresses to hostnames. That makes the logs much easier to read.

syslog-ng 2.0 uses active flow control to prevent message losses. This means that if the output side of syslog-ng is accepting messages slowly, then syslog-ng will wait a bit more between reading messages from the input side. That way the receiver is not flooded with messages it could not process on time, and no messages are lost.

NF: Is syslog-ng available only for Linux, or are other platforms also supported?

BS: It can be compiled for any type of Unix -- it runs on BSD, Solaris, HP-UX, AIX, and probably some others as well. Most bigger Linux distributions have syslog-ng packages: Debian, SUSE, Gentoo.... I think Gentoo installs it by default, replacing syslogd entirely.

NF: What other projects do you work on?

BS: syslog-ng is a hobby for me; that is why it took almost five years to finish version 2.0. My main project is Zorp, an application-level proxy firewall developed by my company. Recently I have been working on an appliance that can transparently proxy and audit the Secure Shell (SSH) protocol.

During development I stumble into many bugs and difficulties, so I have submitted patches to many places, such as glib and the tproxy kernel module.

NF: Are these projects also open source?

BS: No, these are commercial products, but the Zorp firewall does have a GPL version.

NF: Any plans for future syslog-ng features?

BS: I plan to support the syslog protocol that is being developed by the IETF.

I would like to add disk-based buffering, so you could configure syslog-ng to log into a file if the network connection goes down, and transmit the messages from the file when the network becomes available again.

It would be also good to transfer the messages securely via TLS, and to have application-layer acknowledgments on the protocol level.

[Nov 11, 2006] Sisyphus 1.1 is now available at

[Oct 23, 2006] [PDF] Guide to Computer Security Log Management -- NIST publication.

[Oct 22, 2006] NIST 800-92.A Guide to Computer Security Log Management

Mostly fluff but all-in-all not bad for a government publication :-). Compare with Solaris™ Operating Environment Security

[Oct 22, 2006] Five mistakes of log analysis by Anton Chuvakin

October 21, 2004 (Computerworld) -- As the IT market grows, organizations are deploying more security solutions to guard against the ever-widening threat landscape. All those devices are known to generate copious amounts of audit records and alerts, and many organizations are setting up repeatable log collection and analysis processes.
However, when planning and implementing a log collection and analysis infrastructure, organizations often discover that they aren't realizing the full promise of such a system. This happens due to some common log-analysis mistakes.
This article covers the typical mistakes organizations make when analyzing audit logs and other security-related records produced by security infrastructure components.
No. 1: Not looking at the logs
Let's start with an obvious but critical one. While collecting and storing logs is important, it's only a means to an end -- knowing what's going on in your environment and responding to it. Thus, once technology is in place and logs are collected, there needs to be a process of ongoing monitoring and review that hooks into actions and possible escalation.
It's worthwhile to note that some organizations take a half-step in the right direction: They review logs only after a major incident. This gives them the reactive benefit of log analysis but fails to realize the proactive one -- knowing when bad stuff is about to happen.
Looking at logs proactively helps organizations better realize the value of their security infrastructures. For example, many complain that their network intrusion-detection systems (NIDS) don't give them their money's worth. A big reason for that is that such systems often produce false alarms, which leads to decreased reliability of their output and an inability to act on it. Comprehensive correlation of NIDS logs with other records such as firewalls logs and server audit trails as well as vulnerability and network service information about the target allow companies to "make NIDS perform" and gain new detection capabilities.

Some organizations also have to look at log files and audit tracks due to regulatory pressure.
No. 2: Storing logs for too short a time
This mistake makes the security team think they have all the logs needed for monitoring and investigation (while saving money on storage hardware), leading to the horrible realization after an incident that all the logs are gone due to the short retention policy. The incident is often discovered a long time after the crime or abuse has been committed.
If cost is critical, the solution is to split the retention into two parts: short-term online storage and long-term off-line storage.
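A first approximation of that split can be scripted with find: anything not written to in 30 days is compressed onto the cheap archive tier and dropped from online storage. The script below is a demonstration, not a deployment: scratch directories stand in for the real log tree and archive mount, the 30-day cutoff is arbitrary, and the "40 days ago" timestamp uses GNU touch date syntax.

```shell
#!/bin/sh
# Two-tier retention sketch: recent logs stay online, older ones are gzipped
# into an archive directory. All paths and thresholds are illustrative.
ONLINE=$(mktemp -d)
ARCHIVE=$(mktemp -d)

# Simulate one stale log and one current log (GNU touch date syntax).
echo "old entries" > "$ONLINE/web.log"
touch -d "40 days ago" "$ONLINE/web.log"
echo "fresh entries" > "$ONLINE/auth.log"

# Anything untouched for more than 30 days moves to the archive, compressed.
find "$ONLINE" -type f -name '*.log' -mtime +30 | while read -r f; do
    gzip -c "$f" > "$ARCHIVE/$(basename "$f").gz" && rm -- "$f"
done

ls "$ONLINE" "$ARCHIVE"
```

A production version would run from cron against the real directories and would also expire the archive tier itself after the mandated long-term retention period.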

... ... ...

Log monitoring-analysis

OSSEC v2.7.0 documentation

Inside OSSEC we call log analysis LIDS, or log-based intrusion detection. The goal is to detect attacks, misuse, or system errors using the logs.

LIDS -- log-based intrusion detection, or security log analysis -- is the process or technique of detecting attacks on a specific network, system, or application using logs as the primary source of information. It is also very useful for detecting software misuse, policy violations, and other forms of inappropriate activity.

Handbook of Research on Web Log Analysis

Copyright © 2009 by IGI Global.

Help - IBM WebSphere Help System / Adapter configuration file samples

Adapters created using the Generic Log Adapter framework can be used for building log parsers for the Log and Trace Analyzer. The following adapter configuration files are provided as examples for creating rules-based adapters and static adapters.

Log file type Adapter sample available Directory
AIX errpt log V4.3.3(rules), V5.1.0(rules), V5.2.0(rules) <base_dir>\AIX\errpt\v4
AIX syslog regex.adapter <base_dir>\AIX\syslog\v4
Apache HTTP Server access log regex.adapter, static.adapter <base_dir>\Apache\access\v1.3.26
Apache HTTP Server error log regex.adapter <base_dir>\Apache\error\v1.3.26
Common Base Event XML log regex.adapter <base_dir>\XML\CommonBaseEvent\v1.0.1
ESS (Shark) Problem log regex.adapter <base_dir>\SAN\ESS(Shark)\ProblemLog
IBM DB2 Express diagnostic log regex.adapter <base_dir>\DB2\diag\tool
IBM DB2 Universal Database Cli Trace log static.adapter <base_dir>\DB2\cli_trace\V7.2,8.1
IBM DB2 Universal Database diagnostic log regex.adapter(v8.1), static.adapter(v8.1), regex.adapter(v8.2) <base_dir>\DB2\diag\v8.1, <base_dir>\DB2\diag\v8.2
IBM DB2 Universal Database JDBC trace log static.adapter <base_dir>\DB2\jcc\v8.1
IBM DB2 Universal Database SVC Dump on z/OS static.adapter <base_dir>\DB2\zOS\SVCDump
IBM DB2 Universal Database Trace log static.adapter <base_dir>\DB2\trace\V7.2,8.1
IBM HTTP Server access log regex.adapter, static.adapter <base_dir>\IHS\access\v1.3.19.3
IBM HTTP Server error log regex.adapter, static.adapter <base_dir>\IHS\error\v1.3.19.3
IBM WebSphere Application Server activity log static.adapter <base_dir>\WAS\activity\v4
IBM WebSphere Application Server activity log regex.adapter, regex_example.adapter, regex_showlog.adapter, regex_showlog_example.adapter, static.adapter <base_dir>\WAS\activity\v5
IBM WebSphere Application Server error log for z/OS static.adapter <base_dir>\WAS\zOSerror\v4
IBM WebSphere Application Server plugin log regex.adapter, regex_example.adapter <base_dir>\WAS\plugin\v4,5
IBM WebSphere Application Server trace log static.adapter <base_dir>\WAS\trace\v4, <base_dir>\WAS\trace\v5, <base_dir>\WAS\trace\v6
IBM WebSphere Commerce Server ecmsg, stdout, stderr log regex.adapter, regex_example.adapter <base_dir>\WCS\ecmsg\v5.4, <base_dir>\WCS\ecmsg\v5.5
IBM WebSphere Edge Server log static.adapter <base_dir>\WES\v1.0
IBM WebSphere InterChange Server log static.adapter <base_dir>\WICS\server\4.2.x
IBM WebSphere MQ error log static.adapter <base_dir>\WAS\MQ\error\v5.2
IBM WebSphere MQ FDC log regex.adapter, regex_example.adapter <base_dir>\WAS\MQ\FDC\v5.2,5.3
IBM WebSphere MQ for z/OS Job log static.adapter <base_dir>\WAS\MQ\zOS\v5.3
IBM WebSphere Portal Server appserver_err log static.adapter <base_dir>\WPS\appservererr\v4
IBM WebSphere Portal Server appserverout log regex.adapter, regex_example.adapter <base_dir>\WPS\appserverout\v4,5
IBM WebSphere Portal Server run-time information log regex.adapter, regex_example.adapter <base_dir>\WPS\runtimeinfo\V5.0
IBM WebSphere Portal Server systemerr log static.adapter <base_dir>\WPS\systemerr
Logging Utilities XML log static.adapter <base_dir>\XML\log\v1.0
Microsoft Windows Application log regex.adapter, regex_example.adapter <base_dir>\Windows\application
Microsoft Windows Security log regex.adapter, regex_example.adapter <base_dir>\Windows\security
Microsoft Windows System log regex.adapter, regex_example.adapter <base_dir>\Windows\system
Rational TestManager log static.adapter <base_dir>\Rational\TestManager\2003.06.00
RedHat syslog regex.adapter, regex_example.adapter <base_dir>\Linux\RedHat\syslog\v7.1,8.0
SAN File system log static.adapter <base_dir>\SAN\FS
SAN Volume Controller error log regex.adapter <base_dir>\SAN\VC\error
SAP system log static.adapter, example.log <base_dir>\SAP\system
Squadrons-S-HMC log static.adapter <base_dir>\SAN\Squadrons-S\HMC
SunOS syslog regex.adapter, regex_example.adapter <base_dir>\SunOS\syslog\v5.8
SunOS vold log regex.adapter, regex_example.adapter <base_dir>\SunOS\vold\v5.8
z/OS GTF Trace log static.adapter <base_dir>\zOS\GTFTrace
z/OS Job log static.adapter <base_dir>\zOS\joblog\v1.4
z/OS logrec static.adapter <base_dir>\zOS\logrec\v1.4
z/OS master trace log static.adapter <base_dir>\zOS\mtrace\v1.4
z/OS System log static.adapter <base_dir>\zOS\syslog\v1.4
z/OS System trace log static.adapter <base_dir>\zOS\systemtrace

<base_dir> is the directory where the Agent Controller is installed:


Related concepts
Overview of the Log and Trace Analyzer

Related tasks
Creating a log parser for the Log and Trace Analyzer

Related references
Supported log file types

Monitoring with Simple Event Correlator Page 1

Frequently, it is useful for security professionals, network administrators and end users alike to monitor the logs that various programs in the system write for specific events -- for instance, recurring login failures that might indicate a brute-force attack. Doing this manually would be a daunting, if not infeasible, task. A tool to automate log monitoring and event correlation can prove to be invaluable in sifting through continuously-generated logs.

The Simple Event Correlator (SEC) is a Perl script that implements an event correlator. You can use it to scan through log files of any type and pick out events that you want to report on. Tools like logwatch can do much the same thing, but what sets SEC apart is its ability to generate and store contexts. A context is an arbitrary set of things that describe a particular event. Since SEC is able to essentially remember (and even forget) these contexts, the level of noise generated is remarkably low, and even a large amount of input can be handled by a relatively small number of rules.

Looking for root login attempts

For instance, let's start with something basic, like looking for direct ssh root logins to a machine (security best practice is to disallow such logins entirely, but let's ignore that for the sake of this example):

   Feb  1 11:54:48 sshd[20994]: [ID 800047] Accepted publickey for root 
     from port 33890 ssh2

Ok, so we can create an SEC configuration file (let's call it root.conf) that contains the following:

   type=Single
   ptype=RegExp
   pattern=(^.+\d+ \d+:\d+:\d+) (\d+\.\d+\.\d+\.\d+) sshd\[\d+\]: \[.+\] Accepted (.+) for root \
     from (\d+\.\d+\.\d+\.\d+)
   desc=direct ssh root login on $2 from $4 (via $3) @ $1
   action=add root-ssh_$2 $0; report root-ssh_$2 /usr/bin/mail -s "Direct root login on $2 from $4"

This is an example of a rule in SEC. The first line gives the rule type, in this case "Single", which tells SEC that we just want to deal with single instances of this event. The second line, ptype, tells SEC how we want to search for patterns. Here we've chosen "RegExp", which says to use Perl's powerful regular expression engine. We could choose other match types instead, such as substring matches, tell the rule to utilize a Perl function or module, or have it look at the contents of a variable we set.

The next line in this rule, the pattern in this case, is a big regular expression (regex) that would match on log entries where someone is logging in directly as root. We've grouped the timestamp, the IPs for both the source and destination and the method used to login for us to use later in an email. (If you're familiar with Perl, you can see SEC uses a similar regex grouping.)

The next line is the description of this rule. The final line is the action we intend to take. In this case, we add the entire log entry to a context called root-ssh_$2, where $2 expands to the IP address of the machine being logged into. Finally, the rule sends out mail with the contents of the context, which includes the matching log entry.

To run this thing we do:

   sec -detach -conf=root.conf -input=/var/log/messages

It will start up and begin looking for direct root logins in the background. We can tell SEC to watch multiple files (using Perl's glob() function):

   sec -detach -conf=root.conf -input=/var/log/incoming/logins*

Say this rule chugs away and sends you e-mail every morning at 5am when your cron job from some machine logs into another machine (as root!) to run backups. You don't want to get email every morning, so we can suppress those using the aptly named suppress rule type. To do that, we insert the following rule above our existing "look for root logins" rule:

   type=Suppress
   ptype=RegExp
   pattern=^.+\d+ \d+:\d+:\d+ \d+\.\d+\.\d+\.\d+ sshd\[\d+\]: \[.+\] Accepted .+ for root from

Then we can send SIGABRT to the sec process we started previously:

   kill -SIGABRT `ps ax | grep sec | grep root.conf | awk '{print $1}'`

which will tell that SEC process to reread its configuration file and continue.

Looking for brute force attacks

Now let's look at using SEC to watch for a brute force attack via ssh:

   # create the context on the initial triggering cluster of events
   type=SingleWithThreshold
   ptype=RegExp
   pattern=(^.+\d+ \d+:\d+:\d+) (\d+\.\d+\.\d+\.\d+) sshd\[\d+\]: \[.+\] Failed (.+) \
     for (.*?) from (\d+\.\d+\.\d+\.\d+)
   context=!SSH_BRUTE_FROM_$5
   desc=Possible brute force attack (ssh) user $4 on $2 from $5
   window=60
   thresh=5
   action=create SSH_BRUTE_FROM_$5 60 (report SSH_BRUTE_FROM_$5 /usr/bin/mail -s \
     "ssh brute force attack on $2 from $5"); add SSH_BRUTE_FROM_$5 \
     5 failed ssh attempts within 60 seconds detected; add SSH_BRUTE_FROM_$5 $0
   # add subsequent events to the context
   type=Single
   ptype=RegExp
   pattern=(^.+\d+ \d+:\d+:\d+) (\d+\.\d+\.\d+\.\d+) sshd\[\d+\]: \[.+\] Failed (.+) \
     for (.*?) from (\d+\.\d+\.\d+\.\d+)
   context=SSH_BRUTE_FROM_$5
   desc=Possible brute force attack (ssh) user $4 on $2 from $5
   action=add SSH_BRUTE_FROM_$5 "Additional event: $0"; set SSH_BRUTE_FROM_$5 30

This actually specifies two rules. The first uses another rule type, SingleWithThreshold, which adds two more options to the Single rule we used above: window and thresh. window is the time span the rule looks across, and thresh is the number of matching events that must appear within that window to trigger the action. We're also using the context option with a negated context, which tells this rule to fire only if the context does not already exist. The rule triggers if it matches 5 failed login events within 60 seconds. Its action line creates the context ($5 representing the IP of the attacker), which expires in 60 seconds; upon expiration it sends out an e-mail with a description and the matching log entries. The second rule adds subsequent events to the context and resets its lifetime to another 30 seconds, as long as the context already exists; otherwise it does nothing.
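The window/thresh machinery boils down to counting event timestamps inside a sliding time window. A rough sketch of the idea (Python, for illustration only; this is not SEC's actual code):

```python
WINDOW = 60  # seconds, matching the rule above
THRESH = 5   # failed logins needed to trigger

def threshold_reached(timestamps, now, window=WINDOW, thresh=THRESH):
    """True once `thresh` events fall within the last `window` seconds."""
    recent = [t for t in timestamps if now - t < window]
    return len(recent) >= thresh

# Five failures in a burst trigger; the same five spread out do not.
burst = [100, 101, 102, 103, 104]
spread = [0, 30, 60, 90, 120]
print(threshold_reached(burst, now=105))   # True
print(threshold_reached(spread, now=121))  # False
```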

The flexibility of SEC

The dynamic creation and handling of contexts lies at the heart of SEC's power, and it is what sets SEC apart from other "log watcher" style programs.

For example, a printer with a paper jam may emit incessant log messages until someone walks over to deal with it. If a log watcher were set to send an e-mail every time it matched the paper-jam message, that is a lot of e-mail, most of which will simply get deleted; it would be even worse if the mail went to a pager. SEC can instead create a context stating "I have seen a paper jam event and have already sent out a page," which the rule can check in the future to suppress further e-mails while the context exists.

Another good example, included with the SEC distribution, is a simple horizontal portscan detector that raises an alarm if 10 hosts are scanned within 60 seconds; this has traditionally been a difficult thing to detect well.

John P. Rouillard has written an extensive paper demonstrating much of the power of SEC's contexts; we highly recommend reading it for the gory details of log monitoring in general and SEC in particular.

In addition to contexts, SEC also includes some handy rule types beyond what we've shown so far (from the sec manual page):

SingleWithScript - match input event and depending on the exit value of an external script, execute an action.

SingleWithSuppress - match input event and execute an action immediately, but ignore following matching events for the next t seconds.

Pair - match input event, execute an action immediately, and ignore following matching events until some other input event arrives. On the arrival of the second event execute another action.

PairWithWindow - match input event and wait for t seconds for other input event to arrive. If that event is not observed within a given time window, execute an action. If the event arrives on time, execute another action.

SingleWith2Thresholds - count matching input events during t1 seconds and if a given threshold is exceeded, execute an action. Then start the counting of matching events again and if their number per t2 seconds drops below the second threshold, execute another action.

Calendar - execute an action at specific times.

The Calendar rule type, for instance, allows us to look for the absence of a particular event (e.g. a nightly backup being kicked off). Or you can use it to create particular contexts, like this example from the SEC man page:

   type=Calendar
   time=0 23 * * *
   # %s in the action expands to the desc field; the context name is illustrative
   desc=NightContext
   action=create %s 32400

This way, you can have your other rules check to see if this context is active and take different actions at night versus during the day.

More examples

Let's say we want to analyze Oracle database TNS-listener logs. Specifically, we want to find people logging into the database as one of the superuser accounts (SYSTEM, SYS, etc), which is a Bad Thing (tm):

   24-FEB-2005 00:26:52 * (CONNECT_DATA=(SID=fprd)
     * (ADDRESS=(PROTOCOL=tcp)(HOST= * establish * fprd * 0

In my environment, we chop up the listener logs every day and run the following rules on each day's log:

   type=Single
   ptype=RegExp
   pattern=^(\d{2}-\p{IsAlpha}{3}-\d{4} \d{1,2}:\d{1,2}:\d{1,2}).*CID=\((.*)\)\(HOST=(.*)\)\
   desc=$4 login on $5 @ $1 from $3 ($2)
   action=add $4_login $0; create FOUND_VIOLATIONS

   type=Single
   ptype=SubStr
   pattern=SEC_SHUTDOWN
   context=SEC_INTERNAL_EVENT && FOUND_VIOLATIONS
   desc=Write all contexts to stdout
   action=eval %o ( use Mail::Mailer; my $mailer = new Mail::Mailer; \
   $mailer->open({ From => "root\@syslog", \
				To => "admin", \
				Subject => "SYSTEM Logins Found",}) or die "Can't open: $!\n";\
   while($context = each(%main::context_list)) { \
	print $mailer "Context name: $context\n"; \
	print $mailer '-' x 60, "\n"; \
	foreach $line (@{$main::context_list{$context}->{"Buffer"}}) { \
	print $mailer $line, "\n"; \
	} \
	print $mailer '=' x 60, "\n"; \
   } \
   )

We run this configuration using the following Perl script, which picks out the previous day's logfile to parse:

   use strict;
   use Date::Manip;

   my $filedate = ParseDate("yesterday");
   my $fileprefix = UnixDate($filedate, "%Y-%m-%d");
   my $logdir = "/var/log/oracle-listener";
   opendir(LOGDIR, $logdir) or die "Cannot open $logdir! $!\n";
   my @todaysfiles = grep /$fileprefix/, readdir LOGDIR;
   closedir LOGDIR;
   if (scalar(@todaysfiles) > 1) { print "More than one file matches for today\n"; }
   foreach (@todaysfiles) {
       my $secout = `sec -conf=/home/tmurase/sec/oracle.conf -intevents -cleantime=300 -input=$logdir/$_ -fromstart -notail`;
       print $secout, "\n";
   }
The Perl script invokes SEC with the -intevents flag, which generates internal events that we can catch with SEC rules. In this case, we want to catch the internal event SEC generates when it shuts down after it finishes parsing the file. Another option, -cleantime=300, gives us 5 minutes of grace time before the SEC process terminates.

Here the first rule simply adds events to an automatically named context, much as we did above, and creates the context FOUND_VIOLATIONS as a flag for the next rule to evaluate. The second rule checks for the existence of FOUND_VIOLATIONS and of the SEC_INTERNAL_EVENT context, which is raised during the shutdown sequence, and looks for the SEC_SHUTDOWN event to come across the input using a simple substring pattern. (This technique of dumping out all contexts before shutdown is pulled from SEC FAQ 3.23.)

As you can see, the action line of the second rule has a lot going on. What we're doing is calling a small Perl script from within SEC that will generate an email with all of the database access violations the first rule collected.

Another thing we often wish to monitor closely is the nightly backup. Namely, we want to make sure it has actually started, and that it actually managed to finish.

Say that a successful run looks like this in the logs:

   Apr  9 00:01:10  localhost /USR/SBIN/CRON[15882]: (root) CMD ( /root/bin/ / )

time passes...

   Apr  9 03:14:15  localhost[15883]: finished successfully

An unsuccessful run would be, for our purposes, the absence of these two log entries. We can kick off a Calendar rule to set a context that indicates we are waiting for the first log entry to show up:

   type=Calendar
   time=55 23 * * *
   desc=Wait4Backup
   action=create %s 3600 shellcmd /usr/local/scripts/

Here we create the context "Wait4Backup" at 23:55 each night and set it to expire 3600 seconds later, whereupon it executes a shell script that will presumably do some cleanup actions and notifications. The time parameter for the Calendar rule uses a crontab-esque format, with ranges and lists of numbers allowed.

We'll want to delete the Wait4Backup context and create a new context when the log entry for the start of the backup shows up:

   type=Single
   ptype=RegExp
   pattern=(^.+\d+ \d+:\d+:\d+) (.+?) .*CRON\[(.+?)\]: \(root\) CMD \( /root/bin/ / \)
   desc=Nightly backup on $2 starting at $1, pid $3
   action=delete Wait4Backup; create BackupRun_$2 18000 shellcmd /usr/local/scripts/

With this rule, we've created a five-hour window in which the backup should finish before this new context expires and reports a failure.

Now for the last part: what to do when the backup finishes.

   type=Single
   ptype=RegExp
   pattern=(^.+\d+ \d+:\d+:\d+) (.+?) .*\[(.+?)\]: finished successfully
   desc=Nightly backup on $2 finished
   action=delete BackupRun_$2; shellcmd /usr/local/scripts/
   type=Single
   ptype=RegExp
   pattern=(^.+\d+ \d+:\d+:\d+) (.+?) .*\[(.+?)\]: error: (.*)
   desc=Nightly backup on $2 failed: $4
   action=delete BackupRun_$2; shellcmd /usr/local/scripts/ $4

The first rule takes care of what to do when it does finish successfully. The latter takes care of what happens when the backup script has errors. With these four rules, we have SEC covering the various possible states of our simple backup script, catching even the absence of the script starting on time.

Go forth and watch logs!

SEC is a powerful tool that builds from simple statements a kind of application and log monitoring that rivals commercial tools such as Tivoli or HP OpenView. It does not have a GUI frontend or convenient reports, however, so a little more time must be spent on generating and formatting the output of SEC's information. For those looking for more examples, a new rules collection has been started up at

Event correlation and data mining for event logs


[Mar 16, 2006] Slashdot Host Integrity Monitoring Using Osiris and Samhain

it can be as simple as looking at logging output" by GringoGoiano (176551) on Monday August 22, @08:31PM (#13376174)

Looking at logging output in an enterprise environment can be very difficult. To make this really useful you need to aggregate information in a central repository, from all the different servers and apps running on many machines. For true heavy-duty log analysis you need to resort to tools such as SenSage's log storage/analysis tool.

Any other tool will choke on the volume of information you'll be chugging through in an enterprise environment, unless you pay for a multi-million-dollar Oracle deployment.

A Linux-based product used by Blue Cross/Blue Shield, Yahoo, Lehman Brothers, etc. For true enterprise security you need something like this.

[Dec 12, 2005] A very interesting open source implementation of a syslog daemon for Windows by Alexander Yaworsky. Quality coding.

This is another syslog for Windows; it includes a daemon and a client. Features:

Unix Log Analysis Program

Logalizer is a Perl script that will analyze log files and email you the results.

One of the most important things a system administrator should do is examine his system log files. However, most system administrators, myself included, have neither the time nor the inclination to do so. After setting up appropriate site filters (a bit of work), logalizer will automatically analyze log files and send the reports to you by email so that you can more easily detect attempted break-ins or problems with your systems.

This program is very easy to install. Customization to minimize the amount of noise is done by writing regular expressions to include and exclude parts of the log files that are interesting. Sample filter files are provided, but they will need to be tuned for your site.

[Apr 15, 2003] Bhavin Vaidya bhavin at

Thanks to ...

devnull at
Rob Rankin
Carsten Hey
Tom Yates

devnull at and Tom Yates came up with the most to-the-point solution for me.

I have added the following line to my /etc/syslog.conf file and then touched /var/log/sulog
----                                               /var/log/sulog
#                     ^^^^ this white space is TABs, not spaces
and then,

/etc/init.d/syslog restart

We wanted to do this because the following line was creating too large a file, and we also write
our logs to a syslog server.
*.info;mail.none;authpriv.none;cron.none                /var/log/messages
So we changed the above line to the following (which made us lose the su log info):
*.notice;mail.none;authpriv.none;cron.none              /var/log/messages

Sorry for the late summary posting; I was sick at home since Friday, after making the
changes on Thursday. Hope this helps all.

Regards, Bhavin

Bhavin Vaidya wrote:

> Hello,
> We would like to customise the sulog activity across all (Solaris, HP-UX, AIX and
> Red Hat) OSs.
> Red Hat logs the login and sulogin activity under /var/log/messages file.
> We would like to customise the sulog written to it's own log file say /var/log/sulog,
> like it does on rest of the other OSs.  I have tried looking at /etc/pam.d/su but
> didn't find any config statement there.
> Will appreciate it if anyone can let me know how I can achieve this.
> BTW, I'm the Solaris, HP-UX and AIX tech and very new to Red Hat Linux.
> Thanks in advance and with regards,
> Bhavin

Re Solaris log files

I vote for /var/log/cron.19990220, /var/log/ftp.19990220, /var/log/authlog.199902, etc. Do you have so many logs online that they need more than one flat directory? Then go one more level down, but not 4. Also, putting the timestamp in the filename makes restores and greps of the files less confusing.

But I think the problem is even bigger than that.

Some log files grow VERY RAPIDLY -- many megabytes per day. Some grow very slowly. authlog comes to mind. It's best to keep individual log files under a certain size. 1MB is great. 10MB is OK. 50MB is getting kinda big.

But with these different growth rates, the tendency is to age some of them daily, others weekly, others yearly(!).

Then there's the annoying ones like wtmp that are binary.

And let's not forget that some processes need to be restarted after a logfile move, while others don't.

And some programs follow the paradigm "my logfile must exist and be writable by me or else I will silently log nothing".

I've always considered writing some tool that would allow you to manage and age all your log files from one config file. Maybe the config file would be a table that lists the base logfile name, the interval at which it gets aged, the number of logs or amount of space to keep online before deleting them, etc.

Anybody know of any such program? It might be too much work for too little gain.
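For what it's worth, the table-driven core of such a tool fits in a handful of lines. A toy sketch (Python; the file names and `keep` policy are made up for illustration, and a real tool would also handle the daemon-restart and "logfile must exist" caveats above):

```python
import os

def rotate(base, keep):
    """Age base -> base.1 -> base.2 ..., keeping at most `keep` old copies."""
    oldest = f"{base}.{keep}"
    if os.path.exists(oldest):
        os.remove(oldest)                      # drop the oldest copy
    for i in range(keep - 1, 0, -1):           # shift base.i -> base.(i+1)
        if os.path.exists(f"{base}.{i}"):
            os.rename(f"{base}.{i}", f"{base}.{i + 1}")
    if os.path.exists(base):
        os.rename(base, f"{base}.1")
        open(base, "w").close()                # recreate: some daemons want it
```

A config file would then just map each base name to its `keep` count and aging interval, with one `rotate()` call per entry.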

The ultimate would be an ADAPTIVE process that keeps fewer old logs online if space is getting tight, etc. Personally I think an adaptive news expire program would be nice, too.

I'll get right on these, as soon as I get this other stuff done for my boss... :-)

Todd Williams Manager, Computer and Communication Systems
MacNeal-Schwendler Corp. ("MSC"), 815 Colorado Blvd., Los Angeles, CA 90041 (323)259-4973
geek n. : a carnival performer often billed as a wild man whose act usu.
includes biting the head off a live chicken or snake -Webster's New Collegiate

CERT/Understanding system log files on a Solaris

Solaris systems use the /var directory to store logs and other local files so that the operating system can support other directories being mounted as read only, sometimes from file servers elsewhere on the network. The /var directory is thus often on a partition that is local to the system.

All of the log files described below can be found in subdirectories under /var. There may be other application-specific log files that you will also need to inspect. However, it is beyond the scope of this implementation to describe all of the log files that you might want to inspect for your specific Solaris installation.

Because log files often provide the only indication of an intrusion, intruders often attempt to erase any evidence of their activities by removing or modifying the log files. For this reason, it is very important that your log files be adequately protected to make it as difficult as possible for intruders to change or remove them. See the practice "Managing logging and other data collection mechanisms" for more information on this topic.

[PDF] Solaris™ Operating Environment Security

Log Files

Log files are used by the system and applications to record actions, errors, warnings, and problems. They are often quite useful for investigating system quirks, for discovering the root causes of tricky problems, and for watching attackers. There are typically two types of log files in the Solaris Operating Environment: system log files, which are typically managed by the syslog daemon, and application logs, which are created by the application.


Log Files Managed by syslog

The syslog daemon receives log messages from several sources and directs them to the appropriate location based on the configured facility and priority. There is a programmer interface, syslog(), and a system command, logger, for creating log messages. The facility (or application type) and the priority are configured in the /etc/syslog.conf file to direct the log messages. The destination can be a log file, a network host, specific users, or all users logged into the system. By default, the Solaris Operating Environment defines two log files in the /etc/syslog.conf file. The /var/adm/messages log file contains a majority of the system messages. The /var/log/syslog file contains mail system messages. A third log file is defined but commented out by default; it logs important authentication messages to the /var/log/authlog file. Uncomment the following line in /etc/syslog.conf to enable logging these messages:

#auth.notice ifdef(`LOGHOST', /var/log/authlog, @loghost)

Save the file and use the following command to force syslogd to re-read its configuration file:

# kill -HUP `cat /etc/`

All of these files should be examined regularly for errors, warnings, and signs of an attack. This task can be automated by using log analysis tools or a simple grep command.

Application Log Files

Application log files are created and maintained by commands and tools without using the syslog system. The Solaris Operating Environment includes several commands that maintain their own log files. Here is a list of some of the Solaris Operating Environment log files:

/var/adm/sulog messages from /usr/bin/su

/var/adm/vold.log messages from /usr/sbin/vold

/var/adm/wtmpx user information from /usr/bin/login

/var/cron/log messages from /usr/sbin/cron

The /var/adm/wtmpx file should be viewed with the last command.

The /var/adm/loginlog file does not exist in the default Solaris Operating Environment installation, but it should be created. If this file exists, the login program records failed login attempts in it. All of these logs should also be monitored for problems.

Recommended Links

Softpanorama hot topic of the month

Softpanorama Recommended

Top articles


Open Directory - Computers: Software: Internet: Site Management ...

**** NIST 800-92. A Guide to Computer Security Log Management. Not very well written but still useful publication from NIST.

loganalysis - system log analysis, infrastructure, and auditing -- mail list

LogAnalysis.Org System Logging

Unix Log Analysis

Log Analysis Basics

Basic Steps in Forensic Analysis of Unix Systems

Aggregate Logs from Remote Sites

Central Loghost Mini-HOWTO Syslog related links


NIST Guide to Computer Security Log Management, September 2006 Adobe .pdf (1,909 KB)

Man Pages

Random Findings

/var/log/utmp; /var/log/utmpx

These logs keep track of users currently logged into the system. Using the who command, check the users logged in at the current time:

<userid> pts/1 Mar 31 08:40 (origination hostname)

Look for user logins that are unexpected (e.g., for staff on vacation), occur at unusual times during the day, or originate from unusual locations.

/var/log/wtmp; /var/log/wtmpx

These logs keep track of logins and logouts. Using the last command, do the following:

Look for user logins occurring at unusual times.

<userid> pts/4 <hostname> Sat Mar 22 03:14 - 06:02 (02:47)

Look for user logins originating from unusual places (locations, addresses, and devices).

<userid> pts/12 <strange hostname> Fri Mar 21 08:59 - 13:30 (04:31)

Look for unusual reboots of the system.

reboot system boot Sun Mar 23 05:36


/var/log/syslog

By default, the syslog file will contain only messages from mail (as defined in the /etc/syslog.conf file). Look for anything that looks unusual.


/var/adm/messages

This log records system console output and syslog messages. Look for unexpected system halts.

Look for unexpected system reboots.

Mar 31 12:48:41 unix: rebooting...

Look for failed su and login commands.

Mar 30 09:14:00 <hostname> login: 4 LOGIN FAILURES ON 0, <userid>

Mar 31 12:37:43 <hostname> su: 'su root' failed for <userid> on /dev/pts/??

Look for unexpected successful su commands.

Mar 28 14:31:11 <hostname> su: 'su root' succeeded for <userid> on /dev/console


/var/adm/pacct

This log records the commands run by all users. Process accounting must be turned on before this file is generated. You may want to use the lastcomm command to audit commands run by a specific user during a specified time period.

compile <userid> ttyp1 0.35 secs Mon Mar 31 12:59


/var/adm/aculog

This log keeps track of dial-out modems. Look for records of dialing out that conflict with your policy for use of dial-out modems. Also look for unauthorized use of the dial-out modems.

Other log files

Solaris includes the Basic Security Module (BSM), but it is not turned on by default. If you have configured this service, review all the files and reports associated with BSM for the various kinds of entries that have been described in the practice "Inspect your system and network logs."

If your site has large networks of systems with many log files to inspect, consider using tools that collect and collate log file information. As you learn what is normal and abnormal for your site, integrate that knowledge into your specific procedures for inspecting log files.


FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.

Copyright © 1996-2016 by Dr. Nikolai Bezroukov. The site was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.

The site uses AdSense, so you need to be aware of the Google privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.


This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to make a contribution, supporting development of this site and speeding up access.


The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

Last modified: September 12, 2017