
Best Unix Security Papers


There are two major types of security papers that I am interested in:

Of course, one of the first things to do when immersing yourself in a new field is to read the classic papers. I value those authors who are not only security researchers but gifted programmers, and among recent authors David Curry and Wietse Venema are my favorites. The latter is also the author of a really excellent MDA (Postscript). He also wrote several important papers, including Murphy's law and computer security.

As for more practical papers, I would recommend paying attention to the authors of the Sun Security Blueprints. For example, Alex Noordergraaf wrote one very important paper, Solaris Operating Environment Minimization for Security: A Simple, Reproducible and Secure Application Installation Methodology. There are some other pretty decent Sun blueprint papers that are also well worth reading. After reading the excellent paper Build A Honeypot by Lance Spitzner, I now think that he is a rising star in computer security journalism. That's a really meteoric rise that I would never expect from somebody "who used to blow up things of a different nature" ;-). Alex Noordergraaf wrote several other less important but still interesting papers that are also well worth reading.

See also - docs -- a good collection of documents. Among vendors, RAPTOR probably has one of the best Web security libraries.

Usenix has now opened its archives, which can be a good source of papers on security. For example, here is one interesting paper: Infrastructure: A Prerequisite for Effective Security by Bill Fithen, Steve Kalinowski, Jeff Carpenter, and Jed Pickel of the CERT Coordination Center.

The authors started their presentation with some scary data compiled by CERT. A 1997 survey shows that 50% of systems were not kept up to date with security patches after they were compromised. One site appeared in 35 incidents between 1997 and 1998; the site was used for password sniffing and probing of other sites in many of those cases. Ten of the 35 incidents involved root compromise of the host. In another break-in, 20-25 hosts were compromised. All of these systems needed to be rebuilt, but the site's administrator said that they didn't have enough resources to do so. The authors set out to improve infrastructure manageability at CERT by creating an easily maintained system of distributing software packages. The result is SAFARI, a centralized repository of 900 collections of software for multiple versions of UNIX. Using SAFARI, a sysadmin can build new systems from scratch and update existing systems with patches and new packages. SAFARI includes flexible version controls so that developers and admins can easily post and retrieve software packages from the same central repository.

Usenix membership is not required to access the full text of papers that are more than a year old.

Dr. Nikolai Bezroukov

Note: Links to the sources are not necessarily current (and keeping them current is not the goal of this page -- the selection of the most important papers is), but you can always use Google or another search engine to find a location of the text on the Web. In such cases I would appreciate information about the broken link and the new URL for the paper.



Old News ;-)

Breathing new life into old news
"The more things change, the more they stay the same."

[Feb 26, 2000] - Hack Attacks Inherent To E-business Experts [News]

[Jan 27, 2000] DDJ X.509 CERTIFICATES by Paul Tremblett

Paul unravels X.509 certificates, one of the most popular computer security standards specifying the contents of digital certificates, by showing how you can decode and display them in a readable form. Additional resources include x509.txt (listings) and (source code).
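The decoding Paul walks through by hand can also be delegated to the OpenSSL command-line tool. A minimal sketch, assuming the openssl binary is installed (the file names and the CN value are arbitrary): generate a throwaway self-signed certificate, then print its fields in readable form.

```shell
# Generate a disposable self-signed certificate (key + cert in /tmp),
# non-interactively, just to have something to decode.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example.test" \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null

# Decode the X.509 structure: subject, issuer, and validity period.
openssl x509 -in /tmp/demo-cert.pem -noout -subject -issuer -dates
```

For a certificate served by a real host, `openssl s_client -connect host:443` can fetch the chain for the same kind of inspection.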

Sys Admin The Best Guides for Managing Information Security by Kerry Thompson

There are many resources available on the Internet to help with managing IT security -- far too many for the newcomer to be able to sort out the valuable ones from the useless ones. In this article, I'll present a number of very useful documents designed to help in managing enterprise security in a practical manner. I will review some of the most common documents that I've used to help IT organizations evaluate their security and provide them with assistance on what to do to maintain security. Rather than referring to the many, many books available or to voluminous and boring standards documents, I'll present freely available and easily understood documents that can be easily adapted and applied to most IT organizations.

Why do systems administrators need to use guides, practices, and checklists? The answer is simple -- admins can't possibly be experts in all areas of IT security that must be managed by modern enterprises. Even a small company with one or two servers, an Internet connection, and 20 or so workstations requires a lot of work to evaluate fully for security. So, we need guides, written practices, and checklists to provide guidance on how to maintain security and to make sure that we cover all the details.

Specifically in this article, I'll review the Open Source Security Testing Methodology Manual (OSSTMM), a number of NIST Special Publications, some of the DISA guides and checklists, the Standard of Good Practice (SoGP), and the ISO17799 standard. These are all freely available (except for ISO17799) and will greatly ease the task of evaluating and maintaining enterprise security.

The Open Source Security Testing Methodology Manual (OSSTMM)

The Open Source Security Testing Methodology Manual is a guide for evaluating how secure systems are. It contains detailed instructions on how to test systems in a methodological way, and how to evaluate and report on the results.

The OSSTMM consists of six sections:

It also includes a number of templates intended for use during the testing process to capture the information gathered.

The OSSTMM is a great resource for systems administrators who want to evaluate the security of a wide range of systems in an ordered and detailed way. It contains instructions on testing systems but few details on how to protect systems.

NIST Special Publications

The Information Technology Laboratory of the National Institute of Standards and Technology (NIST) publishes a number of guides and handbooks under the Special Publications program. Some of these are quite high-level, covering areas of management, policy, and governance. But many include details that are perfect for systems administrators and operations people. The following is an overview of some of the available guides -- check the NIST Web site for the full list of currently available guides.

The great thing about the NIST documents and checklists is that they are not copyrighted. That's right; you can copy and modify these as much as you want without fear of reprisals. You can modify these checklists to suit your own requirements, for example, to develop your own checklist for new servers going into production or to define your own security auditing process. You can even adapt these guides to become your new security policy.

NIST SP800-100 Information Security Handbook: A Guide for Managers

This is a big document (178 pages) that supersedes the older SP800-12 as a general handbook on managing information security. For IT managers or systems administrators new to security, this is really the best place to start, although much of the content is high-level material targeted at managers. Some of the chapters, such as those on governance and investment management, will be too high-level for systems administrators, but others, such as the ones on incident response, contingency planning, and configuration management, will be very useful. This guide includes an appendix containing a list of Frequently Asked Questions (FAQs), which provides a lot of useful information.

NIST SP800-44 Guidelines on Securing Public Web Servers

If you're operating Web servers on the public Internet, then you need to read this guide. Aimed at technical and operations people, it describes the threats to public Web servers and provides detailed guidelines for securing them. The following areas are covered:

Examples and references are provided for the Apache and Microsoft IIS Web servers, and there is a comprehensive appendix with details on installing and configuring both of these. There is also an appendix containing a very useful checklist for securing Web servers.

NIST SP800-45 Guidelines on Electronic Mail Security

Version 2 of the Guidelines for Electronic Mail Security was released in February 2007. This guide covers many areas from the installation and secure operation of email servers to encryption and signing of emails and securing various email clients. The following areas are covered in detail:

As in the guide for Web servers, a checklist is provided in the Appendices for quickly checking the security of an existing or planned mail server. It doesn't have any operating system or mail software specific sections but is detailed enough to cover almost any installation.

NIST SP800-81 Secure Domain Name System (DNS) Deployment Guide

DNS is a critical component of most IT environments, and risks to DNS need to be taken very seriously and managed appropriately. This guide presents recommendations for secure deployment of DNS servers. It examines the common threats to DNS and recommends approaches to minimize them. It covers the technical details of installing the BIND DNS server on Unix systems and provides recommendations for securing the operating system.

This guide explains how to secure zone transfers with TSIG signatures and gives a very good overview of DNSSEC implementation and management. It is thoroughly recommended if you are involved with managing DNSSEC services.

NIST SP800-48 Wireless Network Security (802.11, Bluetooth, and Handheld Devices)

This guide was written in 2002, so it is a bit outdated now. However, the fundamentals of wireless technology haven't changed a lot, and this guide does a very good job of explaining the threats to wireless networks. It covers primarily IEEE 802.11 (WiFi) and Bluetooth and presents good guidelines on security controls, such as positioning access points, controlling network access, and encryption methods. Even if you're not familiar with wireless networking, this guide serves as an excellent introduction.

NIST SP800-92 Guide to Computer Security Log Management

Just about every device in the world of IT generates log messages. Some devices, such as firewalls, generate huge amounts of log data, all of which needs to be managed in a secure manner.

This guide introduces the requirement to securely manage log data. It includes guides on log management infrastructure and processes such as reporting and analysis tools. It also includes details on the Unix syslog system and contains references to many tools and further guides for managing log data.
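The analysis-and-reporting side of log management can start very small. A sketch of a first-pass summary, assuming the traditional syslog line format; the sample file and the summarize_by_program helper are invented for illustration (point it at your real log file, whose path varies by system):

```shell
# Count syslog messages per program: field 5 of the traditional
# format is "program[pid]:", so strip the pid and the colon.
summarize_by_program() {
    awk '{ sub(/(\[[0-9]+\])?:$/, "", $5); count[$5]++ }
         END { for (p in count) print count[p], p }' "$1" | sort -rn
}

# Sample data standing in for /var/log/messages
cat > /tmp/sample.log <<'EOF'
Jan  3 10:00:01 host sshd[123]: Failed password for root from 10.0.0.1
Jan  3 10:00:02 host sshd[124]: Failed password for root from 10.0.0.1
Jan  3 10:00:03 host cron[99]: (root) CMD (run-parts /etc/cron.hourly)
EOF

summarize_by_program /tmp/sample.log
```

A one-line summary like this is often the quickest way to spot a noisy or misbehaving daemon before drilling into the raw messages.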

NIST and DISA Checklists

Sometimes we just don't have the spare time to read through the lengthy guides; this is when checklists come in handy. NIST has developed a program for the development of checklists for securing IT systems. The program is now owned by DISA (Defense Information Systems Agency), and it provides a large number of checklists that make the job of evaluating systems much easier and more methodical.

A number of checklists are available here, including ones covering:

Unix Security Checklist

The Unix Security Checklist comes as a zip file containing a number of documents with three major sections and five appendices. Some of the documents are very large (one is 360 pages long). The checklist is very detailed and contains checks for the Unix OS and most common applications found on Unix (such as SSH). The checks are all in .doc Word format, which makes it very easy to adapt them to your own purposes. The most important sections are Section 2 and Section 3.

Section 2, "SRR Results Report", contains a table that allows you to document the vulnerabilities discovered during the Security Readiness Review (SRR). Section 3, "System Check Procedures", covers how to perform the SRR for Unix systems. The Unix systems covered by this checklist are HP-UX, AIX, Solaris, and Red Hat Linux.
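The flavor of the check procedures can be illustrated with a single example. This is only a sketch of the pattern (run a check, record the findings), not an actual DISA procedure; the srr_world_writable name is invented, and the demo runs against a scratch directory rather than a real filesystem:

```shell
# One SRR-style check: find world-writable regular files, a common
# finding in Unix security reviews.
srr_world_writable() {
    find "$1" -type f -perm -0002 2>/dev/null
}

# Demo on a scratch directory: one compliant file, one finding.
dir=$(mktemp -d)
touch "$dir/ok" "$dir/bad"
chmod 644 "$dir/ok"
chmod 666 "$dir/bad"
srr_world_writable "$dir"
```

A real SRR script strings dozens of such checks together and writes each finding into the Section 2 results table.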

Standard of Good Practice (SoGP)

Published by the Information Security Forum (ISF), the Standard of Good Practice presents comprehensive best practices for managing IT systems from a business perspective, but in a practical and achievable way. It is targeted at larger businesses but is still applicable to small and medium businesses as well.

The standard is broken down into six sections, which it calls "aspects":

This is a very large document (247 pages), which would be very well suited for adoption as a comprehensive security policy. Even if you're not specifically solving security problems, the SoGP would act as a good set of guidelines for IT management practices.


ISO17799

No overview of security guides and practices would be complete without a mention of ISO17799. Titled "A Code of Practice for Information Security Management", it was originally developed in 1993 by a number of companies and published as a British standard. It became an ISO standard in 2000, with a number of later editions and add-on documents following. It essentially consists of about 100 security controls within 10 major security headings. It is intended to be used as a reference document to identify the measures required to be applied to specific areas and issues. It contains 10 sections on the following subjects:

The good thing about ISO17799 is that it is a standard against which an organization can be audited, and it can be seen as a common standard for IT security management. There are also many additional documents and books available to supplement the standard.

The bad thing about ISO17799 is that it is heavily commercialized; the 115-page document costs approximately US $200 and contains information that is available elsewhere at no cost (such as the SoGP).


There are many security guides available, and in this article I've presented some of the best ones that you can get and use for free. The OSSTMM and NIST/DISA checklists are good guides for evaluating the security of existing systems. The NIST guides are good for defining the best practices to manage systems securely, and the SoGP and ISO17799 documents offer standards against which your enterprise can be evaluated.

Managing IT security across the enterprise can be a bewildering experience; many managers and systems administrators have problems simply deciding where to start. With the right guides and checklists, however, the job can be greatly simplified and more easily understood.


ISO17799 --

NIST & DISA Checklists -- or

NIST Special Publications --

Open Source Security Testing Methodology Manual (OSSTMM) --

Standard of Good Practice (SoGP) --

Unix Security Checklist --

Kerry Thompson is a Security Consultant in Auckland, New Zealand with more than 20 years' commercial experience in Unix systems, networking, and security. In his spare time he is a technical writer, software developer, sheep farmer, woodworker, private pilot, and father. Contact him at:

Landwehr, C.E. and David M. Goldschlag, "Security Issues in Networks with Internet Access." Invited paper, Proc IEEE Vol 85, No. 12, Dec. 1997, pp.2034-2051. PostScript, PDF

This paper describes the basic principles of designing and administering a relatively secure network. The principles are illustrated by describing the security issues a hypothetical company faces as the networks that support its operations evolve from strictly private, through a mix of Internet and private nets, to a final state in which the Internet is fully integrated into its operations, and the company participates in international electronic commerce. At each stage, the vulnerabilities and threats that the company faces, the countermeasures that it considers, and the residual risk the company accepts are noted. Network security policy and services are discussed, and a description of Internet architecture and vulnerabilities provides additional technical detail underlying the scenario. Finally, a number of building blocks for secure networks are presented that can mitigate some of the vulnerabilities. Keywords: computer network security, internet, cryptography, authentication

CNS - Knowledge Base: a useful link collection. See the RFC part and several useful links to papers:

M. Baker and M. Sullivan, "The Recovery Box: Using Fast Recovery to Provide High Availability in the UNIX Environment." Proceedings of the Summer 1992 USENIX Conference, June 1992. (Award paper -- Best Student Paper.).

From Mary Baker's Publications

The Inevitability of Failure: The Flawed Assumption of Security in Modern Computing Environments (.pdf) by Peter A. Loscocco, Stephen D. Smalley, Patrick A. Muckelbauer, Ruth C. Taylor, S. Jeff Turner, John F. Farrell, National Security Agency

Dave Dittrich home page

***** Sys Admin Magazine Aug 2001 Volume 10 Number 8 Locking the Front Door of Password Security by Victor Burns.

Burns describes a PAM module created to check the strength of user-supplied passwords when a user changes his or her password. A very good and simple approach to improving password security.
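The module Burns describes is written in C against the PAM API; the shell function below is only a sketch of the same kind of strength checks (length, character mix, a dictionary lookup) that such a module applies. The check_password name and the toy two-word "dictionary" are invented for illustration:

```shell
# Sketch of proactive password-strength checks. A real PAM module
# would consult a full dictionary and run at password-change time.
check_password() {
    p=$1
    if [ "${#p}" -lt 8 ]; then echo "weak: too short"; return; fi
    case $p in
        *[0-9]*) ;;                                 # has a digit: fine
        *) echo "weak: needs a digit"; return ;;
    esac
    case $p in
        password*|qwerty*) echo "weak: dictionary word"; return ;;
    esac
    echo "ok"
}

check_password "password1"    # fails the dictionary check
check_password "Tr4il-mix9"   # passes all checks
```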

[Jan 27, 2000] DDJ ATTACK TREES by Bruce Schneier

Attack trees provide a formal, methodical way of describing the security of systems, based on varying attacks. Bruce shows how you can use them to improve security by modeling attacks.

[Jan 15, 1999] Network Magazine Back Issues July 1999 Security Reality Check by Rik Farrow

[Jan 15, 1999] Internet Application Security -- decent overview

[Jan 3, 1999] SysAdmin: Implementing Security on Linux by Patrick Lambert -- good discussion of basics

Here is a good comment on this paper from LinuxToday:

Subject: Very good story ... (Jan 2, 2000, 20:05:28 )

... although some additional notes might be helpful. I enjoy every story on security, but reading BugTraq on a daily basis has made me paranoid ?!?

You should install GNU/Linux on a system which is standalone or is only connected to a small set of systems (in a test network) where others don't have access. Otherwise you can never be sure whether your system was infected during the install. Who do you trust?

I never allow root logins via telnet or ssh. If you need root access, sudo or su will do.

In addition to the setuid problem, setgid files should also be taken care of (-2000 in the find command).

To see which ports are open to the world, run 'nmap -sS -O hostname'. Whichever ports you don't need or don't know: close them down; you really don't need them (otherwise you would know). If you installed firewalling on your system afterwards, run nmap again.

If you're running OpenSSH or SSH, in some cases you don't need inetd. Telnet, ftp, and rcp are replaced by the secure shell. Normally I need neither inetd nor the portmapper. If you really do need inetd, consider xinetd. In many cases the RPC services aren't needed; they can be replaced by expect scripts. rsync is also a great help.

You don't run NFS on systems connected to the Internet, do you? Nor X11, I assume. If you think you do need these protocols, you have inserted a firewall between the Internet and your system to prevent these protocols from being exchanged with other computers on the Internet?

Firewalling? Very important. You did start with deny on all interfaces as a default, right?

No comments on the final thoughts, he's really on topic.

For those lucky people who understand the Dutch language, have a look at for an advanced study on firewalling.
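The setuid/setgid audit the commenter alludes to boils down to find's permission tests: -4000 matches the setuid bit, -2000 the setgid bit. A sketch run against a scratch directory (on a real system you would scan the whole filesystem and review every hit; the audit_suid name is invented):

```shell
# List files carrying the setuid or setgid bit under a tree.
audit_suid() {
    # -4000 = setuid bit, -2000 = setgid bit
    find "$1" -type f \( -perm -4000 -o -perm -2000 \) 2>/dev/null
}

# Demo: one plain file, one setuid, one setgid.
d=$(mktemp -d)
touch "$d/plain" "$d/suid" "$d/sgid"
chmod 4755 "$d/suid"
chmod 2755 "$d/sgid"
audit_suid "$d" | sort
```

Re-running the scan periodically and diffing against a saved copy catches privileged binaries that appear unexpectedly.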



See also: Secure Shell (SSH/SSH2) Setup Guide (Dec 29, 1999)
SecurityPortal: OpenSource projects - what I learned from Bastille (and others) (Dec 24, 1999)
IBM developerWorks: Open source software: Will it make me secure? (Dec 24, 1999)
ZDTV: The Philosophy of Security (Dec 21, 1999)
LinuxPR: Bastille Linux releases v1.0.0 at SANS San Francisco Security Conference 99 (Dec 14, 1999)
UNIX & LINUX Computing Journal: Linux Security Tools (Dec 12, 1999)
Linux Gazette: Securing Linux: The First Steps (Nov 15, 1999)

[Nov.7, 1999] forum - Guest Feature Implementing a Secure Network


Hardening a Unix computer for Internet use

Wizard's Guide to Security by Carole Fennelly

Dave Wreski: Linux Security Administrator's Guide.
A very good handbook for improving the security of your Linux system. Dave Wreski explains the filesystem security mechanisms, passwords, cryptography... DOWNLOAD.


The USENIX Association - Sponsoring Research, Education, Training and Conferences for UNIX, LINUX, System and Network Administration, Security and Open Source technologies. -- home page

Usenix Security Symposiums

Complete Usenix Security Symposium '02 Program

9th USENIX Security Symposium

8th USENIX Security Symposium, Washington, D.C.

7th USENIX Security Symposium, San Antonio, Texas

The Sixth USENIX Security Symposium, San Jose, California

5TH USENIX UNIX Security Symposium

Security IV, Santa Clara, California


Special issues:

Selected papers

Classic Papers
(Limited to papers printed in the last century ;-)

Note: In case some link below turns out to be dead, please check ResearchIndex [NEC Research Institute; CiteSeer; Computer Science]. Many papers are also available in PostScript from You can also try the Purdue COAST archive ( but this site prohibits direct pointing and thus is not listed in the links. I actually hate one particular security pseudo-academic for this restriction ;-)

***** [Thompson1995] Ken Thompson Reflections on Trusting Trust
Ken Thompson's famous Turing Award lecture, in which he introduced the idea of a self-reinstalling Trojan horse. A real classic.
[Ritchie1986] Dennis M. Ritchie On the Security of UNIX (ResearchIndex) in UNIX System Manager's Manual, 4.3 Berkeley Software Distribution, Virtual VAX-11 Version, Computer Science Research Group, Computer Science Division, Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA, pp. 17:1-3 (Apr. 1986) html ps version.
Nothing special from the point of view of a contemporary user, but still an important, classic, historical paper. Perhaps the first paper about Unix security, written by one of its designers. Even for the contemporary reader, the points about the necessity of maximum restrictions on setuid and setgid programs are still valid.
[Morris_Thompson] Password Security: A Case History by Robert Morris and Ken Thompson:
Morris and Thompson describe the design of the password crypt() mechanism, its first faults, and its improvements. ps version.
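The paper's central idea, storing hash(salt + password) together with the salt rather than the password itself, can be sketched in a few lines. crypt(3) used DES; sha256sum below is only a stand-in to show why the salt defeats precomputed tables (the hash_pw helper is invented for illustration):

```shell
# hash_pw SALT PASSWORD -> hex digest of salt||password
hash_pw() {
    printf '%s%s' "$1" "$2" | sha256sum | cut -d' ' -f1
}

# Same password under two different salts: the stored values differ,
# so a precomputed table of plain password hashes is useless.
hash_pw aa secret
hash_pw bb secret
```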

**** [Morris] A Weakness in the 4.2BSD Unix TCP-IP Software by Robert T. Morris.

Probably the first paper in which the well-known IP spoofing attack is described. It explains the mechanism that allows an untrusted host to appear to be a trusted one and thereby gain access to certain restricted services. ps and pdf

Old Bellovin papers:
[Curry90] Improving The Security Of Your Unix System by David A. Curry:
One of the classic articles on Unix security. Here the author, who belongs to the old school and not only talks about security but writes useful tools, makes an exhaustive analysis of the threats to the system, the protection mechanisms offered by Unix, the rules for offering network services, etc. html ps.
Bill Cheswick An Evening with Berferd, in which a Cracker is Lured, Endured and Studied.
In this surrealistic paper by B. Cheswick (a revisited version appears in Firewalls and Internet Security, by Cheswick and Bellovin), the author describes the history of a cracker knocking at the AT&T gateway in 1991. He analyzes the cracker's activities, methods, and failures when trying to access the gateway. While from the point of view of content this is half truth and half fiction (another Cuckoo's Egg story), the paper has historical value. ps version.

Daniel V. Klein Foiling the Cracker: A Survey of, and Improvements to, Password Security
In this classic paper, the author shows the brute-force attack on password files using dictionaries, and how a weak password can compromise the entire system. As a solution, the use of a proactive password checker is proposed. ps version.
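Klein's attack in miniature: given a stolen salt and hash, hash every dictionary word under the same salt and compare. sha256sum stands in for crypt(3) here, and the hash_pw and crack helpers are invented for illustration:

```shell
# Hash salt||password; a stand-in for the real crypt(3).
hash_pw() { printf '%s%s' "$1" "$2" | sha256sum | cut -d' ' -f1; }

# crack SALT TARGET_HASH, candidate words on stdin: print the word
# whose salted hash matches the stolen one.
crack() {
    while read -r word; do
        if [ "$(hash_pw "$1" "$word")" = "$2" ]; then
            echo "$word"
            return 0
        fi
    done
    return 1
}

salt=xy
stolen=$(hash_pw "$salt" wombat)    # the victim chose a dictionary word
printf '%s\n' apple wombat zebra | crack "$salt" "$stolen"
```

The proactive checker Klein proposes runs exactly this test at password-change time, before the weak password ever lands in the password file.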

[Farmer_Spafford1990] Daniel Farmer and Eugene N. Spafford The COPS Security Checker System. USENIX Conference Proceedings, Pages 165-170, Anaheim, CA, Summer 1990.
In this paper the authors introduced one of the first internal vulnerability checkers (though far from being the very first). The name made the story, and everything else is now history. Farmer is also the co-author of SATAN.
[Venema] Wietse Venema Murphy's law and computer security
[Farmer_Venema] Dan Farmer and Wietse Venema Improving the Security of Your Site by Breaking Into it
[Kim_Spafford, 1993] Gene Kim and Eugene H. Spafford. The Design and Implementation of Tripwire: A File System Integrity Checker. Technical Report CSD-TR-93-071, Purdue University, 1993.
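Tripwire's core loop can be sketched with standard tools: record cryptographic checksums of files in a baseline, then re-verify later and report any file whose hash changed. Real Tripwire also signs the baseline database and records inode metadata; this sketch covers only the hash comparison, on scratch files standing in for real system files:

```shell
d=$(mktemp -d)
printf 'original\n' > "$d/passwd"
printf 'config\n'   > "$d/inetd.conf"

# 1. Take the baseline (in real life: store it offline or signed,
#    or an intruder will simply update it too).
( cd "$d" && sha256sum passwd inetd.conf > baseline )

# 2. An intruder modifies a monitored file.
printf 'tampered\n' > "$d/passwd"

# 3. Re-verify against the baseline; only the changed file fails.
( cd "$d" && sha256sum -c baseline 2>/dev/null ) | grep FAILED
```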
[Belgers19xx] Walter Belgers UNIX password security
This article analyzes the significance of a good password for the security of the whole system; it also discusses the Unix cipher mechanism and describes how an attacker can "discover" a password. ps version.

[Feldmeier_Karn1989] David Feldmeier and Philip Karn UNIX Password Security - Ten Years Later
Ten years after the publication of the previous paper (which was from 1979), they re-examine the vulnerabilities in the authentication mechanism of every Unix system. Times have changed, and with new technology faster attacks can be mounted. So here they present some solutions to these vulnerabilities. ps version.
[Miller_Fredriksen_So1989] Barton P. Miller, Lars Fredriksen, Bryan So An Empirical Study of the Reliability of UNIX Utilities -
A classic study of the reliability and stability of some common Unix tools. The authors arrive at a surprising conclusion: a third of the tested tools failed. Fortunately, much has changed since then (1989), and nowadays most (but not all) Unix utilities included in commercial distributions are equal to or better than the GNU free tools security-wise (Solaris tools seem to be better: support for extended attributes, etc.) ps version
[Miller1995] Barton P. Miller, David Koski, Cjin Pheow Lee, Vivekananda Maganty, Ravi Murthy, Ajitkumar Natarajan, Jeff Steidl Fuzz Revisited: A Re-examination of the Reliability of UNIX ...

In 1995, Barton P. Miller, one of the authors of the previous paper, re-examined the reliability of Unix tools with another group of researchers. A large improvement had been made, but the strangest result is this: the most reliable Unix system was Slackware Linux, a free Unix clone that runs on several platforms (i386 and SPARC among them), and which has been developed by programmers from all around the world, without a big company behind them, and with Linus Torvalds as their leader. ps version (146k)
[Smith] Nathan P. Smith Stack Smashing vulnerabilities in the UNIX ...
This paper presents and analyzes the vulnerabilities of Unix systems based on the possibility of executing code on the stack (on Intel x86 and compatibles). This is one of the most important classes of Unix security faults, because an error in the source code of a process that runs with root privileges can turn into the possibility of privileged access. ps version.
[Bishop] Matt Bishop Race Conditions, Files, and Security Flaws; or the Tortoise and the Hare Redux.

In this paper Matt Bishop studies another of the most common Unix attacks: race conditions. The study is based on real examples (passwd, binmail...), and finally some solutions are proposed. ps version.
[Bishop_Dilger] Matt Bishop, Michael Dilger Checking for Race Conditions in File Accesses.
Continuing with race condition attacks on Unix, in this paper they study mechanisms that allow these flaws to be detected when files are accessed. ps version.
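The classic check-then-use race on files that these papers analyze has an equally classic mitigation: create the file atomically with an unpredictable name, as mktemp(1) does. A sketch contrasting the two patterns (the unsafe one is left as a comment so it never runs):

```shell
# Unsafe pattern: predictable name plus a check-then-create window
# in which an attacker can plant a symlink:
#   tmp=/tmp/prog.$$
#   [ ! -e "$tmp" ] && echo data > "$tmp"   # attacker can win the race
#
# Safer pattern: mktemp creates the file atomically (O_CREAT|O_EXCL)
# with an unpredictable name and owner-only permissions:
safe_tmp=$(mktemp /tmp/prog.XXXXXX)
echo data > "$safe_tmp"
ls -l "$safe_tmp" | cut -c1-10    # mode 0600: owner-only access
rm -f "$safe_tmp"
```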
[Bishop] Matt Bishop: How to write a SetUID program.

Matt Bishop analyzes in this paper the problems arising from the existence of setuid programs in Unix systems. He shows the potential attacks against these programs, and also the basic rules for writing them. DOWNLOAD.
Geoff Morrison: UNIX Security Tools. DOWNLOAD.
Here the author analyzes the most common Unix security tools. He classifies them into three groups: system tools (to prevent internal attacks), network tools (to prevent external ones), and, finally, a group of other tools.
Robert B. Reinhardt: An Architectural Overview of Unix Network Security. DOWNLOAD.
In this article the author presents a model of Unix security architecture based upon the network connection model (ISO/OSI layer structure).
Matt Bishop: A Taxonomy of Unix System and Network Vulnerabilities. DOWNLOAD.
Outdated. Matt Bishop describes here some Unix weaknesses, how to detect them on our machines to stop crackers, and, of course, how to eradicate those failures in the system. He analyzes, among others, Thompson's Trojan for the login program, some race conditions, network daemon failures, IP spoofing, etc.
Peter H. Salus Net Insecurity Then and Now (1969-1998) -- Salus is not a specialist in security.
Landwehr, Bull, McDermott, Choi: A Taxonomy of Computer Program Security Flaws, with Examples. DOWNLOAD.

One of the best (and most complete) papers among those that try to establish a taxonomy of system vulnerabilities. In the article's appendix they present examples of insecurities, classified by system, along with their classification within this taxonomy. The Unix section is excellent.
Steven M. Bellovin: There Be Dragons. DOWNLOAD.

Bellovin is a gifted writer. This outdated but still fascinating article shows the attacks on the AT&T gateway by crackers from all around the world, in the old days when there were no corporate and personal firewalls on almost every computer connected to the Internet.
Matt Blaze, John Ioannidis: The Architecture and Implementation of Network-Layer Security under Unix. DOWNLOAD.

In this paper the authors show the design, philosophy, and functionality of swIPe, an IP-layer security protocol. swIPe is fully compatible with the current protocol, but it offers authentication, integrity, and confidentiality for IP datagrams.
Fuat Baran, Howard Kaye, Margarita Suarez: Security Breaches: Five Recent Incidents at Columbia University. DOWNLOAD.

In 1990, Columbia University (USA) suffered various attacks on its Unix machines. This paper describes them (some were directed against password files on certain machines), as well as the security measures taken.
Dan Farmer, Wietse Venema: Improving the Security of your site by breaking into it. DOWNLOAD.

A controversial classic by Dan Farmer and Wietse Venema that describes the ideas behind SATAN. The term ubercracker was introduced in this paper.
Matt Bishop: Proactive Password Checking. DOWNLOAD.

In this chapter the author analyzes the problem of choosing suitable Unix passwords, and some possible solutions with programs like npasswd or passwd+. Both of them (see the Software section) are analyzed and compared to see how they solve the weak-password problem.
Steve Simmons: Life Without Root. DOWNLOAD.

In this article the author studies the problem of performing certain administration activities as root. For security reasons, administrator access to the system should be reduced, and here it is described how to perform some tasks without total privileges, using dedicated system users instead.
Bob Vickers: Guide to Safe X. The X Window System is very insecure. DOWNLOAD.

This paper describes some approaches to improving the security of X.
Eugene Spafford: Unix and Security: The influence of History.

Dan Franklin. UNIX: Rights and wrongs. In Mitchell Waite, editor, UNIX Papers for UNIX Developers and Power Users, chapter 1, pages 2-40. Howard W. Sams & Company, 1987.
USENIX LISA 98/Titan Dan Farmer, Earthlink Network; Brad Powell, Sun Microsystems, Inc.; Matthew Archibald, KLA-Tencor
Titan is a freely available host-based tool that can be used to audit or improve the security of a UNIX system. It started as a Bourne Shell script to reconfigure various daemons. Checks for verifying configurations were added, and over time Titan became an effective tool for auditing computers. The authors made it clear that this is a powerful tool not designed for the weak or timid sysadmin. Using it incorrectly, you could easily render a system unusable or even unbootable. For the SA willing to put in the time to learn Titan thoroughly, it can save a great deal of time while helping to verify and maintain security across multiple hosts. The authors also made it clear that Titan is not the be-all and end-all of information-systems security; it is designed to be only part of the overall infrastructure. Titan now runs on most versions of Solaris, but it shouldn't be too difficult to port the scripts to other flavors of UNIX. By editing the scripts you can reconfigure Titan so that it performs auditing and configuration changes appropriate to the type of host you are running it on and the security policies that your network requires.


[Noordergraaf_Watson1999] Alex Noordergraaf and Keith Watson, SolarisTM Operating Environment Minimization for Security: A Simple, Reproducible and Secure Application Installation Methodology, December 1999.
This is probably the No. 1 paper explaining how to remove unnecessary packages; the authors actually consider a very practical case: Solaris plus Netscape Enterprise Server. The paper is a little weak on the tool side, though.

CERT Security Improvement Modules

Recommended Solaris Hardening Articles

SolarisTM Operating Environment Minimization for Security

CERT Security modules Vandenberg&Wyess
Armoring Solaris by Lance Spitzner Sabetnet Security Guide Boran's Hardening Paper
Building Secure N-Tier Environments Solaris Default Processes and init.d Solaris Operating Environment Network Settings for Security Rapid Recovery Techniques: Exploring the Solaris[tm] Software Registry YASSP Etc

SolarisTM Operating Environment Minimization for Security

***** SolarisTM Operating Environment Minimization for Security: A Simple, Reproducible and Secure Application Installation Methodology by Alex Noordergraaf and Keith Watson, December 1999. Great paper. This is probably the No. 1 paper explaining how to remove unnecessary packages; the authors actually consider a very practical case: Solaris plus Netscape Enterprise Server. The paper is a little weak on the tool side, though.

The Solaris Operating Environment installation process requires the selection of one of four installation clusters:

Each installation cluster represents a specific group of packages (operating system modules) to be installed. This grouping of packages into large clusters is done to simplify the installation of the OS for the mass market. Because each of these installation clusters contains support for a variety of hardware platforms (SolarisTM Operating Environment (Intel Platform Edition), microSPARCTM, UltraSPARCTM, UltraSPARC II, and so on) and software requirements (NIS, NIS+, DNS, OpenWindowsTM, Common Desktop Environment (CDE), Development, CAD, and more), far more packages are installed than will actually ever be used on a single Solaris Operating Environment system.

The Core cluster installs the smallest Solaris Operating Environment image. Only packages that may be required for any SPARCTM or Solaris Operating Environment (Intel Platform Edition) system are installed. The End User cluster builds on the Core cluster by also installing the window managers included with the Solaris Operating Environment (OpenWindows and CDE). The Developer and Entire Distribution clusters include additional libraries, header files, and software packages that may be needed on systems used as compile and development servers.

The size of the clusters varies significantly: the Core cluster contains only 39 packages and uses 52 MBytes; the End User cluster has 142 packages and uses 242 MBytes; the Developer cluster has 235 packages and consumes 493 MBytes of disk space. Experience to date has shown that in many cases, a secure server may require only 10 Solaris Operating Environment packages and use as few as 36 MBytes of disk space.

Installing unnecessary services, packages, and applications can severely compromise system security. One well known example of this is the rpc.cmsd daemon, which is unnecessary on many data center systems. This daemon is installed and started by default when the End User, Developer, or Entire Distribution cluster is chosen during the installation process.

There have been many bugs filed against the rpc.cmsd subsystem of OpenWindows/CDE in the last few years, and at least two CERT advisories (CA-99-08, CA-96.09). To make matters even worse, scanners for rpc.cmsd are included in the most common Internet scanning tools available. The best protection against rpc.cmsd vulnerabilities is not to install the daemon at all, which avoids having to ensure it is never accidentally enabled.
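As a quick sanity check (a sketch, not part of the Sun paper), you can verify that rpc.cmsd is absent on a host before it goes into production:

```shell
# Sketch: confirm rpc.cmsd is not running before a host goes into production.
# (On Solaris you could additionally check the installed packages with pkginfo;
# the owning package name varies by release, so it is omitted here.)
if ps -e 2>/dev/null | grep -w rpc.cmsd >/dev/null
then
        STATUS="rpc.cmsd is running"
else
        STATUS="rpc.cmsd is not running"
fi
echo "$STATUS"
```

If the daemon shows up, it most likely slipped in via one of the larger installation clusters.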

The problem described above is well known in the computer industry, and there are hundreds of similar examples. Not surprisingly, almost every security reference book ever written discusses the need to perform "minimal OS installations" [Garfinkel]. Unfortunately, this is easier said than done. Other than the occasional firewall, no software applications are shipped with lists of their package requirements, and there's no easy way of determining this information other than through trial and error.

Because it is so difficult to determine the minimal set of necessary packages, system administrators commonly just install the Entire Distribution cluster. While this may be the easiest to do from the short-term perspective of getting a system up and running, it makes it nearly impossible to secure the system. Unfortunately, this practice is all too common, and is even done by so-called experts brought in to provide infrastructure support, web services, or application support. (If your organization is outsourcing such activities, be sure to require the supplier to provide information on what their OS installation policies and procedures are, or you may be in for some unpleasant surprises.)

The rest of this article presents one method for determining the minimal set of packages required by a particular application--the iPlanetTM Enterprise Server. Future articles will discuss other applications. The tentative list includes NFSTM Servers (with SecureRPC and Solstice DiskSuiteTM), iPlanetTM WebTop, and SunTM Cluster. If you have followed this procedure and developed the scripts for a particular application, please forward them to the authors for inclusion in future articles.

CERT Security modules

CERT® Security Improvement Modules -- very uneven: some are weak, some are outdated.

Each CERT Security Improvement module addresses an important but narrowly defined problem in network security. It provides guidance to help organizations improve the security of their networked computer systems.

Each module page links to a series of practices and implementations. Practices describe the choices and issues that must be addressed to solve a network security problem. Implementations describe tasks that implement recommendations described in the practices. For more information, read the section about module structure.


Building Secure N-Tier Environments

Building Secure N-Tier Environments by Alex Noordergraaf - This article provides recommendations on how to architect and implement secure N-Tier commerce environments.


**** Securing Solaris Servers (Securing Solaris Servers - A Checklist Approach,
Paul D. J. Vandenberg and Susan D. Wyess) -- a point-by-point checklist from Usenix. This is probably the third best paper on this topic available on the Internet.

This material is excerpted from an internal U.S. Government document on web security, which the authors played leading roles in preparing. This material has been officially reviewed, and the authors have been granted permission to use this material in a non-official publication.

Server Security Checklist Overview
About the Authors
Using the Checklists

Armoring Solaris by Lance Spitzner

**** Armoring Solaris by Lance Spitzner: -- good and based on the concept of minimizing the number of packages installed.

Solaris Operating Environment Network Settings for Security

***** SolarisTM Operating Environment Network Settings for Security[updated December 2000]

-by Keith Watson and Alex Noordergraaf
Discusses the many low-level network options available within Solaris and their impact on security.

Sabetnet Security Guide

**** Solaris Security Guide -- good sabernet paper. Some advice is of questionable quality. Yes, of course, you can enable BSM, but you had better buy an additional 80 GB hard drive first and hire somebody to look into all those logs ;-)

Boran's Installing a Firewall and Hardening Solaris 2.7

**** Installing a Firewall bastion host: Hardening Solaris 2.7 by Seán Boran. He is the author of IT Security Cookbook -- this is a very good paper that contains important information that is difficult to find elsewhere.

This article presents a concise step-by-step approach to securely installing Solaris for use in a firewall DMZ, or other sensitive environment. There are many books and web articles on general hardening, but deciding exactly how to do it for your Solaris system can be tricky.

The focus in this article is on preparing the Operating System to securely run services, rather than the setup of the services themselves. Firewall engines like Raptor, Firewall-1, Sunscreen etc. are not examined here.

This article is specific to Solaris 2.7, other versions are similar, but will have some differences in startup file names, kernel parameters etc. This article has been updated since the original release, see the Additional Notes section.

Checkpoint Sun Stripping Paper

***+ Strip Down SUN Servers by Joe@Checkpoint: looks like a rework of the Security FAQ. Decent list of checks, but nothing special. Might be useful as an add-on to the Vandenberg-Wyess paper.

1) Keep the system disconnected from the network until all is ready.

2) Install only the core operating system, adding only necessary packages.

1. Install the latest OS version supported by CheckPoint S/W Tech.

2. Be sure root has a umask setting of 077 or 027 after you have fully configured the system.

3. Be sure root has a safe search path, as in /usr/bin:/sbin:/usr/sbin. This helps avoid Trojan horses placed in the current working directory.

4. Generally, examine all "S" files in /etc/rc2.d and /etc/rc3.d. Any files that start unneeded facilities should be renamed (be sure the new names don't start with "S"). Test all boot file changes by rebooting, examining /var/adm/messages, and checking for extraneous processes in ps -elf output.

5. Make sure to enable the "CONSOLE" line in /etc/default/login. To disable use of ftp by root, add "root" to /etc/ftpusers.

6. Remove /etc/hosts.equiv, /.rhosts, and all of the "r" commands from /etc/inetd.conf. Then do a kill -HUP of the inetd process.

7. Remove, lock, or comment out unnecessary accounts, including "sys", "uucp", "nuucp", and "listen". The cleanest way to shut them down is to put "NP" in the password field of the /etc/shadow file. Also consider using the noshell program to log attempts to use secured accounts.

8. The file /etc/logindevperm contains configuration information to tell the system the permissions to set on devices associated with login (console, keyboard, etc). Check the values in this file and modify them to give different permissions.

9. No file in /etc needs to be group writeable. Remove group write permission via the command chmod -R g-w /etc

10. By default, if a Solaris machine has more than one network interface, Solaris will route packets between the multiple interfaces. This behavior is controlled by /etc/init.d/inetinit. To turn off routing on a Solaris 2.4 (or earlier) machine, add "ndd -set /dev/ip ip_forwarding 0" at the end of /etc/init.d/inetinit. For Solaris 2.5, simply "touch /etc/notrouter". Be aware that there is a small window of vulnerability during startup when the machine may route, before the routing is turned off.
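Item 9 in the checklist above can be verified and fixed mechanically. Here is a minimal sketch, run against a scratch directory standing in for /etc (try it on a copy before touching a live system):

```shell
# Sketch: find group-writable files, apply the checklist's fix, and re-check.
DIR=/tmp/etc-demo.$$
mkdir -p $DIR
touch $DIR/passwd $DIR/shadow
chmod 664 $DIR/passwd            # group-writable: should be caught
chmod 600 $DIR/shadow            # already safe
BEFORE=`find $DIR -type f -perm -020 | wc -l`
chmod -R g-w $DIR                # the recommended fix
AFTER=`find $DIR -type f -perm -020 | wc -l`
echo "group-writable files before: $BEFORE, after: $AFTER"
rm -rf $DIR
```

The same `find ... -perm -020` check run against the real /etc makes a handy periodic audit.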

Solaris Default Processes and init.d

FOCUS on Sun Solaris Default Processes and init.d Pt. I

This article has been written to provide insight into a stock installation of Solaris 8, and the services started by default. Solaris 8 by default runs many services. This example was provided using Solaris 8, which is the latest version available, and a Sparcstation 20. Most of this document will apply to releases of Solaris prior to 8, and to both the Sparc and Intel architectures. For documentation purposes, a full OEM install was done. Many topics discussed will be familiar to seasoned administrators. However, this document will benefit all parties involved in the administration and security aspects of Solaris.

Solaris Security Primer

Here is a simple approach which merely checks file integrity against what was originally installed. If something has changed -- i.e., someone has replaced a system binary with something malicious -- it will find it. Of course, if you don't scan everything, you won't really know.

This script will compare what is on a CD-ROM to what is on a hard drive. This assumes that you can fit everything important to you on a CD. I think this is fairly reasonable if you only take system/software binaries and not data. The minimal Solaris install is 200-300 MB depending on version, which will fit fairly easily on a CD. Having a good trusted backup is paramount. Without it you can only guess as to what has been changed.


cd /cdrom
find . -type f | grep -v TRANS.TBL | grep -v /proc | \
while read F
do
        # checksum of the trusted copy on the CD-ROM
        VAR1=`md5sum "$F" | awk '{print $1}'`
        # checksum of the live copy; ${F#.} strips the leading "."
        # so ./usr/bin/ls becomes /usr/bin/ls
        VAR2=`md5sum "${F#.}" | awk '{print $1}'`
        if [ "$VAR1" != "$VAR2" ]
        then
                echo "$F CHANGED"
                echo "$VAR1 $VAR2"
        fi
done

When setting up a system, include a run of the following script. It is paramount that this be done before connecting to the Internet. That way it is a "trusted" OS at the time you run this. If you had your system on the Internet for two years and then do it, you may have already been attacked, and just don't know it.

Rapid Recovery Techniques: Exploring the Solaris[tm] Software Registry

**** Rapid Recovery Techniques: Exploring the Solaris[tm] Software Registry
-by Richard Elling. Discusses how to use processes to recover from errors caused by people.

Etc (3 and less stars)


{***} YASSP Yet Another Solaris Security Package by Jean Chouanard, Xerox PARC. See also Softpanorama Unix Audit and internal Scanning Tools (Internal Vulnerability Scanning) page. The package seems to be dead (it has not been updated since November 2000).

As the sources of the SECclean package are available, it is easy to copy and localize it so it reflects your configuration. From this package, we have derived different classes of packages to install NIS servers, NFS servers, and end-user workstations.

Files Installed:

Files Replaced:

Files Modified:

Files Deleted:

RC files Deleted

    Long list of RC files turned off: "cacheos cachefs.root asppp uucp cachefs.daemon xntpd spc rpc autoinstall nfs.client autofs nscd lp nfs.server volmgt PRESERVE sendmail cacheos.finish sysid.sys snmpdx dmi dtlogin power init.dmi init.snmpdx".

    These are the names of the init files located in the /etc/init.d directory. For all the links existing under any /etc/rc?.d/ directory, the postinstall script will delete these links and write a trace log under /etc/rc?.d/Disable-By-SECclean, which enables you to re-create the links if needed.
    If you need to re-enable some of these RC files, you can either re-create the package to fit your needs (see Package modification) or just manually recreate the links after the install.
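The disable-and-trace scheme described above can be sketched in a few lines of shell, using a scratch directory in place of a real /etc/rc2.d (the actual SECclean postinstall script edits the live links):

```shell
# Sketch: disable an rc start script by renaming it so init skips it,
# keeping a trace file so the link can be restored later.
RCDIR=/tmp/rc2.d-demo.$$
mkdir -p $RCDIR
touch $RCDIR/S88sendmail
for f in $RCDIR/S*
do
        BASE=`basename $f`
        mv $f $RCDIR/Disabled_$BASE                  # init ignores non-"S" names
        echo "$BASE" >> $RCDIR/Disable-By-SECclean   # trace for re-enabling
done
TRACE=`cat $RCDIR/Disable-By-SECclean`
echo "disabled: $TRACE"
rm -rf $RCDIR
```

Restoring a service is then just a matter of renaming the file back to the name recorded in the trace log.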

RC files Replaced

    These files are based on the SUN distribution files, but have been simplified.

RC files Added

*** Spreading the wealth -- nothing new. Rehash of old info.

Peter is conducting an experiment this month. He'd like his column to act as a forum featuring tips gathered from around our wide sysadmin audience. Read Peter's helpful hints here -- and then send in your own! If the experiment succeeds, network tips will become a regular feature of future columns as well. (1,300 words)

Disable root logins everywhere but on the system console by uncommenting the CONSOLE= line in /etc/default/login. Disable ftp root logins by adding root (and other system accounts) to /etc/ftpusers.

Disable stack-overflow exploits by adding the line set noexec_user_stack=1 to /etc/system and then rebooting. (Available starting with Solaris 2.6.)

(This tip from Jochen Bern.)

There are a couple of security holes that a simple "umask 022" at the beginning of boot scripts will fix -- and virtually no boot scripts break by adding it.
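The effect of the umask tip is easy to demonstrate; a small sketch using a scratch directory:

```shell
# Sketch: with "umask 022" in force, files created afterwards come out
# rw-r--r--, so neither group nor world can write to them.
DIR=/tmp/umask-demo.$$
mkdir -p $DIR
umask 022
touch $DIR/demofile
PERMS=`ls -l $DIR/demofile | awk '{print $1}'`
echo "created with permissions $PERMS"
rm -rf $DIR
```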

Room to grow 'Debugging' Solaris

As a leader in Internet technologies, Sun must assume more responsibility for the security of its users' applications and data. Many straightforward improvements to Solaris would benefit the vast majority of users. For example, the fix-modes program, which is used frequently by sites wishing to secure their Solaris installations, has no known side effects. Sun could eliminate the need for the program by implementing its capabilities as part of a Solaris release.

Another tool that Sun could incorporate is Titan, again cowritten by a Sun employee. The ASET tool is included with Solaris, but is rarely found to be useful by the security community. To improve the security of Solaris, more than a single tool is needed; there must be a systemic approach to improving its security as shipped and allowing systems administrators to further increase its security.

Many other changes, such as those found in Solaris Security FAQ, could be implemented or made optional with the use of simple programs or scripts. In fact, Jean Chouanard of Xerox PARC has done just that by using content from the FAQ and Hal Pomeranz's forthcoming SANS Securing Solaris: Step-by-Step booklet. The script is available for downloading; see Resources for the URL.

Other potential security improvements include making syslog.conf work out of the box, keeping BIND up to date, and replacing sendmail with something more secure (qmail, for instance).

***+ Peter Galvin's Solaris Security FAQ -- important historical document, almost a standard reference on Solaris security. Partially outdated, but it still contains very good points. Some points are incorrect for Solaris 7 and 8, and some recommendations are pretty superficial.

***+ Securing a Solaris 2 Machine [local copy]

This document is just to get you started; it is not exhaustive. The Computing Service offers the advice in this document only as a guide to the sort of problems of which a Unix System Administrator should be aware and accepts no responsibility for any problems which may arise from its application to particular systems outside the control of the Service.

This document should be read in conjunction with the Computing Service's leaflet ``So you want to run a secure Unix system, do you?'' which covers the general concepts.

The most recent version of Solaris 2 is version 2.4. The Sun CD comes with a directory of patches. Load these patches together after the operating system at install time.

***+ Secure Solaris Setup -- this is a decent starting document, but there are better than that.[Dec 20, 1999]

This page is intended to walk you through some of the steps required to set up a secure Solaris machine. For obvious reasons, I am unable to post all of the security measures that I take on the machines that I manage. The following steps should be considered a good beginning, not a guarantee that the resulting machine will be secure.

***+ Hardening a Unix computer for Internet use by Hal Stern. See also his other Sysadmin columns at

Rob Kolstad, long-time USENIX executive and noted industry personality, often points out that a good system administrator is a master of change on many time scales. That statement is most appropriate in the context of last month's topic, managing TCP/IP connections....

... ... ...

Connection erection
Just because you can name the remote end of a socket with an IP address and port number pair doesn't mean the other side can or even wants to talk to you. Making yourself appear interesting (and trusted) is a security problem we'll cover shortly. Making sure your servers have sufficient connection management resources is a growing performance problem. As the use of network services has exploded, many years-old assumptions about resource allocation have proven far too restrictive.

A server-side process prepares to accept socket connections by first calling listen() and then accept(). The first call determines the depth of the incoming connection queue, while the second call is what actually puts the socket into a receive-ready state. In the days of pre-Internet boom the default value of five pending connections was frequently hard-coded in the implementation of listen(). Current socket interface code, however, interprets the argument and sets the queue depth. When the socket in question is owned by httpd, or any other process that receives a high volume of connection requests, the queue depth is a critical performance limit.

An embryonic socket connection goes through a three-way handshake between client and server. The connection stays on the incoming connection queue until the handshake has been completed. Knowing the steps involved will help you determine just how long the average connection dance will take.

The connection remains in the queue for the duration of the last two packet exchanges, or the total of the round-trip network transfer time between client and server, plus the time required for the client to process the server's initial packet.

Once the connection queue is full, further attempts to connect to the socket are discarded. If you find connections are refused, or if your browser is complaining that it can't open a URL because the server isn't responding, you're probably bumping into the backlog limit.

Using a bit of queuing theory, we can determine the maximum connection request rate (RR) knowing the average round-trip time (RT) and the connection queue depth (QD): RR = QD/RT. If the depth is left at its default value of 5, and it takes about 200 msec to complete a round-trip, you can handle 25 connections/second. Increase the latency for a handshake over a series of wide-area links to 500 msec, and that rate drops to 10 connections/second. Crank the queue depth up to 32, however, and you can handle 64 connections/second at 500 msec round-trip, and a more respectable 160/second at 200 msec.

Here's another way to calculate the expected depth of your socket connection waiting line. Starting with the RR = QD/RT relationship, multiply both sides by RT, yielding QD = RR * RT. The average connection backlog will be the connection arrival rate (expressed in connections/second) multiplied by the average round-trip time (in seconds) for a three-way handshake. A site bombarded by 100 connection requests/second from local machines, where the round-trip service time sits near 30 msec, will only have a backlog of (0.03 second * 100) = 3 connection requests. Accept that same load from the Internet, where the handshake round trip time is more like 300 msec on a good day, and the queue depth increases to 30.
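The QD = RR * RT arithmetic can be checked with a few lines of shell, using the Internet example from the paragraph above (100 requests/second at roughly a 300 msec round trip):

```shell
# Back-of-the-envelope connection-backlog estimate: QD = RR * RT.
RR=100                                  # arrival rate, connections/second
RT_MS=300                               # handshake round-trip time, msec
QD=`expr $RR \* $RT_MS / 1000`
echo "expected backlog: $QD connection requests"
```

Plugging in your own measured round-trip time gives a first approximation of the backlog value to pass to listen() and to set via ndd.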

There are two steps required to raise the connection backlog limit. First, change your server-side code so that listen() is passed a more accurate depth parameter. Second, inform the kernel of the larger backlog high-water mark. In Solaris 2.4, do this using ndd:

luey# ndd -set /dev/tcp tcp_conn_req_max 32

The default value is still only 5. This command should be placed in /etc/init.d/S69inet, or executed by a boot script before httpd is started, or you'll be clamped at the too-small default. You can increase the backlog up to 32 in Solaris 2.4, and Solaris 2.5 further increases the upper bound to 1,024 connections. (Thanks to Bob Gilligan of Sun's Internet engineering team for the math and explanation of the connection request mechanics).

... ... ...

Policies of firmness
Current literature on firewall and network security roughly divides policies into two classes: those that specifically deny some services, allowing anything else by default, and those that specifically allow services and deny connections by default. While the latter policy camp is much more restrictive, it also tends to limit the number of headaches you will have to deal with. Being firm and denying services by default means you're in for fewer surprises from unexpected holes in previously unused services.

Consider the task of protecting a home-grown application that you want to make accessible across firewall or company boundaries. The TCP wrapper package protects services owned by inetd and the modified portmapper covers RPC based applications such as NIS and various license managers. However, services managed by daemons started at boot time, outside of portmap or inetd control, are not protected by either wrapper. You'll need to enforce access controls at your router, using a low-level packet filter, or modify the application's installation so it can be managed by inetd. Avoid retooling applications to have them perform network authorization -- you're likely to end up with inconsistent or incomplete implementations, leaving you open to a host of attacks on your host.
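For the inetd-owned services that the TCP wrapper does cover, the policy lives in /etc/hosts.allow and /etc/hosts.deny. The fragment below is a hypothetical default-deny sketch (the service names and the internal 10.* network are assumptions, not a recommendation for your site); it writes scratch copies so the syntax can be shown verbatim rather than touching the real files:

```shell
# Hypothetical default-deny TCP wrapper policy, written to scratch copies.
cat > /tmp/hosts.allow.demo <<'EOF'
# allow selected inetd-managed services only from the internal 10.* network
in.telnetd in.ftpd : 10.
EOF
cat > /tmp/hosts.deny.demo <<'EOF'
# everything not explicitly permitted is refused
ALL : ALL
EOF
POLICY=`grep 'ALL : ALL' /tmp/hosts.deny.demo`
echo "default policy: $POLICY"
rm -f /tmp/hosts.allow.demo /tmp/hosts.deny.demo
```

Note that, as the paragraph above warns, this protects only wrapper-managed services; a spoofed internal source address still has to be stopped at the router.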

If you are going to use wrapper services to restrict service access inside your organization, the problem of network security extends beyond the protected server. Let's say you decide to configure a TCP wrapper that allows any connection from a machine on the "inside" of your network, assuming all employees are well-intentioned and to be trusted. What you can't trust, however, are the packets coming through your router or Internet gateway. If an attacker hand-crafts a connection request packet with an IP address that appears to be inside your network, it's possible that the TCP wrapper will happily accept the connection. This problem, known as IP spoofing, must be dealt with at the boundary between the internal and external (Internet) networks. Your router, gateway or firewall should discard packets that appear on the external network connection but proclaim to be from the inside, using a forged IP address. More information on IP spoofing and how it was used by Internet rogues like Kevin Mitnick can be found on the Information Works! publications list.

Keep your SOCKS on
So far, we've concentrated on keeping the unwanted characters out through careful inspection of their source addresses. We've assumed a fair bit of transparency through your connection to the Internet, with host-level security taking the spotlight. What if your gateway or Internet firewall doesn't forward IP packets? Most hosts that straddle "inside" and "outside" networks do not automatically route IP packets, whether outside is the Internet proper or simply an untrusted stretch of data highway. In a purely perimeter-oriented defense, turning off IP forwarding helps to keep the bad guys out. It also keeps the good guys in unless you create proxy, or relay applications on the gateway that connect through your locked door to the outside world.

Of course, there's another publicly available package to solve this problem: SOCKS, a name that is derived more as a contraction of "sockets" than as an acronym. Learn more about the package's history and availability on the SOCKS Web page. Using SOCKS, connections from the inside are relayed to the outside network, with only minor modifications to the application to make it conscious of the relay. Changing application code is a small price to pay for user-level transparency; users won't have to contend with explicitly talking to a proxy instead of a familiar command line.

Socks consists of two components: a daemon that runs on your gateway host and a library used to build applications to talk to that daemon. The SOCKS daemon listens for connections emanating inside the firewall, and relays them to the outside. The library is consulted in place of socket set-up calls such as bind() and connect(), causing them to talk to the daemon instead of the actual exterior network service. These routines have an R prefix, so the SOCKS version of bind() is called Rbind() and the modified connect() is Rconnect(). In addition to these two calls, accept(), listen(), getsockname(), and select() are overridden.

Rebuilding an application to understand the SOCKS relay is known as "SOCKSifying" the client. The simplest approach, which involves no source-code changes, is to modify the Makefile to redefine the necessary library functions as macros, substituting the SOCKS client library name as the macro's value:

-Dconnect=Rconnect -Dbind=Rbind

Wherever connect(args) appears in the application code, it will be replaced with Rconnect(args). If the Makefile trick results in bizarre compilation and linker errors, you'll have to manually modify the client to use the SOCKS library routines. Of course, if your code is completely dynamically linked (a topic we'll visit in coming months), you can build SOCKS as a shared library and have the dynamic linker do the dirty work.

Socks isn't a panacea, since most environments will have many non-Unix clients and servers. There are SOCKS libraries available for Macintosh and Windows clients, although the SOCKS daemon must run on a Unix host. A client using SOCKS-compatible applications contains an /etc/socks.conf configuration file that points to available SOCKS relay hosts. This bit of client-side work makes SOCKS non-trivial to install on thousands of hosts. Another downside to the tool is that it only works for TCP-based services; you'll need the companion package udprelay to handle connectionless services. More information is contained in both books on firewalls mentioned above, as well as in the installation and configuration notes that come with SOCKS.

SA-388 Solaris 2.X Security and Firewall-1 -- a Sun course that includes some online notes (SA388_TOC.html).

{***+} System Administration - Hardening Solaris -- this is a compilation and is weaker than the previous papers, but it has some useful info on logs.

The goal of this document is to bring together all of the various suggestions for hardening Solaris. The suggestions contained here are focused on Solaris 7; most should transfer to earlier versions, but they may require additional kernel tweaks to get the same level of protection. The suggestions below should be used as a starting point for building a server, after installing the OS. This is not a complete list, and I hope to develop it as time goes on. If you have any suggestions or comments, please e-mail me at

Please note: It is not possible to make a system completely hacker-proof, but you can make your server a very difficult and unattractive target, which is the goal of the following suggestions. The security of a server is also dependent on the security of the programs (services) that are running. If you are running an exploitable service, none of this will do any good. You will need to be diligent in checking for and applying security patches to both the OS and the services running on the system.

Before following these directions, download the latest patch cluster. You will need to regularly download and install the latest patch cluster, repeating the following steps after each install. (The patch cluster will re-create files that you have deleted, overwrite files that you have changed, and loosen the permissions on files you have tightened.)

To disable the ability to execute code out of the stack, insert the following into /etc/system:

set noexec_user_stack=1
set noexec_user_stack_log=1

To protect against tcp session hijacking, add the following to /etc/default/inetinit:
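The setting itself is missing from this copy. The line usually recommended here (an assumption based on standard Solaris hardening guidance, not recovered from the original) is:

```
# /etc/default/inetinit -- use RFC 1948-style sequence number generation
TCP_STRONG_ISS=2
```

This makes TCP initial sequence numbers unpredictable, which is what defeats the session-hijacking attack.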


Make the following additions to the end of /etc/init.d/inetinit:

ndd -set /dev/ip ip_respond_to_timestamp 0
ndd -set /dev/ip ip_respond_to_timestamp_broadcast 0

#Protect against routing attacks
ndd -set /dev/ip ip_ignore_redirect 1

# Disable IP forwarding
/usr/sbin/ndd -set /dev/ip ip_forwarding 0
ndd -set /dev/ip ip_forward_directed_broadcasts 0

# Protection against syn floods
/usr/sbin/ndd -set /dev/tcp tcp_ip_abort_cinterval 10000
echo "tcp_param_arr+14/W 0t10240" | adb -kw /dev/ksyms /dev/mem
/usr/sbin/ndd -set /dev/tcp tcp_conn_req_max_q 1024
/usr/sbin/ndd -set /dev/tcp tcp_conn_req_max_q0 8192

#Protection against ping of death
/usr/sbin/ndd -set /dev/ip ip_respond_to_echo_broadcast 0

# performance tweaks
/usr/sbin/ndd -set /dev/tcp tcp_xmit_hiwat 65536
/usr/sbin/ndd -set /dev/tcp tcp_recv_hiwat 65536
/usr/sbin/ndd -set /dev/tcp tcp_cwnd_max 65535
/usr/sbin/ndd -set /dev/udp udp_xmit_hiwat 65536
/usr/sbin/ndd -set /dev/udp udp_recv_hiwat 65536

Comment out any entry in /etc/rmmount.conf which references "".
chmod 644 /etc/hostname.*
chmod o-rw /dev/rmt/*

Disable all unnecessary entries in the /etc/rc*.d directories, such as:
S47asppp S73nfs.client S76nscd S80lp S88sendmail S89bdconfig
S15nfs.server S70uucp S71rpc S93cacheos.finish
S74autofs S41cachefs.root S74xntpd S85power
S77dmi S76snmpdx and anything related to dtlogin.

To limit the ability for unauthorized users to become root, add the users who are allowed to su as root to the sys group in /etc/group. Then do the following:

/usr/bin/chgrp sys /usr/bin/su
/usr/bin/chmod 4750 /usr/bin/su
/usr/bin/chmod o-rwx /sbin/su.static

Lock down the files .rhosts, .netrc, and /etc/hosts.equiv. The r commands use these files to grant access to systems. To lock them down, touch the files, then change their permissions to zero; this way no one can create or alter them. For example,
/usr/bin/touch /.rhosts /.netrc /etc/hosts.equiv
/usr/bin/chmod 0 /.rhosts /.netrc /etc/hosts.equiv

Remove the existing /etc/inetd.conf and replace it with one containing only:
ssh stream tcp nowait root /usr/local/sbin/sshd sshd -i

or disable inetd altogether (delete it from /etc/rc2.d/S72inetsvc) and run sshd as a daemon by creating a file such as /etc/rc2.d/S99sshd as follows:
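A minimal S99sshd (a sketch; the sshd path matches the inetd.conf line above, but adjust it for your install) might look like:

```
#!/sbin/sh
# /etc/rc2.d/S99sshd -- start sshd standalone at boot
case "$1" in
start)
        [ -x /usr/local/sbin/sshd ] && /usr/local/sbin/sshd
        ;;
stop)
        /usr/bin/pkill -x sshd
        ;;
esac
```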


If you do decide to retain inetd, edit /etc/rc2.d/S72inetsvc and change the line that starts inetd to:

/usr/sbin/inetd -st &

Unless you have a clear need for print services, delete the lpr-associated programs:

Eliminate unnecessary SUID programs. Run the command:
find / -perm -4000 -print

and consider removing or chmod -4000 each of the filenames returned (unless they serve a useful purpose for you, such as su).

Eliminate unnecessary SGID programs. Run the command:
find / -perm -2000 -print

and consider removing or chmod -2000 each of the filenames returned (unless they serve a useful purpose for you).
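Both sweeps can be combined into one pass and written to a dated report, so later runs can be diffed for newly appearing set-id files (a sketch; the report path is arbitrary):

```shell
# suid_audit DIR REPORT -- record all set-uid and set-gid files under DIR
# in a sorted report; diff successive reports to spot new set-id binaries.
suid_audit() {
    find "$1" \( -perm -4000 -o -perm -2000 \) -type f -print 2>/dev/null |
        sort > "$2"
}
```

For example, run `suid_audit / /var/tmp/suid.$(date +%m%d)` from cron and diff against yesterday's report.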

Strongly consider setting up a hardened (and dedicated) syslog server to accept syslog messages. If you do, add the following lines to the end of your /etc/syslog.conf:

*.emerg @sysloghost
*.alert @sysloghost
*.crit @sysloghost
*.err @sysloghost
*.warning @sysloghost
*.info @sysloghost
*.debug @sysloghost

Edit /usr/lib/newsyslog and change all chmod commands to chmod 640.


[Nov.12, 1999] Linux Today (Singapore) Linux and Network Security

"A set of presentation slides to introduce the basic ways in which a Linux server can be compromised and how Linux and its suite of software can help to prevent that.


The Slide Presentation may be found here.


Related Stories: (Singapore): Regarding Network Security (Nov 02, 1999) Deploying a Linux Firewall (Oct 08, 1999)
CRN: Case Study -- VAR Uses Linux For Security (Aug 23, 1999)
Security Portal: Better Network Security Through Peer Pressure (May 31, 1999)
RPMS available for the Linux Security Toolkit (Feb 21, 1999)
InfoWorld: Trinux introduces the Linux-shy to the world of security tools with compassion and ease (Feb 21, 1999)


***+ Root Prompt -- Auditing Your Firewall Setup by Lance Spitzner [Apr. 10, 2000]

You've just finished implementing your new, shiny firewall. Or perhaps you've just inherited several new firewalls with the company merger. Either way, you are probably curious as to whether or not they are implemented properly. Will your firewalls keep the barbarians out there at bay? Do they meet your expectations? This paper will help you find out. Here you will find a guide on how to audit your firewall and your firewall rulebase. Examples provided here are based on Check Point FireWall-1, but should apply to most firewalls.

Where to Start

This paper can help you in one of two situations. First, you have certain expectations of what your firewall can or cannot do and you want to validate those expectations. Second, you do not know what to expect, so you need to audit your firewall to learn more. Either way, this paper can hopefully help you out. We are not going to cover how to audit or "hack" a network; that is a different subject. Also, we are not going to discuss which firewall is better than others; each firewall has its own advantages and disadvantages. What is going to make or break you is not choosing the "best" firewall, but implementing it correctly. That is the purpose of this paper: making sure our firewall is correctly implemented and behaves as we expect it to.

Are you ready for your audit?

Goddard CNE System Security Checklist -- 1997 but pretty decent

CHECKLIS.htm -- same thing ??? the NASIRC Security Checklist for Unix Systems.

Billions and billions of bugs -- good

**** DII COE Security Checklist Version 2.0 -- Australian site -- good

ESM NetRecon report

The first-generation scanners were typically provided as source code that had to be compiled for a specific platform, or as scripts for that platform. The code was typically freely distributed across the Internet, built on the destination machine, and provided security scanning for that platform only. As platforms and operating systems evolved, the scanner had to be rebuilt to execute on each one. All checks were performed sequentially on each platform, and there was no comprehensive reporting capability.

The second-generation scanners brought with them more power and sophistication. These products are commercially available and provide mechanisms to scan multiple platforms and operating systems from a single host in the network. Every check is performed in isolation from all the others. This makes for a methodical sweep through the network, but no thought is carried from one check to the next. These scanners can also provide more extensive reporting on network security.

The third-generation scanner brings a different philosophy of scanning into existence. That difference is the capability to couple multiple scanning data outputs and use that data to scan for additional vulnerabilities. In this way, third-generation scanners operate the same way a "Tiger Team" of security experts would: find as many vulnerabilities as possible, then couple those vulnerabilities to determine whether there are additional ones. Many times those additional vulnerabilities are potentially more dangerous than any of the preceding, because in many instances the data from less secure servers is used to attack a more secure server. This analytical use of the data from the vulnerability scan is the key difference between a second-generation scanner and a third-generation scanner. The second-generation scanner would find the data on the less secure server, but would not use it to attack other, more secure resources.


Metastasis refers to the process by which an attacker propagates a computer penetration throughout a computer network. The traditional methodology for Internet computer penetration is sufficiently well understood to define behavior which may be indicative of an attack, e.g. for use within an intrusion detection system. A new model of computer penetration, distributed metastasis, increases the possible depth of penetration for an attacker while minimizing the possibility of detection. Distributed metastasis is a non-trivial, agent-based methodology for computer penetration, and it points to a requirement for more sophisticated attack detection methods and software to detect highly skilled attackers.

FDIC Risk Assessment Tools and Practices for Information System Security


Oh no! Another security audit! Why do we need another audit?

Sound familiar? Many people have an initial negative perception of audits in general and security-related audits or assessments in particular. However, an audit of security-relevant events is important in ensuring that access to information networks follows the established security policy that firewalls and other protection devices are meant to enforce. Verifying the compliance of access controls on systems connected to the network is a major component of protecting your network. How do you know your network access controls are working unless an independent audit or assessment is done?

forum - Guest Feature Broadening the Scope of Penetration Testing Techniques

forum - Guest Feature Auditing Your Firewall Setup -- Auditing Your Firewall Setup by Lance Spitzner Tue Sep 21 1999

Network Security Audit Report

DRG Digital Resources Group, L -- sample report

Host Analysis

Based on information gained from our Scanner probes to this host, the following conclusions can be made about its overall security. For more information on interpreting this analysis, see the report introduction.

Warning! This host is significantly threatened:

This host can be compromised completely by a remote attacker.

Primary Threats

High risk vulnerabilities are present with these impacts: System Integrity, Accountability, Authorization, Availability


Many of the threats to this host are due to supported services with fundamentally insecure design. These problems may not be easy to solve, and consideration should be given to entirely replacing insecure services with more secure alternatives.

The following graphs depict information about the current host in comparison with other hosts on the network. The value associated with the current host is plotted in red on the bar labelled "Current". Above this, on the bar labelled "Max" is the value associated with the host with the maximum count. Below it is the average value across all hosts on the network (labelled "Avg"), and finally the value of the host with the minimum count.

Net Collections

Nice collection of historically important papers is at Matt's Unix Security Page

***** Advanced Linux security/Securing Linux, Part 2 by Michael H. Warfield (LinuxWorld) Good overview. Nice references. One of the few descriptions of using immutable and append-only attributes on the EXT2 filesystem:

Filesystem partitions
There is a reason for the filesystem standard, and it is well worth your while to invest time in taking full advantage of it. The filesystem can be divided into major partitions, and each partition can be configured and mounted differently. I strongly recommend separate /, /usr, /usr/local, /var, and /home partitions, at the very least.

/usr can be mounted read only and can be considered inviolate for purposes of validation. If anything ever changes in /usr, that change should ring an alarm bell -- literally. Of course, if you change something in /usr yourself, you will know that the change is coming.

The same idea applies to /lib, /boot, and /sbin. If you can make them read-only and alarm any attempts to change files, directories, or permissions, then do it.

It isn't possible to mount all of your major partitions as read only. For example, /var, by its very nature, cannot be read only, and for that reason nothing should be allowed to execute out of it. Things like configuration files for X servers should be symbolically linked to files which are kept in places which can be made read only -- and not through variable storage dumps.

Extending ext2
Use of the append-only and immutable attributes on the ext2 file system can provide enhancements to a secure installation. While not perfect in and of themselves, these attributes can be useful in detecting intrusion attempts when an attacker attempts to install rootkits or other backdoors over existing files. To be sure, such measures can be thwarted once they are detected. But by then, you should already have been notified and made aware of the intrusion.

If you have critical filesystems mounted read only and the files are marked as immutable, an intruder must remount the filesystems and remove the immutable bits -- all without getting caught or triggering an alarm. This is no small feat, and an intruder who recognizes this is more likely to go off in search of more vulnerable prey than risk being caught.

The immutable and append-only attributes are just two of the extended attribute flags on the ext2 filesystems. A file which is flagged as immutable cannot be changed, not even by root. A file flagged as append only can be changed, but it can only have material appended to it. Even the root user cannot modify it in any other way.

These attributes are added to a file by the chattr command, and can be listed with the list attribute or lsattr command. For more information on enhanced file protection through ext2 permissions attributes, see man chattr.

While partitions and ext2 attributes seem simple enough on the surface, they are actually bits of arcana -- and little effort or progress has been made in making them user-friendly. Even sophisticated users and administrators have been known to get tripped up on them, and so you should not treat them trivially.

Secure log files
The immutable and append-only attributes are particularly effective when used in combination with log files and log backups. You should set active log files to append only. When the logs are rotated, the backup log file created by the rotation should be set to immutable, while the new active log file becomes append only. This usually requires some manipulation of your log rotation scripts.
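Such a rotation step might look like this fragment (a sketch; the file names are illustrative, and chattr requires root on an ext2 filesystem):

```
# Rotate messages: freeze the old log, start a fresh append-only one.
chattr -a /var/log/messages          # drop append-only so the file can be moved
mv /var/log/messages /var/log/messages.1
chattr +i /var/log/messages.1        # archived copy becomes immutable
touch /var/log/messages
chattr +a /var/log/messages          # active log is append-only again
killall -HUP syslogd                 # tell syslogd to reopen its log file
```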

[Oct. 20, 1999] Securing Linux The First Steps LG #47

Nearly all Linux distributions available today are insecure right out of the box. Many of these security holes can be easily plugged, but tradition and habit have left them wide open. A typical Linux installation boots for the first time offering a variety of exploitable services like SHELL, IMAP and POP3. These services are often used as points of entry for rogue netizens who then use the machine for their needs, not yours. This isn't just limited to Linux--even the most sophisticated commercial UNIX flavors ship with these services and more running right out of the box.

Securing Network Services

First, gain superuser (root) access to the system and take an inventory of its current network state by using the netstat command (part of net-tools and standard on most Linux systems). An example of its output is shown here:

[root@percy /]# netstat -a
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address   Foreign Address         State
tcp        0      0 *:imap2                 *:*             LISTEN
tcp        0      0 *:pop-3                 *:*             LISTEN
tcp        0      0 *:linuxconf             *:*             LISTEN  
tcp        0      0 *:auth                  *:*             LISTEN  
tcp        0      0 *:finger                *:*             LISTEN  
tcp        0      0 *:login                 *:*             LISTEN  
tcp        0      0 *:shell                 *:*             LISTEN  
tcp        0      0 *:telnet                *:*             LISTEN  
tcp        0      0 *:ftp                   *:*             LISTEN  
tcp        0      0 *:6000                  *:*             LISTEN  
udp        0      0 *:ntalk                 *:*                     
udp        0      0 *:talk                  *:*                    
udp        0      0 *:xdmcp                 *:*                     
raw        0      0 *:icmp                  *:*             7       
raw        0      0 *:tcp                   *:*             7

As you can see from that output, a fresh installation left a number of services open to anyone within earshot. Most of these services are known troublemakers and can be disabled in the configuration file, /etc/inetd.conf.

Open the file with your favorite text editor and begin to comment out any services you do not want. To do this, simply add a ``#'' to the beginning of the line containing the service. In this example, the entire file would be commented out. Of course, should you decide at some point that you would like to offer some of these services, you are free to do so.
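The commenting can also be done non-interactively (a sketch using sed; review the output before installing it over /etc/inetd.conf):

```shell
# comment_service NAME FILE -- print FILE with every uncommented line for
# service NAME prefixed by '#', in inetd.conf format.
comment_service() {
    sed "/^$1[[:space:]]/s/^/#/" "$2"
}
```

For example: `comment_service telnet /etc/inetd.conf > /tmp/inetd.conf.new`, inspect, then move into place.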

Now, restart inetd to reflect the changes. This can be done in a number of ways and can differ from system to system. A simple

killall -HUP inetd

should do the trick. Check the open sockets again with netstat and note the changes.

Next, take a look at what processes are running. In most cases, you'll see things like sendmail, lpd and snmpd waiting for connections. Because this machine will not be responsible for any of these services, they will all be turned off.

In most cases, these services are launched from the system initialization scripts. These can vary somewhat from distribution to distribution, but they are most commonly found in /etc/init.d or /etc/rc.d. Consult the documentation for your distribution if you are unsure. The goal is to prevent the scripts from starting these services at boot time.

If your Linux distribution uses a packaging system, take the time to remove the services you do not want or need. On this example machine, those would be sendmail, any of the ``r'' services (rwho, rwall, etc), lpd, ucd-snmp and Apache. This is a much easier approach and will ensure the services aren't activated accidentally.

Securing X

Most recent distributions enable machines to boot for the first time into an X Window System login manager like xdm. Unfortunately, that too is subject to exploits. By default, the machine will allow any host to request a login window. Since this machine has only one user that logs into the console directly, that feature will need to be disabled as well.

The configuration file for this varies depending on which version of the login manager you are using. This machine is running xdm, so the /usr/X11R6/lib/X11/Xaccess file will need to be edited. Again, add a ``#'' to prevent the services from starting. My Xaccess file looks like this:

#* #any host can get a login window
#* #any indirect host can get a chooser

The changes will take effect when xdm restarts.

{****} Introduction to Linux Security by Michael Jastremski -- nice nine starting steps (see also Adding Security to Common Linux Distributions -- essentially the same paper)

1. Remove all unnecessary network services from your system. Fewer ways to connect to your computer equal fewer ways for an intruder to break in to your computer. Comment out everything you don't need from /etc/inetd.conf. Don't need telnet on this system? Disable it. Same goes for ftpd, rshd, rexecd, gopher, chargen, echo, pop3d and friends. Don't forget to do a 'killall -HUP inetd' after editing inetd.conf. Also don't neglect the /etc/rc.d/init.d directory. Some network services (BIND, printer daemons) are standalone programs started from these scripts.

2. Install SSH. SSH is a drop-in replacement for most of those antiquated Berkeley 'r' commands.

3. Use vipw(1) to lock any non-login accounts. Note that under RedHat Linux, accounts with a null login shell field default to /bin/sh, which is probably not what you want. Also make sure that none of your accounts have null password fields. The following is an example of what the system part of a healthy password file might look like.

  ftp:*:14:50:FTP User:/home/ftp:/bin/sync
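Checking for empty password fields can be scripted (a sketch; it works on any passwd-format file):

```shell
# weak_accounts FILE -- print account names whose password field (field 2)
# is empty; such accounts may allow login with no password at all.
weak_accounts() {
    awk -F: '$2 == "" { print $1 }' "$1"
}
```

Run it as `weak_accounts /etc/passwd` (and against /etc/shadow as well on shadowed systems).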

4. Remove the 's' bits from root-owned programs that don't require such privilege. This can be accomplished by executing the command 'chmod a-s' with the name(s) of the offending files as its arguments. Such programs include, but aren't limited to:

  1. Programs you never use
  2. Programs that you don't want any non-root users to run
  3. Programs you use occasionally, and don't mind having to su(1) to root to run.

I've placed an asterisk (*) next to each program I personally might disable. Remember that your system needs some suid root programs to work properly, so be careful. Alternately, you could create a special group called 'suidexec', place trusted user accounts into this group, chgrp(1) the iffy suid program(s) to the suidexec group, and remove the world execute permissions.
# find / -user root -perm "-u+s"
*/bin/mount -- only root should be mounting
*/bin/umount -- same here
/bin/su -- don't touch this!
*/sbin/cardctl -- PCMCIA card control utility
*/usr/bin/rcp -- Use ssh
*/usr/bin/rlogin -- ditto
*/usr/bin/rsh -- "
*/usr/bin/at -- use cron, or disable altogether
*/usr/bin/lpq -- install LPRNG
*/usr/bin/lpr -- "
*/usr/bin/lprm -- "
/usr/bin/passwd -- don't touch!
*/usr/bin/suidperl -- each new version of suidperl
seems to have a buffer overflow
*/usr/bin/sperl5.003 -- use it only if necessary
/usr/bin/procmail --
*/usr/X11R6/bin/dga -- lots of buffer overflows
in X11 as well
*/usr/X11R6/bin/xterm -- "
*/usr/X11R6/bin/XF86_SVGA -- "
*/usr/sbin/traceroute -- you can stand to type the
root password once in a while.
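The suidexec approach mentioned above comes down to a few commands (a sketch, run as root; suidperl is just an example target):

```
groupadd suidexec                  # create the trusted group
# ...then add the trusted accounts to suidexec in /etc/group...
chgrp suidexec /usr/bin/suidperl   # hand the program to that group
chmod 4750 /usr/bin/suidperl       # keep set-uid, drop world execute
```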

5. Upgrade sendmail. Download the source, unpack it, and read the instructions. Install smrsh (packaged with sendmail) if you have a couple extra minutes; this program addresses many of the concerns people have with sendmail, such as sending email to arbitrary programs. Edit and set the 'PrivacyOptions' option to 'goaway':

        O PrivacyOptions=goaway

If you don't plan to receive internet email, DON'T RUN SENDMAIL IN RECEIVE MODE (sendmail -bd)!. In this case, disable /etc/rc.d/init.d/sendmail.init and do a 'killall -TERM sendmail'. You'll still be able to send outbound email.

6. Upgrade BIND if you use it. The latest BIND can be found at . Otherwise disable it altogether.

7. Recompile the kernel. I usually do this if just to reduce the bloat of the default kernel. HINT: turn on all of the firewalling options even if the computer isn't a firewall.

        # CONFIG_IP_FORWARD is not set
        # CONFIG_IP_MULTICAST is not set
        # CONFIG_IP_MASQUERADE is not set
        # CONFIG_IP_TRANSPARENT_PROXY is not set
        # CONFIG_IP_ROUTER is not set
        # CONFIG_NET_IPIP is not set

8. Apply patches: Any known problems with RedHat's software can be found on their Errata pages, which show which patches apply to your release. RedHat does a very good job of keeping those pages up to date. The pages also include links to the RPM files you'll need, with installation instructions.

9. Configure tcp_wrappers: Tcp_wrappers is a method for controlling which computers on the 'net are permitted to 'talk' to your computer. This package, written by security guru Wietse Venema, sits in front of programs run from inetd (or those linked with its library), consulting its configuration files to determine whether to deny or permit a network transaction. For example, to allow telnet and ftp from home via your ISP, while denying everything else, put the following in /etc/hosts.allow:

        in.ftpd : : allow
        all : all : deny 

SSH, sendmail and other packages can be built with tcp_wrappers support. Read the tcpd(1) manual page for further information.

Let's say I want to scam people's credit card numbers, and don't want to break into a server. What if I could get people to come to me, and voluntarily give me their credit card numbers? Well, this is entirely too easy.

I would start by setting up a web server and copying a popular site to it; the time required to do this with a tool such as wget is around 20-30 minutes. I would then modify the forms used to submit information and make sure they pointed to my server, so I now have a copy that looks and feels like the "real" thing. Now, how do I get people to come to it? I simply poison their DNS caches with my information, so instead of resolving to the real site's address, the name points to my server. Now when people go to the site they end up at mine, which looks just like the real one.

How to prevent being taken

Most forms online are not on secure servers, but the data you provide is usually sent to a secure server, which leads to one of the major problems: the form data may not be going where it should. A simple attack is to have the fake site take the form data without using a secure server at all. How many of you actively check the source HTML of pages you are plugging your credit card data into? The address bar should start with https:// followed by the sitename. You should also examine the HTML source to make sure the form data points to where it should go; you should see something like:

<form method="POST" action="/order.cgi">


<form method="POST" action="">

If a store is using the "GET" method, do not buy from them: any data you enter will be passed along as the query string, and if you look at the text of your address bar you will see your credit card info. If a store specifies a relative link (i.e.: /something/something.cgi) then make sure the current site you are at is a secure server, and that the certificate is legitimate. If the link is absolute and points to an IP address, be suspicious; I personally would not buy if this were the case. Ideally the link should point to something like "", and you should first browse to that site and make sure the certificate is legitimate before hitting the submit button on your order form. Most current SSL attacks are based on fooling the user, more so than breaking the technology. If you are vigilant and check certificates before you submit to sites, you will be a little safer (but not completely).
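Eyeballing the HTML for form targets can be partly automated (a sketch relying on GNU grep's -o option):

```shell
# form_actions FILE -- list the action targets of every form in an HTML
# page, so you can spot forms that submit somewhere unexpected.
form_actions() {
    grep -io 'action="[^"]*"' "$1"
}
```

Save the page locally, then run `form_actions page.html` and check each target.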

SSL certificates contain various pieces of information, such as who issued them, when they were issued, when they expire, who they were issued to, and so forth. Who the certificate was issued to (usually the "subject") is a very important field, as is the issuer field. To view the certificate details, double-click on the lock icon, usually at the bottom left of the screen in Netscape and at the bottom right in Microsoft Internet Explorer. Take, for example, a certificate whose Issuer field looks like:

OU = Secure Server Certification Authority
O = RSA Data Security, Inc.
C = US

The C stands for country, the O for organization (usually the company's name), and the OU stands for organizational unit (a division of the company). The subject field looks like:

CN =
OU = mscom
O = Microsoft
L = Redmond
S = Washington
C = US

The S stands for state, the L for locality (the city), and the CN is the certificate name (the site it applies to). Make sure all these are spelt correctly; many attackers will use domain names that look familiar (such as in order to get legitimate certificates. Taking these precautions every time you use an SSL-secured service is tedious, and underlines one of the major flaws of SSL: it is susceptible to "social engineering" attacks. Another flaw in SSL is that it only secures the session; it doesn't secure the actual transaction. This means that if someone does steal your credit card number and use it online, it is almost impossible to prove that it wasn't actually you that issued the order. SSL does allow the client to authenticate to the server, however very few people have digital certificates compatible with this (I have one, and know of perhaps a half dozen other people, a definite minority). In addition, the major certificate vendors have stopped issuing the personal certificates that guarantee the person's identity, so they are a dead end. There are newer protocols and systems that allow two parties to safely conduct transactions with all these features.
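The same issuer and subject fields can be read without a browser using the openssl command-line tool (a sketch; the function accepts either a host:port to connect to or a saved PEM certificate file):

```shell
# check_cert TARGET -- print the issuer and subject of an SSL certificate.
# TARGET is either a PEM certificate file or a host:port to connect to.
check_cert() {
    if [ -f "$1" ]; then
        openssl x509 -in "$1" -noout -issuer -subject
    else
        echo | openssl s_client -connect "$1" 2>/dev/null |
            openssl x509 -noout -issuer -subject
    fi
}
```

For example, `check_cert www.example.com:443` lets you compare the printed fields with what the site claims to be.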

Linux PR Maximum Security Linux

INDIANAPOLIS, Oct. 21 /PRNewswire/ -- If you place any significant value on the data contained within your Linux operating system, it is critical to implement and maintain strong information security. To help combat the risk of systems abuse and attack from outsiders, Macmillan USA, in association with, recently released Maximum Security Linux. This comprehensive software package provides all of the necessary tools and documentation to keep Linux systems safe and secure.

"With the ever increasing size and complexity of networked software, combined with the sophistication of today's hackers, we are more exposed to security threats than at any other time," says Jim Reavis, Webmaster for, a web site and information services provider. "We are thrilled to join with Macmillan to offer such unique security solutions to the Linux community."

Maximum Security Linux features the Linux Security Suite from This suite supplies Linux administrators with the resources they need to assure systems security: policy guides, best practices FAQs, security tips, as well as the best GNU General Public License security software available for Linux. "Macmillan's retail distribution and SecurityPortal's security knowledge makes for a great partnership," commented Steve Schafer, Sr. Title Manager for Macmillan's Linux software. "Getting this knowledge and these tools into the hands of the Linux user is essential to help ensure the security of the many personal and corporate Linux systems being installed every day."

[Oct. 25, 1999] LJ 68 Transparent Firewalling by Federico and Christian Pellegrin

The authors describe how to split an existing network without affecting the configuration of the machines already present by using the proxy arp technique.

[Oct. 20, 1999] DNS Security - closing the b(l)inds

[Sept.10, 1999] Hacker Mythology 101 -- no longer available

[Aug 15, 1999] tn19990816.html -- Detecting Intruders in Linux

[Aug 14, 1999] - Secure Shell Configuration and Installation [Feature Articles]

[July 17, 1999] SunWorld - The Solaris Security FAQ -- decent, but somewhat outdated

[July 10, 1999] Linux Firewall and Security Site

[June 17, 1999] -- There is a tool from Bell Labs called NSBD (not-so-bad-distribution) that claims to handle the problem of secure distribution over the internet ...

[June 10, 1999] Security Watch (InfoWorld) -- useful collection of article of widely different quality

[June 8, 1999] Unix SysAdm Resources Firewalls & Unix Security -- good collection of links

[June 8, 1999] Securing Linux Part 1 -- Elementary security for your Linux box. Michael H. Warfield

[June 7, 1999] Performance Computing - Top Open-Source Security Tools For UNIX

[June 5, 1999] -- good linux security links

[May 29, 1999] Securing Your Linux Box

[May 29, 1999] Breaking Into Your Own System

[May 17, 1999] LinuxPlanet - Tutorials - Linux network security - Network Services

{***} Dealing with System Crackers -- Basic Combat Techniques, Issue 20, by Andy Vaught. Weak, but contains some useful info:

The first thing you want to check for is the possibility that the intruder is still logged on. A quick way to check is with the 'w' or 'who' commands -- look for someone logged in from a remote machine. The thing to remember about these commands is that they work by reading a file ('utmp', typically found in /var/adm) that keeps track of who is logged in. If the intruder has broken into the root account, he can change that file to make it look like he's not there.

Two good ways of finding such phantom users are to use the ps and netstat programs. Since these query kernel data structures rather than files, they are harder to spoof. Using ps, look for shells and programs that aren't associated with a legitimate user. Netstat is a lesser-used utility used to display the network status. If it is not in the normal system directories, look in /sbin or /usr/sbin. By default netstat displays active Internet connections. Again, look for connections to suspicious sites.

The best solution to an intruder on your system is to immediately disconnect the Ethernet cable. Without giving him any warning, this puts a stop to whatever he is doing and isolates your computer, preventing further damage. Furthermore, it will appear to him that the network has failed-- which is in fact what has happened.
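The cross-check the author recommends -- trust the kernel's view over utmp -- can be scripted. A minimal sketch (standard `who`, `ps`, and `comm`; on a quiet, healthy box the final listing should be empty):

```shell
#!/bin/sh
# Compare the sessions utmp admits to (who) with the ttys the kernel
# process table actually shows (ps). A tty that has live processes
# but no utmp entry deserves a very close look.
utmp_ttys=$(mktemp); ps_ttys=$(mktemp)
who | awk '{print $2}' | sort -u > "$utmp_ttys"
ps -e -o tty= | sed 's/^ *//' | grep -v '^?' | sort -u > "$ps_ttys"
echo "ttys with processes but no utmp entry:"
comm -13 "$utmp_ttys" "$ps_ttys"
rm -f "$utmp_ttys" "$ps_ttys"
```

Remember that on a box where root has been compromised even ps can lie; this check only raises the bar, it does not clear the system.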

{**+} More Linux Security Issue 16 by Andrew Berkheimer

{**+} Learning about Security Issue 15 by Jay Sprenkle

Firewalling and Proxy Server HOWTO

{***+} Finding Evidence of Your Cracker By Chris Kuethe -- useful explanation of the fact that crackers usually make a lot of mistakes...

One fine day I was informed that we'd just had another break-in, and it was time for me to show my bosses some magic. But like a skilled cardshark who's forced to use an unmarked deck, my advantage of being at the console had been tainted. Our cracker had used a decent rootkit and almost covered her tracks.

In general, a rootkit is a collection of utilities a cracker will install in order to keep her root access. Things like versions of ps, ls, passwd, sh, and other fairly essential utilities will be replaced with versions containing back doors. In this way, the cracker can control how much evidence she leaves behind. Ls gets replaced so that the cracker's files don't show up, and ps is modified so that her processes are not displayed either. Commonly a cracker will leave a sniffer and a backdoor hidden somewhere on your machine. Packet sniffers - programs that record network traffic which can be configured to filter for login names and passwords - are not part of a rootkit per se, but they are nearly as loved by hackers as a buggered copy of ls. Who wouldn't want to try to intercept other legitimate users' passwords?

In nearly all cases, you can trust the copy of ls on the cracked box to lie like a rug. Don't bet on finding any suspicious files with it, and don't trust the filesizes or dates it reports; there's a reason why a rootkit binary is generally bigger than the real one, but we'll get there in a moment. In order to find anything interesting, you'll have to use find. Find is a clever version of 'ls -RalF | grep | grep | ... | grep '. It has a powerful matching syntax to allow precise specification of where to look and what to look for. I wasn't being picky - anything whose name began with a dot was worth looking at, so the command was: find / -name ".*" -ls

Sandwiched in the middle of a ton of useless temporary files and the usual '.thingrc' files (settings like MS-DOS's .ini) we found '/etc/rc.d/init.d/...'. Yes, with 3 dots. One dot by itself isn't suspicious, nor are two. Play around with DOS for about two seconds and you'll see why: '.' means "this directory" and '..' means "one directory up." They exist in every directory and are necessary for the proper operation of the file system. But '...' ? That has no special reason to exist.

Well, it was getting late, and I was fried after a day of class and my contacts were drying up, so I listed /etc/rc.d/init.d/ to check for this object. Nada. Just the usual SysV / RH5.1 init files. To see who was lying, I changed directory to /tmp/foo, then echoed the current date into a file called '...' and tried ls on it. '...' was not found. I'd found the first rootkit binary: a copy of ls written to not show the name '...'. I will admit that find is another target to be compromised; in this case it was still clean and gave me some useful information.
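The '...' trick generalizes into a quick self-test you can run on any suspect box. A sketch (works in a scratch directory, so it is safe to try anywhere):

```shell
#!/bin/sh
# A clean ls has no opinion about a file named '...'; a rootkit ls
# patched to hide the cracker's stash directory will refuse to show it.
dir=$(mktemp -d)
date > "$dir/..."
if ls -a "$dir" | grep -qx '\.\.\.'; then
    echo "ls shows '...' -- probably honest"
else
    echo "WARNING: ls hides '...' -- suspect a rootkit"
fi
rm -rf "$dir"
```

The same probe works for any name a particular rootkit is known to hide; '...' is just the one this cracker chose.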

Now that we knew that '...' was not part of a canonical distribution, I moved into it and had a look. There were only two files; linsniffer and tcp.log. I viewed tcp.log with more and made a list of the staff who would get some unhappy news. Ps didn't show the sniffer running, but ps should not be trusted in this case, so I had to check another way.

We were running in tcsh, an enhanced C-syntax shell which supports asynchronous (background) job execution. I typed './linsniffer &' which told tcsh to run the program called linsniffer in this directory, and background it. Tcsh said that was job #1, with process ID 2640. Time for another ps - and no linsniffer. Well, that wasn't too shocking. Either ps was hacked or linsniffer changed its name to something else. The kicker: 'ps 2640' reported that there were no processes available. Good enough. Ps got cracked. This was the second rootkit binary. Kill the currently running sniffer.
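The PID test is easy to repeat on any box: start a throwaway background job and ask ps about it by PID. A minimal sketch (POSIX `ps -p`; any harmless long-running command will do in place of sleep):

```shell
#!/bin/sh
# An honest ps must be able to report a process you just started.
sleep 30 &
pid=$!
if ps -p "$pid" > /dev/null 2>&1; then
    echo "ps can see PID $pid -- ps looks clean"
else
    echo "ps cannot see PID $pid -- ps may be trojaned"
fi
kill "$pid" 2>/dev/null
```

As with ls, a positive result only means ps is not hiding *this* process; some rootkits hide only specific names.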

Now we check the obvious: /etc/passwd. There were no strange entries and all the logins worked. That is, the passwords were unchanged. In fact the only weird thing was that the file had been modified earlier in the day. An invocation of last showed us that 'bomb' had logged in for a short time around 0235h. That time would prove to be significant. Ain't nobody here but us chickens, and none of us is called bomb...

I went and got my crack-detection disk - a locked floppy with binaries I trust - and mounted the RedHat CD. I used my clean ls and found that the real ls was about 28K, while the rootkit one was over 130K! Would anyone like to explain to me what all those extra bytes are supposed to be? The 'file' program has our answer: ELF 32-bit LSB executable, Intel 80386, version 1, dynamically linked, not stripped. Aha! So when she compiled it, our scriptkiddie forgot to strip the file. That means that gcc left all its debugging info in the file. Indeed, stripping the program brings it down to 36K, which seems about right for the extra functionality (hiding certain files) that was added.

Remember how I mentioned that the increased filesize is important? This is where we find out why. First, new "features" have been added. Second, the binaries have verbose symbol tables, to aid debugging without having to include full debug code. And third, many scriptkiddies like to compile things with debugging enabled, thinking that it'll give them more debug-mode backdoors. Certainly our 'kiddie was naive enough to think so. Her copy of ls had a full symbol table and was compiled from /home/users/c/chlorine/fileutils-3.13/ls.c - which is useful info. We can fetch canonical distributions and compare those against what's installed to get another clue into what she may have damaged.
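You can reproduce the size experiment with `file` and binutils `strip` on a scratch copy of any binary. A sketch (here /bin/ls; the exact sizes and `file` output will differ per system):

```shell
#!/bin/sh
# 'not stripped' in file(1) output means the symbol table is intact --
# exactly the telltale the author used. Stripping a scratch copy shows
# how much of the bloat was just symbols rather than code.
tmp=$(mktemp)
cp /bin/ls "$tmp"
chmod u+w "$tmp"
file "$tmp"
before=$(wc -c < "$tmp")
strip "$tmp"
after=$(wc -c < "$tmp")
echo "size before strip: $before bytes, after: $after bytes"
rm -f "$tmp"
```

On a suspect box, run `file` and `strings` from your trusted media, not the installed copies.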

I naively headed for the log files, which were, of course, nearly as pure as the driven snow. In fact the only evidence of a crack they held was a four day gap. Still, I did find out something useful: this box seemed to have TCP wrappers installed. OK, those must have failed somehow since she got into our system. On RH51, the TCP wrappers live in /usr/sbin/in.* so what's this in.sockd doing in /sbin? Being Naughty, that's what. I munged in.sockd through strings, and found some very interesting strings indeed. I quote: You are being logged , FUCK OFF , /bin/sh , Password: , backon . I doubt that this is part of an official RedHat release.

I quickly checked the other TCP wrappers, and found that RedHat's in.rshd is 11K, and the one on the HD was 200K. OK, 2 bogus wrappers. It seems that, looking at the file dates, this cracked wrapper came out the day after RH51 was released. Spooky, huh?

I noticed that these binaries, though dynamically linked, used libc5, not the libc6 which we have. Sure, libc5 exists, but nothing, and I mean nothing at all, uses it. Pure backward-compatibility code. After checking the other suspect binaries, they too used libc5. That's where strings and grep (or a pager) get used.
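The libc check is quick to script with `ldd`; a binary linked against an obsolete C library stands out immediately. A sketch (note the caveat in the comment -- on a seriously compromised box, ldd itself invokes the dynamic loader, so do this from trusted boot media):

```shell
#!/bin/sh
# Print the C library each suspect binary was linked against.
# Caution: ldd runs the system's dynamic loader against the target,
# so only use it on binaries you can afford to partially execute.
for f in /bin/ls /bin/ps /bin/netstat; do
    [ -e "$f" ] || continue
    lib=$(ldd "$f" 2>/dev/null | grep -o 'libc[^ ]*' | head -n 1)
    printf '%s -> %s\n' "$f" "${lib:-static or no libc}"
done
```

On the author's box every legitimate binary would have reported libc6; the libc5 outliers were the rootkit.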

Now I'm getting bored of looking by hand, so let's narrow our search a little using find. Try everything in October of this year... I doubt our cracker was the patient sort - look at her mistakes so far - so she probably didn't get on before the beginning of the month. I don't claim to be a master of the find syntax, so I did this:

find / -xdev -ls | grep "Oct" | grep -v "19[89][0-7]" > octfiles.txt

In English: start from the root, and don't check other drives; print out all the file names. Pass this through a grep which filters out everything except "Oct", and then another grep to filter out years that I don't care about. Sure, the 80's produced some good music (Depeche Mode) and good code (UN*X / BSD), but this is not the time to study history.
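With GNU find the same month filter can be expressed on real timestamps instead of grepping date strings, which also survives locales that don't print "Oct". A sketch, demonstrated on a scratch tree (-newermt is a GNU findutils extension; the dates are placeholders):

```shell
#!/bin/sh
# -newermt selects on modification time directly, so no grep games.
dir=$(mktemp -d)
touch -d '2024-10-15 12:00' "$dir/changed_in_oct"
touch -d '2024-08-01 12:00' "$dir/old_file"
# Everything modified during October 2024:
find "$dir" -type f -newermt '2024-10-01' ! -newermt '2024-11-01'
rm -rf "$dir"
```

On a real system the equivalent of the author's command would be something like `find / -xdev -type f -newermt '1998-10-01' ! -newermt '1998-11-01' -ls > octfiles.txt`, with the dates adjusted to the month under suspicion. Keep in mind a careful cracker can reset timestamps with touch, so this only catches the sloppy ones.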

One of the files reported by the find was /sbin/in.sockd. Interestingly enough, ps said that there was one unnamed process with a low (76) process id owned by uid=0, gid=26904. That group is unknown on campus here - whose is it? And how did this file get run so early as to get that low a PID? In.sockd has that uid/gid pair... funky. It has to get called from the init scripts since this process appears on startup, with a consistently low PID. Grepping the rc.sysinit file for in.sockd, the last two lines of the file are:

#Start Socket Deamon
exec in.sockd

Yeah, sure... That's not part of the normal install. And Deamon is spelled wrong. Should a spellchecker be included as a crack detector? Well, RedHat isn't famous for poor docs and tons of typos, but it is possible to add words to a dictionary. So our cracker tried to install a backdoor and tried to disguise it by stuffing it in with some related programs. This adds credibility to my theory that our cracker has so far confined her skills to net searches for premade exploits.

The second daemon that was contaminated was rshd. About 10 times as big as the standard copy, it can't be up to anything but trouble. What does rsh mean here? RemoteSHell or RootShell? Your guess is as good as mine.

So far what we've found are compromised versions of ls, ps, rshd, in.sockd, and the party's just beginning. I suggest that once you're finished reading this, you do a web search for rootkit and see how many you can scrounge up and defeat. You have to know what to look for in order to be able to remove it.

While the log files had been all but wiped clean, the console still had some errors printed on it, quite a few after 0235h. One of these was a refusal to serve root access to / via nfs at 0246h. That coincided perfectly with the last access time to the NFS manpage. So our scriptkiddie found something neat, and she tried to mount this computer via NFS, but she didn't set it up properly. All crackers, I'd say, make mistakes. If they did everything perfectly we'd never notice them and there would be no problems. But it's the problems that arise from their flaws that cause us any amount of grief. So read your manuals. The more thoroughly you know your system, the more likely you are to notice abnormalities.

... ... ...

Appendix A: Programs you want in a crack-detection kit

For security reasons these should all be statically linked.
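Static linking matters because a dynamically linked "trusted" binary would still pull shared libraries off the cracked system. A quick way to audit the kit with `file` (the /mnt/floppy/bin path is a hypothetical mount point for the rescue disk):

```shell
#!/bin/sh
# Confirm every tool in the rescue kit is statically linked; a dynamic
# binary would load (possibly trojaned) shared libraries from the
# compromised host. /mnt/floppy/bin is an assumed mount point.
for f in /mnt/floppy/bin/*; do
    [ -f "$f" ] || continue
    if file "$f" | grep -q 'statically linked'; then
        echo "OK      $f"
    else
        echo "DYNAMIC $f  (rebuild with cc -static)"
    fi
done
```

Run this from the kit's own copy of sh and file if you can; checking the kit with the suspect system's tools defeats the purpose.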

{***} Securing Your Linux Box by Peter Vertes -- published in Issue 34 of Linux Gazette, November 1998

Musings on open source security models

{**+} Linuxetc: An introduction to Linux and Unix security -- July 1997

[August 10, 1998] Holy Intruders! IP-Based Security Auditing Tools

Network Flight Recorder

How to Hacker-Proof Your Computer Systems

Performance Computing - Features - Ten Commandments For Converting Your Intranet Into A Secure Extranet

VPNs For Enterprise Internetworking

Access Exploits In UNIX And Windows NT

Kerberos- A Secure Passport


David Curry Wietse Venema Dan Farmer Peter Galvin Lance Spitzner Alex Noordergraaf Reg Quinton Sean Boran
Matt Bishop Papers

Alex Noordergraaf

***** SolarisTM Operating Environment Minimization for Security: A Simple, Reproducible and Secure Application Installation Methodology by Alex Noordergraaf and Keith Watson, December 1999. Great paper. This is probably the No. 1 paper explaining how to remove unnecessary packages -- they actually consider a very practical case of Solaris + Netscape Enterprise Server. The paper is a little weak on the tool side, though.

The Solaris Operating Environment installation process requires the selection of one of four installation clusters:

  • Core
  • End User
  • Developer
  • Entire Distribution

Each installation cluster represents a specific group of packages (operating system modules) to be installed. This grouping together of packages into large clusters is done to simplify the installation of the OS for the mass market. Because each of these installation clusters contains support for a variety of hardware platforms (SolarisTM Operating Environment (Intel Platform Edition), microSPARCTM, UltraSPARCTM, UltraSPARC II, and so on) and software requirements (NIS, NIS+, DNS, OpenWindowsTM, Common Desktop Environment (CDE), Development, CAD, and more), far more packages are installed than will actually ever be used on a single Solaris Operating Environment system.

The Core cluster installs the smallest Solaris Operating Environment image. Only packages that may be required for any SPARCTM or Solaris Operating Environment (Intel Platform Edition) system are installed. The End User cluster builds on the Core cluster by also installing the window managers included with the Solaris Operating Environment (OpenWindows and CDE). The Developer and Entire Distribution clusters include additional libraries, header files, and software packages that may be needed on systems used as compile and development servers.

The size of the clusters varies significantly: the Core cluster contains only 39 packages and uses 52 MBytes; the End User cluster has 142 packages and uses 242 MBytes; the Developer cluster has 235 packages and consumes 493 MBytes of disk space. Experience to date has shown that in many cases, a secure server may require only 10 Solaris Operating Environment packages and use as few as 36 MBytes of disk space.

Installing unnecessary services, packages, and applications can severely compromise system security. One well known example of this is the rpc.cmsd daemon, which is unnecessary on many data center systems. This daemon is installed and started by default when the End User, Developer, or Entire Distribution cluster is chosen during the installation process.

There have been many bugs filed against the rpc.cmsd subsystem of OpenWindows/CDE in the last few years, and at least two CERT advisories (CA-99-08, CA-96.09). To make matters even worse, scanners for rpc.cmsd are included in the most common Internet scanning tools available on the Internet. The best protection against rpc.cmsd vulnerabilities is to not install the daemon at all, which avoids having to ensure it is not accidentally enabled.
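If removing the package is not an option, at least verify the daemon is not offered by inetd and disable it there. A sketch against a sample config (the rpc.cmsd line shown is the stock Solaris inetd entry; on a real box you would point conf at /etc/inet/inetd.conf and HUP inetd afterwards):

```shell
#!/bin/sh
# Find and comment out the calendar manager service in inetd.conf.
# Demonstrated on a throwaway sample file for safety.
conf=$(mktemp)
cat > "$conf" <<'EOF'
ftp        stream  tcp      nowait root /usr/sbin/in.ftpd     in.ftpd
100068/2-5 dgram   rpc/udp  wait   root /usr/dt/bin/rpc.cmsd  rpc.cmsd
EOF
grep -n 'rpc\.cmsd' "$conf"                 # is the service offered?
sed '/rpc\.cmsd/s/^/#/' "$conf" > "$conf.new" && mv "$conf.new" "$conf"
grep -n 'rpc\.cmsd' "$conf"                 # now commented out
rm -f "$conf"
```

Disabling in inetd.conf is weaker than not installing the package, as the paper argues: a later patch or admin edit can silently re-enable the line.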

The problem described above is well known in the computer industry, and there are hundreds of similar examples. Not surprisingly, almost every security reference book ever written discusses the need to perform "minimal OS installations" [Garfinkel]. Unfortunately, this is easier said than done. Other than the occasional firewall, no software applications are shipped with lists of their package requirements, and there's no easy way of determining this information other than through trial and error.

Because it is so difficult to determine the minimal set of necessary packages, system administrators commonly just install the Entire Distribution cluster. While this may be the easiest to do from the short-term perspective of getting a system up and running, it makes it nearly impossible to secure the system. Unfortunately, this practice is all too common, and is even done by so-called experts brought in to provide infrastructure support, web services, or application support. (If your organization is outsourcing such activities, be sure to require the supplier to provide information on what their OS installation policies and procedures are, or you may be in for some unpleasant surprises.)

The rest of this article presents one method for determining the minimal set of packages required by a particular application--the iPlanetTM Enterprise Server. Future articles will discuss other applications. The tentative list includes NFSTM Servers (with SecureRPC and Solstice DiskSuiteTM), iPlanetTM WebTop, and SunTM Cluster. If you have followed this procedure and developed the scripts for a particular application, please forward them to the authors for inclusion in future articles.

***** SolarisTM Operating Environment Network Settings for Security
-by Keith Watson and Alex Noordergraaf
Discusses the many low-level network options available within Solaris and their impact on security.

Building Secure N-Tier Environments by Alex Noordergraaf - This article provides recommendations on how to architect and implement secure N-Tier commerce environments.

Lance Spitzner

Reg Quinton

Reg Quinton's documentation about suid programs on Solaris is probably the best available on the Net. He also authored a hardening package.

Security How to Documents

  • Securing SNMP on Solaris (04-Oct-2000)

    The default SNMP configuration, while perhaps reasonably secure, can be made substantially more secure with a little effort. Recommendations show how to eliminate three daemons and nine network services. Tested on Solaris 8, should apply to other versions.

  • Solaris Network Hardening (19-Sept-2000)

    Hardening Solaris by removing network services. Details how to determine services your system offers and an analysis of vendor provided services. Recommendations on services to remove and a kit to implement different policies.

  • Solaris: Network Settings for Security (29-Jun-2000)

    Shell script from Sun blueprint paper hardens network configuration to better protect Solaris systems against various vulnerabilities. Recommended configuration for all Solaris systems.

  • Solaris -- Patch Management (04-Dec-2000)

Patch management is fundamental to security. Two simple tools we've developed for patch management are presented -- CheckPatches to list outstanding patches and GetApplyPatch to apply them. Traditional Unix tar kit available.
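The network-settings script mentioned in the list above boils down to a series of Solaris `ndd` calls. A representative excerpt in the spirit of the Sun blueprint (Solaris-only config fragment -- ndd(1M) does not exist on Linux; the parameters are standard /dev/ip and /dev/tcp tunables, run as root at boot):

```shell
#!/bin/sh
# Representative Solaris network hardening tunables (ndd is Solaris-only).
ndd -set /dev/ip ip_forwarding 0                   # don't route packets
ndd -set /dev/ip ip_forward_src_routed 0           # drop source-routed packets
ndd -set /dev/ip ip_forward_directed_broadcasts 0  # no smurf relaying
ndd -set /dev/ip ip_ignore_redirect 1              # ignore ICMP redirects
ndd -set /dev/ip ip_send_redirects 0               # don't emit them either
ndd -set /dev/ip ip_respond_to_echo_broadcast 0    # no broadcast ping replies
ndd -set /dev/ip ip_strict_dst_multihoming 1       # bind addresses per interface
ndd -set /dev/tcp tcp_conn_req_max_q0 4096         # more SYN-flood headroom
```

ndd settings do not survive a reboot, which is why the blueprint packages them as an init script rather than one-off commands.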

UW/SISP: Hardening Solaris 2.6 (access controlled) -- securing a Solaris server for the SISP project, with some related documents on hardening.

Sean Boran

Boran Consulting Home Page

Peter Galvin (columnist in SunWorld)

David Curry


Wietse Venema

Dan Farmer

Dan Farmer -- the author of one of the first internal vulnerability scanners (COPS) and co-author of the much-hyped "Satan" -- is a pretty controversial figure. Some call him an exhibitionist.


Random Findings

NFR In The News

Linux Security Page by Alexander O. Yuriev. This largely outdated site contains four papers written by Alexander Yuriev on Unix security.

Classic examples of hype:

LG Articles -- nice fairy tale ;-)


FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.

ABUSE: IPs or network segments from which we detect a stream of probes might be blocked for no less than 90 days. Multiple types of probes increase this period.


Copyright © 1996-2016 by Dr. Nikolai Bezroukov. The site was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.

The site uses AdSense, so you need to be aware of the Google privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.


This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to make a contribution, supporting development of this site and speed up access. In case is down you can use the at


The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

Last modified: September 12, 2017