First of all, security is a human-related feature (or, more correctly, an organizational-IQ-related feature -- organizations with stupid management usually do not have great security), and Solaris admins are often more qualified than Linux admins and more professionally trained; large corporations often require them to be certified (although Red Hat certification is better than Sun certification). They often are older and have more years under the belt, although this is both an advantage and a disadvantage (see the remark about firewalls below).
The Solaris security advantage rests on a combination of unique features: RBAC (which now includes the concept of privileges) and zones. In addition, ACLs are more widely used in Solaris, although they are now fully available in Linux; Linux admins typically do not know this feature and as a result do not use it.
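As a concrete illustration of the RBAC-plus-privileges combination, here is a sketch of what the relevant configuration can look like on Solaris 10. The user name, profile, and privilege chosen are invented for this example, not taken from the text above.

```
# Sketch of an /etc/user_attr entry on Solaris 10 (user, profile, and
# privilege names are invented for this example):
#
# Grant 'webadmin' the "Network Management" rights profile and a default
# privilege set of basic plus net_privaddr (bind to privileged ports),
# instead of handing out the root password:
webadmin::::profiles=Network Management;defaultpriv=basic,net_privaddr
```

A user can then inspect the privilege sets of a running shell with `ppriv $$`.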
There are some minor things, like the fact that the primary CPU for Solaris is a pretty obscure RISC CPU (UltraSPARC), which kills most exploits dead (security via obscurity), but this is now being matched by IBM, which is trying to promote Linux on Power CPUs.
Also, Linux is now the Microsoft of the Unix world, which means that most exploits are directed at popular Linux distributions, especially Red Hat.
Solaris filesystem security is weaker than in FreeBSD, but somewhat better than in Linux. For example, you can make /usr read-only in Solaris, and JASS (the standard hardening toolkit) does exactly that. A Sun BluePrints article (PDF) describes the Solaris Fingerprint Database (sfpDB), a security tool that enables users to verify the integrity of files distributed with the Solaris OS. It is different from, and better than, RPM-based security checking.
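A read-only /usr can be expressed directly in /etc/vfstab; the device names below are placeholders, so this is a sketch rather than a drop-in line:

```
#device to mount   device to fsck      mount point  FS type  fsck pass  mount at boot  options
/dev/dsk/c0t0d0s3  /dev/rdsk/c0t0d0s3  /usr         ufs      1          no             ro
```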
Linux has a distinct advantage in the wider and more established use of a local firewall (Red Hat training actually presupposes that this feature is enabled; Solaris training does not).
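For illustration, a minimal host firewall of the kind a default Red Hat install enables might look like the following iptables-restore rules file. The single open port (SSH) is an assumption for the example, not something the text above specifies:

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Allow loopback and established traffic; open only SSH to the outside.
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```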
The networking stack is better engineered in Solaris and as such is more secure. Solaris implemented IPv6 earlier than Linux, and its implementation is more mature and less prone to problems.
As for application security, SUSE AppArmor is superior to anything Solaris has in application security hardening. See http://en.opensuse.org/Apparmor
Friday Nov 02, 2007
It is with great pleasure that I can (albeit belatedly) announce the arrival of the latest security guidance from both Sun and the Center for Internet Security. Working together, in concert with representatives from academia, industry and government, we have published security guidance for Solaris 10 11/06 and 8/07. This content represents the best and most complete form of Solaris security guidance ever produced.
Not only are the recommendations based upon industry consensus but they are also supported by Sun. What is even better is that this material was completed with support and feedback from both the National Security Agency and the Defense Information Systems Agency. I would like to especially thank both organizations for their significant contributions to this material! This iteration brings us (Sun, CIS, NSA and DISA) closer than ever toward a single, consistent set of security recommendations for the Solaris OS.
The Benchmark itself has been restructured. Today, it comes in the form of two documents: (1) the core hardening Benchmark itself and (2) an extended appendix covering additional Solaris security controls with examples and references for more information. Further, the Benchmark itself has been significantly reorganized to improve its correctness and flow. Thanks to Carole, our editor!
Some new elements to the Benchmark include headers for each item that tell you if a given recommendation is a Solaris 10 default value, for what platforms it applies and even what configuration settings you need to implement the recommendation using the Solaris Security Toolkit. Overall the document is a tremendous step forward toward bringing the world the best available insight into how to harden and more generally secure their Solaris systems. There have also been quite a few updates to account for changes and enhancements in Solaris. The Solaris Security Appendix document is completely new and provides an overview of the security capabilities of the Solaris OS with many examples and references for more information including step-by-step BluePrints and HOWTOS. If you are responsible for managing or securing a Solaris 10 system, these documents are for you!
You can find a copy of these documents at both the CIS web site as well as on OpenSolaris.org (CIS Solaris Benchmark, Solaris Security Appendix). As always, feedback and ideas for future revisions are encouraged! If you are interested in participating in future versions of these documents, please consider joining the CIS Unix Benchmark Team. Contact Dave for more information!
December 30, 2013 | The Register
A BBC FTP server ftp.bbc.co.uk was compromised by a Russian hacker and access to it touted online, say computer security researchers.
The miscreant behind the attack on the internet-facing file store tried to sell access to the infiltrated system to other crims on Christmas Day, we're told. Hold Security – which this year has helped break news of data heists at Adobe and a top-flight limo company – spotted someone trying to sell access to ftp.bbc.co.uk, according to Reuters.
FTP is a 1970s vintage protocol for transferring information in bulk over the internet; its use is discouraged because usernames and passwords to log into accounts are sent over the network unencrypted, although there are ways to establish secure connections.
The hacked service was used by reporters to file material from the field, and by advertisers to upload video to BBC Worldwide channels. The invaded computer was cleaned up over the weekend.
Right now the system appears to be running ProFTPD 1.3.3g on Solaris, but there's nothing to indicate that was the vulnerable software. However, versions of ProFTPD prior to 1.3.3g suffer from a use-after-free bug (CVE-2011-4130) that allows an attacker to execute code remotely on the machine hosting the server; a flaw that's been known about since 2011.
"The only other information that I can offer is that the hacker was offering a screenshot proving that he had administrative access to the BBC server," Alex Holden, chief information security officer at Hold Security, told BBC News.
It is not clear how deep the hacker managed to penetrate Auntie: specifically, whether the miscreant obtained just an FTP admin account login, gained control of the user account running the FTP daemon, or gained full control of the machine running the file-transfer server. Don't forget, a compromised computer could have acted as a stepping stone to other systems within the Beeb's network.
If it's not broke...
See title. Just because it's old, doesn't mean it doesn't work. And with very much less overhead than sending big files via HTTP.
Though granted, a restricted-access FTP site should really be sFTP.
Re: If it's not broke...
"should be living on the DMZ"
Should be on A DMZ, not THE DMZ. Why should my FTP server be anywhere near the web server or mail server? Modern firewall design allows individual dirty networks for services so why only have a single big dirty network playground for hackers? The fewer systems they can access from the compromised one the less likely it is they will spread to the internal networks.
I also hate the term DMZ since the dirtiest network after internet is often the internal client one, and DMZ sits next to the internal networks rather than between them and internet these days so DMZ is very outdated.
Re: If it's not broke...
Yup, nothing wrong with FTP if you ask me. It's simple, robust and can be made as secure as a remote connection can be. Certainly the method of choice for the Beeb's field reporters, safer and more robust than pretty much anything more "current", bar sFTP (which ain't that "current" itself, if a good 20 years younger).
Re: If it's not broke...
"safer and more robust than pretty much anything more "current""
There is also FTPS which predates SFTP by a few years while using the actual FTP protocol and daemons. Of course, the protocol isn't what the problem was here, it was a software bug leading to rights escalation and so could just as easily affect SCP/SFTP. It's less likely that anyone would find the bugs in the FTP/S daemon these days when compared to SFTP due to lower usage but if someone wants your system there is usually a way.
Re: If it's not broke...
"FTP is a 1970s vintage protocol".
Yes, like TCP and IP and many others in everyday use. What's your point?
Re: If it's not broke...
> There is also FTPS
I don't usually consider FTPS a separate protocol; it's still FTP
> a software bug leading to rights escalation and so could just as easily affect SCP/SFTP.
Indeed. Especially SCP, which is known to be vulnerable (which is why most "scp" clients actually use SFTP under the hood).
Re: If it's not broke...
The problem with the "If it's not broke, don't fix it" attitude is that, when it infects management, it is used as an excuse to deny or delay all preventative maintenance, patching, and so on, resulting eventually in system failures and security breaches due to outdated, buggy, and vulnerable versions of software or sub-optimal configuration. Management would often prefer failures they can blame on software bugs or attackers to a failed modification or patch being blamed on their own department.
Yes, FTP is a relatively lightweight and efficient protocol, but you still need to keep up with the patching and improve security (such as switching to sFTP or FTPS as you mentioned).
Re: If it's not broke...
The problem with the "If it's not broke, don't fix it" attitude
And when the Damagement have the desire to fix everything regardless of whether it's broke, we end up with the Windows 8 UI. The problem is in how to educate the bosses enough that they understand what "maintenance" is without going batshit crazy on "new". Or worse, "better because it's newer".
Why blame the BBC? This stuff was outsourced to Siemens in 2004. I should know, I was one of the poor sods who was sold!
That said, from the sounds of it, the FTP access pre-dates even BBC Technology, back to the days of the beardy-weirdy geniuses at Kingswood Warren.
The service part of Siemens was bought out by Atos.
There are many parts of BBC IT outsourced to Atos (BBC Desktop for example) but much is run in house as well by BBC Technology / Tech Ops (most of the web based services and as noted above, BBC Worldwide).
There will probably be much finger pointing as there often is with these things. That's if there was any serious threat. A "stepping stone" it may have been, but into what exactly? And let's face it, the Beeb is just a media organisation, not a bank or a holder of huge amounts of important personal data.
Maybe someone could have done us a favour and taken Radio 1 off the air.
Re: Siemens: To be fair...
It's pretty safe to say that the BBC have enormous amounts of personal data.
Given the prevalence of password reuse, they hold plenty to be concerned about even if you only think in terms of email/password pairs. That said, I do see your point. Anybody with best practices in mind when watching "World's Craziest Fools" is fine.
*nips off to change some passwords*
The 1337day site has an exploit for sale which claims to be for ProFTPD 1.3.3g and quotes the BBC FTP site. Some of their exploits for sale have been a bit dubious in the past, so rather than being a new ProFTPD vulnerability it may just be instructions on a misconfiguration of that particular server.
Always have loved the simplicity and stability of FTP personally and added secure SSL functionality has been available for years on many clients/servers. FxP'ing between servers still happens!
"account running the ftp daemon"
Since this was the BBC, what are the chances that FTP was running as root?
Many of our clients still use FTP to send data to us every few minutes throughout the day (gas industry). This is all over Europe and beyond, not just in the UK, so FTP is very far from dead. As for the attack itself: shocker, an FTP account where the username and password are sent in plain text was compromised (although it seems the attacker here had it even easier). That is why an FTP box just does FTP and sits out on its own in the DMZ with only the required ports open to the outside (in other words, no SSH available to the outside). I do also wonder if they restrict user accounts; I only allow third parties FTP and FTPS access (and that FTPS access is not run by my SSH daemon either). They have no shell, so they would have to find a vulnerability in order to elevate themselves somehow. Even if they did compromise the box, it wouldn't help them much here as it has no access to anything else.
I live under the assumption I have been hacked or will be, makes it much easier to manage risk. I hope the BBC do the same.
Software Clients - pass the blame
Maybe this is something to do with Microsoft, having failed to support sftp clients/servers as part as their supplied install packages, whilst maintaining support for ftp.
Re: Software Clients - pass the blame
Erm, but Microsoft has supplied and supported an RFC based FTPS (FTP over SSL) server ever since IIS7....
Re: Software Clients - pass the blame
FTPS != SFTP (which is far more widely used IME)
They use a pretty convoluted aspera based ingest system for almost everything important, content wise anyway.
That said, the bbc is a loose collection of individuals who basically hate each other and are allowed to operate as virtually separate companies.
There are hundreds of FTP servers operating internally and externally for various purposes: getting files on and off the system for engineering purposes, providing logs to suppliers for support, just the usual mash-up.
They use a broad range of operating systems ranging from Windows 3.11 all the way up to Win8 and a whole host of *nix-like systems. Nothing gets patched, in case a patch upsets some unsupported 15-year-old mission-critical software that Dave from FM&T wrote in 1999.
I swear, only a few years ago I looked after Ceefax, which has only just been switched off. When asked to find out why it kept falling over, I found the servers in a cupboard: a rag-tag assortment of 386s, the occasional Pentium 1, and, well, you can imagine the rest.
They do take perimeter network security reasonably seriously though so I very much doubt that this FTP server will have made an easy stepping stone into the rest of the network.
ITN hacked via FTP too
Many years ago (about 1999/2000) I was called in to deal with a hack via FTP which defaced ITN's web site. That was a Solaris box too - a Sun E450.
It wasn't a technically difficult hack though. FTP was world available, the username was ITN and the password ITN. This account had root privilege. Doh!
On reviewing the excellent security benchmarks available from the Center for Internet Security (CIS), I wanted to automate the security checks of my Solaris 10 servers and produce a highly detailed report listing all security warnings, together with recommendations for their resolution. The solution was seccheck - a modular host-security scanning utility. It is easily expandable and feature rich, although at the moment only available for Solaris 10.
This doesn't cover 100% of the checks recommended by CIS, but it has 99% of them - the ones that I consider important. For example, I don't check X configuration because I always ensure my servers don't run X.
The source distribution should be unpacked to a suitable location. I suggest doing something like the following:

# mkdir /usr/local/seccheck
# chown root:root /usr/local/seccheck
# chmod 700 /usr/local/seccheck
# cd /usr/local/seccheck
# mkdir bin output
# cd /wherever/you/downloaded/seccheck
# gzip -dc ./seccheck-0.7.6.tar.gz | tar xf -
# cd seccheck-0.7.6
# mv modules.d seccheck.sh /usr/local/seccheck/bin
Everything is implemented as bash shell scripts, so there are no really strict installation guidelines; place the files wherever you wish. You can specify an alternate location for the modules directory with the -m option anyway.
By default, seccheck.sh will search for a modules.d directory in the same directory in which the seccheck.sh script is located. If your modules are not located there, you can use the -m option to specify an alternate module location, for example:

# ./seccheck.sh -m /security/seccheck/mymodules
seccheck will then scan through modules.d for valid seccheck modules (determined by filename). A seccheck module filename should be of the following format:

seccheck_nn_description.sh

where nn is a two-digit integer that determines the order in which modules are executed. For example, included with the current seccheck distribution you'll find the following files in modules.d:

# ls -1 modules.d
seccheck_00_services.sh
seccheck_01_users.sh
seccheck_03_kernelcheck.sh
seccheck_05_logging.sh
seccheck_10_accessauth.sh
seccheck_99_perms.sh
seccheck_NN_template.sh.NOT
You can see that seccheck_00_services.sh will be processed before seccheck_01_users.sh, and so on. You can disable a module by renaming it to something outside the convention, for example, by appending a .NOT suffix to the module filename.
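The discovery-and-ordering convention can be sketched in a few lines of portable shell. The directory here is created on the fly and the file names are only examples of the pattern; this is not code from the seccheck distribution:

```shell
#!/bin/sh
# Sketch of how a seccheck-style runner can discover and order modules.
moddir=$(mktemp -d)
touch "$moddir/seccheck_00_services.sh" \
      "$moddir/seccheck_99_perms.sh" \
      "$moddir/seccheck_01_users.sh"
touch "$moddir/seccheck_50_logging.sh.NOT"   # disabled: suffix breaks the pattern

# The glob matches only seccheck_NN_*.sh; shell globs expand in lexical
# order, so the two-digit prefix fixes the execution order.
found=""
for m in "$moddir"/seccheck_[0-9][0-9]_*.sh; do
    found="$found $(basename "$m")"
done
echo "$found"

rm -rf "$moddir"
```

The disabled .NOT file is silently skipped because it no longer ends in .sh.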
A template is provided so that you can write your own seccheck modules.
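As an illustration of what such a module might contain, here is a minimal sketch in the spirit of the template. The file name, the check chosen (root's PATH), and the message format are all invented for this example and are not part of the real distribution:

```shell
#!/bin/sh
# seccheck_20_rootpath.sh -- hypothetical example module (not from the
# real distribution). A module runs its checks and prints a warning line
# for each problem it finds.

check_root_path() {
    # Warn if '.' appears as a PATH component (a classic root-PATH risk).
    case ":$1:" in
        *:.:*) echo "WARNING: root PATH contains '.'" ;;
        *)     echo "OK: root PATH clean" ;;
    esac
}

check_root_path "/usr/sbin:/usr/bin:."   # flagged
check_root_path "/usr/sbin:/usr/bin"     # passes
```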
By default, seccheck will write everything to STDOUT and STDERR. If you want to redirect to an output file, just use the -o option to specify an output directory; after running the script, you'll be left with a file in that directory containing the output of your modules.
You can download the latest seccheck distribution, including all current modules, below:
User Contributed Modules
Please feel free to submit your own seccheck modules - send them through to firstname.lastname@example.org. Bear in mind that any scripts submitted will be distributed freely under the terms of the GPL. Also please note that these are user contributed modules, and as such are unsupported by me!
Module Name            Author         Date Added  Description
---------------------  -------------  ----------  ----------------------------------------
seccheck_80_audits.sh  Scott Everard  26/05/07    Check Solaris Audit Daemon configuration
seccheck_89_zones.sh   Scott Everard  26/05/07    Check Solaris Zones configuration
There are many resources available on the Internet to help with managing IT security -- far too many for the newcomer to sort the valuable ones from the useless ones. In this article, I'll present a number of very useful documents designed to help in managing enterprise security in a practical manner, documents that I've used to help IT organizations evaluate their security and maintain it. Rather than referring to the many books available or to voluminous and boring standards documents, I'll present freely available and easily understood documents that can be adapted and applied to most IT organizations.
Why do systems administrators need to use guides, practices, and checklists? The answer is simple -- admins can't possibly be experts in all areas of IT security that must be managed by modern enterprises. Even a small company with one or two servers, an Internet connection, and 20 or so workstations requires a lot of work to fully evaluate how secure it is. So, we need guides, written practices, and checklists to provide us with guidance on how to maintain security and to make sure that we cover all the details.
Specifically in this article, I'll review the Open Source Security Testing Methodology Manual (OSSTMM), a number of NIST Special Publications, some of the DISA guides and checklists, the Standard of Good Practice (SoGP), and the ISO17799 standard. These are all freely available (except for ISO17799) and will greatly ease the task of evaluating and maintaining enterprise security.
The Open Source Security Testing Methodology Manual (OSSTMM)
The Open Source Security Testing Methodology Manual is a guide for evaluating how secure systems are. It contains detailed instructions on how to test systems in a methodological way, and how to evaluate and report on the results.
The OSSTMM consists of six sections:
- Information Security
- Process Security
- Internet Technology Security
- Communications Security
- Wireless Security
- Physical Security
It also includes a number of templates intended for use during the testing process to capture the information gathered.
The OSSTMM is a great resource for systems administrators who want to evaluate the security of a wide range of systems in an ordered and detailed way. It contains instructions on testing systems but few details on how to protect systems.
NIST Special Publications
The Information Technology Laboratory of the National Institute of Standards and Technology (NIST) publishes a number of guides and handbooks under the Special Publications program. Some of these are quite high-level, covering areas of management, policy, and governance. But many include details that are perfect for systems administrators and operations people. The following is an overview of some of the available guides -- check the NIST Web site for the full list of currently available guides.
The great thing about the NIST documents and checklists is that they are not copyrighted. That's right; you can copy and modify these as much as you want without fear of reprisals. You can modify these checklists to suit your own requirements, for example, to develop your own checklist for new servers going into production or to define your own security auditing process. You can even adapt these guides to become your new security policy.
NIST SP800-100 Information Security Handbook: A Guide for Managers
This is a big document (178 pages) that supersedes the older SP800-12 as a general handbook on managing information security. For IT managers or systems administrators new to security, this is really the best place to start, although much of the content is high-level and targeted at managers. Some of the chapters, such as those on governance and investment management, will be too high-level for systems administrators, but others, such as the ones on incident response, contingency planning, and configuration management, will be very useful. This guide includes an appendix containing a list of Frequently Asked Questions (FAQs), which provides a lot of useful information.
NIST SP800-44 Guidelines on Securing Public Web Servers
If you're operating Web servers on the public Internet, then you need to read this guide. Aimed at technical and operations people, it describes the threats to public Web servers and provides detailed guidelines for securing them. The following areas are covered:
- Planning and management of Web servers
- Securing the operating system
- Securely installing and configuring the Web server
- Securing Web content
- Authentication and encryption technologies
- Implementing a secure network for a Web server
- Administering a Web server
Examples and references are provided for the Apache and Microsoft IIS Web servers, and there is a comprehensive appendix with details on installing and configuring both of these. There is also an appendix containing a very useful checklist for securing Web servers.
NIST SP800-45 Guidelines on Electronic Mail Security
Version 2 of the Guidelines for Electronic Mail Security was released in February 2007. This guide covers many areas from the installation and secure operation of email servers to encryption and signing of emails and securing various email clients. The following areas are covered in detail:
- Planning and managing mail servers
- Securing the mail server operating system
- Securing mail servers and content
- Administering the email server
- Implementing a secure network infrastructure
- Securing mail clients
- Signing and encrypting email content
As in the guide for Web servers, a checklist is provided in the Appendices for quickly checking the security of an existing or planned mail server. It doesn't have any operating system or mail software specific sections but is detailed enough to cover almost any installation.
NIST SP800-81 Secure Domain Name System (DNS) Deployment Guide
DNS is a critical component of most IT environments, and risks to DNS need to be taken very seriously and managed appropriately. This guide presents recommendations for secure deployment of DNS servers. It examines the common threats to DNS and recommends approaches to minimize them. It covers the technical details of installing the BIND DNS server on Unix systems and provides recommendations for securing the operating system.
This guide explains how to secure zone transfers with TSIG signatures and gives a very good overview of DNSSEC implementation and management. It is thoroughly recommended if you are involved with managing DNSSEC services.
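To give a flavour of the TSIG material, a named.conf fragment restricting zone transfers to holders of a shared key might look like the following sketch. The key name, zone, algorithm choice, and secret are placeholders rather than values from the guide:

```
key "xfer-key" {
    algorithm hmac-sha256;
    secret "PLACEHOLDER-BASE64-SECRET==";   // real base64 key material goes here
};

zone "example.com" {
    type master;
    file "db.example.com";
    allow-transfer { key "xfer-key"; };   // only signed transfer requests allowed
};
```

The secondary server must be configured with the same key and instructed to sign its transfer requests with it.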
NIST SP800-48 Wireless Network Security (802.11, Bluetooth, and Handheld Devices)
This guide was written in 2002, so it is a bit outdated now. However, the fundamentals of wireless technology haven't changed a lot, and this guide does a very good job of explaining the threats to wireless networks. It covers primarily IEEE 802.11 (WiFi) and Bluetooth and presents good guidelines on security controls, such as positioning access points, controlling network access, and encryption methods. Even if you're not familiar with wireless networking, this guide serves as an excellent introduction.
NIST SP800-92 Guide to Computer Security Log Management
Just about every device in the world of IT generates log messages. Some devices, such as firewalls, generate huge amounts of log data, all of which needs to be managed in a secure manner.
This guide introduces the requirement to securely manage log data. It includes guides on log management infrastructure and processes such as reporting and analysis tools. It also includes details on the Unix syslog system and contains references to many tools and further guides for managing log data.
NIST and DISA Checklists
Sometimes we just don't have the spare time to read through the lengthy guides; this is when checklists come in handy. NIST developed a program for producing checklists for securing IT systems. The program is now owned by DISA (Defense Information Systems Agency), and it provides a large number of checklists that make the job of evaluating systems much easier and more methodical.
A number of checklists are available here, including ones covering:
- Most versions of Unix
- Microsoft Windows 2000, 2003, XP, Vista
- Oracle RDBMS
- BIND DNS servers
- Cisco PIX firewalls
- Cisco IOS
- Wireless networks
- Apache Web server
Unix Security Checklist
The Unix Security Checklist comes as a zip file containing a number of documents with three major sections and five appendices. Some of the documents are very large (one is 360 pages long). The checklist is very detailed and contains checks for the Unix OS and most common applications found on Unix (such as SSH). The checks are all in .doc Word format, which makes it very easy to adapt them to your own purposes. The most important sections are Section 2 and Section 3.
Section 2, "SRR Results Report" contains a table that allows you to document the vulnerabilities discovered during the Security Readiness Review (SRR). Section 3, "System Check Procedures", covers procedures about how to perform the SRR for Unix systems. Unix systems covered by this checklist are HP-UX, AIX, Solaris, and Red Hat Linux.
Standard of Good Practice (SoGP)
Published by the Information Security Forum (ISF), the Standard of Good Practice presents comprehensive best practices for managing IT systems from a business perspective but in a practical and achievable way. It is targeted at larger businesses but is still applicable to small and medium businesses as well.
The standard is broken down into six sections, which it calls "aspects":
- Aspect SM: Security Management
- Aspect SD: System Development
- Aspect CB: Critical Business Applications
- Aspect CI: Computer Installations
- Aspect NW: Networks
- Aspect UE: User Environment
This is a very large document (247 pages), which would be very well suited for adoption as a comprehensive security policy. Even if you're not specifically solving security problems, the SoGP would act as a good set of guidelines for IT management practices.
ISO17799
No overview of security guides and practices would be complete without a mention of ISO17799. Titled "A Code of Practice for Information Security Management", it was originally developed in 1993 by a number of companies and published as a British standard. It became an ISO standard in 2000 with a number of later editions and add-on documents following. It essentially consists of about 100 security controls within 10 major security headings. It is intended to be used as a reference document to identify the measures required to be applied to specific areas and issues. It contains 10 sections on the following subjects:
- Development of an enterprise IT security policy
- Establishing a security organization, defining management and responsibility
- Asset classification and control
- Security of personnel -- resources, training, awareness, incident reporting
- Implementing physical security controls
- Management of computers and networks
- Controlling access to computer systems
- Integrating security into new systems
- Business continuity and disaster planning
- Compliance with security requirements
The good thing about ISO17799 is that it is a standard against which an organization can be audited, and it can be seen as a common standard for IT security management. There are also many additional documents and books available to supplement the standard.
The bad thing about ISO17799 is that it is heavily commercialized; the 115-page document costs approximately US $200 and contains information that is available elsewhere at no cost (such as the SoGP).
There are many security guides available, and in this article I've presented some of the best ones that you can get and use for free. The OSSTMM and NIST/DISA checklists are good guides for evaluating the security of existing systems. The NIST guides are good for defining the best practices to manage systems securely, and the SoGP and ISO17799 documents offer standards against which your enterprise can be evaluated.
Managing IT security across the enterprise can be a bewildering experience; many managers and systems administrators have problems simply deciding where to start. With the right guides and checklists, however, the job can be greatly simplified and more easily understood.
ISO17799 -- http://www.iso-17799.com/
NIST & DISA Checklists -- http://csrc.nist.gov/checklists/repository/ or http://iase.disa.mil/stigs/checklist/index.html
NIST Special Publications -- http://csrc.nist.gov/publications
Open Source Security Testing Methodology Manual (OSSTMM) -- http://www.osstmm.org
Standard of Good Practice (SoGP) -- http://www.isfsecuritystandard.com/index_ns.htm
Unix Security Checklist -- http://csrc.nist.gov/checklists/repository/1078.html
Kerry Thompson is a Security Consultant in Auckland, New Zealand with more than 20 years commercial experience in Unix systems, networking, and security. In his spare time he is a technical writer, software developer, sheep farmer, woodworker, private pilot, and father. Contact him at: email@example.com.
The Solaris Trusted Extensions project is a reimplementation of Trusted Solaris 8 based on new security features in Solaris 10. It has been renamed because it will be delivered as an optional set of extensions to Solaris. The layered functionality consists of a set of label-aware services that are derived from Trusted Solaris 8.
A partial list of such services includes:
- Labeled Networking
- Label-aware Filesystem Mounting and Sharing
- Labeled Printing
- Labeled Desktops
- Java Desktop System
- Common Desktop Environment
- Label Configuration and Translation
- Label-aware System Management Tools
- Label-aware Device Allocation
Solaris Trusted Extensions extends Solaris security by enforcing a mandatory access control policy. Sensitivity labels are automatically applied to all sources of data (networks, filesystems, windows) and consumers of data (user and processes). Access to all data is restricted based on the relationship between the label of the data (object) and the consumer (subject).
A whitepaper, An Architectural Overview of Solaris Trusted Extensions, is a good place to start.
The official Solaris Trusted Extensions Collection is now available on Sun's document website. This includes a developer guide for those who need to know how to write label-aware services.
The Trusted Extensions software is now part of OpenSolaris. Although most of the code is always present in Solaris, to enable this feature you must install additional packages from the Solaris11/ExtraValue/CoBundled/TrustedExtensions directory. Before installing the software, please review the README file in that directory.
Trusted Extensions software was first integrated into Solaris Express 7/06 which was based on Nevada build 42a. There were some packaging errors in that release, so please review the workarounds described in Installation Issues for Build 42a. These problems were fixed in build 43.
Several people have asked about configuring Trusted Extensions for laptops. The steps are described in Laptop Instructions.
Although the current multilevel desktop is based on CDE, a multilevel version of the Java Desktop will be putback to Nevada in the near future. In anticipation of this desktop migration, the current CDE Actions which are used for trusted administration need to be replaced with GNOME equivalents. This is an area where community involvement would be appreciated.
To kick-start this effort we have provided two shell scripts as prototypes of administrative tools. Both scripts use zenity(1) to provide a point and click interface. The first script, txzonemgr is a Labeled Zone Manager which manages labeled zones in a structured manner and automates many of these steps. As zones are transitioned from one state to another, only valid choices are presented. It supports the complete zone life-cycle, including configuration, installation, label assignment, starting, stopping, making snapshots, uninstalling, and deleting.
The second script, txnetmgr, is a prototype of a Labeled Network Manager. It automates many network interface functions such as creating logical interfaces, viewing and assigning network templates, and sharing interfaces with all zones. Both of these scripts need to run as root in the global zone, using CDE or the Java Desktop. Give them a try and provide feedback or enhancements.
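The zenity-driven style these prototypes use can be sketched in a few lines. This is a hypothetical illustration, not the actual menus txzonemgr presents; the action names are invented, and the script falls back to a fixed choice when zenity is absent or the session is non-interactive.

```shell
# Minimal sketch of a zenity(1)-style menu, as used by the txzonemgr
# prototype.  Action names are illustrative only.
if [ -t 0 ] && command -v zenity >/dev/null 2>&1; then
    # Interactive case: present a point-and-click list of actions
    choice=$(zenity --list --title="Labeled Zone Manager (sketch)" \
        --column="Action" configure install boot halt snapshot)
else
    # Non-interactive fallback so the sketch runs anywhere
    choice=configure
fi
echo "selected: $choice"
```

A real manager would then branch on `$choice` and, as the text notes, present only the transitions valid for the zone's current state.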
August 2006. This presentation provides an introduction to the Solaris Secure by Default project, including how it was implemented and how it can be used to deploy more secure systems.
Sun Java System Identity Manager 7.0 is the first to combine user provisioning - the process of creating, updating, and deleting user access across applications and systems - with identity auditing - the process of analyzing applications and systems for identity control violations, notifying corporate compliance officers of violations, and addressing policy exceptions. By combining these capabilities, Sun enables customers to avoid managing two separate processes, bridging the gap between IT security staff and the internal and external auditors responsible for compliance with regulations such as Sarbanes-Oxley. The new offering supports the Solaris 10 Operating System (OS) - the most secure OS on the planet - and extends its market-leading security capabilities, enabling control of identities across the operating system, applications, data, and physical locations.
"Customers need to simplify and automate compliance efforts. These enhancements to Sun's Identity Management Suite make it easier and cheaper for our customers to manage and report compliance with regulatory mandates," said Sara Gates, Vice President of Identity and Web Services at Sun. "We think innovations like this have industry watchers, such as Gartner*, naming Sun as the worldwide marketshare leader in identity management software based on total software revenue."
*Gartner Dataquest Market Share: User Provisioning, Worldwide, 2005, Nicole S. Latimer – Livingston, 10 Aug 2006. To read Gartner Group's marketshare information, visit: http://www.gartner.com
With the latest version of Java System Identity Manager, enterprises can check and confirm that identities comply with audit policy before the access is provided. Its identity auditing capabilities include automated reviews and proactive scanning to ensure consistent policy enforcement and repeatable processes. Automated reviews and identity scanning also enable early detection and notification of identity policy violations, helping to reduce their impact.
Tip of the Month: Enabling TCP Wrappers in Solaris 10
Before answering this question, let's first provide a little background. TCP Wrappers has been around for many, many years. It is used to restrict access to TCP services based on host name, IP address, network address, etc. For more detail on what TCP Wrappers is and how you can use it, see tcpd(1M). TCP Wrappers was integrated into Solaris starting in Solaris 9, where both Solaris Secure Shell and inetd-based (streams, nowait) services were wrapped. Bonus points are awarded to anyone who knows why UDP services are not wrapped by default.
TCP Wrappers support in Secure Shell was always enabled, since Secure Shell always called the TCP Wrappers function hosts_access(3) to determine whether a connection attempt should proceed. If TCP Wrappers was not configured on the system, access would be granted by default. Otherwise, the rules defined in the hosts.allow and hosts.deny files would apply. For more information on these files, see hosts_access(4). Note that this and all of the TCP Wrappers manual pages are stored under /usr/sfw/man in Solaris 10. To view this manual page, you can use the following command:

$ man -M /usr/sfw/man -s 4 hosts_access
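To make the hosts_access(4) syntax concrete, here is a hedged sketch of what a pair of rule files might look like. The daemon names, host names, and network are invented for illustration, and the files are staged in a scratch directory rather than written to /etc:

```shell
# Hypothetical TCP Wrappers rules, staged in a scratch directory so this
# sketch never touches the real /etc files.  Names are illustrative.
tmpdir=$(mktemp -d)

cat > "$tmpdir/hosts.allow" <<'EOF'
# Secure Shell from the internal network only
sshd: 192.168.10.0/255.255.255.0
# telnet from a single administrative host
in.telnetd: adminhost.example.com
EOF

cat > "$tmpdir/hosts.deny" <<'EOF'
# Deny anything not matched in hosts.allow
ALL: ALL
EOF

cat "$tmpdir/hosts.allow" "$tmpdir/hosts.deny"
```

The deny-everything-else pattern shown here is the common conservative configuration; consult hosts_access(4) for the full rule language.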
inetd-based services use TCP Wrappers in a different way. In Solaris 9, to enable TCP Wrappers for inetd-based services, you must edit the /etc/default/inetd file and set the ENABLE_TCPWRAPPERS parameter to YES. By default, TCP Wrappers was not enabled for inetd.
In Solaris 10, two new services were wrapped: sendmail and rpcbind. sendmail works in a way similar to Secure Shell: it always calls the hosts_access function, and therefore TCP Wrappers support is always enabled. Nothing else needs to be done to enable TCP Wrappers support for that service. On the other hand, TCP Wrappers support for rpcbind must be enabled manually using the new Service Management Framework ("SMF"). Similarly, inetd was modified to use an SMF property to control whether TCP Wrappers is enabled for inetd-based services.
Let's look at how to enable TCP Wrappers for inetd and rpcbind...
To enable TCP Wrappers support for inetd-based services, you can simply use the following commands:

# inetadm -M tcp_wrappers=true
# svcadm refresh inetd
This will enable TCP Wrappers for inetd-based (streams, nowait) services like telnet, rlogin, and ftp. For example:

# inetadm -l telnet | grep tcp_wrappers
default  tcp_wrappers=TRUE
You can see that this setting has taken effect for inetd by running the following command:

# svcprop -p defaults inetd
defaults/tcp_wrappers boolean true
Note that you can also use the svccfg(1M) command to enable TCP Wrappers for inetd-based services:

# svccfg -s inetd setprop defaults/tcp_wrappers=true
# svcadm refresh inetd
Whether you use inetadm(1M) or svccfg is really a matter of preference. Note that you can also use inetadm or svccfg to enable TCP Wrappers on a per-service basis. For example, let's say that we wanted to enable TCP Wrappers for telnet but not for ftp. By default, both the global and per-service settings for TCP Wrappers are disabled:

# inetadm -p | grep tcp_wrappers
tcp_wrappers=FALSE
# inetadm -l telnet | grep tcp_wrappers
default  tcp_wrappers=FALSE
# inetadm -l ftp | grep tcp_wrappers
default  tcp_wrappers=FALSE
To enable TCP Wrappers for telnet, use the following command:

# inetadm -m telnet tcp_wrappers=TRUE
Let's check our settings again:

# inetadm -p | grep tcp_wrappers
tcp_wrappers=FALSE
# inetadm -l telnet | grep tcp_wrappers
tcp_wrappers=TRUE
# inetadm -l ftp | grep tcp_wrappers
default  tcp_wrappers=FALSE
As you can see, TCP Wrappers has been enabled for telnet but none of the other inetd-based services. Pretty cool, eh?
You can enable TCP Wrappers support for rpcbind by running the following commands:

# svccfg -s rpc/bind setprop config/enable_tcpwrappers=true
# svcadm refresh rpc/bind
This change can be verified by running:

# svcprop -p config/enable_tcpwrappers rpc/bind
true
That is all that there is to it! Quick, easy and painless! As always, let me know what you think!
" Entertainment value of ja... | Weblog Friday February 03, 2006
My little Solaris security cheat sheet
This returned me to sanity a few times while learning about Solaris security. Like many others, I'm not a security expert and I often need a short version to fit in my head.
authorization A right assigned to users that is checked by privileged programs to determine whether users can execute restricted functionality. More in auth_attr(4).
privilege An attribute that provides fine-grained control over the actions of processes, as opposed to the traditional Unix all-or-nothing, superuser-versus-user model. More in privileges(5).
profile A logical grouping of authorizations and commands. Profile shells, pf[ck]sh, interpret profiles to form a secure execution environment. More in prof_attr(4), exec_attr(4).
role A type of user account, with associated authorizations and profiles. Roles cannot be logged in directly - users assume roles using su(1M).
How to get them:

                 CLI          API
  authorizations auths(1)     getauthattr(3SECDB)
  privileges     ppriv(1)     getppriv(2)
  profiles       profiles(1)  getprofattr(3SECDB)
  roles          roles(1)     -
Authorizations vs. privileges:
- Authorizations are per-user: all of a user's processes have the same authorizations. Privileges are per-process: each process has separate privilege sets.
- Authorizations are static: once assigned to a user, they remain the same. Privileges are dynamic: privilege sets can change during a process's lifecycle.
- An authorization is a simple token that in theory could easily be added to other OSes. Privileges are integrated deep into Solaris.
- Authorizations are handled in userland; privileges span userland and the kernel.
- Authorizations were introduced in Solaris 8 [1]; privileges in Solaris 10 [1].
[1] Both were also available much earlier in Trusted Solaris.
Commentor: Casper Dik
Added: September 7, 2004
It is rather pointless to install a separate TCP Wrappers package on Solaris 9 and later, as the version included in the OS is exactly the same as the one available on porcupine. That version has also been revved twice because of bugs we ran into. Solaris 9 SSH already has libwrap support compiled in. In S10 and later we also provide rpcbind linked with libwrap.
by Glenn Brunette
This Sun BluePrints Cookbook describes how to centralize and automate the collection of file integrity information using the following Solaris features:
* Secure Shell
* Role-based Access Control (RBAC)
* Process Privileges
* Basic Auditing and Reporting Tool (BART)
Each of these features can be quickly and easily integrated to centralize and automate the process of collecting file fingerprints across a network of Solaris 10 systems.
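The BART workflow described in the cookbook boils down to creating a control manifest and comparing later snapshots against it. A hedged sketch (bart(1M) exists only on Solaris 10, so the commands are guarded; the manifest file names are invented):

```shell
# Sketch: baseline a directory tree with BART, then compare a later
# snapshot against it.  Runs the real commands only where bart(1M)
# is present; file names are illustrative.
if command -v bart >/dev/null 2>&1; then
    bart create -R /etc > control.manifest   # baseline file fingerprints
    bart create -R /etc > current.manifest   # later snapshot
    bart compare control.manifest current.manifest
else
    echo "bart(1M) not available on this system"
fi
```

In the centralized scheme the cookbook describes, the `bart create` step would run on each monitored host (under an RBAC role with only the needed privileges) and the manifests would be pulled back over Secure Shell for comparison.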
Note: This article is available in PDF Format only.
This Tech Tip explains how to use NFS to inspect the underlying directory structure if the reported disk usage seems inconsistent.
Read about the build, configuration, and subsequent hardening of UNIX servers that constitute a secured FTP solution.
So what makes Solaris Privileges different? Why didn't we copy something else like Trusted Solaris Privileges or "POSIX" capabilities?
Let's start from what we formulated as our requirements near the beginning of our project.
One of the important features of Solaris is complete binary backward compatibility; in order to offer that, we needed to design the privilege subsystem in such a manner that current practices, binaries, and products would continue to work. Of course, some have solved this issue by providing a system-wide knob to turn: root / root + privileges / just privileges. We don't like knobs in our OS, specifically not ones which drastically alter the behaviour of a system. A knob makes it harder to develop software, since software needs to work for all settings; certain products may require conflicting settings, and so on. So we decided on a "per-process" knob which is largely automatic.
With backward compatibility comes the onus on the software developer to develop future-proof interfaces; that ruled out all other interfaces, as they all have fixed bitmaps, fixed privilege/capability numbers, and fixed structure sizes in the programmer-visible parts of the system. Solaris Privileges have none of that. And while we could safely reuse the names of the Trusted Solaris interfaces, we cannot redefine interfaces even from a defunct standard. So we have interfaces which smell like Trusted Solaris but with a completely new userland representation of privileges and privilege sets. We can never have more signals, but we can have more privileges and more privilege sets!
The privileges and privilege sets in Solaris 10 are represented to userland processes and non-core kernel modules as strings; privilege sets are bitmasks of undetermined size; they can only be allocated through the C library routines. Privilege set names are also strings and not plain integer indices; this gives us even more flexibility. A Solaris binary compiled for 4 privilege sets of each 32 privileges will continue to work on a Solaris system with 5 privilege sets each of which can contain 64 privileges and with all the privileges having their internal representation renumbered.
... Many software exploits count on this escalated privilege to gain superuser access to a machine via bugs like buffer overflows and data corruption. To combat this problem, the Solaris 10 Operating System includes a new least privilege model, which gives a specified process only a subset of the superuser powers and not full access to all privileges.
The least privilege model evolved from Sun's experiences with Trusted Solaris and the tighter security model used there. The Solaris 10 OS least privileged model conveniently enables normal users to do things like mount file systems, start daemon processes that bind to lower numbered ports, and change the ownership of files. On the other hand, it also protects the system against programs that previously ran with full root privileges because they needed limited access to things like binding to ports lower than 1024, reading from and writing to user home directories, or accessing the Ethernet device. Since setuid root binaries and daemons that run with full root privileges are rarely necessary under the least privilege model, an exploit in a program no longer means a full root compromise. Damage due to programming errors like buffer overflows can be contained to a non-root user, which has no access to critical abilities like reading or writing protected system files or halting the machine.
The Solaris 10 OS least privilege model includes nearly 50 fine-grained privileges as well as the basic privilege set.
- The defined privileges are broken into groups.
- The basic privilege set includes all privileges granted to unprivileged processes under the traditional security model.
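The privilege sets can be inspected directly with ppriv(1) on a Solaris 10 host. A guarded sketch that merely reports when run elsewhere:

```shell
# Sketch: inspect the basic privilege set and the privilege sets of the
# current shell on Solaris 10.  Guarded so it degrades gracefully on
# other systems.
if command -v ppriv >/dev/null 2>&1; then
    ppriv -l basic   # expand the members of the basic set
    ppriv $$         # show the effective/inheritable/permitted/limit sets
else
    echo "ppriv(1) not available (not a Solaris system)"
fi
```

Dropping a privilege such as proc_fork from a process's limit set is how an administrator confines a daemon so that an exploited buffer overflow cannot spawn a shell.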
Increasing life expectancy
The past 12-24 months have seen a significant downward shift in successful random attacks against Linux-based systems. Recent data from our honeynet sensor grid reveals that the average life expectancy to compromise for an unpatched Linux system has increased from 72 hours to 3 months. This means that an unpatched Linux system with commonly used configurations (such as server builds of RedHat 9.0 or Suse 6.2) has an online mean life expectancy of 3 months before being successfully compromised. Meanwhile, the time to live for unpatched Win32 systems appears to continue to decrease. Such observations have been reported by various organizations, including Symantec, Internet Storm Center, and even USAToday. The few Win32 honeypots we have deployed support this. However, Win32 compromises appear to be based primarily on worm activity.
T H E D A T A
Background: Our data is based on 12 honeynets deployed in eight different countries (US, India, UK, Pakistan, Greece, Portugal, Brazil and Germany). Data was collected during the calendar year 2004, with most of the data collected in the past six months. Each honeynet deployed a variety of different Linux systems accessible from anywhere on the Internet. In addition, several Win32-based honeypots were deployed, but these were limited in number and could not be used to identify widespread trends.
A total of 24 unpatched Unix honeypots were deployed, of which 19 were Linux, primarily Red Hat. These unpatched honeypots were primarily default server installations with additional services enabled (such as SSH, HTTPS, FTP, SMB, etc.). In addition, on several systems insecure or easily guessed passwords were used. In most cases, host-based firewalls had to be modified to allow inbound connections to these services. These systems were targets of little perceived value, often on small home or business networks. They were not registered in DNS or any search engines, so the systems were found primarily by random or automated means.
Most were default Red Hat installations. Specifically, one was RH 7.2, five RH 7.3, one RH 8.0, eight RH 9.0, and two Fedora Core 1 deployments. In addition, there were one Suse 7.2 and one Suse 6.3 Linux distribution, two Solaris Sparc 8, two Solaris Sparc 9, and one FreeBSD 4.4 system. Of these, only four Linux honeypots (three RH 7.3 and one RH 9.0) and three Solaris honeypots were compromised. Two of the Linux systems were compromised by brute-force password guessing and not a specific vulnerability.
Keep in mind, our data sets are not based on targets of high value, or targets that are well known. Linux systems that are of high value (such as company webservers, CVS repositories or research networks) potentially have a shorter life expectancy.
The science consists of methodical, premeditated actions to gather and analyze evidence. The technology, in the case of computers, is programs that suit particular roles in the gathering and analysis of evidence. The crime scene is the computer and the network (and other network devices) to which it is connected.
Your job, as a forensic investigator, is to do your best to comb through the sources of evidence -- disc drives, log files, boxes of removable media, whatever -- and do two things: make sure you preserve as much of this data in its original form, and to try to re-construct the events that occurred during a criminal act and produce a meaningful starting point for police and prosecutors to do their jobs.
Every incident will be different. In one case, you may simply assist in the seizure of a computer system, which is analyzed by law enforcement agencies. In another case, you may need to collect logs, file systems, and first hand reports of observed activity from dozens of systems in your organization, wade through all of this mountain of data, and reconstruct a timeline of events that yields a picture of a very large incident.
In addition, when you begin an incident investigation, you have no idea what you will find, or where. You may at first see nothing (especially if a "rootkit" is in place.) You may find a process running with open network sockets that doesn't show up on a similar system. You may find a partition showing 100% utilization, but adding things up with du only comes to 50%. You may find network saturation, originating from a single host (by way of tracing its ethernet address or packet counts on its switch port), a program eating up 100% of the CPU, but nothing in the file system with that name.
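The du-versus-df mismatch mentioned above usually points to a deleted file still held open by a running process, or to data hidden under a mount point. These are the two numbers to compare (shown here for the current filesystem; the values are whatever the analysis host reports):

```shell
# Compare filesystem-level usage with the per-file total; a large gap
# is the symptom described in the text.
df -k .      # usage as the filesystem sees it
du -sk .     # usage as a walk of the directory tree sees it
# On Linux, lsof can list open files whose link count has dropped to
# zero (deleted but still held open) -- run as root:
# lsof +L1
```

A rootkit-cleaning intruder deleting a log or tool file that a daemon still has open is the classic cause.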
The steps taken in each of these instances may be entirely different, and a competent investigator will use experience and hunches about what to look for, and how, in order to get to the bottom of what is going on. The steps may not necessarily be followed 1, 2, 3. They may be far more than is necessary. They may be just the beginning of a detailed analysis that involves decompilation of recovered programs and correlation of packet dumps from multiple networks.
Instead of being a "cookbook" that you follow, consider this a collection of techniques that a chef uses to construct a fabulous and unique gourmet meal. Once learned, you'll discover there are plenty more steps than just those listed here.
It's also important to remember that the steps in preserving and collecting evidence should be done slowly, carefully, methodically, and deliberately. The various pieces of data -- the evidence -- on the system are what will tell the story of what occurred. The first person to respond has the responsibility of ensuring that as little of this evidence as possible is damaged, since damaged evidence is useless in contributing to a meaningful reconstruction of what occurred.
One thing is common to every investigation, and it cannot be stressed enough. Keep a regular old notebook handy and take careful notes of what you do during your investigation. These may be necessary to refresh your memory months later, to tell the same long story to a new law enforcement agent who takes over the case, or to refresh your own memory when/if it comes time to testify in court. It will also help you accurately calculate the cost of responding to the incident, avoiding the potentially exaggerated estimates that have been seen in some recent computer crime cases. Crimes deserve justice, but justice should be fair and reasonable.
As for the technology aspect, the description of basic forensic analysis steps provided here assumes Red Hat Linux on i386 (any Intel compatible motherboard) hardware. The steps are basically the same with other versions of Unix, but certain things specific to i386 systems (e.g., use of IDE controllers, limitations of the PC BIOS, etc.) will vary from other Unix workstations. Consult system administration or security manuals specific to your version of Unix.
It is helpful to set up a dedicated analysis system on which to do your analysis. An example analysis system in a forensic lab might be set up as follows:
- Fast i386 compatible motherboard with 2 IDE controllers
- At least two large (>8GB) hard drives on the primary IDE controller (to fit the OS and tools, plus have room to copy partitions off tape or recover deleted file space from victim drives)
- Leave second IDE cable empty. This means you won't need to mess with jumpers on discs -- just plug them in and they will show up as /dev/hdc (master) or /dev/hdd (slave)
- SCSI interface card (e.g., Adaptec 1542)
- DDS-3 or DDS-4 4mm tape drive (you need enough capacity to handle the largest partitions you will be backing up)
- If this system is on the network, it should be FULLY PATCHED and have NO NETWORK SERVICES RUNNING except SSH (for file transfer and secure remote access) -- Red Hat Linux 6.2 with Bastille-Linux hardening is a good choice
(It can be argued that no services should be running, not even SSH, on your analysis systems. You can use netcat to pipe data into the system, encrypting it with DES or Blowfish stream cyphers for security. This is fine, provided you do not need remote access to the system.)
Another handy analysis system is a new laptop. A fast laptop with a 10/100 combo Ethernet card, an 18+GB hard drive, and a backpack with a padded case is an excellent way of taking the lab to the victim: it lets you easily carry everything you need to obtain file system images (later written to tape for long-term storage), analyze them, display the results, crack intruders' crypt() passwords, etc.
A cross-over 10Base-T cable allows you to get by without a hub or switch, and to still use the network to communicate with the victim system on an isolated mini-network of two systems. (You will need to set up static route table entries in order for this to work.)
A Linux analysis system will work for analyzing file systems from several different operating systems whose file system types are supported under Linux, e.g., Sun UFS. You simply need to mount the file system with the proper type and options, e.g. (for Sun UFS):
# mount -r -t ufs -o ufstype=sun /dev/hdd2 /mnt
Another benefit of Linux is its "loopback" devices, which allow you to mount a file containing an image copy (obtained with dd) into the analysis system's file system. See Appendices A and B.
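The capture-then-loopback-mount sequence can be sketched as follows. The image here is a 16 KB zero-filled placeholder so the sketch runs anywhere; the real capture and the mount (which needs root) are shown as comments, using the /dev/hdd2 device from the setup above:

```shell
# Sketch: image a partition with dd, then mount the image read-only via
# the loopback driver.  /dev/zero stands in for the suspect device so
# this runs on any system.
dd if=/dev/zero of=victim.dd bs=1k count=16 2>/dev/null
# Real capture would be:
#   dd if=/dev/hdd2 of=victim.dd
# Mounting the image needs root:
#   mount -t ufs -o loop,ro,ufstype=sun victim.dd /mnt
ls -l victim.dd
```

Mounting read-only (`ro`) preserves the evidence: nothing in the image, including access times, is modified during analysis.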
The next item of my list of lesser known and/or publicized security enhancements to the Solaris 10 OS is account lockout. Account lockout is the ability of a system or service to administratively lock an account after that account has suffered "n" consecutive failed authentication attempts. Very often "n" is three hence the "three strikes" reference.
Recall from yesterday's entry on non-login and locked accounts that there is in fact a difference. Locked accounts are not able to access any system services whether interactively or through the use of delayed execution mechanisms such as cron(1M). So, when an account is locked out using this capability, only a system administrator is able to re-enable the account, using the passwd(1) command with the "-u" option.
Account lockout can be enabled in one of two ways. The first way enables account lockout globally for all users. The second method allows more granular control over which users will or will not be subject to the account lockout policy. Note that the account lockout capability applies only to accounts local to the system. We will look at both in a little more detail below.
Before we look at how to enable or disable the account lockout policy, let's first take a look at how you configure the number of consecutive, failed authentication attempts that will serve as your line in the sand. Any number of consecutive, failed attempts beyond the number selected will result in the account being locked. This number is based on the RETRIES parameter in the /etc/default/login file. By default, this parameter is set to 5. You can certainly customize this parameter based on your local needs and policy. By default, the Solaris Security Toolkit will set the RETRIES parameter to 3.
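The RETRIES change itself is a one-line edit of /etc/default/login. A sketch that performs it on a scratch copy (on a real system you would edit the real file as root):

```shell
# Sketch: tighten the RETRIES threshold from the default of 5 to the
# Solaris Security Toolkit's 3, on a stand-in copy of /etc/default/login.
tmp=$(mktemp -d)
printf 'RETRIES=5\n' > "$tmp/login"   # stand-in for /etc/default/login
sed 's/^RETRIES=.*/RETRIES=3/' "$tmp/login" > "$tmp/login.new"
grep '^RETRIES=' "$tmp/login.new"
```

After the edit, the sixth (or, with the value 3, the fourth) consecutive failed attempt is the one that locks the account.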
To better handle software faults, Sun has redesigned the way it starts and monitors services. Instead of the traditional /etc/init.d startup scripts, many programs in the Solaris 10 OS have been converted to use the service management framework (smf) of the Solaris Service Manager to start, stop, modify, and monitor programs. The service manager is also used to identify software interdependencies and ensure that services are started in the correct order. Should a service, such as sendmail, suddenly die, the service manager automatically verifies that all of the requirements for the sendmail service are running and respawns the necessary programs. When a hardware fault occurs and hardware is offlined, the service manager can restart any programs under its control that needed to be stopped to remove the hardware from service.
Each service under the control of the service manager is controlled by an XML configuration file, called a manifest, that defines the name of the service, the type, any dependencies, and other important information. These manifests are stored in a repository and can be viewed and modified via the repository daemon, svc.configd(1M). The repository is read by the master restarter daemon, svc.startd(1M), which evaluates the dependencies and initiates the services as needed. Traditional inetd services are now part of the service manager as well. Any of the inetd services can be enabled, disabled, or restarted via the same mechanism as any other service manager-enabled program.
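A trimmed, hypothetical manifest illustrating the pieces just described (service name, a dependency, and start/stop methods); the service name and path are invented for this sketch:

```
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!-- Hypothetical, trimmed manifest: a site service that must start
     after the network milestone -->
<service_bundle type='manifest' name='example'>
  <service name='site/example' type='service' version='1'>
    <dependency name='net' grouping='require_all' restart_on='none'
                type='service'>
      <service_fmri value='svc:/milestone/network:default'/>
    </dependency>
    <exec_method type='method' name='start'
                 exec='/opt/example/bin/exampled' timeout_seconds='60'/>
    <exec_method type='method' name='stop' exec=':kill'
                 timeout_seconds='60'/>
  </service>
</service_bundle>
```

svc.startd(1M) reads the dependency element to decide ordering and uses the exec_method entries to start, stop, and respawn the service.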
Itch scratching, and audit (Score:3, Interesting)
by RedPhoenix (124662) on Tuesday September 14, @09:15PM (#10251879)
At the risk of the post sounding like a discussion at a head-lice convention, everyone has their own personal itch to scratch.
Several posts thus far, have questioned the viability of establishing yet another secure-debian project, similar to other existing projects, and have indicated that there would be a better use of available resources if everyone would just get along and work together (or at least, form under a single project). Fair enough.
However, there is a whole range of reasons why diversity and natural selection w.r.t. many competing projects can provide benefits over and above a single large project: organisational inertia, effective and efficient communication, and development priority differences, for example.
'Organisational inertia' in particular, whereby the larger an organisation/project gets, the slower it can react to changing requirements, is a good reason why this effort-amalgamation can potentially be a bad thing.
Each of these projects probably has a slightly different 'itch' to 'scratch'. There's no reason why, later on down the track, that the best elements of each of these projects cannot be merged into something cohesive.
A good example is the current situation in Linux auditing (as in C2/CAPP-style auditing and event logging, not code verification) and host-based audit-related intrusion detection. Over time, we've had Snare (http://www.intersectalliance.com), SLES (http://www.suse.com), and Rik's Audit Daemon (http://www.redhat.com). Each project had a slightly different focus, and each development team has come up with some great solutions to the problems of auditing and event logging.
The developers of each of these projects are now communicating and collaborating, with a view to bringing an effective audit subsystem to Linux that incorporates the best ideas from each approach.
BTW: How about auditing in this project? Here's a starting point:
Red. (Snare Developer)
About: pam_passwdqc is a simple password strength checking module for PAM-aware password changing programs, such as passwd(1). In addition to checking regular passwords, it offers support for passphrases and can provide randomly generated passwords. All features are optional and can be (re-)configured without rebuilding.
Changes: The module will now assume invocation by root only if both the UID is 0 and the PAM service name is "passwd". This should fix changing expired passwords on Solaris and HP-UX and make "enforce=users" safe. The proper English explanations of requirements for strong passwords will now be generated for a wider variety of possible settings.
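On a Linux system the module is typically wired into the password stack of a PAM-aware service. This is a hypothetical configuration line for illustration; the min= values shown are the module's documented defaults, the enforce=users setting is the one discussed in the changelog above, and the exact service file (e.g. /etc/pam.d/passwd) is distribution-specific:

```
# Hypothetical line for a passwd service's PAM stack:
password requisite pam_passwdqc.so min=disabled,24,11,8,7 max=40 enforce=users
```

The five min= values set the minimum length by password class, from plain single-class passwords (disabled here) down to passphrases and four-class passwords.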
Each CERT Security Improvement module addresses an important but narrowly defined problem in network security. It provides guidance to help organizations improve the security of their networked computer systems.
Each module page links to a series of practices and implementations. Practices describe the choices and issues that must be addressed to solve a network security problem. Implementations describe tasks that implement recommendations described in the practices. For more information, read the section about module structure.
- List of modules
- List of practices
- List of implementations
- Configuring NCSA httpd and Web-server content directories on a Sun Solaris 2.5.1 host
- Enabling process accounting on systems running Solaris 2.x
- Installing, configuring, and using tcp wrapper to log unauthorized connection attempts on systems running Solaris 2.x
- Configuring and using syslogd to collect logging messages on systems running Solaris 2.x
- Using newsyslog to rotate files containing logging messages on systems running Solaris 2.x
- Installing, configuring, and using logdaemon to log unauthorized login attempts on systems running Solaris 2.x
- Installing, configuring, and using logdaemon to log unauthorized connection attempts to rshd and rlogind on systems running Solaris 2.x
- Understanding system log files on a Solaris 2.x operating system
- Installing, configuring, and using swatch to analyze log messages on systems running Solaris 2.x
- Installing, configuring, and using logsurfer on systems running Solaris 2.x
- Configuring and installing lsof 4.50 on systems running Solaris 2.x
- Configuring and installing top 3.5 on systems running Solaris 2.x
- Installing, Configuring, and using npasswd to improve password quality on systems running Solaris 2.x
- Installing and configuring sps to examine processes on systems running Solaris 2.x
- Installing and securing Solaris 2.6 servers
- Installing, configuring, and operating the secure shell (SSH) on systems running Solaris 2.x
- Characterizing files and directories with native tools on Solaris 2.X
- Detecting changes in files and directories with native tools on Solaris 2.X
- Installing and operating lastcomm on systems running Solaris 2.x
- Installing, configuring, and using spar 1.3 on systems running Solaris 2.x
- Installing and operating tcpdump 3.5.x on systems running Solaris 2.x
- Installing, configuring, and using argus to monitor systems running Solaris 2.x
- Using newarguslog to rotate log files on systems running Solaris 2.x
- Installing libpcap to support network packet tools on systems running Solaris 2.x
- Writing rules and understanding alerts for Snort, a network intrusion detection system
- Disabling network services on systems running Solaris 2.x
- Installing noshell to support the detection of access to disabled accounts on systems running Solaris 2.x.
- Disabling user accounts on systems running Solaris 2.x
- Installing OpenSSL to ensure availability of cryptographic libraries on systems running Solaris 2.x.
- Installing and Operating ssldump 0.9 Beta 1 on systems running Solaris 2.x.
Linux sources that might be useful (some Linux HOWTOs are not bad and are largely applicable to other Unix environments):
FAQs and RFCs
Practical Solaris 10 Security. (October 2006, Glenn Brunette).
This talk was given at the NSA Red Team/Blue Team Symposium and focuses on security controls from the viewpoint of someone attacking a Solaris 10 system.
Solaris 10 Security TOI – Deep Dive. (October 2006. Glenn Brunette).
This is a 2-3 hour technical discussion of the various security controls found in Solaris 10. This talk has been updated for Solaris 10 11/06.
Solaris Secure by Default Overview. (August 2006, Scott Rotondo). This presentation provides an introduction to the Solaris Secure by Default project, including how it was implemented and how it can be used to deploy more secure systems.
Solaris 10 Security Presentation. (June 2005, Darren Moffat)
- GSoC-2006.pdf by darrenm (Darren J. Moffat): Basic file privileges
- cec2006-dtrace-sec.pdf by gbrunett (Glenn M. Brunette): Enhancing Security Awareness and Control with DTrace Presentation
- kcf-man.tar.gz by darrenm (Darren J. Moffat): KCF draft man pages
- losug-security-rbac.pdf by darrenm (Darren J. Moffat): RBAC Presentation
- nist-secauto-solsec-v1.4.pdf by gbrunett (Glenn M. Brunette): There and Back Again - A Solaris Security Story
- nsa-rebl-solaris.pdf by gbrunett (Glenn M. Brunette): Practical Solaris 10 Security Presentation
- pam_app_auth.c by darrenm (Darren J. Moffat): PAM: simple application
- pam_netgroup.c by darrenm (Darren J. Moffat): PAM: pam netgroup/user allow/deny
- pam_root_console.c by darrenm (Darren J. Moffat): PAM: root console
- pam_xauth_cred.c by darrenm (Darren J. Moffat): PAM: X11 xauth cred delegater
- privdebug.pl by gbrunett (Glenn M. Brunette): Solaris 10 Privilege Debugging Tool
- s10-security-dive-20061024.pdf by gbrunett (Glenn M. Brunette): Solaris 10 Security - Technical Deep Dive Presentation
- sfpc-v0.4.tar.gz by gbrunett (Glenn M. Brunette)
See also Solaris History
There and Back Again – A Solaris Security Story. (Glenn Brunette, September 2006).
This talk provides an overview of the security features introduced in various Solaris and Trusted Solaris OS releases as well as provides an overview of Sun's participation in government and industry collaboration in the area of security recommendations for the Solaris OS.
Some insecurely configured Web proxy servers can be exploited by a remote attacker to make arbitrary connections to unauthorized hosts. Two common abuses of a misconfigured proxy server are using it to bypass firewall restrictions and using it to send spam email. To bypass a firewall, the attacker connects to the proxy from outside the firewall and then opens a connection to a host inside the firewall. To send spam, the attacker connects to the proxy and has it connect to an SMTP server. It has been reported that many Web proxy servers are distributed with insecure default configurations.
Users should carefully configure Web proxy servers to prevent unauthorized connections. It has been reported that http://www.monkeys.com/security/proxies/ contains secure configuration guidelines for many Web proxy servers. We cannot verify the accuracy of this information; users with questions should contact their vendors.
Solaris Fingerprint Database Companion & Sidekick
Sun Managers Mailing List Archive
Yassp Development Mailing List Archive
A stack smashing attack is most typical of C programs. Many C programs have buffer overflow vulnerabilities, both because the C language lacks array bounds checking and because the culture of C programmers encourages a performance-oriented style that avoids error checking... Several papers contain cookbook-style descriptions of stack smashing attacks. If an attacker has access to a non-privileged account, then unless the server has hardware or software protection, the only remaining work for a would-be attacker is to find a suitable unpatched utility and download or write an exploit. Hundreds of such exploits have been reported in recent years.
Aleph One's "Smashing The Stack For Fun And Profit" from Phrack 49
Mudge's "How to write Buffer Overflows"
Richard Jones and Paul Kelly's bounds checking patches to GCC
Solar Designer's Non-executable user stack area -- Linux kernel patch
Miller, Fredrickson and So's "An Empirical Study in the Reliability of UNIX utilities"
*** SecurityPortal.com Securing your File System in Linux. Average discussion.
Best practices in Linux file system security dictate a philosophy of configuring file system access rights in the most restrictive way possible that still allows legitimate users and processes to function properly. However, even with the most careful planning and restrictive settings, successful file system attacks and corruption can occur. To have the most comprehensive plan for Linux file system security, a system administrator needs to modify a default installation's settings, proactively monitor and audit file system changes and have multiple methods to recover from a file system attack.
In configuring file system security, the key areas to be concerned about are: access rights granted to legitimate users to create/modify files and execute programs, access to the file system granted to remote machines, and any area of the file system designated as world-writable.
To quickly review Linux permissions for files and directories, there are three basic types: read (numerically represented as 4), write (2) and execute (1). The values are summed to determine the permissions for the file or directory - a value of 4 meaning read-only, a value of 7 meaning read, write and execute are allowed. A file or directory is assigned three standard sets of permissions: access allowed to the owner, the associated group, and everyone.
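The octal arithmetic above can be verified directly; a minimal demonstration (the file path is an arbitrary example):

```shell
# Octal permission digits are sums of r=4, w=2, x=1 for owner/group/other.
touch /tmp/perm-demo
chmod 640 /tmp/perm-demo            # owner rw (4+2), group r (4), other none
ls -l /tmp/perm-demo | cut -c1-10   # prints -rw-r-----
chmod 750 /tmp/perm-demo            # owner rwx (4+2+1), group rx (4+1), other none
ls -l /tmp/perm-demo | cut -c1-10   # prints -rwxr-x---
rm -f /tmp/perm-demo
```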
umask: A common occurrence over time on Linux systems is that when files get created or modified, the permissions become significantly more liberal than what was originally intended. When new files are created by users, administrators or processes, the default permissions granted are determined by umask. By using umask to set a restrictive default, newly created files and directories retain restrictive permissions unless they are manually changed with chmod. Umask defaults for all users are set in /etc/profile. The umask is a bit mask, not a subtraction: the bits set in the umask are cleared from the base mode, which is 666 for regular files and 777 for directories. Files created by a user with a umask of 037 therefore get permissions 640 (the execute bits are never set on newly created regular files), which means the owner can read and write the file, the group can read it, and everyone else has no access. A umask of 077 means no one but the owner has any access to newly created files.
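The masking behavior is easy to observe in a shell session; the demo paths below are arbitrary:

```shell
# umask bits are cleared from the base mode: 666 for files, 777 for dirs.
umask 037
touch /tmp/umask-demo-file          # 666 & ~037 = 640  -> -rw-r-----
mkdir /tmp/umask-demo-dir           # 777 & ~037 = 740  -> drwxr-----
ls -ld /tmp/umask-demo-file /tmp/umask-demo-dir
rm -rf /tmp/umask-demo-file /tmp/umask-demo-dir
```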
... ... ...
NFS, Samba - NFS (the "Not For Security" file system) should be avoided where possible on Linux boxes directly connected to the Internet. NFS requires a high degree of trust in the peer machine that will be mounting your partitions. You must be very careful about granting anything beyond read access to the hosts listed in /etc/exports. Samba, while not using a peer trust model, can nonetheless make user rights complex to maintain. Both are network file services, and the only way to be sure that your file system is not at risk is to run them in a completely trusted environment.
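If NFS must be used, the exports should be as restrictive as the article suggests. A sketch of a conservative Linux /etc/exports entry, where the path and hostname are placeholders:

```
# /etc/exports sketch: export read-only to a single named host and keep
# root squashed; avoid wildcards and never export read-write by default.
/export/data   trusted-host.example.com(ro,root_squash)
```

Listing an explicit host with `ro` and `root_squash` limits both who can mount the partition and what a compromised client's root can do with it.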
Auditing your file system regularly is a must. You should look for files with the permissions anomalies described above. You should also be looking for changes in standard files and packages. At a minimum, you can use the find command to search for questionable file permissions:
Suid & sgid: find / \( -perm -2000 -o -perm -4000 \) -ls (You can add -o -perm -1000 to catch sticky bit files and directories)
World-writable files: find / -perm -2 ! -type l -ls
Files with no owner: find / \( -nouser -o -nogroup \) -print (thanks to Michael Wood for correcting this)
You can create a cron job for a simple script that directs this output to a file, compares it with the file created by the previous day's search, and mails the difference to you.
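Such a nightly job can be sketched as follows. The audit root, snapshot paths, and demo setuid file are illustrative assumptions so the script is self-contained; a real cron job would scan / and pipe the diff to mail(1):

```shell
#!/bin/sh
# Sketch of the nightly permission-audit job described above.
AUDIT_ROOT=${AUDIT_ROOT:-/tmp/audit-demo}
TODAY=/tmp/perm-audit.today
YESTERDAY=/tmp/perm-audit.yesterday

# Build a tiny demo tree with one setuid file so the scan finds something.
mkdir -p "$AUDIT_ROOT"
touch "$AUDIT_ROOT/suid-file"
chmod 4755 "$AUDIT_ROOT/suid-file"

{
  find "$AUDIT_ROOT" \( -perm -2000 -o -perm -4000 \) -ls    # suid/sgid
  find "$AUDIT_ROOT" -perm -2 ! -type l -ls                  # world-writable
  find "$AUDIT_ROOT" \( -nouser -o -nogroup \) -print        # unowned
} > "$TODAY" 2>/dev/null

# Report only what changed since the previous run, then rotate the snapshot.
if [ -f "$YESTERDAY" ]; then
  diff "$YESTERDAY" "$TODAY" || true   # nonzero diff status just means changes
fi
mv "$TODAY" "$YESTERDAY"
```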
As you might guess, several people have written tools, ranging from simple to complex, that check for files with questionable permissions, checksum binaries to detect tampering, and perform a host of other functions. Here are a few:
Remote audit services
Copyright © 1996-2018 by Dr. Nikolai Bezroukov. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author free time and without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
Last modified: September 12, 2017