|May the source be with you, but remember the KISS principle ;-)|
Access control is just another name for compartmentalization of resources. It groups the general problems involved in making certain that files are not read or modified by unauthorized personnel or by less privileged programs running on the operating system. There are two major aspects of access control: the protection domain in which a process runs, and the access rights that the domain grants over each object.
Additionally, in UNIX each process has two halves: the user part and the kernel part. When a process makes a system call, it switches from the user part to the kernel part. The kernel part has access rights to a different set of objects than the user part. A system call therefore involves a domain switch, from user to kernel.
At every instant in time each process runs in a protection domain. Therefore, there is some collection of objects that the process can access. In addition, each object has a set of access rights. Processes can switch from domain to domain during program execution.
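The domain/rights relationship described above is commonly modeled as an access matrix, with protection domains as rows and objects as columns. A minimal sketch (the domain and object names here are illustrative assumptions, not part of any real kernel):

```python
# Access matrix: rows are protection domains, columns are objects,
# entries are the rights a process in that domain holds.
matrix = {
    ('user',   'file1'):   {'read', 'write'},
    ('user',   'printer'): set(),
    ('kernel', 'file1'):   {'read', 'write'},
    ('kernel', 'printer'): {'write'},
}

def allowed(domain, obj, right):
    """True if a process in 'domain' holds 'right' over 'obj'."""
    return right in matrix.get((domain, obj), set())

# A system call is a domain switch: the same process temporarily
# gains the kernel domain's rights for the duration of the call.
assert not allowed('user', 'printer', 'write')
assert allowed('kernel', 'printer', 'write')
```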
The UNIX process division into user and kernel parts is a legacy of a more powerful domain switching mechanism that was used in MULTICS. In the MULTICS system the hardware supported up to 64 domains for each process, not the two (kernel and user) like in UNIX or MVS.
A MULTICS process could be considered a collection of procedures, each running in a domain called a ring. The innermost ring, the operating system kernel, was the most powerful; moving radially outwards from the kernel, the rings became successively less powerful. When a procedure in one ring called a procedure in a different ring, a trap occurred, and the system then had the option of changing the protection domain for that process.
Note: For the history of SOX enforcement in wrong places and auditors feeding frenzy on Section 404 see SOX Related Links
December 14, 2013 | Slashdot
Daniel_Stuckey writes "Earlier this year, it was London. Most recently, it was a university in Germany. Wherever it is, [artist Aram] Bartholl is opening up his eight white, plainly printed binders full of the 4.7 million user passwords that were pilfered from the social network and made public by a hacker last year.
He brings the books to his exhibits, called 'Forgot Your Password,' where you're free to see if he's got your data — and whether anyone else who wanders through is entirely capable of logging onto your account and making connections with unsavory people.
In fact, Bartholl insists: "These eight volumes contain 4.7 million LinkedIn clear text user passwords printed in alphabetical order," the description of his project reads. "Visitors are invited to look up their own password.""
Recently there has been much interest in the security aspects of operating systems and software. At issue is the ability to prevent undesired disclosure of information, destruction of information, and harm to the functioning of the system. This paper discusses the degree of security which can be provided under the UNIX system and offers a number of hints on how to improve security.
The first fact to face is that UNIX was not developed with security, in any realistic sense, in mind; this fact alone guarantees a vast number of holes. (Actually the same statement can be made with respect to most systems.) The area in which UNIX is theoretically weakest is protecting against crashing, or at least crippling, the operation of the system. The problem here is not mainly uncritical acceptance of bad parameters to system calls -- there may be bugs in this area, but none are known -- but rather the lack of checks for excessive consumption of resources. Most notably, there is no limit on the amount of disk storage used, either in total space allocated or in the number of files or directories. Here is a particularly ghastly shell sequence guaranteed to stop the system:

while : ; do
    mkdir x
    cd x
done
Either a panic will occur because all the i-nodes on the device are used up, or all the disk blocks will be consumed, thus preventing anyone from writing files on the device.
In this version of the system, users are prevented from creating more than a set number of processes simultaneously, so unless users are in collusion it is unlikely that any one can stop the system altogether. However, creation of 20 or so CPU or disk-bound jobs leaves few resources available for others. Also, if many large jobs are run simultaneously, swap space may run out, causing a panic.
It should be evident that excessive consumption of disk space, files, swap space, and processes can easily occur accidentally in malfunctioning programs as well as at command level. In fact, UNIX is essentially defenseless against this kind of abuse, nor is there any easy fix. The best that can be said is that it is generally fairly easy to detect what has happened when disaster strikes, to identify the user responsible, and to take appropriate action. In practice, we have found that difficulties in this area are rather rare, but we have not been faced with malicious users, and we enjoy a fairly generous supply of resources which has served to cushion us against accidental overconsumption.
The picture is considerably brighter in the area of protection of information from unauthorized perusal and destruction. Here the degree of security seems (almost) adequate theoretically, and the problems lie more in the necessity for care in the actual use of the system.
Each UNIX file has associated with it eleven bits of protection information together with a user identification number and a user-group identification number (UID and GID). Nine of the protection bits are used to specify independently permission to read, to write, and to execute the file to the user himself, to members of the user's group, and to all other users. Each process generated by or for a user has associated with it an effective UID and a real UID, and an effective and real GID. When an attempt is made to access the file for reading, writing, or execution, the user process's effective UID is compared against the file's UID; if a match is obtained, access is granted provided the read, write, or execute bit respectively for the user himself is present.
If the UID for the file and for the process fail to match, but the GID's do match, the group bits are used; if the GID's do not match, the bits for other users are tested.
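The checking order just described -- owner bits if the effective UID matches, else group bits if the GIDs match, else the bits for other users, with exactly one class ever consulted -- can be sketched as a short simulation (a sketch for illustration, not kernel code; the numeric IDs are arbitrary):

```python
def may_access(euid, egids, f_uid, f_gid, mode, want):
    """Return True if a process with effective UID 'euid' and effective
    GIDs 'egids' may perform 'want' ('r', 'w', or 'x') on a file with
    owner f_uid, group f_gid, and permission bits 'mode'. Exactly one
    bit class is consulted, in the order the text describes."""
    shift = {'r': 2, 'w': 1, 'x': 0}[want]
    if euid == f_uid:
        cls = (mode >> 6) & 0o7          # owner bits
    elif f_gid in egids:
        cls = (mode >> 3) & 0o7          # group bits
    else:
        cls = mode & 0o7                 # other bits
    return bool((cls >> shift) & 1)

# mode 0o640: owner rw-, group r--, other ---
assert may_access(1000, {1000}, 1000, 1000, 0o640, 'w') is True
assert may_access(1001, {1000}, 1000, 1000, 0o640, 'r') is True   # group: read only
assert may_access(1001, {1000}, 1000, 1000, 0o640, 'w') is False
assert may_access(1001, {2000}, 1000, 1000, 0o640, 'r') is False  # other: nothing
```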
The last two bits of each file's protection information, called the set-UID and set-GID bits, are used only when the file is executed as a program. If, in this case, the set-UID bit is on for the file, the effective UID for the process is changed to the UID associated with the file; the change persists until the process terminates or until the UID is changed again by another execution of a set-UID file. Similarly, the effective group ID of a process is changed to the GID associated with a file when that file is executed and has the set-GID bit set. The real UID and GID of a process do not change when any file is executed, but only as the result of a privileged system call.
The basic notion of the set-UID and set-GID bits is that one may write a program which is executable by others and which maintains files accessible to others only by that program. The classical example is the game-playing program which maintains records of the scores of its players. The program itself has to read and write the score file, but no one but the game's sponsor can be allowed unrestricted access to the file lest they manipulate the game to their own advantage. The solution is to turn on the set-UID bit of the game program. When the program is invoked by players of the game, it may update the score file; ordinary programs executed by others cannot access the score file at all.
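In the on-disk mode word the set-UID and set-GID bits sit just above the nine permission bits. A quick sketch with Python's stat module (the 0o4755 mode for a hypothetical set-UID game binary is an illustrative assumption):

```python
import stat

mode = 0o4755                  # rwsr-xr-x: a hypothetical set-UID binary
assert mode & stat.S_ISUID     # set-UID bit is on
assert not (mode & stat.S_ISGID)   # set-GID bit is off

# ls(1) shows the set-UID bit as 's' in the owner-execute position:
assert stat.filemode(stat.S_IFREG | mode) == '-rwsr-xr-x'
```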
There are a number of special cases involved in determining access permissions. Since executing a directory as a program is a meaningless operation, the execute-permission bit for directories is taken instead to mean permission to search the directory for a given file during the scanning of a path name; thus if a directory has execute permission but no read permission for a given user, he may access files with known names in the directory, but may not read (list) the entire contents of the directory. Write permission on a directory is interpreted to mean that the user may create and delete files in that directory; it is impossible for any user to write directly into any directory.
Another, and from the point of view of security much more serious, special case is that there is a ``super user'' who is able to read any file and write any non-directory. The super-user is also able to change the protection mode and the owner UID and GID of any file and to invoke privileged system calls. It must be recognized that the mere notion of a super-user is a theoretical, and usually practical, blemish on any protection scheme.
The first necessity for a secure system is of course arranging that all files and directories have the proper protection modes. Traditionally, UNIX software has been exceedingly permissive in this regard; essentially all commands create files readable and writable by everyone. In the current version, this policy may be easily adjusted to suit the needs of the installation or the individual user. Associated with each process and its descendants is a mask, which is in effect and-ed with the mode of every file and directory created by that process. In this way, users can arrange that, by default, all their files are no more accessible than they wish. The standard mask, set by login, allows all permissions to the user himself and to his group, but disallows writing by others.
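The and-ing described here is exactly what umask(2) still does today: every bit set in the mask is cleared from the mode a program requests. A minimal sketch (0o002 matches the standard mask described above, which only disallows writing by others; 0o022 is a common modern default that also removes group write):

```python
def apply_umask(requested_mode, umask):
    """The kernel clears every bit of the mask from the requested mode;
    'and-ed with the mode' means AND with the mask's complement."""
    return requested_mode & ~umask

# A program asks for 0o666 (rw-rw-rw-) when creating a file:
assert oct(apply_umask(0o666, 0o002)) == '0o664'   # rw-rw-r--
assert oct(apply_umask(0o666, 0o022)) == '0o644'   # rw-r--r--
```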
To maintain both data privacy and data integrity, it is necessary, and largely sufficient, to make one's files inaccessible to others. The lack of sufficiency could follow from the existence of set-UID programs created by the user and the possibility of total breach of system security in one of the ways discussed below (or one of the ways not discussed below). For greater protection, an encryption scheme is available. Since the editor is able to create encrypted documents, and the crypt command can be used to pipe such documents into the other text-processing programs, the length of time during which clear text versions need be available is strictly limited. The encryption scheme used is not one of the strongest known, but it is judged adequate, in the sense that cryptanalysis is likely to require considerably more effort than more direct methods of reading the encrypted files. For example, a user who stores data that he regards as truly secret should be aware that he is implicitly trusting the system administrator not to install a version of the crypt command that stores every typed password in a file.
Needless to say, the system administrators must be at least as careful as their most demanding user to place the correct protection mode on the files under their control. In particular, it is necessary that special files be protected from writing, and probably reading, by ordinary users when they store sensitive files belonging to other users. It is easy to write programs that examine and change files by accessing the device on which the files live.
On the issue of password security, UNIX is probably better than most systems. Passwords are stored in an encrypted form which, in the absence of serious attention from specialists in the field, appears reasonably secure, provided its limitations are understood. In the current version, it is based on a slightly defective version of the Federal DES; it is purposely defective so that easily-available hardware is useless for attempts at exhaustive key-search. Since both the encryption algorithm and the encrypted passwords are available, exhaustive enumeration of potential passwords is still feasible up to a point. We have observed that users choose passwords that are easy to guess: they are short, or from a limited alphabet, or in a dictionary. Passwords should be at least six characters long and randomly chosen from an alphabet which includes digits and special characters.
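The point that enumeration becomes feasible once both the algorithm and the hashed passwords are public can be sketched with a modern hash standing in for the original modified DES (SHA-256 here is purely an illustrative assumption; the real system used crypt(3)):

```python
import hashlib
import itertools
import string

def toy_hash(pw: str) -> str:
    # Stand-in for the one-way function; historical UNIX used a
    # deliberately modified DES via crypt(3), not SHA-256.
    return hashlib.sha256(pw.encode()).hexdigest()

stolen = toy_hash('abc')   # a short, lowercase-only password

# With the algorithm and hashes public, a 3-letter lowercase password
# offers only 26**3 = 17,576 candidates -- trivially enumerable,
# which is exactly the weakness the text describes.
candidates = (''.join(t) for t in
              itertools.product(string.ascii_lowercase, repeat=3))
found = next(p for p in candidates if toy_hash(p) == stolen)
assert found == 'abc'
```

Lengthening the password and widening the alphabet, as recommended above, grows the candidate space multiplicatively and makes this enumeration impractical.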
Of course there also exist feasible non-cryptanalytic ways of finding out passwords. For example, write a program which types out ``login: '' on the typewriter and copies whatever is typed to a file of your own. Then invoke the command and go away until the victim arrives.
The set-UID (set-GID) notion must be used carefully if any security is to be maintained. The first thing to keep in mind is that a writable set-UID file can have another program copied onto it. For example, if the super-user (su) command is writable, anyone can copy the shell onto it and get a password-free version of su. A more subtle problem can come from set-UID programs which are not sufficiently careful of what is fed into them. To take an obsolete example, the previous version of the mail command was set-UID and owned by the super-user. This version sent mail to the recipient's own directory. The notion was that one should be able to send mail to anyone even if they want to protect their directories from writing. The trouble was that mail was rather dumb: anyone could mail someone else's private file to himself. Much more serious is the following scenario: make a file with a line like one in the password file which allows one to log in as the super-user. Then make a link named ``.mail'' to the password file in some writable directory on the same device as the password file (say /tmp). Finally mail the bogus login line to /tmp/.mail. You can then log in as the super-user, clean up the incriminating evidence, and have your will.
The fact that users can mount their own disks and tapes as file systems can be another way of gaining super-user status. Once a disk pack is mounted, the system believes what is on it. Thus one can take a blank disk pack, put on it anything desired, and mount it. There are obvious and unfortunate consequences. For example: a mounted disk with garbage on it will crash the system; one of the files on the mounted disk can easily be a password-free version of su; other files can be unprotected entries for special files. The only easy fix for this problem is to forbid the use of mount to unprivileged users. A partial solution, not so restrictive, would be to have the mount command examine the special file for bad data, set-UID programs owned by others, and accessible special files, and balk at unprivileged invokers.
# Restore file ownership and permissions for every installed package
# from the RPM database (e.g., after an accidental recursive chown or chmod):
for p in $(rpm -qa); do rpm --setugids "$p"; done
for p in $(rpm -qa); do rpm --setperms "$p"; done
Feb 26, 2008 | developerWorks
If you're concerned about protecting world-writable shared directories such as /tmp or /var/tmp from abuse, a Linux® Pluggable Authentication Module (PAM) can help you. The pam_namespace module creates a separate namespace for users on your system when they log in. This separation is enforced by the Linux operating system so that users are protected from several types of security attacks. This article for Linux system administrators lays out the steps to enable namespaces with PAM.
To improve security, it's often wise to use more than one method of protection (also called "defense in depth"). That way, if one method is breached, another method remains operational and prevents further intrusion. This article describes a way to add another layer of depth to your security strategy: using PAM to polyinstantiate world-writeable shared directories. This means that a new instance of a directory, such as /tmp, is created for each user.
Polyinstantiation of world-writeable directories prevents the following types of attacks, as Russell Coker illustrates in "Polyinstantiation of directories in an SELinux system" (see Resources):
- Race-condition attacks with symbolic links
- Exposing a file name considered secret information or useful to an attacker
- Attacks by one user on another user
- Attacks by a user on a daemon
- Attacks by a non-root daemon on a user
However, polyinstantiation does NOT prevent these types of attacks:
- Attacks by a root daemon on a user
- Attacks by root (account or escalated privilege) on any user
How PAM and polyinstantiation work
PAM creates a polyinstantiated private /tmp directory at login time within a system instance directory; this redirection is transparent to the user logging in. The user sees a standard /tmp directory and can read and write to it normally. That user cannot see any other user's (including root's) /tmp space or the actual /tmp file system.
Polyinstantiated user directories are neither hidden nor protected from the root user. If you are interested in that level of protection, SELinux can help. The configuration examples provided here should work whether or not you have enabled SELinux. See Resources for links to more information about using SELinux.
This section shows you how to enable polyinstantiation of /tmp and /var/tmp directories for users on your system. It also describes the optional configuration steps necessary to accommodate X Windows or a graphical display manager. I used Red Hat Enterprise Linux 5.1 (RHEL 5.1) to write this article, but you can try the procedures described here on any Linux distribution that includes the pam_namespace module.
First we'll edit namespace.conf.
The first file you'll edit is /etc/security/namespace.conf, which controls the pam_namespace module. In this file, list the directories that you want PAM to polyinstantiate on login. Some example directories are listed in the file included with PAM and are commented out. Type
man namespace.conf
to view a comprehensive manual page. The syntax for each line in this file is
polydir instance_prefix method list_of_uids.
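Following that syntax, a minimal /etc/security/namespace.conf might look like this (the /tmp-inst prefixes and the exempted users are illustrative assumptions; each instance directory must already exist, ideally with mode 000):

```
# polydir   instance_prefix      method  list_of_uids
/tmp        /tmp-inst/           user    root,adm
/var/tmp    /var/tmp/tmp-inst/   user    root,adm
```

Here the `user` method appends each user's name to the instance prefix to form that user's private directory, and the trailing list names users (root and adm in this sketch) for whom polyinstantiation is not performed.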
GOVERNMENT FAILURE VERSUS MARKET FAILURE, Clifford Winston, Brookings Institution, November 2005 (preliminary draft -- not for quotation)
April 11, 2005 | National Review.
EDITOR'S NOTE: This piece appears in the April 11, 2005, issue of National Review.
Early this year, an unusual full-page ad appeared in the Wall Street Journal and other financial newspapers. The ad attempted to refute claims from businessmen about the costs imposed by the mandates of the Sarbanes-Oxley Act, the "corporate reform" law Congress passed in 2002 after accounting scandals hit Enron, Worldcom, and other companies. Yes, procedures stemming from that law "are neither simple nor inexpensive," the ad said, but the costs are well worth it if the result is restored investor confidence. "The [law's] greater goal and promise," the ad proclaimed, "is that the rigorous demands of compliance can lay the groundwork for improved and more reliable financial reporting, leading to a higher level of public trust."
The ad's message itself was not unusual; it mirrored the standard response the law's defenders give to complaints about cost. A Washington Post editorial intoned, "The nation's corporate chieftains . . . complain about the cost of this new regulation, not pausing to mention the cost of Enron-type scandals." But what this newspaper ad shows is that not all corporate chieftains oppose this law. The expensive ad was not paid for by a pension fund or another group representing the investors the law was intended to serve: Its sponsor was, rather, PricewaterhouseCoopers, the multi-billion-dollar accounting firm making a bundle in fees for doing all the audits the law has ended up requiring of business. By creating so many hurdles for public companies, the law has birthed a golden goose for those who audit them. And ironically, despite the media and legislative clamor to "get" the big accounting firms after Enron imploded, it's the Big Four accounting firms that have turned out to be the big winners from Sarbanes-Oxley.
THE BIG FOUR VS. AMERICA
"Auditors who lost their jobs when Arthur Andersen folded in the wake of the Enron scandal now find themselves up to their ears in work . . . auditing local businesses that are racing to meet Sarbanes-Oxley regulations," the San Jose Mercury News reported in December. According to BusinessWeek, PricewaterhouseCoopers has hired more than 1,600 new auditors and 400 temps from English-speaking lands to perform the extensive audits of businesses. The Big Four are hiring big-time, and have stepped up their recruiting efforts on college campuses: BusinessWeek says KPMG has upped its college recruitment by 40 percent in the last two years. Accounting is now a hot major. A headline in the magazine Practical Accountant summed up the accounting frenzy: "Cash in on Sarbanes-Oxley; reform unleashed a plethora of new and varied engagement opportunities."
But what's good for the Big Four isn't necessarily good for America. Other businesses, and ultimately the economy as a whole, are footing the bill for this regulation-driven auditing boom. Mounting evidence shows that the accounting-industry growth generated by Sarbanes-Oxley is coming at the expense of productivity, new jobs, and innovation in the general business world. A survey by Korn/Ferry International found that the law cost Fortune 1000 companies an average of $5.1 million in compliance expenses last year. For middle-market public companies, the law firm Foley & Lardner found that the act has increased the "cost of being public" — everything from audit fees to director insurance — by 130 percent. Substantial man-hours have also been diverted to Sarbanes-Oxley from other, more productive tasks. The industry group Financial Executives International found that the average firm was spending at least 30,700 man-hours a year on compliance with this law. As a result, a number of small U.S. and big foreign firms are rushing to deregister from U.S. stock exchanges — a blow to the U.S. capital markets, and, in turn, to the smaller U.S. companies that depend on these capital markets for financing.
Still, despite business complaints, the administration and the congressional majority — who have other critics, ones accusing them of wanting to repeal the New Deal — show no signs of willingness to scale back what has been called the greatest expansion of federal corporate law since FDR. Congress passed Sarbanes-Oxley less than a month after Worldcom announced it was in serious trouble; it was also six months after Enron's bankruptcy, and three months before the 2002 midterm elections. The Senate, then under Democratic control, had crafted a sweeping corporate overhaul bill by Sen. Paul Sarbanes, Democrat of Maryland. The House had passed a more modest bill by House Financial Services Committee chairman Mike Oxley, Republican of Ohio. When Worldcom announced the earnings restatement that would lead to its bankruptcy, the Bush administration and congressional Republicans went into crisis mode and approved Sarbanes's bill with very minor changes; about the only thing the House Republicans added to the final bill was a provision increasing the jail terms for those convicted of corporate wrongdoing. The bill passed the Senate 99-0, and the House approved it with only three members voting no.
The final product, the Sarbanes-Oxley Act, goes against a 30-year trend of general economic deregulation under Republican and Democratic presidents. It undermines federalism, by going where the federal government has never gone before in areas of corporation law that had long been provinces of the states; UCLA law professor Stephen Bainbridge wrote in Regulation magazine that the act has ushered in "the creeping federalization of corporate law." It regulates the structure and functions of boards of directors, and prescribes the duties of specific employees and board members. Intentionally or unintentionally, the law takes a significant step toward the longtime goal of Ralph Nader and other leftists: federal chartering of corporations. Environmental and labor activists are looking at ways to use the law to launch "shareholder complaints" to force companies to bend to their agenda. As William Greider noted approvingly in a recent cover article in The Nation, this new leftist "reform impulse is different because it seeks to change the system from within, using workers' capital as the driving wedge."
Oxley and the administration are standing firmly behind the law and seem opposed to significant changes in it. When I recently asked Oxley's office for his current views on the law, I was referred to remarks he made early last year at an event with Sarbanes at Washington's National Press Club, in which he said that "the objective ways that you measure this seems to me on the positive side," and that the market had gotten better since the law was passed. As for compliance costs, he said, they "pale in comparison to the costs of corporate fraud that could have occurred without this legislation." Treasury secretary John Snow in a recent BusinessWeek interview praised Sarbanes-Oxley as "critically important legislation" and said Congress didn't need to modify the law.
Meanwhile, the law is fulfilling its promise to create, in President Bush's words at the signing ceremony, "the most far-reaching reforms of American business practices since the time of Franklin Delano Roosevelt." Indeed, one section of the law threatens to become the most extensive day-to-day regulation of American business since FDR's National Industrial Recovery Act, the price-and-output regulatory scheme struck down by a unanimous Supreme Court in 1935. Just as the NIRA created industry boards that had to approve prices and output, Sarbanes-Oxley's Section 404 and its regulatory extensions mandate that the most minute bookkeeping practices have to be okayed by auditors.
PEEKABOO, WE SEE YOU
Section 404 requires that a business's executives sign off on the "internal controls" over financial statements and that the company's outside auditors "attest to" the soundness of these controls. The law also created the quasi-private Public Company Accounting Oversight Board to regulate accountants and set auditing standards. Although the Big Four initially opposed the tougher regulation this body would entail as well as the law's bans on consulting services that can be sold to an audit client, they quickly decided that having the board define "internal controls" as broadly as possible would likely more than make up for their losses. "We love the PCAOB and we love Section 404, especially given the other regulations in the law that affect us," says a staffer in the Washington, D.C., office of a Big Four firm.
The PCAOB, non-affectionately referred to as "Peekaboo" by many in the companies that are under its thumb, gave the accountants what they wanted: Last March, it defined internal controls as "controls over all relevant financial statement assertions related to all significant accounts and disclosures in the financial statements." It also defined the law's phrase "attestation" as a full-blown audit of each of these controls, just as the company's numbers have traditionally been audited. In practice, this means that such things as the technology used to derive accounting numbers must be audited every year by the accountants. The board states that "the nature and characteristics of a company's use of information technology in its information system affect the company's internal control over financial reporting." This passage alarms public-company tech employees, because no technology is perfect, and even knowledgeable techies disagree about which is the best software. One public company's chief financial officer says this could mean that an auditor could label a computer with Windows 97, rather than an updated version, a bad internal control.
Daniel Goelzer, a member of the PCAOB, says this isn't likely: "I can't offhand think of a way that using an old version of Windows would make it more likely that your financial statements would be inaccurate." But he adds, "Maybe if there's something about the way that your Windows 95 interacted with the rest of the accounting software at a particular company, maybe it's conceivable." It's really up to the judgment of the individual auditor to decide whether a control passes muster, he says. "We have definitions, but I would certainly say to you that applying those definitions to a particular company requires judgment. That's why auditing's a profession, not a trade."
Yet it's a profession that the law is turning into a mini-regulator. And it's troubling when auditors have the power to second-guess management's best judgment on matters like technology, particularly when this power is combined with other parts of the law requiring "material weaknesses" discovered by auditors to be disclosed to shareholders and establishing criminal penalties for "willfully" disregarding proper accounting procedures. If management, which presumably knows the company better than anyone, has to go against its judgment of what's best to please an auditor, shareholders could lose out as well. And with every new business venture, there's a whole new set of internal controls. This could lead to a slowdown in business investment.
As if all this weren't enough for businesses to cope with, a whole bunch of interest groups now have their own definitions of "internal controls" they want to have imposed. The Oakland-based Rose Foundation for Communities and the Environment has called on the PCAOB to mandate that "independent auditors include reviewing the financial impacts of environmental conditions and environmental liabilities as part of their scope."
Many companies are hurrying to escape Sarbanes-Oxley by leaving the stock exchanges: According to a Wharton study, 198 American companies deregistered from exchanges in 2003, the year after the law was passed — nearly triple the number that deregistered in 2002. Prominent European firms, such as Siemens, are also considering pulling their U.S. listing because of the law. In 2004, the New York Stock Exchange had only ten new foreign listings.
This is clearly a threat to overall economic vitality. Alfred C. Eckert III, CEO of the GSC Partners investment firm, also worries that both Section 404 and the law's mandates for boards of directors will lead to the "bureaucratization" of large American firms: "We're going to have people who are much more bureaucratic . . . and who are frightened and will react in always the most conservative course and will rely on process dictated by lawyers rather than good business judgment." Eckert warns that if Bush and Congress ignore these effects of Sarbanes-Oxley, Bush's planned tax and Social Security reforms will not come to full fruition. "[Sarbanes-Oxley] will make capital more expensive and lower the rate of growth of America. It's very simple."
— John Berlau is the Warren T. Brookes Journalism Fellow at the Competitive Enterprise Institute.
Softpanorama hot topic of the month
Security Unit - Color Books
Rule Set Based Access Control (RSBAC) - Overview
Apache Authentication, Authorization, and Access Control
Apache has three distinct ways of dealing with the question of whether a particular request for a resource will result in that resource actually being returned. These criteria are called Authentication, Authorization, and Access Control.
Authentication is any process by which you verify that someone is who they claim they are. This usually involves a username and a password, but can include any other method of demonstrating identity, such as a smart card, retina scan, voice recognition, or fingerprints. Authentication is equivalent to showing your driver's license at the ticket counter at the airport.
Authorization is finding out if the person, once identified, is permitted to have the resource. This is usually determined by finding out whether that person is part of a particular group, has paid admission, or has a particular level of security clearance. Authorization is equivalent to checking the guest list at an exclusive party, or checking for your ticket when you go to the opera.
Finally, access control is a much more general way of talking about controlling access to a web resource. Access can be granted or denied based on a wide variety of criteria, such as the network address of the client, the time of day, the phase of the moon, or the browser which the visitor is using. Access control is analogous to locking the gate at closing time, or only letting people onto the ride who are more than 48 inches tall - it's controlling entrance by some arbitrary condition which may or may not have anything to do with the attributes of the particular visitor.
Because these three techniques are so closely related in most real applications, it is difficult to talk about them separately from one another. In particular, authentication and authorization are, in most actual implementations, inextricable.
If you have information on your web site that is sensitive, or intended for only a small group of people, the techniques in this tutorial will help you make sure that the people that see those pages are the people that you wanted to see them.
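The distinction between the three concepts can be illustrated with a small Apache configuration fragment. This is a minimal sketch, not taken from the tutorial itself: the directory path, realm name, password-file location, and usernames are all hypothetical examples, and the password file would have to be created separately with the `htpasswd` utility. Note that in Apache 2.4, multiple `Require` directives inside a `<Directory>` block are treated as `RequireAny` by default, so satisfying any one of them grants access.

```apache
# Hypothetical httpd.conf fragment; the path and names below are examples only.
<Directory "/var/www/secret">
    # Authentication: verify who the visitor is (username/password via Basic auth)
    AuthType Basic
    AuthName "Restricted Area"
    # Example password file; create it with: htpasswd -c /usr/local/apache/passwd/passwords admin
    AuthUserFile "/usr/local/apache/passwd/passwords"

    # Authorization: is this identified user on the list?
    Require user admin editor

    # Access control: an arbitrary criterion unrelated to identity,
    # here the client's network address
    Require ip 10.0.0.0/8
</Directory>
```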
(Orange Book) DoD Trusted Computer System Evaluation Criteria, 26 December 1985 (supersedes CSC-STD-001-83, dtd 15 Aug 83).
C Technical Report 32-92: The Design and Evaluation of INFOSEC systems: The Computer Security Contribution to the Composition Discussion, June 1992.
C Technical Report 79-91: Technical Report, Integrity in Automated Information Systems, September 1991.
C1 Technical Report 001: Technical Report, Computer Viruses: Prevention, Detection, and Treatment, 12 March 1990
(Green Book) DoD Password Management Guideline, 12 April 1985.
(Light Yellow Book) Computer Security Requirements -- Guidance for Applying the DoD TCSEC in Specific Environments, 25 June 1985
(Yellow Book) Technical Rationale Behind CSC-STD-003-85: Computer Security Requirements -- Guidance for Applying the DoD TCSEC in Specific Environments, 25 June 1985.
(Tan Book) A Guide to Understanding Audit in Trusted Systems 1 June 1988, Version 2.
(Bright Blue Book) Trusted Product Evaluations - A Guide for Vendors, 22 June 1990.
(Neon Orange Book) A Guide to Understanding Discretionary Access Control in Trusted Systems, 30 September 1987.
(Red Book) Trusted Network Interpretation of the TCSEC (TNI), 31 July 1987.
(Amber Book) A Guide to Understanding Configuration Management in Trusted Systems, 28 March 1988.
(Burgundy Book) A Guide to Understanding Design Documentation in Trusted Systems, 6 October 1988.
(Dark Lavender Book) A Guide to Understanding Trusted Distribution in Trusted Systems 15 December 1988.
(Venice Blue Book) Computer Security Subsystem Interpretation of the TCSEC 16 September 1988.
(Aqua Book) A Guide to Understanding Security Modeling in Trusted Systems, October 1992.
(Red Book) Trusted Network Interpretation Environments Guideline - Guidance for Applying the TNI, 1 August 1990.
(Pink Book) RAMP Program Document, 1 March 1995, Version 2
(Purple Book) Guidelines for Formal Verification Systems, 1 April 1989.
(Brown Book) A Guide to Understanding Trusted Facility Management, 18 October 1989
(Yellow-Green Book) Guidelines for Writing Trusted Facility Manuals, October 1992.
(Light Blue Book) A Guide to Understanding Object Reuse in Trusted Systems, July 1992.
(Blue Book) Trusted Product Evaluation Questionnaire, 2 May 1992, Version 2.
(Silver Book) Trusted UNIX Working Group (TRUSIX) Rationale for Selecting Access Control List Features for the UNIX® System, 7 July 1989.
(Purple Book) Trusted Database Management System Interpretation of the TCSEC (TDI), April 1991.
(Yellow Book) A Guide to Understanding Trusted Recovery in Trusted Systems, 30 December 1991.
(Bright Orange Book) A Guide to Understanding Security Testing and Test Documentation in Trusted Systems
(Forest Green Book) A Guide to Understanding Data Remanence in Automated Information Systems, September 1991, Version 2.
(Hot Peach Book) A Guide to Writing the Security Features User's Guide for Trusted Systems, September 1991.
(Turquoise Book) A Guide to Understanding Information System Security Officer Responsibilities for Automated Information Systems, May 1992.
(Violet Book) Assessing Controlled Access Protection, 25 May 1992.
(Blue Book) Introduction to Certification and Accreditation Concepts, January 1994.
(Light Pink Book) A Guide to Understanding Covert Channel Analysis of Trusted Systems, November 1993.
(Purple Book) NCSC-TG-024 Vol. 1/4: A Guide to Procurement of Trusted Systems: An Introduction to Procurement Initiators on Computer Security Requirements, December 1992.
(Purple Book) NCSC-TG-024 Vol. 2/4: A Guide to Procurement of Trusted Systems: Language for RFP Specifications and Statements of Work - An Aid to Procurement Initiators, 30 June 1993.
(Purple Book) NCSC-TG-024 Vol. 3/4: A Guide to Procurement of Trusted Systems: Computer Security Contract Data Requirements List and Data Item Description Tutorial, 28 February 1994.
NCSC Technical Report 002: Use of the TCSEC for Complex, Evolving, Multipolicy Systems
NCSC Technical Report 003: Turning Multiple Evaluated Products Into Trusted Systems
NCSC Technical Report 004: A Guide to Procurement of Single Connected Systems - Language for RFP Specifications and Statements of Work - An Aid to Procurement Initiators - Includes Complex, Evolving, and Multipolicy Systems
NCSC Technical Report 005 Volume 1/5: Inference and Aggregation Issues In Secure Database Management Systems
NCSC Technical Report 005 Volume 2/5: Entity and Referential Integrity Issues In Multilevel Secure Database Management
NCSC Technical Report 005 Volume 3/5: Polyinstantiation Issues In Multilevel Secure Database Management Systems
NCSC Technical Report 005 Volume 4/5: Auditing Issues In Secure Database Management Systems
NCSC Technical Report 005 Volume 5/5: Discretionary Access Control Issues In High Assurance Secure Database Management Systems
FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.
ABUSE: IPs or network segments from which we detect a stream of probes might be blocked for no less than 90 days. Multiple types of probes increase this period.
Copyright © 1996-2016 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.
Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...
|You can use PayPal to make a contribution, supporting development of this site and speed up access. In case softpanorama.org is down you can use the mirror at softpanorama.info|
The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.
Last modified: October 25, 2015